The path towards Axoloti 2.0


#1

I'll try to shine some light on my development plans:

As I got closer to a fresh release, I hit a hard wall. One issue was getting undo-redo to work perfectly: the "live" switch barrier is not one that undo-redo operations can cross nicely, and recompilation by launching gcc is not snappy enough. A second issue was sharing code among objects while avoiding name conflicts; I was not really happy with the "modules" approach (in the experimental branch on github).

So I've been tinkering with a radically different approach to the Axoloti architecture: rather than generating C++ code from a patch and using gcc to produce a native binary, the editor generates and uploads a binary representation of the patch (a document, not arm code) to the Axoloti hardware, and that binary document is "compiled" into native executable code on the Axoloti microcontroller itself. An interpreter approach would add too much overhead. This does not aim to be a full compiler: it only loads binary objects and generates the native code needed to call their functions and to pass data from outlets to inlets.
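
To make this concrete, here is a rough sketch of what such a binary patch document could look like; the names and fields below are just illustrative placeholders, not the actual format:

```c
#include <stdint.h>

/* Illustrative layout of a binary patch document. The patch carries no
 * executable code, only references to object definitions plus a net list;
 * the on-device "linker" turns this into native calls. */
typedef struct {
    uint32_t object_id;     /* index into the object library (flash or sdcard) */
    uint32_t param_offset;  /* where this instance's parameter block starts */
} patch_object_ref_t;

typedef struct {
    uint16_t src_object;    /* instance providing the outlet */
    uint8_t  src_outlet;
    uint16_t dst_object;    /* instance receiving on an inlet */
    uint8_t  dst_inlet;
} patch_connection_t;

typedef struct {
    uint32_t magic;         /* identifies the document type/version */
    uint16_t n_objects;
    uint16_t n_connections;
    /* followed by n_objects patch_object_ref_t entries,
     * then n_connections patch_connection_t entries,
     * then the parameter blocks */
} patch_document_header_t;
```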

It's an approach I had dismissed in the past. Writing something that resembles a compiler is not a suitable one-man army project. It also locks the code strongly into the arm architecture.

It's a large transition: the object library will need to be turned into binary code, breaking current attributes. It affects the GUI, the object definition format, the patch file format, the USB communication protocol, the firmware... It's hard to get there via incremental, non-breaking changes.

I had not announced this effort before because I first wanted to prove its feasibility. I underestimated the complexity of the development, and it has taken a far deeper and slower effort than I had imagined. Yet it is still my highest priority; I believe it is the way to go, and it addresses most grievances.

Please allow me till the end of this week to publish a first draft on github. Thanks for your patience and understanding!


#2

I don't understand much of what you are saying @johannes, but I'm glad you are alive and posting here! You have created an amazing device! :+1::ok_hand:


#3

Thanks for the info @johannes.

This will provide a more fluid workflow, which is very important for beginners (as the try/fail/retry/succeed cycle will be faster).

I guess that community objects will need to be adapted / versioned between Axoloti 1.0 and Axoloti 2.0. As you said, I'll have a look at these issues after you publish the first draft on github.


#4

@johannes Cool to hear about the new architecture ideas.

Here are a couple of ideas I've been mulling over:
- Why not compile and include all objects ahead of time and expose them through an API? Is it really the case that all of the core objects couldn't fit in flash?
- The uploaded patch could just be some simplified representation that describes what objects are in use, how they are connected, etc. The device would then instantiate objects and connect them as needed. Adding objects to the on-device library would require a firmware update (so a user object would not necessarily carry its firmware code around at the patch level).
- The patch could be unaware of particular object versions. It would be more like targeting an API level. The patch would just refer to objects in the abstract, like "Sin Oscillator" or whatever, without worrying about precisely what version of that object was deployed to the device. The client could even query for what objects are available on the device, etc. (see the rough sketch after this list).
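
Here's a very rough sketch of that last idea, with entirely made-up names, just to illustrate resolving abstract object references against whatever library build is on the device:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor for an object baked into the device's library build. */
typedef struct {
    const char *name;        /* abstract name a patch refers to, e.g. "osc/sine" */
    uint16_t    api_level;   /* API level this object implements */
    size_t      state_size;  /* per-instance state the device would allocate */
    void      (*init)(void *state);
    void      (*dsp)(void *state, const int32_t **inlets, int32_t **outlets);
} object_desc_t;

/* Placeholder object, just so the table below has an entry. */
static void sine_init(void *state) { (void)state; }
static void sine_dsp(void *state, const int32_t **inlets, int32_t **outlets)
{
    (void)state; (void)inlets; (void)outlets;
}

/* The deployed "library build" would be a table like this, living in flash. */
static const object_desc_t object_library[] = {
    { "osc/sine", 1, 64, sine_init, sine_dsp },
};
static const size_t object_library_count =
    sizeof(object_library) / sizeof(object_library[0]);

/* Resolve a patch's abstract object reference against the deployed library. */
const object_desc_t *library_lookup(const char *name, uint16_t min_api_level)
{
    for (size_t i = 0; i < object_library_count; i++) {
        if (strcmp(object_library[i].name, name) == 0 &&
            object_library[i].api_level >= min_api_level)
            return &object_library[i];
    }
    return NULL; /* unknown object: the editor could prompt for a library rebuild */
}
```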

It would be easier to do that kind of thing with dynamic memory allocation on the device itself, which I don't think would be the end of the world, but we could probably avoid it if that's something you absolutely want to avoid.

Hope you're doing well by the way. Let me know if you want to catch up on voice chat sometime.

PS: I've had success recently using Nanopb and protocol buffers to handle the actual wire format for an embedded API. It's worth checking out.
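
For anyone curious, device-side encoding with Nanopb looks roughly like this. The PatchHeader message is a made-up example here; its struct, the _init_zero initializer and the _fields descriptor would be generated by Nanopb from a .proto definition:

```c
#include <pb_encode.h>
#include "patch.pb.h"   /* generated by Nanopb from a hypothetical patch.proto */

/* Encode a hypothetical PatchHeader message into buffer; returns true on success. */
bool send_patch_header(uint8_t *buffer, size_t buffer_size, size_t *out_len)
{
    PatchHeader msg = PatchHeader_init_zero;   /* generated struct + initializer */
    msg.n_objects = 12;
    msg.n_connections = 17;

    pb_ostream_t stream = pb_ostream_from_buffer(buffer, buffer_size);
    if (!pb_encode(&stream, PatchHeader_fields, &msg))
        return false;                          /* e.g. buffer too small */

    *out_len = stream.bytes_written;           /* protobuf-encoded bytes ready to send */
    return true;
}
```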


#5

what about custom modules then? writing my own objects or customising existing ones is one of the most essential features of axoloti, and should not be disabled!


#6

@lokki I definitely agree. There would still be a mechanism for user created objects. I think that compilation should be moved offline somehow though and decoupled from the patch. Roughly what I'm thinking is that you would have some kind of library manager mechanism to control what objects are deployed on your device. At a UI level this could be similar to the existing workflow for editing an object definition. Compilation wouldn't happen at all during normal usage of a patch, going "live" etc. It would simply alert you if you attempted to refer to some unknown object.

I think this would give us more of a chance to detect errors in an object definition as well, and to be clearer about the object compilation process, i.e. what names are visible etc. If your "library compile" failed, you would resolve it before patching. If your object definitions weren't changing, you could just continue patching like normal without a recompile.


#7

Thank You for the update, and for all your hard work. Both are most appreciated.

Axo rules anyway, but I'm very excited that you're striving to make it better.


#8

that sounds like a mostly feasible approach. it does make it harder to have a wide set of different patches (with different objects) on the sdcard though, since not all of the objects will fit in the firmware.


#9

I'm skeptical about the notion that we couldn't pack many, many object definitions into flash (we have 1 MB to work with). It will of course require us to rethink how the objects are expressed, and probably some clever code sharing / reuse.


#10

Even if most objects' code is very short, there are a lot of objects in the community library, so this question is not trivial.


#11

Ideally, the library manager / recompilation view would connect to the community repository directly. You'd just flag the objects that you would like to include in your build, and there would be a way to keep track of available flash storage and rough object sizes. You could swap unused objects out to save space. If you tried to use a patch with an unknown object dependency, it could simply prompt you to fetch it and recompile your library. If all objects in a given patch were already available in your library build, you would just continue patching without having to recompile.

Again, this would probably require establishing some new conventions for the custom object API, etc.


#12

Thanks for the update information, it is highly appreciated.

Nice to hear some development is still going on :slight_smile:


#13

This sounds very exciting to me. The fact that there now seems to be a real path towards Axoloti 2.0, that Axoloti 2.0 is still alive, is awesome. I don't really get why compiling inside the Axoloti is better, how it solves code sharing, or how it solves the live switch barrier. Wouldn't the live switch barrier still exist, with undo having to trigger an upload of the binaries? Would it always be live? Can someone who gets it explain to us mere mortals how it is better than gcc?

Thank you very much for everything.


#14

So each object is pre-compiled to some binary representation (arm code + meta-data) and is then dynamically linked/called as the complete patch on the axoloti itself? Is that correct?

Questions:

1) What problem does this solve?

2) Some objects are very simple (e.g. multiplying 2 numbers) and it's going to be horribly inefficient if such an object can only be called through a jump table rather than just inlined in a compilation of the entire patch. Is this a concern?


#15

Sure, custom object libraries can be compiled on the host using gcc; object libraries can sit on the sdcard and be loaded into RAM when used. Factory objects can sit in flash.

It is more like a linker than a compiler. Code is only generated to pass inlet values and to call binary functions. The binary patch format uploaded to the Axoloti hardware or stored on the sdcard can be loaded back into the GUI. On the horizon, connections could change and objects could be added, replaced or deleted without re-initializing those that are already loaded.

Yes, that is correct.

It removes the live switch.

Sure, it's a concern. Already in its current unpublished draft, there is a function type that can be inlined. So there can be a multiply object that emits the asm "MUL r0,r0,r1". If the object connected to the first inlet leaves its result in r0, extra code is only added to load r1...
If the same multiply object is however used to compute a square, it would still need to copy r0 into r1; it won't optimize that into "MUL r0,r0,r0", that would require a separate square object.
Writing functions that can be inlined via binary concatenation is not generally possible in C or C++; those need to be written in assembly (__attribute__((naked))...). It's only relevant for objects that are no larger than a couple of instructions.
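
Just to illustrate that last point (a sketch only, not the actual object format, and the register convention is assumed here): such an inlinable object could be written as a naked assembly function, so no prologue/epilogue gets in the way of concatenating its body into the generated patch code.

```c
/* Sketch of an inlinable "multiply" object body. Assumed convention:
 * first inlet value in r0, second inlet value in r1, result left in r0. */
__attribute__((naked)) void obj_mul(void)
{
    __asm__ volatile(
        "mul r0, r0, r1 \n"  /* r0 = r0 * r1 */
        "bx  lr         \n"  /* return; presumably dropped when the body is concatenated inline */
    );
}
```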

Hope this addresses most questions...


#16

Total handwave / back of the envelope approximation:

48000 Hz -> ~2e-5 s per sample
168 MHz -> ~6e-9 s per CPU cycle
2e-5 / 6e-9 = 3333 "cpu cycles" per sample
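
(Using the unrounded values the budget comes out a bit higher:

$$\frac{168 \times 10^{6}\ \text{cycles/s}}{48 \times 10^{3}\ \text{samples/s}} = 3500\ \text{cycles per sample}$$

so 3333 is a slightly conservative figure.)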

Is it already impossible to hit that deadline without aggressive inlining, avoiding function calls, etc.? I'm wondering how much function call overhead would eat into that cycle allotment in practice. It's cool to be able to reserve all of those cycles for useful work of course, but holding on to them doesn't seem like a worthwhile tradeoff if sacrificing some headroom gets us live patching etc. Sacrificing some performance for ease of use of the API actually seems like a worthwhile tradeoff to me too.


#17

3333 "cpu cycles" per sample

It's a budget. If you have a patch that hits 100% CPU usage, you are spending the whole budget. Having a dynamic link rather than a static link is not going to be more efficient, so in general live patching will lower the ceiling on total system performance. If you value live patching, that may be a reasonable price to pay.

I come from the embedded development world, so the idea of offline static compilation and linking is home base for me. Having said that, I'm intrigued by the interactivity afforded by live coding environments. It's been eye-opening to see people who don't know much about code be able to put together complicated patches with axoloti. The underlying code disturbs me, but I think the boxes-with-pipes approach makes it intuitive for a much larger group of people.


#18

hmm yeah, i don't value live patching. for me the strength of axoloti is that it works WITHOUT a pc, so live patching does not interest me. i want the last bit of performance out of it! but i guess i am in the minority.


#19

I'm tempted to say the same thing...
but...
i remember how great live patching was with the Clavia Modulars.

Live patching is a game changer.

With live patching you don't "edit" your patches, you "create"/"modify" them.
It adds an "artistic" dimension to patching.

I guess that many users will be addicted to this feature when it becomes available.

This feature is not incompatible with the "standalone/ no PC" usage of the Axoloti.


#20

Most of the time, my patches are limited by SRAM usage, not by CPU. I guess that the new scheme will change this.