Object lifecycle


#1

So far, object referencing in a patch is robust to name changes and implementation changes, since a UUID reference is kept.

Lately a few cases have appeared that challenge library maintenance. Some examples:
* DrJustice found improvements to some filter objects like "filter/vcf3"; the cutoff in the original objects is artificially limited to only 12 kHz.
* philoop noticed a typo in the "delay/write sdram" attribute list
Fixing these by altering the existing objects means breaking users' patches: the selected attribute is string-matched, and in the absence of a match the first option becomes the default. A different filter implementation will make an existing patch sound different, which can be considered broken rather than improved.

I believe it is desirable to propagate improvements to objects, but breaking users' patches should be avoided.

Moving the existing (unfixed) objects to something like "obsolete/filter/vcf3" and "obsolete/delay/write sdram" would preserve patches, and adding bugfixed objects with the original names to the library would preserve patch functionality.

But that does not really promote migration to fixed objects, and we don't want to keep dragging along all obsolete objects forever.

So I'd like to open a discussion on how to "retire" objects.
It would be useful to
* provide the user the choice between loading the upgraded or the obsolete version of an object
* provide the user with an explanation of the expected impact of an upgrade
* make automatic adjustments to a patch; that would be beautiful, but I believe it also opens a can of worms

By simply moving an object to an "obsolete" directory, a direct (UUID) reference to the new replacement object, and the possibility to explain to the user the expected impact, are missing. Does that motivate a new "obsolete object" file type? Or should we extend the existing object file type with a forward UUID reference and an expected-impact text field?
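
For illustration, a sketch of what such an extended object file might carry (shown in Python purely to demonstrate the idea; the `<upgrade>` element, its `replacedBy` attribute, the `note` text and the UUID placeholders are hypothetical, not part of the current .axo format):

```python
import xml.etree.ElementTree as ET

# Hypothetical extension of the .axo format: the retired object keeps its
# original UUID, and adds a forward reference to its replacement plus a
# human-readable note on the expected impact of upgrading.
OBSOLETE_AXO = """\
<axoloti>
  <obj.normal id="filter/vcf3" uuid="original-object-uuid">
    <sDescription>State variable filter (obsolete)</sDescription>
    <upgrade replacedBy="replacement-object-uuid"
             note="Cutoff range extended beyond 12 kHz; patches may sound brighter."/>
  </obj.normal>
</axoloti>
"""

def read_upgrade_info(xml_text):
    """Return (replacement_uuid, impact_note), or None if the object is current."""
    root = ET.fromstring(xml_text)
    upgrade = root.find(".//upgrade")
    if upgrade is None:
        return None
    return upgrade.get("replacedBy"), upgrade.get("note")

info = read_upgrade_info(OBSOLETE_AXO)
```

The patcher could then offer the choice (load obsolete as-is, or follow `replacedBy`) and show the note, without chaining UUIDs anywhere else.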

So I open this thread and invite comments and discussion of possible approaches...


Open discussion about the best way to update community library objects!
#2

I think it sounds like a great idea to have the obsolete/ directory.
I think as soon as any of us sees that, we will automatically want to use a newer version. For my sake, you can go ahead and implement it just the way you described :slight_smile:


#3

I guess we all want it to magically upgrade, with zero dev and user effort :slight_smile:

where we are now:

  • we now version all objects in github, they are tagged with the release... so you can always run an old version of the software to get the original sound
  • from 1.0.10, you will be able to run different versions of Axoloti side by side.
  • embedded patches and objects.

note: as stated previously, I'd be open to versioning the community library.... but we need to discuss this.

potential future:

  • firmware + patch at binary level could be used by newer UIs

this is I believe something bigger that we should be aiming for... a better separation between the UI in Live mode and the patch/firmware... we know we need to do this for controllers, and we have discussed 'presentation' mode etc.
this could lead to new versions of the software being able to talk to old patches/firmware i.e. like a 'read only' mode I suppose.

strawman: the only way to replicate a patch exactly for a performance in a year's time is to run it completely unchanged...
(note: we have to assume you will need to always be able to run newer version on the computer, due to OS upgrades etc)

the upgrading side I think is hard... most software I know usually ends up doing this with some kind of scripting (or code, which is kind of script-like), as most often the upgrade has very similar tasks (renaming parameters, rescaling them, applying defaults to new parameters). In Axoloti, probably most of the time this could be done at object level, but sometimes if you want to do an 'automatic upgrade' it may have to be at patch level....
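
As a sketch of that "upgrade script" idea (everything here is illustrative; no such mechanism exists in the patcher today), the three recurring tasks could be expressed as data per object revision:

```python
# Illustrative only: a per-revision migration described as data, covering the
# three common upgrade tasks: renaming, rescaling, and defaulting parameters.

def migrate_params(params, renames=None, rescales=None, defaults=None):
    """Map a patch's stored parameter values onto a newer object revision."""
    out = dict(params)
    for old, new in (renames or {}).items():
        if old in out:
            out[new] = out.pop(old)
    for name, factor in (rescales or {}).items():
        if name in out:
            out[name] *= factor
    for name, value in (defaults or {}).items():
        out.setdefault(name, value)
    return out

# e.g. a hypothetical vcf upgrade: "freq" renamed to "cutoff", the range
# doubled, and a new "drive" parameter given a neutral default
old = {"freq": 0.5, "reso": 0.3}
new = migrate_params(old,
                     renames={"freq": "cutoff"},
                     rescales={"cutoff": 2.0},
                     defaults={"drive": 0.0})
# new == {"reso": 0.3, "cutoff": 1.0, "drive": 0.0}
```

Object-level migrations like this compose; it's the patch-level cases (connections changing, objects splitting) that get messy.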

obsolete... sounds like 'deprecation' warnings?
I can see this working for a 'limited time', and with the current object/patch version approach I guess we could make this clearer. Personally I'm not a fan of chaining UUIDs; I think I'd prefer the UUID to remain, perhaps with a version # or even a timestamp, it's a little more user-friendly.

an idea, which I wonder if we can somehow use, is to 'freeze' changed objects into a patch using embedding... and then a 'revert to library' to unfreeze them. You still need the old object to do this action, but it unhooks it from the obsolete directory.... (it has its own issues, but I wonder if it might provide an interesting route)

thought 'experiment' : and not a suggestion...how it works 'today'
today, if you wanted to do a migration from 1.0.9 to 1.0.10
a) run version 1.0.9 and 1.0.10 in parallel
b) run your patch in 1.0.10, hear a difference, use github to review objects that have changed since 1.0.9 (read change comments)
c) run version 1.0.9 and embed the objects, save as new version of patch
d) run version 1.0.10... should be the same, and then you can 'migrate it' at leisure.
is it possible we should/could streamline this procedure?
e.g. do this all within the 1.0.10 application, use git to diff objects from 1.0.9 to 1.0.10, pull these changed objects temporarily into a temporary archive git clone, so we can then embed them.
note: we already know the git tag, as the patch is saved with the version. we only need 2 git versions*, the current and the patch version.
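
A rough sketch of what that streamlined procedure needs from git (shown with a plain `git` binary for brevity; the patcher would go through jgit, and the tag names and `objects/` layout here are assumptions):

```python
import subprocess

# Sketch only: diff the object directory between two tagged releases, then
# read any changed object exactly as it was at the older tag.

def changed_objects(repo_dir, old_tag, new_tag, subdir="objects"):
    """List object files that differ between two tagged releases."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "diff", "--name-only",
         old_tag, new_tag, "--", subdir],
        check=True, capture_output=True, text=True)
    return out.stdout.splitlines()

def object_at_tag(repo_dir, tag, path):
    """Read one object file exactly as it was at a given tag (git show tag:path)."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "show", f"{tag}:{path}"],
        check=True, capture_output=True, text=True)
    return out.stdout
```

With the patch's saved tag, that is enough to list what changed and pull the old definitions into embedded objects.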

personally, I'd really prefer to use a 'proper' version control system rather than 'roll our own'... but perhaps our requirements are very different

community library...
actually this 'scares' me more... increasingly users will be using more community objects/sub-patches etc., and it's a lot harder to put things in place to migrate these... the factory library is fairly static by comparison.
(as raised in other threads, I'm on the fence about versioning the library, due to maintenance)

Its a thorny issue indeed...
how do other similar environments deal with this? Max ? Reaktor ? PD ?

really just a collection of 'ideas' more than a cohesive solution, but perhaps some may be useful and compatible with other ideas raised.

*tech note:
we already clone the repo, so 'theoretically' we have all versions of the objects available; it's simply a matter of checking out an object with the right commit/tag. The only real complication (which is surmountable) is that it's a bit messy having a checked-out branch with a mixture of objects in it... but ideally we'd work off the one local cloned repo (to minimise disk and network resources), perhaps checking out old versions into a temporary space.
a quick look at stack overflow/jgit suggests it's possible to read a specific version into a file stream... so this might allow us to stream into a temp space or even potentially directly into an embedded object... how cool would that be :wink:


#4

I don't really have any concrete ideas either. I can say though that one reason I haven't uploaded any objects to the community section as yet (apart from general laziness :slight_smile: ) is that I'm not really clear how to go about updating or maintaining objects.

Somehow I'm thinking that some form of version number could be used. That way, old versions can remain in the repository, always accessible, and under the same path as the latest one. Having an 'obsolete' section sort of implies that there is only one obsolete version of an object, but there may be a whole history of old versions.

My completely un-thought-through idea is something like this: For every object, there is a version code. It is used to identify the object, and is also present in the patch, just like the UUID. When loading a patch, the corresponding version of all objects is automatically loaded. That way a patch when loaded will always sound identical, regardless of which versions are used in the patch and which versions exist in the repository (assuming none of the versions have gotten lost...). A function in the Patcher (or completely automatically upon loading perhaps) can also check the version numbers, and suggest upgrading those objects which have newer versions.
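
A minimal sketch of that scheme (the class and method names are mine, purely illustrative): the library keeps every (uuid, version) pair, a patch pins an exact version, and the patcher can flag when a newer one exists.

```python
# Illustrative sketch only: a library that keeps every version of every
# object, so a patch pinned to (uuid, version) always loads the exact
# definition it was saved with, while the patcher can still suggest upgrades.

class ObjectLibrary:
    def __init__(self):
        self._versions = {}  # uuid -> {version: definition}

    def add(self, uuid, version, definition):
        self._versions.setdefault(uuid, {})[version] = definition

    def load_exact(self, uuid, version):
        """Used when loading a patch: fetch the pinned version."""
        return self._versions[uuid][version]

    def latest_version(self, uuid):
        """Used when placing a new object: default to the newest version."""
        return max(self._versions[uuid])

    def upgrade_available(self, uuid, version):
        return self.latest_version(uuid) > version

lib = ObjectLibrary()
lib.add("vcf3-uuid", 1, "cutoff limited to 12 kHz")
lib.add("vcf3-uuid", 2, "full cutoff range")
```

Loading stays deterministic, while `upgrade_available` is what a "suggest upgrading" function in the Patcher would query.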

One consequence of this is that we would never retire old versions - they would always be present in the repository. When adding a new object to a patch, only the latest version should be used by default, so there won't be any additional clutter of obsolete and old objects when creating patches. (I think it should be possible to select any version of an object manually when editing, for instance if a filter implementation has changed and you really want the old version because it is already used in a patch.)

Although not retiring old objects will tend to increase the size of the repository, in general objects are not that big compared to other forms of data on a computer, so the actual space used is rather small.

Sure we could use some form of existing version control, but one difference here is that in order to guarantee that old patches load fine, we really need to have all versions in the repository at the same time. Consider for instance, having two patches open which use two different versions of the same objects - both versions of the objects need to be present.


#5

What Ricard suggests for object versions sounds reasonable. Since filenames are out of the equation, it would be up to the object author to deal with that, e.g. multiple versions in one myobj.axo file, or myobj_v1.axo, myobj_v2.axo etc. I suppose there would be a manually maintained version attribute or tag in the XML(?) And that would pretty much be it on the object author's part, with the patcher being responsible for loading the correct version or offering to change to a newer version.


#6

Historical anecdote: the VMS operating system for the Digital Equipment VAX series of computers (we're talking 1970s-1980s here) actually appended a version number to a file every time it was saved. So if you saved a file called AXOLOTI.TXT it would actually be called AXOLOTI.TXT;1 . Saving a new version created a new file called AXOLOTI.TXT;2 . When referring to a file, the latest version was used by default, so attempting to open AXOLOTI.TXT would by default open the latest version, but you could explicitly ask for any version you wanted by specifying the complete file name. The OS had a setting for how many old versions it would keep; I believe the default was 5.


#7

The VAX system was not intended as a version control system, just a slightly better version of the .bak files which were around at the time. We still used a proper version control system for coding.

How, if you have object versions, are you planning to relate these to application/firmware releases? E.g. some objects will only work with some software releases?
Having this and branching on git as well seems 'wrong'.


#8

this is a handy 'use-case' we should be aware of...

so this happened because 1.0.10 contains a software change, which means this object will only work in 1.0.10+
i.e. we not only have versions of objects due to behavioural changes (e.g. filter changes/delay timing) but also due to software versions.
(I think what's also kind of interesting... is in this case I wasn't even aware that I'd introduced something specific to 1.0.10, which I suspect is going to be 'common' for most users)

admittedly I'd already partially addressed this in 1.0.10, by refusing to load 'future versions', i.e. 1.0.10 will not load 1.0.11 objects, but it still demonstrates the fragile nature of this area...
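
The check itself is simple; a sketch of the idea (not the actual patcher code) comparing dotted version strings numerically, so that 1.0.10 correctly sorts after 1.0.9:

```python
# Sketch of the "refuse to load future versions" rule (not the actual
# patcher implementation): compare dotted version strings as integer
# tuples, so "1.0.10" sorts after "1.0.9" rather than before it.

def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

def can_load(app_version, object_version):
    """An app may load objects from its own release or any earlier one."""
    return version_tuple(object_version) <= version_tuple(app_version)
```

The integer-tuple comparison matters: a naive string compare would put "1.0.10" before "1.0.9".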

community library will be versioned from 1.0.10

I discussed this briefly with @johannes yesterday; as said, it was likely that we were going to have to branch the community library in the same way as we do the factory library. So I will be doing this from version 1.0.10. @johannes, I will look into the code changes today. This will mean that the official releases will be versioned from day 1.
(I will reset the master branch, after all contributors have moved to 1.0.10)
of course, as I outlined in a previous post, this will mean a 'process' when new releases are made... but I will detail this nearer the time. (there will be a similar process to move away from the 'master branch')

btw: it's worth noting we have a 'technical' issue with the way objects generally are loaded, due to the fairly fragile way the xml is dealt with in the persistence framework we use. There are changes in xml formats that will make it impossible to load objects... and whilst we take care of this when doing upgrades from version X to version X+1, the idea of keeping this 'dead' workaround code around is not very inviting.

back on topic : object versioning

as for the obsolete directory, or user versioning... as I kind of indicated, whilst I'm sure we can implement it and kind of make it work (at least initially), my gut feeling (and it is that) is that it is trivialising the problem... as developers, we know it's not a matter of just having different versions of files... there is often a relationship between files, and version control systems work on a common timeline for everything in the repo.

you may argue that objects are self-contained, but I disagree... they are linked to firmware versions, potentially to header/source files (see filters as an example), and the patches also join them together...
(there is even interaction between objects e.g. table/alloc table/read table/play etc)

I guess fundamentally, whilst I see the user might want a different view, the underlying problem of version control for graphical programming is no different from normal coding... it's all about keeping things aligned, and being able to manage change.

as I say, I've no doubt something with folders/version numbers/uuid chaining can be cobbled together;
my fear is you're just building a version control system by a different name, so what will happen is that over time you will keep adding more and more features.

anyway... just my view, perhaps I'm just so used to versioning in software development that it's tainting my view of what might be a simpler problem, with a simpler solution.


#9

You're right that essentially, what we're after is version control. (My reference to the VAX/VMS above was just to put some perspective on it). I too am used to cvs and git, so I'm asking myself, in what way might managing objects be different to managing ordinary source code? My gut feeling tells me there is a difference, but I can't really put my finger on it. What follows is me thinking aloud, sorry if it's hard to follow.

For one thing, git and cvs tend to manage a bunch of code which has a common state. Before accepting a new commit to the firmware, the firmware is built and to some extent tested. The new commit represents a new state of the firmware. But if an object is updated, there's no build process which verifies that it is compatible with all patches which might run it, or even with other objects. So each object represents its own state. The dependencies to other objects are not as clear as with software. I think this is one of the core issues. An updated object may break patches, which are not part of the commit.

In the Linux kernel there is a golden rule: no change in the kernel may break userspace. APIs may be added and bugs fixed, but an existing API may not be changed in a way that would cause an existing userspace program to stop working. It's a nice idea, but carrying it over to the Axoloti would mean that objects may never be changed if that means breaking patches, which seems rather harsh. Also, who is going to police that? The Linux kernel has a stringent review process to avoid code getting in which will cause problems down the line, and we don't really have that option with the Axoloti. But perhaps that is the way to go. If a new version of an object needs a new interface, it must be given a new name. That would clutter up the namespace, though, with objects which essentially are obsolete. Hence the idea of having an 'obsolete' area where they can reside. And there's still the issue of updating the implementation of an object (a filter implementation for instance), which will change its behavior but not its interface. As well as how to maintain a chain of object updates: somehow we want to tell the user that there is a newer version of a given object available, and where to find it.

I think the problem is that we're trying to solve several problems at once, and it essentially is a complex problem. So in order to make it manageable I think we need to make some simple rules, some of which might be a pain for object designers when updating objects in different ways, but which on the whole will make the user experience better.

Even if we use a version control system to handle the different versions of objects, we've still got the problem of how to refer to given version, and how to manage chains of object updates.


#10

the issue is that version numbers are only useful IF they are allocated consistently, and that I think will be impossible to achieve in the community library... as you rightly point out, we cannot enforce any rules there... and we also have to make it easy for all contributors.

first, on reflection, my example using app versions was not a good idea....
I think using a sequential timeline is the better way... it requires no user intervention.

I agree there are a number of different issues..

lets consider the problem in a few parts...
a) ensure we can retrieve the correct objects for an old patch
b) retrieving/using old objects
c) upgrading

(a) a patch uses libraries; we could, when we save the patch, store the commit id of the head of each library. This means for any patch you know exactly which libraries (repos) are used, and the point in their history they were at.

(b) retrieving: you can retrieve any object easily if you know the commit id of the head.
the next trick is, how do you use that code?
my straw man said: retrieve it and put it in an embedded object, but that's only one solution; theoretically you could load/stream directly from the repo... or if that turned out to be slow, you could 'cache' the file (it's temporary only, not like an obsolete directory, which is permanent)

(c) upgrading
easy, the commit log tells you exactly what changes have occurred and when....
the change log can be used to describe to the user what changed and when, and as we are doing it object by object, they can choose to upgrade one or all, or any combination.
they could even upgrade in steps, i.e. not do that last 'big api' change yet, but leave it for another day.
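
To make (a)-(c) concrete, a sketch using a plain `git` binary (the patcher would go through jgit, and the repo layout here is made up): record the head commit at patch-save time, read an object back at that commit, and use the per-file log to describe what changed since.

```python
import subprocess

# Sketch of (a)-(c) with a plain git binary; the real patcher would use jgit.

def _git(repo_dir, *args):
    out = subprocess.run(["git", "-C", repo_dir, *args],
                         check=True, capture_output=True, text=True)
    return out.stdout

def library_head(repo_dir):
    """(a) at patch-save time: record the commit id of the library head."""
    return _git(repo_dir, "rev-parse", "HEAD").strip()

def object_at(repo_dir, commit, path):
    """(b) retrieve an object exactly as it was when the patch was saved."""
    return _git(repo_dir, "show", f"{commit}:{path}")

def changes_since(repo_dir, commit, path):
    """(c) per-object change log since the recorded commit, newest first."""
    return _git(repo_dir, "log", "--format=%s",
                f"{commit}..HEAD", "--", path).splitlines()
```

The commit messages from `changes_since` are exactly the "describe the impact to the user" text, for free, and the per-path log gives each object its individual history.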

so why use a vcs, what does it buy you over 'roll your own'... to be more precise, what features might you use that you would also have to add to a roll-your-own solution?

well the main thing is commit history.... comments on changes, dates and times of those changes, how those changes related to other changes.
(I don't agree objects are always isolated, some objects change together...)

also I think you have a misunderstanding in my approach...
I know in conventional coding we check out a whole commit, i.e. many files in one go... but that's not how I'm suggesting to use a vcs here; I'm saying use the individual history an object has.

the other advantage is, it has a toolset in place....
sure, most users may not need anything advanced, but the ability for an inquiring mind to diff objects over time, or query the changes between two dates, is useful.... especially when debugging why your patch's behaviour has changed.

finally... the 'freebies' of a vcs...
I imagined when I created the libraries that I might add features like 'freeze'; this would basically use tags to allow users to have a snapshot in time of the libraries... so they can freely go back and forth, e.g. for performances.
or perhaps even add 'user level' support for private repos, such that an end user can back up their own patches to the cloud, and then retrieve old versions.

of course, my basic premise here is simple, use the power of a vcs, but hide it from the user... wrap it so they don't need to know the complexities.
vcs are great for managing change over time, thousands of hours of development has been done on them to do just that, and they are used by 100,000s of users daily so are robust....

for me, I'd say I think either approach can work? do you agree?
(or do you think there is a fundamental flaw using a vcs which means it cannot work?)

I've suggested the advantages of the vcs approach.

the advantages I can see for a roll-your-own are: you aren't tied so much to an external implementation... and perhaps on the surface it appears simpler (as it's self-contained, so no knowledge of git/jgit is needed)

dev effort, I'd say, is pretty similar

anyway, those are just my views... I think both could work, really it's a choice....


#11

I've a few questions for object developers that I think might raise a few lifecycle questions....

Do you want to be able to support your users on multiple versions of Axoloti?

Imagine we have axoloti 1.2.0, and you have released "FantasticAndPopular(tm).axo", which is being used by lots of users; it becomes almost de facto, it's that good. (I can already think of a few community objects/patches that might fit this, but I'll save blushes by not naming them :slight_smile: )

Now we release 1.4.0, which is all singing and dancing, but of course some performers will probably just want to wait and see, let it settle down. (understandable, it's pretty much a recommendation in music circles.)
But you, being the cool developer you are, continue to develop "FantasticAndPopular(tm).axo"; it's fairly small changes, with only minor and compatible differences... do you want to also release this for 1.2.0, for your massive and loyal fan base?

If so, how do you think this should be handled? what versions of axoloti will you be using?

Will you be running multiple versions of Axoloti to support the above? or stay on the old version... and allow changes to be propagated?

I guess the question also circles around how quickly users can be expected to upgrade... and how to deal with this.
e.g. I can imagine object developers may be quite quick, to get the new benefits, but perhaps some of 'your' users will be much slower... as they are happy with Axoloti as is... but would still want updates to your objects.
I'm kind of hoping this will provide an 'incentive' for them to upgrade (all developers want users on the latest versions, it eases support etc.), but is this realistic?

thoughts?


#12

I liked the idea of freezing patches. Maybe a macro that selectively embeds all community objects could save a lot of patches (the important ones, at least).

A "beta" tag could be added inside the xml files and in the object editor; again, a user could then easily freeze all beta objects. I guess the problem here comes when you want to unfreeze: do patch/object and patch/patcher remember the original uuid?