Morph object - How hard would it be to port the LORIS library to Axoloti?


#1

LORIS is a sound modeling package based on the Reassigned Bandwidth-Enhanced Additive Sound Model.

http://www.cerlsoundgroup.org/Loris/

I'm curious how feasible it would be to port this to the Axoloti. I'm posting now, before I start building, because I want thoughts on the best way to build it. Do I go the maths route, or try to convert the existing Loris code?

Existing open source example of a morph object:
The Loris software modules have a well-documented example of both the DSP mathematics and of doing this in C:
http://www.cerlsoundgroup.org/Loris/LorisMorphingInC.html

There are a few other examples of a morph object in Csound, but this one is probably the best in terms of documentation and the number of source conversions available. A rough outline of the Loris pipeline is sketched below.
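For anyone who doesn't want to click through, the linked Loris example boils down to something like the outline below. Heads up: this is my paraphrase of the Loris procedural C interface (loris.h), so treat the function names, argument values and especially the signatures as approximate and check the real header; error handling and file I/O are left out.

```c
/* Rough outline of the Loris "morphing in C" example linked above.
 * Paraphrased from the Loris procedural interface (loris.h); the exact
 * signatures and the analyzer/envelope numbers are placeholders, so
 * verify against the real header before relying on this. */
#include "loris.h"

void morph_two_sounds(const double *srcA, unsigned int lenA,
                      const double *srcB, unsigned int lenB,
                      double srate, double *out, unsigned int outLen)
{
    PartialList *a   = createPartialList();
    PartialList *b   = createPartialList();
    PartialList *mph = createPartialList();

    /* 1. analyse both sources into reassigned bandwidth-enhanced partials */
    analyzer_configure(270.0, 300.0);   /* resolution / window width: placeholders */
    analyze(srcA, lenA, srate, a);
    analyze(srcB, lenB, srate, b);

    /* 2. channelize and distill so partials in A and B line up by harmonic label */
    LinearEnvelope *refA = createFreqReference(a, 250.0, 1000.0, 100);
    channelize(a, refA, 1);
    distill(a);
    LinearEnvelope *refB = createFreqReference(b, 250.0, 1000.0, 100);
    channelize(b, refB, 1);
    distill(b);

    /* 3. morph: an envelope going from 0 (all A) to 1 (all B) over two seconds,
          used here for frequency, amplitude and bandwidth alike */
    LinearEnvelope *m = createLinearEnvelope();
    linearEnvelope_insertBreakpoint(m, 0.0, 0.0);
    linearEnvelope_insertBreakpoint(m, 2.0, 1.0);
    morph(a, b, m, m, m, mph);

    /* 4. render the morphed partial list back to audio (additive synthesis) */
    synthesize(mph, out, outLen, srate);
}
```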

Why a morph object?
A morph is not the same thing as crossfading. A morph transforms the characteristics of one input into those of another. You could, for example, morph between a sine wave and a square wave to get all the waveforms in between, morph between a delay and a reverb, or turn a piano into a violin; you get the idea.

The last thing I need to add to this post is why I would go to such lengths to build this. The easiest way to explain would be a tutorial on the classic EMU filter morph, because morphing is what the classic EMU Z-plane sound was built on. Adding this lost capability to a modular environment like Axoloti, using solely open-source material, would be just awesome.


#2

I've not looked at LORIS in any detail, but...
in general, porting C code is not difficult (assuming it's not doing I/O etc.); the more likely issue is...
is it going to be usable once ported, or does it use too much CPU/memory?
... also, can it be optimised to use the Cortex-M4's DSP/SIMD instructions?
is it even realistic to run this on an STM32F4? (I'm not saying it's not, but we have to have realistic expectations...)


#3

Well, the basic thing you want to do is take two audio signals, mix them together, and then listen to the blend between the two signals?


#4

I think you're describing a crossfade...

the other approach (AFAIK, and what I think Loris is doing) is to analyse the frequency content of the two input signals and then mix them by reconstructing the result with additive synthesis.

I'd assume this is computationally expensive: I guess you need to do an FFT of the two signals, and then the additive synthesis is expensive since you need N sine waves. I did consider this fleetingly when we were discussing wavetables... which seemed manageable since you're just doing single-cycle waves (quite small), not long samples.
I guess with samples you have to chop them up into small frames and do this for each section.
BUT... this is a complete guess, as I've never studied it... so I'm sure LORIS does a lot more than this. (I think it's derived from a PhD project, so not simple stuff :smile: )
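To make that concrete, here's a minimal sketch of the "interpolate the analysis, then resynthesize additively" idea, in plain C with made-up names (nothing taken from Loris). The point is that the morph interpolates per-partial frequencies and amplitudes rather than samples, and the cost is roughly one sine oscillator per partial per output sample.

```c
/* Toy per-frame additive morph - hypothetical names, not Loris code.
 * Assumes both sounds were analysed offline into matched partial tracks:
 * one frequency and one amplitude per partial per frame. */
#include <math.h>
#include <stddef.h>

#define NUM_PARTIALS 32                 /* N oscillators: this is the expensive part */
#define TWO_PI       6.28318530718f

typedef struct {
    float freq[NUM_PARTIALS];           /* Hz               */
    float amp[NUM_PARTIALS];            /* linear amplitude */
} PartialFrame;

/* morphPos: 0 = sound A, 1 = sound B.  phase[] must persist across calls. */
static void morph_frame(const PartialFrame *a, const PartialFrame *b,
                        float morphPos, float sampleRate,
                        float *out, size_t blockSize, float *phase)
{
    for (size_t n = 0; n < blockSize; ++n)
        out[n] = 0.0f;

    for (int p = 0; p < NUM_PARTIALS; ++p) {
        /* the morph: interpolate the analysis parameters, not the samples */
        float f   = (1.0f - morphPos) * a->freq[p] + morphPos * b->freq[p];
        float g   = (1.0f - morphPos) * a->amp[p]  + morphPos * b->amp[p];
        float inc = TWO_PI * f / sampleRate;

        for (size_t n = 0; n < blockSize; ++n) {
            out[n]   += g * sinf(phase[p]);
            phase[p] += inc;
            if (phase[p] > TWO_PI)
                phase[p] -= TWO_PI;
        }
    }
}
```

Even this toy version is 32 partials x 48 kHz, which is around 1.5 million sine evaluations per second, so on an STM32F4 you'd want a table lookup rather than sinf(); that gives an idea of why the additive side is heavy.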


#5

Ah yes... I am describing a crossfade, oops :smile: Yes, it seems a bit more complicated than that.


#6

Porting Loris (or other McAulay/Quatieri-based partial techniques) would be quite challenging; perhaps the synthesis side is doable, but the analysis side is best kept as a preprocessing step. I'm not sure you'd end up reusing much of the original Loris code after porting...

The EMU Z-plane filtering is a different thing from this Loris morphing, and far more doable!
http://music-dsp.music.columbia.narkive.com/TCIU1fd1/emu-iv-filter-design
http://www.google.com/patents/US5170369
I think that patent is expired now...
An EMU Z-plane filter clone would certainly be a welcome addition to the object library.
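To sketch the Z-plane idea (nothing taken from the patent, all names hypothetical): store each filter "frame" as pole radius/angle pairs, interpolate those positions in the z-plane, and rebuild the recursion coefficients per block. Interpolating pole positions rather than raw coefficients keeps every intermediate filter stable as long as the radius stays below 1.

```c
/* Toy z-plane morphing filter - hypothetical sketch, not EMU's actual tables. */
#include <math.h>

#define NUM_SECTIONS 6                    /* six 2-pole sections ~ 12th order */

typedef struct {
    float radius[NUM_SECTIONS];           /* pole radius, keep < 1.0          */
    float angle[NUM_SECTIONS];            /* pole angle in radians (0..pi)    */
} ZPlaneFrame;

typedef struct {
    float a1, a2;                         /* denominator coefficients         */
    float y1, y2;                         /* per-section state                */
} PoleSection;

/* Interpolate two stored frames and rebuild the coefficients:
   a complex pole pair at r*e^(+/-jw) gives 1 + a1*z^-1 + a2*z^-2
   with a1 = -2*r*cos(w) and a2 = r*r. */
static void zplane_morph(const ZPlaneFrame *fa, const ZPlaneFrame *fb,
                         float morphPos, PoleSection *s)
{
    for (int i = 0; i < NUM_SECTIONS; ++i) {
        float r = (1.0f - morphPos) * fa->radius[i] + morphPos * fb->radius[i];
        float w = (1.0f - morphPos) * fa->angle[i]  + morphPos * fb->angle[i];
        s[i].a1 = -2.0f * r * cosf(w);
        s[i].a2 = r * r;
    }
}

/* All-pole cascade: y[n] = x[n] - a1*y[n-1] - a2*y[n-2] per section. */
static float zplane_tick(PoleSection *s, float in)
{
    float x = in;
    for (int i = 0; i < NUM_SECTIONS; ++i) {
        float y = x - s[i].a1 * s[i].y1 - s[i].a2 * s[i].y2;
        s[i].y2 = s[i].y1;
        s[i].y1 = y;
        x = y;
    }
    return x;
}
```

AFAIK the real EMU filters interpolate across a table of many stored frames along several dimensions and include zeros as well as poles, but the per-sample cost stays at a handful of second-order sections, which is why this is so much cheaper than the additive approach.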


#7

Yeah, I'm still hoping the synthesis side of LORIS would be doable.

Interesting on the actual EMU method - I hadn't looked too closely due to the patent, but I can see how it's much more efficient. That ARMAdillo table is just genius, really.