Short Convolution (Guitar Cabinet Modeling)


#6

Yeah, let's see if we can get anything going. Though again, I think I'm out for a while because of the defective boards. Two on the same day.

Will have to look into fixing the boards in some way first.


#7

I suppose the limit on the FIR filter object (16 coefficients) stems from the same issue as longer FFTs: the sample buffer is 16 samples, so that is the longest you can do "easily". 16 samples is 1/3 of a ms, too short even for typical audio-frequency filters.
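For reference, a direct-form FIR at that size looks something like the sketch below. This is not the factory object's code, just a plain-C illustration with float samples; the history buffer carries the last 15 inputs across the 16-sample buffer boundaries:

```c
#include <string.h>

#define BUFSIZE 16   /* Axoloti s-rate buffer length */
#define NTAPS   16   /* coefficient limit of the stock FIR object */

/* Direct-form FIR: each output sample is a dot product of the last
   NTAPS input samples with the coefficient array.  `hist` holds the
   previous NTAPS-1 samples (oldest first) between calls. */
static void fir16(const float in[BUFSIZE], float out[BUFSIZE],
                  const float coef[NTAPS], float hist[NTAPS - 1])
{
    for (int n = 0; n < BUFSIZE; n++) {
        float acc = 0.0f;
        for (int k = 0; k < NTAPS; k++) {
            int idx = n - k;
            float x = (idx >= 0) ? in[idx] : hist[NTAPS - 1 + idx];
            acc += coef[k] * x;
        }
        out[n] = acc;
    }
    /* save the tail of this buffer for the next call */
    memcpy(hist, &in[BUFSIZE - (NTAPS - 1)], (NTAPS - 1) * sizeof(float));
}
```

At 16 taps and a 48 kHz sample rate, the longest feature such a filter can "see" is those 1/3 ms, which is why it cannot capture a cabinet impulse response.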


#8

Naive question (again):

Couldn't you chain a few of them for more samples?


#9

Kassu, would you know how to use that to create an extremely short convolution capability? I'd even be willing to try slicing the impulse into 16-sample pieces, but I wouldn't know what to do with them. Also, what do you think of jaffasplaffa's idea of chaining them in series?


#10

I don't know the best solution to this, but the algorithm linked below might be roughly what you need. I'm pretty sure you have to implement it as a custom object to stand a chance, and I'm not able to write it myself.

The idea is to chop the input signal into pieces and convolve each piece with the entire filter kernel (impulse response). Then you add the results for the pieces together, with a certain amount of overlap. If I understand the algorithm correctly, you need one extra memory buffer with the length of the impulse response plus 16 samples (the piece length), which sounds feasible. I don't know the limits on Axoloti, basically how big a convolution you can do in the time of 16 samples...
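A minimal sketch of that overlap-add scheme, in plain C with float samples and an illustrative 64-sample kernel (the names and sizes are mine, not from any Axoloti object):

```c
#include <string.h>

#define PIECE 16   /* Axoloti buffer length */
#define IRLEN 64   /* example impulse-response length (assumed) */

/* Overlap-add with direct (time-domain) convolution: each 16-sample
   piece convolved with the full kernel yields PIECE+IRLEN-1 samples;
   only the first PIECE are complete, so the tail stays in `ola` and
   gets added into the following pieces. */
static void ola_block(const float in[PIECE], float out[PIECE],
                      const float ir[IRLEN],
                      float ola[PIECE + IRLEN - 1])
{
    /* convolve this piece into the overlap buffer */
    for (int n = 0; n < PIECE; n++)
        for (int k = 0; k < IRLEN; k++)
            ola[n + k] += in[n] * ir[k];

    /* the first PIECE samples are complete: emit them */
    memcpy(out, ola, PIECE * sizeof(float));

    /* shift the remaining tail down and clear the freed space */
    memmove(ola, ola + PIECE, (IRLEN - 1) * sizeof(float));
    memset(ola + IRLEN - 1, 0, PIECE * sizeof(float));
}
```

The extra buffer is exactly the "impulse response length + 16 samples" mentioned above (minus one sample), and the method adds no latency beyond the piece length itself.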


#11

There is an example of an overlap-add shifter in Axoloti.

Tutorial 22.

But I don't understand what is going on in the patch yet.


#12

@kassu "partitioned convolution" removes the latency inherent in the method described in that Wikipedia article. The term "overlap-add" also covers things other than the FFT convolution described there.

That's a different "overlap-add" than the one in that pitch shifter.


#13

A rough estimate: implementing direct convolution (pretty simple, far easier than implementing regular FFT convolution or partitioned convolution) should work out to around a 1600-sample impulse response length (~33 milliseconds) for mono in/out.

http://www.arm.com/files/pdf/DSPConceptsM4Presentation.pdf
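As a rough sanity check on that figure, assuming the Axoloti Core's STM32F4 (Cortex-M4) runs at 168 MHz with 48 kHz audio (my assumed numbers, not from the post):

```c
/* Back-of-envelope check of the ~1600-tap direct-convolution estimate. */

/* impulse-response length in milliseconds for a given tap count */
static double ir_ms(int taps, double fs_hz)
{
    return 1000.0 * taps / fs_hz;
}

/* Direct convolution costs one MAC per tap per output sample, so the
   per-sample cycle budget (cpu_hz / fs_hz) divided by the tap count is
   the number of CPU cycles available per MAC.  The M4's dual 16-bit
   MAC instructions can retire two MACs per cycle, so a value around 2
   means the processor is near full load. */
static double cycles_per_tap(double cpu_hz, double fs_hz, int taps)
{
    return (cpu_hz / fs_hz) / taps;
}
```

With these numbers, 1600 taps is 33.3 ms of impulse response, and the budget works out to about 2.2 cycles per tap, which is consistent with the "around 1600 samples" estimate for a fully loaded mono path.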


#14

Sweet! So it should be possible as long as the impulse sample is 33 ms or less! Can it be done with current objects, or would this be a future object or an addition to the core code?


#15

This can't be done with current objects, it requires development of a custom object.


#16

@johannes

Is that something you would consider implementing sometime down the road? It would be a great addition to Axoloti. No rush on it, just curious if you have thought about it :wink:

Thanks.


#17

I'm derailing the topic a little here, but I was just looking through the CMSIS DSP library documentation, and there is an example of a convolution setup using the library's FFT functions.

Assuming CMSIS DSP is available to custom objects, could this example be adapted to make a spectral morph between two s-rate audio inputs? If so, would it work with two 16-sample buffers, or would it be necessary to aggregate several buffers (and increase latency)?

a|x


#18

I have pushed a convolution object and help patch to the 1.0.10 factory library, "filter/convolution", using version 1.0.10 "sync libraries" should get it to you.

I also added an example synth patch in axoloti-factory/patches/demos/synth/dreamy.axp

I haven't tried cabinet modelling with it, but it should do fine :slight_smile:

edit1: OOPS: the length is fixed to 1024 samples, working on a fix...
edit2: fixed...


#19

Hey @johannes

Nice. Fun to play with. Thanks :wink: But I think the object would benefit a lot from having a modulation input for the coefficients, like the FIR filter has. I think that would help eliminate some of the "digitalness" of the object's sound.


#20

Johannes!

Fantastic work! THIS was what I was wishing for! I replaced the noise input sources with various cab impulses and rocked out! I put an awesome distortion with a gate in front, plus an EQ, delay, and small reverb module in the mix, and am running at around 80%. I actually tried a more lush reverb, but it used too much SDRAM.

The ONLY problem I have is that I have to push the button for the impulse to be read into the table, so it doesn't work in standalone mode. Any ideas on how to have an impulse auto-load into the table?


#21

can you share your cab impulses? or a link?


#22

@lokki

I got a bunch from this site, though I haven't converted any of them to "Axoloti format" yet:

http://www.grgr.de/index.html#ir

Just let the site load. It starts out by scrolling far down the page in a somewhat odd manner; just let it scroll and you will get to the cab files. The ones I have checked are all random sizes, so they probably have to be stretched individually.

Also this site:


#23

I would like to know:
1. How much is the algorithmic latency? (That is, if the impulse response were an undelayed dirac pulse, which delay value for a [delay/echo] would be equivalent?)
2. Does it only work with 16-bit resolution for the impulse response?
Thanks in advance!


#24
1. The current implementation has 2 samples of algorithmic latency.
2. Yes, it works only with a 16-bit impulse response. The internal accumulator is 64-bit, though. 32-bit resolution would require a different implementation and would load the processor much more. I do believe this is an acceptable compromise in most cases.
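Illustratively, one output sample under that scheme might look like the sketch below. This is my reconstruction of the described arithmetic (16-bit q15 impulse response, wide accumulator), not the factory object's actual code:

```c
#include <stdint.h>

/* One output sample of direct convolution: 32-bit input samples times
   a 16-bit (q15) impulse response, accumulated in 64 bits so the
   intermediate sum cannot overflow.  `x` points at the newest input
   sample; older samples sit at negative offsets. */
static int32_t conv_sample_q15(const int32_t *x,
                               const int16_t *ir, int len)
{
    int64_t acc = 0;
    for (int k = 0; k < len; k++)
        acc += (int64_t)x[-k] * ir[k];   /* 32x16 -> up-to-48-bit products */
    return (int32_t)(acc >> 15);         /* undo the q15 scaling */
}
```

The 64-bit accumulator is what keeps long impulse responses from overflowing despite each product already being up to 48 bits wide; a 32-bit impulse response would double the multiply work, which matches the "would load the processor much more" point above.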

#25

Sorry to bring up an old thread, but I'm very confused as to how you can convert an impulse response in WAV format into an Axoloti table.

Can someone please explain this?

Thanks