Upsampling / downsampling: am I doing it right?


#1

My idea was to:
1) linearly interpolate between the current input sample and the preceding one (dividing the interval into N points)
2) "do stuff" for every interpolated value (evaluating a polynomial, in my case)
3) sum the results of all the interpolations and divide by N

This actually reduces aliasing a bit, but not all that much, while increasing the DSP load a lot..
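In code, what I'm doing is roughly this (with a made-up cubic polynomial standing in for my "do stuff" step):

```python
def waveshape(x):
    # stand-in for the "do stuff" polynomial; a cubic soft clipper here
    return 1.5 * x - 0.5 * x ** 3

def process_naive_oversampled(samples, n=4):
    """For each input sample, linearly interpolate n points back to the
    previous sample, run the nonlinearity on each, and average the results."""
    out = []
    prev = 0.0
    for x in samples:
        acc = 0.0
        for k in range(1, n + 1):
            t = k / n                       # interpolation position in (0, 1]
            interp = prev + t * (x - prev)  # linear interpolation
            acc += waveshape(interp)
        out.append(acc / n)                 # averaging = crude lowpass + decimate
        prev = x
    return out
```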

Any tips for clevererer techniques?

Edit: also, sometimes I fail to understand the discrete math notation (with all those z^-1 and x[n-1] terms and such). Is there something fairly easy I could read to sort that out? Wikipedia is full of crazy DSP math, but it's like ancient Aramaic to me.


#2

Linear interpolation and moving-average filters do not have great stopband attenuation. For small oversampling ratios, a polyphase FIR is the most common topology, I believe; for high ratios, CIC (cascaded integrator-comb) filters are more efficient.
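A bare-bones sketch of the polyphase idea for upsampling (windowed-sinc prototype; the tap count and Hamming window here are arbitrary choices on my part, not a tuned design):

```python
import numpy as np

def lowpass_taps(num_taps, cutoff):
    """Windowed-sinc lowpass prototype (Hamming window); cutoff is a
    fraction of the sampling rate, so 0.5 means Nyquist."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hamming(num_taps)
    return h / h.sum()  # unity gain at DC

def polyphase_upsample(x, L, num_taps=48):
    """Upsample by L: instead of zero-stuffing and filtering at the high
    rate, run L small subfilters at the input rate and interleave them."""
    h = L * lowpass_taps(num_taps, 0.5 / L)  # gain L compensates zero-stuffing
    phases = [h[p::L] for p in range(L)]     # polyphase decomposition
    y = np.empty(len(x) * L)
    for p, hp in enumerate(phases):
        y[p::L] = np.convolve(x, hp)[:len(x)]
    return y
```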
Here's a topic about polyphase FIR with a reference to a test object doing up- and downsampling, and here's a topic with a reference to a CIC downsampling test object.
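And a minimal CIC decimator sketch, in case it helps to see the structure (the stage count N and differential delay M below are illustrative defaults, not recommendations):

```python
def cic_decimate(x, R, N=2, M=1):
    """N-stage CIC decimator: N integrators at the input rate, decimate
    by R, then N comb (differencing) stages at the output rate.
    Multiplier-free; the DC gain (R*M)**N is divided out at the end."""
    # integrator stages (running sums) at the high rate
    for _ in range(N):
        acc = 0.0
        out = []
        for v in x:
            acc += v
            out.append(acc)
        x = out
    # decimate by R
    y = x[::R]
    # comb stages: y[n] - y[n-M], at the low rate
    for _ in range(N):
        prev = [0.0] * M
        out = []
        for v in y:
            out.append(v - prev[0])
            prev = prev[1:] + [v]
        y = out
    return [v / (R * M) ** N for v in y]
```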


#3

I noticed the MI Clouds implementation includes a sample-rate converter:

clouds/dsp/sample_rate_converter.h

It's used for downsampling and then upsampling (i.e. a pair of converters); for each one you give it a ratio for the conversion and some filter coefficients.
(You can find its usage in the Clouds code when 'low fi' is activated, as this downsamples by 2x.)

There is some Python code which helps you calculate these coefficients, given a bunch of parameters (including sample rate and ratios).
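I don't know what method the Clouds script uses exactly, but generic FIR coefficient calculation for a 2x converter could look something like this with scipy (the tap count and the 0.9 transition-band margin are guesses on my part):

```python
from scipy.signal import firwin

def converter_coefficients(num_taps=63, ratio=2):
    """Lowpass FIR coefficients for a downsample/upsample-by-`ratio` pair.
    The cutoff sits just below the new Nyquist (1/ratio of the old one);
    firwin normalizes cutoff so that 1.0 == Nyquist, and by default scales
    the taps for unity gain at DC."""
    return firwin(num_taps, (1.0 / ratio) * 0.9)  # 0.9 leaves a transition band

taps = converter_coefficients()
```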

The only thing is, it does appear to be quite expensive in CPU terms,
but it still might be interesting to look at.

(Sorry, I don't know enough about filters/downsampling to tell you what method it uses.)