SDRAM Tables and Powers of 2


#1

Do tables stored in SDRAM always need to be powers of 2 in length, or can they be of arbitrary size?

I want to make a version of the 'alloc 16b sdram load' object to load 8-bit data from a binary file, but I'd rather not waste a load of SDRAM by allocating more space than required.

The file-loading will be integrated into a custom object, so there's no need for the loaded data to be accessible from a table load/play etc. object.

a|x


#2

the read/write objects are optimised for (and will only work with) power-of-2 lengths.
but no, arrays don't generally need to be 2^n if you are planning on using your own access objects.
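
For what it's worth, here's a minimal C sketch of the "own access objects" idea: an arbitrary-length 8-bit array with a plain bounds-checked read. The `.sdram` section attribute is my assumption about how data ends up in external RAM, and the names here are mine, not from the actual objects:

```c
#include <stdint.h>

#define TABLE_LEN 6000  /* arbitrary length - no power-of-2 requirement */

/* Assumed: the linker script exposes a ".sdram" section for external RAM. */
static int8_t table[TABLE_LEN] __attribute__((section(".sdram")));

/* Plain bounds-checked read: works for any TABLE_LEN, at the cost of a
   compare instead of a mask. */
static int8_t table_read(uint32_t i) {
    return (i < TABLE_LEN) ? table[i] : 0;
}
```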


#3

Cool, thanks.

I realised I may be able to get away with a fixed 8 KB size (8192 bytes) anyway, but it's good to know.

How does the optimisation you mention work, incidentally?

I'm going to be reading data from SDRAM at a fraction of k-rate, so optimisation isn't really necessary, especially if there's a tradeoff in some other area (processor cycles, etc.).

a|x


#4

it's all to do with bit shifting. a quick look (at one table/read object) suggests it's using the power-of-2 length to scale the Q27 input to the size of the table.
e.g. imagine mapping a 0..64 range onto a table that is 123 entries long: you'd need a divide, whereas a shift is much faster.
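
roughly, in C (Q27 and LENGTHPOW are Axoloti's conventions, but the function names here are mine, just to illustrate the difference):

```c
#include <stdint.h>

#define LENGTHPOW 13                 /* table length = 2^13 = 8192 entries */
#define TABLE_LEN (1 << LENGTHPOW)

/* Power-of-2 length: a Q27 phase (0..1 stored as 0..2^27) becomes a table
   index with a single shift and mask. */
static uint32_t index_pow2(int32_t phase_q27) {
    return ((uint32_t)phase_q27 >> (27 - LENGTHPOW)) & (TABLE_LEN - 1);
}

/* Arbitrary length (e.g. 123): the same mapping needs a multiply and shift
   (or a divide) instead of a single shift. */
static uint32_t index_arbitrary(int32_t phase_q27, uint32_t len) {
    return (uint32_t)(((uint64_t)(uint32_t)phase_q27 * len) >> 27);
}
```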

note: I've not reviewed all the objects, so there may be other optimisations, but this was the 'obvious' one (just look at usages of LENGTHPOW). I'd suspect that for integer access it's possibly not as relevant, but why scaled inputs are used there is a completely different question :)

anyway, have a look at the code; the non-interpolating versions are not that daunting.

of course, your 'main issue' in terms of wasting memory is that standard arrays don't allocate dynamically... which mainly means you might as well allocate the size of your largest data set/sample.

where this logic fails is when you're using lots of samples (etc.): in theory the allocated size should then be the maximum of the COMBINED sample set... I suppose you could load them all into one array, though, and use appropriate indexing.
oops, these thoughts get off-topic quite quickly - sorry :slight_smile:
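
as a rough sketch of the 'one array, appropriate indexing' idea - a hypothetical bump allocator, nothing to do with the actual Axoloti objects:

```c
#include <stdint.h>

#define POOL_LEN 65536            /* sized for the combined sample set */

static int8_t pool[POOL_LEN];     /* one shared allocation (SDRAM in practice) */
static uint32_t pool_used = 0;

/* Reserve n bytes in the shared pool; returns the start offset, or -1 if
   the pool is exhausted. Each sample is then just (offset, length). */
static int32_t pool_alloc(uint32_t n) {
    if (pool_used + n > POOL_LEN) return -1;
    int32_t off = (int32_t)pool_used;
    pool_used += n;
    return off;
}

static int8_t pool_read(uint32_t off, uint32_t i) {
    return pool[off + i];
}
```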


#5

No need to apologise - it's all useful information.

The scaling-via-bit-shifting makes a lot of sense.

I'm not planning to traverse the data in the sense of playing back a table of audio, anyway. The binary file will store several different kinds of data, including character strings and pointers to encoded audio streams within the array, so it's quite an obscure use-case.
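
Just to illustrate the kind of layout I mean (the field names and counts here are invented, not the actual format):

```c
#include <stdint.h>

/* Hypothetical header for the mixed-content blob: the offsets point back
   into the same 8 KB array. */
typedef struct {
    uint32_t strings_off;       /* start of NUL-terminated character strings */
    uint32_t num_streams;       /* number of encoded audio streams */
    uint32_t stream_off[8];     /* offset of each encoded stream */
} blob_header_t;
```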

Since we're only talking 8 KB, I'm not too worried about a possible few KB of waste, now that I think about it.

a|x