All to do with bit shifting: a quick look (at one table/read object) suggests the shift is being used to scale the Q27 value to the size of the table.
e.g. imagine mapping an input of 0..64 onto a table that is 123 entries long: you'd need a divide, whereas with a power-of-two table length a simple shift does the scaling, and a shift is much faster.
Note: I've not reviewed all the objects, so there may be other optimisations, but this was the 'obvious' one (just look at usages of LENGTHPOW). I'd suspect the integer access is possibly not as relevant, BUT why scaled inputs are used is a completely different question.
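To make that concrete, here's a minimal sketch of the idea (not the actual object code; LENGTHPOW, the table array and the function name are just illustrative): because the table length is a power of two, the Q27 phase maps onto an index with one right shift, where an arbitrary length like 123 would need a multiply or divide.

```c
#include <stdint.h>

#define LENGTHPOW 7                 /* table length = 1 << 7 = 128 entries */
#define LENGTH    (1 << LENGTHPOW)

static int32_t table[LENGTH];       /* table data, filled elsewhere */

/* phase_q27: 0 .. (1<<27)-1 represents 0.0 .. ~1.0 */
static inline int32_t table_read_nointerp(int32_t phase_q27) {
    /* one shift drops the 27 fractional bits down to LENGTHPOW index bits;
       the mask just wraps/guards the index within the table */
    uint32_t idx = ((uint32_t)phase_q27 >> (27 - LENGTHPOW)) & (LENGTH - 1);
    return table[idx];
}
```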
Anyway, have a look at the code; the non-interpolating versions are not that daunting.
Of course, your 'main issue' in terms of wasted memory is that standard arrays are not dynamically allocated... for that reason, you might as well allocate the size of your largest data set/sample.
Where this logic fails is when you're using lots of samples (etc.): then, in theory, the allocated size should be the size of the COMBINED sample set... I suppose you could load these into one array, though, and use appropriate indexing (see the sketch below).
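Something like this hypothetical sketch (names and sizes are purely illustrative, not from any existing object): one pool sized for the combined set, with each sample addressed by an offset into it.

```c
#include <stdint.h>

#define POOL_SIZE (2048 + 1024 + 3072)   /* one allocation for the combined sample set */

static int16_t sample_pool[POOL_SIZE];

typedef struct {
    uint32_t offset;                     /* start index within sample_pool */
    uint32_t length;                     /* number of frames in this sample */
} sample_ref_t;

static sample_ref_t samples[] = {
    { 0,    2048 },                      /* sample 0 occupies pool[0..2047]   */
    { 2048, 1024 },                      /* sample 1 follows immediately after */
    { 3072, 3072 },                      /* sample 2 uses the remainder        */
};

static inline int16_t sample_read(int s, uint32_t i) {
    /* 'appropriate indexing': offset into the shared pool plus frame index */
    return sample_pool[samples[s].offset + (i % samples[s].length)];
}
```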
Oops, these thoughts get off-topic quite quickly - sorry!