Variable types, their range & their SRAM consumption


#1

This is probably a stupid question, but since people here are so nice...

I am trying to squeeze the last bit of SRAM out of my axo. I suppose that choosing variables with as few bits as possible would be one way of doing that. But i noticed that, counter-intuitively, switching a variable from "int" to "int8_t" can actually use more SRAM, not less.

So if someone were kind enough to take the time and explain to me what the different int (and similar) variable types are --- int, int8_t, int16_t, uint and what have you --- and how they differ in terms of range and SRAM usage, that'd be sooo appreciated!


#2

Hmm, interesting question, and one I'd need to play with to give a good answer... but some thoughts to give you something to look into:

I think the M4, being 32-bit, uses 4-byte-aligned data, so by default unpacked data like an int8_t would effectively use 32 bits anyway (1 byte of data, 3 bytes of padding - and perhaps also a performance penalty for conversion) :frowning:

However, I think you can use packed structures to stop this (!?)

But I'd need to check this; the best way would be to declare some structs and do a sizeof(). (A simple type won't tell you about memory wasted due to alignment.)

Anyway, hopefully this gives you a few keywords to do a bit of research, as I don't have time at the moment.

(But also do check that your quest for more RAM doesn't come at a CPU performance hit that's unacceptable for your needs - this stuff is aligned by default for a reason :wink: )


#3

Misaligned accesses (i.e. accessing an N-bit variable on something other than an N-bit memory boundary) tend to be either prohibited or slow, so compilers avoid generating them. As a consequence, data structures are not packed - and if you don't arrange members efficiently, the compiler will generate data structures with holes.

E.g.

BAD:
struct {
    uint32_t a;
    uint8_t b;
    uint32_t c;
    uint8_t d;
    uint32_t e;
};

GOOD:
struct {
    uint32_t a;
    uint32_t c;
    uint32_t e;
    uint8_t b;
    uint8_t d;
};

The latter case uses less RAM for the same data. So that's one thing to watch out for. Use sizeof() to check.

Also - compilers generally generate the most concise code when dealing with variables that match the natural word size of the CPU - so use something that makes life easy for the compiler.

BAD:
unsigned char i;
for (i = 0; i < 100; i++) {
    // blah
}

GOOD:
int i;
for (i = 0; i < 100; i++) {
    // blah
}

i is a stack variable, and is probably going to have the same stack usage in either case. The code may be smaller (.text) because it's all 32-bit ops. ARM does have 8/16/32-bit ops, so it may do an OK job with this - but the point is that picking a smaller type for a local variable is really a no-op as far as data and code size go.

int
signed integer - size is compiler-dependent; 32 bits on the ARM Cortex-M4.
int8_t, int16_t, int32_t
These are also signed integers, but the size is explicit.

min_value = -(1 << (n - 1))
max_value = (1 << (n-1)) - 1
where n is the size in bits.

e.g. in Python:

>>> n = 8
>>> min_value = -(1 << (n - 1))
>>> print(min_value)
-128
>>> max_value = (1 << (n - 1)) - 1
>>> print(max_value)
127

BTW - if you really want to save RAM you should look at the *.map file and see where it's being used.
That'll tell you where you need to focus your efforts.


#4

Thanks, guys! Between your two answers, I'm beginning to see the light. I think .map file reading is beyond me, but variable sorting I can do (-:


#5

[quote]
E.g.

BAD:
struct {
    uint32_t a;
    uint8_t b;
    uint32_t c;
    uint8_t d;
    uint32_t e;
};

GOOD:
struct {
    uint32_t a;
    uint32_t c;
    uint32_t e;
    uint8_t b;
    uint8_t d;
};
[/quote]

Aha, OK... this should save some memory in quite a few of my modules.