I googled for this, and was surprised to find no guidelines, rules of thumb, styles, etc. When declaring a (signed or unsigned) integer in C, one can either just use whatever the processor defines for int, or specify the width explicitly (e.g. uint16_t, int8_t, uint32_t, etc.).
When doing desktop/dedicated C programs, I've tended very much towards the "just use the defaults" unless it was really important for me to specify width (e.g. "this is a 32 bit ID").
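For concreteness, here's a minimal sketch of the two styles I'm contrasting (the variable names and the 32-bit-ID scenario are just illustrative):

```c
#include <stdint.h>

/* "Just use the defaults": let the platform's int carry the value. */
int loop_counter = 0;

/* Explicit width: the value is defined to be exactly 32 bits
   (say, an ID in a file or wire format), so the type says so. */
uint32_t record_id = 0;

/* Explicit width on a microcontroller: keep small counters small. */
uint8_t retry_count = 0;
```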
Having done more microcontroller work lately (PIC18 and AVR), I've tended to size everything, just because you become so space conscious.
And now I'm working on some Pic32 code (no OS), where I find myself torn between the two extremes.
I'm curious what rubric (if any) people have formulated that helps them decide when to size their ints, and when to use the defaults? And why?