I thought of using a runtime test to determine the endianness, so that I could be sure of the behaviour of shifts, and I noticed a somewhat peculiar optimization by my compiler. It suggests that the endianness of the machine the code will run on is known at compile time.
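For context, the kind of runtime test I had in mind looks roughly like this (just a sketch; is_little_endian is a name I made up, and it simply inspects the first byte of an int through an unsigned char pointer):

#include <stdio.h>

/* sketch of a runtime endianness test: look at the first byte of a
   multi-byte integer; on a little-endian machine the least significant
   byte is stored first in memory */
static int is_little_endian(void)
{
    unsigned int x = 1;
    return *(unsigned char *)&x == 1;
}

int main(void)
{
    printf("%s endian\n", is_little_endian() ? "little" : "big");
    return 0;
}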
These are the two routines I timed. Routine 2, which uses const, was about 33% faster.
/* routine 1 */
int big_endian = 1 << 1;
for (register int i = 0; i < 1000000000; ++i) {
int value = big_endian ? 5 << 2 : 5 >> 2;
value = ~value;
}
/* routine 2 */
const int big_endian = 1 << 1;
for (register int i = 0; i < 1000000000; ++i) {
int value = big_endian ? 5 << 2 : 5 >> 2;
value = ~value;
}
The speed of routine 2 matches that of using a constant expression computable at compile time. How is this possible, if the behaviour of shifts depends on the processor?
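In other words, because big_endian is a constant known at compile time (1 << 1, which is 2 and therefore non-zero), I assume the compiler folds routine 2 into something roughly like this (my guess, not verified against the generated assembly):

/* my guess at what routine 2 is optimized into */
for (register int i = 0; i < 1000000000; ++i) {
    int value = 5 << 2;   /* the ternary is resolved at compile time, so this folds to 20 */
    value = ~value;       /* folds to -21; since value is never used, the body may be removed entirely */
}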
Also, on a side note, why do we call numbers that end with the least significant digit big endian, and those that end with the most significant digit little endian?
Edit:
Some people in the comments claim bitwise shifts have nothing to do with endianness. If that is true, does it mean that a number such as 3 is always stored as 00000011 (big endian) and never as 11000000 (little endian)? And if this is indeed the case, which actually does seem to make sense, wouldn't little endian behave strangely, since 10000000 00000000 00000000 (128) shifted to the left by one would become 00000000 00000001 00000000 (256)?
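To make my confusion concrete, this is the kind of test I would use to look at the bytes in memory before and after a shift (again only a sketch; print_bytes is my own helper, and the commented output assumes a 32-bit int on a little-endian machine):

#include <stdio.h>

/* print the bytes of an int in the order they appear in memory */
static void print_bytes(int value)
{
    const unsigned char *p = (const unsigned char *)&value;
    for (size_t i = 0; i < sizeof value; ++i)
        printf("%02x ", (unsigned)p[i]);
    printf("\n");
}

int main(void)
{
    int x = 128;
    print_bytes(x);        /* 80 00 00 00 on a little-endian machine */
    print_bytes(x << 1);   /* 00 01 00 00 on a little-endian machine: the value is 256 */
    return 0;
}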
Thank you in advance.