For the following code:
#include <stdio.h>
int main() {
    printf("%f", 5);
    printf("%d", 5.01);
}
The first statement prints 0.000000 and the second one prints a large number.
I thought that when printf sees the %f format it would pop a 4-byte argument from the stack. Then I looked up some references which say that a float argument is promoted to double when passed to printf, so it actually reads an 8-byte argument. Either way I expected it to print some unpredictable value, so how can it print 0.000000?
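For reference, the promotion part can be checked with a small test like this (a minimal sketch, assuming a typical platform with 4-byte float and 8-byte double):

#include <stdio.h>

int main(void) {
    float f = 5.01f;

    /* A float passed to a variadic function is promoted to double,
       so both calls hand printf the same 8-byte double. */
    printf("%f\n", f);
    printf("%f\n", (double)f);

    /* Sizes on a typical platform: 4 and 8. */
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
}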
The second one is also weird. The single-precision binary format of 5.01 should be 0 10000001 01000000101000111101100 (0x40A051EC), which is 1084248556 in decimal, but the statement prints 1889785610. Why does this happen?
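For comparison, this is how I would dump the bit patterns of 5.01 (a minimal sketch, assuming IEEE-754 with 32-bit float and 64-bit double, using memcpy-based type punning):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 5.01f;   /* single precision, expected bits 0x40A051EC */
    double d = 5.01;   /* the literal in the code above has type double */

    uint32_t fbits;
    uint64_t dbits;
    memcpy(&fbits, &f, sizeof fbits);   /* well-defined way to inspect the bits */
    memcpy(&dbits, &d, sizeof dbits);

    printf("float  5.01f bits: 0x%08" PRIX32 " = %" PRIu32 "\n", fbits, fbits);
    printf("double 5.01  bits: 0x%016" PRIX64 "\n", dbits);
}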