The MSDN page you linked to talks about the syntax of a floating-point literal in source code. It doesn't define how the number will be displayed by whatever tool you're using. If you print a floating-point number using either `printf` or `std::cout << ...`, the language standard specifies how it will be printed. If you print it in the debugger (which seems to be what you're doing), it will be formatted in whatever way the developers of the debugger decided on.
There are a number of different ways that a given floating-point number can be displayed: `1.0`, `1.`, `10.0E-001`, and `.1e+1` all mean exactly the same thing. A trailing `.` does not typically tell you anything about precision. My guess is that the developers of the debugger just used `1232432.` rather than `1232432.0` to save space.
If you're seeing the trailing `.` for some values, and a decimal number with no `.` at all for others, that sounds like an odd glitch (possibly a bug) in the debugger.
If you're wondering what the actual precision is, for IEEE 32-bit `float` (the format most computers use these days), the next representable numbers before and after `1232432.0` are `1232431.875` and `1232432.125`. (You'll get much better precision using `double` rather than `float`.)