In most cases, it's simply a matter of saying what you mean.
For example, you can certainly write:
#include <math.h>
...
const double sqrt_2 = sqrt(2);
and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.
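
A minimal sketch of that equivalence (this program is my own illustration of the point, not part of any standard example):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* sqrt's prototype declares a double parameter, so the int
       literal 2 is implicitly converted to double at the call. */
    const double from_int    = sqrt(2);
    const double from_double = sqrt(2.0);

    /* Both calls receive the same double argument and yield
       the same result. */
    printf("sqrt(2)   = %f\n", from_int);
    printf("sqrt(2.0) = %f\n", from_double);
    return 0;
}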
But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.
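
For example, with a hypothetical non-standard function (the name set_volume is invented purely for illustration):

#include <stdio.h>

/* Hypothetical non-standard function taking a double parameter. */
static void set_volume(double level)
{
    printf("volume set to %f\n", level);
}

int main(void)
{
    set_volume(2);     /* legal: 2 is implicitly converted to double */
    set_volume(2.0);   /* clearer: the argument is visibly floating-point */
    return 0;
}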
And with sqrt you're able to use an integer literal only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal, and mixing sqrt(2) with sqrt(2.5) in the same code would be needlessly inconsistent.
My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)