So I just found this bug in my code and I am wondering what rules I'm not understanding.
I have a float variable, logDiff, that currently contains a very small number. I want to see if it's bigger than a constant expression (80% of a 12th). I read years ago in Code Complete to leave calculated constants in their simplest form for readability, since the compiler (Xcode 4.6.3) will fold them at compile time anyway. So I have:
if ( logDiff > 1/12 * .8 ) {
I assumed the .8 and the fraction would all evaluate to the correct number, and it looks legit in the debugger:
(lldb) expr (float) 1/12 * .8
(double) $1 = 0.0666666686534882
(lldb) expr logDiff
(float) $2 = 0.000328541
But the comparison always wrongly evaluates to true, even when I mess with the enclosing parentheses:
(lldb) expr logDiff > 1/12 * .8
(bool) $4 = true
(lldb) expr logDiff > (1/12 * .8)
(bool) $5 = true
(lldb) expr logDiff > (float)(1/12 * .8)
(bool) $6 = true
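To rule out an lldb quirk, I also compiled a minimal standalone test, with logDiff hard-coded to the value from my debugger session, and it behaves the same way:

#include <stdio.h>

int main(void) {
    float logDiff = 0.000328541f;   /* the small value from my debugger session */
    if (logDiff > 1/12 * .8) {      /* the comparison exactly as written in my code */
        printf("true\n");           /* this branch is taken */
    } else {
        printf("false\n");
    }
    return 0;
}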
I found I have to explicitly spell at least one of the numbers as a floating-point literal to get the correct result:
(lldb) expr logDiff > (1.f/12.f * .8f)
(bool) $7 = false
(lldb) expr logDiff > (1/12.f * .8)
(bool) $8 = false
(lldb) expr logDiff > (1./12 * .8f)
(bool) $11 = false
(lldb) expr logDiff > (1./12 * .8)
(bool) $12 = false
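And if I print the constant expression by itself in a compiled program, the spelling as written comes out as 0, while the float-spelled variants come out as the value I expected:

#include <stdio.h>

int main(void) {
    printf("%g\n", 1/12 * .8);      /* prints 0 */
    printf("%g\n", 1.f/12.f * .8f); /* prints 0.0666667 */
    printf("%g\n", 1/12.f * .8);    /* prints 0.0666667 */
    printf("%g\n", 1./12 * .8);     /* prints 0.0666667 */
    return 0;
}

So the literal spellings clearly change what the expression evaluates to, not just its type.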
But I recently read a popular style guide that explicitly eschews these fancier numeric literals, apparently on the same assumption I had: that the compiler would be smarter than me and Do What I Mean.
Should I always spell my numeric constants like 1.f if they might need to be a float? That sounds superstitious. Can someone help me understand why and when it's necessary?
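For concreteness, the defensive spelling I'm asking about would look something like this, with a named constant (per Code Complete) and every literal suffixed; kLogDiffThreshold is just a name I made up for illustration:

static const float kLogDiffThreshold = 1.f/12.f * .8f; /* 80% of a 12th; name made up for illustration */

if (logDiff > kLogDiffThreshold) {
    /* ... */
}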