double d = 0.0;
for (int i = 0; i < 10; i++)
{
    d = d + 0.1;
}
System.out.println(d); // prints 0.9999999999999999
This is an example I read somewhere about the "Principle of Least Surprise".
I was curious why this code prints 0.9999999999999999 instead of 1.0, and why, if I change the type of d to float, I get 1.0000001 instead. What is the reason for this behaviour?
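For reference, here is the float version of the loop I mentioned, plus a line using BigDecimal that shows the value the literal 0.1 is actually stored as (the class name is just a placeholder):

import java.math.BigDecimal;

public class LeastSurprise
{
    public static void main(String[] args)
    {
        // same loop as above, but accumulating float instead of double
        float f = 0.0f;
        for (int i = 0; i < 10; i++)
        {
            f = f + 0.1f;
        }
        System.out.println(f); // prints 1.0000001

        // 0.1 has no exact binary representation; this prints the value
        // the double literal 0.1 is actually stored as
        System.out.println(new BigDecimal(0.1)); // 0.1000000000000000055511...
    }
}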