If I have the following code (this was written in .NET)
double i = 0.1 + 0.1 + 0.1;
Why doesn't i
equal 0.3
?
Any ideas?
You need to read up on floating point numbers. Many decimal numbers don't have an exact representation in binary so they won't be an exact match.
That's why in comparisons, you tend to see:
if (Math.Abs(a - b) < epsilon) { ...
where epsilon is a small value such as 0.00000001, chosen according to the accuracy required.
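The failed equality and the tolerance-based comparison can be sketched like this (shown in Java for a runnable illustration; doubles behave identically in .NET, where you would use `Math.Abs` instead of `Math.abs`):

```java
public class FloatCompare {
    public static void main(String[] args) {
        double i = 0.1 + 0.1 + 0.1;

        // Direct equality fails: i is actually 0.30000000000000004
        System.out.println(i == 0.3); // prints "false"

        // Compare within a small tolerance instead
        double epsilon = 0.00000001;
        System.out.println(Math.abs(i - 0.3) < epsilon); // prints "true"
    }
}
```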
Double is a 64-bit binary floating-point data type, so it stores many decimal fractions only approximately. If you need exact decimal values, use the Decimal data type, which stores numbers in base 10 (as a scaled integer), so values like 0.1 are represented exactly.
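To see the contrast between binary and base-10 arithmetic: this sketch uses Java's BigDecimal as a stand-in for .NET's Decimal (an assumption for illustration — the two types differ in range and API, but both do exact base-10 arithmetic here):

```java
import java.math.BigDecimal;

public class DecimalExact {
    public static void main(String[] args) {
        // Binary double: 0.1 has no finite base-2 representation
        double d = 0.1 + 0.1 + 0.1;
        System.out.println(d == 0.3); // prints "false"

        // Base-10 arithmetic is exact for these values
        BigDecimal tenth = new BigDecimal("0.1");
        BigDecimal sum = tenth.add(tenth).add(tenth);
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // prints "true"
    }
}
```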
The precision of floating point arithmetic cannot be guaranteed.
Equality comparisons with floating point numbers are generally avoided because of this representation issue. Instead, we normally compare the difference between two floats and, if it is smaller than a certain value (for example 0.0000001), consider them equal.
Double calculation is not exact. You have two solutions: use the Decimal type, which is exact, or compare doubles within a small tolerance. Have a look at this thread:
Why do I see a double variable initialized to some value like 21.4 as 21.399999618530273?