There's a great discussion on SO about why floating point should not be used for currency calculations, with a lovely example taken from Bloch's Effective Java.
I was playing around with this and noticed something weird. If the numbers are doubles:
System.out.println(1.03d - .42d);
// prints 0.6100000000000001
But if the numbers are floats:
System.out.println(1.03f - .42f);
// prints 0.61
Why doesn't this fail in the same way for floats? Both types are susceptible to the same problem, so what causes the difference in behaviour?
There are eight interesting values involved here (four for each type). Their exact values are:
Double values
1.03d: 1.0300000000000000266453525910037569701671600341796875
0.42d: 0.419999999999999984456877655247808434069156646728515625
Result: 0.6100000000000000976996261670137755572795867919921875
0.61d: 0.60999999999999998667732370449812151491641998291015625
Float values
1.03f: 1.0299999713897705078125
0.42f: 0.4199999868869781494140625
Result: 0.61000001430511474609375
0.61f: 0.61000001430511474609375
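If you want to verify these exact values yourself, here is one way to do it (a minimal sketch; the class name ExactValues is just for illustration). The BigDecimal(double) constructor translates its argument into the exact decimal representation of the binary floating-point value, with no rounding, and a float widens to double without loss, so the same trick works for both types:

import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the exact binary value of its
        // argument, so printing it reveals the full decimal expansion.
        System.out.println(new BigDecimal(1.03d));
        System.out.println(new BigDecimal(0.42d));
        System.out.println(new BigDecimal(1.03d - 0.42d));
        System.out.println(new BigDecimal(0.61d));

        // A float widens to double exactly, so the same constructor
        // shows the exact value of each float as well.
        System.out.println(new BigDecimal(1.03f));
        System.out.println(new BigDecimal(0.42f));
        System.out.println(new BigDecimal(1.03f - 0.42f));
        System.out.println(new BigDecimal(0.61f));
    }
}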
Note how the closest double to 1.03 is slightly more than 1.03, and the closest double to 0.42 is slightly less than 0.42... so the result of the subtraction differs from the precise (decimal) subtraction by the sum of those two errors. The closest float to 1.03 and the closest float to 0.42 are both less than the original values, so the errors partially cancel each other out. That's why the double result "feels" more inaccurate than the float result.

The float result happens to be the closest float to 0.61, so its string representation is just "0.61". For the double, there is a closer double to 0.61 than the subtraction result, so the string representation has to differentiate between the two. (Java's Float.toString and Double.toString print the shortest decimal string that uniquely identifies the binary value among its neighbours, which is why the double result needs all those extra digits.)
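You can see the consequence directly with an equality check (again a small sketch; the class name RoundTrip is just for illustration). The float subtraction rounds to exactly the same binary value as the literal 0.61f, while the double subtraction lands one representable value away from the literal 0.61d:

public class RoundTrip {
    public static void main(String[] args) {
        // The float subtraction and the float literal round to the
        // same binary value, so the comparison succeeds...
        System.out.println(1.03f - .42f == 0.61f); // true

        // ...while the double subtraction lands on a different value
        // than the double literal, so the comparison fails.
        System.out.println(1.03d - .42d == 0.61d); // false
    }
}

Of course, the float result isn't "more correct" here; its two rounding errors just happened to point in the same direction and cancel, which is exactly why neither type should be used for currency.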