Is floating point arithmetic stable?

I know that floating point numbers have limited precision, and that the digits beyond that precision are not reliable.

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

For example, suppose we have two float numbers x and y. Can we assume that the result of x/y on machine 1 is exactly the same as the result on machine 2, i.e. that an == comparison would return true?

Jon Skeet

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

No, not necessarily.

In particular, in some situations the JIT is permitted to use a more accurate intermediate representation - e.g. 80 bits when your original data is 64 bits - whereas in other situations it won't. That can result in seeing different results when any of the following is true:

  • You have slightly different code, e.g. using a local variable instead of a field, which can change whether the value is stored in a register or not. (That's one relatively obvious example; there are other, much more subtle ones which can affect things, such as the existence of a try block in the method...) A sketch of this case follows the list.
  • You are executing on a different processor (I used to observe differences between AMD and Intel; there can be differences between different CPUs from the same manufacturer too)
  • You are executing with different optimization levels (e.g. under a debugger or not)
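To make the first point concrete, here is a minimal sketch (the field and local names are mine, and the output is deliberately environment-dependent, which is the point). On a JIT that keeps locals in 80-bit x87 registers, the comparison below can print False; a JIT that evaluates everything in 64 bits will print True.

    using System;

    class PrecisionDemo
    {
        // Writing to a field forces the value out to memory as 64 bits.
        static double field;

        static void Main()
        {
            double x = 0.1, y = 3.0;

            field = x * y;         // rounded to 64 bits when stored
            double local = x * y;  // may stay in an 80-bit register on some JITs

            // The same expression appears on both sides, yet the result is
            // not guaranteed: False on a JIT using extended-precision
            // intermediates, True where both sides are evaluated in 64 bits.
            Console.WriteLine(field == local);
        }
    }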

From the C# 5 specification section 4.1.6:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
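That last scenario can be sketched directly. Assuming the values below are computed at runtime (so nothing is constant-folded away), a strictly 64-bit evaluation of x * y overflows to infinity, while an 80-bit intermediate survives and the division brings the result back into the double range:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            double x = 1e300;
            double y = 1e10;
            double z = 1e10;

            // x * y is about 1e310, beyond double.MaxValue (~1.8e308).
            // Evaluated in 64 bits the product overflows to infinity, and
            // the division cannot undo that; evaluated in an extended
            // format the intermediate stays finite and x * y / z comes
            // back to 1e300.
            double result = x * y / z;

            Console.WriteLine(result);  // 1E+300 on some systems, Infinity on others
        }
    }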


See more on this question at Stack Overflow