Differences between multiplication and division operations on floating-point numbers

Is there any difference in computation precision between these two cases:
1) x = y / 1000d;
2) x = y * 0.001d;

Edit: I shouldn't have added the C# tag; the question is purely about floating point. I don't want to know which is faster, I need to know which case will give me better precision.

Jon Skeet

No, they're not the same - at least not with C#, using the version I have on my machine (just standard .NET 4.5.1) on my processor - there are enough subtleties involved that I wouldn't like to claim it'll do the same on all machines, or with all languages. This may very well be a language-specific question after all.

Using my DoubleConverter class to show the exact value of a double, and after a few bits of trial and error, here's a C# program which at least on my machine shows a difference:

using System;

class Program
{
    static void Main(string[] args)
    {
        double input = 9;
        double x1 = input / 1000d;
        double x2 = input * 0.001d;

        Console.WriteLine(x1 == x2);
        Console.WriteLine(DoubleConverter.ToExactString(x1));
        Console.WriteLine(DoubleConverter.ToExactString(x2));
    }
}

Output:

False
0.00899999999999999931998839741709161899052560329437255859375
0.009000000000000001054711873393898713402450084686279296875
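As an aside, you can quantify how far apart those two results are by comparing their raw bit patterns: for finite doubles of the same sign, IEEE 754 bit patterns order like integers, so subtracting one from the other gives the distance in ulps (units in the last place). Here's a minimal sketch of that check (the class name is mine) - given the exact values above, it should print 1, i.e. the two results are adjacent doubles:

using System;

class UlpDistance
{
    static void Main()
    {
        double input = 9;
        double x1 = input / 1000d;
        double x2 = input * 0.001d;

        // For finite doubles of the same sign, the IEEE 754 bit patterns
        // order like integers, so their difference is the distance in ulps.
        long bits1 = BitConverter.DoubleToInt64Bits(x1);
        long bits2 = BitConverter.DoubleToInt64Bits(x2);
        Console.WriteLine(bits2 - bits1); // should print 1 for the values above
    }
}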

I can reproduce this in C with the Microsoft C compiler - apologies if it's horrendous C style, but I think it at least demonstrates the differences:

#include <stdio.h>

int main(void) {
    double input = 9;
    double x1 = input / 1000;
    double x2 = input * 0.001;
    printf("%s\n", x1 == x2 ? "Same" : "Not same");
    printf("%.18f\n", x1);
    printf("%.18f\n", x2);
    return 0;
}

Output:

Not same
0.008999999999999999
0.009000000000000001
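It's also worth noting which result is the better one. IEEE 754 requires each individual operation to be correctly rounded, so the single division should give the closest double to the true quotient 0.009 - the same value the compiler produces for the literal 0.009d - while the multiplication route rounds twice: once when parsing the 0.001 literal, and again when computing the product. A small C# sketch of that claim (hedged as before - this is what I'd expect on my machine, not a guarantee everywhere):

using System;

class CorrectlyRounded
{
    static void Main()
    {
        double input = 9;
        double x1 = input / 1000d;   // one correctly rounded division
        double x2 = input * 0.001d;  // round 0.001, then round the product

        // 0.009d is the closest double to 0.009, so x1 should match it
        // exactly, while x2 - per the exact values above - is one ulp away.
        Console.WriteLine(x1 == 0.009d); // expect True
        Console.WriteLine(x2 == 0.009d); // expect False
    }
}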

I haven't looked into the exact details, but it makes sense to me that there is a difference: dividing by 1000 and multiplying by "the nearest double to 0.001" aren't the same logical operation, because 0.001 can't be exactly represented as a double. The nearest double to 0.001 is actually:

0.001000000000000000020816681711721685132943093776702880859375

... so that's what you end up multiplying by. You're losing information early, and hoping that it corresponds to the same information that you lose otherwise by dividing by 1000. It looks like in some cases it isn't.
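If you're curious how DoubleConverter produces those exact strings, the underlying trick is straightforward: a finite double is mantissa * 2^exp, and when exp is negative, 2^exp = 5^-exp / 10^-exp, so multiplying the mantissa by 5^-exp yields the exact decimal digits. Here's a stripped-down sketch of that idea - it isn't the real DoubleConverter class, and it only handles finite positive doubles with a negative binary exponent, which is all we need here:

using System;
using System.Numerics;

class ExactPrinter
{
    static string ToExactString(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        int biasedExp = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFFL;
        int exp;
        if (biasedExp == 0)
        {
            exp = -1074;              // subnormal: no implicit leading bit
        }
        else
        {
            mantissa |= 1L << 52;     // normal: restore the implicit leading 1
            exp = biasedExp - 1075;   // unbias, absorbing the 52-bit fraction shift
        }

        // d = mantissa * 2^exp, and 2^exp = 5^-exp / 10^-exp for exp < 0,
        // so mantissa * 5^-exp gives the exact decimal digits.
        BigInteger digits = mantissa * BigInteger.Pow(5, -exp);
        string s = digits.ToString().PadLeft(-exp + 1, '0');
        return s.Insert(s.Length + exp, ".").TrimEnd('0');
    }

    static void Main()
    {
        // Should print the exact value of the nearest double to 0.001,
        // as quoted above.
        Console.WriteLine(ToExactString(0.001));
    }
}

As a sanity check on the explanation itself: if you divide by 1024 instead of 1000, the reciprocal (0.0009765625) is exactly representable as a double, so input / 1024d and input * 0.0009765625d should agree exactly for every input.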


See more on this question at Stackoverflow