Why does a float show an exact representation when declared?

I have read many times, in articles and on MSDN, that float (or double) doesn't have an exact representation of real-world integer or decimal values. Correct! That becomes visible when equality checks go wrong, or when simple addition or subtraction tests fail.
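For example, a check like the following fails, even though at a glance it looks like it should pass:

using System;

class Program
{
    static void Main()
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1f;              // each addition rounds to the nearest float
        }

        Console.WriteLine(sum == 1f); // False: the stored sum is actually 1.0000001...
    }
}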

It is also said that float doesn't have an exact representation of decimal values like 0.1. But then, if we declare a float in Visual Studio like float a = 0.1f;, how does it show exactly 0.1 while debugging? It should show something like 0.09999999... What link am I missing in understanding this?

[Screenshot: the Visual Studio debugger showing the value of a]

This is a layman's sort of question, or maybe I am still missing some concepts!

Jon Skeet

how does it show exactly 0.1 while debugging?

0.1 isn't the exact value of the float. It happens to be what you specified in the original assignment, but that's not the value of the float. I can see it's confusing :) I suspect the debugger is showing the shortest string representation that unambiguously parses back to the same value.

Try using:

float a = 0.0999999999f;

... and then I suspect in the debugger you'll see that as 0.1 as well.

So it's not that the debugger is displaying a "more exact" value - it's that it's displaying a "more generally convenient" representation.
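One quick way to see that both literals end up as the same stored value (a small sketch along those lines): compare them directly, and ask the formatter for more digits than the default display gives you. The "G9" format requests nine significant digits, which is enough to round-trip any float.

using System;

class Program
{
    static void Main()
    {
        float a = 0.0999999999f;
        float b = 0.1f;

        Console.WriteLine(a == b);            // True: both literals round to the same float
        Console.WriteLine(a.ToString("G9"));  // 0.100000001 - the nearest float to 0.1, shown to 9 digits
    }
}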

If you want to display the exact value stored in a float or double, I have some code you can use for that.
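That code isn't reproduced here, but as a rough sketch of the general idea (not the exact code referred to above, and ignoring NaN, infinity and trailing zeros): pull the IEEE 754 bits apart and expand mantissa * 2^exponent exactly using BigInteger.

using System;
using System.Numerics;

class ExactFloat
{
    // Sketch: reconstruct the exact decimal expansion of a float from its
    // IEEE 754 bits. NaN, infinity and trailing-zero trimming are ignored.
    static string ToExactString(float value)
    {
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
        bool negative = bits < 0;
        int exponent = (bits >> 23) & 0xFF;
        long mantissa = bits & 0x7FFFFF;

        if (exponent == 0)
            exponent++;              // subnormal: no implicit leading 1
        else
            mantissa |= 1L << 23;    // normal: restore the implicit leading 1

        exponent -= 150;             // 127 (exponent bias) + 23 (mantissa width)

        // value == mantissa * 2^exponent, which BigInteger can hold exactly
        if (exponent >= 0)
            return (negative ? "-" : "") + (new BigInteger(mantissa) << exponent);

        // Multiply by 10^scale so dividing by 2^scale is exact (10^n is divisible by 2^n)
        int scale = -exponent;
        BigInteger digits = new BigInteger(mantissa) * BigInteger.Pow(10, scale) >> scale;

        string s = digits.ToString().PadLeft(scale + 1, '0');
        return (negative ? "-" : "") +
               s.Substring(0, s.Length - scale) + "." + s.Substring(s.Length - scale);
    }

    static void Main()
    {
        Console.WriteLine(ToExactString(0.1f));
    }
}

For 0.1f this prints 0.100000001490116119384765625, which is the value the debugger is rounding to 0.1.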

