I am familiar with the fact that decimal fractions often don't convert neatly into binary, so an approximation is stored, and when that approximation is converted back into decimal it is a bit off from the original number. I believe 0.21 would be an example of such a fraction.
How and why is this program compiled in gcc showing 7.21 accurately?
And (the important second part to this question), why given that it's showing 7.21 accurately, is it showing 839.21 inaccurately?
I can understand why it'd show 0.21 inaccurately, or any number point 21 inaccurately, since it takes a lot of bits to put it in binary accurately, if it can be represented exactly at all. But I'd expect it to consistently show n.21 inaccurately.
printf("%.100f\n", 7.21);
printf("%.100f\n", 839.21);
produces 7.210000000000000000000...
839.2100000000000400000....
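For reference, a minimal sketch (assuming a C99 printf that supports %a) that dumps the exact stored values as hexadecimal fractions, so no decimal rounding is involved:

#include <stdio.h>

int main(void) {
    /* %a prints the stored binary value exactly, as a hexadecimal fraction,
       so it shows that both constants are approximations. */
    printf("%a\n", 7.21);    /* typically 0x1.cd70a3d70a3d7p+2 */
    printf("%a\n", 839.21);  /* typically 0x1.a39ae147ae148p+9 */
    return 0;
}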
gcc.exe (rubenvb-4.5.4) 4.5.4
Copyright (C) 2010 Free Software Foundation, Inc.
where gcc
C:\Perl64\site\bin\gcc.exe
How and why is this program compiled in gcc showing 7.21 accurately?
An older printf()'s handling of the least significant decimal digits is limited.
printf("%.100f\n", x);
need not print x accurately past a certain digit. There really is no C spec on this, yet decimal output should be at least correct for DBL_DIG digits (typically 15) - even that would be "weak" and problematic.
A better, and often acceptable, printf() will be correct for at least DBL_DECIMAL_DIG digits (typically 17). Without going to unlimited precision, getting the last digit correct can be troublesome. See the Table-maker's dilemma. Padding the "right" with zeros rather than the correct digits is not uncommon. This is what OP's code did: it went to the correct rounded 17 digits and then zero padded.
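As a rough sketch of those two limits (note DBL_DECIMAL_DIG is a C11 addition, so it may be absent from older headers such as the ones shipped with gcc 4.5):

#include <stdio.h>
#include <float.h>

int main(void) {
    /* DBL_DIG: decimal digits a double always preserves (typically 15). */
    printf("DBL_DIG         = %d\n", DBL_DIG);
#ifdef DBL_DECIMAL_DIG
    /* DBL_DECIMAL_DIG: digits needed to round-trip any double (typically 17). */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);
#endif
    return 0;
}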
A high quality printf() will print double x = 839.21; and x = 7.21; correctly for all digits. E.g.:
839.2100000000000363797880709171295166015625...  (correct value of the double)
839.210000000000000019984014443252817727625370025634765625...  (as long double)
vs OP's
839.2100000000000400000...
123.4567890123456789  // digit place

7.20999999999999996447286321199499070644378662109375...  (correct value of the double)
7.210000000000000000034694469519536141888238489627838134765625000...  (as long double)
vs OP's
7.210000000000000000000...
OP's printf() is only good up to 16 or so digits. The output of 7.210000000000000000000... only looks great because the printf() went out to a point and then padded with zeros. See @Eric Postpischil's comments; it comes down to luck.
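A minimal sketch of that luck, assuming a printf() that is correct to 17 significant digits: rounding both values to DBL_DECIMAL_DIG significant digits with %.17g shows where the zero-padded output comes from.

#include <stdio.h>

int main(void) {
    /* The double nearest 7.21 rounds back to "7.21" at 17 significant digits,
       so padding those digits with zeros happens to reproduce the source text.
       The double nearest 839.21 already exposes its error by the 16th digit. */
    printf("%.17g\n", 7.21);    /* typically: 7.21               */
    printf("%.17g\n", 839.21);  /* typically: 839.21000000000004 */
    return 0;
}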
Note: Some optimizations will perform the task with long double math (research FLT_EVAL_METHOD), so the long double results, as x86 extended precision, are posted above too.
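A minimal sketch for checking this, assuming a C99 <float.h>; note that printing long double with %Lf is known to be unreliable with MinGW's default msvcrt runtime:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* FLT_EVAL_METHOD: 0 = evaluate in the declared type,
       2 = evaluate floating expressions in long double. */
    printf("FLT_EVAL_METHOD = %d\n", (int) FLT_EVAL_METHOD);
    /* The same literals as long double constants (x86 extended precision);
       compare with the long double values quoted above. */
    printf("%.60Lf\n", 7.21L);
    printf("%.60Lf\n", 839.21L);
    return 0;
}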
Added by Barlop
Some additional points from Eric Postpischil
OP's printf is not exact for 7.21. OP's printf was passed exactly 7.20999999999999996447286321199499070644378662109375, but it printed 7.210…, so it is wrong. It just happens to be wrong in a way that the error in OP's printf exactly cancelled out the error that occurred when 7.21 was rounded to binary floating-point.
Eric is correct: printf("%.100f\n", 7.20999999999999996447286321199499070644378662109375); prints 7.210000000...
Eric elaborated on how he knows that it was 7.20999999999999996447286321199499070644378662109375 that was sent to printf, and not some other long number close to it.
Eric commented that he knows this by using a good C implementation that converts decimal numerals in source code to binary floating-point properly and that prints properly (Apple's developer tools on macOS), along with some Maple (math software) code he wrote to help him with floating-point arithmetic. In the absence of those, he might have had to work it out the long way, just like in elementary school but with more digits.
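In the absence of such tools, one cross-check can be done in C itself (a sketch of the idea, not Eric's actual method): if that long decimal literal really is the exact value of the double nearest 7.21, both literals must convert to the very same double.

#include <stdio.h>

int main(void) {
    double a = 7.21;
    double b = 7.20999999999999996447286321199499070644378662109375;
    /* Both decimal literals should round to the identical double,
       so the comparison is expected to report "yes". */
    printf("same double: %s\n", a == b ? "yes" : "no");
    return 0;
}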