0xff 0xff - 6 months ago 45
C Question

Initializing floating point variable with large literal

#include <stdio.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}
In the code above I'm initializing x with a literal that has more significant digits than an IEEE 754 double can represent. On my PC with gcc 4.9.2 it works fine: the literal is rounded to the nearest value that fits in a double. I'm wondering what happens behind the scenes (at the compiler level) in this case. Does this behaviour depend on the platform? Is it legal?


When you write double x = 0.1;, the decimal number you have written is rounded to the nearest double, because 0.1 itself is not exactly representable in binary. So what happens when you write 0.12345678901234567890123456789 is not fundamentally different.

The choice is implementation-defined, but most compilers use the nearest representable double in place of the constant. The C standard (C11 6.4.4.2) requires only that the result be either the nearest representable value, or one of the representable values immediately above or below it.