Is there a flag in gcc/clang that specifies the precision of the intermediate floating-point calculation?
Suppose I have this C code:
double x = 3.1415926;
double y = 1.414;
double z = x * y;
It appears this is actually automatic: with -mfpmath=387 (the default on 32-bit x86), intermediate results are kept in the x87 FPU's 80-bit registers. You can override this with -mfpmath=sse (the default on x86-64) to get strict IEEE behavior, where every intermediate result is rounded to 64-bit double. So if extended intermediate precision is what you want, -mfpmath=387 already gives it to you on 32-bit targets.
Thus, by default, x87 arithmetic is not true 64/32-bit IEEE arithmetic: intermediates get extended precision from the x87 unit. However, any time a value is moved from the registers to a 64- or 32-bit IEEE storage location, the 80-bit value is rounded down to the appropriate width.
If your computation is complex enough, however, register spilling may occur: the x87 register stack is only 8 deep. When a spilled value is copied out to a 64-bit memory slot, it gets rounded at that point. To guard against this, you'll either need to declare your intermediates long double yourself and round manually at the end, or check the assembler output for explicit spills.