Just started learning C#. I plan to use it for heavy math simulations, including numerical solving. The problem is that I get precision loss when adding and subtracting doubles:
static void Main(string[] args)
{
    double x = 1e-20, foo = 4.0;
    Console.WriteLine(x + foo);          // prints 4
    Console.WriteLine(x - foo);          // prints -4
    Console.WriteLine((x + foo) == foo); // prints True, BUT THIS SHOULD BE FALSE!!!
}
Why does (x + foo) == foo evaluate to true, and how can I keep the 1e-20 from being lost?
Take a look at the MSDN reference for double. It states that a double has a precision of 15 to 16 digits. But the difference, in terms of digits, between 1e-20 and 4.0 is 20 digits. The mere act of trying to add or subtract 1e-20 to or from 4.0 simply means that the 1e-20 is lost, because it cannot fit within the 15 to 16 digits of precision.
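You can see this directly by inspecting the spacing of representable doubles around 4.0. Here is a minimal sketch using Math.BitIncrement (available in .NET Core 3.0 and later), which returns the next representable double above its argument:

using System;

class UlpDemo
{
    static void Main()
    {
        double foo = 4.0;
        // The next representable double above 4.0 differs by one ULP,
        // roughly 8.9e-16 -- about four orders of magnitude larger than 1e-20.
        double gap = Math.BitIncrement(foo) - foo;
        Console.WriteLine(gap);         // ~8.88E-16
        Console.WriteLine(1e-20 < gap); // True: 1e-20 falls well below the gap
    }
}

Since 1e-20 is far smaller than the distance between 4.0 and its nearest representable neighbor, the sum rounds back to 4.0 exactly.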
So, as far as double is concerned, 4.0 + 1e-20 == 4.0 and 4.0 - 1e-20 == 4.0.
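If you genuinely need to keep terms that differ by 20 orders of magnitude, one option is C#'s decimal type, which carries 28 to 29 significant digits (at the cost of speed and a much smaller exponent range). A sketch of how it behaves here:

using System;

class DecimalDemo
{
    static void Main()
    {
        // decimal keeps 28-29 significant digits, enough to hold
        // 4.00000000000000000001 (21 significant digits) exactly.
        decimal x = 1e-20m, foo = 4.0m;
        Console.WriteLine(x + foo);          // 4.00000000000000000001
        Console.WriteLine((x + foo) == foo); // False
    }
}

For serious numerical simulation, though, the usual approach is not a wider type but algorithms that avoid adding quantities of wildly different magnitudes in the first place (for example, summing small terms before combining them with large ones).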