Mikhail T. - 3 months ago
C# Question

Double comparison precision loss in C#: accuracy loss when adding and subtracting doubles

I just started learning C#. I plan to use it for heavy math simulations, including numerical solving. The problem is that I get precision loss when adding and subtracting doubles, as well as when comparing them. The code and what it returns (in comments) are below:

using System;

namespace ex3
{
    class Program
    {
        static void Main(string[] args)
        {
            double x = 1e-20, foo = 4.0;

            Console.WriteLine(x + foo);          // prints 4
            Console.WriteLine(x - foo);          // prints -4
            Console.WriteLine((x + foo) == foo); // prints True BUT THIS IS FALSE!!!
        }
    }
}


I would appreciate any help and clarifications!

What puzzles me is that (x + foo) == foo returns True.

Answer

Take a look at the MSDN reference for double: https://msdn.microsoft.com/en-AU/library/678hzkk9.aspx

It states that a double has a precision of 15 to 16 significant decimal digits.

But the difference in magnitude between 1e-20 and 4.0 spans about 20 decimal orders of magnitude, far more than those 15 to 16 digits. To add or subtract the two, the smaller operand has to be shifted to the exponent of the larger one; since 1e-20 lies entirely below the last digit that 4.0 can represent, it is rounded away and contributes nothing to the result.
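To see where that cutoff sits, here is a small sketch (not part of the original question, just an illustration) that keeps shrinking the added value by a factor of ten until the sum rounds back to exactly 4.0; with IEEE 754 doubles this happens around 1e-16:

using System;

class PrecisionDemo
{
    static void Main()
    {
        double foo = 4.0;

        // Shrink x by a factor of 10 each step and check whether adding it
        // still changes foo. Once x falls below half of foo's last representable
        // bit (about 4.4e-16 for 4.0), the sum rounds back to exactly 4.0.
        for (int exp = 1; exp <= 20; exp++)
        {
            double x = Math.Pow(10, -exp);
            Console.WriteLine($"4.0 + 1e-{exp} == 4.0 : {(foo + x) == foo}");
        }
    }
}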

So, as far as double is concerned, 4.0 + 1e-20 == 4.0 and 4.0 - 1e-20 == 4.0.
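If you need to treat values that differ only by rounding error as equal, the usual workaround is to compare against a tolerance instead of using ==. Below is a minimal sketch; the NearlyEqual helper and the 1e-9 tolerance are illustrative choices, not anything prescribed by the question:

using System;

class ToleranceCompare
{
    // Treat two doubles as equal when they differ by less than a chosen tolerance.
    // 1e-9 is an arbitrary example; pick a value that matches the scale of your problem.
    static bool NearlyEqual(double a, double b, double tolerance = 1e-9)
    {
        return Math.Abs(a - b) < tolerance;
    }

    static void Main()
    {
        double x = 1e-20, foo = 4.0;

        Console.WriteLine(NearlyEqual(x + foo, foo)); // True, and this time that is the intended meaning
    }
}

For quantities spread over very different magnitudes, a relative tolerance (scaling the allowed difference by the size of the operands) is usually a better fit than a fixed absolute one.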