Samuel Neff · asked 3 years ago
C# Question

Evaluate if two doubles are equal based on a given precision, not within a certain fixed tolerance

I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.

In NUnit we can compare with a fixed tolerance:

double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));


and that works fine for numbers below one, but as the numbers grow the tolerance really needs to scale so that we always compare the same number of digits of precision.

Specifically, this test fails:

double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));


and of course larger numbers fail with that tolerance:

double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));


What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?

Answer Source

From MSDN:

By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.

Let's assume 15, then.

So, we could say that we want the tolerance to be to the same degree.

How many precise figures do we have after the decimal point? That depends on the distance of the most significant digit from the decimal point, i.e. the magnitude. We can get this with a Log10.
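As a quick sketch of that idea (the `Magnitude` helper name is mine, nothing built in), `Math.Log10` plus a floor gives the position of the most significant digit relative to the decimal point:

```csharp
using System;

public class MagnitudeDemo
{
    // Distance of the most significant digit from the decimal point:
    // 1632.45... has 4 digits before the point, 0.389... has 0.
    public static int Magnitude(double value)
    {
        if (value == 0.0) return 0;
        // Math.Abs guards against negative inputs, where Log10 returns NaN.
        return 1 + (int)Math.Floor(Math.Log10(Math.Abs(value)));
    }

    public static void Main()
    {
        Console.WriteLine(Magnitude(1632.4587642911599d)); // 4
        Console.WriteLine(Magnitude(1.95346834136148d));   // 1
        Console.WriteLine(Magnitude(0.389842845321551d));  // 0
    }
}
```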

Then we need to divide 1 by 10 ^ precision to get a tolerance that matches the precision we want.

Now, you'll need to do more test cases than I have, but this seems to work:

double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import

// Log10(100) = 2, so to get the magnitude we add 1.
// Math.Abs guards against negative inputs, where Log10 would return NaN.
int magnitude = expected == 0.0 ? 0 : 1 + Convert.ToInt32(Math.Floor(Math.Log10(Math.Abs(expected))));
int precision = 15 - magnitude;

double tolerance = 1.0 / Math.Pow(10, precision);

Assert.That(actual, Is.EqualTo(expected).Within(tolerance));

It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to 16 - magnitude caused the tests to fail, and setting it to 14 - magnitude obviously caused them to pass, as the tolerance was greater.
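An alternative worth sketching (not from the original answer, and `AlmostEqual` is a hypothetical helper name) is a plain relative-difference check, which avoids Log10 entirely by scaling the tolerance by the larger of the two values:

```csharp
using System;

public class RelativeCompare
{
    // True when a and b agree to roughly `digits` significant decimal digits.
    // Scaling the threshold by the larger magnitude makes the comparison
    // relative rather than absolute.
    public static bool AlmostEqual(double a, double b, int digits)
    {
        if (a == b) return true; // exact matches, including 0 == 0
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return Math.Abs(a - b) <= scale * Math.Pow(10, -digits);
    }

    public static void Main()
    {
        // Agrees to at least 12 significant digits.
        Console.WriteLine(AlmostEqual(1632.4587642911599d, 1632.4587642911633d, 12));
    }
}
```

Note the result for a borderline digit count can differ slightly from the Log10 approach, since this scales by the value itself rather than by a whole power of ten.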
