ProfAWSM - 2 months ago
C Question

Why does C give different values to Python for this algorithm?

I wrote a short program in C to perform linear interpolation, which iterates to find a root of a function to a given number of decimal places. So far, for a function f(x):

long double f(long double x) {
    return (pow(x, 3) + 2 * x - 8);
}


The program fails to converge to a value at 1 decimal place. It updates variables a and b, between which the root of f(x) lies, until a and b both round to the same number at the given precision. Using long doubles and the above function, a debugger shows for the first two iterations:

a = 1.5555555555555556
a = 1.6444444444444444


though it should have been:

a = 1.5555555555555556
a = 1.653104925053533


The program fails to update the values after this. The equation for linear interpolation I'm using is a rearranged version of the mathematical one given here, and the code is a C version of a Python program I wrote. Why does the C implementation get different values despite implementing the same algorithm, and how can I fix it?

OK I'm still getting the hang of this, but hopefully below I have a Minimal, Complete, and Verifiable example:

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

long double a; long double b; long double c; // The values for interpolation
long double fofa; long double fofb; long double fofc; // The values f(a), f(b) and f(c)
const int dp = 1; // The number of decimal places to be accurate to
int i = 0; // Iteration counter

long double f(long double x) {
    return (pow(x, 3) + 2 * x - 8);
}

int main(void) {
    a = 1; b = 2;
    while (roundf(a * pow(10, dp)) / pow(10, dp) != roundf(b * pow(10, dp)) / pow(10, dp)) { // While a and b don't round to the same number...

        fofa = f(a); fofb = f(b); // Evaluate the function at a and b
        printf("So f(a) = %g, f(b) = %g\n", (double)fofa, (double)fofb); // Print the results

        c = (b * abs(fofa) + a * abs(fofb)) / (abs(fofb) + abs(fofa)); // Linear interpolation

        fofc = f(c);

        if (fofc < 0) {
            a = c;
        }
        else if (fofc == 0) {
            a = c;
            break;
        }
        else {
            b = c;
        }
        i++;
    }

    printf("The root is %g, to %d decimal places, after %d iterations.\n", (double)a, dp, i);
}

Answer

The function abs() (from <stdlib.h>) has the signature int abs(int); your long double arguments are implicitly converted to int, so the fractional parts are silently discarded and you are getting integer values from your calculations.

You should be using long double fabsl(long double); from <math.h>.

You should also be using powl() instead of pow() (long double vs double), and roundl() instead of roundf() (long double vs float).

Make sure you're using the correct types, in other words.


When you've fixed the type problems, you still have an issue with convergence. It would have helped if you had produced an MCVE (Minimal, Complete, Verifiable Example). However, this is an MCVE that I can deduce from your question:

#include <math.h>
#include <stdio.h>

static inline long double f(long double x)
{
    return(powl(x, 3) + 2 * x - 8);
}

int main(void)
{
    long double a = 1.0L;
    long double b = 2.0L;
    int dp = 6;

    while (roundl(a * powl(10, dp)) / powl(10, dp) != roundl(b * powl(10, dp)) / powl(10, dp))
    {
        long double fofa = f(a);
        long double fofb = f(b);
        long double c = (b * fabsl(fofa) + a * fabsl(fofb)) / (fabsl(fofb) + fabsl(fofa));
        long double fofc = f(c);
        printf("a = %+.10Lf, f(a) = %+.10Lf\n", a, fofa);
        printf("b = %+.10Lf, f(b) = %+.10Lf\n", b, fofb);
        printf("c = %+.10Lf, f(c) = %+.10Lf\n", c, fofc);
        putchar('\n');

        if (fofc < 0.0L)
        {
            a = c;
        }
        else if (fofc == 0.0L)
        {
            a = c;
            break;
        }
        else
        {
            b = c;
        }
    }
    printf("Result: a = %Lg\n", a);
    return 0;
}

The output I get from it is:

a = +1.0000000000, f(a) = -5.0000000000
b = +2.0000000000, f(b) = +4.0000000000
c = +1.5555555556, f(c) = -1.1248285322

a = +1.5555555556, f(a) = -1.1248285322
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6531049251, f(c) = -0.1762579238

a = +1.6531049251, f(a) = -0.1762579238
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6677455452, f(c) = -0.0258828049

a = +1.6677455452, f(a) = -0.0258828049
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6698816424, f(c) = -0.0037639074

a = +1.6698816424, f(a) = -0.0037639074
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6701919841, f(c) = -0.0005465735

a = +1.6701919841, f(a) = -0.0005465735
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702370440, f(c) = -0.0000793539

a = +1.6702370440, f(a) = -0.0000793539
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702435859, f(c) = -0.0000115206

a = +1.6702435859, f(a) = -0.0000115206
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702445357, f(c) = -0.0000016726

a = +1.6702445357, f(a) = -0.0000016726
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446735, f(c) = -0.0000002428

a = +1.6702446735, f(a) = -0.0000002428
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446936, f(c) = -0.0000000353

a = +1.6702446936, f(a) = -0.0000000353
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446965, f(c) = -0.0000000051

a = +1.6702446965, f(a) = -0.0000000051
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446969, f(c) = -0.0000000007

a = +1.6702446969, f(a) = -0.0000000007
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446970, f(c) = -0.0000000001

a = +1.6702446970, f(a) = -0.0000000001
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446970, f(c) = -0.0000000000

a = +1.6702446970, f(a) = -0.0000000000
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446970, f(c) = -0.0000000000

a = +1.6702446970, f(a) = -0.0000000000
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446970, f(c) = -0.0000000000

a = +1.6702446970, f(a) = -0.0000000000
b = +2.0000000000, f(b) = +4.0000000000
c = +1.6702446970, f(c) = -0.0000000000

The reason for the infinite loop is clear: f(c) stays negative throughout, so only a is ever updated. The false position method here converges on the root from one side only, b stays at 2, and a and b never round to the same value. You need to review your convergence criterion. It should probably compare fofc with 0.0 to within the given number of decimal places, or stop when successive estimates of c differ by less than half a unit in the last decimal place — something along those lines.