C Question

Execution time in Miracl library

In the bmark program, the execution time is calculated as follows:

For example, for EC point multiplication:

#define MIN_TIME 10.0
#define MIN_ITERS 20

iterations = 0;
start = clock();
do {
    ecurve_mult();
    iterations++;
    elapsed = (clock() - start) / (double)CLOCKS_PER_SEC;
} while (elapsed < MIN_TIME || iterations < MIN_ITERS);
elapsed = 1000.0 * elapsed / iterations;
printf("ER - %8d iterations", iterations);
printf(" %8.2lf ms per iteration\n", elapsed);


The question is: why not simply use:

start = clock();
ecurve_mult();
elapsed = (clock() - start) / (double)CLOCKS_PER_SEC;
printf("%f\n", elapsed * 1000.0);


In other words, what is the purpose of using MIN_TIME and MIN_ITERS?

NB: the two snippets give different outputs.

Answer

The code executes ecurve_mult() enough times to compensate for the limited precision of the time measurement. The loop makes sure it runs at least MIN_ITERS (20) times and for at least MIN_TIME (10) seconds.

Some machines might be so fast that 20 iterations finish in far less time than the clock can measure accurately, so on those the loop keeps going until at least 10 seconds have elapsed regardless of how quickly the first 20 iterations completed.

If we time it only once, the measurement is not going to be very accurate because the clock's resolution is too coarse (e.g. clock() ticks every 1 ms while the benchmark runs in 500 µs, so a single measurement would read 0 ms or 1 ms). Averaging over many iterations recovers sub-tick precision.