Zodiac Zubeda - 1 month ago
C++ Question

Counting from 0 to 99

In C++, which of the following solutions is more robust and reliable for counting from 0 to 99 and storing each iteration in variables for the tens and ones places? And how can either method be improved to make it as fast and resource-light as possible?

typedef int (*IntFunction) (int* _SegmentList);

int display1SegmentPinNums[] = {...pin numbers...};
int display2SegmentPinNums[] = {...other pin numbers...};

// Then I have some functions that display a number on the 7-segment displays. They each return the integer 1 and take a single parameter of type (int* _SegmentList), as per the typedef above

// An array of all the functions
IntFunction displayFunctions[10] = {display_0, display_1, display_2, display_3, display_4, display_5, display_6, display_7, display_8, display_9};

// Solution 1: the digits are the loop counters themselves
for (int tens = 0; tens < 10; tens++) {
    for (int ones = 0; ones < 10; ones++) {
        // ... display tens and ones ...
    }
}

// Solution 2: one loop; digits derived with modulo and division
// (braces are required here, otherwise only the first statement is inside the loop)
for (int i = 0; i < 100; i++) {
    int ones = i % 10;
    int tens = (i - ones) / 10;
    // ... display tens and ones ...
}


I've included a somewhat simplified version of my full code; hopefully it will help in getting a better answer. This is for an Arduino project, BTW, with 7-segment displays and an attempt to make a stopwatch.


Any decent optimizing compiler would reduce either loop, via constant propagation, loop unrolling, and dead-code elimination, to the result that tens and ones both contain 9 at the end.

Now, depending on your real loop body, and not taking clever compiler optimizations into account, you can analyze your code by counting the types of operations:

  • Solution 1: 11 initializations, 121 comparisons, 110 increments, 200 assignments
  • Solution 2: 1 initialization, 101 comparisons, 100 increments, 200 assignments, 200 divisive operations (modulo and division), 100 subtractions

Then it depends on CPU architecture and other factors:

  • If, hypothetically, every operation took exactly one CPU cycle, solution 1 would clearly win.
  • In reality it's much more complex: hardware features such as caching and branch prediction come into play, and modulo and division are typically far more expensive than increments and comparisons. So the best approach is certainly to measure with some benchmarking code.

Edit: about your code changes

If the functions perform side effects (displaying, etc.), then of course your loop body won't be optimized away. The rest of my comments remain true, because solution 1 and solution 2 both call the display functions the same number of times with the same arguments.