Daniel Rudy - 4 months ago 10
C Question

# Converting a big integer to decimal string

At the risk of having this question voted as a duplicate, or even closed, this question has come up.

## Background

In "normal" data types such as int, long long, etc..., to convert from the binary numeric value to a decimal string, you would do the following (in pseudo code):

```
Set length = 0.
Set divisor to the largest base-10 power the data type will hold.
Loop until divisor == 0:
    Divide the number by the divisor; place the quotient digit in the string at position length.
    Set the number to the remainder of that division.
    Increment length by 1.
    Divide the divisor by 10.
Print the string.   (digits are produced most-significant first, so no reversal is needed)
```
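As a concrete illustration (the function name and the fixed 32-bit assumption are mine, not part of the question), the pseudocode above might look like this for a plain `unsigned`; note the extra bookkeeping needed to suppress leading zeros, which the pseudocode glosses over:

```c
#include <string.h>

/* Illustrative sketch only: the fixed-divisor method, assuming a 32-bit
   unsigned, for which the largest power of ten that fits is 1000000000. */
void to_decimal(unsigned x, char *out) {
    unsigned divisor = 1000000000u; /* largest base-10 power in 32 bits */
    int started = 0, len = 0;
    while (divisor > 0) {
        unsigned digit = x / divisor; /* quotient is the next digit */
        x %= divisor;                 /* keep the remainder */
        if (digit != 0 || started || divisor == 1) {
            out[len++] = (char)('0' + digit);
            started = 1;              /* suppress leading zeros */
        }
        divisor /= 10;
    }
    out[len] = '\0';
}
```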

The actual implementation in almost any language is quite trivial.

## The Problem

The issue that I am encountering with the above method is that with big integer numbers (also known as arbitrary-precision arithmetic), there is no largest base-10 value to start with. So the question is: how do you initialize the divisor to the largest possible base-10 value if there is no way to know what that value is?

## What I Have Tried

Still trying to draft a solution.

## Research

Some of the links that I have found here include the following:

Convert a "big" Hex number (string format) to a decimal number (string format) without BigInteger Class

C: print a BigInteger in base 10

Fastest way to convert a BigInteger to a decimal (Base 10) string?

Convert a "big" Hex number (string format) to a decimal number (string format) without BigInteger Class

A Google search turned up other things, but nothing that specifically answers my question.

## Ideas

One method that I think might work is as follows (in pseudocode):

```
Define p_divisor as the previous divisor.
Set length = 0.
Set divisor = 1.
Loop:
    if divisor <= dividend then
        Set p_divisor = divisor.
        Set divisor = divisor * 10.
    else
        End loop.
Set divisor = p_divisor.   (divisor overshot the dividend; back up one step)
Loop:
    Divide the number by the divisor; place the quotient digit in the string at position length.
    Set the number to the remainder of that division.
    Increment length by 1.
    If divisor == 1 then end loop.
    Divide the divisor by 10.
Print the string.   (digits are produced most-significant first, so no reversal is needed)
```
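To make the idea concrete, here is a sketch using `unsigned long long` as a stand-in for a big integer (the function name is mine; a real big-integer version would use the library's multiply and divide, and would not have the overflow risk that `divisor * 10` has in a fixed-width type near its maximum):

```c
/* Sketch of the divisor-growing idea: grow the divisor until it exceeds
   the number, back up one step, then peel off digits most-significant
   first. Caveat: divisor * 10 can overflow a fixed-width type for values
   near its maximum; a big integer would not have that problem. */
void idea_to_decimal(unsigned long long n, char *out) {
    unsigned long long divisor = 1, p_divisor = 1;
    int len = 0;
    while (divisor <= n) {      /* grow until divisor exceeds n */
        p_divisor = divisor;
        divisor *= 10;
    }
    divisor = p_divisor;        /* overshot: back up one step */
    for (;;) {
        out[len++] = (char)('0' + (unsigned)(n / divisor)); /* next digit */
        n %= divisor;                                       /* keep remainder */
        if (divisor == 1) break;
        divisor /= 10;
    }
    out[len] = '\0';            /* digits are already in print order */
}
```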

Would this be the correct way? I have a big integer library up and working (including multiplication and division), so it wouldn't be that hard to pull this off. The big issue I see with this method is performance: you have to run a multiplication sequence to get the initial divisor, and then perform two divisions for each base-10 position, one for the actual digit and one to reduce the divisor.

One (fairly common) way to do this, whether for big integer or normal integer types, is to repeatedly divide the number by 10, saving the remainder as the next digit (starting with the least significant). Keep going until the number reaches zero. Since the first digit found is the least significant, you may need to reverse the string at the end, or build it in reverse as you go.

An example using ordinary `unsigned int` might look like:

```c
#include <stdio.h>
#include <limits.h> /* CHAR_BIT */

void printUInt(unsigned x) {
    char buf[(sizeof(x) * CHAR_BIT) / 3 + 2]; /* slightly oversized buffer */
    char *result = buf + sizeof(buf) - 1;     /* next output position, filled right to left */

    /* add digits to result, starting at
       the end (least significant digit) */

    *result = '\0'; /* terminating null */
    do {
        *--result = '0' + (x % 10); /* remainder gives the next digit */
        x /= 10;
    } while (x); /* keep going until x reaches zero */

    puts(result);
}
```

The process is pretty much the same for a big integer -- though it would be best to do the division and find the remainder in one step if you can.
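To show what the combined divide-and-remainder step might look like, here is a toy big-integer representation (base 2^32, least-significant limb first); the layout and the name `divmod10` are assumptions for illustration, not from any particular library:

```c
#include <stdint.h>

/* Divide a toy big integer (array of base-2^32 limbs, least-significant
   first) by 10 in place, returning the remainder -- quotient and
   remainder come out of a single pass over the limbs. */
uint32_t divmod10(uint32_t *limbs, int nlimbs) {
    uint64_t rem = 0;
    for (int i = nlimbs - 1; i >= 0; i--) {       /* most-significant limb first */
        uint64_t cur = (rem << 32) | limbs[i];    /* carry remainder down a limb */
        limbs[i] = (uint32_t)(cur / 10);
        rem = cur % 10;
    }
    return (uint32_t)rem;                         /* the next decimal digit */
}
```

Repeatedly calling this until every limb is zero yields the decimal digits least-significant first, exactly as in the `unsigned` example above.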

The above example builds the string from the end of the buffer (so `result` ends up pointing in the middle of the buffer somewhere), but you could also build it from the start and reverse it afterward.
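The build-from-the-start-and-reverse variant might look like this (an illustrative sketch; the function name is mine):

```c
/* Variant: append digits least-significant first from the start of the
   buffer, then reverse the string in place. */
void uintToDecimalReversed(unsigned x, char *buf) {
    int len = 0;
    do {
        buf[len++] = (char)('0' + x % 10); /* least-significant digit first */
        x /= 10;
    } while (x);
    buf[len] = '\0';
    for (int i = 0, j = len - 1; i < j; i++, j--) { /* reverse in place */
        char t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
}
```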

You can estimate the size needed for the output if you can determine the number of bits used in your original number (about 1 additional digit per 3 bits -- slightly less).
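Since each bit contributes about log10(2) ≈ 0.30103 decimal digits, `bits / 3 + 1` is a simple safe overestimate of the digit count (the helper name here is mine, for illustration):

```c
#include <stddef.h>

/* Upper bound on the decimal digits of an n-bit number: bits / 3 slightly
   overestimates bits * log10(2), so this is always enough (add one more
   for a terminating null if sizing a C string buffer). */
size_t decimalDigitsUpperBound(size_t bits) {
    return bits / 3 + 1;
}
```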

Source (Stack Overflow)