Enmanuel - 3 months ago
C Question

Type selection for literal numeric values in C

I'm wondering about this: when I try to assign an integer value to an int variable (16-bit compiler, 2 bytes for integers), let's say:

int a;

a=40000;


Since that value can't be represented in the range of the type, it will be truncated. But what I'm seeing is that the resulting value in a is the bit pattern for -25000 (or some close number), which suggests that the binary representation the compiler chose for decimal 40000 was the unsigned-integer representation. And that raises my question: how does the compiler choose the type for this literal expression?

I'm guessing it uses the type capable of holding the value that needs the least storage space.

Answer

Behaviour here differs between C89 and C99.

In C89, a decimal integer literal takes the first of these types in which it can be represented:

int, long, unsigned long

In C99, a decimal integer literal takes the first of these types in which it can be represented:

int, long, long long

For your particular code snippet it makes no difference, since 40000 is guaranteed to fit in a long. The literal therefore has type long, and it is the subsequent conversion of that long value to a 16-bit int (which cannot represent it) that produces the implementation-defined result you observed. There are, however, a few significant differences between C89 and C99 literals.

Some of those consequences are described here:

http://www.hardtoc.com/archives/119