Chaitanya Vaishampayan - 1 year ago 40

C Question

```c
#include <stdio.h>

int main(void) {
    float k;
    scanf("%f", &k);
    printf("%f", k);
    return 0;
}
```

In this simple program, when I enter a number containing at most 8 digits, it is displayed correctly.

But if I exceed 8 digits, e.g. for the input

`123456789`

the output is

`123456792`

Why is this happening? The fun fact is that if I enter any number between

`123456789`

and

`123456796`

the output is always

`123456792`

Is it something related to the roughly 8 decimal digits of precision of floating-point numbers?


Answer

Floating-point types have limited precision. On your machine, `float` appears to be the 32-bit IEEE 754 single-precision type, which has 24 bits of precision (23 explicitly stored, plus 1 implied leading bit). That means integers greater than 2^24 = 16777216, which may require more than 24 significant bits, can't all be represented exactly by this type.

For example, the value 123456789 you used has a binary representation of:

```
111 0101 1011 1100 1101 0001 0101
```

This value takes up 27 significant bits, which is more than are available. So it is rounded to the nearest value that can be stored with 24 significant bits, which is 123456792. In binary:

```
111 0101 1011 1100 1101 0001 1000
```

For this value, the lower 3 bits are zero, so only 24 significant bits are needed: the leading 1 is implied and the following 23 bits are stored in the mantissa.
