Alex Poca - 7 months ago

SQL Question

First: I am a MySQL newbie, so it looks like I am missing something important here.

In a MySQL database my float values range from units to billions.

I spent days trying to understand why all of them showed no more than 6 significant digits, with the less significant ones set to 0:

e.g.:

```sql
select `field` from `table`;
```

(instead of the actual value X, I get Y)

```
1         -> 1
12        -> 12
123       -> 123
1234      -> 1234
12345     -> 12345
123456    -> 123456
1234567   -> 1234570
12345678  -> 12345700
123456789 -> 123457000
```
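For reference, the single-precision values a `FLOAT` column actually stores for these inputs can be checked without a server. This is a sketch in Python (not SQL), using `struct` round-trips through IEEE-754 binary32 — the storage format `FLOAT` uses; `as_float32` is a helper name introduced here, not a MySQL function:

```python
import struct

def as_float32(x):
    # Round-trip through IEEE-754 binary32 -- the storage format of MySQL FLOAT.
    return struct.unpack('f', struct.pack('f', x))[0]

for v in [1, 12, 123, 1234, 12345, 123456, 1234567, 12345678, 123456789]:
    print(f'{v} -> {as_float32(v):.0f}')
# The last line prints: 123456789 -> 123456792
```

Note that the stored values retain more digits than the displayed ones in the table above — the truncation happens at display time, not at storage time.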

Only with a "trick" I found somewhere could I read the real values:

```sql
select `field`+0.0 from `table`;
```

My questions: what is the rationale behind this (IMHO) strange behavior, and where is it described in the documentation?

I find it completely non-intuitive that I don't see the real value (approximated according to the IEEE specification, of course) with a normal `select *`, but instead need the trick... What am I missing here?

Answer

A `FLOAT` has 24 bits, or log₁₀(2²⁴) (just over 7) digits, of significance: `12345678.9`₁₀ is stored in a `FLOAT` as `101111000110000101001111`₂ (which is `12345679`₁₀). When displaying as decimal, the client knows *for certain* that everything from the 8th digit onwards is *definitely* false precision (since a `FLOAT` was only capable of storing 7 decimal digits of precision). It therefore discards the remainder, leaving you with `12345680`.

A `DOUBLE` has 53 bits, or log₁₀(2⁵³) (almost 16) digits, of significance: `12345678.9`₁₀ is stored in a `DOUBLE` as `101111000110000101001110.11100110011001100110011001101`₂ (which is `12345678.90000000037`₁₀). When displaying as decimal, the client knows *for certain* that everything from the 16th digit onwards is *definitely* false precision (since a `DOUBLE` was only capable of storing 15 decimal digits of precision). It therefore discards the remainder, leaving you with `12345678.9000000`.
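The digit counts above can be verified outside MySQL. A sketch in Python, where `struct` round-trips emulate `FLOAT` (binary32) storage and Python's own floats are IEEE-754 doubles:

```python
import struct

# Store 12345678.9 in a 32-bit float, as a MySQL FLOAT column would:
stored = struct.unpack('f', struct.pack('f', 12345678.9))[0]
print(stored)             # 12345679.0 -- the nearest representable FLOAT value
print(round(stored, -1))  # 12345680.0 -- rounded to 7 significant digits, as displayed

# A 64-bit double (MySQL DOUBLE) holds ~15-16 significant digits;
# beyond that, only false precision remains:
print(f'{12345678.9:.11f}')  # 12345678.90000000037
```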

When adding `+0.0`, MySQL first casts the `FLOAT` up to a `DOUBLE`, and the widened value is then displayed at `DOUBLE` precision — revealing all the digits the `FLOAT` actually stored.
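A sketch of why the `+0.0` trick works, again in Python as a stand-in (Python floats are IEEE-754 doubles, so unpacking the 32-bit value already performs the exact `FLOAT`-to-`DOUBLE` widening that `+0.0` triggers in MySQL):

```python
import struct

# What a FLOAT column actually holds after inserting 12345678.9:
f = struct.unpack('f', struct.pack('f', 12345678.9))[0]

# The widening to double is exact, so at DOUBLE display precision
# (~15 significant digits) the full stored value becomes visible:
print(f'{f:.15g}')  # 12345679 -- the real stored value, not truncated to 6-7 digits
```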