kdbanman - 5 months ago
Ruby Question

Why does Ruby's BigDecimal show representation inaccuracy similar to Float?

Certain floating point numbers have inherent inaccuracy from binary floating point representation:

> puts "%.50f" % (0.5) # cleanly representable

> puts "%.50f" % (0.1) # not cleanly representable
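On a typical IEEE-754 double platform, the difference between the two is visible immediately:

```ruby
# 0.5 = 2**-1 has an exact binary representation, while 0.1 has no
# finite base-2 expansion, so the nearest double carries noise
# starting around the 18th decimal place.
clean = "%.50f" % 0.5
noisy = "%.50f" % 0.1

puts clean  # all zeros after the leading 0.5
puts noisy  # noise appears around the 18th decimal place
```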

This is nothing new. But why does Ruby's BigDecimal also show this behaviour?

> puts "%.50f" % ("0.1".to_d)

(I'm using the Rails shorthand "0.1".to_d instead of BigDecimal("0.1") for brevity only; this is not a Rails-specific question.)
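For what it's worth, the shorthand isn't Rails magic: String#to_d comes from the stdlib's bigdecimal/util, which Rails loads for you, and both spellings build the same value:

```ruby
require "bigdecimal"
require "bigdecimal/util" # provides String#to_d outside Rails

a = BigDecimal("0.1")
b = "0.1".to_d

puts a == b  # true: the shorthand and the plain constructor agree
```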

Question: Why is BigDecimal still showing errors on the order of 10^-17? I thought the purpose of BigDecimal was expressly to avoid inaccuracies like that?

At first I thought this was because I was converting an already inaccurate floating point value, and BigDecimal was just losslessly representing that inaccuracy. But I made sure I was using the string constructor (as in the snippet above), which should avoid the problem.


A bit more investigation shows that BigDecimal does still internally represent things cleanly. (Obvious, because otherwise this would be a huge bug in a very widely used library.) Here's an example with an operation that would surface any such error:

> puts "%.50f" % ("0.1".to_d * "10".to_d)

If the representation were lossy, that would show the same error as above, just shifted by an order of magnitude. What is going on here?
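You can confirm the arithmetic is exact before any display conversion happens; a minimal check using the stdlib bigdecimal:

```ruby
require "bigdecimal"
require "bigdecimal/util"

product = "0.1".to_d * "10".to_d

# BigDecimal multiplication here is exact: the product is exactly 1,
# and 1 converts to a Float without error, so even %.50f prints clean.
puts product == 1
puts "%.50f" % product
```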


The %.50f specifier takes a floating point value, so the BigDecimal has to be converted to a Float before it's rendered for display, and it is therefore subject to the same floating point noise you get in ordinary floating point values.
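This is easy to verify: because %f coerces its argument to Float first, the BigDecimal and the plain Float literal render identically.

```ruby
require "bigdecimal"

# Both arguments are converted to the same Float before formatting.
via_decimal = "%.50f" % BigDecimal("0.1")
via_float   = "%.50f" % 0.1

puts via_decimal == via_float  # true
```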

sprintf and friends, like the String#% method, do conversions automatically depending on the type specified in the placeholder.

To suppress that you'd have to use the .to_s method on the BigDecimal number directly. It accepts an optional format specifier (such as "F" for plain decimal notation), and the result can be fed into a %s placeholder in your other string.
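A sketch of that suggestion, assuming stdlib bigdecimal:

```ruby
require "bigdecimal"

d = BigDecimal("0.1")

# BigDecimal#to_s with the "F" specifier renders conventional decimal
# notation with no intermediate Float conversion.
exact = d.to_s("F")

puts exact                      # "0.1"
puts "the value is %s" % exact  # %s keeps the exact rendering
```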