
Enforce precision in decimal python

In some environments, exact decimals (NUMERIC, NUMBER, ...) are defined by a scale and a precision, with scale being the total number of significant digits and precision being the number of digits to the right of the decimal point. I want Python's decimal implementation to raise an error if the precision of the cast string is higher than the one defined by the environment.

So for example, I have an environment where scale = 4 and precision = 2. How can I get the following statements to raise an error, given that their precision exceeds that of the environment?

decimal.Decimal('1234.1')
decimal.Decimal('0.123')

Answer

The closest thing I could find in the decimal module is the Context.create_decimal_from_float example from the docs, which uses the Inexact context trap:

>>> import math
>>> from decimal import Context, Inexact, ROUND_DOWN
>>> context = Context(prec=5, rounding=ROUND_DOWN)
>>> context.create_decimal_from_float(math.pi)
Decimal('3.1415')
>>> context = Context(prec=5, traps=[Inexact])
>>> context.create_decimal_from_float(math.pi)
Traceback (most recent call last):
    ...
Inexact: None
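
Applied to the numbers from the question, a context with prec=4 (mapping the question's scale of 4 onto decimal's precision; this mapping is an assumption) traps the first case but silently accepts the second, because precision only counts total significant digits:

>>> from decimal import Context, Inexact
>>> context = Context(prec=4, traps=[Inexact])
>>> context.create_decimal('1234.1')
Traceback (most recent call last):
    ...
Inexact: None
>>> context.create_decimal('0.123')
Decimal('0.123')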

The decimal module doesn't seem to have a concept of scale; its precision counts all significant digits, on both sides of the decimal point, so it can't enforce the two limits separately.
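
If you need to enforce both limits, you can check them yourself before accepting a value. Below is a minimal sketch (the names check_decimal, SCALE, and PRECISION are made up for illustration, not part of the decimal API) that uses Decimal.as_tuple(), where the digits tuple gives the total significant digits and a negative exponent gives the count of fractional digits:

from decimal import Decimal

SCALE = 4       # max total significant digits (the question's "scale")
PRECISION = 2   # max digits right of the decimal point (the question's "precision")

def check_decimal(value):
    """Parse value into a Decimal and raise ValueError if it exceeds
    SCALE total digits or PRECISION fractional digits.
    (NaN/Infinity, whose exponents are strings, are not handled here.)"""
    d = Decimal(value)
    sign, digits, exponent = d.as_tuple()
    # e.g. Decimal('0.123').as_tuple() -> exponent -3 -> 3 fractional digits
    fractional = -exponent if exponent < 0 else 0
    if len(digits) > SCALE:
        raise ValueError(f'{value!r} has more than {SCALE} significant digits')
    if fractional > PRECISION:
        raise ValueError(f'{value!r} has more than {PRECISION} fractional digits')
    return d

print(check_decimal('12.34'))   # Decimal('12.34')
for bad in ('1234.1', '0.123'):
    try:
        check_decimal(bad)
    except ValueError as exc:
        print(exc)

This rejects both of the question's examples: '1234.1' fails the total-digit check and '0.123' fails the fractional-digit check, while values like '12.34' pass.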