Python Question

Enforce precision in Python's decimal

In some environments, exact decimals (numerics, numbers, ...) are defined with a pair of values, something like decimal(scale, precision), with scale being the total number of significant digits and precision being the number of digits right of the decimal point. I want to use Python's decimal implementation to raise an error if the precision of the cast string is higher than the one defined by the implementation.

So, for example, I have an environment where

scale = 4
precision = 2

How can I make a conversion raise an error when its precision exceeds that of the environment?
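A hypothetical illustration of the desired behaviour (the original commands are not preserved; the values below are assumptions):

from decimal import Decimal

Decimal('12.34')   # should be accepted: 4 significant digits, 2 after the decimal point
Decimal('0.123')   # should raise: 3 digits after the decimal point exceeds precision = 2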


Answer Source

The closest I could find in the decimal module is the Context.create_decimal_from_float() example from the docs, which uses the Inexact context trap:

>>> from decimal import Context, Inexact, ROUND_DOWN
>>> import math
>>> context = Context(prec=5, rounding=ROUND_DOWN)
>>> context.create_decimal_from_float(math.pi)
Decimal('3.1415')
>>> context = Context(prec=5, traps=[Inexact])
>>> context.create_decimal_from_float(math.pi)
Traceback (most recent call last):
  ...
Inexact: None
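Building on that trap, one way to enforce the fractional-digit limit specifically (my own sketch, not part of the original answer) is to quantize against a two-decimal-place template with Inexact trapped:

from decimal import Decimal, Context, Inexact

strict = Context(traps=[Inexact])
Decimal('12.34').quantize(Decimal('0.01'), context=strict)   # OK: already has at most two decimal places
Decimal('0.123').quantize(Decimal('0.01'), context=strict)   # raises decimal.Inexact: rounding would lose a digit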

The decimal module doesn't seem to have the concept of scale. Its precision is basically your scale plus your precision.
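If you need to enforce both limits, a small wrapper around Decimal.as_tuple() can do the counting itself. This is only a minimal sketch under my own assumptions (the function name, limits, and error type are illustrative, not from the answer):

from decimal import Decimal

SCALE = 4       # illustrative: total significant digits allowed ("scale" in the question's sense)
PRECISION = 2   # illustrative: digits allowed right of the decimal point ("precision")

def to_exact_decimal(value):
    # Hypothetical helper: convert, then validate against SCALE and PRECISION.
    d = Decimal(value)
    sign, digits, exponent = d.as_tuple()
    fractional = max(0, -exponent)          # digits right of the decimal point
    if len(digits) > SCALE or fractional > PRECISION:
        raise ValueError("%r exceeds scale=%d, precision=%d" % (value, SCALE, PRECISION))
    return d

to_exact_decimal('12.34')   # OK: 4 significant digits, 2 fractional digits
to_exact_decimal('0.123')   # raises ValueError: 3 fractional digits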
