Dschoni - 9 months ago

Python Question

In some environments, exact decimals (numerics, numbers...) are defined with

`scale`

`precision`

So for example, I have an environment where

`scale = 4`

`precision = 2`

How can I make these calls raise an error, because their precision exceeds that of the implementation?

`decimal.Decimal('1234.1')`

`decimal.Decimal('0.123')`

Answer

The closest I could find in the `decimal` module is the `Context.create_decimal_from_float` example, which uses the `Inexact` context trap:

```
>>> from decimal import Context, Inexact, ROUND_DOWN
>>> import math
>>> context = Context(prec=5, rounding=ROUND_DOWN)
>>> context.create_decimal_from_float(math.pi)
Decimal('3.1415')
>>> context = Context(prec=5, traps=[Inexact])
>>> context.create_decimal_from_float(math.pi)
Traceback (most recent call last):
...
Inexact: None
```

The `decimal` module doesn't seem to have the concept of scale. Its precision counts all significant digits, so it is basically your scale plus your precision combined.
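Building on the `Inexact` trap, here is a hedged sketch of how both limits could be enforced together, reading the question's `scale` as the total number of digits allowed and `precision` as the number of fractional digits (the reverse of SQL's `NUMERIC(precision, scale)` naming). `check_numeric` is a hypothetical helper, not part of the `decimal` module:

```python
from decimal import Decimal, Context, Inexact, InvalidOperation

def check_numeric(value, scale=4, precision=2):
    # Hypothetical helper: `scale` = total digits allowed,
    # `precision` = fractional digits allowed (the question's usage).
    ctx = Context(prec=scale, traps=[Inexact, InvalidOperation])
    # Quantizing to 10**-precision raises Inexact when fractional
    # digits would be lost, and InvalidOperation when the total digit
    # count of the result exceeds the context's precision.
    return ctx.quantize(Decimal(value), Decimal(1).scaleb(-precision))

check_numeric('12.34')    # fits a NUMERIC-style (4, 2) type
# check_numeric('1234.1') # raises InvalidOperation (too many digits)
# check_numeric('0.123')  # raises Inexact (too many fractional digits)
```

Both of the question's examples raise with this sketch, while values such as `'12.34'` pass through unchanged.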

Source (Stack Overflow)