jammycrisp - 1 year ago 248

Python Question

I've written a library of functions to make my engineering homework easier, and use them in the python interpreter (kinda like a calculator). Some return matrices, some return floats.

The problem is that they return too many decimals. For example, when a result should be 0, I currently get an extremely small number instead (e.g. 6.123233995736766e-17).

I know how to format outputs individually, but that would require adding a formatter for every line I type in the interpreter. I'm using python 2.6.

**Is there a way to set the global output formatting (precision, etc...) for the session?**

*Note:* For scipy functions, I know I can use

`scipy.set_printoptions(precision = 4, suppress = True)`

but this doesn't seem to work for functions that don't use scipy.
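To illustrate the distinction: `scipy.set_printoptions` has historically been an alias for `numpy.set_printoptions`, which controls only how NumPy *arrays* are printed, not plain Python floats. A small sketch (using `numpy` directly, with a hypothetical rotation-matrix example):

```python
import numpy as np

# suppress=True prints near-zero entries as 0 instead of 1e-17 noise;
# precision=4 limits the printed decimals. This affects only array
# printing, not plain Python floats returned by ordinary functions.
np.set_printoptions(precision=4, suppress=True)

angle = np.pi / 2
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])

print(rot)            # the ~6.1e-17 entries display as 0
print(np.cos(angle))  # a bare float is unaffected: still ~6.12e-17
```

This is why the setting appears to "not work" for functions that return plain floats.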


Answer Source

What you are seeing is the fact that decimal floating point numbers can only be *approximated* by binary floating point. See Floating Point Arithmetic: Issues and Limitations.

You could put a module-level precision variable in your library and use it as the second argument of `round()` to round off the return value of each function in your module, but that is rather drastic.

If you use ipython (which I would recommend for interactive use, *much* better than the regular interpreter), you can use the 'magic' function `%precision`.