csta - 7 days ago
Python Question

Floating point vs integer performance

Is distinguishing between ints and floats as important as it used to be?

If this is language dependent, I'm interested in Python.

Edit

I'm also interested in learning when (if ever) there was a period in which one would notice a performance difference by choosing a float instead of an integer.

Answer

If you're talking about performance: for most purposes, there is no performance difference. You can probably still measure one in purely number-crunching code compiled to machine code, and in slightly less math-intensive code on hardware that doesn't have a dedicated FPU (i.e. mostly embedded stuff). But in Python (and many other languages), any difference in the hardware's performance is dwarfed, by many orders of magnitude, by the interpretation and boxing overhead. When numbers are treated as pointers to heap-allocated objects a couple dozen bytes in size, with addition being a dynamically dispatched method call in response to an interpreted opcode, it doesn't matter whether the actual arithmetic takes one nanosecond or a hundred.
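
You can see both effects yourself. Here is a minimal sketch using timeit (exact numbers will vary by machine and Python version, but int and float addition typically land within noise of each other), plus sys.getsizeof to show the boxing overhead:

    import sys
    import timeit

    # Boxing: even small numbers are full heap objects in CPython.
    print(sys.getsizeof(1))    # ~28 bytes on 64-bit CPython
    print(sys.getsizeof(1.0))  # ~24 bytes

    # Time 10 million additions of boxed ints vs. boxed floats.
    # Interpreter and boxing overhead dominate; any ALU/FPU
    # difference is invisible at this scale.
    int_time = timeit.timeit("a + b", setup="a, b = 3, 5", number=10_000_000)
    float_time = timeit.timeit("a + b", setup="a, b = 3.0, 5.0", number=10_000_000)

    print(f"int addition:   {int_time:.3f} s")
    print(f"float addition: {float_time:.3f} s")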

Semantically, the difference between integers and (approximations of) reals is, and always will be, a mathematical fact rather than an artifact of the current state of computer engineering. For example, floats in general (setting aside implicit conversions of floats that happen to be exact integers) will never make sense as indices.
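
Python enforces this distinction at the language level: even a float with an exact integral value is rejected as a sequence index. A small illustration:

    xs = ["a", "b", "c"]
    print(xs[2])        # "c" -- ints are valid indices
    try:
        xs[2.0]         # rejected even though 2.0 == 2
    except TypeError as e:
        print(e)        # list indices must be integers or slices, not float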