Rockybilly - 1 year ago 130

Python Question

I wanted to use `NumPy` to calculate Fibonacci numbers by raising the matrix `[[1, 1], [1, 0]]` to the `n`th power. I wrote some very simple code, but after increasing `n` the results go wrong:

```
import numpy

def fib(n):
    return (numpy.matrix("1 1; 1 0")**n).item(1)

print fib(90)
# Gives -1581614984
```

What could be the reason for this?

Note: I also tried `linalg.matrix_power` with the matrix given as a `list`; `linalg.matrix_power` returned `-5.168070885485832e+19`, and passing that through `int()` gave `-51680708854858323072L` (note the `L` suffix).


Answer Source

The reason you see negative values appearing is because NumPy has defaulted to using the `np.int32` dtype for your matrix.

The maximum positive integer this dtype can represent is 2^31 - 1, which is 2147483647. Unfortunately, this is less than the 47th Fibonacci number, 2971215073. The resulting overflow causes the negative number to appear:

```
>>> np.int32(2971215073)
-1323752223
```
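That wrapped value is exactly what two's-complement arithmetic produces: the true value reduced modulo 2^32 into the signed 32-bit range. A quick check in plain Python (a sketch of the wraparound, not NumPy itself):

```python
# Simulate int32 two's-complement wraparound in plain Python:
# reduce modulo 2**32, then shift into the signed range [-2**31, 2**31 - 1].
true_value = 2971215073                        # the 47th Fibonacci number
wrapped = (true_value + 2**31) % 2**32 - 2**31
print(wrapped)                                 # -1323752223
```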

Using a bigger integer type (like `np.int64`) would fix this, but only temporarily: you'd still run into problems if you kept on asking for larger and larger Fibonacci numbers.
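For instance (a sketch using `np.linalg.matrix_power`, which the question also mentions, with an explicit `np.int64` dtype), the 93rd Fibonacci number is the first to exceed 2^63 - 1, so the overflow merely moves further out:

```python
import numpy as np

# Fibonacci via matrix powers, but with fixed-width 64-bit integers.
M = np.array([[1, 1], [1, 0]], dtype=np.int64)

def fib64(n):
    return np.linalg.matrix_power(M, n)[0, 1]

print(fib64(92))  # 7540113804746346429 -- the last Fibonacci number that fits
print(fib64(93))  # wraps around silently: the true value exceeds 2**63 - 1
```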

The only sure fix is to use an unlimited-size integer type, such as Python's `int` type. To do this, modify your matrix to be of `object` dtype:

```
import numpy as np

def fib_2(n):
    # object dtype stores plain Python ints, which never overflow
    return (np.matrix("1 1; 1 0", dtype=object)**n).item(1)
```
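A quick check (the expected values below are the standard Fibonacci numbers F(90) and F(47)):

```python
import numpy as np

def fib_2(n):
    # object dtype: the entries are Python ints, which grow without overflow
    return (np.matrix("1 1; 1 0", dtype=object)**n).item(1)

print(fib_2(90))  # 2880067194370816120
print(fib_2(47))  # 2971215073 -- the value that overflowed int32 above
```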

The `object` dtype allows a matrix or array to hold any mix of native Python types. Essentially, instead of holding machine types, the matrix now behaves like a Python `list` and simply consists of pointers to integer objects in memory. Python integers are then used in the Fibonacci calculation, so overflow is no longer an issue.

This flexibility comes at the cost of decreased performance: NumPy's speed originates from direct storage of machine integer/float types.
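One way to see the difference (a small sketch; the exact default integer type varies by platform, so it is hedged as "a NumPy scalar" below):

```python
import numpy as np

fast = np.array([[1, 1], [1, 0]])                    # machine integers
flexible = np.array([[1, 1], [1, 0]], dtype=object)  # pointers to Python ints

print(type(fast[0, 0]))      # a fixed-width NumPy scalar, e.g. numpy.int64
print(type(flexible[0, 0]))  # <class 'int'> -- an ordinary Python object
```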
