Josh Bleecher Snyder - 8 months ago

Python Question

I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.

This code works, and illustrates my goals:

```
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
```

For example:

```
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
        [1, 1],
        [3, 3]]), array([2, 1, 3]))
```

However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large.

Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
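(As an aside, the explicit loop above can be vectorized with fancy indexing. This is a sketch, not part of the original question; the function name is mine. It is typically much faster than the loop, though it still returns copies rather than shuffling in place.)

```
import numpy

def shuffle_in_unison_vectorized(a, b):
    # Same idea as the loop version: draw one permutation and
    # apply it to both arrays, so rows stay paired up.
    assert len(a) == len(b)
    permutation = numpy.random.permutation(len(a))
    return a[permutation], b[permutation]
```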

One other thought I had was this:

```
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
```

This works... but it's a little scary, as I see little guarantee it will continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.

Answer

Your "scary" solution does not appear scary to me. Calling `shuffle()` for two sequences of the same length results in the same number of calls to the random number generator, and these are the only "random" elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to `shuffle()`, so the whole algorithm will generate the same permutation.
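As a small sanity check (my addition, not part of the original answer), the state-reset trick can be verified directly with arrays whose correspondence is easy to test:

```
import numpy

# Two arrays where corresponding elements differ by exactly 10.
a = numpy.arange(10)
b = numpy.arange(10, 20)

rng_state = numpy.random.get_state()
numpy.random.shuffle(a)
numpy.random.set_state(rng_state)  # rewind the RNG
numpy.random.shuffle(b)            # same permutation as the first shuffle

# Corresponding elements still line up: b[i] == a[i] + 10 everywhere.
assert (b - a == 10).all()
```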

If you don't like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.

Example: Let's assume the arrays `a` and `b` look like this:

```
a = numpy.array([[[ 0.,  1.,  2.],
                  [ 3.,  4.,  5.]],
                 [[ 6.,  7.,  8.],
                  [ 9., 10., 11.]],
                 [[12., 13., 14.],
                  [15., 16., 17.]]])
b = numpy.array([[0., 1.],
                 [2., 3.],
                 [4., 5.]])
```

We can now construct a single array containing all the data:

```
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[ 0., 1., 2., 3., 4., 5., 0., 1.],
# [ 6., 7., 8., 9., 10., 11., 2., 3.],
# [ 12., 13., 14., 15., 16., 17., 4., 5.]])
```

Now we create views simulating the original `a` and `b`:

```
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
```

The data of `a2` and `b2` is shared with `c`. To shuffle both arrays simultaneously, use `numpy.random.shuffle(c)`.
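Putting the pieces together, here is a minimal end-to-end sketch of the view-based approach (the array values are my own choice, picked so that the pairing is easy to check: row `i` of `a` starts at `6*i` and row `i` of `b` starts at `2*i`):

```
import numpy

a = numpy.arange(18, dtype=float).reshape(3, 2, 3)
b = numpy.arange(6, dtype=float).reshape(3, 2)

# Pack both arrays into one 2-D array, one row per sample.
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]

# Views sharing memory with c; reshaping a contiguous row slice
# stays a view, so no data is copied here.
a2 = c[:, :a.size // len(a)].reshape(a.shape)
b2 = c[:, a.size // len(a):].reshape(b.shape)

numpy.random.shuffle(c)  # shuffles a2 and b2 in unison, in place

# Rows are still paired: a2[i, 0, 0] == 3 * b2[i, 0] for every i.
```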

In production code, you would of course try to avoid creating the original `a` and `b` at all, and instead create `c`, `a2` and `b2` right away.

This solution could be adapted to the case where `a` and `b` have different dtypes.
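One way to do that adaptation (a sketch of my own, not spelled out in the original answer) is a structured array with one field per original array; each field has its own dtype, field access returns views, and shuffling the structured array still shuffles both fields in unison:

```
import numpy

a = numpy.arange(18, dtype=numpy.float64).reshape(3, 2, 3)  # float data
b = numpy.arange(3, dtype=numpy.int64)                      # int labels

# One record per sample, with a field for each original array.
c = numpy.empty(len(a), dtype=[('a', a.dtype, a.shape[1:]),
                               ('b', b.dtype)])
c['a'] = a
c['b'] = b

a2, b2 = c['a'], c['b']  # views into c, with the original dtypes

numpy.random.shuffle(c)  # shuffles both fields in unison, in place
```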

Source: Stack Overflow