rmkemker - 3 years ago 130

Python Question

I am extracting feature responses from two separate machine/deep-learning frameworks. I now have two matrices of shape NxF, where N is the number of samples and F is the number of features. I want to compare how similar the learned features are. I have tried several things, but the main idea is using correlation (I tried Pearson and Spearman) to correlate the feature responses into an FxF matrix. I then take the absolute value, the max across a single axis, and then the mean of those max values. I actually have several frameworks that I would like to compare, but I am getting very similar results for all of them. Has anybody done this? Does anyone else have any better suggestions? My code sample is below.

```
from scipy.stats import spearmanr
import numpy as np

def similarity(resp1, resp2):
    # rho is an (F1+F2) x (F1+F2) rank-correlation matrix over the
    # concatenated columns of resp1 and resp2
    rho, p = spearmanr(resp1, resp2)
    # keep only the cross-block: resp1 features vs resp2 features
    corr = rho[:resp1.shape[1], resp1.shape[1]:]
    sim_mtrx = np.abs(corr)
    # for each resp1 feature, take its best-matching resp2 feature
    feature_match = np.max(sim_mtrx, axis=1)
    return np.mean(feature_match)
```


Answer Source

> Has anybody done this? Does anyone else have any better suggestions? My code sample is below.

To be honest, this makes no sense. Why? Because there is no ordering of features in things like deep nets, so the comparison you are doing cannot be used to draw any reasonable conclusions. Your `N x F` matrix is probably built from your first layer, so each (column) vector of these matrices represents **a single neuron**. The trick is: the `i`-th neuron in one network (trained with one framework) can have nothing to do with the `i`-th neuron in the other one, while it might be identical to the `j`-th.

For example, consider a network trained on images with `F=3`. You might find that these neurons learned to detect horizontal lines (neuron 1), vertical lines (neuron 2), and maybe circles (neuron 3). Now you train again, either with a different framework or even with the same one but a different random seed. Even if this second network learns exactly the same things (detecting horizontal lines, vertical lines, and circles), but simply in different neurons (say horizontal-2, vertical-3, circle-1), your method will claim that these are completely different models, which is obviously false. The problem of "having similar representations" is a research direction on its own.

The minimum you have to do is to find the **best matching** between neurons in the two networks before applying the basic analysis you are proposing. You can do this by brute force (enumerate all possible one-to-one matchings and take the one claiming the biggest total similarity) or use something like the Hungarian algorithm to find a perfect matching efficiently.
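As a sketch of that matching step, assuming SciPy is available (the function name `matched_similarity` is illustrative, not from the question):

```
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

def matched_similarity(resp1, resp2):
    """Mean |Spearman rho| over the best one-to-one feature matching."""
    f = resp1.shape[1]
    rho, _ = spearmanr(resp1, resp2)
    sim_mtrx = np.abs(rho[:f, f:])  # F x F cross-similarities
    # linear_sum_assignment minimizes total cost, so negate the similarities
    rows, cols = linear_sum_assignment(-sim_mtrx)
    return sim_mtrx[rows, cols].mean()
```

Unlike the max-based metric, each feature can be claimed by only one partner here, so a single strong feature can no longer inflate the score for every row.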

The most important thing is to keep a **reference comparison**, to avoid problems like the above. So instead of training a single model per framework, train **at least 2** per framework. And now, instead of claiming "method A and B produce (dis)similar representations because representations generated by a single experiment with A and B are (dis)similar", you should check whether there is a statistically significant difference between the (dis)similarity of two runs (with different seeds) of the same algorithm and that of single runs of two different algorithms. In other words:

- you have 2 algorithms A, B (frameworks)
- you create representations A1, A2, B1, B2
- you test whether `mean(sim(A1, A2), sim(B1, B2)) != mean(sim(A1, B1), sim(A2, B2))` (while previously you were just checking if `sim(A1, B1)` is "big")
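A minimal sketch of that protocol (the names here are illustrative, and `sim` stands for whatever matching-aware similarity function you settle on):

```
import numpy as np

def within_vs_across(sim, A1, A2, B1, B2):
    """Compare same-algorithm similarity against cross-algorithm similarity."""
    within = np.mean([sim(A1, A2), sim(B1, B2)])  # two runs of the same algorithm
    across = np.mean([sim(A1, B1), sim(A2, B2)])  # one run of each algorithm
    return within, across
```

With only two runs per framework this is just a sanity check; with more runs you could feed the two groups of similarity scores into a significance test (e.g. a permutation test) to back the claim statistically.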

Just to show why the metric considered is wrong, let's use it on:

```
>>> x
array([[0, 3, 6],
       [1, 4, 7],
       [2, 5, 8]])
>>> y
array([[   6,    0,    0],
       [   7,   -1, -123],
       [   8,    0, 1000]])
>>> similarity(x, y)
1.0
```

You end up with just **a single** real match, and you do not care that 90% of the data is completely different; you still report maximum similarity.
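For contrast, here is a sketch of running a matching-based version of the same metric on that exact data (assuming SciPy's `linear_sum_assignment`): once each feature of `x` can claim at most one feature of `y`, the score drops, because only one of the three pairs is a genuine match.

```
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

x = np.array([[0, 3, 6], [1, 4, 7], [2, 5, 8]])
y = np.array([[6, 0, 0], [7, -1, -123], [8, 0, 1000]])

rho, _ = spearmanr(x, y)
sim_mtrx = np.abs(rho[:3, 3:])                 # 3 x 3: x features vs y features
rows, cols = linear_sum_assignment(-sim_mtrx)  # best one-to-one matching

print(np.max(sim_mtrx, axis=1).mean())  # max-based metric: 1.0
print(sim_mtrx[rows, cols].mean())      # matched metric: 0.5
```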
