Majid Azimi - 1 month ago
Python Question

Implementation of the paper "Deep inside convolutional networks: Visualising image classification models and saliency maps", Simonyan et al.

I am visualizing gradient data in convolutional neural networks with the Caffe framework. Having already visualized the gradients with respect to all classes, I would now like to take the gradient with respect to one specific class. In the deploy.prototxt file of the "bvlc_reference_caffenet" model, I have set:

force_backward: true


and commented out the last layer:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}


which comes right after:

layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 1000
  }
}


and replaced it with:

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "prob"
}


In the Python code, calling:

out = net.forward()


propagates the input forward to the last layer, and then calling:

backout = net.backward()


produces the gradient visualization. First, is this what is called a saliency map? Second, if I want to run the backward pass with respect to a specific class, e.g. class 281 for a cat, what should I do?
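(For reference, Simonyan et al. define the class saliency map as the maximum over colour channels of the absolute value of the input gradient. A minimal numpy sketch, with a random array standing in for the real `net.blobs['data'].diff`:)

```python
import numpy as np

def saliency_map(grad):
    """Saliency map per Simonyan et al.: channel-wise max of |gradient|.

    grad: array of shape (C, H, W), the gradient w.r.t. the input image.
    Returns an (H, W) array of non-negative values.
    """
    return np.abs(grad).max(axis=0)

# Toy example: a random array standing in for net.blobs['data'].diff[0].
grad = np.random.randn(3, 227, 227).astype(np.float32)
sal = saliency_map(grad)  # shape (227, 227), suitable for plt.imshow
```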

Thanks in advance for your guidance.

P.S. The code below is adapted from Yangqing's filter-visualization notebook.

import numpy as np
import caffe

caffe_root = '../'  # path to the Caffe installation; adjust as needed
imagenetMeanFile = caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy'

caffe.set_mode_cpu()
net = caffe.Net(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',
                caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                caffe.TRAIN)
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', np.load(imagenetMeanFile).mean(1).mean(1))  # mean pixel
transformer.set_raw_scale('data', 255)           # the reference model operates on images in [0, 255] instead of [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))  # the reference model expects BGR channel order instead of RGB
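(In effect, the Transformer above applies the following numpy operations to an H x W x 3 RGB image in [0, 1]. This is a plain-numpy restaging for illustration, not the caffe.io.Transformer internals; the mean-pixel values are the approximate ILSVRC BGR means, and the exact numbers depend on the .npy file:)

```python
import numpy as np

# Dummy RGB image in [0, 1], as caffe.io.load_image would return it.
img = np.random.rand(227, 227, 3).astype(np.float32)

# Mean pixel in BGR order (roughly what .mean(1).mean(1) of the
# ILSVRC mean file yields).
mean_pixel = np.array([104.0, 117.0, 123.0], dtype=np.float32)

x = img.transpose(2, 0, 1)           # set_transpose: HWC -> CHW
x = x[(2, 1, 0), :, :]               # set_channel_swap: RGB -> BGR
x = x * 255.0                        # set_raw_scale: [0, 1] -> [0, 255]
x = x - mean_pixel[:, None, None]    # set_mean: subtract the mean pixel

# x now has the shape and value range the 'data' blob expects.
```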

Answer

For the full visualization you can also refer to my GitHub repository, which is more complete and visualizes the saliency map as well as the class models and the backpropagated gradients:

https://github.com/smajida/Deep_Inside_Convolutional_Networks
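To backpropagate with respect to a single class, as asked above, one common approach is to zero the top diff, set a 1 at the class index, and run the backward pass from the last layer. A sketch, assuming `net` is the loaded caffe.Net from the question and the Softmax/loss layer has been removed so that `fc8` is the output blob; the pycaffe calls are shown as comments since they need a live network, while the one-hot construction itself is plain numpy:

```python
import numpy as np

num_classes = 1000
class_idx = 281  # the class of interest, e.g. the cat class from the question

# One-hot top gradient: backpropagating this instead of the loss gradient
# yields d(score_281)/d(image), the class-specific input gradient.
one_hot = np.zeros((1, num_classes), dtype=np.float32)
one_hot[0, class_idx] = 1.0

# With a live network (assumption: 'fc8' is the last blob), the
# class-specific backward pass in pycaffe would look like:
#   net.forward()
#   net.blobs['fc8'].diff[...] = one_hot
#   net.backward(start='fc8')
#   grad = net.blobs['data'].diff[0]       # (3, H, W) gradient w.r.t. the image
#   saliency = np.abs(grad).max(axis=0)    # saliency map per Simonyan et al.
```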