Hi all,
I'm trying to implement this paper (Grad-CAM) -
https://arxiv.org/abs/1610.02391 - in Python. For that I want the gradient of a specific class's output score with respect to the last convolutional layer. I came across the following usage of the backward() function:
label = np.zeros((1, 6))          # one-hot diff for the class of interest
label[0, interested_class] = 1
net.backward(**{net.outputs[0]: label})
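To illustrate what I understand the one-hot diff to be doing, here is a tiny pure-NumPy analogue (toy weights, not from my actual network): seeding the top diff with a one-hot vector selects the gradient of that single class score.

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(1, 4)        # stand-in for flattened conv activations
W = np.random.rand(4, 6)        # made-up final-layer weights, 6 classes
scores = x.dot(W)               # forward pass: class scores, shape (1, 6)

interested_class = 2
top_diff = np.zeros((1, 6))
top_diff[0, interested_class] = 1   # one-hot seed, like `label` above

# Backprop one step: gradient of scores[0, interested_class] w.r.t. x.
# By the chain rule this is just the corresponding column of W.
x_grad = top_diff.dot(W.T)      # shape (1, 4)
```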
Assume I have six classes in my network.
However, this returns the gradient w.r.t. the input layer.
I also tried the following, but it does not give the desired output:
label = np.zeros((1, 6))
label[0, interested_class] = 1
net.backward(end='conv', **{net.outputs[0]: label})
Precisely, I want the gradient of the output layer w.r.t. the conv layer's activations.
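For context, once I have those gradients, the Grad-CAM step I intend to apply is just a spatial average of the gradients followed by a weighted, ReLU'd sum over channels. A pure-NumPy sketch with made-up shapes (in pycaffe the two arrays would come from the conv blob's .diff and .data):

```python
import numpy as np

# Hypothetical (N, C, H, W) arrays standing in for the last conv layer:
np.random.seed(0)
grads = np.random.rand(1, 256, 13, 13)        # d(class score)/d(activations)
activations = np.random.rand(1, 256, 13, 13)  # forward activations

# Grad-CAM: channel weights are the spatial average of the gradients,
# the heatmap is the ReLU of the weighted sum over channels.
weights = grads.mean(axis=(2, 3))             # shape (1, 256)
cam = np.maximum((weights[:, :, None, None] * activations).sum(axis=1), 0)
# cam has shape (1, 13, 13): one coarse heatmap per image
```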
Any help is highly appreciated!