Custom layer definition without backward

Morpheus

Mar 3, 2018, 5:31:56 PM
to torch7
Hi,

I am trying to design a network that takes in two images (img1 & img2) and gives me a vector of size 10 as output. I then use this vector and one of the images (say img2) to calculate a predicted image (pred_image) using simple mathematical operations. I then calculate the loss of the network using the predicted image, with img1 as the ground truth. The forward pass works as expected: the network takes the two 128x128 images and outputs the size-10 vector, which I use to reconstruct, outside of the network, a 128x128 image.
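To make the setup concrete, here is a stripped-down sketch of my forward pass (single sample, no batching; the flatten-and-Linear encoder and the sizes are placeholders, not my real architecture):

require 'nn'
require 'nngraph'

-- Two 128x128 image inputs -> one size-10 vector output.
local img1in = nn.Identity()()
local img2in = nn.Identity()()
local h = nn.JoinTable(1)({ nn.Reshape(128 * 128)(img1in),
                            nn.Reshape(128 * 128)(img2in) })
local vecOut = nn.Linear(2 * 128 * 128, 10)(h)
local net = nn.gModule({img1in, img2in}, {vecOut})

local img1 = torch.rand(128, 128)
local img2 = torch.rand(128, 128)
local out = net:forward({img1, img2})
print(out:size())  -- 10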

However, for the backward pass my gradient is of size 128x128, which doesn't agree with the network output of size 10. One way I thought of working around this is to make the prediction reconstruction part of the network, but I ran into trouble since I had to do some constant scalar operations, which I couldn't express with nngraph. So I thought of writing a custom layer to overcome this, but couldn't figure out what I should put in the layer's updateGradInput function. The reconstruction part shouldn't have any learned weights, since the reconstruction depends solely on the vector being produced, so I don't see how to arrange for the loss to be backpropagated from the vector rather than from pred_image.
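For reference, here is the shape of the custom layer I was attempting. The reconstruction in it is a deliberately trivial stand-in (pred_image = vec[1] * img2, single sample) just to show the structure; accGradParameters is left as a no-op since there is nothing to learn. My understanding is that updateGradInput has to apply the chain rule, mapping the 128x128 gradOutput down to a size-10 gradient on the vector, which for this toy reconstruction would be:

require 'nn'

-- Weight-free layer: input is {vec, img2}, output is pred_image.
local Reconstruct, parent = torch.class('nn.Reconstruct', 'nn.Module')

function Reconstruct:__init()
   parent.__init(self)
   self.gradInput = {torch.Tensor(), torch.Tensor()}
end

function Reconstruct:updateOutput(input)
   local vec, img2 = input[1], input[2]
   -- Toy reconstruction: pred_image = vec[1] * img2.
   self.output:resizeAs(img2):copy(img2):mul(vec[1])
   return self.output
end

function Reconstruct:updateGradInput(input, gradOutput)
   local vec, img2 = input[1], input[2]
   -- Chain rule: dL/dvec[k] = sum over pixels of dL/dpred * dpred/dvec[k].
   -- Here dpred/dvec[1] = img2, so dL/dvec[1] = <gradOutput, img2>.
   self.gradInput[1]:resizeAs(vec):zero()
   self.gradInput[1][1] = torch.dot(gradOutput, img2)
   -- img2 is treated as a constant input, so its gradient is zero.
   self.gradInput[2]:resizeAs(img2):zero()
   return self.gradInput
end

With mini-batches vec[1] becomes a column rather than a scalar, so the real version would have to loop over the batch or vectorise this.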

In other words, I would like to backpropagate the loss obtained from pred_image starting from the vector output. This is because I don't have the ground truth directly, but can obtain it passively.
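Concretely, without a custom layer, I imagine the manual version would look something like this (same toy stand-in for the reconstruction, reusing net, img1 and img2 from the sketch above):

local crit = nn.MSECriterion()

local vec  = net:forward({img1, img2})       -- size-10 output
local pred = img2:clone():mul(vec[1])        -- toy reconstruction
local loss = crit:forward(pred, img1)

local gradPred = crit:backward(pred, img1)   -- 128x128: dL/dpred
local gradVec  = torch.zeros(10)             -- what net:backward expects
gradVec[1] = torch.dot(gradPred, img2)       -- dL/dvec[1] = <dL/dpred, img2>
net:backward({img1, img2}, gradVec)

Is manually building gradVec like this the intended way, or is the custom module route preferred?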

I looked for documentation on this, but couldn't find any on the internet. Any and all help would be really appreciated.

I am using a modified MSECriterion and mini-batches.