Python and backward computation


Rester Hall

Aug 26, 2014, 3:28:12 PM
to caffe...@googlegroups.com
Hi everyone,

My question concerns backward computation with the Python wrapper.
When I run a backward pass and look at the derivatives, they are all zero.

What I've done so far:
- Load the network: net = caffe.Classifier() (the ImageNet reference model)
- Perform a forward pass on an empty image: net.forward() (by default, the input is taken from the (empty) data layer). This gives me some probabilities as output.
- Perform a backward pass: net.backward()
- Look at the derivatives with net.blobs['layer_name'].diff. Unfortunately, every single derivative is empty (only zeros).

What I've tried to solve the problem:
- set force_backward: true in the deploy prototxt
- give the network a label during the forward pass by adding input: "label" to the prototxt (how is the loss computed when there is no label present?)
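For intuition (this is plain numpy, not Caffe code): backward just applies the chain rule to whatever diff is sitting in the top blob, so if nothing has written a gradient there, zeros propagate all the way down. A minimal sketch with one linear layer:

```python
import numpy as np

# One linear layer: y = W @ x. Its backward step computes dL/dx = W.T @ dL/dy.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))

def backward(top_diff):
    # Chain rule: gradient w.r.t. the bottom blob given the top blob's diff.
    return W.T @ top_diff

# With no loss layer, the top diff keeps its initialized value: all zeros.
zero_grads = backward(np.zeros(3))   # every derivative comes out zero

# Seeding the top diff (which is what a loss layer effectively does)
# produces real gradients.
real_grads = backward(np.ones(3))    # generally nonzero
```

This is exactly the symptom described above: the backward machinery works, but without a loss (or a manually set diff) there is nothing to propagate.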

Thanks in advance!

Rester Hall

Rester Hall

Aug 27, 2014, 7:24:08 AM
to caffe...@googlegroups.com
I already found the solution (in case somebody else runs into this problem, here it is):

Adapt your deploy.prototxt:
input: "label"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 1
force_backward: true

Load the net in Python and run a forward / backward pass:
net.forward_all(data=my_data_array, label=my_label_array)
net.backward()
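What the backward pass propagates now depends on the loss layer. For a softmax loss (my assumption; the thread doesn't say which loss is used), the gradient at the logits is the well-known softmax(z) minus the one-hot label. A quick numpy check of that formula against finite differences:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Softmax cross-entropy: L = -log softmax(z)[label].
# Its gradient w.r.t. the logits is softmax(z) - onehot(label).
z = np.array([2.0, 1.0, 0.1])
label = 0

probs = softmax(z)
grad = probs.copy()
grad[label] -= 1.0  # subtract the one-hot label

# Verify one component numerically with a finite difference.
eps = 1e-6
z_plus = z.copy()
z_plus[1] += eps
num = (-np.log(softmax(z_plus)[label]) + np.log(probs[label])) / eps
```

This is the kind of nonzero diff that appears in net.blobs once a label and a loss layer are in place.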

Actually, it was pretty straightforward...

Rester Hall

Aug 27, 2014, 7:30:03 AM
to caffe...@googlegroups.com
Oh, I forgot: of course you also have to add a loss layer to your deploy.prototxt:

layers {
  name: "loss"
  type: MY_LOSS_LAYER
  bottom: "last_fc"
  bottom: "label"
  top: "loss"
}