Finetuning CaffeNet for regression: loss layers


Devendra Mandan

Aug 29, 2015, 5:17:28 PM
to Caffe Users
I modified the "Fine-tuning for style recognition" example according to my needs:

- I am doing regression (~14,000 training images, ~1,500 test images), so I changed num_output in the final layer to 1.
- This is the last layer in my deploy.prototxt:


layer {
  name: "my-fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "my-fc8"
  # lr_mult is set higher than for the other layers, because this layer
  # starts from random initialization while the others are already trained
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

- In train_val.prototxt I removed the accuracy layer, since I am doing regression, and added the following EuclideanLoss layers:

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "my-fc8"
  bottom: "label"
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "my-fc8"
  bottom: "label"
  top: "loss"
  include {
    phase: TEST
  }
}

- And since my training and test labels are floats, I used this PR (which breaks the softmax layer, but I don't need that), following this discussion.
- One thing that is bugging me: when I use the MATLAB wrapper function matcaffe_demo() to test the net trained for 100,000 iterations on an image, I get a 10x1 output instead of 1 label for 1 image, i.e. 10 labels, which I then average to infer the predicted label (see the Python sketch below).
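For concreteness, here is the Python equivalent of what I do in MATLAB, as a minimal pycaffe sketch. The file names are placeholders, and the zero-filled array stands in for the 10 preprocessed crops (center plus corners, each mirrored) that matcaffe_demo builds from a single image:

import numpy as np
import caffe

# Placeholders for the finetuned model files.
net = caffe.Net('deploy.prototxt', 'finetuned_iter_100000.caffemodel', caffe.TEST)

# Stand-in for the 10 preprocessed crops of one input image.
crops = np.zeros((10, 3, 227, 227), dtype=np.float32)

net.blobs['data'].reshape(*crops.shape)
net.blobs['data'].data[...] = crops

out = net.forward()
print(out['my-fc8'].shape)         # (10, 1): one prediction per crop
prediction = out['my-fc8'].mean()  # average over the 10 crops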

So, if anyone can verify that I have proceeded correctly (especially the EuclideanLoss layers for train and test, and the PR for using float labels), I would be very grateful. Before I proceed, I need to be sure that what I have done so far is correct.

Devendra Mandan

Aug 31, 2015, 7:41:44 AM
to Caffe Users
Any help is welcome!

Leonid Berov

Jan 26, 2016, 11:33:16 AM
to Caffe Users
First, make sure to check the batch_size setting in your input layer, and reshape accordingly.
If that is not the problem, use the Python wrapper to check the dimensions of each layer; that should show where the rogue 10 is coming from. You can probably do the same with the MATLAB wrapper, but since I haven't touched that one I can't say.
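Something like this minimal pycaffe sketch (the file names are placeholders for your own prototxt and snapshot):

import caffe

net = caffe.Net('train_val.prototxt', 'finetuned_iter_100000.caffemodel', caffe.TEST)

# The first dimension of each blob is the batch size, so a rogue
# leading 10 somewhere below points at the layer that introduces it.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)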

Jan C Peters

Jan 27, 2016, 3:08:19 AM
to Caffe Users
First of all: why do you have the same loss layer twice? Just do

layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "my-fc8"
  bottom: "label"
  top: "loss"
}


That works for both training and testing.
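For reference, the loss that layer computes over a batch of N predictions (writing \hat{y}_n for the prediction and y_n for the label) is

E = \frac{1}{2N} \sum_{n=1}^{N} \| \hat{y}_n - y_n \|_2^2

which is perfectly well-defined for float-valued labels.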

I have float labels myself, but I use HDF5 for input, and there it is not a problem at all. The trouble the PR has with the softmax loss layer probably stems from the fact that that layer assumes its incoming labels to be _class_ labels in [0, N-1] and treats them that way, which inevitably causes problems when you feed it non-integer numbers. This is all in all reasonable, as the softmax loss does not really make sense for regression problems; there the Euclidean loss would be the go-to loss function.
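If you want to go the HDF5 route instead of patching Caffe, here is a minimal sketch. The file names, array shapes, and sizes are placeholders; the dataset names only have to match the top names of your HDF5Data layer:

import h5py
import numpy as np

# Placeholder arrays: N preprocessed images and N float regression
# targets (small N here; use your full dataset in practice).
N = 100
images = np.zeros((N, 3, 227, 227), dtype=np.float32)
labels = np.zeros((N, 1), dtype=np.float32)

# Dataset names must match the `top` names of the HDF5Data layer.
with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=images)
    f.create_dataset('label', data=labels)

# The HDF5Data layer's `source` parameter is a text file listing
# one .h5 path per line.
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')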

Jan



