Problems with LMDB when there are multiple image inputs and corresponding labels of different sizes


Ben

Oct 12, 2015, 9:15:14 PM10/12/15
to Caffe Users
For example, I have inputs data1 and data2, with corresponding labels label1 and label2. The labels are pixel-wise labelings of the images.
I ran into a problem:
case 1, multiple inputs of different sizes:
       data1 + label1 ----> loss1  (* 1.)
       data2 + label2 ----> loss2  (* 0.)
case 2, multiple inputs of the same size:
       data1 + label1 ----> loss1  (* 1.)
       data1 + label1 ----> loss1  (* 0.)
case 3, single input:
       data1 + label1 ----> loss1
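For reference, the case-1 setup corresponds to a net definition along the lines of the sketch below, assuming each image set and each pixel-wise label set lives in its own LMDB read by a separate Data layer (the layer names, LMDB paths, score blobs, and the SoftmaxWithLoss type are my assumptions, not the actual prototxt):

```
# case 1: two image/label streams, second loss silenced via loss_weight
layer { name: "data1"  type: "Data" top: "data1"
        data_param { source: "data1_lmdb"  backend: LMDB batch_size: 1 } }
layer { name: "label1" type: "Data" top: "label1"
        data_param { source: "label1_lmdb" backend: LMDB batch_size: 1 } }
layer { name: "data2"  type: "Data" top: "data2"
        data_param { source: "data2_lmdb"  backend: LMDB batch_size: 1 } }
layer { name: "label2" type: "Data" top: "label2"
        data_param { source: "label2_lmdb" backend: LMDB batch_size: 1 } }

# ... shared convolution layers producing "score1" and "score2" ...

layer { name: "loss1" type: "SoftmaxWithLoss"
        bottom: "score1" bottom: "label1" top: "loss1" loss_weight: 1 }
layer { name: "loss2" type: "SoftmaxWithLoss"
        bottom: "score2" bottom: "label2" top: "loss2" loss_weight: 0 }
```

With loss_weight: 0 on loss2, the second branch should not contribute any gradient, so loss1 here would be expected to track case 3 exactly.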
 

In my test experiment, case 2 and case 3 behave the same.
But the loss in case 1 is quite strange, so I tried to test the net. In case 1, I set loss_weight to 0. for loss2 and observed how loss1 changed compared to case 3. The losses diverge at the 6th image. I monitored the blob means of the convolution layers and the loss layers.
In the forward pass, all the convolution blobs have the same mean.

In the loss layer, I computed the mean of bottom[1], i.e. the label blob, and found that the mean of "label1" in case 1 is different from the mean of "label1" in case 3.
I think this is why the losses differ. What's happening? Is it a bug, or did I misuse something?
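One way to localize where the label blobs diverge is to step the net from Python and print the label-blob mean after every forward pass. The helper below is a minimal sketch; the net file and the blob name "label1" in the commented usage are assumptions to be adapted to the actual prototxt:

```python
import numpy as np

def blob_mean(blob_data):
    """Mean over all axes of a blob's data array (N x C x H x W)."""
    return float(np.mean(blob_data))

# Hypothetical pycaffe usage (assumes pycaffe is built and a net whose
# label blob is named "label1" -- adjust names to your prototxt):
#
#   import caffe
#   net = caffe.Net('train.prototxt', caffe.TRAIN)
#   for i in range(10):                 # step through 10 batches
#       net.forward()
#       print(i, blob_mean(net.blobs['label1'].data))
#
# Running this once for the single-input net (case 3) and once for the
# two-input net (case 1) shows exactly at which batch the label means
# start to differ.
```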

Ben

Oct 13, 2015, 2:59:19 AM10/13/15
to Caffe Users
It's weird that the mean of the label blob is sometimes right and sometimes wrong. What's going on?