Euclidean Loss Layer returning large losses (e7 magnitude)

danbon

Jan 26, 2018, 2:38:42 PM
to Caffe Users

I'm using Caffe to perform super resolution, taking both the low resolution (LR) and the high resolution (HR) images as input to the training phase. As the loss layer, I'm using the Euclidean loss layer to compare the input HR image and the generated image. The problem is that the loss obtained is enormous, on the order of 3e7.

Any idea as to why?


PS: I am training using build/tools/caffe train -solver 'solver.prototext'


Attached are my model and solver.


Thank you.

Solver.prototxt
Model.prototxt

Przemek D

Jan 29, 2018, 4:21:36 AM
to Caffe Users
Euclidean loss is just the sum of squared differences between each pixel of the network output and the ground truth, so it scales with the size of your image. The larger the images your network produces, the larger the loss can get. I suppose you normalize the loss by the total number of pixels in each image to get some insight into how your network is doing. Just be aware that the EuclideanLoss layer already divides by the number of images in a batch.
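To illustrate the point above, here is a minimal NumPy sketch of the same computation. The batch dimensions are hypothetical, not taken from the attached prototxt; the formula mirrors Caffe's EuclideanLoss (sum of squared differences divided by 2N, where N is the batch size), and the extra division by the pixel count is the suggested per-pixel normalization:

```python
import numpy as np

# Hypothetical batch: N images of shape C x H x W (illustrative sizes only)
N, C, H, W = 4, 3, 128, 128
pred = np.random.rand(N, C, H, W).astype(np.float32)  # network output
gt = np.random.rand(N, C, H, W).astype(np.float32)    # ground-truth HR images

# Caffe's EuclideanLoss: sum of squared pixel differences, divided by 2N.
# Note it sums over ALL pixels, so the value grows with image size.
loss = np.sum((pred - gt) ** 2) / (2 * N)

# Dividing by the per-image pixel count gives a size-independent number,
# easier to interpret across different output resolutions.
per_pixel_loss = loss / (C * H * W)

print(loss, per_pixel_loss)
```

With 128x128x3 outputs, the raw loss is roughly 49k times larger than the per-pixel value, which is why the unnormalized number looks alarming even when the network is doing fine.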

danbon

Jan 31, 2018, 6:21:03 AM
to Caffe Users
Hi Przemek,

I normalized my input data (0 mean, unit standard deviation) and the results are better. The loss on the validation set is now hovering around 72. As I understand it, the loss is divided by the batch size, not the number of pixels in the image, correct? Thank you.
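For reference, a minimal sketch of the normalization described above (zero mean, unit standard deviation), assuming the input is already loaded as a floating-point NumPy array; the array shape here is hypothetical:

```python
import numpy as np

# Hypothetical input image as a float array (C x H x W)
img = np.random.rand(3, 64, 64).astype(np.float64)

# Zero-mean, unit-variance normalization over the whole image
normalized = (img - img.mean()) / img.std()

print(normalized.mean(), normalized.std())  # ~0.0, ~1.0
```

In practice the mean and standard deviation would typically be computed once over the training set and reused at test time, rather than per image.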

Przemek D

Jan 31, 2018, 6:45:48 AM
to Caffe Users
"the loss is divided by the batch size, not the number of pixels in the image"
That is correct.

I made a typo in the previous message; I wanted to say "I suppose you could normalize the loss (...) to get insight into how the network is doing" - making a suggestion to do so.