I'm using Caffe to perform super resolution, taking both the low-resolution (LR) and high-resolution (HR) images as input during the training phase. As the loss layer, I'm using a Euclidean loss layer to compare the input HR image with the generated image. The problem is that the loss I obtain is enormous, e.g. 3e7.
Any idea as to why?
PS: I am training using build/tools/caffe train -solver 'solver.prototxt'
Attached are my model and solver.
Thank you.
The loss is divided by the batch size, not by the number of pixels in the image. Caffe's EuclideanLoss layer sums the squared differences over every element of the blob and divides only by 2 × batch size, so with full-resolution images the per-pixel errors accumulate into a very large number. A loss on the order of 3e7 is therefore not, by itself, a sign that anything is wrong.
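For reference, here is a minimal numpy sketch of the normalization Caffe's EuclideanLoss layer applies; the shapes and per-pixel error below are hypothetical, chosen only to illustrate the scale:

import numpy as np

def euclidean_loss(pred, target):
    # Caffe-style EuclideanLoss: sum of squared differences over ALL
    # elements, divided by 2 * batch_size (NOT by the pixel count).
    batch_size = pred.shape[0]
    diff = pred - target
    return np.sum(diff ** 2) / (2.0 * batch_size)

# Hypothetical batch of 16 HR images, 3 x 256 x 256, on a 0-255 scale,
# with a constant per-pixel error of 10 intensity levels.
pred = np.random.rand(16, 3, 256, 256) * 255
target = pred + 10.0

print(euclidean_loss(pred, target))  # ~ 3*256*256 * 10^2 / 2 ≈ 9.8e6

So even a modest per-pixel error yields a loss in the millions. If you want a number in a more familiar range, you can scale the inputs to [0, 1] or apply a small loss_weight to the loss layer; neither changes what is being optimized, only the reported magnitude.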