Is there an L1 loss layer implemented in Caffe?


Qun Zhang

Oct 20, 2016, 10:14:40 AM
to Caffe Users
Hi guys

Is there an L1 loss layer implemented in Caffe?

Hieu Do Trung

Oct 21, 2016, 3:04:03 AM
to Caffe Users

  • Layer type: HingeLoss
  • CPU implementation: ./src/caffe/layers/hinge_loss_layer.cpp
  • CUDA GPU implementation: none yet
  • Parameters (HingeLossParameter hinge_loss_param)
    • Optional
      • norm [default L1]: the norm used. Currently L1, L2
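In prototxt, selecting the norm would look something like this (layer and blob names here are just placeholders, and L1 is already the default):

layer {
  name: "loss"
  type: "HingeLoss"
  bottom: "pred"    # predictions, e.g. from an InnerProduct layer
  bottom: "label"   # ground-truth labels
  top: "loss"
  hinge_loss_param {
    norm: L1   # or L2; L1 is the default
  }
}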

Qun Zhang

Oct 21, 2016, 3:14:59 AM
to Caffe Users
Thanks for the reply. But I am looking for a Euclidean-style regression loss under the L1 norm, i.e. the Least Absolute Deviation, like

loss = sum | y-h(x) | (L1 norm)

loss = sum | y-h(x) |^2 (squared L2 norm, i.e. Euclidean loss)


Przemek D

Oct 25, 2016, 9:34:50 AM
to Caffe Users
There is no such layer to my knowledge. However, you can build it yourself: the tutorial on loss layers mentions that you can make Caffe treat any layer (capable of backpropagating) as a loss by assigning it a loss_weight parameter.

I think you could make an L1 loss using an Eltwise layer (this answer shows how to use it to subtract two blobs) followed by AbsVal and then an InnerProduct. Initialize the InnerProduct with a constant weight filler of 1, and set lr_mult and decay_mult to zero so its weights don't change during learning; have it output a single number, which then becomes your loss. The way I see it, Eltwise+AbsVal computes |y-h(x)| and the InnerProduct returns the sum over all items. I never tried this myself though, so good luck ;) A rough prototxt sketch of this idea follows below.
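Something like this, untested; the blob names h_x (prediction) and y (ground truth) are placeholders for whatever your network produces:

# Compute y - h(x) as an elementwise weighted sum with coefficients 1 and -1
layer {
  name: "diff"
  type: "Eltwise"
  bottom: "y"
  bottom: "h_x"
  top: "diff"
  eltwise_param {
    operation: SUM
    coeff: 1
    coeff: -1
  }
}

# Take the elementwise absolute value: |y - h(x)|
layer {
  name: "absdiff"
  type: "AbsVal"
  bottom: "diff"
  top: "absdiff"
}

# Sum the entries per sample with a frozen all-ones weight vector
layer {
  name: "l1_loss"
  type: "InnerProduct"
  bottom: "absdiff"
  top: "l1_loss"
  # zero lr_mult/decay_mult keep the constant weights fixed during training
  param { lr_mult: 0 decay_mult: 0 }
  inner_product_param {
    num_output: 1
    bias_term: false
    weight_filler { type: "constant" value: 1 }
  }
  # tells Caffe to treat this top blob as a loss
  loss_weight: 1
}

Note that the InnerProduct output is one number per sample (an N x 1 blob); since Caffe sums all entries of any top blob that carries a loss_weight, the reported loss is the L1 distance summed over the whole batch.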