Dynamic loss_weight while training


PRATIK BORHADE

Jan 24, 2017, 11:43:29 AM
to Caffe Users
Hi everyone,

I'm training a network with losses (L2 loss) at multiple layers, and each loss has a certain loss_weight assigned to it. I want to make these loss_weights dynamic during training: change their values after a certain number of iterations, and even set some of them to zero so that they no longer contribute to the final loss. Does anyone know a way to do that? I read somewhere that you can use a decay function, but I couldn't find any details. Any help would be appreciated; I'm relatively new to Caffe.
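For context, a setup like the one described would declare per-layer loss weights in the train prototxt. A minimal sketch with hypothetical layer and blob names (EuclideanLoss is Caffe's L2 loss layer):

```protobuf
# Main L2 loss at the network output
layer {
  name: "loss_final"
  type: "EuclideanLoss"
  bottom: "fc_out"
  bottom: "label"
  top: "loss_final"
  loss_weight: 1.0
}
# Auxiliary L2 loss at an intermediate layer
layer {
  name: "loss_aux"
  type: "EuclideanLoss"
  bottom: "fc_mid"
  bottom: "label"
  top: "loss_aux"
  loss_weight: 0.3
}
```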

Przemek D

Jan 25, 2017, 2:56:01 AM
to Caffe Users
I don't think you can change this parameter once training has started. The best way to go, I'd say, is to stop training at that certain number of iterations, edit the loss_weights in the prototxt, and resume from a snapshot.
Alternatively, you could dive into the Caffe C++ code and look for possibilities there.
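The stop/edit/resume cycle could be scripted from the outside. The sketch below is an assumption about how one might automate it (the layer names, file paths, and schedule are hypothetical, and the naive regex edit stands in for a proper prototxt parser): it rewrites a layer's loss_weight in the prototxt text, then resumes training from a saved solverstate via the standard `caffe train --snapshot=...` command line.

```python
import re
import subprocess

def set_loss_weight(prototxt_text, layer_name, new_weight):
    """Rewrite the loss_weight of the named layer in prototxt text.

    Simplifying assumption: the layer's block contains a
    `loss_weight: <number>` entry somewhere after its `name: "..."`
    line. A real tool would parse the prototxt with protobuf's
    text_format instead of a regex.
    """
    pattern = (r'(name:\s*"%s".*?loss_weight:\s*)[-+0-9.eE]+'
               % re.escape(layer_name))
    return re.sub(pattern, lambda m: m.group(1) + repr(new_weight),
                  prototxt_text, count=1, flags=re.DOTALL)

def resume_with_new_weight(prototxt_path, layer_name, new_weight,
                           solver_path, snapshot_path):
    """Edit the prototxt on disk, then resume from a snapshot.

    Paths and the `caffe` binary invocation assume a standard Caffe
    install; adjust to your setup.
    """
    with open(prototxt_path) as f:
        text = f.read()
    with open(prototxt_path, "w") as f:
        f.write(set_loss_weight(text, layer_name, new_weight))
    subprocess.call(["caffe", "train",
                     "--solver=%s" % solver_path,
                     "--snapshot=%s" % snapshot_path])
```

Note that the solver must be configured to write snapshots (`snapshot` and `snapshot_prefix` in solver.prototxt) for the resume step to have a `.solverstate` file to start from.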

PRATIK BORHADE

Jan 25, 2017, 5:52:37 PM
to Caffe Users
Hi,
Thanks for your reply; this does look like a way to achieve what I'm trying to do. Also, suppose there are two losses of the same kind in the network, one at the output and one at a hidden layer, and I want the loss at the earlier stage to vanish after a few iterations. Do you have an idea of how I could do that?
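Under the stop-and-resume approach described above, making the earlier loss vanish would simply mean setting its loss_weight to 0 in the prototxt before resuming from the snapshot. A sketch with a hypothetical hidden-layer loss:

```protobuf
# Auxiliary loss at a hidden layer; before resuming from the
# snapshot, change its weight to 0 so it no longer contributes
# to the total loss (its layer still runs forward, but its
# gradient is scaled to zero):
layer {
  name: "loss_hidden"
  type: "EuclideanLoss"
  bottom: "fc_hidden"
  bottom: "label"
  top: "loss_hidden"
  loss_weight: 0
}
```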

Przemek D

Jan 26, 2017, 4:18:10 AM
to Caffe Users