(Un-)freezing layers during training, on-the-fly?

pwj

Mar 4, 2018, 4:22:12 PM
to Caffe Users
Dear all,

I am facing the following problem: I would like to freeze a network layer and then resume training it at a later stage of training (think of it as a kind of sequential fine-tuning of different layers).

I know that the network definition in the *.prototxt allows defining a learning rate multiplier parameter "lr_mult" for layers with weights and biases.
However, it seems there is no way to get or set these values in pyCaffe. So is there any conventional way to modify them at runtime, or how else can the above problem be tackled efficiently?

Thanks!

Przemek D

Mar 5, 2018, 7:09:06 AM
to Caffe Users
This is currently not possible in pycaffe. We would have to expose some private attributes of the Net class (learnable_params_). There is ongoing work on improving pycaffe, but at the moment it does not involve exposing lr_mult - I recommend voicing your suggestion under that PR (or opening a new issue).