to Caffe Users
Dear all,
I am facing the following problem: I would like to freeze a network layer but resume training it at a later stage of the training phase (think of it as a kind of sequential fine-tuning of different layers).
I know that the network definition in the *.prototxt allows defining a learning-rate multiplier "lr_mult" for layers with weights and biases. However, there seems to be no way to get or set these values in pyCaffe. Is there any conventional way to modify them, or how else can the above problem be tackled efficiently?
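For reference, this is how the freezing is normally done statically in the prototxt, by zeroing the multipliers per parameter blob (the layer name and dimensions here are just illustrative):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  # effective rate = base_lr * lr_mult, so lr_mult: 0 freezes the blob
  param { lr_mult: 0 decay_mult: 0 }  # weights
  param { lr_mult: 0 decay_mult: 0 }  # biases
  convolution_param {
    num_output: 64
    kernel_size: 3
  }
}
```

The question is precisely that these values are fixed at net-construction time and cannot be toggled mid-training from pyCaffe.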
Thanks!
Przemek D
Mar 5, 2018, 7:09:06 AM
to Caffe Users
Currently this is not possible in pycaffe: we would have to expose some private attributes of the Net class (learnable_params_). There is ongoing work on improving pycaffe, but it does not currently involve exposing lr_mult - I recommend voicing your suggestion under that PR (or opening a new issue).
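Until something like that is exposed, one common workaround is to stage the training: use two net prototxts that differ only in their lr_mult values, snapshot after the first stage, and warm-start the second solver from that snapshot. A minimal sketch, assuming Caffe is installed and the two solver/net prototxt files (names are hypothetical) exist:

```python
import caffe

# Stage 1: solver whose net prototxt sets lr_mult: 0 on the layer to freeze.
solver = caffe.SGDSolver('stage1_solver.prototxt')  # hypothetical file name
solver.step(10000)                                  # train with the layer frozen
solver.net.save('stage1.caffemodel')                # snapshot the learned weights

# Stage 2: identical net prototxt, but with the original lr_mult restored.
solver = caffe.SGDSolver('stage2_solver.prototxt')  # hypothetical file name
solver.net.copy_from('stage1.caffemodel')           # warm-start from stage 1
solver.step(10000)                                  # now the layer trains too
```

copy_from matches parameter blobs by layer name, so as long as both prototxts use the same layer names the weights carry over cleanly between stages.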