Data update - fine tuning


mtngld

Oct 12, 2015, 6:42:06 AM
to Caffe Users
Hi All,

Assume the following scenario:

  1. You have a well trained model of X different labels.
  2. You now want to add Y more labels, so you add the new labels' data to your original dataset.
  3. Retrain the model with a new 'fc8' layer (or whichever last layer the model uses) that now outputs X+Y values to the loss layer.
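For concreteness, step 3 in Caffe usually means redefining the last layer in the train prototxt with the larger output count and a higher lr_mult, and renaming it so its weights are freshly initialized rather than copied from the pretrained .caffemodel. A sketch (layer/blob names follow the standard CaffeNet convention; the num_output value is a placeholder for X+Y):

```protobuf
layer {
  name: "fc8_xy"        # renamed so pretrained fc8 weights are NOT copied
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_xy"
  param { lr_mult: 10  decay_mult: 1 }  # boost learning rate on the fresh weights
  param { lr_mult: 20  decay_mult: 0 }  # and on the fresh bias
  inner_product_param {
    num_output: 15      # X + Y, e.g. 10 original labels + 5 new ones
  }
}
```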
After following these steps, I find that accuracy on the X original labels outperforms accuracy on the Y new labels, which is reasonable of course. However, I am wondering if there is a way to tell the backprop algorithm to fine-tune part of the 'fc8' layer faster (much like using the lr_mult param, but only for part of that layer).
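To make the mechanism I'm after concrete, here is a sketch of one possible workaround using only standard Caffe layers (layer names are made up, and num_output values are placeholders): split the last layer into two parallel InnerProduct layers, one for the X old labels and one for the Y new ones, give each its own lr_mult, and Concat their outputs before the loss.

```protobuf
# Hypothetical split of the last layer: two InnerProduct layers with
# independent lr_mult values, concatenated along the channel axis.
layer {
  name: "fc8_old"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_old"
  param { lr_mult: 1 }   # slow updates for the X already-trained labels
  param { lr_mult: 2 }
  inner_product_param { num_output: 10 }  # X
}
layer {
  name: "fc8_new"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_new"
  param { lr_mult: 10 }  # faster updates for the Y new labels
  param { lr_mult: 20 }
  inner_product_param { num_output: 5 }   # Y
}
layer {
  name: "fc8"
  type: "Concat"
  bottom: "fc8_old"
  bottom: "fc8_new"
  top: "fc8"
  concat_param { axis: 1 }  # stack scores back into one X+Y vector
}
```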

Thanks,

Matan