Fine Tuning - Train a fully connected layer from "Bottleneck features" as a separate step


Gil Mor

Mar 20, 2017, 3:04:35 PM
to Caffe Users
I use fine-tuning with CaffeNet, and I don't remember reading about this in any tutorial. I want to know whether this is the right way to do fine-tuning, because my fine-tuning works very well without this step.

In the Keras blog entry on fine-tuning, they write (they use the VGG16 model):

"in order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it."

So, as a separate step in fine-tuning, they save the output of the last layer before the fully connected layers (the "bottleneck features"), then train a small fully-connected model on those cached features, and only then put the newly trained fully connected layers on top of the whole net and fine-tune the last convolutional block together with them.
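For what it's worth, here is a toy numpy sketch of how I understand that separate step. A fixed random projection stands in for the pre-trained convolutional base, and plain logistic regression stands in for the small fully-connected top; none of the names here come from the actual VGG16/Keras code, it just shows the pattern "freeze the base, cache its outputs once, train only the top on them":

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained convolutional base: a FIXED (frozen) projection.
# In the Keras blog post this would be VGG16 with its top layers removed.
W_base = rng.standard_normal((20, 8))

def bottleneck_features(x):
    # Run inputs through the frozen "base" once; these outputs are the
    # "bottleneck features" that get cached to disk in the blog's workflow.
    return np.maximum(x @ W_base, 0.0)  # ReLU activations

# Synthetic data whose labels are (by construction) predictable from the
# bottleneck features, so the toy top can actually learn something.
x = rng.standard_normal((200, 20))
feats = bottleneck_features(x)          # step 1: extract features ONCE
v = rng.standard_normal(8)
scores = feats @ v
y = (scores > np.median(scores)).astype(float)

# Step 2: train only the small fully-connected top (here: logistic
# regression) on the cached features. The base gets no gradient updates,
# so its pre-trained weights cannot be wrecked by the random top.
w = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid output
    w -= lr * feats.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == y)       # training accuracy
```

Only after this step does the blog's workflow place the now-trained top back on the base and fine-tune the last convolutional block jointly with it, at a small learning rate.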

What do you think?

thanks

