Fine-tuning strategy for feature extraction


Alpli tan

Nov 9, 2015, 10:05:02 AM
to Caffe Users
Hi all,

I am new to deep learning problems, so I want to understand one thing.

I will use Caffe for feature extraction, so I will fine-tune the CaffeNet model. I will use its 7th layer (fc7) to obtain a 1x4096 feature vector. My dataset has 40 different classes.

So my question is: do I have to divide my dataset into training, validation and test sets? Can I use all of my data for training, since I will not classify anything and only need a feature vector?

If I have to divide my dataset into training and validation sets for fine-tuning, do I have to change the number of output classes in the last layer to 40?

Jan C Peters

Nov 10, 2015, 2:54:02 AM
to Caffe Users
Hi,

Whether you divide your dataset into training, validation and test sets is completely up to you. Of course you can use all your data for training, but then you only have the loss and accuracy values on the training set as a measure of your network's performance, which is not very meaningful on its own (see overfitting).
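If you do want a held-out split, a rough Python sketch could look like the following (the image paths and the 80/20 ratio are placeholders; Caffe's ImageData layer and convert_imageset both accept "path label" list files like these):

import random

# Hypothetical list of (image_path, label) pairs for your 40 classes.
samples = [("imgs/class00_001.jpg", 0), ("imgs/class01_001.jpg", 1)]

random.seed(0)
random.shuffle(samples)

split = int(0.8 * len(samples))   # e.g. 80% training, 20% validation
subsets = {"train.txt": samples[:split], "val.txt": samples[split:]}

for filename, subset in subsets.items():
    with open(filename, "w") as f:
        for path, label in subset:
            f.write("%s %d\n" % (path, label))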

If you only want to extract features using a given model, then you do not need the output layer or a loss layer. And whether you divide your data or not is completely unrelated to the size of the last layer.
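For example, a minimal pycaffe sketch of extracting fc7 from a single image with the stock deploy net (the file paths here are placeholders; adjust them to your setup):

import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt',                      # deploy net: no loss/accuracy layers
                'bvlc_reference_caffenet.caffemodel',   # pre-trained weights
                caffe.TEST)
net.blobs['data'].reshape(1, 3, 227, 227)               # single-image batch

# Standard CaffeNet preprocessing (the mean file ships with Caffe's python module).
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))            # HxWxC -> CxHxW
transformer.set_mean('data', np.load('ilsvrc_2012_mean.npy').mean(1).mean(2))
transformer.set_raw_scale('data', 255)                  # [0,1] -> [0,255]
transformer.set_channel_swap('data', (2, 1, 0))         # RGB -> BGR

img = caffe.io.load_image('example.jpg')                # placeholder image path
net.blobs['data'].data[...] = transformer.preprocess('data', img)
net.forward()

feat = net.blobs['fc7'].data[0].copy()                  # the 1x4096 feature vector
print(feat.shape)                                       # (4096,)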

Jan

ath...@ualberta.ca

Nov 10, 2015, 12:33:06 PM
to Caffe Users
If you are new, I would recommend, as a first step, skipping the training step altogether: just extract pre-trained representations (features) from pool5, fc6 and fc7 (after a forward pass) and use them in one-vs-all SVMs (with scipy this is a few lines of code). If the images are registered (the parts of each image are in the same spot in every image), then pool5 will do better; fc7 will do better where the parts are not in the same place (e.g. pictures of cats where the cat's head can be anywhere in the image); fc6 will be in between. The effectiveness of this transfer learning (training on one dataset and testing on another) takes many by surprise, since it can initially seem a bit counterintuitive.
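To make that pipeline concrete, here is a rough sketch assuming you have already dumped the fc7 activations and labels to .npy files (file names are placeholders, and I am using scikit-learn's LinearSVC, which handles the one-vs-rest scheme for you):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X = np.load('fc7_features.npy')   # placeholder: shape (n_images, 4096)
y = np.load('labels.npy')         # placeholder: shape (n_images,), values 0..39

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LinearSVC(C=1.0)            # one linear SVM per class (one-vs-rest)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

Repeat the same thing with the pool5 and fc6 features to see which layer transfers best for your data.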

After you have tried this, consider fine-tuning, which can improve results, though usually by less than you might expect; this depends heavily on how much training data you have.
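If you do get to fine-tuning, the pycaffe side is short. A rough sketch (solver.prototxt is a placeholder you write yourself, with the last InnerProduct layer renamed and set to num_output: 40 so its weights are re-initialized rather than copied from the pre-trained net):

import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver.prototxt')                  # placeholder solver definition
solver.net.copy_from('bvlc_reference_caffenet.caffemodel')   # start from pre-trained weights
solver.solve()                                               # run the fine-tuning

The command-line equivalent is: caffe train -solver solver.prototxt -weights bvlc_reference_caffenet.caffemodel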

I would also highly recommend a basic course on machine learning, such as: https://www.coursera.org/learn/machine-learning.

Alpli tan

Nov 10, 2015, 3:12:22 PM
to Caffe Users
Thanks for your answers, guys.

I understand that it is good to use a validation set for my fine-tuning process, because the loss and accuracy are only meaningful that way. But the number of outputs in the last layer can be anything, because I will not use the last layer; I will extract features from the fc7 layer.