bvlc_googlenet preprocessing parameters

Yoann

May 6, 2015, 5:52:08 AM
to caffe...@googlegroups.com
Hi all,

The preprocessing parameters for the input images of bvlc_googlenet are not stated explicitly.
Since train_val.prototxt uses the same dataset as the bvlc reference model, I assume the input image preprocessing for the training phase is also the same. Am I right?

Properties:
channel_swap: (2, 1, 0)
raw_scale: 255 (pixel values in the 0-255 range)
transpose: (2, 0, 1)

I'm trying to fine-tune GoogLeNet, and my images need to be preprocessed in the same way.
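
For concreteness, here is a minimal sketch of the preprocessing I have in mind, using caffe.io.Transformer. The file names are placeholders, and the per-channel BGR mean (104, 117, 123) is what I believe the GoogLeNet train_val.prototxt specifies, so treat those as assumptions:

import numpy as np
import caffe

# Placeholder paths -- substitute your own deploy prototxt and weights.
net = caffe.Net('deploy.prototxt', 'bvlc_googlenet.caffemodel', caffe.TEST)

# Mirror the classify.py / Classifier defaults.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)           # load_image returns [0,1]; rescale to [0,255]
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # assumed BGR mean

image = caffe.io.load_image('example.jpg')       # placeholder image path
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()
print(output['prob'].argmax())                   # top-1 class index ('prob' is the GoogLeNet output blob)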

Thanks,
Yoann

npit

May 6, 2015, 7:54:34 AM
to caffe...@googlegroups.com
Where are you getting these properties from?

Yoann

May 6, 2015, 8:34:11 AM
to caffe...@googlegroups.com
From the default parameters of the Python script classify.py for bvlc_reference_caffenet.

echeng

May 8, 2015, 6:55:54 PM
to caffe...@googlegroups.com
I am also trying to use a fine-tuned GoogLeNet model, but with the default preprocessing parameters you mention above, my results do not match the test accuracy reported during training.

echeng

May 9, 2015, 3:09:35 AM
to caffe...@googlegroups.com
It turns out I had a bug: I had renamed the classifier layers in train_val.prototxt but had not updated deploy.prototxt to match. Once I did, everything worked as expected with the parameters specified in the ImageNet classification IPython notebook example (http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/classification.ipynb), which match the preprocessing parameters you cite above.
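
In case it helps anyone hitting the same issue, here is a quick sanity-check sketch (the model directory below is a placeholder) that parses both prototxt files and lists the layer names present in train_val.prototxt but missing from deploy.prototxt. Data and loss layers are expected to show up in that difference; a renamed classifier layer appearing there is the bug:

from caffe.proto import caffe_pb2
from google.protobuf import text_format

def layer_names(prototxt_path):
    # Parse a .prototxt and return the set of declared layer names.
    net_param = caffe_pb2.NetParameter()
    with open(prototxt_path) as f:
        text_format.Merge(f.read(), net_param)
    return {layer.name for layer in net_param.layer}

# Placeholder paths -- point these at your fine-tuning model directory.
train_layers = layer_names('models/finetune_googlenet/train_val.prototxt')
deploy_layers = layer_names('models/finetune_googlenet/deploy.prototxt')

# Data/loss layers are expected here; a renamed classifier layer is not.
print('in train_val but not in deploy: %s' % sorted(train_layers - deploy_layers))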