Hi all,
The preprocessing parameters for the input images of the bvlc GoogleNet model are not stated explicitly.
Judging from the train_val.prototxt, and since it uses the same dataset as the bvlc reference model, I assume the input preprocessing for the training phase is the same. Am I right?
Properties:
channel_swap (2, 1, 0)  -- RGB to BGR
image_scale 0-255  -- pixel values scaled to the 0-255 range
transpose (2, 0, 1)  -- H x W x C to C x H x W
I'm trying to fine-tune GoogleNet, and my images need to be preprocessed the same way.
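For reference, here is a minimal NumPy sketch of what I believe those three steps amount to (this is my assumption, not something from the model zoo docs; I'm assuming the input is an H x W x 3 RGB image with float values in [0, 1], as caffe.io.load_image returns):

```python
import numpy as np

def preprocess(img):
    """Assumed bvlc_googlenet input preprocessing for an
    H x W x 3 RGB image with float values in [0, 1]."""
    x = img * 255.0           # image_scale: bring values into the 0-255 range
    x = x[:, :, ::-1]         # channel_swap (2, 1, 0): RGB -> BGR
    x = x.transpose(2, 0, 1)  # transpose (2, 0, 1): H x W x C -> C x H x W
    return x

# Example with a random 4x4 RGB image
img = np.random.rand(4, 4, 3)
out = preprocess(img)
print(out.shape)  # (3, 4, 4)
```

(Mean subtraction would still be needed on top of this, but I left it out since my question is only about these three parameters.)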
Thanks,
Yoann