1. The center crop has the same size as the input to your network. For example, the pre-trained ImageNet (CaffeNet) model expects 227x227. This parameter (the input size of the image) is defined in
MODEL_FILE = '../models/bvlc_reference_caffenet/deploy.prototxt'
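For reference, the input size sits at the top of the deploy file. In the stock bvlc_reference_caffenet deploy.prototxt it looks roughly like the fragment below (quoted from memory, so verify against your own copy; the first dimension is the batch size):

```
input: "data"
input_dim: 10
input_dim: 3
input_dim: 227
input_dim: 227
```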
Moreover, the `image_dims=(256, 256)` argument passed to the `caffe.Classifier` constructor defines the input resolution from which the central crop is taken. So every image is first resized to 256x256, and then the central 227x227 crop is taken.
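The resize-then-crop step is easy to reproduce outside Caffe. Here is a minimal sketch of taking a 227x227 central crop from a 256x256 array (plain NumPy, not Caffe's internal code):

```python
import numpy as np

def center_crop(img, crop_h, crop_w):
    """Return the central crop_h x crop_w window of an H x W x C image."""
    h, w = img.shape[:2]
    top = (h - crop_h) // 2    # e.g. (256 - 227) // 2 = 14
    left = (w - crop_w) // 2
    return img[top:top + crop_h, left:left + crop_w]

image = np.zeros((256, 256, 3))   # stand-in for an image already resized to 256x256
crop = center_crop(image, 227, 227)
print(crop.shape)                 # (227, 227, 3)
```

The crop is taken symmetrically, so 14 to 15 pixels are discarded from each border.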
2. If you want to extract features from the entire image (no cropping), change:
image_dims=(256, 256)) -> image_dims=(227, 227))
and
prediction = net.predict([input_image], oversample=False)
This will produce a prediction from the whole image, without cropping.
As you may know, extracting features from a layer is rather simple in Caffe. You simply delete all layers after the selected one (say fc7) from the deploy prototxt. Then, after calling net.predict(...), reading net.blobs['fc7'].data will give you the vector of features.
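The "truncate and read" idea is not Caffe-specific. As a hypothetical, framework-agnostic sketch (tiny NumPy layers standing in for the real network; none of these names or shapes come from CaffeNet), running the forward pass only up to a chosen layer yields that layer's activations as the feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": an ordered list of (name, weight matrix) pairs.
layers = [
    ("fc6", rng.standard_normal((8, 16))),
    ("fc7", rng.standard_normal((16, 16))),
    ("fc8", rng.standard_normal((16, 4))),   # classifier layer we want to drop
]

def forward_until(x, layers, stop_after):
    """Run the forward pass, stopping after the named layer."""
    for name, w in layers:
        x = np.maximum(x @ w, 0.0)   # linear layer followed by ReLU
        if name == stop_after:
            return x                 # activations of `stop_after` are the features
    raise ValueError("layer %r not found" % stop_after)

x = rng.standard_normal(8)
features = forward_until(x, layers, "fc7")
print(features.shape)   # (16,)
```

Truncating at fc7 means the final classifier weights are never applied, which is exactly what deleting the trailing layers of the prototxt achieves in Caffe.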