Hi,
The size of the images I use in the training procedure is 256x256. At first I used an LMDB dataset and applied cropping:
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "train_lmdb"
    backend: LMDB
    batch_size: 32
  }
  transform_param {
    crop_size: 224
    mean_file: "mean.binaryproto"
    mirror: true
  }
  include: { phase: TRAIN }
}
In another training procedure I wanted to apply the same cropping to data stored in an HDF5 dataset:
layers {
  name: "mnist"
  type: HDF5_DATA
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "./hdf5/training_output.txt"
    batch_size: 24
  }
  transform_param {
    crop_size: 224
    mean_file: "mean.binaryproto"
    mirror: true
  }
  include: { phase: TRAIN }
}
In the latter case I noticed that crop_size had no effect: when I tried to use the learned model for prediction with input_dim set to 224, it returned an error. When I changed the input_dim in deploy.prototxt to 256, it worked fine.
Anyway, if anyone knows whether it is possible to crop images from an HDF5 dataset in the same way as images from LMDB, I would be thankful for any suggestions.
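In case the HDF5 data layer simply ignores transform_param, one workaround I'm considering is to do the random crop myself when building the HDF5 file, so the stored data is already 224x224. A rough sketch, assuming NumPy and Caffe's (N, C, H, W) layout (the helper name random_crop is mine, and the batch here is placeholder data):

```python
import numpy as np

def random_crop(img, crop_size):
    """Randomly crop a (C, H, W) image to (C, crop_size, crop_size)."""
    _, h, w = img.shape
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return img[:, top:top + crop_size, left:left + crop_size]

# Placeholder batch standing in for the 256x256 images in my dataset
images = np.random.rand(8, 3, 256, 256).astype(np.float32)

# Crop each image to 224x224 before writing it to the HDF5 file
cropped = np.stack([random_crop(img, 224) for img in images])
print(cropped.shape)
```

The cropped array (plus labels) could then be written out with h5py via create_dataset as usual; the downside is that the crop is fixed per epoch rather than re-sampled every iteration the way the DATA layer does it.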
Best regards,
Niko