Python prediction different from model prediction in test phase during training


Yoann

unread,
Feb 18, 2015, 11:22:40 AM2/18/15
to caffe...@googlegroups.com
Hello all,

I know there are some known issues with predicting on new data in Python, but I'm a bit confused. I don't really understand what to use for prediction on new values in Python: the MEMORY_DATA layer, or the approach used in the CaffeNet deploy.prototxt example?

I tried the second option, but the values predicted by my Python script differ from those I get on my test set during training. First I create the model using my own training data in LMDB format and a test set, also in LMDB format (see below for how they are created). Preprocessing is the same as in the ImageNet model (mean subtraction, and 3x256x256 images are cropped to 3x227x227).
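For intuition, the mean subtraction and 256-to-227 center crop can be sketched in plain NumPy (a minimal illustration, not the actual Caffe DataTransformer; the array values and the helper name `center_crop` are made up for this example):

```python
import numpy as np

def center_crop(img, crop_size):
    """Return the central crop_size x crop_size region of a C x H x W array."""
    _, h, w = img.shape
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    return img[:, top:top + crop_size, left:left + crop_size]

# Illustrative data: a 3x256x256 "image" and a constant dataset mean.
img = np.random.rand(3, 256, 256).astype(np.float32) * 255
mean = np.full((3, 256, 256), 104.0, dtype=np.float32)

# Subtract the mean, then take the 227x227 center crop,
# matching the crop_size/mean_file transform_param below.
preprocessed = center_crop(img - mean, 227)
assert preprocessed.shape == (3, 227, 227)
```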

Thus, the input of my test data in train_val.prototxt is:
 
layers {
  name: "data"
  type: DATA
  top: "data"
  data_param {
    source: "data/liris-accede/test_data_lmdb"
    backend: LMDB
    batch_size: 16
  }
  transform_param {
    crop_size: 227
    mean_file: "data/liris-accede/train_data_mean.binaryproto"
    mirror: false
  }
  include: { phase: TEST }
}

Now that the model is created (final MSE on the test set is 0.028), I want to run the model on my test set and compute the MSE to see if it is the same.

Now in my deploy.prototxt the input data is:

input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227

and in my Python script I load the original images (not the images in LMDB format) and predict values on them:

# Set the right path to your model definition file, pretrained model weights,
# and the image you would like to classify.
MODEL_FILE = '../models/liris-accede_baseline/deploy.prototxt'
PRETRAINED = '../models/liris-accede_baseline/_iter_5000.caffemodel'
TEST_FILE = '../data/liris-accede/test_arousal.txt'
MEAN_FILE = '../data/liris-accede/train_data_mean.binaryproto'

# Open mean.binaryproto file
blob = caffe.proto.caffe_pb2.BlobProto()
data = open(MEAN_FILE, 'rb').read()
blob.ParseFromString(data)
mean_arr = caffe.io.blobproto_to_array(blob)

# Initialize NN
net = caffe.Classifier(MODEL_FILE, PRETRAINED)
net.set_phase_test()
net.set_mode_gpu()

# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
net.set_mean('data', mean_arr[0])  # ImageNet mean
net.set_raw_scale('data', 255)  # the reference model operates on images in [0,255] range instead of [0,1]
net.set_channel_swap('data', (2,1,0))  # the reference model has channels in BGR order instead of RGB

# Load test file
Inputs = ['...']  # list of paths
GroundTruths = ['...']  # list of ground truths
Predictions = []

# Compute predictions
for idx in range(len(Inputs)):
    im = caffe.io.load_image_uint(Inputs[idx])
    # predict takes any number of images and formats them for the Caffe net
    # automatically; it averages predictions across center, corners, and
    # mirrors when its second argument is True (default), and does a
    # center-only prediction when False.
    prediction = net.predict([im], False)
    Predictions.append(prediction[0][0])

The MSE of the values predicted by the Python script is 0.042, which is very different from the one obtained on the test set during training (0.028).
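For reference, the MSE being compared here is just the mean of squared differences between predictions and ground truths. A minimal sketch (the input lists are placeholders, not the thread's actual data):

```python
import numpy as np

def mse(predictions, ground_truths):
    """Mean squared error between two equal-length sequences."""
    p = np.asarray(predictions, dtype=np.float64)
    g = np.asarray(ground_truths, dtype=np.float64)
    return float(np.mean((p - g) ** 2))

# Placeholder values, only to show the calculation.
error = mse([0.1, 0.4, 0.3], [0.2, 0.4, 0.1])
```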

Do you have an idea why these values are so different?

NB: To create my LMDB files I use the following (the paths of the pictures in my set are stored in the list 'Inputs'; 'in_' is the current path in the loop):

in_db_data = lmdb.open(lmdb_data_name, map_size=int(1e12))
with in_db_data.begin(write=True) as in_txn:
    for in_idx, in_ in enumerate(Inputs):
        im = caffe.io.load_image_uint(in_)
        im = im[:, :, (2, 1, 0)]
        im = im.transpose((2, 0, 1))
        im = im.astype(np.uint8, copy=False)
        im_dat = caffe.io.array_to_datum(im)
        in_txn.put('{:0>10d}'.format(1000*idx + in_idx), im_dat.SerializeToString())

Evan Shelhamer

unread,
Feb 18, 2015, 1:37:20 PM2/18/15
to Yoann, caffe...@googlegroups.com
Have a look at my comment on #1774: https://github.com/BVLC/caffe/issues/1774#issue-55103559. If you have a test LMDB, you can load the test net definition (with the data layer rather than input fields) and call forward in Python to get the results.
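A minimal sketch of that suggestion, assuming the 2015-era pycaffe interface used elsewhere in this thread (the paths mirror the ones above, and the batch count is an illustrative assumption; newer Caffe versions set the phase differently):

```python
import caffe

# Load the train/val definition, which contains the LMDB DATA layer,
# together with the trained weights.
net = caffe.Net('models/liris-accede_baseline/train_val.prototxt',
                'models/liris-accede_baseline/_iter_5000.caffemodel')
net.set_phase_test()  # old-style API, as used in the script above
net.set_mode_gpu()

predictions = []
num_batches = 100  # illustrative: enough forward passes to cover the test LMDB
for _ in range(num_batches):
    out = net.forward()  # each call consumes one batch from the test LMDB
    predictions.extend(out[net.outputs[0]].flatten())
```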

Hope that helps,

Evan Shelhamer


Yoann

unread,
Feb 19, 2015, 4:09:33 AM2/19/15
to caffe...@googlegroups.com, yoann....@gmail.com
Thanks so much, it's working now :-)
Actually, I have a test LMDB, but it will become a validation LMDB, and I need a Python script that can compute predictions on raw JPEG files (my test set). Using Net instead of Classifier, as you suggested in your post, really helped me understand what happened.

My problem was that I had created a new function (load_image_uint) to load images as uint8.
The scale of the images was therefore already [0, 255], and my error was adding the line net.set_raw_scale('data', 255).
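The bug can be illustrated without Caffe at all: raw_scale=255 is only meant to lift a [0, 1] image into the [0, 255] range the model expects, so applying it to an already-[0, 255] image scales the values twice (the array values here are made up for illustration):

```python
import numpy as np

# caffe.io.load_image returns floats in [0, 1]; a uint-style loader
# (like the custom load_image_uint above) returns values in [0, 255].
img_float = np.array([[[0.5]]])
img_uint = img_float * 255.0  # 127.5

# Correct: scale only the [0, 1] image up to [0, 255].
assert np.allclose(img_float * 255.0, img_uint)

# Bug: a second raw_scale=255 on an already-[0, 255] image feeds values
# up to 255 * 255 = 65025 into the net.
double_scaled = img_uint * 255.0
assert np.allclose(double_scaled, 32512.5)
```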

Thanks again,
Yoann

Evan Shelhamer

unread,
Feb 19, 2015, 1:46:28 PM2/19/15
to Yoann, caffe...@googlegroups.com
For the validation set of JPEGs, you can define a net with an ImageData layer but the same transform_param as your current data layer. Life is easier with `caffe.Net` and data layers, as long as you double-check the details.
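A sketch of such a layer in the old-style prototxt syntax used earlier in this thread; the source list file name is an illustrative assumption (that file would list one "path label" pair per line):

```
layers {
  name: "data"
  type: IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    source: "data/liris-accede/val_images.txt"
    batch_size: 16
  }
  transform_param {
    crop_size: 227
    mean_file: "data/liris-accede/train_data_mean.binaryproto"
    mirror: false
  }
  include: { phase: TEST }
}
```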

Evan Shelhamer

Zheng Shou

unread,
Mar 5, 2015, 11:46:56 PM3/5/15
to caffe...@googlegroups.com, yoann....@gmail.com
Dear all,

I got the error "Unknown blob input data to layer 0" when trying to test on an LMDB to get the prediction scores for all classes. Besides including the TEST phase in the prototxt, is there anything else I should do? I use caffe.Net(MODEL_FILE, PRETRAINED) and then net.forward(). Is that correct?

Thanks so much.


On Wednesday, February 18, 2015 at 1:37:20 PM UTC-5, Evan Shelhamer wrote: