Testing Pretrained MNIST Model on Single Image

E Sexton

Jan 16, 2018, 12:16:20 PM
to Caffe Users
I have the MNIST model trained after 10000 iterations, showing 99% accuracy, and I am attempting to test it on a new single image. Following the steps here, I have modified the network for deployment; the full prototxt can be found here: https://pastebin.com/fJr0Fij0

Using a prediction routine found elsewhere, I find that the model is unable to reliably classify my new image:


import caffe
import numpy as np

caffe.set_mode_cpu()

MODEL_FILE = 'lenet_deploy.prototxt'
PRETRAINED = 'lenet_iter_10000.caffemodel'

net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       raw_scale=1,
                       image_dims=(28, 28))

def caffe_predict(path):
    # load_image returns an (H, W, 3) float array in [0, 1]; keep a
    # single channel and restore the trailing channel axis
    input_image = caffe.io.load_image(path)
    input_image = input_image[:, :, 0].reshape(28, 28, 1)
    print(path)

    prediction = net.predict([input_image])
    print(prediction)
    print("----------")
    print('predicted class: %d' % prediction[0].argmax())

    # Probability of the winning class and the top-5 predictions
    proba = prediction[0][prediction[0].argmax()]
    ind = prediction[0].argsort()[-5:][::-1]

    return prediction[0].argmax(), proba, ind

prediction, prob, ind = caffe_predict('test.png')


Przemek D

Jan 19, 2018, 5:50:55 AM
to Caffe Users
Isn't the MNIST dataset black-on-white? That is, the background is white and the digits are black? My version of the dataset looks that way.
Please check your classifier on an inverted image.
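A minimal way to run that inversion check, assuming the `net` classifier from the first post (caffe.io.load_image returns floats in [0, 1], so inversion is just subtraction):

input_image = caffe.io.load_image('test.png')
input_image = input_image[:, :, 0].reshape(28, 28, 1)

# Flip foreground and background: white-on-black <-> black-on-white
inverted = 1.0 - input_image

prediction = net.predict([inverted])
print('predicted class (inverted): %d' % prediction[0].argmax())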

E Sexton

Jan 19, 2018, 9:10:09 AM
to Caffe Users
No dice, I'm getting the same random classifications as before.

Przemek D

Jan 22, 2018, 6:29:15 AM
to Caffe Users
What about preprocessing? Do you do anything like mean subtraction or scaling? Any transformations must stay the same between training and deployment.
Also, what about the images you trained on? When you test the model on those, are they classified correctly?
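For reference, the stock LeNet MNIST training prototxt applies transform_param { scale: 0.00390625 } to raw 0-255 pixels, so deploy-time inputs should span roughly [0, 1) as well. Here is a minimal sketch of a matched forward pass using caffe.Net directly; the output blob name 'prob' is assumed from the standard deploy prototxt:

import caffe
import numpy as np

net = caffe.Net('lenet_deploy.prototxt', 'lenet_iter_10000.caffemodel',
                caffe.TEST)

# Load as grayscale: float32 in [0, 1], shape (H, W, 1)
img = caffe.io.load_image('test.png', color=False)
img = caffe.io.resize_image(img, (28, 28))

# Match training exactly: raw bytes were multiplied by 1/256,
# so rescale the [0, 1] floats to [0, 255/256]
img *= 255.0 * 0.00390625

# (H, W, 1) -> (1, 1, 28, 28): batch, channel, height, width
net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[...] = img.transpose(2, 0, 1)[np.newaxis, :]

out = net.forward()
print('predicted class: %d' % out['prob'].argmax())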

Jonathan R. Williford

Jan 22, 2018, 8:10:00 AM
to Caffe Users
E,

To add to Przemek's comment, look at how the values of the blobs change between datasets. That is, look at the min, mean, median, and max of the data and conv1 blobs after running the forward pass and make sure they are similar. Convolutional networks are typically very sensitive to the range of their input values, so this should be a standard check in your procedure.
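A rough sketch of that check, assuming the `net` object from the first post (caffe.Classifier subclasses caffe.Net, so its intermediate blobs are accessible after a predict/forward call):

import numpy as np

def blob_stats(net, names=('data', 'conv1')):
    for name in names:
        d = net.blobs[name].data
        print('%-6s min=%.4f mean=%.4f median=%.4f max=%.4f'
              % (name, d.min(), d.mean(), np.median(d), d.max()))

# Call once after a forward pass on a training image and once on the
# new image; the statistics should be in the same ballpark for both
blob_stats(net)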

Cheers,
Jonathan
