Caffe Classify.py Prediction


Saman Sarraf

Feb 23, 2016, 3:12:35 PM
to Caffe Users
Hi Caffe experts,

I have trained LeNet on a personal dataset and achieved around 90% accuracy. I repeated the training process a few times and the accuracy is fairly consistent.
After considerable effort, I managed to use classify.py to test single images against my trained network. The results are very strange, and I think something must be wrong.
I tested the trained network with the same test data that gave 90% accuracy during training, but after counting the correct labels from classify.py's output, I got an accuracy of only around 35%. My question: can this actually happen, or is there something wrong in the script I wrote on top of Caffe's classify.py?
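For reference, here is roughly how I count the correct labels. This is a simplified sketch: classify.py saves an (N, num_classes) probability array, and the toy probabilities and ground-truth labels below are made up for illustration.

```python
import numpy as np

def top1_labels(probs):
    """Return the top-1 (argmax) label for each row of an
    (N, num_classes) probability array, such as the one
    classify.py writes to its output file."""
    return np.argmax(probs, axis=1)

def accuracy(predicted, truth):
    """Fraction of predicted labels that match the ground truth."""
    predicted = np.asarray(predicted)
    truth = np.asarray(truth)
    return float(np.mean(predicted == truth))

# Toy example with made-up probabilities and labels:
probs = np.array([[0.1, 0.9],
                  [0.8, 0.2],
                  [0.3, 0.7]])
preds = top1_labels(probs)         # -> [1, 0, 1]
print(accuracy(preds, [1, 0, 0]))  # 2 of 3 correct
```

If this counting step is right, then the discrepancy presumably comes from the network inputs themselves, e.g. preprocessing differences between training and classify.py.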

Perhaps an easier question: if LeNet reports 90% accuracy and I test the trained network on the same test data, I should get about 90% accuracy again, right?

Your prompt help is much appreciated.
S.