I use the “same” dataset for validation and testing, so why do I get different accuracy?

Lei Xun

May 17, 2019, 7:19:23 AM5/17/19
to Caffe Users

Hi,


I am following the Caffe CIFAR-10 example (http://caffe.berkeleyvision.org/gathered/examples/cifar10.html).

In this example, the CIFAR-10 training and test sets from http://www.cs.toronto.edu/~kriz/cifar.html are used to train the network and validate the training. I got 0.7509 Top-1 and 0.9825 Top-5 validation accuracy.


Then I modified the CaffeNet C++ Classification example to test my trained CIFAR-10 network. I kept the classification C++ code and changed it to classify the 10,000 test images one by one, pointing it at my CIFAR-10 model (.caffemodel), the network architecture (.prototxt), the image mean, the label file, and the CIFAR-10 images in PNG format from https://pjreddie.com/projects/cifar-10-dataset-mirror/. However, I got only 0.5748 Top-1 and 0.9300 Top-5 testing accuracy.
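
For reference, my accuracy loop around the example's Classifier class looks roughly like the sketch below, with the example's original main() replaced. The file names and the "<png path> <label index>" list format are placeholders for my own setup, and I assume the Classifier and Prediction types from classification.cpp are visible in the same translation unit:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
  // File names below are placeholders for my own deploy net, weights,
  // mean file and label file.
  Classifier classifier("cifar10_quick.prototxt",
                        "cifar10_quick_iter_5000.caffemodel",
                        "mean.binaryproto",
                        "labels.txt");

  // Class names, one per line, in the same order as the label indices.
  std::vector<std::string> classes;
  std::ifstream label_file("labels.txt");
  std::string line;
  while (std::getline(label_file, line)) classes.push_back(line);

  // test.txt lists one "<png path> <label index>" pair per line.
  std::ifstream list("test.txt");
  std::string path;
  int label, n = 0, top1 = 0, top5 = 0;

  while (list >> path >> label) {
    cv::Mat img = cv::imread(path, -1);
    if (img.empty()) continue;
    ++n;

    // Top-5 predictions as (class name, probability) pairs.
    std::vector<Prediction> preds = classifier.Classify(img, 5);

    if (preds[0].first == classes[label]) ++top1;
    for (size_t k = 0; k < preds.size(); ++k) {
      if (preds[k].first == classes[label]) { ++top5; break; }
    }
  }

  std::cout << "Top-1: " << top1 / static_cast<float>(n)
            << "  Top-5: " << top5 / static_cast<float>(n) << std::endl;
  return 0;
}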


I think the two results should match, because the images are the same and only the storage format differs: LMDB during validation and PNG during testing.
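
To check that assumption, I put together a small sketch (the LMDB path and the PNG file name are placeholders) that prints the shape, label and first few raw bytes of the first LMDB record next to the matching PNG read with cv::imread, so I can see whether anything like channel order (the LMDB datum holds the planar bytes from the CIFAR-10 binary, while imread returns interleaved BGR) or value range differs between the two:

#include <iostream>
#include <string>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"

int main() {
  // Read the first record of the validation LMDB (path is a placeholder).
  caffe::db::DB* db = caffe::db::GetDB("lmdb");
  db->Open("examples/cifar10/cifar10_test_lmdb", caffe::db::READ);
  caffe::db::Cursor* cursor = db->NewCursor();
  cursor->SeekToFirst();

  caffe::Datum datum;
  datum.ParseFromString(cursor->value());
  std::cout << "LMDB key " << cursor->key()
            << " label " << datum.label()
            << " shape " << datum.channels() << "x"
            << datum.height() << "x" << datum.width()
            << " first bytes:";
  for (int i = 0; i < 6; ++i)
    std::cout << " " << static_cast<int>(
        static_cast<unsigned char>(datum.data()[i]));
  std::cout << std::endl;

  // Load what should be the same image from the PNG set
  // (replace with whichever file matches the key printed above).
  cv::Mat img = cv::imread("cifar/test/0_cat.png", -1);
  std::cout << "PNG shape " << img.channels() << "x"
            << img.rows << "x" << img.cols
            << " first bytes (interleaved BGR):";
  for (int i = 0; i < 6; ++i)
    std::cout << " " << static_cast<int>(img.data[i]);
  std::cout << std::endl;

  delete cursor;
  delete db;
  return 0;
}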


What could potentially cause this difference?


Thanks.
