Validation vs Deploy (getting different results for the same dataset)

Kağan İncetan

Oct 17, 2017, 5:43:43 AM
to Caffe Users
Hi,

I have trained a model based on the open-source PoseNet example from Alex Kendall, using my own dataset.

There is a validation script provided with the example. It uses an LMDB that I created from the custom images I set aside for validation, and it reports the median and mean error by comparing the real labels with the predicted labels. Please see the attached script (test_posenet.py).
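For context, PoseNet-style test scripts usually report the errors roughly like the sketch below. This is only an illustration, not the attached test_posenet.py; the function and array names are placeholders. The position error is the Euclidean distance between predicted and true translation, and the orientation error is the angle between the predicted and true quaternions.

import numpy as np

def pose_errors(pred_xyz, true_xyz, pred_q, true_q):
    # Position error: straight-line distance between predicted and true translation.
    pos_err = np.linalg.norm(np.asarray(pred_xyz) - np.asarray(true_xyz))
    # Orientation error: angle (in degrees) between the two unit quaternions.
    q1 = np.asarray(pred_q) / np.linalg.norm(pred_q)
    q2 = np.asarray(true_q) / np.linalg.norm(true_q)
    d = min(1.0, abs(np.dot(q1, q2)))
    ori_err = 2.0 * np.degrees(np.arccos(d))
    return pos_err, ori_err

# One (pos_err, ori_err) pair is collected per validation image, and the
# script then reports np.median(...) and np.mean(...) over the whole set.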

I then wrote a script to deploy the network and predict labels for images it has not seen. Because I was not sure whether my deploy script works properly, I fed the same validation dataset into it to check whether I would get the same errors. However, the mean and median errors turned out to be very different, which suggests there is a problem in my deployment. Please see the attached script (deploy.py).

Can someone help me see where I might be going wrong?

P.S. I use train_posenet.prototxt for the validation set (test_posenet.py), since that network definition includes a test phase and reads its images from the LMDB folder. For deploy.py I use deploy_posenet.prototxt, which is also provided with the example, so I don't think the problem lies there. I assume something is wrong in my preprocessing.
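What I mainly want to verify is that the preprocessing in deploy.py matches what the LMDB data layer does during the test phase. As a sketch of what I mean (the weights file, mean file, resize and crop sizes below are placeholders and would have to match my actual LMDB and prototxt settings):

import caffe
import cv2
import numpy as np

net = caffe.Net('deploy_posenet.prototxt', 'weights.caffemodel', caffe.TEST)

# Load the same mean the data layer subtracts (assumed to be a binaryproto).
blob = caffe.proto.caffe_pb2.BlobProto()
blob.ParseFromString(open('imagemean.binaryproto', 'rb').read())
mean = caffe.io.blobproto_to_array(blob)[0]       # (3, H, W), BGR, like the LMDB

def preprocess(path, crop=224):
    img = cv2.imread(path)                            # BGR uint8, as stored in the LMDB
    img = cv2.resize(img, (455, 256))                 # placeholder: same resize as LMDB creation
    img = img.transpose(2, 0, 1).astype(np.float32)   # HWC -> CHW
    img -= mean                                       # same mean subtraction as the data layer
    _, h, w = img.shape                               # centre crop to the deploy input size
    top, left = (h - crop) // 2, (w - crop) // 2
    return img[:, top:top + crop, left:left + crop]

net.blobs['data'].data[0] = preprocess('validation_image.png')
out = net.forward()

If any step here differs from the data layer (channel order, mean, scale, crop), the predictions on the validation images would no longer match the test-phase results.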

Thanks in advance

With best regards
deploy.py
deploy_posenet.prototxt
test_posenet.py
train_posenet.prototxt