I found the problem: it is in the pre-processing step. I used tools/convert_imageset.cpp to build my train and validation LMDBs, and in my C++ classifier I do the preprocessing manually. So I read caffe/data_transformer.cpp (the code used by the data layer) and changed my manual preprocessing to match it. But my manual classifier is still about 1% worse in accuracy than caffe test. Below are my manual preprocessing steps; my samples are grayscale images.
cv::Mat img = cv::imread("xx.jpg", -1);
cv::resize(img, img, cv::Size(30, 30)); (convert_imageset.cpp also uses cv::resize, with the same options.)
img.convertTo(img, CV_32FC1);
img = (img - mean_) * scale; (I use the same scale parameter and mean.binaryproto as the transform_param of the data layer in my train_val.prototxt, and my mean-loading code is the same as examples/cpp_classification.) caffe/data_transformer.cpp does the same operation, just element by element inside a loop, roughly (img[i] - mean_[i]) * scale, which I think is equivalent.
copy img into the net's input blob.
run the forward pass. (A full sketch of these steps follows below.)
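For reference, here is a minimal sketch of the manual pipeline, assuming the net, the mean_ matrix (CV_32FC1, loaded from mean.binaryproto as in examples/cpp_classification), and the scale value are already set up; the file name, the 30x30 size, and the single one-channel input blob are just placeholders from my setup:

#include <opencv2/opencv.hpp>
#include <caffe/caffe.hpp>

// Sketch only: net, mean_ (CV_32FC1, 30x30) and scale are assumed to be prepared already.
void Classify(caffe::Net<float>& net, const cv::Mat& mean_, float scale) {
  // 1. Load the grayscale sample unchanged (flag -1 keeps the single channel).
  cv::Mat img = cv::imread("xx.jpg", -1);

  // 2. Resize with cv::resize, the same call convert_imageset.cpp uses.
  cv::resize(img, img, cv::Size(30, 30));

  // 3. Convert to float before subtracting the mean.
  img.convertTo(img, CV_32FC1);

  // 4. Mean subtraction and scaling; data_transformer.cpp does the same thing
  //    element by element: (pixel - mean_pixel) * scale.
  img = (img - mean_) * scale;

  // 5. Copy the image into the input blob (one image, one channel).
  caffe::Blob<float>* input = net.input_blobs()[0];
  input->Reshape(1, 1, img.rows, img.cols);
  net.Reshape();
  cv::Mat wrapped(img.rows, img.cols, CV_32FC1, input->mutable_cpu_data());
  img.copyTo(wrapped);

  // 6. Forward pass.
  net.Forward();
}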
So now I think I do the same operations as caffe test, but I still get a result that is about 1% worse, and I would like to know why. I also wish Caffe provided a preprocessing layer for deploy.prototxt that is guaranteed to match train_val.prototxt. This kind of problem is tricky and hard to track down.
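As a side note, and only as a sketch of one existing option rather than a fix for the accuracy gap: a MemoryData layer also accepts transform_param, so the training-time DataTransformer (mean and scale) can be reused at deploy time by feeding raw cv::Mat images into it. The layer name "data" and the dummy label below are assumptions about the deploy net, header paths may differ between Caffe versions, and the image still has to be resized to the layer's expected size first.

#include <vector>
#include <boost/shared_ptr.hpp>
#include <opencv2/opencv.hpp>
#include <caffe/caffe.hpp>
#include <caffe/layers/memory_data_layer.hpp>

// Sketch: the deploy net is assumed to start with a MemoryData layer named "data"
// whose transform_param (mean_file, scale) is copied from train_val.prototxt.
void ClassifyViaMemoryData(caffe::Net<float>& net, const cv::Mat& gray_img) {
  boost::shared_ptr<caffe::MemoryDataLayer<float> > input_layer =
      boost::dynamic_pointer_cast<caffe::MemoryDataLayer<float> >(
          net.layer_by_name("data"));

  // AddMatVector applies the layer's DataTransformer, so mean subtraction and
  // scaling do not have to be reproduced by hand.
  std::vector<cv::Mat> images(1, gray_img);
  std::vector<int> labels(1, 0);  // dummy label required by MemoryData
  input_layer->AddMatVector(images, labels);

  net.Forward();
}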
On Saturday, April 30, 2016 at 2:40:53 AM UTC+9, Hongxin Liu wrote: