I trained on two classes of grayscale images, labeled 0 and 1. The images were converted to LMDB with create_imagenet.sh, modified to pass the --gray flag. The training results were satisfying, and I snapshotted the caffemodel from the last iteration, intending to apply it to the test set through the MATLAB wrapper.
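For reference, the conversion step looks roughly like this (a sketch only; the paths, list file, and resize values are placeholders for my actual setup, and `--gray=true` is the flag the stock convert_imageset tool accepts for single-channel input):

```shell
# Convert labeled grayscale images into LMDB (paths are placeholders)
GLOG_logtostderr=1 build/tools/convert_imageset \
    --gray=true \
    --shuffle \
    train_images/ train.txt train_lmdb
```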
I1223 11:24:44.722620 23351 solver.cpp:298] Test net output #0: accuracy = 0.932667
I1223 11:24:44.722808 23351 solver.cpp:298] Test net output #1: loss = 0.186326 (* 1 = 0.186326 loss)
I1223 11:24:45.010155 23351 solver.cpp:191] Iteration 8700, loss = 0.126491
I1223 11:24:45.010244 23351 solver.cpp:206] Train net output #0: loss = 0.126491 (* 1 = 0.126491 loss)
I1223 11:24:45.010268 23351 solver.cpp:403] Iteration 8700, lr = 1e-05
I1223 11:24:59.381989 23351 solver.cpp:191] Iteration 8750, loss = 0.0999351
I1223 11:24:59.382076 23351 solver.cpp:206] Train net output #0: loss = 0.0999351 (* 1 = 0.0999351 loss)
I1223 11:24:59.382098 23351 solver.cpp:403] Iteration 8750, lr = 1e-05
I1223 11:25:13.754562 23351 solver.cpp:191] Iteration 8800, loss = 0.0708305
I1223 11:25:13.754653 23351 solver.cpp:206] Train net output #0: loss = 0.0708305 (* 1 = 0.0708305 loss)
I1223 11:25:13.754678 23351 solver.cpp:403] Iteration 8800, lr = 1e-05
I1223 11:25:28.124819 23351 solver.cpp:191] Iteration 8850, loss = 0.284232
I1223 11:25:28.125131 23351 solver.cpp:206] Train net output #0: loss = 0.284232 (* 1 = 0.284232 loss)
I1223 11:25:28.125159 23351 solver.cpp:403] Iteration 8850, lr = 1e-05
I1223 11:25:42.494565 23351 solver.cpp:191] Iteration 8900, loss = 0.121537
I1223 11:25:42.494654 23351 solver.cpp:206] Train net output #0: loss = 0.121537 (* 1 = 0.121537 loss)
I1223 11:25:42.494676 23351 solver.cpp:403] Iteration 8900, lr = 1e-05
I1223 11:25:56.866093 23351 solver.cpp:191] Iteration 8950, loss = 0.146871
I1223 11:25:56.866194 23351 solver.cpp:206] Train net output #0: loss = 0.146871 (* 1 = 0.146871 loss)
I1223 11:25:56.866219 23351 solver.cpp:403] Iteration 8950, lr = 1e-05
I1223 11:26:10.955507 23351 solver.cpp:317] Snapshotting to examples/mywork/sig_3000_1/sig_3000_1_iter_9000.caffemodel
I1223 11:26:10.965046 23351 solver.cpp:324] Snapshotting solver state to examples/mywork/sig_3000_1/sig_3000_1_iter_9000.solverstate
I1223 11:26:11.044157 23351 solver.cpp:228] Iteration 9000, loss = 0.0784842
I1223 11:26:11.044220 23351 solver.cpp:247] Iteration 9000, Testing net (#0)
I1223 11:26:13.243867 23351 solver.cpp:298] Test net output #0: accuracy = 0.936667
I1223 11:26:13.243935 23351 solver.cpp:298] Test net output #1: loss = 0.178111 (* 1 = 0.178111 loss)
I1223 11:26:13.243957 23351 solver.cpp:233] Optimization Done.
I1223 11:26:13.243980 23351 caffe.cpp:121] Optimization Done.
The deploy.prototxt was written according to train_val.prototxt; its input dimensions are:
input: "data"
input_dim: 1
input_dim: 1
input_dim: 256
input_dim: 256
I wrote my own MATLAB test script following matcaffe_demo.m. Since I did not use mirroring or cropping during training, I simply convert each image from uint8 to single, subtract the training-set mean, and pass the result to caffe('forward', input_data). Everything runs without errors, but the single-image test accuracy is disappointing (I tried several different images). To confirm this, I adapted the script to run over a large test set, changing only the input images and collecting the output scores into matrices, without following matcaffe_batch. The accuracy is around 0.5 on test images that were copied directly from the Caffe validation set.
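One detail worth double-checking in a hand-written matcaffe script is the blob layout: MATLAB stores images column-major as height x width, while Caffe's blobs expect width x height, so matcaffe_demo.m permutes the first two dimensions before the forward pass. The preprocessing I describe above can be sketched as follows (in numpy for illustration; `prepare_gray_image` and `mean_img` are names I am assuming, not part of Caffe):

```python
import numpy as np

def prepare_gray_image(img, mean_img):
    """Sketch of matcaffe_demo.m-style preprocessing for a grayscale
    image, minus crop/mirror: convert uint8 to float, subtract the
    training-set mean, then transpose H x W -> W x H as the MATLAB
    wrapper does before handing data to Caffe."""
    img = img.astype(np.float32)   # uint8 -> single, before mean subtraction
    img -= mean_img                # must be the mean of the *training* set
    img = img.T                    # the permute step matcaffe_demo.m applies
    # Shape as a single-image, single-channel batch (C = 1 for gray)
    return img.reshape(1, 1, *img.shape)

# Toy example with a 2x3 "image" and a constant mean of 1
img = np.arange(6, dtype=np.uint8).reshape(2, 3)
mean_img = np.ones((2, 3), dtype=np.float32)
blob = prepare_gray_image(img, mean_img)
print(blob.shape)  # (1, 1, 3, 2)
```

If the transpose is skipped, the forward pass still runs (the element count matches), but every input is effectively scrambled, which would produce exactly the near-chance accuracy I am seeing.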
This confuses me, since I am using the same caffemodel and deploy.prototxt that achieved the high classification accuracy during training. Is my MATLAB test script wrong, or have I overlooked some detail of how Caffe should be used?