Hi All,
I've modified the mnist example to work on a different dataset of 20,000 images (size 1x18x36, CxWxH) with 2 classes. In short, I did the following:
1. Modified create_imagenet.sh in /examples/imagenet to produce an lmdb train and test set (no resizing).
2. Modified make_imagenet_mean.sh and computed the image mean.
3. Modified lenet_solver.prototxt and lenet_train_test.prototxt to make use of the new data (a rough sketch of these changes follows the list).
4. Started training on the new data.
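To give an idea of what I changed, here is roughly what the data layers and the last inner product layer in my lenet_train_test.prototxt look like now. The paths, layer names, and batch sizes below are illustrative placeholders, not copied verbatim from my files:

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    # mean computed in step 2 (illustrative path)
    mean_file: "examples/mydata/mean.binaryproto"
  }
  data_param {
    # train lmdb produced in step 1 (illustrative path)
    source: "examples/mydata/train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param {
    mean_file: "examples/mydata/mean.binaryproto"
  }
  data_param {
    source: "examples/mydata/test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
...
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param {
    num_output: 2   # changed from 10 (MNIST digits) to my 2 classes
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}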
This then gives me a bunch of output. When training starts, the accuracy is really low, almost 0, which is very strange: with only two classes, even random guessing should get roughly half the labels right, so accuracy should be around 0.5 in expectation. As I understand from the documentation, the AccuracyLayer simply reports the fraction of samples whose top prediction matches the label (see
http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1AccuracyLayer.html , then scroll to forward_cpu).
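For reference, the accuracy layer itself should be the stock one from the LeNet example; I don't believe I changed it:

layer {
  # unchanged from the stock lenet_train_test.prototxt, as far as I can tell
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}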
I've run the train command several times and found that in some runs (about 1 in 8) the accuracy instead stagnates at around 0.5.
What could I be doing wrong? Any help is greatly appreciated.
PS: I also included part of the output from one of my training runs below.
Solving LeNet
Learning Rate Policy: inv
Iteration 0, Testing net (#0)
Test net output #0: accuracy = 0.0065625
Test net output #1: loss = 70.2702 (* 1 = 70.2702 loss)
Iteration 0, loss = 73.7216
Train net output #0: loss = 73.7216 (* 1 = 73.7216 loss)
Iteration 0, lr = 0.01
Iteration 100, loss = 87.3365
Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
Iteration 100, lr = 0.00992565
Iteration 200, loss = 87.3365
Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
Iteration 200, lr = 0.00985258
Iteration 300, loss = 87.3365
Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
Iteration 300, lr = 0.00978075
Iteration 400, loss = 87.3365
Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)
Iteration 400, lr = 0.00971013
Iteration 500, Testing net (#0)
Test net output #0: accuracy = 0
Test net output #1: loss = 87.3365 (* 1 = 87.3365 loss)
Iteration 500, loss = 87.3365
Train net output #0: loss = 87.3365 (* 1 = 87.3365 loss)