Getting 0% accuracy while fine-tuning


Vir Gandhi

Mar 27, 2016, 4:09:53 PM
to Caffe Users
Hi,
I'm new to Caffe and have been trying different examples. One of them is fine-tuning an ImageNet-trained Caffe model on a new dataset, so I followed the notebook example on the Caffe website: Fine-tune the ImageNet-trained CaffeNet on the "Flickr Style" dataset.

I prepared my dataset accordingly and also changed the number of outputs in the final layer from 5 to 7 (I have 7 categories, while the Flickr Style example has 5) by editing the .prototxt files.
But after training I'm getting 0% accuracy, and I'm not able to figure out where I'm making a mistake.
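
One quick sanity check on the layer change (a minimal sketch -- it assumes the final layer is still named fc8_flickr as in the notebook, and deploy.prototxt / weights.caffemodel are placeholders for the real files) is to verify the new output count from Python:

import caffe

caffe.set_mode_cpu()
# Placeholder file names -- substitute the actual deploy prototxt and snapshot.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# The weight blob of the final InnerProduct layer has shape
# (num_output, input_dim), so the first dimension should now be 7.
print(net.params['fc8_flickr'][0].data.shape)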

I'm also attaching my .prototxt files, the Python code, and the last part of my output:

Running solvers for 200 iterations...
I0328 06:08:32.630990  5870 solver.cpp:228] Iteration 0, loss = 1.94591
I0328 06:08:32.631019  5870 solver.cpp:244]     Train net output #0: acc = 0
I0328 06:08:32.631037  5870 solver.cpp:244]     Train net output #1: loss = 1.94591 (* 1 = 1.94591 loss)
I0328 06:08:32.631059  5870 sgd_solver.cpp:106] Iteration 0, lr = 0.001
I0328 06:08:32.715519  5870 solver.cpp:228] Iteration 0, loss = 1.94591
I0328 06:08:32.715559  5870 solver.cpp:244]     Train net output #0: acc = 0
I0328 06:08:32.715579  5870 solver.cpp:244]     Train net output #1: loss = 1.94591 (* 1 = 1.94591 loss)
I0328 06:08:32.715596  5870 sgd_solver.cpp:106] Iteration 0, lr = 0.001
  0) pretrained: loss=1.946, acc= 0%; scratch: loss=1.946, acc= 0%
 10) pretrained: loss=0.000, acc=100%; scratch: loss=0.625, acc=100%
 20) pretrained: loss=0.000, acc=100%; scratch: loss=0.077, acc=100%
 30) pretrained: loss=0.000, acc=100%; scratch: loss=0.028, acc=100%
 40) pretrained: loss=0.000, acc=100%; scratch: loss=0.015, acc=100%
 50) pretrained: loss=0.000, acc=100%; scratch: loss=0.013, acc=100%
 60) pretrained: loss=0.000, acc=100%; scratch: loss=0.011, acc=100%
 70) pretrained: loss=0.000, acc=100%; scratch: loss=0.011, acc=100%
 80) pretrained: loss=0.000, acc=100%; scratch: loss=0.009, acc=100%
 90) pretrained: loss=0.000, acc=100%; scratch: loss=0.009, acc=100%
100) pretrained: loss=0.000, acc=100%; scratch: loss=0.009, acc=100%
110) pretrained: loss=0.000, acc=100%; scratch: loss=0.009, acc=100%
120) pretrained: loss=0.000, acc=100%; scratch: loss=0.007, acc=100%
130) pretrained: loss=0.000, acc=100%; scratch: loss=0.007, acc=100%
140) pretrained: loss=0.000, acc=100%; scratch: loss=0.007, acc=100%
150) pretrained: loss=0.000, acc=100%; scratch: loss=0.007, acc=100%
160) pretrained: loss=0.000, acc=100%; scratch: loss=0.006, acc=100%
170) pretrained: loss=0.000, acc=100%; scratch: loss=0.006, acc=100%
180) pretrained: loss=0.000, acc=100%; scratch: loss=0.006, acc=100%
190) pretrained: loss=0.000, acc=100%; scratch: loss=0.006, acc=100%
199) pretrained: loss=0.000, acc=100%; scratch: loss=0.006, acc=100%
Done.

[...]

Accuracy, trained from random initialization: 0.0%
finetune_code.py
train_val.prototxt
deploy.prototxt
solver.prototxt
style_names.txt

Jan

Mar 30, 2016, 5:38:17 AM
to Caffe Users
To clarify: your problem is not the training itself, but that your code outputs a different accuracy than Caffe itself reports during training, correct?

At first glance I don't see any problem with your code. What does the logging output look like? Are the weights actually loaded by Caffe when you do caffe.Net(..., weights, caffe.TEST) in eval_style_net?
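
One way to check (a minimal sketch; the paths are placeholders -- pass the same arguments as in your eval_style_net call): deploy prototxts usually define no weight_filler, so any layer whose parameters were not copied from the caffemodel stays at its all-zero initialization:

import numpy as np
import caffe

caffe.set_mode_cpu()
# Placeholder paths -- use the same arguments as the eval_style_net call.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Print the mean absolute weight per layer. A value of exactly 0.0
# suggests that layer was never filled from the caffemodel.
for name, params in net.params.items():
    print(name, np.abs(params[0].data).mean())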

Jan