Getting best test accuracy weights

Akanksha Paul

Aug 3, 2017, 2:50:04 PM
to Caffe Users
Hello all,

I am new to Caffe and it would be nice if someone could be of help. 

I want the weights that give the best accuracy on the test set while running solver.prototxt. Say the best accuracy occurs at iteration 1505, but the accuracy is lower at the snapshots taken at iteration 1500 or 1600.

Is there a way in Caffe to save the weights at the iteration that gives the best test accuracy? In Keras this is done through model checkpoints, but I have no idea how to do it in Caffe.
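
For reference, the Keras behaviour I mean looks roughly like this (just a sketch; the model, data and file name are placeholders for whatever you train):

    from keras.callbacks import ModelCheckpoint

    # Save weights only when the validation accuracy improves.
    checkpoint = ModelCheckpoint('best_weights.h5',   # hypothetical output file
                                 monitor='val_acc',
                                 save_best_only=True,
                                 verbose=1)

    model.fit(X_train, y_train,                       # model and data are placeholders
              validation_data=(X_val, y_val),
              epochs=20,
              callbacks=[checkpoint])

I am looking for something equivalent in Caffe.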

Any help is appreciated. :) :)  

Przemek D

Aug 22, 2017, 6:01:42 AM
to Caffe Users
There is no way to do that. Caffe automatically snapshots at given intervals, and the only thing you can do is brute-force a snapshot at every iteration, as sketched below. However, that is not only very inefficient but also pointless.
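
To illustrate the brute-force option: you set the snapshot (and test) interval to 1 in solver.prototxt, so Caffe writes a .caffemodel after every single iteration and you can pick the one whose iteration number matches the best test accuracy in the log. This is a rough sketch only; the net path, prefix and learning parameters below are placeholders for whatever your setup uses:

    net: "train_val.prototxt"          # placeholder path to your network definition
    test_iter: 100                     # placeholder: number of test batches per evaluation
    test_interval: 1                   # evaluate on the test net after every iteration
    snapshot: 1                        # write a snapshot after every iteration
    snapshot_prefix: "snapshots/net"   # placeholder output prefix
    max_iter: 2000                     # placeholder
    base_lr: 0.01                      # placeholder
    lr_policy: "fixed"                 # placeholder

Let me give you an intuition as to why I would not bother with this.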

Your test accuracy is only an estimate of how well your network will perform in deployment, not a perfect verdict that lets you say "this model will definitely work better". In the end, what matters is the final performance, either in some production environment or on a competition test set. You are trying to make the model learn some population, but the training and test sets are only samples of it. They most likely are not perfectly representative, which means some error (with respect to that population) was unavoidably introduced into the data itself at the moment it was gathered. You can fit this data really well and get high results on the test set, but so what, if that data misrepresents the actual population you will later deploy the model on?
The bottom line is that if your model gets 99.251% in a snapshotted iteration and 99.253% in the next one, then this most likely isn't worth fighting over. And if the oscillations are larger, then that is probably what you want to sort out first.