There is no way to do that directly. Caffe snapshots automatically at a fixed interval (the `snapshot` solver parameter), and the only workaround is to brute-force a snapshot at every iteration. That, however, is not only very inefficient but also pointless - and let me give you an intuition as to why.
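For reference, the brute-force variant is just a solver setting. A minimal sketch of the relevant lines in `solver.prototxt` (the prefix path is an arbitrary example):

```
# Forces a snapshot after every single iteration - extremely wasteful on disk and I/O
snapshot: 1
# Example prefix; produces files like snapshots/net_iter_1.caffemodel, _iter_2, ...
snapshot_prefix: "snapshots/net"
```

Note that with a non-trivial network this will write a full copy of the weights (and solver state) every iteration, which is exactly the inefficiency mentioned above.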
Your test accuracy is only an estimate of how well your network will perform in deployment - not a perfect verdict allowing you to say "this model will definitely work better". Because in the end, what matters is final performance: either in some production environment or on a competition test set. You're trying to make the model learn some population, but the training and test sets are just samples of it. They most likely aren't perfectly representative, which means some error (with respect to that population) was unavoidably introduced into the data itself at the moment it was gathered. Now, you can fit this data really well, get high results on the test set, etc. - but so what, if that data misrepresented the actual population you will later deploy the model on?
The bottom line is that if your model gets 99.251% accuracy at one snapshotted iteration and 99.253% at the next, then most likely this isn't worth fighting over. And if the oscillations are larger, then that is probably what you want to sort out first.