Hi all,
I am trying to fine-tune the NIN model (from the Model Zoo) on a subset of 268 ImageNet categories (not necessarily from the original challenge). I used that as a fine-tuning reference; you can also find my actual net attached.
At the beginning the loss goes down nicely, but around 4000 iterations it jumps and then stays constant (see the attached image and log).
I tried several learning rates, solver methods (SGD and AdaGrad), and learning rate policies (inv, poly, step). Whatever I do, at some stage the train loss jumps and stays constant.
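For reference, the solver settings I have been varying look roughly like this (a sketch only, with illustrative paths and values; the actual files are attached):

```
# Sketch of my solver.prototxt; field names follow Caffe's solver format,
# values and file names here are illustrative.
net: "train_val.prototxt"   # fine-tuning net (actual file attached)
base_lr: 0.001              # tried several values
lr_policy: "step"           # also tried "inv" and "poly"
gamma: 0.1
stepsize: 20000
momentum: 0.9
weight_decay: 0.0005
solver_type: SGD            # also tried ADAGRAD
max_iter: 100000
snapshot: 5000
snapshot_prefix: "nin_finetune"
```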
Any ideas on how to solve this?
Thanks,
Matan