Both training and validation loss decrease, but test loss does not decrease at all


Jinchao Lin

Dec 12, 2015, 10:45:12 PM
to Caffe Users
I am doing an image classification competition. I managed to decrease both the training loss and the validation loss to a very small number (< 0.1), but after I submitted my predictions to the website, the test-set loss was still huge (> 7, which is even worse than random guessing).

Can anyone give a hint about why this happens? It also seems quite suspicious that my validation loss decreased below 0.1, which looks too small ...
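For reference, the log loss of uniform random guessing over K classes is ln K, so a loss above 7 is worse than random unless the competition has more than e^7 ≈ 1097 classes. A quick sketch (the 100-class figure is purely hypothetical, since the competition's K is not stated in the post):

```python
import math

def uniform_logloss(num_classes):
    """Multiclass log loss when every class is predicted with probability 1/K."""
    return math.log(num_classes)

# With a hypothetical 100-class competition, random guessing scores about 4.6,
# so a test loss > 7 means the model is confidently wrong, not just uninformed.
baseline = uniform_logloss(100)
```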

p.s. I use Caffe-windows (https://github.com/ChenglongChen/caffe-windows), which augments the images.

Thanks,
Jin

Mohit Jain

Dec 13, 2015, 5:09:43 AM
to Caffe Users
That looks like a case of overfitting. The only explanation I can offer right now for the low validation loss is an inherent similarity between the training and validation sets. Try relaxing the learning parameters: decrease the learning rate, momentum, etc. It's fine if your training/validation loss stays higher; you should get better test-loss values. Hope that helps. :)
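One cheap way to test the "inherent similarity" hypothesis is to check whether any validation images are byte-for-byte copies of training images, which can easily happen when a split is made after augmentation. A minimal sketch (the file paths and hashing scheme are illustrative, not from the original posts):

```python
import hashlib
from pathlib import Path

def md5_of(path):
    """MD5 of a file's raw bytes -- enough to catch exact duplicates."""
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def find_overlap(train_paths, val_paths):
    """Return validation files that are exact copies of some training file."""
    train_hashes = {md5_of(p) for p in train_paths}
    return [p for p in val_paths if md5_of(p) in train_hashes]
```

Note that exact hashing misses near-duplicates (crops, flips, re-encodings); catching those would need a perceptual hash or feature-space comparison.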

Regards,
Mohit