caffe-dilation, network diverge

lhao

Aug 28, 2016, 8:20:10 AM
to Caffe Users
When I train the frontend net on the PASCAL VOC 2012 training set, initialized from vgg_conv.caffemodel, the loss sometimes diverges. I changed nothing except reducing batch_size from the default of 14 to 8 to fit my limited GPU memory. The loss is around 1–3 for the first several iterations, then it jumps to 60–80 and stays there. I have tried modifying iter_size, but the problem persists. Does anyone have an idea what might be wrong?
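For reference, in Caffe the effective batch size is batch_size (in the data layer) times iter_size (in the solver), since iter_size accumulates gradients over multiple forward/backward passes before each weight update. A rough sketch of what I mean by modifying iter_size (the base_lr value here is illustrative, not my actual setting):

```
# solver.prototxt (excerpt, illustrative values)
# batch_size: 8 is set in the net's data layer; with iter_size: 2
# the effective batch is 8 * 2 = 16, close to the default of 14.
iter_size: 2
base_lr: 0.001   # illustrative; lowering base_lr is another common fix for divergence
```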