Test_loss diverge during training FCN32s-RGB on NYUDv2 with TensorFlow

Howard Mahé

Jul 6, 2017, 3:04:15 PM
to Caffe Users
Hello,

After successfully training FCN8s-at-once on my own dataset with Caffe, I'm now trying to train FCN32s-RGB with TensorFlow on the NYUDv2 dataset (40-class challenge).
Sorry for posting TF stuff on this group, but this is the best community I know of for FCN, and I'm pretty sure this might interest some folks.

I forked the FCN architecture definition from this project, and I implemented the following in my repo https://github.com/howard-mahe/tensorflow-fcn :
- created NYUDv2DataHandler.py, the equivalent of nyud_layers.py
- implemented a training script based on the 'heavy learning' strategy described in FCN's journal paper (2015): batch=1, lr=1e-10, momentum=0.99, unnormalized loss (see the sketch below)
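To be concrete about what I mean by "unnormalized loss", here is a minimal TF 1.x sketch of the loss and optimizer setup, assuming batch size 1 and 40 classes. The build_fcn32s helper and the placeholder shapes are only illustrative stand-ins for the fcn32_vgg model in my repo, not its actual API:

```python
import tensorflow as tf

NUM_CLASSES = 40  # NYUDv2 40-class challenge

# Illustrative placeholders; batch size is fixed to 1 as in the heavy-learning recipe.
images = tf.placeholder(tf.float32, shape=[1, None, None, 3])
labels = tf.placeholder(tf.int32, shape=[1, None, None])

# build_fcn32s is a hypothetical helper standing in for the forked FCN-32s
# definition; it is assumed to return the upscore logits of shape [1, H, W, NUM_CLASSES].
upscore = build_fcn32s(images, NUM_CLASSES)

# Unnormalized loss: sum the per-pixel cross-entropy over all pixels instead of
# averaging, mirroring Caffe's SoftmaxWithLoss with normalization turned off.
pixel_xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=tf.reshape(labels, [-1]),
    logits=tf.reshape(upscore, [-1, NUM_CLASSES]))
loss = tf.reduce_sum(pixel_xent)

# Heavy learning: the tiny fixed learning rate compensates for the large
# unnormalized loss, and the high momentum effectively averages the gradient
# over many single-image iterations.
train_op = tf.train.MomentumOptimizer(learning_rate=1e-10,
                                      momentum=0.99).minimize(loss)
```

(This sketch ignores void-label masking, which the real data layer would have to handle.)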

Training goes well in the first iterations, but my test loss quickly starts to diverge. The most surprising part is that my test metrics (global accuracy, mean accuracy per class, mean IoU) don't collapse at all. Does anyone have an idea why?

Thanks a lot for any feedback.

Best,
Howard

Howard Mahé

Jul 6, 2017, 3:07:20 PM
to Caffe Users
I forgot to mention that, based on FCN's journal article, I am expecting the following performance:

FCN-32s RGB
- global accuracy (gacc): 0.618
- mean accuracy (macc): 0.447
- mean IoU (miu): 0.316
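
For reference, these three metrics are computed from a confusion matrix accumulated over all labeled test pixels, following the definitions in the FCN paper. This is only an illustrative NumPy sketch (hist and eval_metrics are hypothetical names, not the exact code in my repo):

```python
import numpy as np

def eval_metrics(hist):
    """hist: (40, 40) confusion matrix, rows = ground truth, cols = prediction,
    accumulated over every labeled pixel in the test set."""
    hist = hist.astype(np.float64)
    tp = np.diag(hist)                                    # correctly classified pixels per class
    gacc = tp.sum() / hist.sum()                          # global (pixel) accuracy
    macc = np.nanmean(tp / hist.sum(axis=1))              # mean accuracy per class
    iu = tp / (hist.sum(axis=1) + hist.sum(axis=0) - tp)  # per-class intersection over union
    miu = np.nanmean(iu)                                  # mean IoU
    return gacc, macc, miu
```

Since all three depend only on the per-pixel argmax prediction, they are insensitive to the magnitude of the logits, which may be why they can stay stable even while the test loss grows.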