Hello,
After successfully training FCN8s-at-once on my own dataset with Caffe, I'm now trying to train FCN32s-RGB with
TensorFlow on the NYUDv2 dataset (40-class challenge).
Sorry for posting TensorFlow stuff on the group, but this is the best FCN community I know of, and I'm pretty sure this might interest some folks.
I forked the FCN architecture definition from
this project, and I implemented the following in my repo
https://github.com/howard-mahe/tensorflow-fcn :
- created NYUDv2DataHandler.py, the equivalent of nyud_layers.py
- implemented a training script based on the 'heavy learning' strategy described in FCN's journal paper (2015): batch=1, lr=1e-10, momentum=0.99, unnormalized loss
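For anyone unfamiliar with the 'heavy learning' recipe, the key point is that the unnormalized loss is *summed* over all pixels rather than averaged, so the gradient magnitude scales with image size, which is why the learning rate has to be as tiny as 1e-10. A minimal pure-Python sketch (a toy illustration, not code from my repo) of the difference:

```python
import math

def softmax(logits):
    # Numerically stable softmax over one pixel's class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pixelwise_cross_entropy(logits_per_pixel, labels, normalize):
    # Sum the per-pixel cross-entropy losses; the 'normalized'
    # variant divides by the pixel count, the 'unnormalized'
    # variant (heavy learning) does not.
    total = 0.0
    for logits, label in zip(logits_per_pixel, labels):
        probs = softmax(logits)
        total += -math.log(probs[label])
    if normalize:
        total /= len(labels)
    return total

# Toy "image" of 4 pixels with 3 classes.
logits = [[2.0, 0.5, 0.1],
          [0.2, 1.5, 0.3],
          [0.0, 0.0, 3.0],
          [1.0, 1.0, 1.0]]
labels = [0, 1, 2, 0]

unnorm = pixelwise_cross_entropy(logits, labels, normalize=False)
norm = pixelwise_cross_entropy(logits, labels, normalize=True)
# The unnormalized loss is exactly pixel_count times the normalized one,
# so for a real ~500x400 NYUDv2 image the gradients are ~2e5x larger.
```

With real image sizes this factor is on the order of 10^5, which roughly cancels against lr=1e-10 to give an effective step comparable to a normalized loss with lr around 1e-5.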
Training goes well for the first iterations, but my test loss quickly starts to diverge. The most surprising part is that my test metrics (global accuracy, mean per-class accuracy, mean IoU) don't collapse at all. Does anyone have an idea what's going on?
Thanks a lot for any feedback.
Best,
Howard
![](https://lh3.googleusercontent.com/-7SqhusmTeDA/WV6JdOvoS3I/AAAAAAAAAL4/MtuxHwrWyrY_AnXhiItEgC4ChBYsc-jkwCLcBGAs/s1600/fcn32s_bgr_training.png)