Why does even 94% accuracy give so many bad classifications?

skeptic.sisyphus

Jul 23, 2016, 5:23:01 PM7/23/16
to Caffe Users
Hi,

I am training an FCN on a multi-class (12-class) dataset. The test-set accuracy is reported as 94%, but when I look at the labels obtained from the scores, they are completely wrong. Could anyone tell me what might be going wrong here? Is it due to false positives? If so, how can this be mitigated?
One more thing: I have noticed that for inference, as well as for the segmentation tests, only RGB images are being used, while training was performed on RGBD (4-channel) data. How is that working?
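
For what it's worth, a global pixel accuracy of 94% can easily coexist with poor-looking label maps if one class (e.g. background) dominates the pixels. A minimal sketch of a per-class check, assuming the score blob is named 'score' and the ground-truth label maps are integer arrays (the names are placeholders, adjust them to your net):

```python
import numpy as np

NUM_CLASSES = 12

def confusion_matrix(gt, pred, num_classes=NUM_CLASSES):
    """Accumulate a num_classes x num_classes confusion matrix
    from flattened ground-truth and predicted label arrays."""
    mask = (gt >= 0) & (gt < num_classes)            # skip void/ignore labels
    idx = num_classes * gt[mask].astype(int) + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def report(hist):
    """Print overall pixel accuracy, per-class accuracy and per-class IoU."""
    overall = np.diag(hist).sum() / float(hist.sum())
    per_class_acc = np.diag(hist) / np.maximum(hist.sum(axis=1), 1).astype(float)
    iou = np.diag(hist) / np.maximum(
        hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist), 1).astype(float)
    print('overall pixel accuracy: %.4f' % overall)
    for c in range(NUM_CLASSES):
        print('class %2d  acc: %.3f  IoU: %.3f' % (c, per_class_acc[c], iou[c]))

# usage sketch, after a forward pass of the net:
#   pred = net.blobs['score'].data[0].argmax(axis=0)      # HxW label map
#   hist += confusion_matrix(gt_label.flatten(), pred.flatten())
#   report(hist)
```

If the per-class IoU is near zero for most classes while the overall accuracy stays high, the 94% is just the dominant class being predicted everywhere.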
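
On the RGB/RGBD question: if the deploy prototxt still declares a 4-channel input, feeding 3-channel images would normally either fail or silently misalign the channels, so it is worth checking what the net actually expects. A quick check with pycaffe (the file names below are placeholders):

```python
import caffe

# hypothetical paths - substitute your own deploy prototxt and trained weights
net = caffe.Net('deploy.prototxt', 'fcn_weights.caffemodel', caffe.TEST)

# the data blob shape is (batch, channels, height, width);
# channels should be 4 if the net was trained on RGBD, 3 if on RGB only
print('input blob shape:', net.blobs['data'].data.shape)
```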

Here is the result and the original ground truth.


spanky...@gmail.com

Jul 27, 2016, 3:57:09 AM7/27/16
to Caffe Users
Can you give us some insight into your dataset? Maybe it is too small and the model is just overfitting?
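
One quick way to characterize the dataset is to count how many images you have and how many pixels each class contributes; a heavy skew toward one or two classes would also explain the high accuracy. A rough sketch, assuming the ground-truth labels are stored as single-channel PNGs (the path and pattern are placeholders):

```python
import glob
import numpy as np
from PIL import Image

NUM_CLASSES = 12
counts = np.zeros(NUM_CLASSES, dtype=np.int64)

label_files = glob.glob('labels/*.png')              # hypothetical label directory
for path in label_files:
    labels = np.array(Image.open(path))
    valid = labels[labels < NUM_CLASSES]             # skip void/ignore labels
    counts += np.bincount(valid, minlength=NUM_CLASSES)

print('number of label images:', len(label_files))
for c, n in enumerate(counts):
    print('class %2d: %d pixels (%.1f%%)' % (c, n, 100.0 * n / max(counts.sum(), 1)))
```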