Large and suspicious differences on Pascal VOC 2012 and MS COCO 2017 validation sets (1 class)

Alex Ter-Sarkisov

Apr 7, 2018, 1:08:27 PM
to Caffe Users

I'm developing an instance segmentation algorithm for a single class (cows). I tested it on the COCO and Pascal validation sets (71 images with cows in Pascal, 87 in COCO), and the results are rather confusing: the state-of-the-art models, MaskRCNN and FCIS, do very well on COCO and badly on Pascal, while my method (finetuned from FCN8s weights plus my own ideas, trained on 1.9K+ cow images from MS COCO 2017) does the exact opposite, as the tables below show.
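For context, the fine-tuning step itself is just the standard pycaffe recipe, roughly as sketched below; the solver and weights file names are placeholders, not the actual files I used.

    import caffe

    caffe.set_mode_gpu()

    # Initialize the solver, copy the FCN8s weights into the net by layer
    # name, then train; any layers renamed in the prototxt start from scratch.
    solver = caffe.SGDSolver('solver.prototxt')            # placeholder path
    solver.net.copy_from('fcn8s-heavy-pascal.caffemodel')  # placeholder weights
    solver.solve()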

MS COCO 2017 val (cows only):

  Model      AP@0.5 IoU   mAP
  MaskRCNN   0.67         0.39
  FCIS       0.71         0.43
  Mine       0.31         0.13

Pascal VOC 2012 val (cows only):

  Model      AP@0.5 IoU   mAP
  MaskRCNN   0.37         0.20
  FCIS       0.35         0.17
  Mine       0.61         0.38
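For the COCO side, per-class numbers like these can be reproduced with pycocotools by restricting evaluation to a single category. A minimal sketch, assuming predictions are in the standard COCO segmentation result format (the file paths, and the category id for cow, are assumptions worth double-checking):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Ground truth and detection results in COCO format (paths are placeholders).
    coco_gt = COCO('annotations/instances_val2017.json')
    coco_dt = coco_gt.loadRes('my_segm_results.json')

    # Evaluate segmentation masks, restricted to a single category.
    coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm')
    coco_eval.params.catIds = [21]  # 21 should be "cow" in the COCO label map

    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # reports AP@0.5 and AP@[.50:.95] (mAP), among others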

I'm not sure how to explain this. I've tinkered with the ground truth (removed some small objects and added them back, fixed some contour bugs), but it made no big difference.
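The small-object part of that tinkering amounts to something like the sketch below, run against COCO-format JSON; the 32x32-pixel area cutoff is an assumption, borrowed from COCO's definition of a "small" object.

    import json

    AREA_THRESHOLD = 32 ** 2  # COCO's "small" cutoff; the exact value is a guess

    with open('annotations/instances_val2017.json') as f:
        gt = json.load(f)

    # Drop ground-truth instances whose mask area falls below the threshold.
    gt['annotations'] = [a for a in gt['annotations']
                         if a['area'] >= AREA_THRESHOLD]

    with open('instances_val2017_filtered.json', 'w') as f:
        json.dump(gt, f)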

Igor Kasyanenko

Apr 18, 2018, 12:13:49 PM
to Caffe Users
Hello, 

I think the answer is in your own description ("My method (finetuned..."): you trained your model only on cows, while the state-of-the-art models are trained on dozens of classes and might therefore be less accurate on any single one.

Alex Ter-Sarkisov

Apr 20, 2018, 5:01:36 AM
to Caffe Users
Yeah, maybe, but then why the difference between Pascal and COCO? It's only cows in each case.

Alex