Dear Organizers,
Thank you for giving the research community the opportunity to participate in this event. However, we have a few concerns about the Marine Vision Restoration Challenge (MVRC):
1- The detection training set contains only 70 labeled images (we are not sure whether this scarcity of training data is an intended part of the challenge).
2- The validation set has neither detection labels nor ground-truth pairs, which excludes it from training. The only use we can think of is evaluating the enhancement model with a no-reference metric. Is that its intended purpose?
3- The submission contains only the enhanced images (presumably produced by the contender's method) along with the detection labels and coordinates. Since there are 200 degraded images for each ground-truth image, which images should serve as the input to the enhancement model when producing the submitted results?
4- How is the FPS shown on the leaderboard measured?
5- Are we required to use the detection model (YOLOv5) mentioned in the instructions, or may we use a different detector?
6- How is the final score computed? Is it a weighted average of all metrics, and is it based solely on objective metrics with no subjective evaluation?
7- We noticed that the full-reference enhancement evaluation was removed. Should we expect further major changes to the contest?
Please advise on the points above. We are also concerned about the possibility of this contest being cancelled.
Thank you for your time and consideration,
Ali,
RSSL lab, Michigan Technological University (MTU).