Confusion About the Marine Vision Restoration Challenge (MVRC)


Ali Awad

Dec 2, 2024, 5:50:36 PM
to MaCVi Support
Dear Organizers,

Thank you for giving the research community the chance to participate in this event. However, we have a few concerns about the Marine Vision Restoration Challenge (MVRC), including:

1- The training set for detection contains only 70 labeled images (we are not sure whether this scarcity of training data is intended as part of the competition).

2- The validation set has neither detection labels nor ground-truth pairs, which rules it out for training entirely. The only potential use we can think of is testing the enhancement model with a no-reference evaluation.

3- The submission contains only the enhanced images (presumably produced by the contender's method) and the detection labels and coordinates. What is the input to the enhancement model for producing those enhanced images, given that there are 200 degraded images for each ground-truth image?

4- What does the FPS shown on the leaderboard refer to?

5- Are we required to use the detection model (YOLOv5) mentioned in the instructions?

6- How is the final score computed? Is it a weighted average of all metrics, and is it based solely on metrics, with no subjective evaluation?

7- We noticed that the full-reference enhancement evaluation was removed. Should we expect further major changes to the contest?

Please advise on the points above. We are also concerned about a possible cancellation of this contest.

Thank you for your time and consideration,
Ali,
RSSL lab, Michigan Technological University (MTU).

Nikhil Akalwadi

Dec 3, 2024, 3:07:02 AM
to Ali Awad, MaCVi Support
Dear Ali, 

1. Object detection on this dataset should be a fairly simple and easy task, since there is not much complexity involved in training OD models. (You can also use the degraded images as augmentation when training OD models.)
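To illustrate the augmentation suggestion above, here is a minimal sketch of expanding the 70 labeled clean images with their degraded variants, reusing each clean image's label file (function and file names are hypothetical; adapt the pairing to the dataset's actual directory layout):

```python
# Hypothetical sketch: treat the degraded copies of each labeled image as
# extra object-detection training samples that reuse the clean image's labels.
from pathlib import Path

def expand_with_degraded(labeled, degraded_map):
    """labeled: list of (image_path, label_path) pairs for clean images.
    degraded_map: dict mapping an image stem to its degraded file paths.
    Returns the original pairs plus one pair per degraded variant."""
    samples = list(labeled)
    for img, lbl in labeled:
        stem = Path(img).stem
        for deg in degraded_map.get(stem, []):
            samples.append((deg, lbl))  # same boxes, degraded pixels
    return samples

pairs = [("0001.png", "0001.txt")]
degraded = {"0001": ["0001_d001.png", "0001_d002.png"]}
print(expand_with_degraded(pairs, degraded))
```

This works because degradation changes pixel appearance but not object positions, so the clean image's bounding boxes remain valid for its degraded copies.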

2. The validation set has been updated. It is only for testing your model's performance on the leaderboard page, not for validation during training.

3. The submission zip example is only meant to give you an idea of what to submit. The input to the image enhancement models will be the 200 degraded images. Participants are expected to contribute to the enhancement method rather than the OD, as OD is just one use case of underwater image enhancement.

4. As far as the FPS is concerned, you can use any number between 1 and 9, as it won't be considered in the evaluation.

5. You are free to use any OD model, as long as you refactor the code to output the "predictions.csv" file as instructed.
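As a sketch of that refactoring step, any detector's output can be serialized to a CSV like this. Note the column names below are assumptions for illustration only; follow the exact schema given in the challenge instructions:

```python
# Hypothetical sketch: write detector output as "predictions.csv".
# The header columns here are assumed, not the official schema.
import csv

def write_predictions(rows, path="predictions.csv"):
    """rows: iterable of (image_id, class_id, x_min, y_min, x_max, y_max, conf)."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["image_id", "class_id", "x_min", "y_min",
                    "x_max", "y_max", "confidence"])
        for r in rows:
            w.writerow(r)

detections = [("img_0001.png", 0, 12, 34, 120, 200, 0.91)]
write_predictions(detections)
```

The point is only that the detector is interchangeable: whatever model you use, a small adapter like this produces the required file.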

6. The final score will be a weighted combination: 80% perceptual quality metrics and 20% OD evaluation metrics.

7. You can expect your methods to be evaluated only on no-reference metrics.


Hope this answers your queries. Feel free to ask if you still have any doubts. 


Regards,
--
Nikhil Neelkanth Akalwadi
Researcher
(M) +91 87921 88443 
https://nikhilakalwadi.github.io
