Question about Marine Vision Restoration Challenge (MVRC)


S. Takao

unread,
Dec 13, 2024, 6:17:54 AM12/13/24
to MaCVi Support
Dear Colleagues,

We are preparing a deep model and plan to join the Marine Vision Restoration Challenge (MVRC) next week. Could I confirm the following points about the MVRC?

* When will the test dataset be released? I understand that the leaderboard results are evaluated on the validation dataset, which is different from the test data.
* Will the competition be held as scheduled? We are concerned about a possible cancellation of the competition.

---
Shunsuke Takao,
University of Tsukuba, Japan

Nikhil Akalwadi

unread,
Dec 13, 2024, 8:48:42 AM12/13/24
to S. Takao, MaCVi Support
Dear Shunsuke, 

Glad to know you will be participating in the MVRC. However, I would suggest you start submitting your results soon; the deadline for the challenge is about a week away.
We shall be releasing the test dataset in a couple of hours. 




S. Takao

unread,
Dec 13, 2024, 11:52:39 PM12/13/24
to MaCVi Support
Thank you very much for your kind response. We will submit our results soon.

On Friday, December 13, 2024 at 22:48:42 UTC+9, akalwad...@gmail.com wrote:

Ching Heng

unread,
Dec 14, 2024, 12:52:51 PM12/14/24
to MaCVi Support

Dear Colleagues,

We have trained a UIE model to restore the images in the dataset named "new-val-set" and subsequently used an object detection (OD) model to detect the fish.

However, when we zip the restored images together with the OD predictions.csv file and submit the archive to the website, we observe unexpectedly poor scores (as shown in the attached image below). We would like to confirm whether there are any issues or incorrect steps in our dataset processing workflow.
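For reference, here is essentially how we package the archive (a minimal sketch; the local paths are placeholders, and the flat layout simply mirrors what we saw in the example submission):

    import zipfile
    from pathlib import Path

    # Local paths for our setup (placeholders); the flat layout mirrors
    # the structure we saw in the provided mvrc_example_submission archive.
    restored_dir = Path("restored_images")   # outputs of our UIE model
    predictions = Path("predictions.csv")    # one row per OD detection

    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(restored_dir.glob("*.png")):
            zf.write(img, arcname=img.name)  # images at the archive root
        zf.write(predictions, arcname=predictions.name)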

Interestingly, when we submit the "mvrc_example_submission" provided by the MVRC organizers, we obtain better, expected results on the dashboard. For comparison, we have attached images of our submission (#14124) and the "mvrc_example_submission" (our submission on the left, the example submission on the right). From visual inspection, the restored images from our submission appear to be of higher quality.

Could you please review our submission (#14124) and help identify any potential issues we may have overlooked? We would greatly appreciate your assistance.

Thank you for your time and support.

Best regards,

ACVLab, National Cheng Kung University, Taiwan.

[Attachments: screenshot.png, screenshot.png]
On Friday, December 13, 2024 at 9:48:42 PM UTC+8, Nikhil Akalwadi wrote:

Nikhil Akalwadi

unread,
Dec 14, 2024, 1:15:35 PM12/14/24
to Ching Heng, MaCVi Support
Hello, 

1. For the enhancement part, I suspect it could be because of the "photo negative" or "invert" effect in quite a few images (for example, images 81, 83, 84, 92). That may explain the CCF metric coming out negative, as it primarily evaluates color in the images.
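A quick way to screen for this on your side (just a rough heuristic I am sketching here, not part of the official evaluation code) is to check whether the restored output correlates negatively with its input:

    import numpy as np
    from PIL import Image

    def looks_inverted(input_path: str, restored_path: str) -> bool:
        """Rough heuristic: a strongly negative pixel-wise correlation
        between input and output suggests a photo-negative effect.
        Assumes the restored image keeps the input resolution."""
        a = np.asarray(Image.open(input_path).convert("L"), np.float32).ravel()
        b = np.asarray(Image.open(restored_path).convert("L"), np.float32).ravel()
        r = np.corrcoef(a, b)[0, 1]
        return r < -0.5  # threshold is arbitrary; tune on known-good pairs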

2. For the OD part, I think it is because the second column (the class name) in the predictions.csv of submission #14124 is in lower case, while the ground truth given to the evaluation algorithm is case sensitive. Please refer to the training/testing CSV files to refactor your submission CSV; that should give better results.
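Something along these lines should normalize the casing (a sketch only; the "train.csv" name and the column positions are my assumptions, so double-check them against the provided files):

    import pandas as pd

    # Assumption: the class name is the second column in both files;
    # verify this against the provided training/testing CSVs.
    train = pd.read_csv("train.csv")          # hypothetical file name
    sub = pd.read_csv("predictions.csv")

    # Map lower-cased class names back to the exact ground-truth casing.
    canonical = {c.lower(): c for c in train.iloc[:, 1].unique()}
    col = sub.columns[1]
    sub[col] = sub[col].str.lower().map(canonical).fillna(sub[col])
    sub.to_csv("predictions.csv", index=False)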

Please feel free to ask us if you have more questions. 


Regards,
--
Nikhil Neelkanth Akalwadi
Researcher,
MVRC Team
(M) +91 87921 88443 
https://nikhilakalwadi.github.io

Ching Heng

unread,
Dec 15, 2024, 4:02:25 AM12/15/24
to MaCVi Support

Hi,

I apologize for bringing up this question again, but I am genuinely confused about why submitting the example compressed folder directly results in a better score than the restored results we generated.

To provide additional context, I have attached an image showing the submission result from the official example for your reference.

Thank you for your patience and assistance in clarifying this matter.

[Attachment: screenshot.png]
On Sunday, December 15, 2024 at 2:15:35 AM UTC+8, Nikhil Akalwadi wrote:

Nikhil Akalwadi

unread,
Dec 15, 2024, 6:31:10 AM12/15/24
to Ching Heng, MaCVi Support
Hello Ching Heng, 

The evaluation metrics employed are entirely no-reference based. You can refer to the research papers in which they were introduced to understand how these metrics are defined and modeled. As previously mentioned, I believe you are encountering unexpected outcomes because the images appear slightly reminiscent of a photographic negative. Other contestants have attained higher metrics than the example submission zip, but unfortunately I am unable to disclose their names or results until the challenge deadline has passed. I recommend that you consider improving your UIE model.




Ching Heng

unread,
Dec 15, 2024, 11:56:15 AM12/15/24
to MaCVi Support
"OK, thanks for your response!"

On Sunday, December 15, 2024 at 7:31:10 PM UTC+8, Nikhil Akalwadi wrote:

Ching Heng

unread,
Dec 16, 2024, 4:45:14 AM12/16/24
to MaCVi Support

Hello Nikhil,

We have been working on validating whether there are any photographic-negative or inverted effects in our results. Additionally, we have implemented the UCIQE, CCF, and UIQM metrics to evaluate the restoration results. Based on our analysis, the restoration results appear significantly better than the example unrestored images provided.

Therefore, we are curious about the exact formulas used to compute the evaluation metrics in this challenge. We have been evaluating our restoration results with the following GitHub repositories, which are cited in several papers or have many stars:

CCF: https://github.com/zhenglab/CCF

Could you please clarify which evaluation metrics are being used in this challenge? Your assistance would be greatly appreciated.

Thank you very much!

On Sunday, December 15, 2024 at 7:31:10 PM UTC+8, Nikhil Akalwadi wrote:

Nikhil Akalwadi

unread,
Dec 16, 2024, 5:11:57 AM12/16/24
to Ching Heng, MaCVi Support
Hi Ching, 

The code for computing the metrics is heavily borrowed from the following open-source GitHub repositories: 


For evaluating object detection, we are referring to the official YOLO evaluation code provided by Ultralytics in their open-source repositories.
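In essence it boils down to the standard Ultralytics validation call, sketched below with a placeholder dataset config rather than our exact harness:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # any trained detector weights
    metrics = model.val(data="mvrc.yaml")      # placeholder dataset config
    print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95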

I have reviewed the codebases. The repositories you have shared and the ones we are using follow largely the same logic for UCIQE and UIQM (for CCF it is the exact same codebase).
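For clarity, here is a condensed sketch of the UCIQE logic as defined by Yang and Sowmya (2015); normalization details vary slightly across the public implementations, which is usually where small score discrepancies come from:

    import numpy as np
    from skimage.color import rgb2lab

    def uciqe(img_rgb: np.ndarray) -> float:
        """Condensed UCIQE: weighted sum of chroma standard deviation,
        luminance contrast, and mean saturation in CIELab. Coefficients
        come from the original paper; per-repo normalization may differ."""
        lab = rgb2lab(img_rgb)  # accepts uint8 or float RGB in [0, 1]
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        chroma = np.sqrt(a ** 2 + b ** 2)
        sigma_c = chroma.std()                      # colorfulness spread
        flat = np.sort(L.ravel())
        k = max(1, int(0.01 * flat.size))
        con_l = flat[-k:].mean() - flat[:k].mean()  # top 1% minus bottom 1%
        sat = np.where(L > 1e-6, chroma / np.maximum(L, 1e-6), 0.0)
        mu_s = sat.mean()                           # mean saturation
        return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s
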
Please let us know if we have missed something or if you need any further assistance.





yp lin

unread,
Dec 16, 2024, 5:49:27 AM12/16/24
to MaCVi Support
Hello, I am a bit confused about how the evaluation metrics assess the results. Below are my restoration results and metric scores. Compared to the other contestant's results above, I believe my results should score significantly better, but the metrics show only a small difference, with UCIQE differing by just 0.045. I am worried that this might affect the final results of the competition.

[Attachments: two screenshots of restoration results and metric scores]

Nikhil Akalwadi

unread,
Dec 16, 2024, 5:58:09 AM12/16/24
to yp lin, MaCVi Support
Hi, 

As previously mentioned, the evaluation is conducted using no-reference quantitative metrics. The final metric reported is the average of the metrics obtained for each individual image. Given that the test set encompasses various degradation patterns of the same scene, and that the resulting restoration may differ across images within a scene, the per-image metrics can vary significantly. Consequently, when the average is calculated, the reported delta may be small.
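As a toy illustration with made-up numbers:

    import numpy as np

    # Hypothetical per-image scores for two submissions: per-image gaps
    # are large in both directions, so the averaged gap ends up small.
    ours = np.array([0.62, 0.41, 0.55, 0.48, 0.60])
    theirs = np.array([0.50, 0.52, 0.47, 0.58, 0.51])
    print(ours.mean() - theirs.mean())  # ~0.016 despite ±0.1 per-image swings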




Regards,
--
Nikhil Neelkanth Akalwadi
Researcher