On 14 Dec 2024, at 23:22, Ching Heng <henry...@gmail.com> wrote:
Dear Colleagues,
We have trained a UIE model to restore the images in the dataset named "new-val-set" and subsequently used an object detection (OD) model to detect the fish.
However, when we zip the restored images together with the OD predictions.csv file and submit the archive to the website, we observe surprisingly poor scores (as shown in the attached image below). We would like to confirm whether there are any issues or incorrect steps in our dataset processing workflow.
Interestingly, when we submit the "mvrc_example_submission" provided by the MVRC organizer, we obtain the better, expected results on the dashboard. For comparison, we have attached images of our submission (#14124) and the "mvrc_example_submission" (our submission is on the left, the example submission on the right). From visual inspection, the restored images from our submission appear to be of higher quality.
Could you please review our submission (#14124) and help identify any potential issues we may have overlooked? We would greatly appreciate your assistance.
Thank you for your time and support.
Best regards,
ACVLab, National Cheng Kung University, Taiwan.
<screenshot.png><screenshot.png>
Nikhil Akalwadi wrote on Friday, 13 December 2024 at 21:48:42 (UTC+8):
Dear Shunsuke,
Glad to know you will be participating in the MVRC. However, I would suggest you start submitting your results soon; the challenge deadline is approaching in about a week. We shall be releasing the test dataset in a couple of hours.
On Dec 13, 2024, at 16:47, S. Takao <53.pabl...@gmail.com> wrote:
Dear Colleagues,
We're preparing a deep model and are going to join the Marine Vision Restoration Challenge (MVRC) next week. Could I confirm the following points about the MVRC?
* When will the test dataset be released? I understand that the leaderboard results are evaluated on the validation dataset, which is different from the test data.
* Will the competition be held as scheduled? We are concerned about a possible cancellation of the competition.
---
Shunsuke Takao, University of Tsukuba, Japan
--
You are receiving this message because you are subscribed to the Google Groups "MaCVi Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email to macvi-suppor...@googlegroups.com.
To view this discussion, visit https://groups.google.com/d/msgid/macvi-support/93ff5bcf-2950-456c-857d-8c891038f1b2n%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
The evaluation metrics employed are entirely no-reference based. You can refer to the research papers in which they were introduced to understand how these metrics are defined and modeled. As previously mentioned, I believe you are encountering unexpected outcomes because your images appear slightly reminiscent of a photographic negative. Other contestants have been able to attain higher metrics than the test submission zip, but unfortunately, I am unable to disclose their names or results until the challenge deadline has passed. I recommend that you consider improving your UIE model.
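The "photographic negative" remark above can be checked mechanically: tonal inversion flips the ordering of pixel intensities, so the correlation between input and output luminance turns strongly negative. The helper below is a rough, hypothetical diagnostic (not part of any MVRC tooling), with an assumed correlation cut-off:

```python
import numpy as np

def looks_inverted(original, restored, threshold=-0.5):
    """Return True if `restored` looks tonally inverted w.r.t. `original`.

    Both images are HxWx3 float arrays in [0, 1]. `threshold` is an
    assumed cut-off on the Pearson correlation of luminance values.
    """
    def luminance(img):
        # Rec. 601 luma weights
        return img @ np.array([0.299, 0.587, 0.114])

    x = luminance(original).ravel()
    y = luminance(restored).ravel()
    r = np.corrcoef(x, y)[0, 1]  # -1 for a perfect negative
    return r < threshold
```

For example, `looks_inverted(img, 1.0 - img)` is True, while a mere brightness or gamma change keeps the correlation positive and returns False.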
On 15 Dec 2024, at 14:32, Ching Heng <henry...@gmail.com> wrote:
Hi,
I apologize for bringing up this question again, but I am genuinely confused about why submitting the example compressed folder directly results in a better score than the restored results we generated.
To provide additional context, I have attached an image showing the submission result from the official example for your reference.
Thank you for your patience and assistance in clarifying this matter.
<screenshot.png>
Hello Nikhil,
We have been working on validating whether there are any photographic-negative or inverted effects in our results. Additionally, we have been modeling the UCIQE, CCF, and UIQM metrics to evaluate the restoration results. Based on our analysis, the restoration results appear to be significantly better than the example unrestored images provided.
Therefore, we are curious about the exact formulas used to compute the evaluation metrics in this challenge. We have been evaluating our restoration results with the following GitHub repositories, which are cited in several papers or have many stars:
CCF: https://github.com/zhenglab/CCF
Could you please clarify which evaluation metric implementations are being used in this challenge? Your assistance would be greatly appreciated.
Thank you very much!
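For reference, the UCIQE formulation published by Yang and Sowmya (the one most public repositories implement) can be sketched as below. This is an approximation for sanity-checking only: per-term definitions vary between public implementations, and the challenge's official scoring code may differ as well.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an HxWx3 float sRGB image in [0, 1] to CIELAB (D65)."""
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # sRGB (linear) -> XYZ, D65 white point
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = rgb @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalise by white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return L, a, b

def uciqe(rgb):
    """UCIQE = c1 * std(chroma) + c2 * luminance contrast + c3 * mean saturation."""
    L, a, b = srgb_to_lab(rgb.astype(np.float64))
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()
    # contrast of luminance: spread between the top and bottom 1% of L
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    # per-pixel saturation: chroma relative to lightness
    sat = chroma / np.maximum(L, 1e-6)
    mu_s = sat.mean()
    # coefficients from Yang & Sowmya (2015)
    return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s
```

As a sanity check, a colourful high-contrast image scores higher than a flat grey one (which scores zero on every term), consistent with the metric rewarding chroma spread, luminance contrast, and saturation rather than fidelity to any reference.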
Hello Ching Heng,
As previously mentioned, the evaluation is conducted using quantitative no-reference metrics. The final metric reported is the average of the metrics obtained for the individual images. Given that the test set contains several degradation patterns for the same scene, and that the restoration may differ across images within a scene, the per-image metrics can vary significantly. Consequently, when the average is calculated, the reported delta may be smaller.
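The averaging effect described above can be illustrated with made-up numbers (these are not real challenge scores): two submissions whose per-image values differ substantially can still end up with a tiny leaderboard delta once averaged.

```python
import numpy as np

def leaderboard_score(per_image_scores):
    """Final reported metric: plain mean over the individual images."""
    return float(np.mean(per_image_scores))

# Hypothetical per-image metric values for two submissions over the
# same six images (three scenes x two degradation patterns).
ours   = np.array([0.62, 0.31, 0.58, 0.44, 0.29, 0.55])
theirs = np.array([0.48, 0.43, 0.47, 0.46, 0.44, 0.49])

# Per-image differences are large (up to 0.15 either way), yet the
# averaged leaderboard delta is small.
print(round(leaderboard_score(ours) - leaderboard_score(theirs), 3))  # prints 0.003
```

This is consistent with the 0.045 UCIQE gap reported later in the thread: a visually obvious improvement on some images can be diluted by high per-image variance.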
On 16 Dec 2024, at 16:19, yp lin <ilyf...@gmail.com> wrote:
Hello, I feel a bit confused about how the evaluation metrics assess the results. Below are my restoration results and metric scores. Compared to the other contestant’s results above, I believe my results should be significantly better, but the evaluation metrics show only a small difference, with UCIQE differing by just 0.045. I’m worried that this might affect the final results of the competition.
<{F250A871-A374-4142-A8E8-0A61347FA3F0}.png>
<{0B08262E-D368-444F-97C3-3DC9DE65E95B}.png>