2023 PhysioNet Challenge final validation scores & choosing your best algorithm for final testing

PhysioNet Challenge

Sep 7, 2023, 1:12:49 PM
to physionet-challenges

Dear Challengers,


We have almost completed running the final submissions on the validation data. (We expect them to finish sometime early next week, but we’re pushing AWS’s capacity to the limit.)  We will continue to update the team summaries and scores on the Challenge website:

https://physionetchallenges.org/2023/results/


When you have received your final score, please choose which algorithm you want us to run on the *test* data and submit that request here by Sept 13th, 2023, 23:59:00 GMT. (If your chosen algorithm has not finished running on the validation data by then, we will assess whether we have time to run it to completion, but it is still best to pick your algorithm by this deadline.)


This final *test* score will determine your official ranking. In your preprint, please quote your validation score from the leaderboard; after CinC, replace it with your test score (you will have about a week to upload the final paper with this change). Please do update your manuscripts with these results, including your scores and ranks on the test set after the conference; it is vital that your scientific publication is accurate.


Please check the CinC proceedings paper template for important information about how to prepare your final papers (so that we don't need to ask you for last-minute changes):

https://moody-challenge.physionet.org/2023/papers/#preparing-your-paper


Please make sure that you use the three citations listed in the template, and make sure that you upload the preprint of your paper to CinC by the deadline: 20th September, 2023, 23:59:00 GMT.


We will review all papers and reject those whose authors do not fix out-of-date information and other mistakes (such as missing citations).


Please note that, as we do every year, we will perform some simple tests on your code to check that it is usable and reusable. We suggest that you try similar checks yourself, including the following (a sketch of some of these checks follows the list):

  1. Change the data and/or labels in the training set. Does your code work with missing, unknown, or non-physiological values in the data? Does your code work if you change the prevalence rates of the classes or remove one of the classes? (It will probably have a slightly different performance, but that is to be expected.)

  2. Change the size of the training set. You can extract a subset of the training set or duplicate the training set. Does your code work with a training set that is 15% or 150% of the size of the original training set? (Again, your performance will differ, but the code should still execute.)

  3. Run your training code on the modified training set. If your training code fails, then your code is too sensitive to the changes in the training set, and you should update your code until it works as expected.

  4. Score the resulting model on part of the unmodified training set (ideally, data that you did not use to train your model). If your code fails, or if the model trained on the modified training set receives the same or almost the same scores as the model trained on the unmodified training set, then your training code didn’t learn from the training set, and you should update your code until it works as expected.
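
As a concrete illustration of steps 2, 3, and 4, here is a minimal sketch in Python. It assumes the layout of the public training data (one subfolder per patient) and the train_model.py, run_model.py, and evaluate_model.py entry points of the official example code; these names, paths, and arguments are assumptions, so please adapt them to your own entry.

    import random
    import shutil
    import subprocess
    from pathlib import Path

    def make_subset(src, dst, fraction, seed=0):
        # Copy a random fraction of the patient folders into a new training folder.
        src, dst = Path(src), Path(dst)
        dst.mkdir(parents=True, exist_ok=True)
        patients = sorted(p for p in src.iterdir() if p.is_dir())
        random.Random(seed).shuffle(patients)
        for p in patients[: max(1, int(fraction * len(patients)))]:
            shutil.copytree(p, dst / p.name, dirs_exist_ok=True)

    # Step 2: build a training set that is about 15% of the original size.
    make_subset("training_data", "training_data_15pct", 0.15)

    # Step 3: retrain on the modified set; check=True makes this fail loudly
    # if your training code cannot handle the change.
    subprocess.run(["python", "train_model.py", "training_data_15pct", "model_15pct"],
                   check=True)

    # Step 4: run both models on the same unmodified held-out records and compare
    # the reported scores. Here "model_full" is assumed to be a model you already
    # trained on the full, unmodified training set; (nearly) identical scores
    # suggest that the training code is not actually learning from the data.
    for model in ["model_full", "model_15pct"]:
        subprocess.run(["python", "run_model.py", model, "holdout_data",
                        "outputs_" + model], check=True)
        subprocess.run(["python", "evaluate_model.py", "holdout_data",
                        "outputs_" + model], check=True)

The same pattern works for the other modifications in step 1; for example, duplicate the patient folders to get a 150% training set, or drop all patients with one of the labels.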

Again, this is a simplified process, and we may change how we stress-test your code in the future (for example, by randomizing the labels), so please think about how you can ensure that your code isn’t dependent on a single set of data and labels or on a single robustness test. Of course, you should also try similar steps to check the rest of your code as well.
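
If you want to try a label-randomization check of your own, a minimal sketch follows. It assumes that each patient folder in your copy of the training data contains a metadata text file with "Outcome:" and "CPC:" lines, as in the public training set; adjust the parsing if your copy is organized differently, and only run it on a disposable copy of the data.

    import random
    import re
    from pathlib import Path

    def shuffle_labels(data_dir, seed=0):
        # Shuffle the Outcome/CPC labels across patients in a copy of the training set.
        meta_files, labels = [], []
        for f in sorted(Path(data_dir).glob("*/*.txt")):
            text = f.read_text()
            outcome = re.search(r"Outcome:.*", text)
            cpc = re.search(r"CPC:.*", text)
            if outcome and cpc:  # skip files that do not carry labels
                meta_files.append(f)
                labels.append((outcome.group(0), cpc.group(0)))
        random.Random(seed).shuffle(labels)
        for f, (outcome, cpc) in zip(meta_files, labels):
            text = f.read_text()
            text = re.sub(r"Outcome:.*", outcome, text)
            text = re.sub(r"CPC:.*", cpc, text)
            f.write_text(text)

    # A model trained on the shuffled copy should score near chance on unmodified
    # held-out data; if it does not, information is probably leaking into training.
    shuffle_labels("training_data_shuffled")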


All of this work is in service of protecting your scientific contributions over the course of the Challenge, and we appreciate, as always, your feedback and help.


Best,

Matt & Gari

(On behalf of the Challenge team.)


Please post questions and comments in the forum. However, if your question reveals information about your entry, then please email info at physionetchallenge.org. We may post parts of our reply publicly if we feel that all Challengers should benefit from it. We will not answer emails about the Challenge to any other address. This email is maintained by a group. Please do not email us individually.