Dear Challengers,
Thank you for another successful Challenge! We were happy to see many of you in person at CinC 2022 in Tampere, Finland, and we hope that we can see even more of you next year at CinC 2023 in Atlanta, Georgia, USA – hosted by us!
This announcement has important information about the scores on the test data and the updates to your papers (which you need to implement to be ranked). Please read it carefully.
Test scores

Please note that there are five tables on this page: a table with a summary of the teams, two tables for the murmur detection task, and two tables for the clinical outcome identification task. For each task, there is an official ranked list of teams sorted by test score and an unofficial unranked list of the other teams in alphabetical order. The teams in the second list were not ranked because they did not satisfy one or more of the Challenge rules, for example, by submitting non-functioning or non-reusable training code, or by failing to register for CinC or to upload a CinC preprint by the deadline.
Please remember that we used the weighted accuracy metric to score and rank teams on the murmur detection task and the cost metric to score and rank teams on the clinical outcome identification task. We included additional metrics in these tables, and you are more than welcome to include and compare them in your papers as long as you clearly report the official metrics (see below).
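(As a rough sanity check when reporting your numbers, here is a minimal sketch of a class-weighted accuracy of the kind used for the murmur task. The class names and weights below are illustrative assumptions, not the official values; the official definitions of the murmur and outcome metrics are given in the Challenge description paper cited below.)

def weighted_accuracy(y_true, y_pred, weights=None):
    # Illustrative sketch (not the official scoring code). The default weights
    # below are assumptions for illustration only; see the Challenge description
    # paper for the official metric definitions.
    if weights is None:
        weights = {"Present": 5.0, "Unknown": 3.0, "Absent": 1.0}
    numerator = sum(weights[t] for t, p in zip(y_true, y_pred) if t == p)
    denominator = sum(weights[t] for t in y_true)
    return numerator / denominator if denominator else float("nan")

# Example: three of four labels are correct, including the heavily weighted "Present" case.
print(weighted_accuracy(["Present", "Absent", "Unknown", "Absent"],
                        ["Present", "Absent", "Absent", "Absent"]))  # 0.7

Because each example is weighted by its true class, correct predictions on the higher-weight classes contribute more to the score than correct predictions on the others.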
Please contact us by Friday, 16 Sept. 2022 at 23:59 GMT if you believe that your team is on the wrong list, or if you believe that any of the information about your team is incorrect. We will update the results afterwards. Please note that your team may move from an official entry to an unofficial entry if you do not adhere to the instructions below on finalizing and uploading your papers by the deadline.
(A closer look at the rounded scores revealed another (joint) winner – congratulations! Please look forward to an email from us. :))
Final papers and deadline

Please update your four-page conference papers to include the test scores, update your discussion and conclusions, and address any issues with your preprint. Please upload your final papers on Softconf by 23:59 (your local time) on 24 September 2022:
https://www.softconf.com/m/cinc2022/

The above tables include the scores for your chosen models on the training, validation, and test sets. Please note that the "validation" score is the intermediate score that teams received during the official phase of the Challenge, before we ran your final selected code on the test set. Please include the training, validation, and test scores in your papers using the format described in the paper template:
https://physionetchallenges.org/2022/papers/

If you did not receive a final test score, please be clear about that in your paper. Articles that refer to a validation score or “local” test score as if it were the final metric by which to be judged will not be eligible for publication.
We review each paper, and we frequently have to ask teams to make corrections that we have already requested! Please read the paper template for instructions, including the following items:
- Cite the Challenge description and data correctly using the references in the CinC template. Specifically, they are:
1) Reyna, M. A., Kiarashi, Y., Elola, A., Oliveira, J., Renna, F., Gu, A., Perez-Alday, E. A., Sadr, N., Sharma, A., Mattos, S., Coimbra, M. T., Sameni, R., Rad, A. B., Clifford, G. D. (2022). Heart murmur detection from phonocardiogram recordings: The George B. Moody PhysioNet Challenge 2022. medRxiv, doi: 10.1101/2022.08.11.22278688
2) Oliveira, J., Renna, F., Costa, P. D., Nogueira, M., Oliveira, C., Ferreira, C., … & Coimbra, M. T. (2022). The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification. IEEE Journal of Biomedical and Health Informatics, doi: 10.1109/JBHI.2021.3137048.
- We ask you to cite these articles so that we can measure the impact of the Challenge and report it to those who sponsor us. If you fail to cite these reference articles, our impact is under-reported, and funding for future Challenges is much less likely. We appreciate your help with this. Citing them also prevents authors from incorrectly describing the data (a problem we often see).
- Do not cite the Challenge websites … and avoid citing websites in general.
- Cite your other references correctly: https://ieeeauthorcenter.ieee.org/wp-content/uploads/IEEE-Reference-Guide.pdf. We’ve seen many sloppy references in which authors’ names are butchered, abbreviations and journal names are uncapitalized, and important information (like volume or page numbers) is missing.
- Try to avoid citing preprints - look for the journal article that the authors eventually published. This will be more accurate and more balanced, since it has gone through peer review. If there is no journal article following the preprint, that may be because the authors were unable to find a journal that would publish it. Be skeptical of the claims and work in the preprint.
- Present your results in your abstract and results table in the same way we did in the CinC template for consistency with other teams. This makes comparisons easier and ensures you don’t miss key information.
- Be clear about your data sets (training, cross-validation on the training data, validation, test) and metrics/scores. Include your scores and rankings on the validation and test data - you don't strictly need to provide training or cross-validation scores, but if you do, make sure you clearly label them alongside the scores on the real test data so there is no misinterpretation. If you did not receive scores on the validation or test data, then say so. Do not describe your “local test set”, which is just confusing. The only test set in the context of the Challenge is the one on which we ran your final code submission.
- Do not make misleading or inaccurate statements about your results. In particular, do not claim an inaccurate ranking or report inaccurate statistics. If you are on the unofficial list, your entry was not ranked, and you should simply say that you were not ranked. Do not say where you ‘would’ have been had you been ranked; this is misleading and confusing.
Teams that are unable to address these issues by the deadline are in danger of having their papers rejected and being removed from both the ranked and unofficial unranked lists, so please review your papers carefully before you resubmit them!
Focus Issue

As we announced on the Challenge forum before the conference, we are asking teams to submit their extended work as preprints to medRxiv for peer pre-review. The ‘best’ preprints will be invited for submission to a focus issue on this year’s Challenge:
https://groups.google.com/g/physionet-challenges/c/IjX_GdhvDrc

Another Shot at the Test Data!

We encourage you to make improvements to your code in light of what you learned last week at the conference. If you do so and send us a draft of the extended preprint describing the modifications, we will attempt to run your new code one more time. If you include this new approach (and score) in the preprint, please make sure you identify it as a post-Challenge submission and compare it to your Challenge submission. You may do this before or after you post your medRxiv preprint (although we ask you to update the preprint on medRxiv if you do it after posting your first version there).
Parting Thoughts

We look forward to seeing your revised papers (and code) and hope that you will consider submitting extensions of your work to the focus issue. Congratulations to the winners, thank you all for participating, and we hope that you will participate again in the next Challenge!
Best,
The Challenge team
https://PhysioNetChallenges.org/
Please post questions and comments in the forum. However, if your question reveals information about your entry, then please email challenge at physionet.org. We may post parts of our reply publicly if we feel that all Challengers should benefit from it. We will not answer emails about the Challenge to any other address. This email is maintained by a group. Please do not email us individually.