Predicted CPC score at different time points

Labros Kokkalas

Feb 21, 2023, 10:53:06 AM
to physionet-challenges
Dear Challenge team,
Congratulations on the organization of this year's challenge!
It is unclear to me how we should provide a CPC score for each time point separately (12, 24, 48, and 72 hours from the time of ROSC), since there is no option to specify the time point in the run_model parameters and the example code generates only one output per patient.

Thank you in advance,
Labros Kokkalas

PhysioNet Challenge

Feb 21, 2023, 10:58:00 AM
to physionet-challenges
Dear Labros,

Thanks for the kind words about our organization of the Challenge!

We will provide up to 72 hours of signal data with the training set and up to 12, 24, 48, or 72 hours of signal data with the validation and test sets. For example, if we want your trained model to make predictions at 12 hours after ROSC, then we will simply run the "run_model" script with 12 hours of data. In practice, we will run your "train_model" script once on the training set and your "run_model" script four times, on the 12-, 24-, 48-, and 72-hour versions of the validation and test sets.

We will use the following script to truncate the recordings in the validation and test sets, so please use this script on the training set to check your code:
https://github.com/physionetchallenges/python-example-2023/blob/master/truncate_recordings.py
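As an illustration, here is a minimal sketch of how a team might reproduce this workflow locally on the training set. The script names and argument order follow the python-example-2023 repository, but the folder names and the command-line flags passed to truncate_recordings.py are assumptions, not the documented interface, so please check the repository for the actual usage:

    # Hypothetical local dry run of the four-time-point evaluation process.
    # Folder names and the flags for truncate_recordings.py are assumptions.
    import subprocess

    TRAINING_DATA = "training_data"  # full training set (up to 72 hours per patient)
    MODEL_FOLDER = "model"

    # Train once on the full training data.
    subprocess.run(["python", "train_model.py", TRAINING_DATA, MODEL_FOLDER], check=True)

    # Run the trained model on copies of the data truncated at each time point,
    # mirroring how the organizers will run it on the validation and test sets.
    for hours in (12, 24, 48, 72):
        truncated_folder = f"truncated_{hours}h"
        output_folder = f"outputs_{hours}h"
        subprocess.run(["python", "truncate_recordings.py",
                        "-i", TRAINING_DATA, "-o", truncated_folder, "-t", str(hours)],
                       check=True)
        subprocess.run(["python", "run_model.py", MODEL_FOLDER, truncated_folder, output_folder],
                       check=True)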

The example code can make predictions at any time point, but we only expect your code to make predictions at 12, 24, 48, and 72 hours after ROSC. You can decide how best to accomplish that for your code.

You may note that this process forces your code to make causal predictions, which is good, but it also forces your code to repeat the same calculations multiple times on the earlier parts of the recordings, which is not as good. We made this trade-off to make things simpler for everyone (your time and our time are more valuable than computer time). Most entries spend (much) more time training the models on the training set than running the trained models on the validation or test sets, so the overhead is relatively minimal, and we can make adjustments if we find that entries do not have enough time to finish. Of course, any questions, concerns, and feedback about this process are more than welcome.

Best,
Matt
(On behalf of the Challenge team.)

Please post questions and comments in the forum. However, if your question reveals information about your entry, then please email info at physionetchallenge.org. We may post parts of our reply publicly if we feel that all Challengers should benefit from it. We will not answer emails about the Challenge to any other address. This email is maintained by a group. Please do not email us individually.

Allan Moser

Feb 23, 2023, 11:04:52 PM
to physionet-challenges
Thank you for organizing this very interesting challenge. This is the first time I've participated, so my question may be due to a lack of knowledge about previous challenges.
 
Some patients' data start more than 12 hours after ROSC (for example, ICARE_0286 starts at hour 21), or do not extend to 72 hours (for example, ICARE_0464 ends at hour 21). Should predictions be made at 12, 24, 48, and 72 hours for these instances? How will scoring at the 72-hour point be handled for these cases?

Thanks in advance,
Allan Moser

PhysioNet Challenge

Feb 23, 2023, 11:12:20 PM
to physionet-challenges
Hi Allan,

This is a perfectly good question and likely one that other teams have as well.

Yes, the EEG data for some patients start several hours after ROSC, and the EEG data for some patients end several (or more) hours before the 72 hour time point. You may want to consider why the data may be missing, but the missing data are a realistic part of this clinical prediction task and one that we wanted to preserve for the Challenge.

Your algorithm should make predictions at all of these time points regardless of the availability of data before the time point. In fact, we encourage you to make your algorithm robust to missing data; it should not crash if there are no EEG data before a time point or if some of the other variables are missing. We will score your algorithms' predictions on patients with missing data.
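To illustrate one way of doing this, below is a minimal sketch of defensive feature construction. The names and the feature-vector size are hypothetical, not part of the example code; the point is simply that the code returns placeholder values, which a downstream imputer or model can handle however you see fit, instead of crashing when EEG recordings or clinical variables are missing.

    # A minimal sketch, not the example code: build a feature vector that never
    # crashes on missing inputs. Names and the feature-vector size are assumptions.
    import numpy as np

    NUM_EEG_FEATURES = 8  # assumed length of the per-patient EEG feature vector

    def safe_float(value):
        # Parse a clinical variable, returning NaN when it is missing or malformed.
        try:
            return float(value)
        except (TypeError, ValueError):
            return float("nan")

    def combine_features(age, eeg_feature_arrays):
        # eeg_feature_arrays is a list of per-recording feature vectors; it may be
        # empty if no EEG data exist before the time point.
        if eeg_feature_arrays:
            eeg_features = np.nanmean(np.vstack(eeg_feature_arrays), axis=0)
        else:
            # No EEG before this time point: fall back to NaN placeholders so a
            # downstream imputer (fitted during training) can fill them in.
            eeg_features = np.full(NUM_EEG_FEATURES, np.nan)
        return np.hstack(([safe_float(age)], eeg_features))

    # A patient with a missing age and no EEG data still yields a full-length vector.
    print(combine_features(None, []))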

Best,
Matt
(On behalf of the Challenge team.)

Please post questions and comments in the forum. However, if your question reveals information about your entry, then please email info at physionetchallenge.org. We may post parts of our reply publicly if we feel that all Challengers should benefit from it. We will not answer emails about the Challenge to any other address. This email is maintained by a group. Please do not email us individually.
