Dear Simon,
Good question. To be clear, we encourage a diversity of approaches in general; we do not encourage any one single approach, whether transfer learning or otherwise.
To include large files in your submission, you may want to:
- Include large files in your git repo using git lfs. The submission system will automatically pull them after it checks out the repository. Most teams do this for large files.
- Use another git host with a larger file limit.
- Host the large files on another service (e.g., Google Drive) and include a download command in your Dockerfile. Make sure that you download these files somewhere under /challenge inside the container; if you download to /root or /tmp, those paths won't be available when we run your code. Also, make sure that the download happens in the Dockerfile itself: your code won't have internet access during training (train_model) and inference (run_model), so you can't download anything during those steps.
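As a sketch of the last option, the relevant Dockerfile lines might look like the following. The URL and filenames here are placeholders, not real hosted files, and the base image is just an example:

```dockerfile
FROM python:3.10

# Work inside /challenge so the downloaded files remain available
# when train_model and run_model are invoked later.
WORKDIR /challenge

# Fetch and unpack the large files at image build time; there is no
# internet access during the training and inference steps.
RUN wget -q https://example.com/model_weights.tar.gz \
    && tar -xzf model_weights.tar.gz -C /challenge \
    && rm model_weights.tar.gz

# Copy the rest of the submission code into the image.
COPY . /challenge
```

Downloading in a RUN step (rather than in your training or inference code) ensures the files are baked into the image before the network is cut off.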
Please note that teams do not need to include external data in their submissions. Teams that do transfer learning should actually perform transfer learning, and not simply submit a pre-trained model to try to avoid the training code requirements or resource constraints; doing so is not allowed and would result in disqualification from rankings and prizes.
Best,
Matt
(On behalf of the Challenge team.)
Please post questions and comments in the forum. However, if your question reveals information about your entry, then please email info at
physionetchallenge.org. We may post parts of our reply publicly if we feel that all Challengers should benefit from it. We will not answer emails about the Challenge to any other address. This email is maintained by a group. Please do not email us individually.