Dear participants,
Thank you for your continued interest in the VisDA-21 competition. We are happy to announce that the test phase is now live!
The test leaderboard can be found at
https://competitions.codalab.org/competitions/33396 under “Results > Testing Data Released”. Teams will be ranked by their average rank across target accuracy and detection AUROC, with accuracy as a tie-breaker (see
http://ai.bu.edu/visda-2021/#rules for details). Each team is allowed at most one submission per day and at most five submissions in total. If team members use different codalab accounts, these limits apply to the team’s cumulative submission count.
To qualify as winners, a team must make their final result public on the test leaderboard by Oct 10th, 11:59 am EST. Shortly afterwards, we will ask winning teams to provide reports of four or more pages describing their method and to submit their code within the next two days, so that we can verify their results. Reports must contain a checklist section (see below) verifying that the proposed solution complies with the challenge rules. All winners’ reports will be posted on the challenge website, and authors will later be given an opportunity to provide updated “camera-ready” versions of their reports. Open-sourcing the code is optional but highly encouraged.
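For illustration only, here is a minimal Python sketch (not the official scoring code) of the ranking rule above: teams are ordered by the average of their ranks in accuracy and AUROC, with higher accuracy breaking ties. The rank_teams helper and the example scores are hypothetical.

def rank_teams(results):
    """results: dict mapping team name -> (accuracy, auroc)."""
    teams = list(results)
    # Rank 1 = best (highest) value on each metric.
    acc_order = sorted(teams, key=lambda t: -results[t][0])
    auroc_order = sorted(teams, key=lambda t: -results[t][1])
    acc_rank = {t: i + 1 for i, t in enumerate(acc_order)}
    auroc_rank = {t: i + 1 for i, t in enumerate(auroc_order)}
    # Order by average rank; higher accuracy breaks ties.
    return sorted(teams, key=lambda t: ((acc_rank[t] + auroc_rank[t]) / 2, -results[t][0]))

# Example: rank_teams({"A": (0.62, 0.85), "B": (0.60, 0.90), "C": (0.58, 0.80)})
# returns ["A", "B", "C"]: A and B tie on average rank 1.5, and A wins on accuracy.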
If your submission did not make it into the top 3 but is particularly efficient in terms of the resources required for training, please consider submitting an entry to our energy-efficiency “honorable mention” track:
https://forms.gle/ArxjHfcSrfYuQjXj9
Good luck and have fun!
VisDA-21 team
Report checklist:
1. Supervised training: Teams may only submit test results of models trained on the source domain data. To ensure a fair comparison, we do not allow training on any other external data or any form of manual data labeling.
2. Unsupervised training: Models can be adapted (trained) on the test data in an unsupervised way, i.e., without labels.
3. Model size: To encourage improvements in universal domain adaptation, rather than in general optimization or in the underlying model architecture, models are limited to a total size of 100 million parameters.
4. Ensembling: Ensembling is now allowed, but each additional forward pass through a model counts its parameters again toward the 100M total allowed. For example, an ensemble that passes through a 50M-parameter model twice counts as 100M toward the parameter cap (see the sketch after this list).
5. Energy efficiency: Teams must report the total training time of the submitted model, which should be reasonable; we define reasonable as not exceeding 100 GPU-days on a 16GB V100.
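As a reference for items 3 and 4, below is a minimal sketch, assuming PyTorch models, of how a team might check its effective parameter count against the 100M budget. The effective_param_count helper is hypothetical and not part of the official evaluation.

def effective_param_count(models_and_passes):
    """models_and_passes: list of (torch.nn.Module, forward_passes) pairs.

    Per the ensembling rule, each model's parameters count once for every
    forward pass it contributes at inference time.
    """
    total = 0
    for model, num_passes in models_and_passes:
        n_params = sum(p.numel() for p in model.parameters())
        total += n_params * num_passes
    return total

# Example: a single 50M-parameter backbone used twice in an ensemble
# counts as 100M parameters and exactly reaches the cap.
# assert effective_param_count([(backbone, 2)]) <= 100_000_000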