[Shared Task 2022] test sets out!


Song Feng

Mar 24, 2022, 4:21:04 PM
to dialdoc
Dear all,

We have added the data for the Test Phase to the same folder shared under "Submit" tab of eval.ai. Please check it out.

Note that
- For the test set, there is only one turn to predict per dialogue. As a result, there is a smaller set of prediction IDs for "MDD-UNSEEN" in the test set than in the dev set, where multiple turns per dialogue are predicted.

- Please don't worry about including experimental results on the newly released test set in your technical paper if time is tight; the reviewers understand the situation. However, please make sure to include results on the final test set in your camera-ready copy.

- The final date to submit to the leaderboards is April 3, AoE.

- We encourage all teams to submit a technical paper describing their system and reporting results, due March 27, AoE. A technical paper is required to qualify for the rewards.

- We encourage all teams to make their leaderboard submissions public. A public submission on the Test Phase leaderboards is required to qualify for the rewards.

- We will send a separate email regarding human evaluation and ranking-related specifics.

Feel free to let us know if you have any questions or concerns! Please either email "diald...@googlegroups.com" (visible only to workshop organizers) or reply to this email.


Thank you for your participation! Thanks to IBM for sponsoring the rewards.


Best,
Song
