Help us cite you - Very brief form to learn about your system


Scott Hale

Feb 19, 2022, 9:27:58 AM
to semeval-2022-task-8-multilingual-news
Dear All,

Thank you for your participation in the SemEval competition. We are thrilled to see multiple strong models. We would love to describe your best performing models in more depth in the final task paper and, if you are interested, to test them on a large Media Cloud dataset.

First, please fill out this brief Google Form to give us a 1-2 sentence description of your system, tell us whether you plan to share your model's source code (e.g., via GitHub, with a short description of how to use it), and let us know whether you plan to submit a paper. This will greatly help us in preparing the task paper and bringing attention to your work.

Second, if you are potentially interested in working with us on a Media Cloud dataset including the full text of all labeled articles and tens of millions of unlabeled articles, please share your source code and declare your interest via the Google Form.

Third, the original released dataset averaged labels from multiple annotators. However, some items were labeled by many annotators while many others were labeled by only one, making their average labels less reliable. Today, we are releasing per-annotation data to supplement the previous release. We are excited to learn whether this data and/or any of the individual similarity scores (geographic, entity, narrative, etc.) can be used to improve or better diagnose the performance of your systems. For instance, we hypothesize that simply re-training a model on the per-annotation data may improve your system's performance. If you find time, we encourage you to test this hypothesis and report the result in your paper. If you do, please report two Pearson correlations: one computed on the per-item test data and the other on the per-annotation test data. Please compute both correlations for each of the two versions of your system, i.e., trained on per-annotation data and trained on per-item data. Overall, this gives you four correlation values to assess the impact of averaged vs. per-annotation labels.
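
If it helps, here is a minimal sketch of how those four correlations could be computed. This is not the official evaluation script; the file names and column names ("pair_id", "overall") are assumptions stand-ins for whatever format your data uses, and it relies on SciPy's pearsonr:

# A minimal sketch, not the official scorer: compute Pearson r on the
# per-item (averaged) and per-annotation gold labels for one trained model.
# File names and column names ("pair_id", "overall") are assumptions.
import pandas as pd
from scipy.stats import pearsonr

def evaluate(predict, per_item_csv="test_per_item.csv",
             per_annotation_csv="test_per_annotation.csv"):
    results = {}
    for name, path in [("per_item", per_item_csv),
                       ("per_annotation", per_annotation_csv)]:
        df = pd.read_csv(path)
        preds = [predict(pid) for pid in df["pair_id"]]  # model similarity scores
        r, _ = pearsonr(preds, df["overall"])            # Pearson correlation
        results[name] = r
    return results

# Running this once for the per-item-trained model and once for the
# per-annotation-trained model yields the four correlation values above.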


Many thanks,
Task 8 Organizers

nidhir bhavsar

Feb 24, 2022, 5:39:10 AM
to semeval-2022-task-8-multilingual-news
By the way, where do we have to submit the paper? I cannot find any information about this in the chats or on the official SemEval website. Thank you

Best,
Nidhir

Emanuela Boros

Feb 24, 2022, 5:31:39 PM
to semeval-2022-task-8-multilingual-news

Scott Hale

Feb 24, 2022, 5:49:06 PM
to Emanuela Boros, semeval-2022-task-8-multilingual-news
Thank you, Emanuela. Yes, this is the URL for all submissions to SemEval. I don't know why it isn't more prominent on the SemEval website.


Best wishes,
Scott


