PARSEME ST 1.2 results are out!

Carlos Ramisch

Jul 9, 2020, 12:11:21 PM
to verbalmwe

Dear participants,


We are very happy to announce that we received 9 system submissions for the PARSEME Shared Task 1.2, building on various methodologies and techniques.

You can find the detailed results for each submission at the following link:

http://multiword.sourceforge.net/sharedtaskresults2020


The gold standard test files have also been made available on Gitlab:

https://gitlab.com/parseme/sharedtask-data/tree/master/1.2

And so are the output files of all participant systems: https://gitlab.com/parseme/sharedtask-data/tree/master/1.2/system-results


Next, we would like to encourage you to describe your results in a system description paper to be submitted to the MWE-LEX 2020 workshop.

Submission is not mandatory; however, we are curious to learn more details about your systems.

The deadline for submission is Sep 2, 2020. 

In START, please choose SHARED TASK TRACK system descriptions: https://www.softconf.com/coling2020/MWE-LEX/


Some more technical details:


- If you submitted two variants of the same system, please write one description paper.


- In your paper, please use system names as provided in the rankings, to ensure anonymity for the system description paper reviews (final versions can include Bibtex references to other system description papers).


- As for the paper title, we suggest using "<Your SystemName> at PARSEME Shared Task 2020: <additional free text>".


- Only give a minimal introduction to the task/context, as there will be a specific submission to describe the shared task in general.


- Content-wise, please do not focus on rankings; rather, include extra analyses (unseen, variant, and discontinuous VMWEs), ablation studies, error analyses, unofficial runs, etc.


- Results can be enriched with examples; these are easy to find using the --debug option of evaluate.py.


- Terminology: use the standard evaluation measure names “Unseen MWE-based P/R/F1”, "Global MWE-based P/R/F1" and "Global Token-based P/R/F1".


- Papers should be anonymous (or as anonymous as possible) at the time of submission. In the final version of the papers, mention any biases such as authors participating in the creation of corpora or having early access to test corpora.


- Please prefer objective statements such as "our system was ranked X in the shared task according to criteria Y and Z" rather than unsupported claims such as "our system is the best/better than X for this task".


- If the system description paper is accepted, a co-author has to register for and attend the MWE-LEX workshop to present it. Note that it is currently unclear whether the conference will be held online or in Barcelona, but in any case one co-author must register and be available for the presentation.


- If you decide not to submit a system description paper, you can still send us a link to a description paper (e.g., posted to arXiv) and we will link it from our page after MWE-LEX papers are reviewed.
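As a side note on the evaluation measures named above: the MWE-based scores count a predicted VMWE as a true positive only if it matches a gold annotation exactly. The following is a minimal illustrative sketch (not the official evaluate.py implementation), assuming each VMWE is represented as the set of its token indices:

```python
# Hypothetical illustration of MWE-based P/R/F1 (not the official script):
# gold and pred are sets of VMWE annotations, each a frozenset of token indices.
def prf(gold, pred):
    """Exact-match precision, recall and F1 over sets of annotations."""
    tp = len(gold & pred)                      # exact-match true positives
    p = tp / len(pred) if pred else 0.0        # precision
    r = tp / len(gold) if gold else 0.0        # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {frozenset({1, 2}), frozenset({5, 6, 9})}   # two gold VMWEs
pred = {frozenset({1, 2}), frozenset({5, 6})}      # only one matches exactly
print(prf(gold, pred))                             # → (0.5, 0.5, 0.5)
```

The token-based measures differ in that they give partial credit at the level of individual tokens rather than requiring whole-expression matches.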


Congratulations to everyone! We would like to thank you again for your great efforts, without which the shared task could not have been such a successful event.


Looking forward to reading your papers,


PARSEME shared task organizers
