PARSEME shared task 1.2 - evaluation phase starting


Carlos Ramisch

Jul 1, 2020, 3:55:17 AM
to verbalmwe
Dear PARSEMErs,

The evaluation phase of the PARSEME shared task 1.2 on semi-supervised identification of verbal MWEs has just started!

We have released the blind test data for all 14 languages on our public Gitlab repo:
You can also use the larger unannotated corpora available here (they are allowed in the closed track as well):
This year's focus is on unseen VMWEs: the general ranking will emphasize results on them.

The deadline for the submission of results has been extended to July 6 (anywhere in the world).

Results must be submitted via the MWE-LEX softconf page:
Results must be a single compressed archive ("zip") with one folder per language, named according to the 2-letter language code (e.g. GA/ for Irish).
Each output file must be named test.system.cupt and conform to the .cupt format.
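For reference, a .cupt file is a CoNLL-U Plus file whose last column (PARSEME:MWE) carries the VMWE annotation; the abbreviated sentence below is an illustrative sketch, not taken from the shared-task data:

```
# global.columns = ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC PARSEME:MWE
# text = She gave up .
1	She	she	PRON	_	_	2	nsubj	_	_	*
2	gave	give	VERB	_	_	0	root	_	_	1:VPC.full
3	up	up	ADP	_	_	2	compound:prt	_	_	1
4	.	.	PUNCT	_	_	2	punct	_	_	*
```

In the last column, * marks tokens outside any VMWE, 1:VPC.full opens VMWE number 1 with its category, and a bare 1 marks its continuation.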
Before submitting, please download the format validation script and check each file as follows:
./validate_cupt.py --input test.system.cupt
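As a sketch of the whole packaging step (file and archive names other than GA/ and test.system.cupt are hypothetical; validate_cupt.py is the organizers' checker mentioned above):

```shell
# Assemble a submission archive with one folder per 2-letter language code.
mkdir -p submission/GA
# In practice, copy your system's output into place; here we only create a
# minimal placeholder containing the .cupt header line.
printf '# global.columns = ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC PARSEME:MWE\n' \
    > submission/GA/test.system.cupt

# Run the organizers' format checker on every per-language file, if present.
if [ -x ./validate_cupt.py ]; then
    for f in submission/*/test.system.cupt; do
        ./validate_cupt.py --input "$f"
    done
fi

# Create one zip with the language folders at its root.
if command -v zip >/dev/null 2>&1; then
    (cd submission && zip -r ../submission.zip .)
fi
```

Keeping the language folders at the top level of the zip matches the layout described above (e.g. GA/test.system.cupt for Irish).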

If you participate in both the closed and open tracks, please make distinct submissions for each.
Each team can submit 2 results per track, i.e. at most 4 in total (with one result per language in each submission).
Covering all languages is not mandatory, but if you do not, your macro-averages will not be comparable to those of other systems.

If you find a bug or have questions, subscribe to and use the participants' mailing list:
To reach the organizers, you can write to Parseme...@nlp.ipipan.waw.pl

Best,
Agata, Ashwini, Bruno, Carlos, Jakub, Marie


--
Carlos RAMISCH
http://pageperso.lis-lab.fr/carlos.ramisch
Assistant professor at LIS/TALEP and Aix Marseille University, France
Visiting researcher (2019/2020) at IRIT/MELODI in Toulouse, France