We invite you to participate in the shared task on Word Sense Induction and Disambiguation for the Russian Language, co-located with the Dialogue 2018 conference (http://www.dialog-21.ru/en/).
Word Sense Induction (WSI) is the process of automatic identification of word senses. While various sense induction and disambiguation approaches have been evaluated in the past for Western European languages, e.g. English, French, and German, no systematic evaluation of WSI for Slavic languages (http://sigslav.cs.helsinki.fi) is available at the moment. This shared task makes a first step towards bridging this gap by focusing on one Slavic language: its goal is to compare sense induction and disambiguation systems for Russian. Slavic languages still lack broad-coverage lexical resources comparable to those available for English, such as WordNet, which provides a comprehensive inventory of senses. Therefore, the word sense induction methods investigated in this shared task can be of great value in enabling semantic processing of Slavic languages.
We use the “lexical sample” setting: we provide participants with a set of contexts representing examples of ambiguous words, such as the word “bank” in “In geography, the word **bank** generally refers to the land alongside a body of water.” For each context, a participant needs to disambiguate one target word. Note that we do not provide any sense inventory: participants can assign sense identifiers of their choice to a context, e.g. “bank#1” or “bank (area)”.
The task will feature two tracks. In the “knowledge-free” track, participants need to induce a sense inventory from a text corpus of their own and then use it to assign each context a sense identifier from this induced inventory. In the “knowledge-rich” track, participants are free to use a sense inventory from an existing dictionary to disambiguate the target words (although the use of the gold standard inventory is prohibited). The advantage of our setting is that virtually any existing word sense disambiguation approach can be used within the framework of our shared task, from unsupervised sense embeddings to graph-based methods that rely on lexical knowledge bases, such as WordNet.
We will provide training datasets, which can be used for development of the models. Later, test datasets will be released: participants will need to use the developed models to disambiguate the test sentences and submit their final results to the organizers. The training and test datasets will use the same corpora and annotation approaches, but the target words will differ between the two.
Similarly to SemEval 2010 Task 14 (WSI&D), we use a gold standard in which each ambiguous target word is provided with a set of instances, i.e., contexts containing the word. Each instance is manually annotated with a single sense identifier according to a predefined sense inventory. Each participating system assigns sense labels to these ambiguous words, which can be viewed as a clustering of instances according to sense labels. To evaluate a system, its labeling of the contexts is compared to the gold standard labeling. We use the Adjusted Rand Index (ARI) as the quantitative measure of clustering quality.
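As a brief illustration (a sketch using scikit-learn's `adjusted_rand_score`; the sense labels below are invented for the example and are not from the task data), ARI compares the system's grouping of contexts with the gold standard grouping and is invariant to the particular sense identifiers a participant chooses:

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical gold-standard sense labels for six contexts of one target word
gold = ["bank#1", "bank#1", "bank#1", "bank#2", "bank#2", "bank#2"]

# A system may use arbitrary identifiers of its own: ARI compares only
# the induced grouping of instances, not the label strings themselves
perfect = ["A", "A", "A", "B", "B", "B"]
partial = ["A", "A", "A", "B", "B", "C"]  # one context split into its own sense

print(adjusted_rand_score(gold, perfect))  # → 1.0
print(adjusted_rand_score(gold, partial))  # ≈ 0.706
```

A random labeling scores close to 0, and a perfect match scores 1, regardless of how the sense clusters are named.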
We will offer a simple open-source baseline system that demonstrates the task, the input and output data formats, as well as the quality measure used. For the knowledge-free track, we particularly encourage participation of systems based on unsupervised word sense embeddings, e.g. AdaGram. For the knowledge-rich track, word sense embeddings linked to the inventory of a lexical resource, e.g. AutoExtend, can be obtained on the basis of lexical resources such as RuThes (http://www.labinform.ru/pub/ruthes/index.htm) and RuWordNet (http://ruwordnet.ru/ru/).
Dissemination of the Results
The results of the shared task will be disseminated and discussed at the 24th International Conference on Computational Linguistics and Intellectual Technologies “Dialogue 2018” (http://www.dialog-21.ru/en/). The training and test datasets will be published online to foster future research and development.
Important Dates
- First Call for Participation: October 15, 2017.
- Release of the Training Data: November 1, 2017.
- Release of the Test Data: December 15, 2017.
- Submission of the Results: January 15, 2018.
- Results of the Shared Task: February 1, 2018.
Organizers
- Alexander Panchenko, University of Hamburg
- Dmitry Ustalov, Krasovskii Institute of Mathematics and Mechanics
- Konstantin Lopukhin, Scrapinghub Inc.
- Anastasiya Lopukhina, Neurolinguistics Laboratory, National Research University Higher School of Economics & Russian Language Institute of the Russian Academy of Sciences
- Nikolay Arefyev, Moscow State University & Samsung Research
- Natalia Loukachevitch, Moscow State University