[Apologies if you receive multiple copies of this email.]
Dear colleagues and friends,
We are pleased to release the 1st Call for Participation - SemEval 2021 Task 11: NLPContributionGraph
URL: https://ncg-task.github.io/
Overview: Scientific literature is growing at a rapid rate, and researchers faced with this deluge of publications find it increasingly tedious, if not practically impossible, to keep up with research progress even within their own narrow discipline. The Open Research Knowledge Graph (ORKG) is posited as a solution to the problem of keeping track of research progress without the cognitive overload that reading dozens of full papers imposes. It aims to build a comprehensive knowledge graph that describes the research contributions of scholarly publications paper by paper, with the contributions interconnected via the graph even across papers. Within the NLPContributionGraph Shared Task, we have formalized the building of such a scholarly contributions-focused graph over NLP articles.
SemEval 2021 Task 11: NLPContributionGraph (NCG) provides participants with a dataset of NLP scholarly articles whose “contributions” information is structured to be integrable within Knowledge Graph infrastructures such as the ORKG.
The annotation data elements include: (1) contribution sentences - a set of
sentences about the contribution in the article; (2) scientific terms and
relations - a set of scientific terms and relational cue phrases extracted from
the contribution sentences; and (3) triples - semantic statements that pair
scientific terms with a relation, modeled toward subject-predicate-object RDF
statements for KG building. The task is to automatically extract these elements
given a new NLP article.
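For concreteness, the three annotation elements for a single contribution sentence might be represented as sketched below. The sentence, terms, cue phrase, and triples are invented for illustration and are not items from the task dataset:

```python
# A hypothetical illustration of the three NCG annotation elements for one
# invented contribution sentence; not taken from the task dataset.

# (1) a contribution sentence from the article
sentence = "We propose a neural sequence model for relation extraction."

# (2) scientific terms and a relational cue phrase extracted from the sentence
terms = ["neural sequence model", "relation extraction"]
cue_phrase = "propose"

# (3) triples pairing scientific terms with a relation, modeled toward
# subject-predicate-object RDF statements for KG building
triples = [
    ("Contribution", "propose", "neural sequence model"),
    ("neural sequence model", "for", "relation extraction"),
]

for subj, pred, obj in triples:
    print(f"({subj}, {pred}, {obj})")
```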
NCG will be divided into three evaluation phases:
- Evaluation Phase 1: End-to-end evaluation phase;
- Evaluation Phase 2, Part 1: Phrase Extraction Testing;
- Evaluation Phase 2, Part 2: Triples Extraction Testing.
Codalab site: https://competitions.codalab.org/competitions/25680
Dates
Trial data available: July 31, 2020
Training data available: October 1, 2020
Test data available/Evaluation starts: January 10, 2021
Evaluation ends: January 31, 2021
Paper submission due: February 23, 2021
Notification to authors: March 29, 2021
Camera ready due: April 5, 2021
SemEval workshop: Summer 2021
Task Organizers
Jennifer D’Souza (TIB Leibniz Information Centre for Science and Technology - Germany)
Sören Auer (TIB Leibniz Information Centre for Science and Technology - Germany)
Ted Pedersen (University of Minnesota Duluth - USA)
We look forward to having you on board!