# SemEval-2026: Call for Task Proposals
We invite proposals for tasks to be run as part of SemEval-2026.
SemEval (the International Workshop on Semantic Evaluation) is an
ongoing series of evaluations of computational semantics systems,
organized under the umbrella of SIGLEX, the Special Interest Group on
the Lexicon of the Association for Computational Linguistics.
SemEval tasks explore the nature of meaning in natural languages: how
to characterize meaning and how to compute it. This is achieved in
practical terms, using shared datasets and standardized evaluation
metrics to quantify the strengths and weaknesses of possible
solutions. SemEval tasks encompass a broad range of semantic topics
from the lexical level to the discourse level, including word sense
identification, semantic parsing, coreference resolution, and
sentiment analysis, among others.
For SemEval-2026, we welcome tasks that can test an automatic system
for semantic analysis of text (e.g., intrinsic semantic evaluation, or
an application-oriented evaluation). We especially encourage tasks for
languages other than English, cross-lingual tasks, and tasks that
develop novel applications of computational semantics. See the
websites of previous editions of SemEval to get an idea about the
range of tasks explored, e.g., SemEval-2020
(http://alt.qcri.org/semeval2020/) and SemEval-2021 through 2025
(https://semeval.github.io).
We strongly encourage proposals based on pilot studies that have
already generated initial data, evaluation measures and baselines. In
this way, we can avoid unforeseen challenges down the road that may
delay the task. We suggest providing a reasonable baseline (e.g., a
fine-tuned BERT model for a classification task) in addition to
majority-vote and random-guess baselines.
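As an illustration only (not a required setup), a minimal baseline sketch along these lines might use the Hugging Face `transformers` and `datasets` libraries; the file names, label count, and hyperparameters below are placeholders, not part of this call:

```python
# A minimal baseline sketch, assuming a classification task with CSV
# files containing a "text" column and an integer "label" column.
# File names, label count, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

MODEL = "bert-base-multilingual-cased"  # multilingual, since non-English tasks are encouraged

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

data = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(output_dir="baseline",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["dev"])
trainer.train()
print(trainer.evaluate())  # reports eval loss; add compute_metrics for task-specific scores
```

A multilingual checkpoint is shown here only because the call encourages non-English and cross-lingual tasks; any comparable pretrained encoder would serve.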
If you are unsure whether a task is suitable for SemEval, please feel
free to get in touch with the SemEval organizers at
semevalo...@gmail.com to discuss your idea.
## Task Selection
Task proposals will be reviewed by experts, and reviews will serve as
the basis for acceptance decisions. Everything else being equal, more
innovative new tasks will be given preference over task reruns. Task
proposals will be evaluated on:
- Novelty: Does the task address a compelling new problem that has not
been explored much in the community? If the task is a rerun, does it
cover substantially new ground (new subtasks, new types of data, new
languages, etc.; a single addition is not sufficient)?
- Interest: Is the proposed task likely to attract a sufficient number
of participants?
- Data: Are the plans for collecting data convincing? Will the
resulting data be of high quality? Will annotations have meaningfully
high inter-annotator agreement (a minimal agreement-check sketch
appears after this list)? Have all appropriate licenses for use
and re-use of the data after the evaluation been secured? Have all
international privacy concerns been addressed? Will the data
annotation be ready on time?
- Evaluation: Is the methodology for evaluation sound? Is the
necessary infrastructure available or can it be built in time for the
shared task? Will research inspired by this task be able to evaluate
in the same manner and on the same data after the initial task? Is the
task sufficiently challenging (e.g., is there room for improvement
over the baselines)?
- Impact: What is the expected impact of the data in this task on
future research beyond the SemEval Workshop?
- Ethics: The data must comply with privacy policies. For example:
  a) avoid personally identifiable information (PII); tasks aimed at
  identifying specific people will not be accepted;
  b) avoid medical decision making (comply with HIPAA and do not
  attempt to replace medical professionals, especially where mental
  health is concerned);
  c) these examples are representative, not exhaustive.
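As one illustration of the agreement check mentioned under Data above, here is a minimal sketch that computes Cohen's kappa for two annotators using scikit-learn; the label lists are hypothetical placeholders, not SemEval data:

```python
# A minimal inter-annotator agreement check using Cohen's kappa;
# the two label lists below are hypothetical, not SemEval data.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "neg", "neu", "pos", "neg"]
annotator_b = ["pos", "neg", "neu", "neu", "pos", "pos"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Chance-corrected measures such as kappa (or Krippendorff's alpha for more than two annotators) are more informative than raw percent agreement.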
## Submission Details
The task proposal should be a self-contained document of no longer
than 3 pages (plus additional pages for references). Please see the
website for further information.
## Important Dates
- Task proposals due 31 March 2025 (Anywhere on Earth)
- Task selection notification 19 May 2025
## Preliminary Timetable
- Sample data ready 15 July 2025
- Training data ready 1 September 2025
- Evaluation data ready 1 December 2025 (internal deadline; not for
public release)
- Evaluation start 10 January 2026
- Evaluation end by 31 January 2026 (latest date; task organizers may
choose an earlier date)
- Paper submission due February 2026
- Notification to authors March 2026
- Camera ready due April 2026
- SemEval workshop Summer 2026 (co-located with a major NLP conference)
Tasks that fail to keep up with crucial deadlines (such as the dates
for having the task and CodaLab website up and dates for uploading
sample, training, and evaluation data) may be cancelled at the
discretion of SemEval organizers. While consideration will be given to
extenuating circumstances, our goal is to provide sufficient time for
the participants to develop strong and well-thought-out systems.
Cancelled tasks will be encouraged to submit proposals for the
subsequent year’s SemEval. To reduce the risk of tasks failing to meet
the deadlines, we are unlikely to accept multiple tasks with
overlapping organizers.
## Chairs
- Sara Rosenthal, IBM Research AI
- Aiala Rosá, Universidad de la República, Uruguay
- Marcos Zampieri, George Mason University, USA
- Debanjan Ghosh, Educational Testing Service