We invite you to take part in CLEF 2023 LongEval: Longitudinal Evaluation of Model Performance shared task.
** Registration open, training data available **
The CLEF 2023 LongEval lab is motivated by recent research showing that the performance of information retrieval and text classification models drops as the test data becomes more distant in time from the training data. LongEval differs from traditional IR and classification shared tasks by focusing on the evaluation of models that mitigate this performance drop over time. It encourages participants to develop temporal information retrieval systems and longitudinal text classifiers that remain robust to dynamic temporal changes in text, introducing time as a new dimension of ranking model performance.
The lab consists of two tasks:
* Task 1. LongEval-Retrieval: The goal of Task 1 is to propose an information retrieval system which can handle changes over time. The proposed retrieval system should follow the temporal evolution of Web documents.
* Task 2. LongEval-Classification: The goal of Task 2 is to propose a temporally persistent classifier which can mitigate performance drop over short and long periods of time, relative to a test set drawn from the same time frame as the training data.
Important dates
April 2023: Practice data release
4 May 2023: Test data release
5 June 2023: Submission of participant papers [CEUR-WS]
Rabab Alkhalifa, Iman Bilal, Hsuvas Borkakoty, Jose Camacho-Collados, Romain Deveaud, Alaa El-Ebshihy, Luis Espinosa-Anke, Gabriela Gonzalez-Saez, Petra Galuščáková, Lorraine Goeuriot, Elena Kochkina, Maria Liakata, Daniel Loureiro, Harish Tayyar Madabushi, Philippe Mulhem, Florina Piroi, Martin Popel, Christophe Servan, Arkaitz Zubiaga.
Should you have any questions, please post them to the Slack channel.