[announcement] Release of the test sets and delay in starting the evaluation phase.


LLMs4OL Challenge

Jun 2, 2025, 7:52:30 AM
to LLMs4OL Challenge
Dear Participants,

I hope this email finds you well.

Unfortunately, due to a technical issue with CodaBench, we are delayed in starting the evaluation phase. Please rest assured that we are aware of the timeline and will extend the submission deadlines accordingly to ensure a smooth evaluation process.

Moreover, the test sets for the Seen-Eval phase have been released at https://github.com/sciknoworg/LLMs4OL-Challenge/tree/main/2025. We encourage you to begin preparing your predictions while we finalize the CodaBench/CodaLab setup. Once the challenge platform is live, we will promptly release the Blind-Eval phase through CodaBench/CodaLab as well.

We truly appreciate your patience and enthusiasm for the challenge.

Warm Regards,
Hamed
On behalf of the LLMs4OL Challenge Organization Team

LLMs4OL Challenge

Jun 5, 2025, 4:09:19 AM
to LLMs4OL Challenge
Dear Participants,

I hope this email finds you well.

We are pleased to announce that the CodaLab platform for the challenge is now live. You can register and begin submitting your results here: https://codalab.lisn.upsaclay.fr/competitions/23065

Important Updates:
  • Deadline Extensions:
    • Submission deadline: June 27
    • Paper submission deadline: July 10
  • Updated SWEET Ontology: We've revised the SWEET ontology for test cases related to Tasks B, C, and D. Please ensure you use the latest version.
  • Subtask A1.1-Ecology (Text2Onto Term Extraction) Removed: Due to issues identified, Subtask A1.1, the Ecology term-extraction subtask of Text2Onto, will no longer be part of the challenge.
Thank you for your participation, and don’t hesitate to reach out if you have any questions.

Best regards,

Hamed
On behalf of the LLMs4OL Challenge Organization Team.