Shared Task at FNP 2026: Financial Causality Detection (FinCausal 2026)

Rayson, Paul

Jan 9, 2026, 9:18:18 AM
to ml-...@googlegroups.com
FinCausal 2026

Introduction
============
Financial analysis depends on factual data and on explanations of why those data vary. Data present facts, but they do not reveal why those facts occurred; for that, one must examine the narrative behind the data. Recognising causality is therefore essential for understanding decision-making processes.
The Financial Document Causality Detection Task aims to develop the ability to explain, from external sources, why a change occurs in the financial landscape, as a preamble to generating accurate and meaningful financial narrative summaries. Its goal is to determine which events, or chains of events, can cause a financial object to be modified or an event to occur, with respect to a given context. Participants must identify the cause or effect within the given segments.

There are two subtasks: one in English and one in Spanish. The English dataset has been sourced from 2017 UK financial annual reports in the corpus provided by UCREL at Lancaster University, and from the English version of the 2018 FinT-esp corpus. The Spanish dataset has been extracted from a corpus of Spanish financial annual reports from 2014 to 2018. The two datasets are comparable across languages, to facilitate the testing of multilingual models. For the 2026 edition, the previous dataset has been expanded with 500 examples in each language, including complex cause-and-effect structures.
In its first editions, FinCausal treated the identification of cause-effect relationships as a purely extractive task. In 2025, it was reframed as a generative AI task: abstractive questions about causes or effects are posed, and the system's responses are evaluated using exact-match and similarity metrics.
In FinCausal 2026, we have introduced the following features and innovations:
• Complete review of the previous datasets, removing ambiguous or very simple cases, and addition of more than 500 new fragments with complex relationships (such as causal chains of three or more elements).
• Rephrasing of the abstractive questions in 10% of the cases, to create a dataset that requires more advanced reasoning and is less dependent on similarity to the original text.
• Random partitioning of the training and test sets, based on the new 2026 dataset, so that the innovations are distributed evenly across the splits.
• Introduction of an LLM-as-a-judge–based evaluation metric, which scores system responses on a 1–5 scale according to their adequacy. This metric replaces the previous SAS + Exact Match evaluation scheme and aligns FinCausal with current practices adopted in recent shared tasks and competitions.
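
For concreteness, below is a minimal sketch of how such a judge might be implemented. The prompt wording, the score parsing, and the llm_call wrapper are illustrative assumptions on our part, not the official evaluation script.

    # Minimal sketch of an LLM-as-a-judge scorer. Illustrative only:
    # prompt wording and parsing are assumptions, and llm_call stands in
    # for any function mapping a prompt string to a model reply string.
    import re

    JUDGE_PROMPT = """You are grading a financial causality QA system.
    Context: {context}
    Question: {question}
    Reference answer: {reference}
    System answer: {candidate}
    Rate the adequacy of the system answer from 1 (inadequate) to 5
    (fully adequate). Reply with a single integer."""

    def judge_score(llm_call, context, question, reference, candidate):
        reply = llm_call(JUDGE_PROMPT.format(
            context=context, question=question,
            reference=reference, candidate=candidate))
        match = re.search(r"[1-5]", reply)
        return int(match.group()) if match else 1  # fall back to lowest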

Causality
=========
A causal relationship involves stating a cause and its effect, meaning that, according to the text, two events are related, and one (the cause) triggers the other (the effect). There are two types of causes:
1. Justification of a statement.
For example: «This is my final report since I have been succeeded as President of the Commission as of January 24, 2019.»
2. The reason given to explain a result.
For example: «In Spain, revenue grew by 10.8% to 224.9 million euros, due to an increase in cement volume accompanied by a more moderate price increase.»

In previous editions, we examined both types of causes because they are useful for studying and understanding decision-making in a company. In the current edition, we mainly focus on the second type (EXPLANATION), especially on causes that result in measurable effects, which are highly relevant for financial analysis. Causes can be agents or facts. Effects can be quantified or not, but they are always events, not expectations, hypotheses, or future projections. Finally, the task concentrates on text-internal causality (how the document encodes it), not on the truth or validity of the statements.

Dataset
=======
Each instance in the dataset consists of three parts: context, question, and answer.
• Context: The original paragraph from the annual reports.
• Question: Formulated to elicit the missing part of the relationship, either the cause or the effect. It is always abstractive: it reflects the content of the cause or effect being asked about without exactly matching the provided context. For example:
Why did X (effect) happen?
What is the consequence (effect) of X (cause)?
• Answer: The response will be the cause or effect previously asked about, taken verbatim from the text, making it extractive.
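
To make the format concrete, a single instance might look like the following Python dictionary. The field names and serialisation are our assumption (the official release defines the actual schema); the content is taken from the EXPLANATION example above.

    # Hypothetical instance (field names assumed; see the official data
    # release for the actual schema). Content from the example above.
    instance = {
        "context": ("In Spain, revenue grew by 10.8% to 224.9 million "
                    "euros, due to an increase in cement volume "
                    "accompanied by a more moderate price increase."),
        # abstractive: paraphrases, does not copy the context
        "question": "Why did revenue grow in Spain?",
        # extractive: a verbatim span of the context
        "answer": ("an increase in cement volume accompanied by a more "
                   "moderate price increase"),
    }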

For both subtasks, participants can use any method they see fit (regex, corpus linguistics, entity-relationship models, deep learning methods) to identify the cause or effect being questioned.
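
As a toy illustration of the regex end of that spectrum, the sketch below returns the span following a common causal connective. It is our own baseline idea, not a recommended or official system.

    import re

    # Toy regex baseline: when the question asks for a cause, return the
    # span following a causal connective. Connective list is illustrative.
    CAUSE_PATTERN = re.compile(
        r"(?:due to|because of|owing to|as a result of)\s+(.*?)(?:\.|$)",
        re.IGNORECASE)

    def extract_cause(context):
        match = CAUSE_PATTERN.search(context)
        return match.group(1).strip() if match else None

    # extract_cause(instance["context"]) from the example above returns
    # "an increase in cement volume accompanied by a more moderate price
    # increase"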

Shared Task Organisers
======================
• Antonio Moreno-Sandoval (UAM, Spain)
• Jordi Porta (UAM, Spain)
• Yanco Torterolo (UNED, Spain)
• Alexia Stanescu (UAM, Spain)
• Melina Chatzi (UAM, Spain)
• Sofía Roseti (UAM, Spain)

Shared Task Contact:
For any questions, please contact the organisers at l...@uam.es.

Key Dates
=========
• First CFP: 22 December 2025
• Second CFP: 5 January 2026
• Training set release: 8 January 2026
• Blind test set release: 1 February 2026
• Systems submission: 16 February 2026 
• Release of results: 20 February 2026
• Paper Submission Deadline: 6 March 2026
• Notifications of Acceptance: 11 March 2026
• Camera-ready Paper Deadline: 30 March 2026
• Workshop Date: 16 May 2026

-- 

Paul Rayson

Director of UCREL and Professor of Natural Language Processing

SCC Data Theme Lead

School of Computing and Communications, InfoLab21, Lancaster University, Lancaster, LA1 4WA, UK.

Web: https://www.research.lancs.ac.uk/portal/en/people/Paul-Rayson/  

Tel: +44 1524 510357

Contact me on Teams

 
