February 5th: Amsterdam Causality Meeting with Tom Claassen and Onno Zoeter

Boeken, P.A. (Philip)

Jan 9, 2026, 4:15:08 AM
to amscau...@googlegroups.com
Dear all,

This is a reminder for the next Amsterdam Causality Meeting, which will take place on Thursday, February 5th.

Date & time: February 5th, 14:30-17:30
Location: UvA Science Park, Lab42, room L3.36

Schedule:
14:30-14:45: Opening
14:30-15:30: Tom Claassen (Radboud University Nijmegen), Sound and complete causal inference with background knowledge in the presence of latent confounders and selection bias.
15:30-16:30: Onno Zoeter (Booking.com), When the problem becomes richer than supervised learning. A real-world use of causality in machine learning.
16:30-17:30: Drinks at Polder

Please find the abstracts below this message.

If you're interested in this event or in the seminar series, please check our website. For announcements about upcoming meetings, you can also join our Google group.

This meeting is financially supported by the ELLIS unit Amsterdam and the Big Statistics group at Amsterdam UMC.

Best wishes,

Philip Boeken, Giovanni Cinà, Sara Magliacane, Joris Mooij and Stéphanie van der Pas


Abstracts:

Sound and complete causal inference with background knowledge in the presence of latent confounders and selection bias, by Tom Claassen (Radboud University Nijmegen)

Causal discovery from observational data has come a long way over the years. Constraint-based approaches in particular come with provable guarantees of soundness and completeness, even when latent confounders and selection effects may be present (Zhang, 2008). The result is a so-called maximally informative PAG, representing a Markov equivalence class as the output causal model. A downside is that many edge marks (read: 'causal orientations') often remain undetermined. This is where additional background information can be invaluable, possibly helping to orient many additional edge marks. Meek already showed how to do this for CPDAGs (i.e. without latent confounders) some 30 years ago. Recent work by Wang et al. (2022, 2024) and Venkateswaran & Perkovic (2025) has made good progress towards extending this result to the causally insufficient case for certain types of PAGs, but so far the general task still eludes resolution. In this talk I will present a new approach that aims to do just that. It generalises and simplifies some of the recently discovered orientation rules, and adds a few twists to Zhang's familiar set. The resulting algorithm is very fast at processing arbitrary background information on edge marks in the PAG, even for large graphs. In addition, it can be used to verify consistency between background knowledge and a given PAG, and it offers a straightforward way to generate all possible MAGs consistent with a given PAG plus the available background information.


When the problem becomes richer than supervised learning. A real-world use of causality in machine learning, by Onno Zoeter (Booking.com)

The classic supervised learning problem that is taught in machine learning courses, and that is the subject of many machine learning competitions, is often too narrow to reflect the problems we face in practice. Historical datasets typically reflect a combination of a source of randomness (for example, customers making browsing and buying decisions) and a controlling mechanism such as a ranker or highlighting heuristics (badges, promotions, etc.). Or there might be a selection mechanism (such as the decision not to accept transactions with high fraud risk) that influences the training data. A straightforward regression approach would not be able to disentangle the causal influence of the controller from the phenomenon under study. As a result, it risks making incorrect predictions when the controller is changed. In practice, however, such problems are typically treated as a classic regression problem in a first iteration, and attempts to identify and correct for these complications come as afterthoughts or are not undertaken at all. Ideally, there is a rigorous and flexible formalism that captures the correct framing of the problem from the very start, accompanied by a set of practical algorithms that work well for each of the identified cases. In our initial set of successes, structural causal models have proven to be an effective language to express our understanding of the phenomena and to make accurate causal predictions for changes in just the part that is under control, e.g. the ranker or the acceptance policy. This overall research objective is the main goal of the Mercury Machine Learning Lab, one of the labs within Booking AI Research. The Mercury lab is a collaboration between the University of Amsterdam, Delft University of Technology and Booking.com. It brings together the fields of information retrieval, causality and reinforcement learning, where the topic is studied under different names, e.g. off-line evaluation, transportability and s-recoverability, and off-policy learning. This presentation will sketch the problem, highlight some of the theoretical results so far, and describe a significant real-world application.