Date & time: February 5th, 14:30-17:30
Location: UvA Science Park, Lab42, room L3.36
Schedule:
14:30-14:45: Opening
14:45-15:30: Tom Claassen (Radboud University Nijmegen), Sound and complete causal inference with background knowledge in the presence of latent confounders and selection bias.
15:30-16:30: Onno Zoeter (Booking.com), When the problem becomes richer than supervised learning. A real-world use of causality in machine learning.
16:30-17:30: Drinks at Polder
Please find the abstracts below this message.
If you're interested in this event or in the seminar series, please check our website. For announcements regarding upcoming meetings, you can also register for our Google group.
This meeting is financially supported by the ELLIS unit Amsterdam and the Big Statistics group at Amsterdam UMC.
Best wishes,
Philip Boeken, Giovanni Cinà, Sara Magliacane, Joris Mooij and Stéphanie van der Pas
Abstracts:
Sound and complete causal inference with background knowledge in the presence of latent confounders and selection bias, by Tom Claassen (Radboud University Nijmegen)
Causal discovery from observational data has come a long way over the years. In particular, constraint-based approaches come with provable guarantees on soundness and completeness, even when latent confounders and selection effects may be present (Zhang, 2008).
The result is a so-called maximally informative PAG, representing a Markov equivalence class as the output causal model. A downside is that often many edge marks (read ‘causal orientations’) remain undetermined. This is where additional background information
can be invaluable, possibly helping to orient many additional edge marks. Meek already showed how to do this for CPDAGs (i.e. without latent confounders) some 30 years ago. Recent work by Wang et al. (2022, 2024) and Venkateswaran & Perkovic (2025) has made
good progress in extending this result to the causally insufficient case for certain types of PAGs, but so far the general task still eludes resolution. In this talk I will present a new approach that aims to do just that. It generalises and simplifies some of
the orientation rules recently discovered, and adds a few twists to Zhang’s familiar set. The resulting algorithm is very fast in processing arbitrary background information on edge marks in the PAG, even for large graphs. In addition, it can be used to verify
consistency between background knowledge and a given PAG, and offers a straightforward way to generate all possible MAGs consistent with a given PAG plus available background info.
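As a toy illustration of the kind of orientation rule the abstract refers to (this sketch is not from the talk itself), Meek's first rule for CPDAGs orients an undirected edge b - c into b -> c whenever there is a directed edge a -> b with a and c non-adjacent, since otherwise the v-structure a -> b <- c would already have been detected during constraint-based search. A minimal Python sketch, with graph representation chosen purely for illustration:

```python
# Toy sketch of Meek's Rule 1 for CPDAGs (illustrative, not the talk's algorithm).
# directed: set of (tail, head) tuples; undirected: set of frozenset node pairs.

def meek_rule_1(directed, undirected):
    """Repeatedly apply Rule 1 until no more undirected edges can be oriented:
    if a -> b, b - c, and a, c are non-adjacent, then orient b -> c."""
    changed = True
    while changed:
        changed = False
        for a, b in list(directed):
            for edge in list(undirected):
                if b in edge:
                    (c,) = edge - {b}
                    adjacent = (
                        frozenset({a, c}) in undirected
                        or (a, c) in directed
                        or (c, a) in directed
                    )
                    if not adjacent:
                        undirected.remove(edge)
                        directed.add((b, c))
                        changed = True
    return directed, undirected

# Example: a -> b and b - c, with a and c non-adjacent, forces b -> c.
d, u = meek_rule_1({("a", "b")}, {frozenset({"b", "c"})})
```

The closure loop mirrors how such rules are used in practice: each newly oriented edge may trigger further orientations, so rules are applied until a fixed point is reached.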
When the problem becomes richer than supervised learning. A real-world use of causality in machine learning, by Onno Zoeter (Booking.com)