Dear all,
Are you interested in understanding the difference between correlation and causation in the context of reinforcement learning? Could causal reasoning be the next step toward building more robust, generalizable, and interpretable RL agents?
We invite you to submit your work to the Causal Reinforcement Learning (CausalRL) Workshop, which will be held on August 5th, 2025, as part of the Reinforcement Learning Conference (RLC 2025).
The CausalRL Workshop brings together researchers at the intersection of Reinforcement Learning and Causal Inference to explore how these two powerful frameworks can be combined to improve decision-making under uncertainty. We welcome contributions from both academia and industry that investigate how causal principles can enhance RL, and vice versa.
Topics of interest include (but are not limited to):
- Causal representation learning for RL
- Causal discovery and structural learning in interactive environments (MDPs, POMDPs, or SCMs)
- Counterfactual reasoning for policy and value learning
- Offline RL with unobserved confounders
- Generalization, robustness, and safety via causal reasoning
- Multi-agent causal reinforcement learning
Submission Deadline: June 6th, 2025 (AOE)
Notification of Acceptance: June 30th, 2025
We welcome papers of 4 to 8 content pages (excluding references and appendices). Submissions may present theoretical results, empirical findings, position papers, or negative results that provoke thoughtful discussion. We especially encourage early-stage work and cross-disciplinary perspectives.
For more information, please visit our website.
Feel free to reach out to us at dcorsi[at]uci.edu or ml[at]cs.columbia.edu with any questions. Looking forward to your submissions and to insightful discussions at the workshop!
With best regards,
Davide Corsi and Mingxuan Li, on behalf of the CausalRL Workshop organizers