Dear colleagues,
**Submission instructions: https://sites.google.com/view/eimlicml2026/calls-for-papers
**Submission deadline: April 20th, 2026
We are pleased to invite submissions to the Workshop on Epistemic Intelligence in Machine Learning (EIML) at ICML 2026. The workshop brings together researchers across machine learning, statistics, philosophy of science, decision theory, and related disciplines to examine a shared and increasingly urgent challenge: how to reason and make decisions in the presence of unknown unknowns.
As machine learning systems are deployed in open-ended and high-stakes environments, their limitations are often not merely a matter of noise or risk, but of epistemic uncertainty: gaps in knowledge that are unobserved, unmodelled, or fundamentally unknowable. This workshop seeks to advance both the theoretical foundations and the practical methodologies required to address this challenge.
We welcome both mature work and work in progress exploring these themes from complementary perspectives. Topics of interest include, but are not limited to:
Foundations of Uncertainty
- Connections between statistical, philosophical, and decision-theoretic perspectives

Uncertainty-aware Generative AI and Foundation Models
- Hallucination as an epistemic failure and strategies for its mitigation
- Uncertainty-aware decoding, prompting, and inference
- Reward modelling and alignment under uncertainty

AI Safety as an Epistemic Problem
- Overconfident extrapolation and failures outside the support of the data
- Identifying epistemic blind spots, abstention mechanisms, and safe fallback behaviour
- Criteria for when learning systems should refuse to act

AI Alignment under Objective Uncertainty
- Explicit modelling of value uncertainty beyond fixed reward optimisation
- Limits of preference learning under partial observability
- Epistemic mismatches between system beliefs, incentives, and societal goals

Lifelong and Continual Learning in an Open World
- Challenges arising from non-stationarity, novelty, and concept emergence
- Catastrophic forgetting as a failure of coherent uncertainty propagation
- Principled update rules for maintaining uncertainty over time
We particularly encourage submissions that challenge prevailing assumptions, propose new benchmarks, or engage with the philosophical and foundational dimensions of uncertainty in AI.
Best wishes,
Krikamol (on behalf of the organising team)