Call for Papers: EIML Workshop at ICML 2026 — Reasoning Under Unknown Unknowns


Krikamol Muandet

Apr 14, 2026, 12:01:29 PM
to Machine Learning News

Dear colleagues,

**Submission instructions:** https://sites.google.com/view/eimlicml2026/calls-for-papers
**Submission deadline:** April 20, 2026

We are pleased to invite submissions to the Workshop on Epistemic Intelligence in Machine Learning (EIML) at ICML 2026. The workshop brings together researchers across machine learning, statistics, philosophy of science, decision theory, and related disciplines to examine a shared and increasingly urgent challenge: how to reason and make decisions in the presence of unknown unknowns.

As machine learning systems are deployed in open-ended and high-stakes environments, their limitations are often not merely a matter of noise or risk, but of epistemic uncertainty: gaps in knowledge that are unobserved, unmodelled, or fundamentally unknowable. This workshop seeks to advance both the theoretical foundations and practical methodologies required to address this challenge.

We welcome both mature work and works-in-progress that explore these themes from complementary perspectives. Topics of interest include, but are not limited to:

Foundations of Uncertainty

  • Formal frameworks for representing epistemic uncertainty and ignorance
  • Connections between statistical, philosophical, and decision-theoretic perspectives

Uncertainty-aware Generative AI and Foundation Models

  • Epistemic uncertainty and ignorance in generative models
  • Hallucination as an epistemic failure and strategies for its mitigation
  • Uncertainty-aware decoding, prompting, and inference
  • Reward modelling and alignment under uncertainty

AI Safety as an Epistemic Problem

  • Moving beyond robustness to known failures toward reasoning under unknown unknowns
  • Overconfident extrapolation and failures outside the support of the data
  • Identifying epistemic blind spots, abstention mechanisms, and safe fallback behaviour
  • Criteria for when learning systems should refuse to act

AI Alignment under Objective Uncertainty

  • Alignment under incomplete, evolving, or strategically manipulated objectives
  • Explicit modelling of value uncertainty beyond fixed reward optimisation
  • Limits of preference learning under partial observability
  • Epistemic mismatches between system beliefs, incentives, and societal goals

Lifelong and Continual Learning in an Open World

  • Learning as long-term belief revision rather than repeated retraining
  • Challenges arising from non-stationarity, novelty, and concept emergence
  • Catastrophic forgetting as a failure of coherent uncertainty propagation
  • Principled update rules for maintaining uncertainty over time

We particularly encourage submissions that challenge prevailing assumptions, propose new benchmarks, or engage with the philosophical and foundational dimensions of uncertainty in AI.

Best wishes,
Krikamol (on behalf of the organising team)
