Workshop on Reliability In Planning and Learning (RIPL; joint with HSRL)

Visit https://icaps26.icaps-conference.org/program/workshops/ripl/ for up-to-date information.
ICAPS’26 Workshop, Dublin, Ireland
Date: June 27 or 28, 2026 (TBD)
Paper submission deadline: May 15, 2026 (AoE)
Paper acceptance notification: June 9, 2026 (AoE)

Aim and Scope of the Workshop

Learning is the dominant trend in AI at this time, achieving, among other things, unprecedented versatility and scalability in many forms of sequential decision making. Given the opaque nature of ML models and the lack of inherent guarantees, reliability is a key concern, prominently including safety, robustness, and fairness in various forms, but possibly other concerns as well. Arguably, this is one of the grand challenges in AI for the foreseeable future. Research on this challenge is widespread across the AI community and beyond. Research topics relevant to ICAPS include, for example, safe and high-stakes reinforcement learning, quality assurance for LLM-generated plans or planning models, and stress testing and formal verification of learned action policies. The mission of this workshop is to represent this important topic space at ICAPS, providing a joint discussion forum and gradually forming a sub-community that addresses any topic related to reliability issues in the use of ML methods for planning and scheduling.
The first workshop in this series was held at ICAPS 2022 and ran through 2024 under the name RDDPS. The workshop was renamed RIPL in 2025, reflecting a more inclusive scope. In the 2026 edition, RIPL is merged with another proposed workshop centered on high-stakes reinforcement learning, expanding our vision to high-stakes domains where traditional trial-and-error learning is infeasible, and where explicit world models and planning are needed as strict guardrails for safe deployment.
From a planning and scheduling perspective – and for sequential decision making in general – the importance of learning manifests in two major kinds of technical artifacts that are rapidly gaining prominence. First, planning models that are partially learned from data (such as a weather forecast in a model of flight actions) or generated by LLMs. Second, action-decision components learned from data, in particular action policies or planning-control knowledge for making decisions in dynamic environments (e.g., manufacturing processes under resource-availability and job-length fluctuations).
Reliability of data-driven artifacts, in particular ML classifier robustness and fairness, has been a key research issue in other sub-areas of AI for quite some time. Yet the topic has so far been scarcely addressed at ICAPS, where the focus in planning and learning has mainly been on plan-generation performance. The organizers of this workshop believe that this needs to change, as it is important that ICAPS contributes to addressing the reliable-AI challenge. We furthermore believe that ICAPS is in a good position to make such a contribution, as the combination of symbolic and data-driven methods is a key avenue for obtaining reliable AI. The workshop aims to establish an ICAPS sub-community focusing on this vision.
Topics of Interest

As per the above, the workshop includes any topic that falls into the following problem space, roughly classified along three dimensions:
1. Data-driven artifacts: Learned or ML-generated planning and scheduling models (e.g., LLM-generated PDDL, or learned transition probabilities and environment predictions); learned action-decisions (e.g., action policies, components thereof, and previous plans); learned search guidance (e.g., heuristics and state rankings); and combinations thereof.
2. Objectives: Reliability in whatever form, including risk, safety, robustness, fairness, error bounds, etc., alongside possibly other concerns such as scalability and data efficiency, system design/engineering principles and challenges, and the interactions of these with reliability.
3. Methodologies: Planning and scheduling algorithms in the presence of learned artifacts as per (1); analyzing such learned artifacts (quality assurance, reasoning, verification, testing, etc.); making such analyses amenable to human users (e.g., visualization, interaction); and potentially others as relevant to the objectives as per (2).
Some example points in this problem space are:
- Safe reinforcement learning: methods that guarantee actions remain within safety limits during learning and/or execution.
- Safeguarding of learned action policies through techniques such as monitoring, shielding, lookahead search, planning as a safety guardrail, temporal-logic constraints, and barrier functions.
- Quality assurance for LLM-generated planning models.
- Safeguarding and quality assurance for LLM-based planning, e.g., reliability of chain-of-thought approaches and LLM-generated plans.
- Reliability of learned planning models, such as (structured) action and environment models incorporating data-driven predictions, e.g., in the face of sparse, noisy, and/or out-of-distribution data.
- Data-driven model refinement.
- Verifying or testing safety, robustness, goal-reaching guarantees, or other desirable properties of learned action policies and planning-control knowledge.
- Irreversible actions / no free exploration: settings where trial-and-error is fundamentally infeasible because a single failure can cause unacceptable harm, and high-fidelity simulation may be impractical.
- Conservative / risk-sensitive learning: optimizing safety-aware objectives (e.g., worst-case, CVaR) rather than maximizing expected return alone.
- Offline-to-online transition & sim-to-real robustness: safely moving from offline data or simulation to real deployment without early-stage performance degradation or safety violations.
- Interpretability & verifiability: ensuring learned behavior is explainable and amenable to auditing in deployment-critical contexts.
- Capability awareness / uncertainty estimation: enabling agents to recognize distributional shift or uncertainty and respond conservatively or defer appropriately, adapting to non-stationary environments.
- Diagnosis of systems involving ML components.
- Risk analysis of planning and scheduling with data-driven models.
- Addressing the optimizer’s curse (the tendency of an optimizer to exploit extrapolation errors in learned models).
- Bias in data-driven models.
- Interactive visualizations enabling users to understand a planning/scheduling model or a learned action policy.
Important Dates

- Paper submission deadline: May 15, 2026 (AoE)
- Paper acceptance notification: June 9, 2026 (AoE)
Submission Details

All papers must be formatted as for the main conference (ICAPS author kit). Submitted papers should be anonymized for double-blind reviewing. Paper submission is via EasyChair.
We call for two kinds of submissions:
- Technical papers, of length up to 8 pages plus unlimited references and appendices. The workshop is meant to be an open and inclusive forum, and we encourage papers that report on work in progress.
- Position papers, of length up to 4 pages plus unlimited references and appendices. Given that reliability of data-driven planning and scheduling is rather new at ICAPS, we encourage authors to submit positions on what they believe are important challenges, questions to be considered, and approaches that may be promising. We will include any position relevant to discussing the workshop topic. We expect to group position paper presentations into a dedicated session, followed by an open discussion.
Every submission will be reviewed by members of the program committee according to the usual criteria, such as relevance to the workshop, significance of the contribution, and technical quality.
Please do not submit papers that are already accepted for the ICAPS main conference. All other submissions are welcome. If you submit a paper that was rejected from the ICAPS main conference, please do your utmost to address the comments given by the ICAPS reviewers. Also, it is your responsibility to ensure that any other venue to which your work is submitted allows for papers that have already been published in “informal” ways (e.g., in proceedings or on websites without an associated ISSN/ISBN).
Organizing Committee

Daniel Höller, Saarland University, Germany
Nitay Alon, Hebrew University of Jerusalem, Israel
Guy Azran, Technion – Israel Institute of Technology, Israel
Sarah Eisenstein-Keren, Technion – Israel Institute of Technology, Israel
Timo P. Gros, German Research Center for Artificial Intelligence, Germany
Jörg Hoffmann, Saarland University, Germany
Sarath Sreedharan, Colorado State University, USA
Marcel Steinmetz, French National Centre for Scientific Research (CNRS), France
Sylvie Thiébaux, University of Toulouse, France, and Australian National University, Australia
Felipe Trevizan, Australian National University, Australia
Marcel Vinzent, Saarland University, Germany
Eyal Weiss, Bar-Ilan University, Israel