We invite submissions to the workshops on Programmatic Reinforcement Learning (PRL) at RLC 2025 and Programmatic Representations for Agent Learning (PRAL) at ICML 2025.
PRAL @ ICML 2025:
- Web page: [pral-workshop.github.io](https://pral-workshop.github.io/)
- Submission Deadline: May 30, 2025, AoE
- Author Notification: June 13, 2025, AoE
- Workshop Date: July 18, 2025 @ Vancouver, Canada
PRL @ RLC 2025:
- Web page: [prl-workshop.github.io](https://prl-workshop.github.io/)
- Submission Deadline: June 6, 2025, AoE
- Author Notification: June 20, 2025, AoE
- Workshop Date: August 5, 2025 @ Edmonton, Canada
Recent advances in reinforcement learning have significantly improved agents' ability to reason, plan, and interact in diverse settings. However, deploying such agents in practice remains challenging. Models often overfit to specific training conditions and struggle with domain shifts. Training high-performance agents is also computationally expensive, requiring vast amounts of data and compute. Scalability is further hindered by the need for extensive fine-tuning and hyperparameter optimization. Addressing these challenges requires a paradigm shift toward more structured, interpretable, and data-efficient learning frameworks.
These full-day workshops explore the emerging paradigm of programmatic representations for enhancing sequential decision-making. By using structured representations, such as symbolic programs, code-based policies, and rule-based abstractions, agents can achieve greater interpretability, improved generalization, and increased efficiency. Programs can explicitly encode policies, reward functions, task structures, and environment dynamics, providing human-understandable reasoning while reducing reliance on massive data-driven models. Furthermore, programmatic representations enable modularity and compositionality, allowing agents to efficiently reuse knowledge across tasks and adapt with minimal retraining.
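To make the idea concrete, here is a minimal illustrative sketch of a programmatic (rule-based) policy for a CartPole-style balancing task. The state fields, thresholds, and weights below are hypothetical, chosen for illustration; in practice such a rule might be discovered by program synthesis or search rather than written by hand.

```python
# Illustrative sketch: a policy expressed as a small, human-readable program.
# All names, features, and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class State:
    pole_angle: float     # radians; positive means leaning right
    pole_velocity: float  # angular velocity of the pole

def programmatic_policy(s: State) -> int:
    """Rule-based policy: push the cart toward the direction the pole
    is falling. Returns 0 (push left) or 1 (push right)."""
    # A linear rule over interpretable features; unlike a neural policy,
    # the decision logic can be read, verified, and edited directly.
    score = 2.0 * s.pole_angle + 1.0 * s.pole_velocity
    return 1 if score > 0 else 0

# Pole leaning right and falling right -> push right (action 1).
print(programmatic_policy(State(pole_angle=0.1, pole_velocity=0.5)))  # prints 1
```

Because the policy is an explicit program, its behavior can be inspected and verified symbolically, and components (features, rules) can be reused or recomposed across tasks.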
By bringing together the sequential decision-making community—including researchers in reinforcement learning, planning, search, and optimal control—with experts in program synthesis and code generation, these workshops aim to tackle the fundamental challenges of agent learning at scale and drive progress toward interpretable, generalizable, verifiable, robust, and safe autonomous systems across domains ranging from virtual agents to robotics.