Hi everyone,
We would like to invite you to submit your work to our RLC workshop on “Reinforcement Learning Beyond Rewards (RLBRew): Ingredients for Developing Generalist Agents”. The details of the workshop and submission instructions are as follows:
Reinforcement Learning Beyond Rewards: Ingredients for Developing Generalist Agents
The Second Reinforcement Learning Conference (RLC)
Aug 5, 2025
Edmonton, Alberta, Canada
https://rlbrew2-workshop.github.io/
RLC 2025 is an in-person conference.
Abstract: Reinforcement Learning (RL) has traditionally focused on maximizing rewards. However, intelligent agents often rely on reward-free interactions and diverse environmental signals to form abstractions that facilitate rapid adaptation. Recent RL research has begun leveraging reward-free transitions—available through exploratory interactions or expert datasets—to improve the efficiency of decision-making and enable more flexible task specification. Yet, unlike in vision or language modeling, RL still lacks scalable methods for learning generalizable representations from unlabeled data. Additionally, difficulties in specifying reward functions have led researchers toward alternative signals, such as human demonstrations, preferences, and implicit feedback. This workshop seeks to advance beyond traditional reward-centric RL by exploring methods like intrinsic motivation, skill discovery, predictive and contrastive representation learning, and leveraging human-centric signals. Building upon recent progress, including foundation models that employ scalable alternative signals, the workshop aims to bridge theoretical insights and practical applications, fostering collaborations toward creating more versatile, adaptive decision-making agents.
SUBMISSION INSTRUCTIONS
We encourage submissions of up to 8 pages of content (with no limit on references and supplementary material) in the RLC 2025 format that have not been previously accepted at an archival venue (such as ICML, NeurIPS, or ICLR). We also allow submissions of recently accepted papers, which will be held to a higher standard. While up to 8 pages of content is allowed, we strongly encourage authors to limit their submissions to 4-6 pages to ensure higher-quality reviewer feedback. All submissions will be managed through OpenReview. Supplementary material uploads are optional, intended for extra videos/code/data/figures, and should be uploaded separately on the submission website.
The review process is double-blind, so submissions should be anonymized. Accepted work will be presented as posters during the workshop, and select contributions will be invited to give spotlight talks. Each accepted work entering the poster sessions will have an accompanying pre-recorded 5-minute video. Please note that at least one coauthor of each accepted paper will be expected to have an RLC conference registration and participate in one of the poster sessions.
Submissions will be evaluated based on novelty, rigor, and relevance to the theme of the workshop. Both empirical and theoretical contributions are welcome. The focus of the work should relate to the list of topics specified below. There will be no proceedings for this workshop; however, the papers will be public on the workshop's OpenReview page.
We encourage submissions around (but not limited to) the following topics that are important in the context of utilizing reward-free RL:
Reward-Free Task Specification: Learning from preferences, language, cross-embodiment data, social constraints, safety, demonstrations, and implicit human feedback.
Utilizing large-scale reward-free data: Learning representations and skills from video and novel datasets.
Utilizing foundation models for efficient adaptation/fine-tuning.
Using reward-free interactions: Exploration, sample efficiency, unsupervised skill learning, self-supervised objectives, goal-conditioning, and model learning for RL.
Please submit your papers via the following link: https://openreview.net/group?id=rl-conference.cc/RLC/2025/Workshop/RLBrew
IMPORTANT DATES
* Submission deadline: May 30, 2025, at 11:59 PM (AoE) on OpenReview
* Accept/Reject Notification: June 15, 2025
* Workshop: Aug 5, 2025
TENTATIVE SPEAKERS
Chelsea Finn (Stanford University)
John Schulman (Thinking Machines)
Ahmed Touati (FAIR Meta)
ORGANIZERS
Caleb Chuck (University of Texas at Austin)
Harshit Sikchi (University of Texas at Austin/OpenAI)
Siddhant Agarwal (University of Texas at Austin)
Yingchen Xu (UCL)
Pranaya Jajoo (University of Alberta)
Chuning Zhu (University of Washington)
Abhishek Gupta (University of Washington)
Amy Zhang (University of Texas at Austin)
REGISTRATION
Participants should refer to the RLC 2025 website (https://rl-conference.cc/register.html) for information on how to register.
CONTACT
Please reach out to us at rlbrew2....@gmail.com if you have any questions. We look forward to receiving your submissions!
Kind Regards,
Workshop Organizers
Reinforcement Learning Beyond Rewards