Hi everyone,
We would like to invite you to submit your work to our RLC workshop on “Reinforcement Learning Beyond Rewards (RLBRew)”. The details of the workshop and submission instructions are as follows:
Reinforcement Learning Beyond Rewards
The First Reinforcement Learning Conference (RLC)
Aug 9, 2024
Amherst, MA, USA
https://rlbrew-workshop.github.io
RLC 2024 is an in-person conference.
Abstract: Reinforcement learning has been widely successful at solving particular tasks defined by a reward function, from superhuman Go playing to magnetic confinement for plasma control. Creating a generalist RL agent, however, poses the unresolved question of what an agent can learn not just from reward-defined environments, but from the often substantial quantity of reward-free interactions with the environment. Recent explorations of this question have taken diverse forms: learning representations that are action-free, causal, predictive, or contrastive; learning from large-scale action-free datasets; learning exploration via intrinsic rewards and skill discovery; learning policies that are arbitrary goal-reaching, language-conditioned, optimal for a distribution of reward functions, or even optimal for all reward functions; learning intent from datasets using a variety of learning signals such as preferences, rankings, expert demonstrations, and human cues; and learning imitative foundation models for action. The RLBrew workshop focuses on this reward-free RL setting. Given the wide variety of possibilities for RL beyond rewards, we aim to bring a diverse set of opinions to the table to spark discussion about the right questions and novel tools for introducing new capabilities to RL agents in the reward-free setting.
SUBMISSION INSTRUCTIONS
We encourage submissions of up to 8 pages of content (with no limit on references and supplementary material) in the RLC 2024 format that have not been previously accepted at an archival venue (such as ICML, NeurIPS, or ICLR). Recently accepted papers may also be submitted, but will be held to a higher standard. While up to 8 pages of content is allowed, we strongly encourage authors to limit their submissions to 4-6 pages to ensure higher-quality reviewer feedback. All submissions will be managed through OpenReview. Supplementary material uploads are optional, intended only for extra videos/code/data/figures, and should be uploaded separately on the submission website.
The review process is double-blind, so submissions should be anonymized. Accepted work will be presented as posters during the workshop, and select contributions will be invited to give spotlight talks. Each accepted work entering the poster sessions will have an accompanying pre-recorded 5-minute video. Please note that at least one coauthor of each accepted paper is expected to have an RLC conference registration and to participate in one of the poster sessions.
Submissions will be evaluated based on novelty, rigor, and relevance to the theme of the workshop. Both empirical and theoretical contributions are welcome. The focus of the work should relate to the list of topics specified below. There will be no proceedings for this workshop; however, authors can opt to have their abstracts/papers posted on the workshop website.
We encourage submissions around (but not limited to) the following topics, which are important in the context of utilizing reward-free RL:
Reward-Free Task Specification: Learning from preferences, language, cross-embodiment, social constraints, safety, demonstrations, implicit human feedback.
Utilizing large-scale reward-free data: Learning representations and skills from video and other novel datasets.
Utilizing foundation models for efficient adaptation/finetuning.
Using reward-free interactions: exploration, sample efficiency, unsupervised skill learning, self-supervised objectives, goal-conditioning, and model learning for RL.
Please submit your papers via the following link: https://openreview.net/group?id=rl-conference.cc/RLC/2024/Workshop/RLBrew
IMPORTANT DATES
* Submission deadline: May 3, 2024 at 11:59PM (AoE) on OpenReview
* Accept/Reject Notification: May 20, 2024
* Camera-ready (final) paper deadline: June 10, 2024 at 11:59PM (AoE)
* Workshop: Aug 9, 2024
CONFIRMED SPEAKERS & PANELISTS
Biwei Huang (University of California San Diego)
Sergey Levine (University of California Berkeley)
Yonatan Bisk (Carnegie Mellon University)
Glen Berseth (University of Montreal)
Abhishek Gupta (University of Washington)
Amy Zhang (University of Texas at Austin, Meta)
Ida Momennejad (Microsoft Research New York City)
Fei Xia (Google DeepMind)
ORGANIZERS
Caleb Chuck (University of Texas at Austin)
Siddhant Agarwal (University of Texas at Austin)
Fan Feng (City University of Hong Kong)
Yuchen Cui (Stanford University)
Harshit Sikchi (University of Texas at Austin)
Joey Hejna (Stanford University)
Gokul Swamy (Carnegie Mellon University)
Akanksha Saran (Sony Research)
Roberta Raileanu (FAIR, Meta)
REGISTRATION
Participants should refer to the RLC 2024 website (https://umass.irisregistration.com/Form/RLC) for information on how to register.
CONTACT
Please reach out to us at rlbrew....@gmail.com if you have any questions. We look forward to receiving your submissions!
Kind Regards,
Workshop Organizers
Reinforcement Learning Beyond Rewards