We are pleased to announce the NeurIPS 2021 Workshop on Safe and Robust Control of Uncertain Systems.
Call for Papers: https://sites.google.com/view/safe-robust-control/home
This workshop will explore how to build data-driven control systems that are scalable and efficient while also being safe and robust enough for reliable deployment. It is motivated by the many data-driven control systems in widespread use for which safety and robustness are critical, such as recommender systems in online retail and social media and algorithms for robotic control.
We welcome any submissions focused on safety and robustness for reinforcement learning and control, but particularly encourage submissions on the following topics:
Safe and Efficient Exploration: Explore an uncertain environment while avoiding undesirable states and actions
Specifying Undesirable Behaviors: Convey undesirable behaviors to an autonomous agent scalably, efficiently, and safely
Off-Policy Evaluation: Evaluate performance before execution in the environment
Model-Based Controller Design + Data: Synthesize ideas from control theory and ML to design provably safe controllers
Offline RL/Control: Leverage offline data to learn robust controllers/policies before interacting with the environment
Active and Human-in-the-loop Learning: Leverage human interactions to enable better exploration strategies and more robust policies
Scalability and Safety: Balance tractability and scalability with robustness to uncertainty in reward functions and system dynamics
Submissions will be in the form of 4-page extended abstracts, due by September 17, 2021 on CMT. The workshop will be virtual and will consist of a mix of invited talks, poster sessions, and discussions. More details are on our website: https://sites.google.com/view/safe-robust-control/home.
Organizers
Ashwin Balakrishna (UC Berkeley)
Brijen Thananjeyan (UC Berkeley)
Daniel Brown (UC Berkeley)
Sylvia Herbert (UC San Diego)
Marek Petrik (UNH)
Melanie Zeilinger (ETH Zurich)