Dear colleagues,
Our next BeNeRL Reinforcement Learning Seminar is coming up on January 8:
Title: What Does RL Theory Have to Do with Robotics?
Date: January 8, 16.00-17.00 (Amsterdam time zone)
The goal of the online BeNeRL seminar series is to invite RL researchers (mostly advanced PhD candidates or early postdocs) to share their work. In addition, we invite the speakers to briefly share their experience with large-scale deep RL experiments and their style/approach to getting these to work.
We would be very glad if you forwarded this invitation within your group and to other colleagues who might be interested (also outside the BeNeRL region). Hope to see you on January 8!
Kind regards,
Zhao Yang & Thomas Moerland
VU Amsterdam & Leiden University
——————————————————————
Upcoming talk:
Speaker: Andrew Wagenmaker (UC Berkeley)
Date: January 8, 16.00-17.00 (Amsterdam time zone)
Title: What Does RL Theory Have to Do with Robotics?
Abstract: While the theory of reinforcement learning has advanced to a fairly mature place, it is often not apparent how this theory can impact practice. This is especially true in domains such as robotics, where the challenges faced by practitioners typically feel far removed from the settings and algorithms considered by theorists. In this talk, I will discuss how RL theory can impact practice in robotics despite this apparent gap, and how theorists might approach their work to further this impact. I will focus in particular on two case studies centered around the question of pretraining for online adaptation. In the first case, I will explore the question of sim-to-real transfer for robotics, and how we should pretrain with RL in a simulator to enable effective transfer to the real world. In the second case, I will discuss how we can pretrain a policy from human demonstration data to ensure it is a good initialization for further RL finetuning. In both cases, I will show how theory provides the key algorithmic insights that lead to highly effective practical approaches.
Bio: Andrew Wagenmaker is a postdoctoral researcher in Electrical Engineering and Computer Sciences at UC Berkeley working with Sergey Levine. Previously, he completed a PhD in Computer Science at the University of Washington, where he was advised by Kevin Jamieson. While in graduate school, he also spent time at Microsoft Research, mentored by Dylan Foster, as well as the Simons Institute, and his work was supported by an NSF Graduate Research Fellowship. Before that, he completed a master's and bachelor's degree at the University of Michigan, both in Electrical Engineering. His research centers on developing learning-based algorithms for decision-making in sequential environments, both in theory and practice.