Hello everyone,
Corentin Tallec is visiting and will give a presentation on some recent
work about RL in continuous time, on Monday at 10:30 in room F107.
For those who haven't met him yet, he is a friend (and co-author) of
mine.
Making Deep Q-learning Methods Robust to Time Discretization
<https://arxiv.org/pdf/1901.09732.pdf>
Abstract:
Despite remarkable successes, Deep Reinforcement Learning (DRL) is not
robust to hyperparameterization, implementation details, or small
environment changes (Henderson et al. 2017, Zhang et al. 2018).
Overcoming such sensitivity is key to making DRL applicable to real
world problems. In this paper, we identify sensitivity to time
discretization in near continuous-time environments as a critical
factor; this covers, e.g., changing the number of frames per second, or
the action frequency of the controller. Empirically, we find that
Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015)
and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse
with small time steps. Formally, we prove that Q-learning does not exist
in continuous time. We detail a principled way to build an off-policy RL
algorithm that yields similar performance over a wide range of time
discretizations, and confirm this robustness empirically.
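For a rough sense of why Q-learning degenerates at small time steps (my
own one-line sketch of the argument, not part of the abstract): writing
the discretized Bellman backup with time step \delta t and discount
\gamma^{\delta t},

    Q^\pi_{\delta t}(s, a)
      = r(s, a)\,\delta t + \gamma^{\delta t}\,\mathbb{E}\left[V^\pi(s_{t+\delta t})\right]
      = V^\pi(s) + O(\delta t),

so the action-dependent part of Q vanishes as \delta t goes to 0, and a
greedy step over Q carries essentially no information about actions in
the continuous-time limit.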
Corentin has also worked on recurrent networks in the past; let me know
if you'd like to talk to him.
Thomas