Talk about RL on Monday 25/3/2019, 10:30 am, F107 INRIA Montbonnot

Jakob Verbeek

Mar 22, 2019, 5:24:29 PM
to smile-in...@googlegroups.com
Hello everyone,

Corentin Tallec is visiting and will give a presentation on recent work
on RL in continuous time, on Monday at 10:30 in room F107.

For those who haven't met him yet, he is a friend (and co-author) of mine.

Making Deep Q-learning Methods Robust to Time Discretization
https://arxiv.org/pdf/1901.09732.pdf

Abstract:
Despite remarkable successes, Deep Reinforcement Learning (DRL) is not 
robust to hyperparameterization, implementation details, or small 
environment changes (Henderson et al. 2017, Zhang et al. 2018). 
Overcoming such sensitivity is key to making DRL applicable to real 
world problems. In this paper, we identify sensitivity to time 
discretization in near continuous-time environments as a critical 
factor; this covers, e.g., changing the number of frames per second, or 
the action frequency of the controller. Empirically, we find that 
Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015)
and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse 
with small time steps. Formally, we prove that Q-learning does not exist 
in continuous time. We detail a principled way to build an off-policy RL 
algorithm that yields similar performances over a wide range of time 
discretizations, and confirm this robustness empirically.
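For context, here is a small toy sketch of my own (not the paper's
algorithm, and all names below are illustrative): in a near-continuous-time
MDP with time step dt and discount gamma = exp(-dt), Q(s, a) = V(s) +
dt*A(s, a) + o(dt), so the gap between Q-values of different actions
shrinks linearly with dt and is eventually drowned out by approximation
noise.

    # Toy sketch: the Q-value gap between actions shrinks linearly with
    # the time step dt, which is why Q-learning-style action selection
    # degrades as dt -> 0.
    import math

    def q_values(dt, reward_rates):
        # One-state continuing task: action a yields reward r(a)*dt for a
        # step of length dt; afterwards we follow the greedy policy forever.
        gamma = math.exp(-dt)                 # discounting in physical time
        r_max = max(reward_rates)
        v_star = dt * r_max / (1.0 - gamma)   # value of the greedy policy
        return [dt * r + gamma * v_star for r in reward_rates]

    for dt in (1.0, 0.1, 0.01, 0.001):
        q_good, q_bad = q_values(dt, reward_rates=(1.0, 0.0))
        print(f"dt={dt:6.3f}  Q={q_good:.4f}/{q_bad:.4f}  "
              f"gap={q_good - q_bad:.4f}")

The gap here is exactly dt*(r1 - r2), consistent with the Q = V + dt*A
expansion that, as I understand it, motivates the paper's advantage-based
remedy.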

Corentin also worked on recurrent networks in the past; let me know if
you'd like to talk to him.

Thomas

--
Jakob Verbeek               Phone: +33 4 7661 5233
Inria Grenoble Rhone-Alpes  Cell: +33 6 2806 3136
655 Avenue de l'Europe      Jakob....@inria.fr
38330 Montbonnot, France    http://thoth.inrialpes.fr/~verbeek
