[PRL2020] Call for Participation: Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning

Michael Katz

Oct 17, 2020, 8:54:50 PM
to icaps-co...@googlegroups.com, planni...@googlegroups.com, searc...@googlegroups.com, rl-...@googlegroups.com, is...@googlegroups.com, const...@yahoogroups.com
** Apologies for cross-posting - Please forward to anybody who might be interested **

================================================================
                      CALL FOR PARTICIPATION

Bridging the Gap Between AI Planning and Reinforcement Learning
                           (PRL 2020)

        https://icaps20.icaps-conference.org/workshops/prl/    
                Virtual, October 22 and 23, 2020
================================================================

We have an exciting program with 5 invited talks, 11 paper presentations, and a poster session! The schedule is available on the workshop website.

Participation:

The workshop will take place online, on October 22nd and 23rd. To participate, you need to register for ICAPS. Registration is free; once registered, you will receive a password for the Gather platform and a direct link to the virtual conference. Please remember to select the PRL workshop when registering.


Invited Talks:
  • Will Dabney, DeepMind: Advances in Distributional Reinforcement Learning And Connections With Planning
  • Alan Fern, Oregon State University: Deep Flat MDPs for Offline Model-Based Reinforcement Learning
  • Michael Littman, Brown University: Logical Planning in Murky Perceptual Domains: From Soup to Nots
  • Julian Schrittwieser, DeepMind: MuZero – Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
  • Peter Stone, The University of Texas at Austin: Task-Motion Navigation Planning with Learning for Adaptable Mobile Service Robots

Accepted papers:
  • PDDLGym: Gym Environments from PDDL Problems (Tom Silver and Rohan Chitnis)
  • Model-free Automated Planning Using Neural Networks (Michaela Urbanovská, Jan Bím, Leah Chrestien, Antonín Komenda and Tomáš Pevný)
  • Generalized Planning With Deep Reinforcement Learning (Or Rivlin, Tamir Hazan and Erez Karpas)
  • Reinforcement Learning of Risk-Constrained Policies in Markov Decision Processes (Extended Abstract) (Tomas Brazdil, Krishnendu Chatterjee, Petr Novotný and Jiří Vahala)
  • Time-based Dynamic Controllability of Disjunctive Temporal Networks with Uncertainty: A Tree Search Approach with Graph Neural Network Guidance (Kevin Osanlou, Jeremy Frank, J. Benton, Andrei Bursuc, Christophe Guettier, Eric Jacopin and Tristan Cazenave)
  • Synthesis of Search Heuristics for Temporal Planning via Reinforcement Learning (Andrea Micheli and Alessandro Valentini)
  • A Framework for Reinforcement Learning and Planning: Extended Abstract (Thomas Moerland, Joost Broekens and Catholijn Jonker)
  • Think Neither Too Fast Nor Too Slow: The Computational Trade-off Between Planning And Reinforcement Learning (Thomas Moerland, Anna Deichler, Simone Baldi, Joost Broekens and Catholijn Jonker)
  • Learning Heuristic Selection with Dynamic Algorithm Configuration (David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller and Marius Lindauer)
  • Knowing When To Look Back: Bidirectional Rollouts in Dyna-style Planning (Yat Long Lo, Jia Pan and Albert Y.S. Lam)
  • PBCS: Efficient Exploration and Exploitation Using a Synergy between Reinforcement Learning and Motion Planning (Guillaume Matheron, Olivier Sigaud and Nicolas Perrin)
  • Hierarchical Reinforcement Learning in StarCraft II with Human Expertise in Subgoals Selection (Xinyi Xu, Tiancheng Huang, Pengfei Wei, Akshay Narayan and Tze-Yun Leong)
  • Symbolic Network: Generalized Neural Policies for Relational MDPs (Sankalp Garg, Aniket Bajpai and Mausam)
  • Safe Learning of Lifted Action Models (Brendan Juba, Hai Le and Roni Stern)
  • Reinforcement Learning for Planning Heuristics (Patrick Ferber, Malte Helmert and Joerg Hoffmann)
  • Bridging the Gap Between Markowitz Planning and Deep Reinforcement Learning (Eric Benhamou, David Saltiel, Sandrine Ungari and Abhishek Mukhopadhyay)
  • Planning from Pixels in Atari with Learned Symbolic Representations (Frederik Drachmann, Andrea Dittadi and Thomas Bolander)
  • Offline Learning for Planning: A Summary (Giorgio Angelotti, Nicolas Drougard and Caroline Ponzoni Carvalho Chanel)
  • Real-time Planning as Data-driven Decision-making (Maximilian Fickert, Tianyi Gu, Leonhard Staut, Sai Lekyang, Wheeler Ruml, Joerg Hoffmann and Marek Petrik)

Please send your questions to prl...@easychair.org
Michael Katz on behalf of PRL Organizers
