CALL FOR PAPERS
EvoRL 2022
Evolutionary Reinforcement Learning workshop at GECCO 2022, July 9-13, Boston, USA
In recent years, reinforcement learning (RL) has received considerable attention thanks to its performance on complex tasks. At the same time, several recent papers, notably work from OpenAI, have shown that evolution strategies (ES) can be competitive with standard RL algorithms on some problems while being simpler and more scalable. Similar results were obtained by researchers at Uber, this time using a gradient-free genetic algorithm (GA) to train deep neural networks on complex control tasks. Moreover, recent research on evolutionary algorithms (EAs) has led to methods such as Novelty Search (NS) and Quality Diversity (QD), capable of efficiently addressing hard exploration problems and finding a wealth of different policies while improving the external reward (QD) or without relying on any reward at all (NS). These results and developments have sparked a strong renewed interest in such population-based approaches.
Nevertheless, even though EAs can perform well on hard exploration problems, they still suffer from low sample efficiency. RL methods are less affected by this limitation, notably because of sample reuse, but in turn they struggle in hard exploration settings. These complementary characteristics have pushed researchers to explore new approaches that merge the two families in order to harness their respective strengths while avoiding their shortcomings.
Some recent papers already demonstrate that the interaction between these two fields can lead to very promising results. We believe this is a nascent field where new methods can be developed to address problems such as sparse and deceptive rewards, open-ended learning, and sample efficiency, while expanding the range of applicability of such approaches.
With the Evolutionary Reinforcement Learning workshop, we want to highlight this developing field and provide an outlet for the two communities (RL and EA) to present new applications and ideas and to discuss past and emerging challenges.
AIM
===============
Authors are encouraged to submit original research articles, case studies, reviews, position papers, and theoretical papers on the following topics of interest:
- Evolutionary reinforcement learning
- Evolution strategies
- Population-based methods for policy search
- Neuroevolution
- Hard exploration and sparse reward problems
- Deceptive reward
- Novelty and diversity search methods
- Divergent search
- Sample-efficient direct policy search
- Intrinsic motivation, curiosity
- Building or designing behaviour characterizations
- Meta-learning, hierarchical learning
- Evolutionary AutoML
- Open-ended learning
For more information, including relevant topic areas, please consult the workshop website.
SUBMISSIONS
===============
Authors must follow the official GECCO paper formatting guidelines.
Please see the GECCO 2022 information for workshop authors for further details on formats and submission, available at https://gecco-2022.sigevo.org/Call-for-Workshop-Papers.
RELATED JOURNAL SPECIAL ISSUE
===============
TO BE ANNOUNCED
IMPORTANT DATES
===============
Submission opening: February 11, 2022
Submission deadline: April 11, 2022
Notification: April 25, 2022
Camera-ready: May 2, 2022
Presenter mandatory registration: May 2, 2022
Conference Dates: July 9-13, 2022 (Saturday to Wednesday)
CONTACT INFORMATION
===============
As a published ACM author, you and your co-authors are subject to all ACM Publications Policies (https://www.acm.org/publications/policies/toc), including ACM's new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects).