Participate in the NeurIPS 2021 MineRL Learning from Human Feedback Competition!

Steph Milani

Jul 7, 2021, 11:53:16 AM
to Stephanie Milani

Hi all,


We are organizing the first annual MineRL BASALT Competition on Learning from Human Feedback at NeurIPS 2021, in collaboration with UC Berkeley, Carnegie Mellon University, OpenAI, AIcrowd, and the University of Eastern Finland. We are now officially open for submissions.


Please consider participating! Sign up here: https://www.aicrowd.com/challenges/neurips-2021-minerl-basalt-competition 


Information about the competition is below.


Thanks,

Stephanie Milani

Machine Learning PhD Student, Carnegie Mellon University


---------------------------------------------------------------------

The 2021 MineRL BASALT Competition on Learning from Human Feedback

 

Competition website: http://minerl.io/basalt/ 

Competition Twitter: @minerl_official

Sign-up Page: https://www.aicrowd.com/challenges/neurips-2021-minerl-basalt-competition 

 

Abstract:

The last decade has seen a significant increase in interest in deep learning research, with many public successes demonstrating its potential. As a result, these systems are now being incorporated into commercial products. With this comes an additional challenge: how can we build AI systems that solve tasks for which there is no crisp, well-defined specification? While multiple solutions have been proposed, in this competition we focus on one in particular: learning from human feedback. Rather than training AI systems using a predefined reward function or a labeled dataset with a predefined set of categories, we instead train the AI system using a learning signal derived from some form of human feedback, which can evolve over time as the understanding of the task changes, or as the capabilities of the AI system improve.


The MineRL BASALT competition aims to spur forward research on this important class of techniques. We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions. These tasks are defined by a paragraph of natural language: for example, "create a waterfall and take a scenic picture of it", with additional clarifying details. Participants must train a separate agent for each task, using any method they want. Agents are then evaluated by humans who have read the task description. To help participants get started, we provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline that leverages these demonstrations.
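To give a concrete sense of what an imitation learning baseline looks like, here is a minimal behavioral-cloning sketch. This is an illustrative assumption, not the competition's actual baseline: the toy setup uses small discrete observation and action spaces and a softmax policy fit by gradient ascent, whereas a real MineRL agent would learn a neural network policy from pixel observations.

```python
# Minimal behavioral-cloning sketch (hypothetical; NOT the official baseline).
# Demonstrations are (observation, action) pairs; we fit a softmax policy
# pi(a|s) by maximizing the log-likelihood of the demonstrated actions.
import numpy as np

def train_bc_policy(demos, n_obs, n_actions, lr=0.5, epochs=200):
    """Fit per-state action logits W to demonstrator (state, action) pairs."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(n_obs, n_actions))  # logits per state
    states = np.array([s for s, _ in demos])
    actions = np.array([a for _, a in demos])
    X = np.eye(n_obs)[states]        # one-hot encode states
    Y = np.eye(n_actions)[actions]   # one-hot encode demonstrated actions
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)     # softmax pi(a|s)
        W += lr * X.T @ (Y - probs) / len(demos)      # ascend log-likelihood
    return W

def act(W, state):
    """Greedy action from the learned policy."""
    return int(np.argmax(W[state]))

# Toy demonstrations: in state 0 the human always chooses action 1,
# in state 1 always action 0. The cloned policy should reproduce this.
demos = [(0, 1), (1, 0)] * 25
W = train_bc_policy(demos, n_obs=2, n_actions=2)
print(act(W, 0), act(W, 1))
```

The same likelihood-maximization idea carries over to the competition setting; the difference is only in the policy class (a deep network over pixels) and the scale of the demonstration dataset.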


Our hope is that this competition will improve our ability to build AI systems that do what their designers intend them to do, even when the intent cannot be easily formalized. Besides allowing AI to solve more tasks, this can also enable more effective regulation of AI systems, as well as making progress on the value alignment problem.


Organizers:

Rohin Shah (UC Berkeley), Cody Wild (UC Berkeley), Steven H. Wang (UC Berkeley), Neel Alex (UC Berkeley), Brandon Houghton (OpenAI and Carnegie Mellon University), William H. Guss (OpenAI and Carnegie Mellon University), Sharada Mohanty (AIcrowd), Anssi Kanervisto (University of Eastern Finland), Stephanie Milani (Carnegie Mellon University), Nicholay Topin (Carnegie Mellon University), Pieter Abbeel (UC Berkeley), Stuart Russell (UC Berkeley), Anca Dragan (UC Berkeley).

Advisors:

Sergio Guadarrama (Google Brain), Katja Hofmann (Microsoft Research).

Sponsors:

Microsoft and OpenAI



--
Stephanie Milani
Machine Learning Ph.D. Student, Carnegie Mellon University
Computer Science (B.S.) and Psychology (B.A.), UMBC '19