Auckland, New Zealand
Robots deployed in the real world will interact with many different humans to perform many different tasks over their lifetimes, which makes it difficult (perhaps even impossible) for designers to specify all the aspects that might matter ahead of time. Instead, robots can extract these aspects implicitly when they learn to perform new tasks from their users' input. The challenge is that the resulting representations often pick up on spurious correlations in the data and fail to capture the human's representation of what matters for the task, leading to behaviors that do not generalize to new scenarios. In this workshop, we are interested in exploring ways in which robots can align their representations with those of the humans they interact with, so that they can learn more effectively from human input. By bringing together experts from representation learning, human-robot interaction, and cognitive science, we hope to foster an environment for exchanging ideas on how the robot learning community can best benefit from learning representations from human input (and vice versa), and on how the HRI community can best direct its efforts toward discovering more effective human-robot teaching strategies. We encourage participation from researchers working in robot learning, human-robot interaction, cognitive science, and representation learning.
Topics of Interest
Discussion topics and questions will include:
- What kind of representations do humans form about their surrounding world to plan and accomplish their goals effectively?
- Conversely, what kinds of representations should robots learn in order to be most aligned with what humans care about? Should we represent the world using features? Knowledge graphs? Object-centric representations? Is it important to learn representations that generalize across many tasks, or should we always specialize directly to the task at hand?
- When and to what extent is human input necessary for learning good robot representations? Should we try to eliminate human input from representation alignment as much as possible, or should we focus our efforts on enabling people to give the right kinds of input to distill their knowledge into the robot?
- What is the value of simulation for representation alignment? As a community, should we spend our human effort on building simulators with good assets and simply collect large amounts of human data, or should we focus our research effort on figuring out effective teaching strategies?
- What are the best types of human input for distilling a person’s knowledge of the world into the robot and aligning their representations? Is natural language “the” interface to communicate with robots?
- How can robots be more transparent about which parts of a representation they have or have not learned, so that humans can more appropriately communicate what it is they care about?
- What are the benefits and limitations of existing ML representation learning techniques?
- What is the role of domains such as vision and natural language in helping humans communicate representations to robots? In robots communicating representations to humans?
We invite research papers of 4-8 pages, not including references or appendices.
Important Dates
- Submission deadline for papers: October 21, 2022
- Notification of acceptance: November 11, 2022
- Camera-ready version: November 25, 2022
- Workshop Day: December 15, 2022
Invited Speakers
- Jacob Andreas, Massachusetts Institute of Technology
- Daniel S. Brown, University of Utah
- Matthew Gombolay, Georgia Institute of Technology
- Mark Ho, Princeton University
- George Konidaris, Brown University
- Lerrel Pinto, New York University
- Dorsa Sadigh, Stanford University
- Amy Zhang, Facebook AI Research