Overview: Systems that can learn interactively from their end-users are quickly becoming widespread in real-world applications. Typically, humans provide tagged rewards or scalar feedback to such interactive learning systems. However, humans offer a wealth of implicit information (such as multimodal cues in the form of natural language, speech, eye movements, facial expressions, and gestures) which interactive learning algorithms can leverage during human-machine interaction to ground human intent and thereby better assist end-users. A closed-loop sequential decision-making domain poses unique challenges when learning from humans: (1) the data distribution may be influenced by the choices of the algorithm itself, so interactive ML algorithms need to adaptively learn from human feedback; (2) the environment itself may change rapidly; and (3) humans may express their intent in various forms of feedback amenable to naturalistic real-world settings, going beyond tagged rewards or demonstrations. By organizing this workshop, we aim to bring together interdisciplinary experts in interactive machine learning, reinforcement learning, human-computer interaction, cognitive science, and robotics to explore and foster discussion of these challenges. We envision that this exchange of ideas within and across disciplines can build new bridges, address some of the most pressing challenges in interactive learning with implicit human feedback, and provide guidance to young researchers interested in growing their careers in this space.
Areas of interest: We solicit submissions related to (but not limited to) the following themes on interaction-grounded machine learning with humans:
Leveraging different types of human input modalities for interactive learning
Models and representations learned from human data
Online learning algorithms for human-machine collaboration
Personalized interaction-based learning
Theoretical advances for interactive learning with implicit human feedback
Interactive learning with non-stationary rewards and environment dynamics
Applications for HCI and accessibility
Understanding how humans teach other humans and learning agents/embodied robots
All submissions will be managed through OpenReview. The review process is double-blind, so submissions must be anonymized. Papers should be a maximum of 8 pages (excluding references) and formatted in ICML style. Accepted papers will be presented as posters during the workshop, and selected works will be invited to give spotlight talks. Accepted papers will be made available online on the workshop website as non-archival reports, allowing submission to future conferences or journals.
Authors may optionally include appendices in their submitted paper. Supplementary material uploads should be used only for extra videos/code/data/figures and should be uploaded separately on the submission website.
Submissions will be evaluated based on novelty, rigor, and relevance to the workshop's theme. Both empirical and theoretical contributions are welcome. All participants must adhere to the ICML Code of Conduct.
Important Dates
Submission deadline for papers: May 24, 2023
Notification of acceptance: June 18, 2023
Camera-ready version: July 10, 2023
Workshop Day: July 29, 2023
Invited Speakers
David Abel, DeepMind
Daniel Brown, University of Utah
Jonathan Grizou, University of Glasgow
Taylor Kessler Faulkner, University of Washington
Paul Mineiro, Microsoft Research
Dorsa Sadigh, Stanford University
Jesse Thomason, University of Southern California and Amazon