Hello everyone,
We are pleased to announce the Models of Human Feedback for AI Alignment Workshop at ICML 2024, taking place on July 26, 2024, in Vienna, Austria.
The workshop will discuss crucial questions for AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment despite misspecified human models.
Call for Papers: https://sites.google.com/view/mhf-icml2024/call-for-papers
Submission Portal: https://openreview.net/group?id=ICML.cc/2024/Workshop/MFHAIA
Key dates:
Submission deadline: May 31st, 2024 (AoE)
Acceptance notification: June 17th, 2024
Workshop: July 26th, 2024
We invite submissions related to the theme of the workshop; please see the Call for Papers linked above for the full list of topics.
ORGANIZERS
Thomas Kleine Buening (The Alan Turing Institute)
Christos Dimitrakakis (Université de Neuchâtel)
Scott Niekum (UMass Amherst)
Constantin Rothkopf (TU Darmstadt)
Aadirupa Saha (Apple ML Research)
Harshit Sikchi (UT Austin)
Lirong Xia (Rensselaer Polytechnic Institute)