CfP: ICML 2024 Workshop on Models of Human Feedback for AI Alignment


Harshit Sikchi

May 17, 2024, 11:09:40 AM
to ml-...@googlegroups.com

Hello everyone,

We are pleased to announce the Models of Human Feedback for AI Alignment Workshop at ICML 2024, taking place on July 26, 2024, in Vienna, Austria.

The workshop will discuss crucial questions for AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment despite misspecified human models.

Call for Papers: https://sites.google.com/view/mhf-icml2024/call-for-papers
Submission Portal: 
https://openreview.net/group?id=ICML.cc/2024/Workshop/MFHAIA 

Key dates:
Submission deadline: May 31, 2024 (AoE)
Acceptance notification: June 17, 2024
Workshop: July 26, 2024

We invite submissions related to the theme of the workshop.

Topics include but are not limited to:

  • Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning, ...)
  • Reinforcement Learning with Human Feedback (Fine-tuning LLMs, ...) 
  • Human-AI Alignment, AI Safety, Cooperative AI 
  • Robotics (Human-AI Collaboration, ...) 
  • Preference Learning, Learning to Rank (Recommendation Systems, ...)
  • Computational Social Choice (Preference Aggregation, ...) 
  • Operations Research (Assortment Selection, ...)
  • Behavioral Economics (Bounded Rationality, ...)
  • Cognitive Science (Effort in Decision-Making, ...)

ORGANIZERS
Thomas Kleine Buening (The Alan Turing Institute)
Christos Dimitrakakis (Université de Neuchâtel)
Scott Niekum (UMass Amherst)
Constantin Rothkopf (TU Darmstadt)
Aadirupa Saha (Apple ML Research)
Harshit Sikchi (UT Austin)
Lirong Xia (Rensselaer Polytechnic Institute)


Best,
MHF organizers