Dear All,
We are pleased to announce the Models of Human Feedback for AI Alignment Workshop at ICML 2024, taking place on July 26 in Vienna, Austria.
We invite submissions related to the theme of the workshop. Key dates:
Submission deadline: May 31st, AoE
Acceptance notification: June 17th
Workshop: July 26th at ICML 2024, Vienna, Austria
Topics include but are not limited to:
Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning, ...)
Reinforcement Learning with Human Feedback (Fine-tuning LLMs, ...)
Human-AI Alignment, AI Safety, Cooperative AI
Robotics (Human-AI Collaboration, ...)
Preference Learning, Learning to Rank (Recommendation Systems, ...)
Computational Social Choice (Preference Aggregation, ...)
Operations Research (Assortment Selection, ...)
Behavioral Economics (Bounded Rationality, ...)
Cognitive Science (Effort in Decision-Making, ...)
Please share this with your students, colleagues, and community, and let us know if you have any questions. We look forward to hosting you!
Best,
Organizers
(Thomas, Christos, Scott, Constantin, Harshit, Lirong, Aadirupa)