[CFP] Models of Human Feedback for AI Alignment Workshop, ICML 2024

Aadirupa Saha
May 21, 2024, 12:04:37 PM
to ml-...@googlegroups.com
Dear All,

We are pleased to announce the Models of Human Feedback for AI Alignment Workshop at ICML 2024, taking place on July 26 in Vienna, Austria.

Follow us on Twitter @mhf_icml2024: https://x.com/mhf_icml2024/status/1790667579310100752 and stay tuned for more updates!

The workshop will discuss crucial questions for AI alignment and learning from human feedback, including how to model human feedback, how to learn from diverse human feedback, and how to ensure alignment despite misspecified human models.

Call for Papers: https://sites.google.com/view/mhf-icml2024/call-for-papers
Submission Portal: https://openreview.net/group?id=ICML.cc/2024/Workshop/MFHAIA

We invite submissions related to the theme of the workshop. Key dates:
Submission deadline: May 31 (AoE)
Acceptance notification: June 17
Workshop: July 26 at ICML 2024, Vienna, Austria

Topics include but are not limited to:

Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning, ...)
Reinforcement Learning with Human Feedback (Fine-tuning LLMs, ...)
Human-AI Alignment, AI Safety, Cooperative AI
Robotics (Human-AI Collaboration, ...)
Preference Learning, Learning to Rank (Recommendation Systems, ...)
Computational Social Choice (Preference Aggregation, ...)
Operations Research (Assortment Selection, ...)
Behavioral Economics (Bounded Rationality, ...)
Cognitive Science (Effort in Decision-Making, ...)

Please feel free to write to us if you have any questions. We look forward to hosting you at our workshop!

Best,
Organizers
(Thomas, Christos, Scott, Constantin, Harshit, Lirong, Aadirupa)

aadiru...@gmail.com
May 28, 2024, 12:19:26 PM
to Machine Learning News
Dear All,

A gentle reminder: the workshop submission deadline is this Friday, May 31 (AoE). The deadline is strict, so please plan to submit on time if you have any recent work related to "Models of Human Feedback for AI Alignment."

Topics include (1) Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning), (2) Reinforcement Learning with Human Feedback (Fine-tuning LLMs), (3) Human-AI Alignment, (4) AI Safety, (5) Cooperative AI, (6) Robotics, (7) Preference-based Learning & Learning to Rank (Recommendation Systems), (8) Computational Social Choice, (9) Operations Research (Assortment Selection), (10) Behavioral Economics (Bounded Rationality), (11) Cognitive Science, and anything related.

For more details, please visit our website https://sites.google.com/view/mhf-icml2024/call-for-papers?authuser=0,
and follow us on Twitter at https://x.com/mhf_icml2024/status/1793844023452709195 for more updates.

We look forward to reading your new results and seeing you at ICML 2024. Should you have any questions, please do not hesitate to reach out.

Best regards,
The Organizers

(Thomas, Christos, Scott, Constantin, Harshit, Lirong, Aadirupa)
