[CFP] Models of Human Feedback for AI Alignment Workshop, ICML, 2024.

Aadirupa Saha
May 28, 2024, 6:56:15 AM
to COLT (Computational Learning Theory)
Dear All,

A gentle reminder: the workshop submission deadline is this Friday, May 31st (AoE). The deadline is strict, so please consider submitting by then if you have any recent work related to "Models of Human Feedback for AI Alignment."

Topics include (1) Learning from Demonstrations (Inverse Reinforcement Learning, Imitation Learning), (2) Reinforcement Learning with Human Feedback (Fine-tuning LLMs), (3) Human-AI Alignment, (4) AI Safety, (5) Cooperative AI, (6) Robotics, (7) Preference-based Learning & Learning to Rank (Recommendation Systems), (8) Computational Social Choice, (9) Operations Research (Assortment Selection), (10) Behavioral Economics (Bounded Rationality), (11) Cognitive Science, and anything related.

For more details, please visit our website https://sites.google.com/view/mhf-icml2024/call-for-papers?authuser=0,
and follow us on Twitter at https://x.com/mhf_icml2024/status/1793844023452709195 for more updates.

We look forward to reading your new results and seeing you at ICML24. Should you have any questions, please do not hesitate to reach out.

Best regards,
The Organizers

(Thomas, Christos, Scott, Constantin, Harshit, Lirong, Aadirupa)