ARIA's Programme on Safeguarded AI: constructing safety by design

Jobst Heitzig

Jan 31, 2024, 1:55:11 PM
to Social Choice for AI Ethics and Safety
Hi everyone,

ARIA, a UK R&D funding agency, has just published a "programme thesis" on "Safeguarded AI: constructing safety by design" that may soon become a concrete funding opportunity.

It explicitly mentions "Social Choice Theory" as part of its methodology, and the Programme's Director, David "davidad" Dalrymple, has personally expressed to me an interest in working with this community.

This might be an exciting opportunity to also get involved in AI safety work other than RLHF.

Best, Jobst

Saffron Huang

Jan 31, 2024, 2:15:15 PM
to Jobst Heitzig, Social Choice for AI Ethics and Safety
Side note: Davidad actually came to a workshop I organized on "AI for institutions" (a joint CIP and Cooperative AI Foundation workshop), the output of which may be interesting to folks here: we have some "project cards", i.e. project proposals, on the website.

Saffron Huang
Co-Director, Collective Intelligence Project (CIP)

