Hi all,
We have openings for a postdoc position and a research assistant position as part of our research project "Aggregating Safety Preferences for AI Systems: A Social Choice Approach", funded by the Advanced Research + Invention Agency (ARIA). Our project aims to develop and analyze new methods in computational social choice for eliciting and aggregating safety specifications for safeguarded AI systems (https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai).
These positions will be based at the University of Oxford (UK) and will be supervised by Paul Goldberg and Markus Brill. We are looking for candidates with a strong background in (computational) social choice, (algorithmic) game theory, and/or AI safety. The positions are intended to begin in October 2025 (with some flexibility) and are offered for one year.
The application deadline is July 11th, 2025 (noon UK time). For position details and application instructions, please visit the project homepage at https://sites.google.com/view/sc4ai/projects/aspai. Please forward this email to potentially interested candidates. Interested candidates are encouraged to reach out informally to express their interest and/or ask questions.
Best regards,
Markus Brill, Niclas Boehmer, Paul W. Goldberg, Davide Grossi, Jobst Heitzig, and Wesley H. Holliday