Dear Guaranteed Safe AI enjoyers,
Thank you for participating in or following the seminar series!
2024 in review:
The series started in April and, as of December, has grown to ~230 subscribers. We received ~490 RSVPs, and the recordings accumulated ~76 hours of watch time and ~900 views.
Each session had a solid presentation and insightful community discussion.
We had the following sessions:
Compact Proofs of Model Performance via Mechanistic Interpretability – Louis Jaburi
Bayesian oracles and safety bounds – Yoshua Bengio
Constructability: Designing plain-coded AI systems – Charbel-Raphaël Ségerie & Épiphanie Gédéon
Proving safety for narrow AI outputs – Evan Miyazono
Gaia: Distributed planetary-scale AI safety – Rafael Kaufmann
Provable AI Safety – Steve Omohundro
Synthesizing Gatekeepers for Safe Reinforcement Learning – Justice Sefas
Verifying Global Properties of Neural Networks – Roman Soletskyi
There was a two-month hiatus because the main organizer had an accident.
The series started as the Provably Safe AI Seminars but expanded in scope and was rebranded to the GS AI Seminars.
We thank the Long-Term Future Fund for supporting the series for a six-month period.
Visit our Donation page to set up a recurring or one-off donation.
Interested in speaking, or do you know someone who might be? Visit or share https://www.horizonevents.info/guaranteedsafeaiseminars. We welcome speakers working on GS AI or related research agendas, including work on world models, verification methods, safety specifications, real-world applications, …