Hello everyone,
Happy New Year! We’re excited to invite you all to our 2025 seminar series, which continues on the second Thursday of each month at 13:00 Eastern Time. Each session runs about one hour, typically split between a presentation and a Q&A/discussion. Visit the series website & Luma schedule for more info. We publish the recordings on YouTube so the broader research community can benefit as well.
Recent speakers include:
• Compact proofs of model performance via mechanistic interpretability – Louis Jaburi
• Bayesian oracles and safety bounds – Yoshua Bengio
• Constructability: Designing plain-coded AI systems – Charbel-Raphaël Ségerie & Épiphanie Gédéon
• Proving safety for narrow AI outputs – Evan Miyazono
• Gaia: Distributed planetary-scale AI safety – Rafael Kaufmann
• Provable AI safety – Steve Omohundro
• Synthesizing Gatekeepers for Safe Reinforcement Learning – Justice Sefas
• Verifying Global Properties of Neural Networks – Roman Soletskyi
We are now scheduling presenters for 2025. If your work relates to formal verification, AI safety guarantees, or any aspect of guaranteed safe AI, we’d love to feature you in the series. If you’d like to be a speaker, please complete our application form (5 minutes):
Apply here.
Feel free to forward this invitation to people you know who might be interested. We also encourage you to attend and participate in the lively sessions. We look forward to seeing you in 2025!
Additionally, we invite you to donate to help keep the series running.
Best regards,
Orpheus Lummis
P.S. If you have any questions or need more info, just reach out.