
A rare, unscripted conversation on how expanding machine intelligence is reshaping what human judgment, agency, and adaptation now require. Four longtime friends with different views meet to talk it out: Yudkowsky, More, Sandberg, and Vita-More.
AGI has moved from theory to trajectory, and the debate has narrowed into familiar extremes: total prohibition or unquestioned acceleration. This framing obscures the deeper issue: how humans interpret, respond to, and evolve alongside emerging intelligence.
The four thinkers, Eliezer Yudkowsky, Max More, Anders Sandberg, and Natasha Vita-More, will examine the assumptions driving today’s AGI discourse.
Yudkowsky argues that unchecked AGI poses an existential threat and calls for a global halt. More advances the proactionary principle, emphasizing progress guided by informed risk-taking. Sandberg brings long-term risk modeling and analytical caution. Vita-More introduces a framework of cognitive co-evolution: discerning, adopting, and adapting as intelligence expands.
This is not a debate meant to converge on consensus. It is an effort to restore intellectual clarity at a moment when fear, certainty, and slogans are replacing thought.
If you attend, expect to leave with better questions, not reassurances. Live online via Zoom. Registration by donation | Limited capacity. All who register will receive the Zoom link via email. Register on Eventbrite!
--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/extropolis/99aa2dae-98e6-4c66-8f94-2522b9a7f4dan%40googlegroups.com.