Dear colleagues,
I'm writing to invite abstracts for an Open Panel I'm convening at 4S Toronto 2026: "Polycentric Futures: Governing AI Beyond Monocentric Control."
This year's 50th anniversary theme, "TechnoPower · Technoscientific Futures," asks how technoscience intersects with power and shapes possible futures. This panel takes that question to AI governance: not who should govern AI, but who actually does, and how.
As AI development concentrates among a small number of powerful actors, fundamental questions emerge about where governance authority migrates, and whether it can be redistributed. The panel examines how polycentric, privately ordered, and community-driven governance arrangements contest, complement, or complicate dominant configurations of AI infrastructure.
Drawing on Ostrom's design principles and Alexander's form–context fit, the panel investigates three interrelated dynamics:
1. Form–context fit: How do decentralised governance models (from crypto-protocols and open collectives to platform–community hybrids) achieve, or fail to achieve, alignment between institutional form and operational context?
2. Improvised autonomy: How do workers, communities, and institutions develop locally improvised governance arrangements that carve out autonomy within dominant AI infrastructures?
3. Breakdown as diagnostic: How do failures in AI-mediated governance reveal sociotechnical foundations that ordinarily remain invisible, creating openings for institutional repair?
The panel welcomes empirical case studies alongside theoretical contributions. Topics of interest include (indicative, not exhaustive):
· Private or community-based AI governance (DAOs, open-source communities, platform/AI cooperatives)
· Shadow AI use and governance gaps as diagnostic indicators of institutional design failure
· Sovereign capacity, infrastructure dependence, and middle-power AI strategy
· Epistemic governance: how AI changes what institutions know and how they know it
· Theoretical work connecting polycentric theory and commons governance design principles to AI systems
The panel particularly encourages papers that address how communities exercise agency within algorithmic infrastructures and how design principles from commons governance might inform AI development.
The core commitment: treating AI not merely as an object of governance, but as an active participant in governance networks.
If your work sits at the intersection of STS, institutional analysis, platform governance, or AI policy (especially if it's grounded in empirical case studies), I'd love to see your abstract! And if you have any questions about fit, please don't hesitate to get in touch.
Best wishes,
Luis Lozano Paredes
-----
Dr Luis Lozano Paredes, PhD, MPIA
Lecturer, Transdisciplinary School
Chair, Transdisciplinary School AI Working Group (Research + Teaching and Learning)
University of Technology, Sydney (UTS)
PO Box 123 Broadway NSW 2007 Australia