Dear Colleagues,
It has now been a couple of weeks since the joint DC-NN and DC-CIV
session at this year's IGF on
https://igf2025.sched.com/event/2480q/ws-#187-bridging-internet-ai-governance-from-theory-to-practice
Thanks to Pari and Luca for expertly leading us through a session
that was short, only 75 minutes, yet filled with valuable insights
from everyone. The room, Workshop Room 3, had a capacity of a
stage plus 112 classroom seats, of which I'd say at least 80-90
were filled. Online, we had more than 20 remote participants. By
IGF standards this was a well-attended session.
The recording of the session is available at:
https://youtu.be/nDp9MWtHtgg?si=vcU3ErZo6pTqrgIp
Within 24 hours we had to submit three takeaways and three action
points, all of which were subject to a word limit.
These are:
Session key takeaway 1
The Internet and AI are "two different beasts," but share
foundational governance challenges.
The Internet was built on decentralisation and openness, enabling a
participatory digital
ecosystem. In contrast, AI—especially large language models—is often
opaque and highly
centralised, controlled by a few actors. Bridging this gap is
crucial for equitable digital futures.
Session key takeaway 2
Core Internet Values can serve as a compass for AI governance, but
need adaptation.
Fundamental principles such as interoperability, openness, and
transparency are vital for
safeguarding user autonomy and ensuring pluralism within AI systems.
However, new
principles such as explainability, provenance, and safety are also
necessary due to AI’s
unique risks.
Session key takeaway 3
Multistakeholder governance and Global South inclusion are vital.
Panellists emphasised the importance of inclusive governance models
that engage civil
society, academia, technical experts, end users and governments,
especially from the
Global South. There was consensus that governance frameworks must
not be dictated
solely by Global North corporations or governments.
Session call-to-action-point 1
Develop and implement AI governance frameworks grounded in openness,
transparency,
and interoperability.
This includes making training data sources traceable (provenance),
ensuring users can
understand and audit AI models (explainability), and avoiding
concentration of power in a
few platforms or providers.
Session call-to-action-point 2
Establish regulatory and oversight mechanisms to manage risks and
enforce
accountability.
Risk-based regulation, especially in high-impact areas (e.g.,
healthcare, finance), and the
development of global standards for AI safety and liability were
proposed as urgent needs.
Session call-to-action-point 3
Encourage global, inclusive, and multistakeholder collaboration.
Participants called for AI governance that reflects diverse
perspectives and protects public
interest—particularly ensuring equitable access to data,
infrastructure, and innovation for
developing nations.
We also submitted a workshop report, which was filed on time by
6th July 2025. I invite you to read it at:
https://intgovforum.org/en/content/igf-2025-ws-187-bridging-internet-ai-governance-from-theory-to-practice#edit-group-report
It contains plenty of interesting material.
May I propose that, as a next step, we organise our work and
start planning action threads, building on the action items we
now have on the table?
Kindest regards,
Olivier Crépin-Leblond
DC CIV Chair