[PHILOS-L] CFA - The Large-Scale Risks of AI: Control, Governance, and Ethics (23-24 June 2026, Leuven)


Rebera Andrew

Dec 17, 2025, 2:07:30 PM
to PHIL...@listserv.liv.ac.uk


 
Call for Abstracts
The Large-Scale Risks of AI: Control, Governance, and Ethics
23-24 June 2026, Leuven, Belgium
We are delighted to announce the second two-day international conference on the large-scale risks of AI. We welcome scholars from a wide range of research domains: philosophy, ethics, computer science, engineering science, social and political sciences, law and international relations, and many others.
The conference is organised by the Chair of Ethics and AI at KU Leuven and the Future of Life Institute, and will take place at the Faculty Club, located in Leuven's UNESCO-protected Grand Beguinage.
Confirmed Keynote Speakers
  • Roman Yampolskiy
  • Werner Stengg
Conference Theme
The landscape of artificial intelligence (AI) is rapidly evolving, driven by the development of ever more capable systems. While this technology holds the promise of immense benefits for society, it also raises concerns about large-scale risks. Prominent concerns include the loss of control over AI (e.g. via scheming and alignment faking), the malicious use of AI for bioterrorism, and the erosion of safety standards driven by competitive development pressures.
Given the multifaceted nature of these challenges, addressing them effectively requires insights and expertise from diverse disciplines. Solutions must be forged through interdisciplinary collaboration, drawing on expertise from STEM fields, the social sciences, the humanities, and beyond. This conference aims to provide a platform for fostering such interdisciplinary dialogue, promoting a holistic understanding of large-scale AI risks, and exploring potential solutions through the combined knowledge of experts from various fields.
Paper and Workshop Topics
We invite participants from a wide range of academic disciplines to submit abstracts of work relating to the large-scale risks of AI. We welcome proposals for both individual paper presentations (20 minutes plus Q&A) and workshops (90 minutes total). Note that the particular focus of this conference is interdisciplinary communication and learning: presented work should address big-picture issues and be accessible to a broad audience. Topics may include:
  • Large-scale risks emerging from future AI systems: If the rapid advancement of AI technology continues, it may soon present novel risks that societies are unprepared for. The identification of such risks, e.g. bioterrorism, cyberwarfare, and rogue or misaligned AI, is crucial in helping societies develop prevention and mitigation strategies before the risks materialize.
  • Technical alignment plans, i.e. roadmaps for aligning very powerful systems: How to robustly control advanced AI systems is one of the key technical problems that need to be solved. Proposals vary widely, taking inspiration from various disciplines and schools of thought. Examples include creating a human-level alignment researcher to help align increasingly capable AIs, developing advanced AIs with mathematical safety guarantees, explicitly encoding human morality in LLMs, and Scientist AI.
  • Governance and control of agentic AI: Active research aims to create agentic AI. How do these systems differ from non-agentic ones? How does agency relate to e.g. planning and goal-directedness? What risks emerge from complex multi-agent interactions? What technical and societal infrastructure is required to manage agent ecosystems? Differential access to agentic AI may foster inequality throughout society; what kind of regulatory frameworks are needed to mitigate this risk?
  • Ethical considerations and frameworks for handling AI risks and control: The emergence of advanced AI gives rise to many new ethical questions. What should we align these systems to? How should we deal with the implications for the economy, well-being, and politics? Can AI have moral rights?
  • Identification of present large-scale risks: Modern AI systems have not yet reached general human-level performance, but they may still pose large-scale risks that require our immediate attention. For example, some argue that the proliferation of misinformation through AI threatens informed decision-making in democratic societies.
  • Psychology, cognitive science, and large-scale risks: Cognitive biases, heuristics, and other psychological factors may lead to underestimating or misunderstanding existential risks from advanced AI. What is the role of human psychology in the development, deployment, and evaluation of AI systems? What insights does cognitive science bring to the challenge of aligning advanced, general, or superintelligent AI with human values?
  • Policy considerations to prevent catastrophic outcomes: As governments around the world begin to consider AI regulation, there is a strong need for policy proposals that help ensure that advanced AI is broadly beneficial. Relevant proposals are necessarily multidisciplinary, drawing both on direct insights into the technology (e.g. strict controls based on compute capabilities) and on inspiration from other disciplines (e.g. nuclear safety regulation).
  • Philosophical considerations of large-scale AI risks: What precisely do the concepts of existential risk and catastrophic risk mean? One can also ask questions about large-scale risks from the perspective of philosophy of science: how should we assign probabilities to events that have never occurred, and what roles do epistemic values play in predictions about AI’s future impact? From a political philosophy perspective, we welcome papers on, for example, the implications of the distinction between distributive and procedural justice for the governance of large-scale risks.
If your work does not clearly fit one or more of the listed areas but aligns with the goals of the conference, we encourage you to submit a contribution. If you are unsure, please reach out to existen...@kuleuven.be.
Submission Instructions
We welcome submissions for both paper presentations (20 minutes + Q&A) and workshops (90 minutes). To facilitate a blind peer-review process, please submit your proposal in two separate documents. The first document should contain the submission type (paper or workshop), the presentation title, an abstract (max. 400 words), and 3-5 keywords. The second document should provide the name(s), affiliation(s), and contact details of the author(s).
The conference organizers value inclusivity and diversity: we especially encourage submissions from underrepresented groups.
Submission Deadline 
Abstracts should be submitted by email to existen...@kuleuven.be no later than 15.02.2026.
Peer Review Process
All abstracts will undergo a blind peer-review process by the conference's scientific committee. Applicants will be notified of acceptance or rejection on or before 01.03.2026.
For further inquiries, please send a message to existen...@kuleuven.be.

Philos-L "The Liverpool List" is run by the Department of Philosophy, University of Liverpool: https://www.liverpool.ac.uk/philosophy/philos-l/
Messages to the list are archived at http://listserv.liv.ac.uk/archives/philos-l.html. Recent posts can also be read in a Facebook group: https://www.facebook.com/PhilosL/
Follow the list on Twitter @PhilosL. Follow the Department of Philosophy @LiverpoolPhilos.
To sign off the list send a blank message to philos-l-unsub...@liverpool.ac.uk.
