We had a very well attended session on 4 March. If you were unable to attend be sure to check out the videos on the session page at
https://ontologforum.com/index.php/ConferenceCall_2026_03_04 or on the Ontology Summit YouTube channel.
We are pleased to announce that the next session of the Ontology Summit 2026 will feature
Randy Goebel, who will be presenting
A (partial) framework for debugging foundation models
Randy Goebel is a Professor of Computing Science and adjunct Professor in the Faculty of Medicine at the University of Alberta, and Fellow and Co-founder of the Alberta Machine Intelligence Institute (AMII), one of three Canadian federally funded AI research organizations.
As usual, all summit sessions are zoom sessions on Wednesdays at Noon US/Canada Eastern Time and each session lasts one hour. The summit is open to the public and no registration is necessary. All summit sessions are recorded and are available on the summit web pages and on the Ontology Summit YouTube channel.
The session page is:
https://ontologforum.com/index.php/ConferenceCall_2026_03_11
Abstract: The currently most popular mechanisms of AI are Large Language Models (LLMs), despite the reality that they are computer programs that produce incorrect results. If any evolution of AI systems is to be trusted, the possible choices of foundation models must be further developed. We propose a simple framework that admits a number of different formalisms for so-called foundation models, and argue that, while the methods for debugging them are varied, the crucial scientific question should focus on how to provide a foundation for their debugging. The overall hypothesis is that if we want to establish trust in AI system behaviour, we must provide mechanisms that ensure their reliable operation.
Relevant ideas come from discrete mathematics (e.g., Gödel, Turing), logic and logic programming, Bayesian probability, reinforcement learning, and transformers. Overall, we seek to understand how to choose amongst such methods, and how to integrate them, depending on expectations about application correctness (or not).