Data Science Platform Seminar Series IV
=======================================
Title: Understanding Large Language Models and the Path to AGI
Speaker: Prof. John Abela
Date & Time: Wednesday, 5th November 2025, 12:00 (noon)
Location: Faculty of ICT, ICT Communications Lab (Level 0, Block B, Room 1)
Talk Abstract
===========
Neural networks are often perceived by the general public as a form of magic, but at their core they are essentially a structured sequence of mathematical transformations mapping an input tensor space to an output tensor space. Large Language Models (LLMs), such as ChatGPT, operate through a series of tensor algebra operations, leveraging vast amounts of data and computation. The true "magic" emerges not from individual calculations but from scaling: when models grow larger, they exhibit emergent properties that were not explicitly programmed. This talk explores the implications of scale in AI, drawing lessons from nature. Evolution did not grant humans 86 billion neurons and 100 trillion synaptic connections by accident; nature is economical, and the complexity of human intelligence is deeply tied to its capacity. The human brain's encephalization quotient, the ratio of its actual mass to the mass expected for an animal of its body size, exceeds that of any other primate, highlighting the importance of scale in biological intelligence.
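As a minimal sketch of this "tensors in, tensors out" view, consider a toy two-layer network in plain NumPy; the shapes, weights, and function names below are illustrative choices, not anything taken from the talk itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters for a 2-layer network mapping R^4 to R^2.
# All shapes and values are illustrative placeholders.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    """A neural network as a structured sequence of tensor transformations."""
    h = np.maximum(x @ W1 + b1, 0.0)  # affine map followed by a ReLU nonlinearity
    return h @ W2 + b2                # final affine map into the output space

print(forward(rng.normal(size=4)))  # a point in the 2-dimensional output space
```

Every step here is ordinary tensor algebra; nothing in the individual operations hints at the emergent behaviour that appears only at scale.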
A central
question arises: Are human intelligence and consciousness
Turing-computable? If intelligence is simply the product of sufficient
capacity and complexity, then in principle, AI models, when scaled,
should be able to achieve human-level Artificial General Intelligence
(AGI). But does the nature of intelligence go beyond computation? What
is the Kolmogorov complexity of human intelligence? The Chinese Room
argument, proposed by philosopher John Searle, challenges the idea that
syntactic manipulation alone is sufficient for true understanding.
Meanwhile, philosopher and cognitive scientist Daniel Dennett’s theories
on consciousness suggest that intelligence is just an emergent property
of information processing, much like what we observe in modern AI
models.
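For readers who have not met the term, Kolmogorov complexity has a standard formal definition (stated here as background, in the usual notation): relative to a fixed universal Turing machine U, the complexity of an object x is the length of the shortest program that produces it,

```latex
% Kolmogorov complexity of x relative to a universal Turing machine U:
% the length |p| of the shortest program p whose output is x.
K_U(x) = \min \{\, |p| : U(p) = x \,\}
```

Asking for the Kolmogorov complexity of human intelligence is therefore asking how short a description could, in principle, reproduce it.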
This talk will critically examine these
perspectives, discussing whether AI is on the trajectory to achieving
human-like cognition or if there are fundamental barriers that limit
computational models from replicating consciousness. Ultimately, we will
explore whether the rapid scaling of AI is bringing us closer to AGI or
revealing the limits of algorithmic intelligence. Is the human brain
super-Turing, that is, computationally more powerful than any Turing machine?
This is the fourth talk in the 2025 Data Science Platform Seminar Series.
Speaker’s Bio
===========
John
Abela is an Associate Professor in the Department of Computer
Information Systems at the Faculty of ICT. He holds a BSc in Mathematics
and Computing (Malta), an MSc in Computer Science (UNB) and a PhD in
Theoretical Machine Learning (UNB). His main areas of specialization are
AI, machine learning, deep learning, machine vision, optimization,
computational complexity, and Large Language Models (LLMs).