is the AI bubble about to burst?

John F Sowa

Sep 6, 2025, 9:21:39 PM
to ontolog-forum, c...@lists.iccs-conference.org
Following is an article by a pundit commenting on the current state of LLM-based AI technology.  I agree with his negative evaluation of AI systems that rely solely on LLMs and large volumes of data, from which the LLMs find or generate answers to any request.

The latest and greatest systems process ever more voluminous amounts of data.  The cost of building those huge computer systems is enormous (in the billions of $$$), but the incremental improvement is minimal.  As they increase the amount of data, they get a huge amount of repetition: somewhat more relevant data, but also enormous amounts of wrong and irrelevant data.  The incremental improvement in going from a million-$$$ hardware system to a multibillion-$$$ system (such as Elon M's) is negligible.

There is a solution:  Stop building bigger and more expensive hardware.  Instead, build hybrid software systems:  LLMs for searching and translating, and GOFAI (Good Old Fashioned AI) for reasoning, evaluating, and testing the results. 

There are over 60 years of GOFAI research and development.  It has reached a very high level of sophistication, with 100% precise logical methods and very precise statistical methods.  The techniques for relating and combining these methods have reached a high degree of power and accuracy.  Hybrid systems that use a modest amount of LLM power together with an appropriate GOFAI method can be very precise, dependable, and reliable.
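
As a minimal sketch of this division of labor (a hypothetical illustration of the pattern, not any particular system): an LLM proposes a candidate answer, and a small forward-chaining rule engine, standing in for the GOFAI component, checks whether the candidate is entailed by a curated knowledge base before asserting it.

    # Hypothetical sketch of an LLM + GOFAI hybrid: the LLM proposes,
    # a symbolic rule engine verifies.  All names here are made up;
    # llm_propose stands in for a real LLM call.

    def llm_propose(question: str) -> str:
        """Placeholder for an LLM call returning a candidate answer."""
        return "mortal(socrates)"

    # Knowledge base: ground facts plus Horn-style rules
    # (set of premises -> conclusion).
    FACTS = {"human(socrates)"}
    RULES = [({"human(socrates)"}, "mortal(socrates)")]

    def forward_chain(facts, rules):
        """Naive forward chaining: apply rules until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    def answer(question: str) -> str:
        candidate = llm_propose(question)       # LLM: broad but unreliable
        entailed = forward_chain(FACTS, RULES)  # GOFAI: narrow but exact
        if candidate in entailed:
            return candidate                    # verified before asserting
        return "unverified: " + candidate       # flag instead of guessing

    print(answer("Is Socrates mortal?"))        # -> mortal(socrates)

The point of the pattern is that the symbolic side never trusts the LLM's output; it asserts only what it can independently derive.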

There are various ways of designing LLM-GOFAI hybrids.  Several such systems have achieved a high level of guaranteed accuracy, and there are many promising directions for combining LLMs with the 60 years of GOFAI research.

John

PS:  I don't make any predictions about the stock market.  I would never recommend a huge system like Elon's.  But Nvidia might make more money by selling multiple smaller systems. Bigger is not better.
__________________

From: Will Lockett <planeteart...@substack.com>
Date: Sat, Sep 6, 2025 at 2:40 PM
Subject: Is The AI Bubble About To Burst?

It isn't an easy question to answer.

If you need any proof that the AI hype is getting a little out of
control, just look at Musk.  He has claimed that FSD will soon make car
ownership obsolete, despite the fact that after a decade of development,
it still drives like a myopic 12-year-old.  He even recently claimed
that Tesla’s AI robot Optimus, which so far has only shuffled around on
completely flat surfaces like it has shat itself and been puppeteered
like a ’90s Disney animatronic, will soon make up 80% of Tesla’s
revenue.  And, somehow, despite these brain-rotten, wildly unrealistic
and idiotic claims, analysts and investors aren’t calling Musk a clown
and are instead pumping money into his quack schemes.  So it is no
wonder the idea that the AI bubble is about to burst has been floating
around the media over the past week.  However, some key elements of this
conversation have been overlooked.

For one, we know that AI is a severely limited technology, yet the
industry is pretending that it isn’t.

For example, we have known about the efficient compute frontier for
years now.  This boundary states that the maths behind AI is reaching a
point of diminishing returns and that generative models are currently as
good as they will get, even if the models are made exponentially larger
(read more here).  We can already see this with GPT-4 and GPT-5:
despite OpenAI significantly increasing the model size and training
time, the newer model delivers only very minor improvements.
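
As a rough numeric sketch of what diminishing returns means here (the
constants below are invented, but the power-law-plus-floor form,
L(N) = a * N^(-alpha) + c, is the standard way scaling laws are
written), each tenfold increase in parameters buys a smaller drop in
loss as the curve flattens toward an irreducible floor:

    # Illustrative only: scaling laws are commonly written as
    # L(N) = a * N**(-alpha) + c, where N is the parameter count and
    # c is an irreducible loss floor.  These constants are invented.
    a, alpha, c = 10.0, 0.1, 1.0

    def loss(n_params: float) -> float:
        """Power-law loss with an irreducible floor c."""
        return a * n_params ** -alpha + c

    for n in (1e9, 1e10, 1e11, 1e12):   # 1B .. 1T parameters
        print(f"{n:.0e} params -> loss {loss(n):.3f}")

    # 1e+09 params -> loss 2.259
    # 1e+10 params -> loss 2.000
    # 1e+11 params -> loss 1.794
    # 1e+12 params -> loss 1.631
    # Each 10x in size buys less than the previous 10x did.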

Then there is the Floridi Conjecture, which again holds that, because
of the maths that powers AI, a system can have either great scope with
little certainty, or constrained scope with great certainty.
Crucially, the conjecture states that an AI absolutely cannot have
both great scope and great certainty (read more here).  This means
that AI models treated as general-purpose intelligent systems, like
LLMs or Tesla’s FSD, can never be reliable, as their scope is far, far
too large.  But in more constrained applications, in which the system
isn’t treated as intelligent, it can be made dependable and reliable.
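
A back-of-the-envelope way to see why scope undermines certainty (my
arithmetic, not Floridi’s formalism): if a system handles each of n
independent task types with 99% per-task accuracy, its whole-scope
reliability is 0.99^n, which collapses as the scope widens:

    # Toy illustration of the scope/certainty trade-off (invented
    # numbers, not Floridi's own formalism).  With per-task accuracy p
    # over n independent task types, whole-scope reliability is p**n.
    p = 0.99

    for n in (1, 10, 100, 1000):
        print(f"scope of {n:>4} task types -> reliability {p**n:.3f}")

    # scope of    1 task types -> reliability 0.990
    # scope of   10 task types -> reliability 0.904
    # scope of  100 task types -> reliability 0.366
    # scope of 1000 task types -> reliability 0.000   (about 4e-5)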

This inability to be even remotely accurate in broad applications is
reflected in real-world deployments.

An MIT report found that 95% of AI pilots didn’t increase a company’s
profit or productivity.  For the 5% in which it did, the AI was
relegated to back-room, highly constrained admin jobs, and even then,
there were only marginal improvements.

A METR report found that AI coding tools actually slow developers down.
The inaccuracy of these models means they repeatedly introduce bizarre
coding bugs that are arduous to find and correct.  As such, it is
quicker and cheaper for developers to write the code themselves.

Research has even found that for 77% of workers, AI has increased their
workload and not their productivity.

Then there is the issue of treating AI as intelligent, even in slightly
constrained tasks.  Take Amazon, which used AI to power its
checkout-less stores before switching to remote workers, given that the
AI was getting it wrong so frequently and costing them so much money
that it was unsustainable.  Even in very constrained tasks like this,
AI’s constant errors cost more than the savings it delivers.

The real-world data and our understanding of this technology have
painted a very clear picture.  AI is a severely limited technology,
which cannot improve much past its current form and only has a positive
impact in a few niche and mostly boring applications...