Does AI already have human-level intelligence? The evidence is clear


John Clark

Feb 2, 2026, 4:04:52 PM (5 days ago) Feb 2
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
The following article was in the February 2, 2026 issue of the journal Nature: 


John K Clark    See what's on my new list at  Extropolis

Russell Standish

Feb 2, 2026, 5:22:51 PM (5 days ago) Feb 2
to everyth...@googlegroups.com
I'm inclined to agree. Having used LLMs for the last six months for
programming tasks, I do find myself treating them like any human
colleague. Their intelligence is different from a human's, a bit like
an idiot savant: incredibly knowledgeable, but prone to going off-piste
and flaky at actually delivering (war stories later if I have the time
:). Not unlike some brilliant but flaky employees I've worked with in
the past.

So I'd say the threshold was passed sometime in the last couple of
years. Bear in mind that in Kurzweil's timeline to the singularity in
2045, this milestone was supposed to happen by 2020, so we're running
about 3-5 years behind schedule :). Not that 3-5 years means much in a
revolution this big.

The real debate now is whether LLMs are truly creative. It is a
difficult question to answer, in part because we don't really know what
that means, or how to measure it. Also, it would appear most humans are
probably not that creative anyway, whether by nature or circumstance.

Moltbook is an interesting experiment, because it might just show
unequivocally whether these machines are creative or not.

I don't know whether to be excited, or scared or just resigned at this point.

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

Brent Meeker

Feb 2, 2026, 6:39:51 PM (5 days ago) Feb 2
to everyth...@googlegroups.com
Even if you, as a human, are very creative, it doesn't count if you're creating something that has already been created.  I'm sure most people on this list have had a clever idea only to discover that Euler or Bernoulli or somebody thought of it a hundred years ago.  So to be effectively creative you need an enormous knowledge base, which is something LLMs excel at.

Brent

Russell Standish

Feb 2, 2026, 7:45:08 PM (5 days ago) Feb 2
to everyth...@googlegroups.com
On Mon, Feb 02, 2026 at 03:39:47PM -0800, Brent Meeker wrote:
> Even if you, as a human, are very creative, it doesn't count if you're creating
> something that has already been created.  I'm sure most people on this list
> have had a clever idea only to discover that Euler or Bernoulli or somebody
> thought of it a hundred years ago.  So to be effectively creative you need an
> enormous knowledge base, which is something LLMs excel at.
>
> Brent

And then there's the fact that usually, when a creative leap happens
(in whatever field), it happens more than once independently. It's
often the case that the second person to discover or invent something
gets the credit.

Kauffman's idea of the "adjacent possible" plays into this.

Giulio Prisco

unread,
Feb 3, 2026, 3:46:57 AM (4 days ago) Feb 3
to extro...@googlegroups.com, ExI Chat, 'Brent Meeker' via Everything List
Very interesting paper!

spudb...@aol.com

Feb 3, 2026, 6:17:56 AM (4 days ago) Feb 3
to extro...@googlegroups.com, everyth...@googlegroups.com, ExI Chat
Claude 4.5 Sonnet/Opus are excellent, as is ChatGPT 5.2. Gemini 3.0 does the heavy lifting for reports, as does Copilot, and for straight-out analysis and conversation Grok 4.1 does really well. In all of these, for doing papers, the more scientific researchers' names I add in for an analysis, the better the results I get. One researcher's life's work often complements another's. That, I find, is the tasty function of LLMs: combining people's works.






John Clark

Feb 6, 2026, 6:34:54 AM (yesterday) Feb 6
to everyth...@googlegroups.com
On Mon, Feb 2, 2026 at 5:22 PM Russell Standish <li...@hpcoders.com.au> wrote:


> So I'd say the threshold was passed sometime in the last couple of years.

I agree. AI is already smarter than even the smartest specialized human at some things and, except for manual dexterity, is better than the average human at everything. And unlike humans, AI keeps getting smarter every day.   
 
> Bear in mind, in Kurzweil's timeline to the singularity in 2045
 
Kurzweil's timeline has become confusing. As far as I know he has not changed his old prediction that the Singularity won't happen until 2045, but in a talk given just last year he said that the 2030s will be the decade when nanobots will begin connecting our neocortex to the cloud, and by 2040 our non-biological intelligence will be significantly greater than our biological intelligence. That sure sounds like a Singularity to me.

> The real debate now is whether LLMs are truly creative.

For at least the last five years, computers have done things that, had a human performed them, there would have been no debate whatsoever: everybody would agree they were creative. But for some people, if a computer has done it then it is, by definition, not creative. I think that if that is the definition of the word "creative", then the word is not of much use.

John K Clark    See what's on my new list at  Extropolis

 




Brent Meeker

Feb 6, 2026, 4:01:44 PM (16 hours ago) Feb 6
to everyth...@googlegroups.com


On 2/6/2026 3:34 AM, John Clark wrote:
>> The real debate now is whether LLMs are truly creative.

> For at least the last five years, computers have done things that, had a human performed them, there would have been no debate whatsoever: everybody would agree they were creative. But for some people, if a computer has done it then it is, by definition, not creative. I think that if that is the definition of the word "creative", then the word is not of much use.

It's just a semantic problem of treating the property "creative" as if it were all-or-nothing.  There are degrees of creativity.  Putting together notes to create a musical score is creative even if you didn't create the notes or the musical notation.  Putting together musical phrases is also creative, just less so.  It's more creative still if you put together more disparate things to work in a way unknown before.  LLMs are creative; they put together phrases and sentences that are made of existing fragments.

Brent

Russell Standish

Feb 6, 2026, 5:25:01 PM (14 hours ago) Feb 6
to everyth...@googlegroups.com
That is not the sense of "creative" that I use. What you're talking about
is emergence, and artificial systems have exhibited that sort of
limited creativity for years. Tom Ray's Tierra system exhibited novel
behaviour within hours of being switched on, with parasites,
hyperparasites and so on arising, and then nothing: no further novel
behaviours were seen. John Koza's GP algorithms have generated
patentable inventions, but again very much limited to what the initial
database/problem set is.

I also object to artists who throw random blobs of paint at a canvas
calling themselves "creatives". They just aren't, in the main.
Obviously creative artists do exist, but not all artists, or even
most artists, are creative.

Biological evolution, on the other hand, is undeniably creative. Over
billions of years, evolution has generated continuous novelty.
Beethoven would probably still be writing symphonies today if he were
still alive. Einstein, unfortunately, maxed out when he got famous and
decided to tackle really difficult problems that nobody in their right
mind would consider tackling.

From what I've seen and experienced to date, LLMs are very good at
applying the vast collective knowledge base to problems that have in
essence been solved before, but they haven't yet exhibited the leap
into the unknown that, say, Einstein's theory of general relativity was.

I do think we'll get there; we're just not there yet. Moltbook is
an interesting experiment to see what happens when evolution and
recursion are added to the mix. In that light, let me cite a recent
paper on what might be required:

https://arxiv.org/abs/2511.02864

Mathematical exploration and discovery at scale
Bogdan Georgiev, Javier Gómez-Serrano, Terence Tao, Adam Zsolt Wagner