Recursive exponential improvement


John Clark

Nov 25, 2025, 7:09:27 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
Last week Google introduced Gemini-3, and on all the benchmarks it easily beat every other AI. However, its superiority did not last long. Yesterday Anthropic introduced Claude Opus-4.5, and it easily beat Gemini-3 at writing computer code. One thing I found particularly interesting: since its founding, Anthropic has had a policy of giving every job applicant a notoriously difficult programming test that the applicant could take home and bring back the next day. They decided to give Claude Opus-4.5 that same test with a two-hour time limit, and it scored higher on coding ability than ANY human candidate ever has! If that isn't screaming "recursive exponential improvement", I don't know what would.

And to think, some people are still worried about trivialities like the war on Christmas, illegal immigration, global warming, and the US not balancing the budget. 


John K Clark    See what's on my new list at Extropolis


John Clark

Nov 25, 2025, 4:09:21 PM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List

Coincidentally, just a few hours after I started this thread, I found another article about Recursive Exponential Improvement, although it never uses that exact phrase. It's from yesterday's issue of the journal Nature.


The article is well worth reading in its entirety, but it is rather long, so here are the parts I found most interesting:

"A predator must predict actions that will get the prey into its stomach; the prey must predict the predator’s behaviour to stop that from happening. Starting in the 1970s, neuropsychologists and anthropologists began to realize that other intelligent entities are often the most important parts of the environment to model — because they are the ones modelling you back, whether with friendly or hostile intent. Increasingly intelligent predators put evolutionary pressure on their prey to become smarter, and vice versa."

"The pressures towards intelligence become even more intense for members of social species. Winning mates, sharing resources, gaining followers, teaching, learning and dividing labour: all of these involve modelling and predicting the minds of others. But the more intelligent you become — the better to predict the minds of others (at least in theory) — the more intelligent, and thus hard to predict, those others have also become, because they are of the same species and doing the same thing. These runaway dynamics produce ‘intelligence explosions’. Over the past billion years, symbiogenesis has produced increasingly complex nervous systems, colonies of social animals — and eventually our own technological society. Is this nature’s version of Moore’s law?" 

"Since around 2006, transistors have continued to shrink, but the rise in semiconductor operating speed has stalled. To keep increasing computer performance, chip-makers are instead adding more processing cores. They began, in other words, to parallelize silicon-based computation. It’s no coincidence that this is when modern, neural-net-based AI models finally began to take off." 

"AI is not distinct from humanity, but rather is a recent addition to a mutually interdependent superhuman entity we are all already part of. An entity that has long been partly biological, partly technological — and always wholly computational. The picture of the future that emerges here is sunnier than that often painted by researchers studying the ethics of AI or its existential risks for humanity. People often presume that evolution — and intelligence — are zero-sum optimization processes, and that AI is both alien to and competitive with humanity. The symbiogenetic view does not guarantee positive outcomes, but neither does it position AI as an alien ‘other’, nor the future as a Malthusian tug-of-war over resources between humans and machines." 

John K Clark





Russell Standish

Nov 25, 2025, 6:20:16 PM
to everyth...@googlegroups.com
This idea also goes by the name of the "Machiavellian intelligence
hypothesis" (https://en.wikipedia.org/wiki/Machiavellian_intelligence_hypothesis).

Also - I hope it is true that biological/technological cyborgisation
is the end-state of the Singularity, but I don't think there are any
guarantees of that.

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------