Will AI R&D Automation Cause a Software Intelligence Explosion?


John Clark

Apr 24, 2025, 5:11:34 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List

Here is another analysis of what we may expect from AI in the future, this one from a group called Forethought: "Will AI R&D Automation Cause a Software Intelligence Explosion?". I think it's important to examine AI from a number of different viewpoints because I think it's the most important development since the Cambrian Explosion, and I was particularly interested in what they had to say on this question:


Their conclusions are largely consistent with what the AI Futures Project says on the subject and with my own views. I've made a synopsis for those who don't wish to read the entire thing: 

"The emergence of ASARA [their torturous acronym for "AI Systems for AI R&D Automation"] would trigger a feedback loop in which ASARA systems performing AI R&D lead to more capable ASARA systems, which in turn conduct even better AI R&D, and so on, culminating in an “intelligence explosion” – a period of very rapid and accelerating AI progress which results in a superhuman AI. This is because the resulting  positive feedback loop would create a software intelligence explosion even if a constant amount of computer hardware is available, but it's very hard to imagine a scenario where the amount of computer hardware used for AI is not increasing."  

"One way to measure efficiency improvements is to look at the amount of computing power needed for an AI system to exhibit a particular level of performance, and consider how much more computing power was previously needed for AI systems to reach the same level of performance. By tracking the change over time, we can chart how efficiency has improved over time so that more can be done with less computation. For example:"

"Image Recognition: OpenAI found that, between 2012 and 2017, state-of-the-art image recognition algorithms became much more efficient, requiring 1/18th as much computing power to run in order to achieve consistent results. This growth rate corresponds to the runtime efficiency doubling every 15 months on average.  Similarly, they found that, between 2012 and 2019, the amount of computing power needed to train these state-of-the-art image recognition systems (to the same level of performance) fell by 44x, corresponding to a training efficiency doubling time of 16 months. As another data point, the research group Epoch has estimated that, from 2012 – 2022, training efficiency of image recognition algorithms had a shorter doubling time of only 9 months."

    "Language Translation and Game Playing. OpenAI found even faster progress in the efficiency of training AI systems for language translation and game playing. For language translation, based on two analyses, they calculated an efficiency doubling time of 4 months and 6 months, and for game playing, they found an efficiency doubling time of 4 months for Go and 25 days for Dota."

    "Large Language Models. Analysis from Epoch estimates that, from 2012 to 2023, training efficiency for language models has doubled approximately every 8 monthsThe analyses so far just look at improvements for unmodified “base models” and therefore neglect efficiency benefits from improvements in “post-training enhancements” like fine-tuning, prompting, and scaffolding. These neglected benefits from post-training enhancements can be substantial. A separate analysis finds that individual innovations in post-training enhancements for LLMs often give >5x efficiency improvements in particular domains (and occasionally give ~100x efficiency improvements). In other words, AI models that incorporate a given innovation can often outperform models trained with 5x the computational resources but without the innovation."
"And a separate informal analysis finds that for LLMs of equivalent performance, the cost efficiency of running the LLM (i.e., amount of tokens read or generated per dollar) has doubled around every 3.6 months since November 2021. (Though note that cost efficiency doesn’t just take into account software improvements, but also decreases in hardware costs and in profit margins; with that said, software improvements are probably responsible for the great majority of the cost efficiency improvements.) We believe it’s reasonable to split the difference between these two estimates and conclude that both training efficiency and runtime efficiency of LLMs have a ~6 month doubling time."

"The amount of computing power it takes to train a new AI system tends to be much larger than the amount of computing power it takes to run a copy of that AI system once it’s already trained. This means that if the computing power used to train ASARA  [a AI Systems for AI R&D Automation is repurposed to run these systems, then a gigantic number of these systems could be run in parallellikely implying much larger “cognitive output” from ASARA systems collectively than what’s currently available from human AI researchers. Thus if you have enough computing power to train a frontier AI system today, then you have enough computing power to run hundreds of thousands of copies of this systemWhat that means is by the time we reach ASARA, which should  happen within the next few years, the total cognitive output of ASARA systems will likely be equivalent to millions of top-notch human researchers all working 24 hours a day seven days a week."

"Humans are presumably not the most intelligent lifeform possible, but simply the first lifeform on Earth intelligent enough to engage in activities like science and engineering. ASARA will most likely be trained with orders of magnitude more computational power than estimates of how many “computations” the human brain uses over a human’s development into adulthood, suggesting there’s significant room for efficiency improvements in training ASARA systems to match human learning. If a software intelligence explosion does occur, it would very quickly lead to huge gains in AI capacity. Soon after ASARA, progress might well have sped up to the point where AI software was doubling every few days or faster (compared to doubling every few months today)."

John K Clark    See what's on my new list at  Extropolis


Cosmin Visan

Apr 25, 2025, 9:03:00 AM
to Everything List
Orgasmic explosion!

spudb...@aol.com

Apr 25, 2025, 11:41:27 AM
to Everything List
I listen to physicist Federico Faggin occasionally, and sort of view with interest his description of the Universe. Here is a 24-hour-old YouTube video; it applies to this discussion.




