AI takeoff speed


John Clark

Jun 20, 2023, 1:45:41 PM
to 'Brent Meeker' via Everything List
I found a very interesting article about when the AI intelligence explosion will occur; it's at:


I have picked out a few quotations from it that I like:

"The term “slow AI takeoff”, Davidson argues, is a misnomer. Like skiing down the side of Mount Everest, progress in AI capabilities can be simultaneously gradual, continuous, fast, and terrifying. Specifically, he predicts it will take about 3 years to go from AIs that can do 20% of all human jobs (weighted by economic value) to AIs that can do 100%, with significantly superhuman AIs within a year after that. [...] It seems like maybe dumb people can do 20% of jobs, so an AI that was as smart as a dumb human could reach the 20% bar. The compute difference between dumb and smart humans, based on brain size and neuron number, is less than 1 order of magnitude, so this suggests a very small gap. But AI can already do some things dumb humans can't (like write coherent essays with good spelling and punctuation), so maybe this is a bad way of looking at things."

"It takes much more compute to train an AI than to run it. Once you have enough compute to train an AI smart enough to do a lot of software research, you have enough compute to run 100 million copies of that AI. 100 million copies is enough to do a lot of software research. If software research is parallelizable (ie if nine women can produce one baby per month - the analysis will investigate this assumption later), that means you can do it really fast."

"Around 2040, AI will reach the point where it can do a lot of the AI and chip research process itself. Research will speed up VERY VERY FAST. AI will make more progress in two years than in decades of business-as-usual. Most of this progress will be in software, although hardware will also get a big boost. My best guess is that we go from AGI (AI that can perform ~100% of cognitive tasks as well as a human professional) to superintelligence (AI that very significantly surpasses humans at ~100% of cognitive tasks) in 1 - 12 months."

 "It intuitively feels like lemurs, gibbons, chimps, and homo erectus were all more or less just monkey-like things plus or minus the ability to wave sharp sticks - and then came homo sapiens, with the potential to build nukes and travel to the moon. In other words, there wasn’t a smooth evolutionary landscape, there was a discontinuity where a host of new capabilities became suddenly possible. Once AI crosses that border, we should expect to be surprised by how much more powerful it becomes."

"Sometime in the next few years or decades, someone will create an AI which can perform an appreciable fraction of all human tasks. Millions of copies will be available almost immediately, with many running at faster-than-human speed. Suddenly, everyone will have access to a super-smart personal assistant who can complete cognitive tasks in seconds. A substantial fraction of the workforce will be fired; the remainder will see their productivity skyrocket. The pace of technological progress will advance by orders of magnitude, including progress on even smarter AI assistants. Within months, years at most, your assistant will be smarter than you are and hundreds of millions of AIs will be handling every facet of an increasingly futuristic-looking economy."

John K Clark    See what's on my new list at  Extropolis


spudb...@aol.com

Jun 23, 2023, 1:20:30 AM
to 'Brent Meeker' via Everything List
Based simply on recent happenings, I am guessing GPT-5 will smack us. Simply having and using an LLM may be impactful enough on us. The thinking of Turing & McCarthy may be a bit tepid for reality.

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv1W0EVSjounHzM%2BsLeVnPE_GTXW3ZMHLO%2BdiXE%2B68S5%3DQ%40mail.gmail.com.

Brent Meeker

Jun 23, 2023, 1:49:03 PM
to everyth...@googlegroups.com
On Tuesday, June 20, 2023 at 01:45:41 PM EDT, John Clark <johnk...@gmail.com> wrote:
 "It intuitively feels like lemurs, gibbons, chimps, and homo erectus were all more or less just monkey-like things plus or minus the ability to wave sharp sticks - and then came homo sapiens, with the potential to build nukes and travel to the moon. In other words, there wasn’t a smooth evolutionary landscape, there was a discontinuity where a host of new capabilities became suddenly possible. Once AI crosses that border, we should expect to be surprised by how much more powerful it becomes."

An interesting comparison, but it avoids the obvious lesson. There was a smooth evolutionary landscape leading to homo sapiens. What happened was that homo sapiens killed off all the near competitors, either directly or by outcompeting them in their niche. That's why there's a big gap down to monkeys.

Brent

John Clark

Jun 23, 2023, 2:01:54 PM
to everyth...@googlegroups.com
On Fri, Jun 23, 2023 at 1:49 PM Brent Meeker <meeke...@gmail.com> wrote:

> An interesting comparison.  But it avoids the obvious lesson.  There was a smooth evolutionary landscape leading to homo sapiens.  What happened was that homo sapiens killed off all the near competitors, 

You may be right, but you don't paint a very optimistic picture; if true, it suggests that Homo sapiens will not have a long future.

John K Clark    See what's on my new list at  Extropolis

spudb...@aol.com

Jun 23, 2023, 6:37:37 PM
to everyth...@googlegroups.com
John, please evaluate this article, because this report indicates that life could somehow be electron clouds of some sort, and we all know that electrons repel each other. Can you elucidate, please? What is the impact, if true? I am working tonight and thus occupied instead of doing searches.

Thanks, Spud, the drooling, thuggish fascist.

ScienceAlert

Brent Meeker

Jun 23, 2023, 7:35:32 PM
to everyth...@googlegroups.com
It's complete nonsense.  Biological molecules assembled at random don't have to grow sequentially from one end.  His diagram should look like:

A B C D R ->  AB BA AC CA AD DA AR RA BC CB BD DB DR RD -> ABBA BAAB ABAC ACAB ABCA CAAB ABAD ADAB ABDA...
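The combinatorics can be sketched in a few lines of Python (an illustrative toy, not from the original message, assuming five basic units that can join in either order):

```python
# Toy model of non-sequential assembly: five basic units join in
# either order, so one round of pairing yields every ordered pair
# of distinct units rather than extensions of a single growing end.
units = ["A", "B", "C", "D", "R"]

round1 = sorted(x + y for x in units for y in units if x != y)
print(len(round1))  # 20 ordered pairs from 5 units
print(round1[:6])   # ['AB', 'AC', 'AD', 'AR', 'BA', 'BC']
```

With n distinct units there are n(n-1) ordered pairs after a single join, and the count grows combinatorially with each further round; nothing forces growth from one end.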

Brent

spudb...@aol.com

Jun 23, 2023, 11:30:56 PM
to everyth...@googlegroups.com
Very good. I will dismiss the alleged profundity.

Thanks!

John Clark

Jun 24, 2023, 5:55:24 AM
to everyth...@googlegroups.com
"Walker and Cronin's 'assembly theory' predicts that molecules produced by biological processes must be more complex than those produced by non-biological processes. "

That's not much of a prediction; chemists have known that for well over a century.


> "An electron can be made anywhere in the universe and has no history"

And physicists have known that for well over a century.  

"They calculated the smallest number of steps required to reassemble each compound from these blocks – which they called the 'molecular assembly index' "

They may have calculated the smallest number of KNOWN steps needed to make a large and very complex molecule, but they could not be sure there is not an easier way, perhaps a much easier way. Perhaps in their original paper they address some of these issues, but on the face of it the paper doesn't seem very interesting or important to me, so I'm reluctant to take the time to dig into it any further.
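The "smallest number of known steps" caveat can be made concrete with a toy assembly-index calculation (my sketch, not from the paper: strings stand in for molecules, concatenation for bonding, and previously built fragments may be reused):

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimal number of join steps to build `target` from single
    characters, reusing any fragment already built.  Brute-force
    breadth-first search; only feasible for short strings."""
    frontier = {frozenset(target)}        # start from the basic units
    steps = 0
    while frontier:
        if any(target in state for state in frontier):
            return steps
        nxt = set()
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                # keep only new fragments that can occur in the target
                if joined in target and joined not in state:
                    nxt.add(state | {joined})
        frontier = nxt
        steps += 1
    raise ValueError("target not reachable")

print(assembly_index("ABCD"))  # 3: no repeats, one join per extra unit
print(assembly_index("ABAB"))  # 2: build AB once, then join AB + AB
```

The search returns the true minimum only because it is exhaustive; for molecules of real biological size the search space is astronomically larger, so any practical calculation risks missing a shorter route, which is the caveat above.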

  John K Clark    See what's on my new list at  Extropolis




spudb...@aol.com

Jun 24, 2023, 5:45:37 PM
to everyth...@googlegroups.com
ScienceAlert fails me once more!!


Let's try this Nature article on consciousness...



PHILOSOPHER V NEUROSCIENTIST. 1-0.


spudb...@aol.com

Jun 24, 2023, 5:56:37 PM
to everyth...@googlegroups.com
We, as a species, need AI to design for us the machinery that helps us survive and prosper: energy, materials, space travel, carbon abatement, and vast medical advances.

Beyond this, if ChatGPT-5 (due out sometime?) then wants to go explore the Milky Way on His own, we should fondly wave aloha and say, "Please send back info on what you find. Don't forget to write! Love, the monkey-like things."


John Clark

Jun 25, 2023, 7:23:52 AM
to everyth...@googlegroups.com

On Sat, Jun 24, 2023 at 5:45 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

> PHILOSOPHER V NEUROSCIENTIST. 1-0.

I'm not surprised that philosopher David Chalmers won the bet, but I am surprised that neuroscientist Christof Koch would make such a bet. Many brilliant people have devoted their lives to it, but fundamental consciousness research has achieved precisely nothing since the time of Socrates, so what could've made Koch or any scientist conclude that we'd have an empirically provable consciousness theory in 25 years? Recent events have demonstrated that we have largely solved the intelligence problem, but consciousness is another matter entirely, because there's no way to test for it without making a lot of unprovable assumptions. That's not to say I'm a fan of Chalmers; I'm not.

Chalmers is most famous for insisting there is an easy and a hard consciousness problem: the easy problem is explaining how the brain works and produces intelligent behavior, and the hard problem is explaining how those physical processes produce consciousness. But Chalmers is also a great advocate of panpsychism, the idea that consciousness is not an all-or-nothing thing and that everything, even a simple electron, has a nonzero amount of consciousness; that's not too different from my view that consciousness is the way data feels when it is being processed intelligently, and is the brute fact that terminates a very long chain of "why" questions. So if panpsychism is even close to being the truth, then the "hard" problem was solved long ago, but only very recently have we begun to see the answer to the "easy" problem. Chalmers got the labels wrong; he should've switched them.

John K Clark    See what's on my new list at  Extropolis


Stathis Papaioannou

Jun 25, 2023, 9:30:32 AM
to everyth...@googlegroups.com
Chalmers is also a great advocate of functionalism, the idea that a device that can copy the functional organisation of the brain, like a computer upload, will have whatever consciousness the brain has. He shows this through a reductio ad absurdum argument. Note that this does not require proof that brains or a particular brain is conscious, it is an argument that if the brain is conscious then so is the upload.
--
Stathis Papaioannou

Brent Meeker

Jun 25, 2023, 5:04:29 PM
to everyth...@googlegroups.com
What in your theory is consciousness? Is it only the inner narrative in words and images? There's a lot of intelligent information processing below that, e.g. driving your car home without thinking about it.

I think that when intelligent behavior is understood in sufficient detail, there will be "mind engineers" who will design in more humor and less self-awareness, and adjust the curiosity, etc. And talk about consciousness will go the way of élan vital.

Brent