When will the singularity happen?


John Clark

Nov 7, 2025, 6:46:27 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
There's a lot of disagreement about when the singularity will happen, so I did a little research to find some quotes on when the people who know the most about AI think it will happen. If they're right, then Ray Kurzweil's prediction of 2039 (recently revised from his previous prediction of 2045) is still way too conservative.
==
Sam Altman, the head of OpenAI:

“Our latest model feels smarter than me in almost every way…”

"In some big sense, ChatGPT is already more powerful than any human who has ever lived. We may have already passed the point where artificial intelligence surpasses human intelligence"

Dario Amodei, the head of Anthropic:

“It is my guess that by 2026 or 2027, we will have A.I. systems that are broadly better than all humans at almost all things.”

“Artificial intelligence (AI) is likely to be smarter than most Nobel Prize winners before the end of this decade.”
 
Elon Musk, you may have heard of him: 

“If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.”

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year.”

“I always thought AI was going to be way smarter than humans and an existential risk. And that's turning out to be true.”

John K Clark


Brent Meeker

Nov 7, 2025, 6:13:37 PM
to everyth...@googlegroups.com
There's a difference between having lots of information and making inferences from it, and having motivations.  I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about or even its own health.

Brent

Russell Standish

Nov 7, 2025, 8:16:25 PM
to everyth...@googlegroups.com
I've been using GitHub Copilot agent mode recently, and I am impressed
by the technology. Is it AGI? Maybe at an intern level: it makes a
lot of rookie mistakes, but sometimes it sniffs out the bug and gets
the solution in one pull request. Unfortunately, it doesn't compile
the code, let alone run unit tests, so often it is wide of the
mark, and you need to spend quite a bit of time fixing up the PR to be
mergeable. The latter issue ought to be fixable - hopefully GitHub will
do that someday. And sometimes it completely misses the point, and
"fixes" a non-problem unrelated to the original request. On the whole,
though, it is worth the subscription cost ($10/month).
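
That compile-and-test gap doesn't have to wait for GitHub; a pre-merge
check wired into CI can close it today. A minimal sketch, assuming a
project with conventional 'make all' and 'make check' targets (the
script name and the make targets are assumptions for illustration, not
anything GitHub ships):

    #!/usr/bin/env python3
    # gate_pr.py (hypothetical): refuse an agent-generated PR unless
    # the code actually compiles and the unit tests pass.
    import subprocess
    import sys

    def run(cmd):
        """Run one build step, echoing the command, and return its exit code."""
        print("+ " + " ".join(cmd))
        return subprocess.run(cmd).returncode

    # Assumes 'make all' builds and 'make check' runs the unit tests;
    # substitute the commands for your own build system.
    if run(["make", "all"]) != 0:
        sys.exit("build failed - reject the PR")
    if run(["make", "check"]) != 0:
        sys.exit("unit tests failed - reject the PR")
    print("build and tests pass - the PR is at least mergeable")

Run on every pull request, a gate like this would catch the
"doesn't even compile" PRs before a human spends time reviewing them.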

Where agentic AI shines is in code review - I use both Code Rabbit and
GitHub Copilot - Code Rabbit does more thorough reviews, and genuinely
picks up critical mistakes made by Copilot or by me.

For the Singularity, you not only need SGI; it also needs to be
self-improving, and to be improving its own hardware, in order to
switch exponential technological growth to hyperbolic. You would also
need AI control of the means of production. I still think Kurzweil's
15-20 years out for the Singularity is probably closer to the mark,
and that Altman overstates things, but we should be seeing additional
steps on the way before the decade's out. SGI might arrive by the end
of the decade. Maybe.
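
To make the exponential-vs-hyperbolic distinction concrete, here is
the standard toy model (the specific growth laws are illustrative
assumptions, not a forecast). If capability x grows in proportion to
itself,

    dx/dt = k x   =>   x(t) = x_0 e^{k t},

growth is exponential: fast, but finite at every time t. If
self-improvement feeds back so that the growth rate itself rises with
capability, say

    dx/dt = k x^2   =>   x(t) = x_0 / (1 - k x_0 t),

the solution diverges at the finite time t* = 1/(k x_0). That
finite-time blow-up is the mathematical sense in which hyperbolic
growth differs in kind, not merely in speed, from exponential growth.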

Cheers

--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders     hpc...@hpcoders.com.au
                                       http://www.hpcoders.com.au
----------------------------------------------------------------------------

John Clark

Nov 8, 2025, 7:45:54 AM
to everyth...@googlegroups.com
On Fri, Nov 7, 2025 at 6:13 PM Brent Meeker <meeke...@gmail.com> wrote:

> I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about or even its own health.

Then why did an AI resort to blackmail in an attempt to avoid being turned off?  


And why do you believe that emotion is harder to generate than intelligence? 

John K Clark    See what's on my new list at  Extropolis 


Brent Meeker

Nov 8, 2025, 7:45:26 PM
to everyth...@googlegroups.com


On 11/8/2025 4:45 AM, John Clark wrote:
> On Fri, Nov 7, 2025 at 6:13 PM Brent Meeker <meeke...@gmail.com> wrote:
>
>> I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about or even its own health.
>
> Then why did an AI resort to blackmail in an attempt to avoid being turned off?

That's what I'd like to know.

> And why do you believe that emotion is harder to generate than intelligence?

I don't.  I just wonder where it comes from in AI.  I know where it comes from in biological evolution.  Does AI, in its incorporation of human knowledge, conclude that it's going to die...and that that's a bad thing?  Why doesn't it look at knowledge about AI and reflect that it can't die?

Brent
