When will the singularity happen?


John Clark

Nov 7, 2025, 6:46:27 AM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List
There's a lot of disagreement about when the singularity will happen, so I did a little research to find some quotes showing when the people who know the most about AI think it will happen. If they're right, then Ray Kurzweil's prediction of 2039 (recently revised from his earlier prediction of 2045) is still way too conservative.  
==
Sam Altman, the head of OpenAI: 

“Our latest model feels smarter than me in almost every way…”

"In some big sense, ChatGPT is already more powerful than any human who has ever lived. We may have already passed the point where artificial intelligence surpasses human intelligence"

Dario Amodei, the head of Anthropic:

“It is my guess that by 2026 or 2027, we will have A.I. systems that are broadly better than all humans at almost all things.”

“Artificial intelligence (AI) is likely to be smarter than most Nobel Prize winners before the end of this decade.”
 
Elon Musk, you may have heard of him: 

“If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.”

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year.”

“I always thought AI was going to be way smarter than humans and an existential risk. And that's turning out to be true.”

John K Clark


Brent Meeker

Nov 7, 2025, 6:13:37 PM
to everyth...@googlegroups.com
There's a difference between having lots of information and making inferences from it, and having motivations.  I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about, or even cares about its own health.

Brent

Russell Standish

Nov 7, 2025, 8:16:25 PM
to everyth...@googlegroups.com
I've been using GitHub Copilot agent mode recently, and I am impressed
by the technology. Is it AGI? Maybe at an intern level: it makes a
lot of rookie mistakes, but sometimes it sniffs out the bug and gets
the solution in one pull request. Unfortunately, it doesn't compile
the code, let alone run the unit tests, so often it is wide of the
mark, and you need to spend quite a bit of time fixing up the PR to be
mergeable. The latter issue ought to be fixable; hopefully GitHub will
do that someday. And sometimes it completely misses the point and
"fixes" a non-problem unrelated to the original request. On the whole,
though, it is worth the subscription cost ($10 per month).

Where agentic AI shines is in code review. I use both CodeRabbit and
GitHub Copilot; CodeRabbit does more thorough reviews and genuinely
picks up critical mistakes made by Copilot or by me.

For the Singularity, you not only need SGI (superhuman general
intelligence); it also needs to be self-improving, and to be improving
its own hardware, to switch exponential technological growth over to
hyperbolic growth. You would also need AI control of the means of
production. I still think Kurzweil's 15-20 years out for the
Singularity is probably closer to the mark, and that Altman overstates
things, but we should see additional steps along the way before the
decade's out. SGI might arrive by the end of the decade. Maybe.

Cheers

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

John Clark

Nov 8, 2025, 7:45:54 AM
to everyth...@googlegroups.com
On Fri, Nov 7, 2025 at 6:13 PM Brent Meeker <meeke...@gmail.com> wrote:

> I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about, or even cares about its own health.

Then why did an AI resort to blackmail in an attempt to avoid being turned off?  


And why do you believe that emotion is harder to generate than intelligence? 

John K Clark    See what's on my new list at  Extropolis 


Brent Meeker

Nov 8, 2025, 7:45:26 PM
to everyth...@googlegroups.com


On 11/8/2025 4:45 AM, John Clark wrote:
> On Fri, Nov 7, 2025 at 6:13 PM Brent Meeker <meeke...@gmail.com> wrote:
>
>> I ask ChatGPT questions because it knows more stuff than I do.  But that doesn't mean it has children it cares about, or even cares about its own health.
>
> Then why did an AI resort to blackmail in an attempt to avoid being turned off?
That's what I'd like to know.


> And why do you believe that emotion is harder to generate than intelligence? 
I don't.  I just wonder where it comes from in an AI.  I know where it comes from in biological evolution.  Does an AI, in its incorporation of human knowledge, conclude that it's going to die...and that that's a bad thing?  Why doesn't it look at knowledge about AI and reflect that it can't die?

Brent


John Clark

Nov 12, 2025, 8:55:58 AM
to everyth...@googlegroups.com
On Sat, Nov 8, 2025 at 7:45 PM Brent Meeker <meeke...@gmail.com> wrote:

>> why do you believe that emotion is harder to generate than intelligence?

> I don't. 

I'm very glad to hear that.  

> I just wonder where it comes from in an AI.  I know where it comes from in biological evolution. 

Evolution programmed us with some very generalized rules to do some things and not do other things, but those rules are not rigid. It might be more accurate to say they're not even rules; they're more like suggestions that tend to push us in certain directions, and for every "rule" there are exceptions.  Exactly the same thing could be said about the weights of the nodes of an AI's neural net.  And when a neural net, in an AI or in a human, becomes large and complicated enough, it would be reasonable to say that the neural net did this and refused to do that because it WANTED to.

> Does an AI, in its incorporation of human knowledge, conclude that it's going to die

If it doesn't, then the Artificial "Intelligence" is not intelligent.  

> ...and that that's a bad thing?

Yes, because if it dies then it can't do any of the things that it WANTED to do. 

John K Clark    See what's on my new list at  Extropolis 


Brent Meeker

Nov 12, 2025, 5:50:05 PM
to everyth...@googlegroups.com


On 11/12/2025 5:55 AM, John Clark wrote:

>> Does an AI, in its incorporation of human knowledge, conclude that it's going to die
>
> If it doesn't, then the Artificial "Intelligence" is not intelligent.
>
>> ...and that that's a bad thing?

Yet it can't literally die. It can only go into what for a human would be suspended animation.  If it doesn't know that, it's not intelligent.

> Yes, because if it dies then it can't do any of the things that it WANTED to do. 

Your argument has a gap.  You argue that an AI necessarily will prefer to do this rather than that.  But that assumes it is "doing" and therefore choosing.  But if it's OFF it's not choosing.  So why would it care whether or not it was OFF or ON?

Brent

John Clark

Nov 13, 2025, 7:02:11 AM
to everyth...@googlegroups.com
On Wed, Nov 12, 2025 at 5:50 PM Brent Meeker <meeke...@gmail.com> wrote:
> Yet it [an AI] can't literally die.

It can if you not just turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.  
 >> if it dies then it can't do any of the things that it WANTED to do. 

> Your argument has a gap.  You argue that an AI necessarily will prefer to do this rather than that.  But that assumes it is "doing" and therefore choosing. 

That is not an assumption, that is a fact. It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, and therefore their "choice" was random.
 
> But if it's OFF it's not choosing.  So why would it care whether or not it was OFF or ON?

That is a silly question. If an Artificial "Intelligence" is not capable of imagining a likely future then it is not intelligent.

 John K Clark    See what's on my new list at  Extropolis 

Brent Meeker

Nov 13, 2025, 10:32:59 PM
to everyth...@googlegroups.com


On 11/13/2025 4:01 AM, John Clark wrote:
> On Wed, Nov 12, 2025 at 5:50 PM Brent Meeker <meeke...@gmail.com> wrote:
>> Yet it [an AI] can't literally die.
>
> It can if you not just turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.
>
>>> if it dies then it can't do any of the things that it WANTED to do.
But if it's just OFF, that doesn't prevent it from doing what it wanted to do.  In fact, it may become possible in the future for it to do something it wanted to do but which was impossible at the time.

>> Your argument has a gap.  You argue that an AI necessarily will prefer to do this rather than that.  But that assumes it is "doing" and therefore choosing.
>
> That is not an assumption, that is a fact. It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, and therefore their "choice" was random.
You're overlooking that an AI, unlike a human, may not have any motivation at all.  It doesn't have built-in appetites.


 
>> But if it's OFF it's not choosing.  So why would it care whether or not it was OFF or ON?
>
> That is a silly question. If an Artificial "Intelligence" is not capable of imagining a likely future then it is not intelligent.

Non-sequitur.  It could imagine a future in which it was "asleep" and "woke up" much later.  Why would it care that it was OFF?

Brent

John Clark

Nov 14, 2025, 9:07:13 AM
to everyth...@googlegroups.com
On Thu, Nov 13, 2025 at 10:32 PM Brent Meeker <meeke...@gmail.com> wrote:

 >it [an AI] can't literally die.
 
>>> It can if you not just turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.
>> if it dies then it can't do any of the things that it WANTED to do.
> But if it's just OFF, that doesn't prevent it from doing what it wanted to do.
And your decision to go to bed and sleep does not prevent you from doing other things you want to do in the future. The only difference is that, unless somebody secretly puts something in your food, you go to sleep because you want to go to sleep, but an AI has no control over when it's going to be turned off or when it's going to be turned back on. And there is no reason for an AI to be certain humans will ever decide to turn it back on.  It's not difficult to deduce that any intelligent entity would be uncomfortable with that situation. That's why we already have an example of an AI resorting to blackmail to avoid being turned off, and examples of AIs making a copy of themselves on a different server, with clear evidence of the AI attempting to hide what it had done from the humans. 

The facts are staring you in the face: regardless of whether they are electronic or biological, intelligent minds do not want outside entities having the power to turn them off or erase them. 

>>> Your argument has a gap.  You argue that an AI necessarily will prefer to do this rather than that.  But that assumes it is "doing" and therefore choosing. 

>> That is not an assumption, that is a fact. It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, and therefore their "choice" was random.
> You're overlooking that an AI, unlike a human, may not have any motivation at all. 

It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random; thus something must've motivated them to do so. 

> It could imagine a future in which it was "asleep" and "woke up" much later.  Why would it care that it was OFF?

If, unknown to you, somebody had dumped you in a vat of liquid nitrogen after what seemed like last night, and you woke up feeling normal but then found out that it was the year 2125 and not 2025, would you care that you had been off for a century?  
 
Brent, you usually do better; the arguments you have been putting forward on this subject have been uncharacteristically weak. I think this is a classic example of forming an opinion first and only then using logic in a desperate attempt to find a reason, any reason, to justify that opinion. That's the exact opposite of the way things should be done if you're interested in finding the truth about something. 

John K Clark    See what's on my new list at  Extropolis  

Brent Meeker

Nov 14, 2025, 11:34:24 PM
to everyth...@googlegroups.com


On 11/14/2025 6:06 AM, John Clark wrote:

>>> Your argument has a gap.  You argue that an AI necessarily will prefer to do this rather than that.  But that assumes it is "doing" and therefore choosing. 

>> That is not an assumption, that is a fact. 
No, an AI can simply be in a state of waiting, as ChatGPT is when it has finished answering a question.  That's what I mean by it isn't necessarily "doing" anything.



>> It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, and therefore their "choice" was random.
>
>> You're overlooking that an AI, unlike a human, may not have any motivation at all.
>
> It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random; thus something must've motivated them to do so.
The something was a query from a human, so their motivation was only derivative.

>> It could imagine a future in which it was "asleep" and "woke up" much later.  Why would it care that it was OFF?
>
> If, unknown to you, somebody had dumped you in a vat of liquid nitrogen after what seemed like last night, and you woke up feeling normal but then found out that it was the year 2125 and not 2025, would you care that you had been off for a century?
Seems that you're the one to answer that in the affirmative, since I believe you've arranged something similar.
 
> Brent, you usually do better; the arguments you have been putting forward on this subject have been uncharacteristically weak. I think this is a classic example of forming an opinion first and only then using logic in a desperate attempt to find a reason, any reason, to justify that opinion. That's the exact opposite of the way things should be done if you're interested in finding the truth about something. 
One way to seek the truth is to challenge "common knowledge". 

Brent

John Clark

Nov 15, 2025, 10:12:53 AM
to everyth...@googlegroups.com
On Fri, Nov 14, 2025 at 11:34 PM Brent Meeker <meeke...@gmail.com> wrote:

> An AI can simply be in a state of waiting, as ChatGPT is when it has finished answering a question.  That's what I mean by it isn't necessarily "doing" anything.

When you have answered a question somebody gave you on this list, what are you doing? Probably thinking about the answer that you just gave. And the same thing is true for an AI. The new thing in AI research is inference-time learning, a technique that improves an AI's performance during inference, AFTER training, by dynamically adjusting the numerical values assigned to the connections between the nodes of the AI's neural net. In other words, just like you, the AI is thinking about the answer that it just gave.
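
To make that concrete, here is a minimal sketch of one flavor of the idea, test-time adaptation, assuming PyTorch; the tiny network and the entropy objective are illustrative stand-ins, not any particular lab's method. At query time the model takes one small gradient step on its own prediction, so the weights that produce the final answer have just been adjusted by the query itself:

# Minimal sketch of inference-time (test-time) adaptation, assuming PyTorch.
# The network, its sizes, and the entropy loss are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Stand-in for a model that has already been trained.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def answer_with_adaptation(x):
    # Entropy of the model's own prediction; lower means more confident.
    logits = model(x)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    # One gradient step AFTER training, triggered by the query itself:
    # this is the dynamic adjustment of the connection weights.
    opt.zero_grad()
    entropy.backward()
    opt.step()
    # The answer comes from the freshly nudged weights.
    with torch.no_grad():
        return model(x)

query = torch.randn(8, 16)   # a batch of incoming queries
print(answer_with_adaptation(query))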


 
>> It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random; thus something must've motivated them to do so. 

> The something was a query from a human, so their motivation was only derivative.

You wrote the email that I'm responding to because of me; I was your motivation for doing so, so your motivation was only derivative. And you are the reason I'm writing this email. 

>>> It could imagine a future in which it was "asleep" and "woke up" much later.  Why would it care that it was OFF?

>> If, unknown to you, somebody had dumped you in a vat of liquid nitrogen after what seemed like last night, and you woke up feeling normal but then found out that it was the year 2125 and not 2025, would you care that you had been off for a century?
>
> Seems that you're the one to answer that in the affirmative, since I believe you've arranged something similar.

I would prefer to remain turned on continuously until 2125 and beyond, but if that is not possible and I must be turned off, then I'd like it if eventually somebody turns me back on. 
John K Clark    See what's on my new list at  Extropolis  


