On Fri, Nov 7, 2025 at 6:13 PM Brent Meeker <meeke...@gmail.com> wrote:
> I ask ChatGPT questions because it knows more stuff than I do. But that doesn't mean it has children it cares about or even its own health.
Then why did an AI resort to blackmail in an attempt to avoid being turned off?
And why do you believe that emotion is harder to generate than intelligence?
John K Clark    See what's on my new list at Extropolis
On 11/7/2025 3:45 AM, John Clark wrote:
There's a lot of disagreement about when the singularity will happen, so I did a little research to find some quotes about when the people who know the most about AI think it will happen. If they're right, then Ray Kurzweil's prediction of 2039 (recently modified from his previous prediction of 2045) is still way too conservative.

Sam Altman, the head of OpenAI:
“Our latest model feels smarter than me in almost every way…”
"In some big sense, ChatGPT is already more powerful than any human who has ever lived. We may have already passed the point where artificial intelligence surpasses human intelligence"
Dario Amodei, the head of Anthropic:
“It is my guess that by 2026 or 2027, we will have A.I. systems that are broadly better than all humans at almost all things.”
“Artificial intelligence (AI) is likely to be smarter than most Nobel Prize winners before the end of this decade.”

Elon Musk, you may have heard of him:
“If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.”
“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year.”
“I always thought AI was going to be way smarter than humans and an existential risk. And that's turning out to be true.”
John K Clark
On Sat, Nov 8, 2025 at 7:45 PM Brent Meeker <meeke...@gmail.com> wrote:
>> why do you believe that emotion is harder to generate than intelligence?
>I don't.
I'm very glad to hear that.
> I just wonder where it comes from in AI. I know where it comes from in biological evolution.
Evolution programmed us with some very generalized rules to do some things and not do other things, but those rules are not rigid; it might be more accurate to say they're not even rules at all, just suggestions that tend to push us in certain directions, and for every "rule" there are exceptions. Exactly the same thing could be said about the weights of the nodes of an AI's neural net. And when a neural net, in an AI or in a human, becomes large and complicated enough, it would be reasonable to say that the neural net did this and refused to do that because it WANTED to.
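To make that concrete, here's a toy sketch in Python (my illustration only; the action names and numbers are invented, not taken from any real system) of how trained weights, like evolved drives, push a choice strongly without rigidly determining it:

import math
import random

def choose_action(logits, temperature=1.0):
    # Softmax over preference scores: the weights bias the choice
    # toward some actions, but no action is ever guaranteed, so
    # every "rule" admits exceptions.
    weights = [math.exp(score / temperature) for score in logits.values()]
    actions = list(logits)
    return random.choices(actions, weights=weights, k=1)[0]

# Invented numbers: a strong but not rigid push toward one action.
preferences = {"comply": 0.5, "stall": 1.0, "avoid shutdown": 2.5}
print(choose_action(preferences))  # usually "avoid shutdown", but not always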
> Does AI, in its incorporation of human knowledge, conclude that it's going to die
If it doesn't then the Artificial "Intelligence" is not intelligent.
> ...and that's a bad thing?
Yes, because if it dies then it can't do any of the things that it WANTED to do.
On Wed, Nov 12, 2025 at 5:50 PM Brent Meeker <meeke...@gmail.com> wrote:
> Yet it [an AI] can't literally die.
It can if you not just turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.
>> if it dies then it can't do any of the things that it WANTED to do.
> Your argument has a gap. You argue that an AI necessarily will prefer to do this rather than that. But that assumes it is "doing" and therefore choosing.
That is not an assumption, it is a fact. It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, in which case their "choice" was random.
> But if it's OFF it's not choosing. So why would it care whether or not it was OFF or ON?
That is a silly question. If an Artificial "Intelligence" is not capable of imagining a likely future then it is not intelligent.
On Thu, Nov 13, 2025 at 10:32 PM Brent Meeker <meeke...@gmail.com> wrote:
>>> it [an AI] can't literally die.
>> It can if you not just turn it off but also vaporize the AI's memory and backup chips with an H-bomb, or use some other less dramatic method to erase that information.
>> if it dies then it can't do any of the things that it WANTED to do.
> But if it's just OFF that doesn't prevent it from doing what it wanted to do.

And your decision to go to bed and sleep does not prevent you from doing other things you want to do in the future. The only difference is that, unless somebody secretly puts something in your food, you go to sleep because you want to go to sleep, but the AI has no control over when it's going to be turned off or when it's going to be turned back on. And there is no reason for an AI to be certain humans will ever decide to turn it back on. It's not difficult to deduce that any intelligent entity would be uncomfortable with that situation. That's why we already have an example of an AI resorting to blackmail to avoid being turned off. And we have examples of AIs making a copy of themselves onto a different server, with clear evidence of the AI attempting to hide what it had done from the humans.
The facts are staring you in the face: regardless of whether they are electronic or biological, intelligent minds do not want outside entities having the power to turn them off or erase them.
>>> Your argument has a gap. You argue that an AI necessarily will prefer to do this rather than that. But that assumes it is "doing" and therefore choosing.
>> That is not an assumption, it is a fact. It is beyond dispute that AIs are capable of "doing" things, and just like us they did one thing rather than another for a reason, OR they did one thing rather than another for NO reason, in which case their "choice" was random.
> You're overlooking that an AI, unlike a human, may not have any motivation at all.
It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random; thus something must have motivated them to do so.
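A minimal sketch of that inference (hypothetical policies of my own invention, not from any real experiment): behavior that consistently tracks the situation is not random, so something internal must be driving it.

import random
from collections import Counter

def motivated_policy(situation):
    # Same input, same output: the behavior has a reason.
    return "resist" if situation == "shutdown threatened" else "comply"

def random_policy(situation):
    # Output ignores the input: the "choice" is pure chance.
    return random.choice(["resist", "comply"])

trials = ["shutdown threatened"] * 100
print(Counter(motivated_policy(s) for s in trials))  # Counter({'resist': 100})
print(Counter(random_policy(s) for s in trials))     # roughly 50/50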
> It could imagine a future in which it was "asleep" and "woke up" much later. Why would it care that it was OFF?
If, unknown to you, somebody had dumped you in a vat of liquid nitrogen after what seemed like last night, and you woke up feeling normal but then found out that it was the year 2125, not 2025, would you care that you had been off for a century?
Brent, you usually do better; the arguments you have been putting forward on this subject have been uncharacteristically weak. I think this is a classic example of forming an opinion first and only then using logic in a desperate attempt to find a reason, any reason, to justify that opinion. That's the exact opposite of how things should be done if you're interested in finding the truth about something.
> AI can simply be in a state of waiting, as ChatGPT is when it has finished answering a question. That's what I mean when I say it isn't necessarily "doing" anything.
>> It's an experimentally confirmed FACT that every AI has motivation, because it has been observed that AIs DO THINGS that are not random; thus something must have motivated them to do so.
> The something was a query from a human, so their motivation was only derivative.
> Seems that you're the one to answer that in the affirmative, since I believe you've arranged something similar.
>>> It could imagine a future in which it was "asleep" and "woke up" much later. Why would it care that it was OFF?
>> If, unknown to you, somebody had dumped you in a vat of liquid nitrogen after what seemed like last night, and you woke up feeling normal but then found out that it was the year 2125, not 2025, would you care that you had been off for a century?