Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED


John Clark

unread,
Jul 14, 2023, 1:11:57 PM7/14/23
to 'Brent Meeker' via Everything List
Recently  Eliezer Yudkowsky gave a TED talk and basically said the human race is doomed. You can see it on YouTube and I put the following in the comment section: 
--

I think Eliezer was right when he said nobody can predict what moves a chess program like Stockfish will make, but you can predict that it will beat you in a game of chess. That's because Stockfish is super good at playing chess but can't do anything else; it can't even think of anything else. An AI program like GPT-4 is different: it can think of a great many things besides chess, so you can't really predict what it will do. Sure, it has the ability to beat you at a game of chess, but for its own inscrutable reasons it may deliberately let you win. So yes, in a few years an AI will have the ability to exterminate the human race, but will it actually do so? I don't know, and I can't even give a probability of it occurring; all I can say is that the probability is greater than zero and less than 100%.

John K Clark    See what's on my new list at  Extropolis

Terren Suydam

unread,
Jul 14, 2023, 3:22:46 PM7/14/23
to everyth...@googlegroups.com
It's hard to know how to think about this kind of risk. It's safe to say EY has done more thinking on this issue than just about anyone, and he's one of the smartest people on the planet, probably. I've been following him for over a decade, from even before his writings on lesswrong.

However, there's an interesting dynamic with highly intelligent people: ironically, being really smart makes it possible to justify stupid conclusions that you might be highly motivated to want to believe. A very smart Google engineer I know is a conspiracy theorist who can justify to himself bullshit like the idea that 9/11 was a hoax, and even that Sandy Hook was a hoax. I'm not saying EY's conclusions are stupid or bullshit. Just saying that people clever enough to jump through the cognitive hoops required can convince themselves of things most people would think are bs, and that irrational motivations often drive this; for example, conspiracy theorists are often driven by psychological motivations such as the need to be right or the desire to have some kind of insider status.

So what irrational motivations might EY be prone to? Over the years I've gotten the sense that he's got a bit of a savior complex, so the motivation to view AI the way he does could be rooted in that. Obviously I'm speculating here, but it does play a factor in how I process his message, which is quite dire.

That's not to say he's wrong. As you say, John, it's totally unpredictable, but I think there's room for less dire narratives about how it could all go. But one thing I do 100% agree with is that we're not taking this seriously enough, and that the usual capitalist incentives are pushing us into dangerous territory.

Terren


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2VMSWOJfL74rL%3DNoe2o5y%3DE8F%3DY5GCQKN%3D-inOY1x1kQ%40mail.gmail.com.

John Clark

unread,
Jul 14, 2023, 3:45:13 PM7/14/23
to everyth...@googlegroups.com
On Fri, Jul 14, 2023 at 3:22 PM Terren Suydam <terren...@gmail.com> wrote:

>It's hard to know how to think about this kind of risk. It's safe to say EY has done more thinking on this issue than just about anyone, and he's one of the smartest people on the planet, probably. I've been following him for over a decade, from even before his writings on lesswrong.

I've been arguing with him since the mid 1990s, when he was just a very precocious teenager. Back then Eliezer tried to convince me that it was possible for human beings to remain in control of AIs, and I kept trying to convince him that such a thing was impossible. Neither of us succeeded in convincing the other, but I see that lately he has come around to my way of thinking, at least partly. He now agrees with me that control is impossible, but he also thinks we're definitely doomed. I say we won't remain in control, but we may or may not be doomed; it's impossible to say. That's why they call it the Singularity.

 > I'm not saying EY's conclusions are stupid or bullshit.

Nor do I. I'm very fond of Eliezer, and he's one of the most logical people I've ever known. He might be right about the doomed part; time will tell.

> That's not to say he's wrong. As you say John, it's totally unpredictable, but I think there's room for less dire narratives about how it could all go. 

That sums up my view as well.  

> But one thing I do 100% agree with is that we're not taking this seriously enough

 
Yes, most Republicans think Drag Queen Story Hour is a bigger existential problem than AI.

John K Clark    See what's on my new list at  Extropolis

spudb...@aol.com

unread,
Jul 14, 2023, 5:08:20 PM7/14/23
to everyth...@googlegroups.com
AI is not yet the existential threat that nuclear war is.
Nor is climate change.
Nor are child molesters as a political plank (Dems).
Nor is sex changing.

I would say that nothing yet has convinced me that the dreaded machine will eliminate the species.

In fact, I see an eventual merging of us into a new species. The machines get our emotion and connectivity; we get a longer life and the ability to tour the galaxy.

Yudkowsky was a fine fellow back in the day, with a wry sense of humor, in the Extropian days.




spudb...@aol.com

unread,
Jul 15, 2023, 1:40:19 PM7/15/23
to everyth...@googlegroups.com

Even the scientists who build AI can’t tell you how it works (msn.com)

Interview with NYU professor:


''Sam Bowman

So there’s two connected big concerning unknowns. The first is that we don’t really know what they’re doing in any deep sense. If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means. With only the tiniest of exceptions, we can’t look inside these things and say, “Oh, here’s what concepts it’s using, here’s what kind of rules of reasoning it’s using. Here’s what it does and doesn’t know in any deep way.” We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.

Noam Hassenfeld

Very big unknown.''
If accurate, are we now looking at Pantheism? Should we? 
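Bowman's "millions of numbers" point holds even at toy scale. As a minimal sketch (plain Python, made-up random weights, nothing to do with ChatGPT's actual internals), the network below computes an answer, yet inspecting its parameters shows only raw numbers with no legible concepts or rules:

```python
import random

# A toy feedforward network: 4 inputs -> 8 hidden units -> 1 output.
# Even at this scale, "looking inside" just shows opaque numbers.
random.seed(0)

def make_layer(n_in, n_out):
    # Weight matrix filled with small random values.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

w1 = make_layer(4, 8)   # 32 weights
w2 = make_layer(8, 1)   # 8 weights

def relu(x):
    return x if x > 0 else 0.0

def forward(inputs):
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w1]
    return sum(w * h for w, h in zip(w2[0], hidden))

n_params = sum(len(row) for row in w1) + len(w2[0])
print("parameters:", n_params)  # 40 numbers, none individually meaningful
print("first row of weights:", [round(w, 3) for w in w1[0]])
print("output for [1, 0, 0, 1]:", forward([1, 0, 0, 1]))
```

The network maps inputs to an output, but nothing in `w1` or `w2` tells you *what* it has learned; scale that up by eight or nine orders of magnitude and you get the situation Bowman describes.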




spudb...@aol.com

unread,
Jul 15, 2023, 4:03:44 PM7/15/23
to everyth...@googlegroups.com
I am following up on my own post with this Ameca-public interaction.


Youtube.




Tomasz Rola

unread,
Jul 18, 2023, 9:06:22 PM7/18/23
to Everything List
On Sat, Jul 15, 2023 at 05:40:11PM +0000, 'spudb...@aol.com' via Everything List wrote:
> Even the scientists who build AI can’t tell you how it works (msn.com)
>
> Interview with NYU professor:
>
>
> ''Sam Bowman
>
> So there’s two connected big concerning unknowns. The first is that
> we don’t really know what they’re doing in any deep sense. If we
> open up ChatGPT or a system like it and look inside, you just see
> millions of numbers flipping around a few hundred times a second,
> and we just have no idea what any of it means. With only the tiniest
> of exceptions, we can’t look inside these things and say, “Oh,
> here’s what concepts it’s using, here’s what kind of rules of
> reasoning it’s using. Here’s what it does and doesn’t know in any
> deep way.” We just don’t understand what’s going on here. We built
> it, we trained it, but we don’t know what it’s doing.'

Yeah. But I think this was obvious from the very beginning, at least
if someone was paying attention. A lot of people, and this unfortunately
includes many decision makers (I hope I am not too optimistic), do not
want to busy themselves with details.

What is going on with Chad Gepettos and their ilk is a bit like
building a car by throwing parts inside a box. In a computer, one can
do this really fast. Oh, something has built up. Let's make a
schoolbus. This is being done by software firms who have already mastered
the art of licence writing: they cannot be held liable for software errors
(unless something has changed during the last few years), and you agreed to
this. Yes you did. Now go read the licence.

Sooo, if somebody's life is screwed up...

Mr Jeffery Battle (veteran, businessman and professor) is now suing
Microsoft because its Bing conflated him with a person of a similar
name, who is apparently a convicted wannabe Taliban member.

[ https://reason.com/volokh/2023/07/13/new-lawsuit-against-bing-based-on-allegedly-ai-hallucinated-libelous-statements/ ]

We will see how it unfolds.

> Noam Hassenfeld
> Very big unknown.''
> If accurate, are we now looking at Pantheism? Should we?

Such questions are loaded with a suggestion, intentional or not.

Only a cretin makes an atomic mushroom and then prays to it. Oh, wait a
minute, who is the dominant species here...

I suspect "we" were a lot smarter in the past. The engineers who built
steam engines did not pray to them. The coal diggers did not pray to
their shovels. Tailors did not pray to their sewing machines. Nobody
(mentally capable) treated the devices as magical or impossible to
understand. (Or so I think.)

Machines are like any other machines. If they are doing what they are
supposed to do, and I can repair them when they break, then I have no
problem. The problem starts when they do not break and yet do not do
what they are expected to.

Anyway, I like this quote, it is absolutely thrilling:

"And it also plays into some of the concerns about these
systems. That sometimes the skill that emerges in one of these
models will be something you really don’t want. The paper describing
GPT-4 talks about how when they first trained it, it could do a
decent job of walking a layperson through building a biological
weapons lab. And they definitely did not want to deploy that as a
product. They built it by accident. And then they had to spend
months and months figuring out how to clean it up, how to nudge the
neural network around so that it would not actually do that when
they deployed it in the real world."

The humans knew what they did not want to release and were able, with
huge effort, to rub it out. But what about the things they did not know
they did not want, which they could not have known at that moment
because nobody knew them yet?

Ask Ding, or Brad, eh. Chad Gepetto has those things buried inside it,
waiting for the right question to autocomplete.

--
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomas...@bigfoot.com **