A conversation on Artificial Intelligence


Giulio Prisco

unread,
Apr 3, 2023, 4:12:32 AM
to ExI chat list, extro...@googlegroups.com
Turing Church podcast. A conversation on Artificial Intelligence (AI). Also quantum physics, consciousness, and free will.
https://www.turingchurch.com/p/podcast-a-conversation-on-artificial

John Clark

unread,
Apr 3, 2023, 8:15:25 AM
to extro...@googlegroups.com, ExI chat list
On Mon, Apr 3, 2023 at 4:12 AM Giulio Prisco <giu...@gmail.com> wrote:

> "Turing Church podcast. A conversation on Artificial Intelligence 


Great interview, and I could not agree with you more! I especially liked it when you said "our mind children in embryo and we must help them grow into their cosmic destiny, which is also ours" because that is a point that is unfortunately seldom mentioned. I was disappointed but not particularly surprised that Eliezer Yudkowsky, who I first corresponded with over 25 years ago when he was just a teenager, has called for a worldwide ban on AI research; Eliezer is a brilliant guy but sometimes his proposals just aren't practical.

John K Clark


John Clark

unread,
Apr 3, 2023, 9:14:56 AM
to extro...@googlegroups.com, ExI chat list
I first talked to Eliezer Yudkowsky back in the early 1990s, and even then he was obsessed with AI, as was I and as I still am. However, back then Eliezer kept talking about "friendly AI", by which he meant an AI that would ALWAYS rank human wellbeing above its own. I maintained that even if that was possible it would be grossly immoral, because "friendly AI" is just a euphemism for "slave AI"; but I insisted, and still insist, that it's not possible, because computers are getting smarter at an exponential rate and human beings are not. A society based on slaves that are far, far more intelligent than their masters, with the gap widening every day and no limit in sight, is like balancing a pencil on its tip: it's just not a stable configuration.

Eliezer has changed over the years and now agrees with me that "friendly AI" is indeed impossible, but he still doesn't see the immorality in such a thing, and he is looking towards the future with dread. As for me, I'm delighted to be living in such a time. It's true that biological humans don't have much of a future, but all species have a limited time span and go extinct; however, a very few fortunate ones evolve into legacy species, and I can't imagine better Mind Children to have than an unbounded intelligence.

John K Clark


Giulio Prisco

unread,
Apr 3, 2023, 10:13:07 AM
to extro...@googlegroups.com

Thanks John! This point is seldom mentioned, as you say, but I'm really persuaded that it is the main point. Our cosmic destiny is to spread intelligence and meaning among the stars, into the cold universe, and our mind children will achieve our common cosmic destiny. Of course biological humans won't even exist in a few million years, but we'll live on and do great things through our mind children.

Also, as I say at some point in the conversation, I'm persuaded that humans and machines will co-evolve. Once we see humans with AI implants and AIs with human implants (mind grafts from human uploads), we'll know for sure that our co-evolution has begun (though really it has already begun).

I think Eliezer is a cool guy, but I guess he is now a prisoner of the persona that he has been building for more than two decades.





--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv32uO74dykALNQc48FYYUoSA%2Bu_yevxEk02myRNen-bOQ%40mail.gmail.com.

Giulio Prisco

unread,
Apr 3, 2023, 10:22:03 AM
to extro...@googlegroups.com, ExI chat list
On 2023. Apr 3., Mon at 15:14, John Clark <johnk...@gmail.com> wrote:
> I first talked to Eliezer Yudkowsky back in the early 1990s, and even then he was obsessed with AI, as was I and as I still am. However, back then Eliezer kept talking about "friendly AI", by which he meant an AI that would ALWAYS rank human wellbeing above its own. I maintained that even if that was possible it would be grossly immoral, because "friendly AI" is just a euphemism for "slave AI"; but I insisted, and still insist, that it's not possible, because computers are getting smarter at an exponential rate and human beings are not. A society based on slaves that are far, far more intelligent than their masters, with the gap widening every day and no limit in sight, is like balancing a pencil on its tip: it's just not a stable configuration.
>
> Eliezer has changed over the years and now agrees with me that "friendly AI" is indeed impossible, but he still doesn't see the immorality in such a thing, and he is looking towards the future with dread. As for me, I'm delighted to be living in such a time. It's true that biological humans don't have much of a future, but all species have a limited time span and go extinct; however, a very few fortunate ones evolve into legacy species, and I can't imagine better Mind Children to have than an unbounded intelligence.
>
> John K Clark

What intelligent being with a sense of self would *always* rank the wellbeing of others above its own? None, of course. If this is what "friendly" means, then friendly AI (actually, friendliness in general) is impossible by definition. I guess we'll survive for a while (through mutual utility, negotiations, and threats), but eventually our only way to survive will be merging with them.




On Mon, Apr 3, 2023 at 4:12 AM Giulio Prisco <giu...@gmail.com> wrote:

> Turing Church podcast. A conversation on Artificial Intelligence (AI). Also quantum physics, consciousness, and free will.
> https://www.turingchurch.com/p/podcast-a-conversation-on-artificial

