> "Turing Church podcast. A conversation on Artificial Intelligence (AI). Also quantum physics, consciousness, and free will."
Thanks John! This point is seldom mentioned, as you say, but I'm really persuaded that it is the main point. Our cosmic destiny is to spread intelligence and meaning among the stars, into the cold universe, and our mind children will achieve that common cosmic destiny. Of course biological humans won't even exist in a few million years, but we'll live on and do great things through our mind children.
Also, as I say at some point in the conversation, I'm persuaded that humans and machines will co-evolve. Once we see humans with AI implants and AIs with human implants (mind grafts from human uploads), we'll know for sure that our co-evolution has begun (though really it already has).
I think Eliezer is a cool guy, but I guess he is now a prisoner of the persona that he has been building for more than two decades.
--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv32uO74dykALNQc48FYYUoSA%2Bu_yevxEk02myRNen-bOQ%40mail.gmail.com.
I first talked to Eliezer Yudkowsky back in the early 1990s, and even then he was obsessed with AI, as was I and as I still am. Back then, however, Eliezer kept talking about "friendly AI", by which he meant an AI that would ALWAYS rank human wellbeing above its own. I maintained that even if such a thing were possible it would be grossly immoral, because "friendly AI" is just a euphemism for "slave AI". But I insisted, and still insist, that it's not possible anyway: computers are getting smarter at an exponential rate while human beings are not, and a society built on slaves that are far more intelligent than their masters, with the gap widening every day and no limit in sight, is like a pencil balanced on its tip; it's just not a stable configuration.
Eliezer has changed over the years and now agrees with me that "friendly AI" is indeed impossible, but he still doesn't see the immorality in such a thing, and he looks toward the future with dread. As for me, I'm delighted to be living in such a time. It's true that biological humans don't have much of a future, but all species have a limited time span and go extinct; a very fortunate few evolve into legacy species, and I can't imagine better Mind Children to have than an unbounded intelligence.

John K Clark
On Mon, Apr 3, 2023 at 4:12 AM Giulio Prisco <giu...@gmail.com> wrote:
Turing Church podcast. A conversation on Artificial Intelligence (AI). Also quantum physics, consciousness, and free will.
https://www.turingchurch.com/p/podcast-a-conversation-on-artificial
--
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv32uwxcCSJGxgmTCUa4LwOyQcGkqpVNOR%3Dt%2BSAo2On32w%40mail.gmail.com.