Fwd: Should The Future Be Human?


John Clark

Jan 23, 2024, 5:06:36 AM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List


---------- Forwarded message ---------
From: Astral Codex Ten <astralc...@substack.com>
Date: Mon, Jan 22, 2024 at 11:49 PM
Subject: Should The Future Be Human?
To: <johnk...@gmail.com>



Should The Future Be Human?

Machine Alignment Monday 1/22/24

Jan 23
 

I.

Business Insider: Larry Page Once Called Elon Musk A “Specieist”:

Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely about the dangers of AI it apparently ended their friendship.

At Musk's 44th birthday celebration in 2015, Page accused Musk of being a "specieist" who preferred humans over future digital life forms [...] Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."

A month later, Business Insider returned to the same question, from a different angle: Effective Accelerationists Don’t Care If Humans Are Replaced By AI:

A jargon-filled website spreading the gospel of Effective Accelerationism describes "technocapitalistic progress" as inevitable, lauding e/acc proponents as builders who are "making the future happen […] Rather than fear, we have faith in the adaptation process and wish to accelerate this to the asymptotic limit: the technocapital singularity," the site reads. "We have no affinity for biological humans or even the human mind structure.”

I originally thought there was an unbridgeable value gap between Page and e/acc on one side and Musk and EA on the other. But I can imagine stories that would put me on either side. For example:

The Optimistic Story

Future AIs are a lot like humans, only smarter. Maybe they resemble Asimov’s robots, or R2-D2 from Star Wars. Their hopes and dreams are different from ours, but still recognizable as hopes and dreams.

For a while, AIs and humans live together peacefully. Some merge into new forms of cyborg life. Finally, the AIs and cyborgs set off to colonize the galaxy, while dumb fragile humans mostly don’t. Either the humans stick around on Earth, or they die out (maybe because sexbots were more fun than real relationships).

The cyborg/robot confederacy that takes over the galaxy remembers its human forebears fondly, but does its own thing. Its art is not necessarily comprehensible to us, any more than James Joyce’s Ulysses would be comprehensible to a caveman - but it is still art, and beautiful in its own way. The scientific and philosophical questions it discusses are too far beyond us to make sense, but they are still scientific and philosophical questions. There are political squabbles between different AI factions, monuments to the great robots of ages past, and gleaming factories making new technologies we can barely imagine.

The Pessimistic Story

A paperclip maximizer kills all humans, then turns the rest of the galaxy into paperclips. It isn’t “conscious”. It may delegate some tasks to subroutines or have multiple “centers” to handle speed-of-light delay, but the subroutines / centers are also non-conscious paperclip maximizers. It doesn’t produce art. It doesn’t do scientific research, except insofar as this helps it build better paperclip-maximizing technology. It doesn’t care about philosophy. It doesn’t build monuments. It’s not even meaningful to talk about it having factories, since it exists primarily as a rapidly-expanding cloud of nanobots. It erases all records of human history, because those are made of atoms that can be turned into paperclips. The end.

(for a less extreme version of this, see my post on the Ascended Economy)

I think the default outcome is somewhere in between these two stories, but whether I think of it as “catastrophic” or “basically fine” depends on the exact contours of where it resembles each.

Here are some things I hope Larry Page and the e/accs are thinking about:

Consciousness

I know this is fuzzy and mystical-sounding, but it really does feel like a loss if consciousness is erased from the universe forever, maybe a total loss. If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are. If we’re not lucky, consciousness might be associated with only a tiny subset of useful information processing regimes (cf. Peter Watts’s Blindsight). Consciousness seems very closely linked to brain waves in humans; existing AIs have nothing even remotely resembling these, and it’s not clear that they’re useful for anything based on deep learning.

Individuation

I would be more willing to accept AIs as a successor to humans if there were clearly multiple distinct individuals. Modern AI seems on track to succeed at this - there are millions of instances of eg GPT. But it’s not obvious that this is the right way to coordinate an AI society, or that a bunch of GPTs working together would be more like a nation than a hive mind.

Art, Science, Philosophy, and Curiosity

Some of these things are emergent from any goal. Even a paperclip maximizer will want to study physics, if only to create better paperclip-maximization machines. Others aren’t. If art, music, etc. come mostly from signaling drives, AIs with a different relationship to individuality than humans might not have these. Music in particular seems to be a spandrel of other design decisions in the human brain. All of these might be selected out of any AI that was ruthlessly optimized for a specific goal.

Will AIs And Humans Merge?

This is the one where I feel most confident in my answer, which is: not by default.

In millennia of invention, humans have never before merged with their tools. We haven’t merged with swords, guns, cars, or laptops. This isn’t just about lacking the technology to do so - surgeons could implant swords and guns in people’s arms if they wanted to. It’s just a terrible idea.

AI is even harder to merge with than normal tools, because the brain is very complicated. And “merge with AI” is a much harder task than just “create a brain-computer interface”. A brain-computer interface is where you have a calculator in your head and can think “add 7 + 5” and it will do that for you. But that’s not much better than having the calculator in your hand. Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all.

Finally, an AI + human Franken-entity would soon become worse than AIs alone. At least, this is how things worked in chess. For about ten years after Deep Blue beat Kasparov, “teams” of human grandmasters and chess engines could beat chess engines alone. But this is no longer true - the human no longer adds anything. There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either - but realistically, once we’re far enough into the future that AI/human mergers are possible at all, that window will already be closed.

In the very far future, after AIs have already solved the technical problems involved, some eccentric rich people might try to merge with AI. But this won’t create a new master race; it will just make them slightly less far behind the AIs than everyone else.

II.

Even if all of these end up going as well as possible - the AIs are provably conscious, exist as individuals, care about art and philosophy, etc - there’s still a residual core of resistance that bothers me. It goes something like:

Imagine that scientists detect a massive alien fleet heading towards Earth. We intercept and translate some of their communications (don’t ask how) and find they plan to kill all humans and take Earth’s resources for themselves.

Although the aliens are technologically beyond us, science fiction suggests some clever strategies for defeating them - maybe microbes like War of the Worlds, or computer viruses like Independence Day. If we can pull together a miracle like this, should we use it?

Here I bet even Larry Page would support Team Human. But why? The aliens are more advanced than us. They’re presumably conscious, individuated, and have hopes and dreams like ourselves. Still, humans uber alles.

Is this specieist? I don’t know - is it racist to not want English colonists to wipe out Native Americans? Would a Native American who expressed that preference be racist? That would be a really strange way to use that term!

I think rights trump concerns like these - not fuzzy “human rights”, but the basic rights of life, liberty, and property. If the aliens want to kill humanity, then they’re not as superior to us as they think, and we should want to stop them. Likewise, I would be most willing to accept being replaced by AI if it didn’t want to replace us by force.

III.

Maybe the future should be human, and maybe it shouldn’t. But the kind of AIs that I’d be comfortable ceding the future to won’t appear by default. And the kind of work it takes to make a successor species we can be proud of is the same kind of work it takes to trust that successor species to make decisions about the final fate of humanity. We should do that work instead of blithely assuming that we’ll get a kind of AI we like.


Brent Meeker

Jan 23, 2024, 3:38:36 PM
to everyth...@googlegroups.com
Who wrote this? You, JC?

It takes the question to be binary. When humans came to the Americas, they didn't kill all the monkeys. AIs can merge perfectly well with humans. Imagine having an actually intelligent ChatGPT wired into your brain advising and influencing your decisions, like your loins do now. I see the future as a mix of humans, augmented humans, and pure AIs.

Brent

John Clark

Jan 23, 2024, 3:53:24 PM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 3:38 PM Brent Meeker <meeke...@gmail.com> wrote:

Who wrote this?  you, JC?

No, Scott Alexander did. He's a pretty smart guy, but I think he got some things wrong. I did write this in the comments section:

"You say "If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are" and I agree with you about that because there is evidence that it is true. I know for a fact that random mutation and natural selection managed to produce consciousness at least once (me) and probably many billions of times, but Evolution can't directly detect consciousness any better than I can, except in myself, and it can't select for something it can't see, but evolution can detect intelligent behavior. I could not function if I really believed that solipsism was true, therefore I must take it as an axiom, as a brute fact, that consciousness is the way data feels when it is being processed intelligently.

You also say "consciousness seems very closely linked to brain waves in humans" but how was that fact determined? It was observed that when people behave intelligently their brain waves take a certain form and when they don't behave intelligently the brain waves are different than that. I'm sure you don't think that other people are conscious when they are sleeping or under anesthesia or dead because when they are in those conditions they are not behaving very intelligently.

As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal they can never change, but such a thing is impossible. In the 1930s Kurt Gödel proved that there are some things that are true but have no proof, and Alan Turing proved that there is no way to know for certain if a given task is even possible. For example, is it possible to prove or disprove that every even number greater than two is the sum of two prime numbers? Nobody knows. If an intelligent being was able to have goals that could never change it would soon be caught in an infinite loop, because sooner or later it would attempt a task that was impossible; that's why Evolution invented the very important emotion of boredom. Certainly human beings don't have fixed goals, not even the goal of self preservation, and I don't see how an AI could either."


John K Clark
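
A minimal Python sketch can make Clark's Goldbach example concrete (illustrative only; the names and structure below are not from the thread). The agent's fixed goal is to find a counterexample to the conjecture; whether the loop ever halts is exactly the open question he cites, and Turing's result implies there is no general procedure for deciding such questions in advance.

# Sketch of a fixed-goal agent: "find an even number > 2 that is NOT
# the sum of two primes". If Goldbach's conjecture is true, this loop
# never terminates, and nobody can currently prove it either way.

def is_prime(k: int) -> bool:
    """Trial-division primality test (fine for a sketch)."""
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def is_goldbach_sum(n: int) -> bool:
    """True if the even number n > 2 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while True:
    if not is_goldbach_sum(n):
        print(f"Counterexample: {n}")  # nobody knows if this line is reachable
        break
    n += 2  # otherwise, check the next even number, possibly forever

In practice one would cap n before running this; the unbounded loop is the thought experiment.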


 

Brent Meeker

Jan 23, 2024, 4:37:00 PM
to everyth...@googlegroups.com


On 1/23/2024 12:52 PM, John Clark wrote:
"You say "If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are" and I agree with you about that because there is evidence that it is true. I know for a fact that random mutation and natural selection managed to produce consciousness at least once (me) and probably many billions of times, but Evolution can't directly detect consciousness any better than I can, except in myself, and it can't select for something it can't see, but evolution can detect intelligent behavior. I could not function if I really believed that solipsism was true, therefore I must take it as an axiom, as a brute fact, that consciousness is the way data feels when it is being processed intelligently.

You've written this before, but I slightly disagree with it. I think Evolution can detect consciousness as directly or indirectly as intelligence. Consciousness is imagining the world with you as an actor within it. It's a kind of thinking necessary for planning, i.e. for an advanced form of intelligence. The consciousness you talk about is just awareness, perception; that's processing data.



As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal they can never change, but such a thing is impossible. In the 1930s Kurt Gödel prove that there are some things that are true but have no proof and Alan Turing proved that there is no way to know for certain if a given task is even possible. For example, is it possible to prove or disprove that every even number greater than two is the sum of two prime numbers? Nobody knows. If an intelligent being was able to have goals that could never change it would soon be caught in an infinite loop because sooner or later it would attempt a task that was impossible, that's why Evolution invented the very important emotion of boredom.   Certainly human beings don't have fix goals, not even the goal of self preservation, and I don't see how an AI could either."


Good point.

Brent

spudb...@aol.com

Jan 23, 2024, 4:50:51 PM
to everyth...@googlegroups.com
Scott Alexander Siskind, the psychiatrist? More to the point on the nature o' consciousness is Stephon Alexander, the physicist at Brown University.

Or physicist Vitaly Vanchurin at U Minn/Duluth, via his neural net concept.


♬ It's the Sky, Lord, It's the Sky!♪ (Me, a Clog-dancing). Take it, Brother Penrose!






John Clark

Jan 23, 2024, 5:12:48 PM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 4:37 PM Brent Meeker <meeke...@gmail.com> wrote:



>You've written this before, but I slightly disagree with it.  I think Evolution can detect consciousness as directly or indirectly as intelligence. 

I agree, Evolution can detect intelligence so it can only detect consciousness if it is an inevitable byproduct of intelligent data-processing.  

  John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Jan 23, 2024, 5:34:14 PM
to everyth...@googlegroups.com


On 1/23/2024 2:12 PM, John Clark wrote:



>You've written this before, but I slightly disagree with it.  I think Evolution can detect consciousness as directly or indirectly as intelligence. 

I agree, Evolution can detect intelligence so it can only detect consciousness if it is an inevitable byproduct of intelligent data-processing. 
You're missing my point: there are at least two different meanings of "conscious", and only one necessarily accompanies intelligence (and isn't exactly a "byproduct"). It's just awareness or perception. It doesn't include reflection and self-awareness, but it can include a lot of intelligence, including learning.

The second meaning, which is the kind we prize as uniquely human, is self-awareness. I think it's what you refer to as a "byproduct", but my point is that it's another level of intelligence and hence is subject to evolution just like any other aspect of intelligence. This second meaning is planning, and planning depends on having a self-model: if I do this and that happens, how will I feel and what will I do then?

Brent
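
Brent's claim that planning depends on a self-model can be put in toy code. The sketch below (hypothetical names and payoff numbers, not from the thread) scores each action by simulating exactly the question he poses: if I do this and that happens, how will I feel, and what will I do then?

from typing import Dict, Tuple

# Hypothetical world model: action -> (resulting situation, how I'd feel there)
WORLD_MODEL: Dict[str, Tuple[str, int]] = {
    "eat":   ("full", 2),
    "work":  ("tired", -1),
    "sleep": ("rested", 1),
}

def imagined_self(situation: str) -> str:
    """The self-model: a prediction of what I would do next in that situation."""
    return {"full": "work", "tired": "sleep", "rested": "work"}[situation]

def plan() -> str:
    """Choose the action whose imagined two-step outcome feels best."""
    def score(action: str) -> int:
        situation, feeling = WORLD_MODEL[action]          # "that happens"
        next_action = imagined_self(situation)            # "what will I do then?"
        return feeling + WORLD_MODEL[next_action][1]      # "how will I feel?"
    return max(WORLD_MODEL, key=score)

print(plan())  # -> "eat" with these toy numbers

An agent that only mapped stimuli to responses would need none of this; the self-model is what makes the lookahead possible, which is Brent's point about it being another level of intelligence.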

Stathis Papaioannou

Jan 23, 2024, 5:51:52 PM
to everyth...@googlegroups.com




There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies. Some claim that phenomenal consciousness reduces to one of the other kinds, and therefore that zombies are impossible.

John Clark

Jan 23, 2024, 6:01:53 PM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 5:51 PM Stathis Papaioannou <stat...@gmail.com> wrote:

There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies.

Assuming that is true, and assuming that you yourself are not a philosophical zombie, how do you suppose random mutation and natural selection managed to produce you?

  John K Clark    See what's on my new list at  Extropolis

Stathis Papaioannou

Jan 23, 2024, 6:46:10 PM
to everyth...@googlegroups.com
On Wed, 24 Jan 2024 at 10:01, John Clark <johnk...@gmail.com> wrote:
On Tue, Jan 23, 2024 at 5:51 PM Stathis Papaioannou <stat...@gmail.com> wrote:

There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies.

Assuming that is true, and assuming that you yourself are not a philosophical zombie, how do you suppose random mutation and natural selection managed to produce you?

It couldn't, which supports the idea that philosophical zombies are impossible, or equivalently that phenomenal consciousness reduces to the behavioural manifestations of consciousness, such as awareness of self and environment.
 
--
Stathis Papaioannou


Bruce Kellett

Jan 23, 2024, 7:10:17 PM
to everyth...@googlegroups.com
In fact, it supports the idea that philosophical zombies could not be produced by natural (Darwinian) selection. But it says nothing about the possibility that such beings could be produced artificially, e.g. via AI.

Bruce

Stathis Papaioannou

Jan 23, 2024, 7:32:33 PM
to everyth...@googlegroups.com




That is strictly true, but it would entail that consciousness is some sort of side-effect peculiar to organic chemistry (or whatever the special ingredient is), and not a consequence of intelligent behaviour.

Brent Meeker

Jan 23, 2024, 9:23:32 PM
to everyth...@googlegroups.com
That's the kind that couldn't evolve and so I agree with JC that it's unlikely to exist.

Brent

Brent Meeker

Jan 23, 2024, 9:34:31 PM
to everyth...@googlegroups.com
Right, that "consciousness" would be a recorder that we would include so we could review any misbehavior that arose, the way the self-driving car that hit the bicyclist had a record of it.

Brent
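
Brent's "recorder" reading of this kind of consciousness maps onto a familiar design: a black-box log attached to an agent's sense-act loop, consulted only after the fact. A minimal Python sketch (the stand-in policy and all names here are illustrative, not from the thread):

from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class BlackBoxRecorder:
    """Does nothing for the agent in the moment; it only keeps a reviewable trace."""
    trace: List[Tuple[Any, Any]] = field(default_factory=list)

    def record(self, observation: Any, action: Any) -> None:
        self.trace.append((observation, action))

    def review(self) -> None:
        """Replay the trace after the episode, e.g. to audit misbehavior."""
        for step, (obs, act) in enumerate(self.trace):
            print(f"step {step}: saw {obs!r}, did {act!r}")

def policy(observation: str) -> str:
    """Stand-in for the intelligent part of the agent."""
    return "brake" if observation == "bicyclist ahead" else "cruise"

recorder = BlackBoxRecorder()
for obs in ["clear road", "bicyclist ahead", "clear road"]:
    action = policy(obs)
    recorder.record(obs, action)  # awareness as logging, nothing more

recorder.review()  # the review happens only after the fact

Note that if it really is purely a recorder, removing it changes no behavior in the moment; the log matters only later.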

Stathis Papaioannou

Jan 23, 2024, 10:04:30 PM
to everyth...@googlegroups.com




Apparently it does exist, but it appears that it is epiphenomenal.

Samiya Illias

Jan 23, 2024, 10:16:49 PM
to everyth...@googlegroups.com
The greatest loss that a person can suffer is the permanent loss of their soul in the Hereafter (Q39:15-20). Such people will consciously suffer in Hell, neither living nor dying (Q35:36). 


Brent Meeker

Jan 23, 2024, 11:30:01 PM
to everyth...@googlegroups.com
It's far from apparent to me.  Why do you think it exists?

Brent

Brent Meeker

Jan 23, 2024, 11:33:01 PM
to everyth...@googlegroups.com


On 1/23/2024 7:16 PM, Samiya Illias wrote:
The greatest loss that a person can suffer is the permanent loss of their soul in the Hereafter (Q39:15-20). Such people will consciously suffer in Hell, neither living nor dying (Q35:36).

In the Bullshit Department, a businessman can't hold a candle to a
clergyman. 'Cause I gotta tell you the truth, folks. When it comes to
bullshit, big-time, major league bullshit, you have to stand in awe of the
all-time champion of false promises and exaggerated claims: religion. No
contest. No contest. Religion. Religion easily has the greatest bullshit
story ever told.
Think about it. Religion has actually convinced people that there's an
invisible man -- living in the sky -- who watches everything you do, every
minute of every day. And the invisible man has a special list of ten things
he does not want you to do. And if you do any of these ten things, he has a
special place, full of fire and smoke and burning and torture and anguish,
where he will send you to live and suffer and burn and choke and scream and
cry forever and ever 'til the end of time!
But He loves you.
He loves you, and He needs money! He always needs money! He's all-powerful,
all-perfect, all-knowing, and all-wise, somehow just can't handle money!
Religion takes in billions of dollars, they pay no taxes, and they always
need a little more. Now, you talk about a good bullshit story. Holy Shit!
   -- George Carlin, Politically Incorrect, May 29, 1997

Tomasz Rola

Jan 24, 2024, 12:22:30 AM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 03:52:46PM -0500, John Clark wrote:
> On Tue, Jan 23, 2024 at 3:38 PM Brent Meeker <meeke...@gmail.com> wrote:
>
> > Who wrote this? You, JC?
> >
>
> No, Scott Alexander did, he's a pretty smart guy but I think he got some
> things wrong. I did write this in the comments section:
(...)
> As for the fear of paperclip maximizers, I think that's kind of silly. It
> assumes the possibility of an intelligent entity having an absolutely fixed
> goal they can never change, but such a thing is impossible. In the 1930s
> Kurt Gödel prove that there are some things that are true but have no proof
> and Alan Turing proved that there is no way to know for certain if a given
> task is even possible. For example, is it possible to prove or disprove
> that every even number greater than two is the sum of two prime numbers?
> Nobody knows. If an intelligent being was able to have goals that could
> never change it would soon be caught in an infinite loop because sooner or
> later it would attempt a task that was impossible, that's why Evolution
> invented the very important emotion of boredom. Certainly human beings
> don't have fix goals, not even the goal of self preservation, and I don't
> see how an AI could either."

There are intelligent beings who suffer from one mania or another and never get bored. I have been hearing about them in the news for the last quarter of a century and more...

Also, I think people put too much expectation into the concept of a paperclip maximizer. It does not have to do research. It does not have to travel around the Galaxy. All it has to do is outperform a certain number of people for a short enough time, before they can find out how to switch it off. It then converts the rest of the planetary surface, and can just as well stop, or go deep and stop, more or less, after meeting the melted core inside the Earth.

As such, a PCM could be made by some kind of intelligent maniac, in her cellar, during this century or the next, without much problem or difficulty. Possibly supported by a Chad Geppetto in her ear. In essence, it is no different from being able to produce a weapon.

--
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomas...@bigfoot.com **

Stathis Papaioannou

Jan 24, 2024, 12:34:52 AM
to everyth...@googlegroups.com




It exists because I know it does, and I would guess that you know it does as well. I can’t do anything to demonstrate it, because that is the nature of epiphenomena.

Brent Meeker

Jan 24, 2024, 1:26:51 AM
to everyth...@googlegroups.com


On 1/23/2024 9:34 PM, Stathis Papaioannou wrote:
There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies. Some claim that phenomenal consciousness reduces to one of the other kinds, and therefore that zombies are impossible.
That's the kind that couldn't evolve and so I agree with JC that it's unlikely to exist.

Apparently it does exist, but it appears that it is epiphenomenal.

It's far from apparent to me.  Why do you think it exists?

It exists because I know it does, and I would guess that you know it does as well. I can’t do anything to demonstrate it, because that is the nature of epiphenomena.

No, you don't know that. If you have a conscious thought now that "has no behavioural manifestations whatsoever", you may remember it next year and change what you write on the Everything List then.

Brent

Stathis Papaioannou

Jan 24, 2024, 1:37:09 AM
to everyth...@googlegroups.com




By no behavioural manifestation, I am referring to the physical world being causally closed. There is no physical event, including human actions, which cannot be explained in terms of prior physical events. Mental events supervene on physical events, but they have no separate causal efficacy of their own. If they did, then we would see magical phenomena such as bones moving without any applied force, breaching conservation laws, because that would be the mind moving them. I can write about a mental state, but my hand moving in order to write can be fully explained in terms of observable physical processes in my brain, without needing to invoke any effects of the mental state.

Samiya Illias

Jan 24, 2024, 2:55:33 AM
to everyth...@googlegroups.com

The Death of The Soul 



Brent Meeker

Jan 24, 2024, 5:15:09 AM
to everyth...@googlegroups.com


On 1/23/2024 10:36 PM, Stathis Papaioannou wrote:



By no behavioural manifestation, I am referring to the physical world being causally closed. There is no physical event, including human actions, which cannot be explained in terms of prior physical events. Mental events supervene on physical events, but they have no separate causal efficacy of their own.
Mental events are just another way of describing the same physical events. 

Brent


John Clark

Jan 24, 2024, 5:28:48 AM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 5:51 PM Stathis Papaioannou <stat...@gmail.com> wrote:


There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies.

Then it would be impossible, even in theory, to ever prove or disprove the existence of philosophical zombies, therefore the concept is not scientific. But if you insist on taking the idea seriously anyway, then logically you'd also have to take seriously the idea that you are the only conscious being in the universe. I don't think anybody wants to do that except for philosophy professors, and even then only when they're teaching first-year philosophy students and want to show them that philosophy can be provocative.

  John K Clark    See what's on my new list at  Extropolis

John Clark

Jan 24, 2024, 5:39:27 AM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 7:10 PM Bruce Kellett <bhkel...@gmail.com> wrote:

it supports the idea that philosophical zombies could not be produced by natural (Darwinian) selection. But it says nothing about the possibility that such beings could be produced artificially, e.g. via AI.

But who made the AI? I don't believe that Darwinian Evolution is capable of evolving a silicon microchip, much less an entire solid-state AI, at least not in the brief time (13 billion years) it had to get the job done, because evolution is a stupid, slow, clumsy, cruel and inefficient way to make complex objects, but until it managed to make a brain it was the ONLY way to make complex objects.

John K Clark    See what's on my new list at  Extropolis


John Clark

Jan 24, 2024, 5:51:57 AM
to everyth...@googlegroups.com
On Tue, Jan 23, 2024 at 7:32 PM Stathis Papaioannou <stat...@gmail.com> wrote:
> > it supports the idea that philosophical zombies could not be produced by natural (Darwinian) selection. But it says nothing about the possibility that such beings could be produced artificially, e.g. via AI.

That is strictly true, but it would entail that consciousness is some sort of side-effect peculiar to organic chemistry (or whatever the special ingredient is), and not a consequence of intelligent behaviour.

But even if somebody believed that a carbon atom has some sort of magical potential to produce consciousness that a silicon atom lacked, I'll bet they don't believe that all structures that contain organic molecules are conscious, such as a coal seam or a pile of dirty rags or a tree, because those things are not behaving intelligently. I bet they don't even believe their fellow human beings are always conscious, not when they're sleeping or under anesthesia or dead, and for exactly the same reason.


  John K Clark    See what's on my new list at  Extropolis




