I.

Business Insider: Larry Page Once Called Elon Musk A "Specieist":
A month later, Business Insider returned to the same question, from a different angle: Effective Accelerationists Don’t Care If Humans Are Replaced By AI:
I originally thought there was an unbridgeable value gap between Page and e/acc vs. Musk and EA. But I can imagine stories that would put me on either side. For example:

The Optimistic Story
The Pessimistic Story
(for a less extreme version of this, see my post on the Ascended Economy)

I think the default outcome is somewhere in between these two stories, but I can think of it as "catastrophic" or "basically fine" based on the exact contours of where it resembles each. Here are some things I hope Larry Page and the e/accs are thinking about:

Consciousness

I know this is fuzzy and mystical-sounding, but it really does feel like a loss if consciousness is erased from the universe forever, maybe a total loss. If we're lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are. If we're not lucky, consciousness might be associated with only a tiny subset of useful information processing regimes (cf. Peter Watts's Blindsight). Consciousness seems very closely linked to brain waves in humans; existing AIs have nothing even remotely resembling these, and it's not clear that they're useful for anything based on deep learning.

Individuation

I would be more willing to accept AIs as a successor to humans if there were clearly multiple distinct individuals. Modern AI seems on track to succeed at this - there are millions of instances of eg GPT. But it's not obvious that this is the right way to coordinate an AI society, or that a bunch of GPTs working together would be more like a nation than a hive mind.

Art, Science, Philosophy, and Curiosity

Some of these things are emergent from any goal. Even a paperclip maximizer will want to study physics, if only to create better paperclip-maximization machines. Others aren't. If art, music, etc come mostly from signaling drives, AIs with a different relationship to individuality than humans might not have these. Music in particular seems to be a spandrel of other design decisions in the human brain. All of these might be selected out of any AI that was ruthlessly optimized for a specific goal.

Will AIs And Humans Merge?

This is the one where I feel most confident in my answer, which is: not by default.

II.

Even if all of these end up going as well as possible - the AIs are provably conscious, exist as individuals, care about art and philosophy, etc - there's still a residual core of resistance that bothers me. It goes something like:

Imagine that scientists detect a massive alien fleet heading towards Earth. We intercept and translate some of their communications (don't ask how) and find they plan to kill all humans and take Earth's resources for themselves. Although the aliens are technologically beyond us, science fiction suggests some clever strategies for defeating them - maybe microbes like War of the Worlds, or computer viruses like Independence Day. If we can pull together a miracle like this, should we use it?

Here I bet even Larry Page would support Team Human. But why? The aliens are more advanced than us. They're presumably conscious, individuated, and have hopes and dreams like ourselves. Still, humans uber alles.

Is this specieist? I don't know - is it racist to not want English colonists to wipe out Native Americans? Would a Native American who expressed that preference be racist? That would be a really strange way to use that term!

I think rights trump concerns like these - not fuzzy "human rights", but the basic rights of life, liberty, and property. If the aliens want to kill humanity, then they're not as superior to us as they think, and we should want to stop them.
Likewise, I would be most willing to accept being replaced by AI if it didn't want to replace us by force.

III.

Maybe the future should be human, and maybe it shouldn't. But the kind of AIs that I'd be comfortable ceding the future to won't appear by default. And the kind of work it takes to make a successor species we can be proud of is the same kind of work it takes to trust that successor species to make decisions about the final fate of humanity. We should do that work instead of blithely assuming that we'll get a kind of AI we like.
> Who wrote this? you, JC?
No, Scott Alexander did; he's a pretty smart guy, but I think he got some things wrong. I did write this in the comments section:
"You say "If we’re lucky, consciousness is a basic feature of information processing and anything smart enough to outcompete us will be at least as conscious as we are" and I agree with you about that because there is evidence that it is true. I know for a fact that random mutation and natural selection managed to produce consciousness at least once (me) and probably many billions of times, but Evolution can't directly detect consciousness any better than I can, except in myself, and it can't select for something it can't see, but evolution can detect intelligent behavior. I could not function if I really believed that solipsism was true, therefore I must take it as an axiom, as a brute fact, that consciousness is the way data feels when it is being processed intelligently.
You also say "consciousness seems very closely linked to brain waves in humans", but how was that fact determined? It was observed that when people behave intelligently their brain waves take a certain form, and when they don't behave intelligently the brain waves are different. I'm sure you don't think that other people are conscious when they are sleeping or under anesthesia or dead, because when they are in those conditions they are not behaving very intelligently.
As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal it can never change, but such a thing is impossible. In the 1930s Kurt Gödel proved that there are some things that are true but have no proof, and Alan Turing proved that there is no way to know for certain if a given task is even possible. For example, is it possible to prove or disprove that every even number greater than two is the sum of two prime numbers? Nobody knows. If an intelligent being were able to have goals that could never change, it would sooner or later attempt a task that was impossible and be caught in an infinite loop; that's why Evolution invented the very important emotion of boredom. Certainly human beings don't have fixed goals, not even the goal of self-preservation, and I don't see how an AI could either."
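To make the point about impossible tasks concrete, here is a small toy sketch in Python (my own illustration, not something from Scott's post or my comment): an agent whose one unchangeable goal is "find an even number greater than 2 that is not the sum of two primes" can never know whether that task is completable, because Goldbach's conjecture is unproven, so without something like boredom (represented below by a cutoff I've called boredom_limit) it could search forever.

def is_prime(n):
    # trial-division primality test, good enough for a toy example
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    # True if the even number n can be written as the sum of two primes
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample(boredom_limit=10_000):
    # Look for a counterexample to Goldbach's conjecture. Without the
    # boredom_limit cutoff this loop might never halt, since nobody knows
    # whether a counterexample exists at all; the cutoff plays the role of
    # the boredom that lets an agent abandon a possibly impossible goal.
    n = 4
    while n <= boredom_limit:
        if not is_goldbach_sum(n):
            return n    # a counterexample would refute the conjecture
        n += 2
    return None         # gave up; the fixed goal was never achieved

print(search_for_counterexample())   # prints None for every limit anyone has tried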
On 1/23/2024 12:52 PM, John Clark wrote:
> Evolution can't directly detect consciousness any better than I can (except in myself), and it can't select for something it can't see; what it can detect is intelligent behavior.
You've written this before, but I slightly disagree with it. I think Evolution can detect consciousness as directly or indirectly as intelligence. Consciousness is imagining the world with you as an actor within it. It's a kind of thinking necessary for planning, i.e. for an advanced form of intelligence. The consciousness you talk about is just awareness, perception; that's processing data.
> As for the fear of paperclip maximizers, I think that's kind of silly. It assumes the possibility of an intelligent entity having an absolutely fixed goal it can never change, but such a thing is impossible. [...]
Good point.
Brent
On Tue, Jan 23, 2024 at 4:37 PM Brent Meeker <meeke...@gmail.com> wrote:
>You've written this before, but I slightly disagree with it. I think Evolution can detect consciousness as directly or indirectly as intelligence.
I agree. Evolution can detect intelligence, so it can only detect consciousness if consciousness is an inevitable byproduct of intelligent data processing.
> There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies.

Assuming that is true, and assuming that you yourself are not a philosophical zombie, how do you suppose random mutation and natural selection managed to produce you?
On 24-Jan-2024, at 5:32 AM, Stathis Papaioannou <stat...@gmail.com> wrote:
> There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies. Some claim that phenomenal consciousness reduces to one of the other kinds, and therefore that zombies are impossible.

That's the kind that couldn't evolve, and so I agree with JC that it's unlikely to exist.
Apparently it does exist, but it appears that it is epiphenomenal.
It's far from apparent to me. Why do you think it exists?
It exists because I know it does, and I would guess that you know it does as well. I can’t do anything to demonstrate it, because that is the nature of epiphenomena.
Stathis Papaioannou
On Wed, 24 Jan 2024 at 17:26, Brent Meeker <meeke...@gmail.com> wrote:
No, you don't know that. If you have a conscious thought now that "has no behavioural manifestations whatsoever", you may remember it next year and change what you write on the Everything List then.
By no behavioural manifestation, I am referring to the physical world being causally closed. There is no physical event, including human actions, which cannot be explained in terms of prior physical events. Mental events supervene on physical events, but they have no separate causal efficacy of their own.
If they did, then we would see magical phenomena such as bones moving without any applied force, breaching conservation laws, because that would be the mind moving them. I can write about a mental state, but my hand moving in order to write can be fully explained in terms of observable physical processes in my brain, without needing to invoke any effects of the mental state.
> There is yet another level, phenomenal consciousness, which has no behavioural manifestations whatsoever, allowing for the theoretical possibility of philosophical zombies.
Then it would be impossible, even in theory, to ever prove or disprove the existence of philosophical zombies; therefore the concept is not scientific. But if you insist on taking the idea seriously anyway, then logically you'd also have to take seriously the idea that you are the only conscious being in the universe. I don't think anybody wants to do that except for philosophy professors, and even then only when they're teaching first-year philosophy students and want to show them that philosophy can be provocative.
> it supports the idea that philosophical zombies could not be produced by natural (Darwinian) selection. But it says nothing about the possibility that such beings could be produced artificially, e.g. via AI.
> That is strictly true, but it would entail that consciousness is some sort of side-effect peculiar to organic chemistry (or whatever the special ingredient is), and not a consequence of intelligent behaviour.