Are Philosophical Zombies possible?


John Clark

Jul 10, 2024, 9:08:11 AM
to 'Brent Meeker' via Everything List

On Tue, Jul 9, 2024 at 7:22 PM Brent Meeker <meeke...@gmail.com> wrote:

>> what I call "a Turing firewall": software has no ability to know its underlying hardware implementation; it is an inviolable separation of layers of abstraction, which makes the lower levels invisible to the layers above.

That's roughly true, but not exactly. If you think of intelligence implemented on a computer, it would make a difference whether it had a true random number generator (hardware) or not.

For most problems a software pseudorandom number generator is good enough, but I admit that for some problems you might need to stick on a hardware true random number generator. However, unless it was specifically told, I don't think an AI would be able to intuitively tell whether it had a pseudorandom number generator or a real one.
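
To make that concrete, here is a minimal sketch, using Python's os.urandom as a stand-in for a hardware source (on most systems it only approximates one) and a deliberately crude statistical test:

    import os
    import random

    def one_bit_fraction(bits):
        # Fraction of 1-bits; close to 0.5 for any decent generator.
        return sum(bits) / len(bits)

    N = 100_000

    # Software PRNG: fully deterministic once you know the seed.
    prng = random.Random(42)
    prng_bits = [prng.getrandbits(1) for _ in range(N)]

    # OS entropy pool: stands in here for a hardware true RNG.
    hw_bits = [(b >> i) & 1 for b in os.urandom(N // 8) for i in range(8)]

    print(f"PRNG:       {one_bit_fraction(prng_bits):.4f}")
    print(f"os.urandom: {one_bit_fraction(hw_bits):.4f}")
    # Both print roughly 0.5000; a test this crude cannot tell them apart.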

> It would make a difference if it were a quantum computer or not.  

Any function that is calculable can be computed by a Turing Machine, and although it has never been formally proven, most think there is no problem a quantum computer CAN solve that a Turing Machine can NOT (finding Busy Beaver numbers, for example, is uncomputable for both). However, there are lots of problems that would be easy for a quantum computer with only a hundred high-quality qubits to solve, yet impractical for a conventional computer the size of Jupiter even if it had a trillion years to work on them. And I doubt that an AI could intuitively tell whether its inner machinery was using quantum computing principles or not. Incidentally, Ray Kurzweil is skeptical that quantum computers will ever be practical; all his predictions are based on the assumption that they will never amount to much. If he's wrong about that, then all his predictions will prove to be much too conservative.
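
For a back-of-the-envelope sense of that scale (a rough sketch, assuming a brute-force state-vector simulation at 16 bytes per complex amplitude):

    # Brute-force simulation of n qubits stores 2**n complex amplitudes.
    n_qubits = 100
    amplitudes = 2 ** n_qubits        # about 1.27e30
    bytes_needed = amplitudes * 16    # 16 bytes per complex128 amplitude
    print(f"{amplitudes:.3e} amplitudes, {bytes_needed:.3e} bytes")
    # Roughly 2e31 bytes of memory, vastly more than all the digital
    # storage ever manufactured; hence the Jupiter-sized computer.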

> And going the other way, what if it didn't have a multiply operation?

That would be no problem as long as the AI still had the addition operation: just do repeated additions, although it would slow things down. But you could keep removing operations until you got all the way down to first-order logic, and then an AI could actually prove its own consistency. Kurt Gödel showed that, a few years before he came up with his famous Incompleteness Theorem, in what we now call Gödel's Completeness Theorem. The later Incompleteness Theorem only applies to logical systems powerful enough to do arithmetic, and you can't do arithmetic with nothing but first-order logic. The trouble is, you couldn't really say an Artificial Intelligence was intelligent if it couldn't even pass a first-grade arithmetic test.
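
As a toy sketch of that fallback (a real multiplier-less CPU would use the faster shift-and-add method, but the principle is the same):

    def multiply_by_addition(a: int, b: int) -> int:
        # Compute a * b with no multiply instruction; b is assumed >= 0.
        total = 0
        for _ in range(b):
            total += a    # one addition per unit of b, so O(b) additions
        return total

    assert multiply_by_addition(7, 6) == 42
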
 See what's on my new list at  Extropolis
wo1





Cosmin Visan

Jul 10, 2024, 11:24:21 AM
to Everything List
Why do you even use the word AI? Why can't you just use the words "computer program"? Aaa... hype. Makes you look more intelligent than you actually are! Look at me: AI! AI! AI! Ooo.... so smaaaart! =))))))))))))))))))))

John Clark

Jul 10, 2024, 12:50:04 PM
to everyth...@googlegroups.com
On Wed, Jul 10, 2024 at 11:24 AM 'Cosmin Visan' via Everything List <everyth...@googlegroups.com> wrote:

Why do you even use the word AI? Why can't you just use the words "computer program"? Aaa... hype. Makes you look more intelligent than you actually are! Look at me: AI! AI! AI! Ooo.... so smaaaart! =))))))))))))))))))))

You, sir, are an ass.
 See what's on my new list at  Extropolis
ayr


 
 

Cosmin Visan

Jul 10, 2024, 1:30:46 PM
to Everything List
lol. You are so obsessed with AI, as if it were some supernatural entity, when in fact it is just a random computer program. Even worse: a computer program with no use whatsoever. Nobody actually uses AI for anything.

Brent Meeker

Jul 11, 2024, 2:08:34 AM
to everyth...@googlegroups.com


On 7/10/2024 6:07 AM, John Clark wrote:

On Tue, Jul 9, 2024 at 7:22 PM Brent Meeker <meeke...@gmail.com> wrote:

....


And going the other way, what if it didn't have a multiply operation?

That would be no problem as long as the AI still had the addition operation: just do repeated additions, although it would slow things down. But you could keep removing operations until you got all the way down to first-order logic, and then an AI could actually prove its own consistency. Kurt Gödel showed that, a few years before he came up with his famous Incompleteness Theorem, in what we now call Gödel's Completeness Theorem. The later Incompleteness Theorem only applies to logical systems powerful enough to do arithmetic, and you can't do arithmetic with nothing but first-order logic. The trouble is, you couldn't really say an Artificial Intelligence was intelligent if it couldn't even pass a first-grade arithmetic test.

There are many levels of intelligence. An octopus can't pass a first-grade arithmetic test, but it can escape through a difficult maze.

Brent

John Clark

Jul 11, 2024, 6:48:19 AM
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 2:08 AM Brent Meeker <meeke...@gmail.com> wrote:

>> That [lack of a multiply operation] would be no problem as long as the AI still had the addition operation: just do repeated additions, although it would slow things down. But you could keep removing operations until you got all the way down to first-order logic, and then an AI could actually prove its own consistency. Kurt Gödel showed that, a few years before he came up with his famous Incompleteness Theorem, in what we now call Gödel's Completeness Theorem. The later Incompleteness Theorem only applies to logical systems powerful enough to do arithmetic, and you can't do arithmetic with nothing but first-order logic. The trouble is, you couldn't really say an Artificial Intelligence was intelligent if it couldn't even pass a first-grade arithmetic test.

There are many levels of intelligence. An octopus can't pass a first-grade arithmetic test, but it can escape through a difficult maze.

Claude Shannon, the father of information theory, made a computerized mouse way back in 1951 that was able to escape a difficult maze. It was a big advance at the time; if the term had been invented, some would've called it Artificial Intelligence. However, these days nobody would call something like that AI; one of the many reasons is that it couldn't pass a first-grade arithmetic test.
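
Shannon's mouse, Theseus, did it with relay circuits and trial-and-error memory; a sketch of the modern software equivalent of that whole landmark machine is a few lines of breadth-first search:

    from collections import deque

    def shortest_path(grid, start, goal):
        # Breadth-first search over a grid of '.' (open) and '#' (wall).
        rows, cols = len(grid), len(grid[0])
        frontier = deque([(start, 0)])
        seen = {start}
        while frontier:
            (r, c), dist = frontier.popleft()
            if (r, c) == goal:
                return dist
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == "." and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    frontier.append(((nr, nc), dist + 1))
        return None  # maze has no way out

    maze = ["....#",
            ".##.#",
            ".#...",
            ".#.#.",
            "...#."]
    print(shortest_path(maze, (0, 0), (4, 4)))  # -> 8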

 See what's on my new list at  Extropolis
mey

Cosmin Visan

Jul 11, 2024, 3:41:30 PM
to Everything List
AI is just a fancy word for lonely boys to give meaning to their empty lives. lol

Brent Meeker

Jul 11, 2024, 4:37:11 PM
to everyth...@googlegroups.com
A rock, along with many other things, can't pass a first-grade arithmetic test either; but that doesn't show that anything that can't pass a first-grade arithmetic test is unintelligent or unconscious. Consider, for example, an octopus or a 3-year-old child.

Brent



John Clark

Jul 11, 2024, 4:57:01 PM
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker <meeke...@gmail.com> wrote:

A rock, along with many other things, can't pass a first-grade arithmetic test either; but that doesn't show that anything that can't pass a first-grade arithmetic test is unintelligent or unconscious. Consider, for example, an octopus or a 3-year-old child.

And because of their failure to pass a first-grade arithmetic test, we would say that a rock, an octopus, and a three-year-old child are not behaving very intelligently. But as I said before, the Turing Test is not perfect; however, it's all we've got. If something passes the test, then it's intelligent and conscious. If it fails the test, then it may or may not be intelligent and/or conscious.

See what's on my new list at  Extropolis
asb



Jason Resch

Jul 11, 2024, 5:01:51 PM
to Everything List
There are easier and harder tests than the Turing test. I don't know why you say it's the only test we have.

Also: would passing the Argonov test (which I described in my document on whether zombies are possible) not be a sufficient proof of consciousness? Note that the Argonov test is much harder to pass than the Turing test.

Jason 







John Clark

Jul 11, 2024, 5:28:25 PM
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 5:01 PM Jason Resch <jason...@gmail.com> wrote:
 
There are easier and harder tests than the Turing test. I don't know why you say it's the only test we have. Also: would passing the Argonov test (which I described in my document on whether zombies are possible) not be a sufficient proof of consciousness? Note that the Argonov test is much harder to pass than the Turing test.
 
I have a clear understanding of exactly what the Turing Test is, but I am unable to get a clear understanding of exactly, or even approximately, what the Argonov test is. I know it has something to do with "phenomenal judgments", but I don't know what that means, and I don't know what I would need to do to pass the Argonov Test, so I guess I'd fail it. And because of my failure to understand the test, it seems I've been wrong all my life about being conscious and really I am a philosophical zombie.

  See what's on my new list at  Extropolis
pzx

Jason Resch

Jul 11, 2024, 5:33:21 PM
to Everything List
“Phenomenal judgments” are the words, discussions, and texts about consciousness, subjective phenomena, and the mind-body problem. […]

In order to produce detailed phenomenal judgments about problematic properties of consciousness, an intelligent system must have a source of knowledge about the properties of consciousness. [...]

Consider a deterministic intelligent machine having no innate philosophical knowledge or philosophical discussions while learning. Also, the machine does not contain informational models of other creatures (that may implicitly or explicitly contain knowledge about these creatures’ consciousness). If, under these conditions, the machine produces phenomenal judgments on all problematic properties of consciousness, then, according to [the postulates], materialism is true and the machine is conscious.
— Victor Argonov in “Experimental Methods for Unraveling the Mind-Body Problem: The Phenomenal Judgment Approach” (2014)



Jason 






John Clark

Jul 11, 2024, 6:00:01 PM
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 5:33 PM Jason Resch <jason...@gmail.com> wrote:

Consider a deterministic intelligent machine having no innate philosophical knowledge or philosophical discussions while learning. Also, the machine does not contain informational models of other creatures (that may implicitly or explicitly contain knowledge about these creatures’ consciousness). If, under these conditions, the machine produces phenomenal judgments on all problematic properties of consciousness, then, according to [the postulates], materialism is true and the machine is conscious.

Who judges if the "phenomenal judgments" of the machine are correct or incorrect? Even humans can't agree among themselves about most philosophical matters; certainly that's true of members of this list. And the fact is many, perhaps most, human beings don't think about deep philosophical questions at all; they find it all to be a big bore. So does that mean they're philosophical zombies? And just because a machine can pontificate about consciousness, what reason, other than Argonov's authority, would I have for believing the machine was conscious?

I'm going to take a break from the list right now because I wanna watch Joe Biden's new press conference .... ah... I think I think I wanna watch it it 

 See what's on my new list at  Extropolis
bfq


Jason Resch

Jul 11, 2024, 7:01:55 PM
to Everything List


On Thu, Jul 11, 2024, 6:00 PM John Clark <johnk...@gmail.com> wrote:
On Thu, Jul 11, 2024 at 5:33 PM Jason Resch <jason...@gmail.com> wrote:

Consider a deterministic intelligent machine having no innate philosophical knowledge or philosophical discussions while learning. Also, the machine does not contain informational models of other creatures (that may implicitly or explicitly contain knowledge about these creatures’ consciousness). If, under these conditions, the machine produces phenomenal judgments on all problematic properties of consciousness, then, according to [the postulates], materialism is true and the machine is conscious.

Who judges if the "phenomenal judgments" of the machine are correct or incorrect? Even humans can't agree among themselves about most philosophical matters, certainly that's true of members of this list.

They don't have to be correct, as far as I know. The machine just has to make phenomenal judgements (without prior training on such topics). If a machine said "I think, therefore I am", or proposed epiphenomenalism, without having been trained on any philosophical topics, those would constitute phenomenal judgements that suggest the machine possesses consciousness.
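
In schematic form, the test looks something like this (a sketch only; the train/ask hooks and the keyword filter are crude stand-ins I am inventing here, since Argonov specifies experimental conditions rather than an implementation):

    PHENOMENAL_TERMS = ("consciousness", "qualia", "subjective experience",
                        "mind-body", "therefore i am")

    def mentions_phenomenal(text: str) -> bool:
        # Crude stand-in for recognizing a phenomenal judgment.
        return any(term in text.lower() for term in PHENOMENAL_TERMS)

    def argonov_test(train, ask, corpus, probes) -> bool:
        # train/ask are hypothetical hooks into the machine under test.
        # Condition: strip every philosophy-of-mind text before training.
        train([doc for doc in corpus if not mentions_phenomenal(doc)])
        # Pass iff the machine still makes phenomenal judgments unprompted.
        return any(mentions_phenomenal(ask(p)) for p in probes)

The essential condition is the data filter: any leak of philosophical text into the training corpus voids the result.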


And the fact is many, perhaps most, human beings don't think about deep philosophical questions at all; they find it all to be a big bore. So does that mean they're philosophical zombies?

Failing the test doesn't imply a lack of consciousness. But passing the test implies the presence of consciousness.

And just because a machine can pontificate about consciousness, what reason, other than Argonov's authority, would I have for believing the machine was conscious? 

There must be a source of information to permit the making of phenomenal judgements, and since the machine was not trained on them, what else would you propose that source could be, other than consciousness?

Jason 


I'm going to take a break from the list right now because I wanna watch Joe Biden's new press conference .... ah... I think I think I wanna watch it it 




Brent Meeker

Jul 11, 2024, 8:09:57 PM
to everyth...@googlegroups.com


On 7/11/2024 1:56 PM, John Clark wrote:
On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker <meeke...@gmail.com> wrote:

A rock, along with many other things, can't pass a first-grade arithmetic test either; but that doesn't show that anything that can't pass a first-grade arithmetic test is unintelligent or unconscious. Consider, for example, an octopus or a 3-year-old child.

And because of their failure to pass a first-grade arithmetic test, we would say that a rock, an octopus, and a three-year-old child are not behaving very intelligently.
In case you've forgotten, the Turing test was based on text-only communication, with an interlocutor asked to distinguish between a computer pretending to be a human and a man or woman pretending to be a woman or man. It's already been passed by some LLMs by dumbing down their responses. It may be all you've got, but it's a very poor test, one that can't tell the difference between a 3-year-old and a rock.

Brent



But as I said before, the Turing Test is not perfect; however, it's all we've got. If something passes the test, then it's intelligent and conscious. If it fails the test, then it may or may not be intelligent and/or conscious.





Cosmin Visan

Jul 12, 2024, 4:08:45 AM
to Everything List
Who cares about the Turing test? What does it have to do with being alive? =)))))))))))))))))))

John Clark

Jul 12, 2024, 6:16:08 AM
to everyth...@googlegroups.com


On Thu, Jul 11, 2024 at 8:09 PM Brent Meeker <meeke...@gmail.com> wrote:
In case you've forgotten, the Turing test was based on text-only communication, with an interlocutor asked to distinguish between a computer pretending to be a human and a man or woman pretending to be a woman or man.

Yes, but that is an unimportant detail. The essence of the Turing Test is that whatever method you use to determine the consciousness, or lack of it, of one of your fellow human beings, you should use that same method when judging the consciousness of a computer.

It's already been passed by some LLMs by dumbing down their responses

Don't you find that fact to be compelling? An AI needs to play dumb in order to fool a human into thinking it is human.  

See what's on my new list at  Extropolis
 




John Clark

Jul 12, 2024, 7:02:11 AM
to everyth...@googlegroups.com
On Thu, Jul 11, 2024 at 7:01 PM Jason Resch <jason...@gmail.com> wrote:

>> Who judges if the "phenomenal judgments" of the machine are correct or incorrect? Even humans can't agree among themselves about most philosophical matters; certainly that's true of members of this list.

They don't have to be correct, as far as I know. The machine just has to make phenomenal judgements (without prior training on such topics).

The AI's responses don't have to be correct?! Generating philosophical blather about consciousness is the easiest thing in the world because there is nothing to work on; there are no facts that the blather must fit. For it to rise a little above the level of blather, you've got to start with an unproven axiom, such as "consciousness is the way data feels when it is being processed, and thus I am not the only conscious being in the universe".
 
Failing the test doesn't imply a lack of consciousness. But passing the test implies the presence of consciousness.

So the Argonov Test has the same flaw the Turing Test has, and it is far easier to pass. For a computer to pass the Turing Test, it must be able to converse intelligently, but not too intelligently, ON ANY SUBJECT; to pass the Argonov Test, it only needs to be able to prattle on about consciousness.


there must be a source of information to permit the making of phenomenal judgements, and since the machine was not trained on them, what else would you propose that source could be, other than consciousness?

From your questions to the AI. When I meet someone, we don't spontaneously start talking about consciousness; it only happens when one of us steers the conversation in that direction, and that seldom happens (except on this list) because usually both of us would rather talk about other things.

  See what's on my new list at  Extropolis
ubu

Jason Resch

Jul 12, 2024, 9:33:56 AM
to Everything List
Do you think that passing the Argonov test would constitute positive proof of consciousness?

Jason 





John Clark

Jul 12, 2024, 10:48:53 AM
to everyth...@googlegroups.com
On Fri, Jul 12, 2024 at 9:33 AM Jason Resch <jason...@gmail.com> wrote:

Do you think that passing the Argonov test would constitute positive proof of consciousness?

Maybe, but unlike Turing's Test, Argonov's Test will tell you nothing about intelligence because it's too easy.

 See what's on my new list at  Extropolis

att




Brent Meeker

Jul 12, 2024, 7:17:09 PM
to everyth...@googlegroups.com


On 7/12/2024 3:15 AM, John Clark wrote:


On Thu, Jul 11, 2024 at 8:09 PM Brent Meeker <meeke...@gmail.com> wrote:
In case you've forgotten, the Turing test was based on text-only communication, with an interlocutor asked to distinguish between a computer pretending to be a human and a man or woman pretending to be a woman or man.

Yes, but that is an unimportant detail. The essence of the Turing Test is that whatever method you use to determine the consciousness, or lack of it, of one of your fellow human beings, you should use that same method when judging the consciousness of a computer.

It's already been passed by some LLMs by dumbing down their responses

Don't you find that fact to be compelling? An AI needs to play dumb in order to fool a human into thinking it is human. 

No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"

Now I think an LLM could be trained to imagine a consistent model of itself as a human being, i.e. having a location, friends, motives, a history... which would fool everyone who didn't actually check reality.

Brent

John Clark

Jul 13, 2024, 7:08:19 AM
to everyth...@googlegroups.com
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker <meeke...@gmail.com> wrote:

An AI needs to play dumb in order to fool a human into thinking it is human. Don't you find that fact to be compelling? 

No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"

If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing". And I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking about yet another place, then where exactly is the position of your mind? I think it's a nonsense question, because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way. So asking a question like that is like asking where "big" is located, or the color yellow.


 See what's on my new list at  Extropolis
y11



 

Brent Meeker

Jul 13, 2024, 4:18:18 PM
to everyth...@googlegroups.com


On 7/13/2024 4:07 AM, John Clark wrote:
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker <meeke...@gmail.com> wrote:

An AI needs to play dumb in order to fool a human into thinking it is human. Don't you find that fact to be compelling? 

No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"

If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing".
Which could easily be checked in real time. Any one question won't resolve whether it's a person or not, but a sequence can provide good evidence. Next question: "Is there a phone in your room?" Answer: "Yes." Call the number and see if anyone answers, etc. The point is a human IS in a specific place and can act there. An LLM AI isn't anyplace in particular.


And I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking
At other times you say consciousness is just how data feels when being processed. It's processed in your brain... which has a definite location.


about yet another place, then where exactly is the position of your mind?
I just asked "Where are you?"  Not "Where is your mind?"


I think it's a nonsense question, because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way.
And those atoms have a location in order to interact.

Brent

So asking a question like that is like asking where "big" is located, or the color yellow.




 

Jason Resch

Jul 13, 2024, 5:14:25 PM
to Everything List


On Sat, Jul 13, 2024, 4:18 PM Brent Meeker <meeke...@gmail.com> wrote:


On 7/13/2024 4:07 AM, John Clark wrote:
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker <meeke...@gmail.com> wrote:

An AI needs to play dumb in order to fool a human into thinking it is human. Don't you find that fact to be compelling? 

No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"

If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing".
Which could easily be checked in real time. Any one question won't resolve whether it's a person or not, but a sequence can provide good evidence. Next question: "Is there a phone in your room?" Answer: "Yes." Call the number and see if anyone answers, etc. The point is a human IS in a specific place and can act there. An LLM AI isn't anyplace in particular.


The reason for conducting the test by text (rather than in person with an android body) was to prevent external clues from spoiling the result. To be completely fair, perhaps the test needs to be amended to judge between an AI and an uploaded human brain.

Jason


And I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking
At other times you say consciousness is just how data feels when being processed. It's processed in your brain... which has a definite location.

about yet another place, then where exactly is the position of your mind?
I just asked "Where are you?"  Not "Where is your mind?"

I think it's a nonsense question, because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way.
And those atoms have a location in order to interact.

Brent

So asking a question like that is like asking where "big" is located, or the color yellow.




 

John Clark

Jul 13, 2024, 5:18:50 PM
to everyth...@googlegroups.com
On Sat, Jul 13, 2024 at 4:18 PM Brent Meeker <meeke...@gmail.com> wrote:

>> If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing".
 
Which could easily be checked in real time. 

Yes, you can easily check if it's snowing in Vancouver right now, so why couldn't the AI do the same thing? If you insist that the human interrogator is allowed to have access to the Internet but the AI is not, then you are no longer talking about the Turing Test.


>> I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking

At other times you say consciousness is just how data feels when being processed. 

Correct.  

It's processed in your brain... which has a definite location.
 
But the position is not unique; data can be processed anywhere and the result is the same. And if exactly the same data is being processed in exactly the same way at two different places, or even 1 million different places, then only one consciousness is produced. Besides, the AI may not even know or care where its data processors are. And if you aren't consciously aware that your data is being processed in Vancouver, Canada, then what sense does it make to say that your consciousness is located in Vancouver even though you don't consciously know it? If you're thinking about Peking at the time, it would be slightly less ridiculous to say that your consciousness is located in China rather than Canada. But only slightly less ridiculous.
 
I just asked "Where are you?"  Not "Where is your mind?"

If "you" are not Brent Meeker's mind then what are "you"? Asking where consciousness is located is like asking where Beethoven's ninth Symphony is located. A proper noun has a unique position but an adjective does not, and I am an adjective, I am the way atoms behave when they are organized in a Johnkclarkian way.

>> I think it's a nonsense question, because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way.

And those atoms have a location in order to interact.

But there is no unique location; any place will do, and any atoms will work, because all carbon atoms are identical; atoms don't have your name engraved on them. The location of the interaction has no effect on the intelligence or the consciousness, although the location of the sense organs and the hands could.
 See what's on my new list at  Extropolis
jwc
