
Atheist evolutionary accounts (contd)


someone

Jul 2, 2015, 10:51:24 AM
to talk-o...@moderators.isc.org
Having problems with posts coming through again (I posted 2 responses and neither came through), so I'm posting a continuation again (a moderation issue?).

Quick recap: Burkhard hadn't answered a question, and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J

SortingItOut had stepped in and replied, and I did a quick recap of the conversation:
https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ

Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out he/she had failed to answer. This post is a continuation from that:
https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ

I'll quote the scenario and the question:
---
In this harder to not-understand-the-point scenario, imagine you are in a box with an LED display which will not display any symbol initially, and then display either a "1" or a "0", go off again, then display either a "1" or a "0", and then go off again. Imagine this on-off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if, when the LED comes on, it displays a "1", and to shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.

Now imagine that, unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
---
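(For concreteness, the scenario can be sketched in a few lines of Python. This is only an illustration of the setup as stated; the particular list of hidden causes and the names used are my own assumptions, not part of the scenario.)

import random

# Hypothetical hidden causes; the scenario only says the cause changes each time.
CAUSES = ["coin flip", "timer parity", "operator's whim"]

def shout(digit):
    # The instructed rule consults only the displayed digit.
    return "I'm in a blue box" if digit == 1 else "I'm in a green box"

for cycle in range(5):
    cause = CAUSES[cycle % len(CAUSES)]  # changes each cycle, unknown to the occupant
    digit = random.choice([0, 1])        # whatever that cause happened to produce
    print("cycle", cycle, "- hidden cause:", cause, "- LED:", digit, "->", shout(digit))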

Perhaps this time Burkhard could answer.

Also, in the recap I had stated that Burkhard hadn't answered the question in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/WJfMAo-NfzQJ (second paragraph) about how he/she would go about measuring whether the universe was a zombie one or not, to get the correct input.

Burkhard responded that he/she had answered it.

My response: Could you (Burkhard) please post a link, as I was interested in how you were going to suggest you'd do it.


Burkhard

Jul 2, 2015, 12:26:24 PM
to talk-o...@moderators.isc.org
someone wrote:
> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>
> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>
> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>
> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ

As a matter of fact, I answered it twice - your thought experiment has now
become simply Searle's Chinese room argument, and shares its problems
and, most importantly, its limitations. That is, it is now so far away from the
way in which real people, or for that matter machines, interact with their
environment that it takes away any point that your original two-universe
model may have had.

Inez

Jul 2, 2015, 12:36:24 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 7:51:24 AM UTC-7, someone wrote:
> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>
> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>
> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>
> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>
> I'll quote the scenario and the question:
> ---
> In this harder to not-understand-the-point scenario, imagine you are in a box with an LED display which not display any symbol initially and then display either a "1" or a "0", go off again, and then display either a "1" or a "0", and then go off again. Imagine this on off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if when the LED comes on it displays a "1" and shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.
>
> Now imagine that unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
> ---

I am sad that this is an example of your "harder to misunderstand" writing. Perhaps you should try reading your posts aloud to yourself before hitting send.

In the immediate sense, you are basing your shouting on the digit displayed, so even if you do know what is generating those digits, you're not basing what you shout on that knowledge. In a less immediate sense, the thing that generates the digits is the first part of the causal stream that ends in your shouting, and so you are basing what you shout on it, even if you don't know what it is.

someone

Jul 2, 2015, 12:46:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 5:26:24 PM UTC+1, Burkhard wrote:
> someone wrote:
> > Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
> >
> > Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
> >
> > SortingItOut had stepped in and replied and I did a quick recap of the conversation.
> > https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
> >
> > Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
> > https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>
> Matter of fact, I answered it twice - your thought experiment has now
> become simply Searle's Chinese room argument, and shares its problems
> and most importantly limitations That is it is now so far away from the
> way in which real people or for that matter machines interact with their
> environment that it takes away any point that your original two universe
> model may have had.
>

That is a comment about the thought experiment but not an answer to the question, which I wrote out again below.


> >
> > I'll quote the scenario and the question:
> > ---
> > In this harder to not-understand-the-point scenario, imagine you are in a box with an LED display which not display any symbol initially and then display either a "1" or a "0", go off again, and then display either a "1" or a "0", and then go off again. Imagine this on off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if when the LED comes on it displays a "1" and shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.
> >
> > Now imagine that unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
> > ---
> >
> > Perhaps this time Burkhard could answer.
> >

But again you didn't.


> > Also in the recap I had stated that Burkhard hadn't answered the question in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/WJfMAo-NfzQJ (second paragraph) about how he/she would go about measuring whether the universe was a zombie one or not to get the correct input.
> >
> > Burkhard responded that he/she had answered it.
> >
> > My response: Could you (Burkhard) please post a link, as I was interested in how you were going to suggest you'd do it.
> >
> >

I notice you didn't post a link or give an answer here.

John Harshman

Jul 2, 2015, 1:01:23 PM
to talk-o...@moderators.isc.org
On 7/2/15, 9:41 AM, someone wrote:
> On Thursday, July 2, 2015 at 5:26:24 PM UTC+1, Burkhard wrote:
>> someone wrote:
>>> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>>>
>>> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>>>
>>> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>>>
>>> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>>
>> Matter of fact, I answered it twice - your thought experiment has now
>> become simply Searle's Chinese room argument, and shares its problems
>> and most importantly limitations That is it is now so far away from the
>> way in which real people or for that matter machines interact with their
>> environment that it takes away any point that your original two universe
>> model may have had.
>>
>
> That is a comment about the thought experiment but not an answer to the question, which I wrote out again below.

He was pointing out that the question is meaningless and unanswerable.
An example which, if you don't answer yes or no, will completely
invalidate everything you have ever said: Have you stopped beating your
dead horse?


someone

Jul 2, 2015, 1:16:24 PM
to talk-o...@moderators.isc.org
What were you thinking was meaningless or unanswerable about it?

John Harshman

Jul 2, 2015, 2:11:24 PM
to talk-o...@moderators.isc.org
You didn't answer my question. You first.

someone

Jul 2, 2015, 2:21:24 PM
to talk-o...@moderators.isc.org
Your question wasn't meaningless or unanswerable. It was just loaded with a false presumption (that I was beating my dead horse), the fallacy known as the loaded question. I could answer your question by stating that I haven't been beating any dead horses, and that my not answering yes or no to it doesn't invalidate anything I've ever said; you are simply wrong.

What fallacy were you suggesting my question contained?

RSNorman

Jul 2, 2015, 2:31:23 PM
to talk-o...@moderators.isc.org
On Thu, 2 Jul 2015 09:41:29 -0700 (PDT), someone
<glenn....@googlemail.com> wrote:

>On Thursday, July 2, 2015 at 5:26:24 PM UTC+1, Burkhard wrote:
>> someone wrote:
>> > Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>> >
>> > Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>> >
>> > SortingItOut had stepped in and replied and I did a quick recap of the conversation.
>> > https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>> >
>> > Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
>> > https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>>
>> Matter of fact, I answered it twice - your thought experiment has now
>> become simply Searle's Chinese room argument, and shares its problems
>> and most importantly limitations That is it is now so far away from the
>> way in which real people or for that matter machines interact with their
>> environment that it takes away any point that your original two universe
>> model may have had.
>>
>
>That is a comment about the thought experiment but not an answer to the question, which I wrote out again below.

I thought if I left for more than a week, all this nonsense would be
over by the time I got back. No such luck.

Here is your problem in its entirety: the question of consciousness,
even defining exactly what it is, let alone devising tests for it, is
extremely complex and contentious and has been debated for centuries
if not millennia. People far more intelligent and knowledgeable about
the problem than you have written volumes about it.

Instead you insist on proposing one trivial and virtually
unintelligible construct and insist we answer only your particular
version, couched only in your specific language. The notion of zombies
is a well-known philosophic concept, but yours differs. The question of
whether consciousness is computable or materialistic is a well-discussed
problem, but you insist on formulating everything in terms of
NAND gates and imaginary universes with very different physics but
identical physical laws.

Here is the problem restated in another way. Although what is meant
by "consciousness" differs enormously from author to author, let's
assume that all you mean is the introspective sense that "I am me, a
real thing in the real world, and can act as I choose as a free
agent." I can say that to myself. If you say that to me I can assume
for a number of reasons that you really mean it. But if a robot says
it to me my first reaction would be "You are merely programmed to do
that. You don't or can't 'really' have an inner sense. So you are
simply lying."

So if the robot reports back that consciousness is present, whether in
the zombie world or what you describe as "the world the zombie world
parodies" (for some mysterious reason of your own), it doesn't matter.
There is no reason to believe anything the robot says or does, NAND
gates or no.

Inez

Jul 2, 2015, 2:41:23 PM
to talk-o...@moderators.isc.org
I for one am still unable to see what you think the problem for evolution is. What are you trying to get at?

You propose a zombie robot that can emulate a conscious robot by having a physically identical brain. Therefore what? Consciousness can't be physical because otherwise the zombie robot wouldn't be a zombie? But you made up that zombies are possible! They aren't, so there's no need to explain their existence!

Message has been deleted

someone

Jul 2, 2015, 2:56:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 7:31:23 PM UTC+1, RSNorman wrote:
> On Thu, 2 Jul 2015 09:41:29 -0700 (PDT), someone
I'm not sure why you think it matters whether the way I outline the issue is original or not. But you don't seem to have understood it. The issue is *how could* a robot respond to whether it is a zombie universe or not. If it wasn't possible for a robot to respond to whether it is a zombie universe or not, then it isn't possible that I am simply a biological robot. Which means that explaining how evolution could have built a biological robot wouldn't be a sufficient explanation for the reality of our situation.

NAND gates are useful as they are functionally complete.
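(A minimal Python sketch of what functional completeness means, assuming nothing beyond the NAND rule described in this thread; the helper names are illustrative only.)

def nand(a, b):
    # "On" (1) unless both inputs are "on", in which case "off" (0).
    return 0 if (a == 1 and b == 1) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# NOT, AND and OR together suffice for any Boolean function, which is why
# NAND alone can build an arbitrary logic circuit.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| NOT a =", not_(a), "| a AND b =", and_(a, b), "| a OR b =", or_(a, b))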

I thought I'd explained pretty well what I meant by consciousness, by explaining what I meant by a zombie. https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
If you can understand the difference between a zombie and a human, you know what I mean by consciousness. And it is quite hard to pretend that you can't imagine what an atheist might think death would be like (a total absence of conscious experience).

The recap post I wrote today also gives another synopsis, so hopefully, as people start to get it, blaming the total failure of the atheists on this forum to handle the point on me not explaining it clearly enough will become less and less convincing, and we can watch them flounder (though I admit quite a few of my posts do have a few mistakes, which might make people think that English isn't my first language, even though it is).

https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ

Jimbo

Jul 2, 2015, 3:11:24 PM
to talk-o...@moderators.isc.org
On Thu, 2 Jul 2015 11:48:56 -0700 (PDT), someone
<glenn....@googlemail.com> wrote:

>On Thursday, July 2, 2015 at 7:31:23 PM UTC+1, RSNorman wrote:
>> On Thu, 2 Jul 2015 09:41:29 -0700 (PDT), someone
>I'm not sure what difference you think it matters whether the way I outline the issue is original or not. But you haven't seemed to have understood it. The issue is *how could* a robot respond to whether it is a zombie universe or not. If it wasn't possible for a robot to respond to whether it is a zombie universe or not, then it isn't possible that I am simply a biological robot. Which means that explaining how evolution could have built a biological robot wouldn't be a sufficient explanation for the reality of our situation.
>
>NAND gates are useful as they are functionally complete.
>
>I thought I'd explained pretty well what I meant by consciousness, by explaining what I meant by a zombie. https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
>Because if you can understand the difference between a zombie and a human you know what I am meaning by consciousness. And it is quite hard to pretend that you can't imagine what an atheist might think death would be like (an total absence of conscious experience).

If atheism is simply a lack of belief in gods, why do you assume that
an atheist must think that death is a total absence of conscious
experience? Perhaps the atheist believes in reincarnation, or
transmigration of souls or that one's consciousness merges into
nature, or any of many other possibilities.

>The recap post I wrote today also gives another synopsis, so hopefully eventually the total failure of the atheists on this forum to be able to handle the point being continually blamed on me will become less and less convincing (though I admit, quite a few of my posts do have a few mistakes in which might make people think that English isn't my fist language, even though it is).
>
>https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>

someone

Jul 2, 2015, 3:16:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 8:11:24 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 11:48:56 -0700 (PDT), someone
I did qualify the statement with "might".

John Harshman

Jul 2, 2015, 3:26:23 PM
to talk-o...@moderators.isc.org
A question with a false premise isn't answerable. Witness the fact that
you still haven't answered it yes or no. What you have done is to reject
the question.

> What fallacy were you suggesting my question contained?

Burkhard has told you several times. I merely point out that you can't
demand he answer a question he rejects, just as you have not answered my
question.

Jimbo

Jul 2, 2015, 3:41:23 PM
to talk-o...@moderators.isc.org
Can you give a concise definition of consciousness without zombies?

someone

Jul 2, 2015, 3:51:23 PM
to talk-o...@moderators.isc.org
I gave an answer, albeit not a direct yes or no answer. In the answer I pointed out a false premise that I rejected.


> > What fallacy were you suggesting my question contained?
>
> Burkhard has told you several times. I merely point out that you can't
> demand he answer a question he rejects, just as you have not answered my
> question.

I didn't see him point out any fallacy, or point out any premise that it contained that he rejected. Why don't you point out what fallacy you think it contains, or point out the premise that you think he rejected? I'll write it again here (with some grammar corrections), as you snipped it.

---
In this harder to not-understand-the-point scenario, imagine you are in a box with an LED display which will not display any symbol initially, and then display either a "1" or a "0", go off again, and then display either a "1" or a "0", and then go off again. Imagine this on off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if when the LED comes on it displays a "1" and shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.

Now imagine that, unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
---

someone

Jul 2, 2015, 3:56:24 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 8:41:23 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 12:13:50 -0700 (PDT), someone
Some atheists might imagine that when they die there would be a total absence of experience. To be conscious is to experience something other than such a total absence of experience.

John Harshman

Jul 2, 2015, 3:56:24 PM
to talk-o...@moderators.isc.org
If you call that an answer, then Burkhard's comment is an answer to your
question.

> I didn't see him point out any fallacy, or point out any premise that
> it contained that he rejected. Why don't you point out what fallacy
> you think it, or point out the premise that you think he rejected.
> I'll write it again here (with some grammar corrections), as you
> snipped it.

> --- In this harder to not-understand-the-point scenario, imagine you
> are in a box with an LED display which will not display any symbol
> initially, and then display either a "1" or a "0", go off again, and
> then display either a "1" or a "0", and then go off again. Imagine
> this on off cycle will happen several times. Imagine also that you
> are instructed to shout out "I'm in a blue box" if when the LED comes
> on it displays a "1" and shout out "I'm in a green box" if the LED
> comes on and it displays a "0". Imagine you don't know the colour of
> the box you are in, and that you are able to and do follow the
> instructions.

> Now imagine that unknown to you, what caused the LED to display a "0"
> or a "1" changed each time. Given that you wouldn't know what caused
> the LED to display a "0" or "1", would you agree that you weren't
> basing what you shouted out on what caused the LED to display a "0"
> or "1" each time?

I would agree. But despite this being a harder to not understand the
point scenario, I don't understand the point. What is the point?

Jimbo

Jul 2, 2015, 4:11:23 PM
to talk-o...@moderators.isc.org
Are grasshoppers conscious? How about dogs and chimpanzees? Can you
think of any criteria by which such questions can be answered?

someone

Jul 2, 2015, 4:11:23 PM
to talk-o...@moderators.isc.org
Why did you feel that his/her failing to answer, and not pointing out any fallacy or false premise, was equivalent to responding and pointing out a fallacy and false premise?


> > I didn't see him point out any fallacy, or point out any premise that
> > it contained that he rejected. Why don't you point out what fallacy
> > you think it, or point out the premise that you think he rejected.
> > I'll write it again here (with some grammar corrections), as you
> > snipped it.
>
> > --- In this harder to not-understand-the-point scenario, imagine you
> > are in a box with an LED display which will not display any symbol
> > initially, and then display either a "1" or a "0", go off again, and
> > then display either a "1" or a "0", and then go off again. Imagine
> > this on off cycle will happen several times. Imagine also that you
> > are instructed to shout out "I'm in a blue box" if when the LED comes
> > on it displays a "1" and shout out "I'm in a green box" if the LED
> > comes on and it displays a "0". Imagine you don't know the colour of
> > the box you are in, and that you are able to and do follow the
> > instructions.
>
> > Now imagine that unknown to you, what caused the LED to display a "0"
> > or a "1" changed each time. Given that you wouldn't know what caused
> > the LED to display a "0" or "1", would you agree that you weren't
> > basing what you shouted out on what caused the LED to display a "0"
> > or "1" each time?
>
> I would agree. But despite this being a harder to not understand the
> point scenario, I don't understand the point. What is the point?

If you couldn't see any fallacy or false premise, and managed to understand and answer it without any problem, why did you state that she/he had pointed out that the question was unanswerable or meaningless, and give as an analogy a question of a well-known fallacious type which contained a false premise?

For the context, look at the recap post mentioned in the history.
https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ

Bill Rogers

Jul 2, 2015, 4:21:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 10:51:24 AM UTC-4, someone wrote:

> In this harder to not-understand-the-point scenario, imagine you are in a box with an LED display which not display any symbol initially and then display either a "1" or a "0", go off again, and then display either a "1" or a "0", and then go off again. Imagine this on off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if when the LED comes on it displays a "1" and shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.
>
> Now imagine that unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
> ---
>
It doesn't matter why the LED displays 0 or 1. My instructions are only based on the particular number that I see. I'm certainly not instructed to look at the color of the box and say what that color is. So of course, when I see a 0 I'll say what I was instructed to say when I see a zero, and when I see a 1 I'll say whatever I was instructed to say when I see a 1.

So what?

Burkhard is right: this is a weak and poorly fleshed-out version of Searle's Chinese Room (which is itself an advertisement for a point of view rather than an argument).

Ernest Major

Jul 2, 2015, 4:26:24 PM
to talk-o...@moderators.isc.org
I expect most of your readers, whether atheist or otherwise, will long
ago have concluded that you're trying to argue that if dualism is true
then dualism is true, but trying to camouflage the premise in the hope
that people won't notice the circularity of the argument.

If this is not the case then you should give up the pseudo-Socratic
dialog and lay out your argument properly. (It might also be a good idea
for you to explain what account you are ascribing to "atheist
evolutionists", so we can make a judgement as to whether it is a
strawman or not.)

You might also like to consider whether your behaviour here is
consistent with the "loving selfless path" you advocate in another
thread, and then stop doing it. (Demanding that people answer your
questions while refusing to engage with their questions and arguments is
not consistent with the golden rule; nor is treating them as objects to
be manipulated into giving an answer which you can interpret as
supporting your argument.)

--
alias Ernest Major

someone

Jul 2, 2015, 4:26:27 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 9:11:23 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 12:55:13 -0700 (PDT), someone
Well, they would be if they experienced something other than a total absence of experience. That is the criterion.

What is it you are asking me to do, guess at what experimental differences there might be? I could do so if you like, but my guess wouldn't become the definition or the criterion; it would just be a guess. So I'm not really sure where you would be going with it, but if you want me to just guess, I'll do so.

RSNorman

Jul 2, 2015, 4:31:23 PM
to talk-o...@moderators.isc.org
As is always the case, instead of actually considering what I wrote,
you simply recast everything into your original formulation, which we
all here reject utterly.

You tell me, if you are indeed capable of actually answering a
question: why should we believe anything a robot says? Of course we
might have great confidence in the robot if we first had an
independent way of verifying that what it says is true, but if the
subject is consciousness, that destroys the whole premise of having
a robot decide. If what it says agrees with what we feel or "know",
then we say "Aha! You see, the robot is right!" If what it says
disagrees, then we say "I told you you can't trust the robot!" No
matter what, the robot's answer is in no way an answer to the problem.

RSNorman

Jul 2, 2015, 4:36:23 PM
to talk-o...@moderators.isc.org
"someone" clearly does not any other person's formulation of anyone
else's argument. Tell me what happens if you load the Chinese Room
with NAND gates!

someone

Jul 2, 2015, 4:41:23 PM
to talk-o...@moderators.isc.org
Maybe, if they were the type that couldn't follow it and then just thought making up a strawman was OK.

someone

Jul 2, 2015, 4:46:22 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 9:31:23 PM UTC+1, RSNorman wrote:
> On Thu, 2 Jul 2015 11:52:46 -0700 (PDT), someone
I think if people characterise what I am saying incorrectly, and therefore create a strawman argument, like you did, I am entitled to point it out.

> You tell me, if you are indeed capable of actually ansering a
> question, why we should believe anything a robot says? Of course we
> might have great confidence in the robot if we first have an
> independent way of verifying that what it says is true but then if the
> subject is consciousness, that destroys the whole premise about having
> a robot decide. If what it says agrees with what we feel or "know"
> then we say "Aha! You see the robot is right!". If what it says
> disagrees then we say "I told you you can't trust the robot!" No
> matter what, the robot answer is in no way an answer to the problem.

I'm not suggesting you should believe a robot to be conscious. I wouldn't think anyone would be given the experience of being a robot. What would be the point?

John Harshman

Jul 2, 2015, 4:51:23 PM
to talk-o...@moderators.isc.org
[snip non-answer]

Please answer my question. What is the point?

Jimbo

Jul 2, 2015, 4:51:23 PM
to talk-o...@moderators.isc.org
I'm trying to determine what point you're trying to make with your
zombies and alternate universe. Could you state briefly what it is
that you're trying to demonstrate?

Inez

Jul 2, 2015, 5:01:23 PM
to talk-o...@moderators.isc.org
I want to add my voice to the growing chorus of requests that you just state the point you're trying to make directly, rather than leading us there with questions. The question method is not working and has never worked for you. If you are a conscious person, do what a conscious person would do and try a more direct approach rather than just doing the same thing that doesn't work over and over again.

someone

Jul 2, 2015, 5:01:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 9:51:23 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 13:22:35 -0700 (PDT), someone
That robots couldn't base a response on a conscious experience, whereas I can. Therefore I cannot simply be a biological robot. So while atheist evolutionary theories might be able to explain how a biological robot could be built, that isn't sufficient to explain the reality of the situation that we find ourselves in. And it doesn't seem right that that should be hidden from children when they are being educated. It should be explicitly pointed out.

John Harshman

Jul 2, 2015, 5:16:22 PM
to talk-o...@moderators.isc.org
Excellent. How do you know that robots couldn't base a response on a
conscious experience?

Inez

Jul 2, 2015, 5:16:22 PM
to talk-o...@moderators.isc.org
Sure they could! The only reason one of them can't is because you artificially defined it as not being able to. You are assuming that it is possible to have a robot with a NAND-gate configuration identical to a conscious robot without it being conscious. You cannot show that this is possible in real life.

Jimbo

Jul 2, 2015, 5:21:25 PM
to talk-o...@moderators.isc.org
But you seem just to be presenting a variant of Searle's Chinese-room
thought experiment. Apparently you're claiming that it represents an
accurate model for the human behavior in interaction with varied
environments. What evidence can you present that it's an accurate
model? Can you use it to make general and specific predictions about
human behavior?

Sneaky O. Possum

Jul 2, 2015, 5:41:22 PM
to talk-o...@moderators.isc.org
Bill Rogers <broger...@gmail.com> wrote in
news:857dae25-bd1d-41a1...@googlegroups.com:
It ain't *that* bad. Searle has argued that artificial intelligence is
feasible, with the proviso that an artificial thinking machine will have
to mimic the architecture of an organic one in ways that a binary
processor doesn't. By his own account, he formulated the Chinese Room
thought-experiment to demonstrate that a binary processor functions in an
essentially different way from an organic brain, and will never become a
brain regardless of how fast it gets or how much power it has.

At the time he formulated the thought-experiment, people were arguing
that continual increases in the processing speed of computers would
eventually result in a conscious computer: there may still be some people
who hold that view.

Searle may be wrong, but I've yet to read a convincing rebuttal of his
actual claims. (A number of people have convincingly rebutted claims he
never actually made, but the utility of such rebuttals is questionable.)
--
S.O.P.

RSNorman

Jul 2, 2015, 5:51:24 PM
to talk-o...@moderators.isc.org
I think people are quite capable of characterising what you say
correctly and putting exactly the same concept into different terms
but you cannot accept that and insist it must be a strawman.

I am entitled to point out that everyone here seems to accuse you of
being incredibly obtuse and failing to say what you really want to
say.
>
>> You tell me, if you are indeed capable of actually ansering a
>> question, why we should believe anything a robot says? Of course we
>> might have great confidence in the robot if we first have an
>> independent way of verifying that what it says is true but then if the
>> subject is consciousness, that destroys the whole premise about having
>> a robot decide. If what it says agrees with what we feel or "know"
>> then we say "Aha! You see the robot is right!". If what it says
>> disagrees then we say "I told you you can't trust the robot!" No
>> matter what, the robot answer is in no way an answer to the problem.
>
>I'm not suggesting you should believe a robot to be conscious. I'm wouldn't think anyone would be given the experience of being a robot. What would be the point?

So you can't answer a simple question: why would you believe what a
robot reports about consciousness?

someone

Jul 2, 2015, 5:56:24 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 10:21:25 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 13:58:27 -0700 (PDT), someone
I've allowed for the possibility that the robot passes the Turing Test; is that what you are referring to?

Do you agree that the robot's behaviour would be determined by the state of its NAND gate arrangement and the inputs it received?

What, in the atheist story, has an arrangement of NAND gates, whatever its state and whatever its inputs, got to do with whether reality is a zombie universe or not? The thought experiment has whatever arrangement, state, and inputs there are reflected in the zombie universe, to highlight the issue.

John Harshman

Jul 2, 2015, 6:11:24 PM
to talk-o...@moderators.isc.org
There's your problem. You are assuming that the behavior of an
arrangement of NAND gates must be the same in the zombie universe as in
our own, which is nothing more than a claim that consciousness is not
the result of physical processes, i.e. an assumption of dualism. In
other words, you assume the conclusion.

To put it yet another way, you are assuming that the only difference
between the zombie universe and the universe in which consciousness is
possible is that consciousness is not possible; but if consciousness is
a phenomenon in a material universe, that assumption would be incorrect,
and you couldn't separate what makes consciousness possible from what
makes anything else possible.

Bill Rogers

Jul 2, 2015, 6:36:23 PM
to talk-o...@moderators.isc.org
Searle's claim was that no computer running a program based on formal manipulation of symbols can have either understanding or intentionality, because no formal manipulation of symbols (syntactics) can give the symbols meaning (semantics). His argument just doesn't even get to that conclusion, and it certainly does not get to the broader conclusions that people sometimes attribute to him.

Here are a few problems with his paper:

1. The person in the box is a distraction, put there for rhetorical purposes to force the reader's conclusion in the desired direction without an actual argument. We expect the person to be conscious. But the person in the box is acting as a small part of the circuitry in a computer. We would not expect either a small collection of semiconductors or a handful of neurons to be conscious. It would have been a more honest argument if Searle simply left out the man in the box and just talked about a computer.

2. The "algorithmic program" that Searle describes is simply a look-up table. Lots of programs are far more complex than that, and I doubt anyone thinks that a look-up table would emulate consciousness. Searle is really describing an over-simplified program designed to mimic conversation rather than a program designed to act conscious. And yet he wants the reader to treat that look-up table as the paradigm for all algorithmic programs.

3. There is no ongoing interaction between the man/box system and the world, and no self-reference, no process by which the man/box system monitors and models its own internal states (and that is certainly something that could be done algorithmically).

4. Searle's key claim is that no manipulation of symbols (syntactics) can produce meaning (semantics), but he does not make much of an argument to support this claim, which probably seems self-evident to him. Specifically, he does not ask where meaning comes from in humans. He merely attributes it to unspecified characteristics of the brain.

5. Finally, he does not explain what there is in the brain that is non-algorithmic or non-binary. What exactly is there in the timing of the firing of neurons that cannot be broken down to binary operations? Of course the description of them as binary operations would be very complex considering summation of lots of inputs, the global level of different neurotransmitters and neuroactive drugs, etc. But he doesn't offer a good explanation of what there is specifically non-algorithmic in the brain.

The Chinese box is basically a trick to force the reader to a conclusion by tweaking his intuitions in the right way. It's not a real argument.

someone

Jul 2, 2015, 6:41:23 PM
to talk-o...@moderators.isc.org
The behaviour of an arrangement of NAND gates is a consequence of the behaviour of a single NAND gate, i.e. that a NAND gate will give an "on" output signal unless both its input signals are "on", in which case it will give an "off" signal. Unless you are suggesting that a single NAND gate behaves the way it does because of what the conscious experience is like, I don't see where in your story you are suggesting the behaviour of any single NAND gate in the arrangement is based on the arrangement itself.

Suppose each NAND gate was a box, and inside there were two LED displays, which would light up for 10 seconds every minute, displaying either the word "on" or "off", and the person inside was instructed, within 15 seconds of the displays lighting up, to press the "on" button *unless* both displays displayed the word "on", in which case they were to press the "off" button. Imagine that the people inside were capable of following those instructions. You can imagine you were one if you like. Are you suggesting that the resultant behaviour of the NAND gate arrangement would depend on what pistachio ice cream tasted like? Or to put it another way: if you were one of the people in a NAND gate, would you be basing your behaviour on what the NAND gate arrangement could be imagined to represent?
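(For concreteness, here is a minimal Python sketch of the person-in-a-box picture: each "person" applies the instruction card and nothing else, so the network's outputs follow from the wiring and the inputs alone. The particular wiring, an XOR built from four NAND boxes, is my own illustrative choice, not from the post.)

def box_person(left, right):
    # The instruction card: press "on" unless both displays show "on".
    return "off" if (left == "on" and right == "on") else "on"

def xor_network(a, b):
    # Four rule-following box-people wired as the standard NAND-built XOR.
    m = box_person(a, b)
    return box_person(box_person(a, m), box_person(m, b))

for a in ("off", "on"):
    for b in ("off", "on"):
        print(a, b, "->", xor_network(a, b))  # determined by inputs and wiring alone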

Also, I am entertaining, for discussion purposes, the idea that consciousness is a feature of a physical universe. If I had been suggesting an identical physical universe in which there wasn't consciousness, then you would have a point, but I'm not. I'm just considering a different physical universe with different physical features. The universe in question, the zombie universe, is physically different from the universe it parodies, because it has different physical features (it doesn't have the consciousness feature). This has been explained numerous times, and was in the original post.
https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ

Jimbo

Jul 2, 2015, 6:51:24 PM
to talk-o...@moderators.isc.org
I'm saying that your thought experiment appears to have no explanatory
power. Until you can demonstrate that it actually models some aspect
of the behavior of real organisms, it's just a pointless fantasy.

someone

Jul 2, 2015, 7:01:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 11:51:24 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 14:55:02 -0700 (PDT), someone
The thought experiment isn't designed to explain anything, or to model anything. It is simply meant to point out why certain stories don't work. And why shouldn't children be informed about problems with certain stories?

John Harshman

Jul 2, 2015, 7:31:23 PM
to talk-o...@moderators.isc.org
Nobody is claiming that a single NAND gate shows consciousness, so your
imagined scenario is pointless. The claim is that some physical
arrangement of matter shows consciousness. And I further hypothesize
that some properly designed and programmed digital computer (which is
what I suppose your NAND gates are supposed to represent) could show
consciousness. But that would be a property arising from a quite complex
apparatus and, I hasten to add, not just any complex apparatus.

I'm presuming you can imagine a digital computer that could have the
proper sensory inputs to make some particular NAND gate, or whatever,
return yes iff the system detected the chemical cues of pistachio ice
cream. I have no reason to believe that a more complex system couldn't
also think of that return as "the taste of pistachio ice cream". What's
your argument against that possibility?

Now, if there were such a conscious computer, its equivalent in a zombie
universe wouldn't be conscious, by definition. But why? Since the
consciousness is a product and consequence of the workings of the
machine, there must be something different about that machine's workings
in the zombie universe. You can't just say it's the same but leads to a
different result.

> Also I am entertaining, for discussion purposes, that consciousness
> is a feature of a physical universe. If I had been suggesting an
> identical physical universe in which there wasn't consciousness, then
> you would have a point, but I'm not. I'm just considering different
> physical universe with different physical features. The universe in
> question, the zombie universe, is physically different from the
> universe it parodies, because it has different physical features (it
> doesn't have the consciousness feature). This has been explained
> numerous times, and was in the original post.

Consciousness isn't a feature you can just turn on and off. It's a
consequence (in the materialist worldview) of a particular arrangement
of stuff. Your notion is incoherent.

> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ

That seems like a silly bit of entertainment. The consciousness isn't a
feature of the universe. We have no evidence that the universe is
conscious, that atoms are conscious, that plants, bacteria, and most
animals are conscious (unless you leach the term of meaning). We have
evidence that humans and, to a lesser extent, some of our relatives are
conscious. And all evidence suggests that the phenomenon is located in
our brains, as a pattern of firing neurons. Deal with it.

Jimbo

Jul 2, 2015, 7:31:23 PM
to talk-o...@moderators.isc.org
You previously said the purpose was to demonstrate that robots can't
base a response on conscious experience whereas you can, and therefore
you can't be a biological robot.

In order to support this claim by means of your thought experiment you
would need to demonstrate that the thought experiment models some
aspect of the behavior of real organisms. You haven't done that. You
appear unable to link your thought experiment to the behavior of real
organisms. When I asked if you had some means of determining whether
organisms such as a grasshopper, a dog and a chimpanzee are conscious,
you replied that your answer would just be your personal opinion. So
your thought experiment appears to have no application to the behavior
of real organisms.

You may say that the thought experiment isn't designed to explain
anything or model anything, but that means it has no explanatory
power. Perhaps you could make a genuine evidence-based argument that
biological robots can't develop consciousness, but you haven't even
started to do that. Unless you can use it to model real behavior
patterns of real biological organisms, it's simply irrelevant.

someone

Jul 2, 2015, 7:56:23 PM
to talk-o...@moderators.isc.org
The point was that each and every NAND gate in the arrangement behaved the way it did because the person in the box was following the instructions, and I just couldn't see where in the story the taste of pistachio ice cream had any bearing on how any of them behaved, such that the behaviour could be said to be based on it. Perhaps you could explain.

> Now, if there were such a conscious computer, its equivalent in a zombie
> universe wouldn't be conscious, by definition. But why? Since the
> consciousness is a product and consequence of the workings of the
> machine, there must be something different about that machine's workings
> in the zombie universe. You can't just say it's the same but leads to a
> different result.
>

I was assuming the consciousness was being said to be a property of the physical thing, in the example a property of the physical computer.

I was also assuming different physical things could have different properties, and in the zombie universe all that is being imagined is that it is a different physical thing with different physical properties.

So I'm not saying it is the same, as I explained, but you seem to ignore that. An example I have used before would be matter and antimatter. You could imagine there was a universe which was predominantly matter, and another which was predominantly antimatter. Underlying the objects, the physical would be different, but it would still follow the same laws of physics; there wouldn't be an experiment to tell the difference (I think that is the case anyway, though I might be wrong; if so, ignore that as an example).


> > Also I am entertaining, for discussion purposes, that consciousness
> > is a feature of a physical universe. If I had been suggesting an
> > identical physical universe in which there wasn't consciousness, then
> > you would have a point, but I'm not. I'm just considering different
> > physical universe with different physical features. The universe in
> > question, the zombie universe, is physically different from the
> > universe it parodies, because it has different physical features (it
> > doesn't have the consciousness feature). This has been explained
> > numerous times, and was in the original post.
>
> Consciousness isn't a feature you can just turn on and off. It's a
> consequence (in the materialist worldview) of a particular arrangement
> of stuff. Your notion is incoherent.
>

It's incoherent that different physical universes could have different
properties?

> > https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
>
> That seems like a silly bit of entertainment. The consciousness isn't a
> feature of the universe. We have no evidence that the universe is
> conscious, that atoms are conscious, that plants, bacteria, and most
> animals are conscious (unless you leach the term of meaning). We have
> evidence that humans and, to a lesser extent, some of our relatives are
> conscious. And all evidence suggests that the phenomenon is located in
> our brains, as a pattern of firing neurons. Deal with it.

It might suggest that to you, but I don't think you've thought it through.

someone

Jul 2, 2015, 8:26:23 PM
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 12:31:23 AM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 15:59:20 -0700 (PDT), someone
No I don't. I just have to show the issue with robots. I can then leave it to the atheist to either explain why I'm wrong about robots made up of NAND gates, or point out why humans would be different to robots.

It isn't an issue of whether the NAND gate arrangement would consciously experience or not; the issue is what difference it would make to the behaviour.

If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (no longer giving "on" or "off" signals for the same chemical reasons as a single NAND gate would, for example), the reasons for their behaviour are the same and don't involve what the experience was like.

Bill Rogers

Jul 2, 2015, 8:56:23 PM
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 8:26:23 PM UTC-4, someone wrote:

> > You previously said the purpose was to demonstrate that robots can't
> > base a response on conscious experience whereas you can, and therefore
> > you can't be a biological robot.
> >
> > In order to support this claim by means of your thought experiment you
> > would need to demonstrate that the thought experiment models some
> > aspect of the behavior of real organisms.
>
> No I don't. I just have to show the issue with robots. I can then leave it to the atheist to either explain why I'm wrong about robots made up of NAND gates, or point out why humans would be different to robots.

You still have not shown why a robot could not be conscious. If materialism is correct, then some arrangement of the NAND gates and some behavior that follows from that arrangement is conscious. If materialism is correct, then it is impossible to have that same arrangement and the same behavior in the absence of consciousness. Your scenario *assumes* that it is possible to dissociate conscious experience from the particular arrangement and behavior of your NAND gates. That is assuming your conclusion (as loads of people keep telling you). Your conclusion might well be correct, but your scenario does not provide an argument to support that conclusion, since the conclusion is baked in from the start.

>
> It isn't an issue of whether the NAND gate arrangement would consciously experience or not, it would be what difference it would make to the behaviour.
>
> If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (they no longer give "on" or "off" signals for the same chemical reasons as when a single NAND gate for example), then the reasons for their behaviour are the same and doesn't involve what the experience was like.
>
>
It's also not clear at all what any of this has to do with atheism. A material God could have created biological robots, in which case materialism and theism would both be true. A spiritual God could have created biological robots, in which case theism would be true, and materialism would be true of everything except God. On the other hand, it could be the case that some spirit is required for consciousness but that there is no God, only a bunch of more or less conscious spirits hanging around the brains of more or less conscious animals. In that case dualism and atheism would both be true.

John Harshman

Jul 2, 2015, 9:16:23 PM
to talk-o...@moderators.isc.org
You seem to be mixing your examples. Now there's a complex, brain-sized
network of NAND gates and people in boxes? Please present this new
example explicitly.

>> Now, if there were such a conscious computer, its equivalent in a zombie
>> universe wouldn't be conscious, by definition. But why? Since the
>> consciousness is a product and consequence of the workings of the
>> machine, there must be something different about that machine's workings
>> in the zombie universe. You can't just say it's the same but leads to a
>> different result.
>>
>
> I was assuming the consciousness was being said to be a property of
> the physical thing, in the example a property of the physical
> computer.

> I was also assuming different physical things could have different
> properties, and in the zombie universe all that is being imagined is
> that it is a different physical thing with different physical
> properties.

Previously you said it was the same thing with identical physical
properties. So I'm confused.

> So I'm not saying it is the same, as I explained but you seem to
> ignore. An example I have used before would be matter and anti
> matter. You could imagine there was a universe which was
> predominantly matter, and another which was predominantly
> anti-matter. Underlying the objects the physical would be different,
> but it would still follow the same laws of physics, there wouldn't be
> an experiment to tell (I think that is the case anyway, though I
> might be wrong, if so ignore that as an example).

I'm sorry, but this unspecified difference in properties renders your
example meaningless. If consciousness is a property of the proper
assemblage of NAND gates in our universe, and the NAND gates all operate
the same way (regardless of whatever you mean by "underlying the objects
the physical"), then the assemblage of NAND gates would also have
consciousness in this zombie universe.

You seem to be imagining some kind of magic thing like The Force that
causes consciousness, but if we just eliminate The Force everything will
be just the same except for the absence of consciousness.

>>> Also I am entertaining, for discussion purposes, that consciousness
>>> is a feature of a physical universe. If I had been suggesting an
>>> identical physical universe in which there wasn't consciousness, then
>>> you would have a point, but I'm not. I'm just considering different
>>> physical universe with different physical features. The universe in
>>> question, the zombie universe, is physically different from the
>>> universe it parodies, because it has different physical features (it
>>> doesn't have the consciousness feature). This has been explained
>>> numerous times, and was in the original post.
>>
>> Consciousness isn't a feature you can just turn on and off. It's a
>> consequence (in the materialist worldview) of a particular arrangement
>> of stuff. Your notion is incoherent.
>
> It's incoherent that different physical universes could have different
> properties?

No, but it's incoherent that consciousness is something you can
reasonably call a physical property of the universe.

>>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
>>
>> That seems like a silly bit of entertainment. The consciousness isn't a
>> feature of the universe. We have no evidence that the universe is
>> conscious, that atoms are conscious, that plants, bacteria, and most
>> animals are conscious (unless you leach the term of meaning). We have
>> evidence that humans and, to a lesser extent, some of our relatives are
>> conscious. And all evidence suggests that the phenomenon is located in
>> our brains, as a pattern of firing neurons. Deal with it.
>
> It might suggest that to you, but I don't think you've thought it through.

I don't think you're capable of thinking anything through. You certainly
can't express yourself coherently. Or won't, at least. Are you in fact
thinking about something like The Force here?

someone

unread,
Jul 2, 2015, 9:26:23 PM7/2/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 1:56:23 AM UTC+1, Bill Rogers wrote:
> On Thursday, July 2, 2015 at 8:26:23 PM UTC-4, someone wrote:
>
> > > You previously said the purpose was to demonstrate that robots can't
> > > base a response on conscious experience whereas you can, and therefore
> > > you can't be a biological robot.
> > >
> > > In order to support this claim by means of your thought experiment you
> > > would need to demonstrate that the thought experiment models some
> > > aspect of the behavior of real organisms.
> >
> > No I don't. I just have to show the issue with robots. I can then leave it to the atheist to either explain why I'm wrong about robots made up of NAND gates, or point out why humans would be different to robots.
>
> You still have not shown why a robot could not be conscious. If materialism is correct, then some arrangement of the NAND gates and some behavior that follows from that arrangement is conscious. If materialism is correct, then it is impossible to have that same arrangement and the same behavior in the absence of consciousness. Your scenario *assumes* that it is possible to dissociate conscious experience from the particular arrangement and behavior of your NAND gates. That is assuming your conclusion (as loads of people keep telling you). Your conclusion might well be correct, but your scenario does not provide an argument to support that conclusion, since the conclusion is baked in from the start.
>

It doesn't follow that if all that exists in the universe is the physical, and consciousness is a property of the physical in this universe, that there couldn't be a different universe with a different physical, with different physical features. As I keep explaining. If I had said there were two identical physical things but one was conscious and not the other, then yes, you'd have a point; there would be an implicit assumption that consciousness was not a property of anything physical. But I didn't do that, and clearly stated the zombie universe was a universe with different physical features from the one it parodies.

> >
> > It isn't an issue of whether the NAND gate arrangement would consciously experience or not, it would be what difference it would make to the behaviour.
> >
> > If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (they no longer give "on" or "off" signals for the same chemical reasons as a single NAND gate would, for example), then the reasons for their behaviour are the same and don't involve what the experience was like.
> >
> >
> It's also not clear at all what any of this has to do with atheism. A material God could have created biological robots, in which case materialism and theism would both be true. A spiritual God could have created biological robots, in which case theism would be true, and materialism would be true of everything except God. On the other hand, it could be the case that some spirit is required for consciousness but that there is no God, only a bunch of more or less conscious spirits hanging around the brains of more or less conscious animals. In that case dualism and atheism would both be true.

What it has to do with atheism is that I'm asking for an atheist account where a robot could react to reality not being a zombie universe.

Jimbo

unread,
Jul 2, 2015, 9:31:22 PM7/2/15
to talk-o...@moderators.isc.org
You've demonstrated no issue with robots. You've admitted that your
thought experiment doesn't give you any insight into whether organisms
ranging from insects to chimpanzees are conscious. Insects behave like
robots equipped with very good and very extensive sensory systems. Can
you specify any reason *not* to think that they're biological
automatons? On the other hand chimpanzees and bonobos exhibit
behaviors very similar to our own in many respects. Can you specify
any reason to doubt that they are conscious? Your thought experiment
is useless for making such assessments, so can't demonstrate any
'issue' with biological robots.

>It isn't an issue of whether the NAND gate arrangement would consciously experience or not, it would be what difference it would make to the behaviour.

Activation of particular brain regions and structures can be measured
by patterns of blood flow, and correlated with arousal, attention and
problem-solving behavior in human beings, with conscious introspection
and visualization. The development of the brain and nervous system can
also be traced from its simple beginnings in primitive vertebrates.
From this, and other empirical data, inferences about consciousness
can be made and tested.

>If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (they no longer give "on" or "off" signals for the same chemical reasons as a single NAND gate would, for example), then the reasons for their behaviour are the same and don't involve what the experience was like.

Are you assuming that individual NAND gates are conscious? If so, what
evidence underlies that assumption? Consciousness is an emergent
property associated with complex and coordinated information
processing. This is a conclusion based on the evaluation of various
types of empirical data. What data do you base your own assumption
upon?

RSNorman

unread,
Jul 2, 2015, 9:41:22 PM7/2/15
to talk-o...@moderators.isc.org
What you say makes someone's argument hew even closer to the Chinese
Room story: they are both basically tricks to force the reader to a
conclusion by tweaking the situation in the proper way.

There is an enormous difference between a person with a brain doing
"computations" and a computer doing supposedly the same computations.
The person has effectors -- we can act in the world and make changes
to the physical environment. The person has sensors -- we can detect
changes in the physical environment. And we can determine that the
changes we see are often direct consequences of the actions we take.
Even more, the person has internal machinery and the actions and even
the computations (thoughts, if you want) cause changes in the internal
machinery (metabolic changes, for example). And the person has
internal sensors and we can detect the changes in our own bodies that
are produced from our mental activity. All of these things act in a
smoothly coherent and coordinated way (usually). These are consistent
with us being agents in the world acting on it and being acted on. I
would argue that all these notions are essential parts of what we call
"being aware of ourselves".

It should not be impossible to build a robot to do a lot of this
although the incredible quantity of physical changes that occur within
our body and the incredible quantity of sensors we have to detect
those changes would be virtually impossible to duplicate in practice.
Would a robot be designed to find itself in some pain and distress
because of an overactive immune and hormonal system simply because it
was given problems to solve that were incredibly difficult with
insufficient information and conflicting requirements to compute
behavior that it can easily calculate would be worthless in coping
with the problem at hand yet still is necessary to perform for many
other reasons? We humans very often react extremely poorly to being
subjected to stress, physical, emotional, and mental, for long periods
of time. Would the computer say "I really feel shitty -- I need a
vacation!" representing self knowledge? Or would it say "my
diagnostic programs indicate something is amiss -- please summon a
repair technician"? What we would now build is the latter. I do not
know why a robot capable of learning from experience and sharing
experiences and learning and even discussing alternative courses of
action and details of internal states with others would not act as the
former and express self awareness.

To return to the Chinese Room specifically, the person locked in the
room would not demonstrate any understanding. However if the person
actually got out and interacted with Chinese speakers and saw the
results of the translations, how they produced changes both in the
behavior and the emotional state of the listeners, and further engage
in back and forth dialog, then I would argue that true understanding
could easily result.

To return to someone's silly premise: NAND gates alone may be able to
produce a fully functional general purpose computer with massive memory
stores capable of machine learning and all but it would be like a
"brain in a vat" or the person locked inside the Chinese Room. To
have consciousness requires actual behavior in the world, actual
experiences to detect, analyze, and interpret and, finally, to
internalize. So the robot has an awful lot of machinery far beyond
simple NAND gates.

someone

unread,
Jul 2, 2015, 9:41:25 PM7/2/15
to talk-o...@moderators.isc.org
I did, you can still see it in the history above in the text.


> >> Now, if there were such a conscious computer, its equivalent in a zombie
> >> universe wouldn't be conscious, by definition. But why? Since the
> >> consciousness is a product and consequence of the workings of the
> >> machine, there must be something different about that machine's workings
> >> in the zombie universe. You can't just say it's the same but leads to a
> >> different result.
> >>
> >
> > I was assuming the consciousness was being said to be a property of
> > the physical thing, in the example a property of the physical
> > computer.
>
> > I was also assuming different physical things could have different
> > properties, and in the zombie universe all that is being imagined is
> > that it is a different physical thing with different physical
> > properties.
>
> Previously you said it was the same thing with identical physical
> properties. So I'm confused.
>

I didn't, you must have just assumed that and confused yourself.

> > So I'm not saying it is the same, as I explained but you seem to
> > ignore. An example I have used before would be matter and anti
> > matter. You could imagine there was a universe which was
> > predominantly matter, and another which was predominantly
> > anti-matter. Underlying the objects the physical would be different,
> > but it would still follow the same laws of physics, there wouldn't be
> > an experiment to tell (I think that is the case anyway, though I
> > might be wrong, if so ignore that as an example).
>
> I'm sorry, but this unspecified difference in properties renders your
> example meaningless. If consciousness is a property of the proper
> assemblage of NAND gates in our universe, and the NAND gates all operate
> the same way (regardless of whatever you mean by "underlying the objects
> the physical"), then the assemblage of NAND gates would also have
> consciousness in this zombie universe.
>
> You seem to be imagining some kind of magic thing like The Force that
> causes consciousness, but if we just eliminate The Force everything will
> be just the same except for the absence of consciousness.
>

No, I'm entertaining for the sake of discussion that consciousness is a physical property. I'm also entertaining the idea of a different physical universe with different physical properties. Stephen Hawking suggests such a thing in The Grand Design, where there are lots of different physical universes with different physical features, and it serves as an explanation for why this one seems so finely tuned for life. He doesn't mention one having different physical features with regard to consciousness, but why should that feature be exempt?

> >>> Also I am entertaining, for discussion purposes, that consciousness
> >>> is a feature of a physical universe. If I had been suggesting an
> >>> identical physical universe in which there wasn't consciousness, then
> >>> you would have a point, but I'm not. I'm just considering different
> >>> physical universe with different physical features. The universe in
> >>> question, the zombie universe, is physically different from the
> >>> universe it parodies, because it has different physical features (it
> >>> doesn't have the consciousness feature). This has been explained
> >>> numerous times, and was in the original post.
> >>
> >> Consciousness isn't a feature you can just turn on and off. It's a
> >> consequence (in the materialist worldview) of a particular arrangement
> >> of stuff. Your notion is incoherent.
> >
> > It's incoherent that different physical universes could have different
> > properties?
>
> No, but it's incoherent that consciousness is something you can
> reasonably call a physical property of the universe.
>

I'm not sure why it would be incoherent, but I wasn't suggesting that the universe was conscious. I was just entertaining the idea that this was a physical universe and that certain physical things within it had the property of being conscious, and so it would be a property of whatever physical thing was conscious and therefore a physical property (and in a different physical universe, different physical things could have different physical properties).


> >>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
> >>
> >> That seems like a silly bit of entertainment. The consciousness isn't a
> >> feature of the universe. We have no evidence that the universe is
> >> conscious, that atoms are conscious, that plants, bacteria, and most
> >> animals are conscious (unless you leach the term of meaning). We have
> >> evidence that humans and, to a lesser extent, some of our relatives are
> >> conscious. And all evidence suggests that the phenomenon is located in
> >> our brains, as a pattern of firing neurons. Deal with it.
> >
> > It might suggest that to you, but I don't think you've thought it through.
>
> I don't think you're capable of thinking anything through. You certainly
> can't express yourself coherently. Or won't, at least. Are you in fact
> thinking about something like The Force here?

As I've said I'm entertaining the idea that there are physical things with the property of being conscious.

Bill Rogers

unread,
Jul 2, 2015, 9:51:22 PM7/2/15
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 9:26:23 PM UTC-4, someone wrote:
> On Friday, July 3, 2015 at 1:56:23 AM UTC+1, Bill Rogers wrote:
> > On Thursday, July 2, 2015 at 8:26:23 PM UTC-4, someone wrote:
> >
> > > > You previously said the purpose was to demonstrate that robots can't
> > > > base a response on conscious experience whereas you can, and therefore
> > > > you can't be a biological robot.
> > > >
> > > > In order to support this claim by means of your thought experiment you
> > > > would need to demonstrate that the thought experiment models some
> > > > aspect of the behavior of real organisms.
> > >
> > > No I don't. I just have to show the issue with robots. I can then leave it to the atheist to either explain why I'm wrong about robots made up of NAND gates, or point out why humans would be different to robots.
> >
> > You still have not shown why a robot could not be conscious. If materialism is correct, then some arrangement of the NAND gates and some behavior that follows from that arrangement is conscious. If materialism is correct, then it is impossible to have that same arrangement and the same behavior in the absence of consciousness. Your scenario *assumes* that it is possible to dissociate conscious experience from the particular arrangement and behavior of your NAND gates. That is assuming your conclusion (as loads of people keep telling you). Your conclusion might well be correct, but your scenario does not provide an argument to support that conclusion, since the conclusion is baked in from the start.
> >
>
> It doesn't follow that if all that exists in the universe is the physical, and consciousness is a property of the physical in this universe, that there couldn't be a different universe with a different physical, with different physical features. As I keep explaining. If I had said there were two identical physical things but one was conscious and not the other, then yes, you'd have a point; there would be an implicit assumption that consciousness was not a property of anything physical. But I didn't do that, and clearly stated the zombie universe was a universe with different physical features from the one it parodies.

Yes, you keep saying that. And you keep saying that whatever this mysterious physical feature is, it has no effect on the behavior of NAND gates or on the behavior of the robot. And we keep saying that you are assuming your conclusion. Your scenario presupposes that consciousness cannot simply be the result of a certain configuration of NAND gates and physical laws. Instead, your scenario presupposes that consciousness depends on the presence or absence of some additional undefined physical feature which has no interaction with or effect on the NAND gates. That might be true, but your scenario includes that truth as an assumption. Your scenario is not an argument in support of the conclusion that consciousness depends on more than the NAND gate arrangement and the physical laws. As everybody tells you again and again, you are assuming your conclusion.

>
> > >
> > > It isn't an issue of whether the NAND gate arrangement would consciously experience or not, it would be what difference it would make to the behaviour.
> > >
> > > If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (they no longer give "on" or "off" signals for the same chemical reasons as a single NAND gate would, for example), then the reasons for their behaviour are the same and don't involve what the experience was like.
> > >
> > >
> > It's also not clear at all what any of this has to do with atheism. A material God could have created biological robots, in which case materialism and theism would both be true. A spiritual God could have created biological robots, in which case theism would be true, and materialism would be true of everything except God. On the other hand, it could be the case that some spirit is required for consciousness but that there is no God, only a bunch of more or less conscious spirits hanging around the brains of more or less conscious animals. In that case dualism and atheism would both be true.
>
> What it has to do with atheism is that I'm asking for an atheist account where a robot could react to reality not being a zombie universe.

Damn, you never learn, do you? "An atheist account where a robot could react to reality not being a zombie universe."?? Does that mean anything other than "An atheist account of how a robot could be conscious"? Why not have a go at writing clearly. It might help you think clearly.

And finding that some mysterious physical property, or a spirit, for that matter, was required for consciousness would not rule out atheism in the least.


someone

unread,
Jul 2, 2015, 9:51:22 PM7/2/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 2:31:22 AM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 17:23:36 -0700 (PDT), someone
I'm not assuming any arrangement of NAND gates is conscious, and there could be no evidence if they were (because there could be no experiment; the behaviour would always be as expected even if they weren't). As I said:
---
If the behaviour of a single NAND gate wasn't based on what the experience was like, then unless the NAND gates start changing the reason they behave as they do (they no longer give "on" or "off" signals for the same chemical reasons as a single NAND gate would, for example), then the reasons for their behaviour are the same and don't involve what the experience was like.
---

I was just pointing out a problem for stories which don't consider a single NAND gate to be conscious, but do consider that certain arrangements might be. The type of stories that suggest that consciousness is an emergent property associated with complex and coordinated information processing, for example. Because if each NAND gate continued to give an "on" or "off" signal for the same reasons as before (perhaps some chemical activity), then the reasons for the behaviour of each and every one in the arrangement didn't involve what the experience was like, and therefore the arrangement couldn't be displaying behaviour based on the experience, and couldn't, based on its experience, claim it wasn't in a zombie universe. The problem with that suggestion is that I can, so how come the difference? (Consciousness wouldn't just need to emerge; its emergence would need to make a difference to the behaviour.)
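To put that concretely, here's a minimal sketch in Python (the function names and the three-gate wiring are mine, purely for illustration): model each NAND gate as a pure function of its inputs, and the arrangement's output is fixed by the wiring plus the inputs, and by nothing else.

---
def nand(a, b):
    # A NAND gate: outputs 0 only when both inputs are 1.
    return 0 if (a == 1 and b == 1) else 1

def run_arrangement(x, y, z):
    # An arbitrary three-input arrangement of NAND gates. Every
    # intermediate signal is computed from earlier signals alone;
    # nothing about "what is experienced" appears in the computation.
    g1 = nand(x, y)
    g2 = nand(y, z)
    return nand(g1, g2)

# Two physically different implementations of nand() (chemistry, relays,
# lasers...) that agree on its truth table give identical outputs for
# every input, so behaviour alone cannot tell them apart.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            print(x, y, z, "->", run_arrangement(x, y, z))
---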

(anyway last post for the night, it's getting late)


John Harshman

unread,
Jul 2, 2015, 10:11:22 PM7/2/15
to talk-o...@moderators.isc.org
I see one NAND gate, or something rather like it, and one person in one
box. Not relevant.

>>>> Now, if there were such a conscious computer, its equivalent in a zombie
>>>> universe wouldn't be conscious, by definition. But why? Since the
>>>> consciousness is a product and consequence of the workings of the
>>>> machine, there must be something different about that machine's workings
>>>> in the zombie universe. You can't just say it's the same but leads to a
>>>> different result.
>>>>
>>>
>>> I was assuming the consciousness was being said to be a property of
>>> the physical thing, in the example a property of the physical
>>> computer.
>>
>>> I was also assuming different physical things could have different
>>> properties, and in the zombie universe all that is being imagined is
>>> that it is a different physical thing with different physical
>>> properties.
>>
>> Previously you said it was the same thing with identical physical
>> properties. So I'm confused.
>
> I didn't, you must have just assumed that and confused yourself.

The problem is that you really have no idea what your scenario entails.
You just imagine that consciousness is some kind of property you can
either add or subtract. Which is dualism in a nutshell.
Sounds like vitalism to me. The different features Hawking was
suggesting were not things like presence/absence of consciousness, which
makes about as much sense as presence/absence of purple.

Consciousness is not a physical property like charge or spin. It's
apparently the property that emerges from the right arrangement of parts
interacting in the right way. You have specified that the interactions
of the parts do not change in your examples, and that's the only
difference in universes that ought to count. Unless you're a vitalist.

>>>>> Also I am entertaining, for discussion purposes, that consciousness
>>>>> is a feature of a physical universe. If I had been suggesting an
>>>>> identical physical universe in which there wasn't consciousness, then
>>>>> you would have a point, but I'm not. I'm just considering different
>>>>> physical universe with different physical features. The universe in
>>>>> question, the zombie universe, is physically different from the
>>>>> universe it parodies, because it has different physical features (it
>>>>> doesn't have the consciousness feature). This has been explained
>>>>> numerous times, and was in the original post.
>>>>
>>>> Consciousness isn't a feature you can just turn on and off. It's a
>>>> consequence (in the materialist worldview) of a particular arrangement
>>>> of stuff. Your notion is incoherent.
>>>
>>> It's incoherent that different physical universes could have different
>>> properties?
>>
>> No, but it's incoherent that consciousness is something you can
>> reasonably call a physical property of the universe.
>
> I'm not sure why it would be incoherent, but I wasn't suggesting that
> the universe was conscious. I was just entertaining the idea that
> this was a physical universe and that certain physical things within
> it had the property of being conscious, and so it would be a property
> of whatever physical thing was conscious and therefore a physical
> property (and in a different physical universe, different physical
> things could have different physical properties).

The problem with that is that it isn't something that could be present
or absent in a universe without vast changes in how objects interact.

>>>>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
>>>>
>>>> That seems like a silly bit of entertainment. The consciousness isn't a
>>>> feature of the universe. We have no evidence that the universe is
>>>> conscious, that atoms are conscious, that plants, bacteria, and most
>>>> animals are conscious (unless you leach the term of meaning). We have
>>>> evidence that humans and, to a lesser extent, some of our relatives are
>>>> conscious. And all evidence suggests that the phenomenon is located in
>>>> our brains, as a pattern of firing neurons. Deal with it.
>>>
>>> It might suggest that to you, but I don't think you've thought it through.
>>
>> I don't think you're capable of thinking anything through. You certainly
>> can't express yourself coherently. Or won't, at least. Are you in fact
>> thinking about something like The Force here?
>
> As I've said I'm entertaining the idea that there are physical things with the property of being conscious.

That's good. But you can't coherently imagine a universe in which
physical things are mostly similar but lack that property, if you are
entertaining the idea that the property arises through the actions of
those physical things.


John Vreeland

unread,
Jul 2, 2015, 10:16:22 PM7/2/15
to talk-o...@moderators.isc.org
I do, assuming it is functionally connected in some way. Why wouldn't
it be? People used to think that a few relays in a device were amazing.
"It can make decisions!"

In any event he still hasn't defined consciousness. I am surprised
that everyone gave him a pass when he defined consciousness *twice* as
the opposite of non-consciousness. I always half-expect a circular
argument from a dualist, but you specifically asked him for a
definition and then ignored it.
__
Church of the FSM: "I believe _because_ it is ridiculous."

John Harshman

unread,
Jul 2, 2015, 10:26:22 PM7/2/15
to talk-o...@moderators.isc.org
It's hard to keep up with all the nonsense. Sorry.

> Church of the FSM: "I believe _because_ it is ridiculous."
>
Cribbed from Kierkegaard? Was he a secret Pastafarian?

Inez

unread,
Jul 3, 2015, 12:06:25 AM7/3/15
to talk-o...@moderators.isc.org
Try imagining that some of the input was coming from the robot brain itself, which would be the thoughts it was having.

someone

unread,
Jul 3, 2015, 1:46:22 AM7/3/15
to talk-o...@moderators.isc.org
Do you have to be spoon-fed everything? Notice where it states: "...I don't see where in your story you are suggesting the behaviour of any single NAND gate in the arrangement was based on the arrangement itself. Supposing each NAND gate was a
box,..."

There's the arrangement, and it goes on to consider each NAND gate in the arrangement.

> >>>> Now, if there were such a conscious computer, its equivalent in a zombie
> >>>> universe wouldn't be conscious, by definition. But why? Since the
> >>>> consciousness is a product and consequence of the workings of the
> >>>> machine, there must be something different about that machine's workings
> >>>> in the zombie universe. You can't just say it's the same but leads to a
> >>>> different result.
> >>>>
> >>>
> >>> I was assuming the consciousness was being said to be a property of
> >>> the physical thing, in the example a property of the physical
> >>> computer.
> >>
> >>> I was also assuming different physical things could have different
> >>> properties, and in the zombie universe all that is being imagined is
> >>> that it is a different physical thing with different physical
> >>> properties.
> >>
> >> Previously you said it was the same thing with identical physical
> >> properties. So I'm confused.
> >
> > I didn't, you must have just assumed that and confused yourself.
>
> The problem is that you really have no idea what your scenario entails.
> You just imagine that consciousness is some kind of property you can
> either add or subtract. Which is dualism in a nutshell.
>

I am entertaining it as a physical property. And what it entails is two imagined universes with different physical properties.
Why wouldn't whether the property emerges or not count?
What differences were you thinking there would need to be if arrangements of NAND gates never consciously experienced?


> >>>>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
> >>>>
> >>>> That seems like a silly bit of entertainment. The consciousness isn't a
> >>>> feature of the universe. We have no evidence that the universe is
> >>>> conscious, that atoms are conscious, that plants, bacteria, and most
> >>>> animals are conscious (unless you leach the term of meaning). We have
> >>>> evidence that humans and, to a lesser extent, some of our relatives are
> >>>> conscious. And all evidence suggests that the phenomenon is located in
> >>>> our brains, as a pattern of firing neurons. Deal with it.
> >>>
> >>> It might suggest that to you, but I don't think you've thought it through.
> >>
> >> I don't think you're capable of thinking anything through. You certainly
> >> can't express yourself coherently. Or won't, at least. Are you in fact
> >> thinking about something like The Force here?
> >
> > As I've said I'm entertaining the idea that there are physical things with the property of being conscious.
>
> That's good. But you can't coherently imagine a universe in which
> physical things are mostly similar but lack that property, if you are
> entertaining the idea that the property arises through the actions of
> those physical things.

I can, because the property would still have to arise from those physical things doing whatever they do; it would have to be a property of those physical things, and if I imagine those physical things to be different, then I can without any incoherence imagine them to have different physical features.

someone

unread,
Jul 3, 2015, 1:56:22 AM7/3/15
to talk-o...@moderators.isc.org
What difference were you thinking it would make? Just for fun, imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where the lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). Let's imagine they fire once every thousand years, and are each separated by a light year. How were you imagining, physically, in your story, there would be the overview of the states of the NAND gates which would be needed if the arrangement was to have a conscious experience like yours?
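Just to be clear about how little the physical details matter here, a small sketch (my own illustration; the wiring and the delay figures are arbitrary): the outputs the arrangement computes are identical whether the gates are nanoseconds or a millennium apart, and no step of the computation consults any overview of the gates' states.

---
def nand(a, b):
    return int(not (a and b))

def run(x, y, delay_years_per_hop):
    # Each gate's output is computed locally from its own two inputs;
    # nothing here consults a global "overview" of the other gates.
    g1 = nand(x, y)
    g2 = nand(x, g1)
    g3 = nand(y, g1)
    out = nand(g2, g3)  # this particular wiring happens to compute XOR
    latency = 3 * delay_years_per_hop  # longest input-to-output path
    return out, latency

# Nanosecond electronics versus lasers firing once a millennium: the
# latency differs wildly, the computed outputs are identical.
for delay in (1e-16, 1000.0):
    print([run(x, y, delay)[0] for x in (0, 1) for y in (0, 1)])
---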

SortingItOut

unread,
Jul 3, 2015, 2:46:22 AM7/3/15
to talk-o...@moderators.isc.org
On Thursday, July 2, 2015 at 9:51:24 AM UTC-5, someone wrote:
> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>
> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>
> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ

...and in your recap, you included this:

<start someone's statement>

So where you want to go from here is up to you. If you wish to abandon the attempt to suggest it would matter whether the inputs to the NAND gate arrangement represented whether it was a zombie universe or not (and given the point that you've just conceded I'm not sure where you'd go), then perhaps you'd care to explain either:

1) How the NAND gate arrangement could make a response based on whether it was or wasn't in a zombie universe, given that its response would just depend on how the NAND gates were arranged and their state and inputs, which could be mirrored in the zombie universe.

or

2) Explain the significant difference between a human being and a NAND gate arrangement which allows us, but not the NAND gate arrangement, to respond to the reality of it not being a zombie universe.

<end someone's statement>

As an answer, I'll address both of your questions, starting with #1. You seem to be thinking that since a NAND gate is so simple, no possible arrangement of NAND gates can result in consciousness. But clearly, we've already created arrangements of NAND gates that do incredibly sophisticated things. And since no one on the planet understands how consciousness works or what future arrangements of NAND gates may be capable of, no one can possibly know whether future arrangements of NAND gates can produce consciousness.
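For illustration (a standard textbook construction, sketched in Python; none of it is specific to this thread): NAND is functionally complete, so NOT, AND, OR and XOR -- and from those, any Boolean circuit at all -- can be wired up from NAND gates alone.

---
def nand(a, b):
    return int(not (a and b))

# NAND is functionally complete: the other basic gates fall out of it.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    # The standard four-NAND XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
---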

But you may eventually be proved right. Humans may eventually determine that no possible arrangement of NAND gates can produce consciousness. But that's no concern at all to "atheist evolutionists". Neurons are not NAND gates. Conclusions regarding NAND gates have no bearing on what arrangements of neurons are capable of.

Further, you seem to be constraining your thinking by considering that any theories about consciousness need to address a zombie universe. This is simply not the case. A zombie universe is a complete fabrication, and if consciousness is derived from purely physical components, then a zombie universe is not even possible. Again, you've introduced a concept that is a complete non-concern to "atheist evolutionists".

Regarding your #2, the significant difference between humans and NAND gates is simply that neurons are not NAND gates. And no one is claiming that a single neuron is capable of consciousness. Similarly, no one is claiming that a single neuron is capable of image processing, but clearly arrangements of neurons can perform that task.
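As a toy illustration of a capability belonging to the arrangement rather than to the unit (the threshold units below are the standard McCulloch-Pitts idealisation, not a claim about real neurons): a single threshold unit cannot compute XOR, but an arrangement of three of them can.

---
def unit(inputs, weights, threshold):
    # Fires (1) when the weighted sum of inputs reaches the threshold.
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def xor_net(a, b):
    h1 = unit((a, b), (1, 1), 1)       # fires on "a OR b"
    h2 = unit((a, b), (1, 1), 2)       # fires on "a AND b"
    return unit((h1, h2), (1, -1), 1)  # fires on "OR but not AND"

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
---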

And consider the following causes of loss of consciousness:
1) A severe blow to the head
2) Low levels of oxygen to the brain
3) Application of general anesthetic
4) Coma resulting from all manner of injury and disease.

And consider that if consciousness is not derived from physical components, why would we ever experience a loss of consciousness?

It seems far more reasonable to tentatively conclude that consciousness is derived from the brain. Until humans figure out how consciousness works, that'll have to do. And there's nothing about neurons that is not compatible with evolutionary theory.

>
> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>
> I'll quote the scenario and the question:
> ---
> In this harder-to-not-understand-the-point scenario, imagine you are in a box with an LED display which does not display any symbol initially, then displays either a "1" or a "0", goes off again, then displays either a "1" or a "0", and then goes off again. Imagine this on-off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if, when the LED comes on, it displays a "1", and to shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.
>
> Now imagine that, unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
> ---
>
> Perhaps this time Burkhard could answer.
>
> Also in the recap I had stated that Burkhard hadn't answered the question in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/WJfMAo-NfzQJ (second paragraph) about how he/she would go about measuring whether the universe was a zombie one or not to get the correct input.
>
> Burkhard responded that he/she had answered it.
>
> My response: Could you (Burkhard) please post a link, as I was interested in how you were going to suggest you'd do it.

someone

unread,
Jul 3, 2015, 4:16:22 AM7/3/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 7:46:22 AM UTC+1, SortingItOut wrote:
> On Thursday, July 2, 2015 at 9:51:24 AM UTC-5, someone wrote:
> > Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
> >
> > Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
> >
> > SortingItOut had stepped in and replied and I did a quick recap of the conversation.
> > https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>
> ..and in your recap, you included this:
>
> <start someone's statement>
>
> So where you want to go from here is up to you. If you wish to abandon the attempt to suggest it would matter whether the inputs to the NAND gate arrangement represented whether it was a zombie universe or not (and given the point that you've just conceded I'm not sure where you'd go), then perhaps you'd care to explain either:
>
> 1) How the NAND gate arrangement could make a response based on whether it was or wasn't in a zombie universe, given that its response would just depend on how the NAND gates were arranged and their state and inputs, which could be mirrored in the zombie universe.
>
> or
>
> 2) Explain the significant difference between a human being and a NAND gate arrangement which allows us, but not the NAND gate arrangement, to respond to the reality of it not being a zombie universe.
>
> <end someone's statement>
>
> As an answer, I'll address both of your questions, starting with #1. You seem to be thinking that since a NAND gate is so simple, no possible arrangement of NAND gates can result in consciousness. But clearly, we've already created arrangements of NAND gates that do incredibly sophisticated things. And since no one on the planet understands how consciousness works or what future arrangements of NAND gates may be capable of, no one can possibly know whether future arrangements of NAND can produce consciousness.
>
> But you may eventually be proved right. Humans may eventually determine that no possible arrangement of NAND gates can produce consciousness. But that's no concern at all to "atheist evolutionists". Neurons are not NAND gates. Conclusions regarding NAND gates have no bearing on what arrangements of neurons are capable of.
>
> Further, you seem to be constraining your thinking by considering that any theories about consciousness need to address a zombie universe. This is simply not the case. A zombie universe is a complete fabrication, and if consciousness is derived from purely physical components, then a zombie universe is not even possible. Again, you've introduced a concept that is a complete non-concern to "atheist evolutionists".
>

You didn't seem to entirely understand the issue in (1). You seemed to be thinking that I was thinking that because of the simplicity of NAND gates they couldn't be conscious. That wasn't the point. I'm quite happy to entertain a story that they could be. What I wanted to know was, if they were, how could their behaviour be based upon that fact, such that they could base it upon reality not being a zombie universe. This doesn't actually require a zombie universe to exist. The simplicity of NAND gates is useful for investigating the story, however. Suppose, for example, that in the story a single NAND gate wouldn't be consciously experiencing, and so what was being consciously experienced could be playing no part in how it behaved. Suppose the story was that if there was a "special" arrangement behaving in a "special" way, then it would be, but each NAND gate's individual behaviour was due to the same factors as if it was a single NAND gate; in other words, what was consciously experienced played no part in its behaviour. It would be a story in which consciousness was epiphenomenal, and I know consciousness isn't, because I know I am basing my statement on reality not being a zombie universe because it isn't.

Your answer to (1) so far seems to me to be that you can't imagine how whether a NAND gate arrangement was consciously experiencing or not would make any difference to the behaviour of any single NAND gate within it.


> Regarding your #2, the significant difference between humans and NAND gates is simply that neurons are not NAND gates. And no one is claiming that a single neuron is capable of consciousness. Similarly, no one is claiming that a single neuron is capable of image processing, but clearly arrangements of neurons can perform that task.
>

That is an assumption of yours, it isn't clear.

> And consider the following causes of loss of consciousness:
> 1) A severe blow to the head
> 2) Low levels of oxygen to the brain
> 3) Application of general anesthetic
> 4) Coma resulting from all manner of injury and disease.
>

I think here you've slipped into another definition of consciousness. If a being had a memory of no longer than a millisecond during that period, and only experienced blackness, and weren't self-aware of their experience, they'd still be conscious, and you haven't shown that that isn't the case.


> And consider that if consciousness is not derived from physical components, why would we ever experience a loss of consciousness?
>

Because what you experience is based on a certain symbolism of your brain state.

> It seems far more reasonable to tentatively conclude that consciousness is derived from the brain. Until humans figure out how consciousness works, that'll have to do. And there's nothing about neurons that is not compatible with evolutionary theory.
>

I don't think so. For example, if my brain was in a vat and being given the same stimulus as it would be in the human, and I experienced the same as I would if it was in a human, then it would seem reasonable to assume that the conscious experience of being the human is based on the brain state and not the context the brain is found in. It would also seem reasonable to assume that the objects I consciously experience are somehow represented in my brain state. It seems reasonable to assume that if you assumed the stimulus activity to represent what it would in the context of the brain being in a human, and if you understood the symbolic narrative of the neural processing in response to the stimuli, you'd understand what your conscious experiences were based on. But if the experience is context-independent (it doesn't matter whether the context was that it was in a human, or in a vat, or whatever), there would be the issue of what would be special about the brain being in a human. There could be billions of different contexts the "brain activity" could have been occurring in.

Imagine humans had never existed and aliens had genetically engineered some organic computer, part of which had the same composition as a human brain, and that the human brain bit they'd done just for the fun of it, and that what the activity actually represented (in terms of causal relationships) given the context of the larger organic computer could be totally different from what it would represent in a human. As you can imagine, if you had billions of contexts the brain could have been in, and for the vast majority, say 999,999,999 out of every billion for argument's sake, an experience appropriate to the context wouldn't have been appropriate for a spiritual being having a spiritual experience where they choose whether to follow the loving selfless path, then there would be the issue of why it was appropriate. A universe finely tuned for the experience to be appropriate for a spiritual being having a spiritual experience where they choose whether to follow the loving selfless path?

You can imagine a NAND gate arrangement in different contexts and see the issue there for whatever conscious experience you were ever going to attribute to it.

If I haven't misunderstood your answer to (2), it is that you can't imagine what, in your story, the significant difference between a robot and a human would be.

So to recap my understanding of your response to (1) and (2)

(1) You can't imagine how your story could explain it.

(2) You can't imagine how your story could explain it.

Have I misunderstood?

Ernest Major

unread,
Jul 3, 2015, 5:56:22 AM7/3/15
to talk-o...@moderators.isc.org
On 02/07/2015 22:50, RSNorman wrote:
> On Thu, 2 Jul 2015 13:45:08 -0700 (PDT), someone
> <glenn....@googlemail.com> wrote:
>
>> On Thursday, July 2, 2015 at 9:31:23 PM UTC+1, RSNorman wrote:
>>> On Thu, 2 Jul 2015 11:52:46 -0700 (PDT), someone
>>> wrote:
>>>
>>>> On Thursday, July 2, 2015 at 7:31:23 PM UTC+1, RSNorman wrote:
>>>>> On Thu, 2 Jul 2015 09:41:29 -0700 (PDT), someone
>>>>> wrote:
>>>>>
>>>>>> On Thursday, July 2, 2015 at 5:26:24 PM UTC+1, Burkhard wrote:
>>>>>>> someone wrote:
>>>>>>>> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>>>>>>>>
>>>>>>>> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>>>>>>>>
>>>>>>>> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
>>>>>>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>>>>>>>>
>>>>>>>> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
>>>>>>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>>>>>>>
>>>> The recap post I wrote today also gives another synopsis, so hopefully eventually the total failure of the atheists on this forum to handle the point being blamed on me not explaining clearly enough that they could understand it, will become less and less convincing as people start to get it, and watch them flounder (though I admit, quite a few of my posts do have a few mistakes which might make people think that English isn't my first language, even though it is).
>>>>
>>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>>>
>>> As is always the case, instead of actually considering what I wrote
>>> you simply recast everything into your original formulation which we
>>> all here reject utterly.
>>>
>>
>> I think if people characterise what I am saying incorrectly, and therefore create a strawman argument, like you did, I am entitled to point it out.
>
> I think people are quite capable of characterising what you say
> correctly and putting exactly the same concept into different terms
> but you cannot accept that and insist it must be a strawman.
>
> I am entitled to point out that everyone here seems to accuse you of
> being incredibly obtuse and failing to say what you really want to
> say.
>>
>>> You tell me, if you are indeed capable of actually answering a
>>> question, why we should believe anything a robot says? Of course we
>>> might have great confidence in the robot if we first have an
>>> independent way of verifying that what it says is true but then if the
>>> subject is consciousness, that destroys the whole premise about having
>>> a robot decide. If what it says agrees with what we feel or "know"
>>> then we say "Aha! You see the robot is right!". If what it says
>>> disagrees then we say "I told you you can't trust the robot!" No
>>> matter what, the robot answer is in no way an answer to the problem.
>>
>> I'm not suggesting you should believe a robot to be conscious. I wouldn't think anyone would be given the experience of being a robot. What would be the point?
>
> So you can't answer a simple question: why would you believe what a
> robot reports about consciousness?
>

If both robots in his thought experiment are not conscious then I don't
see the point of the thought experiment at all. At that point the robots
would be "nothing more" than consciousness detectors (and all the
verbiage about robots and NAND gates a smokescreen). And nobody, atheist
or non-atheist, evolutionist or non-evolutionist, has an account of
consciousness detectors, so there's no "atheist evolutionary" account
that he needs help to understand.

--
alias Ernest Major

Burkhard

unread,
Jul 3, 2015, 8:26:21 AM7/3/15
to talk-o...@moderators.isc.org
someone wrote:
> On Thursday, July 2, 2015 at 5:26:24 PM UTC+1, Burkhard wrote:
>> someone wrote:
>>> Having problems with posts coming through again (posted 2 responses neither came through) so posting a continuation again (a moderation issue?)
>>>
>>> Quick recap, Burkhard hadn't answered a question and I had pointed this out in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/QfgvXSHPZq0J
>>>
>>> SortingItOut had stepped in and replied and I did a quick recap of the conversation.
>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/HiGA7Ef6NQUJ
>>>
>>> Burkhard replied to the recap post, but again failed to answer the question that I'd previously pointed out that he/she had failed to answer. And this post is a continuation from that:
>>> https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/UUpGdRkDrsIJ
>>
>> Matter of fact, I answered it twice - your thought experiment has now
>> become simply Searle's Chinese room argument, and shares its problems
>> and most importantly limitations. That is, it is now so far away from the
>> way in which real people or for that matter machines interact with their
>> environment that it takes away any point that your original two universe
>> model may have had.
>>
>
> That is a comment about the thought experiment but not an answer to the question, which I wrote out again below.

Of course it is an answer. Your thought experiment has now deteriorated
so much it has lost any point it may have had.

In your example, yes, of course I'm not basing my answer on anything --
you designed it that way.

It also stops being reliably correlated with the colour of the box (or
anything else, for that matter), so you simply have a malfunctioning
piece of equipment.

Philosophical significance: exactly 0.



>
>
>>>
>>> I'll quote the scenario and the question:
>>> ---
>>> In this harder-to-not-understand-the-point scenario, imagine you are in a box with an LED display which does not display any symbol initially, then displays either a "1" or a "0", goes off again, then displays either a "1" or a "0", and then goes off again. Imagine this on-off cycle will happen several times. Imagine also that you are instructed to shout out "I'm in a blue box" if, when the LED comes on, it displays a "1", and to shout out "I'm in a green box" if the LED comes on and it displays a "0". Imagine you don't know the colour of the box you are in, and that you are able to and do follow the instructions.
>>>
>>> Now imagine that, unknown to you, what caused the LED to display a "0" or a "1" changed each time. Given that you wouldn't know what caused the LED to display a "0" or "1", would you agree that you weren't basing what you shouted out on what caused the LED to display a "0" or "1" each time?
>>> ---
>>>
>>> Perhaps this time Burkhard could answer.
>>>
>
> But again you didn't.
>
>
>>> Also in the recap I had stated that Burkhard hadn't answered the question in https://groups.google.com/d/msg/talk.origins/2r0TBligpfw/WJfMAo-NfzQJ (second paragraph) about how he/she would go about measuring whether the universe was a zombie one or not to get the correct input.
>>>
>>> Burkhard responded that he/she had answered it.
>>>
>>> My response: Could you (Burkhard) please post a link, as I was interested in how you were going to suggest you'd do it.
>>>
>>>
>
> I notice you didn't post a link or give an answer here.
>

Burkhard

unread,
Jul 3, 2015, 8:31:21 AM7/3/15
to talk-o...@moderators.isc.org
Only if there is a realistic chance that they have encountered that
story, I'd say, which is unlikely with the story of p-zombies -- that is
normally graduate philosophy stuff.

And the only thing you have demonstrated so far is that your thought
experiments don't work, which is hardly national curriculum stuff.

John Harshman

unread,
Jul 3, 2015, 9:46:22 AM7/3/15
to talk-o...@moderators.isc.org
And yet it appears to boil down to a single gate, as if the properties
of the network can be explained as the properties of a single gate. I
don't think you've thought this through.

>>>>>> Now, if there were such a conscious computer, its equivalent in a zombie
>>>>>> universe wouldn't be conscious, by definition. But why? Since the
>>>>>> consciousness is a product and consequence of the workings of the
>>>>>> machine, there must be something different about that machine's workings
>>>>>> in the zombie universe. You can't just say it's the same but leads to a
>>>>>> different result.
>>>>>>
>>>>>
>>>>> I was assuming the consciousness was being said to be a property of
>>>>> the physical thing, in the example a property of the physical
>>>>> computer.
>>>>
>>>>> I was also assuming different physical things could have different
>>>>> properties, and in the zombie universe all that is being imagined is
>>>>> that it is a different physical thing with different physical
>>>>> properties.
>>>>
>>>> Previously you said it was the same thing with identical physical
>>>> properties. So I'm confused.
>>>
>>> I didn't, you must have just assumed that and confused yourself.
>>
>> The problem is that you really have no idea what your scenario entails.
>> You just imagine that consciousness is some kind of property you can
>> either add or subtract. Which is dualism in a nutshell.
>
> I am entertaining it as a physical property. And what it entails is two imagined universes with different physical properties.

What you are entertaining is incoherent. That's the problem. You have no
idea what the physical properties might be. I suggest that there are no
such conceivable properties.
What would be different about the universes that would make everything
the same, including the behavior of individual NAND gates, except for
the absence of one emergent property? How could a property that arises
from the architecture of a network be different if the architecture
didn't change?
That's the question you would have to answer. I say it would have to be
a difference that made the NAND gates non-functional. That is, your
scenario is impossible.

>>>>>>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/aUXNeEh7lycJ
>>>>>>
>>>>>> That seems like a silly bit of entertainment. The consciousness isn't a
>>>>>> feature of the universe. We have no evidence that the universe is
>>>>>> conscious, that atoms are conscious, that plants, bacteria, and most
>>>>>> animals are conscious (unless you leach the term of meaning). We have
>>>>>> evidence that humans and, to a lesser extent, some of our relatives are
>>>>>> conscious. And all evidence suggests that the phenomenon is located in
>>>>>> our brains, as a pattern of firing neurons. Deal with it.
>>>>>
>>>>> It might suggest that to you, but I don't think you've thought it through.
>>>>
>>>> I don't think you're capable of thinking anything through. You certainly
>>>> can't express yourself coherently. Or won't, at least. Are you in fact
>>>> thinking about something like The Force here?
>>>
>>> As I've said I'm entertaining the idea that there are physical things with the property of being conscious.
>>
>> That's good. But you can't coherently imagine a universe in which
>> physical things are mostly similar but lack that property, if you are
>> entertaining the idea that the property arises through the actions of
>> those physical things.
>
> I can, because the property would still have to arise from those
> physical things doing whatever they do; it would have to be a
> property of those physical things; and if I imagine those physical
> things to be different, then I can without any incoherence imagine
> them to have different physical features.

Ah, but you have the physical things still doing the same things, only
without consciousness arising. That's what is incoherent. If
consciousness arises from the arrangement of those physical things, the
same arrangement should produce the same effects. If the individual NAND
gates don't act differently, the arrangement of them shouldn't either.
So if that arrangement results in consciousness in one universe, it
should do the same in another. That's why you can't articulate what the
difference between universes might be.
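
To make that concrete, here is a toy sketch (Python, with illustrative
names only; it models no real machine): a network of NAND gates
simulated as a pure function of its wiring and its current state. If
the gate rule and the wiring are the same in both universes, the
trajectory of outputs is forced to be the same as well.

# Toy NAND-network simulator: the next state depends only on the
# wiring and the current state -- nothing else can enter the result.
def nand(a, b):
    return 0 if (a and b) else 1

def step(wiring, state):
    # wiring[k] holds the indices of the two gates feeding gate k
    return [nand(state[i], state[j]) for (i, j) in wiring]

wiring = [(1, 2), (0, 2), (0, 1)]  # an arbitrary three-gate loop
state = [1, 0, 1]
for _ in range(5):
    state = step(wiring, state)
print(state)  # same wiring + same initial state -> same result, always

Run it in any "universe" you like: nothing in the computation leaves
room for a different outcome.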

RSNorman

unread,
Jul 3, 2015, 10:21:21 AM7/3/15
to talk-o...@moderators.isc.org
On Fri, 03 Jul 2015 06:42:22 -0700, John Harshman
<jhar...@pacbell.net> wrote:


<snip hundreds of lines that represent "someone" still failing to
respond to criticism>

>Ah, but you have the physical things still doing the same things, only
>without consciousness arising. That's what is incoherent. If
>consciousness arises from the arrangement of those physical things, the
>same arrangement should produce the same effects. If the individual NAND
>gates don't act differently, the arrangement of them shouldn't either.
>So if that arrangement results in consciousness in one universe, it
>should do the same in another. That's why you can't articulate what the
>difference between universes might be.

It is clear that you are getting no farther than I did with well-argued
critical comments showing clearly how someone's arguments are
meaningless.

I would just like to reinforce your argument that a collection of NAND
gates interconnected and interacting is an entirely different entity
from a single NAND gate. The collection can have memory, it can have
its behavior altered by previous activity, it can learn. You (John)
err here in saying that the same arrangement should produce the same
effects. That is true only if by "arrangement" you mean not only
wiring but also internal state. Two identical wirings of NAND gates
can produce extremely different effects if they live in different
universes and are subjected to different experiences. That is the
same as saying that identical "wirings" of neurons into brains can
produce extremely different effects at different times: we wake and
sleep and go into comas and hypnotic states. Our experience and our
environment shape our behavior along with our genetics. If robots R10
and R10-Z live in different universes then, although built identically
and started in the identical initial states, they will almost
certainly end up behaving differently even to the extent of one
demonstrating behavior that we usually call "conscious" and the other
not.

The whole NAND gate business is entirely a distraction so that
"someone" can always turn the argument back into the way a single
isolated NAND gate performs. I could add that all sorts of external
influences, cosmic rays, for example, can switch memory states and
change the behavior of a NAND gate circuit.
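
To spell out the memory point with the simplest possible case, here is
a rough Python sketch (a toy, not a circuit diagram): two cross-coupled
NAND gates form an SR latch. The wiring never changes, yet the output
depends on what was done to the circuit earlier.

# An SR latch built from two cross-coupled NAND gates (active-low
# inputs): identical wiring, but the output depends on its history.
def nand(a, b):
    return 0 if (a and b) else 1

def settle(s_bar, r_bar, q, q_bar):
    for _ in range(4):  # iterate until the feedback loop stabilises
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = 0, 1
q, q_bar = settle(0, 1, q, q_bar)  # pulse the Set input
q, q_bar = settle(1, 1, q, q_bar)  # inputs now idle...
print(q)  # ...but q is still 1: the arrangement remembers

Each gate still computes plain NAND; the memory lives in the loop, not
in any single gate.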

Inez

unread,
Jul 3, 2015, 10:26:21 AM7/3/15
to talk-o...@moderators.isc.org
No idea what you mean by that last sentence. What "overview" are you talking about?

I think that what you're actually arguing against is free will, not consciousness. Consciousness is just complicated processing; free will is making choices from out of your head rather than just following the dictates of your NAND gates. I don't actually believe that people have free will.

someone

unread,
Jul 3, 2015, 10:51:22 AM7/3/15
to talk-o...@moderators.isc.org
I was presuming that you didn't think that the conscious experience would correlate to the activity of just one of the NAND gates, and instead would correlate to the activity of more than one. So that the emergent mind property would effectively have an overview of the activity. I was just wondering how you were thinking that was physically possible.

Having some problems with posts getting through so the following is a cut-down reply to John Harshman:

Imagine there were two atheists who both considered this to be a physical universe but disagreed about the type. One thought that NAND gate configurations would consciously experience if they were in special formations doing special activity, and the other thought that NAND gate configurations would never consciously experience. Are you claiming that one of those ideas was incoherent?

If not then could you just explain how you were thinking a NAND gate arrangement could base its behaviour on which one was correct?

RSNorman

unread,
Jul 3, 2015, 12:01:22 PM7/3/15
to talk-o...@moderators.isc.org
You seem not to want to respond to me but I will keep trying anyway.

First, about that emergent activity: why should not consciousness be
an emergent property of the activity of the brain connected to a body
acting in the world? It is your task to show us why that is
physically impossible. So do it. Answer a simple question. Why is
consciousness not a possible consequence of the emergent activity of a
complex system acting in the world and experiencing the results of its
own actions?

Second, about those NAND gates and atheists: why would the NAND gate
system, the robot, base its own conclusion on which atheist to
believe? It bases its conclusion on its own experiences acting in the
world. It may come down on either side depending on what it
experienced, what it learned from those experiences, and how it
interpreted all that. That is no different from what the atheists do.
That is no different from what theists do.

You still have never acknowledged that an ensemble of NAND gates
suitably arranged and connected to effectors and sensors so that it
could act in the world and could sense the consequences of those
actions will change its behavior according to its life experiences.
You fail to understand that your robot could read all the works of all
the philosophers, read all these posts on t.o., and relate all that to
its own education and experience as an agent acting in the world in
forming an "opinion" on consciousness. All you accept is a single
NAND gate that outputs "high" or "low" as a result of its two current
inputs. An ensemble of NAND gates is rather more than this.





Ernest Major

unread,
Jul 3, 2015, 12:06:20 PM7/3/15
to talk-o...@moderators.isc.org
What is incoherent is your apparent opinion that both of those positions
can simultaneously be true.

--
alias Ernest Major

Jimbo

unread,
Jul 3, 2015, 12:41:20 PM7/3/15
to talk-o...@moderators.isc.org
You appear to be assuming that the input to any particular NAND gate
is the product of purely unconscious processes. This clearly is not
the case for neurons and synapses in the brain's thalamocortical
structures and regions such as the associational areas of the cerebral
cortex. Conscious brain functioning involves multiple complex
feedbacks, recursive processing and regenerative activity. If these
rhythmic waves of processing aren't occurring, you're not conscious.
Alpha, beta and theta patterns are associated with different varieties
of consciousness. So your basic assumption appears to be false. Or one
of your basic assumptions - you seem to have found multiple ways to
assume your conclusion.

>---
>
>I was just pointing out a problem for stories which don't consider a single NAND gate to be conscious, but do consider that certain arrangements might be. The type of stories that suggest that consciousness is an emergent property associated with complex and coordinated information processing for example. Because if it continued to give an "on" or "off" signal for the same reasons as before (perhaps some chemical activity), then the reasons for the behaviour of each and every one in the arrangement didn't involve what the experience was like, and therefore it couldn't be displaying behaviour based on the experience, and couldn't, based on its experience, claim it wasn't in a zombie universe. The problem with that suggestion is that I can, so how come the difference? (Consciousness wouldn't just need to emerge, its emergence would need to make a difference to the behaviour).

You haven't justified your claim that such a zombie universe could
exist; you've merely assumed it, and from that unsupported assumption
gone on to assume your conclusion. There's no evidence for it, and in
light of the fact that activation of the arousal and consciousness
systems in human brains is always associated with conscious
attention, there's plenty of evidence against it.

By the way, is dcl...@qis.net one of your nyms? I could swear I've had
this same conversation with him.

someone

unread,
Jul 3, 2015, 1:31:20 PM7/3/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 5:41:20 PM UTC+1, Jimbo wrote:
> On Thu, 2 Jul 2015 18:50:29 -0700 (PDT), someone
> wrote:
>
> >On Friday, July 3, 2015 at 2:31:22 AM UTC+1, Jimbo wrote:
> >> On Thu, 2 Jul 2015 17:23:36 -0700 (PDT), someone
> >> wrote:
> >>
> >> >On Friday, July 3, 2015 at 12:31:23 AM UTC+1, Jimbo wrote:
> >> >> On Thu, 2 Jul 2015 15:59:20 -0700 (PDT), someone
> >> >> wrote:

[snip]
No I've never posted under the nym you mentioned.

I didn't mention a zombie universe in the bit you were responding to. If you read it again, you'll see I was just pointing out a problem with your suggestion that "consciousness is an emergent property associated with complex and coordinated information processing."

I'll repeat the problem again, though you can see it above.

If you suggest that a single NAND gate doesn't consciously experience, and that there are physical reasons (chemical reactions, for example) why it gives an "on"/"off" output when it receives its "on"/"off" inputs, and that in whatever arrangement it is placed it still gives the "on"/"off" output for the same physical reasons, then those physical reasons don't include the arrangement having the property of consciously experiencing (because that physical property wasn't one of the reasons it behaved as it did as a single NAND gate). If you wish to say that in an arrangement it wouldn't behave for the same chemical reasons as when it isn't in an arrangement, for example, then please explain further.
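
To put that in code (just a sketch of the point, nothing more): the
rule a single NAND gate follows mentions only its own two inputs, so
no property of the surrounding arrangement appears anywhere in it.

# The entire behavioural rule of one NAND gate. No property of the
# arrangement it sits in (conscious or otherwise) features in it.
def nand(a, b):
    return 0 if (a and b) else 1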

The problem *doesn't*:
1) Assume the NAND arrangement wouldn't be conscious.
2) Assume that consciousness couldn't emerge.
3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).

For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?
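
(As a back-of-envelope illustration of the scale involved; the hop
times come from the scenario above, and the chain length is a made-up
number purely for the arithmetic:)

# Light needs one year per light year, so even one hop between
# directly linked gates takes 1-10 years.
min_hop_years, max_hop_years = 1, 10
chain_length = 1000  # hypothetical number of gates along one path
print(chain_length * min_hop_years, "to",
      chain_length * max_hop_years, "years end to end")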






Bill Rogers

unread,
Jul 3, 2015, 2:01:20 PM7/3/15
to talk-o...@moderators.isc.org
As a materialist, my "story" is that the activity of all the NAND gates in the conscious arrangement *is* exactly what consciousness is. There is nothing more to be added. It is not something separate from the ordinary behavior of the NAND gates in the arrangement. Each one behaves just as it should; it responds to inputs just as it would if it were a single gate.

You keep assuming that consciousness is "something else," something that gets added onto some arrangement of the NAND gates. Something that could get added on in some universes and not in others. That you think of this extra thing that gets added on as some unspecified physical property rather than as a spirit does not make it significantly different from dualism.

You might be right. There might be something *added on* to produce consciousness. But at the moment, you are building that assumption into all of your "thought experiments." So your thought experiments don't help you prove anything. You keep assuming your conclusion, whether or not you are able to see that that's what you are doing.

Jimbo

unread,
Jul 3, 2015, 2:31:20 PM7/3/15
to talk-o...@moderators.isc.org
I don't agree with the assumption that conscious output of the
assemblage of NAND gates wouldn't involve inputs to each NAND gate
that were influenced by prior conscious processing of the assemblage.
Consciousness is causal and it's an activation state of the entire
assemblage. When one wakes from a coma or deep dreamless sleep, the
conscious processing develops during the process of waking.


>The problem *doesn't*:
>1) Assume the NAND arrangement wouldn't be conscious.
>2) Assume that consciousness couldn't emerge.
>3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).

I don't agree with your assumption that consciousness couldn't emerge.
I think it's completely unsupported by any empirical evidence.

>For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?

I think it's silly to try to introduce relativistic effects into a
discussion of neuronal processing. It's an irrelevant diversion.

someone

unread,
Jul 3, 2015, 2:46:21 PM7/3/15
to talk-o...@moderators.isc.org
Are you suggesting that if I consider it to be a physical property that not all NAND gate arrangements have that I am assuming my conclusion because I'd be suggesting that it is something *added on* to certain NAND gate arrangements but not others?


> >
> > The problem *doesn't*:
> > 1) Assume the NAND arrangement wouldn't be conscious.
> > 2) Assume that consciousness couldn't emerge.
> > 3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).
> >
> > > For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?

You didn't fancy answering this issue?

someone

unread,
Jul 3, 2015, 2:56:20 PM7/3/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 7:31:20 PM UTC+1, Jimbo wrote:
> On Fri, 3 Jul 2015 10:31:02 -0700 (PDT), someone
I'm not sure what you mean by a "conscious output". Do you just mean an output from something that is conscious? And by "conscious processing", do you just mean processing in something that is conscious?

You say consciousness is causal, but in what sense is it causal if not one NAND gate in the arrangement behaves any differently than it would if its behaviour were not influenced by any conscious experience?

>
> >The problem *doesn't*:
> >1) Assume the NAND arrangement wouldn't be conscious.
> >2) Assume that consciousness couldn't emerge.
> >3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).
>
> I don't agree with your assumption that consciousness couldn't emerge.
> I think it's completely unsupported by any empirical evidence.
>

I was pointing out that I don't make assumptions 1-3.


> >For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?
>
> I think it's silly to try to introduce relativistic effects into a
> discussion of neuronal processing. It's an irrelevant diversion.

No, it isn't irrelevant, because unless you suggest that the NAND gate arrangement was conscious (and how could it be, with the relativity issue?), it acts as the zombie arrangement for me. It behaves exactly the same, and that is relevant to where you claim that "consciousness is causal" in your story.

Jimbo

unread,
Jul 3, 2015, 3:21:20 PM7/3/15
to talk-o...@moderators.isc.org
I'm saying that consciousness is the entire process of multiple
complex feedbacks, recursive processing and regenerative activity
carried out within a normally functioning brain's thalamocortical
system. Consciousness at any moment is influenced by previous states
of consciousness as well as by new sensory input.
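
Schematically, the point is just that such a system computes
state = f(state, input) rather than output = f(input). A toy Python
sketch (the update rule is made up and nothing like real neural
dynamics):

# Recurrent update: the current state folds in the whole input
# history, not just the latest input.
def update(state, sensory_input):
    return [(s + x) % 2 for s, x in zip(state, sensory_input)]

state = [0, 1, 0]
for sensory_input in ([1, 0, 0], [0, 1, 1], [1, 1, 0]):
    state = update(state, sensory_input)
print(state)  # different histories leave different states behind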

>> >The problem *doesn't*:
>> >1) Assume the NAND arrangement wouldn't be conscious.
>> >2) Assume that consciousness couldn't emerge.
>> >3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).
>>
>> I don't agree with your assumption that consciousness couldn't emerge.
>> I think it's completely unsupported by any empirical evidence.
>>
>
>I was pointing out that I don't make assumptions 1-3.

Sorry, I completely overlooked your *doesn't* assume stipulation. I
must have fallen into a momentary state of unconsciousness. Your
argument still has no empirical foundation. Consciousness is a state
of arousal and attention that, in human beings at least, tends to be
associated with creation of internal 'maps' or models both of oneself
and the environment.

>> >For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?
>>
>> I think it's silly to try to introduce relativistic effects into a
>> discussion of neuronal processing. It's an irrelevant diversion.
>
>No, it isn't irrelevant, because unless you suggest that the NAND gate arrangement was conscious (and how could it be, with the relativity issue?), it acts as the zombie arrangement for me. It behaves exactly the same, and that is relevant to where you claim that "consciousness is causal" in your story.

There's no evidence that consciousness is exhibited by any system
besides brains in living bodies. You can imagine that the entire
universe is conscious if you want to, but it's just a fantasy unless
you can adduce evidence to support that supposition.

someone

unread,
Jul 3, 2015, 3:36:20 PM7/3/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 8:21:20 PM UTC+1, Jimbo wrote:
> On Fri, 3 Jul 2015 11:54:08 -0700 (PDT), someone
So you are abandoning the idea that robots can be conscious to avoid the zombie arrangement issue?

Still, before you realised the issue, you were suggesting that consciousness would be causal in the NAND gate arrangement. In your story, is consciousness in the brain "causal" in the same way as you were imagining it to be in the NAND gate arrangement, or is there a significant difference between the way it would be "causal" in the brain and the way you were thinking it would be "causal" in the NAND gate arrangement?

Bill Rogers

unread,
Jul 3, 2015, 3:36:20 PM7/3/15
to talk-o...@moderators.isc.org
Yes, you are assuming that there is something (physical or not) beyond the NAND gate arrangement itself that is responsible for consciousness. And yet, that is what you are trying to prove. So, yes, you are assuming your conclusion.

>
>
> > >
> > > The problem *doesn't*:
> > > 1) Assume the NAND arrangement wouldn't be conscious.
> > > 2) Assume that consciousness couldn't emerge.
> > > 3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).
> > >
> > > > For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?
>
> You didn't fancy answering this issue?

No, basically you are introducing long time delays in what is the equivalent of neural transmission. It is quite possible that if you slowed down neural transmission times enough in our brain, our consciousness would no longer work. This scenario just adds pointless complication to your original, already not very satisfactory, scenario.


someone

unread,
Jul 3, 2015, 3:41:20 PM7/3/15
to talk-o...@moderators.isc.org
I'm slightly confused by your story. Are you suggesting that all NAND gate arrangements experience consciousness?


> >
> >
> > > >
> > > > The problem *doesn't*:
> > > > 1) Assume the NAND arrangement wouldn't be conscious.
> > > > 2) Assume that consciousness couldn't emerge.
> > > > 3) Include any assumptions about how they are arranged, they could be recursive or whatever, you can imagine it to be a very complex arrangement (if you think that the complexity would have a significant impact on the chemical activity in the individual NAND gates for example, then just mention it).
> > > >
> > > > For a separate issue, could you imagine each NAND gate in the arrangement being a box in space, with light detectors as inputs and lasers as outputs, where lasers are used to communicate the signals between them (mirrors could be used to aid the correct signal getting to the correct receiver). And imagine that they fire once every 10 years, and are each separated from each one they have direct light communication with by between 1 and 10 light years. Would you still expect the arrangement to consciously experience, and if so, how were you imagining, physically, in your story, there would be a conscious experience which incorporated information about various NAND gate states simultaneously if no information in the physical universe travelled faster than the speed of light?
> >
> > You didn't fancy answering this issue?
>
> No, basically you are introducing long time delays in what is the equivalent of neural transmission. It is quite possible that if you slowed down neural transmission times enough in our brain, our consciousness would no longer work. This scenario just adds pointless complication to your original, already not very satisfactory, scenario.

I'm just considering NAND gates at the moment, or are you suggesting that digital computers couldn't be conscious no matter what processing they did? Because what difference would the clock speed make?

Message has been deleted

Jimbo

unread,
Jul 3, 2015, 3:56:21 PM7/3/15
to talk-o...@moderators.isc.org
No I'm not abandoning it. If non-biological robots eventually achieve
cognitive abilities as flexible and adaptive as our own, or even
surpass us in this respect, then I suspect that they would be as or
more conscious than we are. Inflexible programmed responses wouldn't
do it, but it might be accomplished through systems that could support
processes of multiple complex feedbacks, recursive processing and
regenerative activity comparable to those that occur in our own
brains.

>Still before you realised the issue you were suggesting that consciousness would be causal in the NAND gate arrangement. In your story is consciousness in the brain "causal" in the same way as you were imagining it to be in the NAND gate arrangement or is there a significant difference in your story between the way it would be "causal" in the brain from the way you were thinking it would be "causal" in the NAND gate arrangement?

As you'll note from my response above I haven't 'realized' whatever
issue you think exists. I don't think you've demonstrated any real
issues. It's causal in that it affects both future brain states and
behavior.

someone

unread,
Jul 3, 2015, 4:06:20 PM7/3/15
to talk-o...@moderators.isc.org
On Friday, July 3, 2015 at 8:56:21 PM UTC+1, Jimbo wrote:
> On Fri, 3 Jul 2015 12:31:24 -0700 (PDT), someone
Well then, could you perhaps answer whether you would be thinking that the consciousness of the NAND gate arrangement would be causal in the same way as the consciousness of the brain in your story?

Also could you explain whether in your story the NAND gate in space arrangement would be conscious, or whether it would be an example of a zombie arrangement.

(my last post for tonight, I intend to post again on Sunday)

Jimbo

unread,
Jul 3, 2015, 4:36:20 PM7/3/15
to talk-o...@moderators.isc.org
I've already stated that I think non-biological robots might exhibit
consciousness, if their 'brains and nervous systems' are capable of
multiple complex feedbacks, recursive processing and regenerative
activity comparable to those that occur in our own brains. It seems to
me it would be the combined sensory-motor-cognitive capacity that's
important, no matter what the physical basis of that capacity might
happen to be.

RSNorman

unread,
Jul 3, 2015, 5:26:20 PM7/3/15
to talk-o...@moderators.isc.org
Another case of someone avoiding answering the issue and reverting to
a demand that you answer his original question.

RSNorman

unread,
Jul 3, 2015, 5:31:19 PM7/3/15
to talk-o...@moderators.isc.org
On Thu, 02 Jul 2015 17:50:30 -0400, RSNorman <r_s_n...@comcast.net>
wrote:


>So you can't answer a simple question: why would you believe what a
>robot reports about consciousness?

No response, someone?

RSNorman

unread,
Jul 3, 2015, 5:31:19 PM7/3/15
to talk-o...@moderators.isc.org
On Fri, 03 Jul 2015 11:59:12 -0400, RSNorman <r_s_n...@comcast.net>
wrote:


>You seem not to want to respond to me but I will keep trying anyway.
>

No response, someone?



RSNorman

unread,
Jul 3, 2015, 5:31:19 PM7/3/15
to talk-o...@moderators.isc.org
Long delays in neural transmission cause problems interpreting the
result of your own actions.

RSNorman

unread,
Jul 3, 2015, 5:31:19 PM7/3/15
to talk-o...@moderators.isc.org
On Thu, 02 Jul 2015 21:40:08 -0400, RSNorman <r_s_n...@comcast.net>
wrote:

>On Thu, 2 Jul 2015 15:34:15 -0700 (PDT), Bill Rogers
><broger...@gmail.com> wrote:
>
>>On Thursday, July 2, 2015 at 5:41:22 PM UTC-4, Sneaky O. Possum wrote:
>>> Bill Rogers <broger...@gmail.com> wrote in
>>> news:857dae25-bd1d-41a1...@googlegroups.com:
>>>
>>> > On Thursday, July 2, 2015 at 10:51:24 AM UTC-4, someone wrote:
>>> >
>>> >> In this harder to not-understand-the-point scenario, imagine you are
>>> >> in a box with an LED display which does not display any symbol
>>> >> initially and then displays either a "1" or a "0", goes off again,
>>> >> and then displays either a "1" or a "0", and then goes off again.
>>> >> Imagine this on/off cycle will happen several times. Imagine also
>>> >> that you are
>>> >> instructed to shout out "I'm in a blue box" if when the LED comes on
>>> >> it displays a "1" and shout out "I'm in a green box" if the LED comes
>>> >> on and it displays a "0". Imagine you don't know the colour of the
>>> >> box you are in, and that you are able to and do follow the
>>> >> instructions.
>>> >>
>>> >> Now imagine that unknown to you, what caused the LED to display a "0"
>>> >> or a "1" changed each time. Given that you wouldn't know what caused
>>> >> the LED to display a "0" or "1", would you agree that you weren't
>>> >> basing what you shouted out on what caused the LED to display a "0"
>>> >> or "1" each time? ---
>>> >>
>>> > It doesn't matter why the LED displays 0 or 1. My instructions are
>>> > only based on the particular number that I see. I'm certainly not
>>> > instructed to look at the color of the box and say what that color is.
>>> > So of course, when I see a 0 I'll say what I was instructed to say
>>> > when I see a zero, and when I see a 1 I'll say whatever I was
>>> > instructed to say when I see a 1.
>>> >
>>> > So what?
>>> >
>>> > Burkhard is right, this is a weak and poorly fleshed out version of
>>> > Searle's Chinese Room (which is itself an advertisement for a point of
>>> > view rather than an argument).
>>>
>>> It ain't *that* bad. Searle has argued that artificial intelligence is
>>> feasible, with the proviso that an artificial thinking machine will have
>>> to mimic the architecture of an organic one in ways that a binary
>>> processor doesn't. By his own account, he formulated the Chinese Room
>>> thought-experiment to demonstrate that a binary processor functions in an
>>> essentially different way from an organic brain, and will never become a
>>> brain regardless of how fast it gets or how much power it has.
>>>
>>> At the time he formulated the thought-experiment, people were arguing
>>> that continual increases in the processing speed of computers would
>>> eventually result in a conscious computer: there may still be some people
>>> who hold that view.
>>>
>>> Searle may be wrong, but I've yet to read a convincing rebuttal of his
>>> actual claims. (A number of people have convincingly rebutted claims he
>>> never actually made, but the utility of such rebuttals is questionable.)
>>> --
>>> S.O.P.
>>
>>Searle's claim was that no computer running a program based on formal manipulation of symbols can have either understanding or intentionality, because no formal manipulation of symbols (syntactics) can give the symbols meaning (semantics). His argument just doesn't even get to that conclusion, and it certainly does not get to the broader conclusions that people sometimes attribute to him.
>>
>>Here are a few problems with his paper
>>
>>1. The person in the box is a distraction, put there for rhetorical purposes to force the reader's conclusion in the desired direction without an actual argument. We expect the person to be conscious. But the person in the box is acting as a small part of the circuitry in a computer. We would not expect either a small collection of semiconductors or a handful of neurons to be conscious. It would have been a more honest argument if Searle simply left out the man in the box and just talked about a computer.
>>
>>2. The "algorithmic program" that Searle describes is simply a look-up table. Lots of programs are far more complex than that, and I doubt anyone thinks that a look-up table would emulate consciousness. Searle is really describing an over-simplified program designed to mimic conversation rather than a program designed to act conscious. And yet he wants the reader to treat that look-up table as the paradigm for all algorithmic programs.
>>
>>3. There is no ongoing interaction between the man/box system and the world, and no self-reference, no process by which the man/box system monitors and models its own internal states (and that is certainly something that could be done algorithmically).
>>
>>4. Searle's key claim is that no manipulation of symbols (syntactics) can produce meaning (semantics), but he does not make much of an argument to support this claim, which probably seems self-evident to him. Specifically, he does not ask where meaning comes from in humans. He merely attributes it to unspecified characteristics of the brain.
>>
>>5. Finally, he does not explain what there is in the brain that is non-algorithmic or non-binary. What exactly is there in the timing of the firing of neurons that cannot be broken down to binary operations? Of course the description of them as binary operations would be very complex considering summation of lots of inputs, the global level of different neurotransmitters and neuroactive drugs, etc. But he doesn't offer a good explanation of what there is specifically non-algorithmic in the brain.
>>
>>The Chinese box is basically a trick to force the reader to a conclusion by tweaking his intuitions in the right way. It's not a real argument.
>
>What you say makes someone's argument hew even closer to the Chinese
>Box story: they are both basically tricks to force the reader to a
>conclusion by tweaking the situation in the proper way.
>
>There is an enormous difference between a person with a brain doing
>"computations" and a computer doing supposedly the same computations.
>The person has effectors -- we can act in the world and make changes
>to the physical environment. The person has sensors -- we can detect
>changes in the physical environment. And we can determine that the
>changes we see are often direct consequences of the actions we take.
>Even more, the person has internal machinery and the actions and even
>the computations (thoughts, if you want) cause changes in the internal
>machinery (metabolic changes, for example). And the person has
>internal sensors and we can detect the changes in our own bodies that
>are produced from our mental activity. All of these things act in a
>smoothly coherent and coordinated way (usually). These are consistent
>with us being agents in the world acting on it and being acted on. I
>would argue that all these notions are essential parts of what we call
>"being aware of ourselves".
>
>It should not be impossible to build a robot to do a lot of this
>although the incredible quantity of physical changes that occur within
>our body and the incredible quantity of sensors we have to detect
>those changes would be virtually impossible to duplicate in practice.
>Would a robot be designed to find itself in some pain and distress
>because of an overactive immune and hormonal system simply because it
>was given problems to solve that were incredibly difficult with
>insufficient information and conflicting requirements to compute
>behavior that it can easily calculate would be worthless in coping
>with the problem at hand yet still is necessary to perform for many
>other reasons? We humans very often react extremely poorly to being
>subjected to stress, physical, emotional, and mental, for long periods
>of time. Would the computer say "I really feel shitty -- I need a
>vacation!" representing self knowledge? Or would it say "my
>diagnostic programs indicate something is amiss -- please summon a
>repair technician"? What we would now build is the latter. I do not
>know why a robot capable of learning from experience and sharing
>experiences and learning and even discussing alternative courses of
>action and details of internal states with others would not act as the
>former and express self awareness.
>
>To return to the Chinese Room specifically, the person locked in the
>room would not demonstrate any understanding. However if the person
>actually got out and interacted with Chinese speakers and saw the
>results of the translations, how they produced changes both in the
>behavior and the emotional state of the listeners, and further engage
>in back and forth dialog, then I would argue that true understanding
>could easily result.
>
>To return to someone's silly premise: NAND gates alone may be able to
>produce a fully functional general purpose computer with massive memory
>stores capable of machine learning and all but it would be like a
>"brain in a vat" or the person locked inside the Chinese Room. To
>have consciousness requires actual behavior in the world, actual
>experiences to detect, analyze, and interpret and, finally, to
>internalize. So the robot has an awful lot of machinery far beyond
>simple NAND gates.

No response, someone?

RSNorman

unread,
Jul 3, 2015, 5:31:20 PM7/3/15
to talk-o...@moderators.isc.org
On Fri, 03 Jul 2015 10:20:07 -0400, RSNorman <r_s_n...@comcast.net>
wrote:
No response, someone?

Bill Rogers

unread,
Jul 3, 2015, 6:11:20 PM7/3/15
to talk-o...@moderators.isc.org
Of course not. I'm saying that if some NAND arrangement is conscious, then it is conscious simply because of the arrangement itself and not because of some mysterious added special something, be it physical or spiritual.

>

> > No, basically you are introducing long time delays in what is the equivalent of neural transmission. It is quite possible that if you slowed down neural transmission times enough in our brain, our consciousness would no longer work. This scenario just adds pointless complication to your original, already not very satisfactory, scenario.
>
> I'm just considering NAND gates at the moment, or are you suggesting that digital computers couldn't be conscious no matter what processing they did? Because what difference would the clock speed make?

The clock speed makes a difference if the inputs are still coming in at normal speed. If you simply slow everything in the universe down by the same factor, I'm not sure what would happen. I don't think that this scenario gets at anything particularly fundamental or interesting.


Bill Rogers

unread,
Jul 3, 2015, 6:16:19 PM7/3/15
to talk-o...@moderators.isc.org
In fairness to someone, it was S.O.P. who thought I was being too hard on Searle, not someone.

Mark Isaak

unread,
Jul 3, 2015, 6:21:19 PM7/3/15
to talk-o...@moderators.isc.org
On 7/2/15 7:47 AM, someone wrote:
> In this harder to not-understand-the-point scenario,

I hope you mean by that that it is obvious that the scenario has no point.

> imagine you are in a box with an LED display which does not display
> any symbol initially and then displays either a "1" or a "0",
> goes off again, and then displays either a "1" or a "0", and then
> goes off again. Imagine this on/off cycle will happen several
> times. Imagine also that you are instructed to shout out "I'm
> in a blue box" if when the LED comes on it displays a "1" and
> shout out "I'm in a green box" if the LED comes on and it
> displays a "0". Imagine you don't know the colour of the box you
> are in, and that you are able to and do follow the instructions.
>
> Now imagine that . . . . would you agree that you weren't
> basing what you shouted out on what caused the LED to
> display a "0" or "1" each time?

No need to imagine anything. The basis for what I shouted was your
instructions, nothing more. A simple binary switch which pointed to
"BLUE BOX" or "GREEN BOX" depending on the LED would be a lot simpler.

--
Mark Isaak eciton (at) curioustaxonomy (dot) net
"Keep the company of those who seek the truth; run from those who have
found it." - Vaclav Havel

Roger Shrubber

unread,
Jul 3, 2015, 8:21:19 PM7/3/15
to talk-o...@moderators.isc.org
How you doing? How about the weather we're having, eh? You think
the Giants are going to catch up with the Dodgers?

Roger Shrubber

unread,
Jul 3, 2015, 8:31:20 PM7/3/15
to talk-o...@moderators.isc.org
Consciousness? Not really sure what it is.
A recognition of self as a distinct entity from others?
The same with a third-eye awareness of one's own awareness of self?
I don't think we need infinite regress; the later stages seem to
all focus on the navel, but perhaps it was significant to Adam.

Is it something akin to the integral of sensation? That would be an
interesting metaphor to exhaust, sensory perception through time,
through "self", through ???

Perhaps you can propose some differential equations that I can
fail to understand.


RSNorman

unread,
Jul 3, 2015, 8:31:20 PM7/3/15
to talk-o...@moderators.isc.org
Those are trivial matters. What is important is that my collection of
NAND gates indicates decisively that the Detroit Lions will take the
Super Bowl next season. And that is both in the zombie universe and
the universe that it parodies.

Unfortunately, in the universe I occupy, the Detroit Lions themselves
are the parody.



Jimbo

unread,
Jul 3, 2015, 8:36:19 PM7/3/15
to talk-o...@moderators.isc.org
On Fri, 03 Jul 2015 20:19:34 -0400, Roger Shrubber
<rog.sh...@gmail.com> wrote:

You're someone, but you're not someone.

Roger Shrubber

unread,
Jul 3, 2015, 9:06:19 PM7/3/15
to talk-o...@moderators.isc.org
You mean, I'm not "someone".
