
atheist evolutionary accounts...


someone

Jul 30, 2015, 3:24:58 AM
to talk-o...@moderators.isc.org
[This is being posted because my response to post
https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
isn't getting through]

If you haven't been following the conversation, here is a quick synopsis (if you're a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):

https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ

And there are the following questions about an atheist perspective.

(1) Could a robot with a NAND gate control unit consciously experience?

If the answer is "yes" then the next question is

(1.y) If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?

If the answer to (1.y) is "yes" then the next question is

(1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?

If the answer to (1.y.y) is "yes" then how does the account suggest that consciously experiencing makes a difference to behaviour, if the behaviour when it does emerge is the same as when it doesn't.

If the answer to (1.y.y) is "no" then we can discuss from there.

If the answer to (1.y) is "no" then how does the account suggest that consciously experiencing make difference to the robot's behaviour if the NAND gate arrangements outputs would be the same whether the property had emerged or not.

If the answer to (1) is "no" then what is special about the biological chemistry?
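The gate behaviour these questions turn on can be made concrete with a toy sketch (a hypothetical illustration: three NAND gates stand in for the billions in the scenario, and `run_cycle` and the wiring are invented for this example). Conditioning on the runs where the guesser happened to be right, the output record is indistinguishable from the rule-following one:

```python
import random

def nand(a: int, b: int) -> int:
    """2-input NAND: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def run_cycle(inputs, guessers=frozenset(), rng=None):
    """One cycle of a toy 3-gate arrangement. Gates listed in
    `guessers` emit a coin flip instead of following the NAND rule."""
    a, b, c = inputs
    pairs = [(a, b), (b, c), (a, c)]
    return [rng.randint(0, 1) if i in guessers else nand(x, y)
            for i, (x, y) in enumerate(pairs)]

# Outputs when every gate follows its instructions:
rule_outputs = run_cycle((1, 0, 1))

# Re-run with gate 0 guessing, keeping only the runs where the guess
# happened to be correct (the subset question (1.y.y) asks about):
rng = random.Random(0)
while True:
    guess_outputs = run_cycle((1, 0, 1), guessers={0}, rng=rng)
    if guess_outputs == rule_outputs:
        break

assert guess_outputs == rule_outputs  # records are bit-for-bit identical
```

The point of the sketch is only that the two output records cannot be told apart after the fact; nothing in it decides the questions above.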

This post is a response to the poster Bill Rogers, who claimed that a "yes" or "no" answer couldn't be given in response to (1.y.y), because the hand raisers wouldn't be able to guess correctly. Bill seemed to be imagining billions of guessers, and I asked about the situation where only one guessed, once each time, with the thing done 10 times. Bill didn't answer the question in the response, but when it was pointed out, Bill did respond:
https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ

for convenience quoted here:
---
There are billions, if not trillions of people required in your hand-raising system. For the system to be conscious each of them has to randomly guess whether to raise his hand in exactly the right way to produce the output that the NAND gate in a conscious robot would produce. And each one of those trillions of hand raisers has to guess correctly at each cycle and to keep doing it indefinitely. It's an utterly absurd premise. And it's not as though nobody's told you how and why it's an absurd premise already.

Now, you want to change the scenario so that all but 1 of the trillions of gates behave normally, and that one guesses only 10 times (out of the millions of cycles that would be required to become conscious). That's like a brain with a transiently sick neuron, or a computer with a brief loose connection. Is this really a key point? Seriously? If the guessing gates are a tiny enough fraction of the total, and if none of them is critical, and if there's enough redundancy in the system, then consciousness could probably withstand a few wrong guesses. So what?
---

My response is:

Are you suggesting that there would be critical NAND gates where, if one of their results was due to a guess, the arrangement wouldn't have the conscious experience of being a robot emerge?

[It was never specified how many people guessed, so it doesn't require a change to what was suggested for only one to be considered. The reason I'm specifically asking you about the case with one is that you had assumed loads and were then refusing to answer, claiming that it wouldn't be likely that they'd all guess correctly. I decided not to go on to discuss why the unlikelihood of an event happening needn't prevent us from considering what would follow if it did, in order to investigate your account, but just to consider one guesser and bypass the obstacle.]

Bill Rogers

Jul 30, 2015, 7:44:58 AM
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 3:24:58 AM UTC-4, someone wrote:

> [snip]
>
> My response is:
>
> Are you suggesting that there would be critical NAND gates where, if one of their results was due to a guess, the arrangement wouldn't have the conscious experience of being a robot emerge?

Think about your scenario in detail. There's a vast NAND gate arrangement; it is embedded in a robot and receiving inputs from the environment. In order to be conscious it is going through millions of steps. Now you ask what happens if in one gate "one of the results was due to a guess." What exactly do you mean? Do you mean that the gate happened to guess correctly millions of times in succession? Do you mean that it guessed once and just happened to guess correctly that once, and all the rest of the time functioned normally?

My answer will probably seem to you to reject the premise of your question. It does not matter *why* the gate produces the correct result. Sure. But, realistically, the gate cannot possibly produce the correct result consistently over the many cycles required to maintain consciousness unless it is producing results based on its instructions as a gate, rather than as random outputs.

Now you ask, could there be a critical gate such that a wrong guess in that gate would preclude consciousness. Well, if there were no built-in redundancies, why not? For one thing, imagine a gate hooked up so that if it outputs "yes" the power remains on, and if it outputs "no" the power shuts down permanently. Now imagine the rule for that gate is that it should output "yes" no matter what. But one day it "guesses" "no." Bye bye consciousness.
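That power-latch rule can be put in a few lines (a hypothetical sketch; `PowerGate` and the cycle sequence are invented for illustration):

```python
# The rule says the gate should output 1 ("yes") every cycle; a single
# 0 ("no") cuts the power permanently, with no recovery on later cycles.

class PowerGate:
    def __init__(self):
        self.power_on = True

    def step(self, output_bit: int) -> None:
        if output_bit == 0:        # one wrong "guess"...
            self.power_on = False  # ...and the shutdown is permanent

gate = PowerGate()
for out in [1, 1, 1, 0, 1, 1]:  # a bad guess on the fourth cycle
    gate.step(out)
assert not gate.power_on  # correct outputs afterwards don't help
```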

>
> [It was never specified how many people guessed, so it doesn't require a change to what was suggested for only one to be considered. The reason I'm specifically asking you about the case with one is that you had assumed loads and were then refusing to answer, claiming that it wouldn't be likely that they'd all guess correctly. I decided not to go on to discuss why the unlikelihood of an event happening needn't prevent us from considering what would follow if it did, in order to investigate your account, but just to consider one guesser and bypass the obstacle.]

Actually, the likelihood of an event is quite important. You are postulating that by random chance the gates in your second NAND gate arrangement produce outputs exactly matching those from a conscious robot. And, unless I am far wrong, you want to draw the implication that if it doesn't matter *why* the gates give the outputs they do (rules versus random guesses), and if consciousness can occur, then consciousness is independent of the behavior of the gates, or something like that. Only you can say where you are going with your argument, and you studiously refuse to do so.

The short answer is that random outputs will never, realistically, just happen to match those from a conscious robot. If you reduce the noise in the system to the point that there are only extremely rare random outputs, and usually the gates function normally, then perhaps consciousness can go on in spite of the hiccups.

Mark is basically correct about your argument; it is as meaningful as saying, well if I put a brain through a blender and just by chance it was still conscious that would prove that consciousness does not depend on the brain. SFW.


js

Jul 30, 2015, 8:04:58 AM
to talk-o...@moderators.isc.org
Is it a 74HC00 or maybe a 74LS00? Guess it could be CMOS. Is it a die or packaged? I'm not aware (woops, hahahaha) of a single-gate package but somehow that does ring a bell (they generally come four to a package).
What if it's a DeMorgan NAND gate? That would be more components. That would be more power and board space. Does that count off or add to? That would be more things to be aware of so I guess that could help out.
What if one of your inputs got stuck high? Then you would all of a sudden have an inverter and not a NAND gate. What would that do to your universe? All of a sudden you would have no NAND gate. Just by getting one input stuck high! A new universe!!!
I once had a board like that. I put an oscilloscope probe on a NAND gate of a broken board. The board started working! I took the probe away... the board started failing. I could change the universe with my hand!!! What a trip that was. A universe with a broken board one second and a universe with a working board the next.
You remember that day? Did you guess that is what was making the universe change? Or did you even notice the universe was changing? Perhaps you were just unconscious of it? Or you couldn't guess what was happening? I was conscious of it, but you were not!
What's wrong with you?
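The stuck-high point above checks out on the truth table; a quick sketch (plain Python standing in for the 74HC00):

```python
def nand(a: int, b: int) -> int:
    """2-input NAND: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# With one input stuck at logic 1, the gate reduces to an inverter:
for x in (0, 1):
    assert nand(1, x) == 1 - x  # NAND(1, x) == NOT x

print("stuck-high NAND behaves as NOT")
```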

someone

Jul 30, 2015, 9:54:57 AM
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 12:44:58 PM UTC+1, Bill Rogers wrote:
> On Thursday, July 30, 2015 at 3:24:58 AM UTC-4, someone wrote:
>
> > [snip]
> >
> > My response is:
> >
> > Are you suggesting that there would be critical NAND gates where, if one of their results was due to a guess, the arrangement wouldn't have the conscious experience of being a robot emerge?
>
> Think about your scenario in detail. There's a vast NAND gate arrangement; it is embedded in a robot and receiving inputs from the environment. In order to be conscious it is going through millions of steps. Now you ask what happens if in one gate "one of the results was due to a guess." What exactly do you mean? Do you mean that the gate happened to guess correctly millions of times in succession? Do you mean that it guessed once and just happened to guess correctly that once, and all the rest of the time functioned normally?
>
> My answer will probably seem to you to reject the premise of your question. It does not matter *why* the gate produces the correct result. Sure. But, realistically, the gate cannot possibly produce the correct result consistently over the many cycles required to maintain consciousness unless it is producing results based on its instructions as a gate, rather than as random outputs.
>
> Now you ask, could there be a critical gate such that a wrong guess in that gate would preclude consciousness. Well, if there were no built-in redundancies, why not? For one thing, imagine a gate hooked up so that if it outputs "yes" the power remains on, and if it outputs "no" the power shuts down permanently. Now imagine the rule for that gate is that it should output "yes" no matter what. But one day it "guesses" "no." Bye bye consciousness.
>

The question (1.y.y) is about the people that guessed correctly. I was asking whether there were any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge. Unfortunately your answer was about what happens if they didn't guess correctly. So could you perhaps state whether there are any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge?

Bill Rogers

Jul 30, 2015, 12:14:56 PM
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 9:54:57 AM UTC-4, someone wrote:
> On Thursday, July 30, 2015 at 12:44:58 PM UTC+1, Bill Rogers wrote:
> > On Thursday, July 30, 2015 at 3:24:58 AM UTC-4, someone wrote:
> > [snip]
>
> The question (1.y.y) is about the people that guessed correctly. I was asking whether there were any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge. Unfortunately your answer was about what happens if they didn't guess correctly. So could you perhaps state whether there are any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge?

A critical gate is critical because a wrong output from the gate would prevent consciousness from happening. If that critical gate gave random outputs, instead of outputs according to instruction, and if in each successive cycle it randomly happened to give the same output that it would have given had it followed its instructions, then yes, consciousness would occur. But the premise of the question is absurd; the chance that the gate with a random output would just happen to randomly produce the correct output consistently over the thousands or millions of cycles required for conscious experience is negligible.

So let's say that at just one point, during just one cycle, one critical gate fails to act according to instructions, but, by random chance, happens to give the correct output anyway, then, sure consciousness would continue. If you want to go from that to the claim that consciousness doesn't depend on the behaviour of the gates, that's your account, not mine.

someone

Jul 30, 2015, 12:49:56 PM
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 5:14:56 PM UTC+1, Bill Rogers wrote:
> On Thursday, July 30, 2015 at 9:54:57 AM UTC-4, someone wrote:
> > On Thursday, July 30, 2015 at 12:44:58 PM UTC+1, Bill Rogers wrote:
> > > On Thursday, July 30, 2015 at 3:24:58 AM UTC-4, someone wrote:
> > > [snip]
> >
> > The question (1.y.y) is about the people that guessed correctly. I was asking whether there were any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge. Unfortunately your answer was about what happens if they didn't guess correctly. So could you perhaps state whether there are any critical NAND gates where, if the person guessed correctly, the conscious experience of being a robot wouldn't emerge?
>
> A critical gate is critical because a wrong output from the gate would prevent consciousness from happening. If that critical gate gave random outputs, instead of outputs according to instruction, and if in each successive cycle it randomly happened to give the same output that it would have given had it followed its instructions, then yes, consciousness would occur. But the premise of the question is absurd; the chance that the gate with a random output would just happen to randomly produce the correct output consistently over the thousands or millions of cycles required for conscious experience is negligible.
>
> So let's say that at just one point, during just one cycle, one critical gate fails to act according to instructions, but, by random chance, happens to give the correct output anyway, then, sure consciousness would continue. If you want to go from that to the claim that consciousness doesn't depend on the behaviour of the gates, that's your account, not mine.
>

I'll just repost the questions as a convenient reminder:
---
(1) Could a robot with a NAND gate control unit consciously experience?

If the answer is "yes" then the next question is

(1.y) If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?

If the answer to (1.y) is "yes" then the next question is

(1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?

If the answer to (1.y.y) is "yes" then how does the account suggest that consciously experiencing makes a difference to behaviour, if the behaviour when it does emerge is the same as when it doesn't.

If the answer to (1.y.y) is "no" then we can discuss from there.

If the answer to (1.y) is "no" then how does the account suggest that consciously experiencing make difference to the robot's behaviour if the NAND gate arrangements outputs would be the same whether the property had emerged or not.

If the answer to (1) is "no" then what is special about the biological chemistry?
---

When you first answered them (https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/UPrXEMVKFawJ), you stated that you were replying "no" to (1.y).

Now you seem to raise the possibility that you have changed your position and are now saying "yes" to (1.y) and "no" to (1.y.y), because you are stating that it wouldn't matter which NAND gate was guessed, as long as it was guessed correctly.

Have you changed your position, or does it matter how many guess? (This isn't meant as a loaded question; the "does it matter how many guess" is meant as a guess at what the alternative to you having changed your position might be.)

Bill Rogers

Jul 30, 2015, 1:09:56 PM
to talk-o...@moderators.isc.org
I've explained *my account* at length. You can decide whether you want to continue to ignore it (as you seem to be doing).

I think it possible that a robot with a computer made of some very large NAND gate arrangement could be conscious.

I think your hand raisers just confuse the issue. Hand raisers who guess are equivalent to gates which give random outputs. Hand raisers widely separated acting with mirrors are equivalent to gates with a built in time-delay between input and output. So I'm just going to answer in terms of gates.

If you substitute random gates for the NAND gates used to build the robot's "brain," and they just give random outputs, then the robot won't be conscious. In fact, the robot won't do much of anything. If you hypothesize that somehow all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable: billions of gates, thousands of cycles, each gate just happening to give the same output as if it were a proper NAND gate. [And note that, even in this ridiculous case, the consciousness depends on the allegedly random gates persistently behaving just as though they were proper NAND gates, so consciousness would still depend on the NAND-like behavior of the gates.]

OK, now you want to reduce the number of random gates in the huge NAND gate arrangement, and maybe you want to reduce how often those gates give random output as opposed to acting like a proper NAND gate. Well, you might get lucky for a handful of gates for a handful of cycles, or there might be enough redundancy in the huge arrangement to tolerate some errors, but once you've swapped in any significant number of random output gates, then my expectation is that the consciousness would be disrupted. That's because consciousness depends on the gates in the arrangement acting like NAND gates.
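For scale, the arithmetic behind that improbability can be sketched (a back-of-envelope illustration, assuming one independent fair-coin guess per cycle; the cycle counts are the thread's own rough figures, not measurements):

```python
# A gate emitting uniform random bits matches the correct NAND output
# on any single cycle with probability 1/2, so matching n consecutive
# cycles has probability 2**-n.

def p_match(n_cycles: int) -> float:
    return 0.5 ** n_cycles

assert p_match(10) == 1 / 1024  # already under 0.1% after 10 cycles
print(p_match(100))             # ~7.9e-31 after 100 cycles
```

And that is for a single gate; with many guessing gates the probabilities multiply further.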

someone

Jul 30, 2015, 1:44:56 PM
to talk-o...@moderators.isc.org
The issue doesn't involve any incorrect guessing. I'm not sure why you would bring incorrect guessing into it other than to obfuscate the issue.

Well, I've brought up that you have stated that you were answering "no" to (1.y), and you have made no effort to state that your position has changed, so I'll assume it remains the same.

So according to your account we'd know from the hand-raisers what outputs to expect if the property of experiencing being a robot hadn't influenced the behaviour (since the property wouldn't emerge even if they performed correctly, thus "no" to (1.y)). What I am wondering is: how are you interpreting the exact same set of output results from a robot control unit as reflecting an influence of the emergent conscious experience of being a robot?



Bill Rogers

Jul 30, 2015, 2:55:04 PM
to talk-o...@moderators.isc.org
Incorrect guessing is important, because if the possibility of an incorrect guess does not exist, then it's not a guess.

>
> Well, I've brought up that you have stated that you were answering "no" to (1.y), and you have made no effort to state that your position has changed, so I'll assume it remains the same.

Did you read what I wrote?

>
> So according to your account we'd know from the hand-raisers what outputs to expect if the property of experiencing being a robot hadn't influenced the behaviour (since the property wouldn't emerge even if they performed correctly, thus "no" to (1.y)). What I am wondering is: how are you interpreting the exact same set of output results from a robot control unit as reflecting an influence of the emergent conscious experience of being a robot?

I did not say that the property would not emerge if the random gates happened to give the same outputs as a proper NAND gate. I said that it was impossible that randomly acting gates *could* give the same outputs as proper NAND gates consistently. I said your question was flawed and based on an absurd premise. I answered neither yes, nor no, but explained in detail why your premise was absurd.

You clearly paid not the slightest attention to what I wrote in my last post. I gave you "my account." You are inventing an account of your own. You really don't need me here to do that. Just don't attribute your account to me.


someone

Jul 30, 2015, 3:59:57 PM
to talk-o...@moderators.isc.org
There would be a possibility of the guess being incorrect, but I'm just asking about the subset of cases where the guesses were correct. So you are obfuscating that by repeatedly bringing in discussion of cases which wouldn't be in the set being discussed.


> >
> > Well, I've brought up that you have stated that you were answering "no" to (1.y), and you have made no effort to state that your position has changed, so I'll assume it remains the same.
>
> Did you read what I wrote?
>

You've written lots of things (and stuff which seems to me to obfuscate matters I skim over), and quite a few could be interpreted as being designed to obfuscate. Here you aren't exactly being helpful; I would have thought that if your motive wasn't to obfuscate, you would have tried to make your position clearer by confirming whether or not you have changed it from saying "no" to (1.y), but you made no such attempt.

> >
> > So in according to your account we'd know from the hand-raisers what outputs to expect if the property of experiencing being a robot hadn't influenced the behaviour (since the property wouldn't emerge even if they did it correctly thus "no" to (1.y)). What I am wondering is how are you interpreting the exact same set of output results from a robot control unit as reflecting an influence of the emergent conscious experience of being a robot?
>
> I did not say that the property would not emerge if the random gates happened to give the same outputs as a proper NAND gate. I said that it was impossible that randomly acting gates *could* give the same outputs as proper NAND gates consistently. I said your question was flawed and based on an absurd premise. I answered neither yes, nor no, but explained in detail why your premise was absurd.
>
> You clearly paid not the slightest attention to what I wrote in my last post. I gave you "my account." You are inventing an account of your own. You really don't need me here to do that. Just don't attribute your account to me.

(1.y) is the question "If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?"

Notice that there is no guessing at all. So it isn't a question about guessing.

(1.y.y) is the question about guessing "If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?"

Now try to keep that in mind while trying to spot the problem with your answer in https://groups.google.com/forum/#!topic/talk.origins/VJMS6crS9AU%5B576-600%5D

I'll quote a bit of it here:
"I say no to 1.y because I do not accept that random behavior of gates would be the same as the behavior of gates "following instructions" at least not for so many gates over any length of time. The problem here is your question, and no simple yes or no will sort that out."

Can you see the problem? (What random behaviour is there in question (1.y)? There is no guessing; it's a functioning standard NAND gate arrangement. That was a hint.)
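The hint can be made concrete with a minimal sketch (my own illustration, not anything from the thread): question (1.y) involves no randomness, because an instruction-following hand raiser just implements the NAND truth table.

```python
# A minimal illustration of the point above: an instruction-following
# hand raiser behaves exactly like a deterministic NAND gate, so no
# guessing enters into question (1.y) at all.

def nand(a: int, b: int) -> int:
    """A proper NAND gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# The full behaviour of an instruction-following hand raiser is just
# this truth table; nothing random is involved.
truth_table = {(a, b): nand(a, b) for a in (0, 1) for b in (0, 1)}
print(truth_table)  # {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```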

Bill Rogers

unread,
Jul 30, 2015, 4:09:57 PM7/30/15
to talk-o...@moderators.isc.org
Gee, perhaps I got confused by your numbering system and typed 1.y when I meant 1.y.y. But if you'd actually paid attention to what I wrote, that would have been obvious to you. So just in case you don't remember, here is my account (reposted for your convenience.)

I've explained *my account* at length. You can decide whether you want to continue to ignore it (as you seem to be doing).

I think it possible that a robot with a computer made of some very large NAND gate arrangement could be conscious.

I think your hand raisers just confuse the issue. Hand raisers who guess are equivalent to gates which give random outputs. Hand raisers widely separated acting with mirrors are equivalent to gates with a built in time-delay between input and output. So I'm just going to answer in terms of gates.

If you substitute random gates for the NAND gates used to build the robot's "brain," and they just give random outputs, then the robot won't be conscious. In fact, the robot won't do much of anything. If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable, billions of gates, thousands of cycles, each gate just happening to give the same output it would as if it were a proper NAND gate. [And note that, even in this ridiculous case, the consciousness depends on the allegedly random gates persistently behaving just as though they were proper NAND gates, so consciousness would still depend on the NAND-like behavior of the gates.]
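For a rough sense of scale of this improbability, here is a hedged back-of-envelope sketch. The gate and cycle counts are illustrative figures of my own, and each gate output is modelled as an independent fair coin flip; the thread itself gives no exact numbers.

```python
# Back-of-envelope odds for the scenario above: every random gate must
# match the proper NAND output on every cycle. Modelling each binary
# output as a fair coin flip gives (1/2)**(gates * cycles).
from math import log10

gates = 10**9    # "billions of gates" (illustrative figure)
cycles = 10**3   # "thousands of cycles" (illustrative figure)

# The probability itself underflows any float, so express it as an
# order of magnitude instead.
log10_p = gates * cycles * log10(0.5)
print(f"P(all random gates match) is about 10**({log10_p:.3g})")
```

On these assumptions the exponent is on the order of minus three hundred billion, which is the sense in which the scenario is "ridiculously improbable".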

OK, now you want to reduce the number of random gates in the huge NAND gate arrangement, and maybe you want to reduce how often those gates give random output as opposed to acting like a proper NAND gate. Well, you might get lucky for a handful of gates for a handful of cycles, or there might be enough redundancy in the huge arrangement to tolerate some errors, but once you've swapped in any significant number of random output gates, then my expectation is that the consciousness would be disrupted. That's because consciousness depends on the gates in the arrangement acting like NAND gates.

Now, please go ahead and tell me what you find confusing or where you think I am obfuscating.

someone

unread,
Jul 30, 2015, 4:24:56 PM7/30/15
to talk-o...@moderators.isc.org
With the issue cleared up, that seems fine.

With the hand-raisers, their position in space was quite flexible; they could have used telescopes and mirrors. Once you went on to say that guessing didn't matter as long as the guess was correct (as you did when you stated "no" to (1.y.y)), then neither did the causal connection. All that remained was individuals performing sequences, with no need for any causal connection between their actions. It wouldn't matter which of them did which sequence; it would just be a set of sequences that needed to be done in parallel, with a flexible spatial arrangement. Adding a few more hand-raisers doing sequences (it wouldn't matter what the sequences were) presumably wouldn't harm the situation. If you were to claim that more people starting to raise their hands, in or around the vicinity of your hand-raisers (the ones doing the special series of sequences that would allow the property of consciously experiencing being a robot to emerge), would stop the property from emerging, there would be the issue of what difference it made to behaviour whether it emerged or not. If the new hand-raisers joined the original group, then it wouldn't matter which of the group members performed one of the special sequences that needed to be performed in parallel. For any given binary sequence there would be a certain chance of it being performed, and if you had enough people randomly raising their hands, it would be likely to be performed. So with your account, all that is required is enough people randomly raising their hands to have the list of required sequences ticked off.
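The closing claim, that a large enough random crowd would likely tick off any given sequence, can be sketched numerically. This is my own construction, not anything from the thread, and it assumes each raiser independently produces one uniformly random binary sequence of the target length.

```python
# Chance that at least one of N independent random hand raisers performs
# a given binary sequence of length L: 1 - (1 - 2**-L)**N.

def p_sequence_performed(length: int, raisers: int) -> float:
    """Probability that some raiser produces the target L-bit sequence."""
    return 1 - (1 - 2.0 ** -length) ** raisers

# Short sequences: a modest crowd makes performance near-certain.
short = p_sequence_performed(10, 10_000)

# Long sequences: even a billion raisers leaves the probability
# numerically indistinguishable from zero.
long_ = p_sequence_performed(1000, 10**9)

print(short, long_)
```

So the claim holds for short sequences but collapses for sequences of any realistic length, which is the gap the two sides of the thread keep disputing.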


Bill Rogers

unread,
Jul 30, 2015, 4:34:56 PM7/30/15
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 4:24:56 PM UTC-4, someone wrote:

> > Gee, perhaps I got confused by your numbering system and typed 1.y when I meant 1.y.y. But if you'd actually paid attention to what I wrote, that would have been obvious to you. So just in case you don't remember, here is my account (reposted for your convenience.)
> >
> > I've explained *my account* at length. You can decide whether you want to continue to ignore it (as you seem to be doing).
> >
> > I think it possible that a robot with a computer made of some very large NAND gate arrangement could be conscious.
> >
> > I think your hand raisers just confuse the issue. Hand raisers who guess are equivalent to gates which give random outputs. Hand raisers widely separated acting with mirrors are equivalent to gates with a built in time-delay between input and output. So I'm just going to answer in terms of gates.
> >
> > If you substitute random gates for the NAND gates used to build the robot's "brain," and they just give random outputs, then the robot won't be conscious. In fact, the robot won't do much of anything. If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable, billions of gates, thousands of cycles, each gate just happening to give the same output it would as if it were a proper NAND gate. [And note that, even in this ridiculous case, the consciousness depends on the allegedly random gates persistently behaving just as though they were proper NAND gates, so consciousness would still depend on the NAND-like behavior of the gates.]
> >
> > OK, now you want to reduce the number of random gates in the huge NAND gate arrangement, and maybe you want to reduce how often those gates give random output as opposed to acting like a proper NAND gate. Well, you might get lucky for a handful of gates for a handful of cycles, or there might be enough redundancy in the huge arrangement to tolerate some errors, but once you've swapped in any significant number of random output gates, then my expectation is that the consciousness would be disrupted. That's because consciousness depends on the gates in the arrangement acting like NAND gates.
> >
> > Now, please go ahead and tell me what you find confusing or where you think I am obfuscating.
>
> With the issue cleared up, that seems fine.
>
> With the hand-raisers the position of the hand-raisers in space was quite flexible they could have used telescopes and mirrors. Once you went onto say that guessing didn't matter as long as the guess was correct (as you did when you stated "no" to (1.y.y)), then neither did the causal connection.

But of course the causal connection *does* matter, because it is simply impossible for all the required guesses to be correct. You appear to be entirely uninterested in *my* "atheist account" and only really interested in setting up your own deeply flawed "atheist account" to knock around.

Talk about obfuscating, what critical aspect of your scenario cannot be adequately described with reference to NAND gates (handraisers following instructions) and random output gates (handraisers guessing)?

>All there was was individuals doing sequences and there was no need for there to be any causal connection between their actions. It didn't matter which of them did which sequence, it would just be a set of sequences that would need to be done in parallel, with a flexible spacial arrangement. Adding a few more hand-raisers in doing sequences (doesn't matter what they were) wouldn't presumably harm the situation.

>If you were to claim that if more people started raising their hands, in or around the vicinity of your hand-raisers (the ones that are doing the special series of sequences that would allow the property of consciously experiencing being a robot emerge), then that'd stop the property from emerging there would be the issue of the difference it made to behaviour where it emerged or not.

Of course I wouldn't claim that, since I'm not interested in translating back and forth between NAND gates, random gates, obedient handraisers and guessing handraisers. I told what I thought; you ignored it in preference to what you wish I thought.

someone

unread,
Jul 30, 2015, 4:49:55 PM7/30/15
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 9:34:56 PM UTC+1, Bill Rogers wrote:
> On Thursday, July 30, 2015 at 4:24:56 PM UTC-4, someone wrote:
>
> > > Gee, perhaps I got confused by your numbering system and typed 1.y when I meant 1.y.y. But if you'd actually paid attention to what I wrote, that would have been obvious to you. So just in case you don't remember, here is my account (reposted for your convenience.)
> > >
> > > I've explained *my account* at length. You can decide whether you want to continue to ignore it (as you seem to be doing).
> > >
> > > I think it possible that a robot with a computer made of some very large NAND gate arrangement could be conscious.
> > >
> > > I think your hand raisers just confuse the issue. Hand raisers who guess are equivalent to gates which give random outputs. Hand raisers widely separated acting with mirrors are equivalent to gates with a built in time-delay between input and output. So I'm just going to answer in terms of gates.
> > >
> > > If you substitute random gates for the NAND gates used to build the robot's "brain," and they just give random outputs, then the robot won't be conscious. In fact, the robot won't do much of anything. If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable, billions of gates, thousands of cycles, each gate just happening to give the same output it would as if it were a proper NAND gate. [And note that, even in this ridiculous case, the consciousness depends on the allegedly random gates persistently behaving just as though they were proper NAND gates, so consciousness would still depend on the NAND-like behavior of the gates.]
> > >
> > > OK, now you want to reduce the number of random gates in the huge NAND gate arrangement, and maybe you want to reduce how often those gates give random output as opposed to acting like a proper NAND gate. Well, you might get lucky for a handful of gates for a handful of cycles, or there might be enough redundancy in the huge arrangement to tolerate some errors, but once you've swapped in any significant number of random output gates, then my expectation is that the consciousness would be disrupted. That's because consciousness depends on the gates in the arrangement acting like NAND gates.
> > >
> > > Now, please go ahead and tell me what you find confusing or where you think I am obfuscating.
> >
> > With the issue cleared up, that seems fine.
> >
> > With the hand-raisers the position of the hand-raisers in space was quite flexible they could have used telescopes and mirrors. Once you went onto say that guessing didn't matter as long as the guess was correct (as you did when you stated "no" to (1.y.y)), then neither did the causal connection.
>
> But of course the causal connection *does* matter, because it is simply impossible for all the required guesses to be correct. You appear to be entirely uninterested in *my* "atheist account" and only really interested in setting up your own deeply flawed "atheist account" to knock around.
>
> Talk about obfuscating, what critical aspect of your scenario cannot be adequately described with reference to NAND gates (handraisers following instructions) and random output gates (handraisers guessing)?
>

I just like the absurdity of it.

> >All there was was individuals doing sequences and there was no need for there to be any causal connection between their actions. It didn't matter which of them did which sequence, it would just be a set of sequences that would need to be done in parallel, with a flexible spacial arrangement. Adding a few more hand-raisers in doing sequences (doesn't matter what they were) wouldn't presumably harm the situation.
>
> >If you were to claim that if more people started raising their hands, in or around the vicinity of your hand-raisers (the ones that are doing the special series of sequences that would allow the property of consciously experiencing being a robot emerge), then that'd stop the property from emerging there would be the issue of the difference it made to behaviour where it emerged or not.
>
> Of course I wouldn't claim that, since I'm not interested in translating back and forth between NAND gates, random gates, obedient handraisers and guessing handraisers. I told what I thought; you ignored it in preference to what you wish I thought.
>
> >If the new hand-raisers joined the original group then it wouldn't matter which of the group members performed one of the special sequences that needed to be performed in parallel. For any given binary sequence, there would be a certain chance of it being performed, and if you had enough people randomly raising their hands, then it would be likely it would be performed. So with your account, all that is required is enough people randomly raising their hands, to have the list of the required sequences ticked off.

So are you ok with my characterisation of your belief as being that all your experience requires is enough people randomly raising their hands in a correct sequence?


RSNorman

unread,
Jul 30, 2015, 4:54:56 PM7/30/15
to talk-o...@moderators.isc.org
On Thu, 30 Jul 2015 05:03:22 -0700 (PDT), js <jld...@skybeam.com>
wrote:
Satire and sarcasm don't at all work with that individual. However
the rest of us do enjoy it.

Bill Rogers

unread,
Jul 30, 2015, 4:59:55 PM7/30/15
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 4:49:55 PM UTC-4, someone wrote:

> So you are you ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence?

In the language of gates: all the random gates would have to give exactly the output they would have given if they were proper NAND gates. The odds of that happening are infinitesimal. But even if it happened, it would not break the causal connection, because consciousness would have arisen only because the gates did exactly what they would have done if they were proper NAND gates.

You, someone, would then be left explaining how billions of random gates had just happened to randomly produce the same outputs over thousands of cycles that they would have put out if they were proper NAND gates.

someone

unread,
Jul 30, 2015, 6:04:58 PM7/30/15
to talk-o...@moderators.isc.org
It doesn't have to happen for us to be able to investigate what your account would suggest if it did happen.

You seem to be saying that you are ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence.

I'm not sure what you mean by "in the language of gates...".

I was going to bring up how finely tuned the universe would need to be to provide us with an experience like this on your account. It seems to suggest that as long as a certain list of hand raising sequences was performed, the experience of having a robot form would emerge. There would seem to need to be a physical reality that, in this example, would react to certain sequences of hand raising being performed, provided they could be mapped to the inputs and outputs of an imaginary arrangement of NAND gates in which the inputs and outputs were causally related. I presume we aren't only talking about NAND gates here: your theory suggests the reaction would be the same if the same function could have been achieved using hand-raising sequences which, had they been electronic signals within an imaginary robot's control unit rather than hand raising sequences, would have produced the same outputs. Your account links imaginary combinations of NOR, NOT, AND and OR gates to what is consciously experienced. I say imaginary because I'm not sure in what sense people just raising their hands randomly can actually be considered to be performing as logic gates of any type.
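One standard point of logic design bears on the list of gate types here: NAND is functionally complete, so NOT, AND, OR (and hence NOR) can all be built from NAND alone, and any arrangement of those gate types has a NAND-only equivalent. A small sketch of my own, not from the thread:

```python
# NAND is functionally complete: NOT, AND and OR can all be built from
# NAND alone, so restricting the discussion to NAND gate arrangements
# loses no generality.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT a == a NAND a

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # AND == NOT(NAND)

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # De Morgan: a OR b == (NOT a) NAND (NOT b)

# Exhaustive check over all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == (1 - a)
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```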

There would also seem to need to be a link to the robot, but what would be special about the arrangement having been in the robot, where the inputs meant one thing, rather than the arrangement having been in a different scenario where the inputs meant something else?

Also, whatever would make the robot special, is that a type of rule that your account suggests would have been around at the 'Big Bang'?

And also, what difference are you suggesting it makes to the behaviour of the random hand-raising group whether the property of consciousness emerges from their group or not?

Bill Rogers

unread,
Jul 30, 2015, 10:49:55 PM7/30/15
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 6:04:58 PM UTC-4, someone wrote:
> On Thursday, July 30, 2015 at 9:59:55 PM UTC+1, Bill Rogers wrote:
> > On Thursday, July 30, 2015 at 4:49:55 PM UTC-4, someone wrote:
> >
> > > So you are you ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence?
> >
> > In the language of gates....all the random gates have to give exactly the output they would have given if they were proper NAND gates. The odds of that happening are infinitesimal. But, even if it happened, it would not break the causal connection, because conscious would have happened only because the gates did exactly what they would have done if they were proper NAND gates.
> >
> > You, someone, would then be left explaining how billions of random gates had just happened to randomly produce the same outputs over thousands of cycles that they would have put out if they were proper NAND gates.
>
> It doesn't have to happen for us to able investigate what your account would suggest if it did happen.

It's your account, not mine. My account says that replacing the NAND gates (or obedient handraisers) with random output gates (guessing handraisers) would eliminate consciousness.

>
> You seem to be saying that you are ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence.

No. No. No. You need billions of people to raise their hand in the right sequence for each cycle of the "computer". Just by chance? Get real. It's an utterly implausible scenario. And for the reasons I've stated in my last 2 or 3 posts, I am not OK with your characterization of my position.

>
> I'm not sure what you mean by "in the language of gates...".

I mean switching from the "just for the absurdity of it" game of talking about people raising their hands and going back to talking about NAND gates, which correspond to hand raisers who follow instructions, and random output gates, which correspond to the hand raisers who guess. I've explained this before. But you don't seem to do more than skim whatever I write.

>
> I was going to bring up how finely tuned the universe would need to be to provide us with an experience like this in your account.

It is *your* account, not mine.

>It seems to suggest that as long as a certain list of hand raising sequences was performed then the experience of having a robot form would emerge. It would seem to need to be a physical reality that in this example would react to certain sequences of hand being performed if they could be mapped to the inputs and outputs of NAND gates within an imaginary arrangement of NAND gates in which the inputs and outputs were causally related.


>I presume we aren't only talking NAND gates here, your theory suggests that the reaction would be the same if same function could have been achieved using hand-raising sequences that if they'd not been hand raising sequences but electronic signals within an imaginary robots robot control unit, then the reaction would be the same.


This is your theory, not mine. As I and others have mentioned, I expect that consciousness requires a real-time interaction between the outside world and the robot; if you slow the processing down by using hand raising people, or separating the individual NAND gates by light-years, then the robot will get run over by a truck long before it knows what's happening. If you want to postulate a very slow processor, but then also postulate a very slow outside world, maybe that's fine, but you're just complicating things unnecessarily. Stick with the gates you started with, or explain why the difference between the gates and hand raisers is critical to your argument.


>Your account links imaginary combinations of NOR, NOT, AND and OR gates to what is consciously experienced. I say imaginary because I'm not sure in what sense people just raising their hands randomly can actually be considered to be performing as logic gates of any type.


This is not my account, it is your account. Let me repeat. This is not my account, it is your account. I agree that random output gates are not NAND gates. A previously conscious robot in which you replaced the NAND gates with random output gates (or guessing hand raisers) would not be conscious, because the billions of random output gates could not possibly *just happen* to all give the same output they would have given if they were NAND gates. I've explained this many times, and you keep ignoring what I say and replacing it with what you wish I'd said.

>
> There would also seem to need to be the link to the robot, but what would be special about the arrangement having been in the robot where the inputs meant one thing rather than the arrangement having been in a different scenario where the inputs meant something else?

The inputs need to reflect the external world that the robot lives in. I know you'd like to imagine a simulated world, a brain-in-a-vat version of the inputs in which just the right inputs are piped into the robot to have the same effect as would be had by inputs from the real world. It's a distraction which doesn't help your case.

>
> Also whatever would make the robot special, is that a type of rule that your account is suggesting would have been around at the 'Big Bang'?

It's your damn account, not mine. I've told you what my account is, and you ignore what I say.

>
> And also what difference are you suggesting it makes to the behaviour of random hand-raising group whether the property of consciousness emerges from their group or not?

If they behave one way consciousness emerges, if they behave in another way, it doesn't. But you should stick to the gates.


someone

unread,
Jul 31, 2015, 3:49:56 AM7/31/15
to talk-o...@moderators.isc.org
On Friday, July 31, 2015 at 3:49:55 AM UTC+1, Bill Rogers wrote:
> On Thursday, July 30, 2015 at 6:04:58 PM UTC-4, someone wrote:
> > On Thursday, July 30, 2015 at 9:59:55 PM UTC+1, Bill Rogers wrote:
> > > On Thursday, July 30, 2015 at 4:49:55 PM UTC-4, someone wrote:
> > >
> > > > So you are you ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence?
> > >
> > > In the language of gates....all the random gates have to give exactly the output they would have given if they were proper NAND gates. The odds of that happening are infinitesimal. But, even if it happened, it would not break the causal connection, because conscious would have happened only because the gates did exactly what they would have done if they were proper NAND gates.
> > >
> > > You, someone, would then be left explaining how billions of random gates had just happened to randomly produce the same outputs over thousands of cycles that they would have put out if they were proper NAND gates.
> >
> > It doesn't have to happen for us to able investigate what your account would suggest if it did happen.
>
> It's your account, not mine. My account says that replacing the NAND gates (or obedient handraisers) with random output gates (guessing handraisers) would eliminate consciousness.
>
> >
> > You seem to be saying that you are ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence.
>
> No. No. No. You need billions of people to raise their hand in the right sequence for each cycle of the "computer". Just by chance? Get real. It's an utterly implausible scenario. And for the reasons I've stated in my last 2 or 3 posts, I am not OK with your characterization of my position.
>
> >
> > I'm not sure what you mean by "in the language of gates...".
>
> I mean switching from the "just for the absurdity of it" game of talking about people raising their hands and going back to talking about NAND gates, which correspond to hand raisers who follow instructions, and random output gates, which correspond to the hand raisers who guess. I've explained this before. But you don't seem to do more than skim whatever I write.
>
> >
> > I was going to bring up how finely tuned the universe would need to be to provide us with an experience like this in your account.
>
> It is *your* account, not mine.
>

I'm going by the answers you gave about your account.


> >It seems to suggest that as long as a certain list of hand raising sequences was performed then the experience of having a robot form would emerge. It would seem to need to be a physical reality that in this example would react to certain sequences of hand being performed if they could be mapped to the inputs and outputs of NAND gates within an imaginary arrangement of NAND gates in which the inputs and outputs were causally related.
>
> >I presume we aren't only talking NAND gates here, your theory suggests that the reaction would be the same if same function could have been achieved using hand-raising sequences that if they'd not been hand raising sequences but electronic signals within an imaginary robots robot control unit, then the reaction would be the same.
>
>
> This is your theory, not mine. As I and others have mentioned, I expect that consciousness requires a real time interaction between the outside world and the robot; if you slow the processing down by using had raising people, or separating the individual NAND gates by light-years, then the robot will get run over by a truck long before it knows what's happening. If you want to postulate a very slow processor, but then also postulate a very slow outside world, maybe that's fine, but you're just complicating things unnecessarily. Stick with the gates you started with, or explain why the difference between the gates and hand raisers is critical to your argument.
>

As I had understood it you had answered "yes" to 1.y, and "no" to 1.y.y about *your* account.

The use of the questions in the argument is obvious, I even lay out an explanation when asking them, for convenience here is a reminder of the questions:
---
1) Could a robot with a NAND gate control unit consciously experience?

If the answer is "yes" then the next question is

1.y) If the hand raisers performed accurately, such that
the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?

If the answer to (1.y) is "yes" then the next question is

(1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?

If the answer to (1.y.y) is "yes", then how does the account suggest that consciously experiencing makes a difference to behaviour, if the behaviour when it does emerge is the same as when it doesn't?

If the answer to (1.y.y) is "no" then we can discuss from there.

If the answer to (1.y) is "no", then how does the account suggest that consciously experiencing makes a difference to the robot's behaviour, if the NAND gate arrangement's outputs would be the same whether the property had emerged or not?

If the answer to (1) is "no" then what is special about the biological chemistry?
---

Just to confirm: you are stating "yes" to (1.y) when talking about *your* account, aren't you?


>
> >Your account links imaginary combinations of NOR, NOT, AND and OR gates to what is consciously experienced. I say imaginary because I'm not sure in what sense people just raising their hands randomly can actually be considered to be performing as logic gates of any type.
>
>
> This is not my account, it is your account. Let me repeat. This is not my account, it is your account. I agree that random output gates are not NAND gates. A previously conscious robot in which you replaced the NAND gates with random output gates (or guessing hand raisers) would not be conscious, because the billions of random output gates could not possibly *just happen* to all give the same output they would have given if they were NAND gates. I've explained this many times, and you keep ignoring what I say and replacing it with what you wish I'd said.
>
> >
> > There would also seem to need to be the link to the robot, but what would be special about the arrangement having been in the robot where the inputs meant one thing rather than the arrangement having been in a different scenario where the inputs meant something else?
>
> The inputs need to reflect the external world that the robot lives in. I know you'd like to imagine a simulated world, a brain-in-a-vat version of the inputs in which just the right inputs are piped into the robot to have the same effect as would be had by inputs from the real world. It's a distraction which doesn't help your case.
>
> >
> > Also whatever would make the robot special, is that a type of rule that your account is suggesting would have been around at the 'Big Bang'?
>
> It's your damn account, not mine. I've told you what my account is, and you ignore what I say.
>
> >
> > And also what difference are you suggesting it makes to the behaviour of random hand-raising group whether the property of consciousness emerges from their group or not?
>
> If they behave one way consciousness emerges, if they behave in another way, it doesn't. But you should stick to the gates.

I understand that in *your* account you believe that if the members of the random hand raising group were, between them, to perform the special hand raising sequences (it not mattering who performed which sequence), then the experience of consciously being a robot would emerge. The question was about what difference, if you were correct and it did emerge, you believed it would make to their behaviour - or does it make no difference to the behaviour, but will simply have emerged once one of the correct sequence performances is done?

Bill Rogers

Jul 31, 2015, 7:24:55 AM7/31/15
to talk-o...@moderators.isc.org
In "my account" I believe that an array of random output gates (or guessing handraisers) could not be conscious. I've said it many times now, but you consistently ignore what I say.

You seem to wish I would say that if, by inconceivable good luck, the random gates all happened to consistently produce the output that NAND gates would have produced, then the random gate array would be conscious. You would then like to conclude that since the random gate array (or huge group of guessing hand raisers) was just behaving randomly that there was no causal link between consciousness and behaviour. But I won't say that.

To propose that an array of random output gates could become conscious is to introduce the assumption that consciousness has nothing to do with the physical behavior of the array. I'd call that assuming your conclusion. And it is most definitely *not* part of "my account."

Until you give me some reason why switching from NAND gates to handraisers is important, I'm going to keep translating back to gates, using the translation "handraiser following instructions" = NAND gate; "handraiser guessing" = random output gate.

Here's why. It's inappropriate to make the components of your gate arrangement conscious people. Even though you claim they are acting as unconsciously as a NAND gate just giving an output rigidly determined by the input, or a random gate just giving an output unrelated to the input, the idea that they are actual people creeps in. So when you have them guess, you vaguely raise the connotation that perhaps they understand what's going on a bit and are making educated guesses. If that's the case, all bets are off - you've slipped consciousness into the "computer" through the back door. And that's perhaps why it does not seem utterly implausible to you that billions of people could just guess right consistently. So, if you really mean for the guesses to be random, drop the handraisers and just describe an array of random output gates and tell me why you would ever think that such an array could perform arithmetic, never mind becoming conscious.
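Bill's translation ("handraiser following instructions" = NAND gate; "handraiser guessing" = random output gate) can be sketched in code. This is an illustrative model only - the gate model, seed, and cycle count here are our assumptions, not figures from the thread - but it shows why a random-output gate almost never keeps reproducing NAND behaviour:

```python
import random

def nand(a, b):
    # A NAND gate: output is 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def random_gate(a, b):
    # A random-output gate: ignores its inputs entirely.
    return random.randint(0, 1)

# Each cycle, a single random gate matches the NAND output with
# probability 1/2, so long runs of agreement die off exponentially.
random.seed(0)  # fixed seed so the sketch is repeatable
cycles = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(1000)]
matches = sum(random_gate(a, b) == nand(a, b) for a, b in cycles)
print(matches, "of", len(cycles), "outputs matched by luck")
```

For an arrangement of billions of such gates, *every* gate would have to match on *every* cycle, which is the coincidence Bill calls impossible for all practical purposes.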

someone

Jul 31, 2015, 10:19:54 AM7/31/15
to talk-o...@moderators.isc.org
Problems posting again, so trying replying to a different post.
Meant as a reply to https://groups.google.com/d/msg/talk.origins/bycATB6rEho/pK-ybjOtRfsJ

Here is how I think it would look if I had replied (it is a paste of when I'd tried to reply).


On Friday, July 31, 2015 at 12:24:55 PM UTC+1, Bill Rogers wrote:
> On Friday, July 31, 2015 at 3:49:56 AM UTC-4, someone wrote:
> > On Friday, July 31, 2015 at 3:49:55 AM UTC+1, Bill Rogers wrote:
> > > On Thursday, July 30, 2015 at 6:04:58 PM UTC-4, someone wrote:
> > > > On Thursday, July 30, 2015 at 9:59:55 PM UTC+1, Bill Rogers wrote:
> > > > > On Thursday, July 30, 2015 at 4:49:55 PM UTC-4, someone wrote:
> > > > >
> > > > > > So are you ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence?
> > > > >
> > > > > In the language of gates....all the random gates have to give exactly the output they would have given if they were proper NAND gates. The odds of that happening are infinitesimal. But, even if it happened, it would not break the causal connection, because conscious would have happened only because the gates did exactly what they would have done if they were proper NAND gates.
> > > > >
> > > > > You, someone, would then be left explaining how billions of random gates had just happened to randomly produce the same outputs over thousands of cycles that they would have put out if they were proper NAND gates.
> > > >
> > > > It doesn't have to happen for us to able investigate what your account would suggest if it did happen.
> > >
> > > It's your account, not mine. My account says that replacing the NAND gates (or obedient handraisers) with random output gates (guessing handraisers) would eliminate consciousness.
> > >
> > > >
> > > > You seem to be saying that you are ok with my characterisation of your belief being that all that your experience requires is enough people to randomly raise their hands in a correct sequence.
> > >
> > > No. No. No. You need billions of people to raise their hand in the right sequence for each cycle of the "computer". Just by chance? Get real. It's an utterly implausible scenario. And for the reasons I've stated in my last 2 or 3 posts, I am not OK with your characterization of my position.
> > >
> > > >
> > > > I'm not sure what you mean by "in the language of gates...".
> > >
> > > I mean switching from the "just for the absurdity of it" game of talking about people raising their hands and going back to talking about NAND gates, which correspond to hand raisers who follow instructions, and random output gates, which correspond to the hand raisers who guess. I've explained this before. But you don't seem to do more than skim whatever I write.
> > >
> > > >
> > > > I was going to bring up how finely tuned the universe would need to be to provide us with an experience like this in your account.
> > >
> > > It is *your* account, not mine.
> > >
> >
> > I'm going by the answers you gave about your account.
> >
> >
> > > >It seems to suggest that as long as a certain list of hand raising sequences was performed then the experience of having a robot form would emerge. It would seem to need to be a physical reality that in this example would react to certain sequences of hand being performed if they could be mapped to the inputs and outputs of NAND gates within an imaginary arrangement of NAND gates in which the inputs and outputs were causally related.
> > >
> > > > >I presume we aren't only talking NAND gates here; your theory suggests that if the same function could have been achieved using hand-raising sequences - sequences which, had they instead been electronic signals within an imaginary robot's control unit, would have produced the same outputs - then the reaction would be the same.
> > >
> > >
> > > This is your theory, not mine. As I and others have mentioned, I expect that consciousness requires a real time interaction between the outside world and the robot; if you slow the processing down by using hand raising people, or separating the individual NAND gates by light-years, then the robot will get run over by a truck long before it knows what's happening. If you want to postulate a very slow processor, but then also postulate a very slow outside world, maybe that's fine, but you're just complicating things unnecessarily. Stick with the gates you started with, or explain why the difference between the gates and hand raisers is critical to your argument.
> > >
> >
> > As I had understood it you had answered "yes" to 1.y, and "no" to 1.y.y about *your* account.
> >
> > The use of the questions in the argument is obvious, I even lay out an explanation when asking them, for convenience here is a reminder of the questions:
> > ---
> > 1) Could a robot with a NAND gate control unit consciously experience?
> >
> > If the answer is "yes" then the next question is
> >
> > 1.y) If the hand raisers performed accurately, such that
> > the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?
> >
> > If the answer to (1.y) is "yes" then the next question is
> >
> > (1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?
> >
> > If the answer to (1.y.y) is "yes" then how does the account suggest that consciously experiencing makes a difference to behaviour, if the behaviour when it does emerge is the same as when it doesn't.
> >
> > If the answer to (1.y.y) is "no" then we can discuss from there.
> >
> > If the answer to (1.y) is "no" then how does the account suggest that consciously experiencing make difference to the robot's behaviour if the NAND gate arrangements outputs would be the same whether the property had emerged or not.
> >
> > If the answer to (1) is "no" then what is special about the biological chemistry?
> > ---
> >
> > Just to confirm, you are stating "yes" to 1.y when talking about *your* account, aren't you?
> >
> >
> > >
> > > >Your account links imaginary combinations of NOR, NOT, AND and OR gates to what is consciously experienced. I say imaginary because I'm not sure in what sense people just raising their hands randomly can actually be considered to be performing as logic gates of any type.
> > >
> > >
> > > This is not my account, it is your account. Let me repeat. This is not my account, it is your account. I agree that random output gates are not NAND gates. A previously conscious robot in which you replaced the NAND gates with random output gates (or guessing hand raisers) would not be conscious, because the billions of random output gates could not possibly *just happen* to all give the same output they would have given if they were NAND gates. I've explained this many times, and you keep ignoring what I say and replacing it with what you wish I'd said.
> > >
> > > >
> > > > There would also seem to need to be the link to the robot, but what would be special about the arrangement having been in the robot where the inputs meant one thing rather than the arrangement having been in a different scenario where the inputs meant something else?
> > >
> > > The inputs need to reflect the external world that the robot lives in. I know you'd like to imagine a simulated world, a brain-in-a-vat version of the inputs in which just the right inputs are piped into the robot to have the same effect as would be had by inputs from the real world. It's a distraction which doesn't help your case.
> > >
> > > >
> > > > Also whatever would make the robot special, is that a type of rule that your account is suggesting would have been around at the 'Big Bang'?
> > >
> > > It's your damn account, not mine. I've told you what my account is, and you ignore what I say.
> > >
> > > >
> > > > And also what difference are you suggesting it makes to the behaviour of random hand-raising group whether the property of consciousness emerges from their group or not?
> > >
> > > If they behave one way consciousness emerges, if they behave in another way, it doesn't. But you should stick to the gates.
> >
> > I understand that in *your* account you believe that if the members of the random hand raising group were, between them, to perform the special hand raising sequences (it not mattering who performed which sequence), then the experience of consciously being a robot would emerge. The question was about what difference, if you were correct and it did emerge, you believed it would make to their behaviour - or does it make no difference to the behaviour, but will simply have emerged once one of the correct sequence performances is done?
>
> In "my account" I believe that an array of random output gates (or guessing handraisers) could not be conscious. I've said it many times now, but you consistently ignore what I say.
>

I don't ignore it; it is just that you seem to be trying to obfuscate. For example, here the question is about hand-raisers that randomly guessed correctly, which you have stated would consciously experience. But you seem to be using the word "random" to refer to a scenario where the random guesses weren't correct. That's right, isn't it: if the hand-raisers had randomly guessed correctly, you'd be stating that they would be consciously experiencing, wouldn't you?


> You seem to wish I would say that if, by inconceivable good luck, the random gates all happened to consistently produce the output that NAND gates would have produced, then the random gate array would be conscious. You would then like to conclude that since the random gate array (or huge group of guessing hand raisers) was just behaving randomly that there was no causal link between consciousness and behaviour. But I won't say that.
>

And here this second paragraph again seems designed to obfuscate. You state that I wish you would say that if they all happened to consistently produce the output that NAND gates would have produced, then the random gate array would be conscious. You don't acknowledge that, according to your previous responses, that is exactly what you are saying. You then place another statement after it, and then declare that you won't say that, so there is an ambiguity as to which statement you were referring to.

You have stated before that you made a mistake in saying "no" to (1.y) and that it was (1.y.y) that you meant to say "no" to.
https://groups.google.com/d/msg/talk.origins/bycATB6rEho/6dba0Qy6_fkJ

(1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?

You have already responded, in (1.y), to the effect that the property of consciousness would emerge if the hand-raisers acted as NAND gates, and in responding "no" to (1.y.y) you were stating that it wouldn't make a difference even if some of them had guessed.

But then if you are saying "no" to (1.y.y) those 2 first paragraphs do seem to be designed to obfuscate.

Bill Rogers

Jul 31, 2015, 11:24:53 AM7/31/15
to talk-o...@moderators.isc.org
If they are randomly guessing, then you cannot ignore the (overwhelmingly likely) possibility that many of them will guess wrong. That's not obfuscation, it's just the facts.

>
>
> > You seem to wish I would say that if, by inconceivable good luck, the random gates all happened to consistently produce the output that NAND gates would have produced, then the random gate array would be conscious. You would then like to conclude that since the random gate array (or huge group of guessing hand raisers) was just behaving randomly that there was no causal link between consciousness and behaviour. But I won't say that.
> >
>
> And here this second paragraph again seems designed to obfuscate. You state that I wish you would say that if they all happened to consistently produce the output that NAND gates would have produced, then the random gate array would be conscious. You don't acknowledge that, according to your previous responses, that is exactly what you are saying. You then place another statement after it, and then declare that you won't say that, so there is an ambiguity as to which statement you were referring to.

As I explain below, I reject the premise of your question. I once indulged that premise enough to say something like "Yes, if they all guessed correctly and continued to do so, consciousness would emerge," but then I went on to say that such a coincidence was simply impossible. You've ignored my explanation. You've ignored my rejection of the premise of your question. That's what I get for answering a question whose premise I disagree with. Of course, you could understand this easily if you were interested in understanding.

>
> You have stated before that you made a mistake in saying "no" to (1.y) and that it was (1.y.y) that you meant to say "no" to.
> https://groups.google.com/d/msg/talk.origins/bycATB6rEho/6dba0Qy6_fkJ
>
> (1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?
>
> You have already responded to the effect that in (1.y) that the property of consciousness would emerge if the hand-raisers acted as NAND gates, and in responding "no" to (1.y.y) you were stating that it wouldn't make a difference even if some of them had guessed.
>
> But then if you are saying "no" to (1.y.y) those 2 first paragraphs do seem to be designed to obfuscate.

I am not obfuscating anything. For the nth time, I am rejecting the premise of question 1.y.y. I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise.

You know, if you ask me "Have you stopped beating your wife?" I will reject the premise of your question. Most people understand that if you then force me to choose "yes" or "no," you are trying to force me to accept the false premise of your question. When I refuse to do so, or when I explain that neither yes nor no is a correct answer, I am not obfuscating. Likewise, when I explain to you that I reject the premise of your question 1.y.y, which is that an array composed of random output gates could be conscious, I am not obfuscating; I am simply rejecting what I hold to be a false premise. If you want me to accept your premise, then explain to me how you think an arrangement of random output gates could even do arithmetic, much less be conscious.

One more thing about the wife beating question. Imagine I lose patience with your game and say "Well, I'm not beating my wife now, so the best answer would be, 'yes, I stopped beating my wife.'" And then I go on to explain that I'm only choosing this answer because, in fact, I am not beating my wife. You then totally ignore my explanation, and say "Ha, so you admit you beat your wife in the past." And, I lose a little more patience and say "Well, no, I didn't beat my wife in the past, so I guess the best answer is 'No, I haven't stopped beating my wife,'" and I explain carefully to you that I am only choosing that answer to reflect the fact that I never beat my wife in the past. Then you totally ignore my explanation and you say "See, not only are you admitting that you are still beating your wife, but you keep changing your story. First you said yes, now you say no, so you're not only a wife beater, but you lie, and you keep trying to obfuscate your story." That is EXACTLY what it is like to try to answer your loaded questions about the guessing hand raisers.
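Bill's side challenge - explain how an array of random output gates could even do arithmetic - trades on the fact that NAND is functionally complete: any logic function, including addition, can be built from NAND gates alone, but only if each gate's output actually depends on its inputs. A minimal sketch (the function names are ours, purely illustrative):

```python
def nand(a, b):
    # The single primitive; NAND is functionally complete.
    return 0 if (a and b) else 1

def half_adder(a, b):
    # Adds two bits using five NAND gates and nothing else.
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # sum bit: a XOR b
    carry = nand(n1, n1)                # carry bit: a AND b
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```

Swap any of those five gates for a coin flip and the adder stops adding, which is the sense in which the gates' input-output behaviour, not their label, carries the function.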

someone

Aug 4, 2015, 8:04:40 PM8/4/15
to talk-o...@moderators.isc.org
Regarding paragraph:

"I am not obfuscating anything. For the nth time, I am rejecting the premise of question 1.y.y. I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."

Question (1.y) was: "If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?"

You had answered "yes" to that.

Question (1.y.y) was: "If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?"

A "no" to question 1.y.y would mean that it wouldn't make any difference whether they guessed or not. So if it didn't make any difference, the situation would be like it was for the arrangement in 1.y, in other words consciously experiencing.

But then, in the post where you wrote that you'd made a mistake in writing that you were saying "no" to 1.y, you mentioned "If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious...". So you were clearly stating what would happen if they guessed correctly (it isn't a question of whether you think that'd be likely to happen).

The problem I am having, is reconciling that with you writing:
"I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."

... while it could be that you have changed your position (and no longer think that if they guessed correctly then, sure, the robot would be conscious), it could also be that by an array of random output gates you meant random guesses, around half of which each cycle would be wrong. But that would seem to obfuscate the issue, as a person might well have thought that you were talking about the case in which, as you hypothesized, all the random gates just happen to always give the correct output that a proper NAND gate would have given. And why wouldn't they, given that question 1.y.y was about them correctly guessing the results? It certainly didn't contain any premise about a situation with one or more incorrect guesses.

Is there perhaps some difference between a robot with random gates and the handraisers that would cause you to change your answer between the two? Because at the moment you seem to be stating that if the random gates all guessed correctly the robot would consciously experience, but you refuse to accept that if the handraisers guessed correctly their arrangement would consciously experience.

Bill Rogers

Aug 4, 2015, 8:24:40 PM8/4/15
to talk-o...@moderators.isc.org
On Tuesday, August 4, 2015 at 8:04:40 PM UTC-4, someone wrote:
>
> Regarding paragraph:
>
> "I am not obfuscating anything. For the nth time, I am rejecting the premise of question 1.y.y. I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."
>
> Question (1.y) was: "If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?"
>
> You had answered "yes" to that.
>
> Question (1.y.y) was: "If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?"
>
> A "no" to question 1.y.y would mean that it wouldn't make any difference whether they guessed or not. So if it didn't make any difference, the situation would be like it was for the arrangement in 1.y, in other words consciously experiencing.
>
> But then, in the post where you wrote that you'd made a mistake in writing that you were saying "no" to 1.y, you mentioned "If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious...". So you were clearly stating what would happen if they guessed correctly (it isn't a question of whether you think that'd be likely to happen).
>
> The problem I am having, is reconciling that with you writing:
> "I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."
>
> ... while it could be that you have changed your position (and no longer think that if they guessed correctly then, sure, the robot would be conscious), it could also be that by an array of random output gates you meant random guesses, around half of which each cycle would be wrong. But that would seem to obfuscate the issue, as a person might well have thought that you were talking about the case in which, as you hypothesized, all the random gates just happen to always give the correct output that a proper NAND gate would have given. And why wouldn't they, given that question 1.y.y was about them correctly guessing the results? It certainly didn't contain any premise about a situation with one or more incorrect guesses.
>
> Is there perhaps some difference between a robot with random gates and the handraisers that would cause you to change your answer between the two? Because at the moment you seem to be stating that if the random gates all guessed correctly the robot would consciously experience, but you refuse to accept that if the handraisers guessed correctly their arrangement would consciously experience.
>
I explained my answers before. You claim I'm obfuscating. There's nothing new I can say. The answers to your questions are in the text below. Unless you ask me a question that indicates that you've actually read and tried to understand what I wrote, I'm done. I'm happy with my argument - on the odd chance that anyone is still interested, they can read my argument and decide whether I'm obfuscating or running away.

Try to keep things clear and simple.

You asked if a NAND gate arrangement attached to a robot could be conscious. I said (and I think Mark agrees) that I thought it might well be, although it would require a very large and complex arrangement.

Next you asked about an analogous arrangement where the NAND gates were replaced by people who raised their hands in accordance with instructions, which would presumably instruct them to act like NAND gates.

This was a non-essential distraction for several reasons. First, replacing the gates with people slows everything down. Since most of us think that for consciousness to emerge in the first place, there has to be recurring interaction with the environment, now you have the problem that the robot thinks glacially slowly and cannot react to events fast enough. So then you have to spend multiple posts trying to clarify this and presuppose simulated environmental inputs that happen appropriately slowly. So that's just a waste of time.

The other problem is that when you switch back and forth between NAND gates and hand raisers, anybody who is familiar with philosophy of mind thinks you are targeting functionalism. Functionalism is in contrast to identity theory. Identity theory claims that there is a one-to-one relation between physical states and mental states. Under identity theory you could not have the same mental state with both a particular NAND gate arrangement and a hand raiser arrangement. Functionalism, on the other hand, claims that you could instantiate functionally equivalent mental states in very different physical states (e.g. human brain versus computer/robot). So when you bring in two different physical systems, naturally folks who are familiar with this area of philosophy think you are trying to argue something about functionalism versus identity theory. But as far as I can tell, that's not really what you care about. They are both physicalist views, anyway.

Then your next move is to ask whether consciousness could arise if some of the hand raisers were only guessing about what they should do. So let's drop the handraisers, since it just adds elements not relevant to your main argument. What you ask is equivalent to asking whether if you had a conscious robot/NAND gate arrangement you could still get consciousness if some or all of the NAND gates were replaced with random output gates.

Early on I answered that "Yes, but...." and then explained why the question did not really make sense. You seized on the "yes" and totally ignored the "but." So I'll simply revise my answer and say "no." [Please don't stop reading at this point and just run off and put "no" into your flow chart.] It is not possible for an arrangement of random output gates to produce consciousness. Your claim that believing that some arrangement of NAND gates could be conscious implies that one also believes that an arrangement of random output gates could be conscious is nonsensical, and Mark appropriately scoffed at it.

Here is my summary of your argument as to why a conscious arrangement of NAND gates implies that an arrangement of random output gates could be conscious. Here goes.

Someone: You say the NAND gate arrangement could be conscious, right?

Physicalist: Yes.

Someone: Well, if we took out a bunch of the NAND gates and replaced them with random output gates, could the arrangement be conscious?

Physicalist: Of course not, there's no chance that the random output gates would consistently just happen to give the same outputs they would have given if they were proper NAND gates.

Someone: Well, of course the odds are low, but if they *did* just happen to give those outputs, why wouldn't the arrangement be conscious? After all, all the inputs and outputs would then be identical to those under the NAND gate arrangement, which you already said was conscious.

Physicalist: Sure, but you're ignoring the overwhelming likelihood that the random output gates *won't* give the right outputs.

Someone: But we're talking about the case in which they *do* give the right outputs. No matter how unlikely that case is, it's the only one we're considering at the moment. Bringing up other cases is just an attempt at obfuscation.

Physicalist: OK, so, if in spite of astronomical odds, in this one highly unlikely case, the arrangement including random output gates would be conscious.

Someone: Well, that's it. What that means is that consciousness can emerge even with random outputs. So there's no causal connection between the outputs of the gates and the emergence of consciousness. And if there's no causal connection between the physical and consciousness, then physicalism must be false.

Correct me if that is not a correct summary of the argument you are trying to make.

The problems are: (1) magnitudes matter - something so improbable as an array of random output gates consistently producing the same outputs they would produce if they were NAND gates can be considered impossible for all practical purposes; and (2) it is a, perhaps dishonest, perhaps self-deluding, sleight of hand to consider the random output gates to have random output only during part of the argument. When the physicalist complains about the enormously more likely case in which the random output gates give the wrong output, you say that you are not considering that case. OK. But when you draw your conclusion that the behaviour of the gates was irrelevant to consciousness, you go back to considering them as truly random output gates, and suddenly you want all those other cases to influence your conclusion. You can't have it both ways. If the gates are truly random (if the hand raisers are really guessing) then they won't guess correctly enough to produce consciousness. If they do give the correct outputs consistently, then there's no meaningful sense in which they were "just guessing." Finally, even if you say they were really guessing, you cannot claim there is no causal link with consciousness, because even though they were guessing, consciousness only emerged because they guessed just exactly like NAND gates.
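The magnitude claim in (1) can be put in numbers. With assumed, purely illustrative sizes - two billion gates, ten thousand cycles, each random binary output needing to match what a NAND gate would have given - the probability of the coincidence is:

```python
from math import log10

# Illustrative sizes, not figures from the thread.
n_gates = 2_000_000_000   # "billions of gates"
n_cycles = 10_000         # "thousands of cycles"

# P(every gate matches on every cycle) = (1/2) ** (n_gates * n_cycles).
# That number is far too small for a float, so report its base-10 exponent.
exponent = n_gates * n_cycles * log10(2)
print(f"P(match) is about 1 in 10^{exponent:.3g}")
```

Under these assumptions the exponent is on the order of 10^12 - the probability has trillions of leading zeros - which is the quantitative content of "impossible for all practical purposes."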

I suggest again that you stop laying that flattering unction to your soul that people reject your argument because they are intellectual cowards. Your argument is not expressed clearly, it's cluttered with elements irrelevant to your main point, and even when it's made clear, it does not hold up. There are interesting arguments for dualism and theism, but you are not making them.

someone

Aug 4, 2015, 8:39:40 PM8/4/15
to talk-o...@moderators.isc.org
On Wednesday, August 5, 2015 at 1:24:40 AM UTC+1, Bill Rogers wrote:
> On Tuesday, August 4, 2015 at 8:04:40 PM UTC-4, someone wrote:
> >
> > Regarding paragraph:
> >
> > "I am not obfuscating anything. For the nth time, I am rejecting the premise of question 1.y.y. I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."
> >
> > Question (1.y) was: "If the hand raisers performed accurately, such that the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?"
> >
> > You had answered "yes" to that.
> >
> > Question (1.y.y) was: "If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?"
> >
> > A "no" to question 1.y.y would mean that it wouldn't make any difference whether they guessed or not. So if it didn't make any difference, the situation would be like it was for the arrangement in 1.y, in other words consciously experiencing.
> >
> > But then, in the post where you wrote that you'd made a mistake in writing that you were saying "no" to 1.y, you mentioned "If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious...". So you were clearly stating that if they guessed correctly, the robot would be conscious (it isn't a question about whether you think that'd be likely to happen).
> >
> > The problem I am having, is reconciling that with you writing:
> > "I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."
> >
> > ... while it could be that you have changed your position (and no longer think that if they guessed correctly, then sure, the robot would be conscious), it could also be that by an array of random output gates you meant random guesses, around half of which each cycle would be wrong. But that would seem to obfuscate the issue, as a person might well have thought that you were talking about the case where you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given. And why wouldn't they, given that question 1.y.y was about them correctly guessing the results? It certainly didn't contain any premise about a situation where there were one or more incorrect guesses.
> >
> > Is there perhaps a difference between a robot with random gates and the handraisers that would cause you to change your answer regarding the two? Because at the moment you seem to be stating that if the random gates all guessed correctly the robot would consciously experience, but that you refuse to accept that if the handraisers guessed correctly their arrangement would consciously experience.
> >
> I explained my answers before. You claim I'm obfuscating. There's nothing new I can say. The answers to your questions are in the text below. Unless you ask me a question that indicates that you've actually read and tried to understand what I wrote, I'm done. I'm happy with my argument - on the odd chance that anyone is still interested, they can read my argument and decide whether I'm obfuscating or running away.
>
> Try to keep things clear and simple.
>
> You asked if a NAND gate arrangement attached to a robot could be conscious. I said (and I think Mark agrees) that I thought it might well be, although it would require a very large and complex arrangement.
>
> Next you asked about an analogous arrangement where the NAND gates were replaced by people who raised their hands in accordance with instructions, which would presumably instruct them to act like NAND gates.
>
> This was a non-essential distraction for several reasons. First, replacing the gates with people slows everything down. Since most of us think that for consciousness to emerge in the first place, there has to be recurring interaction with the environment, now you have the problem that the robot thinks glacially slowly and cannot react to events fast enough. So then you have to spend multiple posts trying to clarify this and presuppose simulated environmental inputs that happen appropriately slowly. So that's just a waste of time.
>
> The other problem is that when you switch back and forth between NAND gates and hand raisers, anybody who is familiar with philosophy of mind thinks you are targeting functionalism. Functionalism is in contrast to identity theory. Identity theory claims that there is a one-to-one relation between physical states and mental states. Under identity theory you could not have the same mental state with both a particular NAND gate arrangement and a hand raiser arrangement. Functionalism, on the other hand, claims that you could instantiate functionally equivalent mental states in very different physical states (e.g. human brain versus computer/robot). So when you bring in two different physical systems, naturally folks who are familiar with this area of philosophy think you are trying to argue something about functionalism versus identity theory. But as far as I can tell, that's not really what you care about. They are both physicalist views, anyway.
>
> Then your next move is to ask whether consciousness could arise if some of the hand raisers were only guessing about what they should do. So let's drop the handraisers, since it just adds elements not relevant to your main argument. What you ask is equivalent to asking whether if you had a conscious robot/NAND gate arrangement you could still get consciousness if some or all of the NAND gates were replaced with random output gates.
>
> Early on I answered that "Yes, but...." and then explained why the question did not really make sense. You seized on the "yes" and totally ignored the "but." So I'll simply revise my answer and say "no." [Please don't stop reading at this point and just run off and put "no" into your flow chart.] It is not possible for an arrangement of random output gates to produce consciousness. Your claim that believing that some arrangement of NAND gates could be conscious implies that one also believes that an arrangement of random output gates could be conscious is nonsensical, and Mark appropriately scoffed at it.
>
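The contrast between proper NAND gates and random output gates is easy to simulate. A minimal sketch, using an invented toy circuit (a ring of eight gates, each updating from its two neighbours; nothing like a real control unit) to compare deterministic NAND dynamics with pure guessing:

```python
import random

def step_nand(state):
    """One clock cycle: each gate outputs the NAND of its two neighbours."""
    n = len(state)
    return [1 - (state[i - 1] & state[(i + 1) % n]) for i in range(n)]

def step_random(state):
    """Same wiring, but every gate simply guesses its output."""
    return [random.randint(0, 1) for _ in state]

random.seed(0)
nand_state = guess_state = [1, 0, 1, 1, 0, 1, 0, 0]
matches = 0
for _ in range(1000):
    nand_state = step_nand(nand_state)
    guess_state = step_random(guess_state)
    matches += (nand_state == guess_state)  # whole-circuit agreement
print(f"cycles where guessing matched the NAND circuit: {matches}/1000")
```

Even with only eight gates the guessing circuit reproduces the NAND circuit's state roughly once in 256 cycles; scaled to billions of gates over thousands of cycles, effectively never.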

In
https://groups.google.com/d/msg/talk.origins/bycATB6rEho/6dba0Qy6_fkJ
you clearly wrote:

"If you substitute random gates for the NAND gates used to build the robot's "brain," and they just give random outputs, then the robot won't be conscious. In fact, the robot won't do much of anything. If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable, billions of gates, thousands of cycles, each gate just happening to give the same output it would as if it were a proper NAND gate."

You were clearly stating that if the random gates all guessed correctly, the robot would consciously experience. Do you refuse to accept that if the handraisers guessed correctly, their arrangement would consciously experience?

(I haven't read what you've written below, as this point needs to be cleared up and I'd rather not get distracted from it. I would have thought the idea of multiple realisability allows a compatibility between identity theory and functionalism; if you disagree we can maybe discuss it after the other points have been cleared up.)

Bill Rogers

Aug 4, 2015, 9:04:40 PM8/4/15
to talk-o...@moderators.isc.org
As I said...Unless you ask me a question that indicates that you've actually read and tried to understand what I wrote, I'm done. So, I'm done.

RSNorman

Aug 4, 2015, 9:39:39 PM8/4/15
to talk-o...@moderators.isc.org
On Tue, 4 Aug 2015 18:03:26 -0700 (PDT), Bill Rogers
<broger...@gmail.com> wrote:

>As I said...Unless you ask me a question that indicates that you've actually read and tried to understand what I wrote, I'm done. So, I'm done.

And another one bites the dust. Welcome to the club.

Someone is like the Energizer bunny. Except the bunny was sort of
funny.

someone

Aug 4, 2015, 9:39:40 PM8/4/15
to talk-o...@moderators.isc.org
I'm hardly surprised that you didn't attempt to explain whether, since you were clearly stating that if the random gates all guessed correctly the robot would consciously experience, you were also stating that if the handraisers raised their hands correctly their arrangement would consciously experience.

Because you had also stated:
"I am not obfuscating anything. For the nth time, I am rejecting the premise of question 1.y.y. I do not accept that it would be possible for an array of random output gates to be conscious. The premise of your question is that an array composed of random gates *could* be conscious. I reject that premise."

And question 1.y.y is a question about the handraisers guessing correctly, like those random NAND gates that you stated would consciously experience if they guessed correctly.

I think an issue for the talk.origins debate is this: what debate can there be when the atheists on the forum can't even come up with an alternative?

Burkhard

Aug 5, 2015, 4:44:39 AM8/5/15
to talk-o...@moderators.isc.org
Yeah, but he got me thinking.... Imagine a universe that is just like
ours, only that instead of quarks, the most basic building blocks of
matter are jell-o, and time runs backwards. Now, in this universe we can
imagine that across the planets of a galaxy, there is a fish on every
planet that burps a single note - but when you listen to all of them
sequentially, it would sound exactly like the Marseillaise played on
cymbals. The internal consistency of this thought experiment obviously
proves the Rousseauian theory of popular representation through a
"volonté générale" wrong! Chew on that, atheists!!!!

Nick Roberts

Aug 5, 2015, 6:29:40 AM8/5/15
to talk-o...@moderators.isc.org
In message <mpsi8a$qgm$1...@dont-email.me>
The problem with the burping fish is that it only sounds like the
Marseillaise from one specific point, because of delays in the signal
reaching your ears. If you weren't at that point, the notes would
appear to be played in the wrong order.

But we'll ignore that because it doesn't reflect the overall concept of
the thought experiment, which proves that the universe was designed.


--
Nick Roberts tigger @ orpheusinternet.co.uk

Hanlon's Razor: Never attribute to malice that which
can be adequately explained by stupidity.


Nick Roberts

Aug 5, 2015, 6:29:40 AM8/5/15
to talk-o...@moderators.isc.org
In message <ceaa8287-be27-4dee...@googlegroups.com>
someone <glenn....@googlemail.com> wrote:

> On Wednesday, August 5, 2015 at 2:04:40 AM UTC+1, Bill Rogers wrote:

[SNIP]

> > As I said...Unless you ask me a question that indicates that you've
> > actually read and tried to understand what I wrote, I'm done. So,
> > I'm done.
>
> I'm hardly surprised that you didn't attempt to explain whether since
> you were clearly stating that if the random gates all guessed
> correctly that the robot would consciously experience, you were also
> stating that if the handraisers raised their hands correctly that
> their arrangement would consciously experience.
>
> Because you had also stated: "I am not obfuscating anything. For the
> nth time, I am rejecting the premise of question 1.y.y. I do not
> accept that it would be possible for an array of random output gates
> to be conscious. The premise of your question is that an array
> composed of random gates *could* be conscious. I reject that
> premise."
>
> And question 1.y.y is a question about if the handraisers guessed
> correctly, like those random NAND gates that if you stated would
> consciously experience if the guessed correctly.
>
> I think an issue for the talk.origins debate, is that what debate can
> there be when the atheists on the forum can't even come up with an
> alternative.

I've avoided getting involved in your increasingly self-indulgent
thought experiments up until now, but I do have one observation to
make.

Bill Rogers asked you to demonstrate that you had thought about and
considered his answers to your questionnaire, and sought information
about his perspective. It came as no surprise that you demonstrated
exactly the opposite, and started patting yourself on the back.

So my observation is this: I am yet to be convinced that you (someone)
aren't just a fairly complex sequence of NAND gates, with some of those
gates replaced by human hand-raisers who guess. Certainly there has
been nothing in any of your recent posts that demonstrates any
awareness of your environment or ability to modify your flow chart.

A lot of modern software understands the concept of fuzzy logic, so a
flow chart that only deals with yes/no responses is seriously primitive.
So some NAND gate arrangements demonstrate significantly more
flexibility in responsiveness than you have demonstrated.
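The fuzzy-logic aside can be illustrated briefly. A minimal sketch using the standard Zadeh operators (min, max, complement), where truth values range over the whole interval [0, 1] rather than just yes/no:

```python
def f_and(a, b):  # Zadeh t-norm
    return min(a, b)

def f_or(a, b):   # Zadeh t-conorm
    return max(a, b)

def f_not(a):
    return 1.0 - a

def f_nand(a, b):
    """A fuzzy NAND: reduces to the crisp truth table at 0 and 1,
    but also handles 'maybe' inputs in between."""
    return f_not(f_and(a, b))

print(f_nand(1.0, 1.0))  # 0.0 -- crisp NAND behaviour
print(f_nand(1.0, 0.0))  # 1.0
print(f_nand(0.7, 0.4))  # 0.6 -- a graded, non-yes/no answer
```

The point is only that binary gates are the crudest special case of this; graded responses fit naturally in the same framework.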

William Morse

Aug 5, 2015, 9:04:37 PM8/5/15
to talk-o...@moderators.isc.org
On 07/30/2015 03:23 AM, someone wrote:
> [This is being posted because my response to post
> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> isn't getting through]
>
> If you haven't been following the conversation, here is a quick synopsis (if your a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
>
> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
>
> And the following questions about an atheist perspective.
>
> 1) Could a robot with a NAND gate control unit consciously experience?

What does this question have to do with an atheist perspective?
Consciousness is likely to be about non-linear systems with feedback, so
strict NAND gates are not particularly interesting. It could be that a
large enough array of NAND gates with some error (which is true of all
real world systems) might be capable of consciousness, but this is a
different question. And it still doesn't have anything to do with an
atheist perspective.
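The distinction between gates with *some* error and gates that merely guess is worth quantifying. A minimal sketch, assuming an illustrative 1% per-gate fault rate (the rate is invented for the example):

```python
import random

def nand(a, b, error_rate=0.0):
    """A NAND gate whose output occasionally flips."""
    out = 1 - (a & b)
    if random.random() < error_rate:
        out = 1 - out
    return out

random.seed(1)
trials = 10_000
# A slightly faulty gate versus a gate that merely guesses:
noisy_right = sum(nand(1, 1, error_rate=0.01) == 0 for _ in range(trials))
guess_right = sum(random.randint(0, 1) == 0 for _ in range(trials))
print(f"1%-faulty NAND gate correct: {noisy_right / trials:.0%}")
print(f"guessing gate correct:      {guess_right / trials:.0%}")
```

A real-world gate with a small error rate still computes its function almost all of the time; a guessing gate computes nothing. Conflating the two is the move the question trades on.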


> If the answer is "yes" then the next question is
>
> 1.y) If the hand raisers performed accurately, such that
> the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?
>
> If the answer to (1.y) is "yes" then the next question is
>
> (1.y.y) If some of the hand raisers guessed correctly would it make a difference to whether the property of consciously experiencing emerged?
>
> If the answer to (1.y.y) is "yes" then how does the account suggest that consciously experiencing makes a difference to behaviour, if the behaviour when it does emerge is the same as when it doesn't.
>
> If the answer to (1.y.y) is "no" then we can discuss from there.
>
> If the answer to (1.y) is "no" then how does the account suggest that consciously experiencing make difference to the robot's behaviour if the NAND gate arrangements outputs would be the same whether the property had emerged or not.
>
> If the answer to (1) is "no" then what is special about the biological chemistry?
>

Burkhard

Aug 6, 2015, 5:14:38 AM8/6/15
to talk-o...@moderators.isc.org
William Morse wrote:
> On 07/30/2015 03:23 AM, someone wrote:
>> [This is being posted because my response to post
>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
>> isn't getting through]
>>
>> If you haven't been following the conversation, here is a quick
>> synopsis (if your a talk.origins atheist, you might want to show that
>> there's no problem for atheism here). There is a scenario outlined in
>> this post (you can ignore the history):
>>
>> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
>>
>> And the following questions about an atheist perspective.
>>
>> 1) Could a robot with a NAND gate control unit consciously experience?
>
> What does this question have to do with an atheist perspective?


Others have asked before, you won't be surprised to learn that answers
there were none.

someone

Aug 10, 2015, 6:19:24 AM8/10/15
to talk-o...@moderators.isc.org
On Thursday, August 6, 2015 at 2:04:37 AM UTC+1, William Morse wrote:
> On 07/30/2015 03:23 AM, someone wrote:
> > [This is being posted because my response to post
> > https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> > isn't getting through]
> >
> > If you haven't been following the conversation, here is a quick synopsis (if your a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
> >
> > https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
> >
> > And the following questions about an atheist perspective.
> >
> > 1) Could a robot with a NAND gate control unit consciously experience?
>
> What does this question have to do with an atheist perspective?
> Consciousness is likely to be about non-linear systems with feedback, so
> strict NAND gates are not particularly interesting. It could be that a
> large enough array of NAND gates with some error (which is true of all
> real world systems) might be capable of consciousness, but this is a
> different question. And it still doesn't have anything to do with an
> atheist perspective.
>
>

It would have been useful if you'd just answered the questions; I for one would be interested.

eridanus

Aug 10, 2015, 7:39:24 AM8/10/15
to talk-o...@moderators.isc.org
On Thursday, July 30, 2015 at 8:24:58 AM UTC+1, someone wrote:
> [This is being posted because my response to post
> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> isn't getting through]
>
> If you haven't been following the conversation, here is a quick synopsis (if your a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
>
> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
>
> And the following questions about an atheist perspective.
>
> 1) Could a robot with a NAND gate control unit consciously experience?
>
instead of using such incomprehensible jargon, can we say in plain
English what on earth consciousness is?
We can present the problem in plain English of when something can be
considered true or false. It is not so easy, once we get far from the
most trivial considerations.
We all know that a lot of questions deemed true are true only as a
statistical probability. When Dr. Koch presented his report on Vibrio
cholerae, and presented a test tube with a liquid containing the microbe
of cholera, a young doctor went to the desk where Dr. Koch was speaking
and swallowed the contents of the whole test tube, to show the opinion
he had of this theory of a microbe causing cholera.
I had serious problems explaining to my brother-in-law why he could not
win playing roulette. He did not understand a word about probabilities.
He expected to win by repeating the bet, doubling the amount of money
risked on repeated odd or even numbers.
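The roulette point is a standard result and can be checked by simulation. A minimal sketch of the doubling ("martingale") system on an even-money bet with an 18/37 win probability (European wheel); the bankroll, base stake, and session count are invented for the example:

```python
import random

def martingale_session(bankroll=100, base_bet=1, p_win=18/37, rng=random):
    """Double the stake after each loss; stop when the next bet can't be covered."""
    bet = base_bet
    while bankroll >= bet:
        bankroll -= bet
        if rng.random() < p_win:   # an even-money bet on a European wheel
            bankroll += 2 * bet
            bet = base_bet         # won: back to the base stake
        else:
            bet *= 2               # lost: double to chase the loss
    return bankroll                # busted: the doubled bet can't be covered

rng = random.Random(42)
finals = [martingale_session(rng=rng) for _ in range(500)]
mean = sum(finals) / len(finals)
print(f"average final bankroll over 500 sessions: {mean:.1f} (started with 100)")
```

On average the player still loses: doubling only reshuffles when the losses arrive, while the house edge applies to every spin.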

Our sense of certitude comes from things that repeat routinely day after
day, like the light of the morning following the darkness of the night.
We can believe very trivial propositions, like that you need to drink
some amount of water every 24 hours, water either directly or
indirectly, as when eating apples or oranges. I read about an experiment
a Briton made to test the assertion of Bedouins that they do not drink
water while traveling in the desert. They did not drink water, but
plenty of tea, any time they saw the tent of someone in the desert.
I mean, some trivial questions cannot be disputed. If a person does not
drink water one way or another, in a few days he will have serious
problems. You feel very bad as the thirst increases, so that in less
than a week without drinking water you would die of dehydration, with
horrible pains. It is a horrible death, much worse than dying of hunger.
We can speak of consciousness here: the consciousness of being hungry,
seriously hungry; the consciousness of thirst; or even the consciousness
of not being able to sleep because someone is torturing you to impede
your sleep. Another case of consciousness is torture. We have sensors in
our body to warn us of something that hurts, or is too cold, or too hot,
which we must try to avoid if possible. We have a respondent behavior to
avoid pain: piercing pains, pressure pains, excessive heat, excessive
cold. This is a clear case of consciousness.
But in general, the word consciousness is associated with "verbal
messages" that try to change our minds, the loose ideas one has about
what is true or false. It takes a lot of time to acquire a whole
repertoire of what is true or false; but it is mostly a case of "brain
washing", many thousands of repetitions, more than real personal
experiences, like having been flogged, to give an example. If you have
ever been flogged, it is quite clear to your brain what a shit reality
it is.
Then, any discussion about the awareness of a robot must involve some
sensors that could give the robot some sense of the hard reality that
humans have.
But if by awareness or consciousness you are thinking about the
intricacy of human language, and how a robot could discern a false
argument from a valid one, this robot must be as lost as human beings
are, for human beings have very serious trouble discerning true from
false.
I can present you the case of a totally "artificial experience" like
maths. Someone can claim to prove something with mathematical reasoning,
but the proof can only be valid if your experience of maths is good
enough to discern a false mathematical argument from a good one.
Depending on the complexity of the argument and your maths experience,
this is possible or impossible. Outside our trivial experience, we have
no means of certitude.

Then, as to the question of the robot: it cannot have more consciousness
than humans can have. And if our consciousness is rather limited, our
best robot would be in a similar case, assuming we have enough
intelligence to make a robot that would emulate a human being.

Instead of presenting this abstract problem, which comes across as
incomprehensible, it is a lot better to present a valid definition of
consciousness, to determine in which ways we can be conscious, what the
limits of our consciousness are, and what our technical limitations are
in programming a robot to emulate human intelligence.
Most often, all the gibberish about consciousness refers to nonsense
speeches. The problem with those speeches is that they contain a lot of
undefined words which, if we demand that they be defined, require a lot
of definitions of the words used to define them, so in most cases we
cannot be sure what we are really talking about.






Ernest Major

Aug 10, 2015, 8:29:25 AM8/10/15
to talk-o...@moderators.isc.org
On 10/08/2015 11:16, someone wrote:
> On Thursday, August 6, 2015 at 2:04:37 AM UTC+1, William Morse wrote:
>> On 07/30/2015 03:23 AM, someone wrote:
>>> [This is being posted because my response to post
>>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
>>> isn't getting through]
>>>
>>> If you haven't been following the conversation, here is a quick synopsis (if your a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
>>>
>>> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
>>>
>>> And the following questions about an atheist perspective.
>>>
>>> 1) Could a robot with a NAND gate control unit consciously experience?
>>
>> What does this question have to do with an atheist perspective?
>> Consciousness is likely to be about non-linear systems with feedback, so
>> strict NAND gates are not particularly interesting. It could be that a
>> large enough array of NAND gates with some error (which is true of all
>> real world systems) might be capable of consciousness, but this is a
>> different question. And it still doesn't have anything to do with an
>> atheist perspective.
>>
>>
>
> It would have been useful if you'd have just answered the questions, I for one would be interested.

It would have been useful if you had just answered the question. The
fact that your pseudo-Socratic dialogue is intended to eventually
lead to a "proof" of God is insufficient grounds to label physicalist
accounts of consciousness as atheistic or evolutionary. I can understand
that from a false premise that theists are dualists one might draw the
mistaken conclusion that physicalist accounts are inherently atheistic,
but your arguments have nothing to do with evolution.

If you are working from the false premises that physicalist accounts of
consciousness are inherently atheistic and evolutionary why should we
trust any of your conclusions?

--
alias Ernest Major

someone

Aug 10, 2015, 9:39:23 AM8/10/15
to talk-o...@moderators.isc.org
The questions are about an atheist perspective by virtue of the fact that I am asking for the answers from an atheist perspective.

someone

Aug 10, 2015, 9:54:22 AM8/10/15
to talk-o...@moderators.isc.org
Well, by something consciously experiencing I mean that it doesn't have an absence of experience, as some atheists might believe they will have when they are dead.

Do you feel that this is a case where you aren't sure what is being talked about, i.e. you aren't sure what an atheist might mean if they were to state that they believed that when you die you won't experience anything any more?

Bill Rogers

Aug 10, 2015, 10:04:23 AM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 9:39:23 AM UTC-4, someone wrote:
> On Monday, August 10, 2015 at 1:29:25 PM UTC+1, Ernest Major wrote:
> > On 10/08/2015 11:16, someone wrote:
> > > On Thursday, August 6, 2015 at 2:04:37 AM UTC+1, William Morse wrote:
> > >> On 07/30/2015 03:23 AM, someone wrote:
> > >>> [This is being posted because my response to post
> > >>> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> > >>> isn't getting through]
> > >>>
> > >>> If you haven't been following the conversation, here is a quick synopsis (if your a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
> > >>>
> > >>> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
> > >>>
> > >>> And the following questions about an atheist perspective.
> > >>>
> > >>> 1) Could a robot with a NAND gate control unit consciously experience?
> > >>
> > >> What does this question have to do with an atheist perspective?
> > >> Consciousness is likely to be about non-linear systems with feedback, so
> > >> strict NAND gates are not particularly interesting. It could be that a
> > >> large enough array of NAND gates with some error (which is true of all
> > >> real world systems) might be capable of consciousness, but this is a
> > >> different question. And it still doesn't have anything to do with an
> > >> atheist perspective.
> > >>
> > >>
> > >
> > > It would have been useful if you'd have just answered the questions, I for one would be interested.
> >
> > It would have been useful if you had just answered the question. The
> > fact that the your pseudo-Socratic dialogue is intended to eventually
> > lead to a "proof" of God is insufficient grounds to label physicalist
> > accounts of consciousness as atheistic or evolutionary. I can understand
> > that from a false premise that theists are dualists one might draw the
> > mistaken conclusion that physicalist accounts are inherently atheistic,
> > but your arguments have nothing do to with evolution.
> >
> > If you are working from the false premises that physicalist accounts of
> > consciousness are inherently atheistic and evolutionary why should we
> > trust any of your conclusions?
> >
>
> The questions are about an atheist perspective by virtue that I am asking for the answers from an atheist perspective.

Well, here's an answer to one of your previous questions, from an atheist perspective. No, no arrangement of NAND gates could be conscious, because in order to be conscious things have to have an immaterial soul. So, from this atheist perspective, the answer to your first question is simply no.

someone

Aug 10, 2015, 10:54:22 AM8/10/15
to talk-o...@moderators.isc.org
So you are an atheist, but believe that you are a conscious immaterial soul?

Burkhard

Aug 10, 2015, 10:59:23 AM8/10/15
to talk-o...@moderators.isc.org
as nobody here understands what an "atheist perspective" looks like, as
opposed to just the ordinary "philosophy of mind" perspective, nobody
can answer it.

someone

Aug 10, 2015, 11:04:23 AM8/10/15
to talk-o...@moderators.isc.org
It'd be great if that was the problem; I was worrying it might be intellectual cowardice. By atheist perspective I mean a belief perspective on reality in which God isn't believed to exist.

Bill Rogers

Aug 10, 2015, 11:09:22 AM8/10/15
to talk-o...@moderators.isc.org
Why not? I'm just postulating that there are two sorts of stuff in the world, material and non-material. That doesn't entail believing in God.

RSNorman

Aug 10, 2015, 11:09:22 AM8/10/15
to talk-o...@moderators.isc.org
Generally speaking, any scientific analysis of consciousness, atheist
or not, NAND gates or not, has absolutely no connection with whether
or not God exists. That is why nobody understands "atheist
perspective".



Bill Rogers

Aug 10, 2015, 11:09:22 AM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 10:54:22 AM UTC-4, someone wrote:
> On Monday, August 10, 2015 at 3:04:23 PM UTC+1, Bill Rogers wrote:
> > Well, here's an answer to one of your previous questions, from an atheist perspective. No, no arrangement of NAND gates could be conscious, because in order to be conscious things have to have an immaterial soul. So, from this atheist perspective, the answer to your first question is simply no.
>
> So you are an atheist, but believe that you are a conscious immaterial soul?

Why not? I'm just postulating that there are two sorts of stuff in the world, material stuff and non-material stuff (including soul, Platonic geometric forms). That doesn't entail believing in God.

Bill Rogers

Aug 10, 2015, 11:14:30 AM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 11:04:23 AM UTC-4, someone wrote:
> On Monday, August 10, 2015 at 3:59:23 PM UTC+1, Burkhard wrote:
> > as nobody here understands how an "atheist perspective" looks like as
> > opposed to just the ordinary "philosophy of mind" perspective, nobody
> > can answer it then.
>
> It'd be great if that was the problem I was worrying it might be intellectual cowardice. By atheist perspective I mean a belief perspective of reality in which God isn't believed to exist.

It must be very comforting to think that people reject your arguments out of intellectual cowardice rather than from any cause having to do with the quality of the arguments themselves.

someone

Aug 10, 2015, 11:19:24 AM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 4:09:22 PM UTC+1, RSNorman wrote:
> On Mon, 10 Aug 2015 08:03:43 -0700 (PDT), someone
I've just explained what I mean by "atheist perspective", and as I've previously pointed out, the questions are about an atheist perspective by virtue of the fact that I am asking for the answers from an atheist perspective. Is it that you can't imagine what I mean by "a belief perspective of reality in which God isn't believed to exist", in a way that would make sense given the context?


someone

Aug 10, 2015, 11:24:22 AM8/10/15
to talk-o...@moderators.isc.org
It's fine, I just wanted to check that you were stating that you were the soul; I wasn't sure what the soul was referring to otherwise.

Well, before we get to the symbolism argument, how are you suggesting that what you consciously experience affects the behaviour of the material human? You don't know what neurons would need to fire to allow you to type on the keyboard, for example, so how can you be responsible for their firing?

Bill Rogers

Aug 10, 2015, 11:34:22 AM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 11:24:22 AM UTC-4, someone wrote:

> > > So you are an atheist, but believe that you are a conscious immaterial soul?
> >
> > Why not? I'm just postulating that there are two sorts of stuff in the world, material stuff and non-material stuff (including soul, Platonic geometric forms). That doesn't entail believing in God.
>
> It's fine, I just wanted to check that you were stating that you were the soul, I wasn't sure what the soul was referring to otherwise.
>
> Well before we get to the symbolism argument, how are you suggesting that what you consciously experience affects the behaviour of the material human? You don't know what neurons would need to fire to allow you to type on the keyboard for example so how can you be responsible for them firing.

I don't know how immaterial stuff interacts with material stuff. But that it does so is obvious by introspection. I think about raising my arm, and up it goes. I read a proposition by Euclid about abstract geometry, and whatever neurons have to fire are forced to do so by the abstract, non-physical properties of the ideal geometric objects.


Nick Roberts

Aug 10, 2015, 11:39:23 AM8/10/15
to talk-o...@moderators.isc.org
In message <1dac1fdc-5ed5-4d22...@googlegroups.com>
No, you're patting yourself on the back by believing that.

Perhaps if you showed any interest in what people actually posted,
rather than insisting that they had to follow your preprogrammed flow
chart then those who were interested in discussing consciousness would
have had more patience with you.

As it is, every time any other poster asked you for more information,
your response was roughly the equivalent of "I'm not interested in
doing anything other than making you follow the pseudo-Socratic
argument I've already planned out".

That only works if you're Plato, and get to put the words you want into
other people's mouths.

> By atheist perspective I mean a belief
> perspective of reality in which God isn't believed to exist.

Along with Burkhard, I don't understand the difference between a
scientific theory of consciousness (which, because it is scientific,
does not depend on the existence of God to make something work) and one
that assumes a reality in which God doesn't exist.

Like too many theists, you confuse science (which isn't allowed to use
God as an explanation) with atheism (which is a belief system which
rejects the existence of God). There is no "atheistic" version of the
Theory of Evolution, or the Theory of Gravity, or Atomic Theory. There
is just the Theory of Evolution, the Theory of Gravity, and Atomic
Theory.

someone

Aug 10, 2015, 11:49:25 AM8/10/15
to talk-o...@moderators.isc.org
So you, the immaterial soul, don't know how to, and don't, fire the neurons, but abstract ideal geometric objects do?

Anyway I'll move onto the symbolism issue.

If your brain were in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as it is now, would the conscious experience be the same?

someone

Aug 10, 2015, 12:04:22 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 4:39:23 PM UTC+1, Nick Roberts wrote:
> In message <1dac1fdc-5ed5-4d22...@googlegroups.com>
I was thinking of science as an experimental method. As such, I don't think it's appropriate to regard science as having some metaphysical bias.

You state that it isn't a case of intellectual cowardice, but then why not answer the questions, and illustrate your point when you come to ask for more information relevant to your consideration (rather than asking a distracting question to change the subject, so that you can avoid answering while not appearing an intellectual coward to someone who hasn't been following)?

Ernest Major

Aug 10, 2015, 12:09:22 PM8/10/15
to talk-o...@moderators.isc.org
Given your studied refusal to present your argument you would be better
served to worry about your own courage.

One doesn't have to be an atheist to be a physicalist, nor does one have
to be a physicalist to be an atheist, nor does one have to be an
antiphysicalist to be a theist, nor does one have to be a theist to be
an antiphysicalist.

Given this, even if you could prove something that has defeated the
community of philosophers and scientists, you would be very little
further on your road to proving the existence of your deity of choice.

--
alias Ernest Major

someone

Aug 10, 2015, 12:24:23 PM8/10/15
to talk-o...@moderators.isc.org
There were questions you were asked to answer. Are you suggesting that there is some information you need in order to answer them (rather than asking a question which acts as a distraction so you can avoid answering), information which I am withholding because I lack the courage to face what those like you would do if you could only get at it?

If there is no information that you are waiting on, why write posts which don't answer them?


Bill Rogers

Aug 10, 2015, 12:44:22 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 11:49:25 AM UTC-4, someone wrote:
> On Monday, August 10, 2015 at 4:34:22 PM UTC+1, Bill Rogers wrote:
> > On Monday, August 10, 2015 at 11:24:22 AM UTC-4, someone wrote:
> >
> > > > > So you are an atheist, but believe that you are a conscious immaterial soul?
> > > >
> > > > Why not? I'm just postulating that there are two sorts of stuff in the world, material stuff and non-material stuff (including soul, Platonic geometric forms). That doesn't entail believing in God.
> > >
> > > It's fine, I just wanted to check that you were stating that you were the soul, I wasn't sure what the soul was referring to otherwise.
> > >
> > > Well before we get to the symbolism argument, how are you suggesting that what you consciously experience affects the behaviour of the material human? You don't know what neurons would need to fire to allow you to type on the keyboard for example so how can you be responsible for them firing.
> >
> > I don't know how immaterial stuff interacts with material stuff. But that it does so is obvious by introspection. I think about raising my arm, and up it goes. I read a proposition by Euclid about abstract geometry, and whatever neurons have to fire are forced to do so by the abstract, non-physical properties of the ideal geometric objects.
>
> So you, the immaterial soul doesn't know how to, and doesn't, fire the neurons, but abstract ideal geometric objects do?

No, I don't know how to fire the neurons. Why does that surprise you? Do you know how to turn on the neurons that begin peristalsis in your gut? That my mind is composed of immaterial stuff doesn't make me omniscient; there are all sorts of processes which go on in both my physical and non-physical parts of which I am not aware.

>
> Anyway I'll move onto the symbolism issue.
>
> If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?

I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.


Ernest Major

Aug 10, 2015, 12:44:22 PM8/10/15
to talk-o...@moderators.isc.org
Several people have explained to you that there isn't "an answer from an
atheist perspective". There's very little that atheists are by
definition required to believe about consciousness - that it's not a
product of a god is the only one that comes to mind.

If you want to prove that physicalism is false, all this guff about
atheist perspectives is just confusing the issue.

--
alias Ernest Major

someone

Aug 10, 2015, 12:59:22 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 5:44:22 PM UTC+1, Bill Rogers wrote:
> On Monday, August 10, 2015 at 11:49:25 AM UTC-4, someone wrote:
> > On Monday, August 10, 2015 at 4:34:22 PM UTC+1, Bill Rogers wrote:
> > > On Monday, August 10, 2015 at 11:24:22 AM UTC-4, someone wrote:
> > >
> > > > > > So you are an atheist, but believe that you are a conscious immaterial soul?
> > > > >
> > > > > Why not? I'm just postulating that there are two sorts of stuff in the world, material stuff and non-material stuff (including soul, Platonic geometric forms). That doesn't entail believing in God.
> > > >
> > > > It's fine, I just wanted to check that you were stating that you were the soul, I wasn't sure what the soul was referring to otherwise.
> > > >
> > > > Well before we get to the symbolism argument, how are you suggesting that what you consciously experience affects the behaviour of the material human? You don't know what neurons would need to fire to allow you to type on the keyboard for example so how can you be responsible for them firing.
> > >
> > > I don't know how immaterial stuff interacts with material stuff. But that it does so is obvious by introspection. I think about raising my arm, and up it goes. I read a proposition by Euclid about abstract geometry, and whatever neurons have to fire are forced to do so by the abstract, non-physical properties of the ideal geometric objects.
> >
> > So you, the immaterial soul doesn't know how to, and doesn't, fire the neurons, but abstract ideal geometric objects do?
>
> No, I don't know how to fire the neurons. Why does that surprise you? Do you know how to turn on the neurons that begin peristalsis in your gut? That my mind is composed of immaterial stuff doesn't make me omniscient, there are all sorts of processes which go on in both my physical and non-physical parts, of which I am not aware.
>

So there is unconscious immaterial stuff too?


> >
> > Anyway I'll move onto the symbolism issue.
> >
> > If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?
>
> I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.

Well just guess, and then we can explore the other alternative later. No need to stop here, we can continue.

Bill Rogers

Aug 10, 2015, 1:19:23 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 12:59:22 PM UTC-4, someone wrote:

> > > So you, the immaterial soul doesn't know how to, and doesn't, fire the neurons, but abstract ideal geometric objects do?
> >
> > No, I don't know how to fire the neurons. Why does that surprise you? Do you know how to turn on the neurons that begin peristalsis in your gut? That my mind is composed of immaterial stuff doesn't make me omniscient, there are all sorts of processes which go on in both my physical and non-physical parts, of which I am not aware.
> >
>
> So there is an unconscious immaterial too?

Sure, why not?

> > >
> > > Anyway I'll move onto the symbolism issue.
> > >
> > > If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?
> >
> > I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.
>
> Well just guess, and then we can explore the other alternative later. No need to stop here, we can continue.

You don't need me for that. You can go ahead and explore all the alternatives you want. Whatever you do from here on out is not exploring my atheist perspective, it's you thinking about how you'd like various arguments to go. That's a fine thing to do, you just don't need me to do it. But when someone does the actual experiment, I'll be interested in seeing how it comes out.

Burkhard

Aug 10, 2015, 1:49:23 PM8/10/15
to talk-o...@moderators.isc.org
So you want an answer to the mind-body problem where the nonexistence of
a deity is a necessary feature that carries explanatory weight? No such
thing exists, and I have no idea what it would even look like.




Nick Roberts

Aug 10, 2015, 5:29:21 PM8/10/15
to talk-o...@moderators.isc.org
In message <b59209ac-d7e2-47de...@googlegroups.com>
Not all science is experimental, at least in the classic sense of the
word.

> As such I wasn't
> thinking of it as appropriate to think of science having some
> metaphysical bias.

It doesn't have a metaphysical bias. It does have a methodological
bias towards naturalism, which is why God is not regarded as a
scientific explanation of anything.

But if you understand that science doesn't have a metaphysical bias,
why are you insistent that you are only interested in an atheistic
explanation of consciousness? This is what people who have been
exchanging posts with you keep trying to get you to understand, and
every time they try to distinguish between scientific (or naturalistic)
and atheistic, you immediately conflate them again.

> You state that it isn't a case of intellectual cowardice, but why
> don't you answer the questions, and illustrate your point when you
> come to ask for more information relevant to your consideration
> (rather than a distraction, to change the subject so that you can
> avoid the question while not appearing an intellectual coward (to
> someone who hasn't been following it))?

Because my experience of observing you exchanging posts with others
(Bill Rogers, RSNorman, etc), is that you ignore what they say and
continually try to force their responses into your prepackaged set of
acceptable responses. You may call my refusal to get bogged down in a
totally unproductive exchange "intellectual cowardice" - I claim it
as a side-effect of having a low boredom threshold.

If you provide evidence of your willingness to actually pay attention
to what people are posting, rather than insisting on a "yes/no" answer
to questions that warrant conditional answers, then I may occasionally
contribute (occasionally, because theory of mind is not something I
have studied in any detail). As it is, my observations of the content
of your posts (which I have been skimming, although not contributing)
on your latest visit to this NG and your previous one make me conclude
that you aren't interested in understanding other people's
perspectives, only in demonstrating your own a priori assumptions. That
may not be intellectual cowardice, but it's certainly intellectual
dishonesty.

someone

Aug 10, 2015, 6:44:21 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 6:19:23 PM UTC+1, Bill Rogers wrote:
> On Monday, August 10, 2015 at 12:59:22 PM UTC-4, someone wrote:
>
> > > > So you, the immaterial soul doesn't know how to, and doesn't, fire the neurons, but abstract ideal geometric objects do?
> > >
> > > No, I don't know how to fire the neurons. Why does that surprise you? Do you know how to turn on the neurons that begin peristalsis in your gut? That my mind is composed of immaterial stuff doesn't make me omniscient, there are all sorts of processes which go on in both my physical and non-physical parts, of which I am not aware.
> > >
> >
> > So there is an unconscious immaterial too?
>
> Sure, why not?
>

I'm just not clear on what it would be. I can understand the physical as an alternative to mind, but quite what you are imagining the unconscious immaterial to be, I'm not sure - an alternative "physical" to mind?

> > > >
> > > > Anyway I'll move onto the symbolism issue.
> > > >
> > > > If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?
> > >
> > > I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.
> >
> > Well just guess, and then we can explore the other alternative later. No need to stop here, we can continue.
>
> You don't need me for that. You can go ahead and explore all the alternatives you want. Whatever you do from here on out is not exploring my atheist perspective, it's you thinking about how you'd like various arguments to go. That's a fine thing to do, you just don't need me to do it. But when someone does the actual experiment, I'll be interested in seeing how it comes out.

OK, well, presumably you can see that you could have answered that there would be a difference. But that would give an experimental test, for example with bionic eye technology: does it actually matter whether the input came from a light detector, or could it come from anywhere and still be experienced as a visual experience?

Alternatively, if you had said that the experience would have been the same, then there would have been the issue of why the processing should give rise to a conscious experience based upon a symbolic narrative of the processing, given that the inputs had a certain symbolism. There are billions of imaginable contexts in which the processing could have occurred; why, in an un-designed universe, should the symbolism of the inputs happen to be the one suitable for a spiritual being having a spiritual experience?

Still, if you aren't willing to go any further with the one atheist position you seemed brave enough to answer the questions from - the atheist-who-believes-in-an-immaterial-soul option - then we can leave it there.

someone

Aug 10, 2015, 6:59:21 PM8/10/15
to talk-o...@moderators.isc.org
Where did I say that it is a necessary feature that carries explanatory weight? Oh, that's right, I didn't; you made it up (to try to look clever?).

Bill Rogers

Aug 10, 2015, 8:04:20 PM8/10/15
to talk-o...@moderators.isc.org
On Monday, August 10, 2015 at 6:44:21 PM UTC-4, someone wrote:
> > > >
> > >
> > > So there is an unconscious immaterial too?
> >
> > Sure, why not?
> >
>
> I'm just not clear on what it would be. I can understand the physical as an alternative to mind, but quite what you are imagining the unconscious immaterial to be I'm not sure, an alternative physical to mind?

I'm not sure what it would be, either. But since an ideal triangle is non-physical and, presumably, non-conscious, I'm not sure why there could not be stuff in the non-material, non-conscious category.

>
> > > > >
> > > > > Anyway I'll move onto the symbolism issue.
> > > > >
> > > > > If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?
> > > >
> > > > I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.
> > >
> > > Well just guess, and then we can explore the other alternative later. No need to stop here, we can continue.
> >
> > You don't need me for that. You can go ahead and explore all the alternatives you want. Whatever you do from here on out is not exploring my atheist perspective, it's you thinking about how you'd like various arguments to go. That's a fine thing to do, you just don't need me to do it. But when someone does the actual experiment, I'll be interested in seeing how it comes out.
>
> Ok, well presumably you can see that you could have answered that there would be a difference. But that would presumably give an experimental test. For example with bionic eye technology. Does it actually matter whether the input came from a light detector or could it come from anywhere and it still be experienced as a visual experience?

Yes, there would be an experimental test, but I have no idea how it would come out. You've arranged to make all the physical inputs identical, but what about possible non-physical inputs? I just don't see any way of guessing how the experiment would come out. Saying "I don't know" is often the best answer.

>
> Alternatively if you had said that the experience would have been the same, then there would have been the issue of why the processing should have a conscious experience based upon a symbolic narrative of the processing if the inputs had had a certain symbolism, when there are billions of imaginable contexts the processing could have occurred in, why should symbolism of the inputs in the context suitable for a spiritual being having a spiritual experience happen to be the one that it was in an un-designed universe?

I suppose I'd have said that in that case it was the actual input, rather than anything that that input symbolized, that mattered. And I would question whether the concept of the input symbolizing anything at all was really a good concept, if whatever it symbolizes has no detectable effect on anything. But again, I have no idea what would happen if you make all the material inputs the same but don't specify anything about the non-material inputs. It's just the sort of thing where you have to wait and see how the experiment turns out.

someone

Aug 11, 2015, 3:34:20 AM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 1:04:20 AM UTC+1, Bill Rogers wrote:
> On Monday, August 10, 2015 at 6:44:21 PM UTC-4, someone wrote:
> > > > >
> > > >
> > > > So there is an unconscious immaterial too?
> > >
> > > Sure, why not?
> > >
> >
> > I'm just not clear on what it would be. I can understand the physical as an alternative to mind, but quite what you are imagining the unconscious immaterial to be I'm not sure, an alternative physical to mind?
>
> I'm not sure what it would be, either. But since an ideal triangle is non-physical and, presumably, non-conscious, I'm not sure why there could not be stuff in the non-material, non-conscious category.

So ideal triangles exist ontologically?


>
> >
> > > > > >
> > > > > > Anyway I'll move onto the symbolism issue.
> > > > > >
> > > > > > If you're brain was existing in a vat and fed the same inputs (nerve signals, chemical concentrations in solutions, etc.) as your one is now, would the conscious experience be the same?
> > > > >
> > > > > I don't know. Since I'm not sure how the material stuff in me interacts with the non-material stuff in me, it's hard for me to guess what would happen if you changed the physical stimulation of my physical bits without changing anything about my non-material bits. But I'll be interested to know what happens whenever somebody does the experiment.
> > > >
> > > > Well just guess, and then we can explore the other alternative later. No need to stop here, we can continue.
> > >
> > > You don't need me for that. You can go ahead and explore all the alternatives you want. Whatever you do from here on out is not exploring my atheist perspective, it's you thinking about how you'd like various arguments to go. That's a fine thing to do, you just don't need me to do it. But when someone does the actual experiment, I'll be interested in seeing how it comes out.
> >
> > Ok, well presumably you can see that you could have answered that there would be a difference. But that would presumably give an experimental test. For example with bionic eye technology. Does it actually matter whether the input came from a light detector or could it come from anywhere and it still be experienced as a visual experience?
>
> Yes, there would be an experimental test, but I have no idea how it would come out. You've arrange to make all the physical inputs identical, but what about possible non-physical inputs? I just don't see any way of guessing how the experiment would come out. Saying "I don't know" is often the best answer.
>

Oh, you can say "I don't know", but I suspect this avenue is already closed by experimental evidence: there is no indication that bionic eye patients have ESP about what triggered the stimulation of the optic nerve.

> >
> > Alternatively if you had said that the experience would have been the same, then there would have been the issue of why the processing should have a conscious experience based upon a symbolic narrative of the processing if the inputs had had a certain symbolism, when there are billions of imaginable contexts the processing could have occurred in, why should symbolism of the inputs in the context suitable for a spiritual being having a spiritual experience happen to be the one that it was in an un-designed universe?
>
> I suppose I'd have said that in that case it was the actual input, rather than anything that that input symbolized, that mattered. And I would question whether the concept of the input symbolizing anything at all was really a good concept, if whatever it symbolizes has no detectable effect on anything. But again, I have no idea what would happen if you make all the material inputs the same but don't specify anything about the non-material inputs. It's just the sort of thing where you have to wait and see how the experiment turns out.
>

I don't understand what you meant by "in that case it was the actual input, rather than anything the input symbolized". What was the actual input? I was referring to the objects you consciously experience; are you claiming that they aren't represented in the neural state?

Burkhard

Aug 11, 2015, 4:14:22 AM8/11/15
to talk-o...@moderators.isc.org
That is the logical implication of what you claim above, and the only
way in which you can distinguish a theist dualist position from an
atheist one, or a theist monist position from an atheist one.


> Oh that's right I didn't,

You didn't say it explicitly, but it is the logical implication of what
you write - that you don't think through what you write is your problem,
not mine.

someone

Aug 11, 2015, 4:24:20 AM8/11/15
to talk-o...@moderators.isc.org
Well at least quote the piece you are claiming it is a logical consequence of.

Notice I explained what I meant by atheist perspective (a belief perspective of reality in which God isn't believed to exist), and I explained why the questions are about such a perspective (by virtue of the fact that I am asking for the answers from an atheist perspective).

Burkhard

Aug 11, 2015, 5:04:20 AM8/11/15
to talk-o...@moderators.isc.org
So you want an account of the mind-body problem where at some stage the
theory says: "and because gods do not exist, X"
>
> Notice I explained what I meant by atheist perspective (a belief perspective of reality in which God isn't believed to exist) and I explained why the questions are about such a perspective (by virtue that I am asking for the answers from an atheist perspective).
>

No, you wrote "more words" about your original request, none of them
however an "explanation". They still suffer from the original deficit:
unless you accept what I wrote above about the implications of your
claim, there is no way to distinguish a "theistic" from an "atheistic"
monism, dualism, reductionism, emergentism, etc.

someone

Aug 11, 2015, 5:34:23 AM8/11/15
to talk-o...@moderators.isc.org
So you decided not to quote the stuff then.

Burkhard

Aug 11, 2015, 7:14:22 AM8/11/15
to talk-o...@moderators.isc.org
It is above, within the quotation marks. Using quotation marks is very
often a way to, well, quote things.

Nick Roberts

Aug 11, 2015, 7:14:22 AM8/11/15
to talk-o...@moderators.isc.org
In message <a09f5aa0-5f54-4abf...@googlegroups.com>
Perhaps you could read it that way. Alternatively, you could interpret
it as Burkhard being generous enough to assume that if he explained the
inevitable consequences of what you _did_ say, you would accept it.

So explain how an atheist theory of consciousness differs from a
theistic but materialist theory of consciousness. You keep insisting
that you want to discuss a materialist theory of consciousness, but you
also insist on labelling it an atheistic theory of consciousness, so
there must be a difference in your mind. It's just not clear to anyone
other than you what that difference is.

smith...@gmail.com

Aug 11, 2015, 7:19:19 AM8/11/15
to talk-o...@moderators.isc.org
So you decided not to even attempt to understand or respond to
Burkhard's point(s).

Or even read his post, because he _did_ quote you.

Bill Rogers

Aug 11, 2015, 7:19:19 AM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 3:34:20 AM UTC-4, someone wrote:
> > > Ok, well presumably you can see that you could have answered that there would be a difference. But that would presumably give an experimental test. For example with bionic eye technology. Does it actually matter whether the input came from a light detector or could it come from anywhere and it still be experienced as a visual experience?
> >
> > Yes, there would be an experimental test, but I have no idea how it would come out. You've arrange to make all the physical inputs identical, but what about possible non-physical inputs? I just don't see any way of guessing how the experiment would come out. Saying "I don't know" is often the best answer.
> >
>
> Oh you can say "I don't know" but this avenue I suspect is already closed by experimental evidence, and that there is no indication that bionic eye patients have ESP about what triggered the stimulation of the optic nerve.

I don't think ESP would be required. It would be enough to notice that somebody was fooling around with the connections, or that the simulated inputs were not behaving as expected when the patient turned his head or started walking around the room.
>
> > >
> > > Alternatively if you had said that the experience would have been the same, then there would have been the issue of why the processing should have a conscious experience based upon a symbolic narrative of the processing if the inputs had had a certain symbolism, when there are billions of imaginable contexts the processing could have occurred in, why should symbolism of the inputs in the context suitable for a spiritual being having a spiritual experience happen to be the one that it was in an un-designed universe?
> >
> > I suppose I'd have said that in that case it was the actual input, rather than anything that that input symbolized, that mattered. And I would question whether the concept of the input symbolizing anything at all was really a good concept, if whatever it symbolizes has no detectable effect on anything. But again, I have no idea what would happen if you make all the material inputs the same but don't specify anything about the non-material inputs. It's just the sort of thing where you have to wait and see how the experiment turns out.
> >
>
> I don't understand what you meant by "in that case it was the actual input, rather than anything the input symbolized". What was the actual input? I was referring to the objects you consciously experience, are you claiming that they aren't represented in the neural state?

I didn't realize you were talking about neural representations. I thought you were talking about some situation in which you gave the physical brain all the physical inputs it would get if it were actually in a body walking around in the real world. It seems like you are talking about the brain-in-a-vat or the evil demon from Descartes' Meditations, but maybe you have something else in mind.

In any case, none of the questions you are asking me have anything to do with whether a god exists or not, so even though I am arguing from an "atheist perspective" it's hard to see that the atheism is relevant to anything you are asking about.

Andre G. Isaak

Aug 11, 2015, 9:29:22 AM8/11/15
to talk-o...@moderators.isc.org
In article <3d92f173-b311-4f5a...@googlegroups.com>,
Inputs don't have 'symbolism'. Period. Symbolism exists within the brain
(or some other conscious system such as your conscious NAND computer).

> why should symbolism of the
> inputs in the context suitable for a spiritual being having a spiritual
> experience happen to be the one that it was in an un-designed universe?

What does this even mean? How do you determine whether a particular
symbolic representation is 'suitable for a spiritual being having a
spiritual experience?' And what does that have to do with whether the
universe is designed?

Andre

someone

Aug 11, 2015, 2:39:19 PM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 12:19:19 PM UTC+1, Bill Rogers wrote:
> On Tuesday, August 11, 2015 at 3:34:20 AM UTC-4, someone wrote:
> > > > Ok, well presumably you can see that you could have answered that there would be a difference. But that would presumably give an experimental test. For example with bionic eye technology. Does it actually matter whether the input came from a light detector or could it come from anywhere and it still be experienced as a visual experience?
> > >
> > > Yes, there would be an experimental test, but I have no idea how it would come out. You've arranged to make all the physical inputs identical, but what about possible non-physical inputs? I just don't see any way of guessing how the experiment would come out. Saying "I don't know" is often the best answer.
> > >
> >
> > Oh you can say "I don't know" but this avenue I suspect is already closed by experimental evidence, and that there is no indication that bionic eye patients have ESP about what triggered the stimulation of the optic nerve.
>
> I don't think ESP would be required. It would be enough to notice that somebody was fooling around with the connections. Or that the simulated inputs were not behaving as expected when the patient turned his head, or started walking around the room.

The inputs would be the same (in terms of nerve firing), so where is the information about where they came from coming from, without some ESP? Though if the scientists who have already installed bionic eyes were to confirm that it doesn't matter how they trigger the electrodes connected to the optic nerve, the patient still experiences visual stimulation, then that option would be closed for you, would it not?


> >
> > > >
> > > > Alternatively if you had said that the experience would have been the same, then there would have been the issue of why the processing should have a conscious experience based upon a symbolic narrative of the processing if the inputs had had a certain symbolism, when there are billions of imaginable contexts the processing could have occurred in, why should symbolism of the inputs in the context suitable for a spiritual being having a spiritual experience happen to be the one that it was in an un-designed universe?
> > >
> > > I suppose I'd have said that in that case it was the actual input, rather than anything that that input symbolized, that mattered. And I would question whether the concept of the input symbolizing anything at all was really a good concept, if whatever it symbolizes has no detectable effect on anything. But again, I have no idea what would happen if you make all the material inputs the same but don't specify anything about the non-material inputs. It's just the sort of thing where you have to wait and see how the experiment turns out.
> > >
> >
> > I don't understand what you meant by "in that case it was the actual input, rather than anything the input symbolized". What was the actual input? I was referring to the objects you consciously experience, are you claiming that they aren't represented in the neural state?
>
> I didn't realize you were talking about neural representations. I thought you were talking about some situation in which you gave the physical brain all the physical inputs it would get if it were actually in a body walking around in the real world. It seems like you are talking about the brain-in-a-vat or the evil demon from Descartes' Discourse on Method, but maybe you have something else in mind.
>
> In any case, none of the questions you are asking me have anything to do with whether a god exists or not, so even though I am arguing from an "atheist perspective" it's hard to see that the atheism is relevant to anything you are asking about.

The question was about a brain in a vat, and it was an exploration of if you were to state that it would experience the same, if it received the same inputs. Are you denying that the objects you consciously experience are represented in your neural state or not?


Bill Rogers

Aug 11, 2015, 4:09:20 PM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 2:39:19 PM UTC-4, someone wrote:

> > I don't think ESP would be required. It would be enough to notice that somebody was fooling around with the connections. Or that the simulated inputs were not behaving as expected when the patient turned his head, or started walking around the room.
>
> The inputs would be the same (in terms of nerve firing), so where is the information about where they came from coming from, without some ESP?

That should be obvious from what I wrote. In the first place information comes from the other senses. The patient will feel someone disconnecting his camera, or whatever the normal input is, and hooking up the other input. In the second place, it's not likely that the other input will be able to mimic the input the subject would get simply by moving around the world.

Or perhaps you are imagining some bionic eye way beyond current technology and comparing that bionic eye to vision with a normal eye. My understanding was that you were thinking of some real system involving a camera hooked up to an array of electrodes (normally these are attached to the skin of the chest or abdomen - people learn to "see" with these arrays and sometimes report that it feels something like really seeing). Then you were, I think, comparing what would happen if you disconnected the camera, and just turned the electrodes on and off *as though* they were still attached to the camera. My contention is that, practically, you could not do that well enough to fool the patient. But imagine you could do so, say by attaching the electrodes to a computer which was monitoring the position of the camera and calculating the input that the camera should be receiving, and then delivering that input to the electrodes in the array. If you did that, all you'd have done is add an extra step to the artificial vision, and the subject would still be getting his information from the world, just after an extra step.

Now, if you say that's not what you meant, but that what you meant was that some computer would feed input to the array that did not reflect what the patient would see in front of his face, then he could easily detect the discrepancies just by using his other senses, no ESP required.

And I still utterly fail to see what any of this has to do with the "atheist perspective" from which I am answering your questions.

>Though if the scientists who have already installed bionic eyes were to confirm that it doesn't matter how they trigger the electrodes connected to the optic nerve, the patient still experiences visual stimulation, then that option would be closed for you, would it not?

> >
> > In any case, none of the questions you are asking me have anything to do with whether a god exists or not, so even though I am arguing from an "atheist perspective" it's hard to see that the atheism is relevant to anything you are asking about.
>
> The question was about a brain in a vat, and it was an exploration of if you were to state that it would experience the same, if it received the same inputs. Are you denying that the objects you consciously experience are represented in your neural state or not?

What do you mean by "the objects that I consciously experience are represented in my neural state?" I am arguing from a dualist, atheist perspective. So the objects that I consciously experience are not irrelevant to my neural state, but for all I know they are not completely determined by my neural state either. Let's say your evil scientist matched all the physical inputs to my brain, and managed to do it so well that it perfectly modeled the inputs I would get as I lived a normal life in the real world. Well, that's pretty much a computationally impossible task, so impossible that I think it makes the question pointless, but let's proceed anyway. Even if your evil scientist did that, since I am arguing from a dualist perspective it remains possible that there is some ineffably different quality to experiences produced that way when compared to experiences produced by just living in the world. So it's possible that I'd just have a sense that something was wrong. I'm not sure how that would work, but since my consciousness is not merely physical, it seems perfectly possible that I could tell the difference between real world experiences and brain in the vat experiences.

And I still don't see where atheism or theism come into any of these questions. Maybe you'll get there eventually, but, damn, it would be easier if you'd just make your argument. [For example, if you said, "materialism is self-inconsistent, therefore dualism is true, and dualism implies theism", I'd think you'd left out a lot of important steps in the argument, but at least I could see what you were trying to prove].


Öö Tiib

Aug 11, 2015, 4:09:20 PM8/11/15
to talk-o...@moderators.isc.org
On Thursday, 30 July 2015 10:24:58 UTC+3, someone wrote:
> [This is being posted because my response to post
> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> isn't getting through]
>
> If you haven't been following the conversation, here is a quick synopsis (if you're a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
>
> https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
>
> And the following questions about an atheist perspective.
>
> 1) Could a robot with a NAND gate control unit consciously experience?

Maybe, somehow. I do not know. To me it feels unlikely. NAND gates
feel too limited for that to me. Can you replace an electrolytic
capacitor with NAND transistor gates? No? Why? It is easy to
describe what a capacitor does and how, unlike consciousness. Also,
what does the position that gods do not exist have to do with that?

> If the answer is "yes" then the next question is
>
> 1.y) If the hand raisers performed accurately, such that
> the resulting outputs of their NAND gate arrangement were the same as those the robot control unit was recorded as giving, would the property of the NAND gate arrangement consciously experiencing emerge?

Answer was "maybe, but likely no". Answer to 1.y) is definitely no way.
To get billions of people to do something that stupid will take more
powerful gods organizing it than any described anywhere. Atheist has
position that gods do not exist whatsoever so the mission is impossible
from atheist perspective.

> If the answer to (1.y) is "yes" then the next question is

No. No no.

> If the answer to (1.y) is "no" then how does the account suggest that consciously experiencing makes a difference to the robot's behaviour if the NAND gate arrangement's outputs would be the same whether the property had emerged or not?

I do not even understand what the alleged "outputs" of consciously
experiencing are. The whole thing is dim to me. Does a goldfish have
consciousness? Does a cockroach? Does a spermatozoon? What are the outputs?
Perhaps you could get some weirdos to play at being the control unit
of a spermatozoon, but most likely the "output" you get from them is
a raised middle finger.

> If the answer to (1) is "no" then what is special about the biological chemistry?

There is seemingly nothing special in biochemistry that cannot be
learned. The problem is that very few things can be made of NAND
gates. We do not know how to make consciously experiencing devices,
but it is unlikely that a consciously experiencing device is one
of those few devices that can be made of NAND gates. However, it
all seems apparent regardless of whether any gods exist or not, so
what does it have to do with atheism?


someone

Aug 11, 2015, 6:34:18 PM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 9:09:20 PM UTC+1, Öö Tiib wrote:
> On Thursday, 30 July 2015 10:24:58 UTC+3, someone wrote:
> > [This is being posted because my response to post
> > https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/ZYjKWl_cVQYJ
> > isn't getting through]
> >
> > If you haven't been following the conversation, here is a quick synopsis (if you're a talk.origins atheist, you might want to show that there's no problem for atheism here). There is a scenario outlined in this post (you can ignore the history):
> >
> > https://groups.google.com/d/msg/talk.origins/gudeS48dvuc/bonNIJoZXAkJ
> >
> > And the following questions about an atheist perspective.
> >
> > 1) Could a robot with a NAND gate control unit consciously experience?
>
> Maybe, somehow. I do not know. To me it feels unlikely. NAND gates
> feel too limited for that to me. Can you replace an electrolytic
> capacitor with NAND transistor gates? No? Why? It is easy to
> describe what a capacitor does and how, unlike consciousness. Also,
> what does the position that gods do not exist have to do with that?
>

If you can imagine at least one account where it would and at least one where it wouldn't, could you just pick one of them and answer the questions using that one? And if the same thing happens again at one of the other questions, just use the same technique: consider whether you know at least one account where it would and at least one where it wouldn't, pick one of them, and answer the questions using that one.

What an atheist position has to do with it is that it is a position that can't bring God into it as part of the explanation.

someone

Aug 11, 2015, 6:49:21 PM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 9:09:20 PM UTC+1, Öö Tiib wrote:
> On Thursday, 30 July 2015 10:24:58 UTC+3, someone wrote:

Or we could just carry on from here:

>
> > If the answer to (1) is "no" then what is special about the biological chemistry?
>
> There is seemingly nothing special in biochemistry that cannot be
> learned. The problem is that very few things can be made of NAND
> gates. We do not know how to make consciously experiencing devices,
> but it is unlikely that a consciously experiencing device is one
> of those few devices that can be made of NAND gates. However, it
> all seems apparent regardless of whether any gods exist or not, so
> what does it have to do with atheism?

When you say few devices: NAND gates are functionally complete, so that includes any function, does it not (in the mathematical sense)?
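The functional-completeness claim being argued over here can be checked mechanically: every Boolean function can be built from NAND alone. A minimal sketch in Python (the helper names are illustrative, not from any poster's code):

```python
def nand(a, b):
    """The only primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate below is wired purely from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify against Python's own Boolean operators over all input pairs.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
print("NAND alone reproduces NOT, AND, OR, XOR")
```

Since NOT, AND, and OR together suffice for any truth table, this exhaustive check is the whole functional-completeness argument in miniature.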

RSNorman

Aug 11, 2015, 6:59:17 PM8/11/15
to talk-o...@moderators.isc.org
Simulating neural activity with NAND gates (or, to be more precise,
with computer algorithms) is extremely complex. It is not anything
like reproducing the Boolean algebra in neural computation because
neurons don't work by Boolean algebra. It is possible for them to
compute AND, OR, and NOT and so produce the equivalent of NAND gates
but then they act like simplified cartoon models of neurons.

Neurons can be modeled by mathematical models only in the same way
that any physical process in the universe can be so modeled. The
model in no way resembles the original except that it produces a
result that is close enough for the purpose at hand.
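The point that neurons can compute AND, OR, and NOT while "acting like simplified cartoon models" is usually illustrated with a McCulloch-Pitts threshold unit. A hedged Python sketch, with illustrative weights chosen so a single unit computes NAND:

```python
def threshold_unit(weights, threshold, inputs):
    """Cartoon neuron: fires (1) iff the weighted input sum reaches threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def nand_neuron(a, b):
    # Inhibitory (negative) weights: the unit fires unless both inputs are active.
    return threshold_unit(weights=[-1, -1], threshold=-1, inputs=[a, b])

for a in (0, 1):
    for b in (0, 1):
        assert nand_neuron(a, b) == (0 if (a and b) else 1)
```

As the post says, this is a model in the weak sense: it reproduces the input-output table of a gate, not the biophysics of a real neuron.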




someone

Aug 11, 2015, 7:19:21 PM8/11/15
to talk-o...@moderators.isc.org
On Tuesday, August 11, 2015 at 11:59:17 PM UTC+1, RSNorman wrote:
> On Tue, 11 Aug 2015 15:44:35 -0700 (PDT), someone
If the brain was thought to have the property of consciously experiencing emerge because it was performing some type of computation, then why couldn't the NAND gates also perform the computation, and if they did, why wouldn't they also have the property of consciously experiencing emerge?

RSNorman

Aug 11, 2015, 8:39:18 PM8/11/15
to talk-o...@moderators.isc.org
On Tue, 11 Aug 2015 16:18:15 -0700 (PDT), someone
<glenn....@googlemail.com> wrote:

>On Tuesday, August 11, 2015 at 11:59:17 PM UTC+1, RSNorman wrote:
>> On Tue, 11 Aug 2015 15:44:35 -0700 (PDT), someone
>> wrote:
>>
A computer can compute the movement of a pendulum swinging but it
wouldn't actually swing. It may well be that a computational system
that computes what a brain does will perfectly simulate every aspect
of consciousness that we can measure without being conscious itself.

On the other hand it may well be that a computational system that
exists in the real world and interacts so thoroughly with the real
world as to mimic human experience will also show true consciousness.

There is no reason why it must be only one way or the other.

someone

Aug 12, 2015, 3:44:18 AM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 1:39:18 AM UTC+1, RSNorman wrote:
> On Tue, 11 Aug 2015 16:18:15 -0700 (PDT), someone
Well you've joined in a conversation where it is being claimed that a NAND gate system couldn't consciously experience, so perhaps stick with that, or pick one account, and answer the questions yourself, so that the discussion doesn't get muddied as a few different accounts are brought into it at the same time.

The pendulum example seems to confuse the issue, as a pendulum isn't suggested to have some property because it performs a computation. I'm not suggesting that a NAND gate arrangement performing a computation is the same as a neural state performing a computation, but if the brain was thought to have the property of consciously experiencing emerge because it was performing some type of computation, then why couldn't the NAND gates also perform the computation, and if they did, why wouldn't they also have the property of consciously experiencing emerge?

You didn't answer, and you didn't mention whether the "if" was incorrect (i.e. the brain isn't thought to have the property of consciously experiencing emerge because it is performing some type of computation).

Öö Tiib

Aug 12, 2015, 5:09:18 AM8/12/15
to talk-o...@moderators.isc.org
Why? You can make first-order boolean arithmetic with NAND gates. Formal
logic being something important was indeed a naive position of some in the
sixties, and in 1972 a formal logic programming language, Prolog, was even made.
Prolog is still widely used in research and education since it is an interesting
toy ... but it has had next to no impact on the software industry. It has a huge
performance penalty and other difficulties when we try to make useful
applications using pure formal logic. So even if consciousness can be made as
computer software, the NAND gates and especially your hand-wavers and
hand-raisers are clearly a red herring.
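The "boolean arithmetic with NAND gates" point can be made concrete with a one-bit full adder wired entirely from NAND-derived gates. A minimal Python sketch (gate and function names are illustrative):

```python
def nand(a, b):
    return 0 if (a and b) else 1

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum bit, carry-out), all gates NAND-based."""
    partial = xor_(a, b)
    sum_bit = xor_(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return sum_bit, carry_out

# Check against ordinary integer addition for every input combination.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert 2 * cout + s == a + b + c
```

Chaining such adders bit by bit gives arbitrary-width binary arithmetic, which is the sense in which NAND gates suffice for it; whether that bears on consciousness is exactly what the thread disputes.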

It has nothing to do with atheism ... a position that gods do not exist outside
the imagination of some people. A conscious entity may be possible to make
using only components known to current human technology, or there may be
need for something not yet known to current human technology. Gods may
exist or not in both cases. One of the points in many religions is that
conscious entities were made by a Conscious Entity, so ... I do not understand
the whole controversy here.

RSNorman

Aug 12, 2015, 8:14:19 AM8/12/15
to talk-o...@moderators.isc.org
I have repeatedly told you that your NAND gate business is utter
nonsense. The brain-body is not a simple computational device. It
exists in the real world performing actions and detecting the results
of those actions, including detecting internal details of what was
involved in performing the actions. It detects other entities in the
real world and communicates with those entities.

A computer, which can be built of NAND gates, can perform any
computation the brain is capable of making. But unless that computer
is capable of changing its action as a result of its past history (of
learning) and is installed in a robot with the incredible array of
sensors that duplicate both the somatic and autonomic nervous systems,
plus the hormonal and metabolic changes that occur within our own
bodies as a result of acting and interacting in the world, then it
will not be conscious. If the robot does all these things then I have
no reason to believe that it could not also have consciousness.


God has no role in this story. That does not mean that there is no
god, only that the presence or absence of a god is irrelevant. Hence
my argument is in no way to be considered atheistic.

someone

Aug 12, 2015, 8:29:16 AM8/12/15
to talk-o...@moderators.isc.org
I'm not sure there is any controversy. Just some questions. I did mention in the post:
https://groups.google.com/d/msg/talk.origins/bycATB6rEho/12BliXlAiIEJ

that I'd like to deal with one account at a time, and so don't want the conversation to get confusing, where you bring in other accounts which you also believe might be true. I'm not suggesting we shouldn't discuss them, rather we just discuss them one at a time. So here I am investigating the account that you felt was more likely, and that was that no NAND gate configuration could consciously experience.

So with the NAND gates, they can perform any computation, can they not? Any computation in any simulation done by modern computers, for example. The reason I ask is that if you are (and I'm not suggesting that you are) going to suggest that the property of consciously experiencing emerges from the brain activity because it is performing a certain computation, then could not NAND gates also perform that computation, and if they did, why would they not also consciously experience?

someone

Aug 12, 2015, 8:39:16 AM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 1:14:19 PM UTC+1, RSNorman wrote:
> On Wed, 12 Aug 2015 00:39:31 -0700 (PDT), someone
Ok, you seem to have jumped in on a conversation where the answer to question (1) was "no", and that isn't what you are suggesting; you seem to be suggesting that if the NAND gates were in a suitable robot then the answer would be "yes". For reasons known to you, you're avoiding answering the questions yourself. But then just look at the majority of atheists who have replied here: there does seem to be a pattern of bothering to make an effort (in replying) while avoiding answering the questions. Of the excuses for not answering that you've read so far, was there any you thought convincing?

Bill Rogers

Aug 12, 2015, 8:49:17 AM8/12/15
to talk-o...@moderators.isc.org
So, no explanation as to how atheism or theism is relevant to all of this?

RSNorman

Aug 12, 2015, 8:59:16 AM8/12/15
to talk-o...@moderators.isc.org
You seem to have forgotten the many posts where I explained just why
your whole NAND gate business was utter nonsense. Many people here,
including me, have tried to engage you in a real discussion about
consciousness without adhering to your finely subdivided list of
exceptionally specific but meaningless questions.

And you persist on forcing atheism into the discussion when many
people, including me, have patiently (or not) explained just why
belief in one or more gods is completely irrelevant.

someone

Aug 12, 2015, 9:24:18 AM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 1:59:16 PM UTC+1, RSNorman wrote:
> On Wed, 12 Aug 2015 05:35:37 -0700 (PDT), someone
Presumably your claim is that there is no point in you answering the questions because it won't lead to any significant realisation for either of us. Fine, that's your theory, but why not answer the questions and see whether your theory is correct?

someone

Aug 12, 2015, 9:44:18 AM8/12/15
to talk-o...@moderators.isc.org
Sorry, had I misunderstood? Had you been suggesting you couldn't understand the questions because they were nonsensical?

RSNorman

Aug 12, 2015, 9:49:16 AM8/12/15
to talk-o...@moderators.isc.org
I won't answer the questions because they are utter nonsense.

Furthermore, answering them just enables you to pursue your nonsense
to even more ridiculous extremes like groups of people raising their
hands and groups of people guessing whether to raise their hands and
...

Others here have been enabling you in this quest for far too long but,
if you notice, everyone eventually drops out to be replaced by
newcomers who give it a try for a while until they realize what is
going on.

RSNorman

Aug 12, 2015, 10:24:16 AM8/12/15
to talk-o...@moderators.isc.org
It is not at all that I don't understand the questions. I understand
them perfectly well. They just don't have any significance to
understanding anything about consciousness.

someone

Aug 12, 2015, 10:24:16 AM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 2:49:16 PM UTC+1, RSNorman wrote:
> On Wed, 12 Aug 2015 06:22:50 -0700 (PDT), someone
By nonsense do you mean that you are unable to understand them and hold the belief that nobody else can understand them either because they are nonsensical? If so, taking question 1 as an example, a few repliers seem to have understood it, so what leads you to the conclusion that nobody can understand it?

Mark Isaak

Aug 12, 2015, 11:09:16 AM8/12/15
to talk-o...@moderators.isc.org
On 8/12/15 5:35 AM, someone wrote:
> On Wednesday, August 12, 2015 at 1:14:19 PM UTC+1, RSNorman wrote:
>> On Wed, 12 Aug 2015 00:39:31 -0700 (PDT), someone
>> wrote:
>>> [...]
I have noticed a consistent tendency for you to try to force other
people's answers away from what they said and onto a track of
your own making. Personally, I find your attempts at manipulation
obnoxious in the extreme. Why don't you just say what you think and
accept that others will disagree with you?

--
Mark Isaak eciton (at) curioustaxonomy (dot) net
"Keep the company of those who seek the truth; run from those who have
found it." - Vaclav Havel

Mark Isaak

Aug 12, 2015, 11:09:16 AM8/12/15
to talk-o...@moderators.isc.org
On 8/12/15 6:22 AM, someone wrote:
> [...] but why not answer the questions, and see whether your
> theory is correct?

Why don't *you* answer those questions?

someone

Aug 12, 2015, 11:34:17 AM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 3:24:16 PM UTC+1, RSNorman wrote:
> On Wed, 12 Aug 2015 06:39:23 -0700 (PDT), someone
So it was as I presumed (that you think that there is no point in you answering the questions because it won't lead to any significant realisation for either of us). You didn't answer why you aren't willing to test your theory.

someone

Aug 12, 2015, 11:49:16 AM8/12/15
to talk-o...@moderators.isc.org
I don't mind others disagreeing with me. I'm just asking some questions on this thread; if they don't want to answer them, they needn't bother replying. Yes, I do try to keep the conversation on the point of the questions, but that is because I wouldn't want some intellectually cowardly atheist to pretend, by replying while avoiding answering the questions, that it wasn't that they had been forced to avoid the questions because of the absurdity of their position, which they couldn't face up to even to themselves, but that they had bravely stepped up and it was all just a matter of opinion, and they simply had a different one.

What difference of opinion was it you thought you had? As I remember it, it was just a point where you hadn't realised that what you were suggesting implied that all that was required for a conscious experience was for enough people to randomly put their hands up. It seemed like you knew how absurd that is, but didn't realise it was what your suggestion implied.
https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/lGRFQb0cZrsJ

Bill Rogers

Aug 12, 2015, 12:04:16 PM8/12/15
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 11:49:16 AM UTC-4, someone wrote:

> I don't mind others disagreeing with me. I'm just asking some questions on this thread; if they don't want to answer them, they needn't bother replying. Yes, I do try to keep the conversation on the point of the questions, but that is because I wouldn't want some intellectually cowardly atheist to pretend, by replying while avoiding answering the questions, that it wasn't that they had been forced to avoid the questions because of the absurdity of their position, which they couldn't face up to even to themselves, but that they had bravely stepped up and it was all just a matter of opinion, and they simply had a different one.
>
> What difference of opinion was it you thought you had? As I remember it, it was just a point where you hadn't realised that what you were suggesting implied that all that was required for a conscious experience was for enough people to randomly put their hands up. It seemed like you knew how absurd that is, but didn't realise it was what your suggestion implied.
> https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/lGRFQb0cZrsJ

I (and others) have explained what's wrong with your random hand raising scenario (I'll paste in my explanation again below). Every time someone makes an argument about what's wrong with your questions you just claim they are "obfuscating" or running away. There's some running away and an unwillingness to think through things carefully going on here, but it's not on the part of the folks arguing with you.

Here's a detailed critique of your questions and the implied argument. You've never dealt with it except to dismiss it as obfuscation.

You asked if a NAND gate arrangement attached to a robot could be conscious. I said (and I think Mark agrees) that I thought it might well be able to, although it would require a very large and complex arrangement.

Next you asked about an analogous arrangement where the NAND gates were replaced by people who raised their hands in accordance with instructions, which would presumably instruct them to act like NAND gates.

This was a non-essential distraction for several reasons. First, replacing the gates with people slows everything down. Since most of us think that for consciousness to emerge in the first place there has to be recurring interaction with the environment, you now have the problem that the robot thinks glacially slowly and cannot react to events fast enough. So then you have to spend multiple posts trying to clarify this and presuppose simulated environmental inputs that happen appropriately slowly. So that's just a waste of time.

The other problem is that when you switch back and forth between NAND gates and hand raisers, anybody who is familiar with philosophy of mind thinks you are targeting functionalism. Functionalism is in contrast to identity theory. Identity theory claims that there is a one-to-one relation between physical states and mental states. Under identity theory you could not have the same mental state with both a particular NAND gate arrangement and a hand raiser arrangement. Functionalism, on the other hand, claims that you could instantiate functionally equivalent mental states in very different physical states (e.g. human brain versus computer/robot). So when you bring in two different physical systems, naturally folks who are familiar with this area of philosophy think you are trying to argue something about functionalism versus identity theory. But as far as I can tell, that's not really what you care about. They are both physicalist views, anyway.

Then your next move is to ask whether consciousness could arise if some of the hand raisers were only guessing about what they should do. So let's drop the handraisers, since it just adds elements not relevant to your main argument. What you ask is equivalent to asking whether if you had a conscious robot/NAND gate arrangement you could still get consciousness if some or all of the NAND gates were replaced with random output gates.
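The substitution described above, random output gates standing in for NAND gates, can be sketched in a toy simulation. Everything here is an illustrative assumption rather than anything from the thread: the circuit size, the wiring, and the function names are arbitrary, and the point is only that coin-flip gates almost never reproduce the NAND circuit's outputs:

```python
import random

def run_circuit(wiring, inputs, gate):
    """Evaluate a feed-forward circuit: `wiring` lists, for each gate,
    the two signal indices it reads; signals 0..len(inputs)-1 are the
    circuit inputs, and each gate appends its output as a new signal."""
    signals = list(inputs)
    for a, b in wiring:
        signals.append(gate(signals[a], signals[b]))
    return signals[len(inputs):]          # all gate outputs

nand = lambda a, b: 0 if (a and b) else 1
coin = lambda a, b: random.randint(0, 1)  # a "random output gate": ignores its inputs

wiring = [(0, 1), (1, 2), (3, 4), (2, 5)]  # 4 gates fed by 3 inputs (toy example)
inputs = (1, 0, 1)

true_out = run_circuit(wiring, inputs, nand)
matches = sum(run_circuit(wiring, inputs, coin) == true_out
              for _ in range(10_000))
print(matches / 10_000)   # near (1/2)**4 = 0.0625 for this 4-gate toy
```

Even at four gates and a single cycle the agreement rate is one in sixteen; the match probability halves with every additional gate-output per cycle.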

Early on I answered that "Yes, but...." and then explained why the question did not really make sense. You seized on the "yes" and totally ignored the "but." So I'll simply revise my answer and say "no." [Please don't stop reading at this point and just run off and put "no" into your flow chart.] It is not possible for an arrangement of random output gates to produce consciousness. Your claim that believing that some arrangement of NAND gates could be conscious implies that one also believes that an arrangement of random output gates could be conscious is nonsensical, and Mark appropriately scoffed at it.

Here is my summary of your argument as to why a conscious arrangement of NAND gates implies that an arrangement of random output gates could be conscious. Here goes.

Someone: You say the NAND gate arrangement could be conscious, right?

Physicalist: Yes.

Someone: Well, if we took out a bunch of the NAND gates and replaced them with random output gates, could the arrangement be conscious?

Physicalist: Of course not, there's no chance that the random output gates would consistently just happen to give the same outputs they would have given if they were proper NAND gates.

Someone: Well, of course the odds are low, but if they *did* just happen to give those outputs, why wouldn't the arrangement be conscious? After all, all the inputs and outputs would then be identical to those under the NAND gate arrangement, which you already said was conscious.

Physicalist: Sure, but you're ignoring the overwhelming likelihood that the random output gates *won't* give the right outputs.

Someone: But we're talking about the case in which they *do* give the right outputs. No matter how unlikely that case is, it's the only one we're considering at the moment. Bringing up other cases is just an attempt at obfuscation.

Physicalist: OK, so, if in spite of astronomical odds, in this one highly unlikely case, the arrangement including random output gates would be conscious.

Someone: Well, that's it. What that means is that consciousness can emerge even with random outputs. So there's no causal connection between the outputs of the gates and the emergence of consciousness. And if there's no causal connection between the physical and consciousness, then physicalism must be false.

Correct me if that is not a correct summary of the argument you are trying to make.

The problems are (1) magnitudes matter - something so improbable as an array of random output gates consistently producing the same outputs they would produce if they were NAND gates can be considered impossible for all practical purposes, and (2) it is a perhaps dishonest, perhaps self-deluding sleight of hand to consider the random output gates to have random output only during part of the argument.

When the physicalist complains about the enormously more likely case in which the random output gates give the wrong output, you say that you are not considering that case. OK. But when you draw your conclusion that the behaviour of the gates was irrelevant to consciousness, you go back to considering them as truly random output gates, and suddenly you want all those other cases to influence your conclusion. You can't have it both ways. If the gates are truly random (if the hand raisers are really guessing) then they won't guess correctly enough to produce consciousness. If they do give the correct outputs consistently, then there's no meaningful sense in which they were "just guessing."

Finally, even if you say they were really guessing, you cannot claim there is no causal link with consciousness, because even though they were guessing, consciousness only emerged because they guessed just exactly like NAND gates.
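The magnitude Bill appeals to in point (1) can be made concrete. Assuming each random output gate is an independent fair coin, the chance that N gates reproduce the correct NAND outputs on every one of T cycles is (1/2)^(N*T); the specific figures below are illustrative assumptions, not numbers from the thread:

```python
import math

def log10_match_probability(gates: int, cycles: int) -> float:
    """log10 of the chance that `gates` independent fair-coin gates
    reproduce the correct NAND outputs on every one of `cycles` cycles."""
    return -(gates * cycles) * math.log10(2)

# One hand-raiser guessing once per cycle for 10 cycles:
print(log10_match_probability(1, 10))         # about -3.01, i.e. roughly 1 in 1024

# Bill's "billions of gates, thousands of cycles":
print(log10_match_probability(10**9, 10**3))  # about -3.0e11
```

The second case is a probability with hundreds of billions of leading zeros, which is the sense in which Bill calls it impossible for all practical purposes; whether "improbable" licenses "impossible" is exactly what the two posters dispute.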

I suggest again that you stop laying that flattering unction to your soul that people reject your argument because they are intellectual cowards. Your argument is not expressed clearly, it's cluttered with elements irrelevant to your main point, and even when it's made clear, it does not hold up. There are interesting arguments for dualism and theism, but you are not making them.

Öö Tiib
Aug 12, 2015, 12:54:16 PM
to talk-o...@moderators.isc.org
I do not imagine that there is some sort of correlation between people
being atheists and whether they think consciousness can be made using
digital equipment or not.

> that I'd like to deal with one account at a time, and so don't want the conversation to get confusing, where you bring in other accounts which you also believe might be true. I'm not suggesting we shouldn't discuss them, rather we just discuss them one at a time. So here I am investigating the account that you felt was more likely, and that was that no NAND gate configuration could consciously experience.

Yes, that is more likely.

> So with the NAND gates, they can perform any computation can they not?

Yes, within limits: Boolean algebra with discrete inputs and discrete outputs.
A lot of people live their lives without knowing anything of it.
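Tiib's "yes, within limits" is the standard result that NAND is functionally complete: any Boolean function can be built from NAND alone. A minimal sketch in Python (the helper names and the one-bit adder are my illustrative choices, not anything from the thread):

```python
# NAND is functionally complete: every Boolean function can be
# built from NAND alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, cin):
    """One-bit full adder; every operation here is ultimately a NAND."""
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))

# Exhaustive check against ordinary arithmetic.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
```

Chaining such adders gives arbitrary binary arithmetic, which is the sense in which a NAND arrangement can, in principle, perform any computation a digital computer can.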

> Any computation in any simulation done by modern computers for example. The reason I ask is that if you are (and I'm not suggesting that you are) going to suggest that the reason the property of consciously experiencing emerges from the brain activity is because it is performing a certain computation, then could not NAND gates also perform that computation, and if they did why would they not also consciously experience?

It feels very unlikely to me (though I do not know it for certain) that
the human brain is performing any computation that is at all reasonable
to do with NAND gates.

I thought it was a well-known fact that chemistry and nanotechnology
(and the human brain involves both) rely on an understanding of quantum
systems, and that such systems are impossible to simulate in an efficient
manner with classical digital equipment (IOW your NAND gates).

Most electronic devices contain components that you cannot make with
NAND gates (like capacitors, resistors and inductors). Conscious
experience may involve components that we do not even use in electronics.

Formal logic has been shown to have rather serious issues when processing
the real information around us (which tends to be uncertain, subjective,
probabilistic, controversial, defective and even maliciously
constructed misinformation). So NAND gates seem likely to be insufficient
for dealing with such.

someone
Aug 12, 2015, 12:59:16 PM
to talk-o...@moderators.isc.org
On Wednesday, August 12, 2015 at 5:04:16 PM UTC+1, Bill Rogers wrote:
> On Wednesday, August 12, 2015 at 11:49:16 AM UTC-4, someone wrote:
>
> > I don't mind others disagreeing with me. I'm just asking some questions on this thread, if they don't want to answer them, then don't bother replying. Yes I do try to keep the conversation on the point of the questions, but that is because I wouldn't want some intellectually cowardly atheist to pretend that, by replying while avoiding answering the questions, that it wasn't that they had been forced to avoid the questions because of the absurdity of their position which they couldn't face up to even to themselves, but that they had bravely stepped up and it was all just a matter of opinion, and that they just had a different one.
> >
> > What difference of opinion was it you thought you had, as I remember it, it was just a point of where you hadn't realised that what you were suggesting implied that all that was required for a conscious experience was for enough people to randomly put their hands up. It seemed like you knew how absurd it is, but didn't realise that was what you were suggesting implied.
> > https://groups.google.com/d/msg/talk.origins/VJMS6crS9AU/lGRFQb0cZrsJ
>
> I (and others) have explained what's wrong with your random hand raising scenario (I'll paste in my explanation again below). Every time someone makes an argument about what's wrong with your questions you just claim they are "obfuscating" or running away. There's some running away and an unwillingness to think through things carefully going on here, but it's not on the part of the folks arguing with you.
>
> Here's a detailed critique of your questions and the implied argument. You've never dealt with it except to dismiss it as obfuscation.
>

I looked back at some posts on similar stuff and noticed that I did explain why I thought it was obfuscation (e.g. https://groups.google.com/d/msg/talk.origins/NRQ8VbHAHgE/gXCn8eh920cJ)

Anyway, I'll go through this now.


> You asked if a NAND gate arrangement attached to a robot could be conscious. I said (and I think Mark agrees) that I thought it might well be able to, although it would require a very large and complex arrangement.
>

Mark stated "yes" to (1), I think.

> Next you asked about an analogous arrangement where the NAND gates were replaced by people who raised their hands in accordance with instructions, which would presumably instruct them to act like NAND gates.
>

Mark stated "yes" to (1.y) I think.

The following two paragraphs are non-essential distractions, because they have nothing to do with answering the question. If you'd answer the question differently depending on whether it was about hand-raisers or about some other form of NAND gate, and the question is about hand-raisers, then guess which way you'd be expected to answer it. And what difference does it make what I'm targeting? Surely the answer would be the same anyway.

> This was a non-essential distraction for several reasons. First, replacing the gates with people slows everything down. Since most of us think that for consciousness to emerge in the first place, there has to be recurring interaction with the environment, now you have the problem that the robot thinks glacially slowly and cannot react to events fast enough. So then you have to spend multiple posts trying to clarify this and presuppose simulated environmental inputs that happen appropriately slowly. So that's just a waste of time.
>
> The other problem is that when you switch back and forth between NAND gates and hand raisers, anybody who is familiar with philosophy of mind thinks you are targeting functionalism. Functionalism is in contrast to identity theory. Identity theory claims that there is a one-to-one relation between physical states and mental states. Under identity theory you could not have the same mental state with both a particular NAND gate arrangement and a hand raiser arrangement. Functionalism, on the other hand, claims that you could instantiate functionally equivalent mental states in very different physical states (e.g. human brain versus computer/robot). So when you bring in two different physical systems, naturally folks who are familiar with this area of philosophy think you are trying to argue something about functionalism versus identity theory. But as far as I can tell, that's not really what you care about. They are both physicalist views, anyway.
>

With that non-essential distraction done with, you go on to try to change what you are being asked about (as if that were necessary just to answer the questions). Though, as I think I pointed out, I'd keep the hand-raisers, as I liked the absurdity of it.

> Then your next move is to ask whether consciousness could arise if some of the hand raisers were only guessing about what they should do. So let's drop the handraisers, since it just adds elements not relevant to your main argument. What you ask is equivalent to asking whether if you had a conscious robot/NAND gate arrangement you could still get consciousness if some or all of the NAND gates were replaced with random output gates.
>
> Early on I answered that "Yes, but...." and then explained why the question did not really make sense. You seized on the "yes" and totally ignored the "but."

You stated (https://groups.google.com/d/msg/talk.origins/bycATB6rEho/6dba0Qy6_fkJ):

"If you hypothesize that somehow, all the random gates just happen to always give the correct output that a proper NAND gate would have given, then sure, the robot would be conscious, but that scenario is ridiculously improbable, billions of gates, thousands of cycles, each gate just happening to give the same output it would as if it were a proper NAND gate."

That is a "no" to (1.y.y), which is the same as Mark stated. I didn't ignore the "but"; it is just that the probability isn't relevant.


>So I'll simply revise my answer and say "no." [Please don't stop reading at this point and just run off and put "no" into your flow chart.] It is not possible for an arrangement of random output gates to produce consciousness.

You were already stating "no", and you seem to maintain that below.

>Your claim that believing that some arrangement of NAND gates could be conscious implies that one also believes that an arrangement of random output gates could be conscious is nonsensical, and Mark appropriately scoffed at it.
>
> Here is my summary of your argument as to why a conscious arrangement of NAND gates implies that an arrangement of random output gates could be conscious. Here goes.
>
> Someone: You say the NAND gate arrangement could be conscious, right?
>
> Physicalist: Yes.
>
> Someone: Well, if we took out a bunch of the NAND gates and replaced them with random output gates, could the arrangement be conscious?
>
> Physicalist: Of course not, there's no chance that the random output gates would consistently just happen to give the same outputs they would have given if they were proper NAND gates.
>
> Someone: Well, of course the odds are low, but if they *did* just happen to give those outputs, why wouldn't the arrangement be conscious? After all, all the inputs and outputs would then be identical to those under the NAND gate arrangement, which you already said was conscious.
>
> Physicalist: Sure, but you're ignoring the overwhelming likelihood that the random output gates *won't* give the right outputs.
>
> Someone: But we're talking about the case in which they *do* give the right outputs. No matter how unlikely that case is, it's the only one we're considering at the moment. Bringing up other cases is just an attempt at obfuscation.
>
> Physicalist: OK, so, if in spite of astronomical odds, in this one highly unlikely case, the arrangement including random output gates would be conscious.
>
> Someone: Well, that's it. What that means is that consciousness can emerge even with random outputs. So there's no causal connection between the outputs of the gates and the emergence of consciousness. And if there's no causal connection between the physical and consciousness, then physicalism must be false.
>
> Correct me if that is not a correct summary of the argument you are trying to make.
>
> The problems are (1) magnitudes matter - something so improbable as an array of random output gates consistently producing the same outputs they would produce if they were NAND gates can be considered impossible for all practical purposes (2) it is a, perhaps dishonest, perhaps self-deluding, sleight of hand to consider the random output gates to have random output only during part of the argument. When the physicalist complains about the enormously more likely case in which the random output gates give the wrong output, you say that you are not considering that case. OK. But when you draw your conclusion that the behaviour of the gates was irrelevant to consciousness you go back to considering them as truly random output gates and suddenly you want all those other cases to influence your conclusion. You can't have it both ways. If the gates are truly random (if the hand raisers are really guessing) then they won't guess correctly enough to produce consciousness. If they do give the correct outputs consistently, then there's no meaningful sense in which they were "just guessing." Finally, even if you say they were really guessing, you cannot claim there is no causal link with consciousness, because even though they were guessing, consciousness only emerged because they guessed just exactly like NAND gates.
>
> I suggest again that you stop laying that flattering unction to your soul that people reject your argument because they are intellectual cowards. Your argument is not expressed clearly, it's cluttered with elements irrelevant to your main point, and even when it's made clear, it does not hold up. There are interesting arguments for dualism and theism, but you are not making them.

Yeah, you state:
"Here is my summary of your argument as to why a conscious arrangement of NAND gates implies that an arrangement of random output gates could be conscious. Here goes."

A big problem for you here is that I don't make any argument that a conscious arrangement of NAND gates implies that an arrangement of random output gates could be conscious. Question (1.y.y) could be considered to be about whether an arrangement of random output gates could consciously experience, depending on the results, but I don't give an argument for why you should answer one way rather than another; I just point out the problem whichever way you answer.

I don't state that physicalism is false on the basis of a belief that the property of consciousness could emerge from a bunch of people randomly raising their hands. I was just pointing out that that is your and Mark's position. Mark hadn't seemed to realise it. You seem not to understand that the probability doesn't matter. It wasn't an issue of whether, if there was a group of people randomly raising their hands, the property would emerge. It was just for you to explain what your account suggests *if* it happened. And while the plausibility of reality being such that whether the property emerged or not was a matter of whether they raised the correct hands at the right time is worth contemplating, it wasn't the only point. Another is what difference the property, if it did emerge, would be suggested to make to the random hand-raising. Wouldn't it be the result of the hand-raising rather than an influence on any of it?

Of course, if they guessed correctly then there would be a meaningful sense in which they just guessed, in the sense that it actually happened (had it happened). Also, there was no need to assume that they all guessed; I had only mentioned "some" in the question. If the guesses were truly random, there would be a chance they'd get it right first time.





