
Consciously experiencing (contd 4)

118 views

someone

Nov 25, 2015, 8:33:52 AM
to talk-o...@moderators.isc.org
Posts are not getting through on talk.origins again, so this is a continuation of the post: https://groups.google.com/d/msg/talk.origins/G-A62IC9Ppk/Xcmo004KBwAJ

On Wednesday, November 25, 2015 at 12:03:51 PM UTC, Bill Rogers wrote:
> On Wednesday, November 25, 2015 at 5:03:51 AM UTC-5, someone wrote:
> > On Wednesday, November 25, 2015 at 1:13:52 AM UTC, Bill Rogers wrote:
> > > On Tuesday, November 24, 2015 at 6:48:51 PM UTC-5, someone wrote:
> > > > On Tuesday, November 24, 2015 at 11:13:53 PM UTC, Bill Rogers wrote:
> > > > > On Tuesday, November 24, 2015 at 5:23:51 PM UTC-5, someone wrote:
> > > > > > On Tuesday, November 24, 2015 at 10:13:52 PM UTC, Bill Rogers wrote:
> > > > > > > On Tuesday, November 24, 2015 at 4:08:55 PM UTC-5, someone wrote:
> > > > > > > > On Tuesday, November 24, 2015 at 8:43:52 PM UTC, Bill Rogers wrote:
> > > > > > > > > On Tuesday, November 24, 2015 at 3:23:53 PM UTC-5, someone wrote:
> > > > > > > > > > On Tuesday, November 24, 2015 at 8:08:53 PM UTC, Bill Rogers wrote:
> > > > > > > > > > > On Tuesday, November 24, 2015 at 12:18:54 PM UTC-5, someone wrote:
> > > > > > > > > > >
> > > > > > > > > > > > > > [snip]
> > > > > > > > > > > > >
> > > > > > > > > > > > > As I said, any feature that philosophical zombies lack is epiphenomenal, by definition. It has no causal effect on their behavior and it is experimentally undetectable. Therefore I conclude that the feature you are referring to, your "Obvious clue to reality....", has no causal effect and is undetectable. It's perhaps not simply indoctrination that makes me fail to see undetectable clues to reality.
> > > > > > > > > > > >
> > > > > > > > > > > > I didn't ask you whether the existence of philosophical zombies would imply that the feature was epiphenomenal or not, I asked you whether you understood the feature philosophical zombies are imagined to lack. So are you suggesting that you cannot understand what feature they are imagined to lack?
> > > > > > > > > > >
> > > > > > > > > > > What I am saying is that any feature that philosophical zombies lack is, by definition, epiphenomenal.
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > A dualist can believe that philosophical zombies wouldn't exist, because without the feature that they are imagined to lack, the behaviour would be different. So a dualist could understand what feature was being referred to, without believing it to be an epiphenomenal feature. As I mentioned I'll be referring to the feature as "The Obvious Clue To Reality That Atheists Are Sometimes Too Indoctrinated To Be Able To Face". I assume you are an atheist, but can you give a "yes" or "no" answer as to whether you understand the feature that a philosophical zombie is imagined to lack?
> > > > > > > > > >
> > > > > > > > > > [snip]
> > > > > > > > >
> > > > > > > > > I've told you all I can tell you about features that a philosophical zombie lacks. Any feature that a philosophical zombie lacks is, by definition, epiphenomenal. If the philosophical zombie does not actually lack the feature you are trying to describe, why are you referring to zombies in the first place? The standard definition of a philosophical zombie is that it lacks conscious experience. But I cannot say that I understand what feature a philosophical zombie is imagined to lack (i.e. consciousness) unless I know what *you* mean by consciousness.
> > > > > > > >
> > > > > > > > I don't think that some people think philosophical zombies lack one feature while others think they lack some other feature. I think everyone who understands the feature that a philosophical zombie is thought to lack understands the same feature. I'm not interested in whether, if philosophical zombies were possible, it would imply that the feature was epiphenomenal. You presumably understand that a philosophical zombie is imagined to behave the same as a human, so you presumably understand that it isn't any behavioural feature that it is imagined to lack. You must know whether you think you understand it or not; whether, if a person asked you whether you understand what a philosophical zombie is, you'd be able to reply "yes" or "no". You presumably don't feel that you'd need to run and ask me. So again I'll ask you: do you understand what feature a philosophical zombie is imagined to lack?
> > > > > > > >
> > > > > > > > [snip]
> > > > > > >
> > > > > > > I don't know why you keep ignoring my answer. I don't know what feature *you* imagine a philosophical zombie to lack, because you have not said.
> > > > > >
> > > > > > I'm not ignoring your response, but you keep avoiding answering the question. I imagine it to lack the same feature that the people in the articles I linked imagine it to lack. The philosophical zombie isn't something I've made up. I supplied some articles describing what a philosophical zombie is, but you snipped them; for your convenience, I've put them back in:
> > > > > >
> > > > > > http://plato.stanford.edu/entries/zombies/
> > > > > > https://en.wikipedia.org/wiki/Philosophical_zombie
> > > > > >
> > > > > > Do you think that you understand what feature the people in the articles imagine philosophical zombies to be lacking? With regard to the wiki article, I'm referring to the neurological zombie.
> > > > > >
> > > > > > [snip]
> > > > >
> > > > > I keep answering your question. Maybe it's hard for you to understand answers longer than one syllable. The standard definition of a zombie (as given in your links, and as I have said again and again) is that a zombie lacks conscious experience. Yes, I understand that they say that.
> > > >
> > > > But you have repeatedly stated that you understand consciousness as a behaviour, and clearly they aren't suggesting that there is any behavioural difference between a philosophical zombie and a human. They mention that the zombie "lacks conscious experience, qualia, or sentience" (https://en.wikipedia.org/wiki/Philosophical_zombie ), or that "by definition there is 'nothing it is like' to be a zombie" (http://plato.stanford.edu/entries/zombies/). But without any help from me, do you think you are capable of understanding the non-behavioural feature that the authors are claiming a philosophical zombie lacks?
> > > >
> > > > [snip]
> > >
> > > No.
> >
> > Though I suspect you do understand but are too much of an intellectual coward to admit it, for the purpose of discussion I'll take your claim at face value and try to help.
>
> As I explained earlier, the part that I don't understand is what *you* mean by consciousness. Telling me that consciousness is that feature that zombies are imagined to lack is circular and useless.

I had tried to explain in the original thread what I meant by consciousness, but you didn't understand it. Maybe I was explaining it badly, so I gave you links to two articles about philosophical zombies and told you it was the non-behavioural feature the authors were suggesting a philosophical zombie lacks. But you have said that you couldn't understand what non-behavioural feature they were referring to either; and presumably you didn't think it was a behavioural feature they were suggesting a philosophical zombie lacked, because both articles state that the philosophical zombie behaves the same as a human. So in order to help you, I'm trying other thought experiments.

[snip]

> >
> > I don't know whether you are familiar with Mary and the black and white room thought experiment, but if not here is a link that contains a rough synopsis of it in section 2 http://plato.stanford.edu/entries/qualia-knowledge/ and a couple of other similar thought experiments in section 1.
> >
> > So let's imagine for the sake of discussion that the Physics Assumptions were correct, for your convenience I'll restate them here:
> > (a) the minimal set of laws of physics required to describe the behaviour of things that aren't consciously experiencing are sufficient to describe the behaviour of things that are consciously experiencing
> > (b) the fundamental forces behind the laws in (a) are the same for things that consciously experience and things that don't.
> >
> > I realise that you will interpret the phrase "consciously experiencing" differently from how I intend it to be understood, and that you'll be interpreting it as either a behaviour, or referring to some feature of reality that you don't recognise. But that's ok.
> >
> > So imagine Mary is in her black and white room, and has all the observable information gleaned from all manner of scanners and probes about subject X that was in a field and stated "The sky is blue". And that she can explain "just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence 'The sky is blue'".
> >
> > Can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?
>
> No. Mary learns nothing. The reason you think she learns something is that you've not thought through in detail what it means for her to know absolutely everything that is going on in subject X. But I'm not interested in wasting time on Mary the color scientist when you (1) won't cough up your own definition of what you mean by consciousness (2) ignore the "atheist evolutionary account" that I've already given you.

Regarding (1), I've told you I mean the same as the authors of the articles. You've claimed you can't understand what non-behavioural feature they are referring to. Regarding (2), this thread isn't about any argument against evolutionary accounts; it is about the feature that you claim you can't understand.

So in order to help you understand the articles, I have brought in the Mary thought experiment. I didn't suggest she knows absolutely everything that is going on in subject X; I just mentioned that she knows "all the observable information gleaned from scanners and probes...". The summary in the article could have been interpreted that way, even though it did go on to clarify that it could be argued that she didn't know all the physical facts. So, to be clear, just assume her knowledge is restricted to all the *behaviour*, internal and external, of subject X. So, can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?

If you don't want any help in understanding the feature, then we can stop the conversation; but then there is no point in your continuing to interject, when you have admitted you can't follow the conversation because you don't even understand what feature is being discussed, and you have rejected help in understanding it.

eridanus

Nov 25, 2015, 8:53:50 AM
to talk-o...@moderators.isc.org
If we are using the same language, it is up to you to explain what you are saying.

Think about an advanced mathematician. He can be presenting some new theory of maths, or some weird proposition... if this mathematician wants other mathematicians to accept his theory, or his new model of something, he must explain it to them.
Of course, a mathematician of advanced rank is not going to explain anything to me, for I do not know a word of advanced maths.

Coming back to your case: either you are a sort of metaphysician, in which case I cannot listen to a word of yours, for I do not believe in metaphysics... or you claim that what you are saying is understandable, as you said, by a child of 10 years. In that case, it must be pretty easy for you to explain what you are saying, for we have a little more intelligence than a child of 10 years.
It is your responsibility to explain to us what the hell you are talking about.

Eri


someone

Nov 25, 2015, 9:08:50 AM
to talk-o...@moderators.isc.org
I had stated that I assume most 10-year-olds could understand what feature the authors were suggesting the philosophical zombie would lack. If you can't, then you can't; or if you are pretending you can't through intellectual cowardice, then you could go on pretending. But I'll go through it with Bill Rogers, assuming he doesn't bail out. I assume dualists have no problem understanding the feature that a philosophical zombie lacks, even though the philosophical zombie scenario assumes physicalism and assumes the feature is epiphenomenal (and dualists don't believe it to be). As I've mentioned, I'll be referring to the feature as "The Obvious Clue To Reality That Atheists Are Sometimes Too Indoctrinated To Be Able To Face".

Bill Rogers

Nov 25, 2015, 11:13:51 AM
to talk-o...@moderators.isc.org
On Wednesday, November 25, 2015 at 8:33:52 AM UTC-5, someone wrote:

> > As I explained earlier, the part that I don't understand is what *you* mean by consciousness. Telling me that consciousness is that feature that zombies are imagined to lack is circular and useless.
>
> I had tried to explain in the original thread what I meant by consciousness, but you didn't understand it. Maybe I was explaining it badly, so I gave you links to two articles about philosophical zombies and told you it was the non-behavioural feature the authors were suggesting a philosophical zombie lacks. But you have said that you couldn't understand what non-behavioural feature they were referring to either; and presumably you didn't think it was a behavioural feature they were suggesting a philosophical zombie lacked, because both articles state that the philosophical zombie behaves the same as a human. So in order to help you, I'm trying other thought experiments.

All you need to do is say what you mean by consciousness. I know what I think consciousness is, and I've explained my position to you. Your referring me to articles on philosophical zombies does not help me understand what *you* think consciousness is.

All I've gotten from all your posts is that you think consciousness is not a behavior but that, since it is not epiphenomenal, it affects behavior.
>
> [snip]
>
> > >
> > > I don't know whether you are familiar with Mary and the black and white room thought experiment, but if not here is a link that contains a rough synopsis of it in section 2 http://plato.stanford.edu/entries/qualia-knowledge/ and a couple of other similar thought experiments in section 1.
> > >
> > > So let's imagine for the sake of discussion that the Physics Assumptions were correct, for your convenience I'll restate them here:
> > > (a) the minimal set of laws of physics required to describe the behaviour of things that aren't consciously experiencing are sufficient to describe the behaviour of things that are consciously experiencing
> > > (b) the fundamental forces behind the laws in (a) are the same for things that consciously experience and things that don't.
> > >
> > > I realise that you will interpret the phrase "consciously experiencing" differently from how I intend it to be understood, and that you'll be interpreting it as either a behaviour, or referring to some feature of reality that you don't recognise. But that's ok.
> > >
> > > So imagine Mary is in her black and white room, and has all the observable information gleaned from all manner of scanners and probes about subject X that was in a field and stated "The sky is blue". And that she can explain "just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence 'The sky is blue'".
> > >
> > > Can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?
> >
> > No. Mary learns nothing. The reason you think she learns something is that you've not thought through in detail what it means for her to know absolutely everything that is going on in subject X. But I'm not interested in wasting time on Mary the color scientist when you (1) won't cough up your own definition of what you mean by consciousness (2) ignore the "atheist evolutionary account" that I've already given you.
>
> Regarding (1), I've told you I mean the same as the authors of the articles.

Great. The authors say that the feature missing in zombies is consciousness. Can you see how saying that does not get me any closer to understanding what you (or they) mean by consciousness? The meaning of the word is not obvious or universally agreed upon. For example, I, and others here, claim that consciousness is a certain type of behavior. You say it is not behavioral. So, if you want to be understood, go ahead and say what you mean by consciousness. It can't be that hard.


> You've claimed you can't understand what non-behavioural feature they are referring to. Regarding (2), this thread isn't about any argument against evolutionary accounts; it is about the feature that you claim you can't understand.

Ah, earlier you had said you were trying to show that "atheist evolutionary accounts" were implausible.
>
> So in order to help you understand the articles, I have brought in the Mary thought experiment. I didn't suggest she knows absolutely everything that is going on in subject X; I just mentioned that she knows "all the observable information gleaned from scanners and probes...". The summary in the article could have been interpreted that way, even though it did go on to clarify that it could be argued that she didn't know all the physical facts. So, to be clear, just assume her knowledge is restricted to all the *behaviour*, internal and external, of subject X. So, can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?
>

No, I don't think Mary will learn anything new. She already knows "what it's like" to see blue. If she knows *all* the relevant behavior, then she knows all there is to know.

> If you don't want any help in understanding the feature, then we can stop the conversation; but then there is no point in your continuing to interject, when you have admitted you can't follow the conversation because you don't even understand what feature is being discussed, and you have rejected help in understanding it.

I'll continue to interject when I feel like it, of course. I thought you were interested in the "atheist evolutionary account." I gave you a simple, straightforward version. You seem uninterested in dealing with it for the moment. If you go back to trying to show that it is implausible, I'll put in my two cents.


Message has been deleted
Message has been deleted
Message has been deleted

someone

Nov 25, 2015, 1:33:47 PM
to talk-o...@moderators.isc.org
On Wednesday, November 25, 2015 at 4:13:51 PM UTC, Bill Rogers wrote:
> On Wednesday, November 25, 2015 at 8:33:52 AM UTC-5, someone wrote:
>
> > > As I explained earlier, the part that I don't understand is what *you* mean by consciousness. Telling me that consciousness is that feature that zombies are imagined to lack is circular and useless.
> >
> > I had tried to explain in the original thread what I meant by consciousness, but you didn't understand it. Maybe I was explaining it badly, so I gave you links to two articles about philosophical zombies and told you it was the non-behavioural feature the authors were suggesting a philosophical zombie lacks. But you have said that you couldn't understand what non-behavioural feature they were referring to either; and presumably you didn't think it was a behavioural feature they were suggesting a philosophical zombie lacked, because both articles state that the philosophical zombie behaves the same as a human. So in order to help you, I'm trying other thought experiments.
>
> All you need to do is say what you mean by consciousness. I know what I think consciousness is, and I've explained my position to you. Your referring me to articles on philosophical zombies does not help me understand what *you* think consciousness is.
>
> All I've gotten from all your posts is that you think consciousness is not a behavior but that, since it is not epiphenomenal, it affects behavior.
>
> >
> > [snip]
> >
> > > >
> > > > I don't know whether you are familiar with Mary and the black and white room thought experiment, but if not here is a link that contains a rough synopsis of it in section 2 http://plato.stanford.edu/entries/qualia-knowledge/ and a couple of other similar thought experiments in section 1.
> > > >
> > > > So let's imagine for the sake of discussion that the Physics Assumptions were correct, for your convenience I'll restate them here:
> > > > (a) the minimal set of laws of physics required to describe the behaviour of things that aren't consciously experiencing are sufficient to describe the behaviour of things that are consciously experiencing
> > > > (b) the fundamental forces behind the laws in (a) are the same for things that consciously experience and things that don't.
> > > >
> > > > I realise that you will interpret the phrase "consciously experiencing" differently from how I intend it to be understood, and that you'll be interpreting it as either a behaviour, or referring to some feature of reality that you don't recognise. But that's ok.
> > > >
> > > > So imagine Mary is in her black and white room, and has all the observable information gleaned from all manner of scanners and probes about subject X that was in a field and stated "The sky is blue". And that she can explain "just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence 'The sky is blue'".
> > > >
> > > > Can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?
> > >
> > > No. Mary learns nothing. The reason you think she learns something is that you've not thought through in detail what it means for her to know absolutely everything that is going on in subject X. But I'm not interested in wasting time on Mary the color scientist when you (1) won't cough up your own definition of what you mean by consciousness (2) ignore the "atheist evolutionary account" that I've already given you.
> >
> > Regarding (1), I've told you I mean the same as the authors of the articles.
>
> Great. The authors say that the feature missing in zombies is consciousness. Can you see how saying that does not get me any closer to understanding what you (or they) mean by consciousness? The meaning of the word is not obvious or universally agreed upon. For example, I, and others here, claim that consciousness is a certain type of behavior. You say it is not behavioral. So, if you want to be understood, go ahead and say what you mean by consciousness. It can't be that hard.
>
>
> > You've claimed you can't understand what non-behavioural feature they are referring to. Regarding (2), this thread isn't about any argument against evolutionary accounts; it is about the feature that you claim you can't understand.
>
> Ah, earlier you had said you were trying to show that "atheist evolutionary accounts" were implausible.

Different thread.

> >
> > So in order to help you understand the articles, I have brought in the Mary thought experiment. I didn't suggest she knows absolutely everything that is going on in subject X; I just mentioned that she knows "all the observable information gleaned from scanners and probes...". The summary in the article could have been interpreted that way, even though it did go on to clarify that it could be argued that she didn't know all the physical facts. So, to be clear, just assume her knowledge is restricted to all the *behaviour*, internal and external, of subject X. So, can you imagine that, when shown a blue picture, Mary will obtain knowledge about a feature of subject X that she didn't previously know: what it was like for subject X to experience a blue?
> >
>
> No, I don't think Mary will learn anything new. She already knows "what it's like" to see blue. If she knows *all* the relevant behavior, then she knows all there is to know.
>

Again, strange (unless you are feigning ignorance), but I'll try a connected approach.

Imagine there is a robot, the Mark 1, which gives the type of behaviour that you would classify as consciousness. It has cameras for eyes, each pixel having three 8-bit intensity values (so a range from 0-255), each representing a colour intensity. These come through 3 channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue". You can imagine that all its reactions to colour are based off the ABC channel values.

Now imagine that the Mark 1 can take two types of eye camera: the RGB eye camera, which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data; or the BGR eye camera, which uses channel A for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.

I hope you've managed to follow that. If you aren't sure, we can spend some more time on it. If that's OK, we can go on to consider four Mark 1 robots: two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table.

Do both the Mark 1s (the one with RGB camera eyes and the one with BGR camera eyes) in the red room with a blue table experience the room with red qualia similar to yours, and the table with blue qualia similar to yours?
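
To make the setup concrete, here is a minimal sketch in Python of the Mark 1 as described above. The function names (colour_word, rgb_camera, bgr_camera) are my own illustrative inventions, not part of the thought experiment; the channel-to-word table and the two camera wirings follow the description above.

```python
# Illustrative sketch of the Mark 1 robot; names are invented for
# illustration, only the channel/word mappings come from the text.

# "Version 1" software: a lookup table from (A, B, C) channel values
# to colour words, exactly as described.
def colour_word(a, b, c):
    table = {
        (255, 0, 0): "red",    # channel A at full intensity
        (0, 255, 0): "green",  # channel B at full intensity
        (0, 0, 255): "blue",   # channel C at full intensity
    }
    return table.get((a, b, c), "unknown")

# The two eye cameras map incoming light intensities (red, green, blue)
# onto channels A, B, C differently.
def rgb_camera(red, green, blue):
    return (red, green, blue)   # A<-red, B<-green, C<-blue

def bgr_camera(red, green, blue):
    return (blue, green, red)   # A<-blue, B<-green, C<-red

# Pure red light, as reflected from the red room's walls:
light = (255, 0, 0)
print(colour_word(*rgb_camera(*light)))  # -> red
print(colour_word(*bgr_camera(*light)))  # -> blue
```

Note that with the "version 1" software unchanged, the same red light puts the two robots into different internal channel states (and here even different colour words); that internal difference is what the question above is probing.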

[snip]

eridanus

Nov 25, 2015, 2:13:48 PM
to talk-o...@moderators.isc.org
Your posts are not convincing. You are pretending to say something rational, but you are speaking only gibberish that pretends to pass for knowledge.

Unless you give a few clear examples of what you are saying, you are saying nothing. Pure showing off with petulant words.
Eri

Bill Rogers

Nov 25, 2015, 4:43:47 PM
to talk-o...@moderators.isc.org
I'm not interested in discussing Mary the color scientist with you. You're not interested in the "atheist evolutionary account." I think we've hit a dead end. If you get back to attempting to show that the "atheist evolutionary account" of consciousness is implausible, I'll put in my two cents again.

someone

Nov 25, 2015, 7:58:48 PM
to talk-o...@moderators.isc.org
I don't think we have, and I will get on to showing the problem with your atheist account once I have managed to explain to you what feature I am referring to when I use the term "consciously experiencing". It has to be that way because, as I mentioned, I'm referring to that feature as "The Obvious Clue To Reality That Atheists Are Sometimes Too Indoctrinated To Be Able To Face". I'll need you to understand the feature before I can show you why it is a clue, and why your belief about reality is, beyond reasonable doubt, wrong. The question about the robots isn't a question about Mary the colour scientist. So why not answer the question instead of bailing?

Bill Rogers

Nov 25, 2015, 10:23:47 PM
to talk-o...@moderators.isc.org
OK, so go ahead and say what you mean by the term "consciously experiencing." It's your meaning; there's no need for me to answer a bunch of questions. All you have to do is say what you mean. So far, I think that you mean that (1) conscious experience is whatever philosophical zombies are imagined to lack (2) it is not epiphenomenal (3) it is therefore not something that zombies actually lack, since anything they lack would by definition be epiphenomenal (4) it is not a behavior. So now you can just finish saying straight out what you mean by conscious experience - maybe "the ineffable whatever it is like to be you," maybe "the direct subjective connection to the world". I have no idea what you think, and the best way for you to explain is simply to say what you mean. I assume you're not afraid to do that.

solar penguin

Nov 26, 2015, 2:48:46 AM
to talk-o...@moderators.isc.org
On Wed, 25 Nov 2015 10:32:56 -0800, someone wrote:

>
> Imagine there is a robot. The Mark 1, which gives the type of behaviour
> that you would classify as consciousness. It has cameras for eyes, each
> pixel having three 8-bit intensity values (so a range from 0-255) which
> each represent a colour intensity. These come through 3 channels A, B,
> and C. Internally, in software ("version 1"), the robot holds a table so
> to speak of linking words to the colour values. E.g. if channel A has
> an intensity of 255, and B & C intensities of 0, the robot will use the
> word "red" to describe the colour if B is 255 and A & C are 0 it will
> use the word "green" and if C is 255 and A & B are 0, it will use the
> word "blue" to describe the colour. You can imagine that all its
> reactions to colour are based off the ABC channel values.
>
> Now imagine that the Mark 1 can take two types of eye camera. The RGB
> eye camera which, for each pixel, uses channel A for the red light
> intensity data, channel B for the green light intensity data, and
> channel C for the blue light intensity data. Or the BGR eye camera which
> uses channel A for the blue light intensity data, channel B
> for the green light intensity data, and channel C for the red light
> intensity data.

Wouldn't the one with modified eyes be called a Mark 2?

>
> I hope you've managed to follow that. If you aren't sure, we can spend
> some more time on it. If it's ok we can go on to consider four Mark 1
> robots, two with RGB eye cameras, and two with BGR eye cameras. From
> each pair, one is in a red room with a blue table, and the other is in a
> blue room with a red table.
>
> Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR
> camera eyes) in the red room with a blue table experience the room with
> similar red qualia as you would, and the table with a similar blue
> qualia as you would?
>

Eh? What? Pardon? Sorry?

Are you trying to say that colour-blind people aren't conscious? Because
if not, what does any of this have to do with consciousness of real
people in the real world?

*Hemidactylus*

Nov 26, 2015, 2:53:46 AM
to talk-o...@moderators.isc.org
I kinda take issue with considering consciousness a sort of behavior.
That reeks of B.F. Skinner to me, not that I entirely discount his views
on behaviorism and the paradise envisioned in Walden Two ;-)

Behavior to me is what is overtly observable. It's that 2nd-3rd person
perspective where from the outside we may *infer* consciousness in
others, but lacks the 1st person feel. Frankly I think from a
methodological standpoint our 3rd person stance can only asymptotically
approach the actual experience. We may learn much from the outside, but
maybe not quite break all the way in scientifically. Philosophically we
can talk about zombies til we are blue in the face and it still sounds
like mysterian nonsense. One is either constituted in a way to support
conscious awareness or not. You can't have it both ways. Brain damage
makes zombies. Now the Mary growing up in black and white I can dig, up
to a point, because methodologically I doubt Mary can truly know from
science what color experience is really like. The omniscient Mary is a
philosophic cheat move.

Now back to the overt thing. Much of what is conscious is covert to the
outside world. We cannot introspect very well, if at all, so our
unconscious wellsprings are covert to us, though can be inferred using
tricky experimentation.

I would hazard consciousness is an admixture of the epiphenomenal and
functional and that brain states map to "mental" states in a very messy
non 1:1 way. I'm not a big fan of isomorphy either between individuals
or within individuals over time. Too much flux for that stream to be
stepped in twice.

*Hemidactylus*

unread,
Nov 26, 2015, 2:58:45 AM11/26/15
to talk-o...@moderators.isc.org
What about the clue dualists cannot face, that there's no interface
point where a ghost can open a door? Philosophical zombies are a
nonstarter, yet they're your idée fixe, and it keeps you stunted.

solar penguin

unread,
Nov 26, 2015, 3:03:48 AM11/26/15
to talk-o...@moderators.isc.org
On Wed, 25 Nov 2015 19:21:18 -0800, Bill Rogers wrote:

>
> OK, so go ahead and say what you mean by the term "consciously
> experiencing." It's your meaning; there's no need for me to answer a
> bunch of questions. All you have to do is say what you mean. So far, I
> think that you mean that (1) conscious experience is whatever
> philosophical zombies are imagined to lack (2) it is not epiphenomenal
> (3) it is therefore not something that zombies actually lack, since
> anything they lack would by definition be epiphenomenal (4) it is not a
> behavior.

You forgot (5) it is something that some atheists think they will not
experience after death.

*Hemidactylus*

unread,
Nov 26, 2015, 3:03:48 AM11/26/15
to talk-o...@moderators.isc.org
As mysterians do.

*Hemidactylus*

unread,
Nov 26, 2015, 3:38:45 AM11/26/15
to talk-o...@moderators.isc.org
Some women have even richer color perception than the rest. They are
very conscious.

someone

unread,
Nov 26, 2015, 3:58:46 AM11/26/15
to talk-o...@moderators.isc.org
I need to ask the questions, because words that people would normally understand, like qualia, sentience, etc., you presumably believe are just behaviours; every feature of a human you believe is just a behaviour. Also, you keep repeating the sentiment in (3), but that is not the case, and I have explained why. While it is true that if philosophical zombies existed it would show that the feature is epiphenomenal, they don't exist, and dualists, for example, realise that the feature isn't epiphenomenal, which I presume is one of the reasons philosophical zombies have been imagined. They take the average physicalist account, minus the feature, and are left with a philosophical zombie, which is an absurdity because it implies epiphenomenalism. But on the average physicalist account there isn't a mind controlling a physical human; there is only the human, governed by the laws of physics, not by features such as free will or decisions based on what it felt like. I could try to ask you whether you understood dualism, and understood the features the dualists believed weren't features of the physical human but were features of the soul/mind, whose responses influence human behaviour, perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul, but in which the same events happened just by chance. So the human behaviour would be the same, but the explanation different. Whether you can or not I don't know. But if you can, then the feature I am talking about is the sensations that the dualists claim are a feature of the mind.
If you can't, then could you answer the question? The thought experiment is designed to let you keep your behaviourist translation of the words, but to burst your behaviourist bubble by highlighting how absurd your position is, so that you drop such a ridiculous translation and admit that you did understand how others were translating the word (though maybe you won't). So, just in case you are going to claim that you can't understand what sensations dualists claim are a feature of the mind, I'll repeat the question, in the hope that you'll stop avoiding answering it and wasting time: Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with similar blue qualia as you would?

Ernest Major

unread,
Nov 26, 2015, 4:43:47 AM11/26/15
to talk-o...@moderators.isc.org
My reading is that when Bill Rogers says that consciousness is a
behaviour he is saying that consciousness is physical, and that when
Glenn Spiegel says that consciousness is not a behaviour he is saying
that consciousness is not physical. If either means something else then
they are likely talking past each other.
>
> Behavior to me is what is overtly observable. It's that 2nd-3rd person
> perspective where from the outside we may *infer* consciousness in
> others, but lacks the 1st person feel. Frankly I think from a
> methodological standpoint our 3rd person stance can only asymptotically
> approach the actual experience. We may learn much from the outside, but
> maybe not quite break all the way in scientifically. Philosophically we
> can talk about zombies til we are blue in the face and it still sounds
> like mysterian nonsense. One is either constituted in a way to support
> conscious awareness or not. You can't have it both ways. Brain damage
> makes zombies. Now the Mary growing up in black and white I can dig up
> to a point, because methodologically I doubt Mary can truly know from
> science what color experience is really like. The omniscient Mary is a
> philosophic cheat move.
>
> Now back to the overt thing. Much of what is conscious is covert to the
> outside world. We cannot introspect very well, if at all, so our
> unconscious wellsprings are covert to us, though can be inferred using
> tricky experimentation.
>
> I would hazard consciousness is an admixture of the epiphenomenal and
> functional and that brain states map to "mental" states in a very messy
> non 1:1 way. I'm not a big fan of isomorphy either between individuals
> or within individuals over time. Too much flux for that stream to be
> stepped in twice.
>


--
alias Ernest Major

Bill Rogers

unread,
Nov 26, 2015, 7:18:48 AM11/26/15
to talk-o...@moderators.isc.org
Yes. When I say behavior, I mean everything that is, even only in principle, observable about the system. So that's pretty much equivalent to saying that consciousness is physical.

someone

unread,
Nov 26, 2015, 7:18:48 AM11/26/15
to talk-o...@moderators.isc.org
That just shows how poor your comprehension skills are. I'm suggesting that even in a physicalist theory, the conscious experiences, while being a feature of the physical, wouldn't be a behavioural feature. Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction.

Bill Rogers

unread,
Nov 26, 2015, 7:38:48 AM11/26/15
to talk-o...@moderators.isc.org
Yes. Every feature of humans is "just" a behavior. Remember that by behavior, I mean everything that is in principle observable, so it includes all the details of the behavior of individual neurons in the brain, the way those neurons would respond to electrical stimulation, etc. I don't just mean walking around and talking.

>Also you keep repeating the sentiments in (3) but that is not the case and I have explained why. The reason is while it is true that if philosophical zombies existed it would show that the feature is epiphenomenal, they don't, and dualists for example realise that the feature isn't epiphenemenal and that I presume is one of the reasons philosophical zombies have been imagined.

We agree on this point. You only get fixated on the fact that I express essentially the same idea using different words.


>They take the average physicalist account, minus the feature, and they are left with a philosophical zombie which is an absurdity because it implies epiphenomenalism, but with the average physicalist account, there isn't a mind controlling a physical human, there only is the human, and it is governed by the laws of physics, not by features such as free will, and decisions based on what it felt like.

Well, if that's the position you are arguing against, it's not a position held by me or anyone I've met. My physicalist position is that there is a mind, and it's a physical thing, that one of the things the physical mind does is make choices, and those choices are governed by physical laws.


>I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind. If you can't then could you answer the question, as the thought experiment is designed to allow you to keep your behaviourist translation of the words, but burst your behaviourist bubble by highlighting to you how absurd your position is, so that you drop such a ridiculous translation, and admit that you did understand how others were translating the word (though maybe you won't).

Let's be clear. I am not a behaviorist. When I say that consciousness is a behavior, all I mean is that it is something physical and in principle observable. I do not treat the brain as a black box and insist that you only look at macroscopically observable, external behavior. That would be behaviorism.


>So just in case you are going to claim that you can't understand what sensations dualists claim are a feature of the mind, I'll repeat the question, in the hope that you'll stop avoiding answering it and wasting time:

Somebody is an Olympic medalist at wasting time, and it's not me.


>Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?

No, of course not. What I feel and how I respond to a color is a great deal more complex than recognizing a certain frequency of light and naming it. More complex, but not supernatural or immaterial.

Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.

Here are your arguments, stated simply:

1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.

2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.

3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.

They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, I'm not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.


Bill Rogers

unread,
Nov 26, 2015, 7:43:45 AM11/26/15
to talk-o...@moderators.isc.org
Ernest got it right. You're the one whose comprehension skills are lacking. But I think that's just because when I explain what I mean, you don't read it for comprehension, you just look to see whether you can score it as a "yes" or a "no."

someone

unread,
Nov 26, 2015, 7:58:46 AM11/26/15
to talk-o...@moderators.isc.org
No, he got it wrong, because my point wasn't that consciousness wasn't a physical feature; I was suggesting it was a non-behavioural feature, as I pointed out. You claimed my comprehension skills were lacking, so what did I misunderstand about what you were suggesting?

someone

unread,
Nov 26, 2015, 7:58:46 AM11/26/15
to talk-o...@moderators.isc.org
I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated: "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...


> Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
>
> Here are your arguments, stated simply:
>
> 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
>
> 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
>
> 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
>
> They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, Im not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.

But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.

Bill Rogers

unread,
Nov 26, 2015, 8:13:46 AM11/26/15
to talk-o...@moderators.isc.org
What you misunderstood is that, for me, all physical features are behavioral features. I've explained that in my posts to you multiple times. By behavior I mean everything observable, even in principle, about the system. That includes not just, say, walking and talking, but the reading given when you put the system on a scale, the results of experiments you do on the system, etc.

Bill Rogers

unread,
Nov 26, 2015, 8:23:47 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
>
> I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated : "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
>
>
> > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> >
> > Here are your arguments, stated simply:
> >
> > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> >
> > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> >
> > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> >
> > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, Im not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
>
> But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.

Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.

So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.

Of course, defining consciousness by what it is not is not the most direct route, so presumably you have more to add. If you mean "qualia," go ahead and say it. If you mean "subjectivity," go ahead and say it. I assure you I'll have no trouble understanding what you mean, even if I think you're wrong.

someone

unread,
Nov 26, 2015, 8:23:47 AM11/26/15
to talk-o...@moderators.isc.org
Where did I suggest otherwise in the comment: "Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction."?

Here are the two assertions about what you were suggesting; is either of them wrong:

(1) You only understand consciousness as a behaviour
(2) To you a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction.

Bill Rogers

unread,
Nov 26, 2015, 8:48:45 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 8:23:47 AM UTC-5, someone wrote:

> > >
> > > No he got it wrong, because my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out. You claimed my comprehension skills were lacking, so what did misunderstand about what you were suggesting?
> >
> > What you misunderstood is that, for me, all physical features are behavioral features. I've explained that in my posts to you multiple times. By behavior I mean everything observable, even in principle, about the system. That includes not just, say, walking and talking, but the reading given when you put the system on a scale, the results of experiments you do on the system, etc.
>
> Where did I suggest otherwise in the comment: "Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction."?

When you said "..my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out," it seemed to me that you were implying that there were physical features which were not behaviors. If that's your position, fine. As long as you understand that when I say "behavioral feature" I just mean any feature which is in principle observable or experimentally detectable.

>
> Here were the two assertions about what you were suggesting, are either of them wrong:
>
> (1) You only understand consciousness as a behaviour

Correct, as long as you understand that by behavior, I mean anything experimentally detectable, even in principle, about the system, not just macroscopic, external behaviors like walking and talking.

> (2) To you a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction.

That's correct.

Great. Now we understand each other. So go ahead and say what you yourself mean by "consciously experiencing."



someone

unread,
Nov 26, 2015, 8:48:45 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> >
> > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated : "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...

Why didn't you give an example? Or, if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table, where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded, would prevent it from ever being considered conscious? If not, then why not assume that it performs whatever behaviour you think it would require. I'll remind you of the robot again.

Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255), each representing a colour intensity. These come through 3 channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.

Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera which uses the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.

Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?
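The channel-swap setup just described can be sketched in a few lines of Python. This is purely illustrative: the names (`rgb_camera`, `bgr_camera`, `WORD_TABLE`, `describe`) are invented for the example, and the robot is hypothetical; the sketch models only what is externally observable, namely which colour word each configuration emits.

```python
def rgb_camera(red, green, blue):
    # RGB eye camera: channel A = red, B = green, C = blue
    return (red, green, blue)

def bgr_camera(red, green, blue):
    # BGR eye camera: channel A = blue, B = green, C = red
    return (blue, green, red)

# The "version 1" software table linking ABC channel values to words.
WORD_TABLE = {
    (255, 0, 0): "red",    # A = 255, B & C = 0
    (0, 255, 0): "green",  # B = 255, A & C = 0
    (0, 0, 255): "blue",   # C = 255, A & B = 0
}

def describe(camera, light):
    """All the robot's reactions are based solely on the ABC channel values."""
    abc = camera(*light)
    return WORD_TABLE.get(abc, "unknown")

red_room = (255, 0, 0)    # pure red light
blue_table = (0, 0, 255)  # pure blue light

print(describe(rgb_camera, red_room))  # "red"
print(describe(bgr_camera, red_room))  # "blue" - same room, swapped channels
```

Note that the two configurations emit different words for the same room, since everything downstream is driven solely by the ABC channel values; whether anything beyond those values is "experienced" is exactly what the question above is probing.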



> >
> > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > >
> > > Here are your arguments, stated simply:
> > >
> > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > >
> > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > >
> > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > >
> > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, Im not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> >
> > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
>
> Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
>
> So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
>

Did you not understand when I had stated:

"I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."


> Of course, defining consciousness by what it is not, is not the most direct route, so, presumably you have more to add. If you mean, "qualia" go ahead, and say it. If you mean "subjectivity" go ahead and say it. I assure I'll have no trouble understanding what you mean, even if I think you're wrong.

Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?

Bill Rogers

unread,
Nov 26, 2015, 9:03:45 AM11/26/15
to talk-o...@moderators.isc.org
OK. Qualia. On my view the qualia of blue is the sum of the following behaviors: the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher-level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, and any connections to thoughts I'd been having about anything that might be affected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
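The structure of this "qualia as the sum of behaviors" claim can be caricatured as a pipeline of observable processing stages. The sketch below is only an illustration of the shape of the claim, not a neuroscience model; every stage name and data field is invented for the example.

```python
# Illustration only: "the qualia of blue" treated as nothing over and above
# a cascade of observable processing stages. Stage names loosely follow the
# list above; none of this models the real visual system.

def cone_response(state):
    return {**state, "signal": "blue-cone firing"}

def visual_pathway(state):
    return {**state, "transmitted": True}

def integrate_shape(state):
    return {**state, "boundaries": "integrated with rod data"}

def identify_object(state):
    return {**state, "identified": state.get("object", "unknown")}

def name_color(state):
    return {**state, "word": "blue"}

def recall_associations(state):
    return {**state, "memories": ["other blue things"]}

STAGES = [cone_response, visual_pathway, integrate_shape,
          identify_object, name_color, recall_associations]

def qualia_of_blue(stimulus):
    """On this view, the 'qualia' just is the whole cascade of stages."""
    state = dict(stimulus)
    for stage in STAGES:
        state = stage(state)
    return state
```

Running `qualia_of_blue({"object": "sky"})` yields a state carrying every stage's contribution; on this physicalist reading there is no further fact left over once all the stages have been enumerated.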

someone

unread,
Nov 26, 2015, 9:08:46 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 1:48:45 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 8:23:47 AM UTC-5, someone wrote:
>
> > > >
> > > > No he got it wrong, because my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out. You claimed my comprehension skills were lacking, so what did misunderstand about what you were suggesting?
> > >
> > > What you misunderstood is that, for me, all physical features are behavioral features. I've explained that in my posts to you multiple times. By behavior I mean everything observable, even in principle, about the system. That includes not just, say, walking and talking, but the reading given when you put the system on a scale, the results of experiments you do on the system, etc.
> >
> > Where did I suggest otherwise in the comment: "Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction."?
>
> When you said "..my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out," it seemed to me that you were implying that there were physical features which were not behaviors. If that's your position, fine. As long as you understand that when I say "behavioral feature" I just mean any feature which is in principle observable or experimentally detectable.

Ok, just to be clear, I have assumed that by a behaviour you mean features that are in principle observable or experimentally detectable by those other than the subject. So imagine scientists examine the Mark 1 robot that I mentioned.

The Mark 1 gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0 to 255), each representing a colour intensity. These come through three channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue". You can imagine that all its reactions to colour are based off the ABC channel values.

Imagine that the Mark 1 can take two types of eye camera. Either the RGB eye camera, which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera, which uses channel A for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.

Consider four Mark 1 robots, two with RGB eye cameras and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. I'm assuming you are suggesting that the scientists can in principle observe or experimentally detect whether both the Mark 1s in the red room (the one with RGB camera eyes and the one with BGR camera eyes) experience the room with similar red qualia to their own. Have I misunderstood? Are you suggesting that only the individual Mark 1s could observe what the conscious experience was like for them?
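For concreteness, the two eye-camera wirings and the version 1 lookup table just described can be rendered as a minimal single-pixel sketch. The function names (`rgb_eye`, `bgr_eye`, `name_colour`) and the dictionary encoding are my own illustration, not part of the thread:

```python
# Single-pixel sketch of the Mark 1 wiring described above.
# All names here are illustrative only.

def rgb_eye(red, green, blue):
    """RGB camera: channel A carries red, B green, C blue."""
    return {"A": red, "B": green, "C": blue}

def bgr_eye(red, green, blue):
    """BGR camera: channel A carries blue, B green, C red."""
    return {"A": blue, "B": green, "C": red}

# "Version 1" software table, stated purely in terms of channels
# A/B/C, as in the description: A saturated -> "red",
# B saturated -> "green", C saturated -> "blue".
TABLE = {
    (255, 0, 0): "red",
    (0, 255, 0): "green",
    (0, 0, 255): "blue",
}

def name_colour(channels):
    key = (channels["A"], channels["B"], channels["C"])
    return TABLE.get(key, "unknown")

# The same pure-red light reaches the software as different channel
# states depending on which camera is fitted:
print(name_colour(rgb_eye(255, 0, 0)))  # -> red
print(name_colour(bgr_eye(255, 0, 0)))  # -> blue
```

Read literally, a fixed version 1 table lets the fitted camera show up in the words produced; whether each camera variant should instead be assumed to come with a matching table, so that outward behaviour is identical while the internal channel values are swapped, is left open by the scenario and is part of what the question above probes.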


>
> >
> > Here were the two assertions about what you were suggesting, are either of them wrong:
> >
> > (1) You only understand consciousness as a behaviour
>
> Correct, as long as you understand that by behavior, I mean anything experimentally detectable, even in principle, about the system, not just macroscopic, external behaviors like walking and talking.
>
> > (2) To you a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction.
>
> That's correct.
>
> Great. Now we understand each other. So go ahead and say what you yourself mean by "consciously experiencing."

This part of the conversation wasn't about that; it was about your claim that I had misunderstood your position. So far you have failed to point out where I misunderstood, and so have failed to justify your claim.

Ernest Major

unread,
Nov 26, 2015, 9:18:46 AM11/26/15
to talk-o...@moderators.isc.org
On 26/11/2015 12:15, someone wrote:
>> My reading is that when Bill Rogers says that consciousness is a
>> >behaviour he is saying that consciousness is physical, and that when
>> >Glenn Spiegel says that consciousness is not a behaviour he is saying
>> >that consciousness is not physical. If either means something else then
>> >they are likely talking past each other.
> That just shows how poor your comprehension skills are. I'm suggesting that even in a physicalist theory, the conscious experiences while being a feature of the physical wouldn't be a behavioural feature. Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction.
>

You might like to consider that it shows how poor your communication
(and comprehension) skills are instead. You've either failed to
recognise that you were using behaviour in a different sense from Bill
Rogers, or you've failed to explain that you were using behaviour in a
different sense from Bill Rogers.

--
alias Ernest Major

someone

unread,
Nov 26, 2015, 9:18:46 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 2:03:45 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 8:48:45 AM UTC-5, someone wrote:
> > On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> > > On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> > > >
> > > > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated : "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
> >
> > Why didn't you give an example, or if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded would prevent it from ever being considered being conscious? If not, then why not assume that it performs whatever behaviour you think it would require. I'll remind you of the robot again.
> >
> > Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255) each represent a colour intensity. These come through 3 channels A, B, and C. Internally, in software ("version 1"), the robot holds a table so to speak of linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour if B is 255 and A & C are 0 it will use the word "green" and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> >
> > Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera which uses channel the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> >
> > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?

Why are you avoiding answering this?

> >
> >
> >
> > > >
> > > > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > > > >
> > > > > Here are your arguments, stated simply:
> > > > >
> > > > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > > > >
> > > > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > > > >
> > > > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > > > >
> > > > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, Im not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> > > >
> > > > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
> > >
> > > Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
> > >
> > > So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
> > >
> >
> > Did you not understand when I had stated:
> >
> > "I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."
> >

Why are you avoiding answering this?


> >
> > > Of course, defining consciousness by what it is not, is not the most direct route, so, presumably you have more to add. If you mean, "qualia" go ahead, and say it. If you mean "subjectivity" go ahead and say it. I assure I'll have no trouble understanding what you mean, even if I think you're wrong.
> >
> > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
>
> OK. Qualia. On my view the qualia of blue is the sum of the following behavior, the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be effected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.

What do you mean by "adjacent colours"? Do you mean light frequencies? Also, what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?

Take for example this quote from Ullin Place:

'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'

Can you understand the difference between statements about consciousness and statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?

Bill Rogers

unread,
Nov 26, 2015, 9:23:46 AM11/26/15
to talk-o...@moderators.isc.org
If you understand my position, great. I explained above why I thought you hadn't, but if you had, great.

someone

unread,
Nov 26, 2015, 9:33:47 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 2:23:46 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 9:08:46 AM UTC-5, someone wrote:
> > On Thursday, November 26, 2015 at 1:48:45 PM UTC, Bill Rogers wrote:
> > > On Thursday, November 26, 2015 at 8:23:47 AM UTC-5, someone wrote:
> > >
> > > > > >
> > > > > > No he got it wrong, because my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out. You claimed my comprehension skills were lacking, so what did misunderstand about what you were suggesting?
> > > > >
> > > > > What you misunderstood is that, for me, all physical features are behavioral features. I've explained that in my posts to you multiple times. By behavior I mean everything observable, even in principle, about the system. That includes not just, say, walking and talking, but the reading given when you put the system on a scale, the results of experiments you do on the system, etc.
> > > >
> > > > Where did I suggest otherwise in the comment: "Bill Rogers is suggesting that he only understands consciousness as a behaviour, so to him a philosophical zombie is something that is being imagined to behave in an indistinguishable fashion from a human while at the same time being imagined to behave differently, and so is just a plain contradiction."?
> > >
> > > When you said "..my point wasn't that consciousness wasn't a physical feature, I was suggesting it was a non-behavioural feature as I pointed out," it seem ed to me that you were implying that there were physical features which were not behaviors. If that's your position fine. As long as you understand that when I say "behavioral feature" I just mean any feature which is in principle observable or experimentally detectable.
> >
> > Ok, just to be clear, I have assumed that by a behaviour you mean features that are in principle or experimentally detectable by those other than the subject. So imagine scientists examine the Mark 1 robot that I mentioned.
> >
> > The Mark 1 gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255) each represent a colour intensity. These come through 3 channels A, B, and C. Internally, in software ("version 1"), the robot holds a table so to speak of linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour if B is 255 and A & C are 0 it will use the word "green" and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> >
> > Imagine that the Mark 1 can take two types of eye camera. Either the RGB eye camera which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera which uses channel the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> >
> > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. I'm assuming you are suggesting that the scientists can in principle observe or experimentally detect whether both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as themselves. Have I misunderstood, and that you are suggesting that only the individual Mark 1's could observe what the conscious experience was like for them?
> >

I've highlighted the ambiguity I saw in what you were potentially suggesting, so why are you avoiding the answer that would clear it up?

*Hemidactylus*

unread,
Nov 26, 2015, 9:53:47 AM11/26/15
to talk-o...@moderators.isc.org
I would have to concur to the extent that consciousness is not a
behavior in my book. It may be a byproduct and/or function and have
effects of its own. But I think it strains the meaning of behavior to
call it that.

eridanus

unread,
Nov 26, 2015, 9:58:48 AM11/26/15
to talk-o...@moderators.isc.org
There is a little problem with behavior, for there exists an inner behavior
that is not visible to others. But perhaps in the present, or at least in
the near future, we can get some idea of what someone is thinking with a
proper scan of his brain. But to do it, we need to put the head of this
person inside a scanner.

So, behavior "is not only" an external thing that can be observed by other
people.
eri

*Hemidactylus*

unread,
Nov 26, 2015, 10:08:44 AM11/26/15
to talk-o...@moderators.isc.org
There's something higher level (that may be a partial consequence of
differences in lower physiological processing stemming from how genes
butted heads with environmental factors during visual development) that
differentiates my experience of blue from yours.

To me qualia are a means of exploring subjective difference between
people. There are nooks and crannies of nonoverlap where experience
differs between people. The same objects in the environment may mean
something different for me than you. Red, due to cumulative experience,
elicits different outcomes in my brain than yours, beyond the hue or the
way my cones are uniquely situated or the subtle differences in my
cornea and lenses that may be due to long term exposure to sunlight and
other environmental degraders.

Or, shifting modalities to smell memories: the scent of the ungodly
cologne Drakkar Noir elicits something in me, bringing me back to the
late 80s, that may mean nothing to you if you smelled it, aside from
"What the hell is that ungodly strong smell?". People who take baths in
it habituate to it and leave a vapor trail that lasts days if not
weeks. They have a 10-meter-radius smell aura that arrives at
locations before they do. Polo by Ralph Lauren has similar "behavior".

eridanus

unread,
Nov 26, 2015, 10:18:52 AM11/26/15
to talk-o...@moderators.isc.org
These consciousness people are trying to sell us some immaterial stuff
related to the brain. We are resisting, for obvious reasons, accepting
the immaterial junk, for we have never witnessed any immaterial stuff...
except perhaps some cosmologists who are trying to see the "dark matter"
or the "dark energy".
For some time, neutrinos were postulated to be "massless", and this reminds
me of those guys with the spiritual mantras. The consciousness of Krishna.

Then, another question: it is not yet clear what "gravity" is made of. It
could be a pure spirit or something. It could be a property of space, though
so far I have never read that space is something "material" or physical stuff.

But they are trying hard to brainwash us into believing in souls, and with
souls, the oft-mentioned "consciousness that is not a behavior" makes
a little sense.
The problem I see is that space is nothing but... let me see what the Oxford
says about space. I had never tried to define this word.
Space:
1 Continuous extension viewed with or without reference to the existence
of objects within it.
2 Interval between points or objects viewed as having one, two or three
dimensions; amount of paper used in writing, etc.

The rest are derivatives of the basic meaning.

Then, philosophically, we have trouble defining space as something
close to being material. And if it is not material, space must be
a spirit... just a guess. I am not an expert in spirits.
So there must exist a metaphysical problem to discern whether space is meat
or fish. For if it were meat, we could not eat it in times of abstinence or during Lent. Just a guess.
Eri

eridanus

unread,
Nov 26, 2015, 10:33:47 AM11/26/15
to talk-o...@moderators.isc.org
Someone has the delusion of understanding what he is saying.
Remember that to think we are right, no particular condition is necessary
except the certitude that one is right and that one understands.
To understand something is a subjective state of the mind. You are saying
some mantra, and you know that you are right, for you are able to recite or
chant other mantras, some dozens of them, to prove that you understand.

It is like a number of mantras chained to others. Like Groucho Marx said,
"Those are my principles, and if you don't like them... well, I have others."

He has his principles, that is, some verbal mantras he is repeating again
and again. But he is unable to put this in plain English. Just imagine
he had learned this in Sanskrit. Or he had learned those phrases translated
into English, but has not yet arrived at understanding what those mantras
really mean.

Eri

Bill Rogers

unread,
Nov 26, 2015, 10:58:47 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 9:18:46 AM UTC-5, someone wrote:
> On Thursday, November 26, 2015 at 2:03:45 PM UTC, Bill Rogers wrote:
> > On Thursday, November 26, 2015 at 8:48:45 AM UTC-5, someone wrote:
> > > On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> > > > On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> > > > >
> > > > > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated : "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
> > >
> > > Why didn't you give an example, or if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded would prevent it from ever being considered being conscious? If not, then why not assume that it performs whatever behaviour you think it would require. I'll remind you of the robot again.
> > >
> > > Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255) each represent a colour intensity. These come through 3 channels A, B, and C. Internally, in software ("version 1"), the robot holds a table so to speak of linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour if B is 255 and A & C are 0 it will use the word "green" and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> > >
> > > Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera which uses channel the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> > >
> > > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?
>
> Why are you avoiding answer this?

Because, based on your track record, your complicated scenarios inevitably lead to dead ends and are a waste of time. It may comfort you to think that people avoid them because they are afraid of your trenchant insights. If so, go ahead and be comforted.

>
> > >
> > >
> > >
> > > > >
> > > > > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > > > > >
> > > > > > Here are your arguments, stated simply:
> > > > > >
> > > > > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > > > > >
> > > > > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > > > > >
> > > > > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > > > > >
> > > > > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, Im not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> > > > >
> > > > > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
> > > >
> > > > Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
> > > >
> > > > So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
> > > >
> > >
> > > Did you not understand when I had stated:
> > >
> > > "I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."
> > >
>
> Why are you avoiding answering this?

First, I'm not a dualist. Second, I think Penrose's stuff about quantum effects on microtubules is hogwash. But since it does not seem to me to be central to your argument, I'm not interested in the digression.

But you can read a critique of Penrose's argument here http://arxiv.org/abs/quant-ph/9907009

>
>
> > >
> > > > Of course, defining consciousness by what it is not is not the most direct route, so presumably you have more to add. If you mean "qualia," go ahead and say it. If you mean "subjectivity," go ahead and say it. I assure you I'll have no trouble understanding what you mean, even if I think you're wrong.
> > >
> > > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
> >
> > OK. Qualia. On my view the qualia of blue is the sum of the following behaviors: the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be affected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
>
> What do you mean by "adjacent colours"? Do you mean light frequencies?

I mean the colors of whatever things are adjacent to the blue thing that I'm looking at.

>Also what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?

I'm talking about the whole process, blue receptive cones, visual processing, accessing words for colors, stimulating memories, etc.

>
> Take for example this quote from Ullin Place:
>
> 'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'
>
> Can you understand the difference between statements about consciousness and the statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?

I understand that Ullin Place thinks there is a difference between statements about consciousness and statements about brain processes. But since I think that consciousness *is* a brain process, I hold that statements about consciousness *are* statements about brain processes, even if they are expressed in different words.


someone

unread,
Nov 26, 2015, 11:28:46 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 3:58:47 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 9:18:46 AM UTC-5, someone wrote:
> > On Thursday, November 26, 2015 at 2:03:45 PM UTC, Bill Rogers wrote:
> > > On Thursday, November 26, 2015 at 8:48:45 AM UTC-5, someone wrote:
> > > > On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> > > > > On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> > > > > >
> > > > > > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated: "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
> > > >
> > > > Why didn't you give an example? Or, if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table, where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded, would prevent it from ever being considered conscious? If not, then why not assume that it performs whatever behaviour you think it would require? I'll remind you of the robot again.
> > > >
> > > > Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255), each representing a colour intensity. These come through 3 channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> > > >
> > > > Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera, which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera, which uses the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> > > >
> > > > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?
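The Mark 1 scenario above can be sketched in a few lines of code. This is a hypothetical illustration only; the function and table names are invented, and the mapping simply follows the channel-to-word table and the two eye wirings as described.

```python
# A minimal sketch of the hypothetical "Mark 1" robot described above.
# All names here are invented for illustration; the point is only that the
# robot's internal state (and hence its colour words and reactions) depends
# on the A/B/C channel values alone, not on which light frequency happens
# to feed which channel.

# Internal table linking channel triples (A, B, C) to colour words.
COLOUR_WORDS = {
    (255, 0, 0): "red",
    (0, 255, 0): "green",
    (0, 0, 255): "blue",
}

def name_colour(a, b, c):
    """Return the word the software uses for a given channel triple."""
    return COLOUR_WORDS.get((a, b, c), "unknown")

def rgb_eye(red, green, blue):
    """RGB camera: channel A carries red light, B green, C blue."""
    return (red, green, blue)

def bgr_eye(red, green, blue):
    """BGR camera: channel A carries blue light, B green, C red."""
    return (blue, green, red)

# The same pure-blue surface (red=0, green=0, blue=255) seen by each eye:
print(name_colour(*rgb_eye(0, 0, 255)))  # -> blue
print(name_colour(*bgr_eye(0, 0, 255)))  # -> red
```

As literally described, swapping the eye changes which channel a given light frequency drives, so the same surface yields a different internal channel state (and word) unless the software table is swapped to match, which is the ambiguity the four-robot question is probing.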
> >
> > Why are you avoiding answering this?
>
> Because, based on your track record, your complicated scenarios inevitably lead to dead ends and are a waste of time. It may comfort you to think that people avoid them because they are afraid of your trenchant insights. If so, go ahead and be comforted.
>

This isn't that complicated, and it is related to the part of this thread regarding what you mean by behaviour; since there is an ambiguity there that I have pointed out, it would be useful if you'd answer.

https://groups.google.com/d/msg/talk.origins/NRAe8H0KIjs/9YoY2hFhBwAJ

Or maybe you'll answer there, but it might be easier if you answered here, and kept it in the one thread.


> >
> > > >
> > > >
> > > >
> > > > > >
> > > > > > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > > > > > >
> > > > > > > Here are your arguments, stated simply:
> > > > > > >
> > > > > > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > > > > > >
> > > > > > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > > > > > >
> > > > > > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > > > > > >
> > > > > > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, I'm not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> > > > > >
> > > > > > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
> > > > >
> > > > > Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
> > > > >
> > > > > So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
> > > > >
> > > >
> > > > Did you not understand when I had stated:
> > > >
> > > > "I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."
> > > >
> >
> > Why are you avoiding answering this?
>
> First, I'm not a dualist. Second, I think Penrose's stuff about quantum effects on microtubules is hogwash. But since it does not seem to me to be central to your argument, I'm not interested in the digression.
>
> But you can read a critique of Penrose's argument here http://arxiv.org/abs/quant-ph/9907009
>

I've explained to you before that this thread isn't about any argument; it is just me highlighting what feature I am referring to when I use the term "consciously experiencing". You've asked me for an explanation, and I've given more than one, and that was yet another one. Regarding your first point, are you suggesting that non-dualists are incapable of understanding which features the dualists were suggesting belonged to the soul/mind and not to the physical human, and that because you aren't a dualist you are therefore incapable of understanding which features they were?

Regarding your second point, you were right that Penrose and Hameroff's theory about orchestrated objective reduction isn't important to the point; it was added more to answer a point made by the poster who goes under the name *Hemidactylus*, which was that "there's no interface point where a ghost can open a door". All that was required from Penrose and Hameroff's theory is that there exists a mechanism which allows quantum effects in microtubules to influence neural firing; things like quantum entanglement are irrelevant.

> >
> >
> > > >
> > > > > Of course, defining consciousness by what it is not is not the most direct route, so presumably you have more to add. If you mean "qualia," go ahead and say it. If you mean "subjectivity," go ahead and say it. I assure you I'll have no trouble understanding what you mean, even if I think you're wrong.
> > > >
> > > > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
> > >
> > > OK. Qualia. On my view the qualia of blue is the sum of the following behaviors: the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be affected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
> >
> > What do you mean by "adjacent colours"? Do you mean light frequencies?
>
> I mean the colors of whatever things are adjacent to the blue thing that I'm looking at.
>

So the light frequencies reflected by the things adjacent to the blue thing?

> >Also what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?
>
> I'm talking about the whole process, blue receptive cones, visual processing, accessing words for colors, stimulating memories, etc.
>

When you say accessing words, or stimulating memories, do you mean anything other than brain processes?

> >
> > Take for example this quote from Ullin Place:
> >
> > 'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'
> >
> > Can you understand the difference between statements about consciousness and the statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?
>
> I understand that Ullin Place thinks there is a difference between statements about consciousness and statements about brain processes. But since I think that consciousness *is* a brain process, I hold that statements about consciousness *are* statements about brain processes, even if they are expressed in different words.

And refer to different features, presumably. So a neurosurgeon, for example, could be operating on someone's brain, discussing certain features of the brain processes, those observable from a third-person perspective, while the person being operated on could be discussing features of those same brain processes that are only observable from a first-person perspective. So, with regards to Place's point (a), the person could describe the features that are observable only from a first-person perspective, the sensations and mental imagery for example, without knowing anything about the features of the brain processes that the neurosurgeon is observing, or even that the features the neurosurgeon is observing exist.

Bill Rogers

unread,
Nov 26, 2015, 11:53:46 AM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 11:28:46 AM UTC-5, someone wrote:
> On Thursday, November 26, 2015 at 3:58:47 PM UTC, Bill Rogers wrote:
> > On Thursday, November 26, 2015 at 9:18:46 AM UTC-5, someone wrote:
> > > On Thursday, November 26, 2015 at 2:03:45 PM UTC, Bill Rogers wrote:
> > > > On Thursday, November 26, 2015 at 8:48:45 AM UTC-5, someone wrote:
> > > > > On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> > > > > > On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> > > > > > >
> > > > > > > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated: "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
> > > > >
> > > > > Why didn't you give an example? Or, if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table, where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded, would prevent it from ever being considered conscious? If not, then why not assume that it performs whatever behaviour you think it would require? I'll remind you of the robot again.
> > > > >
> > > > > Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (so a range from 0-255), each representing a colour intensity. These come through 3 channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> > > > >
> > > > > Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera, which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera, which uses the A channel for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> > > > >
> > > > > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?
> > >
> > > Why are you avoiding answering this?
> >
> > Because, based on your track record, your complicated scenarios inevitably lead to dead ends and are a waste of time. It may comfort you to think that people avoid them because they are afraid of your trenchant insights. If so, go ahead and be comforted.
> >
>
> This isn't that complicated, and it is related to the part of this thread regarding what you mean by behaviour; since there is an ambiguity there that I have pointed out, it would be useful if you'd answer.
>
> https://groups.google.com/d/msg/talk.origins/NRAe8H0KIjs/9YoY2hFhBwAJ
>
> Or maybe you'll answer there, but it might be easier if you answered here, and kept it in the one thread.

If you don't understand what I mean by behavior, just ask, and I'll try to resolve the ambiguity. No need for robots. As I said before, I consider a system's behavior to be absolutely everything about it that can in principle be detected by observation and experiment. If that's not clear enough, tell me what part you can't grasp, and I'll try again.
>
>
> > >
> > > > >
> > > > >
> > > > >
> > > > > > >
> > > > > > > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > > > > > > >
> > > > > > > > Here are your arguments, stated simply:
> > > > > > > >
> > > > > > > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > > > > > > >
> > > > > > > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > > > > > > >
> > > > > > > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > > > > > > >
> > > > > > > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, I'm not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> > > > > > >
> > > > > > > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
> > > > > >
> > > > > > Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
> > > > > >
> > > > > > So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
> > > > > >
> > > > >
> > > > > Did you not understand when I had stated:
> > > > >
> > > > > "I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."
> > > > >
> > >
> > > Why are you avoiding answering this?
> >
> > First, I'm not a dualist. Second, I think Penrose's stuff about quantum effects on microtubules is hogwash. But since it does not seem to me to be central to your argument, I'm not interested in the digression.
> >
> > But you can read a critique of Penrose's argument here http://arxiv.org/abs/quant-ph/9907009
> >
>
> I've explained to you before that this thread isn't about any argument; it is just me highlighting what feature I am referring to when I use the term "consciously experiencing". You've asked me for an explanation, and I've given more than one, and that was yet another one. Regarding your first point, are you suggesting that non-dualists are incapable of understanding which features the dualists were suggesting belonged to the soul/mind and not to the physical human, and that because you aren't a dualist you are therefore incapable of understanding which features they were?

I'm not interested in what feature some abstract dualist is interested in. Just go ahead and tell me what you mean by "consciously experiencing." Or have you implicitly done that? By "consciously experiencing" do you mean "using your immaterial mind to influence the outcome of events on a quantum scale"?

You need to stop conflating "understanding an idea" with "agreeing with the idea." I'll try to be careful, too. For example, I understand dualism, I just disagree with it.

>
> Regarding your second point, you were right that Penrose and Hameroff's theory about orchestrated objective reduction isn't important to the point; it was added more to answer a point made by the poster who goes under the name *Hemidactylus*, which was that "there's no interface point where a ghost can open a door". All that was required from Penrose and Hameroff's theory is that there exists a mechanism which allows quantum effects in microtubules to influence neural firing; things like quantum entanglement are irrelevant.

Well, no. If decoherence occurs fast enough, faster than the time scales on which microtubules move and neurons fire, then everything acts classically. But again, this is not central and I'm not interested in the digression.
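The timescale comparison behind that decoherence point can be made concrete with a back-of-the-envelope calculation. The specific figures below are my illustrative assumptions, approximate order-of-magnitude estimates in the spirit of the critique linked earlier in the thread (Tegmark, quant-ph/9907009), not values asserted by either poster.

```python
# Rough order-of-magnitude comparison of timescales. The figures are
# approximate illustrative estimates (roughly in line with the linked
# Tegmark critique), not measurements asserted in this thread.

microtubule_decoherence_time = 1e-13  # seconds (assumed upper-end estimate)
neural_dynamics_timescale = 1e-3      # seconds (typical neuron firing)

ratio = neural_dynamics_timescale / microtubule_decoherence_time
print(f"Neural dynamics are roughly {ratio:.0e} times slower than decoherence")

# If decoherence is this many orders of magnitude faster than neural
# dynamics, any quantum superposition in a microtubule is destroyed long
# before it could influence a neuron's firing, so the firing behaves
# classically.
```

On these assumed numbers the gap is about ten orders of magnitude, which is the sense in which "everything acts classically" above.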
>
> > >
> > >
> > > > >
> > > > > > Of course, defining consciousness by what it is not is not the most direct route, so presumably you have more to add. If you mean "qualia," go ahead and say it. If you mean "subjectivity," go ahead and say it. I assure you I'll have no trouble understanding what you mean, even if I think you're wrong.
> > > > >
> > > > > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
> > > >
> > > > OK. Qualia. On my view the qualia of blue is the sum of the following behaviors: the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be affected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
> > >
> > > What do you mean by "adjacent colours"? Do you mean light frequencies?
> >
> > I mean the colors of whatever things are adjacent to the blue thing that I'm looking at.
> >
>
> So the light frequencies reflected by the things adjacent to the blue thing?

And the effect those frequencies from the adjacent objects have on the way my brain processes blue.

>
> > >Also what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?
> >
> > I'm talking about the whole process, blue receptive cones, visual processing, accessing words for colors, stimulating memories, etc.
> >
>
> When you say accessing words, or stimulating memories, do you mean anything other than brain processes?

No.

>
> > >
> > > Take for example this quote from Ullin Place:
> > >
> > > 'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'
> > >
> > > Can you understand the difference between statements about consciousness and the statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?
> >
> > I understand that Ullin Place thinks there is a difference between statements about consciousness and statements about brain processes. But since I think that consciousness *is* a brain process, I hold that statements about consciousness *are* statements about brain processes, even if they are expressed in different words.
>
> And refer to different features presumably. So a neurosurgeon for example could be operating on someone's brain discussing certain features of the brain processes for example, those observable from a third person perspective, and the person being operated on could be discussing features of those same brain processes that are only observable from a first person perspective.

I would say that the same features look different from different perspectives, rather than that some features are observable only from one of the perspectives. But that's just a question of how to use words.

>So with regards to Place's point (a), the person could describe the features that are observable only from a first person perspective (the sensations and mental imagery, for example) without knowing anything about the features of the brain processes that the neurosurgeon is observing, or even that the features the neurosurgeon is observing exist.

OK. I'd call them different perspectives, rather than different features. A pyramid looks square viewed from the bottom and triangular viewed from one side, but it's still a single pyramid.


someone

unread,
Nov 26, 2015, 12:08:47 PM11/26/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 4:53:46 PM UTC, Bill Rogers wrote:
> On Thursday, November 26, 2015 at 11:28:46 AM UTC-5, someone wrote:
> > On Thursday, November 26, 2015 at 3:58:47 PM UTC, Bill Rogers wrote:
> > > On Thursday, November 26, 2015 at 9:18:46 AM UTC-5, someone wrote:
> > > > On Thursday, November 26, 2015 at 2:03:45 PM UTC, Bill Rogers wrote:
> > > > > On Thursday, November 26, 2015 at 8:48:45 AM UTC-5, someone wrote:
> > > > > > On Thursday, November 26, 2015 at 1:23:47 PM UTC, Bill Rogers wrote:
> > > > > > > On Thursday, November 26, 2015 at 7:58:46 AM UTC-5, someone wrote:
> > > > > > > >
> > > > > > > > I'm not suggesting that the robot just recognises a certain frequency of light and names it, as the first two sentences stated : "Imagine there is a robot. The Mark 1, which gives the type of behaviour that you would classify as consciousness." So give an example of the type of behaviour you'd require it to demonstrate for you to claim that it is doing the consciousness behaviour...
> > > > > >
> > > > > > Why didn't you give an example? Or, if you'd prefer, just imagine that it demonstrates whatever behaviour you think that would involve. Was it that having a look-up table, where certain channel values relate to certain words for the purposes of communicating in whatever communication language the software has loaded, would prevent it from ever being considered conscious? If not, then why not assume that it performs whatever behaviour you think it would require? I'll remind you of the robot again.
> > > > > >
> > > > > > Imagine there is a robot. The Mark 1, which gives the type of behaviour that *you* would classify as consciousness. It has cameras for eyes, with each pixel having three 8-bit intensity values (a range from 0-255), each representing a colour intensity. These come through 3 channels: A, B, and C. Internally, in software ("version 1"), the robot holds a table, so to speak, linking words to the colour values. E.g. if channel A has an intensity of 255, and B & C intensities of 0, the robot will use the word "red" to describe the colour; if B is 255 and A & C are 0, it will use the word "green"; and if C is 255 and A & B are 0, it will use the word "blue" to describe the colour. You can imagine that all its reactions to colour are based off the ABC channel values.
> > > > > >
> > > > > > Imagine that the Mark 1 can take two types of eye camera. The RGB eye camera which, for each pixel, uses channel A for the red light intensity data, channel B for the green light intensity data, and channel C for the blue light intensity data. Or the BGR eye camera, which uses channel A for the blue light intensity data, channel B for the green light intensity data, and channel C for the red light intensity data.
> > > > > >
> > > > > > Consider four Mark 1 robots, two with RGB eye cameras, and two with BGR eye cameras. From each pair, one is in a red room with a blue table, and the other is in a blue room with a red table. Do both the Mark 1s (the one with RGB camera eyes, and the one with BGR camera eyes) in the red room with a blue table experience the room with similar red qualia as you would, and the table with a similar blue qualia as you would?
> > > >
> > > > Why are you avoiding answering this?
> > >
> > > Because, based on your track record, your complicated scenarios inevitably lead to dead ends and are a waste of time. It may comfort you to think that people avoid them because they are afraid of your trenchant insights. If so, go ahead and be comforted.
> > >
> >
> > This isn't that complicated, and it is related to the part of this thread regarding what you mean by behaviour, and since there is an ambiguity there that I have pointed out, it would be useful if you'd answer.
> >
> > https://groups.google.com/d/msg/talk.origins/NRAe8H0KIjs/9YoY2hFhBwAJ
> >
> > Or maybe you'll answer there, but it might be easier if you answered here, and kept it in the one thread.
>
> If you don't understand what I mean by behavior, just ask, and I'll try to resolve the ambiguity. No need for robots. As I said before, I consider a system's behavior to be absolutely everything about it that can in principle be detected by observation and experiment. If that's not clear enough, tell me what part you can't grasp, and I'll try again.

I did ask, the link is to a post where I was questioning why you didn't answer. Though I think we may be getting to it anyway lower down in this thread. If it becomes ambiguous, I might bring in the robot again just to be clear.
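For what it's worth, the Mark 1's channel-to-word wiring as described in the quoted scenario can be sketched in a few lines of code. This is only an illustrative reading of the scenario; the table and function names are invented:

```python
# "Version 1" software: a lookup from (A, B, C) channel intensities
# to colour words, as the scenario describes.
WORD_TABLE = {
    (255, 0, 0): "red",    # channel A saturated
    (0, 255, 0): "green",  # channel B saturated
    (0, 0, 255): "blue",   # channel C saturated
}

def rgb_camera(red, green, blue):
    # RGB eye: red light -> channel A, green -> B, blue -> C
    return (red, green, blue)

def bgr_camera(red, green, blue):
    # BGR eye: blue light -> channel A, green -> B, red -> C
    return (blue, green, red)

def colour_word(camera, red, green, blue):
    # The robot's verbal reaction is based only on the ABC values.
    return WORD_TABLE.get(camera(red, green, blue), "unknown")

print(colour_word(rgb_camera, 255, 0, 0))  # red
print(colour_word(bgr_camera, 255, 0, 0))  # blue
```

On this reading, with the "version 1" software left unchanged, swapping the eye camera swaps which light frequencies trigger which colour words, which is precisely what makes the question about the four robots' qualia non-trivial.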


> >
> > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > > >
> > > > > > > > > Here's the reason I'm not interested in working through things by answering your questions. After months and months of my and others trying to answer your questions, interpreting your questions, trying to make sense of them in the best possible light, getting diverted into pointless side issues, finally two or three of your arguments against physicalism have emerged from the mist. And all of them could have been stated up front in your first post. But whenever it becomes clear what you are arguing, and when people critique your argument, you just run off and take refuge in more pointless questions, rather than deal with the arguments that you've coughed up after so much labor.
> > > > > > > > >
> > > > > > > > > Here are your arguments, stated simply:
> > > > > > > > >
> > > > > > > > > 1. If everything follows the same physical laws, then, if we knew all the laws perfectly, we could predict the behavior of conscious systems without making any reference to consciousness. That would mean consciousness is epiphenomenal.
> > > > > > > > >
> > > > > > > > > 2. A sufficiently large collection of components (hand raisers or random electrical gates) or a subset of the collection could, by chance behave exactly as the computer controlling a conscious robot. So consciousness could arise randomly, even say, among an enormous collection of people randomly raising their hands.
> > > > > > > > >
> > > > > > > > > 3. A human brain experiencing, say, the sensation of eating a strawberry cupcake, might be produced on an alien world, by an alien artist simply interested in producing a mechanism to control an artistic display of fairy lights, even though the alien knew nothing of humans. And it's obviously absurd that that brain could feel the experience of eating a strawberry cupcake when there are no humans, no strawberries, and no cupcakes anywhere to be seen.
> > > > > > > > >
> > > > > > > > > They are all arguments that stand in need of some defense. But whenever we start talking about your actual arguments you run off into another thread. So, no, I'm not interested in going through another month-long string of questions to arrive at another similarly non-spectacular refutation of physicalism.
> > > > > > > >
> > > > > > > > But you can't understand any of the arguments, because they all rely on understanding the feature I am referring to as consciously experiencing.
> > > > > > >
> > > > > > > Then go ahead and say what you mean when you refer to "consciously experiencing." It's hard to see why you are so afraid to lay out your position.
> > > > > > >
> > > > > > > So far we've got (1) consciousness is not epiphenomenal and (2) consciousness is not a behavior. Now, let's be sure I understand what you mean here. When you say consciousness is not a behavior do you mean it is not something macroscopic and externally observable, like talking, running, grimacing, etc? If that's what you mean, then I agree with you. Or do you mean that consciousness is not something that is even in principle observable? If that's what you mean, then I disagree. Though in either case I'll have no trouble understanding what you mean, as long as you just come out and *say* what you mean.
> > > > > > >
> > > > > >
> > > > > > Did you not understand when I had stated:
> > > > > >
> > > > > > "I could try to ask you whether you understood dualism, and understood what the features the dualists believed weren't features of the physical human but were features of the soul/mind, the response to which influences the human behaviour perhaps through quantum events which microtubules are sensitive to and are able to make neuron firings sensitive to. You could presumably imagine the human in which there are no quantum events corresponding to such a mind/soul but which were just by chance. So the human behaviour was the same, but the explanation different. Whether you can or not I don't know. But if you can then the feature I am talking about are the sensations that the dualists claim are a feature of the mind."
> > > > > >
> > > >
> > > > Why are you avoiding answering this?
> > >
> > > First, I'm not a dualist. Second, I think Penrose's stuff about quantum effects on microtubulues is hogwash. But since it does not seem to me to be central to your argument, I'm not interested in the digression.
> > >
> > > But you can read a critique of Penrose's argument here http://arxiv.org/abs/quant-ph/9907009
> > >
> >
> > I've explained to you before that this thread isn't about any argument, it is just me highlighting what feature I am referring to when I use the term "consciously experiencing". You've asked me for an explanation, and I've given more than one, and that was yet another one. Regarding your first point, are you suggesting that non-dualists are incapable of understanding which features the dualists were suggesting were of the soul/mind and not features of the physical human, and that because you aren't a dualist you are therefore incapable of understanding which features they were?
>
> I'm not interested in what feature some abstract dualist is interested in. Just go ahead and tell me what you mean by "consciously experiencing." Or have you implicitly done that? By "consciously experiencing" do you mean "using your immaterial mind to influence the outcome of events on a quantum scale"?
>
> You need to stop conflating "understanding an idea" with "agreeing with the idea." I'll try to be careful, too. For example, I understand dualism, I just disagree with it.
>

I wasn't asking whether you agreed with it, I was just asking you whether you could understand which features they were claiming were of the mind/soul and not features of the physical human. So are you saying that you can understand which features they are (but don't agree that they are features of the mind/soul but are instead features of the physical human)?

> >
> > Regarding your second point, you were right: Penrose and Hameroff's theory about orchestrated reduction isn't important to the point. It was added in more to answer a point that the poster who goes under the name *Hemidactylus* made, which was that "there's no interface point where a ghost can open a door". All that was required from Penrose and Hameroff's theory is that there exists a mechanism which allows quantum effects in microtubules to influence neural firing; things like quantum entanglement are irrelevant.
>
> Well, no. If decoherence occurs fast enough, faster than the time scales on which microtubules move and neurons fire, then everything acts classically. But again, this is not central and I'm not interested in the digression.
>

But dualism wouldn't rely on entanglement at all, so I'm not clear on why decoherence would matter. Perhaps you could explain why you think it would be relevant?

> > > >
> > > >
> > > > > >
> > > > > > > Of course, defining consciousness by what it is not, is not the most direct route, so, presumably you have more to add. If you mean, "qualia" go ahead, and say it. If you mean "subjectivity" go ahead and say it. I assure I'll have no trouble understanding what you mean, even if I think you're wrong.
> > > > > >
> > > > > > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
> > > > >
> > > > > OK. Qualia. On my view the qualia of blue is the sum of the following behavior: the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be affected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
> > > >
> > > > What do you mean by "adjacent colours" do you mean light frequencies?
> > >
> > > I mean the colors of whatever things are adjacent to the blue thing that I'm looking at.
> > >
> >
> > So the light frequencies reflected by the things adjacent to the blue thing?
>
> And the effect those frequencies in from the adjacent objects have on the way my brain processes blue.
>
> >
> > > >Also what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?
> > >
> > > I'm talking about the whole process, blue receptive cones, visual processing, accessing words for colors, stimulating memories, etc.
> > >
> >
> > When you say accessing words, or stimulating memories, do you mean anything other than brain processes?
>
> No.
>
> >
> > > >
> > > > Take for example this quote from Ullin Place:
> > > >
> > > > 'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'
> > > >
> > > > Can you understand the difference between statements about consciousness and the statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?
> > >
> > > I understand that Ullin Place thinks there is a difference between statements about consciousness and statements about brain processes. But since I think that consciousness *is* a brain process, I hold that statements about consciousness *are* statements about brain processes, even if they are expressed in different words.
> >
> > And refer to different features presumably. So a neurosurgeon for example could be operating on someone's brain discussing certain features of the brain processes for example, those observable from a third person perspective, and the person being operated on could be discussing features of those same brain processes that are only observable from a first person perspective.
>
> I would say that the same features look different from different perspectives, rather than that some features are observable only from one of the perspectives. But that's just a question of how to use words.
>

I don't think it is, see below.

> >So with regards to Place's point (a) the person could describe the features that are observable only from a first person perspective the sensations and mental imagery for example without knowing anything about features of the brain processes that the neurosurgeon is observing, or even that the features the neurosurgeon is observing exist.
>
> OK. I'd call them different perspectives, rather than different features. A pyramid looks square viewed from the bottom and triangular viewed from one side, but it's still a single pyramid.

Does a cup have a first person perspective? If not, then having a first person perspective is a feature that not all physical things would have.

Bill Rogers

unread,
Nov 26, 2015, 2:53:45 PM11/26/15
to talk-o...@moderators.isc.org
As I said above, I understand dualism, I just don't agree with it. If you cannot understand that that statement answers your question, that's on you.

Let me repeat it. I understand dualism. That means I understand that it claims that qualia, free will, subjective experience, and the mind are not physical features of the brain. There are lots of dualists, though, so just to be sure, maybe you should say which features, exactly, you think are features of the mind/soul rather than features of the physical human.
>
> > >
> > > Regarding your second point you were right Penrose and Hameroffs' theory about orchestrated reduction aren't important to the point, it was added in more to answer a point that the poster which goes under the name *Hemidactylus* made, which was that "there's no interface point where a ghost can open a door". All that was required from Penrose and Hameroffs' theory is that there exists a mechanism which allows quantum effects in microtubules to influence neural firing, things like quantum entanglement are irrelevant.
> >
> > Well, no. If decoherence occurs fast enough, faster than the time scales on which microtubules move and neurons fire, then everything acts classically. But again, this is not central and I'm not interested in the digression.
> >
>
> But dualism wouldn't rely on entanglement at all, so I'm not clear on why decoherence would matter. Perhaps you could explain why you think it would be relevant?

Dualism, and the particular version espoused by Penrose, are not the same thing. In any case, Penrose's version still suffers from the problems that afflict all dualist accounts. How does the immaterial mind interact with material particulars to change their quantum behavior? All bringing in quantum mechanics does is hide the interaction between the material and the non-material a bit. If the non-material is non-material, how is it localized in space? Why does the mind only affect the quantum behavior of particles within the brain it is associated with? One can wave away all these problems, and if you are unsatisfied with physicalism, perhaps you have no choice but to wave them away, but the dualist account is no more complete than the physicalist account, and intoning the words "quantum mechanics" does not make it any more complete.
>
> > > > >
> > > > >
> > > > > > >
> > > > > > > > Of course, defining consciousness by what it is not, is not the most direct route, so, presumably you have more to add. If you mean, "qualia" go ahead, and say it. If you mean "subjectivity" go ahead and say it. I assure I'll have no trouble understanding what you mean, even if I think you're wrong.
> > > > > > >
> > > > > > > Well I'd consider qualia a feature of consciously experiencing. What behaviour were you considering the qualia of blue to be?
> > > > > >
> > > > > > OK. Qualia. On my view the qualia of blue is the sum of the following behavior, the firing of blue-receptive cone cells in the retina, the transmission of that signal up the visual pathways, the integration of the color information with the shape and boundary information collected by the rods, the higher level visual processing involving identification of whatever it was that was blue, the modulation of the perception of blue by adjacent colors, the triggering of the word "blue," the associations brought up from memory of other blue things, the interaction of those memories with my current emotional state, any connections to thoughts I'd been having about anything that might be effected by seeing that blue thing. I could go on, but I suspect you get the drift. All those things are behaviors that my brain does in response to the sight of that blue thing, and that's the behavior that I would identify as a qualia of blue.
> > > > >
> > > > > What do you mean by "adjacent colours" do you mean light frequencies?
> > > >
> > > > I mean the colors of whatever things are adjacent to the blue thing that I'm looking at.
> > > >
> > >
> > > So the light frequencies reflected by the things adjacent to the blue thing?
> >
> > And the effect those frequencies in from the adjacent objects have on the way my brain processes blue.
> >
> > >
> > > > >Also what do you mean by "seeing that blue thing"? Are you talking about brain processes associated with the firing of blue-receptive cone cells in the retina?
> > > >
> > > > I'm talking about the whole process, blue receptive cones, visual processing, accessing words for colors, stimulating memories, etc.
> > > >
> > >
> > > When you say accessing words, or stimulating memories, do you mean anything other than brain processes?
> >
> > No.
> >
> > >
> > > > >
> > > > > Take for example this quote from Ullin Place:
> > > > >
> > > > > 'I want to stress from the outset that in defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies, and sensations we are talking about processes in our brains. That is, I am not claiming that statements about sensations and mental images are reducible to or analyzable into statements about brain processes, in the way in which "cognition statements" are analyzable into statements about behaviour. To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways, and (c) by the fact that there is nothing self-contradictory about the statement "X has a pain but there is nothing going on in his brain." What I do want to assert, however, is that the statement "Consciousness is a process in the brain," although not necessarily true, is not necessarily false.'
> > > > >
> > > > > Can you understand the difference between statements about consciousness and the statements about brain processes, and if so, was your description supposed to be a descriptive statement about brain processes?
> > > >
> > > > I understand that Ullin Place thinks there is a difference between statements about consciousness and statements about brain processes. But since I think that consciousness *is* a brain process, I hold that statements about consciousness *are* statements about brain processes, even if they are expressed in different words.
> > >
> > > And refer to different features presumably. So a neurosurgeon for example could be operating on someone's brain discussing certain features of the brain processes for example, those observable from a third person perspective, and the person being operated on could be discussing features of those same brain processes that are only observable from a first person perspective.
> >
> > I would say that the same features look different from different perspectives, rather than that some features are observable only from one of the perspectives. But that's just a question of how to use words.
> >
>
> I don't think it is, see below.
>
> > >So with regards to Place's point (a) the person could describe the features that are observable only from a first person perspective the sensations and mental imagery for example without knowing anything about features of the brain processes that the neurosurgeon is observing, or even that the features the neurosurgeon is observing exist.
> >
> > OK. I'd call them different perspectives, rather than different features. A pyramid looks square viewed from the bottom and triangular viewed from one side, but it's still a single pyramid.
>
> Does a cup have a first person perspective? If not then having a first person perspective is a feature that not all physical things would have.

A cup does not have a first person perspective. Not all physical things have a first person perspective. We already agree, I think, that not all physical things are conscious.

Andre G. Isaak

unread,
Nov 26, 2015, 4:43:45 PM11/26/15
to talk-o...@moderators.isc.org
In article <080b04f9-7c44-41df...@googlegroups.com>,
Bill Rogers <broger...@gmail.com> wrote:

> I understand that Ullin Place thinks there is a difference between statements
> about consciousness and statements about brain processes. But since I think
> that consciousness *is* a brain process, I hold that statements about
> consciousness *are* statements about brain processes, even if they are
> expressed in different words.

Place would agree with you on this point. Someone likes quoting that
particular passage (from his 1956 paper "Is consciousness a brain
process?"), but he doesn't seem to have read (or to have understood)
what follows it, since what Place is actually saying is diametrically
opposed to what Someone claims he is saying.

The entire paper can be found here:
<http://preview.tinyurl.com/p88s35o>

Andre

Bob Casanova

unread,
Nov 27, 2015, 1:43:41 PM11/27/15
to talk-o...@moderators.isc.org
On Thu, 26 Nov 2015 14:40:18 -0700, the following appeared
in talk.origins, posted by "Andre G. Isaak"
<agi...@gmail.com>:
I think an analogy could be made (and probably has, multiple
times) using computers. The brain processes would correspond
to the operation of the underlying hardware; consciousness
would correspond to the operation of the software
implemented on that hardware. So consciousness could be
considered a brain process, but only at second hand, and
only because the hardware supports it.
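To make the hardware/software analogy concrete, here is a toy sketch (all names invented for illustration): one high-level "process" runs unchanged on either of two substrates, just as the analogy takes consciousness to be a process the underlying hardware supports.

```python
# One "software" process realised on two different "hardware"
# back-ends; only the substrate differs, not the process.
def greet(render):
    # the "software" layer: a process defined independently of substrate
    return render("hello")

def loudspeaker(msg):
    # one "hardware" realisation
    return "[speaker] " + msg

def screen(msg):
    # another "hardware" realisation
    return "[screen] " + msg

print(greet(loudspeaker))  # [speaker] hello
print(greet(screen))       # [screen] hello
```

The point of the sketch is only multiple realizability: the same process description is satisfied by different underlying mechanisms.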
--

Bob C.

"The most exciting phrase to hear in science,
the one that heralds new discoveries, is not
'Eureka!' but 'That's funny...'"

- Isaac Asimov

eridanus

unread,
Nov 28, 2015, 6:23:39 AM11/28/15
to talk-o...@moderators.isc.org
If one considers a computer that speaks and interacts verbally with humans
in a very natural way, it would not be distinguishable from another human,
at least in the way it speaks. All this computer requires is a language
flexible enough to simulate a human being speaking.

You can ask a computer some philosophical question about the famous
example, "the sky is blue".
You tell the computer, or any other person,
"Come here to the window. Do you see the sky?"
"Yeah."
"What color is it?"
"Blue."
"How do you know it is blue?"
"Well, everyone says it is blue."
"But what if everyone is wrong?"
"Then, in this case, I must be wrong as well."
"What do you think if I tell you the sky is green?"
"That you are crazy."

What these consciousness-of-Krishna people seem to ignore is that the
names of the colors are a linguistic convention. If you were a mutant
and had color perceptions the rest of us know nothing about, you would
still be saying the sky is blue.
Eri
But how do you know it is blue?
I do not know it. People say the color of the sky is blue.
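An exchange of that sort needs no understanding at all; a lookup table of canned replies suffices. A minimal sketch (questions lightly paraphrased, all names invented):

```python
# A trivial "flexible language" simulator: canned replies keyed on
# the question, illustrating that the dialogue above requires only
# a lookup, not comprehension.
REPLIES = {
    "Do you see the sky?": "Yeah.",
    "What color is it?": "Blue.",
    "How do you know it is blue?": "Well, everyone says it is blue.",
    "But what if everyone is wrong?": "Then I must be wrong as well.",
    "What if I tell you the sky is green?": "That you are crazy.",
}

def answer(question):
    # Fall back to a stock reply for anything outside the table.
    return REPLIES.get(question, "I have no answer for that.")

print(answer("What color is it?"))  # Blue.
```

Of course, a real conversational program would need far more than a table, but the sketch shows why verbal behaviour alone settles nothing about what, if anything, the speaker experiences.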


Bob Casanova

unread,
Nov 28, 2015, 1:23:45 PM11/28/15
to talk-o...@moderators.isc.org
On Sat, 28 Nov 2015 03:18:30 -0800 (PST), the following
appeared in talk.origins, posted by eridanus
<leopoldo...@gmail.com>:
>if one considers a computer that speak and interact verbally with humans
>in a very natural way, it would not be distinguishable from other human,
>at least on the way it is speaking. All that requires this computer is to
>have a flexible enough language, that could simulate a human being speaking.
>
>You can ask some philosophical question to a computer about the question of
>the famous example, "the sky is blue".
>You tell the computer or any other person,
>"Come here to the window. Do you see the sky?"
>"Yeah."
>"What color it is?"
>"Blue."
>"Why you know it is blue?"
>"well, all people says it is blue."
>"But what if all people is wrong?"
>"Then, in this case I must be wrong as well."
>"What do you think if I tell you the sky is green?"
>"That you are crazy."
>
>What this people of the consciousness of Krishna seem to ignore, it that
>the name of the colors is a linguistic convention. If you were a mutant
>and had other perceptions we ignore about the colors, you would be saying
>as well, the sky is blue.
>Eri
>But how do you know it is blue?
>I do not know it. People says the color of the sky is blue.

Valid points, but nothing to do with the analogy.

eridanus

unread,
Nov 28, 2015, 2:38:40 PM11/28/15
to talk-o...@moderators.isc.org
OK, but I have a problem. I do not really know what this guy means to say
about spirituality, or consciousness. I mean, he mentioned something about
us saying "the sky is blue" and gave the sentence a sort of mystical
meaning I cannot fathom. We cannot be sure what this is about, with
the phrase "the sky is blue". We cannot have any idea how another person
sees the color of the sky, yet he is using, like me, the same phrase, "the
sky is blue". I recall that when I was an adolescent we had this same
discussion. Someone said that perhaps "someone could see a green color as
red". I replied that this must be a sort of constant, and he would be using
the same word as other people and say this color is green. For people use
the word "green" for the color he sees. So this discussion is a little
absurd.

We use words in a consistent manner. So, even if there is some degree of
relativism, a kid is a kid, and the word is mostly used to refer to young
people or a boy. Or perhaps we call some old man a kid because he is doing
things a kid often does.
But we rarely change the use of words, and an "old man" is an old man, and
not a boy. The reverse is also true. A boy is a boy and not an old man.

I cannot imagine any logical circumstance in which some experience would
show the existence of some immaterial entity that could be seen as a spirit.

All our language, or most of it, refers to material phenomena. But religion
has introduced us to the use of words related to spirits. The problem is
that they have trouble with the language, for the language refers only to
material entities or material phenomena. So to jump from speaking of the
material to speaking of the immaterial would need a great mastery of the
language.
Eri


eridanus

unread,
Nov 28, 2015, 3:28:38 PM11/28/15
to talk-o...@moderators.isc.org
Well, I forgot you said something about a computer.
Let me try again.
Let us suppose we do not know what a computer has inside. A little like
the case of humans and their brain: we basically do not know how our brain
works.
Then, tit for tat. You are speaking with a robot, and you cannot say a
word about what makes the robot speak like a human being. Nor can you say
how humans manage to speak the way they do. The comparison is valid.
Even granting that the robot does not move or walk around like humans do,
but is static and has the form of a metallic box, the robot speaks through
some loudspeakers. And it hears your voice through a couple of holes the
box has. You know that because, if you experiment and plug those holes
with a thick rubber, the robot complains that it cannot hear you any more,
and that it cannot speak with you because it hears nothing you say, only a
very faint murmur.
Can we now invent some example that would tell us that humans have a soul
while the computer has not? How could we postulate that humans have a soul
while the robot of this example has not?
It is not valid to present inferences about how our brain works versus how
the brain of the robot works, for we basically cannot make any comparison
other than of the external aspects of the behavior of the two speakers.

A future computer that imitated perfectly the way humans speak, on any
question known or unknown (both cases are possible), would not tell us
anything about the internal mechanism of either speaker.
We can say the robot and the human can speak, because we see them speaking.
But we cannot explain how they learned. It probably has something to do
with some abstract word called "memory". But we cannot explain how this
memory acts, how it fades as time passes, or how it fails when we
accumulate excessive data in some "theoretical" storage room. Or perhaps
there is not a single room, but a number of small rooms that store data;
sometimes some data can be copied and stored in more than one place.

Then, by accumulating a lot of abstract words, we are not solving the
problem of understanding memory. But we can say that the builders of this
robot must know how they solved the problem of memory, aka "data storage".
And we are speaking here about a robot that speaks, though we do not know
how its makers solved the problem of making it imitate so perfectly the
way humans speak. Not that humans speak perfectly; we do not.
I know about the imperfection of human language because "someone" is
unable to write in plain English what he is saying. He has apparently
solved this problem by simulating that he is telling us something, while
what he says is unintelligible. It is only logical that he would have
insurmountable problems presenting an argument to prove the existence of
"immaterial beings". As our language is meant to refer to material
entities and phenomena, he cannot find a way to express what an angel is,
for example.
I recall that some years ago I was talking about how to tell a kid what an
angel is, for he had read, or heard someone read, a story about an angel.
I was seriously trying to work out how a mum would tell her child "what an
angel is". It is not at all easy. She must refer to some entity that is
invisible. How can you tell this to a kid? You must start by explaining
that the angel is "something that has an existence" but cannot be seen.
The next logical step is how to explain to a child that something that
cannot be seen can exist. You can tell him about a dark room. It is
totally dark, and you hear a noise like a drum. But there is nobody in
the room, you have no drum, and nobody has hit any drum. Who struck the
drum? The angel? This would be a good idea, if angels were in the habit
of playing drums. Do angels do this? No, angels do not play drums. Then
how can we know an angel exists, if he cannot play a drum? Can an angel
play the violin? No. Can it play the piano? No, it cannot. Then how do
we know that the angel exists? That is a good question. All we can say
is that "angels exist because some people speak about angels".
It is more or less the same case with god. God exists because some people
believe that god exists. The next step is: do you believe that god
exists? No. Why? Because I do not see god. OK, but why do other people
believe in god? That is a good question. Because some people believe the
words of other people? Because some people earn money by speaking of god?
Probably. If you got a sweet, or a piece of cake, for speaking about god,
would you speak about god? Yeah. Probably. Try it. Tell me something
about god. And the boy says, "God exists. Now give me a piece of the cake
you have in the kitchen."
Now I go to the kitchen with the kid and give him a piece of cake. I can
tell the boy: each time you fancy a piece of cake, you must say,
"I believe in god; give me a piece of cake".
But instead of that, I would tell my kid: you must declare, "I do not
believe in god. Now you must give me a piece of cake."

This is the essence of the question. We believe in god if we have been
trained to believe in god, and our mums gave us a piece of cake each time
we were hungry and wanted one. By accepting the existence of god, we were
given sweets, or flattering remarks, or other pleasant consequences. So
any imaginary entity exists because it is profitable, and sweet, to
believe. We have eaten a lot of cakes and sweets for being tender
believers.
Eri

someone

unread,
Nov 29, 2015, 4:18:39 AM11/29/15
to talk-o...@moderators.isc.org
On Thursday, November 26, 2015 at 9:43:45 PM UTC, Ymir wrote:
> In article <080b04f9-7c44-41df...@googlegroups.com>,
> Bill Rogers wrote:
>
> > I understand that Ullin Place thinks there is a difference between statements
> > about consciousness and statements about brain processes. But since I think
> > that consciousness *is* a brain process, I hold that statements about
> > consciousness *are* statements about brain processes, even if they are
> > expressed in different words.
>
> Place would agree with you on this point. Someone likes quoting that
> particular passage (from his 1956 paper "Is consciousness a brain
> process?"), but he doesn't seem to have read (or to have understood) what
> follows it, since what Place is actually saying is diametrically opposed
> to what Someone claims he is saying.
>
> The entire paper can be found here:
> <http://preview.tinyurl.com/p88s35o>
>

Place thought that consciousness was a brain process, but, as the quote showed, he wasn't suggesting that statements about consciousness were statements about brain processes. He made a distinction between the 'is' of composition and the 'is' of definition.

Have you bailed out of the following conversation:
https://groups.google.com/d/msg/talk.origins/Jder6vhBAww/-9jW6lw0AwAJ ?

Bob Casanova

unread,
Nov 29, 2015, 2:43:35 PM11/29/15
to talk-o...@moderators.isc.org
On Sat, 28 Nov 2015 12:25:42 -0800 (PST), the following
>Well, I forgot you spoke something about a computer.

OK. In fact, my only comment was about the validity of an
analogy between computers and brains *in this respect only*,
the hardware/software analogy. Philosophical discussions
about the contrasts and similarities aren't really of
interest to me, since I find them to be the equivalent of
navel-gazing. Sorry.

eridanus

unread,
Nov 29, 2015, 6:18:36 PM11/29/15
to talk-o...@moderators.isc.org
A computer with no software, or no electricity, must be the equivalent
of a brain in a state of coma.
A brain has some "software" of one class or another that is mostly built
in. Then the animal brain has the capability to learn to act according to
external stimuli.
We can consider a computer whose software is built in and which mostly
learns to act from external stimuli. This would explain how the computer
of my example learned to speak like a human. I am talking of a computer
that was built to learn from environmental stimuli. The computer can
learn to speak English or another language. It can even go to primary
school and learn the same things our kids do.
It could be a walking computer, a computer with binocular vision, a
couple of ears, a sense of touch, etc. But the computer can be sort of
asexual, to avoid problems.

This sort of robot could be a good analogue of a human with an ordinary brain.

An extraordinary brain would require more time to learn, as happens with
humans. The same can be true of a robot. Let's suppose that a robot has
a limited memory, as a person has, and that this memory is partly
erasable, to manage the limitations of storage space and reaction time.
The only solution for this computer is to erase all data that have not
been used for some time, or else to spare a few groups of memories that
were widely used in the past. Those groups of data could be erased only
in part, in case they are needed again in the future. In that situation
the data could be reconstructed with some economy of work.

Then the invocation of software as some well-known software would not be
valid, as we are speaking of an unknown robot that is able to learn in a
way analogous to humans. It can obey orders as humans do, or follow
instructions as humans do, with some errors of procedure, as in the case
of humans, for some stimuli, when they are excessive, can be confused
with others that look similar but are not. If a human cannot remember
some stimuli well and cannot answer some questions in a semester
examination, a human-robot would be in the same case.
I was presenting a robot that can speak with humans in a naturalistic way,
with all the normal errors that human beings commit. If a human cannot
recall some item that was stored one day, or that we figure was stored,
that would be something normal. As the amount of data in the brain
accumulates, recalling an item takes longer. But as the memory is
erasable, some data get lost, for they were not retrieved a sufficient
number of times. As time passes and new data come in, some older data are
erased to make space for the new. That is what I call a "flexible memory".

I do not see any room here for the question of the "conscious experience".
Eri