1. Definitions
If something in this article is referred to as 'consciously
experiencing', it will mean that it is like something to be that
thing. This definition was first put forward by Thomas Nagel in his
paper 'What is it like to be a bat?', and has subsequently been used
as the definition of choice by numerous other philosophy professors.
According to this definition, if, when considering what it might be
like to be a certain object, your answer was that it wouldn't be like
anything, then you wouldn't be considering the object to be
consciously experiencing. Whereas if you thought it would be like
something, then you would be considering the object to be consciously
experiencing. To illustrate: if, when considering what it would be
like to be a glass or a mobile phone, you thought that it wouldn't be
like anything, then you would be considering neither the glass nor the
mobile phone to be consciously experiencing. Whereas if you were
considering that it would be like something to be another human
(perhaps similar to what it was like to be you), then you would be
considering them to be consciously experiencing. The popular science
fiction film 'The Terminator' used the cinematic device of depicting a
first-person perspective for a robot from the future to suggest what
it would be like to be the robot, thus suggesting that the robot was
consciously experiencing.
The assumption that 'something that consciously experiences can have
its behaviour described in the same terms of physics as something that
doesn't' will be referred to throughout the article as the
'universality assumption'.
The term 'empirical evidence' refers in this article to evidence based
upon what it is like to be a human.
2. Aim
This article aims to highlight the contradictory position of any
materialist or physicalist theory which both makes the universality
assumption and claims to be based on empirical evidence.
3. The Thought Experiment Robot
Daniel Dennett in a paper 'What RoboMary Knows' made the claim that:
'If materialism is true, it should be possible (“in principle!”) to
build a material thing–call it a robot brain–that does what a brain
does, and hence instantiates the same theory of experience that we
do.'
Dennett further points out: 'thinking in terms of robots is a useful
exercise, since it removes the excuse that we don't yet know enough
about brains to say just what is going on that might be relevant,
permitting a sort of woolly romanticism about the mysterious powers of
brains to cloud our judgement'.
So, to avoid any sort of woolly romanticism about what we know, or
about the mysterious powers of any mechanism, clouding our judgement,
let us avoid asserting that materialism is true, or that the robot
brain does what a brain does.
Let us instead consider the operation of the robot and the robot brain
to be explainable given our current understanding of physics, and
suppose that the robot can pass the Turing Test as outlined by Alan
Turing in his paper 'Computing Machinery and Intelligence'. This would
mean that if you were corresponding via email with the individuals in
a group made up of humans and the robot, you wouldn't be able to
distinguish which correspondent was the robot.
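As an aside, and only to make the test setup concrete, here is a
minimal sketch in Python of the email version of the test described
above. It is illustrative only and not part of the thought experiment:
the correspondent labels and the 'judge' function are invented, and
the only point is that passing the test means a judge reading the
correspondence can do no better than chance at picking out the robot.

import random

def run_trial(correspondents, judge_guess):
    # correspondents: dict mapping a correspondent id to 'human' or 'robot'
    # (the labels are hidden from the judge). judge_guess: a function that
    # takes the list of ids and returns the id it believes is the robot.
    guess = judge_guess(list(correspondents))
    actual_robot = next(i for i, kind in correspondents.items()
                        if kind == "robot")
    return guess == actual_robot

# If the robot passes, a judge working only from the correspondence does
# no better than guessing at random:
correspondents = {"A": "human", "B": "human", "C": "robot", "D": "human"}
random_judge = lambda ids: random.choice(ids)
hits = sum(run_trial(correspondents, random_judge) for _ in range(10000))
print(hits / 10000)  # roughly 0.25, i.e. chance level for four correspondents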
4. Some theories of the conscious status of the robot
Certainly there can be many hypotheses about whether the robot of our
thought experiment is consciously experiencing, and about what any
such conscious experience might be like. A couple will be constructed
here for illustration purposes only. Both theories of consciously
experiencing examined here can be considered to be making the
universality assumption. It is this assumption that the article aims
to highlight a problem with; as such, the point isn't restricted to
the two theories put forward here, but can be applied to any theory
which makes this assumption.
Firstly, let us consider a school of thought based broadly upon a
suggestion put forward by Place in the paper 'Is Consciousness a Brain
Process?'. The suggestion is that while we may be able to identify our
conscious experiences separately from our brain processes, for example
in an out-of-body experience, our conscious experiences actually are
organic brain processes. What we identify as our conscious experiences
is really identical with the processing of our organic brain. Since
what we identify as conscious experiences is identical with the
physical activity of the organic brain, being one and the same, and
the robot hasn't an organic brain, this school of thought hypothesises
that the robot isn't consciously experiencing. Let us refer to this as
Theory A. While not dealt with directly here, it should be noted that
another branch of thought could hypothesise that it would be like
something to be the non-organic brain, but that it would be different
from being an organic brain.
The second school of thought is based broadly upon a suggestion by
Putnam in the paper 'Philosophy and Our Mental Life'. The suggestion
is that conscious experiences cannot just be identical with a neural
state as suggested by Theory A, for if they were, then we would have
to consider alien creatures not to be consciously experiencing simply
because their brains are organised differently, perhaps being based on
a different chemistry. Instead it suggests that part of the brain
performs a certain function, and can be identified as consciously
experiencing, and that any physical activity which performs the same
function, regardless of the chemistry or architecture used to perform
it, can be identified with the same conscious experiences. Let us
consider proponents of this view to argue that the robot brain is
performing the same function as the human brain in order to produce
equivalent behaviour. As such their hypothesis is that the robot is
consciously experiencing. Let us refer to this as Theory B.
5. Scientific experimentation to distinguish between the theories
For two theories regarding a subject matter to be scientifically
distinguishable, they must differ in the behaviour they imply for that
subject matter.
Since theories A and B make the universality assumption, they assume
that something could have its behaviour described in the same terms of
physics regardless of whether it is consciously experiencing or not.
So the expected behaviour for the robot is for it to follow those laws
of physics, both on the hypothesis that it is consciously experiencing
and on the hypothesis that it isn't. Thus competing theories which
make such an assumption will never suggest a difference in expected
behaviour for the robot simply because they differ with regard to
whether the robot would be consciously experiencing or not. So while
how the robot would react would be open to scientific investigation,
what it would be like to be it would not.
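To make the point concrete, here is a minimal sketch in Python of why
no behavioural data could separate the two theories under the
universality assumption. The behaviour probabilities are invented for
illustration; all that matters is that both theories assign the robot
exactly the same, physics-based probabilities, so the likelihood ratio
between them is 1 whatever is observed.

# Both theories defer to the same physics-based prediction of behaviour.
physics_prediction = {"greets": 0.6, "stays silent": 0.3, "walks away": 0.1}

def likelihood_under_theory_A(behaviour):
    # Theory A: the robot is not consciously experiencing; its behaviour
    # nonetheless follows the physics-based prediction.
    return physics_prediction[behaviour]

def likelihood_under_theory_B(behaviour):
    # Theory B: the robot is consciously experiencing; its behaviour follows
    # the very same physics-based prediction.
    return physics_prediction[behaviour]

for observed in physics_prediction:
    ratio = likelihood_under_theory_A(observed) / likelihood_under_theory_B(observed)
    print(observed, ratio)  # always 1.0: no observation favours either theory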
That does not mean, however, that all theories which make such an
assumption could never be distinguished from one another, for they may
make other assumptions which could be investigated. Though if these
additional assumptions were shown to be correct, then, given the
universality assumption, there would never be any behavioural
difference to distinguish whether they, or theory A or B, were correct.
To illustrate, consider a third theory, Theory C, which is based upon
a suggestion made by Koch and Crick in their joint paper
'Consciousness, neural basis of'. The suggestion is that there would
be a physical difference between neurons that contribute towards us
consciously experiencing and those that don't. Going further, let us
suppose that the proponents of Theory C suggest that there would be a
physical difference between neurons that contribute towards what it is
like to hear and those which contribute towards what it is like to
see. They suggest that, due to the lack of such physical
differentiation within the robot, if it did consciously experience,
the experience would likely consist of variations of a single
property, for example fluctuations of the colour green.
Now certainly Theory C could be distinguished from theories A and B if
its primary hypothesis, of a physical difference between neurons that
contribute towards our conscious experiences and those that don't,
were found to be false. Though if this primary hypothesis were
substantiated, there would still remain the issue of scientifically
distinguishing whether the hypothesis provided by theory A, B or C,
regarding the robot's conscious status, was correct.
6. Conclusion
If physicalists who make the universality assumption were asked to
consider how a 'zombie' would be expected to behave if it were
physically identical to a human and yet didn't consciously experience,
they might object. They might argue that it involves conceiving of a
conceptual framework or ontology in which there are two entities:
firstly the physical human, and secondly its consciousness, which can
be separated from the former to produce the zombie. So to stipulate in
a thought experiment that this should be done would be to stipulate
the adoption of such an ontology, and to beg the question that
implications for such an ontology have implications for the
physicalist ontology in which there is only the single entity, the
physical human.
There is an alternative approach, which is to ask whether there is any
difference in expected behaviour between the hypothesis that the
entity which is the physical human can correctly be identified as
consciously experiencing, and the null hypothesis that the entity
which is the physical human can correctly be identified as not
consciously experiencing. One hypothesis clearly won't correspond to
reality, and nor is it supposed to, since the two hypotheses are
mutually exclusive. In this approach there is no suggestion of an
ontology in which there are two entities, firstly the physical human
and secondly its consciousness which can be separated from the former.
Both hypotheses are suggestions about how the entity should be
correctly identified. In the same way, if all that could be measured
of an object falling through a vacuum was its rate of descent, how the
object should be identified with regard to shape could still be
hypothesised about; likewise if the object were instead descending
through the earth's atmosphere. Another example would be a robot
factory which produced a range of robots to perform different
functions, where the colour of the plastic used internally in each
robot varied depending on the function it was built for. If the
various function categories weren't known, and neither was the colour
range of plastics used, people could still hypothesise as to whether a
robot built for a particular task could be identified as containing
yellow plastic. Expected behaviours can be suggested for the various
hypotheses of how the entity should be identified.
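A minimal sketch in Python of the falling-object analogy above may
help. The mass, drag coefficients and areas are invented for
illustration; the point is only that in a vacuum the measured rate of
descent is identical for every shape hypothesis, whereas in the
atmosphere terminal velocity depends on shape, so the same kind of
measurement can then discriminate between the hypotheses.

import math

g, air_density, mass = 9.81, 1.2, 1.0  # gravity, air density, mass (assumed)

def speed_in_vacuum(seconds, drag_coefficient, frontal_area):
    # In a vacuum the rate of descent depends only on g and time; the shape
    # parameters go unused, so every shape hypothesis predicts the same reading.
    return g * seconds

def terminal_speed_in_air(drag_coefficient, frontal_area):
    # In the atmosphere terminal velocity depends on the shape, via drag.
    return math.sqrt(2 * mass * g / (air_density * drag_coefficient * frontal_area))

shapes = {"sphere": (0.47, 0.01), "flat plate": (1.28, 0.05)}
for name, (cd, area) in shapes.items():
    print(name,
          round(speed_in_vacuum(3.0, cd, area), 2),
          round(terminal_speed_in_air(cd, area), 2))
# In the vacuum column both shapes give the same reading; in air they differ.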
It might be questioned how considering what we know not to be reality,
such as the null hypothesis that the entity which is the physical
human would be correctly identified as not consciously experiencing,
can help us in our understanding of reality. The thought experiment of
whether the robot is consciously experiencing helps illustrate the
answer.
The thought experiment of the robot illustrates that if the
universality assumption were true, there would be no scientific way of
distinguishing between theories A and B. This isn't an issue of the
sensitivity of behavioural measurement, or of an inability to
construct a scenario in which the difference between the hypotheses
would become significant to the expected behaviour. It is that the
significant difference between the theories has no significance to the
expected behaviour. To dispute this, one would only have to state
conceptually how to discern whether the robot was consciously
experiencing or not, and thus how to distinguish whether theory A or B
was correct, thereby illustrating that the significant difference
between the theories is significant to an expectation of behaviour for
the robot. If whether the robot was consciously experiencing or not
isn't significant to how it is expected to behave, then it follows
that it wouldn't be significant to how humans would be expected to
behave either. So no theory which makes the universality assumption
can be considered to be based on empirical evidence, since that would
contradict the implication of the universality assumption, which is
that what it is like to be something, which is the basis of empirical
evidence, is not significant to the behaviour of the person
formulating the theory.
So, returning to the question of how the consideration of what we know
not to be reality, such as the null hypothesis that the entity which
is the physical human would be correctly identified as not consciously
experiencing, can help us understand reality: it does so in the same
way as the robot thought experiment, in that it helps us be clear
about the expected behavioural significance of something consciously
experiencing if the universality assumption were true, and about the
implications of that.
> 1. Definitions
>
> If something in this article is referred to as 'consciously
> experiencing' it will mean that it is like something to be that thing.
> This definition was first put forward by Thomas Nagel in his paper 'What
> is it like to be a bat', and has subsequently been used as the
> definition of choice by numerous other philosophy professors.
That is a rather nebulous definition.
Names of these "professors," please, to compare with the following:
__________
"...there is no question of there being a commonly accepted, exclusive
sense of the term... I prefer to use it as synonymous with 'mental
phenomenon' or 'mental act.' "
"...intentional in-existence, the reference to something as an object, is
a distinguishing characteristic of all mental phenomena. No physical
phenomenon exhibits anything similar."
Franz Brentano, 'Psychology from an empirical standpoint', 1874
" ...current values of parameters governing the high-level computations
of the operating system."
P. Johnson-Laird, 'A computational analysis of consciousness', Cognition
and Brain Theory, 1983
"Consciousness is like the Trinity; if it is explained so that you
understand it, it hasn't been explained correctly."
R.J. Joynt, 'Are Two Heads better than One?', Behavioural Brain Sciences,
1981
"'Consciousness' refers to those states of sentience and awareness that
typically begin when we awake from a dreamless sleep and continue until
we go to sleep again, or fall into a coma or die or otherwise become
'unconscious'."
John Searle, 'The Mystery of Consciousness', 1997
"What is meant by consciousness we need not discuss - it is beyond all
doubt."
Sigmund Freud, 'New Introductory Lectures on Psychoanalysis', 1933
"It is often held therefore (1) that a mind cannot help being constantly
aware of all the supposed occupants of its own private stage, and (2)
that it can also deliberately scrutinize by a species of non-sensuous
perception at least some of its own states and operations. Moreover both
this constant awareness (generally called 'consciousness'), and this non-
sensuous inner perception (generally called 'introspection') have been
supposed to be exempt from error."
Gilbert Ryle, 'The Concept of Mind', 1949
"...perhaps 'consciousness' is best seen as a sort of dummy term like
'thing'; useful for the flexibility that is assured by its lack of
specific content."
Kathleen Wilkes, 'Is Consciousness Important?', British Journal of
Philosophy of Science, 1984
"We are conscious of something, on this model, when we have a thought
about it. So a mental state will be conscious if it is accompanied by a
thought about that state...The core of the theory, then is that a mental
state is a conscious state when, and only when, it is accompanied by a
suitable HOT [Higher Order Thought]"
David M. Rosenthal, 'A Theory of Consciousness', The Nature of
Consciousness (ed Block, Flanagan and Güzeldere), 1997
"Behaviourism claims that consciousness is neither a definite nor a
usable concept. The behaviourist, who has been trained always as an
experimentalist, holds, further, that belief in the existence of
consciousness goes back to the ancient days of superstition and magic."
John Watson, 'Behaviourism', 1924
"'Consciousness' is a word worn smooth by a million tongues. Depending
upon the figure of speech chosen it is a state of being, a substance, a
process, a place, an epiphenomenon, an emergent aspect of matter, or the
only true reality."
George Miller, 'Psychology: the Science of Mental Life', 1962
"Consciousness, we shall find, is reducible to relations between objects,
and objects we shall find to be reducible to relations between different
states of consciousness; and neither point of view is more nearly
ultimate than the other."
T.S.Eliot (Doctoral dissertation), 1916
"The concept of consciousness is a hybrid or better, a mongrel concept:
the word 'consciousness' connotes a number of different concepts and
denotes a number of different phenomena... P-consciousness is
experience...A is access-consciousness. A state is A-conscious if it is
poised for free use in reasoning and for direct 'rational' control of
action and speech...Conflation of P-consciousness and A-consciousness is
ubiquitous in the burgeoning literature on consciousness..."
Ned Block, 'On a Confusion about a Function of Consciousness',
Behavioural and Brain Sciences, 1995
"... in the most interesting sense of the word 'consciousness',
consciousness is the cream on the cake of mentality, a special and
sophisticated development of mentality. It is not the cake itself."
David Armstrong, 'What is consciousness?', The Nature of Mind and Other
Essays, 1980
"The improvements we install in our brain when we learn our languages
permit us to review, recall, rehearse, redesign our own activities,
turning our brains into echo chambers of sorts, in which otherwise
evanescent processes can hang around and become objects in their own
right. Those that persist the longest, acquiring influence as they
persist, we call our conscious thoughts."
Daniel Dennett, 'Kinds of Minds', 1996
"The presence of mental images and their use by an animal to regulate its
behavior, provides a pragmatic working definition of consciousness"
D.R.Griffin, 'The Question of Animal Awareness', 1976
> 'If materialism is true, it should be possible (“in principle!”) to
> build a material thing–call it a robot brain–that does what a brain
> does, and hence instantiates the same theory of experience that we do.'
If materialism were true it would not be possible, in principle, to build
a brain.
An absolute understanding of all materials and how they might react to
each other would be necessary. Since there is not now, nor ever has
been, this depth of understanding about any material, it is not
possible, in principle, to build anything which is exactly like any
other thing.
If a type of non-materialism were true (abstracted theory, let's say),
it would be possible in principle.
.
> On Wed, 17 Jun 2009 03:31:17 -0700, someone2 wrote:
>
>> 'If materialism is true, it should be possible (“in principle!”) to
>> build a material thing–call it a robot brain–that does what a brain
>> does, and hence instantiates the same theory of experience that we do.'
>
> If materialism were true it would not be possible, in principle, to
> build a brain.
>
> An absolute understanding of all material and how they might react to
> each other would be necessary. Since there is not now nor ever has been
> this depth of understanding about any material it is not possible, in
> principle, to build anything which is exactly like any other thing.
Then by the same principle do you believe that it is impossible
to build a table, a chair, or a computer?
I'm sure you have something in mind, but you need to tighten
up either your thinking or its expression.
It sounds to me like an argument against the "design" of the brain.
It couldn't be built, so it must be evolved.
>
>
>>
>> If a type of non-materialism were true, abstracted theory let say, it
>> would be possible in principle.
>> .
>
--
---Tom S.
"...ID is not science ... because we simply do not know what it is saying."
Sahotra Sarkar, "The science question in intelligent design", Synthese,
DOI:10.1007/s11229-009-9540-x
...snip...
Didn't you plagiarise this from someone?
D
(Reposted as didn't seem to come through)
Many of your quotes are definitions put forward before Nagel's article
in 1974, others aren't from philosophers, and yet others, such as the
comments from Searle and Rosenthal, are comments about, but not
definitions of, consciousness.
Daniel Dennett, whom you quoted, uses Nagel's definition in his paper
'What RoboMary Knows' (http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm#_ftn3).
For example:
"I was saying that Mary had figured out, using her vast knowledge of
color science, exactly what it would be like for her to see something
red, something yellow, something blue in advance of having those
experiences [3]"
Note three [3] is where he mentions:
"Robinson (1993) also claims that I beg the question by not honoring a
distinction he declares to exist between knowing “what one would say
and how one would react” and knowing “what it is like.” If there is
such a distinction, it has not yet been articulated and defended, by
Robinson or anybody else, so far as I know. If Mary knows everything
about what she would say and how she would react, it is far from clear
that she wouldn’t know what it would be like."
He also quotes comments from two other philosophers in the paper; he says:
"Here are two, drawn from Tye and Lycan:
Now, in the case of knowing via phenomenal concepts, knowing what it
is like to undergo a phenomenal state type P demands the capacity to
represent the phenomenal content of P under those concepts. But one
cannot possess a predicative phenomenal concept unless one has actually
undergone token states to which it applies. (Tye, p169)[4]
As Nagel emphasizes, to know w.i.l., one must either have had the
experience oneself, in the first person, from the inside, or been told
w.i.l. by someone who has had it and is psychologically very similar
to oneself. (Lycan, forthcoming)[5]"
So as you can see, both Tye and Lycan are also using the definition, as
well as Robinson, as mentioned in note [3]. That is four of them
mentioned using it (including Dennett) in just one paper. Notice Lycan
using an abbreviation, indicating just how commonly it is used.
Just one more extract from the paper, in which Dennett, rather than
using an abbreviation, hyphenates the phrase:
"I had previously viewed Tye’s alternative to my brand of thin
materialism as giving too much ground to the qualophiles, the lovers
of phenomenal content, but thanks now to G&H I can welcome him into my
underpopulated fold as a thin materialist malgré lui, someone who has
articulated much more painstakingly than I had just what sorts of
functionalistically explicable complexities go to constitute the what-
it-is-likeness, the so-called phenomenality, of conscious experience."
Just in case from that paper alone you weren't clear how commonly it is
used, you'll find David Chalmers using it in his definition of qualia
in 'Consciousness and its Place in Nature' (http://consc.net/papers/nature.html).
Another example of it being used is in the first sentence of the
article on Zombies on the Stanford site (http://plato.stanford.edu/entries/zombies/):
"Zombies are exactly like us in all physical respects but have no
conscious experiences: by definition there is ‘nothing it is like’ to
be a zombie."
They don't specifically mention that they are using Nagel's definition;
they seem to assume that it would be the one to be used.
No, no, no. Didn't you even read the paper:
"This is the idea that the “phenomenality” or “intrinsic phenomenal
character” or “greater richness”–whatever it is–cannot be constructed or
derived out of lesser ingredients. Only actual experience (of color, for
instance) can lead to the knowledge of what that experience is like. Put
so boldly, its question-beggingness stands out like a sore thumb, or so I
once thought, but apparently not, since versions of it still get
articulated."
Question-begging is what he calls Nagel's def. He has always rejected
that def. in all his writings. He is talking _about_ Nagel's def,
not "using" it.
<snippage>
Having just skimmed that paper, it doesn't look like this is Nagel's
definition of consciousness or "consciously experiencing." He seems
to be saying that all the diverse kinds of consciousness share this
feature.
<snip of boring zombie argument>
" The assumption that 'something that consciously experiences can have
its behaviour described in the same terms of physics as something that
doesn't' will be referred to throughout the article as the 'universality
assumption'."
This 'assumption' is insufficiently fleshed out. There needs to be
more about boundary conditions.
Even if the assumption, as you state it, were false, one can posit a
scenario where a mechanism was programmed to behave in a certain way
(non conscious) for a certain period of time where that behavior was
indistinguishable from a person (conscious). This has actually been
done; simple programs to mimic human speech have fooled readers for
short periods. More complex examples could surely be done. The
assumption needs better definition.
The assumption as stated seems to take for granted that consciously
experiencing is a binary condition; how did you exclude the
possibility that there is a spectrum of consciously experiencing from
the human level to the level of, say, an ant or a thermostat?
Obviously brains are built in nature. If materialism is true, then
brains are built by natural processes, and I see no reason in
principle why we could not duplicate this.
>
> An absolute understanding of all material and how they might react to
> each other would be necessary.
Sure, we only need a good enough understanding. We build quite a few
things now which work, without understanding everything about the
materials. We need to know more than we know now, certainly, but you
give no reason to think we cannot learn this in the foreseeable future.
> Since there is not now nor ever has been
> this depth of understanding about any material it is not possible, in
> principle, to build anything which is exactly like any other thing.
If you are saying that we cannot duplicate a specific brain in real
time, complete with all the mental states existing in that moment,
that may be true. This does not mean we cannot build a working brain
of the same type.
>
> If a type of non-materialism were true, abstracted theory let say, it
> would be possible in principle.
> .
Ummm... We can only manipulate the world around us through material
means. If materialism is not the entire story, and human minds are at
least partly non-material, I don't see how we could duplicate that.
Only if the mind is the behavior of a properly constructed brain could
we hope to duplicate a mind (which is what I expect).
Kermit
I don't know whether that is supposed to be a joke because I started
off posting as 'someone'. If not, then no. As far as I am aware,
philosophers have tended to try to make the point through the zombie
issue, where there are at least two entities considered, the human and
its zombie counterpart. In this argument there is a single robot, and
it simply questions whether there would be any difference in expected
behaviour between the hypothesis that it would be correctly identified
as consciously experiencing, and the null hypothesis that it would be
correctly identified as not consciously experiencing, if the
universality assumption (defined in the paper) were true. This
highlights the lack of an expected difference in behaviour between
whether it was thought to be correctly identified as consciously
experiencing or not.
The development of the argument can, I suspect, be seen in the
postings under the names 'someone', 'someone2', and 'someone3' on
Google Groups. Though if you have come across the same argument
developed independently, I'd be interested to know.
Though I'd find it strange if the point had already been made in such
a way that it wasn't countered, because it seems there are still
people making the universality assumption, and yet each of us knows
that what it is like to be ourselves is evidence that we are
experiencing being human; and in considering it as evidence, what it
is like to be ourselves has been referred to, and thus has been
significant to our behaviour, which contradicts the implications of
the universality assumption.
Irrelevant. If insight or evidence is trumped by a more recent date,
then many of these quotes would trump Nagel.
> others aren't from philosophers,
Correct. Some are from writers (who are less obtuse, and perhaps can
express the human condition better than anyone), and some are
scientists (who use actual evidence).
> yet others, such as the
> comments from Searle and Rosenthatl, are comments about, but not
> definitions of consciousness.
Yes. Were you ever going to talk about consciousness, or just insist
on one obscure definition? We all know what consciousness is. If it is
difficult to clarify the fuzzy boundaries, perhaps it is because
nature doesn't always have clear boundaries.
>
> Daniel Dennet who you quoted uses Nagel's definition in his paper
> 'What RoboMary Knows' (http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm#_ftn3
> ). For example:
>
> "I was saying that Mary had figured out, using her vast knowledge of
> color science, exactly what it would be like for her to see something
> red, something yellow, something blue in advance of having those
> experiences [3]"
>
> Note three [3] is where he mentions:
>
> "Robinson (1993) also claims that I beg the question by not honoring a
> distinction he declares to exist between knowing “what one would say
> and how one would react” and knowing “what it is like.” If there is
> such a distinction, it has not yet been articulated and defended, by
> Robinson or anybody else, so far as I know. If Mary knows everything
> about what she would say and how she would react, it is far from clear
> that she wouldn’t know what it would be like."
I agree with Dennett on this. Saying and behaving as though one were
conscious is circumstantial evidence that one is indeed conscious.
>
> He also in the paper comments from two other philosophers, he says:
>
> "Here are two, drawn from Tye and Lycan:
>
> Now, in the case of knowing via phenomenal concepts, knowing what it
> is like to undergo a phenomenal state type P demands the capacity to
> represent the phenomenal content of P under those concepts. But one
> cannot possess a predicative phenomenal concept unless on has actually
> undergone token states to which it applies. (Tye, p169)[4]
Sounds like more of the same. This is a somewhat stronger statement
that conscious-like behavior is synonymous with consciousness.
>
> As Nagel emphasizes, to know w.i.l., one must either have had the
> experience oneself, in the first person, from the inside, or been told
> w.i.l. by someone who has had it and is psychologically very similar
> to oneself. (Lycan, forthcoming)[5]"
Yes.
>
> So as you can see both Tye and Lycan are also using the definition, as
> well as Robinson as mentioned in note [3]. That is four of them
> mentioned using it (including Dennett) in just one paper. Notice Lycan
> using an abreviation, indicating just how commonly it is used.
>
> Just one more extract from the paper in which Dennet rather than using
> an abreviation, hyphenates the phrase:
>
> "I had previously viewed Tye’s alternative to my brand of thin
> materialism as giving too much ground to the qualophiles, the lovers
> of phenomenal content, but thanks now to G&H I can welcome him into my
> underpopulated fold as a thin materialist malgré lui, someone who has
> articulated much more painstakingly than I had just what sorts of
> functionalistically explicable complexities go to constitute the what-
> it-is-likeness, the so-called phenomenality, of conscious experience."
>
> Just incase from that paper alone you weren't clear how commonly it is
> used, you'll find David Chalmers using it in his definition of qualia
> in 'Consciousness and its Place in Nature' (http://consc.net/papers/nature.html
> ).
>
> Another example of it being used is in the first sentance of the
> article on Zombies on the Stanford site (http://plato.stanford.edu/entries/zombies/
> )
>
> "Zombies are exactly like us in all physical respects but have no
> conscious experiences: by definition there is ‘nothing it is like’ to
> be a zombie."
This sounds like a square circle. You can claim to use them in thought
experiments, but I don't see what they mean, nor is it clear that they
are possible, even in principle.
I don't see how it would be possible to have a zombie - something that
behaves as though it were conscious but is not. I find it
counterintuitive; it is not supported by evidence of brain
activities.
Since they don't exist, and many folks smarter than I doubt their
possibility, they cannot be used to come to a firm conclusion about
reality.
Suppose people insisted that square circles were possible, but none of
them were mathematicians - would you find it likely? Do you know of
any brain scientists who think it is possible to have an entity that
acts conscious but is not?
>
> They don't specifically mention they are using Nagel's definition,
> they seem to assume that it would be the one to be used.
Kermit
The problem is that there's very little agreement about "what
consciousness is", so most conversations about it just end up
with people talking past each other. A single up-front definition is
most certainly beneficial in avoiding such cross-purpose debates.
someone2 is simply cherry picking turgid phrases that he can use to
make his pseudo-arguments obtuse. He thinks if others can't
understand him, then he is smarter than they. His arguments are
constructed to force the careless reader into accepting his
conclusions before their justification. They are painfully dense
circular arguments, utterly void of evidence.
I like your collection of quotes; thank you.
Kermit
They were the result of maybe one minute's search
on Google for "define consciousness" or some such.
None of the cognitive scientists take Nagel
very seriously, although they
will occasionally quote him or reproduce one of his
papers -- just to turn around and demolish his
arguments. People who actually seem to have rational
ideas about it are Minsky, Dennett, and Hofstadter.
> if, when considering what it might be
> like to be a certain object, your answer was that it wouldn't be like
> anything, then you wouldn't be considering the object to be
> consciously experiencing.
> Whereas if you thought it would be like
> something, then you would be considering the object to be consciously
> experiencing.
But what it is to be "like" something presupposes, not clarifies,
a conscious experience.
I won't do another round of it with "someone_x_", but it all turns on
whether there is any definite meaning to "what it is like". Dualists
like Chalmers argue this is irreducible. Monists like Dennett argue this
is true, because there *is* nothing like "what it is like to be X", and
you can't reduce non-existent properties.
Monists like me think that the entire argument is purely semantic -
"because there are words that imply there is something that needs
explaining, there is". I follow Whitehead here: this is a Fallacy of
Misplaced Concreteness. Just because there is a noun, doesn't mean there
is a thing named.
Everything about consciousness that needs explaining can be explained in
a simple manner: we are complex enough to model our own states, and we
have a perspective on the world in virtue of being located at a time and
place.
--
John S. Wilkins, Philosophy, University of Sydney
http://evolvingthoughts.net
But al be that he was a philosophre,
Yet hadde he but litel gold in cofre
A sensible comment.
> Monists like me think that the entire argument is purely semantic -
> "because there are words that imply there is something that needs
> explaining, there is". I follow Whitehead here: this is a Fallacy of
> Misplaced Concreteness. Just because there is a noun, doesn't mean
> there is a thing named.
But Whitehead's metaphysics is precisely about consciousness in
Nagel's use of the word, although Whitehead confuses issues somewhat
by calling it "experience" and reserving the word "consciousness"
for something else (specifically the ability to conceive of "what is not").
Whitehead's "Actual Occasions of Experience" are like Leibniz' monads,
each one being a "what-it's-like" to have a perspective on the world.
His point about "misplaced concreteness" is a rejection of
substance-based metaphysics in favor of process-based metaphysics.
> Everything about consciousness that needs explaining can be explained
> in a simple manner: we are complex enough to model our own states,
> and we have a perspective on the world in virtue of being located at
> a time and place.
Then you're still overlooking Nagel's use of the word -- a camera has a
perspective on the world in virtue of being located at a time and a place,
but it's questionable whether there is an associated "what-it's-like to be a
camera". And if the digitized output from a camera were to be used to feed
an information processing system that was complex enough to model its own
states, then it would still be questionable whether the system would have an
associated "what-it's-like" to be that system. Hence Chalmers' question
about why this kind of information processing can't go on "in the dark".
> John S. Wilkins wrote:
> > it all turns on whether there is any definite meaning to
> > "what it is like".
>
> A sensible comment.
>
>
> > Monists like me think that the entire argument is purely semantic -
> > "because there are words that imply there is something that needs
> > explaining, there is". I follow Whitehead here: this is a Fallacy of
> > Misplaced Concreteness. Just because there is a noun, doesn't mean
> > there is a thing named.
>
> But Whitehead's metaphysics is precisely about consciousness in
> Nagel's use of the word, although Whitehead confuses issues somewhat
> by calling it "experience" and reserving the word "consciousness"
> for something else (specifically the ability to conceive of "what is not").
> Whitehead's "Actual Occasions of Experience" are like Leibniz' monads,
> each one being a "what-it's-like" to have a perspective on the world.
> His point about "misplaced concreteness" is a rejection of
> substance-based metaphysics in favor or process-based metaphysics.
Oh yes, I am only following Whitehead on the fallacy here, as I said.
However, I do agree with some of his process metaphysics - the world, it
seems to me, is full of things that are just processes or event
sequences. I would not do it in the post-British-Idealism manner that
Whitehead did it, but I reject the notion of substance, and in many ways
of properties, as unnecessary holdovers of Aristotle's hylomorphism.
>
>
> > Everything about consciousness that needs explaining can be explained
> > in a simple manner: we are complex enough to model our own states,
> > and we have a perspective on the world in virtue of being located at
> > a time and place.
>
> Then you're still overlooking Nagel's use of the word -- a camera has a
> perspective on the world in virtue of being located at a time and a place,
> but it's questionable whether there is an associated "what-it's-like to be a
> camera". And if the digitized output from a camera were to be used to feed
> an information processing system that was complex enough to model its own
> states, then it would still be questionable whether the system would have an
> associated "what-it's-like" to be that system. Hence Chalmers' question
> about why this kind of information processing can't go on "in the dark".
If it is questionable whether there is a "what it's like" to be that
camera, then it is equally, IMO, questionable to say there is a "what
it's like" to be me. The "WIL" (to abbreviate) of the camera is *just*
the perspectival representation of the input that camera receives;
likewise, the "WIL" to be me is *just* the perspectival representation
of the input that I receive. Of course, I am a more complex machine than
the camera (we suppose), so I have a more complex "WIL" than the camera,
but once all that is described, there is no residuum, I think.
And if the camera was asked to describe the "WIL" to be it to another
camera, the representation would exhaust everything there is to exhaust.
Likewise, if I could describe to you in sufficient detail and at a rate
that would generate in you the representation that is in me, you *would*
know the "WIL" to be me, because that is exactly what it is, Mary
arguments aside.*
Chalmers is a clever fellow (and very likeable), but I do not see why we
must concede there even *is* a problem here. Searlean arguments cut no
mustard for me.
*The Mary argument trades on the low bandwidth of
information/representation in the book larnin' Mary does. But if she got
that larnin' at the rate I do when I see red, she would know all there
is to know about seeing red, presuming that *I* do. I think that the
Mary argument is a semantic trick (but then, all good philosophical
problems are).
> polymer <pol...@operamail.com> wrote:
>
>> On Fri, 19 Jun 2009 06:59:37 -0700, Kermit wrote: <snippage>
>> > I like your collection of quotes; thank you.
>>
>> They were the result of maybe one minute's search on Google for "define
>> consciousness" or some such.
>>
>> None of the cognitive scientists take Nagel very seriously, although
>> they
>> will occasionally quote him or reproduce one of his papers -- just to
>> turn around and demolish his arguments. People who actually seem to
>> have rational ideas about it are Minsky, Dennett, and Hofstadter.
>
> I won't do another round of it with "someone_x_", but it all turns on
> whether there is any definite meaning to "what it is like". Dualists
> like Chalmers argue this is irreducible. Monists like Dennett argue this
> is true, because there *is* nothing like "what it is like to be X", and
> you can't reduce non-existent properties.
>
> Monists like me think that the entire argument is purely semantic -
> "because there are words that imply there is something that needs
> explaining, there is". I follow Whitehead here: this is a Fallacy of
> Misplaced Concreteness. Just because there is a noun, doesn't mean there
> is a thing named.
No Ghosts in your Machine; neither are there in mine. Objects that
in ordinary language would be classified as conscious all seem to
have certain kinds of complex feedback mechanisms that hide the
mechanics of decision making from immediate view, and 'conscious'
remains a vague, vague way of describing the highest levels of these.
Nagel's idea of it leaves it at the level of superstition shared
by everyday speakers.
Defined in the manner above, there is no difference between a
"what-it's-like to be a camera" and a "what-it's-like to be me"
(except in richness of content, of course). Having "defined away"
that to which Nagel is attempting to allude (by the use of his phrase),
Nagel's argument would surely appear empty.
> Chalmers is a clever fellow (and very likeable), but I do not see why
> we must concede there even *is* a problem here.
Correct to the extent that once the problem has been "defined away",
there is no longer a problem in terms of that definition.
The pertinent question here becomes: "Has Nagel
recognized something that I have so far failed to notice?"
.... On what grounds might you reply in the negative?
Dennett says in 'What RoboMary Knows' http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm
"I was saying that Mary had figured out, using her vast knowledge of
color science, exactly what it would be like for her to see something
red, something yellow, something blue in advance of having those
experiences."
So if he is to be believed, he does think 'what it would be like' is
reducible to scientific knowledge of colour.
On the grounds that I think the problem has been defined *into*
existence in the first place.
There is a general rule of inference that one should posit only what one
needs to account for the phenomena - some sort of razor. I use it to
establish that I only need to accept, for example, the posits of some
ideal physics very close to what we now have, in order to explain what I
see and so forth. It's a good rule. I find that it works very well.
Then I am faced with people whose "intuitions" posit parts of the world
that physics, ideal or not, cannot include. Gods. Supernatural realms.
Platonic heavens. And this mysterious, indefinable, incomprehensible,
inexplicable quantity (or should that be quality) called
"consciousness". I am asked to explain the "WIL", the qualia. But I see
no qualia, and no "WIL" apart from having a perspective. So I am
nonplussed. I should accept these semantic posits and overturn
everything I find has worked thus far? You'd need something a bit more
definite and less (to quote The Doctor) wibbly-wabbly to make me do
that.
Like Hume, I am not aware of any element of consciousness, just the fact
that I am conscious some of the time.
That conclusion would be consistent with what is meant by "something that
I have so far failed to notice", with a subsequent move from agnosticism
to faith in the possible scenario that the likes of Nagel are making a
conceptual error. It is that move that I'm calling into question here --
i.e. how can the claim be justified that the problem is *not* the result of
a genuine recognition of something that one has failed to notice, but rather
that it has been defined *into* existence?
> There is a general rule of inference that one should posit only what
> one needs to account for the phenomena - some sort of razor. I use it
> to establish that I only need to accept, for example, the posits of
> some ideal physics very close to what we now have, in order to
> explain what I see and so forth. It's a good rule. I find that it
> works very well.
I know it well.
> Then I am faced with people whose "intuitions" posit parts of the
> world that physics, ideal or not, cannot include. Gods. Supernatural
> realms. Platonic heavens. And this mysterious, indefineable,
> incomprehensible, inexplicable quantity (or shoudl that be quality)
> called "consciousness". I am asked to explain the "WIL", the qualia.
> But I see no qualia, and no "WIL" apart from having a perspective. So
> I am nonplussed. I should accept these semantic posits and overturn
> everything I find has worked thus far? You'd need something a bit more
> definite and less (to quote The Doctor) wibbly-wabbly to make me do
> that.
Gods, supernatural realms, Platonic heavens and the like are of no interest
to me, and you are not being asked to explain the "WIL" or qualia. Moreover,
you most definitely should *not* overturn everything you find that has
worked thus far -- don't let any wibbly-wabbly arguments make you do that.
> John S. Wilkins wrote:
> > andy-k wrote:
> >> andy-k wrote:
> >>> John S. Wilkins wrote:
> >>>> Chalmers is a clever fellow (and very likeable), but I do not see
> >>>> why we must concede there even *is* a problem here.
> >>>
> >>> Correct to the extent that once the problem has been "defined away",
> >>> there is no longer a problem in terms of that definition.
> >>
> >> The pertinent question here becomes: "Has Nagel
> >> recognized something that I have so far failed to notice?"
> >> .... On what grounds might you reply in the negative?
> >
> > On the grounds that I think the problem has been defined *into*
> > existence in the first place.
>
> That conclusion would be consistent with what is meant by "something that
> I have so far failed to notice", with a subsequent move from agnosticism
> to faith in the possible scenario that the likes of Nagel are making a
> conceptual error. It is that move that I'm calling into question here --
> i.e. how can the claim be justified that the problem is *not* the result of
> a genuine recognition of something that one has failed to notice, but rather
> that it has been defined *into* existence?
It's a question of the burden of proof here, and the burden, I suggest,
lies squarely upon those who would claim there is an ontological
category not covered by physics. We have had a long tradition of taking
substance dualism to be true, without the slightest advance in our
knowledge as a result. All advances about thought, emotion, cognition,
decision making and the like have come from physicalist science. None
provide the merest evidence to the contrary. So I would say that now,
although not 400 years ago, there is no reason to presume that Nagelian
qualia have any reality other than semantic. It's folk psychology.
Now folk science, whether it's psychology, anthropology, biology,
taxonomy, geology, history or physics, has a place in reasonable
discussions just to the extent that later scientific investigation bears
it out (in nearly every domain just mentioned, the exact opposite is
true), so I would suggest we have warrant for a pessimistic induction
here: folk science (and "intuitions" that rely upon it) are almost
always false, and there is an onerous duty to show they actually do
obtain, not the other way about.
>
>
> > There is a general rule of inference that one should posit only what
> > one needs to account for the phenomena - some sort of razor. I use it
> > to establish that I only need to accept, for example, the posits of
> > some ideal physics very close to what we now have, in order to
> > explain what I see and so forth. It's a good rule. I find that it
> > works very well.
>
> I know it well.
Good. I didn't want to unnecessarily multiply confusions.
>
>
> > Then I am faced with people whose "intuitions" posit parts of the
> > world that physics, ideal or not, cannot include. Gods. Supernatural
> > realms. Platonic heavens. And this mysterious, indefineable,
> > incomprehensible, inexplicable quantity (or shoudl that be quality)
> > called "consciousness". I am asked to explain the "WIL", the qualia.
> > But I see no qualia, and no "WIL" apart from having a perspective. So
> > I am nonplussed. I should accept these semantic posits and overturn
> > everything I find has worked thus far? You'd need something a bit more
> > definite and less (to quote The Doctor) wibbly-wabbly to make me do
> > that.
>
> Gods, supernatural realms, Platonic heavens and the like are of no interest
> to me, and you are not being asked to explain the "WIL" or qualia. Moreover,
> you most definitely should *not* overturn everything you find that has
> worked thus far -- don't let any wibbly-wabbly arguments make you do that.
Excellent. I shall endeavor not to.
> And this mysterious, indefineable, incomprehensible, inexplicable
> quantity (or shoudl that be quality) called "consciousness".
Maybe an approach might be "What can be so much better defined and so much
more comprehended and is not as mysterious as consciousness?".
> I am not aware of any element of consciousness, just the fact
> that I am conscious some of the time.
'any element'.. meaning any part of a whole? Or do you mean consciousness
as element like an environment or sphere of operation?
.
If you remain nonplussed by Nagel's attempt to elucidate how he is using the
word 'consciousness', then a position of agnosticism would seem appropriate,
but it goes beyond a position of agnosticism to claim that his use of the
word does *not* allude to a genuine insight. Undermining straw-man
arguments is not helpful in justifying that claim.
how do you know that? you yourself think electrons can decide. that
means, in your view, material CAN decide.
> Thats the reason materialism cant explain consciousness. In a decision
> things can turn out one way or another.
babies develop from zygotes. babies are conscious. zygotes are
nothing but matter. therefore matter can cause consciousness.
> On Mon, 22 Jun 2009 10:54:17 +1000, John S. Wilkins wrote:
>
> > And this mysterious, indefineable, incomprehensible, inexplicable
> > quantity (or shoudl that be quality) called "consciousness".
>
> Maybe a approach might be "What can be so much better defined and so much
> more comprehended and is not as mysterious as consciousness?".
Well let's see: most concepts of physics, behavioural parameters in
psychology, computational concepts - they are really well defined - the
activity of cells in nervous systems become ever more refined, and the
biochemistry that underlies them... there are numerous concepts that are
well defined or in the process of becoming well defined in empirical
terms.
If we had philosophical terms that were irreducible but
phenomenologically well defined, I'd happily accept them as being
explicanda, but "consciousness", by the admission of those who work in
the field, is not one of them, and "WIL" is the least defined of the
lot.
>
> > I am not aware of any element of consciousness, just the fact
> > that I am conscious some of the time.
>
> 'any element'.. meaning any part of a whole? Or do you mean consciousness
> as element like an environment or sphere of operation?
I mean something about consciousness that cannot be further divided,
which is shared by all consciousness events and states, and which calls
for explanation.
If I am undermining a strawman, then it is perhaps because as yet no
solid man has been erected...
Perhaps, but then again perhaps you have not recognized the solid-man
in Nagel's writings. It's still not clear to me how you can justify taking
the position that Nagel's use of the word 'consciousness' does *not*
allude to a genuine insight. The state of being nonplussed must be a state
of non-commitment unless there is some deep-seated underlying prejudice
in operation, and no amount of straw-man bashing will justify a prejudice.
That depends upon how the "material" is put together.
> Thats the reason materialism cant
> explain consciousness.
I see you have a private, and probably amusing, definition of
"materialism".
> In a decision things can turn out one way or
> another.
That also depends upon a private, and probably amusing, definition of
"decision".
> We cant make logical progression.....
Speak for yourself.
> ....at the point it can go either way,
Why not?
> at this point new information is introduced into the
> universe, and materialism is useless.
But then again, you're using your private, and probably amusing,
definition of "materialism", so whatever concept you're trying to
convey probably has very little to do with reality.
Boikat
> reducible to scientific knowledge of colour.
That's fair, but only if by scientific knowledge of colour you include
all of the physical reactions to exposure to the color that will occur
in her brain. Dennett surely does NOT mean, for example, that knowing
the wavelength of red light, and its absorption by different
substances, is enough "scientific knowledge of color" to let Mary know
"what it would be like to see red."
>
> > Dennett says in 'What RoboMary Knows'
> > http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm
> >
> > "I was saying that Mary had figured out, using her vast knowledge of
> > color science, exactly what it would be like for her to see something
> > red, something yellow, something blue in advance of having those
> > experiences."
>
> >
> > So if he is to be believed, he does think 'what it would be like' is
> > reducible to scientific knowledge of colour.
>
> That's fair, but only if by scientific knowledge of colour you include
> all of the physical reactions to exposure to the color that will occur
> in her brain. Dennet surely does NOT mean, for example, that knowing
> the wavelength of red light, and its absorption by different
> substances, is enough "scientific knowledge of color" to let Mary know
> "what it would be like to see red."
That's in effect my bandwidth argument. If all information, including
all effects upon a central nervous system, were delivered at the rate of
actually *seeing* red, Mary would know what it is like to see red
(because she would be seeing red, effectively).
On the other hand, knowing in ways that are pure abstractions (like the
wavelength of light, the kinds of neuronal interactions, etc.) wouldn't
give you all the knowledge that one has seeing red. In neither case is
seeing red anything other than the nervous activity.
If you recognise that there is a significant difference between
what is often called conscious brain activity and what is often called
subconscious brain activity, even if the person is under anaesthetic,
then there is the issue that there would be no scientific way of
distinguishing between theories A and B in the article, and of
determining whether any of the robot's processing was conscious, or
whether it was all 'subconscious'.
If you can't explain how you could scientifically distinguish it for
the robot, then why the pretence for a human?
Evidence?
> That's the reason materialism can't
> explain consciousness.
Of course the philosophy of materialism will not explain
consciousness. Brain science will, however; researchers are already making
great progress, but there is a long way to go.
> In a decision things can turn out one way or
> another.
Correct. And it takes a brain to decide.
A falling rock on a cliff cannot decide to bounce one way or another.
We may not be able to take the measurements necessary to predict where
it will go, but our limitation of measurement does not mean that the
rock decides. Its bounce is still utterly determined.
> We can't make logical progression at the point it can go
> either way,
Why not? Unlike some people, I often make a decision based on logic.
(Not always, and probably not usually.)
> at this point new information is introduced into the universe,
Please define information in this context. Are you saying that the
universe contains more information now than a moment ago?
> and materialism is useless.
This does not follow.
>
> regards,
> Mohammad Nur Syamsu
Kermit
I understand. It is always good to say that "This is what I mean by X"
before talking about it, if X is something tricky (the subject of most
philosophical brawls). When I say we all know what consciousness is,
I mean that we all (nearly) agree on this: consciousness is what is
going on inside a sane and awake person's head. But then we have to
determine what it is about that normal, awake person that is the
essence of consciousness. Will it include my cat? How about her
fleas, or my dreaming state?
But someone2 uses a definition that he thinks will lead us to the
correct conclusion, and uses the definition instead of the word
throughout his whole argument. He uses it as a weapon of mass
obscurity rather than a tool for clarity.
Kermit
So dualism produced a lot of knowledge, it is the main idea that
caused the scientific revolution.
regards,
Mohammad Nur Syamsu
This is good.
> But someone2 uses a definition that he thinks will lead us to the
> correct conclusion, and uses the definition instead of the word
> throughout his whole argument. He uses it as a weapon of mass
> obscurity rather than a tool for clarity.
Someone2 begins by adopting Nagel's definition and he quotes his source.
An analysis of Nagel's definition seems a good place to start a discussion
on the subject of consciousness, not least because there seems to be a
polarization here -- some people can't for the life of them see what
Nagel is alluding to, and yet others claim that it is as clear as daylight.
The first question this raises is that of what justification there may be
for a member of the first group to dismiss the claim of the second group
as a conceptual error. He may express disinterest (in which case he
shouldn't involve himself in such discussions at all), or else intellectual
honesty demands that he should justify his position.
> The dualistic view fashioned by medieval monks such as Occam,
[sic. Occham]
Occham was not like the other scholastic monks;
Occham's views are cited by people like J.J.C. Smart
as a good reason to _REJECT_ mind-body dualism.
> unleashed
> the scientific revolution.
Where the hell did you get the idea that _MONKS'_ ideas
launched the scientific revolution??? What the hell
have you been reading, sonny?
> It made for politics and religion to be
> separated from science, subjectivity separated from objectivity,
> spiritual separate from material. The Darwinists violate that rule of
> duality,
State the rule and show us how.
> the result is an endless fight witn
[sic. with]
> religion, and rubbish
> social darwinist,
[sic. Darwinist]
> nazi,
[sic. Nazi]
> communist,
[sic. Communist]
> humanist, atheist science.
>
> So dualism produced a lot of knowledge, it is the main idea that caused
> the scientific revolution.
>
> regards,
> Mohammad Nur Syamsu
What an idiot.
There was no such thing as a "rule of duality"; there was "bad science",
which is what you get when you try to mix science with religion and
politics.
> the result is an endless fight witn religion,
Because fundies are too mentally immature to give up their perceived
power over reality.
> and rubbish
> social darwinist, nazi, communist, humanist,
That's an example of idiots trying to mix science with politics, and
non-science.
> atheist science.
Science is actually agnostic in nature.
>
> So dualism produced a lot of knowledge,
Its modern version produces bullshit, called "ID".
> it is the main idea that
> caused the scientific revolution.
Actually, the realization that reality does not always match up with
religious dogmas, so that one had to choose between something that was
clearly wrong and reality, was one of the things that led to the
scientific revolution. Once religion was decoupled from science,
progress was achieved.
Sorry 'bout that.
Boikat
Your reasoning about Occam is based on the faulty premise that Occam
did not believe in God, or that Occam's writings were irrelevant to
his belief in God. But it's quite clear that by this reasoning about
universals Occam came to the conclusion that knowledge about the
spiritual can only be attained through revelation, and not
measurement. So then religion was subjective, and science was
objective etc. and that is what unleashed the scientific revolution.
regards,
Mohammad Nur Syamsu
Thank God! Otherwise, we'd still be treating diseases as if they were
the result of demonic possession, and would be clueless about the vast
majority of natural phenomena.
Boikat
> Indeed reject mind body-dualism, in favor of material spiritual dualism.
> According to Occam the universals are in the mind, and the mind is just
> part of material reality, as mind objects.
>
> Your reasoning about Occam is based on the faulty premise that Occam did
> not believe in God,
No.
> or that Occam's writings were irrelevant to his
> belief in God.
No, I didn't say that, either.
>But it's quite clear that by this reasoning about
> universals Occam came to the conclusion that knowledge about the
> spiritual can only be attained through revelation, and not measurement.
> So then religion was subjective, and science was objective etc. and that
> is what unleashed the scientific revolution.
Ridiculous!
>
> regards,
> Mohammad Nur Syamsu
The default hypothesis for consciousness should be derived from the
logic people use when they talk about it in a practical way in day to
day life. That is the root of the word, and to define another logic as
consciousness must be done with an explanation of why you diverge from
common understanding. Otherwise, if you want to use a different logic
than the common one, then use a different word than consciousness.
In any case consciousness is well enough understood on a practical
basis. The logic we use works.
So in exploring common usage, consciousness refers to deciding in an
imaginary world that is created in the brain, based on input from the
real world. So one is conscious of a table, when one has recreated the
table in the imagination, based on input from the real table through
the senses, and then deciding about the table. Deciding about beauty,
or anything. In any case there is no consciousness without deciding.
regards,
Mohammad Nur Syamsu
where's the proof the spiritual exists?
> According to Occam the universals are in the mind, and the
> mind is just part of material reality, as mind objects.
quoting a 14th century philosopher...especially one you think should
have been suppressed...doesn't help your argument
he didn't prove minds exist either. you guys have tried, but failed,
to show their existence
>
> Your reasoning about Occam is based on the faulty premise that Occam
> did not believe in God
where did he say occam didn't believe in god?
there is one from religious fanatics as well.
>
> The default hypothesis for consciousness should be derived from the
> logic people use when they talk about it in a practical way in day to
> day life.
not necessarily. quantum mechanics describes events which defy day to
day common sense
> Deciding about beauty,
> or anything. In any case there is no consciousness without deciding.
>
but there are alternatives that are not based on conscious decisions.
regards,
Mohammad Nur Syamsu
> What is ridiculous, of course, is that you are arguing about Occam as if
> he was an atheist.
No I'm not; I'm very well aware that he was not.
You're having to make up things because you can't support
the claim that:
"So then religion was subjective, and science was objective etc. and that
is what unleashed the scientific revolution."
The causes of the scientific revolution were much, much more
complex than your simple-minded claim.
And irrelevant to the validity of Occam's Razor being a useful "rule
of thumb", even if it provides a separation of science and "hocus-
pocus" (what you call "religion").
I still do not see the problem with that.
Boikat
regards,
Mohammad Nur Syamsu
> Whatever, its
[sic. it is, or it's]
> the main idea that caused the scientific revolution, the
> principle to distinguish ought from is, subjective from objective and
> spiritual from material.
What a load of horse crap. No single idea caused it.
The scientific revolution was caused by, among _many_ other things:
- Copernicus' heliocentric model of the universe, and other new
  discoveries that did not fit with older ways of thinking,
- internal problems of the Church, weakening its power to _stop_ science,
- Middle-eastern ideas and mathematics coming through Spain,
- increased funding of science and colleges by royalty,
- scientific methods, including a return of Greek methods that
  were not Aristotelian.
That's just the beginning of the things that caused the
scientific revolution. You've been reading some Catholic
or Fundie horse crap.
Correct. Science does not deal with what you, or anyone else thinks,
"ought" to be. So, where's the problem, again? I'm still not seeing
it.
Boikat
Quite right all of this... Good stuff, polymer.
--
dorayme
Por nada. :)
regards,
Mohammad Nur Syamsu
> Now you mention no specific ideas for the scientific revolution.
I mentioned Copernicus' heliocentric cosmology.
That is a very specific idea!
You are either lying or not reading closely --
very possibly because you are unable to.
> You
> completely omit the monks separating material from spiritual, as
> anything significant.
Your claim was that it was THE cause of the scientific revolution,
__________THE__________ cause,
not a CONTRIBUTING cause.
You really don't seem to be very bright.
If you're young, you may have hope of some day actually being
able to make reasonable distinctions. If you are old, your wad
is so shot that you will never recover, and I would pity you
except for the fact that you are one of those irritating
assholes that are so sure that they are right,
yet are completely clueless. So, good riddance to you.
> Besides the scientific revolution is still
> ongoing,
Science continues. The thing that historians
call the "scientific revolution" has been over
a long time.
> and this is the main idea that keeps it going now.
No.
Science does not assume that anything spiritual
exists at all. It makes no sense to say that
they are separated when there is no evidence
that the spiritual exists at all!
>
> regards,
> Mohammad Nur Syamsu
I'm baffled as to how people can deny understanding Nagel's definition.
It seems to be taken for granted in our culture that people do. I gave
as an example scenes in the Terminator movie where it was suggested
what it would be like to be the Terminator, and that thus the
Terminator was consciously experiencing. The same device was used in
one of the Spider-Man movies with a black blob thing, again suggesting
it was consciously experiencing. Another example would be the Matrix
where quite a large chunk of the film is a suggestion of what it would
be like to be a human that was having its brain state manipulated by
being plugged into a machine. Are they claiming not to understand
these suggestions in the films? It seems strange to have physicalist
theories such as functionalism trying to explain why it is like
something to be a human, and then it seems strange that some who
claim to understand functionalism seemingly try to avoid admitting
that the theory was an attempt at an explanation of a phenomenon.
Certainly people like Dennett don't claim not to understand Nagel's
definition. Certainly if Dennett had, he couldn't have written a paper
about the absurdity of the conception of philosophical zombies, since he
wouldn't have understood what was suggested by a philosophical zombie.
Furthermore, in his RoboMary paper, he explicitly claims that he
thinks what it was like to be something would be reducible to
scientific knowledge. So professors such as him aren't denying
understanding what it refers to; they seem to be more involved in an
argument about the ontological significance. Though clearly there do
seem to be people posting on Usenet claiming they don't understand
what Nagel was referring to.
The article doesn't make any ontological assumptions; it just points
out the behavioural significance of consciously experiencing if the
universality assumption (see article) is made, and illustrates how this
contradicts, for example, claims to recognise what is being suggested
in films such as the Terminator, and that we don't get drop-down menus
like the Terminator does.
Terminator, Spider-Man, and The Matrix --
your intellect is one sizzling
fajita platter with jalapeños, isn't it?
> It seems strange to have physicalist theories
> such as functionalism trying to explain why it is like something to be a
> human, and then it seems strange that some who claim to understand
> functionalism seemingly try to avoid admitting that the theory was
> an attempt at an explanation of a phenomenon. Certainly people like
> Dennett don't claim not to understand Nagel's definition.
He claims that it is question begging.
I've already cited the quote from the very paper you
cite earlier in this thread, but you conveniently
forget. To jog your memory, let me quote myself:
_____________
No, no, no. Didn't you even read the paper:
"This is the idea that the “phenomenality” or “intrinsic phenomenal
character” or “greater richness”–whatever it is–cannot be constructed or
derived out of lesser ingredients. Only actual experience (of color, for
instance) can lead to the knowledge of what that experience is like. Put
so boldly, its question-beggingness stands out like a sore thumb, or so I
once thought, but apparently not, since versions of it still get
articulated."
Question-begging is what he calls Nagel's def. He has always rejected
that def. in all his writings. He is talking _about_ Nagel's def,
not "using" it.
__________________________
This is a case in point -- one party claims it's as clear as daylight,
the other that it's question-begging. At least what we have here
is an identified logical error. We might make some progress by
discussing in what sense Nagel's definition begs the question.
When scientists do not distinguish "oughts" from "is" then:
- politics and religion will want to control science, since it is
their proper job to control the oughts and ought nots
- scientists become even more highly emotive about their theory when
it is connected to what ought, leading to lots of fighting among
scientists
- scientists produce rubbish such as social darwinism etc.
Also when no distinction is made between ought and ought not, then no
knowledge about choosing will be built up. This is because the choice
is at the dividing line between fact and value. You can factually
determine the alternatives, and that a choice is made, but you cannot
factually determine the values that did the job of actualizing the one
alternative and discarding the other. (Unless of course one is a
Nazi, in which case one can say with absolute scientific certitude that
the Aryans are courageous and loving people.)
So you see, this is why all these atheists in the thread fall back on
"physicalism" to explain consciousness, and by that they mean cause
and effect. They can't explain anything in terms of choosing, since
that would mean to acknowledge the divide between spiritual and
material. So instead they explain choice, and consciousness in terms
of cause and effect. And of course they explain values in terms of
cause and effect also, leading to there being no distinction between
what they say ought and what they say is.
regards,
Mohammad Nur Syamsu
science can not be 'controlled' since it's nothing more than
determining how nature works. you can't fool mother nature.
> - scientists become even more highly emotive about their theory when
> it is connected to what ought, leading to lots of fighting among
> scientists
yes, gunfire is generally a problem at scientific conferences. the
body count is often quite high.
> - scientists produce rubbish such as social darwinism etc.
uh...no. herbert spencer, the inventor of 'social darwinism' was not a
scientist. and the moral failures of religious folks are legion. it was
your ideological colleagues who flew those planes into the WTC, not
scientists.
>
> So you see, this is why all these atheists in the thread fall back on
> "physicalism" to explain consciousness, and by that they mean cause
> and effect. They can't explain anything in terms of choosing, since
> that would mean to acknowledge the divide between spiritual and
> material. So instead they explain choice, and consciousness in terms
> of cause and effect. And ofcourse they explain values in terms of
> cause and effect also, leading to there being no distinction between
> what they say ought and what they say is.
>
jesus, you're a babbler...
regards,
Mohammad Nur Syamsu
Dennett says it "stands out like a sore thumb" in its circularity
because it is basically saying experience is experience or consciousness
is consciousness. Nothing is clarified, no information added.
People like Hume, Ryle, Dennett, Hofstadter, and Minsky understand
that you get nowhere with that kind of crap. The folk-term
'consciousness', _if_ it has any value at all, is now understood
as various high-level subsystems or agents within the mind processing
information about the state of other high level agents.
So you were at this year's ICAIL too! ;o)
> Obviously this Polymer guy is just producing the atheist version of
> history... completely omitting the monks. Heliocentrism is obviously
> after the scientific revolution got going already. Prerequisite is that
> people distinguish ought from is, spiritual from material etc. You
> mention no other idea in a general sense, you vaguely refer to the
> methods of the Greeks.
>
> When scientists do not distinguish "oughts" from "is" then:
>
> - politics and religion will want to control science, since it is their
> proper job to control the oughts and ought nots - scientists become even
> more highly emotive about their theory when it is connected to what
> ought, leading to lots of fighting among scientists
> - scientists produce rubbish such as social darwinism etc.
Oh, so you think that Thales, Parmenides, Anaxagoras, Democritus,
and Epicurus didn't prefigure that?
That looks like an exercise in authority, rather than the basis for a
scientific research program. I've seen those kinds of theories; they
continuously talk about how "complex" it all is. Talking about
millions and billions of neurons a lot. But couldn't their theory work
with 1 neuron, or 2, or 3?
There is actual scientific research done on artificial consciousness,
based on the common logic of consciousness, based on the principle of
choosing.
for instance:
http://www.bcs.org/server.php?show=ConWebDoc.15637
regards,
Mohammad Nur Syamsu
When an argument is derided for being "question-begging", the claim should
be substantiated by pointing out how this circularity arises. The claim that
"consciousness is consciousness" is self-evidently circular, but how does
this relate to Nagel's definition of consciousness as the "what-it's-like"?
Nagel is clearly *not* alluding to any kind of information processing.
> People like Hume, Ryle, Dennett, Hofstadter, and Minsky understand
> that you get nowhere with that kind of crap. The folk-term
> 'consciousness', _if_ it has any value at all, is now understood
> as various high-level subsystems or agents within the mind processing
> information about the state of other high level agents.
If it is stipulated that the word 'consciousness' is being used to refer to
"specified high-level subsystems or agents within the information processing
that takes place within the brain" then that is what it must be understood
to refer to for the purposes of the conversation in which that definition
has been stipulated. However, Nagel's definition suggests a different use
for the word that should not be confused with, or conflated with, "specified
high-level subsystems or agents within the information processing that takes
place within the brain". It could well be argued that Nagel and the
information-processing model should be employing different words where
they have ended up using the same word, but this is a commonplace in the
evolution of language and we must muddle-through with such situations.
Care to give us an actual link to the paper, instead
of an abstract of a paper? The abstract gives none
of the paper's arguments.
Not sure if that one is accessible (and since he is emeritus now, has
less incentive to write things up anyway), but here is a paper by the
same author, and more recent too:
http://www.scholarpedia.org/article/Machine_consciousness
Note in particular what he says about "sensorimotor contingency" which
is really the philosophy behind CASYS.
None of this is of course a problem for the ToE
> polymer wrote:
>> andy-k wrote:
>>> This is a case in point -- one party claims it's as clear as daylight,
>>> the other that it's question-begging. At least what we have here is an
>>> identified logical error. We might make some progress by discussing in
>>> what sense Nagel's definition begs the question.
>>
>> Dennett says it "stands out like a sore thumb" in its circularity
>> because it is basically saying experience is experience or
>> consciousness is consciousness. Nothing is clarified, no information
>> added.
>
> When an argument is derided for being "question-begging", the claim
> should be substantiated by pointing out how this circularity arises. The
> claim that "consciousness is consciousness" is self-evidently circular,
> but how does this relate to Nagel's definition of consciousness as the
> "what-it's-like"? Nagel is clearly *not* alluding to any kind of
> information processing.
You are correct that Nagel is not alluding to any
kind of information processing. He is leaving
consciousness in the vague, supernatural state
in which it comes to us as a folk-word. What-it-is-like
is equally vague.
>
>
>> People like Hume, Ryle, Dennett, Hofstadter, and Minsky understand that
>> you get nowhere with that kind of crap. The folk-term 'consciousness',
>> _if_ it has any value at all, is now understood as various high-level
>> subsystems or agents within the mind processing information about the
>> state of other high level agents.
>
> If it is stipulated that the word 'consciousness' is being used to refer
> to "specified high-level subsystems or agents within the information
> processing that takes place within the brain" then that is what it must
> be understood to refer to for the purposes of the conversation in which
> that definition has been stipulated. However, Nagel's definition
> suggests a different use for the word that should not be confused with,
> or conflated with, "specified high-level subsystems or agents within the
> information processing that takes place within the brain".
Yes, I agree with that so far.
> It could well
> be argued that Nagel and the information-processing model should be
> employing different words where they have ended up using the same word,
> but this is a commonplace in the evolution of language and we must
> muddle-through with such situations.
Indeed, once a word has been associated with something that cannot
be tested or observed, we have to either give it up or start
using it in a more rational way.
"‘Entanglement’ of brain and environment" may be a valuable way of
looking at it. I don't think this supports Nagel's vague definition,
though, nor anything that nando-ron is claiming.
>
> None of this is of course a problem for the ToE
Huh? Theory of Everything?
For that one it is fine too ;o)
But here it's the theory of evolution, since nando, for some reason never
fully explained by him, thinks there is a problem.
Firstly, I'm not sure why you consider Nagel's "what-it's-like"
as "supernatural" -- could you say a little more about how
you're using the word "supernatural"?
Secondly, I still haven't seen any substantiation of the claim
that Nagel's definition is question-begging -- have you seen
anywhere that Dennett offers any?
>> It could well be argued that Nagel and the information-processing model
>> should be employing different words where they have ended up using the
>> same word, but this is a commonplace in the evolution of language and we
>> must muddle-through with such situations.
>
> Indeed, once a word has been associated with something that cannot
> be tested or observed, we have to either give it up or start
> using it in a more rational way.
You may have your definition of consciousness and demand that
Nagel should not be establishing an association between that word
and his "what-it's-like", but then you will only end up overlooking
Nagel's "what-it's-like". That's fine if it is of no interest to you,
but any claim that Nagel is making a conceptual error would still
need to be substantiated.
regards,
Mohammad Nur Syamsu
Neither are you, yet you presume to speak with scientific authority.
> And so were all these other folks
> who put oughts and ought nots in their theories not really scientists
> in the sense of the scientific revolution.
Correct. Is that a problem?
> You know that you aren't
> allowed to put ought and ought not in a theory.
Who is?
> So you see physicalism
> must be limited so that it doesn't cover what ought and ought not.
Science does not. Who says it does?
Boikat
regards,
Mohammad Nur Syamsu
Here's a simple question: If I think water ought to freeze at 40
degrees F, rather than 32 degrees F, will water start freezing at 40
degrees F, just because that's the way I think it ought to be?
It's a simple question.
Boikat?
regards,
Mohammad Nur Syamsu
Yes. Well, partly. Science does not address morality, period.
> as what does the job of
> deciding. In your example there was no logic of decision.
Of course there is no "logic", since what temperature I "decide" water
*ought* to freeze at has little to do with the temperature that water
actually freezes at. What the hell does morality have to do with that,
anyway?
> In
> consciousness, the subject of the thread, there are many decisions.
Since when? This thread seems to be more about your whining about
science being separated from the "spiritual", whatever that is
supposed to mean today (in your mind).
> So
> you can only hope to identify the decision process,
There is no decision process involved with the freezing point of
water.
> but not the spirit
> which makes it turn out one way instead of the other.
Because that has nothing to do with the freezing point of water. It
freezes when it's cold enough to cause the water molecules to
crystalize, which happens to be 32 degrees F, whether I like it or
not. My "decisions" have no effect on that.
> Its not possible
> even to construct a theory which identifies in principle what makes it
> turn out one way instead of the other.
I suggest you explore "Probability Theory".
Are you done now?
Boikat
> polymer wrote:
>> andy-k wrote:
>>> When an argument is derided for being "question-begging", the claim
>>> should be substantiated by pointing out how this circularity arises.
>>> The claim that "consciousness is consciousness" is self-evidently
>>> circular, but how does this relate to Nagel's definition of
>>> consciousness as the "what-it's-like"? Nagel is clearly *not* alluding
>>> to any kind of information processing.
>>
>> You are correct that Nagel is not alluding to any kind of information
>> processing. He is leaving consciousness in the vague, supernatural
>> state in which it comes to us as a folk-word. What-it-is-like is equally
>> vague.
>
> Firstly, I'm not sure why you consider Nagel's "what-it's-like" as
> "supernatural" -- could you say a little more about how you're using the
> word "supernatural"?
It is supernatural in that there is no way of either testing for it
or observing it. Particular people claim to observe it in themselves,
but there is no way for them to actually demonstrate this intuition
in a way that others can actually observe it.
>
> Secondly, I still haven't seen any substantiation of the claim that
> Nagel's definition is question-begging -- have you seen anywhere that
> Dennett offers any?
The paper offered up for discussion is:
http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm#_ftn3
You can give it a careful reading of section 2.
My own argument does not depend on Dennett. I was simply
pointing out to him that he was misinterpreting Dennett,
because D. does not endorse Nagel, but claims that Nagel's
kind of claim is question begging.
>
>
>>> It could well be argued that Nagel and the information-processing
>>> model should be employing different words where they have ended up
>>> using the same word, but this is a commonplace in the evolution of
>>> language and we must muddle-through with such situations.
>>
>> Indeed, once a word has been associated with something that cannot be
>> tested or observed, we have to either give it up or start using it in a
>> more rational way.
>
> You may have your definition of consciousness and demand that Nagel
> should not be establishing an association between that word and his
> "what-it's-like", but then you will only end up overlooking Nagel's
> "what-it's-like".
No. One either has to give up the folk-word version of
'consciousness' or move to words that can be tested
or observed in a reasonable way.
If you, or Nagel, define consciousness so that something is conscious
if and only if there is something it is like to be that thing, what
have you accomplished?
If you give me a definition of a chair, then I can look at something
and decide, with some accuracy, whether it's a chair or not. If you
give me a definition of an airplane, then I can look and decide
whether something I see is an airplane. But if you say something
is conscious iff there is something it is like to be that thing, how
can I use that definition to decide whether something is conscious?
Exactly right.
It is unusable.
Nagel is not making any attempt to identify an objective phenomenon like a
chair or an airplane, but rather he is attempting to draw his reader's
attention to an overlooked peculiarity. Nagel will have accomplished his
task if one or more readers' attention is drawn to that peculiarity. Some of
his readers will *not* follow his argument, but that doesn't invalidate his
attempt. His use of the word 'consciousness', then, cannot be used to
identify any "what-it's-like" other than one's own perspective upon the
world, and it is precisely *this* to which Nagel is attempting to draw
attention. No other "what-it's-like" can be identified since we can do no
more than *imagine* that other objects have any such thing (and this is the
case even with other humans).
Your use of the word "supernatural" seems contrived to present Nagel's
argument in a pejorative light, but if it is used as you stipulate then by
your definition the "what-it's-like" falls under that description (just as
does, say, Cantor's notion of transfinite numbers, to give another example).
I'm compelled to dismiss that approach as a smear campaign.
>> Secondly, I still haven't seen any substantiation of the claim that
>> Nagel's definition is question-begging -- have you seen anywhere that
>> Dennett offers any?
>
> The paper offered up for discussion is:
> http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm#_ftn3
> You can give it a careful reading of section 2.
> My own argument does not depend on Dennett. I was simply
> pointing out to him that he was misinterpreting Dennett,
> because D. does not endorse Nagel, but claims that Nagel's
> kind of claim is question begging.
I'm familiar with Dennett's view, and the answer to my question is in the
negative. Rather than address Nagel's point about what it's like to *be* a
particular organism, Dennett attacks his straw-man of what it's like to
*see* something.
>> You may have your definition of consciousness and demand that Nagel
>> should not be establishing an association between that word and his
>> "what-it's-like", but then you will only end up overlooking Nagel's
>> "what-it's-like".
>
> No. One either has to give up the folk-word version of
> 'consciousness' or move to words that can be tested
> or observed in a reasonable way.
This is a repudiation rather than a refutation, the former having no value
in reasoned argument.
I doubt many of us have overlooked the "peculiarity" that we have a
point of view, even without Nagel's help.
>Nagel will have accomplished his
> task if one or more readers' attention is drawn to that peculiarity. Some of
> his readers will *not* follow his argument, but that doesn't invalidate his
> attempt.
>His use of the word 'consciousness', then, cannot be used to
> identify any "what-it's-like" other than one's own perspective upon the
> world, and it is precisely *this* to which Nagel is attempting to draw
> attention.
If his point is simply that we each have a perspective upon the world,
he'll get no argument from me. What has got some folks arguing with
you has been what seemed to be an attempt to reify this perspective
that each of us has into an actual (if immaterial) thing.
>
> ... repudiation rather than a refutation, the former having no value
> in reasoned argument.
Bullshit. If something is shown to be something that is impossible to
work with after all the details are exposed, repudiation is the only
rational course.
> That seems to be an extremely wide definition of supernatural that
> potentially results in solipsism. Toothaches are "supernatural" (and my
> dentist a witch doctor), and if you follow this line through, it is _my_
> observation of the apple falling from the tree that gives me a reason
> to believe in gravity, but since it is _my_ observation and I have no
> way of knowing if it is identical to yours, that too becomes supernatural.
We must not confuse what is the case with how we know it is the case.
The difficulties involved in knowing how something is so do not make
what is claimed to be so a supernatural event. A supernatural event is
usually understood to be outside all laws of nature, not governed by the
laws of nature. A difficult area
--
dorayme
regards,
Mohammad Nur Syamsu
Just in case anybody takes this comment too seriously, it's a reference to a
particular poster that just can't seem to stop thinking about me. It's very
flattering really.
I'd like to think you're right, but there are still those that seem to
dismiss it as a conceptual error.
>> His use of the word 'consciousness', then, cannot be used to
>> identify any "what-it's-like" other than one's own perspective upon
>> the world, and it is precisely *this* to which Nagel is attempting
>> to draw attention.
>
> If his point is simply that we each have a perspective upon the world,
> he'll get no argument from me. What has got some folks arguing with
> you has been what seemed to be an attempt to reify this perspective
> that each of us has into an actual (if immaterial) thing.
I have presented no argument for any ontological status of Nagel's
"what-it's-like" -- I've simply stated that some people consider it as
clear as daylight whilst others seem to dismiss it as a conceptual error.
then why did you say he was?
And so were all these other folks
> who put oughts and ought nots in their theories not really scientists
> in the sense of the scientific revolution. You know that you aren't
> allowed to put ought and ought not in a theory. So you see physicalism
> must be limited so that it doesn't cover what ought and ought not.
>
so you finally admit 'morality' has nothing to do with science...it's
about time.
Your behavior signals to your doctor that there is a physical
problem. A brain scan might reveal a common area of the brain
stimulated in patients with similar toothaches.
But the quale of the toothache _is_ supernatural.
We can think that others who have similar physical characteristics
to ourselves are experiencing the same qualia, but there is
no possibility of testing qualia themselves, because they can't
be shared or observed. Definitions of consciousness that
depend on qualia must be jettisoned.
> it is _my_
> observation of the apple falling from the tree that gives me a reason
> to believe in gravity, but since it is _my_ observation and I have no
> way of knowing if it is identical to yours, that too becomes supernatural.
<snippage>