
Searle's research program for consciousness


nos...@nospam.mit.edu

May 22, 2000
Something has puzzled me for a while about John Searle's "research
program for consciousness," i.e., the program to "discover the causes
of consciousness." I don't quite see how to carry out this program,
given Searle's assumptions.

The most naive model of how one might "discover the causes of consciousness"
is to identify a few promising parameters, and to run a bunch of tests using
different values of the parameters. Test 1 might set parameter A = foo,
parameter B = bar, and so on. Then we would observe the result, and circle
"conscious" or "not conscious" on our scorecard as appropriate. Then we
could analyze the results to determine what parameters were relevant and
what parameters were mostly irrelevant.
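
To make the naive design concrete, here is a minimal sketch in Python.
The parameter grid and the judge() oracle are invented for illustration;
nothing in Searle fixes either of them:

    import itertools

    # An invented grid of candidate parameters.
    PARAMETERS = {
        "A": ["foo", "baz"],
        "B": ["bar", "qux"],
    }

    def judge(settings):
        # The contested step: return True ("conscious") or False
        # ("not conscious") for the test system built from these
        # settings. Nothing here says how to decide.
        raise NotImplementedError("this is exactly the open question")

    def run_experiments():
        scorecard = []
        names = sorted(PARAMETERS)
        for values in itertools.product(*(PARAMETERS[n] for n in names)):
            settings = dict(zip(names, values))
            scorecard.append((settings, judge(settings)))
        return scorecard

Analyzing the scorecard for relevant parameters would be routine
statistics; everything hangs on judge(), which is the problem raised
below.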

A key step in this process is the judgment of whether something is conscious
or not conscious. This is where it seems to me that a Searlian would have
difficulty. I'm not saying that Searle balks at judgments of whether
something is conscious or not; quite the opposite. He is confident that
human beings, and perhaps other biological systems, are conscious, while
something like the Chinese room is not. Although these are contentious
claims, I'm willing to grant them for the sake of argument. The problem
I have is that I'm not sure how to make such judgments myself, especially
in the strange cases that are bound to arise when we tweak parameters in
funny ways. The Chinese room argument suggests that purely "behavioral"
criteria will not succeed in giving us a reliable method of judging
whether something is conscious. But what other criteria are there? I've
been trained to be as disdainful of "verificationism" as anyone, so I'm
open to the idea that externally measurable criteria may never succeed
in defining consciousness. But don't we need a list of externally
measurable criteria in order to do scientific experiments that "discover
the causes of consciousness"?

It may be that the problem is that my model of how the experiments would
go is too naive. I agree that it is naive, but it seems to me that *any*
experiment must at some point face the problem of deciding whether
questionable-case-X is a conscious entity or not. Without some method
of doing this that is at least somewhat objective, I don't see how the
research program can get off the ground.

I suppose that one answer to my question is to assert confidently that
no matter what kind of genetically engineered monstrosity or cyborgian
creation we are faced with, it will be intuitively clear whether it is
*really* conscious (like a person) or a fake (like a Chinese room). But
even if our intuitions agree with Searle on the cases that we have
thought of so far, a grand *a priori* generalization like this seems rash.

I've read _The_Rediscovery_of_the_Mind_ but not much else of Searle. It
seems that this is a basic question that Searle must have addressed at some
point. Where do I go to read about this? What does he say?
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences

Anders N Weinstein

May 22, 2000
In article <8gbji2$2...@schubert.mit.edu>, <nos...@nospam.mit.edu> wrote:
>Something has puzzled me for a while about John Searle's "research
>program for consciousness," i.e., the program to "discover the causes
>of consciousness." I don't quite see how to carry out this program,
>given Searle's assumptions.
>
>The most naive model of how one might "discover the causes of consciousness"
>is to identify a few promising parameters, and to run a bunch of tests using
>different values of the parameters. Test 1 might set parameter A = foo,
>parameter B = bar, and so on. Then we would observe the result, and circle
>"conscious" or "not conscious" on our scorecard as appropriate. Then we
>could analyze the results to determine what parameters were relevant and
>what parameters were mostly irrelevant.
>
>A key step in this process is the judgment of whether something is conscious
>or not conscious.

I am not certain of Searle's position here. I suspect from some
things he writes that he may hold that an authoritative
judgement of whether something is conscious can only be made from the
first-person perspective. Since any behavior viewed third-personally
is logically independent of the presence of consciousness on his view,
any third-person attribution of consciousness would have to be made
on an inferential basis, secured by some principle like "similar
causes, therefore similar effects".

There is a large tradition of thought about the mind that shares the
assumption that any experimental psychology would have to be conducted
via introspection, since, it is assumed, only you can observe your own
sphere of consciousness.

I take it a line of thought in Wittgenstein and Strawson has exploded
this myth completely. This view holds to the contrary that (1) you do
not know facts about your own mind via exercise of an inner perceptual
capacity at all; and, more importantly, (2) facts concerning other
minds must be knowable on at least some occasions by direct application
to behavior visible in the observer's experience. This since it is a
necessary part of the meaning of psychological terms in the language
that they have third-person uses as well as first-person uses.

So of course I agree that the issue you raise is an insuperable problem
for Searle's view. I don't think he has really developed any serious
idea of how to pursue a scientific investigation into "the causes of
consciousness"; this idea functions in his thought mainly as part of a
negative (anti-behaviorist, anti-functionalist, anti-computationalist)
polemic. I think one should agree with Searle's negative points, but
disagree with his positive theory. Once one does that one can also
reject the idea that brains magically cause consciousness to arise,
and explain consciousness by reference to the characteristic behavior
in which consciousness is manifested to observers who have become
fitted to discern it.

Neil W Rickert

May 22, 2000
nos...@nospam.mit.edu writes:

>Something has puzzled me for a while about John Searle's "research
>program for consciousness," i.e., the program to "discover the causes
>of consciousness." I don't quite see how to carry out this program,
>given Searle's assumptions.

Perhaps "causes of consciousness" is already a mistaken notion. We
might discuss what is the cause of a tornado, or the cause of a
hurricane. But we do not normally talk of the causes of weather.
Likewise, I think consciousness is not the type of thing for which we
should seek a cause. The term "consciousness" is just too broad and too
ill defined for such considerations.

>The most naive model of how one might "discover the causes of consciousness"
>is to identify a few promising parameters, and to run a bunch of tests using
>different values of the parameters. Test 1 might set parameter A = foo,
>parameter B = bar, and so on. Then we would observe the result, and circle
>"conscious" or "not conscious" on our scorecard as appropriate. Then we
>could analyze the results to determine what parameters were relevant and
>what parameters were mostly irrelevant.

The problem here is precisely that "consciousness" is ill defined.

>A key step in this process is the judgment of whether something is conscious
>or not conscious. This is where it seems to me that a Searlian would have
>difficulty. I'm not saying that Searle balks at judgments of whether
>something is conscious or not; quite the opposite. He is confident that
>human beings, and perhaps other biological systems, are conscious, while
>something like the Chinese room is not. Although these are contentious
>claims, I'm willing to grant them for the sake of argument. The problem
>I have is that I'm not sure how to make such judgments myself, especially
>in the strange cases that are bound to arise when we tweak parameters in
>funny ways.

That's the problem with ill defined concepts. When you speak of
"consciousness", I cannot be sure that you mean the same thing as I
do. Chalmers seems to identify consciousness with having qualia. I
am more inclined to identify it with thinking. Already, that is two
different notions of consciousness.

This whole "consciousness" thing is, in my opinion, a misdirection.
We should be investigating human behavior and learning. When we
understand that well enough, consciousness will take care of itself.
To use my earlier analogy, if you set out to investigate the causes
and nature of weather, you would probably never discover much. But
if you set out to discover the causes of winds, hurricanes,
rainstorms, blizzards, etc, you might succeed in that, and discover
much about the nature of weather in the process.

> The Chinese room argument suggests that purely "behavioral"
>criteria will not succeed in giving us a reliable method of judging
>whether something is conscious.

Sorry, but I don't see that. It seems pretty obvious that a Chinese
Room could not behave anything like a system that we consider
conscious. In fact, the persuasiveness of Searle's argument is
probably related to that.

As far as I can tell, Searle believes that behavioral criteria are
not sufficient. But I cannot see that he has actually made the case
for that view.

> But what other criteria are there? I've
>been trained to be as disdainful of "verificationism" as anyone, so I'm
>open to the idea that externally measurable criteria may never succeed
>in defining consciousness. But don't we need a list of externally
>measurable criteria in order to do scientific experiments that "discover
>the causes of consciousness"?

If we don't have externally measurable criteria for consciousness,
then we don't really know what we are talking about when we use that
term. I don't believe that last sentence of mine is equivalent to
verificationism.

>It may be that the problem is that my model of how the experiments would
>go is too naive. I agree that it is naive, but it seems to me that *any*
>experiment must at some point face the problem of deciding whether
>questionable-case-X is a conscious entity or not. Without some method
>of doing this that is at least somewhat objective, I don't see how the
>research program can get off the ground.

>I suppose that one answer to my question is to assert confidently that
>no matter what kind of genetically engineered monstrosity or cyborgian
>creation we are faced with, it will be intuitively clear whether it is
>*really* conscious (like a person) or a fake (like a Chinese room). But
>even if our intuitions agree with Searle on the cases that we have
>thought of so far, a grand *a priori* generalization like this seems rash.

Perhaps you should take "consciousness research" as just the latest
fad. We might eventually understand consciousness. But not because
of embarking on a faddish program of consciousness research.

>I've read _The_Rediscovery_of_the_Mind_ but not much else of Searle. It
>seems that this is a basic question that Searle must have addressed at some
>point. Where do I go to read about this? What does he say?

You could read his "Intentionality: an essay in the philosophy of
mind," but you might find that there is very little science there.


Gary Forbis

May 22, 2000

Anders N Weinstein <ande...@pitt.edu> wrote in message news:8gbm4v$8q1$1...@usenet01.srv.cis.pitt.edu...

> I take it a line of thought in Wittgenstein and Strawson has exploded
> this myth completely. This view holds to the contrary that (1) you do
> not know facts about your own mind via exercise of an inner perceptual
> capacity at all; and, more importantly,

As I understand your position, one knows facts about the world through
direct perception; that is, you know the chair because you perceive it, not
because you manipulate perceived sense data.

> (2) facts concerning other
> minds must be knowable on at least some occasions by direct application
> to behavior visible in the observer's experience.

I'll come back to this.

> This since it is a
> necessary part of the meaning of psychological terms in the language
> that they have third-person uses as well as first-person uses.

Yes, this is a problem. Unlike the case of the chair where the object of
perception is open to all, consciousness is only open to perception by
the mind having it. To a certain extent consciousness is a reification of
a relationship involving two other objects but even so it can have properties.
There will always be a problem that different people may reference different
things by the word "consciousness" since its public use and identification is
constrained by behavior.

Now back to point (2).

We draw inferences all the time. No one has actually seen an atom yet
we think we know what is being referenced and that atoms exist. The
existence of atoms is not dependent upon our use of language even
though the referent of the word "atom" is. Likewise, that which a person
refers to by the word "consciousness" can exist independent of any public
use of language. A person may be wrong about what the public language
references without being wrong about the existence of the object he or
she references when using the word.

I don't know why you allow that individuals may perceive the world directly
but not that they may perceive their consciousness directly.

C. White

May 23, 2000

<nos...@nospam.mit.edu> wrote in message news:8gbji2$2...@schubert.mit.edu...

> Something has puzzled me for a while about John Searle's "research
> program for consciousness

For an update on Searle and consciousness, check out the relevant paper from
his website:

http://socrates.berkeley.edu/%7Ejsearle/

Anders N Weinstein

May 23, 2000
In article <uML39QBx$GA.263@cpmsnbbsa06>,
Gary Forbis <GaryF...@email.msn.com> wrote:
>
>Anders N Weinstein <ande...@pitt.edu> wrote in message news:8gbm4v$8q1$1...@usenet01.srv.cis.pitt.edu...
>> I take it a line of thought in Wittgenstein and Strawson has exploded
>> this myth completely. This view holds to the contrary that (1) you do
>> not know facts about your own mind via exercise of an inner perceptual
>> capacity at all; and, more importantly,
>
>As I understand your position, one knows facts about the world through
>direct perception; that is, you know the chair because you perceive it, not
>because you manipulate perceived sense data.

Yes, at least sometimes. The crucial point is that the cognitive upshot
of perception in a discursive epistemic agent -- a "thick" experience
-- is a person state in which acquired conceptual capacities are
jointly exercised ("actualized"). So it can be an experience to the effect
that "that F is G", for example. In this experience the capacity to
deploy the singular concept expressed in context by the phrase "that F"
(which involves mastery of the demonstrative mode of presentation) is
deployed together with the ability to use the predicative concept
expressed by the schema "___ is G".

However, the present point concerned knowledge of
one's own psychological states, so-called "inner perception".

>> (2) facts concerning other
>> minds must be knowable on at least some occasions by direct application
>> to behavior visible in the observer's experience.

>> This since it is a
>> necessary part of the meaning of psychological terms in the language
>> that they have third-person uses as well as first-person uses.
>
>Yes, this is a problem. Unlike the case of the chair where the object of
>perception is open to all, consciousness is only open to perception by
>the mind having it. To a certain extent consciousness is a reification of

This is precisely the claim being denied, and I hinted above at the
argument for denying it.

Here is a bit more on the rationale: Conceptual abilities are
systematic. In order to conceptually think the proposition expressed by
"I have a headache", as an articulate proposition, one must have a
recombinant set of capacities, a set that meets what Gareth Evans
labelled the "Generality Constraint". That is, one must be able to
understand variations of the concepts along both the subject and
predicate dimensions. For example, such variants as "I have a
toothache", "I have an itch", "I have a left foot" (same subject,
different predicates); and "He has a headache", "You have a headache",
"the man in the red hat has a headache" (same predicate, different
subjects).

So, in brief, you could not enjoy conceptual consciousness of
yourself as a *subject* of psychological states unless you also have
a conception of how to apply psychological predicates to other
subjects. For only then does your thought resolve into a significant
subject-predicate structure at all.
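
A toy way to picture this systematicity (mine, not Evans's or
Strawson's): grasping any one subject-predicate combination requires
command of the whole grid, as in the sketch below.

    # Illustration only: systematicity as free recombination of
    # subject and predicate expressions.
    subjects = ["I have", "you have", "he has", "the man in the red hat has"]
    predicates = ["a headache", "a toothache", "an itch", "a left foot"]

    for s in subjects:
        for p in predicates:
            print(s, p)  # 4 x 4 = 16 thinkable combinations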

But the key point is to focus *always* on seeing that p, where p
involves certain concepts being brought to bear. With the demise of
the myth of given sensory data as the basis of perceptually obtained
knowledge, there is no limit on the sorts of facts that might on
occasion be directly perceived.

From this perspective, what are "open to perception" are *facts*,
truths or putative truths, expressed by complete sentences of natural
language. And among the facts may be psychological facts, propositions
in which psychological terms are involved. For example, you may hear a
sportscaster say of a downed athlete, "look at him, he's obviously in a
lot of pain". One can see that the person is in pain.

There is of course a possibility of error on these matters. However,
there is a possibility of error on every matter that can be the
content of an objective perceptual judgement. The mere possibility
of error does not distinguish perception of psychological truths from
perception of truths concerning physical objects.

Phenomenologically, this should be quite plausible once it is pointed
out correctly. For we most naturally see other people and their doings
*as* expressive of mentality and consciousness; this "seeing-as"
informs our most epistemically basic experience of our fellows and
their doings. It is not a theoretical inference or hypothesis we make
that other human beings have invisible souls hidden behind the surface
of their behavior. Rather, if you don't mind the provocative idiom,
features of their souls often show themselves to us directly in the
surface they present to us. Think for example of seeing that someone's
feelings are hurt by reading this in their face.

>a relationship involving two other objects but even so it can have properties.
>There will always be a problem that different people may reference different
>things by the word "consciousness" since its public use and identification is
>constrained by behavior.

I agree "consciousness" is broad and woolly. I take it it
is a placeholder for more determinate sorts of states that will
be uncontroversial examples of states of consciousness.

Philosophers have distinguished cognitive states of consciousness from
sensory states of consciousness. E.g. Ned Block distinguishes several
concepts of consciousness, including, as I recall, "access"
consciousness and "phenomenal" consciousness. But what I am suggesting
about the possibility of direct observability of facts concerning
other minds includes both sorts of states.

>We draw inferences all the time. No one has actually seen an atom yet
>we think we know what is being referenced and that atoms exist. The

Philosophers like N.R. Hanson, champion of the idea that perception is
"theory-laden" have claimed that one can learn to
see atoms, perhaps by learning to read their presence in bubble chamber
tracks. I believe Kuhn follows this line with his idea that scientific
paradigms inform one's most basic observations.

At any rate, if there is no such thing as "raw sensory data" it is
a great problem to draw a distinction between what can be directly
perceived and what must be indirectly inferred. I don't know how to
draw any such distinction that rules out the idea that we can perceive
atoms.

>existence of atoms is not dependent upon our use of language even
>though the referent of the word "atom" is. Likewise, that which a person

OK.

>refers to by the word "consciousness" can exist independent of any public
>use of language. A person may be wrong about what the public language

OK. Although I believe some states of consciousness *are* in fact
dependent on the possession of linguistic abilities, such that you can't
be in the state if you don't have linguistic abilities. But not all are.

>references without being wrong about the existence of the object he or
>she references when using the word.

OK, if they have a non-public-linguistic concept through which to perceive
the referent. But even this would be explained through language.

>I don't know why you allow that individuals may perceive the world directly
>but not that they may perceive their consciousness directly.

I agree a person can know about his or her own mental states. The
position in question is that this knowledge is not obtained through
inner perception. For example, there is no organ of perception in this case,
nor is there any characteristic feeling or sensation when you know
what you are thinking, nor is there anything we could call picking out
or identifying the subject of the states in a way that can be wrong.

I was originally pointing out a feature of Wittgenstein's view. There
are actually different variations of his root idea. Wittgenstein
himself denied there was any sense to the putative statement "I know I
am in pain" except as a kind of joke. For what could it mean, he asked,
other than "I am in pain"?

One reason for this is that he thought it logically could not be false
when you were in pain, so it could not carry any additional information
as it were. Another is that he thought of saying "I know" as only
appropriate where certain doubt was relevant and investigations had
been made -- in other work he questioned even whether "I know I have a
hand" had any sense in ordinary circumstances (as it might after an
explosion in which one comes to without feeling in one's hand, and has
to conduct some investigation to establish that yes, one does still
have a hand.)

But mainly he thought of the first-person statement "I am in pain" as
an acquired form of *expression* of the inner state in language users.
Because he thought the very concept of pain was logically connected to
that of its expression, he held these statements did not express
knowledge at all, no more than moaning expressed knowledge.

However, others such as Strawson take a different view and think that a
statement like "I am in pain" is a kind of degenerate case of
knowledge. It differs from the central cases in such things as
that there is no sense to the question "how do you know?", no
inquiry that you have conducted to establish this. But it can be
treated as true or false in the right circumstances.

Another defender of such a position I believe is Sydney Shoemaker,
who in a series of papers and lectures has criticized the idea that
knowledge of one's own mental states is obtained by inner perception.

These fine points are not nearly as important to me as the point that
there is no way to draw a distinction between what may be directly
observed and what can only be inferred that rules out the possibility
that facts concerning other minds may be directly observed on at
least some occasions.

Gary Forbis

May 23, 2000
C. White <cwhi...@hotmail.com> wrote in message news:XXwW4.29567$wb7.1...@news.flash.net...

I want to quote the last paragraph for those who can't be bothered to read the article.
I do this because the last sentence gives the lie to many people's
assertions about Searle's views.

Here is the last paragraph:

In my view the most important problem in the biological
sciences today is the problem of consciousness. I believe
we are now at a point where we can address this problem
as a biological problem like any other. For decades research
has been impeded by two mistaken views: first, that
consciousness is just a special sort of computer program,
a special software in the hardware of the brain; and second
that consciousness was just a matter of information processing.
The right sort of information processing -- or on some views
any sort of information processing -- would be sufficient to
guarantee consciousness. I have criticized these views at
length elsewhere (Searle 1980, 1992, 1997) and do not repeat
these criticisms here. But it is important to remind ourselves
how profoundly anti-biological these views are. On these views
brains do not really matter. We just happen to be implemented
in brains, but any hardware that could carry the program or
process the information would do just as well. I believe, on the
contrary, that understanding the nature of consciousness crucially
requires understanding how brain processes cause and realize
consciousness. Perhaps when we understand how brains do
that, we can build conscious artifacts using some nonbiological
materials that duplicate, and not merely simulate, the causal
powers that brains have. But first we need to understand how
brains do it.

Thank you for letting me get that off my chest.

Now about the article.

Searle makes the strong claim that consciousness is epiphenomenal
on brain processes and asserts this as fact. While I believe the same,
I don't believe it is fact but rather the stronger working hypothesis. I
don't believe dualism is quite dead. It can still linger as a "hidden
variable" within a correlate between physical and mental processes.
The main restrictions are that mental events have to be influenced by
physical events and vice versa and that the laws of physics cannot be
abridged by the mind's influence on the brain.

Anders N Weinstein

May 23, 2000
In article <8gf3ej$2...@ux.cs.niu.edu>,
Neil W Rickert <ricke...@cs.niu.edu> wrote:
>"Gary Forbis" <GaryF...@email.msn.com> writes:
>
>But "mental events" are philosophers' fictions. I am not saying that
>consciousness is a fiction -- only that the term "event" is
>misapplied.

I'm not sure why you say this. If I feel a sudden pain, or, equally, if
my pain suddenly stops, that is a mental event. If I am looking around
in a crowd for an acquaintance and then I notice him, that is a mental
event. If I am trying to remember something and then at last it comes
to me, that is a mental event. If I am looking at the Necker cube and
it changes from one appearance to the other, that is a mental event.

Or so it seems to me. I don't see that this involves any tendentious
philosophical interpretation.


Neil W Rickert

May 23, 2000
"Gary Forbis" <GaryF...@email.msn.com> writes:

>Now about the article.

>Searle makes the strong claim that consciousness is epiphenomenal
>on brain processes and asserts this as fact.

I have not found anywhere that Searle explicitly states that
consciousness is an epiphenomenon. I will grant that it is implicit,
but he might still deny it.

I still read him as sticking by what has been called his "meat theory" --
that meat (or brains) have special causal properties.

The really big problem with Searle's view, is that he defines
consciousness in a way that makes it unamenable to science, yet he
claims that it is a scientific problem. It thus is understandable
that some people would take him to be a dualist in denial.

> While I believe the same
>I don't believe it is fact but rather the stronger working hypothesis. I
>don't believe dualism is quite dead. It can still linger as a "hidden
>variable" within a correlate between physical and mental processes.
>The main restrictions are that mental events have to be influenced by
>physical events and vice versa and that the laws of physics cannot be
>abridged by the mind's influence on the brain.

But "mental events" are philosophers' fictions. I am not saying that

Anders N Weinstein

May 23, 2000
In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
Gary Forbis <GaryF...@email.msn.com> wrote:
>Searle makes the strong claim that consciousness is epiphenomenal
>on brain processes and asserts this as fact.

Searle does not think conscious mental states and events are "epiphenomenal".
He thinks they are "caused by and realized in" brain processes; and that
they have further causal effects in turn.

An epiphenomenon is a causal by-product that does not have further
causal effects. For example, the flashing of lights on the console is
normally epiphenomenal with respect to the operation of an unattended
computer. Epiphenomenalists think conscious subjects can feel and watch
the effects of physical brain processes but can't intervene to change
them in any way, a very strange idea.

I suspect you overestimate the mind-body problem and the supposed
problems with non-Cartesian dualisms. There are many higher-level
phenomena that cannot be explained at the level of basic physics. For
example, there is no basic physical scientific explanation of why a war
is declared. You cannot even define the event of a war being declared
in the language of basic physical science, I would say. So you cannot
reductively connect the higher-level terms to the lower level terms.

I think the predicates attributing conscious mental states and
events are similar higher-level descriptions applied to certain living
organisms. The situation is something like looking at a declaration of
war: you can see it in high level terms, or you can focus on the atomic
events among the physical parts and see it as a complex physical event.
To say that one description is irreducible to the other is not to posit
an immaterial substance or a failure of physical science to provide
complete explanations in its own terms at its own level.


Gary Forbis

May 23, 2000
Anders N Weinstein <ande...@pitt.edu> wrote in message news:8gf4ot$6ci$1...@usenet01.srv.cis.pitt.edu...

> In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
> Gary Forbis <GaryF...@email.msn.com> wrote:
> >Searle makes the strong claim that consciousness is epiphenomenal
> >on brain processes and asserts this as fact.
>
> Searle does not think conscious mental states and events are "epiphenomenal".
> He thinks they are "caused by and realized in" brain processes; and that
> they have further causal effects in turn.

I guess I need to correct my use of the term. Would you consider Leibniz
an epiphenomenalist?

> > An epiphenomenon is a causal by-product that does not have further
> > causal effects. For example, the flashing of lights on the console is
> > normally epiphenomenal with respect to the operation of an unattended
> computer. Epiphenomenalists think conscious subjects can feel and watch
> the effects of physical brain processes but can't intervene to change
> them in any way, a very strange idea.

I don't see it as so strange. If mental processes are caused by brain
processes then the events are caused by the brain processes and
the mental processes come along for the ride.

It seems to me mental processes are a different aspect of some brain
processes. In this way to talk about one causing the other just doesn't
make any sense. Likewise, because they are one and the same (but
different aspects or sets of qualities/properties) mental events don't
cause physical events and physical events don't cause mental events.

> I suspect you overestimate the mind-body problem and the supposed
> problems with non-Cartesian dualisms. There are many higher-level
> phenomena that cannot be explained at the level of basic physics. For
> example, there is no basic physical scientific explanation of why a war
> is declared. You cannot even define the event of a war being declared
> in the language of basic physical science, I would say. So you cannot
> reductively connect the higher-level terms to the lower level terms.

I don't see why not if one is causally related to the other. The declaration of
war is a mental event. If mental events are caused by physical events then
one should be able to define the physical events (or set of events based
upon multiple realizability) that cause the mental event. It may not always
be fruitful to explain events in terms of their causes but that seems like a
separate issue.

James Hunter

May 23, 2000

"C. White" wrote:

> Gary Forbis <GaryF...@email.msn.com> wrote in message
> news:e9QwFCQx$GA.353@cpmsnbbsa08...


> > Searle makes the strong claim that consciousness is epiphenomenal
> > on brain processes and asserts this as fact
>

> I doubt Searle would describe his view as "epiphenomenal". Essentially,
> Searle sees consciousness as a higher level feature of the brain caused by
> lower level neuronal activity.
>
> Searle believes that consciousness is a biological phenomenon and that its
> causes can be determined, but Searle disagrees that a definitional reduction
> can take place (e.g., heat redefined in terms of molecular motion) in the
> usual manner. The reason for this is that consciousness is "ontologically
> subjective" so that sort of redefinition eliminates what is being talked
> about.

It is a little more than ontological subjectivity, it's set "theory"
all over again. Philosophers just can't seem to get enough
of the crap.

Neil W Rickert

May 23, 2000
ande...@pitt.edu (Anders N Weinstein) writes:
>In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
>Gary Forbis <GaryF...@email.msn.com> wrote:
>>Searle makes the strong claim that consciousness is epiphenomenal
>>on brain processes and asserts this as fact.

>Searle does not think conscious mental states and events are "epiphenomenal".
>He thinks they are "caused by and realized in" brain processes; and that
>they have further causal effects in turn.

It is hard to square this with his CR argument. There he seems to
claim that strong AI and weak AI are behaviorally equivalent. If
what distinguishes them is not an epiphenomenon, and if it has further
causal effects, how could the two not be behaviorally
distinguishable? Or are you saying that Searle is a dualist, and the
further causal effects would take place in an immaterial realm?


James Hunter

May 23, 2000

Gary Forbis wrote:

> Anders N Weinstein <ande...@pitt.edu> wrote in message news:8gf4ot$6ci$1...@usenet01.srv.cis.pitt.edu...

> > In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
> > Gary Forbis <GaryF...@email.msn.com> wrote:
> > >Searle makes the strong claim that consciousness is epiphenomenal
> > >on brain processes and asserts this as fact.
> >
> > Searle does not think conscious mental states and events are "epiphenomenal".
> > He thinks they are "caused by and realized in" brain processes; and that
> > they have further causal effects in turn.
>

> I guess I need to correct my use of the term. Would you consider Leibniz
> an epiphenomenalist?
>
> > An epiphenomenon is a causal by-product that does not have further
> > causal effects. For example, the flashing of lights on the console is
> > normally epiphenomenal with respect to the operation of an unattended
> > computer. Epiphenomenalists think conscious subjects can feel and watch
> > the effects of physical brain processes but can't intervene to change
> > them in any way, a very strange idea.

>

> I don't see it as so strange. If mental processes are caused by brain
> processes then the events are caused by the brain processes and
> the mental processes come along for the ride.

>
> It seems to me mental processes are a different aspect of some brain
> processes. In this way to talk about one causing the other just doesn't
> make any sense. Likewise, because they are one and the same (but
> different aspects or sets of qualities/properties) mental events don't
> cause physical events and physical events don't cause mental events.

It might be useful to think of space and time from a more modern viewpoint
than Leibniz's to understand what might cause something else to happen.


Gary Forbis

May 23, 2000
C. White <cwhi...@hotmail.com> wrote in message news:LAFW4.33055$wb7.1...@news.flash.net...

>
> Gary Forbis <GaryF...@email.msn.com> wrote in message
> news:e9QwFCQx$GA.353@cpmsnbbsa08...
> > Searle makes the strong claim that consciousness is epiphenomenal
> > on brain processes and asserts this as fact
>
> I doubt Searle would describe his view as "epiphenomenal". Essentially,
> Searle sees consciousness as a higher level feature of the brain caused by
> lower level neuronal activity.
>
> Searle believes that consciousness is a biological phenomenon and that its
> causes can be determined, but Searle disagrees that a definitional reduction
> can take place (e.g., heat redefined in terms of molecular motion) in the
> usual manner.

> The reason for this is that consciousness is "ontologically
> subjective" so that sort of redefinition eliminates what is being talked
> about.

I agree with this. It is why I don't understand the view that one causes the
other. It seems that the two, that is the physical and the mental, being
strongly correlated should be sufficient. Isn't there a usual requirement
that cause be temporally prior to effect?

I didn't come away from the article on his web site thinking that he favored
one causal explanation over all others, but I may not have read closely enough.
He did discuss some problems with some of the explanations. Did he really
rule out sub-neuronal activity?


C. White

May 24, 2000

James Hunter <James....@Jhuapl.edu> wrote in message
news:392B28F0...@Jhuapl.edu...

> It is a little more than ontological subjectivity, it's set "theory"
> all over again. Philosophers just can't seem to get enough
> of the crap.

I don't get you.

C. White

May 24, 2000

Neil W Rickert <ricke...@cs.niu.edu> wrote in message
news:8gffcp$3...@ux.cs.niu.edu...

> It is hard to square this with his CR argument. There he seems to
> claim that strong AI and weak AI are behaviorally equivalent.

The CR argument was meant to demonstrate that syntax does not procure
semantics or guarantee that there is mental content. The point is that the
man in the CR could feed back answers in Chinese by following a rule book
and without understanding Chinese. In the same way, a digital computer
could mimic understanding Chinese by following a program. The argument
refutes the view that brain processes are computational or algorithmic
("Strong AI").

On Searle's definitions, "Strong AI" is the view that the brain just
computes and "Weak AI" is the view that computers can *simulate* brain
operations.

At some point Searle also argued that not only is semantics not intrinsic
to syntax, but syntax isn't even intrinsic to physics.
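
A caricature of the room in code may help fix the point. The
question-answer pairs below are invented placeholders (a real rule book
would be vastly larger), and the sketch is mine, not Searle's:

    # Pure symbol manipulation: match the shape of the input, emit the
    # listed output. Nothing in the program understands Chinese.
    RULE_BOOK = {
        "ni hao ma?": "wo hen hao, xiexie.",
        "ni jiao shenme mingzi?": "wo jiao Xiaoming.",
    }

    def room(chinese_input):
        # Unrecognized input gets a canned stall ("please say it
        # again"), still without any understanding.
        return RULE_BOOK.get(chinese_input, "qing zai shuo yibian.")

    print(room("ni hao ma?"))  # fluent-looking output, no semantics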

rick++

May 24, 2000
Searle is a philosopher, not a scientist.
He is trying to point out what may be misconceptions in the understanding
of artificial intelligence, not to suggest how to do it.
He also works in other areas of philosophy.


C. White

May 24, 2000

Gary Forbis <GaryF...@email.msn.com> wrote in message
news:esFJouUx$GA.318@cpmsnbbsa08...

> I agree with this. It is why I don't understand the view that one causes
> the other. It seems that the two, that is the physical and the mental, being
> strongly correlated should be sufficient. Isn't there a usual requirement
> that cause be temporally prior to effect?

Searle sees consciousness as a physical phenomenon. He abandons the
traditional "mental"/"physical" distinction. He doesn't say that
consciousness isn't peculiar: its ontological subjectivity makes it
peculiar or at least prevents it from being reduced to the physical in a
fashion that denies its subjective nature.

Oliver Sparrow

May 24, 2000
nos...@nospam.mit.edu wrote:

>But what other criteria are there?

"Nospam@nospam" implies a genuinely belt and braces approach to issues. In
the case of this issue, the key issue is one of modeling. All approaches to
awareness are currently (necessarily) phenomenological. That is, we have no
idea what it involves, and therefore can only look for emitted concomitants
of it. As the criteria of judgement and the thing being sought out are
identical, this is as philosophically useful as classifying text as 'junk'
or 'literature' on the basis of their 'literary content', as judged by
entities (committees, reviewers) in which the characteristic 'literary
awareness' is deemed to be present. (Note that whilst useless in
intellectual sophistry, this approach is pragmatically very helpful, and
the basis of much of our society.)

A clear division, where belt and braces may be deemed fully deployed,
involves a 'clockwork' model. "Is this or is this not a frequency-modulated
communication system: Y/N?" is meaningful when applied to an
appropriate structure because we have exact analytical insight into what is
entailed; we understand the clockwork that drives the emergent
phenomenology. If we did not, then we would have to invoke 'listener
panels', or measure relative signal-to-noise, and assign the structure to one
camp or the other on the basis of informed guesswork.

We do not understand the clockwork of awareness. Some argue that we cannot
do so. It is hard to see how they can make such a case in the absence of
reasonably fundamental understanding. Searle can have his opinions. Let us
not confuse this with anything stronger than a view, however.
_______________________________

Oliver Sparrow

Oliver Sparrow

May 24, 2000
"Gary Forbis" <GaryF...@email.msn.com> wrote:

>Searle makes the strong claim that consciousness is epiphenomenal
>on brain processes and asserts this as fact. While I believe the same,
>I don't believe it is fact but rather the stronger working hypothesis.

Useful quote: the better side of Searle. It is hard to disagree with what
he says. However, we are supposed to agree that anything that happens which
is regular is computable. Thus, if the brain does stuff, then that stuff is
replicable on a widget, given world enough and time. I think here is the
nub, and here is why.

One can simulate a system in three distinct ways.

1) Do so in phenomenal terms, such that the expressed, external behaviour
of the model is indistinguishable from the target system. Examples of this
are captured in the Turing test and in state-space replication.

2) Do so analytically, creating an exact and homologous replica of internal
actions of the system. This is - in this instance - done through explicitly
algorithmic methods. That is, one writes down a program for a processor
that has no relationship to the target system, such that its actions create
a pattern of events that is exactly homologous with the target system.
Searle's room and huge look-up tables are examples of this.

3) Once again, do so analytically, but in the sense of building a (meta)
processor that will create a homology for itself, without any attempt to
explicitly algorithmatise the subject. The program that guides the
low-order engine or processor sets up a frame of interaction and emulates
the properties of the component parts within this (people in a market,
molecules in a flow) such that emergent properties drive the simulation.
Such a system may well need large amounts of specific data to set up
initial conditions such that events flow in homology with the target system.
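
A minimal sketch of sense (3), with invented traders and an invented
price rule; the point is only that nobody writes down the price path
itself, it emerges from the local rules:

    import random

    random.seed(0)
    price = 100.0
    # Component parts with local properties: traders with private valuations.
    valuations = [random.uniform(90.0, 110.0) for _ in range(50)]

    for step in range(10):
        buyers = sum(1 for v in valuations if v > price)
        sellers = len(valuations) - buyers
        price += 0.1 * (buyers - sellers)  # excess demand nudges the price
        print(step, round(price, 2))      # the emergent quantity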

These three senses of modeling tend to get muddled together. It is, however,
asserted that anything that is describable is open to simulation by
underpinned widgetry, if enough of it - the underpinning plumbing - is
deployed. Exactly how this is true in each of these approaches varies, but
is said to hold. I will not argue this point. Equally, practical issues -
such as NP and beyond, acute sensitivity to initial conditions, quantum
limits - produce more than conceptual limits to the validity of this. Let
us ignore all of these complications. Let us assume that if something can
be described, then it can be simulated by God's Own Processor, the gadget
that can do anything. Thus, rationality suggests, a brain snapshot leads
us, one way or another, to a simulated brain...

The snapshot - the thing to be modeled - is, however, assumed to be
accessible. Once again, this raises any number of practical red herrings
and theoretical issues about measurement, but let us put them aside. Let us
go for the jugular and let us ask: what do I need to have in order to
describe system X? I need to know how system X works. But to do that, I
need to know what System X does that is relevant, for all things carry out
many interactions: bounce photons to Venus and echo sounds, get warm and
evoke paternal feelings from passers-by. What is germane? How does X do
what is germane? Thus, an iterative process is set up, highly
characteristic of science and of social enquiry.

A characteristic of the mind (and doubtless of other systems) is that it
learns, and in learning, it changes the discriminants upon which it
functions, through which it perceives and by which it ultimately learns
more. Despite running on a reasonably common layer of substrates up to a
certain level, beyond this point in the hierarchy, all is adaptive or
subject to individuation and distinctiveness. The System X that is Gary, or
Mr C. Room, changes and changes; and what it shifts about are not the
surface manifestations of an unchanging hierarchy, but the fairly basic
plumbing on which interpretation is made. Indeed, many distinct forms of
ordering - all themselves changing - run through any one nexus of what is
being done. A thing can both reflect sound and evoke paternal feelings.

The iterative process of investigation down to a fixed model of Gary-hood
is, therefore, deeply problematic. The model that algorithmatisation (or
whatever) is supposed to address is changing, and for the model to keep up,
that which wrote the algorithm would have to be meta-modeling the changes
being made by the subject system. This regress goes on for ever, unless one
writes an algorithmic process which changes itself. Unfortunately, one
cannot be sure that how it does this will be concomitant with how the
target system is doing it unless one is sure that all relevant aspects of
ordering have been taken into account. One cannot be sure of this, because
we - probes, people - see epiphenomena and external manifestations,
rather than the whole or the details, of everything. We may have neat
concepts ("a slice of bread", "a munitions factory") but in actuality
these entities are complex, layered structures. John in dispatch may be
feeling sick, there may be fluff in air conditioner number seven, the water
supply may briefly fluctuate in pressure... None of these interlacing
consequences of remote structures and rule-changing interactions fit into our
percept of the entity. We cannot know what matters until it expresses
itself, and then only in the terms which we have defined as mattering.
Reality is usually non-computable, ergo.

What is this to simulating minds? (A) that Searle is right in asserting
that minds are not lists of static rules. (B) that Searle is wrong in
believing that this distinguishes minds from other - lesser - phenomena in
the real world. None of it is really 'computable' but some aspects of it
are simple or artificial enough to allow us to fool ourselves that it is.
(C) Can minds be created - as opposed to founded in set rules - or is there
a mystic difference between bundles of neurons and the rest of everything?
Answer, we do not know, but there is no reason to suppose that
algorithmatisation is a sensible way to go, or that a mental structure,
supported on whatever substrate, would reduce to or be 'nothing more than'
a set of rules. My guess is that what we 'are' is a set of modular
predispositions to assemble a class of spaces in which interactions can
play themselves out. In doing so, they change the nature of the spaces and
the kinds of transactions occurring in them. Data drive this, and initial
and momentary conditions embody more - much more - information than the
processing architecture of the moment.

Modules that interact do so with partial tokens of their internal states,
and are embedded in wider versions of the predisposition-creating
architecture. Evocation of multiple resonances within networks of modules
gives rise to qualia (are qualia) and these cross-system networks are the
way in which the structure talks to itself. To know this would allow you to
simulate it, but to know that you knew it utterly would require you to -
for example - 'feel' the qualia. You could not understand the drivers of
the decision framework without doing so, but you could not do so as an
outsider. Information relativity, therefore, there are Some Things we
Cannot Know, Dr Frankenstein, about complex systems as much as the very
tiny. Doubtless there is a Heisenberg principle to be enunciated, but I
have to go and talk about other matters, so that is what I shall now do.
_______________________________

Oliver Sparrow

Seth Russell

May 24, 2000
Neil W Rickert wrote:

> But "mental events" are philosophers' fictions. I am not saying that
> consciousness is a fiction -- only that the term "event" is
> misapplied.

Are you saying that you don't believe in 'mental events'? That you
cannot assign a time to a thought that flits through Rickert's head?
Assume one class of 'mental events' is a change in the predilection
to behave in some definite manner. Let's say we identify a group of
people and test them regarding some predilection to behave in a
specific way. Then let them read a convincing passage of text
designed to change that predilection. Test them afterwards and see
if the predilection had changed. Are you predicting that a
sufficiently scientific study would find no correlation between the
predilection before and after the reading? Or do you (somehow?)
interpret that kind of data as not implying that something happened
in these guys' heads? Why are mental events philosophers' fictions?
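
A sketch of that design in Python, assuming (and the assumption is the
contested part) that we already have some behavioural score for the
predilection; the numbers are invented:

    # Hypothetical paired scores for five subjects, before and after
    # reading the persuasive passage.
    before = [0.62, 0.55, 0.71, 0.48, 0.66]
    after = [0.41, 0.44, 0.52, 0.40, 0.58]

    diffs = [a - b for a, b in zip(after, before)]
    mean_shift = sum(diffs) / len(diffs)
    # A systematic nonzero shift says *something* happened between the
    # tests; whether to call it a mental event is what is in dispute.
    print("mean shift after reading:", round(mean_shift, 3))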

--
Seth Russell
http://robustai.net/ai/word_of_emouth.htm
Click on the button ... see if you can catch me live!
http://robustai.net/JournalOfMyLife/users/SethRussell.html
Http://RobustAi.net/Ai/Conjecture.htm

Anders N Weinstein

May 24, 2000
In article <8gffcp$3...@ux.cs.niu.edu>,
Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
>>Gary Forbis <GaryF...@email.msn.com> wrote:
>>>Searle makes the strong claim that consciousness is epiphenomenal
>>>on brain processes and asserts this as fact.
>
>>Searle does not think conscious mental states and events are "epiphenomenal".
>>He thinks they are "caused by and realized in" brain processes; and that
>>they have further causal effects in turn.
>
>It is hard to square this with his CR argument. There he seems to
>claim that strong AI and weak AI are behaviorally equivalent. If
>what distinguishes them is not an epiphenomenon, and if it has further
>causal effects, how could the two not be behaviorally
>distinguishable?

Well Searle's position is clearly that the same external behavior can
be produced through different internal causes. In the computational
system it is produced through the causal efficacy of the computational
events, with no consciousness or intentionality being constituted. In
the human being, the same behavior might be produced in part through
the efficacy of conscious intentional mental states which are caused by
and realized in the neural substrate.

>distinguishable? Or are you saying that Searle is a dualist, and the
>further causal effects would take place in an immaterial realm?

Searle does not think that conscious mental states and events take
place in an immaterial realm. He thinks consciousness involves
properties that are caused by and realized in their material
substrate.

In one place, as I recall, he suggests it is something like the
macroscopic solidity of a block of ice when it freezes. The solidity is
both caused by and realized in the system of molecules, and can have
causal effects in turn (for the solidity of the ice might cause further
macroscopic events, say blockage of a hole, with everything that that causes).

BTW I think you need to be more careful with your terminology, since
there are many forms of "dualism" possible, not all of which involve
the idea of events in an immaterial realm.

Searle indeed wishes to reject the premises of any dualism. He claims
that consciousness is a higher level *physical* property just as
macroscopic solidity is. But I don't think he has a criterion of
"physicality" that will enable him to secure this conclusion, so one
might well suggest he is a kind of dualist.

Still he holds conscious mental events are realized in the brain.

Neil W Rickert

May 24, 2000
Seth Russell <se...@robustai.net> writes:
>Neil W Rickert wrote:

>> But "mental events" are philosophers' fictions. I am not saying that
>> consciousness is a fiction -- only that the term "event" is
>> misapplied.

>Are you saying that you don't believe in 'mental events'? That you
>cannot assign a time to a thought that flits through Rickert's head?

If my attention is on the "events" then I am not watching the time,
and if my attention is on the time, then I am not watching the
"events". Yes, I am questioning whether we can assign times.

>Assume one class of 'mental events' is a change in the predilection
>to behave in some definite manner.

Okay. But now you are talking about externally observable behavior.
That's a physical event.

> Let's say we identify a group of
>people and test them regarding some predilection to behave in a
>specific way.

You are testing physical events.

> Then let them read a convincing passage of text
>designed to change that predilection. Test them afterwards and see
>if the predilection had changed. Are you predicting that a
>sufficiently scientific study would find no correlation between the
>predilection before and after the reading?

No, I am not claiming that. But here you are talking of a physical
event (a change in behavior). So what was the mental event, and what
makes it mental?

> Or do you (somehow?)
>interpret that kind of data as not implying that something happened
>in these guys heads?

I expect there were changes. I presume that some of those changes
were not conscious (not mental). I don't see that you have provided
evidence for mental events.

Anders N Weinstein

May 24, 2000
In article <#HYEmYRx$GA.321@cpmsnbbsa09>,
Gary Forbis <GaryF...@email.msn.com> wrote:
>Anders N Weinstein <ande...@pitt.edu> wrote in message news:8gf4ot$6ci$1...@usenet01.srv.cis.pitt.edu...
>> In article <e9QwFCQx$GA.353@cpmsnbbsa08>,
>> Gary Forbis <GaryF...@email.msn.com> wrote:
>> >Searle makes the strong claim that consciousness is epiphenomenal
>> >on brain processes and asserts this as fact.
>>
>> Searle does not think conscious mental states and events are "epiphenomenal".
>> He thinks they are "caused by and realized in" brain processes; and that
>> they have further causal effects in turn.
>
>I guess I need to correct my use of the term. Would you consider Leibniz
>an epiphenomenalist?

An epiphenomenalist holds that physical events can cause mental
events but not the other way around.

Leibniz' theory of pre-established harmony is usually classified as
"parallelism" instead, since he denied any causal interaction between
minds (monads) and the physical world.

(This position is complicated somewhat by his phenomenalism about the physical
world, but that is how his position is most standardly explained.)

But Searle clearly believes that things happen because of his intentional
states, so he is not an epiphenomenalist.

>> An epiphenomenon is a causal by-product that does not have further
>> causal effects. For example, the flashing of lights on the console is
>> normally epiphenomenal with respect to the operation of an unattended
>> computer. Epiphenomenalists think conscious subjects can feel and watch
>> the effects of physical brain processes but can't intervene to change
>> them in any way, a very strange idea.
>
>I don't see it as so strange. If mental processes are caused by brain
>processes then the events are caused by the brain processes and
>the mental processes come along for the ride.

But then you are a passive observer watching as your body is moved
through its paces by the physical events in your brain. Your desires,
your goals, your needs and wants, these can have no effect on your
behavior.

>It seems to me mental processes are a different aspect of some brain
>processes. In this way to talk about one causing the other just doesn't
>make any sense. Likewise, because they are one and the same (but
>different aspects or sets of qualities/properties) mental events don't
>cause physical events and physical events don't cause mental events.
>

It sounds like you want to hold an identity theory. But if causality
is a relation between events however described, this belies your claim
that it makes no sense to speak of causal interaction.

Say a stimulation of the retina, physical event p1, causes some
other physical event, p2, and, by your theory, p2 is identical to some
mental event m, say the event of your seeing a spot of light.

Now you are right it is odd to say that p2 caused m if p2 = m.
Still if p1 caused p2 and p2 = m then p1 caused m. So there is
still causal interaction between mental and physical events.
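
Schematically, with "causes" as a two-place relation over events (my
notation, just to make the substitution step explicit):

    causes(p1, p2)      premise: the retinal stimulation causes p2
    p2 = m              the identity theory's claim
    ---------------
    causes(p1, m)       by substitution of identicals

So the mental event m stands in causal relations after all.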

>> problems with non-Cartesian dualisms. There are many higher-level
>> phenomena that cannot be explained at the level of basic physics. For
>> example, there is no basic physical scientific explanation of why a war
>> is declared. You cannot even define the event of a war being declared
>> in the language of basic physical science, I would say. So you cannot
>> reductively connect the higher-level terms to the lower level terms.
>
>I don't see why not if one is causally related to the other. The declaration of
>war is a mental event.

I would not say a declaration of war is a mental event, since it has
to take place in public to actually be a declaration of war, it can't
be a subjective event occurring solely in the mind of the declarer.

Moreover, declarations of war are only possible in a certain social
context: even if the president goes into an empty closet and says out loud
into the mirror "I hereby declare war on Canada", no declaration of war
has actually occurred; a man on a desert island cannot issue a declaration
of war, etc.

It is a very difficult matter to say what sort of event a declaration of
war is, I would say.

Anders N Weinstein

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
In article <8gh1oq$6...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>Seth Russell <se...@robustai.net> writes:
>>Neil W Rickert wrote:
>
>>> But "mental events" are philosophers' fictions.
>
>>Are you saying that you don't believe in 'mental events' ? That you
>>cannot assign a time to a thought that flits through Rickert's head?
>
>If my attention is on the "events" then I am not watching the time,
>and if my attention is on the time, then I am not watching the
>"events". Yes, I am questioning whether we can assign times.

But first, we can assign at least approximate times from the third
person, so the subject's attention need not be relevant. For we can
certainly narrow down the times at which, say, John notices the insect
crawling up his leg and jumps with a start to within a specified range,
even if John is not busy attending to his own sensations.

You are correct that we cannot assign very *precise* times to mental
events. The concept "time of a mental event" breaks down and loses
applicability when we move to fine-grained time scales, for various
reasons.

I would say this is similar to the way the everyday concept "edge" as
applied to tables breaks down when we move to the atomic level. One
would not say that edges as spoken about in everyday language were
philosopher's fictions for this reason; neither should one say that
"mental events" are philosopher's fictions.

Perhaps one should say that the idea of a sharply defined mental event
such that one should always be able to give a precise scientific answer
to any question about whether or not it is identical with, is caused,
or caused by, any neurological event, *does* involve a kind of fantasy.

In that sense we might say the ontology of mental events, like that of
edges and surfaces, is irreducibly located in the everyday common sense
world, and may not be constructible within the world as fundamental
science represents it. But we should not say that there are no
mental events, just as we should not say tables don't have edges
in the context of an ordinary discussion.

Neil W Rickert

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>In article <8gh1oq$6...@ux.cs.niu.edu>,
>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>Seth Russell <se...@robustai.net> writes:
>>>Neil W Rickert wrote:

>>>> But "mental events" are philosophers' fictions.

>>>Are you saying that you don't believe in 'mental events' ? That you
>>>cannot assign a time to a thought that flits through Rickert's head?

>>If my attention is on the "events" then I am not watching the time,
>>and if my attention is on the time, then I am not watching the
>>"events". Yes, I am questioning whether we can assign times.

>But first, we can assign at least approximate times from the third
>person, so the subject's attention need not be relevant. For we can
>certainly narrow down the times at which, say, John notices the insect
>crawling up his leg and jumps with a start to within a specified range,
>even if John is not busy attending to his own sensations.

The third person is responding to a physical event, the jumping with
a start. The third person cannot tell whether the mental event was
visually noticing the insect, or feeling the sensation of the insect
on the leg. Indeed, the third person cannot even be sure that this
is not just an automatic "knee jerk" kind of reaction with no
associated mental event.

I grant that there was an event. I am questioning what the adjective
"mental" adds to that.

>You are correct that we cannot assign very *precise* times to mental
>events. The concept "time of a mental event" breaks down and loses
>applicability when we move to fine-grained time scales, for various
>reasons.

>I would say this is similar to the way the everyday concept "edge" as
>applied to tables breaks down when we move to the atomic level. One
>would not say that edges as spoken about in everyday language were
>philosopher's fictions for this reason; neither should one say that
>"mental events" are philosopher's fictions.

>Perhaps one should say that the idea of a sharply defined mental event
>such that one should always be able to give a precise scientific answer
>to any question about whether or not it is identical with, is caused,
>or caused by, any neurological event, *does* involve a kind of fantasy.

I think of it as an attempt to use language of objective
descriptions, and to impute that to subjective phenomena. By the
way, I dislike the term "mental states" (as used in philosophy) for
much the same reason.

We have computers which we explain in terms of state transitions and
events. But why must we assume that the same language fits
subjective experience?
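
To make the contrast concrete, here is the sort of state-transition
description I mean -- a minimal sketch in Python, with a made-up
turnstile as the machine (my example, nothing more):

    # A computer described in the language of states, events and
    # transitions: at every step the machine is in exactly one
    # discrete state, and each input event moves it to a definite
    # next state.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    def next_state(state, event):
        # unknown events leave the state unchanged
        return TRANSITIONS.get((state, event), state)

    state = "locked"
    for step, event in enumerate(["coin", "push", "push"]):
        state = next_state(state, event)
        print(f"step {step}: event {event!r} -> state {state!r}")

Every event here has a definite time (a step number) and a definite
effect on a discrete state. My question is why we should assume that
subjective experience answers to a description of this form.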

>In that sense we might say the ontology of mental events, like that of
>edges and surfaces, is irreducibly located in the everyday common sense
>world, and may not be constructible within the world as fundamental
>science represents it. But we should not say that there are no
>mental events, just as we should not say tables don't have edges
>in the context of an ordinary discussion.

Why not talk of "experiences of physical events", instead of using
the term "mental event"? The term "mental event" really sounds like
an action happening on stage in the Cartesian theater, to which you
reacted. While I don't fully agree with Dennett, there was some
point to his criticism of the "Cartesian theater". I don't believe
we react to mental events at all. Rather, I believe that we react to
physical events, and that our subjective experiences are part of the
causal processes of our reaction.


Neil W Rickert

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
"C. White" <cwhi...@hotmail.com> writes:
>Neil W Rickert <ricke...@cs.niu.edu> wrote in message
>news:8gffcp$3...@ux.cs.niu.edu...

>> It is hard to square this with his CR argument. There he seems to
>> claim that strong AI and weak AI are behaviorally equivalent.

>The CR argument was meant to demonstrate that syntax does not procure
>semantics or guarantee that there is mental content.

Right. The argument is that, even if the AI system gets the behavior
right, it is still missing the mental content. But this would seem
to be an argument that the mental content is causally ineffectual (we
can get the behavior right without it), and that the system isn't
really intelligent unless it has this causally ineffectual mental
content.

The implication is clear, that consciousness is an epiphenomenon.

If Searle does not believe it is an epiphenomenon, he should instead
be arguing that a syntax based AI system could not possibly get the
behavior right.

But perhaps such an argument is too hard, and Searle prefers to play
pointless word games.


James Hunter

unread,
May 24, 2000, 3:00:00 AM5/24/00
to

Anders N Weinstein wrote:

> In article <8gh1oq$6...@ux.cs.niu.edu>,
> Neil W Rickert <ricke...@cs.niu.edu> wrote:
> >Seth Russell <se...@robustai.net> writes:
> >>Neil W Rickert wrote:
> >
> >>> But "mental events" are philosophers' fictions.
> >
> >>Are you saying that you don't believe in 'mental events' ? That you
> >>cannot assign a time to a thought that flits through Rickert's head?
> >
> >If my attention is on the "events" then I am not watching the time,
> >and if my attention is on the time, then I am not watching the
> >"events". Yes, I am questioning whether we can assign times.

>
>


> You are correct that we cannot assign very *precise* times to mental
> events. The concept "time of a mental event" breaks down and loses
> applicability when we move to fine-grained time scales, for various
> reasons.

>
> I would say this is similar to the way the everyday concept "edge" as
> applied to tables breaks down when we move to the atomic level. One
> would not say that edges as spoken about in everyday language were
> philosopher's fictions for this reason; neither should one say that
> "mental events" are philosopher's fictions.

Since fiction stories would seemingly correspond to "mental events",
one should say that "mental events" are the quintessential
philosophers' fictions.

Anders N Weinstein

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
In article <8gh5ub$6...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>In article <8gh1oq$6...@ux.cs.niu.edu>,
>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>>Seth Russell <se...@robustai.net> writes:
>>>>Neil W Rickert wrote:
>
>>>>> But "mental events" are philosophers' fictions.
>
>>>>Are you saying that you don't believe in 'mental events' ? That you
>>>>cannot assign a time to a thought that flits through Rickert's head?
>
>>>If my attention is on the "events" then I am not watching the time,
>>>and if my attention is on the time, then I am not watching the
>>>"events". Yes, I am questioning whether we can assign times.
>
>>But first, we can assign at least approximate times from the third
>>person, so the subject's attention need not be relevant. For we can
>>certainly narrow down the times at which, say, John notices the insect
>>crawling up his leg and jumps with a start to within a specified range,
>>even if John is not busy attending to his own sensations.
>
>The third person is responding to a physical event, the jumping with
>a start. The third person cannot tell whether the mental event was

It is not always so clear to me how we should determine "what the
observer is responding to" if the observer's response involves the
use of concepts
that can distinguish different aspects of the phenomenon.

Saying the observer is responding to a physical event seems to me
a bit like saying I am in this post not responding to your
words or claims, but only to glowing phosphenes on my CRT. "What
the observer is responding to" is relative to the description the
observer would apply.

It may be that the observer is responding to an expression of
a subjective mental state by taking it as an expression of a subjective
state. In that case I would not want to say the person is really only
observing a physical event. It may be that the expressive behavior is
realized in a physical event.

>a start. The third person cannot tell whether the mental event was
>visually noticing the insect, or feeling the sensation of the insect
>on the leg. Indeed, the third person cannot even be sure that this
>is not just an automatic "knee jerk" kind of reaction with no
>associated mental event.

Often you can easily tell these things, perhaps with a little
further inquiry.

>I grant that there was an event. I am questioning what the adjective
>"mental" adds to that.

The point of this example was that we can narrow the time at which
the *mental* event -- the noticing -- occurred. Sometimes it is
possible for a noticing to occur without any outward sign. A good
poker player may notice he has drawn a full house, for example, without
giving any outward sign. We can talk of the event of his noticing this;
that is a mental event.

Essentially, the way to understand what a mental event is is to look
at the language in which we describe them. I am relying on the idea that
certain verbs are recognizably psychological, so may be used to
introduce events: "I can remember the exact moment the solution to
the problem occurred to me as I was brushing my teeth", a mathematician
might say. And so on.

>>You are correct that we cannot assign very *precise* times to mental
>>events. The concept "time of a mental event" breaks down and loses
>>applicability when we move to fine-grained time scales, for various
>>reasons.
>
>>I would say this is similar to the way the everyday concept "edge" as
>>applied to tables breaks down when we move to the atomic level. One
>>would not say that edges as spoken about in everyday language were
>>philosopher's fictions for this reason; neither should one say that
>>"mental events" are philosopher's fictions.
>

>>Perhaps one should say that the idea of a sharply defined mental event
>>such that one should always be able to give a precise scientific answer
>>to any question about whether or not it is identical with, is caused,
>>or caused by, any neurological event, *does* involve a kind of fantasy.
>
>I think of it as an attempt to use language of objective
>descriptions, and to impute that to subjective phenomena. By the

I dislike this way of talking about it since I think it runs the
danger of running together two different senses of "objective". For
it is, as I see it, an objective fact that Jones has a toothache, and
an equally objective fact that anesthetic eases his pain. But it
is a fact concerning his subjective state in a reasonably clear sense.

>way, I dislike the term "mental states" (as used in philosophy) for
>much the same reason.

I would suggest the grammar that implicitly defines what is a
"state" or "event" or an "action" can apply to both physical and
mental phenomena. For example, there are certain mental actions I
can perform, like conjuring a picture in my imagination, that are
subject to my will, and others that are not.

So I would not restrict the very idea of an 'event" to an event
described in physical language. It seems to me there are certainly
events described in psychological or mentalistic language as well.

Some linguists use terms like "stative" or "agentive" to describe
verbs. I have never heard the term "eventive", but it would seem to be
meaningful in the corresponding way. In those terms, the thesis that
there are mental states or events is simply the thesis that some
psychological verbs in everyday language are "stative" or
"eventive". Or so it seems to me. Metaphysics is nothing but
grammar.

>We have computers which we explain in terms of state transitions and
>events. But why must we assume that the same language fits
>subjective experience?

The mere idea of a mental event has nothing special to do with
the computer model. It has to do with the logical grammar of psychological
statements and whether it permits event designators to be introduced
into our language.

That said, I agree that there are many psychological statements that
do not describe occurrent states or events. Obviously "John knows how
to ride a bike" does not describe an event. And "John is riding his
bike" is more of a process. Also, if I say "when he said he was
going to the bank he meant the river bank", I am not describing
any mental state or event.

But other psychological statements do seem apt for introducing
states or events in a broad -- i.e. grammatical -- sense.

>Why not talk of "experiences of physical events", instead of using
>the term "mental event"? The term "mental event" really sounds like

My *experiencing* of a physical event would seem to be a mental event.

>an action happening on stage in the Cartesian theater, to which you
>reacted. While I don't fully agree with Dennett, there was some

I don't see this implication.

It may help to distinguish what we might call first-order from
second-order or reflective mental events. My noticing that the bus
is pulling away might be a first-order mental event, call it e1.
Its object would be the physical event of the bus starting to pull away.
I do not have to be specially introspective or self-conscious for
an event like e1 to occur -- I need not harbor any second-order
or reflective thoughts about e1. Perhaps all my consciousness is
entirely and unselfconsciously directed outward at the bus pulling
away, at the object in the world.

However, I might also undergo a reflective or second-order awareness
of the first-order event e1. For example, I might think that it is
silly for my consciousness to be so caught up in such petty minutiae
as catching a bus as it was in e1. This would involve a new mental
event e2, which is directed in part at e1 -- perhaps some small time later.

But there is no need for such a reflective event to occur in order
to say that the first event, the noticing e1, was a mental event.
Mental events are not necessarily objects of reflective awareness,
but rather are the media through which awareness, normally of the outer
physical world, occurs.

>point to his criticism of the "Cartesian theater". I don't believe

I don't believe in a Cartesian theater. The mere talk of mental
events does not require the picture of observing representations
in an inner Cartesian theater.

>we react to mental events at all. Rather, I believe that we react to
>physical events, and that our subjective experiences are part of our
>causal processes of our reaction.

This is quite consonant with my view. Still a subjective experience is
a mental event par excellence.

Also, we *can* sometimes react to mental events. As the chemicals in the
IV get adjusted, one can ask the patient to give a sign the minute the
pain changes, for example. In doing so the patient is reacting to a
mental event, or so it would seem to me.


Anders N Weinstein

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
In article <8gh7sb$6...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>"C. White" <cwhi...@hotmail.com> writes:
>>Neil W Rickert <ricke...@cs.niu.edu> wrote in message
>>news:8gffcp$3...@ux.cs.niu.edu...
>>> It is hard to square this with his CR argument. There he seems to
>>> claim that strong AI and weak AI are behaviorally equivalent.
>
>>The CR argument was meant to demonstrate that syntax does not procure
>>semantics or guarentee that there is mental content.
>
>Right. The argument is that, even if the AI system gets the behavior
>right, it is still missing the mental content. But this would seem
>to be an argument that the mental content is causally ineffectual (we
>can get the behavior right without it), and that the system isn't

The logic of this seems muddled.

It only shows that the mental content is not necessary for the
behavior. This does not show the mental content is ineffectual when it
is operative. Searle's idea is that the same behavior might be produced
in different ways.

Compare: You can build a car that runs with a gas-burning engine, or
without. That hardly shows that gas-burning is epiphenomenal in an
internal combustion vehicle. Searle's view is as it were that human
bodies have consciousness-driven engines; he does not deny that other
sorts of engines might realize the same behavior without producing
any consciousness.


Gary Forbis

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
Neil W Rickert <ricke...@cs.niu.edu> wrote in message news:8gh7sb$6...@ux.cs.niu.edu...

> "C. White" <cwhi...@hotmail.com> writes:
> >Neil W Rickert <ricke...@cs.niu.edu> wrote in message
> >news:8gffcp$3...@ux.cs.niu.edu...
> >> It is hard to square this with his CR argument. There he seems to
> >> claim that strong AI and weak AI are behaviorally equivalent.
>
> >The CR argument was meant to demonstrate that syntax does not procure
> >semantics or guarentee that there is mental content.
>
> Right. The argument is that, even if the AI system gets the behavior
> right, it is still missing the mental content. But this would seem
> to be an argument that the mental content is causally ineffectual (we
> can get the behavior right without it), and that the system isn't
> really intelligent unless it has this causally ineffectual mental
> content.

That the behavior can be right without the mental content doesn't
imply that the mental content is causally ineffective but rather that
the behavior is realizable by ways other than mental content.

Not all clocks have pendulums but that doesn't mean pendulums
are ineffective at their function.

> The implication is clear, that consciousness in an epiphenomenon.

Anders has somewhat cleared this up for me. There is another way.

James Hunter

unread,
May 24, 2000, 3:00:00 AM5/24/00
to

Anders N Weinstein wrote:

There is sort of an understanding that the words on the CRT
map into the brain, otherwise the glowing phosphenes or CRT
might not exist.


C. White

unread,
May 24, 2000, 3:00:00 AM5/24/00
to

Neil W Rickert <ricke...@cs.niu.edu> wrote in message
news:8gh7sb$6...@ux.cs.niu.edu...

> right, it is still missing the mental content. But this would seem
> to be an argument that the mental content is causally ineffectual (we
> can get the behavior right without it), and that the system isn't
> really intelligent unless it has this causally ineffectual mental
> content.

No, that is not the point of Searle's argument.

Seth Russell

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
Neil W Rickert will write in localmessage://8gh1oq$6...@ux.cs.niu.edu :
[which i have not yet received]

> >>If my attention is on the "events" then I am not watching the time,
> >>and if my attention is on the time, then I am not watching the
> >>"events". Yes, I am questioning whether we can assign times.

I'll stand with Weinstein's rebuttal on that one.
see localmessage://8gh3fb$avj$1...@usenet01.srv.cis.pitt.edu

> I grant that there was an event. I am questioning what the adjective
> "mental" adds to that.

The adjective 'physical' is no better than the adjective 'mental'.
Once we accept that there is only one world where causal effects can
ripple, then your preference for the word 'physical' reduces to a
distinction that makes no difference. Yet there is a valid difference
between events that are privileged to one awareness as opposed to events
that are observable by third parties - for me it's natural to call the
privileged events mental.

James Hunter

unread,
May 24, 2000, 3:00:00 AM5/24/00
to

Seth Russell wrote:

> Neil W Rickert will write in localmessage://8gh1oq$6...@ux.cs.niu.edu :
> [which i have not yet received]
>
> > >>If my attention is on the "events" then I am not watching the time,
> > >>and if my attention is on the time, then I am not watching the
> > >>"events". Yes, I am questioning whether we can assign times.
>
> I'll stand with Weinstein's rebuttal on that one.
> see localmessage://8gh3fb$avj$1...@usenet01.srv.cis.pitt.edu
>
> > I grant that there was an event. I am questioning what the adjective
> > "mental" adds to that.
>
> The adjective 'physical' is no better than the adjective 'mental'.
> Once we accept that there is only one world where causal effects can
> ripple, then your preference for the word 'physical' reduces to a
> distinction that makes no difference. Yet there is a valid difference
> between events that are privileged to one awareness as opposed to events
> that are observable by third parties - for me it's natural to call the
> privileged events mental.

The adjective is better, since most events going on in the
brain are too mathematically sophisticated for
fairly stupid philosophers to understand.

Neil W Rickert

unread,
May 24, 2000, 3:00:00 AM5/24/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>Neil W Rickert <ricke...@cs.niu.edu> wrote:

>>The third person is responding to a physical event, the jumping with
>>a start. The third person cannot tell whether the mental event was

>It is not always so clear to me how we should determine "what the
>observer is responding to" if the observer's response involves the
>use of concepts
>that can distinguish different aspects of the phenomenon.

>Saying the observer is responding to a physical event seems to me
>a bit like saying I am in this post not responding to your
>words or claims, but only to glowing phosphenes on my CRT. "What
>the observer is responding to" is relative to the description the
>observer would apply.

No, that does not seem to be the right way of looking at it. We can
say that you are responding to the written text, which is still quite
physical. You might metaphorically say that you are responding to my
thoughts. But you don't really know whether the text was produced by
thoughts or by some other method. So, you are actually responding to
the text, and inferring that it came from thoughts.

>It may be that the observer is responding to an expression of
>a subjective mental state by taking it as an expression of a subjective
>state. In that case I would not want to say the person is really only
>observing a physical event. It may be that the expressive behavior is
>realized in a physical event.

>>a start. The third person cannot tell whether the mental event was
>>visually noticing the insect, or feeling the sensation of the insect
>>on the leg. Indeed, the third person cannot even be sure that this
>>is not just an automatic "knee jerk" kind of reaction with no
>>associated mental event.

>Often you can easily tell these things, perhaps with a little
>further inquiry.

I doubt it. Further inquiry gives you what the questioned person
agrees to say as his story of what happened. But short term memory
is short, and any story told is a construction made afterward, in an
attempt to explain behavior.

>>I grant that there was an event. I am questioning what the adjective
>>"mental" adds to that.

>The point of this example was that we can narrow the time at which
>the *mental* event -- the noticing -- occurred. Sometimes it is
>possible for a noticing to occur without any outward sign. A good
>poker player may notice he has drawn a full house, for example, without
>giving any outward sign. We can talk of the event of his noticing this;
>that is a mental event.

>Essentially, the way to understand what a mental event is is to look
>at the language in which we describe them.

Just about the only people who use that terminology, at least in my
experience, are philosophers. So that seems to bring us back to the
idea that this is a philosopher's invention.

> I am relying on the idea that
>certain verbs are recognizably psychological, so may be used to
>introduce events: "I can remember the exact moment the solution to
>the problem occurred to me as I was brushing my teeth", a mathematician
>might say. And so on.

But this is still a story told afterwards.

>>Why not talk of "experiences of physical events", instead of using
>>the term "mental event"? The term "mental event" really sounds like

>My *experiencing* of a physical event would seem to be a mental event.

But you generally cannot tell us what is the event. You can only
point to the physical event. "Mental event" would seem to have no
referent that can be shared, and therefore it is not meaningful in
communication. All that you can share are physical events that you
claim are correlated with the purported mental event.


Anders N Weinstein

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
In article <8ghtkt$9...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>
>>>The third person is responding to a physical event, the jumping with
>>>a start. The third person cannot tell whether the mental event was
>
>>It is not always so clear to me how we should determine "what the
>>observer is responding to" if the observer's response involves the
>>use of concepts
>>that can distinguish different aspects of the phenomenon.
>
>>Saying the observer is responding to a physical event seems to me
>>a bit like saying I am in this post not responding to your
>>words or claims, but only to glowing phosphenes on my CRT. "What
>>the observer is responding to" is relative to the description the
>>observer would apply.
>
>No, that does not seem to be the right way of looking at it. We can
>say that you are responding to the written text, which is still quite
>physical. You might metaphorically say that you are responding to my

Sure. Similarly artworks are quite physical so aesthetic responses to
artworks are responses to physical objects; still, the art critic may
well be said to see something different when looking at an artwork than
the furniture mover does.

And expressive behavior is also quite physical, yet it can be
expressive of mental states.

The point is that when you merely say "x is responding to y", you are
leaving out the contribution of the observer's conceptualization. To
say I am responding to something embodied in physical form does not fix
the *description* under which I am responding to it.

I want to say facts concerning subjective states of others are
occasionally knowable without inference through the direct observation
of behavior that is expressive of them. This involves the observer
applying a certain conceptualization, one different than, e.g. a
Skinnerian behaviorist or a physical scientist, just as the
conceptualization of an artwork by an art critic is different from the
typical conceptualization of a furniture mover acting as such.

If you accept the "theory-ladenness" of observation ala Kuhn, you
should have no problem with the idea that one can know about subjective
states of others via theory laden observation. I don't myself think
*theory* is quite the right word, but you get the idea.

>thoughts. But you don't really know whether the text was produced by
>thoughts or by some other method. So, you are actually responding to
>the text, and inferring that it came from thoughts.

Well is there anything I can observe without inference on your view?
Presumably for anything I might claim to observe you can come up
with a scenario under which I would be mistaken. So I think the mere
possibility of error cannot impugn the idea of knowing something
non-inferentially through observation, or nothing would ever be
knowable that way.

>>Often you can easily tell these things, perhaps with a little
>>further inquiry.
>

>I doubt it. Further inquiry gives you what the questioned person
>agrees to say as his story of what happened. But short term memory
>is short, and any story told is a construction made afterward, in an
>attempt to explain behavior.

I disagree somewhat with the last claim. In a large range of
cases first person reports are criterial for the existence of psychological
states. This may be understood if you take it that such statements are
partly constitutive of the region of reality described by psychological
statements.

For a somewhat loose analogy, you might consider the way saying "I
promise" is partly involved in the state of having promised. When
you say it, you are not attempting to give an empirically well founded
description of some inner goings on; rather your assertion brings
about the state of having promised and so, in a sense, makes the
statement true whenever it is sincerely uttered.

In a somewhat more subtle way, first person statements are something like
that with respect to the psychological facts -- they do not attempt
to get at some reality that could be independently knowable, apart
from any reference to first-person statements.

Again, I think the analogy needs severe qualifications, but I do not
think that first-person statements answer to some independently
accessible reality. I think psychological states are internally
related to their expressions.

There are other more speculative statements one may make about
oneself and one's reasons, I grant you, but I don't think all
self-reports are like that. Perhaps we are taking different sorts of
statements as our respective paradigms. I am taking a statement like
"I prefer job candidate A to job candidate B". I think this expresses
a psychological preference, and it would be pretty unusual to say
it is simply a story constructed after the fact.

On the other hand, if we ask someone for their *reasons* why they
prefer candidate A to candidate B, they might start spinning some
yarn to account for what is really a hunch or seat-of-the-pants feel.
I agree that may be speculation or rationalization on their part.
What they are authoritative about is their preference, not the
causes of their preference, which may lie outside their minds.

>>Essentially, the way to understand what a mental event is is to look
>>at the language in which we describe them.
>

>Just about the only people who use that terminology, at least in my
>experience, are philosophers. So that seems to bring us back to the
>idea that this is a philosopher's invention.

If the only people who use the term "noun phrase" or "transitive verb"
are linguists, does that mean that noun phrases or transitive verbs
>are linguists' inventions? No, noun phrases and transitive verbs
>occur in ordinary speech all the time, and speakers can display
>sensitivity to
the distinction between transitive and intransitive verbs without
having any word for it.

It is similar for "mental event". It may be philosopher's jargon,
but the sort of thing it purports to label is typically expressed
in quite ordinary language, of which I have given quite a few
examples in this thread.

>> I am relying on the idea that
>>certain verbs are recognizably psychological, so may be used to
>>introduce events: "I can remember the exact moment the solution to
>>the problem occurred to me as I was brushing my teeth", a mathematician
>>might say. And so on.
>

>But this is still a story told afterwards.

So what? Perhaps the objective psychological facts can be defined as
those that would be expressed in a story told afterwards that meets
certain constraints of coherence with other observations.

>>>Why not talk of "experiences of physical events", instead of using
>>>the term "mental event"? The term "mental event" really sounds like
>
>>My *experiencing* of a physical event would seem to be a mental event.
>

>But you generally cannot tell us what is the event. You can only

What do you mean? Suppose I am looking for a face in a puzzle picture,
I scan it, furrow my brow, suddenly my countenance brightens, you can
see "the penny drop" as one says, and I come out with "now I see it!".
This gives expression to the event of my seeing it. What is there we
cannot say?

It's true you see my behavior. But don't forget, the expression is one
thing, the mental event it expresses another. It's just that the two are
conceptually linked by the relation "x is an expression of y".

>point to the physical event. "Mental event" would seem to have no
>referent that can be shared, and therefore it is not meaningful in
>communication. All that you can share are physical events that you

This is what I am denying. I am saying that Wittgenstein's expressive
conception explains how everyday public language terms for psychological or
mental events can have a referent that is shared.

I think what you say here must be nonsense, given that psychological
terms, including ones we can recognize as speaking of mental events,
occur all the time in natural language. For example, if I tell you the pain
suddenly went away I have shared with you a fact concerning a mental
event.

>communication. All that you can share are physical events that you
>claim are correlated with the purported mental event.

No. A correlation is a contingent empirically discovered relation. I
am claiming certain behavior is expressive of mental states and events
and that the relation between our concepts of the states and our
concepts of some of these characteristic expressions is analytic. That
is part of what it is to use the framework of psychological concepts.
These relations seem to me to play a similar role within the language
game of psychological talk that principles or laws adopted as necessary
truths play within scientific paradigms, on your view.

I am somewhat surprised that you oppose what I say so vehemently, given
that you seem hospitable to the pluralistic idea that we might deploy a
variety of conceptual frameworks for different purposes, and that these
frameworks come with certain internal relations that determine analytic
truths. All I am saying is that we can see other human beings through
the lens of one framework when we see them as physical systems and
their motions as physical events; and through another framework when we
see their motions as expressive of subjective psychological states.
To insist "all we really see is physical behavior" misses the crucial role
of the conceptualization.

Among the consequences of appreciating the role of conceptual
frameworks in observation is the appreciation of the fact that
reductive physicalism fails because the world can be described in many
frameworks that are not reducible to a single common base; and
behaviorism is false because it attempts to apply apriori limits on the
content of observation even though we have a plurality of concepts that
can find application in direct experience. Yet you seem to be
persisting in some of the scientistic prejudices of physicalism and
behaviorism here.

Oliver Sparrow

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
Seth Russell <se...@robustai.net> wrote:

>Neil W Rickert will write in localmessage://8gh1oq$6...@ux.cs.niu.edu :
>[which i have not yet recieved]

Hey! Time travel.
_______________________________

Oliver Sparrow

Oliver Sparrow

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
ande...@pitt.edu (Anders N Weinstein) wrote:

>...I do not
>think that first-person statements answer to some independently
>accessible reality. I think psychological states are internally
>related to their expressions.

I am not going to disagree. So much of philosophy has bound itself to what
is possible to assert about black boxes called 'people', and about the
black boxes that exist inside oneself. The conclusion has been that one has
to be tentative about the workings and content of these, and that there is
not a lot to say beyond this. As the issue matters deeply to us all, we go
on fondling the broken tooth with our collective conceptual tongues.

To get away from this, I suggest, we need to open the black boxes, which is
exactly what those undertaking experimental science are doing. And, I
believe, doing at considerable speed. The most helpful thing that informed
observers could do to hasten this process is to point up the key features
which we need to explore, and the most useful ways of thinking about their
exploration. To be depressing, I suspect that natural intelligence
emulation, strong AI, will follow on from our having access to the answers
to these questions, and not lead the solution to them.
_______________________________

Oliver Sparrow

Anders N Weinstein

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
In article <khjpisk7gf3d0l7dq...@4ax.com>,
Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
>ande...@pitt.edu (Anders N Weinstein) wrote:
>
>>...I do not
>>think that first-person statements answer to some independently
>>accessible reality. I think psychological states are internally
>>related to their expressions.
>
>I am not going to disagree. So much of philosophy has bound itself to what
>is possible to assert about black boxes called 'people', and about the
>black boxes that exist inside oneself. The conclusion has been that one has

But my idea is that first-person psychological statements are not about
what is inside the black box that is a person. It is not that one can't
find out about the insides of this entity considered as a black box.
It's just that ordinary psychological statements are not about
physically real states of inner mechanisms except in an extremely
indirect way. They are concerned as it were with a kind of socially
constituted virtual machine. Because psychological statements are not
about an actual physical machine inside the cranium, I suggest they
don't answer for their truth to what is physically inside the black box
in any simple way.

You might not be able to find out what another person is thinking with
any amount of looking inside their skull, any more than you can find out
whether a piece of paper is money by subjecting it to chemical assays.

There are some very simple examples of what I mean under the heading
of "implicit" belief or representation. Dennett mentions the chess
playing program that "wants to get its Queen out early". Will you
find this want in the circuitry? In the program code? Well in a sense,
all these things working together bring it about that the system
tends, other things being equal, to strive to get its queen out early.
That is a molar tendency. But there is probably no single
program state that means "get your queen out early".
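
To illustrate with a toy of my own (a Python sketch with invented
numbers -- not Dennett's example program or any real engine):

    # Pieces score by material value plus weighted mobility.
    PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
    MOBILITY = {"P": 2, "N": 8, "B": 13, "R": 14, "Q": 27}  # rough reachable squares

    def score(developed):
        # developed: dict piece-kind -> whether it is off the back rank
        return sum(PIECE_VALUE[k] + 0.3 * MOBILITY[k]
                   for k, out in developed.items() if out)

    def best_piece_to_develop(developed):
        # greedy: develop whichever piece raises the score most
        candidates = [k for k, out in developed.items() if not out]
        return max(candidates, key=lambda k: score({**developed, k: True}))

    print(best_piece_to_develop({k: False for k in PIECE_VALUE}))  # -> 'Q'

Nothing in the code mentions early queen development; the tendency
falls out of the interaction of the value table, the mobility table
and the 0.3 weight. That is the sense in which the "want" is a molar
tendency rather than a single program state.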

For another example, consider a person who falls through a deadfall
trap while walking. We may say: he *assumed* he was walking on solid
ground. But need we find any explicit representation of this content
anywhere in his black box? It may be that he is wired so as to be
disposed simply to walk by default, in the absence of any special
indicators suggesting the ground is solid. So again, there is a
"virtual presupposition" but it maps only very indirectly onto the
actual program states in an illuminating explanation of the innards of
the black box.

>to be tentative about the workings and content of these, and that there is
>not a lot to say beyond this. As the issue matters deeply to us all, we go
>on fondling the broken tooth with our collective conceptual tongues.
>
>To get away from this, I suggest, we need to open the black boxes, which is
>exactly what those undertaking experimental science are doing. And, I
>believe, doing at considerable speed. The most helpful thing that informed
>observers could do to hasten this process is to point up the key features
>which we need to explore, and the most useful ways of thinking about their
>exploration. To be depressing, I suspect that natural intelligence
>emulation, strong AI, will follow on from our having access to the answers
>to these questions, and not lead the solution to them.

I have no objection to opening up the black box. But I don't think it
will help with the apparent mystery (really only an illusion of mystery)
of subjective consciousness. Even when every question about the physical
workings of the black box is answered, mystery lovers will still insist that
the crucial features of consciousness have been left out.

What this shows I think is they are looking for an explanation of
consciousness in the wrong place, or looking for an explanation of the
wrong general form. No empirical results could ever satisfy them,
because their confusion is conceptual -- they are thinking of
consciousness as something wholly intrinsic and independent of any
behavioral manifestations, and wondering how the brain could "radiate"
or otherwise cause this magical property. What they need is a better
understanding of the concepts of consciousness that illustrates how
they are internally related to those of behavioral manifestations.
Then we can say the brain is a wholly mindless mechanism that enables or
makes possible the forms of conduct in which consciousness is manifest,
but does not "generate" consciousness.

James Hunter

unread,
May 25, 2000, 3:00:00 AM5/25/00
to

Anders N Weinstein wrote:

> In article <khjpisk7gf3d0l7dq...@4ax.com>,
> Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
> >ande...@pitt.edu (Anders N Weinstein) wrote:
> >
> >>...I do not
> >>think that first-person statements answer to some independently
> >>accessible reality. I think psychological states are internally
> >>related to their expressions.
> >

> >I am not going to disagree. So much of philosophy has bound itself to what
> >is possible to assert about black boxes called 'people', and about the
> >black boxes that exist inside oneself. The conclusion has been that one has
>

[...]


> What this shows I think is they are looking for an explanation of
> consciousness in the wrong place, or looking for an explanation of the
> wrong general form. No empirical results could ever satisfy them,
> because their confusion is conceptual -- they are thinking of
> consciousness as something wholly intrinsic and independent of any
> behavioral manifestations, and wondering how the brain could "radiate"
> or otherwise cause this magical property. What they need is a better
> understanding of the concepts of consciousness that illustrates how
> they are internally related to those of behavioral manifestations.
> Then we can say the brain is a wholly mindless mechanism that enables or
> makes possible the forms of conduct in which consciousness is manifest,
> but does not "generate" consciousness.

No, you can't say that. It doesn't really matter whether the brain
does or does not generate consciousness. Since it is a critical
component of intelligence, the "emperor's" new "minds" are generally
recursively irrelevant.


Neil W Rickert

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>Neil W Rickert <ricke...@cs.niu.edu> wrote:

>The point is that when you merely say "x is responding to y", you are
>leaving out the contribution of the observer's conceptualization. To
>say I am responding to something embodied in physical form does not fix
>the *description* under which I am responding to it.

>I want to say facts concerning subjective states of others are
>occasionally knowable without inference through the direct observation
>of behavior that is expressive of them. This involves the observer
>applying a certain conceptualization, one different than, e.g. a
>Skinnerian behaviorist or a physical scientist, just as the
>conceptualization of an artwork by an art critic is different from the
>typical conceptualization of a furniture mover acting as such.

>If you accept the "theory-ladenness" of observation ala Kuhn, you
>should have no problem with the idea that one can know about subjective
>states of others via theory laden observation. I don't myself think
>*theory* is quite the right word, but you get the idea.

Sorry, I don't see the relevance of theory-ladenness here. If my
observations are theory laden, they are laden with my theory, not
with the theory used by whoever is observed.

>>thoughts. But you don't really know whether the text was produced by
>>thoughts or by some other method. So, you are actually responding to
>>the text, and inferring that it came from thoughts.

>Well is there anything I can observe without inference on your view?

I did not claim that you are using any inference in observing the
words. It is in going from there to a claim about thoughts of the
writer, that you are making an inference.

>Presumably for anything I might claim to observe you can come up
>with a scenario under which I would be mistaken.

You seem to be arguing a completely different question.

> So I think the mere
>possibility of error cannot impugn the idea of knowing something
>non-inferentially through observation, or nothing would ever be
>knowable that way.

It is not the mere possibility of error. You have never met me. You
know me only through the text you read with my name attached. It is
a rather large stretch to say that you are directly observing my
thoughts without inference.

>>>Often you can easily tell these things, perhaps with a little
>>>further inquiry.

>>I doubt it. Further inquiry gives you what the questioned person
>>agrees to say as his story of what happened. But short term memory
>>is short, and any story told is a construction made afterward, in an
>>attempt to explain behavior.

>I disagree somewhat with the last claim. In a large range of
>cases first person reports are criterial for the existence of psychological
>states. This may be understood if you take it that such statements are
>partly constitutive of the region of reality described by psychological
>statements.

>For a somewhat loose analogy, you might consider the way saying "I
>promise" is partly involved in the state of having promised. When
>you say it, you are not attempting to give an empirically well founded
>description of some inner goings on; rather your assertion brings
>about the state of having promised and so, in a sense, makes the
>statement true whenever it is sincerely uttered.

Sorry, I'm not buying. There is a whole lot more to promises than
the mere utterance of words.

>In a somewhat more subtle way, first person statements are something like
>that with respect to the psychological facts -- they do not attempt
>to get at some reality that could be independently knowable, apart
>from any reference to first-person statements.

>Again, I think the analogy needs severe qualifications, but I do not
>think that first-person statements answer to some independently
>accessible reality. I think psychological states are internally
>related to their expressions.

>There are other more speculative statements one may make about
>oneself and one's reasons, I grant you, but I don't think all
>self-reports are like that. Perhaps we are taking different sorts of
>statements as our respective paradigms. I am taking a statement like
>"I prefer job candidate A to job candidate B". I think this expresses
>a psychological preference, and it would be pretty unusual to say
>it is simply a story constructed after the fact.

>On the other hand, if we ask someone for their *reasons* why they
>prefer candidate A to candidate B, they might start spinning some
>yarn to account for what is really a hunch or seat-of-the-pants feel.
>I agree that may be speculation or rationalization on their part.
>What they are authoritative about is their preference, not the
>causes of their preference, which may lie outside their minds.

In the same way, in the case of the person reacting to the insect,
that person cannot be taken as authoritative for the causes of the
action taken in response.

>>>Essentially, the way to understand what a mental event is is to look
>>>at the language in which we describe them.

>>Just about the only people who use that terminology, at least in my
>>experience, are philosophers. So that seems to bring us back to the
>>idea that this is a philosopher's invention.

>If the only people who use the term "noun phrase" or "transitive verb"
>are linguists,

The "if" is not satisfied. So the questions is meaningless

>does that mean that noun phrases or transitive verbs
>are linguists' inventions? No, noun phrases and transitive verbs
>occur in ordinary speech all the time, and speakers can display
>sensitivity to
>the distinction between transitive and intransitive verbs without
>having any word for it.

>It is similar for "mental event". It may be philosopher's jargon,
>but the sort of thing it purports to label is typically expressed
>in quite ordinary language, of which I have given quite a few
>examples in this thread.

Again, I am questioning the word "mental". If I am watching a
baseball game on the television, I don't speak of a screen event.
The event occurred on the baseball field, not on the screen.
Similarly most of your examples of "mental events" are of events that
did not occur in the mind.


Neil W Rickert

unread,
May 25, 2000, 3:00:00 AM5/25/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>In article <8gkhso$f...@ux.cs.niu.edu>,

>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>ande...@pitt.edu (Anders N Weinstein) writes:
>>>Neil W Rickert <ricke...@cs.niu.edu> wrote:

>>>If you accept the "theory-ladenness" of observation ala Kuhn, you
>>>should have no problem with the idea that one can know about subjective
>>>states of others via theory laden observation. I don't myself think
>>>*theory* is quite the right word, but you get the idea.

>>Sorry, I don't see the relevance of theory-ladenness here. If my
>>observations are theory laden, they are laden with my theory, not
>>with the theory used by whoever is observed.

>OK, but my point was only that even you should allow that third-person
>observation can be laden with a theory of mental states and events
>implicit in the psychological concepts used by the observer.

Why? What is the value of that?

I'm a skeptic of most of what you consider to be mental states. In
particular, I think that most of the descriptions of rational
behavior, in terms of beliefs and desires, are "Just So" stories. I
don't see why I need that kind of a theory.

I am not saying that I treat people as mindless mechanical devices.
I am probably as capable of judging and predicting behavior of my
friends as you are of your friends. But I don't find any use for
"beliefs" and "desires" as part of this.

> So that
>you can't say that one can never observationally discern facts
>concerning mental states of others.

Well I certainly make judgements of mental states of others. But
they are not what you would call mental states. You might call them
emotional states, or moods.

>The key point is this: if you accept the idea that all observation is
>theory-laden (or as I would prefer, concept-laden), you have no basis
>for distinguishing apriori between what can be known by direct
>observation and what can only be inferred. As long as the right
>concepts inform the observation, one can observe just about anything.

Wow! You are making huge leaps. We can surely only know by direct
observation, what is derivable through our sensory organs. There is
a lot we can only know with the aid of external scientific
instrumentation. To deny that there is inference in such a case
seems absurd.

>>I did not claim that you are using any inference in observing the
>>words. It is in going from there to a claim about thoughts of the
>>writer, that you are making an inference.

>What line is this, though? Can I observe the words as meaningful, or
>only as shapes without inference?

In fact, it is more likely that you can directly observe the words as
meaningful, than that you can directly observe them as shapes. It
might take inference to observe them as shapes.

Again, you seem to be arguing against something I have not claimed.

> Can I observe for example, that Jones
>insulted Smith? The insult, let us assume is public. Yet to take
>it *as* an insult requires imputing lots of intentions to the speaker.

It often does require inference to decide that there was an insult.
Sometimes the insult is sufficiently obvious that inference is not
required. But I think you are wrong about the intentions of the
speaker. What makes something an insult has more to do with how it
is perceived by the recipient. There can be unintended insults.

>>>Presumably for anything I might claim to observe you can come up
>>>with a scenario under which I would be mistaken.

>>You seem to be arguing a completely different question.

>Let me try to lay out as clearly as I can my position.

>I am trying to reject the idea that all knowledge of other minds
>is necessarily inferential. I am trying to defend the idea that
>knowledge of facts concerning other minds may on some occasions
>be acquired through direct, non-inferential observation of behavior
>that is expressive of the relevant psychological states.

Since neither of these is in dispute, I wonder why you persist.

>Now there is one possible counter-argument I wish to forestall.
>The counter-argument has been called the "Argument from Illusion".
>The Argument from Illusion points out that one can be mistaken.
>It is quite general in form. It attempts to argue that if a claim to
>know that p by observing that p can ever turn out to be mistaken, then we
>can never properly be said to know p by direct observation; rather,
>we stand revealed as observing only some weaker fact q and making a
>tacit inference to p.

The mistake, in this case, appears to be in how you have taken my
earlier argument.

>>>prefer candidate A to candidate B, they might start spinning some
>>>yarn to account for what is really a hunch or seat-of-the-pants feel.
>>>I agree that may be speculation or rationalization on their part.
>>>What they are authoritative about is their preference, not the
>>>causes of their preference, which may lie outside their minds.

>>In the same way, in the case of the person reacting to the insect,
>>that person cannot be taken as authoritative for the causes of the
>>action taken in response.

>I didn't say they could. I said you can know that the subject
>experienced a sensation by seeing the expression.

I don't have any problem with that. But I think the term "mental
event" is still inappropriate.

>>>It is similar for "mental event". It may be philosopher's jargon,
>>>but the sort of thing it purports to label is typically expressed
>>>in quite ordinary language, of which I have given quite a few
>>>examples in this thread.

>>Again, I am questioning the word "mental". If I am watching a
>>baseball game on the television, I don't speak of a screen event.
>>The event occurred on the baseball field, not on the screen.
>>Similarly most of your examples of "mental events" are of events that
>>did not occur in the mind.

>In one way that doesn't really matter for my purposes. I don't care if
>all these events occur purely "in the mind", I only care if they are
>events, are observable, and are recognizably psychological or
>mind-involving. For example if I can see that Jones insulted Smith,
>then I might know something about Jones' state of mind, for the
>concept of insult is clearly mind-laden.

Then call them events, without having to add the "mental"
qualification.

>On the other hand, I do want to say there are events that
>occur only in the mind. Noticing something, experiencing a sensation,
>feeling a pain start or stop, thinking up the solution to a problem
>without giving any overt sign, seeing the Necker cube flip from one
>interpretation to another, etc.

But you don't really see the Necker cube flip. Your interpretation
might change, but I don't see it as a mental event. If your
attention was on it, then you put some effort into changing your
interpretation. If your attention was not on it, then you really
didn't see a flip -- you merely redirected your attention, but this
time with a different interpretation.

Anders N Weinstein

unread,
May 26, 2000, 3:00:00 AM5/26/00
to
In article <8gkhso$f...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>
>>If you accept the "theory-ladenness" of observation ala Kuhn, you
>>should have no problem with the idea that one can know about subjective
>>states of others via theory laden observation. I don't myself think
>>*theory* is quite the right word, but you get the idea.
>
>Sorry, I don't see the relevance of theory-ladenness here. If my
>observations are theory laden, they are laden with my theory, not
>with the theory used by whoever is observed.

OK, but my point was only that even you should allow that third-person
observation can be laden with a theory of mental states and events
implicit in the psychological concepts used by the observer. So that
you can't say that one can never observationally discern facts
concerning mental states of others. For the same reasons that a Kuhnian
won't say that one can never observationally discern facts concerning
positrons, say.

The key point is this: if you accept the idea that all observation is
theory-laden (or as I would prefer, concept-laden), you have no basis
for distinguishing a priori between what can be known by direct
observation and what can only be inferred. As long as the right
concepts inform the observation, one can observe just about anything.

>>>thoughts. But you don't really know whether the text was produced by
>>>thoughts or by some other method. So, you are actually responding to
>>>the text, and inferring that it came from thoughts.
>
>>Well is there anything I can observe without inference on your view?
>
>I did not claim that you are using any inference in observing the
>words. It is in going from there to a claim about thoughts of the
>writer, that you are making an inference.

What line is this, though? Can I observe the words as meaningful, or
only as shapes without inference? Can I observe for example, that Jones
insulted Smith? The insult, let us assume is public. Yet to take
it *as* an insult requires imputing lots of intentions to the speaker.

>>Presumably for anything I might claim to observe you can come up
>>with a scenario under which I would be mistaken.
>
>You seem to be arguing a completely different question.

Let me try to lay out as clearly as I can my position.

I am trying to reject the idea that all knowledge of other minds
is necessarily inferential. I am trying to defend the idea that
knowledge of facts concerning other minds may on some occasions
be acquired through direct, non-inferential observation of behavior
that is expressive of the relevant psychological states.

Now there is one possible counter-argument I wish to forestall.
The counter-argument has been called the "Argument from Illusion".
The Argument from Illusion points out that one can be mistaken.
It is quite general in form. It attempts to argue that if a claim to
know that p by observing that p can ever turn out to be mistaken, then we
can never properly be said to know p by direct observation; rather,
we stand revealed as observing only some weaker fact q and making a
tacit inference to p.

I take it the Argument from Illusion is fallacious, and the mere possibility
of error concerning some fact type p does not vitiate the idea that p may
on occasion be known through direct observation. However, seeing exactly
how this can be may take some explaining, and I did not want to get into
the details. Moreover, I take it that anything that can ever be known
by observation is of such a form that illusions are possible; from that
it follows that either the Argument from Illusion is fallacious or nothing
is ever known non-inferentially by direct observation.
Since I take it the second disjunct is untenable, I think this suffices
to show the Argument must be fallacious.
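
To make that disjunctive step explicit (a minimal sketch of my own, not
a quotation), write I for "every observable fact admits possible
illusions", A for "the Argument from Illusion is sound", and K for
"something is known non-inferentially by direct observation":

    -- Lean 4: the premise h encodes "if every observable fact admits
    -- illusions and the Argument is sound, nothing is known by direct
    -- observation"; given I and K, the Argument must fail.
    example (I A K : Prop) (h : I ∧ A → ¬K) (hI : I) (hK : K) : ¬A :=
      fun hA => h ⟨hI, hA⟩ hK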

Now I can't be certain if you were alluding to the Argument from Illusion
here, but I detected a whiff of it in your remarks. So you are on notice
that I take it to involve a fallacy, even if the exact nature of the
fallacy is not obvious.

We can be and often are wrong about other minds, just as we can be
wrong about whether something in front of us that looks like a house
really is a house. Still if one can ever be said to know that something
is a house by observation, the same should apply to knowledge of
other's mental states.

>> So I think the mere
>>possibility of error cannot impugn the idea of knowing something
>>non-inferentially through observation, or nothing would ever be
>>knowable that way.
>
>It is not the mere possibility of error. You have never met me. You
>know me only through the text you read with my name attached. It is
>a rather large stretch to say that you are directly observing my
>thoughts without inference.

OK. Actually, in speaking of the letters I wasn't claiming to observe
your thoughts. I was appealing to an analogy.
I was saying there are many ways of looking at these letters -- I may
see them as glowing dots, as letters in a script, as meaningful words
and phrases. These are all different and no one of them captures what
I "really see".

I was trying to say that seeing words as meaningful symbols stands
to seeing them as dots on a screen as seeing some bit of human behavior as
expressive of mentality does to seeing it as mere physical
motions. In both cases you have a certain similar "this in that"
structure in which meanings are displayed embodied in some
medium of expression.

Also, I don't ever claim to directly observe your thoughts as objects.
I might claim to know what you think on a certain matter by seeing
your expressions of those opinions as expressions of your thoughts.

>>For a somewhat loose analogy, you might consider the way saying "I
>>promise" is partly involved in the state of having promised. When
>>you say it, you are not attempting to give an empirically well founded
>>description of some inner goings on; rather your assertion brings
>>about the state of having promised and so, in a sense, makes the
>>statement true whenever it is sincerely uttered.
>
>Sorry, I'm not buying. There is a whole lot more to promises than
>the mere utterance of words.

Of course I agree. Still in the right context, simply giving out the
words can constitute making a promise.

>>prefer candidate A to candidate B, they might start spinning some
>>yarn to account for what is really a hunch or seat-of-the-pants feel.
>>I agree that may be speculation or rationalization on their part.
>>What they are authoritative about is their preference, not the
>>causes of their preference, which may lie outside their minds.
>
>In the same way, in the case of the person reacting to the insect,
>that person cannot be taken as authoritative for the causes of the
>action taken in response.

I didn't say they could. I said you can know that the subject
experienced a sensation by seeing the expression.

>>It is similar for "mental event". It may be philosopher's jargon,
>>but the sort of thing it purports to label is typically expressed
>>in quite ordinary language, of which I have given quite a few
>>examples in this thread.
>
>Again, I am questioning the word "mental". If I am watching a
>baseball game on the television, I don't speak of a screen event.
>The event occurred on the baseball field, not on the screen.
>Similarly most of your examples of "mental events" are of events that
>did not occur in the mind.

In one way that doesn't really matter for my purposes. I don't care if
all these events occur purely "in the mind", I only care if they are
events, are observable, and are recognizably psychological or
mind-involving. For example if I can see that Jones insulted Smith,
then I might know something about Jones' state of mind, for the
concept of insult is clearly mind-laden.

On the other hand, I do want to say there are events that
occur only in the mind. Noticing something, experiencing a sensation,
feeling a pain start or stop, thinking up the solution to a problem
without giving any overt sign, seeing the Necker cube flip from one
interpretation to another, etc.

As I have suggested many times, what the third person observer may see
is a behavioral expression of a mental event. But in seeing such
a thing *as* an expression, one may come by knowledge of the existence
of the event in a way that is not well explained as inferential.

Anders N Weinstein

unread,
May 26, 2000, 3:00:00 AM5/26/00
to
In article <8gkt32$g...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>In article <8gkhso$f...@ux.cs.niu.edu>,
>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>>ande...@pitt.edu (Anders N Weinstein) writes:
>>>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>
>>>>If you accept the "theory-ladenness" of observation ala Kuhn, you
>>>>should have no problem with the idea that one can know about subjective
>>>>states of others via theory laden observation. I don't myself think
>>>>*theory* is quite the right word, but you get the idea.
>
>>>Sorry, I don't see the relevance of theory-ladenness here. If my
>>>observations are theory laden, they are laden with my theory, not
>>>with the theory used by whoever is observed.
>
>>OK, but my point was only that even you should allow that third-person
>>observation can be laden with a theory of mental states and events
>>implicit in the psychological concepts used by the observer.
>
>Why? What is the value of that?

It allows a way around the epistemological [pseudo-] problem of "knowledge
of other minds". It explains how facts concerning other minds can
be known without having to resort to a risky inference or inductively
established correlations.

>I am not saying that I treat people as mindless mechanical devices.
>I am probably as capable of judging and predicting behavior of my
>friends as you are of your friends. But I don't find any use for
>"beliefs" and "desires" as part of this.

I would bet this is false. When you are actually engaged in interacting
with people I expect you freely use everyday terms like "he thinks such
and such", "he wants to so and so", "why do you think that?" or "how do
you know?". But it is in these engaged uses that the rubber meets the road
with respect to these concepts.

And again, I am mentioning "theory-ladenness" by way of analogy;
I don't actually think psychological terms function as part of
a theory for the prediction of behavior.

>>you can't say that one can never observationally discern facts
>>concerning mental states of others.
>
>Well I certainly make judgements of mental states of others. But

Glad to hear it.

>they are not what you would call mental states. You might call them
>emotional states, or moods.

Not sure what distinction you are making here. I would say moods
are one class of mental state; I don't see why there can't be others.

>>The key point is this: if you accept the idea that all observation is
>>theory-laden (or as I would prefer, concept-laden), you have no basis
>>for distinguishing a priori between what can be known by direct
>>observation and what can only be inferred. As long as the right
>>concepts inform the observation, one can observe just about anything.
>
>Wow! You are making huge leaps. We can surely only know by direct
>observation what is derivable through our sensory organs. There is

Well yes, it is a contingent empirical matter what our organs can
enable us to come to detect. But I mean there is no a priori reason,
based on the nature of psychological states, why we can't be trained to
detect information concerning the mental states of others by
putting our sensory physiology to use in the service of psychological
information detectors.

>>>I did not claim that you are using any inference in observing the
>>>words. It is in going from there to a claim about thoughts of the
>>>writer, that you are making an inference.
>
>>What line is this, though? Can I observe the words as meaningful, or
>>only as shapes without inference?
>
>In fact, it is more likely that you can directly observe the words as
>meaningful, than that you can directly observe them as shapes. It
>might take inference to observe them as shapes.

Good. Now can I say something similar about human behavior? That I can
directly observe it as expressive of mentality, and it might
take inference or at any rate work to observe it as meaningless
motions?

>> Can I observe for example, that Jones
>>insulted Smith? The insult, let us assume is public. Yet to take
>>it *as* an insult requires imputing lots of intentions to the speaker.
>
>It often does require inference to decide that there was an insult.
>Sometimes the insult is sufficiently obvious that inference is not
>required. But I think you are wrong about the intentions of the
>speaker. What makes something an insult has more to do with how it
>is perceived by the recipient. There can be unintended insults.

OK, but I was using the term according to the sense in which it is
possible to say: the perceiver *mistook* as an insult something that was
not in fact an insult. Say, as in a recent brouhaha, the hearer
was not familiar with the meaning of the term "niggardly" in the
speaker's dialect. If the speaker used it totally innocently, in this
sense we can say there is a fact of the matter about whether the remark
was an insult, which the hearer has simply gotten wrong.

>>I am trying to reject the idea that all knowledge of other minds
>>is necessarily inferential. I am trying to defend the idea that
>>knowledge of facts concerning other minds may on some occasions
>>be acquired through direct, non-inferential observation of behavior
>>that is expressive of the relevant psychological states.
>
>Since neither of these is in dispute, I wonder why you persist.

I wonder too -- what is the basis of *your* disagreement, and what
is the point of your examples of error concerning psychological facts?

>The mistake, in this case, appears to be in how you have taken my
>earlier argument.

Ok, that's a possibility I freely concede. Since you don't seem to want
to spell out your argument in any detail, I think I can be forgiven if
I don't understand it.

>>I didn't say they could. I said you can know that the subject
>>experienced a sensation by seeing the expression.
>
>I don't have any problem with that. But I think the term "mental
>event" is still inappropriate.

Isn't an experiencing of a sensation a mental event? Why not?

>>On the other hand, I do want to say there are events that
>>occur only in the mind. Noticing something, experiencing a sensation,
>>feeling a pain start or stop, thinking up the solution to a problem
>>without giving any overt sign, seeing the Necker cube flip from one
>>interpretation to another, etc.
>
>But you don't really see the Necker cube flip. Your interpretation

What I meant was what you describe as the interpretation changing.

>might change, but I don't see it as a mental event. If your

Call one interpretation A and the other B. Now doesn't the phrase
"John's interpretation of the cube changed from A to B" make reference
to an event? Isn't it also a pretty clear case of a *mental* event? After
all, John need not have given any overt manifestation of it.

>attention was on it, then you put some effort into changing your
>interpretation. If your attention was not on it, then you really
>didn't see a flip -- you merely redirected your attention, but this
>time with a different interpretation.

Well first, one may say the change in interpretation is an event in
your mind whether you notice it attentively or not.

But second, and more importantly: Consider the case where you are
attending to the cube image, the interpretation changes, and you *do*
notice the change in how it looks to you. Contrary to what you suggest,
this is most often involuntary; though you may try and occasionally
succeed in bringing it about willfully. Either way, how does what you
say undermine the idea that this occurrence is an event and a mental one
at that?

Neil W Rickert

unread,
May 26, 2000, 3:00:00 AM5/26/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>Neil W Rickert <ricke...@cs.niu.edu> wrote:

>>>OK, but my point was only that even you should allow that third-person
>>>observation can be laden with a theory of mental states and events
>>>implicit in the psychological concepts used by the observer.

>>Why? What is the value of that?

>It allows a way around the epistemological [pseudo-] problem of "knowledge
>of other minds". It explains how facts concerning other minds can
>be known without having to resort to a risky inference or inductively
>established correlations.

If it is a pseudo-problem, we don't need a way around it.

>>I am not saying that I treat people as mindless mechanical devices.
>>I am probably as capable of judging and predicting behavior of my
>>friends as you are of your friends. But I don't find any use for
>>"beliefs" and "desires" as part of this.

>I would bet this is false. When you are actually engaged in interacting
>with people I expect you freely use everyday terms like "he thinks such
>and such", "he wants to so and so", "why do you think that?" or "how do
>you know?". But it is in these engaged uses that the rubber meets the road
>with respect to these concepts.

Only in the same way that I would say "the computer wants me to load
a tape."

>>> Can I observe for example, that Jones
>>>insulted Smith? The insult, let us assume is public. Yet to take
>>>it *as* an insult requires imputing lots of intentions to the speaker.

>>It often does require inference to decide that there was an insult.
>>Sometimes the insult is sufficiently obvious that inference is not
>>required. But I think you are wrong about the intentions of the
>>speaker. What makes something an insult has more to do with how it
>>is perceived by the recipient. There can be unintended insults.

>OK, but I was using the term according to the sense in which it is
>possible to say: the perceiver *mistook* as an insult something that was
>not in fact an insult. Say, as in a recent brouhaha, the hearer
>was not familiar with the meaning of the term "niggardly" in the
>speaker's dialect. If the speaker used it totally innocently, in this
>sense we can say there is a fact of the matter about whether the remark
>was an insult, which the hearer has simply gotten wrong.

There is a distinction between making a statement that was mistakenly
taken as an insult, and unintentionally making an insult. I don't
believe that Al Campanis intended any insult when he said of black
ball players, that "they don't have the necessities." But it was an
insult nonetheless.

>>>I am trying to reject the idea that all knowledge of other minds
>>>is necessarily inferential. I am trying to defend the idea that
>>>knowledge of facts concerning other minds may on some occasions
>>>be acquired through direct, non-inferential observation of behavior
>>>that is expressive of the relevant psychological states.

>>Since neither of these is in dispute, I wonder why you persist.

>I wonder too -- what is the basis of *your* disagreement, and what
>is the point of your examples of error concerning psychological facts?

I purchase some toothpaste from the supermarket. Can I say that it
was a mental event, when the scanner read the bar code? I tried to
swat a mosquito. However, the mosquito apparently saw my movement,
and flew away. Can we say that there was a mental event in the
mosquito's acting that way?

Just about all of the examples you have given for mental events have
counterparts where we would not ascribe mentality. I suggest that
you were talking of events, and that the "mental" qualifier is a
mistaken allusion to Cartesian dualism.


Anders N Weinstein

unread,
May 26, 2000, 3:00:00 AM5/26/00
to
In article <8gmt17$1...@ux.cs.niu.edu>,

Neil W Rickert <ricke...@cs.niu.edu> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>
>>>>OK, but my point was only that even you should allow that third-person
>>>>observation can be laden with a theory of mental states and events
>>>>implicit in the psychological concepts used by the observer.
>
>>>Why? What is the value of that?
>
>>It allows a way around the epistemological [pseudo-] problem of "knowledge
>>of other minds". It explains how facts concerning other minds can
>>be known without having to resort to a risky inference or inductively
>>established correlations.
>
>If it is a pseudo-problem, we don't need a way around it.

You need a way to entitle you to the claim that it is a pseudo-problem,
i.e. a way to avoid the premises that make it seem like there is a
problem.

>>I wonder too -- what is the basis of *your* disagreement, and what
>>is the point of your examples of error concerning psychological facts?
>

>I purchase some toothpaste from the supermarket. Can I say that it
>was a mental event, when the scanner read the bar code? I tried to
>swat a mosquito. However, the mosquito apparently saw my movement,
>and flew away. Can we say that there was a mental event in the
>mosquito's acting that way?

This looks like a line-drawing fallacy to me. Just because it may be
difficult to draw a sharp line between certain borderline cases of
mentality does not mean there is no distinction between items at the
two poles.

I would say, by the way, that to the extent you think the mosquito did
see something you are attributing a kind of mental event to the
mosquito.

>Just about all of the examples you have given for mental events have
>counterparts where we would not ascribe mentality.

What kind of argument is this? There are lots of non-mental
counterparts or analogs of mental events, therefore there are no mental
events? Doesn't look like much of an argument to me.

> I suggest that
>you were talking of events, and that the "mental" qualifier is a
>mistaken allusion to Cartesian dualism.

I disagree with the latter. I take it as a kind of philosophically
neutral datum that we have language that speaks of mental events.
Cartesian dualism involves a false picture of the nature of these
mental events as occurring in an immaterial and private medium, but we
can reject the Cartesian model, without rejecting the very existence of
mental events. That is, I don't think the very idea of mental events
is laden with Cartesian dualism. The events might be events in the
brain, for example.

I am still not clear on how you apply this to the examples, for
example, a pain's growing less, the Necker cube interpretation changing,
someone proving a theorem silently "in his head". You grant these
are events but doubt they are "mental"? But they are not, as we
ordinarily talk of them, spatially located or material.


Neil W Rickert

unread,
May 27, 2000, 3:00:00 AM5/27/00
to
ande...@pitt.edu (Anders N Weinstein) writes:
>In article <8gmt17$1...@ux.cs.niu.edu>,
>Neil W Rickert <ricke...@cs.niu.edu> wrote:
>>ande...@pitt.edu (Anders N Weinstein) writes:
>>>Neil W Rickert <ricke...@cs.niu.edu> wrote:

>>>>>OK, but my point was only that even you should allow that third-person
>>>>>observation can be laden with a theory of mental states and events
>>>>>implicit in the psychological concepts used by the observer.

>>>>Why? What is the value of that?

>>>It allows a way around the epistemological [pseudo-] problem of "knowledge
>>>of other minds". It explains how facts concerning other minds can
>>>be known without having to resort to a risky inference or inductively
>>>established correlations.

>>If it is a pseudo-problem, we don't need a way around it.

>You need a way to entitle you to the claim that it is a pseudo-problem,
>i.e. a way to avoid the premises that make it seem like there is a
>problem.

"Entitle you to the claim" is one of the bogus ideas coming out of
epistemology. I am the only judge of whether I am entitled to
believe it is a pseudo-problem. If I try to persuade others, they
are the judge of whether they are to be persuaded. In this case,
there is such an abundance of confused ideas around, that I am not
expecting to persuade the world.

>>>I wonder too -- what is the basis of *your* disagreement, and what
>>>is the point of your examples of error concerning psychological facts?

>>I purchase some toothpaste from the supermarket. Can I say that it
>>was a mental event, when the scanner read the bar code? I tried to
>>swat a mosquito. However, the mosquito apparently saw my movement,
>>and flew away. Can we say that there was a mental event in the
>>mosquito's acting that way?

>This looks like a line-drawing fallacy to me. Just because it may be
>difficult to draw a sharp line between certain borderline cases of
>mentality does not mean there is no distinction between items at the
>two poles.

>I would say, by the way, that to the extent you think the mosquito did
>see something you are attributing a kind of mental event to the
>mosquito.

Then, since the scanner did see the bar code, I must be attributing a
mental event to it. But, in fact, I am not attributing a mental
event to either. The event was an event pure and simple, and adding
"mental" does nothing.

>>Just about all of the examples you have given for mental events have
>>counterparts where we would not ascribe mentality.

>What kind of argument is this? There are lots of non-mental
>counterparts or analogs of mental events, therefore there are no mental
>events? Doesn't look like much of an argument to me.

No, that is not much of an argument. And it is not the argument I
made. Well, never mind. As a dualist (but in denial), you are clearly
committed to mental events.

>I am still not clear on how you apply this to the examples, for
>example, a pain's growing less, the Necker cube interpretation changing,
>someone proving a theorem silently "in his head". You grant these
>are events but doubt they are "mental"? But they are not, as we
>ordinarily talk of them, spatially located or material.

They are as spatially located and as material as the event of the bar
code scanner system determining that the item was toothpaste. I can
find no basis for calling one a mental event and denying that the
other was a mental event.


LindaGee

unread,
May 27, 2000, 3:00:00 AM5/27/00
to

Neil W Rickert wrote in message <8gpp4o$c...@ux.cs.niu.edu>...

Yes, they are both spatially located and have a material relationship. Yet
you seem to be giving the impression that there is no difference between
events within the confines of the human body and, say, the event of an
object caught up in the wind. If something *occurs* to someone via thought
processing (thinking), as opposed to information that gets transferred and
transfixed via the sensory receptors (as would probably be the case with
the mosquito), what would be the problem with calling this a mental event?
Perhaps the mosquito does register his perceptual information within his
own little brain/mind; yet he does not think for thinking's sake. So how
can we distinguish and talk about what gets accomplished within thought
processing as distinct from the acquisition and manipulation of sensory
data that occurs within the more passive or reflexive state of being, by
comparison with the more conscientiously self-activating, volitional,
reflective type of thinking/thinker?

Linda
Sci.Phil.Meta.

Jerry Hull

unread,
May 28, 2000, 3:00:00 AM5/28/00
to
On 27 May 2000 19:27:04 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>>>If it is a pseudo-problem, we don't need a way around it.
>
>>You need a way to entitle you to the claim that it is a pseudo-problem,
>>i.e. a way to avoid the premises that make it seem like there is a
>>problem.
>
>"Entitle you to the claim" is one of the bogus ideas coming out of
>epistemology. I am the only judge of whether I am entitled to
>believe it is a pseudo-problem. If I try to persuade others, they
>are the judge of whether they are to be persuaded.

Gee, I would think that an ostensible effort to convince others would
presuppose some kind of shared basis for judgements of validity. But
that's just me -- I may be wrong.

--
Jer
http://www.mp3.com/DrJerryPeriferals

C. White

unread,
May 28, 2000, 3:00:00 AM5/28/00
to
A brief, but interesting, article on Chomsky's thoughts on mind/body can be
found at:

http://www.booksunlimited.co.uk/departments/politicsphilosophyandsociety/story/0,6000,210793,00.html

Neil W Rickert

unread,
May 28, 2000, 3:00:00 AM5/28/00
to

You often are (wrong).

Certainly, such a shared basis can be helpful. But it does not
always exist. When it does not exist, it is up to those who want to
persuade to find a suitable means of persuasion. This is a situation
that can arise with paradigm shifts. Epistemology is of no help
here.


Jerry Hull

unread,
May 28, 2000, 3:00:00 AM5/28/00
to
On 28 May 2000 09:31:12 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>"Entitle you to the claim" is one of the bogus ideas coming out of
>>>epistemology. I am the only judge of whether I am entitled to
>>>believe it is a pseudo-problem. If I try to persuade others, they
>>>are the judge of whether they are to be persuaded.
>
>>Gee, I would think that an ostensive effort to convince others would
>>presuppose some kind of shared basis for judgements of validity. But
>>that's just me -- I may be wrong.

>Certainly, such a shared basis can be helpful. But it does not
>always exist. When it does not exist, it is up to those who want to
>persuade to find a suitable means of persuasion. This is a situation
>that can arise with paradigm shifts. Epistemology is of no help
>here.

If there is no ground of validity that you share with others, then
there is no reason for us to pay the least heed to what you say.
Persuasion be damned. You haven't abandoned epistemology, you have
simply adopted an incoherent and self-confuting model. If you really
believe there is no rational ground for others to accept what you say,
you can do us all a favor by shutting up. Really!

--
Jer
http://www.mp3.com/DrJerryPeriferals

Neil W Rickert

unread,
May 28, 2000, 3:00:00 AM5/28/00
to
ZZZg...@stny.rr.com (Jerry Hull) writes:
>On 28 May 2000 17:54:40 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>If there is no ground of validity that you share with others, then
>>>there is no reason for us to pay the least heed to what you say.

>>I don't pay a lot of attention to what you write either.

>Yes, but you don't have a reason aside from your narcissistic penchant
>for nonsense.

Look who's talking.

>>Epistemology is a system whereby the priesthood attempts to maintain
>>the primacy of the official dogma.

>Are things known?

Sure. But you don't need an epistemologist to tell you that.

> Is there any procedure for determining what is or
>is not known?

Probably not. There may be all sorts of methods available (web search,
library search, etc). But most likely nothing that can be precisely specified
such as we expect for a procedure.

> If you answer no, then by admission you know nothing --

From the fact that I often challenge the official dogma, it does not
follow that I know nothing.

> That is, assuming you are interested in
>coherence and maintaining a rational dialogue, neither of which,
>frankly, appears to be the case.

Again, look who is talking.


Jerry Hull

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
On 28 May 2000 17:54:40 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:
>
>>If there is no ground of validity that you share with others, then
>>there is no reason for us to pay the least heed to what you say.
>
>I don't pay a lot of attention to what you write either.

Yes, but you don't have a reason aside from your narcissistic penchant
for nonsense.

>Epistemology is a system whereby the priesthood attempts to maintain
>the primacy of the official dogma.

Are things known? Is there any procedure for determining what is or
is not known? If you answer no, then by admission you know nothing --
some of us already suspected as much -- and your remarks are
worthless. If you answer yes, then you have to abandon sophomoric
positions such as the above. That is, assuming you are interested in


coherence and maintaining a rational dialogue, neither of which,
frankly, appears to be the case.

--
Jer
http://www.mp3.com/DrJerryPeriferals

C. White

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

Neil W Rickert <ricke...@cs.niu.edu> wrote in message
news:8gso48$h...@ux.cs.niu.edu...

> Again, look who is talking.

So much for interesting discourse.

Neil W Rickert

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

There is a history of prior unpleasant exchanges behind this.


Jerry Hull

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
On 28 May 2000 22:28:08 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>Epistemology is a system whereby the priesthood attempts to maintain
>>>the primacy of the official dogma.

>> Is there any procedure for determining what is or
>>is not known?
>
>Probably not. There may be all sorts of methods available (web search,
>library search, etc). But most likely nothing that can be precisely specified
>such as we expect for a procedure.

I believe some effort has gone into explicating such things as
"deduction" and "induction", including spelling out various effective
procedures. I know of no one who has posited "web search" as a
guarantor of knowledge.

>> If you answer no, then by admission you know nothing --
>

>From the fact that I often challenge the official dogma, it does not
>follow that I know nothing.

Since this "dogma" is something you have made up, it is hard for
others to know exactly what you imagine it involves. But if there is
no procedure for separating knowledge from mere belief, then you have
no way to assure yourself or others that anything you believe is
knowledge, i.e. to disprove the claim that you know nothing.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Neil W Rickert

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
ZZZg...@stny.rr.com (Jerry Hull) writes:
>On 28 May 2000 22:28:08 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>>Epistemology is a system whereby the priesthood attempts to maintain
>>>>the primacy of the official dogma.

>>> Is there any procedure for determining what is or
>>>is not known?

>>Probably not. There may be all sorts of methods available (web search,
>>library search, etc). But most likely nothing that can be precisely specified
>>such as we expect for a procedure.

>I believe some effort has gone into explicating such things as
>"deduction" and "induction", including spelling out various effective
>procedures. I know of noone who has posited "web search" as a
>guarantor of knowledge.

Induction is a fairy story, no more credible than the tooth fairy or
Santa Claus. Because natural language is inconsistent when
mistakenly treated as a logic system, deduction is quite limited in
its usefulness. It has important uses in mathematics, but it won't
solve most of the problems of everyday life.

>>> If you answer no, then by admission you know nothing --

>>From the fact that I often challenge the official dogma, it does not
>>follow that I know nothing.

>Since this "dogma" is something you have made up, it is hard for
>others to know exactly what you imagine it involves.

It can be found in the philosophy section of the library.

> But if there is
>no procedure for separating knowledge from mere belief, then you have
>no way to assure yourself or others that anything you believe is
>knowledge, i.e. to disprove the claim that you know nothing.

I am reasonably content with my own state of knowledge. My employer
and my colleagues seem equally content. I have no need to answer to
ridiculous charges that I know nothing.


C. White

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

Neil W Rickert <ricke...@cs.niu.edu> wrote in message
news:8gtv60$k...@ux.cs.niu.edu...

> Induction is a fairy story, no more credible than the tooth fairy or
> Santa Claus. Because natural language is inconsistent when
> mistakenly treated as a logic system, deduction is quite limited in
> its usefulness. It has important uses in mathematics, but it won't
> solve most of the problems of everyday life.

If I'm hungry I can reach in the frig and get some food to eat.
I'm hungry.
So I'll reach in the frig and get food to eat.

Sloppy, perhaps. But useful. <g>
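
For what it's worth, the pattern is just modus ponens, and a few lines
of Python make the rule-application reading explicit (a minimal sketch;
the strings are only illustrative):

    # Modus ponens as rule application: from p -> q and p, conclude q.
    def modus_ponens(implication, antecedent):
        p, q = implication                     # the pair encodes p -> q
        return q if antecedent == p else None  # the rule fires only on p

    rule = ("I'm hungry", "reach in the frig and get food to eat")
    print(modus_ponens(rule, "I'm hungry"))
    # -> reach in the frig and get food to eat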

Jerry Hull

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
On 29 May 2000 09:34:40 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>> Is there any procedure for determining what is or
>>>>is not known?
>
>>>Probably not. There may be all sorts of methods available (web search,
>>>library search, etc). But most likely nothing that can be precisely specified
>>>such as we expect for a procedure.
>
>>I believe some effort has gone into explicating such things as
>>"deduction" and "induction", including spelling out various effective
>>procedures. I know of no one who has posited "web search" as a
>>guarantor of knowledge.
>
>Induction is a fairy story, no more credible than the tooth fairy or
>Santa Claus.

There is no effective procedure for induction as such -- is that what
you mean? Nonetheless, can you deny that there is better and worse
when it comes to having empirical validation for a claim?

> Because natural language is inconsistent when
>mistakenly treated as a logic system, deduction is quite limited in
>its usefulness.

Look who wants to be in the priesthood now! Logical and artificial
languages are proper subsets of natural language, so the latter
encompasses all the capabilities of the former.

> It has important uses in mathematics, but it won't
>solve most of the problems of everyday life.

Most problems of everyday life don't need deduction as such, but
surely you are not suggesting that deduction is the ONLY form of
knowledge?

>> But if there is
>>no procedure for separating knowledge from mere belief, then you have
>>no way to assure yourself or others that anything you believe is
>>knowledge, i.e. to disprove the claim that you know nothing.
>
>I am reasonably content with my own state of knowledge. My employer
>and my colleagues seem equally content. I have no need to answer to
>ridiculous charges that I know nothing.

The self-satisfaction comes as no surprise. If your workplace
behavior matches your petulant Usenet diatribes, I can only marvel at
the tolerance and credulity of your employer and colleagues. But, for
what it is worth, the only one who has made "ridiculous charges" is
YOURSELF. That you know nothing is a logical consequence of YOUR
claim that there are no procedures for validating knowledge. You seem
marvelously unacquainted with the self-referential implications of
your own views.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Jan Holland

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
On Mon, 29 May 2000 15:15:11 GMT, ZZZg...@stny.rr.com (Jerry Hull)
wrote:

>On 29 May 2000 09:34:40 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>Look who wants to be in the priesthood now! Logical and artificial
>languages are proper subsets of natural language [NL], so the latter
>encompasses all the capabilities of the former.

NL (IMHO) contains many subsets,
among which the subsets you mention and/but also
the subsets of space, time, logic, etc. [STLE]

But because
they are applied together in a context non-congruent
(or non-specified) with their STLE nature,
you get a new "total-situation",
the nature of which I am not sure about, but which
is anyhow different from
the simple composing STLE-situation.

So I am (not very) curious about studies that
tried to split up NL into STLE subsets
(what did they do with sentences that
were valid in more than one STLE situation?).
Did they agree/contradict the NL conclusion?
How can you add up a T-conclusion and an S-conclusion?

Jan Holland

Seth Russell

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
Neil W Rickert wrote:

> Then, since the scanner did see the bar code, I must be attributing a
> mental event to it. But, in fact, I am not attributing a mental
> event to either. The event was an event pure and simple, and adding
> "mental" does nothing.

[snip]

> They are as spatially located and as material as the event of the bar
> code scanner system determining that the item was toothpaste. I can
> find no basis for calling one a mental event and denying that the
> other was a mental event.

Nor can I. So why not accept a software event as being in the same
class of thing as a mental event? Your argument is persuasive, but it
seems to me it has the underlying assumption that we cannot call
software events mental, and therefore it denies the entire class of
events. But mental events can be distinguished from all other types
of events by virtue of their closer ties to the privileged point of
view of the observer. There is another major distinction between
your run-of-the-mill event and a ~mental~ event. All mental events
are events of representations of things, rather than of the things
themselves. Knowing that people cannot read your thoughts and
knowing that your thoughts are only representations is a foundation
of sanity; being oblivious to that distinction is sheer madness.

The thing and the doing to the thing are the same thing.
The thing and the representation of the thing are not the same thing.
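
A minimal sketch of that last distinction, with invented names (mine,
not anything from the scanner example): the representation carries a
reference to the thing, but is not identical with it.

    # The thing vs. a representation *of* the thing.
    class Toothpaste:
        pass                       # stands in for the thing itself

    class Representation:
        def __init__(self, about):
            self.about = about     # what the representation is of

    item = Toothpaste()
    idea = Representation(about=item)
    print(idea is item)            # False: not the same thing
    print(idea.about is item)      # True: it is *of* that thing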

--
Seth Russell
http://robustai.net/ai/word_of_emouth.htm
Click on the button ... see if you can catch me live!
http://robustai.net/JournalOfMyLife/users/SethRussell.html
Http://RobustAi.net/Ai/Conjecture.htm

Neil W Rickert

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
ZZZg...@stny.rr.com (Jerry Hull) writes:
>On 29 May 2000 09:34:40 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>>> Is there any procedure for determining what is or
>>>>>is not known?

>>>>Probably not. There may be all sorts of methods available (web search,
>>>>library search, etc). But most likely nothing that can be precisely specified
>>>>such as we expect for a procedure.

>>>I believe some effort has gone into explicating such things as
>>>"deduction" and "induction", including spelling out various effective
>>>procedures. I know of noone who has posited "web search" as a
>>>guarantor of knowledge.

>>Induction is a fairy story, no more credible than the tooth fairy or
>>Santa Claus.

>There is no effective procedure for induction as such -- is that what
>you mean?

Call it what you like. I am saying that what many epistemologists
write about induction is nonsense. And if they cannot even get that
right, there is no good reason to rely on anything else they say
about knowledge.

> Nonetheless, can you deny that there is better and worse
>when it comes to having empirical validation for a claim?

I am not arguing against the value of testing claims. My objection
was only to the stories epistemologists tell on how this should be
done.

>> Because natural language is inconsistent when
>>mistakenly treated as a logic system, deduction is quite limited in
>>its usefulness.

>Look who wants to be in the priesthood now!

I am not requiring you to agree with me.

>> Logical and artificial
>>languages are proper subsets of natural language, so the latter
>>encompasses all the capabilities of the former.

That's an absurd argument. A consistent subsystem could do
considerably better than the larger inconsistent system.

>> It has important uses in mathematics, but it won't
>>solve most of the problems of everyday life.

>Most problems of everyday life don't need deduction as such, but
>surely you are not suggesting that deduction is the ONLY form of
>knowledge?

I did not suggest anything of the kind. I suggested only that after
induction is ruled out, then deduction is all that is left of the two
things that you introduced into the discussion.

> That you know nothing is a logical consequence of YOUR
>claim that there are no procedures for validating knowledge.

An invention by the habitual liar, J. Hull.


Neil W Rickert

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
"C. White" <cwhi...@hotmail.com> writes:
>Neil W Rickert <ricke...@cs.niu.edu> wrote in message
>news:8gtv60$k...@ux.cs.niu.edu...

>> Induction is a fairy story, no more credible than the tooth fairy or
>> Santa Claus. Because natural language is inconsistent when
>> mistakenly treated as a logic system, deduction is quite limited in
>> its usefulness. It has important uses in mathematics, but it won't
>> solve most of the problems of everyday life.

>If I'm hungry I can reach in the frig and get some food to eat.
>I'm hungry.
>So I'll reach in the frig and get food to eat.

Sure. But without the logic you could still reach into the frig and
get some food to eat. In fact many people do that when logic and
their obesity argue against it.


Neil Nelson

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

Is John Searle conscious? How do we tell if some object is
conscious? Having never seen some separable object that is in some
manner referenced as John Searle, I could not say for certain, other
than many commonly called `conscious' objects _say_ John Searle is
of the conscious sort, whether or not the reference of the name John
Searle is in fact of the sort called `conscious'. I.e., if John
Searle is of the sort that has merely learned the symbol
manipulation rules, as in a machine of the Chinese Room, we would
have an argument for a distinction by an object that could not
apparently _know_ the distinction by the very argument. Or perhaps
the distinction is that objects of the person variety are not
conscious--if John Searle is of the machine variety--but that
machines are. The Chinese Room argument is at

http://www.utm.edu/research/iep/c/chineser.htm

but as it is entirely an assemblage of symbols, it is not
necessarily the case, from a symbol manipulation view, that some
machine did not give the argument.

If I use the symbols `I am conscious', is it the case that I should
be regarded as having consciousness? Does it happen that some people
regard some animals as having consciousness, say, their pets, where
other people consider such animals as good eating and without
conscious attributes?

Currently people can easily distinguish objects that are people, and
whether those people-like objects respond well to language or to
common people-specific stimuli is what qualifies such an object as
conscious. I.e., if the object appears to be a person _and_ can
respond (behave) adequately to a certain class of stimuli, it is
conscious. The result is that if an object is distinguishable as a
machine (non-person), it is not conscious though the stimuli
responses may be the same. If we could not tell if the object was a
person or a machine by its direct appearance, it appeared to be a
person, and its responses were the same as would be given by a
person, then we would not easily treat such an object as a machine.
We could find ourselves in some difficulties treating a person as a
machine in the many extreme ways we treat machines and would prefer
to err on the side of caution. And if in doubt, treat such an
object as a person. I.e., if some object appears to be a person, it
is a person and is conscious, given the proper stimuli responses,
until shown otherwise. Hence John Searle would likely _call_ a
machine `conscious' if an object was a machine and John Searle could
not make the distinction, which of course, is how attributes are
commonly applied to anything. If it appears with reasonable evidence
to be of a particular object type, it is assumed to be of that type
unless shown to be otherwise. Hence if, in fact, machines were not
conscious and we could not tell the difference, we could not
effectively use a word such as `conscious' where an a priori
distinction would be required for its use. I.e., the word
`conscious' obtains, in the above scenario, no effective use in
making a distinction. We might think that some machines we could not
tell apart from people were not conscious, but we could not
behave, speak, or think in any manner that would effect a distinction. I.e.,
it may be the case that such machines were in fact not conscious,
but without an effective way to make a distinction in observation,
that indeterminate distinction of consciousness becomes irrelevant.
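
The attribution rule the paragraph describes can be put in a few lines
of Python (my paraphrase, with placeholder predicates): the word
`conscious' gets applied on the strength of appearance and response
alone, and no hidden fact enters the decision.

    # An object is *called* conscious when it looks like a person and
    # responds adequately to person-specific stimuli.
    def appears_to_be_person(obj):
        return getattr(obj, "looks_like_person", False)  # placeholder test

    def responds_adequately(obj):
        return getattr(obj, "passes_dialogue", False)    # placeholder test

    def called_conscious(obj):
        return appears_to_be_person(obj) and responds_adequately(obj)

    class Android:                 # indistinguishable in both respects
        looks_like_person = True
        passes_dialogue = True

    print(called_conscious(Android()))  # True: the label tracks evidence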

Is John Searle conscious?

Neil Nelson n_ne...@pacbell.net


Jerry Hull

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
On 29 May 2000 12:59:01 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:
>>>>>> Is there any procedure for determining what is or
>>>>>>is not known?
>
>>>>>Probably not. There may be all sorts of methods available (web search,
>>>>>library search, etc). But most likely nothing that can be precisely specified
>>>>>such as we expect for a procedure.

>>There is no effective procedure for induction as such -- is that what
>>you mean?
>
>Call it what you like. I am saying that what many epistemologists
>write about induction is nonsense. And if they cannot even get that
>right, there is no good reason to rely on anything else they say
>about knowledge.

Most folks are right about some things & wrong about others.

>> Nonetheless, can you deny that there is better and worse
>>when it comes to having empirical validation for a claim?
>
>I am not arguing against the value of testing claims.

Then why do you say "probably not" when asked if there are any
procedures for determining when something is known? If there aren't
any procedures for testing the validity of claims, there's not much
point to testing claims. Am I going too fast for you?

>My objection
>was only to the stories epistemologists tell on how this should be
>done.

Then why attack epistemology per se, and not just "some
epistemologists"?

>>> Because natural language is inconsistent when
>>>mistakenly treated as a logic system, deduction is quite limited in
>>>its usefulness.
>
>>Look who wants to be in the priesthood now!
>
>I am not requiring you to agree with me.

What you require could not be more beside the point.

>> Logical and artificial
>>languages are proper subsets of natural language, so the latter
>>encompasses all the capabilities of the former.
>
>That's an absurd argument. A consistent subsystem could do
>considerably better than the larger inconsistent system.

You know neither that natural language is inherently inconsistent nor
that mathematics is consistent. But we do know that natural language
is the metalanguage of any artificial language.

>>Most problems of everyday life don't need deduction as such, but
>>surely you are not suggesting that deduction is the ONLY form of
>>knowledge?
>
>I did not suggest anything of the kind. I suggested only that after
>induction is ruled out, then deduction is all that is left of the two
>things that you introduced into the discussion.

Oh, is there some third general category of procedures for warranting
claims, other than those traditionally labeled 'deductive' and
'inductive'?

>> That you know nothing is a logical consequence of YOUR
>>claim that there are no procedures for validating knowledge.
>
>An invention by the habitual liar, J. Hull.

I can tell you're losing an argument when you start to substitute
invective for reason.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Neil W Rickert

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
ZZZg...@stny.rr.com (Jerry Hull) writes:
>On 29 May 2000 12:59:01 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>>I am not arguing against the value of testing claims.

>Then why do you say "probably not" when asked if there are any
>procedures for determining when something is known?

It was rather a silly question. I wondered about its silliness when
answering. But apparently you do not see the silliness.

On its face, the question asks whether the something is known by
somebody somewhere. That is why I made comments about library and
web searches.

> My objection
>>was only to the stories epistemologists tell on how this should be
>>done.

>Then why attack epistemology per se, and not just "some
>epistemologists"?

In the unlikely event that I find any good ones, I will let you
know.

>>> Logical and artificial
>>>languages are proper subsets of natural language, so the latter
>>>encompasses all the capabilities of the former.

>>That's an absurd argument. A consistent subsystem could do
>>considerably better than the larger inconsistent system.

>You know neither that natural language is inherently inconsistent nor
>that mathematics is consistent. But we do know that natural language
>is the metalanguage of any artificial language.

I'll grant that we don't know for sure that mathematics is
consistent. We do know for sure that natural language is
inconsistent if treated as a logic system. We know this because of
the paradoxes that are expressible in natural language (the Liar
sentence, "This sentence is false," is the standard example).

That natural language is used as a metalanguage misses the point. We
use it as a metalanguage for its expressiveness, not for its logical
properties.

>>>Most problems of everyday life don't need deduction as such, but
>>>surely you are not suggesting that deduction is the ONLY form of
>>>knowledge?

>>I did not suggest anything of the kind. I suggested only that after
>>induction is ruled out, then deduction is all that is left of the two
>>things that you introduced into the discussion.

>Oh, is there some third general category of procedures for warranting
>claims, other than those traditional labeled 'deductive' and
>'inductive'?

Warranting claims is a foolish attempt to play God.

>>> That you know nothing is a logical consequence of YOUR
>>>claim that there are no procedures for validating knowledge.

>>An invention by the habitual liar, J. Hull.

>I can tell you're losing an argument when you start to substitute
>invective for reason.

I expressed a fact -- that you made a false attribution.


C. White

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:WgBY4.2200$WI5.1...@news.pacbell.net...

>
> Is John Searle conscious?

There seems to be a gross misunderstanding of the CRA. Searle's argument
has nothing to do with whether or not mental events can be determined by
behavior. His entire point is that following rules (syntax) is not the same
as understanding (semantics). The man passing the symbols back out of the
room has no understanding of Chinese. Like a computer, he just follows an
algorithm and hands out the appropriate symbol according to the book of
rules he is following. The whole point is that he is doing this without
understanding Chinese at all.

The argument is against the strong AI notion that understanding is the same
as carrying out syntactical processes.

C. White

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil W Rickert <ricke...@cs.niu.edu> wrote in message
news:8gu9lc$l...@ux.cs.niu.edu...

> Sure. But without the logic you could still reach into the frig and
> get some food to eat. In fact many people do that when logic and
> their obesity argue against it.

Ah, but that's psychology. The logic still holds up. <g>

Jerry Hull

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
On 29 May 2000 20:39:19 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>I am not arguing against the value of testing claims.
>
>>Then why do you say "probably not" when asked if there are any
>>procedures for determining when something is known?
>
>It was rather a silly question. I wondered about its silliness when
>answering. But apparently you do not see the silliness.
>
>On its face, the question asks whether the something is known by
>somebody somewhere. That is why I made comments about library and
>web searches.

So now when you wish to disavow your own remarks, you excuse them on
the basis of their silliness? A novel approach. Certainly someone
who explicates epistemology in terms of dogma and priesthoods cannot
avoid silliness. On and under the surface, my question asks if there
are any ways to distinguish between knowledge and mere belief. The
relevance of library and web searches has yet to be revealed.

>>>That's an absurd argument. A consistent subsystem could do
>>>considerably better than the larger inconsistent system.
>
>>You know neither that natural language is inherently inconsistent nor
>>that mathematics is consistent. But we do know that natural language
>>is the metalanguage of any artificial language.
>
>I'll grant that we don't know for sure that mathematics is
>consistent. We do know for sure that natural language is
>inconsistent if treated as a logic system.

We know that artificial languages are inconsistent if misapplied. Why
should it be any different for natural language?

> We know this because of
>the paradoxes that are expressible in natural language.

And when we find out how to avoid paradox, we express that in natural
language also.

>That natural language is used as a metalanguage misses the point. We
>use it as a metalanguage for its expressiveness, not for its logical
>properties.

Are you really saying that it is OK for the metalanguage to be
logically inconsistent, as long as it is sufficiently "expressive"?
Seems like a pretty slick trick to me.

>>>I did not suggest anything of the kind. I suggested only that after
>>>induction is ruled out, then deduction is all that is left of the two
>>>things that you introduced into the discussion.
>
>>Oh, is there some third general category of procedures for warranting
>>claims, other than those traditionally labeled 'deductive' and
>>'inductive'?
>
>Warranting claims is a foolish attempt to play God.

Huh! Then what is all this effort on your part to defend your own
remarks and attack mine? Chopped liver?

This pretty much avoids the question, don't you think? Either you
have a third alternative in mind, or you are once again talking
through your hat.

>>>> That you know nothing is a logical consequence of YOUR
>>>>claim that there are no procedures for validating knowledge.
>
>>>An invention by the habitual liar, J. Hull.
>
>>I can tell you're losing an argument when you start to substitute
>>invective for reason.
>
>I expressed a fact -- that you made a false attribution.

Gee, Neil, there is this thing called "supporting" claims, something
that must have been left out of your library. You have yet to even
ADDRESS the charge that your view of epistemology is self-confuting, let
alone show that it is false. Is this the kind of rational prowess you
use to impress your employer and colleagues? I didn't know that a
madhouse could get a .edu suffix.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

C. White wrote:

> There seems to be a gross misunderstanding of the CRA. Searle's
> argument has nothing to do with whether or not mental events can
> be determined by behavior. His entire point is that following
> rules (syntax) is not the same as understanding (semantics). The
> man passing the symbols back out of the room has no understanding
> of Chinese. Like a computer, he just follows an algorithm and
> hands out the appropriate symbol according to the book of rules he
> is following. The whole point is that he is doing this without
> understanding Chinese at all.

> The argument is against the strong AI notion that understanding is
> the same as carrying out syntactical processes.

How do we tell if John Searle _understands_ or merely applies the
syntactical rules? How do we tell if John Searle has mental events?
I.e., if our only evidence is the same resulting physically observed
expression in either case, if there are in fact two separate cases,
it is arbitrary which alternative is inferred. What is a mental
event? How do we together observe a mental event? If it is the case
that we both cannot observe the same mental event without inferring
it from behavior, then we have only a _supposition_ of a mental
event. What evidence can be given, which we both can see, that there
are any mental events?

The problem with the argument is the assumption that to understand
is necessarily different from being able to apply the proper rules,
while there is no clear definition of what it means to
understand. It would seem that if a person is able to apply the
proper rules by giving the expected responses, we would say he
understands; whereas if he did not give the expected responses, and
hence did not apply the proper rules, he did not understand. Is it
not the case that we judge understanding by conformance to
syntactical rules?

What is semantics? If we are required to _say_ what it is then
semantics consists of the syntax in what we say it is. If we then
cannot _say_ what semantics is, assuming it is not syntactical, then
how do we identify what semantics is? I.e., we have a word that
implies there is no description/definition in words that can be
given to connect that word to its referent. If we cannot _say_ what
we mean, how do we agree, using only words, on what we mean?

Neil Nelson n_ne...@pacbell.net

C. White

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:3933A7EA...@pacbell.net...

> The problem with the argument is the assumption that to understand
> is necessarily different than being able to apply the proper rules,
> but that there is no clear definition of what it means to
> understand.

The point of the CRA is that the man in the room is given uninterpreted
symbols (symbols that have no meaning for him like they would for a Chinese
person), follows rules, and returns uninterpreted symbols. That is, he
follows syntactical procedures and never learns Chinese, never coming
to understand Chinese the way he understands English.

A digital computer that only manipulates uninterpreted symbols by
syntactical procedures cannot think like we do. Thinking involves more than
manipulating meaningless symbols. It requires interpretation of what the
symbols mean.

The further point Searle makes is that digital computers (that just follow
syntactical rules) may simulate thinking, but they are not thinking.

The strong AI project rests on the idea that thinking is a purely formal
procedure.


Oliver Sparrow

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
Neil W Rickert <ricke...@cs.niu.edu> wrote:

>Sure. But you don't need an epistemologist to tell you that.

More an epidemiologist: do we need another infectious flame war?
^^^^^^^
_______________________________

Oliver Sparrow

Oliver Sparrow

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
"Neil Nelson" <n_ne...@pacbell.net> wrote:

> Is John Searle conscious? How do we tell if some object is
> conscious? ... Followed by sensible thoughts.

I wonder if the problem is not to do with the relativism of labels. What
"John Searle" calls "John Searle" is undoubtedly conscious. What an
acquaintance calls "John" is a series of sensoria and simulations, which is
not. That from these data and confections we can adduce the existence of a
conscious JS is an interesting feat, but not one of direct attribution, from
observer to 'thing', in that there is no thing, just Searle-in-himself
(only knowable as aspects, even to himself) and a bunch of aspects as
perceived by Searle-observers. One who knows of the performing Searle
only through hearsay and literary diligence knows an aspect of an aspect
of... and thus can say nothing useful at all.

Can I know anything about the internal state of Mary, there? Directly, no;
by phenomenology-proven resonance with my own internal states, something can
be induced. Can I know abstract stuff about others' awareness: how many
awarenesses dance upon the head of a pin, or in a packed stadium? Know: no,
imagine-project-induce, yes, somewhat, in a qualitative way. Can I know if
that pickaxe-grey box-brain in a jar is aware? In any absolute sense, as
things stand, no.
_______________________________

Oliver Sparrow

Sergio Navega

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
Neil Nelson wrote in message <3933A7EA...@pacbell.net>...
>
>[snip]

> What is semantics? If we are required to _say_ what it is then
> semantics consists of the syntax in what we say it is. If we then
> cannot _say_ what semantics is, assuming it is not syntactical, then
> how do we identify what semantics is? I.e., we have a word that
> implies there is no description/definition in words that can be
> given to connect that word to its referent. If we cannot _say_ what
> we mean, how do we agree, using only words, on what we mean?
>
>

The conundrum you propose here is pretty much at the center of the
problem. If one cannot say in words what a semantic notion is, then
how can we identify what it is? Well, the question is that we could
(eventually) express a semantic notion in words, but it would be so
intractable (to our brains) that it would "mean" nothing to us. It
would be just like an hexadecimal dump of an operating system, just
a bunch of lower level elements.

And this is probably related to the difference between conscious,
linguistic thoughts (the raw material of philosophers) versus the
deep, unconscious and perceptual levels of our brain. It is not
reasonable to describe in words what's happening at those lower
levels, not only because of complexity, but also because our
natural language does not have enough gradations.

This is related to the old idea of constructing intelligence based
on purely symbolic models, something that lends some merit to
Searle's thought experiment. But once one (barely) understands what's
happening down there, then all Searle-like arguments turn into
child-like stories.

Regards,
Sergio Navega.

C. White

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Sergio Navega <sna...@attglobal.net> wrote in message
news:3933c...@news3.prserv.net...

> The conundrum you propose here is pretty much at the center of the
> problem. If one cannot say in words what a semantic notion is, then
> how can we identify what it is? Well, the question is that we could
> (eventually) express a semantic notion in words, but it would be so
> intractable (to our brains) that it would "mean" nothing to us. It
> would be just like a hexadecimal dump of an operating system, just
> a bunch of lower level elements.

The strength of Searle's argument is that you place a man who already has
conscious intentionality into a room and give him all sorts of syntactic
rules and procedures to follow, he still doesn't understand Chinese. You
can even "internalize" the Chinese Room: put all the elements of the system
into the man (so to speak) and yet he still doesn't understand Chinese.

The intentionality of computers is attributed to them solely by the people
programming them and the people who use them.

Searle is not saying that a computer could not be made that could think or
understand. It might be possible. But it isn't possible by syntactical
processes alone. Nor does he deny that the brain has syntactical or
computational processes. It does (i.e., we are machines that think, that
can interpret symbols as well as "manipulate" them).

In sum: the hardware cannot be neglected.

Neil W Rickert

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
ZZZg...@stny.rr.com (Jerry Hull) writes:
>On 29 May 2000 20:39:19 -0500, Neil W Rickert <ricke...@cs.niu.edu>
>wrote:

>>On its face, the question asks whether the something is known by
>>somebody somewhere. That is why I made comments about library and
>>web searches.

>So now when you wish to disavow your own remarks, you excuse them on
>the basis of their silliness? A novel approach. Certainly someone
>who explicates epistemology in terms of dogma and priesthoods cannot
>avoid silliness. On and under the surface, my question asks if there
>are any ways to distinguish between knowledge and mere belief. The
>relevance of library and web searches has yet to be revealed.

In future, ask your questions directly. Don't ask questions where
you expect answers to a hidden meaning.

Now that your personal attacks are becoming too obvious, I
am terminating the discussion.


Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

C. White wrote:

> The strength of Searle's argument is that you place a man who
> already has conscious intentionality into a room and give him all
> sorts of syntactic rules and procedures to follow, he still
> doesn't understand Chinese. You can even "internalize" the Chinese
> Room: put all the elements of the system into the man (so to
> speak) and yet he still doesn't understand Chinese.

The strength of the argument depends on the assumption that we can
make the distinction between a man of conscious intentionality and
one who has merely internalized the Chinese Room.

Oliver Sparrow wrote:

[ Can I know anything about the internal state of Mary, there?
[ Directly, no; by phenomenology-proven resonance with my own
[ internal states, something can be induced. Can I know abstract
[ stuff about others' awareness: how many awarenesses dance upon the
[ head of a pin, or in a packed stadium? Know: no,
[ imagine-project-induce, yes, somewhat, in a qualitative way. Can I
[ know if that pickaxe-grey box-brain in a jar is aware? In any
[ absolute sense, as things stand, no.

If as Oliver Sparrow suggests, we can make no distinction between a
man of conscious intentionality and one of merely Chinese Room
internalization, then we cannot objectively--give observable
evidence to the arbitrary observer--determine which is the case.

Sergio Navega wrote:

( And this is probably related to the difference between conscious,
( linguistic thoughts (the raw material of philosophers) versus the
( deep, unconscious and perceptual levels of our brain. It is not
( reasonable to describe in words what's happening at those lower
( levels, not only because of complexity, but also because our
( natural language does not have enough gradations.

And then as previously, how does one objectively determine
consciousness from merely syntactical response? How does one
objectively distinguish a thought from observable expression?

Clearly, if we make Searle's assumption that there is C. White's
distinction, then Searle's conclusion should follow. In a sense it
is merely a rephrasing of the original assumption. But the question
is: what _objective_ evidence is there for making Searle's
distinction in his assumption (conscious intentionality is distinct
from a man with an internalized Chinese Room)? If there is no
objective evidence, the argument can be avoided as being opinion;
i.e., it is Searle's _opinion_ that there is a distinction as no
objective evidence is available.

But there is one additional problem: no argument merely in words can
be given for non-word referents or results. E.g., if to _understand_
only requires a word map--I ask you a question, you give a response,
from which I say you understand or not--then understanding is
immediately syntactical (confined to syntax). If I ask you to pick a
particular fruit out of many--assuming that we somehow understand
this example beyond syntax--and you pick the proper fruit, then we
could by our physical demonstration (not merely saying it as being
done here) objectively show non-syntactical understanding/meaning.
If Searle wishes to get beyond syntax (a mere manipulation of
symbols), he would need to provide a physical demonstration in which
the premises were identified without the objectivity problems noted
previously, and no such demonstration is offered. In a physical
experiment, you could not merely _say_ that the subject has
conscious intentionality separate from a Chinese Room
internalization, you would need to objectively show that such was
the case, otherwise the assumed conditions for the experiment are
not satisfied and the experiment is useless.

Neil Nelson n_ne...@pacbell.net

Phil Roberts, Jr.

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

> ZZZg...@stny.rr.com (Jerry Hull) writes:

> > On and under the surface, my question asks if there
> > are any ways to distinguish between knowledge and mere belief. The
> > relevance of library and web searches has yet to be revealed.
>

This IS an interesting question. And I'll throw out a knee jerk
answer to keep the ball rolling long enough to see if we can find
any wheat in the chaff.

Assuming, for simplicity's sake, that knowledge is merely true belief,
and that this can be unpacked to little more than beliefs which
"correspond" with reality (as if 'correspond' and 'reality' were
themselves unproblematic, :) ), I would say that we can increase
the likelihood of true belief by assessing them along the explanationist
lines of coherence, elegance, simplicity, testability, and their
ability to square with what we already know - coherence again, I
guess. But in the narrow sense of know, I doubt that we can ever
know that our criteria are themselves ultimate and correct. So
I guess I would say we can increase the likelihood, but can never
acquire the certainty, that we have the right procedure for the
acquisition of true beliefs, although I'm not so sure that would
also constitute a procedure for assessing whether a belief one
already has is "true" or not.

I know. I know. I'm askin' for it. Jerry's probably going to
slam dunk me on this.


--

Phil Roberts, Jr.

The Psychodynamics of Genetic Indeterminism:
Why We Turned Out Like Captain Kirk Instead of Mr. Spock
http://www.fortunecity.com/victorian/dada/90/

Sergio Navega

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
C. White wrote in message ...

>
>Sergio Navega <sna...@attglobal.net> wrote in message
>news:3933c...@news3.prserv.net...
>> The conundrum you propose here is pretty much at the center of the
>> problem. If one cannot say in words what a semantic notion is, then
>> how can we identify what it is? Well, the question is that we could
>> (eventually) express a semantic notion in words, but it would be so
>> intractable (to our brains) that it would "mean" nothing to us. It
>> would be just like a hexadecimal dump of an operating system, just
>> a bunch of lower level elements.
>
>The strength of Searle's argument is that you place a man who already has
>conscious intentionality into a room and give him all sorts of syntactic
>rules and procedures to follow, he still doesn't understand Chinese. You
>can even "internalize" the Chinese Room: put all the elements of the system
>into the man (so to speak) and yet he still doesn't understand Chinese.
>
>The intentionality of computers is attributed to them solely by the people
>programming them and the people who use them.


If you're referring to commonplace computers, I may agree. But this does
not extend automatically to any formal symbol manipulation system, if
these terms ('symbol' and 'manipulation') are taken on a wider scope.
One can find a place for neural equivalents of symbolic entities if one
is willing to accept an augmented notion of what a symbol is. In this
regard we could be reclassified as intentional (and symbolic) systems,
without recourse to magical or mystical properties of meat brains.

>
>Searle is not saying that a computer could not be made that could think or
>understand. It might be possible. But it isn't possible by syntactical
>processes alone. Nor does he deny that the brain has syntactical or
>computational processes. It does (i.e., we are machines that think, that
>can interpret symbols as well as "manipulate" them).
>

Again I may agree with you if we take the notion of 'syntactical processes'
as is usually taken. But I insist on a different interpretation, where a
train of spikes (action potentials) in a group of neurons can be seen as
a kind of 'symbolic representation' of external signals that impinged
on our senses. These symbols are different from traditional symbols
because they don't need an "interpreter" to relate to its meaning: their
meaning is explicitly coded in their shape, a property that is unusual
in traditional symbol systems. However, these "symbols" still conserve
a fundamental property: they are objects that stand for other objects.

These are just different kinds of symbols, ones which will (perhaps) require
different formal notions than the symbols we're used to manipulate.
Nevertheless, these systems could be simulated on computers and,
given suitable sensory contact with the world, could eventually
demonstrate intentionality similar to that of humans.
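As a rough sketch of that reading (assuming, very loosely, that a
spike train can be reduced to simple rate coding, which real neurons
far exceed; the names are made up for illustration):

    import random

    def spike_train(stimulus_strength, steps=20):
        # Rate coding: a stronger stimulus yields a denser train of
        # spikes. The pattern itself is the representation; nothing
        # outside the train assigns it a meaning.
        return [1 if random.random() < stimulus_strength else 0
                for _ in range(steps)]

    print(spike_train(0.1))  # weak stimulus -> sparse spikes
    print(spike_train(0.8))  # strong stimulus -> dense spikes

Here the "symbol" for a stimulus is just the firing pattern it evokes,
so its meaning is carried in its shape rather than assigned by an
interpreter.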

Regards,
Sergio Navega.


Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Phil Roberts wrote:

[ Assuming, for simplicity's sake, that knowledge is merely true
[ belief, and that this can be unpacked to little more than beliefs
[ which "correspond" with reality (as if 'correspond' and 'reality'
[ were themselves unproblematic, :) ), I would say that we can
[ increase the likelihood of true belief by assessing them along the
[ explanationist lines of coherence, elegance, simplicity,
[ testability, and their ability to square with what we already know
[ - coherence again, I guess.

[ [1] But in the narrow sense of know, I doubt that we can ever know
[ that our criteria are themselves ultimate and correct.

[ So I guess I would say we can increase the likelihood, but can
[ never acquire the certainty, that we have the right procedure for
[ the acquisition of true beliefs, although I'm not so sure that
[ would also constitute a procedure for assessing whether a belief
[ one already has is "true" or not.

I have separated [1] from the rest to ask whether Phil Roberts holds
[1] to be ultimate and correct. If not, since [1] holds there is no
method by which we can judge [1] to be ultimate and correct, then
[1] says of itself that it is not (we doubt, we are not certain)
ultimate or correct; do we then doubt that we doubt? I.e., have we
found ourselves in a variation of the Liar paradox? If we can say
that we doubt, without doubting our doubting, then we say that we
doubt certainly and hence are certain (ultimate and correct) about
at least one thing: that we doubt.

Now that we are certain about this particular assertion, does Phil
Roberts hold the assertion true? Is it a belief? Is it knowledge? If
criteria are required to assess the certainty of the assertion then
could we obtain a certain (necessary) assertion from unnecessary
criteria. Hence if we required criteria to assess certainty, of
which we have one case, then there are also necessary (certain)
criteria of which we can then certainly assert; thereby expanding
the number of assertions of which we are certain. It would seem that
if we are certain about an assertion, then we believe it, it is true
(otherwise we would not believe it), we hold it justified (it is
qualified by our necessary criteria), and hence it is also
knowledge. Knowledge, contrary to Socrates' position, seems quite
plentiful.

Neil Nelson n_ne...@pacbell.net

Sergio Navega

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
Neil Nelson wrote in message <3933F857...@pacbell.net>...

>
> Sergio Navega wrote:
>
> ( And this is probably related to the difference between conscious,
> ( linguistic thoughts (the raw material of philosophers) versus the
> ( deep, unconscious and perceptual levels of our brain. It is not
> ( reasonable to describe in words what's happening at those lower
> ( levels, not only because of complexity, but also because our
> ( natural language does not have enough gradations.
>
> And then as previously, how does one objectively determine
> consciousness from merely syntactical response? How does one
> objectively make distinct a thought from observable expression?
>
> Clearly, if we make Searle's assumption that there is C. White's
> distinction, then Searle's conclusion should follow. In a sense it
> is merely a rephrasing of the original assumption. But the question
> is: what _objective_ evidence is there for making Searle's
> distinction in his assumption (conscious intentionality is distinct
> from a man with an internalized Chinese Room)? If there is no
> objective evidence, the argument can be avoided as being opinion;
> i.e., it is Searle's _opinion_ that there is a distinction as no
> objective evidence is available.
>


Your point here is of fundamental importance, I think. Searle's (and
most) thought-experiments are useless notions because they defy any
objective way of settling the question. No one could think of
constructing the rule-book necessary in Searle's CR, because that
would require more matter than found in the entire universe. Yet, it
is exactly the fact that one cannot build such a rule book that
prevents the CR from being an objective test: it depends on *who*
is interviewing the room. One may be less strict (or more gullible)
in his/her notion of intelligence and assign intelligence to the CR
based on a few simple exchanges, while another person may require a
much longer dialog and even then not be satisfied.
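Back-of-envelope arithmetic makes the scale vivid (the figures below
are illustrative assumptions, not measurements):

    # An exhaustive rule book must pair every possible input with a
    # reply. With an assumed inventory of 3,000 characters and inputs
    # of just 30 characters, the entries already outnumber the roughly
    # 10**80 atoms commonly estimated for the observable universe.
    symbols = 3000
    max_length = 30
    entries = symbols ** max_length   # about 2 x 10**104
    print(entries > 10 ** 80)         # True

And a real room would have to index whole conversation histories, not
single inputs, which only makes the count worse.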

This is why I don't "define" intelligence in behavioral terms
(although I know this is exactly how AI scientists prefer to define it).
Intelligence is, for me, the kind of ability that one brings to the
surface when one does not know exactly how to solve a problem.
I would be convinced of an intelligent CR only in the instance
of it responding sensibly to a vague and uncertain question (but
a thought-provoking one) that I posed previously.

Regards,
Sergio Navega.


C. White

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:3933F857...@pacbell.net...

> Clearly, if we make Searle's assumption that there is C. White's
> distinction, then Searle's conclusion should follow. In a sense it
> is merely a rephrasing of the original assumption. But the question
> is: what _objective_ evidence is there for making Searle's
> distinction in his assumption (conscious intentionality is distinct
> from a man with an internalized Chinese Room)? If there is no
> objective evidence, the argument can be avoided as being opinion;
> i.e., it is Searle's _opinion_ that there is a distinction as no
> objective evidence is available.

Recall that Searle's argument is against the AI notion that a
sophisticated enough syntactic system will be able to do more than mimic
what we do when we read and interpret symbols. The idea is that it will
duplicate conscious intentionality. The point of Searle's CRA is that a
conscious, intentional agent is given a sophisticated set of rules to follow
and can "respond" by pulling the appropriate Chinese character out of a box
and pushing it out through the slot. However, no matter how sophisticated
the rules are that are given to him, he still does not understand Chinese.

Searle makes the even deeper point in _Rediscovery of the Mind_ (and
elsewhere) that even syntax is not something that is even intrinsic to
physics, but is itself imposed on a system by a conscious, intentional
agent. For example, an open or closed window could be interpreted as
algorithmic information system. Saying that the brain computes or follows
algorithms is a function imposed by a conscious, intentional agent.

A cute example he gives in RDM is of a pencil: it can be argued that it is
following a program, albeit an uninteresting one.


Anders N Weinstein

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
In article <mke7js0tg45h0fgot...@4ax.com>,

Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
>"Neil Nelson" <n_ne...@pacbell.net> wrote:
>
>> Is John Searle conscious? How do we tell if some object is
>> conscious? ... Followed by sensible thoughts.
>
>I wonder if the problem is not to do with the relativism of labels. What
>"John Searle" calls "John Searle" is undoubtedly conscious. What an
>acquaintance calls "John" is a series of sensoria and simulations, which is
>not.

Nonsense. What an acquaintance calls "John Searle" is the exact same
thing as the thing that John Searle calls "I" (whatever exactly that
entity is.) This is the same thing that John Searle calls "John Searle"
as well, if, as is usual, he knows this name of his. But this last is
not necessary, for there is always a possibility of not knowing your
own name through amnesia, while still being able to refer to yourself
as "I".

You can see they must be the same from the logical connections between
claims made by different speakers. For example, if I say "John Searle
is bald" and John Searle says "I am not bald", our claims disagree.

(All this is assuming context resolves the ambiguity in the name
"John Searle" which is probably possessed by several people.)

As to what *sort* of thing "John Searle" names, I would say it is
a living human being, a kind of animal.

>not. That from these data and confections we can adduce the existence of a
>conscious JS is an interesting feat, but not one of direct attribution, from
>observer to 'thing', in that there is no thing, just Searle-in-himself
>(only knowable as aspects, even to himself) and a bunch of aspects as
>perceived by Searle-observers. One who knows of the performing Searle

I don't think this is right. Intentional states are characterized by
complete propositional content, in the paradigmatic case, one with
subject and predicate contents, in order to represent something
about the world that may be true or false. Thus I may believe something
that I would express by "that man is bald" or, equally, "that man has
a headache [so go get some aspirin]". But I might also believe "That
man is John Searle". The noun phrase in these sentences
expresses reference to an object, the bearer of the relevant attributes.

Because these states have complete propositional form, the states we
attribute can be resolved into thing and attribute. "John Searle"
names the object, "___ has a headache" might be said to introduce an
attribute. A complete content has to have both.

If you focus always on complete propositional contents, I think you
will be less inclined to say that we only experience aspects or
sensoria and simulations. Even the upshot of perception normally
has a complete propositional content, as it must if it is to stand
in cognitive relations to claims about the extra-mental world.

Jerry Hull

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
On 30 May 2000 11:14:24 -0500, Neil W Rickert <ricke...@cs.niu.edu>
wrote:

>ZZZg...@stny.rr.com (Jerry Hull) writes:

>>>On its face, the question asks whether the something is known by
>>>somebody somewhere. That is why I made comments about library and
>>>web searches.
>
>>So now when you wish to disavow your own remarks, you excuse them on
>>the basis of their silliness? A novel approach. Certainly someone
>>who explicates epistemology in terms of dogma and priesthoods cannot

>>avoid silliness. On and under the surface, my question asks if there
>>are any ways to distinguish between knowledge and mere belief. The
>>relevance of library and web searches has yet to be revealed.
>

>In future, ask your questions directly. Don't ask questions where
>you expect answers to a hidden meaning.

What hidden meaning? Methinks the lad is a tad befuddled.

>Now that your personal attacks are becoming too obvious, I
>am terminating the discussion.

Too obvious? What have I said that comes even close to this:

>>An invention by the habitual liar, J. Hull.

It tells a lot about your professional integrity, that you would make
such a libelous claim without the SLIGHTEST effort to substantiate it.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Jerry Hull

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
On Tue, 30 May 2000 13:42:32 -0400, "Phil Roberts, Jr."
<phi...@ix.netcom.com> wrote:

>> ZZZg...@stny.rr.com (Jerry Hull) writes:
>
>> > On and under the surface, my question asks if there
>> > are any ways to distinguish between knowledge and mere belief. The
>> > relevance of library and web searches has yet to be revealed.
>

>This IS an interesting question. And I'll throw out a knee jerk
>answer to keep the ball rolling long enough to see if we can find
>any wheat in the chaff.
>

>Assuming, for simplicity's sake, that knowledge is merely true belief,
>and that this can be unpacked to little more than beliefs which
>"correspond" with reality (as if 'correspond' and 'reality' were
>themselves unproblematic, :) ), I would say that we can increase
>the likelihood of true belief by assessing them along the explanationist
>lines of coherence, elegance, simplicity, testability, and their
>ability to square with what we already know - coherence again, I
>guess. But in the narrow sense of know, I doubt that we can ever
>know that our criteria are themselves ultimate and correct. So
>I guess I would say we can increase the likelihood, but can never
>acquire the certainty, that we have the right procedure for the
>acquisition of true beliefs, although I'm not so sure that would
>also constitute a procedure for assessing whether a belief one
>already has is "true" or not.

Unlike the case with deduction, there are no effective procedures for
determining the validity of empirical claims. But this hardly entails
that we never have good reason for believing such claims to be true.
The fallacy seems to be the supposition that we cannot know when it is
possible that we are mistaken. This may be the genesis for Rickert's
idiosyncratic stance, but it's hard to tell because his personality
problems so muddle up any debate.

That aside, it is surely false to argue, as he does, that there is no
shared basis for determining the validity of claims, for by its very
nature an argument presupposes a shared basis of validity.

--
Jer
http://www.mp3.com/DrJerryPeriferals

Gary Forbis

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
C. White <cwhi...@hotmail.com> wrote in message news:FbUY4.1018$IO4.1...@news.flash.net...

> Recall that Searle's argument is against the the AI notion that a
> sophisticated enough syntactic system will be able to do more than mimic
> what we do when we read and interpret symbols.

Maybe you filled this out in what follows but I'm not sure.

Isn't Searle's argument that if the machine in fact has consciousness it is not
due to implementing the syntactic system but rather due to other causal
factors, in particular the physical system's power to cause consciousness
(when undergoing the right processes)?

Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson wrote:

> Clearly, if we make Searle's assumption that there is C. White's
> distinction, then Searle's conclusion should follow. In a sense it
> is merely a rephrasing of the original assumption. But the
> question is: what _objective_ evidence is there for making
> Searle's distinction in his assumption (conscious intentionality
> is distinct from a man with an internalized Chinese Room)? If
> there is no objective evidence, the argument can be avoided as
> being opinion; i.e., it is Searle's _opinion_ that there is a
> distinction as no objective evidence is available.

C. White wrote:

[ Recall that Searle's argument is against the AI notion that a
[ sophisticated enough syntactic system will be able to do more than
[ mimic what we do when we read and interpret symbols. The idea is
[ that it will duplicate conscious intentionality. The point of
[ Searle's CRA is that a conscious, intentional agent is given a
[ sophisticated set of rules to follow and can "respond" by pulling
[ the appropriate Chinese character out of a box and pushing it out
[ through the slot. However, no matter how sophisticated the rules
[ are that are given to him, he still does not understand Chinese.

[ Searle makes the even deeper point in _Rediscovery of the Mind_
[ (and elsewhere) that even syntax is not something that is even
[ intrinsic to physics, but is itself imposed on a system by a
[ conscious, intentional agent. For example, an open or closed
[ window could be interpreted as an algorithmic information system.
[ Saying that the brain computes or follows algorithms is a function
[ imposed by a conscious, intentional agent.

[ A cute example he gives in RDM is of a pencil: it can be argued
[ that it is following a program, albeit an uninteresting one.

I do not think you are addressing my point. You are using the
phrases `what we do when we read and interpret symbols', `conscious
intentionality', `understand', `conscious, intentional agent'
whereas my argument was that these kinds of phrases make no
effective and objective distinction in the contrast you are trying
to make. E.g., what is it that we do when we read and interpret
symbols that would be different from the responses that an
appropriately programmed machine would make? If all we can do is
observe physical responses, which appears to be the case (we do not
read minds), then we cannot avoid the possibility that what we do
when we read and interpret symbols is the same as what a machine may
be programmed to do. You are assuming that there is a necessary
distinction, I am asking what the objective basis is for making such
a distinction as none appears to be available. If we are already
machines, then our assumption that we are not makes no difference.

E.g., let's turn the Turing Test around and ask if we can prove that
it is certain that no one posting to this thread is a machine? Is
Searle merely a machine masquerading as a non-machine by using
phrases such as `conscious, intentional agent' that are
syntactically defined to refer to non-machines, but where no such
objective reference can be made? Some arguments suppose such things
as non-existents, square-circles and so on, but that does not
require that there are any such things. You are saying that there
are conscious, intentional agents; I am asking if there is an
objective basis for making such a distinction from the machine that
behaves or appears in all respects the same?

Neil Nelson n_ne...@pacbell.net

Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson wrote:

> You are saying that there are conscious, intentional agents; I am
> asking if there is an objective basis for making such a
> distinction from the machine that behaves or appears in all
> respects the same?

C. White wrote:

[ I think you're missing the point of the argument, which is that
[ you already have a conscious intentional agent that understands a
[ language following very detailed rules and no matter how detailed
[ and complicated the rules are, no understanding of Chinese takes
[ place.

What you have just stated is a _premise_ or _assumption_ of Searle's
argument. I may give an argument that assumes its conclusion but
that does not mean we should then accept the conclusion. What is
required is an agreement on acceptance of the premise. I have argued
that there is no objective basis for accepting the premise.

Neil Nelson n_ne...@pacbell.net

Neil Nelson

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Neil Nelson wrote:

> What you have just stated is a _premise_ or _assumption_ of
> Searle's argument. I may give an argument that assumes its
> conclusion but that does not mean we should then accept the
> conclusion. What is required is an agreement on acceptance of the
> premise. I have argued that there is no objective basis for
> accepting the premise.

C. White wrote:

[ Whatever. The assumption is that the man is a conscious
[ intentional agent. ...

I do not disagree that you are making this assumption; I am asking
what is the objective basis for making the assumption? Just stating
an assumption does nothing to provide an objective basis. It may be
that the assumption becomes objective as an artifact of an assumed
agreed syntax required for communication, such as a minimal logic.
It may be the assumption is objective in that we would agree that an
arbitrary observer of physical circumstance would agree with our
statements about physical observation. Why should the assumption be
accepted?

Neil Nelson n_ne...@pacbell.net

C. White

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:393456C3...@pacbell.net...

> You are saying that there are conscious, intentional agents; I am
> asking if there is an
> objective basis for making such a distinction from the machine that
> behaves or appears in all respects the same?

I think you're missing the point of the argument, which is that you already
have a conscious intentional agent that understands a language following
very detailed rules and no matter how detailed and complicated the rules
are, no understanding of Chinese takes place.

Distinguishing a conscious machine from an unconscious one is an empirical
question and I don't think there are infallible criteria. But it works the
other way, too. People have been written off as unconscious from
neurological disease or trauma when they were actually conscious. In such
cases there is no objective, observable behavior for determining whether or
not the person is conscious. Behavior is a useful, practical criterion for
determining whether someone or something is conscious, but it is
ultimately an inappropriate measure of consciousness.

Perhaps a deeper understanding of the neuronal processes of the brain will
lead to objective criteria (at least in biological beings with nervous
systems): consciousness is present iff x, y, and z occur (where x, y, and z
are neurological processes in the brain).

C. White

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:3934748E...@pacbell.net...

> What you have just stated is a _premise_ or _assumption_ of Searle's
> argument. I may give an argument that assumes its conclusion but
> that does not mean we should then accept the conclusion. What is
> required is an agreement on acceptance of the premise. I have argued
> that there is no objective basis for accepting the premise.

Whatever. The assumption is that the man is a conscious intentional agent.
The argument is that syntactical rules are not constitutive of his
intentionality so that "programming" him to hand back the correct Chinese
symbol adds nothing to his understanding of Chinese. Unlike English, the
symbols from the Chinese language are meaningless to him, so he manipulates
meaningless symbols just as a digital computer manipulates meaningless
symbols. He successfully communicates in Chinese with whoever is putting the
Chinese symbols into the CR. That person walks away with the impression the
man in the room knows and understands Chinese and its symbols. But this
impression is wrong. For he is only carrying out a formal process,
following rules about which symbols to hand back when, and doesn't understand
the meaning of Chinese symbols.

Again, the argument is against the notion that digital computers think, that
sufficiently developed formal processes (syntax) somehow create understanding
or thinking the way our brains do it. A sophisticated computer program
manipulates meaningless symbols in a formal process that does something
(gives an output) to an input. No level of sophistication of the *formal
process*, the program, makes a conscious intentional creature capable of
understanding (say) Chinese like the brain.

The brain is not a digital computer.

C. White

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Gary Forbis <GaryF...@email.msn.com> wrote in message
news:ee$Ewnny$GA.188@cpmsnbbsa09...

> Isn't Searle's argument that if the machine in fact has consciousness it is not
> due to implementing the syntactic system but rather due to other causal
> factors, in particular the physical system's power to cause consciousness
> (when undergoing the right processes)?

Yep.

Oliver Sparrow

unread,
May 31, 2000, 3:00:00 AM5/31/00
to
ande...@pitt.edu (Anders N Weinstein) wrote:

>Nonsense.

...which is seldom the case. But he said it.

He goes on to say that:

>What an acquaintance calls "John Searle" is the exact same
>thing as the thing that John Searle calls "I".

This is so obviously not the case that it barely requires comment. It
confuses a physical entity - that lump of meat - with a system, what that
meat does, and does to the system that revolves around my, the observer's,
meat loaf. I describe 'John Searle' after a morning in a garden together,
talking of fine things, as the aggregated and compacted, crosslinked
information that this interchange has generated, and its resonances with
pre-existing structures in 'me'. The structure that has generated
these inputs (and missed potential information, and a host of other
matters) is quite distinct from my label. The Inner Searle, considering his
own afternoon, also accesses (and is) the workings of this system, but only
parts of it, and much of that represented symbolically as rough memory
traces and vague associations.

> ... Intentional states are characterized by
>complete propositional content, in the paradigmatic case, one with
>subject and predicate contents, in order to represent something
>about the world that may be true or false.

I fear that these words, whilst construed in a grammatical sentence, convey
no meaning to me. Whatever was intended, I do not think that we know enough
about 'intentional states', either as a taxonomic description of mentation
or else as a description of a transaction, in enough detail to be able to
assert anything whatever about them, save that they exist and do stuff.

>Because these states have complete propositional form, the states we
>attribute can be resolved into thing and attribute. "John Searle"
>names the object, "___ has a headache" might be said to introduce an
>attribute. A complete content has to have both.

A world reduced to Lego. Red block on yellow block. But 'tain't like that,
however convenient it may be to pretend that it is. It is exactly this
oversimplification of what happens when we apply labels (that the label is
exactly the thing, and that the thing is bounded by the properties of the
label) that crashed early AI.
_______________________________

Oliver Sparrow

C. White

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:39349117...@pacbell.net...

> I do not disagree that you are making this assumption; I am asking
> what is the objective basis for making the assumption? Just stating
> an assumption does nothing to provide an objective basis. It may be
> that the assumption becomes objective as an artifact of an assumed
> agreed syntax required for communication, such as a minimal logic.
> It may be the assumption is objective in that we would agree that an
> arbitrary observer of physical circumstance would agree with our
> statements about physical observation. Why should the assumption be
> accepted?

Well, let's go back and see what the original question is: does our brain
work like a digital computer? So we take a human brain that already works
the way it does and then add programming to it (that is, the rules of
handing back Chinese symbols). The question is: do completely formal
procedures get us to understand Chinese like we normally do? The CRA answer
is: no. The conclusion follows: the way the brain works is more than the
manipulation of meaningless symbols like a digital computer (which is all a
digital computer does).

In fact, symbols have meaning for us: they stand for something. For a
digital computer, symbols do not stand for anything, not even numbers. They
just are numbers (zeroes and ones). It does not have thoughts and beliefs
about the world (the contents of the mental states of the brain) like we do.
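One way to make that concrete, as a sketch only (the byte values are
arbitrary examples):

    import struct

    raw = b"Hi!!"  # four bytes; in themselves, just numbers

    as_text = raw.decode("ascii")         # read as characters...
    as_int = struct.unpack(">I", raw)[0]  # ...or as one 32-bit integer

    # Same bits, two "meanings" -- both imposed by the reader,
    # neither intrinsic to the machine.
    print(as_text, as_int)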


Neil Nelson

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Neil Nelson wrote:

> I do not disagree that you are making this assumption; I am asking
> what is the objective basis for making the assumption? Just
> stating an assumption does nothing to provide an objective basis.
> It may be that the assumption becomes objective as an artifact of
> an assumed agreed syntax required for communication, such as a
> minimal logic. It may be the assumption is objective in that we
> would agree that an arbitrary observer of physical circumstance
> would agree with our statements about physical observation. Why
> should the assumption be accepted?

C. White wrote:

[ Well, let's go back and see what the original question is: does
[ our brain work like a digital computer? So we take a human brain
[ that already works the way it does and then add programming to it
[ (that is, the rules of handing back Chinese symbols). ...

How does a human brain work that is different from after we add
programming to it? E.g., one sometimes discussed aspect of Church's
Thesis is that all physical laws are computable such that all
physical entities/organizations are equivalent to computable
machines, or that these organizations can be seen simply as computers
of a different physical form. Since the human brain is a
physical organization, then by the previous persuasion (the Thesis
being only a thesis) a human brain is already a computer or equivalent
to one whether or not we add programming to it. I.e., if you could
take brain matter and obtain a computational device that exceeded
the results bounded by Church's Thesis--and apparently discover new
physical laws in the process or significantly refine current
ones--then you might be making some headway. And there are those who
hold that human development of advanced mathematics is just such a
result. However, we need to show by objective observation that the
brain is essentially different from one case to the next, or that
the black-box that is the brain functions in a manner that exceeds
any computer. What is your objective observation/evidence that the
before and after programmed brains are different, i.e., that the
brain is not already a computer or computer equivalent?

Neil Nelson n_ne...@pacbell.net

C. White

unread,
May 31, 2000, 3:00:00 AM5/31/00
to

Neil Nelson <n_ne...@pacbell.net> wrote in message
news:3935120A...@pacbell.net...

> result. However, we need to show by objective observation that the
> brain is essentially different from one case to the next, or that
> the black-box that is the brain functions in a manner that exceeds
> any computer. What is your objective observation/evidence that the
> before and after programmed brains are different, i.e., that the
> brain is not already a computer or computer equivalent?

No, that's really not pertinent. What you are doing is taking a human
brain, however it works, and you "program it" like you would a digital
computer. The result of the thought experiment is that formal processes
alone (syntax) do not yield understanding of Chinese.
Something more than formal processes is going on.

The brain is really a fact in this thought experiment and not an assumption.
How it works is not relevant, except that it does not work by adding a bunch
of formal programs to it.

And that's the point of the CRA.
