Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.
Dismiss

Computationalism.

3 views
Skip to first unread message

Neil Rickert

unread,
Jan 21, 1997, 3:00:00 AM1/21/97
to

Here is a preliminary attempt to define computationalism.

Computationalism is the following thesis:

1: The human body receives 'information signals' through a variety of
sensors to be found in the body. Some of these sensors receive
their signals from sources external to the body, while some
receive their signals from internal sensors. These signals are
all connected to a system of neurons, which it will be convenient
to refer to as 'the brain'. We may refer to the collection of
sensors as the sensory system.

2: The same neural system, or brain, produces many output
'information signals', which are connected to various devices such
as muscles and glands. These output devices are mostly internal
to the body, although they may produce externally detectable
effects such as motion or sound. We may refer to the collection
of output neurons, and the devices to which they are connected, as
the motor system.

3: The brain transforms its input 'information signals' in a variety
of ways. These transformations may result in output information
signals or in changes to output signals either at the present or
at future times. The transformations may include changes to
internal structures that will alter the way future signals are
transformed (this is intended to describe any memory recording or
similar activity).

4: The computationalist thesis asserts that there exists a
computational description of the signal transformations which
occur in the brain. Furthermore, the thesis asserts that a
suitable computational description defines everything that is
important about what happens between the sensory system and the
motor system. In particular, the thesis claims that it is
irrelevant whether those transformations are carried out by neural
tissues, or by silicon chips, or some other physical system, as
long as the computational description adequately describes the
signal transformations.

NOTES:

The term 'information signal' deliberately remains undefined.
There is no attempt to define what it is information about. It
suffices to treat it as a signal.

The thesis makes no claim about creating a person. This is because
the thesis is intended to be a scientific one. Talk about creating
a person may be allowed in philosophical circles, but is improper
in scientific discussions, at least in our present state of
knowledge. Note, however, that there is an indirect implication
for the philosophers. If a person's body is operated on so as to
preserve the sensory system and the motor system, and everything
else is replaced by silicon or some other computational system
carrying out the right computation, the implication is that the
same person would exist in the altered body.

The thesis also makes no claims about qualia, again because that
would take it out of the proper bounds of science. Again, as in
the previous note, those of a philosophical bent could draw
plausible implications about qualia.

Some computationalists might claim that the input and output
information could be reduced to that of a stream of ASCII
characters at tty terminal speeds. Indeed, such a reduction in
input and output is suggested by the Turing test. However the
thesis itself does not make any such claim. Presumably those who
would claim that a reduction to tty input is allowed, would not at
the same time suggest that visual qualia could arise when the input
is so limited.


Julian Fogel

unread,
Jan 21, 1997, 3:00:00 AM1/21/97
to

Neil Rickert wrote:
>
> Here is a preliminary attempt to define computationalism.
>
> Computationalism is the following thesis:

Could you briefly describe how computationalism
is different from other materialistic viewpoints,
such as intentionalism or eliminative materialism for example?

thanks,

-Julian

--
Julian Fogel, grad student | I intend this sentence
Department of Computing Science | to be false, but it seems
University of Alberta, Canada | to be true despite me.

Neil Rickert

unread,
Jan 21, 1997, 3:00:00 AM1/21/97
to

In <32E51750...@cs.ualberta.ca> Julian Fogel <jul...@cs.ualberta.ca> writes:
>Neil Rickert wrote:

>> Here is a preliminary attempt to define computationalism.

>>...

>Could you briefly describe how computationalism
>is different from other materialistic viewpoints,
>such as intentionalism or eliminative materialism for example?

I'm probably not the best person to ask about other views.

Some people, Searle for example, would argue that computationalism
cannot explain the ability to have semantics, or intentionality,
which Searle takes as an intrinsic property of persons. Searle talks
about the causal properties of the brain. I take this to mean that
he is arguing that it is unlikely that silicon could be used to
replace neurons without changing anything.

Harnad has argued that computation is the wrong mechanism, and that
the brain should be considered a system of transducers rather than a
computational system.

Eliminative materialists argue that intentionality is a mythical
term, a theoretical entity in a false explanation of cognition. I
think most eliminative materialists are computationalists, but I
suppose one could assume that computation is the wrong mechanism,
while still being an eliminative materialist.


Neil Rickert

unread,
Jan 21, 1997, 3:00:00 AM1/21/97
to

In <32e55945...@news.sprynet.com> 76200...@compuserve.com (JRStern) writes:
>On 21 Jan 1997 10:48:28 -0600, ric...@cs.niu.edu (Neil Rickert)

>wrote:
>>Here is a preliminary attempt to define computationalism.

>>Computationalism is the following thesis:
>[snip]


>> The thesis makes no claim about creating a person. This is because
>> the thesis is intended to be a scientific one. Talk about creating
>> a person may be allowed in philosophical circles, but is improper
>> in scientific discussions, at least in our present state of
>> knowledge.

>Well, I'm glad you added that last. I'd still work on the wording; I
>don't think there's anything "unscientific" about being a person.

To say that talk of creating a person is improper in scientific
discussions is not in any way to say that there is anything
unscientific in being a person. A topic can fail to be scientific,
without being unscientific.


JRStern

unread,
Jan 22, 1997, 3:00:00 AM1/22/97
to

On 21 Jan 1997 10:48:28 -0600, ric...@cs.niu.edu (Neil Rickert)
wrote:
>Here is a preliminary attempt to define computationalism.
>
>Computationalism is the following thesis:
[snip]
> The thesis makes no claim about creating a person. This is because
> the thesis is intended to be a scientific one. Talk about creating
> a person may be allowed in philosophical circles, but is improper
> in scientific discussions, at least in our present state of
> knowledge.

Well, I'm glad you added that last. I'd still work on the wording; I
don't think there's anything "unscientific" about being a person.

I may try to reply to this a bit more carefully, offline.

J.

I3uddha

unread,
Jan 22, 1997, 3:00:00 AM1/22/97
to

ric...@cs.niu.edu (Neil Rickert) wrote:
"...Presumably those who

would claim that a reduction to tty input is allowed, would not at
the same time suggest that visual qualia could arise when the input
is so limited."

Hmmm. Seeing as
(1) I have recently been knocking (my interpretation of) computationalism
and
(2) I have also been vehemently plugging reduction of input to tty as
a valid medium for reaching "critical mass" (vs. chumily accepting the
idea that w/o vision, the poor chips are doomed to run Windows '95)

I would add, for the record, that it would be absurd to suggest that
visual qualia could arise from mere tty input. Qualia and *functionality*
(functionalia?) being two separate--but unequal--goals.

I would also add (in a non-computational manner :-) that I believe
the goal of a functional AI should logically preceed focusing on an AI
capable of detecting/appreciating/enjoying/producing qualia. This,
however, does not mean that I feel qualia should be ignored or abandoned
in the process of attaining the goal of functional AI.

-Bear

Anders N Weinstein

unread,
Jan 22, 1997, 3:00:00 AM1/22/97
to

On 21 Jan 1997 10:48:28 -0600, ric...@cs.niu.edu (Neil Rickert) wrote:
>>Computationalism is the following thesis:
>[snip]
>> The thesis makes no claim about creating a person. This is because
>> the thesis is intended to be a scientific one. Talk about creating
>> a person may be allowed in philosophical circles, but is improper
>> in scientific discussions, at least in our present state of
>> knowledge.

I can't help taking this personally :-).

I would remind you there is an activity of explication, of taking
intuitive concepts and trying to replace them my more explict or
precisely formulated criteria. Turing's test was one such attempt. The most
successful explication I know of was the explication of the intuitive idea
of an effectively computable function.

We will not get that kind of precision for the concept of persons, but
that does not mean ruling out the attempt. There are many reflections
on what is required for personhood that might be relevant to stating
requirements for an AI system -- look at Dennett's "Conditions of
Personhood" in his _Brainstorms_ for one.

Just to give one example, consider the idea that persons are
distinguished by the capacity for higher-order thoughts (beliefs about
their own beliefs) or higher order desires, desires about what to
value (as claimed by Harry Frankfurt).

Neil Rickert

unread,
Jan 22, 1997, 3:00:00 AM1/22/97
to

In <5c5p81$q...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>On 21 Jan 1997 10:48:28 -0600, ric...@cs.niu.edu (Neil Rickert) wrote:

>>>Computationalism is the following thesis:
>>[snip]
>>> The thesis makes no claim about creating a person. This is because
>>> the thesis is intended to be a scientific one. Talk about creating
>>> a person may be allowed in philosophical circles, but is improper
>>> in scientific discussions, at least in our present state of
>>> knowledge.

>I can't help taking this personally :-).

No need for that. I wasn't attacking anybody.

>I would remind you there is an activity of explication, of taking
>intuitive concepts and trying to replace them my more explict or
>precisely formulated criteria. Turing's test was one such attempt. The most
>successful explication I know of was the explication of the intuitive idea
>of an effectively computable function.

>We will not get that kind of precision for the concept of persons, but
>that does not mean ruling out the attempt.

I don't have any problem with that. I'm not trying to discourage
attempts. It might even be that future work in AI could lead to
attempts to give a scientific account of 'person'. I just don't
think we are presently at the point where we can use the
philosophical concept of 'person' as a scientific one. We can of
course have a biological concept of person, as a living member of
homo sapiens.

>Just to give one example, consider the idea that persons are
>distinguished by the capacity for higher-order thoughts (beliefs about
>their own beliefs) or higher order desires, desires about what to
>value (as claimed by Harry Frankfurt).

I don't have any particular problems with that. But do keep in mind
that we are not yet at the point where we can give a scientific
account of thoughts.


Neil Rickert

unread,
Jan 22, 1997, 3:00:00 AM1/22/97
to

>>A topic can fail to be scientific, without being unscientific.

>Frankly, I don't see how a topic can be either.

I would have thought that art and ethics are both topics which are
neither scientific nor unscientific.


JRStern

unread,
Jan 23, 1997, 3:00:00 AM1/23/97
to

>A topic can fail to be scientific, without being unscientific.

Frankly, I don't see how a topic can be either.

Specific arguments, perhaps.

J.


Neil Rickert

unread,
Jan 23, 1997, 3:00:00 AM1/23/97
to

In <fritzmcd-230...@rsc037.rutgers.edu> frit...@eden.rutgers.edu (Fritz J. McDonald) writes:

>In article <5c3dpl$i...@ux.cs.niu.edu>, ric...@cs.niu.edu (Neil Rickert) wrote:

>> Eliminative materialists argue that intentionality is a mythical
>> term, a theoretical entity in a false explanation of cognition. I
>> think most eliminative materialists are computationalists, but I
>> suppose one could assume that computation is the wrong mechanism,
>> while still being an eliminative materialist.

>When Stich was an eliminative materialist, he was a computationalist.
>However, it's not altogether clear that Stich has a positive theory now
>(c.f. _Deconstructing the Mind_).

Yes, I did have trouble working out what Stich's position is in that
book.

> The Churchlands aren't exactly
>computationalists in the usual sense of the words -- they take (ahem,
>usually incapable of cognitive processes) neural network to be ideal for
>implementation (or far-reaching paradigm replacement, or whatever) because
>they are supposedly more biologically possible.

The took some sort of computational position, admittedly NN based, in
their debate with Searle in the Jan 90 Scientific American. (And I
share your skepticism about some of the assumptions made about NNs.)

> By the way,
>intentionality isn't _really_ what is considered a posit so much as the
>theoretical posits of folk psychology. Intentionality is not exactly
>coextensive with folk psychological posits. I'm sure there's some place
>for an eliminative materialist whose a realist about the propositional
>attitudes, but eliminative materialism is just a bad neighborhood of
>logical space anyway.

I appreciate the comments.


Neil Rickert

unread,
Jan 23, 1997, 3:00:00 AM1/23/97
to

>> I just don't
>>think we are presently at the point where we can use the
>>philosophical concept of 'person' as a scientific one. We can of
>>course have a biological concept of person, as a living member of
>>homo sapiens.

>So, biological concepts are not scientific concepts?

Not at all. I have no problem with the biological concept of
'person' as a scientific concept. But when 'person' is brought up in
this newsgroup, that is not what is normally intended. When
Bringsjord ("What Robots can and can't be") argued that robots will
eventually behave like persons but will not be persons, he was not
making a biological statement.


Fritz J. McDonald

unread,
Jan 23, 1997, 3:00:00 AM1/23/97
to

In article <5c3dpl$i...@ux.cs.niu.edu>, ric...@cs.niu.edu (Neil Rickert) wrote:


>
> Eliminative materialists argue that intentionality is a mythical
> term, a theoretical entity in a false explanation of cognition. I
> think most eliminative materialists are computationalists, but I
> suppose one could assume that computation is the wrong mechanism,
> while still being an eliminative materialist.

When Stich was an eliminative materialist, he was a computationalist.
However, it's not altogether clear that Stich has a positive theory now

(c.f. _Deconstructing the Mind_). The Churchlands aren't exactly


computationalists in the usual sense of the words -- they take (ahem,
usually incapable of cognitive processes) neural network to be ideal for
implementation (or far-reaching paradigm replacement, or whatever) because

they are supposedly more biologically possible. By the way,

JRStern

unread,
Jan 24, 1997, 3:00:00 AM1/24/97
to

>I would have thought that art and ethics are both topics which are
>neither scientific nor unscientific.

May I refer you to Mr. Longley?

J.

JRStern

unread,
Jan 24, 1997, 3:00:00 AM1/24/97
to

Well, I just can't parse this at all.


On 23 Jan 1997 20:53:28 -0600, ric...@cs.niu.edu (Neil Rickert)
wrote:

Philip Jackson

unread,
Jan 24, 1997, 3:00:00 AM1/24/97
to Greg Stevens, pjac...@ic.net

Greg Stevens wrote:
>
> JRStern (76200...@compuserve.com) wrote:
>
> : > I just don't

> : >think we are presently at the point where we can use the
> : >philosophical concept of 'person' as a scientific one. We can of
> : >course have a biological concept of person, as a living member of
> : >homo sapiens.
>
> : So, biological concepts are not scientific concepts?
>
> Sometimes I wonder how much of usenet conversation is a product of
> not reading carefully.
>
> Speaker A:
> I don't think that(
> We can use A as B, where
> A==a philosophical concept of 'person'
> B==a scientific concept of 'person'
> )
> I do think that(
> We have C, where
> C==a biological concept of 'person'
> )
>
> Somehow, from this, Speaker B concludes that Speaker A means:
>
> I do think that(
> No C are B, where
> C==biological concepts (context: of 'person')
> B==scientific concepts (context: of 'person')
> )
>
> Now when someone comes up with a conclusion that can perform
> that kind of deduction, we might not have artifical
> *intelligence*, but...
>
> Greg Stevens

Does this mean that Speaker A is referring to himself as a
"philosophical concept of 'person'" and to Speaker B as a "scientific
concept of 'person'", and that Speaker B is affirming it is not a
"biological concept of 'person'"? :-)

Cheers,

Phil Jackson
--------------------------------------------------------------------
"...for the word is the sole sign and the only certain mark of the
presence of thought hidden and wrapt up in the body..." -- Descartes
--------------------------------------------------------------------
Standard Disclaimers. <pjac...@ic.net>

Alexander Schatten

unread,
Jan 25, 1997, 3:00:00 AM1/25/97
to

> [snip]

> The term 'information signal' deliberately remains undefined.
> There is no attempt to define what it is information about. It
> suffices to treat it as a signal.

> [snip]

> Some computationalists might claim that the input and output
> information could be reduced to that of a stream of ASCII
> characters at tty terminal speeds. Indeed, such a reduction in
> input and output is suggested by the Turing test. However the

> thesis itself does not make any such claim. Presumably those who


> would claim that a reduction to tty input is allowed, would not at
> the same time suggest that visual qualia could arise when the input
> is so limited.
>

there is one general problem arising IMO considering your NOTES: if
you say:

> The term 'information signal' deliberately remains undefined.

in connection with

> Some computationalists might claim that the input and output
> information could be reduced to that of a stream of ASCII
> characters at tty terminal speeds. Indeed, such a reduction in
> input and output is suggested by the Turing test. However the
> thesis itself does not make any such claim.

there comes up the question, if discussion on THIS basis is useful at
all. because leaving these points open, EVERYTHING can be handled with
this theory and furtermore, nothing can be explained (at least from
the technical point of view, maybe some philosophical implications
could be seen, but in the moment I guess also these implications can
not be very significant with these restrictions). so what for should
this definition be at last?

am i clear? do i see something wrong??

best regards

-alex
==========
Dipl.Ing. Alexander Schatten
Institute for General Chemistry - Vienna University of Technology
email: asc...@fbch.tuwien.ac.at - http://echm10.tuwien.ac.at/inst/as
==========
Tel.: +43 1 914-29-84
Gallitzinstr.7-13/7/7 / 1160 Vienna / Austria / Europe
==========

Neil Rickert

unread,
Jan 25, 1997, 3:00:00 AM1/25/97
to

In <32ea2d29...@news.tuwien.ac.at> asc...@fbch.tuwien.ac.at (Alexander Schatten) writes:
(Alexander is responding to an earlier post of mine).

>there is one general problem arising IMO considering your NOTES: if
>you say:

>> The term 'information signal' deliberately remains undefined.

>in connection with

>> Some computationalists might claim that the input and output
>> information could be reduced to that of a stream of ASCII
>> characters at tty terminal speeds. Indeed, such a reduction in
>> input and output is suggested by the Turing test. However the
>> thesis itself does not make any such claim.

>there comes up the question, if discussion on THIS basis is useful at
>all. because leaving these points open, EVERYTHING can be handled with
>this theory and furtermore, nothing can be explained (at least from
>the technical point of view, maybe some philosophical implications
>could be seen, but in the moment I guess also these implications can
>not be very significant with these restrictions). so what for should
>this definition be at last?

I'm not quite clear what you mean by "Everything can be handled" and
"nothing can be explained." These sound contradictory.

>am i clear? do i see something wrong??

Perhaps I was unclear. I was offering a tentative definition of
computationalism, for the purposes of further discussion. I was not
trying to give a complete theory of cognition. You could say that I
was trying to answer the question:

Among possible theories of cognition, how would you characterize
those that could be called "computational"?

Since there is a range of ideas as to what constitutes
computationalism I tried to answer this rather broadly.


Jim Balter

unread,
Jan 26, 1997, 3:00:00 AM1/26/97
to

I3uddha wrote:

> I would add, for the record, that it would be absurd to suggest that
> visual qualia could arise from mere tty input. Qualia and *functionality*
> (functionalia?) being two separate--but unequal--goals.

Since "visual qualia" can arise from drumming on people's backs,
why is it absurd to suggest that they could arise from "mere"
tty input?

Didn't we have this discussion about "absurd" already?
Something about the Lorentz trnasformations, I think.

--
<J Q B>

I3uddha

unread,
Jan 26, 1997, 3:00:00 AM1/26/97
to

I3uddha wrote:
> I would add, for the record, that it would be absurd to suggest that
> visual qualia could arise from mere tty input. Qualia and
*functionality*
> (functionalia?) being two separate--but unequal--goals.

Jim Balter <j...@netcom.com> rewrote:


>Since "visual qualia" can arise from drumming on people's backs,

*people's*, I believe is the key word, here. People start out
(as previously mentioned) with a nice big chunk of the visual
world already under their belt(s). Unless you were referring
to qualia experienced by *people* who have been blind since
birth. In that case, can their "visual qualia" really be
compared to the visual qualia of people who see?

>why is it absurd to suggest that

*absurd* in the pedestrian, non-scientific way that you allowed
for. That (unfortunately) is the only way I know.

>they could arise from "mere" tty input?

"mere" in the sense of "the only input there is".

>Didn't we have this discussion about "absurd" already?

Yes, dad.

>Something about the Lorentz *trnasformations*, I think.
No pun intended?
But seriously. When the conversation turns to topics
mathematical, I must return to my castle and crawl
back into a coffin filled with the soil of my homeland,
Transformania! :-[

-Bear

David Longley

unread,
Jan 26, 1997, 3:00:00 AM1/26/97
to

In article <32e819f2...@news.sprynet.com>, JRStern
<76200...@compuserve.com> writes

Exactly - at least this shows an appreciation of what the issues
actually are.

Science is all a matter of explictly giving the variables in ones
universe of discourse unequivocal truth values (which then allow
quantification in). The arts, and much else besides is fine as it is but
is not science for the simple reason that one does not bother to even
try to establish and sustain such conditions.

The idioms of propositional attitude along with other intensional idioms
such as the modal operators and subjunctive conditional do not belong in
the domain of science because they are resistant to the above
strictures. A futher sine qua non is that scientific statements are
potentially testatble, which I take to be equivalent to saying that they
are potentially falsifiable.


Having said that, I have given up expecting much in the way of rational
exchange on this subject from several prolific writers within this
newsgroup. This one is for those who read but don't post.....
--
David Longley

JRStern

unread,
Jan 27, 1997, 3:00:00 AM1/27/97
to


My ISP being what it is, the Stevens posting hasn't shown up yet.
FWIW, B and C are related outside this newsgroup, ya know.

J.

Alexander Schatten

unread,
Jan 27, 1997, 3:00:00 AM1/27/97
to

i just wanted to point out, that not defining these points more
precisely, the whole story tends to become useless, as the basis of a
theory are a set of definitions, but these two are so loose, that
everything fits, and nothing can be explained. this was my opinion in
other words.

-alex
==========
Dipl.Ing. Alexander Schatten
Institute for General Chemistry - Vienna University of Technology

email: asc...@fbch.tuwien.ac.at - http://qspr03.tuwien.ac.at/~aschatt

Neil Rickert

unread,
Jan 28, 1997, 3:00:00 AM1/28/97
to

In <32ed111...@news.tuwien.ac.at> asc...@fbch.tuwien.ac.at (Alexander Schatten) writes:

>i just wanted to point out, that not defining these points more
>precisely, the whole story tends to become useless, as the basis of a
>theory are a set of definitions, but these two are so loose, that
>everything fits, and nothing can be explained. this was my opinion in
>other words.

Obviously materialism is useless, because everything fits and nothing
can be explained. But that is actually false. For substance dualism
does not fit, and there are still believers in substance dualism. It
turns out that everything fits, but only if you take materialism to
be a starting axiom.

Likewise you criticize my description of computationalism, because
you say that everything fits and nothing can be explained. And, of
course, if you take computationalism as a starting axiom, then that
is right. But not every materialist takes computationalism as a
starting axiom. Penrose's ideas based on quantum gravity working in
microtubules clearly do not fit.


Alexander Schatten

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

On 28 Jan 1997 10:45:04 -0600, ric...@cs.niu.edu (Neil Rickert)
wrote:

>In <32ed111...@news.tuwien.ac.at> asc...@fbch.tuwien.ac.at (Alexander Schatten) writes:

i´m sorry, yet you didnt catch the meaning of my postings. maybe i am
not clear enough. i dont criticise the attempt in general, but the
loose, or open definition - or better not definition - of information
signal and input/output.

my critics refers first of all ONLY to these points!!

maybe we will yet come together??

Neil Rickert

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

>i´m sorry, yet you didnt catch the meaning of my postings. maybe i am
>not clear enough. i dont criticise the attempt in general, but the
>loose, or open definition - or better not definition - of information
>signal and input/output.

Ok. At least I have a better idea of your objection. If you have a
better idea as to how to define these terms, you are welcome to
contribute them to the discussion.

I'm inclined to think that we cannot give a crisp definition of these
terms. Physicists are unable to define 'length' or 'time', and must
take these as undefined concepts. They can define 'mass' only if
they take force as an undefined concept. What we know of these basic
concepts comes from their roles in physics, rather than from any
definitions. And these roles have changed over time, and are
noticably different for relativistic physics than for Newtonian
physics. I think for information sciences we are bound to have the
same problem, and be unable to define 'information'. At best, we
will understand the term from its changing role in our science.

As a first approximation, we might think of 'information' as that
which informs. But if we take it as that which informs an AI system,
then that is no definition at all. If we take it as that which
informs humans, then what do we do about constructing an autonomous
robot to explore Mars, where there might be no humans around to pass
judgement on what is to count as information.


Neil Rickert

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

In <32f32718...@news.tuwien.ac.at> asc...@fbch.tuwien.ac.at (Alexander Schatten) writes:

>do you finally consider information as something describeable in terms
>of mathematics or some kind of other language.

Desribable in terms of mathematics? Surely not. Only mathematical
entities are describable in terms of mathematics. We cannot describe
force, length, or temperature in terms of mathematics, but we can use
mathematics to compute with them.

> that means, this
>information could be "recorded" in some way and "replayed"
>identically. or do you believe, it is not possible to control this
>information-stream (at least theoretically), and what could be the
>"not controlled" rest of the information?

Ok. That is really a question about representation, and whether
information is representable.

>this seems to me a very basic question. if we leave a part of the
>information open as undescribeable, we can come into computational
>problems, otherwise, this theory could tend to be materialistic.

I don't see that being undescribable is that much of a problem.
Being unrepresentable would be a problem. I take it that we receive
information represented as light waves, pressure waves, and in
various other forms. All of these are forms of representation. Then
any information we receive would have a neural representation, or
else we could not receive it. As far as I can tell, you could not
have information without representation.

Perhaps you are asking the following questions:

(1) Is there a fixed material representation system, capable of
representing all information?

(2) Is there a suitable finitely specifiable representation system
capable of representing all information?

(3) Is there a suitable finitely specifiable representation system
for all information, such that the specification is simple enough
to be comprehended by human mathematicians.

At least to a first approximation, I think the answer to (1) is YES.
The universe itself is a material system, and its state could be said
to represent all information. But that by itself is not very
useful. In order to do mathematics, we work in mathematical systems
defined by a finite number of symbols written on paper. So you
really need something like question (2) to clarify this.

I'm inclined to think that the answer to (3) is NO. For all
practical purposes, that means we should take the answer to (2) as
being NO.

>at least one should discuss how coding of this information should be
>possible - becaus otherwise no algorithmic use of this "information"
>would be possible.

That is where I disagree with the direction AI has taken. In effect,
AI has taken for granted that the answer to (3) is YES, and has
sought to find a suitable representation system. We can take much of
the talk about KR (knowledge representation) systems to be based on
just such an assumption. On the assumption that a suitable finitely
specified representation system is available, AI has assumed that the
role of computation is to transform those representations.

But I don't see that you need an affirmative answer to (3). If (3)
is answered in the negative, that would mean that we could never find
a suitable representation system to represent all information. But
no organism needs to represent all information, and thus no AI
system should need to represent all information.

What one really needs is a way of representing a large finite amount
of information, chosen to allow mainly good decision making. For
this one needs a plastic representation system, so that the
representation system itself can be changed to alter the types of
information that it can represent. One needs algorithms to evaluate
the quality of information being received and represented, and
algorithms to modify the representation system so as to make it
available for the most useful types of information.


Oliver Sparrow

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

De la Rochfoucauld (whom I have mispelled and am about to
misquote) said something to the effect that nationhood was as
indefinable but as unmistakable as an odour. In soemthing of the
same qualitative terms, what is the odour of an AI?

I have a model of a cooling brick on my PC. It embodies a lot of
knowledge. I can set it all sorts of odd initial conditions. It's a model,
but is it AI? If it is, what does that make this mailer?

I have a model of the UK economy on my PC. As above, plus a great
deal of serial contingency. Is it AI?

I have a bunch of hexagons, tied together by software elastic, connecting
concepts that teams of people have tabled about their concerns. Topics
pop up when related issues are tabled. It's decision support, but is it AI?

I have a thingie which takes you down a b-tree in respect of marketing
problems, pops up advice and choice, the buzzes off down more b-trees.
Yes? Nah.

OK. So there are these evolving creatures in my screen saver. They compete.
They breed. Nah.

So......... to criteria:

1: The system must embody a model or set of models about something with which
it is to interact. It is likely to do this is a hierarchical form, in which the elements
at play cross link at all levels in the hierarchy and across distinct hierarchies.
High level cross linkages will look like symboloc systems, low level cross links
(and systems within most hierachies at any given layer of it) will look like
connectionism.

2: The system must abstract knowledge about that system and use it to modify
its internal rule base. It will be driven to do this by a set of criteria which are in
part implicit in its structure, in part applied by its designers, in part evolved to
meet the conditions which it encounters. Two key features of this system:

2.1 The abstraction system is driven to focus on what matters.
2.2 The system(s) which decide what matters reach widely in the learning
and focusing apparatus, setting the objective function for the system as
a whole.

3: Changes in the rule base must be of two forms:

3.1 Weights and tweaks, new rules which fit within the syntactical structure
of the model.
3.2 Model busters: syntactical forms which go beyond the model. Two chief
forms are:
3.2.1 Reconstruction of the drivers in (2).
3.2.2 Reconstruction of the abstractions: the big symbolic structures
arising in (1).

4: The abstract "hierarchy towers" - pyramids of ordering - are unified by two
processes:
4.1: symbolic working through of declared truths facts within the currently-
dominant version of the world model and
4.2: non-symbolic working through, 'value' based or triggered by dissonance.

Not a bad list for a first try. Note how difficult this gets as one moves towards
the higher numbers; and note how much one needs the higher numbers in order
to carry out even (1).

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Carl B. Frankel

unread,
Feb 10, 1997, 3:00:00 AM2/10/97
to

Neil Rickert wrote:
>
> I'm inclined to think that we cannot give a crisp definition of these
> terms.
[snip]

>
> As a first approximation, we might think of 'information' as that
> which informs. But if we take it as that which informs an AI system,
> then that is no definition at all. If we take it as that which
> informs humans, then what do we do about constructing an autonomous
> robot to explore Mars, where there might be no humans around to pass
> judgement on what is to count as information.

I think we get some mileage by starting from basic notions in
information theory (and meta-theory). Without worrying right away
about defining a finite lexicon of discrete units of information,
I think it is still meaningful to assert, a la Bateson, that
information is difference which makes a difference to the
information processor. (Difference that makes no difference is
either undetectable difference or detectable difference that is
filtered at the beginning of the processing chain as noise.)
What makes detectable difference into signal is that the
information processor treats it as such. My survival may depend on
my responding to some particular element of my visual field;
however, if I'm blind, or am fixated on other elements of my
visual field, what "should" be signal is, in the former case
undetectable and in the latter case noise. I suffer lethal
disorganization as a result of critical differences not making
a difference within my information processing regimen; however,
what is information for a given processor is separable from the
question of what "should" be information in the service of
adaptively competent information processing. None of can hear a
dog whistle, and so we cannot treat its sound as containing
information, even though our dogs can learn to come when they
hear the whistle. If our environment changes so survival requires
the ability to hear signals above 40KHz, then dogs survive and
we go the way of the dinosaurs (unless we technologically augment
our hearing).

A stream of such detectable differences can then be analyzed for
redundancies in the stream, thereby recognizing patterns, organization,
in the stream. Such patterns are useful in that they allow three
adaptively vital forms of guessing beyond the immediate contents
of the stream:
(1) Extrapolative Guessing about aspects of the present not presented
in the stream, i.e. I am guessing that I am responding to a
contribution to this newsgroup made by another human being, even though
my input is limited to the text delivered by news-server. (Given other
contents of my input stream telling me about the current state of
technology, I deem this a pretty safe guess. Within my lifetime, it
may not remain so.)
(2) Diagnostic Guessing about problems in the present, based upon
symptoms presented in the stream. For example, I guess, when my child
has a fever, that something is medically wrong and that seeking
medical advice and treatment is adviseable. (In fact, my child may
just have finished a bowl of piping hot chicken soup, causing her
to flush and to spike the oral thermometer.)
(3) Predictive Guessing about the future, based upon current and
previous contents of the stream, even though the future does not
present itself to the stream until it is the present.

Moreover, by assessing the goodness-of-fit of the pattern being
applied to analyze the stream, it is possible to guess how good
the analysis is, and therefore how much error-checking needs to be
done. Since entropy, and the potential error-checking burden,
grow exponentially with complexity, such Confidence Guessing
is essential to contain the processing burden from error-checking.
I believe we experience this particular form of guessing emotionally
as compellingness. I would go further and say that we refer to
a guess as knowledge when we find something so compelling that
we do little or no error-checking. Having such knowledge is
adaptively necessary, given our processing bandwidth limitations:
We simply cannot check for all possible errors. However, given that
we live in a continuously changing and pervasively uncertain open
system, we are still guessing about anything that is not tautological,
no matter how compelling our knowledge.

These notions are broad enough to cover humans, amoeba, eco-systems,
word-processors and robots. For that reason, they do not say how
each of these might be different for the others. However, to go back
to the question at the top of the thread, viz. What is
computationalism, I would guess that a computationalist would
move forward from these notions by asking how the information
processing of each is different from the others.

Regards,

Carl F.

======================================================================
What's Punishing Gets Priority. || Carl B. Frankel
|| Consultant
What's Rewarded Gets Repeated. || Organizational Measurement & Eng.
|| 785 Burnett Avenue No 2
What's Measured Gets Managed. || San Francisco, CA 94131-1417
|| Tel. 415-641-8028
What's Noticed Gets Narrated. http://www.ome1.com ca...@ome1.com
======================================================================

Neil Rickert

unread,
Feb 10, 1997, 3:00:00 AM2/10/97
to

In <32FFA8...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:
>Neil Rickert wrote:

>> As a first approximation, we might think of 'information' as that
>> which informs.

>I think we get some mileage by starting from basic notions in
>information theory (and meta-theory).

I haven't found information theory all that helpful. Its notion of
'information' seems to lead to traditional symbolic AI, which
evidently has failed to adequately explain human cognition.

Under information theory, what counts as information is determined by
the sender. Anything else is noise. So the problem of information
theory is for the sender to encode the information in such a way that
a potential receiver could separate the transmitted information from
any noise.

It seems to me that we need a different notion, where what is to
count as 'information' is to be determined by the receiver. Possibly
a sender is deliberately sending misinformation, and the receiver
must try to sort it out, and reject the deliberate misinformation as
noise, while extracting whatever information remains.

>I think it is still meaningful to assert, a la Bateson, that
>information is difference which makes a difference to the
>information processor.

That sounds reasonable enough as a start, and would seem to fit the
idea that what is information is determined by the receiver.

>A stream of such detectable differences can then be analyzed for
>redundancies in the stream, thereby recognizing patterns, organization,
>in the stream. Such patterns are useful in that they allow three
>adaptively vital forms of guessing beyond the immediate contents
>of the stream:

That sounds like a good beginning.


Oliver Sparrow

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

Information is context dependent. "Information theory"
in the Shannon mode is about bandwidth. You can get away
with less bandwidth in direct proportion to the focus
with which you place your conduit and the care with which
you construct the model which the incoming data will inform.
The stethescope *there* on the safe door and the single
click conveys huge information to the safe cracker, over
thin bandwidth. Queen Mary used to have a footman to hold
the telephone eight feet from her, at which she shouted.
The required bandwidth was high, for the focus was poor.

In thiking about information systems, therefore, we have to
model the emitter, the conduit and the recipient. At one level,
this is an artefact which we project on the world. It is for
our semantic conventience: this is how we describe what is
going on. At another level, however, perhaps there are instances
in which the overall system, emitter-conduit-recipient, can
be seen as an agency, something which has properties (which
has self-optimised to have characteristics) which add to
what was there before. The fact of there being a system
(that is, in which some agency couples the parts together in a
task of change, perhaps optimisation) implies that the system
migrates from its initial configuration and something new is
born.

If that is true, then all sorts of things becoem true. If it
is an artefact of our observer status, a projection, then
we are determined clay.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Anders N Weinstein

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

In article <5donur$3...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <32FFA8...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:
>
>Under information theory, what counts as information is determined by
>the sender. ...

>
>It seems to me that we need a different notion, where what is to
>count as 'information' is to be determined by the receiver. Possibly

The slogan Frankel quoted from Bateson -- information is difference
that makes a difference -- would also seem to point to a
receiver-centered concept.

Of course "making a difference" could mean many things. To me it
naturally suggests the idea that the receiver leads a certain sort of
life in which the information *matters* (is relevant). In this sense,
perhaps nothing "makes a difference" to a computer, although the
information we cause to be stored in computers can make a big
difference to *us*.

Others will probably want to interpret it more broadly.

Carl B. Frankel

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

Neil Rickert wrote:
>
> In <32f32718...@news.tuwien.ac.at> asc...@fbch.tuwien.ac.at (Alexander Schatten) writes:
[snip]
[snip]

> That is where I disagree with the direction AI has taken. In effect,
> AI has taken for granted that the answer to (3) is YES, and has
> sought to find a suitable representation system. We can take much of
> the talk about KR (knowledge representation) systems to be based on
> just such an assumption. On the assumption that a suitable finitely
> specified representation system is available, AI has assumed that the
> role of computation is to transform those representations.

I don't think that the term 'information' is ordinarily used this
way in discussions of information processing.

Information is usually a unit of processable difference, that is,
difference that can be detected and transformed according to an
algorithm that embodies the semantics being ascribed to the
difference. The unit of difference can be processed by many
different algorithms, each ascribing different and perhaps even
incompatible semantics. By the same token, multiple processors
using similar processing algorithms can de facto agree on semantics,
or using similar epistemological algorithms (the way scientists do)
can come to agree on the semantics of the differences they
all detect.

If I understand the thrust of your post correctly, it seems to me
that you are using 'information' to mean the universe of all
differences in themselves, facts of nature as a sort of ding an
sich. You are then asking, is the subset of these facts that
we can detect, both with our sense organs and our instrumentation,
fully redundant with the universe of all facts, or at least can
it be, so that we would be capable of (a) detecting the stream of
light, sound, etc, (b) encoding it into appropriate neural impulses,
and then (c) analyzing the stream of neural impulses with some
appropriate semantics (and epistemology) so as to recover the
complete universe of facts. (I have another post on this thread
about the difference between information and the redundancies
in information streams from which we analyze organization.)

If the universe under consideration is a closed system, then perhaps
it might be reasonable to model the problem along these lines.
However, the severe bandwidth limitations of human information
processing alone militate against treating our context as a closed
system. Once the system is open, then we have an inherently
limited and noisy view of a system which is pervasively uncertain
and constantly changing.

Although I think only some of AI is concerned with forms of
knowledge representation that can be mapped onto mathematical
representation, I do think AI has taken a wrong turn to the
extent that it embeds positivistic assumptions about the nature
of the world and how we come to know about it and adapt to it.
Knowledge systems making such assumptions do very well in closed
systems--and many important real-world problems can be engineered so
as to have fairly stable local closures, making such systems powerful
solutions in those contexts--but do poorly in open contexts.

Regards,

Carl F.

======================================================================
What's Punishing Gets Priority. || Carl B. Frankel

|| Managing Consultant

Neil Rickert

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

In <5dqkkp$h...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5donur$3...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>>In <32FFA8...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:

>>Under information theory, what counts as information is determined by
>>the sender. ...

>>It seems to me that we need a different notion, where what is to
>>count as 'information' is to be determined by the receiver. Possibly

>The slogan Frankel quoted from Bateson -- information is difference
>that makes a difference -- would also seem to point to a
>receiver-centered concept.

Yes, I agree.

>Of course "making a difference" could mean many things.

The simplest meaning might be "affects behavior."

> To me it
>naturally suggests the idea that the receiver leads a certain sort of
>life in which the information *matters* (is relevant). In this sense,
>perhaps nothing "makes a difference" to a computer, although the
>information we cause to be stored in computers can make a big
>difference to *us*.

Well, sure, you can say that "makes a difference" means "makes a
difference to the person", and then since you don't consider the
computer a person you rule out that anything makes a difference. I
think it better to look at behavioral change as the criterion.


Neil Rickert

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

In <416017...@chatham.demon.co.uk> Oliver Sparrow <oh...@chatham.demon.co.uk> writes:

>Information is context dependent.

Right. And in a way, that makes it subjective. Two people might be
in the presence of the same signals, but because of different
contextual assumptions they will pick up different information from
those signals. What one considers information, the other might
consider noise, and vice versa.

> "Information theory"
>in the Shannon mode is about bandwidth.

Right. Or at least `bandwidth' is typical of the concerns that
Shannon was investigating.

>In thiking about information systems, therefore, we have to
>model the emitter, the conduit and the recipient.

In thinking about human cognition, the emitter becomes the whole of
reality, and it is a little tricky to model that.


David Longley

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

In article <5dqkkp$h...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:
>
> Others will probably want to interpret it more broadly.
>

My question is..... but what is the point of "different
interpretations" unless they lead to empirically testable, or
alternative predictions?

Reading so many posts in this newsgroup just seems to come down
to "appreciating" alternative points of view (which quite
literally could come down to unique world-lines)!!
--
David Longley


Neil Rickert

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

In <3300A4...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:
>Neil Rickert wrote:
[much about information]

>I don't think that the term 'information' is ordinarily used this
>way in discussions of information processing.

You are quite right. The way 'information' is ordinarily used does
not seem to be helpful for understanding human cognition.

>Information is usually a unit of processable difference, that is,
>difference that can be detected and transformed according to an
>algorithm that embodies the semantics being ascribed to the
>difference. The unit of difference can be processed by many
>different algorithms, each ascribing different and perhaps even
>incompatible semantics. By the same token, multiple processors
>using similar processing algorithms can de facto agree on semantics,
>or using similar epistemological algorithms (the way scientists do)
>can come to agree on the semantics of the differences they
>all detect.

Yes, that is the traditional use of 'information'. It attempts to
define 'information' in a totally objective manner. AI systems built
with such a notion of 'information' should all have exactly identical
behavior in the same circumstances. They could be described as
carrying out the objectives of the programmer, rather than having any
objectives of their own. This is the traditional direction of
symbolic AI. It produces something which does not look much like
human cognition.

In short, I think that notion of 'information' is useless for the
problem at hand. So I am interested in a more subjective notion of
'information'.

>If I understand the thrust of your post correctly, it seems to me
>that you are using 'information' to mean the universe of all
>differences in themselves, facts of nature as a sort of ding an
>sich.

I don't know what it means to refer to "the universe of all
differences in themselves." I most certainly was not talking about
"facts of nature". As far as I can tell, we usually restrict the
term "fact" to linguistically represented information, and I don't
want to be limited by any such restriction.

You seem to be attempting to give a clear objective definition of
"information", whereas I am using the term as an undefined subjective
concept. By analogy, I'll note that even physics begins with some
basic undefinable concepts such as 'length' and 'force'.

In your attempted formulation of 'information' you had to rely on an
undefined and probably undefinable notion of 'fact'. I don't want to
start with an undefined 'fact' and build a definition of
'information' on top, because I think there is much about the way we
use our concept of 'fact' which blocks us from resolving the problems
of 'cognition'. I need a different starting point, so I would rather
begin with an undefined 'information', and on that basis determine
how we come to decide what we shall call a 'fact'.

> You are then asking, is the subset of these facts that
>we can detect, both with our sense organs and our instrumentation,
>fully redundant with the universe of all facts, or at least can
>it be, so that we would be capable of (a) detecting the stream of
>light, sound, etc, (b) encoding it into appropriate neural impulses,
>and then (c) analyzing the stream of neural impulses with some
>appropriate semantics (and epistemology) so as to recover the
>complete universe of facts.

No, I really am not asking that. Partly, I am not asking that,
because I not talking about any universe of facts. And partly, I am
not asking that because I take it as self evident that the
information you can detect is likely to be different from the
information that I can detect.

>If the universe under consideration is a closed system, then perhaps
>it might be reasonable to model the problem along these lines.

That is a big "if", and I am doubtful. In any case, the universe is
so big compared to our ability to deal with it, that for all
practical purposes we should not consider it a closed system.

> Once the system is open, then we have an inherently
>limited and noisy view of a system which is pervasively uncertain
>and constantly changing.

Right. And that is why the traditional objective notion of
"information" seems so poorly suited to dealing with the problem at
hand.

>Although I think only some of AI is concerned with forms of

>knowledge representation ...

I should perhaps be clearer that I have been talking about
information representation, rather than knowledge representation. In
my opinion, 'information' and 'knowledge' are two very different
things.

> ... that can be mapped onto mathematical


>representation, I do think AI has taken a wrong turn to the
>extent that it embeds positivistic assumptions about the nature
>of the world and how we come to know about it and adapt to it.

One of those positivistic assumptions is that 'knowledge' is a
species of 'information'. You are right that AI makes positivistic
assumptions. Psychology also makes positivistic assumptions.
Philosophy is deeply tied to positivistic assumptions, even among
many philosophers who disavow positivism.

>Knowledge systems making such assumptions do very well in closed
>systems--and many important real-world problems can be engineered so
>as to have fairly stable local closures, making such systems powerful
>solutions in those contexts--but do poorly in open contexts.

In short, they solve -- or at least attempt to solve -- the wrong
problem. I agree.


Anders N Weinstein

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <855700...@longley.demon.co.uk>,

David Longley <Da...@longley.demon.co.uk> wrote:
>In article <5dqkkp$h...@usenet.srv.cis.pitt.edu>
> ande...@pitt.edu "Anders N Weinstein" writes:
>>
>> Others will probably want to interpret it more broadly.
>
>My question is..... but what is the point of "different
>interpretations" unless they lead to empirically testable, or
>alternative predictions?

Your simplistic (and very non-Quinean) philosophy of science fails
to give any importance to the formation of the right explanatory concepts.
Thus you are, among other things, completely blind to the value of
conceptual clarification, even to working scientists.

In particular your crude philosophy leaves no room for the process of
cultivating a new vocabulary, including new observation terms.

For example, on your view the best application of "extensional method"
to, say prisoners in China would be one performed by outsiders with no
knowledge at all of Chinese language and customs. For it is only then
that the entries they make in their database would be most likely to be
free of the taint of intensional attribution. Although even in this
case, ethnocentrism is likely to color one's view of the data ("it
looked as though one person were really angry at the other" when maybe
they were playing some sort of local game.)

But someone who has through immersion become familiar with Chinese language
and customs can now make and record observations that the outsider can't.
They might indeed make use of the PROBE system for the low-grade sort of
science that just tabulates superficial generalizations about observable
matters. But that is beside the point, since so much of the work
of understanding has already been done.

It has already been packed into the transition from not understanding
Chinese to understanding Chinese. It has already been packed into the
move of opening up a new world of observable facts to you, for example
the fact that some prisoners are planning an escape. It is packed into
the new material descriptive vocabulary one can now apply to the doings
of the Chinese. ("Thick descriptions", in a phrase from Ryle picked up
by Geertz).

I suppose it is an open question, but I do not expect that any good
"behavioral science" of native Chinese is going to be done by blinkered
foreigners who refuse to learn anything about the culture. Most of the
sort of thing you record in your own work with PROBE could not be applied
to Chinese without understanding.

The point is this: it is prerequiste to having any observation
sentences at all that one have concepts to use in them. Intuitions
without concepts are blind. And it is prerequiste to having concepts
that one grasp a system that can link the judgments containing to
others. In this sense the very choice of a descriptive vocabulary is
itself a major ampliative step. The concepts themselves embody an
understanding of the way things work. Concepts are not
presuppositionless.

Because concepts have presuppositions packed into them --
"theory-ladenness" it is sometimes called -- the choice of concepts is
as important to science as the choice of "inferential technology". Bad
concepts, and your vaunted database is forever doomed to garbage-in,
garbage-out.

Now there is simply no mechanical method for guaranteeing that you
avoid the problem of garbage-in, garbage-out. For there are no
concepts that come certified a priori as the right ones. And the database
itself sure won't tell you.

If the outsider who does not know Chinese were to ask the insider in
this knee-jerk fashion of yours, "where are your testable predictions",
one answer might be: "I can make them, but you cannot (although you
could learn). They do not show up in your world, although they often show up
quite conspicuously in mine (and could yet come to show up in yours)."

In this sense I think there is nothing ontologically privileged about
the "extensional world" studied by physical science, i.e. the range of
facts that can be described in an extensional language. It is just one
of many aspects of reality. If you are perverse, you can feign
meaning-blindness. You can try with all your might to place yourself
back in the same uncomprehending position with respect to the
intentional facts around you as that of the foreigner with respect to
what Chinese speakers are saying and doing. But there is no reason at
all to think there is any cash value to doing so. I do not expect any
very useful science to come from this procedure.

Elsewhere you insist that we *need* a science of behavior. But what we who
are not meaning-blind would want is something that enables us to
operate better within the world of *intentionally characterized*
behavior. For that is what is most prominent and important in our
empirical (experiential) world, that is what we care about. What you
are doing is something like pushing the science of chemistry as the
solution to the problem of someone who wants to see better artwork from
today's artists, giving as a rationale the claim that there *must* be a
solution from chemistry since after all artworks are all material
objects whose components obey the laws of chemical combination.

David Longley

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <5dr23f$j...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:

> In article <855700...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >In article <5dqkkp$h...@usenet.srv.cis.pitt.edu>
> > ande...@pitt.edu "Anders N Weinstein" writes:
> >>
> >> Others will probably want to interpret it more broadly.
> >
> >My question is..... but what is the point of "different
> >interpretations" unless they lead to empirically testable, or
> >alternative predictions?
>
> Your simplistic (and very non-Quinean) philosophy of science fails
> to give any importance to the formation of the right explanatory concepts.
> Thus you are, among other things, completely blind to the value of
> conceptual clarification, even to working scientists.
>

"blind to the value" ?

> In particular your crude philosophy leaves no room for the process of
> cultivating a new vocabulary, including new observation terms.

"cultivating a new vocabulary"? Sounds like creative writing to
me?

>
> For example, on your view the best application of "extensional method"
> to, say prisoners in China would be one performed by outsiders with no
> knowledge at all of Chinese language and customs. For it is only then
> that the entries they make in their database would be most likely to be
> free of the taint of intensional attribution. Although even in this
> case, ethnocentrism is likely to color one's view of the data ("it
> looked as though one person were really angry at the other" when maybe
> they were playing some sort of local game.)


Don't priosners in China learn mathematics? keep their cells
"tidy", get marks for the courses they do?

>
> But someone who has through immersion become familiar with Chinese language
> and customs can now make and record observations that the outsider can't.
> They might indeed make use of the PROBE system for the low-grade sort of
> science that just tabulates superficial generalizations about observable
> matters. But that is beside the point, since so much of the work
> of understanding has already been done.

Better my low level of science than your high minded "poetry"?

>
> It has already been packed into the transition from not understanding
> Chinese to understanding Chinese. It has already been packed into the
> move of opening up a new world of observable facts to you, for example
> the fact that some prisoners are planning an escape. It is packed into
> the new material descriptive vocabulary one can now apply to the doings
> of the Chinese. ("Thick descriptions", in a phrase from Ryle picked up
> by Geertz).

Mnnnnn yes ...errmmmm

>
> I suppose it is an open question, but I do not expect that any good
> "behavioral science" of native Chinese is going to be done by blinkered
> foreigners who refuse to learn anything about the culture. Most of the
> sort of thing you record in your own work with PROBE could not be applied
> to Chinese without understanding.

What, the Chinese don't use computers and databases eh?

>
> The point is this: it is prerequiste to having any observation
> sentences at all that one have concepts to use in them. Intuitions
> without concepts are blind. And it is prerequiste to having concepts
> that one grasp a system that can link the judgments containing to
> others. In this sense the very choice of a descriptive vocabulary is
> itself a major ampliative step. The concepts themselves embody an
> understanding of the way things work. Concepts are not
> presuppositionless.

You're totally lost in "cognitivism" - step out of it and you
might encounter a REAL problem.....

>
> Because concepts have presuppositions packed into them --
> "theory-ladenness" it is sometimes called -- the choice of concepts is
> as important to science as the choice of "inferential technology". Bad
> concepts, and your vaunted database is forever doomed to garbage-in,
> garbage-out.

Date of Birth, how many reports, sentence length, marks for basic
maths course 1, attendance record, weeks in this activity, weeks
in that - differential measures of co-operation by individuals
already deemed delinquent?..... Garbage?

>
> Now there is simply no mechanical method for guaranteeing that you
> avoid the problem of garbage-in, garbage-out. For there are no
> concepts that come certified a priori as the right ones. And the database
> itself sure won't tell you.

But what iof what goe sin isn't "garbage", and what if logiical
analysis IS the best way to analyse input?

>
> If the outsider who does not know Chinese were to ask the insider in
> this knee-jerk fashion of yours, "where are your testable predictions",
> one answer might be: "I can make them, but you cannot (although you
> could learn). They do not show up in your world, although they often show up
> quite conspicuously in mine (and could yet come to show up in yours)."

"knee-jerk" ? Where ARE your predications? Unless you make them,
how are we to see if what YOU have to say is worth remembering?
How are we to learn from you vs someone else?

>
> In this sense I think there is nothing ontologically privileged about
> the "extensional world" studied by physical science, i.e. the range of
> facts that can be described in an extensional language. It is just one
> of many aspects of reality. If you are perverse, you can feign
> meaning-blindness. You can try with all your might to place yourself
> back in the same uncomprehending position with respect to the
> intentional facts around you as that of the foreigner with respect to
> what Chinese speakers are saying and doing. But there is no reason at
> all to think there is any cash value to doing so. I do not expect any
> very useful science to come from this procedure.

It's not meaning "blindness" it's a statement about WHAT we can
clearly talk about and what we can not. You PRETEND to understand
and HOPE oters understand. You are clearly egocentric enough not
to CHECK how what you have said is received by others. A little
investment in quality control might be an eye opener to you.

>
> Elsewhere you insist that we *need* a science of behavior. But what we who
> are not meaning-blind would want is something that enables us to
> operate better within the world of *intentionally characterized*
> behavior. For that is what is most prominent and important in our
> empirical (experiential) world, that is what we care about. What you
> are doing is something like pushing the science of chemistry as the
> solution to the problem of someone who wants to see better artwork from
> today's artists, giving as a rationale the claim that there *must* be a
> solution from chemistry since after all artworks are all material
> objects whose components obey the laws of chemical combination.
>

The REAL problems of this world demand accountability not only in
the specification of what the problem IS, but how it is to be
solved and whether it has been solved. THAT is the legacy of
logical posititivism, and it's a good one.

--
David Longley


Anders N Weinstein

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <855712...@longley.demon.co.uk>,

David Longley <Da...@longley.demon.co.uk> wrote:
>In article <5dr23f$j...@usenet.srv.cis.pitt.edu>
> ande...@pitt.edu "Anders N Weinstein" writes:
>
>> In particular your crude philosophy leaves no room for the process of
>> cultivating a new vocabulary, including new observation terms.
>
>"cultivating a new vocabulary"? Sounds like creative writing to
>me?

Here's a dead simple question for your supposed Quineanism: what if
scientist A is using observation sentences that can't be translated
into the language scientist B uses? How do they apply your method
to reach agreement?

>> For example, on your view the best application of "extensional method"
>> to, say prisoners in China would be one performed by outsiders with no
>> knowledge at all of Chinese language and customs. For it is only then
>> that the entries they make in their database would be most likely to be
>> free of the taint of intensional attribution. Although even in this
>> case, ethnocentrism is likely to color one's view of the data ("it
>> looked as though one person were really angry at the other" when maybe
>> they were playing some sort of local game.)
>
>Don't priosners in China learn mathematics? keep their cells
>"tidy", get marks for the courses they do?

All these would seem to involve intentionality on the part of
the subjects.

Are you agreeing that on your view someone knowing no Chinese could
march into a Chinese prison and apply good "behavior science"?
So that if someone else were to report the problem as "he keeps
insulting people" you would just have to reject it as something you
could deal with? That this ideal scientist would never have to
ask a native speaker "what's he saying?" "what's he doing?".

>Mnnnnn yes ...errmmmm

A marvelous display of critical thought.



>> I suppose it is an open question, but I do not expect that any good
>> "behavioral science" of native Chinese is going to be done by blinkered
>> foreigners who refuse to learn anything about the culture. Most of the
>> sort of thing you record in your own work with PROBE could not be applied
>> to Chinese without understanding.
>
>What, the Chinese don't use computers and databases eh?

The question is whether you, knowing no Chinese, could go into a
Chinese population and usefully apply extensional principles to Chinese
behavior with the aid of your database. You can record some facts in
your database. But a native Chinese speaker can record more facts in
the same database, for example, the number of times someone has made a
complaint. Both you and the other can use the database, but you are
enabled to record different things in it.

Now my question is where in your philosophy of science do you address
the nature of the difference between what you and what a Chinese
person can record? It looks to me as if it's nowhere. But that
difference may be the crucial one.

To repeat: I am not against making use of the database. I am
pointing to the difference which you ignore, the difference between
what a native Chinese speaker can record in the database and what the
outsider can. This alludes to a crucial step that is ignored in
your simplistic philosophy.

>You're totally lost in "cognitivism" - step out of it and you
>might encounter a REAL problem.....

I am not a cognitivist. Cognitivists are interested in the
representations they suppose inscribed in some fashion inside the head.
But I am instead interested in *mental* states. And, loosely put, I do
not think there is anything mental to be found inside the head.
Intentional mental states are real in that there are objective facts to
be found about them, but these facts need not involve inner
representations inside the head.

>> Bad
>> concepts, and your vaunted database is forever doomed to garbage-in,
>> garbage-out.
>
>Date of Birth, how many reports, sentence length, marks for basic
>maths course 1, attendance record, weeks in this activity, weeks
>in that - differential measures of co-operation by individuals
>already deemed delinquent?..... Garbage?

Very likely, I would say. Not that they are all tainted, though they
may be valueless.

First, what's so special about these predicates? Nothing in the
database tells me what to put into it. I might just as well start
fishing for correlations starting with ratio of wrist to ankle size and
time of exposure to the color blue as with anything you cite. If
scientist A starts recording entries with these silly predicates,
scientist B starts with different ones, there is no rationality to the
choice on your view. But there are an infinite number of possible ones
we might use and no way to search them all.

Second, certainly such things as "how many reports" will be worthless
in a corrupt prison, "length of sentence" may have more to do with the
society than the prisoner, as may the measure of "cooperation". As to
such things as attendance and delinguency, sure you can measure them,
but what's so special about these schoolmarmish virtues? What if one
holds that a prisoner who resists these games as demeaning attempts to
assert control is more admirable than one who conforms with every
ridiculous demands of the petty tyrants in authority? Can someone with
these beliefs get the same value from this database technology? And
anyway what if there's a perfectly good reason someone does not attend
these activities, say that someone else has threatened them if they do?
The database will just be blind to these. Unless perhaps we record
*excused* absences, which again looks to be value-laden.

It is rather surprising to me that you can't recognize how much
attitudinizing might be packed into some of these predicates. After
all, the very fact that your subjects are in prison at all reflects the
intuitive judgment of a jury which was centrally concerned with
assessments of intentions. Of course it also depends on what society
deems a crime in the first place. Consider what is involved in the fact
that no man winds up in prison for raping his wife in a certain
society. Is this a striking empirical result? No, it is just a matter
of definition, but a highly criticizable one.

>"knee-jerk" ? Where ARE your predications? Unless you make them,
>how are we to see if what YOU have to say is worth remembering?

Actually, I don't see prediction as very important here. When someone
moves from not understanding Chinese to understanding Chinese, they
have certainly acquired an understanding, wouldn't you say? Yet they
still might not be able to predict particularly well what anyone will do.

> You are clearly egocentric enough not
>to CHECK how what you have said is received by others.

I wonder if you have ever heard of "projection"? I used to think it
was a myth...

>The REAL problems of this world demand accountability not only in
>the specification of what the problem IS, but how it is to be
>solved and whether it has been solved. THAT is the legacy of
>logical posititivism, and it's a good one.

But what if scientific method is by its nature the wrong tool for the job?

Oliver Sparrow

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

(Anders N Weinstein) writes on long, long, Longley:

> Because concepts have presuppositions packed into them --
> "theory-ladenness" it is sometimes called -- the choice of concepts
> is as important to science as the choice of "inferential
> technology". Bad concepts, and your vaunted database is forever
> doomed to garbage-in, garbage-out.


One of my hats is concerned with orchid biology (!). In this rich
family - perhaps 30k species - taxonomy is an issue. How is one to
separate entities in a useful manner? What is a species and what
a colour variation?

It turns out that there is no right answer to this. One school of
taxonomy says that if you can find a repeatable difference between
two field populations, then they are distinct. The other says: in
the field I see things about an entity which intuition about the
family group within which it sits, about the habitat and prospective
evolutionary path of the entity, all point to it being *useful* to
make a species-level distinction. One spawns hundreds of species,
distinct by the number of hairs on their lip, the other spawns
fearful academic squabbles.

This taxonomy problem haunts any young field. How am I to name the
parts and the agencies. What is a subset, what a superset? How
do I classify and how does the classification link into the model
that I have of how things work? AI is, to a huge degree, a discipline
concerned with classification: of naming things and identifying their
distinguishing characteristics, relations with other things and
role in the overall system. It is not surprising that taxonomic
issues arise, often under philosophical stealthing.

The Searle issue is this: if an engine consists of an enumeration of
the parts of something, then how can

(1) the engine be made from nothing but the products of such
an enumeration and
(2) how can the engine engage in enumeration on its own account?

Answer: the engine is to be much more than an enumeration of the
parts. It needs to be able to aply the commonsensical model of
taxonomy given above, which requires the representation of context;
cf my earlier comments and discussion with Rickert on this thread.

Here is the heart of the Longley issue, which is that there is more
to life than chinese enumeration. Truth is not a railway timetable,
as someone Germanic once observed (- Schlieffen? One hopes not.)
Neither is it a list, pace Michael Porter. The heart of the AI
"issue" is how a taxonomising automat can be generated, complete
with its context engine, such that the classifiction is developed and
tested; but as a support for other features, such as the context
engine.

The bootstrap element of this is evident. You need the context to
define the taxonomy, the taxonomy to generate the model, the model
to generate the context. Thus an evolutionary, self-developing
pathway rather than a declarative birth from the thigh of Zeus, the
programmer.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Oliver Sparrow

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

(Trickey Rickert), right again, writes:

> Right. And in a way, that makes it subjective. Two people might be
> in the presence of the same signals, but because of different
> contextual assumptions they will pick up different information from
> those signals. What one considers information, the other might
> consider noise, and vice versa.

Entirely so. I think an interesting point is, however, that absolute
relativism will not do. That is, there are some contexts (by no means
an infinite set) to which "click" will be meaningful and many to
which it will not; and there are a relatively limited number of states
into which a system can fall that deliver these contexts, interpretive
or not. The more contextualised the signal, the less likely the wrong
set is to be evoked. Thus - the example I've used elsewhere - "42"
could be a chest size or a room number, but when it is offered to you
by a hotel receptionist when you have just checked in, the context
evokes the correct interpretive context. Thus a look-ahead, what's up?
engine creates perhaps phantom contexts, into which data then slip.
Experimentally, we know that this is exactly what humans do in some
aspects of data stream management and it seems a natural for
artificial systems as well. A semi-connectionist approach would have
half a dozen phantoms evoked by the general situation, each of which
would compete for processor space. The most successful - those which
self-ignite as a result of resonance with the incoming data - would
quench the others: zoom, a context engine.

>
> >In thiking about information systems, therefore, we have to
> >model the emitter, the conduit and the recipient.
>
> In thinking about human cognition, the emitter becomes the whole of
> reality, and it is a little tricky to model that.

True, but hay que suenar. And getting it right in systems where some
of the variabnles are pinned as observables may offer insight into
floating, self-generating systems such as is the probable nature of
awareness. Grounding may not ne a necessary condition, but it is
certainly a helpful one.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

David Longley

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <5drqkr$m...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:

> In article <855712...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >In article <5dr23f$j...@usenet.srv.cis.pitt.edu>
> > ande...@pitt.edu "Anders N Weinstein" writes:
> >
> >> In particular your crude philosophy leaves no room for the process of
> >> cultivating a new vocabulary, including new observation terms.
> >
> >"cultivating a new vocabulary"? Sounds like creative writing to
> >me?
>
> Here's a dead simple question for your supposed Quineanism: what if
> scientist A is using observation sentences that can't be translated
> into the language scientist B uses? How do they apply your method
> to reach agreement?

It's not as simple as you make out.

This sort of thing happens frequently in databases. One would add
to variable labels and their values to each others respective
data dictionary. If the data actually came from the same overall
population, that would show statistically.

As I see it, the choice of names or labels in the data dictionary
are not what matter - what's important is criterion based
allocation of numeric values for class membership. The labels or
names are just there to conveniently identify nodes in a data
network which allow us to identify useful relations.

If one had a number of these which turned out to be co-extensive,
one might well find this through cluster or factor analysis.

Such analysis is really what the whole system is all about of
course.

>
> >> For example, on your view the best application of "extensional method"
> >> to, say prisoners in China would be one performed by outsiders with no
> >> knowledge at all of Chinese language and customs. For it is only then
> >> that the entries they make in their database would be most likely to be
> >> free of the taint of intensional attribution. Although even in this
> >> case, ethnocentrism is likely to color one's view of the data ("it
> >> looked as though one person were really angry at the other" when maybe
> >> they were playing some sort of local game.)
> >
> >Don't priosners in China learn mathematics? keep their cells
> >"tidy", get marks for the courses they do?
>
> All these would seem to involve intentionality on the part of
> the subjects.

"seem" according to you... I could be called in as a consultant
and show managers and guards how to go about recording behaviour,
probably with very little understanding of chinese "customs".

>
> Are you agreeing that on your view someone knowing no Chinese could
> march into a Chinese prison and apply good "behavior science"?
> So that if someone else were to report the problem as "he keeps
> insulting people" you would just have to reject it as something you
> could deal with? That this ideal scientist would never have to
> ask a native speaker "what's he saying?" "what's he doing?".

Just as any good ethologist or zoologist could go and study
baboon behaviour yes. What folk are doing comes from whatching
what they *DO*. Asking someone to tell you what they are doing is
usually jkust in lieu of futher watching (which might take some
time!).

>
> >Mnnnnn yes ...errmmmm
>
> A marvelous display of critical thought.

I was suggesting your's WASN'T <g>.

>
> >> I suppose it is an open question, but I do not expect that any good
> >> "behavioral science" of native Chinese is going to be done by blinkered
> >> foreigners who refuse to learn anything about the culture. Most of the
> >> sort of thing you record in your own work with PROBE could not be applied
> >> to Chinese without understanding.
> >
> >What, the Chinese don't use computers and databases eh?
>
> The question is whether you, knowing no Chinese, could go into a
> Chinese population and usefully apply extensional principles to Chinese
> behavior with the aid of your database. You can record some facts in
> your database. But a native Chinese speaker can record more facts in
> the same database, for example, the number of times someone has made a
> complaint. Both you and the other can use the database, but you are
> enabled to record different things in it.

That would be a matter of me learning their procedures. It's a
matter of experience with behaviour, nothing more.

>
> Now my question is where in your philosophy of science do you address
> the nature of the difference between what you and what a Chinese
> person can record? It looks to me as if it's nowhere. But that
> difference may be the crucial one.

And "may be" it's NOT crucial at all. What Quine's analyses show
is that one should not get bogged down with "meanings" or
"intensions".

>
> To repeat: I am not against making use of the database. I am
> pointing to the difference which you ignore, the difference between
> what a native Chinese speaker can record in the database and what the
> outsider can. This alludes to a crucial step that is ignored in
> your simplistic philosophy.
>

Your "philosophy" is over populated with ideas.

> >You're totally lost in "cognitivism" - step out of it and you
> >might encounter a REAL problem.....
>
> I am not a cognitivist. Cognitivists are interested in the
> representations they suppose inscribed in some fashion inside the head.
> But I am instead interested in *mental* states. And, loosely put, I do
> not think there is anything mental to be found inside the head.
> Intentional mental states are real in that there are objective facts to
> be found about them, but these facts need not involve inner
> representations inside the head.
>

Fine - mentalist, cognitivist - basically the same sorry position
as far as I am concerned. Read the extract from Bruner on how he
and his Harvard colleagues brought cognitivism back.

When you know a bit more about who these words are used in the
profession maybe it'll be worth speaking again. As it is, you
seem to be only too happy to invent your own history and
language... I can't talk to someone who does that... Nobody can.

--
David Longley


Anders N Weinstein

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <685276...@chatham.demon.co.uk>,

Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
> (Trickey Rickert), right again, writes:
>
>> Right. And in a way, that makes it subjective. Two people might be
>> in the presence of the same signals, but because of different
>> contextual assumptions they will pick up different information from
>> those signals. What one considers information, the other might
>> consider noise, and vice versa.
>
>Entirely so. I think an interesting point is, however, that absolute
>relativism will not do. That is, there are some contexts (by no means

I have often suggested that this is a very misleading use
of "subjective". To my ear it suggests something like a constructivist
theory, according to which the cognitive subject is somehow creating the
very data she uses to check her beliefs. And it suggests that there is
no objectivity to the information obtained.

But the *great* thing about JJ Gibson's usage is that it has it that
information is not "subjective" in either of those senses. Information
could be objectively present whether or not it is picked up, just as
chroma information is present in a broadcast signal even if it is as
nothing to a black and white set. And the topic of the information, say
that that man just said the English word "porridge", typically concerns
an intersubjectively available state of affairs. This is so even if some
listeners, say native Chinese speakers, are not currently equipped to
pick it up.

Neil Rickert

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In <5dtd4o$t...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:

>In article <685276...@chatham.demon.co.uk>,
>Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
>> (Trickey Rickert), right again, writes:

>>> Right. And in a way, that makes it subjective. Two people might be
>>> in the presence of the same signals, but because of different
>>> contextual assumptions they will pick up different information from
>>> those signals. What one considers information, the other might
>>> consider noise, and vice versa.

>>Entirely so. I think an interesting point is, however, that absolute
>>relativism will not do. That is, there are some contexts (by no means

>I have often suggested that this is a very misleading use
>of "subjective". To my ear it suggests something like a constructivist
>theory, according to which the cognitive subject is somehow creating the
>very data she uses to check her beliefs. And it suggests that there is
>no objectivity to the information obtained.

Evidently we have different ideas as to the meaning of 'subjective'.
As I use the term, to say that that the information is subjective is
just to say that it is not objective. To say that it is objective is
to say that there is general agreement among observers, although the
agreement need not be unanimous.

Now suppose I look at a sheet of paper, and see an array of
differential equations. Another person looks at the same paper, and
sees an interesting - even artistic - pattern of squiggly lines. We
have picked up different information. So, as I am using the term,
this is not objective.

>But the *great* thing about JJ Gibson's usage is that it has it that
>information is not "subjective" in either of those senses. Information
>could be objectively present whether or not it is picked up, just as
>chroma information is present in a broadcast signal even if it is as
>nothing to a black and white set.

Suppose that in 1950, a television transmitter broadcast an identical
signal to what is being used today. The standard for chroma
information had not been set. There did not exist any color
television sets. Would you still say that the chroma information was
objectively present? Certainly the people in 1950 would not have
said that there was such information, and might have instead seen
some noise at a low enough level to not be important.


Carl B. Frankel

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

Anders N Weinstein wrote:
>
> In article <5donur$3...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
> >In <32FFA8...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:
> >
> >Under information theory, what counts as information is determined by
> >the sender. ...
> >
> >It seems to me that we need a different notion, where what is to
> >count as 'information' is to be determined by the receiver. Possibly
>
> The slogan Frankel quoted from Bateson -- information is difference
> that makes a difference -- would also seem to point to a
> receiver-centered concept.
>
> Of course "making a difference" could mean many things. To me it

> naturally suggests the idea that the receiver leads a certain sort of
> life in which the information *matters* (is relevant). In this sense,
> perhaps nothing "makes a difference" to a computer, although the
> information we cause to be stored in computers can make a big
> difference to *us*.
>
> Others will probably want to interpret it more broadly.

I think Bateson intended "making a difference" fairly broadly,
something like, "To be utilized in transformation i.e. in processing."
Thus all inputs which a computer processes make a difference
by virtue of the computer accepting them as inputs to some
transformation and utilizing those inputs to accomplish their
transformation to outputs. Bateson then used this notion to start a
discussion about those differences which make a difference
with respect to adaptive competence. This effectively treats
"adapting" as a transformation, and focuses on the sampling, filtering
and utilization of inputs in order to organize adaptively competent
responses. However, Bateson was unclear as to whether he was talking
about the adaptive competence of a unit of adaptation (e.g. genes) or
its adaptive apparatus (the individual) or both. (Note that defining
"making a difference" to mean "is involved in a transformation"
plays very nicely in a suggestion I make to Sloman in another post
about inside/outside boundaries, that a boundary be a dynamic
entity that comes into existence at the interface at which a
transformation collects its inputs and delivers its outputs. :-))

I think where Bateson was trying to go was to establish an
indentity relation between processing information and adapting
to circumstances. I personally doubt there is an identity
relation here, since there are many species of information
processing which are not adaptive. Yet I suspect that
the topology of the problem of adapting to circumstances does
have an indentity relation (a homeomorphism) with one species
of information processing, the closest description of which
can be found in control theory in discussions of adaptive control.
(Bateson and many others have tried to make the mapping to
steady-state control, since "adapting" sounds like it could be
a reference, but have come up rather short. I suspect this is
because "adapting" works much better as a reference to adaptive
controls which select adaptively competent goals for underlying
steady-state controls to achieve.)

Also, I want to thank you and Neil Rickert for making the
distinction between sender-centered and receiver-centered approaches
to information theory. (My two accesses to news feeds must be on
real backwaters, since I've not yet seen Neil's actual post.) Since
I've always focused on the problem of adaptively competent
information processing, I automatically took a receiver-centered
view without actually thinking about it as such. As a result,
I've sometimes had trouble explaining the relationship between
my thinking and information theory, to those with the traditional,
sender/message centered view. (Shannon, after all, was concerned
with the circumstances which preserve or undermine the integrity
of message replication.) This distinction will certainly help
me better bridge the relationship between information theory and
adaptively competent information processing, so I expect I will
cite this personal communication as relevant.

Carl B. Frankel

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

Anders N Weinstein wrote:
>
> In article <685276...@chatham.demon.co.uk>,
> Oliver Sparrow <oh...@chatham.demon.co.uk> wrote:
> > (Trickey Rickert), right again, writes:
> >
> >> Right. And in a way, that makes it subjective. Two people might be
> >> in the presence of the same signals, but because of different
> >> contextual assumptions they will pick up different information from
> >> those signals. What one considers information, the other might
> >> consider noise, and vice versa.
> >
> >Entirely so. I think an interesting point is, however, that absolute
> >relativism will not do. That is, there are some contexts (by no means
>
> I have often suggested that this is a very misleading use
> of "subjective". To my ear it suggests something like a constructivist
> theory, according to which the cognitive subject is somehow creating the
> very data she uses to check her beliefs. And it suggests that there is
> no objectivity to the information obtained.
>
> But the *great* thing about JJ Gibson's usage is that it has it that
> information is not "subjective" in either of those senses. Information
> could be objectively present whether or not it is picked up, just as
> chroma information is present in a broadcast signal even if it is as
> nothing to a black and white set. And the topic of the information, say
> that that man just said the English word "porridge", typically concerns
> an intersubjectively available state of affairs. This is so even if some
> listeners, say native Chinese speakers, are not currently equipped to
> pick it up.

A very interesting quandry, very clearly articulated. I'm not sure,
but I suspect that there is an excluded middle lying between
"subjective" as it is commonly used, and objective. Since you
mention subjective, objective and intersubjectively available all
in the same post without fully specifying how you see the relations
among them, I don't know if I'm really disagreeing with you or not. :-)
In any case, it's little surprise that this thread has so quickly
moved to enquire into epistemological foundations.

Both of your examples involve detectable difference that is meaningful
to someone who is properly equipped. But what of the universe of
difference that no one can presently instrument, no less parse?
To the extent that the future will be like the past, and thus that
scientific discovery did not reach its final completion an hour before
this posting, it is reasonable to suppose that we will someday find
meaningful distinctions in a universe of difference that we presently
do not even detect. But such a supposition derives its reasonableness
from its intersubjective testability, rather than on an objective
basis or from self-manufacture.

As such, I would be very hesitant to say that differences exist
in and of themselves, like a ding an sich, apart from our ability to
detect and utilize them. Their existence is an article of faith, that
some may need to motivate their scientific endeavors. But it is a
proposition that cannot be demonstrated, because it guesses about a
future that is available to no one. And supposing that the proposition
were "true": What difference would it make, beyond being an additional
carrot to hang in front of those who are motivated by exploring the
less-than-fully-revealed? That there may be a transcendant omniscient
view available to some God-like entity does not change our(!)
situation, which is one of have a limited and noisy view of a
pervasively uncertain and constantly changing open system. (Even our
view of the God-like entity can be no better, no matter how hard or
persistently we beat our breasts to the contrary, no matter how many
willing and faithful followers we convince to the contrary.) Increased
knowledge of the system in which we are embedded may allow us to impose
broader and more stable local closures within the system, but will not
in one iota reduce our reliance on our own information processing as
augmented by sharing of knowledge that is intersubjectively available.
We(!) each make the decision that we are so confident in a given
proposition or a given local closure that it is a waste of precious
processing resource to do any error-checking, that the proposition may
be treated as compellingly true. Yet our emotional experience of
compellingness, individually and collectively, is hardly the same
as objectivity--albeit there are many social situations involving
serious consequences if we do not act as though shared beliefs are
objectively true beyond question.

On the other side, this is hardly to say that everything is
subjective or self-manufactured. Again, self-manufacture is another of
those propositions that cannot be demonstrated, but there is much
counter-vailing evidence to be adduced against it, not the least of
which is all of the redundancy in the input stream indicating
intersubjective availability and testability of many observations
and observed relations across trials, circumstances and observers.

I suspect that the underlying problem in both these cases is an
ill-formed self-reference, a proposition that refers to itself in
a way that denies its own semantics (like "this statement is false").
After all, both propositions embed epistemological propositions about
the nature of evidence, viz. that evidence has an objective basis or
is self-manufactured. Yet when we ask what to take in evidence for
each of these propositions about evidence, we find no objective basis
for objectivity, while self-manufacture must be self-manufactured.
To find any credible evidence at all, we must fall back to
redundancy across observers, observations, redundancy of predictions
with observations, etc., which places us squarely on the middle we
should never have excluded.

Regards,

Carl F.

P.S. I hope you will tell me whose philosophical position I have
just restated, so that I can cite it as needed, rather than have
to argue it.

Vesa Monisto

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

Da...@longley.demon.co.uk (David Longley) wrote:
> "... one of the best pieces to work on this type of theme was done
> by Carnap in his "Aufbau" ..."

Some historical remarks:

Rudolf Carnap's (1891 - 1970) "Der logische Aufbau der Welt" (1928) was
not
translated to English until 1967 ("The Logical Structure of the World"),
but it was
known to Quine before that. Much Quine can be understood by Carnap's
"Scheinprobleme in der Philosophie" (1928). I don't know is it
translated to English
but it appears in the second 'Auflage' of the "Aufbau" 1961.

Some extracts, please, from "Vorwort zur zweiten Auflage" (1961):
1)
"Meine Ero"rterungen u"ber die extensionale Methode (§ 43 bis 45 des
"Aufbaus")
scheinen heute nicht mehr befriedigend. Die These der Extensionalita"t
in der fru"her
u"blichen Form, wie sie von Wittgenstein, Russell und mir (§ 43)
vertreten wurde,
besagte, dass alle Aussagen extensional sind. In dieser Form ist aber
die These nicht
richtig. Ich habe daher spa"ter eine schwa"chere Fassung vorgeschlagen,
die besagt,
dass jede nicht-extensionale Aussage in eine logisch a"quivalente
Aussage in einer
extensionalen Sprache u"bersetzbar ist." ... "... aber sie ist noch
nicht bewiesen,
wir ko"nnen sie nur als Vermutung aufstellen ..."
...
"Die Methode, die ich in § 43 die "extensionale Methode" genannt habe,
besteht
im Grunde genommen einfach darin, fu"r das ganze Konstitutionssystem
eine
extensionale Sprache zu verwenden. Hiergegen besteht kein Einwand.
Meine Beschreibung der Methode ist aber in einigen Punkten nicht klar.
Man ko"nnte den Eindruck haben, als würde in meiner Methode angenommen,
dass es fu"r die Gu"ltigkeit der Nachkonstruktion eines gegebenen
Begriffes A
durch den Begriff B schon hinreichend sei, dass B den gleichen Umfang
hat wie A.
In Wirklichkeit muss jedoch die sta"rkere Bedingung erfu"llt sein, dass
die Umfangsgleichheit von B mit A nich nur zufa"llig besteht, sondern
mit Notwendigkeit, d.h. entweder auf Grund von logischen Regeln oder auf
Grund
von Naturgesetzen ..."

2) Concerning "Scheinprobleme in der Philosophie":

"... erst im Fru"hjahr 1927 geschrieben, am Ende meines ersten Wiener
Jahres.
Daher zeigt sich hier sta"rker der Einfluss der Wiener Diskussionen und
des Buches
von Wittgenstein." ... "Das Hautthema ist die Aufgabe, die
Erkenntnistheorie
von Scheinproblemen zu reinigen. Zuna"chst wird ein allgemeines
Kriterium
der Sinnhaftigkeit aufgestellt. Dann wird dieses Kriterium auf die
Erkenntnis
des Fremdpsychischen angewendet. Meine damalige Auffassung ist
eine fru"he Phase des Physikalismus, ..."
"Auf Grund des Sinnkriteriums werden ferner verschiedene
Realita"tsthesen gepru"ft.
Es wird gezeigt, dass sowohl die These des Realismus von der Realita"t
der Aussenwelt
wie auch die des Idealismus von ihrer Irrealität Scheinaussagen sind,
Sa"tze ohne
Sachgehalt. Das gleiche wird dann auch gezeigt fu"r die Thesen der
Realita"t bzw.
Irrealität des Fremdpsychischen. Diese Verurteilung aller Thesen u"ber
metaphysische
Relität (die ich deutlich unterscheide von empirischer Realita"t) is
radikaler als die
im "Aufbau", wo solche Thesen nur aus dem Gebiet der Wissenschaft
ausgeschlossen
wurden. Meine radikalere Einstellung war beeinflusst durch Wittgensteins
Auffassung,
dass metaphu"sische Sa"tze, weil grundsa"tzlich unverifizierbar, sinnlos
sind."
...
University of California, Los Angeles, März 1961. Rudolf Carnap


That’s it. ("... die ich deutlich unterscheide ..." ;-)

(Excuse me my ONLY ONE comment: ‘naïvistic surrealism of
verificationism’;
at least every programmer using more modern advanced languages knows
that. -Why?
From a programmatic viewpoint: What the ontic notions for pointers
happen to be,
that's totally irrelevant for the *workings* of the programs. Thus,
there may ‘sit’ more
'intelligence' in the 'stupid/funny folk-notions' ('common sense') than
ever guessed.)


V.M. (As a neutral surfer/sailor: Everything is just floating, with
anchors or without.)

Vesa Monisto

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

Oooops... sorry!
- You never know what kind of wrapping behavior
a new Navigator can exhibit ...
... but if 'everything is floating' and
the weather is rough... :)

Thank you for your comments.
I've read some Quine but there is always
more to read ...

V.M.

Vesa Monisto

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

But you are quite right: the restriction to FOPC
*is* a watershed.

It depends on how you see the notion "science".
There are phenomena not captured by FOPC
even the research-results are given with FOPC.
There *may* be needs, ... not necessarily.
- I leave it open.

V.M.

David Longley

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <5dtd4o$t...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:

<snip>


>
> But the *great* thing about JJ Gibson's usage is that it has it that
> information is not "subjective" in either of those senses. Information
> could be objectively present whether or not it is picked up, just as
> chroma information is present in a broadcast signal even if it is as
> nothing to a black and white set. And the topic of the information, say
> that that man just said the English word "porridge", typically concerns
> an intersubjectively available state of affairs. This is so even if some
> listeners, say native Chinese speakers, are not currently equipped to
> pick it up.
>

Don't forget - one of the best pieces fo work on this type of
theme was done by Carnap in his "Aufbau" before he abandoned
methodological solipsism. In his early days he was very much
influenced by Husserl. Gibson's work is sits nicely in the
naturalist camp.... Over the years, I have found that all I once
though unique to Husserl can in fact be found in Quinean
Evidential behaviourism...

'..there's a tradition which argues that - epistemology
to one side - it is at best a strategic mistake to
attempt to develop a psychology which individuates
mental states without reference to their environmental
causes and effects...I have in mind the tradition which
includes the American Naturalists (notably Pierce and
Dewey), all the learning theorists, and such
contemporary representatives as Quine in philosophy and
Gibson in psychology. The recurrent theme here is that
psychology is a branch of biology, hence that one must
view the organism as embedded in a physical environment.
The psychologist's job is to trace those organism-
environment interactions which constitute its behavior.'

J. Fodor (1980) p.64
Methodological solipsism considered as a research
strategy in cognitive psychology.
Massachusetts Inst of Technology
Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

Draw your own conclusions.
--
David Longley


David Longley

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <330179...@sturman.pp.fi>
ve...@sturman.pp.fi "Vesa Monisto" writes:

> Da...@longley.demon.co.uk (David Longley) wrote:
> > "... one of the best pieces to work on this type of theme was done
> > by Carnap in his "Aufbau" ..."
>
> Some historical remarks:
>
> Rudolf Carnap's (1891 - 1970) "Der logische Aufbau der Welt" (1928) was
> not
> translated to English until 1967 ("The Logical Structure of the World"),
> but it was
> known to Quine before that. Much Quine can be understood by Carnap's
> "Scheinprobleme in der Philosophie" (1928). I don't know is it
> translated to English
> but it appears in the second 'Auflage' of the "Aufbau" 1961.

<snip>

It's important to know that Quine and Carnap were VERY close
friends and that a lot of Quine's work grew out of a critique of
Carnap's. And that critique was of Carnap's work AFTER he
abandoned his methodological solipsism. "Meaning and Necessity"
was discussed in detail by Carnap and Quine in the 40 and 50s
before it was published. Carnaps was going to call it "The Method
of Intension and Extension" (though this still figures heavily in
the text). Quine's work was of course a radical critique of
essentialism, and modal logic ("meaning" and "necessity").

Anyone wanting an up to date view of Quine, should read "Pursuit
of Truth" or "From Stimulus to Science". If they want a more
meaty work, "Word and Object" 1960 supported by "From and Logical
Point of View", and "The Ways of Paradox and Other Essays", and
perhaps "Theories and Things".

A very good introduction is provided by Christopher Hookway
"Quine", and if you can find any articles or books edited by
Roger Gibson, they are some of the clearest.
--
David Longley


David Longley

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

I'm not entirely sure to what extent the extensional stance and
computationalism are co-extensive. Particularly if the former is
restricted to the First Order Predicate calculus.

From what I have read and heard of exchanges between Quine,
Boolos and Dreben, it's a moot point whether the problems
encountered beyond a restricted FOPC need concern either the
scientist or computer scientist......
--
David Longley


Anders N Weinstein

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <5dtglm$6...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5dtd4o$t...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>
>Now suppose I look at a sheet of paper, and see an array of
>differential equations. Another person looks at the same paper, and
>sees an interesting - even artistic - pattern of squiggly lines. We
>have picked up different information. So, as I am using the term,
>this is not objective.

Fair enough. But I would say it is still an objective matter whether it
contains differential equations in that you can be right or wrong about
that. Your seeing it as so doesn't suffice to make it so -- perhaps it
was produced by accident. I expect you agree, although I am not sure.

>Suppose that in 1950, a television transmitter broadcast an identical
>signal to what is being used today. The standard for chroma
>information had not been set. There did not exist any color
>television sets. Would you still say that the chroma information was
>objectively present?

No, of course not. So what?

More accurately, the answer might depend on exactly how this highly
unlikely event came to pass. If there were some reliable process
ensuring that it were always true, one might say that the signal carried
the information naturally, even if it did not carry it conventionally.
Whereas if it were an entirely fortuitous coincidence then not.

The original analogy alluded to a case where the information content is
defined by a convention. But the point of the analogy was just the idea
that in some cases one can say information is objectively present even
if not picked up by a particular receiver.

Anders N Weinstein

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <330255...@ome1.com>, Carl B. Frankel <ca...@ome1.com> wrote:
>>
>I think Bateson intended "making a difference" fairly broadly,
>something like, "To be utilized in transformation i.e. in processing."

This threatens to be too broad to be useful: any physical object that
is causally affected by something then counts as an information
processor? The presence of oxygen makes a difference to iron and it
rusts or not in response. But that's not information processing.

> Bateson then used this notion to start a
>discussion about those differences which make a difference
>with respect to adaptive competence. This effectively treats

Right. Where the norms that determine what is adaptive and what is
competence would seem to pertain to some kind of life.

Neil Rickert

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In <5dvefl$6...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <330255...@ome1.com>, Carl B. Frankel <ca...@ome1.com> wrote:

>>I think Bateson intended "making a difference" fairly broadly,
>>something like, "To be utilized in transformation i.e. in processing."

>This threatens to be too broad to be useful: any physical object that
>is causally affected by something then counts as an information
>processor? The presence of oxygen makes a difference to iron and it
>rusts or not in response. But that's not information processing.

Why is that not information processing? It seems to me that there
has been a transformation of representations. The information that
was formerly represented by the presence of oxygen in the atmosphere
has now been represented by the presence of rust on the surface of
the iron. The rusting amounts to a natural transduction process.

Your problem is that you don't want to say that there was any
information, unless some human passes judgement on it to say that it
was information. Therein lies your dualism. If a tree falls in the
forest, and there is no human to pass judgement as to whether there
was information, did it make a sound -- or could there even have been
a tree or a forest?


Neil Rickert

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In <5dvdtq$6...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5dtglm$6...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>>In <5dtd4o$t...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:

>>Now suppose I look at a sheet of paper, and see an array of
>>differential equations. Another person looks at the same paper, and
>>sees an interesting - even artistic - pattern of squiggly lines. We
>>have picked up different information. So, as I am using the term,
>>this is not objective.

>Fair enough. But I would say it is still an objective matter whether it
>contains differential equations in that you can be right or wrong about
>that. Your seeing it as so doesn't suffice to make it so -- perhaps it
>was produced by accident. I expect you agree, although I am not sure.

I'll agree that my seeing it so doesn't make it so. But I don't
think there is anything objective about it. They only become
differential equations when I decide to interpret them using a
particular information representation system. I can be right or
wrong about whether they constitute differential equations within
that mathematical representation system. But the question of
rightness or wrongness is relative to the representation system. The
artist, using a different representation system, could be right about
the aesthetic qualities of the squiggles at the same time I am right
about their being differential equations.

>>Suppose that in 1950, a television transmitter broadcast an identical
>>signal to what is being used today. The standard for chroma
>>information had not been set. There did not exist any color
>>television sets. Would you still say that the chroma information was
>>objectively present?

>No, of course not. So what?

Then it would seem that your notion of "objective information" is
based on the intention of the sender of the information. As I
thought I had made clear, I need a notion of "information" which is
receiver relative rather than sender relative.

>More accurately, the answer might depend on exactly how this highly
>unlikely event came to pass.

Again, that seems to pin your meaning of "information" on the sending
rather than on the receiving.

My concern is with the use of information in the scientific
investigation of the world. For that scientific purpose there is no
sender of the information, except perhaps for God. If we want a
sender based meaning for "information", then we must take God to be
the sender. It would seem that science must reduce to a branch of
theology, and philosophy must reduce to Berkeley's idealism. I want
to avoid that. It seems to me that the path to avoid that is to take
a receiver's view of information.

Again, consider that 1950 television transmitting a modern signal.
What we call chroma information was present, and detectable by 1950s
scientists. In my earlier posting, I think I suggested that they
would see it as noise. But actually, they would see it as
distortion. If it were noise, then that part of the signal would be
random. However, because that part of the signal is correlated in
some strange way with other parts of the signal, it would be seen as
distortion rather than as noise. The 1950s scientists would
recognize that part of the signal as information, but as information
about the presence of some distorting component in the transmitting
equipment. Perhaps if the 1950s scientists investigated thoroughly
enough, they might find that the "distortion" is correlated to the
natural colors of what was being transmitted in black and white.
However, it is unlikely that they would make such a discovery, since
they would be more interested in removing the distortion to improve
the fidelity of the signal.

> If there were some reliable process
>ensuring that it were always true, one might say that the signal carried
>the information naturally, even if it did not carry it conventionally.
>Whereas if it were an entirely fortuitous coincidence then not.

This seems to have it backwards. We do not say that

because there is a reliable process in the sun for producing
photons, therefore those photons inform us about the nuclear
processes in the sun.

Rather, we say

because of the properties of the light signal, we have
information about the sun. Because the information appears
reliable according to our measurements, we can conclude that
there must be a reliable causal process in the sun producing
those photons.

It seems to me that with your sender-based notion of "information",
you are requiring complete reliable knowledge of causal processes as
a prerequisite to having the information which could provide us with
that knowledge. With such a notion of "information," I don't see how
science could ever get off the ground. That is why I prefer a
receiver-based "information."


Anders N Weinstein

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <855794...@longley.demon.co.uk>,

David Longley <Da...@longley.demon.co.uk> wrote:
>
>> But the *great* thing about JJ Gibson's usage is that it has it that
>> information is not "subjective" in either of those senses. Information
>> could be objectively present whether or not it is picked up, just as
>
>Don't forget - one of the best pieces fo work on this type of
>theme was done by Carnap in his "Aufbau" before he abandoned
>methodological solipsism. In his early days he was very much

I think this is looney advice. Carnap there was concerned with
constructing the external world on the basis of raw sense data ("thin",
preconceptual subjective experience).

Gibson was interested in capacities for directly detecting "thick"
information *about* the external world which he argued could perfectly
well be sensationless. That is, he *dispensed* with any role for sense
data in his view of perceptual knowledge. With this move he jettisoned
the traditional subjectivism of early Carnap and Husserl.

A cognitivist might say that in place of ampliative *inference* from an
impoverished basis of raw transducer outputs, Gibson wanted to substitute
the idea of a more sophisticated kind of transducer. In any case his
work completely rejects the empiricist idea that raw sense-data could
be the basis of empirical knowledge.

> J. Fodor (1980) p.64
> Methodological solipsism considered as a research
> strategy in cognitive psychology.
> Massachusetts Inst of Technology
> Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

>Draw your own conclusions.

That Gibson belongs with Quine from Fodor's rather skewed perspective
should not blind one to the deep and important differences.

For example Quine insists for example that "all we have to go on"
epistemologically is the stimulation of our sensory receptors. He
invents a subjectivistic notion of "stimulus meaning" to elaborate his
confusion of epistemic with causal relations. In fact we do not --
could not ! -- "go on" the triggering of our receptors as an epistemic
basis for our beliefs. We might go rather on the information contained
in the patterns.


Anders N Weinstein

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <330263...@ome1.com>, Carl B. Frankel <ca...@ome1.com> wrote:
>Both of your examples involve detectable difference that is meaningful
>to someone who is properly equipped. But what of the universe of
>difference that no one can presently instrument, no less parse?
>To the extent that the future will be like the past, and thus that
>scientific discovery did not reach its final completion an hour before
>this posting, it is reasonable to suppose that we will someday find
>meaningful distinctions in a universe of difference that we presently
>do not even detect. ...

Almost certainly.

>As such, I would be very hesitant to say that differences exist
>in and of themselves, like a ding an sich, apart from our ability to
>detect and utilize them. Their existence is an article of faith, that

I think this worry goes to far. Kant's ding an sich would necessarily
be forever undetectable, indeed unimaginable to creatures with our
cognitive constitution. But above you are speaking only of the
presently undetectED -- which might be perfectly detectABLE by us,
given the right formation.

For something to count as objective should not require that it be a
Kantian ding an sich.

>P.S. I hope you will tell me whose philosophical position I have
>just restated, so that I can cite it as needed, rather than have
>to argue it.

Hilary Putnam wrote at one point of the distinction between "internal
realism" and "metaphysical realism". He took this to be similar to
Kant's distinction between transcendental realism (which Kant rejected)
and empirical realism (which he espoused).

In broad terms Putnam took metaphysical realism to be the idea that
truth consists in a correspondance between mental representations and
a reality that is completely independent of them, so independent that
it could be possible that it forever transcend our powers of representing
it accurately. I.e. a god's eye view of truth as getting right the
structure of the things as they are in themselves.

Putnam offered several interesting arguments against this. For example
he argued that on certain assumptions, model theory guarantees that
a subjectively ideal theory -- one that is as well verified as we can
imagine -- would have a model in the things in themselves, and that
therefore the metaphysical realist would have no ground for saying that
it is false to them. Putnam also attacked the idea that physicalism --
the main version of metaphysical realism today, he said -- could
ground its conception of truth as accurate correspondance in a naturalistic
causal theory of reference. Only magic, Putnam argued, could nail our
thought-signs uniquely to the things in themselves in the way a metaphysical
realist required.

On the other hand his "internal realism" is supposed to recognize the
falsity of total conceptual relativism, by recognizing that possessors
of different concepts still have to interpret one another with some
charity as having common referents through different conceptions, and
also recognizing that even though conceptual schemes can change with
history, still not any concepts are as good as any than others. As you
rightly note, we already possess our regulative ideal of a possibly
better theory from our point of view within our current beliefs.
Putnam's view thus came out as a kind of pragmatism, similar perhaps to
what one finds in Nelson Goodman.

C.S. Peirce suggested a definition of truth as that opinion fated to be
agreed on by all human inquirers at some kind of idealized limit that
is the goal of rational inquiry. Putnam suggested that "metaphysical
realism" breaks down at precisely the point at which it diverges from
Peirce's conception (although he notes other problems with the
Peircean, for example the idea of inquiry as approaching a limit).

I suppose I must be an internal realist. Actually I think the main
problem is the picture of mental representations as set over on one
side of a divide with the world on the far side of the boundary. That
makes it look as though there really have to be two sorts of realism, an
internal one available from the point of view of the inquirer, and an
external one available from the point of view of a god that can see the
inquirer and the world from the side, as it were.

At the time Putnam was writing on these issues he never really doubted
that mental states involve relations to language-like representations
or "thought-signs". I believe that presupposition is also a big part of
the problem. Believe in thought-signs and then there can be an
intelligible question of whether they correspond to something outside
them.

Reject thought-signs in favor of a more existentialist conception of
intentionality as a thoroughly situated openness to being, and the
issue sort of evaporates. It becomes clear instead that it really
makes no sense to speak of a cognitive subject as if it might be a
brain in a vat, without ever committing yourself to the body and world
in which it does its living. I would say the real reason "metphysical
realism" is incoherent is more that mind is world- (esp. body-)
dependent, not the other way around.

So while I can agree that there is only what he takes to be the view
from inside, once you jettison the hope for a god's eye view you should
also recognize that that view is really unbounded. It is of course from
this our own grounded point view that we articulate the distinction *for
us* between what's mental and not mental, subjective and objective and
so on, not some supposed transcendent vantage point.

Anders N Weinstein

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In article <5dviqu$8...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5dvdtq$6...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>
>>Fair enough. But I would say it is still an objective matter whether it
>>contains differential equations in that you can be right or wrong about
>>that. Your seeing it as so doesn't suffice to make it so -- perhaps it
>>was produced by accident. I expect you agree, although I am not sure.
>
>I'll agree that my seeing it so doesn't make it so. But I don't
>think there is anything objective about it. They only become
>differential equations when I decide to interpret them using a
>particular information representation system. I can be right or

Misleading, since this is not exactly a free choice of yours. Or
rather: you are perfectly free not to take an interest in the fact
that they are differential equations. You might be a typographer who
will focus on the type-face instead. Sure, trivial, no one would ever
doubt this, and no realism is threatened. For the objective facts at
the level of the typeface do not conflict with the objective facts at
the level of differential equations.

But if you are looking at what some mathematicians are doing,
it simply is not up to your free choice whether or not they
are differential equations. If you decide not to interpret them as such
then you are missing something.

>wrong about whether they constitute differential equations within
>that mathematical representation system. But the question of
>rightness or wrongness is relative to the representation system. The
>artist, using a different representation system, could be right about
>the aesthetic qualities of the squiggles at the same time I am right
>about their being differential equations.

The artistic case is a little tricky, but in general sure. So far you
have not stated any exciting relativity of truth to representation
system, you have just distinguished two perfectly compatible subsets
of truths, which is boring.

>>>Suppose that in 1950, a television transmitter broadcast an identical
>>>signal to what is being used today. The standard for chroma
>>>information had not been set. There did not exist any color
>>>television sets. Would you still say that the chroma information was
>>>objectively present?
>
>>No, of course not. So what?
>
>Then it would seem that your notion of "objective information" is
>based on the intention of the sender of the information. As I

The point of the analogy was really very limited. Since that is
not natural information, it doesn't fully apply.

>This seems to have it backwards. We do not say that
>
> because there is a reliable process in the sun for producing
> photons, therefore those photons inform us about the nuclear
> processes in the sun.

I certainly think that is what we should say.

>Rather, we say
>
> because of the properties of the light signal, we have
> information about the sun. Because the information appears

We only have information about the sun because the properties of
the light signal are correlated with what is happening in the sun.
If we are wrong about the correlation then we are wrong that we
have information. And the sources of justification for this claim
are very broad, they include everything involved in justifying
atomic theory. Certainly it is not limited to patterns in light rays.

> reliable according to our measurements, we can conclude that
> there must be a reliable causal process in the sun producing
> those photons.

Now I would say you are confusing epistemology with ontology, the
question of why it *appears* to us that light rays bear information
about the sun with the question about what it is for that to be true.
After all, it can perfectly well appear to us that the light rays bear
information but that they not in fact do so.

>It seems to me that with your sender-based notion of "information",
>you are requiring complete reliable knowledge of causal processes as
>a prerequisite to having the information which could provide us with
>that knowledge. With such a notion of "information," I don't see how
>science could ever get off the ground. That is why I prefer a
>receiver-based "information."

The mistake is thinking that in order to detect information you have to
have first verified that it is information. For example, if Gibson is
right then very small children and animals can detect information about
the sizes and affordances of objects in the world, perhaps even from
birth without learning. Small babies that do not crawl over Eleanor
Gibson's "visual cliff" (a patterned drop covered with plexiglass) are
exercising (incorrectly) a capacity for detecting what affords support,
in his view. Now they certainly are not capable of *confirming* that
they are detecting *information* about what affords support, especially
since this might be innate. The idea that they do so would seem to be
absurd -- infants are not in the business of confirming scientific
hypotheses and could not be until they get language. It is only when
they grow up and do science that they could be in an epistemic position
to actually formulate the concept of information detector, which they
instantiated as infants.

I suspect you really are a traditional epistemologist at heart. You
often seem to want to respect the traditional myth of the primacy of
appearance -- the idea that all knowledge is based on something
subjective, on how things *seem* or *appear* to one, construed as a
kind of private realm of data. But the main point of direct realism and
externalist epistemology should be that the doctrine of the primacy of
appearance is false. Coming to know subjective appearances is a *much*
more sophisticated act, one that is really secondary, and is dependent
for its nature on the ability to perceive and act in the world
directly.

Neil Rickert

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In <5dvoj3$8...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5dviqu$8...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>I'll agree that my seeing it so doesn't make it so. But I don't
>>think there is anything objective about it. They only become
>>differential equations when I decide to interpret them using a
>>particular information representation system. I can be right or

>Misleading, since this is not exactly a free choice of yours. Or
>rather: you are perfectly free not to take an interest in the fact
>that they are differential equations. You might be a typographer who
>will focus on the type-face instead. Sure, trivial, no one would ever
>doubt this, and no realism is threatened.

I was not attempting to threaten realism. That was something you
have been (mis)reading into what I wrote.

>> But the question of
>>rightness or wrongness is relative to the representation system. The
>>artist, using a different representation system, could be right about
>>the aesthetic qualities of the squiggles at the same time I am right
>>about their being differential equations.

>The artistic case is a little tricky, but in general sure. So far you
>have not stated any exciting relativity of truth to representation
>system, you have just distinguished two perfectly compatible subsets
>of truths, which is boring.

Again, I was not trying to proclaim any exciting relativity of truth
to representation system. That truth is relative to representation
system, I take as quite clear and as quite unexciting. It is my
impression that both relativists and absolutists over truth are
confused about this, and both draw unjustified conclusions.

>>>>Suppose that in 1950, a television transmitter broadcast an identical
>>>>signal to what is being used today. The standard for chroma
>>>>information had not been set. There did not exist any color
>>>>television sets. Would you still say that the chroma information was
>>>>objectively present?

>>>No, of course not. So what?

>>Then it would seem that your notion of "objective information" is
>>based on the intention of the sender of the information. As I

>The point of the analogy was really very limited. Since that is
>not natural information, it doesn't fully apply.

The distinction between natural and unnatural information doesn't
seem to make much sense here. In any case, humans are part of
nature, so the signals they generate should be considered as natural
as anything else.

>>This seems to have it backwards. We do not say that

>> because there is a reliable process in the sun for producing
>> photons, therefore those photons inform us about the nuclear
>> processes in the sun.

>I certainly think that is what we should say.

Then science is stopped dead in its tracks, and we should abandon all
hope of having knowledge.

>>Rather, we say

>> because of the properties of the light signal, we have
>> information about the sun. Because the information appears

>We only have information about the sun because the properties of
>the light signal are correlated with what is happening in the sun.

No, I don't agree. We have information about the sun because the
light signal carries that information. That the light signal carries
information is determinable from the nature of the light signal.

>If we are wrong about the correlation then we are wrong that we
>have information.

No, again I disagree. That we have information is determinable
regardless of any correlation with what is happening in the sun. If
we are wrong about that correlation, then we are wrong in our
scientific theories as to what is happening in the sun. But we are
not wrong about the light carrying information. Rather, we have
misinterpreted that information. If you like, we would be wrong in
our conclusion as to what the information is about, but we are not
wrong that there is information.

> And the sources of justification for this claim
>are very broad, they include everything involved in justifying
>atomic theory. Certainly it is not limited to patterns in light rays.

Nevertheless, it started with patterns in light rays.

>> reliable according to our measurements, we can conclude that
>> there must be a reliable causal process in the sun producing
>> those photons.

>Now I would say you are confusing epistemology with ontology, the
>question of why it *appears* to us that light rays bear information
>about the sun with the question about what it is for that to be true.

I think it is you who are confused.

As I see it, ontology is a servant of epistemology. Our ideas about
atoms and molecules came from Dalton's effort to model the
combinatorial relationship observed in chemical reactions. Our ideas
about genes came from Mendel's observations on the combinatorial
character of simple inheritance. In both cases an ontological
judgement was made on the basis of epistemological evidence. Other
supporting evidence for atoms, molecules, and genes did not show up
until later.

Granted, scientists sometimes make mistakes. Thus 'phlogiston' and
'caloric' were both considered when the evidence was meager, as for
that matter was the 'big bang'. Phlogiston and caloric turned out to
be mistakes. The big bang may yet turn out to have been a mistake.

>After all, it can perfectly well appear to us that the light rays bear
>information but that they not in fact do so.

I would like to see a convincing case. It may perfectly well appear
that light rays bear information about X, but that they do not in
fact do so. But it would not follow that they do not bear
information. For example, if it should turn out that red shift seen
in the light waves from outer space does not actually bear
information about the big bang -- for example if astronomers
eventually determine that there was no big bang -- but it would not
follow that the red shift in the light waves does not bear
information. It would only mean that we misinterpreted that
information.

>>It seems to me that with your sender-based notion of "information",
>>you are requiring complete reliable knowledge of causal processes as
>>a prerequisite to having the information which could provide us with
>>that knowledge. With such a notion of "information," I don't see how
>>science could ever get off the ground. That is why I prefer a
>>receiver-based "information."

>The mistake is thinking that in order to detect information you have to
>have first verified that it is information.

There is no such mistake, I think. Surely you have to verify that it
is information, but such verification is not usually a problem.

> For example, if Gibson is
>right then very small children and animals can detect information about
>the sizes and affordances of objects in the world, perhaps even from
>birth without learning. Small babies that do not crawl over Eleanor
>Gibson's "visual cliff" (a patterned drop covered with plexiglass) are
>exercising (incorrectly) a capacity for detecting what affords support,
>in his view.

It is not clear that this is an ability to detect before birth, since
at birth small babies are not into crawling. In any case, consider
the newborn antelope, which can walk around and maintain some degree
of balance at birth. I would agree that the antelope can detect
appropriate information at birth which would allow such walking. It
does not follow that there has been no verification that there is
information. It at most suggests that the verification was mainly
done by processes of natural selection that occurred long ago.

> Now they certainly are not capable of *confirming* that
>they are detecting *information* about what affords support, especially
>since this might be innate.

Again, you are confusing "verified that there was information" with
"verified that there was information about X." These are not the
same at all.

> The idea that they do so would seem to be
>absurd -- infants are not in the business of confirming scientific
>hypotheses and could not be until they get language. It is only when
>hypotheses and could not be until they get language. It is only when
>they grow up and do science that they could be in an epistemic position
>to actually formulate the concept of information detector, which they
>instantiated as infants.

Your dualism is confusing you. The information can be explored by
neural processes and processes of natural selection. Conscious
participation is not required. Part of why I regularly criticize
traditional epistemology, is that most of what is important to
acquiring knowledge is carried out at an unconscious level by neural
processes. Then epistemologists come along and concoct a cock and
bull story about conscious activities that will make for a nice
sounding fairy tail.

>I suspect you really are a traditional epistemologist at heart. You

Nonsense.

> You
>often seem to want to respect the traditional myth of the primacy of
>appearance -- the idea that all knowledge is based on something
>subjective, on how things *seem* or *appear* to one, construed as a
>kind of private realm of data.

Appearance has nothing to do with it. It is what happens at the
neural level that matters. The appearances themselves may change as
knowledge is acquired.

> But the main point of direct realism and
>externalist epistemology should be that the doctrine of the primacy of
>appearance is false.

The primacy of language is equally false, yet you cling to an
emphasis on language in your epistemic theories.

> Coming to know subjective appearances is a *much*
>more sophisticated act, one that is really secondary, and is dependent
>for its nature on the ability to perceive and act in the world
>directly.

I haven't suggested anything to the contrary. As I said in my prior
post, we mean different things by "subjective". All I mean is not
objective. And to me, to say that something (say an observation) is
objective is that say that, at least in principle, we could set up
completely automated mechanical equipment to make the observation
with no human judgement involved.


Brig Klyce

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

On 13 Feb 1997 11:25:50 -0600, ric...@cs.niu.edu (Neil Rickert)
wrote:
[much deleted]

>Then it would seem that your notion of "objective information" is
>based on the intention of the sender of the information. As I
>thought I had made clear, I need a notion of "information" which is
>receiver relative rather than sender relative.
[more deleted]

>Again, consider that 1950 television transmitting a modern signal.
>What we call chroma information was present, and detectable by 1950s
>scientists. In my earlier posting, I think I suggested that they
>would see it as noise. But actually, they would see it as
>distortion. If it were noise, then that part of the signal would be
>random. However, because that part of the signal is correlated in
>some strange way with other parts of the signal, it would be seen as
>distortion rather than as noise. The 1950s scientists would
>recognize that part of the signal as information, but as information
>about the presence of some distorting component in the transmitting
>equipment. Perhaps if the 1950s scientists investigated thoroughly
>enough, they might find that the "distortion" is correlated to the
>natural colors of what was being transmitted in black and white.
>However, it is unlikely that they would make such a discovery, since
>they would be more interested in removing the distortion to improve
>the fidelity of the signal.
[more deleted]

>It seems to me that with your sender-based notion of "information",
>you are requiring complete reliable knowledge of causal processes as
>a prerequisite to having the information which could provide us with
>that knowledge. With such a notion of "information," I don't see how
>science could ever get off the ground. That is why I prefer a
>receiver-based "information."

Clearly, some information comes originally from a sender ("messages"),
and some doesn't ("data").

Messages are intrinsically meaningful, like a blaze on a trail. Like
computer programs, they can prescribe complicated actions to be
carried out by relatively simple all-purpose machines.

Data are intrinsically meaningless, like clothes on the floor. Even if
the clothes turn out to be clues at a crime scene, a detective must
assign meaning to them. Data cannot serve as programs; they can cause
only simple responses to be carried out by single-purpose machines
like fire alarms.

Of course there are times when one cannot be sure a signal contains
merely data or perhaps a message. This is the problem faced by SETI.
In the case of the color TV signal in the 1950's, part of the message
is simply not being understood.

I think the word "information," without this distinction, leads to
confusion.
++++
Brig Klyce * bkl...@panspermia.org
(901) 726-1111 fax 726-0120
1503 Union Ave #216B * Memphis, TN 38104
COSMIC ANCESTRY: http://www.panspermia.org

Neil Rickert

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

>Clearly, some information comes originally from a sender ("messages"),
>and some doesn't ("data").

I think it is not all that clear. Those who believe in a personal
God might say that all comes from a sender. A strict behaviorist
like David Longley might say that it is all just data.

>Messages are intrinsically meaningful, like a blaze on a trail.

Well, that would certainly solve the intentionality problem. Searle
and others have posed the question of how a computer can be dealing
with meaning (semantics) rather than just with syntax. Searle
claimed that the required property (intentionality) is intrinsically
present in humans, but absent in computers. However if the
meaningfullness is intrinsically present in the message itself, this
would seem to solve Searle's problem.

>Data are intrinsically meaningless, like clothes on the floor. Even if
>the clothes turn out to be clues at a crime scene, a detective must
>assign meaning to them. Data cannot serve as programs; they can cause
>only simple responses to be carried out by single-purpose machines
>like fire alarms.

>Of course there are times when one cannot be sure a signal contains
>merely data or perhaps a message. This is the problem faced by SETI.

So if we decide that a SETI signal is a message from an alien, then
magically it will be intrinsically meaningful, and we will understand
it at once. Otherwise we will have to assign a meaning to the signal
before we can understand it.

It seems like a strange notion of intrinsic meaningfulness.


Neil Rickert

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

In <5e0j4n$c...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5dvgjq$8...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>[re iron rusting]


>>Why is that not information processing? It seems to me that there
>>has been a transformation of representations. The information that
>>was formerly represented by the presence of oxygen in the atmosphere
>>has now been represented by the presence of rust on the surface of
>>the iron. The rusting amounts to a natural transduction process.

>Transduction of *what* information?

If nothing else, information about the presence of oxygen.

> In what sense is the iron making use
>of the information? In what sense is the iron's reaction appropriate to
>its content? In what sense is a hunk of iron a thing to which
>information is relevant?

I am not particularly concerned with whether the information
processing element can be said to have intentions. If we get too
fussy then we rule out the transistors in our computers as doing any
information processing.

I would rather take the view that there are many natural information
processing systems, albeit rather simple ones. Then intelligence is
possible because these natural processing systems have been harnessed
and concentrated.

>Do you really want to say that everything that happens is
>information processing?

I wasn't specifically saying that, but I do not find the idea at all
troubling.

> This looks like just a slippery slope argument,
>like saying an atomic nucleus is alive because there's no sharp line
>between living and non-living.

I don't see that it is the same at all. In any case, I am not
concerning myself with defining 'life', so I'll leave that problem to
the biologists.

>>Your problem is that you don't want to say that there was any
>>information, unless some human passes judgement on it to say that it
>>was information. Therein lies your dualism. If a tree falls in the

>Not accurate.

>One might say there must be a potentially judgeable content,
>but not necesarily any actual judgment. And actually I do think that
>animals deal with information too, only not quite of the same
>conceptual character.

If all biological life disappeared, would our computers stop being
information processors because there did not exist potential judges?

>This is perhaps a real issue: the concept of information is
>organism-relative. What is highly relevant to a squirrel need not be
>relevant to me.

I agree. This seemed to be a position you were arguing against in
other messages.

> In this sense one might wonder if it makes sense to
>think of ecological information as objectively there in the world in
>patterns which would only be relevant to some imaginary creature of
>such and such a kind that has never and will never exist.

It is not up to us to decide what is relevant to others.


Anders N Weinstein

unread,
Feb 14, 1997, 3:00:00 AM2/14/97
to

In article <5dvgjq$8...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
[re iron rusting]
>Why is that not information processing? It seems to me that there
>has been a transformation of representations. The information that
>was formerly represented by the presence of oxygen in the atmosphere
>has now been represented by the presence of rust on the surface of
>the iron. The rusting amounts to a natural transduction process.

Transduction of *what* information? In what sense is the iron making use


of the information? In what sense is the iron's reaction appropriate to
its content? In what sense is a hunk of iron a thing to which
information is relevant?

Do you really want to say that everything that happens is
information processing? This looks like just a slippery slope argument,


like saying an atomic nucleus is alive because there's no sharp line
between living and non-living.

>Your problem is that you don't want to say that there was any


>information, unless some human passes judgement on it to say that it
>was information. Therein lies your dualism. If a tree falls in the

Not accurate.

One might say there must be a potentially judgeable content,
but not necesarily any actual judgment. And actually I do think that
animals deal with information too, only not quite of the same
conceptual character.

This is perhaps a real issue: the concept of information is


organism-relative. What is highly relevant to a squirrel need not be

relevant to me. In this sense one might wonder if it makes sense to


think of ecological information as objectively there in the world in
patterns which would only be relevant to some imaginary creature of

such and such a kind that has never and will never exist. Or ask if there
is now objective information in the world that would only be relevant to
dodo birds. Or whether we can hope to quantify over every possible form
of animal life in every possible niche when thinking about such
situtations.

In practice this is so hopeless that we need not worry about it.
If you like you could make the actual existence of a potential receiver
somewhere thereabouts a condition on the actual existence of
information in the world.

>forest, and there is no human to pass judgement as to whether there
>was information, did it make a sound -- or could there even have been
>a tree or a forest?

Yes it did. Next question.


Anders N Weinstein

unread,
Feb 14, 1997, 3:00:00 AM2/14/97
to

In article <5e0426$9...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5dvoj3$8...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>> So far you
>>have not stated any exciting relativity of truth to representation
>>system, you have just distinguished two perfectly compatible subsets
>>of truths, which is boring.
>
>Again, I was not trying to proclaim any exciting relativity of truth
>to representation system. That truth is relative to representation
>system, I take as quite clear and as quite unexciting. It is my
>impression that both relativists and absolutists over truth are
>confused about this, and both draw unjustified conclusions.

I take it that there is no good sense in which truth is relative to
representation system. Better to say meaning or content is relative to
representation system in the sense that one language (representation system)
might express meanings which another one doesn't. But given a particular
content, the only thing one can say ("with unhelpful realism" as Quine
once put it) is that truth is determined by the way the world is.

This is basically because the power of judgeable contents to be true
or false about the world is metaphysical bedrock in thinking about
knowledge. There is really nothing more basic in terms of which one could
explain it. Which particular truths merit acceptance is of course another
matter. But I don't think there could possibly be an informative theory
of the nature of truth -- if one doesn't assume propositions and their
power of describing reality, one is at a loss as to how to speak about
such matters, unless one just moves to a different level, like the physical.

Be that as it may, I would also suggest you need to keep in mind the
distintion between two sorts of cases: first, the case where different
people are focussed on different entirely compatible truths -- "those
are differential equations" vs. "those are nicely shaped ink marks" --
both of which can without difficulty be conjoined and held together in
one system of knowledge. The case under discussion was of this
variety.

But what appears to raise the problem is the case where the two
claims are in different vocabularies, but *can't* be conjoined and held
together simultaneously. e.g. what the Salem folks said about witches
and what I believe. The relativist wants to say these two claims are
also both true in their respective representation systems, *even* though
they cannot rationally be conjoined in one knower. That is what I think
we must reject. We can say the two parties mean different things, but
either they are incompatible or I just don't know what they mean
and cannot evaluate it.

>The distinction between natural and unnatural information doesn't
>seem to make much sense here. In any case, humans are part of
>nature, so the signals they generate should be considered as natural
>as anything else.

Fine, there is nothing more natural for human beings then to have
culture and conventions. There is still an intelligible distinction
between what is independent of human convention and what is not.

>>>This seems to have it backwards. We do not say that
>
>>> because there is a reliable process in the sun for producing
>>> photons, therefore those photons inform us about the nuclear
>>> processes in the sun.
>
>>I certainly think that is what we should say.
>
>Then science is stopped dead in its tracks, and we should abandon all
>hope of having knowledge.

Why? If someone asks me: is the Earth round or flat? and I say: "round".
Will they say: then science has stopped dead in its tracks?

Obviously we have to use our knowledge in reflecting on it. There is
nothing we can say if we are unwilling to presuppose some empirical
knowledge. If it turns out that our first-order knowledge is in error,
then we might have to revise our second-order (i.e. reflective or
epistemological) knowledge as well. But they go together inseparably.

>>If we are wrong about the correlation then we are wrong that we
>>have information.
>
>No, again I disagree. That we have information is determinable
>regardless of any correlation with what is happening in the sun. If
>we are wrong about that correlation, then we are wrong in our
>scientific theories as to what is happening in the sun. But we are
>not wrong about the light carrying information. Rather, we have
>misinterpreted that information. If you like, we would be wrong in
>our conclusion as to what the information is about, but we are not
>wrong that there is information.

I have no idea what conception of information you are using. I
say information is individuated by its content, and information
about the sun is different information from information
about the atmosphere.

If all we know is that there is a pattern to the light then we have
virtually no information at all, beyond information about the pattern.

I am afraid when you say such things that you really operating with a
sense of information according to which it is meaningless and unrelated
to anything beyond itself.

>As I see it, ontology is a servant of epistemology. Our ideas about
>atoms and molecules came from Dalton's effort to model the
>combinatorial relationship observed in chemical reactions. Our ideas
>about genes came from Mendel's observations on the combinatorial
>character of simple inheritance. In both cases an ontological
>judgement was made on the basis of epistemological evidence. Other
>supporting evidence for atoms, molecules, and genes did not show up
>until later.

I'm not sure I understand. It looks to me as if "epistemological evidence" is
just evidence, and all you are are saying is that scientific hypotheses
about the nature of the world are based on evidence.

As always, I think in the scientific case an inquirer is confronted
with a pattern in data propositions that is in need of explanation. But
what about the data propositions themselves? They are not based on
anything more epistemologically basic, since any epistemology must
start with conceptualized data propositions and nothing is more
epistemically basic.

At best you have an analogy: the mindless neural circuitry that "looks"
for patterns in the stimuli can be suggestively compared to a scientist
looking for patterns in propositions recording observations.

>I would like to see a convincing case. It may perfectly well appear
>that light rays bear information about X, but that they do not in
>fact do so.

That is all I meant.

> I would agree that the antelope can detect
>appropriate information at birth which would allow such walking. It
>does not follow that there has been no verification that there is
>information. It at most suggests that the verification was mainly
>done by processes of natural selection that occurred long ago.

This doesn't contradict what I said -- one can be a detector of
information without having verified that one is.

Further natural selection is not really an epistemic agent that
knows and verifies things, although its operation is akin to testing and
confirmation, I'll grant you that.

>> Now they certainly are not capable of *confirming* that
>>they are detecting *information* about what affords support, especially
>>since this might be innate.
>
>Again, you are confusing "verified that there was information" with
>"verified that there was information about X." These are not the
>same at all.

Either way, the child has not verified either.

In Gibson's use, the perceptual systems of a small infant are, among
other things, detectors of the information that something affords
support. They are *not* detectors of information that something is
information.

>traditional epistemology, is that most of what is important to
>acquiring knowledge is carried out at an unconscious level by neural
>processes. Then epistemologists come along and concoct a cock and
>bull story about conscious activities that will make for a nice
>sounding fairy tail.

I would agree that a lot of unconscious stuff is a prerequisite for
human beings to acquire and exercise understanding. I can't exactly see
that it makes sense to say that the knowledge is in unconscious neural
processes, however. At any rate it raises the question about the
relation between what I know, see, think about, understand, and what my
neurons do.


David Longley

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In article <5e2tpu$l...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:
>
> Be that as it may, I would also suggest you need to keep in mind the
> distintion between two sorts of cases: first, the case where different
> people are focussed on different entirely compatible truths -- "those
> are differential equations" vs. "those are nicely shaped ink marks" --
> both of which can without difficulty be conjoined and held together in
> one system of knowledge. The case under discussion was of this
> variety.

Behavioural chains, webs, networks seem to be what you mean by
"representational" system. From the extensional stance, Quine called
the phrase "web of belief" back in 1951). Why not accept what Stich
had to say on this (for psychology)?

'This argument was part of a larger project. Influenced
by Quine, I have long been suspicious about the
integrity and scientific utility of the commonsense
notions of meaning and intentional content. This is not,
of course, to deny that the intentional idioms of
ordinary discourse have their uses, nor that the uses
are important. But, like Quine, I view ordinary
intentional locutions as projective, context sensitive,
observer relative, and essentially dramatic. They are
not the sorts of locutions we should welcome in serious
scientific discourse. For those who share this Quinean
scepticism, the sudden flourishing of cognitive
psychology in the 1970s posed something of a problem. On
the account offered by Fodor and other observers, the
cognitive psychology of that period was exploiting both
the ontology and the explanatory strategy of commonsense
psychology. It proposed to explain cognition and certain
aspects of behavior by positing beliefs, desires, and
other psychological states with intentional content, and
by couching generalisations about the interactions among
those states in terms of their intentional content. If
this was right, then those of us who would banish talk
of content in scientific settings would be throwing out
the cognitive psychological baby with the intentional
bath water. On my view, however, this account of
cognitive psychology was seriously mistaken. The
cognitive psychology of the 1970s and early 1980s was
not positing contentful intentional states, nor was it
(adverting) to content in its generalisations. Rather, I
maintained, the cognitive psychology of the day was
"really a kind of logical syntax (only psychologized).
Moreover, it seemed to me that there were good reasons
why cognitive psychology not only did not but SHOULD not
traffic in intentional states. One of these reasons was
provided by the Autonomy argument.'

Stephen P. Stich (1991)
Narrow Content meets Fat Syntax
in MEANING IN MIND - Fodor And His Critics

and writing with others in 1991, even more dramatically:

'In the psychological literature there is no dearth of
models for human belief or memory that follow the lead
of commonsense psychology in supposing that
propositional modularity is true. Indeed, until the
emergence of connectionism, just about all psychological
models of propositional memory, except those urged by
behaviorists, were comfortably compatible with
propositional modularity. Typically, these models view a
subject's store of beliefs or memories as an
interconnected collection of functionally discrete,
semantically interpretable states that interact in
systematic ways. Some of these models represent
individual beliefs as sentence like structures - strings
of symbols that can be individually activated by their
transfer from long-term memory to the more limited
memory of a central processing unit. Other models
represent beliefs as a network of labelled nodes and
labelled links through which patterns of activation may
spread. Still other models represent beliefs as sets of
production rules. In all three sorts of models, it is
generally the case that for any given cognitive episode,
like performing a particular inference or answering a
question, some of the memory states will be actively
involved, and others will be dormant......

The thesis we have been defending in this essay is that
connectionist models of a certain sort are incompatible
with the propositional modularity embedded in
commonsense psychology. The connectionist models in
question are those that are offered as models at the
COGNITIVE level, and in which the encoding of
information is widely distributed and subsymbolic. In
such models, we have argued, there are no DISCRETE,
SEMANTICALLY INTERPRETABLE states that play a CAUSAL
ROLE in some cognitive episodes but not others. Thus
there is, in these models, nothing with which the
propositional attitudes of commonsense psychology can
plausibly be identified. If these models turn out to
offer the best accounts of human belief and memory, we
shall be confronting an ONTOLOGICALLY RADICAL theory
change - the sort of theory change that will sustain the
conclusion that propositional attitudes, like caloric
and phlogiston, do not exist.'

W. Ramsey, S. Stich and J. Garon (1991)
Connectionism, eliminativism, and the future of folk
psychology.

The implications here are that progress in applying psychology will be
impeded if psychologists persist in trying to talk about, or use
psychological (intensional) phenomena within a framework (evidential
behaviourism) which inherently resists quantification into such
terms. Without bound, extensional predicates, we can not reliably use
the predicate calculus, and without the predicate (functional)
calculus we can not formulate lawful relationships, statistical or
determinate.

We can model folk psychological networks of belief with ANNs, but the
EVIDENCE suggests that we would be foolhardy to trust anything
requiring reliable, rule driven operation, to such systems - in the
final analysis ..... they are *anarachic*

--
David Longley


Neil Rickert

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In <5e2tpu$l...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5e0426$9...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>Again, I was not trying to proclaim any exciting relativity of truth
>>to representation system. That truth is relative to representation
>>system, I take as quite clear and as quite unexciting. It is my
>>impression that both relativists and absolutists over truth are
>>confused about this, and both draw unjustified conclusions.

>I take it that there is no good sense in which truth is relative to
>representation system. Better to say meaning or content is relative to
>representation system in the sense that one language (representation system)
>might express meanings which another one doesn't. But given a particular
>content, the only thing one can say ("with unhelpful realism" as Quine
>once put it) is that truth is determined by the way the world is.

>This is basically because the power of judgeable contents to be true
>or false about the world is metaphysical bedrock in thinking about
>knowledge.

As far as I can tell, metaphysics is a religion with no basis to
support it.

You use a notion of "content" which I take as incoherent. You should
leave your solipsistic palace, and take a look around you. If you
were to do so, you would see that in practice, "truth" is an
attribute we apply to representations. As such, it could not but be
relative to the representation system in which the representations
are made. You talk of "the way the world is", but strictly speaking
that too is incoherent. The closest we can ever come is to our
representations of the world, and these are relative to whatever
representation system we are using.

>But what appears to raise the problem is the case where the two
>claims are in different vocabularies, but *can't* be conjoined and held
>together simultaneously. e.g. what the Salem folks said about witches
>and what I believe. The relativist wants to say these two claims are
>also both true in their respective representation systems, *even* though
>they cannot rationally be conjoined in one knower. That is what I think
>we must reject. We can say the two parties mean different things, but
>either they are incompatible or I just don't know what they mean
>and cannot evaluate it.

I have already suggested that relativists are as confused about
"truth" as are absolutists. So I am not about to defend the follies
of relativism.

>>>>This seems to have it backwards. We do not say that

>>>> because there is a reliable process in the sun for producing
>>>> photons, therefore those photons inform us about the nuclear
>>>> processes in the sun.

>>>I certainly think that is what we should say.

>>Then science is stopped dead in its tracks, and we should abandon all
>>hope of having knowledge.

>Why? If someone asks me: is the Earth round or flat? and I say: "round".
>Will they say: then science has stopped dead in its tracks?

We say that the earth is round because we (or our forebears)
collected information which convinced us that the earth is round.
The information preceded the knowledge. But on your claim the
knowledge must come first else you have no basis for the claim that
there is information. And if you have no basis for assuming that
there is information, you would have no basis for determining that
the earth is round.

>I have no idea what conception of information you are using. I
>say information is individuated by its content, and information
>about the sun is different information from information
>about the atmosphere.

And therefore you are insisting that rationalism is the only possible
philosophy. The empiricist claims that knowledge arises from
information. But on your claim there is no information without
knowledge of the content. If there is to be any hope of an
empiricist program, the notion of "information" cannot be tied to
prior knowledge of 'content'.

>If all we know is that there is a pattern to the light then we have
>virtually no information at all, beyond information about the pattern.

On the contrary, we have a great deal of information which we can use
to start to sort out the nature of the world.

>I am afraid when you say such things that you really operating with a
>sense of information according to which it is meaningless and unrelated
>to anything beyond itself.

I think we should perhaps say that information is intrinsically
meaningless, and meaning is something we add by virtue of the way we
use the information. But as to your charge of "unrelated", you
should keep in mind that a pattern in the light rays is already a
relation. If a pattern reliably recurs, we can reasonably presume
that there is a cause for that recurrence. The progress of science
is based on just such presumptions.

>As always, I think in the scientific case an inquirer is confronted
>with a pattern in data propositions that is in need of explanation.

I think that is a gross oversimplification. If we keep in mind the
theory-laden nature of data, then it should be clear that often the
data propositions (if there are such things) must come rather late in
the game of scientific discovery.

> But
>what about the data propositions themselves?

I am not convinced that there are such things.

> They are not based on
>anything more epistemologically basic, since any epistemology must
>start with conceptualized data propositions and nothing is more
>epistemically basic.

Why must epistemology start with propositions? That strikes me as
obviously wrong. The world is not constructed out of propositions.

>At best you have an analogy: the mindless neural circuitry that "looks"
>for patterns in the stimuli can be suggestively compared to a scientist
>looking for patterns in propositions recording observations.

I think that is a poor characterization of the activity of the
scientist.

>Further natural selection is not really an epistemic agent that
>knows and verifies things, although its operation is akin to testing and
>confirmation, I'll grant you that.

This is nothing but the intellectualist legend. Natural selection
has succeeded in designing homo sapiens, but that counts as nothing
since the processes of natural selection failed to intellectualize
what they were doing.

>>> Now they certainly are not capable of *confirming* that
>>>they are detecting *information* about what affords support, especially
>>>since this might be innate.

>>Again, you are confusing "verified that there was information" with
>>"verified that there was information about X." These are not the
>>same at all.

>Either way, the child has not verified either.

This is but another example of the intellectualist legend.

>In Gibson's use, the perceptual systems of a small infant are, among
>other things, detectors of the information that something affords
>support.

But then Gibson swept most of the problems under the rug.

>>traditional epistemology, is that most of what is important to
>>acquiring knowledge is carried out at an unconscious level by neural
>>processes. Then epistemologists come along and concoct a cock and
>>bull story about conscious activities that will make for a nice
>>sounding fairy tail.

>I would agree that a lot of unconscious stuff is a prerequisite for
>human beings to acquire and exercise understanding. I can't exactly see
>that it makes sense to say that the knowledge is in unconscious neural
>processes, however.

I did not say that. I said that the acquisition of knowledge is
carried out by unconscious neural processes. The knowledge itself I
would place in the the structure of the neural system.

> At any rate it raises the question about the
>relation between what I know, see, think about, understand, and what my
>neurons do.

I'm not sure why it raises that question. I thought that was already
the question that needed to be considered.


Anders N Weinstein

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In article <5e0lq3$9...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>If all biological life disappeared, would our computers stop being
>information processors because there did not exist potential judges?

Well, they would still have their history. After the Romans died out
you can still dig up one of their drinking vessels, and it is still a
Roman drinking vessel you have found. I.e. you can ask the question
"what is it" and answer first with "we need more evidence" and later
with, "a drinking vessel". Of course it could come to be appropriated
for some other use by tomb-robbers, but that doesn't change the
objectivity of the first question.

In that sense artifactual norms are relationally determined but
objective. In a similar sense artifactual computers would continue to
be computers even if all human beings die out.

On the other hand, as an externalist about artifactual information
processing, I would say something molecule for molecule identical to a
computer which came into existence randomly would *not* be a computer
and would *not* be doing information processing. No more than a piece
of clay randomly formed into the shape of a Roman drinking vessel would
be a drinking vessel.

I would say the metaphysics of natural information processing must be
different from the metaphysics of artifactual information processing. The
latter involves derivative intentionality. In the former case
the norms of proper function must be autonomously determined, I would
say, and that is why you must have a thing that can live on its own in an
environment. Because it is only in relation to that normal form of life --
the functional spec of the whole, as it were -- that the proper
functioning of the *parts* can be determined.

Of course if we built a Mars explorer that was designed to survive on
its own by cultivating useful concepts starting with an "unlabelled
world", possibly it too could be said to have an autonomous as opposed
to derivative semantics. For then we might be said to have made an
artificial living thing.

Anders N Weinstein

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In article <855992...@longley.demon.co.uk>,

David Longley <Da...@longley.demon.co.uk> wrote:
>
> Why not accept what Stich
>had to say on this (for psychology)?
>
> 'This argument was part of a larger project. Influenced
> by Quine, I have long been suspicious about the
> integrity and scientific utility of the commonsense
> notions of meaning and intentional content. This is not,

I can accept this. You have to appreciate that in my heart I don't
really give a shit about "scientific utility". I'm interested in what's
*true*, whether or not it is of scientific utility.

It is very clear that the concepts of scientific utility are only a
subset of the concepts usable for stating truths about the world.

As to "integrity" I think Stich is simply very badly confused about what
is required for these concepts to have integrity. For example here:

> 'In the psychological literature there is no dearth of
> models for human belief or memory that follow the lead
> of commonsense psychology in supposing that
> propositional modularity is true. Indeed, until the

My position is that the everyday use of psychological concepts makes
*no commitments at all* about "propositional modularity" as Stich
defines it. That is a matter of physically internal representational
vehicles.

Here perhaps is an analogy: A baseball manager wants to know about a
prospect: can he hit? can he throw? can he run? can he field ground
balls? can he catch fly balls? The baseball manager has a scouting
report indexed by answers to these questions.

Now a Stichian cognitive scientist comes along and says: you know, if
you look inside the neural circuitry of a skilled baseball player, you
can not find a functionally discrete module dedicated to throwing and
another one dedicated to hitting. In fact all these "folk baseball"
concepts are a very poor guide to the internal organization of one's
control circuitry. They "lack the integrity" of a properly scientific
account of internal mechanisms.

Now I will concede there could be a useful point to such a claim. It
might well help the coaches impart baseball skills more effectively to
know about such internal structures. Maybe these structures account for
a kind of crosstalk between learning to throw and learning to catch
that the coaches would do well to take into account.

But notice that what the manager asked did not really commit him to any
idea of "skill modularity" in the cognitive scientist's sense. He did
not ask if the prospect had distinct *subsystems* inside his body
dedicated to throwing and hitting. He did not ask anything about *any*
structures inside the person's body. He asked whether the person could
hit or catch because those are the abilities he is interested in, and
from that point of view hitting is one thing and throwing another. In
this case the interest that governs the parsing of ones abilities into
distinct skills is imposed from without. It has almost nothing to do
with the scientific interest in a functional explanation of how one is
enabled to catch a baseball.

I think it is the same with intentional states. When we carve up the a
person's behavior into intentional states, we make no committments at
all as to how they are realized in neural structures. We do not presume
that individual beliefs have individual realizations inside the
person's body, as Stich accuses of us doing. We are more like the
baseball manager, individuating things into sorts with respect to a
different set of interests.

Stich's connectionist models do not undermine folk psychology.
It is very hard to see how anything at that level of description
*could* undermine it.

>The implications here are that progress in applying psychology will be
>impeded if psychologists persist in trying to talk about, or use
>psychological (intensional) phenomena within a framework (evidential
>behaviourism) which inherently resists quantification into such
>terms. Without bound, extensional predicates, we can not reliably use
>the predicate calculus, and without the predicate (functional)
>calculus we can not formulate lawful relationships, statistical or
>determinate.

This misunderstands Stich et. al. They are not in the least evidential
behaviorists. They are full-fledged cognitivists theorizing
about internal representational vehicles in connectionist systems.

The dispute between connectionist modellers and Fodor-style "classical"
symbolic modellers is an *internal* debate among parties that share a
commitment to the representational theory of mind.

Anders N Weinstein

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In article <5e4ehc$b...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5e2tpu$l...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>>This is basically because the power of judgeable contents to be true
>>or false about the world is metaphysical bedrock in thinking about
>>knowledge.
>
>As far as I can tell, metaphysics is a religion with no basis to
>support it.

I take it I am criticizing metaphysics when I say this. You could say
metaphysics is the misguided attempt to explain where no explanation is
needed or possible. For example, the attempt to explain what it is for
an object to have a property or what it is for a proposition to have
the power of being true or false about the world..

>You use a notion of "content" which I take as incoherent. You should
>leave your solipsistic palace, and take a look around you. If you
>were to do so, you would see that in practice, "truth" is an
>attribute we apply to representations. As such, it could not but be

"Representation" is ambiguous -- it may mean a representational vehicle,
like a sentence or a map. Those things have syntax or form in addition
to semantics. But it might also mean an intentional or content-bearing
state. For example, Karl's belief that his cat is in the kitchen. If I
report that to you, I *use* English sentences (of course), and if you
understand English you are equipped to evaluate Karl's state for
truth or falsity, assuming my report is fair. But I have *not* told
you anything about Karl's representational system. I suppose you might
say I have committed myself to its having the resources to represent that
content itself.

In this case, it is not clear it makes any sense to ask: in what syntax
does Karl represent that snow is white. Maybe his brain uses a representational
system that neither Karl nor I know about. But then again, maybe it does not.
All I need to be convinced of is that Karl's belief is adequately *expressed*
by the English sentence.

In this sense it appears that our practice with intentional is designed
to be non-committal about syntax or other non-semantic properties of
mental states. So I do not agree that we always ascribe truth to
representations, if that means representational *vehicles* that have a
syntax. Rather, we operate in abstraction from the details of the syntax,
and even in abstraction from the question whether it has a syntax.

>>say information is individuated by its content, and information
>>about the sun is different information from information
>>about the atmosphere.
>
>And therefore you are insisting that rationalism is the only possible
>philosophy. The empiricist claims that knowledge arises from
>information. But on your claim there is no information without
>knowledge of the content. If there is to be any hope of an
>empiricist program, the notion of "information" cannot be tied to
>prior knowledge of 'content'.

If empiricism is based on "thin" or preconceptual information, then you
are right that there is no hope of an empiricist program. For there can
be no epistemic move from pre-conceptual to conceptual information. That
would be what Wilfrid Sellars criticized as the myth of the given in
epistemology.

On the other hand, if one talks about a more Kantian empiricism in which
the epistemically basic states are "thick experience" = conceptualized
observation judgments, then one might say that empiricism is mandatory.

As to the question how the initial concepts are acquired, one can
say they are embodied in cultural practices of using words and are
acquired in the course of socialization into language use. A simple
example would be Wittgenstein's "Slab!" language-game. This is a little
social institution which pre-exists the developing child. Through training
or other means, the developing learner's behavior is shaped so that the
child comes to be able to play that game. And it might be only through
coming to play that game that the child acquires the concept of a Slab,
Stone, etc, and the ability to pick these out from other things.

In that sense these concepts are not abstracted from information or
thin experience by the brain. Rather to have the concepts is to have
acquired the skills involved in playing the game, and
that is imparted through socialization. Of course there is a story of
what happens at the sub-personal neural level, but the concepts
and understanding are items that exist at the social behavioral level, in
transmittable skills.

In this sense one steers between an empiricism of thin experience and
a rationalism of innate ideas.

BTW, for a Wittgensteinian critique of the empiricist idea that concepts
are acquired by abstraction from thin experience, you might try Peter
Geach's _Mental Acts_ (although it is now out of print).

>Why must epistemology start with propositions? That strikes me as
>obviously wrong. The world is not constructed out of propositions.

Because only propositions stand in rational epistemic relations, only
something with a propositional content can *justify* another state with
propositional content.

BTW, I think one might want to say that in some sense the world *is*
constructed out of propositions. The Pitt philosopher John McDowell
is fond of quoting Wittgenstein that the world is a totality of facts
not of things.

> I said that the acquisition of knowledge is
>carried out by unconscious neural processes. The knowledge itself I
>would place in the the structure of the neural system.

But I can accept the claim about acquisition of understanding (your
"knowledge"). I would prefer to say the understanding is in the mode
of conducting oneself in the world, since it involves interactiong with
objects in the world and cannot really be abstracted from them.

>> At any rate it raises the question about the
>>relation between what I know, see, think about, understand, and what my
>>neurons do.
>
>I'm not sure why it raises that question. I thought that was already
>the question that needed to be considered.

From the standpoint of the critique of the myth of the given,
it raises a *problem*, the problem that the epistemic predicates that
apply to the person, such as whether I am justified or not in a
particular perceptual belief, cannot be seen to be derived from the
(non-epistemic) doings of my sub-personal neural structures.


Neil Rickert

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

In <5e5iaj$3...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5e4ehc$b...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>>In <5e2tpu$l...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>>>This is basically because the power of judgeable contents to be true
>>>or false about the world is metaphysical bedrock in thinking about
>>>knowledge.

>>As far as I can tell, metaphysics is a religion with no basis to
>>support it.

>I take it I am criticizing metaphysics when I say this. You could say
>metaphysics is the misguided attempt to explain where no explanation is
>needed or possible. For example, the attempt to explain what it is for
>an object to have a property or what it is for a proposition to have
>the power of being true or false about the world..

>>You use a notion of "content" which I take as incoherent. You should
>>leave your solipsistic palace, and take a look around you. If you
>>were to do so, you would see that in practice, "truth" is an
>>attribute we apply to representations. As such, it could not but be

>"Representation" is ambiguous -- it may mean a representational vehicle,
>like a sentence or a map. Those things have syntax or form in addition
>to semantics. But it might also mean an intentional or content-bearing
>state.

I cannot observe "intentional or content-bearing states", if there are
such things. So I cannot give any truth evaluation.

> For example, Karl's belief that his cat is in the kitchen. If I
>report that to you, I *use* English sentences (of course), and if you
>understand English you are equipped to evaluate Karl's state for
>truth or falsity, assuming my report is fair.

No, I am not equipped to evaluate Karl's states for truth or
falsity. At most, I could evaluate Karl's statements. It would be
an outrageously antisocial act if I were to ascribe states to Karl,
and to then declare those states false.

>mental states. So I do not agree that we always ascribe truth to
>representations, if that means representational *vehicles* that have a
>syntax.

I was not claiming that we always ascribe truth values to
representations. I was only claiming that representations, or what
we believe to be representations, are the only things to which we
ascribe truth values.

>>And therefore you are insisting that rationalism is the only possible
>>philosophy. The empiricist claims that knowledge arises from
>>information. But on your claim there is no information without
>>knowledge of the content. If there is to be any hope of an
>>empiricist program, the notion of "information" cannot be tied to
>>prior knowledge of 'content'.

>If empiricism is based on "thin" or preconceptual information, then you
>are right that there is no hope of an empiricist program. For there can
>be no epistemic move from pre-conceptual to conceptual information. That
>would be what Wilfrid Sellars criticized as the myth of the given in
>epistemology.

>On the other hand, if one talks about a more Kantian empiricism in which
>the epistemically basic states are "thick experience" = conceptualized
>observation judgments, then one might say that empiricism is mandatory.

All of this makes for endless word games for philosophers, but it
completely ignores the actual issues of how we acquire knowledge.

>As to the question how the initial concepts are acquired, one can
>say they are embodied in cultural practices of using words and are
>acquired in the course of socialization into language use.

No, with your restriction they would have to be innate. For with
your restrictions there is no possibility that there could be
socialization into language use.

>In that sense these concepts are not abstracted from information or
>thin experience by the brain. Rather to have the concepts is to have
>acquired the skills involved in playing the game, and
>that is imparted through socialization. Of course there is a story of
>what happens at the sub-personal neural level, but the concepts
>and understanding are items that exist at the social behavioral level, in
>transmittable skills.

Now you are playing your game of dualism again. But it won't do. If
you are going to rule out anything other than conceptualized
information, there could not be a story at the neural level, for
neurons do not deal with conceptualized information.

>BTW, for a Wittgensteinian critique of the empiricist idea that concepts
>are acquired by abstraction from thin experience, you might try Peter
>Geach's _Mental Acts_ (although it is now out of print).

Why should "abstraction" be involved? I would not expect there to be
any 'abstraction' from thin experience, because most ordinary
concepts are not abstract.

>>Why must epistemology start with propositions? That strikes me as
>>obviously wrong. The world is not constructed out of propositions.

>Because only propositions stand in rational epistemic relations, only
>something with a propositional content can *justify* another state with
>propositional content.

That just goes to show the silliness of epistemology.

>BTW, I think one might want to say that in some sense the world *is*
>constructed out of propositions.

That sort of idea strikes me as being right there with Berkeley's
idealism.

>> I said that the acquisition of knowledge is
>>carried out by unconscious neural processes. The knowledge itself I
>>would place in the the structure of the neural system.

>>> At any rate it raises the question about the


>>>relation between what I know, see, think about, understand, and what my
>>>neurons do.

>>I'm not sure why it raises that question. I thought that was already
>>the question that needed to be considered.

>From the standpoint of the critique of the myth of the given,
>it raises a *problem*, the problem that the epistemic predicates that
>apply to the person, such as whether I am justified or not in a
>particular perceptual belief, cannot be seen to be derived from the
>(non-epistemic) doings of my sub-personal neural structures.

Any justification of the results of perception should be based on a
study of the reliability of the low level neural processes involved
in perceiving, not in silly word games about perceptual beliefs.


Aaron Sloman

unread,
Feb 16, 1997, 3:00:00 AM2/16/97
to

ande...@pitt.edu (Anders N Weinstein) writes:

> article: 40824 in comp.ai.philosophy
> Date: 13 Feb 1997 18:31:38 GMT
> Organization: University of Pittsburgh
> ...


> Reject thought-signs in favor of a more existentialist conception of

> intentionality as a thoroughly situated openness to being...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Sounds good. Where can I find some?

> It becomes clear instead that it really
> makes no sense to speak of a cognitive subject as if it might be a
> brain in a vat,

Clear to whom?

Why don't you just say that you (and some other philosophers) simply
don't understand such talk and are not interested?

Those of us who know what we are talking about, will, in any case,
get on with the task of exploring what sorts of cognitive
architectures are and are not possible for more or less disembodied
agents, of various kinds, including, for example, a mathematical
agent committed to doing nothing but explore number theory,
motivated by concerns for depth, elegance, power, economy, and
truth; or an internet explorer agent that works for its owner,
finding things out, negotiating with other agents, etc.

Whether these things have "thoroughly situated openness to being"
doesn't seem to me to be a very clear question, but in any case I
cannot see that it makes any difference to anything of interest.

Everything is thoroughly situated in the sense that it is part of a
web of relationships to other things.

You may have your own preferred set of relationships (spatial,
temporal, biological, cultural, etc.) but that's your choice.

If you don't want to call the things that have a different
collection of relationships "cognitive" that's fine. That's
your semantic preference. But don't expect anyone else to be
influenced by expressions of semantic preferences if that doesn't
help them with their research objectives.

Arguing about whether such states "really" involve a cognitive subject
is as pointless as arguing over whether a circle really is an ellipse,
or whether 0 really is a number, or a virus really is alive, or whether
it's noon on the moon when the moon is directly above Greenwich at 1200
GMT ...

I'd like to distinguish two roles for philosophers in dialogues with
scientists (e.g. comp.ai.philosophy):

(a) CONSTRUCTIVE conceptual analysis which helps researchers by
pointing out distinctions they have missed, or unnoticed
assumptions, or unnoticed implications of assumptions, while showing
how noticing these things helps in the pursuit of research goals,
e.g. by showing by showing that different explanations are needed
from those proposed or that different cases that were not previously
distinguished need different explanations, etc. (there's lots more to be
said on this)

and

(b) DESTRUCTIVE conceptual analysis which tries to rule out certain
research activities by labelling them as nonsensical, without doing
anything to help the researchers make progress with the problems
that they are investigating.

By researchers I include people trying to understand what sorts of
states and processes might, in principle, occur in a brain in a vat
or a computing system that is concerned only with the processing of
internal states, e.g. exploring number theory, exploring strategies
for playing chess or go, etc.

Legislating that the use of intentional (cognitive) descriptions are
inappropriate in such cases is purely destructive. It serves no
useful purpose, and gives no new insight.

It merely attempts to spread a particular sort of linguistic
conservatism. And after a while the repetition of the point gets a bit
boring, though perhaps not quite as boring as some of the other
repetitions encountered on c.a.p !

Cheers -- and apologies if I've misunderstood the context as a result of
not having time to read all the messages in the thread.
Aaron
==
--
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk
Phone: +44-121-414-4775 (Sec 3711) Fax: +44-121-414-4281

Anders N Weinstein

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In article <5e7glb$c...@percy.cs.bham.ac.uk>,

Aaron Sloman <A.Sl...@cs.bham.ac.uk> wrote:
>ande...@pitt.edu (Anders N Weinstein) writes:
>
>> Reject thought-signs in favor of a more existentialist conception of
>> intentionality as a thoroughly situated openness to being...

>
>Sounds good. Where can I find some?

You are some. We all are, so just look around you.

Seriously, of course a slogan like that is going to be opaque taken by
itself. I use it as a shorthand for the critique of
representationalist epistemology that one finds in the existentialist
tradition, e.g. Heidegger, Sartre, Merleau-Ponty, popularized somewhat
by Hubert Dreyfus. In certain respects Ryle and Wittgenstein are fellow
travelers (Ryle wrote what must have been one of the first English
language reviews of Being and Time, if not the first.)

>> It becomes clear instead that it really
>> makes no sense to speak of a cognitive subject as if it might be a
>> brain in a vat,
>

>Clear to whom?

To readers who have gained a better insight into their own psychological
concepts from this critique.

>Why don't you just say that you (and some other philosophers) simply
>don't understand such talk and are not interested?

Because that would misrepresent the nature of the lesson.

If someone says one can't conceive a square circle, they are not talking
about a merely subjective limitation on one's own imagination.

>Those of us who know what we are talking about, will, in any case,
>get on with the task of exploring what sorts of cognitive
>architectures are and are not possible for more or less disembodied

One can seem falsely to know what one is talking about, when one does
not. E.g when cognitive scientists tell you you infer the distance of
objects from retinal disparity cues. They may be making a perfectly true
claim about sub-personal control system processes. But they are confused
if they think there is a mental operation of inferring distance, since
at the person level one does not and could not perform such an inference,
but rather has the capacity to detect distance without inference.

Similarly when psychologists suggest that we have very limited knowledge
of our own mental states, I think they are also confused. We have very
limited knowledge of *control system* states, but mental states are
a very different thing (not that we are always authoritative about
our *mental* states. But you can't infer from a failure to know your
control system states to a failure to know your mental states, since
they are completely different sorts of things.

So I would say: I think you are perfectly free to study control
system architectures all you want. But the mind is not a control system.

>agents, of various kinds, including, for example, a mathematical
>agent committed to doing nothing but explore number theory,
>motivated by concerns for depth, elegance, power, economy, and
>truth; or an internet explorer agent that works for its owner,

I would say your claim to conceive this is confused. For one, I don't
think grasp of mathematical concepts is well-conceived as knowledge of
some autonomous realm of abstractions, separable from the use of
numbers for counting things, say. Moreover, I don't see that one can
apply the concepts of such motivations to a creature unless it has a
body that is fit to express such affective states. "The human body is
the best picture of the human soul"

> or an internet explorer agent that works for its owner,
>finding things out, negotiating with other agents, etc.

Well this is a more serious case, since the agent is of course
acting and surviving in a kind of electronic environment.

>Whether these things have "thoroughly situated openness to being"
>doesn't seem to me to be a very clear question, but in any case I
>cannot see that it makes any difference to anything of interest.

The point was to reject the representational theory of mind, the
theory that knowledge of the world is necessarily mediated by
representational vehicles.

John Searle uses an interesting slogan in his work on Intentionality:
he says perception is not representational, it is *pre*-sentational.
I think that expresses a very good idea, although Searle himself
does not fully gain entitlement to use it because he insists on
an individualistic theory of the intentional content of perceptual
states.

>Everything is thoroughly situated in the sense that it is part of a
>web of relationships to other things.

Well sure, but I am thinking of what is required for me to stand in a
direct cognitive relation to my wastebasket by a state expressible that
is only expressible using a deictic device, like "*that's* empty". I
can get into that state because I am embodied in the same world as the
wastebasket, have a body and can *point* and otherwise interact with
the wastebasket.

I don't think a computer or a control system can get into that
sort of state. Of course there has been work on the sorts of "deictic"
representations employed inside the brain. But none of them have
the directness of a demonstratively expressible intentional state.

>Arguing about whether such states "really" involve a cognitive subject
>is as pointless as arguing over whether a circle really is an ellipse,

But in my view it is more like arguing whether a circle is
really a triangle. So I do not agree that it is a mere semantic preference.
I am prepared to agree that our understanding of ourselves is enhanced by
pursuits at many different levels.

But because I differ with you not only on the first-order question, but
also on the meta-level issue of what sort of question it is, I do not
think we will make progress by trying to ascend to the meta-level.

>(b) DESTRUCTIVE conceptual analysis which tries to rule out certain
>research activities by labelling them as nonsensical, without doing
>anything to help the researchers make progress with the problems
>that they are investigating.

I don't think I have every tried to rule out any research activities.
Any more than I have tried to rule out research in chemistry. Study
the control system all you want! I think it's interesting to find out what
happens in your control system.

On the other hand, I don't see how any study of the control system could
show me that I infer the distance of objects from retinal disparity cues
in the relevant sense. My epistemic standing with respect to such things
is normally non-inferential.

>Legislating that the use of intentional (cognitive) descriptions are
>inappropriate in such cases is purely destructive. It serves no
>useful purpose, and gives no new insight.

I think there are problems even in your own terms about the application
of intentional vocabulary to such detached systems. For example, there
is the question as to why you should bother to apply such descriptions
at all, what does adding the semantics get you over and above the formal
description?

Anders N Weinstein

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In article <5e62ji$c...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5e5iaj$3...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>>In article <5e4ehc$b...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>>>In <5e2tpu$l...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>I cannot observe "intentional or content-bearing states", if there are
>such things. So I cannot give any truth evaluation.

Well look around you. How else can you characterize your present state
of consciousness? And when you ascribe mental states to others you similarly
ascribe content-bearing states. For example, you can see someone's jaw
drop in surprise at some event and ascribe to them a state of seeing the
event and being surprised by it. That is to ascribe a content-bearing state,
by seeing human behavior *as* expressive of it.

This is something we do all the time. Of course it is fallible, but so
is what we do when we take something to be a house (sometimes it's just a
facade).

You might try reading Searle's book on Intentionality for one
suggestive account of intentionality as content-bearing states, even
if it is imperfect in many respects. In particular Searle shares the
traditional idea that one cannot observe mental states in others, whereas
I think one can occasionally observe them, just as one can (fallibly) observe
houses or Gibson's affordances.

>> For example, Karl's belief that his cat is in the kitchen. If I
>>report that to you, I *use* English sentences (of course), and if you
>>understand English you are equipped to evaluate Karl's state for
>>truth or falsity, assuming my report is fair.
>
>No, I am not equipped to evaluate Karl's states for truth or
>falsity. At most, I could evaluate Karl's statements. It would be
>an outrageously antisocial act if I were to ascribe states to Karl,
>and to then declare those states false.

First this is just false, we do it all the time. Second, it is not in
any way anti-social if the erroneous states you ascribe are well evidenced
given Karl's situation.

For example if you know there is a mirror under the kitchen table that
Karl doesn't know about, in which the reflection of the cat in the
hallway will be visible to him, you may in certain situations predict
that Karl will acquire the false belief that the cat is under the table
when he walks into the room looking for the cat.

There is a similar experiment called the "false-belief task" that
developmental psychologists do with small children to test their
concepts of mental states. I believe three-year-olds do not show
mastery of the ascription of false beliefs to others based on
information available to them, while 5-year-olds do. I recall it
involved a closed box which is understood to be of a type to normally
contain candy but is shown to the child to have been filled with
pencils. They are asked what another child will think is in the closed
box.

Because mastery of psychological concepts clearly involves such skills
at relating different perspectives, I can't see what problem you are
raising. You seem to be feigning some kind of ignorance of how
belief-talk works, for what purposes I can't imagine.

>I was not claiming that we always ascribe truth values to
>representations. I was only claiming that representations, or what
>we believe to be representations, are the only things to which we
>ascribe truth values.

And I was pointing out that we ascribe truth or falsity to intentional
states in the absence of any knowledge of actual representational vehicles
(beyond potentially utterable public language expressions.)

>>As to the question how the initial concepts are acquired, one can
>>say they are embodied in cultural practices of using words and are
>>acquired in the course of socialization into language use.
>
>No, with your restriction they would have to be innate. For with
>your restrictions there is no possibility that there could be
>socialization into language use.

Why? Socialization into mastery of techniques of operating with verbal
concepts does not itself involve conceptualization.

It is very important that in order to implement *your* understanding
and conceptual capacities, your neural circuitry does not have to first
have an equivalent set of understandings and concepts. That was Fodor's
mistake.

>>BTW, I think one might want to say that in some sense the world *is*
>>constructed out of propositions.
>
>That sort of idea strikes me as being right there with Berkeley's
>idealism.

But in Frege's terms Berkeley was speaking about ideas in the subjective
sense. Frege insisted that his Gedanke (Thoughts, propositions)
were *objective* entities, in the sense that they are shareable by
different subjects. Michael Dummett labelled this move "the extrusion of
thoughts from the mind". It is only if one extrudes thoughts from the
mind and treats them as intersubjective objects that one could find
any sense in saying the world is constructed out of them. Although it
is still a provocative slogan, I admit.

>>From the standpoint of the critique of the myth of the given,
>>it raises a *problem*, the problem that the epistemic predicates that
>>apply to the person, such as whether I am justified or not in a
>>particular perceptual belief, cannot be seen to be derived from the
>>(non-epistemic) doings of my sub-personal neural structures.
>
>Any justification of the results of perception should be based on a
>study of the reliability of the low level neural processes involved
>in perceiving, not in silly word games about perceptual beliefs.

But any such inquiry would rest on prior perceptual knowledge that was
not so justified, e.g. the knowledge of a neuroscientist looking at PET
scans.


Neil Rickert

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In <5eabn5$4...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5e7glb$c...@percy.cs.bham.ac.uk>,
>Aaron Sloman <A.Sl...@cs.bham.ac.uk> wrote:
>>ande...@pitt.edu (Anders N Weinstein) writes:

>>> It becomes clear instead that it really
>>> makes no sense to speak of a cognitive subject as if it might be a
>>> brain in a vat,

>>Clear to whom?

>To readers who have gained a better insight into their own psychological
>concepts from this critique.

On the other hand, perhaps readers who have attained an even better
insight can see value in "brain in a vat" talk. What you lack, is
the insight needed to realize that you are not the arbiter as to what
is a better insight.

>>Why don't you just say that you (and some other philosophers) simply
>>don't understand such talk and are not interested?

>Because that would misrepresent the nature of the lesson.

In a way, you are as rigidly dogmatic as Longley. You have your own
particular way of conceptualizing the world, and you declare everyone
else to be wrong. Perhaps if you had been around in Galileo's time,
you would have joined those condemning him.

>If someone says one can't conceive a square circle, they are not talking
>about a merely subjective limitation on one's own imagination.

Personally, I have no difficulty conceiving a square circle. The
unit sphere in a two-dimensional space given the L-infinity norm
(should actually be a small 'l' rather than a capital 'L') is such a
square circle. That is, every point on the perimeter is distance 1
from the center, and the perimeter consists of 4 straight lines of
equal length and at equal angles.

>>Those of us who know what we are talking about, will, in any case,
>>get on with the task of exploring what sorts of cognitive
>>architectures are and are not possible for more or less disembodied

>One can seem falsely to know what one is talking about, when one does
>not.

And one can falsely claim that others do not know what they are
talking about, when they do indeed know.

> E.g when cognitive scientists tell you you infer the distance of
>objects from retinal disparity cues. They may be making a perfectly true
>claim about sub-personal control system processes.

This would not show any evidence that the cognitive scientists do not
know what they are talking about. It would merely show that they do
not share your absurd dualism, and thus do not have the identical
concept of 'you'.

> But they are confused
>if they think there is a mental operation of inferring distance, since
>at the person level one does not and could not perform such an inference,
>but rather has the capacity to detect distance without inference.

No, you are confused, a confusion brought about by your
misattribution of meanings to those scientists.

>Similarly when psychologists suggest that we have very limited knowledge
>of our own mental states, I think they are also confused.

As long as you insist on taking your own ill-conceived concepts, and
attributing them to scientists, you will continue to be confused and
to misinterpret others as confused. I expect this derives from you
holding a poor theory of language, which fails to allow for such
conceptual disagreements.

>The point was to reject the representational theory of mind, the
>theory that knowledge of the world is necessarily mediated by
>representational vehicles.

And what would be the point of that, other than to uphold some form
of Ludditism?

>Well sure, but I am thinking of what is required for me to stand in a
>direct cognitive relation to my wastebasket by a state expressible that
>is only expressible using a deictic device, like "*that's* empty". I
>can get into that state because I am embodied in the same world as the
>wastebasket, have a body and can *point* and otherwise interact with
>the wastebasket.

A wonderful "Just So" story that allows you to sound as if you have
analyzed the situation, and reported it with eloquence. But it does
nothing at all to deal with the scientific issues, except perhaps to
sweep them under the rug.

>>(b) DESTRUCTIVE conceptual analysis which tries to rule out certain
>>research activities by labelling them as nonsensical, without doing
>>anything to help the researchers make progress with the problems
>>that they are investigating.

>I don't think I have every tried to rule out any research activities.

No, of course not. You have merely tried to rule out the use of
language to communicate those research activities.


Carl B. Frankel

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

Neil Rickert wrote:
>
> In <3303a006...@news.mem.net> bkl...@panspermia.com (Brig Klyce) writes:
>
[BK]

> >Clearly, some information comes originally from a sender ("messages"),
> >and some doesn't ("data").
>
[NR]

> I think it is not all that clear. Those who believe in a personal
> God might say that all comes from a sender. A strict behaviorist
> like David Longley might say that it is all just data.
>
[BK]

> >Messages are intrinsically meaningful, like a blaze on a trail.
>
[NR]

> Well, that would certainly solve the intentionality problem. Searle
> and others have posed the question of how a computer can be dealing
> with meaning (semantics) rather than just with syntax. Searle
> claimed that the required property (intentionality) is intrinsically
> present in humans, but absent in computers. However if the
> meaningfullness is intrinsically present in the message itself, this
> would seem to solve Searle's problem.
>
[BK]

> >Data are intrinsically meaningless, like clothes on the floor. Even if
> >the clothes turn out to be clues at a crime scene, a detective must
> >assign meaning to them. Data cannot serve as programs; they can cause
> >only simple responses to be carried out by single-purpose machines
> >like fire alarms.
>
> >Of course there are times when one cannot be sure a signal contains
> >merely data or perhaps a message. This is the problem faced by SETI.
>
[NR]

> So if we decide that a SETI signal is a message from an alien, then
> magically it will be intrinsically meaningful, and we will understand
> it at once. Otherwise we will have to assign a meaning to the signal
> before we can understand it.
>
> It seems like a strange notion of intrinsic meaningfulness.

[CBF]

I'm not sure I see any percentage in invoking a notion like
"intrinsic meaningfulness", while we're clearly very(!) far from
anything like a decision criterion for its application. I suspect
that Longley would consider this an example of "folk psychology,"
and even without Longley having specified criteria for the application
of the term 'folk psychology', I would think he would be right in
this instance.

This notion does, however, raise one of the $64,000 dollar questions
for this discussion: Shannon asked the question, what are the
relations that lawfully pertain to the problem of causing a string
of tokens on a sender's notepad to be replicated on a receiver's
notepad. These posting focus on the transmission of meanings:
Given that I have some "idea" in my head, (1) what are the relations
that lawfully pertain to the problem of causing that "idea" to be
replicated in someone else's head, and (2)--otherwise Longley would
be rightfully unhappy with this line of inquiry--how do we verify that
such a transmission of ideas has occurred?

Is this the same as asking how we embed in a message, knoweldge of
the domain of application of the message, given that the domain of
application is not embedded in the message? That is, is an idea
(a meaning) the same as a proposition plus a domain of application
of the proposition?

For example, the network of computers that sits between my
keyboards and readers' monitors will solve the problem of
transmitting the string "It is raining outside your window."
However, most readers, having read that sentence, will feel a twinge
of an impulse to look out their windows, i.e., they will recognize
a reference to a domain of application of the proposition, and will
feel (and perhaps execute) an impulse to so-apply the proposition,
e.g., to look out of the nearest window. What are the underlying
mechanics of these operations, both symbolic and neural?

Regards,

Carl F.

======================================================================
What's Punishing Gets Priority. || Carl B. Frankel
|| Managing Consultant
What's Rewarded Gets Repeated. || Organizational Measurement & Eng.
|| 785 Burnett Avenue No 2
What's Measured Gets Managed. || San Francisco, CA 94131-1417
|| Tel. 415-641-8028
What's Noticed Gets Narrated. http://www.ome1.com ca...@ome1.com
======================================================================

David Longley

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In article <5e7glb$c...@percy.cs.bham.ac.uk>

A.Sl...@cs.bham.ac.uk "Aaron Sloman" writes:
>
> I'd like to distinguish two roles for philosophers in dialogues with
> scientists (e.g. comp.ai.philosophy):
>
> (a) CONSTRUCTIVE conceptual analysis which helps researchers by
> pointing out distinctions they have missed, or unnoticed
> assumptions, or unnoticed implications of assumptions, while showing
> how noticing these things helps in the pursuit of research goals,
> e.g. by showing by showing that different explanations are needed
> from those proposed or that different cases that were not previously
> distinguished need different explanations, etc. (there's lots more to be
> said on this)
>
> and
>
> (b) DESTRUCTIVE conceptual analysis which tries to rule out certain
> research activities by labelling them as nonsensical, without doing
> anything to help the researchers make progress with the problems
> that they are investigating.
>
> By researchers I include people trying to understand what sorts of
> states and processes might, in principle, occur in a brain in a vat
> or a computing system that is concerned only with the processing of
> internal states, e.g. exploring number theory, exploring strategies
> for playing chess or go, etc.
>
> Legislating that the use of intentional (cognitive) descriptions are
> inappropriate in such cases is purely destructive. It serves no
> useful purpose, and gives no new insight.
>
> It merely attempts to spread a particular sort of linguistic
> conservatism. And after a while the repetition of the point gets a bit
> boring, though perhaps not quite as boring as some of the other
> repetitions encountered on c.a.p !
>

What nonsense - science is all about prohibitions. That's what
LAWS effectively do, and why SOMETIMES we are surprised.

All the above comes down to is Sloman wishing to be free to write
uninhibited popular "science fiction" like so many others working
in AI.

If one is not able to reliably "quantify in", one can not be
clear or precise about what one is talking about (referring to,
measuring, predicting, describing - you name it. And if one can't
do that, literally "anything goes".

Now that's the hallmark of creative writing - one leaves as much
to the experience and imagination as possible - it's also, in my
view, the attraction of "cognitivism" in general (see the later
writings of Bruner for supporting evidence for this analysis)

The reality is that there are fragments of Sloman's writing and
work which *do* deserve respect. My criticism is that he (and
many others who have leapt on the "cognitive" bandwagon is
mystifying what is basically extensionally based behaviour and
system analysis via rhetoric and appeal to folk psychological
notions which HE has no more idea of than others he criticises
elsewhere in this newsgroup ("openness to being" etc.).

He's as immersed in pop psychology as they are...

--
David Longley


Carl B. Frankel

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

Anders N Weinstein wrote:
>
> In article <855992...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >
[DL]

> > Why not accept what Stich
> >had to say on this (for psychology)?
> >
> > 'This argument was part of a larger project. Influenced
> > by Quine, I have long been suspicious about the
> > integrity and scientific utility of the commonsense
> > notions of meaning and intentional content. This is not,
>
[AW]

> I can accept this. You have to appreciate that in my heart I don't
> really give a shit about "scientific utility". I'm interested in what's
> *true*, whether or not it is of scientific utility.
>
> It is very clear that the concepts of scientific utility are only a
> subset of the concepts usable for stating truths about the world.

[CBF]

If we are going to share a discussion about what may or may not be
true, you are going to have to give us some kind of criteria for
how to use the term. Otherwise, to assert that, "concepts of

scientific utility are only a subset of the concepts usable for stating

truths about the world," begs the question.


>
> As to "integrity" I think Stich is simply very badly confused about what
> is required for these concepts to have integrity. For example here:
>

> > 'In the psychological literature there is no dearth of
> > models for human belief or memory that follow the lead
> > of commonsense psychology in supposing that
> > propositional modularity is true. Indeed, until the
>

I'm not sure it's the same. The baseball manager is asking about
the existence of clear and observable behavioral manifestations,
throwing, hitting, teamwork, etc. The baseball manager typically
does not ask whether the player "intends" to be a good player, except
to the extent that self-reports of such intentions correlate with
exhibited performance. The player could as easily be a sociopath who
is expert at simulating the outward and visible signs of the
"intention" to be a good player--and the manager need not(!)
discriminate this case unless the player's socipathy starts to result
in behaviors which reflect poor teamwork.

I would not go as far as Longley and assert that such states do not
exist or are not worthy of study. I do believe that we have mental
behaviors, and that their acquisition, exhibition and extinction
are probably subject to the same kinds of rules applicable to
other behaviors. However, we need some way of instrumenting such
behaviors that is much more reliable than self-report, and we must not
assign intentions importance out of proportion to the variance they
explain.

For example, as I've remarked in a couple of other postings, the stated
intentions of people in partnered relationships are often quite
disparate from their behavior, almost to the point of being
uncorrelated. (I'm not as current as I should be, but to the best of
my knowledge, no one has yet built a credible model of marital
relationships based upon partners' self-reports, though many(!) have
tried.) One example of the reason I get so nervous whenever I hear
someone talking about "truth" is that both partners in a failing
marriage walk into counselling with a "true" intentional picture of
the relationship and the causes of its failures. The "truer" a person
holds her or his picture to be, the more work I have to do to
disturb her or his truths and intentions, and the poorer the
prognosis (though the correlation is hardly 1.0 here).

Neil Rickert

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In <5eadfq$5...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5e62ji$c...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>I cannot observe "intentional or content-bearing states", if there are
>>such things. So I cannot give any truth evaluation.

>Well look around you. How else can you characterize your present state
>of consciousness?

You miss my point, which is that "content-bearing", as you use it, is
not a term I find meaningful.

>You might try reading Searle's book on Intentionality for one
>suggestive account of intentionality as content-bearing states, even
>if it is imperfect in many respects.

I've read it. It struck me as mainly being involved with
constructing an elaborate theory which fits the evidence poorly at
best.

>>No, I am not equipped to evaluate Karl's states for truth or
>>falsity. At most, I could evaluate Karl's statements. It would be
>>an outrageously antisocial act if I were to ascribe states to Karl,
>>and to then declare those states false.

>First this is just false, we do it all the time. Second, it is not in
>any way anti-social if the erroneous states you ascribe are well evidenced
>given Karl's situation.

>For example if you know there is a mirror under the kitchen table that
>Karl doesn't know about, in which the reflection of the cat in the
>hallway will be visible to him, you may in certain situations predict
>that Karl will acquire the false belief that the cat is under the table
>when he walks into the room looking for the cat.

I might consider Karl to have fallen for a perceptual illusion, but I
would not consider him to hold any false beliefs. I think the way
you (and many philosophers) use 'belief' stretches the meaning of
that word far beyond what it can sustain. It seems just silly to
apply the term "belief" to something temporarily in short term
memory. And it is equally silly to say that Karl is in a state of
untruth, simply because of falling for a perceptual illusion. You
might as well say that to watch TV is to be in a state of untruth,
since the TV set is designed to create perceptual illusions.

>>No, with your restriction they would have to be innate. For with
>>your restrictions there is no possibility that there could be
>>socialization into language use.

>Why? Socialization into mastery of techniques of operating with verbal
>concepts does not itself involve conceptualization.

No, you are quite right. It does not involve W-conceptualizion. But
keep in mind that you are a dualist, and what you mean by concept
(i.e. W-concept) is some mythical entity which does not exist and
could not exist as far as I am concerned. It is simply a term in
your anti-scientific theorizing which you try to force on those with
more practical concerns.

>It is very important that in order to implement *your* understanding
>and conceptual capacities, your neural circuitry does not have to first
>have an equivalent set of understandings and concepts. That was Fodor's
>mistake.

No, that was not Fodor's mistake. Fodor has concept of "concept"
that is reasonably similar to yours. Fodor analyzed that, in light
of what are taken to be the methods of acquiring knowledge, and
concluded that there is no possible way of using those methods to
acquire new basic concepts. Your mistake is to adopt a dualism,
whereby you divide the world into to disjoint pieces. You use a
theory at the personal level which would not allow formation of
concepts. You evade the problem by allowing that the concepts could
be formed at the sub-personal level. But then you arbitrarily rule
out any discussion of the relation between the two levels. It is an
anti-scientific approach, and completely out of place in an AI
forum.

>>>BTW, I think one might want to say that in some sense the world *is*
>>>constructed out of propositions.

>>That sort of idea strikes me as being right there with Berkeley's
>>idealism.

>But in Frege's terms Berkeley was speaking about ideas in the subjective
>sense.

Whereas you are talking about propositions in the subjective sense,
but arbitrarily declaring them to be objective.

> Frege insisted that his Gedanke (Thoughts, propositions)
>were *objective* entities, in the sense that they are shareable by
>different subjects.

In mathematics, when we come up with such declarations, we are first
required to prove that the set of objects referred to is non-empty.
I haven't seen any convincing arguments that there are such
things as propositions. I'll admit that philosophers have written a
lot about such things, but that does not prove that there are any of
them.

>>Any justification of the results of perception should be based on a
>>study of the reliability of the low level neural processes involved
>>in perceiving, not in silly word games about perceptual beliefs.

>But any such inquiry would rest on prior perceptual knowledge that was
>not so justified, e.g. the knowledge of a neuroscientist looking at PET
>scans.

That rather depends on how such inquiries are made. Even if you are
correct, that would only go to further show the emptiness of the term
"justification" as used in epistemology. Some time ago, I tried to
argue that science is founded on a core of carefully chosen analytic
statements, but it seems that the ideological commitments of modern
philosophy rules out any sensible discussion of that idea.


Neil Rickert

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In <33085F...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:
>Neil Rickert wrote:

>[CBF]

>I'm not sure I see any percentage in invoking a notion like
>"intrinsic meaningfulness", while we're clearly very(!) far from
>anything like a decision criterion for its application.

I agree.

>This notion does, however, raise one of the $64,000 dollar questions
>for this discussion: Shannon asked the question, what are the
>relations that lawfully pertain to the problem of causing a string
>of tokens on a sender's notepad to be replicated on a receiver's
>notepad. These posting focus on the transmission of meanings:

Shannon's question was thus about the transmission of syntax, which
seems a rather simpler problem than the transmission of meaning.

>Given that I have some "idea" in my head, (1) what are the relations
>that lawfully pertain to the problem of causing that "idea" to be
>replicated in someone else's head,

I doubt that there could be any lawful relations. I think it not
possible to identically replicate an 'idea'. As an educator, my
experience is that a lot of effort -- with no guarantee of success --
is required to implant an approximation of one's idea into someone
else's head.

> (2)--otherwise Longley would
>be rightfully unhappy with this line of inquiry--how do we verify that
>such a transmission of ideas has occurred?

Longley might be unhappy about any talk of what is inside heads
(other than in physiological terms). Educators do use examinations
as an imperfect tool for evaluating their degree of success in
implanting approximations of ideas.

>Is this the same as asking how we embed in a message, knoweldge of
>the domain of application of the message, given that the domain of
>application is not embedded in the message?

I'm not sure that such embedding is ever possible. Most
communication depends on some background of shared experience. And
even where there is such shared experience, communication can be
quite difficult at times.

> That is, is an idea
>(a meaning) the same as a proposition plus a domain of application
>of the proposition?

I wouldn't think so. When I look around downtown, I see a building
being constructed. The building is almost complete. There is a
great deal of fine detail to the finish of the building. Surrounding
the building is a scaffolding, used by builders in their
construction. The scaffolding is a crude device, although its
overall shape suggests the shape of the building it surrounds. I
suggest that language is like the scaffolding. It is quite crude.
Our actual knowledge has a great deal of fine detail that is not
reflected in that scaffolding. When a person wants to describe their
knowledge, they describe the part of the syntactic scaffolding near
that portion of the edifice which is their knowledge. From that
description, I locate similar scaffolding surrounding my knowledge.
Then I assume that they are talking about something like the
structure near my part of the scaffolding. But there is no way of
being sure that my structure is exactly like their structure, so the
communication is of necessity imperfect.


David Longley

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In article <5e5g3u$t...@usenet.srv.cis.pitt.edu>

ande...@pitt.edu "Anders N Weinstein" writes:

> In article <855992...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >

> > Why not accept what Stich
> >had to say on this (for psychology)?
> >
> > 'This argument was part of a larger project. Influenced
> > by Quine, I have long been suspicious about the
> > integrity and scientific utility of the commonsense
> > notions of meaning and intentional content. This is not,
>

> I can accept this. You have to appreciate that in my heart I don't
> really give a shit about "scientific utility". I'm interested in what's
> *true*, whether or not it is of scientific utility.

That's because you are still trusting what is FAMILIAR
(intuitive) rather than what is empirically defensible
(scientific).

>
> It is very clear that the concepts of scientific utility are only a
> subset of the concepts usable for stating truths about the world.

It is very clear to you maybe, but then you put more store by
what's intuitive than i do!

>
> As to "integrity" I think Stich is simply very badly confused about what
> is required for these concepts to have integrity. For example here:
>

> > 'In the psychological literature there is no dearth of
> > models for human belief or memory that follow the lead
> > of commonsense psychology in supposing that
> > propositional modularity is true. Indeed, until the
>

> My position is that the everyday use of psychological concepts makes
> *no commitments at all* about "propositional modularity" as Stich
> defines it. That is a matter of physically internal representational
> vehicles.

But you should judge "your position" by the evidence, and you
clearly aren't. Like all too many folk today, you underestimate
the estent to which what you hold true is sheped by social
desirability or "politically correctness" (folk psychology).

>
> Here perhaps is an analogy: A baseball manager wants to know about a
> prospect: can he hit? can he throw? can he run? can he field ground
> balls? can he catch fly balls? The baseball manager has a scouting
> report indexed by answers to these questions.
>
> Now a Stichian cognitive scientist comes along and says: you know, if
> you look inside the neural circuitry of a skilled baseball player, you
> can not find a functionally discrete module dedicated to throwing and
> another one dedicated to hitting. In fact all these "folk baseball"
> concepts are a very poor guide to the internal organization of one's
> control circuitry. They "lack the integrity" of a properly scientific
> account of internal mechanisms.
>

This is YOUR neo-Stichian account - ponder on QUOTATION and the
propositional attitudes!

> Now I will concede there could be a useful point to such a claim. It
> might well help the coaches impart baseball skills more effectively to
> know about such internal structures. Maybe these structures account for
> a kind of crosstalk between learning to throw and learning to catch
> that the coaches would do well to take into account.
>

You don't understand. He was criticising the predicate calculus
models of memory and attention which fill the psychology
journals.

He clealry advocates ANN models for such behaviours - so do I.
But that is different again from building AI systems which SHOULD
be based on the predicate calculus and derivatives... until we
find something better. We JUST DON'T HAVE A BETTER TECHNOLOGY!

>
> Stich's connectionist models do not undermine folk psychology.
> It is very hard to see how anything at that level of description
> *could* undermine it.
>

Folk psychology tries to impose LOGIC on folk psychological
"mechanisms".........I've discussed the problems this encounters
elsewhere in the context of Quine's "Qunatifiers and
Propositional Attitudes". You clearly don't understand this still
from what you have written above.


> >The implications here are that progress in applying psychology will be
> >impeded if psychologists persist in trying to talk about, or use
> >psychological (intensional) phenomena within a framework (evidential
> >behaviourism) which inherently resists quantification into such
> >terms. Without bound, extensional predicates, we can not reliably use
> >the predicate calculus, and without the predicate (functional)
> >calculus we can not formulate lawful relationships, statistical or
> >determinate.
>

> This misunderstands Stich et. al. They are not in the least evidential


> behaviorists. They are full-fledged cognitivists theorizing
> about internal representational vehicles in connectionist systems.


It doesn't MISUNDERSTAND anything. I am using the work of others
to draw a further implication.

>
> The dispute between connectionist modellers and Fodor-style "classical"
> symbolic modellers is an *internal* debate among parties that share a
> commitment to the representational theory of mind.
>

I am WELL aware of that. I am drawing an altogether different
point from the evidence!!
--
David Longley


Neil Rickert

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

In <5eartj$7...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5eafjc$f...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>In <5eabn5$4...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:

>>> E.g when cognitive scientists tell you you infer the distance of
>>>objects from retinal disparity cues. They may be making a perfectly true
>>>claim about sub-personal control system processes.

>>This would not show any evidence that the cognitive scientists do not
>>know what they are talking about.

>Well actually I think they do know quite well what they are talking
>about when they are doing their scientific work. What they are occasionally
>unclear on, it seems to me, is the *relation* between the scientific work and
>questions about the epistemic standings of persons.

If questions about epistemic standings are part of a work of pure
fiction, promoted by philosophers for their own amusement, then of
course there could be no relation between that and science. But if
epistemology purports to do with the relation between people and
reality, then it would seem surprising if scientific advances in
cognitive science did not have implications for epistemology.

>Another philosopher (Arthur Collins) used an analogy something like
>this: suppose as a contingent fact all our promises happened to be
>written down in books we each carried with us. Still one could not
>explain the nature of promising by saying: to have made a promise is to
>have written something down in one's book. This would leave out the
>normative consequences of having made a promise.

I certainly hope that cognitive scientists are not trying to define
"promise" in such simplistic ways.

>In similar fashion it might be a conceptual confusion to try to explain
>the nature of believing or asserting in terms of representations
>inscribed in one's brain.

I'll agree that if 'belief' is merely a component of a philosophical
fiction, then that would be a confusion. But if 'belief' has to do
with normal usage of the word, then a scientific investigation of
brain representations could tell us something about what is a
belief.

>[Collins' argument is roughly like this: I could truly ascribe the
>occurrence of one of these representations to myself in a way that
>simply does not commit me on the truth or falsity of the belief
>content. But he reminds us I simply cannot ascribe a belief to myself,
>say the belief that my boss is in Pittsburgh today, without acquiring a
>status of "epistemic risk" according to which I am right or wrong as KV
>is in the Iron City on 2/17/97.

As far as I can tell, epistemic risk is not a life threatening
disease. You are making this sound more like fiction.

>>>The point was to reject the representational theory of mind, the
>>>theory that knowledge of the world is necessarily mediated by
>>>representational vehicles.

>>And what would be the point of that, other than to uphold some form
>>of Ludditism?

>It has nothing at all to do with Ludditism, which I take it is fear of
>technology. It is good because it is a conception according to which
>it is intelligible that our thought could answer for its correctness
>to something outside itself, and so possess the objectivity we think
>it must. It avoids the paradoxical line of thought that aims to
>show that knowledge or intentionality is impossible.

I don't see anything in RTM which denies that thoughts could answer
to their correctness to something outside. I would have expected
that a science of how representations are formed would relate
thoughts to the external world. The only arguments I have seen that
knowledge is impossible are arguments based on obviously wrong
conceptions of knowledge.

>On the assumption of the representational theory of mind a cognitive
>subject is doomed to compare mental representations only to each other,
>and it becomes mysterious how the world could exert any sort of
>rational constraint on our thinking.

I am more concerned with the Weinstein theory of mind, which seems to
make everything mysterious.

>>A wonderful "Just So" story that allows you to sound as if you have
>>analyzed the situation, and reported it with eloquence. But it does
>>nothing at all to deal with the scientific issues, except perhaps to
>>sweep them under the rug.

>You say that as if it's a negative thing. *Of course* it doesn't
>address the scientific issues. It doesn't even pretend to. It purports
>addresses the conceptual issues, under the assumption that we don't
>need to wait for science to address its issues to do that.

In that case, I wonder what relation such word play has to AI.

>I think if we have the right conceptualization of intentionality and its
>relation to the world we are equipped to explain how objective
>perception is possible in a way that the representational theory of mind
>cannot. One shouldn't presume that all issues are best addressed by
>scientific work.

There could be no "right conceptualization of intentionality." For
'intentionality' itself is little more than a fudge factor to explain
away troublesome problems with philosophical theories.


Oliver Sparrow

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

(Anders N Weinstein) writes:

> But the *great* thing about JJ Gibson's usage is that it has it that
> information is not "subjective" in either of those senses.
> Information
> could be objectively present whether or not it is picked up, just as
> chroma information is present in a broadcast signal even if it is as
> nothing to a black and white set.

I know not said Gibson. But I would worry about ineffables which are
declared as 'real'. Real is what we call that which does stuff. Chroma
is designed into a signal for a specific decoder: where there is no
decoder - say, in a wet twig or a farm gate - then the signal heats
the conductor but doesn't do chroma-type stuff. Similar things are
true of e.g. colour. Bradley and all the other English label-stickers
want to make a song and dance about this: spectrum is as spectrum
does, and it's blue when something that uses "blue" as a determining
characteristic says 'tis blue. Otherwise it's just photon soup.


_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Oliver Sparrow

unread,
Feb 17, 1997, 3:00:00 AM2/17/97
to

Of course, technology gives us the opportunity to embed
information and context together. A convincing view of
space travel is that the best way to do it is to squirt off
scads of very basic nanomachines with a light pressure hose,
such that when the hit a bit of cometary debris they unwrap
themselves and their programming into a 'view and report'
system, plus 'squirt off more nanomachines' generator.
Much what a plant does, of course, but without the nervous
system.

Thus: an active software primordium detects the potential
of its environment, unwraps itself and performs its task.
Java as controlled virus; or the evocation of a latent process
that a system has learned to support.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Anders N Weinstein

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <5eafjc$f...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5eabn5$4...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>On the other hand, perhaps readers who have attained an even better
>insight can see value in "brain in a vat" talk. What you lack, is
>the insight needed to realize that you are not the arbiter as to what
>is a better insight.

Perhaps. I am expressing my view, one for which I think considerations
can be offered. It really does not advance a discussion to say, in
effect, "perhaps you are wrong". Perhaps I am, but I can't do anything
with that claim.

>>One can seem falsely to know what one is talking about, when one does
>>not.
>

>And one can falsely claim that others do not know what they are
>talking about, when they do indeed know.

Sure. As above, one can't do anything with meta-level observations like
this.

>> E.g when cognitive scientists tell you you infer the distance of
>>objects from retinal disparity cues. They may be making a perfectly true
>>claim about sub-personal control system processes.
>

>This would not show any evidence that the cognitive scientists do not
>know what they are talking about.

Well actually I think they do know quite well what they are talking
about when they are doing their scientific work. What they are occasionally
unclear on, it seems to me, is the *relation* between the scientific work and

questions about the epistemic standings of persons. This comes out
mainly in extra-scientific attempts to claim philosophical significance
for their work, not in the work itself.

Another philosopher (Arthur Collins) used an analogy something like
this: suppose as a contingent fact all our promises happened to be
written down in books we each carried with us. Still one could not
explain the nature of promising by saying: to have made a promise is to
have written something down in one's book. This would leave out the
normative consequences of having made a promise.

In similar fashion it might be a conceptual confusion to try to explain


the nature of believing or asserting in terms of representations

inscribed in one's brain. There might be such representations, just as
in the promising case. But the representations alone do not have the
normative significance of belief ascriptions.

[Collins' argument is roughly like this: I could truly ascribe the
occurrence of one of these representations to myself in a way that
simply does not commit me on the truth or falsity of the belief
content. But he reminds us I simply cannot ascribe a belief to myself,
say the belief that my boss is in Pittsburgh today, without acquiring a
status of "epistemic risk" according to which I am right or wrong as KV
is in the Iron City on 2/17/97.

I would be happy to discuss this interesting argument at greater length.
It occurs in his book _The Nature of Mental Things_.]

>>The point was to reject the representational theory of mind, the
>>theory that knowledge of the world is necessarily mediated by
>>representational vehicles.
>

>And what would be the point of that, other than to uphold some form
>of Ludditism?

It has nothing at all to do with Ludditism, which I take it is fear of
technology. It is good because it is a conception according to which
it is intelligible that our thought could answer for its correctness
to something outside itself, and so possess the objectivity we think
it must. It avoids the paradoxical line of thought that aims to
show that knowledge or intentionality is impossible.

On the assumption of the representational theory of mind a cognitive


subject is doomed to compare mental representations only to each other,
and it becomes mysterious how the world could exert any sort of
rational constraint on our thinking.

>A wonderful "Just So" story that allows you to sound as if you have


>analyzed the situation, and reported it with eloquence. But it does
>nothing at all to deal with the scientific issues, except perhaps to
>sweep them under the rug.

You say that as if it's a negative thing. *Of course* it doesn't
address the scientific issues. It doesn't even pretend to. It purports
addresses the conceptual issues, under the assumption that we don't
need to wait for science to address its issues to do that.

I think if we have the right conceptualization of intentionality and its


relation to the world we are equipped to explain how objective
perception is possible in a way that the representational theory of mind
cannot. One shouldn't presume that all issues are best addressed by
scientific work.

>>I don't think I have every tried to rule out any research activities.
>


>No, of course not. You have merely tried to rule out the use of
>language to communicate those research activities.

I am not sure that the language cognitive scientists use to communicate
their good work really does require linking it to normative epistemic
concerns. Many are rather circumspect in their science; e.g. the brain
scientists who talk about the neural activity that "subserves"
particular cognitive capacities.

Oliver Sparrow

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

(David Longley) writes (or some such):

> What nonsense - science is all about prohibitions. That's what
> LAWS effectively do, and why SOMETIMES we are surprised.

And the LORD spake unto DAVID and he SAID: people who WRITE in erratic
block CaPiTaLs are usually bonkers and ALWAYS dull.

To the point, science is not about prohibitions, laws or any such
rigidities: science is a style of thinking and record keeping, such
that we arrive at better ways of looking at the world, ways which
we share and improve through a flexible, evolving social process.
It is not a legislative framework, lacks police and relies upon
an individual's audit trail for his or her authority.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Oliver Sparrow

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

"Carl B. Frankel" writes:
> If we are going to share a discussion about what may or may not be
> true, you are going to have to give us some kind of criteria for
> how to use the term.

Entirely so, with the added point that whatever people may mean by
"true", a working AI needs only an empyrical approximation to it
in order to function.

Much of the gibber associated with this usegroup revolves around this
basic issue. If one removes the need to be right, and substitutes the
need to be locally adaptive, then such a weight of fustian centuries
lifts that one can imagine daylight. Cue Fidelio. Trumpets off....


_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

David Longley

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <330863...@ome1.com> ca...@ome1.com "Carl B. Frankel" writes:
>
> I would not go as far as Longley and assert that such states do not
> exist or are not worthy of study. I do believe that we have mental
> behaviors, and that their acquisition, exhibition and extinction
> are probably subject to the same kinds of rules applicable to
> other behaviors. However, we need some way of instrumenting such
> behaviors that is much more reliable than self-report, and we must not
> assign intentions importance out of proportion to the variance they
> explain.
>
Then you are going as far as Longley !!

My point has always been the Quinean (and Skinnerian) one that to
be is to be the value of a variable. If one can not reliably use
an existential quantifier in a given context, one has to question
whether that context is about anything CONSISTENTLY. What this
means is that intensional notions just are not dependable for
measurement/prediction systems.

SCIENCE depends on our ability to build relations and functions
for prediction of one observation statement from another.

Please don't reduce what I have said to a "metaphysical"
statement.

--
David Longley


David Longley

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <704522...@chatham.demon.co.uk>
oh...@chatham.demon.co.uk "Oliver Sparrow" writes:

Your science is "popular science" Oliver - mine is indeed the
dull kind that prohibits, which is policed, and which has no place
for notions such as God....(and other rhetorical devices).

--
David Longley


Oliver Sparrow

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

(David Longley) writes:

> Your science is "popular science" Oliver - mine is indeed the
> dull kind that prohibits, which is policed, and which has no place
> for notions such as God....(and other rhetorical devices).

Well there you go, then. Mephistopheles in Faust: "I am the spirit
that always denies." Only he said it in German, which I shall not
risk.

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk

Anders N Weinstein

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <5eanmt$f...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5eadfq$5...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>>For example if you know there is a mirror under the kitchen table that
>>Karl doesn't know about, in which the reflection of the cat in the
>>hallway will be visible to him, you may in certain situations predict
>>that Karl will acquire the false belief that the cat is under the table
>>when he walks into the room looking for the cat.
>
>I might consider Karl to have fallen for a perceptual illusion, but I
>would not consider him to hold any false beliefs. I think the way
>you (and many philosophers) use 'belief' stretches the meaning of
>that word far beyond what it can sustain. It seems just silly to
>apply the term "belief" to something temporarily in short term
>memory.

Don't see the problem. Karl might act on it, draw conclusions from
it, make plans based on it. What more do you need?
To be blunt, I can't see how the length of the temporal duration
is relevant at all.

And anyway, no-one said the illusion had to be very short-lived.
Maybe Karl is lead to write an epic poem
called "the cat under the table", recount the tale on numerous
occasions, or whatever.

> And it is equally silly to say that Karl is in a state of
>untruth, simply because of falling for a perceptual illusion. You

I don't see anything the slightest bit silly about it. Karl might base
a rather elaborate bit of planning or reasoning on his belief that the
cat is in the kitchen. Which is false.

I am not sure what else illusion could be be but a belief that is false?

> And it is equally silly to say that Karl is in a state of
>untruth, simply because of falling for a perceptual illusion. You
>might as well say that to watch TV is to be in a state of untruth,
>since the TV set is designed to create perceptual illusions.

Irrelevant. You don't fall for them, so don't acquire false
beliefs in that case. But we were talking about the case where Karl's
perceptual experience led him into a false belief.

If you believed you were seeing little men inside the box, etc, then
of course you would be subject to an illusion.

>>Why? Socialization into mastery of techniques of operating with verbal
>>concepts does not itself involve conceptualization.
>
>No, you are quite right. It does not involve W-conceptualizion. But
>keep in mind that you are a dualist, and what you mean by concept
>(i.e. W-concept) is some mythical entity which does not exist and
>could not exist as far as I am concerned.

I will be sure and keep in mind that I am speaking of mythical entities.

>>But in Frege's terms Berkeley was speaking about ideas in the subjective
>>sense.
>
>Whereas you are talking about propositions in the subjective sense,
>but arbitrarily declaring them to be objective.

It is not an arbitrary declaration; it is more based on a kind of
transcendental argument from the nature of discourse: if you and I
could never share a common content to discuss, then there could be no
hope of ever achieving a rational engagement between interlocutors. It
would not just be difficult, but *impossible*, since one could never have
any possible evidence that the other person was working with the same
concepts. The argument is based on the recognition that to make an
assertion at all is to lay a claim to intersubjective validity, it is
not merely a subjective venting of your own private state. So that
contents must be at least potentially shareable, if not actually shared.

You might try reading some Frege, e.g. the Thought, or Foundations of
Arithmetic, for a powerful case for the idea that contents
must be objective.

>> Frege insisted that his Gedanke (Thoughts, propositions)
>>were *objective* entities, in the sense that they are shareable by
>>different subjects.
>
>In mathematics, when we come up with such declarations, we are first
>required to prove that the set of objects referred to is non-empty.
>I haven't seen any convincing arguments that there are such
>things as propositions. I'll admit that philosophers have written a
>lot about such things, but that does not prove that there are any of
>them.

But on Frege's view we don't have to prove there are propositions in
the same way, for roughly the same reason that we do not have to first
justify laws of logic in order to reason in accordance with them. That
is because the existence of propositions and logical relations between
them is something on a more fundamental level, one which, it is held,
is a presupposition of any and all material discourse on particular
matters of fact.

>That rather depends on how such inquiries are made. Even if you are
>correct, that would only go to further show the emptiness of the term
>"justification" as used in epistemology. Some time ago, I tried to
>argue that science is founded on a core of carefully chosen analytic
>statements, but it seems that the ideological commitments of modern
>philosophy rules out any sensible discussion of that idea.

I might be sympathetic to that idea. But I think that in actual
scientific practice the distinction between what is analytic and what
is not is a shifting one, like Wittgenstein's distinction between
(defining) "criteria" and (contingently related) "symptoms". Because it
can shift under pressure from empirical results, I don't think the
distinction should be given central importance -- it seems more a
provisional matter of what is being held fixed at a given moment --
where we are standing on Neurath's boat.

For example, the belief that there are exactly four fundamental forces
might be something that is provisionally immune from revision. At the
moment, it functions as a rule in the interpretation of experimental
results, and is not itself considered to be something subject to test
-- if data doesn't fit the theory, we don't first think of positing
another fundamental force, but look for a different repair.

And yet, a repeated pattern of failures, together with an alternative
theory of a fifth force, could lead it to be revised. Such proposals
have appeared and been understood if not adopted by physicists.
Certainly few would take it to be analytic in the sense of true by
definition, although it might provisionally be functioning as a rule.
I think analytic statements in theoretical science are more like that.

Carl B. Frankel

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

Anders N Weinstein wrote:
>
> In article <330863...@ome1.com>, Carl B. Frankel <ca...@ome1.com> wrote:
> >Anders N Weinstein wrote:
[AW]

> >> It is very clear that the concepts of scientific utility are only a
> >> subset of the concepts usable for stating truths about the world.
> >
[CBF]
> >
> >If we are going to share a discussion about what may or may not be
> >true, you are going to have to give us some kind of criteria for
> >how to use the term.
>
[AW]
> Which term? "True"? In most contexts we never have to explicitly use
> the word "true" at all, although it can be a convenience to do so. It
> enables us to effect assertions indirectly -- a bit like calling "eval"
> on a lisp expression, or dereferencing a pointer to an assertion, as it
> were.
>
> But I would say we implicitly aim at truth when we make any serious
> assertion at all. So you can study any and all assertions you like for
> illumination about truth. But you better not demand a definition which
> furnishes you with a criterion *before* you make any truth-claims, or
> else you could never get started on your job. For you would need to
> first apply the criterion to the terms of your definition, and so on.
>
[CBF]

It seems to me almost like you are using "truth" as a primitive term
in a formal system, i.e., a term that is to be defined ostensively.
That's fine with me, as long as you can provide clear decision
criteria for the application of the term, so that I can try to make
some sense of the applicability of your axioms and the system you
derive from them.

For myself, I believe the formal enterprise will do much better
if built up from terms like 'difference', 'before', 'after',
'connect', etc.

[CBF]


> > Otherwise, to assert that, "concepts of
> >scientific utility are only a subset of the concepts usable for stating
> >truths about the world," begs the question.
>

[AW]
> The desk in my office at the University of Pittsburgh is a mess, though
> I can use it well enough.
>
> There, I've just given you a truth about the world. I hope you are
> suitably impressed. Now how many of the component concepts are of great
> scientific utility? "Desk"? "Office"? "University"? "mess"? "use it"?
> "well enough"?

[CBF]

Thanks for sharing ;-)

Seriously, though, I did not intend to suggest that science is the
only means to "truth", but rather that without giving a definition for
"truth" or some rules for the term's usage, your assertion
either begs the question or places itself beyond evaluation.

[AW]
> You may object: people with different standards might well disagree
> with me on this rather value-laden matter. Quite correct, I didn't say
> it was uncontroversial. But I and they must share a common content if
> we are to so much as disagree on something.
>

[CBF]

There's the rub.

In the first place, a content is different from a truth. (I'm not
saying that you said it isn't, but wanted clearly to mark the shift
in discourse.)

Moreover, that we are sharing some content in this conversation is
as compelling for me as I would guess that it is for you. But
if we are to avoid begging the question by taking our own experience
of compellingness in evidence, how do we verify that this is occurring?

It seems to me that we must start by understanding what exactly we
are sharing, so that we can then develop an appropriate means to
instrument the sharing of it. Unfortunately, 'content', the name
for what we are sharing, is yet another term desparately in search of
a definition. With appropriate rules for application, I suppose we
could make this a primitive term as well.

But I would be concerned that in so-doing, we might obscure something
very important. We might want to reserve the term 'content' for use
when there is a motive driving the semantic analysis, or if not, I am
fairly sure that we will want to give special treatment to such a
species of content. Such motives would be closely linked to the
utility values (and perhaps other kinds of values) that make so many
of your contents about your office so value-laden :-).

Intentions can be motives, but not all motives are intentions, unless,
as I think Sloman might(!) believe, intention is synonymous with a
reference to a control system, and the usual use of intention,
reflexive intention, is treated as a special case.

[CBF]


> >other behaviors. However, we need some way of instrumenting such
> >behaviors that is much more reliable than self-report, and we must not
> >assign intentions importance out of proportion to the variance they
> >explain.
>

[AW]
> I don't think I said that self-reports should be treated as
> authoritative. I said that failure to find discrete
> intention-representations inside the body, one per properly ascribed
> intention, need not undermine the ascription.
>
> Even where your self-report is inaccurate, it need not be because it
> fails to accord with a hypothesized representational vehicle inside
> your body, a fiction that no one really knows or cares about. You gave
> the example of people in couples therapy whose behavior-pattern
> exhibited a certain teleology -- it strove after minimizing shame, you
> said -- even though this was hardly an intention found in their self-report.
> But does that entail that inside their body they have a little problem-solver
> with a goal-representation that means "minimize shame"?
>
> Well maybe they do, nothing is impossible. But, the point is, maybe
> they don't. It might be that it is just a molar tendency of the complex
> of inner modules working together to aim after such a goal, even though
> that goal itself is nowhere represented inside them. In that case the
> people are not authoritative in their self-reports but NOT because they
> fail to match an inner control system representation, but because their
> reports fail to accurately represent the teleology of an emergent molar
> behavior pattern.
>

[CBF]
Yes, but I don't quite get your point here.

I would be the first to say that Cronbach & Meehl (1955) did more harm
than good to psychology, essentially authorizing investigators to
study hypothesized (hypostasized, I would say!) internal entities,
often on the meagerest of operationalizations, and without reference
to the relations between their own hypothesized entities and the
hypothesized entities for which every other psychologist had some
data. (And very(!) few psychologists pay attention when Paul Meehl
tries to suggest that such broad license is not exactly what he
had in mind.)

There is probably not a depression entity, nor a self-esteem
entity, nor a shame-avoidance entity (though I actually think there
might be a bad-feeling-avoidance/good-feeling-pursuit strategy
selection entity, but that is strictly an architectural question,
not an epistemological one), even though, as Longley correctly
indicates, much of the methodology of psychology tries to study
such entities as though they existed.

For the same reason, I would not consider the universe of self-reports
of the experience of intentions evidence for the existence of an
intentional entity, and I gather that you would say the same.

Moreover, if the "emergent molar behavior pattern" which results
in the experience of intentions results in experienced intentions
which are often nearly uncorrelated with other behavior, then I propose
that intentions should be studied first as behavioral productions,
and only incidentally as contents. That is, we should first
investigate what produces a mental behavior that has a content that
substantially misclassifies other, manifest behavior? I believe this
approach is as valid for constructing an agent as it for reverse-
engineering a human being.

[CBF]


> > The "truer" a person
> >holds her or his picture to be, the more work I have to do to
> >disturb her or his truths and intentions, and the poorer the
> >prognosis (though the correlation is hardly 1.0 here).
>

[AW]
> To hold it true is just to believe it. I think all you are saying is that
> people sometimes have false beliefs, and sometimes cling very strongly
> to their false beliefs, particularly beliefs about themselves and their
> character.
>
> A professor of mine likes to recount the story of encountering the
> following question on a personality test: "Do you always think your
> beliefs are true?" He answered: "yes, or else I would change them."
>
> I guess in some confused way the question is testing for a virtue like
> open-mindedness or receptivity to criticism. But strictly speaking it
> is absurd. As Wittgenstein observed, if there were a verb meaning "to
> believe falsely" it would have no significant first-person
> present-tense usage.

[CBF]
No, I'm going further to say that this use of 'true' and 'false' is
generally unwarranted and often dangerous, to oneself as much or more
than to others. Most importantly it is derails the modeling of human
information processing. We need to model how humans form pervasive and
compelling beliefs in truth and falsehood, but beating our breast will
not make anything true or false.

Rather than try for a pure, analytical defintion, I propose to
operationalize "truth", based upon observation, as that special (and
degenerate) level of confidence where no error-checking is deemed
necessary (and thus processing bandwidth allocated to error-checking
would generally be maladaptively wasted). (Understanding a
fundamental term like "truth" through a maneuver like operationalization
rather than through analysis: No wonder so many of your colleagues
in philosophy consider those of us in psychology such sickos ;-))

I believe this satisfies the observations of both Wittgenstein and your
professor, allows that I will walk into your office someday and agree
with you emphatically that it is a disaster area, helps explain why
so many make such a big fuss over intentions despite the low variance
explained (most people do not error-check their own intentions), and
yet brings Putnam's problem (from another of your posts) back over
the wall, out of the realm of the nature of the signified reality,
and back into the realm of analyses of redundancies among thought-signs
experienced, across observations, observers and contexts.

We simply don't need to ask about what is *really* out there, to
know that a content is so reliably (redundantly) experienced as to
make it pathognomonic to question it. But the fact that we are not
checking for errors does not make us right. New paradigms occaisionally
come from those whom we would diagnose. Or as Wittgenstein also said
(and I do not quote exactly here), "We use 'I know' as though it
guarantees what is known as a fact. One often forgets the proposition,
'I thought I knew'."

Regards,

Carl F.

======================================================================
What's Punishing Gets Priority. || Carl B. Frankel

Carl B. Frankel

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

I'm not a philosopher and so perhaps I'm missing something important
here, but it seems to me this is getting needlessly complicated.

Why not just say that between Karl and the person watching Karl
there has been a breakdown in the reliability of inter-observer
observations, accountable by the differences in the frames of
reference from which they are making observations? (Similarly,
the reliability coefficient characterizing Neil's and Anders'
epistemic observations is not yet 1.0.)

What additional percentage is there in asking what someone with
a god-like view might make of all of this, since none of us will
ever have such a view, even were a god-like entity to attempt to
impart such a view to us? (The best we would have is the input
stream that resulted from sampling the god-like entity, perhaps
placing us a little closer to the angels, but hardly worth putting
on our CV's.)

Regards,

Carl F.

======================================================================
What's Punishing Gets Priority. || Carl B. Frankel
|| Consultant
What's Rewarded Gets Repeated. || Organizational Measurement & Eng.
|| 785 Burnett Avenue No 2
What's Measured Gets Managed. || San Francisco, CA 94131-1417
|| Tel. 415-641-8028
What's Noticed Gets Narrated. http://www.ome1.com ca...@ome1.com
======================================================================


.

Anders N Weinstein

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <330863...@ome1.com>, Carl B. Frankel <ca...@ome1.com> wrote:
>Anders N Weinstein wrote:
>> It is very clear that the concepts of scientific utility are only a
>> subset of the concepts usable for stating truths about the world.
>
>[CBF]
>
>If we are going to share a discussion about what may or may not be
>true, you are going to have to give us some kind of criteria for
>how to use the term.

Which term? "True"? In most contexts we never have to explicitly use


the word "true" at all, although it can be a convenience to do so. It
enables us to effect assertions indirectly -- a bit like calling "eval"
on a lisp expression, or dereferencing a pointer to an assertion, as it
were.

But I would say we implicitly aim at truth when we make any serious
assertion at all. So you can study any and all assertions you like for
illumination about truth. But you better not demand a definition which
furnishes you with a criterion *before* you make any truth-claims, or
else you could never get started on your job. For you would need to
first apply the criterion to the terms of your definition, and so on.

> Otherwise, to assert that, "concepts of

>scientific utility are only a subset of the concepts usable for stating
>truths about the world," begs the question.

The desk in my office at the University of Pittsburgh is a mess, though


I can use it well enough.

There, I've just given you a truth about the world. I hope you are
suitably impressed. Now how many of the component concepts are of great
scientific utility? "Desk"? "Office"? "University"? "mess"? "use it"?
"well enough"?

You may object: people with different standards might well disagree


with me on this rather value-laden matter. Quite correct, I didn't say
it was uncontroversial. But I and they must share a common content if
we are to so much as disagree on something.

>other behaviors. However, we need some way of instrumenting such


>behaviors that is much more reliable than self-report, and we must not
>assign intentions importance out of proportion to the variance they
>explain.

I don't think I said that self-reports should be treated as


authoritative. I said that failure to find discrete
intention-representations inside the body, one per properly ascribed
intention, need not undermine the ascription.

Even where your self-report is inaccurate, it need not be because it
fails to accord with a hypothesized representational vehicle inside
your body, a fiction that no one really knows or cares about. You gave
the example of people in couples therapy whose behavior-pattern
exhibited a certain teleology -- it strove after minimizing shame, you
said -- even though this was hardly an intention found in their self-report.
But does that entail that inside their body they have a little problem-solver
with a goal-representation that means "minimize shame"?

Well maybe they do, nothing is impossible. But, the point is, maybe
they don't. It might be that it is just a molar tendency of the complex
of inner modules working together to aim after such a goal, even though
that goal itself is nowhere represented inside them. In that case the
people are not authoritative in their self-reports but NOT because they
fail to match an inner control system representation, but because their
reports fail to accurately represent the teleology of an emergent molar
behavior pattern.

> The "truer" a person


>holds her or his picture to be, the more work I have to do to
>disturb her or his truths and intentions, and the poorer the
>prognosis (though the correlation is hardly 1.0 here).

To hold it true is just to believe it. I think all you are saying is that

Anders N Weinstein

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In article <5eb7tj$f...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:
>In <5eartj$7...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>If questions about epistemic standings are part of a work of pure
>fiction, promoted by philosophers for their own amusement, then of
>course there could be no relation between that and science. But if

But they are not. They concern matters of fact, albeit different ones
than cognitive scientists deal with.

I often use the loose analogy between two perspectives on a dollar
bill: as a physical object, and as a bearer of value. In roughly
similar fashion I think one can distinguish, among others, two
perspectives on human conduct: one, as a sequence of particular acts
brought about by an inner control system with its representational
vehicles; another, as an exercise of a cultivated understanding on the
part of the person as a whole. Both are correct, both concern certain
real aspects of the world. But I think the epistemic standings of
persons live in the second sort of logical space, one that is
relatively autonomous and independent of the study of what's literally
inside your body.

>epistemology purports to do with the relation between people and
>reality, then it would seem surprising if scientific advances in
>cognitive science did not have implications for epistemology.

It is not surprising to me given my philosophical background.

>>In similar fashion it might be a conceptual confusion to try to explain
>>the nature of believing or asserting in terms of representations
>>inscribed in one's brain.
>
>I'll agree that if 'belief' is merely a component of a philosophical
>fiction, then that would be a confusion. But if 'belief' has to do
>with normal usage of the word, then a scientific investigation of
>brain representations could tell us something about what is a
>belief.

Why *must* this be true? Why couldn't your beliefs and intentions have
the same sort of ontological status as your promises and debts and the
economic value of a dollar bill (social externalism)?
If so, then trying to understand them by looking in your brain is looking
in the wrong place.

Note in this regard that our everyday concepts of reasoning do involve
the allied notion of *committment* and *entitlement*, which are similar
normative notions. Pitt's Bob Brandom makes this the starting point for
his theory of belief as discursive committment: such things as that if
you say the cat is in the kitchen then you are *committed* to the claim
that an animal is in the kitchen and cannot become simultaneously
*entitled* to the claim that the catch is in the yard. So that in
playing pitch and catch with sentences in this way one is traversing
positions in a space constituted by normative inferential relations,
not descriptive causal ones (you might in fact say any of these other
things).

Brandom's theory is that such normative relations between contents are
matters of social practice or custom with the words, although that part
should be controversial. But it seems to me to be a pretty substantial
theory worth considering.

>As far as I can tell, epistemic risk is not a life threatening
>disease. You are making this sound more like fiction.

No one said it was a life-threatening disease, so I don't know what
point you are making. Being wrong might be pretty unimportant in a great
variety of contexts. But it is still being wrong.

Anyway, I don't think I can really present Collins' argument in sufficient
detail here, but I would be delighted to discuss it in detail in another
threads if anyone is interested. I offer it simply as an example of a
consideration, based on certain putative features of the ordinary logic
of belief ascription, that tells against the picture of beliefs as
involving a relation to inner representations of any kind.

>I don't see anything in RTM which denies that thoughts could answer
>to their correctness to something outside. I would have expected

Not explicitly, but the picture itself makes it mysterious.

>that a science of how representations are formed would relate
>thoughts to the external world. The only arguments I have seen that

Not if one assumes methodological solipsism, as Fodor and,
in a different context, Chomsky, seem to do.

>>You say that as if it's a negative thing. *Of course* it doesn't
>>address the scientific issues. It doesn't even pretend to. It purports
>>addresses the conceptual issues, under the assumption that we don't
>>need to wait for science to address its issues to do that.
>
>In that case, I wonder what relation such word play has to AI.

Well first, it might mean the main line of work in AI has employed the wrong
conception of understanding. I think you agree with me on this.

Second, I think it has relevance for assessing the philosophical
significance of AI. I don't expect it to have much cash value for
researchers as such, although it might help open one's mind to alternative
strategies and techniques.

Neil Rickert

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In <5ed3k0$g...@usenet.srv.cis.pitt.edu> ande...@pitt.edu (Anders N Weinstein) writes:
>In article <5eanmt$f...@ux.cs.niu.edu>, Neil Rickert <ric...@cs.niu.edu> wrote:

>>I might consider Karl to have fallen for a perceptual illusion, but I
>>would not consider him to hold any false beliefs. I think the way
>>you (and many philosophers) use 'belief' stretches the meaning of
>>that word far beyond what it can sustain. It seems just silly to
>>apply the term "belief" to something temporarily in short term
>>memory.

>Don't see the problem. Karl might act on it, draw conclusions from
>it, make plans based on it. What more do you need?

In that case, my computer must at various times have a belief that
the 'A' key is depressed, and a thermostat must at various times have
beliefs about the temperature. I guess the problem of cognizing
artifacts has already been solved.

Somehow I think this trivializes "belief".

Perhaps our difference is this:- I distinguish between Karl
believing that there is a cat under the table, and Karl having the
belief that there is a cat under the table. I take it that you do
not make such a distinction, although I think the distinction is
common in talk between non-philosophers. The claim that Karl
believes P is a rather weak claim. But when we invent a 'belief' as
an entity, we are saying something stronger, and implying a degree of
committment by Karl.

>>>But in Frege's terms Berkeley was speaking about ideas in the subjective
>>>sense.

>>Whereas you are talking about propositions in the subjective sense,
>>but arbitrarily declaring them to be objective.

>It is not an arbitrary declaration; it is more based on a kind of
>transcendental argument from the nature of discourse: if you and I
>could never share a common content to discuss, then there could be no
>hope of ever achieving a rational engagement between interlocutors. It
>would not just be difficult, but *impossible*, since one could never have
>any possible evidence that the other person was working with the same
>concepts.

I think it is almost certain that we are usually working with
different concepts. Your claim that such would make discourse
impossible has no basis that I can see. Presumably it is based on
some rather confused theory of language.

In my youth I sang all of the standard Christmas carols about sleighs
and winter and snow and all of that. I didn't seem to have any real
problem with them. Nevertheless, since I had never seen snow, and
since Christmas fell in summer, we might reasonably expect
significant conceptual differences.

>>In mathematics, when we come up with such declarations, we are first
>>required to prove that the set of objects referred to is non-empty.
>>I haven't seen any convincing arguments that there are such
>>things as propositions. I'll admit that philosophers have written a
>>lot about such things, but that does not prove that there are any of
>>them.

>But on Frege's view we don't have to prove there are propositions in
>the same way, for roughly the same reason that we do not have to first
>justify laws of logic in order to reason in accordance with them.

I think there is no relation between these. The laws of logic have only to
do with syntax. They are justified by their analyticity. The question
of propositions is supposed to have something with reality.

> That
>is because the existence of propositions and logical relations between
>them is something on a more fundamental level, one which, it is held,
>is a presupposition of any and all material discourse on particular
>matters of fact.

If the existence of propositions is simply a matter of a required
presupposition, I would think you have slipped into solipsism.

>>That rather depends on how such inquiries are made. Even if you are
>>correct, that would only go to further show the emptiness of the term
>>"justification" as used in epistemology. Some time ago, I tried to
>>argue that science is founded on a core of carefully chosen analytic
>>statements, but it seems that the ideological commitments of modern
>>philosophy rules out any sensible discussion of that idea.

>I might be sympathetic to that idea. But I think that in actual
>scientific practice the distinction between what is analytic and what
>is not is a shifting one, like Wittgenstein's distinction between
>(defining) "criteria" and (contingently related) "symptoms".

I don't see any evidence of shifting. I see a lot of evidence of
shifting concepts, and of different sets of laws for the altered
concepts. But I don't see evidence of changing core laws while
holding to the old concepts.

>For example, the belief that there are exactly four fundamental forces
>might be something that is provisionally immune from revision.

I think it could not change without a corresponding change in
concepts. But in that case, it would not be a changed law so much as
a new law built on the new concepts.


Carl B. Frankel

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

Aaron Sloman wrote:
>
> ande...@pitt.edu (Anders N Weinstein) writes:
>
> > article: 40824 in comp.ai.philosophy
> > Date: 13 Feb 1997 18:31:38 GMT
> > Organization: University of Pittsburgh
> > ...
[AW]

> > Reject thought-signs in favor of a more existentialist conception of
> > intentionality as a thoroughly situated openness to being...
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
[AS]

> Sounds good. Where can I find some?
>
[AW]

> > It becomes clear instead that it really
> > makes no sense to speak of a cognitive subject as if it might be a
> > brain in a vat,
>
[AS]
> Clear to whom?

>
> Why don't you just say that you (and some other philosophers) simply
> don't understand such talk and are not interested?
>
> Those of us who know what we are talking about, will, in any case,
> get on with the task of exploring what sorts of cognitive
> architectures are and are not possible for more or less disembodied
> agents, of various kinds, including, for example, a mathematical
> agent committed to doing nothing but explore number theory,
> motivated by concerns for depth, elegance, power, economy, and
> truth; or an internet explorer agent that works for its owner,
> finding things out, negotiating with other agents, etc.
>
> Whether these things have "thoroughly situated openness to being"
> doesn't seem to me to be a very clear question, but in any case I
> cannot see that it makes any difference to anything of interest.
>
> Everything is thoroughly situated in the sense that it is part of a
> web of relationships to other things.
>
> You may have your own preferred set of relationships (spatial,
> temporal, biological, cultural, etc.) but that's your choice.
>
> If you don't want to call the things that have a different
> collection of relationships "cognitive" that's fine. That's
> your semantic preference. But don't expect anyone else to be
> influenced by expressions of semantic preferences if that doesn't
> help them with their research objectives.
>
> Arguing about whether such states "really" involve a cognitive subject
> is as pointless as arguing over whether a circle really is an ellipse,
> or whether 0 really is a number, or a virus really is alive, or whether
> it's noon on the moon when the moon is directly above Greenwich at 1200
> GMT ...
[snip, more AS]

> Legislating that the use of intentional (cognitive) descriptions are
> inappropriate in such cases is purely destructive. It serves no
> useful purpose, and gives no new insight.
>

[CBF]

In the first place, I strongly agree with you that the phrase


"a more existentialist conception of intentionality as a thoroughly

situated openness to being" is seriously problematic. It poses far
and away more questions than it answers, and is seriously wanting
by way of empirical applicability. (How do I push these words
around as other than symbols in a philospher's language game?)

Empirical applicability is a must for any concept that would move AI or
psychology forward--or philosophy for that matter: Even construed as a
search for "truth", I would be hard pressed to understand, outside of
abstract mathematics, how something could be true and yet have no
manifestations whatsoever, direct or indirect, available to shared
observation (which gets back to Bateson's notion of information, if it
make no difference whatsoever, how has it informed and how is its truth
value or applicability to be assessed.) A private truth that makes
absolutely no difference for how someone lives is a piece of irrelevant
trivia, even to the person her/himself.

I think the real problem with existentialist conceptions is that they
are typically existential in the sense a mathematician would recognize:
they assert the existence of something without being bound to
formulate its construction. In this sense, I would say that Putnam, as
Anders describes him, is a closet existentialist: He posits the
existence of a correspondence between thought-signs and a reality
signified, without burdening himself to construct the mapping. (I
would prefer the terms 'percepts' and 'reality perceived'.)

Admittedly, the belief in the existence of such a correspondence is
deeply compelling for most of us. But given the central role that
the emotion of compellingness takes in bounding (dampening) the amount
of processing bandwidth allocated to error-checking, it begs the
question to take compellingness in evidence (nota bene, presiders of
graduate seminars!)

As such, I believe that existential propositions pose intelligible
questions only to the extent that they drive us to derive a
relevant construction. If no such construction exists, and more
importantly, if no such construction is possible (as is the case
with a mapping to a transcendant, god-like view), then the question
only appears intelligible, like a partial differential equation with no
real roots--it can be manipulated as an abstraction, but will not
result in any real-world solutions. The question is a thought-sign
with no clear mapping to any significand, a clear example of the
danger of uncritically accepting the compellingness with which we
imagine that any grammatically intelligible question perforce exists
in meaningful relations with the rest of our experience.

I think that "a thoroughly situated openness to being" is a less
than crisp way to propose abandoning the obsession with the
correspondence between thought-signs and "reality"-signified, and
to get on with analyzing the relations among the thought-signs.
I may be a bit of a Pollyanna, but I suspect that Anders, you and I
could find a proposition in this region on which we could agree.

As such, I am puzzled by the intensity of your response to Anders
since, apart from his existential formulation of intentional processes,
it seems to me that he is saying much the same as you are. You say,


"Everything is thoroughly situated in the sense that it is part of a

web of relationships to other things," a position with which I
thoroughly agree, and to which I would guess Anders also subscribes
(though perhaps I misunderstand). Indeed, it seems to me that he is
directly applying this notion of "thorough situatedness" in asserting
that there is limited value in speaking "of a cognitive subject as if
it might be a brain in a vat, without ever committing [one]self to the
body and world in which it does its living."

I can't tell how limited Anders thinks the value of problem
decomposition, e.g. studying the operation of the brain in a vat.
I clearly see breaking up big problems into smaller ones as
an important maneuver in many inquiries. (To the extent that the
smaller problems are simpler to solve, decomposing problems
into components often reduces the entropy associated with the both
the smaller inquries and the larger one.) However, I think both you
and Anders would assert that without a clear eye to how to put the
components back together, such decomposition typically invalidates the
inquiry into the operation of the larger system.

Moreover, in his posting on this thread and in other postings as well,
I have understood Anders to subscribe to a model of at least two
major types of subsystems, a control subsystem and an intentional
one. So, I daresay he is not(!) among those who would legislate the
use of intentional descriptions to be inappropriate, as would, say,
Longley.

And even I have taken a somewhat Longley-ish position, in
asserting that we should not over-rate the influence of intentions
on the operation of the underlying control system, just because we
experience a compelling sense of identification with our own
intentions. (See some of my recent postings on the intentions of
parties to a failing marriage.) Reflexive intentions are very(!)
often just another self-serving behavior, the production of which
needs to be modelled along the lines of other behavioral productions.
(Maybe this is close to what Longley is trying to get at when he uses
the term 'folk psychology.')

I believe that the investigation of emotions as a control system
phenomenon will yield much more fruit than the investigation of
intentions, because emotions typically have "real roots" in the
utility values they are perceived to encode--unless, of course,
you mean "intention" to be synonymous with the reference to a control
system.

Carl B. Frankel

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

David Longley wrote:
>
> In article <330863...@ome1.com> ca...@ome1.com "Carl B. Frankel" writes:
> >
> > I would not go as far as Longley and assert that such states do not
> > exist or are not worthy of study. I do believe that we have mental
> > behaviors, and that their acquisition, exhibition and extinction
> > are probably subject to the same kinds of rules applicable to
> > other behaviors. However, we need some way of instrumenting such
> > behaviors that is much more reliable than self-report, and we must not
> > assign intentions importance out of proportion to the variance they
> > explain.
> >
> Then you are going as far as Longley !!
>
> My point has always been the Quinean (and Skinnerian) one that to
> be is to be the value of a variable. If one can not reliably use
> an existential quantifier in a given context, one has to question
> whether that context is about anything CONSISTENTLY. What this
> means is that intensional notions just are not dependable for
> measurement/prediction systems.
>
> SCIENCE depends on our ability to build relations and functions
> for prediction of one observation statement from another.
>
> Please don't reduce what I have said to a "metaphysical"
> statement.

"To be is to be the value of a variable."

That seems to me a highly metaphysical statement. I know what the
statement means, but do not have the tools to evaluate it quickly.
Please forgive me if I chew on it for a while, and respond more
fully when it seems relevant to a subsequent post.

Initial thoughts:

I think one objection Anders might raise right away has to do with the
context of arithmetic, where there are no variables. He made several
statements in another post about the disorganization in his office, the
kinds of statements that I would ordinarily consider to be expressions
in an arthmetic, not an algebra like the predicate calculus. His
statements are not scientific truths; however, unless he is
pathologically compulsive, I daresay that anyone of us might verify
his remarks by means of the arithmetic operation of walking into his
office and taking a look around for ourselves. (Or maybe
its not an objection. As I said, I'm a bit out of my depth.)

As I just said to Neil in another post, I suspect you and I have
some epistemological overlap, but very different ontologies.
In general, ontologies make me very queasy, exactly because they
express more certainty than experience ever verfies. Ontological
categories admit no error variance, it seems to me. Yet it seems
to me that the Quinean proposition you cite is fundamentally an
ontological one.

Or if not, the only other way I can make sense of it is to treat
the value of a variable as an entity in an arithmetic calculation
in an information processing sense, which brings us back to
analyzing redundancies across observations, observers and contexts,
which is not what I understand logical positivism to have in mind.

Neil Rickert

unread,
Feb 18, 1997, 3:00:00 AM2/18/97
to

In <330A08...@ome1.com> "Carl B. Frankel" <ca...@ome1.com> writes:

>As I just said to Neil in another post, I suspect you and I have
>some epistemological overlap, but very different ontologies.
>In general, ontologies make me very queasy, exactly because they
>express more certainty than experience ever verfies. Ontological
>categories admit no error variance, it seems to me. Yet it seems
>to me that the Quinean proposition you cite is fundamentally an
>ontological one.

You have listed what is probably my main disagreement with Longley.
For it seems to me that he has a rather fixed idea as to his
ontology, and his extensionalism is a restriction to the use of only
that ontology. Like you, I am skeptical of any such restrictions.
The history of science is not compatible with the use of a fixed
ontology.


It is loading more messages.
0 new messages