
why Ginsberg grouses


Matthew L. Ginsberg

Jun 21, 1995
There have been some complaints from the Marengo guys and from Jorn
Barger about my attitudes here, and I thought I would take a minute to
try to reply to them. David Magerman has done a better job than I
could, but I'll give it a shot anyway.

Let me begin with a rather more general comment. Being a scientist
is, over all else, fun. But following a close second is the fact that
being a scientist is *hard*. You're competing with the smartest folks
on the planet, and it shouldn't be too surprising that it's hard work
to make progress. It might *seem* that Newton's observations are
obvious, but that's only in retrospect -- Newton worked hard to
develop his theories, and they were anything *but* obvious at the
time. Newton didn't publish his theory of gravitation for years while
he developed enough calculus to prove that the orbits of the planets
should be elliptical, and that spherical planets gravitated like point
objects.

AI is no different. The problems are accessible, which is great. It
means that all kinds of people can bring their insights to bear on
them. But it *doesn't* mean that is all there is to it. Simply
thinking about a problem is worth nothing. What counts is finding a
solution to the problem that has computational impact, and then really
validating that solution on real problems and real data. You have to
step up to the plate, make some falsifiable claims, and then see if
you were right. More often than not, it turns out that you *weren't*
right and then you have to start over. Such is the way of the
scientific method.

A lot of the people posting to comp.ai seem not to understand this.
The view seems to be that making the claims (falsifiable or not) is
sufficient; the posters seem to be operating in some kind of
"generate and test" mode where they generate the hypotheses (the fun
part) and then leave it to the net community to test them (the hard
part at best, and often impossible given the fuzzy nature of the
claims). I have tried, at times, to point that out in specific cases.
I've been met with scorn on all occasions, but I guess that's to be
expected.

So what *should* comp.ai be used for? Lots of things. The usual
newsgroup stuff, of course -- newbie questions, conference
announcements, etc. General speculation on specific questions would
be nice. An example (a fairly technical one, I admit) follows in a
little bit.

Before I do that, though, let me respond to the accusation that I
never introduce new ideas. You can find my papers over the net; start
with the CIRL URL (http://cirl.uoregon.edu) and you'll get there soon
enough. My overall views are:

1. Reasoning can be reasonably captured using first-order logic and
some kind of bookkeeping mechanism. My bookkeeping mechanism of
choice is called a "bilattice."

2. Heuristic adequacy matters. (In other words, good ideas aren't
good unless they can be implemented.) I believe that this has deep
consequences in the field of nonmonotonic reasoning (where it means
that nonmonotonic reasoners need to be fast but perhaps not accurate)
and knowledge representation generally (where it appears to mean that
modal operators should be viewed as interruption markers).

3. There is a lot of progress to be made in basic search algorithms.
This includes effective ways to incorporate justification information
in both constraint-satisfaction search (dynamic backtracking and
related algorithms) and game search (partition search).

All of these views are defended at substantial length in my papers.
People who are interested in specific topics here, or are having
trouble figuring out which paper goes with which philosophical
observation, are welcome to get in touch with me.

Back to the technical speculation, though. Bart Selman and I are
going to be part of a panel at IJCAI this summer, and he is going to
argue that one should use fundamentally different methods to solve a
search problem and to prove that the problem is unsolvable. The
argument is that solutions can be found with incomplete methods, but
proofs of unsatisfiability require completeness.

From a theoretical point of view, Selman is assuming at least that NP
is not equal to co-NP, since if they were equal it would be possible
to verify unsatisfiability in poly-time and one could use
nonsystematic or incomplete methods to search for proofs of
unsatisfiability.

So if NP = co-NP, Selman is wrong. But maybe he's wrong anyway;
although NP might not be equal to co-NP in theory, they are surely
equal in *practice* because we can't find proofs of exponential length
anyway. So maybe Selman is only observing that the best tools *so
far* for solving/proving unsolvable are different, but there is no
reason to expect things to stay that way.

Comments?

Matt Ginsberg

Erann Gat

Jun 21, 1995
In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.EDU
(Matthew L. Ginsberg) wrote:

> Comments?

Your philosophical points about the scientific process are right on, but:

> 1. Reasoning can be reasonably captured using first-order logic and
> some kind of bookkeeping mechanism. My bookkeeping mechanism of
> choice is called a "bilattice."

Perhaps you'd better define "reasoning." The claim that FOL is an adequate
model of most human mental processes seems to me to be untenable. People
regularly draw conclusions that are invalid under FOL (Example: "I do not
understand how something as complex as a human could have evolved,
therefore God exists.") FOL is monotonic and consistent; human reasoning
is not.

I know this is an old and tired debate, but perhaps it would be worthwhile
to revisit it in comp.ai. It might be instructive to some readers.

E.

--

Erann Gat
g...@robotics.jpl.nasa.gov

Matthew L. Ginsberg

Jun 21, 1995
In article <gat-2106...@milo.jpl.nasa.gov> g...@robotics.jpl.nasa.gov (Erann Gat) writes:
>In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.EDU
>(Matthew L. Ginsberg) wrote:

>> 1. Reasoning can be reasonably captured using first-order logic and
>> some kind of bookkeeping mechanism. My bookkeeping mechanism of
>> choice is called a "bilattice."
>

>Perhaps you'd better define "reasoning." The claim that FOL is an adequate
>model of most human mental processes seems to me to be untenable. People
>regularly draw conclusions that are invalid under FOL ...
> FOL is monotonic and consistent; human reasoning is not.

I didn't say that FOL sufficed; I said that FOL together with some kind
of bookkeeping mechanism sufficed. It seems to me that people *do* do
first-order reasoning in small pieces, generating conclusions that make
sense. Then they do all sorts of weird things with these conclusions --
overturn them in the face of new evidence, what have you. These collected
"weird things" are what I am calling bookkeeping.

The details of "bookkeeping", however, can get pretty intricate. They
appear in the various papers about bilattices ...

Does this help?

Matt Ginsberg


Erann Gat

Jun 22, 1995
In article <3sa52l$c...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.edu
(Matthew L. Ginsberg) wrote:

> In article <gat-2106...@milo.jpl.nasa.gov> g...@robotics.jpl.nasa.gov (Erann Gat) writes:
> >In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.EDU
> >(Matthew L. Ginsberg) wrote:
>

> >> 1. Reasoning can be reasonably captured using first-order logic and
> >> some kind of bookkeeping mechanism. My bookkeeping mechanism of
> >> choice is called a "bilattice."
> >

> >Perhaps you'd better define "reasoning." The claim that FOL is an adequate
> >model of most human mental processes seems to me to be untenable. People
> >regularly draw conclusions that are invalid under FOL ...
> > FOL is monotonic and consistent; human reasoning is not.
>
> I didn't say that FOL sufficed; I said that FOL together with some kind
> of bookkeeping mechanism sufficed. It seems to me that people *do* do
> first-order reasoning in small pieces, generating conclusions that make
> sense. Then they do all sorts of weird things with these conclusions --
> overturn them in the face of new evidence, what have you. These collected
> "weird things" are what I am calling bookkeeping.
>
> The details of "bookkeeping", however, can get pretty intricate. They
> appear in the various papers about bilattices ...
>
> Does this help?

Yes and no. It helps me to understand the claim you are making, but it
does not help convince me that your claim is correct. I must confess I
haven't read your papers (I work in robotics and don't have much time for
fun stuff like the general AI problem) but I have read about a lot of other
things that could reasonably be called "bookkeeping mechanisms": frames,
scripts, semantic nets, whatever the CYC people are calling their indexing
scheme nowadays, etc. etc. etc. If bilattices are a variation on this
theme, then the following criticism applies. If not, you should find a
more descriptive term.

For the sake of argument I will take a somewhat more extreme position than
I actually believe: all claims of the form "Human reasoning can be modelled
by first order logic augmented with X" are vacuous, at least in so far as
they relate to first order logic. If the claim is true then nearly all of
the complexity and intricacy of human reasoning -- all the hard and
interesting stuff -- resides in X, and virtually none of it resides in
first order logic. There is no substantial difference between the
preceding claim and the claim that human reasoning can be modelled by a
Turing machine augmented with X. The situation in AI is, I believe,
analogous to that of physics before Einstein. "All we have to do," said
the physicists, "is find a small modification to Newton's theory that will
let us explain the photoelectric effect and these annoying results about
the speed of light and we'll have this physics thing all worked out." All
we have to do, say the logicists, is find the little modification to FOL
that will let us explain common sense reasoning and the AI problem will be
all worked out. Well, it took a pretty big conceptual leap to explain the
constant speed of light, and I believe that it will take a similar
conceptual leap to solve the AI problem. (And no, I don't think neural
nets or fuzzy logic will do it either.) When it happens, people may look
back and say that FOL can be seen, in retrospect, as an approximation to the
correct solution, just as Newtonian mechanics is an approximation to
relativity. But as long as people setting the research agenda (and that
means you, Matt) cling to the idea that FOL is The Answer and just needs to
be tweaked a little, and as long as people continue to get tenure for
pushing the problem into homunculi like "bookkeeping mechanisms" I believe
that little progress will be made.

Matthew L. Ginsberg

Jun 22, 1995
In article <gat-2206...@milo.jpl.nasa.gov>
g...@robotics.jpl.nasa.gov (Erann Gat) talks about my augmenting
first-order logic with a "bookkeeping mechanism", saying in summary:

>all claims of the form "Human reasoning can be modelled
>by first order logic augmented with X" are vacuous, at least in so far as
>they relate to first order logic. If the claim is true then nearly all of
>the complexity and intricacy of human reasoning -- all the hard and
>interesting stuff -- resides in X, and virtually none of it resides in
>first order logic.

I think that the split is much more even. The bookkeeping mechanism
is responsible for a lot; it's not a simple tweak that makes FOL work.
The bilattice work is fairly foundational in this regard and seems to
me (but hell, I made it up!) to be contributing as much to the
eventual cognitive architecture as FOL is.

But I also think that FOL is contributing a lot. It is trying to
capture what thinking is about, at least in some ways and at some
levels. So when I say that it should be augmented with a bookkeeping
mechanism, I'm hoping to retain the substantial insights we've had
about theorem proving, deduction, and how they relate to reasoning. I
could just as well have said that I think reasoning can be captured in
a bilattice-based declarative database augmented with a first-order
theorem prover, which might have made Erann happier.

Matt Ginsberg

Fritz Lehmann

Jun 22, 1995
This comp.ai thread has been on whether "FOL" (First-Order Logic),
perhaps with some "bookkeeping" extensions, is adequate to model reasoning
in general. I hope there is some real reason for the "First" in this
debate, other than the fact that Logic 101 courses rarely cover
classical (n-order) logic. Simple concepts like "connected" (for
finite structures), or equality, _cannot_ be defined in _First_ Order
Logic; they require second-order logic (quantifying over relations and
not just over individuals). Some "features" of First-Order-ism are
practically irrelevant (e.g. completeness) or actually bad (e.g. compactness),
but there is often an unthinking assumption that these are somehow
desirable. The formal defects of First-Order-ism are not cured by imitation
("weakly") higher-order logics. If in fact you mean classical logic, say so
and don't gratuitously throw in "First-Order" simply because it's a familiar
phrase from Logic 101.

Yours truly, Fritz Lehmann


Jorn Barger

Jun 22, 1995
In an awesome performance, in article <3scphv$n...@ecl.wustl.edu>,

Fritz Lehmann <fr...@rodin.wustl.edu> wrote:
> This comp.ai thread has been on whether "FOL" (First-Order Logic),
>perhaps with some "bookkeeping" extensions, is adequate to model reasoning
>in general. I hope there is some real reason for the "First" in this
>debate, other than the fact that Logic 101 courses rarely cover
>classical (n-order) logic. Simple concepts like "connected" (for
>finite structures), or equality, _cannot_ be defined in _First_ Order
>Logic; they require second-order logic (quantifying over relations and
>not just over individuals).

Wow... can you explain this more? What meaning of 'connected'?

Is there any serious programming problem in generalizing this to
Nth-order? What does Lenat do in Cyc?

> Some "features" of First-Order-ism are
>practically irrelevant (e.g. completeness) or actually bad (e.g. compactness),

Elaborate on this, too?


j

Philip Jackson

Jun 22, 1995
In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, Matthew L. Ginsberg writes:

>There have been some complaints from the Marengo guys and from Jorn
>Barger about my attitudes here, and I thought I would take a minute to
>try to reply to them. David Magerman has done a better job than I
>could, but I'll give it a shot anyway.
>[...]

Magerman's reply was very humorous and well stated. However, I don't think
that you (Matt) deserve all the blame / credit for the departure of Marengo
Media from comp.ai.

Rather, I think it is important to note that on the same day and apparently
after Magsig published his step by step description showing various
problems with AXIS, Savain chose to stop replying publicly to Magsig's
questions, and blaimed his departure from comp.ai on you, claiming that
comp.ai was censored. In fact Savain was never censored, nor was anyone
else. Perhaps he no longer chose to submit that dialog to public scrutiny
because he did not want to be seen modifying his previous technical
statements about AXIS in further dialog with Magsig and Hammerton.


Phil Jackson


Ralph Becket

Jun 23, 1995

Two points. Firstly, while I agree that AI is still in its infancy, working
with existing theories leads to an understanding of what is wrong with them
and why, and also leads to a better appreciation of problems in AI. Secondly,
current AI theories _do_ let us attack problems that we could not previously
attempt, even if they are nowhere near a model of human reasoning.

It's all very well saying we need a paradigm shift, but I don't think we'll
get one until we understand where we're going wrong.

Ralph
--
Ralph....@cl.cam.ac.uk http://www.cl.cam.ac.uk/users/rwab1

Donald Tveter

Jun 23, 1995
In article <3sea7c$1...@lyra.csx.cam.ac.uk>,

Ralph Becket <rw...@cl.cam.ac.uk> wrote:
>
>Two points. Firstly, while I agree that AI is still in its infancy, working
>with existing theories leads to an understanding of what is wrong with them
>and why, and also leads to a better appreciation of problems in AI. Secondly,
>current AI theories _do_ let us attack problems that we could not previously
>attempt, even if they are nowhere near a model of human reasoning.
>
>It's all very well saying we need a paradigm shift, but I don't think we'll
>get one until we understand where we're going wrong.

The alternatives to conventional symbol processing AI doctrine are
mostly pretty obvious:

You need real numbers not just symbols.

If people use symbols and structures of symbols they also use
images which are also structured objects.

If people use rules they also use cases including images of
specific cases.

Quantum mechanics may be necessary.

Of course exploring these ideas may not be all that easy.

************************************************************************
The Backpropagator's Online Reading List and Review by WWW.
My backprop software for UNIX and DOS is available by FTP or WWW.
A professional version is also available.
************************************************************************
Don Tveter d...@mcs.com
http://www.mcs.com/~drt/home.html ftp://ftp.mcs.com/mcsnet.users/drt
************************************************************************

Fritz Lehmann

Jun 23, 1995
J. Barger asked in comp.ai:
----begin quote----

>classical (n-order) logic. Simple concepts like "connected" (for
>finite structures), or equality, _cannot_ be defined in _First_ Order
>Logic; they require second-order logic (quantifying over relations and
>not just over individuals).

Wow... can you explain this more? What meaning of 'connected'?
Is there any serious programming problem in generalizing this to
Nth-order? What does Lenat do in Cyc?

----end quote----

"Connected" has its ordinary meaning. A structure is connected
if it forms one piece, otherwise it's disconnected (in more than one
piece). To define a formal connected structure (such as a dot-and-line
graph, or a network) you have to quantify over paths: for any two
elements, _there_exists_ a path (of links/relations) between them. Roland
Fraïssé and Andrzej Ehrenfeucht showed in the 1950s that no first-order
definition (not even a so-called "weakly higher-order" one) can
ever define the predicate "connected" -- you have to be able to say
whether a path exists (of any length) between two elements. This
figures obliquely in Minsky & Papert's attack on single-layer perceptrons.
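
As an illustration of Lehmann's point (my own sketch, not from his post):
computing connectivity requires a least-fixpoint iteration over paths of
unbounded length, which is exactly the quantification over relations that no
fixed first-order formula can express.

```python
# Illustrative sketch: graph connectivity as a least-fixpoint computation.
# A fixed first-order formula can only "see" paths up to some bounded
# length, but connectivity needs paths of *any* length -- hence the loop.

def connected(nodes, edges):
    """Return True iff the undirected graph (nodes, edges) is one piece."""
    if not nodes:
        return True
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    # Least fixpoint: grow the set reachable from an arbitrary start node
    # until it stops changing.
    start = next(iter(nodes))
    reachable = {start}
    frontier = {start}
    while frontier:
        frontier = {m for n in frontier for m in adjacency[n]} - reachable
        reachable |= frontier
    return reachable == set(nodes)

print(connected({1, 2, 3}, {(1, 2), (2, 3)}))  # True: one component
print(connected({1, 2, 3}, {(1, 2)}))          # False: node 3 is isolated
```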

There are problems in extending automated theorem-provers to
higher-order-logic, but this is a burgeoning field and the problems
are being solved in higher-order provers and proof-checkers.

The current Cyc language, descended from the earlier Cyc "Constraint
Language" by way of the Cyc "Epistemological Language", includes many
higher-order concepts, although more than 90% of the current Cyc "fact"
axioms are only first order. This has been built by many people
in the Cyc project, not just Lenat.

Barger also asked:
----begin quote----


> Some "features" of First-Order-ism are
>practically irrelevant (e.g. completeness) or actually bad (e.g. compactness),

Elaborate on this, too?
----end quote---

No, I don't want to. These are technical issues which excite only
logicians. Most logic textbooks cover them. To misquote J. P. Morgan on
yachts, "If you need to ask, you can't afford them". Just don't call classical
logic "First Order" unless you have some particular reason to, and if somebody
touts completeness and compactness (and Löwenheim-Skolem and 0-1 properties)
as wonderful or necessary, ask why, and don't be satisfied with irrelevant
or incomprehensible answers. I'll cross-post this to sci.logic in case
someone there may care to explain more.

Yours truly, Fritz Lehmann


Erann Gat

Jun 23, 1995
In article <3scmse$5...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.edu
(Matthew L. Ginsberg) wrote:

> I could just as well have said that I think reasoning can be captured in
> a bilattice-based declarative database augmented with a first-order
> theorem prover, which might have made Erann happier.

It would have made me happier if it had been followed by a brief
description of bilattices, why they are different from all the other
indexing mechanisms that have been tried and failed, and a brief
description of something that bilattices can do that other techniques
cannot do. Some experimental data and a reference or two would be nice
too. (Saying "I have written papers on the subject" is not a reference.)

This whole discussion started because you (Matt) took some people to task
for making sweeping claims in the newsgroup with no supporting evidence.
It seems to me that you have done exactly the same thing: you say that
bilatices will save the world, but then you present no support, not even
a pointer to a paper. Expecting everyone on the group to go seek out your
papers on their own is a rather haughty attitude. It's a reasonable thing
to expect of colleagues and grad students, but this is a newsgroup:
ordinary people hang out here. Undergrads, high school students, artists,
and poor government employees who don't have time to go to the library to
do a literature search on bilattices, or whose local libraries don't
happen to carry AIJ.

Instead of complaining about the poor level of the discussion on the group
(a legitimate complaint) why don't you set an example? Show us how it
ought to be done, Matt! Tell us why bilattices are the greatest thing
since sliced bread, and put it in terms that we poor ignorant people
outside the ivory tower can understand.

Andy Grosso

Jun 23, 1995
In article <gat-2306...@milo.jpl.nasa.gov>,

Erann Gat <g...@robotics.jpl.nasa.gov> wrote:
>In article <3scmse$5...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.edu
>(Matthew L. Ginsberg) wrote:
>
>> I could just as well have said that I think reasoning can be captured in
>> a bilattice-based declarative database augmented with a first-order
>> theorem prover, which might have made Erann happier.
>
>It would have made me happier if it had been followed by a brief
>description of bilattices, why they are different from all the other
>indexing mechanisms that have been tried and failed, and a brief
>description of something that bilattices can do that other techniques
>cannot do. Some experimental data and a reference or two would be nice
>too. (Saying "I have written papers on the subject" is not a reference.)
>

The strange thing about this post is that Matt *did* give pointers
to his work (and to that of other people). The claim

>This whole discussion started because you (Matt) took some people to task
>for making sweeping claims in the newsgroup with no supporting evidence.
>It seems to me that you have done exactly the same thing: you say that
>bilattices will save the world, but then you present no support, not even
>a pointer to a paper.

is false-- I downloaded some papers of Matt's a day or two
ago based solely on the information he provided in this thread
(and thanks, Matt. MVL.ps looks very interesting).

As for the rest of Erann's article -- I found the sneering comments
about bilattices and the ivory tower both irrelevant and obnoxious.

Being nice is not all that much harder than being an obnoxious
fool.

Cheers,

Andy Grosso

Matthew L. Ginsberg

Jun 23, 1995
In article <gat-2306...@milo.jpl.nasa.gov> g...@robotics.jpl.nasa.gov (Erann Gat) writes:
>It would have made me happier if it had been followed by a brief
>description of bilattices, why they are different from all the other
>indexing mechanisms that have been tried and failed, and a brief
>description of something that bilattices can do that other techniques
>cannot do. Some experimental data and a reference or two would be nice
>too. (Saying "I have written papers on the subject" is not a reference.)

I'm happy to, although be warned that the mathematics is fairly intricate.
Here is the abstract of the basic (1988) bilattice paper.

This paper describes a uniform formalization of much of the current
work in AI on inference systems. We show that many of these systems,
including first-order theorem provers, assumption-based truth
maintenance systems (ATMS's) and unimplemented formal systems such
as default logic or circumscription can be subsumed under a single
general framework.

We begin by defining this framework, which is based on a mathematical
structure known as a *bilattice*. We present a formal definition
of inference using this structure, and show that this definition
generalizes work involving ATMS's and some simple nonmonotonic logics.

Following the theoretical description, we describe a constructive
approach to inference in this setting; the resulting generalization
of both conventional inference and ATMS's is achieved without
incurring any substantial computational overhead. We show that our
approach can also be used to implement a default reasoner, and
discuss a combination of default and ATMS methods that enables
us to formally describe an ``incremental'' default reasoning system.
This incremental system does not need to perform consistency checks
before drawing tentative conclusions, but can instead adjust its
beliefs when a default premise or conclusion is overturned in the
face of convincing contradictory evidence. The system is therefore
much more computationally viable than earlier approaches.

Finally, we discuss the implementation of our ideas. We begin by
considering general issues that need to be addressed when
implementing a multivalued approach such as that we are proposing,
and then turn to specific examples showing the results of an existing
implementation. This single implementation is used to solve a
digital simulation task using first-order logic, a diagnostic task
using ATMS's as suggested by de Kleer and Williams, a problem
in default reasoning as in Reiter's default logic or McCarthy's
circumscription, and to solve the same problem more efficiently by
combining default methods with justification information. All of
these applications use the same general-purpose bilattice theorem
prover and differ only in the choice of bilattice being considered.
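
For readers who want a concrete feel for the structure, here is a toy sketch
of my own of the smallest interesting bilattice, Belnap's four-valued logic,
which the framework above generalizes. It carries two orderings at once: one
by truth and one by amount of information.

```python
# Toy sketch of Belnap's four-valued bilattice FOUR, the smallest
# nontrivial example of the structures described above. Each value is a
# pair (t, f): evidence for truth, evidence for falsity.
NONE  = (0, 0)   # no information
TRUE  = (1, 0)
FALSE = (0, 1)
BOTH  = (1, 1)   # contradictory information

def join_knowledge(x, y):
    """Least upper bound in the knowledge ordering: combine evidence."""
    return (max(x[0], y[0]), max(x[1], y[1]))

def join_truth(x, y):
    """Least upper bound in the truth ordering: disjunction."""
    return (max(x[0], y[0]), min(x[1], y[1]))

def meet_truth(x, y):
    """Greatest lower bound in the truth ordering: conjunction."""
    return (min(x[0], y[0]), max(x[1], y[1]))

# Combining a report of "true" with a report of "false" yields BOTH in
# the knowledge order, but TRUE-or-FALSE is just TRUE under disjunction.
print(join_knowledge(TRUE, FALSE) == BOTH)  # True
print(join_truth(TRUE, FALSE) == TRUE)      # True
```

The two orderings are the point: inference can be monotonic in knowledge even
while the truth value of a conclusion is being revised.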

>This whole discussion started because you (Matt) took some people to task
>for making sweeping claims in the newsgroup with no supporting evidence.
>It seems to me that you have done exactly the same thing: you say that
>bilattices will save the world, but then you present no support, not even
>a pointer to a paper.

The full paper is available by anonymous ftp from t.uoregon.edu
(check out /u/ftp/papers/README), or by following fairly obvious
links from the CIRL URL http://cirl.uoregon.edu.

The reason I didn't post these pointers before is simply that I had
posted general information about how to access my work previously,
and didn't feel it was appropriate to repost it.

Matt Ginsberg


Philip Jackson

Jun 23, 1995
In article <gat-2206...@milo.jpl.nasa.gov>, Erann Gat writes:

>In article <3sa52l$c...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.edu
>(Matthew L. Ginsberg) wrote:
>
>> In article <gat-2106...@milo.jpl.nasa.gov>
g...@robotics.jpl.nasa.gov (Erann Gat) writes:
>> >In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.EDU
>> >(Matthew L. Ginsberg) wrote:
>>
>> >> 1. Reasoning can be reasonably captured using first-order logic and
>> >> some kind of bookkeeping mechanism. My bookkeeping mechanism of
>> >> choice is called a "bilattice."
>> >
>> >Perhaps you'd better define "reasoning." The claim that FOL is an adequate
>> >model of most human mental processes seems to me to be untenable. People
>> >regularly draw conclusions that are invalid under FOL ...
>> > FOL is monotonic and consistent; human reasoning is not.
>>
>> I didn't say that FOL sufficed; I said that FOL together with some kind
>> of bookkeeping mechanism sufficed. It seems to me that people *do* do
>> first-order reasoning in small pieces, generating conclusions that make
>> sense. Then they do all sorts of weird things with these conclusions --
>> overturn them in the face of new evidence, what have you. These collected
>> "weird things" are what I am calling bookkeeping.
>>

I found Matt's initial statement "reasoning can be reasonably captured" to
be plausible if all that he was saying was that FOL with some additional
bookkeeping mechanisms can be used to emulate "reasonable = limited"
portions of human reasoning. I am also interested in studying problem
solving within the context of FOL (indeed, within propositional logic)
because good problem solving / search / bookkeeping methods at that level
should be useful in reasoning with higher level logics.

However, in the ensuing posts the scope of the statement has been expanded
to mean that FOL with additional bookkeeping is "an adequate model of most
human mental processes". This is untenable within normal intepretations
of "FOL" and "bookkeeping". For example, "most human mental processes"
includes inventing and using new languages and notations, reasoning with
metaphors / analogies, learning, dreaming, .... many processes for which we
have barely scratched the surface.

Phil Jackson

Philip Jackson

Jun 23, 1995
In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, Matthew L. Ginsberg writes:
>[...]
>
>Back to the technical speculation, though. Bart Selman and I are
>going to be part of a panel at IJCAI this summer, and he is going to
>argue that one should use fundamentally different methods to solve a
>search problem and to prove that the problem is unsolvable. The
>argument is that solutions can be found with incomplete methods, but
>proofs of unsatisfiability require completeness.
>
The incomplete methods (variants of GSAT, etc.) have been shown to have
good average case performance for fairly large problems (1000 or more
variables). However, if I understand correctly, they are not *guaranteed*
to find solutions when problems are satisfiable, and indeed, experimentally,
some satisfiable problems occasionally take much longer than others. It
seems likely that one could construct satisfiable problems for which the
incomplete methods might "thrash" arbitrarily long -- I'd be interested to
hear Selman's opinion on this.
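
For readers unfamiliar with the incomplete methods under discussion, here is
a minimal local-search sketch of my own in the GSAT spirit (the published
GSAT of Selman, Levesque, and Mitchell uses random restarts and random
tie-breaking, which this toy version omits). It can find a satisfying
assignment quickly, but a failure to find one proves nothing -- which is
exactly why it cannot certify unsatisfiability.

```python
import random

def gsat(clauses, n_vars, max_flips=1000, seed=0):
    """Minimal GSAT-style local search. clauses: list of lists of ints,
    where literal v means variable v is true and -v means v is false
    (variables are 1-indexed). Returns a satisfying assignment dict, or
    None -- and None never certifies unsatisfiability: the method is
    incomplete."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def unsat_count(a):
        return sum(not any((lit > 0) == a[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_flips):
        if unsat_count(assign) == 0:
            return assign
        # Greedy step: flip the variable leaving the fewest unsatisfied clauses.
        best_v, best_score = None, None
        for v in assign:
            assign[v] = not assign[v]
            score = unsat_count(assign)
            assign[v] = not assign[v]
            if best_score is None or score < best_score:
                best_v, best_score = v, score
        assign[best_v] = not assign[best_v]
    return None

# (x1 or x2) and (not x1 or x2) is satisfiable: any assignment with x2 = True.
print(gsat([[1, 2], [-1, 2]], 2) is not None)  # True
# (x1) and (not x1) is unsatisfiable; the search just runs out of flips.
print(gsat([[1], [-1]], 1))  # None -- but this is not a proof
```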

>From a theoretical point of view, Selman is assuming at least that NP
>is not equal to co-NP, since if they were equal it would be possible
>to verify unsatisfiability in poly-time and one could use
>nonsystematic or incomplete methods to search for proofs of
>unsatisfiability.
>
>So if NP = co-NP, Selman is wrong. But maybe he's wrong anyway;
>although NP might not be equal to co-NP in theory, they are surely
>equal in *practice* because we can't find proofs of exponential length
>anyway. So maybe Selman is only observing that the best tools *so
>far* for solving/proving unsolvable are different, but there is no
>reason to expect things to stay that way.
>
>Comments?
>
> Matt Ginsberg
>
Well, in addition to the question of NP ?= co-NP, there is of course also
the question of P ?= NP. If P=NP, then we could have polynomial-time,
complete proofs for both satisfiability and unsatisfiability...

Phil Jackson

Jive Dadson

Jun 26, 1995
> The details of "bookkeeping", however, can get pretty intricate.
> They appear in the various papers about bilattices ...

Please forgive a naive question from an enthusiastic neophyte: Is a
bilattice the same thing as the "graphoid" Bayesian belief network that
Judea Pearl describes? -- a directed acyclic graph with nodes labeled
with hypotheses, and arrows labeled with conditional probability
distributions? If not, could you please give a short intuitive
definition, and a pointer to the basic literature?

Thanks.

Dave

Andreas Schoter

Jun 26, 1995, 3:00:00 AM
to
g...@robotics.jpl.nasa.gov (Erann Gat) writes:

>In article <3s9gq5$6...@pyrix.cs.uoregon.edu>, gins...@t.uoregon.EDU
>(Matthew L. Ginsberg) wrote:

>> Comments?

>Your philosophical points about the scientific process are right on, but:

>> 1. Reasoning can be reasonably captured using first-order logic and
>> some kind of bookkeeping mechanism. My bookkeeping mechanism of
>> choice is called a "bilattice."

>Perhaps you'd better define "reasoning." The claim that FOL is an adequate
>model of most human mental processes seems to me to be untenable. People
>regularly draw conclusions that are invalid under FOL (Example: "I do not
>understand how something as complex as a human could have evolved,
>therefore
>God exists.") FOL is monotonic and consistent; human reasoning is not.

>I know this is an old and tired debate, but perhaps it would be worthwhile
>to revisit it in comp.ai. It might be instructive to some readers.

>E.

Other people in this thread have mentioned the need for a paradigm
shift - I think, perhaps, this shift is already happening;
specifically the move from the primacy of `truth' as the foundational
notion in logic to the primacy of `information'. Situation semantics
comes to mind as a detailed articulation of this, but it's also
present in Belnap's work and Veltman's. To my mind the constructive
logics also, implicitly, embody this. I've found bilattices also
allow one to embody this idea.

With respect to AI, bilattices do offer an interesting implementation
tool. You should check out Fitting's short paper [2] where he applies
bilattices (specifically the minimal four-point bilattice) to logic
programming and comes up with an interesting system (also [3] is of
interest). A variation on Fitting's system underlies part of my
implementation: I've used Ginsberg's [4] bilattices combined with
Belnap's computational epistemic agent [1] to build a reasonably
efficient model-based reasoner. The formal details of my system
(evidential bilattice logic) can be found in [5] which is ftpable from
ftp.cogsci.ed.ac.uk in pub/CCS-RPs/ccs-rp-64.ps.gz

Andreas
<as...@cogsci.ed.ac.uk>


[1] @InCollection(Belnap77,
author = "Belnap, N.D.",
editor = "J.M. Dunn and G. Epstein",
title = "A Useful Four-Valued Logic",
booktitle = "Modern Uses of Multiple-Valued Logic",
volume = 2,
series = "Episteme",
publisher = "D. Reidel Publishing Company",
pages = "8--37",
year = "1977"
)

[2] @Inproceedings{Fitting89:negation,
author = "Fitting, M.C.",
title = "Negation as refutation",
booktitle = "Proceedings of the Fourth Annual Symposium on
Logic in Computer Science",
year = 1989,
editor = "Rohit Parikh",
pages = "63--70",
organization = "IEEE"
}

[3] @Article{Fitting91:logprog,
author = "Fitting, M.C.",
title = "Bilattices and the Semantics of Logic
Programming",
journal = "Journal of Logic Programming",
volume = 11,
pages = "91--116",
year = 1991
}

[4] @Article(Ginsberg88,
author = "Ginsberg, M.L.",
title = "Multivalued Logics: A Uniform Approach to
Reasoning in Artificial Intelligence",
journal = "Computational Intelligence",
volume = 4,
pages = "265--316",
year = 1988
)

[5] @TechReport(Schoter:EBL,
author = "{Sch\"oter}, A.",
title = "Evidential Bilattice Logic and Lexical
Inference",
institution = "University of Edinburgh, Centre for Cognitive
Science",
type = "Research Paper",
number = "EUCCS/RP-64",
year = 1994
)

Matthew L. Ginsberg

Jun 26, 1995, 3:00:00 AM
to
In article <3slbsc$g...@ixnews3.ix.netcom.com> jda...@ix.netcom.com (Jive Dadson ) writes:

>Please forgive a naive question from an enthusiastic neophyte: Is a
>bilattice the same thing as the "graphoid" Bayesian belief network that
>Judea Pearl describes?

No. A bilattice labels each sentence in a declarative database with
a "truth value"; the bookkeeping is responsible for combining the
truth values of old sentences to get truth values for new ones.

A bilattice is essentially a set equipped with two partial orders, one
indicating how true or false something is, and the other indicating
how complete the state of knowledge is about the sentence. Negation
is required to flip the true/false ordering (de Morgan's laws), but
leave the other ordering unchanged. The link this provides between
the two orderings is what makes the mathematics interesting.
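The minimal four-point case can be made concrete in a few lines. The encoding below (a pair recording evidence for and evidence against a sentence) is one standard construction, not notation from this thread, and the helper names are mine:

```python
# The minimal four-point bilattice, with a value represented as
# (evidence_for, evidence_against), each 0 or 1.
NONE  = (0, 0)   # unknown: no evidence either way
TRUE  = (1, 0)
FALSE = (0, 1)
BOTH  = (1, 1)   # told true *and* told false

def leq_truth(a, b):
    # b is at least as true as a: more evidence for, less against
    return a[0] <= b[0] and a[1] >= b[1]

def leq_knowledge(a, b):
    # b is at least as well-determined as a: more evidence of both kinds
    return a[0] <= b[0] and a[1] <= b[1]

def neg(v):
    # swap for/against: this inverts the truth ordering but leaves the
    # knowledge ordering alone -- knowing nothing about p means knowing
    # nothing about not-p
    return (v[1], v[0])
```

Note that NONE and BOTH come out incomparable in the truth ordering (which is why a partial order, not a total one, is needed), while negation fixes both of them and swaps TRUE with FALSE.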

Bilattices were introduced in my 1988 paper, "Multivalued Logics: A
Uniform Approach to Reasoning in AI" which appeared in Computational
Intelligence. It's available by anonymous ftp from t.uoregon.edu,
as /u/ftp/papers/mvl.ps.

Matt Ginsberg


Jorn Barger

Jun 26, 1995, 3:00:00 AM
to
Matthew L. Ginsberg <gins...@t.uoregon.edu> wrote:
>No. A bilattice labels each sentence in a declarative database with
>a "truth value"; the bookkeeping is responsible for combining the
>truth values of old sentences to get truth values for new ones.
>
>A bilattice is essentially a set equipped with two partial orders, one
>indicating how true or false something is, and the other indicating
>how complete the state of knowledge is about the sentence. Negation
>is required to flip the true/false ordering (de Morgan's laws), but
>leave the other ordering unchanged. The link this provides between
>the two orderings is what makes the mathematics interesting.

If I remember right, Lenat only uses five truth values, corresponding
to the five basic truth-values of the alt.folklore.urban FAQ ;^/

T: 100% true
Tb: believed true
U: Unknown
Fb: believed false
F: 100% false

It sounds like bilattices turn this into a continuum, with no two
truth values equal???


j


Matthew L. Ginsberg

Jun 26, 1995, 3:00:00 AM
to
In article <3sml8i$e...@Mars.mcs.com> jo...@MCS.COM (Jorn Barger) writes:
>T: 100% true
>Tb: believed true
>U: Unknown
>Fb: believed false
>F: 100% false
>
>It sounds like bilattices turn this into a continuum, with no two
>truth values equal???

No, bilattices are not necessarily either continuous or discrete. What
they do is provide a solid understanding of what it is that the truth
values mean.

There is a bilattice that is similar to Lenat's (I've spoken with the
CYC folks about this). It includes seven values:

t true
dt true by default (Lenat's Tb)
u unknown
df false by default
f false
* default contradiction
bot contradiction

You need something like *; what if a sentence is believed true for one
reason and false for another? Similarly, you "need" bot, although
it's a bad sign if you ever label a sentence as both true *and* false.
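One way to code up the "bookkeeping" step for these seven values is as a join in the knowledge ordering: the label for a sentence believed for two different reasons is the weakest value carrying the information of both. The rank table and join rules below are my reconstruction from the description above (the 1988 paper is the authority); in particular I assume firm values override defaults, so t combined with df stays t:

```python
# Seven-valued default bilattice: combining two labels for the
# same sentence via the knowledge-order join.
RANK = {'u': 0, 'dt': 1, 'df': 1, '*': 2, 't': 3, 'f': 3, 'bot': 4}
CONFLICT = {1: '*', 3: 'bot'}   # same-strength clash escalates

def combine(a, b):
    """Label for a sentence believed to be both a and b."""
    if a == b:
        return a
    if RANK[a] != RANK[b]:
        # firmer information wins: e.g. a firm t overrides a default f
        return a if RANK[a] > RANK[b] else b
    # equal strength, opposite signs: dt + df -> *, t + f -> bot
    return CONFLICT[RANK[a]]
```

So combine('dt', 'df') gives '*' (the default contradiction), while combine('t', 'f') gives 'bot', the label you hope never to see.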

Matt Ginsberg

Erann Gat

Jun 26, 1995, 3:00:00 AM
to
In article <3sfj1p$g...@pith.uoregon.edu>, gins...@t.uoregon.edu (Matthew
L. Ginsberg) wrote:

> I'm happy to, although be warned that the mathematics is fairly intricate.
> Here is the abstract of the basic (1988) bilattice paper.

[Intricate mathematics deleted.]

I once saw Kip Thorne give a talk on general relativity and time travel.
This man literally wrote the book on gravity waves. He can sling math
around that would make your hair curl. He opened his talk by saying
something like the following. (I am heavily paraphrasing, but the gist
will come through.)

"Before Einstein physicists were wrestling with the question: why do
objects in free-fall tend to move in straight lines except when under the
influence of a gravitational field? After Einstein physicists came to the
realization that objects in free-fall *always* move in straight lines, but
their paths sometimes *appear* not to be straight because space is curved."

In less than a hundred words he had given everyone in the auditorium,
regardless of their mathematical background, a certain level of
understanding of general relativity. Not a full understanding to be sure,
but probably more than they would have had if he had started to talk about
tensors.

This is the level of discourse that is necessary if you wish to communicate
in a forum that is open to the general public, as that lecture was, and as
all unmoderated newsgroups are. There is very little overlap between
effective communications in such settings and effective communications in
professional settings. If you want to communicate effectively in comp.ai
you can't just throw your abstracts and papers at us; you are going to have
to do a little extra work -- take out the jargon, find appropriate
metaphors, think about how to simplify things and distill out the really
essential concepts in terms that laymen can understand. What *is* a
bilattice? *How* does it work? Where can I get one? Inquiring (but
untrained) minds want to know!

When people take academics to task for living in ivory towers it is not out
of disdain for what academics do; it is out of frustration for what they do
not do: make themselves understandable to non-academics. There is nothing
wrong with developing a specialized vocabulary and communications style for
communicating with one's peers; there is very much wrong with expecting
everyone else to learn the jargon before they can even listen to the
discussion.

It makes me angry when I think about it: the Academy has cheapened the role
of popularization to the point where writing accessible work virtually
guarantees ostracism and career death for all but the most senior and
untouchable. Then we turn around and complain when crackpots step in to
fill the void. The machine-learning-breakthrough crap on the newsgroup is
partly your fault, Matt, and mine, and the fault of all the other academics
out there slinging jargon around. It happened because there are people out
there without Ph.D's who are hungry for information, and we didn't give it
to them in a form that they could understand.

Matthew L. Ginsberg

Jun 26, 1995, 3:00:00 AM
to
In article <gat-2606...@milo.jpl.nasa.gov> g...@robotics.jpl.nasa.gov (Erann Gat) writes:
>In article <3sminn$r...@pith.uoregon.edu>, gins...@t.uoregon.edu (Matthew
>L. Ginsberg) wrote:
>
>> No. A bilattice labels each sentence in a declarative database with
>> a "truth value"; the bookkeeping is responsible for combining the
>> truth values of old sentences to get truth values for new ones.
>>
>> A bilattice is essentially a set equipped with two partial orders, one
>> indicating how true or false something is, and the other indicating
>> how complete the state of knowledge is about the sentence. Negation
>> is required to flip the true/false ordering (de Morgan's laws), but
>> leave the other ordering unchanged. The link this provides between
>> the two orderings is what makes the mathematics interesting.
>
>Wonderful! More! More! This one paragraph has told me more about
>bilattices than all of the preceding discussion; think what you could do
with ten! Why a partial order? Why not simply a number? Why do bilattices
>work where fuzzy logic and multi-valued logics fail? What is the link
>between the two orderings where negation plays such a central role? And
>explain it without any math! If Kip Thorne can explain general relativity
>and Feynman can do quantum mechanics in plain English, then surely you can
>explain bilattices without the aid of Greek symbols and acronyms.

Your analogy has both strengths and weaknesses, Erann. When Kip
explains general relativity to a lay audience, he is exploiting the
fact that eighty years have passed between the introduction of the
theory and the time of his explanation. It takes a long time for
scientific progress to be translatable into such eloquent terms, and
the translation is dependent on a lot of hard work. I spent enough
time with both Thorne and Feynman while I was at Caltech to know that
*they* are equally comfortable with both the mathematics and the
intuition; neither is separable from the other. I
have always believed (and continue to believe!) that Einstein's
fundamental insight is that spacetime is a 4-dimensional Riemannian
manifold; that makes *terrific* sense to someone with a well-developed
mathematical intuition but very little sense to a layman. Especially
in the early days, the math is so thoroughly entwined with the
intuition that they are nearly indistinguishable.

But I'll have a shot at your questions:

Why a partial order?

Because it's the right level of generality. You need to be able to
say, "This sentence is more true than that one," or "I know more about
this sentence than that one." You don't need a quantitative measure,
as far as I can tell.

Why not simply a number?

Actually, a number is no good because you need *two* partial orders,
one for "how true" and the other for "how complete is my knowledge."
A single number can't give you both.

Why do bilattices work where fuzzy logic and multi-valued logics fail?

To the extent they do, it's because (1) they are clear about the
separation between the two partial orders while the other methods
aren't and (2) because the semantics of a bilattice system ground out
in the semantics of first-order logic. Where it is applicable, FOL is
(my opinion) clearly the way to go.

What is the link between the two orderings where negation plays
such a central role?

The link is simply that there is *some* operation that inverts one
order while leaving the other alone. (After all, if you know nothing
about a sentence, you also know nothing about its negation.) It turns
out that simply specifying the existence of the link drives the
mathematics. It was quite a surprise to me that this requirement
led to such rich structure.

Matt Ginsberg


Jorn Barger

Jun 26, 1995, 3:00:00 AM
to
Andy Grosso <wil...@beirut.berkeley.edu> wrote:
>As for the rest of Erann's article -- I found the sneering comments
>about bilattices and the ivory tower both irrelevant and obnoxious.
>
>Being nice is not all that much harder than being an obnoxious
>fool.

To my eye, Mr Gat's remarks were intended with humor, and well within
the bounds of reasonable discourse. I cannot say the same for Mr
Grosso's reply.


j

Jorn Barger

Jun 26, 1995, 3:00:00 AM
to

This follows-up a message that expired here this morning:

From: gins...@t.uoregon.EDU (Matthew L. Ginsberg)
> I thought I would take a minute to
>try to reply to them. David Magerman has done a better job than I
>could, but I'll give it a shot anyway.

I thought his message was rather unkind.

>being a scientist is *hard*. [...]


>AI is no different. The problems are accessible, which is great. It
>means that all kinds of people can bring their insights to bear on
>them. But it *doesn't* mean that is all there is to it. Simply
>thinking about a problem is worth nothing. What counts is finding a
>solution to the problem that has computational impact, and then really
>validating that solution on real problems and real data. You have to
>step up to the plate, make some falsifiable claims, and then see if
>you were right. More often than not, it turns out that you *weren't*
>right and then you have to start over. Such is the way of the
>scientific method.

If this is really happening, where's the product??? I think people
are going thru the motions, but it's mostly a hoax...

>A lot of the people posting to comp.ai seem not to understand this.
>The view seems to be that making the claims (falsifiable or not) is
>sufficient; the posters seem to be operating in some kind of
>"generate and test" mode where they generate the hypotheses (the fun
>part) and then leave it to the net community to test them (the hard
>part at best, and often impossible given the fuzzy nature of the
>claims).

I think you're fuzzing together lots of different things, here. Yes,
comp.ai attracts a lot of "wow-i-just-solved-the-mystery-of-the-universe"
wankers, but we could all admit that we've been there, and it's a much
more admirable position to be in than never-even-thinking-about-it.

If these people are treated sympathetically, everyone can learn...

> I have tried, at times, to point that out in specific cases.
>I've been met with scorn on all occasions, but I guess that's to be
>expected.

Outsiders' scorn for insiders is a lot less *potent* than the reverse!

(And when have you ever critiqued any ideas I've posted???)

>So what *should* comp.ai be used for? Lots of things. The usual
>newsgroup stuff, of course -- newbie questions, conference
>announcements, etc.

So where are the answers to newbie questions like "how far has AI
gotten?"

> You can find my papers over the net

I'm allergic to postscript, sorry... (Before I'll jump thru that
extra hoop, I have to be convinced it's not the same old same old...)

>2. Heuristic adequacy matters. (In other words, good ideas aren't
>good unless they can be implemented.)

Well, during the interim before they're implemented, can't there be
a phase where they're *good* and *worth discussing* but *not yet*
implemented?

> I believe that this has deep
>consequences in the field of nonmonotonic reasoning (where it means
>that nonmonotonic reasoners need to be fast but perhaps not accurate)

Non-sequitur flag?!? I object to the jargon, for one thing, but are
you then saying that good-plans-aren't-good-if-they-take-too-long-to-find?
(Not exactly the same as not-good-if-they-haven't-been-implemented...)

>and knowledge representation generally (where it appears to mean that
>modal operators should be viewed as interruption markers).

Explain?

>3. There is a lot of progress to be made in basic search algorithms.
>This includes effective ways to incorporate justification information
>in both constraint-satisfaction search (dynamic backtracking and
>related algorithms) and game search (partition search).

Fractal-thicket indexing claims to be a general solution to the search
problem-- there shouldn't *be* any searching!


j


Jorn Barger

Jun 26, 1995, 3:00:00 AM
to

From: fr...@rodin.wustl.edu (Fritz Lehmann)

> "Connected" has its ordinary meaning. A structure is connected
>if it forms one piece, otherwise it's disconnected (in more than one
>piece).

But we're just talking about memory locations, right? So if I have a
cons cell that points to two locations, are they connected?

I'm not convinced this use of 'connected' corresponds to any useful
concept in my mental universe...

> To define a formal connected structure (such as a dot-and-line
>graph, or a network) you have to quantify over paths: for any two
>elements, _there_exists_ a path (of links/relations) between them.

1) Why do we care?
2) Is the cons-cell (above) equivalent to the quantified proposition?
Does it affirm its own existence?

> The current Cyc language, descended from the earlier Cyc "Constraint
>Language" by way of the Cyc "Epistemological Language", includes many
>higher-order concepts, although more than 90% of the current Cyc "fact"
>axioms are only first order.

I'd expect that these have to be treated the same way-- is the programming
mechanism completely different for 2nd-order? For third, etc? (I'm
thinking of Lenat's AM, which I think treated everything uniformly...)

>Just don't call classical logic "First Order" unless you have some
>particular reason to, and if somebody touts
>completeness and compactness (and Lowenheim-Skolem and 0-1 properties)
>as wonderful or necessary, ask why, and don't be satisfied with irrelevant
>or incomprehensible answers.

My intuition is that it's a very awkward representation scheme, for
reasons not unlike G Spencer-Brown's...


j

Jorn Barger

Jun 26, 1995, 3:00:00 AM
to
From: d...@MCS.COM (Donald Tveter) (author of the *2nd* overview of AI
on the Internet)

>The alternatives to conventional symbol processing AI doctrine are
>mostly pretty obvious:
> You need real numbers not just symbols.

Once you have the symbols, you can specify the numbers, but until
you know the symbolic part, why even think about them?

> If people use symbols and structures of symbols they also use
> images which are also structured objects.
> If people use rules they also use cases including images of
> specific cases.

Image, meaning 3D->2D bitmap? (If not, how is an image different from
any other structure of symbols?)


j

Matthew L. Ginsberg

Jun 26, 1995, 3:00:00 AM
to
In article <3sn5sb$a...@Mercury.mcs.com> jo...@MCS.COM (Jorn Barger) writes:
>
>This follows-up a message that expired here this morning:
>
>From: gins...@t.uoregon.EDU (Matthew L. Ginsberg)
>> You can find my papers over the net
>
>I'm allergic to postscript, sorry... (Before I'll jump thru that
>extra hoop, I have to be convinced it's not the same old same old...)

No apology necessary. But by making the material available, I've
done *my* job of disseminating scientific results. Read them or not,
as you see fit.

>
>>2. Heuristic adequacy matters. (In other words, good ideas aren't
>>good unless they can be implemented.)
>
>Well, during the interim before they're implemented, can't there be
>a phase where they're *good* and *worth discussing* but *not yet*
>implemented?

Nope. That's one of the lessons of AI today. There are *lots* of
good ideas. But the only way to tell what is worth pursuing is to
code it up and see what happens. AI is awash in great ideas. What
it needs is more substance.

>> I believe that this has deep
>>consequences in the field of nonmonotonic reasoning (where it means
>>that nonmonotonic reasoners need to be fast but perhaps not accurate)
>
>Non-sequitur flag?!? I object to the jargon, for one thing, but are
>you then saying that good-plans-aren't-good-if-they-take-too-long-to-find?
>(Not exactly the same as not-good-if-they-haven't-been-implemented...)

I was trying to write something that would be at least interesting
to AI practitioners of all levels; some things just can't be said
concisely without using technical language. Same comment as before:
if you can't be bothered to understand what the rest of the community
is doing, you shouldn't expect to be taken seriously yourself.

>
>>and knowledge representation generally (where it appears to mean that
>>modal operators should be viewed as interruption markers).
>
>Explain?

In nontechnical terms, I can't (but wish I could!). This is fairly
recent work, and my own intuitions aren't yet clear enough. Read the
paper.

>
>>3. There is a lot of progress to be made in basic search algorithms.
>>This includes effective ways to incorporate justification information
>>in both constraint-satisfaction search (dynamic backtracking and
>>related algorithms) and game search (partition search).
>
>Fractal-thicket indexing claims to be a general solution to the search
>problem-- there shouldn't *be* any searching!

Not hardly. See my reply to one of your subsequent postings.

Summary: I was talking to one of the Franz guys at a recent workshop,
complaining that my C code ran 20 times faster than my LISP code. He
said RTFM (Read The Fucking Manual), pointing out (rightly) that it
contained stuff about how to get Lisp to inline array references,
since that seemed to be where my problem was.

Jorn, RTFM. The scientific literature is the best effort of the
people who are (at least in theory) the world's experts in this field
to communicate and explain their ideas. What separates you (and the
Marengans) from the scientific community is not raw wit or animal
cunning, but the fact that you seem completely uninterested in anyone
else's results.

Learn postscript. Read the papers. If you don't understand them,
read the papers they cite until you *do* understand them. Make it
impossible for us *not* to take you seriously, instead of grousing
about the fact that we don't.

Matt Ginsberg


Matthew L. Ginsberg

Jun 26, 1995, 3:00:00 AM
to
In article <3sn72h$b...@Mercury.mcs.com> jo...@MCS.COM (Jorn Barger) writes:
>
>Let me propose an anti-logicist point of view:
>
>- Most 'inferences' can be seen as *filling in the gaps* in a *story*
> about what happened/ can happen/ will happen.
>
>- The most efficient way to index the elements of such an inference
> is in the form of a branching story sequence
>
>- If you have the right starting-generalizations, you can capture all
> the basic patterns in a reasonable space
>
>- At this point, calling it 'inference' hardly means anything anymore!
> Instead, you're retrieving story-elements...

I don't think this matches my understanding of intelligence. There
are two ways to put this. One is anecdotal; you are suggesting that
intelligence is a matter of lookup instead of reasoning.

Think of a really smart person you know. (Jorn, think of someone
else.) Do you respect him because he figures stuff out (i.e., solves
problems), or because he gives you the impression he never figures
stuff out (i.e., lookup)? For me, at least, I respect the problem
solvers. The famous story about Feynman fixing a radio by "thinking
about it" is impressive because he *figured it out*, not because he
had read somewhere how to fix radios.

The other way to put it is as follows: I believe that a hallmark of
intelligence (perhaps even a definition) is the ability to improve
one's performance by devoting additional computational resources to a
problem. Your definition misses that completely.

Matt Ginsberg

Jorn Barger

Jun 26, 1995, 3:00:00 AM
to

From: pjac...@nyc.pipeline.com (Philip Jackson)

>However, in the ensuing posts the scope of the statement has been expanded
>to mean that FOL with additional bookkeeping is "an adequate model of most
>human mental processes". This is untenable within normal interpretations
>of "FOL" and "bookkeeping". For example, "most human mental processes"
>includes inventing and using new languages and notations, reasoning with
>metaphors / analogies, learning, dreaming, .... many processes for which we
>have barely scratched the surface.

Meaning that first-order logic can't refer to the symbols it's written in?

I was pleased with Fritz's message, because it suggested that "first order"
is not really the important factor-- if you have predicate calculus,
you still need higher order predicates (which may or may not be 'just
bookkeeping')...? And many of the mechanisms of logic are only vaguely
relevant to human reasoning-- suggesting there may be a simpler way to
look at it...

Let me propose an anti-logicist point of view:

- Most 'inferences' can be seen as *filling in the gaps* in a *story*
about what happened/ can happen/ will happen.

- The most efficient way to index the elements of such an inference
is in the form of a branching story sequence

- If you have the right starting-generalizations, you can capture all
the basic patterns in a reasonable space

- At this point, calling it 'inference' hardly means anything anymore!
Instead, you're retrieving story-elements...

What you have, instead of logic-rules, are the 'meta-stories' of
various Cyc-ian microdomains...

I'll suggest that any inference worth making is worth saving
explicitly in the knowledgebase... and invite counterexamples?
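One toy reading of the "branching story sequence" idea (entirely my construal; the post gives no formal definition, and `StoryIndex` and the restaurant events are invented for illustration) is a trie of events, where "filling in the gaps" is retrieving the stored continuations of a partial story:

```python
class StoryIndex:
    """Trie of event sequences: inference as retrieval."""
    def __init__(self):
        self.next = {}          # event -> StoryIndex

    def add(self, story):
        node = self
        for event in story:
            node = node.next.setdefault(event, StoryIndex())

    def continuations(self, prefix):
        # follow the prefix; "infer" by listing stored next events
        node = self
        for event in prefix:
            if event not in node.next:
                return []       # no stored story matches
            node = node.next[event]
        return sorted(node.next)

idx = StoryIndex()
idx.add(['enter restaurant', 'order', 'eat', 'pay'])
idx.add(['enter restaurant', 'order', 'complain', 'leave'])
# "filling in the gap" after a partial story is pure lookup:
idx.continuations(['enter restaurant', 'order'])  # -> ['complain', 'eat']
```

Whether such retrieval deserves to be called inference at all is, of course, the point under dispute in this thread.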


j

Erann Gat

Jun 26, 1995, 3:00:00 AM
to
In article <3sminn$r...@pith.uoregon.edu>, gins...@t.uoregon.edu (Matthew
L. Ginsberg) wrote:

> No. A bilattice labels each sentence in a declarative database with
> a "truth value"; the bookkeeping is responsible for combining the
> truth values of old sentences to get truth values for new ones.
>
> A bilattice is essentially a set equipped with two partial orders, one
> indicating how true or false something is, and the other indicating
> how complete the state of knowledge is about the sentence. Negation
> is required to flip the true/false ordering (de Morgan's laws), but
> leave the other ordering unchanged. The link this provides between
> the two orderings is what makes the mathematics interesting.

Wonderful! More! More! This one paragraph has told me more about
bilattices than all of the preceding discussion; think what you could do
with ten! Why a partial order? Why not simply a number? Why do bilattices
work where fuzzy logic and multi-valued logics fail? What is the link
between the two orderings where negation plays such a central role? And
explain it without any math! If Kip Thorne can explain general relativity
and Feynman can do quantum mechanics in plain English, then surely you can
explain bilattices without the aid of Greek symbols and acronyms.

E.

--

Erann Gat
g...@robotics.jpl.nasa.gov

Andy Grosso

Jun 26, 1995, 3:00:00 AM
to
In article <3snb1a$5...@pith.uoregon.edu>,
Matthew L. Ginsberg <gins...@t.uoregon.edu> wrote:
>
>Think of a really smart person you know. (Jorn, think of someone
>else.) Do you respect him because he figures stuff out (i.e., solves
>problems), or because he gives you the impression he never figures
>stuff out (i.e., lookup)? For me, at least, I respect the problem
>solvers. The famous story about Feynman fixing a radio by "thinking
>about it" is impressive because he *figured it out*, not because he
>had read somewhere how to fix radios.

What impresses me most is neither of these. Forming analogies is
*darn* impressive to me. The ability to take two completely
distinct situations and say, correctly, that they are similar in some
fundamental way, is impressive.

It's not really problem solving because there is no explicitly
stated problem. It's not pure lookup (lookup with really
neat indices?).

Cheers,

Andy Grosso


Erann Gat

Jun 26, 1995, 3:00:00 AM
to
In article <3sn6fp$k...@pith.uoregon.edu>, gins...@t.uoregon.edu (Matthew
L. Ginsberg) wrote:

> I spent enough
> time with both Thorne and Feynman while I was at Cal Tech to know that
> *they* look at this stuff equally comfortable with both the
> mathematics and the intuition; neither is separable from the other.

Yes. That is why *you* must do the intuitive explaining to the laymen as
well as the technical explaining to your peers. No one else can because no
one else understands the math, and if one doesn't understand the math then
one can't be sure that one has the intuitive explanation right. That is
why Feynman has to write QED and The Character of Physical Law, that is why
Einstein wrote the best popular accounts of Relativity and the history of
physics, and that is why you, Matt Ginsberg, have to explain bilattices to
the unwashed masses in terms that we can understand.

(By the way, for the metaphorically impaired, I am not really grousing
about Ginsberg's conduct per se. My real complaint is with the Academy in
general; I'm just beating on Matt as an illustrative example because he
chose to draw attention to one of the unfortunate consequences of the
Academy's devaluation of popularization: the ever widening rift between the
Academy and the rest of the world, and the resulting rise of crackpots.)

>Einstein's fundamental insight is that spacetime is a 4-dimensional
>Riemannian manifold

Not in this venue it isn't (at least not until you've explained what a
Riemannian manifold is). Around here, Einstein's fundamental insight is
that space-time is curved. This isn't a physics class or an AI research
seminar; this is comp.ai, an unmoderated newsgroup. This is the town
square; this is the local pub. This is where people who are interested in
AI but aren't versed in the jargon come to learn. You might as well have
said that spacetime is a four-dimensional floobargian snickledorf; you
would have conveyed the same information to most of your audience here.
Most of us don't know what a Riemannian manifold is; we don't know what a
bilattice is. If you want to convey information to us, then you will have
to use a vocabulary that we understand. And if you do not want to convey
information to us (or reward your colleagues when they do) then you have no
right to complain when the crackpots take over the conversation.

Pushpinder Singh

unread,
Jun 26, 1995, 3:00:00 AM6/26/95
to
In article <3snfa2$5...@agate.berkeley.edu>, wil...@jaffna.berkeley.edu
(Andy Grosso) wrote:

> In article <3snb1a$5...@pith.uoregon.edu>,
> Matthew L. Ginsberg <gins...@t.uoregon.edu> wrote:
> >
> >The famous story about Feynman fixing a radio by "thinking
> >about it" is impressive because he *figured it out*, not because he
> >had read somewhere how to fix radios.
>
> What impresses me most is neither of these. Forming analogies is
> *darn* impressive to me.

Analogy-making certainly qualifies as figuring, but it strikes me as
far more than just "bookkeeping" (Matt expressed some opinions earlier
about how minds could be adequately described as reasoning by FOL + some
bookkeeping.) Analogy-making is about reasoning about similarities and
differences between higher-order properties of situations, which would
seem to preclude an FOL + bookkeeping approach.

Or am I wrong about this? If so, is there anyone out there who's built
analogy-making machines this way?

-push

Jorn Barger

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to

Matt:

> you are suggesting that
>intelligence is a matter of lookup instead of reasoning.

Yes.

>Think of a really smart person you know. <snipe>
> Do you respect him because he figures stuff out (i.e., solves
>problems), or because he gives you the impression he never figures
>stuff out (i.e., lookup)? For me, at least, I respect the problem
>solvers. The famous story about Feynman fixing a radio by "thinking
>about it" is impressive because he *figured it out*, not because he
>had read somewhere how to fix radios.

It's 'thinking' vs *seeing*. Even when you 'see' how to do something,
it's a matter of finding the relevant *index*.

>The other way to put it is as follows: I believe that a hallmark of
>intelligence (perhaps even a definition) is the ability to improve
>one's performance by devoting additional computational resources to a
>problem. Your definition misses that completely.

It doesn't "miss" it, it relegates it to 'bookkeeping'... ;^/


j

Jorn Barger

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to

Thanks, Matt, for 'putting me right'. I was suffering under the
delusion, now entirely dispelled, that AI was dominated by
gamesplayers whose vested interest was in *not* communicating their
ideas clearly. Now I see that the problem is lazy egomaniacal
crackpots who won't make the effort to learn the jargon, decode the
postscript, read *all* the journal articles, and write toy demos to
prove their worth...

Matt wrote:
> by making the material available, I've
>done *my* job of disseminating scientific results.

I'm glad your conscience is clear... (This isn't where you were
drawing the line a day or two back, though.)

> There are *lots* of
>good ideas. But the only way to tell what is worth pursuing is to
>code it up and see what happens.

I've known programmers who debugged that way. They were the *mediocre*
ones...

> AI is awash in great ideas. What
>it needs is more substance.

Given the widespread criticism of AI for limiting itself to 'toy demos',
do you call those toy demos "substance"???

Certainly, AI needs a great *product*-- something that *works*. I've
proposed an alternate approach to designing this product-- by inventorying
human behavior. You seem to want to *exclude this viewpoint* from
the comp.ai charter...

What kind of scientist is it who tries to exclude opposing viewpoints from
debate???

>if you can't be bothered to understand what the rest of the community
>is doing, you shouldn't expect to be taken seriously yourself. [...]
>Learn postscript. Read the papers. If you don't understand them,
>read the papers they cite until you *do* understand them.

So you're saying, the door is open, all I have to do is grab a postscript
viewer, and ftp your articles, and then track down all their references,
and then track down all *their* references, and then when I can express
my criticisms in *your* jargon, then, maybe, if I'm lucky, you'll
condescend to play the role of educator?

(My illusions are shattered! ;^/

> This is fairly
>recent work, and my own intuitions aren't yet clear enough. Read the
>paper.

Oops-- you mean you haven't *coded* it yet?!?!? (By your rules,
you can't ask me to read it, then, can you???)

>>Fractal-thicket indexing claims to be a general solution to the search
>>problem-- there shouldn't *be* any searching!
>Not hardly. See my reply to one of your subsequent postings.

Did I miss this? The only other followup I have from you included these:

>Jorn, RTFM.

This is rather strong language, acronymed or no.

> The scientific literature is the best effort of the
>people who are (at least in theory) the world's experts in this field
>to communicate and explain their ideas.

Or, alternately, AI literature is a smokescreen by powergamers to win
grant money and drive off their competitors. And if you're so
sincere, why are you closing off debate twenty different (fraudulent,
predictable) ways???

> What separates you (and the
>Marengans) from the scientific community is not raw wit or animal
>cunning, but the fact that you seem completely uninterested in anyone
>else's results.

Ah, so the reason you haven't answered any of my many substantial queries
is that you never *noticed* any of them???

>Make it
>impossible for us *not* to take you seriously, instead of grousing
>about the fact that we don't.

Your capacity for intellectual dishonesty seems inexhaustible. So
how can I ever hope to exhaust it?


j

Michael Jampel

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
Erann Gat <g...@robotics.jpl.nasa.gov> wrote:
>This isn't a physics class or an AI research
>seminar; this is comp.ai, an unmoderated newsgroup. This is the town
>square; this is the local pub.

NO NO NO NO NO NO NO NO NO!

I offer three definitions of the word `public' in the context of 'usenet
/ comp.ai is open to the public':

a) anyone can read what is in comp.ai
b) anyone can post to comp.ai
c) EVERY single person who posts to comp.ai must use language which
could be understood by anyone who reads comp.ai, even if that latter
reader has not bothered to get a single AI text book from the library,
and cannot read, write or understand English.

I don't understand why Erann thinks (c) is the correct definition.
He might object to the fact that I include English-language skills...
he might say that as the US government funds the US end of the internet,
it is acceptable for the language of the US to be used.

But WHY did the government fund the internet? To link defence scientists
together. Then it expanded to link all scientists. Now lots of other
people are getting free access. Fine. But why should that stop the
original users from using it in the way it was intended?

Roughly speaking, comp.ai is intended for people with PhDs in AI to
inform each other about conferences (which will only be comprehensible
to people with PhDs etc etc) and to ask questions which are so close to
the cutting edge that they are not in textbooks.

[Ok, ok, in the above, substitute 'people with a PhD in AI' for some
other, longer, phrase, including people who don't have PhDs but have
been working in the field for 10 years, blah, blah, blah.]

>This is where people who are interested in
>AI but aren't versed in the jargon come to learn.

If you can't be bothered to read a book on AI, then why should Matt be
bothered to re-invent the first 8 chapters of such a book and post them
here?

>And if you do not want to convey
>information to us (or reward your colleagues when they do) then you have no
>right to complain when the crackpots take over the conversation.

comp.ai is unmoderated because it is technically a pain to be moderated
(must find someone willing to moderate), not on philosophical grounds.
If I can't lock my door for technical reasons, that doesn't mean you are
legally allowed to enter my house and steal things. It just means that I
am unable to lock the door. So don't pin too much on unmoderation (sp?).

If someone has the freedom to post "Please tell me what 2+2 is" in
comp.ai, then why doesn't Matt have the freedom to say "Go and do Math
101, then come back"? Or we can have 10,000 posts a day, all except 5
with zero content, in which case the aforementioned people with PhDs in
AI will go off and find another newsgroup. Then the already low signal
to noise ratio will get even worse (10,000:5 will become 9,995:0).

Michael

David G. Mitchell

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <3s9gq5$6...@pyrix.cs.uoregon.edu>,
Matthew L. Ginsberg <gins...@t.uoregon.EDU> wrote:
[among other things]

>From a theoretical point of view, Selman is assuming at least that NP
>is not equal to co-NP, since if they were equal it would be possible
>to verify unsatisfiability in poly-time and one could use
>nonsystematic or incomplete methods to search for proofs of
>unsatisfiability.
>
>So if NP = co-NP, Selman is wrong.

I don't buy this argument, Matt. The equality in ``NP=co-NP'' is
equality up to poly-time reducibility, but this is a very crude
measure: n^1000 is hopelessly intractable, as is ack(10^6)*n^2.

We always have a proof of length n (i.e., a witness) for
satisfiability. So, we know beforehand the length of the proof
we are searching for, and we can generally verify it in linear
time. Even if NP=co-NP, the story for unsatisfiability may be
quite different.

If NP=co-NP then, there is some `super' proof system in which we
can give a poly-length proof for every unsatisfiable formula.
But perhaps (I would suggest probably), the shortest proofs for
many formulas will have lengths of a high-degree polynomial,
and verifying these proofs in poly time may involve very large
constants or high-degree polynomials.

I don't think it is obvious that the iterative repair approach
that seems to work so nicely for finding satisfying assignments
can be applied effectively when searching for proofs of unknown
(and perhaps large) length, which might be expensive to verify.


David

PS: Thanks for the non-grumpy discourse on junk in comp.ai.

Andy Grosso

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <3sp6jj$6...@mars.mcs.com>, Jorn Barger <jo...@MCS.COM> wrote:
>
>Thanks, Matt, for 'putting me right'. I was suffering under the
>delusion, now entirely dispelled, that AI was dominated by
>gamesplayers whose vested interest was in *not* communicating their
>ideas clearly. Now I see that the problem is lazy egomaniacal
>crackpots who won't make the effort to learn the jargon, decode the
>postscript, read *all* the journal articles, and write toy demos to
>prove their worth...
>[rest omitted]
>

I really don't understand this post at all. AI has been around since
the 1950's. There are now people who are "experts." They know a fair amount
of what has been done and understand the main ideas involved.

Like most experts, they speak in jargon (occasionally). But, as far as
jargon goes, theirs is not all that indecipherable.

It strikes me that wanting to talk to the experts without taking the
time to at least acquaint yourself with the basics of the field is
incredibly rude. You don't have to read all the journal articles. Or
check out all the references. Take me, for example: I have a fair amount of
programming knowledge. But, I'd estimate that my AI library is under 20
volumes (and 6 of those I just bought from somebody via e-mail). Grand total
of less than $200. Add to these volumes about 70 or 80 papers that I have
photocopied from journals or downloaded.

And now I can communicate with the experts. Most of the stuff I read was
fairly interesting and it gave me back more time than I put into it (in terms
of not duplicating effort or rediscovering things that have already
been done).

It's not a high price to pay for the knowledge. And, were I an AI
professor, I would not talk to people who weren't willing to make
some sort of similar investment.

What's more, computer scientists have, by and
large, done a wonderful thing. They have adopted a language (most CS
papers are in English) and a document standard (postscript) to make
things easier. Once you've read a few intro textbooks, you can just go
and read almost any paper in the field.

All you need is FTP and a postscript previewer. And you can acquire almost
all the significant texts and information.

Cheers,

Andy Grosso


Pushpinder Singh

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to

>What do you mean by higher order relations?

By higher-order relations I refer to things like the similarity between
the structure of two collections of statements. For instance, you might
have the collections

flows(water, pipe)
block(pipe) -> not flows(water, pipe)

flows(electrons, wire)
open(wire) -> not flows(electrons, wire)

I can make the inference that blocking a pipe is like opening a wire, but
only because I can make statements about statements.
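To make the idea concrete (a sketch of my own, not a claim about any particular system), one can treat the statements themselves as data and search for a renaming of symbols that carries one collection onto the other:

```python
# Illustrative sketch only: all names here (pipes, wires, analogy, ...)
# are mine. Statements are nested tuples; "->" rules become
# ("implies", antecedent, consequent).
from itertools import permutations

pipes = [("flows", "water", "pipe"),
         ("implies", ("block", "pipe"), ("not", ("flows", "water", "pipe")))]
wires = [("flows", "electrons", "wire"),
         ("implies", ("open", "wire"), ("not", ("flows", "electrons", "wire")))]

LOGICAL = {"implies", "not"}

def symbols(term):
    """Collect the non-logical symbols in a (possibly nested) term."""
    if isinstance(term, tuple):
        return set().union(*(symbols(t) for t in term))
    return set() if term in LOGICAL else {term}

def rename(term, mapping):
    """Apply a symbol mapping to a term, leaving logical symbols alone."""
    if isinstance(term, tuple):
        return tuple(rename(t, mapping) for t in term)
    return mapping.get(term, term)

def analogy(a, b):
    """Brute-force search for a renaming carrying collection a onto b."""
    sa, sb = sorted(symbols(tuple(a))), sorted(symbols(tuple(b)))
    if len(sa) != len(sb):
        return None
    for perm in permutations(sb):
        mapping = dict(zip(sa, perm))
        if sorted(rename(t, mapping) for t in a) == sorted(b):
            return mapping
    return None

# The discovered mapping sends "block" to "open" -- i.e., blocking a
# pipe is like opening a wire -- precisely because the program can
# compare statements about statements.
print(analogy(pipes, wires))
```

The search is exponential in the number of symbols, of course; the point is only that the inference requires matching the *structure* of the two rule sets, not their contents.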

-push

Matthew L. Ginsberg

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <3sp6jj$6...@Mars.mcs.com> jo...@MCS.COM (Jorn Barger)
writes:

[I've tried to remove some of the flamage.]


>> AI is awash in great ideas. What
>>it needs is more substance.
>
>Given the widespread criticism of AI for limiting itself to 'toy demos',
>do you call those toy demos "substance"???

No. I never said I did.

>Certainly, AI needs a great *product*-- something that *works*. I've
>proposed an alternate approach to designing this product-- by inventorying
>human behavior. You seem to want to *exclude this viewpoint* from
>the comp.ai charter...

Your approach is one of many. Way too many to figure out which will
work. If yours will work, great. Go implement it, and tell me when
it's working. If the performance of the program is impressive, I'll
be impressed. If the performance of the program is unimpressive (which
it is at the moment, by virtue of not being written), I won't be
impressed.

>So you're saying, the door is open, all I have to do is grab a postscript
>viewer, and ftp your articles, and then track down all their references,
>and then track down all *their* references, and then when I can express
>my criticisms in *your* jargon, then, maybe, if I'm lucky, you'll
>condescend to play the role of educator?

That's pretty much it! Except I have no interest in educating you.
I'd like to *talk* to you, which means we need a common language.
Seems pretty arrogant for you to expect the whole scientific community
to learn your language instead of your learning theirs.

It's a pain reading all those articles, I'll admit it. But *I* read
at least the abstracts of every article in AIJ, in JAIR, in AI
Magazine, AAAI, and IJCAI. If the articles are close to my field of
expertise, I read them in their entirety.

That's the way science works. People talk, and people listen. It's
not supposed to be that one guy talks and never bothers to listen.

>> This is fairly
>>recent work, and my own intuitions aren't yet clear enough. Read the
>>paper.
>
>Oops-- you mean you haven't *coded* it yet?!?!? (By your rules,
>you can't ask me to read it, then, can you???)

No, it's coded. And the implementation is well documented, in line
with the theory, and available over the net. I just don't understand
the properties of the implementation well enough to translate them
into lay terms.

Jorn, I never asked you to read anything. I just informed you that if
you don't read anything, no one in the scientific community is going
to listen to you. [You have noticed that no one seems to be listening,
haven't you?]

Matt Ginsberg

Aaron Sloman

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
I agree with most of what Matt Ginsberg said, but would like to
propose a small change.

gins...@t.uoregon.EDU (Matthew L. Ginsberg) writes:

> Date: 21 Jun 1995 16:19:17 GMT
> Organization: Computer Science Department, University of Oregon.
> ....

> Let me begin with a rather more general comment. Being a scientist
> is, over all else, fun. But following a close second is the fact that
> being a scientist is *hard*. You're competing with the smartest folks
^^^^^^^^^
> on the planet, and it shouldn't be too surprising that it's hard work
> to make progress.
>.... <etc>

I have a minor quibble. I like to think of science in terms of
*cooperating* rather than *competing*.

Thus I expose my thoughts and arguments not to win battles, but in
the hope that others will help me find any flaws and fix them. Of
course, there's an obligation to try hard to avoid such flaws, in
order to avoid wasting other people's time.

So yes, it's hard. But its hard because the problems are deep and
solutions difficult to find (and test).

The fact that other smart people are involved in the activity
doesn't make it harder. It should make it easier, as we can (and
usually do) help one another.

Of course, in the face of limited research funding, shortage of
academic posts, etc. it is sometimes difficult to avoid thinking
competitively.

And the culture of some academic organisations is far more
competitive than cooperative. Not all, I hope.

Cheers.

Aaron
---
--
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk OR A.Sl...@bham.ac.uk
Phone: +44-(0)121-414-4775 Fax: +44-(0)121-414-4281

Jesse D Zbikowski

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <3sdt1u$5...@ecl.wustl.edu>,
Fritz Lehmann <fr...@rodin.wustl.edu> wrote:
> "Connected" has its ordinary meaning. A structure is connected
>if it forms one piece, otherwise it's disconnected (in more than one
>piece). To define a formal connected structure (such as a dot-and-line
>graph, or a network) you have to quantify over paths: for any two
>elements, _there_exists_ a path (of links/relations) between them. Roland
>Fraisse and Andrzej Ehrenfeucht showed in the 1950's that no first-
>order definition (not even a so-called "weakly higher-order" one) can
>ever define the predicate "connected" -- you have to be able to say
>whether a path exists (of any length) between two elements.

Sorry... I don't quite understand why you can't represent the
"connected" predicate with first-order logic. Why can't one just do
something like this? Let's assume E(u,v) is already defined as
"exists an edge between u and v", and we'll define P(u,v) "path
between u and v" and C(x) "x is connected" in terms of it.

C(x) <==> ((forall u)(forall v) ((u member x)&(v member x) -> P(u,v)))

(forall u)(forall v)(E(u,v) -> P(u,v))

(forall u)(forall v)(forall w)((E(u,v)&P(v,w)) -> P(u,w))

This may not give us a computation method for determining
connectedness, but it looks like an accurate definition to me.
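For comparison, here is the computational route, as an illustrative sketch of my own (not from the thread). The subtlety is that the axioms above force P to *contain* edge-reachability but never force it to contain *only* reachability; a model where P holds of every pair of nodes also satisfies them. The intended meaning is the *least* relation closed under the path rules, i.e. a fixed point, which is what the code computes directly:

```python
def reachability(edges):
    """Least relation containing the edges and closed under the
    path rules: E(u,v) -> P(u,v) and E(u,v) & P(v,w) -> P(u,w)."""
    P = set(edges)
    changed = True
    while changed:                       # iterate to the fixed point
        changed = False
        for (u, v) in edges:
            for (v2, w) in list(P):
                if v == v2 and (u, w) not in P:
                    P.add((u, w))
                    changed = True
    return P

def connected(nodes, edges):
    """Connectedness of an undirected graph via the least P."""
    E = set(edges) | {(v, u) for (u, v) in edges}   # symmetrize
    P = reachability(E)
    return all(u == v or (u, v) in P for u in nodes for v in nodes)

# Two components {a, b} and {c}: disconnected under the least P, even
# though the total relation P = nodes x nodes would also satisfy the
# two path axioms and would (wrongly) certify connectedness.
print(connected({"a", "b", "c"}, {("a", "b")}))   # False
print(connected({"a", "b"}, {("a", "b")}))        # True
```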

--
Jesse Zbikowski Worcester Polytechnic Institute
z...@wpi.edu Computer Science, '97


Aaron Sloman

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
pjac...@nyc.pipeline.com (Philip Jackson) writes:

> Date: 23 Jun 1995 20:36:00 -0400

...<snip>...

> I found Matt's initial statement "reasoning can be reasonably captured" to
> be plausible if all that he was saying was that FOL with some additional
> bookkeeping mechanisms can be used to emulate "reasonable = limited"
> portions of human reasoning. I am also interested in studying problem
> solving within the context of FOL (indeed, within propositional logic)
> because good problem solving / search / bookkeeping methods at that level
> should be useful in reasoning with higher level logics.
>
> However, in the ensuing posts the scope of the statement has been expanded
> to mean that FOL with additional bookkeeping is "an adequate model of most
> human mental processes". This is untenable within normal interpretations
> of "FOL" and "bookkeeping". For example, "most human mental processes"
> includes inventing and using new languages and notations, reasoning with
> metaphors / analogies, learning, dreaming, .... many processes for which we
> have barely scratched the surface.

I agree with these last comments. I've been puzzling for many years
over our ability to invent and use all sorts of specific forms of
representation suited to particular domains, and especially our use
of spatial forms of representation, which seem to make use of
aspects of the human brain that were developed specifically for
spatial perception, control, and reasoning. (However, I don't think
we know much about how this works.)

My most recent summary of some of the issues can be found in a paper
that should appear before long in an AAAI Press book on Diagrammatic
reasoning edited by Janice Glasgow, Hari Narayanan, & Chandrasekaran.
My contribution is accessible as:

ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/Aaron.Sloman_musings.ps.Z

For example, various people have worked on automating the sort of
logical reasoning that enables one to prove that

(RT) rotate(list, length(list)) = list

where rotate(L, N) moves N items of the list L from the front to the
end, as defined recursively:

rotate(nil, N) = nil

rotate(cons(X, L), 1) = append(L, cons(X, nil))

rotate(L, N + 1) = rotate(rotate(L, N), 1)

and all the concepts (list, length, rotate, etc.) are definable using
induction, starting from notions like nil (the empty list), cons, head
and tail. (I first came across this example in a talk by Alan Bundy).
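Transcribed directly into executable form (an illustrative sketch; the Python rendering is mine), the equations let one confirm (RT) mechanically for small cases, even though the inductive logical proof is intricate:

```python
def rotate(lst, n):
    """Direct transcription of the recursive definition above:
    rotate(nil, N) = nil
    rotate(cons(X, L), 1) = append(L, cons(X, nil))
    rotate(L, N + 1) = rotate(rotate(L, N), 1)"""
    if not lst:
        return []                      # rotate(nil, N) = nil
    if n == 0:
        return list(lst)               # rotating by 0 (empty-case base)
    if n == 1:
        return lst[1:] + [lst[0]]      # move the head to the end
    return rotate(rotate(lst, n - 1), 1)

# The theorem (RT): rotating a list by its own length gives it back.
for k in range(6):
    lst = list(range(k))
    assert rotate(lst, len(lst)) == lst
print("(RT) holds for lists of length 0..5")
```

The check is of course no substitute for the proof; it just shows how quickly the "visualised" reading can be confirmed on examples.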

The logical proof of (RT) is typically very intricate and quite hard
for people to think up.

Yet most people presented with the theorem find it blindingly
obvious because they think in spatial terms, and "visualise" a
process of successively moving items from the front of the list to
the end.

This is just one example among many. Lots of mathematicians claim
that when thinking about mathematical problems they use spatial
"models". E.g. everyone I know who has thought about the Schroeder-
Bernstein Theorem (if each of two sets A and B injects into the other,
then there is a bijection between them) claims to have used a spatial
model to think about the mappings between A and B in the case of
infinite sets.

Of course I am not saying that purely spatial representations are
enough: they are generally used in conjunction with logical
assertions and some form of control information that may perhaps be
expressed in some sort of notation for algorithms.

Descartes' discovery that Euclidean geometry could be mapped onto
arithmetic and algebra was a significant discovery of a deep
relationship between TWO forms of representation rather than a
discovery of a theorem within ONE form (i.e. logic).

Of course, it remains an open question whether logic in general or
first order logic in particular is simply being used as an
implementation medium in which all these other things can be
implemented (a suggestion first made, I think, by Pat Hayes in
responding to my 1971 critique of Logicist AI presented at
IJCAI-71).

It's not always noticed that mathematics is full of ad hoc
notations. For example the decimal notation for arithmetic
represents numbers using a very peculiar syntax and semantics where
concatenation is an operator meaning (roughly) multiply by 10 and
add, and the spatial structure of sequences is heavily used in
addition, subtraction, and to some extent even multiplication and
division.
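The "concatenation means multiply by 10 and add" reading can be made concrete (a sketch of the semantics, mine rather than anything from the thread):

```python
def value(digits):
    """Interpret a numeral, given as a list of digits, by reading
    concatenation as 'multiply the accumulated value by 10 and add'."""
    acc = 0
    for d in digits:
        acc = acc * 10 + d   # appending digit d onto the numeral
    return acc

print(value([3, 0, 7]))  # 307
```

The same left-to-right accumulation is what makes the familiar column algorithms for addition and subtraction work on the spatial structure of the numeral.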

Perhaps Matt is not saying that all human reasoning (and reasoning
by artificial intelligent agents) IS done using logic, but that in
principle it COULD be replaced by logical reasoning, with no loss of
power.

I don't know whether that's true. I suspect that it isn't and that
the reason we (including scientists and mathematicians) use lots of
DIFFERENT forms of representation is that they help to make problems
solvable in a reasonable time e.g. by structuring search spaces so
that one cannot even represent some of the non-solutions that in a
logical formalism would have to be explicitly considered and
rejected.

When I put forward similar arguments in 1971, Pat Hayes attacked
them in 1974 in a paper that can be found reprinted in the
collection of readings in Knowledge Representation edited by
R. Brachman and H. Levesque in 1985.

There are other relevant papers in that collection and in the
special issue of Computational Intelligence, vol 9 no 4, 1993.

I have the impression that most logicians don't bother to answer the
criticisms of logic-based AI because they can't believe that there's
any other form of reasoning that can be valid or sound, apart from
logical reasoning.

I think the use of computations over tailor-made problem-specific
data-structures (e.g. arrays, graphs, lists) in applied computer
science is an important everyday refutation of this narrow view of
reasoning.

Cheers.
Aaron

Andy Grosso

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <push-26069...@mind.mit.edu>,
Pushpinder Singh <pu...@mit.edu> wrote:
>In article <3snfa2$5...@agate.berkeley.edu>, wil...@jaffna.berkeley.edu
>(Andy Grosso) wrote:
>
>> What impresses me most is neither of these. Forming analogies is
>> *darn* impressive to me.
>
>Analogy-making certainly qualifies as figuring...

Really ? The whole point of my post was (sort of)
that the categories Matt was talking about were rather
vague. "Figuring things out" versus "looking them up" seems
to me to be a false dichotomy.

Take the vast majority of mathematical results. The actual
proofs are frequently of the form "Take situation x.
By clever reasoning reduce it to situation y. Now use the
previously known stuff about situation y to finish off the
proof."

You start with something unknown. You figure out how to reduce it to
something known. Then you apply the known stuff.

Figuring out. Followed by lookup.

But, again using mathematics, there's this almost mystical
"where to look up" sort of thing. You have to have absorbed
lots and lots of mathematics *and* indexed it really, really well.
This is used throughout the "figuring out" process -- part of the
"figuring out" is a search through your stored set of mathematics,
looking for similar things.

So "Figuring out. Followed by lookup" becomes "Figuring out
(which uses a lot of lookup). Followed by lookup."

>...Far more than just "bookkeeping" (Matt expressed some opinions earlier
>about how minds could be adequately described as reasoning by FOL + some
>bookkeeping.) Analogy-making is about reasoning about similarities and
>differences between higher-order properties of situations, which would
>seem to preclude an FOL + bookkeeping approach.
>

I'm not sure. I think the word "book-keeping" is throwing you.
The immediate mental image is a balding man, hunched over a ledger
(my mental image includes one of those little green visors and a
white shirt with light pink vertical stripes, but that's just me)
adding up a column of numbers. Which is *book-keeping*, but is
not what Matt is talking about (except as a metaphor).

Could you see it as a really powerful data-base with FOL
reasoning built in ? I'm not saying that's what brains do, but
it doesn't seem immediately vulnerable to your objection.

Cheers,

Andy


Jive Dadson

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
> A bilattice labels each sentence in a declarative database with
> a "truth value"; the bookkeeping is responsible for combining the
> truth values of old sentences to get truth values for new ones.
>
> A bilattice is essentially a set equipped with two partial orders,
> one indicating how true or false something is, and the other
> indicating how complete the state of knowledge is about the sentence.

Thank you for the prompt response. I think I may see some point of
agreement with my developing views on knowledge and prediction. Tell me
if my intuition is right on where you are going with this. Take the
example of a horse race. We never see two races that are exactly the
same. The first partial order is used to establish a measure of
similarity: This horse is very much like one I have seen before, a
little like another in some ways, not at all like a third, et cetera.
The second partial order measures the probabilistic expectancies of
horses as categorized by the first partial order. Am I on track (so to
speak) at all?

The sticking point always seems to lie in the estimate of the completeness
of the state of knowledge. All the problems with normalization ...
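If I have the definition right, the smallest concrete example would be Belnap's four-valued bilattice. The sketch below is my own guess at it, not Matt's formulation: four values ("false", "true", "unknown" for no evidence, "both" for conflicting evidence), with a truth order and a separate knowledge order.

```python
# My own sketch of Belnap's FOUR, the smallest bilattice: each order is
# given by its strict pairs (a, b) meaning a < b in that order.
TRUTH = {("false", "unknown"), ("false", "both"),
         ("unknown", "true"), ("both", "true"), ("false", "true")}
KNOWLEDGE = {("unknown", "false"), ("unknown", "true"),
             ("false", "both"), ("true", "both"), ("unknown", "both")}

def leq(order, a, b):
    """Reflexive comparison in one of the two partial orders."""
    return a == b or (a, b) in order

# "unknown" and "both" say nothing about truth, but everything about
# how complete the state of knowledge is:
print(leq(TRUTH, "unknown", "both"))      # False -- incomparable in truth
print(leq(KNOWLEDGE, "unknown", "both"))  # True  -- strictly less is known
```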

A toy example of sentences and grades would be very instructive. Could
you show us how the "Tweety triangle" would be modeled in "book-kept"
FOL? Try reconstructing the following true/false, state-of-knowledge
assignments.

Birds fly (very true, very probable) -- Birds are very identifiable,
    and the large majority can fly

Penguins are birds (rather true, certain) -- Penguins have most of the
    qualities we associate with birds, no exceptions

Tweety is a Penguin (true, certain) -- Tweety has every quality we
    associate with penguins, and there is no doubt that it is Tweety

What sentences may be deduced or ruled out, and how?


Jive


Pushpinder Singh

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <3snuk8$l...@agate.berkeley.edu>, wil...@beirut.berkeley.edu
(Andy Grosso) wrote:

> In article <push-26069...@mind.mit.edu>,
> Pushpinder Singh <pu...@mit.edu> wrote:
> >In article <3snfa2$5...@agate.berkeley.edu>, wil...@jaffna.berkeley.edu
> >(Andy Grosso) wrote:
> >
> >> What impresses me most is neither of these. Forming analogies is
> >> *darn* impressive to me.
> >
> >Analogy-making certainly qualifies as figuring...
>
> Really ? The whole point of my post was (sort of)
> that the categories Matt was talking about were rather
> vague. "Figuring things out" versus "looking them up." seems
> to be me to be a false dichotomy.

Of course that's true. My point was simply to acknowledge that
analogy-making is as important a component of reasoning as is
logical reasoning. In fact, analogy-making might be the only
form of "looking up" available to us most of the time, since new
situations rarely correspond to old in all respects.

It may be that FOL is useful for reasoning about a wide array of
micro-domains, but that problem solving draws on our ability to
constantly reformulate different aspects of the problem in terms
of the most appropriate micro-domains (the ones that are likely to
produce the most useful inferences), which is essentially
analogy-making.

> >...Far more than just "bookkeeping" (Matt expressed some opinions earlier
> >about how minds could be adequately described as reasoning by FOL + some
> >bookkeeping.) Analogy-making is about reasoning about similarities and
> >differences between higher-order properties of situations, which would
> >seem to preclude an FOL + bookkeeping approach.
> >
>
> I'm not sure. I think the word "book-keeping" is throwing you.

From what? I'd appreciate it if you would fill in the blanks in
your post -- I can't make sense of what you are trying to say.
I admit that I object to the term "bookkeeping" since it suggests
that analogy-making and anything else that isn't FOL is somehow
less important and less interesting.

> Could you see it as a really powerful data-base with FOL
> reasoning built in ? I'm not saying that's what brains do, but
> it doesn't seem immediately vulnerable to your objection.

Please explain why it isn't immediately at least somewhat vulnerable
(to the objection that analogy-making is essential to reasoning, but
analogy-making requires the ability to reason about higher-order
relations between properties of situations.)

-push

Donald Tveter

unread,
Jun 27, 1995, 3:00:00 AM6/27/95
to
In article <gat-2606...@milo.jpl.nasa.gov>,
Erann Gat <g...@robotics.jpl.nasa.gov> wrote:
>
>When people take academics to task for living in ivory towers it is not out
>of disdain for what academics do; it is out of frustration for what they do
>not do: make themselves understandable to non-academics. There is nothing
>wrong with developing a specialized vocabulary and communications style for
>communicating with one's peers; there is very much wrong with expecting
>everyone else to learn the jargon before they can even listen to the
>discussion.

In many cases all that counts is to get the grant to get the
publication to make the department happy to get tenure. They simply
aren't writing for normal people. Now, on the internet, academics
are encountering more normal people, who demand insight, and this may
improve the quality of some academic papers.

In other cases the jargon and new formalisms appear to be there in
order to keep readers from figuring out that there is no significant
bit of insight present. When you encounter such a paper it can
usually be thrown away without missing anything. An author with a new
really insightful idea will want everyone to know what it is and will
write accordingly.

************************************************************************
The Backpropagator's Online Reading List and Review by WWW.
My backprop software for UNIX and DOS is available by FTP or WWW.
A professional version is also available.
************************************************************************
Don Tveter d...@mcs.com
http://www.mcs.com/~drt/home.html ftp://ftp.mcs.com/mcsnet.users/drt
************************************************************************

Fritz Lehmann

Jun 28, 1995
In article <3spkuh$n...@pith.uoregon.edu>,
Matthew L. Ginsberg <gins...@t.uoregon.edu> wrote:
>In article <3sp6jj$6...@Mars.mcs.com> jo...@MCS.COM (Jorn Barger)
>writes:
>[I've tried to remove some of the flamage.]

It would be better for us if Barger would separate the flamage
from his (often insightful) relevant points. Or dispense with it.

>>human behavior. You seem to want to *exclude this viewpoint* from
>>the comp.ai charter...
>Your approach is one of many. Way too many to figure out which will
>work. If yours will work, great. Go implement it, and tell me when
>it's working. If the performance of the program is impressive, I'll
>be impressed. If the performance of the program is unimpressive (which
>it is at the moment, by virtue of not being written), I won't be
>impressed.

Almost all "impressive" ideas in AI were impressive well before they were
implemented (if they were implemented at all). The people with the
ideas are not always the same as the people with a yen to program. That's
why alliances between them can be fruitful.

> ...


>That's pretty much it! Except I have no interest in educating you.
>I'd like to *talk* to you, which means we need a common language.
>Seems pretty arrogant for you to expect the whole scientific community
>to learn your language instead of your learning theirs.

Flamage is a privilege of the officially grumpy. My keyword for
detecting flamage is the word "you".

>It's a pain reading all those articles, I'll admit it. But *I* read
>at least the abstracts of every article in AIJ, in JAIR, in AI
>Magazine, AAAI, and IJCAI. If the articles are close to my field of
>expertise, I read them in their entirety.

Maybe someone in the world genuinely reads all of those abstracts; I
would not recommend it to the aspiring AI student as a wise expenditure
of time. Better to combine some slightly-random browsing (both inside and
_outside_ of standard AI sources) with dogged following of promising leads.

>That's the way science works. People talk, and people listen. It's
>not supposed to be that one guy talks and never bothers to listen.

I've never met Barger, and know of him only through Usenet, email,
his personal reputation with some mathematicians and physicists I know,
and his promising-looking WISDOM ontology list. He is not conspicuously
ignorant; as a perusal of his opinionated "alternate AI FAQ" will show,
he is very well-read in areas at the edge of AI dealing with _content_
(rather than technical AI work on logic variants, etc.). Lenat, Hayes,
Feigenbaum, Sowa, Skuce, Hovy, Gruber, Guarino, Minsky and I have agreed that
the bottleneck for conceptual, symbolic AI is content rather than
fine points of logic variants and inference mechanisms. Of course the
latter are still very important, and (somewhat lamentably) more respected in
Computer Science departments as worthy research topics than content is,
but a person who is knowledgeable about concept-systems for the world
(as opposed to, say, circumscriptive interpretations of default-autoepistemic
hybrids) may be better equipped to do something of lasting importance
in AI.

Barger, like early Schank, has been primarily interested in ontological
content, and he has put various content-oriented people together in his
WISDOM list. His (mercifully quiescent lately) rants against Schank are
pretty ironic since Barger is more Schankian than practically anybody in AI
including Schank himself in later decades (who has shifted from content to
learning and retrieval mechanisms). Barger does not hesitate to exploit
far-flung fields (like Polti/Propp/Dundes/Thompson story structure algebras,
or the asset-ontologies of interactive fiction, or the convoluted structures
of Ulysses and Finnegans Wake, etc.) in building his hierarchies of scripts.
This strikes me as just as auspicious for important AI work as reading AI
abstracts and teaching logic courses.

>Jorn, I never asked you to read anything. I just informed you that if
>you don't read anything, no one in the scientific community is going
>to listen to you. [You have noticed that no one seems to be listening,
>haven't you?]
> Matt Ginsberg

The last remark is demonstrably incorrect. Some among us are dismissive
of Barger's bolder excursions into unknown (to him) territories, and
some students who apparently knew him at Schank's place now have a personal
war with him (maybe it's mutual), but his serious theoretical ideas
have a fairly decent following. He is pursuing lines of inquiry which
mainstream AI (other than CYC, Pangloss, CCAT, and several other exceptions)
seems to have abandoned in the 1970's for obscure reasons. How far he can
go remains to be seen. He probably needs an "alliance" such as I mentioned
above, for which harmonious personal interaction is of course an asset.

Yours truly, Fritz Lehmann

Fritz Lehmann

Jun 28, 1995
In article <3spoma$j...@bigboote.wpi.edu>,
Jesse D Zbikowski <z...@bigwpi.WPI.EDU> wrote:
>In article <3sdt1u$5...@ecl.wustl.edu>,
>Fritz Lehmann <fr...@rodin.wustl.edu> wrote:
>> "Connected" has its ordinary meaning. A structure is connected
>>if it forms one piece, otherwise it's disconnected (in more than one
>>piece). To define a formal connected structure (such as a dot-and-line
>>graph, or a network) you have to quantify over paths: for any two
>>elements, _there_exists_ a path (of links/relations) between them. Roland
>Fraïssé and Andrzej Ehrenfeucht showed in the 1950s that no first-
>>order definition (not even a so-called "weakly higher-order" one) can
>>ever define the predicate "connected" -- you have to be able to say
>>whether a path exists (of any length) between two elements.
>
>Sorry... I don't quite understand why you can't represent the
>"connected" predicate with first-order logic. Why can't one just do
>something like this? Let's assume E(u,v) is already defined as
>"exists an edge between u and v", and we'll define P(u,v) "path

Try expressing "E(u,v)" in standard First-Order logic,
instead of saying "let's assume it's already defined". If
the "edge" (u,v) you define is merely an unstructured individual
like "x" then there is no given relation between (u,v) and its
fellow-individual u. If the "edge" is instead a relation between
u and v then E(u,v) quantifies over relations and is thus already
second order. If you use a membership relation to build sets of
individuals, and assume that extensional equivalence means identity
of predicates, then again you will be quantifying over predicates
and will be second-order.

>between u and v" and C(x) "x is connected" in terms of it.
>C(x) <==> ((forall u)(forall v) ((u member x)&(v member x) -> P(u,v)))
>(forall u)(forall v)(E(u,v) -> P(u,v))

Do you intend this to work only for one particular edge named "E",
or do you mean "(For all edges E)", which is second order?

>(forall u)(forall v)(forall w)((E(u,v)&P(v,w)) -> P(u,w))

Same question, alas.

>This may not give us a computation method for determining
>connectedness, but it looks like an accurate definition to me.

>Jesse Zbikowski Worcester Polytechnic Institute
> z...@wpi.edu Computer Science, '97

I don't see that you've refuted the Ehrenfeucht-Fraïssé proofs yet,
but maybe I'm not understanding the method.

Yours truly, Fritz Lehmann

Jim Hendler

Jun 28, 1995
In article <gat-2606...@milo.jpl.nasa.gov> g...@robotics.jpl.nasa.gov (Erann Gat) writes:

>
>It makes me angry when I think about it: the Academy has cheapened the role
>of popularization to the point where writing accessible work virtually
>guarantees ostracism and career death for all but the most senior and
>untouchable. Then we turn around and complain when crackpots step in to
>fill the void. The machine-learning-breakthrough crap on the newsgroup is
>partly your fault, Matt, and mine, and the fault of all the other academics
>out there slinging jargon around. It happened because there are people out
>there without Ph.D's who are hungry for information, and we didn't give it
>to them in a form that they could understand.

Sorry Erann, I think you have it somewhat wrong. You're right that we
tend, especially in a scientific forum, to use specialized jargon and
technical argument -- and that is EXACTLY RIGHT. When the discourse
at a scientific meeting falls to anecdotes and poor argument (as it
did all too often at a meeting you and I attended last March) it
degrades to the point of uselessness -- as evidence consider exactly
the level of discourse on this list and the very crap this thread is
responding to.

However, that said, you do have an important point -- if we ONLY
discuss things on the scientific level and don't communicate outside
we also fail -- public interest and public perception are critical
factors in the rise and fall of scientific endeavours, and cultivating
them is something we as scientists must do more of. I heard a recent
talk by the director of one of our nation's main scientific bodies, who
took us to task for leaving science at the whim of our current
scientifically-illiterate House of Reps. -- my words, not his -- and
he said that when funding for basic research is cut to shreds, we have
mainly ourselves to blame.

I believe the solution is not to water down the science in the
scientific forums, but to increase our presence in the popular medium.
I have lately spent some of my time writing for magazines, rather than
journals, and a couple of my articles have made it into print. It
doesn't help my promotion case much, but without research funding I
won't have much reason to be promoted. My goal is not to become the
"Carl Sagan" of AI (G-d forbid!), but rather to help keep the Searles
and Dreyfuses (and the hype-meisters) from winning the debate by default.

One question this leaves is whether this forum is scientific or
popular. I agree with Matt that we should try to make it more of the
former -- let the popular discussion go to comp.ai.philosophy or other
groups.


--
Prof. James Hendler
Department of Computer Science
University of Maryland
College Park, Md 20742

H. Enderton

Jun 28, 1995
(Jesse D Zbikowski) writes:
>Sorry... I don't quite understand why you can't represent the
>"connected" predicate with first-order logic. Why can't one just do
>something like this? ... we'll define P(u,v) "path between u and v"

>(forall u)(forall v)(E(u,v) -> P(u,v))
>(forall u)(forall v)(forall w)((E(u,v)&P(v,w)) -> P(u,w))

Ah, but that does not give a first-order definition of P in terms
of the edge relation E. Instead it states some conditions you
would like P to satisfy. The conditions are met if P is always
true (the largest fixed point), but that's not the P you want.

For a first-order definition, you'd need something like
P(u,v) <--> ....
where `P' does not occur on the right. And this is exactly
what Ehrenfeucht and Fraïssé showed is NOT possible.
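Enderton's fixed-point observation can be checked mechanically. In the sketch below (a toy graph and helper names of my own, not from the thread), the genuine reachability relation is the least relation closed under Zbikowski's two conditions, yet the relation that holds of *every* pair of nodes satisfies the same conditions -- so the conditions alone do not define "path".

```python
# Toy directed graph with two components: {1,2,3} and {4,5}.
edges = {(1, 2), (2, 3), (4, 5)}
nodes = {1, 2, 3, 4, 5}

def satisfies_axioms(P):
    """Zbikowski's conditions: E(u,v) -> P(u,v), and E(u,v) & P(v,w) -> P(u,w)."""
    base = all(e in P for e in edges)
    step = all((u, w) in P
               for (u, v) in edges
               for (x, w) in P if x == v)
    return base and step

# Least fixed point: start from the edges and close under the step rule.
least = set(edges)
changed = True
while changed:
    changed = False
    for (u, v) in edges:
        for (x, w) in list(least):
            if x == v and (u, w) not in least:
                least.add((u, w))
                changed = True

# The degenerate solution: P true of every pair whatsoever.
largest = {(u, v) for u in nodes for v in nodes}

assert satisfies_axioms(least) and satisfies_axioms(largest)
assert (1, 3) in least and (1, 4) not in least  # real reachability
assert (1, 4) in largest                        # yet the axioms accept this P too
```

Adding "P is the *smallest* relation satisfying these conditions" would rule the degenerate solution out, but saying "smallest" means quantifying over relations -- exactly the step beyond first order.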

--Herb Enderton
h...@math.ucla.edu

Philip Jackson

Jun 28, 1995
Jorn Barger (jo...@MCS.COM) wrote:

: From: pjac...@nyc.pipeline.com (Philip Jackson)
: >However, in the ensuing posts the scope of the statement has been expanded
: >to mean that FOL with additional bookkeeping is "an adequate model of most
: >human mental processes". This is untenable within normal intepretations
: >of "FOL" and "bookkeeping". For example, "most human mental processes"
: >includes inventing and using new languages and notations, reasoning with
: >metaphors / analogies, learning, dreaming, .... many processes for which we
: >have barely scratched the surface.

: Meaning that first-order logic can't refer to the symbols it's written in?

No, meaning just what I said. Having a logic that can "refer to its own
symbols" does not in itself imply any major progress in representing or
programming the kinds of human mental processes I listed. On the other hand,
I would agree that at some level, a True AI program needs to be able to
reflectively examine and reason about its own execution, and modify and
extend itself.

The phrase "first-order logic can't refer to the symbols it's written in"
seems imprecise. There are various ways it could be interpreted,
each of which may be more or less interesting. However, since you have
introduced the phrase it is probably more appropriate for you to say
further what you mean by it, before I try to give any further thoughts
about it...

:[...]

: Let me propose an anti-logicist point of view:

Please note that in my own remarks I did not intend to suggest an
"anti-logicist" point of view is necessary, or appropriate. How can one
be "against logic"?

: - Most 'inferences' can be seen as *filling in the gaps* in a *story*
: about what happened/ can happen/ will happen.

To the man with a hammer, the world appears to be a nail. While I don't
deny that representing stories and making inferences about what happens
in them is important, trying to represent logical inferences in general
as a kind of storytelling seems to be an oversimplification, and a
backwards way of looking at things: stories are driven by (various forms)
of logic, rather than logic being made out of stories.

Nevertheless, if it helps to think of things this way, or if it produces
something interesting or useful, go for it.

: - The most efficient way to index the elements of such an inference
: is in the form of a branching story sequence

: - If you have the right starting-generalizations, you can capture all
: the basic patterns in a reasonable space

: - At this point, calling it 'inference' hardly means anything anymore!
: Instead, you're retrieving story-elements...

: What you have, instead of logic-rules, are the 'meta-stories' of
: various Cyc-ian microdomains...

: I'll suggest that any inference worth making is worth saving
: explicitly in the knowledgebase... and invite counterexamples?

: j

I'll repeat what I said in another post, namely that it seems to me
the ability to invent new languages and notations is even more
important than the ability to reason within any given language, or
to retrieve facts stated within any language. Reasoning and retrieval
are both important, but invention and discovery of new knowledge are
only possible if one can reason about what one does not know and
cannot retrieve.

Cheers,

Phil Jackson


Andy Grosso

Jun 28, 1995
In article <3ss9fl$e...@condor.ic.net>,
Philip Jackson <pjac...@falcon.ic.net> wrote:

>Jorn Barger (jo...@MCS.COM) wrote:
>
>Please note, that in my own remarks I did not intend to suggest an
>"anti-logicist" point of view is necessary, or appropriate. How can one
>be "against logic"?
>

Well, in daily life, it is frequently convenient to be "anti-logic."
If I have a daily routine, it costs me mental effort and time to
analyze the current situation and then deviate from my routine
accordingly.

A lot of the time it is more efficient to just stick with the
established pattern.

Similarly, if I am in a situation which is strongly analogous
to a situation I have already encountered successfully, I might
just do what worked there instead of actually trying to
figure out optimal behavior for this new scenario.
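The strategy described here -- pay once to reason a situation out, then reuse the result whenever the same situation recurs -- is what programmers call memoization. A minimal sketch (the names and the toy "plan" function are my own illustration):

```python
from functools import lru_cache

calls = 0  # how many times we actually reasoned from scratch

@lru_cache(maxsize=None)
def plan(situation):
    """Stand-in for expensive deliberation about a situation."""
    global calls
    calls += 1
    return f"routine for {situation}"

plan("morning commute")   # deliberate once...
plan("morning commute")   # ...then just replay the cached routine
assert calls == 1
```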

Cheers,

Andy


Philip Jackson

Jun 28, 1995

Yes -- it seems to me that our ability to invent new languages, notations,
and representations is a much more important element of human-level
intelligence than our ability to reason within any particular
representation, or to retrieve information within any particular
representation.

It also seems to me that a key element in the invention of new languages
and notations is the use of "metaphoric reasoning", letting one thing stand
for another, describing one thing in terms used for another, etc. We often
do this in ways that are inherently imprecise and not rigorously true, yet
which are still helpful in solving problems.

This kind of metaphoric reasoning seems to be very different from the more
rigorous, perhaps less fault-tolerant way in which people have represented
analogical reasoning within predicate logic.

>
[skipping past a very interesting discussion of spatial reasoning in
mathematics]

>
>Of course I am not saying that purely spatial representations are
>enough: they are generally used in conjunction with logical
>assertions and some form of control information that may perhaps be
>expressed in some sort of notation for algorithms.

>[...]

Lakoff and Johnson's book "Metaphors We Live By" is one of the best
discussions I've found of the prevalence of metaphors throughout our
language, and in particular of spatial metaphors....


>Of course, it remains an open question whether logic in general or
>first order logic in particular is simply being used as an
>implementation medium in which all these other things can be
>implemented (a suggestion first made, I think, by Pat Hayes in
>responding to my 1971 critique of Logicist AI presented at
>IJCAI-71).
>

The implementation medium is not so important as the functionality, of
course, provided that the medium does not restrict us from describing the
desired functionality. FOL does seem to be too limited to support spatial
reasoning, e.g. if one cannot represent "connectedness" in FOL....


>It's not always noticed that mathematics is full of ad hoc
>notations. For example the decimal notation for arithmetic
>represents numbers using a very peculiar syntax and semantics where
>concatenation is an operator meaning (roughly) multiply by 10 and
>add, and the spatial structure of sequences is heavily used in
>addition, subtraction, and to some extent even multiplication and
>division.
>
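The quoted reading of decimal concatenation as "multiply by 10 and add" is just Horner's rule; a quick sketch (the function name is my own illustration):

```python
from functools import reduce

def from_digits(digits):
    """Interpret a digit sequence the way decimal notation does:
    concatenation = 'multiply the value so far by 10, then add'."""
    return reduce(lambda value, digit: value * 10 + digit, digits, 0)

assert from_digits([4, 0, 7]) == 407   # "407"
assert from_digits([0, 4, 2]) == 42    # leading zeros drop out naturally
```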

Another good example is the invention of calculus, and of two different
notations by Newton and Leibniz-- Leibniz' notation succeeded because it
was more compact and expressive, and made better use of "page space"....


>Perhaps Matt is not saying that all human reasoning (and reasoning
>by artificial intelligent agents) IS done using logic, but that in
>principle it COULD be replaced by logical reasoning, with no loss of
>power.
>
>I don't know whether that's true. I suspect that it isn't and that
>the reason we (including scientists and mathematicians) use lots of
>DIFFERENT forms of representation is that they help to make problems
>solvable in a reasonable time e.g. by structuring search spaces so
>that one cannot even represent some of the non-solutions that in a
>logical formalism would have to be explicitly considered and
>rejected.

>[...]

>
>I have the impression that most logicians don't bother to answer the
>criticisms of logic-based AI because they can't believe that there's
>any other form of reasoning that can be valid or sound, apart from
>logical reasoning.

The onus is upon those who believe a different form of reasoning can be
constructed to do so. Certainly, a lot of work has gone into constructing
mathematical logic, and logic-based AI. It is not obvious to me how to
represent the kind of metaphoric reasoning I've described within
logic-based AI. Perhaps metaphoric reasoning should subsume logic-based
reasoning, i.e., be able to become more rigorous and complete as necessary
in reasoning about a problem.

>[...]
>
>Cheers.
>Aaron
>--

Cheers,

Phil Jackson

William Siler

Jun 28, 1995
Matthew L. Ginsberg <gins...@t.uoregon.edu> writes:

>It's a pain reading all those articles, I'll admit it. But *I* read
>at least the abstracts of every article in AIJ, in JAIR, in AI
>Magazine, AAAI, and IJCAI. If the articles are close to my field of
>expertise, I read them in their entirety.
>
>That's the way science works. People talk, and people listen. It's
>not supposed to be that one guy talks and never bothers to listen.

Alas, every journal comes from the AI community. Over the last 25 years or
so, I have noticed that AIers listen to other AIers, but not to anyone else.
This is rather sad.

Philip Jackson

Jun 28, 1995
Philip Jackson wrote:
>Jorn Barger (jo...@MCS.COM) wrote:
>
>: - Most 'inferences' can be seen as *filling in the gaps* in a *story*
>: about what happened/ can happen/ will happen.
>
>To the man with a hammer, the world appears to be a nail. While I don't
>deny that representing stories and making inferences about what happens
>in them is important, trying to represent logical inferences in general
>as a kind of storytelling seems to be an oversimplification, and a
>backwards way of looking at things: stories are driven by (various forms)
>of logic, rather than logic being made out of stories.
>
>Nevertheless, if it helps to think of things this way, or if it produces
>something interesting or useful, go for it.
>
>: - The most efficient way to index the elements of such an inference
>: is in the form of a branching story sequence
>
>: - If you have the right starting-generalizations, you can capture all
>: the basic patterns in a reasonable space
>
>: - At this point, calling it 'inference' hardly means anything anymore!
>: Instead, you're retrieving story-elements...
>
>: What you have, instead of logic-rules, are the 'meta-stories' of
>: various Cyc-ian microdomains...
>
>: I'll suggest that any inference worth making is worth saving
>: explicitly in the knowledgebase... and invite counterexamples?
>

One thing I will say in favor of the notion of inference as storytelling is
that mathematical proofs tend to follow and combine different "scripts",
much like standard plots of stories. So a system which could combine such
scripts, or retrieve the most appropriate scripts to use in proving a
particular point, could be worthwhile, and have advantages over approaches
based on pure search. (I recall that this approach has already been used in
some theorem-proving systems, or in the meta-level control structures of
some expert systems, though offhand I cannot provide a specific reference.)
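Purely as a toy of my own (not a reference to any actual theorem prover or expert system), retrieving a stock proof "script" from the shape of a goal might look like:

```python
# Toy retrieval of proof "scripts" keyed on the surface form of the goal.
scripts = {
    "forall": "induction",               # universal claims: try induction
    "exists": "construct a witness",
    "not":    "proof by contradiction",
}

def pick_script(goal):
    """Return the stock proof pattern whose trigger appears in the goal,
    falling back to a direct proof."""
    for trigger, script in scripts.items():
        if trigger in goal:
            return script
    return "direct proof"

assert pick_script("forall n, P(n)") == "induction"
assert pick_script("exists x, Q(x)") == "construct a witness"
assert pick_script("P -> Q") == "direct proof"
```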


Phil Jackson

Philip Jackson

Jun 28, 1995

>[...]
>

In recent posts, if I have read them correctly, Barger has accused Ginsberg
of "intellectual dishonesty" (not the kind of charge that should be tossed
about lightly) and Barger has objected to Ginsberg's suggestion that Barger
should RTFM. So far as I can see, Ginsberg has been intellectually honest
in presenting his own ideas, and in trying to offer some constructive,
objective criticisms. It may also be a fair suggestion that Barger should
read more papers (F or not) about search, to balance out his knowledge
about retrieval.

Perhaps if Barger would "lay off" flaming other people, and other people
would stop flaming him, and we could concentrate on technical issues, then
there could be increased bandwidth for *information content*. For this
kind of approach to work, however, people must discuss and debate *ideas*,
and try to avoid either directing or interpreting criticism of ideas as
criticism of people.

Unfortunately, this is probably asking too much. I guess there will always
be flames in comp.ai ;-)

Phil Jackson


David Longley

Jun 29, 1995
On Connectedness and Opacity

If 'connectedness' can not be defined within First Order Logic,
but FOL is a sufficient language to express almost all of science
(Quine 1960;1992), then perhaps connectedness is not a necessary
feature of learning. In fact, doesn't this suggest that learning,
memory etc are intensional, and does that not shed some light on
the opacity (or context specificity) of learning/the intensional?

Is this sound?
--
David Longley

Bob Riemenschneider

Jun 29, 1995
In article <3spoma$j...@bigboote.WPI.EDU> z...@bigwpi.WPI.EDU (Jesse D Zbikowski) writes:

> Sorry... I don't quite understand why you can't represent the
> "connected" predicate with first-order logic. Why can't one just do
> something like this? Let's assume E(u,v) is already defined as
> "exists an edge between u and v", and we'll define P(u,v) "path
> between u and v" and C(x) "x is connected" in terms of it.
>
> C(x) <==> ((forall u)(forall v) ((u member x)&(v member x) -> P(u,v)))
>
> (forall u)(forall v)(E(u,v) -> P(u,v))
>
> (forall u)(forall v)(forall w)((E(u,v)&P(v,w)) -> P(u,w))
>
> This may not give us a computation method for determining
> connectedness, but it looks like an accurate definition to me.

Accurate as far as it goes, but incomplete. You need to add that P is the
smallest relation satisfying these axioms, which requires quantification
over relations. Without that additional axiom, you have models of the
theory in which P is much larger -- models in which, e.g.,

P(u, v) <-> u = u & v = v

is true -- and so you can't claim to have defined "path".

-- rar


Philip Jackson

Jun 29, 1995
In article <804460...@longley.demon.co.uk>, David Longley writes:

>On Connectedness and Opacity
>
>If 'connectedness' can not be defined within First Order Logic,
>but FOL is a sufficient language to express almost all of science
>(Quine 1960;1992), then perhaps connectedness is not a necessary
>feature of learning.

Or perhaps Quine was wrong (or perhaps he did not really say this?) and FOL
is not sufficient to express almost all of science. Of course, "almost all"
needs definition, but I think it would be pretty easy to find properties
like connectedness that are used in many fields of science... E.g., how
about the notion of a "sum over histories" in quantum mechanics? Here's a
case where physicists reason about and perform operations on a potentially
infinite set of paths....

> In fact, doesn't this suggest that learning,
>memory etc are intensional, and does that not shed some light on
>the opacity (or context specificity) of learning/the intensional,
>and the liklihood of a 'Society of Mind'.
>
>Is this sound?
>--

If the premise is false, then one can derive any conclusion, but it isn't
sound reasoning. But just because the premise (that FOL is adequate for
almost all fields of science) may be false, does not mean that your
conclusion is false.

>David Longley
>

Phil Jackson

Marvin Minsky

Jun 30, 1995
In article <804472...@longley.demon.co.uk> Da...@longley.demon.co.uk writes:

>Minsky and Papert of course made much of the failure of single layer
>neural networks to model the XOR function. They also made much of the
>possibility of real systems being built from multiple agencies opaque
>to one another. I'm intrigued by the assertion that FOL can not handle
>'connectedness', and that Minsky and Papert's critique of Perceptrons
>was largely based on the failure of single layer perceptrons to solve
>such problems. Was the Minsky & Papert critique also a critique of FOL
>as an adequate language for AI by the same token?

Hmmm. It's not entirely unrelated, to use a waffling phrase. In
fact, virtually all the theorems in that Perceptrons book also apply
to n+1-layer nets as well.

(In most cases this can be seen by replacing our growth rates by the
n-th root of the rates for the nets with a single inner layer. It's a
constant annoyance that so many NN practitioners haven't noticed this
rather obvious point, and hence keep saying that n-layer nets escape
those limitations. This even applies to parity, unless you allow
arbitrarily large fan-in (as did the authors of the otherwise good PDP
book). Our theorems assumed what we called "finite order" -- which is
the same as bounded fan-in. Of course if you don't assume any such
limitation, then you can compute *anything* in two layers, simply by
writing out the conjunctive normal form of a Boolean function. This
corresponds to what comp.ai.philosophy members call the "humongous
lookup table approach".)
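The bounded fan-in point can be made concrete with parity: restricted to fan-in 2, a circuit needs about log2(n) XOR layers, whereas collapsing to two layers forces the fan-in to grow with n. A throwaway Python sketch (my own illustration, not from the Perceptrons book):

```python
def parity_tree(bits):
    """Compute parity using gates of fan-in 2 only: repeated pairwise
    XOR layers, about log2(n) of them, instead of one wide gate."""
    layer = list(bits)
    depth = 0
    while len(layer) > 1:
        if len(layer) % 2:           # pad odd layers with 0 (XOR identity)
            layer.append(0)
        layer = [a ^ b for a, b in zip(layer[0::2], layer[1::2])]
        depth += 1
    return layer[0], depth

value, depth = parity_tree([1, 0, 1, 1, 1, 0, 0, 1])
assert value == 1    # five 1-bits: odd parity
assert depth == 3    # log2(8) fan-in-2 layers
```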

As for connectedness, this clearly requires some sort of recursive
closure, e.g., the "minimization operator" that gets you from
primitive to general recursion. Offhand (because I haven't thought it
out yet) it seems to me that FOL does have the same difficulty with
some analogy to topological connectedness. Can anyone produce (or give
a reference to) a precise formulation of this?

>My basic question is for help with the explication of the above, ie if
>it sheds light on why Minsky was led to the Society of Mind, and also,
>if the material I have cited on the Fragmentation of Behaviour should
>be seen as a positive critique of some of the central tenets of
>Cognitive Science, particularly, the rationality assumption.

I don't think this was what led me and Papert to SoM; rather, it was
more the question of why children were so able to use "common sense"
despite Piaget's convincing demonstrations that they seemed unable to
reliably use relatively simple formal reasoning methods even past the
age of 10 years (and in most cases, in later life as well).
Certainly, we never dreamed of making any assumptions about
rationality -- if by that you mean logical inference rather than
plausible inference.

Ian Sutherland

Jun 30, 1995
In article <804472...@longley.demon.co.uk>,
David Longley <Da...@longley.demon.co.uk> wrote:
>I'm intrigued by the assertion that FOL can not handle
>'connectedness'

I don't think this assertion has been made. The assertion
that's been made is that if you have a first-order language
with a binary predicate which means "is directly
connected", there's no first-order formula C(x,y) which
means "x is connected to y" in all structures for the
language. It's a long jump from that well-defined
technical statement to the ill-defined and nontechnical
statement "FOL can not handle 'connectedness'".

FOL can "handle 'connectedness'" at least in the sense that
it seems that axiomatic set theory can prove anything about
the concept of "connectedness" that's been proved in
mathematics, and axiomatic set theory is a first-order
theory.

I think it might be worthwhile for you to try to say
precisely what notion of "handle" you're using when you
talk about "handling" connectedness. It seems entirely
possible that Minsky/Papert might be using one notion and
Quine another.
--
Ian Sutherland
i...@eecs.nwu.edu

Sans Peur

David Longley

Jun 30, 1995
In article <3svu91$d...@news.eecs.nwu.edu>
i...@eecs.nwu.edu "Ian Sutherland" writes:

> In article <804472...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >I'm intrigued by the assertion that FOL can not handle
> >'connectedness'
>

> I think it might be worthwhile for you to try to say
> precisely what notion of "handle" you're using when you
> talk about "handling" connectedness. It seems entirely
> possible that Minsky/Papert might be using one notion and
> Quine another.
> --
> Ian Sutherland
> i...@eecs.nwu.edu
>
> Sans Peur
>

I'm sure you are correct to criticize my vague use of "handling". I
should have been more careful. Thanks for pointing this out.

What I would find very helpful would be some development of the
Quinean thesis that deductive inference fails within intensional
contexts. Psychological terms, being intensional, resist substitutivity
of identicals 'salva veritate', and as a psychologist, I find this
fascinating and worthy of considerable elaboration. My remarks on
the 'connectedness' predicate (relation) were meant in that context.

Would anyone care to comment/correct me?
--
David Longley

David Longley

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <3sv8pu$6...@pipe1.nyc.pipeline.com>
pjac...@nyc.pipeline.com "Philip Jackson" writes:

> Phil Jackson
>
Many thanks Phil...
>
Yes, I accept that, but I would like something more definitive. Quine
does make some rather strong statements for predicate logic in Word &
Object, Quiddities and Pursuit of Truth (as I have cited elsewhere).
I think he is referring to FOL when he makes those claims, but am
happy to be corrected. Similarly if Quine is wrong. Quantum Mechanics
is out of my area of expertise altogether, but I understand that some
of the more theoretical aspects venture into three-valued logic and
other odd notions.

Minsky and Papert of course made much of the failure of single layer
neural networks to model the XOR function. They also made much of the
possibility of real systems being built from multiple agencies opaque
to one another. I'm intrigued by the assertion that FOL can not handle
'connectedness', and that Minsky and Papert's critique of Perceptrons
was largely based on the failure of single layer perceptrons to solve
such problems. Was the Minsky & Papert critique also a critique of FOL
as an adequate language for AI by the same token?
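[The XOR point is easy to demonstrate concretely. The sketch below (illustrative, not from Minsky & Papert) exhaustively checks a grid of weights to show that no single linear threshold unit fits XOR, then hand-wires a two-layer net that does:]

```python
# A minimal sketch of the XOR point (illustration, not from the book):
# no single linear threshold unit computes XOR, but two layers suffice.
import itertools

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
XOR = [0, 1, 1, 0]

def step(z):
    return 1 if z > 0 else 0

# Exhaustively search a grid of weights/biases: no single-layer unit
# reproduces XOR (XOR is not linearly separable).
grid = [w / 2 for w in range(-6, 7)]
single_layer_fits = any(
    all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in zip(X, XOR))
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(single_layer_fits)  # False

# A hand-wired two-layer net: hidden units compute OR and NAND,
# and the output unit computes their AND, which is XOR.
def two_layer(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # OR
    h2 = step(-x1 - x2 + 1.5)       # NAND
    return step(h1 + h2 - 1.5)      # AND

print([two_layer(*x) for x in X])   # [0, 1, 1, 0]
```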

My basic question is for help with the explication of the above, ie if
it sheds light on why Minsky was led to the Society of Mind, and also,
if the material I have cited on the Fragmentation of Behaviour should
be seen as a positive critique of some of the central tenets of
Cognitive Science, particularly, the rationality assumption.

--
David Longley

Fritz Lehmann

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <804440...@longley.demon.co.uk>,

David Longley <Da...@longley.demon.co.uk> wrote:
>On Connectedness and Opacity
>
>If 'connectedness' can not be defined within First Order Logic,
>but FOL is a sufficient language to express almost all of science
>(Quine 1960;1992), then perhaps connectedness is not a necessary
>feature of learning. ...
>David Longley

I guess Quine erred. He had a passionate hatred for quantifying
over anything but individuals in his youth, and lots of people
(like R. M. Martin for example) got on that bandwagon. But I think
Quine has recanted, or at least refused to take the extreme view,
as the wisdom of maturity has supplanted the excesses of wild youth.
It seems wrong to suppose that "science" doesn't need connectedness;
its use in electrical circuits, linkages, object mechanics, etc.
seems obvious. (Or do these examples fall into the exception allowed by the
"almost" above?)

Yours truly, Fritz Lehmann


David Longley

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <3svtt8$j...@ecl.wustl.edu>
fr...@rodin.wustl.edu "Fritz Lehmann" writes:

> In article <804440...@longley.demon.co.uk>,
> David Longley <Da...@longley.demon.co.uk> wrote:
> >On Connectedness and Opacity
> >If 'connectedness' can not be defined within First Order Logic,
> >but FOL is a sufficient language to express almost all of science
> >(Quine 1960;1992), then perhaps connectedness is not a necessary
> >feature of learning. ...
> >David Longley
>
> I guess Quine erred. He had a passionate hatred for quantifying
> over anything but individuals in his youth, and lots of people
> (like R. M. Martin for example) got on that bandwagon. But I think
> Quine has recanted, or at least refused to take the extreme view,
> as the wisdom of maturity has supplanted the excesses of wild youth.
> It seems wrong to suppose that "science" doesn't need connectedness;
> its use in electrical circuits, linkages, object mechanics, etc.
> seems obvious. (Or do these examples fall into the exception allowed
> by the "almost" above?)
>
> Yours truly, Fritz Lehmann

Thanks for the above. My remarks were purely speculative (as are
those which follow), and I am grateful for any comments/corrections.
Could you elaborate on how the examples you cite require the
connectedness predicate (relation)?

Here's Quine (1992) on the place of the predicate calculus:

'At first the problem of mind was ontological and linguistic. With
the passing of mind as substance, there remained a twofold problem
of mentalistic language: syntactic and semantic. The distinctive
syntactic trait of mentalistic discourse was the content clause
'that p'. This obstructed extensionality: that is, the
substitutivity of identity and more generally the interchangeability
of all coextensive terms and clauses salva veritate. It obstructed
classical predicate logic as a universal theoretical framework. Now
this quarter of the mind problem is in a fair way to dissolution.
Quotational treatment of propositional attitudes de dicto delivers
them to the extensional domain of predicate logic, thanks to the
reduction of quotation to spelling. Propositional attitudes de re,
on the other hand, we downgraded. So we see the attitudes de dicto
reconciled syntactically with extensional logic. A single language,
regimented in predicate logic, can take them in stride along with
natural science. The residual oddity of these mentalistic predicates
de dicto is purely semantic: they do not interlock productively with
the self-sufficient concepts and causal laws of natural science.
Still the mentalistic predicates, for all their vagueness, have long
interacted with one another, engendering age-old strategies for
predicting and explaining human action. They complement natural
science in their incommensurable way, and are indispensable both to
the social sciences and our everyday dealings. Read Dennett and
Davidson.'

W. V. O. Quine (1992)
Intension
The Pursuit of Truth p.72-73

I thought the following worth citing too:

'The first-order predicate calculus is an extensional logic in which
Leibniz's Law is taken as an axiomatic principle. Such a logic
cannot admit 'intensional' or 'referentially opaque' predicates
whose defining characteristic is that they flout that principle.'

U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

There *do* seem to be very important limitations on what one can
infer within psychological (epistemic) contexts, and I seem to have
become somewhat 'bitten' by this in recent years. Elsewhere I have
reviewed some of the work in empirical psychology which I think has
a bearing on the matter - however, I am very conscious that this is
not my area of expertise (I'm a psychologist, not a logician), so
I'm grateful for any advice - supportive or otherwise.

David Longley

David Longley

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <1995Jun30.0...@media.mit.edu>
min...@media.mit.edu "Marvin Minsky" writes:

> >Minsky and Papert of course made much of the failure of single layer
> >neural networks to model the XOR function. They also made much of the
> >possibility of real systems being built from multiple agencies opaque
> >to one another. I'm intrigued by the assertion that FOL can not handle
> >'connectedness', and that Minsky and Papert's critique of Perceptrons
> >was largely based on the failure of single layer perceptrons to solve
> >such problems. Was the Minsky & Papert critique also a critique of FOL
> >as an adequate language for AI by the same token?
>

> Hmmm. It's not entirely unrelated, to use a waffling phrase. In
> fact, virtually all the theorems in that Perceptrons book also apply
> to n+1-layer nets as well.
>

<snip>


>
> >My basic question is for help with the explication of the above, ie if
> >it sheds light on why Minsky was led to the Society of Mind, and also,
> >if the material I have cited on the Fragmentation of Behaviour should
> >be seen as a positive critique of some of the central tenets of
> >Cognitive Science, particularly, the rationality assumption.
>

> I don't think this was what led me and Papert to SoM; rather, it was
> more the question of why children were so able to use "common sense"
> despite Piaget's convincing demonstrations that they seemed unable to
> reliably use relatively simple formal reasoning methods even past the
> age of 10 years (and in most cases, in later life as well).
> Certainly, we never dreamed of making any assumptions about
> rationality -- if by that you mean logical inference rather than
> plausible inference.
>

If we accept that humans (of all ages) have great difficulty
using the *formal* rules of logic to reason (Piaget, Wason and
others) then whilst McCarthy and others may be incorrect IFF they
take human performance as a reference for AI models, that might
just be taken to suggest that his efforts to accommodate the
intensional idioms within FOL are misdirected, but that FOL (and
possibly its developments - I am not equipped to judge) *is* the
appropriate language for AI researchers to embrace (as many AI
text books suggest).

But wouldn't that make the SoM approach more palatable/at home in
neuroscience and psychology rather than AI? That is, the ideas
expressed in SoM and the ambitions of the 'connectionists' might
be seen as research into the *limitations* of human performance
(the opacity problem/failure to make the connections) and the
explication of some of the constraints of our neurology - whilst
McCarthy's approach, pruned of recent attempts to accommodate
intension within First Order theory, might ultimately deliver
more of the formal, but definitely not human-like, logical
reasoning systems which we have already seen (resolution etc.).


(if this makes no sense, put it down to the freak heatwave
addling my brains - it's been 90 degrees in London today)
--
David Longley

Aaron Sloman

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
David Longley <Da...@longley.demon.co.uk> writes:

> Date: Fri, 30 Jun 95 07:56:34 GMT
> Organization: Relational Technology
>
> ....


> What I would find very helpful would be some development of the
> Quinean thesis that deductive inference fails within intensional
> contexts.

This is totally trivial.

If an intensional context is DEFINED as one in which replacement of
a term by one that is referentially equivalent does not preserve truth
value then it follows trivially that the particular sort of
deductive logic that allows substitution of referential equivalents
will fail in intensional contexts. So what?

There's nothing to develop, except to notice that you had better not
use that narrow sort of logic in that sort of context.

> ...Psychological terms, being intensional, resist substitutivity
> of identicals 'salva veritate', and as a psychologist, I find this
> fascinating and worthy of considerable elaboration.

I suspect you have some notion that this is unique to psychology,
and that your fascination has to do with a long term agenda that is
misguided.

(a) intensionality (as I've pointed out previously) is not unique to
psychological contexts, and

(b) the widely believed claim that ALL referring expressions in
psychological contexts are intensional is false.

I've given examples of both in previous contributions.

Here are some more examples of (a)

"It's easy to prove that the sum of the first 5 odd
numbers is 25."
(replace "25" with a very complicated expression that
evaluates to 25)

"The set of chairs in this room is easily identifiable."
(Replace "the set of chairs in this room" with an expression
that refers to the same set, but uses a different membership
criterion, e.g. "the set of objects in this room that were
all manufactured in Wigan in 1973")

There are lots and lots of examples of (a) relating to computers,
e.g.
"The computer has the information that Fred Smith was
born in 1960"

Even if Fred Smith is your brother and that statement is true, this
might be false
"The computer has the information that your brother was
born in 1960"

There's nothing mysterious about psychological statements being
intensional (referentially opaque). The same is true of many
statements about other information processing systems besides
people.

But not all psychological statements are intensional (referentially
opaque).

Another example of (b)

"The policeman noticed the burglar climbing over the wall."
(I claim that for extensionally equivalent
substitutions of "the burglar" the truth value of
the sentence will not change.)

I suspect that the attempt to link Quine, limitations of first order
logic, the language of science, and properties of psychological
statements, is just a muddle.

The language we use to describe information processing systems
allows us sometimes to refer (directly or indirectly, explicitly or
implicitly) to aspects of their semantic content. When we do this we
are not talking about what that content refers to.

But this is not a topic that is excluded from science. Do you really
want to claim that computer information systems are excluded from
the realm of science?

As for whether Quine claimed that first order predicate calculus
was the language of science, all I can say is that IF he did, he
merely showed what a narrow view of science he had.

There are many extensions to first order predicate calculus that
people have found necessary for one purpose or another (e.g. modal
logics, which also produce intensional contexts).

Why not drop the topic -- there's no mileage in it, at least not
the mileage you are looking for!

Sorry.
Aaron
---
--
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk
Phone: +44-121-414-4775 Fax: +44-121-414-4281

David Longley

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <3t1lef$4...@percy.cs.bham.ac.uk>
A.Sl...@cs.bham.ac.uk "Aaron Sloman" writes:

> David Longley <Da...@longley.demon.co.uk> writes:
>
> > Date: Fri, 30 Jun 95 07:56:34 GMT
> > Organization: Relational Technology
> >
> > ....
> > What I would find very helpful would be some development of the
> > Quinean thesis that deductive inference fails within intensional
> > contexts.
>
> This is totally trivial.
>
> If an intensional context is DEFINED as one in which replacement of
> a term by one that is referentially equivalent does not preserve truth
> value then it follows trivially that the particular sort of
> deductive logic that allows substitution of referential equivalents
> will fail in intensional contexts. So what?
>

Of course not - IFF that's what is being claimed. But Chisholm (1957)
and Quine are not claiming this - rather they are suggesting that this
is what seems to characterise intensional idioms.

You have clearly decided to ignore what I and others have actually
written (at length) and seem prepared to dismiss what amounts to over
50 years of work in the Philosophy of Mind where this issue has been
extensively analysed and discussed (though not resolved). Have you read
any of this literature Aaron? It should be clear from the material I
have posted that intensional contexts include more than propositional
attitudes (modal operators and the counterfactuals), but never mind.

You do seem quite happy to ascribe content and *assume* such attributions
are true. Now *that* is precisely what is wrong with such processes. See
Davidson's 'Psychology as Philosophy'. I think the rest of your note is
characterised by the same errant reasoning. Here's what I have said in
the past. If you don't recognize any of it as having a ring of face or
ecological validity - I can only urge you to take some course in psychology.

FRAGMENTS OF BEHAVIOUR:

FROM 'A System Specification for PROfiling BEhaviour'
A: Methodological Solipsism

'A cognitive theory with no rationality restrictions is
without predictive content; using it, we can have
virtually no expectations regarding a believer's
behavior. There is also a further metaphysical, as
opposed to epistemological, point concerning rationality
as part of what it is to be a PERSON: the elements of a
mind - and, in particular, a cognitive system - must FIT
TOGETHER or cohere.......no rationality, no agent.'

C. Cherniak (1986)
Minimal Rationality p.6

'Complexity theory raises the possibility that formally
correct deductive procedures may sometimes be so slow as
to yield computational paralysis; hence, the "quick but
dirty" heuristics uncovered by the psychological
research may not be irrational sloppiness but instead
the ultimate speed-reliability trade-off to evade
intractability. With a theory of nonidealized
rationality, complexity theory thereby "justifies the
ways of Man" to this extent.'

ibid p.75-76

The establishment of coherence or incoherence depends on a commitment
to clear and accurate recording and analysis of observations and their
relations within a formal system. Unfortunately, biological limits
on both neuron conduction velocity and storage capacity impose such
severe constraints on human processing capacity that we are
restricted to using heuristics rather than recursive functions.
There can be no doubt that non-human computers, at least with
respect to the propositional calculus and first order predicate
calculus with monadic predicates (ie systems which are decidable,
with decision procedures untainted by Godel's 1931 theorem), offer a
far more reliable way of analysing information than intuitive
judgment.

The primary reason for writing this volume is to locate the programme
of behaviour assessment and management referred to as 'Sentence
Management' within contemporary research and development in cognitive
and behaviour science. It is also in part motivated by the author
having been in a position for some time where he has been required
to teach, train, and support applied criminological psychologists in
the use of deductive (computer, and relational database 4GL
programming) as well as inductive (inferential statistical) inference
in an applied setting. This responsibility has led to a degree of
bewilderment. Some very influential work in mathematical logic this
century has suggested that certain domains of concern simply do not
fall within the 'scope and language of science' (Quine 1956). That
work suggests, in fact, that psychological idioms, as opposed to
behavioural terms, belong to a domain which is resistant to the tools
of scientific analysis since they flout a basic axiom which is a
precondition for valid inference.

Whilst this point has been known to logicians for nearly a century,
empirical evidence in support of this conclusion began to accumulate
throughout the 1970s and 1980s as a result of work in Decision Theory
in psychology and medicine. (Kahneman, Slovic and Tversky 1982, Arkes
and Hammond 1986). This work provided a substantial body of empirical
evidence that human judgement is not adequately modelled by the axioms
of subjective probability theory (ie Bayes Theorem, cf. Savage 1954,
Cooke 1991), or formal logic (Wason 1966, Johnson-Laird and Wason
1972), and that in all areas of human judgement, quite severe errors
of judgement were endemic probably due to basic neglect of base rates
(prior probabilities or relative frequencies of behaviours in the
population, see Eddy 1982 for a striking example of the
misinterpretation of the diagnostic contribution of mammography in
breast cancer). Instead, the evidence now strongly suggests that
judgements are usually little more than guesses, or 'heuristics' which
are prone to well understood biases such as 'anchoring',
'availability', and 'representativeness'. This work has progressively
undermined the very foundations of Cognitive Science, which takes
rationality and substitutivity as axiomatic.
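[The base-rate point can be made with a small worked example. The numbers below are illustrative, not Eddy's mammography figures: by Bayes' theorem, a test that looks accurate still gives a low probability of the condition given a positive result when the condition is rare.]

```python
# A hedged numerical sketch of base-rate neglect (illustrative
# numbers, not Eddy's): even a fairly accurate test yields a low
# positive predictive value when the base rate is low.
prior = 0.01          # base rate: P(condition)
sensitivity = 0.80    # P(positive | condition)
false_pos = 0.10      # P(positive | no condition)

# Total probability of a positive result, then Bayes' theorem.
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # 0.075: far below the intuitive ~0.8
```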

It is bewildering how difficult it is to teach deductive reasoning
skills effectively if the classical, functionalist stance of
contemporary cognitive psychology is in fact true. Yet the literature
on teaching skills in deductive reasoning suggests that these *are* in
fact very difficult skills to teach. Most significantly, it is
notoriously difficult to teach such skills with the objective of
having them *applied to practical problems*. What seems to happen is
that, despite efforts to achieve the contrary, the skills that are
acquired, are both acquired and applied as intensional, inductive
heuristics, rather than as a set of formal logical rules.

This volume is therefore to be taken as a rationale for both a
programme of inmate management and assessment referred to as 'Sentence
Management' (which is both historically descriptive and deductive
rather than projective and inductive in approach), and for the format
of the current MSc 'Computing and Statistics' module which is part of
the MSc in Applied Criminological Psychology, designed to provide
formal training in behaviour science for new psychologists working
within the English Prison Service. The 'Computing and Statistics'
module could in fact be regarded as a module in 'Cognitive Skills' for
psychologists. The format adopted is consistent with the
recommendations of researchers such as Nisbett and Ross (1980) Holland
et al. (1986), and Ross and Nisbett 1991. Elaboration of the
substantial Clinical vs. Actuarial dimension can be found in section C
below.

At the heart of 20th century logic there is a very interesting problem
(illuminated over the last three decades by W.V.O Quine (1956, 1960,
1990,1992), which seems to be divisive with respect to the
classification and analysis of 'mental' (or psychological) phenomena
as opposed to 'physical' phenomena. The problem is variously known as
Brentano's Thesis (Quine 1960), 'the problem of intensionality', or
'the content-clause problem'.

"The keynote of the mental is not the mind; it is the
content-clause syntax, the idiom 'that p'".

W. V. O. Quine
Intension
The Pursuit of Truth (1990) p.71

It is the thesis of this volume that the solution to this problem
renders psychology and behaviour science two very different subjects
with entirely different methods and domains of application. The
problem is reflected in differences in how language treats certain
classes of terms. One class is the 'extensional' and the other
'intensional'. This volume therefore sets out the relevant
contemporary research background and outlines the practical
implications which these logical classes have for the applied,
practical work of criminological psychologists. We will begin with a
few recent statements on the implications of Brentano's Thesis for
psychology as a science. The basic conclusion is that there can be no
scientific analysis, ie no reliable application of the laws of logic
or mathematics to psychological phenomena, because psychological
phenomena flout the very axioms which mathematical, logical and
computational processes must assume for valid inference. From the fact
that quantification is unreliable within intensional contexts, it
follows that both p and not-p can be held as truth values for the same
proposition, and from any system which allows such inconsistency, any
conclusion whatsoever can be inferred. The thrust of this volume is
that bewilderment vanishes once one appreciates that the subject
matter of Applied Criminological Psychology is exclusively that of
behaviour, and that its methodology is exclusively deductive and
analytical. This is taken to be a clear vindication of Quine's 1960
dictum that:

'If we are limning the true and ultimate structure of
reality, the canonical scheme for us is the austere
scheme that knows no quotation but direct quotation and
no propositional attitudes but only the physical
constitution and behavior of organisms.'

W.V.O Quine
Word and Object 1960 p 221

For:

'Once it is shown that a region of discourse is not
extensional, then according to Quine, we have reason to
doubt its claim to describe the structure of reality.'

C. Hookway
Logic: Canonical Notation and Extensionality
Quine (1988)
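[The step above from inconsistency to "any conclusion whatsoever can be inferred" is the classical principle of explosion (ex falso quodlibet); a one-line formal sketch, here in Lean:]

```lean
-- Principle of explosion: from p and its negation, any q follows.
example (p q : Prop) (h : p) (hn : ¬p) : q := absurd h hn
```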

The problem with intensional (or common sense or 'folk') psychology
has been clearly spelled out by Nelson (1992):

'The trouble is, according to Brentano's thesis, no such
theory is forthcoming on strictly naturalistic, physical
grounds. If you want semantics, you need a full-blown,
irreducible psychology of intensions.

There is a counterpart in modern logic of the thesis of
irreducibility. The language of physical and biological
science is largely *extensional*. It can be formulated
(approximately) in the familiar predicate calculus. The
language of psychology, however, is *intensional*. For
the moment it is good enough to think of an
*intensional* sentence as one containing words for
*intensional* attitudes such as belief.

Roughly what the counterpart thesis means is that
important features of extensional, scientific language
on which inference depends are not present in
intensional sentences. In fact intensional words and
sentences are precisely those expressions in which
certain key forms of logical inference break down.'

R. J. Nelson (1992)
Naming and Reference p.39-42

and explicitly by Place (1987):

'The first-order predicate calculus is an extensional
logic in which Leibniz's Law is taken as an axiomatic
principle. Such a logic cannot admit 'intensional' or
'referentially opaque' predicates whose defining
characteristic is that they flout that principle.'

U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

The *intension* of a sentence is its 'meaning', or the property it
conveys. It is sometimes used almost synonymously with the
'proposition' or 'content' communicated. The *extension* of a term or
sentence on the other hand is the CLASS of things of which the term or
sentence can be said to be true. Thus, things belong to the same
extension of a term or sentence if they are the same members of the
designated class, whilst things share the same intension,
(purportedly) if they share the same property. Here's how Quine (1987)
makes the distinction:

'If it makes sense to speak of properties, it should
make clear sense to speak of sameness and differences of
properties; but it does not. If a thing has this
property and not that, then certainly this property and
that are different properties. But what if everything
that has this property has that one as well, and vice
versa? Should we say that they are the same property? If
so, well and good; no problem. But people do not take
that line. I am told that every creature with a heart
has kidneys, and vice versa; but who will say that the
property of having a heart is the same as that of having
kidneys?

In short, coextensiveness of properties is not seen as
sufficient for their identity. What then is? If an
answer is given, it is apt to be that they are identical
if they do not just happen to be coextensive, but are
necessarily coextensive. But NECESSITY, q.v., is too
hazy a notion to rest with.

We have been able to go on blithely all these years
without making sense of identity between properties,
simply because the utility of the notion of property
does not hinge on identifying or distinguishing them.
That being the case, why not clean up our act by just
declaring coextensive properties identical? Only because
it would be a disturbing breach of usage, as seen in the
case of the heart and kidneys. To ease that shock, we
change the word; we speak no longer of properties, but
of CLASSES......

We must acquiesce in ordinary language for ordinary
purposes, and the word 'property' is of a piece with it.
But also the notion of property or its reasonable
facsimile that takes over, since these contexts never
hinge on distinguishing coextensive properties. One
instance among many of the use of classes in mathematics
is seen under DEFINITION, in the definition of number.

For science it is classes SI, properties NO.'

W. V. O. Quine (1987)
Classes versus Properties
QUIDDITIES:

It has been argued quite convincingly by Quine (1956,1960) that the
scope and language of science is entirely extensional, that the
intensional is purely attributive, instrumental or creative, and that
there can not be a universal language of thought or 'mentalese', since
such a system would presume determinate translation relations.
Different languages are different systems of behaviour which may
achieve similar ends, but they do not support direct, determinate
translation relations. This is Quine's (1960) 'Indeterminacy of
Translation Thesis'. Despite its import, we frequently behave 'as if'
it is legitimate to directly translate (substitute), and we do this
not only within our own language as illustrated below, but within our
own thinking and language.

This profound point of mathematical logic can be made very clear with
a simple, but representative example. The sub-set of intensional
idioms with which we are most concerned in our day to day dealings
with people are the so called 'propositional attitudes' (saying that,
remembering that, believing that, knowing that, hoping that and so
on). If we report that someone 'said that' he hated his father, it is
often the case that we do not report what is articulated verbatim, ie
precisely, what was said. Instead, we frequently 'approximate' the
'meaning' of what was said and consider this legitimate on the grounds
that the 'meaning' is preserved.

Unfortunately, this assumes that, in contexts of propositional
attitude ('says that', 'thinks that', 'believes that', and, quite
pertinently, 'knows that' etc.) we are free to substitute terms or
phrases which are otherwise co-referential as we can extensionally
with 7+3 = 10 and 5+5=10. That is, it assumes that inference within
intensional contexts is valid. Yet nobody would report that if Oedipus
said that he wanted to marry Jocasta that he said that he wanted to
marry his mother! The problem with intensional idioms is that they can
not be substituted for one another and still preserve the truth
functionality of the contexts within which they occur. In fact, they
can only be directly quoted verbatim, ie as behaviours. Now
substitutivity of co-referential identicals 'salva veritate'
(Leibniz's Law) is in fact a basic extensional axiom of first order
logic, and is a law which underpins all valid inference. One of the
objectives of this paper is therefore to specify in practical detail
how in fact we propose to develop a system for inmate reporting which
does not flout Leibniz's Law, but takes it as central. This is an
inversion of current practices in significant areas of the work of
applied psychologists, and whilst the example cited above is a simple
one, it is nevertheless highly representative of much of the
problematic work of practising psychologists, who, often ignorant of
the above constraint on dealing with the problems which logicians have
identified with the intensional, are, as a consequence, more often
'creative' in their dealings with inmates and in their report
writing, than is often appreciated, even though the 'Puzzle About
Belief' and modal contexts is well documented (Church 1954, Kripke
1979).
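[The contrast can be sketched in a few lines (my illustration, not from the text): substitution of co-referential terms is truth-preserving in extensional contexts such as arithmetic, but fails inside quotation, the simplest opaque context.]

```python
# Extensional context: 7+3 and 5+5 co-refer (both denote 10), so one
# may be substituted for the other salva veritate.
a, b = 7 + 3, 5 + 5
assert a == b
assert a * 2 == b * 2          # substitution preserves truth

# Quotational (opaque) context: a verbatim report is what it is, so
# co-referential expressions are NOT interchangeable within it.
said = "I want to marry Jocasta"          # direct quotation
substituted = "I want to marry my mother" # co-referential, per the myth
assert said != substituted     # substitution fails salva veritate
print("substitutivity holds extensionally, fails under quotation")
```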

Dretske (1980) put the issue as follows:

'If I know that the train is moving and you know that
its wheels are turning, it does not follow that I know
what you know just because the train never moves without
its wheels turning. More generally, if all (and only) Fs
are G, one can nonetheless know that something is F
without knowing that it is G. Extensionally equivalent
expressions, when applied to the same object, do not
(necessarily) express the same cognitive content.
Furthermore, if Tom is my uncle, one can not infer (with
a possible exception to be mentioned later) that if S
knows that Tom is getting married, he thereby knows that
my uncle is getting married. The content of a cognitive
state, and hence the cognitive state itself, depends
(for its identity) on something beyond the extension or
reference of the terms we use to express the content. I
shall say, therefore, that a description of a cognitive
state, is non-extensional.'

F. I. Dretske (1980)
The Intentionality of Cognitive States
Midwest Studies in Philosophy 5,281-294

For the discipline of psychology, the above logical analyses can be
taken either as a vindication of 20th century behaviourism/physicalism
(Quine 1960,1990,1992) or as a knockout blow to 20th century
'Cognitivism' and psychologism (methodological solipsism).

'One may accept the Brentano thesis as showing the
indispensability of intentional idioms and the
importance of an autonomous science of intention, or as
showing the baselessness of intentional idioms and the
emptiness of a science of intention. My attitude, unlike
Brentano's, is the second. To accept intentional usage
at face value is, we saw, to postulate translation
relations as somehow objectively valid though
indeterminate in principle relative to the totality of
speech dispositions. Such postulation promises little
gain in scientific insight if there is no better ground
for it than that the supposed translation relations are
presupposed by the vernacular of semantics and
intention.'

W. V. O. Quine
The Double Standard
Flight from Intension
Word and Object (1960), p218-221

In response to these mounting problems, Jerry Fodor published an
influential paper in 1980 entitled 'Methodological Solipsism
Considered as a Research Strategy for Cognitive Psychology'. In that
paper he proposed that Cognitive Psychology adopt a stance whereby it
restricted itself to the explication of the ways that subjects make
sense of the world from their 'own particular point of view'. This was
to be contrasted with the objectives of 'Naturalistic Psychology' or
'Evidential Behaviourism'.

Methodological Solipsism, as opposed to Methodological Behaviourism,
takes 'cognitive processes', mental contents (meanings/propositions)
or 'propositional attitudes' of folk/commonsense psychology at face
value. It accepts that there is a 'Language of Thought' (Fodor 1975),
that there is a universal 'mentalese' which natural languages map
onto, and which express thoughts as 'propositions'. It examines the
apparent causal relations and processes of 'attribution' between these
processes and other psychological processes which have propositional
content. It accepts what is known as the 'formality condition', ie
that thinking is a purely formal, syntactic, computational affair
which therefore has no room for semantic notions such as truth or
falsehood. Such computational processes are therefore indifferent to
whether beliefs are about the world per se (can be said to have a
reference), or are just the views of the belief holder (ie may be
purely imaginary). Technically, this amounts to beliefs not being
subject to 'existential or universal quantification' (where
'existential' refers to the logical quantifier ∃, 'there exists at
least one', and 'universal' refers to ∀, 'for all').

Methodological Solipsism looks to the *relations* between beliefs 'de
dicto', which are opaque to the holder (he may believe that the
Morning Star is the planet Venus, but not believe that the Evening
Star is the planet Venus, and therefore believe different things of
the Morning and Evening Stars). Methodological Solipsism does not
concern itself with the transparency of beliefs, ie their referential,
or 'de re' status. Some further examples of what all this entails
might be helpful here, since the implications of Methodological
Solipsism are both subtle and far ranging. The critical notions in
what follows are 'transfer of training', 'generalisation decrement',
'inductive vs. deductive inference', and the distinction between
'heuristics' and 'algorithms'.
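The Morning Star example can be made concrete with a toy model (a
hypothetical sketch of my own, not anything from the literature):
beliefs are stored de dicto, as sentences under a mode of
presentation, so the co-reference of 'the Morning Star' and 'the
Evening Star' does not license substitution inside the belief context.

```python
# Toy model of referential opacity: beliefs held 'de dicto' (as
# sentences) versus queried 'de re' (via the referent). All names
# and data are illustrative.

# What our subject actually assents to (de dicto belief store):
beliefs = {
    "the Morning Star is the planet Venus",   # believed
}

# Extensional facts about reference, unknown to the subject:
denotation = {
    "the Morning Star": "Venus",
    "the Evening Star": "Venus",   # co-referential terms
}

def believes_de_dicto(sentence: str) -> bool:
    """Opaque: substituting co-referential terms changes the answer."""
    return sentence in beliefs

def believes_de_re(term: str, predicate: str) -> bool:
    """Transparent: quantifies through to the referent itself."""
    referent = denotation[term]
    return any(
        believes_de_dicto(f"{t} {predicate}")
        for t, r in denotation.items() if r == referent
    )

# De dicto, the two ascriptions come apart despite co-reference:
assert believes_de_dicto("the Morning Star is the planet Venus")
assert not believes_de_dicto("the Evening Star is the planet Venus")

# De re, of the object Venus, the belief holds under either name:
assert believes_de_re("the Evening Star", "is the planet Venus")
```

The de dicto query is the opaque reading Methodological Solipsism
confines itself to; the de re query is exactly the inference Leibniz's
Law licenses extensionally but belief contexts block.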

Here is how Fodor's paper was summarised in abstract:

'Explores the distinction between 2 doctrines, both of
which inform theory construction in much of modern
cognitive psychology: the representational theory of
mind and the computational theory of mind. According to
the former, propositional attitudes are viewed as
relations that organisms bear to mental representations.
According to the latter, mental processes have access
only to formal (nonsemantic) properties of the mental
representations over which they are defined. The
following claims are defended: (1) The traditional
dispute between rational and naturalistic psychology is
plausibly viewed as an argument about the status of the
computational theory of mind. (2) To accept the
formality condition is to endorse a version of
methodological solipsism. (3) The acceptance of some
such condition is warranted, at least for that part of
psychology that concerns itself with theories of the
mental causation of behavior. A glossary and several
commentaries are included.'

J A Fodor (1980)
Methodological solipsism considered as a research
strategy in cognitive psychology.
Massachusetts Inst of Technology
Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

Some of the commentaries, particularly those by Loar and Rey, clarify
what is, admittedly, a difficult but substantial view widely
held by graduate psychologists.

'If psychological explanation is a matter of describing
computational processes, then the references of our
thoughts do not matter to psychological explanation.
This is Fodor's main argument.....Notice that Fodor's
argument can be taken a step further. For not only are
the references of our thoughts not mentioned in
cognitive psychology; nothing that DETERMINES their
references, like Fregean senses, is mentioned
either....Neither reference nor reference-determining
sense have a place in the description of computational
processes.'

B. F. Loar
Ibid p.89

Not all of the commentaries were as formal, as the following
commentary from one of the UK's most eminent logicians makes clear:

'Fodor thinks that when we explain behaviour by mental
causes, these causes would be given "opaque"
descriptions "true in virtue of the way the agent
represents the objects of his wants (intentions,
beliefs, etc.) to HIMSELF" (his emphasis). But what an
agent intends may be widely different from the way he
represents the object of his intention to himself. A man
cannot shuck off the responsibility for killing another
man by just 'directing his intention' at the firing of a
gun:

"I press a trigger - Well, I'm blessed!
he's hit my bullet with his chest!"'

P. Geach
ibid p80

The Methodological Solipsist's stance is clearly at odds with what is
required to function effectively as an APPLIED Criminological
Psychologist if 'functional effectiveness' is taken to refer to
intervention in the behaviour of an inmate with reference to his
environment. Here's how Fodor contrasted Methodological Solipsism with
the naturalistic approach:

'..there's a tradition which argues that - epistemology
to one side - it is at best a strategic mistake to
attempt to develop a psychology which individuates
mental states without reference to their environmental
causes and effects...I have in mind the tradition which
includes the American Naturalists (notably Pierce and
Dewey), all the learning theorists, and such
contemporary representatives as Quine in philosophy and
Gibson in psychology. The recurrent theme here is that
psychology is a branch of biology, hence that one must
view the organism as embedded in a physical environment.
The psychologist's job is to trace those
organism/environment interactions which constitute its
behavior.'

J. Fodor (1980) ibid. p.64

Here is how Stich (1991) reviewed the debate ten years on:

'This argument was part of a larger project. Influenced
by Quine, I have long been suspicious about the
integrity and scientific utility of the commonsense
notions of meaning and intentional content. This is not,
of course, to deny that the intentional idioms of
ordinary discourse have their uses, nor that the uses
are important. But, like Quine, I view ordinary
intentional locutions as projective, context sensitive,
observer relative, and essentially dramatic. They are
not the sorts of locutions we should welcome in serious
scientific discourse. For those who share this Quinean
scepticism, the sudden flourishing of cognitive
psychology in the 1970s posed something of a problem. On
the account offered by Fodor and other observers, the
cognitive psychology of that period was exploiting both
the ontology and the explanatory strategy of commonsense
psychology. It proposed to explain cognition and certain
aspects of behavior by positing beliefs, desires, and
other psychological states with intentional content, and
by couching generalisations about the interactions among
those states in terms of their intentional content. If
this was right, then those of us who would banish talk
of content in scientific settings would be throwing out
the cognitive psychological baby with the intentional
bath water. On my view, however, this account of
cognitive psychology was seriously mistaken. The
cognitive psychology of the 1970s and early 1980s was
not positing contentful intentional states, nor was it
(adverting) to content in its generalisations. Rather, I
maintained, the cognitive psychology of the day was
"really a kind of logical syntax (only psychologized).
Moreover, it seemed to me that there were good reasons
why cognitive psychology not only did not but SHOULD not
traffic in intentional states. One of these reasons was
provided by the Autonomy argument.'

Stephen P. Stich (1991)
Narrow Content meets Fat Syntax
in MEANING IN MIND - Fodor And His Critics

and writing with others in 1991, even more dramatically:

'In the psychological literature there is no dearth of
models for human belief or memory that follow the lead
of commonsense psychology in supposing that
propositional modularity is true. Indeed, until the
emergence of connectionism, just about all psychological
models of propositional memory, except those urged by
behaviorists, were comfortably compatible with
propositional modularity. Typically, these models view a
subject's store of beliefs or memories as an
interconnected collection of functionally discrete,
semantically interpretable states that interact in
systematic ways. Some of these models represent
individual beliefs as sentence like structures - strings
of symbols that can be individually activated by their
transfer from long-term memory to the more limited
memory of a central processing unit. Other models
represent beliefs as a network of labelled nodes and
labelled links through which patterns of activation may
spread. Still other models represent beliefs as sets of
production rules. In all three sorts of models, it is
generally the case that for any given cognitive episode,
like performing a particular inference or answering a
question, some of the memory states will be actively
involved, and others will be dormant......

The thesis we have been defending in this essay is that
connectionist models of a certain sort are incompatible
with the propositional modularity embedded in
commonsense psychology. The connectionist models in
question are those that are offered as models at the
COGNITIVE level, and in which the encoding of
information is widely distributed and subsymbolic. In
such models, we have argued, there are no DISCRETE,
SEMANTICALLY INTERPRETABLE states that play a CAUSAL
ROLE in some cognitive episodes but not others. Thus
there is, in these models, nothing with which the
propositional attitudes of commonsense psychology can
plausibly be identified. If these models turn out to
offer the best accounts of human belief and memory, we
shall be confronting an ONTOLOGICALLY RADICAL theory
change - the sort of theory change that will sustain the
conclusion that propositional attitudes, like caloric
and phlogiston, do not exist.'

W. Ramsey, S. Stich and J. Garon (1991)
Connectionism, eliminativism, and the future of folk
psychology.

The implications here are that progress in applying psychology will be
impeded if psychologists persist in trying to talk about, or use
psychological (intensional) phenomena within a framework (evidential
behaviourism) which inherently resists quantification into such
terms. Without bound extensional predicates, we can not reliably use
the predicate calculus, and without the predicate (functional)
calculus we can not formulate lawful relationships, statistical or
determinate.

In the following pages, I hope to be able to explicate how dominant
the methodologically solipsistic approach is within psychological
research and practice, and how that work can only have a critically
negative impact on the practical work of the Applied Criminological
Psychologist. In the main, the following looks to the study of how
people spontaneously use socially conditioned (induced) intensional
heuristics, and how these are at odds with what we now know to be
formally optimal (valid) from the stance of the objective
(extensional) sciences. It argues that the primary objective of the
applied psychologist must be the extensional analysis of observations
of behaviour (Quine 1990) and that any intervention or advice must be
based exclusively on such data if what is provided is to be classed as
a professional service. To attempt to understand or describe behaviour
without reference to the environment within which it occurs, is, it is
argued, to only partially understand and describe behaviour at best, a
point made long ago by Brunswick and Tolman (1933). To do otherwise is
to treat self-assessment/report as a valid and reliable source of
behavioural data, whilst a substantial body of evidence from Cognitive
Psychology, some of which is reviewed in this paper, suggests such a
stance is a very fundamental error. Like 'folk physics', 'folk
psychology' has been documented and found wanting. The last section of
this paper outlines a technology for directly recording and
extensionally analysing inmate/regime interactions or relations,
thereby providing a practical direction to shape the work of Applied
Criminological Psychology.

The following pages cite some examples of research which looks at the
use of intensional heuristics from a methodological solipsistic
stance. The first looks at the degree to which intensional heuristics
can be trained, and is a development of work published by Nisbett and
Krantz (1983). The concept of response generalisation, ie the transfer
of training to new problems is the key issue in what follows. However,
as Nisbett and Wilson (1977) clearly pointed out, subjects' awareness
should not be given undue weight when assessing its efficacy; instead,
testing for change by differential placement in contexts which require
such skills should be the criterion.

'Ss were trained on the law of large numbers in a given
domain through the use of example problems. They were
then tested either on that domain or on another domain
either immediately or after a 2-wk delay. Strong domain
independence was found when testing was immediate. This
transfer of training was not due simply to Ss' ability
to draw direct analogies between problems in the trained
domain and in the untrained domain. After the 2-wk
delay, it was found that (1) there was no decline in
performance in the trained domain and (2) although there
was a significant decline in performance in the
untrained domain, performance was still better than for
control Ss. Memory measures suggest that the retention
of training effects is due to memory for the rule system
rather than to memory for the specific details of the
example problems, contrary to what would be expected if
Ss were using direct analogies to solve the test
problems.'

Fong G. T. & Nisbett R. E. (1991)
Immediate and delayed transfer of training effects in
statistical reasoning.
Journal of Experimental Psychology General; 1991 Mar Vol
120(1) 34-45
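The statistical rule at issue is easy to demonstrate directly (a
minimal simulation of my own, with invented parameters, not Fong and
Nisbett's materials): estimates from larger samples cluster more
tightly around the true value than estimates from small ones.

```python
# Law of large numbers by simulation (illustrative numbers only):
# sample means from larger samples vary less around the true mean.
import random
import statistics

random.seed(0)   # fixed seed so the run is reproducible
TRUE_MEAN = 0.7  # e.g. the long-run rate of some behaviour

def sample_mean(n: int) -> float:
    """Mean of n Bernoulli(TRUE_MEAN) observations."""
    return sum(random.random() < TRUE_MEAN for _ in range(n)) / n

# Spread of 200 repeated estimates at each sample size:
spread_small = statistics.stdev(sample_mean(10) for _ in range(200))
spread_large = statistics.stdev(sample_mean(1000) for _ in range(200))

print(f"sd of estimates, n=10:   {spread_small:.3f}")
print(f"sd of estimates, n=1000: {spread_large:.3f}")
assert spread_large < spread_small  # bigger samples, steadier estimates
```

This is the abstract rule the training problems were meant to induce;
the experimental question is whether subjects carry it to domains
unlike the training examples.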

Note that the authors report a decline in performance after the delay,
a point taken up and critically discussed by Ploger and Wilson (1991).
Upon reanalysing Fong and Nisbett's results, these authors
concluded:

'The data in this study suggest the following argument:
Most college students did not apply the LLN [Law of
Large Numbers] to problems in everyday life. When given
brief instruction on the LLN, the majority of college
students were able to remember that rule. This led to
some increase in performance on problems involving the
LLN. **Overall, most students could state the rule with
a high degree of accuracy, but failed to apply it
consistently. The vast majority of college students
could memorize a rule; some applied it to examples, but
most did not.**

Fong and Nisbett (1991) concluded their article with the
suggestion that "inferential rule training may be the
educational gift that keeps on giving" (p.44). It is
likely that their educational approach may be successful
for relatively straightforward problems that are in the
same general form as the training examples. We suspect,
however, that for more complex problems, rule training
might be less effective. **Students may remember the
rule, but fail to understand the relevant implications.
In such cases, students may accept the gift, but it will
not keep on giving.'**

D. Ploger and M. Wilson
J Experimental Psychology: General, 1991,120,2,213-214
(My emphasis)

This criticism is repeated by Reeves and Weisberg (1993):

'G. T. Fong and R. E. Nisbett claimed that human problem
solvers use abstract principles to accomplish transfer
to novel problems, based on findings that Ss were able
to apply the law of large numbers to problems from a
different domain from that in which they had been
trained. However, the abstract-rules position cannot
account for results from other studies of analogical
transfer that indicate that the content or domain of a
problem is important both for retrieving previously
learned analogs (e.g., K. J. Holyoak and K. Koh, 1987;
M. Keane, 1985, 1987; B. H. Ross, 1989) and for mapping
base analogs onto target problems (Ross, 1989). It also
cannot account for Fong and Nisbett's own findings that
different-domain but not same-domain transfer was
impaired after a 2-wk delay. It is proposed that the
content of problems is more important in problem solving
than supposed by Fong and Nisbett.'

L. M. Reeves and R. W. Weisberg
Abstract versus concrete information as the basis for
transfer in problem Solving: Comment on Fong and Nisbett
(1991).
Journal of Experimental Psychology General 1993 Mar
Vol122(1) 125-128

The above authors concluded their paper:

'Accordingly, we urge caution in development of an
abstract-rules approach in analogical problem solving at
the expense of domain or exemplar-specific information.
Theories in deductive reasoning have been developed that
give a more prominent role to problem content (e.g.
Evans, 198; Johnson-Laird, 1988; Johnson-Laird & Byrne,
1991) and thus better explain the available data; the
evidence suggests that problem solving theories should
follow this trend.'

Ibid p.127

The key issue is not whether students (or inmates) can learn
particular rules, or strategies of behaviour, since such behaviour
modification is quite fundamental to training any professional;
rather, **the issue is how well such rules are in fact applied outside
the specific training domain where they are learned**, which, writ
large, means the specialism within which they belong. This theme runs
throughout this paper in different guises. In some places the emphasis
is on 'similarity metrics', in others, 'synonymy', 'analyticity' and
'the opacity of the intensional'. Throughout, the emphasis is on
transfer of training and the fragmentation of all skill learning which
is fundamental to the rationale for the system of Sentence Management
which will be explicated and discussed in the latter parts of this
paper.

Fong et al. (1990), having reviewed the general neglect of base rate
information and overemphasis on case-specific information in parole
decision making, went on to train probation officers in the use of the
law of large numbers. This training increased probation officers' use
of base-rates when making predictions about recidivism, but this is a
specialist, context specific skill.

'Consider a probation officer who is reviewing an
offender's case and has two primary sources of
information at his disposal: The first is a report by
another officer who has known the offender for three
years; and the second is his own impressions of the
offender based on a half-hour interview. According to
the law of large numbers, the report would be considered
more important than the officer's own report owing to
its greater sample size. But research suggests that
people will tend to underemphasize the large sample
report and overemphasize the interview. Indeed, research
on probation decisions suggests that probation officers
are subject to exactly such a bias (Gottfredson and
Gottfredson; 1988; Lurigio, 1981)'

G. T. Fong, A. J. Lurigio & L. J. Stalans (1990)
Improving Probation Decisions Through Statistical Training
Criminal Justice and Behavior, 17, 3, 1990, 370-388
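The pull of case-specific impressions against base rates can be put in
numbers. The figures below are invented for illustration (they are not
from the probation studies cited); Bayes' theorem shows how little a
moderately diagnostic interview impression should move a prediction
away from the base rate.

```python
# Base rates vs case-specific impressions (all numbers hypothetical).
# The 'risky interview impression' is a moderately diagnostic cue;
# Bayes' theorem gives the warranted update from the base rate.

base_rate = 0.30          # P(recidivate) among comparable offenders
p_cue_given_recid = 0.60  # P(risky impression | will recidivate)
p_cue_given_not = 0.40    # P(risky impression | will not recidivate)

# P(recidivate | risky impression), by Bayes' theorem:
numerator = p_cue_given_recid * base_rate
posterior = numerator / (numerator + p_cue_given_not * (1 - base_rate))

print(f"base rate:               {base_rate:.2f}")
print(f"after 'risky' interview: {posterior:.2f}")
# 0.60*0.30 / (0.60*0.30 + 0.40*0.70) = 0.18 / 0.46 ≈ 0.39: the
# impression warrants only a modest shift, yet the research cited above
# suggests judges move much further, neglecting the base rate.
```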

However, it is important to evaluate the work of Nisbett and
colleagues in the context of their early work which is clearly in the
tradition of fallibility of 'intuitive' human judgment. Their work
illustrates the conditions under which formal discipline, or cognitive
skills can be effectively inculcated, and which classes of skills are
relatively highly resistant to training. Such training generalises
most effectively to apposite situations, many of which will be
professional contexts. A major thesis of this volume is that for
extensional skills to be put into effective practice, explicit
applications must be made salient to elicit and sustain the
application of such skills. Formal, logical skills are most likely to
be applied within contexts such as actuarial analysis, which comprise
the application of professional skills in information technology. Such
a system is outlined, with illustrative practical examples, as a
framework for applied behaviour science in the latter part of this volume.

Recently, Nisbett and colleagues (1992), in defending their stance
against the conventional view that there may in fact be little in the
way of formal rule learning, have suggested criteria for resolving the
question as to whether or not explicit rule following is fundamental
to reasoning, and if so, under what circumstances:

'A number of theoretical positions in psychology -
including variants of case-based reasoning, instance-
based analogy, and connectionist models - maintain that
abstract rules are not involved in human reasoning, or
at best play a minor role. Other views hold that the use
of abstract rules is a core aspect of human reasoning.
We propose eight criteria for determining whether or not
people use abstract rules in reasoning, and examine
evidence relevant to each criterion for several rule
systems. We argue that there is substantial evidence
that several different inferential rules, including
modus ponens, contractual rules, causal rules, and the
law of large numbers, are used in solving everyday
problems. We discuss the implications for various
theoretical positions and consider hybrid mechanisms
that combine aspects of instance and rule models.'

E. Smith , C Langston and R Nisbett (1992)
The Case for Rules in Reasoning, Cognitive Science 16, 1-40

Whilst the above, particularly the degree to which training must be
'taught for transfer', is clearly relevant to the training of
psychologists in the use of deductive and actuarial technology
(computing and statistics), it is also relevant to work in the domain
of cognitive skills, and, from the evidence that cognitive skills
should be treated no differently to any other behavioural skills, the
argument is relevant to any other skill training, whether part of
inmate programmes or staff training.

For instance, in some of the published studies (e.g. Porporino et al
1991), pre to post course changes (difference scores) in cognitive
skills have been presented as evidence for the efficacy of such
programmes in conjunction with the more critical (albeit to date,
quantitatively less impressive) measures of changes in reconviction
rate. Clearly one must ask whether one is primarily concerned to bring
about a change in cognitive behaviour, and/or a change in other
behaviours. In the transfer of training and reasoning studies by
Nisbett and colleagues, the issues are acknowledged to be highly
dependent on the types of heuristics being induced, and the
conventional position (which is the one represented in this volume)
is, as pointed out above, still contentious. The issue is one of
*generalisation* of skills to novel tasks or situations, ie situations
other than the training tasks. To what extent does generalisation in
practice occur, if at all? These issues, and the research in
experimental psychology (outside the relatively small area of
criminological psychology), are cited here as clear empirical
illustrations of *the opacity of the intensional*. The conventional
view, as Fong and Nisbett (1991) clearly state, is that:

'A great many scholars today are solidly in the
concrete, empirical, domain-specific camp established by
Thorndike and Woodworth (1901), arguing that people
reason without the aid of abstract inferential rules
that are independent of the content domain.'

Thus, whilst Nisbett and colleagues have provided some evidence for
the induction of (statistical) heuristics, they acknowledge that there
is a problem attempting to teach formal rules (such as those of the
predicate calculus) which are not 'intuitively obvious'. This issue is
therefore at the heart of the question of resourcing
specific, ie special, inmate programmes which are 'cognitively' based,
and which adhere to the conventional 'formal discipline' notion. Such
investment must be compared with investment in the rest of inmate
activities which can be used to monitor and shape behaviour under the
relatively natural conditions of the prison regime. There, the natural
demands of the activities are focal, and the 'programme' element rests
in apposite allocation and clear description of what the activity area
requires/offers in terms of behavioural skills.

There is a logical possibility that in restricting the subject matter
of psychology, and thereby the deployment of psychologists, to what
can only be analysed and managed from a Methodological Solipsistic
(cognitive) perspective, one will render some very significant results
of research in psychology irrelevant to applied *behaviour* science
and technology, unless taken as a vindication of the stance that
behaviour is essentially context specific. As explicated above,
intensions are not, in principle, amenable to quantitative analysis.
They are, in all likelihood, only domain or context specific. A few
further examples should make these points clearer.

--
David Longley

Fritz Lehmann

Jun 30, 1995, 3:00:00 AM
In comp.ai, David Longley's quote from Quine refers to the place of
"predicate calculus" as a theoretical framework. That's perfectly fine -- this
thread is about people in AI who, for no particular reason, keep throwing in
the words "First-Order" or "FOL". Standard predicate calculus is not
restricted to First-Order quantification; take a look at Principia
Mathematica. The only reason these people (99% of them) keep repeating
"First-Order-Logic" when they actually mean predicate logic is that
"FOL" is a familiar phrase they heard in Logic 101, and have kept hearing
since--within AI. They do not pause to _think_ about why they are throwing in
the restriction. (Most Logic 101 classes don't devote time to the higher order
part of predicate logic.)

Now, this does not apply to everybody; there are a (very) few people
in AI who actually understand and believe in First-Orderism and are willing
to accept its bizarre consequences, like the inability to define
connectedness, finiteness, equality, transitive closure, and a host of
other useful things, plus the necessary existence of weird infinite
species of non-standard integers, Robinsonian infinitesimals, etc. They adopt
the view of a once-dominant faction of logicians who are devoted to First-
Orderism. In order to get a passably expressive language, the latter have
tried to capture (inherently second-order) Peano Arithmetic by devising
elaborate kludges like axiom schemata and Henkin semantics to create what
is known as "weakly" higher-order logic in imitation of the real thing.
Their primary motivation (I surmise) was that they liked certain formal
properties like completeness, compactness, Lowenheim-Skolem, and 0-1
properties. I have never heard why any of these properties have the
slightest relevance in practical AI work, and I have not seen one instance
of a popular demand for, say, non-standard integers and the like. Some
of those formal properties are worse than irrelevant; as the logician
Peter Simons succinctly put it to me, "Compactness is bad."

The vogue for First-Orderism is waning within logic and model theory,
though there's still a cadre of die-hards; most of the major work in
model theory in the last 15 years has dealt with properties of
(sometimes otherwise restricted forms of) higher-order logic. Maybe in
about 10 years time this news will have diffused into AI.

For example, the new AI KIF (Knowledge Interchange Format) proposed logic
standard has been restricted to First-Order, for reasons unknown, although
there's hope that it will be extended later to (weakly? strongly?) higher-order
logic. Meanwhile this ostensibly "universal" language is unable to define
the predicate "connected" even for finite structures (due to the Fraisse
and Ehrenfeucht results).
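The predicate Lehmann mentions is trivial to compute even though no
single first-order formula defines it over all finite structures:
reachability (connectedness, in the undirected case) is the
reflexive-transitive closure of the edge relation, which needs a
fixpoint or second-order quantification. A sketch, with made-up graph
data:

```python
# 'connected(x, y)' as the reflexive-transitive closure of an edge
# relation -- computable by iterating to a fixpoint, though not
# definable by any single first-order formula over all finite graphs.
# The graph is invented for illustration.

edges = {("a", "b"), ("b", "c"), ("d", "e")}
nodes = {n for e in edges for n in e}

def transitive_closure(pairs):
    """Add (x, z) whenever (x, y) and (y, z) are present, to a fixpoint."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# Reflexive-transitive closure = reachability:
reach = transitive_closure(edges) | {(n, n) for n in nodes}

assert ("a", "c") in reach      # a reaches c via b
assert ("a", "e") not in reach  # a and e lie in different components
```

The `while` loop is the essential ingredient: it is precisely this
unbounded iteration that a single first-order sentence cannot express.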

Yours truly, Fritz Lehmann


Amitabha Mukerjee

Jun 30, 1995, 3:00:00 AM
A.Sl...@cs.bham.ac.uk (Aaron Sloman) wrote:
>pjac...@nyc.pipeline.com (Philip Jackson) writes:
>
>> ... For example, "most human mental processes"

>> includes inventing and using new languages and notations, reasoning with
>> metaphors / analogies, learning, dreaming, .... many processes for which we
>> have barely scratched the surface.
>
>I agree with these last comments. I've been puzzling for many years
>over our ability to invent and use all sorts of specific forms of
>representation suited to particular domains, and especially our use
>of spatial forms of representation, which seem to make use of
>aspects of the human brain that were developed specifically for
>spatial perception, control, and reasoning. (However, I don't think
>we know much about how this works.)

Human dependence on spatial themes and metaphors has been attracting
increasing attention. At AAAI-94, the workshops on Spatial and Temporal
Reasoning, as well as the one on NL and Vision, attracted considerable
debate on this issue. While it was clear that temporal reasoning is
closely related to our abilities to reason about space, it also appears
that a lot of ordinary reasoning follows spatial cues, e.g.
"x follows from y" (causal inference), "3 before 4" (ordering of all
types), inclusion, set overlap, clustering etc. all
depend on our extremely well-developed capability for dealing
with space.

This spatial metaphor is now being used in developing a large number of
areas - including diagrammatic reasoning, GIS query interfaces, and,
most interestingly, hypertext reasoning (see Andreas Dieberger's
page on spatial metaphors in hypertext at
http://www.gatech.edu/lcc/idt/Faculty/andreas_dieberger/ECHT94.WS.toc.html),
and of course in areas that traditionally involve perceptuo-motor
problems such as vision or robotics. At this year's IJCAI there are two
workshops, one looking at spatial relations from an AI and cognitive context
(http://agora.leeds.ac.uk/spacenet/spatialexpressions.html) and one on
spatial and temporal reasoning, and also a tutorial on spatial reasoning
that aims to provide a review (http://www.cs.albany.edu/~amit/tutijcai.html).
>
>Descartes' discovery that Euclidean geometry could be mapped onto
>arithmetic and algebra was a significant discovery of a deep
>relationship between TWO forms of representation rather than a
>discovery of a theorem within ONE form (i.e. logic).


>
>Of course, it remains an open question whether logic in general or
>first order logic in particular is simply being used as an
>implementation medium in which all these other things can be
>implemented (a suggestion first made, I think, by Pat Hayes in
>responding to my 1971 critique of Logicist AI presented at
>IJCAI-71).

> ...
>I have the impression that most logicians don't bother to answer the
>criticisms of logic-based AI because they can't believe that there's
>any other form of reasoning that can be valid or sound, apart from
>logical reasoning.

The debate over logic vs. non-logic (whatever that would be) has of course
been getting a boost from allegedly non-symbolic systems like reactive
systems (Brooks) and distributed agent architectures. Even these, though,
rely on logical/symbolic underpinnings in their current implementations,
and I am sure there is some merit to the claim that at some level spatial
attributes are converted into symbolic ones, but perhaps some (maybe most?)
reasoning takes place BEFORE this. Recent in vivo analysis of brain signals
by Kosslyn and others (see his response to Glasgow in the Computational
Intelligence special issue on imagery, v.9(4), 1993) seems to be pointing
to some of the mechanisms used in such pre-symbolic processing.

---
Amitabha Mukerjee (e-mail: am...@cs.albany.edu)
Dept of Comp Sci, Texas A&M Univ, College Stn, TX 77843
======================================================================


Bob Riemenschneider

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <804504...@longley.demon.co.uk> David Longley <Da...@longley.demon.co.uk> writes:
> In article <3svtt8$j...@ecl.wustl.edu> fr...@rodin.wustl.edu "Fritz
> Lehmann" writes:
> > In article <804440...@longley.demon.co.uk>, David Longley
> > <Da...@longley.demon.co.uk> wrote:
> > >If 'connectedness' can not be defined within First Order Logic, ...

> > It seems wrong to suppose that "science" doesn't need connectedness;
> > its use in electrical circuits, linkages, object mechanics, etc. ...

> Could you elaborate on how the examples you cite require the connectedness
> predicate (relation)?

Since interest in connectedness continues, I thought it was high time
someone noted that, although it is not definable in *finitary* first-order
logic, it is straightforwardly definable in $L_{\omega_1 \omega}$ (excuse
the TeX-ism!): if R(x,y) means that x is directly related to y by R, then
A_n, where

A_n = (E z_1)(E z_2) ... (E z_{n-1})
[ R(x, z_1) & R(z_1, z_2) & ... & R(z_{n-1}, y) ]

means that there is a path of length n from x to y, and

\/ { A_n : n in N }

means that there is a path from x to y, and the universal closure of that
disjunction means that the universe is connected. So, while *some*
mechanism not available in standard first-order logic is required for
defining connectedness, there are alternatives to higher-order
quantification which some might find less philosophically objectionable.
(Someone other than Quine, that is, whose objection to higher-order
logics as set theory in sheep's clothing applies equally to infinitary
logics. Someone who thinks we humans have a pretty good grasp of
recursive countable disjunctions like the one above.)
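On any *finite* structure, the infinitary disjunction above collapses to an
ordinary reachability test, which a program can carry out directly. Here is a
minimal Python sketch (the nodes and relation below are invented for
illustration, and R is treated symmetrically so that "connected" has its usual
undirected-graph sense):

```python
def connected(nodes, R):
    # R is a set of (x, y) pairs read as "x is directly related to y".
    # We compute the set of nodes reachable from an arbitrary start node
    # -- i.e. the union over n of the witnesses of A_n -- and check that
    # it exhausts the universe, mirroring \/ { A_n : n in N }.
    if not nodes:
        return True
    start = next(iter(nodes))
    seen = {start}
    frontier = [start]
    while frontier:
        x = frontier.pop()
        for (a, b) in R:
            if a == x and b not in seen:
                seen.add(b)
                frontier.append(b)
            elif b == x and a not in seen:
                seen.add(a)
                frontier.append(a)
    return seen == set(nodes)

print(connected({1, 2, 3}, {(1, 2), (2, 3)}))  # True
print(connected({1, 2, 3}, {(1, 2)}))          # False: node 3 is isolated
```

The inexpressibility results mentioned earlier in the thread say precisely
that no single finitary first-order sentence captures this test uniformly
over all finite structures.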

-- rar

Jorn Barger

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
In article <3ssmol$p...@pipe1.nyc.pipeline.com>,

Philip Jackson <pjac...@nyc.pipeline.com> wrote:
>In recent posts, if I have read them correctly, Barger has accused
>Ginsberg of "intellectual dishonesty" (not the kind of charge that
>should be tossed about lightly)

Thank you, Phil, for bringing this up. I agree it's a charge that
needs to be backed up with evidence, and I'm grateful for the
opportunity to post this. Here's what I found by going back through
Matt's postings since June 21 -- a perfectly adequate sampling of what
I was thinking of:

- Blithe unconcern for the truth of personal attacks:


>Seems pretty arrogant for you to expect the whole scientific community

>to learn your language instead of your learning theirs. [...]
>What separates you (and the Marengans) from the scientific community
>is not raw wit or animal cunning, but the fact that you seem completely
>uninterested in anyone else's results. [...]


>It's not supposed to be that one guy talks and never bothers to listen.

>if you don't read anything, no one in the scientific community is going
>to listen to you. [You have noticed that no one seems to be listening,
>haven't you?]
>Think of a really smart person you know. (Jorn, think of someone
>else.)

- Rationalizing prejudices:
>That's one of the lessons of AI today. There are *lots* of
>good ideas. But the only way to tell what is worth pursuing is to
>code it up and see what happens. AI is awash in great ideas. What
>it needs is more substance. [...]


>Your approach is one of many. Way too many to figure out which will
>work.

- Rationalizing obscurantism:
>But by making the material available, I've done *my* job of
>disseminating scientific results. [...]
>When Kip explains general relativity to a lay audience, he is
>exploiting the fact that eighty years has passed between the
>introduction of the theory and the time of his explanation. It takes a
>long time for scientific progress to be translatable into such eloquent
>terms, and the translation is dependent on a lot of hard work. [...]
>I just don't understand the properties of the implementation well
>enough to translate them into lay terms.


j
jo...@mcs.com

Bob Riemenschneider

unread,
Jun 30, 1995, 3:00:00 AM6/30/95
to
From: r...@birch.csl.sri.com (Bob Riemenschneider)

-- rar
---

David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t3020$a...@percy.cs.bham.ac.uk>
A.Sl...@cs.bham.ac.uk "Aaron Sloman" writes:

> Dear David (David Longley <Da...@longley.demon.co.uk> )
>
> I find it strange that you object so strongly to something I've
> written which is almost identical with a quotation you have
> circulated several times.
>
> you wrote:
> (dl)


> > > > What I would find very helpful would be some development of the
> > > > Quinean thesis that deductive inference fails within intensional
> > > > contexts.
>

> I wrote
> (as)


> > > This is totally trivial.
> > >
> > > If an intensional context is DEFINED as one in which replacement of
> > > a term by one that is referentially equivalent does not preserve truth
> > > value then it follows trivially that the particular sort of
> > > deductive logic that allows substitution of referential equivalents
> > > will fail in intensional contexts. So what?
>

> you objected
> (dl)


> > Of course not - IFF that's what is being claimed. But Chisholm (1957)
> > and Quine are not claiming this - rather they are suggesting that this
> > is what seems to characterise intensional idioms.
>

> I am not sure what you think "this" refers to. But you have several
> times included this quote from Place


>
> > 'The first-order predicate calculus is an extensional
> > logic in which Leibniz's Law is taken as an axiomatic
> > principle. Such a logic cannot admit 'intensional' or
> > 'referentially opaque' predicates whose defining

> ^^^^^^^^^^^^^^^^


> > characteristic is that they flout that principle.'

> ^^^^^^^^^^^^^^


> >
> > U. T. Place (1987)
> > Skinner Re-Skinned P. 244
> > In B.F. Skinner Consensus and Controversy
> > Eds. S. Modgil & C. Modgil
>

> Place is saying exactly what I said, namely that the defining
> characteristic of intensional predicates is that they flout the
> principle which is part of the defining characteristic of
> extensional logic.
>
> I.e. the whole thing is just a matter of definition. It is therefore
> totally trivial that extensional logics are not applicable to
> intensional (referentially opaque) predicates, or contexts.

This point warrants a reply by itself.

IFF it were a matter of an analytic statement, Quine would have
made a major gaffe, since all of his work has essentially been
based on empiricism without that dogma (incidentally, I'm using
the word 'essentially' in a loose way here).

IFF Place meant 'define' in the sense that you are, I'm sure we'd
all agree that the extensional/intensional distinction is trivial
- but that's not what is being said. What is being said is that
the intensional idioms, treated as an empirically collected
class, share a common characteristic, i.e. failure of
substitutivity 'salva veritate'. This is not an a priori
definition (cf. 'Two Dogmas of Empiricism' on analyticity). This
is fundamental to understanding any of Quine, but particularly
his conception of psychology as naturalised epistemology.
--
David Longley

Jim Balter

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
>> times included this quote from Place
>>
>> > 'The first-order predicate calculus is an extensional
>> > logic in which Leibniz's Law is taken as an axiomatic
>> > principle. Such a logic cannot admit 'intensional' or
>> > 'referentially opaque' predicates whose defining
>> ^^^^^^^^^^^^^^^^

>> > characteristic is that they flout that principle.'
>> ^^^^^^^^^^^^^^

>
>IFF Place meant define in the sense that you are, I'm sure we'd
>all agree that the extensional/intensional distinction is trivial
>- but that's not what is being said. What is being said is that
>the intensional idioms, treated as an empirically collected
>class, share a common characteristic, ie failure of
>substitutivity 'salva veritate'. This is not an a priori
>definition ('Two Dogmas of Empiricism' on analyticity). This is
>fundamental to understanding any of Quine, but particularly his
>conception of psychology as naturalised epistemology.

Perhaps it would help if you defined the terms "'intensional' or
'referentially opaque' predicates" as used by Place. If referentially
opaque predicates are not a priori defined by the fact that they
are not referentially transparent, then how in the world are they
defined? To say that if that's what Place meant there would be no
disagreement or that if that's what Quine meant then he committed a
gaffe begs the question.

And please avoid such self-serving ad hominem claims as that Aaron
Sloman has ignored what you've written. You certainly won't make any
points with those readers familiar with Aaron's thoughtfulness and
civility.




David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t374e$a...@mars.earthlink.net> jqb "Jim Balter" writes:
>
> Perhaps it would help if you defined the terms "'intensional' or
> 'referentially opaque' predicates" as used by Place. If referentially
> opaque predicates are not a priori defined by the fact that they
> are not referentially transparent, then how in the world are they
> defined? To say that if that's what Place meant there would be no
> disagreement or that if that's what Quine meant then he committed a
> gaffe begs the question.
>

OK - here is part of it again - I take it as accepted that an RDBMS is
an application of the predicate calculus.


FRAGMENTS OF BEHAVIOUR:
EXTRACT 12: FROM 'A System Specification for PROfiling BEhaviour'

The bulk of the material in this section is designed to illustrate the
constraints of the technology available. Paradoxically, the major
constraint is that the PROBE system can not handle
psychological/intensional 'information' (and it is moot whether such
idioms can in fact reduce uncertainty given that such idioms are
locked within an intensional circle). Hopefully, by the end of this
section it should be clear why this is the case, and why as a
consequence, a clear distinction must be made between psychology and
behaviour science.

'We take as the extension of a predicator the class of those
individuals to which it applies and, its intension, the
property which it expresses; this is in accordance with
customary conceptions. As the extension of a sentence we take
its truth-value (truth or falsity); as its intension, the
proposition expressed by it. Finally, the extension of an
individual expression is the individual to which it refers;
its intension is a concept of a new kind expressed by it,
which we call an individual concept. These conceptions of
extensions and intensions are justified by their
fruitfulness..'

R. Carnap (1947;1956)
The Method of Extension and Intension
Meaning and Necessity p.1

'Intensional and extensional ontologies are like oil and
water.'

W.V.O. Quine (1953)
From a Logical Point of View p.157

'The keynote of the mental is not the mind; it is the
content-clause syntax, the idiom 'that p'.'

W.V.O Quine (1990)
Intension
The Pursuit of Truth p.71

Intension
The Pursuit of Truth p.72-73

Each time we tell somebody what somebody else said, what somebody
thinks, or believes, we tend to say 'he said that...', 'he thinks
that...', 'he believes that...'. Yet it is notoriously rare that we
actually state verbatim what it was that they actually 'said',
'thought', 'believed', etc. We tend to behave as if it were legitimate
to substitute what we consider to be equivalents, translations, or
synonyms in such contexts. In fact, this all too easily leads to
rhetoric and sophistry, because it is impossible to substitute within
intensional contexts 'salva veritate'; one is constrained to direct
quotation. Once this is appreciated, much of what passes as mental
life, or psychology, is revealed for what it is - creative fiction.
What may be even more surprising is that such indeterminacy applies
also to first-person utterances, psychological idioms such as
'remembers that'. The price we pay for going beyond the information
given is often very high - namely truth-functionality, or in short,
truth itself.

Technically, PROBE is an application of relational theory to behaviour
profiling and management within the constraints of classical predicate
logic as a universal theoretical framework (Quine 1956;1992).
Descriptive, or declarative, observational terms (predicates) are
recorded along with the appropriate values (truth-functions) of those
predicates for particular individuals in a relational structure known
as a Data Base Management System (DBMS). For instance, the predicate
'Sentence_Length', applied to an individual inmate's national number,
returns an integer value representing the number of years (with a
second predicate giving the remaining sub-part of a year in months)
that the inmate was sentenced to upon conviction. The predicate
'Primary_Index_Offence',
when applied to an inmate's national number, returns a value measured
on a nominal scale representing one of a set of fixed judicial
offences.
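As a sketch of the recorded-predicate idea, each predicate can be viewed as a
stored function from individuals to values. The national numbers and values
below are invented, not real data:

```python
# Hypothetical stored extensions of the two predicates described above;
# keys are made-up national numbers and all values are fabricated.
SENTENCE_LENGTH = {"NN1001": (4, 6), "NN1002": (2, 0)}   # (years, months)
PRIMARY_INDEX_OFFENCE = {"NN1001": "burglary", "NN1002": "fraud"}

def Sentence_Length(national_number):
    # Returns the (years, months) pair recorded for the individual.
    return SENTENCE_LENGTH[national_number]

def Primary_Index_Offence(national_number):
    # Returns a value on a fixed nominal scale of judicial offences.
    return PRIMARY_INDEX_OFFENCE[national_number]

print(Sentence_Length("NN1001"))        # (4, 6)
print(Primary_Index_Offence("NN1002"))  # fraud
```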

Such a DBMS is not just an electronic data filing and retrieval system,
although, depending on how it is used, it can have many features in
common with such administrative systems. Rather, a DBMS is defined by
its use, and is a formal system based on theoretical work in
mathematical logic. The power of such systems lies not in their
ability to store data values for subsequent retrieval and listing, but
in the powerful predicate calculus based query languages which enable
users to practically implement Frege's (1879) programme of making
deductive inferences about individuals using the universal language of
first order logic. Today, Leibniz's and Frege's representatives are
just as likely to be professors and lecturers in Computer Science as
they are professors or lecturers in Logic:

'The predicate calculus can be used to form queries about
stored data. Basically we can store the extension of a
predicate as a set of data values in a file on a
computer...The facts in the database can be written down
directly as a series of instances of predicates applied to
constants..

..We have already seen the power of the Predicate Calculus
as a query language, using examples of everyday queries.
Following a proposal by E.F. Codd (1970), various specialised
forms of it have been developed to work on relations; such a
language is called a relational calculus. The QUEL language
was one of the earliest implementations of Codd's proposals
and its style is very close to his proposed predicate
calculus notation.'

P. Gray (1984)
Logic, Algebra and Databases
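The querying style Gray describes can be mimicked in miniature: store each
predicate's extension as a set of fact tuples and phrase a query as a
comprehension over those facts. A sketch in Python (the relation names and
facts are invented for illustration):

```python
# Stored extensions: each predicate is a set of fact tuples, i.e. the
# "facts in the database written down as instances of predicates".
OFFENCE = {("NN1001", "burglary"), ("NN1002", "fraud"),
           ("NN1003", "burglary")}
SENTENCE_YEARS = {("NN1001", 4), ("NN1002", 2), ("NN1003", 6)}

# Predicate-calculus style query:
#   { x | exists y . Offence(x, 'burglary') & Sentence_Years(x, y) & y > 3 }
answer = {x for (x, o) in OFFENCE if o == "burglary"
            for (x2, y) in SENTENCE_YEARS if x2 == x and y > 3}

print(sorted(answer))  # ['NN1001', 'NN1003']
```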

They are, alas, unlikely to be psychologists (narrowly construed),
who, literally defined, are methodologically solipsists or
intensionalists (Fodor 1980). The methodological solipsist is
concerned with how individuals (inmates) construe the world, 'from
their own point of view' (Fodor 1980), despite the many protestations
from mathematical logicians that intension can not determine extension
(meaning is not 'in the head'). As methodological solipsists,
psychologists are not concerned with truth-conditions, or the
extensionality of statements, observational or otherwise. For example,
it is often argued that, in the interests of effective treatment, the
dialogue between counsellor and 'client' is confidential. A counsellor
will not ascribe truth-functions to the sentences emitted by an
inmate, and may in fact reinforce the stream of verbal behaviour,
thereby eliciting more behaviour. The rationale here may be that such
a strategy elicits samples of verbal behaviour, the phenomenology of
which may be critical to diagnostic assessment. However, what if the
content, or meaning of what people say, is largely irrelevant to how
they behave or have behaved (Nisbett and Wilson 1977):

'..there may be little or no direct introspective access to
higher order cognitive processes. Ss are sometimes (a)
unaware of the existence of a stimulus that importantly
influenced a response, (b) unaware of the existence of the
response, and (c) unaware that the stimulus has affected the
response. It is proposed that when people attempt to report
on their cognitive processes, that is, on the processes
mediating the effects of a stimulus on a response, they do
not do so on the basis of any true introspection. Instead,
their reports are based on a priori, implicit causal
theories, or judgments about the extent to which a particular
stimulus is a plausible cause of a given response. This
suggests that though people may not be able to observe
directly their cognitive processes, they will sometimes be
able to report accurately about them. Accurate reports will
occur when influential stimuli are salient and are plausible
causes of the responses they produce, and will not occur when
stimuli are not salient or are not plausible causes.'

R. Nisbett & T. Wilson (1977)
Telling More Than We Can Know: Public Reports on Private
Processes
Psychological-Review; 1977 Mar Vol 84(3) 231-259

Offering confidentiality may of course be an effective strategy to
limit the indeterminacy of intensional idioms (see Postman 1959 on
'Serial Reproduction'). However, given recent work in logical analysis
(Kripke 1971, 1972; Donnellan 1966; Putnam 1973; Schwartz 1977) on
Direct Reference, which suggests the social determination of
extension, i.e. that what is inside the head has nothing to do with
the establishment of meaning, confidentiality in a correctional
context may prove to be self-defeating. This is not to say that individuals
should not be handled tactfully of course, just that an uncritical
promise of confidentiality is likely to limit therapeutic efficacy if
social reinforcement of behaviour is, as the above and Skinner's work
suggest, paramount. It should be more widely appreciated perhaps that
there may be nothing behind what is said apart from the form of the
verbal behaviour itself:

'Linguistic competence is not the ability to articulate
antecedently determinate ideas, intensions, or meanings; nor
is it the ability to reproduce the world in words. We have no
such abilities. It consists, rather, in mastery of a complex
social practice, an acquired capacity to conform to the mores
of a linguistic community. It is neither more nor less than
good linguistic behavior.'

Catherine Z. Elgin (1990)
Facts That Don't Matter
Meaning and method - Essays in Honor of Hilary Putnam

For an example of the futility of reliance on propositional attitudes
within empirical psychological research, one can do no better than
cite the review of Wicker (1969) on attitude-behaviour consistency:

'Insko and Schopler (1967) have suggested the possibility
that much evidence showing a close relationship between
verbal and overt behavioral responses has been obtained but
never published because investigators and journal editors
considered such findings 'unexciting' and 'not worthy of
publication'. If such data exist, their publication is needed
to correct the impression suggested by the present review
that attitude-behavior inconsistency is the more common
phenomenon.

The presently available evidence on attitude-behavior
relationships does not seem to contradict conclusions by two
early researchers in the area: LaPiere wrote in 1934:

'The questionnaire is cheap, easy, and mechanical. The study
of human behavior is time consuming, intellectually
fatiguing, and depends for its success upon the ability of
the investigator. The former method gives quantitative
results, the latter mainly qualitative. Quantitative
measurements are quantitatively accurate; qualitative
evaluations are always subject to the errors of human
judgment. Yet it would seem far more worthwhile to make a
shrewd guess regarding that which is essential than to
accurately measure that which is likely to prove quite
irrelevant' (LaPiere, 1934, p.237).

Corey, in 1937 wrote:

'It is impossible to say in advance of investigation whether
the lack of relationship reported here between attitude
questionnaire scores and overt behavior is generally true for
measures of verbal opinion. Were that the case, the value of
attitude scales and questionnaires would for most practical
purposes be extremely slight. It would avail a teacher very
little, for example, so to teach as to cause a change in
scores on a questionnaire measuring attitude toward communism
if these scores were in no way indicative of the behavior of
his pupils.

It is difficult to devise techniques whereby certain types of
behavior can be rather objectively estimated for the purpose
of comparison with verbal opinions. Such studies despite
their difficulty, would seem to be very much worthwhile. It
is conceivable that our attitude testing program has gone far
in the wrong direction. The available scales and techniques
are almost too neat. The ease with which so-called
attitudinal studies can be conducted is attractive but the
implications are equivocal' (Corey, 1937, p.279).

Wicker concluded his paper 'Attitudes v. Actions' (1969) with the following:

'The present review provides little evidence to support the
postulated existence of stable, underlying attitudes within
the individual which influence both his verbal expressions
and his actions. This suggests several implications for
social science researchers.

First, caution must be exercised to avoid making the claim
that a given study or set of studies of verbal attitudes,
however well done, is socially significant merely because the
attitude objects employed are socially significant. Most
socially significant questions involve overt behavior, rather
than people's feelings, and the assumption that feelings are
directly translated into actions has not been demonstrated.
Casual examination of recent numbers of this and other like
journals suggests that such caution has rarely been shown.

Second, research is needed on various postulated sources of
influence on overt behavior. Once these variables are
operationalised, their contribution and the contribution of
attitudes to the variance of overt behavior can be determined.
Such research may lead to the identification of factors or
kinds of factors which are consistently better predictors
of overt behavior than attitudes.

Finally, it is essential that researchers specify their
conceptions of attitudes. Some may be interested only in
verbal responses to attitude scales, in which case the
question of attitude-behavior relationships is not
particularly relevant or important. However, researchers who
believe that assessing attitudes is an easy way to study
overt social behaviors should provide evidence that their
verbal measures correspond to relevant behaviors. Should
consistency not be demonstrated, the alternatives would seem
to be to acknowledge that one's research deals only with
verbal behavior or to abandon the attitude concept in favour
of directly studying overt behavior.'

Allan W Wicker (1969)
Attitudes v. Actions: The relationship between Verbal
and Overt Responses to Attitude Objects.

The only study to shed further light on the conclusions of Wicker was
that of Ajzen & Fishbein (1977), which basically adds the caveat
that the correspondence between what people say they do and what they
actually do is improved if the questions become so specific and so
constrained that to expect otherwise would be absurd. Alas, most of
the questions we ask of inmates are not of such a nature:

'Examines research on the relation between attitude and
behavior in light of the correspondence between attitudinal
and behavioral entities. Such entities are defined by their
target, action, context, and time elements. A review of
available empirical research supports the contention that
strong attitude-behavior relations are obtained only under
high correspondence between at least the target and action
elements of the attitudinal and behavioral entities. This
conclusion is compared with the rather pessimistic
assessment of the utility of the attitude concept found in
much contemporary social psychological literature.'

I. Ajzen & M. Fishbein (1977)
Attitude-behavior relations: A theoretical analysis and
review of empirical research.
Psychological Bulletin 1977 Sep Vol 84(5) 888-918

Finally, a relatively recent study:

'Assessed the effects of 2 kinds of introspection (focusing
on attitudes and analyzing reasons for feelings) in 2
experiments with 191 undergraduates. In Exp I, Ss at a
college dining hall analyzed the reasons for their attitudes
toward different types of beverages, focused on their
attitudes, or received no instructions to introspect. The
attitude measure was reported liking for the beverages, while
the behavioral measure was the amount of each beverage Ss
drank. Exp II included the same conditions in a laboratory
study in which the attitude object was a set of 5 puzzles.
Reported interest in the puzzles and the proportion of
puzzles Ss attempted were assessed. In both studies,
analyzing reasons reduced attitude-behavior consistency
relative to the correlations in the focusing and control
conditions.'

Wilson T D. & Dunn D S.
Effects of introspection on attitude-behavior consistency:
Analyzing reasons versus focusing on feelings.
Journal of Experimental Social Psychology 1986 May Vol 22(3)
249-263

Recall Nisbett and Wilson's (1977) review (Volume 1) of the relation
between self-report and the actual controlling contingencies.
Throughout all of these studies, it is well to bear in mind the
austere logician's statement that:

'..the meaning of words are abstractions from the truth
conditions of sentences that contain them.'

W.V.O. Quine (1981)
The Five Milestones of Empiricism: Theories and Things p.69

If such a line is accepted, intensionalist practices may serve no
practical purpose other than to distract from more fruitful processes
of measuring, recording and contracting behaviour. That is,
intensional practices may serve no more than to limit, through poor
socialization and private record keeping, what could be learned from
extensional analysis of relations between classes of behaviours (e.g.
frequencies of problem behaviour, and the joint frequencies of these
classes with other classes of behaviour such as age and index
offence). Intensional contexts can be identified as follows:

'Chisholm proposes three independently operating criteria for
Intentional sentences.

(1) A simple declarative sentence is Intentional if it uses
a substantival expression - a name or a description - in such
a way that neither the sentence nor its contradictory implies
either that there is or that there isn't anything to which
the substantival expression truly applies.

(2) Any noncompound sentence which contains a propositional
clause...is Intentional provided that neither the sentence
nor its contradictory implies either that the propositional
clause is true or that it is false.

(3) If A and B are two names or descriptions designating the
same thing or things, and sentence P differs from sentence Q
only in having A where Q has B, then sentences P and Q are
Intentional if the truth of one together with the truth that
A and B are co-designative does not imply the truth of the
other'

The going scheme of logic, the logic that both works and is
generally supposed to suffice for all scientific discourse
(and, some hold, all SIGNIFICANT discourse), is extensional.
That is, the logic is blind to intensional distinctions; the
intersubstitution of coextensive terms, regardless of their
intensions, does not affect the truth value (truth or
falsity) of the enclosing sentence. Moreover, the truth
value of a complex sentence is always a function of the truth
values of its component sentences.

The Intentionalist thesis of irreducibility is widely
accepted, in one form or another, and there are two main
reactions to the impasse: Behaviourism and Phenomenology. The
behaviourist argues that since the Intentional idioms cannot
be made to fit into the going framework of science, they must
be abandoned, and the phenomena they are purported to
describe are claimed to be chimerical.'

D. C. Dennett (1969)
Content and Consciousness p32.

The choice was clearly spelled out by Quine in 1960, but remains
poorly appreciated:

'One may accept the Brentano thesis as showing the
indispensability of intentional idioms and the importance of
an autonomous science of intention, or as showing the
baselessness of intentional idioms and the emptiness of a
science of intention. My attitude, unlike Brentano's, is the
second. To accept intentional usage at face value is, we saw,
to postulate translation relations as somehow objectively
valid though indeterminate in principle relative to the
totality of speech dispositions. Such postulation promises
little gain in scientific insight if there is no better
ground for it than that the supposed translation relations
are presupposed by the vernacular of semantics and
intention.'

W. V. O Quine

The Double Standard: Flight from Intension
Word and Object (1960), p218-221

The alternative, methodologically incompatible approach of evidential
behaviourism, is restricted to extensional, normative analysis and
management of behaviour, drawing on natural inmate-environment
interactions, i.e., behaviour with respect to day to day activities.
This is eliminativist with respect to intensions (properties,
meanings, senses or thoughts, Quine, 1960; 1990; 1992), not on the
grounds that they comprise a body of pre-scientific 'folk' theoretical
idioms (Stich 1983; Churchland 1989), but because such idioms violate
the basic axiom of valid inference, namely Leibniz's Law: for any
objects x and y, if x is identical to y, then if x has a certain
property F, so does y. Symbolically: (x)(y)[(x=y) -> (Fx -> Fy)]. This
is the indiscernibility of identicals upon which all inference is
premised. ("Things are the same as each other, of which one can be
substituted for the other without loss of truth" - [Eadem sunt, quorum
unum potest substitui alteri salva veritate].)
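Leibniz's Law, and its failure in intensional contexts, can be made concrete with a small sketch (an editorial illustration, not from the original post; the names and the crude modelling of belief as a relation to sentences are assumptions for the example):

```python
# Substitutivity of identicals holds in an extensional context but
# fails in an intensional one, here modelled crudely by representing
# a belief as a relation to *sentences* (strings), not to referents.

# Two co-referring names bound to the very same individual.
superman = clark_kent = "the man from Krypton"

def flies(person):
    # Extensional context: a predicate applied to the referent itself.
    return person == "the man from Krypton"

# Substituting one co-referring term for the other preserves truth.
assert flies(superman) == flies(clark_kent)

# Intensional context: belief, modelled as a set of believed sentences.
lois_believes = {"Superman flies"}

def believes_flies(name):
    return f"{name} flies" in lois_believes

# Substitution of co-referring names does NOT preserve truth here:
print(believes_flies("Superman"))     # True
print(believes_flies("Clark Kent"))   # False
```

The quotation-like belief context is exactly the kind of "referentially opaque" construction the surrounding discussion is about: truth depends on how the referent is named, not on which object is named.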

'...it is useless to suggest, as some logicians have done,
that the variable x may take as its values intensions of some
sort. For if we admit intensions as possible values of our
variables, we must abandon the principle of the
indiscernibility of identicals, and then, because we have no
clear criterion of identity, we shall be unable to say what
we want to say about extensions.'

Problems of Intensionality
W. Kneale and M. Kneale (1962)
The Development of Logic p.617



        'The first-order predicate calculus is an extensional logic
        in which Leibniz's Law is taken as an axiomatic principle.
        Such a logic cannot admit 'intensional' or 'referentially
        opaque' predicates whose defining characteristic is that they
        flout that principle.'


U. T. Place (1987)
Skinner Re-Skinned P. 244
In B.F. Skinner Consensus and Controversy
Eds. S. Modgil & C. Modgil

        'There is a counterpart in modern logic of the thesis of
        irreducibility. The language of physical and biological
        science is largely extensional. It can be formulated
        (approximately) in the familiar predicate calculus. The
        language of psychology, however, is intensional. For the
        moment it is good enough to think of an intensional sentence
        as one containing words for intentional attitudes such as
        belief.

Roughly what the counterpart thesis means is that important
features of extensional, scientific language on which
inference depends are not present in intensional sentences.
In fact intensional words and sentences are precisely those
expressions in which certain key forms of logical inference
break down.'

R. J. Nelson (1992)
Naming and Reference p.40

Note, '..intensional words and sentences are precisely those
expressions in which certain key forms of logical inference break
down' and '..the language of psychology, however, is intensional'.
Whilst it is clearly the case that folk psychology is largely
concerned with properties, characteristics or qualities of
individuals, their beliefs, desires, thoughts, feelings etc., it is
also the case that this is now true of much of contemporary
professional psychology (Fodor 1980). However, it may also be true
that many contemporary psychologists are not aware of the full
implications and quandaries implied by this stance (Stich 1980). It
has been persuasively argued (Quine 1951, 1956) that quantification
into intensional contexts is indeterminate, leading inevitably to the
'indeterminacy of translation' (Quine 1960). Nelson (1992), a one-time
IBM senior mathematician, goes on to point out:

'It is widely claimed today by philosophers of logic that
intensional sentences cannot be equivalently rephrased or
replaced by extensional sentences. Thus Brentano's thesis
reflected in linguistic terms asserts that psychology cannot
be framed in the extensional terminology of mathematics,
physics or biology'.

ibid p.42.

This point has not only been made by logicians. In fact it has been a
major, perhaps the major finding of research within Personality and
Social Psychology since the 1950s. Here is how Ross and Nisbett (1991)
put the matter:

'Finally, it should be noted that some commonplace
statistical failings help sustain the dispositional bias.
First, people are rather poor at detecting correlations of
the modest size that underlie traits (Chapman and Chapman
1967, 1969; Kunda and Nisbett 1986; Nisbett and Ross 1980).
Second, people have little appreciation of the relationship
of sample size to evidence quality. In particular, they have
little conception of the value of aggregated observations in
making accurate predictions about trait-related behavior
(Kahneman & Tversky 1973; Kunda & Nisbett 1986). The gaps in
people's statistical abilities create a vacuum that the
perceptual and cognitive biases rush in to fill.'

L. Ross and R. E. Nisbett (1991)
The Person and the Situation: Perspectives of Social
Psychology

and within Cognitive Psychology, Agnoli & Krantz, 1989:

        'A basic principle of probability is the conjunction rule,
        p(B) >= p(A&B). People violate this rule often, particularly
        when judgements of probability are based on intensional
        heuristics such as representativeness and availability.
        Though other probabilistic rules are obeyed with increasing
        frequency as people's levels of mathematical talent and
        training increase, the conjunction rule generally does not
show such a correlation. We argue that this recalcitrance is
not due to inescapable "natural assessments"; rather, it
stems from the absence of generally useful problem-solving
designs that bring extensional principles to bear on this
class of problem. We predict that when helpful extensional
strategies are made available, they should compete well with
intensional heuristics. Two experiments were conducted, using
as subjects adult women with little mathematical background.
In Experiment I, brief training on concepts of algebra of
sets, with examples of their use in solving problems, reduced
conjunction-rule violations substantially, compared to a
control group. Evidence from similarity judgements suggested
that use of the representativeness heuristic was reduced by
the training....

...We conclude that such intensional heuristics can be
suppressed when alternative strategies are taught.

The development of formal thought does not culminate in
adolescence as Piaget (1928) held; rather, it depends on
education (Fong, Krantz, & Nisbett, 1986, Nisbett, Fong,
Lehmann & Cheng 1987) and may continue throughout adulthood.
Probabilistic reasoning has been an especially useful domain
in which to study the impact of training in adulthood on
formal thought. Probabilistic principles are cultural
inventions at most a few centuries old (Hacking 1975).....

Tversky and Kahneman (1983) focused on processes in which
people substitute intensional for extensional thinking. In
the latter mode, concepts are represented mentally in the
same way as sets, hence, rules of logic and probability are
followed in the main. By contrast, intensional thinking
represents concepts by prototypes, exemplars, or relations to
        other concepts (Rosch, 1978, Smith & Medin 1981). Processing
is affected strongly by imaginability of prototypes,
availability of exemplars, etc., and its results are not
constrained as strongly by logical relations. A prime example
is the representativeness heuristic (Kahneman & Tversky
        1972), in which the probability of an outcome is judged in terms
        of the similarity of that outcome to a prototype.

Tversky and Kahneman (1983) drew far reaching conclusions
from the fact that, in most of their tests, the prevalence of
conjunction errors was not affected by statistical education.
They developed the concept of "natural assessment", a
computation that is 'routinely carried out as part of the
perception of events and the comprehension of messages......
even in the absence of a specific task set.' They defined a
"judgmental heuristic" as a 'strategy that relies on a
natural assessment to produce an estimation or a prediction.'
They compared such mechanisms to perceptual computations, and
cognitive errors to perceptual illusions. In their view,
people well trained in mathematics nonetheless perform
natural assessments automatically. The results of these
mental computations strongly influence probability judgement.
Therefore, statistics courses presumably affect probability
judgements, in problems such as "Linda," no more than
geometry courses affect geometric visual illusions, i.e.,
scarcely at all.

Agnoli & Krantz (1989)
Suppressing Natural Heuristics by Formal Instruction:
The Case of the Conjunction Fallacy [my emphasis]
Cognitive Psychology 21, 515-550 (1989)
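The conjunction rule stated in the abstract, p(B) >= p(A&B), can be checked mechanically. Here is a brute-force sketch (an editorial illustration, not from the paper; the attribute names merely echo the "Linda" problem and the base rates are invented):

```python
# The conjunction rule: an event A&B can never be more probable than
# either conjunct, because the A&B cases are a subset of the B cases.
import random

random.seed(0)

# A toy population of 1000 equiprobable individuals.
people = [
    {"bank_teller": random.random() < 0.1,
     "feminist": random.random() < 0.4}
    for _ in range(1000)
]

def p(pred):
    """Probability of an event as a relative frequency over the population."""
    return sum(1 for person in people if pred(person)) / len(people)

p_b = p(lambda person: person["bank_teller"])
p_ab = p(lambda person: person["bank_teller"] and person["feminist"])

# Judging p(A&B) > p(B) -- as subjects often do for "Linda" -- is the
# conjunction fallacy; set-theoretically it can never hold.
assert p_ab <= p_b
```

Representing the concepts as sets, as above, is precisely the "extensional thinking" the paper says the training instils.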

--
David Longley

Jim Balter

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
David Longley <Da...@longley.demon.co.uk> wrote:
>In article <3t374e$a...@mars.earthlink.net> jqb "Jim Balter" writes:
>>
>> Perhaps it would help if you defined the terms "'intensional' or
>> 'referentially opaque' predicates" as used by Place. If referentially
>> opaque predicates are not a priori defined by the fact that they
>> are not referentially transparent, then how in the world are they
>> defined? To say that if that's what Place meant there would be no
>> disagreement or that if that's what Quine meant then he committed a
>> gaffe begs the question.
>>
>
>OK - here is part of it again - I take it as accepted that a RDBMS is
>an application of the predicate calculus.

I ask for a definition and you post hundreds and hundreds of lines that,
as far as my brief glance tells, do not contain the phrases "intensional
predicate" or "referentially opaque". Is this intended to be opaqueness
by example? Tell me, in a few words, what Place means by "intensional
predicate" (and why you keep turning this into "intensional idiom") and
"referentially opaque" (and why he treats them as a priori equivalent)
or don't bother to respond. If I were interested in redacting vast
amounts of material, I would be reading a text, not conversing on the net.
If I ask someone a question about unix, I don't expect them to post the
entire reference manual; that would be rude and silly, wouldn't it?


David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t3c5c$c...@mars.earthlink.net> jqb "Jim Balter" writes:

> I ask for a definition and you post hundreds and hundreds of lines that,
> as far as my brief glance tells, do not contain the phrases "intensional
> predicate" or "referentially opaque". Is this intended to be opaqueness
> by example? Tell me, in a few words, what Place means by "intensional
> predicate" (and why you keep turning this into "intensional idiom") and
> "referentially opaque" (and why he treats them as a priori equivalent)
> or don't bother to respond. If I were interested in redacting vast
> amounts of material, I would be reading a text, not conversing on the net.
> If I ask someone a question about unix, I don't expect them to post the
> entire reference manual; that would be rude and silly, wouldn't it?
>
>

You will have to ask Place for that - or read the article the extract
comes from. What I have provided is Dennett's criteria. Dennett and
Place had a bit of a spat over Skinner in the book I cited Place from
and it all got rather out of hand.

To be frank, the whole issue is so complex that I can't pretend to be
able to do a better job than I just tried (and that took some doing).
The same goes for the indeterminacy thesis - shelves have been written
on these matters - and they are at the heart of the Philosophy of Mind.

All I can say is that I hope Aaron *did* run his summary of Quine's
indeterminacy thesis by Hookway, and I hope he tells us the outcome. I
am trying to make some constructive use of a technology which purports
to be able to get away from the quagmire of intensionalism - if it will
not do that I am happy to have that pointed out - but all I have seen
to date is a few AI workers who are trying to incorporate all that I
think is rotten in psychology into AI - and that's not what I am here
for ;^{.

It's more than an academic issue for me alas...
--
David Longley

terrill snyder

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <1995Jun30.0...@media.mit.edu>, min...@media.mit.edu
says...

>
>In article <804472...@longley.demon.co.uk> Da...@longley.demon.co.uk
writes:
>
>>Minsky and Papert of course made much of the failure of single layer
>>neural networks to model the XOR function. They also made much of the
>>possibility of real systems being built from multiple agencies opaque
>>to one another. I'm intrigued by the assertion that FOL can not handle
>>'connectedness', and that Minsky and Papert's critique of Perceptrons
>>was largely based on the failure of single layer perceptrons to solve
>>such problems. Was the Minsky & Papert critique also a critique of FOL
>>as an adequate language for AI by the same token?
>
>Hmmm. It's not entirely unrelated, to use a waffling phrase. In
>fact, virtually all the theorems in that Perceptrons book also apply
>to n+1-layer nets as well.
>
>(In most cases this can be seen by replacing our growth rates by the
>n-th root of the rates for the nets with a single inner layer. It's a
>constant annoyance that so many NN practitioners haven't noticed this
>rather obvious point, and hence keep saying that n-layer nets escape
>those limitations. This even applies to parity, unless you allow
>arbitrarily large fan-in (as did the authors of the otherwise good PDP
>book). Our theorems assumed what we called "finite order" -- which is
>the same as bounded fan-in. Of course if you don't assume any such
>limitation, then you can compute *anything* in two layers, simply by
>writing out the normal conjunctive form of a Boolean function. This
>corresponds to what comp.ai.philosophy members call the "humongous
>lookup table approach".)
>
>As for connectedness, this clearly requires some sort of recursive
>closure, e.g., the "minimization operator" that gets you from
>primitive to general recursion. Offhand (because I haven't thought it
>out yet) it seems to me that FOL does have the same difficulty with
>some analogy to topological connectedness. Can anyone produce (or give
>a reference to) a precise formulation of this?
>
Is this what you had in mind?
f(x,y,z)=f(x,f(x,y-1,z),z-1)
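For reference, the minimization operator Minsky mentions -- the step from primitive to general recursion -- can be sketched in a few lines (an editorial sketch, not from either post):

```python
# The unbounded minimization ("mu") operator: given f, return the
# least natural number y with f(y) == 0. Unlike primitive recursion,
# the search is unbounded and need not terminate, which is the
# "recursive closure" that bounded schemes lack.

def mu(f):
    """Least y such that f(y) == 0 (loops forever if no such y exists)."""
    y = 0
    while f(y) != 0:
        y += 1
    return y

# Example: the least y whose square reaches 20, phrased as a zero test.
least = mu(lambda y: 0 if y * y >= 20 else 1)
print(least)  # 5
```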

TerrillS


Jorn Barger

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <804548...@longley.demon.co.uk>,
David Longley <Da...@longley.demon.co.uk> wrote (not to me):
>You [...] seem prepared to dismiss what amounts to over
>50 years of work in the Philosophy of Mind where this issue has been
>extensively analysed and discussed (though not resolved).

And when it's resolved, you don't think it will be summarizable in
a simple sentence of plain english?

I think this sort of 50-years-of-work rhetoric is usually dangerous
and wrong. Twentieth century western intellectual history has little
to brag about, and much to atone for...


j


Aaron Sloman

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
Dear David (David Longley <Da...@longley.demon.co.uk> )

I find it strange that you object so strongly to something I've
written which is almost identical with a quotation you have
circulated several times.

you wrote:
(dl)


> > > What I would find very helpful would be some development of the
> > > Quinean thesis that deductive inference fails within intensional
> > > contexts.

I wrote
(as)


> > This is totally trivial.
> >
> > If an intensional context is DEFINED as one in which replacement of
> > a term by one that is referentially equivalent does not preserve truth
> > value then it follows trivially that the particular sort of
> > deductive logic that allows substitution of referential equivalents
> > will fail in intensional contexts. So what?

you objected
(dl)


> Of course not - IFF that's what is being claimed. But Chisholm (1957)
> and Quine are not claiming this - rather they are suggesting that this
> is what seems to characterise intensional idioms.

I am not sure what you think "this" refers to. But you have several
times included this quote from Place

> 'The first-order predicate calculus is an extensional
> logic in which Leibniz's Law is taken as an axiomatic
> principle. Such a logic cannot admit 'intensional' or
> 'referentially opaque' predicates whose defining
                                          ^^^^^^^^
> characteristic is that they flout that principle.'
  ^^^^^^^^^^^^^^


>
> U. T. Place (1987)
> Skinner Re-Skinned P. 244
> In B.F. Skinner Consensus and Controversy
> Eds. S. Modgil & C. Modgil

Place is saying exactly what I said, namely that the defining
characteristic of intensional predicates is that they flout the
principle which is part of the defining characteristic of
extensional logic.

I.e. the whole thing is just a matter of definition. It is therefore
totally trivial that extensional logics are not applicable to
intensional (referentially opaque) predicates, or contexts.

> You have clearly decided to ignore what I and others have actually
> written (at length)

What you have written (at length) is a mixture of many things. Some
of it is fine. Some of it I don't understand. And some of it is
based on what looks like uncritical acceptance and misapplication
of philosophical claims which I have tried to deflate (by showing
they are merely definitional) or refute (by providing counter
examples).

> ..and seem prepared to dismiss what amounts to over
> 50 years of work in the Philosophy of Mind where this issue has been
> extensively analysed and discussed (though not resolved).

I don't ignore it. I am saying that if there are intensional
contexts then it follows totally trivially that they cannot be
handled by extensional logic. There's nothing deep about that.

I then showed, by means of examples, that intensionality is neither
necessary nor sufficient for mentality, since some non-mental
predicates exhibit intensionality, and not all mental predicates do
in all contexts.

So yes, there has been 50 years of philosophical muddle about this.

Notice that I did not claim that it was trivial that some
intensional contexts exist, as Frege pointed out long ago. What
these contexts are and how they work, and how many different kinds
there are is a topic for philosophical and linguistic study. (I
include linguistics and good philosophy as part of science.)

But these things won't be studied fruitfully by constantly repeating
a claim that if they are not amenable to extensional logic they are
not a part of science.

If something exists it is an object for scientific study i.e. the
attempt to find out what makes it possible, the conditions under
which various forms occur, what their effects are, etc. etc.

Intensional contexts exist. Therefore they are objects for
scientific study.

If that means that extensional logics don't suffice for science,
that's fine with me, no matter what Quine or any other authority
figure has said. Predicate calculus just happens to be one among
many (excellent) tools for science and engineering.

Of course, none of this implies that your criticisms of
psychological theorists who ignore empirical data are wrong. Your
criticisms are fine.

Anyone who ignores empirical data and forms judgements about
criminals or anything else based largely on their own prejudices
should be criticised. Go for them!!

But basing the criticism of opinionated judges and prison officers
on Quine's views of the nature of science and logic or on claims
about intensional contexts is just muddying the waters, and probably
means that the people who are your real targets will ignore your
criticisms because of all the irrelevant detail they have to wade
through. (I previously compared it with using a broken sledge-hammer
to turn a screw.)

I won't respond in detail to your 890 line message, but it may be
useful to comment on two points. You quote Fodor

    '.....the distinction between 2 doctrines, both of
    which inform theory construction in much of modern
    cognitive psychology: the representational theory of
    mind and the computational theory of mind. According to
    the former, propositional attitudes are viewed as
    relations that organisms bear to mental representations.
    According to the latter, mental processes have access
    only to formal (nonsemantic) properties of the mental
    representations over which they are defined. ....'


    J A Fodor (1980)
    Methodological solipsism considered as a research
    strategy in cognitive psychology.
    Behavioral and Brain Sciences; 1980 Mar Vol 3(1) 63-109

I think that defining a computational theory of mind as ruling out
semantic properties of mental representations is similar to Searle's
mistake in saying that computation is merely about syntax. A great
deal of what is of interest in many computing systems is exactly
that they involve semantics, starting from the very primitive
semantics of some bit strings interpreted (by the machine) as
"pointers", i.e. addresses of locations in a virtual memory space
and the semantics of other bit strings interpreted (by the machine)
as instructions of various kinds. In some virtual machines (e.g. a
lisp or prolog virtual machine) richer kinds of semantics are
possible (e.g. there are trees, networks, numbers, etc., that can be
referred to by variables, the contents of datastructures, etc.)

I've argued in various places (e.g. my review of Penrose in the AI
journal in 1992)[*] that there is a mathematical notion of
"computation" that is purely syntactic, but that there are
computational machines which are not purely syntactic, and that's
why they are so useful. The study of semantic engines is part of
science.

[*] accessible via ftp in the directory
ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect
in the file penrose.ps.Z. See also the file
Aaron.Sloman_semantics.ps.Z which is a paper that was recently
published in Philosophical Transactions of the Royal Society:
Physical Sciences and Engineering,Vol 349, 1689, pp 43-58 1994

So B.F. Loar (whom I've never heard of) is wrong in this extract
you've quoted:

    .....For not only are
    the references of our thoughts not mentioned in
cognitive psychology; nothing that DETERMINES their
references, like Fregian senses, is mentioned
either....Neither reference nor reference-determining
sense have a place in the description of computational
processes.'

I've previously suggested in response to one of your earlier
messages that the description of algorithms in computer science is a
counter example to this.

For instance, the set of ordered input-output pairs is the extension
(reference) of a mathematical function, whereas the algorithm by
which the outputs are generated from the inputs is the intension
(reference-determining sense). What Loar says has no place in the
description of computational processes is absolutely *central* to
many descriptions of computational processes.

For example if I say:
"The machine added 35 to the result of counting the members
of the list of attendees"

the expression "35" occurs in an intensional context because I
cannot (salva veritate) replace it with any arbitrary numerical
expression that refers to the same number. E.g. the above might be
true and this false:

"The machine added the square root of the number of adjectives
in its dictionary to the result of counting the members
of the list of attendees"

even if the square root of the number of adjectives in its
dictionary happens to be 35. So that's an intensional description of
a computing system's behaviour. Programmers use them all the time
in explaining HOW a machine works as opposed to merely WHAT it does.
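The point about intension (algorithm) versus extension (input-output pairs) can be put in code (an editorial sketch illustrating the argument, not anyone's quoted example):

```python
# Two procedures with the SAME extension (identical input-output
# pairs) but DIFFERENT intensions (different algorithms). An
# extensional description cannot tell them apart; an intensional
# "how it works" description can.

def sum_iterative(n):
    # Intension: count up from 1, accumulating a running total.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    # Intension: Gauss's closed-form formula.
    return n * (n + 1) // 2

# Extensionally identical over this test range:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))

# Yet "the machine computed the total by adding the numbers one by
# one" is true of the first procedure and false of the second -- an
# intensional description of the system's behaviour, of the kind
# programmers use all the time.
```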

I.e. when talking about semantic engines whether they are human
brains or computers we OFTEN use intensional language, and quite
rightly.

Frege, the inventor (an inventor?) of predicate calculus, and one of
my philosophical heroes, came very close to seeing all this. He did
not quite make it because he did not think of the mind as a
computational engine, so he used obscure analogies to explain the
difference between sense and reference (e.g. he said the reference
is like the object you look at through a telescope, and the sense is
like the virtual image of the object in the telescope -- not too bad
for a first attempt, for it points approximately at the difference
between what and how, which is what intensionality is all about).

Cheers.
Aaron

David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t3020$a...@percy.cs.bham.ac.uk>
A.Sl...@cs.bham.ac.uk "Aaron Sloman" writes:

This is a followup to my note on classification rather than definition.
Again, I'm reverting to quoting Quine so we both know what the source
of the matter is:

'The variables 'x', 'y', etc., adjuncts to the notation
of quantification, bring about a widening of the notion
of sentence. A sentence which contains a variable
without its quantifier (e.g., 'Fx' or '(y)Fxy', lacking
'(x)') is not a sentence in the ordinary true-or-false
sense; it is true FOR some values of its free variables,
perhaps, and false for others. Called an OPEN sentence,
it is akin rather to a predicate: instead of having a
TRUTH VALUE (truth or falsity) it may be said to have an
EXTENSION, this being conceived as the class of those
evaluations of its free variables for which it is true.
For convenience one speaks also of the extension of a
closed sentence, but what is then meant is simply the
truth value.

A compound sentence which contains a sentence as a
component clause is called an EXTENSIONAL context of
that component sentence if, whenever you supplant the
component by any sentence with the same extension, the
compound remains unchanged in point of its own
extension. In the special case where the sentences
concerned are closed sentences, then, contexts are
extensional if all substitutions of truths for true
components and falsehoods for false components leave
true contexts true and false ones false. In the case of
closed sentences, in short, extensional contexts are
what are commonly known as truth functions.

It is well known, and easily seen, that the
conspicuously limited means which we have lately allowed
ourselves for compounding sentences - viz, 'and', 'not'
and quantifiers - are capable of generating only
extensional contexts. It turns out, on the other hand,
that they confine us no more than that; the ONLY ways of
embedding sentences within sentences which ever obtrude
themselves, and resist analysis by 'and', 'not', and
quantifiers, prove to be contexts of other than
extensional kind. It will be instructive to survey them.

Clearly QUOTATION is, by our standards, non-extensional;
we cannot freely put truths for truths and falsehoods
for falsehoods within quotation, without affecting the
truth value of a broader sentence whereof the quotation
forms a part.......

A more seriously non-extensional context is indirect
discourse.....It is the more interesting, then, to
reflect that indirect discourse is in any event at
variance with the characteristic objectivity of science.
It is a subjective idiom.

Indirect discourse, in the standard idiom form 'says
that', is the head of a family which includes also
'believes that', 'doubts that', 'is surprised that',
'wishes that', 'strives that', and the like. The
subjectivity noted in the case of 'says that' is shared
by these other idioms twice over; for what these
describe in terms of a subjective projection of oneself
is not even the protagonist's speech behavior, but his
subjective state in turn.

Further cases of non-extensional idiom, outside the
immediate family enumerated above, are 'because' and the
closely related phenomenon of the contrary-to-fact
conditional. Now it is an ironical but familiar fact
that though the business of science is describable in
unscientific language as the discovery of causes, the
notion of cause itself has no firm place in science. The
disappearance of causal terminology from the jargon of
one branch of science and another has seemed to mark the
progress in understanding of the branches concerned'.

The Scope and Language of Science
W.V.O Quine (1954)
In 'The Ways of Paradox and Other Essays' 1976
p228 - 245

--
David Longley

Aaron Sloman

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
Jim Balter <jqb> writes, in response to an article from David
Longley which has apparently flown across the Atlantic to Jim, but
hasn't (yet) crawled across England to Birmingham.

Jim quotes David as writing (in response to my remarks about a
quotation from U.T.Place):

(DL)


> >IFF Place meant define in the sense that you are, I'm sure we'd
> >all agree that the extensional/intensional distinction is trivial
> >- but that's not what is being said.

I chose my words carefully and I did not say that the distinction is
trivial.

What I said was trivial was what David referred to thus:

> What I would find very helpful would be some development of the
> Quinean thesis that deductive inference fails within intensional
> contexts.

I wrote "this is totally trivial". I.e. the Quinean thesis thus
stated is trivial. The result that predicate calculus is not
applicable in intensional contexts is trivial because it is a
trivial consequence of two definitions, of deductive inference (as
defined by Quine, i.e. to be what predicate calculus can do) and of
intensional contexts.

It's important to notice that this is trivial, because then instead
of lamenting the inapplicability of predicate calculus to
intensional contexts, or using the inapplicability to say that
science can have no truck with intensional contexts, we can ask:

is there something broader than predicate calculus that is able
to handle intensional contexts?

Well for a start there's ordinary English, which I find quite
useful, sometimes, just as Frege found German pretty useful for
talking about intensional contexts, as he originally did in some
depth.

In addition there are many extensions of predicate calculus that
logicians, mathematicians, computer scientists, and philosophers
have explored and found useful for one task or another. John
McCarthy's work on mental concepts is an example. I have no doubt
there will be *many* more extensions.

David may not wish to use any such extension (e.g. because he can't
express them in his database). That's too bad for him.

We can totally discount any claim by Quine or any other `authority'
that only first order predicate logic is acceptable for science, or
that AI must use first order predicate logic and nothing else. It's
just a silly restriction, when it's already clear that something
broader is needed (given that intensional contexts exist).

Of course, it's not silly for some people to explore how far you can
get with a restricted logic. That's a contribution to science.

But we don't need such an exploration to discover things that are
trivially true: predicate calculus (defined as including Leibniz'
principle of substitutivity) cannot handle intensional contexts.

So, use something else to handle them instead.

Cheers.
Aaron
(PS) I have not understood Quine's most recent comments on all this.
I suspect his difficulty with intensional contexts is connected
with his unhappiness about the difficulty of finding criteria for
identity of properties (since neither coextensiveness nor necessary
coextensiveness suffice to fit all intuitions concerning identity
of properties, and Carnap's solution, which was to use syntactic
equivalence of expressions, seemed too strict for those who want to
allow different ways of expressing the same property). I suspect the
answer is to give up the search for a unique notion of identity of
properties, or intensions, or concepts.

Similarly we can't expect a unique answer to the question: "Which
algorithm did the machine use to compute its result?" -- there will
be different answers at different levels of abstraction, from the
high level that enables us to say that the same algorithm can be
expressed in both Fortran and Lisp (sometimes!) down to the level of
description of compiled machine code versions of algorithms, which
will inevitably be different on different machine architectures,
even if derived from the same high level language program.

Quine may wish scientists NOT to talk about things that don't have
unique criteria for identity, or which cannot be handled in
predicate calculus. That's his choice, but not mine: I regard it as
looking for your lost keys under the lamp because that's where
there's most light. It doesn't follow that you'll find them there.
---

terrill snyder

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <804536...@longley.demon.co.uk>, Da...@longley.demon.co.uk
says...
>

David,

Maybe having a head-pounding hangover makes reading your material seem
longer than it is, but I couldn't get all of the way through it.

I did get halfway through the first large post. Anyway, I suppose I
could have read what you were saying more thoroughly, and figured this
out for myself, but what are you advocating?

TerrillS


David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t3tfi$e...@percy.cs.bham.ac.uk>
A.Sl...@cs.bham.ac.uk "Aaron Sloman" writes:

>
> It's important to notice that this is trivial, because then instead
> of lamenting the inapplicability of predicate calculus to
> intensional contexts, or using the inapplicability to say that
> science can have no truck with intensional contexts, we can ask:
>
> is there something broader than predicate calculus that is able
> to handle intensional contexts?
>
> Well for a start there's ordinary English, which I find quite
> useful, sometimes, just as Frege found German pretty useful for
> talking about intensional contexts, as he originally did in some
> depth.
>
> In addition there are many extensions of predicate calculus that
> logicians, mathematicians, computer scientists, and philosophers
> have explored and found useful for one task or another. John
> McCarthy's work on mental concepts is an example. I have no doubt
> there will be *many* more extensions.
>
> David may not wish to use any such extension (e.g. because he can't
> express them in his database). That's too bad for him.

I read Quine and others as having demonstrated the failure of
quantification and substitutivity in intensional contexts. On
the basis of this, Quine goes on to recommend that such 'creatures
of darkness' (Quine 1956) are best eschewed.

This, to me, is a positive contribution to ontology, where 'to be
is to be the value of a bound variable'. The closed world of
intensional terms on the other hand seems to commit one to
methodological solipsism - which I suspect knows no bounds and
cannot be translated into behaviour. This is more austere than
Skinner in 'Beyond Freedom and Dignity', so nobody who adopts
Quine's criteria can expect an easy ride :-(.

It's true that I am constrained by relational theory, it seems,
because the same rules apply (referential integrity - Gray 1984).
IFF a way can be found to remove this problem, I suspect it will
be through Quine's divide and conquer strategy, i.e. we will find
behavioural accounts which do just as good a job as the
intensional terms.

To *uncritically* accept folk psychological terms into the data
dictionary of behaviour science at the outset seems to me to be
pseudo-science (almost by definition). Whilst cognitivism has
become rather popular over the past couple of decades, I think
many have misunderstood why. As set out elsewhere, it is not
because psychology finds such terms useful, rather it's because
naive psychology makes use of them, and for the past 25 years or
so major research programmes in psychology have just been
describing these HEURISTICS. But as Nisbett and Ross (1980),
Kahneman, Slovic and Tversky (1982) and Dawes, Faust and Meehl
(1989), and perhaps even Minsky (1986), point out - these
strategies are not rational - so why should anyone in AI want to
build them into computers? This is a subtle point which I think
large numbers of psychologists do not fully appreciate - because
many of them *do* operate from a stance of methodological
solipsism (cf. extract 1 & 2) rather than evidential
behaviourism.

For those interested, here are some of the references cited here
and elsewhere.


Codd E F A relational model of data for large shared data banks
Comm ACM 13, 1970, 377-387

Frege G. Begriffsschrift (1879) In Heijenhoort (Ed) From Frege to
Godel: A Source Book in Mathematical Logic Harvard University Press
1966

Gardarin G & Valduriez P Relational Databases and Knowledge Bases
Addison Wesley 1989

Gray P Logic, Algebra and Databases Ellis Horwood Limited 1984

Agnoli F & Krantz D. H. Suppressing Natural Heuristics by Formal
Instruction: The Case of the Conjunction Fallacy Cognitive Psychology
21, 515-550, 1989

Gluck M A & Bower G. H. From conditioning to category learning: An
adaptive network model. Journal of Experimental Psychology General
(1988) Sep Vol 117(3) 227-247

Gluck M A & Bower G H Component and pattern information in
adaptive networks. Journal of Experimental Psychology General; (1990)
Mar Vol 119 (1) 105-9

Lehman D R & Nisbett R E A Longitudinal Study of the Effects of
Undergraduate Training on Reasoning. Developmental Psychology,1990,26,
6,952-960

Minsky M L & Papert S A Perceptrons: An Introduction to
Computational Geometry MIT Press 1990

Nisbett R E & Wilson T D Telling more than we can know: Verbal
reports on mental processes Psychological-Review; 1977 Mar Vol 84(3)
231-259

Nisbett R E & Ross L Human Inference: Strategies and Shortcomings
of Social Judgment Century Psychology Series, Prentice-Hall (1980)

Nisbett R E & Krantz D H The Use of Statistical Heuristics in
Everyday Inductive Reasoning Psychological Review, 1983, 90, 4, 339-
363

Nisbett R E, Fong G T, Lehman D R & Cheng P W Teaching Reasoning
Science v238, 1987 pp.625-631

Ploger D & Wilson M Statistical reasoning: What is the role of
inferential rule training? Comment on Fong and Nisbett. Journal of
Experimental Psychology General; 1991 Jun Vol 120(2) 213-214

Reeves L M & Weisberg R W Abstract versus concrete information as
the basis for transfer in problem solving: Comment on Fong and Nisbett
(1991). Journal of Experimental Psychology General; 1993 Mar Vol
122(1) 125-128

Rescorla R A Pavlovian Conditioning: It's Not What You Think It
Is. American Psychologist, March 1988.

Ross L & Nisbett R E The Person and The Situation: Perspectives of
Social Psychology McGraw Hill 1991

Shafir E & Tversky A Thinking Through Uncertainty:
Nonconsequential Reasoning and Choice Cognitive Psychology 24,449-474,
1992

Smith E E, Langston C & Nisbett R E The case for rules in
reasoning. Cognitive Science; 1992 Jan-Mar Vol 16(1) 1-40

Stich S The Fragmentation of Reason Bradford Books 1990

Sutherland S IRRATIONALITY: The Enemy Within Constable: London 1992

Tversky A & Kahneman D Extensional Versus Intuitive Reasoning: The
Conjunction Fallacy in Probability Judgment Psychological Review
v90(4) 1983

Wason P C Reasoning in New Horizons in Psychology, Penguin Books
1966

Arkes H R & Hammond K R Judgment and Decision Making: An
interdisciplinary reader Cambridge University Press 1986

Dawes R.M The Robust beauty of improper linear models in decision
making American Psychologist, 1979 34,571-582

Dawes R M Rational Choice in an Uncertain World Orlando: Harcourt,
Brace, Jovanovich 1988

Dawes R M, Faust D & Meehl P E Clinical Versus Actuarial Judgement
Science v243, pp 1668-1674 1989

Elstein A S Clinical judgment: Psychological research and medical
practice. Science; 1976 Nov Vol 194(4266) 696-700

Einhorn H J & Hogarth R M Behavioral decision theory: Processes of
judgment and choice Annual Review of Psychology (1981), 32, 53-88

Faust D Data integration in legal evaluations: Can clinicians
deliver on their premises? Behavioral Sciences and the Law; 1989 Fal
Vol 7(4) 469-483

Goldberg L R Simple models or simple processes? Some research on
clinical judgments American Psychologist,1968,23(7) p.483-496

Kahneman D, Slovic P & Tversky A Judgment Under Uncertainty:
Heuristics and Biases Cambridge University Press 1982

Kyllonen P C & Christal R E Reasoning Ability Is (Little More
Than) Working-Memory Capacity?! Intelligence 14, 389-433 1990

Lundberg G A Case Studies vs. Statistical Methods - An Issue
Based on Misunderstanding. Sociometry v4 pp379-83 1941

Meehl P E Clinical vs. Statistical Prediction: A Theoretical
Analysis and a Review of the Evidence University of Minnesota Press,
Minneapolis. 1954

Meehl P E When Shall We Use Our Heads Instead of the Formula?
PSYCHODIAGNOSIS: Collected Papers 1971

Churchland P M A Neurocomputational Perspective The Nature of
Mind and The Structure of Science Bradford Books 1989

Quine W V O Quantifiers and Propositional Attitudes (1956) In
The Ways of Paradox and Other Essays Harvard University Press 1966,
1972

Quine W V O The Scope and Language of Science (1954) The Ways of
Paradox and Other Essays Harvard University Press 1966, 1972

Quine W V O Word and Object MIT Press 1960

Quine W V O What Is It All About ? American Scholar, (1980)
50,43-54

Quine W V O The Pursuit of Truth Harvard University Press
1990,1992

Schnaitter R Knowledge as Action: The Epistemology of Radical
Behaviorism in B.F. Skinner Consensus and Controversy Eds. S. Modgil
& C. Modgil Falmer Press 1987

Skinner B F The Operational Analysis of Psychological Terms
Psychological Review, (1945) 45, 270-77

Skinner B F Beyond Freedom and Dignity New York, Knopf 1971

--
David Longley

Michael Zeleny

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t1lef$4...@percy.cs.bham.ac.uk>
A.Sl...@cs.bham.ac.uk (Aaron Sloman) writes:

>David Longley <Da...@longley.demon.co.uk> writes:

>> ....


>> What I would find very helpful would be some development of the
>> Quinean thesis that deductive inference fails within intensional
>> contexts.

>This is totally trivial.

This is totally unimpressive.

>If an intensional context is DEFINED as one in which replacement of
>a term by one that is referentially equivalent does not preserve truth
>value then it follows trivially that the particular sort of
>deductive logic that allows substitution of referential equivalents
>will fail in intensional contexts. So what?

Intensional contexts need not be so defined. Counterexample: the
language of de re modality, suitably construed, sustains both the
substitution salva veritate of codesignative terms, and deductive
closure, in all contexts.

>There's nothing to develop, except to notice that you had better not
>use that narrow sort of logic in that sort of context.

Proving too much, like protesting too much, is deleterious to the
credibility of your case.

>> ...Psychological terms, being intensional, resist substitutivity
>> of identicals 'salva veritate', and as a psychologist, I find this
>> fascinating and worthy of considerable elaboration.

>I suspect you have some notion that this is unique to psychology,
>and that your fascination has to do with a long term agenda that is
>misguided.
>
>(a) intensionality (as I've pointed out previously) is not unique to
>psychological contexts, and
>
>(b) the widely believed claim that ALL referring expressions in
>psychological contexts are intensional is false.

Strong words. See above.

>I've given examples of both in previous contributions.
>
>Here are some more examples of (a)
>
> "It's easy to prove that the sum of the first 5 odd
> numbers is 25."
> (replace "25" with a very complicated expression that
> evaluates to 25)
>
> "The set of chairs in this room is easily identifiable."
> (Replace "the set of chairs in this room" with an expression
> that refers to the same set, but uses a different membership
> criterion, e.g. "the set of objects in this room that were
> all manufactured in Wigan in 1973")

Ease of proof and identification can be readily cashed out in terms
of psychological states. Likewise, alternative construals in terms
of mechanical processes can be readily impeached as not exemplifying
genuine instances thereof. "Nothing is itself identified or proven,
but thinking that makes it so." Even if this is merely a dialectical
position, nothing in the above examples suffices to dismiss it.

>There are lots and lots of examples of (a) relating to computers,
>e.g.
> "The computer has the information that Fred Smith was
> born in 1960"
>
>Even if Fred Smith is your brother and that statement is true, this
>might be false
> "The computer has the information that your brother was
> born in 1960"

The same objection applies to your proposed imputation of possession
of information that something is the case, to "the computer," in
contradistinction from imputing it to the individuals responsible
for its programming and use.

>There's nothing mysterious about psychological statements being
>intensional (referentially opaque). The same is true of many
>statements about other information processing systems besides
>people.

This cannot be maintained without arbitrarily fixing the onus for
processing information, on the basis of some ideologically driven,
pre-theoretical considerations.

>But not all psychological statements are intensional (referentially
>opaque).
>
>Another example of (b)
>
> "The policeman noticed the burglar climbing over the wall."
> (I claim that for extensionally equivalent
> substitutions of "the burglar" the truth value of
> the sentence will not change.)

Nonsense. If the semantics of noticing is compositional, your claim
will never get off the ground -- for then, P's ability to notice B
doing W would have to depend on his ability to identify B as such.
Moreover, the fundamental assumption is widely disputed in the literature.
See e.g. Dretske's recent discussions of the identity of percepts and
the nature of noticing.

>I suspect that the attempt to link Quine, limitations of first order
>logic, the language of science, and properties of psychological
>statements, is just a muddle.
>
>The language we use to describe information processing systems
>allows us sometimes to refer (directly or indirectly, explicitly or
>implicitly) to aspects of their semantic content. When we do this we
>are not talking about what that content refers to.
>
>But this is not a topic that is excluded from science. Do you really
>want to claim that computer information systems are excluded from
>the realm of science?

All one would have to claim, is that the computer's ostensible ability
to process information is entirely imputed thereto by its human users
and observers. For the chronologically gifted among us, this might be
easier to see by comparing it to a slide rule.

>As for whether Quine claimed that first order predicate calculus
>was the language of science, all I can say is that IF he did, he
>merely showed what a narrow view of science he had.

And, presumably, still has.

>There are many extensions to first order predicate calculus that
>people have found necessary for one purpose or another (e.g. modal
>logics, which also produce intensional contexts).

Not all of them are computationally tractable, however. An easy way
to see that is by considering the importation of intensional entities
as a type-raising operation, along the lines of Kaplan's Russelling a
Frege-Church.

>Why not drop the topic -- there's no mileage in it, at least not
>the mileage you are looking for!
>
>Sorry.
>Aaron

>---
>--
>Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
>School of Computer Science, The University of Birmingham, B15 2TT, England
>EMAIL A.Sl...@cs.bham.ac.uk
>Phone: +44-121-414-4775 Fax: +44-121-414-4281

cordially, don't
mikhail zel...@math.ucla.edu tread
writing from the disneyland of formal philosophy on
"Le cul des femmes est monotone comme l'esprit des hommes." me

David Longley

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t2u2u$i...@pipe4.nyc.pipeline.com>
pjac...@nyc.pipeline.com "Philip Jackson" writes:
>
> (My textbook does not suggest this, incidentally -- I noted the need for
> higher order logics ;-)
>

Partially, to keep matters expressively simple?

> Just because humans may find it difficult to reason formally in a conscious
> way does not mean they do not do so efficiently at some lower, subconscious
> level, i.e. perhaps some reasoning processes operate in a formal way at a
> lower level.

I thought of saying that when I posted, but thought better of it
since we are stuck with language and science as an expressive
medium. The work on perception and computing first and second
derivatives to track and hit a ball comes to mind - but the NNers
can handle that I guess via function approximation. IFF AI
principles are to be communicable rather than just emulatable (?),
surely those principles must be couched in formal (extensional)
language?

Ok....two extracts (I quote as empirical evidence, as indirect
quotation tends to be non-extensional - Quine 1960 <g>).

'Note that the variables in quantified sentences refer to
objects of a universe of discourse, not to functions or
relations. Consequently, they cannot be used in
functional or relational positions. We say that a
language with this property is first order. A second-
order language is one with function and relation
variables as well. We have chosen to restrict our
attention here to a first-order language both because
this language allows us to prove some strong results
that simply do not hold of second-order languages and
because it is adequate for most purposes in AI.'

M.R. Genesereth & N.J. Nilsson (1987)
Logical Foundations of Artificial Intelligence


'Predicate calculus is a language for the expression of
mathematical theories. When a mathematical theory is
expressed in this language, it becomes a set of
statements (or sentences, or formulas), each of which
says something about the thing described by the theory.
Predicate calculus provides a set of inference rules
(for deriving new statements from the ones that are
given) and a set of symbols (to be used in making
statements) that seem to be adequate for most
mathematical theories. Thus, to insure generality,
almost all Al work on theorem provers has been concerned
with developing machines that handle sets of statements
in predicate calculus (note 6-2).

In fact, almost all work in the subject of theorem
proving has concerned itself with theorems stated in
first-order predicate calculus, which is discussed in
this section. Ultimately, it is desirable to extend
theorem proving methods to higher-order logics, because
they are more natural for the statement of most
mathematical theories. (The difference between first-
and higher-order logics is defined below.) Work in this
direction has been undertaken (e.g., Robinson, 1969;
Hewitt, 1968 et seq.; Pietrzykowski and Jensen, 1972).
The first-order predicate calculus is general enough,
though, so that if Church's thesis is correct, then all
mathematical theories can be expressed using it. In
principle, the AI research that has been done in first-
order predicate calculus is no less general than any
work that may be done in higher-order predicate
calculus. However, it is stressed again that, in
practice, first-order predicate calculus is not adequate
for the statement of mathematical theories about most
real-world environments and problems. The first-order
expressions of such theories would be extremely long,
complicated, and inefficient (if they were at all
obtainable), just as it would be extremely complicated
and inefficient to try to describe a real-world, problem
solving procedure (e.g., SIN, DENDRAL) as a Turing
machine. The Al research on first-order predicate
calculus has been valuable as a relatively simple
demonstration that computers can be made to "reason" in
a general way about logical problems. (This discussion
is continued in the section of this chapter entitled
"Applications to Real-World Problems." )'

P C Jackson (1974)
Introduction to Artificial Intelligence
1st Edition

Is it nevertheless *possible* that Quine is offering sound advice
that we hold higher order logics at arm's length, and that for
practical purposes we can make do with FOL (specifically an rDBMS
and 4GL supported by statistical analysis and report writing,
where the data dictionary comprises our predicates)? If not,
should I perhaps join the ranks of the "depth psychologists" and
come back again when the higher order logics have found their way
into rDBMS products? I thought what I was doing might be a valid
application of AI - but if it's woefully inadequate to model the
real world.....I'm sunk.

--
David Longley

Fritz Lehmann

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
David Longley in comp.ai quoted:
---begin quote---
' ... We have chosen to restrict our

attention here to a first-order language both because
this language allows us to prove some strong results
that simply do not hold of second-order languages and
because it is adequate for most purposes in AI.'
M.R. Genesereth & N.J. Nilsson (1987)
Logical Foundations of Artificial Intelligence
---end quote---

Doubtless this is why the proposed KIF logic standard has been restricted
to First-Order-ism so far. M.R. Genesereth is the author of KIF. I am still
waiting to hear what the "strong results" are that are relevant to
practical work in AI. If they are "the usual suspects", completeness,
compactness, Loewenheim-Skolem and 0-1 properties, then I am still waiting
to hear why any of them is importantly good (as opposed to bad) for AI work.
Presumably "connected", "finite", "equal", "function", "natural number",
and all the other not-First-Order-definable concepts are deemed by these
authors to be unnecessary for "most purposes of AI", and less important than
having completeness, compactness, Loewenheim-Skolem and 0-1 properties,
or other alluded-to "strong results". The question is: Why?

Yours truly, Fritz Lehmann


Matthew L. Ginsberg

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <3t10kr$9...@Mercury.mcs.com> jo...@MCS.COM (Jorn Barger)
provides a wonderfully paranoid distillation of my postings over the
past ten days.

>- Blithe unconcern for the truth of personal attacks:

Barger says I accused him of arrogance.

Guilty! But this seems to show *concern* for the truth, not
lack of it. : )

>- Rationalizing prejudices:

Barger implies that I don't think "It sounds good" is enough to
warrant investigation of a new AI idea.

Guilty! And glad of it.

>- Rationalizing obscurantism:

I'm not sure, but I think that to use the word "obscurantism" is
to rationalize it. : )

Barger's objection here is that I think my primary job is to
talk with other scientists; dissemination to the public is
secondary.

Secondary but still important. My intro AI text is probably the most
accessible of all the books out there. And it actually includes a
chapter (the last) on the public responsibilities of scientists. In
the preface, I try to explain why textbook or lay explanations
shouldn't focus on the most recent scientific progress.

Good news, Jorn! The textbook is available in a form other than
postscript.

Matt Ginsberg


Jorn Barger

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to

Not at all arrogantly, and without a trace of intellectual dishonesty,
Matt Ginsberg writes:
>In article <3t10kr$9...@Mercury.mcs.com> jo...@MCS.COM (Jorn Barger)
>provides a wonderfully paranoid distillation of my postings over the
>past ten days.

Matt, you may disagree with the interpretations I put on your words, but
by using the term 'paranoid' you seem to be not disagreeing but rather
squelching debate via ad hominems.

>>- Blithe unconcern for the truth of personal attacks:
>Barger says I accused him of arrogance.

Matt, do you care in the least whether you're being truthful with this
paraphrase? My quoting showed these untruthful attacks:

> [You] expect the whole scientific community to learn your language


> you seem completely uninterested in anyone else's results

> one guy talks and never bothers to listen

> you don't read anything

Are you blithely unconcerned that these personal attacks are untrue?

>>- Rationalizing prejudices:
>Barger implies that I don't think "It sounds good" is enough to
>warrant investigation of a new AI idea.

Matt, do you care in the least whether you're being truthful with this
paraphrase?

(And do you intend to ignore Fritz's testimony about this, as well?)

>>- Rationalizing obscurantism:
>I'm not sure, but I think that to use the word "obscurantism" is
>to rationalize it. : )

Yes, it has more than three syllables, but I expected it to declare
its own meaning within the first two. (If that's a problem for you,
let me know.) : /

>Barger's objection here is that I think my primary job is to
>talk with other scientists; dissemination to the public is
>secondary.

Matt, do you care in the least whether you're being truthful with this
paraphrase?

I quoted you saying:


>But by making the material available, I've done *my* job of
>disseminating scientific results

...along with two other quotes that suggested dissemination to the
public was too difficult for you at this point. I consider that claim
intellectually dishonest, in any context...

I get the feeling you're playing a little game of never admit, never
apologise...


j

Philip Jackson

unread,
Jul 1, 1995, 3:00:00 AM7/1/95
to
In article <804536...@longley.demon.co.uk>, David Longley writes:

>In article <1995Jun30.0...@media.mit.edu>
> min...@media.mit.edu "Marvin Minsky" writes:
>
>> In article <804472...@longley.demon.co.uk> Da...@longley.demon.co.uk
writes:
>>
>> >Minsky and Papert of course made much of the failure of single layer
>> >neural networks to model the XOR function. They also made much of the
>> >possibility of real systems being built from multiple agencies opaque
>> >to one another. I'm intrigued by the assertion that FOL can not handle
>> >'connectedness', and that Minsky and Papert's critique of Perceptrons
>> >was largely based on the failure of single layer perceptrons to solve
>> >such problems. Was the Minsky & Papert critique also a critique of FOL
>> >as an adequate language for AI by the same token?
>>
>> Hmmm. It's not entirely unrelated, to use a waffling phrase. In
>> fact, virtually all the theorems in that Perceptrons book also apply
>> to n+1-layer nets as well.
>>

><snip>
>>
>> >My basic question is for help with the explication of the above, ie if
>> >it sheds light on why Minsky was led to the Society of Mind, and also,
>> >if the material I have cited on the Fragmentation of Behaviour should
>> >be seen as a positive critique of some of the central tenets of
>> >Cognitive Science, particularly, the rationality assumption.
>>
>> I don't think this was what led me and Papert to SoM; rather, it was
>> more the question of why children were so able to use "common sense"
>> despite Piaget's convincing demonstrations that they seemed unable to
>> reliably use relatively simple formal reasoning methods even past the
>> age of 10 years (and in most cases, in later life as well).
>> Certainly, we never dreamed of making any assumptions about
>> rationality -- if by that you mean logical inference rather than
>> plausible inference.
>>
>
>If we accept that humans (of all ages) have great difficulty
>using the *formal* rules of logic to reason (Piaget, Wason and
>others) then whilst McCarthy and others may be incorrect IFF they
>take human performance as a reference for AI models, that might
>just be taken to suggest that his efforts to accommodate the
>intensional idioms within FOL is misdirected, but that FOL (and
>possibly its developments - I am not equipped to judge) *is* the
>appropriate language for AI researchers to embrace (as many AI
>text books suggest).
>

(My textbook does not suggest this, incidentally -- I noted the need for
higher order logics ;-)

Just because humans may find it difficult to reason formally in a conscious
way does not mean they do not do so efficiently at some lower, subconscious
level, i.e. perhaps some reasoning processes operate in a formal way at a
lower level.

Alternatively, in other recent posts, I've suggested that AI'ers need to
investigate "metaphoric reasoning" that can be imprecise and informal, yet
become formal and precise when necessary...


>But wouldn't that make the SoM approach more palatable/at home in
>neuroscience and psychology rather than AI? That is, the ideas
>expressed in SoM and the ambitions of the 'connectionists' might
>be seen as research into the *limitations* of human performance
>(the opacity problem/failure to make the connections) and the
>explication of some of the constraints of our neurology - whilst
>McCarthy's approach, pruned of recent attempts to accommodate
>intension within First Order theory, might ultimately deliver
>more of the formal, but definitely not human-like, logically
>reasoning systems which we have already seen (resolution etc.).
>
>
>(if this makes no sense, put it down to the freak heatwave
>addling my brains - it's been 90 degrees in London today)
>

Well, in a heat wave I suggest finding a pub that serves some *cold* beer!
(I recall there are some in London ;-)

>David Longley
>

Other than the metaphorical use of the word 'connected', there does not
seem to be any connection between the connectedness predicate in predicate
logic and the opacity problem / failure to make connections involved in
questions of intensional and extensional logic.

Cheers!

Phil Jackson

Ilias Kastanas

unread,
Jul 2, 1995, 3:00:00 AM7/2/95
to
In article <804498...@longley.demon.co.uk>,
David Longley <Da...@longley.demon.co.uk> wrote:
>In article <3svu91$d...@news.eecs.nwu.edu>
> i...@eecs.nwu.edu "Ian Sutherland" writes:
>
>> In article <804472...@longley.demon.co.uk>,

>> David Longley <Da...@longley.demon.co.uk> wrote:
>> >I'm intrigued by the assertion that FOL can not handle
>> >'connectedness'
>>
>> I think it might be worthwhile for you to try to say
>> precisely what notion of "handle" you're using when you
>> talk about "handling" connectedness. It seems entirely
>> possible that Minsky/Papert might be using one notion and
>> Quine another.
>> --
>> Ian Sutherland
>> i...@eecs.nwu.edu
>>
>> Sans Peur
>>
>I'm sure you are correct to criticize my vague use of "handling". I
>should have been more careful. Thanks for pointing this out.
>

[ . . . ]

Let us demystify.

Connectedness cannot be expressed in the first-order language of the
ambient structure (graphs, say). Consider *all finite graphs*: you
cannot characterize the connected ones.

You can say E(a, b), true if there is an edge from "a" to "b". But you
also need to say: "a" to "b" and "b" to "c"; "a" to "b" and "b" to "c"
and "c" to "d"; ... and so on. There is no bound. End of story.

(If so inclined, you can turn the above into a rigorous proof that any
formula will "miss" on some big enough graphs.)

On the other hand, give yourself Set Theory, or appropriate fragments
thereof, and you will have no difficulty handling, in first order, not
only connectedness but measurable sets, analytic functions, and
everything else.

In reality, we are talking about the Transitive Closure operation and
its non-expressibility in first order; the finitary analogue of
Well-foundedness of recursive (even arithmetical) trees being Pi-1-1
complete, and thus beyond the arithmetical hierarchy levels, even the
transfinite ones.

How did neural networks get involved? The elusive XOR is an accident
of linear algebra! (Actually, generalized as Parity it is the typical
"hard" n-ary relation.)

One obtains Transitive Closure, and other "higher type" entities, as
fixed points of first-order formulas, thought of as operators. It is
an elegant approach to generalized computability theory, and has
connections with P and NP.

Ilias
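
The fixed-point construction can be spelled out in a few lines. A sketch
(hypothetical code, not from the thread): each application of the operator
T(R) = E together with the composition of E and R is first-order
definable, but reaching the closure takes an unbounded iteration, which
is exactly what a single first-order formula cannot supply over all
finite graphs.

```python
def transitive_closure(edges):
    # least fixed point of T(R) = E union (R composed with E);
    # each stage is first-order definable, the limit is not
    closure = set(edges)
    while True:
        grown = closure | {(a, d) for (a, b) in closure
                           for (c, d) in edges if b == c}
        if grown == closure:
            return closure
        closure = grown

def connected(nodes, edges):
    # undirected connectedness: close the symmetrized edge relation
    sym = set(edges) | {(b, a) for (a, b) in edges}
    tc = transitive_closure(sym)
    return all(a == b or (a, b) in tc for a in nodes for b in nodes)

print(connected({1, 2, 3}, {(1, 2), (2, 3)}))
print(connected({1, 2, 3, 4}, {(1, 2), (3, 4)}))
```

The `while` loop is the whole point: its number of iterations grows with
the graph, which is why no fixed formula can replace it.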


Calvin Bruce Ostrum

unread,
Jul 2, 1995, 3:00:00 AM7/2/95
to
In article <3t1lef$4...@percy.cs.bham.ac.uk>,
Aaron Sloman <A.Sl...@cs.bham.ac.uk> wrote:

| David Longley <Da...@longley.demon.co.uk> writes:
| > ...Psychological terms, being intensional, resist substitutivity
| > of identicals 'salva veritate', and as a psychologist, I find this
| > fascinating and worthy of considerable elaboration.
| I suspect you have some notion that this is unique to psychology,
| and that your fascination has to do with a long term agenda that is
| misguided.

Do you have any ideas as to what this "long term agenda" might be?
I'm imagining some sort of Panopticon with Longley and other "behavior
scientists" at its center. A dystopia, to be sure.

---------------------------------------------------------------------------
Calvin Ostrum c...@cs.toronto.edu
---------------------------------------------------------------------------
I mean, if all you want to do is to be able to predict your experiences,
the rational strategy is clear: Don't revise your theories, just *arrange
to have fewer experiences*; close your eyes, put your fingers in your ears,
and don't move. Now, why didn't Newton think of that?
-- Jerry Fodor, "The Dogma that Didn't Bark"

David Longley

unread,
Jul 2, 1995, 3:00:00 AM7/2/95
to
For those curious as to where the anti-intensionalist stance
might go in practice, we have developed a simple FOL (rDBMS)
based system designed to allow the collation of Observation
Statements as Occasion Sentences (Quine 1992) in the interest of
building deductive reports on behavioural attainment. The simple
idea is to let formal systems collate observations and subject
those observations to extensional (deductive and actuarial)
analysis rather than rely on intensional heuristics. Interested
readers can contact me via e-mail if they want an outline. Below
are a few comments on how well our intensional idioms have served us
relative to the formal systems we have managed to build on the
basis of 'non-natural' intelligence.
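As a purely hypothetical miniature of the kind of extensional
collation described above (the tuples, field names and events are all
invented for illustration; this is not the actual system), observation
statements sit as plain relational rows, and reports are derived by
deduction and counting over them, with no intensional attribution:

```python
# Observation statements recorded as relational tuples:
# (subject, date, observed event). All values are invented examples.
observations = [
    ("inmate_17", "1995-06-01", "attended_class"),
    ("inmate_17", "1995-06-08", "attended_class"),
    ("inmate_17", "1995-06-15", "missed_class"),
    ("inmate_23", "1995-06-01", "attended_class"),
]

def attainment_report(obs):
    # Purely extensional summary: counts over the recorded occasion
    # sentences, with nothing inferred about beliefs or intentions.
    report = {}
    for subject, _date, event in obs:
        counts = report.setdefault(subject, {})
        counts[event] = counts.get(event, 0) + 1
    return report

print(attainment_report(observations))
```

The same derivation could of course be a GROUP BY in any rDBMS; the
point is only that the report is a deductive function of the recorded
tuples and of nothing else.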

'We consider ourselves distinguished from the ape by the
power of thought. We do not remember that it is like the
power of walking in the one-year old. We think, it is
true, but we think so badly that I often feel it would
be better if we did not.'

B Russell
(in Faith and Mountains
cited in R M Dawes (1988)
Rational Choice in an Uncertain World)

o o o

'The point of thinking in terms of propositional
attitudes even where no neat sentences of propositional
attitude can be produced is that Intensional objects,
even under the linguistic interpretation given them
here, lead almost inexorably to metaphysical excess, and
the characteristic of these objects that accounts for
this is one that it can be argued serves precisely to
show that Intentional objects are not any kind of
objects at all. That characteristic is the dependence of
Intentional objects on particular descriptions. As
criterion (3) indicates, to change the description is to
change the object. What sort of thing is a different
thing under different descriptions? Not any object.'

'The going scheme of logic, the logic that both works
and is generally supposed to suffice for all scientific
discourse (and, some hold, all significant discourse),
is extensional. That is, the logic is blind to
intensional distinctions; the intersubstitution of
coextensive terms, regardless of their intensions, does
not affect the truth value (truth or falsity) of the
enclosing sentence. Moreover, the truth value of a
complex sentence is always a function of the truth
values of its component sentences. Criteria (2) and (3)
indicate that Intentional sentences do not follow the
rules of extensional, truth functional logic, and hence
they are intensional. This expression of the position
leads us to the central claim of the Intentionalists,
that Intentional phenomena are absolutely irreducible to
physical phenomena. Put in terms of sentences, the claim
is that Intentional sentences cannot be reduced to or
paraphrased into extensional sentences about the
physical world. The claim goes beyond the obvious fact
that Intentional sentences are intensional, and hence
cannot be, as they stand, extensional - to the more
remarkable claim that no sentence or sentences can be
found which adequately reproduce the information of an
Intentional sentence and still conform to extensional
logic.'

D Dennett (1969)
Content and Consciousness, p28-30

o o o

'Regardless of how much we stand to gain from supposing
that human behavior is the proper subject matter of a
science, no one who is a product of Western civilization
can do so without a struggle. We simply do not want such
a science.'

B F Skinner (1953)
Can Science Help? - The Threat to Freedom
(in Science and Human Behavior p.7)

o o o

'Thesis: Owing to the abusive reliance upon significance
testing - rather than point or interval estimation,
curve shape, or ordination - in the social sciences, the
usual article summarizing the state of the evidence on a
theory (such as appears in the Psychological Bulletin)
is nearly useless.'

P E Meehl (1986)
What Social Scientists Don't Understand
(in Metatheory in Social Science
Eds D W Fiske & R A Shweder p.325)

o o o

'We think of a science as comprising those truths which
are expressible in terms of 'and', 'not', quantifiers,
variables, and certain predicates appropriate to the
science in question....To specify a science, within the
described mold, we still have to say what the predicates
are to be, and what the domain of objects is to be over
which the variables of quantification range.'

W.V.O. Quine (1954)
The Scope and Language of Science
(in The Ways of Paradox and Other Essays p.242)

o o o

'The prevalent tendency to underweigh or ignore
distributional information is perhaps the major error of
intuitive prediction. The consideration of
distributional information, of course, does not
guarantee the accuracy of forecasts. It does, however,
provide some protection against completely unrealistic
predictions. The analyst should therefore make every
effort to frame the forecasting problem so as to
facilitate utilising all the distributional information
that is available to the expert.'

D. Kahneman and A. Tversky (1982)
Intuitive prediction: Biases and corrective procedures
(in Kahneman, Slovic and Tversky 1982)

o o o

'Finally, actuarial methods - at least within the
domains discussed in this article - reveal the upper
bounds in our current capacities to predict human
behavior. An awareness of the modest results that are
often achieved by even the best available methods can
help to counter unrealistic faith in our predictive
powers and our understanding of human behavior. It may
well be worth exchanging inflated beliefs for an
unsettling sobriety, if the result is an openness to new
approaches and variables that ultimately increase our
explanatory and predictive powers.'

R Dawes, D Faust & P Meehl (1989)
Clinical Versus Actuarial Judgement
(in Science v243, p1668-1674)

o o o

'Thus we have arrived at something fundamental: our
conventions regarding the use of the words "not" and
"or" are such that in asserting the two propositions
"object A is either red or blue" and "object A is not
red," I have implicitly already asserted "object A is
blue." This is the essence of so-called *logical
deduction*. It is not, then, in any way based on real
connections between states of affairs, which we
apprehend in thought. On the contrary, it has nothing at
all to do with the nature of things, but derives from our
manner of speaking about things. A person who refused to
recognize logical deduction would not thereby manifest a
different belief from mine about the behaviour of
things, but he would refuse to speak about things
according to the same rules as I do. I could not
convince him, but I could refuse to speak with him any
longer, just as I should refuse to play chess with a
partner who insisted on moving the bishop orthogonally.

What logical deduction accomplishes, then, is this: it
makes us aware of all that we have implicitly asserted -
on the basis of conventions regarding the use of
language - in asserting a system of propositions, just
as, in the above example, "object A is blue" is
implicitly asserted by the assertion of the two
propositions "object A is red or blue" and "object A is
not red."

In saying this we have already suggested the answer to
the question, which naturally must have forced itself on
the mind of every reader who has followed our argument:
if it is really the case that the propositions of logic
are tautologies, that they say nothing about objects,
what purpose does logic serve?

...logical propositions, though being purely
tautologous, and logical deductions, though being
nothing but tautological transformations, have
significance for us *because we are not omniscient*. Our
language is so constituted that in asserting such and
such propositions we implicitly assert such and such
other propositions - but we do not see immediately all
that we have implicitly asserted in this manner. It is
only logical deduction which makes us conscious of it.

If I have succeeded in clarifying somewhat the role of
logic, I may now be brief about the role of mathematics.
The propositions of mathematics are of exactly the same
kind as the propositions of logic: they are tautologous,
they say nothing at all about the objects we want to
speak about, but concern only the manner in which we
want to speak of them....We become aware of meaning the
same by "2+3" and by "5", by going back to the meanings
of "2," "3," "5," "+," and making tautological
transformations until we just see that "2+3" means the
same as "5". It is such successive tautological
transformation that is meant by "calculating"; the
operations of addition and multiplication which are
learned in school are directives for such tautological
transformation; every mathematical proof is a succession
of such tautological transformations. Their utility,
again, is due to the fact that, for example, we do not
by any means see immediately that we mean by "24 x 31"
the same as by "744"; but if we calculate the product
"24 x 31", then we transform it step by step, in such a
way that in each individual transformation we recognize
that on the basis of the conventions regarding the use
of the signs involved (in this case numerals and the
signs "+" and "x") what we mean after the transformation
is still the same as what we meant before it, until
finally we become consciously aware of meaning the same
by "744" and by "24 x 31."

...at first glance it is difficult to believe that the
whole of mathematics, with its theorems that it cost
such labour to establish, with its results that so often
surprise us, should admit of being resolved into
tautologies. But there is just one little point which
this argument overlooks: it overlooks the fact that we
are not omniscient. An omniscient being, indeed, would
at once know everything that is implicitly contained in
the assertion of a few propositions. It would know
immediately that on the basis of the conventions
concerning the use of the numerals and the
multiplication sign, "24 x 31" is synonymous with "744".
An omniscient being has no need for logic and
mathematics. We ourselves, however, first have to make
ourselves conscious of this by successive tautological
transformations, and hence it may prove quite surprising
to us that in asserting a few propositions we have
implicitly also asserted a proposition which seemingly
is entirely different from them, or that we do mean the
same by two complexes of symbols which are externally
altogether different.'

H Hahn (1933)
Logic, Mathematics and Knowledge of Nature
In Ayer (Ed) Logical Positivism (1959)

o o o

'The most characteristic thing about mental life, over
and beyond the fact that one apprehends the events of
the world around one, is that one constantly goes beyond
the information given'.

J Bruner (1957)
Going Beyond The Information Given
(in H Gruber and others (eds)
Contemporary Approaches to Cognition)

o o o

--
David Longley

David Longley

unread,
Jul 3, 1995, 3:00:00 AM7/3/95
to
In article <3t867s$4...@nntp1.u.washington.edu>
for...@cac.washington.edu "Gary Forbis " writes:

> Maybe David wants to avoid the use of intensional language because of
> referential opacity.
>
> How are we to know different individuals mean the same thing when they use
> intensional terms? I would suppose we could require the presence of certain
> behavior prior to attribution, but then we might as well refer to the
> behavior and avoid the intensional language altogether.
>
> I was impressed by the way, on a local television channel, two clinicians
> attributed nearly diametrically opposed views to Michael and Priscilla after
> the national interview. Surely the means by which they arrived at their views
> must be contained in them rather than in Jackson and Presley. How then can
> such a method be more precise than, or even equivalent to, a strict behavioral
> account?

<snip>

Yes - that's it. The goal is to reduce variance in the data, thereby making
the reports and analyses more reliable and (in some cases legally) defensible.
The alternative seems to lead inevitably to creative writing, rhetoric or
authoritative fiction. I think the only reason so many get away with it is
that generally we lack the actual behavioural facts, and those presenting
their fanciful notions *sound* confident. Folk psychology is the natural,
socially acceptable vernacular. It is rude to break its rules. But the work
of much of empirical psychology over the past 25 years (Tversky and Kahneman,
Ross and Nisbett, Dawes, Faust and Meehl) has shown us how poor folk psychology
actually is.

It doesn't just come down to Quine. Quine simply provides some additional
evidence from LOGIC. But that evidence can be used to integrate much of the
other evidence - and that's useful - empirically and pragmatically.

--
David Longley

Oliver Sparrow

unread,
Jul 3, 1995, 3:00:00 AM7/3/95
to
What a pattering of tiny fingers there has been over the recent weekend. A
prime concern that they have drummed out seems to be the following: that we
may be building untested but implicit assumptions into our social engines -
those systems, programmes, processes and laws by which we govern our affairs.
The rest of the discussion seems to revolve around whether there are ways of
making such tests that go beyond the phenomenological happenstance.

> "David Longley" writes:

> ........, we have developed a simple ... system... The simple


> idea is to let formal systems collate observations and subject
> those observations to extensional (deductive and actuarial)
> analysis rather than rely on intensional heuristics.

He has said much the same elsewhere, and I noted the homology that this had
to the history of econometrics. I further noted that this approach
had failed in that august - or at any rate, once well-paid - discipline.
Consider: the very data that you collect - 'rates of recidivism', 'age structure
at first recorded offence' - are themselves deeply embedded in the issues with
which you are concerned. Throw in 'washing machine sales in Taiwan', 'turtle
egg laying rates in Pago Pago', whether the convicted individual 'dresses' to
the left or right, and you will get correlations; and will be alarmed to find
that the resultant mathematical space captures much of the observed variance.
You will rightly feel that this tells you nothing: it is not a useful model, an
'explanation', because it is not in harmony with the mental model that you have
of the system under test. If one has taken the position that one is not
prepared to *have* a mental model and that the data (quotations or not) are
supreme, then this places one in a difficult cul-de-sac.
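The warning is easy to illustrate with an invented miniature (none of
these numbers come from the post): give any short outcome series a
sufficiently flexible model on an irrelevant regressor and the
in-sample "fit" is perfect, while explaining nothing.

```python
import random

random.seed(42)

def lagrange_fit(xs, ys):
    # Return the degree-(n-1) polynomial interpolating the points exactly.
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# An arbitrary, irrelevant regressor (stand-in for turtle egg counts)
# against a random "outcome" series with no structure at all.
xs = [float(i) for i in range(8)]
ys = [random.gauss(0, 1) for _ in range(8)]
model = lagrange_fit(xs, ys)

# The fit reproduces every observation, so in-sample R^2 = 1 --
# yet the model explains nothing and predicts nothing.
residuals = [abs(model(x) - y) for x, y in zip(xs, ys)]
print(max(residuals) < 1e-9)  # True: perfect in-sample "fit"
```

A polynomial with as many coefficients as observations always
"captures all the variance"; the same thing happens, more insidiously,
when one throws enough arbitrary regressors into an econometric
regression.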

The realm in which econometrics is helpful is, as with all science, in serving
as a probe as to the validity of a well-defined model that one has of what is
going on. I posted the example of a century of US energy demand, in which the
model tells one to look hard at what was happening in the coal industry in the
1918 period; and when one does so, one finds a protracted period of deeply
disruptive labour relations. Adding numerical proxies for this offers a far
better model, and allows one to understand the evolution of the US energy
industry far better. So with quarks and markets, enzymes and elephants.

If we are going to be pragmatists - which is what all this quining about
intensionality is about - then we should take note of what actually happens
when we humans run our affairs as well as we can; which is by definition the
best *factual* model to hand. What we do is we iterate. We suck it and we see.
We build up rules of thumb and piecemeal observation, achieve major
breakthroughs - phrenology is the ultimate guide to criminology! - which we
subsequently bin; and generally proceed as best we may.

I suggest that the prime reason that philosophers have sunk in general esteem
is not that they operate beyond the run of practical minds but that they pay
so little regard to the complexity of the real world. We look elsewhere for our
grand syntheses, therefore: to kaizen, to hard work on real projects, to facts
and creative connections. We know what we know about cosmology from deduction
and inference, analysis and observation, fact and fancy: we are aware that we
do not know it all and that what we do know contains inconsistencies. The model
is, however, useful and the best game in town; and we expect it to be rather
different a decade hence.

In general, however, the entities with which we have to work in the real world
of affairs are not logical constructs and do not fall into neat partitioned
little boxes about which we can say if, then, else. Rather, they are
intersections of systems of interaction of sufficient density that it is
helpful to call them things: firms and organisations, human relationships,
social balances and patterns that we recognise because we have educated
ourselves to do so. Many are self-referential, non-linear and deeply embedded
in the system from which they sprang. They can be analysed in no other way.

Data is made into information by context: "42" may be noise or a datum; but it
becomes information only when set into an apposite context, such as asking
"what is my room number, please?" One cannot avoid this necessity by quoting
people's words verbatim, communication by token. All information exchange
implies contextuality: even the receptor on the surface of a cell has
its evolutionary or design context. Where two agents are involved in what it
is useful to call dialogue, then these must either share a preset context
- mating insects - or be generating the context from what they both know of
the other's internal state and general situation: "what's my room number,
please?"

_________________________________________________

Oliver Sparrow
oh...@chatham.demon.co.uk
