I'm about to publish both papers, but on the off chance that
there is still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's interested).
(Preprint available from author)
MINDS, MACHINES AND SEARLE
Behavioral & Brain Sciences
20 Nassau Street
Princeton, NJ 08542
Summary and Conclusions:
Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:
1. Simulation versus Implementation:
Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.
2. Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.
3. The Convergence Argument:
Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not modules.
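(An illustrative aside, not from the abstract itself: the shrinking of
degrees of freedom with growing data can be shown in a few lines of
Python, assuming numpy; the polynomial-fitting setup is only an analogy
for "equiparametric theories fitting the same data.")

    # "Theories" here are degree-5 polynomials: 6 free parameters each.
    # The null space of the design matrix counts the independent ways a
    # rival, equally data-compatible theory can differ from a given one.
    import numpy as np

    for n_data in (2, 4, 6, 20):
        x = np.linspace(0.0, 1.0, n_data)        # observation points
        V = np.vander(x, 6)                      # one column per free parameter
        free = 6 - np.linalg.matrix_rank(V)      # leftover degrees of freedom
        print(n_data, "data points ->", free, "free dimensions for rival theories")

With 2 or 4 observations, many distinct parameter settings fit the data
exactly; with 6 or more distinct observations the fit is unique, just as
the space of candidate cognitive theories is meant to shrink as the
performance data approach Total-Turing-Test scope.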
4. Brain Modeling versus Mind Modeling:
Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.
5. The Modularity Assumption:
Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.
6. The Teletype versus the Robot Turing Test:
Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).
7. The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented, symbolically.
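(An illustrative aside, assuming Python; the "photodetector" and its
numbers are invented for the example. A simulated transducer is just a
mapping from numbers to numbers: its "input" is already a symbol handed
to it by something else, whereas a real transducer's input is physical
energy, which no amount of symbol manipulation supplies.)

    # Hypothetical "simulated photodetector": quantizes a number that merely
    # stands in for light intensity; nothing physical is transduced.
    def simulated_transducer(intensity, levels=256):
        clipped = min(max(intensity, 0.0), 1.0)   # pretend-analog value in [0, 1]
        return round(clipped * (levels - 1))      # "analog-to-digital" step on a symbol

    print(simulated_transducer(0.73))             # -> 186; the 0.73 was typed in, not sensed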
8. Robotics and Causality:
Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).
9. Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either. Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.
10. "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive scientists.

Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)
CATEGORY INDUCTION AND REPRESENTATION
Behavioral & Brain Sciences
20 Nassau Street
Princeton NJ 08542
Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs. Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.
"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.
There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color? (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?
Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
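(A toy sketch, assuming Python; the one-dimensional stimuli, the 0.5
boundary, and the category names are invented for illustration and are
not part of the model itself. It puts the three kinds of representation
side by side: graded iconic similarity, an all-or-none categorical
feature-filter attached to a name, and names composed into a symbolic
description.)

    # Toy stimuli on one confusable dimension; adjacent pairs are equally
    # spaced physically, and the values are arbitrary.
    stimuli = {"s1": 0.40, "s2": 0.48, "s3": 0.56, "s4": 0.64}

    # (1) Iconic representation: an analog copy subserving similarity judgments.
    def iconic_similarity(a, b):
        return 1.0 - abs(a - b)                   # graded, continuous

    # (2) Categorical representation: a feature-filter keyed to a provisionally
    #     invariant feature (here just "value >= 0.5"), giving a bounded label.
    def category_label(x):
        return "DAX" if x >= 0.5 else "BIX"       # hypothetical category names

    # (3) Symbolic representation: the names serve as atomic symbols in
    #     descriptions, allowing learning by "description".
    rule = "anything that is not a BIX is a DAX"

    print(iconic_similarity(stimuli["s2"], stimuli["s3"]))              # ~0.92
    print(category_label(stimuli["s2"]), category_label(stimuli["s3"])) # BIX DAX

All adjacent pairs are equally far apart physically, so their iconic
similarities are identical; only the s2/s3 pair straddles the
feature-filter's boundary and so receives different names. That is where
the all-or-none, bounded character of CP categories enters.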
This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.
>> [The turing test] should be timed as well as checked for accuracy...
>> Turing would want a degree of humor...
>> check for `personal values,' `compassion,'...
>> should have a degree of dynamic problem solving...
>> a whole body of psychometric literature which Turing did not consult.
>I think that these details are premature and arbitrary. We all know
>(well enough) what people can DO: They can discriminate, categorize,
>manipulate, identify and describe objects and events in the world, and
>they can respond appropriately to such descriptions.
Just who is being arbitrary here? Qualities like humor, compassion,
artistic creativity and the like are precisely those which many of us
consider to be those most characteristic of mind! As to the
"prematurity" of all this, you seem to have suddenly and most
conveniently forgotten that you were speaking of a "total turing
test" -- I presume an ultimate test that would encompass all that we
mean when we speak of something as having a "mind", a test that is
actually a generations-long research program.
As to whether or not "we all know what people do", I'm sure our
cognitive science people are just *aching* to have you come and tell
them that us humans "discriminate, categorize, manipulate, identify, and
describe". Just attach those pretty labels and the enormous preverbal
substratum of our consciousness just vanishes! Right? Oh yeah, I suppose
you provide rigorous definitions for these terms -- in your as
yet unpublished paper...
>Now let's get devices to (1) do it all (formal component) and then
>let's see whether (2) there's anything that we can detect informally
>that distinguishes these devices from other people we judge to have
>minds BY EXACTLY THE SAME CRITERIA (namely, total performance
>capacity). If not, they are turing-indistinguishable and we have no
>non-arbitrary basis for singling them out as not having minds.
You have an awfully peculiar notion of what "total" and "arbitrary"
mean, Steve: it's not "arbitrary" to exclude those traits that most
of us regard highly in other beings whom we presume to have minds.
Nor is it "arbitrary" to exclude the future findings of brain
research concerning the nature of our so-called "minds". Yet you
presume to be describing a "total turing test".
May I suggest that what you are describing is not a "test for mind", but
rather a "test for simulated intelligence", and the reason you will
not or cannot distinguish between the two is that you would elevate
today's primitive state of technology to a fixed methodological
standard for future generations. If we cannot cope with the problem,
why, we'll just define it away! Right? Is this not, to paraphrase
Paul Feyerabend, incompetence upheld as a standard of excellence?
Blessed be you, mighty matter, irresistible march of evolution,
reality ever newborn; you who by constantly shattering our mental
categories force us to go further and further in our pursuit of the truth.
-Pierre Teilhard de Chardin "Hymn of the Universe"
> 1) I've always been somewhat suspicious about the Turing Test. (1/2 :-)
> a) does anyone out there have any good references regarding
> its shortcomings. :-|
John Searle's notorious "Chinese Room" argument has probably
drawn out more discussion on this topic in recent times than
anything else I can think of. As far as I can tell, there seems
to be no consensus of opinion on this issue, only a broad spectrum
of philosophical stances, some of them apparently quite angry
(Hofstadter, for example). The most complete presentation I have yet
encountered is in the journal Behavioral and Brain Sciences (1980),
with a complete statement of Searle's original argument,
responses by folks like Fodor, Rorty, McCarthy, Dennett, Hofstadter,
Eccles, etc., and Searle's counterresponse.
People frequently have misconceptions of just what Searle is arguing,
the most common of these being:
Machines cannot have minds.
What Searle really argues is that:
The relation (mind:brain :: software:hardware) is fallacious.
Computers cannot have minds solely by virtue of their running the right program.
His position seems to derive from his thoughts in the philosophy of
language, and in particular his notion of Intentionality.
Familiarity with the work of Frege, Russell, Wittgenstein, Quine,
Austin, Putnam, and Kripke would really be helpful if you are
interested in the motivation behind this concept, but Searle
maintains that his Chinese room argument makes sense without any of