The Symbol Grounding Problem


Stevan Harnad

May 9, 1987, 11:45:11 AM

To define a SUBsymbolic "level" rather than merely a NONsymbolic
process or phenomenon one needs a formal justification for the implied
up/down-ness of the relationship. In the paradigm case -- the
hardware/software distinction and the hierarchy of compiled
programming languages -- the requisite formal basis for the hierarchy is
quite explicit. It is the relation of compilation and implementation.
Higher-level languages are formally compiled into lower level ones
and the lowest is implemented as instructions that are executed by a
machine. Is there anything in the relation of connectionist processes
to symbolic ones that justifies calling the former "sub"-symbolic in
anything other than a hopeful metaphorical sense at this time?

The fact that IF neural processes are really connectionistic (an
empirical hypothesis) THEN connectionist models are implementable in
the brain defines a super/sub relationship between connectionist
models and neural processes (conditional, of course, on the validity
-- far from established or even suggested by existing evidence -- of
the empirical hypothesis), but this would still have no bearing on
whether connectionism can be considered to stand in a sub/super relationship
to a symbolic "level." There is of course also the fact that any discrete
physical process is formally equivalent in its input/output relations
to some Turing machine state, i.e., some symbolic state. But that would
make every such physical process "subsymbolic," so surely Turing
equivalence cannot be the requisite justification for the putative
subsymbolic status of connectionism in particular.

A fourth sense of down-up (besides hardware/software, neural
implementability and Turing equivalence) is psychophysical
down-upness. According to my own bottom-up model, presented in the book I
just edited (Categorical Perception, Cambridge University Press 1987),
symbols can be "grounded" in nonsymbolic representations in the
following specific way:

Sensory input generates (1) iconic representations -- continuous,
isomorphic analogs of the sensory surfaces. Iconic representations
subserve relative discrimination performance (telling pairs of things
apart and judging how similar they are).

Next, constraints on categorization (e.g., either natural
discontinuities in the input, innate discontinuities in the internal
representation, or, most important, discontinuities *learned* on the
basis of input sampling, sorting and labeling with feedback) generate
(2) categorical representations -- constructive A/D filters which preserve
the invariant sensory features that are sufficient to subserve reliable
categorization performance. [It is in the process of *finding* the
invariant features in a given context of confusable alternatives that I
believe connectionist processes may come in.] Categorical
representations subserve identification performance (sorting things
and naming them).

Finally, the *labels* of these labeled categories -- now *grounded*
bottom/up in nonsymbolic representations (iconic and categorical)
derived from sensory experience -- can then be combined and recombined
in (3) symbolic representations of the kind used (exclusively, and
without grounding) in contemporary symbolic AI approaches. Symbolic
representations subserve natural language and all knowledge and
learning by *description*, as opposed to direct experiential learning
by *acquaintance*.

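The three levels just described can be caricatured in code. The
following is only a toy sketch of the proposal: the stimuli, the
threshold filter, and all the names (`iconic`, `categorical`, `label`,
`describe`) are invented illustrations, not parts of the model itself.

```python
# Toy sketch of the three representational levels; every stimulus,
# threshold, and name below is an invented illustration, not the model.

def iconic(stimulus):
    """(1) Iconic: a continuous, isomorphic analog of the sensory surface."""
    return list(stimulus)

def categorical(stimulus, threshold=0.5):
    """(2) Categorical: an A/D filter keeping only the invariant features
    sufficient for reliable sorting."""
    return tuple(1 if x > threshold else 0 for x in iconic(stimulus))

LABELS = {}  # arbitrary symbol tokens, grounded via the categorical filter

def label(stimulus, name):
    """Sorting and labeling: attach an arbitrary token to a category."""
    LABELS[categorical(stimulus)] = name
    return name

def describe(*symbols):
    """(3) Symbolic: grounded tokens combined and recombined by syntax."""
    return " ".join(symbols)

# Discrimination (iconic level): judging how similar two inputs are.
def similarity(a, b):
    return -sum((x - y) ** 2 for x, y in zip(iconic(a), iconic(b)))

apple = label([0.9, 0.8, 0.1], "apple")
pear  = label([0.2, 0.9, 0.1], "pear")

# Identification (categorical level): a new input maps to a learned name.
print(LABELS[categorical([0.95, 0.7, 0.05])])  # -> apple
print(describe(apple, "is", "not", pear))      # -> apple is not pear
```

The point of the sketch is only the ordering: the symbol tokens at the
bottom of the pipeline are arbitrary, but their connection to input is
inherited from the nonsymbolic iconic and categorical stages.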
In response to my challenge to justify the "sub" in "subsymbolic" when
one wishes to characterize connectionism as subsymbolic rather than
just nonsymbolic, Rik Belew replies:

> I do intend something more than non-symbolic when I use the term
> sub-symbolic. I do not rely upon "hopeful neural analogies" or any
> other form of hardware/software distinction. I use "subsymbolic"
> to refer to a level of representation below the symbolic
> representations typically used in AI... I also intend to connote
> a supporting relationship between the levels, with subsymbolic
> representations being used to construct symbolic ones (as in subatomic).

The problem is that the "below" and the "supporting" are not cashed
in, and hence just seem to be synonyms for "sub," which remains to
be justified. An explicit bottom-up hypothesis is needed to
characterize just how the symbolic representations are constructed out
of the "subsymbolic" ones. (The "subatomic" analogy won't do,
otherwise atoms risk becoming subsymbolic too...) Dr. Belew expresses
some sympathy for my own grounding hypothesis, but it is not clear
that he is relying on it for the justification of his own "sub."
Moreover, this would make connectionism's subsymbolic status
conditional on the validity of a particular grounding hypothesis
(i.e., that three representational levels exist as I described them,
in the specific relation I described, and that connectionistic
processes are the means of extracting the invariant features underlying
the categorical [subsymbolic] representation). I would of course be
delighted if my hypothesis turned out to be right, but at this point
it still seems a rather risky "ground" for justifying the "sub" status of
connectionism.

> my interest in symbols began with the question of how a system might
> learn truly new symbols. I see nothing in the traditional AI
> definitions of symbol that helps me with that problem.

The traditional AI definition of symbol is simply arbitrary formal
tokens in a formal symbol system, governed by formal syntactic rules
for symbol manipulation. This general notion is not unique to AI but
comes from the formal theory of computation. There is certainly a
sense of "new" that this captures, namely, novel recombinations of
prior symbols, according to the syntactic rules for combination and
recombination. But that sense is surely too vague and general for, say,
human senses of symbol and new-symbol. In my model this combinatorial
property does make the production of new symbols possible, in a sense.
But combinatorics is limited by several factors. One factor is the grounding
problem, already discussed (symbols alone just generate an ungrounded,
formal syntactic circle that there is no way of breaking out of, just as
in trying to learn Chinese from a Chinese-Chinese dictionary alone). Other
limiting factors on combinatorics are combinatory explosion, the frame problem,
the credit assignment problem and all the other variants that I have
conjectured to be just different aspects of the problem of the
*underdetermination* of theory by data. Pure symbol combinatorics
certainly cannot contend with these. The final "newness" problem is of
course that of creativity -- the stuff that, by definition, is not
derivable by some prior rule from your existing symbolic repertoire. A
rule for handling that would be self-contradictory; the real source of
such newness is probably partly statistical, and again connectionism may
be one of the candidate components.
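
The Chinese-Chinese-dictionary point can be made concrete with a toy
sketch (all dictionary entries below are invented): chasing definitions
through a purely symbolic system either loops or dead-ends in more
symbols, unless at least one token is grounded nonsymbolically.

```python
# Toy "Chinese-Chinese dictionary": every symbol is defined only by
# other symbols. All entries below are invented for illustration.
DEFS = {
    "zebra":      ["horse", "striped"],
    "horse":      ["animal", "ridden"],
    "striped":    ["pattern"],
    "pattern":    ["regularity"],
    "regularity": ["pattern"],    # the circle closes
    "animal":     ["creature"],
    "creature":   ["animal"],     # another circle
    "ridden":     ["horse"],
}

def grounded(symbol, ground=frozenset()):
    """True only if some definitional chain escapes into a nonsymbolic
    'ground'; otherwise lookup is symbols all the way around."""
    stack, seen = [symbol], set()
    while stack:
        s = stack.pop()
        if s in ground:
            return True               # broke out of the syntactic circle
        if s in seen:
            continue                  # looped back to a symbol already seen
        seen.add(s)
        stack.extend(DEFS.get(s, []))
    return False

print(grounded("zebra"))                     # -> False: no exit from syntax
print(grounded("zebra", ground={"horse"}))   # -> True: one grounded token
```

Grounding even a single token is enough to break the circle for every
symbol whose definitional chain passes through it.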

> It seems very conceivable to me that the critical property we will
> choose to ascribe to computational objects in our systems symbols
> is that we (i.e., people) can understand their semantic content.

You are right, and what I had inadvertently left out of my prior
(standard) syntactic definition of symbols and symbol manipulation was
of course that the symbols and manipulations must be semantically
interpretable. Unfortunately, so far that further fact has only led to
Searlian mysteries about "intrinsic" vs. "derived intentionality" and
scepticism about the possibility of capturing mental processes
with computational ones. My grounding proposal is meant to answer
these as well.

> the fact that symbols must be grounded in the *experience* of the
> cognitive system suggests why symbols in artificial systems (like
> computers) will be fundamentally different from those arising in
> natural systems (like people)... if your grounding hypothesis is
> correct (as I believe it is) and the symbols thus generated are based
> in a fundamental way on the machine's experience, I see no reason to
> believe that the resulting symbols will be comprehensible to people.
> [e.g., interpretations of hidden units... as our systems get more
> complex]

This is why I've laid such emphasis on the "Total Turing Test": toy
models and modules, based on restricted data and performance
capacities, may simply not be representative of, or comparable to,
organisms' complexly interrelated robotic and symbolic
functional capacities. The experiential base -- and, more
important, the performance capacity -- must be comparable in a viable
model of cognition. On the other hand, the "experience" I'm talking
about is merely the direct (nonsymbolic) sensory input history, *not*
"conscious experience." I'm a methodological epiphenomenalist on
that. And I don't understand the part about the comprehensibility of
machine symbols to people. This may be the ambiguity of the symbolic
status of putative "subsymbolic" representations again.

> The experience lying behind a word like "apple" is so different
> for any human from that of any machine that I find it very unlikely
> that the "apple" symbol used by these two systems will be comparable.

I agree. But this is why I proposed that a candidate device must pass
the Total Turing Test in order to capture mental function.
Arbitrary pieces of performance could be accomplished in radically different
ways and would hence be noncomparable with our own.

> Based on the grounding hypothesis, if computers are ever to understand
> NL as fully as humans, they must have an equally vast corpus of
> experience from which to draw. We propose that the huge volumes of NL
> text managed by IR systems provide exactly the corpus of "experience"
> needed for such understanding. Each word in every document in an IR
> system constitutes a separate experiential "data point" about what
> that word means. (We also recognize, however, that the obvious
> differences between the text-based "experience" and the human
> experience also implies fundamental limits on NL understanding
> derived from this source.)... In this application the computer's
> experience of the world is second-hand, via documents written by
> people about the world and subsequently through users' queries of
> the system

We cannot be talking about the same grounding hypothesis, because mine
is based on *direct sensory experience* ("learning by acquaintance")
as opposed to the symbol combinations ("learning by description"),
with which it is explicitly contrasted, and which my hypothesis
claims must be *grounded* in the former. The difference between
text-based and sensory experience is crucial indeed, but for both
humans and machines. Sensory input is nonsymbolic and first-hand;
textual information is symbolic and second-hand. First things first.

> I'm a bit worried that there is a basic contradiction in grounded
> symbols. You are suggesting (and I've been agreeing) that the only
> useful notion of symbols requires that they have "inherent
> intentionality": i.e., that there is a relatively direct connection
> between them and the world they denote. Yet almost every definition
> of symbols requires that the correspondence between the symbol and
> its referent be *arbitrary*. It seems, therefore, that your "symbols"
> correspond more closely to *icons* (as defined by Peirce), which
> do have such direct correspondences, than to symbols. Would you agree?

I'm afraid I must disagree. As I indicated earlier, icons do indeed
play a role in my proposal, but they are not the symbols. They merely
provide part of the (nonsymbolic) *groundwork* for the symbols. The
symbol tokens are indeed arbitrary. Their relation to the world is
grounded in and mediated by the (nonsymbolic) iconic and categorical
representations.

> In terms of computerized knowledge representations, I think we have
> need of both icons and symbols...

And reliable categorical invariance filters. And a principled
bottom-up grounding relation among them.

> I see connectionist learning systems building representational objects
> that seem most like icons. I see traditional AI knowledge
> representation languages typically using symbols and indices. One of
> the questions that most interests me at the moment is the appropriate
> "ontogenetic ordering" for these three classes of representation.
> I think the answer would have clear consequences for this discussion
> of the relationship between connectionist and symbolic representations
> in AI.

I see analog transformations of the sensory surfaces as the best
candidates for icons, and connectionist learning systems as possible
candidates for the process that finds and extracts the invariant
features underlying categorical representations. I agree about traditional
AI and symbols, and my grounding hypothesis is intended as an answer about
the appropriate "ontogenetic ordering."
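
As a caricature of the kind of connectionist invariance-extraction
suggested here, a minimal perceptron sketch (the data, dimensions, and
learning rate are all invented): stimuli vary on three dimensions, only
one of which covaries with category membership, and feedback-driven
learning concentrates the weights on that invariant dimension.

```python
import random

# Minimal perceptron sketch of invariance extraction under confusable
# alternatives; every number below is an invented illustration.
random.seed(0)

def sample(category):
    """Three sensory dimensions; only dimension 0 covaries with the
    category (the invariant feature); the rest is confusable noise."""
    inv = 0.8 if category else 0.2
    return [inv + random.uniform(-0.1, 0.1), random.random(), random.random()]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1

# Sorting and labeling with feedback: classic perceptron updates.
for _ in range(2000):
    y = random.randint(0, 1)
    x = sample(y)
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    err = y - pred
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    b += lr * err

# The learned weights should concentrate on the invariant dimension.
print(abs(w[0]) > max(abs(w[1]), abs(w[2])))
```

Nothing in this toy settles whether connectionism can do the same at
life-size scale; it only illustrates what "finding the invariant
features" means in the simplest possible case.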

> Finally, this view also helps to characterize what I find missing
> in most *symbolic* approaches to machine learning: the world
> "experienced" by these systems is unrealistically barren, composed
> of relatively small numbers of relatively simple percepts (describing
> blocks-world arches, or poker hands, for example). The appealing
> aspect of connectionist learning systems (and other subsymbolic
> learning approaches...) is that they thrive in exactly those
> situations where the system's base of "experience" is richer by
> several orders of magnitude. This accounts for the basically
> *statistical* nature of these algorithms (to which you've referred),
> since they are attempting to build representations that account for
> statistically significant regularities in their massive base of
> experience.

Toy models and microworlds are indeed barren, unrealistic and probably
unrepresentative. We should work toward models that can pass the Total
Turing Test. Invariance-detection under conditions of high
interconfusability is indeed the problem of a device or organism that
learns its categories from experience. If connectionism turns out to
be able to do this on a life-size scale, it will certainly be a
powerful candidate component in the processes underlying our
representational architecture, especially the categorical level. What
that architecture is, and whether this is indeed the precise
justification for connectionism's "sub" status, remains to be seen.

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mi...@princeton.csnet har...@mind.Princeton.EDU
