
Summary: Direct Perception vs. Representationalism


Steven Lehar

Apr 28, 2005, 10:55:41 AM
We have had a full and informative exchange on the question of
Direct Perception vs. Representationalism, and it seems we have
pretty much exhausted that topic, as we are now just repeating
the same arguments over and over again. So I will bow out of the
discussion in a major way at this point, although I will answer any
residual questions that people might have. I have no more major
arguments to make on the subject. Most sincere thanks to all who
have contributed, and to all those lurkers out there who have found
this debate worth following. If nothing else, it is in my view a most
fascinating topic, and resolving this issue once and for all would be
of the greatest significance for philosophy, psychology, and
neuroscience. I hope we have advanced that cause if only by a small
notch.

I will take this opportunity to summarize the whole debate as I see it
from the representationalist perspective. The origin of direct
perception is naive realism. It is almost impossible to shake the very
vivid impression that what we see in experience is the world
itself. But this concept is profoundly at odds with the
neurophysiology of perception, with sense organs that transmit
information to the brain, an obviously representational system. One of
these views must be right and the other wrong, unless, as Andrew
Brook claims, the truth lies somewhere in between.

It turns out however that the direct perception view is incoherent,
whether in its pure or partial form, because it involves the organism
having knowledge of things which are not explicitly represented in its
brain. Proponents of direct perception, from Gibson onwards, must
sense some kind of a problem at least subconsciously, as seen in their
supreme confidence that their view is right, even though they cannot
articulate their position with sufficient specificity to explain
how an artificial robotic system could possibly be built that operates
by direct perception. Even more surprising, they cannot seem to
formulate any possible future experiment that would resolve the issue
definitively one way or the other. And strangest of all, they appear
to have a peculiar blindness to this gaping hole in their view of
perception, as if they simply cannot understand the objections of the
representationalists. It would be one thing if they acknowledged that
there is some kind of paradox involved but considered that paradox
more tolerable than the incredible representationalist
view. But direct perceptionists cannot seem to even bring themselves
to acknowledge that there is any kind of problem at all.

The more extreme form of direct perception espoused by Gibson is like
behaviorism: it is full of prohibited concepts and so-called "category
errors" that forbid one to acknowledge experience as a "thing", to
recognize our experience as pictorial, or to recognize that it has
spatial extent, when those facts are plain for all to see as soon as one
opens one's eyes. "Perception is behavior", Rickert tells us, and there is
supposedly no meaning to words like "information" and "representation"
and "processing", concepts which have become part of our everyday
lexicon since the arrival of computer technology in our lives. One
gets the sense that direct perception is a religion rather than a
scientific hypothesis, as seen in Gibson's stark refusal to even
discuss sensory processing at all.

Andrew Brook's more moderate concept of "direct representationalism"
sounds at first more reasonable, because he allows for representations
in the brain, but paradoxically, he insists that we view the world
"directly" *through* the representations in our brain, a concept that
strikes me as a contradiction in terms. If our brain uses
representations, then of course our experience consists of those
representations, and we cannot view the world directly except by way
of them. When questioned more closely, Brook wriggles and squirms
until his explanation becomes almost identical to a
representationalist thesis, with the sole exception that he continues
to insist that our experience of the world is direct, although he
cannot even explain exactly what he means by that term.

Until the *principle* of direct perception can be demonstrated in a
simple robot model, the concept is so vague and incoherent as to be
essentially meaningless.

Conscious experience has an information content, and information
cannot exist without a physical medium or carrier. That medium can
only be the brain, where the sensory nerves terminate, and from which
the motor nerves originate.

A theory that makes no predictions about possible future discoveries
in the brain is no scientific theory at all, but is more like an
article of faith.

Direct perception cannot explain the *ontology* of the vivid spatial
structure of visual experience, those volumetric objects bounded by
continuous colored surfaces that disappear when we close our
eyes. These are obviously a product of the brain, and yet they appear
out in the world. They can only be states of our own brain, and thus
must be located in our brain.

Like Behaviorism, direct perception only survives by simply
prohibiting discussion of concepts like information,
representation, and processing in the brain, and by prohibiting
discussion of the manifest properties of the vivid spatial structure
of experience.

The chief argument raised against representationalism is the question
of experience, and why we have it when our brain processes sensory
information. This is indeed a deep philosophical quandary. However we
know for a fact that experience does exist, and that the brain is the
organ of conscious experience. The only reasonable location for
experience is inside the brain as a representation.

We do not need "internal eyes" to "see" the representations in our
brain; we simply experience the structure of certain patterns of
energy in our brain directly. While this concept may seem deeply
troubling to some, it is nowhere near as troubling as the concept of
awareness of the world out beyond the brain, where there is no
computational or representational hardware to do the experiencing. Direct
perception does not offer any better explanation for the question of
experience, except by prohibiting discussion of
experience as a "category error".

O'Regan's concept of probing the external environment as if it were an
internal memory is demonstrably false, because the three-dimensional
spatial information of the external world is by no means immediately
available from glimpses of the world, but requires a most
sophisticated, as-yet undiscovered algorithm to decipher that
spatial information from the retinal input.

The absurdity of O'Regan's concept is highlighted by the condition of
visual agnosia, a failure of visual integration, because
apperceptive agnosia is the failure of a visual function whose
existence O'Regan effectively denies.

The question of direct perception vs. representationalism is not a
conceptual issue that is beyond the reach of science; it is a very
significant empirical issue with profound implications for the nature
of perceptual representation in the brain, and for our attempts to
replicate the principle of perception in artificial robot models.

The phenomena of dreams and hallucinations clearly demonstrate that
the brain is capable of constructing vivid spatial experiences in the
absence of an external world available for direct inspection. Perception
is a guided hallucination, constrained by sensory input.

In the face of this overwhelming array of indisputable evidence, how
can intelligent, educated people continue to insist that our
experience of the world is direct? The answer is the very vivid
impression that our experience simply *appears* direct. Some people
just cannot bring themselves to accept the view towards which all of
the evidence inevitably points. This issue is the ultimate example of
a paradigm debate, because seeing things from the representationalist
perspective requires that one invert one's entire epistemology to
recognize that the world which appears outside is actually inside
one's head. It is admittedly a difficult concept to swallow.

I have prepared an excerpted summary of this whole debate on-line, which
can be found at:

http://cns-alumni.bu.edu/~slehar/cartoonepist/EpistDebate.html

I will be travelling for the next few days, so I will be off-line
until the middle of next week.

Steve Lehar

Steven Ericsson-Zenith

Apr 28, 2005, 4:07:51 PM
Steven Lehar wrote:

> ... The origin of direct
> perception is naive realism. It is almost impossible to shake the very
> vivid impression that what we see in experience is the world
> itself. But this concept is profoundly at odds with the
> neurophysiology of perception, with sense organs that transmit
> information to the brain, an obviously representational system. ...

If we expand our consideration beyond the physiological structure
and neurobiology of our species, we can identify sense organs that
exist autonomously and without a brain.

Simon Conway Morris discusses examples in his "Life's Solution"
(Cambridge, 2003) - the following condensed from pages 154-155

"... jellyfish ... renowned both for their highly toxic stings and for
their remarkable eyes. The eyes are similar in construction to other
camera-eyes, with a large lens located in front of the retina but
separated by a layer of cells that may help in focusing. ...
While primitive eye spots are known in other cnidarians, at first
sight the sophistication of the cubozoan eyes, which typically
total eight around the margin of the swimming bell, is quite surprising.
Cubozoans, however, are active and highly agile swimmers, have obvious
visual acuity ... What is particularly interesting is the relative
simplicity of the nervous system, which consists of a nerve net
linked to a series of four pacemakers, a neural architecture that is
effectively imposed by the jellyfish body plan. Thus there is no brain,
yet complex eyes and sophisticated behavior ...
.. what is evident in these animals is that the nervous system
shows levels of autonomy, so that at times it can act as an
integrated network and in other circumstances show a greater degree of
independence and be capable of dealing with directional inputs."

See http://jeb.biologists.org/cgi/reprint/204/8/1413
for the Satterlie and Nolen paper that is the particular basis of
Conway Morris's observations - the paper discusses this biology in detail.

This simple architecture, from my PoV, illustrates the engineering of
sentience in which primitive representation and experience are unified
(what you have been calling direct) and not dependent on the
higher order functions of a complex brain. This does not enlighten us
about the secondary effects of short or long term memory (that IMHO
creates the representational effects that you observe) but it does
suggest that there is an immediacy and autonomy not apparently allowed
by the representational processes you advocate - and it would
tend to diminish arguments that advocate exclusively a centralized model
of self for the operation of a sentient organism.

I observe, in general, that the discussion of consciousness
is unnecessarily limited when we confine our consideration to
the biological systems of our own species and to
observations based on introspection and artifacts of language.

With respect,
Steven

--
Dr. Steven Ericsson-Zenith
http://www.semeiosis.com

Augustin Carreno

Apr 28, 2005, 4:07:49 PM
Steve Lehar has presented a strong defense of representation in his
summary. Unfortunately, as he himself has acknowledged previously, it
is unlikely to change many minds. Many people have
suggested that maybe there is a way to reconcile the opposing views of
direct vs indirect perception, and this seems desirable since the
absence of empirical evidence that would clearly determine the correct
position suggests that there is a possibility that both approaches
could be wrong, or both could be right. I submit that both could be
right.

I think part of the confusion arises from the fact that both
approaches start by assuming that the outside world is a given.
Although this seems reasonable and intuitive, it is also too strong a
claim to be a useful postulate because it allows for the introduction
of a number of concepts such as "experience," "information," and so
on, too early in the game. You don't begin to prove a theorem -- and
this is basically what the debaters are trying to do, albeit without
mathematical terminology -- by using concepts you haven't defined in
your axioms.

The problem with taking the outside world as a given is that it is hard to
justify, by either representation or direct perception, what it is that
the brain is supposed to do. I mean, the problem of consciousness is
basically finding out how the world got to be out there for us to
experience. Answer that and you know how the brain works. But, if we
stipulate that the world is indeed already there, then the advantage
of making a copy of it over taking it at face value is difficult to
elucidate.

If instead we use a weaker axiom, there could be room for compromise.
For example, using elementary physics we know that one corollary of
Newton's laws is that we can react only to a force. We know of
electricity only through its effect on charges; of gravitation through
its effect on mass, and so on. In general we can say that we can
perceive only variations of a potential. So, if we postulate only that
the outside world is a certain potential, our task becomes to imagine
how the brain can turn this very general influence into the world in
front of us.
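To make the physics explicit (this is only the standard textbook
relation, nothing peculiar to my argument): a conservative force is the
negative gradient of its potential,

\[
\mathbf{F} = -\nabla V, \qquad -\nabla (V + c) = -\nabla V \quad \text{for any constant } c,
\]

so a detector can respond only to how the potential varies across
space, never to its absolute value.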

The causal chain for this approach places an interpretation of the
potential variations, that is, what we would call the world, at the
end, but is it a representation? Well, it depends on one's definition
of representation, but since we started with a potential and got a
chart of its variation at the end, it looks more like a creation of
the brain. On the other hand, as is usually the case in mathematics,
this solution accepts a reverse interpretation, namely, we can say
that yes, the world is out there, which we perceive directly and
report to the brain and at the end of the causal chain we arrive at
the potential variations as the triggers of experience.

In sum, yes there is a picture, but it is not a copy of the world. It
is, for all intents and purposes, the world.

Neil W Rickert

Apr 28, 2005, 7:12:34 PM
Steven Lehar <sle...@CNS.BU.EDU> wrote on Apr 28, 2005:

SL>I will take this opportunity to summarize the whole debate as I see it
SL>from the representationalist perspective.

Steven indeed gave the representationalist perspective. It was
perhaps more a polemic than a summary.

I don't want to prolong the discussion. But I should correct some
erroneous statements.

SL> "Perception is behavior", Rickert tells us, and there is
SL>supposedly no meaning to words like "information" and "representation"
SL>and "processing", concepts which have become part of our everyday
SL>lexicon since the arrival of computer technology in our lives.

Someone may have said these things. But it certainly was not I.

SL>Until the *principle* of direct perception can be demonstrated in a
SL>simple robot model, the concept is so vague and incoherent as to be
SL>essentially meaningless.

We should put this in perspective. There is no robot model that
demonstrates representational perception. There is no robot model
that credibly demonstrates any kind of perception.

-NWR

Andrew Brook

Apr 28, 2005, 7:13:14 PM
Steven Lehar wrote:
> It is almost impossible to shake the very
> vivid impression that what we see in experience is the world
> itself. But this concept is profoundly at odds with the
> neurophysiology of perception, with sense organs that transmit
> information to the brain, an obviously representational system. One of
> these views must be right, and the other is wrong, unless as Andrew
> Brook claims, the truth lies somewhere in between.

Direct perception is entirely consistent with the neurophysiology of
perception. It is inconsistent merely with a certain ideology of what
follows from these facts.
Steve, saying what you do here shows as serious an unwillingness to come to
grips with what we are actually saying as your earlier statement that we believe
that representation is outside the head. We believe that representation can be
of or about things outside the head. But your statement is about (about -- you
cannot escape intentionality) the nature of the vehicle, mine is about .... what
can I say, what it is about, what it makes us conscious of. If you refuse to
even try to grasp this distinction, I hypothesize that you are letting ideology
substitute for evidence and argument. (I said I wouldn't be contributing but
some misattributions are just too aggravating.)

> It turns out however that the direct perception view is incoherent,
> whether in its pure or partial form, **because it involves the organism
> having knowledge of things which are not explicitly represented in its
> brain.**

[The clause in asterisks] No, it does not ... and I have said why over and over
and over. What does it take to get you at least to acknowledge that we believe
what we say we believe? Go back over my messages. I have addressed this point at
least half a dozen times. Again, I suspect you are letting unshakeable ideology
go proxy for looking at the arguments.

Andrew

--

Andrew Brook, Professor of Philosophy
Director, Institute of Cognitive Science
Member, Canadian Psychoanalytic Society
2217 Dunton Tower, Carleton University
Ottawa ON, Canada K1S 5B6
Ph: 613 520-3597
Fax: 613 520-3985
Web: www.carleton.ca/~abrook

Marina Rakova

May 2, 2005, 11:36:44 AM
I have been a lurker in the thread, hoping to figure out what people's
positions are, but I feel all the more confused after reading the
exchanges. Now that Steven Lehar has rested his case for
representationalism, I would like, if possible, to ask him a couple of
questions. They might be thoroughly unintelligent, and already answered
a hundred times before (which probably escaped me), but I would
appreciate clarifications nonetheless.

One of the things I find most confusing is the use of terms in the
exchange between Steven Lehar and Andrew Brook. On the traditional
reading 'representationalism' is the view that experience represents
mind-independent physical objects as having certain properties. This is
the view that is normally associated with the name of Fred Dretske.
However, Andrew Brook, who has emphasised the closeness of his view to
Dretske's, calls it 'direct realism', and Steven Lehar does not subscribe
to the diaphanousness of experience, saying that what we perceive are
our brain's representations.

I am sympathetic to representationalism in holding that there are a hell
of a lot of representations in our mind/brain and that the robot test
is one good way of choosing between theories of perception. That's why
there are a few things I still don't understand about Steven Lehar's
position.

1) What's the relationship between representations and the
mind-independent objects of the outside world (presuming one's happy
with its being out there)? If I understand correctly, Steven Lehar's
view is that we perceive not the objects in the world, but our
representations of them (this view gave rise to the homunculus
objections posted by a number of people). My question is: is there
anything interesting that can be said about the relation the latter
bear to the former? or is methodological solipsism all that is
required? And I would like to return to the case of people fitted with
the Bach-y-Rita device. According to Steven Lehar, these people 'move'
from perceiving representation A while they are getting accustomed to
the device to perceiving representation B once they are accustomed to
it (perceiving objects instead of perceiving tickles produced by
electric currents). But why should there ever be any transition from
representation type 1 to representation type 2 unless we assume that
the point of having representations is having access to the world? If I
understand correctly, this is something to which Professor Brook cannot
get an answer from those opposing his version of direct realism, or
am I missing something there? I don't know whether any robot test
could distinguish between these two views, but perhaps one might
consider the operation of a robot in a noisy environment (like a robot
that needs to kick a real-world chair but is being fed far too much
information for its processing capacity - a leak from past scene
representations - should we not give any priority to how things are in
the world in this case?)

And

2) What is the relationship between Steven Lehar's representations and
the traditional philosophers' sense-data (as he says that his view is
an indirect perception view)?

I hope I'm actually making sense.

Sincerely,

Marina Rakova

Steven Lehar

May 6, 2005, 9:37:15 PM
Reply to Augustin Carreno:

Carreno >
Many people have suggested that maybe there is a way to reconcile the
opposing views of direct vs indirect perception, that there is a
possibility that both approaches could be wrong, or both could be right. I
submit that both could be right.
< Carreno

The only form of direct perception that might be right is one that is
experimentally indistinguishable from representationalism, and thus makes
no testable predictions to distinguish between the two. That form of direct
perception however is indistinguishable from representationalism, because
it posits that every aspect of experience is necessarily replicated in the
brain.

The more extreme version of direct perception, which prohibits
representations of any sort, cannot be right if representationalism is also
right.

Carreno >
In sum, yes there is a picture, but it is not a copy of the world. It
is, for all intents and purposes, the world.
< Carreno

But there are two aspects of the "picture", one that disappears when you
close your eyes, and the other that continues to exist unchanged. That
clearly places one world on the outside beyond your eyelids, and the other
one inside on this side of your eyelids. The failure to distinguish these
fundamentally different worlds is exactly the theory of direct perception.

Carreno >
if we stipulate that the world is indeed already there, then the advantage
of making a copy of it over taking it at face value is difficult to
elucidate.
< Carreno

It is very easy to elucidate. If "taking it at face value" means making a
sensory image of it and sending it to the brain for processing, then that
is representationalism, and "direct perception" would be wrong. If "taking
it at face value" means behavior and experience *as if* we had a copy of
the world in our brain, but we actually *don't*, then the causal loop
between perception and behavior is critically broken, and perceptual motor
function remains forever a deep dark mystery.

Steve

Steven Lehar

May 6, 2005, 9:36:50 PM
Response to Neil Rickert:

Rickert >
Steven indeed gave the representationalist perspective. It was
perhaps more a polemic than a summary.
< Rickert

Obviously it was a view from the representationalist perspective; feel free
to compose a summary from the "direct perception" perspective. And there
was an overall message to that polemic, which is that direct perception may
seem plausible enough when debated issue by issue, but it falls apart when
viewed in the aggregate, when one takes a "big picture" view as revealed in
the summary. Advocates of direct perception cannot even agree on what their
theory states, whether there are any representations in the brain or not,
or if there are, whether they encode all of experience or only some of it.
They cannot make testable predictions of future experiments that would
resolve the matter one way or the other, nor can they build a functioning
robot to demonstrate the *principle* behind the concept.

The motivation behind direct perception, on the other hand, is perfectly
clear: it is motivated by the very vivid naive impression that what we
experience is the world itself, rather than an internal representation. In
that sense it is very much like the "animism" of the turn of the last
century whose advocates insisted that life is "something more" than just
chemical reactions, although they were unable to define what the added
ingredient might be, or how it would be detected in principle, or how it
could be implemented in a simple model.

Lehar >>
"Perception is behavior", Rickert tells us
<< Lehar

Rickert >
Someone may have said these things. But it certainly was not I.
< Rickert

Apologies, that was Glen Sizemore in...

http://listserv.uh.edu/cgi-bin/wa?A2=ind0110&L=psyche-d&P=R2

Lehar >>
Until the *principle* of direct perception can be demonstrated in a
simple robot model...
<< Lehar

Rickert >
We should put this in perspective. There is no robot model that
demonstrates representational perception. There is no robot model
that credibly demonstrates any kind of perception.
< Rickert

Any robot model with a camera, computer, and servos demonstrates the
*principle* behind representationalism, and thus clarifies concepts such
as "information", "representation", and "processing" in terms that are
perfectly clear to anyone who has used a computer. There is *no* such
simple demonstration of the *principle* behind direct perception, nor is
there even consensus among advocates what that term actually means, or what
it says with any specificity about the perceptual process, or how it would
be implemented in a robot model. It is quite extraordinary how dogmatically
and with such supreme confidence the advocates defend what is a pretty
vague concept.
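To make the *principle* as concrete as possible, here is a minimal
sketch of such a sense-represent-act loop (in Python, purely
illustrative; read_camera and set_servo are hypothetical stand-ins for
the camera and servo interfaces). The point is only that the action is
computed entirely from an internal data structure, never from the world
itself:

    import numpy as np

    def read_camera():
        # Hypothetical camera interface: returns a 2-D luminance image.
        return np.random.rand(48, 64)

    def set_servo(pan_velocity):
        # Hypothetical servo interface: pan the camera mount left or right.
        print(f"servo pan velocity: {pan_velocity:+.2f}")

    def perceive(image):
        # The internal representation: a stored copy of the image plus a
        # crude "salience map" derived from it.
        salience = np.abs(image - image.mean())
        return {"image": image, "salience": salience}

    def act(representation):
        # Behavior is computed from the internal representation alone:
        # steer toward the most salient column of the *represented* scene.
        column_salience = representation["salience"].sum(axis=0)
        target = column_salience.argmax()
        center = representation["salience"].shape[1] / 2
        return 0.01 * (target - center)

    for _ in range(3):  # a few cycles of the perception-action loop
        percept = perceive(read_camera())
        set_servo(act(percept))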

Steve

Steven Lehar

May 6, 2005, 9:37:14 PM
Reply to Dr. Steven Ericsson-Zenith:

Very interesting turn of the discussion to the jellyfish and (if I may
extend it to one of my favorite creatures) the hydra. The nervous system of
the hydra (like the jellyfish) is composed of a network of undifferentiated
neurons distributed across the surface of the tiny creature like a fishnet
stocking. The primitive dendrite/axons propagate activation in both
directions, and the primitive synapses that form wherever those neural
processes cross also propagate activation in both directions. Local
stimulation of the hydra has been observed to produce an "echo effect" as
the activation spreads outward from the locus of stimulation like rings in
a pond when a stone is tossed in, eventually passing around the creature's
cylindrical body or a tentacle, and back to the point of stimulation.

And yet despite this extremely simple, decentralized, and unstructured
nervous system, the hydra is capable of remarkably sophisticated structured
behavior, including waving its tentacles to snag food, bending its tentacle
towards its mouth when food has been snagged, opening its mouth
and "swallowing" the food, peristaltic contractions of its body to digest
the food, reverse contractions to expel undigested remains, contractive
pulsing of its body for swimming (when in a swimming stage of the life
cycle) and even a form of "walking" locomotion where the hydra bends over
and grabs the ground with its tentacles, and flips its "foot" up overhead
and down again in a somersault, repeated as necessary to get to where it is
going. The jellyfish adds eyes and thus visual function to this simple
neural architecture.

I agree wholeheartedly with Ericsson-Zenith when he suggests that we ought
to begin the investigation of sensory representation and processing with
such simple creatures before we attempt to address human or primate
perception. How could all that complex behavior possibly be performed by
that simple nervous system? How does the creature know which parts of its
body to contract and extend? What are the "templates" that control its
spatial behavior?

Ericsson-Zenith says:

"This simple architecture ... illustrates the engineering of sentience in
which primitive representation and experience are unified (what you have
been calling direct) and not dependent on the higher order functions of a
complex brain. This ... [suggests] that there is an immediacy and autonomy
not apparently allowed by the representational processes you advocate - and
it would tend to diminish arguments that advocate exclusively a centralized
model of self for the operation of sentient organism."

First of all, I do in fact advocate an "immediacy and autonomy" of simple
analog holistic processes through the nervous system, as suggested by
Gestalt theory, and the patterns of those holistic processes are what
perform perceptual and motor functions, and they also correlate with
the "experience" of that simple organism, i.e., those patterns of
activation are one and the same as the patterns of its simple experience.

Let's go one level simpler and discuss the paramecium, a **single-celled**
creature equipped with a fringe of "cilia", tiny hairs around the perimeter
of its microscopic cigar-shaped body. Synchronized waves of contraction and
extension travel through the cilia from anterior to posterior, and that is
how the organism propels itself forward through water. In this creature
there is not even a nervous system at all, and yet we see organized
synchronized structured behavior that operates by the principle of
**harmonic resonance**, patterns of electrochemical standing or travelling
waves that define ordered patterns in space and time. These waves
constitute the motor planning of the tiny creature, and they are
under "voluntary" control, i.e. they can cycle faster or slower, they can
reverse direction when the creature encounters a noxious stimulus, and they
can turn the creature left or right by differential waving of the cilia. I
propose that the **principle** behind this harmonic resonance control
system is exactly the foundational principle behind human and animal
perception, cognition, and motor control.
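For the concretely minded, here is a toy sketch of such a travelling
wave of cilium phases (not a biophysical model, just an illustration of
the principle; all the numbers are arbitrary):

    import numpy as np

    N_CILIA = 20        # cilia along the body, anterior (0) to posterior (19)
    WAVELENGTH = 8.0    # cilia per wave cycle
    rate = 1.0          # beat rate; its sign sets the direction of travel

    def cilia_phases(t, rate, wavelength=WAVELENGTH, n=N_CILIA):
        # Phase of each cilium at time t for a travelling wave: neighbouring
        # cilia beat slightly out of phase, so the beat pattern sweeps along
        # the body as a metachronal wave.
        positions = np.arange(n)
        return 2 * np.pi * (rate * t - positions / wavelength)

    # The visible pattern at one instant: displacement of each cilium.
    print(np.round(np.sin(cilia_phases(0.3, rate)), 2))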

(((see...

Harmonic Resonance Theory
http://cns-alumni.bu.edu/~slehar/webstuff/hr1/hr1.html

and

Directional Harmonic Theory
http://cns-alumni.bu.edu/~slehar/webstuff/dirhr1/dirhr1.html

and this short illustrated on-line summary:
http://cns-alumni.bu.edu/~slehar/HarmonicResonance/HarmonicResonance.html

I am currently working on a cartoon presentation of this idea, which will
be posted on PSYCHE-D as soon as it is ready.)))

And the same principle is responsible for the control of the hydra and the
jellyfish. Only a standing wave explanation could possibly account for the
structured behavior of those unstructured nervous systems, with spatial
regions of extension and contraction in organized patterns. (Contrary to
Cathy Reason's contention in an earlier thread, the travelling waves on a
paramecium DO demonstrate the feasibility of standing waves for
computational function in biological tissue, as do the standing waves in
fish and snake locomotion, and the cycling waves of the feet of a centipede.)

Now consider perceptual function, back in the paramecium. Some kind of
molecular lock-and-key protein embedded in the creature's cell wall must
detect attractive and noxious stimuli in the environment. But if these
sensory signals are to cause behavioral response, they must be able to
influence or modulate the standing wave pattern of the cilia. For example
a strong noxious stimulus at one part of the organism should have the
effect of reversing the propulsive oscillations, while attractive stimuli
should modulate the rate of oscillation on one side relative to the other,
in order to make the paramecium turn in that direction. There is a holistic
global aspect to this sensory function, because the creature must make
an "image" of the overall direction of attractive & aversive stimuli in its
environment so as to be able to modulate its motor propulsion coherently in
response. Whatever that global patterning mechanism might be (another
standing wave principle, for example, or a gradient of chemical
concentration within the organism) that global pattern corresponds to the
creature's experience of its environment. It "sees" promise in this
direction, and threat in that, and that perceptual picture directly
influences the pattern of oscillation of the cilia.
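Continuing the toy sketch above, the causal chain I have in mind --
stimulus, internal "image", modulated motor wave -- can be illustrated
in a few lines (again purely schematic, not a model of real chemotaxis):

    import numpy as np

    def sensory_image(attractive_bearing, noxious_strength, n=36):
        # A crude internal "image": one value per direction around the body.
        bearings = np.linspace(0, 2 * np.pi, n, endpoint=False)
        attraction = np.cos(bearings - attractive_bearing)  # peaks toward food
        return attraction - noxious_strength                # noxious lowers all

    def modulate_wave(image, base_rate=1.0):
        # Turn the internal image into wave parameters for the two sides.
        if image.max() < -0.5:             # strong noxious stimulus anywhere
            return -base_rate, -base_rate  # reverse the wave: back away
        left = image[: len(image) // 2].mean()
        right = image[len(image) // 2 :].mean()
        # Beat faster on the side away from the attractive stimulus,
        # which turns the creature toward it.
        return base_rate + 0.5 * right, base_rate + 0.5 * left

    img = sensory_image(attractive_bearing=np.pi / 4, noxious_strength=0.0)
    print(modulate_wave(img))  # (left-side rate, right-side rate)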

This is the "organism in active exploratory interaction with the
environment", and if THIS is what you mean by "direct perception" then I am
with you. But there is nothing direct about it, because the creature cannot
possibly even in principle "perceive" patterns that are not explicitly
present in its tiny body, but must construct those patterns internally in
order for them to be able to influence its motor behavior. There is a
causal linkage that inevitably goes **through** internal representations or
replicas of (presumed) external patterns. There are no higher order
mechanisms of a complex brain, but there IS a spontaneous spatial pattern
formation mechanism which is what *I* would call "computation", although of
an immediate analog spatial form. Immediacy and autonomy can indeed
be part of a representational principle, and the unified holistic aspects
of perceptual function as revealed by Gestalt theory, implicate exactly
such an immediate autonomous and holistic strategy of computation.

There is admittedly no "centralized model of the self" in this case, but
there is a holistic unified model of the surrounding environment, and there
is no "homunculus" required to view it, its function of "being viewed" is
subserved by the causal influence that it has on motor behavior. Would the
paramecium be "conscious" of these patterns? Only in a highly simplified
primitive form, commensurate with the simplified primitive mechanism. But
if it did not have even that tiniest spark of experience, then our own
experience must forever remain a deep dark mystery.

A lot of the thrust of the "direct perception" movement is driven by the
holistic unitary nature of experience, which appears to violate our notions
of "neural plausibility"--how could the brain possibly construct such a
rich and complex spatial world so immediately and in parallel? That is why
they cannot bring themselves to believe that it is a picture in their own
brain.

But the neural plausibility issue is *separable* from the epistemological
question, and epistemology is rudely violated by the notion of experience
of entities "directly" out in the world, rather than by internal
representations of them. Direct perception embodies a profound
epistemological "category error" which is impossible in principle. If the
continuous volumetric real-time image of the world of experience appears
implausible in terms of contemporary understanding of neurophysiology, then
it is our notions of neurophysiology which are in urgent need of revision
to account for that holistic unitary nature. Just saying that perception
occurs "out in the world", or that it is a "category error" to observe the
spatial structure of experience, is not a solution to anything; it is
merely a re-statement of the problem, with the profound paradox built into
the "solution".

With "solutions" like that, who needs paradoxes? It tells us nothing of how
things work.

Representationalism, on the other hand, makes the testable prediction that if
an organism responds to patterns in the environment (such as avoiding
noxious stimuli and going for attractive ones in a spatially coherent
manner) then there must be *some* physically measurable phenomenon inside
the creature that replicates that external pattern of stimulation. If a
jellyfish exhibits behavior as if it could "see" the world around it, then
it must have an internal replica of that world inside its body. Direct
perception makes no such prediction, but neither can it begin to explain
the causal influence that patterns of environmental invariants seem to
have "directly" on the organism's behavior.

Steve

Steven Lehar

May 6, 2005, 9:42:08 PM
Reply to Marina Rakova:

Rakova >
Now that Steven Lehar has rested his case for representationalism I would
like, if possible, to ask him a couple of questions.
< Rakova

I would be delighted.

Rakova >
1) What's the relationship between representations and the mind-independent
objects of the outside world (presuming one's happy with its being out
there)? ... is there anything interesting that can be said about the
relation the latter bear to the former? or is methodological solipsism all
that is required?
< Rakova

They are connected by the sensory input, which constrains the configuration
of the experienced percept. Representationalism reveals perception to be a
*constructive* or *generative* computation, one that constructs an
information-rich data structure that contains a lot more explicit spatial
information than the sensory stimulus on which it is based.

The computational principle behind visual perception can be described as
the transformation from a two-dimensional stimulus to a three-dimensional
percept, as described in the Gestalt Bubble model here

http://cns-alumni.bu.edu/~slehar/epist/epist8.html

and in greater detail here

http://cns-alumni.bu.edu/~slehar/webstuff/bubw3/bubw3.html#compmech

Notice how direct perception does not even allow this kind of description
of spatial perception, it being apparently some kind of "category error".

Rakova >
And I would like to return to the case of people fitted with
the Bach-y-Rita device. According to Steven Lehar, these people 'move'
from perceiving representation A while they are getting accustomed to
the device to perceiving representation B once they are accustomed to
it (perceiving objects instead of perceiving tickles produced by
electric currents). But why should there ever be any transition from
representation type 1 to representation type 2 unless we assume that
the point of having representations is having access to the world?

< Rakova

In the transition from A to B, the experience moves from the location of
the perceived chest to a projection out into the perceived world, for all
the world *as if* giving the percipient access to the world beyond the
chest, although in fact it is just an internal construction or replica of
the world. The representation feels like the world itself because it is
volumetric and it integrates different sensory modalities into a unified
spatial percept, and it appears to be beyond the body. But we know for a
fact that that percept is actually inside our head; otherwise it would not
turn into a murky brown space when we close our eyes.

Rakova >
I don't know whether any robot test could distinguish between these two
views, ... like a robot who needs to kick a real-world chair ...
< Rakova

In the direct perception robot, the video camera records an image, and some
kind of computation is performed, but the result of that computation
appears magically back out in the world, instead of inside the computer
brain, although exactly how this would be done remains completely
unexplained and mysterious. Nobody has ever built the robot to demonstrate
the principle.

In the representationalist robot the 2-D video image would be expanded into
a 3-D volumetric data structure as outlined in the Gestalt Bubble model
linked above. Unlike in direct perception, the "output" of this perceptual
computation *is* available for further computation because it is inside the
computer brain, where the computational and representational machinery
resides. The operational principle is perfectly clear, even if the details
of the algorithm remain obscure. In fact, the representationalist concept
explains *HOW* the organism interacts with the environment so as to feel as
if it were in direct perceptual contact with the world: it does so by
building a world in its brain, the very part of the problem that remains
deeply mysterious in the direct perception view.
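Purely as a schematic of the data flow (this is *not* the Gestalt
Bubble algorithm, just a placeholder "inflation" that treats brightness
as nearness), the expansion of a 2-D image into a 3-D volumetric data
structure might look like this:

    import numpy as np

    def inflate_to_volume(image, depth_bins=16):
        # Expand a 2-D image into a 3-D occupancy volume (toy placeholder):
        # each pixel is assigned a depth from its brightness (bright = near)
        # and the voxel at (row, column, depth) is marked occupied. A real
        # perceptual model would infer depth from far richer cues.
        h, w = image.shape
        volume = np.zeros((h, w, depth_bins), dtype=bool)
        depth = np.clip(((1.0 - image) * (depth_bins - 1)).astype(int),
                        0, depth_bins - 1)
        rows, cols = np.indices(image.shape)
        volume[rows, cols, depth] = True
        return volume

    image = np.random.rand(4, 6)      # stand-in for a video frame
    volume = inflate_to_volume(image)
    print(volume.shape)               # (4, 6, 16): an explicit 3-D data structure
    # Because the volume is an internal data structure, it is available for
    # further computation -- e.g. finding the nearest represented surface:
    print(volume.argmax(axis=2).min())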

Rakova >
2) What is the relationship between Steven Lehar's representations and
the traditional philosophers' sense-data (as he says that his view is
an indirect perception view)?
< Rakova

The three-dimensional volumetric data structure that is the "output" of
perceptual processing is itself composed of "sense data" in the form of
solid volumes, bounded by colored surfaces, embedded in a spatial void. Now
the Critical Realists who first came up with the idea of "sense data"
(a "category error" according to direct perceptionists) held a kind of half-
way viewpoint between direct perception and representationalism, kind of
like Andrew Brook, who can't bring himself to commit to having experience
in the head as a representation or out in the world directly perceived. It
took Bertrand Russell to sort out that mess and to finally locate sense
data in the brain. For a summary of the historical debate, I highly
recommend this link:

http://cns-alumni.bu.edu/~slehar/webstuff/consc1/consc1a.html#hist

You will read people debating the same thing we have been hashing out two
hundred years ago, again and again and AGAIN! (as Andrew Brook is wont to
express it). I think it will clear things up considerably if you read that
historical summary.

Rakova >
On the traditional reading 'representationalism' is the view that
experience represents mind-independent physical objects as having certain
properties. This is the view that is normally associated with the name of
Fred Dretske.
< Rakova

Beware, Michael Tye and Fred Dretske *claim* to be representationalists,
but they suffer the same mental ambiguity as Andrew Brook, and actually
believe some strange intermediate theory which is neither here nor there.

http://cns-alumni.bu.edu/~slehar/Representationalism.html#WHATISREP

Steve Lehar

Steven Lehar

May 6, 2005, 9:40:46 PM
Reply to Andrew Brook:

Brook >
Steve, saying what you do here shows as serious an unwillingness to come to
grips with what we are actually saying as your earlier statement that we
believe that representation is outside the head. We believe that
representation can be of or about things outside the head. But your
statement is about ... the nature of the vehicle, mine is about ... what it
makes us conscious of. If you refuse to even try to grasp this distinction,
I hypothesize that you are letting ideology substitute for evidence and
argument.
< Brook

You have yet to tell us whether the representations in the brain
necessarily encode *all* of experience or just *some* of it. This may seem
like an irrelevant detail to you, but for me that issue is *everything*.
Because if you acknowledge that some experience is not explicitly
represented, then you actually *ARE* claiming that part of your experience
is out in the world, not in your head, and I challenge you to demonstrate
*that* in a robot model. So are you now denying that *any* aspect
of experience can exist without being explicitly represented in your brain?

Lehar >>
**because it involves the organism having knowledge of things which are not
explicitly represented in its brain.**
<< Lehar

Brook >
[The clause in asterisks] No, it does not ... and I have said why over and
over and over. What does it take to get you at least to acknowledge that we
believe what we say we believe?
< Brook

Is that really what you believe? That all of experience *IS* necessarily
represented in the brain? But in that case your "theory" is
indistinguishable from straight representationalism. You can't have it both
ways, and then complain when I point out the weakness in either one of the
two versions which you refuse to choose between.

I say again, how can you feel such supreme confidence that your view is
right, when you cannot even explain to us what your view is? Could it be
that it is you who are letting ideology substitute for evidence and
argument?

Steve

Andrew Brook

May 8, 2005, 10:42:57 AM
Steve once again says that we have not explained our position. So let me once
again say what I have done. I have:

1. ... defined the position. Direct perception is being aware of objects in the
world, nothing short of them, and not by inference from anything else of which I
am aware. (We can be directly aware of other things, too, including our own
representations, ourselves, and our bodily states. But direct awareness of the
world around us is the crucial case.)

2. ... given examples. If Steve is reading this, the state he is in is a perfect
example of direct awareness. He is aware of the words on the screen directly,
not by inference from anything else of which he is aware. And he is aware of the
words on the screen, not any intermediary. He won't accept this as an example
but has given no reason for this refusal.

3. ... said why this position works, indeed is really the only game in town -- no
notion of anything more direct can even be articulated.

4. ... said that all this is perfectly compatible with there being a complex
information process connecting brain to objects, with there being
representations, and so on. (In an earlier message, I said how my view is
different from, and I think less radical than, behaviourism about
consciousness.) What matters here is that it is crucial to distinguish between
the representational vehicle, some state of the brain I expect, and what that
vehicle represents, what it makes us conscious *of*, its object. As I have said
before, you cannot get to first base in thinking about consciousness and
representation unless you accept that both generally have intentionality.

What more do you want?

Two further issues:

1. Encoding is a red herring. Information delivered to the foveal spot (2 to 6
degrees of arc of retina) is probably encoded directly, a good proportion of it
anyway (and we probably cannot be conscious of what is not encoded). Except for
movement and some other low-resolution information, most of the rest of the
conscious visual field is probably generated by a judgment to the effect that
'the rest is like this' -- the 2 to 6 degrees of arc. Much else of which we can
be aware is not encoded. All we have is an algorithm for generating the items on
demand, numbers for example. So?
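A programming analogy may help here (a sketch only, nothing hangs on
the details): a lazy generator "contains" every natural number in the
sense that it can produce any one of them on demand, yet no number is
explicitly stored anywhere:

    from itertools import count, islice

    def naturals():
        # Yields 0, 1, 2, ... on demand; only the generating rule is encoded.
        yield from count(0)

    # We can be "aware of" the millionth natural number without its ever
    # having been written down in advance.
    print(next(islice(naturals(), 1_000_000, None)))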

Clearly it follows that I would reject this inference. I don't know how many
more ways I can say why I do:

"Because if you acknowledge that some experience is not explicitly
represented, then you actually *ARE* claiming that part of your experience
is out in the world, not in your head, and I challenge you to demonstrate
*that* in a robot model."

2. If experiments cannot distinguish direct from indirect representationalism,
that most emphatically does *not* mean that the indirect view is supported. It
means that on this issue both views are unscientific -- not selectable by
evidence -- and the issue has to be settled other ways. If A cannot be
distinguished from B by evidence, B cannot be distinguished from A either.
'Distinguish' is a symmetrical relationship.

It should be obvious that robot models, as with Bach-y-Rita's prosthetic
devices, demonstrate direct and indirect representationalism equally well -- at
best. More likely, they tip the scales in the direction of direct
representationalism. Steve's reading of the transition in B-y-R patients from
experiencing the tickles to seeing objects in the world is this:

"In the transition from [the one to the other], the experience moves from the
location of the perceived chest to a projection out into the perceived world,
for all the world *as if* giving the percipient access to the world beyond the
chest, although in fact it is just an internal construction or replica of
the world."

Pray tell, what would NOT *as if* access to the world be like, then? Surely what
the 'internal construct' gives them *is* access to the world. What the internal
construct is about, represents, is the objects that they can now see --
directly, i.e., not by inference from anything else of which they are aware.

Final comment: Steve is entitled to say,

"The motivation behind direct perception, on the other hand, is perfectly
clear: it is motivated by the very vivid naive impression that what we
experience is the world itself, rather than an internal representation. In
that sense it is very much like the "animism" of the turn of the last
century whose advocates insisted that life is "something more" than just
chemical reactions, although they were unable to define what the added
ingredient might be, or how it would be detected in principle, or how it
could be implemented in a simple model",

only if he accepts the same judgment on his own view. My own view is that
civilized debate should avoid speculations about people's motives. It is clear
that Steve refuses to consider any position other than his own but I wouldn't
dream of speculating about why that is so.

Let me repeat. I have said all these things before. Nothing in this message is
new. I am not supremely confident that our position is right but I sure would
like it if Steve and others convinced that we are wrong would try to understand
the position, rather than attributing to us over and over views that we don't
hold, don't need to hold, and would be silly to hold. Our position is as
articulated in this message. If you don't like it, say what you find to be wrong
about it.

Marina Rakova

May 8, 2005, 10:50:13 AM
Dear Steve (if I may),

Could I press a few more questions on you with respect to the following
point reproduced here?

> Rakova >
> And I would like to return to the case of people fitted with
> the Bach-y-Rita device. According to Steven Lehar, these people 'move'
> from perceiving representation A while they are getting accustomed to
> the device to perceiving representation B once they are accustomed to
> it (perceiving objects instead of perceiving tickles produced by
> electric currents). But why should there ever be any transition from
> representation type 1 to representation type 2 unless we assume that
> the point of having representations is having access to the world?
> < Rakova

Steven Lehar:


> In the transition from A to B, the experience moves from the location of
> the perceived chest to a projection out into the perceived world, for all
> the world *as if* giving the percipient access to the world beyond the
> chest, although in fact it is just an internal construction or replica of
> the world. The representation feels like the world itself because it is
> volumetric and it integrates different sensory modalities into a unified
> spatial percept, and it appears to be beyond the body. But we know for a
> fact that that percept is actually inside our head; otherwise it would not
> turn into a murky brown space when we close our eyes.

I don't think a proponent of direct perception (the way it has been
understood here) would deny that the percept is in the head, and would
probably say, phrasing it in terms of percepts, that the first percept
represented things as being A, and the second percept as B (granting
even that in both cases the subject is perceiving his/ her
representation). But in terms of how things are to these people, the
subjective difference must be huge (from the world of peculiar
sensations to the world of external objects; presumably this difference
also had an effect on behaviour). So, _why_ is there the transition
from A to B and how is it accomplished? Maybe I could put it
differently: why should the sensory input even begin to constrain the
construction of representations in such a way as to result in the
subject's having the feeling of perceiving the world (where 'having the
feeling' is understood not phenomenally but, as it were,
computationally - what I'm trying to say is that presumably a B-type
representation has a different relation to other representations
available to the subject, and to behavioural outputs, than an A-type
representation).

Thank you.

Marina Rakova

Andrew Brook

May 8, 2005, 10:51:09 AM
I just want to note publicly that Marina Rakova's questions all strike me as
good ones. As to my use of terminology, which she wonders about, I just followed
others in using the term 'direct realism'. For my own position, I prefer the
term 'direct representationalism'. And yes, my position is similar to Dretske's,
esp. his idea that our representations are (mostly) transparent, that is, that
we see via and through them but we are not (directly) aware of the
representations themselves. Differences: his view is a bit more detailed than
mine. I think that we both see through representations and can be directly aware
of them. And I think that I have an argument for my view; Dretske seems to have
mainly powerful intuitions backing his.

If Steve is right, we both suffer from the same 'ambiguity', whatever he may
have had in mind by that term.

Neil W Rickert

May 8, 2005, 10:50:00 AM
Steven Lehar <sle...@CNS.BU.EDU> wrote on May 5, 2005:
>Response to Neil Rickert:

>Rickert >
NR>Steven indeed gave the representationalist perspective. It was
NR>perhaps more a polemic than a summary.
>< Rickert

SL>Obviously it was a view from the representationalist perspective; feel free
SL>to compose a summary from the "direct perception" perspective.

I don't see any need for a one-sided summary. I'm involved because
of scientific interest, not because I want to push a particular
ideology. I welcome a diversity of methodologies.

SL> Advocates of direct perception cannot even agree on what their
SL>theory states, whether there are any representations in the brain or not,
SL>or if there are, whether they encode all of experience or only some of it.

There is no clear meaning of "representation". Different people take
it to mean different things.

I'm not sure that I would even say that "direct perception" is a
theory. I have been referring to it as an account. Gibson used it
to provide a framework for his research in perceptual psychology.
But I'm not a psychologist, so my interests are different. There is
no reason why I should have to agree with every detail of what Gibson
wrote.

My own views on perception do not derive from Gibson. My preference
for "direct perception" is because Gibson's account happens to be a
better fit than Marr's account.

SL>They cannot make testable predictions of future experiments that would
SL>resolve the matter one way or the other, nor can they build a functioning
SL>robot to demonstrate the *principle* behind the concept.

Steven keeps repeating this mantra.

Both sides are in the same predicament with respect to predictions.
Both can make vague predictions about what will eventually be
discovered in neurophysiology, but neither side can give specifics
sufficient to guide the neuro-scientists in such a determination.

No functioning robot has yet persuasively demonstrated any cognitive
or conscious capabilities, so both sides are on a par here too.

SL>The motivation behind direct percepton on the other hand is perfectly
SL>clear, it is motivated by the very vivid naive impression that what we
SL>experience is the world itself, rather than an internal representation.

This is where Steven makes a serious mistake. He is not a mind reader,
and he should stop trying to attribute motives to others.

In my case, I am not all that concerned with conscious experience.
My main interest has been cognition -- how we acquire and use
knowledge. My original view was the received view of
representationalism, which is the view most likely to arise out of
naive realism. I dropped that view when I realized how implausible
it is for a biological system.

I find nothing in representationalism that contributes to a study of
cognition. When we examine robots with representationalist designs,
the source of knowledge is clear -- it is "knowledge" designed into
the system. These systems are built on nativist principles, and
cannot account for the acquisition of knowledge such as we see in
human children.

>Lehar >>
SL>Until the *principle* of direct perception can be demonstrated in a
SL>simple robot model...
><< Lehar

>Rickert >
NR>We should put this in perspective. There is no robot model that
NR>demonstrates representational perception. There is no robot model
NR>that credibly demonstrates any kind of perception.
>< Rickert

SL>Any robot model with a camera, computer, and servos, demonstrates the
SL>*principle* behind representationalism, and thus clarifies concepts such
SL>as "information", "representation", and "processing" in terms that are
SL>perfectly clear to anyone who has used a computer.

No, it doesn't do any of those things. I don't know of anybody who
really believes that a camera, computer and servos has any conscious
experience, nor that it demonstrates any cognitive abilities
comparable to the learning we see in human children. The terms
"information" and "representation" are still quite muddy, with a
great deal of disagreement on what they mean.

The camera/computer system uses representations in the sense that it
represents data for us humans. It does not represent anything for
the computer itself, for the computer has no self.

What is represented in the computer is information only in the sense
that it informs us humans. The computer is not itself informed. It
is merely a mechanical data processor, following preprogrammed
procedures on our behalf.

If roboticists are ever able to produce something comparable to
Asimov's R. Daneel Olivaw, that might indeed clarify some of these
concepts. But I won't hold my breath while waiting.

-NWR

Andrew Brook

May 8, 2005, 10:52:06 AM
I read Steve Lehar's 'summary' of the debate nearly last of the long series of
messages on direct realism, etc., in my inbox. A most annoying document! Without
rehashing all the details, let me say that virtually nothing he says about
direct realism is true of the position that I have articulated, as I am sure he
knows. In fact, in no sense has he summarized the debate. What he does is merely
to summarize all the charges he has levelled against us. I will leave it to
others to judge whether I have 'wiggled and squirmed' more than Steve. At least
I try to respond to his claims.

Arnold Trehub

May 8, 2005, 4:20:17 PM
Neil Rickert wrote:

>
> Both sides are in the same predicament with respect to predictions.
> Both can make vague predictions about what will eventually be
> discovered in neurophysiology, but neither side can give specifics
> sufficient to guide the neuro-scientists in such a determination.
>

This is simply not true. My theory of the cognitive brain (Trehub 1975, 1977,
1991) presents very specific models for the minimal structure and dynamics of
neuronal mechanisms competent to perform many essential cognitive tasks.
For example, two basic cognitive processes --- one-trial learning of visual
objects and transformation of 2D retinocentric stimuli into 3D egocentric brain
representations --- that were explained/predicted by the mechanisms and systems
I proposed, were later demonstrated in direct psychophysical and
neurophysiological investigations. In addition, the model successfully
predicted the occurrence of a completely novel visual illusion (the *pendulum
illusion*. See Trehub, 1991). My theoretical model is constrained by
biological principles and is based on representational neuronal processing.

>
> I find nothing in representationalism that contributes to a study of
> cognition. When we examine robots with representationalist designs,
> the source of knowledge is clear -- it is "knowledge" designed into
> the system. These systems are built on nativist principles, and
> cannot account for the acquisition of knowledge such as we see in
> human children.
>

I'm not familiar with the literature on robots, but I do know that the
cognitive brain model that I have proposed does indeed account for the
acquisition of knowledge. For example, see *The Cognitive Brain*,1991,
Chapter 12, "Self-Directed Learning in a Complex Environment.". See also,
Trehub, 1997; and for an extended discussion of mental imagery in my model,
go here:

http://listserv.uh.edu/cgi-bin/wa?A2=ind0011&L=psyche-d&P=R2997

Arnold Trehub

References:

Trehub, A (1975). Adaptive pattern processing in the visual system.
*International J. Man-Machine Studies* 7:439-446.

Trehub, A (1977). Neuronal models for cognitive processes: Networks for
learning, perception, and imagination. *J. Theoretical Biology* 65:141-169.

Trehub, A (1991). *The Cognitive Brain*. MIT Press.

Trehub, A (1997). Sparse Coding of Faces in a Neuronal Model: Interpreting
Cell Population Response in Object Recognition. In Donahoe and Packard eds.
*Neural-Network Models of Cognition: Biobehavioral Foundations*.
Elsevier Science.
