Re: Summary: Direct Perception vs. Representationalism

Steven Lehar

May 10, 2005, 11:49:13 AM
Reply to Neil Rickert:

Rickert >
There is no clear meaning of "representation". Different people take
it to mean different things.
< Rickert

There's a lot more consensus on the meaning of "representation" than there
is of "direct perception". Now there's a term that means different things
to different people! But most anyone would recognize a photodiode array as
a representation that represents patterns of brightness across an image
with a pattern of voltages in certain registers. It does not matter whether
there is anyone there to "see" or experience the representation, it is
still a representation that represents one thing with another.
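
As a minimal sketch (in Python, with all names and values invented here
for illustration), the kind of representation just described takes only a
few lines: a pattern of brightness across an image row is represented by a
pattern of voltages in registers, with no observer required for the
mapping to hold.

    # Minimal sketch of a photodiode array as a representation:
    # one thing (brightness) is represented by another (voltages).
    def sense_row(brightness_row):
        """Map brightness values in [0.0, 1.0] to diode voltages in [0, 5] V."""
        return [b * 5.0 for b in brightness_row]

    registers = sense_row([0.1, 0.8, 0.5, 0.0])  # pattern of light on the array
    print(registers)  # [0.5, 4.0, 2.5, 0.0] -- a representation, observer or not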

SL>They cannot make testable predictions of future experiments that would
SL>resolve the matter one way or the other, nor can they build a functioning
SL>robot to demonstrate the *principle* behind the concept.

Rickert >
Steven keeps repeating this mantra. ...
Both sides are in the same predicament with respect to predictions.
Both can make vague predictions about what will eventually be
discovered in neurophysiology, but neither side can give specifics
sufficient to guide the neuro-scientists in such a determination.
< Rickert

Not so! Representationalism makes the very concrete prediction that every
aspect of the world that is experienced, will eventually be identified as
various states of the brain. That there is no aspect of experience that is
*not* explicitly represented as some state in the brain. That thesis will
one day be proven right, or proven wrong. It cannot be both.

Direct perception on the other hand seems to make no testable predictions.
Even if we found representations for *every last* experience in the brain,
that would not disprove direct perception. But then again, what would? If
there is no imaginable experiment that would distinguish between direct and
indirect perception, then they are indistinguishable. That is, direct
perception would be identical to representationalism.

So we toss a coin to choose between them? No! One way is a reasonable
explanation in terms that everyone understands, such as information and
representation and computation, that can be built in real hardware, whereas
the other is a vague statement in words, including a lot of redefinitions of
ordinary words to new meanings, and a host of "category errors" that forbid
certain conceptualizations, but with *no* specific implications for how to
build a robot, or what kind of representations (if any) we should expect to
find in the brain.

SL>Any robot model with a camera, computer, and servos, demonstrates the
SL>*principle* behind representationalism, and thus clarifies concepts such
SL>as "information", "representation", and "processing" in terms that are
SL>perfectly clear to anyone who has used a computer.

Rickert >
No, it doesn't do any of those things. I don't know of anybody who
really believes that a camera, computer and servos has any conscious
experience,
< Rickert

Redefinition: "representation" now means "has conscious experience"

Rickert>
nor that it demonstrates any cognitive abilities
comparable to the learning we see in human children.
< Rickert

Redefinition: "representation" now ALSO means "with cognitive abilities
comparable to human children".

Rickert >
The terms "information" and "representation" are still quite muddy, with a
great deal of disagreement on what they mean.
< Rickert

Only to direct perceptionists. To the rest of the world these terms are
perfectly clear and unambiguous. Now if you think "representation"
necessarily implies consciousness and cognitive ability comparable to
children, then I can see how it would seem to be muddy.

Rickert >
The camera/computer system uses representations in the sense that it
represents data for us humans. It does not represent anything for
the computer itself, for the computer has no self.
< Rickert

That is one narrow and non-standard re-definition of a commonly understood
concept. Most any reasonable person would consider a camera/computer system
a representation whether or not there was a human around for it to
represent to. Instead of redefining every other word to new and obscure
meanings, it would be helpful if direct perceptionists used the ordinary
meanings of words, and explained their theory in words we already understand.

The principle of representationalism *is* demonstrated in robot models,
whereas the principle of direct perception is not even specified
sufficiently to devise such a model.

Steve

Steven Lehar

May 10, 2005, 11:52:44 AM
Reply to Marina Rakova:

Rakova >
I don't think a proponent of direct perception (the way it has been
understood here) would deny that the percept is in the head, ... that in
both cases the subject is perceiving his/her representation).
< Rakova

Oh they would dispute most vigorously that we perceive our representations.
As to where they locate experience, that varies. Some deny that there is
anything to be located anywhere. But by saying so, they assume that the
structure that they experience is the world itself, so they are actually
saying that their experience is outside their head, although they deny that
it is an experience, and thus avoid having to account for its existence or
properties. Others, like Velmans, locate it in some orthogonal dimension
inaccessible to scientific scrutiny. But there is only one place experience
could be, and that is in the brain.

Rakova >
why should the sensory input even begin to constrain the
construction of representations in such a way so as to result in the
subject's having the feeling of perceiving the world
< Rakova

If you cannot see the world around you directly, but only by way of retinal
images, then you can create a data structure in your brain that replicates
the spatial relations in the external world, as evidenced by the retinal
input. The internal spatial model gives us spatial consciousness: we become
aware of the surrounding world as a space full of objects, and having that
knowledge helps us interact with the world using spatial algorithms in our
spatial representation. It is a computational strategy that nature has
found to be adaptive.
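
A hypothetical sketch of that computational strategy (the coordinates,
names, and numbers are all assumptions for illustration): retinal evidence
of bearing and distance is used to build an internal data structure that
replicates the spatial relations of the external world, and spatial
algorithms then operate on that structure rather than on the world itself.

    import math

    def place(bearing_deg, distance):
        """Turn one piece of retinal evidence into internal model coordinates."""
        rad = math.radians(bearing_deg)
        return (distance * math.cos(rad), distance * math.sin(rad))

    # Retinal evidence: object -> (bearing in degrees, estimated distance).
    retinal_evidence = {"tree": (30.0, 10.0), "rock": (-45.0, 4.0)}

    # Internal spatial model: a data structure whose spatial relations
    # mirror those of the surrounding world, as evidenced by the input.
    spatial_model = {name: place(b, d)
                     for name, (b, d) in retinal_evidence.items()}

    # A "spatial algorithm" runs on the model, not on the world:
    (x1, y1), (x2, y2) = spatial_model["tree"], spatial_model["rock"]
    print(math.hypot(x2 - x1, y2 - y1))  # distance between represented objects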

http://cns-alumni.bu.edu/~slehar/cartoonepist/cartoonepist44.html

Steve

Steven Lehar

May 10, 2005, 12:02:31 PM
Reply to Andrew Brook:

Brook >
What more do you want?
...
Our position is as articulated in this message. If you don't like it, say
what you find to be wrong about it.
< Brook

This is the more that I want:

1: Predictions of possible experiments that would differentiate the two
positions. If there are no such experiments, then it is no theory at all,
and the default is representationalism, because...

2: Define what "direct representationalism" means in terms of an actual
mechanism--how would a direct representation differ from an indirect one?
We all understand indirect representationalism; that is how robots work.
But how would you build the direct representationalist robot? How would it
differ?

So far the only "definition" you have given us is "being aware of objects
in the world, nothing short of them, and not by inference from anything
else of which I am aware." We understand the *words*, but what does that
actually *mean*? What difference would it make to our *experience* if
representationalism *was* indirect? What difference would it make for
neurophysiology? Your definition is perfectly consistent with pure
representationalism, that also gives a vivid impression of perceiving
directly.

Brook >
What matters here is that it is crucial to distinguish between
the representational vehicle, some state of the brain I expect, and what
that vehicle represents, what it makes us conscious *of*, its object.
< Brook

But that is just as true in indirect representationalism. Consider the
voltage in a photodiode that represents the brightness of light, or the
electrical state of a retinal cone that represents a color component of a
point of light.
The voltage gives information *of* the brightness of light. But that
information is nevertheless necessarily inside the computer, or at least
downstream of the photodiode where the transduction occurs. What is the
difference between saying 1: the voltage gives indirect information about
the brightness of light, and 2: that the voltage gives *direct* information
*of* the brightness of light measured *through* the representation? Is
there a difference that makes a difference? Or are we just playing with
words? The onus is on YOU to demonstrate that direct representationalism is
different in any way from indirect representationalism.

Brook >
If experiments cannot distinguish direct from indirect representationalism,
that most emphatically does *not* mean that the indirect view is supported.
It means that on this issue both views are unscientific -- not selectable
by evidence
< Brook

Wrong. Representationalism is the "obvious" concept that is thoroughly
understood, to the point that we have built representationalist systems
whose principles of operation are clear to all. Not only has direct
perception never been demonstrated, it has never even been defined clearly
enough to make predictions about possible future experiments. If it
makes no such predictions, then the default returns to the concept that we
*do* thoroughly understand, and that *does* make specific predictions.

Brook >
It should be obvious that robot models, as with Bach-y-Rita's prosthetic
devices, demonstrate direct and indirect representationalism equally well
< Brook

Wrong. The Bach-y-Rita device has a human being in the loop to do the
perceptual integration, so the fact that they report having a spatial
experience outside of themselves, as also with any ordinary visual
experience, can mean either that experience is escaping their heads and
projecting into the world around them, *or* that a spatial percept is being
constructed in their internal representation of surrounding space.

When I'm talking about a robot model I don't mean anything more complex
than, for example, a single photodiode and a circuit that interprets its
voltage as a brightness of light. That there is already a representational
system. How would it differ if it were built as a "direct
representationalist" system? Does it not already perceive brightness
*through* voltage? If there would be no difference, then the words "direct
representationalist" are meaningless.

Brook >
My own view is that civilized debate should avoid speculations about
people's motives. It is clear that Steve refuses to consider any position
other than his own but I wouldn't dream of speculating about why that is so.
< Brook

I am sorry, I was aware that that might sound a little offensive. But the
point I was making is that the two sides of this debate are not
symmetrical: one is the "naive" view that we understand from the earliest
days of childhood, while the other is admittedly incredible, or seemingly
so, which is why it remains so controversial. I can understand your
position, having been there myself, whereas I don't think you can even
fully conceptualize my position; it sounds so self-evidently absurd to you
that it sounds like a theory that "black is white" or "up is down". And that
asymmetry in the debate is a significant factor, even if it remains (of
course) invisible to you. For example:

Brook >... said why this [direct representationalism] works, indeed is
really the only game in town -- no notion of anything more direct can even
be articulated.
< Brook

and

Brook >
If Steve is reading this, the state he is in is a perfect example of direct
awareness. He is aware of the words on the screen directly, not by
inference from anything else of which he is aware.
< Brook

That does not distinguish the two alternatives; indirect perception also
predicts an experience *as if* it were direct. How does Andrew think
experience would appear if representationalism were indirect? It would
appear just like that.

In summary, we need

1: predictions of possible experiments, and

2: a definition of what the concept means that differentiates it from
indirect perception in a way that would have implications for biology or
robotics.

Otherwise there is no "there" there.

Steve

Andrew Brook

May 10, 2005, 1:29:39 PM
Lehar:

Even if we found representations for *every last* experience in the
brain, that would not disprove direct perception. But then again, what
would? If there is no imaginable experiment that would distinguish
between direct and indirect perception, then they are indistinguishable.

Me:
As I have said repeatedly, if this is so, then neither theory is
empirical on this point. If evidence does not distinguish A from B, then
it does not distinguish B from A either, and Steve has no evidence
whatsoever that perception is indirect rather than direct. 'Distinguish'
is a symmetrical relationship. Since I think that indirect
representationalism is at bottom not a theory at all, just an utterly
confused misinterpretation of the scientific evidence, that result is
fine with me.

(The rest of the message is same old, same old about robots, precision,
etc. I have responded to these charges half a dozen times now.)

Andrew

--

Andrew Brook, Professor of Philosophy
Director, Institute of Cognitive Science
Member, Canadian Psychoanalytic Society
2217 Dunton Tower, Carleton University
Ottawa ON, Canada K1S 5B6
Ph: 613 520-3597
Fax: 613 520-3985
Web: www.carleton.ca/~abrook

Neil W Rickert

May 10, 2005, 7:11:45 PM
Steven Lehar <sle...@CNS.BU.EDU> wrote on May 9, 2005:
>Reply to Neil Rickert:

>Rickert >
NR>There is no clear meaning of "representation". Different people take
NR>it to mean different things.
>< Rickert

SL>There's a lot more consensus on the meaning of "representation" than there
SL>is of "direct perception". Now there's a term that means different things
SL>to different people! But most anyone would recognize a photodiode array as
SL>a representation that represents patterns of brightness across an image
SL>with a pattern of voltages in certain registers. It does not matter whether
SL>there is anyone there to "see" or experience the representation, it is
SL>still a representation that represents one thing with another.

There is a computer science meaning of represent, and there is a
cognitive meaning of represent. They are not the same.

It is one thing to say that the output of individual diodes
represents the intensity of light striking that diode. It is a quite
different thing to say that the array as a whole represents an
image.

>SL>They cannot make testable predictions of future experiments that would
>SL>resolve the matter one way or the other, nor can they build a functioning
>SL>robot to demonstrate the *principle* behind the concept.

>Rickert >
NR>Steven keeps repeating this mantra. ...
NR>Both sides are in the same predicament with respect to predictions.
NR>Both can make vague predictions about what will eventually be
NR>discovered in neurophysiology, but neither side can give specifics
NR>sufficient to guide the neuro-scientists in such a determination.
>< Rickert

SL>Not so! Representationalism makes the very concrete prediction that every
SL>aspect of the world that is experienced, will eventually be identified as
SL>various states of the brain.

There is nothing concrete about that prediction. There is no
suggested way of testing it. There is no known metric for "every
aspect of the world that is experienced". The prediction claims an
identity between two things. One of these is knowable in principle,
but unknowable in practice, due to the complexity of the brain. The
other is unknowable in principle for it refers to what is private and
subjective.

>SL>Any robot model with a camera, computer, and servos, demonstrates the
>SL>*principle* behind representationalism, and thus clarifies concepts such
>SL>as "information", "representation", and "processing" in terms that are
>SL>perfectly clear to anyone who has used a computer.

>Rickert >
NR>No, it doesn't do any of those things. I don't know of anybody who
NR>really believes that a camera, computer and servos has any conscious
NR>experience,
>< Rickert

SL>Redefinition: "representation" now means "has conscious experience"

The redefinition is due to Steven. A few paragraphs up, he
makes what he calls a concrete prediction, which equates what
he calls representations with experience. But now he apparently
wants to deny that representation has anything to do with
experience.

I can only repeat my point. Representationalism, at least according
to some of the claims of its proponents, is purported to be an
explanation of conscious experience. Yet the robots that are said to
demonstrate the principle of representationalism do not actually
demonstrate any conscious experience at all.

-NWR

Jeff Dalton

May 11, 2005, 11:14:44 AM
Quoting Neil W Rickert <ricker...@CS.NIU.EDU>:

> Steven Lehar <sle...@CNS.BU.EDU> wrote on May 9, 2005:

> NR> but neither side can give specifics
> NR> sufficient to guide the neuro-scientists in such a determination.

> SL>Not so! Representationalism makes the very concrete prediction that every
> SL>aspect of the world that is experienced, will eventually be identified as
> SL>various states of the brain.
>
> There is nothing concrete about that prediction. There is no
> suggested way of testing it. There is no known metric for "every
> aspect of the world that is experienced". The prediction claims an
> identity between two things. One of these is knowable in principle,
> but unknowable in practice, due to the complexity of the brain. The
> other is unknowable in principle for it refers to what is private and
> subjective.

That's unduly pessimistic. For example, I'm aware of the colours
of various objects. That's reasonably knowable. Sure, there are
skeptical reasons for saying we don't really know it, but skepticism
applies much more generally. We needn't make it especially important
in this case.

Then, it's unlikely that the brain is so complex that we can
never work out whether colours are represented (in the CS sense)
in the brain.

> SL>Redefinition: "representation" now means "has conscious experience"
>
> The redefinition is due to Steven. A few paragraphs up, he
> makes what he calls a concrete prediction, which equates what
> he calls representations with experience. But now he apparently
> wants to deny that representation has anything to do with
> experience.
>
> I can only repeat my point. Representationalism, at least according
> to some of the claims of its proponents, is purported to be an
> explanation of conscious experience. Yet the robots that are said to
> demonstrate the principle of representationalism do not actually
> demonstrate any conscious experience at all.

Most representationalists (at least) don't think they're explaining
conscious experience. They're explaining the content of conscious
experience (or something like that). How / why various goings-on
in the brain are conscious experience is left as a mystery by
their representationalism.

Do the advocates of direct perception think that conscious
experience is somehow (magic?) able to directly connect with
the world, without involving the sorts of mechanisms that
would be implemented in a robot?

-- Jeff

Glen M. Sizemore

May 11, 2005, 4:58:29 PM
Dalton: Do the advocates of direct perception think that conscious
experience is somehow (magic?) able to directly connect with
the world, without involving the sorts of mechanisms that
would be implemented in a robot?


Sizemore: I didn't really expect to gain many converts, but I thought that the "perception-as-action" view that I have been espousing would have at least been considered. Here's the point -- this issue disappears if you view perception as an activity of the organism, like eating, breathing, and hitting a drop shot. It is mediated by physiology. What you sense when you introspect are portions of this activity. Nothing magical.

The only reason that you think you have a "picture" of the world is that looking at a photo is LIKE looking at the world -- that is, parts of the activity (and that activity is what you introspect) are the same (or at least similar) when I look at the world as when I look at a photo of that scene. But looking at the world is NOT looking at a representation, though it is LIKE it. And that is the answer to Nagel's riddle. And that is the answer to "qualia." We see some of our own activity and we are trained to call it "seeing green" or "seeing a horse," etc., etc.

We can, and have, trained animals to "report" on their own behavior (and presumably some aspects of the behavior are observable only to them), and the behavioral processes, presumably, closely resemble those of humans. We can investigate the ontogeny of such behavior (for example, how many different ways do you have to induce pain in an animal before it identifies novel instances as pain?) as well as the neurophysiology. There is nothing else left! So much for "qualia." But this won't ever convince those who insist that we are seeing a distorted copy of the world.

<snip the tail>
>
> From: Jeff Dalton <je...@INF.ED.AC.UK>

Neil W Rickert

May 11, 2005, 8:45:47 PM
Jeff Dalton <je...@INF.ED.AC.UK> wrote on May 11, 2005:
>Quoting Neil W Rickert <ricker...@CS.NIU.EDU>:

NR> There is nothing concrete about that prediction. There is no
NR> suggested way of testing it. There is no known metric for "every
NR> aspect of the world that is experienced". The prediction claims an
NR> identity between two things. One of these is knowable in principle,
NR> but unknowable in practice, due to the complexity of the brain. The
NR> other is unknowable in principle for it refers to what is private and
NR> subjective.

JD>That's unduly pessimistic. For example, I'm aware of the colours
JD>of various objects. That's reasonably knowable. Sure, there are
JD>skeptical reasons for saying we don't really know it, but skepticism
JD>applies much more generally. We needn't make it especially important
JD>in this case.

I was not intending to be pessimistic.

For comparison, consider the Church-Turing thesis. This also claims
an identity, or at least an equivalence, between two things. But it
is not referred to as a theorem or a proposition or a hypothesis. It
is referred to as a thesis, precisely because these things are in
different domains such that no proof of equivalence, experimental or
formal, is possible.

I expect that we will eventually have a good account of cognition,
and we will accept some equivalences as a consequence of that
account. But the equivalence will be something that we just accept,
not anything that we can prove.

A lot of science is like that.

Newton never did prove the equivalence of gravity with an attractive
force. Asserting that there was an equivalence was a thesis, but not
a prediction. Newton could make many useful predictions from his
theory of gravity, but that presumed equivalence was not one of
them.

JD>Then, it's unlikely that the brain is so complex that we can
JD>never work out whether colours are represented (in the CS sense)
JD>in the brain.

I expect we will need good theories to guide research before we can
get that far. But we will still have no more than a thesis, although
perhaps one which we could accept without dispute. This would be the
same problem of a claimed equivalence of two things that are in such
different domains that there could be no experimental verification of
equivalence.

-NWR

Neil W Rickert

May 11, 2005, 8:45:23 PM
Jeff Dalton <je...@INF.ED.AC.UK> wrote on May 11, 2005:

JD>Do the advocates of direct perception think that conscious
JD>experience is somehow (magic?) able to directly connect with
JD>the world, without involving the sorts of mechanisms that
JD>would be implemented in a robot?

I don't claim to speak for all direct perceptionists. So I will speak
only for myself.

I don't expect that there is any magic. On the other hand, I also
don't expect that we will build robots capable of having experience,
or at least I don't expect that we will build them any time soon.

For sure, I do not deny that there are mechanisms, and I don't expect
any new physics to be required.

The trouble with the mechanisms used by robots, as we build them today,
is that they lead to a brittleness of behavior. I expect biological
systems will use different mechanisms more resistant to this problem.

At least part of the disagreement between me and a
representationalist such as Steven is conceptual. We disagree on
how things should be described.

Let me start by stating a couple of positions where I think Steven
would disagree.

1: Computers don't actually compute. People compute, and use
computers as tools to assist them in that computation.

My reason for this view is that I take computation to be action
on numbers (or other mathematical objects), rather than action on
numerals (the marks we make to represent numbers). To be sure,
we carry out our computation by operating on numerals, which we
use as proxies for the numbers. But the term "computation"
refers to the action on the numbers, not to the proxy action on
the numerals.

The question of whether there is an error in the computation is
judged by whether the right operations are performed on the
numbers. For example, in a floating point computation, we might
follow all of the formal rules in manipulating the numerals, but
still get a bad answer due to effects of roundoff error. We
would judge this to be an error in the computation, even though
all of the formal rules were properly followed.
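
A concrete instance of this, sketched in Python: every formal IEEE-754
rule for manipulating the numerals is followed, yet the answer is wrong as
a statement about the numbers.

    # The formal rules on the numerals are followed perfectly,
    # yet the computation on the numbers goes wrong.
    print((1e16 + 1.0) - 1e16)  # 0.0, though the numbers say 1.0
    print(0.1 + 0.2 == 0.3)     # False, due to roundoff in the numerals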

1(a): Computers don't make inferences.

This is really the same point. In its ordinary meaning, "inference"
is used for decisions made about content, not about form.

2: An ordinary mercury thermometer does not carry a
representation of the temperature. Nor is it a symbol that
denotes the current temperature.

The thermometer reading is strongly correlated with temperature,
and we use that correlation in our determination of the
temperature. But we also have to consider whether the thermometer
is miscalibrated, or whether there is a local source of heat that
makes the thermometer reading unreliable for a particular
reading.

This possibility of error in the thermometer is one of the
considerations that leads me to the view that the thermometer reading
does not represent or denote.

I should add that I am not doctrinaire in the above views. I teach
computer science, and I talk to my students as if the computer
actually does compute. For ordinary purposes, the distinction is not
that important, and is usually not a source of confusion. But it can
matter when considering theories of cognition or of consciousness.

-------------

Those examples lead to what I see as the important difference between
the representationalist view and the direct perceptionist view.

I expect that the representationalist does take the thermometer
reading as a representation. He assumes that all we need is to copy
that external representation, to form an internal representation.

The direct perceptionist view, or at least my own view, is that we
cannot copy. Rather, we must *read* the thermometer. And reading is
not just copying, for it involves judgement as to whether conditions
(such as a local source of heat) could affect the reliability of the
thermometer.

There, I think, is the main difference. The representationalist sees
perception as a mainly passive activity, copying representations that
exist in the sensors. The direct perceptionist view is that
perception is not passive, but involves activity (as in reading the
thermometer). The direct perceptionist is emphasizing the
interactive aspects of perception.
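
A caricature of that contrast in Python (the names and calibration details
are invented for illustration): copying takes the sensor state at face
value, while reading weighs the circumstances before accepting it.

    def copy_reading(thermometer_value):
        """Representationalist caricature: copy the external representation."""
        return thermometer_value

    def read_thermometer(value, calibration_offset, near_heat_source):
        """Direct-perceptionist caricature: reading involves judgement."""
        if near_heat_source:
            return None  # judged unreliable for this particular reading
        return value - calibration_offset

    print(copy_reading(25.0))                  # 25.0, accepted unconditionally
    print(read_thermometer(25.0, 1.5, False))  # 23.5, after calibration
    print(read_thermometer(25.0, 1.5, True))   # None: reading withheld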

-NWR
