
Aristotle and the self-referencing representation.


Alex Green

Oct 21, 2006, 10:46:51 AM
Brook and Raymont's interesting article in Psyche:

The Representational Base of Consciousness
http://psyche.cs.monash.edu.au/symposia/kriegel/3Brook_Raymont.pdf

raises some interesting issues. The idea of a self-representing representation is particularly striking. As Brook and Raymont put it:

"Representations can represent themselves as well as whatever else they may represent."

or Aristotle:

"In every case the mind which is actively thinking is the objects which it thinks."( De Anima)

Where Aristotle and Brook and Raymont seem to differ is in the use of "Consciousness OF". Aristotle realises that "Consciousness OF" implies a regress:

"..we must fall into an infinite regress or we must assume a sense which is aware of itself." (De Anima Book III,425b)"

Aristotle proposes an interesting way out of this problem:

"But that which mind thinks and the time in which it thinks are in this case divisible only incidentally and not as such. For in them too there is something indivisible (though, it may be, not isolable) which gives unity to the time and the whole of length; and this is found equally in every continuum whether temporal or spatial."

Aristotle seems to be saying that time extended representations might solve the problem of self representation.

Has Aristotle already found the answer to Brook and Raymont's problem?

Now for a rant. I would like to see editors of articles on consciousness confront "consciousness OF". "Consciousness OF" implies a regress, and the truly interesting problem in consciousness research is how this regress is avoided (as Aristotle knew thousands of years ago).

Best wishes

Alex Green


Jonathan Edwards

Oct 21, 2006, 3:32:53 PM
Like Alex Green, I found Brook and Raymont's article enjoyable and
interesting. They conclude
"Someone should write a book about all this."
At least one person already has. I am entirely in agreement with
their proposal and have given some suggestions as to the biophysics
of how and where this can come about in:

"How Many People Are There In My Head? And In Hers?
Imprint Academic September 2006 - available thru Amazon.

This is an elaboration of the J Consc Stud 2005 v4-5, 60-76 article
at http://www.ucl.ac.uk/~regfjxe/aw.htm (which mostly deals with the
binding problems)

For me the key concept is 'presentation'. These are not in fact re-
presentations because they are not presented anywhere else earlier
on. There is only the one presenting. This is maybe another way of
putting Brook and Raymont's problems with FOR. That presenting must -
like the good bar code analogy - have layers of meaning which might
be compared with semantic and syntactic aspects of language;
component and contextual meanings (a bit like the computer language
C?). To have that meaning they must be presented to something that
knows how to interpret complex level meanings. My belief is that post-
Feynman physics provides us with that fairly easily - but you have to
be prepared to throw out a few treasured teddy bears of ideas of who
you are.

Jo Edwards

Anthony Sebastian

Oct 21, 2006, 6:23:19 PM
In response to Alex Green's post of 21 Oct 2006, stating that "'Consciousness OF' implies a regress and the truly interesting problem in consciousness research is how this regress is avoided (as Aristotle knew thousands of years ago)."

In relation to experiencing, if each successive metacognitive regress* requires an increase in cognitive complexity, then the number of regresses might find its limit in the complexity-achieving limits of cognitive processing.

*consider the following fourfold regress:

experiencing non-consciously = performing the physiological activity of receiving and responding adjustively to information about some external, say, object or event of reality

experiencing consciously = performing the physiological activity of receiving and responding adjustively to information about the internal event of reality comprising the performing of the physiological activity of receiving and responding adjustively to information about an external, say, object or event of reality

experiencing self-consciously = performing the physiological activity of receiving and responding adjustively to information about the internal event of reality comprising the performing of the physiological activity of receiving and responding adjustively to information about the internal event of reality comprising the performing of the physiological activity of receiving and responding adjustively to information about an external, say, object or event of reality

experiencing evaluatively self-consciously = performing the physiological activity of receiving and responding adjustively to information about the internal event of reality of performing the physiological activity of receiving and responding adjustively to information about the internal event of reality comprising the performing of the physiological activity of receiving and responding adjustively to information about the internal event of reality comprising the performing of the physiological activity of receiving and responding adjustively to information about an external, say, object or event of reality

Perhaps cognitive processing can achieve further complexity, enabling 'higher' metacognitive states, but one begins to appreciate that ahead lies a non-linear road.
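
The regress above has a simple recursive shape, and a minimal Python sketch makes the layering explicit (an editorial illustration only: the function name is invented, and its wording is lifted from the definitions above rather than from any formal model):

# The fourfold regress as a recursive wrapper: each metacognitive level takes
# the whole lower-level activity as its new target, so every step adds a
# further layer of description.
def experiencing(level, target="an external object or event of reality"):
    """Return the level-n description in the regress (level 0 = non-conscious)."""
    base = ("performing the physiological activity of receiving and responding "
            "adjustively to information about ")
    if level == 0:
        return base + target
    # Each higher level re-describes the whole level below it.
    return base + "the internal event of reality comprising " + \
           experiencing(level - 1, target)

labels = ["non-consciously", "consciously", "self-consciously",
          "evaluatively self-consciously"]
for n, label in enumerate(labels):
    print(f"experiencing {label}: {len(experiencing(n))} characters of description")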

Anthony Sebastian.

-------------------------
Anthony Sebastian, MD
Professor of Medicine
40 Crags Court
San Francisco, CA 94131
Anthony_...@msn.com [preferred email address]
415-648-0834 [tel]; 415-358-5953 [fax]

Faculty Affiliations:
Department of Medicine, Division of Nephrology
General Clinical Research Center,
Special Projects Associate
University of California, San Francisco
seba...@gcrc.ucsf.edu

Faculty Webpage:
http://medicine.ucsf.edu/nephrology/faculty/anthony_sebastian.html

View/Download Selected Publications:
http://www.msnusers.com/AnthonySebastianMDFiles/documents.msnw

Weblog: http://tonyseb.blogspot.com

Emails written in E-Prime: http://en.wikipedia.org/wiki/E-Prime

Alfredo Pereira Jr

Oct 21, 2006, 8:07:51 PM
Alex Green wrote:
> "Consciousness OF" implies a regress and the truly interesting
> problem in consciousness research is how this regress is avoided
(as Aristotle knew thousands of years ago).

There is a third alternative to
a) the infinite regress/progress in the "consciousness of" approach and
b) the "self-referencing representation" Platonic solution to that
problem.

The third alternative is based on Merleau-Ponty. Very briefly,
consider Pereira's definition of consciousness as *contentful
subjective experience*. Following this definition, the structure of
consciousness has three aspects:
a) the contents;
b) the lived experience;
c) the conscious subject.

Now consider that the content is embodied in brain activity and
embedded in the domain of interaction of brain, body and
environment.
And finally assume, with Merleau-Ponty, that the conscious subject
is the living body.

In this view, consciousness is consciousness of contents (not
necessarily representations) generated in the interaction domain,
and processed by the brain. The conscious subject is the living
body, which is a part of the content-generating domain. Therefore,
consciousness implies a (partially) *self-referencing living body*,
not a self-referencing representation.

Best Regards,

Alfredo Pereira Jr.

Steven Ericsson-Zenith

Oct 21, 2006, 8:10:26 PM
I would like to hear from Andrew Brook exactly what it means "to
present" in his identity model. To be honest, I could make little
sense of it.

James, who is often referred to in the paper, was mostly influenced
by Peirce in these matters; I am surprised that no mention was made
of Peirce's detailed considerations.

With respect,
Steven


--
Dr. Steven Ericsson-Zenith
Institute for Advanced Science & Engineering
http://iase.info

Andrew Brook

Oct 21, 2006, 10:29:30 PM
We'll get to some of the other messages tomorrow but this one is fairly
straightforward so we'll have a shot at it now.

Andrew

Steven Ericsson-Zenith wrote:
> I would like to hear from Andrew Brook exactly what it means "to
> present" in his identity model. To be honest, I could make little sense
> of it.

That's Andrew Brook and Paul Raymont. We have done this project together.

How does a representation present whatever it is about? How, for example, does a
perception of a painting present the painting? That is how a representation
presents itself -- and oneself as its subject. We see no need to posit any
asymmetry here.

> James, who is often referred to in the paper, was mostly influenced by
> Peirce in these matters, I am surprised that no mention was made of
> Peirce's detailed considerations.

Well, tell us about the influences! Not general influences but influences where
they would count for what we cite from James. Just saying that there were
influences is not enough. So far as we know, *on the parts of James' work of
interest to us*, Peirce did not have much of an influence. The fact that in
general Peirce was a more sophisticated and systematic philosopher than James is
not enough by itself to show anything to the contrary. Anyway, so long as James
had a distinctive and original point of view of his own, it does not matter much
who influenced him, surely. Aristotle and Kant influenced me, Kant in
particular. We don't mention this either.

Andrew

--

Andrew Brook
Chancellor's Professor of Philosophy
Director, Institute of Cognitive Science
Member, Canadian Psychoanalytic Society
2217 Dunton Tower, Carleton University
Ottawa ON, Canada K1S 5B6
Ph: 613 520-3597
Fax: 613 520-3985
Web: www.carleton.ca/~abrook

Steven Ericsson-Zenith

Oct 22, 2006, 12:07:18 PM
On Oct 21, 2006, at 6:36 PM, Andrew Brook wrote:

> Steven Ericsson-Zenith wrote:
>> I would like to hear from Andrew Brook exactly what it means "to
>> present" in his identity model. To be honest, I could make little
>> sense of it.
>

> ...


> How does a representation present whatever it is about? How for
> example does a perception of a painting present the painting? That
> is how a representation presents itself -- and oneself as its
> subject. We see no need to posit any asymmetry here.

I am not referring to any asymmetry, I am simply asking what - in
your model - it means "to present." What you have said here does not
appear to make any more sense than the paper. Simply, what does it
mean "to present," specifically. It appears to me to be a very vague
notion, and from my point of view is the flawed basis of the paper
since it is then even more uncertain as to exactly what you are
referring to when you discuss "self-representation."

Further, how would one distinguish your identity theory from any
other so that we might discover its relative merit?

Andrew Brook

Oct 22, 2006, 1:13:04 PM
Soon we should take this off the list but let me try once more: In the same way
that a perception presents what it is about, a perception presents itself and
oneself as its subject. We do not endorse any particular theory of
(re)presentation and hope that our view is compatible with any reasonably
adequate theory. We define our notion by example, as I have just done. What
matters to us is that, whatever presentation of the world is like, a
representation presenting itself and it presenting oneself as its subject are
the same kind of process. Different target, same kind of process.

A note on the word 'representation'. Another contributor made a sharp
distinction between representation and presentation. We don't. 'Representation'
is now a term of art in cognitive research and no longer has much by way of
links to its etymological roots. For us, what representations do is present
states, properties and events to cognitive subjects. You could call these
presenting states 'presentations' if you wanted but you'd run the risk of not
being understood.

I'm not sure what you mean by 'identity theory'. As the term is used in
philosophy, we are functionalists, not identity theorists, though we don't go
into the matter in the little Psyche paper. That is to say, we think that
representating, thinking, etc., are activities of the brain, not specific
circuits or whatever in the brain. But maybe you have something different in mind.

Andrew

Jonathan Edwards

Oct 22, 2006, 3:11:39 PM
That representation is 'a term of art' would worry me. This seems to
imply that everybody knows what it means and uses it the same way.
However, most of us are surely familiar with the fact that it is
usually used in all sorts of ways to suit the assumptions of the user.
Also, this is a discussion forum on consciousness for people of all
disciplines so we need to make sure that we do not use 'discipline-
private' meanings. Things often mean something different in the lab
down the corridor.

Fodor and Putnam would seem to have raised serious issues about what
representation might mean, and whether or not it has to be a re-
presentation to something is crucial. Any model that locates
consciousness in a network of nerves, for instance, implies that
percepts are re-presented to something that has no meaningful
identity or receiving capacity, since it is an arbitrary collection
of receiving units, not a receiving unit. In functionalist terms
there can be no presenting. These issues are at the heart of the
problem.

I think we need to consider the possibility that we have a word of
art that has no coherent meaning. Steve Zenith's question seems
pertinent because if there really is presenting implied we have to
have some idea of a physically possible context. I am happy with mine
but Steve is probably sceptical so he is entitled to a suggestion,
maybe?

Jo Edwards

Andrew Brook

Oct 22, 2006, 5:16:34 PM
Jonathan, I entirely agree with you that there are lots of interesting questions
about representation but they were not our questions, certainly not in a version
of our approach that had to meet the constraints of Psyche. You are quite right
that the word 'representation' is used in a great many ways. For example, the
way in which linguists use it is very different from how the AI community uses
it which is very different from how (most) psychologists and philosophers use
it. And that is just at the level of differences in what is being talked about.
Go to theories about these things and the proliferation multiplies by at least
an order of magnitude. So yes, the term has no coherent meaning. But it has
coherent meaningS. We're using one of them.

That said, we laid out how we are using the word, by giving examples. About
everything else, including what theory would give an adequate account of what is
going on in these examples, we are neutral, including about the issue that most
interests you, presentation vs. re-presentation. For our purposes, which is
right, if either, just does not matter.

It would be interesting to see some comments on the topics that we do talk
about, as well as on ones that we did not talk about.

Andrew


Jonathan Edwards

Oct 23, 2006, 10:03:15 AM
Dear Andrew,
I appreciate that you may feel you are getting a hard time from
people interested in other agendas but I think people are genuinely
interested in a synthesis. In my initial post I opened with the
comment implying that I was entirely in tune with your rejection of
FOR and HOR approaches. I am also entirely in agreement with the
interpretation of representation that implies presentation to
something - unlike in a computer where it is unclear what this might
mean. In your last post you say you are using one meaning of
representation, but we have to deduce which one from your invocation
of presentation. Presentation is unambiguous. There is another issue
about encoding and decoding, which I agree is a non sequitur.

The problem is that having rejected the other models on logical
grounds, we are faced with some sort of 'nested' presenting to
'something'. The trouble is that the usual models can be shown to be
physically incoherent using just the sort of logic you use to reach
your model of representation. William James's 'non-existent physical
fact' has never gone away. Presenting must have some sort of physical
underpinning and a good way to test conclusions we have derived by
logic is to make sure that they can be consistent with at least some
sort of physical substrate. So, in a sense I have spent the last five
years trying to work out how the sort of model you propose can be
physically possible. The answer I came to is tough to swallow, but
it is also biologically very intuitive in certain respects. I am
always ready to consider another option but I cannot see what that
could be at present.

I guess my comment is 'elegant derivation of a theoretical model but
what would you suggest as a candidate 'presentee' that would give it
the physical legitimacy it needs?'.

Jo E

Andrew Brook

Oct 24, 2006, 8:03:13 PM
Finally a moment to say a word in response to Alex Green's interesting
message on Aristotle. Raymont and I have always thought of Aristotle as
one of our allies and list him as one in the full version of the Psyche
paper. Indeed, I admire De Anima very much, especially when one
considers what a massive advance it was over anything resembling
psychology or phil of mind in its time.

One statement in Alex's post caught my eye. "Aristotle realises that
'Consciousness OF' implies a regress:

'...we must fall into an infinite regress or we must assume a sense which is
aware of itself.' (De Anima, Book III, 425b)"

But surely what Aristotle is saying is that 'consciousness of' implies a regress *only if*
we do not adopt the idea of self-presenting representations, "a sense which is
aware of itself." We could not agree more.

There is one general caution about Aristotle. Because we are getting him through at
least two, sometimes three, translations, it is very hard to know if his words
now translated (through Arabic and often Latin) as some English word were
actually referring to what we use that word to refer to.

Andrew

{Alex has a second quote from Aristotle but does not give a reference. On a quick
look, I could not find it.}

Andrew Brook

Oct 24, 2006, 8:03:13 PM
Alfredo Pereira Jr ended an interesting post on Merleau-Ponty this way:

"Therefore, consciousness implies a (partially) *self-referencing living body*,
not a self-referencing representation."

I am not sure what difference he had in mind here but the two don't look
mutually exclusive to me. Quite the reverse. A living body does its work by
(among other things) representing, no?

Andrew

Joseph Polanik

Oct 24, 2006, 10:25:27 PM

While your theory of the tripartite structure of representations
describes our kind of consciousness in a better way than FOR and HOR, I
question your rejection of the idea that a representation could be
decomposed into its three separate components.

If a representation could not be decomposed, then the evolutionary
challenge is increased. An animal would have had to develop the capacity
to have the entire tripartite structure all at once. It seems plausible,
maybe even probable, that some animal developed the capacity for having
representations that were about something else and not at all about
themselves.

What are your reasons for supposing that this could never have happened?

If it could have happened that way somewhere in our evolutionary tree,
would it not be possible to analyze a representation into its components?

Joseph Polanik

@^@~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~@^@

Arnold Trehub

Oct 25, 2006, 1:01:57 PM
In their PSYCHE 12 (2) paper, Brook and Raymont use the example of seeing words
on a computer screen. They claim that seeing the words is all the representing
one needs to do to become conscious of (1) the words, and of (2) representing
them. They then claim the following:

"In addition, each such act of representing is all the representing needed, we
believe, to become fully conscious of a third thing -- of [3] *who* is seeing
the words, namely, oneself. We can't argue for this here but if it is so, then
for each such representation, to become aware not just of what it represents
and of the representation itself but also of oneself as the thing that has it,
we need only that one representation."

They sum up their argument in the idea that representations are self presenting
and that the representational basis of consciousness is a global representation
including the self as the one having the representation.

Jonathan Edwards agrees with this view but wants some kind of candidate
physical "underpinning" to test and explain this proposal about the
representational basis of consciousness. Jonathan writes:

> The problem is that having rejected the other models on logical
> grounds, we are faced with some sort of 'nested' presenting to
> 'something'. The trouble is that the usual models can be shown to be
> physically incoherent using just the sort of logic you use to reach
> your model of representation. William James's 'non-existent physical
> fact' has never gone away. Presenting must have some sort of physical
> underpinning and a good way to test conclusions we have derived by
> logic is to make sure that they can be consistent with at least some
> sort of physical substrate.

It seems to me that if by "the representational basis of consciousness"
one means the basis for the existence of consciousness (C) per se, we are
unable to explain the sheer existence of C (the hard problem) within our
current scientific norms. We can, however, provide an explanation for the
*content* of consciousness, i.e., that which is self presenting. I have
commented about this in an earlier post:

http://listserv.uh.edu/cgi-bin/wa?A2=ind0110&L=PSYCHE-D&P=R2&I=-3

I agree with Jonathan that there is a nested presenting and that the
presenting has to be to oneself (*me/I*). The crucial question is how can
this possibly happen within the biophysical constraints of the brain.
My claim is that we are able to have a phenomenal experience if and only
if we have an innate neuronal system that can represent a coherent extended
3D world from an egocentric perspective. For a more detailed account of this
view see Trehub (in press), "Space, self, and the theater of consciousness",
*Consciousness and Cognition*.

Arnold Trehub

Andrew Brook

Oct 25, 2006, 10:05:59 PM
Re. Colin Hales's important message, three things:

1. Why couldn't the midbrain house representations? I don't see that
representing need be limited to cortex.

2. In our picture, we talk about 'all the representation needed for ...'
deliberately. If other things are needed, too, to have this that or the other
kind of consciousness, that would be no problem for us. My own inclination would
be to think that a midbrain alone could be conscious of the world around it and
some states of its own body (hunger, pain, maybe simple pleasures), but not of
being conscious of these things or of most of its representations.

3. I think it most unlikely that any small group of neurons could be any
conscious state. Would they still be a conscious state if separated from the
rest of the brain (in such a way as to keep them alive and functioning)?


Re. Joseph Polanik's intriguing message on evolution and self-presenting
representations (a topic to which I haven't given a lot of thought):

We claim that the information in a representation about itself and about oneself
is inseparable from the information about the world. I don't think that this
creates a special problem for explaining how such representations could have
evolved. I know it looks on the face of it like the problem of the eye, that all
the bits would have had to be present together for any of them to be selected for,
but in this case we are not talking about anything necessarily very complicated.
By representing anything, a representation represents itself to some degree. And
it is actually glued to the thing that has it, so has to have information about
this thing. Now, the capacities needed to recognize or make any kind of use of
this information would be another matter -- they could be stacked and layered in
evolutionary time. But the three-part representation by itself is no big deal.
Put the distinction this way. If what I am saying is right, then iguana
representations contain all three kinds of information. It does not follow from
this that the iguana can make any use, any use at all, of most of the
information thus contained.

Andrew

Colin Hales

Oct 26, 2006, 9:18:50 AM
I'd like to suggest caution here:

"3. I think it most unlikely that any small group of neurons could be any
conscious state. Would they still be a conscious state if separated from
the rest of the brain (in such a way as to keep them alive and
functioning)?"

Consider the entity X consisting of a surgically excised thirst cohort
surrounded by a perfect electrical/chemical replica of the input axon
feeds, output axon feeds, nutrient supply and impinging electromagnetic
fields from all directions and all related material flows through the
interstitial spaces and to astrocytes, endocrine and immune system
interactions etc... A big ask! To the cohort it appears to be operating
inside a normal brain.

I believe in this circumstance the entity X cannot currently be justifiably
argued not to have the physics generating phenomenal consciousness
operating as it was before excision - even though it is not integrated into
a normal agent. Everything that was there before has been supplied. From
the point of view of 'being' the entity X, then that experience would
be 'had' by the entity X cohort in that local region of the universe
including X. It could be an experiential representation corresponding to
the worst dose of thirst ever experienced by any biology ever. It's just
that it no longer integrates with any form of agency, so no agent is
actually behaving in response to the experience. No-'body' is having it. No-
'body' can report 'what it is like'.

The fact that we find it hard to imagine how this might be should not be a
reason to discount it as a possibility. Especially as it remains consistent
with everything else we wish to say about consciousness.

Another tricky bit is that the words 'conscious state' presume meaning not
currently mapped to the physiological state of a piece of matter in any
agreed manner by science. We have to be careful. For example, to me the
words 'conscious state' mean the existence of any physics generating first-
person-presented phenomena. To others they are likely to mean something else.
It's time to get specific about these things, IMO.

It's interesting that we are getting to the point of being able to discuss
it like this at all. That seems to be the circumstance we now all inhabit.

In respect of phenomenal consciousness(only) Denton seems to have delivered
a reductionist stride by science out of the cortex into a small group of
basal neural cells. That stride walked away from a raft of arguments about
the nature of consciousness as a whole and towards the deeper physics of
phenomenal consciousness that is used to construct/deliver it into the
whole. Both sets of descriptors (low and high level) remain important but
at least we now know that the validity of certain points of view is likely
to be restricted to certain levels of a layered integrative view of brain
structure. That layered view appears to be extending DEEPER/FINER to a
layer we currently cannot see but nonetheless is there to be found.

Those investigators at the top of the structure (say, doing whole brain
human EEG analysis) are operating at the top level and will not say
anything prescriptive of low level physics. Those working on the low level
physics will be unable to say anything about the top level behaviour to the
EEG science - this is the perfectly normal state of the study of a
hierarchically organised natural structure. None of this sounds odd as a
situation in science.

No matter how weird it seems we are faced with the empirically proven
reality that small cohort/single cell physics underpins organism phenomenal
life through the provision of some sort of phenomenal 'palette' used to
paint representations on a spherical canvas with cells/cell groups as
a 'brush'. Some cells/cell groups are configured as brushes and some
aren't.

At least that's what the primordial emotion work seems quite reasonably to
be saying. I'll leave it there, I think.

Regards,

Colin Hales

Jonathan Edwards

Oct 26, 2006, 2:17:43 PM
I forget who suggested that it is most unlikely that a small group of
neurones could be any conscious state but I would have to support the
thrust of Colin Hales's reply. Why not? What metaphysical mist
changes the situation in a brain from that in a dish in which the cells
are cleverly given the same informational input? The assumption that
you need lots of cells to receive an informational input for a
conscious experience to occur based on that input is not based on any
scientific evidence and poses serious logical problems.

I would, nevertheless, wonder whether the data Colin quoted should
be interpreted a little differently. A cell/group of cells the job of
which is to receive visual information about moving objects has every
right to be sentient of that input but perhaps it would have no
'visual' quality because there is no need for it to be contrasted
with auditory information. My own view would be that the qualities we
call visual and auditory relate to informational input into cells
that receive both modalities - maybe prefrontal cortex cells.
Emotional input might, however, be relevant to cells at lots of
levels. It is not a modality that has to be put into a physical
framework in the same way, and might be the sort of signal that any
cell might want to receive - to alter respiratory drive or stress
protein synthesis or something. I would put an experience of fear not
just in the deep cells but another copy in cells all over the brain.
After all, these signals must be passed on to cells all over the
brain in order to affect cerebral function as a whole, as they seem
to. Information in the brain is by and large available in vast
numbers of copies - something people often seem to ignore.

I know that I am still in a minority of what may now be four in six
billion when it comes to conscious cells but 'most unlikely' needs to
be based on scientific argument. I have just enjoyed The God
Delusion. I suspect this is linked to the Human (as single sentient)
Being Delusion.

Jo E



Colin Hales

Oct 26, 2006, 5:55:24 PM
It's interesting to see how the discussion of consciousness physics
options now has a slightly different edge to it.

Jonathan Edwards speaks of possibilities as to a mechanism for integration
of cellular level experience components into a 'whole' and the visibility
of any part to the whole. To me this is the 'unity/binding' issue
approached in a post-Denton view of consciousness. The issue of the basic
physics of the experience and how the basic physics can interact/integrate
into a whole can be seen as two separate clues to the final solution.
Whatever distinguishes the 'thirst-cohort' from the 'non-thirst' cohort,
the physics involved must also offer potential for the sort of integration
Jonathon speaks of or it is not valid physics in the context of provision
of consciousness 'down the evolutionary track' in more complex brains.
Andrew Brook spoke of mid-brain experience content. Why not?

A whole raft of viewpoints are now open to criticism not previously
possible. For example, where are computationalists/functionalists to go?
Artificial intelligence work now has a case to answer in any assumption as
to the relevance/role or otherwise of the experiential qualities in
intelligence. You can ask...

"OK. This creature is smart. It can get thirsty right....'there'. Where's
the thirst in your computer and why doesn't it matter in intelligent
behaviour in respect of thirst? Evolution favoured it and expends a lot of
energy (blood flow/ATP consumption) to make thirst happen for reasons that
those creatures that don't died out. Justify why your design decision to
make a machine version of a thirsty creature will survive novel exposure to
dehydration but that has no 'thirst-qualia-generator' in it".

There may be an answer and it may be valid - but previously the discussion
wasn't possible and that was used to justify computationalist assumptions.
That argument is now more open to critical attack. In a critical argument
about AI design decisions you now have a specific case to answer that
wasn't there before.

Epiphenomenalism and emergentism have a similar problem in that we can now
point directly at qualia... there... It's not a product of
complexity/organisation at the organ level. There's still a role for
emergent features in the whole, but emergence based on a real something
going on deeper. Which I find rather comforting. Magical emergentism always
made me squirm. You can't have a lake without a water molecule. If
consciousness is the 'lake' then we have at least located the general
locale of a water 'molecule'.

Lots of good/easy empirical work is suggested: Make a creature (sheep, say)
very thirsty, fMRI it, locate 'thirst-cohort' and 'not thirst cohort'
nearby each other. Harvest the brain basal area. Put on a good Sunday roast
and thank the sheep. Then do a very detailed EM morphological study of
neuron/astrocyte/dendrite shape, location and interconnects. Plot the whole
thing out in 3D and look for the differences. Tedious but possible.

What are we waiting for?

regards,

Colin Hales

Andrew Brook

Oct 26, 2006, 10:17:25 PM
In response to Cathy Reason:

1. 'Levels' talk is purely metaphorical and in my view very apt to
mislead. There are no levels in the brain. What people mean when they
talk about levels is really different ways of describing the one,
single, brain.

2. An epistemic infinite regress is different from a regress of
'consciousness of' and it is harmless, I think, because we can get off
the treadmill any place we like.

3. The example of the barcode is our example. Of course there are a
number of sub-functions, as we discuss. But they are subfunctions of a
single representation, for the reasons we give.

4. We "admit" that HOR and FOR don't work? Not quite. We insist upon it
and provide arguments to support our hostility to them. 'Admit' implies
some reluctance on our part. We are not in the slightest reluctant to
jettison them.

Cathy Reason

Oct 27, 2006, 10:38:28 AM
Andrew Brook wrote:

1. 'Levels' talk is purely metaphorical and in my view very apt to
mislead. There are no levels in the brain. What people mean when they
talk about levels is really different ways of describing the one,
single, brain.

CMR:
You could argue this, on the grounds that levels are epistemic categories
imposed upon a natural system. By the same token, though, one could argue
that representations don't exist in the brain either because they are also
imposed epistemic categories.

AB:


2. An epistemic infinite regress is different from a regress of
'consciousness of' and it is harmless, I think, because we can get off
the treadmill any place we like.

CMR:
Oh no you can't - you have to follow it through to the bitter end. If your
conscious state is going to represent itself, then it has to be capable of
representing, beyond any possibility of doubt whatsoever, that it is a
conscious state - because that is what conscious subjects are able to do.

Now we can take a closer look at the problem. We have your conscious
state, which we'll call R, which must represent some other thing (let's call
that O) as well as itself. Let the state of R at any time be called X.

Ok. Now in line with your formulation, let's propose some single state X1,
which represents both R and O. How reliably does X1 represent R? That
depends on the subfunction, call it f1, by which X1 is determined. In other
words, the accuracy of X1 is contingent on the reliability of f1 - about
which we know nothing at all.

But in that case X1 fails to do what we required of it, which is to
represent that R is a conscious state *beyond any possibility of doubt
whatsoever*. In order to do what we require, we need to check on the
reliability of f1, and this necessitates another subfunction f2, which
produces a new state X2 which represents R, O, and a check on the
reliability of f1. But the accuracy of X2 is contingent on f2, and so on -
and so starts the treadmill.

But it is emphatically not true, as you claim, that we can get off the
treadmill whenever we want. Let's say we let the treadmill run for, say,
100 iterations. We end up with a state X100, whose accuracy is still
contingent on the reliability of the subfunction f100 which generated it.
So X100 still does not do what we require, which is to represent that R is a
conscious state *beyond any possibility of doubt whatsoever*. In fact
generally, the accuracy of any state Xn is contingent on the reliability of
the function fn which generates it.

Ok, one might say, but let's stop faffing around with all these piffling
little subfunctions - let's just introduce some mighty megafunction M, which
returns the state XM which can be summarized as follows: "I represent the
fact that R is a conscious state and I have confirmed the reliability of all
subfunctions involved in determining this, including myself." Does that
solve our problem?

Well, no. Because we can still ask *how reliably* M has checked on all
subfunctions including itself. It may be that M is not reliable at all -
and this will always be a possibility because of the inherent functional
separation between X and M.

(One might also note that M is mathematically equivalent to fL, where L is
the largest possible integer. Of course no such integer exists, and so
neither can M.)

But the upshot of all this is that your conscious representation can never,
ever represent what is required of it - which is that it is, *beyond any
possibility of doubt whatsoever*, a conscious state.
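
The treadmill can be run mechanically. A minimal Python sketch (an editorial illustration; the functions are stand-ins, not a model of any actual cognitive process) shows that after any finite number of checking steps the latest verdict still rests on one unchecked function:

# Each state X_n is produced by a subfunction f_n whose own reliability is
# only vouched for by f_{n+1}. However many iterations we run, the final
# verdict depends on the one function nothing has checked.
def run_treadmill(iterations):
    unverified = "f1"                  # f1 produces X1; nothing has checked f1
    for n in range(2, iterations + 1):
        # f_n produces X_n, which certifies f_{n-1} -- but f_n is unchecked.
        unverified = "f" + str(n)
    return unverified

for k in (1, 100):
    print(f"after {k} iteration(s), the verdict hangs on unchecked {run_treadmill(k)}")
# No finite number of iterations closes the chain; a megafunction M that
# certified itself would have to stand at the end of this unending sequence.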

Cathy

Andrew Brook

Oct 27, 2006, 12:09:15 PM
It is getting to be time to take our conversation offline but just one
last thing:

Why do you require accuracy "beyond any possibility of doubt whatsoever"?
We never impose such a requirement on any other knowledge-generating
process. Descartes famously tried to build an epistemology on a
requirement like yours but almost no one in the past couple of hundred
years has thought of his starting point as either viable or valuable.
Outside the realm of mathematics and the like, knowledge claims are
always hostage to the world, therefore never demonstrated conclusively.

Andrew Brook

Oct 27, 2006, 7:59:18 PM
See below

Jonathan Edwards wrote:

> I forget who suggested that it is most unlikely that a small group of
> neurones could be any conscious state but I would have to support the
> thrust of Colin Hales's reply. Why not? What metaphysical mist
> changes the situation in a brain from in a dish in which the cells
> are cleverly given the same informational input? The assumption that
> you need lots of cells to receive an informational input for a
> conscious experience to occur based on that input is not based on any
> scientific evidence and poses serious logical problems.

Unlikely because most conscious states contain/represent thousands if
not millions of bits of information. It is unlikely that one neuron or
even a small group could accomplish that.

As to the 'brain in a vat' thought experiment that has been floating
around on the list: Sure, if all the inputs and outputs to a certain
important set of neurons are there, a conscious state could be there,
maybe even would be there. But the conscious state could just as well
extend over the whole system: inputs, neurons, and outputs. There is no
reason that I know of, no reason from brain imaging for sure, to view
conscious states as not highly distributed representations. All imaging
tells us is that having state X is correlated with some region of the
brain (actually, it is almost always multiple regions of the brain)
using ever so slightly more oxygen or .... than background. Well,
background activation is still activation. Who knows over what range of
neurons X extends?

Cathy Reason

Oct 28, 2006, 11:52:20 AM
Andrew Brook wrote:

>Why do you require accuracy "beyond any possibility of doubtwhatsoever".
>We never impose such a requirement on any other knowledge-generating
>process.

Because as conscious subjects, we can be aware that we are conscious beyond
any possibility of doubt whatsoever, and that's just an empirical fact (in
the most fundamental sense of the word) we have to account for.

Cathy

Andrew Brook

Oct 28, 2006, 3:34:34 PM
In response to Cathy Reason:

An organism could not be conscious and fail to know this (the case with
all nonhuman animals, I suspect) or believe itself not to be conscious
(which actually happens, I believe, in some psychological disorder whose
name I cannot remember)? Going the other way, no organism could have a
belief (it would have to be an unconscious belief but we have lots of
those) that it was conscious of itself and its psychological states when
it was not? Both disconnects seem entirely possible to me. Indeed, the
two branches of the first scenario actually happen. (I restrict the
second as I do because having a belief at all may be enough for it to have
some minimal degree of consciousness, something not nearly as complex as
consciousness of self, to be sure, but still consciousness of some kind.)

Anyway, even if one is certain about being conscious in some way, why
would that be a problem for our model?

Andrew



Cathy Reason

Oct 28, 2006, 7:24:03 PM
Andrew Brook wrote:

An organism could not be conscious and fail to know this (the case with
all nonhuman animals, I suspect) or believe itself not to be conscious
(which actually happens, I believe, in some psychological disorder whose
name I cannot remember)? Going the other way, no organism could have a
belief (it would have to be an unconscious belief but we have lots of
those) that it was conscious of itself and its psychological states when
it was not? Both disconnects seem entirely possible to me. Indeed, the
two branches of the first scenario actually happen. (I restrict the
second as I do because having a belief at all may be enough for to have
some minimal degree of consciousness, something not nearly as complex as
consciousness of self, to be sure, but still consciousness of some kind.)

CMR:
I'm sure all this is true, but it doesn't change the basic empirical fact -
which is that as conscious beings we can know, beyond any possibility of
doubt, that we are conscious.

AB:


Anyway, even if one is certain about being conscious in some way, why
would that be a problem for our model?

CMR:
Dear me, Andrew, I rather thought I'd laid that out already, and at such a
tedious level of detail that I was expecting you to accuse me of obsessive
pedantry. If there's some part of the argument you don't follow or don't
agree with, wouldn't it be better if you focussed on exactly what that was?

Cathy

Andrew Brook

Oct 28, 2006, 8:28:54 PM
Cathy Reason wrote:

>I'm sure all this is true, but it doesn't change the basic empirical fact -
>which is that as conscious beings we can know, beyond any possibility of
>doubt, that we are conscious.
>
>

If what I said is true, it does 'change the basic empirical fact' --
there is no such fact. That's what the possibility of a double
dissociation shows.

About our model, I know what you said, of course. I just don't
understand why you said those things. (Your last response, 'Dear me,
Andrew', is patronizing but I will let that pass. Either I am a
candidate for village idiot or you are not justifying your assertions.)

Andrew

Jonathan Edwards

Oct 29, 2006, 9:19:03 AM
Exactly so, Andrew. Conscious states need 'thousands if not millions
of bits' of information. Thus our presentee must receive thousands or
millions of bits. Horace Barlow famously (Perception 1972) put it at
1000 elements. However, his model hit issues I will mention below and
for other reasons I agree we probably need more (barcodes need some
redundancy?). Let's say 100,000 bits.

A juicy pyramidal neuron, I am told, has a ~40,000 bit input. If it
makes analogue use of phase, and short term retention phenomena, like
dendritic spine twitching, it could front an experience worth a
significantly greater number of bits, but I doubt we go that much
beyond the preferred number - 100,000. How the heck you get the Mona
Lisa let's put to one side, because it's tricky however we try. But we
must remember that all we need is enough bits to encode; the
presentee can turn it into what might appear to be megabytes of
pixels, just as Word gives you a massive pdf.

The trouble with groups of cells is that they cannot have a richer
input bitscore because nothing (no presentee) gets more bits than one
cell. The idea that you can 'add up' the bits for a 'system' of
several receiving/integrating units, however commonly assumed, is
groundless. First of all, the maths of computation simply do not
allow that. At a more basic level, it does not make sense in any
recognised physical view of the world - as William James pointed out
even before computers were invented - a 'non-existent physical fact'.

And if our 100,000 bits was 100 bits in each of 1000 cells, what
would these bits be? If it is 1 bit in each of 100,000 cells then
what are these cells doing with just one bit? If the bit is their
output the presentee must be the next cell along and we are back to
where we were. It does not work. You may now see why I think it so
important that we are clear that we mean re-presentation to a presentee.
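
Laid out as explicit arithmetic (a sketch in Python using the rough figures above, which are estimates rather than measurements), the point is that distributing the bits never gives any single presentee a richer input:

BITS_NEEDED = 100_000      # working estimate for one experience
PYRAMIDAL_INPUT = 40_000   # the cited ~40,000-bit synaptic input of one neuron

# One cell falls short of the estimate unless analogue effects enrich it:
print(f"one cell: {PYRAMIDAL_INPUT} bits of input, "
      f"shortfall {BITS_NEEDED - PYRAMIDAL_INPUT} bits")

# 'Adding up' bits over a group: the group total looks ample, but no
# individual presentee ever receives more than its own share.
for cells in (1_000, 100_000):
    per_cell = BITS_NEEDED // cells
    print(f"{cells} cells: {per_cell} bit(s) each -- "
          f"and what is a cell doing with just {per_cell} bit(s)?")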

And we do not want too much input (40,000x10,000 say). There is no
reason to think that sentient units in a brain would be designed to
receive signal and noise (or duplication) and filter out. It is much
easier to envisage input as just signal - so more than 100,000 bits
would be an embarras de richesses. Assuming biological efficiency, we
are looking for 'just enough' and no more please.

So having one experience 'extend over a system' cannot work. For
sure, the whole brain is blasting away and fMRI hotspots are not what
they are cracked up to be - I could not agree more. But extension of
experience over a large area cannot, as James's argument
devastatingly shows, be many cells sharing one experience, unless you
concoct some new physical processes which would require neuroscience
to be trashed and rewritten. The only rational possibility, the one
that fits with neuroscience, is that there are lots of copies of
experience over a large area. Weird, but outputs from sensory relay
cells go to thousands of places, so why not? It is generally thought
that the brain is useful because it does millions of things at once -
like picking 1 out of 20,000 meanings for a noise in a fraction of a
second while trying out a dozen syntactic contexts. A single copy of
experience would only allow one computation at a time and the brain
does not work like that as far as we know.

So yes, there are reasons to view conscious states as not highly
distributed representations. The key argument was written in 1890 and
has never been refuted. It is so simple that I doubt it will ever be
refuted, even if James linked it to an equally simple logical error
that threw him off the scent and forced him to claim that
consciousness could not have a physical basis. It baffles me that it
is not immediately understood by everyone, but then, as I say in my
book, understanding may be almost as flimsy as David Hume suggested.
We are all at the mercy of societies of decision-making cells with no
overall master. Minsky just missed out that experience has to be
where the decisions are.

Best wishes

Jo E

Refs: James, W., Principles of Psychology Chapter 6, Harvard edition
pp178-180 and http://www.ucl.ac.uk/~regfjxe/awnew.htm !

Cathy Reason

Oct 29, 2006, 4:30:57 PM
Andrew Brook wrote:

>If what I said is true, it does 'change the basic empirical fact' --
>there is no such fact. That's what the possibility of a double
>dissociation shows.

Indeed that's precisely where I think your formulation falls down, because
any conscious subject whose conscious states took the form of
self-representing representations, and who was capable of thinking
logically, would inevitably end up thinking this way. The possibility of a
double dissociation would arise because of the inherent functional
separation between being conscious, and having beliefs about being
conscious.

Let's assume that you are such a person. You could argue that if it's
possible to be in state of believing that you are conscious when you are not
actually conscious (what one might call a "pseudoconscious" state) then no
conscious subject could ever be absolutely certain they were conscious,
because they could never be certain they weren't merely in a pseudoconscious
state.

Trouble is, that conclusion is just plain wrong. It *is* possible for a
conscious subject to know for certain that they are conscious - for example,
I know that I'm conscious right now, and I can assure you there's absolutely
no possibility of doubt about it ;-) It's just one of those bizarre facts
about consciousness that it has this remarkable property, and it's part of
what makes consciousness so hard to explain and understand.

In fact if this weren't so, there'd be no point in doing consciousness
studies at all. It would be much simpler just to assume we were all making a
mistake and consciousness didn't exist because none of us was ever really
conscious in the first place. Then we could all go back to measuring
membrane potentials or something.

Cathy

Andrew Brook

Oct 29, 2006, 9:46:24 PM
Cathy Reason wrote:

>It *is* possible for a
>conscious subject to know for certain that they are conscious - for example,
>I know that I'm conscious right now, and I can assure you there's absolutely
>no possibility of doubt about it ;-) It's just one of those bizarre facts
>about consciousness that it has this remarkable property, and it's part of
>what makes consciousness so hard to explain and understand.
>
>

I don't know why you say this. Just saying something over and over does
not make it true. What is the argument? I would agree that I am unlikely
to make a mistake here. But I am unlikely to make a mistake about my
name, too. I know both phenomena pretty well. It is, however, not
impossible that I could. Why do you think my name and my being conscious are
different? Is it meant to be self-evident that here I have certainty?

You and I went round this mulberry bush some time ago. Looks like we're
spinning our wheels again.

john limber

Oct 30, 2006, 1:10:51 PM
>But
>strangely, if I go back to watching the tree outside my window, I still have
>certainty about being phenomenally conscious of it. The correct conclusion
>is not generalized skepticism, but the fact that under some circumstances, a
>human being can be certain about being conscious.
What is this "certainty" that some of you seem so sure about? A
state/process of mind? A state/process about a state/process of mind?
["certain about being conscious"]

Having a personal 'attitude' toward "being conscious" seems like fairly weak
evidence for much of anything other than the reported attitude itself. [I
am having certainty...]

John Limber
Durham NH


> From: Alex Gamma <ga...@BLI.UNIZH.CH>
> Reply-To: "PSYCHE Discussion Forum (Theoretical emphasis)"
> <PSYC...@LISTSERV.UH.EDU>
> Date: Mon, 30 Oct 2006 11:13:17 +0100
> To: <PSYC...@LISTSERV.UH.EDU>
> Subject: Re: Representation
>
> Cathy is exactly right. Maybe the insistence on the possibility of double
> dissociation regarding phenomenal consciousness is a sign of the general bad
> reputation introspection has come to have in recent decades. But we need to
> be encouraged to take our subjective experience seriously. The fact is that
> now, as I'm looking out the window at the green and yellow autumn leaves on a
> large tree, I know with absolute certainty that I'm phenomenally conscious.
> If you take your (phenomenal) experience seriously as a *personal*
> experience with certain features, not just as an abstract object of
> theoretical interest, then yes, it is self-evident that you cannot be wrong
> about being phenomenally conscious.
>
> I know that there are patients denying that they're blind while bumping into
> furniture all the time. On the face of it, they are wrong about their
> phenomenal experience. But it's unclear what has really gone wrong in these
> people. What their experience is really like. And what their reasoning is
> like. We can allow that these cases exist, that some kind of dissociation
> between their reports on their phenomenal experience and their experience
> itself exists. From a logical point of view, this seems to lead to exactly
> the kind of conclusion Andrew wants to draw: that although it might seem to
> me impossible to be wrong about my being conscious, I could be. But
> strangely, if I go back to watching the tree outside my window, I still have
> certainty about being phenomenally conscious of it. The correct conclusion
> is not generalized skepticism, but the fact that under some circumstances, a
> human being can be certain about being conscious. That's what needs to be
> explained.
>
> Alex
