Primitive Awareness and Symmetry


Craig Weinberg

Apr 2, 2012, 12:14:51 PM
to Everything List
1. We cannot doubt that we are aware.

2. Our awareness may represent realities which are independent from
our own existence.

3. Our awareness may represent ideas and fantasies which have no
existence independent from our experience of them (and whatever
neurological processes are behind them).

4. Representation can only be accomplished through presentation.

5. A word or a picture has to look like something to us in order to
remind us of something else.

6. Saying that awareness or qualia only represents another process
does not explain why there should be any presentation of that process
in the first place, let alone posit a mechanism by which a physical
process can be represented by something that does not physically
exist.

7. The problem with the mechanistic view is that it relies on the real
existence of awareness and choice to make a case for distrusting
awareness and choice.

A consequence of this logical contradiction is that beginning from
the assumption of mechanism and working backwards almost invariably
blinds us to the presentation of the work that we ourselves are doing
in determining this deterministic opinion. We fool ourselves into
thinking that there is no man even behind our own curtain, and mistake
all authentic, concrete presentations for abstract, symbolic
representations. That does not work for awareness because awareness
itself can only be represented to something which is already aware.

Thus the symbol grounding problem arises when we make the mistake of
assuming first that awareness must follow the rules of the world which
is represented within awareness. Since the experience does not show up
on the radar of materialism, we are forced to accept the absurdities
of ungrounded feeling which emerges somehow without mechanism or
explanation from generic physical changes or computations. We have to
conflate symbol and reality - either by making reality not primitively
real (comp) or by making symbols not really real (physics).

To me, the clear solution to this is not to begin from either the
assumption of idealism or materialism but to examine the relationship
between them. Once we notice that there is really nothing about these
two positions which is not symmetrical, we can move on to the next
step of examining symmetry itself. What I find is that symmetry is a
bootstrap metaphor for metaphor.

Symmetry is what makes sense - literally. How it does this is
understandable. It presents and then re-presents itself. It
demonstrates how significance and order can be expressed through
reflection. It is both mathematical and aesthetic but serves no
purpose in either a comp or physical universe. It is so fundamental
that we miss it entirely - which makes sense since we are part of the
universe rather than objective observers of it.

William R. Buckley

Apr 2, 2012, 1:02:50 PM
to everyth...@googlegroups.com
Craig:

Please explain a little further what you mean by *accomplished through presentation* and in
particular, what you mean by presentation.

Your point number 5 fits clearly within the purview of semiotics.

wrb

 




Craig Weinberg

Apr 2, 2012, 4:08:59 PM
to Everything List
Hi William,

On Apr 2, 1:02 pm, "William R. Buckley" <bill.buck...@gmail.com>
wrote:
> Craig:
>
> Please explain a little further what you mean by *accomplished through
> presentation* and in
> particular, what you mean by presentation.

What I mean by that is that to make something seem like something
else, it has to appear as something experienced in the first place.
The color blue can't be purely a representation of optical/
neurological patterns without there being a presentation of those
patterns (blue) which is different from that which is represented. If
the patterns were already literally blue, there would be no need to
translate them and we would see blue images in the tissues of the
brain. If blue were nothing but a summary of physical patterns, any
presentation would be redundant and we would use purely abstract,
instinctive (unconscious) models.

From blindsight, synesthesia, and anosognosia we know that particular
qualia are not inevitably associated with the conditions they usually
represent for us, so it seems impossible to justify qualia on a
functionalist basis. Just as a computer needs no speakers and video
screen inside itself, there is no purpose for such a presentation
layer within our own mechanism. Of course, even if there were a
purpose, there is no hint of such a possibility from mechanism alone.
Even if there were some reason that a bucket of rocks could benefit by
some kind of collective 'experience' occurring amongst them, that would
still be a million miles from suspecting that such experience was a
conceivable possibility.

Rather than 'consciousness', human beings would benefit evolutionarily
much more from mechanically conceivable abilities like teleportation,
time travel, or breathing fire. Awareness doesn't
even make sense as a possibility. Were we not experiencing it
ourselves we could never anticipate any such possibility in any
universe.

>
> Your point number 5 fits clearly within the purview of semiotics.

My view really is a semiotic view, except that I think semiotics
itself arises out of sense-motive experience. It has to start with a
subject who can receive, interpret, and transmit signs. Signs without
a subject can't be signs...can't really be anything.

Craig

>
> wrb

Stathis Papaioannou

Apr 2, 2012, 8:06:47 PM
to everyth...@googlegroups.com
On Tue, Apr 3, 2012 at 6:08 AM, Craig Weinberg <whats...@gmail.com> wrote:

> From blindsight, synesthesia, and anosognosia we know that particular
> qualia are not inevitably associated with the conditions they usually
> represent for us, so it seems impossible to justify qualia on a
> functionalist basis. Just as a computer needs no speakers and video
> screen inside itself, there is no purpose for such a presentation
> layer within our own mechanism. Of course, even if there were a
> purpose, there is no hint of such a possibility from mechanism alone.
> Even if there were some reason that a bucket of rocks could benefit by
> some kind of collective 'experience' occurring amongst them, that would
> still be a million miles from suspecting that such experience was a
> conceivable possibility.
>
> Rather than 'consciousness', human beings would benefit evolutionarily
> much more from mechanically conceivable abilities like teleportation,
> time travel, or breathing fire. Awareness doesn't
> even make sense as a possibility. Were we not experiencing it
> ourselves we could never anticipate any such possibility in any
> universe.

Since there is no evolutionary advantage to consciousness it must be a
side-effect of the sort of behaviour that conscious organisms display.
Otherwise, why did we not evolve as zombies?


--
Stathis Papaioannou

meekerdb

Apr 2, 2012, 9:02:30 PM
to everyth...@googlegroups.com

I like Julian Jaynes's idea that it is a side-effect of using the same parts of the brain
for cogitation as are used for perception. That would be the kind of thing that evolution
would do: jury-rigged but efficient.

Brent

Craig Weinberg

Apr 2, 2012, 10:20:52 PM
to Everything List
On Apr 2, 8:06 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
Because existence is a subordinate category of awareness and not the
other way around. Evolution is an epiphenomenon of physics, and
physics is the back end of the Totality. The front end is awareness.

To assume that consciousness must be a side-effect of something else
begs the question of the origin of consciousness and arbitrarily
privileges purposeless mechanism from the start. Once you make that
presumption, it follows logically that consciousness must be an
illusion since it can't be explained. The logic isn't bad, it's just
based on initial assumptions that aren't carefully examined. Awareness
transcends logic.

“I regard consciousness as fundamental. I regard matter as derivative
from consciousness. We cannot get behind consciousness. Everything we
talk about, everything that we regard as existing, postulates
consciousness.” - Max Planck, 25 January 1931


Craig

Craig Weinberg

Apr 2, 2012, 10:28:04 PM
to Everything List
On Apr 2, 9:02 pm, meekerdb <meeke...@verizon.net> wrote:

> I like Julian Jaynes idea that it is a side-effect of using the same parts of the brain
> for cogitation as are used for perception.  That would be the kind of thing that evolution
> would do, jury rigged but efficient.

I like what I've read of Jaynes too. The Bicameral Mind helps begin to
model what I call super-signifying ideas in culture (much better than
H.A.D.D., which I hate as an explanation of religion but which works well
for explaining why we want to believe computers can become conscious). I
don't know of anything he wrote, though, that explains why or how
awareness could exist in the first place.

Craig

meekerdb

Apr 2, 2012, 11:29:46 PM
to everyth...@googlegroups.com

Why perception exists is pretty obvious in terms of evolutionary advantage. Even bacteria
perceive chemical gradients. Jaynes's theory shows why thinking should be like perceiving a
voice in your head.

Brent

>
> Craig
>

1Z

Apr 3, 2012, 4:55:58 AM
to Everything List
I have no idea what any of that means.

Bruno Marchal

Apr 3, 2012, 9:44:28 AM
to everyth...@googlegroups.com

Consciousness comes from the conjunction of an (instinctive,
preprogrammed, or better, pre-engrammed) belief in a consistent reality/
god/universe/whatever, and the existence of that reality. The side-
effect comes from the fact that the logic of communicable beliefs is
different from the logic of communicable-and-true beliefs.

Evolution, being driven by locally communicable events, cannot give an
advantage to truth, that's true, but without truth there would be no
communicable events at all. So consciousness has to exist to make
sense of the relative selection, by the universal mind, and of the third
person plural type of reality needed for sharable physical realities.
In that sense, consciousness is not really a side effect, but is what
makes evolution and physical realities selectable by the "universal
mind". Consciousness looks like a side effect, from inside, only in
the Aristotelian picture. With comp, and its platonist consequences,
we might as well say that matter and evolution are a side effect of
consciousness. Without consciousness the notion of physical reality
would lose its meaning, given that the physical reality can only
result from the shared dreams lived by the universal mind's multiple
instantiations.
And consciousness can be associated with a range of behavior, but is
not equal to any behavior. It is of the type of knowledge, and is a
fixed point of self-doubting (as in Descartes). It is universal and
exists, with comp, right at the "start" of arithmetical truth. It does
not need to be selected, for it exists at the start, and eventually is
the one responsible for all possible observer selections.

The point here is difficult and subtle, and I am just trying to convey
it. It takes into account the universal mind, as David pointed out
recently, which I have come to endorse through thought experiments with
amnesia (or some reports of real experiences with certain drugs) and the
complete UDA reversal.


> Otherwise, why did we not evolve as zombies?

OK.

Bruno

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Apr 3, 2012, 12:32:59 PM
to Everything List
On Apr 2, 11:29 pm, meekerdb <meeke...@verizon.net> wrote:
> On 4/2/2012 7:28 PM, Craig Weinberg wrote:
>
> >> I like Julian Jaynes idea that it is a side-effect of using the same parts of the brain
> >> for cogitation as are used for perception.  That would be the kind of thing that evolution
> >> would do, jury rigged but efficient.
> > I like what I've read of Jaynes too. The Bicameral Mind helps begin to
> > model what I call super-signifying ideas in culture (much better than
> > H.A.D.D., which I hate for explaining religion but works well for
> > explaining why we want to believe computers can become conscious). I
> > don't know of anything he wrote about though that explains why or how
> > awareness could exist in the first place.
>
> Why perception exists is pretty obvious in terms of evolutionary advantage.

Why? The same evolutionary advantage would be conferred through
unconscious computation. Blindsight shows that perceptual function
does not necessarily rely on conscious presentation. If we had no
perception ourselves, we could never guess that such a phenomenon could
exist or that it could improve survival in any way, any more than it
would improve a neuron's odds of successfully signalling to another if
it played the theme song from Hawaii Five-0 to itself every time it
depolarized.

> Even bacteria
> perceive chemical gradients.  Jaynes theory shows why thinking should be like perceiving a
> voice in your head.

Yes, I agree, they do perceive chemical gradients, but not because it
helps them survive. Everything that they do to survive could be
accomplished unconsciously and mechanically. From a functionalist
perspective, perception can only be purely ornamental gravy. In
reality, I think that it's the survival-and-existence part that is the
gravy, perception is the essential meat and potatoes.

Craig

Evgenii Rudnyi

Apr 3, 2012, 3:56:01 PM
to everyth...@googlegroups.com
On 03.04.2012 02:06 Stathis Papaioannou said the following:

The evolutionary advantage of consciousness, according to Jeffrey Gray,
is late-error detection.

Evgenii

Evgenii Rudnyi

Apr 3, 2012, 4:02:06 PM
to everyth...@googlegroups.com
On 03.04.2012 05:29 meekerdb said the following:

It depends on how you define perception. If a perception is
supposed to be a conscious experience, then bacteria do not perceive
chemical gradients, but rather sense them. If, however, you define
perceive and sense as equivalent terms, then even a ballcock perceives the
level of water.

"Bacteria can perceive" is typical for biologists; see my small comment
on this:

http://blog.rudnyi.ru/2011/01/perception-feedback-and-qualia.html

Evgenii

Craig Weinberg

Apr 3, 2012, 4:38:41 PM
to Everything List
Why would a device need to be conscious in order to have late-error
detection?

As for ballcocks and electronic sensors, the difference is that
they don't assemble themselves. We use their native capacities for
purposes that plastic and metal have no way of accessing. The ballcock
is only a thing in our world, it doesn't have any world of its own. I
think that the molecules that make up the materials have their own
world, but it's not likely to be anything like what we could imagine.
Maybe all molecules have a collective experience on that microcosmic
level, where snapshots of momentary awareness corresponding to change
string together centuries of relative inactivity.

It is not the fact that matter detects and responds to itself that is
in question, it is the presentation of an interior realism which
cannot be explained in a mechanistic context.

Craig

Bruno Marchal

Apr 4, 2012, 3:31:28 AM
to everyth...@googlegroups.com

I agree. People confuse consciousness-the-qualia, and consciousness-
the-integrating function. Stathis was talking about the qualia.
Evolution can press only on the function, a priori.

>
> As far as ballcocks and electronic sensors, the difference is that
> they don't assemble themselves. We use their native capacities for
> purposes that plastic and metal has no way of accessing. The ballcock
> is only a thing in our world, it doesn't have any world of its own. I
> think that the molecules that make up the materials have their own
> world, but it's not likely to be anything like what we could imagine.
> Maybe all molecules have a collective experience on that microcosmic
> level, where snapshots of momentary awareness corresponding to change
> string together centuries of relative inactivity.
>
> It is not the fact that matter detects and responds to itself that is
> in question, it is the presentation of an interior realism which
> cannot be explained in a mechanistic context.

This is begging the question. And I would say that mechanism explains
well the interior realism, up to the qualia itself which can be
explained only in the negative. It is that thing that the machine
"feels correctly" to be non-functional and that makes the machine think
at first, "non-correctly", that she is not a machine. It is not correct
from the 3-view, but still correct from the machine's first person view.
If the 3-I is a machine, the 1-I cannot feel like a machine.
As Minsky pointed out, machines will be as befuddled as us about the
mind-body problem. But comp can explain this "befuddling" at the meta-
level, completely. The machines too. In a sense, the first person and
consciousness are not a machine, with the mechanist hypothesis.

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Apr 4, 2012, 1:45:17 PM
to Everything List
On Apr 4, 3:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:

>
> > It is not the fact that matter detects and responds to itself that is
> > in question, it is the presentation of an interior realism which
> > cannot be explained in a mechanistic context.
>
> This is begging the question. And I would say that mechanism explains
> well the interior realism, up to the qualia itself

I don't see that there can be any interior realism without qualia -
they are the same thing. Mechanism assumes that because we can't
explain the existence of qualia mechanistically, it must be an
emergent property/illusion of mechanism. If we instead see that
mechanism is a particular kind of lowest common denominator exterior
qualia, then it would be silly to try to explain the parent
phenomenology in terms of the child set of reduced possibilities.

> which can be
> explained only in the negative. It is that thing that the machine
> "feels correctly" to be non-functional and that makes the machine think
> at first, "non-correctly", that she is not a machine. It is not correct
> from the 3-view, but still correct from the machine's first person view.
> If the 3-I is a machine, the 1-I cannot feel like a machine.
> As Minsky pointed out, machines will be as befuddled as us about the
> mind-body problem. But comp can explain this "befuddling" at the meta-
> level, completely. The machines too. In a sense, the first person and
> consciousness are not a machine, with the mechanist hypothesis.

Mechanism is always going to implicate mechanism as the cause of
anything, because it has no capacity to describe anything else and no
capacity to extend beyond descriptions. Consciousness is a
much larger phenomenon, as it includes all of mechanism as well as
many more flavors of experience. Only through direct experience can we
know that it is possible that there is a difference between
description and reality.

Through the monochrome lens of mechanism, it is easy to prove that
audiences will think they see something other than black and white
pixels because we understand that they are seeing fluid patterns of
changing pixels rather than the pixels themselves, but this doesn't
explain how we see color. The idea that a machine would logically not
think of itself as a machine doesn't explain the existence of what it
feels like to be the opposite of a machine or how it could really feel
like anything.

Craig

Evgenii Rudnyi

Apr 4, 2012, 2:58:06 PM
to everyth...@googlegroups.com
The term late error detection as such could indeed be employed without
consciousness. Yet Jeffrey Gray gives it a special meaning that I will
try to describe briefly below.

Jeffrey Gray in his book speaks about conscious experience, that is,
exactly about qualia. Self, mind, and intellect as such are not there.

He first tried hard to put conscious experience within the framework of
normal science (I guess that he means physicalism here), but then he
shows that conscious experience cannot be explained by the theories
within normal science (functionalism, neural correlates of
consciousness, etc.).

According to him, conscious experience is some multipurpose display. It
remains to be found how Nature produces it, but at the moment this is
not that important.

He considers an organism from a cybernetic viewpoint, as a bunch of
feedback mechanisms (servomechanisms). For a servomechanism it is
necessary to set a goal and then to have a comparator that compares the
goal with reality. This might function okay at the unconscious level,
but conscious experience binds everything together in its display. This
binding happens not only between different senses (multimodal binding)
but also within a single sense (intramodal binding). For example, we
consciously experience a red kite as a whole, although in the brain
lines, colors, and surfaces are processed independently. Yet we cannot
consciously experience a red kite other than as a whole; just try it.

Hence the conscious display gives a new opportunity to compare
expectations with reality, and Jeffrey Gray refers to this as late error
detection. That is, there is a bunch of servomechanisms running on
their own, but conscious experience then allows the brain to synchronize
everything together. This is a clear advantage from the evolutionary viewpoint.
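
To make the servomechanism picture concrete, here is a minimal sketch
in Python (my own illustration, not Gray's formalism; all names,
numbers, and thresholds are invented):

def servo_step(goal, state, gain=0.5):
    # One unconscious feedback step: a comparator measures the error
    # between goal and reality and corrects the state locally.
    return state + gain * (goal - state)

def late_error(expected, perceived, tolerance=0.05):
    # The 'conscious display' comparison: the bound, integrated
    # percept is checked against expectation after the fact.
    return abs(expected - perceived) > tolerance

# Several servomechanisms run on their own...
goals = [1.0, 1.0, 1.0]
states = [0.0, 0.4, 0.9]
states = [servo_step(g, s) for g, s in zip(goals, states)]

# ...and then a single integrated display permits a late comparison.
expected = sum(goals) / len(goals)
perceived = sum(states) / len(states)
if late_error(expected, perceived):
    print("late error detected: retune the servomechanisms")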

Evgenii

On 04.04.2012 09:31 Bruno Marchal said the following:


>
> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:
>
>> On Apr 3, 3:56 pm, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>>> On 03.04.2012 02:06 Stathis Papaioannou said the following:

...

>>>> Since there is no evolutionary advantage to consciousness it must be a
>>>> side-effect of the sort of behaviour that conscious organisms display.
>>>> Otherwise, why did we not evolve as zombies?
>>>
>>> The evolutionary advantage of consciousness, according to Jeffrey Gray,
>>> is late-error detection.
>>
>> Why would a device need to be conscious in order to have late-error
>> detection?
>
> I agree. People confuse consciousness-the-qualia, and

> consciousness-the-integrating function. Stathis was talking about the

Bruno Marchal

Apr 4, 2012, 3:01:56 PM
to everyth...@googlegroups.com

On 04 Apr 2012, at 19:45, Craig Weinberg wrote:

> On Apr 4, 3:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:
>
>>
>>> It is not the fact that matter detects and responds to itself that
>>> is
>>> in question, it is the presentation of an interior realism which
>>> cannot be explained in a mechanistic context.
>>
>> This is begging the question. And I would say that mechanism explains
>> well the interior realism, up to the qualia itself
>
> I don't see that there can be any interior realism without qualia -
> they are the same thing.

I agree with this.


> Mechanism assumes that because we can't
> explain the existence of qualia mechanistically, it must be an
> emergent property/illusion of mechanism.

It explains the existence of qualia, including some possible geometry
of them. It fails to explain only some aspect of qualia, but it meta-
explains why it cannot explain those aspects. The internal realism has
a necessary blind spot somehow.

> If we instead see that
> mechanism is a particular kind of lowest common denominator exterior
> qualia,
> then it would be silly to try to explain the parent
> phenomenology in terms of the child set of reduced possibilities.

?


>
>> which can be
>> explained only in the negative. It is that thing that the machine
>> "feels correctly" to be non-functional and that makes the machine think
>> at first, "non-correctly", that she is not a machine. It is not correct
>> from the 3-view, but still correct from the machine's first person view.
>> If the 3-I is a machine, the 1-I cannot feel like a machine.
>> As Minsky pointed out, machines will be as befuddled as us about the
>> mind-body problem. But comp can explain this "befuddling" at the meta-
>> level, completely. The machines too. In a sense, the first person and
>> consciousness are not a machine, with the mechanist hypothesis.
>
> Mechanism is always going to implicate mechanism as the cause of
> anything, because it has no capacity to describe anything else and no
> capacity to extend beyond descriptions.

Yes it has. Once a machine is Löbian it can see its limitations and
overcome them. This leads to many paths.


> Consciousness is a
> much larger phenomenon, as it includes all of mechanism as well as
> many more flavors of experience.

It is fuzzy. I can agree and disagree depending on how you circumscribe
the meaning of the terms you are using.

> Only through direct experience can we
> know that it is possible that there is a difference between
> description and reality.

Yes. But we cannot know reality as such, except for the conscious non
communicable parts. So, when we talk with each other, we can only make
hypothesis and reasoning.


>
> Through the monochrome lens of mechanism, it is easy to prove that
> audiences will think they see something other than black and white
> pixels because we understand that they are seeing fluid patterns of
> changing pixels rather than the pixels themselves, but this doesn't
> explain how we see color. The idea that a machine would logically not
> think of itself as a machine doesn't explain the existence of what it
> feels like to be the opposite of a machine or how it could really feel
> like anything.

But mechanism is not proposed as an explanation. It is more a "law"
that we exploit to clarify the problems. You can see it as a strong
assumption/belief given that it is a belief in possible
reincarnations. Comp is refutable. Non-comp is not refutable.

Bruno

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Apr 4, 2012, 7:43:56 PM
to Everything List
If an evolutionary advantage were conferred by synchronization and
binding of data, why not just synchronize and bind the data
quantitatively? Parallel processing, compression, etc. Where would the
possibility of experienced qualities come in?

Craig

Stathis Papaioannou

Apr 4, 2012, 7:59:45 PM
to everyth...@googlegroups.com
On Wed, Apr 4, 2012 at 5:56 AM, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
> On 03.04.2012 02:06 Stathis Papaioannou said the following:

>> Since there is no evolutionary advantage to consciousness it must be a


>> side-effect of the sort of behaviour that conscious organisms display.
>> Otherwise, why did we not evolve as zombies?
>>
>
> The evolutionary advantage of consciousness, according to Jeffrey Gray, is
> late-error detection.

But the late-error detection processing could be done in the same way
by a philosophical zombie. Since, by definition, a philosophical
zombie's behaviour is indistinguishable from that of a conscious being,
there is no way that nature could favour a conscious being over the
equivalent philosophical zombie. You then have two options to explain
why we are not zombies:

(a) It is impossible to make a philosophical zombie as consciousness
is just a side-effect of intelligent behaviour;
(b) It is possible to make a philosophical zombie but the mechanism
for intelligent behaviour that nature chanced upon has the side-effect
of consciousness.

Though (b) is possible I don't think it's plausible.


--
Stathis Papaioannou

Craig Weinberg

Apr 4, 2012, 8:16:15 PM
to Everything List
On Apr 4, 3:01 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 04 Apr 2012, at 19:45, Craig Weinberg wrote:
>
> > On Apr 4, 3:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:
>
> >>> It is not the fact that matter detects and responds to itself that
> >>> is
> >>> in question, it is the presentation of an interior realism which
> >>> cannot be explained in a mechanistic context.
>
> >> This is begging the question. And I would say that mechanism explains
> >> well the interior realism, up to the qualia itself
>
> > I don't see that there can be any interior realism without qualia -
> > they are the same thing.
>
> I agree with this.
>
> > Mechanism assumes that because we can't
> > explain the existence of qualia mechanistically, it must be an
> > emergent property/illusion of mechanism.
>
> It explains the existence of qualia, including some possible geometry
> of them. It fails to explain only some aspect of qualia, but it meta-
> explains why it cannot explain those aspects. The internal realism has
> a necessary blind spot somehow.

A blind spot is what I would expect when trying to explain a parent
phenomenon from a child perspective.

>
> > If we instead see that
> > mechanism is a particular kind of lowest common denominator exterior
> > qualia,
> > then it would be silly to try to explain the parent
> > phenomenology in terms of the child set of reduced possibilities.
>
> ?

Arithmetic is a kind of qualia. It is a particular kind - a low common
denominator of qualia, just as black and white could be said to be
kinds of color (the most colorless kinds) but colors are not reducible
to black and white.

> >> which can be
> >> explained only in the negative. It is that thing that the machine
> >> "feels correctly" to be non-functional and that makes the machine think
> >> at first, "non-correctly", that she is not a machine. It is not correct
> >> from the 3-view, but still correct from the machine's first person view.
> >> If the 3-I is a machine, the 1-I cannot feel like a machine.
> >> As Minsky pointed out, machines will be as befuddled as us about the
> >> mind-body problem. But comp can explain this "befuddling" at the meta-
> >> level, completely. The machines too. In a sense, the first person and
> >> consciousness are not a machine, with the mechanist hypothesis.
>
> > Mechanism is always going to implicate mechanism as the cause of
> > anything, because it has no capacity to describe anything else and no
> > capacity to extend beyond descriptions.
>
> Yes it has. Once a machine is Löbian it can see its limitations and
> overcome them. This leads to many paths.

Only when those limitations can be described arithmetically. It leads
to many paths but they are all descriptions rather than experiences.
At what point can a Löbian machine see that it can't taste or smell?

>
> > Consciousness is a
> > much larger phenomenon, as it includes all of mechanism as well as
> > many more flavors of experience.
>
> It is fuzzy. I can agree and disagree depending how you circumscribe
> the meaning of the terms you are using.
>
> > Only through direct experience can we
> > know that it is possible that there is a difference between
> > description and reality.
>
> Yes. But we cannot know reality as such, except for the conscious non
> communicable parts. So, when we talk with each other, we can only make
> hypothesis and reasoning.

Hypothesis and reasoning are all that we need, since we are already
experiencing the non communicable parts ourselves directly.

>
>
>
> > Through the monochrome lens of mechanism, it is easy to prove that
> > audiences will think they see something other than black and white
> > pixels because we understand that they are seeing fluid patterns of
> > changing pixels rather than the pixels themselves, but this doesn't
> > explain how we see color. The idea that a machine would logically not
> > think of itself as a machine doesn't explain the existence of what it
> > feels like to be the opposite of a machine or how it could really feel
> > like anything.
>
> But mechanism is not proposed as an explanation. It is more a "law"
> that we exploit to clarify the problems. You can see it as a strong
> assumption/belief given that it is a belief in possible
> reincarnations. Comp is refutable. Non-comp is not refutable.

Comp's refutability is an illusion, since the possibility of something
being refutable is itself a computation. Refuting comp through a
computation is like saying that running things over with a steamroller
is a test of whether or not they are flat.

Craig

Evgenii Rudnyi

Apr 5, 2012, 12:37:00 PM
to everyth...@googlegroups.com
On 05.04.2012 01:59 Stathis Papaioannou said the following:

Jeffrey Gray considers consciousness from a viewpoint of empirical
studies. Philosophical zombies so far exist only in the minds of crazy
philosophers, so I am not sure if this is relevant.

As I have written, conscious experience offers the brain unique
capabilities for tuning all its running servomechanisms, capabilities
that it otherwise would not have. This is what neuroscience says. When
neuroscience finds zombies, then it will be possible to consider this
hypothesis as well.

Clearly one can imagine that he/she is not a zombie and others are
zombies. But then he/she must convince others that they are zombies.

Evgenii

Evgenii Rudnyi

Apr 5, 2012, 12:41:39 PM
to everyth...@googlegroups.com
On 05.04.2012 01:43 Craig Weinberg said the following:

We do not know what kind of computing the brain does. It might well be that
at the level of neural nets it was simpler to create a conscious display
than to employ other means. On the other hand, robotics has yet to
prove that it can reach the behavioral level of, for example, mammals.
This has not been done yet. One cannot exclude that progress here
will be achieved only when people find the trick by which a brain creates
conscious experience.

Evgenii


meekerdb

Apr 5, 2012, 2:07:09 PM
to everyth...@googlegroups.com
On 4/4/2012 11:58 AM, Evgenii Rudnyi wrote:
> The term late error detection as such could be employed without consciousness indeed.
> Yet, Jeffrey Gray gives it some special meaning that I will try to describe briefly below.
>
> Jeffrey Gray in his book speaks about conscious experience, that is, exactly about
> qualia. Self, mind, and intellect as such are not there.
>
> He first tried hard to put conscious experience in the framework of the normal
> science (I guess that he means here physicalism) but then he shows that conscious
> experience cannot be explained by the theories within a normal science (functionalism,
> neural correlates of consciousness, etc.).
>
> According to him, conscious experience is some multipurpose display. It is necessary yet
> to find how Nature produces it but at the moment this is not that important.

Display to whom? The homunculus?

>
> He considers an organism from a cybernetic viewpoint, as a bunch of feedback mechanisms
> (servomechanisms). For a servomechanism it is necessary to set a goal and then to have a
> comparator that compares the goal with the reality. It might function okay at the
> unconscious level but conscious experience binds everything together in its display.

But why is the binding together conscious?

> This binding happens not only between different senses (multimodal binding) but also
> within a single sense (intramodal binding). For example we consciously experience a red
> kite as a whole, although in the brain lines, colors, surfaces are processed
> independently. Yet we cannot consciously experience a red kite not as a whole, just try it.

Actually I can. It takes some practice, but if, for example, you are a painter you learn
to see things as separate patches of color. As an engineer I can see a kite as structural
and aerodynamic elements.

>
> Hence the conscious display gives a new opportunity to compare expectations with reality
> and Jeffrey Gray refers to it as late error detection.

But none of that explains why it is necessarily conscious. Is he contending that any
comparison of expectations with reality instantiates consciousness? So if a Mars Rover
uses some predictive program about what's over the hill and then later compares that with
what is actually over the hill, it will be conscious?

> That is, there is a bunch of servomechanisms that are running on their own but then
> conscious experience allows brain to synchronize everything together. This is a clear
> advantage from the Evolution viewpoint.

It's easy to say consciousness does this and that, and to argue that since these things are
evolutionarily useful, that's why consciousness developed. But what is needed is to say
why doing this and that, rather than something else, instantiates consciousness.

It seems that Gray is following my idea that the question of qualia, Chalmers's 'hard
problem', will simply be bypassed. We will learn how to make robots that act conscious
and we will just say consciousness is an operational attribute.

Brent

>
> Evgenii

meekerdb

Apr 5, 2012, 2:10:55 PM
to everyth...@googlegroups.com

But what constitutes 'a conscious display'? Display implies someone to whom it is displayed.

> than to employ other means. On the other hand, the robotics has yet to prove that they
> can reach the behavioral level of for example mammals. This has not been done yet. One
> cannot exclude that the progress here will be achieved only when people will find a
> trick how a brain creates conscious experience.

I think they will solve the problem of producing intelligent behavior and just assume they
have created conscious experience.

Brent

>
> Evgenii
>

David Nyman

Apr 5, 2012, 2:39:59 PM
to everyth...@googlegroups.com
On 5 April 2012 17:37, Evgenii Rudnyi <use...@rudnyi.ru> wrote:

>> (a) It is impossible to make a philosophical zombie as consciousness
>> is just a side-effect of intelligent behaviour;
>> (b) It is possible to make a philosophical zombie but the mechanism
>> for intelligent behaviour that nature chanced upon has the side-effect
>> of consciousness.
>>
>> Though (b) is possible I don't think it's plausible.
>>
>
> Jeffrey Gray considers consciousness from a viewpoint of empirical studies.
> Philosophical zombies so far exist only in the minds of crazy philosophers,
> so I am not sure if this is relevant.

I've always thought that the parable of the philosophical zombie was
nothing more than a way of dramatising the fact that fundamental
physical theory explicitly abjures any appeal to consciousness in
pursuit of its explanatory goals. All such theories are built on the
assumption (which I for one am in no position to dispute) that a
complete physical account of human behaviour could be given
without reference to any putative conscious states.

The zombie metaphor isn't intended as a challenge to how things
actually are, but rather to pump our intuition of explanatory gaps in
our theories of how things are. Hence, even if either option
a) or b) were true, it would still seem unsatisfactory that
neither conclusion is forced by any existing physical theory, given
the unavoidable observational truth of consciousness.

David

Evgenii Rudnyi

Apr 5, 2012, 2:49:27 PM
to everyth...@googlegroups.com
On 05.04.2012 20:07 meekerdb said the following:

> On 4/4/2012 11:58 AM, Evgenii Rudnyi wrote:
>> The term late error detection as such could be employed without
>> consciousness indeed. Yet, Jeffrey Gray gives it some special meaning
>> that I will try to describe briefly below.
>>
>> Jeffrey Gray in his book speaks about conscious experience, that is,
>> exactly about qualia. Self, mind, and intellect as such are not there.
>>
>> He first tried hard to put conscious experience in the framework
>> of the normal science (I guess that he means here physicalism) but
>> then he shows that conscious experience cannot be explained by the
>> theories within a normal science (functionalism, neural correlates of
>> consciousness, etc.).
>>
>> According to him, conscious experience is some multipurpose display.
>> It is necessary yet to find how Nature produces it but at the moment
>> this is not that important.
>
> Display to whom? The homunculus?

No, he creates an interesting scheme to escape the homunculus:

p. 110. "(1) the unconscious brain constructs a display in a medium,
that of conscious perception, fundamentally different from its usual
medium of electrochemical activity in and between nerve cells;

(2) it inspects the conscious constructed display;

(3) it uses the results of the display to change the working of its
usual electrochemical medium."

Hence the unconscious brain does the job. I should say that this does
not answer my personal inquiry into how I perceive a three-dimensional
world, but that is another problem. In his book, Jeffrey Gray offers
quite a plausible scheme.
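
Read as a control loop (my own rough rendering, not Gray's model; the
functions and numbers below are invented purely for illustration), the
three steps might look like this in Python:

def construct_display(activity):
    # (1) the unconscious machinery builds one integrated 'display'
    # out of its raw activity (here reduced to a single bound value)
    return sum(activity) / len(activity)

def inspect_display(display, expectation=0.5):
    # (2) the display is inspected and compared with an expectation
    return display - expectation

def rework(activity, error, gain=0.1):
    # (3) the result feeds back into the usual processing medium
    return [a - gain * error for a in activity]

activity = [0.2, 0.6, 0.9]
for _ in range(3):
    error = inspect_display(construct_display(activity))
    activity = rework(activity, error)

The point of the sketch is only that no homunculus is required: the
same machinery that constructs the display also consumes it.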

>>
>> He considers an organism from a cybernetic viewpoint, as a bunch of
>> feedback mechanisms (servomechanisms). For a servomechanism it is
>> necessary to set a goal and then to have a comparator that compares
>> the goal with the reality. It might function okay at the unconscious
>> level but conscious experience binds everything together in its display.
>
> But why is the binding together conscious?

There is no answer to this question yet. This is just his hypothesis
based on experimental research. In a way, this is a description of
experiments. The question 'why' requires a theory, and that is not there yet.

>> This binding happens not only between different senses (multimodal
>> binding) but also within a single sense (intramodal binding). For
>> example we consciously experience a red kite as a whole, although in
>> the brain lines, colors, surfaces are processed independently. Yet we
>> cannot consciously experience a red kite not as a whole, just try it.
>
> Actually I can. It takes some practice, but if, for example, you are a
> painter you learn to see things as separate patches of color. As an
> engineer I can see a kite as structural and aerodynamic elements.

If you indeed visually experience this, it might be good to take an MRI
test to see the difference from others. This way you would help to
develop the theory of consciousness.

I understand what you say, and I can imagine a kite as a bunch of masses,
springs, and dampers, but I cannot visually experience this when I observe
the kite. I can visually experience this only when I draw it on paper.

>>
>> Hence the conscious display gives a new opportunity to compare
>> expectations with reality and Jeffrey Gray refers to it as late error
>> detection.
>
> But none of that explains why it is necessarily conscious. Is he
> contending that any comparisons of expectations with reality
> instantiates consciousness? So if a Mars Rover uses some predictive
> program about what's over the hill and then later compares that with
> what is over the hill it will be conscious?

He just describes experimental results. He has conscious experience, he
has a brain, MRI shows activity in the brain; then another person in
similar circumstances shows similar activity in the brain and states
that he has conscious experience. Hence it is logical to suppose that
the brain produces conscious experience.

There is no discussion in his book of whether this is necessarily
conscious. There are no experimental results to discuss that. As for
the Mars Rover, in his book there is a statement that ascribing
consciousness to robots is not grounded scientifically. There are no
experimental results in this respect to discuss.

>> That is, there is a bunch of servomechanisms that are running on their
>> own but then conscious experience allows brain to synchronize
>> everything together. This is a clear advantage from the Evolution
>> viewpoint.
>
> It's easy to say consciousness does this and that and to argue that
> since these things are evolutionarily useful that's why consciousness
> developed. But what is needed is saying why doing this and that rather
> than something else instantiates consciousness.

This remains the Hard Problem. There is no solution to it in the book.

> It seems that Gray is following my idea that the question of qualia,
> Chalmers's 'hard problem', will simply be bypassed. We will learn how to
> make robots that act conscious and we will just say consciousness is
> just an operational attribute.

No, his statement is that this phenomenon does not fit within normal
science. He considers current theories of consciousness, including
epiphenomenalism, functionalism, and the neural correlates of consciousness,
and his conclusion is that these theories cannot describe the observations;
that is, the Hard Problem remains.

Evgenii

> Brent
>
>>
>> Evgenii
>

Evgenii Rudnyi

Apr 5, 2012, 2:50:21 PM
to everyth...@googlegroups.com
On 05.04.2012 20:10 meekerdb said the following:

It is hard to predict what happens. Let us see.

Evgenii

Evgenii Rudnyi

Apr 5, 2012, 2:56:41 PM
to everyth...@googlegroups.com
On 05.04.2012 20:39 David Nyman said the following:

> On 5 April 2012 17:37, Evgenii Rudnyi<use...@rudnyi.ru> wrote:
>
>>> (a) It is impossible to make a philosophical zombie as consciousness
>>> is just a side-effect of intelligent behaviour;
>>> (b) It is possible to make a philosophical zombie but the mechanism
>>> for intelligent behaviour that nature chanced upon has the side-effect
>>> of consciousness.
>>>
>>> Though (b) is possible I don't think it's plausible.
>>>
>>
>> Jeffrey Gray considers consciousness from a viewpoint of empirical studies.
>> Philosophical zombies so far exist only in the minds of crazy philosophers,
>> so I am not sure if this is relevant.
>
> I've always thought that the parable of the philosophical zombie was
> nothing more than a way of dramatising the fact that fundamental
> physical theory explicitly abjures any appeal to consciousness in
> pursuit of its explanatory goals. All such theories are built on the
> assumption (which I for one am in no position to dispute) that a
> complete physical account of human behaviour could be given
> without reference to any putative conscious states.
>
> The zombie metaphor isn't intended as a challenge to how things
> actually are, but rather to pump our intuition of explanatory gaps in
> our theories of how things are. Hence, even if either option
> a) or b) were true, it would still seem unsatisfactory that
> neither conclusion is forced by any existing physical theory, given
> the unavoidable observational truth of consciousness.
>
> David

In this sense, his conclusion is in agreement with the philosophers. In his
book, Jeffrey Gray shows that the "consciousness display" cannot be
explained by current science. According to him, a new science is
required.

Yet this does not change his hypothesis about why a "consciousness
display" could be advantageous for evolution. We do not know what it is,
but if it is there, it certainly can help to organize the servomechanisms
in the body.

Evgenii

meekerdb

Apr 5, 2012, 3:38:34 PM
to everyth...@googlegroups.com

But 'conscious display' is just putting another name on what he purports to explain.
Unless Gray can point to specific brain structures and processes and explain why those
structures and processes make consciousness and others don't, he has done nothing but put
new words on "consciousness". Science needs *operational* definitions. Conversely, if he
can specify the structures and processes, then we can instantiate those in a robot and see
if the robot acts as if it were conscious. I think that will be the experimental test of
a theory of consciousness. If we can manipulate consciousness by physical/chemical
manipulation of the brain, that will be evidence that we know what consciousness is. Notice
that in the physical sciences we don't go around saying, "Yes, I know how gravity works and
I can predict its effects and write equations for it, but what IS it?"

Brent

David Nyman

Apr 5, 2012, 3:39:16 PM
to everyth...@googlegroups.com
On 5 April 2012 19:56, Evgenii Rudnyi <use...@rudnyi.ru> wrote:

> Yet, this does not change his hypothesis about why "consciousness display"
> could be advantageous for evolution. We do not know what it is, but if it is
> there, it certainly can help to organize servomechanisms in the body.

Sure, if it is there, it could indeed be advantageous, if not
indispensable. But such notions of course do not avoid the Hard
Problem. Many independent considerations converge to suggest that -
as it bears on macroscopic physical evolution - consciousness in the
Hard sense will always be externally indistinguishable from
sufficiently intelligent behaviour, as Brent argues. The problem with
"display" ideas about consciousness (compare, for example, Johnjoe
McFadden's EM theory) is that they must, in the end, be fully
justified in impersonal terms, and hence once again appeals to the
additional hypothesis of consciousness, at the relevant level of
description, will be redundant.

I confess this smells to me like the wrong sort of theory. On the
other hand, if comp is true the story can be somewhat more subtle.
Comp + consciousness (the "internal view" of arithmetical truth)
implies an infinity of possible histories, in which natural selection,
of features advantageous to macroscopic entities inhabiting a
macroscopic environment, is a particularly consistent strand. It also
entails parallel strands of "evolutionary history" - i.e. at the level
of wave function - which need make no reference to any such macro
features but nonetheless imply the same gross distributions of matter.
But such a schema does entail a "causal" role for consciousness, as
the unique integrator of discontinuous subjective perspectives, but at
a very different logical level than that of "physical causation" (i.e.
the reductive structural relation between states).

David

meekerdb

Apr 5, 2012, 3:44:30 PM
to everyth...@googlegroups.com
On 4/5/2012 11:49 AM, Evgenii Rudnyi wrote:
>>
>> Display to whom? the homunculus?
>
> No, he creates an interesting scheme to escape the homunculus:
>
> p. 110. "(1) the unconscious brain constructs a display in a medium, that of conscious
> perception, fundamentally different from its usual medium of electrochemical activity in
> and between nerve cells;

Is it a physical medium, made of quarks and electrons? Is it immaterial soul stuff?
Or is it just a placeholder name for a gap in the theory?

>
> (2) it inspects the conscious constructed display;

Is the display conscious, or the 'it' that's doing the inspection?

>
> (3) it uses the results of the display to change the working of its usual
> electrochemical medium."

Sounds like a soul or homunculus to me.

>
> Hence the unconscious brain does the job.

But the display is denoted 'conscious'? Is it not part of the brain?

> I should say that this does not answer my personal inquiry on how I perceive a three
> dimensional world, but this is another problem. In his book, Jeffrey Gray offers quite a
> plausible scheme.

Doesn't sound any more plausible than a conscious spirit.

Brent

meekerdb

Apr 5, 2012, 3:58:23 PM
to everyth...@googlegroups.com
On 4/5/2012 12:39 PM, David Nyman wrote:
> I confess this smells to me like the wrong sort of theory. On the
> other hand, if comp is true the story can be somewhat more subtle.
> Comp + consciousness (the "internal view" of arithmetical truth)
> implies an infinity of possible histories, in which natural selection,
> of features advantageous to macroscopic entities inhabiting a
> macroscopic environment, is a particularly consistent strand.

I think that's the story even if comp is false.

> It also
> entails parallel strands of "evolutionary history" - i.e. at the level
> of wave function - which need make no reference to any such macro
> features but nonetheless imply the same gross distributions of matter.

Are you contemplating consciousness as a kind of equivalence relation that picks out the
different branches of Everett's MWI, i.e. solves the basis problem of decoherence? That
would seem to make every quasi-classical object conscious.

> But such a schema does entail a "causal" role for consciousness, as
> the unique integrator of discontinuous subjective perspectives,

To refer to 'subjective' perspectives seems to already assume consciousness.

Brent

Evgenii Rudnyi

Apr 5, 2012, 4:38:31 PM
to everyth...@googlegroups.com
On 05.04.2012 21:38 meekerdb said the following:

Science starts with research on a phenomenon. If we speak about a
theory of consciousness, then we are presumably close to the level at
which the ancient Greeks would have tried to develop a theory of
electricity. Yet the phenomenon, for example lightning, was already
there, and it was possible to describe it even then.

'Conscious display' is a metaphor or, if you like, a placeholder. We
cannot explain right now how the brain produces consciousness, and this
is Gray's point. Yet this does not mean that the phenomenon is not there.
We just cannot explain it. In this respect, Gray's book is a very good
example of empirical science; the theory of consciousness, however, is
not there.

Evgenii

Evgenii Rudnyi

Apr 5, 2012, 4:43:22 PM
to everyth...@googlegroups.com
On 05.04.2012 21:39 David Nyman said the following:

Gray's book is not a theory of consciousness; it is rather empirical
research whose outcome is that modern science cannot explain the
observations made in that research. Gray also confesses that

"There are no behavioral tests by which we can distinguish whether a
computer, a robot or a Martian possesses qualia."

At the same time, he shows how to bring consciousness into the lab:

"These experiments demonstrate yet again, by the way, that the 'privacy'
of conscious experience offers no barrier to good science. Synaesthetes
claim a form of experience that is, from the point of view of most
people, idiosyncratic in the extreme. Yet it can be successfully brought
into the laboratory."

Evgenii

David Nyman

Apr 5, 2012, 4:45:01 PM4/5/12