> From blindsight, synesthesia, and anosognosia we know that particular
> qualia are not inevitably associated with the conditions they usually
> represent for us, so it seems impossible to justify qualia on a
> functionalist basis. Just as a computer needs no speakers and video
> screen inside itself, there is no purpose for such a presentation
> layer within our own mechanism. Of course, even if there were a
> purpose, there is no hint of such a possibility from mechanism alone.
> If there were some reason that a bucket of rocks could benefit from some
> kind of collective 'experience' occurring amongst them, that would still
> be a million miles from suspecting that experience could be a conceivable
> possibility.
>
> Rather than 'consciousness', human beings would benefit evolutionarily
> much more from being able to do mechanically conceivable things like
> teleport, time travel, or breathe fire. Awareness doesn't even make
> sense as a possibility. Were we not experiencing it ourselves, we could
> never anticipate any such possibility in any universe.
Since there is no evolutionary advantage to consciousness it must be a
side-effect of the sort of behaviour that conscious organisms display.
Otherwise, why did we not evolve as zombies?
--
Stathis Papaioannou
I like Julian Jaynes's idea that it is a side-effect of using the same parts of the brain
for cogitation as are used for perception. That would be the kind of thing that evolution
would do: jury-rigged but efficient.
Brent
Why perception exists is pretty obvious in terms of evolutionary advantage. Even bacteria
perceive chemical gradients. Jaynes's theory shows why thinking should be like perceiving a
voice in your head.
Brent
>
> Craig
>
Consciousness comes from the conjunction of an (instinctive,
preprogrammed, or better pre-engrammed) belief in a consistent reality/
god/universe/whatever, and the existence of that reality. The side-
effect comes from the fact that the logic of communicable belief is
different from the logic of communicable-and-true beliefs.
Evolution, being driven by locally communicable events, cannot give an
advantage to truth, that's true, but without truth there would be no
communicable events at all. So consciousness has to exist to make
sense of the relative selection, by the universal mind, of the third
person plural type of reality needed for sharable physical realities.
In that sense, consciousness is not really a side effect, but is what
makes evolution and physical realities selectable by the "universal
mind". Consciousness looks like a side effect, from inside, only in
the Aristotelian picture. With comp, and its platonist consequences,
we might as well say that matter and evolution are a side effect of
consciousness. Without consciousness the notion of physical reality
would lose its meaning, given that physical reality can only
result from the shared dreams lived by the universal mind's multiple
instantiations.
And consciousness can be associated with a range of behavior, but is
not equal to any behavior. It is of the type of knowledge, and is a
fixed point of self-doubting (as in Descartes). It is universal and
exists, with comp, right at the "start" of arithmetical truth. It does
not need to be selected, for it exists at the start, and eventually it
is the one responsible for all possible observer selections.
The point here is difficult and subtle, and I am just trying to convey
it. It takes into account the universal mind, as David pointed out
recently, which I have come to endorse through thought experiments with
amnesia (or some reports of real experiences with some drugs) and the
complete UDA reversal.
> Otherwise, why did we not evolve as zombies?
OK.
Bruno
The evolutionary advantage of consciousness, according to Jeffrey Gray,
is late-error detection.
Evgenii
It depends on how you define what a perception is. If a perception is
supposed to be conscious experience, then bacteria do not perceive
chemical gradients, but rather sense them. If you however define
perceive and sense as equivalent terms, then even a ballcock perceives a
level of water.
"Bacteria can perceive" is typical for biologists; see my small comment
on this:
http://blog.rudnyi.ru/2011/01/perception-feedback-and-qualia.html
Evgenii
I agree. People confuse consciousness-the-qualia, and consciousness-
the-integrating function. Stathis was talking about the qualia.
Evolution can press only on the function, a priori.
>
> As far as ballcocks and electronic sensors, the difference is that
> they don't assemble themselves. We use their native capacities for
> purposes that plastic and metal have no way of accessing. The ballcock
> is only a thing in our world, it doesn't have any world of its own. I
> think that the molecules that make up the materials have their own
> world, but it's not likely to be anything like what we could imagine.
> Maybe all molecules have a collective experience on that microcosmic
> level, where snapshots of momentary awareness corresponding to change
> string together centuries of relative inactivity.
>
> It is not the fact that matter detects and responds to itself that is
> in question, it is the presentation of an interior realism which
> cannot be explained in a mechanistic context.
This is begging the question. And I would say that mechanism explains
well the interior realism, up to the qualia itself, which can be
explained only in the negative. It is that thing that the machine
"feels correctly" to be non functional, and which makes the machine
think at first "non correctly" that she is not a machine. It is not
correct from the 3-view, but still correct from the machine's first
person view.
If the 3-I is a machine, the 1-I cannot feel itself to be a machine.
As Minsky pointed out, machines will be as befuddled as us about the
mind-body problem. But comp can explain this "befuddling" at the meta-
level, completely. The machines too. In a sense, the first person and
consciousness are not a machine, with the mechanist hypothesis.
Bruno
Jeffrey Gray in his book speaks about conscious experience, that is,
exactly about qualia. Self, mind, and intellect as such are not there.
He first tried hard to put conscious experience into the framework of
normal science (I guess that he means physicalism here), but then he
shows that conscious experience cannot be explained by the theories
within normal science (functionalism, neural correlates of
consciousness, etc.).
According to him, conscious experience is some multipurpose display. It
remains to be found how Nature produces it, but at the moment this is
not that important.
He considers an organism from a cybernetic viewpoint, as a bunch of
feedback mechanisms (servomechanisms). For a servomechanism it is
necessary to set a goal and then to have a comparator that compares the
goal with reality. This might function okay at the unconscious level,
but conscious experience binds everything together in its display. This
binding happens not only between different senses (multimodal binding)
but also within a single sense (intramodal binding). For example, we
consciously experience a red kite as a whole, although in the brain
lines, colors, and surfaces are processed independently. Yet we cannot
consciously experience a red kite not as a whole; just try it.
Hence the conscious display gives a new opportunity to compare
expectations with reality, and Jeffrey Gray refers to this as late error
detection. That is, there is a bunch of servomechanisms that are running
on their own, but then conscious experience allows the brain to
synchronize everything together. This is a clear advantage from the
evolutionary viewpoint.
Evgenii
On 04.04.2012 09:31 Bruno Marchal said the following:
>
> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:
>
>> On Apr 3, 3:56 pm, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>>> On 03.04.2012 02:06 Stathis Papaioannou said the following:
...
>>>> Since there is no evolutionary advantage to consciousness it must be a
>>>> side-effect of the sort of behaviour that conscious organisms display.
>>>> Otherwise, why did we not evolve as zombies?
>>>
>>> The evolutionary advantage of consciousness, according to Jeffrey Gray,
>>> is late-error detection.
>>
>> Why would a device need to be conscious in order to have late-error
>> detection?
>
> I agree. People confuse consciousness-the-qualia, and
> consciousness-the-integrating function. Stathis was talking about the
> qualia.
> On Apr 4, 3:31 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 03 Apr 2012, at 22:38, Craig Weinberg wrote:
>
>>
>>> It is not the fact that matter detects and responds to itself that is
>>> in question, it is the presentation of an interior realism which
>>> cannot be explained in a mechanistic context.
>>
>> This is begging the question. And I would say that mechanism explains
>> well the interior realism, up to the qualia itself
>
> I don't see that there can be any interior realism without qualia -
> they are the same thing.
I agree with this.
> Mechanism assumes that because we can't
> explain the existence of qualia mechanistically, it must be an
> emergent property/illusion of mechanism.
It explains the existence of qualia, including some possible geometry
of them. It fails to explain only some aspects of qualia, but it meta-
explains why it cannot explain those aspects. The internal realism has
a necessary blind spot somehow.
> If we instead see that
> mechanism is a particular kind of lowest common denominator exterior
> qualia,
> then it would be silly to try to explain the parent
> phenomenology in terms of the child set of reduced possibilities.
?
>
>> which can be
>> explained only in the negative. It is that thing that the machine
>> "feels correctly" to be non functional, and which makes the machine
>> think at first "non correctly" that she is not a machine. It is not
>> correct from the 3-view, but still correct from the machine's first
>> person view.
>> If the 3-I is a machine, the 1-I cannot feel itself to be a machine.
>> As Minsky pointed out, machines will be as befuddled as us about the
>> mind-body problem. But comp can explain this "befuddling" at the
>> meta-
>> level, completely. The machines too. In a sense, the first person and
>> consciousness are not a machine, with the mechanist hypothesis.
>
> Mechanism is always going to implicate mechanism as the cause of
> anything, because it has no capacity to describe anything else and no
> capacity to extend beyond descriptions.
Yes it has. Once a machine is Löbian it can see its limitations and
overcome them. This leads to many paths.
> Consciousness is a
> much larger phenomenon, as it includes all of mechanism as well as
> many more flavors of experience.
It is fuzzy. I can agree and disagree depending on how you circumscribe
the meaning of the terms you are using.
> Only through direct experience can we
> know that it is possible that there is a difference between
> description and reality.
Yes. But we cannot know reality as such, except for the conscious non-
communicable parts. So, when we talk with each other, we can only make
hypotheses and reason.
>
> Through the monochrome lens of mechanism, it is easy to prove that
> audiences will think they see something other than black and white
> pixels because we understand that they are seeing fluid patterns of
> changing pixels rather than the pixels themselves, but this doesn't
> explain how we see color. The idea that a machine would logically not
> think of itself as a machine doesn't explain the existence of what it
> feels like to be the opposite of a machine or how it could really feel
> like anything.
But mechanism is not proposed as an explanation. It is more a "law"
that we exploit to clarify the problems. You can see it as a strong
assumption/belief given that it is a belief in possible
reincarnations. Comp is refutable. Non-comp is not refutable.
Bruno
>> Since there is no evolutionary advantage to consciousness it must be a
>> side-effect of the sort of behaviour that conscious organisms display.
>> Otherwise, why did we not evolve as zombies?
>>
>
> The evolutionary advantage of consciousness, according to Jeffrey Gray, is
> late-error detection.
But the late-error detection processing could be done in the same way
by a philosophical zombie. Since, by definition, a philosophical
zombie's behaviour is indistinguishable from that of a conscious being,
there is no way that nature could favour a conscious being over the
equivalent philosophical zombie. You then have two options to explain
why we are not zombies:
(a) It is impossible to make a philosophical zombie as consciousness
is just a side-effect of intelligent behaviour;
(b) It is possible to make a philosophical zombie but the mechanism
for intelligent behaviour that nature chanced upon has the side-effect
of consciousness.
Though (b) is possible I don't think it's plausible.
--
Stathis Papaioannou
Jeffrey Gray considers consciousness from the viewpoint of empirical
studies. Philosophical zombies so far exist only in the minds of crazy
philosophers, so I am not sure if this is relevant.
As I have written, conscious experience offers the brain unique
capabilities to tune all its running servomechanisms, capabilities that
it otherwise would not have. This is what neuroscience says. When
neuroscience finds zombies, it will be possible to consider this
hypothesis as well.
Clearly one can imagine that he/she is not a zombie and others are
zombies. But then he/she must convince others that they are zombies.
Evgenii
We do not know what kind of computing the brain does. It might well be
that at the level of neural nets it was simpler to create a conscious
display than to employ other means. On the other hand, robotics has yet
to prove that it can reach the behavioral level of, for example,
mammals. This has not been done yet. One cannot exclude that progress
here will be achieved only when people find the trick by which a brain
creates conscious experience.
Evgenii
Display to whom? The homunculus?
>
> He considers an organism from a cybernetic viewpoint, as a bunch of feedback mechanisms
> (servomechanisms). For a servomechanism it is necessary to set a goal and then to have a
> comparator that compares the goal with reality. This might function okay at the
> unconscious level, but conscious experience binds everything together in its display.
But why is the binding together conscious?
> This binding happens not only between different senses (multimodal binding) but also
> within a single sense (intramodal binding). For example, we consciously experience a red
> kite as a whole, although in the brain lines, colors, and surfaces are processed
> independently. Yet we cannot consciously experience a red kite not as a whole; just try it.
Actually I can. It takes some practice, but if, for example, you are a painter you learn
to see things as separate patches of color. As an engineer I can see a kite as structural
and aerodynamic elements.
>
> Hence the conscious display gives a new opportunity to compare expectations with reality,
> and Jeffrey Gray refers to this as late error detection.
But none of that explains why it is necessarily conscious. Is he contending that any
comparison of expectations with reality instantiates consciousness? So if a Mars Rover
uses some predictive program about what's over the hill and then later compares that with
what is over the hill, will it be conscious?
> That is, there is a bunch of servomechanisms that are running on their own, but then
> conscious experience allows the brain to synchronize everything together. This is a clear
> advantage from the evolutionary viewpoint.
It's easy to say consciousness does this and that and to argue that,
since these things are evolutionarily useful, that's why consciousness
developed. But what is needed is to say why doing this and that, rather
than something else, instantiates consciousness.
It seems that Gray is following my idea that the question of qualia, Chalmers's 'hard
problem', will simply be bypassed. We will learn how to make robots that act conscious
and we will just say consciousness is just an operational attribute.
Brent
But what constitutes 'a conscious display'? Display implies someone to whom it is displayed.
> than to employ other means. On the other hand, robotics has yet to prove that it
> can reach the behavioral level of, for example, mammals. This has not been done yet. One
> cannot exclude that progress here will be achieved only when people find the
> trick by which a brain creates conscious experience.
I think they will solve the problem of producing intelligent behavior and just assume they
have created conscious experience.
Brent
>> (a) It is impossible to make a philosophical zombie as consciousness
>> is just a side-effect of intelligent behaviour;
>> (b) It is possible to make a philosophical zombie but the mechanism
>> for intelligent behaviour that nature chanced upon has the side-effect
>> of consciousness.
>>
>> Though (b) is possible I don't think it's plausible.
>>
>
> Jeffrey Gray considers consciousness from the viewpoint of empirical studies.
> Philosophical zombies so far exist only in the minds of crazy philosophers,
> so I am not sure if this is relevant.
I've always thought that the parable of the philosophical zombie was
nothing more than a way of dramatising the fact that fundamental
physical theory explicitly abjures any appeal to consciousness in
pursuit of its explanatory goals. All such theories are built on the
assumption (which I for one am in no position to dispute) that a
complete physical account of human behaviour could be given
without reference to any putative conscious states.
The zombie metaphor isn't intended as a challenge to how things
actually are, but rather to pump our intuition of explanatory gaps in
our theories of how things are. Hence, in the case that either option
a) or b) were true, it would still seem unsatisfactory that neither
conclusion is forced by any existing physical theory, given
the unavoidable observational truth of consciousness.
David
No, he creates an interesting scheme to escape the homunculus:
p. 110: "(1) the unconscious brain constructs a display in a medium,
that of conscious perception, fundamentally different from its usual
medium of electrochemical activity in and between nerve cells;
(2) it inspects the conscious constructed display;
(3) it uses the results of the display to change the working of its
usual electrochemical medium."
Hence the unconscious brain does the job. I should say that this does
not answer my personal inquiry into how I perceive a three-dimensional
world, but that is another problem. In his book, Jeffrey Gray offers
quite a plausible scheme.
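Read as pseudocode, the three steps amount to a simple loop. A rough
sketch (the function names and the toy data are mine, not Gray's):

# (1) the unconscious brain constructs a display in a different medium
def construct_display(neural_state):
    return {"percept": " + ".join(neural_state["features"])}

# (2) it inspects the constructed display as a bound whole
def inspect(display):
    return display["percept"]

# (3) it feeds the result back into the usual electrochemical medium
def update(neural_state, percept):
    neural_state["last_percept"] = percept
    return neural_state

state = {"features": ["red", "diamond shape", "string"]}
state = update(state, inspect(construct_display(state)))
print(state["last_percept"])  # the bound percept, e.g. a whole red kite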
>>
>> He considers an organism from a cybernetic viewpoint, as a bunch of
>> feedback mechanisms (servomechanisms). For a servomechanism it is
>> necessary to set a goal and then to have a comparator that compares
>> the goal with reality. This might function okay at the unconscious
>> level, but conscious experience binds everything together in its display.
>
> But why is the binding together conscious?
There is no answer to this question yet. This is just his hypothesis
based on experimental research. In a way, this is a description of
experiments. The question of why requires a theory, and that is not
there yet.
>> This binding happens not only between different senses (multimodal
>> binding) but also within a single sense (intramodal binding). For
>> example, we consciously experience a red kite as a whole, although in
>> the brain lines, colors, and surfaces are processed independently. Yet
>> we cannot consciously experience a red kite not as a whole; just try it.
>
> Actually I can. It takes some practice, but if, for example, you are a
> painter you learn to see things as separate patches of color. As an
> engineer I can see a kite as structural and aerodynamic elements.
If you indeed visually experience this, it might be good to take an MRI
test to see the difference from others. This way you would help to
develop the theory of consciousness.
I understand what you say, and I can imagine a kite as a bunch of
masses, springs, and dampers, but I cannot visually experience this when
I observe the kite. I can visually experience it only when I draw it on
paper.
>>
>> Hence the conscious display gives a new opportunity to compare
>> expectations with reality, and Jeffrey Gray refers to this as late
>> error detection.
>
> But none of that explains why it is necessarily conscious. Is he
> contending that any comparison of expectations with reality
> instantiates consciousness? So if a Mars Rover uses some predictive
> program about what's over the hill and then later compares that with
> what is over the hill, will it be conscious?
He just describes experimental results. He has conscious experience, he
has a brain, MRI shows activities in the brain; then another person in
similar circumstances shows similar activities in the brain and states
that he has conscious experience. Hence it is logical to suppose that
the brain produces conscious experience.
There is no discussion in his book of whether this is necessarily
conscious. There are no experimental results to discuss that. As for the
Mars Rover, in his book there is a statement that ascribing
consciousness to robots is not grounded scientifically. There are no
experimental results in this respect to discuss.
>> That is, there is a bunch of servomechanisms that are running on their
>> own, but then conscious experience allows the brain to synchronize
>> everything together. This is a clear advantage from the evolutionary
>> viewpoint.
>
> It's easy to say consciousness does this and that and to argue that,
> since these things are evolutionarily useful, that's why consciousness
> developed. But what is needed is to say why doing this and that, rather
> than something else, instantiates consciousness.
This remains the Hard Problem. There is no solution to it in the book.
> It seems that Gray is following my idea that the question of qualia,
> Chalmers's 'hard problem', will simply be bypassed. We will learn how to
> make robots that act conscious and we will just say consciousness is
> just an operational attribute.
No, his statement is that this phenomenon does not fit into normal
science. He considers current theories of consciousness, including
epiphenomenalism, functionalism, and neural correlates of consciousness,
and his conclusion is that these theories cannot describe the
observations; that is, the Hard Problem remains.
Evgenii
It is hard to predict what will happen. Let us see.
Evgenii
In this sense, his conclusion is in agreement with the philosophers. In
his book, Jeffrey Gray shows that the "consciousness display" cannot be
explained by current science. According to him, a new science is
required.
Yet, this does not change his hypothesis about why the "consciousness
display" could be advantageous for evolution. We do not know what it is,
but if it is there, it certainly can help to organize the
servomechanisms in the body.
Evgenii
But 'conscious display' is just putting another name on what he purports to explain.
Unless Gray can point to specific brain structures and processes and explain why those
structures and processes make consciousness and others don't, he has done nothing but put
new words on "consciousness". Science needs *operational* definitions. Conversely, if he
can specify the structures and processes, then we can instantiate those in a robot and see
if the robot acts as if it were conscious. I think that will be the experimental test of
a theory of consciousness. If we can manipulate consciousness by physical/chemical
manipulation of the brain, that will be evidence that we know what consciousness is.
Notice that in the physical sciences we don't go around saying, "Yes, I know how gravity
works and I can predict its effects and write equations for it, but what IS it?"
Brent
> Yet, this does not change his hypothesis about why the "consciousness display"
> could be advantageous for evolution. We do not know what it is, but if it is
> there, it certainly can help to organize the servomechanisms in the body.
Sure, if it is there, it could indeed be advantageous, if not
indispensable. But such notions of course do not avoid the Hard
Problem. Many independent considerations converge to suggest that -
as it bears on macroscopic physical evolution - consciousness in the
Hard sense will always be externally indistinguishable from
sufficiently intelligent behaviour, as Brent argues. The problem with
"display" ideas about consciousness (compare, for example, Johnjoe
McFadden's EM theory) is that they must, in the end, be fully
justified in impersonal terms, and hence once again appeals to the
additional hypothesis of consciousness, at the relevant level of
description, will be redundant.
I confess this smells to me like the wrong sort of theory. On the
other hand, if comp is true the story can be somewhat more subtle.
Comp + consciousness (the "internal view" of arithmetical truth)
implies an infinity of possible histories, in which natural selection,
of features advantageous to macroscopic entities inhabiting a
macroscopic environment, is a particularly consistent strand. It also
entails parallel strands of "evolutionary history" - i.e. at the level
of wave function - which need make no reference to any such macro
features but nonetheless imply the same gross distributions of matter.
But such a schema does entail a "causal" role for consciousness, as
the unique integrator of discontinuous subjective perspectives, but at
a very different logical level than that of "physical causation" (i.e.
the reductive structural relation between states).
David
Is it a physical medium, made of quarks and electrons? Is it an immaterial soul stuff?
Or is it just a placeholder name for a gap in the theory?
>
> (2) it inspects the conscious constructed display;
Is the display conscious, or the 'it' that's doing the inspection?
>
> (3) it uses the results of the display to change the working of its usual
> electrochemical medium."
Sounds like a soul or homunculus to me.
>
> Hence the unconscious brain does the job.
But the display is denoted 'conscious'? Is it not part of the brain?
> I should say that this does not answer my personal inquiry into how I perceive a
> three-dimensional world, but that is another problem. In his book, Jeffrey Gray offers
> quite a plausible scheme.
Doesn't sound any more plausible than a conscious spirit.
Brent
I think that's the story even if comp is false.
> It also
> entails parallel strands of "evolutionary history" - i.e. at the level
> of wave function - which need make no reference to any such macro
> features but nonetheless imply the same gross distributions of matter.
Are you contemplating consciousness as a kind of equivalence relation that picks out the
different branches of Everett's MWI, i.e. solves the basis problem of decoherence? That
would seem to make every quasi-classical object conscious.
> But such a schema does entail a "causal" role for consciousness, as
> the unique integrator of discontinuous subjective perspectives,
To refer to 'subjective' perspectives seems to already assume consciousness.
Brent
Science starts with research on a phenomenon. If we speak about a theory
of consciousness, then we are presumably close to the level at which the
ancient Greeks would have tried to develop a theory of electricity. Yet
the phenomenon, for example lightning, was already there, and it was
possible to describe it even then.
'Conscious display' is a metaphor, or, if you like, a placeholder. We
cannot explain right now how the brain produces consciousness, and this
is Gray's point. Yet this does not mean that the phenomenon is not
there. We just cannot explain it. In this respect, Gray's book is a very
good example of empirical science; the theory of consciousness is,
however, not there.
Evgenii
Gray's book is not a theory of consciousness; it is rather empirical
research whose outcome is that modern science cannot explain the
observations in that research. Gray also confesses that
"There are no behavioral tests by which we can distinguish whether a
computer, a robot or a Martian possesses qualia."
At the same time, he shows how to bring consciousness into the lab:
"These experiments demonstrate yet again, by the way, that the 'privacy'
of conscious experience offers no barrier to good science. Synaesthetes
claim a form of experience that is, from the point of view of most
people, idiosyncratic in the extreme. Yet it can be successfully brought
into the laboratory."
Evgenii