MGA 1


Bruno Marchal

Nov 18, 2008, 2:52:47 PM
to everyth...@googlegroups.com
Hi,

Those who dislike introductions can skip to "THE FIRST THOUGHT
EXPERIMENT AND THE FIRST QUESTION".
---------------------


INTRODUCTION

MGA is for Movie Graph Argument (like UDA is for Universal Dovetailer
Argument).

By UDA(1...7), the first seven steps of the UDA, we have a proof or
argument that


(COMP + there is a concrete universe with a concrete universal
dovetailer running forever in it)

implies that

physics emerges statistically from the computations (as seen
from a "first person point of view").

Note: I will use "computationalism", "digital mechanism", and even just
"mechanism" as synonyms.

MGA is intended to eliminate the hypothesis that:

there is a concrete universe with a concrete universal dovetailer
running forever.


Leading to: comp implies that physics is a branch of (mathematical)
computer science.

Some nuances will have to be added. But I prefer to be slightly wrong
and understandable than to give a long list of "vocabulary" and
pursue some obscure jargon.


But in case you have not read the UDA, there is no problem. MGA by
itself shows something independent of the UDA: indeed it shows (is
supposed to show) that the physical supervenience thesis is false.
Consciousness does not supervene on the *physical activity* of the
brain/computer/universe. This shows that mechanism is incompatible
with materialism (even in weak forms), naturalism, and physicalism,
because they traditionally assume the physical supervenience thesis.

It is more subtle than the UDA, and I expect possibly endless
discussions. (Zombies will come back!)


Now, a preliminary remark to clarify what we mean by MECHANISM.
When the mechanist says "yes" to the doctor, it is because he believes
(or hopes) he will survive QUA COMPUTATIO (sorry for the Latin). I
mean he believes that he will survive because the computational device
he will get in place of his old brain does the "right" computations
(which exist by hypothesis). He does not believe something like: "I
believe that there is a God who will, by his magic means, pull out my
soul, and then put it back in the new computational device" (although
he could!).
A mechanical theory of consciousness, as well explained by Dennett,
should rely on the fact that we don't attribute knowledge or
consciousness, still less prescience, to the neurons, or elementary
logic gates, or quarks, ... that is, to the elementary parts of the
computational device. (What counts as an elementary part depends, of
course, on the choice of substitution level.)

This means, assuming both mechanism and naturalism (i.e. the physical
supervenience thesis), that when consciousness supervenes on the
physical activity of a brain, no neuron is aware of the other neurons
to which it is related. Each neuron is "aware" only of some
information it gets from the other neurons, not of the neurons
themselves. If that were not the case, so that some neurons had some
prescience of the identity of the neurons to which they are connected,
it would just mean, keeping the mechanist hypothesis, that we have not
chosen the right substitution level, and should go down further.

Now comes the first thought experiment and the first question.
-------------------------

THE FIRST THOUGHT EXPERIMENT AND THE FIRST QUESTIONS (MGA 1): The
lucky cosmic event.

One billion years ago, one billion light-years away, somewhere in
the universe (which exists by the naturalist hypothesis), a cosmic
explosion occurred. And ...

... Alice had her math exam this afternoon.
From 3 pm to 4 pm, she successfully solved a problem. She thought to
herself, "Oh, easy. Oh, careful, there is a trap... yet I can solve it."

What really happened is this. Alice has had an artificial brain since
a fatal brain tumor in her early childhood. At 3:17 pm one logic gate
broke (resp. two logic gates, three, 24, 4567, 234987, ... all of
them).

But Alice was lucky (incredibly lucky). When logic gate A broke and,
for example, did not send a bit to logic gate B, an energetic particle
coming from the cosmic explosion, by pure chance, triggered logic gate
B at the right time. And just after this happened, another energetic
particle fixed the gate problem.

Question: did this change Alice's consciousness during the exam?

I ask the same question with 2440 broken gates. They broke, let us say,
during an oral exam, and each time a gate broke, by sending wrong
info or by not sending some info, an energetic particle coming from
that cosmic explosion did the job, and at some point in time a bunch
of energetic particles fixed Alice's brain.

Suppose that ALL of Alice's neurons/logic gates are broken during
the exam, all the time. But Alice, I told you, is incredibly lucky,
and that cosmic beam again manages to make each logic gate complete
its work at the relevant places and times. And again, at the end of
the exam, a last cosmic beam fixes her brain. In particular she passes
the exam, and she can explain later to her mother, with her sane
(artificial) brain, that she thought to herself, during the oral
exam: "Oh, easy. Oh, careful, there is a trap... yet I can solve it."

The last question (of MGA 1) is: was Alice, in this case, a zombie
during the exam?

I let you think.

Bruno


http://iridia.ulb.ac.be/~marchal/

Russell Standish

Nov 19, 2008, 1:13:04 AM
to everyth...@googlegroups.com

I think it makes a difference if all gates, including the output gates,
are broken. But if the output gates are intact, and are all activated
in the correct way by your "happy rays" (rayons heureux), then your
argument should be correct.

>
> But Alice was lucky (incredibly lucky). When the logical gate A did
> break, and for example did not send a bit to logical gate B, an
> energetic particle coming from the cosmic explosion, by pure chance,
> did trigger the logical gate B at the right time. And just after this
> happening another energetic particle fixed the gate problem.
>
> Question: did this change Alice's consciousness during the exam?
>
> I ask the same question with 2440 broken gates. They broke, let us say
> during an oral exam, and each time a gate broke, by sending a wrong
> info, or by not sending some info, an energetic particle coming from
> that cosmic explosion do the job, and at some point in time, a bunch
> of energetic particle fix Alice's brain.
>
> Suppose that ALL the neurons/logical gates of Alice are broken during
> the exam, all the time. But Alice, I told you, is incredibly lucky,
> and that cosmic beam again manage each logical gates to complete their
> work in the relevant places and times. And again at the end of the
> exam, a cosmic last beam fixed her brain. In particular she succeed
> the exam, and she can explain later to her mother, with her sane
> (artificial) brain, that she thought tp herself, during the oral
> exam: "oh, easy, Oh careful there is trap, yet I can solve it".
>
> The last question (of MGA 1) is: was Alice, in this case, a zombie
> during the exam?
>
> I let you think.
>
> Bruno
>

I think Alice was indeed not a zombie, and that her consciousness
supervened on the physical activity stimulating her output gates (the
cosmic explosion that produced the "happy rays"). Are you suggesting
that she was a zombie?

I can see the connection with Tim Maudlin's argument, but in his case,
the machinery known as Olympia is too simple to be conscious (being
nothing more than a recording - simpler than most automata anyway),
and the machinery known as Klara was in fact stationary, leading to a
rather absurd proposition that consciousness would depend on a
difference in an inactive machine.

In your case, the cosmic explosion is far from inactive, and if a star
blew up in just such a way that its cosmic rays produced identical
behaviour to Alice taking her exam (consciously), I have no problems
in considering her consciousness as having supervened on the cosmic
rays travelling from that star for that instant. It is no different to
the proverbial tornado ripping through one of IBM's junk yards and
miraculously assembling a conscious computer by chance.

Of course you know my opinion that the whole argument changes once you
consider the thought experiment taking place in a multiverse.

Cheers

--

----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

Bruno Marchal

Nov 19, 2008, 6:59:43 AM
to everyth...@googlegroups.com

Le 19-nov.-08, à 07:13, Russell Standish a écrit :


> I think Alice was indeed not a zombie,


I think you are right.
COMP + MAT implies Alice (in this setting) is not a zombie.

> and that her consciousness
> supervened on the physical activity stimulating her output gates (the
> cosmic explosion that produced the "happy rays"). Are you suggesting
> that she was a zombie?


Not at all. (Not yet ...).

>
> I can see the connection with Tim Maudlin's argument, but in his case,
> the machinery known as Olympia is too simple to be conscious (being
> nothing more than a recording - simpler than most automata anyway),
> and the machinery known as Klara was in fact stationary, leading to a
> rather absurd proposition that consciousness would depend on a
> difference in an inactive machine.
>
> In your case, the cosmic explosion is far from inactive,

This makes the movie graph argument immune to the first half of
Barnes' objection. But let us not anticipate the sequel.

> and if a star
> blew up in just such a way that its cosmic rays produced identical
> behaviour to Alice taking her exam (consciously), I have no problems
> in considering her consciousness as having supervened on the cosmic
> rays travelling from that star for that instant. It is no different to
> the proverbial tornado ripping through one of IBM's junk yards and
> miraculously assembling a conscious computer by chance.


Does everyone accept, like Russell, that, assuming COMP and MAT, Alice
is not a zombie? I mean, is there someone who objects? Remember we are
proving an implication: MAT+MECH => <something>. We never try to argue
about that <something> per se. Eventually we hope to prove MAT+MECH =>
false, that is NOT(MAT & MECH), which is equivalent to MAT => NOT
MECH, MECH => NOT MAT, etc.

(By MAT I mean materialism, or naturalism, or physicalism, or more
generally "the physical supervenience thesis", according to which
consciousness supervenes on the physical activity of the brain.)
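For readers who want the propositional step spelled out, here is a
minimal brute-force check (a Python sketch added for illustration; the
variable names are just labels for the two hypotheses, nothing more):

    from itertools import product

    # Check that NOT(MAT & MECH), MAT => NOT MECH and MECH => NOT MAT
    # agree on every truth assignment (p => q is coded as (not p) or q).
    for MAT, MECH in product([True, False], repeat=2):
        a = not (MAT and MECH)
        b = (not MAT) or (not MECH)   # MAT => NOT MECH
        c = (not MECH) or (not MAT)   # MECH => NOT MAT
        assert a == b == c
    print("all three formulations agree")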

If no one objects, I will present MGA 2 (soon).


>
> Of course you know my opinion that the whole argument changes once you
> consider the thought experiment taking place in a multiverse.


We will see (let us go step by step, so as not to confuse the audience).
Thanks for answering.


Bruno Marchal


http://iridia.ulb.ac.be/~marchal/

Telmo Menezes

Nov 19, 2008, 10:06:12 AM
to everyth...@googlegroups.com
Bruno,

> If no one objects, I will present MGA 2 (soon).

I also agree completely and am curious to see where this is going.
Please continue!

Cheers,
Telmo Menezes.

Gordon Tsai

Nov 19, 2008, 10:48:52 AM
to everyth...@googlegroups.com
Bruno:
 
   I'm interested to see the second part. Thanks!


Jason Resch

Nov 19, 2008, 1:50:35 PM
to everyth...@googlegroups.com


On Wed, Nov 19, 2008 at 5:59 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:


Does everyone accept, like Russell,  that, assuming COMP and MAT, Alice
is not a zombie? I mean, is there someone who object? Remember we are
proving implication/ MAT+MECH => <something>. We never try to argue
about that <something> per se. Eventually we hope to prove MAT+MECH =>
false, that is NOT(MAT & MECH) which is equivalent to MAT implies NOT
MECH, MECH => NOT MAT, etc.

(by MAT i mean materialism, or naturalism, or physicalism or more
generally "the physical supervenience thesis", according to which
consciousness supervenes on the physical activity of the brain.

Bruno, I am on the fence as to whether or not Alice is a zombie.  The argument for her not being conscious is related to the non-causal effect of information in this scenario.  A string of 1's and 0's which is simply defined out of nowhere cannot, in my opinion, contain conscious observers, even if it could be considered to encode the brain states of conscious observers or a universe with conscious observers.  To have meaningful information there must be relations between objects, such as the flow of information in the succession of states of a Turing machine.  In the case of Alice, the information coming from the cosmic rays is meaningless, and might as well have occurred in isolation.  If all of Alice's logic gates had been spread over a field, and made to fire in the same way due to cosmic rays, and if all the logic gates remained otherwise disconnected from each other, would anyone consider this field of logic gates to be conscious?

I have an idea that consciousness is related to hierarchies of information: at the lowest levels of neural activity, simple computations over small amounts of information combine that information into a result, and these higher-level results are passed up to higher levels of processing, and so on.  For example the red/green/blue data from the eyes are combined into single pixels, these pixels are combined into a field of colors, and this field of colors is then processed by the object-classification sections of the brain.  So my argument that Alice might not be conscious is related to the skipping of steps through the injection of information which is "empty" (not having been computed from lower-level sets of information and hence not actually conveying any information).
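A toy numerical sketch of that hierarchy (purely illustrative; the
numbers, the threshold and the final "judgement" are invented for the
example, not part of the description above):

    # Level 0: raw R,G,B samples; each higher level is computed *from*
    # the level below it, which is the causal structure being described.
    raw = [(200, 30, 40), (210, 25, 35), (190, 40, 30), (205, 35, 45)]

    pixels = [sum(c) / 3 for c in raw]       # level 1: per-pixel brightness
    field = sum(pixels) / len(pixels)        # level 2: whole-field average
    judgement = "bright patch" if field > 80 else "dark patch"   # level 3

    print(pixels, field, judgement)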

Jason


Jason Resch

Nov 19, 2008, 2:17:35 PM
to everyth...@googlegroups.com
To add some clarification, I do not think that spreading Alice's logic gates across a field, and allowing cosmic rays to cause each gate to perform the same computations it would have performed in her functioning brain, would yield something conscious.  I think this because in isolation the logic gates are not computing anything complex, only AND, OR, NAND operations, etc.  This is why I believe rocks are not conscious: the collisions of their molecules may be performing simple computations, but these are never aggregated into complex patterns that compute over a large set of information.
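To make the contrast concrete, here is a small hedged sketch (my own
illustration, not code from the thread): the same elementary gates,
first wired together so that each gate consumes another gate's output
(a 1-bit full adder), then fired in isolation on externally supplied
inputs, as in the "gates spread over a field" scenario:

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, cin):
        # Causally linked gates: outputs feed further gates.
        s1 = XOR(a, b)
        return XOR(s1, cin), OR(AND(a, b), AND(s1, cin))   # (sum, carry)

    print(full_adder(1, 1, 1))   # (1, 1): the circuit computes 1+1+1 = 3

    # "Field of gates": each gate fires on inputs injected from outside
    # (the cosmic rays); no gate's output is used by any other gate.
    isolated = [XOR(1, 1), XOR(0, 1), OR(1, 0), AND(1, 1), AND(0, 1)]
    print(isolated)   # the same local operations, but no aggregate computation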

Jason

Michael Rosefield

Nov 19, 2008, 2:37:44 PM
to everyth...@googlegroups.com
Are not logic gates black boxes, though? Does it really matter what happens between Input and Output? In which case, it has absolutely no bearing on Alice's consciousness whether the gate's a neuron, an electronic doodah, a team of well-trained monkeys or a lucky quantum event or synchronicity. It does not matter, really, where or when the actions of the gate take place.


Bruno Marchal

Nov 19, 2008, 2:55:18 PM
to everyth...@googlegroups.com
On 19 Nov 2008, at 20:17, Jason Resch wrote:

To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they would had they existed in her functioning brain would be conscious.  I think this because in isolation the logic gates are not computing anything complex, only AND, OR, NAND operations, etc.  This is why I believe rocks are not conscious, the collisions of their molecules may be performing simple computations, but they are never aggregated into complex patterns to compute over a large set of information.


Actually I agree with this argument. But it does not concern Alice, because I have provided her with an incredible amount of luck. The lucky rays fix the neurons in a genuine way (by that abnormally big amount of pure luck).
If you doubt that Alice remains conscious, how could you accept an experience of simple teleportation (UDA step 1 or 2)? If you can recover consciousness from a relative digital description, how could that consciousness distinguish between a recovery from a genuine description sent from Earth (say), and a recovery from a description luckily generated by a random process? If you recover from a description (comp), you cannot know whether that description has been generated by a computation or by a random process, unless you give some prescience to the logic gates. Keep in mind we are trying to refute the conjunction of MECH and MAT.

Nevertheless your intuition below is mainly correct; the point is that accepting that it really works, AND keeping MECH, will force us to negate MAT.

Bruno









Bruno Marchal

Nov 19, 2008, 2:40:45 PM
to everyth...@googlegroups.com

On 19 Nov 2008, at 16:06, Telmo Menezes wrote:


>
> Bruno,
>
>> If no one objects, I will present MGA 2 (soon).
>
> I also agree completely and am curious to see where this is going.
> Please continue!


Thanks Telmo, thanks also to Gordon.

I will try to send MGA 2 asap, but this will take me some time. Meanwhile I
suggest a little exercise which, by the way, finishes the proof of
"MECH + MAT implies false" for those who think that there are no
(conceivable) zombies (they think that "zombies exist" *is* false).

Exercise (MAT+MECH implies zombies exist or are conceivable):

Could you alter the so-lucky cosmic-explosion beam a little bit so
that Alice still passes her math exam, but is, reasonably enough, a
zombie during the exam? With "zombie" taken in the traditional sense of
Kory and Dennett.
Of course you have to keep *both* MECH *and* MAT.

Bruno

http://iridia.ulb.ac.be/~marchal/

Brent Meeker

Nov 19, 2008, 4:43:02 PM
to everyth...@googlegroups.com

As I understand it a philosophical zombie is someone who looks and acts just
like a conscious person but isn't conscious, i.e. has no "inner narrative".
Time and circumstance play a part in this. As Bruno pointed out a cardboard
cutout of a person's photograph could be a zombie for a moment. I assume the
point of the exam is that an exam is long enough in duration and complex enough
that it rules out the accidental, cutout zombie. But then Alice has her normal
behavior restored by a cosmic ray shower that is just as improbable as the
accidental zombie, i.e. she is, for the duration of the shower, an accidental
zombie.

So I'm puzzled as to how to answer Bruno's question. In general I don't believe in
zombies, but that's in the same way I don't believe my glass of water will
freeze at 20degC. It's an opinion about what is likely, not what is possible.
It seems similar to the question, could I have gotten in my car and driven to
the store, bought something, and driven back and yet not be conscious of it.
It's highly unlikely, yet people apparently have done such things.

Brent

Jason Resch

Nov 19, 2008, 5:26:57 PM
to everyth...@googlegroups.com
On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 19 Nov 2008, at 20:17, Jason Resch wrote:

To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they would had they existed in her functioning brain would be conscious.  I think this because in isolation the logic gates are not computing anything complex, only AND, OR, NAND operations, etc.  This is why I believe rocks are not conscious, the collisions of their molecules may be performing simple computations, but they are never aggregated into complex patterns to compute over a large set of information.


Actually I agree with this argument. But it does not concern Alice, because I have provide her with an incredible amount of luck. The lucky rays  fix the neurons in a genuine way (by that abnormally big amount of pure luck). 

If the cosmic rays are simply keeping her neurons working normally, then I'm more inclined to believe she remains conscious, but I'm not certain one way or the other.

 
If you doubt Alice remain conscious, how could you accept an experience of simple teleportation (UDA step 1 or 2). If you can recover consciousness from a relative digital description, how could that consciousness distinguish between a recovery from a genuine description send from earth (say), and a recovery from a description luckily generated by a random process?

I believe consciousness can be recovered from a digital description, but I don't believe the description itself is conscious while being beamed from one teleportation station to the other.  I think it is only when the body/computer simulation is instantiated that consciousness can be recovered from the description.

Consider sending the description over an encrypted channel: without the right decryption algorithm and key, the description can't be differentiated from random noise.  The same bits could be interpreted entirely differently depending completely on how the recipient uses them.  The "meaning" of the transmission is recovered when it forms a system with complex relations, presumably the same relations as the original that was teleported, even though it may be running on a different physical substrate, or a different computer architecture.
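A minimal sketch of that point (my own illustration; the "description"
string and the use of a simple one-time pad are assumptions made just
for the example):

    import os

    description = b"Alice's brain state #42"
    key = os.urandom(len(description))                 # one-time pad
    ciphertext = bytes(d ^ k for d, k in zip(description, key))

    print(ciphertext.hex())       # without the key: looks like uniform noise
    print(bytes(c ^ k for c, k in zip(ciphertext, key)))         # original back
    wrong_key = os.urandom(len(description))
    print(bytes(c ^ k for c, k in zip(ciphertext, wrong_key)))   # a junk "reading"

The same bits decrypt to the original only relative to the right key and
procedure, which is the point that the meaning lives in how the
recipient uses the bits, not in the bits alone.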

I don't deny that a random process could be the source of a transmission that resulted in the creation of a conscious being; what I deny is that random simple* computations, lacking any causal linkages, could form consciousness.

* By simple I mean the types of computation done in discrete steps, such as multiplication, addition, etc.  Those done by a single neuron or a small collection of logic gates.

If you recover from a description (comp), you cannot know if that description has been generated by a computation or a random process, unless you give some prescience to the logical gates. Keep in mind we try to refute the conjunction MECH and MAT.

Here I would say that consciousness is not correlated with the physical description at any point in time, but rather with the computational history and flow of information, and that this is responsible for the subjective experience of being Alice.  If Alice's mind is described by a random process, albeit one which gives the appearance of consciousness during her exam, she nevertheless has no coherent computational history and her mind contains no large-scale informational structures.  The state machine that would represent her in the case of injection of random noise is a different state machine from the one that would represent her normally functioning brain.
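A hedged sketch of that distinction (my own, with made-up machines): two
processes that emit the same output trace, one deriving each next state
from its current state, the other merely replaying externally injected
values. Observationally they match; structurally they are different
machines:

    def autonomous(seed, n):
        state, out = seed, []
        for _ in range(n):
            state = (3 * state + 1) % 16    # next state computed internally
            out.append(state % 2)           # visible output bit
        return out

    trace = autonomous(5, 8)

    def driven(injected):
        return list(injected)               # "states" overwritten from outside

    print(trace, driven(trace), trace == driven(trace))  # same outputs, different machine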

Jason

Brent Meeker

Nov 19, 2008, 5:55:23 PM
to everyth...@googlegroups.com
Right. That's why I think that a simulation instantiating a conscious being
would have to include a lot of environment and the being would only be conscious
*relative to that environment*. I think it is an interesting empirical question
whether a person can be conscious with no interaction with their environment.
It appears that it is possible for short periods of time, but I once read that
in sensory deprivation experiments the subjects' minds would go into a loop after
a couple of hours. Is that still being conscious?

Brent Meeker

Telmo Menezes

Nov 19, 2008, 6:19:44 PM
to everyth...@googlegroups.com
> Could you alter the so-lucky cosmic explosion beam a little bit so
> that Alice still succeed her math exam, but is, reasonably enough, a
> zombie during the exam. With zombie taken in the traditional sense of
> Kory and Dennett.
> Of course you have to keep well *both* MECH *and* MAT.

I think I can...

Instead of correcting the brain, the cosmic beams trigger output
neurons in a sequence that makes Alice write the right answers. That
is to say, the information content of the beams is no longer a
representation of an area of Alice's brain, but a representation of
the answers to the exam. An outside observer cannot distinguish one
case from the other. In the first she is Alice, in the second she is a
zombie.

Telmo.

Kory Heath

Nov 20, 2008, 2:23:04 AM
to everyth...@googlegroups.com

On Nov 18, 2008, at 11:52 AM, Bruno Marchal wrote:
> The last question (of MGA 1) is: was Alice, in this case, a zombie
> during the exam?

Of course, my personal answer would take into account the fact that I
already have a problem with the materialist's idea of "matter". But I
think we're supposed to be considering the question in the context of
mechanism and materialism. So I'll ask, what should a mechanist-
materialist say about the state of Alice's consciousness during the
exam?

Maybe I'm jumping ahead, but I think this thought experiment creates a
dilemma for the mechanist-materialist (which I think is Bruno's
point). In contrast to many of the other responses in this thread, I
don't think the mechanist-materialist should believe that Alice is
conscious in the case when every gate has stopped functioning (but
cosmic rays are randomly causing them to flip in the exact same way
that they would have flipped if they were functioning). Alice is in
that case functionally identical to a random-number generator. It
shouldn't matter at all whether these cosmic rays are striking the
broken gates in her head, or if the gates in her head are completely
inert and the rays are striking the neurons in (say) her arms and her
spinal cord, still causing her body to behave exactly as it would
have without the breakdown. I agree with Telmo Menezes that the
mechanist-materialist shouldn't view Alice as conscious in the latter
case. But I don't think it's any different than the former case.

It sounds like many people are under the impression that mechanism-
materialism, with its rejection of zombies, is committed to the view
that Lucky Alice must be conscious, because she's behaviorally
indistinguishable from the Alice with the correctly-functioning brain.
But, in the sense that matters, Lucky Alice is *not* behaviorally
indistinguishable from fully-functional Alice. For the mechanist-
materialist, everything physical counts as "behavior". And there is a
clear physical difference between the two Alices, which would be
physically discoverable by a nearby scientist with the proper
instruments.

Let's imagine that, during the time that Alice's brain is broken but
"luckily" acting as though it wasn't due to cosmic rays, someone
throws a ball at Alice's head, and she ("luckily") ducks out of the
way. The mechanist-materialist may be happy to agree that she did
indeed "duck out of the way", since that's just a description of what
her body did. But the mechanist-materialist can (and must) claim that
Lucky Alice did not in fact respond to the ball at all. And that
statement can be translated into pure physics-talk. The movements of
Alice's body in this case are being caused by the cosmic rays. They
are causally disconnected from the movements of the ball (except in
the incidental way that the ball might be having some causal effect on
the cosmic rays). When Alice's brain is working properly, her act of
ducking *is* causally connected to the movement of the ball. And this
kind of causal connection is an important part of what the mechanist-
materialist means by "consciousness".

Dennett is able to - and in fact must - say that Alice is not
conscious when all of her brain-gates are broken but very luckily
being flipped by cosmic rays. When Dennett says that someone is
conscious, he is referring precisely to these behavioral competences
that can be described in physical terms. He means that this collection
of physical stuff we call Alice really is responding to her immediate
environment (like the ball), observing things, collecting data, etc.
In that very objective sense, Lucky Alice is not responding to the
ball at all. She's not conscious by Dennett's physicalist definition
of consciousness. But she's also not a zombie, because she is behaving
differently than fully-functional Alice. You just have to be able to
have the proper instruments to know it.

If you still think that Dennett would claim that Lucky Alice is a
zombie, take a look at this quote from http://ase.tufts.edu/cogstud/papers/zombic.htm
: "Just remember, by definition, a zombie behaves indistinguishably
from a conscious being–in all possible tests, including not only
answers to questions [as in the Turing test] but psychophysical tests,
neurophysiological tests–all tests that any 'third-person' science can
devise." Lucky Alice does *not* behave indistinguishably from a
conscious being in all possible tests. The proper third-person test
examining her logic gates would show that she is not responding to her
immediate environment at all. Dennett should claim that she's a non-
conscious non-zombie.

Nevertheless, I think Bruno's thought experiment causes a problem for
the mechanist-materialist, as it is supposed to. If we believe that
the fully-functional Alice is conscious and the random-gate-brain
Alice is not conscious, what happens when we start turning Alice's
functioning brain-gates one-at-a-time into random brain gates (and
they luckily keep flipping the way they would have)? Alice's deep
behavior changes - she gradually stops responding to her environment,
although her outward behavior makes it look like she still does - but
clearly there's nothing within Alice "noticing" the change. We
certainly can't imagine (as Searle wants to) that Alice is internally
feeling her consciousness slip away, but is powerless to cry out, etc.

It's tempting to say that this argument simply shows us that Lucky
Alice must be conscious after all, but that's just the other horn of
the dilemma. The mechanist-materialist can only talk about
consciousness in computational / physical terms. For Dennett, if you
say that Alice is "aware", you must be able to translate this into
mechanistic terms. And I can't see any mechanistic sense in which
Lucky Alice can be said to be "aware" of anything.

I prefer to just say that Bruno's thought experiment shows that
there's something wrong with mechanism-materialism, but it's not
obvious (yet) what the solution is.

-- Kory

Kory Heath

Nov 20, 2008, 3:16:36 AM
to everyth...@googlegroups.com

On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
> So I'm puzzled as to how answer Bruno's question. In general I
> don't believe in
> zombies, but that's in the same way I don't believe my glass of
> water will
> freeze at 20degC. It's an opinion about what is likely, not what is
> possible.

I take this to mean that you're uncomfortable with thought experiments
which revolve around logically possible but exceedingly unlikely
events. I think that's understandable, but ultimately, I'm on the
philosopher's side. It really is logically possible - although
exceedingly unlikely - for a random-number-generator to cause a robot
to walk around, talk to people, etc. It really is logically possible
for a computer program to use a random-number-generator to generate a
lattice of changing bits that "follows" Conway's Life rule. Mechanism
and materialism need to answer questions about these scenarios,
regardless of how unlikely they are.
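A minimal sketch of that Conway's Life scenario (my own illustration;
the grid size and wrap-around boundary are arbitrary choices): generate
each successive lattice purely at random, then check whether, by sheer
luck, it equals what the Life rule would have produced from the
previous one:

    import random

    N = 8   # small toroidal grid

    def life_step(grid):
        nxt = [[0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                nbrs = sum(grid[(i + di) % N][(j + dj) % N]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                nxt[i][j] = 1 if nbrs == 3 or (grid[i][j] and nbrs == 2) else 0
        return nxt

    def random_grid():
        return [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

    prev, cur = random_grid(), random_grid()   # 'cur' ignores 'prev' entirely
    print("lucky Life step?", cur == life_step(prev))   # almost surely False

Nothing forbids the check from succeeding; it is just astronomically
unlikely, which is exactly the kind of event the thought experiment
trades on.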

-- Kory

John Mikes

Nov 20, 2008, 10:40:42 AM
to everyth...@googlegroups.com
On 11/19/08, Bruno Marchal <mar...@ulb.ac.be> wrote:
>..." Keep in mind we try to refute the
> conjunction MECH and MAT.>
> Nevertheless your intuition below is mainly correct, but the point is
> that accepting it really works, AND keeping MECH, will force us to
> negate MAT.
>
> Bruno
> http://iridia.ulb.ac.be/~marchal/
----and lots of other things in the discussion.----


the concept of "Zombie" emerged as questioned. Thinking about
it, (I dislike the entire field together with 'thought-experiments' and
the fairy-tale processes of differentiated teleportations, etc.)
I concluded that a 'zombie' as used mostly, is a 'person(??)' with
NO HUMAN CONSCIOUSNESS (whatever WE included in the 'C'
term). I am willing to expand on it: a (humanly) zombie MAY HAVE
mental functions beyond the (excluded) select ones WE use in our
present potential as 'thinking humans'. It needs it, since assumed are
the activities that must be directed by some form of mentality
(call it 'physical?' ones). - "Zombie does..."

It boils down to my overall somewhat negative position (although
I have no better one) of UDA, MGA, comp, etc. - all of them are
products of HUMAN thinking and restrictions as WE can imagine
the unfathomable existence (the totality - real TOE).
I find it a 'cousin' of the reductionistic conventional sciences, just
a bit 'freed up'. Maybe a distant cousin. Meaning: it handles the
totality WITHIN the framework of our limited (human) logic(s).

The "list's" said 100 years 'ahead ways' of thinking (Bruno's 200)
is still a mental activity of the NOW existing minds.

Alas, we cannot do better. I just want to take all this mental
exercise with the grain of salt of "there may be more to all of it"
what we cannot even fancy (imagine, fantasize of) today,
with our mind anchored in our restrictions. (Including 'digital',
'numbers', learned wisdom, etc.).

Sorry if I offended anyone on the list, it was not intended.
I am not up to the level of the list, just 'freed up' my thinking
into allowing further (unknown?) domains into our ignorance.
I call it 'my' scientific agnosticism.

John M


Bruno Marchal

Nov 20, 2008, 12:47:32 PM
to everyth...@googlegroups.com


No inner narrative, no inner image, no inner memory, no inner
sensation, no qualia, no subject, no first-person notions at all. OK.

>
> Time and circumstance play a part in this. As Bruno pointed out a
> cardboard
> cutout of a person's photograph could be a zombie for a moment. I
> assume the
> point of the exam is that an exam is long enough in duration and
> complex enough
> that it rules out the accidental, cutout zombie.

Well, given that it is a thought experiment, the resources are free,
and I can make the lucky cosmic explosion as lucky as you need to
make Alice apparently alive, and, with COMP+MAT, indeed alive. All
her neurons break down all the time, and, because she is so lucky, an
event which occurred 10 billion years before sends to her, at all the
right moments and places (and thus this is certainly NOT random), the
lucky ray-plumber which momentarily fixes the problem by triggering
the other neurons to which the broken gate was supposed to send the
info (for example). Keeping COMP and MAT, making her unconscious here
would be equivalent to giving Alice's neurons a sort of physical prescience.


> But then Alice has her normal
> behavior restored by a cosmic ray shower that is just as improbable
> as the
> accidental zombie, i.e. she is, for the duration of the shower, an
> accidental
> zombie.


Well, with Telmo's solution of the "MGA 1bis exercise", where only the
motor output neurons are fixed and no internal neuron is fixed
(i.e. almost all the neurons), with MECH + MAT, Alice has no working
brain at all, is only a lucky puppet, and she has to be a zombie. But
in the original problem, all neurons are fixed, and then I would say
Alice is not a zombie (if not, you give a magical physical prescience
to the neurons).

But now, you are right that in both cases the luck can only be
accidental. If, in the same thought experiment, we keep the exact same
lucky cosmic explosion, but now give a phone call to the teacher
or to Alice, so that she moves 1 mm away from the position she had in
the previous version, she will miss the lucky rays; most probably some
will go through in the wrong places, and most probably she will fail
the exam, and perhaps even die. So you are right: in Telmo's solution
of the "MGA 1bis exercise" she is an accidental zombie. But in the
original MGA 1, she should remain conscious (with MECH and MAT), even
if accidentally so.


>
>
> So I'm puzzled as to how answer Bruno's question.

Hope it is clear for everyone now?

> In general I don't believe in
> zombies, but that's in the same way I don't believe my glass of
> water will
> freeze at 20degC. It's an opinion about what is likely, not what is
> possible.

OK. Accidental zombies are possible, but very unlikely (but wait
for MGA 2 for a lessening of this statement).
Accidental consciousness (like in MGA 1, with MECH+MAT) is also
possible, and just as unlikely (same remark).

Of course, however unlikely it may be, nobody can test whether someone
else is "really conscious" or is an accidental zombie, because for any
series of tests you can imagine, you can conceive a sufficiently lucky
cosmic explosion.


>
> It seems similar to the question, could I have gotten in my car and
> driven to
> the store, bought something, and driven back and yet not be
> conscious of it.
> It's highly unlikely, yet people apparently have done such things.

(I think something different occurs here, concerning the intensity of
attention with respect to different conscious streams, but it is
off-topic, I think).


Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

Nov 20, 2008, 1:03:16 PM
to everyth...@googlegroups.com
On 19 Nov 2008, at 23:26, Jason Resch wrote:



On Wed, Nov 19, 2008 at 1:55 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:

On 19 Nov 2008, at 20:17, Jason Resch wrote:

To add some clarification, I do not think spreading Alice's logic gates across a field and allowing cosmic rays to cause each gate to perform the same computations that they would had they existed in her functioning brain would be conscious.  I think this because in isolation the logic gates are not computing anything complex, only AND, OR, NAND operations, etc.  This is why I believe rocks are not conscious, the collisions of their molecules may be performing simple computations, but they are never aggregated into complex patterns to compute over a large set of information.


Actually I agree with this argument. But it does not concern Alice, because I have provide her with an incredible amount of luck. The lucky rays  fix the neurons in a genuine way (by that abnormally big amount of pure luck). 

If the cosmic rays are simply keeping her neurons working normally, then I'm more inclined to believe she remains conscious, but I'm not certain one way or the other.


I have no certainty either. But this, I feel, is related to my instinctive, rather big, uncertainty about the assumptions MECH and MAT. Now if both MECH and MAT are, naively enough perhaps, assumed to be completely true, I think I have no reason for not attributing consciousness to Alice. If not, MECH breaks down, because I have to endow the neurons with some prescience. The physical activity is the same, as far as it serves to instantiate a computation (cf. the "qua computatio").



 
If you doubt Alice remain conscious, how could you accept an experience of simple teleportation (UDA step 1 or 2). If you can recover consciousness from a relative digital description, how could that consciousness distinguish between a recovery from a genuine description send from earth (say), and a recovery from a description luckily generated by a random process?

I believe consciousness can be recovered from a digital description, but I don't believe the description itself is conscious while being beamed from one teleporting station to the other.  I think it is only when the body/computer simulation is instantiated can consciousness recovered from the description.


I agree. No one said that the description was conscious. Only that consciousness is related to a physical instantiation of a computation, which unluckily breaks down all the time, but is fixed, at the genuine places and moments, by an incredibly big (but finite) amount of luck (assuming, consciously, MECH+MAT).




Consider sending the description over an encrypted channel, without the right decryption algorithm and key the description can't be differentiated from random noise.  The same bits could be interpreted entirely differently depending completely on how the recipient uses it.  The "meaning" of the transmission is recovered when it forms a system with complex relations, presumably the same relations as the original one that was teleported, even though it may be running on a different physical substrate, or a different computer architecture.


No problem. I agree.




I don't deny that a random process could be the source of a transmission that resulted in the creation of a conscious being, what I deny is that random *simple computations, lacking any causal linkages, could form consciousness.



The way the lucky rays fixed Alice's neurons illustrates that they were not random at all. That is why Alice is so lucky!




 
* By simple I mean the types of computation done in discrete steps, such as multiplication, addition, etc.  Those done by a single neuron or a small collection of logic gates.

If you recover from a description (comp), you cannot know if that description has been generated by a computation or a random process, unless you give some prescience to the logical gates. Keep in mind we try to refute the conjunction MECH and MAT.

Here I would say that consciousness is not correlated with the physical description at any point in time, but rather the computational history and flow of information, and that this is responsible for the subjective experience of being Alice.  If Alice's mind is described by a random process, albeit one which gives the appearance of consciousness during her exam, she nevertheless has no coherent computational history and her mind contains no large scale informational structures.


If it were random, sure. But it was not. More will be said in MGA 2.




 The state machine that would represent her in the case of injection of random noise is a different state machine that would represent her normally functioning brain. 


Bruno Marchal

Nov 20, 2008, 1:07:36 PM
to everyth...@googlegroups.com


Right.

I guess you see that such a zombie is an accidental zombie. We will
have to come back later on this "accidental" part.

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

Nov 20, 2008, 1:38:18 PM
to everyth...@googlegroups.com
Kory Heath wrote:
>
> On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
>> So I'm puzzled as to how answer Bruno's question. In general I
>> don't believe in
>> zombies, but that's in the same way I don't believe my glass of
>> water will
>> freeze at 20degC. It's an opinion about what is likely, not what is
>> possible.
>
> I take this to mean that you're uncomfortable with thought experiments
> which revolve around logically possible but exceedingly unlikely
> events.

I think you really mean nomologically possible. I'm not uncomfortable with
them, I just maintain a little skepticism. For one thing, what is nomologically
possible or impossible is often reassessed. Less than a century ago the
experimental results of Elitzur, Vaidman, Zeilinger, et al., on delayed choice,
interaction-free measurement, and other QM phenomena would all have been
dismissed in advance as "logically" impossible.

>I think that's understandable, but ultimately, I'm on the
> philosopher's side. It really is logically possible - although
> exceedingly unlikely - for a random-number-generator to cause a robot
> to walk around, talk to people, etc. It really is logically possible
> for a computer program to use a random-number-generator to generate a
> lattice of changing bits that "follows" Conway's Life rule. Mechanism
> and materialism needs to answer questions about these scenarios,
> regardless of how unlikely they are.

I don't disagree with that. My puzzlement about how to answer Bruno's question
comes from the ambiguity as to what we mean by a philosophical zombie. Do we
mean its outward actions are the same as a conscious person? For how long?
Under what circumstances? I can easily make a robot that acts just like a
sleeping person. I think Dennett changes the question by referring to
neurophysiological "actions". Does he suppose wetware can't be replaced by
hardware?

In general when I'm asked if I believe in philosophical zombies, I say no,
because I'm thinking that the zombie must outwardly behave like a conscious
person in all circumstances over an indefinite period of time, yet have no inner
experience. I rule out an accidental zombie accomplishing this as too improbable
- not impossible. In other words, if I were constructing a robot that had to act
as a conscious person would over a long period of time in a wide variety of
circumstances, I would have to build into the robot some kind of inner attention
module that selected what was important to remember, compressed it into a short
representation, and linked it to other memories. And this would be an inner
narrative. Similarly for the other "inner" processes. I don't know if that's
really what it takes to build a conscious robot, but I'm pretty sure it's
something like that. And I think once we understand how to do this, we'll stop
worrying about "the hard problem of consciousness". Instead we'll talk about
how efficient the inner narration module is or the memory confabulation module
or the visual imagination module. Talk about consciousness will seem as quaint
as talk about the elan vital does now.

Brent


>
> -- Kory
>
>
> >
>

Bruno Marchal

Nov 20, 2008, 1:52:14 PM
to everyth...@googlegroups.com


I am afraid you already suspect too much the contradictory
nature of MECH+MAT.
Take the reasoning as a game. Try to keep both MECH and MAT; the game
consists in showing as clearly as possible what will go wrong.
The goal is to help the others to understand, or to find an error
(fatal or fixable: in both cases we learn).


>
>
> It sounds like many people are under the impression that mechanism-
> materialism, with it's rejection of zombies, is committed to the view
> that Lucky Alice must be conscious, because she's behaviorally
> indistinguishable from the Alice with the correctly-functioning brain.
>
> But, in the sense that matters, Lucky Alice is *not* behaviorally
> indistinguishable from fully-functional Alice.

You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
original Alice, well I mean the one in MGA 1, is functionally
identical at the right level of description (actually she already has
a digital brain). The physical instantiation of the computation is
completely realized. No neuron can "know" that the info (correct and
at the right places) does not come from the relevant neurons, but from
a lucky beam.

> For the mechanist-
> materialist, everything physical counts as "behavior". And there is a
> clear physical difference between the two Alices, which would be
> physically discoverable by a nearby scientist with the proper
> instruments.

But the physical difference does not play a role. If you invoke it,
how could you accept saying yes to a doctor, who introduces a bigger
difference?

>
>
> Lets imagine that, during the time that Alice's brain is broken but
> "luckily" acting as though it wasn't due to cosmic rays, someone
> throws a ball at Alice's head, and she ("luckily") ducks out of the
> way. The mechanist-materialist may be happy to agree that she did
> indeed "duck out of the way", since that's just a description of what
> her body did.

OK, for both the ALICE of Telmo's solution of MGA 1bis and the ALICE of MGA 1.


> But the mechanist-materialist can (and must) claim that
> Lucky Alice did not in fact respond to the ball at all.

Consciously or privately? Certainly not for ALICE MGA 1bis. But why
not for ALICE MGA 1? Please remember to try to naively, or candidly
enough, keep both MECH and MAT in mind. You are already reasoning
as if we were concluding some definitive things, but we are just
trying to build an argument. In the end, you will say: I knew it, but
the point is helping the others to "know" it too. Many here already
have the good intuition, I think. The point is to make that
intuition as communicable as possible.

> And that
> statement can be translated into pure physics-talk. The movements of
> Alice's body in this case are being caused by the cosmic rays. They
> are causally disconnected from the movements of the ball (except in
> the incidental way that the ball might be having some causal effect on
> the cosmic rays).


More on this after MGA 2. Hopefully tomorrow.

> When Alice's brain is working properly, her act of
> ducking *is* causally connected to the movement of the ball. And this
> kind of causal connection is an important part of what the mechanist-
> materialist means by "consciousness".

Careful: that kind of causality needs ... MAT.

>
>
> Dennett is able to - and in fact must - say that Alice is not
> conscious when all of her brain-gates are broken but very luckily
> being flipped by cosmic rays. When Dennett says that someone is
> conscious, he is referring precisely to these behavioral competences
> that can be described in physical terms.

You see.

> He means that this collection
> of physical stuff we call Alice really is responding to her immediate
> environment (like the ball), observing things, collecting data, etc.
> In that very objective sense, Lucky Alice is not responding to the
> ball at all. She's not conscious by Dennett's physicalist definition
> of consciousness. But she's also not a zombie, because she is behaving
> differently than fully-functional Alice. You just have to be able to
> have the proper instruments to know it.
>
> If you still think that Dennett would claim that Lucky Alice is a
> zombie, take a look at this quote from http://ase.tufts.edu/cogstud/papers/zombic.htm
> : "Just remember, by definition, a zombie behaves indistinguishably
> from a conscious being–in all possible tests, including not only
> answers to questions [as in the Turing test] but psychophysical tests,
> neurophysiological tests–all tests that any 'third-person' science can
> devise." Lucky Alice does *not* behave indistinguishably from a
> conscious being in all possible tests.

By definition, I would say, she does. Of course, this makes sense
with MECH only above the substitution level. But at that level, a
neurophysiologist looking at the details would see the neurons doing
their job. Only, he will also see some neurons breaking down and
then being fixed, not by an internal biological fixing mechanism (as
occurs all the time in biological systems), but by a lucky beam; and
despite this, and thanks to this, the brain of Alice (MGA 1) does the
entire normal, usual work. If not, you introduce a kind of magic which,
if it existed, would prevent me from saying yes to any doctor.


> The proper third-person test
> examining her logic gates would show that she is not responding to her
> immediate environment at all. Dennett should claim that she's a non-
> conscious non-zombie.
>
> Nevertheless, I think Bruno's thought experiment causes a problem for
> the mechanist-materialist, as it is supposed to. If we believe that
> the fully-functional Alice is conscious and the random-gate-brain
> Alice is not conscious, what happens when we start turning Alice's
> functioning brain-gates one-at-a-time into random brain gates (and
> they luckily keep flipping the way they would have)? Alice's deep
> behavior changes - she gradually stops responding to her environment,
> although her outward behavior makes it look like she still does - but
> clearly there's nothing within Alice "noticing" the change. We
> certainly can't imagine (as Searle wants to) that Alice is internally
> feeling her consciousness slip away, but is powerless to cry out, etc.
>
> It's tempting to say that this argument simply shows us that Lucky
> Alice must be conscious after all, but that's just the other horn of
> the dilemma. The mechanist-materialist can only talk about
> consciousness in computational / physical terms. For Dennett, if you
> say that Alice is "aware", you must be able to translate this into
> mechanistic terms. And I can't see any mechanistic sense in which
> Lucky Alice can be said to be "aware" of anything.

Alice MGA 1 can be said to be aware in the original mechanist sense.
When she thought "Oh, the math problem is easy", she triggered the right
memories in her brain, with the correct physical activity, even if
only luckily in that case.

>
>
> I prefer to just say that Bruno's thought experiment shows that
> there's something wrong with mechanism-materialism, but it's not
> obvious (yet) what the solution is.


And things will be even more confusing after MGA 2, but that's the
goal. MECH + MAT should give a contradiction; we will extract
weirder and weirder propositions until the contradiction is
utterly clear. OK?

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

Nov 20, 2008, 3:05:49 PM
to everyth...@googlegroups.com
Hi John,


> It boils down to my overall somewhat negative position (although
> I have no better one) of UDA, MPG, comp, etc. - all of them are
> products of HUMAN thinking and restrictions as WE can imagine
> the unfathomable existence (the totality - real TOE).
> I find it a 'cousin' of the reductionistic conventional sciences, just
> a bit 'freed up'. Maybe a distant cousin. Meaning: it handles the
> totality WITHIN the framework of our limited (human) logic(s).


I think that Human logic is already progress compared to Russian, or
Belgian, or Hungarian, or American logic, or ...

And then you know how much I agree with you, once you substitute
"human" by "lobian" (where a lobian machine/number is a universal
machine who knows she is universal, and bets she is a machine).


> Alas, we cannot do better.

I'm afraid so. Thanks for acknowledging.


> just want to take all this mental
> exercise with the grain of salt of "there may be more to all of it"


Sure. And if we take ourselves too seriously, we can miss the
ultimate cosmic divine joke (if there is one).

>
> what we cannot even fancy (imagine, fantasize of) today,
> with our mind anchored in our restrictions. (Including 'digital',
> 'numbers', learned wisdom, etc.).


Be careful and be open to your own philosophy. The idea that "digital"
and "numbers" (the concepts, not our human descriptions of them) are
restrictions could be due to our human prejudice. Maybe a machine
could one day believe this is a form of unfounded prejudicial
exclusion.

I hope you don't mind my frank attitude, and I wish you the best,

Bruno
http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Nov 20, 2008, 3:27:29 PM11/20/08
to everyth...@googlegroups.com

On 19 Nov 2008, at 20:37, Michael Rosefield wrote:

> Are not logic gates black boxes, though? Does it really matter what
> happens between Input and Output? In which case, it has absolutely
> no bearing on Alice's consciousness whether the gate's a neuron, an
> electronic doodah, a team of well-trained monkeys or a lucky quantum
> event or synchronicity.


Good summary.

> It does not matter, really, where or when the actions of the gate
> take place.


As long as they represent, physically or materially, the relevant
computation, assuming MEC+MAT. OK.

MGA 2 will take one more step toward the idea that the materiality
cannot play a relevant part in the computation. I will try to do MGA 2
tomorrow. (It is 21h22m23s here, I mean 9h22m31s pm :).

I have to solve a conflict between two ways to make the MGA 2. If I
don't succeed, I will make both.

Thanks for trying to understand,

Bruno


http://iridia.ulb.ac.be/~marchal/

Jason Resch

unread,
Nov 20, 2008, 3:27:32 PM11/20/08
to everyth...@googlegroups.com
On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:



 The state machine that would represent her in the case of injection of random noise is a different state machine than the one that would represent her normally functioning brain. 


Absolutely so.



Bruno,

What about the state machine that included the injection of "lucky" noise from an outside source vs. one in which all information was derived internally from the operation of the state machine itself?  Would those two differently defined machines not differ and compute something different?  Even though the computations are identical the information that is being computed comes from different sources and so carries with it a different "connotation".  Though the bits injected are identical, they inherently imply a different meaning because the state machine in the case of injection has a different structure than that of her normally operating brain.  I believe the brain can be abstracted as a computer/information processing system, but it is not simply the computations and the inputs into the logic gates at each step that are important, but also the source of the input bits, otherwise the computation isn't the same.
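
To make this distinction concrete, here is a minimal sketch in Python (purely illustrative; the toy transition rule and the run_closed / run_injected helpers are hypothetical names, not anything defined in this thread). One machine computes each next state internally from its own current state; the other has its states injected from an external stream that merely happens to match. The two traces are indistinguishable, yet the two machines, as defined, are different objects.

# Hypothetical illustration: identical traces, differently structured machines.

def step(state):
    # Toy internal transition rule.
    return (3 * state + 1) % 8

def run_closed(initial, n):
    # Closed machine: every transition is derived internally from its own state.
    trace, state = [initial], initial
    for _ in range(n):
        state = step(state)
        trace.append(state)
    return trace

def run_injected(initial, injected_states):
    # Open machine: states are supplied by an external source that ignores
    # the machine's own dynamics. If the injected stream "luckily" equals
    # what step() would have produced, the trace is identical even though
    # the machine's structure (its definition) is different.
    return [initial] + list(injected_states)

closed = run_closed(5, 10)
lucky_stream = closed[1:]               # the externally injected, "lucky" values
injected = run_injected(5, lucky_stream)
print(closed == injected)               # True: the traces cannot be told apart

Whether that purely structural difference carries any "connotation" for the machine itself is exactly the question under discussion here.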

Jason

Gordon Tsai

unread,
Nov 20, 2008, 3:40:42 PM11/20/08
to everyth...@googlegroups.com
Bruno:
   I think you and John touched the fundamental issues of human rationality. It's a dilemma encountered by phenomenology. Now I have a question: In theory we can't distinguish ourselves from a Lobian Machine. But can lobian machines truly have sufficiently rich experiences like humans? For example, is it possible for a lobian machine to "still its mind" or "cease the computational logic" like some eastern philosophy suggested? Maybe any of the out-of-loop experience is still part of the computation/logic, just as our out-of-body experiences are actually the trick of brain chemicals?
 
Gordon 

--- On Thu, 11/20/08, Bruno Marchal <mar...@ulb.ac.be> wrote:
From: Bruno Marchal <mar...@ulb.ac.be>
Subject: Re: MGA 1
To: everyth...@googlegroups.com

Kory Heath

unread,
Nov 20, 2008, 5:35:13 PM11/20/08
to everyth...@googlegroups.com

On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
> I think you really mean nomologically possible.

I mean logically possible, but I'm happy to change it to
"nomologically possible" for the purposes of this conversation.

> I think Dennett changes the question by referring to
> neurophysiological "actions". Does he suppose wetware can't be
> replaced by
> hardware?

No, he definitely argues that wetware can be replaced by hardware, as
long as the hardware retains the computational functionality of the
wetware.

> In general when I'm asked if I believe in philosophical zombies, I
> say no,
> because I'm thinking that the zombie must outwardly behave like a
> conscious
> person in all circumstances over an indefinite period of time, yet
> have no inner
> experience. I rule out an accidental zombie accomplishing this as
> too improbable
> - not impossible.

I agree. But if you accept that it's nomologically possible for a
robot with a random-number-generator in its head to outwardly behave
like a conscious person in all circumstances over an indefinite period
of time, then your theory of consciousness, one way or another, has to
answer the question of whether or not this unlikely robot is
conscious. Now, maybe your answer is "The question is misguided in
that case, and here's why..." But that's a significant burden.

-- Kory

Brent Meeker

unread,
Nov 20, 2008, 6:33:01 PM11/20/08
to everyth...@googlegroups.com
Kory Heath wrote:
>
> On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
>> I think you really mean nomologically possible.
>
> I mean logically possible, but I'm happy to change it to
> "nomologically possible" for the purposes of this conversation.

Doesn't the question go away if it is nomologically impossible?

>
>> I think Dennett changes the question by referring to
>> neurophysiological "actions". Does he suppose wetware can't be
>> replaced by
>> hardware?
>
> No, he definitely argues that wetware can be replaced by hardware, as
> long as the hardware retains the computational functionality of the
> wetware.

But that's the catch. Computational functionality is a capacity, not a fact.
Does a random number generator have computational functionality just in case it
(accidentally) computes something? I would say it does not. But tying the
concept of zombie to a capacity, rather than to observed behavior, makes a
difference in Bruno's question.

>
>> In general when I'm asked if I believe in philosophical zombies, I
>> say no,
>> because I'm thinking that the zombie must outwardly behave like a
>> conscious
>> person in all circumstances over an indefinite period of time, yet
>> have no inner
>> experience. I rule out an accidental zombie accomplishing this as
>> too improbable
>> - not impossible.
>
> I agree. But if you accept that it's nomologically possible for a
> robot with a random-number-generator in its head to outwardly behave
> like a conscious person in all circumstances over an indefinite period
> of time, then your theory of consciousness, one way or another, has to
> answer the question of whether or not this unlikely robot is
> conscious. Now, maybe your answer is "The question is misguided in
> that case, and here's why..." But that's a significant burden.

I would regard it as an empirical question about how the robot's brain worked.
If the brain processed perceptual and memory data to produce the behavior, as in
Jason's causal relations, I would say it is conscious in some sense (I think
there are different kinds of consciousness, as evidenced by Bruno's list of
first-person experiences). If it were a random number generator, i.e.
accidental behavior, I'd say not. Observing the robot for some period of time,
in some circumstances can provide strong evidence against the "accidental"
hypothesis, but it cannot rule it out completely.

Brent

Kory Heath

unread,
Nov 21, 2008, 12:05:58 AM11/21/08
to everyth...@googlegroups.com

On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
> Doesn't the question go away if it is nomologically impossible?

I'm sort of the opposite of you on this issue. You don't like to use
the term "logically possible", while I don't like to use the term
"nomologically impossible". I don't see the relevance of nomological
possibility to any philosophical question I'm interested in. For
anything that's nomologically impossible, I can just imagine a
cellular automaton or some other computational or mathematical
"physics" in which that thing is nomologically possible. And then I
can just imagine physically instantiating that universe on one of our
real computers. And then all of my philosophical questions still apply.

I can certainly imagine objections to that viewpoint. But life is
short. My point was that, since you already agreed that it's
nomologically possible for a random robot to outwardly behave like a
conscious person for some indefinite period of time, we can sidestep
the (probably interesting) discussion we might have about nomological
vs. logical possibility in this case.

> Does a random number generator have computational functionality just
> in case it
> (accidentally) computes something? I would say it does not. But
> referring the
> concept of zombie to a capacity, rather than observed behavior,
> makes a
> difference in Bruno's question.

I think that Dennett explicitly refers to computational capacities
when talking about consciousness (and zombies), and I follow him. But
Dennett's point is that computational capacity is always, in
principle, observed behavior - or, at least, behavior that can be
observed. In the case of Lucky Alice, if you had the right tools, you
could examine the neurons and see - based on how they were behaving! -
that they were not causally connected to each other. (The fact that a
neuron is being triggered by a cosmic ray rather than by a neighboring
neuron is an observable part of its behavior.) That observed behavior
would allow you to conclude that this brain does not have the
computational capacity to compute the answers to a math test, or to
compute the trajectory of a ball.

> I would regard it as an empirical question about how the robots
> brain worked.
> If the brain processed perceptual and memory data to produce the
> behavior, as in
> Jason's causal relations, I would say it is conscious in some sense
> (I think
> there are different kinds of consciousness, as evidenced by Bruno's
> list of
> first-person experiences). If it were a random number generator, i.e.
> accidental behavior, I'd say not.

I agree. But why do you say you're puzzled about how to answer Bruno's
question about Lucky Alice? I think you just answered it - for you,
Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice
is different than a robot with a random-number-generator in its head?
I don't.)

-- Kory

Brent Meeker

unread,
Nov 21, 2008, 1:44:32 AM11/21/08
to everyth...@googlegroups.com

I think Alice is different. She has the capacity to be conscious. This is
potentially, temporarily interrupted by some mysterious failure of gates (or
neurons) in her brain - but wait, these failures are serendipitously canceled
out by a burst of cosmic rays, so they all get the same input/output as if
nothing had happened. So, functionally, it's as if the gates didn't fail at
all. This functionality is beyond external behavior; it includes forming
memories, paying attention, etc. Of course we may say it is not causally
related to Alice's environment, but this depends on a certain theory of
causality, a physical theory. If the cosmic rays exactly replace all the gate
functions to maintain the same causal chains then from an informational
perspective we might say the rays were caused by the relations to her environment.

Brent


Kory Heath

unread,
Nov 21, 2008, 4:45:36 AM11/21/08
to everyth...@googlegroups.com

On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote:
> I am afraid you are already too suspicious of the contradictory
> nature of MEC+MAT.
> Take the reasoning as a game. Try to keep both MEC and MAT; the game
> consists in showing as clearly as possible what will go wrong.

I understand what you're saying, and I accept the rules of the game. I
*am* trying to keep both MEC and MAT. But it seems as though we differ
on how we understand MEC and MAT, because in my understanding,
mechanist-materialists should say that Bruno's Lucky Alice is not
conscious (for the same reason that Telmo's Lucky Alice is not
conscious).

> You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
> original Alice, well I mean the one in MGA 1, is functionally
> identical at the right level of description (actually she already has
> a digital brain). The physical instantiation of a computation is
> completely realized. No neuron can "know" that the info (correct and
> at the right places) does not come from the relevant neurons, but from
> a lucky beam.

I agree that the neurons don't "know" or "care" where their inputs are
coming from. They just get their inputs, perform their computations,
and send their outputs. But when it comes to the functional, physical
behavior of Alice's whole brain, the mechanist-materialist is
certainly allowed (indeed, forced) to talk about where each neuron's
input is coming from. That's a part of the computational picture.

I see the point that you're making. Each neuron receives some input,
performs some computation, and then produces some output. We're
imagining that every neuron has been disconnected from its inputs, but
that cosmic rays have luckily produced the exact same input that the
previously connected neurons would have produced. You're arguing that
since every neuron is performing the exact same computations that it
would have performed anyway, the two situations are computationally
identical.

But I don't think that's correct. I think that plain old, garden
variety mechanism-materialism has an easy way of saying that Lucky
Alice's brain, viewed as a whole system, is not performing the same
computations that fully-functioning Alice's brain is. None of the
neurons in Lucky Alice's brain are even causally connected to each
other. That's a pretty big computational difference!

I am arguing, in essence, that for the mechanist-materialist,
"causality" is an important aspect of computation and consciousness.
Maybe your goal is to show that there's something deeply wrong with
that idea, or with the idea of "causality" itself. But we're supposed
to be starting from a foundation of MEC and MAT.

Are you saying that the mechanist-materialist *does* say that Lucky
Alice is conscious, or only that the mechanist-materialist *should*
say it? Because if you're saying the latter, then I'm "playing the
game" better than you are! I'm pretty sure that Dennett (and the other
mechanist-materialists I've read) would say that Lucky Alice is not
conscious, and for them, they have a perfectly straightforward way of
explaining what they *mean* when they say that she's not conscious.
They mean (among other things) that the actions of her neurons are not
being affected at all by the paper lying in front of her on the table,
or the ball flying at her head. For Dennett, it's practically a non-
sequitur to say that she's conscious of a ball that's not affecting
her brain.

> But the physical difference does not play a role.

It depends on what you mean by "play a role". You're right that the
physical difference (very luckily) didn't change what the neurons did.
It just so happens that the neurons did exactly what they were going
to do anyway. But the *cause* of why the neurons did what they did is
totally different. The action of each individual neuron was caused by
cosmic rays rather than by neighboring neurons. You seem to be asking,
"Why should this difference play any role in whether or not Alice was
conscious?" But for the mechanist-materialist, the difference is
primary. Those kinds of causal connections are a fundamental part of
what they *mean* when they say that something is conscious.

> If you invoke it,
> how could you accept saying yes to a doctor, who introduce bigger
> difference?

Do you mean the "teleportation doctor", who makes a copy of me,
destroys me, and then reconstructs me somewhere else using the copied
information? That case is not problematic in the way that Lucky Alice
is, because there is an unbroken causal chain between the "new" me and
the "old" me. What's problematic about Lucky Alice is the fact that
her ducking out of the way of the ball (the movements of her eyes, the
look of surprise, etc.) has nothing to do with the ball, and yet
somehow she's still supposed to be conscious of the ball.

A much closer analogy to Lucky Alice would be if the doctor
accidentally destroys me without making the copy, turns on the
receiving teleporter in desperation, and then the exact copy that
would have appeared anyway steps out, because (luckily!) cosmic rays
hit the receiver's mechanisms in just the right way. I actually find
this thought experiment more persuasive than Lucky Alice (although I'm
sure some will argue that they're identical). At the very least, the
mechanist-materialist has to say that the resulting Lucky Kory is
conscious. I think it's also clear that Lucky Kory's consciousness
must be exactly what it would have been if the teleportation had
worked correctly. This does in fact lead me to feel that maybe
causality shouldn't have any bearing on consciousness after all.

However, the materialist-mechanist still has some grounds to say that
there's something interestingly different about Lucky Kory than
Original Kory. It is a physical fact of the matter that Lucky Kory is
not causally connected to Pre-Teleportation Kory. When someone asks
Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
says, "Because of something I learned when I was ten years old", Lucky
Kory's statement is quite literally false. Lucky Kory ties his shoes
that way because of some cosmic rays. I actually don't know what the
standard mechanist-materialist way of viewing this situation is. But
it does seem to suggest that maybe breaks in the causal chain
shouldn't affect consciousness after all.

And of course, we can turn the screws in the usual way. If we can do
Lucky Teleportation once, we can do it once a day, and then once an
hour, and then once a second, and so on, until eventually we just have
nothing but random numbers, and if those random numbers happen to look
like Kory, aren't they just as conscious as Lucky Kory was? But this
doesn't convince me (yet) that Lucky Alice should be viewed as
conscious after all. It just convinces me (again) that there's
something weird about the mechanistic-materialist view of
consciousness. Or about the materialist's view of "causality".

>> But the mechanist-materialist can (and must) claim that
>> Lucky Alice did not in fact respond to the ball at all.
>
> Consciously or privately?

Physically! By the definition of the thought experiment, it is a
physical fact that no neuron in Alice's head responded to the ball (in
the indirect way that they normally would have if she were wired
correctly). Whether or not she had a conscious experience of a ball is
a different question.

>> When Alice's brain is working properly, her act of
>> ducking *is* causally connected to the movement of the ball. And this
>> kind of causal connection is an important part of what the mechanist-
>> materialist means by "consciousness".
>
> Careful: such kind of causality needs ... MAT.

Yes, of course. But we're *supposed* to be considering the question in
the context of MAT.

> But at that level, a
> neurophysiologist looking at the details would see the neurons doing
> their job. Only, he will also see some neurons breaking down, and
> then being fixed, not by an internal biological fixing mechanism (as
> occurs all the time in biological systems) but by a lucky beam; and,
> despite this, and thanks to this, the brain of Alice (MGA 1) does its
> entire normal usual work.

What do you mean by "fixed"? If the cosmic rays "fix" the neurons so
that they are able to respond to the input of their neighboring
neurons as they're supposed to, then I've misunderstood the thought
experiment. But if you mean that the cosmic rays "fix" the neurons by
(very luckily) sending them the same inputs that they would have
received from their neighboring neurons, then I don't agree that the
neurophysiologist looking at the details would conclude that the
neurons are doing their job, or that the brain of Alice MGA 1 is doing
its entire normal usual work. He would conclude that the brain is not
physically reacting to the pencil or the paper or the ball at all. For
a mechanist, how can a person be aware of a ball if not a single
neuron in her head is physically reacting to that ball?

>> The mechanist-materialist can only talk about
>> consciousness in computational / physical terms. For Dennett, if you
>> say that Alice is "aware", you must be able to translate this into
>> mechanistic terms. And I can't see any mechanistic sense in which
>> Lucky Alice can be said to be "aware" of anything.
>
> Alice MGA 1 can be said to be aware in the original mechanist sense.
> When she thought "Oh the math problem is easy", she triggered the right
> memories in her brain, with the correct physical activity, even if
> just luckily in that case.

Memory is notoriously confusing, so lets keep talking about the ball.
What can a mechanist possibly mean by saying that Lucky Alice was
aware of the ball? By the definition of the thought experiment (unless
I've misunderstood it), every single neuron in Lucky Alice's brain is
being triggered by cosmic rays rather than by neighboring neurons. Not
a single action of any neuron (and therefore, not a single movement of
her body) has anything to do with the movement of the ball. All we can
say is that the neurons are (very improbably) being triggered in the
exact same way that they *would* have been triggered if they were
wired up correctly, and they were actually responding (indirectly) to
the light on her retinas, etc.

So what would it mean to say that, nevertheless, Lucky Alice is aware
of the ball? The only sense I can make of this is that, since each
individual neuron is doing exactly what it would have done anyway, the
same "experience" (qualia, whatever) results (or supervenes, or
whatever). But that's exactly the view of consciousness that Dennett
(the archetypical mechanist-materialist) has spent a lifetime arguing
against. For him, that would be a very magical view of consciousness.
For him, the "experience" of being aware of the ball, "deciding" to
duck, etc., is simply what it feels like to be a collection of neurons
responding to that ball. When he says, "This collection of neurons is
aware of that ball", he is saying, by definition, that that ball is
having causal effects on those neurons. (And not just the causal
effects that any physical object has on any nearby physical object.)

> And things will get even more confusing after MGA 2, but that's the
> goal. MEC + MAT should give a contradiction; we will extract weirder
> and weirder propositions until the contradiction is utterly clear. OK?

Of course I'm entirely on board with the spirit of your thought
experiment. You think MECH and MAT implies that Lucky Alice is
conscious, but I don't think it does. I'm not sure how important that
difference is. It seems substantial. But I can also predict where
you're going with your thought experiment, and it's the exact same
place I go. So by all means, continue on to MGA 2, and we'll see what
happens.

-- Kory

Bruno Marchal

unread,
Nov 21, 2008, 6:37:53 AM11/21/08
to everyth...@googlegroups.com

Jason,

Nice, you are anticipating MGA 2. So if you don't mind I will
"answer" your post in MGA 2, or in the comments you will perhaps make
afterward.

... asap.

Bruno


Le 20-nov.-08, à 21:27, Jason Resch a écrit :
http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Nov 21, 2008, 6:43:57 AM11/21/08
to everyth...@googlegroups.com
Hi Gordon,

Le 20-nov.-08, à 21:40, Gordon Tsai a écrit :

> Bruno:
>    I think you and John touched the fundamental issues of human
> rationality. It's a dilemma encountered by phenomenology. Now I have a
> question: In theory we can't distinguish ourselves from a Lobian
> Machine. But can lobian machines truly have sufficiently rich
> experiences like humans?

This is our assumption. Assuming comp, we are machines, so certainly
some machines can have our rich experiences. Indeed, us.

> For example, is it possible for a lobian machine to "still its
> mind" or "cease the computational logic" like some eastern philosophy
> suggested? Maybe any of the out-of-loop experience is still part of
> the computation/logic, just as our out-of-body experiences are
> actually the trick of brain chemicals?


Eventually we will be led to the idea that it is the "brain chemicals"
which are the result of a trick of "universal consciousness", but here
I am anticipating. Let us go carefully step by step.

I think I will have some time this afternoon to make MGA 2,

See you there ...

Bruno


http://iridia.ulb.ac.be/~marchal/

Stathis Papaioannou

unread,
Nov 21, 2008, 6:45:42 AM11/21/08
to everyth...@googlegroups.com
A variant of Chalmers' "Fading Qualia" argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.

Alice is sitting her exam, and a part of her brain stops working,
let's say the part of her occipital cortex which enables visual
perception of the exam paper. In that case, she would be unable to
complete the exam due to blindness. But if the neurons in her
occipital cortex are stimulated by random events such as cosmic rays
so that they pass on signals to the rest of the brain as they would
have normally, Alice won't know she's blind: she will believe she sees
the exam paper, will be able to read it correctly, and will answer the
questions just as she would have without any neurological or
electronic problem.

If Alice were replaced by a zombie, no-one else would notice, by
definition; also, Alice herself wouldn't notice, since a zombie is
incapable of noticing anything (it just behaves as if it does). But I
don't see how it is possible that Alice could be *partly* zombified,
behaving as if she has normal vision, believing she has normal vision,
and having all the cognitive processes that go along with normal
vision, while actually lacking any visual experiences at all. That
isn't consistent with the definition of a philosophical zombie.


--
Stathis Papaioannou

Kory Heath

unread,
Nov 21, 2008, 8:54:13 AM11/21/08
to everyth...@googlegroups.com

On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
> A variant of Chalmers' "Fading Qualia" argument
> (http://consc.net/papers/qualia.html) can be used to show Alice must
> be conscious.

The same argument can be used to show that Empty-Headed Alice must
also be conscious. (Empty-Headed Alice is the version where only
Alice's motor neurons are stimulated by cosmic rays, while all of the
other neurons in Alice's head do nothing. Alice's body continues to
act indistinguishably from the way it would have acted, but there's
nothing going on in the rest of Alice's brain, random or otherwise.
Telmo and Bruno have both indicated that they don't think this Alice
is conscious. Or at least, that a mechanist-materialist shouldn't
believe that this Alice is conscious.)

Let's assume that Lucky Alice is conscious. Every neuron in her head
(they're all artificial) has become causally disconnected from all the
others, but they (very improbably) continue to do exactly what they
would have done when they were connected, due to cosmic rays. Let's
say that we remove one of the neurons from Alice's head. This has no
effect on her outward behavior, or on the behavior of any of her other
neurons (since they're already causally disconnected). Of course, we
can remove two neurons, and then three, etc. We can remove her entire
visual cortex. This can't have any noticeable effect on her
consciousness, because the neurons that do remain go right on acting
the way they would have acted if the cortex was there. Eventually, we
can remove every neuron that isn't a motor neuron, so that we have an
empty-headed Alice whose body takes the exam, ducks when I throw the
ball at her head, etc.
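
The spectrum described here can be made concrete with a toy sketch in Python (everything in it is a hypothetical illustration, not part of anyone's argument in the thread: the three-gate network, the replay gates, and the helper names are invented for the example). Start with a small boolean network, record what each gate does on a fixed input history, then swap the gates one at a time for "lucky" replay gates that ignore their inputs and just emit the recorded values; finally drop everything but the "motor" gate. At every point along the spectrum the recorded outward behaviour is unchanged.

# Hypothetical illustration: swapping functional gates for "lucky" replay gates
# one at a time, then removing them, never changes the recorded behaviour.
import itertools

def run(gates, inputs):
    # Fixed wiring for this toy network; g3 plays the role of the "motor" gate.
    trace = []
    for a, b in inputs:
        g1 = gates["g1"](a, b)
        g2 = gates["g2"](b, g1)
        trace.append(gates["g3"](g1, g2))
    return trace

functional = {
    "g1": lambda a, b: a and b,
    "g2": lambda a, b: a or b,
    "g3": lambda a, b: a != b,   # xor
}

inputs = list(itertools.product([False, True], repeat=2)) * 2
reference = run(functional, inputs)          # the "outward behaviour"

def recorded(name):
    # What gate `name` "would have done" on this input history.
    outs = []
    for a, b in inputs:
        g1 = functional["g1"](a, b)
        g2 = functional["g2"](b, g1)
        outs.append({"g1": g1, "g2": g2, "g3": functional["g3"](g1, g2)}[name])
    return outs

def replay(values):
    # A "lucky" gate: ignores its inputs and emits prerecorded values.
    it = iter(values)
    return lambda a, b: next(it)

# Swap gates for replay gates one at a time: the output trace never changes.
swapped = []
for name in ("g1", "g2", "g3"):
    swapped.append(name)
    gates = {n: (replay(recorded(n)) if n in swapped else functional[n])
             for n in functional}
    print(run(gates, inputs) == reference)   # True, True, True

# "Empty-headed" version: drop g1 and g2 entirely and just replay the motor gate.
print(recorded("g3") == reference)           # True: outward trace unchanged

Which of these successive systems still counts as performing the computation, and hence (under MEC+MAT) as conscious, is the point in dispute.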

If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
then there are partial zombies halfway between them. Like you, I can't
make any sense of these partial zombies. But I also can't make any
sense of the idea that Empty-Headed Alice is conscious. Therefore, I
don't think this argument shows that Empty-Headed Alice (and by
extension, Lucky Alice) must be conscious. I think it shows that
there's a deeper problem - probably with one of our assumptions.

Even though I actually think that mechanist-materialists should view
both Lucky Alice and Empty-Headed Alice as not conscious, I still
think they have to deal with this problem. They have to deal with the
spectrum of intermediate states between Fully-Functional Alice and
Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

-- Kory

Michael Rosefield

unread,
Nov 21, 2008, 10:18:07 AM11/21/08
to everyth...@googlegroups.com
This is one of those questions where I'm not sure if I'm being relevant or missing the point entirely, but here goes:

There are multiple universes which implement/contain/whatever Alice's consciousness. During the period of the experiment, Alice's universe may no longer be amongst them, but it shadows them closely enough that it certainly rejoins them upon the experiment's termination.

So, was Alice conscious during the experiment? Well, from Alice's perspective she certainly has the memory of consciousness, and due to the presence of the implementing universes there was certainly a conscious Alice out there somewhere. Since consciousness has no intrinsic spatio-temporal quality, there's no reason for that consciousness not to count.


2008/11/21 Kory Heath <ko...@koryheath.com>

Bruno Marchal

unread,
Nov 21, 2008, 11:15:27 AM11/21/08
to everyth...@googlegroups.com

On 21 Nov 2008, at 10:45, Kory Heath wrote:



...

A much closer analogy to Lucky Alice would be if the doctor  
accidentally destroys me without making the copy, turns on the  
receiving teleporter in desperation, and then the exact copy that  
would have appeared anyway steps out, because (luckily!) cosmic rays  
hit the receiver's mechanisms in just the right way. I actually find  
this thought experiment more persuasive than Lucky Alice (although I'm  
sure some will argue that they're identical). At the very least, the  
mechanist-materialist has to say that the resulting Lucky Kory is  
conscious. I think it's also clear that Lucky Kory's consciousness  
must be exactly what it would have been if the teleportation had  
worked correctly. This does in fact lead me to feel that maybe  
causality shouldn't have any bearing on consciousness after all.


Very good. Thanks.




However, the materialist-mechanist still has some grounds to say that  
there's something interestingly different about Lucky Kory than  
Original Kory. It is a physical fact of the matter that Lucky Kory is  
not causally connected to Pre-Teleportation Kory.


Keeping the comp hyp (cf the "qua computatio") this would introduce magic.



When someone asks  
Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory  
says, "Because of something I learned when I was ten years old", Lucky  
Kory's statement is quite literally false. Lucky Kory ties his shoes  
that way because of some cosmic rays. I actually don't know what the  
standard mechanist-materialist way of viewing this situation is. But  
it does seem to suggest that maybe breaks in the causal chain  
shouldn't affect consciousness after all.

Yes.

.....

Of course I'm entirely on board with the spirit of your thought  
experiment. You think MECH and MAT implies that Lucky Alice is  
conscious, but I don't think it does. I'm not sure how important that  
difference is. It seems substantial. But I can also predict where  
you're going with your thought experiment, and it's the exact same  
place I go. So by all means, continue on to MGA 2, and we'll see what  
happens.


Thanks. A last comment on your reply to Stathis' recent comment.

Stathis' argument, based on Chalmers' fading qualia, is mainly correct I think. And it could be that your answer to Stathis is correct too.
And this would finish our work. We would have a proof that Telmo's Alice is unconscious and that Telmo's Alice is conscious, finishing the reductio ad absurdum.
Keep in mind that we are doing a reductio ad absurdum. Those who are convinced by both Stathis and Russell, Telmo, ... can already take holidays!

Have to write MGA 2 for the others.

Bruno

Jason Resch

unread,
Nov 21, 2008, 11:52:21 AM11/21/08
to everyth...@googlegroups.com


On Fri, Nov 21, 2008 at 3:45 AM, Kory Heath <ko...@koryheath.com> wrote:



However, the materialist-mechanist still has some grounds to say that
there's something interestingly different about Lucky Kory than
Original Kory. It is a physical fact of the matter that Lucky Kory is
not causally connected to Pre-Teleportation Kory. When someone asks
Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
says, "Because of something I learned when I was ten years old", Lucky
Kory's statement is quite literally false. Lucky Kory ties his shoes
that way because of some cosmic rays. I actually don't know what the
standard mechanist-materialist way of viewing this situation is. But
it does seem to suggest that maybe breaks in the causal chain
shouldn't affect consciousness after all.

This is very similar to an existing thought experiment in identity theory:


Jason

Jason Resch

unread,
Nov 21, 2008, 12:01:49 PM11/21/08
to everyth...@googlegroups.com
Stathis,

What you described sounds very similar to a split brain patient I saw on a documentary.  He was able to respond to images presented to one eye, and ended up drawing them with a hand controlled by the other hemisphere, yet he had no idea why he drew that image when asked.  The problem may not be that he isn't experiencing the visualization, but that the part of his brain that is responsible for speech is disconnected from the part of his brain that can see.


Jason

Brent Meeker

unread,
Nov 21, 2008, 1:19:21 PM11/21/08
to everyth...@googlegroups.com
Just to make things more confusing ;-) We should keep in mind that in current
theories of physics the direction of time's arrow, and hence of "causality", is
a mere statistical phenomenon, and at a fundamental level all physical processes
are reversible - along with their causal order. In physics, causality just
means no action-at-a-distance.

It seems to me that the conundrums of these thought experiments about zombies
are muddled by invoking possibilities that are so improbable as to be
impossible. It's like considering playing craps with loaded dice that always
come up sixes and then asking, "Suppose cosmic rays always happened to come down
and strike the dice so that they came up randomly; would playing with them be a
fair game?" "Being a fair game" is an abstract concept, going beyond a
particular sequence of events, and so is "coming up randomly". So any answer to
this question has to fudge the difference between impossible and improbable.

Questions about zombies seem to have the same character when you hypothesize
their behavior is driven by cosmic rays or random number generators. You're
saying suppose something happened that is so improbable that it's impossible, do
you now agree that it's possible or not? If I say it's impossible, you answer
that ex hypothesi, it could happen. If I say it's possible, you can add to the
example to make it more and more improbable, e.g. Alice dodges a ball AND she
composes a concerto while playing tennis.

Brent

Brent Meeker

unread,
Nov 21, 2008, 1:36:09 PM11/21/08
to everyth...@googlegroups.com

If they were just observing Alice's outward behavior they would say, "It appears
that Alice is a conscious being, but of course there's a 1e-100 chance that she's
just an automaton operated by cosmic rays." If they were actually observing
her inner workings, they'd say, "Alice is just an automaton who in an extremely
improbable coincidence has appeared as if conscious, but we can easily prove she
isn't by watching her future behavior or even by blocking the rays."

Brent

Brent Meeker

unread,
Nov 21, 2008, 2:01:38 PM11/21/08
to everyth...@googlegroups.com

I think experiments like this support the idea that consciousness is not a
single thing. We tend to identify conscious thought with the thought that is
reported in speech. But that's just because it is the thought that is readily
accessible to experimenters.

Brent

Kory Heath

unread,
Nov 21, 2008, 8:23:05 PM11/21/08
to everyth...@googlegroups.com

On Nov 21, 2008, at 8:15 AM, Bruno Marchal wrote:
> On 21 Nov 2008, at 10:45, Kory Heath wrote:
>> However, the materialist-mechanist still has some grounds to say that
>> there's something interestingly different about Lucky Kory than
>> Original Kory. It is a physical fact of the matter that Lucky Kory is
>> not causally connected to Pre-Teleportation Kory.
>
>
> Keeping the comp hyp (cf the "qua computatio") this would introduce
> magic.

I'm not sure it has to. Can you elaborate on what magic you think it
ends up introducing?

In the context of mechanism-materialism, I am forced to believe that
Lucky Kory's consciousness, qualia, etc., are exactly what they would
have been if the teleportation had worked properly. But I don't see
how that forces me to accept any magic. It doesn't (for instance)
force me to say that Kory's "real" consciousness magically jumped over
to Lucky Kory despite the lack of the causal connection. As a
mechanist, I don't think there's any sense in talking about
consciousness in that way.

Dennett has a slogan: "When you describe what happens, you've
described everything." In this weird case, we have to fall back on
describing what happened. A pattern of molecules was destroyed, and
somewhere else that exact pattern was (very improbably) created by a
random process of cosmic rays. Since we mechanists believe that
consciousness and qualia are just aspects of patterns, the
consciousness and qualia of the lucky pattern must (by definition) be
the same as the original would have been. I don't think that causes
any (immediate) problem for the mechanist. Is Lucky Kory the same
person as Original Kory? I don't think the mechanist is committed to
any particular answer to this question. We've already described what
happened. Now it's just a matter of how we want to use our words. If
we want to use them in a certain way, there is a sense in which we can
say that Lucky Kory is not the same person as Original Kory, as long
as we understand that *all* we mean is that Lucky Kory isn't causally
connected to Original Kory (along with whatever else that implies).

However, I do start getting uncomfortable when I realize that this
lucky teleportation can happen over and over again, and if it happens
fast enough, it just reduces to sheer randomness that just happens to
be generating an ordered pattern that looks like Kory. I have a hard
time understanding how a mechanist can consider a bunch of random
numbers to be conscious. If that's the kind of magic you're referring
to, then I agree.

-- Kory

Kory Heath

unread,
Nov 21, 2008, 8:29:57 PM11/21/08
to everyth...@googlegroups.com

On Nov 21, 2008, at 8:52 AM, Jason Resch wrote:
This is very similar to an existing thought experiment in identity theory:


Cool. Thanks for that link!

-- Kory

Kory Heath

unread,
Nov 21, 2008, 8:54:08 PM11/21/08
to everyth...@googlegroups.com

On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
> What you described sounds very similar to a split brain patient I
> saw on a documentary.

It might seem similar on the surface, but it's actually very
different. The observers of the split-brain patient and the patient
himself know that something is amiss. There is a real difference in
his consciousness and his behavior. If cosmic rays randomly severed
your corpus callosum right now, you would definitely notice a
difference. (It's an empirical question whether or not you'd know it
almost immediately, or if it would take a while for you to figure it
out. I'm sure the neurologists and cognitive scientists already know
the answer to that one.)

At no point during the replacement of Alice's fully-functioning
neurons with cosmic-ray stimulated neurons (or during the replacement
of cosmic-ray neurons with no neurons at all) will Alice notice any
difference in her consciousness. In principle, she cannot notice it,
since every one of her fully-functional neurons always continues to
do exactly what it would have done. This is a serious problem for the
mechanistic view of consciousness.

-- Kory

Jason Resch

unread,
Nov 21, 2008, 9:53:00 PM11/21/08
to everyth...@googlegroups.com

What about a case when only some of Alice's neurons have ceased normal function and become dependent on the lucky rays?  Let's say the neurons in her visual center stopped working but her speech center was unaffected.  In this manner could she talk about what she saw without having any conscious experience of sight?  I'm beginning to see how truly frustrating the MGA argument is: if all her neurons break and are luckily fixed I believe she is a zombie; if only one of her neurons fails but we correct it, I don't think this would affect her consciousness in any perceptible way; but cases where some part of her brain needs to be corrected are quite strange, and almost maddeningly so.

I think you are right in that the split brain cases are very different, but I think the similarity is that part of Alice's consciousness would disappear, though the lucky effects ensure she acts as if no change had occurred.  If all of a sudden all her neurons started working properly again, I don't think she would have any recollection of having lost any part of her consciousness; the lucky effects should have fixed her memories as well, and the parts of her brain which remained functional would also not have detected any inconsistencies, yet the parts of her brain that depended on lucky cosmic rays generated no subjective experience for whatever set of information they were processing.  (So I would think.)

Jason

Kory Heath

unread,
Nov 22, 2008, 3:12:38 AM11/22/08
to everyth...@googlegroups.com

On Nov 21, 2008, at 6:53 PM, Jason Resch wrote:
> What about a case when only some of Alice's neurons have ceased
> normal function and became dependent on the lucky rays?

Yes, those are exactly the cases that are highlighting the problem.
(For me. For Bruno, Lucky Alice is still conscious. But he has the
analogous problem when we remove half of the neurons from Lucky
Alice's head.)

> I'm beginning to see how truly frustrating the MGA argument is: If
> all her neurons break and are luckily fixed I believe she is a
> zombie, if only one of her neurons fails but we correct it, I don't
> think this would effect her consciousness in any perceptible way,
> but cases where some part of her brain needs to be corrected are
> quite strange, and almost maddeningly so.

I agree.

> I think you are right in that the split brain cases are very
> different, but I think the similarity is that part of Alice's
> consciousness would disappear, though the lucky effects ensure she
> acts as if no change had occurred.

The tough part is that it's not just that she outwardly acts as if no
change had occurred. It's that, if the mechanistic view of
consciousness is correct, her subjective experience can't change,
either - at least, not in any noticeable way. If it did, she would
notice it and (probably) say something about it. And that can't
happen, because the act of noticing something or saying something
requires her neurons and her mouth to do something different.

The conclusion seems to be that, if mechanism is true, it's possible
for any part of my brain, or all of it, to disappear without changing
my conscious experience. That suggests a conceptual problem somewhere.

-- Kory

Stathis Papaioannou

unread,
Nov 22, 2008, 5:06:29 AM11/22/08
to everyth...@googlegroups.com
2008/11/22 Kory Heath <ko...@koryheath.com>:

> If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
> then there are partial zombies halfway between them. Like you, I can't
> make any sense of these partial zombies. But I also can't make any
> sense of the idea that Empty-Headed Alice is conscious. Therefore, I
> don't think this argument shows that Empty-Headed Alice (and by
> extension, Lucky Alice) must be conscious. I think it shows that
> there's a deeper problem - probably with one of our assumptions.

Yes, there must be a problem with the assumptions. The only assumption
that I see we could eliminate, painful though it might be for those of
a scientific bent, is the idea that consciousness supervenes on
physical activity. Q.E.D.

> Even though I actually think that mechanist-materialists should view
> both Lucky Alice and Empty-Headed Alice as not conscious, I still
> think they have to deal with this problem. They have to deal with the
> spectrum of intermediate states between Fully-Functional Alice and
> Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

--
Stathis Papaioannou

Stathis Papaioannou

unread,
Nov 22, 2008, 5:21:32 AM11/22/08
to everyth...@googlegroups.com
2008/11/22 Jason Resch <jason...@gmail.com>:

> What you described sounds very similar to a split brain patient I saw on a
> documentary. He was able to respond to images presented to one eye, and
> ended up drawing them with a hand controlled by the other hemisphere, yet he
> had no idea why he drew that image when asked. The problem may not be that
> he isn't experiencing the visualization, but that the part of his brain that
> is responsible for speech is disconnected from the part of his brain that
> can see.
> See: http://www.youtube.com/watch?v=ZMLzP1VCANo

This differs from the Lucky Alice example in that the split brain
patient notices that something is wrong, for the reason you give:
speech and vision are processed in different hemispheres. Another
interesting neurological example to consider is Anton's Syndrome, a
condition where people with lesions in their occipital cortex
rendering them blind don't seem to notice that they're blind. They
confabulate when they are asked to describe something put in front of
them and make up excuses when they walk into things. One can imagine a
kind of zombie vision if one of these patients were supplied with an
electronic device that sends them messages about their environment:
they would behave as if they can see as well as believe that they can
see, even though they lack any visual experiences. It should be noted,
however, that Anton's syndrome is a specific organic delusional
disorder, where a patient's cognition is affected in addition to the
perceptual loss, not just as a result of the perceptual loss. Blind or
deaf people who aren't delusional know they are blind or deaf.

--
Stathis Papaioannou

Günther Greindl

unread,
Nov 22, 2008, 6:47:23 AM11/22/08
to everyth...@googlegroups.com
Hmm,

> However, I do start getting uncomfortable when I realize that this
> lucky teleportation can happen over and over again, and if it happens
> fast enough, it just reduces to sheer randomness that just happens to
> be generating an ordered pattern that looks like Kory. I have a hard
> time understanding how a mechanist can consider a bunch of random
> numbers to be conscious. If that's the kind of magic you're referring

I think that is the major attraction of "mathematical universes" - that
the order emerges "after the fact" - out of random patterns.

The order would take over the function of causality in a materialist
picture.

Causality, as Brent (I think) has mentioned, is still not really
understood. What physicists mean is actually a certain kind of locality
- and macro-causality emerges as a statistical mean.

This is already not so far from "order from randomness" (in platonia
locality would also be an "after the fact", a "physical" feature).

Bruno takes the whole step, dumps matter, and lets mind emerge from
arithmetic truth.

What I find fascinating is why we then find ourselves as "single
persons" - if one dumps matter, why is not an arbitrary ordering out of
the "number mess" conscious of being many persons at once (in the matter
picture: being aware of superpositions).

Why then the feeling of being a single person?
In Bruno's system: why are OM's tied to single persons?

Cheers,
Günther

Günther Greindl

unread,
Nov 22, 2008, 7:07:56 AM11/22/08
to everyth...@googlegroups.com

Kory Heath wrote:
>
> If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
> then there are partial zombies halfway between them. Like you, I can't
> make any sense of these partial zombies. But I also can't make any
> sense of the idea that Empty-Headed Alice is conscious.

I think a materialist would either have to argue that Lucky Alice is
conscious (if he focuses on physical states) and that removing neurons
would lead to fading qualia (the "partial zombies") or simply assume
that already Lucky Alice is a Zombie (because he focuses on causal
dynamics).

(I would like to note that I have dropped MAT in the meantime and tend
to MECH. Just wanted to "simulate" a materialist argumentation :-) -
maybe I can convince myself of MAT and not MECH again *grin*)

Could we say that MAT focuses on _physical states_ (exclusively) and
MECH on _dynamics_? And that MGA shows that one can't have both?


Cheers,
Günther

Kory Heath

unread,
Nov 22, 2008, 9:45:35 AM11/22/08
to everyth...@googlegroups.com

On Nov 22, 2008, at 2:06 AM, Stathis Papaioannou wrote:
> Yes, there must be a problem with the assumptions. The only assumption
> that I see we could eliminate, painful though it might be for those of
> a scientific bent, is the idea that consciousness supervenes on
> physical activity. Q.E.D.

Right. But the problem is that that conclusion doesn't tell me how to
deal with the (equally persuasive) arguments that convince me there's
something deeply correct about viewing consciousness in computational
terms, and viewing computation in physical terms. So I'm really just
left with a dilemma. As I've hinted earlier, I suspect that there's
something wrong with the idea of "physical matter" and related ideas
like causality, probability, etc. But that's pretty vague.

-- Kory

Brent Meeker

unread,
Nov 22, 2008, 3:37:46 PM11/22/08
to everyth...@googlegroups.com
Günther Greindl wrote:
>
>
> Kory Heath wrote:
>> If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
>> then there are partial zombies halfway between them. Like you, I can't
>> make any sense of these partial zombies. But I also can't make any
>> sense of the idea that Empty-Headed Alice is conscious.

I don't see why partial "zombies" are problematic. My dog is conscious of
perceptions, of being an individual, of memories and even dreams, but he doesn't
have an inner narrative - so is he a partial zombie?

Brent

John Mikes

unread,
Nov 22, 2008, 3:53:30 PM11/22/08
to everyth...@googlegroups.com
Brent,
did your dog communicate to you (in dogese, of course) that she has - NO -
INNER NARRATIVE? or you are just ignorant to perceive such?
(Of course do not expect such at the complexity level of your 11b neurons)
John M

Brent Meeker

unread,
Nov 22, 2008, 4:26:12 PM11/22/08
to everyth...@googlegroups.com
John Mikes wrote:
> Brent,
> did your dog communicate to you (in dogese, of course) that she has - NO -
> INNER NARRATIVE? or you are just ignorant to perceive such?
> (Of course do not expect such at the complexity level of your 11b neurons)
> John M

Of course not. It's my inference from the fact that my dog has no outer
narrative. Have you read Julian Jaynes "The Origin of Consciousness in the
Breakdown of the Bicameral Mind"? He argues, persuasively in my opinion, that
our inner narrative arises from internalizing our outer narrative, i.e. spoken
communication with other people.

Brent

Stathis Papaioannou

unread,
Nov 22, 2008, 9:24:21 PM11/22/08
to everyth...@googlegroups.com
2008/11/23 Kory Heath <ko...@koryheath.com>:

We could say there are two aspects to mathematical objects, a physical
aspect and a non-physical aspect. Whenever we interact with the number
"three" it must be realised, say in the form of three objects. But
there is also an abstract three, with threeness properties, that lives
in Platonia independently of any realisation. Similarly, whenever we
interact with a computation, it must be realised on a physical
computer, such as a human brain. But there is also the abstract
computation, a Platonic object. It seems that consciousness, like
threeness, may be a property of the Platonic object, and not of its
physical realisation. This allows resolution of the apparent paradoxes
we have been discussing.

--
Stathis Papaioannou

Stathis Papaioannou

unread,
Nov 22, 2008, 9:42:50 PM11/22/08
to everyth...@googlegroups.com
On 2008/11/23 Brent Meeker <meek...@dslextreme.com> wrote:

> I don't see why partial "zombies" are problematic. My dog is conscious of
> perceptions, of being an individual, of memories and even dreams, but he doesn't
> have an inner narrative - so is he a partial zombie?

Your dog has experiences, and that seems to me to be the most
important thing distinguishing zombie from non-zombie. If Lucky Alice
is a partial zombie, she is lacking in experiences of a certain kind,
such as visual perception, but behaves just the same and otherwise
thinks and feels just the same. She remembers visual experiences from
before she suffered brain damage and feels that they are just the same
as present visual experiences: so in what sense could she have a
deficit rendering her blind?


--
Stathis Papaioannou

Bruno Marchal

unread,
Nov 23, 2008, 6:48:02 AM11/23/08
to everyth...@googlegroups.com

On 20 Nov 2008, at 19:38, Brent Meeker wrote:


> Talk about consciousness will seem as quaint
> as talk about the elan vital does now.


Then you are led to eliminativism of consciousness. This makes MEC+MAT
trivially coherent. The price is big: consciousness no longer
exists, like the "elan vital". MEC becomes vacuously true: I say yes to
the doctor, without even meaning it. But it seems to me that
consciousness is not like the "elan vital". I do have the, admittedly
non-sharable, experience of consciousness all the time, so it seems to
me that such a move consists in negating the data. If the idea of
keeping the notion of primitive matter, which I recall is really an
hypothesis, is so demanding that I have to abandon the idea that I am
conscious, I will abandon the hypothetical notion of primitive matter
instead.
But you make my point.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Nov 23, 2008, 7:18:56 AM11/23/08
to everyth...@googlegroups.com

On 21 Nov 2008, at 10:45, Kory Heath wrote:

> However, the materialist-mechanist still has some grounds to say that
> there's something interestingly different about Lucky Kory than
> Original Kory. It is a physical fact of the matter that Lucky Kory is

> not causally connected to Pre-Teleportation Kory. When someone asks
> Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
> says, "Because of something I learned when I was ten years old", Lucky
> Kory's statement is quite literally false. Lucky Kory ties his shoes
> that way because of some cosmic rays. I actually don't know what the
> standard mechanist-materialist way of viewing this situation is. But
> it does seem to suggest that maybe breaks in the causal chain
> shouldn't affect consciousness after all.


You are right, at least when, for the sake of the argument, we
continue to keep MEC and MAT, if only to single out, as transparently
as possible, the contradiction.
Let us consider your "lucky teleportation case", where someone uses a
teleporter which fails badly. It just annihilates the "original"
person, but then, by an incredible stroke of luck, the person is
reconstructed in his right state afterward. If you ask him "how do you
know how to tie shoes", and the person answers, after that bad but lucky
"teleportation", "because I learned it in my youth", he is correct.
He is correct for the same reason Alice's answers to her exam were
correct, even if luckily so.
Suppose I send you a copy of my SANE paper over the internet, and that
the internet demolishes it completely, but that by an incredible
chance your buggy computer rebuilds it in its exact original form. This
will not change the content of the paper, and the paper will be
correct or false independently of the way it has flown from me to you.
In the bad-lucky teleporter case, even with MAT (and MEC) it is still
the right person who survived, with the correct representation of her
right memories, and so on. Even if just "luckily so".
MGA 2 then shows that the random appearance of the lucky event was a
red herring, so that we have to admit that consciousness supervenes on
the movie graph (the movie of the running of the boolean optical
computer).

Of course I don't believe that consciousness supervenes on the physical
activity of such a movie, but this means that I have to abandon the
whole physical supervenience thesis. I will read the other posts. I think
many have understood and have already concluded. But from a strict
logical point of view, perhaps some are willing to defend the idea
that the movie-graph is conscious, and, in that case, I will present
MGA 3, which is supposed to show that, well, a movie cannot think,
through MEC (there is just no computation there). Of course, the movie
has still some relationship with the original consciousness of Alice,
and this will help us to save the MEC part of the physical
supervenience thesis, giving rise to the notion of "computational
supervenience", but this form of supervenience does no more refer to
anything *primarily* physical, and this will be enough preventing the
use of a concrete universe for blocking the UDA conclusion.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Nov 23, 2008, 7:37:48 AM11/23/08
to everyth...@googlegroups.com

On 20 Nov 2008, at 21:27, Jason Resch wrote:



On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:



The state machine that would represent her in the case of injection of random noise is a different state machine from the one that would represent her normally functioning brain.


Absolutely so.



Bruno,

What about the state machine that included the injection of "lucky" noise from an outside source vs. one in which all information was derived internally from the operation of the state machine itself?  

At which times? How? Did MGA 2 clarify this?




Would those two differently defined machines not differ and compute something different?  Even though the computations are identical, the information that is being computed comes from different sources and so carries with it a different "connotation".

But the supervenience principle and the non-prescience of the neurons make it impossible for the machine to "feel" such connotations.



Though the bits injected are identical, they inherently imply a different meaning because the state machine in the case of injection has a different structure than that of her normally operating brain.  I believe the brain can be abstracted as a computer/information processing system, but it is not simply the computations and the inputs into the logic gates at each step that are important, but also the source of the input bits, otherwise the computation isn't the same.

If the source differs below the substitution level, the machine cannot be aware of it. If she were, it would mean we have been wrong in the choice of the substitution level. OK? We can come back on this.
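
To make that point concrete, here is a toy sketch in Python (nothing from the thread: the three-state machine, the internal rule and the bit values are invented purely for illustration). The transition function sees only the incoming bit, so a run driven by the machine's own feedback and a run driven by the same bits injected from an outside source go through exactly the same sequence of states.

    # Toy model: a finite state machine whose next state depends only on its
    # current state and the incoming bit, never on where the bit came from.

    def step(state, bit):
        # Arbitrary transition table over the states {0, 1, 2}.
        return (2 * state + bit) % 3

    def run(initial_state, bit_source, n_steps):
        states = [initial_state]
        for _ in range(n_steps):
            states.append(step(states[-1], next(bit_source)))
        return states

    def internal_feedback(seed):
        # "Normal" operation: each bit is derived from the machine's own history.
        history = list(seed)
        while True:
            bit = history[-1] ^ history[-2]   # toy internal rule
            history.append(bit)
            yield bit

    def injected(recorded_bits):
        # "Lucky" operation: the very same bits arrive from an outside source.
        yield from recorded_bits

    if __name__ == "__main__":
        seed = [1, 0]
        generator = internal_feedback(seed)
        recorded = [next(generator) for _ in range(10)]   # what luck will replay

        normal = run(0, internal_feedback(seed), 10)
        lucky = run(0, injected(recorded), 10)
        assert normal == lucky    # same states, different provenance of the bits
        print(normal)

Whatever "connotation" the source of the bits carries, nothing in the run itself can register it; that is the non-prescience point.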

Bruno



Bruno Marchal

unread,
Nov 23, 2008, 8:17:51 AM11/23/08
to everyth...@googlegroups.com

On 20 Nov 2008, at 21:40, Gordon Tsai wrote:

Bruno:
   I think you and John touched the fundamental issues of human rationality. It's a dilemma encountered by phenomenology. Now I have a question: In theory we can't distinguish ourselves from a Lobian Machine.


Note that in the math part (Arithmetical UDA), I consider only "*Sound* Lobian machines". Sound means that they are never wrong (talking about numbers). Now no sound Lobian machine can know that she is sound, and I am not yet sure I will find an interesting notion of lobianity for unsound machines; and sound Lobian machines can easily become unsound, especially when they begin to confuse deductive inference and inductive inference. We just cannot know if we are (sound) Lobian machines.
It is more something we should hope for ...




But can lobian machines truly have sufficiently rich experiences, like humans?



You know, Mechanism is a bit like the half bottle of wine. The optimist thinks that the bottle is "still half full", and the pessimist thinks that the bottle is "already half empty".
About mechanism, the optimist reasons like this: I love myself because I have such an interesting life with so many rich experiences. Now you tell me I am a machine. So I love machines, because machines *can* have rich experiences; indeed, I myself am an example.
The pessimist reasons like this: I hate myself because my life is boringly uninteresting without any rich experiences. Now you tell me I am a machine. I knew it! My own life confirms that rumor according to which machines are stupid automata. No meaning, no future.




For example, is it possible for a lobian machine to "still its mind" or "cease the computational logic", as some eastern philosophy suggested? Maybe any out-of-loop experience is still part of the computation/logic, just as our out-of-body experiences are actually a trick of brain chemicals?





The bad news is that the singular point is, imo, behind us. The universal machine you bought has been clever, but this has been shadowed by your downloading of so much special-purpose software. And then she needs to be "in a body" so that you can use it, as if it were a sort of slave, to send me a mail. It will take time for them too. And once a universal machine has a body or a relative representation, the first person and the third person get rich and complex, but possibly confused. Its soul falls, would say Plotin. She can get hallucinated and all that.

With comp, to be very short and a bit provocative, the notion of "out-of-body" experience makes no sense at all because we don't have a body to go out of, at the start. Your body is in your head, if I can say.

This is at least a *consequence* of the assumption of mechanism, and I'm afraid you have to understand that by yourself, a bit like a theorem in math. But it is third person sharable, for example by UDA, I think. It leads, I guess, to a different view on Reality (different from the usual theology of Aristotle, but not different from Plato's theology, roughly speaking).

You can ask any question, but my favorite ones are the naive questions :)

Bruno Marchal



Bruno Marchal

unread,
Nov 23, 2008, 8:31:33 AM11/23/08
to everyth...@googlegroups.com

On 22 Nov 2008, at 11:06, Stathis Papaioannou wrote:

> Yes, there must be a problem with the assumptions. The only assumption
> that I see we could eliminate, painful though it might be for those of
> a scientific bent, is the idea that consciousness supervenes on
> physical activity. Q.E.D.


Logically you could also abandon MEC, but I guess you think, as I tend
to think myself, that this could be even more painful for those of
scientific bent.
In the long run physicists could be very happy that their foundations
relies on numbers relations (albeit statistical).

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Nov 23, 2008, 9:17:34 AM11/23/08
to everyth...@googlegroups.com

I agree with you. It resolves the conceptual problems about mind and
matter, but it forces us to redefine matter from how "consciousness
differentiates in Platonia" (this comes from MGA + ... UDA(1..7)). Comp
really reduces the mind-body problem to the body problem: it remains to
show we don't have too many white rabbits. But the problem is a pure
problem in computer science now.

Bruno


http://iridia.ulb.ac.be/~marchal/

John Mikes

unread,
Nov 23, 2008, 11:15:35 AM11/23/08
to everyth...@googlegroups.com
On 11/22/08, Brent Meeker <meek...@dslextreme.com> wrote:
>
Brent, I appreciate your 'consenting' reply<G> - however -
yes, I read (long ago) J. Jaynes and appreciated MOST of his ideas, but I do
not accept him as a substitute (verbal) opinion in our presently ongoing
discussion. We may have ideas generated after (in spite of?) J.J.
Yet - in your reply - the "spoken communication with other people"
refers in the present topic to communication in 'dogese' (with other
dogs?), so your argument is still in limbo.
Just for the fun of it

John Mikes


John Mikes

unread,
Nov 23, 2008, 11:41:33 AM11/23/08
to everyth...@googlegroups.com
On 11/23/08, Bruno Marchal <mar...@ulb.ac.be> wrote:
>
> On 20 Nov 2008, at 21:40, Gordon Tsai wrote:
>
>> Bruno:
>> I think you and John touched the fundamental issues of human
>> rationality. It's a dilemma encountered by phenomenology. Now I have a
>> question: In theory we can't distinguish ourselves from a Lobian
>> Machine.
>
(JM): Dear Gordon, thanks for your consent. My reply is shorter than
Bruno's (Indeed professional - long - one): "If we say so":
'We' created a "machine" as we wish and if we created it 'that way',
we cannot distinguish ourselves from it.
>
>(Bruno):
> Note that in the math part (Arithmetical UDA), I consider only
> "*Sound* Lobian machines". Sound means that they are never wrong
> (talking about numbers). Now no sound Lobian machine can know that she
> is sound, and I am not yet sure I will find an interesting notion of
> lobianity for unsound machines; and sound Lobian machines can easily
> become unsound, especially when they begin to confuse deductive inference
> and inductive inference. We just cannot know if we are (sound) Lobian
> machines.
> It is more something we should hope for ...
>
>> But can lobian machines truly have sufficiently rich experiences,
>> like humans?
>
> You know, Mechanism is a bit like the half bottle of wine. The
> optimist thinks that the bottle is "still half full", and the pessimist
> thinks that the bottle is "already half empty".
> About mechanism, the optimist reasons like this: I love myself because
> I have such an interesting life with so many rich experiences. Now you
> tell me I am a machine. So I love machines, because machines *can* have
> rich experiences; indeed, I myself am an example.
> The pessimist reasons like this: I hate myself because my life is
> boringly uninteresting without any rich experiences. Now you tell me I
> am a machine. I knew it! My own life confirms that rumor according to
> which machines are stupid automata. No meaning, no future.

(JM): thanks Bruno, for the nice metaphor of 'machine' - In my vocabulary
a machine is a model exercising a mechanism, but chacun à son goût.
With 'mechanism' it is different for me: I like to expand it to something like
'anything (process) that gets something entailed', without restrictions. But
again, I do not propose this for universal acceptance.
>
>> For example, is it possible for a lobian machine to "still its mind"
>> or "cease the computational logic", as some eastern philosophy
>> suggested? Maybe any out-of-loop experience is still part of
>> the computation/logic, just as our out-of-body experiences are
>> actually a trick of brain chemicals?
>
> The bad news is that the singular point is, imo, behind us. The
> universal machine you bought has been clever, but this has been
> shadowed by your downloading of so much special-purpose software.
> And then she needs to be "in a body" so that you can use it, as if it
> were a sort of slave, to send me a mail. It will take time for them
> too. And once a universal machine has a body or a relative
> representation, the first person and the third person get rich and
> complex, but possibly confused. Its soul falls, would say Plotin. She
> can get hallucinated and all that.
>
> With comp, to be very short and a bit provocative, the notion of "out-of-
> body" experience makes no sense at all because we don't have a body to
> go out of, at the start. Your body is in your head, if I can say.
>
> This is at least a *consequence* of the assumption of mechanism, and
> I'm afraid you have to understand that by yourself, a bit like a
> theorem in math. But it is third person sharable, for example by UDA,
> I think. It leads, I guess, to a different view on Reality (different
> from the usual theology of Aristotle, but not different from Plato's
> theology, roughly speaking).
(JM): Bruno, in my opinion NOTHING is 'third person sharable' - only a
'thing' (from every- or no-) can give rise to develop a FIRST personal
variant of the sharing, more or less (maybe) resembling the original 'to
be shared' one. In its (1st) 'personal' variation. (Cf: perceived reality).

>
> You can ask any question, but my favorite ones are the naive questions :)
>
> Bruno Marchal
>
>
> http://iridia.ulb.ac.be/~marchal/
>
(JM): John Mikes

Bruno Marchal

unread,
Nov 23, 2008, 1:47:22 PM11/23/08
to everyth...@googlegroups.com
On 23 Nov 2008, at 17:41, John Mikes wrote:


On 11/23/08, Bruno Marchal <mar...@ulb.ac.be> wrote:


About mechanism, the optimist reasons like this: I love myself because
I have such an interesting life with so many rich experiences. Now you
tell me I am a machine. So I love machines, because machines *can* have
rich experiences; indeed, I myself am an example.
The pessimist reasons like this: I hate myself because my life is
boringly uninteresting without any rich experiences. Now you tell me I
am a machine. I knew it! My own life confirms that rumor according to
which machines are stupid automata. No meaning, no future.

(JM): thanks Bruno, for the nice metaphor of 'machine' -


It was the "pessimist metaphor". I hope you know I am a bit more optimist, ... with regard to machines.



In my vocabulary
a machine is a model exercising a mechanism, but chacun à son goût.

We agree on the definition. 








(JM): Bruno, in my opinion NOTHING is 'third person sharable' - only a
'thing' (from every- or no-) can give rise to develop a FIRST personal
variant of the sharing,

The third person part is what the first person variant is a variant of.
I don't pretend we can know it. But if we don't bet on it, we become solipsists.



more or less (maybe) resembling the original 'to
be shared' one. In its (1st) 'personal' variation. (Cf: perceived reality).

Building theories helps us learn how false we can be. We have to take our theories seriously, and make them precise and clear enough, if we want to see the contradiction and learn from there. Oh, we can also contemplate, meditate, or listen to music; or use (legal) entheogens, why not, there are many paths, not incompatible. But reasoning up to a contradiction, pure or with the facts, is the way of the researcher.

Bruno

Hal Finney

unread,
Nov 23, 2008, 12:39:07 PM11/23/08
to everyth...@googlegroups.com
Allow me to try analyzing MGA 1 in the context of what I call the UDASSA
framework, which I have discussed here on this list in years past. First
I will briefly review UDASSA.

In UDASSA, the measure of an observer experience, an observer moment, or
for that matter anything that can be thought of as an information pattern,
is its measure in the Universal Distribution, a mathematically defined
probability distribution which is basically the probability that a given
Universal Turing Machine will output that bit pattern, given a random
program as input. This turns out to be approximately equal to 1/2 to the
power of AC, where AC is the algorithmic complexity of the bit pattern,
i.e. the length of the shortest program that outputs that bit pattern.
More precisely, the measure is actually the sum of contributions from all
programs that output that pattern. For each program, if its length is L,
its contribution to the measure of that pattern is 1/2 to the L power.
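
A toy version of that bookkeeping, as a Python sketch (the program lengths are made-up round figures in the spirit of the estimates further down, not outputs of any real universal machine; everything is computed in log2 space so the tiny numbers don't underflow):

    import math

    def log2_contribution(length_bits):
        # A single program of length L contributes 2**-L to the measure.
        return -length_bits

    def log2_measure(program_lengths):
        # log2 of the total measure: the sum of 2**-L over all programs that
        # output the pattern, computed stably in log space.  The shortest
        # program dominates the sum.
        m = max(-L for L in program_lengths)
        return m + math.log2(sum(2.0 ** (-L - m) for L in program_lengths))

    if __name__ == "__main__":
        # A "brain-scan" style program of roughly 30,000 bits versus a program
        # that hard-codes billions of bits of raw neural data.
        print(log2_measure([30_000]))           # -30000.0
        print(log2_measure([2_000_000_000]))    # -2000000000.0
        # Adding 100 bits to a program costs a factor of 2**100 in contribution:
        print(log2_contribution(30_100) - log2_contribution(30_000))   # -100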

UDASSA also assumes that experiences that have higher measure are more
likely to be experienced, hence that we would predict that we are more
likely to experience something that has high measure than something low.
This is the ASSA part.

The first step to analyzing most of our thought experiments is to assume
that consciousness can be represented as an abstract information pattern
of some yet-to-be-determined nature. Assume that as our understanding
of psychology and brain physiology grows, we are able to achieve a
mathematical definition of consciousness, such that any system which
implements a pattern which meets various criteria would be said to be
conscious. Brains presumably would then turn out to be conscious because
they implement (or "instantiate") patterns that follow these rules.

In the UDASSA framework, we would apply this understanding and model
of consciousness slightly differently. Rather than asking whether a
particular brain or other system implements a given consciousness, or
more generally asking whether it is conscious at all, we would ask what
contribution the system in question makes to the measure of a given,
mathematically-defined, conscious experience. Systems which we would
conventionally say "are conscious", like brains, would be ones which
make a large contribution; systems which do not, like rocks, would make
virtually no contribution to the measure.

The manner in which a brain makes a contribution to the measure of the
abstract information pattern representing its conscious experience is
like this: The measure of the abstract consciousness is based on the
size of the shortest program which can output it. A short program to
output a given person's conscious experience is to start a universe off
with a Big Bang in a simple initial state and with simple physical laws;
run it for a while; and then look at a given location in space-time for
patterns of activity which can be specified by simple rules, and record
those patterns. The rules in this case would be those corresponding to
neural events in the brain, in whatever form is necessary to output the
appropriate mathematical representation of consciousness.

Some years back on this list, I estimated that a particular observer
moment might correspond to a program of this nature with a length in
the tens of thousands of bits. This is very short considering that
the raw data of neural activity would be billions of bits; any program
which tried to use a rock or some other non-conscious source as its
raw material for outputting the bit pattern would have to hard-code the
entire bit pattern within itself. Since the contribution of a program
is inversely exponential in the length of the program, the enormous
economy of brain-scanning type programs means that they would contribute
essentially all the measure to conscious experiences, rocks essentially
none, and hence that we "really are" brains in this sense.

Turning now to the zombie based thought experiments, let us think of Alice
whose brain malfunctions for a while but who gets lucky due to cosmic ray
strikes and so the overall brain pattern proceeds unchanged. What impact,
if any, would such events have on her consciousness in this framework?

Under UDASSA, we would not ask if she is conscious or a zombie. We would
ask what contribution this instance of her experience makes to the measure
of the information pattern representing her conscious experience during
this time (that is, what her experience would be if all were working
well). If her brain still contributes about as much as it would if it
were working right, we'd say that she is conscious. If it contributes
almost nothing, we'd say she was not conscious and was, in that sense,
a zombie. The UDASSA framework also allows for intermediate levels of
contribution, although due to the exponential nature of the Universal
Distribution, even small increases in program size will greatly reduce
the amount of contribution.

Assuming that the shortest possible program for outputting the
mathematical representation of Alice's conscious experience is based on
the brain scan concept sketched above, what happens when a neuron stops
working? Well, the brain scan is not going to work the same way. Now,
if it is a single neuron, it's likely that there would be no noticeable
effect on consciousness. The brain is an imperfect machine and must have
a degree of redundancy and error correction. It is likely that the fact
that one neuron has stopped working would be caught and corrected by the
error-correction part of the brain-scan program, and its output would not
be changed. So although the input is different, the output is probably
the same and so we would say that Alice is not a zombie. Fundamentally
this is because the brain is immune to noise at this level (we assume).

Now let us suppose a more serious failure, thousands of neurons, enough
that it really would make a difference in her experience. But we assume
that just through luck, cosmic rays activate those neurons so that there
is no long term change in her thought processes or behavior. The rest
of her brain works normally. In this case, the brain scanning program
would possibly output a different result. It would be working by tracking
neural events, synaptic activity and so on. Depending on the details,
the fact that the neurons continued to fire in the right patterns might
not be good enough.

Let's suppose that the neurons are firing but their synapses are broken
and are not releasing neurotransmitters. Suppose it turns out that
the optimal brain-scanning program studies synaptic activity as well
as overall neural activity. In that case it would output a different
result. In order to get the program to output the same thing as it would
have if the brain weren't broken, we would have to make it more complex,
so that the cosmic ray activity was just as good as neural stimulation,
for making neurons fire and for producing the desired output pattern. This
would complicate the program and probably make it substantially larger, at
least hundreds of bits. This would then decrease its contribution to the
measure of Alice's conscious experience by at least a factor of 2 to the
100th power. Alice would have to be thought of as a zombie in this case.
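
Just to put a decimal figure on that factor (plain arithmetic, nothing more):

\[ \frac{2^{-(L+100)}}{2^{-L}} \;=\; 2^{-100} \;\approx\; 8 \times 10^{-31}. \]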

Now, this is based on the assumption that the optimal brain-scanning
program would be disrupted by whatever aspect it is about Alice's brain
which is "broken". By playing around with various ways of doing brain
scanning and various ways a brain might break, we can get different
answers. But at a minimum, if Alice's neurons had never been wired up
to each other in her whole life, and all her life they had merely fired
sheerly by luck in the same patterns they would have had if properly
connected, then it seems clear that no simple program could extract the
proper causal connections among the neurons that would be a necessary
prerequisite to analyze their logical relationships and output a concise
mathematical representation of her consciousness. So in that case she
would certainly be a zombie. A momentary interruption, with the causal
channels still in place but perhaps temporarily blocked, might still be
tolerated, again depending on the details.

Note that in principle, then, whether Alice would be a zombie is an
empirical question that can be solved via sufficient study of psychology
and brain physiology (to understand what characterizes consciousness),
and computer science (to learn what kinds of programs can most efficiently
translate raw brain measurements into the mathematical representation
of conscious experience).

This framework also allows answers to the various other thought
experiments which have been raised, such as whether a sufficiently
detailed recording of the brain's activity would be conscious. UDASSA
suggests that not only is the answer yes (assuming that a simple program
can infer the same logical relationships as it could from the actual
brain itself), but that actually such a recording might contribute
vastly more to the measure of the conscious experience than an ordinary
brain! That is because the recording persists, and so the program to
output the mathematical representation of the conscious experience
can be shorter due to the reduced precision necessary in specifying
the time coordinate where the scan should start. Hence such "static"
recordings can apparently produce experiences with much higher measure
than ordinary brains. We might therefore endorse something like Nick
Bostrom's Simulation Argument (simulation-argument.com) with the proviso
that not only are we living in simulations, but that the simulations
are recorded in persistent storage in some form.

Hal Finney

Kory Heath

unread,
Nov 24, 2008, 12:08:01 PM11/24/08
to everyth...@googlegroups.com

On Nov 23, 2008, at 4:18 AM, Bruno Marchal wrote:
> Let us consider your "lucky teleportation" case, where someone uses a
> teleporter which fails badly: it annihilates the "original" person,
> but then, by an incredible stroke of luck, the person is reconstructed
> afterward in his right state. If you ask him "how do you know how to
> tie shoes" and the person answers, after that bad but lucky
> "teleportation", "because I learned it in my youth", he is correct.
> He is correct for the same reason that Alice's answers to her exams
> were correct, even if luckily so.

I think it's (subtly) incorrect to focus on Lucky Alice's *answers* to
her exams. By definition, she wrote down the correct answers. But (I
claim) she didn't compute those answers. A bunch of cosmic rays just
made her look like she did. Fully-Functional Alice, on the other hand,
actually did compute the answers.

Let's imagine that someone interrupts Fully-Functional Alice while
she's taking the exam and asks her, "Do you think that your actions
right now are being caused primarily by a very unlikely sequence of
cosmic rays?", and she answers "No". She is answering correctly. By
definition, Lucky Alice will answer the same way. But she will be
answering incorrectly. That is the sense in which I'm saying that
Lucky Kory is making a false statement when he says "I learned to tie
my shoes in my youth".

In the case of Lucky Kory, I concur that, despite this difference, his
subjective consciousness is identical to what Kory's would have been
if the teleportation was successful. But the reason I can view Lucky
Kory as conscious at all is that once the lucky accident creates him
out of whole cloth, his neurons are firing correctly, are causally
connected to each other in the requisite ways, etc. I have a harder
time understanding how Lucky Alice can be conscious, because at the
time I'm supposed to be viewing her as conscious, she isn't meeting
the causal / computational pre-requisites that I thought were
necessary for consciousness. And I can essentially turn Lucky Kory
into Lucky Alice by imagining that he is "nothing but" a series of
lucky teleportations. And then suddenly I don't see how he can be
conscious, either.

> Suppose I send you a copy of my SANE paper over the internet, and that
> the internet demolishes it completely, but that by an incredible
> chance your buggy computer rebuilds it in its exact original form.
> This will not change the content of the paper, and the paper will be
> correct or false independently of the way it has travelled from me to
> you.

That's because the SANE paper doesn't happen to talk about its own
causal history. Imagine that I take a pencil and a sheet of paper and
write the following on it:

"The patterns of markings on this paper were caused by Kory Heath. Of
course, that doesn't mean that the molecules in this piece of paper
touched the hands of Kory Heath. Maybe the paper has been teleported
since Kory wrote it, and reconstructed out of totally different
molecules. But there is an unbroken causal chain from these markings
back to something Kory once did."

If you teleport that paper normally, the statement on it remains true.
If the teleportation fails, but a lucky accident creates an identical
piece of paper, the statement on it is false. Maybe this has no
bearing on consciousness or anything else, but I don't want to forget
about the distinction until I'm sure it's not relevant.

> Of course, the movie still has some relationship with the original
> consciousness of Alice, and this will help us save the MEC part of the
> physical supervenience thesis, giving rise to the notion of
> "computational supervenience"; but this form of supervenience no
> longer refers to anything *primarily* physical, and this will be
> enough to prevent the use of a concrete universe to block the UDA
> conclusion.

I see what you mean. But for me, these thought experiments are making
me doubt that I even have a coherent notion of "computational
supervenience".

-- Kory

Kory Heath

unread,
Nov 24, 2008, 1:42:56 PM11/24/08
to everyth...@googlegroups.com

On Nov 22, 2008, at 6:24 PM, Stathis Papaioannou wrote:
> Similarly, whenever we
> interact with a computation, it must be realised on a physical
> computer, such as a human brain. But there is also the abstract
> computation, a Platonic object. It seems that consciousness, like
> threeness, may be a property of the Platonic object, and not of its
> physical realisation. This allows resolution of the apparent paradoxes
> we have been discussing.

For reasons that are (mostly) independent of all of these thought
experiments, I suspect that there's something deeply correct about the
idea that an "abstract computation" can be the substrate for
consciousness. Or at least, I think there's something deeply correct
about replacing the idea of "physical existence" with "mathematical
facts-of-the-matter". This immediately eliminates weird questions like
"why is there something instead of nothing", which seem unanswerable
in the context of the normal view of physical existence.

But what I'm realizing is that I still don't have a clear conception
of how consciousness is supposed to relate to these Platonic
computations. (Or maybe I don't have a clear enough picture of what
counts as a "Platonic computation".) In a way, it feels to me as
though I still have "partial zombie" problems, even in Platonia.

Let's imagine a "block universe" in Platonia - a 3D block of cells
filled (in some order that we specify) with the binary digits of PI.
Somewhere within this block, there are (I think) regions which look
"as if" they're following the rules of Conway's Life, and some of
those regions contain creatures that look "as if" they're conscious.
Are they actually conscious? The move away from "physical existence"
to "mathematical existence" (what I've called "mathematical
physicalism") doesn't immediately help me answer this question.

The answer I *used* to give was that it doesn't matter, because no
matter what "accidental order" you find in Platonia, you also find the
"real order". In other words, if you find some portion of the digits
of PI that "seems to be" following the rules of Conway's Life, then
there is also (of course) a Platonic object that represents the
"actual" computations that the digits of PI "seem to be" computing.
This is, essentially, Bostrom's "Unification" in the context of
Platonia. It doesn't matter whether or not "accidental order" in the
digits of PI can be viewed as conscious, because either way, we know
the "real order" exists in Platonia as well, and multiple
"instantiations" of the same pain in Platonia wouldn't result in
multiple pains.

I'm uncomfortable with the philosophical vagueness of some of this. At
the very least, I want a better handle on why Unification is correct
and Duplication is not in the context of Platonia (or why that
question is confused, if it is).

-- Kory

Bruno Marchal

unread,
Nov 24, 2008, 2:01:59 PM11/24/08
to everyth...@googlegroups.com

On 24 Nov 2008, at 18:08, Kory Heath wrote:



>
> I see what you mean. But for me, these thought experiments are making
> me doubt that I even have a coherent notion of "computational
> supervenience".



You are not supposed to have a coherent idea of what "computational
supervenience" is. That belongs to the conclusion of the reasoning, and
it will need elaboration on what a computation is. This is not so
hard with ... computer science.

To understand that MEC+MAT is contradictory, you have only to
understand them well enough to get up to the point where the
contradiction occurs. You give us many quite good arguments for saying
that Lucky Alice, and even Lucky Kory, are not conscious. I do agree,
mainly, with those arguments.

So let me be clear: your arguments that, assuming MEC+MAT, Lucky Alice
is not conscious are almost correct, and very convincing. And so, of
course, Lucky Alice is not conscious.

Now, MGA 1 is an argument showing that MEC+MAT, due to the physical
supervenience thesis and the non-prescience of the neurons, entails
that Lucky Alice is conscious. The question is: do you see this too?

If you see this, we have:

MEC+MAT entails Lucky Alice is not conscious (by your correct argument)
MEC+MAT entails Lucky Alice is conscious (by MGA 1)

Thus MEC+MAT entails (Lucky Alice is conscious AND Lucky Alice is not
conscious), that is, MEC+MAT entails "false", a contradiction.
And that is the point.
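
Written out as a bare propositional skeleton (with M abbreviating MEC+MAT and C abbreviating "Lucky Alice is conscious"), the step is the ordinary reductio:

\[ (M \rightarrow C) \;\wedge\; (M \rightarrow \neg C) \;\vdash\; \neg M. \]

All the work is in establishing the two premises: your argument gives the second, MGA 1 gives the first.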

If your argument were not merely convincing but definitive, then I
would not need to make MGA 3 for showing it is ridiculous to endow the
projection of a movie of a computation with consciousness (in real
"space-time", like the physical supervenience thesis asked for).

OK?


Bruno




http://iridia.ulb.ac.be/~marchal/



Kory Heath

unread,
Nov 24, 2008, 8:13:02 PM11/24/08
to everyth...@googlegroups.com

On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote:
> If your argument were not merely convincing but definitive, then I
> would not need to make MGA 3 for showing it is ridiculous to endow the
> projection of a movie of a computation with consciousness (in real
> "space-time", like the physical supervenience thesis asked for).

Ok, I think I'm following you now. You're saying that I'm failing to
provide a definitive argument showing that it is ridiculous to endow
the projection of a movie of a computation with consciousness. (Or, in
my alternate thought experiment, I'm failing to provide a *definitive*
reason why it's ridiculous to endow the "playing back" of the
previously-computed "block universe" with consciousness.) I concur -
my arguments are convincing, but not definitive. If MGA 3 (or MGA 4,
etc.) is definitive, or even just more convincing, so much the better.
Please proceed!

-- Kory

Bruno Marchal

unread,
Nov 25, 2008, 5:55:37 AM11/25/08
to everyth...@googlegroups.com

Le 25-nov.-08, à 02:13, Kory Heath a écrit :

>
>
> On Nov 24, 2008, at 11:01 AM, Bruno Marchal wrote:
>> If your argument were not merely convincing but definitive, then I
>> would not need to make MGA 3 for showing it is ridiculous to endow the
>> projection of a movie of a computation with consciousness (in real
>> "space-time", like the physical supervenience thesis asked for).
>
> Ok, I think I'm following you now. You're saying that I'm failing to
> provide a definitive argument showing that it is ridiculous to endow
> the projection of a movie of a computation with consciousness. (Or, in
> my alternate thought experiment, I'm failing to provide a *definitive*
> reason why it's ridiculous to endow the "playing back" of the
> previously-computed "block universe" with consciousness.)

Yes.

> I concur -
> my arguments are convincing, but not definitive. If MGA 3 (or MGA 4,
> etc.) is definitive, or even just more convincing, so much the better.
> Please proceed!

So you agree that MGA 1 does show that Lucky Alice is conscious
(logically).
Normally, this means the proof is finished for you (but that is indeed
what you said before I began; everything is coherent).

About MGA 3, I feel almost a bit ashamed to explain that. To believe
that the projection of the movie makes Alice conscious is almost like
having to explain why we should not send Roger Moore (James Bond) to
jail, given that there are obviously movies where he clearly does not
respect the speed limit (grin). Of course this is not an argument.

Bruno

http://iridia.ulb.ac.be/~marchal/

Russell Standish

unread,
Nov 25, 2008, 6:05:51 AM11/25/08
to everyth...@googlegroups.com
On Tue, Nov 25, 2008 at 11:55:37AM +0100, Bruno Marchal wrote:
> About MGA 3, I feel almost a bit ashamed to explain that. To believe
> that the projection of the movie makes Alice conscious is almost like
> having to explain why we should not send Roger Moore (James Bond) to
> jail, given that there are obviously movies where he clearly does not
> respect the speed limit (grin). Of course this is not an argument.
>
> Bruno
>

There is a world of difference between the James Bond movie, which is
clearly not the same as the actor in flesh and blood, and the sort of
movie used in your MGA, which by definition is indistinguishable in
all important respects from the original conscious being. It is
important not to let our intuitions misguide us at this point. Brent
was effectively making the same point, about when unlikely events
become indistinguishable from impossible.

Cheers


--

----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

Bruno Marchal

unread,
Nov 25, 2008, 6:25:05 AM11/25/08
to everyth...@googlegroups.com
Just to be clear on this, I obviously agree.

Best,

Bruno



Le 25-nov.-08, à 12:05, Russell Standish a écrit :
http://iridia.ulb.ac.be/~marchal/

Kory Heath

unread,
Nov 25, 2008, 9:49:26 AM11/25/08
to everyth...@googlegroups.com

On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
> So you agree that MGA 1 does show that Lucky Alice is conscious
> (logically).

I think I have a less rigorous view of the argument than you do. You
want the argument to have the rigor of a mathematical proof. You say
"Let's start with the mechanist-materialist assumption that Fully-
Functional Alice is conscious. We can replace her neurons one-by-one
with random neurons that just happen to do what the fully-functional
ones were going to do. By definition none of her exterior or interior
behavior changes. Therefore, the resulting Lucky Alice must be exactly
as conscious as Fully-Functional Alice."

To me, this argument doesn't have the full rigor of a mathematical
proof, because it's not entirely clear what the mechanist-materialists
really mean when they say that Fully-Functional Alice is conscious,
and it's not clear whether or not they would agree that "none of her
exterior or interior behavior changes (in any way that's relevant)".
There *is* an objective physical difference between Fully-Functional
Alice and Lucky Alice - it's precisely the (discoverable, physical)
fact that her neurons are all being stimulated by cosmic rays rather
than by each other. I don't see why the mechanist-materialists are
logically disallowed from incorporating that kind of physical
difference into their notion of consciousness.

Of course, in practice, Lucky Alice presents a conundrum for such
mechanist-materialists. But it's not obvious to me that the conundrum
is unanswerable for them, because the whole notion of "consciousness"
in this context seems so vague. Bostrom's views about fractional
"quantities" of experience are a case in point. He clearly takes a
mechanist-materialist view of consciousness, and he believes that a
grid of randomly-flipping bits cannot be conscious, no matter what it
does. He would argue that, during Fully-Functional Alice's slide into
Lucky Alice, her subjective quality of consciousness doesn't change,
but her "quantity" of consciousness gradually reduces until it becomes
zero. That seems weird to me, but I don't see how to "logically prove"
that it's wrong. All I have are messy philosophical arguments and
thought experiments - what Dennett calls "intuition pumps".

That being said, I'm happy to proceed as if our hypothetical mechanist-
materialists have accepted the force of your argument as a logical
proof. Yes, they claim, given the assumptions of our mechanism-
materialism, if Fully-Functional Alice is conscious, Lucky Alice must
*necessarily* also be conscious. If the laser-graph is conscious, then
the movie of it must *necessarily* be conscious. What's the problem
(they ask)? On to MGA 3.

-- Kory

Bruno Marchal

unread,
Nov 25, 2008, 1:00:01 PM11/25/08
to everyth...@googlegroups.com

On 25 Nov 2008, at 15:49, Kory Heath wrote:

>
>
> On Nov 25, 2008, at 2:55 AM, Bruno Marchal wrote:
>> So you agree that MGA 1 does show that Lucky Alice is conscious
>> (logically).
>
> I think I have a less rigorous view of the argument than you do. You
> want the argument to have the rigor of a mathematical proof.

Yes. But it is applied mathematics, in a difficult domain (psychology/
theology and foundation of physics).

There is a minimum of common sense and candidness which is asked for.
The proof is rigorous in the sense that it should give anyone the feeling
that it could be entirely formalized in some intensional mathematics,
S4 with quantifiers, or in the modal variant of G and G*. This is
eventually the purpose of the interview of the lobian machine (using
the Theaetetus epistemological definition). But this is normally not
needed for "conscious English-speaking beings with enough common sense
and some interest in the matter".


> You say
> "Let's start with the mechanist-materialist assumption that Fully-
> Functional Alice is conscious. We can replace her neurons one-by-one
> with random neurons


They are random in the sense that ALL strings are random. They are not
random in the Kolmogorov sense, for example. MGA 2 should make this clear.

> that just happen to do what the fully-functional
> ones were going to do.


It is not random for that very reason. It is luckiness in MGA 1, and
the record of computations in MGA 2.

> By definition none of her exterior or interior
> behavior changes.


I never use those terms in this context, except in comp jokes like
"the brain is in the brain". It is dangerous because interior/exterior
can refer both to in-the-skull/outside-the-skull and to objective/
subjective.

I just use the fact that you say "yes" to a doctor "qua
computatio" (with or without MAT).

> Therefore, the resulting Lucky Alice must be exactly
> as conscious as Fully-Functional Alice."
>
> To me, this argument doesn't have the full rigor of a mathematical
> proof, because it's not entirely clear what the mechanist-materialists
> really mean when they say that Fully-Functional Alice is conscious,


Consciousness does not need to be defined more precisely than is
needed for saying "yes" to the doctor qua computatio, just as a
naturalist could say "yes" for an artificial heart.
Consciousness and (primitive) Matter don't need to be defined more
precisely than is needed to understand the physical supervenience
thesis, despite terms like "existence of a primitive physical
universe" or the very general term "supervenience" itself.
Could you perhaps still have a problem with the definitions or with
the hypotheses?


>
> and it's not clear whether or not they would agree that "none of her
> exterior or interior behavior changes (in any way that's relevant)".
> There *is* an objective physical difference between Fully-Functional
> Alice and Lucky Alice - it's precisely the (discoverable, physical)
> fact that her neurons are all being stimulated by cosmic rays rather
> than by each other.

There is an objective difference between very young Alice with her
"biological brain" and very young Alice the day after the digital
graft. But taking both MEC and MAT together, you cannot use that
difference. If you want to use that difference, you have to make
changes to MEC and/or to MAT. You can always be confused by the
reasoning in a way which pushes you to (re)consider MEC or MAT, and to
interpret them more vaguely so that those changes are made possible.
But then we learn nothing "clear" from the reasoning. We learn if we
do the same, but precisely.

> I don't see why the mechanist-materialists are
> logically disallowed from incorporating that kind of physical
> difference into their notion of consciousness.


In our setting, it means that the neuron/logic gates have some form of
prescience.


>
>
> Of course, in practice, Lucky Alice presents a conundrum for such
> mechanist-materialists. But it's not obvious to me that the conundrum
> is unanswerable for them, because the whole notion of "consciousness"
> in this context seems so vague.

No, what could be vague is the idea of linking consciousness with
matter, but that is the point of the reasoning. If we keep comp, we
have to (re)define the general notion of matter.

> Bostrom's views about fractional
> "quantities" of experience are a case in point.

If that were true, why would you say "yes" to the doctor without
knowing the thickness of the artificial axons?
How can you be sure your consciousness will not diminish by half when
the doctor proposes to you the new cheaper brain which uses thinner
fibers, or half the number of redundant security fibers (thanks to
progress in security software)?
I would no longer dare to say "yes" to the doctor if I could lose a
fraction of my consciousness and become a partial zombie.


> He clearly takes a
> mechanist-materialist view of consciousness,


Many believe in naturalism. At least, his move shows that he is aware
of the difficulty of the mind-body problem. But he has to modify comp
deeply to make his move meaningful.
If anything physical/geometrical about the neurons is needed, let the
digital machine take that physical/geometrical feature into account.
This means: let the level be refined. But once the level is correctly
chosen, comp forces us to abstract from the functioning of the
elementary Boolean gates.


> and he believes that a
> grid of randomly-flipping bits cannot be conscious,


(I am ok with that. I mean, this will remain true both with comp and
NON MAT).

> no matter what it
> does. He would argue that, during Fully-Functional Alice's slide into
> Lucky Alice, her subjective quality of consciousness doesn't change,
> but her "quantity" of consciousness gradually reduces until it becomes
> zero. That seems weird to me, but I don't see how to "logically prove"
> that it's wrong. All I have are messy philosophical arguments and
> thought experiments - what Dennett calls "intuition pumps".

Because I would have to say NO to the doctor who proposes to me a digital
neural net with "infinitesimally" thin, or just very thin but solid,
fibers. I would become a zombie if Bostrom were right. Bostrom does not
use the digital MEC hypothesis.


>
>
> That being said, I'm happy to proceed as if our hypothetical
> mechanist-
> materialists have accepted the force of your argument as a logical
> proof. Yes, they claim, given the assumptions of our mechanism-
> materialism, if Fully-Functional Alice is conscious, Lucky Alice must
> *necessarily* also be conscious. If the laser-graph is conscious, then
> the movie of it must *necessarily* be conscious. What's the problem
> (they ask)? On to MGA 3.


Hmmm.... (asap). Still disentangling MGA 3 and MGA 4 ...

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Nov 25, 2008, 2:16:55 PM11/25/08
to everyth...@googlegroups.com
I'm not sure I agree with that. If consciousness is a process it may be
instantiated in physical relations (causal?). But relations are in general not
attributes of the relata. Distance is an abstract relation but it is always
realized as the distance between two things. The things themselves don't have
"distance". If some neurons encode my experience of "seeing a rose" might not
the experience depend on the existence of roses, the evolution of sight, and the
causal chain as well as the immediate state of the neurons?

>
>
>>
>> Of course, in practice, Lucky Alice presents a conundrum for such
>> mechanist-materialists. But it's not obvious to me that the conundrum
>> is unanswerable for them, because the whole notion of "consciousness"
>> in this context seems so vague.
>
> No, what could be vague is the idea of linking consciousness with
> matter, but that is the point of the reasoning. If we keep comp, we
> have to (re)define the general notion of matter.
>
>
>
>> Bostrom's views about fractional
>> "quantities" of experience are a case in point.
>
> If that were true, why would you say "yes" to the doctor without
> knowing the thickness of the artificial axons?
> How can you be sure your consciousness will not diminish by half when
> the doctor proposes to you the new cheaper brain which uses thinner
> fibers, or half the number of redundant security fibers (thanks to
> progress in security software)?
> I would no longer dare to say "yes" to the doctor if I could lose a
> fraction of my consciousness and become a partial zombie.

But who would say "yes" to the doctor if he said that he would take a movie of
your brain states and project it? Or if he said he would just destroy you in
this universe and you would continue your experiences in other branches of the
multiverse or in platonia? Not many I think.

Brent

Russell Standish

unread,
Nov 25, 2008, 6:19:43 PM11/25/08
to everyth...@googlegroups.com
On Tue, Nov 25, 2008 at 11:16:55AM -0800, Brent Meeker wrote:
>
> But who would say "yes" to the doctor if he said that he would take a movie of
> your brain states and project it? Or if he said he would just destroy you in
> this universe and you would continue your experiences in other branches of the
> multiverse or in platonia? Not many I think.
>
> Brent
>

Then perhaps nobody has sufficient faith in COMP!

Interestingly, I pointed out an inherent contradiction in the "Yes,
doctor" postulate a while back, which I gather you're still thinking
of a response to, Bruno. Let's call it the "Standish wager", after
Pascal's wager about belief in God.

If YD is true, then you must also accept the consequences, namely
COMP-immortality. In which case you may as well say "no" to the
doctor, as COMP-immortality guarantees that you will survive the terminal
brain disease that brought you to the doctor in the first place.

Of course, in reality, it may be a very different choice being
presented. Perhaps Vinge's Singularity happens, and one is given the
choice between uploading into the hive mind, or being put to death on
the spot to conserve resources. Or more modestly, one is being given a
choice of whether to have a direct internet connection implanted in
your skull. In each of these cases, one should make the choice based
on whether the new configuration offers a better life over your
existing one, or not. Survival prospects really shouldn't enter into
it. In the event YD is false, you will then not be any worse off than
you were before.

BTW - I watched the Prestige on the weekend. Good recommendation,
Bruno! My wife enjoyed it greatly too, and wants to watch it again
sometime. I can't get her to read my book, though :(

Cheers

--

----------------------------------------------------------------------------

Kory Heath

unread,
Nov 25, 2008, 11:36:41 PM11/25/08
to everyth...@googlegroups.com

On Nov 25, 2008, at 10:00 AM, Bruno Marchal wrote:
> Could you perhaps still have a problem with the definitions or with
> the hypotheses?

I think I haven't always been clear on our definitions of mechanism
and materialism. But I can understand and accept definitions of those
terms under which MGA 1 shows that it's logically necessary that Lucky
Alice is conscious, and MGA 2 shows that it's logically necessary that
"the projection of the movie makes Alice conscious" (your words from a
previous email). I think we can proceed with that.

But can you clarify exactly what MECH+MAT is supposed to be saying
about the movie? Does MECH+MAT say that something special is happening
when we project the movie, or is the simple existence of the movie
enough?

-- Kory

Stathis Papaioannou

unread,
Nov 26, 2008, 8:36:52 AM11/26/08
to everyth...@googlegroups.com
2008/11/25 Kory Heath <ko...@koryheath.com>:

> The answer I *used* to give was that it doesn't matter, because no
> matter what "accidental order" you find in Platonia, you also find the
> "real order". In other words, if you find some portion of the digits
> of PI that "seems to be" following the rules of Conway's Life, then
> there is also (of course) a Platonic object that represents the
> "actual" computations that the digits of PI "seem to be" computing.
> This is, essentially, Bostrom's "Unification" in the context of
> Platonia. It doesn't matter whether or not "accidental order" in the
> digits of PI can be viewed as conscious, because either way, we know
> the "real order" exists in Platonia as well, and multiple
> "instantiations" of the same pain in Platonia wouldn't result in
> multiple pains.
>
> I'm uncomfortable with the philosophical vagueness of some of this. At
> the very least, I want a better handle on why Unification is correct
> and Duplication is not in the context of Platonia (or why that
> question is confused, if it is).

I'd agree with your first paragraph quoted above. It isn't possible to
introduce, eliminate or duplicate Platonic objects; they're all just
there, eternally.

--
Stathis Papaioannou

Bruno Marchal

unread,
Nov 26, 2008, 11:22:49 AM11/26/08
to everyth...@googlegroups.com

On 25 Nov 2008, at 20:16, Brent Meeker wrote:

>
> Bruno Marchal wrote:
>>

>>
>>> Kory: I don't see why the mechanist-materialists are


>>> logically disallowed from incorporating that kind of physical
>>> difference into their notion of consciousness.
>>
>>

>> Bruno: In our setting, it means that the neuron/logic gates have
>> some form of
>> prescience.
>
> Brent: I'm not sure I agree with that. If consciousness is a

> process it may be
> instantiated in physical relations (causal?). But relations are in
> general not
> attributes of the relata. Distance is an abstract relation but it
> is always
> realized as the distance between two things. The things themselves
> don't have
> "distance". If some neurons encode my experience of "seeing a rose"
> might not
> the experience depend on the existence of roses, the evolution of
> sight, and the
> causal chain as well as the immediate state of the neurons?


With *digital* mechanism, it would just mean that we have not chosen
the right level of substitution. Once the level is well chosen, we
can no longer give a role to the implementation details. They can no
longer be relevant, or else we introduce prescience in the elementary
components.


>>
>>
>>> Bostrom's views about fractional
>>> "quantities" of experience are a case in point.
>>
>> If that were true, why would you say "yes" to the doctor without
>> knowing the thickness of the artificial axons?
>> How can you be sure your consciousness will not diminish by half when
>> the doctor proposes to you the new cheaper brain which uses thinner
>> fibers, or half the number of redundant security fibers (thanks to
>> progress in security software)?
>> I would no longer dare to say "yes" to the doctor if I could lose a
>> fraction of my consciousness and become a partial zombie.
>
> But who would say "yes" to the doctor if he said that he would take
> a movie of
> your brain states and project it? Or if he said he would just
> destroy you in
> this universe and you would continue your experiences in other
> branches of the
> multiverse or in platonia? Not many I think.


I agree with you. Not many will say yes to such a doctor! And even
rightly so (with MEC). I think MGA 3 should make this clear.
The point is just that if we assume both MEC *and* MAT, then the
movie is "also" conscious, but of course (well: by MGA 3) it is not
conscious "qua computatio", so that we get the (NON COMP or NON MAT)
conclusion.

I keep COMP (as my working hypothesis, but of course I find it
plausible for many reasons), so I abandon MAT. With comp,
consciousness can still supervene on computations (in Platonia, or
more concretely in the universal deployment), but not on their
physical implementation. By UDA we now have the obligation to explain
the physical by the computational. It is the reversal I talked about.
Somehow, consciousness does not supervene on brain activity; rather,
brain activity supervenes on consciousness. To be short: because
consciousness is now somehow related to the whole of arithmetical
truth, and things are not so simple.

Bruno
http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Nov 27, 2008, 3:43:23 PM11/27/08
to everyth...@googlegroups.com
But is causality an implementation detail? There seems to be an implicit
assumption that digitally represented states form a sequence just because there
is a rule that defines that sequence, but in fact all digital (and other)
sequences depend on causal chains.
It's not so clear to me. One argument leads to CONSCIOUS and the other leads to
NON-CONSCIOUS, but there is no direct contradiction - only a contradiction of
intuitions. So it may be a fault of intuition in evaluating the thought
experiments.

Brent

John Mikes

unread,
Nov 27, 2008, 5:07:49 PM11/27/08
to everyth...@googlegroups.com
Brent wrote:
...
"But is causality an implementation detail?  There seems to be an implicit
assumption that digitally represented states form a sequence just because there
is a rule that defines(*) that sequence, but in fact all digital (and other) sequences depend on(**) causal chains." ...
 
I would insert at (*): 'in digitality'  - 
and at (**):
'(the co-interefficiency of) unlimited' - because in my vocabulary (and I do not expect the rest of the world to accept it) the conventional term 'causality', meaning to find "A CAUSE" within the (observed) topical etc. model that entails the (observed) 'effect' - gave place to the unlimited interconnections that - in their total interefficiency - result in the effect we observed within a model-domain, irrespective of the limits of the observed domain.
"Cause" - IMO - is a limited term of ancient narrow epistemic (model based?) views, not fit for discussions in a "TOE"-oriented style.
Using obsolete words impresses the conclusions as well.
 
John Mikes

Brent Meeker

unread,
Nov 27, 2008, 6:16:17 PM11/27/08
to everyth...@googlegroups.com
John Mikes wrote:
> Brent wrote:
> ...
> *"But is causality an implementation detail? There seems to be an implicit

> assumption that digitally represented states form a sequence just
> because there
> is a rule that defines(*) that sequence, but in fact all digital (and
> other) sequences depend on(**) causal chains." ...*
>
> I would insert at (*): /*'in digitality'*/ -
> and at (**):
> /*'(the co-interefficiency of) unlimited'*/ - because in my vocabulary
> (and I do not expect the 'rest of the world to accept it) the
> conventional term /'causality'/, meaning to find /"A CAUSE"/ within the
> (observed) topical etc. model that entails the (observed) 'effect' -
> gave place to the unlimited inteconnections that - in their total
> interefficiency - result in the effect we observed within a
> model-domain, irrespective of the limits of the observed domain.
> "Cause" - IMO - is a limited term of ancient narrow epistemic (model
> based?) views, not fit for discussions in a "TOE"-oriented style.
> Using obsolete words impress the coclusions as well.

I think I agree with that last remark (although I'm not sure, because the
language seems obscure). I meant causality in the physicists' sense of "no
action at a distance", not in an epistemic sense.

Brent

John Mikes

unread,
Nov 28, 2008, 8:18:02 AM11/28/08
to everyth...@googlegroups.com
Thanks, Brent,
 
at least you read through my blurb. Of course I am vague - besides, I wrote the post in a jiffy, not premeditatedly; I am sorry. Also, there is no adequate language for the things I want to refer to, not even 'in situ'; the ideas and terms about an interefficient totality (IMO more than just the TOE) are still being sought. We have only the old language of the (model-based) quotidian, and scientific terms like your "in the physicists' sense" and similar.

BTW: "no action at a distance"? What would you call the Mars-to-Earth case when NASA sends an order and the module on Mars starts digging? I suppose you would consider the "beam" a 'connecting' (physical) space-term?

I hope to be in the ballpark of an extension of your model-based (physicalistic) causality, in a sense (I never considered my position in an 'epistemic sense'): but think of your (physical) distance as 'unrelated', relevant to more than just measurable space, in any 'dimension' we may (or still cannot) think of. I sometimes consider 'causality' as somewhat backwards-"deterministic", in the sense that everything is 'e/affected' by other changes (relations) - as in: nothing generates itself. (In this respect I shove the "ORIGIN" under the rug, because I acknowledge that it is beyond our limited mental capabilities - and I don't want to start with unreasonable assumptions.)

(Yes, my 'narrative' about a Big Bang fantasy - closer to human common-sense logic - starts with a Plenitude assumption, a pretty undetailed image, giving rise only to some physically-mathematically followable(?) process of the mandatory occurrence of unlimited (both in quality and number) universes, but I am ready to change it for a better idea any time.)

I wonder if I have added to the obscurity of my language. If so, I am sorry.
 
John M
 
 
