The MGA revisited


LizR

Mar 22, 2015, 6:24:36 PM
to everyth...@googlegroups.com
I've just revisited Russell's "The MGA revisited" in the hope of understanding it. Unfortunately my pretty little head doesn't like being bothered with such matters despite "the spirit being willing" so I found my mind boggling a bit, as usual.

However the nub seems to be something like this.

If consciousness is the outcome of computation,
and consciousness supervenes on physical states,
then we can arrange for the same stream of consciousness to be produced by different physical states.

For example, at some point in the computation we may have the value 1 in two registers. These are combined using a logical AND operation, giving the result 1. But this could be substituted by a logical OR, or the values ignored and the result put in by hand (or via a projected movie, or cosmic rays, etc.) and the same result would occur.
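The register example can be sketched in a few lines of toy Python (my own illustration, not from Russell's paper; the gate names are made up):

```python
# Toy illustration: when both inputs happen to be 1, an AND gate, an OR
# gate, and a "put the answer in by hand" mechanism all leave the same
# value in the result register -- so the same computational trace admits
# physically different implementations.

def gate_and(a, b):
    return a & b

def gate_or(a, b):
    return a | b

def by_hand(a, b):
    return 1  # inputs ignored; the result is supplied externally

a, b = 1, 1
print([g(a, b) for g in (gate_and, gate_or, by_hand)])  # [1, 1, 1]
```

With both inputs at 1 the three mechanisms are indistinguishable from the result alone; they only come apart on inputs that never occur in this particular run.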

But this contradicts the definition of supervenience (about which I have a question, see below). Hence consciousness (as computation) cannot supervene on a physical process.

A question about supervenience.

In the classroom example, the question "does consciousness supervene on the class?" is answered no. Obviously consciousness is thought to supervene on people in the class, e.g. Alice and Bob (ignoring the MGA for now, at least). So consciousness supervenes on Alice, and separately on Bob - but not on the class, even though Alice's and Bob's consciousnesses supervene on the class.

Sorry, I got confused at this point. Does this simply mean that the class has no overall single consciousness supervening on it - a "group mind" as it were?

Russell Standish

Mar 22, 2015, 7:00:35 PM
to everyth...@googlegroups.com
That would be one way of looking at it.

Cheers

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)
----------------------------------------------------------------------------

LizR

Mar 22, 2015, 7:11:05 PM
to everyth...@googlegroups.com
How else might one look at it?

(And how about the "nub" -- is that correct?)

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

Russell Standish

Mar 22, 2015, 8:07:50 PM
to everyth...@googlegroups.com
My main point is that supervenience is not what you think it is. Some
people think supervenience means that a particular arrangement of a
substrate is necessary for consciousness.

Of course, if you had never heard the term before, then it shouldn't
be a problem.

LizR

Mar 22, 2015, 8:38:49 PM
to everyth...@googlegroups.com
OK, thanks.

Well, yes, it's true that I hadn't heard the term except in places like this...

So anyway, the argument that the exact arrangement of the substrate isn't necessary for consciousness means that the same experiences could be generated by different arrangements of a given substrate, or perhaps completely different substrates (which may just be a statement of computational universality, or something similar?)

That in itself doesn't make the substrate unnecessary to consciousness, surely? It's just saying that there isn't a one-to-one mapping, and (for example) silicon or carbon brains might in theory generate the same states. So a given conscious experience doesn't supervene on the exact same configuration of the substrate... So maybe it doesn't always matter to my experience if my brain has, say, 100 or 101 ions in a particular synapse (or 1000000 and 1000001 - I have no idea what figures are realistic).

I'm not sure how you get from that to the non-primariness of matter, however.

meekerdb

Mar 22, 2015, 11:10:02 PM
to everyth...@googlegroups.com
On 3/22/2015 5:38 PM, LizR wrote:
OK, thanks.

Well, yes, it's true that I hadn't heard the term except in places like this...

So anyway, the argument that the exact arrangement of the substrate isn't necessary for consciousness means that the same experiences could be generated by different arrangements of a given substrate, or perhaps completely different substrates (which may just be a statement of computational universality, or something similar?)

That in itself doesn't make the substrate unnecessary to consciousness, surely? It's just saying that there isn't a one-to-one mapping, and (for example) silicon or carbon brains might in theory generate the same states. So a given conscious experience doesn't supervene on the exact same configuration of the substrate... So maybe it doesn't always matter to my experience if my brain has, say, 100 or 101 ions in a particular synapse (or 1000000 and 1000001 - I have no idea what figures are realistic).

I'm not sure how you get from that to the non-primariness of matter, however.

That's where the MGA comes in.  It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.

Brent

Terren Suydam

Mar 23, 2015, 4:54:26 PM
to everyth...@googlegroups.com
For me the MGA was illuminating, but an even more mind-bending demonstration of the supervenience idea is that one can fashion a Turing-complete computer using an array of simple mechanical switches, through which ping-pong balls would flow (see http://helge.ru-stad.name/ppb_comp/ppbcne.htm for example).

As such, by Church-Turing, you could take whatever computational framework one might hypothesize necessary to support consciousness (e.g. a simulation of a human brain), and run it on this ping-pong computer (albeit on a time-scale many orders of magnitude slower than today's, or even yesterday's, devices). The physical scale of the ping-pong computer necessary to run a sophisticated sim like that would be pretty massive too, to allow for the large amount of 'tape' (Turing) or memory required... but that's merely a pragmatic point. In principle, it would be computing exactly the same program as the supercomputer you'd probably commission to run that sim... and therefore, consciousness would supervene on your ping pong ball computer. 
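The toggle mechanism behind such a machine can be sketched as a toy simulation (a hedged illustration of my own; the class and function names are invented, not taken from the linked page):

```python
# Toy model of a ball-driven binary counter, the kind of toggle mechanism
# a ping-pong ball computer is built from.  Each "switch" flips when a
# ball passes and forwards the ball only on every second hit, so a chain
# of switches counts balls in binary -- a hint of how purely mechanical
# parts can realize computation.

class ToggleSwitch:
    def __init__(self):
        self.state = 0  # 0 or 1

    def drop_ball(self):
        """Flip state; return True if the ball rolls on to the next switch."""
        self.state ^= 1
        return self.state == 0  # the ball passes only on the 1 -> 0 flip

def count_balls(n_balls, n_switches=4):
    switches = [ToggleSwitch() for _ in range(n_switches)]
    for _ in range(n_balls):
        i = 0
        while i < len(switches) and switches[i].drop_ball():
            i += 1  # carry: the ball continues to the next switch
    # least-significant switch first -> read the count out of the states
    return sum(s.state << i for i, s in enumerate(switches))

print(count_balls(11))  # the switch states encode 11 in binary
```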

As Bruno has said, if that is too absurd, then for you that functions as a reductio ad absurdum against the mechanistic hypothesis.

Terren


Stathis Papaioannou

Mar 23, 2015, 5:12:31 PM
to everyth...@googlegroups.com
On 24 March 2015 at 07:54, Terren Suydam <terren...@gmail.com> wrote:
> For me the MGA was illuminating, but an even more mind-bending demonstration
> of the supervenience idea is that one can fashion a Turing-complete computer
> using an array of simple mechanical switches, through which ping-pong balls
> would flow (see http://helge.ru-stad.name/ppb_comp/ppbcne.htm for example).
>
> As such, by Church-Turing, you could take whatever computational framework
> one might hypothesize necessary to support consciousness (e.g. a simulation
> of a human brain), and run it on this ping-pong computer (albeit on a
> time-scale many orders of magnitude slower than today's, or even
> yesterday's, devices). The physical scale of the ping-pong computer
> necessary to run a sophisticated sim like that would be pretty massive too,
> to allow for the large amount of 'tape' (Turing) or memory required... but
> that's merely a pragmatic point. In principle, it would be computing exactly
> the same program as the supercomputer you'd probably commission to run that
> sim... and therefore, consciousness would supervene on your ping pong ball
> computer.
>
> As Bruno has said, if that is too absurd, then for you that functions as a
> reductio ad absurdum against the mechanistic hypothesis.

The ping-pong mind is consistent with standard computationalism, which
requires a physical computer. It is no more absurd, if you think about
it, than that little bags of water (cells) can form a mind. The MGA, as
well as Maudlin's Olympia argument and Putnam's rock argument, claims
something more radical: that if computationalism is true then it
cannot be the physical activity in the computer that gives rise to
consciousness. That means you must either throw out computationalism
as a whole as absurd or, less commonly, do as Bruno does: keep
computationalism and throw out the physical supervenience thesis.


--
Stathis Papaioannou

Terren Suydam

Mar 23, 2015, 5:31:03 PM
to everyth...@googlegroups.com
Good clarification, thanks. Nonetheless, to me at least, there is something intuitively more challenging about supervenience when it is applied to macro-sized computational devices you can watch working with your own eyes. With computers and brains, the operation is hidden at micro time/space scales, which makes it much harder to draw an explicit connection between the actual mechanism and the phenomenon of consciousness (whatever that may be). But you are of course correct that it's an apples-to-oranges comparison with the MGA.

meekerdb

Mar 23, 2015, 6:08:27 PM
to everyth...@googlegroups.com

Or you can throw out the assumption that conscious thought is independent of an external world. This assumption comes easily to Platonists because they think Platonia exists independent of any worlds; but I find it very suspicious.

Brent

LizR

Mar 23, 2015, 6:48:55 PM
to everyth...@googlegroups.com
On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
That's where the MGA comes in.  It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.

That sounds like Maudlin's Olympia argument....?

So far I get that different substrates can create the same computational states (by which I assume we mean the contents of registers and memory?) But how does the MGA get from showing that to showing that inert matter can be a possible substrate? (ISTM that a projected graph is not inert, if that's the argument.)

LizR

Mar 23, 2015, 6:53:59 PM
to everyth...@googlegroups.com
On 24 March 2015 at 09:54, Terren Suydam <terren...@gmail.com> wrote:
For me the MGA was illuminating, but an even more mind-bending demonstration of the supervenience idea is that one can fashion a Turing-complete computer using an array of simple mechanical switches, through which ping-pong balls would flow (see http://helge.ru-stad.name/ppb_comp/ppbcne.htm for example).

As such, by Church-Turing, you could take whatever computational framework one might hypothesize necessary to support consciousness (e.g. a simulation of a human brain), and run it on this ping-pong computer (albeit on a time-scale many orders of magnitude slower than today's, or even yesterday's, devices). The physical scale of the ping-pong computer necessary to run a sophisticated sim like that would be pretty massive too, to allow for the large amount of 'tape' (Turing) or memory required... but that's merely a pragmatic point. In principle, it would be computing exactly the same program as the supercomputer you'd probably commission to run that sim... and therefore, consciousness would supervene on your ping pong ball computer. 

I think it's been shown that computers can (in theory) be built from lots of different things, and still be computers. Mechanical relays, pipes and valves, the game of life, a guy with a massive lookup table in a closed room...  a rock... black holes... no doubt one can make them from all sorts of baroque arrangements of matter/energy/space/time. I think the MGA, if one can grasp it fully (which admittedly I can't, yet) shows that all such arrangements are equally absurd.
 

Russell Standish

Mar 23, 2015, 6:55:06 PM
to everyth...@googlegroups.com
Broadly, the idea is to use the notion that movement is relative. If a
machine is moving through a fixed sequence of states, we can
equivalently set things up so that the machine is inert, but the observer
moves in such a way that the appearance is unchanged. The absurdity is
that this implies consciousness depends on the motion of the observer.

This is a relative of the "rocks are conscious" argument.
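A toy sketch of that equivalence (my own illustration, with an arbitrary made-up transition rule): the state sequence of a machine that actually computes each step and a mere playback of a recording of it are observationally identical.

```python
# The observable state sequence of a "live" machine and a playback of a
# recording of that run cannot be told apart by inspecting the states --
# which is why the MGA asks whether a recording could be conscious.

def live_machine(x, steps):
    """Generate states by actually computing each transition."""
    for _ in range(steps):
        x = (3 * x + 1) % 16   # arbitrary toy transition rule
        yield x

recording = list(live_machine(5, 8))        # "film" the run once

def playback(tape):
    """Generate the same states with no computation at all."""
    yield from tape

print(list(live_machine(5, 8)) == list(playback(recording)))  # True
```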

LizR

Mar 23, 2015, 6:59:32 PM
to everyth...@googlegroups.com
On 24 March 2015 at 11:08, meekerdb <meek...@verizon.net> wrote:
Or you can throw out the assumption that conscious thought is independent of an external world.  This assumption comes easily to Platonist because they think Platonia exists independent of any worlds; but I find it very suspicious.

Doesn't that take you back to the starting point of Bruno's argument? If conscious thought depends on the world, then either it's a computation, or part of one (apparently the argument works even if you have to assume the Hubble sphere is digitally simulatable, if that's the right word) - in which case we seem led inexorably to the UDA/MGA/TLA - or it's something else.

What else could it be?

Bruce Kellett

Mar 23, 2015, 7:10:20 PM
to everyth...@googlegroups.com
Russell Standish wrote:
> On Tue, Mar 24, 2015 at 11:48:52AM +1300, LizR wrote:
>> On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
>>
>>> That's where the MGA comes in. It purports to show that one of the
>>> possible substrates is inert matter, which seems so absurd that we should
>>> conclude the matter plays no part whatsoever.
>>>
>> That sounds like Maudlin's Olympia argument....?
>>
>> So far I get that different substrates can create the same computational
>> states (by which I assume we mean the contents of registers and memory?)
>> But how does the MGA get from showing that to showing that inert matter can
>> be a possible substrate? (ISTM that a projected graph is not inert, if
>> that's the argument.)
>>
>
> Broadly, the idea is to use the notion that movement is relative. If a
> machine is moving through a fixed sequence of states, we can
> equivalently set things up so the machine is inert, but the observer
> moves in such a way that appearance is unchanged. The absurdity is
> that this implies consciousness depends on the motion of the observer.

No, it doesn't imply any such thing. The motion of the observer, or rate
of change of the sequence of states, is irrelevant to consciousness. The
only relevant thing is the states themselves -- the rate at which they
are observed (or even if they are static) does not matter.

(if you are concerned that /some/ notion of time is essential, then it
needs only that time be encoded in the states in some way. No external
time parameter is needed. See Julian Barbour's book /The End of Time/)

Bruce

meekerdb

Mar 23, 2015, 8:07:24 PM
to everyth...@googlegroups.com
On 3/23/2015 3:48 PM, LizR wrote:


On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
That's where the MGA comes in.  It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.

That sounds like Maudlin's Olympia argument....?

It is essentially the same.  But I think Maudlin took the other side of the reductio and concluded that computationalism must be incomplete.



So far I get that different substrates can create the same computational states (by which I assume we mean the contents of registers and memory?) But how does the MGA get from showing that to showing that inert matter can be a possible substrate? (ISTM that a projected graph is not inert, if that's the argument.)

Yes, as I understand it that's the argument.  It's consistent with Platonism.  A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.

My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something.  An isolated, pure consciousness is an oxymoron.  Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.

Brent

meekerdb

Mar 23, 2015, 8:15:12 PM
to everyth...@googlegroups.com
The difference is that you can't separate out a material instantiation of a computation and say that because it can be inert, consciousness doesn't need matter. If consciousness is computation then it can be implemented in different substances, but there must be a whole world implemented in that substance too, for the consciousness to be conscious OF. The projected sequence of the MGA isn't conscious in this world because it doesn't interact with this world. We tend to intuit that it's conscious because we see the causal connection to this world, but to be conscious it would have to be part of a whole projected world in which it acted. This is no longer so radical. It's just saying you could digitally simulate a world and conscious beings within it. The simulation might look inert to us in our world, as it would if it were just written out on paper, but from within the simulation it would look dynamic.

Brent

meekerdb

Mar 23, 2015, 8:16:36 PM
to everyth...@googlegroups.com
On 3/23/2015 4:04 PM, Russell Standish wrote:
> On Tue, Mar 24, 2015 at 11:48:52AM +1300, LizR wrote:
>> On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
>>
>>> That's where the MGA comes in. It purports to show that one of the
>>> possible substrates is inert matter, which seems so absurd that we should
>>> conclude the matter plays no part whatsoever.
>>>
>> That sounds like Maudlin's Olympia argument....?
>>
>> So far I get that different substrates can create the same computational
>> states (by which I assume we mean the contents of registers and memory?)
>> But how does the MGA get from showing that to showing that inert matter can
>> be a possible substrate? (ISTM that a projected graph is not inert, if
>> that's the argument.)
>>
> Broadly, the idea is to use the notion that movement is relative. If a
> machine is moving through a fixed sequence of states, we can
> equivalently set things up so the machine is inert, but the observer
> moves in such a way that appearance is unchanged. The absurdity is
> that this implies consciousness depends on the motion of the observer.

There doesn't even have to be movement, just some ordering (movement is just ordering in
time - but time isn't fundamental).

Brent

meekerdb

Mar 23, 2015, 8:33:34 PM
to everyth...@googlegroups.com
Right. It's only order that is needed. But if this view is to be taken as fundamental
then it requires that there be some overlap between states in order to define the
sequence. Otherwise there's no inherent way to order them without assuming some known
dynamics, the 2nd law, etc.
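As a toy illustration of ordering by cumulative records (my own sketch; the "time capsule" encoding is invented for the example):

```python
# If each state carries records of the states it "remembers"
# (Barbour-style time capsules), a shuffled bag of states can be
# re-ordered purely from that overlap -- no external time parameter.

import random

# build states: state N records the history [event-0, ..., event-N]
history, states = [], []
for n in range(6):
    history.append(f"event-{n}")
    states.append(tuple(history))     # each state contains its whole past

shuffled = states[:]
random.shuffle(shuffled)

# entropy-style ordering principle: more recorded information = later
recovered = sorted(shuffled, key=len)
print(recovered == states)  # True: the order is recoverable from overlap
```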

Brent

LizR

Mar 23, 2015, 8:44:22 PM
to everyth...@googlegroups.com
On 24 March 2015 at 13:07, meekerdb <meek...@verizon.net> wrote:
Yes, as I understand it that's the argument.  It's consistent with Platonism.  A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.

So is the idea to show that a recording is just as conscious as the original calculation?

My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something.  An isolated, pure consciousness is an oxymoron.  Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.

I am under the impression Bruno gets around that by potentially allowing the environment to be simulated as well. Or contrariwise, can't all the inputs to the consciousness be provided as though it was in the world? (As for a brain in a vat, for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)
 

LizR

Mar 23, 2015, 8:46:14 PM
to everyth...@googlegroups.com
I think you've answered your own objection there. There's no reason (in principle) for the world not to be simulated, I assume.
 

LizR

Mar 23, 2015, 8:48:21 PM
to everyth...@googlegroups.com
On 24 March 2015 at 13:33, meekerdb <meek...@verizon.net> wrote:
(if you are concerned that /some/ notion of time is essential, then it needs only that time be encoded in the states in some way. No external time parameter is needed. See Julian Barbour's book /The End of Time/)

Right.  It's only order that is needed.  But if this view is to be taken as fundamental then it requires that there be some overlap between states in order to define the sequence.  Otherwise there's no inherent way to order them without assuming some known dynamics, the 2nd law, etc.

This sounds like "October the First Is Too Late" again. There is some principle that allows state N+1 to contain a record of N, N-1, etc., but not of N+2.

LizR

Mar 23, 2015, 8:50:01 PM
to everyth...@googlegroups.com
On 24 March 2015 at 13:44, LizR <liz...@gmail.com> wrote:
Or contrariwise, can't all the inputs to the consciousness be provided as though it was in the world? (as for a brain in a vat for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)

Of course that might be the vital point where the whole argument goes astray. (Like "The man in the room can just look up the answer..." )
 

meekerdb

Mar 23, 2015, 8:57:33 PM
to everyth...@googlegroups.com
Yes, he casually dismisses the objection by saying we'll just include the environment too.  But that's my point: it's then no longer a new radical result.  It's just saying that if you simulate a world it can include conscious beings who are conscious of that world.  But IN THAT WORLD their substrate is not inert - even if it's inert in our world. E.g. consider the novel "Moby Dick" being simulated in a computer.  To Ishmael and Ahab in the computer they'd be conscious and experiencing the hunt for the white whale.  And, according to Platonists, they are as printed on the page too.

Brent

Bruce Kellett

Mar 23, 2015, 8:57:53 PM
to everyth...@googlegroups.com
I am not sure I fully grasp what you mean by 'some overlap between
states'. If you follow Barbour's idea then each state is complete within
itself -- containing a 'time capsule' that gives the illusion of the
passage of time. There need be no ordered sequence at all between states
-- at least if I remember Barbour correctly.

Bruce

meekerdb

Mar 23, 2015, 8:59:38 PM
to everyth...@googlegroups.com
Right.  In which case there's also no meaning to saying it is simulated.  To be simulated is only meaningful when it is relative to a really real world. 

Brent

meekerdb

Mar 23, 2015, 9:09:08 PM
to everyth...@googlegroups.com
It's been a long time since I read Barbour, but I believe he uses the idea that state N+1
contains information from N, i.e. records/history. So, although the state is complete in
itself, in terms of information this is a kind of "overlap". Then hypothesizing something
like the second law provides an ordering.

Brent

Bruce Kellett

Mar 23, 2015, 9:22:55 PM
to everyth...@googlegroups.com
It may be a more radical idea than that Barbour proposed, but if you
take this to its obvious conclusion, then there need be no connection
between the states whatsoever. The illusion of 'time', and the laws by
which one might mark the passage of time, are all encoded within each
individual state; there need be no connection with other states, and
certainly no notion of progression from state N-1 to N, and thence to N+1.

Bruce
(We only know the past by means of memories/records in the present.)

meekerdb

Mar 23, 2015, 9:31:24 PM
to everyth...@googlegroups.com
Why isn't the cumulative inclusion of information (increased entropy) an ordering
principle? What more could you ask for to order discrete states?

Brent

Bruce Kellett

Mar 23, 2015, 9:59:00 PM
to everyth...@googlegroups.com
It is a perfectly good ordering system. But my point was that we do not
even need distinct states that need to be ordered.

Bruce

Russell Standish

Mar 23, 2015, 11:10:42 PM
to everyth...@googlegroups.com
On Tue, Mar 24, 2015 at 10:10:37AM +1100, Bruce Kellett wrote:
> Russell Standish wrote:
> >On Tue, Mar 24, 2015 at 11:48:52AM +1300, LizR wrote:
> >>On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
> >>
> >>> That's where the MGA comes in. It purports to show that one of the
> >>>possible substrates is inert matter, which seems so absurd that we should
> >>>conclude the matter plays no part whatsoever.
> >>>
> >>That sounds like Maudlin's Olympia argument....?
> >>
> >>So far I get that different substrates can create the same computational
> >>states (by which I assume we mean the contents of registers and memory?)
> >>But how does the MGA get from showing that to showing that inert matter can
> >>be a possible substrate? (ISTM that a projected graph is not inert, if
> >>that's the argument.)
> >>
> >
> >Broadly, the idea is to use the notion that movement is relative. If a
> >machine is moving through a fixed sequence of states, we can
> >equivalently set things up so the machine is inert, but the observer
> >moves in such a way that appearance is unchanged. The absurdity is
> >that this implies consciousness depends on the motion of the observer.
>
> No, it doesn't imply any such thing. The motion of the observer, or
> rate of change of the sequence of states, is irrelevant to
> consciousness. The only relevant thing is the states themselves --
> the rate at which they are observed (or even if they are static)
> does not matter.
>

Then clearly, you have no problem with the concept of a conscious
recording.

In order for the MGA to go through, conscious recordings need to be
considered absurd.

I personally don't have an opinion either way, which is why I
consider that to be a rather serious flaw of the MGA.

Bruce Kellett

Mar 23, 2015, 11:17:24 PM
to everyth...@googlegroups.com
If you take the block universe model seriously then we are nothing more
than conscious recordings!

I don't know what MGA stands for, or what it means, so I can't comment
on that.

Bruce

LizR

Mar 23, 2015, 11:39:12 PM
to everyth...@googlegroups.com
Apologies

"Movie Graph Argument" - from Bruno's 2004 paper I believe.

I also don't know whether it makes more or less sense for a recording to be conscious than a computation. However I have a feeling Bruno addressed this when he was explaining the whole thing to me, some time ago - it doesn't matter whether the recording is conscious or not, it's just one of the infinite number of possible computations that contribute to generating that moment of consciousness.

Perhaps! Although it seems to me that is assuming comp, so maybe I didn't get that right.




Russell Standish

Mar 23, 2015, 11:39:24 PM
to everyth...@googlegroups.com
On Tue, Mar 24, 2015 at 02:17:40PM +1100, Bruce Kellett wrote:
>
> If you take the block universe model seriously then we are nothing
> more than conscious recordings!
>

Fair point!

> I don't know what MGA stands for, or what it means, so I can't
> comment on that.
>

Ha - it's in the title of this thread!

It's Bruno Marchal's "Movie Graph Argument" (I would probably have
translated it as "Filmed Graph Argument", but MGA seems to have
stuck). Also known as "Step 8" of the UDA. It purports to show that
materialism and mechanism (aka computationalism) are fundamentally
incompatible. It is closely related to Tim Maudlin's Olympia argument.

I wrote a preprint which is available from
http://www.hpcoders.com.au/blog/?p=73 if you're interested in knowing
a bit more, or at least as a source of references, if you think I'm
too turgid.

Cheers

Bruce Kellett

Mar 24, 2015, 1:04:46 AM
to everyth...@googlegroups.com
Russell Standish wrote:
> On Tue, Mar 24, 2015 at 02:17:40PM +1100, Bruce Kellett wrote:
>> If you take the block universe model seriously then we are nothing
>> more than conscious recordings!
>>
>
> Fair point!
>
>> I don't know what MGA stands for, or what it means, so I can't
>> comment on that.
>>
>
> Ha - it's in the title of this thread!

I can enter a thread without knowing what the title means!

>
> It's Bruno Marchal's "Movie Graph Argument" (I would probably have
> translated it as "Filmed Graph Argument", but MGA seems to have
> stuck. Also known as "Step 8" of the UDA. It purports to show that
> materialism and mechanism (aka computationalism) are fundamentally
> incompatible. It is closely related to Tim Maudlin's Olympia argument.
>
> I wrote a preprint which is available from
> http://www.hpcoders.com.au/blog/?p=73 if you're interested in knowing
> a bit more, or at least is a source of references, if you think I'm
> too turgid.

Thanks. I downloaded your short paper. I can see why the extended
arguments on the everything list have failed to move me. The gaps in your
argument are rather evident. You state that if computationalism is
valid, then all possible experiences are instantiated by the dovetailer.
But nowhere do you define what a possible experience is. It seems to
depend on the fact that the dovetailer runs all possible computer
programs. In that case, it runs a program in which, in the next instant,
I become lighter than air and can float around the room!
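The dovetailer being referred to can be sketched as a simple interleaving loop (a toy version of my own, with Python generators standing in for programs):

```python
# Toy dovetailer: interleave steps of an (in principle infinite)
# enumeration of programs so every program gets unboundedly many steps.
# Each "program" here just emits (program_id, step) pairs.

from itertools import count, islice

def program(k):
    """Stand-in for the k-th program: an endless stream of its outputs."""
    for step in count():
        yield (k, step)

def dovetail():
    """Round n: admit program n, then run one step of every program so far."""
    progs = []
    for n in count():
        progs.append(program(n))     # admit a new program each round
        for p in progs:
            yield next(p)            # one step of every admitted program

trace = list(islice(dovetail(), 10))
print(trace)  # every program eventually appears and keeps getting steps
```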

In fact, I can write computer programs where the laws of physics change
from instant to instant. Why do we not experience these things?

ISTM that you are simply assuming that 'possible' means 'possible within
the bounds of the physical laws that govern the world we live in.' I
think you might see the problem with such an assumption.

Bruce

meekerdb

Mar 24, 2015, 1:21:02 AM
to everyth...@googlegroups.com
Aye, there's the rub. Bruno claims that such capricious sequences of experience must have
small measure. But I think the "must" means "so that my theory will hold water." Anyway,
he admits it's an open problem to show that the UD doesn't just produce experiential confetti.

Brent

Bruce Kellett

Mar 24, 2015, 1:31:38 AM
to everyth...@googlegroups.com
So why do we waste time on such an incomplete theory?

I would say that rather than such random sequences of experiences having
small measure, they must dominate. We need the glue of laws to hold our
sequence of experiences together -- and these laws can only come from
experience. It is clear that computationalism fails and that
physicalism, with its given laws, wins the day.

Bruce

meekerdb

Mar 24, 2015, 1:47:57 AM
to everyth...@googlegroups.com
Note that this is the "everything-list", so it attracts people whose idea of a TOE is to
start with the assumption that everything (in some sense) happens and try to extract the
observed world from that. Also it seems that combining the cosmological multiverse and
Everett's multiple worlds, as discussed by Carroll and others, has the same problem.
Without somehow deriving or postulating Born's rule, these ideas also threaten to just
produce noise. So if Bruno has a solution in Platonia, it would be interesting.

ISTM that the problem might go away in a continuum form of computationalism in which
digital computation is just an approximation. But continuum computation, and a continuum
analog of the UD, aren't well defined.

Brent

Bruce Kellett

unread,
Mar 24, 2015, 2:05:39 AM3/24/15
to everyth...@googlegroups.com
meekerdb wrote:
> On 3/23/2015 10:31 PM, Bruce Kellett wrote:
>> meekerdb wrote:
>>>
>>> Aye, there's the rub. Bruno claims that such capricious sequences of
>>> experience must have small measure. But I think the "must" means "so
>>> that my theory will hold water." Anyway he admits it's an open
>>> problem to show that the UD doesn't just produce experiential confetti.
>>
>> So why do we waste time on such an incomplete theory?
>
> Note that this is the "everything-list", so it attracts people whose
> idea of a TOE is to start with the assumption that everything (in some
> sense) happens and try to extract the observed world from that.

Fair enough, though a TOE might take many forms that do not involve such
a starting point. But if that is the aim, then it would seem that a
means of eliminating random law changes and random experiences might
rank fairly high on the list of problems to be solved.

> Also it
> seems that combining the cosmological multiverse and Everett's multiple
> worlds, as discussed by Carroll and others has the same problem.
> Without somehow deriving or postulating Born's rule these ideas also
> threaten to just produce noise. So if Bruno has solution in Platonia,
> it would be interesting.

The trouble with Platonia is that you are actually trying to get an a
priori account of the world. I thought that that enterprise had been
abandoned when Kant failed to prove that space had to be Euclidean. As
Hume said, causation comes from constant conjunction -- experience
preempts the a priori.

I think you are right that deriving the Born rule is the stumbling block
for Everett and related notions.

Bruce

Stathis Papaioannou

unread,
Mar 24, 2015, 2:17:22 AM3/24/15
to everyth...@googlegroups.com
In addition to considerations about measure there are anthropic
considerations, since you would not be able to form and sustain
thoughts in worlds where physical laws change from moment to moment
even if those worlds are more common.


Stathis Papaioannou

Bruce Kellett

unread,
Mar 24, 2015, 2:47:26 AM3/24/15
to everyth...@googlegroups.com
The frequency and extent of law changes need not be that great to cause
real problems for the theory. Besides, you always have the problem of
"Last Thursdayism".

Bruce

LizR

unread,
Mar 24, 2015, 3:54:57 AM3/24/15
to everyth...@googlegroups.com
What's wrong with Last Thursdayism?

Bruce Kellett

unread,
Mar 24, 2015, 3:58:49 AM3/24/15
to everyth...@googlegroups.com
LizR wrote:
> What's wrong with Last Thursdayism?

Nothing that I know of. But people tend not to like the idea.

Bruce

Quentin Anciaux

unread,
Mar 24, 2015, 5:24:08 AM3/24/15
to everyth...@googlegroups.com
If the world is a computation, conscious parts of it are subprograms that can be isolated, by definition... And that, for their consciousness to have meaning when they run, they must be fed inputs that have meaning for the conscious subprogram, is a tautology...

Also, the MGA *never* asserts that the simulated consciousness is conscious of *our* world (as it obviously can't be, since it isn't fed inputs from our world)... it only assumes that you're running a program that is thought to be conscious (simulating a conscious being), and shows that if you accept that, and you accept the supervenience thesis, and so accept that it is conscious in virtue of running on bare matter, you have to accept that the same stream of consciousness supervenes on the projection + broken gate.

Regards,
Quentin
 
Brent

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To post to this group, send email to everyth...@googlegroups.com.



--
All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

meekerdb

unread,
Mar 24, 2015, 7:11:28 PM3/24/15
to everyth...@googlegroups.com
On 3/24/2015 2:23 AM, Quentin Anciaux wrote:


2015-03-24 1:57 GMT+01:00 meekerdb <meek...@verizon.net>:
On 3/23/2015 5:44 PM, LizR wrote:
On 24 March 2015 at 13:07, meekerdb <meek...@verizon.net> wrote:
Yes, as I understand it that's the argument.  It's consistent with Platonism.  A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.

So is the idea to show that a recording is just as conscious as the original calculation?

My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something.  An isolated, pure consciousness is an oxymoron.  Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.

I am under the impression Bruno gets around that by potentially allowing the environment to be simulated as well. Or contrariwise, can't all the inputs to the consciousness be provided as though it were in the world? (As for a brain in a vat, for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)

Yes, he casually dismisses the objection by saying we'll just include the environment too.  But that's my point: it's then no longer a new radical result.  It's just saying that if you simulate a world it can include conscious beings who are conscious of that world.  But IN THAT WORLD their substrate is not inert - even if it's inert in our world.  E.g. consider the novel "Moby Dick" being simulated in a computer: to Ishmael and Ahab in the computer they'd be conscious and experiencing the hunt for the white whale.  And, according to Platonists, they are conscious as printed on the page too.


If the world is a computation, conscious parts of it are subprograms that can be isolated, by definition...

That's the point I disagree with.  When Bruno starts the comp argument by asking if you would say "Yes" to the doctor, it is implicit that the doctor is going to replace some part or all of your brain, BUT it's going to remain within the same environmental context.  I think the "consciousness subprogram" can run without the context, but I think it gets its meaning, what it's about, from the context - and I think that context has to be very broad, including evolutionary history for example.


And that, for their consciousness to have meaning when they run, they must be fed inputs that have meaning for the conscious subprogram, is a tautology...

Also, the MGA *never* asserts that the simulated consciousness is conscious of *our* world

It's implied by his Alice discussion.  If the computation were just some arbitrary program we would have no reason to think it instantiated consciousness.  We only think that because it is a recording of a conscious computation in our world.


(as it obviously can't be, since it isn't fed inputs from our world)... it only assumes that you're running a program that is thought to be conscious (simulating a conscious being) and shows that if you accept that, and you accept the supervenience thesis, and so accept that it is conscious in virtue of running on bare matter, you have to accept that the same stream of consciousness supervenes on the projection + broken gate.

But I'm not accepting the supervenience thesis as applied to an isolated sequence of states.  Without the context (which is implicit in the counterfactuals) the same sequence of computations could correspond to two different meanings, two different conscious thoughts - just as the same set of differential equations can model two different physical systems.

I'm not sure how this plays into the UD, because there are infinitely many threads of computation through the same state.  The state cannot, by itself, instantiate a thought.  A thought must require a long sequence of identical or similar states.  But in the UD there are no counterfactuals, because every possibility occurs at some point and branches from the thread.  At least that's how I understand it.

Brent

Quentin Anciaux

unread,
Mar 24, 2015, 7:25:06 PM3/24/15
to everyth...@googlegroups.com


Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net> a écrit :
>
> On 3/24/2015 2:23 AM, Quentin Anciaux wrote:
>>
>>
>>
>> 2015-03-24 1:57 GMT+01:00 meekerdb <meek...@verizon.net>:
>>>
>>> On 3/23/2015 5:44 PM, LizR wrote:
>>>>
>>>> On 24 March 2015 at 13:07, meekerdb <meek...@verizon.net> wrote:
>>>>>
>>>>> Yes, as I understand it that's the argument.  It's consistent with Platonism.  A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.
>>>>
>>>>
>>>> So is the idea to show that a recording is just as conscious as the original calculation?
>>>>>
>>>>>
>>>>> My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something.  An isolated, pure consciousness is an oxymoron.  Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.
>>>>
>>>>
>>>> I am under the impression Bruno gets around that by potentially allowing the environment to be simulated as well. Or contrariwise, can't all the inputs to the consciousness be provided as though it were in the world? (as for a brain in a vat for example. I mean hypothetically, and to simplify the argument, not as a general model of consciousness.)
>>>
>>>
>>> Yes, he casually dismisses the objection by saying we'll just include the environment too.  But that's my point that it's then no longer a new radical result.  It's just saying that if you simulate a world it can include conscious beings who are conscious of that world.  But IN THAT WORLD their substrate is not inert - even if it's inert in our world, e.g. consider the novel "Moby Dick" being simulated in a computer.  To Ishmael and Ahab in the computer they'd be conscious and experiencing the hunt for the white whale.  And, according to Platonists, they are as printed on the page too.
>>>
>>
>> If the world is a computation, conscious parts of it are subprograms that can be isolated, by definition...
>
>
> That's the point I disagree with. 

If it's a program then you've no choice.

> When Bruno starts the comp argument by asking if you would say "Yes" to the doctor, it is implicit that the doctor is going to replace some part or all of your brain, BUT it's going to remain within the same environmental context. 

Yes... But that context could also be simulated... In the end, whatever the conscious program knows, it knows through an interface...

> I think the "consciousness subprogram" can run without the context, but I think it gets its meaning, what it's about, from the context

The context is internal to the conscious subprogram, since it is the subprogram, by definition, that gives meaning. The 'external' world is only inputs received through the subprogram's interface, no more.

> - and I think that context has to be very broad, including evolutionary history for example.
>
>
>> now that when they run, for their consciousness to have meaning they must be fed input that have meaning to the conscious subprogram is a tautology...
>>
>> Also, the MGA *never* assert that the consciousness simulated is conscious of *our* world
>
>
> It's implied by his Alice discussion. 

When rerunning the program with the recorded initial input, by hypothesis the second run must be as conscious as the first, when the inputs came from the 'real' external world... The program itself can't tell, as it receives exactly the same inputs... Not similar inputs but *exactly* the same. So either the second run is as conscious as the first, or neither is.
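Quentin's point here is the standard deterministic-replay property: a program fed the recorded inputs of a first run produces a bit-for-bit identical state trace. A minimal sketch (the `step` update rule below is an arbitrary stand-in of my own, not anyone's model of a brain):

```python
# Deterministic replay: feed a program the *recorded* inputs from a first
# run and the state trace is identical. `step` stands in for "the program".

def run(step, state, inputs):
    trace = [state]
    for inp in inputs:
        state = step(state, inp)   # purely deterministic transition
        trace.append(state)
    return trace

step = lambda s, inp: (s * 31 + inp) % 97   # any deterministic update rule

recorded_inputs = [3, 1, 4, 1, 5]           # captured from the "live" run
first = run(step, 0, recorded_inputs)
replay = run(step, 0, recorded_inputs)

assert first == replay   # the program itself cannot tell the runs apart
```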

>If the computation were just some arbitrary program we would have no reason to think it instantiated consciousness.

I never said it's an arbitrary program; I said it's a program thought to instantiate a conscious moment... However you determined that it is conscious in the first place is irrelevant to the argument.

>  We only think that because it is a recording of a conscious computation in our world.
>
>
>> (as it is obvious it can't be as it isn't fed inputs from our world)... it only assumes that you're running a program who is thought to be conscious (simulating a conscious being) and shows that if you accept that, and you accept the supervenience thesis and so accept that it is conscious in virtue of running in bare matter, you have to accept that the same stream of consciousness supervene on the projection + broken gate.
>
>
> But I'm not accepting the supervenience thesis as applied to an isolated sequence of states.  Without the context (which is implicit in the counterfactuals) the same sequence of computations could correspond to two different meanings, two different conscious thoughts - just as the same set of differential equations can model two different physical systems.
>
> I'm not sure how this plays into the UD because there they are infinitely many threads of computation through the same state.  The state cannot, by itself, instantiate a thought.  A thought must require a long sequence of identical or similar states.  But in the UD there are no counterfactuals, because every possibility occurs at some point and branches from the thread.  At least that's how I understand it.
>
> Brent
>
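Quentin's "interface" claim can also be sketched: a subprogram that only receives inputs through an interface cannot distinguish a "real" source from a simulation replaying the same values. All names below are hypothetical, for illustration only.

```python
# Toy sketch of the interface point: the subprogram's ONLY channel to
# "the world" is `interface`; a simulated environment supplying the same
# inputs is indistinguishable from the inside.

def subprogram(interface, steps):
    beliefs = []
    for _ in range(steps):
        percept = interface()        # the only channel to "the world"
        beliefs.append(percept * 2)  # some fixed internal processing
    return beliefs

def real_world():
    real_world.t += 1                # a "live" environment: 1, 2, 3, ...
    return real_world.t
real_world.t = 0

recording = [1, 2, 3]                # what the real world happened to supply
def simulated_world(playback=iter(recording)):
    return next(playback)            # replay the recording, value by value

lived = subprogram(real_world, 3)
dreamt = subprogram(simulated_world, 3)
assert lived == dreamt               # identical from the inside
```

The design point is that `subprogram` receives no information about which environment produced its percepts; that is the sense in which the "external" world is only inputs crossing an interface.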

Russell Standish

unread,
Mar 25, 2015, 12:08:55 AM3/25/15
to everyth...@googlegroups.com
On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net> a écrit :
>
> When rerunning the program with the recorded initial input, by hypothesis
> the second run must be as conscious as the first when the inputs came from
> the 'real' external world... The program itself can't tell as it receives
> exactly the same inputs... Not similar inputs but *exactly* the same. So
> either the second run is as conscious as the first or none are.

Or there is precisely one sequence of conscious observer moments no
matter how many times it is rerun (or recorded and replayed, whatever).

Quentin Anciaux

unread,
Mar 25, 2015, 2:18:56 AM3/25/15
to everyth...@googlegroups.com


Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au> a écrit :
>
> On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net> a écrit :
> >
> > When rerunning the program with the recorded initial input, by hypothesis
> > the second run must be as conscious as the first when the inputs came from
> > the 'real'  external world... The program itself can't tell as it receives
> > exactly the same inputs... Not similar inputs but *exactly* the same. So
> > either the second run is as conscious as the first or none are.
>
> Or there is precisely one sequence of conscious observer moments no
> matter how many times it is rerun (or recorded and replayed, whatever).
>
> Cheers

Then in this case physical supervenience is false... The movie graph argument is showing just that: if you believe it's the physical activity token that generates consciousness, then you must accept that a consciousness supervenes on the movie plus broken gate. If you disbelieve physical supervenience from the start, then the MGA is not needed.

Quentin


meekerdb

unread,
Mar 25, 2015, 2:23:25 AM3/25/15
to everyth...@googlegroups.com
On 3/24/2015 11:18 PM, Quentin Anciaux wrote:


Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au> a écrit :
>
> On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net> a écrit :
> >
> > When rerunning the program with the recorded initial input, by hypothesis
> > the second run must be as conscious as the first when the inputs came from
> > the 'real'  external world... The program itself can't tell as it receives
> > exactly the same inputs... Not similar inputs but *exactly* the same. So
> > either the second run is as conscious as the first or none are.
>
> Or there is precisely one sequence of conscious observer moments no
> matter how many times it is rerun (or recorded and replayed, whatever).
>
> Cheers

Then in this case physical supervenience is false...


How so?  Supervenience doesn't forbid different substrates from producing the same supervening effect.  In this case it would be two different instances of the physical process producing the same conscious thoughts.

Brent

Quentin Anciaux

unread,
Mar 25, 2015, 2:27:32 AM3/25/15
to everyth...@googlegroups.com

If they are different instances, both moments are conscious... not only one... How many times it is run matters because, under physical supervenience, it's the physical token that generates consciousness. So if you say that it doesn't matter how many times you run the consciousness-able program with the correct inputs, then you reject physical supervenience.

Quentin

>
> Brent

Quentin Anciaux

unread,
Mar 25, 2015, 2:29:00 AM3/25/15
to everyth...@googlegroups.com


Le 25 mars 2015 07:27, "Quentin Anciaux" <allc...@gmail.com> a écrit :
>
>
> Le 25 mars 2015 07:23, "meekerdb" <meek...@verizon.net> a écrit :
>
> >
> > On 3/24/2015 11:18 PM, Quentin Anciaux wrote:
> >>
> >>
> >> Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au> a écrit :
> >> >
> >> > On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> >> > > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net> a écrit :
> >> > >
> >> > > When rerunning the program with the recorded initial input, by hypothesis
> >> > > the second run must be as conscious as the first when the inputs came from
> >> > > the 'real'  external world... The program itself can't tell as it receives
> >> > > exactly the same inputs... Not similar inputs but *exactly* the same. So
> >> > > either the second run is as conscious as the first or none are.
> >> >
> >> > Or there is precisely one sequence of conscious observer moments no
> >> > matter how many times it is rerun (or recorded and replayed, whatever).
> >> >
> >> > Cheers
> >>
> >> Then in this case physical supervenience is false...
> >
> >
> > How so?  Supervenience doesn't forbid different substrates from producing the same supervening effect.  In this case it would be two different instances of the physical process producing the same conscious thoughts.
>
> If they are different instances, both moments are conscious... not only one... How many times it is run matters because, under physical supervenience, it's the physical token that generates consciousness. So if you say that it doesn't matter how many times you run the consciousness-able program with the correct inputs,

Because there is only one conscious moment

Bruce Kellett

unread,
Mar 25, 2015, 7:09:13 AM3/25/15
to everyth...@googlegroups.com
Quentin Anciaux wrote:
> Le 25 mars 2015 07:27, "Quentin Anciaux" <allc...@gmail.com
> <mailto:allc...@gmail.com>> a écrit :
> > Le 25 mars 2015 07:23, "meekerdb" <meek...@verizon.net
> <mailto:meek...@verizon.net>> a écrit :
> > > On 3/24/2015 11:18 PM, Quentin Anciaux wrote:
> > >>
> > >> Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au
> <mailto:li...@hpcoders.com.au>> a écrit :
> > >> >
> > >> > On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
> > >> > > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net
> <mailto:meek...@verizon.net>> a écrit :
I do not think this follows. Consciousness supervenes on the brain
states. It does not matter if these are instantiated in brain wetware or
in an accurate record of these brain states on a film or in a computer
memory. It is the states (or sequence of states) that makes up the
conscious experience. If the record is exact, then replaying it
reproduces exactly the initial conscious experience (as Russell points
out), not some other experience.

How does this undermine physical supervenience? The brain wetware,
photographic film, and computer memory are all physical things that
instantiate the appropriate states and the conscious experience
supervenes on these. The architecture of the computer that simulates
consciousness does not matter as long as it accurately reproduces the
appropriate brain states.

Bruce

Quentin Anciaux

unread,
Mar 25, 2015, 7:25:58 AM3/25/15
to everyth...@googlegroups.com
2015-03-25 12:09 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Quentin Anciaux wrote:
Le 25 mars 2015 07:27, "Quentin Anciaux" <allc...@gmail.com <mailto:allc...@gmail.com>> a écrit :
 > Le 25 mars 2015 07:23, "meekerdb" <meek...@verizon.net <mailto:meek...@verizon.net>> a écrit :
 > > On 3/24/2015 11:18 PM, Quentin Anciaux wrote:
 > >>
 > >> Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au <mailto:li...@hpcoders.com.au>> a écrit :
 > >> >
 > >> > On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
 > >> > > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net <mailto:meek...@verizon.net>> a écrit :
 > >> > >
 > >> > > When rerunning the program with the recorded initial input, by hypothesis
 > >> > > the second run must be as conscious as the first when the inputs came from
 > >> > > the 'real'  external world... The program itself can't tell as it receives
 > >> > > exactly the same inputs... Not similar inputs but *exactly* the same. So
 > >> > > either the second run is as conscious as the first or none are.
 > >> >
 > >> > Or there is precisely one sequence of conscious observer moments no
 > >> > matter how many times it is rerun (or recorded and replayed, whatever).
 > >> >
 > >> > Cheers
 > >>
 > >> Then in this case physical supervenience is false...
 > >
How so?  Supervenience doesn't forbid different substrates from producing the same supervening effect.  In this case it would be two different instances of the physical process producing the same conscious thoughts.

If they are different instances, both moments are conscious... not only one... How many times it is run matters because, under physical supervenience, it's the physical token that generates consciousness. So if you say that it doesn't matter how many times you run the consciousness-able program with the correct inputs,

Because there is only one conscious moment

then you reject physical supervenience.

I do not think this follows. Consciousness supervenes on the brain states. It does not matter if these are instantiated in brain wetware or in an accurate record of these brain states on a film or in a computer memory. It is the states (or sequence of states) that makes up the conscious experience. If the record is exact, then replaying it reproduces exactly the initial conscious experience (as Russell points out), not some other experience.

Yes... that's what I said... replaying it N times under physical supervenience means you have the conscious moment supervening on the substrate *in realtime* N times (exactly the same conscious moment), but it is instantiated N times, not only once... (When I say realtime, it's not that the inner time of the conscious moment should be one-to-one with the external time in which that conscious moment is supervening, but that the conscious moment exists at the same time it is running) (as Russell seems to say).

Like, say, when I'm running an actual program (any one, not a "conscious" one, if that exists) in a VM with the recorded inputs of a previous run... every time I run it, it is running and instantiated (not just once)... Likewise for a "conscious" program: under physical supervenience, the conscious moment would be "existing" every time I run it, and not just once.
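The disagreement here reads like the type/token distinction: N physical runs are N tokens, while the set of distinct traces they realise may contain exactly one type. A toy sketch of that framing (my own, not a position anyone in the thread states in these terms):

```python
# N runs of the same deterministic "conscious-moment" program: count the
# runs (tokens) versus the distinct traces they realise (types).

def conscious_moment(inputs):
    state, trace = 0, []
    for inp in inputs:
        state = (state + inp) % 7    # any deterministic dynamics
        trace.append(state)
    return tuple(trace)

recorded = [2, 5, 3]
runs = [conscious_moment(recorded) for _ in range(4)]   # four physical runs

assert len(runs) == 4        # Quentin's count: four instantiations (tokens)
assert len(set(runs)) == 1   # Russell's count: one conscious moment (type)
```

Both counts are correct about the same situation; the argument is over which count physical supervenience obliges you to take as the count of conscious moments.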
 

How does this undermine physical supervenience? The brain wetware, photographic film, and computer memory are all physical things that instantiate the appropriate states and the conscious experience supervenes on these. The architecture of the computer that simulates consciousness does not matter as long as it accurately reproduces the appropriate brain states.

Multiple realisation does not undermine physical supervenience... what undermines it is that you're forced to accept (with the movie graph argument) that the consciousness is supervening on the movie + broken gate... which is absurd, and the conclusion is that either physical supervenience is false or computationalism is...

Quentin

I agree with that... I'm just saying that if you say, under physical supervenience, that running the conscious moment N times does not instantiate it N times, then you simply reject physical supervenience...
 


Bruce



Quentin Anciaux

unread,
Mar 25, 2015, 7:27:52 AM3/25/15
to everyth...@googlegroups.com
2015-03-25 12:25 GMT+01:00 Quentin Anciaux <allc...@gmail.com>:


2015-03-25 12:09 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Quentin Anciaux wrote:
Le 25 mars 2015 07:27, "Quentin Anciaux" <allc...@gmail.com <mailto:allc...@gmail.com>> a écrit :
 > Le 25 mars 2015 07:23, "meekerdb" <meek...@verizon.net <mailto:meek...@verizon.net>> a écrit :
 > > On 3/24/2015 11:18 PM, Quentin Anciaux wrote:
 > >>
 > >> Le 25 mars 2015 05:08, "Russell Standish" <li...@hpcoders.com.au <mailto:li...@hpcoders.com.au>> a écrit :
 > >> >
 > >> > On Wed, Mar 25, 2015 at 12:25:04AM +0100, Quentin Anciaux wrote:
 > >> > > Le 25 mars 2015 00:11, "meekerdb" <meek...@verizon.net <mailto:meek...@verizon.net>> a écrit :
 > >> > >
 > >> > > When rerunning the program with the recorded initial input, by hypothesis
 > >> > > the second run must be as conscious as the first when the inputs came from
 > >> > > the 'real'  external world... The program itself can't tell as it receives
 > >> > > exactly the same inputs... Not similar inputs but *exactly* the same. So
 > >> > > either the second run is as conscious as the first or none are.
 > >> >
 > >> > Or there is precisely one sequence of conscious observer moments no
 > >> > matter how many times it is rerun (or recorded and replayed, whatever).
 > >> >
 > >> > Cheers
 > >>
 > >> Then in this case physical supervenience is false...
 > >
How so?  Supervenience doesn't forbid different substrates from producing the same supervening effect.  In this case it would be two different instances of the physical process producing the same conscious thoughts.

If it's different instances both moment are conscious... Not only one... The how many time it is run is important as by physical supervenience, it's the physical token that generates consciousness. So if ypu say that it doesn't matter how many times you run the cpnsciuous able program with the correct inputs,

Because there is only one conscious moment

then you reject physical supervenience.

I do not think this follows. Consciousness supervenes on the brain states. It does not matter if these are instantiated in brain wetware or in an accurate record of these brain states on a film or in a computer memory. It is the states (or sequence of states) that makes up the conscious experience. If the record is exact, then replaying it reproduces exactly the initial conscious experience (as Russell points out), not some other experience.

Yes... that's what I said... replaying it N times under physical supervenience means you have N times the conscious moment supervening on the substrate *in realtime* (exactly the same conscious moment) but it is instantiated N times, not only once... (when I say realtime, it's not that the inner time of the conscious moment should be one to one with the external time where that conscious moment is supervening, but that the conscious moment exists at the same time it is running) (as Russel seems to say).


Correction: as Russell seems to say, there is only one conscious moment no matter how many times you run it... Well, under physical supervenience you have N times exactly the same conscious moment... but each run is as real and existing as the others... and there is not only one... Saying there is only one is rejecting physical supervenience.

Quentin
 
Like say when I'm running an actual program (any one, not a "conscious" one if that exist) in a VM with recorded inputs of a previous run... everytime I run it, it is running and instantiated (not just once)... likewise a "conscious" program, under physical supervenience, the conscious moment would be "existing" everytime I run it and not just once.
 

How does this undermine physical supervenience? The brain wetware, photographic film, and computer memory are all physical things that instantiate the appropriate states and the conscious experience supervenes on these. The architecture of the computer that simulates consciousness does not matter as long as it accurate reproduces the appropriate brain states.

Multiple realisation does not undermine physical supervenience... what undermine it, is that you're forced to accept (with the movie graph argument) that the consciousness is supervening on the movie + broken gate... which is absurd, and the conclusion is either that physical supervenience is false or computationalism is...

Quentin

I agree with that... I'm just saying that if you say, under physical supervenience, that running N times the conscious moment does not instantiate it N times, then you simply reject physical supervenience...
 


Bruce






Stathis Papaioannou

unread,
Mar 25, 2015, 11:35:56 AM3/25/15
to everyth...@googlegroups.com
If my mind is being run on two separate computers, I can't know which of the two, and I can't say that my last remembered moment was run on one or the other, or that my next anticipated moment will be run on one or the other. If one computer stops it makes no difference to me, and if a third computer running my mind comes online it makes no difference to me. So effectively there is only one conscious moment. Under physical supervenience, stopping all the computers stops the conscious moment.


--
Stathis Papaioannou

Quentin Anciaux

unread,
Mar 25, 2015, 12:16:00 PM3/25/15
to everyth...@googlegroups.com
No, there are as many (identical) conscious moments as there are instances running in "realtime" on the physical substrate *under physical supervenience*... That these conscious moments are exactly the same doesn't change that... only from an idealist POV can you say there is only one.

Quentin
 
Under physical supervenience, stopping all the computers stops the conscious moment.


--
Stathis Papaioannou


Bruno Marchal

unread,
Mar 25, 2015, 12:47:54 PM3/25/15
to everyth...@googlegroups.com
I just discovered the thread. When I have less time, I answer by thread,
and I missed this one.

Very interesting conversation. I will comment, sooner or later, in
chronological or non-chronological order.

I don't have a solution in Platonia; I have a formulation of the problem
in arithmetic (assuming comp, by which I mean the Church-Turing thesis +
the assumption that you, or anyone else, survives some classical digital
(physical, if you want to assume a physical universe) functional
substitution at some level of description).

And Löbian machines like PA and ZF are not stupid, and can already give
their thoughts on the question.

BTW, to me, "theories" (which in recursion theory basically means a
recursively enumerable set (of beliefs, if you want)), or the machines
that you can attach to such theories (the theorem provers), already pass
the Turing test. They are conscious, even if quite disconnected from our
mundane reality.

It is about RA that I have a doubt, but I tend more and more to
attribute consciousness to RA too. This shocks my natural opinion,
though.

I just formulate the problem, and I reduce the mind-body problem to a
body problem in computer science. Using Solovay's discovery, I can easily
distinguish between the computer's computer science and computer
science.

UDA is supposed to help people to understand that the physical
universe is in "your" head.

But thanks to the mathematical discovery of the universal machine by
Turing, which is also an arithmetical discovery, we can translate "the
matter appearance problem for computer science" into the language of PA
and interview it on the question.

The discoveries of Gödel, Löb, and Solovay make it possible to also
interview a kind of God, or Analysts, that is, formal systems or
theories knowing more about the machine than the machine can rationally
justify about herself, and that provides a way to nuance the logics.
With comp, classical logic justifies the need for non-classical logics,
like intuitionistic logic, quantum logics, etc.

I explain that if we assume comp, we have little choice but to extend
Everett's embedding of the physicist in the wave to the Gödel embedding
of the mathematician in arithmetic.



>
> ISTM that the problem might go away in a continuum form of
> computationalism in which digital computation is just an
> approximation. But computation and a continuum analog of the UD
> aren't well defined.

On the contrary. It is "physical" or "matter" which is not well
defined. Computation can be defined in the language of arithmetic, or
with the combinators, straight from the principles I gave the other
day. You can't do that for physical computation; even quantum
computation, which gets close, uses the math of the universal Turing
machine to qualify itself as Turing universal.

If you accept Church's thesis, and "yes doctor", it is simply a fact
that arithmetic contains a web of machine dreams.
The FPI imposes a continuum on them, as any universal machine thinking
twice can understand the point and know that if she is a machine, then
below her substitution level there is an ocean of computations leading
to our relevant state.

You can tell me that there is a real real real physical universe which
does the selection, but how can it succeed without introducing
something non-Turing-emulable and non-FPI-recoverable?

That is what the MGA does: it shows that invoking matter to make the
selection is like introducing a vague, complex metaphysical assumption
in place of trying to solve an interesting problem: the mathematical,
and partially psychological if not theological, origin of the stable
beliefs in physical laws. It extends Darwin to Arithmetic, and the
universal machines, when listened to, already give incredible clues.

Matter has been a fertile methodological assumption, but to get
consciousness and the dreams you eventually need to abandon the
primary character of the notion. Combinators, or numbers, or anything
Turing equivalent, are conceptually simpler.

Quantum computing assumes classical computing, and it (if real) has to
be derivable from classical computing *seen from inside*, which is what
[]p & p, []p & <>p, and []p & <>p & p describe, with the interpretation
of p limited to sigma_1 sentences (quantum computation).

The MGA just shows that if at step seven the move is "we are in a little
universe" (no big Boltzmann brains, no universal dovetailing), then
this invokes a god to avoid the search for an explanation.

With step seven, you know that matter must be explained by a measure
on computations. Step 8, the MGA, just shows that you can save matter
and comp by using some magic, but only that. It makes the
ultrafinitist move equivalent to the God-of-the-gaps type of
argument. It weakens the use of Occam's razor in the possible bet
that comp applies to our reality (making the physical universe itself
NOT entirely Turing emulable, btw).

The MGA can be avoided by reversing the charge. I ask the believers in
primary matter to explain how they can change an arithmetical
computation into something capable of having an attribute that the
arithmetical computation would lack.

Peter Jones answered basically "because I take that as an axiom", like
Omnès for QM, which he defines as Everett-QM + a miracle.

But we can dig deeper, if we take the notion of universal machine
seriously, and actually we can listen to what they have already said.
"We" are handicapped, when working on those fundamental questions, by
millions of years of built-up prejudices, which a simple universal
being does not have a priori (it is different after installing Windows,
or Google, of course).

Bruno



>
> Brent
>>
>> I would say that rather than such random sequences of experiences
>> having small measure, they must dominate. We need the glue of laws
>> to hold our sequence of experiences together -- and these laws can
>> only come from experience. It is clear that computationalism fails
>> and that physicalism, with its given laws, wins the day.
>>
>> Bruce
>>
>>
>>> Brent
>>>
>>>>
>>>> ISTM that you are simply assuming that 'possible' means 'possible
>>>> within the bounds of the physical laws that govern the world we
>>>> live in.' I think you might see the problem with such an
>>>> assumption.
>>>>
>>>> Bruce
>>>>
>>>
>>
>

http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Mar 25, 2015, 12:56:11 PM3/25/15
to everyth...@googlegroups.com

On 25 Mar 2015, at 12:25, Quentin Anciaux wrote:

Multiple realisation does not undermine physical supervenience... what undermines it is that you're forced to accept (with the movie graph argument) that consciousness supervenes on the movie + broken gate... which is absurd, and the conclusion is that either physical supervenience is false or computationalism is...


Good summary. If you accept physical supervenience, you need to accept that the non-active parts of the brain play an active role in the brain, basically. 

It makes clear that it is not the material brain or material computer which does the thinking, but the abstract person run by any sufficiently robust programs, with robustness defined by its most plausible computations above and below the substitution level.

Bruno







Bruno Marchal

unread,
Mar 25, 2015, 1:14:30 PM3/25/15
to everyth...@googlegroups.com
OK. So the winning program is in the FPI limit of what happens below the
substitution level; "we" learn to manage that noise. Why not? That fits
with Feynman's formulation of QM. We would have, to sum up terribly:

In the work of the UD, the winner (the one generating the stable
illusion) is SUM over all e^iUD.

I think somehow that is correct, and I show that there is already the
shadow of something like that being justified by the reasoner reasoning
on itself and its consistent extensions.




> We need the glue of laws to hold our sequence of experiences together

Our dreams, yes. I suggest that we "dream" a sequence of experiences;
particular cases are given by the awake state relative to some
computation(s).



> -- and these laws can only come from experience.

They can be inferred from experience. But they might be justifiable by
a deeper (theological) theory, which would explain the origin and
necessity of the physical laws.



> It is clear that computationalism fails and that physicalism, with
> its given laws, wins the day.

You are quite quick here. Not sure which day you are talking about, as
I am usually out of time these days ;)

Bruno









>
> Bruce
>
>
>> Brent
>>>
>>> ISTM that you are simply assuming that 'possible' means 'possible
>>> within the bounds of the physical laws that govern the world we
>>> live in.' I think you might see the problem with such an assumption.
>>>
>>> Bruce
>>>
>

Bruno Marchal

unread,
Mar 25, 2015, 1:57:05 PM3/25/15
to everyth...@googlegroups.com
Excellent. That *is* the question. But the "everything" of comp is not
just noise; it is the sigma_1-complete part of the arithmetical
reality, and it gets structured when apprehended by the machines or
numbers themselves, from inside.

All modal realisms (or everything-theories) can suffer from an inflation
of realities threatening the possibility of prevision and
theorization. But with computationalism we have computer science to
make the question precise and get some clues, thanks to the vast
accomplishments in that area (not well known, unfortunately).



>
> ISTM that you are simply assuming that 'possible' means 'possible
> within the bounds of the physical laws that govern the world we live
> in.' I think you might see the problem with such an assumption.

At the basic meta-level, possible will mean that a mental state is
accessible by the UD, or exists in arithmetic.

We do have (thanks to Church's thesis) a precise definition of digital
computation, with reasonable equivalence classes of behaviors. We
don't have this for analog or physical computation notions, which have
no (serious) Church's thesis.

Church's thesis (also called the Church-Turing thesis, or CT) is a very
strong thesis. I can prove the incompleteness of all theories about
digital machines in one diagonalization (well, two, as technically you
always make two diagonalizations). Judson Webb and Kleene wrote papers
on that.
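The diagonalization alluded to here can be made concrete in a few lines (a toy sketch of my own, not Webb's or Kleene's actual construction): given any computable enumeration of total functions, the diagonal-plus-one function is itself total and computable, yet differs from every function in the list.

```python
# Diagonalization in miniature: for ANY computable enumeration
# f(i, n) of total functions, the diagonal g(n) = f(n, n) + 1 is a
# total function that differs from the i-th listed function at input i,
# so no such enumeration can be complete.

def f(i, n):
    """A hypothetical (toy) enumeration: the i-th function is n -> (i+1)*n."""
    return (i + 1) * n

def g(n):
    """The diagonal function: guaranteed absent from the enumeration."""
    return f(n, n) + 1

# g escapes the list: it differs from the i-th function at argument i.
print(all(g(i) != f(i, i) for i in range(1000)))  # → True
```

The same move, applied to provability instead of enumeration, is what yields incompleteness.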

Church missed Church's thesis. It is Kleene who coined "Church's
thesis", and who understood best, with Emil Post, Martin Davis, and
some others, its alluring consequences.

Computability can be defined in terms of provability, and indeed
sigma_1-complete provability. Computability is a tiny part of the
arithmetical truth, and living souls supervene on the border between
computability and non-computability (where they can hesitate, notably,
between security and freedom).

I have to go.

I will very plausibly comment on other MGA posts later. Sometimes
people tend to put more into the MGA than there is, just as other
people put more into "arithmetical realism" than there is.

There are two bombs: the universal machine (and the fact that some
universal machines know the price of being universal), and the
empirical bomb: the quantum universal machine. It is too early to
decide whether the quantum universal machine (which does not violate
Church's thesis) confirms or refutes computationalism.

Bruno

meekerdb

unread,
Mar 25, 2015, 4:08:33 PM3/25/15
to everyth...@googlegroups.com
But that's what the MGA is arguing for - an idealist POV.

Brent

Quentin Anciaux

unread,
Mar 25, 2015, 4:15:38 PM3/25/15
to everyth...@googlegroups.com
Yes, and? I was explaining physical supervenience, and how, if you accept it and at the same time believe in computationalism, you are forced by the MGA to acknowledge that consciousness must supervene on the movie + broken gates.

Quentin
 

Brent


LizR

unread,
Mar 25, 2015, 5:02:55 PM3/25/15
to everyth...@googlegroups.com
It's OK, Brent likes to say "But ... " as though he's arguing against you, then repeats what you just said. I have no idea why, when elsewhere he makes very good points.

Bruce Kellett

unread,
Mar 26, 2015, 2:38:20 AM3/26/15
to everyth...@googlegroups.com
Bruno Marchal wrote:
> On 24 Mar 2015, at 06:31, Bruce Kellett wrote:
>> meekerdb wrote:
>>> On 3/23/2015 10:05 PM, Bruce Kellett wrote:
>>>>
>>>> In fact, I can write computer programs where the laws of physics
>>>> change from instant to instant. Why do we not experience these things?
>>>
>>> Aye, there's the rub. Bruno claims that such capricious sequences of
>>> experience must have small measure. But I think the "must" means "so
>>> that my theory will hold water." Anyway he admits it's an open
>>> problem to show that the UD doesn't just produce experiential confetti.
>>
>> So why do we waste time on such an incomplete theory?
>>
>> I would say that rather than such random sequences of experiences
>> having small measure, they must dominate.
>
> OK. So the winning program in the FPI limit of what happen below the
> subst level, "we" learn to manage that noise. Why not? That fits with
> Feynmann formulation of QM. We would have, to sum up terrribly:
>
> In the work of the UD, the winner (the one generating the stable
> illusion) is SUM on all e^iUD.

The trouble I see with this is that it already assumes the linear
superposition principle and quantum mechanics. But that has not been
derived in your theory and it is illegitimate to use it at this point.
The Feynman sum-over-paths works only because we have deterministic
physical laws that enable us to compute the phases along alternative
paths. You do not have such laws in evidence.


> I think somehow that is correct, and I show that there is already the
> shadow of something like that being justify by the reasoner reasoning on
> itself and its consistent extensions.

I doubt that that makes any sense for a new-born baby who, nevertheless,
experiences an ordered world.


>> We need the glue of laws to hold our sequence of experiences together
>
> Our dreams, yes. I suggest to "dream" a sequence of experience.
> particular case are given by the awake state relative to some
> computation(s).
>
>> -- and these laws can only come from experience.
>
> They can be inferred from experience. But they might be justifiable by a
> deeper (theological) theory, which would explain the origin and
> necessity of the physical laws.

So you need to do the work to achieve this justification within your
model. At the moment, you just seem to assume those parts of physical
theory that you want.

Bruce

Bruce Kellett

unread,
Mar 26, 2015, 3:06:28 AM3/26/15
to everyth...@googlegroups.com
I think that all the MGA establishes is that if the film taken of the
physical states of the brain is a good copy, then consciousness can
supervene on that copy as well as the original.

Let me try to summarize the argument as I see it. We are conscious and
we have brains that seem to be connected with the conscious state, such
that a reasonable first model is that consciousness supervenes on the
physical brain -- we alter the brain, we affect the conscious state, and
the conscious state, being deterministic, reciprocally affects the
brain. (Changed thoughts are correlated with changed brain states.)

The observation is then made that we could, quite probably, simulate the
brain state to any desired level in a computer (universal Turing
machine). The question is: does consciousness supervene on the physical
state, or on the abstract calculational state represented by the computer?

Given that the computer simulation has the same conscious state as the
original brain, it follows that copies of the conscious state can be
made. In so far as these are accurate copies of the original physical
state, they are all the same conscious moments -- we only create
different consciousnesses when the inputs differ between copies -- and
then the states are no longer identical.

None of this argues against consciousness supervening on the physical
rather than on an abstraction in Platonia. The MGA, as I understand it,
was designed to undermine this conclusion. The movie image projected on
the original neural plate recreates the original conscious state. But
we can degrade the neural plate. As long as we project the same movie
copy, the conscious state is unchanged. It is argued that this is
absurd. As far as I can tell, such an argument hinges on the notion of
counterfactual equivalence: the original movie and the degraded plate
are not counterfactually equivalent.

I simply say, so what! Counterfactual equivalence does not have any
independent justification, and it is highly unlikely to be sensible,
even in the context of computationalism. Basically, because the
simulation of any given conscious state can be carried out on any
computer -- whatever the architecture, physical construction, or
programming language. As long as the original state is accurately
simulated, the conscious state will be the same. But these different
instances of the calculation are generally not counterfactually
equivalent, nor need they be -- they only have to simulate the original
state to the required degree of accuracy -- they may differ to any
degree whatsoever in their calculated states before and after the
target conscious moment.

This comes back to my original question: since all possible programs are
run by the dovetailer, how do we ensure that conscious beings see an
ordered and predictable world? Only a set of measure zero among all
possible programs would give that result.
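For readers following along, the dovetailer under discussion can be sketched in a few lines (a minimal illustration of my own with toy "programs"; the transition function is an arbitrary stand-in, not an actual enumeration of Turing machines):

```python
# A minimal universal-dovetailer sketch: interleave the execution of an
# unbounded family of "programs" so that every program eventually gets
# arbitrarily many steps, even though infinitely many programs exist.

def run_step(program_id, state):
    """One step of toy program `program_id` (illustrative only)."""
    return (state * 3 + program_id) % 1000

def dovetail(stages):
    """Stage n runs programs 0..n-1 for one more step each; over
    unboundedly many stages every program runs unboundedly long."""
    states = {}   # program_id -> current state
    trace = []    # order in which program steps were executed
    for stage in range(1, stages + 1):
        for pid in range(stage):
            states[pid] = run_step(pid, states.get(pid, 0))
            trace.append(pid)
    return states, trace

states, trace = dovetail(5)
print(trace.count(0), trace.count(4))  # → 5 1 (earlier programs have run longer)
```

The measure question above is then: among all the program histories interleaved this way, why should the lawful ones dominate the experiences they generate?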

Bruce

Quentin Anciaux

unread,
Mar 26, 2015, 3:53:22 AM3/26/15
to everyth...@googlegroups.com
2015-03-26 8:05 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Bruno Marchal wrote:
On 25 Mar 2015, at 12:25, Quentin Anciaux wrote:

Multiple realisation does not undermine physical supervenience... what undermine it, is that you're forced to accept (with the movie graph argument) that the consciousness is supervening on the movie + broken gate... which is absurd, and the conclusion is either that physical supervenience is false or computationalism is...

Good summary. If you accept physical supervenience, you need to accept that non active part of the brain have active part in the brain, basically.
It makes clear that it is not the material brain or material computer which does the thinking, but the abstract person run by any sufficiently robust programs, with a robustness defined to its most plausible computations above its substitution level above and below the substitution below.

I think that all the MGA establishes is that if the film taken of the physical states of the brain is a good copy, then consciousness can supervene an that copy as well as the original.

Let me try to summarize the argument as I see it. We are conscious and we have brains that seem to be connected with the conscious state, such that a reasonable first model is that consciousness supervenes on the physical brain -- we alter the brain, we affect the conscious state, and the conscious state, being deterministic, reciprocally affects the brain. (Changed thoughts are correlated with changed brain states.)

The observation is then made that we could, quite probably, simulate the brain state to any desired level in a computer (universal Turing machine). The question is: does consciousness supervene on the physical state, or on the abstract calculational state represented by the computer?

Given that the computer simulation has the same conscious state as the original brain, it follows that copies of the conscious state can be made. In so far as these are accurate copies of the original physical state, they are all the same conscious moments -- we only create different consciousnesses when the inputs differ between copies -- and then the states are no longer identical.

None of this argues against conscious supervening on the physical rather than on an abstraction in Platonia. The MGA, as I understand it, was designed to undermine this conclusion. The movie image projected on the original neural plate recreates the original conscious state. But we can degrade the neural plate. As long as we project the same movie copy, the conscious state is unchanged. It is argued that this is absurd.

Under physical supervenience it is... like the inert Olympia... under computationalism alone it is not, as the person manifests through the abstract computation... the movie + broken gates, or Olympia, is just a specific instance of the computation (a record, or like a big lookup table), but under computationalism you're not this or that specific computation, but the infinities of computations going through your current state... of which the movie graph with broken gates is an extreme member of the implementation class.
 
As far as I can tell, such an argument hinges on the notion of conterfactual equivalence: the original movie and the degraded plate are not counterfactually equivalent.

I simply say, so what! Counterfactual equivalence does not have any independent justification, and it is highly unlike to be sensible, even in the context of computationalism. Basically, because the simulation of any given conscious state can be carried out an any computer -- whatever the architecture, physical construction, or programming language. As long as the original state is accurately simulated, the conscious state will be the same. But these different instances of the calculation are generally not counterfactually equivalent, nor need they be -- they only have to simulate the original state to the required degree of accuracy -- they may differ to any degree whatsoever for their calculated states before and after the target conscious moment.

This comes back to my original question: since all possible programs are run by the dovetailer, how do we ensure that conscious beings see an ordered and predictable world. Only a set of measure zero among all possible programs would give that result.

Yes, it seems to me, we should see white noise, but maybe a selection attribute must be in play... like an anthropic argument.

Quentin
 


Bruce



Bruce Kellett

unread,
Mar 26, 2015, 7:14:02 AM3/26/15
to everyth...@googlegroups.com
Quentin Anciaux wrote:
> 2015-03-26 8:05 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au
> <mailto:bhke...@optusnet.com.au>>:
>
> This comes back to my original question: since all possible programs
> are run by the dovetailer, how do we ensure that conscious beings
> see an ordered and predictable world. Only a set of measure zero
> among all possible programs would give that result.
>
> Yes, it seems to me, we should see white noise, but maybe a selection
> attribute must be in play... like an anthropic argument.

Anthropic arguments are not going to work with computationalism because
there is no basis on which you can assume underlying deterministic
physical laws.

Bruce

Quentin Anciaux

unread,
Mar 26, 2015, 7:28:20 AM3/26/15
to everyth...@googlegroups.com
2015-03-26 12:13 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Quentin Anciaux wrote:
2015-03-26 8:05 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au <mailto:bhkellett@optusnet.com.au>>:


    This comes back to my original question: since all possible programs
    are run by the dovetailer, how do we ensure that conscious beings
    see an ordered and predictable world. Only a set of measure zero
    among all possible programs would give that result.

Yes, it seems to me, we should see white noise, but maybe a selection attribute must be in play... like an anthropic argument.

Anthropic arguments are not going to work with computationalism because there is no basis on which you can assume underlying deterministic physical laws.


It seems to me it works relatively... consciousness like ours can only experience worlds ordered like ours... even if almost all dreams/worlds produced by mathematics are not like that and do not allow for consciousness like ours; since you can only experience worlds like ours, it's no magic that you do... like with Quantum Immortality: you cannot experience being dead, so no wonder you find yourself alive, even if in almost all worlds you're dead (or do not exist at all).

Quentin
 

Bruce


Bruce Kellett

unread,
Mar 26, 2015, 8:02:47 AM3/26/15
to everyth...@googlegroups.com
Quentin Anciaux wrote:
> 2015-03-26 12:13 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au
>
> Quentin Anciaux wrote:
>
> 2015-03-26 8:05 GMT+01:00 Bruce Kellett
>
> This comes back to my original question: since all possible
> programs
> are run by the dovetailer, how do we ensure that conscious
> beings
> see an ordered and predictable world. Only a set of measure zero
> among all possible programs would give that result.
>
> Yes, it seems to me, we should see white noise, but maybe a
> selection attribute must be in play... like an anthropic argument.
>
>
> Anthropic arguments are not going to work with computationalism
> because there is no basis on which you can assume underlying
> deterministic physical laws.
>
>
> It seems to me it works relatively.... consciousness like ours can only
> experiment worls ordered like ours... even if almost all dreams/worlds
> produced by mathematics are not like that and do not allow of
> consciousness like ours, as you can only experience worlds like ours,
> it's no magic that you do... like with Quantum Immortality, you cannot
> experience being dead, so no wonder you find yourself alive, even if in
> almost all worlds you're dead (or not existing at all).

But we do not need the degree of order that we observe. We could survive
perfectly well with a reasonable number of miracles -- laws that don't
quite always work. And there are vastly more possible worlds of that
sort than worlds that are strictly deterministic. The measure problem
gets you every time.

Bruce

Quentin Anciaux

unread,
Mar 26, 2015, 8:07:22 AM3/26/15
to everyth...@googlegroups.com
Well, we don't know that we could survive in such a world... but even if MWI is correct, most instances of me go into such worlds every time... and some don't. Why call it a miracle, when it's a given that there will always be a me in a non-magic world? I wonder why I'm not in a magic world... because I'm not.

Quentin
 


Bruce


Bruno Marchal

unread,
Mar 26, 2015, 11:40:32 AM3/26/15
to everyth...@googlegroups.com
I am OK. I think Quentin is arguing in the reductio ad absurdum part.

In a sense both are right: Russell (there is only one 1p-experience), and Quentin (we can attribute consciousness to each running) -- but then, if we attribute it to the physical activity token, we get the absurd conclusion that playing records and real-time consciousness supervene on a static film, etc.

What happens is that consciousness here and now exists by the existence of the computation (and thus in arithmetic), and the probability of this or that differentiation has to take into account all the runnings in arithmetic (and not all played records). Here the runnings are the computations, i.e. the triples (universal number, program, number of steps), and the records are the Gödel numbers of those computations. Both are realized in arithmetic, but the computations are realized in some standard interpretation of arithmetic, and the Gödel numbers are only syntactical descriptions of them, so that the machine, and we, know which computation we are talking about.

Russell used the ...-1-1-1-1-view, but in the MGA (and UDA) we need to use some 3-1 views, or 3-1-1 views.

Consciousness depends only on the existence of the (relevant) computations, but the relative stability of the local consciousness flux depends on the relative proportion of histories/dreams, and for this we need to consider the 1-views attributed to the person, but incarnated in some 3p activity (program executions); thus we need to use (implicitly) some 3-1 view.

If 100 physically distinct computers "run my consciousness" (simultaneously or not), it is the same consciousness, and the 1p is unique, but I would prefer that 1% of the runnings differentiate into a hellish experience instead of 99%.

For extracting the physics, this 3-view is needed, or at least helps, I think. In modal logic, these are given by multi-modal expressions, like []<1>[2]P, for example, which admit a non-ambiguous interpretation in arithmetic, as they are all defined from Gödel's arithmetical 'Beweisbar' predicate ([]p).

Bruno



--
Stathis Papaioannou



Bruno Marchal

unread,
Mar 26, 2015, 12:51:23 PM3/26/15
to everyth...@googlegroups.com

On 24 Mar 2015, at 04:39, LizR wrote:

Apologies

"Movie Graph Argument" - from Bruno's 2004 paper I believe.

Actually, MGA appears in step 8, but is not explained in the sane04 paper. I only referred to it.

The original publication is

Marchal, B., "informatique et théorie de l'esprit", Acte du Colloque de l'ARC, Toulouse 1987. Proceeding 1988.

(It is the colloquium where I met Dennett, but we discussed only Church's thesis, on which we agreed. I did not explain the MGA orally, because we got only 20 minutes, and it is close to impossible to go that quickly. Orally I explained only the basics of self-reproduction and self-reference, the Dx = xx song.)
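The "Dx = xx" song mentioned here is the duplicator at the heart of self-reproduction: D applied to (a description of) x yields x applied to its own description, so D applied to its own description reproduces itself. A toy rendering of my own in Python:

```python
# Dx = xx: the duplicator D takes a program text x and returns the text
# of "x applied to its own quotation".  Feeding D its own description
# yields an expression that evaluates to its own source -- the
# quine-like fixed point behind self-reproduction and self-reference.

def D(x):
    return f"({x})({x!r})"

d_src = "lambda x: f'({x})({x!r})'"   # a description of D itself
selfrep = D(d_src)                    # D applied to its own description

print(eval(selfrep) == selfrep)  # → True: the expression reproduces its own text
```

The same construction, done with arithmetized syntax instead of Python strings, gives Kleene's second recursion theorem.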




I also don't know whether it makes more or less sense for a recording to be conscious than a computation.

It is the assumption of computationalism that consciousness is relatively instantiated by computation, that is, some activity done by some universal numbers relative to some universal numbers.



However I have a feeling Bruno addressed this when he was explaining the who thing to me, some time ago - it doesn't matter whether the recording is conscious or not, it's just one of the infinite number of possible computations that contribute to generating that moment of consciousness.

Hmm, not really. But a recording can change a measure, because it contains the instantaneous description which can be used to re-instantiate a computer, and then differentiate. But it is absurd to attribute the consciousness here-and-now to a movie, which instantiates zero computation. In films we see only "zombies", as you can realize if you try to discuss with James Bond through a movie, or if you try the Turing test with "him" through the movie. What makes a computation different from a movie is that it is counterfactually correct. Now, what the MGA shows is that consciousness is related to that counterfactualness, which is part of the mathematical realization (and physical, sometimes, relatively to you). Consciousness is a semantical notion, and it makes sense through the correctness of the counterfactuals, which are part of some arithmetical propositions being true. The recording has no similarly rich semantics, as it is only a reminder of the existence of that computation. The difficulties in materialism really come from the useless reification of a token reality, once you think consciousness is invariant for relative digital substitution.
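The difference between a counterfactually correct computation and a mere recording can be sketched as follows (a toy example of my own, not part of the MGA itself): the computation is defined on every possible input, while the recording only replays the history that was actually filmed.

```python
# Toy sketch: counterfactual correctness vs. a recording.

def and_gate(a, b):
    """A genuine (tiny) computation: defined for every possible input."""
    return a & b

# A "recording" of one particular run: only the input/output pairs
# that actually occurred during the filmed history.
recorded_run = {(1, 1): 1, (0, 1): 0}

def replay(inputs):
    """The recording answers only for the history that was filmed."""
    return recorded_run[inputs]   # raises KeyError on counterfactual inputs

# On the filmed history both agree...
assert replay((1, 1)) == and_gate(1, 1)
# ...but only the computation supports the counterfactual
# "what if the inputs had been (1, 0)?"
assert and_gate(1, 0) == 0
try:
    replay((1, 0))
except KeyError:
    print("the recording has no answer for inputs that never occurred")
```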




Perhaps! Altho it seems to me that is assuming comp so maybe I didn't get that right.


You were right, but usually the attribution of consciousness to a recording is where I stop the reductio ad absurdum, as there is no computation in a recording, and, as Maudlin showed, this would also entail magical abilities of neurons to detect inactive neurons, or even non-interacting objects, and this locally (in a supposedly Turing-emulable way). To me the idea that a recording is conscious is as absurd as the belief that a glass is broken in a cartoon when Donald throws a ball at it.

There is no problem attributing consciousness to the recording, though, if you keep in mind that recordings cannot differentiate in the universal dovetailing, unlike computations. They might differentiate by some accidental circumstances, as described above, but then they are again part of a computation, and only played the role of some memory.

The differentiations are due to inputs (like W or M), and recordings have no inputs, unless you call them "temporary memory" waiting to be installed in some computers.

In fact the measure "here" is somehow given by the future duplications, which I sum up by the little drawing Y = II. Y represents the time diagram of the bifurcation, and the bifurcations go back along the path, augmenting the measure of that set of computations. To multiply yourself is a good life insurance. But if you are OK that 2+2=4 is true independently of your little ego, then that multiplication becomes infinite and is already realized in all possible ways in a tiny part of arithmetic. And explaining matter becomes more difficult, but more interesting too, and the way I proceed gives the communicable and the non-communicable parts, which is a good candidate for a precise theory of qualia (as qX1* is supposed to be, accepting some ancient definitions in epistemology).

Bruno






On 24 March 2015 at 16:17, Bruce Kellett <bhke...@optusnet.com.au> wrote:
Russell Standish wrote:
On Tue, Mar 24, 2015 at 10:10:37AM +1100, Bruce Kellett wrote:
Russell Standish wrote:
On Tue, Mar 24, 2015 at 11:48:52AM +1300, LizR wrote:
On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:

That's where the MGA comes in.  It purports to show that one of the
possible substrates is inert matter, which seems so absurd that we should
conclude the matter plays no part whatsoever.

That sounds like Maudlin's Olympia argument....?

So far I get that different substrates can create the same computational
states (by which I assume we mean the contents of registers and memory?)
But how does the MGA get from showing that to showing that inert matter can
be a possible substrate? (ISTM that a projected graph is not inert, if
that's the argument.)

Broadly, the idea is to use the notion that movement is relative. If a
machine is moving through a fixed sequence of states, we can
equivalently set things up so the machine is inert, but the observer
moves in such a way that the appearance is unchanged. The absurdity is
that this implies consciousness depends on the motion of the observer.
No, it doesn't imply any such thing. The motion of the observer, or
rate of change of the sequence of states, is irrelevant to
consciousness. The only relevant thing is the states themselves --
the rate at which they are observed (or even if they are static)
does not matter.


Then clearly, you have no problem with the concept of a conscious
recording.

In order for the MGA to go through, conscious recordings need to be
considered absurd.

I personally, don't have an opinion either way, which is why I
consider that to be a rather serious flaw of the MGA.

If you take the block universe model seriously then we are nothing more than conscious recordings!

I don't know what MGA stands for, or what it means, so I can't comment on that.


Bruce



Bruno Marchal

unread,
Mar 26, 2015, 1:05:03 PM3/26/15
to everyth...@googlegroups.com
Hmm... But with the stroboscope movie, seeing a movie is a relative
motion. You make a flash on each picture, from picture one to the last
one; then you can make a movie with a centered flashing light, and it
is only the presence of observers which defines a "movie" in some real
time, different for each observer. Then the presence of the observers
is not needed, and eventually you will make a consciousness of time,
in real time, supervene on something static.

Computationalism associates consciousness with the abstract person,
incarnated into a body which makes it possible to manifest itself
relatively to some computations.

A recording has no input, no output, and I guess you would not say yes
to a doctor proposing a recording of your future experience (provided
by some alien time traveller, say) instead of a brain/computer?

The problem is that this will entail that there is no consciousness in
any active brain or computer either, if we conceive them as 3p things
around us. Consciousness will be associated with infinities of machines
in arithmetic.

And I have no opinion about the truth of this. I just take
computationalism (TC + YD) seriously to see where we are driven, in
fact, to make the mind-body problem precise in that setting.

I doubt there is a flaw, I think, as I ask a question in the setting
of a hypothesis. What would it mean for a recording to be conscious,
once we postulate computationalism? That might be consistent (see my
comment to Stathis) but it would introduce terrible difficulties, most
of them risking to be mere vocabulary difficulties.

The MGA only lowers the amount of Occam's razor needed to accept the
reversal, once we postulate comp.

I thought you were OK when I explained that the computation/recording
assimilation is like the computation/Gödel-number-of-a-computation
assimilation in arithmetic.
It is a bit like an assimilation of []p and p (hmm.. at some different
level, though).

For computation, we have a precise mathematical definition. For the
notion of recording, it is already fuzzy whether they are physical, and
then how to define them, etc.

I will need to reread your paper on the MGA; maybe I missed something.

Cheers,

Bruno


Bruno





Bruno Marchal

unread,
Mar 26, 2015, 3:04:03 PM3/26/15
to everyth...@googlegroups.com

On 24 Mar 2015, at 01:07, meekerdb wrote:

On 3/23/2015 3:48 PM, LizR wrote:


On 23 March 2015 at 16:09, meekerdb <meek...@verizon.net> wrote:
That's where the MGA comes in.  It purports to show that one of the possible substrates is inert matter, which seems so absurd that we should conclude the matter plays no part whatsoever.

That sounds like Maudlin's Olympia argument....?

It is essentially the same.  But I think Maudlin took the other side of the reductio and concluded that computationalism must be incomplete.



So far I get that different substrates can create the same computational states (by which I assume we mean the contents of registers and memory?) But how does the MGA get from showing that to showing that inert matter can be a possible substrate? (ISTM that a projected graph is not inert, if that's the argument.)

Yes, as I understand it that's the argument.  It's consistent with Platonism.  A computer program's execution written out on paper is just as much a calculation as a lot of transistors switching.

Not at all. Or you are terribly ambiguous.

A computation occurs when a number relation is true, that is realized in some reality.

A computation written on a paper is useful to convince some people that a computation exists in arithmetic, but once written, or encoded into a Gödel number, it is not a computation.

Maybe you meant "execution written out on paper during the program execution"? Then that's OK.

By computations and computability, I mean the mathematical, indeed arithmetical, concept discovered by Church, Kleene, Post, Turing, Markov.

With Church's thesis, it is an easy theorem that all computations are realized in the sigma_1-complete part of arithmetic (and similarly when you add oracles, but a universal machine cannot distinguish a *big* number/machine from an infinite machine (oracle)).

I assume just much less than you. I assume RA, and thanks to you I see that it gives a form of strict finitism making sense for the universal machine. Well, that could be defended.

The big discovery is by Turing and other mathematicians. I really don't assume more than that.



My caveat is that neither of them is conscious in THIS world because being conscious requires being conscious OF something. 

Sure.


An isolated, pure consciousness is an oxymoron.  Consciousness only exists as part of thoughts and thoughts only have meaning by reference to an external world and potential action in that world.

Agreed. But the question is more between: are we fundamentally mammals living on earth, or are we universal numbers living in arithmetic, deluded by oracles or other universal numbers?

I recall that I say a number u is universal if phi_u(<x, y>) = phi_x(y), for some fixed universal enumeration (of the partial computable functions).

Even RA can prove the existence of such a u, and of its computations. But RA knows nothing about u's abilities, and cannot prove that this or that u is universal. PA can do that, and PA can recognize that PA itself is such a u. (Like we all do, I mean the humans, but not many know that.)
*That* exists in arithmetic. Logicians have known this for a century now.
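The definition of a universal u can be illustrated with a toy enumeration (my own miniature phi, not a genuine Gödel numbering): u, fed the pair <x, y>, behaves like program x on input y.

```python
# Toy sketch: a miniature enumeration phi_x of one-argument programs,
# and a universal index u with phi_u(<x, y>) = phi_x(y).
programs = {
    0: lambda y: y + 1,
    1: lambda y: 2 * y,
}

def phi(x, y):
    return programs[x](y)

def pair(x, y):          # a pairing <x, y>; here simply a tuple
    return (x, y)

programs[2] = lambda p: phi(p[0], p[1])   # the interpreter
u = 2

print(phi(u, pair(0, 41)))                # 42, same as phi(0, 41)
print(phi(u, pair(u, pair(1, 21))))       # 42: u interpreting u interpreting 1
```

The last line shows the point made above: a universal number can imitate any program in the enumeration, including itself imitating others.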

That gives a generator of TOEs: any first-order logical specification of a universal number.

The problem is that the universal numbers get into relations, which are, well, complicated.

I don't need the assumption of a primary physical universe. If I am such a u (comp), then all I can be confronted with is, 1) below my substitution level, the result of the competition between all universal numbers, and, above it, relatively stable universal numbers; and it is an open question whether there is one precise, univocal, physical universal number.

We can compare the comp classical physics and the empirical physics. If they differ, it means we have detected an oracle. It will remain open whether that oracle comes from a perverse Bostromian simulation (by our descendants wanting to fail us), or from some Cartesian malin génie (demon), that you can call matter or god. I don't think that there is evidence for such an oracle, and to assume one would make things more complex than we need. The competition between all universal numbers already puts enough mess in Platonia.

Bruno



Brent



Bruno Marchal

unread,
Mar 26, 2015, 3:48:03 PM3/26/15
to everyth...@googlegroups.com

On 26 Mar 2015, at 07:38, Bruce Kellett wrote:

> Bruno Marchal wrote:
>> On 24 Mar 2015, at 06:31, Bruce Kellett wrote:
>>> meekerdb wrote:
>>>> On 3/23/2015 10:05 PM, Bruce Kellett wrote:
>>>>>
>>>>> In fact, I can write computer programs where the laws of physics
>>>>> change from instant to instant. Why do we not experience these
>>>>> things?
>>>>
>>>> Aye, there's the rub. Bruno claims that such capricious
>>>> sequences of experience must have small measure. But I think the
>>>> "must" means "so that my theory will hold water." Anyway he
>>>> admits it's an open problem to show that the UD doesn't just
>>>> produce experiential confetti.
>>>
>>> So why do we waste time on such an incomplete theory?
>>>
>>> I would say that rather than such random sequences of experiences
>>> having small measure, they must dominate.
>> OK. So the winning program in the FPI limit of what happen below
>> the subst level, "we" learn to manage that noise. Why not? That
>> fits with Feynmann formulation of QM. We would have, to sum up
>> terrribly:
>> In the work of the UD, the winner (the one generating the stable
>> illusion) is SUM on all e^iUD.
>
> The trouble I see with this is that it already assumes the linear
> superposition principle and quantum mechanics

Not literally, but I see what you mean. That is why I define the
observable by the "measure one" on the sigma_1 sentences, to obey
the constraints of computationalism as illustrated in the UDA.



> . But that has not been derived in your theory and it is
> illegitimate to use it at this point.

I did not do that. I refer you to my publications for the details of
what has been done.
It has an advantage over current physics: it splits the truth between
the truth about computers, and what computers can know, observe, feel
about it.




> The Feynman sum-over-paths works only because we have deterministic
> physical laws that enable us to compute the phases along alternative
> paths. You do not have such laws in evidence.

Well, the UDA shows that there is not much choice in the matter if you
want to stay objective about the relation between subjective and
objective. And then the "measure one", an abstract prerequisite, is
offered by the introspective universal machine in the form of three
theories, called S4Grz1, Z1*, X1*.






>
>
>> I think somehow that is correct, and I show that there is already
>> the shadow of something like that being justify by the reasoner
>> reasoning on itself and its consistent extensions.
>
> I doubt that that makes any sense for a new-born baby who,
> nevertheless, experiences an ordered world.

?
Indeed, you need already a good maturity in math, and a taste for
philosophy of mind. Some pretend the fear of death appears at the age
of four, and normally that should be technically enough to explain
this to a 4-year-old child. But that would be akin to bad treatment
(unless he asks for it).

All that I say is understandable by any u endowed with enough
induction power. My generic one is the "well known" PA (the
Escherichia coli of the universal numbers having enough induction
power: the ones I call the Löbian numbers).




>
>
>>> We need the glue of laws to hold our sequence of experiences
>>> together
>> Our dreams, yes. I suggest to "dream" a sequence of experience.
>> particular case are given by the awake state relative to some
>> computation(s).
>>> -- and these laws can only come from experience.
>> They can be inferred from experience. But they might be justifiable
>> by a deeper (theological) theory, which would explain the origin
>> and necessity of the physical laws.
>
> So you need to do the work to achieve this justification within your
> model. At the moment, you just seem to assume those parts of
> physical theory that you want.

You might be a bit unfair. I assume only RA, and I use only standard
definitions in philosophy.

The UDA explains why physics is reduced to a calculus of uncertainty on
computations "seen from inside", and the math part (which took 300
years of development; I only made it easy) describes what machines can
say, from inside, with "the inside" defined by the necessary nuances
brought by incompleteness to what universal numbers/machines can
expect about themselves.

I am not sure you got the tilt of the UDA, or are aware that a tiny
fragment of arithmetic already realizes all computations, and that all
u are uncertain about which u they are and which other u supports them.

I do not assume a physical universe, even at the start of the UDA,
although it is simpler to think and do so, but I do assume the
stability of some u (brain, doctors, ...). If you see a flaw, please
tell me at which step. It is mainly a formulation of a problem, plus an
embryo of solution, when the problem is asked to the universal Löbian
machine. The Löbian numbers are those that are universal numbers and
know that there are universal numbers (at least).

Frankly, you just showed that you have not really taken a look at what
I have done. I know the long texts are in French, but sane04 is a good
summary.

My main interest is in the origin of physical laws. Like Wheeler, I
don't think that it is a physical process, but more like the sharable
observable invariant (measure one) of the universal Turing number in
arithmetic, or in any first-person specification of any universal
number.

You might need a good book in theoretical computer science and
mathematical logic, if you doubt the use I make of the notion.

It helps to be aware of the mind-body problem, the main motivation,
which can be divided, when assuming computationalism, into two hard
problems: the consciousness problem and the matter problem. I show only
that they are partially translatable into arithmetic, and that some
universal numbers share a stable opinion on that.

It would be a lie to claim that science has decided between Aristotle
(reality is what you see, measure) and Plato (what you see, measure,
are the shadows of something else). My work suggests, in some precise
way, that computationalism sides with Plato on the mind-body issue,
indeed even with Pythagoras, who is somehow rehabilitated through the
Church-Turing thesis.

Bruno

Bruno Marchal

unread,
Mar 26, 2015, 3:58:33 PM3/26/15
to everyth...@googlegroups.com
You need Turing topic argument, and the logic of those are given by
the logic of self-reference.

Then it can be shown that the basis used is not important, and I (try)
to illustrate the working of all this with elementary arithmetic and
with combinators, but topological unitary computations, or fortran, or
lisp, would do.

Physics and theology are machine-independent notions. All bases lead to
the same theory. This comes from the fact that the intensional Church
thesis follows from the usual extensional one. Not only can all
universal numbers u compute all computable functions, but they can
imitate the precise ways other u's compute, so all universal
dovetailings entail the same competition between all u (below our
substitution level).

Bruno



>
> Bruce
>
http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Mar 26, 2015, 4:10:34 PM3/26/15
to everyth...@googlegroups.com
On 26 Mar 2015, at 13:06, Quentin Anciaux wrote:



2015-03-26 13:02 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au>:
Quentin Anciaux wrote:
2015-03-26 12:13 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au
    Quentin Anciaux wrote:

        2015-03-26 8:05 GMT+01:00 Bruce Kellett

            This comes back to my original question: since all possible
        programs
            are run by the dovetailer, how do we ensure that conscious
        beings
            see an ordered and predictable world. Only a set of measure zero
            among all possible programs would give that result.

        Yes, it seems to me, we should see white noise, but maybe a
        selection attribute must be in play... like an anthropic argument.


    Anthropic arguments are not going to work with computationalism
    because there is no basis on which you can assume underlying
    deterministic physical laws.


It seems to me it works relatively... consciousness like ours can only experience worlds ordered like ours... even if almost all dreams/worlds produced by mathematics are not like that and do not allow consciousness like ours; as you can only experience worlds like ours, it's no magic that you do... like with Quantum Immortality: you cannot experience being dead, so no wonder you find yourself alive, even if in almost all worlds you're dead (or not existing at all).

But we do not need the degree of order that we observe. We could survive perfectly well with a reasonable number of miracles -- laws that don't quite work always. And there are vastly more possible worlds of that sort than those that are strictly deterministic. The measure problem gets you every time.

Well, we don't know that we could survive in such a world... but even if MWI is correct, most instances of me go every time into such worlds... and some of us don't; why wonder whether it's a miracle when it's a given there will always be a me in a non-magic world? I wonder why I'm not in a magic world... because I'm not.

OK. That's why when I ask the universal Löbian u, I ask it to abstract from the cul-de-sac worlds, where magic like [](Santa Claus exists) becomes true, or []("0=1"). That abstraction is the move from []p to []p & <>p. You do it also with the stronger move from []p to []p & p.

Those moves, limited to the sigma_1 arithmetical truths that the observer assumes (the realm of RA), plus those intensional nuances, suggest the structure on which the observable gets permanent and sharable, but also the non-sharable part.

That's only a beginning of a solution. It might fail, but up to now the observables seem to have good quantizations, making them already close to quantum logics, algebras of projectors.

Bruno


Quentin
 


Bruce




--
All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)


Bruno Marchal

unread,
Mar 26, 2015, 4:30:10 PM3/26/15
to everyth...@googlegroups.com
You are quick here.
I might explain the stroboscope machinery, which might help me to ask
you what you mean by consciousness supervening on a recording, given
that no computation at all is involved in the recording.



> Basically, because the simulation of any given conscious state can
> be carried out an any computer -- whatever the architecture,
> physical construction, or programming language. As long as the
> original state is accurately simulated, the conscious state will be
> the same.

It is not the states which must be correctly simulated, it is the
relations between those states. You need the truth of the proposition
"IF the input changes, I do this or that". That truth is part of what
defines my person, and it is not applied in the movie. The movie is a
sequence of descriptions of states, not a sequence of states related by
some universal number.





> But these different instances of the calculation are generally not
> counterfactually equivalent, nor need they be -- they only have to
> simulate the original state to the required degree of accuracy

But there is no simulation here, in the technical sense of simulation.

I say that x simulates y if, for all z, the sentence phi_x(y, z)
= phi_y(z) is provable in RA.
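The defining equation of simulation can be checked instance by instance in a toy setting (my own miniature phi, not RA itself):

```python
# Toy sketch: x simulates y when phi_x(y, z) = phi_y(z) for every z.
programs = {
    0: lambda z: z + 1,
    1: lambda z: z * z,
}

def phi(i, *args):
    return programs[i](*args)

# Candidate simulator: index 2 takes (y, z) and runs program y on z.
programs[2] = lambda y, z: phi(y, z)

# Each instance of the defining equation holds (RA would prove each one):
for y in (0, 1):
    for z in range(5):
        assert phi(2, y, z) == phi(y, z)
print("all sampled instances of phi_2(y, z) = phi_y(z) hold")
```

A recording, by contrast, could satisfy the equation only on the finitely many (y, z) that actually occurred, which is the point made above.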




> -- they may differ to any degree whatsoever for their calculated
> states before and after the target conscious moment.

If one x simulates y, an infinity of different x's will simulate y as
well. We agree that there is a measure problem.


>
> This comes back to my original question: since all possible programs
> are run by the dovetailer, how do we ensure that conscious beings
> see an ordered and predictable world.

Indeed, that is the question. To answer it I have asked a Löbian
number. The answer, to make it simple, is that if you abstract from
falsities (for knowledge), or illusions (for physics), the logic of
self-reference constrains the problem and shows that in those
directions the reality kicks back and is structured, so we can extend
the formal "measure one" into a physics (calculus of uncertainty).



> Only a set of measure zero among all possible programs would give
> that result.

I suspect you don't take the first person views (the modal variant of
relational justification) into account.

I don't pretend it is simple. You need to understand some theorems in
computer science, which has been exploited a lot to get those machine
logics.

Bruno



>
> Bruce
>
http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Mar 26, 2015, 4:34:57 PM3/26/15
to everyth...@googlegroups.com
I can agree. But it is an objective one, derived from RA, with the comp mind-body problem in the background.

The physical realities emerge from the math of dreams, but the dreams emerge from the math of computations and computability/non-computability.

It is post Church-Turing-Post-Gödel idealism. But it is close to neoplatonist idealism, and matches mystical reports, if not taken too literally.

Above all, it is testable.

Bruno

I have to go. 


Brent



LizR

unread,
Mar 26, 2015, 4:50:42 PM3/26/15
to everyth...@googlegroups.com
Possibly, but how do you prove that's more likely to be experienced than regularity? Do miracles require more or fewer bits to specify than "same old, same old"? Could the nature of consciousness be such that the most likely continuer of a given observer moment is the one that has the least available difference from the previous one? (In some sense - is it possible to do some maths on this? Must dig out TON... again...)

meekerdb

unread,
Mar 26, 2015, 5:13:44 PM3/26/15
to everyth...@googlegroups.com
This seems like branch counting in MWI.  If one branches into a hellish experience while 99 continue on the same non-hellish path, then there are only two streams of consciousness.  So does your FPI tell you you have probability 1/2 of experiencing hell, or does the amount of physical instantiation determine the measure?

Brent

meekerdb

unread,
Mar 26, 2015, 5:22:49 PM3/26/15
to everyth...@googlegroups.com
On 3/26/2015 12:03 PM, Bruno Marchal wrote:
Agreed. But the question is more between :are we fundamentally mammals living on earth, or are we universal numbers living in arithmetic, deluded by oracles or other universal numbers.

That seems to me to be the same as the question am I a brain in a vat.  So long as the BIV simulation is good enough, as long as the arithmetic relations are right, there's no discernible difference.

Brent

Bruce Kellett

unread,
Mar 26, 2015, 7:35:57 PM3/26/15
to everyth...@googlegroups.com
Bruno Marchal wrote:
>
> On 26 Mar 2015, at 12:13, Bruce Kellett wrote:
>
>> Quentin Anciaux wrote:
>>> 2015-03-26 8:05 GMT+01:00 Bruce Kellett <bhke...@optusnet.com.au
>>> <mailto:bhke...@optusnet.com.au>>:
>>> This comes back to my original question: since all possible programs
>>> are run by the dovetailer, how do we ensure that conscious beings
>>> see an ordered and predictable world. Only a set of measure zero
>>> among all possible programs would give that result.
>>> Yes, it seems to me, we should see white noise, but maybe a selection
>>> attribute must be in play... like an anthropic argument.
>>
>> Anthropic arguments are not going to work with computationalism
>> because there is no basis on which you can assume underlying
>> deterministic physical laws.
>
> You need Turing topic argument, and the logic of those are given by the
> logic of self-reference.
>
> Then it can be shown that the basis used is not important, and I (try)
> to illustrate the working of all this with elementary arithmetic and
> with combinators, but topological unitary computations, or fortran, or
> lisp, would do.

I think translation from colloquial English has let us down here. There
are two senses of the word "basis" in English. From the OED (Oxford
English Dictionary): "A thing on which anything is constructed and by
which its constitution or operation is determined; a determining
principle". By extension, in physics/mathematics, "the set of vectors
spanning a vector space in terms of which any vector in the space can be
expressed". I was using the word in the first sense.

Bruce

Bruce Kellett

unread,
Mar 26, 2015, 7:43:04 PM3/26/15
to everyth...@googlegroups.com
Bruno Marchal wrote:
> On 25 Mar 2015, at 16:35, Stathis Papaioannou wrote:
>>
>> If my mind is being run on two separate computers, I can't know which
>> one of the two, and I can't say that my last remembered moment was run
>> on one or other or my next anticipated moment will be run on one or
>> other. If one computer stops it makes no difference to me and if a
>> third computer running my mind comes online it makes no difference to
>> me. So effectively there is only one conscious moment. Under physical
>> supervenience, stopping all the computers stops the conscious moment.
>
> I am OK. I think Quentin is arguing in the reductio ad absurdum part.
>
> In a sense both Russell is right (there is only one 1p-experience), and
> Quentin is right: we can attribute consciousness to each running (but
> then if we attribute it to the physical activity token, we get the
> absurd conclusion: playing records and real-time consciousness supervene
> on a static film, etc.)

One problem is that this is an invalid "argument from incredulity". The
fact that you find this conclusion absurd is not an argument against the
conclusion: it is merely a statement about how you feel about the
conclusion -- which could be right or wrong, and in either case does not
depend on how you feel about it.

I think there are important points buried here and I will attempt to
explore them in more detail in another post -- I am rather short of time
today.

Bruce

PGC

unread,
Mar 26, 2015, 8:02:23 PM3/26/15
to everyth...@googlegroups.com


On Friday, March 27, 2015 at 12:43:04 AM UTC+1, Bruce wrote:
Bruno Marchal wrote:
> On 25 Mar 2015, at 16:35, Stathis Papaioannou wrote:
>>
>> If my mind is being run on two separate computers, I can't know which
>> one of the two, and I can't say that my last remembered moment was run
>> on one or other or my next anticipated moment will be run on one or
>> other. If one computer stops it makes no difference to me and if a
>> third computer running my mind comes online it makes no difference to
>> me. So effectively there is only one conscious moment. Under physical
>> supervenience, stopping all the computers stops the conscious moment.
>
> I am OK. I think Quentin is arguing in the reductio ad absurdum part.
>
> In a sense both Russell is right (there is only one 1p-experience), and
> Quentin is right: we can attribute consciousness to each running (but
> then if we attribute it to the physical activity token we get the
> absurd conclusion: playing records and real-time consciousness supervene
> on a static film, etc.)

One problem is that this is an invalid "argument from incredulity". The
fact that you find this conclusion absurd is not an argument against the
conclusion: it is merely a statement about how you feel about the
conclusion -- which could be right or wrong, and in either case does not
depend on how you feel about it.

Why or how is anybody arguing that problem is generated or solved by "how somebody feels about it"?

It's via contradiction/standard reductio: assume the conclusion is false and its negation true, and from this derive a contradiction. If that is the case, the conclusion must be true.

Only two things are required: the law of excluded middle, and that if a statement implies something false, it must be false. PGC
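The reductio recipe PGC describes fits in one line of formal logic; a minimal sketch in Lean 4 (my own illustration, not from the thread):

```lean
-- Reductio ad absurdum: if assuming ¬C yields a contradiction (False),
-- then C holds. Classical.byContradiction packages exactly this step,
-- and is classically equivalent to the law of excluded middle.
example (C : Prop) (h : ¬C → False) : C :=
  Classical.byContradiction h
```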


 

Bruce Kellett

unread,
Mar 26, 2015, 8:11:04 PM3/26/15
to everyth...@googlegroups.com
Where is the contradiction?

Bruce

Platonist Guitar Cowboy

unread,
Mar 26, 2015, 8:34:18 PM3/26/15
to everyth...@googlegroups.com


On Fri, Mar 27, 2015 at 1:10 AM, Bruce Kellett <bhke...@optusnet.com.au> wrote:

PGC wrote:




Why or how is anybody arguing that problem is generated or solved by "how somebody feels about it"?

It's via contradiction/standard reductio: assume the conclusion is false and its negation true, and from this derive a contradiction. If that is the case, the conclusion must be true.

Only two things are required: the law of excluded middle, and that if a statement implies something false, it must be false. PGC

Where is the contradiction?

Of what? MGA? I just described the mechanism, far from "just feelings".

I assumed you had read at least a paper: incompatibility of physical supervenience with comp. PGC
 

LizR

unread,
Mar 26, 2015, 8:59:14 PM3/26/15
to everyth...@googlegroups.com
Actually I'd like to know where the contradiction is too (and I have read Bruno's papers, and "The Amoeba's Secret", and of course Bruno has done his best to teach me some modal logic...)

...but I still have difficulty following the MGA. It has been explained (at least at times) as showing that "if phys supervenience holds, then a recording of a conscious computation would also be conscious" - and (I'm told) this is absurd.

Bruce said (I think) that although this seems absurd, it may not be. That is, one can't argue from incredulity. That seems like a reasonable comment.

The MGA also appears - to me, at least - to show that

(1) (assuming phys sup) the same conscious state could supervene on two different physical states (AND or OR for example)

(2) (assuming p.s.) quite a lot of the physical stuff could be removed from the setup without making a difference.

(3) (assuming p.s.) Broken gates, say, could be driven to give the correct output by playing back some of a recording, giving a mix of recording and computation

All the above seems to put dents in physical supervenience, but I can't see an outright contradiction - which probably means I have missed something important.
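Point (1) can be made concrete with a toy sketch (my own illustration, not from the thread): on the single input pair that actually occurs during the run, an AND gate and an OR gate produce the same output, so the recorded trace alone cannot say which gate was "really" computing.

```python
def and_gate(a: int, b: int) -> int:
    return a & b

def or_gate(a: int, b: int) -> int:
    return a | b

# The only inputs presented during the recorded run:
recorded_inputs = (1, 1)

# On that run the two gates are physically interchangeable...
assert and_gate(*recorded_inputs) == or_gate(*recorded_inputs) == 1

# ...and only a counterfactual input, never actually presented,
# would distinguish them.
counterfactual_inputs = (1, 0)
assert and_gate(*counterfactual_inputs) != or_gate(*counterfactual_inputs)
```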

So, I'd really like to know - what contradiction? :-)

Bruce Kellett

unread,
Mar 26, 2015, 9:04:28 PM3/26/15
to everyth...@googlegroups.com
Bruno Marchal wrote:
> On 26 Mar 2015, at 08:05, Bruce Kellett wrote:
>>
>> I simply say, so what! Counterfactual equivalence does not have any
>> independent justification, and it is highly unlikely to be sensible,
>> even in the context of computationalism.
>
> You are quick here.
> I might explain the stroboscope machinery which might help me to ask you
> what you mean by consciousness supervening to a recording, given that no
> computation at all is involved in the recording.
>
>> Basically, because the simulation of any given conscious state can be
>> carried out an any computer -- whatever the architecture, physical
>> construction, or programming language. As long as the original state
>> is accurately simulated, the conscious state will be the same.
>
> It is not the state which must be correctly simulated, it is the
> relations between those states. You need the truth of the proposition
> "IF the input changes, I do this or that". That truth is part of what
> defines my person, and it is not applied in the movie. The movie is a
> sequence of descriptions of states, not a sequence of states related by
> some universal number.

As is usual with debates of this kind, I think we are talking about two
different things. When you introduce the possibility of the Dr replacing
your brain with a simulation, you mean "an artificial intelligence
program initialized by the synaptic weights read out from your old
brain". In other words, this is not simply a recording of the state of
the brain at one instant, it is a full description that would enable the
brain to be reconstructed. That is, it is a prescription for producing a
model of your brain -- a simulation.

I agree it is clear that this model is conscious only when it is
running. If you write down the Godel number of the description, that is
a static object and would not be considered conscious in itself. But
this description could be used to build a model in any medium, be it a
computer, or a system composed of billiard balls. Provided the exact
details are modelled, the model will be conscious when the simulation is
run.

The other thing (that seems to be introduced with the MGA) is that we
observe the active brain and record it from instant to instant in
sufficient detail that we can observe which neurones are active, which
connections are made, and in which order. This is effectively the
"movie". It records a certain period of conscious activity, but it does
not contain the information necessary to construct a model that can go
on operating independently outside the original recording period.

The question is: if I replay the recording of the second type, do I
recreate the conscious experience? Note that this is not a simulation in
the normal sense, it is a replay of a recording of the relevant parts of
the brain undergoing conscious activity. If consciousness supervenes on
physical brain so that the pattern of connections and neurone firings
constitute the physical manifestation of the conscious experience, then
rerunning the recording will recreate the conscious experience.

It is essentially the same as if, while running the simulation on a
computer, I observe all the registers and memory of this computer and
then recreate exactly this pattern of registers and memory data by some
other means than by running the original program. If one creates a
conscious experience, then so does the other.

The argument seems to be that the replay of the recording will not
recreate the conscious experience because it is not counterfactually
correct. I do not think that it has been demonstrated that this is
relevant. If exactly the same physical activity of the brain has been
replayed, then exactly the same consciousness would be experienced. This
is the meaning, as I see it, of saying that consciousness supervenes on
the physical state of the brain (or, probably more correctly, on the
sequence of physical states). Sure, replaying the movie does not
reconstruct an individual that can go on functioning independently once
the movie finishes -- but that was not the idea. We are reproducing a
conscious moment, not simulating a conscious entity in its entirety.

The Movie Graph Argument is an attempt to argue that this concept of
physical supervenience is absurd, so that consciousness supervenes only
on the (counterfactually correct) computation. I think the argument
fails because it assumes what it attempts to prove. Namely, it assumes
that physical supervenience is false (absurd).

Bruce

Bruce Kellett

unread,
Mar 26, 2015, 9:09:34 PM3/26/15
to everyth...@googlegroups.com
Yes, physical supervenience is incompatible with computationalism. But
it remains to be proved that physical supervenience is false and comp is
true. That is what I took the MGA to be attempting to do. If that is
what it is about, then it fails because it assumes what is to be proven.

Bruno said: "if we attribute it to the physical activity token: we get
the absurd conclusion: playing records and real-time consciousness
supervene on a static film, etc."

That is the invalid argument from incredulity.

Bruce

meekerdb

unread,
Mar 26, 2015, 9:34:39 PM3/26/15
to everyth...@googlegroups.com
I think counterfactual correctness is necessary for a process to be a computation.  Otherwise, I think the process is equivalent to a sequence of states A,B,C,... and there can be a translation into a completely different set of states a,b,c,... which then putatively instantiates the same computation.  This is what motivates the counterfactual correctness requirement.
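Brent's point can be illustrated with a toy sketch (my own construction; the parity function is just a stand-in for any computation): a replay of a recording agrees with the genuine computation on the recorded run, yet has nothing to say about any counterfactual input -- which is why counterfactual correctness is taken as part of what makes a process a computation rather than a mere state sequence.

```python
def compute_parity(bits):
    """A genuine computation: defined for every possible input."""
    result = 0
    for b in bits:
        result ^= b
    return result

# Record the outcome of one particular run.
recorded_run = (1, 0, 1, 1)
recording = {recorded_run: compute_parity(recorded_run)}

def replay(bits):
    """A recording: only answers for the run it was made from."""
    return recording.get(tuple(bits))  # None on any other input

# On the recorded run, replay and computation agree...
assert replay(recorded_run) == compute_parity(recorded_run) == 1

# ...but the replay is silent on a counterfactual input the
# computation handles without trouble.
assert replay((1, 1, 1, 1)) is None
assert compute_parity((1, 1, 1, 1)) == 0
```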

I would claim though that getting counterfactual correctness means anticipating and providing for all possible events, which in turn means incorporating a very large part of the world into the process to be recorded - maybe all the past light cone.  If you do that then the MGA is about simulating a world and it's trivial that the brain part of the simulation experiences that world *within the simulation*.

Brent
"It’s quite something to be the subject of a major film. It makes you realize how short life is when they cut out the boring bits."
   --- Stephen Hawking

Platonist Guitar Cowboy

unread,
Mar 26, 2015, 9:49:51 PM3/26/15
to everyth...@googlegroups.com
On Fri, Mar 27, 2015 at 2:09 AM, Bruce Kellett <bhke...@optusnet.com.au> wrote:
Platonist Guitar Cowboy wrote:



On Fri, Mar 27, 2015 at 1:10 AM, Bruce Kellett <bhke...@optusnet.com.au> wrote:

    PGC wrote:

        Why or how is anybody arguing that problem is generated or
        solved by "how somebody feels about it"?

        It's via contradiction/standard reductio: assume the conclusion
        is false and its negation true, and from this derive a
        contradiction. If that is the case, the conclusion must be true.

        Only two things are required: the law of excluded middle, and
        that if a statement implies something false, it must be false. PGC

    Where is the contradiction?

Of what? MGA? I just described the mechanism, far from "just feelings".

I assumed you had read at least a paper: incompatibility of physical supervenience with comp. PGC


Yes, physical supervenience is incompatible with computationalism.

Yup.
 
But it remains to be proved that physical supervenience is false and comp is true.

In what frame then, as it looks as if you're implying some sort of mega-ontology?

What you suggest goes beyond the scope of demonstrating that you can't keep both comp and physical supervenience in the same ontological frame.

You're quite the ultimate mystic, Bruce! ;-)

And Liz: yes, what if the movie graph dreams? Of course this is logically plausible.

But you forget that consciousness cannot supervene on the film due to computationalism being the only game in town at this point by definition. If you have something besides physical universes or comp ones, please share ;-)

No computational activity, given this frame, can be associated with the projection of the film, given the terms we're working with. This, or the stroboscope version of the argument, implies that if you're going to use comp with the noted precisions, then the observer is no longer required (universal numbers), nor the real-time playing of the film, which itself has no computational role (some accidental passive supervenience?) and can be discarded.

The absurdity is that if you allow the film to dream, counter to the kinds of objections raised here, you have to ride with all possible dreams supervening on the activity of the stroboscope, and even with no dreams at all, because the stroboscope itself actualizes no primitive computational activity and can therefore be discarded.

But if you really want: you can keep all your zombies that are no zombies and pretend this is not absurd...;-)

And no, this is no "proof that comp is true". That's way too strong. Just the incompatibility and absurdity if you want to keep both or have film dreaming or whatever. PGC


LizR

unread,
Mar 26, 2015, 10:16:25 PM3/26/15
to everyth...@googlegroups.com
PGC - I think you may have skimmed over too much for me to grasp what you're saying. But maybe not. So .... does the contradiction arise because you assume to start with that consciousness is created by computation, then show that it would also (assuming physical supervenience) arise from something that isn't computation?

I'm still not sure where the dreams come in, however. (Or the zombies...)

On the subject of counterfactual correctness, isn't that the point of Olimpia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states? To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)


Bruce Kellett

unread,
Mar 26, 2015, 10:34:36 PM3/26/15
to everyth...@googlegroups.com
LizR wrote:
>
> On the subject of counterfactual correctness, isn't that the point of
> Olimpia and Klara? My problem with counterfactual correctness is
> (probably the same as Maudlin's?) -- how does the system /know/ it's
> counterfactually correct if it doesn't actually pass through any of the
> "what-if" states? To put it another way, when you have a recording of
> the conscious computational states being replayed, what difference could
> be made by the presence (or absence) of all the extra bits that /would/
> deal with counterfactual correctness if a different computation was
> being replayed, but happen in this case not to be used? I can't see how
> this could make any physical difference to the states being replayed
> (unless counterfactual correctness introduces some nonphysical magic
> into the system?)

That is an excellent point. I find it deeply ironic that Bruno relies on
counterfactual correctness in his account of computationalism but
cheerfully abandons counterfactual definiteness when it comes to the MWI
explanation of the EPR correlations.

Bruce

Stathis Papaioannou

unread,
Mar 26, 2015, 11:31:24 PM3/26/15
to everyth...@googlegroups.com
It would take a vast amount of coding "by hand" to create a universe filling in details of miracles occurring at multiple arbitrary points, as opposed to an orderly universe with a few laws and initial conditions.


--
Stathis Papaioannou

Platonist Guitar Cowboy

unread,
Mar 27, 2015, 12:13:23 AM3/27/15
to everyth...@googlegroups.com
On Fri, Mar 27, 2015 at 3:16 AM, LizR <liz...@gmail.com> wrote:
PGC - I think you may have skimmed over too much for me to grasp what you're saying. But maybe not. So .... does the contradiction arise because you assume to start with that consciousness is created by computation, then show that it would also (assuming physical supervenience) arise from something that isn't computation?

Bruno will kick my butt for vulgarizing his thesis in this improvisatory, overly short, imprecise manner. I suspect you're still assuming physical universe without being aware of it.
 

I'm still not sure where the dreams come in, however. (Or the zombies...)

On the subject of counterfactual correctness, isn't that the point of Olimpia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states?

"The system" is what here? "It" referring to what here? Would you tend to interpret these as physical or comp objects?

Remember that comp supervenience requires physics to become part of machine psychology/theology; thus every explanatory potency of a physical universe is left behind. The association is some sensation [of my joy in space-time (x,t)] to [type] of relative computational state. 
 
To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)

A machine from which we remove some redundant parts, resulting in a finite set of states or executions, loses counterfactual correctness: the movie is not conscious. The universal machine viewing it via types, not tokens, of possible activities keeps CC intact, with consciousness supervening on potential activities, and not some brittle, particular branch of the same.

And yes, we can cite all manner of quantum weirdness and state that consciousness supervenes on physical processes that are not actualized. This is reasonable since measurements depending on potential observations that are non-actualized depend on CC. But here, Bruno iirc pointed out that this would be a case of tokens rather than types. In short (in the "Bruno will definitely kill me for simplifying and shortening as much as I have" sense), consciousness relative to a computational state of a universal machine supervenes on the set of possible accessible extensions of these states distributed over the entirety of the UD. PGC
 

meekerdb

unread,
Mar 27, 2015, 12:35:29 AM3/27/15
to everyth...@googlegroups.com
On 3/26/2015 7:16 PM, LizR wrote:
On the subject of counterfactual correctness, isn't that the point of Olimpia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states? To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)

I see two possible answers.  First, in a quantum world there is a superposition of all those "counterfactual" states, so they are really present, but only observable as different relative states.  Of course this already invokes QM and physics, rather than deriving them.  But maybe it can be shown that the infinite threads of the UD serve to test all the counterfactual states.

Or, secondly, although there is no physical difference in the sequence of states in the replaying, consciousness is not physical and so could be absent.  This doesn't require that consciousness be magic.  If it is the abstract thing called "computation" then in the abstract it needs to be counterfactually correct to count as computation.

Brent

LizR

unread,
Mar 27, 2015, 12:58:54 AM3/27/15
to everyth...@googlegroups.com
On 27 March 2015 at 17:13, Platonist Guitar Cowboy <multipl...@gmail.com> wrote:

On Fri, Mar 27, 2015 at 3:16 AM, LizR <liz...@gmail.com> wrote:
PGC - I think you may have skimmed over too much for me to grasp what you're saying. But maybe not. So .... does the contradiction arise because you assume to start with that consciousness is created by computation, then show that it would also (assuming physical supervenience) arise from something that isn't computation?

Bruno will kick my butt for vulgarizing his thesis in this improvisatory, overly short, imprecise manner. I suspect you're still assuming physical universe without being aware of it.

Not without being aware. That's why it says "assuming physical supervenience" :-)


I'm still not sure where the dreams come in, however. (Or the zombies...)

On the subject of counterfactual correctness, isn't that the point of Olimpia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states?

"The system" is what here? "It" referring to what here? Would you tend to interpret these as physical or comp objects?

All this is assuming materialism. I can't reject that until I understand how the MGA does a reductio on it.

Remember that comp supervenience requires physics to become part of machine psychology/theology; thus every explanatory potency of a physical universe is left behind. The association is some sensation [of my joy in space-time (x,t)] to [type] of relative computational state. 

Yes, I get the general idea. I want the specific details of how to get there. As I said I don't quite get the MGA.
 
To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)

A machine from which we remove some redundant parts resulting in a finite set of states or executions loses counterfactual correctness:

Yes I know, but why do (presumably, on a Friday...) materialists like Brent argue that you need CCness to have a computation - what physical difference is it supposed to make, in the physicalist ontology?
 
The movie is not conscious. The universal machine viewing it via types, not tokens, of possible activities keeps CC intact, with consciousness supervening on potential activities, and not some brittle, particular branch of the same.

 Fine but that only works once you've ditched materialism (see above) which is what I'd like to do to embrace my inner comp, but can't see how (yet).

And yes, we can cite all manner of quantum weirdness and state that consciousness supervenes on physical processes that are not actualized. This is reasonable since measurements depending on potential observations that are non-actualized depend on CC. But here, Bruno iirc pointed out that this would be a case of tokens rather than types. In short "Bruno will definitely kill me for simplifying and shortening as much as I have" sense, consciousness relative to computational state of a universal machine supervenes on set of possible accessible extensions of these states distributed on the entirety of the UD. PGC

Quantum theory gives a new slant on CCness, but is still I think materialist?

In a nutshell, what I want to know is .... how do I start from the assumption of materialism, and show it leads to a contradiction? Preferably in baby steps that even my pretty little head can grasp.

LizR

unread,
Mar 27, 2015, 1:02:21 AM3/27/15
to everyth...@googlegroups.com
On 27 March 2015 at 17:35, meekerdb <meek...@verizon.net> wrote:
On 3/26/2015 7:16 PM, LizR wrote:
On the subject of counterfactual correctness, isn't that the point of Olimpia and Klara? My problem with counterfactual correctness is (probably the same as Maudlin's?) -- how does the system know it's counterfactually correct if it doesn't actually pass through any of the "what-if" states? To put it another way, when you have a recording of the conscious computational states being replayed, what difference could be made by the presence (or absence) of all the extra bits that would deal with counterfactual correctness if a different computation was being replayed, but happen in this case not to be used? I can't see how this could make any physical difference to the states being replayed (unless counterfactual correctness introduces some nonphysical magic into the system?)
I see two possible answers.  First, in a quantum world there is a superposition of all those "counterfactual" states, so they are really present, but only observable as different relative states.  Of course this already invokes QM and physics, rather than deriving them.  But maybe it can be shown that the infinite threads of the UD serve to test all the counterfactual states.

Quantum physics definitely makes a difference here, but I believe Bruno's argument assumes digital mechanism which is classical, so QM has to drop out of it, if anything?

Or, secondly, although there is no physical difference in the sequence of states in the replaying, consciousness is not physical and so could be absent.  This doesn't require that consciousness be magic.  If it is the abstract thing called "computation" then in the abstract it needs to be counterfactually correct to count as computation.

That appears to throw out the physical supervenience thesis, like the MGA.

Bruce Kellett

unread,
Mar 27, 2015, 1:54:50 AM3/27/15
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
> It would take a vast amount of coding "by hand" to create a universe
> filling in details of miracles occurring at multiple arbitrary points,
> as opposed to an orderly universe with a few laws and initial conditions.
Not necessarily. Just insert a few (pseudo-)random numbers at strategic
points! But, on the other hand, the UD runs all possible programs, so
what does it matter if a few are a bit complicated. :-)

Bruce