Bruno,
It seems to me that this runs head-on into the problem of the
definition of time...
Here is my argument; I am sure there will be disagreement with it.
Supposing that Alice's consciousness is spread out over the movie
billboards next to the train track, there is no longer a normal
temporal relationship between mental moments. There must merely be a
"time-like" relationship, which Alice experiences as time.
But, then,
we are saying that wherever a logical relationship exists that is
time-like, there is subjective time for those inside the time-like
relationship.
Now, what might constitute a time-like relationship? I see several
alternatives, but none seem satisfactory.
At any given moment, all we can be directly aware of is that one
moment. If we remember the past, that is because at the present moment
our brain has those memories; we don't know if they "really" came from
the past. What would it mean to put moments in a series? It changes
nothing essential about the moment itself; we can remove the past,
because it adds nothing.
The connection between moments doesn't seem like a physical
connection; the notion is non-explanatory, since if there were such a
physical connection we could remove it without altering the individual
moments, and therefore without altering our memories or our subjective
experience of time.
Similarly, can it be a logical relationship? Is it
the structure of a single moment that connects it to the next? How
would this be? Perhaps we require that there is some function (a
"physics") from one moment to the next?
But, this does not exactly
allow for things like relativity in which there is no single universal
clock.
Of course, relativity could be simulated, creating a universe
that was run by a universal clock but whose internal facts did not
depend on which universal clock, exactly, the simulation was run from.
My problem is, I suppose, that any particular definition of "timelike
relationship" seems too arbitrary.
As another example, should any
probabilistic elements be allowed into physics? In this case, we don't
have a function any more, but a relation-- perhaps a relation of
weighted transitions. But how would this relation make any difference
from inside the universe?
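The function-versus-weighted-relation distinction above can be sketched in a toy model (the integer "moments" and the 0.9/0.1 weights here are invented purely for illustration):

```python
import random

def deterministic_step(moment):
    # a "physics" as a function: exactly one successor,
    # fully determined by the current moment
    return moment + 1

def weighted_transitions(moment):
    # a "physics" as a relation: each moment is related to several
    # successors, each carrying a weight (the weights sum to 1)
    return {moment + 1: 0.9, moment - 1: 0.1}

def sample_step(moment):
    # from inside the universe only one successor is ever experienced,
    # which is why it is unclear what difference the weights make
    successors = weighted_transitions(moment)
    return random.choices(list(successors),
                          weights=list(successors.values()))[0]

history = [0]
for _ in range(5):
    history.append(deterministic_step(history[-1]))
print(history)                      # [0, 1, 2, 3, 4, 5]
print(sample_step(0) in (1, -1))    # True
```

The sketch only dramatizes the question, of course: nothing in it says how the weights could matter from the inside.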
>> So, basically, you are saying that I'm offering an alternative
>> argument against materialism, correct?
>
> It seems to me you were going in that direction, yes.
>
Well, *I* was suggesting that we run up against the problem of time in
*either* direction (physical reality / mathematical reality); so the
real problem would be a naive view of time, rather than COMP + MAT.
But, you are probably right: the problem really only applies to MAT.
On the other hand, I might try to take up the argument again after
reading UDA. :)
> With the MEC hypothesis, a "believer" in comp "goes to hell". (Where a
> "believer" in p is someone who takes p for granted.)
> Comp is like self-consistency: a self-observing machine can guess it,
> hope it (or fear it), but can never take it for granted. It *is*
> theological. No machine can prove its theology, but a Löbian machine can
> study the complete theology of simpler Löbian machines, find the
> invariant for the consistent extensions, and lift it to themselves,
> keeping consistency by "consciously" being aware that this has to
> be taken as an interrogation; it is not for granted, so that saying
> "yes" to the doctor needs an act of faith, and can never be imposed.
> (Of course we can argue biology has already "bet" on it).
Yes, this is fundamentally interesting :).
> Maudlin shows that for a special computation, which supports in time
> some consciousness (by using the (physical) supervenience thesis), you
> can build a device doing the same computation with much less physical
> activity, actually with almost no physical activity at all. The
> natural reply is that such a machine no longer has the right
> counterfactual behavior. Then Maudlin shows that you can restore the
> counterfactual correctness of such a machine by adding what will be,
> for the special computation, just inert material.
> But this gives to inert material something which plays no role, or
> would give prescience to elementary material in computations; from
> which you can conclude that MEC and MAT do not work well together.
I am not sure this convinces me. If the "inert" material is useful to
the computation in the counterfactual situations, then it is useful,
and so cannot be removed.
> Abram, are you aware that Godel's incompleteness follows "easily" (=
> in few lines) from Church thesis? Not the second theorem, but the
> first, even a stronger form of the first.
No, I do not know that one.
>> --Abram
...
> But this reasoning goes through if we make the hole in the film
> itself. Reconsider the image on the screen: with a hole in the film
> itself, you get a "hole" in the movie, but everything which enters and
> go out of the hole remains the same, for that (unique) range of
> activity. The "hole" has trivially the same functionality than the
> subgraph functionality whose special behavior was described by the
> film. And this is true for any subparts, so we can remove the entire
> film itself.
>
I don't think this step follows at all. Consciousness may supervene on
the stationary unprojected film, but if you start making holes in it,
you will eventually be left with a nonconscious entity. At some
point, the consciousness is no longer supervening on the film (but may
well be supervening on other films that haven't been so adulterated,
or on running machines, or whatever...).
> Does Alice's dream supervene (in real time and space) on the
> projection of the empty movie?
>
No.
>
> 2)
>
> I give now what is perhaps a simpler argument
>
> A projection of a movie is a relative phenomenon. On the planet 247a,
> nearby in the galaxy, they don't have screens. The film pellicle is as
> big as a screen, and they make the film pass behind a stroboscope
> at the right frequency in front of the public. But on planet 247b,
> movies are only for travellers! They dress their films, as big as those
> on planet 247a, in their countries all along their train rails with a
> lamp beside each frame, which is nice because from the train,
> thanks to its speed, you get the usual 24 frames per second. But we
> already accepted that such a movie does not need to be observed; the
> train can be empty of people. Well, the train does not play any role,
> and what remains is the static film with a lamp behind each frame. Are
> the lamps really necessary? Of course not, all right? So now we are
> obliged to accept that the consciousness of Alice during the
> projection of the movie supervenes on something completely inert in
> time and space. This contradicts the *physical* supervenience thesis.
>
But the physics that Alice experiences will be fully dynamic. She will
experience time, and non-inert processes that she is supervening on.
Why does the physical supervenience require that all instantiations of
a consciousness be dynamic? Surely, it suffices that some are?
>
> c) Eliminate the hypothesis "there is a concrete deployment" in the
> seventh step of the UDA. Use UDA(1...7) to define properly the
> computationalist supervenience thesis. Hint: reread the remarks above.
I have no problems with this conclusion. However, we cannot eliminate
supervenience on phenomenal physics, n'est-ce pas?
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
On 29 Nov 2008, at 04:49, Abram Demski wrote:
>
> Bruno,
>
> I have done some thinking, and decided that I don't think this last
> step of the argument works for me. You provided two arguments, and so
> I provide two refutations.
>
> 1. (argument by removal of unnecessary parts): Suppose Alice lives in
> a cave all her life, with bread and water tossed down keeping her
> alive, but nobody ever checking to see that she eats it; to the
> outside world, she is functionally unnecessary. But from Alice's point
> of view, she is not functionally removable, nor are the other things
> in the cave that the outside world knows nothing about. The point is,
> we need to be careful about labeling things functionally removable; we
> need to ask "from whose perspective?". A believer in MAT who accepted
> the consciousness of the movie could claim that such an error is being
> made.
The argument was more of the type: "removal of unnecessary and
unconscious or unintelligent parts". Those parts have just no
perspective. If they have some perspective playing a role in Alice's
consciousness, it would mean we have not chosen the substitution
level well. You are reintroducing some consciousness on the elementary
parts, here, I think.
>
>
> 2. (argument by spreading movie in space instead of time): Here I need
> to go back further in the argument... I still think the objection
> about hypotheticals (ie counterfactuals) works just fine. :)
Then you think that someone who is conscious with some brain, which for
some reason never uses some neurons, could lose consciousness
when that never-used neuron is removed?
If that were true, how could one still be confident with an artificial
digital brain? You may be right, but the MEC hypothesis would be put
in doubt.
Bruno
Yes, but with MAT, the inert material has no use in the particular
instantiation we have chosen. If it plays a role, it cannot be in
virtue of the MEC hypothesis *together* with the MAT hypothesis. If
not, it means you already make consciousness supervene on the
abstract computation the pieces of material instantiate
"accidentally" here and now, not really on the physical process
implementing that computation.
Feel free to criticize ....
>
>
>
>> Abram, are you aware that Godel's incompleteness follows "easily" (=
>> in few lines) from Church thesis? Not the second theorem, but the
>> first, even a stronger form of the first.
>
> No, I do not know that one.
I will have the occasion to explain if I decide to make the UDA
beginning by step seven.
>
> On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
>> MGA 3
>
> ...
>
>> But this reasoning goes through if we make the hole in the film
>> itself. Reconsider the image on the screen: with a hole in the film
>> itself, you get a "hole" in the movie, but everything which enters
>> and goes out of the hole remains the same, for that (unique) range of
>> activity. The "hole" has trivially the same functionality as the
>> subgraph functionality whose special behavior was described by the
>> film. And this is true for any subparts, so we can remove the entire
>> film itself.
>>
>
> I don't think this step follows at all. Consciousness may supervene on
> the stationary unprojected film,
This, I don't understand. And, btw, if that is true, then the physical
supervenience thesis is already wrong. The
physical supervenience thesis asks that consciousness is associated in
real time and space with the activity of some machine (with MEC).
What do you mean by an instantiation of a dynamical process which is
not dynamic? Even a block universe describes a dynamical process, or a
variety of dynamical processes.
>
>
>>
>> c) Eliminate the hypothesis "there is a concrete deployment" in the
>> seventh step of the UDA. Use UDA(1...7) to define properly the
>> computationalist supervenience thesis. Hint: reread the remarks
>> above.
>
> I have no problems with this conclusion. However, we cannot eliminate
> supervenience on phenomenal physics, n'est-ce pas?
We cannot eliminate supervenience of consciousness on what we take as
other persons indeed. Of course phenomenal physics is a first person
subjective creation, and it helps to entangle our (abstract)
computational histories. That is the role of a "brain". It does not
create consciousness, it does only make higher the probability for
that consciousness to be able to manifest itself relatively to "other
consciousness". But consciousness can rely, with MEC, only to the
abstract computation.
Sorry for being a bit short, I have to go,
Bruno
> The argument was more of the type: "removal of unnecessary and
> unconscious or unintelligent parts". Those parts have just no
> perspective. If they have some perspective playing a role in Alice's
> consciousness, it would mean we have not chosen the substitution
> level well. You are reintroducing some consciousness on the elementary
> parts, here, I think.
>
The problem would not be with removing individual elementary parts and
replacing them with functionally equivalent pieces; this obviously
preserves the whole. Rather with removing whole subgraphs and
replacing them with equivalent pieces. As Alice-in-the-cave is
supposed to show, this can remove consciousness, at least in the limit
when the entire movie is replaced...
>
> Then you think that someone who is conscious with some brain, which for
> some reason never uses some neurons, could lose consciousness
> when that never-used neuron is removed?
> If that were true, how could one still be confident with an artificial
> digital brain? You may be right, but the MEC hypothesis would be put
> in doubt.
>
I am thinking of it as being the same as someone having knowledge
which they never actually use. Suppose that the situation is so
extreme that if we removed the neurons involved in that knowledge, we
will not alter the person's behavior; yet, we will have removed the
knowledge. Similarly, if the behavior of Alice in practice comes from
a recording, yet a dormant conscious portion is continually ready to
intervene if needed, then removing that dormant portion removes her
consciousness.
--Abram
Then assuming MEC requires some definition of "activity" and consciousness may
cease when there is no activity of the required kind.
Brent
>
> Bruno,
>
>> The argument was more of the type: "removal of unnecessary and
>> unconscious or unintelligent parts". Those parts have just no
>> perspective. If they have some perspective playing a role in Alice's
>> consciousness, it would mean we have not chosen the substitution
>> level well. You are reintroducing some consciousness on the elementary
>> parts, here, I think.
>>
>
> The problem would not be with removing individual elementary parts and
> replacing them with functionally equivalent pieces; this obviously
> preserves the whole. Rather with removing whole subgraphs and
> replacing them with equivalent pieces. As Alice-in-the-cave is
> supposed to show, this can remove consciousness, at least in the limit
> when the entire movie is replaced...
The limit is not relevant. I agree that if you remove Alice, you
remove any possibility for Alice to manifest herself in your most
probable histories. The problem is that in the range of activity of the
projected movie, removing a part of the graph changes nothing. It
changes only the probability of recovering Alice from her history in,
again, your most probable history. There is no physical causal link
between the experience attributed to the physical computation and the
"causal history of projecting a movie". The incremental removing of
the graph highlighted the lack of causality in the movie. Perhaps not in
the clearest way, apparently. Perhaps I should have done the case
of a non-dream. I will come back on this.
>
>
>
>>
>> Then you think that someone who is conscious with some brain, which
>> for some reason never uses some neurons, could lose consciousness
>> when that never-used neuron is removed?
>> If that were true, how could one still be confident with an artificial
>> digital brain? You may be right, but the MEC hypothesis would be put
>> in doubt.
>>
>
> I am thinking of it as being the same as someone having knowledge
> which they never actually use. Suppose that the situation is so
> extreme that if we removed the neurons involved in that knowledge, we
> will not alter the person's behavior; yet, we will have removed the
> knowledge. Similarly, if the behavior of Alice in practice comes from
> a recording, yet a dormant conscious portion is continually ready to
> intervene if needed, then removing that dormant portion removes her
> consciousness.
You should definitely do the removing of the graph in the non-dream
situation. Let us do it.
Let us take a situation without complex inputs. Let us imagine Alice
is giving a conference in a big room; so, as input she is just blinded
by some projector, plus some noise, and she gives a talk on Astronomy (to
fix the things). Now from 8h30 to 8h45 pm, she has just no brain; she
gets the "motor" info from a projected recording of a previous *perfect
dream* of that conference, a dream done the night before, or sent from
Platonia (possible in principle). Then, by magic, to simplify, at 8h45
she gets back the original brain, which by optical means inherits the
state at the end of the conference in that perfect dream. I ask you:
would you say Alice was a zombie during the conference?
Bruno
>> This, I don't understand. And, btw, if that is true, then the physical
>> supervenience thesis is already wrong. The physical supervenience
>> thesis asks that consciousness is associated in real time and space
>> with the activity of some machine (with MEC).
>
> Then assuming MEC requires some definition of "activity" and
> consciousness may
> cease when there is no activity of the required kind.
We require a notion of physical activity related to a computation for
having MEC *and* the supervenience thesis.
With MEC alone, we abandon MAT; the computational supervenience thesis
will have to make any notion of physical causality a statistically
emerging pattern from (hopefully sharable) first person (plural)
points of view.
Not at all. I have defined "history" by a computation as seen from a
first person (plural or not).
Of course, well I guess I should insist on that perhaps: by
computation I always mean the mathematical object; it makes sense only
with respect to some universal machine, and I have chosen
elementary arithmetic as the primitive one.
Although strictly speaking the notion of computable is an epistemic
notion, it happens that Church thesis makes it equivalent with a purely
mathematical notion, and this is used for making the notion of
probable history a purely mathematical notion (once we get a
mathematical notion of first person; but this is simple in the thought
experiment (memory, diary, ...), and a bit more subtle in the interview
(AUDA)).
A difficulty, in those post correspondences, is that I am reasoning
currently with MEC and MAT, just to get the contradiction, but in many
(most) posts I reason only with MEC (having abandoned MAT).
After UDA, you can already understand that "physical" has to be
equivalent with "probable history" for those who followed the whole
UDA+MGA. "Physical" has to refer to the most probable (and hopefully
sharable) relative computational history.
This is already the case with just UDA, if you assume both the
existence of a "physical universe" and of a concrete UD running in
that concrete universe. MGA is designed to eliminate the assumption of
"a physical universe" and of the "concrete UD".
>
>
>> There is no physical causal link
>> between the experience attributed to the physical computation and the
>> "causal history of projecting a movie".
>
> But there is a causal history for the creation of the movie - it's a
> recording
> of Alice's brain functions which were causally related to her
> physical world.
Assuming MEC+MAT you are right indeed. But the causal history of the
creation of the movie is not the same "computation" or causal chain
as the execution of Alice's mind and Alice's brain during her
"original dream". If you abstract away that difference, it means
you already don't accept the physical supervenience thesis, or, again,
you are introducing "magical knowledge" into the elementary parts running
the computation.
You can only forget the difference between those two "computations" by
abstracting from the physical part of the story. This means you are
using exclusively the computational supervenience. MGA should make
clear (but OK, I warned MGA is subtle) that the consciousness has to
be related to the genuine causality or history. But it is that very
genuineness that physics can accidentally reproduce in a non-genuine
way, like the brain movie projection, making the physical
supervenience absurd.
It seems to me quasi-obvious that it is ridiculous to attribute
consciousness to the physical events of projecting the movie of a
brain. That movie gives a pretty detailed description of the
computations, but there is just no computation, nor even a genuine
causal relation between the states. Even one frame is not a genuine
physical computational state, only a relative description of one. In a
cartoon, if you see someone throwing a ball at a window, the
description of the broken glass is not caused by the description of
someone throwing a ball. And nothing changes, for the moment of the
projection of the movie, if the cartoon has been made from a real
similar filmed situation.
To attribute consciousness to the stationary (non-projected) film
contradicts the supervenience thesis immediately, of course.
All this is a bit complex because we have to take well into account
the distinction between
A computation in the "real" world,
A description of a computation in the "real" world,
And then most importantly:
A computation in Platonia
A description of a computation in Platonia.
I argue that consciousness supervenes on computation in Platonia. Even
in Platonia, consciousness does not supervene on a description of the
computation, even if those descriptions are 100% precise and correct.
I said a long time ago that a counting algorithm is not a universal
dovetailer, despite the fact that a counting algorithm can be said to
generate all the descriptions of all computations (meaning also that
the natural numbers and the successor function, but without the addition
and multiplication laws, are not enough). A UD does compute, in the
arithmetical Platonia; it does not just enumerate the descriptions of
the computations.
It is subtle, a bit like the difference between "A implies B" and
"the deduction of A from B" in logic. In the interview of the Löbian
machine, those distinctions remain subtle but at least can be made
mathematically transparent. (But this is not needed to understand the
UDA-MGA, or UDA(1...8).)
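A toy contrast might make the counting-versus-dovetailing distinction above concrete. The three-"program" machine model here is entirely invented for illustration: the counter merely enumerates program descriptions, while the dovetailer actually runs every program, one step at a time, so no non-halting program can block the others.

```python
programs = {                      # toy "programs": state -> state steps
    "double": lambda s: s * 2,
    "inc":    lambda s: s + 1,
    "loop":   lambda s: s,        # makes no progress (never halts)
}

def counter():
    # generates the descriptions only; nothing is ever executed
    return sorted(programs)

def dovetail(passes):
    # interleave execution: one more step of every program per pass
    states = {name: 1 for name in programs}
    for _ in range(passes):
        for name, step in programs.items():
            states[name] = step(states[name])
    return states

print(counter())       # ['double', 'inc', 'loop']
print(dovetail(3))     # {'double': 8, 'inc': 4, 'loop': 1}
```

The counter outputs the same list however long it runs; only the dovetailer generates the computational histories themselves.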
>
>
>> The incremental removing of
>> the graph hilighted the lack of causality in the movie.
>
> It seems to me there is still a causal chain - it is indirected via
> creating the
> movie.
OK, but you cannot say that Alice is conscious from 3h30 to 3h45 by
supervening on an event which is "another story" than the computation
supposed to be done by the graph for that experience. The
"indirectness" you mention is enough to discredit the physical
supervenience thesis.
My answer would have to be, no, she lacks the necessary counterfactual
behaviors during that time. And, moreover, if only part of the brain
were being run by a recording, then she would lack only some
counterfactuals, and so she would count as partially conscious.
Bruno, do you still keep a notion of causality and the like in
Platonia? I have collected these snips from some recent posts:
Brent Meeker wrote:
>But is causality an implementation detail? There seems to be an
>implicit
>assumption that digitally represented states form a sequence just
>because there
>is a rule that defines that sequence, but in fact all digital (and
>other)
>sequences depend on causal chains.
Kory wrote:
> I have an intuition that causality
>(or its logical equivalent in Platonia) is somehow important for
>consciousness. You argue that the slide from Fully-Functional
>Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
>indicates that there's something wrong with this idea. However, you
>have an intuition that order is somehow important for consciousness.
But we must realise that causality is a concept that is deeply related
(cognitively, in humans) to time and physical change.
But both time and space _emerge_ only from the inside view (1st person
or 1st person shareable) in the sum over all computations.
In Platonia (viewed, for the time being, ludicrously and impossibly,
from the outside) - there is no notion of time, space, sequentiality,
before and after.
The very notion of causation must be one that arises only in the inside
view, as a "succession" of consistent patterns.
In a sense, order (shareable histories) must arise from the Platonic
Eternal Mess (chaos) -> somehow along the lines of self-organization maybe:
http://en.wikipedia.org/wiki/Self-organization#Self-organization_in_mathematics_and_computer_science
In this sense, the computations would "assemble themselves" to
"consistent histories".
Bruno said:
>Even
>in Platonia consciousness does not supervene on description of the
>computation, even if those description are 100% precise and correct
Hmm, I understand the difference between description and computation in
maths and logic, and also in real world, but I do not know if this still
makes sense in Platonia -> viewed from the acausal perspective outlined
above. Well maybe in the sense that in some histories there will be
platonic descriptions that are not conscious.
But in other histories those descriptions will be computations and
conscious.
Cheers,
Günther
On Sat, Nov 29, 2008 at 10:11:30AM +0100, Bruno Marchal wrote:

> This, I don't understand. And, btw, if that is true, then the physical
> supervenience thesis is already wrong. The physical supervenience
> thesis asks that consciousness is associated in real time and space
> with the activity of some machine (with MEC).
I am speaking as someone unconvinced that MGA2 implies an
absurdity. MGA2 implies that the consciousness is supervening on the
stationary film.
BTW - I don't think the film is conscious by virtue of the
counterfactuals issue, but that's a whole different story. And
"Olympization" doesn't work, unless we rule out the multiverse.Why does the physical supervenience require that all instantiations ofa consciousness be dynamic? Surely, it suffices that some are?What do you mean by an instantiation of a dynamical process which isnot dynamic. Even a block universe describe a dynamical process, or avariety of dynamical processes.
A block universe is nondynamic by definition. But looked at another
way, (ie from the inside) it is dynamic. It neatly illustrates why
consciousness can supervene on a stationary film (because it is
stationary when viewed from the inside).
The "film", however does need
to be sufficiently rich, and also needs to handle counterfactuals
(unlike the usual sort of movie we see which has only one plot).
The problem is that eliminating the brain from phenomenal experience
makes that experience even more highly probable than without. This is
the Occam catastrophe I mention in my book. Obviously this contradicts
experience.
Therefore I conclude that supervenience on a phenomenal physical brain
is necessary for consciousness.
I speculate a bit that this may be due
to self-awareness, but don't have a good argument for it. It is the
"elephant in the room" with respect to pure MEC theories.Sorry for being a bit short, I have to go,Brunohttp://iridia.ulb.ac.be/~marchal/--
I must admit you have completely lost me with MGA 3.
With MGA 1 and 2, I would say that, with MEC+MAT, also the
projection of the movie (and Lucky Alice in 1) are conscious - because
it supervenes on the physical activity.
MEC says: it's the computation that counts, not the substrate.
MAT says: we need some substrate to perform a computation. In MGA 1 and
2 we have substrates (neurons or an optical boolean graph that performs
the computation).
Now in MGA 3 you say:
> Now, consider the projection of the movie of the activity of Alice's
> brain, "the movie graph".
> Is it necessary that someone look at that movie? Certainly not.
Agreed.
> Is it necessary to have a screen? Well, the range of activity here is
> just one dynamical description of one computation. Suppose we make a
> hole in the screen. What goes in and out of that hole is exactly the
> same, with the hole and without the hole. For that unique activity, the
> hole in the screen is functionally equivalent to the subgraph which the
> hole removed.
We can remove those optical boolean nodes which are not relevant for the
caterpillar dream.
>Clearly we can make a hole as large as the screen, so no
> need for a screen.
but no! Then we wouldn't have a substrate anymore. You are dropping MAT
at this step, not leading MEC+MAT to a contradiction.
> But this reasoning goes through if we make the hole in the film itself.
> Reconsider the image on the screen: with a hole in the film itself, you
> get a "hole" in the movie, but everything which enters and go out of the
> hole remains the same, for that (unique) range of activity. The "hole"
> has trivially the same functionality than the subgraph functionality
> whose special behavior was described by the film. And this is true for
> any subparts, so we can remove the entire film itself.
We can talk about this part after I understand why you can drop our
optical boolean network *grin*
Cheers,
Günther
I have reread MGA 2 and would like to add the following:
We have the
optical boolean graph: OBG -> this computes Alice's dream.
We make a movie of this computation.
Now we run it again, but in the OBG some nodes do not make the computation
correctly, BUT the movie _triggers_ the nodes, so in the end the
"computation" is performed.
So, with MEC+MAT and ALL NODES broken, I say this:
a) If the OBG nodes MALFUNCTION, but their function is substituted with
the movie (on/off), it is conscious.
b) If the OBG is broken in such a way that all nodes are not active
anymore (no on/off, no signal passing), then no consciousness.
I think we can split the intuitions along these lines: if you assume
that consciousness depends on activity along the vertices, then Alice is
conscious neither in a nor in b, and then indeed I see why already MGA 2
leads to a problem with MEC+MAT.
But if I think that consciousness supervenes only on the correct
lighting up of the nodes (not the vertices!! -> I don't need causality
then, only the correct order), then a) would be conscious, b) not, and
MGA 3 does not work if you take away my OBG (with the node intuition)!
Cheers,
Günther
For what it's worth, I do think that there's a *kind* of
causality in Platonia. Let me once again trot out the picture of a
platonic block universe in which the initial state is the binary
digits of PI, and the succeeding states are determined by the rules of
Conway's Life. This block universe exists unchangingly and eternally
in Platonia, but the states of the bits within it are related in a
kind of causal fashion. The state of each bit in the block is
determined (in a sense, "caused") by the pyramid of cells beneath it,
stretching back to the initial state, which is determined by the
algorithm for computing the binary digits of PI. In this sense,
causality is an essential aspect of the platonic notion of computation.
One might argue that this is really a misuse of the concept of
"causality" - that I should just talk about the necessary logical
relationships that are there "by definition" in my platonic object.
But my point is that these logical relationships fill the exact role
that "causality" is supposed to fill for the physicalist. When
patterns of bits within this platonic block universe "discuss" their
own physics, they might talk about how current configurations of
physical matter were "caused" by previous states. The logical
connections in Platonia are a good candidate for what they can
actually be talking about.
This platonic form of "causality" may not always be directly related
to the concept of time that patterns of bits in a block universe might
have. For instance, there's a cellular automaton rule (which deserves
to be much more widely known than it is) called Critters which is as
simple as Conway's Life, uses only bits (on or off), is known to be
computation universal, and is also fully reversible. This gets weird,
because the computational structures within a Critters block universe
will still seem to favor one direction in time - they'll store
memories about the "past" and try to anticipate the "future", etc. But
in fact, our own physics seems to be reversible, so we have these same
issues to work out regarding our own consciousness. The point is that,
within a Critters block universe in Platonia, the states will still be
logically related to each other in a way that precisely matches what
physicists in the block universe (the "critters within Critters"!)
would think of as causality.
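The reversibility claim about Critters can be checked mechanically. The sketch below (my own, using one common formulation of the rule) treats Critters as a 2x2 Margolus-block rule: blocks with exactly two live cells are left alone; every other block is complemented, and blocks that had exactly three live cells are also rotated 180 degrees. The block map is a permutation of the 16 block states, so the dynamics is exactly reversible; the code demonstrates this by inverting the permutation and undoing a run step by step.

```python
from itertools import product

def critters_block(cells):
    """One common formulation of the Critters rule on a 2x2 block
    (top-left, top-right, bottom-left, bottom-right)."""
    n = sum(cells)
    if n == 2:
        return cells
    flipped = tuple(1 - c for c in cells)
    if n == 3:              # also rotate the block 180 degrees
        flipped = flipped[::-1]
    return flipped

# The rule is a bijection on the 16 block states, hence invertible.
FWD = {blk: critters_block(blk) for blk in product((0, 1), repeat=4)}
INV = {v: k for k, v in FWD.items()}

def step(grid, offset, table):
    """Apply the block table on the 2x2 partition at the given offset
    (0 or 1, alternating each step); grid dimensions must be even."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(offset, h, 2):
        for x in range(offset, w, 2):
            ys, xs = (y % h, (y + 1) % h), (x % w, (x + 1) % w)
            blk = (grid[ys[0]][xs[0]], grid[ys[0]][xs[1]],
                   grid[ys[1]][xs[0]], grid[ys[1]][xs[1]])
            a, b, c, d = table[blk]
            out[ys[0]][xs[0]], out[ys[0]][xs[1]] = a, b
            out[ys[1]][xs[0]], out[ys[1]][xs[1]] = c, d
    return out

def run_forward(grid, steps):
    for t in range(steps):
        grid = step(grid, t % 2, FWD)
    return grid

def run_backward(grid, steps):
    for t in reversed(range(steps)):
        grid = step(grid, t % 2, INV)
    return grid
```

Running forward and then backward recovers the initial state bit for bit, which is the precise sense in which the "block universe" of a Critters history determines its past as tightly as its future.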
-- Kory
I still find the whole thing easier to grasp when presented in terms
of cellular automata.
Let's say we have a computer program that starts with a large but
finite 2D grid of bits, and then iterates the rules of some CA
(Conway's Life, Critters, whatever) on that grid a large but finite
number of times, and stores all of the resulting computations in
memory, so that we have a 3D block universe in memory. And let's say
that the resulting block universe contains patterns that MECH-MAT
would say are conscious.
If we believe that consciousness supervenes on the physical act of
"playing back" the data in our block universe like a movie, then we
have a problem. Because before we play back the movie, we can fill any
portions of the block universe we want with zeros. So then our played
back movie can contain "conscious" creatures who are walking around
with (say) zeros where their visual cortexes should be, or their high-
level brain functions should be, etc. In other words, we have a fading
qualia problem (which we have also called a "partial zombie" problem
in these threads).
I find the argument compelling as far as it goes. But I'm not
convinced that all or most actual, real-world mechanist-materialists
believe that consciousness supervenes on the physical act of playing
back the stored computations. Bruno indicates that it must, by the
logical definitions of MECH and MAT. This just makes me feel like I
don't really understand the logical definitions of MECH and MAT.
-- Kory
It is, prima facie, no more absurd than consciousness supervening on a
block universe.
> >
> > A block universe is nondynamic by definition. But looked at another
> > way, (ie from the inside) it is dynamic. It neatly illustrates why
> > consciousness can supervene on a stationary film (because it is
> > stationary when viewed from the inside).
>
> OK, but then you clearly change the physical supervenience thesis.
>
How so? The stationary film is a physical object, I would have thought.
>
> > The "film", however does need
> > to be sufficiently rich, and also needs to handle counterfactuals
> > (unlike the usual sort of movie we see which has only one plot).
>
>
> OK. Such a film could be said to be a computation. Of course you are
> not talking about a stationary thing, which, be it physical or
> immaterial, cannot handle counterfactuals.
>
If true, then a block universe could not represent the
Multiverse. Maybe so, but I think a lot of people might be surprised
at this one.
> >
> > The problem is that eliminating the brain from phenomenal experience
> > makes that experience even more highly probable than without. This is
> > the Occam catastrophe I mention in my book. Obviously this contradicts
> > experience.
> >
> > Therefore I conclude that supervenience on a phenomenal physical brain
> > is necessary for consciousness.
>
>
> It is vague enough so that I can interpret it favorably through MEC.
>
That is my point - physical supervenience (aka materialism) is not
only not contradicted by MEC (aka COMP), but is in fact necessary for
it to even work. Only what I call naive physicalism
(aka the need for a concrete instantiation of a computer running the
UD) is contradicted by MEC.
What _is_ interesting is that not all philosophers distinguish between
physicalism and materialism. David Chalmers does not, but Michael
Lockwood does, for instance. Much of this revolves around the
ontological status of emergence.
On 30 Nov 2008, at 18:53, Günther Greindl wrote:
>
> Hi all,
>
> Bruno, do you still keep a notion of causality and the likes in
> platonia? I have collected these snips from some recent posts:
OK, I will comment, and perhaps say more for the benefit of the
others. But in a nutshell, the simplest notion of "causality" in
Platonia is the implication: A "causes" B if and only if A is false or B
is true. I recall that "the Platonia" of Peano Arithmetic is just
arithmetical truth, or the "standard model" of Elementary Arithmetic,
like the Platonia of Zermelo Fraenkel set theory is (the more dubious)
Set Theoretical truth. In some contexts I can use deduction as a form
of Platonist causality, for example for a first order Löbian machine.
As expected it is a mathematical causality, and has a priori no
relation with physical causality ...
Then, you can consider key subsets of the implication/deduction
causalities: the computational causality, for example A "causes" B if
all computations (executed by the UD) going through A are going
through B. Or things like that (there will be many variants). All
notions should be translatable in formal arithmetic (or combinators,
fortran programs, etc.) when we interview the machines in Platonia,
notably to retrieve the physical laws (or the belief in the physical
laws). When this is done we should have the comp physical notions
capable of explaining our intuitive notion of physical causality.
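The two notions above can be put side by side in a toy formalization (my own illustration, not anything from the posts): material implication, and a finite stand-in for "A causes B iff all computations going through A go through B", with a handful of explicit state sequences standing in for the UD's computations.

```python
def implies(a, b):
    """Material implication: A "causes" B iff A is false or B is true."""
    return (not a) or b

def comp_causes(a, b, traces):
    """Toy computational causality: A "causes" B iff every computation
    (here: a finite list of state sequences, standing in for the UD's
    computations) that goes through state A also goes through state B."""
    return all(b in t for t in traces if a in t)
```

With traces `[[1, 2, 3], [1, 3], [4, 5]]`, state 1 "causes" 3 (both runs through 1 reach 3) but does not "cause" 2.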
>
>
> Brent Meeker wrote:
>
>> But is causality an implementation detail? There seems to be an
>> implicit
>> assumption that digitally represented states form a sequence just
>> because there
>> is a rule that defines that sequence, but in fact all digital (and
>> other)
>> sequences depend on causal chains.
>
> Kory wrote:
>
>> I have an intuition that causality
>> (or its logical equivalent in Platonia) is somehow important for
>> consciousness. You argue that the the slide from Fully-Functional
>> Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
>> indicates that there's something wrong with this idea. However, you
>> have an intuition that order is somehow important for consciousness.
>
> But we must realise that causality is a concept that is deeply related
> (cognitively, in humans) to time and physical change.
I agree. Especially physical causality. But even the notion of
"responsibility" is deeply related to time (and causality).
>
>
> But both time and space _emerge_ only from the inside view (1st person
> or 1st person shareable) in the sum over all computations.
Assuming comp, and that we are correct, ok.
>
>
> In Platonia (viewed, for the time being, ludicrously and impossibly,
> from the outside)
A powerful Löbian machine like ZF can do this, looking at some
Platonia, in a precise way when reasoning on the "Platonia" of a
simpler sound Löbian machine. (Even for the 1-Platonias, the first
person pov in Platonia (this gives the hypostases)).
> - there is no notion of time, space, sequentiality,
> before and after.
Right. But don't overlook that the number zero is before the number
one, which is itself before the number two, which is before the number
three, etc. (With "before" interpreted by minus one). The UD itself
has a first computation step, then a second, then a third, etc. But
like "a movie", you can look at all of them, well if you are
infinitely patient of course, and immortal. It is in that sense
(before making things more technical) that the UD computes in Platonia.
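The sense in which the UD has a first step, a second step, and so on, while still eventually reaching every step of every program, is just dovetailing. A toy sketch of my own (with trivial generators standing in for real programs; a genuine UD would enumerate all programs of some universal language):

```python
def program(i):
    """Stand-in for the i-th program: a toy infinite computation."""
    n = 0
    while True:
        yield (i, n)   # "step n of program i"
        n += 1

def dovetail(rounds):
    """Toy universal dovetailer: in round n, start program n and then
    advance every started program by one step. Every step of every
    program is eventually reached, though no program ever finishes."""
    running, trace = [], []
    for n in range(rounds):
        running.append(program(len(running)))
        for p in running:
            trace.append(next(p))
    return trace
```

The trace begins (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ... -- a single linear sequence of steps that nonetheless visits all the interleaved computations.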
>
>
> The very notion of causation must be one that arises only in the
> inside
> view, as a "succession" of consistent patterns.
OK.
>
>
> In a sense, order (shareable histories) must arise from the Platonic
> Eternal Mess (chaos) -> somehow along the lines of self-organization
> maybe:
> http://en.wikipedia.org/wiki/Self-organization#Self-organization_in_mathematics_and_computer_science
>
> In this sense, the computations would "assemble themselves" to
> "consistent histories".
>
> Bruno said:
>> Even
>> in Platonia consciousness does not supervene on description of the
>> computation, even if those description are 100% precise and correct
>
> Hmm, I understand the difference between description and computation
> in
> maths and logic, and also in real world, but I do not know if this
> still
> makes sense in Platonia -> viewed from the acausal perspective
> outlined
> above. Well maybe in the sense that in some histories there will be
> platonic descriptions that are not conscious.
>
> But in other histories those descriptions will be computations and
> conscious.
A "movie" *in* Platonia would be a description of a computation
encoded in some static way by some occasional program or entity. Even
in Platonia, such a description is not a computation, but only a
description (without any causality, even in the simple implication/
deduction sense). It is the difference between the fact that three
added to two gives five, and the writing or the Gödel number of the
sentence "3+2=5".
It is really the difference between a reality (be it mathematical),
and a picture (be it dynamic) of that reality.
Consciousness will be related to "true mathematical facts", not on
description of those facts. The difficulty here is that consciousness
is also related to description in the memory of a machine.
Comp implies the "unification" in the sense of Bostrom. Conscious
experiences are unique, they cannot be replicated, and they supervene
on all the computations, and thus all implementations going through
their corresponding 3 comp states, or below.
Best,
Bruno
Oh! That is not true! We still have the projector and the film. We can
project the movie in the air or directly in your eyes.
I agree with this when the film itself is made empty, but then
I can recover a counterfactually correct computation by adding inert
material!
> You are dropping MAT
> at this step,
No. Only once I get that Alice's consciousness supervenes on the empty
film (with or without inert material).
> not leading MEC+MAT to a contradiction.
>
>> But this reasoning goes through if we make the hole in the film
>> itself.
>> Reconsider the image on the screen: with a hole in the film itself,
>> you
>> get a "hole" in the movie, but everything which enters and go out
>> of the
>> hole remains the same, for that (unique) range of activity. The
>> "hole"
>> trivially has the same functionality as the subgraph functionality
>> whose special behavior was described by the film. And this is true
>> for
>> any subparts, so we can remove the entire film itself.
>
> We can talk about this part after I understand why you can drop our
> optical boolean network *grin*
It is really something people have to meditate on. I could have concluded
in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to take
people seriously when they argue that the consciousness of Alice
supervenes on a movie of her brain activity. There is no causality,
nor computation, during the *projection* of the movie. Alice's
experience is related to ALL computations going through those states,
not to descriptions of those states, which can be made and collected
in other histories. Locally it makes sense to ascribe *that*
consciousness when you have the means to interpret (through some
universal machine) her computational states.
[Consciousness of (x,t)] is never [physical states] at (x,t)
it is:
[Consciousness of (x,t)] is always all computational states (in the
UD*) corresponding to that experience. (It is an indexical view of
reality).
And computational states can be defined by true platonic relations
between numbers. (The usual way is via the Kleene predicate.)
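The parenthetical remark can be illustrated with a toy (my own sketch, not Bruno's formalism): for a tiny one-register machine, "program e halts on input x within s steps" is a concrete combinatorial fact about the triple (e, x, s), in the spirit of Kleene's T predicate, decidable by bounded simulation.

```python
def run_bounded(prog, x, steps):
    """Run a toy one-register machine for at most `steps` instructions.
    Returns ('halted', acc) or ('running', acc): a bounded, decidable
    fact about (prog, x, steps), akin to Kleene's T predicate."""
    acc, pc = x, 0
    for _ in range(steps):
        op = prog[pc]
        if op[0] == 'halt':
            return ('halted', acc)
        if op[0] == 'inc':
            acc, pc = acc + 1, pc + 1
        elif op[0] == 'dec':
            acc, pc = max(0, acc - 1), pc + 1
        elif op[0] == 'jz':          # jump if the accumulator is zero
            pc = op[1] if acc == 0 else pc + 1
        elif op[0] == 'jmp':         # unconditional jump
            pc = op[1]
    return ('running', acc)

# Example program: count the accumulator down to zero, then halt.
COUNTDOWN = [('jz', 3), ('dec',), ('jmp', 0), ('halt',)]
```

Whether `('halted', ...)` comes back for a given bound is a pure relation between numbers (once programs and steps are coded as numbers), with no physical activity involved.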
Bruno
On 30 Nov 2008, at 19:17, Abram Demski wrote:
>
> Bruno,
>
> No, she cannot be conscious that she is partially conscious in this
> case, because the scenario is set up such that she does everything as
> if she were fully conscious-- only the counterfactuals change. But, if
> someone tested those counterfactuals by doing something that the
> recording didn't account for, then she may or may not become conscious
> of the fact of her partial consciousness-- in that case it would be
> very much like brain damage.
A very serious brain damage!
>
>
> Anyway, yes, I am admitting that the film of the graph lacks
> counterfactuals and is therefore not conscious.
OK.
> My earlier splitting
> of the argument into an argument about (1) and a separate argument
> against (2) was perhaps a bit silly, because the objection to (2) went
> far enough back that it was also an objection to (1). I split the
> argument like that just because I saw an independent flaw in the
> reasoning of (1)... anyway...
>
> Basically, I am claiming that there is a version of COMP+MAT that MGA
> is not able to derive a contradiction from. The version goes something
> like this:
>
> "Yes, consciousness supervenes on computation, but that computation
> needs to actually take place (meaning, physically). Otherwise, how
> could consciousness supervene on it?
Yes, but with UDA the contrary happens. Even with a material world, the
question becomes: how could consciousness remain attached to this
matter?
(It is simpler to understand this issue by supposing some concrete
universal deployment in the "real" universe, and this provides the
motivation for MGA: the concreteness of the UD is a red herring.)
You seem to forget that the MAT mind-body problem is not solved. I
mean this is what all experts in the field agree on. To invoke matter
to have something on which consciousness can supervene seems to me
a "gap explanation". It introduces more mystery than needed.
> Now, in order for a computation
> to be physically instantiated, the physical instantiation needs to
> satisfy a few properties. One of these properties is clearly some sort
> of isomorphism between the computation and the physical instantiation:
> the actual steps of the computation are represented in physical form.
> A less obvious requirement is that the physical computation needs to
> have the proper counterfactuals: if some external force were to modify
> some step in the computation, the computation must progress according
> to the new computational state (as translated by the isomorphism)."
You will be led to difficulties, like giving a computational role to
inert material. It is ok, because it saves the counterfactuals (and
thus MEC), but at the price of attributing a flow of conscious
experience (in real time) to inert material. I can't swallow that,
especially if the motivation is going back to the unsolved problems of
mind, matter and their relations.
By dropping MAT, we have an explanation of consciousness, or of the
reason why numbers, due to their true relations with many other
numbers, can develop from the inside stable (from their views) beliefs
about realities, including, as evidence can be found, physical
realities. Numbers, or combinators, etc.
Bruno
>
> On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
>>>
>>> I am speaking as someone unconvinced that MGA2 implies an
>>> absurdity. MGA2 implies that the consciousness is supervening on the
>>> stationary film.
>>
>>
>> ? I could agree, but is this not absurd enough, given MEC and the
>> definition of the physical supervenience thesis?
>
> It is, prima facie, no more absurd than consciousness supervening on a
> block universe.
>
>>>
>>> A block universe is nondynamic by definition. But looked at another
>>> way, (ie from the inside) it is dynamic. It neatly illustrates why
>>> consciousness can supervene on a stationary film (because it is
>>> stationary when viewed from the inside).
>>
>> OK, but then you clearly change the physical supervenience thesis.
>>
>
> How so? The stationary film is a physical object, I would have
> thought.
I don't understand this. The physical supervenience thesis associates
consciousness AT (x,t) to a computational state AT (x,t). The idea is
that consciousness can be "created" in real time by the physical
"running" of a computation (viewed or not in a block universe).
With the stationary film, this does not make sense. Alice's experience
of a dream is finite and short; the film lasts as long as you want. I
think I see what you are doing: you take the stationary film as an
incarnation of a computation in Platonia. In that sense you can
associate the platonic experience of Alice to it, but this is a
different physical supervenience thesis. And I argue that even this
cannot work, because the movie does not capture a computation. The
universal interpreter is lacking. It could even correspond to another
experience, if the graph was a movie of another sort of computer, for
example with NAND substituted for the NOR.
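The NAND/NOR remark has a concrete core: a recording of gate outputs alone can be consistent with two different machines. A small sketch of my own: NOR and NAND agree whenever a gate's two inputs are equal, so any recorded trace in which that happened to hold throughout is a "movie" of a NOR computer and of a NAND computer at once, even though the two gates compute different functions in general.

```python
def nor(a, b):
    """NOR gate on bits."""
    return 1 - (a | b)

def nand(a, b):
    """NAND gate on bits."""
    return 1 - (a & b)

# On equal inputs both gates act as an inverter, so a recorded trace
# with only equal-input events cannot distinguish the two machines...
ambiguous = all(nor(a, a) == nand(a, a) for a in (0, 1))

# ...yet on mixed inputs they differ, so the machines are distinct.
distinct = nor(0, 1) != nand(0, 1)
```

The movie by itself underdetermines which computation (if any) it is a movie of; the universal interpreter that is "lacking" is what would fix the reading.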
>
>
>>
>>> The "film", however does need
>>> to be sufficiently rich, and also needs to handle counterfactuals
>>> (unlike the usual sort of movie we see which has only one plot).
>>
>>
>> OK. Such a film could be said to be a computation. Of course you are
>> not talking about a stationary thing, which, be it physical or
>> immaterial, cannot handle counterfactuals.
>>
>
> If true, then a block universe could not represent the
> Multiverse. Maybe so, but I think a lot of people might be surprised
> at this one.
I am not sure I can give sense to an expression like "the multiverse
(or the block universe) can or cannot handle counterfactuals". They
have no inputs, nor outputs.
>> but no! Then we wouldn't have a substrate anymore.
> Oh! That is not true! We still have the projector and the film. We can
> project the movie in the air or directly in your eyes.
Ok I see now where our intuitions differ (always the problem with
thought experiment) - but maybe we can clear this up and see where it
leads...
> It is really something people have to meditate on. I could have concluded
> in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to take
> people seriously when they argue that the consciousness of Alice
> supervenes on a movie of her brain activity. There is no causality,
> nor computation, during the *projection* of the movie.
If that is how you see MAT (you require causality) - then I would also
agree -> MGA 2 shows absurdity.
> Alice's
> experience is related to ALL computations going through those states,
> not to descriptions of those states, which can be made and collected
> in other histories. Locally it makes sense to ascribe *that*
> consciousness when you have the means to interpret (through some
> universal machine) her computational states.
That is already part of your theory (UDA and all) (as I understand it),
but not included in COMP or in MAT themselves.
> [Consciousness of (x,t)] is never [physical states] at (x,t)
For me, the above expresses the essence of (naive) MAT -> let's call it
NMAT.
So, clearly:
NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)
And on physical states only! Not on the causal relations of these states
(block universe view).
Your argument goes like this:
> it is:
> [Consciousness of (x,t)] is always all computational states (in the
> UD*) corresponding to that experience. (It is an indexical view of
> reality).
And I share it IF we can show that MAT+MEC is inconsistent. But I am not
convinced yet.
For me, the essence of MEC (COMP) is this:
COMP: there is a level at which a person can be substituted at a digital
level (we don't have to go down to "infinity"), and where this digital
description is enough to reconstitute this person elsewhere and elsewhen,
independent of substrate.
NMAT additionally requires that the substrate for COMP be some
"mysterious" substance, and not only a platonic relation.
My intuition tells me this can't be -> we have to drop either MEC or NMAT.
But MGA 3, when dropping the boolean gates, violates NMAT, because:
NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)
And the relevant physical states were the _states of the boolean graph_
(the movie projector was just the lucky cosmic ray).
Do you have different definition for MAT? Do you require causal dynamics
for MAT?
The problem with NMAT as I define it raises the issue as in the Putnam
paper -> does every rock implement every finite state-automaton?
Chalmers makes the move to implementation, so introduces causal dynamics.
So, sophisticated MAT would probably be:
SMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)
over a timespan delta(t) _if_ sufficiently complex causal dynamics are
at work during this timespan relating the physical states.
Then I would say: MGA 2 (already) shows that SMAT+MEC are not
compatible. No need for MGA 3.
For NMAT+MEC (which is problematic for other reasons) MGA 3 is not
convincing.
Would you agree with this?
Cheers,
Günther
>
> Bruno,
>
> It sounds like what you are saying in this reply is that my version of
> COMP+MAT is consistent, but counter to your intuition (because you
> cannot see how consciousness could be attached to physical stuff).
I have no problem a priori in attaching consciousness to physical
stuff. I do have problem when MEC + MAT forces me to attach
consciousness to an empty machine (with no physical activity) together
with inert material.
> If
> this is the case, then it sounds like MGA only works for specific
> versions of MAT-- say, versions of MAT that claim consciousness hinges
> only on the matter, not on the causal relationships.
On the contrary. I want consciousness related to the causal
relationships. But with MEC the causal relationships are in the
computations. The thought experiment shows that the physical
implementation plays the role of making them able to manifest
relatively to us, but is not responsible for their existence.
> In other words,
> what Günther called NMAT. So you need a different argument against--
> let's call it CMAT, for causal MAT. The olympization argument only
> works if COMP+CMAT can be shown to imply the removability of inert
> matter... which I don't think it can, because that inert matter here
> has a causal role to play in the counterfactuals, and is therefore
> essential to the physical computation.
OK, so now you have to disagree with MGA 1. No problem. But would you
still say "yes" to the mechanist doctor? I don't see how, because now
you appeal to something rather magical, like the real-time influence of
inactive material. Or you are weakening the physical supervenience
thesis by appeal to a notion of causality which seems to me a bit
magical, and contrary to the local functionalism of the
computationalist.
The real question I have to ask you, Günther and others, is this
one: does your new supervenience thesis force the UD to be
physically executed in a "real" universe to get the UDA conclusion?
Does MGA, even just as a refutation of "naïve MAT", eliminate the use
of the concrete UD in UDA?
It is true that by weakening MEC or MAT, the reasoning doesn't go
through, but it seems to me the conclusion goes through with any
primitive-stuff view of MAT, or matter activity to which we could attach
consciousness through "causal" links. Once you begin to define matter
through causal links, while keeping comp, and linking the
experience to those causal relations, perhaps made at other times on
other occasions, you are not a long way from the comp supervenience.
But if you don't see this, I guess the conversation will continue.
Bruno
Ah, please, add the delta again (see my previous post). I did write
(dx,dt), but Anna thought it was infinitesimal. It could be fuzzy
deltas or whatever you want. Unless you attach your consciousness,
from here and now, to the whole block multiverse, the reasoning will
go through, assuming of course that the part of the multiverse on
which you attach your mind is Turing emulable (MEC).
>
>
>> The idea is
>> that consciousness can be "created" in real time by the physical
>> "running" of a computation (viewed or not in a block universe).
>
> Well we're pretty sure that brains do this.
Well, my point is that for believing this, you have to abandon the MEC
hypothesis, perhaps in a manner like Searle or Penrose. Consciousness
would be the product of some non Turing emulable chemical reactions.
But if everything in the brain (or the generalized brain) is Turing
emulable, then the reasoning (UDA+MGA) is supposed to explain why
consciousness (an immaterial thing) is related only to the computation
made by the brain, but not to the brain itself nor to its physical
activity during the physical implementation. Your locally physical
brain just makes it more probable that your consciousness
remains entangled with mine (and others').
>
>
>>
>> With the stationary film, this does not make sense. Alice's experience
>> of a dream is finite and short, the film lasts as long as you want. I
>> think I see what you are doing: you take the stationary film as an
>> incarnation of a computation in Platonia. In that sense you can
>> associate the platonic experience of Alice to it, but this is a
>> different physical supervenience thesis. And I argue that even this
>> cannot work, because the movie does not capture a computation.
>
> I was thinking along the same lines. But then the question is what
> does capture
> a computation. Where in the thought experiments, starting with
> natural Alice
> and ending with a picture of Alice's brain states, did we lose
> computation? Is
> it important that the sequence be time rather than space or some
> other order?
> Is it the loss of causal relations or counterfactuality?
We "lose a computation" relatively to us when the computation is not
executed by a stable (relatively to us) universal machine nearby, be
it a cell, a brain, or a natural or artificial universal computer.
In the case of the movie, it is not so bad. Consciousness does not
supervene on the movie or its projection, but the movie can be used as
a backup of Alice's state. We can re-project a frame of that movie
on a functionally well-working Boolean optical graph, and Alice will
be back ... with us.
Of course the computations themselves, and their many possible
differentiations, are already in Platonia (= in the solution of the
universal Diophantine equation, in the processing of the UD, or
perhaps in the Mandelbrot set).
Alice's brain and body are "just" local stable artifacts belonging to
our (most probable) computational history, making it possible for
Alice's consciousness to differentiate through interactions with us,
relatively to us.
Bruno
On 01 Dec 2008, at 22:53, Günther Greindl wrote:
>
> Hi Bruno,
>
>>> but no! Then we wouldn't have a substrate anymore.
>> Oh! That is not true! We still have the projector and the film. We
>> can
>> project the movie in the air or directly in your eyes.
>
> Ok I see now where our intuitions differ (always the problem with
> thought experiment) - but maybe we can clear this up and see where it
> leads...
OK.
>
>
>> It is really something people have to meditate on. I could have concluded
>> in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to
>> take
>> people seriously when they argue that the consciousness of Alice
>> supervenes on a movie of its brain activity. There is no causality,
>> nor computations, during the *projection* of the movie.
>
> If that is how you see MAT (you require causality) - then I would also
> agree -> MGA 2 shows absurdity.
Well, I require at least a minimum of physical causality to implement
physically the computational "causality" (which incarnates platonic
relations existing among numbers).
MAT presupposes something primitively material and causal, of course.
Remember that I am using materialism and physicalism (and naturalism)
as synonymous, because the argument is very general. The (naïve) idea
is that the brain *does* compute something when you dream, for
example, and that it is the physical causality which is responsible
for the implementation of the computation.
>
>
>> Alice's
>> experience is related to ALL computations going through those states,
>> not to descriptions of those states, which can be made and collected
>> in other histories. Locally it makes sense to ascribe *that*
>> consciousness when you have the mean to interpret (through some
>> universal machine) her computational states.
>
> That is already part of your theory (UDA and all) (as I understand
> it),
> but not included already in COMP or in MAT.
Not at all. This could be confusing for those who don't know UDA. Once
MEC+MAT is shown to be incompatible, we then choose MEC and thus
abandon MAT.
(Why? Just to stay within the range of my working hypothesis, ok.)
With MEC, there is no longer a physical supervenience thesis of a kind
compatible with MEC. But we keep MEC, so we have to continue to
relate consciousness with the computation, right? We no longer have
a notion of physical computation, so we attach consciousness to the
computation itself, LIKE it has already been done in the UDA, except
that we no longer need to run the UD.
>
>
>> [Consciousness of (x,t)] is never [physical states] at (x,t)
>
> For me, the above expresses the essence of (naive) MAT -> let's call
> it
> NMAT.
>
> So, clearly:
> NMAT: [Consciousness of (x,t)] supervenes on [physical states] at
> (x,t)
>
> And on physical states only! Not on the causal relations of these
> states
> (block universe view).
You are perhaps taking me too literally here. It is just
difficult, lengthy and confusing to make a precise definition of the
physical supervenience thesis which would work for the different views
of the universe.
The physical supervenience thesis just says that 1) there is a
physical universe, 2) it can compute, and consciousness requires some
special local computations made *in* that universe.
>
>
> Your argument goes like this:
>> it is:
>> [Consciousness of (x,t)] is always all computational states (in the
>> UD*) corresponding to that experience. (It is an indexical view of
>> reality).
>
> And I share it IF we can show that MAT+MEC is inconsistent. But I am
> not
> convinced yet.
>
> For me, the essence of MEC (COMP) is this:
>
> COMP: there is a level at which a person can be substituted at a
> digital
> level (we don't have to go down to "infinity"), and where this digital
> description is enough to reconsitute this person elsewhere and
> elsewhen,
> independent of substrate.
>
>
> NMAT additionally requires that the substrate for COMP be some
> "mysterious" substance, and not only a platonic relation.
Not so mysterious. It just seems to require some particular
computations: the "physical" ones. People are used to thinking about them
in terms of waves or particles, or fields, geometrical dynamical objects.
They believe those are particulars (which become mysterious only with
comp, but a priori with MAT they are rather "natural").
>
>
> My intuition tells me this can't be -> we have to drop either MEC or
> NMAT.
>
> But MGA 3, when dropping the boolean gates, violates NMAT, because:
> NMAT: [Consciousness of (x,t)] supervenes on [physical states] at
> (x,t)
>
> And the physical states relevant were the _states of the boolean
> graph_
> (the movie projector was just the lucky cosmic ray).
>
> Do you have different definition for MAT? Do you require causal
> dynamics
> for MAT?
MAT is very general, but indeed it requires the minimum amount of
causality so that we can implement a computation in the physical
world; if not, I don't see how we could talk of physical supervenience.
>
>
> The problem with NMAT as I define it raises the issue as in the Putnam
> paper -> does every rock implement every finite state-automaton?
>
> Chalmers makes the move to implementation, so introduces causal
> dynamics.
>
> So, sophisticated MAT would probably be:
> SMAT: [Consciousness of (x,t)] supervenes on [physical states] at
> (x,t)
> over a timespan delta(t) _if_ sufficiently complex causal dynamics are
> at work during this timespan relating the physical states.
No, this is MAT. But with COMP, if it happens that a boolean gate's or
neuron's *personal history* has to be taken into account, then by
definition of MEC it means we have not chosen correctly the level of
substitution.
>
>
>
> Then I would say: MGA 2 (already) shows that SMAT+MEC are not
> compatible.
All right then!
> No need for MGA 3.
OK.
>
>
> For NMAT+MEC (which is problematic for other reasons) MGA 3 is not
> convincing.
I think I have missed something.
>
>
> Would you agree with this?
I am not sure I understand the NMAT.
What MGA is supposed to do is to eliminate the hypothesis of the
concrete UD in the step seven of UDA. Do you think it does, or not
(yet)?
Best,
Bruno
OK, that clarifies things and it corresponds with my intuition that
consciousness is relative to an environment. I can't seem to answer the
question "is MG-Alice conscious?" with a "yes" or "no", but I can say she is
conscious within the movie environment, but not within our environment. This is similar
to Stathis asking about consciousness within a rock. We could say the thermal
motions of atoms within the rock may compute consciousness, but it is a
consciousness within the rock environment, not in ours.
Brent
All this is a bit complex because we have to take well into account
the distinction between
A computation in the "real" world,
A description of a computation in the "real" world,
And then most importantly:
A computation in Platonia
A description of a computation in Platonia.
I argue that consciousness supervenes on computation in Platonia. Even
in Platonia consciousness does not supervene on a description of the
computation, even if those descriptions are 100% precise and correct.
On 02 Dec 2008, at 20:33, Abram Demski wrote:
>
> Bruno,
>
> I am a bit confused. To me, you said
>
>> Or, you are weakening the physical supervenience
>> thesis by appeal to a notion of causality which seems to me a bit
>> magical, and contrary to the local functionalism of the
>> computationalist.
>
> This seems to say that the version of MAT that MGA is targeted at does
> not include causal requirements.
MAT is the usual idea that there is a physical world described through
physical laws. Those capture physical causality, generally under the
form of differential equations. If there were no causality in physics,
the very notion of physical supervenience would not make sense. Nor MEC
+MAT, at the start. Sorry if I have been unclear, but I was
criticizing only the *magical* causality which is necessary for
holding both the physical supervenience thesis and the mechanist
hypothesis, like the attribution of prescience to the neurons (in MGA 1),
or the attribution of a computational role to inert material.
>
>
> To Günther, you said:
>
>>> Do you have different definition for MAT? Do you require causal
>>> dynamics
>>> for MAT?
>>
>>
>> MAT is very general, but indeed it requires the minimum amount of
>> causality so that we can implement a computation in the physical
>> world, if not I don't see how we could talk on physical
>> supervenience.
>
> Does the MAT you are talking about include causal requirements or not?
Of course.
>
>
> About your other questions--
>
>> OK, so now you have to disagree with MGA 1. No problem. But would you
>> still say "yes" to the mechanist doctor? I don't see how, because
>> now
>> you appeal to something rather magic like influence in real time of
>> inactive material.
>
> So long as that "inert" material preserves the correct
> counterfactuals, everything is fine. The only reason things seem
> strange with olympized Alice is because *normally* we do not know in
> advance which path cause and effect will take for something as
> intricate as a conscious entity. The air bags in a car are "inert" in
> the same way-- many cars never get in a crash, so the air bags remain
> unused. But since we don't know that ahead of time, we want the air
> bags. Similarly, when talking to the mechanist doctor, I will not be
> convinced that a recording will suffice...
Me too. But that remark is out of the context of the argument. If I
want an artificial brain (MEC) I expect it to handle the
counterfactuals, because indeed we don't know things in advance. But
in the context of the proof we were in a situation where we did know
the things in advance. Suppose that my doctor discovers in my brain
some hardware built for managing my behavior only in front of
dinosaurs, like an old unused subroutine that is only a relic of the
past; then it seems to me that the doctor, in the spirit of mechanist
functionalism, can decide to drop that subroutine and build me
a cheaper artificial brain. And that is all we need for the argument
to go through. Consciousness relies on the computation, which
always kept the right counterfactuals, and never on its relative
implementations, which will only change its relative measures.
>
>
>> The real question I have to ask to you, Günther and others is this
>> one: does your new supervenience thesis forced the UD to be
>> physically executed in a "real" universe to get the UDA conclusion?
>
> Yes.
Then it seems to me you are relying on some magical causality attached
to a magical notion of matter. I don't understand how you can still
say yes to a doctor with such a notion of mechanism. See above. I
would no longer even trust a Darwinian brain.
>
>
>> Does MGA, even just as a refutation of "naïve mat" eliminate the use
>> of the concrete UD in UDA?
>
> No.
>
> (By the way, I have read UDA now, but have refrained from posting a
> commentary since there has been a great deal of discussion about it on
> this list and I could just be repeating the comments of others...)
Then you can read the answers I have given to the others. It seems to
me UDA(1..7) no longer poses any problem, except for those who have
decided not to understand, or who believe "religiously" in both matter
and comp. In a public forum you always end up discussing with those
who like splitting hairs.
>
>
> Also: Günther mentioned "SMAT", which actually sounds like the "CMAT"
> I proposed... so I'll refer to it as SMAT from now on.
I am sorry if I have been unclear, but MAT is taken in a very large
sense. MAT is the belief in a physical universe obeying physical laws,
be it quantum, classical, or whatever. Actually, for a
computationalist (especially after UDA+MGA), MAT seems to be just a
way to single out some special computations above the others.
Kim Jones has convinced me to explain UDA, and the general idea, one
more time. It could be an opportunity to let us know your commentaries. To
be sure, some mathematicians get the point more easily when I
introduce the arithmetical translation of the UDA. You can study it in
my papers, or wait until I explain this also on the list, or search the
list archive for the explanation of the AUDA that I have already given. Of
course mathematicians are a minority here and I try not to overtax
the list's patience. What is subtle in MGA is a bit subtler in
AUDA, too, but then it relies at least on non-controversial facts in
Mathematical Logic, which are unfortunately not well known nor well
taught. Except among logicians, logic is not very well known, even by
mathematicians.
....
Alice's brain and body are "just" local stable artifacts belonging to our (most probable) computational history, and making it possible for Alice's consciousness to differentiate through interactions with us, relatively to us.

Bruno
On Sun, Nov 30, 2008 at 11:33 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:
All this is a bit complex because we have to take well into account
the distinction between
A computation in the "real" world,
A description of a computation in the "real" world,
And then most importantly:
A computation in Platonia
A description of a computation in Platonia.
I argue that consciousness supervenes on computation in Platonia. Even
in Platonia consciousness does not supervene on a description of the
computation, even if those descriptions are 100% precise and correct.

Bruno, this is interesting, and I have had similar thoughts of late along this vein. The trouble is, I don't see how the "real" world can be differentiated from Platonia.
Just as the UD contains instances of itself, and hence computations within computations,
can't mathematical objects contain mathematical objects?
If so, then aren't our actions in this universe just as mathematically or computationally fundamental as any other instantiation in Platonia?
Platonia might be highly interconnected, even fractal, and so performing a computation in this universe in a sense hasn't created anything new, but created a link to other identical things which have always been there; and in the timelessness of Platonia one can't say which came before, or which is the original or most real.
After wrestling with block time, the MGA, and computationalism I'm starting to wonder how computations are implemented in a 4 dimensional and static mathematical object.
The best I can come up with is that the mathematical structure is defined by some equation or equations,
and that by virtue of this imposed order, defines relations between particles. Computation depends on relations, be it electrons in silicon, Chinese with radios or a system of beer cans and ping-pong balls;
from the outside there is little or no indication that what is going on is forming consciousness; it is only relative from the inside. And since these relations carry state and information across one of the 4 dimensions of the universe, we end up with DNA and brains which record and process information in sequence, or so it appears to us, being trapped within this equation defined in Platonia.
In the case of a movie in this physical world, no mathematical equation defines the progression between frames and there is no conveyance of information; the alteration of one frame does not affect any other, which would not be the case, nor possible, with a timeless mathematical object.
This seems to assume there is causality apart from physical causality, but there
is no causality in logic or mathematics (except in a metaphorical, I might say
"magical", sense). So I don't see that Gunther is relying on anything magical.
Brent
Le 03-déc.-08, à 17:20, Jason Resch a écrit :
> On Wed, Dec 3, 2008 at 9:53 AM, Bruno Marchal <mar...@ulb.ac.be>
> wrote:
>>
>>> and that by virtue of this imposed order, defines relations between
>>> particles. Computation depends on relations, be it electrons in
>>> silicon, Chinese with radios or a system of beer cans and ping-pong
>>> balls;
>>
>>
>> Here you are talking about instantiations of computations relatively
>> to our most probable computations, which have a physical "look". But
>> strictly speaking computations are only relation between numbers.
>>
>
> Bruno,
>
> Thanks for your reply, I am curious what exactly you mean by the most
> probable computations going through our state if these computations
> cannot be part of a larger (shared universe) computation.
Hmmm... It means you still have a little problem with step seven. I
wish we shared a computable environment, but we cannot decide this at
will. I agree we have empirical evidence that there is such a (partially)
computable environment, and I am willing to say I trust nature for
this. Yet, the fact is that to predict my next first person experience
I have to take into account ALL computations which exist in the
arithmetical "platonia" or in the universal dovetailing.
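The universal dovetailing mentioned here can be illustrated with a toy program. This is a minimal sketch under loud assumptions: the "programs" are stand-in endless counters, not an enumeration of all Turing machines as in the real UD. The point it shows is only the interleaving trick: every program, halting or not, eventually receives arbitrarily many steps.

```python
def program(i):
    """Toy 'program' number i: an endless counter emitting (i, step).
    A real UD would run the i-th Turing machine here instead."""
    n = 0
    while True:
        yield (i, n)
        n += 1

def dovetail(rounds):
    """Dovetail: each round, admit one new program, then advance
    every admitted program by one step. No single non-halting
    program can starve the others."""
    running = []
    trace = []
    for r in range(rounds):
        running.append(program(r))   # admit program r
        for p in running:            # one step for each admitted program
            trace.append(next(p))
    return trace

# After 3 rounds: program 0 has run 3 steps, program 1 two, program 2 one.
# dovetail(3) -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The diagonal schedule is the whole idea: the UD generates all computations "in the limit" without ever having to wait for any single one to finish.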
> Where does the data provided to the senses come from if not from a
> computation which also includes that of the environment as well?
You don't know that. The data and their statistics come from all
computational histories going through my state. The game is to take
completely seriously the comp hyp, and if it contradicts the facts, we will
abandon it. But that day has not yet come... Until then we have to
derive the partial computability of our observable environment from a
statistic on all computations made by the UD.
> Also, why does the computation have to be between numbers specifically,
They don't. Sometimes I use the combinators. They have to be finite
objects, and this comes from the *digital* aspect of the comp. hyp.
> could a program in the deployment that calculates the evolution of a
> universe
This is something you have to define. If you do it, I bet you will find
a program equivalent to a universal dovetailer, a bit like Everett's
universal quantum wave.
> perform the necessary computations to generate an observer?
Sure. The problem is that there will be an infinity of programs
generating the same observer, in the same state, and the observer
cannot know to which computations it belongs. Never? Measurement
particularizes, but never gets singular.
> If they can, then it stands to reason that other mathematical objects besides pure
> turing machines and besides the UD could implement computations
> capable of generating observers.
Not really. Those objects are internal constructions made by programs
relative to their most probable history.
> I noticed in a previous post of yours you mentioned 'Kleene
> predicates' as a way of deriving computations from true statements, do
> you know of any good sources where I could learn more about Kleene
> predicates?
A very good introduction is the book by N.J. Cutland. See the reference
in my thesis. There are other books. I will think about making a list with
some comments. Actually I really love Kleene's original "Introduction
to Metamathematics", but the notations used are a bit old fashioned.
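For readers hunting for the "Kleene predicates" mentioned above, the standard statement (found in textbooks such as Cutland's) is Kleene's normal form theorem; how Bruno deploys it is of course his own affair, but the theorem itself is:

```latex
% Kleene's T predicate: T(e, x, s) holds iff s encodes a halting
% computation of the program with index e on input x, and U(s)
% extracts the output from such an encoding. Both are primitive
% recursive, and every partial computable function satisfies
\varphi_e(x) \simeq U\!\bigl(\mu s\,.\, T(e, x, s)\bigr)
% where \mu s denotes the least such s (undefined when none exists).
```

This is how computation, a seemingly dynamical notion, can be recovered from static arithmetical truths, which is presumably the role it plays in deriving computations from true statements.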
Hope I am not too short. I am a bit busy today,
Best,
Bruno
I try to single out where you depart from the comp hyp, to focus on the
essential. I could add comments later on other paragraphs of your
posts.
Le 03-déc.-08, à 19:22, Brent Meeker a écrit :
> But there is causality. The sequence of events in the movie are
> directly caused
> by the projector, but they have a causal linkage back to Alice and the
> part of
> her environment that is captured in the movie. I see no principled
> reason to
> consider only the immediate cause and not refer back further in the
> chain of
> causation.
If this were true, I don't see why I could say yes to a doctor for an
artificial brain. I would have to take account of the "traceability" of all
parts of the artificial brain. You have a problem with the "qua
computatio" part of the MEC+MAT hypotheses, I think.
This is coherent with the fact that you still have some shyness about
step six, if I remember well. There will be opportunity to come
back to this.
I have to go now.
Bruno
PS Abram. I think I will have to meditate a bit longer on your
(difficult) post. You may have a point (hopefully only pedagogical :)
A little bit more commentary may be in order then... I think my point
may be halfway between pedagogical and serious...
What I am saying is that people will come to the argument with some
vague idea of which computations (or which physical entities) they
pick out as "conscious". They will compare this to the various
hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
"Lucky Alice is conscious", "Lucky Alice is not conscious", et
cetera... These notions are necessarily 3rd-person in nature. It seems
like there is a problem there. Your argument is designed to talk about
1st-person phenomena.
If a 1st-person-perspective is a sort of structure (computational
and/or physical), what type of structure is it? If we define it in
terms of behavior only, then a recording is fine. If we define it in
terms of inner workings, then a recording is probably not fine, but we
introduce "magical" dependence on things that shouldn't matter to
us... ie, we should not care if we are interacting with a perfectly
orchestrated recording, so long as to us the result is the same.
It seems like this is independent of the differences between
pure-comp / comp+mat.
--Abram
This point rather depends on what Platonia contains. If it contains
all sets of cardinality 2^{\aleph_0}, then the inside view of the
deployment will be contained in it.
I do understand that your concept of Platonia (Arithmetic Realism I
believe you call it) is a Kronecker-like "God made the integers, all
the rest was made by man", and so what you say would be true of that.
Cheers
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
> "Yes, consciousness supervenes on computation, but that computation
> needs to actually take place (meaning, physically). Otherwise, how
> could consciousness supervene on it? Now, in order for a computation
> to be physically instantiated, the physical instantiation needs to
> satisfy a few properties. One of these properties is clearly some sort
> of isomorphism between the computation and the physical instantiation:
> the actual steps of the computation are represented in physical form.
> A less obvious requirement is that the physical computation needs to
> have the proper counterfactuals: if some external force were to modify
> some step in the computation, the computation must progress according
> to the new computational state (as translated by the isomorphism)."
So if you destroy the counterfactual behaviour by removing components
that are not utilised, you end up with a recording-equivalent, which
isn't conscious. But what if you destroy the counterfactual behaviour
by another means? For example, if I wear a device that will instantly
kill me if I deviate from a particular behaviour, randomly determined
by the device from moment to moment, but survive, will my
consciousness be diminished as a result? You might say, no, because if
the device were not there I would have been able to handle the
counterfactuals. But then it might also be argued for the first
example that if the unused components had not been removed, the
recording-equivalent would also have been able to handle the
counterfactuals; and you can make this more concrete by having the
extra machinery waiting to be dropped into place in a counterfactual
universe.
--
Stathis Papaioannou
On Thu, Dec 4, 2008 at 5:19 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

Hmmm... It means you have still a little problem with step seven. I
wish we share a computable environment, but we cannot decide this at
will. I agree we have empirical evidence that here is such (partially)
computable environment, and I am willing to say I trust nature for
this. Yet, the fact is that to predict my next first person experience
I have to take into account ALL computations which exist in the
arithmetical "platonia" or in the universal dovetailing.
Bruno, I am with you that none of us can decide which of the infinite number of histories contain/compute us; when I talk about a universe I refer to just a single such history.
Perhaps you use history to refer only to the computational history that implements the observer's mind where I use it to mean an object which computes the mind of one or more observers in a consistent and fully definable way.
What I am not clear on, with regard to your position, is whether you believe most observers (if we could locate them in Platonia from a 3rd person view) exist in environments larger than their brains, likely containing numerous other observers; or whether you believe the mind is the only thing reified by computation, and that it is meaningless to discuss the environments they perceive because they don't exist.
The way I see it, using the example of this physical universe only, it is far more probable for a mind to come about from the self-ordering properties of a universe such as this than for there to be a computation where the mind is an initial condition. The program that implements the physics of this universe is likely to be far smaller than the program that implements our minds, or so my intuition leads me to believe.
>
> On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:
>>
>> I really don't know. I expect that the mathematical structure, as
>> seen
>> from inside, is so big that Platonia can have it neither as
>> element
>> nor as subpart. (Ah, well, I am aware that this is counter-intuitive,
>> but here mathematical logic can help to see the consistency, and the
>> quasi necessity with formal version of comp).
>>
>
> This point rather depends on what Platonia contains. If it contains
> all sets of cardinality 2^{\aleph_0}, then the inside view of the
> deployment will be contained in it.
I am not sure. In my opinion, to have a Platonia capable of describing
the first person views emerging from the UD's entire work, even the
whole of Cantor's Paradise will be too little. Even big cardinals (far
bigger than 2^(aleph_0)) will be like too-constrained shoes. Actually
I believe that the first person views raised through the deployment
just escape the whole of human conceivable mathematics. It is big. But
it is also structured. It could even be structured as a person. I
don't know.
>
>
> I do understand that your concept of Platonia (Arithmetic Realism I
> believe you call it) is a Kronecker-like "God made the integers, all
> the rest was made by man", and so what you say would be true of that.
Yes, the 3-Platonia can be very little, once we assume comp. But the
first person view inside could be so big that eventually all notion of 1-
Platonia will happen to be inconsistent. It is for sure unnameable (in
the best case). I discussed this a long time ago with George Levy: the
first person plenitude is big, very big, incredibly big. Nothing can
express or give an idea of that bigness.
At some point I will explain that the "divine intellect" of a lobian
machine as simple as Peano-Arithmetic is really far bigger than the
"God" of Peano-Arithmetic. I know it is bizarre (and a bit too
technical for being addressed right now I guess).
Have a good day,
Bruno
>
>> PS Abram. I think I will have to meditate a bit longer on your
>> (difficult) post. You may have a point (hopefully only pedagogical :)
>
> A little bit more commentary may be in order then... I think my point
> may be halfway between pedagogical and serious...
>
> What I am saying is that people will come to the argument with some
> vague idea of which computations (or which physical entities) they
> pick out as "conscious". They will compare this to the various
> hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
> "Lucky Alice is conscious", "Lucky Alice is not conscious", et
> cetera... These notions are necessarily 3rd-person in nature. It seems
> like there is a problem there. Your argument is designed to talk about
> 1st-person phenomena.
The whole problem consists, assuming hypotheses, in relating 1-views
with 3-views.
In UDA, the 1-views are approximated by 1-discourses (personal diary
notes, memories in the brain, ...). But I do rely on the minimal
intuition needed to give sense to the willingness of saying "yes" to a
digitalist surgeon, and the belief in a comp survival, or a belief in
the unchanged feeling of "my" consciousness in such annihilation-
(re)creation experiences.
>
>
> If a 1st-person-perspective is a sort of structure (computational
> and/or physical), what type of structure is it?
The surprise will be: there is none. The 1-views of a machine will
appear to be already inexpressible by the machine. The first and
third God have no name. Think about Tarski's theorem in the comp
context. A sound machine cannot define the whole notion of "truth
about me".
> If we define it in
> terms of behavior only, then a recording is fine.
We certainly avoid the trap of behaviorism. You can see this as a
weakness, or as the full strong originality of comp, as I define it.
We give some sense, albeit undefined, to the word "consciousness"
apart from any behavior. But to reason we have to assume some relation
between consciousness and possible discourses (by machines).
> If we define it in
> terms of inner workings, then a recording is probably not fine, but we
> introduce "magical" dependence on things that shouldn't matter to
> us... ie, we should not care if we are interacting with a perfectly
> orchestrated recording, so long as to us the result is the same.
>
> It seems like this is independent of the differences between
> pure-comp / comp+mat.
This is not yet quite clear for me. Perhaps, if you are patient
enough, you will be able to clarify this along the UDA reasoning which
I will do slowly with Kim. The key point will be the understanding of
the ultimate conclusion: exactly like Everett can be said to justify
correctly the phenomenal collapse of the wave, if comp is assumed, we
have to justify in a similar way the wave itself. Assuming comp, we
put ourselves in a position where we have to explain why numbers
develop stable and coherent beliefs in both mind and matter. We can
presuppose neither matter nor mind, eventually, except our own
consciousness, although even consciousness will eventually be reduced
to our "belief in numbers".
Bruno
This seems to be getting away from the simple requirement that the
computer be able to handle counterfactuals. What if the device were
not easy to disarm, but almost impossible to disarm? What if it had
tentacles in every neurone, ready to destroy it if it fired at the
wrong time?
> A related way out would be to point out that all the computational
> machinery is present in one case (merely disabled), whereas it is
> totally absent in the other case.
So you agree that in the case where the extra machinery is waiting to
be dropped into place, consciousness results?
--
Stathis Papaioannou
Yes, there are these differences, but why should the differences be
relevant to the question of whether consciousness occurs or not? And
what about the case where the extra machinery that would allow the
right sort of causal structure but isn't actually used in a particular
situation is temporarily disengaged?
It seems to me that everyone contributing to these threads has an
intuition about consciousness, then works backwards from this:
"obviously, recordings aren't conscious; now what are the qualities
that recordings have which distinguish them from entities that are
conscious?". There's nothing intrinsically wrong with this method, but
it is possible to reach an impasse when the different parties have
different intuitions.
--
Stathis Papaioannou
Exactly so. Consciousness is probably not the unified thing that we
intuitively assume anyway. There was an article in the newspaper today
reporting that Henry Molaison had died. He had lived some 50 years with
profound amnesia after an operation on his brain to cure severe
seizures. He apparently
could not form new memories. But that only applied to verbal, i.e.
"conscious" memories. He could learn new tasks in the sense that he
improved with practice even though if asked he would say he'd never done
the task before.
Brent Meeker
Bruno,
Yes, I think there is a big difference between making an argument more
detailed and making it more understandable. They can go together or be
opposed. So a version of the argument targeted at my complaint might
not be good at all pedagogically...

I would be pleased if you can give me a version of MAT or MEC to which the argument does not apply. For example, the argument applies to most transfinite variants of MEC. It does not apply when some "magic" is introduced in MAT, and MAT is hard to define in a way to exclude that magic. If you can help, I thank you in advance.
My particular brand of "magic" appears to be a requirement of
counterfactual/causal structure that reflects the
counterfactual/causal structure of (abstract) computation.
Stathis has
pointed out some possible ways to show such ideas incoherent (which I
am not completely skeptical of, despite my arguments).
Since this type
of theory is the type that matches my personal intuition, MGA will
feel empty to me until such alternatives are explicitly dealt a
killing blow (after which the rest is obvious, since I intuitively
feel the contradiction in versions of COMP+MAT that don't require
counterfactuals).
Of course, as you say, you'd be in a hard spot if you were required to
deal with every various intuition that anybody had... but, for what
it's worth, that is mine.
Destructive phenomena do occur. To see this, realise that an infinite
set of histories will correspond to a given logical statement. Two
inconsistent statements can be combined disjunctively (A or B),
and their conjunction is false. Such a disjunction corresponds to the
union of the two sets of histories consistent with each statement. The
intersection of these sets of histories is, of course, empty.
So the measure of the histories consistent with A or B is now just
given by the sum of the measures of the two individual
statements. Since the information is given by the negative logarithm of these
measures, we see that the information of A or B is less than that of
either A or B taken separately. Information has been destroyed by
taking the inconsistent statements together.
It is this "triangle inequality" nature of information that gives rise
to the vector space structure in quantum mechanics.
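A quick numerical check of the destruction argument above, with made-up measures (the values 0.2 and 0.3 are purely illustrative, and base-2 logarithm is an arbitrary choice of unit):

```python
import math

# Illustrative measures for two mutually inconsistent statements A and B.
# Their history sets are disjoint, so the measure of (A or B) is the sum.
mu_A, mu_B = 0.2, 0.3
mu_A_or_B = mu_A + mu_B

def information(mu):
    """Information as the negative logarithm (here base 2) of a measure."""
    return -math.log2(mu)

I_A = information(mu_A)          # about 2.32 bits
I_B = information(mu_B)          # about 1.74 bits
I_A_or_B = information(mu_A_or_B)  # exactly 1 bit

# The disjunction carries less information than either statement alone:
# taking inconsistent statements together destroys information.
assert I_A_or_B < I_A and I_A_or_B < I_B
```

Because the measure of the union is the sum of the measures, the negative log of the union is always below the negative log of each part, which is the "triangle inequality" flavour referred to in the text.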
Michael Lockwood distinguishes between materialism (consciousness
supervenes on the physical world) and physicalism (the physical world
suffices to explain everything). The difference between the two is
that in physicalism, consciousness (indeed any emergent phenomenon) is
a mere epiphenomenon, a computational convenience, not necessary for
explanation, whereas in non-physicalist materialism, there are emergent
phenomena that are not explainable in terms of the underlying physics,
even though supervenience holds. This has been argued in Philip
Anderson's famous paper "More Is Different". One very obvious distinction between
the two positions is that strong emergence is possible in materialism,
but strictly forbidden by physicalism. An example I give of strong
emergence in my book is the strong anthropic principle.
So - I'm convinced your argument works to show the contradiction
between COMP and physicalism, but not so the more general
materialism. I think you have confirmed this in some of your previous
responses to me in this thread.
Which is just as well. AFAICT, supervenience is the only thing
preventing the Occam catastrophe. We don't live in a magical world,
because such a world (assuming COMP) would have so many contradictory
statements that we'd disappear in a puff of destructive logic!
(reference to my previous posting about destructive phenomena).
>
> On Sat, Dec 06, 2008 at 03:32:53PM +0100, Bruno Marchal wrote:
>>
>> I would be pleased if you can give me a version of MAT or MEC to
>> which
>> the argument does not apply. For example, the argument applies to
>> most
>> transfinite variant of MEC. It does not apply when some "magic" is
>> introduced in MAT, and MAT is hard to define in a way to exclude that
>> magic. If you can help, I thank you in advance.
>>
>> Bruno
>>
>
> Michael Lockwood distinguishes between materialism (consciousness
> supervenes on the physical world) and physicalism (the physical world
> suffices to explain everything). The difference between the two is
> that in physicalism, consciousness (indeed any emergent phenomenon) is
> mere epiphenomena, a computational convenience, but not necessary for
> explanation, whereas in non-physicalist materialism, there are
> emergent
> phenomena that are not explainable in terms of the underlying physics,
> even though supervenience holds.
In what sense are they emergent? They emerge from what?
> This has been argued in the famous
> paper by Philip Anderson. One very obvious distinction between
> the two positions is that strong emergence is possible in materialism,
> but strictly forbidden by physicalism. An example I give of strong
> emergence in my book is the strong anthropic principle.
>
> So - I'm convinced your argument works to show the contradiction
> between COMP and physicalism, but not so the more general
> materialism.
I don't see why. When I state the supervenience thesis, I explain that
the type of supervenience does not play any role, be it a causal
relation or an epiphenomenon.
> I think you have confirmed this in some of your previous
> responses to me in this thread.
>
> Which is just as well. AFAICT, supervenience is the only thing
> preventing the Occam catastrophe. We don't live in a magical world,
> because such a world (assuming COMP) would have so many contradictory
> statements that we'd disappear in a puff of destructive logic!
> (reference to my previous posting about destructive phenomena).
I don't really understand. If such an argument is correct, how could
classical logic not be quantum-like? The problem of the white rabbits
is that they are consistent. Your explanation would make the world
quantum or not independently of the "degree of independence" of the
computational histories. Observation would not make a logic classical,
as is the case in QM.
Bruno
Thanks for the references.
--Abram
ps- it is final exam crunch time, so I haven't been checking email so
much as usual... I may get around to more detailed replies et cetera
this weekend or next week.
> Press, London.
> Boolos, G. (1993). The Logic of Provability. Cambridge University Press, Cambridge.
> Smoryński, P. (1985). Self-Reference and Modal Logic. Springer Verlag, New York.
> Smullyan, R. (1987). Forever Undecided.
I don't think it implies it, but it is certainly possible. Emergence
is possible with just two incommensurate levels.
Cheers
They emerge from the underlying physics (or chemistry, or whatever the
syntactic layer is). Supervenience is AFAICT nothing other than the
concept of emergence applied to consciousness. In many respects the
two could be considered synonymous.
>
>
> > This has been argued in the famous
> > paper by Philip Anderson. One very obvious distinction between
> > the two positions is that strong emergence is possible in materialism,
> > but strictly forbidden by physicalism. An example I give of strong
> > emergence in my book is the strong anthropic principle.
> >
> > So - I'm convinced your argument works to show the contradiction
> > between COMP and physicalism, but not so the more general
> > materialism.
>
> I don't see why. When I state the supervenience thesis, I explain that
> the type of supervenience does not play any role, be it a causal
> relation or an epiphenomenon.
>
In your Lille thesis (sorry I still haven't read your Brussels thesis)
you say at the end of section 4.4.1 that SUP-PHYS supposes at minimum
a concrete physical world. I don't see how this follows at all from
the concept of supervenience, but I accept that it is necessary for
(naive) physicalism.
>
> > I think you have confirmed this in some of your previous
> > responses to me in this thread.
> >
> > Which is just as well. AFAICT, supervenience is the only thing
> > preventing the Occam catastrophe. We don't live in a magical world,
> > because such a world (assuming COMP) would have so many contradictory
> > statements that we'd disappear in a puff of destructive logic!
> > (reference to my previous posting about destructive phenomena).
>
>
> I don' really understand. If such argument is correct, how could
> classical logic not be quantum like. The problem of the white rabbits
> is that they are consistent.
Sorry, to be clear - the white rabbits themselves are consistent, and
also quite rare (ie improbable). However, they also tend to come
in "equal and opposite" (ie contradictory) forms, so when combined they
contribute to the measure of a non-magical world. That is an
information-destructive phenomenon.
As for logic, each individual observer sees a world according to
classical logic. Only by quantifying over multiple observers does
quantum logic come into play. This is a key point I make on page 219
of my book. I'm sorry I haven't found the best way to express the
argument yet - it really is quite subtle. I know Youness had
difficulties with this aspect as well.
I apologise - I have been speaking in coded sentences which require a
good deal of unpacking if you are unfamiliar with the concepts. But I'm in
good company here...