The second step of the MGA consists in making a change to MGA 1 so
that we no longer have to introduce that unreasonable amount of cosmic
luck, or apparent randomness. It shows that the "lucky" aspect of the
incoming information is not relevant. Jason anticipated this sequel.
Let us consider Alice again, who, as you know, has an artificial
brain made of logic gates.
Now Alice is sleeping and having a dream---like Carroll's original
Alice.
Today we know that a REM dream is a conscious experience, or better,
an experience of consciousness, thanks to the work of Hearne, LaBerge,
Dement, etc.
Malcolm's theory of dreams, according to which dreams are not
conscious, has been properly refuted by Hearne's and LaBerge's
experiments. (All references can be found in the bibliography of my
"long thesis". Ask me if you have trouble finding them.)
I am using a dream experience instead of a waking experience so as to
have fewer technical problems and to be shorter on the relevant
points. I leave the change as an exercise if you want. If you have
understood the UDA up to the sixth step, such a change is easy to do.
To convince Brent Meeker, you would have to put the environment, or
rather its digital functional part, in the "generalized brain",
making the general setting much longer to describe. (If the part of
the environment needed for consciousness to proceed is not Turing
emulable, then you already negate MEC, of course.)
The dream will facilitate the experiment. It is known that in a REM
dream we are paralyzed (no outputs) and cut off from the environment
(no inputs; well, not completely, or we would not hear the alarm
clock, but let us not worry about this, or do the exercise above),
... and we are hallucinating: the dream is a natural sort of video
game. It shows that the brain is at least a "natural" virtual reality
generator. OK?
Alice already has an artificial digital brain. It consists of a
three-dimensional boolean graph whose nodes are NOR gates and whose
edges are wires. For the MEC+MAT believer, the dream is produced by
the physical activity of the "circular digital information
processing" done by that boolean graph.
With MEC, obviously all that matters is that the boolean graph
performs the right computation, and we don't have to take into
account the precise positions of the gates in space. They are not
relevant for the computation (if things like that were relevant, we
would already have said "no" to the doctor). So we can topologically
deform Alice's boolean-graph brain and project it onto a plane so that
no gates overlap. Some wires will cross, but (exercise) the
wire-crossing function can itself be implemented with NOR gates. (A
solution to that problem, posed by Dewdney, was given in Scientific
American, and is displayed in "Conscience et Mécanisme" with the
reference.)
So Alice's brain can be made into a planar boolean graph.
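The wire-crossing exercise can be checked mechanically. Here is a minimal sketch in Python (my own illustration of the standard construction, not Dewdney's published layout): XOR is built from five NOR gates, and a planar crossover from three XORs.

```python
def nor(a, b):
    """NOR gate: the single primitive of Alice's brain."""
    return 0 if (a or b) else 1

def xor(a, b):
    """XOR built from five NOR gates (a standard construction)."""
    n1 = nor(a, b)
    n2 = nor(a, n1)     # = (not a) and b
    n3 = nor(b, n1)     # = a and (not b)
    n4 = nor(n2, n3)    # = XNOR(a, b)
    return nor(n4, n4)  # invert to get XOR

def crossover(a, b):
    """Planar wire crossing: swaps two signals using only NOR-built
    XOR gadgets, so the layout needs no physically crossing wires."""
    m = xor(a, b)
    return xor(m, a), xor(m, b)   # = (b, a)

# The crossover really swaps the signals for all four input pairs:
for a in (0, 1):
    for b in (0, 1):
        assert crossover(a, b) == (b, a)
```

The trick is that `crossover(a, b)` returns `(b, a)` using only gates laid out in the plane, so the two signals are exchanged without any wire crossing another.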
Also, a MEC+MAT believer should not insist on the electrical nature
of the communication along the wires, nor on the electrical nature of
the processing of the information by the gates, so we can use optical
information instead. Laser beams play the role of the wires, and
destructive interference can be used for the NOR. The details are not
relevant, given that I am not presenting a realistic experiment.
(Later, if people harass me with too many engineering questions, I
will propose a completely different representation of the same
situation (the same with respect to the relevance of the reasoning),
using the even less realistic Ned Block Chinese People Computer: it
can be used to make clear that no magic is used in what follows, at
the price that its overall implementation is very unrealistic, given
that the neurons are Chinese citizens willingly playing that role.)
So now we put Alice's brain, which has become a two-dimensional
optical boolean graph, between two planes of transparent solid
material, say glass, and we add a sort of "clever" liquid crystal
together with the graph, between the glass plates. The liquid crystal
is supposed to have the following peculiar property (certainly hard
to implement concretely, but possible in principle): each time a beam
of light triggers a line between two nodes, it triggers a laser beam
in the right direction between the two optical gates, with the
correct frequency/color (so as to preserve the functioning of the NOR).
This works well, and we can let that brain run from time t1 to t2,
during which Alice dreams, specifically (to fix matters), that she is
in front of a mushroom, talking with a caterpillar who sits on the
mushroom (all right?). We have beforehand saved the instantaneous
state corresponding to the beginning of that dream, so as to be able
to repeat that precise graph activity.
Each time we let the graph do the computation corresponding to the
dream (which exists by MEC), the believer in MAT, who believes in the
physical supervenience thesis, has to admit that Alice is conscious,
in the sense of having the conscious experience of her (non-lucid)
dream: she feels herself talking with a caterpillar, for example.
Now we film that active graph, with a high resolution camera.
As you have most probably already guessed, that film constitutes our
home-made "lucky cosmic explosion" generator, corresponding to
Alice's dream experience.
So let us suppose that poor Alice got, again, a not very good optical
planar graph, so that some NOR gates (from one, to many, to all,
again) break down during that precise computation corresponding to
her dream experience. And let us project, in real time, with the
correct scaling, the movie we have made onto the graph, where it
plays the role of a repeatable lucky-rays generator.
If Alice remains conscious in MGA 1, through MEC and MAT, Alice
remains conscious in this setting too, all right?
In the ALL-gates-broken case, we really have *only a movie* of
Alice's brain activity. Does consciousness arise from the projection
of that movie?
Should a believer in MEC+MAT believe that?
Bruno
I think yes. Although one might quibble about the "arise" part. The specific
pattern, which was not generated but merely copied by the movie, arose from
Alice's life experience - with mushrooms, caterpillars, etc. Its
meaning/interpretation comes from the external world.
Brent
Is the movie causally interacting with the gates? In other words, is
the light from the movie projector stimulating gates when the lasers
fail to?
> In the ALL gates broken case, we have really, *only a movie* of
> Alice's brain activity. Does consciousness arise from the projection
> of that movie?
Once again, is the movie supposed to be triggering any working
machinery in the graph? Or could you just as easily project it
somewhere else at that point?
-- Kory
I tend to think our universe is finite. The multiverse, that's another
story... But even in an infinite universe, we could have a finite
consciousness computation that could potentially depend on any part of
the state of the universe, without it being possible to know a priori
which part. Someone could always argue that playing the film failed to
recreate consciousness because you left a certain part of the universe
out. I don't actually believe any of this to be the case; I'm just
playing devil's advocate...
> Also, if all my consciousness depends on all the universe, then it
> depends also on yours (and everything else), whether I know you or
> not... I believe this to be very improbable.
I wouldn't know how to measure the probability of such a thing being
true, but I think at least you agree that it is possible, and that's
enough to cause us problems.
T.
This is why I prefer to cast these thought experiments in terms of
finite cellular automata. All of the issues you mention go away. (One
can argue that finite cellular automata can't contain conscious
beings, but that's just a rejection of MEC, which we're supposed to be
keeping.)
I'm not entirely sure I understand the details of Bruno's Movie-Graph
(yet), so I don't know if it's equivalent to the following thought
experiment:
Let's say that we run a computer program that allocates a very large
two-dimensional array, fills it with a special Initial State (which is
hard-coded into the program), and then executes the rules of Conway's
Life on the array for a certain number of iterations. Let's say that
the resulting "universe" contains creatures that any garden-variety
mechanist would agree are fully conscious. Let's say that we run the
universe for at least enough iterations to allow the creatures to move
around, say a few things, experience a few things, etc. Finally, let's
say that we store the results of all of our calculations in a (much
larger) area of memory, so that we can look up what each bit did at
each tick of the clock.
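The setup just described can be sketched in a few lines of Python (a toy illustration; the glider seed, the set-based grid, and the number of ticks are my own choices):

```python
import itertools

def life_step(grid):
    """One tick of Conway's Life on a set of live (x, y) cells,
    treating everything off the set as dead."""
    counts = {}
    for (x, y) in grid:
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                c = (x + dx, y + dy)
                counts[c] = counts.get(c, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in grid)}

# Hard-coded Initial State (here: a glider) and the stored results
# of every time-slice, playing the role of the big lookup area.
initial_state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
history = [initial_state]
for t in range(20):
    history.append(life_step(history[-1]))
```

The list `history` plays the role of the stored results: `history[t]` records the exact state of every cell at tick t, so a later run can consult it instead of recomputing.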
Now let's say that we "play back" the stored results of our
calculations, like a movie. At each tick of the clock t, we just copy
the bits from time t of our stored memory into our two-dimensional
array. There are no Conway's Life calculations going on here. We're
just copying bits, one time-slice at a time, from our stored memory
into our original grid. It is difficult for a mechanist to argue that
any consciousness is happening here. It's functionally equivalent to
just printing out each time-slice onto a (huge) piece of paper, and
flipping through those pages like a picture book and watching the
"animated playback". It's hard for a mechanist to argue that this
style of flipping pages in a picture book can create consciousness.
Now let's imagine that we compute the Conway's Life universe again -
we load the Initial State into the grid, and then iteratively apply
the Conway's Life rule to the grid. However, for some percentage of
the cells in the grid, instead of looking at the neighboring cells and
updating according to the Conway's Life rule, we instead just pull the
data from the lookup table that we created in the previous run.
If we apply the Conway's Life rule to all the cells, it seems like the
creatures in the grid ought to be conscious. If we don't apply the
Life rule to any of the cells, but just pull the data from our
previously-created lookup table, it seems like the creatures in the
grid are not conscious. But if we apply the Life rule to half of the
cells and pull the other half from the lookup table, there will
(probably) be some creature in the grid who has half of the cells in
its brain being computed by the Life rule, and half being pulled from
the lookup table. What's the status of this creature's consciousness?
-- Kory
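The half-and-half run described above can be simulated directly. A toy sketch in Python (the glider seed and the choice of which half-plane is looked up are my own):

```python
import itertools

def life_step(grid):
    """One tick of Conway's Life on a set of live (x, y) cells."""
    counts = {}
    for (x, y) in grid:
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                c = (x + dx, y + dy)
                counts[c] = counts.get(c, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in grid)}

# Precompute the run once and store every time-slice (the lookup table).
initial_state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
history = [initial_state]
for t in range(16):
    history.append(life_step(history[-1]))

def hybrid_step(grid, next_slice, computed_region):
    """Hybrid update: cells satisfying computed_region are updated by
    the Life rule; all other cells are copied from the lookup table."""
    by_rule = {c for c in life_step(grid) if computed_region(c)}
    by_lookup = {c for c in next_slice if not computed_region(c)}
    return by_rule | by_lookup

# Run the hybrid universe: the half-plane x < 2 is looked up, the
# half-plane x >= 2 is genuinely computed.
state = initial_state
for t in range(16):
    state = hybrid_step(state, history[t + 1], lambda c: c[0] >= 2)
```

Because the lookup table came from the very same initial state, the hybrid run reproduces the pure computation tick for tick; outwardly nothing distinguishes the two, which is exactly what makes the creature's status puzzling.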
But brain functions are essentially classical (see Tegmark's paper).
Thought would be impossible if quantum entanglement were more than a
perturbation. From a classical viewpoint, your brain can only be
causally affected by a finite portion of the universe.
Brent
I don't think it's a relevant distinction. Even when the game-of-life is
running on the computer, the adjacent cells are not physically causing the
changes from "on" to "off" and vice versa - that function is via the program
implemented in the computer memory and cpu. So why should it make a difference
whether those state changes are decided by gates in the cpu or a huge look-up table?
Brent
But how would they agree on this? If we knew the answer to that we wouldn't
need to be considering these (nomologically) impossible thought experiments. I
don't think we would judge purely by their behavior. That might suffice if we
could observe for a very long time and if we could manipulate the environment,
but more practically I think we would look at how their sensory organs and
memory interacted to influence behavior.
Brent
> If we apply the Conway's Life rule to all the cells, it seems like the
> creatures in the grid ought to be conscious. If we don't apply the
> Life rule to any of the cells, but just pull the data from our
> previously-created lookup table, it seems like the creatures in the
> grid are not conscious. But if we apply the Life rule to half of the
> cells and pull the other half from the lookup table, there will
> (probably) be some creature in the grid who has half of the cells in
> its brain being computed by the Life rule, and half being pulled from
> the lookup table. What's the status of this creature's consciousness?
Which leads again to the problem of partial zombies. What is your
objection to saying that the looked-up computation is also conscious?
How would that be inconsistent with observation, or lead to logical
contradiction?
--
Stathis Papaioannou
But this static information is produced by a dynamic computation - so it can be
regarded as deriving its meaning from that computation. I don't see why that
implicit meaning shouldn't count.
Brent
> I would side with Kory that a looked up recording of conscious activity is
> not conscious. My argument being that static information has no implicit
> meaning because there are an infinite number of ways a bit string can be
> interpreted. However in a running program the values of the bits do have
> implicit meaning according to the rules of the state machine.
One part of the system has meaning relative to another part. However,
what if we consider the whole system? We could then say that the left
half, computer A, has meaning relative to the right half, computer B.
It doesn't matter that an outside observer could come up with
infinitely many meanings, any more than it matters that an alien could
come up with infinitely many interpretations of an English sentence.
--
Stathis Papaioannou
Right. And, even if the brain is a quantum computer, the argument
will go through, if only because a quantum computer can be simulated
by a classical computer (albeit very slowly: but this is not relevant;
the UD is very "slow", but the first person cannot be aware of that).
As Quentin suggested, you would have to identify yourself completely
with the entire quantum multiverse to prevent the conclusion, and even
in that case, this has to be extracted from the MEC part of the
MEC+MAT hypothesis, which is the point. But yes, in that case you can
postulate a sort of primitive matter having some relevance to your
consciousness. (Making them both very mysterious, and making their
link also rather mysterious, btw.)
MGA 1 and MGA 2 are sometimes confronted with "super ad hoc moves",
which, from a logical point of view, have to be taken into account. I
expect I will have to go up to MGA 4, but I can imagine making some
MGA 5 to make such moves invalid, relative to some explicitly stated
principle of inductive rationality. Sort of a vaccine against such
"super ad hoc moves". They appear also against "many worlds", against
experiments testing Bell's inequality, etc. Also, if you want to use
entanglement throughout the whole universe (or multiverse), you will
have difficulties relating measurements to conscious memories of
experiences (though of course this is not yet solved in the pure comp
view either), I think.
So Tegmark's work is not really relevant here. A good thing for me,
because, although I tend to believe that Tegmark is accurate, I don't
have the personal knowledge of practical quantum mechanics to be
personally assured of the meaningfulness of the chosen units.
Bruno
>> If we apply the Conway's Life rule to all the cells, it seems like the
>> creatures in the grid ought to be conscious. If we don't apply the
>> Life rule to any of the cells, but just pull the data from our
>> previously-created lookup table, it seems like the creatures in the
>> grid are not conscious. But if we apply the Life rule to half of the
>> cells and pull the other half from the lookup table, there will
>> (probably) be some creature in the grid who has half of the cells in
>> its brain being computed by the Life rule, and half being pulled from
>> the lookup table. What's the status of this creature's consciousness?
>
> I don't think it's a relevant distinction. Even when the game-of-life is
> running on the computer the adjacent cells are not physically causing the
> changes from "on" to "off" and vice versa - that function is via the program
> implemented in the computer memory and cpu. So why should it make a difference
> whether those state changes are decided by gates in the cpu or a huge look-up table?
I agree.
They would use the same criteria that they use to decide that humans
are conscious in our own world, which would be a combination of
observing outward behavior (Turing-Test), and observing brain states.
In one sense, that would be harder, because the conscious beings in the
Life universe will look very different from us. In another sense
it would be easier, because they'd have access to every bit of the
Life universe.
Am I confusing "mechanism" with something else? "Functionalism"?
"Computationalism"?
-- Kory
I can only answer this in the context of Bostrom's "Duplication" or
"Unification" question. Let's say that within our Conway's Life
universe, one particular creature feels a lot of pain. After the run
is over, if we load the Initial State back into the array and iterate
the rules again, is another experience of pain occurring? If you think
"yes", you accept Duplication by Bostrom's definition. If you say
"no", you accept Unification.
Duplication is more intuitive to me, and you might say that my thought
experiment is aimed at Duplicationists. In that context, I don't
understand why playing back the lookup table as a movie should create
another experience of pain. None of the actual Conway's Life
computations are being performed. We could just print them out on
(very large) pieces of paper and flip them like a book. Is this
supposed to generate an experience of pain? What if we just lay out
all the pages in a row and move our eyes across them? What if we lay
them out randomly and move our eyes across them? And so on. I argue
that if running the original computation a second time would create a
second experience of pain, we can generate a "partial zombie".
Stathis, Brent, and Bruno have all suggested that there is no "partial
zombie" problem in my argument. Is that because you all accept
Unification? Or am I missing something else?
-- Kory
On Sat, Nov 22, 2008 at 8:52 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
2008/11/23 Kory Heath <ko...@koryheath.com>:
> If we apply the Conway's Life rule to all the cells, it seems like the
> creatures in the grid ought to be conscious. If we don't apply the
> Life rule to any of the cells, but just pull the data from our
> previously-created lookup table, it seems like the creatures in the
> grid are not conscious. But if we apply the Life rule to half of the
> cells and pull the other half from the lookup table, there will
> (probably) be some creature in the grid who has half of the cells in
> its brain being computed by the Life rule, and half being pulled from
> the lookup table. What's the status of this creature's consciousness?
Which leads again to the problem of partial zombies. What is your
objection to saying that the looked-up computation is also conscious?
How would that be inconsistent with observation, or lead to logical
contradiction?
I would side with Kory that a looked up recording of conscious activity is not conscious.
My argument being that static information has no implicit meaning because there are an infinite number of ways a bit string can be interpreted.
However in a running program the values of the bits do have implicit meaning according to the rules of the state machine.
What makes this weird is that in one respect our universe might be considered a 4-d recording, containing a record of computations performed by neurons and brains across one of its dimensions. Perhaps this is further evidence in support of Bruno's theory: mind cannot exist in a physical universe because it is just a recording of a computation, and only the actual computation itself can create consciousness.
Since when can consciousness be an instantaneous event?
Anna
The difference is in the number of times that the relevant computation
was physically implemented. When you query the lookup table to get a
bit, you are not performing the computation again. You're just viewing
the result of the computation you did earlier. It seems to me that
this matters for Duplicationists, but maybe not for Unificationists.
Or maybe I'm still misdiagnosing the problem. Is anyone arguing that,
when you play back the lookup table like a movie, this counts as
performing all of the Conway's Life computations a second time? In
that case there would be nothing problematic about this thought
experiment for Duplicationists or Unificationists. But I don't see how
playing back the lookup table can count as implementing the Conway's
Life computations.
-- Kory
Yeah, but still. I don't think consciousness can be freeze-framed
mathematically like this. I haven't been reading the conversation,
though...I should probably try to catch up.
Anna
You are welcome.
You seem to know a bit of logic, so you could read the UDA + AUDA
paper here:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html
Well, you arrive at the end (of the first part?) of a conversation of
more than ten years, but it is NEVER too late :)
I am currently explaining the Movie Graph Argument, which is the 8th
step of the Universal Dovetailer Argument. The UDA is supposed to
show, or shows, that mechanism and physicalism (or materialism,
naturalism) are incompatible. It shows that if mechanism is true,
physics has to be derived from numbers and logic.
The AUDA is the same thing explained to, or by, a Löbian machine,
which is a universal machine knowing that she is universal (or, if
you know logic: a Sigma_1 theorem prover which can prove all sentences
of the shape S -> Bew('S'), with S Sigma_1). Peano Arithmetic, the
formal theory, can readily be transformed into such a finitely
presentable machine.
From this we can extract a logic of the observable propositions and
compare it with empirical quantum logic, making comp testable, and
indeed already tested, retrospectively, on its weirdest consequences.
Why do they count as two instances? Because they supervene on physical
processes that are spatially distinct? That would assume that spacetime
is fundamental. Or is it because you assume that remembering the dream
isn't a distinct process but must be mixed with other experiences
related to the location?
Brent
The latter. Unless the person has not yet opened the door of the
"reconstitution box", the experience of remembering the dream in
Washington is a different experience from that of remembering the
dream in Moscow: for reasons of climate, the people to whom one
relates the dream, etc. The two "computations" executed by Alice's
brain diverge because they have different inputs.
Bruno
> From this we can extract a logic of the observable proposition and
> compare with the empirical quantum logic, making comp testable, and
> already tested on its most weird consequences, retrospectively.
you could refute COMP (MEC) if it contradicted empirical QM, but QM
(and especially many worlds) is also compatible with MAT (and NOT COMP).
These would be Tegmark's Level I and II universes - infinite physical
(or mathematical physicalist as defined by Kory) universes with matter
permuting in all possible ways. If you then let consciousness supervene
on matter (but not in a COMP way (see MGA) - maybe because of local
infinities or whatever) and with UNIFICATION you would also get a many
worlds scenario (also in the sense that for a 1st person one would have
to look at the MAT-histories running through every OM)
In your posts you do seem to have a preference for COMP (although you
say you don't have a position ;-) but I think you definitely lean more
to COMP than to MAT - are there reasons for this or is it only a
personal predilection?
Cheers,
Günther
p.s.: I am looking forward to your further MGA posts (how far will they
go, you have hinted up to MGA 5?) and the ensuing discussion, I have
very much enjoyed reading all this stuff.
Le dimanche 23 novembre 2008 à 22:09 +0100, Günther Greindl a écrit :
> Bruno,
>
> > From this we can extract a logic of the observable proposition and
> > compare with the empirical quantum logic, making comp testable, and
> > already tested on its most weird consequences, retrospectively.
>
> you could refute COMP (MEC) if it would contradict empirical QM, but QM
> (and especially many worlds) is also compatible with MAT (and NOT COMP).
It is.
> These would be Tegmark's Level I and II universes - infinite physical
> (or mathematical physicalist as defined by Kory) universes with matter
> permuting in all possible ways.
That was my point about finite "blocks" of universe... Even if the
universe is infinite, every finite block of it contains a finite
amount of matter, hence a finite number (however big) of possible
permutations of the matter within it (even taking the maximal
permutations of a fully filled "block" of matter). That's what I call
the divx argument :) What you see (or what any human could see),
however big the resolution of the picture, is still finite data.
Example: imagine that our eyes' resolution is 10⁵x10⁵ and we are able
to "see" 10³ pictures per second... then a human lifetime of seeing is
encodable in a string of 10⁵ x 10⁵ x 24 x 10³ x 60x60x24x365x~100 bits
(24 bits for 3 bytes per pixel, i.e. 16.7 million colors, more than we
can even discern; 100 for 100 years of lifetime), not taking
"compression" into account. It's (very⁵) big but finite (and I took a
very⁵ high resolution), and all human seeing will be encodable among
the permutations of a string of this length, which is an even bigger
but still finite number.
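For what it's worth, the arithmetic can be spelled out (using Quentin's own figures, with the 3 bytes per pixel counted as 24 bits so that the total comes out in bits):

```python
# Quentin's "divx" bound on a human lifetime of visual experience.
pixels_per_frame = 10**5 * 10**5             # assumed 10^5 x 10^5 eye "resolution"
bits_per_pixel = 3 * 8                       # 3 bytes/pixel, ~16.7 million colors
frames_per_second = 10**3                    # assumed 10^3 pictures per second
seconds_per_lifetime = 60 * 60 * 24 * 365 * 100   # roughly 100 years

bits = (pixels_per_frame * bits_per_pixel
        * frames_per_second * seconds_per_lifetime)
print(bits)   # about 7.6 * 10**23 bits: enormous, but finite
```

About 7.6 x 10^23 bits, so the set of all possible such strings is astronomically large but still finite, which is all the argument needs.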
> If you then let consciousness supervene
> on matter (but not in a COMP way (see MGA) - maybe because of local
> infinities or whatever) and with UNIFICATION you would also get a many
> worlds scenario (also in the sense that for a 1st person one would have
> to look at the MAT-histories running through every OM)
If infinities are at play... what is a MAT-history? It can't even be
"written".
Regards,
Quentin
--
All those moments will be lost in time, like tears in rain.
This I don't follow. I would have thought it implies the opposite.
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
>
> On Sun, Nov 23, 2008 at 03:59:02PM +0100, Bruno Marchal wrote:
>>>
>>> I would side with Kory that a looked up recording of conscious
>>> activity is not conscious.
>>
>>
>>
>> I agree with you. The point here is just that MEC+MAT implies it.
>>
>
> This I don't follow. I would have thought it implies the opposite.
MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the
exam). OK?
MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus conscious)
when the film is projected. OK?
I take the "looked-up recording" as identical (with respect to the
reasoning) to a projection of the movie.
Of course I don't believe that a projection of a filmed computation is
conscious "qua computatio". It is so absurd that sometimes I end the
Movie Graph Argument here. I mean, I consider this equivalent to false,
and thus as enough for showing that COMP+MAT implies false.
MGA 3 is intended for those who believe that the movie can be
conscious qua computatio.
Bruno
Quentin Anciaux wrote:
> If infinities are at play... what is a MAT-history ? it can't even be
> "written".
Agreed. And that is why we should be more reluctant to drop COMP than to
drop MAT.
But IF we drop COMP, we could "accept" unwriteable MAT-histories.
Cheers,
Günther
> I think you are correct, but allowing the observer to be mechanically
> described as obeying the wave equation (whose solutions obey comp),
Hmm, well, if you have a basis, yes; but "naked" infinite-dimensional
Hilbert space (the "everything" in QM)? With MAT we concentrate not
only on OMs (as with COMP) but on all states (which may not have an OM).
> I mean Everett is really SWE+COMP.
OK, I have not looked at it this way yet - how does COMP enter the
picture automatically in the Everett interpretation? I am missing
something here. Do you mean because all the solutions are computable
(but see the objection above)?
> With MAT we haven't (except bibles, myth, etc.). There is no standard
> notion of mat histories,
I agree - that is why I think COMP is a better guess than MAT - although
I still have some quibbles ...
> deployment with comp). To have MAT correct, you have to accept not only
> actual infinities, but concrete actual infinities that you cannot
> approximate with Turing machine, nor with Turing Machine with oracle.
> You are a bit back to literal angels and fairies ...
Yes, we agree.
> As I said many times, COMP is my favorite working *hypothesis*. It is my
...
> MAT has been a wonderful methodological assumption, but it has always
> being incoherent, or eliminativist on the mind.
Ok. But what do you think of the following: Bertrand Russell's neutral
monism (also Feigl and others) is an interesting metaphysical "theory":
one would have a basic "mind-stuff" - protoexperientials - which would
follow the laws of comp.
It would not be a dualism, it would be mind-monism, but the "objects"
being computed would not be OMs directly but some kind of basic
mind-components - this idea is not new, in fact these objects would
correspond to the "dharmas" of yogacara (and also Theravada Buddhism,
but not so clearly there). (see
http://en.wikipedia.org/wiki/Dharmas#Dharmas_in_Buddhist_phenomenology)
One would lose the wonderful OM-COMP correspondence (which I think is an
important feature of your COMP) and get some kind of "binding problem"
again - how a unified consciousness results from the "dharmas"; but one
would be able to better explain how we have shareable histories (which
is, I think, a _weak point_ of COMP if related directly to OMs - as has
already been mentioned on the list, we can drift into solipsism with
COMP quite easily, and I don't see why shareable histories of any great
measure should evolve).
>> p.s.: I am looking forward to your further MGA posts (how far will they
>> go, you have hinted up to MGA 5?) and the ensuing discussion, I have
>> very much enjoyed reading all this stuff.
>
>
> Thanks. And so you believe that MAT+MEC makes Alice conscious through
> the projection of its brain movie!
Yes, if MAT+MEC is assumed, I would believe this. And I would not yet
accept it as an "absurdity" and ruling out of MAT+MEC - although I would
see that it is beginning to get very strange *grin*
>You really want me to show this is
> absurd. It is not so easy, and few people find this necessary, but I
> will do asap (MGA 3).
Yup :-)
And I would be interested what you think of the idea to let COMP govern
a "dharma"-level and not an OM-level directly.
Cheers,
Günther
Yes. You could define precise mathematical unwriteable MAT-histories.
Mathematical logicians already have the tools for managing "Newtonian"
MAT-histories. You will need a logic with a non-enumerable alphabet. Good
luck with the non-enumerable typo errors :)
But no problem. I find this implausible, but it can be done consistently.
COMP is a bit like consistency from Peano Arithmetic's first-person
view on its third-person description (its clothes, or its Gödel number,
or its "program"): IF true, then its falsity is consistent.
COMP is the ontic truth on "YES DOCTOR", and it entails (provably, with
some vocabulary definitions) the intrinsic RIGHT, for machines, to
say "NO" to the doctor, and the ethical obligation to respect those
who say NO.
I have no problem with MAT believers, only with COMP+MAT believers.
Note also that, even with just COMP, the first-person "OM" lives
unwriteable stories, so those tools will be used even in the frame of
COMP.
And I can understand, through comp, the roots of the belief that comp
is false. Actually there is a sense in which, from the first-person
point of view, comp *is* false. The first person that you can (in a
proper mathematical way) associate with a machine already does not or
cannot believe in the truth of comp. I can elaborate on this later, but
it needs more technical machinery.
Bruno
I don't mean to hold up the show, but I'm still stuck here. I don't
understand how Lucky Alice should be viewed as conscious in the
context of MEC+MAT.
In a different message, you said this:
> But to go in the
> detail here would confront us with the not simple task of defining
> more precisely what is a computation, or what we will count as two
> identical computations in the deployment.
As complex as that task may be, I'm beginning to think that I can't
get past MGA 1 without tackling it.
Imagine that you have a grid of bits, and at each tick of the clock,
each bit is randomly turned on or off using a pseudorandom number
generator with a very long periodicity. Imagine that for some stretch
of time, the bits in the grid act "as if" they were following the
rules to Conway's Life. Are Conway's Life computations in fact being
performed? I thought "obviously no". The majority answer here seems to
be "obviously yes".
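Kory's question can be made operational with a checker that tests whether each successive frame of a bit grid really is the Life-successor of the frame before it. The sketch below (a small illustrative program, not anything from the thread) separates a genuine Life history from a frozen or pseudorandom one:

```python
import random

def life_step(grid):
    """One step of Conway's Life on a tuple-of-tuples grid (dead boundary)."""
    h, w = len(grid), len(grid[0])
    def neighbours(r, c):
        return sum(grid[i][j]
                   for i in range(max(0, r - 1), min(h, r + 2))
                   for j in range(max(0, c - 1), min(w, c + 2))
                   if (i, j) != (r, c))
    return tuple(tuple(1 if neighbours(r, c) == 3
                       or (grid[r][c] and neighbours(r, c) == 2) else 0
                       for c in range(w))
                 for r in range(h))

def follows_life(frames):
    """True iff each frame is the Life-successor of the frame before it."""
    return all(life_step(a) == b for a, b in zip(frames, frames[1:]))

# A genuine Life history passes the check:
blinker = ((0, 0, 0), (1, 1, 1), (0, 0, 0))
run = [blinker, life_step(blinker), life_step(life_step(blinker))]
assert follows_life(run)

# A frame that merely repeats fails it (blinkers oscillate, they don't freeze):
still = [blinker, blinker]
assert not follows_life(still)

# And a pseudorandom frame sequence is overwhelmingly likely to fail:
rng = random.Random(42)
noise = [tuple(tuple(rng.randint(0, 1) for _ in range(3)) for _ in range(3))
         for _ in range(3)]
print("random frames follow Life:", follows_life(noise))
```

For a grid of c cells, a random generator matches a single Life transition with probability 2^-c, which is why the "acting as if" stretches Kory imagines are so extraordinarily rare.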
Suppose that we perform a very complex computation, and the result is
the integer "5". Should any computation that results in "5" be viewed
as performing the former computation?
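One way to sharpen the "5" question is to separate a computation's result from its trace of intermediate states. The two hypothetical procedures below (pure illustration) return the same integer but pass through different histories, which is the distinction being pointed at:

```python
def slow_five(trace):
    """Reach 5 the long way: sum the first five odd numbers (getting 25),
    then take the integer square root."""
    total = 0
    for k in range(5):
        total += 2 * k + 1
        trace.append(total)        # intermediate states: 1, 4, 9, 16, 25
    return int(total ** 0.5)

def fast_five(trace):
    """Just emit the stored answer."""
    trace.append(5)
    return 5

t1, t2 = [], []
assert slow_five(t1) == fast_five(t2) == 5   # extensionally equal ...
assert t1 != t2                              # ... intensionally distinct
print("traces:", t1, "vs", t2)
```

In logicians' terms the two are extensionally equal but intensionally distinct, and the question is which notion "performing a computation" should track.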
Chalmers's paper "Does a Rock Implement Every Finite-State Automaton?"
seems directly relevant to all of these Lucky Alice thought
experiments. (Is it?) I need to re-read that paper.
I have no doubt that my thinking on these topics is confused. Where
should I begin?
-- Kory
> I think you are correct, but allowing the observer to be mechanically
> described as obeying the wave equation (whose solutions obey comp),
Hmm well if you have a basis, yes; - but "naked" infinite-dimensional
Hilbert Space (the "everything" in QM)?
With MAT we do not only
concentrate on OMs (as with COMP) but on all states (which maybe don't
have an OM)
> I mean Everett is really SWE+COMP.
Ok I have not looked at it this way yet - how does COMP enter the
picture automatically in the Everett interpretation?
I am missing
something here. Do you mean because all the solutions are computable?
(but see objection above)
Please see my recent response to Bruno. If we perform a complex
computation which results in placing the integer "5" into some memory
variable, and then later we copy the contents of that memory variable
to some other location in memory, in what sense are we re-performing
the original complex computation?
-- Kory
I share your reservations, Kory. In outline, Bruno's argument so far seems to
be (I'm sure Bruno will correct me if I get this wrong):
1. Assume that consciousness supervenes on the material realization of some
complex computations.
2. These computations could be performed stepwise by some machine that only does
arithmetic and consciousness would still supervene.
3. The order of the steps matters, but not the time interval between steps. So
even if the steps are discrete and separated in time consciousness will still
supervene.
4. Since many different mechanisms can realize the sequence of steps, the
consciousness must supervene on the computation however the sequence is realized.
5. The sequence of steps could be realized by accident, i.e. a random number
generator.
6. The sequence of steps could be realized by a recording of the original,
conscious sequence.
7. 5 & 6 supra are absurd (i.e. "false") therefore there is an implicit
contradiction in 1.
But I don't find this compelling. First, 5 & 6 are not contradictions - they
just violate our intuitions about what consciousness should be like. But what
is it about them that violates our intuitions?
(a) They have divorced consciousness from its context, i.e. its potential or
actual interaction with an environment.
(b) They eliminate the temporal continuity, so that the consciousness is sliced
into discrete "observer moments" which are regarded as states in a state machine.
(c) They eliminate causal connections within the process that is supposed to
realize consciousness.
The causal connections are broken by imagining coincidences that are so
improbable that their probability of happening within the lifetime of the
universe is infinitesimal - in other words at a level where we have no way to
distinguish improbable from impossible.
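Brent's "infinitesimal within the lifetime of the universe" can be given rough numbers. Taking, purely as illustrative assumptions, 10^11 neurons each making 100 binary firing decisions over about a second, the log-probability that fair coin flips reproduce them all is:

```python
import math

# Illustrative orders of magnitude (assumptions, not measurements):
neurons = 10**11   # rough neuron count of a human brain
steps = 100        # binary firing decisions per neuron over ~1 second
events = neurons * steps

# Log-probability that fair coin flips reproduce every decision:
log10_p = -events * math.log10(2)
print(f"log10(probability) = {log10_p:.4g}")

# For scale: roughly 10**17 seconds have elapsed since the Big Bang; no count
# of available trials comes anywhere near cancelling an exponent of ~3e12.
```

An exponent of order minus three trillion is the sense in which "improbable" and "impossible" become operationally indistinguishable here.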
Having shown there is something counter-intuitive implicit in 1 thru 7 supra,
we're invited to conclude that consciousness supervenes on pure, abstract
computation which takes place in an arithmetical Platonia. But that also
violates a lot of intuitions. Of course I'm not against violating intuitions,
but I expect some predictive power in exchange.
Brent
That's different since, ex hypothesi, the original calculation was complex. So
we can say just putting the answer, 5, in a register is not repeating the
calculation based on some complexity measure of the process.
Brent
I accept Unification, though for different reasons to those discussed
in these threads.
> Duplication is more intuitive to me, and you might say that my thought
> experiment is aimed at Duplicationists. In that context, I don't
> understand why playing back the lookup table as a movie should create
> another experience of pain. None of the actual Conway's Life
> computations are being performed. We could just print them out on
> (very large) pieces of paper and flip them like a book. Is this
> supposed to generate an experience of pain? What if we just lay out
> all the pages in a row and move our eyes across them? What if we lay
> them out randomly and move our eyes across them? And so on.
If the GOL results in consciousness, then I don't see how you could
consistently claim that such activities don't generate consciousness.
The question turns on what is a computation and why it should have
magical properties. For example, if someone flips the squares on a
Life board at random and accidentally duplicates the Life rules does
that mean the computation is carried out? How would you know by
observation if this was happening just by luck? You could argue that
after a short period of observation the Life board would become
completely disorganised, but what about the case of a competent
square-flipper who has a condition that might render him amnesic at
any moment? What about the case of having a vast army of random
square-flippers operating multiple boards, so that at least one of
them necessarily follows the correct rules?
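The flipper-army arithmetic can be checked exactly in a toy case: on a board of c cells a random flipper matches one Life transition with probability 2^-c, so an army of order 2^(c*k) boards is needed to make one lucky k-step run likely. Enumerating every (current, next) pair on a bare 2x2 board (where each cell has exactly three neighbours) confirms the per-step rate:

```python
from itertools import product

def life_step2(g):
    """Conway's Life rule on a bare 2x2 grid: every cell has 3 neighbours."""
    def nb(r, c):
        return sum(g[i][j] for i in (0, 1) for j in (0, 1) if (i, j) != (r, c))
    return tuple(tuple(1 if nb(r, c) == 3 or (g[r][c] and nb(r, c) == 2) else 0
                       for c in (0, 1)) for r in (0, 1))

# All 16 possible 2x2 grids:
grids = [tuple(tuple(bits[2 * r + c] for c in (0, 1)) for r in (0, 1))
         for bits in product((0, 1), repeat=4)]

# For each current grid, exactly one of the 16 candidate next grids is the
# true Life successor, so a random flipper "obeys" Life with chance 2**-4:
hits = sum(life_step2(g) == h for g in grids for h in grids)
print(hits, "lucky transitions out of", len(grids) ** 2)
```

For Alice-sized boards the required army size dwarfs any physical resource, which is the "luck" the MGA trades on.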
> I argue
> that if running the original computation a second time would create a
> second experience of pain, we can generate a "partial zombie".
>
> Stathis, Brent, and Bruno have all suggested that there is no "partial
> zombie" problem in my argument. Is that because you all accept
> Unification? Or am I missing something else?
I think there is a partial zombie problem regardless of whether
Unification or Duplication is accepted. Interestingly, Nick Bostrom
doesn't seem to have a problem with the idea of partial zombies:
http://www.nickbostrom.com/papers/experience.pdf
--
Stathis Papaioannou
But the Conway's Life calculations are "complex" in the sense that I
meant the term. If we have a grid of cells filled with a pattern of
bits, and we point at one particular cell and ask, "If we iterate the
Conway's Life rule on this grid a trillion times, will this bit be on
or off?", we have to perform a bunch of computations to answer the
question. If we store the results of those computations, and then
later someone points at that same cell and asks the same question, and
I just look up the answer, I don't see how we can say that that act of
looking up the answer counts as re-performing the original
computation. Are you arguing that it does?
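Kory's lookup question can be phrased operationally: a memoized query returns the same answer while performing none of the original step computations. A minimal sketch (the update rule below is an arbitrary stand-in, not the Life rule itself):

```python
from functools import lru_cache

STEPS_PERFORMED = 0  # counts how many step computations actually run

def step(s):
    """An arbitrary stand-in update rule: XOR each bit with its right neighbour."""
    global STEPS_PERFORMED
    STEPS_PERFORMED += 1
    return tuple(b ^ s[(i + 1) % len(s)] for i, b in enumerate(s))

@lru_cache(maxsize=None)
def iterate(state, n):
    """Apply step n times, caching the answer keyed by (state, n)."""
    for _ in range(n):
        state = step(state)
    return state

start = (1, 0, 1, 1, 0, 0, 1, 0)
a = iterate(start, 1000)          # performs 1000 step computations
after_first = STEPS_PERFORMED
b = iterate(start, 1000)          # a pure lookup: zero further steps
assert a == b and STEPS_PERFORMED == after_first
print("steps performed by the second query:", STEPS_PERFORMED - after_first)
```

The second query gives the same answer while executing none of the steps, which is exactly the situation Kory finds hard to count as "re-performing the computation".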
-- Kory
Brent
Right - I think we had a breakdown in communication. I thought you
were asserting the opposite.
> I take the "looked recording" as identical (with respect to the
> reasoning) with a projection of the movie.
>
> Of course I don't believe that a projection of a filmed computation is
> conscious "qua computatio". It is so absurd that sometimes I end the
> Movie Graph Argument here. I mean I consider this equivalent to false,
> and thus as enough for showing COMP+MAT implies false.
> MGA 3 is intended for those who believes that the movie can be
> conscious qua computatio.
>
> Bruno
>
The movie, in this case, is a very precise recording of the states of
all of Alice's neurons and their interactions. Why wouldn't it be
conscious? Someone once said to you "don't confuse the territory with
the map" - and you very sagely asked "what if the map is so detailed
it is indistinguishable from the territory".
A popular representation of the universe is a block universe, where
all events exist in a 4D static representation that is forever
timeless. A block universe contains conscious entities, who perceive
time etc., at least according to your usual die hard materialist,
don't you think? How does a block universe differ from your movie
though?
Note it is important not to rely on our intuition here. None of us has
experience of movies with the level of resolution being discussed
here. High definition movies are distinctly lame by comparison.
I guess I'll need MGA3!
I would say no. But of course, the real question is, "Why does it
matter?" If I'm reading you correctly, you're taking the view that
it's the pattern of bits that matters, not what created it (or
"caused" it, or "computed it", etc.)
It would help me if I had a clearer idea of how you view
consciousness. I assume that, for you, if someone flips the squares on
a Life board at random and creates the expected "chaos", there's no
consciousness there, but that there are certain configurations that
could arise (randomly) that you would consider conscious. I assume
that these patterns would show some kind of regularity - some kind of
law-like behavior.
It's not easy for me to explain why I think it matters what kind of
process (or in Platonia, what kind of abstract computation) generated
that order. But it's also not easy for me to understand the
alternative view. During those stretches of time when the random field
of bits is creating a pattern that you would call conscious, what do
you *mean* when you say it's conscious? By definition, you can't mean
anything about how it's reacting to its environment, or that it's
doing something "because of" something else, etc.
> I think there is a partial zombie problem regardless of whether
> Unification or Duplication is accepted.
Can you elaborate on this? What partial zombie problem do you see that
Unification doesn't address? And do you think that the move away from
"physical reality" to "mathematical reality" solves that problem? If
so, how?
-- Kory
Yes. Suppose one of the components in my computer is defective but,
with incredible luck, is outputting the appropriate signals due to
thermal noise. Would it then make sense to say that the computer isn't
"really" running Firefox, but only pretending to do so, reproducing
the Firefox behaviour but lacking the special Firefox
qualia-equivalent?
> It would help me if I had a clearer idea of how you view
> consciousness. I assume that, for you, if someone flips the squares on
> a Life board at random and creates the expected "chaos", there's no
> consciousness there, but that there are certain configurations that
> could arise (randomly) that you would consider conscious. I assume
> that these patterns would show some kind of regularity - some kind of
> law-like behavior.
In the first instance, yes. But then the problem arises that under a
certain interpretation, the chaotic patterns could also be seen as
implementing any given computation. A common response to this is that
although it may be true in a trivial sense, as it is true that a block
of marble contains every possible statue, it is useless to define
something as a computation unless it can process information in a way
that interacts with its environment. This seems reasonable so far, but
what if the putative computation is of a virtual world with conscious
observers? The trivial sense in which such a computation can be said
to be hiding in chaos is no longer trivial, as I see no reason why the
consciousness of these observers should be contingent on the
possibility of interaction with the environment containing the
substrate of their implementation. My conclusion from this is that
consciousness, in general, is not dependent on the orderly physical
activity which is essential for the computations that we observe.
Rather, consciousness must be a property of the abstract computation
itself, which leads to the conclusion that the physical world is
probably a virtual reality generated by the big computer in Platonia,
since there is no basis for believing that there is a concrete
physical world separate from the necessarily existing virtual one.
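The "certain interpretation" mentioned here is the Putnam/Chalmers-style mapping: pair the i-th state of any sequence of distinct physical states with the i-th state of the target computation, and the chaos "implements" it. A toy version of that mapping, with an arbitrary counter as the target computation:

```python
import random

# Target computation: a tiny counter automaton whose run is 0, 1, ..., 5.
target_run = list(range(6))

# "Physical system": any sequence of distinct states will do -- here, noise.
rng = random.Random(1)
physical_run = []
while len(physical_run) < len(target_run):
    s = tuple(rng.randint(0, 1) for _ in range(16))
    if s not in physical_run:          # states must be distinct for the trick
        physical_run.append(s)

# The trivializing interpretation: map the i-th physical state to the i-th
# computational state. Under this mapping the noise "implements" the counter.
interpretation = dict(zip(physical_run, target_run))
assert [interpretation[s] for s in physical_run] == target_run
print("noise 'implements' the counter under the chosen mapping")
```

The mapping is built after the fact and supports no counterfactuals, which is why such implementations are usually dismissed as trivial; the question in the thread is whether that dismissal survives when the computation contains its own observers.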
> It's not easy for me to explain why I think it matters what kind of
> process (or in Platonia, what kind of abstract computation) generated
> that order. But it's also not easy for me to understand the
> alternative view. During those stretches of time when the random field
> of bits is creating a pattern that you would call conscious, what do
> you *mean* when you say it's conscious? By definition, you can't mean
> anything about how it's reacting to its environment, or that it's
> doing something "because of" something else, etc.
I know what I mean by consciousness, being intimately associated with
it myself, but I can't explain it.
>> I think there is a partial zombie problem regardless of whether
>> Unification or Duplication is accepted.
>
> Can you elaborate on this? What partial zombie problem do you see that
> Unification doesn't address?
If by "Unification" you mean the idea that two identical brains with
identical input will result in only one consciousness, I don't see how
this solves the conceptual problem of partial zombies. What would
happen if an identical part of both brains were replaced with a
non-conscious but otherwise identically functioning equivalent?
> And do you think that the move away from
> "physical reality" to "mathematical reality" solves that problem? If
> so, how?
The Fading Qualia argument proves functionalism, assuming that the
physical behaviour of the brain is computable (some people like Roger
Penrose dispute this). Functionalism then leads to the conclusion that
consciousness isn't dependent on physical activity, as discussed in
the recent threads. So, either functionalism is wrong, or
consciousness resides in the Platonic realm.
--
Stathis Papaioannou
Doesn't this antinomy arise because we equivocate on "running Firefox"? Do we
mean a causal chain of events in the computer according to a certain program
specification or do we mean the appearance on the screen of the same thing that
the causal chain would have produced? We'd say "no" by the first meaning, but
"yes" by the second. Obviously, then, the question is not black-and-white. If
the computer simply dropped a bit or two and miscolored a few pixels, no one
would notice and no one would assert it wasn't running Firefox. So really, when
we talk about "running Firefox" we are referring to a fuzzy, holistic process
that admits of degrees.
I'm developing a suspicion of arguments that say "suppose by accident...". If
we say that the (putative) possibility of something happening "by accident"
destroys the relevance of it happening as part of a causal chain, we are, in a
sense, rejecting the concept of causal chains and relations - and not just in
consciousness, as your Firefox example illustrates.
I wrote "putative" above because this kind of thought experiment hypothesizes
events whose probability is infinitesimal. If you take a finitist view, there
is a lower bound to non-zero probabilities.
>
>> It would help me if I had a clearer idea of how you view
>> consciousness. I assume that, for you, if someone flips the squares on
>> a Life board at random and creates the expected "chaos", there's no
>> consciousness there, but that there are certain configurations that
>> could arise (randomly) that you would consider conscious. I assume
>> that these patterns would show some kind of regularity - some kind of
>> law-like behavior.
>
> In the first instance, yes. But then the problem arises that under a
> certain interpretation, the chaotic patterns could also be seen as
> implementing any given computation. A common response to this is that
> although it may be true in a trivial sense, as it is true that a block
> of marble contains every possible statue, it is useless to define
> something as a computation unless it can process information in a way
> that interacts with its environment. This seems reasonable so far, but
> what if the putative computation is of a virtual world with conscious
> observers? The trivial sense in which such a computation can be said
> to be hiding in chaos is no longer trivial,
It is still trivial in the sense that it could be said to instantiate all
possible conscious worlds (at least up to some size limit). Since we don't know
what is necessary to instantiate consciousness, this seems much more speculative
than saying the block of marble instantiates all computations - which we already
agree is true only in a trivial sense.
>as I see no reason why the
> consciousness of these observers should be contingent on the
> possibility of interaction with the environment containing the
> substrate of their implementation. My conclusion from this is that
> consciousness, in general, is not dependent on the orderly physical
> activity which is essential for the computations that we observe.
Yet this is directly contradicted by those specific instances in which
consciousness is interrupted by disrupting the physical activity.
> Rather, consciousness must be a property of the abstract computation
> itself, which leads to the conclusion that the physical world is
> probably a virtual reality generated by the big computer in Platonia,
This seems to me to be jumping to a conclusion by examining only one side of the
argument and, finding it flawed, embracing the contrary. Abstract computations
are atemporal and don't have to be generated. So it amounts to saying that the
physical world just IS in virtue of there being some mapping between the world
and some computation.
Or there's something wrong with the argument that functionalism implies
consciousness isn't dependent on physical activity. As Bruno points out
"physical activity" is not so well defined at a fundamental level and neither is
"consciousness".
Brent Meeker
> Doesn't this antinomy arise because we equivocate on "running Firefox"? Do we
> mean a causal chain of events in the computer according to a certain program
> specification or do we mean the appearance on the screen of the same thing that
> the causal chain would have produced? We'd say "no" by the first meaning, but
> "yes" by the second. Obviously, then, the question is not black-and-white. If
> the computer simply dropped a bit or two and miscolored a few pixels, no one
> would notice and no one would assert it wasn't running Firefox. So really, when
> we talk about "running Firefox" we are referring to a fuzzy, holistic process
> that admits of degrees.
A functionally equivalent copy of Firefox behaves in the same way as
the standard copy to which we are comparing it, giving the same output
for a given input. Differences which the program can't "know" about
are not important in this context, and the exact nature of the
hardware - whether solid state or valve, causal or random - is one
such difference. Of course, if the hardware is causal the program will
run much more reliably, but if the random hardware runs appropriately
through luck, I don't see how the program could know this.
> I'm developing a suspicion of arguments that say "suppose by accident...". If
> we say that the (putative) possibility of something happening "by accident"
> destroys the relevance of it happening as part of a causal chain, we are, in a
> sense, rejecting the concept of causal chains and relations - and not just in
> consciousness, as your Firefox example illustrates.
I would say that the significance of the causal chain is in
reliability, not in the experience the computation has, such as it may
be.
> I wrote "putative" above because this kind of thought experiment hypothesizes
> events whose probability is infinitesimal. If you take a finitist view, there
> is a lower bound to non-zero probabilities.
Can't we stay finitist and say these improbable things are very likely
to happen given a very big universe, say 3^^^3 metres across in
Knuth's notation?
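Knuth's up-arrow notation can be defined in a few lines, though only tiny arguments are actually computable: 3^^^3 (i.e. 3↑↑↑3) is already a power tower of about 7.6 trillion threes. A sketch:

```python
def knuth(a, n, b):
    """a ↑^n b in Knuth's up-arrow notation; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    result = 1              # a ↑^n 0 == 1 by convention
    for _ in range(b):
        result = knuth(a, n - 1, result)
    return result

assert knuth(2, 2, 3) == 16              # 2↑↑3 = 2**2**2
assert knuth(3, 2, 3) == 7625597484987   # 3↑↑3 = 3**27
assert knuth(2, 3, 2) == 4               # 2↑↑↑2 = 2↑↑2
# knuth(3, 3, 3) -- i.e. 3^^^3 -- is a tower of ~7.6 * 10**12 threes: hopeless.
```

So "3^^^3 metres across" names a finite but utterly unvisualizable size, which is the point of using it in a finitist argument.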
> It is still trivial in the sense that it could be said to instantiate all
> possible conscious worlds (at least up to some size limit). Since we don't know
> what is necessary to instantiate consciousness, this seems much more speculative
> than saying the block of marble instantiates all computations - which we already
> agree is true only in a trivial sense.
We do know what it takes to instantiate consciousness: chemical
reactions in the brain. If these chemical reactions are computable
then an appropriate computation should also instantiate consciousness.
If we consider only the case of inputless conscious beings, I still
don't see why they won't be instantiated in randomness.
>>as I see no reason why the
>> consciousness of these observers should be contingent on the
>> possibility of interaction with the environment containing the
>> substrate of their implementation. My conclusion from this is that
>> consciousness, in general, is not dependent on the orderly physical
>> activity which is essential for the computations that we observe.
>
> Yet this is directly contradicted by those specific instances in which
> consciousness is interrupted by disrupting the physical activity.
But if it's all a virtual reality, it isn't a concrete physical
disruption that affects consciousness. It's just that the program
takes a turn which manifests in the virtual world as brain and
consciousness disruption.
>> Rather, consciousness must be a property of the abstract computation
>> itself, which leads to the conclusion that the physical world is
>> probably a virtual reality generated by the big computer in Platonia,
>
> This seems to me to be jumping to a conclusion by examining only one side of the
> argument and, finding it flawed, embracing the contrary. Abstract computations
> are atemporal and don't have to be generated. So it amounts to saying that the
> physical world just IS in virtue of there being some mapping between the world
> and some computation.
Yes. But I arrive at this conclusion because I can't think of a reason
to constrain computation so that it is only implemented by
conventional computers, and not by any and every random process.
>> The Fading Qualia argument proves functionalism, assuming that the
>> physical behaviour of the brain is computable (some people like Roger
>> Penrose dispute this). Functionalism then leads to the conclusion that
>> consciousness isn't dependent on physical activity, as discussed in
>> the recent threads. So, either functionalism is wrong, or
>> consciousness resides in the Platonic realm.
>
> Or there's something wrong with the argument that functionalism implies
> consciousness isn't dependent on physical activity.
Yes, but I find the argument convincing.
--
Stathis Papaioannou
It seems to me that this reasoning creates just as serious a problem
for your perspective as it does for mine. Suppose we physically remove
the defective component from the computer, but, with incredible luck,
the surrounding components continue to act as though they were
receiving the signals they would have received. Your experience of
using Firefox remains the same, so (by your argument above) it
shouldn't make sense to say that the computer isn't "really" running
Firefox. But we can keep removing components until all that's left is
a monitor that, with incredible luck due to thermal noise, is
displaying the pixels that would have been displayed if your computer
was actually functioning, doing things like displaying a mouse-pointer
that (very improbably!) happens to move when you move your mouse, etc.
This is, of course, just a recapitulation of the argument we've
already been considering - the slide from Fully-Functional Alice to
Lucky Alice to Empty-Headed Alice. I have an intuition that causality
(or its logical equivalent in Platonia) is somehow important for
consciousness. You argue that the slide from Fully-Functional
Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
indicates that there's something wrong with this idea. However, you
have an intuition that order is somehow important for consciousness.
(Without trying to beg the question, I might use the term "mere
order", to indicate the fact that, for you, it doesn't matter whether
the blinking bits in some hypothetical 2D array were generated by
(say) a random process, it just matters that they display the
requisite order.) But the slide from Lucky Alice to Empty-Headed Alice
is just as problematic for that view as the slide from
Fully-Functional Alice to Lucky Alice is for mine.
My point isn't that your intuition must be incorrect. My point is that
the above argument fails to show me why your "mere order" intuition is
more correct than my "real order" intuition, since the argument is
equally destructive to both intuitions. Instead of giving up your
intuition, you make a move to Platonia. But in that new context, I
think it still makes sense to ask if "mere order" (for instance, in
the binary digits of PI) is enough for consciousness, and the Alice /
Firefox thought experiments don't help me answer that question.
> If by "Unification" you mean the idea that two identical brains with
> identical input will result in only one consciousness, I don't see how
> this solves the conceptual problem of partial zombies. What would
> happen if an identical part of both brains were replaced with a
> non-conscious but otherwise identically functioning equivalent?
I was referring to the idea that my Conway's Life version of Bruno's
MGA 2 may only present a problem for Duplicationists. If one believes
that physically re-performing all of the Conway's Life computations
would create a second experience of pain (assuming that there's a
creature in there with that description), and if you *don't* believe
that the act of playing the movie back creates a second experience of
pain, then you have a partial zombie problem. But if you accept
Unification, the problem might go away (although I'm unsure of this).
I still feel like I don't have a handle on how you feel the move to
Platonia solves these problems. If we imagine the mathematical
description of filling a 3D grid with the binary digits of PI,
somewhere within it we will find some patterns of bits that look as
though they're following the rules to Conway's Life. If we see
creatures in there, would they be conscious? What about the areas in
that grid where we find the equivalent of Empty-Headed Alice, where
most of the cells seem to be "following the rules" of Conway's Life,
but the section where a creature's "visual cortex" ought to be is just
filled with zeros? In other words, why doesn't the "partial zombie"
problem still exist for us in Platonia?
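Kory's pi-grid can be tried in miniature. The sketch below (an illustration only) extracts the first 48 fractional bits of pi from the double-precision value of math.pi, so only about 50 bits are trustworthy, lays them into a 6x8 grid, and scans for a chosen row pattern:

```python
import math

def pi_bits(n):
    """First n binary digits of the fractional part of pi.
    (Derived from double-precision math.pi, so keep n <= ~50.)"""
    frac = math.pi - 3          # exact: both values are representable doubles
    bits = []
    for _ in range(n):
        frac *= 2
        bits.append(int(frac))
        frac -= int(frac)
    return bits

# Lay 48 bits into a 6x8 grid and scan for a chosen 1x3 row pattern --
# the same kind of search Kory imagines, in miniature.
bits = pi_bits(48)
grid = [bits[8 * r:8 * r + 8] for r in range(6)]
target = [1, 1, 1]              # say, the row of a Life "blinker"
found = [(r, c) for r in range(6) for c in range(6)
         if grid[r][c:c + 3] == target]
print("pattern found at:", found)
```

With enough digits any finite pattern is expected to turn up somewhere (assuming pi is normal, which is unproven), so the "partial zombie in the digits of pi" question is not idle: both the orderly and the empty-headed configurations will occur.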
-- Kory
> I still feel like I don't have a handle on how you feel the move to
> Platonia solves these problems. If we imagine the mathematical
> description of filling a 3D grid with the binary digits of PI,
> somewhere within it we will find some patterns of bits that look as
> though they're following the rules to Conway's Life. If we see
> creatures in there, would they be conscious? What about the areas in
> that grid where we find the equivalent of Empty-Headed Alice, where
> most of the cells seem to be "following the rules" of Conway's Life,
> but the section where a creature's "visual cortex" ought to be is just
> filled with zeros? In other words, why doesn't the "partial zombie"
> problem still exist for us in Platonia?
Asking questions like this about platonic objects isn't like asking
the same questions about objects in a physical world. Abstract
threeness is not a kind of picture of what we would recognise as
threeness in the physical world: three objects, or five objects which
could be seen as two lots of two and one lot of one object, or the
Arabic numeral "3". Similarly, you can't point to a picture of a
physical computer and ask whether that is giving rise to a particular
computation in Platonia. Threeness, computations and consciousness
exist eternally and necessarily, and can't be created, destroyed or
localised.
I realise this comes close to regarding consciousness as akin to the
religious notion of a disembodied soul. But what are the alternatives?
As I see it, if we don't discard computationalism the only alternative
is to deny that consciousness exists at all, which seems to me
incoherent.
--
Stathis Papaioannou
I understand (I think) how threeness and computations exist eternally
in Platonia, but I don't understand your Platonic notion of
consciousness. Even after the move to Platonia, I'm still viewing
consciousness as something fundamentally computational. Are you?
-- Kory
Yes, and I think of consciousness as an essential side-effect of the
computation, as addition is an essential side-effect of the sum of two
numbers. But I don't have a clear idea of what a Platonic object
actually *is*: all I can think about are the shadows of the Platonic
forms encountered in the physical world.
--
Stathis Papaioannou
Stathis Papaioannou wrote:
> I realise this comes close to regarding consciousness as akin to the
> religious notion of a disembodied soul. But what are the alternatives?
> As I see it, if we don't discard computationalism the only alternative
> is to deny that consciousness exists at all, which seems to me
> incoherent.
ACK
But the differences are so enormous that one is again very far from
religion. In religion, the soul is an "essence" of a person interfacing
with a material body and usually exposed to some kind of judgement in an
afterlife.
With COMP the "soul" - better: mind - is all there is - no material
world, no essence, no judgements, just COMP. And it "supervenes" on -
better: is (inside/outside view) - computations (see UDA for details ;-)
Cheers,
Günther
>
>
>
> Stathis Papaioannou wrote:
>
>> I realise this comes close to regarding consciousness as akin to the
>> religious notion of a disembodied soul. But what are the alternatives?
>> As I see it, if we don't discard computationalism the only alternative
>> is to deny that consciousness exists at all, which seems to me
>> incoherent.
>
> ACK
>
> But the differences are so enormous that one is again very far from
> religion. In religion, the soul is an "essence" of a person interfacing
> with a material body and usually exposed to some kind of judgement in an
> afterlife.
I guess you mean "our occidental religions" (which are about 40%
Plato, 60% Aristotle, say).
>
>
> With COMP the "soul" - better: mind - is all there is - no material
> world, no essence, no judgements,
Well, to be frank, we don't know that. Open problem :)
> just COMP.
Well, mainly its consequences, IF true.
Thanks for your encouraging kind remarks in your posts, Günther.
Bruno
Ok, I'm with you so far. But I'd like to get a better handle on your
concept of a computation in Platonia. Here's one way I've been
picturing "platonic computation":
Imagine an infinite 2-dimensional grid filled with the binary digits
of PI. Now imagine an infinite number of 2-dimensional grids on top of
that one, with each grid containing the bits from the grid beneath it,
as transformed by the Conway's Life rules. This is a description of a
platonic computational object. Of course, my language is somewhat
"visual", but that's incidental. The point is, this is a precisely
defined mathematical object. We can "point at" any cell in this
infinite grid, and there is an answer to whether or not this bit is on
or off, given our definitions. (More formally, we can define an
abstract computational function that accepts any integer and returns
the state of that bit, given all of our definitions.)
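As a concrete toy version of this construction (my own sketch, not from the thread: a small wraparound grid in place of the infinite one, and a glider in place of the binary digits of PI), one can literally write down the function that returns the state of any cell at any layer:

```python
from functools import lru_cache

N = 8  # toy grid size with wraparound; the thought experiment uses an infinite grid

# Stand-in for "the binary digits of PI": any fixed bit pattern works
# for the illustration. Here, a single Conway's Life glider.
GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def seed(x, y):
    return 1 if (x, y) in GLIDER else 0

@lru_cache(maxsize=None)
def bit(layer, x, y):
    """State of cell (x, y) at the given layer, fixed purely by the
    seed and the Conway's Life rules (toroidal boundaries)."""
    x, y = x % N, y % N
    if layer == 0:
        return seed(x, y)
    live = sum(bit(layer - 1, (x + dx) % N, (y + dy) % N)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))
    prev = bit(layer - 1, x, y)
    return 1 if live == 3 or (prev == 1 and live == 2) else 0

# Every query has a definite answer, determined by the definitions alone:
print(bit(5, 3, 4))  # a definite 0 or 1, "timelessly" fixed
```

The point of the sketch is only that such a function is well defined with no process ever "running" it: each bit's value follows from the definitions, which is what the appeal to a platonic computational object amounts to.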
Do you find this an acceptable way (not necessarily the only way) of
describing a computational platonic object? How would you talk about
how consciousness relates to the conscious-seeming patterns in this
platonic object? Would you say that consciousness "supervenes" on
those portions of this platonic computation?
-- Kory
> Ok, I'm with you so far. But I'd like to get a better handle on your
> concept of a computation in Platonia. Here's one way I've been
> picturing "platonic computation":
>
> Imagine an infinite 2-dimensional grid filled with the binary digits
> of PI. Now imagine an infinite number of 2-dimensional grids on top of
> that one, with each grid containing the bits from the grid beneath it,
> as transformed by the Conway's Life rules. This is a description of a
> platonic computational object. Of course, my language is somewhat
> "visual", but that's incidental. The point is, this is a precisely
> defined mathematical object. We can "point at" any cell in this
> infinite grid, and there is an answer to whether or not this bit is on
> or off, given our definitions. (More formally, we can define an
> abstract computational function that accepts any integer and returns
> the state of that bit, given all of our definitions.)
>
> Do you find this an acceptable way (not necessarily the only way) of
> describing a computational platonic object? How would you talk about
> how consciousness relates to the conscious-seeming patterns in this
> platonic object? Would you say that consciousness "supervenes" on
> those portions of this platonic computation?
I struggle with the question of what a platonic object actually is,
even for something very simple. Let's say the implementation of a
circle supports roundness in the same way that a certain computation
supports consciousness. We can easily think of many ways a circle can
be represented in the real world, but which of these should we think
of when considering the platonic object? Is it possible to point to
platonic square and say it isn't round, or does the square support
roundness implicitly since it could be considered a circle
transformed? And is there any reason not to consider roundness as a
basic platonic object in itself, perhaps with circles somehow
supervening on roundness rather than the other way around?
--
Stathis Papaioannou
I see what you mean. But I'm uncomfortable with (what I perceive as)
the resulting vagueness in the platonic view of consciousness. You've
indicated that you think of consciousness as fundamentally
computational and Platonic - that it's an essential side-effect of
platonic computations, as addition is the essential side-effect of the
sum of two numbers. But if we don't have a clear conception of
"platonic computations", do we even really know what we're talking
about? I'm worried, essentially, that the move to Platonia "solves"
the problems created by these thought experiments only by creating a
view of consciousness that's too vague to allow such problems to arise.
-- Kory
> I see what you mean. But I'm uncomfortable with (what I perceive as)
> the resulting vagueness in the platonic view of consciousness. You've
> indicated that you think of consciousness as fundamentally
> computational and Platonic - that it's an essential side-effect of
> platonic computations, as addition is the essential side-effect of the
> sum of two numbers. But if we don't have a clear conception of
> "platonic computations", do we even really know what we're talking
> about? I'm worried, essentially, that the move to Platonia "solves"
> the problems created by these thought experiments only by creating a
> view of consciousness that's too vague to allow such problems to arise.
I agree that it's vague, but any way you look at it consciousness is
vague, slippery and elusive. This is probably why philosophers and
scientists who like to be clear about things have sometimes come to
the conclusion that consciousness is not real at all: the only real
thing is intelligence, which manifests as intelligent behaviour. This
idea steers a course between Scylla (paradoxes) and Charybdis
(vagueness and mysticism) and is attractive... as long as you avoid
introspection.
--
Stathis Papaioannou
This is an answer to a mail from a few weeks back; I did not have the
time until now.
> With comp, we have a (non-
> denumerable) infinity of computations, going through a (denumerable)
> infinity of states, and only a few of them, I would say, will have a 1-OM
> role or a 3-OM role. Even fewer (a priori) will belong to
> sharable computations (physical realities).
Ok, so, in your view, some states code for 1-OM roles (qualia) and some
states code for shareable views (quanta). Most states code for nothing.
What is a 3-OM? Do you mean a 3rd-person view description of an OM?
(for instance the "zombie" coding of a COMP state in a light-beam sent
from Earth to Mars)?
> expressible. It is still an open problem whether comp leads to solipsism,
> but all the evidence available today suggests it does not lead to solipsism.
Which evidence? Is that one of your technical results? Which one?
>> It would not be a dualism, it would be mind-monism, but the "objects"
>> being computed would not be OMs directly but some kind of basic
>> mind-components - this idea is not new, in fact these objects would
>> correspond to the "dharmas" of yogacara (and also Theravada Buddhism,
>> but not so clearly there). (see
>> http://en.wikipedia.org/wiki/Dharmas#Dharmas_in_Buddhist_phenomenology)
>>
>> One would lose the wonderful OM-COMP correspondence (which I think is an
>> important feature of your COMP)
>> and get some kind of "binding problem"
>> again - how a unified consciousness results from the "dharmas"; but one
>> would be able to better explain how we have shareable histories (which
>> is I think a _weak point_ of COMP if related directly to OMs - as has
>> already been mentioned on the list, we can drift into solipsism with
>> COMP quite easily (and I don't see why shareable histories of any great
>> measure should evolve)
>> And I would be interested what you think of the idea to let COMP govern
>> a "dharma"-level and not an OM-level directly.
>
>
> I am asking myself if you are not doing a 1004 fallacy(*).
> (*)Like when Bruno said "about 1004 sheep" in "Sylvie and Bruno" by
> Lewis Carroll.
<snip>
It's not a 1004 fallacy, it is rather an attempt to recover some aspects
of materialism. (See the Chalmers excerpt I have included below on
type-F monism)
But then, of course, it would succumb also to the MGA argument (that is,
it does not go together with COMP).
> Try to explain, as if to a "layman", the difference you
> make between "dharma-level" and "OM-level". Which OM?
I guess I mean the difference between type-F monism and pure idealism
(see again Chalmers text included below). Your view is a pure idealism,
the type-f monism is a bit nearer to mainstream views (though still not
widely held).
> (remember that the supervenience thesis is more conscience/"relative
> implementation of states" than conscience/"implementation of states".
> The relativity will add the probability (or credibility) of context and
> histories.
Ah ok - so you mean that also with COMP and UDA there could be "raw
feels" instantiated in Platonia (that would be dharma-level) - or some
kind of protoexperiential?
Let me rephrase my question: with MAT, we have certain ideas (which
might be wrong) on what mind could supervene on: on brains, that is, on
certain organic chemical structures which exhibit high complexity and
causal interaction. And this consciousness is a _unified_ experience
(which also makes it a bit mysterious for MAT).
With COMP, I am not sure on what consciousness would supervene. On a
single step of a computation? On a turing machine state? On a
number-theoretic relation? On a proof?
And is the supervenient consciousness always tied to an integrated whole
like a person, or, as I asked above, could also "raw feels" supervene on
some parts of a computation which, relatively to others, constitute part
of a computation on which a unified experience would supervene. (maybe
that is what you mean with conscience/"relative implementation of states"?)
Is that also why you think that COMP is not solipsistic? For example, if
consciousness directly supervenes on some form of computation, and
physical appearances (SWE etc) have to be derived from the measure on
outgoing computations, you must also be aware that all humans in your
experience are "physical objects" - they would only _not_ be zombies if
they are "fully computed" (such as your OM) - but how can you guarantee
that with the idealistic interpretation that you have? With the
"relative implementations"?
I think that there is a bit of a difficulty hidden there.
I am interested in your thoughts.
Cheers,
Günther
P.S.: Below the excerpt from Chalmers. Some words on type-F monism and
idealism.
Consciousness and its Place in Nature http://consc.net/papers/nature.html
David J. Chalmers
[[Published in (S. Stich and F. Warfield, eds) Blackwell Guide to the
Philosophy of Mind (Blackwell, 2003), and in (D.
Chalmers, ed) Philosophy of Mind: Classical and Contemporary Readings
(Oxford, 2002).]]
Type-F Monism
Type-F monism is the view that consciousness is constituted by the
intrinsic properties of fundamental physical entities: that is, by the
categorical bases of fundamental physical dispositions *[[Versions of
type-F monism have been put forward by Russell 1926, Feigl 1958/1967,
Maxwell 1979, Lockwood 1989, Chalmers 1996, Griffin 1998, Strawson 2000,
and Stoljar 2001.]]
snip
This view holds the promise of integrating phenomenal and physical
properties very tightly in the natural world. Here, nature consists of
entities with intrinsic (proto)phenomenal qualities standing in causal
relations within a spacetime manifold. Physics as we know it emerges
from the relations between these entities, whereas consciousness as we
know it emerges from their intrinsic nature. As a bonus, this view is
perfectly compatible with the causal closure of the microphysical, and
indeed with existing physical laws.
The view can retain the structure of physical theory as it already
exists; it simply supplements this structure with an intrinsic nature.
snip
This view has elements in common with both materialism and dualism. From
one perspective, it can be seen as a sort of materialism. If one holds
that physical terms refer not to dispositional properties but the
underlying intrinsic properties, then the protophenomenal properties can
be seen as physical properties, thus preserving a sort of materialism.
From another perspective, it can be seen as a sort of dualism. The
view acknowledges phenomenal or protophenomenal properties as
ontologically fundamental, and it retains an underlying duality between
structural-dispositional properties (those directly characterized in
physical theory) and intrinsic protophenomenal properties (those
responsible for consciousness). One might suggest that while the view
arguably fits the letter of materialism, it shares the spirit of
antimaterialism.
In its protophenomenal form, the view can be seen as a sort of neutral
monism: there are underlying neutral properties X (the protophenomenal
properties), such that the X properties are simultaneously responsible
for constituting the physical domain (by their relations) and the
phenomenal domain (by their collective intrinsic nature). In its
phenomenal form, it can be seen as a sort of idealism, such that mental
properties constitute physical properties, although these need not be
mental properties in the mind of an observer, and they may need to be
supplemented by causal and spatiotemporal properties in addition. One
could also characterize this form of the view as a sort of panpsychism,
with phenomenal properties ubiquitous at the fundamental level. One
could give the view in its most general form the name panprotopsychism,
with either protophenomenal or phenomenal properties underlying all of
physical reality.
snip
There is one sort of principled problem in the vicinity. Our
phenomenology has a rich and specific structure: it is unified, bounded,
differentiated into many different aspects, but with an underlying
homogeneity to many of the aspects, and appears to have a single subject
of experience. It is not easy to see how a distribution of a large
number of individual microphysical systems, each with their own
protophenomenal properties, could somehow add up to this rich and
specific structure. Should one not expect something more like a
disunified, jagged collection of phenomenal spikes?
This is a version of what James called the combination problem for
panpsychism, or what Stoljar (2001) calls the structural mismatch
problem for the Russellian view (see also Foster 1991, pp. 119-30). To
answer it, it seems that we need a much better understanding of the
compositional principles of phenomenology: that is, the principles by
which phenomenal properties can be composed or constituted from
underlying phenomenal properties, or protophenomenal properties. We have
a good understanding of the principles of physical composition, but no
real understanding of the principles of phenomenal composition. This is
an area that deserves much close attention: I think it is easily the
most serious problem for the type-F monist view. At this point, it is an
open question whether or not the problem can be solved.
snip
Overall, type-F monism promises a deeply integrated and elegant view of
nature. No-one has yet developed any sort of detailed theory in this
class, and it is not yet clear whether such a theory can be developed.
But at the same time, there appear to be no strong reasons to reject the
view. As such, type-F monism is likely to provide fertile grounds for
further investigation, and it may ultimately provide the best
integration of the physical and the phenomenal within the natural world.
snip
Second, some nonmaterialists are idealists (in a Berkeleyan sense),
holding that the physical world is itself constituted by the conscious
states of an observing agent. We might call this view type-I monism. It
shares with type-F monism the property that phenomenal states play a
role in constituting physical reality, but on the type-I view this
happens in a very different way: not by having separate "microscopic"
phenomenal states underlying each physical state, but rather by having
physical states constituted holistically by a "macroscopic" phenomenal
mind. This view seems to be non-naturalistic in a much deeper sense than
any of the views above, and in particular seems to suffer from an
absence of causal or explanatory closure in nature: once the natural
explanation in terms of the external world is removed, highly complex
regularities among phenomenal states have to be taken as unexplained in
terms of simpler principles. But again, this sort of view should at
least be acknowledged.
>>> I think you are correct, but allowing the observer to be mechanically
>>> described as obeying the wave equation (whose solutions obey comp),
>>
>> Hmm well if you have a basis, yes; - but "naked" infinite-dimensional
>> Hilbert Space (the "everything" in QM)?
>
>
> You put the finger on a problem I have with QM. I will make a confession:
> I don't believe QM is "really" Turing universal.
> The universal quantum rotation does not generate any interesting
> computations!
Could you please elaborate a bit on the two sentences above? I am
missing more context to understand what "really" points to. And as for
the second sentence, I simply don't understand it.
> I am open, say, to the idea that quantum universality needs measurement,
> and this could only exist internally. So the "naked" infinite-dimensional
> Hilbert space + the universal wave (rotation, unitary transformation) is
> a simpler ontology than arithmetical truth.
> Yet, even on the vacuum, from inside it gives all the non-linearities
> you need to build arithmetic ... and consciousness.
Cheers,
mirek
> The classical universal
> dovetailer easily generates all the quantum computations, but I find
> it hard to just define *one* unitary transformation, without measurement,
> capable of generating forever greater computational memory space. Other
> problems are more technical, and are related to the very notion of
> universality and are rather well discussed in the 2007 paper:
>
> Deutsch's Universal Quantum Turing Machine revisited.
> http://arxiv.org/pdf/quant-ph/0701108v1
>
> I could relate this to a technical problem with the BCI combinator
> algebras, that is, those structures in which every process is reversible
> and no cloning is possible (cf the No Kestrel, No Starling summary of
> physics(*)). Those algebras are easily shown to be non-Turing-universal,
> and pure unitarity seems to me to lead to such algebras.
>
> Could you implement with a quantum computer the "really infinite"
> counting algorithm by a purely unitary transformation? The one which
> generates without stopping 0, 1, 2, 3, ... That would already be a big help.
>
> Bruno
>
> (*) Marchal B., 2005, Theoretical computer science and the natural
> sciences
>
> Thank you for a quick answer! I'll take a look at it, my curiosity
> approves additional items on my TODO list :-)
Manage to keep your todo list finite :)
I have finished reading the paper I mentioned (Deutsch's
Universal Quantum Turing Machine revisited) and I see they have very
similar problems, probably better described. The paper mentions (but
does not tackle) an old problem already described by Shi 2002, which
made me think at the time that the notion of Universality is a bit
dubious in the quantum realm.
To sum up: is there a (never stopping) quantum counting algorithm? I
think I can build a Quantum UD from it, well, in case the Shi problem
is not too devastating.
But here, and now, I got a feeling there is just no quantum counting
algorithm ...
Cheers,
Bruno
PS Note that AUDA (the arithmetical UDA) is in principle already able
to solve completely that problem. It is still possible that "the
material hypostases" of the self-observing *classical* universal
machine lack both the kestrels and the starlings, and their
descendant combinators in which case comp predicts that physics is NOT
Turing Universal. Comp would predict that not all natural numbers are
in any possible nature or physics!
"in principle" only because the translation in arithmetic leads to
very complex arithmetical formula (bounded by PI_1 IN Arithmetical
Truth, if you know a bit of degrees of unsolvability. I will perhaps
explain a bit of this, but take it easy for not making explode the
todo list :).
Note the beauty of comp: even if there are no physical universal
machines in the physical universe (including the physical universe(s)),
*you* (and other persons) are and remain universal machines.
We do not live in physical universes, we just traverse them, to be able
to chat some bits, perhaps. The first persons would be spiraling
through an infinite sequence of rotations, to put it in an image.
> I have finished reading the paper I mentioned (Deutsch's
> Universal Quantum Turing Machine revisited) and I see they have very
> similar problems, probably better described.
I finished a rather careful reading of that paper (QTM revisited) too,
http://arxiv.org/pdf/quant-ph/0701108v1
and if I got it right the main authors' point is:
Claims:
1) A Universal Probabilistic Turing Machine (PTM) can simulate the set
of PTMs with computable transition probabilities EXACTLY.
2) A Universal QTM can simulate the set of QTMs with computable
amplitudes only approximately.
Conclusion:
The notion of universality for Quantum TM is not of the same kind that
we have for Deterministic TMs and Probabilistic TMs.
--------
Well, the first claim is correct and the corresponding algorithm for an
EXACT simulation is very simple. I think you know this well, but for the
sake of having a good reference, see for example Lemma 7.14 in
http://www.cs.princeton.edu/theory/complexity/bppchap.pdf
A tricky point, of course, is that in order to achieve an EXACT
simulation your algorithm will potentially never stop. For example,
trying to achieve output probability P=1/3 using a UPTM with transition
probabilities {0,1/2,1} is exact only in the limit.
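The 1/3 example can be made concrete with a small sketch (my own illustration, not taken from the Lemma): sample a Bernoulli(1/3) bit from fair coin flips by comparing a lazily generated uniform number, bit by bit, against the binary expansion of 1/3 = 0.010101... The procedure halts with probability 1, but there is no finite bound on the number of flips, which is exactly the "exact only in the limit" point.

```python
import random

def bernoulli_one_third(flip=lambda: random.randint(0, 1)):
    """Return 1 with probability exactly 1/3 using only fair coin flips.
    Compares a uniform U in [0,1), revealed one binary digit at a time,
    against 1/3 = 0.010101... in binary. Halts with probability 1, but
    any fixed number of flips may be insufficient."""
    third_bit = lambda i: i % 2   # binary digits of 1/3: 0, 1, 0, 1, ...
    i = 0
    while True:
        u = flip()
        t = third_bit(i)
        if u < t:
            return 1   # first differing bit says U < 1/3
        if u > t:
            return 0   # first differing bit says U > 1/3
        i += 1         # tie so far: reveal the next bit

samples = [bernoulli_one_third() for _ in range(100_000)]
print(sum(samples) / len(samples))  # empirically close to 1/3
```

The expected number of flips is finite (the loop survives each round with probability 1/2), but the worst case is unbounded, so the simulation is "exact" only as a limit, never after any guaranteed-finite run.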
In practice, such an EXACT simulation is not needed, and people prefer
to say that one machine CAN simulate another machine if the properties in
question can be approximated with ARBITRARY accuracy. Yes, and it should
be reasonably fast. Typically the penalty for a better accuracy is
upper-bounded by a polylog factor.
Regarding the second claim, it is not true to my knowledge.
Approximation of amplitudes is a convergent process - set your accuracy,
suffer a polylog slowdown factor, done. Go to the limit and you get an
exact simulation.
> The paper mentions (but
> does not tackle) an old problem already described by Shi 2002, which
> made me think at the time that the notion of Universality is a bit
> dubious in the quantum realm.
I don't know which problem you mean. In the QTM Revisited paper, the
authors do not supply a valid reference, the paper they refer to does not
exist, and they don't even get the first name of Dr. Shi right.
Thus I may only assume that you/the authors refer to this paper
http://arxiv.org/pdf/quant-ph/0205115v2
I read this paper a few years ago and after a quick scan today I'm
not aware of any explicitly described problem. On the contrary, the
message of the paper is that it is 'easy' to find a universal set of
quantum gates (given that you start, for better or worse, from a
classical universal set of gates).
> To sum up: is there a (never stopping) quantum counting algorithm? I
> think I can build a Quantum UD from it, well in case the Shi problem
> is not too much devastating.
> But here, and now, I got a feeling there is just no quantum counting
> algorithm ...
>
Please be more specific about what you mean by a quantum counting
algorithm. Sometimes I'm not too bright a guy :-)
Is this what you mean?
step 1\ |0>
step 2\ |0> + |1>
step 3\ |0> + |1> + |2>
....
or (a classical machine operated by quantum means)
step 1\ |0>
step 2\ |1>
step 3\ |2>
....
or something different :-)
Best,
mirek
>
> Please be more specific about what you mean by a quantum counting
> algorithm. Sometimes I'm not too bright a guy :-)
Really? Not here I think. The question *was* and *is* fuzzy.
>
>
> Is this what you mean?
> step 1\ |0>
> step 2\ |0> + |1>
> step 3\ |0> + |1> + |2>
> ....
>
Interesting. Perhaps an electron climbing the energy
states in some way, at carefully chosen frequencies?
>
> or (a classical machine operated by quantum means)
> step 1\ |0>
> step 2\ |1>
> step 3\ |2>
> ....
>
> or something different :-)
My question has perhaps no sense at all. Is there a notion of quantum
computation done without any measurement? Is there a purely unitary
transformation which "augments" the dimensionality of the initial
quantum machine? Does the notion of universal quantum dovetailing
make sense?
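One way to see the dimensionality worry concretely (my own numpy illustration, assuming ordinary finite-dimensional quantum mechanics): the "counting" states |0>, (|0>+|1>)/sqrt(2), (|0>+|1>+|2>)/sqrt(3), ... live in Hilbert spaces of strictly growing dimension, while any fixed unitary is a square matrix acting on one fixed space, preserving both dimension and norm.

```python
import numpy as np

def counting_state(n):
    """Normalized uniform superposition of |0>, ..., |n-1>:
    a unit vector in an n-dimensional Hilbert space."""
    return np.full(n, 1 / np.sqrt(n))

# Each step of the "counting" lives in a strictly larger space:
for n in range(1, 5):
    print(n, counting_state(n).shape)

# A unitary, by contrast, maps C^n to C^n and preserves the norm,
# so no single fixed U can implement counting_state(n) -> counting_state(n+1)
# for all n. Example with a 2x2 unitary (the NOT gate):
U = np.array([[0, 1], [1, 0]])
v = counting_state(2)
w = U @ v
assert np.isclose(np.linalg.norm(w), 1)  # norm preserved
assert w.shape == v.shape                # dimension unchanged
```

This is only a finite-dimensional picture of the obstruction; it says nothing by itself about infinite-dimensional tricks, which is presumably where the open question lives.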
I can't find my Shi papers, but from what I recall, it gives some good
arguments about the difficulty of redefining the halting problem
(halting in which universe? ...).
I have no problem with most quantum algorithms, but no clear idea of
what a quantum computation in general can really be, though I have
little doubt that it really exploits superposed "physical
realities" (assuming QM, that is the SWE).
Don't worry. Sometimes I'm not too bright a guy either :-)
Bruno
The quantum lambda calculus by Andre van Tonder does not contain
measurement.
http://arxiv.org/pdf/quant-ph/0307150v5
From the abstract, he proves equivalence between his quantum lambda
calculus and the quantum Turing machine (also without measurement).
That's all I know in this respect for the moment.
> Is there a purely unitary
> transformation which "augments" the dimensionality of the initial
> quantum machine? Does the notion of universal quantum dovetailing
> make sense?
I am not too familiar with the process of dovetailing, but I'm fine
with the general idea that there is a program which systematically
generates every possible C/Lisp code and, in between the steps of this
generation, interprets parts of what has already been generated.
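That interleaving idea can be made concrete in a few lines. A minimal sketch (plain Python; program *indices* stand in for the generated C/Lisp codes, and "one execution step" is reduced to bookkeeping):

```python
def dovetail(stages):
    """Dovetailing schedule: at stage k, generate program k, then give
    one more execution step to every program generated so far.
    Returns (program_index, step_number) pairs in execution order;
    no non-halting program can ever block the enumeration."""
    schedule = []
    for k in range(stages):
        for i in range(k + 1):               # programs 0..k each advance
            schedule.append((i, k - i + 1))  # program i reaches step k-i+1
    return schedule

print(dovetail(3))
# -> [(0, 1), (0, 2), (1, 1), (0, 3), (1, 2), (2, 1)]
```

Since the schedule (together with a universal interpreter) is computable, the standard universality results let it in principle be realized by a growing family of boolean circuits, e.g. of NOR gates, though drawing that out explicitly is tedious.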
Can you sketch how one should think about such dovetailing in terms
of classical logic gates, please?
> I can't find my Shi paper, but from what I remember, it gives some
> good arguments about the difficulty of redefining the halting problem
> (halting in which universe? ...).
Good, your note about the halting problem helped to refine my Google
search to the extent that I've found the Shi paper you are talking
about. I also hereby apologize to the authors of the QTM Revisited
paper; their reference was correct.
http://dx.doi.org/10.1016/S0375-9601(02)00015-4
I'll read it.
Regards,
mirek
There are certain arguments (Deutsch, Wallace, Greaves) proposing
that probabilities (and the Born rule) can be derived from decision
theory - although I am not convinced (see for instance Price 2008 -
http://arxiv.org/abs/0802.1390).
Criticism notwithstanding, I think that Deutsch et al. have made
interesting inroads into the problem of probability, and I am
optimistic that it will be solved. Everett is at least not worse off
than other interpretations concerning probability.
But I have a question concerning the UDA: here, we don't even have
the intuitive concepts of wave interference, decoherence, etc.
anymore, which could account for different branch weights.
In a sense, I don't see how a computation could be "cancelled" by
another one. Do you have intuitions about how to derive the Born rule
from the measure of all computations? Why should some consistent
histories be more probable than others? (There is of course the
Quantum Logic and Gleason's theorem connection, which we have
discussed briefly.) But I don't see any _intuitive_ connection: why
should there be histories with different probabilities?
Best Wishes,
Günther
>
> Again a question for Bruno ;-)
>
> There are certain arguments (Deutsch, Wallace, Greaves) proposing
> that probabilities (and the Born rule) can be derived from decision
> theory - although I am not convinced (see for instance Price 2008 -
> http://arxiv.org/abs/0802.1390).
Careful: they derive the Born rule from decision theory AND Everett
QM (= the SWE + comp, mainly).
(But the UDA tells us that with comp we also have to derive QM.)
>
>
> Criticism notwithstanding, I think that Deutsch et al. have made
> interesting inroads into the problem of probability, and I am
> optimistic that it will be solved.
I think so too.
> Everett is at least not worse off
> than other interpretations concerning probability.
I share with Everett and deWitt the opinion that the SWE provides its
own internal interpretation. Indeed, I believe Arithmetic provides its
own internal many interpretations.
>
>
> But I have a question concerning the UDA: here, we don't even have
> the intuitive concepts of wave interference, decoherence, etc.
> anymore, which could account for different branch weights.
The UDA provides an intuition about the branch weights: the terribly
subtle redundancy of the computations. So the first "intuitive
weight" is just the measure on the computations going through your
state. But this is a priori purely additive. So this means that
either a quantized (digital) version of a UD will win the "measure
battle" among universal dovetailers, or that we are forgetting to
take some data into account. And indeed we forget that we have to
take into account the measure *from the point of view of the
machine*. This leads to the *intensional* variants of the logics of
self-reference G and G*, which lead to the material hypostases (for
the probability), which lead to a quantum, Goldblatt-like,
arithmetical interpretation of quantum logic.
The question is: will this give rise to a sufficiently symmetrical
base, like in QM?
The material hypostases multiply themselves into an infinity of
weaker variants, giving rise to "arithmetical projections" surrounded
by linear and symmetrical structures, but this is not in my theses. I
am still hoping to find a sufficiently rich Temperley-Lieb algebra,
so that I get a notion of space or universal braiding ...
>
>
> In a sense, I don't see how a computation could be "cancelled" by
> another one.
Tricky problem! Sure.
> Do you have intuitions about how to derive the Born rule from
> the measure of all computations?
By the arithmetical quantum logic. We should derive the SWE and the
projection rules.
Don't expect anything easy here.
> Why should some consistent histories be
> more probable than others?
Because they are more numerous, and handle better the coupling with
noise and perverse arithmetical histories (white rabbits). If not,
well, we will *have* a reason to doubt comp.
> (There is of course the Quantum Logic and
> Gleason's theorem connection, which we have discussed briefly.)
Yes, exactly. Today we have just that sign. It is the last thing I
got, in 1994.
> But I
> don't see any _intuitive_ connection: why should there be histories
> with different probabilities?
The intuition is the redundancy of the UD's work, and the way
histories have to be measured *from the point of view* of the
machine. In AUDA, the "material pov", which by UDA is given by a
probability or credibility measure on the consistent extensions, is
given by Bp & Dt (and their variants B^n p & D^m t, n less than or
equal to m). With p sigma_1 ("comp" in the language of the machine),
this gives rise to B logics without the necessitation rule, which is
enough to get an arithmetical quantization.
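For readability, here is the same notation transcribed into standard modal symbols (a transcription of the paragraph above, not a new result; B is the provability box of G, D = ¬B¬ its dual):

```latex
% Material point of view (the probability/credibility measure):
\Box p \wedge \Diamond t
% with the graded variants
\Box^{n} p \wedge \Diamond^{m} t, \qquad n \le m,
% where p ranges over the \Sigma_1 sentences ("comp" in the language
% of the machine), t is the constant true, and the resulting logics
% lose the necessitation rule; enough to obtain an arithmetical
% quantization.
```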
Our intuition is lost here, of course. But we could have expected
this, no?
Best,
Bruno
> In a sense, I don't see how a computation could be "cancelled" by
> another one.