I see that one of the earlier participants on the Everything list has now
taken it upon himself to educate the masses because the "cat is out of the
bag" and QI has become a familiar topic to many.
http://arxiv.org/ftp/arxiv/papers/0902/0902.0187.pdf
Does he say anything in this article that he hasn't said on the Everything
list in his struggles against QI?
----- End forwarded message -----
I have now read the whole of Jacques Mallah's "paper", and to put it
mildly, it is disappointing. I would have expected more from him. It
is neither the "definitive debunking" hoped for by the author, nor is
it persuasive in the rhetorical sense. What little technical detail he
provides obscures, rather than illuminates, the issue.
So what is the paper? I mentioned the interesting comment on how we
should expect to find ourselves a Boltzmann brain shortly after the
big bang, but there was no follow up to this. I have no idea how he
came up with that notion.
His discussion of the Born rule is incorrect. The probability given by
the Born rule is not the square of the state vector, but rather the square
modulus of the inner product of some eigenvector with the original
state, appropriately normalised to make it a probability. After
observation, the state vector describing the new state will be
proportional to the eigenvector corresponding to the measured
eigenvalue, but nothing
in QM says anything about its amplitude. Indeed it is conventional to
normalise the resulting state vector, as a computational convenience -
but this is an entirely different proposition to Mallah's.
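For concreteness, here is a minimal numerical sketch of the Born rule as described above. The two-level state and eigenvectors are made up purely for illustration; the point is that the probability is the squared modulus of the inner product with the eigenvector, normalised by the state's norm, and that renormalising the post-measurement state is a convention, not a prediction of QM:

```python
import numpy as np

# Toy two-level system. The state vector is deliberately NOT normalised.
psi = np.array([3.0, 4.0])   # hypothetical unnormalised state
e0 = np.array([1.0, 0.0])    # eigenvectors of the measured observable
e1 = np.array([0.0, 1.0])

def born_probability(eigvec, state):
    """Squared modulus of the inner product, normalised by <state|state>."""
    return abs(np.vdot(eigvec, state))**2 / np.vdot(state, state).real

p0 = born_probability(e0, psi)   # 9/25  = 0.36
p1 = born_probability(e1, psi)   # 16/25 = 0.64

# After measuring eigenvalue 0, the new state is proportional to e0;
# its overall amplitude is a matter of convention, not physics.
post = np.vdot(e0, psi) * e0                    # unnormalised projection
post_normalized = post / np.linalg.norm(post)   # conventional renormalisation
```

Note that `p0 + p1 = 1` automatically, whether or not `psi` was normalised to begin with.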
What I think he is trying to discuss, somewhat clumsily, in the
section on measure, is the ASSA notion of a unique well-defined
measure for all observer moments. This has been discussed in this
list extensively, and also summarised in my book. But it would sure
confuse anyone not familiar with the notion.
He goes on to mention rather briefly in passing his doomsday style
argument against QI, but not in detail. Which is just as well, as that
argument predicts that we should be neonatal infants!
He also mentions Tegmark's amoeba croaks argument, which is not
actually an argument against QI, but rather a discussion of what QI
might actually mean. Contrary to what some people might think, QI
doesn't predict one would necessarily experience being vastly older
than the rest of the population. It just predicts that we should all
experience a "good innings", and that what happens after that is
rather unpredictable - it may be lapsing into senescence, it may be
followed by rebirth into a different consciousness, it may be a form of
afterlife, or of Singularitarian-style uploading.
So sorry Jacques - you need to do better. I'm sure you can!
Cheers
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
Russell, I expected there might be some discussion of my latest eprint on this list. That's why I'm here now - to see if there are any clarifications I should make in it. I intend to make it better - and perhaps I'll have you guys to thank!
Don't expect me to stick around. I see the list hasn't changed much - Bruno is still pushing his crackpot UDA. I could tell you what's wrong with his MGA, but I'm here to deal with the QS paper first.
> http://arxiv.org/ftp/arxiv/papers/0902/0902.0187.pdf
> I mentioned the interesting comment on how we should expect to find ourselves a Boltzmann brain shortly after the big bang, but there was no follow up to this. I have no idea how he came up with that notion.
I wrote
"If one denies that the amount of “a person’s” consciousness can change as a function of time after it begins to exist and as long as there is at least some of it left, then in the quantum MWI, since there deterministically is some slight amplitude that any given particle configuration (such as that of a person’s brain) exists even shortly after the Big Bang, there would again be no reason to expect that a typical person would be the result of normal evolutionary processes – you would have been ‘born’ way back then."
Seems pretty straightforward to me:
1. Initially, before evolution occurred, a typical Boltzmann brain (BB) had about the same measure as a brain which was like what we consider a normal person's (an atypical BB).
2. The typical BB's all together vastly outnumbered the atypical ones, so they had much more total measure.
3. We are assuming here that a person's measure can't change as a function of time.
4. Therefore the initial measure advantage of the typical BB's would hold for all time.
Perhaps I should spell out the steps like that in the paper, but I thought it was self-explanatory already.
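A toy calculation may make steps 1-4 easier to follow. The brain counts below are made-up illustrative numbers, not estimates; the logic only requires that typical Boltzmann brains vastly outnumber evolved-looking ones:

```python
# Hypothetical toy numbers, purely to illustrate the four steps above.
per_brain_measure = 1.0    # step 1: same initial measure per brain
n_typical_bb = 10**9       # step 2: typical BBs vastly outnumber...
n_atypical_bb = 10         # ...brains resembling evolved people

total_typical = n_typical_bb * per_brain_measure
total_atypical = n_atypical_bb * per_brain_measure

# Step 3: assume each person's measure cannot change as a function of time.
# Step 4: then the initial ratio holds forever, so a typical observer
# should expect to be a typical Boltzmann brain, not an evolved person.
ratio = total_typical / total_atypical   # 10**8
```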
> His discussion of the Born rule is incorrect. The probability given by
> the Born rule is not the square of the state vector
Russell, Jesse Mazer has already pointed out that it is your discussion of my discussion of it that is incorrect. It's true that people use various terminology (maybe I should have said squared norm instead of squared amplitude) and I was trying to keep technicalities to a minimum. See
http://en.wikipedia.org/wiki/Probability_amplitude
> After observation, the state vector describing the new will be
> proportional to the eigenvector corresponding the measured eigenvalue,
> but nothing in QM says anything about its amplitude. Indeed it is
> conventional to normalise the resulting state vector
That only makes sense in a collapse interpretation (or for practical convenience). My guess is you looked up the Born Rule in some textbook and naturally it did not have an MWI perspective.
> What I think he is trying to discuss, somewhat clumsily, in the
> section on measure, is the ASSA notion of a unique well-defined
> measure for all observer moments.
The charge of 'clumsiness' is too vague for me to do anything about, so perhaps you could be more specific. As for self-sampling, I didn't want to use that term because it can create the confusion that something random is really going on. Instead I covered the Bayesian issues in my sections on the Reflection Argument and Theory Confirmation.
> He goes on to mention rather briefly in passing his doomsday style
> argument against QI, but not in detail.
I think the argument is presented in full. What part is missing?
> Which is just as well, as that argument predicts that we should be neonatal infants!
I remembered that odd confusion of yours has been discussed on the list before, so I Googled it. I found a 2003 post by Saibal Mitra that covers it. I think I must have posted about it too, in the old days.
http://www.mail-archive.com/everyth...@eskimo.com/msg04697.html
> "... once you take into account the possibility of dying then you will see a decrease. But ignoring that, the measure should be conserved. The measure for being in a particular state at age 30 should be much smaller than the measure for being in a particular state at age 4, but after summation over all possible states you can be in, you should find that the total measure is conserved."
Suppose you differentiate into N states, then on average each has 1/N of your original measure. I guess that's why you think the measure decreases. But the sum of the measures is N/N of the original.
This is trivially obvious so I saw no reason to mention it explicitly in the paper. If there are people other than Russell with the same confusion, then I may add it in.
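The N/N point can be put in a small sketch. Branch counts and the survival rate below are arbitrary; the sketch just shows that total measure is conserved under pure differentiation and decreases only through dying:

```python
# Sketch: differentiation into N branches conserves total measure;
# only deaths reduce it. All weights are illustrative.
def step(branches, n_split, survival=1.0):
    """Each branch splits into n_split equal parts, scaled by survival."""
    out = []
    for m in branches:
        out.extend([m * survival / n_split] * n_split)
    return out

start = [1.0]
after = step(step(start, 3), 4)   # two rounds of differentiation: 12 branches
total = sum(after)                # still 1.0: N/N of the original

with_deaths = step(start, 3, survival=0.9)
total_deaths = sum(with_deaths)   # 0.9: a decrease, but only from dying
```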
> He also mentions Tegmark's amoeba croaks argument, which is not
> actually an argument against QI, but rather a discussion of
> what QI might actually mean.
I quoted Tegmark verbatim. He says "my brain cells will gradually give out (indeed, that's already started happening...) so that I keep feeling self-aware, but less and less so, the final "death" being quite anti-climactic, sort of like when an amoeba croaks."
'Final death' - those are his words, not mine. You may not agree with him, but I don't see how you can deny that he is arguing against QI.
The discussion on amnesia and personal identity is very relevant, as perhaps people will realize that personal identity is not well defined and that the whole QS/QI thing doesn't make any sense.
Tegmark mentioned in an article the idea of self-aware structures, SAS.
He wrote that the search for such structures is ongoing, i.e., he
postulated the existence of such structures without giving examples.
I'm wondering if consciousness and self-awareness have been
"mathematized" somewhere, preferably in documents I can download without
academic affiliation / subscription.
I'm inclined to think agents might be a pathway to this end, as well as
what David Wolpert is calling a device.
Do any of the formalizations come close to being reflective of human
consciousness? In other words a mathematical model of human consciousness?
Thank you.
--- On Fri, 2/6/09, russell standish <li...@hpcoders.com.au> wrote:
> So sorry Jacques - you need to do better. I'm sure you can!
I'm happy to help - hopefully any criticisms I and others make will be
constructive. But it may take a few days, as work commitments intervene!
>
> > What I think he is trying to discuss, somewhat clumsily, in the
> > section on measure, is the ASSA notion of a unique well-defined
> > measure for all observer moments.
>
> The charge of 'clumsiness' is too vague for me to do anything about, so perhaps you could be more specific. As for self-sampling, I didn't want to use that term because it can create the confusion that something random is really going on. Instead I covered the Bayesian issues in my sections on the Reflection Argument and Theory Confirmation.
Fair comment. I will try to be more specific. But for now, take it as
a signpost that the discussion does not adequately explain the
concepts of "amount of consciousness", "effective probability",
the sampling measure used in anthropic arguments and the
interrelations between all of these.
> Suppose you differentiate into N states, then on average each has 1/N of your original measure. I guess that's why you think the measure decreases. But the sum of the measures is N/N of the original.
>
> This is trivially obvious so I saw no reason to mention it explicitly in the paper. If there are people other than Russell with the same confusion, then I may add it in.
I still find this confusing. Your argument seems to be that you won't
live to 1000 because the measure of 1000 year old versions of you in
the multiverse is very small - the total consciousness across the
multiverse is much less for 1000 year olds than 30 year olds. But by
an analogous argument, the measure of 4 year old OM's is higher than
that of 30 year old OM's, since you might die between age 4 and 30.
But here you are, an adult rather than a child. Should you feel your
consciousness more thinly spread or something?
--
Stathis Papaioannou
> I could tell you what's wrong with his MGA, but I'm here to deal with the QS paper first.
I appreciate your prioritizing your paper, but I would be interested in
what you find wrong with the MGA.
By the way, as I mentioned in a previous mail to John, my departure from
materialism started with this book:
I also read your paper, and my main problem is that you have strong
physicalist (materialist) assumptions - that there is some matter in the
universe which "instantiates" brain states.
That is indeed a problematic view - so maybe you could relate your take
on the MGA?
Cheers,
Günther
--- On Sat, 2/7/09, Jesse Mazer <laser...@hotmail.com> wrote:
> are you open to the idea
> that there might be truths about subjectivity (such as
> truths about what philosophers call 'qualia') which
> cannot be reduced to purely physical statements? Are you
> familiar with the ideas of philosopher David Chalmers, who
> takes the latter position? He doesn't advocate
> interactive dualism, where there's some kind of
> soul-stuff that can influence matter--he assumes that the
> physical world is "causally closed", so all
> physical events have purely physical causes, including all
I am very familiar with David Chalmers' position. My view is that he's wrong: If I have qualia, I don't find it plausible that they can have no influence over my spelled-out thoughts and words or actions, which is what epiphenomenalism would imply. If "true qualia" must be in addition to whatever is making me think and say I have qualia, then I have no reason to think I have the "true" ones. I am a reductive computationalist.
> If one buys into
> the possibility of objective truths about mental
> states/qualia and psychophysical laws, it wouldn't be
> such a stretch to imagine that there may be objective truths
> about the first-person probabilities of experiencing
> different branches in either the MWI or duplication
> experiments in a single universe (so that you don't have
> to rely on decision theory, which depends on non-objective
> choices about which future possibilities you 'care'
> about, to discuss quantum immortality), and that these
> probabilities could be determined by some combination of an
> objective physical measure on different brainstates and some
> set of "psychophysical laws". If so, the question
> of quantum immortality would boil down to whether a given
> mind always has a 100% chance of experiencing a
> "next" observer-moment as long as a
> "next" brainstate exists somewhere, or whether
> there is some nonzero chance of one's flow of experience
> just ending.
>
> Jesse
In the QI paper, in some of the arguments I explicitly appeal to functionalism. Most MWIers are functionalists, so those arguments should apply for them.
If dualism is assumed, there are few limits on what can happen, but if Occam's razor is applied to it you can assume things won't end up much different than without it. Chalmers himself is a computationalist (just not a reductive one).
The concept of measure, and the empirical arguments such as the Boltzmann Brains one and the general argument against immortality, should apply regardless of the physicalism/platonism/dualism debate.
You might die between 4 and 30, but the chance is fairly small, let's say 10% for the sake of argument. So, if we just consider these two ages, the effective probability of being 30 would be a little less than that of being 4 - not enough less to draw any conclusions from.
The period of adulthood is longer than that of childhood so actually you are more likely to be an adult. How likely? Just look at a cross section of the population. Some children, more adults, basically no super-old folks.
> Should you feel your consciousness more thinly spread or something?
No, measure affects how common an observation is, not what it feels like.
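The arithmetic here can be sketched with rough numbers. The 10% mortality figure is the one assumed above for the sake of argument; the age spans are hypothetical round numbers:

```python
# Illustrative figure from the discussion: ~10% mortality between ages 4 and 30.
survival_4_to_30 = 0.9

# Effective probability of an age-30 moment relative to an age-4 moment
# (per unit subjective time), given measure conservation up to deaths:
relative_p_30_vs_4 = survival_4_to_30 / 1.0   # 0.9 -- only slightly less

# But adulthood spans more years than childhood, so integrated over a
# life, most observer-moments are adult ones (spans are made up):
childhood_years = 14
adult_years = 60
p_adult = (adult_years * survival_4_to_30) / (
    childhood_years + adult_years * survival_4_to_30)   # about 0.79
```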
> You might die between 4 and 30, but the chance is fairly small, let's say 10% for the sake of argument. So, if we just consider these two ages, the effective probability of being 30 would be a little less than that of being 4 - not enough less to draw any conclusions from.
>
> The period of adulthood is longer than that of childhood so actually you are more likely to be an adult. How likely? Just look at a cross section of the population. Some children, more adults, basically no super-old folks.
Suppose I did something extremely risky as a child and survived. The
multiverse is as a result much more densely filled with my childhood
OM's. Now, it is true that a randomly sampled OM out of all the
possible OM's available to me is more likely to be one of these
childhood OM's, but random sampling of this sort is not how life
works. When I anticipate my future, the only options I need consider
are those OM's which have my present OM in their immediate subjective
past. I can't jump backwards into my childhood and I can't stand still
at the present moment if my present OM's measure is increased
enormously, any more than I can suddenly find myself a different
person entirely because their measure is a lot greater than mine.
Given that I am who I am now, there are constraints as to what
candidate OM's are allowed in the lucky dip for my next conscious
moment.
--
Stathis Papaioannou
This sort of talk about "random sampling" and "luck" is misleading and is exactly why I broke down the roles of effective probability into the four categories I did in the paper.
If you are considering future versions of yourself, in the MWI sense, there is no randomness involved. Depending on how you define "you", "you" will either be all of them, or "you" are just an observer-moment and can consider them to be "other people". Regardless of definitions, this case calls for the use of Caring Measure for decision making.
Bruno is still pushing his crackpot UDA.
I could tell you what's wrong with his MGA, ....
Bruno, I will post on the subject - but not yet. I do not want to get sidetracked from improving my paper.
> I see you have made some progress on the subject (but not yet on
> diplomacy, unless your "crackpot" wording is just an affectionate
> mark: I could be OK with that. Well we will see).
I will admit that diplomacy is not always my strong suit when dealing with controversial subjects.
My characterization of it is sincere, not affectionate, though mainly what made me say that is that you call it a proof. It's an argument, not a proof, and the argument fails to be convincing. Now many people make arguments that I don't buy and I don't necessarily call those arguments crackpot, but I will if they make too-strong claims.
> Welcome back to the list Jacques,
Thanks :)
Hi. In the above, I was describing the consequences of #3, the assumption that a person's measure can't change over time. That assumption is certainly not what people have been calling the "ASSA" - obviously, I believe that measure does change as a function of time. Rather, #3 is my attempt to put what you call the "RSSA" in well-defined terms so that its consequences can be explored.
> > Instead I covered the Bayesian issues in my sections on the Reflection Argument and Theory Confirmation.
> >
> What measure then are you talking about ? Bayesian probabilities are relative, it is non-sense to talk about absolute measure.
I don't understand your comment. The sections of my paper that I mentioned explain how to use what I call "effective probabilities" in certain situations. If there is a problem with those procedures that would make them impossible to use, and you would like to point it out, you'd have to be a lot more specific.
> > > He goes on to mention rather briefly in passing his doomsday style
> > > argument against QI, but not in detail.
> >
> > I think the argument is presented in full. What part is missing?
>
> What happen to your "you" ?
Do you mean "why don't you reach the super-old ages"? The number of super-old "copies of you" is much less than for normal ages. This is equivalent to "most copies of you die off first". Which is equivalent to "most people die off first". It is irrelevant whether the people are different, or similar enough to be called "copies".
The You you know (no quotes around it this time) is just one copy among the "you" ones that are similar to you.
In other words, perhaps too compactly said for people to appreciate, "your" measure is reduced.
> This sort of talk about "random sampling" and "luck" is misleading and is exactly why I broke down the roles of effective probability into the four categories I did in the paper.
>
> If you are considering future versions of yourself, in the MWI sense, there is no randomness involved. Depending on how you define "you", "you" will either be all of them, or "you" are just an observer-moment and can consider them to be "other people". Regardless of definitions, this case calls for the use of Caring Measure for decision making.
It seems that the disagreement may be one about personal identity. It
is not clear to me from your paper whether you accept what Derek
Parfit calls the "reductionist" theory of personal identity. Consider
the following experiment:
There are two consecutive periods of consciousness, A and B, in which
you are an observer in a virtual reality program. A is your
experiences between 5:00 PM and 5:01 PM while B is your experiences
between 5:01 PM and 5:02 PM, subjective time. A is being implemented
in parallel on two computers MA1 and MA2, so that there are actually
two qualitatively identical streams of consciousness which we can call
A1 and A2. At the end of the subjective minute, data is saved to disk
and both MA1 and MA2 are switched off. An external operator picks up a
copy of the saved data, walks over to a third computer MB, loads the
data and starts up the program. After another subjective minute MB is
switched off and the experiment ends.
As the observer you know all this information, and you look at the
clock and see that it is 5:00 PM. What can you conclude from this and
what should you expect? To me, it seems that you must conclude that
you are currently either A1 or A2, and that in one minute you will be
B, with 100% certainty. Would you say something else?
--
Stathis Papaioannou
I'd say it's a matter of definition, and there are three basic ones:
1) If I am A1 and will become B, then A2 has an equal right to say that he will become B. Thus, one could say that I am the same person as A2. This is personal fusion.
2) If the data saved to the disk is only based on A1 (e.g. discarding any errors that A2 might have made) then one could say that A1 is the same person as B, while A2 is not. This is causal differentiation.
3) If I am defined as an observer-moment, then I am part of either A1 or A2, not even the whole thing - just my current experience. This is the most conservative definition and thus may be the least misleading.
Regardless of definitions, what will be true is that the measure of A will be twice that of B. For example, if I have not yet looked at the clock, and I want to place a bet on what it currently reads, and my internal time sense tells me only that about a minute has passed (so it is near 5:01, but I don't know which side of it), then I should bet that it is before 5:01 with effective probability 2/3. This Reflection Argument is equivalent to the famous "Sleeping Beauty" thought experiment.
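The 2/3 bet follows directly from the measures: the minute A carries two units (A1 and A2), the minute B carries one. A minimal sketch, assuming nothing beyond those weights:

```python
from fractions import Fraction

# Reflection Argument sketch: A runs on two computers (measure 2),
# B runs on one (measure 1). Given only "it is near 5:01", bet by measure.
measure = {
    "before 5:01 (A1 or A2)": Fraction(2),
    "after 5:01 (B)": Fraction(1),
}
total = sum(measure.values())
p_before = measure["before 5:01 (A1 or A2)"] / total   # 2/3
```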
Brent
You are right, but I think that Stathis is right too. When Stathis
talks about two identical streams of consciousness, he makes perhaps
just a little abuse of language, which seems to me quite justifiable.
Just give a mirror to the observer, so that A *can* (but does not) look
in the mirror to see whether he is implemented by MA1 or by MA2.
Knowing the protocol, the observer can predict that IF he looks at the
mirror, the stream of consciousness will bifurcate into A1 and A2.
Accepting the Y = II rule (that is, bifurcation of the "future" =
differentiation of the whole story) makes Stathis's "abuse of language"
an acceptable way to describe the picture. So Stathis gets the correct
expectation, despite the first-person ambiguity in "two identical
streams of consciousness".
If two infinite computations *never* differentiate, should we count
them as one? I am not sure, but I think we should still count them as
distinct. The UD generates such infinitely similar streams infinitely
often. That should play a role for the relative (to observer-moment)
measure pertaining to the computations. OK?
Bruno
OK.
> 2) If the data saved to the disk is only based on A1 (e.g. discarding any errors that A2 might have made) then one could say that A1 is the same person as B, while A2 is not. This is causal differentiation.
Yes, but I'm assuming A1 and A2 have identical content.
> 3) If I am defined as an observer-moment, then I am part of either A1 or A2, not even the whole thing - just my current experience. This is the most conservative definition and thus may be the least misleading.
This is the way I think of it, at least provisionally.
> Regardless of definitions, what will be true is that the measure of A will be twice that of B. For example, if have not yet looked at the clock, and I want to place a bet on what it currently reads, and my internal time sense tells me only that about a minute has passed (so it is near 5:01, but I don't know which side of it), then I should bet that it is before 5:01 with effective probability 2/3. This Reflection Argument is equivalent to the famous "Sleeping Beauty" thought experiment.
But the point is, I do look at the clock and I do know that I am A,
with probability 1, and therefore that I will soon be B with
probability 1. This would still be the case even if the ratio of A:B
were 10^100:1. There is no option for me to feel myself suspended at
5:01 PM, or other weird experiences, because the measure of A is so
much greater.
--
Stathis Papaioannou
I think if the observer knows everything I know, they can't conclude
anything more or less than I can.
Namely, that at 5:00 there are two computers running simulations, and
in one minute there will be one computer running a simulation.
I don't see how the observer asking "Which one am I?" is in any sense
asking for more information. The problem is the word "I" - what does
it refer to? Both computers MA1 and MA2 simulate an observer asking
"Which one am I". We know everything that happens - and "when you've
explained everything that happens, you've explained
everything." (Dennett again)
Bruno, giving A1 and A2 mirrors which would show different stuff
violates Stathis' assumption of running the _same_ computation - you
can't go out of the system.
And your remark that we should differentiate infinite identical platonic
computations confuses me - it seems to contradict unification (which I
gather you assume).
Measure can only be influenced by _different_ computations supporting
the same OM.
Cheers,
Günther
Michael Rosefield wrote:
> I agree. They are both pointers to the same abstract computation.
>
>
> --------------------------
> - Did you ever hear of "The Seattle Seven"?
> - Mmm.
> - That was me... and six other guys.
>
>
> 2009/2/10 Brent Meeker <meek...@dslextreme.com
> <mailto:meek...@dslextreme.com>>
>
>
> Bruno Marchal wrote:
> >
> > On 10 Feb 2009, at 18:44, Brent Meeker wrote:
> >
> >> Stathis Papaioannou wrote:
> >>> 2009/2/10 Jack Mallah <jackm...@yahoo.com
> <mailto:jackm...@yahoo.com>>:
That actually doesn't matter - causation is defined in terms of counterfactuals. If - then, considering what happens at that moment of saving the data. If x=1 and y=1, and I copy the contents of x to z, that is not the same causal relationship as if I had copied y to z.
> > 3) If I am defined as an observer-moment, then I am
> part of either A1 or A2, not even the whole thing - just my
> current experience. This is the most conservative
> definition and thus may be the least misleading.
>
> This is the way I think of it, at least provisionally.
OK.
> But the point is, I do look at the clock and I do know that I am A, with probability 1, and therefore that I will soon be B with probability 1.
That contradicts what you said above about being an observer-moment. If you are, then some _other_ observer-moments will be in B, not you.
But surely the counterfactuals are the same in each case too? In which
case it is the same causal relationship. We're talking computations
here, each computation will respond identically to the same
counterfactual input.
In my book (page 146) I make the comment:
"The Doomsday argument with selection of observer moments made
according to a monotonically declining function of age would predict
the youngest of observer moments to be selected. By this argument, it
is actually mysterious why we should ever observe ourselves as adults,
a reductio ad absurdum for the Mallah argument."
Jacques has convinced me that the measure in question may be
sufficiently slowly declining over (say) the first 80 years of human
life that anthropic arguments become blunt, particularly when the
categories concerned are things like childhood, adolescence, youth,
middle age and old age, rather than specific ages. Of course, infant
mortality is still very high in many parts of the world, so the
overall measure of babies is much higher than other age groups, but
one could argue that infants are not conscious until after the brain
reorganisation that occurs in the second year of life, so it is possible
that high infant mortality doesn't count.
Of course my major problem with the argument depending on the ASSA
still stands, but I'm willing to grant that this particular objection
may be overegged.
>> > 3) If I am defined as an observer-moment, then I am
>> part of either A1 or A2, not even the whole thing - just my
>> current experience. This is the most conservative
>> definition and thus may be the least misleading.
>>
>> This is the way I think of it, at least provisionally.
>
> OK.
>
>> But the point is, I do look at the clock and I do know that I am A, with probability 1, and therefore that I will soon be B with probability 1.
>
> That contradicts what you said above about being an observer-moment. If you are, then some _other_ observer-moments will be in B, not you.
But the same could be said about everyday life. The person who wakes
up in my bed tomorrow won't be me, he will be some guy who thinks he's
me and shares my memories, personality traits, physical
characteristics and so on. In other words, everyone only lives
transiently, and continuity of consciousness is an illusion. The
question of survival is then the question of how to ensure that this
illusion continues. QI allows the illusion to continue indefinitely.
--
Stathis Papaioannou
I think I understand your point, but I don't see that the continuity of
consciousness is any more an illusion than any other continuity: the continuity
of space, the persistence of objects, etc. You are just generalizing Zeno's
paradox. But once you look at it that way, the question becomes, "Why imagine
the continuity is made up of discrete elements?" It is this conceptualization,
points in space, moments in time, observer moments as atoms of consciousness,
that creates the paradox. So maybe we should recognize continuity as
fundamental. The continuity need not be temporal, it could be a more abstract
property such as a causal connection, or perhaps what Bruno says distinguishes a
computation from a description of the computation.
Brent
--- On Mon, 2/9/09, Quentin Anciaux <allc...@gmail.com> wrote:
> Also I still don't understand how I could be 30 years old and not 4, there are a lot more OM of 4 than 30... it is the argument you use for 1000 years old, I don't see why it can hold for 30 ?
Quentin, why would the measure of 4 year olds be "a lot more" than the measure of 30 year olds? I have already explained that the effect of differentiation (eg by learning) is exactly balanced by the increased number of versions to sum over (the N/N explanation) and the effect of child mortality is small.
Is there some third factor that you think comes into play? Can you estimate quantitatively what you think the measure ratio would be?
> Also even if absolute measure had sense, do you mean that the measure of a 1000 years old OM is strictly zero (not infinitesimal, simply and strictly null)?
No, it is not zero, but it is extremely small. I have never suggested that there is no long time tail in the measure distribution that extends to infinite time. Of course there is. Any MWIer knows that. But it is negligible. You will never experience it, or depending on definitions, at least not in any significant measure. The general argument against immortality proves that. It is no more significant than any other very-small-measure set of observations, such as the ones in which you are king of the demons. You might as well forget about it.
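Jack's N/N point can be put into a toy calculation. The numbers here are assumptions for illustration only (the 10% mortality figure is the one he concedes "for the sake of argument" elsewhere in the thread), not measured values:

```python
# Differentiation into N versions gives each 1/N of the original
# measure, so differentiation alone leaves the total unchanged
# (N * 1/N = 1).  Only mortality thins the total between ages.
N = 1000                                  # number of versions; arbitrary
measure_age_4 = N * (1.0 / N)             # normalised to 1
p_die_4_to_30 = 0.10                      # assumed mortality, age 4 to 30
measure_age_30 = measure_age_4 * (1 - p_die_4_to_30)

ratio = measure_age_30 / measure_age_4
print(ratio)                              # 0.9 - "not enough less to draw any conclusions"
```

The ratio depends only on the mortality factor; the branching factor N cancels out, which is the whole point of the N/N explanation.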
>> But the same could be said about everyday life. The person who wakes
>> up in my bed tomorrow won't be me, he will be some guy who thinks he's
>> me and shares my memories, personality traits, physical
>> characteristics and so on. In other words, everyone only lives
>> transiently, and continuity of consciousness is an illusion.
>
> I think I understand your point, but I don't see that the continuity of
> consciousness is any more an illusion than any other continuity: the continuity
> of space, the persistence of objects, etc. You are just generalizing Zeno's
> paradox. But once you look at it that way, the question becomes, "Why imagine
> the continuity is made up of discrete elements?" It is this conceptualization,
> points in space, moments in time, observer moments as atoms of consciousness,
> that creates the paradox. So maybe we should recognize continuity as
> fundamental. The continuity need not be temporal, it could be a more abstract
> property such as a causal connection, or perhaps what Bruno says distinguishes a
> computation from a description of the computation.
I don't think it makes a difference if life is continuous or discrete:
it is still possible to assert that future versions of myself are
different people who merely experience the illusion of being me.
However, this just becomes a semantic exercise. Saying that I will
wake up in my bed tomorrow is equivalent to saying that someone
sufficiently similar to me will wake up in my bed tomorrow.
--
Stathis Papaioannou
Heart disease. Cancer. Stroke. Degradation of various organs leading to death. Such ailments are known to strike older people more than young people. Are such things unheard of in your country?
I wouldn't call it "sudden", but certainly by 100 the measure has dropped off a lot. By 200, survival is theoretically possible, so the measure isn't zero, but such cases are obviously quite rare.
> Also this is still assuming ASSA and does not take into account that my next moment is not a random moment (with high measure) against all moments, but a random moment against all moments that have my current moment as memories/previous.
There is no randomness whatsoever involved. See my replies to Stathis.
--- On Tue, 2/10/09, russell standish <li...@hpcoders.com.au> wrote:
> But surely the counterfactuals are the same in each case too? In which case it is the same causal relationship. We're talking computations here, each computation will respond identically to the same counterfactual input.
I believe you both are taking what I wrote out of context. Sorry if I was not clear.
In the above I was talking about the moment at which the data is saved, from either A1 or A2, when making the transition to B in the thought experiment.
BTW, causation (sensitivity to counterfactuals) is part of the criteria for an implementation of a computation. So in that sense causation is essential to the experience.
Exactly.
And if your measure were to drop off dramatically overnight, it is equivalent to saying that many _more people_ woke up in your bed today as compared to the number of people who will wake up in your bed tomorrow.
Which is equivalent to saying that, for all practical purposes, you will probably die overnight. And that is the point.
Indeed there seems to be a conflict between MWI of QM and the feeling of
consciousness. QM evolves unitarily to preserve total probability, which
implies that the splitting into different quasi-classical subspaces reduces the
measure of each subspace. But there's no perceptible diminishment of
consciousness. I think this is consistent with the idea that consciousness is a
computation, since in that case the computation either exists or it doesn't.
Two copies don't increase the measure of a computation and reducing its vector
in Hilbert space doesn't diminish it.
Brent Meeker
I have a different argument against QTI.
I had a nice dream last night, but unfortunately it suddenly ended.
Now, this is empirical evidence against QTI because, according to the
QTI, the life expectancy of the version of me simulated in that dream
should have been infinite.
It is good to see that I am not alone here in taking a stand against QS/QI. What do you think of my paper? Is it unclear, convincing, unconvincing?
Are there others like us who still post here?
Regards,
Jack
If that is so then how do you explain the Born rule?
There are some people who will, but relatively few. That is what counts for QS to be invalid.
> So what you are saying is that at some point the measure falls to be strictly null... and that needs an argument from your part.
No, I never suggested it is zero. It doesn't have to be.
> Also you did not answer the question about the realness feeling of observer B... he has half the measure according to you, does he feel less alive/real/conscious?
I answered that previously. Measure affects the commonness of an observation, not what it feels like.
But why should less measure imply a "diminishment of consciousness"? Measure is not intended to have anything to do with how a given observer or observer-moment feels subjectively at a given instant, just how *likely* that experience is. If I win the lottery I don't feel my consciousness diminish, for example.
Jesse
--- On Wed, 2/11/09, Quentin Anciaux <allc...@gmail.com> wrote:
> 2009/2/11 Jack Mallah <jackm...@yahoo.com>
> > And if your measure were to drop off dramatically overnight, it is equivalent to saying that many _more people_ woke up in your bed today as compared to the number of people who will wake up in your bed tomorrow.
> >
> > Which is equivalent to saying that, for all practical purposes, you will probably die overnight. And that is the point.
> >
> I don't think so, the point is that there is still someone who will wake up in the bed tomorrow... as long as the measure is not null this is true, and that's what counts for the argument to be valid.
There are some people who will, but relatively few. That is what counts for QS to be invalid.
> So what you are saying is that at some point the measure falls to be strictly null... and that needs an argument from your part.
No, I never suggested it is zero. It doesn't have to be.
> Also you did not answer the question about the realness feeling of observer B... he has half the measure according to you, does he feel less alive/real/conscious?
I answered that previously. Measure affects the commonness of an observation, not what it feels like.
--
All those moments will be lost in time, like tears in rain.
The Born rule assumes you start with a normalized vector (i.e. ray), so it
calculates predicted probabilities conditional on the state preparation. After
each measurement, the vector is renormalized because the prediction is always
conditional on the present state. This is quite different from applying a
probability measure to the evolution of a multiverse in which decoherence
defines many different orthogonal subspaces, each of which gets a small
projection of the state vector of the multiverse.
Brent
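Brent's description (probability from the squared modulus of an inner product, conditional on the prepared state, with renormalisation after each measurement) can be sketched in a few lines. The function names are mine, and the example basis is the standard one; this is a sketch of the bookkeeping, not a simulation of dynamics:

```python
from math import sqrt

def inner(u, v):
    # <u|v> for kets given as lists of complex amplitudes
    return sum(a.conjugate() * b for a, b in zip(u, v))

def born_probability(eigvec, state):
    # |<e|psi>|^2 / <psi|psi>: normalised so that probabilities over
    # a complete orthonormal eigenbasis sum to 1
    return abs(inner(eigvec, state)) ** 2 / inner(state, state).real

def collapse(eigvec, state):
    # post-measurement state: projection onto the eigenvector,
    # renormalised (a computational convenience, as noted above)
    c = inner(eigvec, state)
    proj = [c * a for a in eigvec]        # assumes eigvec has unit norm
    n = sqrt(inner(proj, proj).real)
    return [a / n for a in proj]

psi = [3 + 0j, 4j]                        # deliberately unnormalised
e0, e1 = [1 + 0j, 0j], [0j, 1 + 0j]
p0, p1 = born_probability(e0, psi), born_probability(e1, psi)
print(p0, p1, p0 + p1)                    # 0.36 0.64 1.0
```

Note the probabilities come out right even though psi was never normalised, which is Russell's point: the normalisation lives in the rule, not in the state vector's amplitude.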
> 2009/2/11 Quentin Anciaux>
>
>
> Because the point is to know from a 1st person perspective that there exists a "next subjective moment"... if there is, QI holds. Even if in the majority of "universes" I'm dead... from the 1st person perspective I cannot "be dead", hence the only moments that count are those where I exist, however small the measure of that moment is... and if at any moment there exists a successor where I exist, then QI holds.
But any notion of there being objective truths about what happens from the "1st person perspective", as opposed to just 3rd person truths about what various brains *report* experiencing, gets into philosophical assumptions that really need to be made explicit, or else people are talking at cross-purposes... this is what I was getting at with my post at http://groups.google.com/group/everything-list/msg/26b0bf3e1e971381
If A1 sees MA1 and A2 sees MA2 and they see something different, i.e. MA1 and
MA2 are distinguishable, then you've violated the hypothesis that the
computations are identical.
Brent
We seem to be in violent agreement.
Brent
I guess that depends on what you care about.
Brent
Of course maybe in some other branch of the multiverse your dream is continuing.
That's what makes everything-theories difficult to test.
But you raise an interesting point. Everything-theories that suppose
consciousness is constituted by the closest continuations need to solve the
"white rabbit" problem. But that solution, whatever it is, would equally apply
in dreams. So why don't dreams have the same physics as waking life?
Brent
This idea seems inconsistent with MWI. In QM the split is uncaused so it's
hard to see why its influence extends into the past and increases the measure of
computations that were identical before the split.
Brent
>Note that the same "vocabulary"
> problem occurs with Quantum Physics.
>
> Of course we still lack a definite criterion of identity for
> computation. But we can already derive what can count as different
> computations if we want those measure questions to make sense.
As I understand it your theory of personal identity depends on computations
"going through" a particular state. Intuitively this implies a state at a
particular moment, but a Y=II representation implies that we are taking into
account not just the present state but some period of history - which would
correspond with the usual idea of a person - something with a history, not just
a state.
Brent
--- On Sun, 2/8/09, Stathis Papaioannou <stat...@gmail.com> wrote:
> > Suppose you differentiate into N states, then on average each has 1/N of your original measure. I guess that's why you think the measure decreases. But the sum of the measures is N/N of the original.
> I still find this confusing. Your argument seems to be that you won't live to 1000 because the measure of 1000 year old versions of you in the multiverse is very small - the total consciousness across the multiverse is much less for 1000 year olds than 30 year olds. But by an analogous argument, the measure of 4 year old OM's is higher than that of 30 year old OM's, since you might die between age 4 and 30. But here you are, an adult rather than a child.
You might die between 4 and 30, but the chance is fairly small, let's say 10% for the sake of argument. So, if we just consider these two ages, the effective probability of being 30 would be a little less than that of being 4 - not enough less to draw any conclusions from. The period of adulthood is longer than that of childhood, so actually you are more likely to be an adult. How likely? Just look at a cross section of the population. Some children, more adults, basically no super-old folks.
> Should you feel your consciousness more thinly spread or something?
No, measure affects how common an observation is, not what it feels like.
You seem to be saying that commonness of an experience has no effect on whether, for practical purposes, people should expect to experience it. That is a contradiction in terms. It is false by definition. If an "uncommon" experience gets experienced just as often as a "common" experience, then by definition they are equally common and have equal measure.
If continuity is fundamental then personal identity could be defined in terms of
it and there could be a real difference between you and someone with the same
memories, but without continuity to your past.
Brent
> From a 1st person perspective commonness is useless in the argument. The important thing is what it feels like for the experimenter.
You seem to be saying that commonness of an experience has no effect on whether, for practical purposes, people should expect to experience it. That is a contradiction in terms. It is false by definition. If an "uncommon" experience gets experienced just as often as a "common" experience, then by definition they are equally common and have equal measure.
> There are some people who will, but relatively few. That is what counts for QS to be invalid.
Hmm, that does not make QS invalid (see Quentin and Jonathan's posts for
my views on the issue, they have expressed everything clearly), and in
fact you have already conceded QI (by asserting that measure never drops
to null).
It seems to me (judging from your abstract) that your real problem is
with the ethical conclusions which may or may not follow from QI.
But then the correct way is not to argue against QI but to tackle the
ethical questions head on.
Hilary Greaves would be an example (care for all your successors); or
even better, adopt a benevolent attitude toward all conscious OMs so
that you try to act to _increase_ conscious states (of all beings) in
the whole universe, and not decrease them.
I do not see a true ethical problem following from QI when people are
ethical in the first place. And if they are not, I don't think that QI
will add much incentive to be unethical.
Cheers,
Günther
If you remember that you had a nice dream then the version of you in
the dream is continuing. And if you had forgotten it, there would be
other versions of you that didn't, as Brent suggested.
--
Stathis Papaioannou
You agree that if one version of me goes to bed tonight and one
version of me wakes up tomorrow, then I should expect to wake up
tomorrow. But if extra versions of me are manufactured and run today,
then switched off when I go to sleep, then you are saying that I might
not wake up tomorrow. The extra copies of me have somehow sapped my
life strength.
--
Stathis Papaioannou
> If continuity is fundamental then personal identity could be defined in terms of
> it and there could be a real difference between you and someone with the same
> memories, but without continuity to your past.
But that could lead to absurd conclusions. Suppose you discover that
you have a disease which breaks the required continuity every time you
go to sleep, and that this has been happening your whole life. Will
you worry about falling asleep tonight? Should your property be
disposed of tomorrow according to your will?
--
Stathis Papaioannou
Who has this disease? :-)
--- On Wed, 2/11/09, George Levy <gl...@quantics.net> wrote:
> One could argue that measure actually increases continuously and corresponds to the increase in entropy occurring in everyday life. So even if you are 90 or 100 years old you could still experience an increase in measure.
I guess you are basing that on some kind of branch-counting idea.
If that were the case, the Born Rule would fail. Perhaps the probability rule would be more like proportionality to norm^2 exp(entropy) instead of just norm^2. If that was it, then for example unstable nuclei would be observed to decay a lot faster than the Born Rule predicts.
Conventional half life calculations are accurate. So either entropy would not be a factor, or the MWI is experimentally disproven already. Well, if it is a weak enough function of entropy then maybe it hasn't been disproven, but inclusion of free parameters like that which can always be made small enough goes against Occam's Razor. Otherwise there'd be no end of possible correction factors.
At least your idea was testable, with none of the meaningless "first person" sloganeering. Ideas like that, keep 'em coming!
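Jack's testability claim can be illustrated with a two-branch toy model. The entropy gain dS is an assumed number purely for illustration, and the norm^2 * exp(entropy) form is the one sketched above, not anyone's worked-out proposal:

```python
from math import exp

# One half-life of an unstable nucleus: the Born rule assigns the
# "decayed" branch squared norm 0.5.
w_decay, w_survive = 0.5, 0.5
dS = 2.0                      # assumed entropy gain on decay (units of k_B)

p_born = w_decay / (w_decay + w_survive)

# Entropy-weighted rule: probability proportional to norm^2 * exp(entropy)
m_decay, m_survive = w_decay * exp(dS), w_survive * exp(0.0)
p_modified = m_decay / (m_decay + m_survive)

print(p_born, round(p_modified, 3))       # 0.5 vs 0.881: decay looks much faster
```

Since conventional half-life calculations match observation at the 0.5 figure, any appreciable dS coupling of this form would already be ruled out, which is the argument in the text.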
> In any case, measure is measured over a continuum and its value is infinite to begin with. So whether it increases or decreases may be a moot point.
It's not moot. Just take density ratios. The size of the universe may be infinite, but that didn't stop Hubble from saying it's getting bigger.
> As I said, the increase or decrease in measure is at the crux of this problem. Your paper really did not illuminate the issue in a satisfactory manner.
It could no doubt use some tweaking, which is why I'm on the list now. I know I'm not always a good communicator. What should be clarified or added to it?
You won't know this evening if you are one of the "extra versions" or the original. So yes, in that situation, you will probably not be around tomorrow. Only the original will.
> The extra copies of me have somehow sapped my life strength.
Not at all. I guess that is a joke?
Creating more copies, then getting rid of the same number, does not result in a net decrease in measure. That is why the movie "The Prestige" bears no resemblance whatsoever to QS despite rumors to the contrary.
If you create extra copies and leave them alive, there is a net increase in measure. That is equivalent to new people being born even if they have your memories. This once happened to Will Riker on Star Trek: TNG.
It will be experienced - but not by most of "you". For all practical purposes it might as well not exist.
> What you're saying is uncommon moments are *never* experienced (meaning their measure is strictly null); for the QI argument to hold it is sufficient to have at least *one* next moment for every moment.
No and no.
Well, this seems to be the real point of disagreement between you and
the pro-QI people. If I am one of the extra versions and die
overnight, but the original survives, then I have survived. This is
why there can be a many to one relationship between earlier and later
copies. If you don't agree with this then you should make explicit
your theory of personal identity.
--
Stathis Papaioannou
I wouldn't worry if the clones were all kept in perfect lockstep. If
one of my clones survived, I would survive. It doesn't matter that the
clones are made up of different matter, as long as this matter is in a
configuration such that it could be a future version of myself. For
this is what happens in ordinary life: the matter comprising my body
is almost all replaced over the course of months or years, but I still
feel that I'm me. Whatever you want to call the important part of me -
mind, consciousness, soul - is preserved if the pattern making up my
brain is preserved.
--
Stathis Papaioannou
Hi George. The everything list feels just like old times, no? Which is nice in a way but has a big drawback - I can only take so much of arguing the same old things, and being outnumbered. And that limit is approaching fast again. At least I think your point here is new to the list.
--- On Wed, 2/11/09, George Levy <gl...@quantics.net> wrote:
> One could argue that measure actually increases continuously and corresponds to the increase in entropy occurring in everyday life. So even if you are 90 or 100 years old you could still experience an increase in measure.
I guess you are basing that on some kind of branch-counting idea. If that were the case, the Born Rule would fail. Perhaps the probability rule would be more like proportionality to norm^2 exp(entropy) instead of just norm^2. If that was it, then for example unstable nuclei would be observed to decay a lot faster than the Born Rule predicts.
It will be experienced - but not by most of "you". For all practical purposes it might as well not exist.
--- On Wed, 2/11/09, Quentin Anciaux <allc...@gmail.com> wrote:
> > > From a 1st perspective commonness is useless in
> the argument. The important is what it feels like for the experimenter.
> >
> > You seem to be saying that commonness of an experience has no effect on, what for practical purposes, is whether people should expect to experience it. That is a contradiction in terms. It is false by definition. If an "uncommon" experience gets experienced just as often as a "common" experience, then by definition they are equally common and have equal measure.
> >
> That's not what I said. I said however uncommon an experience is, if it exists... it exists by definition. If MWI is true, and the measure for any particular moment to have a successor is never strictly null, then any moment has a successor; hence there exists a me-moment of 1000 years old, and it is guaranteed to be lived, by definition.
No and no.
> What you're saying is uncommon moments are *never* experienced (meaning their measure is strictly null); for the QI argument to hold it is sufficient to have at least *one* next moment for every moment.
>
> Welcome back Jack Mallah!
>
> I have a different argument against QTI.
>
> I had a nice dream last night, but unfortunately it suddenly ended.
> Now, this is empirical evidence against QTI because, according to the
> QTI, the life expectancy of the version of me simulated in that dream
> should have been infinite.
The notion of "ending" makes sense only relative to something
"ending later or not ending".
With most definition of first person (including both UDA and AUDA
definitions) "first person ending" just makes no sense.
Topologically, life is an open set.
Bruno
>
> Bruno Marchal wrote:
>>
>> On 11 Feb 2009, at 00:38, Günther Greindl wrote:
>>
>>> I'm with Mike and Brent.
>>>
>>> Bruno, giving A1 and A2 mirrors which would show different stuff
>>> violates Stathis' assumption of running the _same_ computation - you
>>> can't go out of the system.
>>
>> See my answer to Brent. Once A1 looks at itself in the mirror (and
>> thus A2 too, given the protocol), A1 sees MA1 and A2 sees MA2, and
>> the computation differs.
>
> If A1 sees MA1 and A2 sees MA2 and they see something different,
> i.e. MA1 and
> MA2 are distinguishable, then you've violated the hypothesis that the
> computations are identical.
Right? I did change the protocol to make my point, which concerns only
the probability of finding myself by MA1 or by MA2, but not both.
Bruno
Ah! That is a good question. It is equivalent to the first person
white rabbit problem (which was the point of the original white rabbit
problem in "Conscience et Mécanisme").
The answer is that dreams are really stabilized by their relative
appearance with respect to deep computations with high measure. For
such relativity we need a mechanist notion of first person PLURAL, and
then I hope the arithmetical hypostases will confirm the working of
such a notion.
Bruno
>
> This idea seems inconsistent with MWI. In QM the split is uncaused
> so it's
> hard to see why its influence extends into the past and increases
> the measure of
> computations that were identical before the split.
I got the inspiration from the MWI, and even from David Deutsch's
convincing point that conceptually differentiation-talk is less wrong
than bifurcation-talk. But is it not simply, in QM, a consequence of
the linearity of the tensor product, i.e. the fact that the state
A*(B+C) is equivalent to (A*B)+(A*C), where A, B, C represent kets
and * represents the tensor product?
Of course the price to pay, as Everett first noticed, is that the
states become a relative notion, and the probabilities too, making
RSSA obligatory in QM. With comp it is more subtle (but then Everett
uses comp and missed or abstracted himself from this subtlety).
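The linearity Bruno appeals to, A*(B+C) = (A*B)+(A*C) with * the tensor product, is easy to verify numerically. A minimal sketch with two-dimensional kets; the amplitudes are chosen arbitrarily:

```python
def tensor(u, v):
    # Kronecker product of two kets given as amplitude lists
    return [a * b for a in u for b in v]

def add(u, v):
    # pointwise sum of two kets of equal dimension
    return [a + b for a, b in zip(u, v)]

A, B, C = [1, 2], [3, 0], [0, 5]

lhs = tensor(A, add(B, C))                  # A * (B + C)
rhs = add(tensor(A, B), tensor(A, C))       # (A * B) + (A * C)
print(lhs == rhs, lhs)                      # True [3, 5, 6, 10]
```

The equality holds for any amplitudes, which is all "linearity of the tensor product" means here.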
>> Of course we still lack a definite criteria of identity for
>> computation. But we can already derive what can count as different
>> computations if we want those measure question making sense.
>
> As I understand it your theory of personal identity depends on
> computations
> "going through" a particular state. Intuitively this implies a
> state at a
> particular moment, but a Y=II representation implies that we are
> taking into
> account not just the present state but some period of history -
> which would
> correspond with the usual idea of a person - something with a
> history, not just
> a state.
Absolutely so. It is the Darwinistic aspect of comp. A species with a
lot of offspring extends the "lifetime" of old genes.
Perhaps that's why it is said we should grow and multiply :)
Bruno
Indeed.
> > --- On Wed, 2/11/09, George Levy
> > If that were the case, the Born Rule would fail.
> Perhaps the probability rule would be more like proportionality to norm^2 exp(entropy) instead of just norm^2. If that was it, then for example unstable nuclei would be observed to decay a lot faster than the Born Rule predicts.
> I do not understand why you say that the Born rule would fail.
High entropy branches would have more probability than low entropy ones compared to the standard Born rule.
> Yes I am linking the entropy to MW branching.
But you should realize that the Born rule is self-consistent in the face of branching. If there is branching to N states, then on average the squared norm of each will be 1/N of the original. That much is proven by the math. Linking squared norm to measure is of course a tougher issue.
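The N-branch bookkeeping Jack describes is a one-line check (equal amplitudes assumed for simplicity; unitarity guarantees only the total, not the equal split):

```python
from math import sqrt

N = 8
branches = [1 / sqrt(N)] * N              # equal split of a normalised state

total = sum(a * a for a in branches)      # preserved by unitarity
per_branch = branches[0] ** 2             # = 1/N
print(round(total, 10), round(per_branch, 10))
```

The total squared norm stays 1 however finely the state branches; only the per-branch share shrinks, which is the self-consistency being claimed.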
> You say that the Born Rule would fail if measure *increases*.
Actually, all I said was that it would fail if measure is linked to entropy. Any significant modification to it would make it fail.
> Using your own argument I could say that the Born rule would fail if measure *decreases* according to function f(t). For example it could be norm^2 f(t).
That would make it fail but if the modification is only a function of time it would be hard to detect. Making it a function of a branch-dependent observable like entropy leads to a much easier-to-detect deviation.
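A toy calculation, with made-up numbers, shows just how easy to detect it would be: take a Born decay probability of 0.1 per interval and assume the decayed branch carries an extra entropy of only 2 (in units of k_B) because the decay products thermalize. The entropy-weighted rule then pushes the observed decay probability to roughly 0.45.

```python
import math

# Born rule: branch probability = squared norm (here already normalized).
p_decay_born = 0.10      # hypothetical per-interval decay probability
p_survive_born = 0.90

# Hypothetical modified rule under discussion: weight ~ norm^2 * exp(entropy).
# dS is an illustrative entropy difference between branches, in units of k_B.
dS = 2.0

w_decay = p_decay_born * math.exp(dS)
w_survive = p_survive_born * math.exp(0.0)

p_decay_mod = w_decay / (w_decay + w_survive)
print(p_decay_mod)       # ~0.45, far above the Born value of 0.10
```

Real entropy differences between decayed and undecayed branches would be astronomically larger than 2 k_B, so the predicted deviation would be even more flagrant, and half-life measurements rule it out.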
> So using your own argument since the Born rule is only norm^2 therefore measure stays constant?
In ordinary experimental situations, total measure stays constant.
In life or death situations there is a correction factor but it is well known: the measure in a given world is proportional to the number of people alive in it as well as to the squared norm. This is taken into account under the Anthropic principle, and explains why our universe seems fine-tuned for life even though worlds like that presumably have a relatively small total squared norm compared to the sum of the others.
>
> Hi George. The everything list feels just like old times, no?
I am afraid we are just backtracking a bit, to 10 years ago.
No problem. After all, concerning theology, I am asking people to
backtrack 1500 years (1480 to be precise).
> Which is nice in a way but has a big drawback - I can only take so
> much of arguing the same old things, and being outnumbered. And
> that limit is approaching fast again. At least I think your point
> here is new to the list.
>
> --- On Wed, 2/11/09, George Levy <gl...@quantics.net> wrote:
>> One could argue that measure actually increases continuously and
>> corresponds to the increase in entropy occurring in everyday life.
>> So even if you are 90 or 100 years old you could still experience
>> an increase in measure.
>
> I guess you are basing that on some kind of branch-counting idea.
>
> If that were the case, the Born Rule would fail. Perhaps the
> probability rule would be more like proportionality to norm^2
> exp(entropy) instead of just norm^2. If that was it, then for
> example unstable nuclei would be observed to decay a lot faster than
> the Born Rule predicts.
>
> Conventional half life calculations are accurate. So either entropy
> would not be a factor, or the MWI is experimentally disproven
> already. Well, if it is a weak enough function of entropy then
> maybe it hasn't been disproven, but inclusion of free parameters
> like that which can always be made small enough goes against Occam's
> Razor. Otherwise there'd be no end of possible correction factors.
>
> At least your idea was testable, with none of the meaningless "first
> person" sloganeering. Ideas like that, keep 'em coming!
So you stop at step two of the UDA?
What is wrong with the definition of the first and third person
views? I gave a complete third person definition of both notions
(see the SANE 2004 paper). Or look at the arithmetical definition (the
Theaetetic one).
>
>
>> In any case, measure is measured over a continuum and its value is
>> infinite to begin with. So whether it increases or decreases may be
>> a moot point.
>
> It's not moot. Just take density ratios. The size of the universe
> may be infinite, but that didn't stop Hubble from saying it's
> getting bigger.
>
>> As I said, the increase or decrease in measure is at the crux of
>> this problem. Your paper really did not illuminate the issue in a
>> satisfactory manner.
>
> It could no doubt use some tweaking, which is why I'm on the list
> now. I know I'm not always a good communicator. What should be
> clarified or added to it?
You say "no randomness involved", but you seem to accept
probabilities. Am I just missing something here?
You seem not to take the 1 pov / 3 pov distinction seriously into
account. What does "questioning immortality" mean, then?
Bruno
How do you explain why it works? I say it is because people in higher amplitude branches have more measure.
> This is quite different from applying a probability measure to the evolution of a multiverse in which decoherence defines many different orthogonal subspaces, each of which gets a small projection of the state vector of the multiverse.
Then it is not the standard MWI in the Everett tradition.
Brian,
Tononi's information integration view of consciousness might fit your bill.
Overview:
http://www.spectrum.ieee.org/jun08/6315
Paper (open access):
http://www.biomedcentral.com/1471-2202/5/42
Cheers,
Günther
Brian Tenneson wrote:
> Dear Everything List,
>
> Tegmark mentioned in an article the idea of self-aware structures, SAS.
> He wrote that the search for such structures is ongoing, i.e., he
> postulated the existence of such structures without giving examples.
>
> I'm wondering if consciousness and self-awareness has been
> "mathematized" somewhere, preferably in documents I can download without
> academic affiliation / subscription.
>
> I'm inclined to think agents might be a pathway to this end, as well as
> what David Wolpert is calling a device.
>
> Do any of the formalizations come close to being reflective of human
> consciousness? In other words a mathematical model of human consciousness?
>
> Thank you.
>