against UD+ASSA, part 1


Wei Dai

Sep 26, 2007, 8:39:51 AM
to everyth...@googlegroups.com
I promised to summarize why I moved away from the philosophical position
that Hal Finney calls UD+ASSA. Here's part 1, where I argue against ASSA.
Part 2 will cover UD.

Consider the following thought experiment. Suppose your brain has been
destructively scanned and uploaded into a computer by a mad scientist. Thus
you find yourself imprisoned in a computer simulation. The mad scientist
tells you that you have no hope of escaping, but he will financially support
your survivors (spouse and children) if you win a certain game, which works
as follows. He will throw a fair 10-sided die with sides labeled 0 to 9. You
are to guess whether the die landed with the 0 side up or not. But here's a
twist: if it does land with "0" up, he'll immediately make 90 duplicate
copies of you before you get a chance to answer, and the copies will all run
in parallel. All of the simulations are identical and deterministic, so all
91 copies (as well as the 9 copies in the other universes) must give the
same answer.

ASSA implies that just before you answer, you should think that you have
0.91 probability of being in the universe with "0" up. Does that mean you
should guess "yes"? Well, I wouldn't. If I was in that situation, I'd think
"If I answer 'no' my survivors are financially supported in 9 times as many
universes as if I answer 'yes', so I should answer 'no'." How many copies of
me exist in each universe doesn't matter, since it doesn't affect the
outcome that I'm interested in.

Notice that in this thought experiment my reasoning mentions nothing about
probabilities. I'm not interested in "my" measure, but in the measures
of the outcomes that I care about. I think ASSA holds intuitive appeal to
us, because historically, copying of minds hasn't been possible, so the measure of
one's observer-moment and the measures of the outcomes that are causally
related to one's decisions are strictly proportional. In that situation, it
makes sense to continue to think in terms of subjective probabilities
defined as ratios of measures of observer-moments. But in the more general
case, ASSA doesn't hold up.
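
A minimal sketch of the arithmetic above in Python (only numbers already
given in the thought experiment; the variable names are mine):

    from fractions import Fraction

    SIDES = 10                  # die faces 0..9; each universe has measure 1/10
    COPIES_IF_ZERO = 91         # 1 original + 90 duplicates in the "0" universe
    COPIES_OTHERWISE = 1        # a single copy in each of the other 9 universes

    # ASSA-style subjective probability of currently being in the "0" universe:
    total_copies = COPIES_IF_ZERO + (SIDES - 1) * COPIES_OTHERWISE
    p_assa_zero = Fraction(COPIES_IF_ZERO, total_copies)        # 91/100

    # Measure of the outcome I care about (universes where the survivors get
    # paid), for each possible answer:
    paid_if_answer_zero = Fraction(1, SIDES)                    # 1/10
    paid_if_answer_nonzero = Fraction(SIDES - 1, SIDES)         # 9/10

    print(p_assa_zero, paid_if_answer_zero, paid_if_answer_nonzero)
    # Duplication inflates the subjective probability to 91/100 but leaves the
    # outcome measures (1/10 vs 9/10) untouched, which is the point above.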

Stathis Papaioannou

Sep 26, 2007, 9:34:55 AM
to everyth...@googlegroups.com

There is an asymmetry here because you are reasoning about a part of
the multiverse which isn't duplicated. If both you and all your
survivors were duplicated together, or if you were only interested in
some selfish reward you would obtain in the event of a "0", that would
change the problem. A discrepancy between the 1st person and 3rd person POV
is also seen in QS (quantum suicide) experiments.


--
Stathis Papaioannou

Bruno Marchal

Sep 26, 2007, 11:07:40 AM
to everyth...@googlegroups.com

Le 26-sept.-07, à 15:34, Stathis Papaioannou a écrit :

Important point, I agree, just as I agreed with Stathis' answer to Youness
(I think), which helps to distinguish ASSA and RSSA. At least formally,
RSSA implies immortality, where ASSA does not.
Of course the kind of immortality you get from comp (and thus from RSSA-like
theories, like pure QM too) does not necessarily resemble the kind of
"wishful immortality" associated with some religions.

Bruno

http://iridia.ulb.ac.be/~marchal/

Hal Finney

Sep 26, 2007, 5:04:52 PM
to everyth...@googlegroups.com
Wei Dai writes:
> I promised to summarize why I moved away from the philosophical position
> that Hal Finney calls UD+ASSA. Here's part 1, where I argue against ASSA.
> Part 2 will cover UD.
>
> Consider the following thought experiment. Suppose your brain has been
> destructively scanned and uploaded into a computer by a mad scientist. Thus
> you find yourself imprisoned in a computer simulation. The mad scientist
> tells you that you have no hope of escaping, but he will financially support
> your survivors (spouse and children) if you win a certain game, which works
> as follows. He will throw a fair 10-sided die with sides labeled 0 to 9. You
> are to guess whether the die landed with the 0 side up or not. But here's a
> twist, if it does land with "0" up, he'll immediately make 90 duplicate
> copies of you before you get a chance to answer, and the copies will all run
> in parallel. All of the simulations are identical and deterministic, so all
> 91 copies (as well as the 9 copies in the other universes) must give the
> same answer.

This is an interesting experiment, but I have two comments. First,
you could tighten the dilemma by having the mad scientist flip a biased
coin with say a 70% chance of coming up heads, but then he duplicates
you if it comes up tails. Now you have it that the different styles of
reasoning lead to opposite actions, while in the original you might as
well pick 0 in any case.

Second, why the proviso that the simulations are identical and
deterministic? Doesn't the reasoning (and dilemma) go through just as
strongly if they are allowed to diverge? You will still be faced with a
conflict where one kind of reasoning says you have your 91% subjective
probability of it coming up a certain way, while logic would seem to
suggest you should pick the other one.

But, in the case where your instances diverge, isn't the subjective-
probability argument very convincing? In particular if we let you run
for a while after the duplication - minutes, hours or days - there might
be quite a bit of divergence. If you have 91 different people in one
case versus 1 in the other, isn't it plausible - in fact, compelling -
to think that you are in the larger group?

And again, even so, wouldn't you still want to make your choice on the
basis of ignoring this subjective probability, and pick the one that
maximizes the chances for your survivors: as you say, the measure of
the outcomes that you care about?

If so, then this suggests that the thought experiment is flawed because
even in a situation where most people would agree that subjective
perception is strongly skewed, they would still make a choice ignoring
that fact. And therefore its conclusions would not necessarily apply
either when dealing with the simpler case of a deterministic and
synchronous duplication.

Hal Finney

Wei Dai

Sep 26, 2007, 10:15:53 PM
to everyth...@googlegroups.com
Hal Finney wrote:
> This is an interesting experiment, but I have two comments. First,
> you could tighten the dilemma by having the mad scientist flip a biased
> coin with say a 70% chance of coming up heads, but then he duplicates
> you if it comes up tails. Now you have it that the different styles of
> reasoning lead to opposite actions, while in the original you might as
> well pick 0 in any case.

In my scenario you're supposed to answer "0" or "non-0", and there are 9 ways
it can be non-zero, so in effect it's a 90% biased coin.

> Second, why the proviso that the simulations are identical and
> deterministic? Doesn't the reasoning (and dilemma) go through just as
> strongly if they are allowed to diverge? You will still be faced with a
> conflict where one kind of reasoning says you have your 91% subjective
> probability of it coming up a certain way, while logic would seem to
> suggest you should pick the other one.

The proviso is there because if the simulations are allowed to diverge, what
does the mad scientist do if some of them answer "0" and some answer
"non-0"? I didn't want to deal with that issue.

> But, in the case where your instances diverge, isn't the subjective-
> probability argument very convincing? In particular if we let you run
> for a while after the duplication - minutes, hours or days - there might
> be quite a bit of divergence. If you have 91 different people in one
> case versus 1 in the other, isn't it plausible - in fact, compelling -
> to think that you are in the larger group?

Yes, that's why I said UD+ASSA seems intuitively appealing.

> And again, even so, wouldn't you still want to make your choice on the
> basis of ignoring this subjective probability, and pick the one that
> maximizes the chances for your survivors: as you say, the measure of
> the outcomes that you care about?

Yes. So my point is, even though the subjective probability computed by ASSA
is intuitively appealing, we end up ignoring it, so why bother? We can
always make the right choices by thinking directly about measures of
outcomes and ignoring subjective probabilities.

> If so, then this suggests that the thought experiment is flawed because
> even in a situation where most people would agree that subjective
> perception is strongly skewed, they would still make a choice ignoring
> that fact. And therefore its conclusions would not necessarily apply
> either when dealing with the simpler case of a deterministic and
> synchronous duplication.

I don't understand this part. This thought experiment gives a situation
where the probabilities computed by ASSA are useless, thereby showing that we
need a more general principle of rationality. So I'd say that ASSA is
flawed, not the thought experiment. Or maybe I'm just not getting your
point...


Youness Ayaita

Sep 27, 2007, 5:45:08 AM
to Everything List
On 26 Sep., 14:39, "Wei Dai" <wei...@weidai.com> wrote:
> ASSA implies that just before you answer, you should think that you have
> 0.91 probability of being in the universe with "0" up. Does that mean you
> should guess "yes"? Well, I wouldn't. If I was in that situation, I'd think
> "If I answer 'no' my survivors are financially supported in 9 times as many
> universes as if I answer 'yes', so I should answer 'no'." How many copies of
> me exist in each universe doesn't matter, since it doesn't affect the
> outcome that I'm interested in.
>
> Notice that in this thought experiment my reasoning mentions nothing about
> probabilities. I'm not interested in "my" measure, but in the measures
> of the outcomes that I care about.

I do agree with you, Wei. Sometimes, it's not useful to consider the
expectation for your next observer moment---in particular, if you are
interested in what happens to other people (thus in the observer
moments they must expect for themselves). As I pointed out in my
recent message "A question concerning the ASSA/RSSA debate", an
absolute measure over observer moments isn't necessary. Every specific
problem we are concerned with leads to a specific measure over
observer moments. In this context, I would relate the ideas of the ASSA/
RSSA to the problem "What will I experience next?" This is a problem
we are very often concerned with (for example if we perform an
observation or a measurement, also leading to the Born rule). But it's
not the only problem we might be interested in! So, this new
perspective can be seen as a generalization of the ASSA/RSSA. In our
rational decisions, we can include other aspects (e.g. other people)
than ourselves. Rationality is not restricted to self-sampling. We
could call this 'general rationality'.

marc....@gmail.com

Sep 27, 2007, 8:06:42 AM
to Everything List

On Sep 27, 2:15 pm, "Wei Dai" <wei...@weidai.com> wrote:

>
> Yes. So my point is, even though the subjective probability computed by ASSA
> is intuitively appealing, we end up ignoring it, so why bother? We can
> always make the right choices by thinking directly about measures of
> outcomes and ignoring subjective probabilities.
>

OK, new thought experiment. ;)

Barring a global disaster which wipes out all of humanity or its
descendants, there would exist massively more observers in the future
than currently exist.
But you (as an observer) find yourself born amongst the earliest humans.
Since, barring global disaster, there will be massively more observers
in the future, why did you find yourself born so early? Surely your
probability of being born in the future (where there are far more
observers) was much, much higher than your chances of being born so
early among a far smaller pool of observers?
The conclusion appears to be that there is an overwhelming probability
that we are on the brink of some global disaster which will wipe out
all humanity, since that would explain why we don't find ourselves
among the pool of future observers (because there are none).
Is the conclusion correct?


Russell Standish

Sep 27, 2007, 7:39:47 PM
to everyth...@googlegroups.com
On Thu, Sep 27, 2007 at 12:06:42PM -0000, marc....@gmail.com wrote:
>
> OK, new thought experiement. ;)
>
> Barring a global disaster which wiped out all of the humanity or its
> descendents, there would exist massively more observers in the future
> than currently exist.
> But you (as an observer) find you born amongst the earliest humans.
> Since barring global disaster there will be massively more observers
> in the future, why did you find yourself born so early? Surely your
> probability of being born in the future (where there are far more
> observers) was much much higher than your chances of being born so
> early among a far smaller pool of observers?
> The conclusion appears to be that there is an overwhelming probability
> that we are on the brink of some global disaster which will wipe out
> all humanity, since that would explain why we don't find ourselves
> among the pool of future observers (because there are none).
> Is the conclusion correct?
>

This is the standard Doomsday argument, which has been well
discussed. In this case, "disaster" just means population decline. A
world population decline to, say, 1 billion over the next couple of
centuries, with a slower decline after that, is probably enough to
ensure the SSA predicts a near-peak-population observation. I haven't
done the maths on that version, but I did do it assuming current
exponential growth continues and asking how long we have until the
crash; the answer is less than 100 years, so we can say there must be a
population decline of some sort before then (assuming the validity of the DA).
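
For concreteness, here is a rough sketch of that style of calculation;
the inputs below are purely illustrative assumptions, and the answer is
very sensitive to them:

    import math

    # SSA step: treating yourself as a random sample from all humans ever
    # born, with confidence c the total number of births N_total satisfies
    #   N_past / N_total >= 1 - c,  i.e.  N_total <= N_past / (1 - c).
    N_past = 1.0e11           # assumed births to date (~100 billion)
    c = 0.95                  # confidence level
    births_per_year = 1.3e8   # assumed current annual births
    g = 0.01                  # assumed exponential growth rate of the birth rate

    future_births_bound = N_past / (1 - c) - N_past

    # With exponential growth, cumulative future births after t years are
    # (births_per_year / g) * (exp(g*t) - 1); solve for when the bound is hit.
    t_crash = math.log(1 + g * future_births_bound / births_per_year) / g
    print(future_births_bound, t_crash)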

The only other historical time the Doomsday Argument predicts
"disaster" before our current time was during the Golden Age of
ancient Greece. And, sure enough, there was a significant population
decline around 200 BCE. This is in appendix B of my book - I really
must get around to writing it up as a peer-reviewed article, though.

Cheers

----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

Brent Meeker

Sep 28, 2007, 3:43:12 PM
to everyth...@googlegroups.com
No, because (under your assumptions) the argument is time-translation
invariant.

Brent Meeker


Jesse Mazer

Sep 30, 2007, 1:20:18 AM
to everyth...@googlegroups.com

This is the standard "Doomsday argument" (see
http://www.anthropic-principle.com/primer1.html and
http://www.anthropic-principle.com/faq.html from Nick Bostrom's site), but
there's a loophole--it's possible that something like Nick Bostrom's
"simulation argument" (which has its own site at
http://www.simulation-argument.com/ ) is correct, and that we are *already*
living in some vast transhuman galaxy-spanning civilization, but we just
don't know it because a significant fraction of the observer-moments in this
civilization are part of "ancestor simulations" which simulate the distant
past (or alternate versions of their own 'actual' past) in great detail. In
this case the self-sampling assumption which tells us our own present
observer-moment is "typical" could be correct, and yet we'd have incorrect
information about the total number of humanlike observers that had gone
before us; if we knew the actual number, it might tell us that doomsday was
far off.

Thinking about these ideas, I also came up with a somewhat different thought
about how our current observer-moment might be "typical" without it needing
to imply the likelihood that civilization was on the verge of ending. It's
pretty science-fictioney and fanciful, but maybe not *too* much more so than
the ancestor simulation idea. The basic idea came out of thinking about
whether it would ever be possible to "merge" distinct minds into a single
one, especially in a hypothetical future when mind uploading is possible and
the minds that want to merge exist as programs running on advanced
computers. This idea of mind-merging appears a lot in science fiction--think
of the Borg on Star Trek--but it seemed to me that it would actually be
quite difficult, because neural networks are so idiosyncratic in the details
of their connections, and because memories and knowledge are stored in such
a distributed way, they aren't like ordinary computer programs designed by
humans where you have neat easy-to-follow decision trees and units of
information stored in distinct clearly-marked locations. Figuring out how to
map one neural network's concept of a "cat" (for example) to another's in
such a way that the combined neural network behaved in a nice unified manner
wouldn't be straightforward at all, and each person's concept of a cat
probably involves links to huge numbers of implicit memories which have a
basis in that person's unique life history.

So, is there any remotely plausible way it could be done? My idea was that
if mind B wanted to integrate mind A into itself, perhaps the only way to do
it would be to hook the two neural networks up with a lot of initially
fairly random connections (just as the connections between different areas
in the brain of a newborn are initially fairly random), and then *replay*
the entire history of A's neural network from when it was first formed up to
the moment it agreed to merge with B, with B's neural network adapting to it
in realtime and forming meaningful connections between A's network and its
own, in much the same way that our left hemisphere has been hooked up to our
right hemisphere since our brain first formed and the two are in a constant
process of adapting to changes in one another so they can function as a
unified whole. For this to work, perhaps B would have to be put into a very
passive, receptive state, so that it was experiencing the things happening
in A's brain throughout the "replay" of A's life as if they were happening
to itself, with no explicit conscious memories or awareness of itself as an
individual distinct from A with its own separate life history. In this case,
the experience of being B's neural network experiencing this replay of A's
life might be subjectively indistinguishable from A's original life--there'd
be no way of telling whether a particular observer-moment was actually that
of A or of B passively experiencing as part of such an extended replay.

And suppose that later, after B had emerged from this life-replay with A now
assimilated into itself, and B had gotten back to its normal life in this
transhuman future, another mind C wanted to assimilate B into itself in the
same way. If it used the same procedure, it would have to experience a
replay of the entire history of B's neural network, including the period
where B was experiencing the replay of A's neural network (even if C had
already experienced a replay of A's history on its own)--and these
experiences, too, could be indistinguishable from the original experiences
of A! So if I'm having experiences which subjectively seem like those of A,
I'd have no way of being sure if they weren't actually those of some
transhuman intellect which wanted to integrate my mind into itself, or some
other transhuman intellect which wanted to integrate that first transhuman's
mind into itself, etc. If we imagine that a significant number of future
transhuman minds will be "descendents" of the earliest uploads (perhaps
because those will be the ones that have had the most time to make multiple
diverging copies of themselves, so they form the largest 'clades'), then as
different future minds keep wanting to merge with one another, they might
keep having to re-experience the same original life histories of the
earliest uploads, over and over again. Thus experiences of the lives of
beings who were born within decades of the development of uploading
technology (or just close enough to the development of the technology so
that, even if they didn't naturally live to see it, they'd be able to use
cryonics to get their brains preserved so that they could be scanned once it
became possible) could form a significant fraction of the set of all
observer-moments in some extremely vast galaxy-spanning transhuman
civilization.

Perhaps it's pretty farfetched, but my motive in thinking along these lines
is not just that I want to see the doomsday argument shown to be wrong when applied to
the lifetime of our civilization--it's that the doomsday argument can also
be applied to our *individual* lifetimes, so that if my current
observer-moment is "typical" the number of years I have left to live is
unlikely to be too many times larger than the number of years I've lived
already, and yet in this form it seems to be clearly incompatible with the
argument for quantum immortality, which I also find pretty plausible (see my
argument in favor of QI at
http://groups.google.com/group/everything-list/msg/c88e55c668ac4f65 ). The
only way for the two arguments to be compatible is if we have beings with
infinitely extended lifespans that nevertheless are continually having
experiences where they *believe* themselves to have only lived for a few
decades and have no conscious memories of a longer life prior to
that--something along the lines of "reincarnation", except that I want a
version that doesn't require believing in a supernatural non-computable
"soul". This is my attempt to come up with a rationale for why transhuman
minds might have a lot of observer-moments that seem to be those of
short-lived beings like ourselves, one which hopefully would not seem
completely crazy to someone who's already prepared to swallow notions like
mind uploading and giant post-singularity transhuman civilizations (and at
least some on this list would probably fall into that category).

Jesse


Günther Greindl

Sep 30, 2007, 4:49:27 AM
to everyth...@googlegroups.com
Hello all,

I have always found these doomsday arguments rather strange (and the
mathematics, nice as the equations may be, seems to rest on false premises).

Assuming that OMs are distributed unevenly, at the moment you are
_living_ an OM you can draw absolutely no conclusion about where in the
distribution you are - it looks somewhat like "after the fact" reasoning
to me.

Example:
A lottery with chances of winning 1 in 10 million.

Most people will lose, so they say (after experiencing a loss): well, it
was the likely event.

But let's say _one_ person won in this round: he can say: it was very
unlikely that _I_ won this time, but it just so happened, so fine.

In the lottery example, we know the distribution beforehand.

But now let us move to Observer Moments (OM):

You observe:
"I exist here and now. I know nothing about the OM Distribution, I can
only speculate."

This OM would be true no matter if you find yourself alone in big
universe or in the middle of a massively life-crowded galactic
super-cluster.

Random self-sampling does not help, because, by definition, an OM will
"live" itself even if it was very unlikely to be selected.

Mathematics trying to derive probabilities of OMs seems to me a rather
futile attempt at arbitrarily modeling a thought experiment - the
doomsday argument - the latter actually being metaphysics at its worst.

Regards,
Günther

--
Günther Greindl
Department of Philosophy of Science
University of Vienna
guenther...@univie.ac.at
http://www.univie.ac.at/Wissenschaftstheorie/

Blog: http://dao.complexitystudies.org/
Site: http://www.complexitystudies.org

Stathis Papaioannou

Sep 30, 2007, 6:17:34 AM
to everyth...@googlegroups.com
On 30/09/2007, Jesse Mazer <laser...@hotmail.com> wrote:

> Perhaps it's pretty farfetched, but my motive in thinking along these lines
> is not just that I want to see the doomsday argument wrong when applied to
> the lifetime of our civilization--it's that the doomsday argument can also
> be applied to our *individual* lifetimes, so that if my current
> observer-moment is "typical" the number of years I have left to live is
> unlikely to be too many times larger than the number of years I've lived
> already, and yet in this form it seems to be clearly incompatible with the
> argument for quantum immortality, which I also find pretty plausible (see my
> argument in favor of QI at
> http://groups.google.com/group/everything-list/msg/c88e55c668ac4f65 ).

Is the DA incompatible with QI? According to MWI, your measure in the
multiverse is constantly dropping with age as versions of you meet
their demise. According to DA, your present OM is 95% likely to be in
the first 95% of all OM's available to you. Well, that's why you're a
few decades old, rather than thousands of years old at the
ever-thinning tail end of the curve. But this is still consistent with
the expectation of an infinite subjective lifespan as per QI.


--
Stathis Papaioannou

Russell Standish

Sep 30, 2007, 7:22:45 AM
to everyth...@googlegroups.com
On Sun, Sep 30, 2007 at 08:17:34PM +1000, Stathis Papaioannou wrote:
>
> Is the DA incompatible with QI? According to MWI, your measure in the
> multiverse is constantly dropping with age as versions of you meet
> their demise. According to DA, your present OM is 95% likely to be in
> the first 95% of all OM's available to you. Well, that's why you're a
> few decades old, rather than thousands of years old at the
> ever-thinning tail end of the curve. But this is still consistent with
> the expectation of an infinite subjective lifespan as per QI.
>

Incidentally, this is the core of Jacques Mallah's argument against
QTI. In the end, I discovered that his argument was internally
consistent, but relied on the ASSA assumption, which I wasn't
comfortable with. I wrote about this debate in my book.

--

Russell Standish

Sep 30, 2007, 7:28:22 AM
to everyth...@googlegroups.com
On Sun, Sep 30, 2007 at 10:49:27AM +0200, Günther Greindl wrote:
>
> Hello all,
>
> I have always found these doomsday arguments rather strange (and the
> mathematics, nice as the equations may be, resting on false premises).
>
> Assuming that OM are distributed unevenly, at the moment you are
> _living_ an OM you can make absolutely no conclusion about where in the
> distribution you are - it looks somewhat like "after the fact" reasoning
> to me.
>
> But now let us move to Observer Moments (OM):
>
> You observe:
> "I exist here and now. I know nothing about the OM Distribution, I can
> only speculate."

In most anthropic arguments, you do know something about the
distribution. Otherwise, as you say, you can only speculate. For
instance, in the original Doomsday argument you know the distribution
of birth moments in the past (a relatively slow population increase,
followed by a far more rapid increase in the last two centuries), and
therefore you can infer something about the temporal distribution in
the future using the SSA.


--

Stathis Papaioannou

Sep 30, 2007, 8:21:41 AM
to everyth...@googlegroups.com
On 30/09/2007, Russell Standish <li...@hpcoders.com.au> wrote:
>
> On Sun, Sep 30, 2007 at 08:17:34PM +1000, Stathis Papaioannou wrote:
> >
> > Is the DA incompatible with QI? According to MWI, your measure in the
> > multiverse is constantly dropping with age as versions of you meet
> > their demise. According to DA, your present OM is 95% likely to be in
> > the first 95% of all OM's available to you. Well, that's why you're a
> > few decades old, rather than thousands of years old at the
> > ever-thinning tail end of the curve. But this is still consistent with
> > the expectation of an infinite subjective lifespan as per QI.
> >
>
> Incidently, this is the core of Jacques Mallah's argument against
> QTI. In the end, I discovered that his argument was internally
> consistent, but relied on the ASSA assumption, which I wasn't
> comfortable with. I wrote about this debate in my book.

Could this be a way to reconcile both ASSA and RSSA? You can expect to
survive indefinitely *and* you can expect to find yourself in a period
of high measure. It also explains why you aren't the oldest person in
the world.

--
Stathis Papaioannou

Jesse Mazer

Sep 30, 2007, 1:49:37 PM
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
>
>Is the DA incompatible with QI? According to MWI, your measure in the
>multiverse is constantly dropping with age as versions of you meet
>their demise. According to DA, your present OM is 95% likely to be in
>the first 95% of all OM's available to you. Well, that's why you're a
>few decades old, rather than thousands of years old at the
>ever-thinning tail end of the curve. But this is still consistent with
>the expectation of an infinite subjective lifespan as per QI.

Well, this view would imply that although I am likely to reach reasonable
conclusions about measure if I assume my current OM is "typical", I am
inevitably going to find myself in lower and lower measure OMs in the
future, where the assumption that the current one is typical will lead to
more and more erroneous conclusions. I guess if you believe there is no real
temporal relation between OMs, that any sense of an observer who is
successively experiencing a series of different OMs is an illusion and that
the only real connection between OMs is that memories one has may resemble
the current experiences of another, then there isn't really a problem with
this perspective (after all, I have no problem with the idea that the
ordinary Doomsday Argument applied to civilizations implies that eventually
the last remaining humans will have a position unusually close to the end,
and they'll all reach erroneous conclusions if they attempt to apply the
Doomsday Argument to their own birth order...the reason I have no problem
with this is that I don't expect to inevitably 'become' them, they are
separate individuals who happen to have an unusual place in the order of all
human births). But I've always favored the idea that a theory of
consciousness would determine some more "objective" notion of temporal flow
than just qualitative similarities in memories, that if my current OM is X
then there would be some definite ratio between the probabilities that my next
OM would be Y vs. Z. This leads me to the analogy of pools of water with
water flowing between them that I discussed in this post:

http://groups.google.com/group/everything-list/msg/07cd5c7676f6f6a1

>Consider the following analogy--we have a bunch of tanks of water, and each
>tank is constantly pumping a certain amount of its own water to a bunch of
>other tanks, and having water pumped into it from other tanks. The ratio
>between the rates that a given tank is pumping water into two other tanks
>corresponds to the ratio between the probabilities that a given
>observer-moment will be
>succeeded by one of two other possible OMs--if you imagine individual water
>molecules as observers, then the ratio between rates water is going to the
>two tanks will be the same as the ratio between the probabilities that a
>given molecule in the current tank will subsequently find itself in one of
>those two tanks. Meanwhile, the total amount of water in a tank would
>correspond to the absolute probability of a given OM--at any given time, if
>you randomly select a single water molecule from the collection of all
>molecules in all tanks, the amount of water in a tank is proportional to
>the
>probability your randomly-selected molecule will be in that tank.
>
>Now, for most ways of arranging this system, the total amount of water in
>different tanks will be changing over time. In terms of the analogy, this
>would be like imposing some sort of universal time-coordinate on the whole
>multiverse and saying the absolute probability of finding yourself
>experiencing a given OM changes with time, which seems pretty implausible
>to me. But if the system is balanced in such a way that, for each tank, the
>total rate that water is being pumped out is equal to the total rate that
>water is being pumped in, then the system as a whole will be in a kind of
>equilibrium, with no variation in the amount of water in any tank over
>time. So in terms of OMs, this suggests a constraint on the relationship
>between the absolute probabilities and the conditional probabilities, and
>this constraint (together with some constraints imposed by a 'theory of
>consciousness' of some kind) might actually help us find a unique
>self-consistent way to assign both sets of probabilities, an idea I
>elaborated on in the "Request for a glossary of acronyms" thread.

(also see the followup post at http://tinyurl.com/38g8yt ...and to see the
context of the whole thread go to http://tinyurl.com/2wsowb , and for the
'Request for a glossary of acronyms' thread which I mentioned at the end of
the quote go to http://tinyurl.com/2wah5v )

So, the requirement that the system be in "equilibrium", with the total
amount of water in each tank not changing over time, means that if you
randomly select one of all the water molecules in the system "now", the
probability it will be in any one of the various tanks (corresponding to
different OMs with a measure assigned) will be the same as if you randomly
select one of the water molecules, then wait a while so that molecule has
time to travel through a number of successive tanks, and want to know what
the probability is that it will be in the given tank "then". This means that
at any moment in a water molecule's history, it will always be likely to
reach good conclusions if it considers itself to be randomly selected from
the set of all tanks weighted by their "absolute probability" (corresponding
to the absolute measure on each OM); you don't have a situation where
there's a special moment where they'll be correct if they reason this way
but their conclusions will grow more and more erroneous if they do so at
later points in their history, or a situation where there is some global
notion of "time" and the absolute probability associated with each tank is
changing over time.

Jesse


Russell Standish

Sep 30, 2007, 7:19:59 PM
to everyth...@googlegroups.com

It isn't, because Mallah's DA + ASSA predicts a negligible probability
of finding oneself in an OM older than (say) 120 years,
whereas with the RSSA one has the QTI predictions, and experiencing
being 200 years old is not that unusual. Explaining how two
intelligent people could come to such dramatically different conclusions
from a given argument led to formalising this distinction between the
ASSA and the RSSA.


Cheers

Stathis Papaioannou

Sep 30, 2007, 9:30:52 PM
to everyth...@googlegroups.com
On 01/10/2007, Russell Standish <li...@hpcoders.com.au> wrote:

> It isn't, because Mallah's DA + ASSA predicts a negligible probability
> of finding oneself in an OM of (say) greater than 120 years old,
> whereas with the RSSA one has the QTI predictions, and experiencing
> being 200 years old is not that unusual. Explaining how two
> intelligent people can come to such dramatically different conclusions
> from a given argument lead to formalising this distinction between the
> ASSA and the RSSA.

There is a small probability that you will find yourself greater than
120 years old, but you might still be guaranteed of living past 120. For
example, if the lifespan of every human were exactly 120 years and one
minute, then you are very unlikely to find yourself over 120 years
old, yet you will certainly find yourself over that age eventually.
Similarly, with the QTI your measure is not uniform over your entire
very long lifespan, so a randomly sampled OM is very unlikely to
be over 1000 years old, but you are still guaranteed of exceeding that
age eventually. Is there an inconsistency here?
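
A toy model of the kind of non-uniform measure I have in mind (the
half-life figure is an arbitrary assumption):

    # Suppose the observer's measure halves every 30 years but never hits zero.
    HALF_LIFE = 30.0

    def measure(age):
        return 0.5 ** (age / HALF_LIFE)

    ages = [a + 0.5 for a in range(100000)]       # crude year-by-year sum
    total = sum(measure(a) for a in ages)
    over_1000 = sum(measure(a) for a in ages if a > 1000)

    print(over_1000 / total)   # chance a sampled OM is over 1000: roughly 1e-10
    # Yet the measure at any finite age is still positive, so on the RSSA/QTI
    # reading every continuation eventually passes 1000 anyway.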


--
Stathis Papaioannou

Russell Standish

Sep 30, 2007, 10:08:55 PM
to everyth...@googlegroups.com

The ASSA is the assumption that what you will experience must be
randomly sampled from the distribution of OMs. So with the ASSA you
are not guaranteed of experiencing being over 1000 years old at all.


Stathis Papaioannou

Oct 1, 2007, 6:34:57 AM
to everyth...@googlegroups.com
On 01/10/2007, Jesse Mazer <laser...@hotmail.com> wrote:

> >Is the DA incompatible with QI? According to MWI, your measure in the
> >multiverse is constantly dropping with age as versions of you meet
> >their demise. According to DA, your present OM is 95% likely to be in
> >the first 95% of all OM's available to you. Well, that's why you're a
> >few decades old, rather than thousands of years old at the
> >ever-thinning tail end of the curve. But this is still consistent with
> >the expectation of an infinite subjective lifespan as per QI.
>
> Well, this view would imply that although I am likely to reach reasonable
> conclusions about measure if I assume my current OM is "typical", I am
> inevitably going to find myself in lower and lower measure OMs in the
> future, where the assumption that the current one is typical will lead to
> more and more erroneous conclusions.

That's right, but the same is true in any case for the atypical
observers who assume that they are typical. Suppose I've forgotten how
old I am, but I am reliably informed that I will live to the age of
120 years and one minute. Then I would be foolish to guess that I am
currently over 120 years old; but at the same time, I know with
certainty that I will *eventually* reach that age.

> I guess if you believe there is no real
> temporal relation between OMs, that any sense of an observer who is
> successively experiencing a series of different OMs is an illusion and that
> the only real connection between OMs is that memories one has may resemble
> the current experiences of another, then there isn't really a problem with
> this perspective (after all, I have no problem with the idea that the
> ordinary Doomsday Argument applied to civilizations implies that eventually
> the last remaining humans will have a position unusually close to the end,
> and they'll all reach erroneous conclusions if they attempt to apply the
> Doomsday Argument to their own birth order...the reason I have no problem
> with this is that I don't expect to inevitably 'become' them, they are
> separate individuals who happen to have an unusual place in the order of all
> human births).

That's exactly how I view OM's. It is necessary that they be at least
this, since even if they are strung together temporally in some other
way (such as being generated in the same head) they won't form a
continuous stream of consciousness unless they have the appropriate
memory relationship. It is also sufficient, since I would have the
sense of continuity of consciousness even if my OM's were generated at
different points in space and time.

> But I've always favored the idea that a theory of
> consciousness would determine some more "objective" notion of temporal flow
> than just qualitative similarities in memories, that if my current OM is X
> then there would be some definite ratio between the probability that my next
> OM would be Y vs. Z.

If you assume that the probability is determined by the ratio of the
measure of Y to Z, given that Y and Z are equally good candidate
successor OM's, this takes care of it and is moreover completely
independent of any theory of consciousness. All that is needed is that
the appropriate OM's be generated; how, when, where or by whom is
irrelevant.

I don't understand how the probability that a water molecule will be
in a given tank stays constant over time. Sure, the probability that a
random water molecule is in a given tank is proportional to the volume
in that tank, but once a particular water molecule is identified,
isn't it increasingly likely as time increases to end up in a
downstream tank, regardless of the volume of the downstream tanks?

You seem to be allowing for the possibility that your next OM might
not actually be your next OM. Quoting from one of your above-cited
posts:

"So suppose we
calculate the absolute probability of different possible OMs being my "next"
experience *without* taking into account specific knowledge of what my
current OM is, by doing a sum over the absolute probability of each OM being
my current experience multiplied by the conditional probability that that OM
will be followed by the OM whose probability of being my "next" experience
we want to calculate."

It seems to me that you *have* to take into account specific knowledge
of your current OM in these questions. Your current OM fixes you in
identity and in time. You don't have to consider that you will turn
into a five year old, or into George Bush, even if the multiverse is
teeming with George Bushes and five year old Jesses. The only OM's you
need consider for your next experience are those which count as t-now
+ delta-t versions of yourself.


--
Stathis Papaioannou

Jesse Mazer

Oct 1, 2007, 7:36:52 PM
to everyth...@googlegroups.com
Stathis Papaioannou wrote:

I'm not talking about whether they are generated at different points in
space and time or not from a 3rd-person perspective; I'm talking about
whether there is a theory of consciousness that determines some sort of
"objective" truths about the temporal flow between OMs from a 1st-person
perspective (for example, an objective truth about the relative
probabilities that an experience of OM X will be followed by OM Y vs. OM Z),
or whether there is no such well-defined and objectively correct theory, and
the only thing we can say is that the memories of some OMs have purely
qualitative similarities to the experiences of others. Are you advocating
the latter?

>
> > But I've always favored the idea that a theory of
> > consciousness would determine some more "objective" notion of temporal
>flow
> > than just qualitative similarities in memories, that if my current OM is
>X
> > then there would be some definite ratio between the probability that my
>next
> > OM would be Y vs. Z.
>
>If you assume that the probability is determined by the ratio of the
>measure of Y to Z, given that Y and Z are equally good candidate
>successor OM's, this takes care of it and is moreover completely
>independent of any theory of consciousness.

But the "theory of consciousness" is needed to decide whether Y and Z are
indeed "equally good candidate successor OMs". For example, what if X is an
observer-moment of the actual historical Napoleon, Y is another OM of the
historical Napoleon, while Z is an OM of a delusional patient who thinks
he's Napoleon, and who by luck happens to have a set of fantasy memories
which happen to be quite similar to memories that the actual Napoleon had.
Is there some real fact of the matter about whether Z can qualify as a valid
successor, or is it just a matter of opinion?

I also see no reason to think that the question of whether observer-moment Y
is sufficiently similar to observer-moment X to qualify as a "successor"
should be a purely binary question as opposed to a "fuzzy" one. After all,
if you say the answer is "yes", and if Y can be described in some
mathematical language as a particular computation or pattern of
cause-and-effect or somesuch, then you can consider making a series of small
modifications to the computation/causal pattern, giving a series of similar
OMs Y', Y'', Y''', etc...eventually you'd end with a totally different OM
that had virtually no resemblance to either X or Y. So is there some point
in the sequence where you have an observer-moment that qualifies as a valid
successor to X, and then you change one bit of the computation or one
neural-firing event, and suddenly you have an observer-moment that is
completely invalid as a successor to X? This seems implausible to me, it
makes more sense that a theory of consciousness would determine something
like a "degree of similarity" between an OM X and a candidate successor OM
Y, and that this degree of similarity would factor into the probability that
an experience of X would be followed by an experience of Y.

In this case, if I am currently experiencing X, the relative probabilities
that my next OM is Y or Z might be determined by both the relative "degree
of similarity" of Y and Z to X *and* the absolute measure of Y and Z (or it
might be even more complicated; perhaps it would depend on some measure of
the internal coherence of all the different infinite sequences of OMs which
contain X and which have Y or Z as a successor).

If you have time you might want to take a look at the discussion in the
thread "FW: Quantum accident survivor" at http://tinyurl.com/23eq4g which
got continued in the thread "Request for a glossary of acronyms" that I
linked to earlier at http://tinyurl.com/2wah5v ...in particular, you could
look at my posts at http://tinyurl.com/28fogw and http://tinyurl.com/2hwdfz
on that thread (and possibly also the post http://tinyurl.com/3a6k7j from
the 'Request for a glossary of acronyms' thread which builds on them) where
I talk more about this idea that the probability of a given OM being
experienced as my "next" one might depend on a combination of its absolute
measure and its degree of similarity to my current one, and how this leads
to my own pet theory of how one might get a TOE that assigns a unique
measure to each OM. But to summarize it here, my pet theory is that there
might be a unique self-consistent solution when you impose the above rule
about the probability of my "next" OM, along with a global constraint
equivalent to the idea that all the "tanks of water" maintain a constant
amount of water in them (with the amount of water standing for the absolute
measure of each observer-moment) even as they are constantly giving up water
to other tanks (the tanks that stand for possible successor OMs, with the
relative amount of water a tank X gives to tank Y vs. tank Z standing for
the relative probability that Y vs. Z will be the next experience after X)
and receiving water from other tanks (their 'precursor' OMs). This global
constraint would give you something like the following system of equations:

P(A) = P(A)*P(A -> A) + P(B)*P(B -> A) + P(C)*P(C -> A) + ...
P(B) = P(A)*P(A -> B) + P(B)*P(B -> B) + P(C)*P(C -> B) + ...
P(C) = P(A)*P(A -> C) + P(B)*P(B -> C) + P(C)*P(C -> C) + ...

...where A, B, etc. are OMs, P(B) would be the absolute measure of B, and
P(A -> B) would be the probability that B would be the successor of A. If
you then use the rule that P(A -> B) would be something like S(A, B)*P(B),
where S(A, B) is some measure of the "similarity" of B to A which is
determined by a theory of consciousness, and P(B) is again the absolute
measure of B, then the above system of equations becomes:

P(A) = P(A)*S(A, A)*P(A) + P(B)*S(B, A)*P(A) + P(C)*S(C, A)*P(A) + ...
P(B) = P(A)*S(A, B)*P(B) + P(B)*S(B, B)*P(B) + P(C)*S(C, B)*P(B) + ...
P(C) = P(A)*S(A, C)*P(C) + P(B)*S(B, C)*P(C) + P(C)*S(C, C)*P(C) + ...

And with each equation you can divide both sides by the expression on the
left side, giving:

1 = P(A)*S(A, A) + P(B)*S(B, A) + P(C)*S(C, A) + ...
1 = P(A)*S(A, B) + P(B)*S(B, B) + P(C)*S(C, B) + ...
1 = P(A)*S(A, C) + P(B)*S(B, C) + P(C)*S(C, C) + ...

This might be enough to uniquely determine all absolute measures P(A), P(B)
etc. if your theory of consciousness already told you all the "similarities"
S(A, B), S(A, C) etc.
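
To make that concrete, here is a small numerical sketch; the similarity
matrix below is a made-up toy example, and the code just solves the linear
system written above:

    import numpy as np

    # Toy symmetric similarity matrix S(X, Y) for three OMs A, B, C (made up).
    S = np.array([[0.6, 0.2, 0.1],
                  [0.2, 0.3, 0.3],
                  [0.1, 0.3, 0.5]])

    # The equations 1 = sum_X P(X)*S(X, A) for every A are linear in P:
    # in matrix form, S^T p = (1, 1, 1).
    p = np.linalg.solve(S.T, np.ones(3))
    print("absolute measures P(A), P(B), P(C):", p)     # [0.8, 2.4, 0.4]

    # Recover the transition probabilities P(A -> B) = S(A, B) * P(B).
    M = S * p                  # column b gets multiplied by p[b]
    print("rows of M sum to:", M.sum(axis=1))           # 1, since S is symmetric
    print("measures conserved:", np.allclose(p @ M, p)) # True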

You misunderstand--I fully agree that once you pick a water molecule and
note that it's in a given tank, say tank A, then finding the probability
that it will "next" be in some other tank like B or C is affected by your
knowledge that it was last in A, and is not just proportional to the
absolute measure (total amount of water) of B or C. What I was saying is
that if you pick a random water molecule, then give it enough time to move
to another tank, and you want to know the probability that it will be in a
given tank such as B, *averaged over all possible tanks it might have been
in initially*, then this probability is exactly the same as the probability
it was initially in B. This is equivalent to the idea that all the tanks'
in-flows and out-flows are in equilibrium, so the amount of water in each
doesn't change over time despite the fact that any given water molecule is
constantly moving between tanks (which stands for the idea that the global
measure on each observer-moment is fixed, there's no notion of the
multiverse assigning a different absolute measure to the same OM with the
passage of some overarching time parameter). This is also equivalent to the
condition I expressed earlier with these equations:

P(A) = P(A)*P(A -> A) + P(B)*P(B -> A) + P(C)*P(C -> A) + ...
P(B) = P(A)*P(A -> B) + P(B)*P(B -> B) + P(C)*P(C -> B) + ...
P(C) = P(A)*P(A -> C) + P(B)*P(B -> C) + P(C)*P(C -> C) + ...

>
>"So suppose we
>calculate the absolute probability of different possible OMs being my
>"next"
>experience *without* taking into account specific knowledge of what my
>current OM is, by doing a sum over the absolute probability of each OM
>being
>my current experience multiplied by the conditional probability that that
>OM
>will be followed by the OM whose probability of being my "next" experience
>we want to calculate."
>
>It seems to me that you *have* to take into account specific knowledge
>of your current OM in these questions. Your current OM fixes you in
>identity and in time. You don't have to consider that you will turn
>into a five year old, or into George Bush, even if the multiverse is
>teeming with George Bushes and five year old Jesses. The only OM's you
>need consider for your next experience are those which count as t-now
>+ delta-t versions of yourself.

Yes, I agree...if my current OM is A and a possible "next" one is B, the
probability of my experiencing B next is given by P(A -> B) in the notation
above, which is different from P(B). But again, I think you misunderstood
what I was saying there, see above for clarification.

Jesse


Stathis Papaioannou

Oct 2, 2007, 8:55:02 AM
to everyth...@googlegroups.com
On 02/10/2007, Jesse Mazer <laser...@hotmail.com> wrote:

> I'm not talking about whether they are generated at different points in
> space in time or not from a 3rd-person perspective, I'm talking about
> whether there is a theory of consciousness that determines some sort of
> "objective" truths about the temporal flow between OMs from a 1st-person
> perspective (for example, an objective truth about the relative
> probabilities that an experience of OM X will be followed by OM Y vs. OM Z),
> or whether there is no such well-defined and objectively correct theory, and
> the only thing we can say is that the memories of some OMs have purely
> qualitative similarities to the experiences of others. Are you advocating
> the latter?

I believe that the idea of a self extended in time is a kind of
illusion, but it's an important illusion that I would like to
continue. What I can expect the next experience of this illusory
self to be can be objectively calculated using the measure and degree
of similarity of candidate successor OM's (to the extent that these
parameters can be determined), as you discuss below.

> >If you assume that the probability is determined by the ratio of the
> >measure of Y to Z, given that Y and Z are equally good candidate
> >successor OM's, this takes care of it and is moreover completely
> >independent of any theory of consciousness.
>
> But the "theory of consciousness" is needed to decide whether Y and Z are
> indeed "equally good candidate successor OMs".

Oh, by "theory of consciousness" it seems you mean what I mean by
"theory of personal identity".

> For example, what if X is an
> observer-moment of the actual historical Napoleon, Y is another OM of the
> historical Napoleon, while Z is an OM of a delusional patient who thinks
> he's Napoleon, and who by luck happens to have a set of fantasy memories
> which happen to be quite similar to memories that the actual Napoleon had.
> Is there some real fact of the matter about whether Z can qualify as a valid
> successor, or is it just a matter of opinion?

I would say that if Z is really just as good as the original
Napoleonic OM's, then it would have to qualify as a valid successor.
Of course the patient's brain would be far less likely to produce the
requisite OM's than Napoleon's brain, but in principle it is possible.

> I also see no reason to think that the question of whether observer-moment Y
> is sufficiently similar to observer-moment X to qualify as a "successor"
> should be a purely binary question as opposed to a "fuzzy" one. After all,
> if you say the answer is "yes", and if Y can be described in some
> mathematical language as a particular computation or pattern of
> cause-and-effect or somesuch, then you can consider making a series of small
> modifications to the computation/causal pattern, giving a series of similar
> OMs Y', Y'', Y''', etc...eventually you'd end with a totally different OM
> that had virtually no resemblance to either X or Y. So is there some point
> in the sequence where you have an observer-moment that qualifies as a valid
> successor to X, and then you change one bit of the computation or one
> neural-firing event, and suddenly you have an observer-moment that is
> completely invalid as a successor to X? This seems implausible to me, it
> makes more sense that a theory of consciousness would determine something
> like a "degree of similarity" between an OM X and a candidate successor OM
> Y, and that this degree of similarity would factor into the probability that
> an experience of X would be followed by an experience of Y.
>
> In this case, if I am currently experiencing X, the relative probabilities
> that my next OM is Y or Z might be determined by both the relative "degree
> of similarity" of Y and Z to X *and* the absolute measure of Y and Z

Yes, you would need to do some sort of calculation with measure
multiplied by degree of similarity, or rather degree of
appropriateness as successor - since my OM's of a minute ago are very
similar to my present OM but would not qualify at all as my
successors.

> (or it
> might be even more complicated; perhaps it would depend on some measure of
> the internal coherence of all the different infinite sequences of OMs which
> contain X and which have Y or Z as a successor).

What does "contain X" mean? I think of X as the complete content of one OM.

I don't see why they should sum up this way. Suppose A is you about to
toss a coin, B is you observing heads and C is you observing tails. If
the coin is fair, P(B) = P(C); let's say they both equal 1 in
arbitrary units of measure. Then S(A,B) = S(A,C) = 1 since either B or
C succeeding A are equally valid outcomes, while all other combinations
are impossible and have S(X,Y) = 0. P(A) could be anything at all; say
1000. We now have (from the cancelled-out equations below):

1 = P(A)*S(A, A) + P(B)*S(B, A) + P(C)*S(C, A)

1 /= 1000*0 + 1*0 + 1*0

1 = P(A)*S(A, B) + P(B)*S(B, B) + P(C)*S(C, B)

1 /= 1000*1 + 1*0 + 1*0

1 = P(A)*S(A, C) + P(B)*S(B, C) + P(C)*S(C, C)

1 /= 1000*1 + 1*0 + 1*0

Again, I don't see why measure should be conserved in this way. My
present OM might be orders of magnitude higher in measure than its
predecessors or successors.


--
Stathis Papaioannou

Pete Carlton

Oct 2, 2007, 5:49:24 PM
to everyth...@googlegroups.com

>> Since barring global disaster there will be massively more observers
>> in the future, why did you find yourself born so early? Surely your
>> probability of being born in the future (where there are far more
>> observers) was much much higher than your chances of being born so
>> early among a far smaller pool of observers?

Isn't there a major problem here with the word "you"? To whom or
what is it referring?

If it is asking "Why did you, Brent, a man born in the 20th century,
find yourself born in the 20th century?", then the answer is obvious;
it's like asking why is twelve twelve, and not a thousand. You're not
picking a number randomly when you ask "Why is twelve twelve?" -
you're picking twelve.

The target of your question (Brent) indeed lives at a time with a
relatively small number of observers - if you want to talk about how
things are in the future, maybe you should ask someone in the future...

Jesse Mazer

Oct 2, 2007, 5:59:57 PM
to everyth...@googlegroups.com

The self-sampling assumption just argues that you are likely to get correct
conclusions if you reason *as if* you were randomly picked from the set of
all observers, not that "you" really were assigned a life randomly in some
ultimate metaphysical sense. The issue is discussed on the FAQ at
anthropic-principle.com, where Bostrom writes:

Q3. I have memories of 20th century events, so I cannot have been born
earlier than the 20th century.

A3. We have to distinguish two "cannots" here. (1) Given the validity of
these memories then I was in fact not born <1900.[true] (2) I could not
exist without these memories.[much more doubtful].

It is indeed problematic how and in what sense you could be said to be a
random sample, and from which class you should consider yourself as having
been sampled (this is "the problem of the reference class"). Still, we seem
forced by arguments such as Leslie's emerald example (below) or my own
amnesia chamber thought experiment (see my "Investigations into the Doomsday
argument" [
http://www.anthropic-principle.com/preprints/inv/investigations.html ]) to
consider ourselves as random samples due to observer self-selection at least
in some cases.

>A firm plan was formed to rear humans in two batches: the first batch to be
>of three humans of one sex, the second of five thousand of the other sex.
>The plan called for rearing the first batch in one century. Many centuries
>later, the five thousand humans of the other sex would be reared. Imagine
>that you learn you’re one of the humans in question. You don’t know which
>centuries the plan specified, but you are aware of being female. You very
>reasonably conclude that the large batch was to be female, almost
>certainly. If adopted by every human in the experiment, the policy of
>betting that the large batch was of the same sex as oneself would yield
>only three failures and five thousand successes. ... [Y]ou mustn’t say:
>‘My genes are female, so I have to observe myself to be female, no matter
>whether the female batch was to be small or large. Hence I can have no
>special reason for believing it was to be large.’ (Leslie 1996, pp.
>222-23)


Stathis Papaioannou

Oct 3, 2007, 6:23:13 AM
to everyth...@googlegroups.com
On 03/10/2007, Jesse Mazer <laser...@hotmail.com> wrote:

> >A firm plan was formed to rear humans in two batches: the first batch to be
> >of three humans of one sex, the second of five thousand of the other sex.
> >The plan called for rearing the first batch in one century. Many centuries
> >later, the five thousand humans of the other sex would be reared. Imagine
> >that you learn you're one of the humans in question. You don't know which
> >centuries the plan specified, but you are aware of being female. You very
> >reasonably conclude that the large batch was to be female, almost
> >certainly. If adopted by every human in the experiment, the policy of
> >betting that the large batch was of the same sex as oneself would yield
> >only three failures and five thousand successes. ... [Y]ou mustn't say:
> >'My genes are female, so I have to observe myself to be female, no matter
> >whether the female batch was to be small or large. Hence I can have no
> >special reason for believing it was to be large.' (Leslie 1996, pp.
> >222-23)

This sort of argument seems to go awry with some of the thought
experiments concerning duplication and personal identity. Suppose God
reveals to you that every other minute, he has been increasing your
measure a millionfold for a minute, then returning it back to normal.
What you don't know and have to guess is whether the increase is
happening during the odd or the even numbered minutes. It is currently
3 minutes past the hour, and you reason that, if you say that you are
sampled from the high measure period, a million versions of you will
be right to every one that is wrong, so you guess that almost
certainly the duplication is occurring during the odd minutes.
However, if you wait a while you can go through the same reasoning
about the even numbered minutes. Clearly, you cannot claim that the
duplication is almost certainly occurring during both the odd and the
even numbered minutes, since this is inconsistent with the information
God has reliably provided you.

--
Stathis Papaioannou
