
The Case of Sleeping Beauty


Jamie Dreier

Mar 15, 1999

We plan to put Beauty to sleep by chemical means, and then we'll flip a
(fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
afternoon and interview her. If it lands Tails, we will awaken her Monday
afternoon, interview her, put her back to sleep, and then awaken her again
on Tuesday afternoon and interview her again.

The (each?) interview is to consist of the one question: what is your
credence now for the proposition that our coin landed Heads?

When awakened (and during the interview) Beauty will not be able to tell
which day it is, nor will she remember whether she has been awakened
before.

She knows the above details of our experiment.

What credence should she state in answer to our question?


-Jamie

p.s. Don't worry, we will awaken Beauty afterward and she'll suffer no ill
effects.

p.p.s. This puzzle/problem is, as far as I know, due to a graduate student
at MIT. Unfortunately I don't know his name (I do know it's a man). The
problem apparently arose out of some consideration of the Case of the
Absentminded Driver.

p.p.p.s. Once again, I have no very confident 'solution' of my own; I will
eventually post the author's solution, but I am not entirely happy with
that one either.

--
SpamGard: For real return address replace "DOT" with "."

Jim Ferry

Mar 15, 1999

Jamie Dreier wrote:
>
> We plan to put Beauty to sleep by chemical means, and then we'll flip a
> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
> afternoon and interview her. If it lands Tails, we will awaken her Monday
> afternoon, interview her, put her back to sleep, and then awaken her again
> on Tuesday afternoon and interview her again.
>
> The (each?) interview is to consist of the one question: what is your
> credence now for the proposition that our coin landed Heads?
>
> When awakened (and during the interview) Beauty will not be able to tell
> which day it is, nor will she remember whether she has been awakened
> before.
>
> She knows the above details of our experiment.
>
> What credence should she state in answer to our question?

50/50. The Heads and Tails environments are identical. They
give her no information, so the probability remains the same.

| Jim Ferry | Center for Simulation |
+------------------------------------+ of Advanced Rockets |
| http://www.uiuc.edu/ph/www/jferry/ +------------------------+
| jferry@expunge_this_field.uiuc.edu | University of Illinois |

bu...@pac2.berkeley.edu

Mar 15, 1999

In article <36ED7A81.5A61@delete_this_field.uiuc.edu>,

Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:
>Jamie Dreier wrote:
>>
>> We plan to put Beauty to sleep by chemical means, and then we'll flip a
>> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
>> afternoon and interview her. If it lands Tails, we will awaken her Monday
>> afternoon, interview her, put her back to sleep, and then awaken her again
>> on Tuesday afternoon and interview her again.

>> What credence should she state in answer to our question?


>
>50/50. The Heads and Tails environments are identical. They
>give her no information, so the probability remains the same.

SPOILER

Nope. She should say that the probability that it's tails is 2/3.

Imagine repeating the experiment a million times. Heads comes
up half a million times, as does tails. But each time tails
comes up she's awakened twice. So there are a total of
1.5 million awakenings, and only half a million of them
occur after the coin came up heads.

-Ted
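
A quick way to make the counting in this argument concrete is to simulate it. The sketch below (Python; illustrative only, not part of the original exchange) assumes the protocol exactly as Jamie stated it -- fair coin, one Monday awakening on Heads, Monday and Tuesday awakenings on Tails -- and tallies two different frequencies:

    import random

    def simulate(trials=1_000_000):
        """Tally coin flips and awakenings in the Sleeping Beauty protocol."""
        heads_flips = 0
        heads_awakenings = 0
        total_awakenings = 0
        for _ in range(trials):
            heads = random.random() < 0.5     # fair coin
            if heads:
                heads_flips += 1
                heads_awakenings += 1         # one awakening: Monday
                total_awakenings += 1
            else:
                total_awakenings += 2         # two awakenings: Monday and Tuesday
        print("fraction of flips landing Heads:       ", heads_flips / trials)
        print("fraction of awakenings following Heads:", heads_awakenings / total_awakenings)

    simulate()

The first number comes out near 1/2 and the second near 1/3; the thread's disagreement is over which of these two frequencies Beauty's credence should track.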

Jim Ferry

Mar 15, 1999

Nope. 2/3 of the awakenings occur after a Tail, but these
awakenings are not equally likely events.

P(Heads & Monday) = 1/2
P(Tails & Monday) = 1/4
P(Tails & Tuesday) = 1/4

bu...@pac2.berkeley.edu

Mar 15, 1999

In article <36ED8693.3D15@delete_this_field.uiuc.edu>,

Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:
>
>Nope. 2/3 of the awakenings occur after a Tail, but these
>awakenings are not equally likely events.
>
>P(Heads & Monday) = 1/2
>P(Tails & Monday) = 1/4
>P(Tails & Tuesday) = 1/4

This would be correct if the rules stated that she would be awakened
on Monday OR Tuesday in the event the coin comes up tails,
but unless I misread the question, she is to be awakened on Monday
AND Tuesday in this case. That means that the three events
are definitely equiprobable.

Once again, imagine repeating the experiment a million times.
The event (Heads & Monday) will occur half a million times.
So will the event (Tails & Monday) (since tails comes up half the
time, and every time it does a Monday-awakening occurs). So
those two events are equally probable.

-Ted


Jamie Dreier

Mar 15, 1999

Well, those are the two answers I expected, of course.

Each has something obvious to be said in its favor. But they can't both be
right. (Can they?)

So, to summarize:

There is a frequentist sort of argument in favor of her declaring that the
chance that the coin landed Heads is only 1/3 (to wit: suppose the game
were played repeatedly, and on each occasion for guessing she made a guess
to herself, "I guess that it's Heads"; she would be right only 1/3 of the
time). On the other hand, there is a more Bayesian sort of argument that
she should think the chance is 1/2 (to wit: I thought it was 1/2 before
they put me to sleep, and I clearly have no new information, so it would
be irrational to change my mind).

Two arguments, incompatible conclusions, at least one of the arguments
must be faulty.

-Jamie

David A Karr

Mar 15, 1999

Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:
>For instance, take an extreme case - instead of flipping a normal coin, say
>it is weighted in such a fashion that there is only a 1/1000000 chance of
>tails coming up- but in that case, she will be woken 999,999 times! By your
>reasoning, she should say the chance of it being tails is 50%, which is
>clearly incorrect.

"Clearly"? I see nothing clear about it.

In fact I'm not sure I know what Jamie's question even means.
What's the significance of the "credence" that Sleeping Beauty
assigns to a proposition that we already know to be true (or false)?

Are we offering her a "bet"---she has the option to pay $p (out of
the royal treasury, whose amount is "large enough" but hidden from her),
and if she does so and the coin was "heads" we pay her back $1?
And do we offer this bet every time we interview her?
If so, I wouldn't want to risk more than $0.50 if I were
Sleeping Beauty in your experiment, would you?


Change this back to 1/2 chance of heads and two wakenings on tails
for every wakening on heads: then as S.B. I wouldn't even pay
$0.34 for the chance that the coin came up heads. In other words,
that measure of "credence" is 1/3.


On the other hand, suppose again the coin comes up heads on 1/2 of
all flips, and S.B. is woken once on heads, twice on tails. But now
suppose S.B. has the opportunity *before* the coin toss to pay $0.45
"betting on heads." Each time we wake her up, we ask if she still
wants her bet (if she still has one) to stand---if she says "no" at
any time during the week, her $0.45 is returned at the end of the week,
otherwise she gets $1 on heads and $0 on tails at the end of the week.

In the latter case she'll have two chances to cancel a losing bet
for every chance to cancel a winning bet, yet it still pays *not* to
cancel the bet; in that sense her "credence" is 1/2.


So is the "credence" that the coin came up heads 1/2 or 1/3?
Again, what *is* "credence"?

--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962
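
Karr's two betting schemes can be priced with a short expected-value calculation. The sketch below (illustrative; the function names are mine, not the thread's) assumes a fair coin, one interview on heads and two on tails, and a ticket that pays $1 on heads:

    from fractions import Fraction

    def per_interview_ev(price, p_heads=Fraction(1, 2)):
        """Expected net per run if she pays `price` at every interview for a
        ticket paying $1 on heads: one interview after heads, two after tails."""
        return p_heads * (1 - price) + (1 - p_heads) * (-2 * price)

    def pre_toss_ev(stake, p_heads=Fraction(1, 2)):
        """Expected net of a single pre-toss bet on heads that she lets stand."""
        return p_heads * (1 - stake) + (1 - p_heads) * (-stake)

    print(per_interview_ev(Fraction(1, 3)))   # 0: break-even price per interview is $1/3
    print(per_interview_ev(Fraction(1, 2)))   # negative: $0.50 per interview loses money
    print(pre_toss_ev(Fraction(45, 100)))     # +0.05: letting the $0.45 bet stand beats cancelling
    print(pre_toss_ev(Fraction(1, 2)))        # 0: break-even pre-toss stake is $0.50

The per-interview break-even is 1/3 and the pre-toss break-even is 1/2, which is exactly the divergence between the two measures of "credence" Karr describes.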

Jamie Dreier

Mar 15, 1999

Hm, three good points.

> Eytan Zweig

> After the experiment ends, Beauty is woken up once more, this time for good.
> She does not, however, remember anything that happened while the experiment
> was going on, except the rules of the experiment. Once again she is
> interviewed, and asked what is the credence for the possibility that the
> coin came up heads.
>
> Obviously, the credence is now 50%; however, she gained no new information
> except that the experiment is over, which is irrelevant to the credence -
> everything else is exactly the same knowledge as she had during the
> experiment. Why, according to your logic, does the credence change?

Indeed.
The 'no new information' reasoning seems very compelling.

Back up a little and I think it can be made even *more* compelling.
Before she is put to sleep, Beauty certainly thinks that the chance of
Heads is 1/2. Now for the sake of argument, suppose that upon awakening
she really does get some hard-to-state information, so that she reasonably
changes her credence in Heads to 1/3.
But whatever that strange information might be, she *knew* she was going
to get it, she knew this before she was put to sleep. Whenever you *know*
for sure that you are going to get information soon that will rationally
make you take the chance of Heads to be 1/3, surely you must *now* take
the chance of Heads to be 1/3. Otherwise your beliefs suffer from a very
blatant sort of diachronic incoherence (and we'll make an easy dutch book
against you).

So she couldn't possibly be getting any new information, because it
couldn't possibly be rational for her to think ahead of time that the
chance of Heads is 1/3.


ka...@shore.net (David A Karr) wrote:

> In fact I'm not sure I know what Jamie's question even means.
> What's the significance of the "credence" that Sleeping Beauty
> assigns to a proposition that we already know to be true (or false)?

Hmmm.

Well, I meant to be using the standard Bayesian sense of 'credence', which
is generally cashed out as 'degree of belief'.

If you prefer, you may (as Matt McLelland suggests) rephrase the question:
What should she take the odds of Heads to be when we interview her? (I am
talking about her rational state of belief, though -- which may or may not
be logically related to her disposition to bet, I intend not to take any
stand on what the relation is or isn't.)


> Are we offering her a "bet"---she has the option to pay $p (out of
> the royal treasury, whose amount is "large enough" but hidden from her),
> and if she does so and the coin was "heads" we pay her back $1?
> And do we offer this bet every time we interview her?
> If so, I wouldn't want to risk more than $0.50 if I were
> Sleeping Beauty in your experiment, would you?
>
>
> Change this back to 1/2 chance of heads and two wakenings on tails
> for every wakening on heads: then as S.B. I wouldn't even pay
> $0.34 for the chance that the coin came up heads. In other words,
> that measure of "credence" is 1/3.

Yeah.

Ok, but when we interview her, we can just ask her, What do you think is
the chance that the coin came up Heads? And what should she say, what is
the rational thing for her to believe?


Matt McLelland <mat...@flash.net> adds:

> The answer could be "50-50 of course. It is still a fair coin" or "Given that
> I am up, the odds are just 1/3."
> I think the second answer was the intended one.

Well, that is the answer endorsed by the author of the problem. He thinks
she should say that the probability of Heads given "I am awake now" is
1/3. *I* intended neither answer in particular. I keep waffling, myself.

bu...@pac2.berkeley.edu wrote:

> I don't think it has anything to do with frequentism vs. Bayesianism --
> I can phrase the 2/3-tails argument in Bayesian terms just as well
> as in frequentist terms.

Ok, that may be right. Still, the reasoning given in support of the 1/3
Heads answer tends to be by way of some facts about long term relative
frequencies, while the reasoning in favor of the 1/2 answer tends to be in
terms of rational change in belief.
(Note that without assuming that rational updating is by
conditionalization, it's hard to find any argument at all in favor of a
1/2 answer.)

> As it happens, I'm a diehard Bayesian;
> I phrased the argument in frequentist terms because in my
> experience that's what other people respond best to.
>
> Consider a universe of four possible events:
>
> 1. Coin came up heads; today is Monday
> 2. Coin came up heads; today is Tuesday
> 3. Coin came up tails; today is Monday
> 4. Coin came up tails; today is Tuesday
>
> Surely we can agree that these four events are equally probable, if we
> lack the information that Sleeping Beauty was awakened. To be
> specific, imagine that Rip van Winkle is asleep beside Sleeping Beauty
> and that the rules of the game dictate that he will be awakened on
> Monday and Tuesday regardless of the outcome of the coin flip. When
> he wakes up, he assesses the probability of the four events above to
> be equal: 1/4 each. (Assume he's awakened before S.B., in the
> cases where she is to be awakened.)
>
> Sleeping Beauty is in exactly the same state as Rip van Winkle, except
> that she has one more piece of information: she knows that she's been
> awakened. Now use standard Bayesian techniques to assess her
> (subjective) probabilities for the three events. Number 2 is ruled
> out; the others are equally probable. QED.
>
> -Ted

Yeah, good point.
Here's the funny thing, though. The information, "I have been awakened",
is a peculiar piece of information. For one thing, it is 'essentially
tensed' information, which is very peculiar in itself. It is something
that in principle she cannot know in advance. If we try, "At some moment I
will be or have been awakened", as a proxy for "I have been awakened",
then we have failed to capture the information, since obviously she does
already know, before the experiment, that she will be awakened at some
moment, and still she thinks (before the experiment) that the chance that
the coin lands Heads is 1/2.

Another odd thing about this information is that Beauty herself cannot
possibly receive its contradictory as information. She cannot ever be in a
position to conditionalize on, "I have not been awakened."

When I first looked at this problem, I thought, "I can see what's so
strange about this situation -- it's that the 'information' that appears
to be relevant is information not about what the world is like, but about
what day it is." But this is not right, I was completely wrong about that.
This feature is entirely incidental, and could be squeezed right out with
a more complicated story.
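
Ted's Rip van Winkle argument can be written out as an explicit conditioning step. The sketch below (illustrative only) encodes his four equiprobable (coin, day) cells and conditions on "Beauty is awake"; whether those four cells are the right prior for Beauty herself is precisely what Jamie's reply above is worrying about:

    from fractions import Fraction

    # Rip van Winkle's four equally likely (coin, day) cells.
    cells = {
        ("heads", "Monday"):  Fraction(1, 4),
        ("heads", "Tuesday"): Fraction(1, 4),
        ("tails", "Monday"):  Fraction(1, 4),
        ("tails", "Tuesday"): Fraction(1, 4),
    }

    def beauty_awake(cell):
        coin, day = cell
        return not (coin == "heads" and day == "Tuesday")   # the one cell with no awakening

    # Condition on "Beauty is awake": drop the excluded cell and renormalize.
    mass = sum(p for c, p in cells.items() if beauty_awake(c))
    posterior = {c: p / mass for c, p in cells.items() if beauty_awake(c)}

    print(sum(p for (coin, _), p in posterior.items() if coin == "heads"))   # 1/3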

Jamie Dreier

Mar 15, 1999

Matt McLelland <mat...@flash.net> wrote:

> Jamie Dreier wrote:
>
> > Back up a little and I think it can be made even *more* compelling.
> > Before she is put to sleep, Beauty certainly thinks that the chance of
> > Heads is 1/2. Now for the sake of argument, suppose that upon awakening
> > she really does get some hard-to-state information, so that she reasonably
> > changes her credence in Heads to 1/3.
>

> There is nothing paradoxical about this. It isn't even very complicated,
> and time really has nothing to do with it. Change the problem so that we
> don't ever interrogate her if the coin comes up heads, and suppose she
> knows this. Now imagine that you are her, and that you get interrogated.
> You don't have reason to doubt that the coin is still fair, but you still
> know that it came up tails with 100% certainty. There isn't anything
> deeper than this involved in this problem.

I think there is.

In your example, the fact that she is being interrogated does,
uncontroversially, count as information for her. We can put that
information in a tenseless way, so that she could in principle get it at
some other time -- say, before she is put to sleep in the first place. At
that moment, she will quite reasonably think, "Of course, if I am
interrogated at all, that will mean that the coin came up tails. That is,
pr(Tails | I will be interrogated) = 1."

Then, when she is in fact interrogated, she conditionalizes as usual, and voila.

But in the original problem, there doesn't seem to be any way to state the
relevant information (if it really is information) in a neutral, untensed
way. We can't put it like this: "I have been or will be interrogated."
Because she already knows that at the outset, so if pr(Heads | I have been
or will be interrogated) = 1/3, then since she knows that the condition
obtains, she can just conditionalize and conclude, pr(Heads) = 1/2. But
that's not right.

So as far as I can see, time really does have something to do with it.

Jamie Dreier

Mar 15, 1999

russ...@wanda.vf.pond.com (Matthew T. Russotto) wrote:

> The only explanation I can come up with is that "today is Monday" and
> "today is Tuesday" can't be treated as events.

I think that's right. At least in the most straightforward way, they cannot be.

The 'events' in probability theory can often be thought of as sets of
possible worlds. The probability function is a measure on these sets.
Conjunction of the events corresponds to intersection of the sets,
disjunction to union, and so on. The event, "the next Congress will have a
Republican majority", is represented by the set of worlds in which the
next Congress has a Republican majority. My estimate of the probability of
the event comes from my measure of that set.

But which set of possible worlds represents the pseudoevent (hm, well, to
be less tendentious: the 'purported event') that today is Monday? Today is
Monday, after all, in *every* possible world (I'm writing at 11:52 pm on
Monday), or else in *no* possible world (you are most likely reading this
on Tuesday or Wednesday).

I guess the interesting question is whether we can extend the usual
apparatus to include this new sort of information. It ought to be
possible. After all, we *do* sometimes find ourselves in the position of
not knowing what time it is, and of having some educated guesses about it
("I know it isn't noon, my probabilities for times are clustered around
midnight..."). And sometimes this makes a difference to what we think we
should do. ("I'm pretty sure my watch is fast, so I can play two more
rounds of freecell before I have to run off to the meeting, but there is a
small chance that my watch is slow, in which case I'd better stop typing
*now* and hightail it over there....")

Eytan Zweig

Mar 16, 1999

<bu...@pac2.berkeley.edu> wrote in message
news:7cjvq3$hv5$1...@agate.berkeley.edu...
>In article <36ED7A81.5A61@delete_this_field.uiuc.edu>,

>Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:
>>Jamie Dreier wrote:
>>>
>>> We plan to put Beauty to sleep by chemical means, and then we'll flip a
>>> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
>>> afternoon and interview her. If it lands Tails, we will awaken her Monday
>>> afternoon, interview her, put her back to sleep, and then awaken her again
>>> on Tuesday afternoon and interview her again.
>
>>> What credence should she state in answer to our question?
>>
>>50/50. The Heads and Tails environments are identical. They
>>give her no information, so the probability remains the same.
>
>SPOILER
>
>
>
>
>
>Nope. She should say that the probability that it's tails is 2/3.
>
>Imagine repeating the experiment a million times. Heads comes
>up half a million times, as does tails. But each time tails
>comes up she's awakened twice. So there are a total of
>1.5 million awakenings, and only half a million of them
>occur after the coin came up heads.
>

>-Ted

No, this reasoning only applies if she knows that the experiment will be
repeated several times, and that the number of tails will equal the number
of heads. It does not apply to a single case where one result eliminates
the other.

For instance, take an extreme case - instead of flipping a normal coin, say
it is weighted in such a fashion that there is only a 1/1000000 chance of
tails coming up- but in that case, she will be woken 999,999 times! By your
reasoning, she should say the chance of it being tails is 50%, which is
clearly incorrect.

Eytan Zweig

Eytan Zweig

Mar 16, 1999

<bu...@pac2.berkeley.edu> wrote in message
news:7ck2et$k6p$1...@agate.berkeley.edu...
>In article <36ED8693.3D15@delete_this_field.uiuc.edu>,

>Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:
>>
>>Nope. 2/3 of the awakenings occur after a Tail, but these
>>awakenings are not equally likely events.
>>
>>P(Heads & Monday) = 1/2
>>P(Tails & Monday) = 1/4
>>P(Tails & Tuesday) = 1/4
>
>This would be correct if the rules stated that she would be awakened
>on Monday OR Tuesday in the event the coin comes up tails,
>but unless I misread the question, she is to be awakened on Monday
>AND Tuesday in this case. That means that the three events
>are definitely equiprobable.
>
>Once again, imagine repeating the experiment a million times.
>The event (Heads & Monday) will occur half a million times.
>So will the event (Tails & Monday) (since tails comes up half the
>time, and every time it does a Monday-awakening occurs). So
>those two events are equally probable.
>
>-Ted

Ok, think about this related question -

After the experiment ends, Beauty is woken up once more, this time for good.
She does not, however, remember anything that happened while the experiment
was going on, except the rules of the experiment. Once again she is
interviewed, and asked what is the credence for the possibility that the
coin came up heads.

Obviously, the credence is now 50%; however, she gained no new information
except that the experiment is over, which is irrelevant to the credence -
everything else is exactly the same knowledge as she had during the
experiment. Why, according to your logic, does the credence change?

Eytan Zweig

Matt McLelland

Mar 16, 1999

David A Karr wrote:

> On the other hand, suppose again the coin comes up heads on 1/2 of
> all flips, and S.B. is woken once on heads, twice on tails. But now
> suppose S.B. has the opportunity *before* the coin toss to pay $0.45
> "betting on heads." Each time we wake her up, we ask if she still
> wants her bet (if she still has one) to stand---if she says "no" at
> any time during the week, her $0.45 is returned at the end of the week,
> otherwise she gets $1 on heads and $0 on tails at the end of the week.
>
> In the latter case she'll have two chances to cancel a losing bet
> for every chance to cancel a winning bet, yet it still pays *not* to
> cancel the bet; in that sense her "credence" is 1/2.

I think that "What is your credence" can be assumed to mean "What are the odds
[to you]". The question was:

What is your credence now for the proposition that our coin landed Heads?

Your objection is that this could be interpreted to mean "What *were* the odds
that our coin landed heads when we flipped it." I think that the use of the
word 'now' implies a meaning of "What are the odds that the coin came up
heads given that we just woke you up."

This isn't really a problem with the use of the word credence, I think. The
same possible ambiguity exists in the statement:

Now what are the odds that our coin landed heads?

bu...@pac2.berkeley.edu

Mar 16, 1999

In article <pl436000-150...@bootp-17.college.brown.edu>,

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
>
>There is a frequentist sort of argument in favor of her declaring that the
>chance that the coin landed Heads is only 1/3 (to wit: suppose the game
>were played repeatedly, and on each occasion for guessing she made a guess
>to herself, "I guess that it's Heads"; she would be right only 1/3 of the
>time). On the other hand, there is a more Bayesian sort of argument that
>she should think the chance is 1/2 (to wit: I thought it was 1/2 before
>they put me to sleep, and I clearly have no new information, so it would
>be irrational to change my mind).

I don't think it has anything to do with frequentism vs. Bayesianism --
I can phrase the 2/3-tails argument in Bayesian terms just as well
as in frequentist terms. As it happens, I'm a diehard Bayesian; I phrased
the argument in frequentist terms because in my experience that's what
other people respond best to.

bu...@pac2.berkeley.edu

Mar 16, 1999

In article <7ck3m8$blg$1...@goethe.tau.ac.il>,

Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:
>After the experiment ends, Beauty is woken up once more, this time for good.
>She does not, however, remember anything that happened while the experiment
>was going on, except the rules of the experiment. Once again she is
>interviewed, and asked what is the credence for the possibility that the
>coin came up heads.
>
>Obviously, the credence is now 50%; however, she gained no new information
>except that the experiment is over, which is irrelevant to the credence -

Can you explain that last clause? Why on earth would you think that
the information that the experiment is over is irrelevant? Of course
it's relevant -- it completely changes the universe of possibilities
for her, from

{heads and I'm being awakened for the first time,
tails and I'm being awakened for the first time,
tails and I'm being awakened for the second time}

to

{heads and the experiment is over,
tails and the experiment is over}.

With a different universe of possibilities, there's no reason to
expect the probability of tails to be the same in the two cases. I
can't even begin to guess your reasons for suggesting that the two
situations (1. she knows the experiment is still going on; 2. she
knows the experiment is over) are equivalent.

-Ted

Matthew T. Russotto

Mar 16, 1999

In article <7ck29k$bbm$1...@goethe.tau.ac.il>,

Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:
}
}<bu...@pac2.berkeley.edu> wrote in message
}news:7cjvq3$hv5$1...@agate.berkeley.edu...

}>Nope. She should say that the probability that it's tails is 2/3.
}>
}>Imagine repeating the experiment a million times. Heads comes
}>up half a million times, as does tails. But each time tails
}>comes up she's awakened twice. So there are a total of
}>1.5 million awakenings, and only half a million of them
}>occur after the coin came up heads.

}
}No, this reasoning only applies if she knows that the experiment shall be
}repeated several times, and that the amount of tails shall be equal to
}heads. It does not apply to a single case where one result eliminates the
}other.
}
}For instance, take an extreme case - instead of flipping a normal coin, say
}it is weighted in such a fashion that there is only a 1/1000000 chance of
}tails coming up- but in that case, she will be woken 999,999 times! By your
}reasoning, she should say the chance of it being tails is 50%, which is
}clearly incorrect.

She's given that the coin is fair, so your objection doesn't apply.

Consider this:
Same setup, only this time Sleeping Beauty is told whether it is
Monday or Tuesday. Clearly, on Tuesday she should answer that the
probability of tails is 1, and on Monday she should answer that the
probability of tails is 1/2.

Certainly when she isn't told, the aggregate probability of combining
the two cases can't drop to 1/2.

But to confound things further:

P(tails) = 1/2
P(heads) = 1/2
P(tails | Monday) = 1/2
P(tails | Tuesday) = 1
P(Monday | tails) = 1/2
P(Monday | heads) = 1
P(Tuesday | tails) = 1/2
P(Tuesday | heads) = 0
P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
= 1/2*1/2 + 1*1/2 = 3/4
P(Tuesday) = 1/4
(alarm bells should be going off by now)

P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)
= 1/2 * 3/4 + 1 * 1/4 = 5/8

A contradiction! But all I've used is
P(A) = P(A|B)P(B) + P(A|~B)P(~B)... that can't be wrong!

The only explanation I can come up with is that "today is Monday" and
"today is Tuesday" can't be treated as events.

--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."

bu...@pac2.berkeley.edu

Mar 16, 1999

In article <7ck29k$bbm$1...@goethe.tau.ac.il>,
Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:

>For instance, take an extreme case - instead of flipping a normal coin, say
>it is weighted in such a fashion that there is only a 1/1000000 chance of
>tails coming up- but in that case, she will be woken 999,999 times! By your
>reasoning, she should say the chance of it being tails is 50%, which is
>clearly incorrect.

I don't understand the last clause ("which is clearly incorrect").
It would make a lot more sense to me if it said "which is
clearly correct." :-)

When she's awakened, there are 1,000,000 distinct possibilities:

1. Heads, and I'm being awakened for the first time.
2. Tails, and I'm being awakened for the first time.
3. Tails, and I'm being awakened for the second time.
...
1000000. Tails, and I'm being awakened for the 999,999th time.

Clearly the first one is 999,999 times more likely than any of the
others, but the others are 999,999 times more numerous, so, based on
the information she has available, the probabilities of heads and
tails are equal.

-Ted
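
Ted's counting here generalizes to any bias and any number of tails-awakenings. The sketch below (illustrative; the weighting by awakenings is the very step the other side of the thread rejects) computes the heads fraction among awakenings:

    from fractions import Fraction

    def p_heads_among_awakenings(p_heads, wakes_on_heads, wakes_on_tails):
        """Fraction of awakenings that follow heads, weighting each outcome of
        the flip by how many awakenings it produces."""
        w_heads = p_heads * wakes_on_heads
        w_tails = (1 - p_heads) * wakes_on_tails
        return w_heads / (w_heads + w_tails)

    # Original puzzle: fair coin, 1 awakening on heads, 2 on tails.
    print(p_heads_among_awakenings(Fraction(1, 2), 1, 2))                    # 1/3

    # Eytan's extreme variant: heads with probability 999999/1000000,
    # but 999,999 awakenings on tails.
    print(p_heads_among_awakenings(Fraction(999999, 1000000), 1, 999999))    # 1/2

With a fair coin and n tails-awakenings the same formula gives 1/(n+1), the figure that reappears later in the thread.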

Matt McLelland

Mar 16, 1999

Jamie Dreier wrote:

> Back up a little and I think it can be made even *more* compelling.
> Before she is put to sleep, Beauty certainly thinks that the chance of
> Heads is 1/2. Now for the sake of argument, suppose that upon awakening
> she really does get some hard-to-state information, so that she reasonably
> changes her credence in Heads to 1/3.

There is nothing paradoxical about this. It isn't even very complicated, and time

Matt McLelland

Mar 16, 1999

Matthew T. Russotto wrote:

> P(tails | Monday) = 1/2

Bzzz.
p(tails | Monday) = 1/3

> P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> = 1/2*1/2 + 1*1/2 = 3/4
> P(Tuesday) = 1/4
> (alarm bells should be going off by now)

No. So far so good. Keep in mind that the event "It is Monday" is really "I
got woken up on Monday".

> P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)

> = 1/2 * 3/4 + 1 * 1/4 = 5/8

Substituting correct values you would have the following non-contradiction:
P(tails) = 1/3 * 3/4 + 1*1/4 = 1/2
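
The apparent contradiction in Russotto's post disappears once all the conditional probabilities are read off a single joint distribution over the three awakening-cells. The sketch below (illustrative only) checks the identity P(tails) = P(tails|Mon)P(Mon) + P(tails|Tue)P(Tue) for the assignment McLelland is defending here and for Ted's three-equal-cells assignment; each is internally consistent, they simply disagree about which joint distribution to use:

    from fractions import Fraction as F

    def check(joint):
        """Verify the law of total probability for P(tails), given a joint
        distribution over the three possible awakening-cells."""
        p_mon = joint[("tails", "Mon")] + joint[("heads", "Mon")]
        p_tue = joint[("tails", "Tue")]
        p_tails = joint[("tails", "Mon")] + joint[("tails", "Tue")]
        p_t_given_mon = joint[("tails", "Mon")] / p_mon
        p_t_given_tue = F(1)                  # only tails-awakenings happen on Tuesday
        print(p_tails, "=", p_t_given_mon * p_mon + p_t_given_tue * p_tue)

    # Ferry/McLelland assignment: P(tails|Monday) = 1/3, P(tails) = 1/2.
    check({("heads", "Mon"): F(1, 2), ("tails", "Mon"): F(1, 4), ("tails", "Tue"): F(1, 4)})

    # Ted's assignment: three equiprobable cells, P(tails|Monday) = 1/2, P(tails) = 2/3.
    check({("heads", "Mon"): F(1, 3), ("tails", "Mon"): F(1, 3), ("tails", "Tue"): F(1, 3)})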


Jamie Dreier

Mar 16, 1999

Matt McLelland <mat...@flash.net> wrote:

> Matthew T. Russotto wrote:
>
> > P(tails | Monday) = 1/2
>
> Bzzz.
> p(tails | Monday) = 1/3
>
> > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> > = 1/2*1/2 + 1*1/2 = 3/4
> > P(Tuesday) = 1/4
> > (alarm bells should be going off by now)
>
> No. So far so good. Keep in mind that the event "It is Monday" is
> really "I got woken up on Monday".

Really?

So wait, let's see how this works.
You think that P(tails | I got woken up on Monday) = 1/3.
But she *knows* she will be awakened on Monday. So why can she not
conditionalize in advance, before she is put to sleep, and conclude that
the chance of tails is 1/3?

Or what about this version:
same set-up, except that when we awaken Beauty on Monday we will wake her
up by shouting, "HEY, BEAUTY, IT'S MONDAY!".

Now surely when we awaken her in that rude way, she will take the chance
that the coin came up Heads to be 1/2. Or is this really different from a
version in which we will not awaken her on Tuesday at all, no matter how
the coin lands? It doesn't seem to be any different, from her perspective
on Monday when we awaken her by the rude shouting.

Matt McLelland

Mar 16, 1999

Jamie Dreier wrote:

> So as far as I can see, time really does have something to do with it.

Time really isn't the issue. Unfortunately, I completely missed the boat with my
last few posts:
The probability that the coin came up heads given that you were just woken up is
1/2. My apologies.

Anyone who still doesn't believe it can imagine what would happen if they increased
the number of interrogations from 2 to 1 zillion on tails (leaving 1 interrogation
for a head). You agree to do the experiment once, and sure enough you awaken and
they ask the question. Do you really think you can be almost positive that the coin
didn't come up heads?


Matt McLelland

Mar 16, 1999

Jamie Dreier wrote:

> Matt McLelland <mat...@flash.net> wrote:
>
> > Matthew T. Russotto wrote:
> >
> > > P(tails | Monday) = 1/2
> >
> > Bzzz.
> > p(tails | Monday) = 1/3
> >
> > > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> > > = 1/2*1/2 + 1*1/2 = 3/4
> > > P(Tuesday) = 1/4
> > > (alarm bells should be going off by now)
> >
> > No. So far so good. Keep in mind that the event "It is Monday" is
> > really "I got woken up on Monday".
>
> Really?

Really. My previous retraction doesn't affect anything I said to Matthew T.
Russotto.

> So wait, let's see how this works.
> You think that P(tails | I got woken up on Monday) = 1/3.

Yep.

> But she *knows* she will be awakened on Monday. So why can she not
> conditionalize in advance, before she is put to sleep, and conclude that
> the chance of tails is 1/3?

You are confusing the events "She will be awakened on Monday" and "Today is
Monday and she was awakened". They are not the same. The probability of the
first is 1 and the probability of the second isn't.

Let me give you a simple example that doesn't involve time:

I flip a coin. If it comes up heads I put the number 123456 in a hat, and if
it comes up tails I put the numbers 1 through a million in a hat.

Scenario 1: You pull a single number from my hat. If you pull 123456,
hopefully you will guess the coin was heads.

Scenario 2: You pull the numbers from my hat until none are left. Now, you
know that you will eventually pull 123456 from either hat. So if you learned
that 123456 was pulled from the hat at some point during our trial, there
would be no 'information' gained. On the other hand, if you pull 123456 from
the hat on your first pull, then the odds are that the coin came up heads.

> Or what about this version:
> same set-up, except that when we awaken Beauty on Monday we will wake her
> up by shouting, "HEY, BEAUTY, IT'S MONDAY!".

This *is* different.

PS: The event "Today is Monday" is perfectly valid.
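
The hat example comes down to a likelihood comparison, which the sketch below (illustrative only) works through for both scenarios:

    from fractions import Fraction

    N = 1_000_000
    p_heads = Fraction(1, 2)

    def posterior_heads(like_heads, like_tails):
        """Bayes' rule for P(heads | evidence) given the two likelihoods."""
        return (p_heads * like_heads) / (p_heads * like_heads + (1 - p_heads) * like_tails)

    # Scenario 1: a single draw comes up 123456.
    # P(123456 | heads) = 1 (it's the only slip); P(123456 | tails) = 1/N.
    print(posterior_heads(Fraction(1), Fraction(1, N)))   # 1000000/1000001

    # Scenario 2: you only learn that 123456 was pulled at some point before the
    # hat emptied.  That is certain either way, so it carries no information.
    print(posterior_heads(Fraction(1), Fraction(1)))      # 1/2

The open question in the thread is which of the two scenarios an awakening most resembles.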


Matthew T. Russotto

Mar 16, 1999

In article <36EDD8D7...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:
}Matthew T. Russotto wrote:
}
}> P(tails | Monday) = 1/2
}
}Bzzz.
}p(tails | Monday) = 1/3

This means that if we tell Sleeping Beauty what day we woke her up,
on Monday she should say that the probability of the coin having
landed "tails" is 1/3. Surely that can't be the case!

bu...@pac2.berkeley.edu

Mar 16, 1999

In article <36EDECBB...@flash.net>,
Matt McLelland <mat...@flash.net> wrote:

> Anyone who still doesn't believe it can imagine what would happen if
> they increased the number of interrogations from 2 to 1 zillion on
> tails (leaving 1 interrogation for a head). You agree to do the
> experiment once, and sure enough you awaken and they ask the question.
> Do you really think you can be almost positive that the coin didn't
> come up heads?

Yes. Of course. How could it be any other way? There are
a zillion and one equiprobable possible explanations for
why she was awakened. One is heads, and a zillion are tails.
P(heads) = 1/zillion.

-Ted

bu...@pac2.berkeley.edu

Mar 16, 1999

I wrote

> P(heads) = 1/zillion.

I meant

P(heads) = 1/(zillion+1)

of course. Sorry about that. (If only all the errors I made were
in the zillionth decimal place!)

-Ted

David A Karr

Mar 16, 1999

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
>ka...@shore.net (David A Karr) wrote:
>
>> In fact I'm not sure I know what Jamie's question even means.
>> What's the significance of the "credence" that Sleeping Beauty
>> assigns to a proposition that we already know to be true (or false)?
>
>Hmmm.
>
>Well, I meant to be using the standard Bayesian sense of 'credence', which
>is generally cashed out as 'degree of belief'.

And my question is, what the heck difference does it make *what*
S.B.'s "degree of belief" is? Why should S.B. care what number she
says? Why should the experimenter care?

>If you prefer, you may (as Matt McLelland suggests) rephrase the question:
>What should she take the odds of Heads to be when we interview her? (I am
>talking about her rational state of belief, though -- which may or may not
>be logically related to her disposition to bet, I intend not to take any
>stand on what the relation is or isn't.)

I suggested placing bets because this actually puts S.B. in a situation
where giving an "incorrect" credence to a proposition could reasonably
be perceived as being disadvantageous to her. There are other ways to
do this, for example I'm sure you could recast the original problem
in such a way that Sleeping Beauty's life is in jeopardy and she has to
make a decision that will either increase or decrease her risk--and she
cannot "opt out of" this "bet." But in any case the essential thing
is to posit an answer to the question, "Who cares?"

I still don't know what you mean by "rational state of belief."
Assuming the number of angels that can dance simultaneously on the
head of a pin is finite, is that number closer to 200,000 or to 2?
What's a "rational state of belief" regarding that question?

David A Karr

Mar 16, 1999

Matt McLelland <mat...@flash.net> wrote:

>David A Karr wrote:
>> In the latter case she'll have two chances to cancel a losing bet
>> for every chance to cancel a winning bet, yet it still pays *not* to
>> cancel the bet; in that sense her "credence" is 1/2.
>[...]

> What is your credence now for the proposition that our coin landed Heads?
>
>Your objection is that this could be interpreted to mean "What *were* the odds
>that our coin landed heads when we flipped it." I think that the use of the
>word 'now' implies a meaning of "What are the odds that the coin came up
>heads given that we just woke you up."

I think you mischaracterized my argument. I very much intended to
ask, "What are the odds that the coin came up heads given that we just
woke you up." I'm not so much concerned with any decisions S.B. might
have made before she went to sleep; I'm asking, *now* *that* *she's*
*been* *awakened* and asked to make a decision, what should she decide?
I think it's obvious she should make this decision in light of any
information she has *now*, not just in light of any information she
had before the coin was tossed.

If you like, let's not have any bets placed before S.B. goes to sleep.
Instead, each time we wake her we'll ask, "Do you want your agent to
pay $p on Wednesday, knowing that you'll get back $1 if the coin came
up heads and $0 if it came up tails?" Moreover, S.B. knew we were going
to ask this.

Personally, if I were S.B. in this experiment I'd ask, "Do you mean my
agent will pay $p *in* *addition* to any other payments I might
authorize or have authorized this week? Or does the order I give now
supersede any previous orders I may have given?" And I would refuse
to answer the question that was asked me until I'd gotten a
satisfactory answer to these two.

David A Karr

Mar 16, 1999

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
>But which set of possible worlds represents the pseudoevent (hm, well, to
>be less tendentious: the 'purported event') that today is Monday? Today is
>Monday, after all, in *every* possible world (I'm writing at 11:52 pm on
>Monday), or else in *no* possible world (you are most likely reading this
>on Tuesday or Wednesday).

Actually, I think in a sense there *is* a possible world in which
today is Monday (or "is not Monday," in case you happen to think it's
Monday when you read this). After all, someone at some point in time
declared, "It's time to start holding a Sabbath every seven days, and
the first one will be ... days from now." (Or substitute your own
favorite theory for the origin of our current seven-day week.) I see
no fundamental reason why that announcement couldn't possibly have
been delayed a day or two.

Robinson Crusoe faced this question. It was important to him to know
what day was Sunday, because he feared he might be punished if he
failed to make Sunday observances on the proper day. And as it turned
out, every week for many years he believed "today is Sunday" when in
fact the day was not Sunday.


But you can easily set up the Sleeping Beauty experiment to get around
this what-day-is-today question. On the day we flip the coin, we set
up an empty cup. On each successive day thereafter, early in the
morning we drop a small white ball into the cup. If after this act
there is one ball in the cup, or if there are two balls and the coin
shows tails, we wake S.B. and interview her; otherwise we let her
sleep, except that on the day when we drop the seventh ball in the cup
we wake S.B. and end the experiment. We also promise not to let the
coin be turned over from the moment it lands heads or tails until the
moment the experiment ends.

When S.B. is interviewed, then, the following two propositions seem
to me to be equally subject to whatever "degree of credence" she can
assign:

The coin is facing heads up.
There is exactly one ball in the cup.

r e s

Mar 16, 1999

(I hope you don't object to me posting into this dialogue.)

Instead of playing the stated game with S.B., suppose that the
consequence of Heads is to write the letter "A" on a slip of
white paper, whereas the consequence of Tails is to write "A"
on a slip of white paper and also on n-1 additional slips of
colored paper, each having a different color (n=2 corresponds
to the old scenario). Suppose that S.B. knows all about how
this procedure is performed, but, as far as the outcome is
concerned, S.B. is informed only of the letter written on one
piece of paper, but not whether it is a consequence of H or T.
(The letter, of course, is always "A", just as before the
information available to S.B. was only "I've been Awakened",
together with the game rules.)

It seems to me that S.B.'s "state of information" about H or T
is exactly the same in both of these games, with
"Monday"<->"white paper", "Tuesday"<->"colored paper",
(or if n>2, the different days correspond to different colors).
It has nothing essential to do with time, but has everything to
do with indistinguishability of outcomes like "A" when not
accompanied by any distinguishing feature, such as color or day.

In both of these games, S.B. learns nothing relevant about H or T,
so that pr(T|A)=pr(T)=1/2=pr(H)=pr(H|A), with obvious notation.

Arguments based on there being n cases stemming from T and only
one case from H must incorporate the fact that the probabilility
of any one of the n T-cases being involved in the outcome is also
proportionately smaller.

--
r e s (Spam-block=XX)
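
Whether the paper-slip game is a faithful re-description turns on what gets counted. The sketch below (illustrative only) tallies heads two ways: per notification in the slip game, where S.B. is informed exactly once per trial, and per awakening in the original protocol, where a Tails run produces two interviews; the next post presses exactly this difference:

    import random

    def heads_per_notification(trials=200_000):
        """r e s's game: one 'a slip says A' notification per trial, either way."""
        heads = sum(random.random() < 0.5 for _ in range(trials))
        return heads / trials                       # ~ 1/2

    def heads_per_awakening(trials=200_000, n=2):
        """Original protocol: 1 awakening on heads, n awakenings on tails."""
        heads_awakenings = 0
        total_awakenings = 0
        for _ in range(trials):
            if random.random() < 0.5:
                heads_awakenings += 1
                total_awakenings += 1
            else:
                total_awakenings += n
        return heads_awakenings / total_awakenings  # ~ 1/(n+1)

    print(heads_per_notification())   # about 0.5
    print(heads_per_awakening())      # about 0.33

Which of the two tallies corresponds to Beauty's credence is, again, the question the thread is arguing about.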



David A Karr

Mar 16, 1999

r e s <XXr...@ix.netcom.com> wrote:
>Arguments based on there being n cases stemming from T and only
>one case from H must incorporate the fact that the probability
>of any one of the n T-cases being involved in the outcome is also
>proportionately smaller.

True, but you've posited an experiment in which S.B. will be
informed *only* *once* that an A is written on a piece of paper.
In Jamie's experiment, S.B. receives her information *twice*
in some cases, although by the second time she's forgotten that
she received the information before.

The two experiments are not isomorphic; at least, it's not obvious to
me that they're isomorphic.

Jamie Dreier

Mar 16, 1999

ka...@shore.net (David A Karr) wrote:


> >> In fact I'm not sure I know what Jamie's question even means.
> >> What's the significance of the "credence" that Sleeping Beauty
> >> assigns to a proposition that we already know to be true (or false)?
> >
> >Hmmm.
> >
> >Well, I meant to be using the standard Bayesian sense of 'credence', which
> >is generally cashed out as 'degree of belief'.
>

> And my question is, what the heck difference does it make *what*
> S.B.'s "degree of belief" is? Why should S.B. care what number she
> says? Why should the experimenter care?

Well, why should anyone care about anything???
I guess I do care, personally, that my representation of the world be
rational. Why should I? I don't know. I know that I do care whether I use,
say, modus ponens correctly. Why should I?


> >If you prefer, you may (as Matt McLelland suggests) rephrase the question:
> >What should she take the odds of Heads to be when we interview her? (I am
> >talking about her rational state of belief, though -- which may or may not
> >be logically related to her disposition to bet, I intend not to take any
> >stand on what the relation is or isn't.)
>

> I suggested placing bets because this actually puts S.B. in a situation
> where giving an "incorrect" credence to a proposition could reasonably
> be perceived as being disadvantageous to her. There are other ways to
> do this, for example I'm sure you could recast the original problem
> in such a way that Sleeping Beauty's life is in jeopardy and she has to
> make a decision that will either increase or decrease her risk--and she
> cannot "opt out of" this "bet." But in any case the essential thing
> is to posit an answer to the question, "Who cares?"

If it's that essential, then let's just suppose that she really, really
wants to have rational credences.


> I still don't know what you mean by "rational state of belief."
> Assuming the number of angels that can dance simultaneously on the
> head of a pin is finite, is that number closer to 200,000 or to 2?
> What's a "rational state of belief" regarding that question?

Oh, hold on.
I'm certainly not saying that there is any particularly rational state
regarding that question. In general, my view is that rationality is a
feature of your beliefs collectively, not one by one. So I suppose that
what it's rational for you to think about angels on pins depends on what
else you believe.

I am resisting a particular characterization of the same question in terms
of bets for a reason, I'm not just refusing in order to be difficult.

Why don't I post a separate account of my reason. I think I will.

Jamie Dreier

Mar 16, 1999

David Karr,

If we asked for Beauty's dispositions to bet, instead of asking for her
credence or what she should believe, I think we'd be asking a different
question.

Suppose it's like this. Suppose we tell her in advance that when(ever) she
is awakened, we will ask her to declare her fair odds that the coin comes
up heads. She must name a fair price for a ticket that pays, um, let's
see, $6 if the coin landed Heads, and nothing otherwise. We will sell her
the ticket for the named price. She is supposed to decide what is the
largest sum she will pay.

Now in this case, it seems pretty obvious that the price she should name
is $2. (Or $1.99, presumably she'd be indifferent to getting the ticket if
it were really sold for the 'fair price'. I'll ignore this hereafter.)

Suppose she says $3 instead -- this is the other obvious price to name.

But now she looks to be in some trouble. She knows that if the coin does
land Heads, she will buy a ticket one time, and she will win, so she will
net $3. On the other hand, if the coin lands Tails she will bet twice,
losing $3 each time, a net loss of $6. Since her prior for the coin
landing Heads is 1/2, this looks like a terrible plan. If she executed it
repeatedly in many runs of the game, she'd take a bath.

Instead she should offer to pay at most $2. Then she nets $4 if the coin
lands Heads, and loses $2 on each of two bets if the coin lands Tails.
Fair.

However, this does not seem to me to show that she should take the real
chance of Heads to be 1/3 at the moment she awakens. The problem is that
*the amount she has to bet in all depends on the outcome on which she is
betting*. In such circumstances, the odds you will take do not reflect
your view of the actual chances.

To see this (maybe it's obvious, humor me), consider the grossly unpopular
Variable Bet Casino. At the VBC they have a roulette table, and you bet
with plaid chips. The roulette wheel is an ordinary casino roulette wheel
(pretend there are no zero nor double-zero so that roulette actually uses
'fair bets'). But the rule is that the plaid chips are worth $1 if the
ball lands in a red space, and $1000 if it lands in a black space.
Now what are the fair odds for the bets on red and black? Not even odds,
that's for sure. The fair odds would pay 1000 to one for a bet on red, and
one to 1000 for a bet on black.
But the fact that these are the odds I deem fair clearly does *not* mean
that I think the chance that the ball will land in a red space is very
tiny. I think the chance is 1/2. Why do my fair odds not reflect my views
about the actual chances? Because the amount that I am betting is not
fixed, it depends on the outcome on which I am betting.

Same for Beauty.

So, while the question of what odds she would take is interesting (sort of
-- it seems to me to be pretty trivial), it doesn't settle the question of
what Beauty should believe.
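
The ticket arithmetic in this post is easy to tabulate. The sketch below (illustrative; parameter names are mine) computes the expected net per run of the game for a named price, under the rule that she buys one ticket after Heads but two -- one per awakening -- after Tails:

    from fractions import Fraction

    def expected_net(price, payoff=6, p_heads=Fraction(1, 2)):
        """Expected net per run if Beauty names `price` for a ticket paying
        `payoff` on Heads: one ticket bought after Heads, two after Tails."""
        net_if_heads = payoff - price       # one ticket, and it wins
        net_if_tails = -2 * price           # two tickets, both lose
        return p_heads * net_if_heads + (1 - p_heads) * net_if_tails

    print(expected_net(3))   # -3/2: the $3 plan loses money over repeated runs
    print(expected_net(2))   #  0:   $2 is the break-even price

The break-even price is a third of the payoff only because the number of tickets bought depends on the outcome being bet on, which is Jamie's point about why these odds need not reflect her view of the chances.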

r e s

Mar 16, 1999

It's just *because* the alternative game involves S.B. being
informed only once of the outcome, that it becomes isomorphic
to the original game. That's the very part that properly
corresponds to "forgetting" in the first game.

Please reconsider this, because I think it succeeds as a true
isomorphism.

--
r e s (Spam-block=XX)



Matthew T. Russotto

Mar 17, 1999

In article <7con13$d...@sjx-ixn9.ix.netcom.com>,

r e s <XXr...@ix.netcom.com> wrote:
}Consider the odds ratio, where Ak is the particular event of
}SB being awakened for the k-th interview (k=1):
}
}pr(H|A1)/pr(T|A1)
}=[pr(A1|H)*pr(H)] / [pr(A1|T)*Pr(T)]
}=pr(A1|H)/pr(A1|T), since we're given that pr(H)=pr(T).
}=1/pr(A1|T), since pr(A1|H)=1.
}
}So, what is pr(A1|T)?
}I assert pr(A1|T)=1/n (e.g. 1/2 for the original n=2)
}where n is the number of times SB is to be awakened if Tail
}occurs.This must be so because the rules guarantee that,
}given T, Sb cannot know which of the A1,...,An obtains.
}Given T, Sb can only know that some one of these Ak obtains.
}
}Therefore, pr(H|A1) / pr(T|A1) = pr(H|A1) / [1-pr(H|A1)] = n,
}and
}pr(H|A1)= n/(n+1),
}hence
}pr(H|A1)=2/3 if n=2.
}
}The same argument shows that if you believe that pr(H|A1)=1/2,
}then you must believe that pr(A1|T)=1.

My argument is with the assumption that P(H) = P(T). I've shown that a
contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and
P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or
by changing P(H) = 1/(n+1). The question is which change fits the
description of the experiment. On the surface, it seems like we've
flipped a fair coin, therefore P(H)=P(T)=1/2. But I claim that what
we've actually done is used that fair coin to produce a distribution
(and a less than random one) where P(H) = 1/(n+1). We've done this by
flipping the coin once and measuring once if it comes up heads, but
measuring n times if it comes up tails.

bu...@pac2.berkeley.edu

Mar 17, 1999

In article <36EF452F...@flash.net>,
Matt McLelland <mat...@flash.net> wrote:

I have no comment on your AIDS-patient analogy, since I don't see what
it's supposed to show. That some of the conclusions in this analysis
are counterintuitive? Well, maybe. I guess it depends on your
intuition. This whole daily-amnesia scenario is so crazy to begin
with that this sort of appeal to intuition doesn't really help that
much, IMHO.

Onwards.

>Correct! But those last events are not *disjoint*!!!

I couldn't care less. The assertion I was making had nothing to do
with disjointness. In the language of the original puzzle (with a
simple coin-flip and no zillions), I was asserting the following:

P(heads and today is monday) = P(tails and today is monday)

Note that this assertion makes no mention of disjointness.
Frankly, I thought it was an agonizingly obvious assertion to make
and was astonished to find people disagreeing with it! If you really
disagree with this assertion, then tell me -- which of those two
events is less likely? That is, which event would occur less
often in a large ensemble of repeated trials of the experiment?
(Or, if you don't agree that these last two questions are equivalent,
then what definition of probability are you using?)

> P(The coin came up heads and *this* is my first time up) = 1/2
>and
> P(The coin came up tails and *this* is my first time up) = 1/2*1/N


I agree with this, but I don't see why it's relevant to the problem.

>Maybe this example would be better. Suppose that we are going to do the
>same old sleeping beauty experiment only this time the coin is biased 100
>to 1 in favor of heads. Also, instead of waking her twice, we will wake
>her 10,000 times for tails. Your argument seems to be that if we agree
>to ask sleeping beauty every time what the coin came up, she will get it
>right only 1% of the time if she answers heads.

Right.

>How can this be true if
>the odds are only 1% that it will be tails? The simple answer is that
>her mistakes are magnified 10,000 times. Imagine now that she awakens to
>find some strange person has broken into the laboratory - something
>clearly not supposed to happen every time. They ask her at gunpoint,
>what was the result of the coin toss? She should clearly answer heads
>and has only a 1% chance of answering incorrectly.

Right. But I don't understand what this has to do with the situation
at hand. She clearly has different information in this case, so
naturally her assessment of the probabilities will be different.

-Ted
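
The numbers in this exchange check out under the per-awakening weighting. A small sketch (illustrative only), using the biased coin and 10,000 tails-awakenings from McLelland's variant:

    from fractions import Fraction

    p_heads = Fraction(100, 101)          # "biased 100 to 1 in favor of heads", read literally as odds
    wakes_heads, wakes_tails = 1, 10_000

    # Fraction of awakenings that follow heads, i.e. how often "heads" is the
    # right answer if she answers at every interview:
    w_h = p_heads * wakes_heads
    w_t = (1 - p_heads) * wakes_tails
    print(w_h / (w_h + w_t))              # 1/101, about 1%

    # The gunpoint intruder is a once-per-experiment event, so it picks out
    # trials rather than awakenings; conditioning on it leaves the per-trial
    # odds alone:
    print(p_heads)                        # 100/101, about 99%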

David A Karr

Mar 17, 1999

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
>> the other hand, if her score is not weighted by time, then any number
>> in the range [0,1], even a random number, is an equally rational credence
>> *under* *this* *set* *of* *values*. However, I don't think this kind
>> of reasoning is what you had in mind, since it doesn't give any good
>> reason to assign credences between 0 and 1, which seems to be what
>> you're looking for.
>
>Huh.
>No, those are certainly not what I had in mind!

I didn't think so, and I didn't bring these up just to be tendentious.
I'm doing this because I'm confused.


>As I said, I generally just take 'credence' and 'degree of belief' for
>granted, it's the way I'm used to thinking about probability. How
>confident should she be that the coin landed Heads, given what else she
>believes? Measure confidence on the [0,1] scale, where 1 is the confidence
>you place in an obvious tautology and 0 your confidence in an obvious
>contradiction, and .5 your confidence in the quarter I am now tossing
>landing Heads, and so on.

Now we're getting into yet another semantic muddle. I use the term
"confidence" to refer to a particular non-Bayesian measure of belief,
the one usually meant when someone says they used "95 percent
confidence intervals". If I analyze a statistical sample, I might
make a statement about a hypothesis with "confidence 0.95". This is
not at all the same as my saying that I estimate the hypothesis to be
true with *probability* 0.95.

Now this is interesting because I understand two measures of something
roughly corresponding to a degree or strength of belief, both measured
on the scale [0,1], yet not equivalent to each other nor even having
a well-defined mapping from one to the other. So maybe we should
say "probability" rather than "credence" since at least this gives me
a clue which of the two measures we're looking for.

>If she makes her judgment and then the experimenter tells her how the coin
>really landed, should she be more surprised if he tells her "Heads" than
>if he tells her "Tails"? Or equally surprised?
>
>Can't you just think about the probability of Heads, from Beauty's perspective?

I just don't know. Usually I like my probability estimates to fit into
a rational world view, and one of the ways I test their rationality is
to imagine that I (or someone else) could make some sort of bet on them.
(After all, that's the origin of this branch of mathematics, isn't it?)

In this case all the betting does is to point out difficulties with
other possible measures of goodness such as "how surprised" Beauty
should be. For example, if she were to assign probability 1/3 to
heads, then she'll be twice as surprised when the coin is revealed to
be heads as when it's revealed to be tails. But if the experimenter
really does make this revelation each time before he puts Beauty back
to sleep, then Beauty ends up being *exactly* as surprised by tails as
by heads over the course of the experiment (half as surprised each
time, but it happens twice as often). This minimizes her risk,
i.e. the variance in how surprised she'll be. I find that a
compelling argument for adopting this probability estimate, but not
compelling enough to make me give up the notion that the estimate
probably really should be 1/2 after all.


I'd be inclined to just pick a side (1/3 or 1/2) and stay there, but I
keep having this fear that the whole edifice of reasoning is built on
sand.


A few related questions:

Suppose a few seconds after Beauty wakes, the experimenter tells her
what day it is? What should she assign as the probability that the
coin came up heads, given that she's just been told today is Monday?
r e s seems to think the answer is 2/3. This boggles me.

Suppose Beauty just woke up and has no other new information.
What's the probability (in her rational view) that today is Monday?

Matt McLelland

unread,
Mar 17, 1999, 3:00:00 AM3/17/99
to
bu...@pac2.berkeley.edu wrote:

> >Correct! But those last events are not *disjoint*!!!
>
> I couldn't care less. The assertion I was making had nothing to do
> with disjointness.

Oh really? Sometime in the past you wrote:

> Yes. Of course. How could it be any other way? There are
> a zillion and one equiprobable possible explanations for
> why she was awakened. One is heads, and a zillion are tails.
> P(heads) = 1/zillion.

Now, initially I objected on the grounds that the events were not
equiprobable, believing you to be talking about events of the form "The coin
flipped tails, it is day 2, and here I am". You retorted that they were
equiprobable and claimed the events you were talking about were of the form
"The coin flipped tails and I was awakened on day 2". I then objected that
these events are not disjoint. Why does it matter? Because the only way you
can conclude that N equiprobable events each have a probability of 1/N is if
they are disjoint!

> In the language of the original puzzle (with a
> simple coin-flip and no zillions), I was asserting the following:
>
> P(heads and today is monday) = P(tails and today is monday)

This isn't true!
P(heads and today is monday) = 1/2
P(tails and today is monday) = 1/N (N being the number of awakenings)

> Note that this assertion makes no mention of disjointness.
> Frankly, I thought it was an agonizingly obvious assertion to make
> and was astonished to find people disagreeing with it!

You shouldn't - it is wrong.

> If you really disagree with this assertion, then tell me -- which of those
> two
> events is less likely? That is, which event would occur less
> often in a large ensemble of repeated trials of the experiment?
> (Or, if you don't agree that these last two questions are equivalent,
> then what definition of probability are you using?)
>
> > P(The coin came up heads and *this* is my first time up) = 1/2
> >and
> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N
>
> I agree with this, but I don't see why it's relevant to the problem.

? You agree? How is this different from the events with "this" replaced by
"today"?

r e s

unread,
Mar 17, 1999, 3:00:00 AM3/17/99
to
Matthew T. Russotto wrote in message ...
[...]

>My argument is with the assumption that P(H) = P(T).

This isn't an assumption, but is part of the problem
statement, viz.,"we'll flip a (fair) coin".

We'll have little hope of communicating intelligently
about this problem unless we can at least agree that
this means, a priori, pr(H)=pr(T).

>I've shown that a
>contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and
>P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or
>by changing P(H) = 1/(n+1). The question is which change fits the
>description of the experiment. On the surface, it seems like we've
>flipped a fair coin, therefore P(H)=P(T)=1/2.

I would say that it's an explicit part of the problem that pr(H)=pr(T),
meaning the probabilities unconditioned by any information other than
the rules of the game. So, if, as you say below, you see a different
distribution arising for H/T, it would presumably involve some
pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it
appears that you've taken E=A1. (Although I don't agree with the actual
values you seem to have obtained for the unequal conditional
probabilities -- I showed in another posting that pr(H|A1)=2/3.)

>But I claim that what
>we've actually done is used that fair coin to produce a distribution
>(and a less than random one) where P(H) = 1/(n+1). We've done this by
>flipping the coin once and measuring once if it comes up heads, but
>measuring n times if it comes up tails.


--
r e s (Spam-block=XX)

kIdMiGaRu

unread,
Mar 17, 1999, 3:00:00 AM3/17/99
to
Okay, I'll try to put forth several different reasons why 1/2 is the
correct answer. This post is aimed more towards those who believe the
probability of heads is not 1/2.

First, there's the straightforward probability approach:
P(heads, Monday) = 1/2 * 1 + 1/2 * 0 = 1/2
P(tails, Monday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
P(tails, Tuesday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
I'm hoping you can see why this is reasonable.

Secondly, think of it this way: Sleeping Beauty knows that no matter
what, if it is Monday P(heads) = 1/2. This is simple to see; if we
only woke her up once in either case, P(heads) = 1/2. Now, she will only
wake up again on Tuesday if she woke up on Monday after a flip of tails.
Therefore, the probability of waking up Tuesday is the same as the
probability of (tails, Monday), which is 1/2. Another way to say it is:
the chances of waking up on a given one of the "tails days" (Monday or
Tuesday) are equiprobable. So P(tails, Monday) = P(tails, Tuesday). We
also know that the chance of heading down the "heads branch" of the
system is 1/2, while the chance of following the "tails branch" is also
1/2. If all three cases were equiprobable, the probability of waking
up is more than sure - it's 3/2! So P(tails, Monday) + P(tails, Tuesday)
= 1/2, and each equals 1/4.

Again. I'll use a bag for my analogy. If I flip heads, I will put a red
ball in the bag. If I flip tails, I will put two blue balls in the bag.
If I ask you what are your odds of pulling out a red ball after I flip,
would you say 1/3? I don't think so. It's obvious that you can only pull
a red ball if it landed on heads (a 1/2 chance), just as you can only
pull a blue ball if it landed on tails (also a 1/2 chance). Even if I
extended it to 1 million blue balls, the odds are still 1/2.

Basically, there is a lot of confusion over how to interpret the
problem. Many people are saying there are three cases (Monday & heads,
Monday & tails, Tuesday & tails), and believe that one of them is being
picked at random. If this were true, it definitely would be 1/3, but
it's not true. There are essentially only two cases: Monday & heads, or
Monday & tails. These are the only possibilities. If it's tails, then we
just do a lot of extra stuff. If it's heads, then we stop.

Some argue with the betting situation, where SB places a bet on whether
it's heads or tails. This is a different question from the original.
Here, people are taking into account the odds (as in expected returns),
instead of the probability (if it's heads or tails). If I go back to the
bag analogy, it's like saying: "For every ball in the bag, whether one
or two, we will bet on what it is." Obviously, you would bet on blue
because it has better returns. It's equivalent to betting on the coin,
and having 1:1 returns for heads, while having 2:1 returns for tails.

On the other hand, SB is not even deciding if she wants to bet tails or
heads, she's being asked for the probability of it being heads. If it's
Monday, it's a 1/2 chance; if it's Tuesday (or any later day for the
"large n" case), it's still a 1/2 chance that heads came up. She's right
no matter what day it is if she says 1/2. But if we decide instead to
guess "heads or tails" when she wakes up, the question is transformed.
Now she will be right once if it's heads, and wrong numerous times if
it's tails.

That's my two cents. And not surprisingly, one's on heads and one's on
tails =)

p.s. I'm a little frustrated now because I had to break my "e" rule...
well, it was fun trying!

Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
David A Karr wrote:

> > I thought we had already agreed that the phrase involving
> >"credence" was to mean "What probability should SB assign to the
> >event 'the coin flipped heads' "... ? Or has this discussion
> >progressed into a debate on the philosophy of probability?
>
> I was rather under the impression that the all the other discussion
> was simply begging that question. I could be wrong.

If I understand your position it is that we face a dilemma if our definition of
probability involves frequency of occurrence. After all, if we run the
experiment 200 times, then we will awaken 300 times and the coin will only be
heads 100 of those times - thus if we pick some random awakening there is a 1/3
chance that the associated coin flipped heads. All true so far. The problem
is that in reality, each awakening isn't equally likely, and so arguments based
on picking a *random* awakening don't tell us what the probability will be in
some awakening decided by the experiment. I don't think that the usual models
for probability are shaken by this problem.

You previously brought up the good point that we should concern ourselves with
how we should simulate this if we were to do so. We can follow the example of
how we measure the bias of a coin. We devise a trial whose outcome will be one
of two events (heads or tails) and then repeat the experiment a large number of
times and compute the frequencies. So, now, what should constitute a trial in
our present case? We flip a coin, and if it comes up heads we will increment the
count on the event "The coin was heads when I awakened". What if it comes up
tails? Do we increment both "The coin was tails when I was awakened on Monday"
and the "The coin was tails when I was awakened on Tuesday?" No. That would
mean that a single trial had been used to count two *disjoint* events. Instead,
if the coin comes up tails, we must *pick* *one* day at random and increment its
event.
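
For what it's worth, the two counting schemes at issue are easy to put side by
side in a few lines of Python. This is only a rough sketch (the function names
and trial counts are illustrative): the first function implements the per-trial
tally described above, the second simply counts every awakening.

import random

def per_trial_count(n_trials=100_000):
    # One counted awakening per run of the experiment: heads counts its single
    # awakening; tails picks just one of its two awakenings at random.
    heads_at_counted_awakening = 0
    for _ in range(n_trials):
        if random.random() < 0.5:                    # heads
            heads_at_counted_awakening += 1
        else:                                        # tails
            random.choice(["Monday", "Tuesday"])     # pick one awakening; either way the coin was tails
    return heads_at_counted_awakening / n_trials     # tends to 1/2

def per_awakening_count(n_trials=100_000):
    # Count every awakening that occurs: one for heads, two for tails.
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        if random.random() < 0.5:                    # heads
            heads_awakenings += 1
            total_awakenings += 1
        else:                                        # tails
            total_awakenings += 2
    return heads_awakenings / total_awakenings       # tends to 1/3

print(per_trial_count(), per_awakening_count())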

I have another point that relates to your conversation with r e s, but I think
it is interesting enough not to be buried at the bottom of a long message.


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
I thought of a more natural situation in which this problem arises - cloning!

You are at the lab to be cloned but are uneasy about going through with it. The
technician is annoyed by your indecision and tells you that he will decide for you.
You insist that you don't want to know his decision for your own peace of mind, and
so he agrees that after you are put to sleep for the procedure he will flip a coin
and only clone you if it comes up heads. When you awaken in a recovery room, it
dawns on you that you have no idea whether you are yourself or a clone!

1. What are the odds that you were cloned?
2. What are the odds that you are the clone?
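
Both questions are easy to tally numerically. A minimal Python sketch, counting
once per experiment for question 1 and once per awake person for question 2
(whether the second tally is the right reading of question 2 is, of course, the
point in dispute):

import random

def cloning_tallies(n_runs=100_000):
    cloned_runs = 0
    clones = 0        # awake people who are clones
    people = 0        # all awake people, originals and clones
    for _ in range(n_runs):
        if random.random() < 0.5:   # heads: the technician clones you
            cloned_runs += 1
            people += 2             # the original wakes up, and so does the clone
            clones += 1
        else:                       # tails: no clone is made
            people += 1
    return cloned_runs / n_runs, clones / people   # about 1/2 and 1/3

print(cloning_tallies())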


Matthew T. Russotto

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <36F04A90...@flash.net>,

This is more natural?

Given that the coin is fair:
The odds that you were cloned are 1/2. The odds that you are the clone
are 1/3.

Norman Diamond

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
When two incomplete observers have different amounts of information, their
computations of probability can very well differ and still be correct.

Before the experiment, both Beauty and we believe that the probability
of the coin coming up heads will be 50%. Both she and we believe that
there will be two occasions for an event to occur. Everyone knows that
for an observer who will not know how the coin landed or which day it is,
the following events will appear equally probable:
(a) Heads and Monday and we will awaken her.
(b) Heads and Tuesday and we will not awaken her.
(c) Tails and Monday and we will awaken her.
(d) Tails and Tuesday and we will awaken her.

When Beauty is awakened, she will know that she is awakened, but she
will not know how the coin landed and she will not know what day it is.
She will have more information than a completely ignorant observer.
She will know that she is awake. Event (b) has been eliminated for her.
The remaining events will appear equally probable for her. P(Tails)
will be 2/3. Also P(Monday) will be 2/3.

Someone has posted a scenario in which Beauty places a bet each time
she is awake. In such a scenario, if she does not base her bet on a
computation that P(Tails) is 2/3, she will suffer. That poster is right.

However, when Beauty is awakened, we will know that she is awakened,
we will know how the coin landed, and we will know what day it is.
We will have more information than a completely ignorant observer.
Event (b) is always eliminated for us, but also two other events will
be eliminated for us. At the time an event occurs, none of the events
will have probability 2/3 for us. One event will have probability 1
for us. Obviously we can't use this knowledge in placing a bet now
since we don't have the knowledge yet, but we sure will have it on
Monday and Tuesday. When we have already seen the coin land, if we
place a bet based on a computation that P(Tails) is 1/2 or that
P(Tails) is 2/3, we will suffer.

Here is another example. When Monty Hall opens a door, he already
knows if the contestant's first choice was right. The contestant
only gains enough information to compute that switching to the
remaining door has a 2/3 chance of winning while staying put retains
its old 1/3 chance. But Monty, and any other fully informed observer,
knows which door wins with probability 1.

So I think there is no longer any reason for confusion about whether
Beauty's computation of P(Tails) should be 2/3.


There still seems to be a paradox though. Someone else posted a
scenario in which Beauty places a bet before the experiment begins,
and then each time she is awakened, she gets a chance to cancel the
bet. She is given an advantage originally, paying $0.45 for a return
of $1 if the coin lands heads and a return of $0 if the coin lands
tails. Before the experiment begins, she and we believe P(heads) is
0.5 so it looks profitable to place a bet. When she is awakened,
she is allowed to cancel the bet. With her new information, she will
want to cancel. It is not really paradoxical that she might want to
cancel, it is paradoxical that she can figure out that she will always
want to cancel. This is the part that I would really like to resolve.

When Beauty is awakened, there is a 1/3 probability that she will
cancel a winning bet, a 1/3 probability that she will cancel a losing
bet, and a 1/3 probability that she will pseudo-cancel an already
canceled bet. We will know which case it is, but she will not.

But that does not resolve the paradox. If Beauty wants to be as
pedantic as I do, then the scenario will simply have its wording
made more accurate: When Beauty is awake she will have an option
to cancel or pseudo-cancel the bet, and she will not know which
operation actually takes place when she makes that decision, though
we will know. Since she will still always make that decision, the
paradox is still there.


This beats Newcomb's pseudo-paradox, that's for sure. In Newcomb's
problem the answer depends on what the premises really are. If the
premises are that the predictor is really perfect then the contestant
knows that it is most profitable to leave some money behind on the
table. If the premises are that causality exists then the contestant
takes both boxes. If the premises are that the predictor is really
perfect *and* causality exists then Bertrand Russell is the pope.
But Beauty has no inconsistent premises, at least not that I can see.
--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer

Norman Diamond

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <7cn86c$sjv$1...@agate.berkeley.edu>, bu...@pac2.berkeley.edu writes:
>In article <36EEF61B...@flash.net>,
>Matt McLelland <mat...@flash.net> wrote:
>>They aren't equiprobable. Think about it.
>
>Of course they are. Think about it. :-)
>
>Here's the argument one more time. Repeat the experiment N times,
>for large N. Heads will come up N/2 times; tails will come up
>N/2 times. So the event
> The coin came up heads and I was awakened for the first time
>and the event
> The coin came up tails and I was awakened for the kth time
>(for some fixed k between 1 and a zillion) will both occur N/2 times.
>If they occur equally often in a large number of trials, then BY
>DEFINITION they're equiprobable.
>
>That's phrased in frequentist language,

Of course they aren't. Think about it. :-)

Repeat the experiment for N AWAKENINGS, for large N. Heads will have
occurred N/3 times; tails will also have occurred N/3 times. But N/3
awakenings will have been preceded by heads, and 2N/3 awakenings will
have been preceded by tails. So the event
The coin came up heads and I was awakened
and the event
The coin came up tails and I was awakened
(for some fixed k between 1 and a zillion) will occur different numbers
of times. If they occur differently often in a large number of trials,
then BY DEFINITION they're differently probable.

That's phrased in frequentist language, too. And from the amount of
information Beauty has when she's awake, it's right.

Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
Matthew T. Russotto wrote:

> This is more natural?

Perhaps more natural wasn't the correct way to put it - less contrived might have been
better.

> Given that the coin is fair:
> The odds that you were cloned is 1/2. The odds that you are the clone
> is 1/3rd.

Correct! Now we just have to agree that this is isomorphic to the original problem from
Beauty's perspective when she wakes up and we are done.


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
Norman Diamond wrote:

> Of course they aren't. Think about it. :-)
>
> Repeat the experiment for N AWAKENINGS, for large N. Heads will have
> occurred N/3 times; tails will also have occurred N/3 times. But N/3
> awakenings will have been preceded by heads, and 2N/3 awakenings will
> have been preceded by tails. So the event
> The coin came up heads and I was awakened
> and the event
> The coin came up tails and I was awakened
> (for some fixed k between 1 and a zillion) will occur different numbers
> of times. If they occur differently often in a large number of trials,
> then BY DEFINITION they're differently probable.

I would like to point out that this is not what I was saying - these two events
*are* equiprobable. I think you actually agree with the guy you are arguing with (Ted -
bu...@pac2.berkeley.edu). You are counting every awakening that occurs in the
experiment - when what you should be doing is counting a particular random
awakening per experiment. (I posted more on this a post or two ago)


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
Norman Diamond wrote:

> Before the experiment, both Beauty and we believe that the probability
> of the coin coming up heads will be 50%. Both she and we believe that
> there will be two occasions for an event to occur. Everyone knows that
> for an observer who will not know how the coin landed or which day it is,
> the following events will appear equally probable:
> (a) Heads and Monday and we will awaken her.
> (b) Heads and Tuesday and we will not awaken her.
> (c) Tails and Monday and we will awaken her.
> (d) Tails and Tuesday and we will awaken her.

I would like to add some more "equally probable" events to your list:
(e) Heads and Wednesday and we will not awaken her.
(f) Heads and Thursday and we will not awaken her.
...

You see, you can't just claim that all of these things are equally probable. What
is the a priori probability that today is Monday?? Now, we could, by construction
treat 'Monday' and 'Tuesday' as events on which the random variable 'Today' is
equally likely. For example, an observer to this experiment picks a random day to
visit (Monday or Tuesday) with equal probability and independently of the coin toss
in the experiment. For him, the events you listed really are equiprobable. If
this observer learned that Beauty was awake, he could eliminate choice (b) and
proceed to draw all of your conclusions. Unfortunately, Beauty doesn't pick a day
to visit randomly and independently of the coin toss. If the coin came up heads
she *is* going to visit Monday; Tuesday just isn't a possibility anymore.


Matthew T. Russotto

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <7cprb1$e...@dfw-ixnews5.ix.netcom.com>,

r e s <XXr...@ix.netcom.com> wrote:
}Matthew T. Russotto wrote in message ...
}[...]
}>My argument is with the assumption that P(H) = P(T).
}
}This isn't an assumption, but is part of the problem
}statement, viz.,"we'll flip a (fair) coin".
}
}We'll have little hope of communicating intelligently
}about this problem unless we can at least agree that
}this means, a priori, pr(H)=pr(T).

I agree that the toss is fair. I don't agree that the "random"
variable derived from the toss is fair. You're measuring the value of
the toss twice in the case that it is tails, and only once if it is
heads.

Suppose I flip a fair coin many times. Each time I flip it, I write
"H" if it is heads, and "TT" if it is tails. If I pick a random
letter from the page, what is the probability that it is a "T"?

}I would say that it's an explicit part of the problem that pr(H)=pr(T),
}meaning the probabilities unconditioned by any information other than
}the rules of the game. So, if, as you say below, you see a different
}distribution arising for H/T, it would presumably involve some
}pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it
}appears that you've taken E=A1. (Although I don't agree with the actual
}values you seem to have obtained for the unequal conditional
}probabilities -- I showed in another posting that pr(H|A1)=2/3.)

Yes, I misstated that -- I meant that you could resolve the contradiction
by stating P(T|A1)=1/(n+1), not P(H|A1). But that doesn't fit the
real situation.

Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
Matthew T. Russotto wrote:

> Suppose I flip a fair coin many times. Each time I flip it, I write
> "H" if it is heads, and "TT" if it is tails. If I pick a random
> letter from the page, what is the probability that it is a "T"?

Now try this one:
You flip a coin *once* and write "H" if it is heads and "TT" if it is tails.
If you pick a random letter from the page, what is the probability that it is
a "T"?

After all, Beauty is only going to participate in this experiment once.
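
Both versions of the letters-on-a-page experiment can be checked numerically.
A small Python sketch, assuming a fair coin (the counts are arbitrary):

import random

def many_flips_page(n_flips=100_000):
    # Russotto's version: flip many times, write "H" or "TT" each time,
    # then ask what fraction of all the letters on the page are "T".
    page = []
    for _ in range(n_flips):
        page += ["H"] if random.random() < 0.5 else ["T", "T"]
    return page.count("T") / len(page)      # tends to 2/3

def one_flip_page(n_repeats=100_000):
    # The one-flip version: a fresh page with a single flip each time,
    # then pick one letter at random from that page.
    t_picks = 0
    for _ in range(n_repeats):
        page = ["H"] if random.random() < 0.5 else ["T", "T"]
        if random.choice(page) == "T":
            t_picks += 1
    return t_picks / n_repeats              # tends to 1/2

print(many_flips_page(), one_flip_page())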


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
kIdMiGaRu wrote:

> Secondly, think of it this way: Sleeping Beauty knows that no matter
> what, if it is Monday P(heads) = 1/2.

This isn't true. P(heads | Monday) = 2/3. That is, if she knows she got
woken up on Monday the odds are 2/3 that the coin was heads. I think I
agree with all of the other stuff you said, so maybe I just misunderstand
what you mean by this statement.


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to

Matt McLelland wrote:

Oops! No this is incorrect. The correct probability for this question is 1/4. The
question I meant to ask was "If you can tell that you are not the clone, what are the odds
that you were cloned?"


Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
Nobody tell the "Horrible Question" guy that my answer is bogus....

hoch...@rocketmail.com

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
I don't want to be the one person in the rec.puzzles universe without
a comment on this. Perhaps the following (very similar) situation will
clarify matters.

Suppose that the situation is as described, but we explicitly say that,
given a head outcome, we let SB sleep on Tuesday. Introduce another
character, Bob, who never keeps track of what day it is but knows all
the details of the SB experiment.

We now tell Bob that Beauty is awake, and ask Bob what he thinks
the probability that the coin came up heads is. He reasons:

P(flip was heads | she's awake)
    = P(she's awake | flip was heads) * P(heads) / P(she's awake)
    = (1/2 * 1/2) / (1/2*1/2 + 1/2*1)
    = 1/3
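
Bob's conditioning can be spot-checked by brute force. In the sketch below
(Python), Bob is modelled as looking in on a uniformly random one of the two
days, which is one way of reading "never keeps track of what day it is":

import random

def bob_estimate(n_runs=200_000):
    awake = 0
    heads_and_awake = 0
    for _ in range(n_runs):
        heads = random.random() < 0.5
        day = random.choice(["Monday", "Tuesday"])      # Bob's random check-in day
        beauty_awake = (day == "Monday") or not heads   # heads: awake Monday only; tails: both days
        if beauty_awake:
            awake += 1
            if heads:
                heads_and_awake += 1
    return heads_and_awake / awake   # tends to 1/3, matching the calculation above

print(bob_estimate())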

Is there a good argument that SB's calculation should be different
from Bob's? Does the explicit statement that we let her sleep on
Tuesday given a head outcome change the problem?

Mike

In article <pl436000-150...@bootp-17.college.brown.edu>,
pl43...@brownvmDOTbrown.edu (Jamie Dreier) wrote:
>
> We plan to put Beauty to sleep by chemical means, and then we'll flip a
> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
> afternoon and interview her. If it lands Tails, we will awaken her Monday
> afternoon, interview her, put her back to sleep, and then awaken her again
> on Tuesday afternoon and interview her again.
>
> The (each?) interview is to consist of the one question: what is your
> credence now for the proposition that our coin landed Heads?
>
> When awakened (and during the interview) Beauty will not be able to tell
> which day it is, nor will she remember whether she has been awakened
> before.
>
> She knows the above details of our experiment.
>
> What credence should she state in answer to our question?
>
> -Jamie
>
> p.s. Don't worry, we will awaken Beauty afterward and she'll suffer no ill
> effects.
>
> p.p.s. This puzzle/problem is, as far as I know, due to a graduate student
> at MIT. Unfortunately I don't know his name (I do know it's a man). The
> problem apparently arose out of some consideration of the Case of the
> Absentminded Driver.
>
> p.p.p.s. Once again, I have no very confident 'solution' of my own; I will
> eventually post the author's solution, but I am not entirely happy with
> that one either.


>
> --
> SpamGard: For real return address replace "DOT" with "."
>


John Rickard

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
An argument for 1/3 (which I am convinced is the correct answer).

1. Change the experiment so that if the coin comes up heads, she is
still woken once, but it is decided at random (e.g. by another
coin toss) whether to wake her on Monday or Tuesday (each with
probability 1/2). This surely can't change the answer.

2. Now consider a variant where she is put to sleep for only one day,
and woken at most once (so no need to make her forget); heads
means wake her with probability 1/2 (e.g. based on another coin
toss); tails means do wake her. If she is woken, her reckoning of
the probability that the coin came up heads is 1/3. (Agreed?)

(I'm assuming that the final awakening at the end of the
experiment cannot be confused with the awakenings we're interested
in.)

3. In the original experiment as modified in (1), allow her to ask
what day it is. If the answer is Monday, she is in an equivalent
position to that in (2) (since she knows that the plan implied
that she would be woken on Monday with probability 1/2 if the coin
came up heads, and with probability 1 if the coin came up tails);
therefore her reckoning of the probability is 1/3. Similarly, if
the answer is Tuesday then her reckoning of the probability is
1/3. Since it makes no difference to her reckoning what day it
is, her reckoning is 1/3 before she asks the question.

If she decides not to ask after all, then her reckoning is still
1/3 (since she has gained no new information by deciding not to
ask).

Merely knowing that she is allowed to ask what day it is can't
make any difference to her reckoning of the probability that the
coin came up heads, so her reckoning of the probability in (1) is
1/3. Hence her reckoning in the original question is also 1/3.

--
John Rickard <John.R...@virata.com>

Matthew T. Russotto

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <36F09211...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:
}kIdMiGaRu wrote:
}
}> Secondly, think of it this way: Sleeping Beauty knows that no matter
}> what, if it is Monday P(heads) = 1/2.
}
}This isn't true. P(heads | Monday) = 2/3. That is, if she knows she got
}woken up on Monday the odds are 2/3 that the coin was heads.

That's not the case. It would be the case if she was woken up on
Monday if the coin was heads, and Monday or Tuesday randomly if the
coin was tails. But in fact she is woken BOTH Monday and Tuesday if
the coin was tails.

Jamie Dreier

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
ka...@shore.net (David A Karr)

> Now we're getting into yeat another semantic muddle. I use the term
> "confidence" to refer to a particular non-Bayesian measure of belief,
> the one usually meant when someone says they used "95 percent
> confidence intervals". If I analyze a statistical sample, I might
> make a statement about a hypothesis with "confidence 0.95". This is
> not at all the same as my saying that I estimate the hypothesis to be
> true with *probability* 0.95.

Hm, right.
Remind us what it does mean. ;-)

Give an example, preferably involving balls in urns. ;-)


> Now this is interesting because I understand two measures of something
> roughly corresponding to a degree or strength of belief, both measured
> on the scale [0,1], yet not equivalent to each other nor even having
> a well-defined mapping from one to the other. So maybe we should
> say "probability" rather than "credence" since at least this gives me
> a clue which of the two measures we're looking for.

'Probability' is fine.
As you know, I use these epistemic words, 'confidence', 'credence',
because that's my preferred interpretation of ordinary probability
statements.


I did try to explain in terms independent of any explicitly Bayesian
dogma. 'Confidence' is the converse of 'surpisingness'. When your
confidence (in my sense) for a certain event is high, you will not be very
surprised if and when it happens (when you learn that it has happened).
When your confidence is very low, you will be very surprised if it
happens.

At the limits, your confidence in an obvious tautology is 1, since the
surprisingness of the event, "Either it rains on Sunday or it doesn't", is
nil. Your confidence in a contradiction is 0, since the contradiction's
actually happening is so surprising as to be literally unimaginable.

At the midpoint, the surprisingness of the coin's landing Heads is exactly
the same as the surprisingness of its landing Tails; thus your confidence in
the two is also the same. So that's 1/2.
We can think of all confidences as limits of sequences of compounded
equally-surprising events, a la Ramsey.

> Usually I like my probability estimates to fit into
> a rational world view, and one of the ways I test their rationality is
> to imagine that I (or someone else) could make some sort of bet on them.
> (After all, that's the origin of this branch of mathematics, isn't it?)

Well, let's see.
Ian Hacking argues that the modern conception of probability arose by the
unification of two different concepts: the idea of relative long term
frequencies of repeatable events, and the idea of a degree of belief (as
in the Pyrrhonian skeptics' views). He thinks that these got unified by
actuaries in (as I recall) Holland and Flanders, who needed a science of
probability to run their insurance businesses. (So I guess Hacking is
vaguely a Marxist: concept formation driven by economic innovation.)

But you presumably mean something later, like Ramsey's formulation of
decision theory. That formulation is heavily dependent on an agent's
dispositions to bet. These dispositions reveal the agent's credences.

>
> In this case all the betting does is to point out difficulties with
> other possible measures of goodness such as "how suprised" Beauty
> should be. For example, if she were to assign probability 1/3 to
> heads, then she'll be twice as surprised when the coin is revealed to
> be heads as when it's revealed to be tails. But if the experimenter
> really does make this revelation each time before he puts Beauty back
> to sleep, then Beauty ends up being *exactly* as surprised by tails as
> by heads over the course of the experiment (half as surprised each
> time, but it happens twice as often). This minimizes her risk,
> i.e. the variance in how surprised she'll be. I find that a
> compelling argument for adopting this probability estimate, but not
> compelling enough to make me give up the notion that the estimate
> probably really should be 1/2 after all.

Ahhhh.
The 'surprisingness' point really is supposed to be a little different.
The probability you report is supposed to be an accurate measure of how
surprised you will actually be (higher prob => less surprised).


> A few related questions:
>
> Suppose a few seconds after Beauty wakes, the experimenter tells her
> what day it is? What should she assign as the probability that the
> coin came up heads, given that she's just been told today is Monday?
> r e s seems to think the answer is 2/3. This boggles me.

Me too.
That is the worst problem, I think, with my own gut feeling that she
should think (before hearing what day it is, but upon awakening) that the
probability is 1/2. If it is 1/2, then as far as I can tell she *has* to
change to 2/3 when she discovers that it is Monday.

> Suppose Beauty just woke up and has no other new information.
> What's the probability (in her rational view) that today is Monday?

This one does not bother me as much, directly.
The strong intuition is that she should think that Monday is *more likely*
than Tuesday. But both sides say this. The difference is that the Halfers
(like me) say that the chance that it's Monday is 3/4, while the Thirders
say that the chance that it's Monday is 2/3.
(Or have I messed that up?)

-Jamie

bu...@pac2.berkeley.edu

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <36F027AA...@flash.net>,
Matt McLelland <mat...@flash.net> wrote:
>bu...@pac2.berkeley.edu wrote:

>> P(heads and today is monday) = P(tails and today is monday)
>
>This isn't true!

I see that I wasn't clear here. Let me specify what I meant by
this. By P(X) I mean simply the probability that event X will
occur during one complete run of the experiment. By that
definition, it is clear (I hope) that both of the above probabilities
are 1/2 (and hence they're equal).

You apparently mean something else (or rather, thought I meant
something else) by P(X), although, to be honest, I can't think of any
meaning one could attach to it that would make the statement below
true.

>P(heads and today is monday) = 1/2
>P(tails and today is monday) = 1/N (N being the number of awakenings)

>> > P(The coin came up heads and *this* is my first time up) = 1/2


>> >and
>> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N
>>
>> I agree with this, but I don't see why it's relevant to the problem.
>
>? You agree? How is this different from the events with "this" replaced by
>"today"?


Hmm. My brain seems to have turned off when I wrote that.
I do not agree with the above statement, and I have no idea
why I wrote that I did. Sorry!

-Ted

bu...@pac2.berkeley.edu

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
In article <7cpnl7$tvh$1...@nntpd.lkg.dec.com>,

Norman Diamond <dia...@tbj.dec.com> wrote:
>In article <7cn86c$sjv$1...@agate.berkeley.edu>, bu...@pac2.berkeley.edu writes:
>>In article <36EEF61B...@flash.net>,

>>Matt McLelland <mat...@flash.net> wrote:
>>>They aren't equiprobable. Think about it.
>>
>>Of course they are. Think about it. :-)
[...]

>Of course they aren't. Think about it. :-)
>
>Repeat the experiment for N AWAKENINGS, for large N. Heads will have
>occurred N/3 times; tails will also have occurred N/3 times. But N/3
>awakenings will have been preceded by heads, and 2N/3 awakenings will
>have been preceded by tails. So the event
> The coin came up heads and I was awakened
>and the event
> The coin came up tails and I was awakened
>(for some fixed k between 1 and a zillion) will occur different numbers
>of times.

I completely agree with this, and I don't think it contradicts
anything I said. Those two events aren't the ones I was talking about
when I said "these events are equiprobable." I was talking about the
events

"the coin came up heads and an awakening occurred on Monday,"
"the coin came up tails and an awakening occurred on Monday,"
"the coin came up tails and an awakening occurred on Tuesday."

Those events all occur equally often in an ensemble, so they're
equally probable. That's all I was saying.

As far as I can tell, you and I are in complete agreement.

-Ted

Matt McLelland

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
John Rickard wrote:

> An argument for 1/3 (which I am convinced is the correct answer).

Oh good, another brave soul has inducted themselves into the Hall of
Wrong.

> 1. Change the experiment so that if the coin comes up heads, she is
> still woken once, but it is decided at random (e.g. by another
> coin toss) whether to wake her on Monday or Tuesday (each with
> probability 1/2). This surely can't change the answer.

And it doesn't.

> 2. Now consider a variant where she is put to sleep for only one day,
> and woken at most once (so no need to make her forget); heads
> means wake her with probability 1/2 (e.g. based on another coin
> toss); tails means do wake her. If she is woken, her reckoning of

> the probability that the coin came up heads is 1/3. (Agreed?)

True. True.

> 3. In the original experiment as modified in (1), allow her to ask
> what day it is. If the answer is Monday, she is in an equivalent
> position to that in (2) (since she knows that the plan implied
> that she would be woken on Monday with probability 1/2 if the coin
> came up heads, and with probability 1 if the coin came up tails);
> therefore her reckoning of the probability is 1/3. Similarly, if
> the answer is Tuesday then her reckoning of the probability is
> 1/3. Since it makes no difference to her reckoning what day it
> is, her reckoning is 1/3 before she asks the question.

Bzzz. Wrongo. The innocuous-looking explanation in parentheses doesn't
cut it... and the statement it is explaining isn't true. In reality after
you are told it is Monday, the odds are 50-50 that the coin came up heads.

For all of these new people who are jumping on the 1/3 bandwagon, I really
wish you would really think about the following situation. Don't try to
calculate probabilities using any method, but use your intuition:

A coin is flipped which is biased 1 zillion to 1 against tails. If the
coin flips tails then you are awakened zillion^2 times, heads only once.
You agree to do the experiment *1* time. You are put under and when you
wake up you are asked whether or not you think the coin came up heads.
To put things in perspective, maybe you and your friend (performing the
experiment) agree to put a bucket full of sand on the ground and you only
get woken up once until the sand spontaneously forms a 1 cubic meter
diamond by random molecule interactions. Do you *really* think that when
you get up after performing this experiment *1* time that you can be
*certain* that a 1 cubic meter diamond is waiting for you?
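
The zillions make this hard to simulate literally, but the scaled-down version
from earlier in the thread (a coin biased 100 to 1 in favor of heads, 10,000
awakenings on tails) fits in a few lines of Python. The sketch just reports
both frequencies and leaves their interpretation open:

import random

def biased_tallies(n_runs=100_000, tails_awakenings_per_run=10_000):
    p_heads = 100 / 101                 # the 100-to-1 coin
    heads_runs = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_runs):
        if random.random() < p_heads:   # heads: a single awakening
            heads_runs += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:                           # tails: many awakenings
            total_awakenings += tails_awakenings_per_run
    per_run = heads_runs / n_runs                          # about 0.99
    per_awakening = heads_awakenings / total_awakenings    # about 0.01
    return per_run, per_awakening

print(biased_tallies())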


Carl Witthoft

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to
While mixing bytes into the filestructure called
<36F04A90...@flash.net>, the light of reason befell Matt McLelland
<mat...@flash.net> who thus proposed:

->I thought of a more natural situation in which this problem arises - cloning!
->
->You are at the lab to be cloned but are uneasy about going through with it.
->The technician is annoyed by your indecision and tells you that he will
->decide for you. You insist that you don't want to know his decision for your
->own peace of mind, and so he agrees that after you are put to sleep for the
->procedure he will flip a coin and only clone you if it comes up heads. When
->you awaken in a recovery room, it dawns on you that you have no idea whether
->you are yourself or a clone!
->
->1. What are the odds that you were cloned?
->2. What are the odds that you are the clone?

hah. I'd check for my appendectomy scar.
--
Carl Witthoft c...@world.std.com ca...@aoainc.com http://world.std.com/~cgw
Got any old pinball machines for sale?

kIdMiGaRu

unread,
Mar 18, 1999, 3:00:00 AM3/18/99
to

David A Karr wrote:

>
> In article <36F08934...@i.am>, kIdMiGaRu <kid...@i.am> wrote:
> >First, there's the straightforward probability approach:
> >P(heads, Monday) = 1/2 * 1 + 1/2 * 0 = 1/2
> >P(tails, Monday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
> >P(tails, Tuesday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
> >I'm hoping you can see why this is reasonable.
>
> No. This seems like utter nonsense to me. I can't see any interpretation
> of P(tails,Monday) that would justify this approach, at least not obviously.
>

I agree. I posted this very unclearly, so here's what I really meant.
P(tails, SOMEDAY) = 1/2. It's only if you are counting on a specific day
that the probability becomes 1/4. This is because the choice between
Monday and Tuesday is dependent on the first probability, that of
flipping the coin. However, if you just want to know if she will wake up
on some day due to a tails flip, then the probability is 1/2.

Here's another analogy I just thought of to explain the 1/2 for heads to
the "1/3 bandwagon." Imagine a driver on a highway. Whenever she comes
to a fork in the road, she will pick one of the two directions at
random. She approaches the first fork in the road (this is analogous to
the coin flip). Going left takes her to Offramp 1, while the highway
continues to the right. Further down that path, another fork (similar to
Monday vs. Tuesday) splits into two paths; left goes to Offramp 2 and
right goes to Offramp 3.

---------monospace font illustration below---------

1       2   3
 \       \ /
  \       |
   \      |
    \    /
     \  /
      \/
      |
    Driver

---------------------------------------------------

Let's say you know she left the highway on one of these offramps, but
you don't know which. What's the probability that she exited at Offramp
1? If it's not clear yet that the probability is 1/2 instead of 1/3,
then you must be viewing the problem incorrectly. You're right, it's not
completely isomorphic because she only exits at either 2 or 3 instead of
both, but it doesn't matter from S.B.'s point of view. When she wakes
up, she doesn't know if she's passed Monday and Tuesday or just Monday,
but it really doesn't matter. If the flip was tails, we wake her up
today (whether it is Monday or Tuesday). The probability of it being
Monday is the same as it being Tuesday, so it's just as if they woke her
up on one of those days at random, which is isomorphic to picking
between Offramp 2 and Offramp 3 at random.

Carl Witthoft

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
While mixing bytes into the filestructure called
<36F1811F...@flash.net>, the light of reason befell Matt McLelland
<mat...@flash.net> who thus proposed:

->For all of these new people who are jumping on the 1/3 bandwagon, I really
->wish you would really think about the following situation. Don't try to
->calculate probabilities using any method, but use your intuition:
->
->A coin is flipped which is biased 1 zillion to 1 against tails. If the
->coin flips tails then you are awakened zillion^2 times, heads only once.
->You agree to do the experiment *1* time. You are put under and when you
->wake up you are asked whether or not you think the coin came up heads.
->To put things in perspective, maybe you and your friend (performing the
->experiment) agree to put a bucket full of sand on the ground and you only
->get woken up once until the sand spontaneously forms a 1 cubic meter
->diamond by random molecule interactions. Do you *really* think that when
->you get up after performing this experiment *1* time that you can be
->*certain* that a 1 cubic meter diamond is waiting for you?

Well, I'm not going to try to decipher your proposition there, but I would
like to point out as strongly as possible:

INTUITION DOES NOT WORK FOR STATISTICS!!!!!

For gosh sakes, doesn't the M.H. problem make that clear to you?

If you can't prove or disprove your proposition using mathematical
expressions, then you don't have a handle on the problem (sort of like the
infamous "to learn something teach it; to understand it, program in on a
computer").

And in fact I don't accept that a biased coin yields a situation analogous
to the original problem.

Matt McLelland

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
Carl Witthoft wrote:

> ->A coin is flipped which is biased 1 zillion to 1 against tails. If the
> ->coin flips tails then you are awakened zillion^2 times, heads only once.
> ->You agree to do the experiment *1* time. You are put under and when you
> ->wake up you are asked whether or not you think the coin came up heads.
> ->To put things in perspective, maybe you and your friend (performing the
> ->experiment) agree to put a bucket full of sand on the ground and you only
> ->get woken up once until the sand spontaneously forms a 1 cubic meter
> ->diamond by random molecule interactions. Do you *really* think that when
> ->you get up after performing this experiment *1* time that you can be
> ->*certain* that a 1 cubic meter diamond is waiting for you?
>
> Well, I'm not going to try to decipher your proposition there, but I would
> like to point out as strongly as possible:
>
> INTUITION DOES NOT WORK FOR STATISTICS!!!!!

I often hear "intuition is often wrong". What that really means is "often
people have poor intuition".

> For gosh sakes, doesn't the M.H. problem make that clear to you?

The M.H. problem is perfectly intuitive. *In fact*, the way I typically
convince a lay person of the correctness of the "switch" answer in the MH
problem is to ask them to imagine that there are 1 million doors. You then pick
one and Monty, knowing where the prize is, opens 999,998 doors which don't have
the prize behind them. Then I ask them, do you really think you got lucky and
picked the right door initially? Because if you didn't guess correctly then the
prize lies behind the other door. Most people find this argument *very*
compelling. I was hoping that the equally compelling argument above could be
used to convince people to at least keep an open mind. Unless you can provide
an explanation as to why a biased coin is different, or you realize that 50% is
the answer by some other means, I suggest you look at it. The idea is that if
you bias the coin strongly in favor of heads, it is clear that there is a good
chance you will wake up due to a head - regardless of what happens if it comes
up tails.
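
The M.H. numbers themselves are trivial to verify by simulation; a minimal
Python sketch of the three-door game, using the fact that (once Monty has
removed every other losing door) switching wins exactly when the first pick
was wrong:

import random

def monty_hall(n_games=100_000):
    stay_wins = 0
    switch_wins = 0
    for _ in range(n_games):
        prize = random.randrange(3)
        pick = random.randrange(3)
        if pick == prize:
            stay_wins += 1      # staying wins only when the first pick was right
        else:
            switch_wins += 1    # switching wins whenever the first pick was wrong
    return stay_wins / n_games, switch_wins / n_games   # about 1/3 and 2/3

print(monty_hall())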

> If you can't prove or disprove your proposition using mathematical


> expressions, then you don't have a handle on the problem (sort of like the
> infamous "to learn something teach it; to understand it, program in on a
> computer").

In fact, the result of 50% is very easily proved mathematically, as r e s has
shown *repeatedly*. Furthermore, I have repeatedly gone to the trouble of
exposing the fallacy in every position I have seen which favors 1/3 as the
answer.

> And in fact I don't accept that a biased coin yields a situation analogous to
> the original problem.

What do you mean? You think that biasing the coin changes things dramatically?
Why don't you explain this.


Norman Diamond

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <36F0717B...@flash.net>, Matt McLelland <mat...@flash.net> writes:
>Norman Diamond wrote:
>>Before the experiment, both Beauty and we believe that the probability
>>of the coin coming up heads will be 50%. Both she and we believe that
>>there will be two occasions for an event to occur. Everyone knows that
>>for an observer who will not know how the coin landed or which day it is,
>>the following events will appear equally probable:
>>(a) Heads and Monday and we will awaken her.
>>(b) Heads and Tuesday and we will not awaken her.
>>(c) Tails and Monday and we will awaken her.
>>(d) Tails and Tuesday and we will awaken her.
>
>I would like to add some more "equally probable" events to your list:
>(e) Heads and Wednesday and we will not awaken her.
>(f) Heads and Thursday and we will not awaken her.
>...

You forgot some:
(g) Tails and Wednesday and we will not awaken her.
(h) Tails and Thursday and we will not awaken her.
...

There is no problem. You create a new experiment adding Wednesday and
Thursday and ..., then you add these events, and they are all equally
probable.

Beauty's conditional probabilities don't change. When she is awake,
she gains information that events (b), (e), (f), (g), (h), ... have
been eliminated. The three remaining events remain equally probable
(1/3 chance) for her.

If you don't believe it yet, consider solely coin flips. Someone else
sets up an experiment where a 1-yen coin and a 5-yen coin are both flipped
once. Events HH, HT, TH, and TT are equally probable. Then you want
to object that adding a 10-yen coin changes the probabilities. It
does not. Events HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT will be
equally probable. Meanwhile, for the 1 and 5, events HH, HT, TH, and
TT will continue to exist and will remain equally probable, though of
course their probabilities will be 0.25 not 0.125.

Suppose with just the 1 and 5, if event HH or TH or TT occured then
the person informs you that there was a flip, but if event HT occured
then he does not inform you. I say that when you are informed of a
flip you see 3 equally probable possibilities, and I'm not quite sure
yet what you say about this. But you say that if a 10-yen coin is
added then my version is disproved. I think not. With a 1, 5, and
10, if event HHH or THH or TTH occurred then the person will inform
you that there was a flip, but if one of the other five events occurs
then he will not inform you. Then I still say when you are informed
that there was a flip, you have 3 equally probable possibilities.

>You see, you can't just claim that all of these things are equally probable. What
>is the a priori probability that today is Monday??

Not needed. If this is what's bothering you, change the wording of the
original problem just to say that Beauty is awakened twice for tails but
only once for heads. No weekdays involved.

Norman Diamond

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <36F08934...@i.am>, kIdMiGaRu <kid...@i.am> writes:
>Again. I'll use a bag for my analogy. If I flip heads, I will put a red
>ball in the bag. If I flip tails, I will put two blue balls in the bag.
>If I ask you what are your odds of pulling out a red ball after I flip,
>would you say 1/3? I don't think so.

Right. But what is the mapping onto Beauty's problem? If you put one
red ball in the bag, Beauty will draw it once. If you put two blue balls
into the bag, Beauty will draw twice. See it now?

(In the original problem the blue balls can be distinguished and we know
which order they will be drawn in, but this adds no information about
probabilities, it merely clarifies the mapping.)

You know for one flip that Beauty has a 50% chance of a red ball once,
and that Beauty has a 50% chance of drawing a blue ball twice.

Beauty knows that for one drawing she has a 1/3 chance of drawing red
and a 2/3 chance of drawing blue. Beauty doesn't know when you flipped.

1/3 of the awakenings are immediately preceded by heads.
1/3 of the awakenings are immediately preceded by tails.
1/3 of the awakenings are immediately preceded by awakenings, and the
preceding flip was tails.
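
This mapping is easy to tally directly. A short Python sketch, counting once
per flip ("did the bag get the red ball?") and once per individual drawing
("what color came out?"):

import random

def bag_tallies(n_flips=100_000):
    red_bags = 0
    red_draws = 0
    total_draws = 0
    for _ in range(n_flips):
        if random.random() < 0.5:   # heads: one red ball, drawn once
            red_bags += 1
            red_draws += 1
            total_draws += 1
        else:                       # tails: two blue balls, drawn twice
            total_draws += 2
    return red_bags / n_flips, red_draws / total_draws   # about 1/2 and 1/3

print(bag_tallies())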

Matt McLelland

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
Norman Diamond wrote:

Rather than rebutting each point you have made, I am going to try to explain how
your coin experiment looks from the viewpoint of an observer versus from the
viewpoint of Beauty, as that is where I believe the heart of your error lies.

> Suppose with just the 1 and 5, if event HH or TH or TT occured then
> the person informs you that there was a flip, but if event HT occured
> then he does not inform you. I say that when you are informed of a
> flip you see 3 equally probable possibilities, and I'm not quite sure
> yet what you say about this.

This analysis is correct. If you were informed that there was a flip, then the
only thing you could eliminate is HT, and the other cases are equally probable. This
problem is isomorphic to the following:

A coin is flipped once. If it comes up heads Beauty is awakened on Monday. If the
coin comes up tails she is awakened on both Monday and Tuesday. You, as an observer,
are awakened on a random day (Monday or Tuesday) and are not told which. When you
learn that Beauty was awakened on this same day, what is the probability that the coin
flipped heads?

The answer is 1/3 as you have correctly determined. This can be seen because the day
you are awakened is *independent* of the outcome of the coin toss. We can therefore
conclude that HM, HT, TM, and TT are four equally probable events. When you learn that
Beauty is awake, it eliminates possibility HT and leaves three equally likely choices.
This is *not* the same as the original problem. In the original problem, the day on
which Beauty is awakened is *not* independent of the coin toss, it is intimately
dependent on it.

When Beauty prepares to undergo the experiment, she knows that there is a 50%
chance that the coin will come up heads and she will be awoken on Monday. She *knows*
that she will be woken up no matter what happens. When she awakes, therefore, she has
not eliminated *any* possibility and so the probability of heads is still 50%. It is
ridiculous to say she has eliminated the possibility "I am not awake and it is
tuesday", as that was an impossibility going into the experiment. The argument in this
last paragraph is really very compelling, and I would ask that you really try to find
the fault in this reasoning. (i.e., don't just give another calculation deriving your
answer. Point to the step in this logic which is faulty and explain why.)


David A Karr

unread,
Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <36F08934...@i.am>, kIdMiGaRu <kid...@i.am> wrote:
>First, there's the straightforward probability approach:
>P(heads, Monday) = 1/2 * 1 + 1/2 * 0 = 1/2
>P(tails, Monday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
>P(tails, Tuesday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
>I'm hoping you can see why this is reasonable.

No. This seems like utter nonsense to me. I can't see any interpretation
of P(tails,Monday) that would justify this approach, at least not obviously.

For example, if P(tails,Monday) means the prior probability (before the
start of the experiment) that the coin will come up tails and Beauty will
be awakened on Monday, then P(tails,Monday) = 1/2.

If P(tails,Monday) is supposed to be Beauty's estimate that "today is
Monday and the coin came up tails", calculated during one of her
waking periods, then surely it's obvious by now that some of us find
this calculation as obscure as hell, and saying it's "obviously" 1/4
is not going to be in the least reassuring to us.

--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962

David A Karr

Mar 19, 1999, 3:00:00 AM
Matt McLelland <mat...@flash.net> wrote:
>To put things in perspective, maybe you and your friend (performing the
>experiment) agree to put a bucket full of sand on the ground and you only
>get woken up once until the sand spontaneously forms a 1 cubic meter
>diamond by random molecule interactions. Do you *really* think that when
>you get up after performing this experiment *1* time that you can be
>*certain* that a 1 cubic meter diamond is waiting for you?

Since diamonds are composed of carbon and sand is composed mainly of
silicon and oxygen, I'd say with probability 1 (or practically 1) that
I'd never wake a second time. (To tell the truth, I'm not sure what
exactly you meant by "you only get woken up once until ... .")

Norman Diamond

Mar 19, 1999, 3:00:00 AM
In article <36F06C3E...@flash.net>, Matt McLelland <mat...@flash.net> writes:
>Norman Diamond wrote:
>>Of course they aren't. Think about it. :-)
>>Repeat the experiment for N AWAKENINGS, for large N. Heads will have
>>occurred N/3 times; tails will also have occurred N/3 times. But N/3
>>awakenings will have been preceded by heads, and 2N/3 awakenings will
>>have been preceded by tails. So the event
>> The coin came up heads and I was awakened
>>and the event
>> The coin came up tails and I was awakened
>>(for some fixed k between 1 and a zillion) will occur different numbers
>>of times. If they occur differently often in a large number of trials,
>>then BY DEFINITION they're differently probable.
>
>I would like to point out that this is not what I was saying -

Of course. This is what Mr. Bunn was saying, except that I corrected
errors in what was being counted.

>these two events *are* equiprobable.

They are not.

>I think you actually agree with the guy you are arguing (Ted -
>bu...@pac2.berkeley.edu).

Yes and no. I agree with him that frequentism is a valid method of
computing probabilities. I disagree on what to count and on the result
that is obtained, which is why I corrected his posting.

>You are counting every awakening that occurs in the experiment - when what
>you should be doing is counting a particular random awakening per experiment.

Fine. Out of N awakenings, pick one awakening with uniform distribution
over the set of all awakenings. The flip most recent before the awakening
has a 2/3 chance of being tails.

Matt McLelland

Mar 19, 1999, 3:00:00 AM
Norman Diamond wrote:

> You know for one flip that Beauty has a 50% chance of a red ball once,
> and that Beauty has a 50% chance of drawing a blue ball twice.

> Beauty knows that for one drawing she has a 1/3 chance of drawing red
> and a 2/3 chance of drawing blue. Beauty doesn't know when you flipped.

No, and I think this example really cuts to the heart of the matter. If there was
one flip, and Beauty picked a random drawing, then the odds of the drawing being
red are 1/2. The odds of the drawing being blue are 1/2.

ON THE OTHER HAND, let's change the original problem so that the experiment is
repeated a large number of times. That is, a coin is repeatedly flipped, and
Beauty is awakened once on heads and twice on tails each time. Her amnesia of
previous events remains throughout the entire procedure, of which she is fully
aware. Now, when she awakens, what probability should she attach to the event that
the coin resulting in her awakening was heads?


David A Karr

Mar 19, 1999, 3:00:00 AM
Matt McLelland <mat...@flash.net> wrote:
>David A Karr wrote:
>> I was rather under the impression that the all the other discussion
>> was simply begging that question. I could be wrong.
>
>If I understand your position it is that we face a dilemma if our
>definition of probability involves frequency of occurrence.

No, only if you have two different ways of counting frequency of occurrence,
and you believe *both* are valid. Otherwise, there's no dilemma, merely
the question "what is probability?"

>We flip a coin, and it comes up heads we will increment the count on
>the event "The coin was heads when I awakened". What if it comes up
>tails? Do we increment both "The coin was tails when I was awakened
>on Monday" and the "The coin was tails when I was awakened on
>Tuesday?" No. That would mean that a single trial had been used to
>count two *disjoint* events.

But if you actually ran the experiment, and *if* the coin came up tails,
then you *would* be awakened on Monday and also on Tuesday, i.e., both
events would actually have occurred. How then can these two events
be "disjoint"?

Matt McLelland

Mar 19, 1999, 3:00:00 AM
David A Karr wrote:

> Matt McLelland <mat...@flash.net> wrote:
> >To put things in perspective, maybe you and your friend (performing the
> >experiment) agree to put a bucket full of sand on the ground and you only
> >get woken up once until the sand spontaneously forms a 1 cubic meter
> >diamond by random molecule interactions. Do you *really* think that when
> >you get up after performing this experiment *1* time that you can be
> >*certain* that a 1 cubic meter diamond is waiting for you?

Ok. This was full of error. I meant that you put a bucket of coal on the
ground and go to sleep. If it turns into a diamond you do the sleeping beauty
thing of waking up and going back to sleep, forgetting all details, 1
zillion times. If it doesn't you wake up just once and get to go on with your
life. Do you really expect to get a big diamond this way?

Matt McLelland

Mar 19, 1999, 3:00:00 AM
David A Karr wrote:

> But if you actually ran the experiment, and *if* the coin came up tails,
> then you *would* be awakened on Monday and also on Tuesday, i.e., both
> events would actually have occurred. How then can these two events
> be "disjoint"?

The two events "The coin was heads and today is monday" and "The coin was tails
and today is tuesday" are disjoint - thus when we do repeated trials we
shouldn't be counting both of them with one trial. Again you must be careful
to distinguish between events like "Tails and she was awakened on monday" and
"Tails and today is monday". We are trying to find the probability of the
second type. Events of the first type are all the same event.

Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
themselves (or us) by now.


David A Karr

Mar 19, 1999, 3:00:00 AM
Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
>ka...@shore.net (David A Karr)
>> If I analyze a statistical sample, I might
>> make a statement about a hypothesis with "confidence 0.95". This is
>> not at all the same as my saying that I estimate the hypothesis to be
>> true with *probability* 0.95.
>
>Give an example, preferably involving balls in urns. ;-)

OK. That's easy:

(1) Suppose I have an urn here. I didn't really have any knowledge
what was in it until a minute ago, but then I reached in and felt a
bunch of balls, so I stirred them up very thoroughly and pulled one
out, looked at it, and dropped it back in. I repeated this five
times. Each time, the ball was black. I would say then that this
provided evidence that most of the balls in the urn are black, with
confidence 95% (actually almost 97% if you want to push it).

(2) This is not the same as my saying that with 95% probability most
of the balls in the urn are black. I don't know what probability I
can rationally assign to that event.

Technically, of course my 95% confidence from paragraph (1) is a
probability, but it's a probability conditioned on a counterfactual.
To wit, it means that if it *wasn't* true that the urn had a majority
of black balls, there's only a 5% probability (less, actually) that
I'd have pulled out five black ones in a row. I say counterfactual
because, having run the experiment, I have been led to disbelieve
that the condition is true. In any case this probability is not at
all the same as the one in paragraph (2).
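
The arithmetic behind the "almost 97%" figure is a one-liner; here is a minimal sketch of it (an added illustration, not part of the post; it assumes the least favorable null case of an urn with exactly half black balls, and draws with replacement):

# Chance of five black draws in a row if exactly half the balls are black,
# and the confidence figure derived from it.
p_five_black_under_null = 0.5 ** 5        # 1/32, about 0.031
confidence = 1 - p_five_black_under_null  # about 0.969, i.e. "almost 97%"
print(p_five_black_under_null, confidence)

If fewer than half the balls were black, five black draws in a row would be rarer still, so roughly 96.9% is a lower bound on the confidence in "most balls are black".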

Now if you were to persuade me that it was highly unlikely that the
balls were black in the first place, more unlikely even than the
counterfactual probability in (1), then I probably ought not to
conclude that the balls are mostly black. So in reality, this sort of
statistics only really works to establish facts that seemed likely
(but we're quite fuzzy about how likely) to begin with.

>As you know, I use these epistemic words, 'confidence', 'credence',
>because that's my preferred interpretation of ordinary probability
>statements.

"Credence" is fine; I objected to "confidence" only because of the
semantic overloading.

>At the midpoint, the surprisingness of the coin's landing Heads is exactly
>the same as the surprisingness of its landing Tails; thus your confidence in
>the two is also the same. So that's 1/2.

What's the surprisingness of the following sequence of ten coin flips:
HTTHTHHTTH
I don't see anything surprising about it (I just generated it with a
real coin), but its prior probability was 1/1024. (I would have been
surprised by ten tails in a row, but only because I'm a fallible human.)

>We can think of all confidences as limits of sequences of compounded
>equally-surprising events, a la Ramsey.

Oh, then you could say my sequence is really no more surprising than
the other 1023 disjoint possible outcomes of that experiment.


>> to imagine that I (or someone else) could make some sort of bet on them.
>> (After all, that's the origin of this branch of mathematics, isn't it?)
>

>Ian Hacking [...] thinks that these got unified by
>actuaries in (as I recall) Holland and Flanders, who needed a science of
>probability to run their insurance businesses. [...]
>
>But you presumably mean something later, [...]

No, I'm trying to recall a story wherein a certain gambler is said to
have asked a certain famous mathematician to figure the odds on a certain
gambling game. Unfortunately I don't recall the details at the moment.

Bets made by insurance companies would certainly fit into the concept
I had in mind, though.

>> For example, if she were to assign probability 1/3 to
>> heads, then she'll be twice as surprised when the coin is revealed to
>> be heads as when it's revealed to be tails.
>

>Ahhhh.
>The 'surprisingness' point really is supposed to be a little different.
>The probability you report is supposed to be an accurate measure of how
>surprised you will actually be (higher prob => less surprised).

But I thought the reason that we bother to calculate probabilties in
most practical cases is to tell us how surprised we *should* be when
the low-probability event occurs. Otherwise, HHHHHHHHHH actually would
have lower probability than HTTHTHHTTH, and the car would be behind your
originally-chosen door with probability almost 1/2 (because most people
seem to find it equally surprising that it might turn up behind the other door).

On the other hand, I don't think my model of minimizing "surprise risk"
was very good, so I withdraw it.

Note that when we introduce betting, there are still subtleties. For
example, if we ask Beauty, "Do you want to bet <blah blah blah>," her
rational response might be, "Wait a minute. Are any bets I might make
this week cumulative, or does my answer to that question simply
supersede any previous answers I might have given to it?" Our answer
to this should determine what odds she's willing to take. I suspect
there may be similar subtleties in defining the probability in any
other way.

David A Karr

Mar 19, 1999, 3:00:00 AM
Matt McLelland <mat...@flash.net> wrote:
>Ok. This was full of error. I meant that you put a bucket of coal on the
>ground and go to sleep. If it turns into a diamond you do the sleeping beauty
>thing of waking up and going back to sleep, forgetting all details, 1
>zillion times. If it doesn't you wake up just once and get to go on with your
>life. Do you really expect to get a big diamond this way?

If I'm awake and I know the experiment has not yet ended, then hell yes.
Once is enough, never mind the zillion times. You just guaranteed me no
other possibility exists.

I'm assuming a zillion is a lot smaller than the odds against coal turning
into a diamond, however, so when they wake me up I'll figure it's almost
sure that they're about to send me home. I'll be even surer that there's
no diamond when they say, "OK, you can go home now."

I'm assuming in the original Sleeping Beauty experiment that Beauty knows
she won't be interviewed at the end of the week, at least not the way
she is on Monday or (possibly) Tuesday. So when we ask her what she
thinks the probability of heads is, she knows the experiment isn't over
and can answer accordingly.

David A Karr

Mar 19, 1999, 3:00:00 AM
Matt McLelland <mat...@flash.net> wrote:
>what you should be doing is counting
>a particular random awakening per experiment. (I posted more on this a
>post or two ago)

Did you justify this claim then? (I forget.) Can you?

David A Karr

Mar 19, 1999, 3:00:00 AM
Matt McLelland <mat...@flash.net> wrote:
>David A Karr wrote:
>
>> But if you actually ran the experiment, and *if* the coin came up tails,
>> then you *would* be awakened on Monday and also on Tuesday, i.e., both
>> events would actually have occurred. How then can these two events
>> be "disjoint"?
>
>The two events "The coin was heads and today is monday" and "The coin
>was tails and today is tuesday" are disjoint

Then maybe you should have phrased the events that way, rather than
"The coin was tails when I was awakened on Monday" as you did in the
post to which I responded. You've trimmed too much from the message
above; it's very hard for anyone to see what I was referring to.

>Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
>themselves (or us) by now.

If they've half a brain (or better) they're killing every message in this
thread before reading it, as I do with all uninteresting (to me) threads.

What's the probability that whoever last read this post is ready to
shoot me? :-)

Matt McLelland

Mar 19, 1999, 3:00:00 AM
Ok. I have worked out the math for a more general case which I hope will help clarify
the situation.

Question:
Let one trial consist of a single coin toss. If the toss comes up heads, then
Beauty (SB) is awakened once. If the toss comes up tails, she will awaken N times. Now
suppose that for our experiment we tell SB that we are going to conduct M trials. The
usual forgetting rules apply each time she awakens. When she awakens, what are the
odds that the last coin flipped came up heads?

Solution:
To simplify explanation, I am going to describe how to solve the case of M=2, N=2 and
then wave my hands and give a general formula. I suspect that anyone who doesn't like
my formula for the general case also won't like my explanation for M=2, N=2, and so
this just helps keep things less verbose.

There are four equally probable events corresponding to the coin toss results in the
first and second trials: HH, HT, TH, TT. That these are equiprobable comes from
the fact that the trials are independent. In the first case, there will be two
awakenings, both of which come from heads. In the second and third cases, HT and
TH, there will be three awakenings, of which only one comes from heads. In the final case of
TT, there will be four awakenings - all from tails. So, the probability that a
particular awakening (when she awakens at some random time) comes from heads given that
the coins came up HH will be 1. Similarly the same conditional probability will be 1/3
given HT, 1/3 given TH, and 0 given TT. By the law of total probability, this gives a
value of 1*1/4 + 1/3*1/4 + 1/3*1/4 = 5/12 that the last coin flipped was heads if you
don't know anything about the two coin flips.
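
For readers who would rather see the 5/12 come out of a simulation, here is a minimal Python sketch (an added illustration; the function name and the 200,000 repetitions are arbitrary) of the M=2, N=2 case, which picks one awakening uniformly at random per two-flip experiment:

import random

def prob_heads_at_random_awakening(experiments=200000, n=2):
    # One experiment = two independent flips; heads adds 1 awakening, tails adds n.
    hits = 0
    for _ in range(experiments):
        awakenings = []
        for _ in range(2):                        # M = 2 trials
            if random.random() < 0.5:
                awakenings.append("heads")        # one awakening from a heads trial
            else:
                awakenings.extend(["tails"] * n)  # n awakenings from a tails trial
        hits += random.choice(awakenings) == "heads"
    return hits / experiments

print(prob_heads_at_random_awakening())   # comes out near 5/12, about 0.417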

Using a generalized argument, I calculate the solution of the general problem to be:

SUM K=0 to M OF { (M choose K) (M-K) / (M+N*K) } / 2^M

Go ahead and calculate values of this thing for large M. You will see that it
converges to 1/3. However, the original case of M=1, N=2 still evaluates to 1/2. This
shows, if you believe this result, that you *cannot* measure the probability of heads
in a single experiment by simply repeating the experiment a large number of times and
then claiming that the typical result there should apply to the case of M=1.


Matt McLelland

Mar 19, 1999, 3:00:00 AM
Matt McLelland wrote:

> Go ahead and calculate values of this thing for large M. You will see that it
> converges to 1/3.

This should be 1/(N+1) here since N wasn't necessarily 2.

Also, there is a mistake in my posted formula. Instead of


SUM K=0 to M OF { (M choose K) (M-K) / (M+N*K) } / 2^M

it should be
SUM K=0 to M OF { (M choose K) (M-K) / (M+(N-1)*K) } / 2^M
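
A minimal Python sketch (added for illustration; math.comb needs a reasonably recent Python, and the chosen values of M are arbitrary) evaluates the corrected formula for N = 2:

from math import comb

def p_heads(M, N=2):
    # SUM K=0..M of C(M,K) * (M-K) / (M + (N-1)*K), all divided by 2^M
    return sum(comb(M, K) * (M - K) / (M + (N - 1) * K) for K in range(M + 1)) / 2 ** M

for M in (1, 2, 10, 100, 1000):
    print(M, p_heads(M))
# M=1 gives 0.5, M=2 gives 5/12, and the values drift toward 1/3 = 1/(N+1) as M grows.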


Matt McLelland

Mar 19, 1999, 3:00:00 AM
David A Karr wrote:

> Matt McLelland <mat...@flash.net> wrote:
> >what you should be doing is counting
> >a particular random awakening per experiment. (I posted more on this a
> >post or two ago)
>
> Did you justify this claim then? (I forget.) Can you?

I don't know if I can to anyone's satisfaction =). It is clear as can be to
me, however. To approximate the odds of a certain event E occurring in a
trial, one way to do it is to run a bunch of trials and put

P(E) = (number of times E occurred in N trials) / N

Let Heads be the event that the coin came up heads. Similarly Tails.
Let Monday be the event that she was woken up on Monday.

It is simple to see that if we use this as the definition of probability
P(Heads and Monday) = 1/2. (If we run a bunch of trials, about half will be
from heads). So under this definition, in half of the trials, when she
wakes up the coin will have come up heads, and so the odds of the coin being
heads when she wakes up are 1/2.
Notice I haven't mentioned what happens on the tails side of the coin yet -
and I have still managed to compute the odds that the coin is heads
when she wakes.


Matt McLelland

Mar 19, 1999, 3:00:00 AM
NEW APPLICATIONS OF MATHEMATICS: Matt McLelland's Get Rich Clinic

Poor? Work all of the time and barely make ends meet? Feel like ending it
all? Come to me first!

At my clinic, we will buy you a lottery ticket. Now everyone knows that your
odds of winning our state lottery are only 1 in a million, but the $20 million
jackpot this time can almost certainly be yours if you just call me now!
When you arrive you will be given a warm bath and fed a four star meal of your
choice. When you are done being pampered, you just lie down and go to sleep.
While you are sleeping, our trained personnel will watch the TV to see if your
lottery ticket is a winner. If it isn't, then you just pay our small $250
service fee and go on about your day. If it is, then you will be repeatedly
awakened into lavish surroundings and by beautiful women every day for the rest
of your life. How you ask?!? Well, at the end of every 30 minute interval, we
will wipe your memory clean up to the time you arrived! The mathematics behind
this miracle are complicated, so I won't bore you, but if we suppose that you
will live 50 more years, then your odds of awakening to find that you have won
are amazingly over 50%! Wouldn't you pay $250 to have a 50% chance of winning
the lottery?!? The number is 1800-IMAFOOL. Operators are standing by.

Matthew Daly

Mar 19, 1999, 3:00:00 AM
I'll never forget the time that ka...@shore.net (David A Karr) said:

>In the Monty Hall simulations, it was pretty obvious how to decide the
>question. Propose a contestant strategy, play a million games according
>to this strategy, and see how many cars you win. The strategy that wins
>the most cars is clearly best (unless you prefer goats).
>
>Here, we can certainly run a simulation, but I don't see any
>rational (if you'll pardon the term) way for us to agree on how
>to score the results. Maybe that's what the controversy is all about.

It seems to me that it's a simple application of game theory. Call the
two players Adversary and Beauty, and put the coin away for a few moments.
Adversary has two moves in the game depending on when to wake Beauty
(which I'll call M and MT), and Beauty has two moves in guessing which day
it is (which I'll call M and T). We'll say that Beauty gets a point if
she's right and no point if she's wrong. Here's the payoff matrix (y'all
have fixed-width fonts, right?):

                 B
             M       T
        M    1       0
   A
        MT   0.5     0.5

Up to this point, we've ignored the fair coin, but all it's saying is that
we know that A's strategy vector is fixed at [0.5, 0.5]. So, if my game
theory and matrix multiplication skills haven't left me, the payoff vector
for Beauty's pure strategies is [0.75, 0.25], i.e. Monday is the right
answer _3/4_ of the time.

I'm not sure if I should be apologetic or defensive in that I didn't come
up with 2/3. Perhaps my implementation of the model is wrong (which I
wouldn't put past me), but I think it's also possible that people haven't
observed that "which day is it?" isn't an independent variable, and only
by considering the model where A has a choice does the elementary solution
flow freely.
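
For anyone who wants to check the matrix arithmetic, here is a minimal sketch (added for illustration; it simply multiplies Adversary's fixed mixed strategy into the payoff matrix above, taking the 0.5 entries as given):

# Rows of `payoff` are Adversary's moves (M, MT); columns are Beauty's pure guesses (M, T).
# Entries are Beauty's score per game under the averaged scoring the 0.5 entries imply.
payoff = [[1.0, 0.0],     # A plays M  (heads): guessing Monday scores 1, Tuesday scores 0
          [0.5, 0.5]]     # A plays MT (tails): either guess is right on one of the two days
a_strategy = [0.5, 0.5]   # the fair coin fixes Adversary's mixed strategy

beauty_payoffs = [sum(a_strategy[i] * payoff[i][j] for i in range(2)) for j in range(2)]
print(beauty_payoffs)     # [0.75, 0.25]: always guessing Monday scores 3/4 per game

Whether 0.5 is the right entry for the MT row is taken up later in the thread.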

-Matthew
---
Matthew Daly mwd...@pobox.com http://www.frontiernet.net/~mwdaly/

Though he is a person to whom things do not happen, perhaps they
may when he is on the other side. - E. Gorey

Matthew Daly

Mar 19, 1999, 3:00:00 AM
(Sorry if this is reposted)

Matt McLelland

Mar 19, 1999, 3:00:00 AM
Matthew Daly wrote:

> [...] i.e. Monday is the right answer _3/4_ of the time. [...]

This analysis seems correct, and has come to the correct conclusion in that
when Beauty awakes, the probability that it is Monday is 3/4. However, this
wasn't the question we were debating (though the 2/3 crowd would probably
disagree with your result). We were debating the probability that the coin
flipped heads and to a lesser extent the probability that the coin came up
heads if we told her it was Monday. You've found the right answer to the
wrong question!


Gerry Quinn

Mar 19, 1999, 3:00:00 AM
In article <36F1F2DF...@flash.net>, Matt McLelland <mat...@flash.net> wrote:

>
>It is simple to see that if we use this as the definition of probability
>P(Heads and Monday) = 1/2. (If we run a bunch of trials, about half will be
>from heads). So under this definition, in half of the trials, when she
>wakes up the coin will have come up heads, and so the odds of the coin being
>heads when she wakes up are 1/2.
>Notice I haven't mentioned what happens on the tails side of the coin yet -
>and I have still managed to compute the odds that the coin is heads
>when she wakes.
>

The above encapsulates your error very clearly.

The odds of the coin being heads DURING A TRIAL are 1/2. The odds of the coin
being heads DURING AN AWAKENING are 1/3.

Suppose marmalade comes in large jars and honey comes in small jars. When the
larder is empty I choose one at random. Today I may or may not have gone to
the shop, and am eating either marmalade or honey. What is the chance I am
eating marmalade? (Ans. >50%)

Next time I go to the shop, what is the chance I bought marmalade last time?
(Ans. 50%).

- Gerry Quinn
http://bindweed.com

Matthew Daly

Mar 19, 1999, 3:00:00 AM

Good point. I didn't try to understand every post in the thread, but I
thought that I had at least read the first one correctly.... <sigh>

-Matthew, who seems to have some mental allergy to the word "Bayesian"

Matthew T. Russotto

Mar 19, 1999, 3:00:00 AM
In article <36F19C43...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:
}In fact, the result of 50% is very easily proved mathematically, as r e s has
}shown *repeatedly*. Furthermore, I have repeatedly gone to the trouble of
}exposing the fallacy in every position I have seen which favors 1/3 as the
}answer.

The problem with the SB problem isn't with the mathematics; it's
mapping the mathematics to the problem. r e s has shown that there is
a problem where 1/2 is the answer, and that in that problem, if
SB knows it is Monday, she should answer that the probability of heads
is 2/3rds. I don't think that's the problem posed.
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."

Jamie Dreier

Mar 19, 1999, 3:00:00 AM
ka...@shore.net (David A Karr) wrote:

> Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
> >ka...@shore.net (David A Karr)
> >> If I analyze a statistical sample, I might
> >> make a statement about a hypothesis with "confidence 0.95". This is
> >> not at all the same as my saying that I estimate the hypothesis to be
> >> true with *probability* 0.95.
> >
> >Give an example, preferably involving balls in urns. ;-)
>
> OK. That's easy:
>
> (1) Suppose I have an urn here. I didn't really have any knowledge
> what was in it until a minute ago, but then I reached in and felt a
> bunch of balls, so I stirred them up very thoroughly and pulled one
> out, looked at it, and dropped it back in. I repeated this five
> times. Each time, the ball was black. I would say then that this
> provided evidence that most of the balls in the urn are black, with
> confidence 95% (actually almost 97% if you want to push it).

Ok.
So let's see, what you did was to find

pr(drew five out of five black | NOT[most balls are black])

and the confidence in [most balls are black] was 1 - that probability.

So the confidence in the hypothesis, given the evidence, is

1 - pr(E | ~H) (where the "~" is negation)


> (2) This is not the same as my saying that with 95% probability most
> of the balls in the urn are black. I don't know what probability I
> can rationally assign to that event.

Right. Neither do I. It is entirely open, depending on your priors.


Surprisingness:

> What's the surprisingness of the following sequence of ten coin flips:
> HTTHTHHTTH

It's very surprising. You know, like 1 - 2^(-10) surprising. As are all
sequences of ten flips.

> I don't see anything surprising about it (I just generated it with a
> real coin), but its prior probability was 1/1024. (I would have been
> surprised by ten tails in a row, but only because I'm a fallible human.)

Yeah.
That's a different concept, a different kind of surprise.


> Oh, then you could say my sequence is really no more surprising than
> the other 1023 disjoint possible outcomes of that experiment.

Right, but all of them are pretty darned surprising. Compared to this
proposition: at least one of the ten flips was a Head.

[The birth of probability theory]

> No, I'm trying to recall a story wherein a certain gambler is said to
> have asked a certain famous mathematician to figure the odds on a certain
> gambling game. Unfortunately I don't recall the details at the moment.

Oh, that one. That's Blaise Pascal, and the gambler was some French noble.
And the game was a dice game resembling Birdcage. Sure, I guess that has
as good a claim as any to the origins of the science of probability.


> Note that when we introduce betting, there are still subtleties. For
> example, if we ask Beauty, "Do you want to bet <blah blah blah>," her
> rational response might be, "Wait a minute. Are any bets I might make
> this week cumulative, or does my answer to that question simply
> supersede any previous answers I might have given to it?" Our answer
> to this should determine what odds she's willing to take.

Absolutely.
There are also some other difficulties, like, she might start thinking,
"When people start asking you to bet on obvious things, there's almost
always some kind of a trick involved." (Beauty hangs around in bars a lot
and once got bit on the nose by the Jack of Clubs.) Or she might say, "Oh,
I hate gambling, it aggravates my ulcer and it's immoral." Those seem like
mere annoyances, though. The question you had her ask is much more
serious.

-Jamie

--
SpamGard: For real return address replace "DOT" with "."

Jamie Dreier

Mar 19, 1999, 3:00:00 AM
mwd...@pobox.com wrote:


> It seems to me that it's a simple application of game theory. Call the
> two players Adversary and Beauty, and put the coin away for a few moments.
> Adversary has two moves in the game depending on when to wake Beauty
> (which I'll call M and MT), and Beauty has two moves in guessing which day
> it is (which I'll call M and T). We'll say that Beauty gets a point if
> she's right and no point if she's wrong. Here's the payoff matrix (y'all
> have fixed-width fonts, right?):
>
>                  B
>              M       T
>         M    1       0
>    A
>         MT   0.5     0.5
>
> Up to this point, we've ignored the fair coin, but all it's saying is that
> we know that A's strategy vector is fixed at [0.5, 0.5]. So, if my game
> theory and matrix multiplication skills haven't left me, the payoff vector
> for Beauty's pure strategies is [0.75, 0.25], i.e. Monday is the right
> answer _3/4_ of the time.
>
> I'm not sure if I should be apologetic or defensive in that I didn't come
> up with 2/3. Perhaps my implementation of the model is wrong (which I
> wouldn't put past me), but I think it's also possible that people haven't
> observed that "which day is it?" isn't an independent variable, and only
> by considering the model where A has a choice does the elementary solution
> flow freely.

Hmmmm.

As Matt Mc. noted, you didn't answer the original question, but the answer
you give falls squarely on the Halfers side.

But, when you say that 'Monday' is the right answer "_3/4_ of the time",
which 'times' are you considering? Note that if, in repeated trials,
Beauty always guesses 'Monday', she will be right only 2/3 of the 'times'.
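
The gap between 3/4 and 2/3 is just a difference in what counts as a 'time'; a minimal Python sketch (added for illustration; the 100,000 simulated runs are arbitrary) scores the always-guess-Monday strategy both ways:

import random

def monday_guess_scores(runs=100000):
    correct = awakenings = 0     # per-awakening tally
    per_run_total = 0.0          # per-run tally (average score within each run)
    for _ in range(runs):
        if random.random() < 0.5:   # heads: one awakening, and it really is Monday
            c, a = 1, 1
        else:                       # tails: the Monday guess is right once, wrong on Tuesday
            c, a = 1, 2
        correct += c
        awakenings += a
        per_run_total += c / a
    return correct / awakenings, per_run_total / runs

print(monday_guess_scores())   # roughly (0.667, 0.75)

Scored per awakening the strategy is right about 2/3 of the time; scored per run, as in Daly's payoff matrix, it earns about 3/4 of a point per game.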

Matthew T. Russotto

Mar 19, 1999, 3:00:00 AM
In article <36F1F2DF...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:
}David A Karr wrote:
}
}> Matt McLelland <mat...@flash.net> wrote:
}> >what you should be doing is counting
}> >a particular random awakening per experiment. (I posted more on this a
}> >post or two ago)
}>
}> Did you justify this claim then? (I forget.) Can you?
}
}I don't know if I can to anyone's satisfaction =). It is clear as can be to
}me, however. To approximate the odds of a certain event E occurring in a
}trial, one way to do it is to run a bunch of trials and put
}
}P(E) = (number of times E occurred in N trials) / N
}
}Let Heads be the event that the coin came up heads. Similarly Tails.
}Let Monday be the event that she was woken up on Monday.
}
}It is simple to see that if we use this as the definition of probability
}P(Heads and Monday) = 1/2. (If we run a bunch of trials, about half will be
}from heads).

What's a trial? If we run the experiment multiple times and count
each experiment as a trial, we find that
P(Heads and Monday) = 1/2
P(Tails and Monday) = 1/2
P(Tails and Tuesday) = 1/2

This is because for some trials, two of these "events" will occur;
they are not disjoint. If we actually look at the outcomes of the
experiment, we find that they are

Heads and Monday
Tails and (Monday AND Tuesday).

In this experiment, "Tails and Monday" is not really an event -- that
is, it is not some combination of outcomes. This is not, therefore,
the experiment that SB is being asked to guess the probability for.
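
A minimal Python sketch (added for illustration; the 100,000 simulated experiments are arbitrary) makes this point concrete: counted once per experiment, each of the three labels occurs in about half the experiments, so the three frequencies sum to about 1.5 and cannot be probabilities of disjoint outcomes:

import random

def per_experiment_tallies(experiments=100000):
    counts = {"heads_and_monday": 0, "tails_and_monday": 0, "tails_and_tuesday": 0}
    for _ in range(experiments):
        if random.random() < 0.5:              # heads: awakened on Monday only
            counts["heads_and_monday"] += 1
        else:                                  # tails: awakened on Monday AND Tuesday
            counts["tails_and_monday"] += 1
            counts["tails_and_tuesday"] += 1
    return {k: round(v / experiments, 3) for k, v in counts.items()}

print(per_experiment_tallies())   # each value is roughly 0.5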

Gerry Quinn

Mar 19, 1999, 3:00:00 AM
I just thought I'd post this - it's the same paradox but in slightly different
form, and I think it makes it much clearer:

I like marmalade and honey. Marmalade comes in large jars which last for two
weeks, while honey comes in small jars which only last a week. I always keep
a jar of one or the other. When it's empty, I go down to the shop and buy a
jar of either marmalade or honey, each with probability 50%.

Q1: You call to my house - what is the chance I am eating marmalade?
Answer: 2/3

Q2: You meet me in the shop - what is the chance that the last time I bought a
tooth-rotting toast-enhancer, it was marmalade?
Answer: 1/2

Calling to the house corresponds to awakening Sleeping Beauty. Meeting me in
the shop corresponds to the situation before or after the experiment. It's
easy to invent spurious paradoxes though. If you meet me in the shop, the
chance I was eating marmalade yesterday is only 1/2. But the chance I was
eating marmalade ten days ago is (wait for it) 3/4.
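
A minimal day-by-day simulation (added for illustration; it assumes jar lifetimes of exactly 14 and 7 days and a million simulated days) reproduces both answers:

import random

def marmalade_fractions(days=1000000):
    eating_marmalade_days = 0
    purchases = marmalade_purchases = 0
    days_left, current = 0, None
    for _ in range(days):
        if days_left == 0:                     # larder empty: go to the shop
            current = random.choice(["marmalade", "honey"])
            days_left = 14 if current == "marmalade" else 7
            purchases += 1
            marmalade_purchases += current == "marmalade"
        eating_marmalade_days += current == "marmalade"
        days_left -= 1
    return eating_marmalade_days / days, marmalade_purchases / purchases

print(marmalade_fractions())   # roughly (0.667, 0.5): Q1 is about 2/3, Q2 is about 1/2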

Matt McLelland

Mar 19, 1999, 3:00:00 AM
Gerry Quinn wrote:

> I just thought I'd post this - it's the same paradox but in slightly different
> form, and I think it makes it much clearer:
>
> I like marmalade and honey. Marmalade comes in large jars which last for two
> weeks, while honey comes in small jars which only last a week. I always keep
> a jar of one or the other. When it's empty, I go down to the shop and buy a
> jar of either marmalade or honey, each with probability 50%.
>
> Q1: You call to my house - what is the chance I am eating marmalade?
> Answer: 2/3

You have posed a very similar problem. But as I have pointed out, this problem
involves *repeated* trials of the experiment. Please read my post detailing the
calculation of this experiment [just a couple of posts ago]. The answer to this
question is in fact asymptotically 2/3. If you want to make the problem
isomorphic to the SB problem please *remove* the requirement that you go to the
store *every* week and instead only go once. Now I don't see a simple way to
change the problem to make it isomorphic. It should be clear that if
you only go once and come home with a jar of gook, it has a 50% chance of
being marmalade or honey - and it will stay this way with a suitable modification
to make the marmalade call happen twice. [Note it would not be ok to change the
problem to "I am twice as likely to call" because SB always calls on heads - she
is no more likely to wake up on heads]

David A Karr

Mar 19, 1999, 3:00:00 AM
Matthew Daly <mwd...@pobox.com> wrote:
>We'll say that Beauty gets a point if
>she's right and no point if she's wrong. Here's the payoff matrix
>
> B
> M T
> M 1 0
>A
> MT 0.5 0.5

Matthew, I had the feeling a number of times while reading your
argument that I wasn't sure I understood it. Here's one early on. If
B chooses T and A chooses MT, then B is wrong when she wakes on Monday
(no point), and right when she wakes on Tuesday (earning one point).
So how did you determine that the correct payoff for that strategy
combination was the average of the two days (0.5) rather than the
total score accumulated in that game (1)? (Or for that matter, how
do you determine that the correct score is not 0?)

I'm not claiming here that 1 or 0 is the correct answer rather than
0.5, but I do suspect more and more that the problem as originally
given is ambiguous, and that any of these three answers could be
forced by additional assumptions.

Gerry Quinn

Mar 19, 1999, 3:00:00 AM

Repeated trials simplify probability calculations by making explicit an
implied ensemble of possible universes. But the answer is still the same.

I buy a jar of one or the other, just once. You call in three days and again
in ten days. Or maybe you call at a random time, or many times. On a
particular occasion when you call, I am eating one or the other. Chances are
2:1 it's marmalade. There's more marmalade, just like there's more 'wakening
on tails'.

Parallax

Mar 19, 1999, 3:00:00 AM
On Fri, 19 Mar 1999 19:58:30 GMT, ger...@indigo.ie (Gerry Quinn)
wrote:

>I buy a jar of one or the other, just once. You call in three days and again
>in ten days. Or maybe you call at a random time, or many times. On a
>particular occasion when you call, I am eating one or the other. Chances are
>2:1 it's marmalade. There's more marmalade, just like there's more 'wakening
>on heads'.

Uh. I've been following this thread for a while, and this doesn't sit
well with me. If you buy a random jar, and I call you in 3 days time,
it's 1:1 Marmalade or Honey. If I call you in 10 days time, it's 1:1
Marmalade or Nothing. If I call you on a random day and you just so
happen to be eating -something-, then yes, it is 2:1 Marmalade. But,
this is different from Sleeping Beauty in the fact that, because you
are eating something when I called, I get an extra piece of
information. Beauty doesn't get that option. Even if we do the trial
once, I have a 1/4 chance of calling you when you're out of Honey,
which puts you squarely back at 1:1 Honey:Marmalade.

On the SB problem, which, at least in my mind, is different, think of
it this way: If she guesses heads every time, she will either be right
once or wrong twice. If she guesses tails, she will either be right
twice or wrong once. If we flip the zillion-sided coin (1 heads, one
zillion-1 tails) she will either be, on a guess of heads, right once
or wrong a zillion-1 times. On a guess of tails, she will either be
right a zillion-1 times or wrong once. As long as the coin is fair (or
we know and account for how unfair it is :) her guess should be tails,
EVEN THOUGH she knows it is a 50:50 chance either way.

And yes, I was a staunch supporter of the 50:50 throughout most of
this, and by 'most of this' I'm including THIS post. Only after I
started explaining it did I realize I was wrong.

--Parallax

Matt McLelland

Mar 19, 1999, 3:00:00 AM
David A Karr wrote:

> I'm not claiming here that 1 or 0 is the correct answer rather than
> 0.5, but I do suspect more and more that the problem as originally
> given is ambiguous, and that any of these three answers could be
> forced by additional assumptions.

I seem to be getting the same impression from many of your posts... that you
think there is a subjective element to determining probability. I may
(depending on what you actually believe) whole-heartedly disagree with
this. The probability that an event will happen is a very real thing. If
someone put a gun to Beauty's head and said "Heads or Tails?" then she has
an optimal answer, or at least a set of answers with equal and maximal
probability. Now, we can disagree on whether the word "credence" means
probability or something about expected values or about confidence levels or
whatever, but once we agree that it means probability in the usual sense, it
has a well-defined value (based on a certain fact pattern).

That Beauty should assign a probability of 50% to the event that "The coin
landed heads" upon waking is secondary in importance to realizing that there
is *some* well-defined probability which she should assign to that event.


Matt McLelland

Mar 19, 1999, 3:00:00 AM
Gerry Quinn wrote:

> Repeated trials simplify probability calculations by making explicit an
> implied ensemble of possible universes. But the answer is still the same.

This is true...if you do the repetition correctly. Here is what you are doing wrong:

You are running 1000 experiments and then piling all of the awakenings into a pot.
Then you imagine that Beauty awakens at a random one of these awakenings and find out if
the coin responsible is heads or tails. Since, in the 1000 experiments all run
together, there will be about 500 awakenings from heads and 1000 from tails, you will
compute the frequency of heads to be about 1/3.

What you should be doing is:

Run 1000 experiments. For each experiment, wake Beauty up once at some random
awakening and find out if the coin came up heads or tails. In this case the fraction of
awakenings that come from heads will be about 1/2.


Matt McLelland

Mar 19, 1999, 3:00:00 AM
Matthew T. Russotto wrote:

> What's a trial? If we run the experiment multiple times and count
> each experiment as a trial, we find that
> P(Heads and Monday) = 1/2
> P(Tails and Monday) = 1/2
> P(Tails and Tuesday) = 1/2
>
> This is because for some trials, two of these "events" will occur;
> they are not disjoint. If we actually look at the outcomes of the
> experiment, we find that they are
>
> Heads and Monday
> Tails and (Monday AND Tuesday).
>
> In this experiment, "Tails and Monday" is not really an event -- that
> is, it is not some combination of outcomes. This is not, therefore,
> the experiment that SB is being asked to guess the probability for.

This is all true. The experiment we have to assign probabilities for is the
following:

Flip a fair coin and record the result in the event H (for heads, ~H for tails).
Pick a random day to wake up on out of the possibilities and record this as M
(for monday, ~M for tuesday). The reason we pick a random day to wake up on is
because SB has no idea what day it is when she wakes up... from her perspective
the day is a random one.

P(H) = P(H and M) = 1/2
P(~H) = 1/2
P(~H and M) = 1/4
P(~H and ~M) = 1/4


Matt McLelland

Mar 19, 1999, 3:00:00 AM
There were some mistakes with my last post. Here is a corrected version with a few
additional comments:

Gerry Quinn wrote:
> Repeated trials simplify probability calculations by making explicit an
> implied ensemble of possible universes. But the answer is still the same.

This is true...if you do the repetition correctly. Here is what you are doing wrong:

You are running 1000 experiments and then piling all of the awakenings into a pot.

It is easy to see that if we do this there will be about 500 awakenings from heads in the
pot and 1000 from tails. So, if Beauty awakens at a random one of these awakenings,
there is a 1/3 probability of it being heads.

What you should be doing is:

Run 1000 experiments. For each experiment, wake Beauty up once at some random
awakening and find out if the coin came up heads or tails. In this case the fraction of
awakenings that come from heads will be about 1/2.

Compare this with how you compute the odds that a coin would come up heads. You flip it
1000 times and *each* *time* ask "Did it come up heads?". For a fair coin you will find
near 500 times for heads, and so the odds of heads are 500/1000. To do the same thing in
this case, we do 1000 trials of the experiment and *each* *time* ask "Does a random beauty
awaken to heads?" Again we find 500 times that she does - and the odds are 500/1000.
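
Here is a minimal Python sketch (added for illustration; the 100,000 simulated experiments are arbitrary) that runs the two counting procedures side by side; which of the two numbers deserves to be called "the probability of heads when she wakes" is exactly what is in dispute:

import random

def two_counting_schemes(experiments=100000):
    pool = []                    # scheme (a): every awakening from every experiment
    per_experiment_heads = 0     # scheme (b): one random awakening per experiment
    for _ in range(experiments):
        awakenings = ["heads"] if random.random() < 0.5 else ["tails", "tails"]
        pool.extend(awakenings)
        per_experiment_heads += random.choice(awakenings) == "heads"
    pooled_heads = sum(tag == "heads" for tag in random.choices(pool, k=experiments))
    return pooled_heads / experiments, per_experiment_heads / experiments

print(two_counting_schemes())   # roughly (0.333, 0.5)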


Carl Witthoft

Mar 19, 1999, 3:00:00 AM
While mixing bytes into the filestructure called
<FpkI2.422$no1....@news.shore.net>, the light of reason befell
ka...@shore.net (David A Karr) who thus proposed:

->Matt McLelland <mat...@flash.net> wrote:
->>David A Karr wrote:
->>
->>> But if you actually ran the experiment, and *if* the coin came up tails,
->>> then you *would* be awakened on Monday and also on Tuesday, i.e., both
->>> events would actually have occurred. How then can these two events
->>> be "disjoint"?
->>
->>The two events "The coin was heads and today is monday" and "The coin
->>was tails and today is tuesday" are disjoint
->
->Then maybe you should have phrased the events that way, rather than
->"The coin was tails when I was awakened on Monday" as you did in the
->post to which I responded. You've trimmed too much from the message
->above; it's very hard for anyone to see what I was referring to.
->
->>Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
->>themselves (or us) by now.
->
->If they've half a brain (or better) they're killing every message in this
->thread before reading it, as I do with all uninteresting (to me) threads.
->
->What's the probability that whoever last read this post is ready to
->shoot me? :-)

At this very instant, zero, since I'm the last person (or was when I typed
this) and I wouldn't shoot someone from the next town over :-)
--
Carl Witthoft c...@world.std.com ca...@aoainc.com http://world.std.com/~cgw
Got any old pinball machines for sale?
