
Mar 15, 1999

We plan to put Beauty to sleep by chemical means, and then we'll flip a

(fair) coin. If the coin lands Heads, we will awaken Beauty on Monday

afternoon and interview her. If it lands Tails, we will awaken her Monday

afternoon, interview her, put her back to sleep, and then awaken her again

on Tuesday afternoon and interview her again.

The (each?) interview is to consist of the one question: what is your

credence now for the proposition that our coin landed Heads?

When awakened (and during the interview) Beauty will not be able to tell

which day it is, nor will she remember whether she has been awakened

before.

She knows the above details of our experiment.

What credence should she state in answer to our question?

-Jamie

p.s. Don't worry, we will awaken Beauty afterward and she'll suffer no ill

effects.

p.p.s. This puzzle/problem is, as far as I know, due to a graduate student

at MIT. Unfortunately I don't know his name (I do know it's a man). The

problem apparently arose out of some consideration of the Case of the

Absentminded Driver.

p.p.p.s. Once again, I have no very confident 'solution' of my own; I will

eventually post the author's solution, but I am not entirely happy with

that one either.


Mar 15, 1999

Jamie Dreier wrote:

>

> We plan to put Beauty to sleep by chemical means, and then we'll flip a

> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday

> afternoon and interview her. If it lands Tails, we will awaken her Monday

> afternoon, interview her, put her back to sleep, and then awaken her again

> on Tuesday afternoon and interview her again.

>

> The (each?) interview is to consist of the one question: what is your

> credence now for the proposition that our coin landed Heads?

>

> When awakened (and during the interview) Beauty will not be able to tell

> which day it is, nor will she remember whether she has been awakened

> before.

>

> She knows the above details of our experiment.

>

> What credence should she state in answer to our question?


50/50. The Heads and Tails environments are identical. They

give her no information, so the probability remains the same.

| Jim Ferry                          | Center for Simulation  |
+------------------------------------+ of Advanced Rockets    |
| http://www.uiuc.edu/ph/www/jferry/ +------------------------+
| jferry@expunge_this_field.uiuc.edu | University of Illinois |

Mar 15, 1999

In article <36ED7A81.5A61@delete_this_field.uiuc.edu>,

Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:

>Jamie Dreier wrote:

>>

>> We plan to put Beauty to sleep by chemical means, and then we'll flip a

>> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday

>> afternoon and interview her. If it lands Tails, we will awaken her Monday

>> afternoon, interview her, put her back to sleep, and then awaken her again

>> on Tuesday afternoon and interview her again.


>> What credence should she state in answer to our question?

>

>50/50. The Heads and Tails environments are identical. They

>give her no information, so the probability remains the same.

SPOILER

Nope. She should say that the probability that it's tails is 2/3.

Imagine repeating the experiment a million times. Heads comes

up half a million times, as does tails. But each time tails

comes up she's awakened twice. So there are a total of

1.5 million awakenings, and only half a million of them

occur after the coin came up heads.

-Ted
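Ted's counting argument is easy to check numerically. The following is only a sketch (the function name and trial count are mine, not anything from the thread): run the experiment repeatedly with a fair coin and look at what fraction of awakenings follow Heads.

```python
import random

def heads_fraction_of_awakenings(trials=100_000, seed=0):
    """Run the experiment `trials` times with a fair coin and return the
    fraction of all awakenings that occur after the coin came up Heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:        # Heads: one awakening (Monday)
            heads_awakenings += 1
            total_awakenings += 1
        else:                          # Tails: two awakenings (Monday, Tuesday)
            total_awakenings += 2
    return heads_awakenings / total_awakenings

# The fraction settles near 1/3, matching Ted's count of half a million
# Heads-awakenings out of 1.5 million awakenings in total.
```

Whether that long-run fraction is the right measure of Beauty's credence is, of course, exactly what the rest of the thread disputes.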

Mar 15, 1999

Nope. 2/3 of the awakenings occur after a Tail, but these

awakenings are not equally likely events.

P(Heads & Monday) = 1/2

P(Tails & Monday) = 1/4

P(Tails & Tuesday) = 1/4

Mar 15, 1999

In article <36ED8693.3D15@delete_this_field.uiuc.edu>,

Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:

>

>Nope. 2/3 of the awakenings occur after a Tail, but these

>awakenings are not equally likely events.

>

>P(Heads & Monday) = 1/2

>P(Tails & Monday) = 1/4

>P(Tails & Tuesday) = 1/4


This would be correct if the rules stated that she would be awakened

on Monday OR Tuesday in the event the coin comes up tails,

but unless I misread the question, she is to be awakened on Monday

AND Tuesday in this case. That means that the three events

are definitely equiprobable.

Once again, imagine repeating the experiment a million times.

The event (Heads & Monday) will occur half a million times.

So will the event (Tails & Monday) (since tails comes up half the

time, and every time it does a Monday-awakening occurs). So

those two events are equally probable.

-Ted

Mar 15, 1999

Well, those are the two answers I expected, of course.

Each has something obvious to be said in its favor. But they can't both be

right. (Can they?)

So, to summarize:

There is a frequentist sort of argument in favor of her declaring that the

chance that the coin landed Heads is only 1/3 (to wit: suppose the game

were played repeatedly, and on each occasion for guessing she made a guess

to herself, "I guess that it's Heads"; she would be right only 1/3 of the

time). On the other hand, there is a more Bayesian sort of argument that

she should think the chance is 1/2 (to wit: I thought it was 1/2 before

they put me to sleep, and I clearly have no new information, so it would

be irrational to change my mind).

Two arguments, incompatible conclusions: at least one of the arguments
must be faulty.

-Jamie

Mar 15, 1999

Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:

>For instance, take an extreme case - instead of flipping a normal coin, say
>it is weighted in such a fashion that there is only a 1/1000000 chance of
>tails coming up - but in that case, she will be woken 999,999 times! By your
>reasoning, she should say the chance of it being tails is 50%, which is
>clearly incorrect.


"Clearly"? I see nothing clear about it.

In fact I'm not sure I know what Jamie's question even means.

What's the significance of the "credence" that Sleeping Beauty

assigns to a proposition that we already know to be true (or false)?

Are we offering her a "bet"---she has the option to pay $p (out of

the royal treasury, whose amount is "large enough" but hidden from her),

and if she does so and the coin was "heads" we pay her back $1?

And do we offer this bet every time we interview her?

If so, I wouldn't want to risk more than $0.50 if I were

Sleeping Beauty in your experiment, would you?

Change this back to 1/2 chance of heads and two wakenings on tails

for every wakening on heads: then as S.B. I wouldn't even pay

$0.34 for the chance that the coin came up heads. In other words,

that measure of "credence" is 1/3.

On the other hand, suppose again the coin comes up heads on 1/2 of

all flips, and S.B. is woken once on heads, twice on tails. But now

suppose S.B. has the opportunity *before* the coin toss to pay $0.45

"betting on heads." Each time we wake her up, we ask if she still

wants her bet (if she still has one) to stand---if she says "no" at

any time during the week, her $0.45 is returned at the end of the week,

otherwise she gets $1 on heads and $0 on tails at the end of the week.

In the latter case she'll have two chances to cancel a losing bet

for every chance to cancel a winning bet, yet it still pays *not* to

cancel the bet; in that sense her "credence" is 1/2.

So is the "credence" that the coin came up heads 1/2 or 1/3?

Again, what *is* "credence"?
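The two betting protocols described above can be played out numerically. A sketch under the stated assumptions (stakes settled at week's end; the function names are mine): a bet offered afresh at every awakening breaks even near a stake of 1/3, while a single standing bet placed before the toss breaks even near 1/2.

```python
import random

def per_awakening_profit(price, trials=100_000, seed=0):
    """Beauty pays `price` at each awakening, winning $1 back on Heads.
    Returns her average profit per awakening."""
    rng = random.Random(seed)
    profit, awakenings = 0.0, 0
    for _ in range(trials):
        if rng.random() < 0.5:        # Heads: one awakening, the bet pays off
            profit += 1.0 - price
            awakenings += 1
        else:                          # Tails: two awakenings, both stakes lost
            profit -= 2 * price
            awakenings += 2
    return profit / awakenings

def standing_bet_profit(price, trials=100_000, seed=0):
    """Beauty pays `price` once, before the toss, betting on Heads.
    Cancelling never helps her, so the bet always stands.
    Returns her average profit per run."""
    rng = random.Random(seed)
    profit = sum((1.0 - price) if rng.random() < 0.5 else -price
                 for _ in range(trials))
    return profit / trials

# per_awakening_profit breaks even near price = 1/3, and is a clear loss
# at $0.45; standing_bet_profit is still positive at $0.45.
```

So the simulation reproduces both of David's numbers; it just sharpens, rather than answers, the question of which protocol "credence" tracks.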

--

David A. Karr "Groups of guitars are on the way out, Mr. Epstein."

ka...@shore.net --Decca executive Dick Rowe, 1962

Mar 15, 1999

Hm, three good points.

> Eytan Zweig

> After the experiment ends, Beauty is woken up once more, this time for good.

> She does not, however, remember anything that happened while the experiment

> was going on, except the rules of the experiment. Once again she is

> interviewed, and asked what is the credence for the possibility that the

> coin came up heads.

>

> Obviously, the credence is now 50%; however, she gained no new information

> except that the experiment is over, which is irrelevant to the credence -

> everything else is exactly the same knowledge as she had during the

> experiment. Why, according to your logic, does the credence change?

Indeed.

The 'no new information' reasoning seems very compelling.

Back up a little and I think it can be made even *more* compelling.

Before she is put to sleep, Beauty certainly thinks that the chance of

Heads is 1/2. Now for the sake of argument, suppose that upon awakening

she really does get some hard-to-state information, so that she reasonably

changes her credence in Heads to 1/3.

But whatever that strange information might be, she *knew* she was going

to get it, she knew this before she was put to sleep. Whenever you *know*

for sure that you are going to get information soon that will rationally

make you take the chance of Heads to be 1/3, surely you must *now* take

the chance of Heads to be 1/3. Otherwise your beliefs suffer from a very

blatant sort of diachronic incoherence (and we'll make an easy Dutch book

against you).

So she couldn't possibly be getting any new information, because it

couldn't possibly be rational for her to think ahead of time that the

chance of Heads is 1/3.

ka...@shore.net (David A Karr) wrote:

> In fact I'm not sure I know what Jamie's question even means.

> What's the significance of the "credence" that Sleeping Beauty

> assigns to a proposition that we already know to be true (or false)?

Hmmm.

Well, I meant to be using the standard Bayesian sense of 'credence', which

is generally cashed out as 'degree of belief'.

If you prefer, you may (as Matt McLelland suggests) rephrase the question:

What should she take the odds of Heads to be when we interview her? (I am

talking about her rational state of belief, though -- which may or may not

be logically related to her disposition to bet, I intend not to take any

stand on what the relation is or isn't.)

> Are we offering her a "bet"---she has the option to pay $p (out of

> the royal treasury, whose amount is "large enough" but hidden from her),

> and if she does so and the coin was "heads" we pay her back $1?

> And do we offer this bet every time we interview her?

> If so, I wouldn't want to risk more than $0.50 if I were

> Sleeping Beauty in your experiment, would you?

>

>

> Change this back to 1/2 chance of heads and two wakenings on tails

> for every wakening on heads: then as S.B. I wouldn't even pay

> $0.34 for the chance that the coin came up heads. In other words,

> that measure of "credence" is 1/3.

Yeah.

Ok, but when we interview her, we can just ask her, What do you think is

the chance that the coin came up Heads? And what should she say, what is

the rational thing for her to believe?

Matt McLelland <mat...@flash.net> adds:

> The answer could be "50-50 of course. It is still a fair coin" or "Given that

> I am up, the odds are just 1/3."

> I think the second answer was the intended one.

Well, that is the answer endorsed by the author of the problem. He thinks

she should say that the probability of Heads given "I am awake now" is

1/3. *I* intended neither answer in particular. I keep waffling, myself.

bu...@pac2.berkeley.edu wrote:

> I don't think it has anything to do with frequentism vs. Bayesianism --

> I can phrase the 2/3-tails argument in Bayesian terms just as well

> as in frequentist terms.

Ok, that may be right. Still, the reasoning given in support of the 1/3

Heads answer tends to be by way of some facts about long term relative

frequencies, while the reasoning in favor of the 1/2 answer tends to be in

terms of rational change in belief.

(Note that without assuming that rational updating is by

conditionalization, it's hard to find any argument at all in favor of a

1/2 answer.)

> As it happens, I'm a diehard Bayesian;

> I phrased the argument in frequentist terms because in my

> experience that's what other people respond best to.

>

> Consider a universe of four possible events:

>

> 1. Coin came up heads; today is Monday

> 2. Coin came up heads; today is Tuesday

> 3. Coin came up tails; today is Monday

> 4. Coin came up tails; today is Tuesday

>

> Surely we can agree that these four events are equally probable, if we

> lack the information that Sleeping Beauty was awakened. To be

> specific, imagine that Rip van Winkle is asleep beside Sleeping Beauty

> and that the rules of the game dictate that he will be awakened on

> Monday and Tuesday regrardless of the outcome of the coin flip. When

> he wakes up, he assesses the probability of the four events above to

> be equal: 1/4 each. (Assume he's awakened before S.B., in the

> cases where she is to be awakened.)

>

> Sleeping Beauty is in exactly the same state as Rip van Winkle, except

> that she has one more piece of information: she knows that she's been

> awakened. Now use standard Bayesian techniques to assess her

> (subjective) probabilities for the four events. Number 2 is ruled

> out; the others are equally probable. QED.

>

> -Ted

Yeah, good point.

Here's the funny thing, though. The information, "I have been awakened",

is a peculiar piece of information. For one thing, it is 'essentially

tensed' information, which is very peculiar in itself. It is something

that in principle she cannot know in advance. If we try, "At some moment I

will be or have been awakened", as a proxy for "I have been awakened",

then we have failed to capture the information, since obviously she does

already know, before the experiment, that she will be awakened at some

moment, and still she thinks (before the experiment) that the chance that

the coin lands Heads is 1/2.

Another odd thing about this information is that Beauty herself cannot

possibly receive its contradictory as information. She cannot ever be in a

position to conditionalize on, "I have not been awakened."

When I first looked at this problem, I thought, "I can see what's so

strange about this situation -- it's that the 'information' that appears

to be relevant is information not about what the world is like, but about

what day it is." But this is not right, I was completely wrong about that.

This feature is entirely incidental, and could be squeezed right out with

a more complicated story.
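The Rip van Winkle conditioning quoted above can be written out explicitly. A sketch with exact fractions (the dictionary layout is mine):

```python
from fractions import Fraction

# Rip van Winkle's prior over the four (coin, day) possibilities.
prior = {('heads', 'Mon'): Fraction(1, 4),
         ('heads', 'Tue'): Fraction(1, 4),
         ('tails', 'Mon'): Fraction(1, 4),
         ('tails', 'Tue'): Fraction(1, 4)}

# Beauty has one extra datum: she is awake, which rules out (heads, Tue).
consistent = {k: v for k, v in prior.items() if k != ('heads', 'Tue')}
total = sum(consistent.values())
posterior = {k: v / total for k, v in consistent.items()}

# Each surviving possibility gets 1/3, so P(heads) = 1/3 on this argument.
p_heads = posterior[('heads', 'Mon')]
```

The worry raised above still stands, of course: whether "I have been awakened" is the kind of information one may conditionalize on at all.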

Mar 15, 1999

Matt McLelland <mat...@flash.net> wrote:

> Jamie Dreier wrote:

>

> > Back up a little and I think it can be made even *more* compelling.

> > Before she is put to sleep, Beauty certainly thinks that the chance of

> > Heads is 1/2. Now for the sake of argument, suppose that upon awakening

> > she really does get some hard-to-state information, so that she reasonably

> > changes her credence in Heads to 1/3.

>

> There is nothing paradoxical about this. It isn't even very complicated,
> and time really has nothing to do with it. Change the problem so that we
> don't ever interrogate her if the coin comes up heads, and suppose she
> knows this. Now imagine that you are her, and that you get interrogated.
> You don't have reason to doubt that the coin is still fair, but you still
> know that it came up tails with 100% certainty. There isn't anything
> deeper than this involved in this problem.

I think there is.

In your example, the fact that she is being interrogated does,

uncontroversially, count as information for her. We can put that

information in a tenseless way, so that she could in principle get it at

some other time -- say, before she is put to sleep in the first place. At

that moment, she will quite reasonably think, "Of course, if I am

interrogated at all, that will mean that the coin came up tails. That is,

pr(Tails | I will be interrogated) = 1."

Then, when she is in fact interrogated, she conditionalizes as usual, and voila.

But in the original problem, there doesn't seem to be any way to state the

relevant information (if it really is information) in a neutral, untensed

way. We can't put it like this: "I have been or will be interrogated."

Because she already knows that at the outset, so if pr(Heads | I have been

or will be interrogated) = 1/3, then since she knows that the condition

obtains, she can just conditionalize and conclude, pr(Heads) = 1/2. But

that's not right.

So as far as I can see, time really does have something to do with it.

Mar 15, 1999

russ...@wanda.vf.pond.com (Matthew T. Russotto) wrote:

> The only explanation I can come up with is that "today is Monday" and

> "today is Tuesday" can't be treated as events.

I think that's right. At least in the most straightforward way, they cannot be.

The 'events' in probability theory can often be thought of as sets of

possible worlds. The probability function is a measure on these sets.

Conjunction of the events corresponds to intersection of the sets,

disjunction to union, and so on. The event, "the next Congress will have a

Republican majority", is represented by the set of worlds in which the

next Congress has a Republican majority. My estimate of the probability of

the event comes from my measure of that set.

But which set of possible worlds represents the pseudoevent (hm, well, to

be less tendentious: the 'purported event') that today is Monday? Today is

Monday, after all, in *every* possible world (I'm writing at 11:52 pm on

Monday), or else in *no* possible world (you are most likely reading this

on Tuesday or Wednesday).

I guess the interesting question is whether we can extend the usual

apparatus to include this new sort of information. It ought to be

possible. After all, we *do* sometimes find ourselves in the position of

not knowing what time it is, and of having some educated guesses about it

("I know it isn't noon, my probabilities for times are clustered around

midnight..."). And sometimes this makes a difference to what we think we

should do. ("I'm pretty sure my watch is fast, so I can play two more

rounds of freecell before I have to run off to the meeting, but there is a

small chance that my watch is slow, in which case I'd better stop typing

*now* and hightail it over there....")

Mar 16, 1999

<bu...@pac2.berkeley.edu> wrote in message

news:7cjvq3$hv5$1...@agate.berkeley.edu...

>In article <36ED7A81.5A61@delete_this_field.uiuc.edu>,

>Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:

>>Jamie Dreier wrote:

>>>

>>> We plan to put Beauty to sleep by chemical means, and then we'll flip a
>>> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
>>> afternoon and interview her. If it lands Tails, we will awaken her Monday
>>> afternoon, interview her, put her back to sleep, and then awaken her again
>>> on Tuesday afternoon and interview her again.

>

>>> What credence should she state in answer to our question?

>>

>>50/50. The Heads and Tails environments are identical. They

>>give her no information, so the probability remains the same.

>

>SPOILER

>

>

>

>

>

>Nope. She should say that the probability that it's tails is 2/3.

>

>Imagine repeating the experiment a million times. Heads comes

>up half a million times, as does tails. But each time tails

>comes up she's awakened twice. So there are a total of

>1.5 million awakenings, and only half a million of them
>occur after the coin came up heads.

>

>-Ted

No, this reasoning only applies if she knows that the experiment will be
repeated many times, and that the number of tails will come out equal to
the number of heads. It does not apply to a single case where one result
eliminates the other.

For instance, take an extreme case - instead of flipping a normal coin, say
it is weighted in such a fashion that there is only a 1/1000000 chance of
tails coming up - but in that case, she will be woken 999,999 times! By your
reasoning, she should say the chance of it being tails is 50%, which is
clearly incorrect.

Eytan Zweig

Mar 16, 1999

<bu...@pac2.berkeley.edu> wrote in message

news:7ck2et$k6p$1...@agate.berkeley.edu...

>In article <36ED8693.3D15@delete_this_field.uiuc.edu>,

>Jim Ferry <jferry@delete_this_field.uiuc.edu> wrote:

>>

>>Nope. 2/3 of the awakenings occur after a Tail, but these

>>awakenings are not equally likely events.

>>

>>P(Heads & Monday) = 1/2

>>P(Tails & Monday) = 1/4

>>P(Tails & Tuesday) = 1/4

>

>This would be correct if the rules stated that she would be awakened

>on Monday OR Tuesday in the event the coin comes up tails,

>but unless I misread the question, she is to be awakened on Monday

>AND Tuesday in this case. That means that the three events

>are definitely equiprobable.

>

>Once again, imagine repeating the experiment a million times.

>The event (Heads & Monday) will occur half a million times.

>So will the event (Tails & Monday) (since tails comes up half the

>time, and every time it does a Monday-awakening occurs). So

>those two events are equally probable.

>

>-Ted

Ok, think about this related question -

After the experiment ends, Beauty is woken up once more, this time for good.

She does not, however, remember anything that happened while the experiment

was going on, except the rules of the experiment. Once again she is

interviewed, and asked what is the credence for the possibility that the

coin came up heads.

Obviously, the credence is now 50%; however, she gained no new information

except that the experiment is over, which is irrelevant to the credence -

everything else is exactly the same knowledge as she had during the

experiment. Why, according to your logic, does the credence change?

Eytan Zweig

Mar 16, 1999

David A Karr wrote:

> On the other hand, suppose again the coin comes up heads on 1/2 of

> all flips, and S.B. is woken once on heads, twice on tails. But now

> suppose S.B. has the opportunity *before* the coin toss to pay $0.45

> "betting on heads." Each time we wake her up, we ask if she still

> wants her bet (if she still has one) to stand---if she says "no" at

> any time during the week, her $0.45 is returned at the end of the week,

> otherwise she gets $1 on heads and $0 on tails at the end of the week.

>

> In the latter case she'll have two chances to cancel a losing bet

> for every chance to cancel a winning bet, yet it still pays *not* to

> cancel the bet; in that sense her "credence" is 1/2.

I think that "What is your credence" can be assumed to mean "What are the odds

[to you]". The question was:

What is your credence now for the proposition that our coin landed Heads?

Your objection is that this could be interpreted to mean "What *were* the odds

that our coin landed heads when we flipped it." I think that the use of the

word 'now' implies a meaning of "What are the odds that the coin came up

heads given that we just woke you up."

This isn't really a problem with the use of the word credence, I think. The

same possible ambiguity exists in the statement:

Now what are the odds that our coin landed heads?

Mar 16, 1999

In article <pl436000-150...@bootp-17.college.brown.edu>,

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:

>

>There is a frequentist sort of argument in favor of her declaring that the

>chance that the coin landed Heads is only 1/3 (to wit: suppose the game

>were played repeatedly, and on each occasion for guessing she made a guess

>to herself, "I guess that it's Heads"; she would be right only 1/3 of the

>time). On the other hand, there is a more Bayesian sort of argument that

>she should think the chance is 1/2 (to wit: I thought it was 1/2 before

>they put me to sleep, and I clearly have no new information, so it would

>be irrational to change my mind).


I don't think it has anything to do with frequentism vs. Bayesianism --

I can phrase the 2/3-tails argument in Bayesian terms just as well

as in frequentist terms. As it happens, I'm a diehard Bayesian; I phrased
the argument in frequentist terms because in my experience that's what
other people respond best to.

Mar 16, 1999

In article <7ck3m8$blg$1...@goethe.tau.ac.il>,

Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:

>After the experiment ends, Beauty is woken up once more, this time for good.

>She does not, however, remember anything that happened while the experiment

>was going on, except the rules of the experiment. Once again she is

>interviewed, and asked what is the credence for the possibility that the

>coin came up heads.

>

>Obviously, the credence is now 50%; however, she gained no new information

>except that the experiment is over, which is irrelevent to the credence -


Can you explain that last clause? Why on earth would you think that

the information that the experiment is over is irrelevant? Of course

it's relevant -- it completely changes the universe of possibilities

for her, from

{heads and I'm being awakened for the first time,

tails and I'm being awakened for the first time,

tails and I'm being awakened for the second time}

to

{heads and the experiment is over,

tails and the experiment is over}.

With a different universe of possibilities, there's no reason to

expect the probability of tails to be the same in the two cases. I

can't even begin to guess your reasons for suggesting that the two

situations (1. she knows the experiment is still going on; 2. she

knows the experiment is over) are equivalent.

-Ted

Mar 16, 1999

In article <7ck29k$bbm$1...@goethe.tau.ac.il>, Eytan Zweig <eyt...@spinoza.tau.ac.il> wrote:

}>Nope. She should say that the probability that it's tails is 2/3.

}>

}>Imagine repeating the experiment a million times. Heads comes

}>up half a million times, as does tails. But each time tails

}>comes up she's awakened twice. So there are a total of

}>1.5 million awakenings, and only half a million of them

}>occur after the coin came up heads.

}

}No, this reasoning only applies if she knows that the experiment shall be

}repeated several times, and that the amount of tails shall be equal to

}heads. It does not apply to a single case where one result eliminates the

}other.

}

}For instance, take an extreme case - instead of flipping a normal coin, say
}it is weighted in such a fashion that there is only a 1/1000000 chance of
}tails coming up - but in that case, she will be woken 999,999 times! By your
}reasoning, she should say the chance of it being tails is 50%, which is
}clearly incorrect.


She's given that the coin is fair, so your objection doesn't apply.

Consider this:

Same setup, only this time Sleeping Beauty is told whether it is

Monday or Tuesday. Clearly, on Tuesday she should answer that the

probability of tails is 1, and on Monday she should answer that the

probability of tails is 1/2.

Certainly when she isn't told, the aggregate probability of combining

the two cases can't drop to 1/2.

But to confound things further:

P(tails) = 1/2

P(heads) = 1/2

P(tails | Monday) = 1/2

P(tails | Tuesday) = 1

P(Monday | tails) = 1/2

P(Monday | heads) = 1

P(Tuesday | tails) = 1/2

P(Tuesday | heads) = 0

P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)

= 1/2*1/2 + 1*1/2 = 3/4

P(Tuesday) = 1/4

(alarm bells should be going off by now)

P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)

= 1/2 * 3/4 + 1 * 1/4 = 5/8

A contradiction! But all I've used is

P(A) = P(A|B)P(B) + P(A|~B)P(~B)... that can't be wrong!

The only explanation I can come up with is that "today is Monday" and

"today is Tuesday" can't be treated as events.

--

Matthew T. Russotto russ...@pond.com

"Extremism in defense of liberty is no vice, and moderation in pursuit

of justice is no virtue."

Mar 16, 1999

>For instance, take an extreme case - instead of flipping a normal coin, say
>it is weighted in such a fashion that there is only a 1/1000000 chance of
>tails coming up - but in that case, she will be woken 999,999 times! By your
>reasoning, she should say the chance of it being tails is 50%, which is
>clearly incorrect.

I don't understand the last clause ("which is clearly incorrect").

It would make a lot more sense to me if it said "which is

clearly correct." :-)

When she's awakened, there are 1,000,000 distinct possibilities:

1. Heads, and I'm being awakened for the first time.

2. Tails, and I'm being awakened for the first time.

3. Tails, and I'm being awakened for the second time.

...

1000000. Tails, and I'm being awakened for the 999,999th time.

Clearly the first one is 999,999 times more likely than any of the

others, but the others are 999,999 times more numerous, so, based on

the information she has available, the probabilities of heads and

tails are equal.
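That bookkeeping can be checked exactly with rational arithmetic (a sketch of the expected-awakening count per run, using the figures from the example above):

```python
from fractions import Fraction

p_tails = Fraction(1, 1_000_000)  # the heavily biased coin/die
wakes_if_tails = 999_999          # awakenings if tails
wakes_if_heads = 1                # awakenings if heads

# Expected number of awakenings of each kind in one run of the experiment
exp_tails_wakes = p_tails * wakes_if_tails
exp_heads_wakes = (1 - p_tails) * wakes_if_heads

print(exp_tails_wakes == exp_heads_wakes)                     # True: exact balance
print(exp_tails_wakes / (exp_tails_wakes + exp_heads_wakes))  # 1/2
```

The bias is exactly cancelled by the number of awakenings: both expectations equal 999999/1000000, so per awakening heads and tails come out even.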

-Ted

Mar 16, 1999, 3:00:00 AM

to

Jamie Dreier wrote:

> Back up a little and I think it can be made even *more* compelling.

> Before she is put to sleep, Beauty certainly thinks that the chance of

> Heads is 1/2. Now for the sake of argument, suppose that upon awakening

> she really does get some hard-to-state information, so that she reasonably

> changes her credence in Heads to 1/3.

There is nothing paradoxical about this. It isn't even very complicated, and time

really has nothing to do with it. Change the problem so that we don't ever

interrogate her if the coin comes up heads, and suppose she knows this. Now

imagine that you are her, and that you get interrogated. You don't have reason

to doubt that the coin is still fair, but you still know that it came up tails

with 100% certainty. There isn't anything deeper than this involved in this

problem.

Mar 16, 1999, 3:00:00 AM

to

Matthew T. Russotto wrote:

> P(tails | Monday) = 1/2

Bzzz.

p(tails | Monday) = 1/3

> P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)

> = 1/2*1/2 + 1*1/2 = 3/4

> P(Tuesday) = 1/4

> (alarm bells should be going off by now)

No. So far so good. Keep in mind that the event "It is Monday" is really "I got

woken up on Monday".

> P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)

> = 1/2 * 3/4 + 1 * 1/4 = 5/8

Substituting correct values you would have the following non-contradiction:

P(tails) = 1/3 * 3/4 + 1*1/4 = 1/2

Mar 16, 1999, 3:00:00 AM

to

Matt McLelland <mat...@flash.net> wrote:

> Matthew T. Russotto wrote:

>

> > P(tails | Monday) = 1/2

>

> Bzzz.

> p(tails | Monday) = 1/3

>

> > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)

> > = 1/2*1/2 + 1*1/2 = 3/4

> > P(Tuesday) = 1/4

> > (alarm bells should be going off by now)

>

> No. So far so good. Keep in mind that the event "It is Monday" is

> really "I got

> woken up on Monday"

Really?

So wait, let's see how this works.

You think that P(tails | I got woken up on Monday) = 1/3.

But she *knows* she will be awakened on Monday. So why can she not

conditionalize in advance, before she is put to sleep, and conclude that

the chance of tails is 1/3?

Or what about this version:

same set-up, except that when we awaken Beauty on Monday we will wake her

up by shouting, "HEY, BEAUTY, IT'S MONDAY!".

Now surely when we awaken her in that rude way, she will take the chance

that the coin came up Heads to be 1/2. Or is this really different from a

version in which we will not awaken her on Tuesday at all, no matter how

the coin lands? It doesn't seem to be any different, from her perspective

on Monday when we awaken her by the rude shouting.

Mar 16, 1999, 3:00:00 AM

to

Jamie Dreier wrote:

> So as far as I can see, time really does have something to do with it.

Time really isn't the issue. Unfortunately, I completely missed the boat with my

last few posts:

The probability that the coin came up heads given that you were just woken up is

1/2. My apologies.

Anyone who still doesn't believe it can imagine what would happen if they increased

the number of interrogations from 2 to 1 zillion on tails (leaving 1 interrogation

for a head). You agree to do the experiment once, and sure enough you awaken and

they ask the question. Do you really think you can be almost positive that the coin

didn't come up heads?

Mar 16, 1999, 3:00:00 AM

to

Jamie Dreier wrote:

> Matt McLelland <mat...@flash.net> wrote:

>

> > Matthew T. Russotto wrote:

> >

> > > P(tails | Monday) = 1/2

> >

> > Bzzz.

> > p(tails | Monday) = 1/3

> >

> > > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)

> > > = 1/2*1/2 + 1*1/2 = 3/4

> > > P(Tuesday) = 1/4

> > > (alarm bells should be going off by now)

> >

> > No. So far so good. Keep in mind that the event "It is Monday" is

> > really "I got

> > woken up on Monday"

>

> Really?

Really. My previous retraction doesn't affect anything I said to Matthew T.

Russotto.

> So wait, let's see how this works.

> You think that P(tails | I got woken up on Monday) = 1/3.

Yep.

> But she *knows* she will be awakened on Monday. So why can she not

> conditionalize in advance, before she is put to sleep, and conclude that

> the chance of tails is 1/3?

You are confusing the events "She will be awakened on Monday" and "Today is

Monday and she was awakened". They are not the same. The probability of the

first is 1 and the probability of the second isn't.

Let me give you a simple example that doesn't involve time:

I flip a coin. If it comes up heads I put the number 123456 in a hat, and if

it comes up tails I put the numbers 1 through a million in a hat.

Scenario 1: You pull a single number from my hat. If you pull 123456,

hopefully you will guess the coin was heads.

Scenario 2: You pull the numbers from my hat until none are left. Now, you

know that you will eventually pull 123456 from either hat. So if you learned

that 123456 was pulled from the hat at some point during our trial, there

would be no 'information' gained. On the other hand, if you pull 123456 from

the hat on your first pull, then the odds are that the coin flipped heads.
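The two hat scenarios can be run through Bayes' rule directly (a sketch; the helper name is mine, and the arithmetic is exact via fractions):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)

def posterior_heads(p_evidence_given_heads, p_evidence_given_tails):
    # Bayes' rule for P(heads | evidence)
    num = p_evidence_given_heads * p_heads
    return num / (num + p_evidence_given_tails * (1 - p_heads))

# Scenario 1: the *first* draw is 123456 (certain on heads, 1-in-a-million on tails)
print(posterior_heads(Fraction(1), Fraction(1, 1_000_000)))  # 1000000/1000001

# Scenario 2: 123456 is drawn at *some point* -- certain either way
print(posterior_heads(Fraction(1), Fraction(1)))             # 1/2
```

Only the first-draw reading carries information about the coin; the some-point reading updates nothing, which is the distinction being drawn here.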

> Or what about this version:

> same set-up, except that when we awaken Beauty on Monday we will wake her

> up by shouting, "HEY, BEAUTY, IT'S MONDAY!".

This *is* different.

PS: The event "Today is Monday" is perfectly valid.

Mar 16, 1999, 3:00:00 AM

to

In article <36EDD8D7...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:

}Matthew T. Russotto wrote:

}

}> P(tails | Monday) = 1/2

}

}Bzzz.

}p(tails | Monday) = 1/3

This means that if we tell Sleeping Beauty what day we woke her up,

on Monday she should say that the probability of the coin having

landed "tails" is 1/3. Surely that can't be the case!

Mar 16, 1999, 3:00:00 AM

to

In article <36EDECBB...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:

> Anyone who still doesn't believe it can imagine what would happen if

> they increased the number of interrogations from 2 to 1 zillion on

> tails (leaving 1 interrogation for a head). You agree to do the

> experiment once, and sure enough you awaken and they ask the question.

> Do you really think you can be almost positive that the coin didn't

> come up heads?

Yes. Of course. How could it be any other way? There are

a zillion and one equiprobable possible explanations for

why she was awakened. One is heads, and a zillion are tails.

P(heads) = 1/zillion.

-Ted

Mar 16, 1999, 3:00:00 AM

to

I wrote

> P(heads) = 1/zillion.

I meant

P(heads) = 1/(zillion+1)

of course. Sorry about that. (If only all the errors I made were

in the zillionth decimal place!)

-Ted

Mar 16, 1999, 3:00:00 AM

to

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:

>ka...@shore.net (David A Karr) wrote:

>

>> In fact I'm not sure I know what Jamie's question even means.

>> What's the significance of the "credence" that Sleeping Beauty

>> assigns to a proposition that we already know to be true (or false)?

>

>Hmmm.

>

>Well, I meant to be using the standard Bayesian sense of 'credence', which

>is generally cashed out as 'degree of belief'.

And my question is, what the heck difference does it make *what*

S.B.'s "degree of belief" is? Why should S.B. care what number she

says? Why should the experimenter care?

>If you prefer, you may (as Matt McLelland suggests) rephrase the question:

>What should she take the odds of Heads to be when we interview her? (I am

>talking about her rational state of belief, though -- which may or may not

>be logically related to her disposition to bet, I intend not to take any

>stand on what the relation is or isn't.)

I suggested placing bets because this actually puts S.B. in a situation

where giving an "incorrect" credence to a proposition could reasonably

be perceived as being disadvantageous to her. There are other ways to

do this, for example I'm sure you could recast the original problem

in such a way that Sleeping Beauty's life is in jeopardy and she has to

make a decision that will either increase or decrease her risk--and she

cannot "opt out of" this "bet." But in any case the essential thing

is to posit an answer to the question, "Who cares?"

I still don't know what you mean by "rational state of belief."

Assuming the number of angels that can dance simultaneously on the

head of a pin is finite, is that number closer to 200,000 or to 2?

What's a "rational state of belief" regarding that question?

Mar 16, 1999, 3:00:00 AM

to

Matt McLelland <mat...@flash.net> wrote:

>David A Karr wrote:

>> In the latter case she'll have two chances to cancel a losing bet

>> for every chance to cancel a winning bet, yet it still pays *not* to

>> cancel the bet; in that sense her "credence" is 1/2.

>[...]

> What is your credence now for the proposition that our coin landed Heads?

>

>Your objection is that this could be interpreted to mean "What *were* the odds

>that our coin landed heads when we flipped it." I think that the use of the

>word 'now' implies a meaning of "What are the odds that the coin came up

>heads given that we just woke you up."

I think you mischaracterized my argument. I very much intended to

ask, "What are the odds that the coin came up heads given that we just

woke you up." I'm not so much concerned with any decisions S.B. might

have made before she went to sleep; I'm asking, *now* *that* *she's*

*been* *awakened* and asked to make a decision, what should she decide?

I think it's obvious she should make this decision in light of any

information she has *now*, not just in light of any information she

had before the coin was tossed.

If you like, let's not have any bets placed before S.B. goes to sleep.

Instead, each time we wake her we'll ask, "Do you want your agent to

pay $p on Wednesday, knowing that you'll get back $1 if the coin came

up heads and $0 if it came up tails?" Moreover, S.B. knew we were going

to ask this.

Personally, if I were S.B. in this experiment I'd ask, "Do you mean my

agent will pay $p *in* *addition* to any other payments I might

authorize or have authorized this week? Or does the order I give now

supersede any previous orders I may have given?" And I would refuse

to answer the question that was asked me until I'd gotten a

satisfactory answer to these two.

Mar 16, 1999, 3:00:00 AM

to

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:

>But which set of possible worlds represents the pseudoevent (hm, well, to

>be less tendentious: the 'purported event') that today is Monday? Today is

>Monday, after all, in *every* possible world (I'm writing at 11:52 pm on

>Monday), or else in *no* possible world (you are most likely reading this

>on Tuesday or Wednesday).

Actually, I think in a sense there *is* a possible world in which

today is Monday (or "is not Monday," in case you happen to think it's

Monday when you read this). After all, someone at some point in time

declared, "It's time to start holding a Sabbath every seven days, and

the first one will be ... days from now." (Or substitute your own

favorite theory for the origin of our current seven-day week.) I see

no fundamental reason why that announcement couldn't possibly have

been delayed a day or two.

Robinson Crusoe faced this question. It was important to him to know

what day was Sunday, because he feared he might be punished if he

failed to make Sunday observances on the proper day. And as it turned

out, every week for many years he believed "today is Sunday" when in

fact the day was not Sunday.

But you can easily set up the Sleeping Beauty experiment to get around

this what-day-is-today question. On the day we flip the coin, we set

up an empty cup. On each successive day thereafter, early in the

morning we drop a small white ball into the cup. If after this act

there is one ball in the cup, or if there are two balls and the coin

shows tails, we wake S.B. and interview her; otherwise we let her

sleep, except that on the day when we drop the seventh ball in the cup

we wake S.B. and end the experiment. We also promise not to let the

coin be turned over from the moment it lands heads or tails until the

moment the experiment ends.

When S.B. is interviewed, then, the following two propositions seem

to me to be equally subject to whatever "degree of credence" she can

assign:

The coin is facing heads up.

There is exactly one ball in the cup.
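Karr's cup protocol can be sketched directly; the helper below just makes the interview-time states explicit (the function name is mine):

```python
def run_cup_protocol(coin):
    """One run of the ball-and-cup schedule; returns (coin, balls-in-cup)
    for each day on which S.B. is interviewed."""
    interviews = []
    for day in range(1, 8):
        balls = day                    # one ball dropped each morning
        if balls == 1 or (balls == 2 and coin == "tails"):
            interviews.append((coin, balls))
        if balls == 7:
            break                      # wake S.B. and end the experiment
    return interviews

print(run_cup_protocol("heads"))   # [('heads', 1)]
print(run_cup_protocol("tails"))   # [('tails', 1), ('tails', 2)]
```

At any interview, "there is exactly one ball in the cup" plays the role that "today is Monday" played before, with no reference to calendar days.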

Mar 16, 1999, 3:00:00 AM

to

(I hope you don't object to me posting into this dialogue.)

Instead of playing the stated game with S.B., suppose that the

consequence of Heads is to write the letter "A" on a slip of

white paper, whereas the consequence of Tails is to write "A"

on a slip of white paper and also on n-1 additional slips of

colored paper, each having a different color (n=2 corresponds

to the old scenario). Suppose that S.B. knows all about how

this procedure is performed, but, as far as the outcome is

concerned, S.B. is informed only of the letter written on one

piece of paper, but not whether it is a consequence of H or T.

(The letter, of course, is always "A", just as before the

information available to S.B. was only "I've been Awakened",

together with the game rules.)

It seems to me that S.B.'s "state of information" about H or T

is exactly the same in both of these games, with

"Monday"<->"white paper", "Tuesday"<->"colored paper",

(or if n>2, the different days correspond to different colors).

It has nothing essential to do with time, but has everything to

do with indistinguishability of outcomes like "A" when not

accompanied by any distinguishing feature, such as color or day.

In both of these games, S.B. learns nothing relevant about H or T,

so that pr(T|A)=pr(T)=1/2=pr(H)=pr(H|A), with obvious notation.

Arguments based on there being n cases stemming from T and only

one case from H must incorporate the fact that the probability

of any one of the n T-cases being involved in the outcome is also

proportionately smaller.
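The two readings of the evidence "I've been awakened" can be put side by side in a short Bayes sketch (hedged: the 1/n likelihood for a particular awakening is an assumption some posters defend, not something established here):

```python
from fractions import Fraction

def posterior_heads(p_evidence_given_tails, p_heads=Fraction(1, 2)):
    # Bayes' rule, with P(evidence | heads) = 1 in both readings
    return p_heads / (p_heads + p_evidence_given_tails * (1 - p_heads))

n = 2
# Reading 1: evidence = "some interview happened"; pr(.|T) = 1, so no update
print(posterior_heads(Fraction(1)))     # 1/2
# Reading 2: evidence = "this particular k-th interview"; pr(Ak|T) = 1/n
print(posterior_heads(Fraction(1, n)))  # 2/3
```

Everything turns on which reading the awakening evidence deserves; the arithmetic itself is uncontroversial.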

--

r e s (Spam-block=XX)

Jamie Dreier wrote in message ...

>Matt McLelland <mat...@flash.net> wrote:

>> Jamie Dreier wrote:

>>

>> > Back up a little and I think it can be made even *more* compelling.

>> > Before she is put to sleep, Beauty certainly thinks that the chance of

>> > Heads is 1/2. Now for the sake of argument, suppose that upon awakening

>> > she really does get some hard-to-state information, so that she

reasonably

>> > changes her credence in Heads to 1/3.

>>

>> There is nothing paradoxical about this. It isn't even very

>> complicated, and time really has nothing to do with it. Change the

>> problem so that we don't ever interrogate her if the coin comes up

>> heads, and suppose she knows this. Now imagine that you are her, and

>> that you get interrogated. You don't have reason to doubt that the

>> coin is still fair, but you still know that it came up tails with

>> 100% certainty. There isn't anything deeper than this involved in

>> this problem.

>

>I think there is.

>

>In your example, the fact that she is being interrogated does,

>uncontroversially, count as information for her. We can put that

>information in a tenseless way, so that she could in principle get it at

>some other time -- say, before she is put to sleep in the first place. At

>that moment, she will quite reasonably think, "Of course, if I am

>interrogated at all, that will mean that the coin came up tails. That is,

>pr(Tails | I will be interrogated) = 1."

>

>Then, when she is in fact interrogated, she conditionalizes as usual, and

voila.

>

>But in the original problem, there doesn't seem to be any way to state the

>relevant information (if it really is information) in a neutral, untensed

>way. We can't put it like this: "I have been or will be interrogated."

>Because she already knows that at the outset, so if pr(Heads | I have been

>or will be interrogated) = 1/3, then since she knows that the condition

>obtains, she can just conditionalize and conclude, pr(Heads) = 1/2. But

>that's not right.

>

>So as far as I can see, time really does have something to do with it.

>

Mar 16, 1999, 3:00:00 AM

to

r e s <XXr...@ix.netcom.com> wrote:

>Arguments based on there being n cases stemming from T and only

>one case from H must incorporate the fact that the probability

>of any one of the n T-cases being involved in the outcome is also

>proportionately smaller.

True, but you've posited an experiment in which S.B. will be

informed *only* *once* that an A is written on a piece of paper.

In Jamie's experiment, S.B. receives her information *twice*

in some cases, although by the second time she's forgotten that

she received the information before.

The two experiments are not isomorphic; at least, it's not obvious to

me that they're isomorphic.

Mar 16, 1999, 3:00:00 AM

to

ka...@shore.net (David A Karr) wrote:

> >> In fact I'm not sure I know what Jamie's question even means.

> >> What's the significance of the "credence" that Sleeping Beauty

> >> assigns to a proposition that we already know to be true (or false)?

> >

> >Hmmm.

> >

> >Well, I meant to be using the standard Bayesian sense of 'credence', which

> >is generally cashed out as 'degree of belief'.

>

> And my question is, what the heck difference does it make *what*

> S.B.'s "degree of belief" is? Why should S.B. care what number she

> says? Why should the experimenter care?

Well, why should anyone care about anything???

I guess I do care, personally, that my representation of the world be

rational. Why should I? I don't know. I know that I do care whether I use,

say, modus ponens correctly. Why should I?

> >If you prefer, you may (as Matt McLelland suggests) rephrase the question:

> >What should she take the odds of Heads to be when we interview her? (I am

> >talking about her rational state of belief, though -- which may or may not

> >be logically related to her disposition to bet, I intend not to take any

> >stand on what the relation is or isn't.)

>

> I suggested placing bets because this actually puts S.B. in a situation

> where giving an "incorrect" credence to a proposition could reasonably

> be perceived as being disadvantageous to her. There are other ways to

> do this, for example I'm sure you could recast the original problem

> in such a way that Sleeping Beauty's life is in jeopardy and she has to

> make a decision that will either increase or decrease her risk--and she

> cannot "opt out of" this "bet." But in any case the essential thing

> is to posit an answer to the question, "Who cares?"

If it's that essential, then let's just suppose that she really, really

wants to have rational credences.

> I still don't know what you mean by "rational state of belief."

> Assuming the number of angels that can dance simultaneously on the

> head of a pin is finite, is that number closer to 200,000 or to 2?

> What's a "rational state of belief" regarding that question?

Oh, hold on.

I'm certainly not saying that there is any particularly rational state

regarding that question. In general, my view is that rationality is a

feature of your beliefs collectively, not one by one. So I suppose that

what it's rational for you to think about angels on pins depends on what

else you believe.

I am resisting a particular characterization of the same question in terms

of bets for a reason, I'm not just refusing in order to be difficult.

Why don't I post a separate account of my reason. I think I will.

Mar 16, 1999, 3:00:00 AM

to

David Karr,

If we asked for Beauty's dispositions to bet, instead of asking for her

credence or what she should believe, I think we'd be asking a different

question.

Suppose it's like this. Suppose we tell her in advance that when(ever) she

is awakened, we will ask her to declare her fair odds that the coin comes

up heads. She must name a fair price for a ticket that pays, um, let's

see, $6 if the coin landed Heads, and nothing otherwise. We will sell her

the ticket for the named price. She is supposed to decide what is the

largest sum she will pay.

Now in this case, it seems pretty obvious that the price she should name

is $2. (Or $1.99, presumably she'd be indifferent to getting the ticket if

it were really sold for the 'fair price'. I'll ignore this hereafter.)

Suppose she says $3 instead -- this is the other obvious price to name.

But now she looks to be in some trouble. She knows that if the coin does

land Heads, she will buy a ticket one time, and she will win, so she will

net $3. On the other hand, if the coin lands Tails she will bet twice,

losing $3 each time, a net loss of $6. Since her prior for the coin

landing Heads is 1/2, this looks like a terrible plan. If she executed it

repeatedly in many runs of the game, she'd take a bath.

Instead she should offer to pay at most $2. Then she nets $4 if the coin

lands Heads, and loses $2 on each of two bets if the coin lands Tails.

Fair.
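The ticket arithmetic above can be checked in a few lines (a sketch; the function name is mine):

```python
def expected_net(price, payout=6.0, p_heads=0.5):
    """Expected net for one run of the game: one ticket bought on heads
    (and it pays off), two tickets bought on tails (both worthless)."""
    heads_net = payout - price
    tails_net = -2 * price
    return p_heads * heads_net + (1 - p_heads) * tails_net

print(expected_net(3.0))  # -1.5: naming $3 loses money on average
print(expected_net(2.0))  #  0.0: $2 is the break-even price
```
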

However, this does not seem to me to show that she should take the real

chance of Heads to be 1/3 at the moment she awakens. The problem is that

*the amount she has to bet in all depends on the outcome on which she is

betting*. In such circumstances, the odds you will take do not reflect

your view of the actual chances.

To see this (maybe it's obvious, humor me), consider the grossly unpopular

Variable Bet Casino. At the VBC they have a roulette table, and you bet

with plaid chips. The roulette wheel is an ordinary casino roulette wheel

(pretend there are no zero nor double-zero so that roulette actually uses

'fair bets'). But the rule is that the plaid chips are worth $1 if the

ball lands in a red space, and $1000 if it lands in a black space.

Now what are the fair odds for the bets on red and black? Not even odds,

that's for sure. The fair odds would pay 1000 to one for a bet on red, and

one to 1000 for a bet on black.

But the fact that these are the odds I deem fair clearly does *not* mean

that I think the chance that the ball will land in a red space is very

tiny. I think the chance is 1/2. Why do my fair odds not reflect my views

about the actual chances? Because the amount that I am betting is not

fixed, it depends on the outcome on which I am betting.
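The Variable Bet Casino arithmetic, as a quick sketch (function name and parameterization are mine):

```python
def fair_payout(chip_value_if_win, chip_value_if_lose, p_win=0.5):
    """Chips of winning-outcome value paid per chip staked, chosen so the
    bet breaks even when the chip's dollar value depends on the outcome."""
    # Break-even: p_win * payout * value_if_win = (1 - p_win) * value_if_lose
    return (1 - p_win) * chip_value_if_lose / (p_win * chip_value_if_win)

print(fair_payout(1, 1000))  # 1000.0 -- a bet on red pays 1000 to one
print(fair_payout(1000, 1))  # 0.001  -- a bet on black pays one to 1000
```

The 1000-to-one odds reflect the outcome-dependent stake, not a belief that red is wildly improbable.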

Same for Beauty.

So, while the question of what odds she would take is interesting (sort of

-- it seems to me to be pretty trivial), it doesn't settle the question of

what Beauty should believe.

Mar 16, 1999, 3:00:00 AM

to

It's just *because* the alternative game involves S.B. being

informed only once of the outcome, that it becomes isomorphic

to the original game. That's the very part that properly

corresponds to "forgetting" in the first game.

Please reconsider this, because I think it succeeds as a true

isomorphism.

--

r e s (Spam-block=XX)

David A Karr wrote in message <7pyH2.84$no1....@news.shore.net>...

Mar 17, 1999, 3:00:00 AM

to

In article <7con13$d...@sjx-ixn9.ix.netcom.com>,

}Consider the odds ratio, where Ak is the particular event of

}SB being awakened for the k-th interview (k=1):

}

}pr(H|A1)/pr(T|A1)

}=[pr(A1|H)*pr(H)] / [pr(A1|T)*Pr(T)]

}=pr(A1|H)/pr(A1|T), since we're given that pr(H)=pr(T).

}=1/pr(A1|T), since pr(A1|H)=1.

}

}So, what is pr(A1|T)?

}I assert pr(A1|T)=1/n (e.g. 1/2 for the original n=2)

}where n is the number of times SB is to be awakened if Tail

}occurs. This must be so because the rules guarantee that,

}given T, SB cannot know which of the A1,...,An obtains.

}Given T, SB can only know that some one of these Ak obtains.

}

}Therefore, pr(H|A1) / pr(T|A1) = pr(H|A1) / [1-pr(H|A1)] = n,

}and

}pr(H|A1)= n/(n+1),

}hence

}pr(H|A1)=2/3 if n=2.

}

}The same argument shows that if you believe that pr(H|A1)=1/2,

}then you must believe that pr(A1|T)=1.

My argument is with the assumption that P(H) = P(T). I've shown that a

contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and

P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or

by changing P(H) = 1/(n+1). The question is which change fits the

description of the experiment. On the surface, it seems like we've

flipped a fair coin, therefore P(H)=P(T)=1/2. But I claim that what

we've actually done is use that fair coin to produce a distribution

(and a less than random one) where P(H) = 1/(n+1). We've done this by

flipping the coin once and measuring once if it comes up heads, but

measuring n times if it comes up tails.
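This per-measurement claim is easy to simulate (a sketch; it again equates probability with long-run frequency per interview):

```python
import random

random.seed(2)
n = 2                   # interviews on tails
trials = 300_000
heads_interviews = 0
tails_interviews = 0

for _ in range(trials):
    if random.random() < 0.5:
        heads_interviews += 1    # heads: measured once
    else:
        tails_interviews += n    # tails: measured n times

frac_heads = heads_interviews / (heads_interviews + tails_interviews)
print(round(frac_heads, 2))      # close to 1/(n+1) = 1/3
```
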

Mar 17, 1999, 3:00:00 AM

to

In article <36EF452F...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:

I have no comment on your AIDS-patient analogy, since I don't see what

it's supposed to show. That some of the conclusions in this analysis

are counterintuitive? Well, maybe. I guess it depends on your

intuition. This whole daily-amnesia scenario is so crazy to begin

with that this sort of appeal to intuition doesn't really help that

much, IMHO.

Onwards.

>Correct! But those last events are not *disjoint*!!!

I couldn't care less. The assertion I was making had nothing to do

with disjointness. In the language of the original puzzle (with a

simple coin-flip and no zillions), I was asserting the following:

P(heads and today is monday) = P(tails and today is monday)

Note that this assertion makes no mention of disjointness.

Frankly, I thought it was an agonizingly obvious assertion to make

and was astonished to find people disagreeing with it! If you really

disagree with this assertion, then tell me -- which of those two

events is less likely? That is, which event would occur less

often in a large ensemble of repeated trials of the experiment?

(Or, if you don't agree that these last two questions are equivalent,

then what definition of probability are you using?)

> P(The coin came up heads and *this* is my first time up) = 1/2

>and

> P(The coin came up tails and *this* is my first time up) = 1/2*1/N

I agree with this, but I don't see why it's relevant to the problem.

>Maybe this example would be better. Suppose that we are going to do the

>same old sleeping beauty experiment only this time the coin is biased 100

>to 1 in favor of heads. Also, instead of waking her twice, we will wake

>her 10,000 times for tails. Your argument seems to be that if we agree

>to ask sleeping beauty every time what the coin came up, she will get it

>right only 1% of the time if she answers heads.

Right.

>How can this be true if

>the odds are only 1% that it will be tails? The simple answer is that

>her mistakes are magnified 10,000 times. Imagine now that she awakens to

>find some strange person has broken into the laboratory - something

>clearly not supposed to happen every time. They ask her at gunpoint,

>what was the result of the coin toss? She should clearly answer heads

>and has only a 1% chance of answering incorrectly.

Right. But I don't understand what this has to do with the situation

at hand. She clearly has different information in this case, so

naturally her assessment of the probabilities will be different.

-Ted

Mar 17, 1999, 3:00:00 AM

to

Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:

>> the other hand, if her score is not weighted by time, then any number

>> in the range [0,1], even a random number, is an equally rational credence

>> *under* *this* *set* *of* *values*. However, I don't think this kind

>> of reasoning is what you had in mind, since it doesn't give any good

>> reason to assign credences between 0 and 1, which seems to be what

>> you're looking for.

>

>Huh.

>No, those are certainly not what I had in mind!

I didn't think so, and I didn't bring these up just to be tendentious.

I'm doing this because I'm confused.

>As I said, I generally just take 'credence' and 'degree of belief' for

>granted, it's the way I'm used to thinking about probability. How

>confident should she be that the coin landed Heads, given what else she

>believes? Measure confidence on the [0,1] scale, where 1 is the confidence

>you place in an obvious tautology and 0 your confidence in an obvious

>contradiction, and .5 your confidence in the quarter I am now tossing

>landing Heads, and so on.

Now we're getting into yet another semantic muddle. I use the term

"confidence" to refer to a particular non-Bayesian measure of belief,

the one usually meant when someone says they used "95%

confidence intervals". If I analyze a statistical sample, I might

make a statement about a hypothesis with "confidence 0.95". This is

not at all the same as my saying that I estimate the hypothesis to be

true with *probability* 0.95.

Now this is interesting because I understand two measures of something

roughly corresponding to a degree or strength of belief, both measured

on the scale [0,1], yet not equivalent to each other nor even having

a well-defined mapping from one to the other. So maybe we should

say "probability" rather than "credence" since at least this gives me

a clue which of the two measures we're looking for.

>If she makes her judgment and then the experimenter tells her how the coin

>really landed, should she be more surprised if he tells her "Heads" than

>if he tells her "Tails"? Or equally surprised?

>

>Can't you just think about the probability of Heads, from Beauty's perspective?

I just don't know. Usually I like my probability estimates to fit into

a rational world view, and one of the ways I test their rationality is

to imagine that I (or someone else) could make some sort of bet on them.

(After all, that's the origin of this branch of mathematics, isn't it?)

In this case all the betting does is to point out difficulties with

other possible measures of goodness such as "how surprised" Beauty

should be. For example, if she were to assign probability 1/3 to

heads, then she'll be twice as surprised when the coin is revealed to

be heads as when it's revealed to be tails. But if the experimenter

really does make this revelation each time before he puts Beauty back

to sleep, then Beauty ends up being *exactly* as surprised by tails as

by heads over the course of the experiment (half as surprised each

time, but it happens twice as often). This minimizes her risk,

i.e. the variance in how surprised she'll be. I find that a

compelling argument for adopting this probability estimate, but not

compelling enough to make me give up the notion that the estimate

probably really should be 1/2 after all.

I'd be inclined to just pick a side (1/3 or 1/2) and stay there, but I

keep having this fear that the whole edifice of reasoning is built on

sand.
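The notion of "surprise" above is left informal; one standard way to formalize it is log-surprisal, -log(q), where q is the credence assigned to the outcome that actually occurs. That formalization is my choice, not necessarily the poster's. A sketch showing that, under it, the credence minimizing Beauty's expected total surprise per run of the experiment is exactly 1/3:

```python
import math

# If Beauty reports credence p for heads at every awakening and the coin
# is revealed to her each time, the expected total surprisal per
# experiment is:
#   1/2 * (-log p)          (heads: one reveal)
# + 1/2 * 2 * (-log(1-p))   (tails: two reveals)
def expected_surprisal(p):
    return 0.5 * (-math.log(p)) + 0.5 * (-2.0 * math.log(1.0 - p))

# Scan candidate credences; the minimizer sits at p = 1/3.
grid = [i / 1000 for i in range(1, 1000)]
best = min(grid, key=expected_surprisal)
print(best)   # ~ 0.333
```

Setting the derivative -1/(2p) + 1/(1-p) to zero gives p = 1/3, so the grid search just confirms the calculus.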

A few related questions:

Suppose that a few seconds after Beauty wakes, the experimenter tells her

what day it is. What should she assign as the probability that the

coin came up heads, given that she's just been told today is Monday?

r e s seems to think the answer is 2/3. This boggles me.

Suppose Beauty just woke up and has no other new information.

What's the probability (in her rational view) that today is Monday?

Mar 17, 1999, 3:00:00 AM3/17/99

to

bu...@pac2.berkeley.edu wrote:

> >Correct! But those last events are not *disjoint*!!!

>

> I couldn't care less. The assertion I was making had nothing to do

> with disjointness.

Oh really? Sometime in the past you wrote:

> Yes. Of course. How could it be any other way? There are

> a zillion and one equiprobable possible explanations for

> why she was awakened. One is heads, and a zillion are tails.

> P(heads) = 1/zillion.

Now, initially I objected on the grounds that the events were not

equiprobable, believing you to be talking about events of the form "The coin

flipped tails, it is day 2, and here I am". You retorted that they were

equiprobable and claimed the events you were talking about were of the form

"The coin flipped tails and I was awakened on day 2". I then objected that

these events are not disjoint. Why does it matter? Because the only way you

can conclude that N equiprobable events each have a probability of 1/N is if

they are disjoint!

> In the language of the original puzzle (with a

> simple coin-flip and no zillions), I was asserting the following:

>

> P(heads and today is monday) = P(tails and today is monday)

This isn't true!

P(heads and today is monday) = 1/2

P(tails and today is monday) = 1/N (N being the number of awakenings)

> Note that this assertion makes no mention of disjointness.

> Frankly, I thought it was an agonizingly obvious assertion to make

> and was astonished to find people disagreeing with it!

You shouldn't - it is wrong.

> If you really disagree with this assertion, then tell me -- which of those

> two

> events is less likely? That is, which event would occur less

> often in a large ensemble of repeated trials of the experiment?

> (Or, if you don't agree that these last two questions are equivalent,

> then what definition of probability are you using?)

>

> > P(The coin came up heads and *this* is my first time up) = 1/2

> >and

> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N

>

> I agree with this, but I don't see why it's relevant to the problem.

? You agree? How is this different from the events with "this" replaced by

"today"?

Mar 17, 1999, 3:00:00 AM3/17/99

to

Matthew T. Russotto wrote in message ...

[...]

>My argument is with the assumption that P(H) = P(T).


This isn't an assumption, but is part of the problem

statement, viz.,"we'll flip a (fair) coin".

We'll have little hope of communicating intelligently

about this problem unless we can at least agree that

this means, a priori, pr(H)=pr(T).

>I've shown that a

>contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and

>P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or

>by changing P(H) = 1/(n+1). The question is which change fits the

>description of the experiment. On the surface, it seems like we've

>flipped a fair coin, therefore P(H)=P(T)=1/2.

I would say that it's an explicit part of the problem that pr(H)=pr(T),

meaning the probabilities unconditioned by any information other than

the rules of the game. So, if, as you say below, you see a different

distribution arising for H/T, it would presumably involve some

pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it

appears that you've taken E=A1. (Although I don't agree with the actual

values you seem to have obtained for the unequal conditional

probabilities -- I showed in another posting that pr(H|A1)=2/3.)

>But I claim that what

>we've actually done is used that fair coin to produce a distribution

>(and a less than random one) where P(H) = 1/(n+1). We've done this by

>flipping the coin once and measuring once if it comes up heads, but

>measuring n times if it comes up tails.

--

r e s (Spam-block=XX)

Mar 17, 1999, 3:00:00 AM3/17/99

to

Okay, I'll try to put forth several different reasons why 1/2 is the

correct answer. This post is aimed more towards those who believe the

probability of heads is not 1/2.


First, there's the straightforward probability approach:

P(heads, Monday) = 1/2 * 1 + 1/2 * 0 = 1/2

P(tails, Monday) = 1/2 * 0 + 1/2 * 1/2 = 1/4

P(tails, Tuesday) = 1/2 * 0 + 1/2 * 1/2 = 1/4

I'm hoping you can see why this is reasonable.

Secondly, think of it this way: Sleeping Beauty knows that no matter

what, if it is Monday P(heads) = 1/2. This is simple to see; if we

only woke her up once in either case, P(heads) = 1/2. Now, she will only

wake up again on Tuesday if she woke up on Monday after a flip of tails.

Therefore, the probability of waking up Tuesday is the same as the

probability of (tails, Monday), which is 1/2. Another way to say it is:

the chances of waking up on a given one of the "tails days" (Monday or

Tuesday) are equiprobable. So P(tails, Monday) = P(tails, Tuesday). We

also know that the chance of heading down the "heads branch" of the

system is 1/2, while the chance of following the "tails branch" is also

1/2. If all three cases were equiprobable, the probability of waking

up is more than sure - it's 3/2! So P(tails, Monday) + P(tails, Tuesday)

= 1/2, and each equals 1/4.

Again, I'll use a bag for my analogy. If I flip heads, I will put a red

ball in the bag. If I flip tails, I will put two blue balls in the bag.

If I ask you what are your odds of pulling out a red ball after I flip,

would you say 1/3? I don't think so. It's obvious that you can only pull

a red ball if it landed on heads (a 1/2 chance), just as you can only

pull a blue ball if it landed on tails (also a 1/2 chance). Even if I

extended it to 1 million blue balls, the odds are still 1/2.

Basically, there is a lot of confusion over how to interpret the

problem. Many people are saying there are three cases (Monday & heads,

Monday & tails, Tuesday & tails), and believe that one of them is being

picked at random. If this were true, it definitely would be 1/3, but

it's not true. There are essentially only two cases: Monday & heads, or

Monday & tails. These are the only possibilities. If it's tails, then we

just do a lot of extra stuff. If it's heads, then we stop.

Some argue with the betting situation, where SB places a bet on whether

it's heads or tails. This is a different question from the original.

Here, people are taking into account the odds (as in expected returns),

instead of the probability (if it's heads or tails). If I go back to the

bag analogy, it's like saying: "For every ball in the bag, whether one

or two, we will bet on what it is." Obviously, you would bet on blue

because it has better returns. It's equivalent to betting on the coin,

and having 1:1 returns for heads, while having 2:1 returns for tails.

On the other hand, SB is not even deciding if she wants to bet tails or

heads, she's being asked for the probability of it being heads. If it's

Monday, it's a 1/2 chance; if it's Tuesday (or any later day for the

"large n" case), it's still a 1/2 chance that heads came up. She's right

no matter what day it is if she says 1/2. But if we decide instead to

guess "heads or tails" when she wakes up, the question is transformed.

Now she will be right once if it's heads, and wrong numerous times if

it's tails.

That's my two cents. And not surprisingly, one's on heads and one's on

tails =)

p.s. I'm a little frustrated now because I had to break my "e" rule...

well, it was fun trying!

Mar 18, 1999, 3:00:00 AM3/18/99

to

David A Karr wrote:

> > I thought we had already agreed that the phrase involving

> >"credence" was to mean "What probability should SB assign to the

> >event 'the coin flipped heads' "... ? Or has this discussion

> >progressed into a debate on the philosophy of probability?

>

> I was rather under the impression that the all the other discussion

> was simply begging that question. I could be wrong.

If I understand your position it is that we face a dilemma if our definition of

probability involves frequency of occurrence. After all, if we run the

experiment 200 times, then we will awaken 300 times and the coin will only be

heads 100 of those times - thus if we pick some random awakening there is a 1/3

chance that the associated coin flipped heads. All true so far. The problem

is that in reality, each awakening isn't equally likely, and so arguments based

on picking a *random* awakening don't tell us what the probability will be in an

awakening decided by the experiment. I don't think that the usual models

for probability are shaken by this problem.

You previously brought up the good point that we should concern ourselves with

how we should simulate this if we were to do so. We can follow the example of

how we measure the bias of a coin. We devise a trial whose outcome will be one

of two events (head or tails) and then repeat the experiment a large number of

times and compute the frequencies. So, now, what should constitute a trial in

our present case? We flip a coin, and if it comes up heads we increment the

count on the event "The coin was heads when I awakened". What if it comes up

tails? Do we increment both "The coin was tails when I was awakened on Monday"

and the "The coin was tails when I was awakened on Tuesday?" No. That would

mean that a single trial had been used to count two *disjoint* events. Instead,

if the coin comes up tails, we must *pick* *one* day at random and increment its

event.
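The two counting conventions in play can be compared directly in a quick Monte Carlo sketch (the variable names are mine). Convention A counts every awakening that occurs; convention B, as proposed above, credits only one randomly chosen tails-day per trial:

```python
import random

random.seed(0)
N = 100_000

# Convention A: count every awakening in every experiment.
# Convention B: one count per experiment; on tails, credit a single
# randomly chosen day.
count_a = {"H-Mon": 0, "T-Mon": 0, "T-Tue": 0}
count_b = {"H-Mon": 0, "T-Mon": 0, "T-Tue": 0}

for _ in range(N):
    if random.random() < 0.5:              # heads: awakened Monday only
        count_a["H-Mon"] += 1
        count_b["H-Mon"] += 1
    else:                                  # tails: awakened Monday and Tuesday
        count_a["T-Mon"] += 1
        count_a["T-Tue"] += 1
        count_b[random.choice(["T-Mon", "T-Tue"])] += 1

freq_a = {k: v / sum(count_a.values()) for k, v in count_a.items()}
freq_b = {k: v / sum(count_b.values()) for k, v in count_b.items()}
print(freq_a)   # each close to 1/3
print(freq_b)   # close to 1/2, 1/4, 1/4
```

So the simulation reproduces both answers in the thread; the disagreement is over which convention matches the question Beauty is asked, not over the arithmetic.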

I have another point that relates to your conversation with r e s, but I think

it is interesting enough not to be buried at the bottom of a long message.

Mar 18, 1999, 3:00:00 AM3/18/99

to

I thought of a more natural situation in which this problem arises - cloning!

You are at the lab to be cloned but are uneasy about going through with it. The

technician is annoyed by your indecision and tells you that he will decide for you.

You insist that you don't want to know his decision for your own peace of mind, and

so he agrees that after you are put to sleep for the procedure he will flip a coin

and only clone you if it comes up heads. When you awaken in a recovery room, it

dawns on you that you have no idea whether you are yourself or a clone!

1. What are the odds that you were cloned?

2. What are the odds that you are the clone?

Mar 18, 1999, 3:00:00 AM3/18/99

to

In article <36F04A90...@flash.net>,

This is more natural?

Given that the coin is fair:

The odds that you were cloned are 1/2. The odds that you are the clone

are 1/3.

Mar 18, 1999, 3:00:00 AM3/18/99

to

When two incomplete observers have different amounts of information, their

computations of probability can very well differ and still be correct.


Before the experiment, both Beauty and we believe that the probability

of the coin coming up heads will be 50%. Both she and we believe that

there will be two occasions for an event to occur. Everyone knows that

for an observer who will not know how the coin landed or which day it is,

the following events will appear equally probable:

(a) Heads and Monday and we will awaken her.

(b) Heads and Tuesday and we will not awaken her.

(c) Tails and Monday and we will awaken her.

(d) Tails and Tuesday and we will awaken her.

When Beauty is awakened, she will know that she is awakened, but she

will not know how the coin landed and she will not know what day it is.

She will have more information than a completely ignorant observer.

She will know that she is awake. Event (b) has been eliminated for her.

The remaining events will appear equally probable for her. P(Tails)

will be 2/3. Also P(Tuesday) will be 1/3.

Someone has posted a scenario in which Beauty places a bet each time

she is awake. In such a scenario, if she does not base her bet on a

computation that P(Tails) is 2/3, she will suffer. That poster is right.

However, when Beauty is awakened, we will know that she is awakened,

we will know how the coin landed, and we will know what day it is.

We will have more information than a completely ignorant observer.

Event (b) is always eliminated for us, but also two other events will

be eliminated for us. At the time an event occurs, none of the events

will have probability 2/3 for us. One event will have probability 1

for us. Obviously we can't use this knowledge in placing a bet now

since we don't have the knowledge yet, but we sure will have it on

Monday and Tuesday. When we have already seen the coin land, if we

place a bet based on a computation that P(Tails) is 1/2 or that

P(Tails) is 2/3, we will suffer.

Here is another example. When Monty Hall opens a door, he already

knows if the contestant's first choice was right. The contestant

only gains enough information to compute that switching to the

remaining door has a 2/3 chance of winning while staying put retains

its old 1/3 chance. But Monty, and any other fully informed observer,

knows which door wins with probability 1.

So I think there is no longer any reason for confusion about whether

Beauty's computation of P(Tails) should be 2/3.

There still seems to be a paradox though. Someone else posted a

scenario in which Beauty places a bet before the experiment begins,

and then each time she is awakened, she gets a chance to cancel the

bet. She is given an advantage originally, paying $0.45 for a return

of $1 if the coin lands heads and a return of $0 if the coin lands

tails. Before the experiment begins, she and we believe P(heads) is

0.5 so it looks profitable to place a bet. When she is awakened,

she is allowed to cancel the bet. With her new information, she will

want to cancel. It is not really paradoxical that she might want to

cancel; it is paradoxical that she can figure out that she will always

want to cancel. This is the part that I would really like to resolve.

When Beauty is awakened, there is a 1/3 probability that she will

cancel a winning bet, a 1/3 probability that she will cancel a losing

bet, and a 1/3 probability that she will pseudo-cancel an already

canceled bet. We will know which case it is, but she will not.

But that does not resolve the paradox. If Beauty wants to be as

pedantic as I do, then the scenario will simply have its wording

made more accurate: When Beauty is awake she will have an option

to cancel or pseudo-cancel the bet, and she will not know which

operation actually takes place when she makes that decision, though

we will know. Since she will still always make that decision, the

paradox is still there.

This beats Newcomb's pseudo-paradox, that's for sure. In Newcomb's

problem the answer depends on what the premises really are. If the

premises are that the predictor is really perfect then the contestant

knows that it is most profitable to leave some money behind on the

table. If the premises are that causality exists then the contestant

takes both boxes. If the premises are that the predictor is really

perfect *and* causality exists then Bertrand Russell is the pope.

But Beauty has no inconsistent premises, at least not that I can see.

--

<< If this were the company's opinion, I would not be allowed to post it. >>

"I paid money for this car, I pay taxes for vehicle registration and a driver's

license, so I can drive in any lane I want, and no innocent victim gets to call

the cops just 'cause the lane's not goin' the same direction as me" - J Spammer

Mar 18, 1999, 3:00:00 AM3/18/99

to

In article <7cn86c$sjv$1...@agate.berkeley.edu>, bu...@pac2.berkeley.edu writes:

>In article <36EEF61B...@flash.net>,

>Matt McLelland <mat...@flash.net> wrote:

>>They aren't equiprobable. Think about it.

>

>Of course they are. Think about it. :-)

>

>Here's the argument one more time. Repeat the experiment N times,

>for large N. Heads will come up N/2 times; tails will come up

>N/2 times. So the event

> The coin came up heads and I was awakened for the first time

>and the event

> The coin came up tails and I was awakened for the kth time

>(for some fixed k between 1 and a zillion) will both occur N/2 times.

>If they occur equally often in a large number of trials, then BY

>DEFINITION they're equiprobable.

>

>That's phrased in frequentist language,


Of course they aren't. Think about it. :-)

Repeat the experiment for N AWAKENINGS, for large N. Heads will have

occurred N/3 times; tails will also have occurred N/3 times. But N/3

awakenings will have been preceded by heads, and 2N/3 awakenings will

have been preceded by tails. So the event

The coin came up heads and I was awakened

and the event

The coin came up tails and I was awakened


of times. If they occur differently often in a large number of trials,

then BY DEFINITION they're differently probable.

That's phrased in frequentist language, too. And from the amount of

information Beauty has when she's awake, it's right.

Mar 18, 1999, 3:00:00 AM3/18/99

to

Matthew T. Russotto wrote:

> This is more natural?

Perhaps more natural wasn't the correct way to put it - less contrived might have been

better.

> Given that the coin is fair:

> The odds that you were cloned is 1/2. The odds that you are the clone

> is 1/3rd.

Correct! Now we just have to agree that this is isomorphic to the original problem from

Beauty's perspective when she wakes up and we are done.

Mar 18, 1999, 3:00:00 AM3/18/99

to

Norman Diamond wrote:

> Of course they aren't. Think about it. :-)

>

> Repeat the experiment for N AWAKENINGS, for large N. Heads will have

> occured N/3 times; tails will also have occured N/3 times. But N/3

> awakenings will have been preceded by heads, and 2N/3 awakenings will

> have been preceded by tails. So the event

> The coin came up heads and I was awakened

> and the event

> The coin came up tails and I was awakened

> (for some fixed k between 1 and a zillion) will occur different numbers

> of times. If they occur differently often in a large number of trials,

> then BY DEFINITION they're differently probable.

I would like to point out that this is not what I was saying - these two events

*are* equiprobable. I think you actually agree with the guy you are arguing with (Ted -

bu...@pac2.berkeley.edu). You are counting every awakening that occurs in the

experiment - when what you should be doing is counting a particular random

awakening per experiment. (I posted more on this a post or two ago)

Mar 18, 1999, 3:00:00 AM3/18/99

to

Norman Diamond wrote:

> Before the experiment, both Beauty and we believe that the probability

> of the coin coming up heads will be 50%. Both she and we believe that

> there will be two occasions for an event to occur. Everyone knows that

> for an observer who will not know how the coin landed or which day it is,

> the following events will appear equally probable:

> (a) Heads and Monday and we will awaken her.

> (b) Heads and Tuesday and we will not awaken her.

> (c) Tails and Monday and we will awaken her.

> (d) Tails and Tuesday and we will awaken her.

I would like to add some more "equally probable" events to your list:

(e) Heads and Wednesday and we will not awaken her.

(f) Heads and Thursday and we will not awaken her.

...

You see, you can't just claim that all of these things are equally probable. What

is the a priori probability that today is Monday? Now we could, by construction,

treat 'Monday' and 'Tuesday' as equally likely values of the random variable

'Today'. For example, an observer to this experiment picks a random day to

visit (Monday or Tuesday) with equal probability and independently of the coin toss

in the experiment. For him, the events you listed really are equiprobable. If

this observer learned that Beauty was awake, he could eliminate choice (b) and

proceed to draw all of your conclusions. Unfortunately, Beauty doesn't pick a day

to visit randomly and independently of the coin toss. If the coin came up heads

she *is* going to visit on Monday; Tuesday just isn't a possibility anymore.

Mar 18, 1999, 3:00:00 AM3/18/99

to

In article <7cprb1$e...@dfw-ixnews5.ix.netcom.com>,

}Matthew T. Russotto wrote in message ...

}[...]

}>My argument is with the assumption that P(H) = P(T).

}

}This isn't an assumption, but is part of the problem

}statement, viz.,"we'll flip a (fair) coin".

}

}We'll have little hope of communicating intelligently

}about this problem unless we can at least agree that

}this means, a priori, pr(H)=pr(T).


I agree that the toss is fair. I don't agree that the "random"

variable derived from the toss is fair. You're measuring the value of

the toss twice in the case that it is tails, and only once if it is

heads.

Suppose I flip a fair coin many times. Each time I flip it, I write

"H" if it is heads, and "TT" if it is tails. If I pick a random

letter from the page, what is the probability that it is a "T"?
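The letter-picking question above is easy to simulate; a short sketch (many fair flips, then the fraction of "T" letters on the resulting page):

```python
import random

random.seed(1)
# Each fair flip writes "H" for heads or "TT" for tails.
page = "".join("H" if random.random() < 0.5 else "TT"
               for _ in range(100_000))
# Fraction of letters on the page that are "T": tends to 2/3, because a
# tails flip contributes two letters while a heads flip contributes one.
frac_t = page.count("T") / len(page)
print(frac_t)   # close to 2/3
```

The flips themselves remain fair; it is the letter-weighted sampling that shifts the answer to 2/3.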

}I would say that it's an explicit part of the problem that pr(H)=pr(T),

}meaning the probabilities unconditioned by any information other than

}the rules of the game. So, if, as you say below, you see a different

}distribution arising for H/T, it would presumably involve some

}pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it

}appears that you've taken E=A1. (Although I don't agree with the actual

}values you seem to have obtained for the unequal conditional

}probabilities -- I showed in another posting that pr(H|A1)=2/3.)

Yes, I misstated that -- I meant that you could resolve the contradiction

by stating P(T|A1)=1/(n+1), not P(H|A1). But that doesn't fit the

real situation.

Mar 18, 1999, 3:00:00 AM3/18/99

to

Matthew T. Russotto wrote:

> Suppose I flip a fair coin many times. Each time I flip it, I write

> "H" if it is heads, and "TT" if it is tails. If I pick a random

> letter from the page, what is the probability that it is a "T"?

Now try this one:

You flip a coin *once* and write "H" if it is heads and "TT" if it is tails.

If you pick a random letter from the page, what is the probability that it is

a "T"?

After all, Beauty is only going to participate in this experiment once.

Mar 18, 1999, 3:00:00 AM3/18/99

to

kIdMiGaRu wrote:

> Secondly, think of it this way: Sleeping Beauty knows that no matter

> what, if it is Monday P(heads) = 1/2.

This isn't true. P(heads | Monday) = 2/3. That is, if she knows she got

woken up on Monday the odds are 2/3 that the coin was heads. I think I

agree with all of the other stuff you said, so maybe I just misunderstand

what you mean by this statement.

Mar 18, 1999, 3:00:00 AM3/18/99

to

Matt McLelland wrote:

Oops! No, this is incorrect. The correct probability for this question is 1/4. The

question I meant to ask was "If you can tell that you are not the clone, what are the odds

that you were cloned?"

Mar 18, 1999, 3:00:00 AM3/18/99

to

Nobody tell the "Horrible Question" guy that my answer is bogus....

Mar 18, 1999, 3:00:00 AM3/18/99

to

I don't want to be the one person in the rec.puzzles universe without

a comment on this. Perhaps the following (very similar) situation will

clarify matters.


Suppose that the situation is as described, but we explicitly say that,

given a head outcome, we let SB sleep on Tuesday. Introduce another

character, Bob, who never keeps track of what day it is but knows all

the details of the SB experiment.

We now tell Bob that Beauty is awake, and ask Bob what he thinks

the probability that the coin came up heads is. He reasons:

P(flip was heads | she's awake)

      P(she's awake | flip was heads) * P(heads)
    = ------------------------------------------
                   P(she's awake)

          1/2 * 1/2
    = ---------------  =  1/3
      1/2*1/2 + 1/2*1
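Bob's update can be checked numerically. A sketch, assuming (as the calculation does) that Bob models the unknown day as a fair choice between Monday and Tuesday, independent of the flip:

```python
import random

random.seed(2)
awake = heads_and_awake = 0
for _ in range(200_000):
    heads = random.random() < 0.5
    day = random.choice(["Mon", "Tue"])   # Bob's ignorance of the day
    # Heads: Beauty is awake on Monday only; tails: on both days.
    if day == "Mon" or not heads:
        awake += 1
        heads_and_awake += heads
p_heads_given_awake = heads_and_awake / awake
print(p_heads_given_awake)   # close to 1/3
```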

Is there a good argument that SB's calculation should be different

from Bob's? Does the explicit statement that we let her sleep on

Tuesday given a head outcome change the problem?

Mike

In article <pl436000-150...@bootp-17.college.brown.edu>,

pl43...@brownvmDOTbrown.edu (Jamie Dreier) wrote:

>

> We plan to put Beauty to sleep by chemical means, and then we'll flip a

> (fair) coin. If the coin lands Heads, we will awaken Beauty on Monday

> afternoon and interview her. If it lands Tails, we will awaken her Monday

> afternoon, interview her, put her back to sleep, and then awaken her again

> on Tuesday afternoon and interview her again.

>

> The (each?) interview is to consist of the one question: what is your

> credence now for the proposition that our coin landed Heads?

>

> When awakened (and during the interview) Beauty will not be able to tell

> which day it is, nor will she remember whether she has been awakened

> before.

>

> She knows the above details of our experiment.

>

> What credence should she state in answer to our question?

>

> -Jamie

>

> p.s. Don't worry, we will awaken Beauty afterward and she'll suffer no ill

> effects.

>

> p.p.s. This puzzle/problem is, as far as I know, due to a graduate student

> at MIT. Unfortunately I don't know his name (I do know it's a man). The

> problem apparently arose out of some consideration of the Case of the

> Absentminded Driver.

>

> p.p.p.s. Once again, I have no very confident 'solution' of my own; I will

> eventually post the author's solution, but I am not entirely happy with

> that one either.

>

> --

> SpamGard: For real return address replace "DOT" with "."

>

-----------== Posted via Deja News, The Discussion Network ==----------

http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

Mar 18, 1999, 3:00:00 AM3/18/99

to

An argument for 1/3 (which I am convinced is the correct answer).

1. Change the experiment so that if the coin comes up heads, she is

still woken once, but it is decided at random (e.g. by another

coin toss) whether to wake her on Monday or Tuesday (each with

probability 1/2). This surely can't change the answer.

2. Now consider a variant where she is put to sleep for only one day,

and woken at most once (so no need to make her forget); heads

means wake her with probability 1/2 (e.g. based on another coin

toss); tails means do wake her. If she is woken, her reckoning of

the probability that the coin came up heads is 1/3. (Agreed?)

(I'm assuming that the final awakening at the end of the

experiment cannot be confused with the awakenings we're interested

in.)

3. In the original experiment as modified in (1), allow her to ask

what day it is. If the answer is Monday, she is in an equivalent

position to that in (2) (since she knows that the plan implied

that she would be woken on Monday with probability 1/2 if the coin

came up heads, and with probability 1 if the coin came up tails);

therefore her reckoning of the probability is 1/3. Similarly, if

the answer is Tuesday then her reckoning of the probability is

1/3. Since it makes no difference to her reckoning what day it

is, her reckoning is 1/3 before she asks the question.

If she decides not to ask after all, then her reckoning is still

1/3 (since she has gained no new information by deciding not to

ask).

Merely knowing that she is allowed to ask what day it is can't

make any difference to her reckoning of the probability that the

coin came up heads, so her reckoning of the probability in (1) is

1/3. Hence her reckoning in the original question is also 1/3.
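Step (2) above can be checked by simulation; a sketch under the stated rules (heads: wake her with probability 1/2; tails: always wake her), estimating P(heads | woken):

```python
import random

random.seed(3)
woken = heads_when_woken = 0
for _ in range(200_000):
    heads = random.random() < 0.5
    # Heads: wake her with probability 1/2; tails: always wake her.
    if (not heads) or random.random() < 0.5:
        woken += 1
        heads_when_woken += heads
print(heads_when_woken / woken)   # close to 1/3
```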

--

John Rickard <John.R...@virata.com>

Mar 18, 1999, 3:00:00 AM3/18/99

to

In article <36F09211...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:

}kIdMiGaRu wrote:

}

}> Secondly, think of it this way: Sleeping Beauty knows that no matter

}> what, if it is Monday P(heads) = 1/2.

}

}This isn't true. P(heads | Monday) = 2/3. That is, if she knows she got

}woken up on Monday the odds are 2/3 that the coin was heads.


That's not the case. It would be the case if she was woken up on

Monday if the coin was heads, and Monday or Tuesday randomly if the

coin was tails. But in fact she is woken BOTH Monday and Tuesday if

the coin was tails.
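Tallying frequencies among awakenings under the actual protocol bears this out (a simulation sketch of mine, not from the thread): on the awakening-frequency reading, heads accounts for 1/3 of all awakenings but 1/2 of the Monday awakenings, so P(heads | Monday) comes out 1/2 rather than 2/3.

```python
import random

def awakening_frequencies(runs=200_000, seed=2):
    # Original protocol: heads -> one awakening (Monday);
    # tails -> two awakenings (Monday and Tuesday).
    # Record every awakening as a (coin, day) pair, then report
    # the frequency of heads among all awakenings and among
    # Monday awakenings only.
    rng = random.Random(seed)
    awakenings = []
    for _ in range(runs):
        if rng.random() < 0.5:
            awakenings.append(("H", "Mon"))
        else:
            awakenings.append(("T", "Mon"))
            awakenings.append(("T", "Tue"))
    p_heads = sum(1 for c, _ in awakenings if c == "H") / len(awakenings)
    mondays = [c for c, d in awakenings if d == "Mon"]
    p_heads_given_mon = mondays.count("H") / len(mondays)
    return p_heads, p_heads_given_mon

print(awakening_frequencies())  # roughly (1/3, 1/2)
```

Every run contributes exactly one Monday awakening, half of them after heads, which is why the conditional frequency is 1/2.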

Mar 18, 1999, 3:00:00 AM

ka...@shore.net (David A Karr)

> Now we're getting into yet another semantic muddle. I use the term

> "confidence" to refer to a particular non-Bayesian measure of belief,

> the one usually meant when someone says they used "95%

> confidence intervals". If I analyze a statistical sample, I might

> make a statement about a hypothesis with "confidence 0.95". This is

> not at all the same as my saying that I estimate the hypothesis to be

> true with *probability* 0.95.

Hm, right.

Remind us what it does mean. ;-)

Give an example, preferably involving balls in urns. ;-)

> Now this is interesting because I understand two measures of something

> roughly corresponding to a degree or strength of belief, both measured

> on the scale [0,1], yet not equivalent to each other nor even having

> a well-defined mapping from one to the other. So maybe we should

> say "probability" rather than "credence" since at least this gives me

> a clue which of the two measures we're looking for.

'Probability' is fine.

As you know, I use these epistemic words, 'confidence', 'credence',

because that's my preferred interpretation of ordinary probability

statements.

I did try to explain in terms independent of any explicitly Bayesian

dogma. 'Confidence' is the converse of 'surprisingness'. When your

confidence (in my sense) for a certain event is high, you will not be very

surprised if and when it happens (when you learn that it has happened).

When your confidence is very low, you will be very surprised if it

happens.

At the limits, your confidence in an obvious tautology is 1, since the

surprisingness of the event, "Either it rains on Sunday or it doesn't", is

nil. Your confidence in a contradiction is 0, since the contradiction's

actually happening is so surprising as to be literally unimaginable.

At the midpoint, the surprisingness of the coin's landing Heads is exactly

the same as the surprisingness of its landing Tails; thus your confidence in

the two is also the same. So that's 1/2.

We can think of all confidences as limits of sequences of compounded

equally-surprising events, a la Ramsey.

> Usually I like my probability estimates to fit into

> a rational world view, and one of the ways I test their rationality is

> to imagine that I (or someone else) could make some sort of bet on them.

> (After all, that's the origin of this branch of mathematics, isn't it?)

Well, let's see.

Ian Hacking argues that the modern conception of probability arose by the

unification of two different concepts: the idea of relative long term

frequencies of repeatable events, and the idea of a degree of belief (as

in the Pyrrhonian skeptics' views). He thinks that these got unified by

actuaries in (as I recall) Holland and Flanders, who needed a science of

probability to run their insurance businesses. (So I guess Hacking is

vaguely a Marxist: concept formation driven by economic innovation.)

But you presumably mean something later, like Ramsey's formulation of

decision theory. That formulation is heavily dependent on an agent's

dispositions to bet. These dispositions reveal the agent's credences.

>

> In this case all the betting does is to point out difficulties with

> other possible measures of goodness such as "how suprised" Beauty

> should be. For example, if she were to assign probability 1/3 to

> heads, then she'll be twice as surprised when the coin is revealed to

> be heads as when it's revealed to be tails. But if the experimenter

> really does make this revelation each time before he puts Beauty back

> to sleep, then Beauty ends up being *exactly* as suprised by tails as

> by heads over the course of the experiment (half as surprised each

> time, but it happens twice as often). This minimizes her risk,

> i.e. the variance in how surprised she'll be. I find that a

> compelling argument for adopting this probability estimate, but not

> compelling enough to make me give up the notion that the estimate

> probably really should be 1/2 after all.

Ahhhh.

The 'surprisingness' point really is supposed to be a little different.

The probability you report is supposed to be an accurate measure of how

surprised you will actually be (higher prob => less surprised).
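One way to make that surprise bookkeeping concrete (my sketch; the thread doesn't commit to a particular measure) is to score each revealed outcome by its log-surprisal, -ln(credence). Per run, the credence that minimizes expected total surprisal is exactly 1/3:

```python
import math

def expected_surprisal(p):
    # Surprisal of an outcome to which you assigned credence q is -ln(q).
    # Heads run: one interview, surprisal -ln(p) when heads is revealed.
    # Tails run: two interviews, surprisal -ln(1-p) each time.
    return 0.5 * (-math.log(p)) + 0.5 * (2 * -math.log(1 - p))

# Grid search over credences: the minimizer sits at p = 1/3.
best = min((k / 1000 for k in range(1, 1000)), key=expected_surprisal)
print(best)  # → 0.333
```

This matches Karr's observation: at p = 1/3 the total surprisal accrued over a tails run (twice -ln(2/3)) balances the surprisal of a heads run (-ln(1/3)) in expectation.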

> A few related questions:

>

> Suppose a few seconds after Beauty wakes, the experimenter tells her

> what day it is? What should she assign as the probability that the

> coin came up heads, given that she's just been told today is Monday?

> r e s seems to think the answer is 2/3. This boggles me.

Me too.

That is the worst problem, I think, with my own gut feeling that she

should think (before hearing what day it is, but upon awakening) that the

probability is 1/2. If it is 1/2, then as far as I can tell she *has* to

change to 2/3 when she discovers that it is Monday.

> Suppose Beauty just woke up and has no other new information.

> What's the probability (in her rational view) that today is Monday?

This one does not bother me as much, directly.

The strong intuition is that she should think that Monday is *more likely*

than Tuesday. But both sides say this. The difference is that the Halfers

(like me) say that the chance that it's Monday is 3/4, while the Thirders

say that the chance that it's Monday is 2/3.

(Or have I messed that up?)
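For the record, those two numbers check out; a sketch of each camp's arithmetic (my addition, not from the post):

```latex
% Halfer: P(H) = 1/2 on waking; given tails, Monday and Tuesday are equally likely.
P(\text{Mon}) = P(H)\cdot 1 + P(T)\cdot\tfrac{1}{2}
             = \tfrac{1}{2} + \tfrac{1}{4} = \tfrac{3}{4}
% Thirder: the awakenings (H,Mon), (T,Mon), (T,Tue) are equiprobable, so
P(\text{Mon}) = P(H,\text{Mon}) + P(T,\text{Mon})
             = \tfrac{1}{3} + \tfrac{1}{3} = \tfrac{2}{3}
```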

-Jamie

Mar 18, 1999, 3:00:00 AM

In article <36F027AA...@flash.net>,

Matt McLelland <mat...@flash.net> wrote:

>bu...@pac2.berkeley.edu wrote:

>> P(heads and today is monday) = P(tails and today is monday)

>

>This isn't true!

I see that I wasn't clear here. Let me specify what I meant by

this. By P(X) I mean simply the probability that event X will

occur during one complete run of the experiment. By that

definition, it is clear (I hope) that both of the above probabilities

are 1/2 (and hence they're equal).

You apparently mean something else (or rather, thought I meant

something else) by P(X), although, to be honest, I can't think of any

meaning one could attach to it that would make the statement below

true.

>P(heads and today is monday) = 1/2

>P(tails and today is monday) = 1/N (N being the number of awakenings)

>> > P(The coin came up heads and *this* is my first time up) = 1/2

>> >and

>> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N

>>

>> I agree with this, but I don't see why it's relevant to the problem.

>

>? You agree? How is this different from the events with "this" replaced by

>"today"?

Hmm. My brain seems to have turned off when I wrote that.

I do not agree with the above statement, and I have no idea

why I wrote that I did. Sorry!

-Ted

Mar 18, 1999, 3:00:00 AM

In article <7cpnl7$tvh$1...@nntpd.lkg.dec.com>,

Norman Diamond <dia...@tbj.dec.com> wrote:

>In article <7cn86c$sjv$1...@agate.berkeley.edu>, bu...@pac2.berkeley.edu writes:

>>In article <36EEF61B...@flash.net>,

>>Matt McLelland <mat...@flash.net> wrote:

>>>They aren't equiprobable. Think about it.

>>

>>Of course they are. Think about it. :-)

[...]

>Of course they aren't. Think about it. :-)

>

>Repeat the experiment for N AWAKENINGS, for large N. Heads will have

>occurred N/3 times; tails will also have occurred N/3 times. But N/3

>awakenings will have been preceded by heads, and 2N/3 awakenings will

>have been preceded by tails. So the event

> The coin came up heads and I was awakened

>and the event

> The coin came up tails and I was awakened

>(for some fixed k between 1 and a zillion) will occur different numbers

>of times.


I completely agree with this, and I don't think it contradicts

anything I said. Those two events aren't the ones I was talking about

when I said "these events are equiprobable." I was talking about the

events

"the coin came up heads and an awakening occurred on Monday,"

"the coin came up tails and an awakening occurred on Monday,"

"the coin came up tails and an awakening occurred on Tuesday."

Those events all occur equally often in an ensemble, so they're

equally probable. That's all I was saying.
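Ted's claim, that each of those three joint events occurs in half of all runs, can be checked with a short tally (my sketch, not from the thread):

```python
import random
from collections import Counter

def joint_event_counts(runs=120_000, seed=3):
    # Count, over many complete runs, how often each joint event occurs:
    # (heads, Monday awakening), (tails, Monday awakening),
    # (tails, Tuesday awakening). A tails run contributes to two of them.
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        if rng.random() < 0.5:
            counts["H,Mon"] += 1
        else:
            counts["T,Mon"] += 1
            counts["T,Tue"] += 1
    return {k: v / runs for k, v in counts.items()}

print(joint_event_counts())  # each frequency is close to 1/2
```

Each event occurs in half the runs, so per complete run the three are indeed equiprobable, even though they are not equally frequent per awakening.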

As far as I can tell, you and I are in complete agreement.

-Ted