The (each?) interview is to consist of the one question: what is your
credence now for the proposition that our coin landed Heads?
When awakened (and during the interview) Beauty will not be able to tell
which day it is, nor will she remember whether she has been awakened
before.
She knows the above details of our experiment.
What credence should she state in answer to our question?
-Jamie
p.s. Don't worry, we will awaken Beauty afterward and she'll suffer no ill
effects.
p.p.s. This puzzle/problem is, as far as I know, due to a graduate student
at MIT. Unfortunately I don't know his name (I do know it's a man). The
problem apparently arose out of some consideration of the Case of the
Absentminded Driver.
p.p.p.s. Once again, I have no very confident 'solution' of my own; I will
eventually post the author's solution, but I am not entirely happy with
that one either.
50/50. The Heads and Tails environments are identical. They
give her no information, so the probability remains the same.
--
Jim Ferry | Center for Simulation of Advanced Rockets
http://www.uiuc.edu/ph/www/jferry/ | University of Illinois
>> What credence should she state in answer to our question?
>
>50/50. The Heads and Tails environments are identical. They
>give her no information, so the probability remains the same.
SPOILER
Nope. She should say that the probability that it's tails is 2/3.
Imagine repeating the experiment a million times. Heads comes
up half a million times, as does tails. But each time tails
comes up she's awakened twice. So there are a total of
1.5 million awakenings, and only half a million of them
occur after the coin came up heads.
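Here is a rough Python sketch of that counting argument, for anyone who
wants to check the bookkeeping (the trial count is arbitrary; this is
just an illustration):

    import random

    trials = 1_000_000
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if random.random() < 0.5:   # fair coin: heads
            total_awakenings += 1   # one awakening
            heads_awakenings += 1
        else:                       # tails
            total_awakenings += 2   # two awakenings
    print(heads_awakenings / total_awakenings)   # comes out near 1/3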
-Ted
Nope. 2/3 of the awakenings occur after a Tail, but these
awakenings are not equally likely events.
P(Heads & Monday) = 1/2
P(Tails & Monday) = 1/4
P(Tails & Tuesday) = 1/4
This would be correct if the rules stated that she would be awakened
on Monday OR Tuesday in the event the coin comes up tails,
but unless I misread the question, she is to be awakened on Monday
AND Tuesday in this case. That means that the three events
are definitely equiprobable.
Once again, imagine repeating the experiment a million times.
The event (Heads & Monday) will occur half a million times.
So will the event (Tails & Monday) (since tails comes up half the
time, and every time it does a Monday-awakening occurs). So
those two events are equally probable.
-Ted
Each has something obvious to be said in its favor. But they can't both be
right. (Can they?)
So, to summarize:
There is a frequentist sort of argument in favor of her declaring that the
chance that the coin landed Heads is only 1/3 (to wit: suppose the game
were played repeatedly, and on each occasion for guessing she made a guess
to herself, "I guess that it's Heads"; she would be right only 1/3 of the
time). On the other hand, there is a more Bayesian sort of argument that
she should think the chance is 1/2 (to wit: I thought it was 1/2 before
they put me to sleep, and I clearly have no new information, so it would
be irrational to change my mind).
Two arguments, incompatible conclusions, at least one of the arguments
must be faulty.
-Jamie
"Clearly"? I see nothing clear about it.
In fact I'm not sure I know what Jamie's question even means.
What's the significance of the "credence" that Sleeping Beauty
assigns to a proposition that we already know to be true (or false)?
Are we offering her a "bet"---she has the option to pay $p (out of
the royal treasury, whose amount is "large enough" but hidden from her),
and if she does so and the coin was "heads" we pay her back $1?
And do we offer this bet every time we interview her?
If so, I wouldn't want to risk more than $0.50 if I were
Sleeping Beauty in your experiment, would you?
Change this back to 1/2 chance of heads and two wakenings on tails
for every wakening on heads: then as S.B. I wouldn't even pay
$0.34 for the chance that the coin came up heads. In other words,
that measure of "credence" is 1/3.
On the other hand, suppose again the coin comes up heads on 1/2 of
all flips, and S.B. is woken once on heads, twice on tails. But now
suppose S.B. has the opportunity *before* the coin toss to pay $0.45
"betting on heads." Each time we wake her up, we ask if she still
wants her bet (if she still has one) to stand---if she says "no" at
any time during the week, her $0.45 is returned at the end of the week,
otherwise she gets $1 on heads and $0 on tails at the end of the week.
In the latter case she'll have two chances to cancel a losing bet
for every chance to cancel a winning bet, yet it still pays *not* to
cancel the bet; in that sense her "credence" is 1/2.
So is the "credence" that the coin came up heads 1/2 or 1/3?
Again, what *is* "credence"?
--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962
> Eytan Zweig
> After the experiment ends, Beauty is woken up once more, this time for good.
> She does not, however, remember anything that happened while the experiment
> was going on, except the rules of the experiment. Once again she is
> interviewed, and asked what is the credence for the possibility that the
> coin came up heads.
>
> Obviously, the credence is now 50%; however, she gained no new information
> except that the experiment is over, which is irrelevant to the credence -
> everything else is exactly the same knowledge as she had during the
> experiment. Why, according to your logic, does the credence change?
Indeed.
The 'no new information' reasoning seems very compelling.
Back up a little and I think it can be made even *more* compelling.
Before she is put to sleep, Beauty certainly thinks that the chance of
Heads is 1/2. Now for the sake of argument, suppose that upon awakening
she really does get some hard-to-state information, so that she reasonably
changes her credence in Heads to 1/3.
But whatever that strange information might be, she *knew* she was going
to get it, she knew this before she was put to sleep. Whenever you *know*
for sure that you are going to get information soon that will rationally
make you take the chance of Heads to be 1/3, surely you must *now* take
the chance of Heads to be 1/3. Otherwise your beliefs suffer from a very
blatant sort of diachronic incoherence (and we'll make an easy Dutch book
against you).
So she couldn't possibly be getting any new information, because it
couldn't possibly be rational for her to think ahead of time that the
chance of Heads is 1/3.
ka...@shore.net (David A Karr) wrote:
> In fact I'm not sure I know what Jamie's question even means.
> What's the significance of the "credence" that Sleeping Beauty
> assigns to a proposition that we already know to be true (or false)?
Hmmm.
Well, I meant to be using the standard Bayesian sense of 'credence', which
is generally cashed out as 'degree of belief'.
If you prefer, you may (as Matt McLelland suggests) rephrase the question:
What should she take the odds of Heads to be when we interview her? (I am
talking about her rational state of belief, though -- which may or may not
be logically related to her disposition to bet; I intend to take no
stand on what the relation is or isn't.)
> Are we offering her a "bet"---she has the option to pay $p (out of
> the royal treasury, whose amount is "large enough" but hidden from her),
> and if she does so and the coin was "heads" we pay her back $1?
> And do we offer this bet every time we interview her?
> If so, I wouldn't want to risk more than $0.50 if I were
> Sleeping Beauty in your experiment, would you?
>
>
> Change this back to 1/2 chance of heads and two wakenings on tails
> for every wakening on heads: then as S.B. I wouldn't even pay
> $0.34 for the chance that the coin came up heads. In other words,
> that measure of "credence" is 1/3.
Yeah.
Ok, but when we interview her, we can just ask her, What do you think is
the chance that the coin came up Heads? And what should she say, what is
the rational thing for her to believe?
Matt McLelland <mat...@flash.net> adds:
> The answer could be "50-50 of course. It is still a fair coin" or "Given that
> I am up, the odds are just 1/3."
> I think the second answer was the intended one.
Well, that is the answer endorsed by the author of the problem. He thinks
she should say that the probability of Heads given "I am awake now" is
1/3. *I* intended neither answer in particular. I keep waffling, myself.
bu...@pac2.berkeley.edu wrote:
> I don't think it has anything to do with frequentism vs. Bayesianism --
> I can phrase the 2/3-tails argument in Bayesian terms just as well
> as in frequentist terms.
Ok, that may be right. Still, the reasoning given in support of the 1/3
Heads answer tends to be by way of some facts about long term relative
frequencies, while the reasoning in favor of the 1/2 answer tends to be in
terms of rational change in belief.
(Note that without assuming that rational updating is by
conditionalization, it's hard to find any argument at all in favor of a
1/2 answer.)
> As it happens, I'm a diehard Bayesian;
> I phrased the argument in frequentist terms because in my
> experience that's what other people respond best to.
>
> Consider a universe of four possible events:
>
> 1. Coin came up heads; today is Monday
> 2. Coin came up heads; today is Tuesday
> 3. Coin came up tails; today is Monday
> 4. Coin came up tails; today is Tuesday
>
> Surely we can agree that these four events are equally probable, if we
> lack the information that Sleeping Beauty was awakened. To be
> specific, imagine that Rip van Winkle is asleep beside Sleeping Beauty
> and that the rules of the game dictate that he will be awakened on
> Monday and Tuesday regardless of the outcome of the coin flip. When
> he wakes up, he assesses the probability of the four events above to
> be equal: 1/4 each. (Assume he's awakened before S.B., in the
> cases where she is to be awakened.)
>
> Sleeping Beauty is in exactly the same state as Rip van Winkle, except
> that she has one more piece of information: she knows that she's been
> awakened. Now use standard Bayesian techniques to assess her
> (subjective) probabilities for the four events: number 2 is ruled
> out; the other three are equally probable. QED.
>
> -Ted
Yeah, good point.
Here's the funny thing, though. The information, "I have been awakened",
is a peculiar piece of information. For one thing, it is 'essentially
tensed' information, which is very peculiar in itself. It is something
that in principle she cannot know in advance. If we try, "At some moment I
will be or have been awakened", as a proxy for "I have been awakened",
then we have failed to capture the information, since obviously she does
already know, before the experiment, that she will be awakened at some
moment, and still she thinks (before the experiment) that the chance that
the coin lands Heads is 1/2.
Another odd thing about this information is that Beauty herself cannot
possibly receive its contradictory as information. She cannot ever be in a
position to conditionalize on, "I have not been awakened."
When I first looked at this problem, I thought, "I can see what's so
strange about this situation -- it's that the 'information' that appears
to be relevant is information not about what the world is like, but about
what day it is." But this is not right, I was completely wrong about that.
This feature is entirely incidental, and could be squeezed right out with
a more complicated story.
> Jamie Dreier wrote:
>
> > Back up a little and I think it can be made even *more* compelling.
> > Before she is put to sleep, Beauty certainly thinks that the chance of
> > Heads is 1/2. Now for the sake of argument, suppose that upon awakening
> > she really does get some hard-to-state information, so that she reasonably
> > changes her credence in Heads to 1/3.
>
> There is nothing paradoxical about this. It isn't even very
> complicated, and time really has nothing to do with it. Change the
> problem so that we don't ever interrogate her if the coin comes up
> heads, and suppose she knows this. Now imagine that you are her, and
> that you get interrogated. You don't have reason to doubt that the
> coin is still fair, but you still know that it came up tails with
> 100% certainty. There isn't anything deeper than this involved in
> this problem.
I think there is.
In your example, the fact that she is being interrogated does,
uncontroversially, count as information for her. We can put that
information in a tenseless way, so that she could in principle get it at
some other time -- say, before she is put to sleep in the first place. At
that moment, she will quite reasonably think, "Of course, if I am
interrogated at all, that will mean that the coin came up tails. That is,
pr(Tails | I will be interrogated) = 1."
Then, when she is in fact interrogated, she conditionalizes as usual, and voila.
But in the original problem, there doesn't seem to be any way to state the
relevant information (if it really is information) in a neutral, untensed
way. We can't put it like this: "I have been or will be interrogated."
Because she already knows that at the outset, so if pr(Heads | I have been
or will be interrogated) = 1/3, then since she knows that the condition
obtains, she can just conditionalize and conclude, pr(Heads) = 1/2. But
that's not right.
So as far as I can see, time really does have something to do with it.
> The only explanation I can come up with is that "today is Monday" and
> "today is Tuesday" can't be treated as events.
I think that's right. At least in the most straightforward way, they cannot be.
The 'events' in probability theory can often be thought of as sets of
possible worlds. The probability function is a measure on these sets.
Conjunction of the events corresponds to intersection of the sets,
disjunction to union, and so on. The event, "the next Congress will have a
Republican majority", is represented by the set of worlds in which the
next Congress has a Republican majority. My estimate of the probability of
the event comes from my measure of that set.
But which set of possible worlds represents the pseudoevent (hm, well, to
be less tendentious: the 'purported event') that today is Monday? Today is
Monday, after all, in *every* possible world (I'm writing at 11:52 pm on
Monday), or else in *no* possible world (you are most likely reading this
on Tuesday or Wednesday).
I guess the interesting question is whether we can extend the usual
apparatus to include this new sort of information. It ought to be
possible. After all, we *do* sometimes find ourselves in the position of
not knowing what time it is, and of having some educated guesses about it
("I know it isn't noon, my probabilities for times are clustered around
midnight..."). And sometimes this makes a difference to what we think we
should do. ("I'm pretty sure my watch is fast, so I can play two more
rounds of freecell before I have to run off to the meeting, but there is a
small chance that my watch is slow, in which case I'd better stop typing
*now* and hightail it over there....")
>-Ted
No, this reasoning only applies if she knows that the experiment will be
repeated several times, and that the number of tails will equal the number
of heads. It does not apply to a single case where one result eliminates
the other.
For instance, take an extreme case - instead of flipping a normal coin, say
it is weighted in such a fashion that there is only a 1/1000000 chance of
tails coming up- but in that case, she will be woken 999,999 times! By your
reasoning, she should say the chance of it being tails is 50%, which is
clearly incorrect.
Eytan Zweig
Ok, think about this related question -
After the experiment ends, Beauty is woken up once more, this time for good.
She does not, however, remember anything that happened while the experiment
was going on, except the rules of the experiment. Once again she is
interviewed, and asked what is the credence for the possibility that the
coin came up heads.
Obviously, the credence is now 50%; however, she gained no new information
except that the experiment is over, which is irrelevant to the credence -
everything else is exactly the same knowledge as she had during the
experiment. Why, according to your logic, does the credence change?
Eytan Zweig
> On the other hand, suppose again the coin comes up heads on 1/2 of
> all flips, and S.B. is woken once on heads, twice on tails. But now
> suppose S.B. has the opportunity *before* the coin toss to pay $0.45
> "betting on heads." Each time we wake her up, we ask if she still
> wants her bet (if she still has one) to stand---if she says "no" at
> any time during the week, her $0.45 is returned at the end of the week,
> otherwise she gets $1 on heads and $0 on tails at the end of the week.
>
> In the latter case she'll have two chances to cancel a losing bet
> for every chance to cancel a winning bet, yet it still pays *not* to
> cancel the bet; in that sense her "credence" is 1/2.
I think that "What is your credence" can be assumed to mean "What are the odds
[to you]". The question was:
What is your credence now for the proposition that our coin landed Heads?
Your objection is that this could be interpreted to mean "What *were* the odds
that our coin landed heads when we flipped it." I think that the use of the
word 'now' implies a meaning of "What are the odds that the coin came up
heads given that we just woke you up."
This isn't really a problem with the use of the word credence, I think. The
same possible ambiguity exists in the statement:
Now what are the odds that our coin landed heads?
I don't think it has anything to do with frequentism vs. Bayesianism --
I can phrase the 2/3-tails argument in Bayesian terms just as well
as in frequentist terms. As it happens, I'm a diehard Bayesian; I phrased
the argument in frequentist terms because in my experience that's what
other people respond best to.
Can you explain that last clause? Why on earth would you think that
the information that the experiment is over is irrelevant? Of course
it's relevant -- it completely changes the universe of possibilities
for her, from
{heads and I'm being awakened for the first time,
tails and I'm being awakened for the first time,
tails and I'm being awakened for the second time}
to
{heads and the experiment is over,
tails and the experiment is over}.
With a different universe of possibilities, there's no reason to
expect the probability of tails to be the same in the two cases. I
can't even begin to guess your reasons for suggesting that the two
situations (1. she knows the experiment is still going on; 2. she
knows the experiment is over) are equivalent.
-Ted
She's given that the coin is fair, so your objection doesn't apply.
Consider this:
Same setup, only this time Sleeping Beauty is told whether it is
Monday or Tuesday. Clearly, on Tuesday she should answer that the
probability of tails is 1, and on Monday she should answer that the
probability of tails is 1/2.
Certainly when she isn't told, the aggregate probability of combining
the two cases can't drop to 1/2.
But to confound things further:
P(tails) = 1/2
P(heads) = 1/2
P(tails | Monday) = 1/2
P(tails | Tuesday) = 1
P(Monday | tails) = 1/2
P(Monday | heads) = 1
P(Tuesday | tails) = 1/2
P(Tuesday | heads) = 0
P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
= 1/2*1/2 + 1*1/2 = 3/4
P(Tuesday) = 1/4
(alarm bells should be going off by now)
P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)
= 1/2 * 3/4 + 1 * 1/4 = 5/8
A contradiction! But all I've used is
P(A) = P(A|B)P(B) + P(A|~B)P(~B)... that can't be wrong!
The only explanation I can come up with is that "today is Monday" and
"today is Tuesday" can't be treated as events.
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."
>For instance, take an extreme case - instead of flipping a normal coin, say
>it is weighted in such a fashion that there is only a 1/1000000 chance of
>tails coming up- but in that case, she will be woken 999,999 times! By your
>reasoning, she should say the chance of it being tails is 50%, which is
>clearly incorrect.
I don't understand the last clause ("which is clearly incorrect").
It would make a lot more sense to me if it said "which is
clearly correct." :-)
When she's awakened, there are 1,000,000 distinct possibilities:
1. Heads, and I'm being awakened for the first time.
2. Tails, and I'm being awakened for the first time.
3. Tails, and I'm being awakened for the second time.
...
1000000. Tails, and I'm being awakened for the 999,999th time.
Clearly the first one is 999,999 times more likely than any of the
others, but the others are 999,999 times more numerous, so, based on
the information she has available, the probabilities of heads and
tails are equal.
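The expected counts here can be checked exactly, without simulating (a
short Python sketch using only the numbers above):

    # Expected awakenings per run of the weighted-coin variant:
    # tails has chance 1/1,000,000 and brings 999,999 awakenings.
    p_tails = 1 / 1_000_000
    heads_awakenings = (1 - p_tails) * 1     # one awakening on heads
    tails_awakenings = p_tails * 999_999     # expected tails awakenings
    print(heads_awakenings / (heads_awakenings + tails_awakenings))
    # 0.5 exactly: the two kinds of awakening balance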
-Ted
> Back up a little and I think it can be made even *more* compelling.
> Before she is put to sleep, Beauty certainly thinks that the chance of
> Heads is 1/2. Now for the sake of argument, suppose that upon awakening
> she really does get some hard-to-state information, so that she reasonably
> changes her credence in Heads to 1/3.
There is nothing paradoxical about this. It isn't even very complicated,
and time really has nothing to do with it. Change the problem so that we
don't ever interrogate her if the coin comes up heads, and suppose she
knows this. Now imagine that you are her, and that you get interrogated.
You don't have reason to doubt that the coin is still fair, but you still
know that it came up tails with 100% certainty. There isn't anything
deeper than this involved in this problem.
> P(tails | Monday) = 1/2
Bzzz.
p(tails | Monday) = 1/3
> P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> = 1/2*1/2 + 1*1/2 = 3/4
> P(Tuesday) = 1/4
> (alarm bells should be going off by now)
No. So far so good. Keep in mind that the event "It is Monday" is really
"I got woken up on Monday".
> P(tails) = P(tails|Monday)*P(Monday) + P(tails|Tuesday)*P(Tuesday)
> = 1/2 * 3/4 + 1 * 1/4 = 5/8
Substituting correct values you would have the following non-contradiction:
P(tails) = 1/3 * 3/4 + 1*1/4 = 1/2
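A quick check of that substitution, using only the numbers above:

    P_monday, P_tuesday = 3/4, 1/4
    P_tails_given_monday, P_tails_given_tuesday = 1/3, 1
    print(P_tails_given_monday * P_monday
          + P_tails_given_tuesday * P_tuesday)   # 0.5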
> Matthew T. Russotto wrote:
>
> > P(tails | Monday) = 1/2
>
> Bzzz.
> p(tails | Monday) = 1/3
>
> > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> > = 1/2*1/2 + 1*1/2 = 3/4
> > P(Tuesday) = 1/4
> > (alarm bells should be going off by now)
>
> No. So far so good. Keep in mind that the event "It is Monday" is
> really "I got woken up on Monday".
Really?
So wait, let's see how this works.
You think that P(tails | I got woken up on Monday) = 1/3.
But she *knows* she will be awakened on Monday. So why can she not
conditionalize in advance, before she is put to sleep, and conclude that
the chance of tails is 1/3?
Or what about this version:
same set-up, except that when we awaken Beauty on Monday we will wake her
up by shouting, "HEY, BEAUTY, IT'S MONDAY!".
Now surely when we awaken her in that rude way, she will take the chance
that the coin came up Heads to be 1/2. Or is this really different from a
version in which we will not awaken her on Tuesday at all, no matter how
the coin lands? It doesn't seem to be any different, from her perspective
on Monday when we awaken her by the rude shouting.
> So as far as I can see, time really does have something to do with it.
Time really isn't the issue. Unfortunately, I completely missed the boat with my
last few posts:
The probability that the coin came up heads given that you were just woken up is
1/2. My apologies.
Anyone who still doesn't believe it can imagine what would happen if they increased
the number of interrogations from 2 to 1 zillion on tails (leaving 1 interrogation
for a head). You agree to do the experiment once, and sure enough you awaken and
they ask the question. Do you really think you can be almost positive that the coin
didn't come up heads?
> Matt McLelland <mat...@flash.net> wrote:
>
> > Matthew T. Russotto wrote:
> >
> > > P(tails | Monday) = 1/2
> >
> > Bzzz.
> > p(tails | Monday) = 1/3
> >
> > > P(Monday) = P(Monday|tails)*P(tails) + P(Monday|heads)*P(heads)
> > > = 1/2*1/2 + 1*1/2 = 3/4
> > > P(Tuesday) = 1/4
> > > (alarm bells should be going off by now)
> >
> > No. So far so good. Keep in mind that the event "It is Monday" is
> > really "I got woken up on Monday".
>
> Really?
Really. My previous retraction doesn't affect anything I said to Matthew T.
Russotto.
> So wait, let's see how this works.
> You think that P(tails | I got woken up on Monday) = 1/3.
Yep.
> But she *knows* she will be awakened on Monday. So why can she not
> conditionalize in advance, before she is put to sleep, and conclude that
> the chance of tails is 1/3?
You are confusing the events "She will be awakened on Monday" and "Today is
Monday and she was awakened". They are not the same. The probability of the
first is 1 and the probability of the second isn't.
Let me give you a simple example that doesn't involve time:
I flip a coin. If it comes up heads I put the number 123456 in a hat, and if
it comes up tails I put the numbers 1 through a million in a hat.
Scenario 1: You pull a single number from my hat. If you pull 123456,
hopefully you will guess the coin was heads.
Scenario 2: You pull the numbers from my hat until none are left. Now, you
know that you will eventually pull 123456 from either hat. So if you learned
that 123456 was pulled from the hat at some point during our trial, there
would be no 'information' gained. On the other hand, if you pull 123456 from
the hat on your first pull, then the odds are that the coin flipped heads.
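Here is a rough Python sketch of Scenario 1, assuming a fair coin as
stated (the helper name is made up; this just illustrates the point):

    import random

    def first_pull():
        """Flip the coin, fill the hat, and pull one number at random."""
        heads = random.random() < 0.5
        hat = [123456] if heads else range(1, 1_000_001)
        return heads, random.choice(hat)

    results = [first_pull() for _ in range(100_000)]
    matched = [heads for heads, n in results if n == 123456]
    print(sum(matched) / len(matched))
    # essentially 1: a first-pull 123456 all but guarantees heads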
> Or what about this version:
> same set-up, except that when we awaken Beauty on Monday we will wake her
> up by shouting, "HEY, BEAUTY, IT'S MONDAY!".
This *is* different.
PS: The event "Today is Monday" is perfectly valid.
This means that if we tell Sleeping Beauty what day we woke her up,
on Monday she should say that the probability of the coin having
landed "tails" is 1/3. Surely that can't be the case!
> Anyone who still doesn't believe it can imagine what would happen if
> they increased the number of interrogations from 2 to 1 zillion on
> tails (leaving 1 interrogation for a head). You agree to do the
> experiment once, and sure enough you awaken and they ask the question.
> Do you really think you can be almost positive that the coin didn't
> come up heads?
Yes. Of course. How could it be any other way? There are
a zillion and one equiprobable possible explanations for
why she was awakened. One is heads, and a zillion are tails.
P(heads) = 1/zillion.
-Ted
> P(heads) = 1/zillion.
I meant
P(heads) = 1/(zillion+1)
of course. Sorry about that. (If only all the errors I made were
in the zillionth decimal place!)
-Ted
And my question is, what the heck difference does it make *what*
S.B.'s "degree of belief" is? Why should S.B. care what number she
says? Why should the experimenter care?
>If you prefer, you may (as Matt McLelland suggests) rephrase the question:
>What should she take the odds of Heads to be when we interview her? (I am
>talking about her rational state of belief, though -- which may or may not
>be logically related to her disposition to bet, I intend not to take any
>stand on what the relation is or isn't.)
I suggested placing bets because this actually puts S.B. in a situation
where giving an "incorrect" credence to a proposition could reasonably
be perceived as being disadvantageous to her. There are other ways to
do this, for example I'm sure you could recast the original problem
in such a way that Sleeping Beauty's life is in jeopardy and she has to
make a decision that will either increase or decrease her risk--and she
cannot "opt out of" this "bet." But in any case the essential thing
is to posit an answer to the question, "Who cares?"
I still don't know what you mean by "rational state of belief."
Assuming the number of angels that can dance simultaneously on the
head of a pin is finite, is that number closer to 200,000 or to 2?
What's a "rational state of belief" regarding that question?
I think you mischaracterized my argument. I very much intended to
ask, "What are the odds that the coin came up heads given that we just
woke you up." I'm not so much concerned with any decisions S.B. might
have made before she went to sleep; I'm asking, *now* *that* *she's*
*been* *awakened* and asked to make a decision, what should she decide?
I think it's obvious she should make this decision in light of any
information she has *now*, not just in light of any information she
had before the coin was tossed.
If you like, let's not have any bets placed before S.B. goes to sleep.
Instead, each time we wake her we'll ask, "Do you want your agent to
pay $p on Wednesday, knowing that you'll get back $1 if the coin came
up heads and $0 if it came up tails?" Moreover, S.B. knew we were going
to ask this.
Personally, if I were S.B. in this experiment I'd ask, "Do you mean my
agent will pay $p *in* *addition* to any other payments I might
authorize or have authorized this week? Or does the order I give now
supersede any previous orders I may have given?" And I would refuse
to answer the question that was asked me until I'd gotten a
satisfactory answer to these two.
Actually, I think in a sense there *is* a possible world in which
today is Monday (or "is not Monday," in case you happen to think it's
Monday when you read this). After all, someone at some point in time
declared, "It's time to start holding a Sabbath every seven days, and
the first one will be ... days from now." (Or substitute your own
favorite theory for the origin of our current seven-day week.) I see
no fundamental reason why that announcement couldn't possibly have
been delayed a day or two.
Robinson Crusoe faced this question. It was important to him to know
what day was Sunday, because he feared he might be punished if he
failed to make Sunday observances on the proper day. And as it turned
out, every week for many years he believed "today is Sunday" when in
fact the day was not Sunday.
But you can easily set up the Sleeping Beauty experiment to get around
this what-day-is-today question. On the day we flip the coin, we set
up an empty cup. On each successive day thereafter, early in the
morning we drop a small white ball into the cup. If after this act
there is one ball in the cup, or if there are two balls and the coin
shows tails, we wake S.B. and interview her; otherwise we let her
sleep, except that on the day when we drop the seventh ball in the cup
we wake S.B. and end the experiment. We also promise not to let the
coin be turned over from the moment it lands heads or tails until the
moment the experiment ends.
When S.B. is interviewed, then, the following two propositions seem
to me to be equally subject to whatever "degree of credence" she can
assign:
The coin is facing heads up.
There is exactly one ball in the cup.
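Here is a rough Python sketch of this cup protocol (names made up),
pooling every interview moment across many runs -- though whether pooling
interviews that way is the right measure is, of course, the very point in
dispute:

    import random

    def interviews_in_one_run():
        """Flip the coin, then apply the stated waking rule for a week."""
        heads = random.random() < 0.5
        result = []
        for balls in range(1, 8):   # one ball dropped in each morning
            if balls == 1 or (balls == 2 and not heads):
                result.append((heads, balls))
        return result

    pool = [iv for _ in range(100_000) for iv in interviews_in_one_run()]
    print(sum(h for h, b in pool) / len(pool))       # heads-up, ~1/3
    print(sum(b == 1 for h, b in pool) / len(pool))  # one ball, ~2/3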
Instead of playing the stated game with S.B., suppose that the
consequence of Heads is to write the letter "A" on a slip of
white paper, whereas the consequence of Tails is to write "A"
on a slip of white paper and also on n-1 additional slips of
colored paper, each having a different color (n=2 corresponds
to the old scenario). Suppose that S.B. knows all about how
this procedure is performed, but, as far as the outcome is
concerned, S.B. is informed only of the letter written on one
piece of paper, but not whether it is a consequence of H or T.
(The letter, of course, is always "A", just as before the
information available to S.B. was only "I've been Awakened",
together with the game rules.)
It seems to me that S.B.'s "state of information" about H or T
is exactly the same in both of these games, with
"Monday"<->"white paper", "Tuesday"<->"colored paper",
(or if n>2, the different days correspond to different colors).
It has nothing essential to do with time, but has everything to
do with indistinguishability of outcomes like "A" when not
accompanied by any distinguishing feature, such as color or day.
In both of these games, S.B. learns nothing relevant about H or T,
so that pr(T|A)=pr(T)=1/2=pr(H)=pr(H|A), with obvious notation.
Arguments based on there being n cases stemming from T and only
one case from H must incorporate the fact that the probability
of any one of the n T-cases being involved in the outcome is also
proportionately smaller.
--
r e s
Jamie Dreier wrote in message ...
>Matt McLelland <mat...@flash.net> wrote:
>> Jamie Dreier wrote:
>>
>> > Back up a little and I think it can be made even *more* compelling.
>> > Before she is put to sleep, Beauty certainly thinks that the chance of
>> > Heads is 1/2. Now for the sake of argument, suppose that upon awakening
>> > she really does get some hard-to-state information, so that she
>> > reasonably changes her credence in Heads to 1/3.
>>
>> There is nothing paradoxical about this. It isn't even very
>> complicated, and time really has nothing to do with it. Change the
>> problem so that we don't ever interrogate her if the coin comes up
>> heads, and suppose she knows this. Now imagine that you are her, and
>> that you get interrogated. You don't have reason to doubt that the
>> coin is still fair, but you still know that it came up tails with
>> 100% certainty. There isn't anything deeper than this involved in
>> this problem.
>
>I think there is.
>
>In your example, the fact that she is being interrogated does,
>uncontroversially, count as information for her. We can put that
>information in a tenseless way, so that she could in principle get it at
>some other time -- say, before she is put to sleep in the first place. At
>that moment, she will quite reasonably think, "Of course, if I am
>interrogated at all, that will mean that the coin came up tails. That is,
>pr(Tails | I will be interrogated) = 1."
>
>Then, when she is in fact interrogated, she conditionalizes as usual, and
>voila.
>
>But in the original problem, there doesn't seem to be any way to state the
>relevant information (if it really is information) in a neutral, untensed
>way. We can't put it like this: "I have been or will be interrogated."
>Because she already knows that at the outset, so if pr(Heads | I have been
>or will be interrogated) = 1/3, then since she knows that the condition
>obtains, she can just conditionalize and conclude, pr(Heads) = 1/2. But
>that's not right.
>
>So as far as I can see, time really does have something to do with it.
>
True, but you've posited an experiment in which S.B. will be
informed *only* *once* that an A is written on a piece of paper.
In Jamie's experiment, S.B. receives her information *twice*
in some cases, although by the second time she's forgotten that
she received the information before.
The two experiments are not isomorphic; at least, it's not obvious to
me that they're isomorphic.
> >> In fact I'm not sure I know what Jamie's question even means.
> >> What's the significance of the "credence" that Sleeping Beauty
> >> assigns to a proposition that we already know to be true (or false)?
> >
> >Hmmm.
> >
> >Well, I meant to be using the standard Bayesian sense of 'credence', which
> >is generally cashed out as 'degree of belief'.
>
> And my question is, what the heck difference does it make *what*
> S.B.'s "degree of belief" is? Why should S.B. care what number she
> says? Why should the experimenter care?
Well, why should anyone care about anything???
I guess I do care, personally, that my representation of the world be
rational. Why should I? I don't know. I know that I do care whether I use,
say, modus ponens correctly. Why should I?
> >If you prefer, you may (as Matt McLelland suggests) rephrase the question:
> >What should she take the odds of Heads to be when we interview her? (I am
> >talking about her rational state of belief, though -- which may or may not
> >be logically related to her disposition to bet, I intend not to take any
> >stand on what the relation is or isn't.)
>
> I suggested placing bets because this actually puts S.B. in a situation
> where giving an "incorrect" credence to a proposition could reasonably
> be perceived as being disadvantageous to her. There are other ways to
> do this, for example I'm sure you could recast the original problem
> in such a way that Sleeping Beauty's life is in jeopardy and she has to
> make a decision that will either increase or decrease her risk--and she
> cannot "opt out of" this "bet." But in any case the essential thing
> is to posit an answer to the question, "Who cares?"
If it's that essential, then let's just suppose that she really, really
wants to have rational credences.
> I still don't know what you mean by "rational state of belief."
> Assuming the number of angels that can dance simultaneously on the
> head of a pin is finite, is that number closer to 200,000 or to 2?
> What's a "rational state of belief" regarding that question?
Oh, hold on.
I'm certainly not saying that there is any particularly rational state
regarding that question. In general, my view is that rationality is a
feature of your beliefs collectively, not one by one. So I suppose that
what it's rational for you to think about angels on pins depends on what
else you believe.
I am resisting a particular characterization of the same question in terms
of bets for a reason; I'm not just refusing in order to be difficult.
Why don't I post a separate account of my reason? I think I will.
If we asked for Beauty's dispositions to bet, instead of asking for her
credence or what she should believe, I think we'd be asking a different
question.
Suppose it's like this. Suppose we tell her in advance that when(ever) she
is awakened, we will ask her to declare her fair odds that the coin comes
up heads. She must name a fair price for a ticket that pays, um, let's
see, $6 if the coin landed Heads, and nothing otherwise. We will sell her
the ticket for the named price. She is supposed to decide what is the
largest sum she will pay.
Now in this case, it seems pretty obvious that the price she should name
is $2. (Or $1.99, presumably she'd be indifferent to getting the ticket if
it were really sold for the 'fair price'. I'll ignore this hereafter.)
Suppose she says $3 instead -- this is the other obvious price to name.
But now she looks to be in some trouble. She knows that if the coin does
land Heads, she will buy a ticket one time, and she will win, so she will
net $3. On the other hand, if the coin lands Tails she will bet twice,
losing $3 each time, a net loss of $6. Since her prior for the coin
landing Heads is 1/2, this looks like a terrible plan. If she executed it
repeatedly in many runs of the game, she'd take a bath.
Instead she should offer to pay at most $2. Then she nets $4 if the coin
lands Heads, and loses $2 on each of two bets if the coin lands Tails.
Fair.
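The arithmetic is easy to check (a made-up Python sketch of the expected
profit per run at a named price):

    def expected_profit(price):
        heads_net = 6 - price    # heads: one ticket bought, pays $6
        tails_net = -2 * price   # tails: two tickets bought, both pay $0
        return 0.5 * heads_net + 0.5 * tails_net

    print(expected_profit(3))   # -1.5 per run: the terrible plan
    print(expected_profit(2))   #  0.0 per run: the fair price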
However, this does not seem to me to show that she should take the real
chance of Heads to be 1/3 at the moment she awakens. The problem is that
*the amount she has to bet in all depends on the outcome on which she is
betting*. In such circumstances, the odds you will take do not reflect
your view of the actual chances.
To see this (maybe it's obvious, humor me), consider the grossly unpopular
Variable Bet Casino. At the VBC they have a roulette table, and you bet
with plaid chips. The roulette wheel is an ordinary casino roulette wheel
(pretend there are no zero nor double-zero so that roulette actually uses
'fair bets'). But the rule is that the plaid chips are worth $1 if the
ball lands in a red space, and $1000 if it lands in a black space.
Now what are the fair odds for the bets on red and black? Not even odds,
that's for sure. The fair odds would pay 1000 to one for a bet on red, and
one to 1000 for a bet on black.
But the fact that these are the odds I deem fair clearly does *not* mean
that I think the chance that the ball will land in a red space is very
tiny. I think the chance is 1/2. Why do my fair odds not reflect my views
about the actual chances? Because the amount that I am betting is not
fixed, it depends on the outcome on which I am betting.
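The same kind of check for the Variable Bet Casino (again just a sketch;
the function name is made up):

    def expected_dollars_on_red(payout_chips):
        win = 0.5 * payout_chips * 1   # red: paid in chips now worth $1
        lose = 0.5 * 1 * 1000          # black: the staked chip is worth $1000
        return win - lose

    print(expected_dollars_on_red(1000))   # 0.0: 1000-to-1 on red is fair
    print(expected_dollars_on_red(1))      # -499.5: even odds would be ruinous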
Same for Beauty.
So, while the question of what odds she would take is interesting (sort of
-- it seems to me to be pretty trivial), it doesn't settle the question of
what Beauty should believe.
Please reconsider this, because I think it succeeds as a true
isomorphism.
--
r e s
David A Karr wrote in message <7pyH2.84$no1....@news.shore.net>...
My argument is with the assumption that P(H) = P(T). I've shown that a
contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and
P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or
by changing P(H) = 1/(n+1). The question is which change fits the
description of the experiment. On the surface, it seems like we've
flipped a fair coin, therefore P(H)=P(T)=1/2. But I claim that what
we've actually done is used that fair coin to produce a distribution
(and a non-uniform one) where P(H) = 1/(n+1). We've done this by
flipping the coin once and measuring once if it comes up heads, but
measuring n times if it comes up tails.
I have no comment on your AIDS-patient analogy, since I don't see what
it's supposed to show. That some of the conclusions in this analysis
are counterintuitive? Well, maybe. I guess it depends on your
intuition. This whole daily-amnesia scenario is so crazy to begin
with that this sort of appeal to intuition doesn't really help that
much, IMHO.
Onwards.
>Correct! But those last events are not *disjoint*!!!
I couldn't care less. The assertion I was making had nothing to do
with disjointness. In the language of the original puzzle (with a
simple coin-flip and no zillions), I was asserting the following:
P(heads and today is monday) = P(tails and today is monday)
Note that this assertion makes no mention of disjointness.
Frankly, I thought it was an agonizingly obvious assertion to make
and was astonished to find people disagreeing with it! If you really
disagree with this assertion, then tell me -- which of those two
events is less likely? That is, which event would occur less
often in a large ensemble of repeated trials of the experiment?
(Or, if you don't agree that these last two questions are equivalent,
then what definition of probability are you using?)
> P(The coin came up heads and *this* is my first time up) = 1/2
>and
> P(The coin came up tails and *this* is my first time up) = 1/2*1/N
I agree with this, but I don't see why it's relevant to the problem.
>Maybe this example would be better. Suppose that we are going to do the
>same old sleeping beauty experiment only this time the coin is biased 100
>to 1 in favor of heads. Also, instead of waking her twice, we will wake
>her 10,000 times for tails. Your argument seems to be that if we agree
>to ask sleeping beauty every time what the coin came up, she will get it
>right only 1% of the time if she answers heads.
Right.
>How can this be true if
>the odds are only 1% that it will be tails? The simple answer is that
>her mistakes are magnified 10,000 times. Imagine now that she awakens to
>find some strange person has broken into the laboratory - something
>clearly not supposed to happen every time. They ask her at gunpoint,
>what was the result of the coin toss? She should clearly answer heads
>and has only a 1% chance of answering incorrectly.
Right. But I don't understand what this has to do with the situation
at hand. She clearly has different information in this case, so
naturally her assessment of the probabilities will be different.
-Ted
I didn't think so, and I didn't bring these up just to be tendentious.
I'm doing this because I'm confused.
>As I said, I generally just take 'credence' and 'degree of belief' for
>granted, it's the way I'm used to thinking about probability. How
>confident should she be that the coin landed Heads, given what else she
>believes? Measure confidence on the [0,1] scale, where 1 is the confidence
>you place in an obvious tautology and 0 your confidence in an obvious
>contradiction, and .5 your confidence in the quarter I am now tossing
>landing Heads, and so on.
Now we're getting into yet another semantic muddle. I use the term
"confidence" to refer to a particular non-Bayesian measure of belief,
the one usually meant when someone says they used "95 percent
confidence intervals". If I analyze a statistical sample, I might
make a statement about a hypothesis with "confidence 0.95". This is
not at all the same as my saying that I estimate the hypothesis to be
true with *probability* 0.95.
Now this is interesting because I understand two measures of something
roughly corresponding to a degree or strength of belief, both measured
on the scale [0,1], yet not equivalent to each other nor even having
a well-defined mapping from one to the other. So maybe we should
say "probability" rather than "credence" since at least this gives me
a clue which of the two measures we're looking for.
>If she makes her judgment and then the experimenter tells her how the coin
>really landed, should she be more surprised if he tells her "Heads" than
>if he tells her "Tails"? Or equally surprised?
>
>Can't you just think about the probability of Heads, from Beauty's perspective?
I just don't know. Usually I like my probability estimates to fit into
a rational world view, and one of the ways I test their rationality is
to imagine that I (or someone else) could make some sort of bet on them.
(After all, that's the origin of this branch of mathematics, isn't it?)
In this case all the betting does is to point out difficulties with
other possible measures of goodness such as "how surprised" Beauty
should be. For example, if she were to assign probability 1/3 to
heads, then she'll be twice as surprised when the coin is revealed to
be heads as when it's revealed to be tails. But if the experimenter
really does make this revelation each time before he puts Beauty back
to sleep, then Beauty ends up being *exactly* as surprised by tails as
by heads over the course of the experiment (half as surprised each
time, but it happens twice as often). This minimizes her risk,
i.e. the variance in how surprised she'll be. I find that a
compelling argument for adopting this probability estimate, but not
compelling enough to make me give up the notion that the estimate
probably really should be 1/2 after all.
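Scoring "surprise" as one minus the assigned probability (one way to make
that accounting concrete), the totals come out in a couple of lines:

    p_heads = 1/3   # her announced probability of heads
    print(1 * (1 - p_heads))         # heads run: one reveal, surprise 2/3
    print(2 * (1 - (1 - p_heads)))   # tails run: two reveals at 1/3, 2/3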
I'd be inclined to just pick a side (1/3 or 1/2) and stay there, but I
keep having this fear that the whole edifice of reasoning is built on
sand.
A few related questions:
Suppose a few seconds after Beauty wakes, the experimenter tells her
what day it is? What should she assign as the probability that the
coin came up heads, given that she's just been told today is Monday?
r e s seems to think the answer is 2/3. This boggles me.
Suppose Beauty just woke up and has no other new information.
What's the probability (in her rational view) that today is Monday?
> >Correct! But those last events are not *disjoint*!!!
>
> I couldn't care less. The assertion I was making had nothing to do
> with disjointness.
Oh really? Sometime in the past you wrote:
> Yes. Of course. How could it be any other way? There are
> a zillion and one equiprobable possible explanations for
> why she was awakened. One is heads, and a zillion are tails.
> P(heads) = 1/zillion.
Now, initially I objected on the grounds that the events were not
equiprobable, believing you to be talking about events of the form "The coin
flipped tails, it is day 2, and here I am". You retorted that they were
equiprobable and claimed the events you were talking about were of the form
"The coin flipped tails and I was awakened on day 2". I then objected that
these events are not disjoint. Why does it matter? Because the only way you
can conclude that N equiprobable events each have a probability of 1/N is if
they are disjoint!
> In the language of the original puzzle (with a
> simple coin-flip and no zillions), I was asserting the following:
>
> P(heads and today is monday) = P(tails and today is monday)
This isn't true!
P(heads and today is monday) = 1/2
P(tails and today is monday) = 1/2 * 1/N (N being the number of awakenings on tails)
> Note that this assertion makes no mention of disjointness.
> Frankly, I thought it was an agonizingly obvious assertion to make
> and was astonished to find people disagreeing with it!
You shouldn't - it is wrong.
> If you really disagree with this assertion, then tell me -- which of those
> two
> events is less likely? That is, which event would occur less
> often in a large ensemble of repeated trials of the experiment?
> (Or, if you don't agree that these last two questions are equivalent,
> then what definition of probability are you using?)
>
> > P(The coin came up heads and *this* is my first time up) = 1/2
> >and
> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N
>
> I agree with this, but I don't see why it's relevant to the problem.
? You agree? How is this different from the events with "this" replaced by
"today"?
This isn't an assumption, but is part of the problem
statement, viz.,"we'll flip a (fair) coin".
We'll have little hope of communicating intelligently
about this problem unless we can at least agree that
this means, a priori, pr(H)=pr(T).
>I've shown that a
>contradiction occurs if P(H)=P(T)=1/2, P(H|A1) = P(T|A1) = 1/2, and
>P(An|T) = 1/n. It can be resolved by changing P(H|A1) = 1/(n+1), or
>by changing P(H) = 1/(n+1). The question is which change fits the
>description of the experiment. On the surface, it seems like we've
>flipped a fair coin, therefore P(H)=P(T)=1/2.
I would say that it's an explicit part of the problem that pr(H)=pr(T),
meaning the probabilities unconditioned by any information other than
the rules of the game. So, if, as you say below, you see a different
distribution arising for H/T, it would presumably involve some
pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it
appears that you've taken E=A1. (Although I don't agree with the actual
values you seem to have obtained for the unequal conditional
probabilities -- I showed in another posting that pr(H|A1)=2/3.)
>But I claim that what
>we've actually done is used that fair coin to produce a distribution
>(and a non-uniform one) where P(H) = 1/(n+1). We've done this by
>flipping the coin once and measuring once if it comes up heads, but
>measuring n times if it comes up tails.
--
r e s
First, there's the straightforward probability approach:
P(heads, Monday) = 1/2 * 1 + 1/2 * 0 = 1/2
P(tails, Monday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
P(tails, Tuesday) = 1/2 * 0 + 1/2 * 1/2 = 1/4
I'm hoping you can see why this is reasonable.
Secondly, think of it this way: Sleeping Beauty knows that no matter
what, if it is Monday P(heads) = 1/2. This is simple to see: if we
only woke her up once in either case, P(heads) = 1/2. Now, she will only
wake up again on Tuesday if she woke up on Monday after a flip of tails.
Therefore, the probability of waking up Tuesday is the same as the
probability of (tails, Monday), which is 1/2. Another way to say it is:
the chances of waking up on a given one of the "tails days" (Monday or
Tuesday) are equal. So P(tails, Monday) = P(tails, Tuesday). We
also know that the chance of heading down the "heads branch" of the
system is 1/2, while the chance of following the "tails branch" is also
1/2. If all three cases were equiprobable, the probability of waking
up is more than sure - it's 3/2! So P(tails, Monday) + P(tails, Tuesday)
= 1/2, and each equals 1/4.
Again. I'll use a bag for my analogy. If I flip heads, I will put a red
ball in the bag. If I flip tails, I will put two blue balls in the bag.
If I ask you what are your odds of pulling out a red ball after I flip,
would you say 1/3? I don't think so. It's obvious that you can only pull
a red ball if it landed on heads (a 1/2 chance), just as you can only
pull a blue ball if it landed on tails (also a 1/2 chance). Even if I
extended it to 1 million blue balls, the odds are still 1/2.
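Here's a rough Python sketch of the bag experiment exactly as stated, one
pull per flip (whether one pull per flip is the right model of Beauty's
situation is precisely what's in dispute):

    import random

    def pull_one_ball():
        heads = random.random() < 0.5
        bag = ["red"] if heads else ["blue", "blue"]
        return random.choice(bag)

    pulls = [pull_one_ball() for _ in range(100_000)]
    print(pulls.count("red") / len(pulls))   # ~0.5, as claimed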
Basically, there is a lot of confusion over how to interpret the
problem. Many people are saying there are three cases (Monday & heads,
Monday & tails, Tuesday & tails), and believe that one of them is being
picked at random. If this were true, it definitely would be 1/3, but
it's not true. There are essentially only two cases: Monday & heads, or
Monday & tails. These are the only possibilities. If it's tails, then we
just do a lot of extra stuff. If it's heads, then we stop.
Some argue with the betting situation, where SB places a bet on whether
it's heads or tails. This is a different question from the original.
Here, people are taking into account the odds (as in expected returns),
instead of the probability (whether it's heads or tails). If I go back to the
bag analogy, it's like saying: "For every ball in the bag, whether one
or two, we will bet on what it is." Obviously, you would bet on blue
because it has better returns. It's equivalent to betting on the coin,
and having 1:1 returns for heads, while having 2:1 returns for tails.
On the other hand, SB is not even deciding if she wants to bet tails or
heads, she's being asked for the probability of it being heads. If it's
Monday, it's a 1/2 chance; if it's Tuesday (or any later day for the
"large n" case), it's still a 1/2 chance that heads came up. She's right
no matter what day it is if she says 1/2. But if we decide instead to
guess "heads or tails" when she wakes up, the question is transformed.
Now she will be right once if it's heads, and wrong numerous times if
it's tails.
That's my two cents. And not surprisingly, one's on heads and one's on
tails =)
p.s. I'm a little frustrated now because I had to break my "e" rule...
well, it was fun trying!
> > I thought we had already agreed that the phrase involving
> >"credence" was to mean "What probability should SB assign to the
> >event 'the coin flipped heads' "... ? Or has this discussion
> >progressed into a debate on the philosophy of probability?
>
> I was rather under the impression that the all the other discussion
> was simply begging that question. I could be wrong.
If I understand your position it is that we face a dilemma if our definition of
probability involves frequency of occurrence. After all, if we run the
experiment 200 times, then we will awaken 300 times and the coin will only be
heads 100 of those times - thus if we pick some random awakening there is a 1/3
chance that the associated coin flipped heads. All true so far. The problem
is that in reality, each awakening isn't equally likely, and so arguments based
on picking a *random* awakening don't tell us what the probability will be in
some awakening decided by the experiment. I don't think that the usual models
for probability are shaken by this problem.
You previously brought up the good point that we should concern ourselves with
how we should simulate this if we were to do so. We can follow the example of
how we measure the bias of a coin. We devise a trial whose outcome will be one
of two events (head or tails) and then repeat the experiment a large number of
times and compute the frequencies. So, now, what should constitute a trial in
our present case? We flip a coin, and if it comes up heads we will increment the
count on the event "The coin was heads when I awakened". What if it comes up
tails? Do we increment both "The coin was tails when I was awakened on Monday"
and the "The coin was tails when I was awakened on Tuesday?" No. That would
mean that a single trial had been used to count two *disjoint* events. Instead,
if the coin comes up tails, we must *pick* *one* day at random and increment its
event.
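A rough Python sketch of that trial protocol (names made up):

    import random

    counts = {"heads": 0, "tails-Monday": 0, "tails-Tuesday": 0}
    trials = 400_000
    for _ in range(trials):
        if random.random() < 0.5:
            counts["heads"] += 1
        else:
            # tails: credit exactly one day, chosen at random
            counts["tails-" + random.choice(["Monday", "Tuesday"])] += 1
    print({k: round(v / trials, 3) for k, v in counts.items()})
    # roughly {'heads': 0.5, 'tails-Monday': 0.25, 'tails-Tuesday': 0.25}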
I have another point that relates to your conversation with r e s, but I think
it is interesting enough not to be buried at the bottom of a long message.
You are at the lab to be cloned but are uneasy about going through with it. The
technician is annoyed by your indecision and tells you that he will decide for you.
You insist that you don't want to know his decision for your own peace of mind, and
so he agrees that after you are put to sleep for the procedure he will flip a coin
and only clone you if it comes up heads. When you awaken in a recovery room, it
dawns on you that you have no idea whether you are yourself or a clone!
1. What are the odds that you were cloned?
2. What are the odds that you are the clone?
This is more natural?
Given that the coin is fair:
The odds that you were cloned are 1/2. The odds that you are the clone
are 1/3.
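A quick Python tally of both questions, assuming the coin is fair (the
bookkeeping is made up, but it follows the story as told):

    import random

    runs = 200_000
    cloned_runs = 0
    wakers = []   # one entry per person who wakes up; True if a clone
    for _ in range(runs):
        if random.random() < 0.5:    # heads: you are cloned
            cloned_runs += 1
            wakers += [False, True]  # the original and the clone both wake
        else:
            wakers += [False]        # just you
    print(cloned_runs / runs)          # ~1/2: the odds you were cloned
    print(sum(wakers) / len(wakers))   # ~1/3: the odds you are the clone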
Before the experiment, both Beauty and we believe that the probability
of the coin coming up heads will be 50%. Both she and we believe that
there will be two occasions for an event to occur. Everyone knows that
for an observer who will not know how the coin landed or which day it is,
the following events will appear equally probable:
(a) Heads and Monday and we will awaken her.
(b) Heads and Tuesday and we will not awaken her.
(c) Tails and Monday and we will awaken her.
(d) Tails and Tuesday and we will awaken her.
When Beauty is awakened, she will know that she is awakened, but she
will not know how the coin landed and she will not know what day it is.
She will have more information than a completely ignorant observer.
She will know that she is awake. Event (b) has been eliminated for her.
The remaining events will appear equally probable for her. P(Tails)
will be 2/3. Also P(Monday) will be 2/3.
Someone has posted a scenario in which Beauty places a bet each time
she is awake. In such a scenario, if she does not base her bet on a
computation that P(Tails) is 2/3, she will suffer. That poster is right.
However, when Beauty is awakened, we will know that she is awakened,
we will know how the coin landed, and we will know what day it is.
We will have more information than a completely ignorant observer.
Event (b) is always eliminated for us, but also two other events will
be eliminated for us. At the time an event occurs, none of the events
will have probability 2/3 for us. One event will have probability 1
for us. Obviously we can't use this knowledge in placing a bet now
since we don't have the knowledge yet, but we sure will have it on
Monday and Tuesday. When we have already seen the coin land, if we
place a bet based on a computation that P(Tails) is 1/2 or that
P(Tails) is 2/3, we will suffer.
Here is another example. When Monty Hall opens a door, he already
knows if the contestant's first choice was right. The contestant
only gains enough information to compute that switching to the
remaining door has a 2/3 chance of winning while staying put retains
its old 1/3 chance. But Monty, and any other fully informed observer,
knows which door wins with probability 1.
So I think there is no longer any reason for confusion about whether
Beauty's computation of P(Tails) should be 2/3.
There still seems to be a paradox though. Someone else posted a
scenario in which Beauty places a bet before the experiment begins,
and then each time she is awakened, she gets a chance to cancel the
bet. She is given an advantage originally, paying $0.45 for a return
of $1 if the coin lands heads and a return of $0 if the coin lands
tails. Before the experiment begins, she and we believe P(heads) is
0.5 so it looks profitable to place a bet. When she is awakened,
she is allowed to cancel the bet. With her new information, she will
want to cancel. It is not really paradoxical that she might want to
cancel; it is paradoxical that she can figure out that she will always
want to cancel. This is the part that I would really like to resolve.
When Beauty is awakened, there is a 1/3 probability that she will
cancel a winning bet, a 1/3 probability that she will cancel a losing
bet, and a 1/3 probability that she will pseudo-cancel an already
canceled bet. We will know which case it is, but she will not.
But that does not resolve the paradox. If Beauty wants to be as
pedantic as I do, then the scenario will simply have its wording
made more accurate: When Beauty is awake she will have an option
to cancel or pseudo-cancel the bet, and she will not know which
operation actually takes place when she makes that decision, though
we will know. Since she will still always make that decision, the
paradox is still there.
This beats Newcomb's pseudo-paradox, that's for sure. In Newcomb's
problem the answer depends on what the premises really are. If the
premises are that the predictor is really perfect then the contestant
knows that it is most profitable to leave some money behind on the
table. If the premises are that causality exists then the contestant
takes both boxes. If the premises are that the predictor is really
perfect *and* causality exists then Bertrand Russell is the pope.
But Beauty has no inconsistent premises, at least not that I can see.
--
<< If this were the company's opinion, I would not be allowed to post it. >>
"I paid money for this car, I pay taxes for vehicle registration and a driver's
license, so I can drive in any lane I want, and no innocent victim gets to call
the cops just 'cause the lane's not goin' the same direction as me" - J Spammer
Of course they aren't. Think about it. :-)
Repeat the experiment for N AWAKENINGS, for large N. Heads will have
occurred N/3 times; tails will also have occurred N/3 times. But N/3
awakenings will have been preceded by heads, and 2N/3 awakenings will
have been preceded by tails. So the event
The coin came up heads and I was awakened
and the event
The coin came up tails and I was awakened
will occur different numbers of times. If they occur differently often
in a large number of trials, then BY DEFINITION they're differently
probable.
That's phrased in frequentist language, too. And from the amount of
information Beauty has when she's awake, it's right.
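For anyone who wants to see the per-awakening count in code, here is a
sketch (mine; the trial count is arbitrary):
---------python sketch below---------
import random

heads_awakenings = 0
total_awakenings = 0
for _ in range(100000):
    if random.random() < 0.5:
        heads_awakenings += 1    # heads: one awakening
        total_awakenings += 1
    else:
        total_awakenings += 2    # tails: Monday and Tuesday
print(heads_awakenings / total_awakenings)   # about 1/3
-------------------------------------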
> This is more natural?
Perhaps "more natural" wasn't the correct way to put it - "less contrived"
might have been better.
> Given that the coin is fair:
> The odds that you were cloned are 1/2. The odds that you are the clone
> are 1/3.
Correct! Now we just have to agree that this is isomorphic to the original problem from
Beauty's perspective when she wakes up, and we are done.
> Of course they aren't. Think about it. :-)
>
> Repeat the experiment for N AWAKENINGS, for large N. Heads will have
> occurred N/3 times; tails will also have occurred N/3 times. But N/3
> awakenings will have been preceded by heads, and 2N/3 awakenings will
> have been preceded by tails. So the event
> The coin came up heads and I was awakened
> and the event
> The coin came up tails and I was awakened
> will occur different numbers of times. If they occur differently often
> in a large number of trials, then BY DEFINITION they're differently
> probable.
I would like to point out that this is not what I was saying - these two events
*are* equiprobable. I think you actually agree with the guy you are arguing with (Ted -
bu...@pac2.berkeley.edu). You are counting every awakening that occurs in the
experiment - when what you should be doing is counting a particular random
awakening per experiment. (I posted more on this a post or two ago)
> Before the experiment, both Beauty and we believe that the probability
> of the coin coming up heads will be 50%. Both she and we believe that
> there will be two occasions for an event to occur. Everyone knows that
> for an observer who will not know how the coin landed or which day it is,
> the following events will appear equally probable:
> (a) Heads and Monday and we will awaken her.
> (b) Heads and Tuesday and we will not awaken her.
> (c) Tails and Monday and we will awaken her.
> (d) Tails and Tuesday and we will awaken her.
I would like to add some more "equally probable" events to your list:
(e) Heads and Wednesday and we will not awaken her.
(f) Heads and Thursday and we will not awaken her.
...
You see, you can't just claim that all of these things are equally probable. What
is the a priori probability that today is Monday?? Now, we could, by construction,
treat 'Monday' and 'Tuesday' as equally likely values of the random variable
'Today'. For example, an observer to this experiment picks a random day to
visit (Monday or Tuesday) with equal probability and independently of the coin toss
in the experiment. For him, the events you listed really are equiprobable. If
this observer learned that Beauty was awake, he could eliminate choice (b) and
proceed to draw all of your conclusions. Unfortunately, Beauty doesn't pick a day
to visit randomly and independently of the coin toss. If the coin came up heads
she *is* going to visit Monday; Tuesday just isn't a possibility anymore.
I agree that the toss is fair. I don't agree that the "random"
variable derived from the toss is fair. You're measuring the value of
the toss twice in the case that it is tails, and only once if it is
heads.
Suppose I flip a fair coin many times. Each time I flip it, I write
"H" if it is heads, and "TT" if it is tails. If I pick a random
letter from the page, what is the probability that it is a "T"?
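A quick sketch of this letter-picking experiment, with "many times" taken to
be 100,000 flips (an arbitrary choice of mine):
---------python sketch below---------
import random

page = "".join("H" if random.random() < 0.5 else "TT"
               for _ in range(100000))
picks = [random.choice(page) for _ in range(100000)]
print(picks.count("T") / len(picks))   # about 2/3
-------------------------------------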
}I would say that it's an explicit part of the problem that pr(H)=pr(T),
}meaning the probabilities unconditioned by any information other than
}the rules of the game. So, if, as you say below, you see a different
}distribution arising for H/T, it would presumably involve some
}pr(H|E)=/=pr(T|E) calculated for some conditioning event E, and it
}appears that you've taken E=A1. (Although I don't agree with the actual
}values you seem to have obtained for the unequal conditional
}probabilities -- I showed in another posting that pr(H|A1)=2/3.)
Yes, I misstated that -- I meant that you could resolve the contradiction
by stating P(T|A1)=1/(n+1), not P(H|A1). But that doesn't fit the
real situation.
> Suppose I flip a fair coin many times. Each time I flip it, I write
> "H" if it is heads, and "TT" if it is tails. If I pick a random
> letter from the page, what is the probability that it is a "T"?
Now try this one:
You flip a coin *once* and write "H" if it is heads and "TT" if it is tails.
If you pick a random letter from the page, what is the probability that it is
a "T"?
After all, Beauty is only going to participate in this experiment once.
> Secondly, think of it this way: Sleeping Beauty knows that no matter
> what, if it is Monday P(heads) = 1/2.
This isn't true. P(heads | Monday) = 2/3. That is, if she knows she got
woken up on Monday the odds are 2/3 that the coin was heads. I think I
agree with all of the other stuff you said, so maybe I just misunderstand
what you mean by this statement.
Matt McLelland wrote:
Oops! No, this is incorrect. The correct probability for this question is 1/4. The
question I meant to ask was "If you can tell that you are not the clone, what are the odds
that you were cloned?"
Suppose that the situation is as described, but we explicitly say that,
given a head outcome, we let SB sleep on Tuesday. Introduce another
character, Bob, who never keeps track of what day it is but knows all
the details of the SB experiment.
We now tell Bob that Beauty is awake, and ask Bob what he thinks
the probability that the coin came up heads is. He reasons:
                                    P(she's awake | flip was heads) P(heads)
P( flip was heads | she's awake ) = ----------------------------------------
                                                P(she's awake)

                                        1/2 * 1/2
                                  = -----------------
                                    1/2*1/2 + 1/2*1

                                  = 1/3
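If anyone wants to check Bob's arithmetic by brute force, here is a sketch
(it assumes Bob's model, in which the day he observes is chosen fairly and
independently of the flip):
---------python sketch below---------
import random

awake = 0
awake_and_heads = 0
for _ in range(100000):
    heads = random.random() < 0.5
    day = random.choice(["Monday", "Tuesday"])  # Bob's independent day
    # Given heads she sleeps on Tuesday; given tails she is awake both days.
    if day == "Monday" or not heads:
        awake += 1
        if heads:
            awake_and_heads += 1
print(awake_and_heads / awake)   # about 1/3
-------------------------------------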
Is there a good argument that SB's calculation should be different
from Bob's? Does the explicit statement that we let her sleep on
Tuesday given a head outcome change the problem?
Mike
In article <pl436000-150...@bootp-17.college.brown.edu>,
pl43...@brownvmDOTbrown.edu (Jamie Dreier) wrote:
> [original problem statement snipped]
1. Change the experiment so that if the coin comes up heads, she is
still woken once, but it is decided at random (e.g. by another
coin toss) whether to wake her on Monday or Tuesday (each with
probability 1/2). This surely can't change the answer.
2. Now consider a variant where she is put to sleep for only one day,
and woken at most once (so no need to make her forget); heads
means wake her with probability 1/2 (e.g. based on another coin
toss); tails means do wake her. If she is woken, her reckoning of
the probability that the coin came up heads is 1/3. (Agreed?)
(I'm assuming that the final awakening at the end of the
experiment cannot be confused with the awakenings we're interested
in.)
3. In the original experiment as modified in (1), allow her to ask
what day it is. If the answer is Monday, she is in an equivalent
position to that in (2) (since she knows that the plan implied
that she would be woken on Monday with probability 1/2 if the coin
came up heads, and with probability 1 if the coin came up tails);
therefore her reckoning of the probability is 1/3. Similarly, if
the answer is Tuesday then her reckoning of the probability is
1/3. Since it makes no difference to her reckoning what day it
is, her reckoning is 1/3 before she asks the question.
If she decides not to ask after all, then her reckoning is still
1/3 (since she has gained no new information by deciding not to
ask).
Merely knowing that she is allowed to ask what day it is can't
make any difference to her reckoning of the probability that the
coin came up heads, so her reckoning of the probability in (1) is
1/3. Hence her reckoning in the original question is also 1/3.
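Step (2) above is easy to check numerically; here is a sketch, with a second
simulated coin standing in for "wake her with probability 1/2":
---------python sketch below---------
import random

woken = 0
woken_after_heads = 0
for _ in range(100000):
    heads = random.random() < 0.5
    # Heads: wake her with probability 1/2 (second coin toss);
    # tails: wake her for certain.
    if not heads or random.random() < 0.5:
        woken += 1
        if heads:
            woken_after_heads += 1
print(woken_after_heads / woken)   # about 1/3
-------------------------------------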
--
John Rickard <John.R...@virata.com>
That's not the case. It would be the case if she was woken up on
Monday if the coin was heads, and Monday or Tuesday randomly if the
coin was tails. But in fact she is woken BOTH Monday and Tuesday if
the coin was tails.
> Now we're getting into yeat another semantic muddle. I use the term
> "confidence" to refer to a particular non-Bayesian measure of belief,
> the one usually meant when someone says they used "95%
> confidence intervals". If I analyze a statistical sample, I might
> make a statement about a hypothesis with "confidence 0.95". This is
> not at all the same as my saying that I estimate the hypothesis to be
> true with *probability* 0.95.
Hm, right.
Remind us what it does mean. ;-)
Give an example, preferably involving balls in urns. ;-)
> Now this is interesting because I understand two measures of something
> roughly corresponding to a degree or strength of belief, both measured
> on the scale [0,1], yet not equivalent to each other nor even having
> a well-defined mapping from one to the other. So maybe we should
> say "probability" rather than "credence" since at least this gives me
> a clue which of the two measures we're looking for.
'Probability' is fine.
As you know, I use these epistemic words, 'confidence', 'credence',
because that's my preferred interpretation of ordinary probability
statements.
I did try to explain in terms independent of any explicitly Bayesian
dogma. 'Confidence' is the converse of 'surprisingness'. When your
confidence (in my sense) for a certain event is high, you will not be very
surprised if and when it happens (when you learn that it has happened).
When your confidence is very low, you will be very surprised if it
happens.
At the limits, your confidence in an obvious tautology is 1, since the
surprisingness of the event, "Either it rains on Sunday or it doesn't", is
nil. Your confidence in a contradiction is 0, since the contradiction's
actually happening is so surprising as to be literally unimaginable.
At the midpoint, the surprisingness of the coin's landing Heads is exactly
the same as the surprisingness of its landing Tails; thus your confidence in
the two is also the same. So that's 1/2.
We can think of all confidences as limits of sequences of compounded
equally-surprising events, a la Ramsey.
> Usually I like my probability estimates to fit into
> a rational world view, and one of the ways I test their rationality is
> to imagine that I (or someone else) could make some sort of bet on them.
> (After all, that's the origin of this branch of mathematics, isn't it?)
Well, let's see.
Ian Hacking argues that the modern conception of probability arose by the
unification of two different concepts: the idea of relative long term
frequencies of repeatable events, and the idea of a degree of belief (as
in the Pyrrhonian skeptics' views). He thinks that these got unified by
actuaries in (as I recall) Holland and Flanders, who needed a science of
probability to run their insurance businesses. (So I guess Hacking is
vaguely a Marxist: concept formation driven by economic innovation.)
But you presumably mean something later, like Ramsey's formulation of
decision theory. That formulation is heavily dependent on an agent's
dispositions to bet. These dispositions reveal the agent's credences.
>
> In this case all the betting does is to point out difficulties with
> other possible measures of goodness such as "how suprised" Beauty
> should be. For example, if she were to assign probability 1/3 to
> heads, then she'll be twice as surprised when the coin is revealed to
> be heads as when it's revealed to be tails. But if the experimenter
> really does make this revelation each time before he puts Beauty back
> to sleep, then Beauty ends up being *exactly* as surprised by tails as
> by heads over the course of the experiment (half as surprised each
> time, but it happens twice as often). This minimizes her risk,
> i.e. the variance in how surprised she'll be. I find that a
> compelling argument for adopting this probability estimate, but not
> compelling enough to make me give up the notion that the estimate
> probably really should be 1/2 after all.
Ahhhh.
The 'surprisingness' point really is supposed to be a little different.
The probability you report is supposed to be an accurate measure of how
surprised you will actually be (higher prob => less surprised).
> A few related questions:
>
> Suppose a few seconds after Beauty wakes, the experimenter tells her
> what day it is? What should she assign as the probability that the
> coin came up heads, given that she's just been told today is Monday?
> r e s seems to think the answer is 2/3. This boggles me.
Me too.
That is the worst problem, I think, with my own gut feeling that she
should think (before hearing what day it is, but upon awakening) that the
probability is 1/2. If it is 1/2, then as far as I can tell she *has* to
change to 2/3 when she discovers that it is Monday.
> Suppose Beauty just woke up and has no other new information.
> What's the probability (in her rational view) that today is Monday?
This one does not bother me as much, directly.
The strong intuition is that she should think that Monday is *more likely*
than Tuesday. But both sides say this. The difference is that the Halfers
(like me) say that the chance that it's Monday is 3/4, while the Thirders
say that the chance that it's Monday is 2/3.
(Or have I messed that up?)
-Jamie
>> P(heads and today is monday) = P(tails and today is monday)
>
>This isn't true!
I see that I wasn't clear here. Let me specify what I meant by
this. By P(X) I mean simply the probability that event X will
occur during one complete run of the experiment. By that
definition, it is clear (I hope) that both of the above probabilities
are 1/2 (and hence they're equal).
You apparently mean something else (or rather, thought I meant
something else) by P(X), although, to be honest, I can't think of any
meaning one could attach to it that would make the statement below
true.
>P(heads and today is monday) = 1/2
>P(tails and today is monday) = 1/N (N being the number of awakenings)
>> > P(The coin came up heads and *this* is my first time up) = 1/2
>> >and
>> > P(The coin came up tails and *this* is my first time up) = 1/2*1/N
>>
>> I agree with this, but I don't see why it's relevant to the problem.
>
>? You agree? How is this different from the events with "this" replaced by
>"today"?
Hmm. My brain seems to have turned off when I wrote that.
I do not agree with the above statement, and I have no idea
why I wrote that I did. Sorry!
-Ted
I completely agree with this, and I don't think it contradicts
anything I said. Those two events aren't the ones I was talking about
when I said "these events are equiprobable." I was talking about the
events
"the coin came up heads and an awakening occurred on Monday,"
"the coin came up tails and an awakening occurred on Monday,"
"the coin came up tails and an awakening occurred on Tuesday."
Those events all occur equally often in an ensemble, so they're
equally probable. That's all I was saying.
As far as I can tell, you and I are in complete agreement.
-Ted
> An argument for 1/3 (which I am convinced is the correct answer).
Oh good, another brave soul has inducted themselves into the Hall of
Wrong.
> 1. Change the experiment so that if the coin comes up heads, she is
> still woken once, but it is decided at random (e.g. by another
> coin toss) whether to wake her on Monday or Tuesday (each with
> probability 1/2). This surely can't change the answer.
And it doesn't.
> 2. Now consider a variant where she is put to sleep for only one day,
> and woken at most once (so no need to make her forget); heads
> means wake her with probability 1/2 (e.g. based on another coin
> toss); tails means do wake her. If she is woken, her reckoning of
> the probability that the coin came up heads is 1/3. (Agreed?)
True. True.
> 3. In the original experiment as modified in (1), allow her to ask
> what day it is. If the answer is Monday, she is in an equivalent
> position to that in (2) (since she knows that the plan implied
> that she would be woken on Monday with probability 1/2 if the coin
> came up heads, and with probability 1 if the coin came up tails);
> therefore her reckoning of the probability is 1/3. Similarly, if
> the answer is Tuesday then her reckoning of the probability is
> 1/3. Since it makes no difference to her reckoning what day it
> is, her reckoning is 1/3 before she asks the question.
Bzzz. Wrongo. The innocuous-looking explanation in parentheses doesn't
cut it... and the statement it is explaining isn't true. In reality, after
you are told it is Monday, the odds are 50-50 that the coin came up heads.
For all of these new people who are jumping on the 1/3 bandwagon, I really
wish you would really think about the following situation. Don't try to
calculate probabilities using any method, but use your intuition:
A coin is flipped which is biased 1 zillion to 1 against tails. If the
coin flips tails then you are awakened zillion^2 times, heads only once.
You agree to do the experiment *1* time. You are put under and when you
wake up you are asked whether or not you think the coin came up heads.
To put things in perspective, maybe you and your friend (performing the
experiment) agree to put a bucket full of sand on the ground and you only
get woken up once until the sand spontaneously forms a 1 cubic meter
diamond by random molecule interactions. Do you *really* think that when
you get up after performing this experiment *1* time that you can be
*certain* that a 1 cubic meter diamond is waiting for you?
->I thought of a more natural situation in which this problem arises - cloning!
->
->You are at the lab to be cloned but are uneasy about going through with it.
->The technician is annoyed by your indecision and tells you that he will
->decide for you. You insist that you don't want to know his decision for
->your own peace of mind, and so he agrees that after you are put to sleep
->for the procedure he will flip a coin and only clone you if it comes up
->heads. When you awaken in a recovery room, it dawns on you that you have
->no idea whether you are yourself or a clone!
->
->1. What are the odds that you were cloned?
->2. What are the odds that you are the clone?
hah. I'd check for my appendectomy scar.
--
Carl Witthoft c...@world.std.com ca...@aoainc.com http://world.std.com/~cgw
Got any old pinball machines for sale?
I agree. I posted this very unclearly, so here's what I really meant.
P(tails, SOMEDAY) = 1/2. It's only if you are counting on a specific day
that the probability becomes 1/4. This is because the choice between
Monday and Tuesday is dependent on the first probability, that of
flipping the coin. However, if you just want to know if she will wake up
on some day due to a tails flip, then the probability is 1/2.
Here's another analogy I just thought of to explain the 1/2 for heads to
the "1/3 bandwagon." Imagine a driver on a highway. Whenever she comes
to a fork in the road, she will pick one of the two directions at
random. She approaches the first fork in the road (this is analogous to
the coin flip). Going left takes her to Offramp 1, while the highway
continues to the right. Further down that path, another fork (similar to
Monday vs. Tuesday) splits into two paths; left goes to Offramp 2 and
right goes to Offramp 3.
---------monospace font illustration below---------
1          2   3
 \          \ /
  \          |
   \         |
    \        |
     \      /
      \    /
       \  /
        \/
        |
      Driver
---------------------------------------------------
Let's say you know she left the highway on one of these offramps, but
you don't know which. What's the probability that she exited at Offramp
1? If it's not clear yet that the probability is 1/2 instead of 1/3,
then you must be viewing the problem incorrectly. You're right, it's not
completely isomorphic because she only exits at either 2 or 3 instead of
both, but it doesn't matter from S.B.'s point of view. When she wakes
up, she doesn't know if she's passed Monday and Tuesday or just Monday,
but it really doesn't matter. If the flip was tails, we wake her up
today (whether it is Monday or Tuesday). The probability of it being
Monday is the same as it being Tuesday, so it's just as if they woke her
up on one of those days at random, which is isomorphic to picking
between Offramp 2 and Offramp 3 at random.
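A sketch of the offramp analogy as stated (whether it maps onto Beauty's
problem is, of course, exactly what is in dispute):
---------python sketch below---------
import random

exit1 = 0
trips = 100000
for _ in range(trips):
    if random.random() < 0.5:   # first fork: left to Offramp 1
        exit1 += 1
    # otherwise the second fork sends her to Offramp 2 or 3;
    # either way she did not exit at Offramp 1
print(exit1 / trips)            # about 1/2
-------------------------------------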
->For all of these new people who are jumping on the 1/3 bandwagon, I really
->wish you would really think about the following situation. Don't try to
->calculate probabilities using any method, but use your intuition:
->
->A coin is flipped which is biased 1 zillion to 1 against tails. If the
->coin flips tails then you are awakened zillion^2 times, heads only once.
->You agree to do the experiment *1* time. You are put under and when you
->wake up you are asked whether or not you think the coin came up heads.
->To put things in perspective, maybe you and your friend (performing the
->experiment) agree to put a bucket full of sand on the ground and you only
->get woken up once until the sand spontaneously forms a 1 cubic meter
->diamond by random molecule interactions. Do you *really* think that when
->you get up after performing this experiment *1* time that you can be
->*certain* that a 1 cubic meter diamond is waiting for you?
Well, I'm not going to try to decipher your proposition there, but I would
like to point out as strongly as possible:
INTUITION DOES NOT WORK FOR STATISTICS!!!!!
For gosh sakes, doesn't the M.H. problem make that clear to you?
If you can't prove or disprove your proposition using mathematical
expressions, then you don't have a handle on the problem (sort of like the
infamous "to learn something teach it; to understand it, program in on a
computer").
And in fact I don't accept that a biased coin yields a situation analogous
to the original problem.
> ->A coin is flipped which is biased 1 zillion to 1 against tails. If the
> ->coin flips tails then you are awakened zillion^2 times, heads only once.
> ->You agree to do the experiment *1* time. You are put under and when you
> ->wake up you are asked whether or not you think the coin came up heads.
> ->To put things in perspective, maybe you and your friend (performing the
> ->experiment) agree to put a bucket full of sand on the ground and you only
> ->get woken up once until the sand spontaneously forms a 1 cubic meter
> ->diamond by random molecule interactions. Do you *really* think that when
> ->you get up after performing this experiment *1* time that you can be
> ->*certain* that a 1 cubic meter diamond is waiting for you?
>
> Well, I'm not going to try to decipher your proposition there, but I would
> like to point out as strongly as possible:
>
> INTUITION DOES NOT WORK FOR STATISTICS!!!!!
I often hear "intuition is often wrong". What that really means is "often
people have poor intuition".
> For gosh sakes, doesn't the M.H. problem make that clear to you?
The M.H. problem is perfectly intuitive. *In fact*, the way I typically
convince a lay person of the correctness of the "switch" answer in the MH
problem is to ask them to imagine that there are 1 million doors. You then pick
one and Monty, knowing where the prize is, opens 999,998 doors which don't have
the prize behind them. Then I ask them, do you really think you got lucky and
picked the right door initially? Because if you didn't guess correctly then the
prize lies behind the other door. Most people find this argument *very*
compelling. I was hoping that the equally compelling argument above could be
used to convince people to at least keep an open mind. Unless you can provide
an explanation as to why a biased coin is different, or you realize that 50% is
the answer by some other means, I suggest you look at it. The idea is that if
you bias the coin strongly in favor of heads, it is clear that there is a good
chance you will wake up due to a head - regardless of what happens if it comes
up tails.
> If you can't prove or disprove your proposition using mathematical
> expressions, then you don't have a handle on the problem (sort of like the
> infamous "to learn something teach it; to understand it, program in on a
> computer").
In fact, the result of 50% is very easily proved mathematically, as r e s has
shown *repeatedly*. Furthermore, I have repeatedly gone to the trouble of
exposing the fallacy in every position I have seen which favors 1/3 as the
answer.
> And in fact I don't accept that a biased coin yields a situation analogous to
> the original problem.
What do you mean? You think that biasing the coin changes things dramatically?
Why don't you explain this.
You forgot some:
(g) Tails and Wednesday and we will not awaken her.
(h) Tails and Thursday and we will not awaken her.
...
There is no problem. You create a new experiment adding Wednesday and
Thursday and ..., then you add these events, and they are all equally
probable.
Beauty's conditional probabilities don't change. When she is awake,
she gains information that events (b), (e), (f), (g), (h), ... have
been eliminated. The three remaining events remain equally probable
(1/3 chance) for her.
If you don't believe it yet, consider solely coin flips. Someone else
set up an experiment where a 1-yen coin and a 5-yen coin are both flipped
once. Events HH, HT, TH, and TT are equally probable. Then you want
to object that adding a 10-yen coin changes the probabilities. It
does not. Events HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT will be
equally probable. Meanwhile, for the 1 and 5, events HH, HT, TH, and
TT will continue to exist and will remain equally probable, though of
course their probabilities will be 0.25 not 0.125.
Suppose with just the 1 and 5, if event HH or TH or TT occurred then
the person informs you that there was a flip, but if event HT occurred
then he does not inform you. I say that when you are informed of a
flip you see 3 equally probable possibilities, and I'm not quite sure
yet what you say about this. But you say that if a 10-yen coin is
added then my version is disproved. I think not. With a 1, 5, and
10, if event HHH or THH or TTH occurred then the person will inform
you that there was a flip, but if one of the other five events occurs
then he will not inform you. Then I still say when you are informed
that there was a flip, you have 3 equally probable possibilities.
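That last claim can be checked by direct enumeration; a sketch:
---------python sketch below---------
from itertools import product

# Coins in order (1-yen, 5-yen, 10-yen); all 8 outcomes equally probable.
outcomes = list(product("HT", repeat=3))
# You are informed only on HHH, THH, or TTH, as in the scenario above.
informing = {("H","H","H"), ("T","H","H"), ("T","T","H")}
informed = [o for o in outcomes if o in informing]
for o in informed:
    # conditional probability of each (1-yen, 5-yen) pair, given informed
    print(o[0] + o[1], 1 / len(informed))   # HH, TH, TT: each 1/3
-------------------------------------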
>You see, you can't just claim that all of these things are equally probable. What
>is the a priori probability that today is Monday??
Not needed. If this is what's bothering you, change the wording of the
original problem just to say that Beauty is awakened twice for tails but
only once for heads. No weekdays involved.
Right. But what is the mapping onto Beauty's problem? If you put one
red ball in the bag, Beauty will draw it once. If you put two blue balls
into the bag, Beauty will draw twice. See it now?
(In the original problem the blue balls can be distinguished and we know
which order they will be drawn in, but this adds no information about
probabilities, it merely clarifies the mapping.)
You know for one flip that Beauty has a 50% chance of drawing a red ball once,
and that Beauty has a 50% chance of drawing a blue ball twice.
Beauty knows that for one drawing she has a 1/3 chance of drawing red
and a 2/3 chance of drawing blue. Beauty doesn't know when you flipped.
1/3 of the awakenings are immediately preceded by heads.
1/3 of the awakenings are immediately preceded by tails.
1/3 of the awakenings are immediately preceded by awakenings, and the
preceding flip was tails.
Rather than rebutting each point you have made, I am going to try to explain how
your coin experiment looks from the viewpoint of an observer versus the viewpoint
of Beauty, as that is where I believe the heart of your error lies.
> Suppose with just the 1 and 5, if event HH or TH or TT occurred then
> the person informs you that there was a flip, but if event HT occurred
> then he does not inform you. I say that when you are informed of a
> flip you see 3 equally probable possibilities, and I'm not quite sure
> yet what you say about this.
This analysis is correct. If you were informed that there was a flip, then the
only thing you could eliminate is HT, and the other cases are equally probable. This
problem is isomorphic to the following:
A coin is flipped once. If it comes up heads Beauty is awakened on Monday. If the
coin comes up tails she is awakened on both Monday and Tuesday. You, as an observer,
are awakened on a random day (Monday or Tuesday) and are not told which. When you
learn that Beauty was awakened on this same day, what is the probability that the coin
flipped heads?
The answer is 1/3 as you have correctly determined. This can be seen because the day
you are awakened is *independent* of the outcome of the coin toss. We can therefore
conclude that HM, HT, TM, and TT are four equally probable events. When you learn that
Beauty is awake, it eliminates possibility HT and leaves three equally likely choices.
This is *not* the same as the original problem. In the original problem, the day on
which Beauty is awakened is *not* independent of the coin toss, it is intimately
dependent on it.
When Beauty prepares to undergo the experiment, she knows that there is a 50%
chance that the coin will come up heads and she will be awoken on Monday. She *knows*
that she will be woken up no matter what happens. When she awakes, therefore, she has
not eliminated *any* possibility and so the probability of heads is still 50%. It is
ridiculous to say she has eliminated the possibility "I am not awake and it is
tuesday", as that was an impossibility going into the experiment. The argument in this
last paragraph is really very compelling, and I would ask that you really try to find
the fault in this reasoning. (i.e., don't just give another calculation deriving your
answer. Point to the step in this logic which is faulty and explain why.)
No. This seems like utter nonsense to me. I can't see any interpretation
of P(tails,Monday) that would justify this approach, at least not obviously.
For example, if P(tails,Monday) means the prior probability (before the
start of the experiment) that the coin will come up tails and Beauty will
be awakened on Monday, then P(tails,Monday) = 1/2.
If P(tails,Monday) is supposed to be Beauty's estimate that "today is
Monday and the coin came up tails", calculated during one of her
waking periods, then surely it's obvious by now that some of us find
this calculation as obscure as hell, and saying it's "obviously" 1/4
is not going to be in the least reassuring to us.
--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962
Since diamonds are composed of carbon and sand is composed mainly of
silicon and oxygen, I'd say with probability 1 (or practically 1) that
I'd never wake a second time. (To tell the truth, I'm not sure what
exactly you meant by "you only get woken up once until ... .")
Of course. This is what Mr. Bunn was saying, except that I corrected
errors in what was being counted.
>these two events *are* equiprobable.
They are not.
>I think you actually agree with the guy you are arguing with (Ted -
>bu...@pac2.berkeley.edu).
Yes and no. I agree with him that frequentism is a valid method of
computing probabilities. I disagree on what to count and on the result
that is obtained, which is why I corrected his posting.
>You are counting every awakening that occurs in the experiment - when what
>you should be doing is counting a particular random awakening per experiment.
Fine. Out of N awakenings, pick one awakening with uniform distribution
over the set of all awakenings. The flip most recent before the awakening
has a 2/3 chance of being tails.
> You know for one flip that Beauty has a 50% chance of drawing a red ball once,
> and that Beauty has a 50% chance of drawing a blue ball twice.
> Beauty knows that for one drawing she has a 1/3 chance of drawing red
> and a 2/3 chance of drawing blue. Beauty doesn't know when you flipped.
No, and I think this example really cuts to the heart of the matter. If there was
one flip, and Beauty picked a random drawing, then the odds of the drawing being
red are 1/2. The odds of the drawing being blue are 1/2.
ON THE OTHER HAND, let's change the original problem so that the experiment is
repeated a large number of times. That is, a coin is repeatedly flipped, and
Beauty is awakened once on heads and twice on tails each time. Her amnesia of
previous events remains throughout the entire procedure, of which she is fully
aware. Now, when she awakens, what probability should she attach to the event that
the coin resulting in her awakening was heads?
No, only if you have two different ways of counting frequency of occurrence,
and you believe *both* are valid. Otherwise, there's no dilemma, merely
the question "what is probability?"
>We flip a coin, and it comes up heads we will increment the count on
>the event "The coin was heads when I awakened". What if it comes up
>tails? Do we increment both "The coin was tails when I was awakened
>on Monday" and the "The coin was tails when I was awakened on
>Tuesday?" No. That would mean that a single trial had been used to
>count two *disjoint* events.
But if you actually ran the experiment, and *if* the coin came up tails,
then you *would* be awakened on Monday and also on Tuesday, i.e., both
events would actually have occurred. How then can these two events
be "disjoint"?
> Matt McLelland <mat...@flash.net> wrote:
> >To put things in perspective, maybe you and your friend (performing the
> >experiment) agree to put a bucket full of sand on the ground and you only
> >get woken up once until the sand spontaneously forms a 1 cubic meter
> >diamond by random molecule interactions. Do you *really* think that when
> >you get up after performing this experiment *1* time that you can be
> >*certain* that a 1 cubic meter diamond is waiting for you?
Ok. This was full of error. I meant that you put a bucket of coal on the
ground and go to sleep. If it turns into a diamond you do the sleeping beauty
thing of waking up and going back to sleep, forgetting all details, 1
zillion times. If it doesn't you wake up just once and get to go on with your
life. Do you really expect to get a big diamond this way?
> But if you actually ran the experiment, and *if* the coin came up tails,
> then you *would* be awakened on Monday and also on Tuesday, i.e., both
> events would actually have occurred. How then can these two events
> be "disjoint"?
The two events "The coin was heads and today is monday" and "The coin was tails
and today is tuesday" are disjoint - thus when we do repeated trials we
shouldn't be counting both of them with one trial. Again you must be careful
to distinguish between events like "Tails and she was awakened on monday" and
"Tails and today is monday". We are trying to find the probability of the
second type. Events of the first type are all the same event.
Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
themselves (or us) by now.
OK. That's easy:
(1) Suppose I have an urn here. I didn't really have any knowledge
what was in it until a minute ago, but then I reached in and felt a
bunch of balls, so I stirred them up very thoroughly and pulled one
out, looked at it, and dropped it back in. I repeated this five
times. Each time, the ball was black. I would say then that this
provided evidence that most of the balls in the urn are black, with
confidence 95% (actually almost 97% if you want to push it).
(2) This is not the same as my saying that with 95% probability most
of the balls in the urn are black. I don't know what probability I
can rationally assign to that event.
Technically, of course my 95% confidence from paragraph (1) is a
probability, but it's a probability conditioned on a counterfactual.
To wit, it means that if it *wasn't* true that the urn had a majority
of black balls, there's only a 5% probability (less, actually) that
I'd have pulled out five black ones in a row. I say counterfactual
because, having run the experiment, I have been led to disbelieve
that the condition is true. In any case this probability is not at
all the same as the one in paragraph (2).
Now if you were to persuade me that it was highly unlikely that the
balls were black in the first place, more unlikely even than the
counterfactual probability in (1), then I probably ought not to
conclude that the balls are mostly black. So in reality, this sort of
statistics only really works to establish facts that seemed likely
(but we're quite fuzzy about how likely) to begin with.
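The numbers in (1) are easy to reproduce; a sketch, taking the weakest case
of "not most black" (exactly half black), which is where the bound comes from:
---------python sketch below---------
# P(five black draws in a row | at most half the balls are black) <= (1/2)^5
p = 0.5 ** 5
print(p)        # 0.03125
print(1 - p)    # 0.96875 -- the "almost 97%" confidence
-------------------------------------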
>As you know, I use these epistemic words, 'confidence', 'credence',
>because that's my preferred interpretation of ordinary probability
>statements.
"Credence" is fine; I objected to "confidence" only because of the
semantic overloading.
>At the midpoint, the surprisingness of the coin's landing Heads is exactly
>the same as the surpringness of its landing Tails; thus your confidence in
>the two is also the same. So that's 1/2.
What's the surprisingness of the following sequence of ten coin flips:
HTTHTHHTTH
I don't see anything surprising about it (I just generated it with a
real coin), but its prior probability was 1/1024. (I would have been
surprised by ten tails in a row, but only because I'm a fallible human.)
>We can think of all confidences as limits of sequences of compounded
>equally-surprising events, a la Ramsey.
Oh, then you could say my sequence is really no more surprising than
the other 1023 disjoint possible outcomes of that experiment.
>> to imagine that I (or someone else) could make some sort of bet on them.
>> (After all, that's the origin of this branch of mathematics, isn't it?)
>
>Ian Hacking [...] thinks that these got unified by
>actuaries in (as I recall) Holland and Flanders, who needed a science of
>probability to run their insurance businesses. [...]
>
>But you presumably mean something later, [...]
No, I'm trying to recall a story wherein a certain gambler is said to
have asked a certain famous mathematician to figure the odds on a certain
gambling game. Unfortunately I don't recall the details at the moment.
Bets made by insurance companies would certainly fit into the concept
I had in mind, though.
>> For example, if she were to assign probability 1/3 to
>> heads, then she'll be twice as surprised when the coin is revealed to
>> be heads as when it's revealed to be tails.
>
>Ahhhh.
>The 'surprisingness' point really is supposed to be a little different.
>The probability you report is supposed to be an accurate measure of how
>surprised you will actually be (higher prob => less surprised).
But I thought the reason that we bother to calculate probabilties in
most practical cases is to tell us how surprised we *should* be when
the low-probability event occurs. Otherwise, HHHHHHHHHH actually would
have lower probability than HTTHTHHTTH, and the car would be behind your
originally-chosen door with probability almost 1/2 (because most people
seem to find it equally surprising that it might turn up behind the other door).
On the other hand, I don't think my model of minimizing "surprise risk"
was very good, so I withdraw it.
Note that when we introduce betting, there are still subtleties. For
example, if we ask Beauty, "Do you want to bet <blah blah blah>," her
rational response might be, "Wait a minute. Are any bets I might make
this week cumulative, or does my answer to that question simply
supersede any previous answers I might have given to it?" Our answer
to this should determine what odds she's willing to take. I suspect
there may be similar subtleties in defining the probability in any
other way.
If I'm awake and I know the experiment has not yet ended, then hell yes.
Once is enough, never mind the zillion times. You just guaranteed me no
other possibility exists.
I'm assuming a zillion is a lot smaller than the odds against coal turning
into a diamond, however, so when they wake me up I'll figure it's almost
sure that they're about to send me home. I'll be even surer that there's
no diamond when they say, "OK, you can go home now."
I'm assuming in the original Sleeping Beauty experiment that Beauty knows
she won't be interviewed at the end of the week, at least not the way
she is on Monday or (possibly) Tuesday. So when we ask her what she
thinks the probability of heads is, she knows the experiment isn't over
and can answer accordingly.
Did you justify this claim then? (I forget.) Can you?
Then maybe you should have phrased the events that way, rather than
"The coin was tails when I was awakened on Monday" as you did in the
post to which I responded. You've trimmed too much from the message
above; it's very hard for anyone to see what I was referring to.
>Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
>themselves (or us) by now.
If they've half a brain (or better) they're killing every message in this
thread before reading it, as I do with all uninteresting (to me) threads.
What's the probability that whoever last read this post is ready to
shoot me? :-)
Question:
Let one trial consist of a single coin toss. If the toss comes up heads, then
Beauty (SB) is awakened once. If the toss comes up tails, she will awaken N times. Now
suppose that for our experiment we tell SB that we are going to conduct M trials. The
usual forgetting rules apply each time she awakens. When she awakens, what are the
odds that the last coin flipped came up heads?
Solution:
To simplify explanation, I am going to describe how to solve the case of M=2, N=2 and
then wave my hands and give a general formula. I suspect that anyone who doesn't like
my formula for the general case also won't like my explanation for M=2, N=2, and so
this just helps keep things less verbose.
There are four equally probable events corresponding to the coin toss results in the
first and second trials: HH, HT, TH, TT. That these are equiprobable comes from
the fact that the trials are independent. In the first case, there will be two
awakenings, both of which come from heads. In the second and third cases (HT and
TH) there will be three awakenings, of which only 1 will be heads. In the final case of
TT, there will be four awakenings - all from tails. So, the probability that a
particular awakening (when she awakens at some random time) comes from heads given that
the coins came up HH will be 1. Similarly the same conditional probability will be 1/3
given HT, 1/3 given TH, and 0 given TT. By the law of total probability, this gives a
value of 1*1/4 + 1/3*1/4 + 1/3*1/4 = 5/12 that the last coin flipped was heads if you
don't know anything about the two coin flips.
Using a generalized argument, I calculate the solution of the general problem to be:
SUM K=0 to M OF { (M choose K) (M-K) / (M+N*K) } / 2^M
Go ahead and calculate values of this thing for large M. You will see that it
converges to 1/3. However, the original case of M=1, N=2 still evaluates to 1/2. This
shows, if you believe this result, that you *cannot* measure the probability of heads
in a single experiment by simply repeating the experiment a large number of times and
then claiming that the typical result there should apply to the case of M=1.
> Go ahead and calculate values of this thing for large M. You will see that it
> converges to 1/3.
This should be 1/(N+1) here since N wasn't necessarily 2.
Also, there is a mistake in my posted formula. Instead of
SUM K=0 to M OF { (M choose K) (M-K) / (M+N*K) } / 2^M
it should be
SUM K=0 to M OF { (M choose K) (M-K) / (M+(N-1)*K) } / 2^M
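Here is a sketch that just transcribes the corrected formula, for anyone who
wants to watch the convergence:
---------python sketch below---------
from math import comb

def p_heads(M, N):
    # SUM K=0 to M OF { (M choose K) (M-K) / (M+(N-1)*K) } / 2^M
    return sum(comb(M, K) * (M - K) / (M + (N - 1) * K)
               for K in range(M + 1)) / 2 ** M

print(p_heads(1, 2))     # 0.5        (the original problem)
print(p_heads(2, 2))     # 0.41666... (= 5/12)
print(p_heads(500, 2))   # close to 1/3 (= 1/(N+1))
-------------------------------------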
> Matt McLelland <mat...@flash.net> wrote:
> >what you should be doing is counting
> >a particular randomawakening per experiment. (I posted more on this a
> >post or two ago)
>
> Did you justify this claim then? (I forget.) Can you?
I don't know if I can to anyone's satisfaction =). It is clear as can be to
me, however. To approximate the odds of a certain event E occurring in a
trial, one way to do it is to run a bunch of trials and put
P(E) = (number of times E occurred in N trials) / N
Let Heads be the event that the coin came up heads. Similarly Tails.
Let Monday be the event that she was woken up on Monday.
It is simple to see that if we use this as the definition of probability
P(Heads and Monday) = 1/2. (If we run a bunch of trials, about half will be
from heads). So under this definition, in half of the trials, when she
wakes up the coin will have come up heads, and so the odds of the coin being
heads when she wakes up is 1/2.
Notice I haven't mentioned what happens on the tails side of the coin yet -
and I have still managed to compute the odds that the coin is heads
when she wakes.
Poor? Work all of the time and barely make ends meet? Feel like ending it
all? Come to me first!
At my clinic, we will buy you a lottery ticket. Now everyone knows that your
odds of winning our state lottery are only 1 in a million, but the $20 million
jackpot this time can almost certainly be yours if you just call me now!
When you arrive you will be given a warm bath and fed a four star meal of your
choice. When you are done being pampered, you just lie down and go to sleep.
While you are sleeping, our trained personnel will watch the TV to see if your
lottery ticket is a winner. If it isn't, then you just pay our small $250
service fee and go on about your day. If it is, then you will be repeatedly
awakened into lavish surroundings and by beautiful women every day for the rest
of your life. How you ask?!? Well, at the end of every 30 minute interval, we
will wipe your memory clean up to the time you arrived! The mathematics behind
this miracle are complicated, so I won't bore you, but if we suppose that you
will live 50 more years, then your odds of awakening to find that you have won
are amazingly over 50%! Wouldn't you pay $250 to have a 50% chance of winning
the lottery?!? The number is 1800-IMAFOOL. Operators are standing by.
>In the Monty Hall simulations, it was pretty obvious how to decide the
>question. Propose a contestant strategy, play a million games according
>to this strategy, and see how many cars you win. The strategy that wins
>the most cars is clearly best (unless you prefer goats).
>
>Here, we can certainly run a simulation, but I don't see any
>rational (if you'll pardon the term) way for us to agree on how
>to score the results. Maybe that's what the controversy is all about.
It seems to me that it's a simple application of game theory. Call the
two players Adversary and Beauty, and put the coin away for a few moments.
Adversary has two moves in the game depending on when to wake Beauty
(which I'll call M and MT), and Beauty has two moves in guessing which day
it is (which I'll call M and T). We'll say that Beauty gets a point if
she's right and no point if she's wrong. Here's the payoff matrix (y'all
have fixed-width fonts, right?):
              B
            M     T
       M    1     0
  A
       MT  0.5   0.5
Up to this point, we've ignored the fair coin, but all it's saying is that
we know that A's strategy vector is fixed at [0.5, 0.5]. So, if my game
theory and matrix multiplication skills haven't left me, the payoff vector
for Beauty's pure strategies is [0.75, 0.25], i.e. Monday is the right
answer _3/4_ of the time.
I'm not sure if I should be apologetic or defensive in that I didn't come
up with 2/3. Perhaps my implementation of the model is wrong (which I
wouldn't put past me), but I think it's also possible that people haven't
observed that "which day is it?" isn't an independent variable, and only
by considering the model where A has a choice does the elementary solution
flow freely.
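The matrix arithmetic is easy to verify; a sketch:
---------python sketch below---------
payoff = [[1.0, 0.0],    # Adversary plays M:  Beauty guesses M, T
          [0.5, 0.5]]    # Adversary plays MT
adversary = [0.5, 0.5]   # strategy vector fixed by the fair coin
beauty = [sum(adversary[i] * payoff[i][j] for i in range(2))
          for j in range(2)]
print(beauty)            # [0.75, 0.25]
-------------------------------------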
-Matthew
---
Matthew Daly mwd...@pobox.com http://www.frontiernet.net/~mwdaly/
Though he is a person to whom things do not happen, perhaps they
may when he is on the other side. - E. Gorey
> [...] i.e. Monday is the right answer _3/4_ of the time. [...]
This analysis seems correct, and has come to the correct conclusion in that
when Beauty awakes, the probability that it is Monday is 3/4. However, this
wasn't the question we were debating (though that the 2/3 crowd would probably
disagree with your result). We were debating the probability that the coin
flipped heads and to a lesser extent the probability that the coin came up
heads if we told her it was Monday. You've found the right answer to the
wrong question!
>
>It is simple to see that if we use this as the definition of probability
>P(Heads and Monday) = 1/2. (If we run a bunch of trials, about half will be
>from heads). So under this definition, in half of the trials, when she
>wakes up the coin will have come up heads, and so the odds of the coin being
>heads when she wakes up is 1/2.
>Notice I haven't mentioned what happens on the tails side of the coin yet -
>and I have still have managed to compute the odds that the coin is heads
>when she wakes.
>
The above encapsulates your error very clearly.
The odds of the coin being heads DURING A TRIAL are 1/2. The odds of the coin
being heads DURING AN AWAKENING are 1/3.
Suppose marmalade comes in large jars and honey comes in small jars. When the
larder is empty I choose one at random. Today I may or may not have gone to
the shop, and am eating either marmalade or honey. What is the chance I am
eating marmalade? (Ans. >50%)
Next time I go to the shop, what is the chance I bought marmalade last time?
(Ans. 50%).
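A sketch of the jars, if you like; the sizes are invented (marmalade lasting
two days, honey one), but any large/small split gives the same qualitative
answer:
---------python sketch below---------
import random

days = []
for _ in range(100000):              # one jar bought per shop visit
    if random.random() < 0.5:
        days += ["marmalade"] * 2    # large jar: lasts two days
    else:
        days += ["honey"]            # small jar: lasts one day
print(days.count("marmalade") / len(days))  # about 2/3 of days, i.e. >50%
# Counted per shop visit, the chance the last jar was marmalade is 1/2.
-------------------------------------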
- Gerry Quinn
http://bindweed.com
Good point. I didn't try to understand every post in the thread, but I
thought that I had at least read the first one correctly.... <sigh>
-Matthew, who seems to have some mental allergy to the word "Bayesian"
The problem with the SB problem isn't with the mathematics; it's
mapping the mathematics to the problem. r e s has shown that there is
a problem where 1/2 is the answer, and that in that problem, if
SB knows it is Monday, she should answer that the probability of heads
is 2/3. I don't think that's the problem posed.
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."
> Jamie Dreier <pl43...@brownvmDOTbrown.edu> wrote:
> >ka...@shore.net (David A Karr)
> >> If I analyze a statistical sample, I might
> >> make a statement about a hypothesis with "confidence 0.95". This is
> >> not at all the same as my saying that I estimate the hypothesis to be
> >> true with *probability* 0.95.
> >
> >Give an example, preferably involving balls in urns. ;-)
>
> OK. That's easy:
>
> (1) Suppose I have an urn here. I didn't really have any knowledge
> what was in it until a minute ago, but then I reached in and felt a
> bunch of balls, so I stirred them up very thoroughly and pulled one
> out, looked at it, and dropped it back in. I repeated this five
> times. Each time, the ball was black. I would say then that this
> provided evidence that most of the balls in the urn are black, with
> confidence 95% (actually almost 97% if you want to push it).
Ok.
So let's see, what you did was to find
pr(drew five out of five black | NOT[most balls are black])
and the confidence in [most balls are black] was 1 - that probability.
So the confidence in the hypothesis, given the evidence, is
1 - pr(E | ~H) (where the "~" is negation)
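(Filling in the numbers on the natural reading: if ~H holds, at most half
the balls are black, so pr(E | ~H) <= (1/2)^5 = 1/32, about 0.031, which
makes the confidence at least 1 - 1/32, about 0.969 -- presumably the
"almost 97%" above.)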
> (2) This is not the same as my saying that with 95% probability most
> of the balls in the urn are black. I don't know what probability I
> can rationally assign to that event.
Right. Neither do I. It is entirely open, depending on your priors.
Surprisingness:
> What's the surprisingness of the following sequence of ten coin flips:
> HTTHTHHTTH
It's very surprising. You know, like 1 - 2^(-10) surprising. As are all
sequences of ten flips.
> I don't see anything surprising about it (I just generated it with a
> real coin), but its prior probability was 1/1024. (I would have been
> surprised by ten tails in a row, but only because I'm a fallible human.)
Yeah.
That's a different concept, a different kind of surprise.
> Oh, then you could say my sequence is really no more surprising than
> the other 1023 disjoint possible outcomes of that experiment.
Right, but all of them are pretty darned surprising. Compared to this
proposition: at least one of the ten flips was a Head.
[The birth of probability theory]
> No, I'm trying to recall a story wherein a certain gambler is said to
> have asked a certain famous mathematician to figure the odds on a certain
> gambling game. Unfortunately I don't recall the details at the moment.
Oh, that one. That's Blaise Pascal, and the gambler was some French noble.
And the game was a dice game resembling Birdcage. Sure, I guess that has
as good a claim as any to the origins of the science of probability.
> Note that when we introduce betting, there are still subtleties. For
> example, if we ask Beauty, "Do you want to bet <blah blah blah>," her
> rational response might be, "Wait a minute. Are any bets I might make
> this week cumulative, or does my answer to that question simply
> supersede any previous answers I might have given to it?" Our answer
> to this should determine what odds she's willing to take.
Absolutely.
There are also some other difficulties, like, she might start thinking,
"When people start asking you to bet on obvious things, there's almost
always some kind of a trick involved." (Beauty hangs around in bars a lot
and once got bit on the nose by the Jack of Clubs.) Or she might say, "Oh,
I hate gambling, it aggravates my ulcer and it's immoral." Those seem like
mere annoyances, though. The question you had her ask is much more
serious.
-Jamie
--
SpamGard: For real return address replace "DOT" with "."
> It seems to me that it's a simple application of game theory. Call the
> two players Adversary and Beauty, and put the coin away for a few moments.
> Adversary has two moves in the game depending on when to wake Beauty
> (which I'll call M and MT), and Beauty has two moves in guessing which day
> it is (which I'll call M and T). We'll say that Beauty gets a point if
> she's right and no point if she's wrong. Here's the payoff matrix (y'all
> have fixed-width fonts, right?):
>
>            B
>          M     T
>     M    1     0
> A
>     MT  0.5   0.5
>
> Up to this point, we've ignored the fair coin, but all it's saying is that
> we know that A's strategy vector is fixed at [0.5, 0.5]. So, if my game
> theory and matrix multiplication skills haven't left me, the payoff vector
> for Beauty's pure strategies is [0.75, 0.25], i.e. Monday is the right
> answer _3/4_ of the time.
>
> I'm not sure if I should be apologetic or defensive in that I didn't come
> up with 2/3. Perhaps my implementation of the model is wrong (which I
> wouldn't put past me), but I think it's also possible that people haven't
> observed that "which day is it?" isn't an independent variable, and only
> by considering the model where A has a choice does the elementary solution
> flow freely.
Hmmmm.
As Matt Mc. noted, you didn't answer the original question, but the answer
you give falls squarely on the Halfers side.
But, when you say that 'Monday' is the right answer "_3/4_ of the time",
which 'times' are you considering? Note that if, in repeated trials,
Beauty always guesses 'Monday', she will be right only 2/3 of the 'times'.
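To make the two bookkeeping schemes explicit, here is a sketch (my own,
assuming Beauty always guesses 'Monday'; heads = Monday only, tails =
Monday and Tuesday):

    import random

    trials = 100000
    per_trial_score = 0.0     # average correctness within each trial
    right = wakes = 0         # raw per-awakening tally
    for _ in range(trials):
        days = ["Mon"] if random.random() < 0.5 else ["Mon", "Tue"]
        hits = sum(d == "Mon" for d in days)
        per_trial_score += hits / len(days)
        right += hits
        wakes += len(days)
    print(per_trial_score / trials)   # ~3/4: the payoff-matrix convention
    print(right / wakes)              # ~2/3: fraction of awakenings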
What's a trial? If we run the experiment multiple times and count
each experiment as a trial, we find that
P(Heads and Monday) = 1/2
P(Tails and Monday) = 1/2
P(Tails and Tuesday) = 1/2
This is because for some trials, two of these "events" will occur;
they are not disjoint. If we actually look at the outcomes of the
experiment, we find that they are
Heads and Monday
Tails and (Monday AND Tuesday).
In this experiment, "Tails and Monday" is not really an event -- that
is, it is not some combination of outcomes. This is not, therefore,
the experiment that SB is being asked to guess the probability for.
I like marmalade and honey. Marmalade comes in large jars which last for two
weeks, while honey comes in small jars which only last a week. I always keep
a jar of one or the other. When it's empty, I go down to the shop and buy a
jar of either marmalade or honey, each with probability 50%.
Q1: You call to my house - what is the chance I am eating marmalade?
Answer: 2/3
Q2: You meet me in the shop - what is the chance that the last time I bought a
tooth-rotting toast-enhancer, it was marmalade?
Answer: 1/2
Calling to the house corresponds to awakening Sleeping Beauty. Meeting me in
the shop corresponds to the situation before or after the experiment. It's
easy to invent spurious paradoxes though. If you meet me in the shop, the
chance I was eating marmalade yesterday is only 1/2. But the chance I was
eating marmalade ten days ago is (wait for it) 3/4.
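For the curious, Q1 is easy to simulate (a sketch under the stated
assumptions: 14-day marmalade jars, 7-day honey jars, a fresh 50/50
purchase whenever a jar runs out):

    import random

    marmalade_days = total_days = 0
    for _ in range(100000):              # one trip to the shop per pass
        marmalade = random.random() < 0.5
        length = 14 if marmalade else 7  # marmalade lasts two weeks, honey one
        total_days += length
        marmalade_days += length if marmalade else 0
    print(marmalade_days / total_days)   # ~2/3: a random-day caller finds marmalade

The 3/4 figure works the same way: a marmalade jar that runs out today
covers the day ten days ago by itself, while a honey jar pushes the question
back to the previous (50/50) purchase, giving 1/2 + (1/2)(1/2) = 3/4.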
> I just thought I'd post this - it's the same paradox but in slightly different
> form, and I think it makes it much clearer:
>
> I like marmalade and honey. Marmalade comes in large jars which last for two
> weeks, while honey comes in small jars which only last a week. I always keep
> a jar of one or the other. When it's empty, I go down to the shop and buy a
> jar of either marmalade or honey, each with probability 50%.
>
> Q1: You call to my house - what is the chance I am eating marmalade?
> Answer: 2/3
You have posed a very similar problem. But as I have pointed out, this problem
involves *repeated* trials of the experiment. Please read my post detailing the
calculation of this experiment [just a couple of posts ago]. The answer to this
question is in fact asymptotically 2/3. If you want to make the problem
isomorphic to the SB problem please *remove* the requirement that you go to the
store *every* week and instead only go once. I now don't see a simple way to
change the problem to make it isomorphic. It should be clear that if
you only go once and come home with a jar of gook, it has a 50% chance of
being marmalade or honey - and it will stay this way with a suitable modification
to make the marmalade call happen twice. [Note it would not be ok to change the
problem to "I am twice as likely to call" because SB always calls on heads - she
is no more likely to wake up on heads]
Matthew, I had the feeling a number of times while reading your
argument that I wasn't sure I understood it. Here's one early on. If
B chooses T and A chooses MT, then B is wrong when she wakes on Monday
(no point), and right when she wakes on Tuesday (earning one point).
So how did you determine that the correct payoff for that strategy
combination was the average of the two days (0.5) rather than the
total score accumulated in that game (1)? (Or for that matter, how
do you determine that the correct score is not 0?)
I'm not claiming here that 1 or 0 is the correct answer rather than
0.5, but I do suspect more and more that the problem as originally
given is ambiguous, and that any of these three answers could be
forced by additional assumptions.
Repeated trials simplify probability calculations by making explicit an
implied ensemble of possible universes. But the answer is still the same.
I buy a jar of one or the other, just once. You call in three days and again
in ten days. Or maybe you call at a random time, or many times. On a
particular occasion when you call, I am eating one or the other. Chances are
2:1 it's marmalade. There's more marmalade, just like there's more 'wakening
on heads'.
>I buy a jar of one or the other, just once. You call in three days and again
>in ten days. Or maybe you call at a random time, or many times. On a
>particular occasion when you call, I am eating one or the other. Chances are
>2:1 it's marmalade. There's more marmalade, just like there's more 'wakening
>on heads'.
Uh. I've been following this thread for a while, and this doesn't sit
well with me. If you buy a random jar, and I call you in 3 days time,
it's 1:1 Marmalade or Honey. If I call you in 10 days time, it's 1:1
Marmalade or Nothing. If I call you on a random day and you just so
happen to be eating -something-, then yes, it is 2:1 Marmalade. But,
this is different from Sleeping Beauty in the fact that, because you
are eating something when I called, I get an extra piece of
information. Beauty doesn't get that option. Even if we do the trial
once, I have a 1/4 chance of calling you when you're out of Honey,
which puts you squarely back at 1:1 Honey:Marmalade.
On the SB problem, which, at least in my mind, is different, think of
it this way: If she guesses heads every time, she will either be right
once or wrong twice. If she guesses tails, she will either be right
twice or wrong once. If we flip the zillion-sided coin (1 heads, one
zillion-1 tails) she will either be, on a guess of heads, right once
or wrong a zillion-1 times. On a guess of tails, she will either be
right a zillion-1 times or wrong once. As long as the coin is fair (or
we know and account for how unfair it is :) her guess should be tails,
EVEN THOUGH she knows it is a 50:50 chance either way.
And yes, I was a staunch supporter of the 50:50 throughout most of
this, and by 'most of this' I'm including THIS post. Only after I
started explaining it did I realize I was wrong.
--Parallax
> I'm not claiming here that 1 or 0 is the correct answer rather than
> 0.5, but I do suspect more and more that the problem as originally
> given is ambiguous, and that any of these three answers could be
> forced by additional assumptions.
I seem to be getting the same impression from many of your posts... that you
think there is a subjective element to determining probability. I may
(depending on what you actually believe) whole-heartedly disagree with
this. The probability that an event will happen is a very real thing. If
someone put a gun to Beauty's head and said "Heads or Tails?" then she has
an optimal answer, or at least a set of answers with equal and maximal
probability. Now, we can disagree on whether the word "credence" means
probability or something about expected values or about confidence levels or
whatever, but once we agree that it means probability in the usual sense, it
has a well-defined value (based on a certain fact pattern).
That Beauty should assign a probability of 50% to the event that "The coin
landed heads" upon waking is secondary in importance to realizing that there
is *some* well-defined probability which she should assign to that event.
> Repeated trials simplify probability calculations by making explicit an
> implied ensemble of possible universes. But the answer is still the same.
This is true...if you do the repetition correctly. Here is what you are doing wrong:
You are running 1000 experiments and then piling all of the awakenings into a pot.
Then you imagine that Beauty awakens at a random one of these awakenings and find out if
the coin responsible is heads or tails. Since, in the 1000 experiments all run
together there will be about 500 awakenings from heads and 1000 from tails, you will compute the frequency of
heads to be about 1/3.
What you should be doing is:
Run 1000 experiments. For each experiment, wake Beauty up once at some random
awakening and find out if the coin came up heads or tails. In this case the ratio of
heads to tails will be about 1/2.
> What's a trial? If we run the experiment multiple times and count
> each experiment as a trial, we find that
> P(Heads and Monday) = 1/2
> P(Tails and Monday) = 1/2
> P(Tails and Tuesday) = 1/2
>
> This is because for some trials, two of these "events" will occur;
> they are not disjoint. If we actually look at the outcomes of the
> experiment, we find that they are
>
> Heads and Monday
> Tails and (Monday AND Tuesday).
>
> In this experiment, "Tails and Monday" is not really an event -- that
> is, it is not some combination of outcomes. This is not, therefore,
> the experiment that SB is being asked to guess the probability for.
This is all true. The experiment we have to assign probabilities for is the
following:
Flip a fair coin and record the result in the event H (for heads, ~H for tails).
Pick a random day to wake up on out of the possibilities and record this as M
(for Monday, ~M for Tuesday). The reason we pick a random day to wake up on is
because SB has no idea what day it is when she wakes up... from her perspective
the day is a random one.
P(H) = P(H and M) = 1/2
P(~H) = 1/2
P(~H and M) = 1/4
P(~H and ~M) = 1/4
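Sampling from exactly this model (just an illustration of the assignment
above, not an argument for it) reproduces those numbers, and also gives the
conditional that keeps coming up in this thread: under this model,
P(heads | told it is Monday) comes out 2/3.

    import random

    mondays = heads_mondays = 0
    for _ in range(100000):
        heads = random.random() < 0.5
        monday = heads or random.random() < 0.5   # heads: Monday for sure;
                                                  # tails: Monday or Tuesday at random
        if monday:
            mondays += 1
            heads_mondays += heads
    print(heads_mondays / mondays)                # ~2/3: P(heads | Monday) in this model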
Gerry Quinn wrote:
> Repeated trials simplify probability calculations by making explicit an
> implied ensemble of possible universes. But the answer is still the same.
This is true...if you do the repetition correctly. Here is what you are doing wrong:
You are running 1000 experiments and then piling all of the awakenings into a pot.
It is easy to see that if we do this there will be about 500 awakenings from heads in the
pot and 1000 from tails. So, if Beauty awakens at a random one of these awakenings it is
easy to see there is 1/3 probability of it being heads.
What you should be doing is:
Run 1000 experiments. For each experiment, wake Beauty up once at some random
awakening and find out if the coin came up heads or tails. In this case the ratio of
heads to tails will be about 1/2.
Compare this with how you compute the odds that a coin would come up heads. You flip it
1000 times and *each* *time* ask "Did it come up heads?". For a fair coin you will find
near 500 times for heads, and so the odds of heads are 500/1000. To do the same thing in
this case, we do 1000 trials of the experiment and *each* *time* ask "Does a random beauty
awaken to heads?" Again we find 500 times that she does - and the odds are 500/1000.
->Matt McLelland <mat...@flash.net> wrote:
->>David A Karr wrote:
->>
->>> But if you actually ran the experiment, and *if* the coin came up tails,
->>> then you *would* be awakened on Monday and also on Tuesday, i.e., both
->>> events would actually have occurred. How then can these two events
->>> be "disjoint"?
->>
->>The two events "The coin was heads and today is monday" and "The coin
->>was tails and today is tuesday" are disjoint
->
->Then maybe you should have phrased the events that way, rather than
->"The coin was tails when I was awakened on Monday" as you did in the
->post to which I responded. You've trimmed too much from the message
->above; it's very hard for anyone to see what I was referring to.
->
->>Man. Any rec.puzzlers who aren't interested in this must be ready to shoot
->>themselves (or us) by now.
->
->If they've half a brain (or better) they're killing every message in this
->thread before reading it, as I do with all uninteresting (to me) threads.
->
->What's the probability that whoever last read this post is ready to
->shoot me? :-)
At this very instant, zero, since I'm the last person (or was when I typed
this) and I wouldn't shoot someone from the next town over :-)
--
Carl Witthoft c...@world.std.com ca...@aoainc.com http://world.std.com/~cgw
Got any old pinball machines for sale?
No. This is most emphatically *not* what I am getting at, at least
not if you're referring here to mathematical probabilities on which
we can do calculation.
What I do think is that in the Sleeping Beauty problem, the way Jamie
originally expressed it, the phrase "probability that the coin landed
heads (as estimated by Beauty)" could be mapped onto two entirely
different mathematical models, each of which gives a different answer.
Call this a "subjective element" if you wish. I prefer to call it an
ambiguity.
We don't usually come across ambiguities like this one, because most
people in posing probability problems tend to stick to tried-and-true
formulations in which we already all (or almost all) agree what the
words mean.
>If
>someone put a gun to Beauty's head and said "Heads or Tails?" then she has
>an optimal answer, or at least a set of answers with equal and maximal
>probability.
Ah, but what if we put a gun to her head and said, "Monday or Tuesday?"
What's Beauty's optimal strategy then?
This gun-to-the-head thing isn't very persuasive to me.
>That Beauty should assign a probability of 50% to the event that "The coin
>landed heads" upon waking is secondary in importance to realizing that there
>is *some* well-defined probability which she should assign to that event.
I'd be far better convinced of this fact if you didn't seem to be
making the same argument (in different forms) relying on the claim
that it doesn't matter whether Beauty wakes up twice (once each on
Monday *and* Tuesday) or merely once on a random day (Monday *or*
Tuesday). It's all the logical leaps like this that everyone (on both
sides) seems to be making, that persuade me that there may very well
be a fundamental ambiguity hidden in the problem, perhaps in the
background assumptions under which Beauty is supposed to calculate her
probability.
--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962
> >If
> >someone put a gun to Beauty's head and said "Heads or Tails?" then she has
> >an optimal answer, or at least a set of answers with equal and maximal
> >probability.
>
> Ah, but what if we put a gun to her head and said, "Monday or Tuesday?"
> What's Beauty's optimal strategy then?
>
> This gun-to-the-head thing isn't very persuasive to me.
I don't know how there can be any ambiguity when the problem is phrased like
this:
*You* are to participate in the experiment (coin toss - heads wakes you once,
tails twice, memory loss, etc.). When you are awake someone puts a gun to your
head and asks "Heads or tails?" (referring to the coin, not what you want him to
shoot). Maybe they ask "Monday or Tuesday?", what's the difference? My point is
that in a real-life situation like this, there is a *unique* probability which
you, as an intelligent person, ought to assign to each choice being true.
> >That Beauty should assign a probability of 50% to the event that "The coin
> >landed heads" upon waking is secondary in importance to realizing that there
> >is *some* well-defined probability which she should assign to that event.
>
> I'd be far better convinced of this fact if you didn't seem to be
> making the same argument (in different forms) relying on the claim
> that it doesn't matter whether Beauty wakes up twice (once each on
> Monday *and* Tuesday) or merely once on a random day (Monday *or*
> Tuesday).
I think this is perfectly clear. Each time Beauty is awakened, for all
practical purposes she is a different person than any other Beauty awakened at a
different time. That is why I said that we could regard this second Beauty as a
clone. That is, we can regard Monday Beauty and Tuesday Beauty as different
people since they have different memories, experiences, etc. When you wake up,
you are either Monday Beauty or your clone Tuesday Beauty, but not both. Now,
suppose that when you awaken the guy tells you "I'm sorry Beauty, but we are only
able to wake you up this one time, but I'm not telling you which time it was."
Basically, what he has told you is that your clone is not going to get to wake
up. So what? You are a different person than her for all practical purposes
(big exception relating to betting problems - you share a pocketbook). The
situation for *you* is exactly the same. What happens to your clone is
irrelevant.
> It's all the logical leaps like this that everyone (on both
> sides) seems to be making, that persuade me that there may very well
> be a fundamental ambiguity hidden in the problem, perhaps in the
> background assumptions under which Beauty is supposed to calculate her
> probability.
Again, I don't see how this could be possible. The situation completely specifies
all the information which Beauty would have at her fingertips. Change the problem to
one about betting, as you keep doing, and she is still just as sure that the
answer is 50%. If a bet supersedes any previous bets she has made, then her
odds of winning her bet are 50% and her odds of losing her bet are 50%. If she
makes a different bet each time she wakes up, then she will win her bet 50% of
the time, and lose the same bet twice 50% of the time. In this last case, this
will be exactly what she is thinking: "Well I know the odds of losing are just
50%, but since I will presumably make the same bet again on tails I will end up
losing my money twice if I am wrong."
To replicate the Beauty experiment, you have to wake her up twice if the coin
came up heads.
- Gerry
3 days = Monday. 10 days = Tuesday.
"I am eating something" = "Beauty has just been awakened"
"I'm all out of sweet spreads" = "It's Tuesday, it was tails, and the
experiment is over. Or it's Wednesday or later"
Only wakenings count.
Suppose you call on a random day, I'm spreading marmalade or honey, and you
ask me which - suppose also I've been doing it in an absentminded way without
looking, and like Beauty I can't remember 24 hours back. It's 2:1 marmalade,
just like you said above!
- Gerry
> To replicate the Beauty experiment, you have to wake her up twice if the coin
> came up heads.
No!!! Ok. Rather than giving a calculation showing that you are right, please answer
the following questions:
1. Beauty flips the coin and, without looking at the result, gives it to the person
running the experiment. What are the odds to her then that it was heads?
2. The guy explains the details of the sedation and memory loss that she will undergo
depending on the outcome of the coin. When he is done explaining, what are the odds
to her that the coin is heads?
3. Beauty is about to be sedated. She thinks to herself before being sedated "If the
coin over there landed heads up, then I will wake up due to a head. If the coin over
there is tails, then the next time I wake up will be due to a tail". After thinking
about this, what are the odds to her that she will wake up due to a head? *Here I am
asking for what she projects the chances to be that she will wake up due to a head.
4. Beauty is sedated. When she awakens, she thinks "Well, here I am. Just like they
said." Now what are the odds that the coin is heads?
When you are done answering the questions, explain the information which Beauty
receives to stop thinking the odds are 1/2 and start thinking they are 1/3.
That doesn't conform to the original problem.
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."
> In article <36F2BCB2...@flash.net>,
> Matt McLelland <mat...@flash.net> wrote:
> }
> }Run 1000 experiments. For each experiment, wake Beauty up once at some random
> }awakening and find out if the coin came up heads or tails. In this case the ratio of
> }heads to tails will be about 1/2.
> That doesn't conform to the original problem.
Yes it does. The event we want to capture is "I woke up *this* time due to a head". To
simulate this we flip the coin and then randomly pick one day she will be awoken to be
'this time'.
How do you think we should do it? It is clear that the events "I woke up this time on
Monday due to a head", "I woke up this time on Monday due to a tail", and "I woke up this
time on Tuesday due to a tail" are three *disjoint* events. Hence, in our repeated
trials, we *cannot* have two of these events happen in the same trial. (Note that if you
did, the frequencies of these disjoint events would be 1/2, 1/2, and 1/2. Further note
that 1/2+1/2+1/2 = 3/2 >1)
}How do you think we should do it? It is clear that the events "I woke up this time on
}Monday due to a head", "I woke up this time on Monday due to a tail", and "I woke up this
}time on Tuesday due to a tail" are three *disjoint* events. Hence, in our repeated
}trials, we *cannot* have two of these events happen in the same trial.
But we do! The problem statement specifies that if we get a "tails",
we wake up Beauty twice, once on Monday and once on Tuesday. That
IMO, shows that your proposed model doesn't match the problem.
> In article <36F30008...@flash.net>,
> Matt McLelland <mat...@flash.net> wrote:
>
> }How do you think we should do it? It is clear that the events "I woke up this time on
> }Monday due to a head", "I woke up this time on Monday due to a tail", and "I woke up this
> }time on Tuesday due to a tail" are three *disjoint* events. Hence, in our repeated
> }trials, we *cannot* have two of these events happen in the same trial.
>
> But we do! The problem statement specifies that if we get a "tails",
> we wake up Beauty twice, once on Monday and once on Tuesday. That
> IMO, shows that your proposed model doesn't match the problem.
You have missed my point. First of all, the question "how do you think we should do it?"
wasn't rhetorical. How do you think we should set up a set of repeated trials of waking up in
Beauty's position? After all, this line of reasoning is the most prominent reason why the
2/3 group believes their answer. It is well known that the probability of an event is supposed
to roughly correspond to the frequency with which the event occurs if the same experiment is
conducted over and over again and left to chance each time.
There have been at least two primary flaws made by members of your party in trying to set this
up. One is to not cleanly separate the trials from each other. Instead of asking whether or
not Beauty awoke to heads in each trial and incrementing the "heads counter" then, some have
imagined that they are all run in succession and then ask what her odds would be if she awoke
in some random awakening of some random trial. This fallacy can be exposed by considering
the following problems:
1. You flip a coin *once*. If it is heads you write an "H" on a blank piece of paper. If it
is tails you write "TT". Now you pick a random letter off of the page and give yourself a
point if it is heads, and repeat this a lot with a blank sheet of paper each time.
2. You start with a blank sheet of paper. You flip a coin and write "H" for heads and "TT"
for tails. You repeat this a lot. When you are done you pick a lot of random letters and give
yourself a point for each head.
It is easy to see that in 1 you win a point in about 1/2 of the letters you pick. It is also
easy to see that in 2 you win a point in only 1/3 of the letters you pick.
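In code, the two procedures come out just as described (a sketch; each
print is the long-run win rate):

    import random

    def flip():
        # one coin flip's worth of letters: "H" for heads, "TT" for tails
        return "H" if random.random() < 0.5 else "TT"

    n = 100000

    # Procedure 1: a fresh page per flip, one random letter picked per page
    wins = sum(random.choice(flip()) == "H" for _ in range(n))
    print(wins / n)                                   # ~1/2

    # Procedure 2: one big page built from many flips, then random letters
    page = "".join(flip() for _ in range(n))
    wins = sum(random.choice(page) == "H" for _ in range(n))
    print(wins / n)                                   # ~1/3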
The other fallacy I have seen advocated is really quite peculiar. I believe this is what you
are advocating. For each trial, we increment the "Heads counter" when the coin comes up heads
and we increment "Tails counter - Monday" *and* "Tails counter - Tuesday" when the coin comes
up tails. Then, for some reason, these counts are interpreted as the frequencies of Beauty
waking up to Heads on Monday, Tails on Monday, and Tails on Tuesday, respectively. First I
would point out that this must be a fallacy as we are incrementing the counters of two
*disjoint* events in one trial! The whole point of the trial is to simulate Beauty waking up
*one* time at random. The problem with this setup becomes even more apparent when we try to
interpret the results: She wakes up to Heads on Monday 1/2 of the time, she wakes up to
Tails on Monday 1/2 of the time, and she wakes up to Tails on Tuesday 1/2 of the time. You
want to fix this fallacy by waving your hands and saying ... well if we scale everything down
by a uniform amount they will all be 1/3. Of course this doesn't work.
If you don't believe that either one of the points characterizes your argument, then please:
How would you set up a trial which could be repeated many times with which to collect data on
the frequency of "I woke up because of heads?"
On Mon, 15 Mar 1999, Jamie Dreier wrote:
> Well, those are the two answers I expected, of course.
>
> Each has something obvious to be said in its favor. But they can't both be
> right. (Can they?)
>
> So, to summarize:
>
> There is a frequentist sort of argument in favor of her declaring that the
> chance that the coin landed Heads is only 1/3 (to wit: suppose the game
> were played repeatedly, and on each occasion for guessing she made a guess
> to herself, "I guess that it's Heads"; she would be right only 1/3 of the
> time). On the other hand, there is a more Bayesian sort of argument that
> she should think the chance is 1/2 (to wit: I thought it was 1/2 before
> they put me to sleep, and I clearly have no new information, so it would
> be irrational to change my mind).
>
> Two arguments, incompatible conclusions, at least one of the arguments
> must be faulty.
1/2 of the coin flips are heads.
BUT, there are twice as many times that the question is asked for tails.
Now, if there is an incentive to answer correctly, then she will always
answer tails (the reason being that the expected utility of answering
tails will net her 2 with probability 1/2 whereas heads will get her 1
with probability 1/2--the rewards are disproportionate because the two
occurrences of the question happening when tails is called are not
independent).
New question: Beauty is put to sleep and woken at 4:00 the next day. A
coin is flipped, and if it is heads, we keep her awake, tails means she
sleeps another day, and she gets a reward X every time she answers the
following question correctly: "How did the coin flip turn out?" Does
Beauty have a strategy to follow? If so, what is it?
> New question: Beauty is put to sleep and woken at 4:00 the next day. A
> coin is flipped, and if it is heads, we keep her awake, tails means she
> sleeps another day, and she gets a reward X every time she answers the
> following question correctly: "How did the coin flip turn out?" Does
> Beauty have a strategy to follow? If so, what is it?
If this is intended to be a real question, I think it badly needs
clarification. For example, when does she get asked the question? Are you
intending that if she is asleep she misses her chance to answer it? Is the old
assumption about forgetting previous events still in this problem somewhere?
Just to clarify - I can't remember the original, so I'm assuming she is woken
twice on heads, once on tails. If the other way round, reverse all
polarities...
>No!!! Ok. Rather than giving a calculation showing that you are right, please
>answer the following questions:
>
>1. Beauty flips the coin and, without looking at the result, gives it to the
>person running the experiment. What are the odds to her then that it was heads?
>
This is before the experiment, right? Then the odds are 1/2.
>2. The guy explains the details of the sedation and memory loss that she will
>undergo depending on the outcome of the coin. When he is done explaining, what
>are the odds to her that the coin is heads?
>
Still 1/2.
>3. Beauty is about to be sedated. She thinks to herself before being sedated
>"If the coin over there landed heads up, then I will wake up due to a head. If
>the coin over there is tails, then the next time I wake up will be due to a
>tail". After thinking about this, what are the odds to her that she will wake
>up due to a head? *Here I am asking for what she projects the chances to be
>that she will wake up due to a head.
>
The odds that she will wake up due to a head are 1/2.
The odds that she will wake up NEXT TIME due to a head are 1/2.
The odds that she will wake up twice due to a tail are 1/2.
The odds that she will wake up NEXT TIME due to a tail are 1/2.
The odds that she will wake a second time due to a tail are 1/2.
>4. Beauty is sedated. When she awakens, she thinks "Well, here I am. Just
>like they said." Now what are the odds that the coin is heads?
>
2/3.
>When you are done answering the questions, explain the information which Beauty
>receives to stop thinking the odds are 1/2 and start thinking they are 1/3.
>
That the experiment is in progress, and it is Monday or Tuesday.
- Gerry
>How do you think we should do it? It is clear that the events "I woke up this
>time on Monday due to a head", "I woke up this time on Monday due to a tail",
>and "I woke up this time on Tuesday due to a tail" are three *disjoint* events.
>Hence, in our repeated trials, we *cannot* have two of these events happen in
>the same trial. (Note that if you did, the frequencies of these disjoint
>events would be 1/2, 1/2, and 1/2. Further note that 1/2+1/2+1/2 = 3/2 > 1.)
>
So in your version, we cannot wake Beauty twice in the same trial?
- Gerry
What we did and how we did it is clearly explained in the problem definition.
Just like the Sleeping Beauty trial.
- Gerry
It is with some trepidation that I make a contribution to this thread,
but it has occurred to me that no one has actually run the experiment and
checked the results. I repost the original question above just to refresh
our memories. Here goes.
Let's perform the experiment four times with a hypothetical coin toss of
HTTH.
WEEK 1 - Heads
1) SB is woken up on Monday
WEEK 2 - Tails
1) SB is woken up on Monday
2) SB is woken up on Tuesday
WEEK 3 - Tails
1) SB is woken up on Monday
2) SB is woken up on Tuesday
WEEK 4 - Heads
1) SB is woken up on Monday
RESULTS:
No. of coin tosses: 4
No. of heads: 2
No. of tails: 2
Probability of heads on any given coin toss: 1/2
No. of awakenings: 6
No. of awakenings on heads: 2
No. of awakenings on tails: 4
Probability of heads on any given awakening: 1/3
--
Michael C.
You've answered your own question: that's how I would do it. In fact,
I merely need enumerate the awakenings from TWO trials, one where the
coin comes up heads and one where it comes up tails, since they are
equally probable:
Trial 1:
Beauty awakens on Monday, coin is heads
Trial 2:
Beauty awakens on Monday, coin is tails and
Beauty awakens on Tuesday, coin is tails
Pick a random awakening from one of these, and the odds that the coin
is tails is 2/3rds
}The other fallacy I have seen advocated is really quite peculiar. I
}believe this is what you are advocating. For each trial, we
}increment the "Heads counter" when the coin comes up heads and we
}increment "Tails counter - Monday" *and* "Tails counter - Tuesday"
}when the coin comes up tails. Then, for some reason, these counts
}are interpreted as the frequencies of Beauty waking up to Heads on
}Monday, Tails on Monday, and Tails on Tuesday, respectively. First
}I would point out that this must be a fallacy as we are
}incrementing the counters of two *disjoint* events in one trial!
From the perspective of the experimenter, "Tails on Monday" and "Tails
on Tuesday" are not events. From Beauty's perspective they are; if
we do repeated trials of this experiment and look at things from
Beauty's perspective, we see a rigged coin. The fallacy you appear to
be running into is that there's two different meanings of "trial".
One is the "trial" from the perspective of the experimenter, where
there is one flip of the coin and there may be one or two awakenings.
The other is a trial from Beauty's perspective, which is the same as
an awakening.
->Matthew T. Russotto wrote:
->
->> In article <36F2BCB2...@flash.net>,
->> Matt McLelland <mat...@flash.net> wrote:
->> }
->> }Run 1000 experiments. For each experiment, wake Beauty up once at some
->> }random awakening and find out if the coin came up heads or tails. In this
->> }case the ratio of heads to tails will be about 1/2.
->
->> That doesn't conform to the original problem.
->
->Yes it does. The event we want to capture is "I woke up *this* time due to
->a head". To simulate this we flip the coin and then randomly pick one day
->she will be awoken to be 'this time'.
No it absolutely does NOT.
I'm sorry, Matt (the McL one, not the Russ. one :-) ), but you really
need to step back, take a deep breath, and try again in a couple days.
You are just plain not getting it. "...randomly pick one day...." is NOT
a valid restatement of the original problem.
--
Carl Witthoft c...@world.std.com ca...@aoainc.com http://world.std.com/~cgw
Got any old pinball machines for sale?
> Just to clarify - I can't remember the original, so I'm assuming she is woken
> twice on heads, once on tails. If the other way round, reverse all
> polarities...
Yeah... you have your polarity wrong.
> The odds that she will wake up due to a head are 1/2.
> The odds that she will wake up NEXT TIME due to a head are 1/2.
> The odds that she will wake up twice due to a tail are 1/2.
> The odds that she will wake up NEXT TIME due to a tail are 1/2.
> The odds that she will wake a second time due to a tail are 1/2.
You have answered the wrong questions here. I didn't mean to ask "What are the odds
that the *next* time..."
The question I want you to answer is in (2):
1. What are the coins odds before I go to sleep? [we agree 1/2]
2. As Beauty stands there before the experiment takes place, she thinks ahead.
"When I awaken I won't know what day it is or what the coin came up. What odds
should I place on heads then?"
You see, it stands to reason that since she knows all of the details of the
experiment, she should know what she *should* say before it even happens. To prove
this point, notice that *you* have concluded that if you were in this situation you
would place odds of 1/3 on heads even though you haven't actually participated in it.
Now, she thinks to herself:
If the coin comes up heads right now, then when I awaken not knowing what day it is,
the coin will still have come up heads. Similarly if the coin comes up tails.
Since I will get no new information when I wake up, the odds of the first event must
be the odds of the coin coming up heads.
> }There have been at least two primary flaws made by members of your
> }party in trying to set this up. One is to not cleanly separate the
> }trials from each other. Instead of asking whether or
> }not Beauty awoke to heads in each trial and incrementing the "heads
> }counter" then, some have imagined that they are all run in succession
> }and then ask what her odds would be if she awoke in some random
> }awakening of some random trial.
>
> You've answered your own question: that's how I would do it. In fact,
> I merely need enumerate the awakenings from TWO trials, one where the
> coin comes up heads and one where it comes up tails, since they are
> equally probable:
> Trial 1:
> Beauty awakens on Monday, coin is heads
>
> Trial 2:
> Beauty awakens on Monday, coin is tails and
> Beauty awakens on Tuesday, coin is tails
> Pick a random awakening from one of these, and the odds that the coin
> is tails is 2/3rds
Did you even read the explanation of why this is a fallacy? In a repeated trial experiment, the
trials must be cleanly separated from each other. The correct way to have approached this would
be to pick a random awakening from the first trial and then one from the second. The first one
would give you the head and the next the tail. Read the analogy to putting letters on a piece of
paper again. Here it is for you:
1. You flip a coin *once*. If it is heads you write an "H" on a blank piece of paper. If it
is tails you write "TT". Now you pick a random letter off of the page and give yourself a
point if it is heads, and repeat this a lot with a blank sheet of paper each time.
2. You start with a blank sheet of paper. You flip a coin and write "H" for heads and "TT"
for tails. You repeat this a lot. When you are done you pick a lot of random letters and give
yourself a point for each head.
It is easy to see that in 1 you win a point in about 1/2 of the letters you pick. It is also
easy to see that in 2 you win a point in only 1/3 of the letters you pick.
You are using 2 when you should be using 1.
> From the perspective of the experimenter, "Tails on Monday" and "Tails
> on Tuesday" are not events. From Beauty's perspective they are; if
> we do repeated trials of this experiment and look at things from
> Beauty's perspective, we see a rigged coin. The fallacy you appear to
> be running into is that there's two different meanings of "trial".
> One is the "trial" from the perspective of the experimenter, where
> there is one flip of the coin and there may be one or two awakenings.
> The other is a trial from Beauty's perspective, which is the same as
> an awakening.
I have been talking about doing trials from Beauty's perspective this whole time. The trials
from the point of view of the experimenter are trivial. The only thing you need to realize in
order to analyze this problem from Beauty's perspective is that when she awakens, she
will have no idea what day it is, and can thus assume that she has awakened on a random day of
the possibilities enumerated by the coin toss.
> I'm sorry, Matt (the McL one, not the Russ. one :-) ), but you really
> need to step back, take a deep breath, and try again in a couple days.
> You are just plain not getting it. "...randomly pick one day...." is NOT
> a valid restatement of the original problem.
This is a conspiracy, right? One of you started emailing other people saying
"Hey, lets play a joke on this crazy Matt McL". Is that how it happened?
Every time you see another post from me you just sit back and laugh. Is that
it?
Because if not, YOU take a deep breath and a couple of days off before thinking
about it again. And start eating better. =)
I read it; I disagree. That answers the wrong question.
}> From the perspective of the experimenter, "Tails on Monday" and "Tails
}> on Tuesday" are not events. From Beauty's perspective they are; if
}> we do repeated trials of this experiment and look at things from
}> Beauty's perspective, we see a rigged coin. The fallacy you appear to
}> be running into is that there's two different meanings of "trial".
}> One is the "trial" from the perspective of the experimenter, where
}> there is one flip of the coin and there may be one or two awakenings.
}> The other is a trial from Beauty's perspective, which is the same as
}> an awakening.
}
}I have been talking about doing trials from Beauty's perspective this
}whole time. The trials from the point of view of the experimenter are
}trivial. The only thing you need to realize in order to analyze this
}problem from Beauty's perspective is that when she awakens,
}she will have no idea what day it is, and can thus assume that she
}has awakened on a random day of the possibilities enumerated by the coin toss.
She must also take into account that if the coin came up tails, she
had an extra trial.
Suppose we do the repeated experiment. Here's what the trials look like from
the point of view of the experimenter:
1. H, Monday
2. T, Monday and Tuesday
3. H, Monday
4. T, Monday and Tuesday
5. H, Monday
6. H, Monday
7. T, Monday and Tuesday
3 tails, 4 heads
Here's what the awakenings would look like from Beauty's perspective
(though she isn't told this):
1. H, Monday
2. T, Monday
3. T, Tuesday
4. H, Monday
5. T, Monday
6. T, Tuesday
7. H, Monday
8. H, Monday
9. T, Monday
10.T, Tuesday
6 tails, 4 heads
>Now, she thinks to herself:
>
>If the coin comes up heads right now, then when I awaken not knowing what day
>it is, the coin will still have come up heads. Similarly if the coin comes up
>tails. Since I will get no new information when I wake up, the odds of the
>first event must be the odds of the coin coming up heads.
>
When she wakes up, she gets the information that she has just woken up.
Subtle but relevant. There are three equiprobable cases now, whereas before
there were two.
- Gerry
> She must also take into account that if the coin came up tails, she
> had an extra trial.
You aren't understanding what a "trial" is supposed to be.
You see, when you want to find out how many times an event will happen in an
experiment, you repeat the experiment a large number of times. It is essential
that each time you do it you leave it completely up to chance and perform it
completely *independently* of the previous trials.
What is to be our "trial"? Well what are we interested in? We are interested in
the number of times the coin came up heads out of the times she wakes up. That is, our
trial must correspond to waking Beauty up. I think you agree with this... but we
disagree about how it should be done.
How you think it should be done is very clear. You want to perform the experiment
N times. Then you want to have one trial if the coin came up heads and two trials
if the coin came up tails. Think about this: these trials are *not* independent
of each other. At this point, I would like to know what you think about this:
A. Our trials don't need to be independent of each other
B. These trials are independent of each other.
Tell me which answer you believe.
Again, "Beauty wakes up and the coin is tails" is NOT an event for the
experiment being performed. Talking about that probability in the
context of the original experiment is meaningless. In order to figure
that probability, we must redefine the experiment so that "Beauty
wakes up and the coin is tails" IS an event. To do that, we look at
things from Beauty's perspective and consider each awakening to be an
experiment.
}What is to be our "trial"? Well what are we interested in? We are interested in
}the number of times the coin came up heads out of the times she wakes up. That is, our
}trial must correspond to waking Beauty up. I think you agree with this... but we
}disagree about how it should be done.
}
}How you think it should be done is very clear. You want to perform the experiment
}N times. Then you want to have one trial if the coin came up heads and two trials
}if the coin came up tails. Think about this: these trials are *not* independent
}of each other. At this point, I would like to know what you think about this:
}A. Our trials don't need to be independent of each other
}B. These trials are independent of each other.
}
}Tell me which answer you believe.
They don't need to be independent of each other in the context of the
original experiment.
Suppose I build a nice little black box that returns "heads" or
"tails" on a push of a button, and ask someone to figure out the
probability of "heads" for the little box. Unbeknownst to him, the
way the box really works is that each time the button is pushed it
flips a fair coin and returns its value, unless it flipped the coin
the last time and it came out tails, in which case it returns tails.
What probability will he assign to "heads" if he pushes the button a
large number of times?
That's nearly Beauty's situation in the repeated experiment, except
she can never stop pushing on an odd-numbered 'tails'.
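The box is easy to simulate (a sketch, assuming it owes exactly one extra
'tails' after each tails flip):

    import random

    def push_button(owe_tails):
        # returns (result, new_state); the box owes one extra "tails"
        # after every tails flip
        if owe_tails:
            return "tails", False
        result = "heads" if random.random() < 0.5 else "tails"
        return result, (result == "tails")

    owe = False
    results = []
    for _ in range(100000):
        r, owe = push_button(owe)
        results.append(r)
    print(results.count("heads") / len(results))   # ~1/3

So the button-pusher settles on about 1/3 for "heads".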
> Beauty wakes up and the coin is tails" is NOT an event for the
> experiment being performed. Talking about that probability in the
> context of the original experiment is meaningless. In order to figure
> that probability, we must redefine the experiment so that "Beauty
> wakes up and the coin is tails" IS an event. To do that, we look at
> things from Beauty's perspective and consider each awakening to be an
> experiment.
I have never supported the position you're arguing against! You are right about that
whole paragraph. On this much we agree. The "trials" cannot simply be runs of the
experiment. If we ran the experiment many times and then counted how many times the
coin was heads we would be measuring the odds that the coin would be heads in a random
*experiment*. What we are interested in is the probability that the coin will be heads
in a random *awakening*. We agree up to this point.
> }What is to be our "trial"? Well what are we interested in? We are interested in
> }the number of times the coin came up heads out of times she wakes up. That is, our
> }trial must correspond to waking Beauty up. I think you agree with this... but we
> }disagree about how it should be done.
> }
> }How you think it should be done is very clear. You want to perform the experiment
> }N times. Then you want to have one trial if the coin came up heads and two trials
> }if the coin came up tails. Think about this: these trials are *not* independent
> }of each other. At this point, I would like to know what you think about this:
> }A. Our trials don't need to be independent of each other
> }B. These trials are independent of each other.
> }
> }Tell me which answer you believe.
> They don't need to be independent of each other in the context of the
> original experiment.
I really don't have any idea what this means. I have never encountered the "in this or
that context" modifier to the statement A is independent of B. The trials are either
independent or not, according to the definition of independence. Can you explain what this
means?
> Suppose I build a nice little black box that returns "heads" or
> "tails" on a push of a button, and ask someone to figure out the
> probability of "heads" for the little box. Unbeknownst to him, the
> way the box really works is that each time the button is pushed it
> flips a fair coin and returns its value, unless it flipped the coin
> the last time and it came out tails, in which case it returns tails.
> What probability will he assign to "heads" if he pushes the button a
> large number of times?
Well, he will of course notice that the returned value is not random... that the trials
are not independent. That isn't the main flaw with the analogy though, it is that you
plan on repeating the experiment a large number of times. I have already acknowledged
that if you repeat this experiment many times such that she doesn't remember waking
up in *any* of the previous experiments, then she should assign probability near 1/3 to
heads.
And indeed, Albert now recalled just the monologue. But he could not
help wondering, "What is the probability that the dice gave me only one
human life rather than two, given that I'm now living one, and given
that the monologue is true?"
Correspondences:
Allotted 1 Life <-> Heads
Allotted 2 Lives <->Tails
Wake for interview Monday <-> Live the first (and maybe only) life
Wake for interview Tuesday <-> Live a second life
--
r e s (Spam-block=XX)
Jamie Dreier wrote ...
I'd like to modify that a bit, and agree with David Karr that the
answer depends on one's definitions. I still think that 1/3 is the
best answer.
The original question asked for Beauty's "credence" for the
proposition that the coin landed heads. The answer to that is that it
depends on whether she is a thirdist or a halfist!
If we ask for what her credence "ought" to be, or what a "rational"
value for it is, then we get into problems of definitions. The best
criterion I know of is the betting one; and if she has a chance to bet
on the result of the toss (subject to lots of reasonable and/or
simplifying assumptions) then she will do best in the long run if she
takes the probability of heads as 1/3.
I'm not convinced by David Karr's argument that one can alternatively
take the "credence" to be 1/2 by considering a scenario where she has
an option to cancel a bet that has already been arranged: it's easy to
find a normal example (without the waking and forgetting) where the
decision whether to cancel a bet that may already have been cancelled
(or may be cancelled in the future) does not depend simply on how the
odds compare with one's reckoning of the probability.
-----ooOoo-----
Matt McLelland <mat...@flash.net> wrote:
> John Rickard wrote:
[(1) Variant where on heads Beauty is woken on Monday *or* Tuesday,
at random, instead of just Monday. Agreed that this does not change
the probability. (2) Variant where she is woken at most once: heads
= woken with probability 1/2; tails = woken. Agreed that if she
wakes she reckons the probability of heads as 1/3.]
> > 3. In the original experiment as modified in (1), allow her to ask
> > what day it is. If the answer is Monday, she is in an equivalent
> > position to that in (2) (since she knows that the plan implied
> > that she would be woken on Monday with probability 1/2 if the coin
> > came up heads, and with probability 1 if the coin came up tails);
> > therefore her reckoning of the probability is 1/3. Similarly, if
> > the answer is Tuesday then her reckoning of the probability is
> > 1/3. Since it makes no difference to her reckoning what day it
> > is, her reckoning is 1/3 before she asks the question.
>
> Bzzz. Wrongo. The innocuous-looking explanation in parentheses doesn't
> cut it... and the statement it is explaining isn't true. In reality after
> you are told it is Monday, the odds are 50-50 that the coin came up heads.
That doesn't seem reasonable to me. If she *knows* that it is Monday,
then how can what is going to happen on Tuesday make a difference to
her credence of what has already happened?
-----ooOoo-----
Matt McLelland <mat...@flash.net> wrote:
> At my clinic, we will buy you a lottery ticket. Now everyone knows
> that your odds of winning our state lottery are only 1 in a million,
> but the $20 million dollar jackpot this time can almost certainly be
> yours if you just call me now! When you arrive you will be given a
> warm bath and fed a four star meal of your choice. When you are
> done being pampered, you just lie down and go to sleep. While you
> are sleeping, our trained personnel will watch the TV to see if your
> lottery ticket is a winner. If it isn't, then you just pay our
> small $250 service fee and go on about your day. If it is, then you
> will be repeatedly awakened into lavish surroundings and by
> beautiful women every day for the rest of your life. How you ask?!?
> Well, at the end of every 30 minute interval, we will wipe your
> memory clean up to the time you arrived! The mathematics behind
> this miracle are complicated, so I won't bore you, but if we suppose
> that you will live 50 more years, then your odds of awakening to
> find that you have won are amazingly over 50%! Wouldn't you pay
> $250 to have a 50% chance of winning the lottery?!?
The answer to the last question is yes, of course. But I wouldn't buy
your service (even apart from the fact that I wouldn't want $20
million at the price of losing my memory every 30 minutes for the rest
of my life). For the probability that I will win is (I know now)
.000001; the fact that I know that I will at some time think that
there is a probability of > .5 that I have won is, I admit, strange,
but not out of the question. The situation is that:
With probability .999999, I will think wrongly, once, that I have
probably won.
With probability .000001, I will think rightly, > 1000000 times,
that I have probably won.
This can be viewed as a scaling up, by a factor of > 1000000, of a
situation where:
With probability a bit less than .000001, I will think wrongly,
once, that I have probably won.
With probability a bit less than .000001, I will think rightly,
once, that I have probably won.
which is easy to arrange. It's the scaling up that can't be done
without the way of making me forget.
-----ooOoo-----
Consider the original experiment (heads = woken on Monday;
tails = woken on both Monday and Tuesday). Let's say that after being
woken and asked for her credence that the coin came up heads, Beauty
is told what day it is, and asked again for her credence. (And she
knows beforehand that this will happen.) If I understand correctly,
you (Matt McLelland) think that if she is told that it is Monday, her
credence should then be 2/3 that the coin came up heads? Does this
apply even if the coin is not tossed until Tuesday morning -- after
being told that it is Monday, should she still think that the coin has
a probability of 2/3 of coming up heads? (Or if Beauty herself tosses
the coin after being told that it is Monday?)
-----ooOoo-----
I don't think this will convince Matt McLelland, but it might be of
interest to some readers.
Suppose that there are 100 experimenters, each of whom tosses a coin
independently, and wakes Beauty once (heads) or twice (tails)
accordingly and asks for her credence that the experimenter waking her
tossed heads. She will almost certainly be woken about 150 times,
about 50 of which will be by an experimenter who tossed heads, so it
seems to me (and I *think* Matt will agree) that her credence is about
1/3 each time. But when we consider her wakings by one particular
experimenter, it seems to me (and here is where I think Matt will
disagree) that the other 99 experimenters are irrelevant, and so she
should give the same answer as if the other 99 didn't exist.
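In rough numbers (a sketch of the setup, averaging over many runs of the
100 experimenters):

    import random

    runs = 1000
    wakings = heads_wakings = 0
    for _ in range(runs):
        for _ in range(100):               # 100 independent coin-tossers
            heads = random.random() < 0.5
            wakings += 1 if heads else 2
            heads_wakings += 1 if heads else 0
    print(wakings / runs)                  # ~150 wakings per run
    print(heads_wakings / wakings)         # ~1/3 follow a heads toss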
--
John Rickard <John.R...@virata.com>
I like the following variation on our Betting Beauty: If she's
right, she gets to stay awake. Otherwise, after the experiment,
she's put to Sleep. Capital S.
Suppose she guesses Heads with probability p. If she's awakened
(at most) h times after Heads and t times after Tails, then the
probability of her dying is
((1-p)^h + p^t)/2, which is minimized for p* = the (unique) root
of h (1-p)^(h-1) = t p^(t-1) (for h*t >= 2).
For h=1, t=2, p* = 1/2, and she dies with probability 3/8.
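A numerical check of that formula (the grid search is mine):

    # Death probability ((1-p)**h + p**t)/2 for the stay-awake-if-right game.
    # Calculus puts the optimum where h*(1-p)**(h-1) = t*p**(t-1).
    def p_die(p, h=1, t=2):
        return ((1 - p) ** h + p ** t) / 2

    best = min((p_die(i / 10_000), i / 10_000) for i in range(10_001))
    print(best)  # (0.375, 0.5): guess Heads half the time, die with chance 3/8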
| Jim Ferry | Center for Simulation |
+------------------------------------+ of Advanced Rockets |
| http://www.uiuc.edu/ph/www/jferry/ +------------------------+
| jferry@expunge_this_field.uiuc.edu | University of Illinois |
Do you mean the first time she guesses correctly, she short-circuits
the rest of the experiment (no more going back to sleep)? Honestly,
I had to reread this several times to come to that understanding.
>Suppose she guesses Heads with probability p. If she's awakened
>(at most) h times after Heads and t times after Tails, then the
>probability of her dying is
>
>((1-p)^h + p^t)/2,
>
>For h=1, t=2, p* = 1/2, and she dies with probability 3/8.
Sure, but in this example "heads" is more heavily favored than in an
example where there are *always* two chances to guess on tails and
they pay off independently.
By introducing the gun to the head, you've ruled out all the "pay off
independently" schedules.
What Beauty has to remember is that when she has to make this
decision, there's a possibility that she's either been awakened
before, or will be awakened again (depending on how she answers).
This complicates things. In fact what you've done here is to set up a
yet more complex example of the conditional bet I proposed to allow
Beauty to make (where only the bet she makes or doesn't make on the
last day counts).
In your game, answering "heads" when the correct answer is "tails" is
not as bad as answering "tails" when the correct answer is "heads."
In the first case you get another chance, in the next you're dead.
Naturally, this is going to favor a shift in strategy away from tails
and toward heads.
You decided to short-circuit the game by ending it if Beauty guesses
right. Suppose we end it if Beauty guesses *wrong*: one wrong guess
and we don't wake you again, guess right and you get to continue
playing. In this case the chance of dying is (1 + p - p^2)/2,
and there are two optimal choices: either p* = 0 or p* = 1.
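A sketch of the case analysis behind that expression (the decomposition is
mine):

    # Heads: one waking; she dies iff she guesses tails there (prob 1-p).
    # Tails: she dies on Monday (prob p), or survives Monday and dies on
    # Tuesday (prob (1-p)*p).  Average the two branches.
    def p_die(p):
        return ((1 - p) + (p + (1 - p) * p)) / 2   # = (1 + p - p**2)/2

    print(p_die(0.0), p_die(0.5), p_die(1.0))  # 0.5, 0.625, 0.5 -- worst at 1/2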
What does this say about the probability that the coin is heads?
I'll say one thing about this variation on the problem: it's a lot
easier to solve in the probability space of experiments rather than
the probability space of awakenings, if only because I don't quite see
how to define the latter space prior to knowing what strategy Beauty
follows.
Conversely, if the payoff for each awakening is independent, I have
trouble doing the calculation based on the experiment space. (It is
*not* correct to say the payoff will simply be doubled in case of
tails, because that assumes I make the same choice both times, and I
can't *know* whether I should do that until *after* I've done the
calculations.)
In the case where the last guess cancels any and all previous guesses,
it's easy enough to do the calculation over either probability space,
I think.
--
David A. Karr "Groups of guitars are on the way out, Mr. Epstein."
ka...@shore.net --Decca executive Dick Rowe, 1962
> > Bzzz. Wrongo. The innocuous-looking explanation in parentheses doesn't
> > cut it... and the statement it is explaining isn't true. In reality after
> > you are told it is Monday, the odds are 50-50 that the coin came up heads.
>
> That doesn't seem reasonable to me. If she *knows* that it is Monday,
> then how can what is going to happen on Tuesday make a difference to
> her credence of what has already happened?
Well, the reasoning I was using is that if the coin came up heads, then she must
be awakened on Monday. On the other hand, if the coin came up tails, then she
could be woken on Monday or Tuesday. Upon learning that this wakening is
indeed on Monday, maybe it should then seem more likely to her that the
coin came up heads. But...
> Consider the original experiment (heads = woken on Monday;
> tails = woken on both Monday and Tuesday). Let's say that after being
> woken and asked for her credence that the coin came up heads, Beauty
> is told what day it is, and asked again for her credence. (And she
> knows beforehand that this will happen.) If I understand correctly,
> you (Matt McLelland) think that if she is told that it is Monday, her
> credence should then be 2/3 that the coin came up heads? Does this
> apply even if the coin is not tossed until Tuesday morning -- after
> being told that it is Monday, should she still think that the coin has
> a probability of 2/3 of coming up heads? (Or if Beauty herself tosses
> the coin after being told that it is Monday?)
This is an excellent point. The reason I have been slow in responding to this
message is that I was thinking about this particular passage. I *can't* get
my intuition to let me believe that she would, upon getting this extra
information, be able to make a prediction on a coin not yet flipped.
Unfortunately, you haven't won me over to the 2/3 side. This, combined with
some paradoxes I have posted, just leaves me confused.
> Suppose that there are 100 experimenters, each of whom tosses a coin
> independently, and wakes Beauty once (heads) or twice (tails)
> accordingly and asks for her credence that the experimenter waking her
> tossed heads. She will almost certainly be woken about 150 times,
> about 50 of which will be by an experimenter who tossed heads, so it
> seems to me (and I *think* Matt will agree) that her credence is about
> 1/3 each time. But when we consider her wakings by one particular
> experimenter, it seems to me (and here is where I think Matt will
> disagree) that the other 99 experimenters are irrelevant, and so she
> should give the same answer as if the other 99 didn't exist.
You have correctly spotted the point at which I would disagree.
I really don't think the previous argument is convincing at all.
Compare these problems:
You have a coin and a paper. You flip the coin and write "H" for heads and "TT"
for tails. Then you pick a letter off of the paper. What are the odds that it
is "H"?
You have a coin and a paper. You flip the coin and write "H" for heads and "TT"
for tails. You repeat this 100 times. Then you pick a letter off of the
paper. What are the odds that it is "H"?
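A quick simulation (mine) of both letter-picking games:

    import random

    random.seed(2)

    def pick_is_h(flips):
        # Write "H" per heads and "TT" per tails, then pick a random letter.
        paper = "".join("H" if random.random() < 0.5 else "TT"
                        for _ in range(flips))
        return random.choice(paper) == "H"

    n = 20_000
    print(sum(pick_is_h(1) for _ in range(n)) / n)    # ~1/2
    print(sum(pick_is_h(100) for _ in range(n)) / n)  # ~1/3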
In general, it is *not* the case that in every situation I should be
able to compute the probability of every outcome. How, for example,
should I compute the probability that I'll spend eternity in a fiery pit?
As indicated by the above example, the mere fact that there's a
possible outcome that I very much want to avoid is not sufficient to
define a unique probability. Same for the gun to the head.
Conversely, the fact that I make a certain decision with absolute
reliability does not indicate what probability I assign to the
outcomes. For example, if you wake me on Monday heads or tails, and
Tuesday only on tails, and each time you hold a gun to my head and
ask, "Monday or Tuesday?", and if you've told me in advance of the
experiment that you're going to do all this and shoot me if I answer
wrong, then I'll answer "Monday" every time. But that doesn't
indicate whether I calculated the chance that it's Monday as 2/3 or 3/4.
(In fact, I calculate the probability that it's Monday, given that you
just pointed the gun at me and all that, as 2/3, but my chance of
getting shot by you is also 2/3. Specifically, you might just let me
live, you might shoot me tomorrow, or you might just shoot me now,
each with--as it turns out--equal probability given the knowledge I
have at that moment. But of course I would still say "Monday" even if
I believed the probability really was 3/4.)
>If the coin lands Heads, we will awaken Beauty on Monday
>afternoon and interview her. If it lands Tails, we will awaken her Monday
>afternoon, interview her, put her back to sleep, and then awaken her again
>on Tuesday afternoon and interview her again.
However, here is what we actually do:
If the coin lands Tails, we will awaken Beauty on Monday
afternoon and interview her. If it lands Heads, we will awaken her Monday
afternoon, interview her, put her back to sleep, and then awaken her again
on Tuesday afternoon and interview her again.
Before we flip, we believe P(heads) = 1/2 and Beauty believes P(heads) = 1/2.
When Beauty wakes up, we will know either P(heads) = 1 or P(heads) = 0,
but Beauty will compute P(heads) = 1/3 because of what we told her. If
Beauty must make a bet, she will bet on tails. Then she will be wrong
2/3 of the times she wakes up instead of 1/3 of the times she wakes up,
because we lied. But she still must compute what she must compute.
Yes Beauty, probability is relative.
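A simulation (mine) of the lied-to Beauty, checking that 2/3 figure:

    import random

    # She was told heads = one waking, so per waking she computes
    # P(heads) = 1/3 and bets tails.  In fact heads gets the double waking.
    random.seed(3)
    wakings = wrong = 0
    for _ in range(100_000):
        heads = random.random() < 0.5
        n = 2 if heads else 1     # the lie: heads actually means two wakings
        wakings += n
        if heads:                 # her "tails" bet loses at both wakings
            wrong += n
    print(wrong / wakings)        # ~2/3 of her wakings, as claimed above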
I agree completely up to this point.
:There still seems to be a paradox though. Someone else posted a
:scenario in which Beauty places a bet before the experiment begins,
:and then each time she is awakened, she gets a chance to cancel the
:bet. She is given an advantage originally, paying $0.45 for a return
:of $1 if the coin lands heads and a return of $0 if the coin lands
:tails. Before the experiment begins, she and we believe P(heads) is
:0.5 so it looks profitable to place a bet. When she is awakened,
:she is allowed to cancel the bet. With her new information, she will
:want to cancel. It is not really paradoxical that she might want to
:cancel, it is paradoxical that she can figure out that she will always
:want to cancel. This is the part that I would really like to resolve.
The mistake is in assuming that this situation is the same as the original
and that she will want to cancel the bet. In this situation she will
*NOT* want to cancel!!
When awakened, she has two possible strategies.
Strategy 1: Cancel the bet (or leave it cancelled).
Strategy 2: Don't cancel the bet.
What is the expected gain from strategy 1?
$0. Because the initial bet will always get cancelled.
What is the expected gain from strategy 2?
$.05 per initial bet. Because every bet on a coin flip will stand (not be
cancelled) and she pays $.45 for an expected gain of $.50.
Thus, she should not cancel the bet.
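The same expected values in code (dollar amounts as in the scenario above):

    # Pay $0.45 up front for a $1.00 return on heads.
    p_heads, stake, payout = 0.5, 0.45, 1.00

    ev_keep = p_heads * payout - stake   # never cancel: +$0.05 per initial bet
    ev_cancel = 0.0                      # always cancel: the bet never stands
    print(ev_keep, ev_cancel)

    # Extra wakings on tails just give extra chances to make the same bad
    # cancellation; they never change which strategy is favorable.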
In the original problem (betting form) she doesn't make any bet at the
start. When awakened, it is favorable for her to bet on tails because
she gets two opportunities to bet on every occurrence of tails. In this
problem she instead gets to make a favorable bet at the outset. Now if
tails comes up she gets two opportunities to cancel a favorable bet instead
of only one opportunity. But no matter how many opportunities she gets
she shouldn't cancel a favorable bet. You could wake her up one million
times on tails and once on heads, or wake her up one million times on heads
and only once on tails; in either case she should never cancel the bet.
So yes, after making her initial bet she can indeed figure out in
advance what her strategy should be when she is awakened, but that
strategy should be to not cancel the bet. There is no paradox here.
Regards,
Glenn C. Rhoads
I'm not convinced by it either. In fact I retract what I said earlier
about 1/2 being as good an answer as 1/3 due to an "ambiguity" in the
problem. I now think the question in the original problem can most
reasonably be taken as asking for the conditional probability of
heads *given* the knowledge that Beauty has when she wakes up on
Monday or Tuesday.
The number 1/2 is the value of the probability conditioned on the
knowledge Beauty has before she goes to sleep. Under certain
circumstances, she can contrive valid decision procedures that take
this value as a non-trivial input, but that's not a persuasive answer
to the question that was asked Beauty when she woke.
> I'm not convinced by it either. In fact I retract what I said earlier
> about 1/2 being as good an answer as 1/3 due to an "ambiguity" in the
> problem. I now think the question in the original problem can most
> reasonably be taken as asking for the conditional probability of
> heads *given* the knowledge that Beauty has when she wakes up on
> Monday or Tuesday.
This conditional probability bit is a sham. She could have conditioned on
"I will wake up on Monday or Tuesday" before she went to sleep. People seem
to have more confidence in the Beauty problem than they do in my cloning
problem. I don't see the difference, so I am going to go ahead and rephrase
the cloning problem that I just posted in terms of Beauty. Consider this:
Beauty is rushed to the hospital after experiencing a sharp pain in her
head. When she arrives, doctors administer preliminary tests which are very
pessimistic indeed. They tell her that she will live at least 23 more
hours. Then, there is a 1 in 2 chance that she will drop dead in the next
hour. However, they tell her that if she does live past 24 hours, then all
danger from this condition will have passed. Beauty is horrified at the
news. Her daughter is touring Europe, and would most likely only be able to
make it back to see her mother for a short few minutes before she dies.
Nevertheless, steps are taken to bring her daughter back. Seeing her
daughter before she dies is the most important thing on Beauty's mind.
Over the next hour, the doctors perform more extensive tests which
conclusively determine if Beauty will be dead at the end of the day.
Beauty, a mathematician, doesn't want to know. Instead, she wants to engage
in the same procedure that we have seen repeatedly on this thread. She is
to be put to sleep. If the test says she will die, then she is awakened
once. She isn't told of the test results, and is just left to live out the
rest of her day. If the test says she will live, then for the next ten
years Beauty will have her memory cleared every other day. (She will get to
experience two days, each time, before having her memory wiped).
Beauty is put to sleep. When she wakens she feels great peace of mind
knowing that her odds of dying by the end of the day are now less than 1 in
1750. Wow she thinks, I really don't have much to worry about with that
whole disease thing. This procedure, while not actually having done
anything to decrease her odds of dying, has put her mind at rest. The day
seems to pass quickly, just like any other day, with her mind at ease. She
is told that in 15 minutes, she will possibly be in the hour period where
she risks death. Beauty laughs as she thinks about how improbable that
is. "Oh wait, my daughter should be here by now", thinks Beauty. "Oh
well, I'm tired. I'll just see her in the morning." And with that
thought, Beauty turns out her light and goes to sleep.
Wouldn't you have done the same thing?
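For what it's worth, the "less than 1 in 1750" figure is just per-awakening
conditioning at work; a sketch, with my own approximate day counts:

    # One waking if she is to die; one waking per two-day cycle for ten
    # years if she is to live.
    wakings_if_dying = 1
    wakings_if_living = 10 * 365 // 2    # = 1825

    p_die_given_waking = (0.5 * wakings_if_dying) / (
        0.5 * wakings_if_dying + 0.5 * wakings_if_living)
    print(p_die_given_waking)  # ~1/1826, comfortably "less than 1 in 1750"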
Of course she could. She could condition on that probability at any
time, with the following result:
P(heads | I will be awoken on Monday or Tuesday) = 1/2.
I don't think any of us disputes this value. The value 1/3 comes up
only as the result of different conditioning, which I've already
described more than enough.
>Beauty, a mathematician, doesn't want to know. Instead, she wants to engage
>in the same procedure that we have seen repeatedly on this thread.
We've seen it repeatedly because you've posted it repeatedly. I don't
think anyone believes that this is a life-enhancing procedure. It
certainly doesn't alter any of my probability estimates *given* *my*
*current* *state* *of* *knowledge* (i.e, when I am deciding whether to
undergo the procedure), which I should certainly be able to use to
guide any decisions that I make *now*.
In fact regardless of what I think Beauty's rational conclusions
should be during her waking-with-wiped-memory sessions, I think all
these experiments with amnesia and/or "clones" are cruel or at best
unnatural. (I put "clones" in quotes because *real* clones of animals
are not nearly so unnatural.)
> >This conditional probability bit is a sham. She could have conditioned on
> >"I will wake up on Monday or Tuesday" before she went to sleep.
>
> Of course she could. She could condition on that probability at any
> time, with the following result:
>
> P(heads | I will be awoken on Monday or Tuesday) = 1/2.
I have misspoken again. What I intended was for her to calculate:
P(heads | I awaken to one of the awakenings which will follow) = 1/2
I have posted along the same lines on several occasions because no one has been
able to explain nonsensical results that stem from a 1/3 belief. Briefly, the
situation is:
Someone has a coin biased 3:2 against tails.
They will put you to sleep, flip it.
On heads they wake you and torture you.
On tails they wake you and let you live half of your life.
Then they wipe your memory back to the time you went to sleep.
They wake you and let you live the other half.
Before you go to sleep you will think the odds are against you. Before you go to
sleep you know that the first thing you will experience (even if it is the second
time) will be that you were put to sleep. In fact, you can compute that should
you awaken to some random awakening to follow, you should still compute your
odds of survival as 3:2 against. When you really do awaken, however, you should
think the odds are in your favor. IMHO this is nonsense.
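To spell out the numbers (a sketch; the per-waking weighting is the thirder
bookkeeping under dispute):

    # Coin biased 3:2 against tails: P(heads) = 3/5.
    # Heads: one waking (torture).  Tails: two wakings (half a life each).
    p_heads = 3 / 5
    wakings_heads, wakings_tails = 1, 2

    p = (p_heads * wakings_heads) / (
        p_heads * wakings_heads + (1 - p_heads) * wakings_tails)
    print(p)  # 3/7 -- on waking, the thirder calls survival 4:3 *in favor*,
              # despite the 3:2-against prior.  Hence the complaint.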
I do find it fairly convincing. To give more detail, assume that each
experimenter is assigned beforehand two time slots for possible
awakenings. Then what happens during the time slots allocated to one
particular experimenter -- Phillip, let's say -- is independent of
what happens during the other experimenters' time slots, and indeed is
independent of whether the other 99 experimenters exist or not. So
when Beauty is woken by Phillip, why should her judgement of what
happens during his two time slots be affected by the existence or
otherwise of the other experimenters?
: Compare these problems:
:
: You have a coin and a paper. You flip the coin and write "H" for
: heads and "TT" for tails. Then you pick a letter off of the paper.
: What are the odds that it is "H"?
:
: You have a coin and a paper. You flip the coin and write "H" for
: heads and "TT" for tails. You repeat this 100 times. Then you pick
: a letter off of the paper. What are the odds that it is "H"?
(Answers are 1/2 and almost exactly 1/3, of course.)
In this case a similar argument doesn't hold. If you consider one
particular coin flip in the second case (as I am considering one
particular experimenter above), then the two problems, restricted to
that coin flip, are very different: in the first problem it is certain
that a letter resulting from that coin flip will be picked; in the
second, there is only a 1/100 chance that a letter resulting from that
coin flip will be picked. (Whereas in my scenario, regardless of
whether there are 100 experimenters or only Phillip, there is a
probability of 1/2 that Phillip will wake her once and a probability
of 1/2 that he will wake her twice.)
--
John Rickard <John.R...@virata.com>
I don't understand which state of knowledge "I awaken" represents.
"I have just awakened and the experiment is underway" is a
reasonable approximation of the state of knowledge in which I'm
interested; is that what you had in mind?
>Someone has a coin biased 3:2 against tails.
>They will put you to sleep, flip it.
>On heads they wake you and torture you.
>On tails they wake you and let you live half of your life.
>Then they wipe your memory back to the time you went to sleep.
>They wake you and let you live the other half.
>
>[...] In fact, you can
>compute that should you awaken to some random awakening to follow, you
>should still compute your odds of survival as 3:2 against.
It seems to me here you are saying in effect that when I awake, I
should compute the odds of survival as 3:2 against because I should
compute the odds as 3:2 against. You conclude that any disagreement
with your conclusion is nonsense, yet the justification seems
circular.
> >[...] In fact, you can
> >compute that should you awaken to some random awakening to follow, you
> >should still compute your odds of survival as 3:2 against.
>
> It seems to me here you are saying in effect that when I awake, I
> should compute the odds of survival as 3:2 against because I should
> compute the odds as 3:2 against. You conclude that any disagreement
> with your conclusion is nonsense, yet the justification seems
> circular.
I think you would actually agree with the statement you quoted. On the
other hand, there is a big logical gap in my logic between that statement
and the one following it. I just thought that if I called your position
nonsense with enough enthusiasm, it would really be nonsense...
Anyway, I don't want to argue about this anymore =). I have just been
playing devil's advocate for the last several days, as I am still very uneasy
about the 1/3 result. I might think about it again at some point, but most
reasoning (and the strongest reasoning) seems to support 1/3, and all I can
do to support 1/2 is dream up counter-intuitive sounding consequences of the
1/3 position. So, I think I will make this my last post to the sleeping
beauty thread until I have an important revelation. I think this was an
interesting thread.
[Beauty has a 1/2 chance of dying between 23 and 24 hours from now.
She wants to see her daughter before she dies (if she does); her
daughter is travelling a long way and may only just arrive in time.]
: She is to be put to sleep. If the test says she will die, then she
: is awakened once. She isn't told of the test results, and is just
: left to live out the rest of her day. If the test says she will
: live, then for the next ten years Beauty will have her memory
: cleared every other day. (She will get to experience two days, each
: time, before having her memory wiped).
:
: Beauty is put to sleep. When she wakens she feels great peace of mind
: knowing that her odds of dying by the end of the day are now less than 1 in
: 1750. Wow she thinks, I really don't have much to worry about with that
: whole disease thing. This procedure, while not actually having done
: anything to decrease her odds of dying, has put her mind at rest. The day
: seems to pass quickly, just like any other day, with her mind at ease. She
: is told that in 15 minutes, she will possibly be in the hour period where
: she risks death. Beauty laughs as she thinks about how improbable that
: is. "Oh wait, my daughter should be here by now", thinks Beauty. "Oh
: well, I'm tired. I'll just see her in the morning." And with that
: thought, Beauty turns out her light and goes to sleep.
(Rather, Beauty should think that this is almost certainly not the day
of the test, so she has no reason to expect her daughter to arrive
that day.)
Yes, this is a counter-intuitive situation. But one can get a similar
situation without assuming the thirdist position, so I don't think it
is actually much of an argument against thirdism.
For suppose instead that Beauty's possible condition is not fatal. If
she has it, it will start to affect her in 24 hours time, and will
last for the rest of her life, except that she will be clear of it for
the first 30 seconds after waking each day. She wants to see her
daughter before the condition starts to affect her. (Make up your own
plausible details if you can!) And suppose that she will have her
memory wiped every other day for ten years, as above, regardless of
whether she has the condition or not.
She wakes up. She should reckon the probability that she has the
condition as 1/2, and that (independently of whether she has the
condition) it is equally likely to be any of the approximately 1800
possible days. (Assume for simplicity that she will not die during
the next 10 years.)
Thirty seconds pass, and the condition does not affect her. She can
now rule out the cases where she has the condition and it is any day
other than the first day. The other cases remain equally probable, so
she should now reckon the probability that she has the condition as
about 1/1800. She is in the same situation as in Matt's scenario.
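The case count, in code (a sketch; the 1800-day figure is the approximation
above):

    # Before the 30 seconds: (condition?, day) is uniform over 2 * 1800 cases.
    days = 1800
    # Thirty symptom-free seconds rule out "condition & any day but the first",
    # leaving 1 condition-case against all 1800 condition-free cases.
    p_condition = 1 / (1 + days)
    print(p_condition)  # ~1/1800, the same situation as in Matt's scenario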
(I am not putting this forward as an independent argument for
thirdism, but just as an attempt to show that the anti-thirdist
argument proposed is faulty.)
--
John Rickard <John.R...@virata.com>
Your most recent post makes an analogy between these two situations (and I
understood its purpose):
Situation 1:
A coin is flipped. Heads or tails, Beauty is woken every day of the week. If the
coin was tails, she is given a sticker a few minutes after waking up that says
"You are awake". If the coin was heads, she only gets the sticker on Monday.
Situation 2: (originalish)
A coin is flipped. On heads she is woken up only on Monday, and on tails she is
woken up every day of the week.
Now, you would argue that she is in the same situation in 1 after getting her
sticker as she is in 2 after being woken up. There is much evidence to this effect. I
have trouble dealing with this in terms of the "surprise" test. In 1 she is
mildly surprised to get a sticker and concludes the coin likely came up tails when
she gets it. In 2 she isn't surprised at all.
Matt McLelland wrote:
> I have vowed not to post about Beauty anymore to the group until I have a new
> idea... but I will tell you that it still bothers me every time I think about it.
[...]
Suppose there is a long-run of hypothetical (independent)
repetitions of the experiment, and let h/t be her guess
as to whether H/T has been tossed.
If (T,h,h), say, occurs, then has she been incorrect twice,
or only once? Do the two answers in (T,h,h) express two
opinions or is it only one opinion expressed twice? And
if it's just one opinion expressed twice, does that make
it incorrect twice or only once?
I think the whole sense of paradox or dissonance turns on
implicitly viewing the problem first in one of these ways
and then the other, getting different answers, and not
realizing that different interpretations are being used.
--
r e s (Spam-block=XX)
Matt McLelland <mat...@flash.net> wrote ...
> From a frequentist viewpoint, paradox has been claimed by
> saying "If she were to always guess Tails, then she would
> be right 2/3 of the time", but consider:
>
> Suppose there is a long-run of hypothetical (independent)
> repetitions of the experiment, and let h/t be her guess
> as to whether H/T has been tossed.
>
> If (T,h,h), say, occurs, then has she been incorrect twice,
> or only once? Do the two answers in (T,h,h) express two
> opinions or is it only one opinion expressed twice? And
> if it's just one opinion expressed twice, does that make
> it incorrect twice or only once?
I was never happy with their frequentist argument - but consider this one.
It is easier to phrase in terms of clones. They like to run the
experiment N times and then pick a random clone and ask what he is thinking.
I see logical problems with these arguments (that you are bringing up). On
the other hand, suppose that a guy is going to play this game (heads wins a
dollar and some pennies; on tails he is cloned and loses one dollar). Then,
he and any clones are allowed to play again. And again. When you are done
with this there will be a certain number of clones. If you work out the
math, you will find that the odds are that most of them will be poorer than
when this began.
This seems to me pretty compelling. I would like (from a halfist position)
to say to you that playing the game once is a good idea. However, it is a
good bet to play it the first time, why shouldn't each clone think it still
to be a good bet? I can't think of a good reason. Unfortunately, if you are
interested in the money you will have after you are completely finished
playing the game, then you should favor not playing at all to playing ten
times (this is based on the assumption that you will randomly be one of the
clones when it is over). I don't know how it can be that it is advantageous
to play once, but detrimental to play ten times.
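A simulation (mine) of the repeated cloning game; I've assumed heads pays
$1.05 for "a dollar and some pennies":

    import random

    # Heads: +$1.05.  Tails: you are cloned and each copy pays $1.
    random.seed(4)
    poorer = people = 0
    for _ in range(1_000):                    # whole 10-round experiments
        bank = [0.0]                          # one player, starting even
        for _ in range(10):
            nxt = []
            for b in bank:
                if random.random() < 0.5:
                    nxt.append(b + 1.05)               # heads
                else:
                    nxt.extend([b - 1.0, b - 1.0])     # tails: clone, both pay
            bank = nxt
        poorer += sum(b < 0 for b in bank)
        people += len(bank)
    print(poorer / people)  # ~0.79: most of those standing at the end lost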
This is true. It is also true that the 'original' is likely to be
richer than when this began.
This scenario penalizes each clone $1 upon creation, which is why the
average over all clones is a loss. However, if you do not care about
the fate of your subsequent clones, it is always right to bet.
I offer you a bet, flipping a fair coin:
A. if a fair coin is heads, you get $1.10
B. if a fair coin is tails, you pay me $1.00
AND a third party must pay me $1.00
If you're indifferent to the welfare of the third party, you should take
this bet. But: on average you and the third party lose money.
Everyone allowed a decision is acting rationally, but the one forced to
play (the third party / the new clone) is getting screwed over. I don't
see a paradox here.
Tim
[...]
> Everyone allowed a decision is acting rationally, but the one forced to
> play (the third party / the new clone) is getting screwed over. I don't
> see a paradox here.
True, but this isn't where the paradox lives. The paradox (to me) lies in this
situation:
A coin is flipped 10 times in a row.
If (and only if) it comes up tails each time, then you are cloned a million times.
When you are wakened, after the possible cloning, you (and clones of you) are
asked "Were they all tails?"
If you answer correctly, you live happily.
If you answer incorrectly, you are tortured and killed.
The paradox is that thirders would answer yes to this question. (I wouldn't - but
can't justify my decision)
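For reference, the conditional probability that per-person bookkeeping
assigns here (a sketch; the weighting by number of people asked is the same
one used for awakenings earlier in the thread):

    # 1/1024 of the time, 1,000,001 people are asked; otherwise just one.
    p_all_tails = (1 / 2) ** 10
    people_if_all_tails = 1_000_001
    people_otherwise = 1

    posterior = (p_all_tails * people_if_all_tails) / (
        p_all_tails * people_if_all_tails
        + (1 - p_all_tails) * people_otherwise)
    print(posterior)  # ~0.999, the figure computed in the reply below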
Don't jump to conclusions about "thirders."
I've taken a "thirder" position despite the fact that initially it
seemed totally against the "obvious" intuitions; the reason I've done
so is because the alternative seemed to lead to worse paradoxes.
And my "thirder" position is limited to giving the answer 1/3 when
asked to compute a certain well-phrased conditional probability.
The guess you're asking the victim of the above experiment to make is
not a well-phrased conditional probability of the sort to which I just
referred. He might *use* that probability in making a decision about
how to answer, but you haven't specified how.
In the example you've given, I'd calculate that
P(all tails | I was here in the particular room where I was
at the particular moment that that question was asked)
is approximately 0.999. Now it might seem natural that since I
(original or whatever clone I am) *did* happen to be in a particular
room and hear that particular question at that particular time,
I should take 0.999 as the simple probability of "all tails" in
any decision calculation I make. But the decision doesn't follow
a fortiori even from such a conclusion, nor does it follow that
it's inconsistent with the decision that my before-the-possible-cloning
self would have wanted me to make.
For example, suppose that before the cloning, my motivation is to
maximize the chance of "me" (the original) surviving, without
regard for any clones. Then I'd hope my answer will be "no."
When I wake up, will I be able to answer "no" even if I think the
probability of all tails is now 0.999? Well, the fact is that I'm
almost certainly a clone according to this viewpoint, and if I've
managed to hold onto my earlier conviction that the suffering of
clones doesn't matter, then I should be happy to go to my death
*if* I'm a clone. The only possibility that I'm worried about is
the remote possibility that I'm *not* a clone, in which case getting
tortured and killed would be a Bad Thing. But
P(all tails | I'm not a clone) = 1/1024
in my viewpoint, so I'd better not bet on all tails. I answer "no".
On the other hand, suppose before the possible cloning, I believe that
all clones are thinking, feeling humans with the same rights as
myself. In that case, it doesn't seem unreasonable to conclude that
it is better to go to an almost certain death myself (expected pain
and suffering = 1, for reference) rather than take a 1/1024 chance of
killing a million people (expected pain and suffering = 976.6). If I
believe this, then before the possible cloning I would believe that
the correct answer is "yes". After the cloning, my belief is
consistent with self-interest (under the viewpoint that I'm almost
certainly a clone) and indeed I will answer "yes."
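Putting numbers to the two analyses above (a sketch; the death-counting is
mine):

    p_all_tails = 1 / 1024

    # Selfish original: given that I'm the original, the coin sits at its
    # prior, so answering "yes" is wrong with probability 1023/1024.
    print(1 - p_all_tails, p_all_tails)   # wrong-if-yes vs wrong-if-no: say "no"

    # Clones count equally: expected deaths under each answer.
    deaths_if_yes = (1 - p_all_tails) * 1       # just me, 1023/1024 of the time
    deaths_if_no = p_all_tails * 1_000_000      # a million, 1/1024 of the time
    print(deaths_if_yes, deaths_if_no)          # ~0.999 vs ~976.6: say "yes"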
Both these sets of beliefs turn out to result in post-awakening
answers that are consistent with pre-sleeping desire. Alternatively,
I'm sure you could come up with reasonable histories in which I don't
have enough strength of conviction to hold to the answer I previously
wanted my future self to give. (For example, when I perceive myself
as the original I don't care what happens to clones, but when I
believe I'm actually a clone my earlier set of values doesn't seem
as attractive any more.)
By the way, I wrote about another interpretation of the problem that I
thought was too Newcombish. The paradox of the Newcomb problem is
that prior to the predictor committing to a prediction, it seems
that the expected outcome for you is much better if you take just one
box. But when you are actually asked to make the choice, the
predictor has predicted and a dominance argument seems to show that
you should take both boxes. My position w.r.t. Newcomb is that I'd
rather take one box, and I pray I have enough conviction actually to
carry through with that despite the apparent irrationality in light of
the dominance argument. (Or at least, I pray the predictor will
believe I'll carry through.)
So my position on Newcomb is an awful lot like the implied argument
above, "Don't you hope that you'd still say 'no' even after you wake
up?" It seems that it is possible to set up bizarre situations where
the "best" thing to do at a certain decision point is to take option
P, even though all rational decision procedures applied in the local
context of that decision point say you should do the opposite, not-P.
The ability to set up such a bizarre situation doesn't seem to me to
say much if anything about the values of the inputs (such as
conditional probabilities of certain events) to the rational decision
procedures themselves, it just raises the possibility that you might
want to act in a locally irrational way sometimes.
> Matt McLelland <mat...@flash.net> wrote:
> >A coin is flipped 10 times in a row.
> >If (and only if) it comes up tails each time, then you are cloned a
> >million times.
> >When you are wakened, after the possible cloning, you (and clones of you) are
> >asked "Were they all tails?"
> >If you answer correctly, you live happily.
> >If you answer incorrectly, you are tortured and killed.
> >
> >The paradox is that thirders would answer yes to this question. (I
> >wouldn't - but can't justify my decision)
>
> Don't jump to conclusions about "thirders."
Actually, most of your points are consistent with the conclusions I have jumped to
about thirders (I think).
> I've taken a "thirder" position despite the fact that initially it
> seemed totally against the "obvious" intuitions; the reason I've done
> so is because the alternative seemed to lead to worse paradoxes.
> And my "thirder" position is limited to giving the answer 1/3 when
> asked to compute a certain well-phrased conditional probability.
I think that the question asked in this example, with the selfish motivation I
have in mind, requires you to simply check that conditional probability. (I
don't deny that your objection was valid - I am now specifying the motivation I am
interested in).
> For example, suppose that before the cloning, my motivation is to
> maximize the chance of "me" (the original) surviving, without
> regard for any clones.
I reject this motivation. Someone else can think like this, but not I. Even if I
did, *I* am not the only one who gets to make the decision - my clone does too.
> On the other hand, suppose before the possible cloning, I believe that
> all clones are thinking, feeling humans with the same rights as
> myself. In that case, it doesn't seem unreasonable to conclude that
> it is better to go to an almost certain death myself (expected pain
> and suffering = 1, for reference) rather than take a 1/1024 chance of
> killing a million people (expected pain and suffering = 976.6).
First of all, any value I am going to place in the clones is going to be motivated
by selfishness. For the purposes of this analysis, assume that I would not lose a
night's sleep if everyone (save rec.puzzlers) in Europe was killed in a ball of
fire.
With that said, it seems to me that if someone threatens to clone the earth and me
with it, I have no reason to care. I am not going to be excited if I am living a
good life, nor upset if my life is pain filled. I can imagine my thoughts before
and after the cloning - no noticeable difference. Why should I care? What I find
extremely odd is that the thirder position seems to force the viewpoint that I
should care about this.
> If I believe this, then before the possible cloning I would believe that
> the correct answer is "yes". After the cloning, my belief is
> consistent with self-interest (under the viewpoint that I'm almost
> certainly a clone) and indeed I will answer "yes."
Yes, this works out. Now, could we have started with a firm belief that our
answer upon waking should be "Yes" and then worked backward to conclude that we
should answer "Yes" beforehand? In other words, forget assigning some arbitrary
weight to the "worth" of future clones and calculate that worth. It seems that
from the thirder position you should be able to do this, and that it really is
twice as bad to be tortured along with a clone as opposed to being tortured alone.
> By the way, I wrote about another interpretation of the problem that I
> thought was too Newcombish. The paradox of the Newcomb problem is
> that prior to the predictor committing to a prediction, it seems
> that the expected outcome for you is much better if you take just one
> box. But when you are actually asked to make the choice, the
> predictor has predicted and a dominance argument seems to show that
> you should take both boxes. My position w.r.t. Newcomb is that I'd
> rather take one box, and I pray I have enough conviction actually to
> carry through with that despite the apparent irrationality in light of
> the dominance argument. (Or at least, I pray the predictor will
> believe I'll carry through.)
This reminds me of some Star Trek episode I once saw. The crew gets a glimpse of
the future and is desperately trying to avoid fate. Of course, if the prediction
is correct, you can't avoid it. I find this Newcomb paradox easily resolved - you
can simply think of the predictor as acting after you have chosen.
> So my position on Newcomb is an awful lot like the implied argument
> above, "Don't you hope that you'd still say 'no' even after you wake
> up?" It seems that it is possible to set up bizarre situations where
> the "best" thing to do at a certain decision point is to take option
> P, even though all rational decision procedures applied in the local
> context of that decision point say you should do the opposite, not-P.
> The ability to set up such a bizarre situation doesn't seem to me to
> say much if anything about the values of the inputs (such as
> conditional probabilities of certain events) to the rational decision
> procedures themselves, it just raises the possibility that you might
> want to act in a locally irrational way sometimes.
If I understand the Newcomb paradox, I think it can be handled in a perfectly
rational manner. I don't think that you can decide to act irrationally. If you
decided upon it, it isn't irrational. Furthermore, with all of the information,
I think you should be able to formulate your plan of action for any time at any
other time.
In a way, it seems that we're forced into some position like this in
order to resolve the paradox. It's still not entirely comfortable,
though, is it? (That's the problem with the "thirder" position--
no matter how well we can make the math work, it still doesn't seem
quite kosher.)
On the other hand we could take the above as just one of the many
perplexities that arise out of duplicating a person's future (by
creating identical replicas in identical environments), and the
philosophical paradoxes of the Star Trek transporter.
>If I understand the Newcomb paradox, I think it can be handled in a
>perfectly rational manner. I don't think that you can decide to act
>irrationally. If you decided upon it, it isn't irrational.
Good point. I would want to act rationally. It just happens that what
I think is "rational" behavior in the Newcomb situation doesn't match
what some other people think is "rational." But the fundamental
difficulty there is that the predictor must indeed predict *before* I choose
(i.e., while I am still thinking, and some time before I take any
external action to commit myself). It really should be possible to
keep such difficulties out of the Sleeping Beauty discussion, however.
The original Sleeping Beauty doesn't have this problem. If she is to be
awakened either once or a million times, she must (as a rational thirder)
say that they were all tails. But if she's wrong, she only gets tortured
and killed once. She doesn't get tortured and killed a million times.
Cloning creates paradoxes. The utilities of individual clones cannot
be added to produce a meaningful result. The Prisoner Paradox has the
same problem.
The original Sleeping Beauty problem can be modified. If she guesses
wrong, she will be tortured a million times but not killed. Then she
can compute the chances of a happy life and the utilities of each
answer. But if the utility of a happy life isn't high enough in
proportion to the negative utility of a million tortures, she should
decline the bet. No paradox there. If you force her to take a bad
bet without giving her a choice, see Tim Firman's words.
> Cloning creates paradoxes. The utilities of individual clones cannot
> be added to produce a meaningful result. The Prisoner Paradox has the
> same problem.
I would argue that the same paradox exists with sleeping beauty. You would like to say
that when she gets woken and tortured a million times, she should count that as a
million times worse than only being tortured once. I think this is on very shaky
ground. It would presumably be a million times worse if she didn't lose her memory
after each torture. Do you think the memory loss changes nothing? I can see a strong
argument in favor of the belief that getting tortured a million times in a row,
with your memory wiped after each session, is only as bad as being tortured
once and losing your memory. Of course this assumes no scars or permanent damage - just
pain. I think that you can make a strong case that "we are what we remember we are."
At no point does Beauty remember being tortured more than once.
> The original Sleeping Beauty problem can be modified. If she guesses
> wrong, she will be tortured a million times but not killed. Then she
> can compute the chances of a happy life and the utilities of each
> answer. But if the utility of a happy life isn't high enough in
> proportion to the negative utility of a million tortures, she should
> decline the bet. No paradox there. If you force her to take a bad
> bet without giving her a choice, see Tim Firman's words.
I have just as hard a time adding utilities in the Sleeping Beauty case as I do in the
case of the clones.