
Probability Problem


John M. Rulnick

Aug 19, 1993, 4:20:32 PM
I am still receiving emails about my challenge to the Tilly claim, and
am happy to finally be making some headway. However, there appear to
be plenty of holdouts, and many are saying something similar to this
person's comment:

However, it seems to me that the quantity that you are computing is
the probability of winning _in_the_long_run_.

Let me say that I disagree that this phrase is necessary (or
appropriate). It implies some need for repeated trials, averaging,
what have you. There is no need for such a phrase; I am calculating
the probability of a "win" under the given strategy, assuming "2 real
numbers" about which we have "no information."

The person continues:

Now, if this is true,
then we are, in fact, in complete agreement. Since you can, on any
given trial, make my probability of winning as close as you like to
0.5, clearly I cannot win strictly more than half the trials IN THE
LONG RUN. Having assumed that this is your interpretation of the
claim, your comment about the sigma algebras becomes clear. Under
my reading of the claim, however, it is simply irrelevant.

S/he concludes:

If you agree that our interpretations of the claim
are different, then I think we both understand more than we did.
_Then_ we can discuss whether it's your or my interpretation that's
mistaken, and how the wording of the claim should be altered to avoid
the mistake.

Now it appears to me that these people are solving the following problem:

(1) "Prove that, for any distinct real numbers a and b and any random
variable X with distribution F (possessing a density and having the
real line as support), P(X > min(a,b)) + P(X < max(a,b)) > 1."

But of course this follows from the definition of a distribution
function! (Divide both sides of the inequality by 2 to get precisely
the "> 0.5" statement.)
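Problem (1) is easy to check numerically for a particular F; a minimal sketch, taking X standard normal (so F is the normal CDF) and two hypothetical distinct reals of my own choosing:

```python
import math

def Phi(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical distinct reals a and b; X ~ N(0,1), so F = Phi.
a, b = -0.7, 1.3
lo, hi = min(a, b), max(a, b)

# P(X > min(a,b)) + P(X < max(a,b)) = (1 - Phi(lo)) + Phi(hi),
# which exceeds 1 by exactly P(min < X < max) = Phi(hi) - Phi(lo).
total = (1.0 - Phi(lo)) + Phi(hi)
print(total)
```

The excess over 1 is just the probability mass F puts strictly between the two numbers, which is positive whenever F has the real line as support.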

I hardly think anyone would be "amazed" by this "paradox" (though I
have been wrong before). The statement I addressed is:

(2) "Suppose that you have 2 real numbers that are not the same. You
hand me one of them randomly and I look at it. With no more
information I can make an educated guess as to which is larger, and
know that I have _better_ than even odds of being right."

Now this, if true, would justifiably amaze people. It would amaze me.
But it is false, and not false "in the long run" or with any other
qualification. It is false, and I have shown that it cannot be
proven.

Now you may see the two problems I've just cited as equivalent, in
which case it will be up to you whether to accept both or accept
neither. Of course you know where I stand. If you see them as the
same problem, and you accept both, then (since you may, like the
writer cited above, be interested in properly wording Tilly's problem)
I will suggest the wording in (1). Assuming you think they are
equivalent, I believe the wording in (1) is far less ambiguous. But,
of course, I doubt Tilly (or anyone) would want to preface (1) with

"The following problem is the strangest math result that I have
ever heard."

as he did when he introduced the problem to sci.math.

John Rulnick
rul...@ee.ucla.edu

Benjamin J. Tilly

Aug 22, 1993, 1:26:47 AM
In article <11...@lee.SEAS.UCLA.EDU>
rul...@nimbus.seas.ucla.edu (John M. Rulnick) writes:

> Now it appears to me that these people are solving the following problem:
>
> (1) "Prove that, for any distinct real numbers a and b and any random
> variable X with distribution F (possessing a density and having the
> real line as support), P(X > min(a,b)) + P(X < max(a,b)) > 1."
>
> But of course this follows from the definition of a distribution
> function! (Divide both sides of the inequality by 2 to get precisely
> the "> 0.5" statement.)
>
> I hardly think anyone would be "amazed" by this "paradox" (though I
> have been wrong before). The statement I addressed is:
>

Nobody is generally amazed by the solution to a strange problem. And
yes this is part of a solution. I say part because one has to specify F
in the solution to the problem as I gave it.

> (2) "Suppose that you have 2 real numbers that are not the same. You
> hand me one of them randomly and I look at it. With no more
> information I can make an educated guess as to which is larger, and
> know that I have _better_ than even odds of being right."
>
> Now this, if true, would justifiably amaze people. It would amaze me.
> But it is false, and not false "in the long run" or with any other
> qualification. It is false, and I have shown that it cannot be
> proven.
>

I have yet to see you show that. I am beginning to question whether you
have read this problem at all. Let us go through the statement carefully.
The first sentence is "Suppose that you have two real numbers that are
not the same." Note that before I say anything, before anything is
defined, you have to actually have the two numbers.

Next the experiment is defined as your handing me one of the numbers at
random and me handing you my guess. Now what exactly is my assertion?
It is that I have better than even odds of being right in the
experiment as defined. And what is the experiment defined to be? It is
defined to be you handing me at random one of the two numbers that you
actually have. Therefore the experiment explicitly depends on the
numbers that you have, and the probability of my being right has to be
worked out with the assumption that the two numbers that you have
actually exist. Therefore all questions of how you got them are totally
irrelevant to the problem as stated.

> Now you may see the two problems I've just cited as equivalent, in
> which case it will be up to you whether to accept both or accept
> neither.

If you think that the problem as I stated it cannot be solved by
solving the other one, then I challenge you to show it.

> Of course you know where I stand. If you see them as the
> same problem, and you accept both, then (since you may, like the
> writer cited above, be interested in properly wording Tilly's problem)
> I will suggest the wording in (1). Assuming you think they are
> equivalent, I believe the wording in (1) is far less ambiguous. But,
> of course, I doubt Tilly (or anyone) would want to preface (1) with
>
> "The following problem is the strangest math result that I have
> ever heard."
>
> as he did when he introduced the problem to sci.math.
>
> John Rulnick
> rul...@ee.ucla.edu

Two things. One is I wish that you would let this entire issue drop.
You made a mistake, so what? You are human. The other thing is that if
you think that the wording is wrong then I am waiting for a clear
explanation of why. When I went through the wording, the meaning looked
quite precise. Sure you have to read it word for word, but it meant
exactly what it said. And the first sentence makes it clear upon a
close reading that before anything else happens, before any
probabilities are defined, you actually have to *have* the two numbers.
That is an important point. Once you get it perhaps you will let this
thread drop _permanently_.

Ben Tilly

Paul Hughett

Aug 24, 1993, 11:38:21 PM
In article 46523 of sci.math, Benjamin...@dartmouth.edu writes:

B> The following problem is the strangest math result that I have ever
B> heard. (And was also the cause of an argument not long ago on the IAMS
B> mailing list.) I am presenting it here for general interest. BTW, it is
B> not original, I got it from Laurie Snell.

B> Suppose that you have 2 real numbers that are not the same. You hand me
B> one of them randomly and I look at it. With no more information I can
B> make an educated guess as to which is larger, and know that I have
B> _better_ than even odds of being right. How? (Please try and think
B> about it for a few minutes rather than looking at the answer right
B> away.)

B> Solution. I pick a random number from a normal distribution and pretend
B> that it is the number that you did not give me. Therefore if you gave
B> me a larger number then I say that I was given the larger number and
B> vice versa.

B> Now there are a couple of ways to see that I have better than even odds
B> of being right. One is to note that since the normal distribution is
B> continuous the chance that I pick one of the numbers that you did is 0.
B> Furthermore if the one that I picked is larger or smaller than both of
B> your numbers then I have even odds of getting the right answer. But if
B> I picked a number that is between your numbers then I *will* get the
B> right answer. The last case has probability greater than 0 no matter
B> what your numbers are (how much greater depends on what your numbers
B> are though) so I know that I have better than even odds of being right.

B> Another way to come to the same conclusion is to note that the larger
B> the number that you give me the more likely I am to think that I have
B> been given the larger number. Therefore by Bayes formula the odds that
B> I have the larger given that I think that I do are better than even,
B> and the same is true if I think that I have the smaller. Therefore I
B> have better than even odds of being right.
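The quoted strategy lends itself to a quick sanity check by simulation. A minimal sketch, assuming a hypothetical fixed pair {-1, 1}, a fair coin deciding which number is handed over, and an N(0,1) surrogate:

```python
import random

def ben_guesses_right(pair, trials=200_000, seed=1):
    """Estimate P(Ben guesses right): he is handed one of `pair` at
    random, draws a normal surrogate s, and guesses that the visible
    number is the larger exactly when it exceeds s."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        v = rng.choice(pair)                       # visible number, fair coin
        h = pair[0] if v == pair[1] else pair[1]   # hidden number
        s = rng.gauss(0.0, 1.0)                    # surrogate from N(0,1)
        guess_v_larger = v > s
        if guess_v_larger == (v > h):
            wins += 1
    return wins / trials

print(ben_guesses_right((-1.0, 1.0)))  # about 0.84 for this pair
```

The edge over 0.5 shrinks as the pair moves far from where the surrogate's mass sits, but stays strictly positive for any fixed pair.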

Ben's conclusion is _not_ true in general, though it is true if some
additional conditions are imposed on the way that I randomly choose one
of the numbers to hand to him.

Here is my counterexample: Say the two numbers are x and y. I flip
a fair coin. If it comes up heads I hand him the larger number;
otherwise I hand him the smaller number. Let v be the (visible) number
that I hand him and h be the hidden number. Given this method of
random selection, h < v and v < h are equally likely and Ben cannot guess
which holds with better than a 50:50 chance.

Now this seems a bit paradoxical, since Ben's analysis makes no
reference to the characteristics of my selection process for v except
that it is "random." Let's analyze this a little more closely.

Let s be the surrogate number that Ben picks from a normal distribution.
The probability that Ben gets the right answer is

P(right) = P( (v < h & v < s) or (v > h & v > s) )

= P( (v < h & v < s & x < y) or
(v < h & v < s & x > y) or
(v > h & v > s & x < y) or
(v > h & v > s & x > y) )

Note that v < h & x < y implies v = x and similarly for other cases.

= P( (v = x & x < s & x < y) or
(v = y & y < s & x > y) or
(v = y & y > s & x < y) or
(v = x & x > s & x > y) )

The various events are disjoint, so we have

P(right) = P( v = x & x < s & x < y )
+ P( v = y & y < s & x > y )
+ P( v = y & y > s & x < y )
+ P( v = x & x > s & x > y )

Now we _assume_ that the events { v = x }, { x < s }, { y < s }, and
{ x < y } are all independent. Then

P(right) = P(v = x) P(x < s) P(x < y)
+ P(v = y) P(y < s) P(x > y)
+ P(v = y) P(y > s) P(x < y)
+ P(v = x) P(x > s) P(x > y)

We _assume_ further that P(v = x) = P(v = y) = 0.5. Then

P(right) = 0.5 P(x < y) [ P(x < s) + P(s < y) ]
+ 0.5 P(x > y) [ P(x > s) + P(s > y) ]

= 0.5 P(x < y) [ P(x < s < y) + P(s > y) + P(s < y) ]
+ 0.5 P(x > y) [ P(x > s > y) + P(s < y) + P(s > y) ]

= 0.5 P(x < y) [ P(x < s < y) + 1 ]
+ 0.5 P(x > y) [ P(x > s > y) + 1 ]

= 0.5 + 0.5 P(x < y) P(x < s < y) + 0.5 P(x > y) P(x > s > y)

> 0.5

because, since Ben is choosing s from a normal distribution, both
P(x < s < y) and P(x > s > y) are strictly greater than zero.
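Under assumptions 1-3 this bottom line can be checked numerically; a sketch with hypothetical fixed numbers x < y (so the advantage over 0.5 is half the chance that s lands between them), comparing a Monte Carlo estimate against that closed form:

```python
import math
import random

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical fixed numbers with x < y; v chosen by a fair coin
# (assumptions 2 and 3 hold); s is Ben's N(0,1) surrogate.
x, y = -0.5, 1.5
rng = random.Random(2)
trials = 200_000
wins = 0
for _ in range(trials):
    v, h = (x, y) if rng.random() < 0.5 else (y, x)
    s = rng.gauss(0.0, 1.0)
    if (v > s) == (v > h):    # Ben guesses "v is larger" iff v > s
        wins += 1

closed_form = 0.5 + 0.5 * (Phi(y) - Phi(x))  # 0.5 + 0.5 * P(x < s < y)
print(wins / trials, closed_form)            # both near 0.81 here
```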

Good! We now have Ben's conclusion, at the cost of making the
assumptions stated above. Let's take a closer look at those assumptions.

1. The events {x < s} and {y < s} are independent of {v = x} and
{x < y}. This seems reasonable enough to me; Ben is choosing s and
presumably can arrange for the necessary independence.

2. The events {v = x} and {x < y} are independent. Not so reasonable
this time. I get to choose v and there is nothing in the problem statement
that compels me to make it independent of the relative magnitudes of
x and y. In fact, this is exactly how my counterexample manages to
reduce P(right) to 0.5.

3. P(v = x) = P(v = y) = 0.5. Again, not so reasonable. The
rules don't require me to choose v this way and I can probably mess up
Ben's strategy by violating this assumption. (I might need to know
what distribution Ben is choosing s from; I have not analyzed this
possibility in detail.)

What we have here is an illustration of the difference between
a probabilist and a game theorist. The probabilist can reasonably
argue that assumptions 2 and 3 are implicit in the problem statement;
absent an explicit statement to the contrary, random choices are uniform
over the possibilities and independent of everything else. The game
theorist, on the other hand, takes the position that he can make his
random choice in any way not explicitly prohibited by the rules. In
short, the probabilist and the game theorist make different default
assumptions. Ben's conclusion is true under one reasonable set of
assumptions but false under another reasonable set of assumptions.


--
*-----------------------------------------------------------------------------
* Paul Hughett hug...@eecs.berkeley.edu
* EECS Department
* University of California at Berkeley

Mark van Hoeij

Aug 25, 1993, 8:28:48 AM
In <33...@dog.ee.lbl.gov> hug...@iceberg.lbl.gov (Paul Hughett) writes:

> Here is my counterexample: Say the two numbers are x and y. I flip
>a fair coin. If it comes up heads I hand him the larger number;
>otherwise I hand him the smaller number. Let v be the (visible) number
>that I hand him and h be the hidden number. Given this method of
>random selection, h < v and v < h are equally likely and Ben cannot guess
>which holds with better than a 50:50 chance.

I think you should first decide if you will give the bigger or the
smaller number by use of a fair coin, and then choose the numbers x and y.
Otherwise the reasoning with the distribution of the numbers still holds.

Mark van Hoeij

Paul Hughett

Aug 25, 1993, 2:06:20 PM
In <33...@dog.ee.lbl.gov> hug...@iceberg.lbl.gov (Paul Hughett) writes:

P> Here is my counterexample: Say the two numbers are x and y. I flip
P> a fair coin. If it comes up heads I hand him the larger number;
P> otherwise I hand him the smaller number. Let v be the (visible) number
P> that I hand him and h be the hidden number. Given this method of
P> random selection, h < v and v < h are equally likely and Ben cannot guess
P> which holds with better than a 50:50 chance.

In article <CCBEo...@sci.kun.nl> ho...@sci.kun.nl (Mark van Hoeij) writes:

M> I think you should first decide if you will give the bigger or the
M> smaller number by use of a fair coin, and then choose the numbers x and y.
M> Otherwise the reasoning with the distribution of the numbers still holds.

The point of my article is that Ben's probability of success depends
on exactly how I choose the two numbers and which one to give him--a fact
which is not apparent in Ben's analysis. You can indeed recover Ben's
result by changing my selection method.

To give another example of a selection strategy in which Ben's result
P(right) > 0.5 does hold: I choose x and y independently from a standard
normal distribution and give him the first of them.
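This last selection strategy is easy to simulate; a sketch (the 2/3 success rate it converges to, for three i.i.d. normals, is my own back-of-envelope figure rather than a claim from the thread):

```python
import random

rng = random.Random(3)
trials = 200_000
wins = 0
for _ in range(trials):
    x = rng.gauss(0.0, 1.0)   # chooser draws both numbers i.i.d. N(0,1)...
    y = rng.gauss(0.0, 1.0)
    v, h = x, y               # ...and always hands over the first one
    s = rng.gauss(0.0, 1.0)   # Ben's surrogate
    if (v > s) == (v > h):    # Ben is right unless v is the middle of the three
        wins += 1
print(wins / trials)          # about 2/3
```

With v, h, s exchangeable, Ben errs exactly when v is the middle order statistic, which has probability 1/3, hence the 2/3.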

Paul Hughett

Benjamin J. Tilly

Aug 25, 1993, 9:10:23 PM
In article <33...@dog.ee.lbl.gov>
hug...@iceberg.lbl.gov (Paul Hughett) writes:

> In <33...@dog.ee.lbl.gov> hug...@iceberg.lbl.gov (Paul Hughett) writes:
>
> P> Here is my counterexample: Say the two numbers are x and y. I flip
> P> a fair coin. If it comes up heads I hand him the larger number;
> P> otherwise I hand him the smaller number. Let v be the (visible) number
> P> that I hand him and h be the hidden number. Given this method of
> P> random selection, h < v and v < h are equally likely and Ben cannot guess
> P> which holds with better than a 50:50 chance.
>

Not a counterexample at all. Do the computation using Bayes' formula
and see what you get. If P(x) is the probability that I guess that x is
the larger, then the exact probability that I am right is 1/2 +
(P(larger number) - P(smaller number))/2. Given that with my strategy P
is strictly increasing, my claim holds no matter how you choose which
number to give me.

> In article <CCBEo...@sci.kun.nl> ho...@sci.kun.nl (Mark van Hoeij) writes:
>
> M> I think you should first decide if you will give the bigger or the
> M> smaller number by use of a fair coin, and then choose the numbers x and y.
> M> Otherwise the reasoning with the distribution of the numbers still holds.
>
> The point of my article is that Ben's probability of success depends
> on exactly how I choose the two numbers and which one to give him--a fact
> which is not apparent in Ben's analysis. You can indeed recover Ben's
> result by changing my selection method.
>

I have said time and time again that it depends on the numbers that you
have. Given that we agree that handing me one at random means giving me
the larger with even odds and the smaller the rest of the time, my
strategy will always work. In summary, the amount by which I am better
than even depends on the numbers that you have, the truth of my
statement does not.

> To give another example of a selection strategy in which Ben's result
> P(right) > 0.5 does hold: I choose x and y independently from a standard
> normal distribution and give him the first of them.
>
> Paul Hughett
>

The selection strategy that you have for the two numbers has nothing to
do with the problem. The probability that I claim is larger than 1/2 is
only defined *after* the two numbers have been chosen. Please try the
calculations before posting next time.

Ben Tilly

Paul Hughett

Aug 27, 1993, 9:36:54 PM

I stand by my analysis of the probability problem posted by Ben Tilly.

I also now have a better counterexample in which I can easily
compute the probabilities that Ben will guess right. Consider the two
(out of many) possibilities that my two numbers are {1,2} or {1,0}. In
either case I hand Ben the number 1 (which is a random choice, if a
degenerate one) and ask him to guess whether it is larger or smaller
than the hidden number h.

Now suppose that Ben chooses his surrogate number s from a standard
normal distribution with cumulative distribution function Phi(x).
Then, in the case h = 0, he will correctly guess that h < 1 whenever
s < 1; this occurs with probability Phi(1) = .8413 and he does indeed
have better than even odds of getting the right answer. But if h = 2,
he correctly guesses that h > 1 only when s > 1; this occurs with
probability 1 - Phi(1) = .1587, so his odds of getting the right answer
are considerably worse than even. Ben does not get better than even odds
for both h = 0 and h = 2.

Suppose that Ben uses instead a normal distribution with mean 1 and
variance 1. Then, for h = 0, he correctly guesses h < 1 whenever s < 1
and has exactly even odds of getting the right answer. For h = 2, he
again has even odds. Thus he has even odds of getting the right answer
independent of the value of h.

But this is the best that he can do! There is no choice of mean and
variance such that Ben can get (strictly) better than even odds for both
h = 0 and h = 2.
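Paul's two figures can be reproduced directly; a sketch, with Phi computed from the error function:

```python
import math

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Ben sees v = 1 and draws s ~ N(0,1); he guesses "h < 1" iff s < 1.
p_right_h0 = Phi(1.0)         # h = 0: right whenever s < 1
p_right_h2 = 1.0 - Phi(1.0)   # h = 2: right whenever s > 1
print(round(p_right_h0, 4), round(p_right_h2, 4))  # 0.8413 0.1587
```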

Benjamin J. Tilly

Aug 27, 1993, 11:06:40 PM
First of all, this problem has been hashed over many times. Please, to
all out there: before posting, read what has been written on it. If you
jump in with a criticism that has already appeared, you will contribute
nothing, because your point will already have been dealt with, probably
many times, as with all of the points below.

In article <33...@dog.ee.lbl.gov>
hug...@iceberg.lbl.gov (Paul Hughett) writes:

> In article 46523 of sci.math, Benjamin...@dartmouth.edu writes:
>
> B> The following problem is the strangest math result that I have ever
> B> heard. (And was also the cause of an argument not long ago on the IAMS
> B> mailing list.) I am presenting it here for general interest. BTW, it is
> B> not original, I got it from Laurie Snell.
>
> B> Suppose that you have 2 real numbers that are not the same. You hand me
> B> one of them randomly and I look at it. With no more information I can
> B> make an educated guess as to which is larger, and know that I have
> B> _better_ than even odds of being right. How? (Please try and think
> B> about it for a few minutes rather than looking at the answer right
> B> away.)
>

(Solution deleted.)


> Ben's conclusion is _not_ true in general, though it is true if some
> additional conditions are imposed on the way that I randomly choose one
> of the numbers to hand to him.
>

The probability defined is my probability of being right in the
experiment as described above. That experiment is you hand me one of
your two numbers and I make my guess. An important point is that the
probability in question is only defined given that you actually have
the two numbers. Thus all questions that have to do with how the
numbers that you have were arrived at have nothing to do with the
problem. This point has been repeated many, many times in this thread
as Paul would have known if he had actually read the thread.

> Here is my counterexample: Say the two numbers are x and y. I flip
> a fair coin. If it comes up heads I hand him the larger number;
> otherwise I hand him the smaller number. Let v be the (visible) number
> that I hand him and h be the hidden number. Given this method of
> random selection, h < v and v < h are equally likely and Ben cannot guess
> which holds with better than a 50:50 chance.
>

False. If you look closely v and h are both *random variables*. (This
point is also old hat.) Thus the fact that v<h and h<v are equally
likely is no problem. Let us suppose that x and y are your two numbers
with x>y. Let p(r) be the probability that I would say that you gave me
the larger if you gave me r. Note that p is an increasing function with
any of the solutions that have appeared on this thread. Thus, by Bayes'
theorem, the probability that I am right is
P(I am right)=P(v=x)P(I guess that v is larger given v=x)
+ P(v=y)P(I think that v is smaller given v=y)
=(0.5)p(x) + (0.5)(1-p(y))
=1/2 + 1/2(p(x)-p(y)) > 1/2
since p is increasing. Thus your counterexample is not. Now let us go
through your analysis.

> Now this seems a bit paradoxical, since Ben's analysis makes no
> reference to the characteristics of my selection process for v except
> that it is "random." Let's analyze this a little more closely.
>
> Let s be the surrogate number that Ben picks from a normal distribution.
> The probability that Ben gets the right answer is
>
> P(right) = P( (v < h & v < s) or (v > h & v > s) )
>
> = P( (v < h & v < s & x < y) or
> (v < h & v < s & x > y) or
> (v > h & v > s & x < y) or
> (v > h & v > s & x > y) )
>
> Note that v < h & x < y implies v = x and similarly for other cases.
>
> = P( (v = x & x < s & x < y) or
> (v = y & y < s & x > y) or
> (v = y & y > s & x < y) or
> (v = x & x > s & x > y) )
>
> The various events are disjoint, so we have
>
> P(right) = P( v = x & x < s & x < y )
> + P( v = y & y < s & x > y )
> + P( v = y & y > s & x < y )
> + P( v = x & x > s & x > y )
>
> Now we _assume_ that the events { v = x }, { x < s }, { y < s }, and
> { x < y } are all independent. Then
>

Actually x and y are not random variables. If you read the problem
again you will see that they are the two numbers that you had before
anything happened. I was very clear about that when I said, "Suppose
that you have two real numbers that are not the same." But let us ignore
that for now and go on.

1) Your counterexample is not a counterexample.
2) If you read the statement carefully you will see that x<y or y<x has
to be fixed. Thus this assumption is already made in the problem
statement.

> 3. P(v = x) = P(v = y) = 0.5. Again, not so reasonable. The
> rules don't require me to choose v this way and I can probably mess up
> Ben's strategy by violating this assumption. (I might need to know
> what distribution Ben is choosing s from; I have not analyzed this
> possibility in detail.)
>

Actually this is an assumption. I have used the standard assumption
that if you have a finite set and I ask you to choose randomly from it,
then you will use the distribution that gives equal chances to each
possibility unless specified otherwise. If you change this convention
then you can mess me up. And yes you will need to know my distribution
to be certain of giving me worse than even odds. And again these are
points that have come up before.

> What we have here is an illustration of the difference between
> a probabilist and a game theorist. The probabilist can reasonably
> argue that assumptions 2 and 3 are implicit in the problem statement;
> absent an explicit statement to the contrary, random choices are uniform
> over the possibilities and independent of everything else. The game
> theorist, on the other hand, takes the position that he can make his
> random choice in any way not explicitly prohibited by the rules. In
> short, the probabilist and the game theorist make different default
> assumptions. Ben's conclusion is true under one reasonable set of
> assumptions but false under another reasonable set of assumptions.
>

I agree that 3 has to be taken as a default assumption, but 2 does not
since within the context of the problem you have the numbers x and y
before we begin talking about random decisions and probabilities. And
yes this point has also come up before.

Ben Tilly

Benjamin J. Tilly

Aug 28, 1993, 4:58:27 PM
A while ago I presented the following problem.

In article <CCG8n...@dartvax.dartmouth.edu>
Benjamin...@dartmouth.edu (Benjamin J. Tilly) writes:

> The following problem is the strangest math result that I have ever
> heard. (And was also the cause of an argument not long ago on the IAMS
> mailing list.) I am presenting it here for general interest. BTW, it is
> not original, I got it from Laurie Snell.
>
> Suppose that you have 2 real numbers that are not the same. You hand me
> one of them randomly and I look at it. With no more information I can
> make an educated guess as to which is larger, and know that I have
> _better_ than even odds of being right. How? (Please try and think
> about it for a few minutes rather than looking at the answer right
> away.)

This has generated a lot of discussion which I hope has come to an end.
For any more people who might jump in, here is a summary. Note that
there will actually be a new point in this article! :-) When I posted
this I knew that in the solution I made 2 assumptions. Note that I made
no assumptions about where you got your numbers from. That is taken
care of in the first sentence. One of the assumptions has come up, it
is that when I say that you give me a number at random, I meant that
there are even odds that you hand me the larger and even odds that you
hand me the smaller. This is a linguistic convention that is standard
in probability theory so it is no big deal. The second one, amusingly
enough, only came up once in e-mail and nobody else posted it. This was
despite a fair amount of effort in trying to shoot it down. (Not to
mention a fair number of insults. John Rulnick has yet to apologize.)
It is this: in my solution I said that I choose a random number. Now
how do I do that?

What I assumed is that we had an idealized situation in which that is
all right. This would be a standard assumption in math. But in this
case it cannot really be justified physically. If we assume that QM
really gives random predictions, an assumption that cannot be proved,
then we can make random choices out of a (finite) discrete
distribution. We cannot do better than that because of measurement
error. To choose something from a continuous random variable is
impossible. However we can pick the digits one by one, and for this
problem we do not need to know the random number exactly, we just need
to know it well enough that we can make our decision. Thus given long
enough we can actually make our decision. But now note that if the
numbers that you have are n+n^(-n) and n-n^(-n) where n=10^100 then
with any of the strategies that have been given so far in this thread,
I will have worse than even odds when you take into account the chance
that I die before I finish making my decision.
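The size of the effect for these numbers can be estimated on a log scale; a rough sketch that approximates P(x < s < y) by interval width times the N(0,1) density at n (an approximation on my part, but adequate to make the point):

```python
import math

n = 10.0 ** 100
# For s ~ N(0,1), P(n - n**-n < s < n + n**-n) is roughly
#   2 * n**(-n) * pdf(n),  where pdf(t) = exp(-t*t/2) / sqrt(2*pi).
# Work in log10 throughout, since the probability underflows any float.
log10_p = (math.log10(2.0)
           - n * math.log10(n)
           - (n * n / 2.0) / math.log(10.0)
           - 0.5 * math.log10(2.0 * math.pi))
print(log10_p)  # around -2.2e199: the edge over 1/2 is unimaginably small
```

So the strategy's advantage, while strictly positive, is smaller than any quantity one could ever hope to resolve by repeated trials.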

I am surprised that nobody out there noticed this slight flaw. Again, as
an idealized problem it can be justified, as a problem about the real
world it cannot be worked around. Now I would ask that all further
postings on this topic not just be a rehash of what has been said
already.

BTW anyone who wants to should take this problem over to some of the
newsgroups that like puzzles like this one. Of course you should be
ready to spend some time before you see the end of it. :-)

Ben Tilly

Benjamin Schoenberg

Aug 29, 1993, 7:56:02 PM
Ben,
sorry to prolong this aggravating thread, but I believe the heart of the
problem has not been resolved. As soon as we answer this question, I'll be
satisfied:
"I have two distinct reals, x and y. What is the probability that x < y?"

I claim this probability is undefined, and that this is why the wording of
your problem is incorrect. When you say that it's a common assumption that
when one of two objects is selected at random, each is selected with equal probability,
I agree. Each of x and y has an equal chance of being picked. But what is
the probability that the one selected is larger?

It is easy to confuse the issue by saying something like, "since x and y are
fixed, one of them is larger, call it L. We select x or y with equal
probability, so we select L or not L with equal probability."
But this begs the question of whether or not P(x=L) is even defined.

Example: My house is either yellow or white. What is the probability it is
white?

So my suggestion is that instead of saying, "You have two distinct reals and
you hand me one at random," you need to say, "you have two distinct reals and
you hand me either the larger or the smaller with equal probability." Just
picking one at random is not enough. There is actually more information in
that second sentence, which may be what John Rulnick was trying to say before
communication broke down.
With that suggestion I'll have to grudgingly admit that this is the strangest
result I've ever seen, too, after Banach-Tarski! Thanks (I think :-( ) for the
interesting thing to chew on.
-Ben Schoenberg

Alan J. Filipski

Aug 30, 1993, 6:15:03 PM
In article <CCHM9...@dartvax.dartmouth.edu> Benjamin...@dartmouth.edu (Benjamin J. Tilly) writes:
>
> Suppose that you have 2 real numbers that are not the same. You hand me
> one of them randomly and I look at it. With no more information I can
> make an educated guess as to which is larger, and know that I have
> _better_ than even odds of being right. How? (Please try and think
> about it for a few minutes rather than looking at the answer right
> away.)
> This has generated a lot of discussion which I hope has come to an end.

One question. When you say that you have "better than even odds of
being right" I assume that means that your probability of being right
is > 0.5. When I think of a "probability", I want to think of some
experiment that can be repeated so that we can estimate that
probability. What is the experiment in this case? How does it vary
from trial to trial so that we may estimate the probability of
success? The numbers, as you have emphasized, are fixed, so they do
not vary. The strategy, apparently, either does not vary or is not
operationally well-defined. So, what constitutes a trial of this
method? Can you simulate it on a computer and observe more wins than
losses?

I suppose the above are more than one question; the important one to
aid my understanding is "Can you demonstrate your claim via a
simulation?"

Sorry if this has been asked and answered; I haven't been able to keep
up with every post.


--------------
alan filipski
a...@gtx.com

Benjamin J. Tilly

unread,
Aug 30, 1993, 9:41:50 PM8/30/93
to
In article <25rfmi$j...@news.u.washington.edu>
benj...@corona.math.washington.edu (Benjamin Schoenberg) writes:

> Ben,
> sorry to prolong this aggravating thread, but I believe the heart of the
> problem has not been resolved. As soon as we answer this question, I'll be
> satisfied:
> "I have two distinct reals, x and y. What is the probability that x < y?"
>

The probability is either 0 or 1, depending on what the two numbers
are. This is because you specify that you have the two distinct reals.
Of course I do not have enough information to say which is the right
answer.

> I claim this probability is undefined, and that this is why the wording of
> your problem is incorrect. When you say that it's a common assumption that
> when one of two objects is selected at random, each is with equal probability,
> I agree. Each of x and y has an equal chance of being picked. But what is
> the probability that the one selected is larger?
>

Exactly 1/2. To see this note that if the numbers exist then there is
an answer to the question of which is larger. Even though I do not know
that answer I do know that it exists. No matter which answer is right
the probability that you specified is 1/2, so I do know what the answer
to that last question is.

> It is easy to confuse the issue by saying something like, "since x and y are
> fixed, one of them is larger, call it L. We select x or y with equal
> probability, so we select L or not L with equal probability."
> But this begs the question of whether or not P(x=L) is even defined.
>

If you read the statement of the problem I did not say that the numbers
are x and y. In the solution I labeled the larger one x. This is fine
because there are two of them and one is larger. The way that I
assign the labels has nothing to do with what the probability is,
although it will help me keep straight what is going on while I figure
out what the situation is.

> Example: My house is either yellow or white. What is the probability it is
> white?
>

I do not know, although I do know that it is either 0 or 1. If this
seems strange to you then think about what you go through when doing a
poll. You actually use a lot of probability theory, even though you do
not know what the probabilities involved are. This is no more
contradictory there than it is in this case.

Actually there is a subtle point in this case. By the fact that I know
that there is an answer I know that the probability is 0 or 1. However
with whatever information I have, I can assign a probability that
expresses my belief as to the answer. Note here that I am introducing
a probability distribution based on my belief as to the likelihood
of different outcomes. Now what that distribution "should" be is
indeterminate. Also it does not measure an actual probability that has
an independent existence in the situation. But aside from mentioning
that there is a subtle point here, I do not want to go into more
thoughts on it right now.

> So my suggestion is that instead of saying, "You have two distinct reals and
> you hand me one at random," you need to say, "you have two distinct reals and
> you hand me either the larger or the smaller with equal probability." Just
> picking one at random is not enough. There is actually more information in
> that second sentence, which may be what John Rulnick was trying to say before
> communication broke down.

Actually there is no more information in the second sentence than there
was in the first. I think that the situation with John is that he was
wrong, he was quite rude about it, and then he refused to back down
from his position as it became clear that he was wrong. However I could
be wrong.

> With that suggestion I'll have to grudgingly admit that this is the strangest
> result I've ever seen, too, after Banach-Tarski! Thanks (I think :-( ) for the
> interesting thing to chew on.
> -Ben Schoenberg

Thank you for the compliment. I had been trying to introduce an
interesting problem to sci.math for others to enjoy. I hope that
happened for some people at least.

Ben Tilly

J E H Shaw

unread,
Aug 31, 1993, 4:12:49 AM8/31/93
to
In article <CCLop...@dartvax.dartmouth.edu>,

Benjamin...@dartmouth.edu (Benjamin J. Tilly) writes:
>In article <25rfmi$j...@news.u.washington.edu>
>benj...@corona.math.washington.edu (Benjamin Schoenberg) writes:
>
>> Ben,
>> sorry to prolong this aggravating thread, but I believe the heart of the
>> problem has not been resolved. As soon as we answer this question, I'll be
>> satisfied:
>> "I have two distinct reals, x and y. What is the probability that x < y?"
>>
>The probability is either 0 or 1, depending on what the two numbers
>are....

#DEF SOAPBOX
Many people, including myself, would argue that in practice
(i.e. outside examination questions) there is no such thing as
"the probability that x < y", only "your probability for x < y",
"my probability for x < y", etc. When I finally got out of the
habit of saying "probability that" it was very beneficial, like
stopping smoking.

The first page of the foreword to De Finetti's "Theory of Probability"
states "Probability does not exist". He then examines it for two
highly-recommended volumes.
#UNDEF SOAPBOX

-- Ewart Shaw
--
J.E.H.Shaw, Department of Statistics, | JANET: st...@uk.ac.warwick
University of Warwick, | BITNET: strgh%uk.ac.warwick@UKACRL
Coventry CV4 7AL, U.K. | PHONE: +44 203 523069
$$\times\times\qquad\top\gamma\alpha\omega\exists\qquad{\odot\odot\atop\smile}$$

Benjamin J. Tilly

unread,
Aug 31, 1993, 1:50:02 PM8/31/93
to
In article <25v161$a...@violet.csv.warwick.ac.uk>

st...@csv.warwick.ac.uk (J E H Shaw) writes:

> #DEF SOAPBOX
> Many people, including myself, would argue that in practice
> (i.e. outside examination questions) there is no such thing as
> "the probability that x < y", only "your probability for x < y",
> "my probability for x < y", etc. When I finally got out of the
> habit of saying "probability that" it was very beneficial, like
> stopping smoking.
>
> The first page of the foreword to De Finetti's "Theory of Probability"
> states "Probability does not exist". He then examines it for two
> highly-recommended volumes.
> #UNDEF SOAPBOX
>

There is some merit to this view. I personally like to reserve the word
probability for a situation where what it means has been fairly
carefully laid out. Thus in my problem I had an explicit situation in
which the probability was defined. However outside of exams and
explicit puzzles like this you are unlikely to have this freedom. In
practice I like to assume that we have a model of the situation in
which there is a probability that I do not know. But I will avoid using
the word probability for my conclusions because I do not know enough to
find the real probability.

Let me explain that with an example. Let us consider a poll about who
people are going to vote for in the election. There are some number of
people. I assume that each of them actually would vote for one of the
people if they had to vote right now. Thus if I were to pick a random
person out of that list, then there is a probability that that is a
person who would vote for candidate A. Now I take a poll. Once I have a
poll I can, under the assumption that my poll has no bias, construct a
confidence interval that is the set of possible probabilities that
would make my poll results be in the "most probable" (there are some
subtleties here) 95%. But when I state that confidence interval I do
not turn around and say that there is a 95% chance that the answer is
in that interval. In fact I say that the answer is in or not in that
interval, but I do not know which is the case.

Now note that I draw a line here. In the practical situation I am not
computing probabilities that exist, but I am computing probabilities
that would exist under different possible assumptions. Thus I am saying
that there is one model that is right, and then I am constructing a
confidence interval consisting of the models that would make a
reasonable prediction. But I do not talk about the likelihood that the
real model is in that confidence interval. By contrast a Bayesian
would, but that is a different story.

Also note that I would give a measure theoretic definition of
probability if I was pressed. However what I have written above is the
meaning that I would try to express with the measures that I would
choose. :-)

Ben Tilly

Bently Preece

unread,
Aug 31, 1993, 4:42:36 PM8/31/93
to

Benjamin Schoenberg writes:

[...] "I have two distinct reals, x and y. What is the
probability that x < y?"

And Benjamin J. Tilly writes:

[... a reasonable response omitted ...]


Would you excuse an ignorant reader if he joined the conversation? I
think Ben Tilly is correct and Ben Schoenberg misunderstood him, and I'd
like to try explaining why. Maybe Tilly would indulge me by agreeing or
not. But I'm not very good at explanations, so please bear with me.
I'd like to do it by beginning with some very simple cases and building
up.

Let this be Game 1:

You place the numbers 0 and 1 in a hat, shake them about, then draw
one of them out at random. I try to guess whether it's the larger
or the smaller number.

Obviously, there's a simple strategy -- Strategy 1 -- which guarantees
that I'll win all the time:

If you draw the 0, then I guess that it's the smaller number. If
you draw the 1, then I guess that it's the larger.

I'm not trying to be subtle here, I'm trying to be obvious. This is a
well-defined game with a well-defined strategy, and the strategy works all
the time.

Now compare Game 1 with Game 2:

You place the numbers 1 and 2 in a hat, shake them about, then
draw one of them out at random. I try to guess whether it's the
larger or the smaller number.

Again, there is a simple strategy -- Strategy 2 -- which guarantees that
I'll win all the time:

If you draw the 1, then I guess that it's the smaller number. If
you draw the 2, then I guess that it's the larger.

And again there is nothing subtle about this. Second game same as
the first. The point is that these are two entirely different strategies.
Neither strategy will work for the other game. In fact they don't even
apply to each other, since for example Strategy 1 doesn't say what I
should do if you draw out the 2.

The question to ask is whether there exists a single strategy which
will work for both games. If I insist on a win all the time, then I
don't think so. But if I'd be satisfied with one strategy that gives
me better than a fifty-fifty chance of winning, then Strategy 3 works:

If you draw a 0, then I guess that it's the smaller number. If you
draw a 1, then half the time I guess that it's the smaller number
and half of the time I guess that it's the larger number. If you
draw a 2, then I guess that it's the larger number.

This single strategy works on both Game 1 and Game 2. For example, let's
try it on Game 1. Half the time you'll draw the 0, I'll guess that it's
the smaller, and I'll win. Half the time you'll draw the 1. Of these
times, half of them I'll guess it's the smaller and lose, the other half
I'll guess it's the larger and win. You'll never draw a 2. So I win
(1/2) + (1/2)*(1/2) = (3/4) of the time. Better than fifty-fifty. Of
course, Game 2 works basically the same way.

I'm sorry if I'm belaboring the obvious, but I think this is the point
of Tilly's argument, that a single strategy can work for more than one
game. Each pair of numbers A and B determines a game:

You place the numbers A and B in a hat, shake them about, then
draw one of them out at random. I try to guess whether it's the
larger or the smaller number.

Given A and B there is the obvious strategy which lets you decide whether
the one you're shown (at random) is the larger. But is there one single
strategy which works for all A and B? Yes. Tilly gave one. I'd like to
propose my own Strategy F:

Let F be any strictly increasing function from R to [0,1]. If the
number X is drawn, guess that it's the larger number with frequency
F(X), and guess that it's the smaller number with frequency 1-F(X).

That is, assume that the larger a number is, the more likely it is to
be the larger number. It doesn't matter what F(X) is, provided it's
strictly increasing. Tilly's strategy just specifies the function F.

For example, try this on Game 1. Half the time, you'll draw the 0.
Of this half, I'll guess that it's the smaller 1-F(0) of the time and
win, and F(0) of the time I'll guess that it's the larger and lose.
Half the time, you'll draw the 1. Of this half, I'll guess that it's
the smaller 1-F(1) of the time and lose, and F(1) of the time I'll guess
that it's the larger and win. So I win (1/2)*(1-F(0)) + (1/2)*F(1) =
(1/2) + (1/2)*(F(1)-F(0)) of the time, and lose (1/2)*F(0) +
(1/2)*(1-F(1)) = (1/2) - (1/2)*(F(1)-F(0)) of the time. Since F(1) >
F(0), I win more than I lose.

The strategy also works on Game 2 -- or any other game with any other A
and B. I still have better than a fifty-fifty chance of winning. This
is all Tilly was trying to say. Even though I don't know the numbers A
and B, I still have a strategy which lets me win more than half the time.
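
Strategy F needs no simulation; the win probability can be computed
exactly, and it comes out to (1/2) + (1/2)*(F(B)-F(A)) for any pair
A < B, just as in the Game 1 computation above. A minimal sketch,
taking the logistic function as one admissible F (the specific choice
is an assumption; any strictly increasing F into [0,1] behaves the
same way):

```python
import math

def F(x):
    """One admissible choice of F: the logistic function, strictly
    increasing from R into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def win_probability(a, b):
    """Exact chance that Strategy F guesses right on the pair {a, b}."""
    lo, hi = min(a, b), max(a, b)
    # Half the time the smaller number is drawn: win with prob 1 - F(lo).
    # Half the time the larger number is drawn: win with prob F(hi).
    return 0.5 * (1.0 - F(lo)) + 0.5 * F(hi)

# Game 1 ({0, 1}), Game 2 ({1, 2}), and a lopsided pair all beat 1/2:
for pair in [(0, 1), (1, 2), (-100, 0.001)]:
    assert win_probability(*pair) > 0.5
```

The margin over 1/2 is exactly (1/2)*(F(hi)-F(lo)), which is strictly
positive because F is strictly increasing, but can be made as small as
you like by moving the pair far into a tail of F.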

If it seems odd that I can pick a good strategy, even though I don't
know which two numbers you're playing with, then I'd like to suggest
Game 3:

I have a brother, and he has a wife. I'll flip a coin and based on
the result tell you either his name or hers. You guess which it is.

You have no idea (I presume) what those two names are, and each of them is
as likely as the other to come up, yet you already have a good strategy
for winning most of the time. Ready? ... It's heads. The name is Donna.
Is Donna my brother or his wife?

Obviously, Donna is more likely to be a woman's name, so Donna is more
likely to be my brother's wife. Numbers are the same way. The larger a
number is, the more likely it is to be the larger number. That's all
there is to it.

I apologize for being so long-winded.
---
Bently H. Preece NCR Network Products Division
software engineer 2700 Snelling Ave. N.
Bently...@StPaul.NCR.COM St. Paul, MN USA 55113

Richard C. Yeh

unread,
Aug 31, 1993, 10:46:16 AM8/31/93
to

You wrote

> [Well-mannered argument elided]
>
> ... Numbers are the same way. The larger a
> number is, the more likely it is to be the larger number.

Wait, please.

Is _this_ the main thrust of Ben Tilly's argument?
No wonder I couldn't understand! (actually, I didn't take the time)

I was under the misconception that the guesser could not know the
bounds of the interval from which the numbers A and B were to be
chosen.

If we were to run this test (2 numbers unknown to the guesser,
one of them is picked,
guesser determines which is larger)
many times, assuming that the 2 numbers were "uniformly distributed"
among the integers, then I think that your argument amounts to
little more than:

"There are more integers less than +10 than there are greater than +10."

:)
--
Richard C. Yeh rc...@jupiter.risc.rockwell.com _______
-until September 16, 1993 |
|__
The Ksp of Palladium(II) Sulfide is 2.03 E-58 | |
The Ksp of Platinum (II) Sulfide is 9.91 E-74 __|_|____

I take full responsibility for all of my actions and thoughts.

Roberto Sierra

unread,
Aug 31, 1993, 9:25:53 PM8/31/93
to
Benjamin J. Tilly (Benjamin...@dartmouth.edu) wrote:
: > #DEF SOAPBOX

: > Many people, including myself, would argue that in practice
: > (i.e. outside examination questions) there is no such thing as
: > "the probability that x < y", only "your probability for x < y",
: > "my probability for x < y", etc. When I finally got out of the
: > habit of saying "probability that" it was very beneficial, like
: > stopping smoking.
: >
: > The first page of the foreword to De Finetti's "Theory of Probability"
: > states "Probability does not exist". He then examines it for two
: > highly-recommended volumes.
: > #UNDEF SOAPBOX
[snip]
: Also note that I would give a measure theoretic definition of
: probability if I was pressed. However what I have written above is the
: meaning that I would try to express with the measures that I would
: choose. :-)

I agree with the sentiments of both posters. A while back, on the question
of the probability of finding two or more people with the same birthday
in a room full of N people, I passed along my high-school friend's quip that
"the probability is always 50-50 -- they either do or don't have a common
birthday". Though this was meant as a joke, I've always felt that there
is a certain innate truth to the fact. A common problem is that you tend
to go *too* far and use the equations that you arrive at to say *too*
much about the world. For example, there is exactly a 50-50 chance that
you'll find a shared birthday with 22.05 people (or whatever) in the room.
[My Mathematica is cranking right now or I'd give you the real figure.]
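
The exact figure is easy to pin down. A minimal sketch, assuming 365
equally likely birthdays and no leap years: the crossover past 50-50
comes at 23 people in the room.

```python
def shared_birthday_prob(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays and no leap years."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

# Smallest room size where a shared birthday is more likely than not:
n = 1
while shared_birthday_prob(n) <= 0.5:
    n += 1
# n == 23: the probability is about 0.476 at 22 people, 0.507 at 23
```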

But statistics and probability are highly useful tools, just beware of
applying the techniques everywhere.

The only probability problem I am reasonably certain of is this:
-- There is 100% certainty that chaos is present everywhere in the
universe, which makes my figure of 100% highly unstable.

--

\\|// | "Due to the earthquake in the area you
- - | are calling, your call cannot be
o o | completed." -- N.E. Telephone, 10/89
J roberto sierra |
O tempered microdesigns | NOTICE:
\_/ san francisco, calif. | All the ideas and opinions expressed
be...@netcom.com | herein are not those of the author.

Benjamin J. Tilly

unread,
Aug 31, 1993, 10:29:47 PM8/31/93
to
In article <RCYEH.93A...@positron.risc.rockwell.com>

rc...@positron.risc.rockwell.com (Richard C. Yeh) writes:

>
> You wrote
>
> > [Well-mannered argument elided]
> >
> > ... Numbers are the same way. The larger a
> > number is, the more likely it is to be the larger number.
>
> Wait, please.
>
> Is _this_ the main thrust of Ben Tilly's argument?
> No wonder I couldn't understand! (actually, I didn't take the time)
>

This is one way to think of it. There are technicalities that were laid
out by Bently Preece which you deleted.

> I was under the misconception that the guesser could not know the
> bounds of the interval from which the numbers A and B were to be
> chosen.
>

This is not a misconception.

> If we were to run this test (2 numbers unknown to the guesser,
> one of them is picked,
> guesser determines which is larger)
> many times, assuming that the 2 numbers were "uniformly distributed"
> among the integers, then I think that your argument amounts to
> little more than:
>
> "There are more integers less than +10 than there are greater than +10."
>

Here is a technical point that is critical. The two numbers may be
unknown to the guesser, but they exist. So think of repeated trials as
you approaching different people and each of them using my strategy,
BUT your two numbers remain the same each time. The reason for my
saying this is that if you are not very careful about this, then it all
falls apart.

Ben Tilly

John C. Baez

unread,
Aug 31, 1993, 10:58:06 PM8/31/93
to
In article <25rfmi$j...@news.u.washington.edu> benj...@corona.math.washington.edu (Benjamin Schoenberg) writes:
>Ben,
>sorry to prolong this aggravating thread, but I believe the heart of the
>problem has not been resolved. As soon as we answer this question, I'll be
>satisfied:
>"I have two distinct reals, x and y. What is the probability that x < y?"

>I claim this probability is undefined, and that this is why the wording of
>your problem is incorrect.

Indeed the probability as you state it is undefined. In the *correct* way
of stating the puzzle, this problem does not arise! (By "correct" I
mean simply the way that gives an initially surprising but true result.)

Here it is again.

I have two distinct reals, x and y. With a 50% probability I pick x and
hand it to you; with 50% probability I pick y and hand it to you. You
look at the one I hand to you and endeavor to guess whether it's the
larger or smaller of the two.

Theorem: there exists a strategy such that for any choice of x and y you
will succeed with probability >50%.

Proof: given repeatedly on sci.math for those who had eyes to see.
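
One of those repeatedly given proofs, in the notation used later in the
thread (numbers x > y, and p(r) the probability the strategy answers
"larger" on seeing r, with p strictly increasing), fits in a single
display:

```latex
\begin{aligned}
P(\text{correct}) &= \tfrac12\,p(x) + \tfrac12\,\bigl(1 - p(y)\bigr) \\
                  &= \tfrac12 + \tfrac12\,\bigl(p(x) - p(y)\bigr) \;>\; \tfrac12,
\end{aligned}
\qquad \text{since } x > y \text{ and } p \text{ is strictly increasing.}
```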

Note, the way the problem is stated, the probability that you get the
larger of x and y is 50%.


Mike McCarty

unread,
Sep 1, 1993, 9:58:58 PM9/1/93
to
I have seen this thread wind on and on, and have manfully resisted the
desire to jump in. I solved this problem in my first year of graduate
school (I was taking Mathematical Probability and Statistics). The
problem is in the statement "choose two real numbers at random". The
problem is that there is no uniform density on the entire reals. When
you once specify the density you are using, then the problem has actual
solutions.

Let's make things more concrete.

Suppose the density is this

   f(x) = 1   if 0 <= x <= 1
        = 0   otherwise

Then you look at your number and it is 8/10. I immediately know that you
have 80% probability of having a higher number than I do.

When the density is actually specified, then the problems vanish.

Take another, both are Normally Distributed.

Your number is 1.645. I know immediately that you have 95% probability
of being greater than my number.
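
Both figures quoted here check out. A minimal sketch computing the
standard normal CDF from math.erf (the helper name is mine, not from
the thread):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Uniform-on-[0,1] case: seeing 8/10 means the hidden draw is smaller
# with probability 0.8, exactly as stated.
p_uniform = 0.8

# Standard normal case: Phi(1.645) is about 0.95, the familiar
# one-sided 5% point.
p_normal = normal_cdf(1.645)   # about 0.9500
```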

The problem lies in (naively) assuming that numbers can be chosen from
the entire reals so that they are "equiprobable". It is provable (it
takes some sophisticated measure theory) that no such density exists.

Interested readers should see Kai Lai Chung "Elementary Probability
Theory".

Warning: at least one semester of graduate-level measure theory
including the sigma algebra of sets is recommended. Take the word
"Elementary" with a grain of salt.

Mike

Benjamin J. Tilly

unread,
Sep 2, 1993, 1:37:00 PM9/2/93
to
In article <1993Sep2.0...@digi.lonestar.org>
jmcc...@digi.lonestar.org (Mike McCarty) writes:

> I have seen this thread wind on and on, and have manfully resisted the
> desire to jump in. I solved this problem in my first year of graduate
> school (I was taking Mathematical Probability and Statistics). The
> problem is in the statement "choose two real numbers at random". The
> problem is that there is no uniform density on the entire reals. When
> you once specify the density you are using, then the problem has actual
> solutions.
>

You may have seen the thread wind on and on, but you have not been
reading it. The statement started with "Suppose that you have two real
numbers and they are different." I am very aware that there are
problems of the sort that you mention, hence the wording. I have also
explained this fact in painful detail more times than I want to count.

Ben Tilly

Benjamin J. Tilly

unread,
Sep 2, 1993, 1:54:07 PM9/2/93
to
In article <33...@dog.ee.lbl.gov>
hug...@iceberg.lbl.gov (Paul Hughett) writes:

>
> I stand by my analysis of the probability problem posted by Ben Tilly.
>
> I also now have a better counterexample in which I can easily
> compute the probabilities that Ben will guess right. Consider the two
> (out of many) possibilities that my two numbers are {1,2} or {1,0}. In
> either case I hand Ben the number 1 (which is a random choice, if a
> degenerate one) and ask him to guess whether it is larger or smaller
> than the hidden number h.
>

If you change the problem a lot, then you are right. The wording of the
problem was quite precise on this. What it said was that *if* you have
two numbers that are different, then we can define an experiment where
you hand me one or the other at random. I have pointed out elsewhere
that by this I meant with equal likelihood of giving me either. Then I
make my guess by the algorithm that I gave in the solution. The
statement is that I know that the probability that my guess is right in
the experiment just specified is larger than 1/2. Exactly how much
larger will depend on your numbers.

Thus I have really defined an entire class of experiments, for each one
of which my strategy gives me better than even odds. For your example
to be a counterexample it needs to be in this class. It is not. In
effect your problem is hand me the number 1 and then randomly choose
the number in your hand out of {0,2}. This is completely different from
having two numbers and handing me one of them randomly.

> Now suppose that Ben chooses his surrogate number s from a standard
> normal distribution with cumulative distribution function Phi(x).
> Then, in the case h = 0, he will correctly guess that h < 1 whenever
> s < 1; this occurs with probability Phi(1) = .8413 and he does indeed
> have better than even odds of getting the right answer. But if h = 2,
> he correctly guesses that h > 1 only when s > 1; this occurs with
> probability 1 - Phi(1) = .1587, so his odds of getting the right answer
> are considerably worse than even. Ben does not get better than even odds
> for both h = 0 and h = 2.
>
> Suppose that Ben uses instead a normal distribution with mean 1 and
> variance 1. Then, for h = 0, he correctly guesses h < 1 whenever s < 1
> and has exactly even odds of getting the right answer. For h = 2, he
> again has even odds. Thus he has even odds of getting the right answer
> independent of the value of h.
>
> But this is the best that he can do! There is no choice of mean and
> variance such that Ben can get (strictly) better than even odds for both
> h = 0 and h = 2.
>

Again, the reason why this can work is that your example was not an
example of the problem that I actually posed. In the future I will not
post responses to any more of your examples unless they either are
examples that fit the rules for my problem, or you have posted not only
your example, but my problem statement also and you have indicated why
you think that your example fits the bill.

As I have said before, READ THE PROBLEM CAREFULLY! It is very specific,
and seemingly minor changes in wording can change it to a very
different (usually false) problem.

Ben Tilly

John C. Baez

unread,
Sep 2, 1993, 11:08:32 PM9/2/93
to

Please, Ben, take a well-deserved rest! The more you continue to
explain this problem, the more people will get interested and post
confused articles - you cannot overcome the ignorance of humanity
single-handed! If you wish, write a brief, clear explanation of the
problem and *email* it to anyone who raises further questions.

A large part of the problem *is* that this puzzle is reminiscent of the
paradoxes based on the assumption of a uniform probability distribution
on the reals, so people who have solved such paradoxes will plunge in
mistakenly thinking this is one of those. Indeed, as you note, it is
easy (but not so interesting) to word this puzzle in such a way that it
*is* just one of those. Starting it with "Suppose that you have two
real numbers and they are different" rules that out.


Benjamin Schoenberg

unread,
Sep 3, 1993, 5:41:58 AM9/3/93
to

> Note, the way the problem is stated, the probability that you get the
> larger of x and y is 50%.

Let's call this last assertion *.
This is exactly the "obvious statement" that I was trying to ask for a
rigorous proof of. Apparently I failed to get that across, judging by the
snide "for those who had eyes to see". I've seen all the proofs (and I
include one below by Ben T.). Where you assume I was incorrectly
stating the problem, I was in fact telling you that I was having trouble
proving * without using the P(x>y) which we all agree is undefined.

Here is a proof by Ben T. in which he is responding to hug...@iceberg.lbl.gov
(Paul Hughett). Here, let v be the visible number, and let h be the hidden
one. (Sorry I chose to use "x" and "y" above in a different context.)

Date: Sat, 28 Aug 1993 03:06:40 GMT


>Let us suppose that x and y are your two numbers
>with x>y. Let p(r) be the probability that I would say that you gave me
>the larger if you gave me r. Note that p is an increasing function with
>any of the solutions that have appeared on this thread. Thus we get the
>the probability that I am right is from Bayes' theorem,
>P(I am right)=P(v=x)P(I guess that v is larger given v=x)
> + P(v=y)P(I think that v is smaller given v=y)
> =(0.5)p(x) + (0.5)(1-p(y))
> =1/2 + 1/2(p(x)-p(y)) > 1/2

Note that P(v=x) is the same as *, and that it is taken to be obvious. In
fact, the closest we get to a proof of * is in Ben T.'s response to my last
post:

Date: Tue, 31 Aug 1993 01:41:50 GMT


>> But what is
>> the probability that the one selected is larger?
>>

>Exactly 1/2. To see this note that if the numbers exist then there is
>an answer to the question of which is larger. Even though I do not know
>that answer I do know that it exists. No matter which answer is right
>the probability that you specified is 1/2, so I do know what the answer
>to that last question is.

I've finally convinced myself of this. Starting with fresh notation:
I have two distinct reals, a and b. One of them is larger, call it L. I flip
a fair coin. If it is heads I reveal a (set r = a) and if it is tails I
reveal b (set r = b). What is P(r=L)?

There is a proof by cases: CASE 1: a=L. Then P(r=L) = .5
CASE 2: Otherwise, b=L, and P(r=L) = .5
This seems painfully clear, but there is still one question which bothers me:
Note that (r=a and r=L) IFF a=L. Doesn't that mean P(r=a and r=L) = P(a=L)?
The right side is undefined, but the left is P(r=a)P(r=L)=.25
Please clear this up for me so I can stop thinking about this problem!
Thanks in advance,
-Ben Schoenberg

0 new messages