
Cube Q


David C. Ullrich

Jul 9, 2001, 5:36:30 PM
Say it's the start of the first game of a three-point match.
X opens with a play that leaves a blot on O's one point.
(I think that X ran a six out and brought the other number
down, possibly he ran one man out with a 6-4 or something;
I was too stunned to write it down. But after his first
play X has one man back, on O's one point, and a blot
or blots elsewhere, I think blots.)

O rolls double 5's. O decides to make the one point
and three point, pointing on X and planning on going
after the other blots.

Does _X_ have a double here? From the bar against a
(bad) three-point board, when he's accomplished
nothing and has other blots?

You can guess how the question comes up and how
the game turned out... The double seemed ridiculous
to me, but I know nothing about the cube, especially
in match play.

I know nothing. But I woulda thought the question
might be whether X had a take if he bounced, not
whether he had a double(???)

David C. Ullrich
**********************************
About this published paper that has to do with some agreements with some of the
things that I say, I heard about it at least a couple months ago, although it might
have been within the last two months. (Ross Finlayson, sci.math 7-6-01)

Adam Stocks

Jul 9, 2001, 7:39:11 PM
Hi David

In assessing cube actions, it's very important to know the exact position. A
small change in layout can often make all the difference. However, I would
guess from your post that it is the sort of position which is rarely a
double.

Adam

"David C. Ullrich" <ull...@math.okstate.edu> wrote in message
news:3b4a220f...@nntp.sprynet.com...

maareyes

Jul 9, 2001, 9:25:30 PM
I would not put a lot of weight on the cube actions of an
opponent unless it is a reputable bot or very high
rating/experienced player. Since it is one of the hardest
aspects of the game to master, you will see many incidents
of incorrect cube actions ranging from slightly off to 'off
by a country mile'. Sometimes you will lose against such an
opponent.
Some servers have players that turn the cube back and forth
to 64 just to make the game worth more points. I bet that
if your opponent was playing for real money he would think
twice about doubling from the bar :)

If you have a windows machine, you can pick up on correct
cube action by playing an opponent like Jellyfish.
Observing an opponent that doubles correctly is a good way
to learn.

http://jelly.effect.no/

regards
maareyes


-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----== Over 80,000 Newsgroups - 16 Different Servers! =-----

David C. Ullrich

Jul 10, 2001, 9:21:28 AM
On Tue, 10 Jul 2001 00:39:11 +0100, "Adam Stocks"
<riff...@bigfoot.com> wrote:

>Hi David
>
>In assessing cube actions, it's very important to know the exact position. A
>small change in layout can often make all the difference. However, I would
>guess from your post that it is the sort of position which is rarely a
>double .

Yes of course the exact position is very important. I specified
everything except for the location of the other blot(s) - my
conjecture is it's far from a double in _any_ of the positions
included.

(You say "rarely" - can you name a position consistent with
my description where you feel it _is_ a double?)

Thanks.

David C. Ullrich

Jul 10, 2001, 9:25:53 AM
On Mon, 9 Jul 2001 21:25:30 -0400 (Eastern Daylight Time),
nos...@gammonvillage.com (maareyes) wrote:

>I would not put alot of weight to the cube actions of an
>opponent unless it is a reputable bot or very high
>rating/experienced player. Since it is one of the hardest
>aspects of the game to master, you will see many incidents
>of incorrect cube actions ranging from slightly off to 'off
>by a country mile'. Sometimes you will lose against such an
>opponent.

Well right. I don't place much weight in the opinions
of opponents I don't know very well, but I'm badly out
of practice, so I don't know how much weight to give
_my_ opinions either. (It's a funny thing about the
game: When you see a play that makes no sense to you
at all that sometimes indicates you understand things
much better than your opponent, and it sometimes
indicates just the opposite...)

>Some servers have players that turn the cube back and forth
>to 64 just to make the game worth more points. I bet that
>if your opponent was playing for real money he would think
>twice about doubling from the bar :)

People play backgammon for money?<g>

>If you have a windows machine, you can pick up on correct
>cube action by playing an opponent like Jellyfish.
>Observing an opponent that doubles correctly is a good way
>to learn.
>
>http://jelly.effect.no/

Keen, thanks.


Adam Stocks

Jul 10, 2001, 10:37:13 AM
The (class of) opening position you describe would never be a double at
0-0/3. However, if X is trailing in a post-Crawford game in a longer match,
it may be correct to double. Match play doubling strategy is rather
different to money play doubling strategy. You can read a detailed study of
match-play doubling at the link below:

http://www.bkgm.com/articles.html

Yes, people do play backgammon for exorbitant amounts of cash. (Not me!)
:-)

Adam

"David C. Ullrich" <ull...@math.okstate.edu> wrote in message

news:3b4b017...@nntp.sprynet.com...

Adam Stocks

Jul 10, 2001, 11:52:11 AM
Actually it doesn't have to be post-Crawford - it could be 2-away, 5-away or
something.

"Adam Stocks" <riff...@bigfoot.com> wrote in message
news:9if3hs$le7$1...@newsg2.svr.pol.co.uk...

Douglas Zare

Jul 10, 2001, 12:52:55 PM

"David C. Ullrich" wrote:

> [0-0(3): 6-4, 5-5, double?]


> Does _X_ have a double here? From the bar against a
> (bad) three-point bar, when he's accomplished
> nothing and has other blots?

Terrible double, easy take. With the cube in the center, I believe that O is a mild
favorite. Holding a 2-cube, O is a strong favorite. I'd guess that after 24/18 13/9 O
is too good to redouble if X flunks because gammons on a 4-cube don't mean anything.
3-away 3-away is a difficult score.

I've had an abusive opponent on Yahoo who doubled immediately in a 3-point match. This
is wrong for many reasons--in money play it might be right to double immediately
against a terrible opponent, such as Jellyfish on its blind drunk level, but in match
play one should usually be more conservative if one's opponent is weaker (unless you
think they might pass, or will make a worse error back, such as redoubling and then
still trading wins for gammons). I think the error of doubling immediately at -3:-3 is
larger than the error of playing an opening 3-1 6/5 6/3.

> You can guess how the question comes up and how
> the game turned out... The double seemed ridiculous
> to me, but I know nothing about the cube, especially
> in match play.

Well, I don't always use this explanation as an introduction to the doubling cube:

In money play, the continuous limit model is to assume that equity is a Brownian
motion. The take point must be 1/4 of the way from losing to the opposite take point,
and there is no reason (in this model) to double before then. So the take point is when
one's equity is -3/5, corresponding to winning 1/5 of the time and losing 4/5 of the
time. Potential gammons complicate matters, so I'll assume they don't occur. The beaver
point is an equity of -1/5. In reality, because one can't be sure of making an
efficient double, one usually doubles before the take point and often afterwards, and
because the recube is not perfectly efficient the take point is higher. While the take
point is the same for doubles and redoubles, some initial doubles might not be proper
redoubles. Also, one might become too good to redouble. A further complication is that
the expected payoff under perfect play might not exist since the cube is unbounded, and
easily repeatable positions might be proper redoubles.
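The gammonless continuous-limit numbers quoted above (take point at an equity of -3/5, i.e. winning 1/5 of the time) follow from a single symmetry equation. A minimal sketch, assuming the model exactly as stated and nothing more:

```python
# Gammonless continuous-limit (Brownian motion) cube model sketch.
# The take point tp sits 1/4 of the way from losing (0% wins) to the
# opponent's take point (1 - tp), i.e. tp = (1 - tp) / 4.

def take_point():
    tp = 0.0
    for _ in range(100):        # simple fixed-point iteration, converges fast
        tp = (1.0 - tp) / 4.0
    return tp

tp = take_point()               # 0.2: winning 1/5, losing 4/5 of the time
equity_at_take = 2 * tp - 1     # -0.6, the -3/5 in the post
beaver_equity = -1 / 5          # the quoted beaver point, for comparison
print(tp, equity_at_take)
```

Solving tp = (1 - tp)/4 directly gives tp = 1/5; the iteration is just a way of exhibiting it as a fixed point.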

In match play, one must take into account the Crawford Rule and typically one uses a
match equity table (with a conflicting scale of equity from the above), as at
http://www.gammonline.com/demo/equity.htm , where the first row and column are the
values before the Crawford game. Now one can create a continuous limit model as before,
but recursively calculate the take points. Unfortunately, this fails more
catastrophically than before, because the relative values of gammon wins, single wins,
single losses, and gammon losses now depend on the level of the cube, so a position
can be both too good to double and still a take.

For example, let's consider the continuous limit take point for -3:-3 assuming no
gammons. First, the take point on a redouble to 4 is the equity from passing to get to
Crawford 3-away, 25%. The equity from passing a 2-cube is about 40%, which is 3/10 of
the way from trailing -3:-1 (25%) to leading -1:-3 (75%), so one needs to be 3/10 of
the way from losing (0% game winning chances) to recubing to 4 (75% game winning
chances). The continuous model's take point is 22.5% game winning chances. This is much
too low--the actual value is between 22.5% and 30% since the recubes are not going to
be exactly at the take point.
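As a sanity check, the arithmetic above can be reproduced in a few lines; the 25%/40%/75% match equities are the approximate figures quoted in the post, not computed values:

```python
# Reproducing the -3:-3 continuous-model take point computed above.
me_lose   = 0.25   # lose this game holding a 2-cube: trail -3:-1
me_pass   = 0.40   # match equity after passing the initial double
me_recube = 0.75   # cash point / recube to 4: lead -1:-3

frac = (me_pass - me_lose) / (me_recube - me_lose)   # 3/10 of the way
take_point_gwc = frac * 0.75   # recube point sits at 75% game-winning chances
print(take_point_gwc)          # ~0.225, the 22.5% in the post
```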

> I know nothing. But I woulda thought the question
> might be whether X had a take if he bounced, not
> whether he had a double(???)

Yes, that's much closer. The take point is higher than normal at -3:-3, but the recube
is slightly more powerful than normal and this sort of blitzing position lets X turn
the game around a lot. I think it's double/pass after 24/18 13/9 and double/take after
24/14, as for money.

Douglas Zare

spurs

Jul 10, 2001, 3:46:32 PM
A consideration for the early double at -3 v -3 is the value of the NEXT
game. If there is a next game, the score is 2-0 and the leader gets no
value from a gammon or backgammon. Perhaps that is enough NOT to double
early, unless you lose! You work it out!
--
spurs

Roy Passfield @ Oxnard, California
http://www.dock.net/spurs

"Making a living is NOT the same as making a life"
(Roy Passfield, 1999)


"David C. Ullrich" <ull...@math.okstate.edu> wrote in message

news:3b4a220f...@nntp.sprynet.com...

David C. Ullrich

Jul 11, 2001, 9:59:41 AM
On Tue, 10 Jul 2001 15:37:13 +0100, "Adam Stocks"
<riff...@bigfoot.com> wrote:

>The (class of ) opening position you describe would never be a double at
>0-0/3.

That's certainly what I thought. I probably shoulda beavered.
(I'm 20 years out of touch: Are beavers standard in match play
these days or is that just a FIBS thing?)

Realized yesterday that not beavering may have been a significant
error: with the cube at 2 at -3:-3 one tries to avoid getting
gammoned, while with the cube at 4 one is free to concentrate
on winning the game.


> However, if X is trailing in a post-Crawford game in a longer match,
>it may be correct to double.

Wouldna asked the question in _that_ situation. (Could also be
correct to double in a pre-Crawford game in a long match.)

DU

> Match play doubling strategy is rather
>different to money play doubling strategy.

Of course. In money play you play to maximize your "equity".
In match play you play to maximize the probability that
you win the match. Not the same goal at all.

> You can read a detailed study of
>match-play doubling at the link below:
>
>http://www.bkgm.com/articles.html

Thanks.

DU

David C. Ullrich

Jul 11, 2001, 10:29:51 AM
On Tue, 10 Jul 2001 12:52:55 -0400, Douglas Zare
<za...@math.columbia.edu> wrote:

>
>"David C. Ullrich" wrote:
>
>> [0-0(3): 6-4, 5-5, double?]
>> Does _X_ have a double here? From the bar against a
>> (bad) three-point bar, when he's accomplished
>> nothing and has other blots?
>
>Terrible double, easy take. With the cube in the center, I believe that O is a mild
>favorite. Holding a 2-cube, O is a strong favorite. I'd guess that after 24/18 13/9 O
>is too good to redouble if X flunks because gammons on a 4-cube don't mean anything.
>3-away 3-away is a difficult score.

"flunk" = "bounce"? (he didn't bounce...)

>I've had an abusive opponent on Yahoo who doubled immediately in a 3-point match. This
>is wrong for many reasons--in money play it might be right to double immediately
>against a terrible opponent, such as Jellyfish on its blind drunk level, but in match
>play one should usually be more conservative if one's opponent is weaker

What I think happened was he thought I was _much_ weaker than I was. I
think he was the guy with whom I played literally my first serious
game in twenty years - the match in question was a few weeks later,
when I'd recalled a few things I'd forgotten (like if you leave a lot
of blots you may get gammoned, subtle issues like that.)

> (unless you
>think they might pass, or will make a worse error back, such as redoubling and then
>still trading wins for gammons). I think the error of doubling immediately at -3:-3 is
>larger than the error of playing an opening 3-1 6/5 6/3.
>
>> You can guess how the question comes up and how
>> the game turned out... The double seemed ridiculous
>> to me, but I know nothing about the cube, especially
>> in match play.
>
>Well, I don't always use this explanation as an introduction to the doubling cube:
>
>In money play, the continuous limit model is to assume that equity is a Brownian
>motion. The take point must be 1/4 of the way from losing to the opposite take point,
>and there is no reason (in this model) to double before then. So the take point is when
>one's equity is -3/5, corresponding to winning 1/5 of the time and losing 4/5 of the
>time. Potential gammons complicate matters, so I'll assume they don't occur. The beaver
>point is an equity of -1/5. In reality, because one can't be sure of making an
>efficient double, one usually doubles before the take point and often afterwards, and
>because the recube is not perfectly efficient the take point is higher. While the take
>point is the same for doubles and redoubles, some initial doubles might not be proper
>redoubles. Also, one might become too good to redouble. A further complication is that
>the expected payoff under perfect play might not exist since the cube is unbounded, and
>easily repeatable positions might be proper redoubles.

What was that paper years ago with the section titled "Great
Expectations"?

As long as you're giving explanations you don't normally use I'll
point out a curious fact: One can use the Brouwer Fixed-Point
Theorem to show that there _is_ a well-defined mapping E from
positions to reals, which has the properties that "equity"
should have. You don't define it by adding probabilities as
though it were literally an expectation:

You define "position" and "choice" with suitable granularity,
for example if the position includes "X to roll or double"
then this is a position where X has a choice of two
positions to move to, one being "X to roll" and one being
"O to take or drop". For each position P there is a natural
interval I_P where an actual equity must lie (in most positions
I_P = [-3C, 3C], where C is the current value of the cube.)

Now say X is the Cartesian product of the I_P (X is the space
of all possible "equity" functions that are consistent with
natural bounds given by the size of the cube.) You define
a map T from X to X by natural rules, like

(0) if P is a position where X wins n points then T(E)(P) = n.

(i) if P is a position where X has a choice to move to one of
positions P_1, ... P_n then

T(E)(P) = max(E(P_1), ...E(P_n))

(same thing with "min" if O has a choice)

(ii) if P is a position where X is about to roll then

T(E)(P) = weighted average of E(P'),

where P' runs over the positions "Same as P except X just
rolled 1-1", etc.

And so on; there are various clauses in the definition
of T, all "natural". ISTR there's exactly one spot where
I_P is not symmetric and one spot where verifying that
T(E)(P) actually lies in I_P is slightly tricky, but
it works out.

The bit about E(P) lying in I_P is important: Although
the cube can be unbounded, X is nonetheless compact.
Brouwer gives an E such that T(E) = E; for this E we
have

(1) E(position where X has a decision)
= max(E(positions X can move to)),

(2) E(position where X is to roll) = weighted average,

etc.

A person could even calculate E approximately; it's
going to agree with the equity in places where you
can work out the equity because of (1) and (2), but
it's defined globally without any problems about
divergent sums.

So a reasonable strategy for someone with infinite
computational power would be always to play to
maximize this E. I don't know that that helps<g>:
I'm not certain, but I _think_ that whether there
exists a "law of large numbers" "guaranteeing"
results following this strategy depends on whether
certain sums are finite.
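The operator T described above can be illustrated on a toy game; this miniature example is my own, not a backgammon position. Because the toy game always terminates, plain iteration of T already converges; the Brouwer argument is needed precisely when play can recur indefinitely:

```python
# Toy illustration of the operator T: positions are MAX (X chooses),
# MIN (O chooses), AVG (dice/weighted average), or terminal payoffs.
GAME = {
    "root":  ("avg", [("xturn", 0.5), ("oturn", 0.5)]),
    "xturn": ("max", ["winX", "coin"]),
    "oturn": ("min", ["loseX", "coin"]),
    "coin":  ("avg", [("winX", 0.5), ("loseX", 0.5)]),
    "winX":  ("end", 1.0),    # X wins 1 point
    "loseX": ("end", -1.0),   # X loses 1 point
}

def apply_T(E):
    """One application of T to an equity function E (a dict)."""
    new = {}
    for p, (kind, data) in GAME.items():
        if kind == "end":
            new[p] = data
        elif kind == "max":
            new[p] = max(E[q] for q in data)
        elif kind == "min":
            new[p] = min(E[q] for q in data)
        else:  # "avg": weighted average over outcomes
            new[p] = sum(w * E[q] for q, w in data)
    return new

E = {p: 0.0 for p in GAME}    # any starting point in the product space
for _ in range(50):
    E = apply_T(E)
print(E["root"])              # 0.0: X picks winX, O picks loseX
```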

>In match play, one must take into account the Crawford Rule and typically one uses a
>match equity table (with a conflicting scale of equity from the above), as at
>http://www.gammonline.com/demo/equity.htm , where the first row and column are the
>values before the Crawford game. Now one can create a continuous limit model as before,
>but recursively calculate the take points. Unfortunately, this fails more
>catastrophically than before, because the relative values of gammons, single wins,
>single losses, and single gammons now depend on the level of the cube, so a position
>can be both too good to double and still a take.

Right. Seems to me that the notion of maximizing "equity" in match
play is irrelevant - you want to maximize the probability that you
win the match. If there's a long way to go in the match compared
to the size of the cube then maximizing some notion of equity would
be a useful approximation, but towards the end of the match
it's just irrelevant. (It's not clear to me whether one can
show that there exists a function P mapping positions to [0,1]
that has the properties that "probability of winning the match
from this point" "should" have, as for equity in money games
above. Seems likely but I haven't thought about the details.)

I saw a link to http://www.gammonline.com/demo/equity.htm the
other day. I said great, this is what we need to know. Except
then I realized that I didn't know what the table was saying
about Crawford. I'm puzzled by what you meant by "where the first
row and column are the values before the Crawford game", since
when one of the scores is -1 we _are_ at the Crawford game.

But never mind that. In any case, even assuming equal players
and assuming that the "probability" exists, etc, it's clear that
the probability of winning a match from -n:-m depends on n, m,
and also on whether or not this is the Crawford game. So it
seems like that table needs another column and another row.
This is actually kind of important if a person is trying to
make the right play. Is it clear to you that the needed information
_is_ in that table if you look at it right???

>For example, let's consider the continuous limit take point for -3:-3 assuming no
>gammons. First, the take point on a redouble to 4 is the equity from passing to get to
>Crawford 3-away, 25%. The equity from passing a 2-cube is about 40%, which is 3/10 of
>the way from trailing -3:-1 (25%) to leading -1:-3 (75%), so one needs to be 3/10 of
>the way from losing (0% game winning chances) to recubing to 4 (75% game winning
>chances). The continuous model's take point is 22.5% game winning chances. This is much
>too low--the actual value is between 22.5% and 30% since the recubes are not going to
>be exactly at the take point.
>
>> I know nothing. But I woulda thought the question
>> might be whether X had a take if he bounced, not
>> whether he had a double(???)
>
>Yes, that's much closer. The take point is higher than normal at -3:-3, but the recube
>is slightly more powerful than normal and this sort of blitzing position lets X turn
>the game around a lot. I think it's double/pass after 24/18 13/9 and double/take after
>24/14, as for money.
>
>Douglas Zare
>

Adam Stocks

Jul 11, 2001, 1:58:30 PM

"David C. Ullrich" <ull...@math.okstate.edu> wrote in message
news:3b4c5a3...@nntp.sprynet.com...

> On Tue, 10 Jul 2001 15:37:13 +0100, "Adam Stocks"
> <riff...@bigfoot.com> wrote:
>
> >The (class of ) opening position you describe would never be a double at
> >0-0/3.
>
> That's certainly what I thought. I probably shoulda beavered.
> (I'm 20 years out of touch: Are beavers standard in match play
> these days or is that just a FIBS thing?)
>
Beavers are not standard in matchplay, but are normal in money play, subject
to mutual agreement at the start of the session.

> Realized yesterday that not beavering may have been a significant
> error: with the cube at 2 at -3:-3 one tries to avoid getting
> gammoned, while with the cube at 4 one is free to concentrate
> on winning the game.
>

Sounds like a good beaver to me, for money or any n-away,n-away match.

>
> > However, if X is trailing in a post-Crawford game in a longer match,
> >it may be correct to double.
>

My mistake! Sorry.

> Wouldna asked the question in _that_ situation. (Could also be
> correct to double in a pre-Crawford game in a long match.)
>

Right, assuming he trails by enough points relative to the match length.


Adam.


Daniel Murphy

Jul 12, 2001, 12:46:28 AM
On Wed, 11 Jul 2001 14:29:51 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>I saw a link to http://www.gammonline.com/demo/equity.htm the
>other day. I said great, this is what we need to know. Except
>then I realized that I didn't know what the table was saying
>about Crawford. I'm puzzled by what you meant by "where the first
>row and column are the values before the Crawford game", since
>when one of the scores is -1 we _are_ at the Crawford game.

No -- only the first game in which one of the scores is -1 (one
player needs 1 point to win the match) is the Crawford game. In that
game the Crawford rule applies. Subsequent games are post-Crawford
games, and the Crawford rule does not apply to them (and neither does
the match equity table).

Daniel Murphy

David C. Ullrich

Jul 12, 2001, 10:42:48 AM
On Thu, 12 Jul 2001 04:46:28 GMT, rac...@best.com (Daniel Murphy)
wrote:

>On Wed, 11 Jul 2001 14:29:51 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>I saw a link to http://www.gammonline.com/demo/equity.htm the
>>other day. I said great, this is what we need to know. Except
>>then I realized that I didn't know what the table was saying
>>about Crawford. I'm puzzled by what you meant by "where the first
>>row and column are the values before the Crawford game", since
>>when one of the scores is -1 we _are_ at the Crawford game.
>
>No -- only the the first game in which one of the scores is -1 (one
>player needs 1 point to win the match) is the Crawford game.

Right. "When" was maybe not the word I meant...

> In that
>game the Crawford rule applies. Subsequent games are post-Crawford
>games, and the Crawford rule does not apply to them (and neither does
>the match equity table).

Fine. What I really wanted to know was two things:

(i) Is that table talking about the Crawford game, post-Crawford
games, or an average of both?

(ii) Regardless of the answer to (i), the table does not contain
all the information we need here. Where can we find a similar
table that gives the odds for the Crawford game and _also_
gives the odds for post-Crawford games?

I think that Zare meant to be including an answer to (i), but
I didn't follow what he said. You say the answer to (i) is that
the table is just talking about the Crawford game, great. Do
you know a similar table giving the answer for post-Crawford
games?

(Or is there a simple way I'm overlooking to get the answer
for post-Crawford games from the existing table by some
calculation?)


>Daniel Murphy
>


David C. Ullrich

Peter van Rossum

Jul 12, 2001, 1:09:30 PM
In article <3b4db636...@nntp.sprynet.com>,

David C. Ullrich <ull...@math.okstate.edu> wrote:
>>On Wed, 11 Jul 2001 14:29:51 GMT, ull...@math.okstate.edu (David C.
>>Ullrich) wrote:
>>>I saw a link to http://www.gammonline.com/demo/equity.htm the
>>>other day. I said great, this is what we need to know. Except
>>>then I realized that I didn't know what the table was saying
>>>about Crawford. [...]

>
>Fine. What I really wanted to know was two things:
>
>(i) Is that table talking about the Crawford game, post-Crawford
>games, or an average of both?

It's talking about the Crawford game.

>(ii) Regardless of the answer to (i), the table does not contain
>all the information we need here. Where can we find a similar
>table that gives the odds for the Crawford game and _also_
>gives the odds for post-Crawford games?

If you're just trying to determine whether or not to double/take/beaver,
you never need the post-Crawford game match equities, since
you can't jump directly past the Crawford game from a score where both players
have more than one point to go.

>[...]


>(Or is there a simple way I'm overlooking to get the answer
>for post-Crawford games from the existing table by some
>calculuation?)

Hmm, this may even be annoying to compute, especially if you
take into account the opponent's possible free drop.

Peter
--
Peter van Rossum, Department of Mathematics, University of Nijmegen,
Toernooiveld 1, 6525 ED Nijmegen, The Netherlands, Phone: +31-24-3652997,
E-mail: pet...@sci.kun.nl

Daniel Murphy

Jul 12, 2001, 5:05:33 PM
On Thu, 12 Jul 2001 14:42:48 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>(i) Is that table talking about the Crawford game, post-Crawford
>games, or an average of both?

The top line of the match equity table -- when 1 player is 1 point
away from winning the match -- refers only to the Crawford game.

>(ii) Regardless of the answer to (i), the table does not contain
>all the information we need here. Where can we find a similar
>table that gives the odds for the Crawford game and _also_
>gives the odds for post-Crawford games?

It's easy enough to estimate match equities for post-Crawford scores.
Usually we ignore gammons: if trailer needs 1 or 2 points, his match
equity is about 50%; 3 or 4 points, about 25%; 5 or 6 points, about
12.5%. And so on. If you want to consider gammons, that might change
to about 50%, about 30%, about 18%, about 12%, and so on.
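Those rule-of-thumb numbers amount to saying the trailer needs ceil(n/2) post-Crawford doubled games at roughly 50% each. A one-line sketch, ignoring gammons as above:

```python
import math

def post_crawford_equity(points_needed):
    # Trailer needs ceil(n/2) doubled games, each won ~50% of the time.
    return 0.5 ** math.ceil(points_needed / 2)

for n in range(1, 7):
    print(n, post_crawford_equity(n))
# 1: 0.5, 2: 0.5, 3: 0.25, 4: 0.25, 5: 0.125, 6: 0.125
```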

But there's no need for a post-Crawford match equity table or to worry
about theoretical gammon rates, because post-Crawford cube decisions
aren't very complicated and don't require the kind of match equity
calculations that pre-Crawford games do.

Remember why we use a match equity table in the first place. Unlike in
money play, correct cube decisions in match play are greatly affected
-- often in puzzling and unexpected ways -- by both players' current
scores, the size of the cube and the resulting score after a
double/drop or double/take.

In a money game, we can always drop a cube and lose a point, or take
the cube and either lose or win 2 points (or lose or win 4 on a
gammon). We're always risking 1 extra point for a chance to win 3 (by
winning 2 instead of losing 1). And the same with recubes -- the only
difference is the size of the cube. The odds we're weighing are still
the same -- drop a 2-cube and lose 2 points, or take, risking 2
additional points for the chance to win 6 (by winning 4 instead of
losing 2).

I'm ignoring the complications of gammons, the value of owning the
cube, volatility, psychology -- all interesting factors -- to make the
point that in money play the odds we need to beat in order to
correctly take a cube never change. A point is always worth one times
the stakes, our gammonless take point is always about 25%, and the top
of our doubling window is always about 75%. A good redouble is a
redouble (and a proper take or drop) whether the cube is on 4 or 16.
And our formula for figuring out when to play on for gammon or double,
or make a gammonish or a safe play, never changes no matter what the
cube size or score.
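The invariance described above (risk one extra unit for a chance at three, at any cube level) can be written out in a couple of lines; the function name is mine:

```python
# Gammonless money take point from the risk/reward odds above: dropping
# loses C; taking risks C more (losing 2C) to gain 3C (winning 2C
# instead of losing C). The cube level C cancels out entirely.

def take_point(cube):
    risk = cube        # extra loss from taking and then losing
    gain = 3 * cube    # swing from losing C to winning 2C
    return risk / (risk + gain)

print(take_point(1), take_point(2), take_point(8))   # 0.25 every time
```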

Not so in match play, where we're not risking money points, but match
winning and match losing chances. The odds we need to beat in order to
correctly take and the value of points won or lost change dramatically
as one or both players close in on match point. Money doubles may be
match drops, depending on the score. Or a match position may be not
good enough to double to 2, but a drop if the cube is going to 4. Or a
gammon win or loss may be worth a lot to one player and mean nothing
to the other. We can't double, take and pass correctly if we don't
know how much match equity is at stake for both players at particular
scores. And we can't know that without a match equity table.

As a match approaches the end, correct cube decisions become
increasingly affected by score (and generally more complicated) up
until the Crawford game.

The Crawford game is easy: no cube allowed, the leader doesn't want to
be gammoned, and the trailer would badly like to win one.

After the Crawford game, we throw the match table away, because
post-Crawford cube action is simple and never requires tortured
calculations of match equity. The few paragraphs here:

http://www.bkgm.com/articles/mpd.html#post-crawford

cover the basics.

There are also some articles on post-Crawford play at

http://www.bkgm.com/rgb/rgb.cgi?menu+matchplay

Other than that, there's a bit more to be said about gammon threats
post-Crawford, but not much -- a few paragraphs would do. But in any
case it's nothing that an equity table would help with.


Daniel Murphy
Raccoon on FIBS/GamesGrid

Jive Dadson

Jul 12, 2001, 8:52:01 PM
Daniel Murphy wrote:
>
> ...

> The Crawford game is easy: no cube allowed, the leader doesn't want to
> be gammoned, and the trailer would badly like to win one.

Maybe. Maybe not.

If the trailer is an odd number of points away, the leader has an
(almost) "free gammon". The difference between losing one point and two
is only the free drop. For example, if the trailer is 3 points away and
wins, he'll be either one or two points away after the game. In either
case, he will need only one more (doubled) game to win the match. The
leader might stay back in a back game longer than he would in a money
game, risking a gammon for one last chance to win. However, losing a
backgammon would be a disaster, so he shouldn't over-do it. The trailer
should play safe and try to lock up the win, rather than playing all out
for a gammon.

If the trailer is an even number of points away, losing a gammon is much
worse for the leader than losing a single game, but there is little
difference between losing a gammon and a backgammon. Thus, the leader
should first try to steer toward non-gammonish positions. But if the
leader is unfortunate enough to be forced into a deep back game, he
should risk a backgammon in a desperate attempt to hit a blot and avoid
being gammoned. The trailer should play aggressively for a gammon if
the opportunity presents itself.
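The odd/even effect described above can be checked against the usual gammonless post-Crawford estimate (trailer needs ceil(n/2) doubled games at about 50% each). A sketch, with the helper function being my own:

```python
import math

def trailer_equity(away):
    # Gammonless post-Crawford estimate: ceil(away/2) doubled games at ~50%.
    return 0.5 ** math.ceil(away / 2)

for away in (3, 4):                      # trailer 3-away (odd) vs 4-away (even)
    single = trailer_equity(away - 1)    # trailer wins the Crawford game singly
    gammon = trailer_equity(away - 2)    # trailer wins a gammon
    print(away, single, gammon)
# 3-away: 0.5 vs 0.5   -> the leader's gammon loss is (almost) free
# 4-away: 0.25 vs 0.5  -> a gammon doubles the trailer's chances
```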

Jive

Jive Dadson

Jul 12, 2001, 8:54:39 PM
Adam Stocks wrote:
>
> Actually it doesn't have to be post Crawford - it could be 2-away 5-away or
> something.

Nope. It's a mistake at that score also.

J.

Jive Dadson

Jul 12, 2001, 8:58:41 PM
Adam Stocks wrote:
>
> The (class of ) opening position you describe would never be a double at
> 0-0/3. However, if X is trailing in a post-Crawford game in a longer match,
> it may be correct to double.

If X is trailing at a post-Crawford score, it is almost always correct to
double. (Duh!) The only time it would NOT be correct to double would be
when you were an even number of points away and had significant gammon
vig.

J.

Daniel Murphy

Jul 13, 2001, 1:16:13 AM
to
On 13 Jul 2001 00:52:01 GMT, Jive Dadson <jda...@ix.netcom.com>
wrote:

That is right, David. One line doesn't do justice to Crawford game
strategy. In the Crawford game, gammons are worth a lot or almost
nothing depending on whether trailer needs an odd or even number of
points. And even when the trailer in the Crawford game needs an even
number of points, he shouldn't play for a gammon at ALL costs. He still
wants to win the game and needs to weigh his gammon chances vs. losing
chances and determine which play gives him the best chance to win the
match.

That's very similar to the post-Crawford games, where a single point
also means a lot or almost nothing depending on whether trailer is an
odd or even number of points away from winning the match. In any case,
from the Crawford game onward this odd/even factor is the main factor in cube
and checker decisions, and the match equity table is no longer
helpful.

David C. Ullrich

Jul 13, 2001, 9:16:50 AM
to
On 12 Jul 2001 19:09:30 +0200, pet...@fanth.sci.kun.nl (Peter van
Rossum) wrote:

>In article <3b4db636...@nntp.sprynet.com>,
>David C. Ullrich <ull...@math.okstate.edu> wrote:
>>>On Wed, 11 Jul 2001 14:29:51 GMT, ull...@math.okstate.edu (David C.
>>>Ullrich) wrote:
>>>>I saw a link to http://www.gammonline.com/demo/equity.htm the
>>>>other day. I said great, this is what we need to know. Except
>>>>then I realized that I didn't know what the table was saying
>>>>about Crawford. [...]
>>
>>Fine. What I really wanted to know was two things:
>>
>>(i) Is that table talking about the Crawford game, post-Crawford
>>games, or an average of both?
>
>It's talking about the Crawford game.
>
>>(ii) Regardless of the answer to (i), the table does not contain
>>all the information we need here. Where can we find a similar
>>table that gives the odds for the Crawford game and _also_
>>gives the odds for post-Crawford games?
>
>If you're just trying to determine whether or not to double/take/beaver,
>you never need the post-Crawford game match equities, since
>you can't jump directly past the Crawford from a score where both players
>have more than one point to go.

Don't quite follow that. For example you do need to know something
about the probabilities to decide whether to take when the opponent
doubles, don't you? (Ie to decide whether to take a "free drop"
when you have one available).

Anyway, seems like you also need to know something about
these probabilities to decide on correct moves. At
least theoretically: Say I'm way behind, post-Crawford.
Say I have one play that increases my chance of winning a
gammon but also increases my chance of losing relative to
another play. Which play is correct depends on the
probabilities of winning matches at various scores,
doesn't it?

>>[...]
>>(Or is there a simple way I'm overlooking to get the answer
>>for post-Crawford games from the existing table by some
>>calculation?)
>
>Hmm, this may even be annoying to compute, especially if you
>take into account the opponent's possible free drop.
>
>Peter
>--
>Peter van Rossum, Department of Mathematics, University of Nijmegen,
>Toernooiveld 1, 6525 ED Nijmegen, The Netherlands, Phone: +31-24-3652997,
>E-mail: pet...@sci.kun.nl
>


David C. Ullrich

David C. Ullrich

Jul 13, 2001, 9:31:38 AM
to
On Thu, 12 Jul 2001 21:05:33 GMT, rac...@best.com (Daniel Murphy)
wrote:

>On Thu, 12 Jul 2001 14:42:48 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>(i) Is that table talking about the Crawford game, post-Crawford
>>games, or an average of both?
>
>The top line of the match equity table -- when 1 player is 1 point
>away from winning the match -- refers only to the Crawford game.
>
>>(ii) Regardless of the answer to (i), the table does not contain
>>all the information we need here. Where can we find a similar
>>table that gives the odds for the Crawford game and _also_
>>gives the odds for post-Crawford games?
>
>It's easy enough to estimate match equities for post-Crawford scores.
>Usually we ignore gammons:

You talk about ignoring gammons over and over here - I don't
see why we should ignore them.

Say I'm behind -n:-1, post-Crawford. Say I have a play to make,
and two choices: One play gives a certain chance of winning a
single game and a certain chance of losing; the other play
increases my chances of winning a gammon but also increases
my chances of losing. The probability that I win the match
after one play or the other depends on the chances of winning,
winning a gammon, and losing in that game after either play,
and also on the chances of winning the match starting from
various scores.

(Say P(n) is the probability I win from -n:-1, post-Crawford.
Say the probabilities of winning a single game, winning a
gammon, and losing are W, G, L. Then the probability that
I win the match is

W P(n-1) + G P(n-2).

I can adjust the values of W and G by making this play or that
play. What I want to maximize is W P(n-1) + G P(n-2); I
don't see how to figure out which choice of W and G maximize
this without knowing what P(n-1) and P(n-2) are...)
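For concreteness, here is a minimal sketch of the recursion in the parenthetical above. The W and G values are invented for illustration, and the cube is assumed to stay at 1, exactly as in the formula as written (losing contributes nothing, since the match is then over):

```python
# Sketch of the match-equity recursion from the parenthetical above.
# P(n) = probability of winning the match from -n:-1 post-Crawford.
# W = chance of a single win, G = chance of a gammon (values invented);
# a loss ends the match, so it contributes zero.

def match_win_prob(n, W, G):
    """P(n) = W*P(n-1) + G*P(n-2), with P(n) = 1 once the match is won."""
    if n <= 0:
        return 1.0
    return W * match_win_prob(n - 1, W, G) + G * match_win_prob(n - 2, W, G)

# Two hypothetical plays from -4:-1: a safer one (more single wins)
# and a gammonish one (more gammons, but more losses too).
safe = match_win_prob(4, W=0.50, G=0.05)
gammonish = match_win_prob(4, W=0.35, G=0.20)
print(safe, gammonish)  # which play is right depends on P(n-1) and P(n-2)
```

The point of the example is just that the comparison cannot be made without the downstream values P(n-1) and P(n-2), which is exactly the claim in the text.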

> if trailer needs 1 or 2 points, his match
>equity is about 50%; 3 or 4 points, about 25%; 5 or 6 points, about
>12.5%. And so on. If you want to consider gammons, that might change
>to about 50%, about 30%, about 18%, about 12%, and so on.
>
>But there's no need for a post-Crawford match equity table or to worry
>about theoretical gammon rates, because post-Crawford cube decisions
>aren't very complicated and don't require the kind of match equity
>calculations that pre-Crawford games do.
>
>Remember why we use a match equity table in the first place. Unlike in
>money play, correct cube decisions in match play are greatly affected
>-- often in puzzling and unexpected ways -- by both players' current
>scores, the size of the cube and the resulting score after a
>double/drop or double/take.

If the only reason for that table were to determine doubling
strategy it would seem clear it's not needed for post-Crawford
games much. But at least theoretically that table also says a lot
about what the correct play in a situation is, as above. And
when I'm way behind I certainly don't want to be ignoring my
chances of winning a gammon.

I don't see why an equity table is irrelevant. In the notation
above, is my chance of winning the match something other than
W P(n-1) + G P(n-2) ? It seems to me like that's what it is,
and we need an equity table or something like it to find that
quantity.

>Daniel Murphy
>Raccoon on FIBS/GamesGrid


David C. Ullrich

Adam Stocks

Jul 13, 2001, 9:33:48 AM
to
See my correction posting - I meant to say pre-, not post- Crawford. :-)

"Jive Dadson" <jda...@ix.netcom.com> wrote in message
news:3B4D9EA7...@ix.netcom.com...

Adam Stocks

Jul 13, 2001, 9:31:43 AM
to
I was only generalising, since the exact position was not given. The general
principle is that the game underdog can double in a match if he is
trailing by enough points.

"Jive Dadson" <jda...@ix.netcom.com> wrote in message

news:3B4D9DB5...@ix.netcom.com...

Douglas Zare

Jul 19, 2001, 12:46:05 AM
to

"David C. Ullrich" wrote:

> On Tue, 10 Jul 2001 12:52:55 -0400, Douglas Zare
> <za...@math.columbia.edu> wrote:

> [...]


> > A further complication is that
> >the expected payoff under perfect play might not exist since the cube is unbounded, and
> >easily repeatable positions might be proper redoubles.
>
> What was that paper years ago with the section titled "Great
> Expectations"?

Perhaps it was the following:

48 #10007 62L15 (60G40)
Chow, Y. S.; Robbins, Herbert; Siegmund, David
Great expectations: the theory of optimal stopping.
Houghton Mifflin Co., Boston, Mass., 1971. xii+141 pp. $11.95.
----
...$V$ is the supremum of all possible expected rewards and is regarded as the value of the
sequence $\{X\sb n,\scr F\sb n\}$. The major problems of optimal stopping theory are to
calculate $V$ and find optimal or nearly optimal stopping rules...The authors present a
rigorous and systematic account of the mathematical foundations of the theory with the
emphasis upon their own contributions...
----

which I grabbed from a mathscinet search. Or did you mean an analysis of situations where
the expected value is infinite or undefined?

> As long as you're giving explanations you don't normally use I'll
> point out a curious fact: One can use the Brouwer Fixed-Point
> Theorem to show that there _is_ a well-defined mapping E from
> positions to reals, which has the properties that "equity"
> should have. You don't define it by adding probabilities as
> though it were literally an expectation:

It has some of the properties equity should have, but not all. I'll give my objection in an
example below.

> You define "position" and "choice" with suitable granularity,
> for example if the position includes "X to roll or double"
> then this is a position where X has a choice of two
> positions to move to, one being "X to roll" and one being
> "O to take or drop". For each position P there is a natural
> interval I_P where an actual equity muct lie (in most positions
> I_P = [-3C, 3C], where C is the current value of the cube.)

Do you need an infinite-dimensional space here? One could just use a finite-dimensional
space, assuming that C is always normalized to 1, and have a finite product of intervals
[-3,3]. This isn't just a technicality, since it forces the "equity" to scale with the value
of the cube. It would not be satisfying to say that a position is worth 0.5 if you hold a 2
cube and -0.3 if you hold a 4 cube.

> Now say X is the Cartesian product of the I_P (X is the space
> of all possible "equity" functions that are consistent with
> natural bounds given by the size of the cube.) You define
> a map T from X to X by natural rules, like

> [...submartingale conditions...]

> Brouwer gives an E such that T(E) = E; for this E we
> have
>
> (1) E(position where X has a decision)
> = max(E(positions X can move to)),
>
> (2) E(position where X is to roll) = weighted average,
>
> etc.
>
> A person could even calculate E approximately; it's
> going to agree with the equity in places where you
> can work out the equity because of (1) and (2), but
> it's defined globally without any problems about
> divergent sums.

Nice argument, but...

Let's take the position at http://www.bkgm.com/rgb/rgb.cgi?view+366 . Both players have one
checker on the bar against 5 point boards. Whoever rolls a 6 first will have a big
advantage. If I recall correctly, my conclusion was that Snowie evaluations underestimate
the number of backgammons--5 closed out gives a lot, though I'm not sure when one would hit
loose on the 6. For simplicity, let's assume that whoever rolls a 6 first has an equity of
twice the value of the cube, with or without access to the cube.

If one iterates T(E), there is an attracting cycle of length 2 (with the value of the
position holding the cube oscillating between 22/61 and 44/61, and the value without access
to the cube oscillating between 121/(18*61) and 22/61), and a repelling fixed point. The
expected value doesn't exist because the attracting cycle is of length greater than 1, but
you suggest using the fixed point, such that the value of the position with access to the
cube is 22/43 and without access to the cube is 11/43. I think the fixed point is unique.

That's 0.511628 holding the cube, 0.255814 not holding the cube.

What happens if the position is not symmetric? Suppose if you roll a 6 first, your equity
becomes slightly greater, 21/10 times the value of the cube, but if your opponent rolls a 6
first, your opponent gets 2 times the value of the cube as above. Surely this is better for
you, right? You could throw away the extra 0.1 times the value of the cube if you wanted to
return to the original position. However, the fixed point has moved in the wrong direction!

You, holding the cube: 0.445847 (671/1505)
You, not holding the cube: 0.222924 (671/3010, half the above)
Opponent, holding the cube: 0.60299 (363/602)
Opponent, not holding the cube: 0.301495 (363/1204)

So increasing your payoff has apparently made you worse off, and your opponent better off.
That is NOT good for a measure of the value of the position. What went wrong?

This position is a generalization of the Petersburg Paradox. In the classical paradox, one
gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
on the nth toss you get 3^n. The expected value still does not exist. If you use the above
machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
paid to play this game! That's clearly unsatisfying, since you will definitely win.
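A quick sketch of that calculation (my own encoding of the game): conditioned on a first tail, the remaining game pays 3 times a fresh copy of itself, so the self-consistency equation is v = (1/2)(3) + (1/2)(3v). Its unique solution is v = -3, even though every actual payoff is at least 3:

```python
import random

# The modified Petersburg game above: toss until the first head;
# a head on toss n pays 3**n. The partial expectations (3/2)**n diverge.

def play(rng):
    n = 1
    while rng.random() < 0.5:  # tails: keep tossing
        n += 1
    return 3 ** n

rng = random.Random(0)
assert all(play(rng) >= 3 for _ in range(10_000))  # you always win at least 3

# Self-similarity: after a tail, the rest of the game is 3 times a
# fresh copy of itself, so v = (1/2)*3 + (1/2)*3*v.  Solving:
v = (3 / 2) / (1 - 3 / 2)
print(v)  # -3.0, the absurd value the fixed-point machinery produces
```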

In the first position analyzed, the payoff is (-2)^(n+1) with probability
(11/36)(25/36)^(n-1). In the second position, we have added in a bonus of 2^(n+1)/20 with
probability (11/36)(25/36)^(n-1) but only when n is odd. The problem is that the value
associated to this bonus is negative, which doesn't make sense.

Incidentally, there is still an attracting cycle of length 2, and this cycle moves in the
sensible direction. I'll say more about that later.

> So a reasonable strategy for someone with infinite
> computational power would be always to play to
> maximize this E. I don't know that that helps<g>:
> I'm not certain, but I _think_ that whether there
> exists a "law of large numbers" "guaranteeing"
> results following this strategy depends on whether
> certain sums are finite.

I don't think the sums are necessarily finite. IIRC someone worked out that for the
classical Petersburg Paradox, the median payoff after n trials grows as c n log n. For other
variations, the median can oscillate in sign infinitely often. In those cases, it is unclear
to me who should pay to play. I'm not sure what happens in this case--in what sense does
always redoubling beat holding the cube? I'll try to work out the properties of the
distribution of partial sums this weekend if no one else has posted them by Friday.

> >In match play, one must take into account the Crawford Rule and typically one uses a
> >match equity table (with a conflicting scale of equity from the above), as at
> >http://www.gammonline.com/demo/equity.htm , where the first row and column are the

> >values before the Crawford game. [...]

That's just before the Crawford game starts, as others have clarified. One needs a match
equity table with much higher accuracy to analyze the free take, free drop, and gammon
prices from the Crawford game onward. Snowie's match equity table provides up to 4 digits,
but they are wrong. The Jacobs-Trice tables give up to 3 digits.

> Right. Seems to me that the notion of maximizing "equity" in match
> play is irrelevant - you want to maximize the probability that you
> win the match. If there's a long way to go in the match compared
> to the size of the cube then maximizing some notion of equity would
> be a useful approximation, but towards the end of the match
> it's just irrelevant. (It's not clear to me whether one can
> show that there exists a function P mapping positions to [0,1]
> that has the properties that "probability of winning the match
> from this point" "should" have, as for equity in money games
> above. Seems likely but I haven't thought about the details.)

It's easier for match play than for money play since everything is finite. (The result that
backgammon games end with probability 1 is attributed to Curt McMullen. I filled in the
details at http://www.gammonvillage.com/news/article_display.cfm?resourceid=281 .) Match
equity must be a submartingale with optimal play. For some reason one talks about match
winning chances from 0% to 100% but money equity from -1 to 1 (ignoring gammons).

> [...]


> But never mind that. In any case, even assuming equal players
> and assuming that the "probability" exists, etc, it's clear that
> the probability of winning a match from -n:-m depends on n, m,
> and also on whether or not this is the Crawford game. So it
> seems like that table needs another column and another row.
> This is actually kind of important if a person is trying to
> make the right play. Is it clear to you that the needed information
> _is_ in that table if you look at it right???

No. On the other hand, the extra line would only be helpful for decisions in or after the
Crawford game. One shortcut is to assume that the value of the free drop is negligible and
that there are no backgammons in the Crawford game. Then trailing Post-Crawford 2n-away and
(2n-1)-away have about twice the match winning chances as Crawford (2n+1)-away. The
assumption that backgammons don't occur is not bad, but assuming the free drop is negligible
is more serious, and gets worse as n increases.
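The shortcut in the preceding paragraph can be sketched numerically (function names mine; gammons and the free drop ignored, as stated, and each post-Crawford game assumed doubled and won with probability 1/2):

```python
# Sketch of the shortcut above, ignoring gammons and the free drop.
# Post-Crawford the trailer doubles at once, so each game is worth 2
# points and he must win every game until he reaches his target.

def post_crawford(away):
    """Trailer's match-winning chances needing `away` points post-Crawford."""
    games_needed = (away + 1) // 2          # 2n-away and (2n-1)-away alike
    return 0.5 ** games_needed

def crawford(away):
    """Crawford game: trailer must win this game, then win out from there."""
    return 0.5 * post_crawford(away - 1)

for n in (1, 2, 3):
    away = 2 * n + 1
    # post-Crawford 2n-away is about twice Crawford (2n+1)-away
    print(away, crawford(away), post_crawford(away - 1))
```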

Douglas Zare

David C. Ullrich

Jul 19, 2001, 10:56:11 AM
to
On Thu, 19 Jul 2001 00:46:05 -0400, Douglas Zare
<za...@math.columbia.edu> wrote:

>
>"David C. Ullrich" wrote:
>
>> On Tue, 10 Jul 2001 12:52:55 -0400, Douglas Zare
>> <za...@math.columbia.edu> wrote:
>> [...]
>> > A further complication is that
>> >the expected payoff under perfect play might not exist since the cube is unbounded, and
>> >easily repeatable positions might be proper redoubles.
>>
>> What was that paper years ago with the section titled "Great
>> Expectations"?
>
>Perhaps it was the following:
>
>48 #10007 62L15 (60G40)
>Chow, Y. S.; Robbins, Herbert; Siegmund, David
>Great expectations: the theory of optimal stopping.
>Houghton Mifflin Co., Boston, Mass., 1971. xii+141 pp. $11.95.
>----
>...$V$ is the supremum of all possible expected rewards and is regarded as the value of the
>sequence $\{X\sb n,\scr F\sb n\}$. The major problems of optimal stopping theory are to
>calculate $V$ and find optimal or nearly optimal stopping rules...The authors present a
>rigorous and systematic account of the mathematical foundations of the theory with the
>emphasis upon their own contributions...
>----
>
>which I grabbed from a mathscinet search. Or did you mean an analysis of situations where
>the expected value is infinite or undefined?

I really don't recall what paper it was, and this doesn't matter a
bit, but no, the "Great Expectations" was the title of a section
in the paper, not the paper itself. (Yes, it was apparently added on when
the authors realized that some of their gizmos were not defined
because of sums diverging...)

>> As long as you're giving explanations you don't normally use I'll
>> point out a curious fact: One can use the Brouwer Fixed-Point
>> Theorem to show that there _is_ a well-defined mapping E from
>> positions to reals, which has the properties that "equity"
>> should have. You don't define it by adding probabilities as
>> though it were literally an expectation:
>
>It has some of the properties equity should have, but not all.

Huh - I've wondered about exactly that.

> I'll give my objection in an
>example below.
>
>> You define "position" and "choice" with suitable granularity,
>> for example if the position includes "X to roll or double"
>> then this is a position where X has a choice of two
>> positions to move to, one being "X to roll" and one being
>> "O to take or drop". For each position P there is a natural
>> interval I_P where an actual equity must lie (in most positions
>> I_P = [-3C, 3C], where C is the current value of the cube.)
>
>Do you need an infinite-dimensional space here? One could just use a finite-dimensional
>space, assuming that C is always normalized to 1, and have a finite product of intervals
>[-3,3]. This isn't just a technicality, since it forces the "equity" to scale with the value
>of the cube. It would not be satisfying to say that a position is worth 0.5 if you hold a 2
>cube and -0.3 if you hold a 4 cube.

Yes, could very well be that this would work just as well, and yes
this makes more sense. (I've sort of assumed without proof that there
was a unique fixed point, in which case it would automatically scale
properly.)

>> Now say X is the Cartesian product of the I_P (X is the space
>> of all possible "equity" functions that are consistent with
>> natural bounds given by the size of the cube.) You define
>> a map T from X to X by natural rules, like
>> [...submartingale conditions...]
>
>> Brouwer gives an E such that T(E) = E; for this E we
>> have
>>
>> (1) E(position where X has a decision)
>> = max(E(positions X can move to)),
>>
>> (2) E(position where X is to roll) = weighted average,
>>
>> etc.
>>
>> A person could even calculate E approximately; it's
>> going to agree with the equity in places where you
>> can work out the equity because of (1) and (2), but
>> it's defined globally without any problems about
>> divergent sums.
>
>Nice argument, but...

Not actually an argument at all, haven't asserted that this
fixed point actually has any particular properties. (Hmm,
I've claimed it has _some_ properties, not clear (yet) whether
your "objection" is to something I claimed or just shows
something I've wondered about is false...)

Still not clear, but only because I'm using usenet to wake up in the
morning, as I do since giving up coffee. I'll get back to you on this.

Um: It's not clear to me whether you are assuming (or asserting or
know how to prove) that there really is a "real" equity for every
position. If there is then the real equity _is_ a fixed point of
T...

>Incidentally, there is still an attracting cycle of length 2, and this cycle moves in the
>sensible direction. I'll say more about that later.
>
>> So a reasonable strategy for someone with infinite
>> computational power would be always to play to
>> maximize this E. I don't know that that helps<g>:
>> I'm not certain, but I _think_ that whether there
>> exists a "law of large numbers" "guaranteeing"
>> results following this strategy depends on whether
>> certain sums are finite.
>
>I don't think the sums are necessarily finite. IIRC someone worked out that for the
>classical Petersburg Paradox, the median payoff after n trials grows as c n log n. For other
>variations, the median can oscillate in sign infinitely often. In those cases, it is unclear
>to me who should pay to play. I'm not sure what happens in this case--in what sense does
>always redoubling beat holding the cube? I'll try to work out the properties of the
>distribution of partial sums this weekend if no one else has posted them by Friday.
>
>> >In match play, one must take into account the Crawford Rule and typically one uses a
>> >match equity table (with a conflicting scale of equity from the above), as at
>> >http://www.gammonline.com/demo/equity.htm , where the first row and column are the
>> >values before the Crawford game. [...]
>
>That's just before the Crawford game starts, as others have clarified. One needs a match
>equity table with much higher accuracy to analyze the free take, free drop, and gammon
>prices from the Crawford game onward. Snowie's match equity table provides up to 4 digits,
>but they are wrong. The Jacobs-Trice tables give up to 3 digits.

Where do I find these tables?

(In another thread people are saying things that seem impossible.
When I work out the odds using the match equity table I have I
get that the "obviously wrong" play is right, and this puzzled
me, until I realized that the calculation is only showing it's
right by less than one percent; the table is much less accurate
than that, so in fact the calculation says nothing.)

>> Right. Seems to me that the notion of maximizing "equity" in match
>> play is irrelevant - you want to maximize the probability that you
>> win the match. If there's a long way to go in the match compared
>> to the size of the cube then maximizing some notion of equity would
>> be a useful approzimation, but towards the end of the match
>> it's just irrelevant. (It's not clear to me whether one can
>> show that there exists a function P mapping positions to [0,1]
>> that has the properties that "probability of winning the match
>> from this point" "should" have, as for equity in money games
>> above. Seems likely but I haven't thought about the details.)
>
>It's easier for match play than for money play since everything is finite.

Yes.

>(The result that
>backgammon games end with probability 1 is attributed to Curt McMullen.

Heh-heh. That's regardless of what strategy the players are using,
like even if neither is trying to win, they're both just trying
to extend the game? (I thought about this years ago, decided the
probability was probably 1 since in any position there is probably
a sequence of rolls that will end the game regardless - didn't
give an actual _proof_.)

> I filled in the
>details at http://www.gammonvillage.com/news/article_display.cfm?resourceid=281 .) Match
>equity must be a submartingale with optimal play. For some reason one talks about match
>winning chances from 0% to 100% but money equity from -1 to 1 (ignoring gammons).
>
>> [...]
>> But never mind that. In any case, even assuming equal players
>> and assuming that the "probability" exists, etc, it's clear that
>> the probability of winning a match from -n:-m depends on n, m,
>> and also on whether or not this is the Crawford game. So it
>> seems like that table needs another column and another row.
>> This is actually kind of important if a person is trying to
>> make the right play. Is it clear to you that the needed information
>> _is_ in that table if you look at it right???
>
>No. On the other hand, the extra line would only be helpful for decisions in or after the
>Crawford game. One shortcut is to assume that the value of the free drop is negligible and
>that there are no backgammons in the Crawford game. Then trailing Post-Crawford 2n-away and
>(2n-1)-away have about twice the match winning chances as Crawford (2n+1)-away. The
>assumption that backgammons don't occur is not bad, but assuming the free drop is negligible
>is more serious, and gets worse as n increases.

Yeah, I realized later that one could get an approximation by
something like this.

David C. Ullrich

Jul 20, 2001, 9:45:19 AM
to
Ok, I looked a little more carefully:

On Thu, 19 Jul 2001 00:46:05 -0400, Douglas Zare
<za...@math.columbia.edu> wrote:

>
>"David C. Ullrich" wrote:
>
>[...]


>
>> As long as you're giving explanations you don't normally use I'll
>> point out a curious fact: One can use the Brouwer Fixed-Point
>> Theorem to show that there _is_ a well-defined mapping E from
>> positions to reals, which has the properties that "equity"
>> should have. You don't define it by adding probabilities as
>> though it were literally an expectation:
>
>It has some of the properties equity should have, but not all. I'll give my objection in an
>example below.
>
>> You define "position" and "choice" with suitable granularity,
>> for example if the position includes "X to roll or double"
>> then this is a position where X has a choice of two
>> positions to move to, one being "X to roll" and one being
>> "O to take or drop". For each position P there is a natural
>> interval I_P where an actual equity must lie (in most positions
>> I_P = [-3C, 3C], where C is the current value of the cube.)
>
>Do you need an infinite-dimensional space here? One could just use a finite-dimensional
>space, assuming that C is always normalized to 1, and have a finite product of intervals
>[-3,3]. This isn't just a technicality, since it forces the "equity" to scale with the value
>of the cube. It would not be satisfying to say that a position is worth 0.5 if you hold a 2
>cube and -0.3 if you hold a 4 cube.

The more I think about it the more I tend to suspect that a fixed
point of T will behave properly wrt the value of the cube
automatically. But never mind, instead of proving that one could
just as well declare it to be true.

I find it simpler conceptually to simply stick with the
infinite-dimensional product and let X be the space of
all E which _do_ scale properly. Of course in calculations
the finite-dimensional version would be a good idea.

About those I_P's - this has some relevance below: It's
important that we choose the I_P just right. We need
compactness, we don't want to use [-infinity, infinity]
for at least two reasons, and we need something with
the property that T actually maps X into X, i.e., such that
if E(P) is in I_P for all P then T(E)(P) is also in I_P.
Years ago I was stuck on a detail here: Say P is a position
where X has just doubled from C to 2C and O has to decide
whether to take or drop. I couldn't decide whether
I_P should be [-3C, 3C] or [-6C, 6C] for a while; when
I wrote down the details I couldn't show that T mapped
X into X with either choice.

Finally realized that in such a position I_P should be
[-6C, C]. This is "natural", since the "real" equity
cannot be larger than C (because O can just drop if
he feels like it) nor less than -6C (because X can
just drop any redoubles.) I don't know whether there
_is_ such a thing as "equity" here, but the previous
sentence is just motivation - if you set I_P = [-6C, C]
in this case (and [-C, 6C] in positions where O has
doubled and X is to take or drop) then you can show
that the T vaguely defined above does map X into X.

Seems worth mentioning just to emphasize that we do
need to _decide_ what all the I_P should be to make
the machine go, and not just any old choice will work.
This is what comes up below.

>> Now say X is the Cartesian product of the I_P (X is the space
>> of all possible "equity" functions that are consistent with
>> natural bounds given by the size of the cube.) You define
>> a map T from X to X by natural rules, like
>> [...submartingale conditions...]

Not important: Are they really "submartingale" conditions?
If P is such that X has a choice then the condition is

T(E)(P) = max(E(Q) : Q is one of X's choices)

but if O has a choice then the condition is

T(E)(P) = min(E(Q) : Q is one of O's choices).
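As a toy illustration of exactly these max/min/average rules (the positions, probabilities, and payoffs below are all invented), one application of T might look like:

```python
# Toy sketch of the map T described above.  Positions are nodes tagged
# with who (if anyone) has the choice; everything here is invented.

def apply_T(E, positions):
    new_E = {}
    for P, node in positions.items():
        kind = node["kind"]
        if kind == "X-choice":          # X maximizes over his options
            new_E[P] = max(E[Q] for Q in node["moves"])
        elif kind == "O-choice":        # O minimizes X's equity
            new_E[P] = min(E[Q] for Q in node["moves"])
        elif kind == "chance":          # dice: probability-weighted average
            new_E[P] = sum(p * E[Q] for Q, p in node["moves"])
        else:                           # terminal: payoff is fixed
            new_E[P] = node["value"]
    return new_E

positions = {
    "root": {"kind": "X-choice", "moves": ["a", "b"]},
    "a":    {"kind": "O-choice", "moves": ["win", "lose"]},
    "b":    {"kind": "chance",   "moves": [("win", 0.6), ("lose", 0.4)]},
    "win":  {"kind": "terminal", "value": 1.0},
    "lose": {"kind": "terminal", "value": -1.0},
}
E = {P: 0.0 for P in positions}
for _ in range(3):      # three passes suffice for this depth-two tree
    E = apply_T(E, positions)
print(E["root"])        # approximately 0.2: X avoids the line O can punish
```

On a finite acyclic tree like this one, iterating T simply converges to backgammon-style minimax values; the interesting behavior discussed in this thread only appears when positions can repeat.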

>> Brouwer gives an E such that T(E) = E; for this E we
>> have
>>
>> (1) E(position where X has a decision)
>> = max(E(positions X can move to)),
>>
>> (2) E(position where X is to roll) = weighted average,
>>
>> etc.
>>
>> A person could even calculate E approximately; it's
>> going to agree with the equity in places where you
>> can work out the equity because of (1) and (2), but
>> it's defined globally without any problems about
>> divergent sums.
>
>Nice argument, but...
>
>Let's take the position at http://www.bkgm.com/rgb/rgb.cgi?view+366 . Both players have one
>checker on the bar against 5 point boards. Whoever rolls a 6 first will have a big
>advantage. If I recall correctly, my conclusion was that Snowie evaluations underestimate
>the number of backgammons--5 closed out gives a lot, though I'm not sure when one would hit
>loose on the 6. For simplicity, let's assume that whoever rolls a 6 first has an equity of
>twice the value of the cube,

Believe it or not it took me a while to decide exactly what we might
mean by this assumption. At first I thought that you just meant that
you were going to start iterating T(E) with an E function having this
property. That didn't seem right. Then I decided you might mean that
we should assume that the "real" equity is 2C. But that makes me
nervous because we don't know that there is such a thing; in any
case, while I know kinda what you'd mean by that, sort of, if
we're talking about _proving_ things about properties of fixed
points of T it's hard to see what assumptions about the "real"
equity have to do with it... (the things I say here about the
real equity are all just motivation, not part of a proof of
anything.)

Decided that a reasonable interpretation is this:

Let's pretend we're not playing backgammon. We're playing a game
where there's a cube that works as in backgammon, but the
significance of dice rolls is different: In this game the
first player who rolls a 6 wins 2C, period.

I suspect that the analysis of that game will be more or less
the same as what you're talking about below - the advantage
to this formulation is that I know exactly what everything
means. If you don't object I'll take the above as what we're
talking about here (and I'll feel free to interpret later
statements about assumptions about the real equity in a
position in the same way.)

(Yes, this is just technicalities. But if we're going to
actually _prove_ things...)

*****************

Now, you're way ahead of me in actually working out the
numbers here (I'm a little slow this month for various
reasons). I don't see exactly where the numbers in the
next few paragraphs come from, but I wouldn't, because I
haven't tried to do the calculations. But: In the
3^n thing below I've done the calculations, and what
I get is not at _all_ what you say you get. So before
I try to reproduce your calculations in the next few
paragraphs we should get synchronized on what the
story is in that simpler situation (you say that the
machinery gives -3. Of course -3 is absurd, but I
don't get any -3 from the machine, so I wonder whether
the machinery you're talking about is really exactly
the same as the machinery I'm talking about...)


>with or without access to the cube.

>If one iterates T(E), there is an attracting cycle of length 2 (with the value of the
>position holding the cube oscillating between 22/61 and 44/61, and the value without access
>to the cube oscillating between 121/(18*61) and 22/61), and a repelling fixed point. The
>expected value doesn't exist because the attracting cycle is of length greater than 1, but
>you suggest using the fixed point, such that the value of the position with access to the
>cube is 22/43 and without access to the cube is 11/43. I think the fixed point is unique.
>
>That's 0.511628 holding the cube, 0.255814 not holding the cube.
>
>What happens if the position is not symmetric? Suppose if you roll a 6 first, your equity
>becomes slightly greater, 21/10 times the value of the cube, but if your opponent rolls a 6
>first, your opponent gets 2 times the value of the cube as above. Surely this is better for
>you, right? You could throw away the extra 0.1 times the value of the cube if you wanted to
>return to the original position. However, the fixed point has moved in the wrong direction!
>
>You, holding the cube: 0.445847 (671/1505)
>You, not holding the cube: 0.222924 (671/3010, half the above)
>Opponent, holding the cube: 0.60299 (363/602)
>Opponent, not holding the cube: 0.301495 (363/1204)
>
>So increasing your payoff has apparently made you worse off, and your opponent better off.
>That is NOT good for a measure of the value of the position. What went wrong?

So for now let's skip to here:

>This position is a generalization of the Petersburg Paradox. In the classical paradox, one
>gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
>3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
>on the nth toss you get 3^n. The expected value still does not exist. If you use the above
>machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
>paid to play this game! That's clearly unsatisfying, since you will definitely win.

Of course -3 is wrong, but I have no idea how you got this -3.

"The machinery" involves first _choosing_ an interval I_P for each
position P. If I make what seems to me the natural choice of I_P
I can show that T has a unique fixed point E, which satisfies
E(P) = +infinity for all P. (This is the right answer.) If I
leave out what seems like the natural assumption on the I_P
there are infinitely many fixed points, in fact for _any_ real
number A there is a fixed point E such that

E(starting position) = A.

So I cannot see where this -3 comes from.

You say "If you use the above machinery". I don't see what this
means until you tell me what intervals I_P you're considering;
the choice of the I_P is not part of the machine.

Here's the details when I work this out "using the above
machinery": Let's see. There's only one player, and there
are no decisions, just coin tosses. So the positions
are P_0, P_1, etc, where being in position P_n means that
there have been n flips so far, all tails.

What "should" I_P be? The "natural" choice seems to me to
be I_P = [0,infinity] for all P. (Negative E make no
sense here, as you point out. I don't see any natural
upper bound on what E(P) "should" be, and I also don't
see another way to choose I_P, without including negative
values, that will make T map X into X.)

Since there are no decisions, only coin tosses, the definition
of T is simple:

[i] T(E)(P_n) = 3^n/2 + E(P_(n+1))/2.

So if E is a fixed point then

[ii] E(P_n) = 3^n/2 + E(P_(n+1))/2.

(If anyone tuned in late: Why this rigmarole about fixed-point
theorems instead of just saying that an expectation satisfies
the above? In actual backgammon the recursion doesn't terminate,
because positions can be repeated - so it's not clear that
there _is_ a function E(P) that satisfies all the conditions
you want.)

Now, [ii] is just a recurrence that determines E(P_n) for
all n given E(P_0). If we assume that E(P_0) is _finite_
then it's easy to see from [ii] that

[iii] E(P_n) <= 2^n E(P_0) - 3^(n-1).

In particular [iii] implies that E(P_n) < 0 if n is large
enough, which contradicts the fact that E(P_n) is in
I_P_n. So that can't happen.

So with what seems like the natural choice of I_P the
_only_ fixed point is E(P) = infinity for all P, which
actually is the correct expected value.

Otoh if, say, we set I_P = [-infinity, infinity] for all
P then for _any_ value of E(P_0) there is a function
E satisfying [ii] for all n. Ie if we allow these I_P
then there are many fixed points, including ones
giving totally arbitrary E(P_0).
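This recurrence argument is easy to check numerically. A minimal sketch, taking [ii] exactly as written (so the flip out of P_n pays 3^n): however large a finite E(P_0) we start from, the iterates are eventually driven negative, so no finite-valued fixed point fits inside I_P = [0, infinity].

```python
def iterate_recurrence(e0, steps):
    """Forward form of [ii]: E(P_(n+1)) = 2*E(P_n) - 3**n."""
    e = e0
    for n in range(steps):
        e = 2 * e - 3 ** n
    return e

# Closed form: E(P_n) = 2**n * e0 - (3**n - 2**n), so the 3**n term
# eventually dominates no matter how large e0 is.
assert iterate_recurrence(1000, 17) > 0
assert iterate_recurrence(1000, 18) < 0  # already negative: finite e0 can't work
```

With e0 = 1000 the iterates stay positive through n = 17 and turn negative at n = 18, in line with [iii].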

Hence the question of how you get that -3. (If we
assume that I_P_0 = [-3,-3] then it follows that
E(P_0) = -3, but that's probably not what you
had in mind... when you say the machinery above
gives -3 for this game I really _do_ wonder whether
we have different machinery in mind.)


David C. Ullrich

Douglas Zare

Jul 22, 2001, 12:23:37 AM
to
"David C. Ullrich" wrote:

> Ok, I looked a little more carefully:
>
> On Thu, 19 Jul 2001 00:46:05 -0400, Douglas Zare
> <za...@math.columbia.edu> wrote:
> >
> >"David C. Ullrich" wrote:
> >[...]
> >

> >Do you need an infinite-dimensional space here? One could just use a finite-dimensional
> >space, assuming that C is always normalized to 1, and have a finite product of intervals
> >[-3,3]. This isn't just a technicality, since it forces the "equity" to scale with the value
> >of the cube. It would not be satisfying to say that a position is worth 0.5 if you hold a 2
> >cube and -0.3 if you hold a 4 cube.
>
> The more I think about it the more I tend to suspect that a fixed
> point of T will behave properly wrt the value of the cube
> automatically. But never mind, instead of proving that one could
> just as well declare it to be true.

The cycle of order 2 in the normalized version implies that there is a fixed point of the
unnormalized map which does not scale. Well, it scales properly by powers of 4 but not all powers
of 2. Also, I'm skipping the extra step where one decides whether to take or drop. That would
probably lengthen the cycle.

In particular, suppose holding the cube with a value that is a power of 4 gives you 22/61 C, but
holding the cube which is twice a power of 4 gives you 44/61 C. If the cube is 1, 4, 16, etc.,
the prescribed strategy is not to double (at C=1, your opponent won't either according to these
values) which wins 2C 36/61 - 2C 25/61 = 22/61 C.

However, if your opponent won't redouble, then you should--when the cube is 2, 8, 32, 128, etc.
Since according to these values, no one will double after that, you expect to get 44/61 C.

Of course, this value system is absurd. If the current stakes are $20, you shouldn't have to know
whether the initial stakes were $5 or $10 to make your decision.
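The 36/61 and 22/61 above are quick to verify with exact arithmetic. A sketch, assuming the player on roll enters (rolls at least one 6) with probability 11/36:

```python
from fractions import Fraction

p = Fraction(11, 36)                 # chance of rolling at least one 6
# P = p + (1 - p)**2 * P  =>  probability the player on roll gets a 6 first
P = p / (1 - (1 - p) ** 2)
assert P == Fraction(36, 61)

# Nobody doubles at cube C: win 2C with prob 36/61, lose 2C with prob 25/61.
assert 2 * P - 2 * (1 - P) == Fraction(22, 61)
# One double, taken, no further doubles: the stake is 4C instead of 2C.
assert 4 * P - 4 * (1 - P) == Fraction(44, 61)
```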

> I find it simpler conceptually to simply stick with the
> infinite-dimensional product and let X be the space of
> all E which _do_ scale properly. Of course in calculations
> the finite-dimensional version would be a good idea.
>
> About those I_P's - this has some relevance below: It's
> important that we choose the I_P just right. We need
> compactness, we don't want to use [-infinity, infinity]
> for at least two reasons, and we need something with
> the property that T actually maps X into X, is such that
> if E(P) is in I_P for all P then T(E)(P) is also in I_P.

Which two reasons did you have in mind? If one uses [-oo,oo], then any fixed points in a more
restricted setting will still be present. The existence result seems weaker, but couldn't you
just show that the fixed points have to be within reasonable intervals afterwards due to the
ability to pass a double?

> >> Now say X is the Cartesian product of the I_P (X is the space
> >> of all possible "equity" functions that are consistent with
> >> natural bounds given by the size of the cube.) You define
> >> a map T from X to X by natural rules, like
> >> [...submartingale conditions...]
>
> Not important: Are they really "submartingale" conditions?

You are right; the value is not a semimartingale if both players make "errors." Each player can
at best keep the expected value after the next step constant.

> [...]


> Let's pretend we're not playing backgammon. We're playing a game
> where there's a cube that works as in backgammon, but the
> significance of dice rolls is different: In this game the
> first player who rolls a 6 wins 2C, period.

Yes, that's what I meant. The actual value, according to Snowie rollouts, should be a bit
>smaller, but that shouldn't affect the analysis.

> But: In the
> 3^n thing below I've done the calculations, and what

> I get is not at _all_ what you say you get. [...]


>
> >This position is a generalization of the Petersburg Paradox. In the classical paradox, one
> >gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
> >3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
> >on the nth toss you get 3^n. The expected value still does not exist. If you use the above
> >machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
> >paid to play this game! That's clearly unsatisfying, since you will definitely win.
>
> Of course -3 is wrong, but I have no idea how you got this -3.
>
> "The machinery" involves first _choosing_ an interval I_P for each
> position P. If I make what seems to me the natural choice of I_P
> I can show that T has a unique fixed point E, which satisfies
> E(P) = +infinity for all P. (This is the right answer.) If I
> leave out what seems like the natural assumption on the I_P
> there are infinitely many fixed points, in fact for _any_ real
> number A there is a fixed point E such that
>
> E(starting position) = A.
>
> So I cannot see where this -3 comes from.

For starters, it comes from summing a geometric series outside the radius of convergence:
3/2+(3/2)^2+(3/2)^3+... ~ (3/2)/(1-(3/2)). That allows one to find a fixed point.

Before the nth coin toss, the value is -(3^n). If one stops, that is lucky by 2(3^n), and the
payoff is 3^n. If one continues, that is unlucky by 2(3^n), and the future payoffs are valued at
-(3^(n+1)).
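This bookkeeping is at least self-consistent: the assignment E(P_n) = -(3^n) does satisfy the one-step averaging condition, as a quick check confirms.

```python
from fractions import Fraction

# Before the n-th toss the position is "valued" at -(3**n); stopping pays 3**n
# with probability 1/2, continuing leads to a position valued at -(3**(n+1)).
for n in range(1, 20):
    value = -(3 ** n)
    one_step = Fraction(1, 2) * 3 ** n + Fraction(1, 2) * (-(3 ** (n + 1)))
    assert value == one_step
```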

> You say "If you use the above machinery". I don't see what this
> means until you tell me what intervals I_P you're considering;
> the choice of the I_P is not part of the machine.

I looked for a fixed point without specifying the intervals. Is there some natural way to choose
between fixed points when there are multiple ones? I suppose [-oo,oo] was what I used, but I also
assumed that the value would scale by a factor of 3. The equation satisfied by a fixed point is
then

a = (3/2)a + 3/2

so the only possibilities are -3, -oo, and oo.

> Otoh if, say, we set I_P = [-infinity, infinity] for all
> P then for _any_ value of E(P_0) there is a function
> E satisfying [ii] for all n. Ie if we allow these I_P
> then there are many fixed points, including ones
> giving totally arbitrary E(P_0).

Yes, but if E(P_0)=1, then E(P_1)=-1, which is not 3E(P_0).


> IIRC someone worked out that for the
> >classical Petersburg Paradox, the median payoff after n trials grows as c n log n. For other
> >variations, the median can oscillate in sign infinitely often. In those cases, it is unclear
> >to me who should pay to play. I'm not sure what happens in this case--in what sense does
> >always redoubling beat holding the cube? I'll try to work out the properties of the
> >distribution of partial sums this weekend if no one else has posted them by Friday.

I haven't worked out all of the details, but the sum of n trials can be approximated in some
sense by a sum of scaled Poisson distributions corresponding to the possible levels of payoffs.
To approximate the distribution (in the sense of the cumulative distribution function or its
inverse), one can ignore a set of extremely unlikely outcomes which occur in each trial with
probability less than c/n, where c is chosen based on how closely one wants to approximate the
distribution.

The distribution of sums for 36/25 n trials looks like the distribution for n trials multiplied
by -2. This implies that any fixed percentile will grow no faster than
cn^(log2/log(36/25)) or about n^1.9. So, if you want to play this 10 times, a few hundred dollars
need to be on hand, but if you play this 1000 times the payoff will often be a few million.
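The claimed growth exponent is easy to confirm, and a crude Monte Carlo (a sketch; the trial and sample counts here are arbitrary) gives a feel for the partial sums:

```python
import math
import random

# The scaling exponent from the argument above: percentiles grow like n**alpha.
alpha = math.log(2) / math.log(36 / 25)
assert 1.89 < alpha < 1.91  # about n**1.9

def one_trial(rng):
    """Toss a fair coin until the first head; a head on the k-th toss pays 3**k."""
    k = 1
    while rng.random() < 0.5:
        k += 1
    return 3 ** k

rng = random.Random(0)
sums = sorted(sum(one_trial(rng) for _ in range(100)) for _ in range(2000))
median_100 = sums[len(sums) // 2]
# Every trial pays at least 3, so 100 trials always sum to at least 300;
# the occasional huge payoff pushes the typical sum far above that floor.
assert median_100 >= 300
```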

Douglas Zare

David C. Ullrich

Jul 22, 2001, 9:32:19 AM
to
On Sun, 22 Jul 2001 00:23:37 -0400, Douglas Zare
<za...@math.columbia.edu> wrote:

>"David C. Ullrich" wrote:
>
>> Ok, I looked a little more carefully:
>>
>> On Thu, 19 Jul 2001 00:46:05 -0400, Douglas Zare
>> <za...@math.columbia.edu> wrote:
>> >
>> >"David C. Ullrich" wrote:
>> >[...]
>> >
>> >Do you need an infinite-dimensional space here? One could just use a finite-dimensional
>> >space, assuming that C is always normalized to 1, and have a finite product of intervals
>> >[-3,3]. This isn't just a technicality, since it forces the "equity" to scale with the value
>> >of the cube. It would not be satisfying to say that a position is worth 0.5 if you hold a 2
>> >cube and -0.3 if you hold a 4 cube.
>>
>> The more I think about it the more I tend to suspect that a fixed
>> point of T will behave properly wrt the value of the cube
>> automatically. But never mind, instead of proving that one could
>> just as well declare it to be true.
>
>The cycle of order 2 in the normalized version implies that there is a fixed point of the
>unnormalized map which does not scale. Well, it scales properly by powers of 4 but not all powers
>of 2. Also, I'm skipping the extra step where one decides whether to take or drop. That would
>probably lengthen the cycle.

Well, if you're skipping steps you're simply not calculating a fixed
point for T. It's not clear to me exactly what you are doing, but when
I suggest something might be interesting, you say you've worked it out
and it doesn't come out right, then later you say something about
what would "probably" happen if you actually did the calculation I
was talking about...

I suppose I should just try to work this out. (20 years ago, the
last time I thought about this, it never occurred to me that one
could actually do the calculations for simplified games like
this. Haven't got around to it yet just because I've been a little
under the weather.)

>In particular, suppose holding the cube with a value that is a power of 4 gives you 22/61 C, but
>holding the cube which is twice a power of 4 gives you 44/61 C. If the cube is 1, 4, 16, etc.,
>the prescribed strategy is not to double (at C=1, your opponent won't either according to these
>values) which wins 2C 36/61 - 2C 25/61 = 22/61 C.
>
>However, if your opponent won't redouble, then you should--when the cube is 2, 8, 32, 128, etc.
>Since according to these values, no one will double after that, you expect to get 44/61 C.
>
>Of course, this value system is absurd. If the current stakes are $20, you shouldn't have to know
>whether the initial stakes were $5 or $10 to make your decision.
>
>> I find it simpler conceptually to simply stick with the
>> infinite-dimensional product and let X be the space of
>> all E which _do_ scale properly. Of course in calculations
>> the finite-dimensional version would be a good idea.
>>
>> About those I_P's - this has some relevance below: It's
>> important that we choose the I_P just right. We need
>> compactness, we don't want to use [-infinity, infinity]
>> for at least two reasons, and we need something with
>> the property that T actually maps X into X, is such that
>> if E(P) is in I_P for all P then T(E)(P) is also in I_P.
>
>Which two reasons did you have in mind?

One is that T is no longer defined, so nothing works at all!
If a player is about to roll in position P then

T(E)(P) = a weighted average of E(position "P after rolling
various rolls")

If E(Q) can be both infinity and -infinity then these
sums are undefined.

And the second is what you mention:

>If one uses [-oo,oo], then any fixed points in a more
>restricted setting will still be present. The existence result seems weaker, but couldn't you
>just show that the fixed points have to be within reasonable intervals afterwards due to the
>ability to pass a double?

How do you show that? We're trying to show that there
exists a function E mapping positions to reals that
satisfies certain properties; if we take I_P as above
then various of the properties we want are immediate.

Even if the machine didn't break down totally with
I_P = [-oo,oo], making this paragraph moot, I don't
see any advantage to taking a different I_P and
trying to show the fixed point maps P into the "right"
interval.

>> >> Now say X is the Cartesian product of the I_P (X is the space
>> >> of all possible "equity" functions that are consistent with
>> >> natural bounds given by the size of the cube.) You define
>> >> a map T from X to X by natural rules, like
>> >> [...submartingale conditions...]
>>
>> Not important: Are they really "submartingale" conditions?
>
>You are right; the value is not a semimartingale if both players make "errors." Each player can
>at best keep the expected value after the next step constant.
>
>> [...]
>> Let's pretend we're not playing backgammon. We're playing a game
>> where there's a cube that works as in backgammon, but the
>> significance of dice rolls is different: In this game the
>> first player who rolls a 6 wins 2C, period.
>
>Yes, that's what I meant. The actual value, according to Snowie rollouts, should be a bit
>smaller, but that shouldn't affect the analysis.

Right. (I realized that another way to make all this precise would
be to assume gentleman mathematician backgammon players, who are not
interested in winning by luck: In any situation where it's possible
to calculate the actual equity (whatever that is) the game stops
and that amount is paid.)

>> But: In the
>> 3^n thing below I've done the calculations, and what
>> I get is not at _all_ what you say you get. [...]
>>
>> >This position is a generalization of the Petersburg Paradox. In the classical paradox, one
>> >gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
>> >3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
>> >on the nth toss you get 3^n. The expected value still does not exist. If you use the above
>> >machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
>> >paid to play this game! That's clearly unsatisfying, since you will definitely win.
>>
>> Of course -3 is wrong, but I have no idea how you got this -3.
>>
>> "The machinery" involves first _choosing_ an interval I_P for each
>> position P. If I make what seems to me the natural choice of I_P
>> I can show that T has a unique fixed point E, which satisfies
>> E(P) = +infinity for all P. (This is the right answer.) If I
>> leave out what seems like the natural assumption on the I_P
>> there are infinitely many fixed points, in fact for _any_ real
>> number A there is a fixed point E such that
>>
>> E(starting position) = A.
>>
>> So I cannot see where this -3 comes from.
>
>For starters, it comes from summing a geometric series outside the radius of convergence:
>3/2+(3/2)^2+(3/2)^3+... ~ (3/2)/(1-(3/2)). That allows one to find a fixed point.

Um, I don't see offhand what _that_ has to do with analyzing this
game by "the above machinery".

>Before the nth coin toss, the value is -(3^n). If one stops, that is lucky by 2(3^n), and the
>payoff is 3^n. If one continues, that is unlucky by 2(3^n), and the future payoffs are valued at
>-(3^(n+1)).
>
>> You say "If you use the above machinery". I don't see what this
>> means until you tell me what intervals I_P you're considering;
>> the choice of the I_P is not part of the machine.
>
>I looked for a fixed point without specifying the intervals. Is there some natural way to choose
>between fixed points when there are multiple ones?

Don't know. I don't know whether there ever _is_ more than one fixed
point if we take the "natural" I_P; that's one of the things I wish
I knew.

I do think it's natural to restrict to the natural I_P. We're not
trying to show that there exists an equity function that is the
only possible equity function in the universe. We're trying to
show that there is an equity function that works the way we want
an equity function to work. If there _is_ such a thing as the
actual "equity" its value is certainly in [-3C,3C], since it's
clear that O has a strategy that will prevent X from winning
more than 3C and X has a strategy that will prevent X from
losing more than 3C.

> I suppose [-oo,oo] was what I used, but I also
>assumed that the value would scale by a factor of 3. The equation satisfied by a fixed point is
>then
>
>a = (3/2)a + 3/2

??? I figured you'd say something like that. I don't see what
sense this makes. Um: I'm certain it makes a lot of sense from
various points of view, maybe. I don't see what sense it makes
as an explanation of how "the machinery" gives -3 for this
simplified game...

Oh: your "a" is E(starting position) and as you say, you are
making the natural assumption that the equity scales with the
"cube". That would indeed restrict the infintiely many fized points
to just a few, one with E(starting position) = -3. And
one being E(starting position) = infinity, which is the
right answer.

>so the only possibilities are -3, -oo, and oo.
>
>> Otoh if, say, we set I_P = [-infinity, infinity] for all
>> P then for _any_ value of E(P_0) there is a function
>> E satisfying [ii] for all n. Ie if we allow these I_P
>> then there are many fixed points, including ones
>> giving totally arbitrary E(P_0).
>
>Yes, but if E(P_0)=1, then E(P_1)=-1, which is not 3E(P_0).

Right.

But in any case, like I said: We're not (well I'm not) trying
to establish uniqueness under no constraints whatever, just
trying to show that there does exist something which works
the way "actual equity" would work (or to show that there
really is no such thing.) It's clear that the real expectation
cannot be negative, so restricting I_P to a subset of [0,infinity]
is an extremely natural thing to do, and if we do that then
the only fp gives E(starting position) = infinity.

>> IIRC someone worked out that for the
>> >classical Petersburg Paradox, the median payoff after n trials grows as c n log n. For other
>> >variations, the median can oscillate in sign infinitely often. In those cases, it is unclear
>> >to me who should pay to play. I'm not sure what happens in this case--in what sense does
>> >always redoubling beat holding the cube? I'll try to work out the properties of the
>> >distribution of partial sums this weekend if no one else has posted them by Friday.
>
>I haven't worked out all of the details, but the sum of n trials can be approximated in some
>sense by a sum of scaled Poisson distributions corresponding to the possible levels of payoffs.
>To approximate the distribution (in the sense of the cumulative distribution function or its
>inverse), one can ignore a set of extremely unlikely outcomes which occur in each trial with
>probability less than c/n, where c is chosen based on how closely one wants to approximate the
>distribution.
>
>The distribution of sums for 36/25 n trials looks like the distribution for n trials multiplied
>by -2. This implies that any fixed percentile will grow no faster than
>cn^(log2/log(36/25)) or about n^1.9. So, if you want to play this 10 times a few hundred dollars
>need to be on hand, but if you play this 1000 times the payoff will often be a few million.
>
>Douglas Zare
>


David C. Ullrich

Douglas Zare

Jul 29, 2001, 9:11:12 PM
to

"David C. Ullrich" wrote:

I chose an operator and worked out its properties. As far as I can see, your definition of T was
ambiguous. You said that one should choose the coordinates with "suitable granularity." Isn't it a bad
sign if the exact level of granularity matters, e.g., if the properties of the fixed points (and the
values they associate to positions) change if you add in an extra step with no effect on the game
(relabelling one step as two)?

Here is one possibility in Mathematica code:

(* normalized values, cube = 1; iterate up from the base case 0 *)
holding[0] = 0; nothold[0] = 0; oppdec[0] = 0;
(* on roll with cube access: roll (win 2 w.p. 11/36, else opponent rolls), or double *)
holding[n_] := holding[n] = Max[22/36 - 25/36 nothold[n - 1], oppdec[n - 1]]
nothold[n_] := nothold[n] = 22/36 - 25/36 holding[n - 1]
(* opponent picks the better of dropping (-1) and taking at the doubled stake *)
oppdec[n_] := oppdec[n] = -Max[-1, -44/36 + 50/36 holding[n - 1]]

In this case there is an attracting cycle of length 4:

holding, not holding, opponent deciding
{0.57664, 0.28832, 0.57664}
{0.57664, 0.21066, 0.42133}
{0.46482, 0.21066, 0.42133}
{0.46482, 0.28832, 0.57664}

with fixed point
{0.51163, 0.25581, 0.51163} = {22/43, 11/43, 22/43}.

Another choice, with the only difference being the definition of the oppdec function, gives an
attracting cycle of length 2:

oppdec[n_] := oppdec[n] = -Max[-1, -2nothold[n - 1]]

{0.36066, 0.11020, 0.72131}
{0.72131, 0.36066, 0.22040}

with the same fixed point
{0.51163, 0.25581, 0.51163}.

Those are in the scaling version/subspace. I'm not sure whether the cycles lift to fixed points which
do not scale. Am I allowed to say what I would guess would happen?
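The first recursion above translates directly into Python; a sketch with exact fractions, mirroring the Mathematica definitions, confirms that (22/43, 11/43, 22/43) really is a fixed point, while iterating from zero wanders off toward the attracting cycle instead:

```python
from fractions import Fraction as F

def step(holding, nothold, oppdec):
    """One application of the normalized operator from the Mathematica code."""
    new_h = max(F(22, 36) - F(25, 36) * nothold, oppdec)
    new_n = F(22, 36) - F(25, 36) * holding
    new_d = -max(F(-1), F(-44, 36) + F(50, 36) * holding)
    return new_h, new_n, new_d

fp = (F(22, 43), F(11, 43), F(22, 43))
assert step(*fp) == fp  # the claimed fixed point checks out exactly

# Iterating from (0, 0, 0) does not settle at fp; the fixed point is repelling.
state = (0.0, 0.0, 0.0)
for _ in range(300):
    state = step(*state)
assert abs(state[0] - 22 / 43) > 0.01  # the orbit escapes toward the 4-cycle
```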

> T(E)(P) = a weighted average of E(position "P after rolling
> various rolls")
>
> If E(Q) can be both infinity and -infinity then these
> sums are undefined.

Ok, good point. How about [-6,6]?

> Even if the machine didn't break down totally with
> I_P = [-oo,oo], making this paragraph moot, I don't
> see any advantage to taking a different I_P and
> trying to show the fixed point maps P into the "right"
> interval.

Didn't you say you had a problem showing that the map was actually from a region to itself? If one
takes [-6,6], then the fact that the range is suitably restricted is easy, and then one can check that
the value is always between -3 and 3.

> >> >This position is a generalization of the Petersburg Paradox. In the classical paradox, one
> >> >gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
> >> >3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
> >> >on the nth toss you get 3^n. The expected value still does not exist. If you use the above
> >> >machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
> >> >paid to play this game! That's clearly unsatisfying, since you will definitely win.
> >>
> >> Of course -3 is wrong, but I have no idea how you got this -3.
> >>
> >> "The machinery" involves first _choosing_ an interval I_P for each
> >> position P. If I make what seems to me the natural choice of I_P
> >> I can show that T has a unique fixed point E, which satisfies
> >> E(P) = +infinity for all P. (This is the right answer.) If I
> >> leave out what seems like the natural assumption on the I_P
> >> there are infinitely many fixed points, in fact for _any_ real
> >> number A there is a fixed point E such that
> >>
> >> E(starting position) = A.
> >>
> >> So I cannot see where this -3 comes from.
> >
> >For starters, it comes from summing a geometric series outside the radius of convergence:
> >3/2+(3/2)^2+(3/2)^3+... ~ (3/2)/(1-(3/2)). That allows one to find a fixed point.
>
> Um, I don't see offhand what _that_ has to do with analyzing this
> game by "the above machinery".

I identified the algebraic condition satisfied by the fixed point that scales. It's not supposed to be
deeply satisfying. In fact, I brought it up to illustrate an objection to attributing meaning to the
fixed point: the fixed point can move downwards when one is made better off.

Douglas Zare

David C. Ullrich

Jul 30, 2001, 11:05:27 AM
to
Well, lemme just say first that it doesn't seem
right for you to be doing all the work here. I
haven't been up to much serious work, (even
this serious); had no idea you were going to be
getting into this so seriously, assumed you'd
say at most "that's interesting" and move on...
(I'm upper than I've been, may be fully up soon,
we'll see.)

On Sun, 29 Jul 2001 21:11:12 -0400, Douglas Zare
<za...@math.columbia.edu> wrote:

Well you're right about that. There's nothing ambiguous about the
actual definition of T that I have in mind, but I never gave a
complete description of that definition here. In any case, since
I wrote the above I've come to the point of view that one can
regard these arguments as being precise analyses of games other
than backgammon, which have the advantage that they're simple enough
that the analysis is doable, while still being enough like
backgammon to be relevant. So never mind the previous paragraph.

Skipping the step where one decides to take or drop has no
effect _if_ we already know whether it's correct to take or
drop. I'm pretty sure you're way ahead of me on _that_ question,
hence the differing opinions on what matters. (My pov when
I wrote the above was that it may very well be correct to
double and take here, but since the whole point to the
whole thing is to see whether we can show that there _is_
such a thing a "correct play" making assumptions about
what's correct seems like a bad idea, even if the
assumptions are correct. Revised pov would say well ok,
instead of assuming something about what the correct play
is let's just consider the related game where there is no
option; the analysis will be the same, the related game
seems close enough to backgammon that the analysis is
relevant, and I don't need to worry about what we're
skipping.)

I think this is in regard to the question of what the I_P should
be, I'm not sure in exactly what context. I believe that simply
using [-6,6] works for most positions; I don't think that it
works for a "position" where X has just doubled and O is deciding
whether to take or drop.

>> Even if the machine didn't break down totally with
>> I_P = [-oo,oo], making this paragraph moot, I don't
>> see any advantage to taking a different I_P and
>> trying to show the fixed point maps P into the "right"
>> interval.
>
>Didn't you say you had a problem showing that the map was actually from a region to itself?

Yes and no: I said that I _had_ had a problem with that. Once I
realized that the "natural" choice for I_P was [-6C, C] for
a position where X has doubled from C to 2C and O is deciding
whether to take, the problem went away. When I worked all this
out twenty years ago I couldn't find another choice that
let me show that T mapped X to X. I could try to get back
to you on what goes wrong with other choices (if you care;
seems like [-6C,C] is clearly the natural choice anyway:
The reason [-3C,3C] is "right" in most positions is that
O has a strategy that will prevent X from winning more
than 3C and X has a strategy that will prevent him from
losing more than 3C. In the position I'm discussing right
now [-6C, C] is "right" in the same sense; now if we
motivate the choice of I_P by this sort of argument
and then it turns out that T maps X to X, that seems
like the thing to do.)
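The bounding argument behind [-6C, C] can be sketched in a few lines. This is just an illustration of the reasoning above, not part of the original argument; the helper name is hypothetical. The only facts used are that O can always drop (X then wins exactly the current stake C) and that a taken game at stake 2C is worth at most a backgammon, which triples the stake:

```python
# Illustrative sketch: bounds on X's equity in the position
# "X has doubled from C to 2C, O is deciding whether to take".
def equity_interval(C):
    drop = C                          # O drops: X wins exactly C
    take_lo, take_hi = -6 * C, 6 * C  # taken game at stake 2C; a
                                      # backgammon triples the stake
    lo = min(drop, take_lo)           # worst case for X: -6C
    hi = min(drop, take_hi)           # O minimizes, so O's drop
                                      # option caps X's equity at C
    return lo, hi

assert equity_interval(1) == (-6, 1)
assert equity_interval(2) == (-12, 2)
print(equity_interval(1))  # (-6, 1)
```

The point is just that the cap at C comes from O's option to drop, while the floor at -6C comes from the worst taken outcome; that is the sense in which [-6C, C] is "right" here the way [-3C, 3C] is right in ordinary positions.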

Simply making the intervals larger doesn't necessarily
make it easier to show that T maps X to X: the range
has increased but so has the domain...

>If one
>takes [-6,6], then the fact that the range is suitably restricted is easy, and then one can check that
>the value is always between -3 and 3.

Could be. I'm not certain exactly what you're suggesting I_P should be
in the auto-scaled version in the P above, where X has just doubled:
If we say the interval used in defining X is [a,b] and the actual
"equity" is then in [aC, bC], it's not clear what C is here; is
C the old C or the new one?

Not a big problem, but it's not clear what convention you're
using there.

>> >> >This position is a generalization of the Petersburg Paradox. In the classical paradox, one
>> >> >gets paid 2^n with probability 1/2^n for n=1, 2, 3, ... but let's change that to a payoff of
>> >> >3^n with a probability of 1/2^n, i.e., we toss a coin until the first head, and if that is
>> >> >on the nth toss you get 3^n. The expected value still does not exist. If you use the above
>> >> >machinery to look for a fixed point, you get a value of -3, i.e., that you would have to be
>> >> >paid to play this game! That's clearly unsatisfying, since you will definitely win.
>> >>
>> >> Of course -3 is wrong, but I have no idea how you got this -3.
>> >>
>> >> "The machinery" involves first _choosing_ an interval I_P for each
>> >> position P. If I make what seems to me the natural choice of I_P
>> >> I can show that T has a unique fixed point E, which satisfies
>> >> E(P) = +infinity for all P. (This is the right answer.) If I
>> >> leave out what seems like the natural assumption on the I_P
>> >> there are infinitely many fixed points, in fact for _any_ real
>> >> number A there is a fixed point E such that
>> >>
>> >> E(starting position) = A.
>> >>
>> >> So I cannot see where this -3 comes from.
>> >
>> >For starters, it comes from summing a geometric series outside the radius of convergence:
>> >3/2+(3/2)^2+(3/2)^3+... ~ (3/2)/(1-(3/2)). That allows one to find a fixed point.
>>
>> Um, I don't see offhand what _that_ has to do with analyzing this
>> game by "the above machinery".
>
>I identified the algebraic condition the fixed point that scales satisfies. It's not supposed to be
>deeply satisfying. In fact, I brought it up to illustrate an objection to attributing meaning to the
>fixed point: the fixed point can move downwards when one is made better off.

Right. Realized that later. Been meaning to get back to you on exactly
this point - if the fixed point really does do that then yes, that
does indicate that this thing is not The Right Thing. (And no, I have
no reason to think that you got it wrong, just haven't got around to
verifying it myself. At first it seemed clear that something funny was
up since I really didn't see what the things you were saying had to
do with the machinery I'd set up. But it's much clearer now what this
does have to do with the things I've been talking about - really have
been meaning to work this out and I still intend to ASAP; exactly
when that is, is not clear.)
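For what it's worth, the -3 for the modified Petersburg game above is easy to check directly. If the first toss is a tail (probability 1/2), the remaining game is the same game with every payoff tripled, so a value V that scales would have to satisfy V = 3/2 + (3/2)V, whose unique solution is -3. But that fixed point is repelling (the map has slope 3/2 > 1), and the truncated expectations grow without bound, which is consistent with the true value being +infinity. A minimal sketch, with nothing here taken from any particular analysis beyond the game as quoted:

```python
from fractions import Fraction

# Modified Petersburg game: win 3**n with probability 1/2**n.
# Self-similarity after a tail gives V = 3/2 + (3/2)*V:
V = Fraction(3, 2) / (1 - Fraction(3, 2))
assert V == -3   # the algebraic "fixed point" discussed above

# Truncated expectations sum_{n<=N} (3/2)**n grow without bound,
# so the actual expected value is +infinity, not -3:
partial = [float(sum(Fraction(3, 2) ** n for n in range(1, N + 1)))
           for N in (5, 10, 20)]
assert partial[0] < partial[1] < partial[2]

# And iterating v -> 3/2 + (3/2)*v from any start above -3 runs
# off to +infinity: -3 is a repelling fixed point of the map.
v = 0.0
for _ in range(30):
    v = 1.5 + 1.5 * v
assert v > 1e4
print(V, partial)
```

So the machinery does pick out -3 as the scaling solution, but the iteration itself points at +infinity, which matches the objection that the fixed point here carries no real meaning.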
