
Extremely theoretical


ptane...@hotmail.com

Aug 19, 1997


The diagram below should pique the interest of backgammon theoreticians.
Its invention was motivated by the paper: "Optimal Doubling in Backgammon",
E. Keeler & J. Spencer, Operations Research, Vol. 23, No.6. It is
presented as a problem: What makes this position extraordinary? Those
without access to the aforementioned paper will find this a real challenge,
though there are some contributors to this newsgroup who are evidently
perspicacious enough to suss it out on their own.
I have since been informed that Bob Floyd published a position
essentially identical in an obscure, now defunct, backgammon magazine
several years ago. He therefore deserves recognition of first discovery.
This posting is submitted for the edification of those unfamiliar with
that article.
Do any other positions exist which possess the same property?
It should be noted that the Jacoby rule regarding an initial double
may or may not be in effect; the conclusion is unaltered.
Also, note that the position is perfectly legal.


Money game
X on roll - or O on roll - it doesn't matter!


O X X ^ ^ ^ | | X O O O O O
| | O O O O O
| |
| X | ---
| | | 1 |
| O | ---
| |
| | X X X X X
X ^ O O ^ ^ | | O X X X X X
_________________________________________________
< X home board >


Paul Tanenbaum


Brian Sheppard

Aug 20, 1997

Gary Wong <ga...@cs.auckland.ac.nz> wrote in article
<ieypvr9...@cs20.cs.auckland.ac.nz>...
> ptane...@hotmail.com writes:
> > Money game

> > O X X ^ ^ ^ | | X O O O O O
> > | | O O O O O
> > | X | ---
> > | | | 1 |
> > | O | ---
> > | | X X X X X
> > X ^ O O ^ ^ | | O X X X X X
> > _________________________________________________
> > < X home board >
>
> I guess 11/36 probability of a 6 makes the
> situation a double/take for whoever's on roll, right? So the cube
> gets turned every roll until somebody rolls a 6?
>
> In that case, the expected gain of the player on roll (from this roll only)
> is 11/36 of 2 points (after doubling, ignoring gammons and backgammons)
> for 0.61 points; the expected gain of the opponent on the turn after is
> 25/36 (the first player missing) x 11/36 (the second player hitting) x 4
> (the cube is turned twice) for 0.85 points; thereafter the expected gains
> continue increasing at a factor of 2 (for the cube) x 25/36 (for the
> probability of the game lasting that long) = 50/36 = 1.39. So the total
> expected gain is the difference of the odd or the even terms (depending
> which player you mean) of the geometric series a = 22/36, r = 50/36.
> But since r > 1, the series never converges, and so the expected gain
> for both players is infinite, right? (and so is the expected loss!) That
> sure counts as "extraordinary" in my book :-)

Expected gain is infinite? Not at all! The expected gain in a backgammon
game is never more than triple the cube. I don't know where you made your
error, but you can rest assured that there is one.

The way to analyze this type of position is with a "recurrent equation."
In this type of equation the quantity you wish to evaluate comes up as a
term on both sides of the equation after you analyze a few rolls. In this
case, owing to the symmetry of the position, we obtain a "recurrence" of
the quantity after just one roll.

Let's analyze the cubeless case first. Call the equity of the side to
move X, and let the equity of the side to move given that he rolls a 6
be E. Then we have the following equation:

In 36 rolls, the side to move will have 11 sixes, winning E each time,
and 25 misses, in which case he loses X, since the situation is exactly
reversed. Therefore,

36*X = 11*E - 25*X

So X = E * 11/61. In words: multiply the value of the position
after a 6 by 11/61 to obtain the equity for the side to move.


When we take the cube into account that changes things slightly. Let Y
be the equity after the opponent takes. In 36 rolls the side to move
will win E in 11 rolls, and will lose 2Y in 25 rolls (because the
opponent will redouble). Now the equation is 36*Y = 11*E - 25*2Y, so
Y = E * 11/86.

To sum up: the side to roll should double, to raise his equity from
11/61 * E to 2 * 11/86 * E. The other side should take because
11/61 * E is definitely less than 1.
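
(A quick exact check of both recurrences, sketched in Python with the
Fraction type; values are in units of E, so E itself stays symbolic:)

    from fractions import Fraction

    p, q = Fraction(11, 36), Fraction(25, 36)

    # Cubeless: 36X = 11E - 25X, i.e. X = pE - qX, so X = 11E/61.
    X = p / (1 + q)
    assert X == Fraction(11, 61)

    # Always double/take: 36Y = 11E - 50Y, i.e. Y = pE - 2qY, so Y = 11E/86.
    Y = p / (1 + 2 * q)
    assert Y == Fraction(11, 86)

    # Doubling raises the roller's equity from 11/61 E to 22/86 E = 11/43 E.
    print(X, 2 * Y)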

Brian

Gary Wong

Aug 20, 1997

ptane...@hotmail.com writes:
> The diagram below should pique the interest of backgammon theoreticians.
> Its invention was motivated by the paper: "Optimal Doubling in Backgammon",
> E. Keeler & J. Spencer, Operations Research, Vol. 23, No.6. It is
> presented as a problem: What makes this position extraordinary?

> [snip]

> Money game
> X on roll - or O on roll - it doesn't matter!
>
>

> O X X ^ ^ ^ | | X O O O O O
> | | O O O O O
> | |
> | X | ---
> | | | 1 |
> | O | ---
> | |
> | | X X X X X
> X ^ O O ^ ^ | | O X X X X X
> _________________________________________________
> < X home board >

Hmmm... looks like the first player to roll a 6 gets a tremendous
advantage (either an anchor and 2 of the opponent on the bar against a
5 point board for 6-1 to 6-5, or 3 of the opponent on the bar for
6-6). And I guess 11/36 probability of this happening makes the
situation a double/take for whoever's on roll, right? So the cube
gets turned every roll until somebody rolls a 6?

In that case, the expected gain of the player on roll (from this roll only)
is 11/36 of 2 points (after doubling, ignoring gammons and backgammons)
for 0.61 points; the expected gain of the opponent on the turn after is
25/36 (the first player missing) x 11/36 (the second player hitting) x 4
(the cube is turned twice) for 0.85 points; thereafter the expected gains
continue increasing at a factor of 2 (for the cube) x 25/36 (for the
probability of the game lasting that long) = 50/36 = 1.39. So the total
expected gain is the difference of the odd or the even terms (depending
which player you mean) of the geometric series a = 22/36, r = 50/36.
But since r > 1, the series never converges, and so the expected gain
for both players is infinite, right? (and so is the expected loss!) That
sure counts as "extraordinary" in my book :-)
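
(For anyone who wants to see the divergence concretely, here is a
little Python sketch of the partial sums of that series; the sign
alternates between the two players and the magnitude grows by 50/36
per roll:)

    from fractions import Fraction

    a, r = Fraction(22, 36), Fraction(50, 36)   # first term, growth rate

    total, term = Fraction(0), a
    for turn in range(1, 13):
        total += term
        print(f"after turn {turn:2d}: partial sum = {float(total):+8.3f}")
        term *= -r   # next roll: cube doubled, reached with prob 25/36

    # The partial sums swing between ever larger positive and negative
    # values, so the series has no limit at all.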

Cheers,
Gary. (GaryW on FIBS)
--
Gary Wong, Computer Science Department, University of Auckland, New Zealand
ga...@cs.auckland.ac.nz http://www.cs.auckland.ac.nz/~gary/

David desJardins

Aug 21, 1997

Brian Sheppard (invalid address) writes:
> Expected gain is infinite? Not at all! The expected gain in a backgammon
> game is never more than triple the cube. I don't know where you made your
> error, but you can rest assured that there is one.

As Brian says, the expected value can't be infinite. Brian goes on to
correctly compute the value.

What is true (and somewhat interesting) is that the *variance* of the
eventual payoff is infinite. That proves that the payoffs of some
backgammon positions have infinite variance. If you believe that this
position has a positive probability of occurring in a normal game with
best play by both sides (which appears likely enough, although the
probability is certainly small), then that demonstrates that the payoff
of a money backgammon game from the normal start position with best play
by both players must also have infinite variance. This is not really
surprising, but it is interesting to have a way to (almost) prove it.
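
(A minimal Monte Carlo sketch of that claim, under two simplifying
assumptions that are not exact play: both sides double/take every roll,
and whoever rolls the first 6 simply wins the current cube. The running
mean is printed only for illustration; strictly speaking it does not
exist here:)

    import random

    def play_one_game():
        cube, on_roll = 1, +1            # +1 = first player, -1 = opponent
        while True:
            cube *= 2                    # double/take before each roll
            if random.random() < 11 / 36:
                return on_roll * cube    # entered with a 6, wins the cube
            on_roll = -on_roll

    random.seed(1)
    mean = m2 = 0.0
    for n in range(1, 1_000_001):
        x = play_one_game()
        delta = x - mean                 # Welford's running mean/variance
        mean += delta / n
        m2 += delta * (x - mean)
        if n % 200_000 == 0:
            print(f"n={n}: mean={mean:+10.2f}  variance={m2 / (n - 1):12.1f}")

    # The sample variance keeps jumping upward as rare, enormous payoffs
    # arrive, consistent with an infinite-variance payoff distribution.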

David desJardins

Brian Sheppard

Aug 21, 1997

bob koca <ko...@bobrae.bd.psu.edu> wrote in article
<VHs$dLbr8...@news.erie.net>...

> "Brian Sheppard" <!bri...@mstone.com> wrote:
>
> >Gary Wong <ga...@cs.auckland.ac.nz> wrote in article
> ><ieypvr9...@cs20.cs.auckland.ac.nz>...
> >> ptane...@hotmail.com writes:
> >> > Money game
> >> > O X X ^ ^ ^ | | X O O O O O
> >> > | | O O O O O
> >> > | X | ---
> >> > | | | 1 |
> >> > | O | ---
> >> > | | X X X X X
> >> > X ^ O O ^ ^ | | O X X X X X
> >> > _________________________________________________
> >> > < X home board >
> >>
> Gary's only error, and it is a small one, was calling the expected
> value infinite. Undefined is more appropriate.
>
> Brian's recursion method normally works, however it depends on
> the expected values actually existing. Here they don't. Many people
> find examples of this sort confusing and I believe some of that stems
> from not knowing exactly what expected value means.
>
> Suppose we played Gary's position many, many times. The average
> winnings will NEVER settle down to a limit. The reason is that there
> will eventually be larger and larger amounts of points lost on a
> single game, enough to wipe out whatever progress was made towards
> the average gain approaching a limit.
>
> The concept of undefined expected values is a little tricky and
> a little counterintuitive. If you are interested in exploring it
> further I would suggest looking at definitions of expected value,
> the weak and strong laws of large numbers, and the Cauchy distribution
> (an example in which expected value does not exist, even though the
> distribution is symmetric). All of these will probably be in a good
> undergraduate level probability text.

First I want to tender my apologies to Gary, who it seems has grokked
the real issue.

I can get my head around the concept of "undefined expected values"
for only moments at a time, and then it slips away. Perhaps if you
could answer a few questions it might help me to understand what is
going on.

First: am I correct to say that the situation arises because the payoff
is increasing at a factor of 2 per turn (because of the cube-turn) but
the probability of that payoff is decreasing at the rate 25/36 (which
is greater than 1/2)? The point is that the chance of a huge payoff is
decreasing, but the payoffs are increasing faster.

Second: suppose we take a sample of outcomes from this game, and
observe the average amount won as time goes on for an indefinite
number of games. Given an arbitrary positive number N, is it true
that the series of averages will eventually be larger than N? Is
it also true that the series of averages will eventually be less
than -N?

Third: is it correct to double? If it is not correct to double,
then the position does have a theoretical equity. (But, what is
the basis for deciding when to double if there is no equity to
reason about?)

Fourth: is it correct to take? If it is not correct to take, then
the position does have theoretical equity.

Fifth: If doubling and taking is correct, then this position has
undefined equity. That means that positions that lead to this have
undefined equity and so on. What process, if any, prevents backgammon
as a whole from having undefined equity?

Sixth: I am curious about computer evaluation of this situation. Does
JF double for the side to roll?

Thanks in advance to anyone who can answer any of these.

Brian


Morten Wang

Aug 21, 1997

ptane...@hotmail.com writes:
>> >> > Money game
>> >> > O X X ^ ^ ^ | | X O O O O O
>> >> > | | O O O O O
>> >> > | X | ---
>> >> > | | | 1 |
>> >> > | O | ---
>> >> > | | X X X X X
>> >> > X ^ O O ^ ^ | | O X X X X X
>> >> > _________________________________________________
>> >> > < X home board >

["Brian Sheppard" <!bri...@mstone.com>]


>Sixth: I am curious about computer evaluation of this situation. Does
>JF double for the side to roll?

My JellyFish v3.0 says it's a double/take when it analyzes the position
on level 5. On levels 6 and 7 it becomes a no-double/take.

Morten!

--
"God does not deduct from our alloted life span
the time spent playing backgammon."
-> Morty on FIBS
--> Backgammon homepage: http://home.sn.no/~warnckew/gammon/

Bob Koca

Aug 23, 1997

(See thread for previous discussion from Gary Wong, Brian Sheppard, and
myself, most of which I have edited out.)


Brian wrote:
>> Third: is it correct to double? If it is not correct to double,
>> then the position does have a theoretical equity. (But, what is
>> the basis for deciding when to double if there is no equity to
>> reason about?)


One method may be to compare the no-double strategy to other
strategies which result in well-defined equities. For example,
suppose you own the cube in the above position. The strategy "never
double" has a clearly defined equity. The strategy "double once, take
if doubled, but never double after that" also has a clearly defined
equity. If this equity is higher, then it is clearly better to double.
A similar technique may be useful for deciding whether to take.
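
(One concrete way to carry out this comparison is sketched below: give
each side a finite budget of doubles, so that every strategy pair has a
well-defined equity, and solve the resulting game by value iteration.
E = 1.6, the equity after entering, is an assumed figure taken from
elsewhere in this thread, and every double is assumed taken:)

    E, P, Q = 1.6, 11 / 36, 25 / 36    # assumed entry equity and roll odds

    def solve(max_d=3, sweeps=2000):
        # state: (a, b, my_acc, opp_acc) = doubles left for the roller
        # and the opponent, and who currently has cube access
        # (True/True means the cube is centred).
        states = [(a, b, ma, oa)
                  for a in range(max_d + 1) for b in range(max_d + 1)
                  for ma in (False, True) for oa in (False, True)]
        V = {s: 0.0 for s in states}
        for _ in range(sweeps):
            for (a, b, ma, oa) in states:
                v = P * E - Q * V[(b, a, oa, ma)]    # roll without doubling
                if ma and a > 0:                     # double/take, then roll
                    v = max(v, 2 * (P * E - Q * V[(b, a - 1, True, False)]))
                V[(a, b, ma, oa)] = v
        return V

    V = solve()
    for a in range(3):
        for b in range(3):
            print(f"roller doubles <= {a}, opponent <= {b}: "
                  f"equity {V[(a, b, True, True)]:+.3f}")

(With these numbers the familiar values fall out: about .29 when
neither side may double, .58 when only the roller may, and .09 when
only the opponent may.)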


Bob Koca
bobk on FIBS


Chuck Bower

Aug 24, 1997

In article <ieyiuww...@cs20.cs.auckland.ac.nz>,
Gary Wong <ga...@cs.auckland.ac.nz> wrote:

(snip)
>Let's assume it is a no double/take. But in that case your opponent can
>obtain an advantage with the "incorrect" double, because if you take and
>never redouble (as we've assumed is correct), you remain the underdog
>(because he is on roll) and end up losing twice the expected number of
>points (for the double), without being able to take advantage of owning
>the cube. Therefore the player following "correct" play ends up worse off,
>so the initial assumption must have been wrong -- it's not a no double/take.
(snip)

I'm jumping into this discussion in the middle, so my apologies to
Gary if I'm taking this out of context, but the above paragraph as
written seems to have a flaw.

In BG, if you don't use the cube THIS time, that (usually) doesn't mean
you can't use it later. It seems that Gary's argument is only valid if you
can't cube later. That is, suppose one player enters. Can't s/he use the
cube next roll (under the right circumstances, obviously)? You may not
get the optimum value from the cube, but you will get some value. If I'm
right, then this proof crumbles. That doesn't mean that doubling now is
wrong, it only means that the above argument doesn't prove that it is right.


Chuck
bo...@bigbang.astro.indiana.edu
c_ray on FIBS


Gary Wong

Aug 26, 1997

You're right, I didn't address the possibility of waiting now and doubling
later. But I think the result still holds -- the key is that you will be
faced with _exactly the same position_ every move from now until when
somebody enters (when they are assumed to become massive favourites to win;
if your opponent enters you obviously no longer want to double, and if you
enter then you've lost your market by a long way). The only thing that can
change between this turn and your next turn (assuming nobody enters) is the
value of the cube, which makes utterly no difference to your doubling
strategy in an unlimited money game (it might make a difference in a match
game of course, or in a money game if the value of the cube starts approaching
the cash in your pocket, but that's a different story). Therefore, if it will
be correct to double next turn, it is also correct to double this turn. Your
market cannot possibly improve under these conditions but can get worse (if
somebody enters), and so if it is correct to double at all, it's correct to
double now.

Does that sound more reasonable?

Stephen Turner

Aug 26, 1997

Stein Kulseth wrote:
>
>
> It is possible to assign a settlement value to it, .41
> (from the equation V = 2*11/36*1.6 - 2*25/36 * V; this is the
> settlement value at which it doesn't matter which player proposes
> the settlement.)
> If you propose/accept a lower or higher value, you make the position
> more valuable to your opponent than to yourself. Example:
> Settlement value .40
> - your value of the position: .40
> - your opponent's: 2*11/36*1.6 - 2*25/36*.40 = .42
> Settlement value .42
> - your value of the position: 2*11/36*1.6 - 2*25/36*.42 = .39
> - your opponent's: .42
>
> This value can then be used to figure out the equity estimates used
> to define theoretical play.

I haven't yet grasped the meaning of this "settlement value". I guess it's
of the form "if there is to be a settlement, this must be the fair value".
But how do we know there can be a settlement? Maybe the player on roll
might be better to choose to play on rather than settle at any value, or
something?

Let's play a simpler game. We toss a coin until it comes up heads. If it
comes up heads on the first toss, you pay me 1. If it first comes up heads
on the second toss, I pay you 3. If on the 3rd toss, you pay me 9, etc.

Now the equity is undefined again, in the sense that points per game
doesn't converge. But there is still a settlement value, viz.
s = 1/2 + (1/2)(-3s) => s = 1/5
I guess this means that if we must settle, 1/5 is the fair value of the
settlement. But is it at least as good for both players to settle, as to
play on? I don't see how to answer this question except in terms of the
expected gain from playing on, which we've already agreed is undefined.
Does the settlement value have any meaning, in terms of money?
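
(Here is a tiny simulation of that coin game, just to see the
non-convergence; the running average never settles near 1/5, or
anywhere else:)

    import random

    def coin_game():
        payoff = 1                   # heads on the first toss pays +1
        while random.random() < 0.5:
            payoff *= -3             # tails: the next head pays -3 times as much
        return payoff

    random.seed(7)
    total = 0
    for n in range(1, 1_000_001):
        total += coin_game()
        if n % 100_000 == 0:
            print(f"n={n}: running average = {total / n:+.3f}")

    # The averages lurch about as rare huge payoffs land, even though
    # the settlement fixed point s = 1/2 - (3/2)s, i.e. s = 1/5, exists.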

I don't understand the last sentence quoted above either. Surely that
would only be true if we agreed to settle all positions like this, at
the settlement value: but that begs the question again. Or am I missing
something obvious?

--
Stephen Turner sr...@cam.ac.uk http://www.statslab.cam.ac.uk/~sret1/
Statistical Laboratory, 16 Mill Lane, Cambridge, CB2 1SB, England
"As always, it's considered good practice to temporarily disable any
virus detection software prior to installing new software." (Netscape)

Bill Daly

Aug 27, 1997

Thanks for your lucid explanation of this problem, since I got various
messages out of order and was having trouble following it. In the
interest of conserving space, I won't quote your message.

If both players adopt an unlimited doubling strategy, then the equity of
the position can't be calculated at all, since it involves the summation
of a divergent series. In this case, it is meaningless to talk about
comparing that strategy to other strategies, because there is nothing to
go by. A better approach is to assume that there will only be a finite
number of doubles, and in this case, whoever doubles last has the
advantage. If for example you decide to double n times, and your
opponent redoubles each time, then your equity will be negative, thus
you are better off not doubling at all. If you don't double and your
opponent does, your equity will still be positive (about .09), but if
you already own the cube and don't double, your equity will be the
theoretical .29 which you calculated. (Actually, in this case you may
gain somewhat more, because if you enter first but the game then swings
against you, you may still be able to use the cube to salvage the game.)
Thus it seems to me that if the cube is at 1, then the first player
should double, while if the cube is already owned by someone, there
should be no further doubles.

Of course, this is just a superficial look at it, since it is not clear
whether or not it makes sense for a player to conclude that he is more
likely to be the last to double. Suppose that you are prepared to double
up to n times, and your opponent is prepared to double up to m times.
You will gain when m < n and lose otherwise, and I think it is possible
to work out what your equity will be for each possible value of m, so if
you make some assumption about the distribution of m, you should be able
to calculate your expected gain or loss. This should determine whether
or not you double originally, since your expectation must be greater
than .09 to justify doubling at all. After one round of doubles, you
will be faced with the same problem, except that you need to recalculate
the assumed distribution of m, since you now know that m was not 0.
Presumably at some point, one of the players will decide to stop
doubling.

The reason that I think that the chain of doubles must logically
terminate is this. At some point, the cube will be so high that the
poorer (in money) player won't be able to pay off. When this happens,
the richer player has no further reason to double, since he can't gain
anything and can only increase his potential loss. For the converse
reason, the poorer player might as well continue to double, since he has
nothing more to lose and can only increase his potential gain. Thus, in
principle, the poorer player should be the last to double, and the
richer player shouldn't double at all, unless the cube is at 1. The only
time that the situation becomes more complicated is when the two players
have about the same amount of money. In the limiting case, neither
player can lose more than a finite amount of money, say X for one player
and O for the other, and once the cube is sufficiently high, the
expectation becomes a series of the form

11/36*O - 25/36*11/36*X + (25/36)^2*11/36*O - ...
    = 11/36*(O - 25/36*X)*(1 + (25/36)^2 + (25/36)^4 + ...)

which is positive when O > 25/36*X and negative otherwise. This says
that if your opponent's maximum loss is less than 25/36 times your own
maximum loss, then you are the richer player, and should not double
unless the cube is at 1. Actually, for most rational people,
double-or-nothing is a poor gamble if your entire net worth is at stake,
so I would expect that in practice neither player would consider it a
good idea to get into a doubling frenzy.
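
(A quick check of that capped-stakes series, with made-up caps of
X = 16 points for you and O = 10 for your opponent; since O < 25/36*X,
the sum should come out negative:)

    from fractions import Fraction

    O, X = Fraction(10), Fraction(16)
    p, q = Fraction(11, 36), Fraction(25, 36)

    closed = p * (O - q * X) / (1 - q**2)   # closed form of the series
    partial = Fraction(0)
    for k in range(50):                     # terms come in +O/-X pairs
        partial += q**(2 * k) * p * O - q**(2 * k + 1) * p * X
    print(float(closed), float(partial))    # both about -0.66: don't double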

Actually, positions of this type are easily constructed, and don't have
to be all that symmetrical. The basic conditions are:

1. Both players have a single man on the bar.

2. Whoever enters first will have positive equity assuming that the
opponent owns the cube.

Suppose then that X enters with probability Px, and wins Kx when he
enters, and similarly Po and Ko for O. Write Qx = 1-Px and Qo = 1-Po.

3. Qx > 1/2*(Px*Kx)/(Po*Ko) and Qo > 1/2*(Po*Ko)/(Px*Kx).

Then if both players adopt an unlimited doubling strategy, the
expectation for X, with X to move, is

    Px*2*Kx - Qx*Po*4*Ko + Qx*Qo*Px*8*Kx - Qx^2*Qo*Po*16*Ko + ...
      = 2*Px*Kx*(1 + (4*Qx*Qo) + (4*Qx*Qo)^2 + ...)
          - 4*Qx*Po*Ko*(1 + (4*Qx*Qo) + (4*Qx*Qo)^2 + ...)
      = 2*(Px*Kx - 2*Qx*Po*Ko)*(1 + (4*Qx*Qo) + (4*Qx*Qo)^2 + ...)
      = 2*Px*Kx - 4*Qx*(Po*Ko - 2*Qo*Px*Kx)*(1 + (4*Qx*Qo) + (4*Qx*Qo)^2 + ...)

which under the stated conditions is divergent, since 4*Qx*Qo > 1 from
the 3rd condition. The 3rd condition also implies that the partial sums
of the series will alternate in sign, so that the divergence is towards
infinity in both directions, i.e. the expectation will be indeterminate.

Note that it is possible to have 4*Qx*Qo > 1 even when condition 3 is
not true, and in this case the expectation diverges toward infinity in a
single direction, i.e. one player should double and the other should
either drop or not redouble.
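
(Plugging the thread's position into these conditions, with the assumed
symmetric values Px = Po = 11/36 and Kx = Ko = 1.6, confirms that it is
of the two-sided divergent kind:)

    Px = Po = 11 / 36
    Kx = Ko = 1.6
    Qx, Qo = 1 - Px, 1 - Po

    cond3 = (Qx > (Px * Kx) / (2 * Po * Ko) and
             Qo > (Po * Ko) / (2 * Px * Kx))
    print("condition 3 holds:", cond3)          # True: 0.694 > 0.5
    print("4*Qx*Qo =", round(4 * Qx * Qo, 3))   # 1.929 > 1, so divergent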

It seems to me that positions of this type may well occur fairly
frequently in actual play. The first two conditions are common enough.
The third condition tends to imply that Qx and Qo can't be too small,
which suggests that both boards should be nearly closed.

I expect that it is also possible to construct positions of this type
where whoever enters first is likely to lose, since the same reasoning
applies whenever Kx and Ko have the same sign.

I suppose one could call these "Pandora positions", since whoever
creates one is opening Pandora's box.

Regards,

Bill

Stein Kulseth

Aug 28, 1997

|> At some point, the cube will be so high that the
|> poorer (in money) player won't be able to pay off. When this happens,
|> the richer player has no further reason to double, since he can't gain
|> anything and can only increase his potential loss.

Or the richer player should double, as the poorer player would then
have to pass (which means the poorer player should not double).

It seems the position depends not only on your bankroll and your
estimate of your opponent's bankroll, but also on your play ethics and
your estimate of your opponent's play ethics.

|> I suppose one could call these "Pandora positions", since whoever
|> creates one is opening Pandora's box.

A very fitting name indeed.

--
stein....@fou.telenor.no - http://www.fou.telenor.no/fou/ttkust


Bill Daly

Aug 28, 1997

Stein Kulseth wrote:
>
> |> At some point, the cube will be so high that the
> |> poorer (in money) player won't be able to pay off. When this happens,
> |> the richer player has no further reason to double, since he can't gain
> |> anything and can only increase his potential loss.
>
> Or the richer player should double, as the poorer player would then
> have to pass (which means the poorer player should not double).
>
> It seems the position depends not only on your bankroll and your
> estimate of your opponent's bankroll, but also on your play ethics
> and your estimate of your opponent's play ethics.
> ...

I think the only way out of it is to assume that each player has perfect
knowledge, which is the same as saying that both should state in advance
the maximum amount that each is willing to lose, and the analysis then
becomes a match-play analysis. My argument assumes that if a player
loses a game when the cube is higher than his stake, he loses only his
stake, not the full amount of the cube, and at this point he will take
all doubles, beavering if permitted, and will always redouble.

Here is a plausible negotiation strategy. Each player writes down on a
piece of paper the maximum amount he is willing to lose (at least the
current value of the cube), and the players exchange the pieces of
paper, which creates the above scenario. The game then proceeds on the
assumption that neither player can lose more than the amount he wrote
down, no matter how high the cube gets, i.e. converting the game into
match play for this one occasion. So what amount should a player put
down? Clearly both players should put down the current value of the
cube, in effect making the game cubeless. Whoever owns the cube can
implement this without negotiation simply by refusing to double.

Regards,

Bill
