
An open-ended maths/backgammon problem


paulde...@att.net

Aug 20, 2007, 3:59:46 AM
Suppose two players are playing a money game.

Define the equity of player A as being the number of points that A
expects to win.

What can be said about the class of real numbers which can thus be
attained as equities?

Paul Epstein

bob

Aug 20, 2007, 2:10:41 PM


1) Do you mean the equities possible at the start of the game for
one player against another, or the possible equities as the game
progresses? If it is as the game progresses, do we look only at the
end of each turn, or also after the dice are rolled? The second
distinction is important if one player is using a random strategy.
2) Are you assuming theoretically optimal play?

If you do not assume theoretically optimal play then I think the
answer is all real numbers.
Note that positions can arise in which the expected cube level is
undefined (think infinity if you want) if it is turned every roll. An
example would be if each player has a single checker to enter vs. a
five-point board. Suppose that the cube starts at 1 and is turned
every roll. By the time someone has entered, the cube is at 2 with
probability p = 11/36, at 4 with probability (1-p)p, at 8 with
probability (1-p)^2 p, ...
The infinite sum 2p + 4(1-p)p + 8(1-p)^2 p + ... diverges.
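The divergence is easy to check numerically. Here is a short sketch (mine, not from the post) that computes partial sums of the expected cube level with exact fractions; the ratio between consecutive terms is 2(1-p) = 50/36 > 1, so the partial sums grow without bound:

```python
from fractions import Fraction

# Probability of entering against a five-point board on a given roll:
# 11 of the 36 rolls contain the single open number, as in the post.
p = Fraction(11, 36)

def expected_cube_partial(n):
    """Partial sum of E[cube] = sum_{k>=1} 2^k * (1-p)^(k-1) * p
    over the first n terms (cube doubled on every roll until entry)."""
    return sum(2**k * (1 - p)**(k - 1) * p for k in range(1, n + 1))

# Consecutive terms grow by a factor of 2*(1-p) = 50/36 > 1, so the
# series diverges; the partial sums keep growing without bound.
for n in (1, 5, 10, 20):
    print(n, float(expected_cube_partial(n)))
```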

Thus, arbitrarily high cube levels have a chance of being reached
from the beginning of the game. Have one player play poorly enough to
almost always lose and you can thus give the opponent arbitrarily
high equity. To get the full range of real numbers, though, a random
strategy may need to be assumed.
For example, suppose it is the end of the game with the cube on X.
Player A has 2 on the ace point; player B has 1 on the 3 point and
one on the ace point and has just rolled a 31. If B is using a
strategy of playing 3/2/off with probability P and 3/off 1/off with
probability 1-P, then any equity from -X to 2X is possible for A.
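To make the range concrete, here is a small sketch of A's equity as a function of B's mixing probability P (the post's notation). The cube-response probability q, i.e. how often B takes A's redouble in the losing branch, is my own added assumption, not something the post specifies:

```python
# A's equity in the position above.  P is B's probability of playing
# 3/2/off (the post's notation); q is an *assumed* probability that B
# takes A's redouble -- q is my addition, not in the post.
def equity_A(P, q, X=1.0):
    # With prob 1-P, B plays 3/off 1/off and wins at once: A gets -X.
    # With prob P, B plays 3/2/off, leaving a checker on the ace point;
    # A then doubles and bears off both checkers next roll, winning
    # X if B drops and 2X if B takes.
    win = (1 - q) * X + q * (2 * X)
    return (1 - P) * (-X) + P * win

# Sweeping P and q continuously over [0, 1] realises every equity
# in the closed interval [-X, 2X].
```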

Bob Koca

Philippe Michel

Aug 20, 2007, 6:14:11 PM
On 2007-08-20, bob <bob_...@hotmail.com> wrote:
> On Aug 20, 3:59 am, pauldepst...@att.net wrote:
>> Suppose two players are playing a money game.
>>
>> Define the equity of player A as being the number of points that A
>> expects to win.
>>
>> What can be said about the class of real numbers which can thus be
>> attained as equities?
>>
>> Paul Epstein
>
>
> 1) do you mean the equities possible at the start of the game for
> one player against another
> or the possible equities as the game progresses? If it is as the
> game progresses do we look only
> at the end of each turn or after the dice are rolled also? The
> second distinction is important
> if one player is using a random strategy.
> 2) are you assuming theoretically optimal play
>
> If you do not assume theoretical optimal play then I think the answer
> is all real numbers.

That seems impossible. The number of games (or moves) is countable, so the
possible equities have to be as well. Don't you mean all rational numbers ?

> To get the full range of real numbers a random strategy may need to be
> assumed though.
> For example suppose it is the end of the game with the cube on X.
> Player A has 2 on the ace point and player B has 1 on the 3 point and
> one on the ace point and has just rolled a 31. If B is using a
> strategy of play 3/2/off with probability P and 3/off 1/off with
> probability 1-P

But can you play 3/2/off 1/pi of the time, for instance ?


bob

Aug 20, 2007, 8:14:11 PM
On Aug 20, 6:24 pm, Philippe Michel

<philippe.mich...@tele1plus1.fr.invalid> wrote:
> > If you do not assume theoretical optimal play then I think the answer
> > is all real numbers.
>
> That seems impossible. The games (or moves) are countable, so the possible
> equities have to be as well. Don't you mean all rational numbers ?
>

> But can you play 3/2/off 1/pi of the time, for instance ?
>

If a random strategy is not allowed then I agree that it would not
be all real numbers. I don't think
though that all rational numbers would be possible.

There is nothing about probability theory that precludes irrational
numbers. Here is one practical way to perform a task that gives a
success with probability 1/pi. Write the decimal expansion of 1/pi,
in this case .318309... Draw a digit randomly from 0 to 9. If it is
less than 3, stop and declare a success. If it is 4 or higher, stop
and declare a failure. If it is a 3, continue by drawing a second
value. If that is a 0, stop and declare a success. If it is 2 or
higher, stop and declare a failure. Only if it equals the second
digit in the expansion (in this case 1) is the outcome still in
doubt. Continue on until the outcome is decided one way or the other.
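The procedure just described can be sketched directly in code. The helper names are mine, and the digit extraction below uses float arithmetic, so it is illustrative only (good for the few leading digits the procedure almost always needs):

```python
import random

def digits_of(x, n=30):
    """First n decimal digits of x in (0, 1).  Uses float arithmetic,
    so this is only a sketch -- reliable for a few leading digits."""
    out = []
    for _ in range(n):
        x *= 10
        d = int(x)
        out.append(d)
        x -= d
    return out

def bernoulli_from_digits(p_digits):
    """Compare uniform random digits against the decimal digits of p
    until the outcome is decided, as described in the post."""
    for d in p_digits:
        r = random.randrange(10)       # uniform digit 0..9
        if r < d:
            return True                # success decided
        if r > d:
            return False               # failure decided
        # r == d: still in doubt, move on to the next digit
    return False                       # ran out of digits (sketch only)

# e.g. success with probability approximately 1/pi:
# bernoulli_from_digits(digits_of(1 / 3.141592653589793))
```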

Bob Koca


Havard Raddum

Aug 20, 2007, 8:41:44 PM
Philippe Michel wrote:
> On 2007-08-20, bob <bob_...@hotmail.com> wrote:
>> On Aug 20, 3:59 am, pauldepst...@att.net wrote:
>>> Suppose two players are playing a money game.
>>>
>>> Define the equity of player A as being the number of points that A
>>> expects to win.
>>>
>>> What can be said about the class of real numbers which can thus be
>>> attained as equities?
>>>
>>> Paul Epstein
>>
>> 1) do you mean the equities possible at the start of the game for
>> one player against another
>> or the possible equities as the game progresses? If it is as the
>> game progresses do we look only
>> at the end of each turn or after the dice are rolled also? The
>> second distinction is important
>> if one player is using a random strategy.
>> 2) are you assuming theoretically optimal play
>>
>> If you do not assume theoretical optimal play then I think the answer
>> is all real numbers.
>
> That seems impossible. The games (or moves) are countable, so the possible
> equities have to be as well. Don't you mean all rational numbers ?
>

Disregarding the cube, the number of possible positions in a backgammon
game is finite, and there is one equity number for each position. Hence
the set of equities that can be obtained with a centered cube is finite,
and resides in the interval [-3,3]. There is no upper bound on the
number of moves in a backgammon game, so the set of possible cube values
need not be finite, and hence the set of possible equities involving
the cube may be infinite.

In all cases, it is safe to say that there are rational numbers that are
not obtained as the equity of a backgammon position. Since there are
only finitely many equities with a centered cube, there exists a
smallest positive equity e. None of the rational numbers in (0,e) can
then be obtained as the equity of a backgammon position.

David C. Ullrich

Aug 21, 2007, 5:24:06 AM
On Mon, 20 Aug 2007 11:10:41 -0700, bob <bob_...@hotmail.com> wrote:

>On Aug 20, 3:59 am, pauldepst...@att.net wrote:
>> Suppose two players are playing a money game.
>>
>> Define the equity of player A as being the number of points that A
>> expects to win.
>>
>> What can be said about the class of real numbers which can thus be
>> attained as equities?
>>
>> Paul Epstein
>
>
> 1) do you mean the equities possible at the start of the game for
>one player against another
> or the possible equities as the game progresses? If it is as the
>game progresses do we look only
>at the end of each turn or after the dice are rolled also? The
>second distinction is important
> if one player is using a random strategy.
> 2) are you assuming theoretically optimal play
>
>If you do not assume theoretical optimal play then I think the answer
>is all real numbers.

The definition of "equity" _assumes_ optimal play on both sides;
the equity is exactly the expected value assuming that.

If we do assume optimal play then the equity has to lie between
-3c and 3c, where c is the current value of the cube.

Proof: Say we're talking about X's equity. X has a strategy that
guarantees he cannot lose more than 3c points, namely

S: Make plays at random and drop all doubles.

If X plays strategy S then his expected win is >= -3c, since
in fact he can't possibly lose more than 3c points. Now if
OS is the optimal strategy then by definition the equity
is X's expected winnings if he plays strategy OS,
and saying OS is optimal means this is >= his expected
winnings with any other strategy, in particular it's
>= -3c, his expected winnings if he plays strategy S.

Similarly the equity is <= 3c, by considering the
case where O plays strategy S. QED.

>Note that positions can arise in which the expected cube level is
>undefined (think infinity if you want)
>if it is turned every roll. An example would be if each player has a
>single checker to enter vs. a five point board.
>Suppose that the cube starts at 1 and it is turned every roll. By
>the time someone has entered the cube is at
>2 with probability p=11/36, is at 4 with probability (1-p)p, is at 8
>with probability (1-p)^2(p), ...
>The infinite sum 2p+4(1-p)p+8(1-p)^2p+...diverges.
>
>Thus, arbitrarily high cube levels have a chance at being reached from
>the beginning of the
>game. Have one player play poorly enough to almost always lose and you
>can thus make the opponent have arbitrarily high equity.
>To get the full range of real numbers a random strategy may need to be
>assumed though.
>For example suppose it is the end of the game with the cube on X.
>Player A has 2 on the ace point and player B has 1 on the 3 point and
>one on the ace point and has just rolled a 31. If B is using a
>strategy of play 3/2/off with probability P and 3/off 1/off with
>probability 1-P
>then any equity from -X to 2 X is possible for A.
>
>Bob Koca


************************

David C. Ullrich

paulde...@att.net

Aug 21, 2007, 5:31:10 AM
On Aug 21, 5:24 pm, David C. Ullrich <ullr...@math.okstate.edu> wrote:
> David C. Ullrich

A nice piece of mathematics, but it says nothing about the class I
asked for, since every positive real is <= 3 times some power of 2.

Paul Epstein

paulde...@att.net

Aug 21, 2007, 5:48:17 AM
On Aug 21, 8:41 am, Havard Raddum <h.rad...@isi.qut.edu.au> wrote:
> Philippe Michel wrote:
> > But can you play 3/2/off 1/pi of the time, for instance ?

Good argument.

It should be noted that the smallest positive equity could well arise
on a non-centered cube. The point is that there are only a finite
number of cube values, namely 1 and 2, which could give rise to the
smallest positive equity.

Paul Epstein

bob

Aug 21, 2007, 9:54:45 AM
On Aug 21, 5:24 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
>
> The definition of "equity" _assumes_ optimal play on both sides;
> the equity is exactly the expected value assuming that.
>
> If we do assume optimal play then the equity has to lie between
> -3c and 3c, where c is the current value of the cube.


Your proof falls short in that you would need to assume that the
equities actually exist. Suppose there is a position in which both
players are on the bar and the first to roll an entering 6 obtains a
huge advantage. It may be correct for each player to double every
roll until that 6 is obtained. In that case the expected value of the
game does not exist. I doubt that such a position could actually
arise, but how to prove it can't? If you change your statement to "If
the equity exists, then it has to lie between -3c and 3c ..." then
the proof is o.k.

Bob Koca

bob

Aug 21, 2007, 10:40:39 AM
On Aug 21, 5:48 am, pauldepst...@att.net wrote:

>
> It should be noted that the smallest positive equity could well arise
> on a non-centered cube. The point is that there are only a finite
> number of cube values, namely 1 and 2, which could give rise to the
> smallest positive equity.
>

Since you are assuming optimal play, I think that instead of
considering all positions reachable by legal play one should instead
consider only positions reachable by optimal play. If one does that,
then a position reachable with the cube on 4 might not be reachable
with the cube only on 1 or 2, so your statement "namely 1 and 2"
would need to be modified.

Bob Koca

David C. Ullrich

Aug 22, 2007, 7:18:12 AM

Um. If you know all the equities for the cube in the middle
and all the possible equities for the cube = 2 then you
know all the possible equities for any c. (If the cube
is not in the middle then the equities for any c > 2
are exactly c/2 times the equities for c = 2.)

>Paul Epstein


************************

David C. Ullrich

David C. Ullrich

Aug 22, 2007, 7:26:04 AM
On Tue, 21 Aug 2007 06:54:45 -0700, bob <bob_...@hotmail.com> wrote:

>On Aug 21, 5:24 am, David C. Ullrich <ullr...@math.okstate.edu> wrote:
>>
>> The definition of "equity" _assumes_ optimal play on both sides;
>> the equity is exactly the expected value assuming that.
>>
>> If we do assume optimal play then the equity has to lie between
>> -3c and 3c, where c is the current value of the cube.
>
>
> Your proof falls short in that you would need to assume that the
>equities actually exist.

Yes, that's tricky. Usually around here people simply assume
that there is such a thing as a well-defined equity - I
decided not to worry about that.

If one tries to actually _define_ the equity there are problems
that arise a lot sooner than the sort of situation you describe
below:

1. On the one hand, the equity is defined as the expectation,
assuming optimal play by both players.

2. On the other hand, optimal play is play which maximizes
your equity.

A teensy problem there. In a non-contact position you can
in fact use (1) and (2) to calculate equities and optimal
play recursively (the recursion terminates because when
the game's over we know what the equity is: it's however
much you won.) In contact positions there's a bit
of a problem.

It's actually possible to use a thing called the
"Brouwer fixed-point theorem" to show that there is
a function E mapping positions to real numbers that
has all the properties that the equity should have.
That doesn't quite prove that it's the "real" equity,
however. I should write this up sometime...

>Suppose there is a position in which both
>players
>are on the bar and the first to roll an entering 6 obtains a huge
>advantage.
>It may be correct for each player to double every roll until that 6 is
>obtained.
>In that case the expecetd value of the game does not exist. I doubt
>that such a position
>could actually arise but how to prove it can't? If you change your
>statement to "If the
>equity exists, then it has to lie between -3c and 3c ..." then the
>proof is o.k..
>
>Bob Koca


************************

David C. Ullrich

Maik Stiebler

Aug 22, 2007, 8:36:42 AM
On 22 Aug., 13:26, David C. Ullrich <ullr...@math.okstate.edu> wrote:

> If one tries to actually _define_ the equity there are problems
> that arise a lot sooner than the sort of situation you describe
> below:
>
> 1. On the one hand, the equity is defined as the expectation,
> assuming optimal play by both players.
>
> 2. On the other hand, optimal play is play which maximizes
> your equity.
>
> A teensy problem there.

I'm not a mathematician, but isn't that exactly what von Neumann's
Minimax Theorem is about? I.e., sloppily worded: in finite, zero-sum,
two-player games (like backgammon with finite cube values) there
exist strategies for both players ('optimal play') that allow them to
realise at least the minimax value ('equity') of the position?

David C. Ullrich

Aug 23, 2007, 8:36:21 AM

I _am_ a mathematician, but I don't really know anything
about game theory. However, I'm pretty sure that that theorem
doesn't really apply here, because there's no limit on the
length of the game, and hence no limit on the size of the
cube.

Ah. At

http://en.wikipedia.org/wiki/Expectiminimax_tree

there's a nice description of a minimax algorithm
for a game including dice. The algorithm only
works as presented there if the tree is finite
(I pointed out in my OP that there's no problem
in a non-contact position - the algorithm on
that page is exactly what I had in mind in
that case). Things would not be so bad with
an infinite tree with bounded payoffs - you
could (conceptually) run that algorithm for
an infinite length of time and expect that
things would converge. But as Bob pointed
out, since there's no limit on the size of
the cube we don't have any guarantee that that
would work in backgammon - there very well
could be divergent sums that arise.
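For a finite tree, the algorithm described on that page is easy to sketch. The node encoding below is my own assumption (the page doesn't prescribe one): a bare number is a terminal payoff; otherwise a tagged tuple marks max, min, and chance (dice) nodes:

```python
# Expectiminimax over a finite game tree with chance (dice) nodes.
# Node encoding (my own, for this sketch): a bare number is a terminal
# payoff; otherwise ('max', children), ('min', children), or
# ('chance', [(probability, child), ...]).

def expectiminimax(node):
    if isinstance(node, (int, float)):
        return float(node)                   # terminal payoff
    kind, children = node
    if kind == 'max':                        # player to move: best child
        return max(expectiminimax(c) for c in children)
    if kind == 'min':                        # opponent to move: worst child
        return min(expectiminimax(c) for c in children)
    if kind == 'chance':                     # dice roll: probability-weighted
        return sum(p * expectiminimax(c) for p, c in children)
    raise ValueError(f"unknown node kind: {kind}")

# A toy two-ply game: a 50/50 roll, then a choice for the mover.
toy = ('chance', [(0.5, ('max', [1, -2])),
                  (0.5, ('max', [('min', [3, 0]), -1]))])
```

With bounded payoffs this converges even conceptually on an infinite tree, as argued above; the unbounded cube is exactly what breaks that guarantee.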

You said "backgammon with finite cube values".
If you really meant _bounded_ cube values
then maybe yes, although the game's still
not quite a "finite game". But the cube
values in actual backgammon are _not_
bounded (although they're finite).


************************

David C. Ullrich

Maik Stiebler

Aug 23, 2007, 10:53:20 AM
On 23 Aug., 14:36, David C. Ullrich <ullr...@math.okstate.edu> wrote:

> I'm pretty sure that that theorem
> doesn't really apply here, because there's no limit on the
> llength of the game, and hence no limit on the size of the
> cube.

You're right, I meant to restrict my comment to backgammon with
*bounded* cube values. Bob pointed out the sort of problems that
can arise with unbounded cube values, and when you wrote that 'there
are problems that arise a lot sooner' than the sort of
situation he described, I assumed you were thinking of problems
that arise in the bounded case, and that's why I thought my comment
to be relevant, even though it has nothing to do with actual
moneygame backgammon.

I wonder if 'no limit on the length of the game' is a problem.
I thought the Minimax Theorem applied for bounded-cube-value
backgammon because it has a finite number of possible game states
and ends with probability 1, but I do not have a reference that
explicitly supports this idea.

David C. Ullrich

Aug 24, 2007, 6:57:04 AM

It seems likely to me as well that there's no real problem in
that case - I said "Things would not be so bad with
an infinite tree with bounded payoffs - you
could (conceptually) run that algorithm for
an infinite length of time and expect that
things would converge".

Of course we also need a proof that the game does
end with probability one. Seems to me that we
need a proof of that _regardless_ of the strategy
being used by both players, for example if they're
both playing to make the game last as long as
possible instead of to win. But even then it
seems likely that the game is almost sure to end;
surely in any position there's a sequence of dice
rolls that will force the game to end regardless
of how the players play?

************************

David C. Ullrich

paulde...@att.net

Aug 24, 2007, 7:30:16 AM
> David C. Ullrich

Douglas Zare has written a proof of exactly that assertion:
backgammon ends with probability 1. I'm a bit busy/lazy to pull up
the link, but it should be an easy Google search now that I've given
the author.

The assertion holds even if the players are trying to max the length
of the game, rather than playing well.

David states correctly that there must always be a finite game-ending
sequence from every position. However, I don't know an easy way to
prove this, and Zare's proof is ingenious.

As well as the existence of a finite game-ending sequence, it is
important to realise that the length of such sequences is bounded
because of the finiteness of the number of backgammon positions. [I
know that's a very obvious point for David, but maybe non-obvious
enough to some to be worth remarking on.]

David mentioned Brouwer's fixed point theorem. Interestingly, this
has applications to the game Hex and maybe to other areas of
recreational mathematics.

Paul Epstein


bob

Aug 24, 2007, 12:49:35 PM
On Aug 24, 7:30 am, pauldepst...@att.net wrote:
>
> Douglas Zare has written a proof of exactly that assertion:
> backgammon ends with probability 1. I'm a bit busy/lazy to pull up
> the link but it should be an easy google now I've given the author.
>
> The assertion holds even if the players are trying to max the length
> of the game, rather than playing well.
>
> David states correctly that there must always be a finite game-ending
> sequence from every position. However, I don't know an easy way to
> prove this, and Zare's proof is ingenious.
>
> > Paul Epstein

It wasn't Zare's proof but rather his writeup of someone else's
proof. I posed this problem on r.g.b. in 1994 and had some good
progress, and C. T. McMullen finished it off. The entire thread is
http://groups.google.com/group/rec.games.backgammon/browse_thread/thread/e0e6e81acbdd9a30/5cea49f4b39294aa?lnk=gst&q=backgammon+terminates+with&rnum=1#5cea49f4b39294aa

Zare's useful writeup of it that explains a few points in more detail
is at http://www.bkgm.com/articles/Zare/BackgammonEnds.html

Bob Koca


paulde...@att.net

Aug 25, 2007, 8:28:49 PM
On Aug 25, 12:49 am, bob <bob_k...@hotmail.com> wrote:
> On Aug 24, 7:30 am, pauldepst...@att.net wrote:
>
>
>
> > Douglas Zare has written a proof of exactly that assertion:
> > backgammon ends with probability 1. I'm a bit busy/lazy to pull up
> > the link but it should be an easy google now I've given the author.
>
> > The assertion holds even if the players are trying to max the length
> > of the game, rather than playing well.
>
> > David states correctly that there must always be a finite game-ending
> > sequence from every position. However, I don't know an easy way to
> > prove this, and Zare's proof is ingenious.
>
> > > Paul Epstein
>
> It wasn't Zare's proof but rather his writeup of someone else's
> proof. I posed this problem on r.g.b. in 1994 and had some good
> progress and C. T. McMullen finished it off. The entire thread is http://groups.google.com/group/rec.games.backgammon/browse_thread/thr...

>
> Zare's useful writeup of it that explains a few points in more detail
> is at http://www.bkgm.com/articles/Zare/BackgammonEnds.html
>
> Bob Koca

Interesting.

Extra background if anyone's interested: the C in C. T. McMullen
stands for Curt, and he is a Fields Medallist. Plenty more on Google,
I'm sure.

Paul Epstein

paulde...@att.net

Aug 25, 2007, 8:30:51 PM
> Paul Epstein

I can't resist minor corrections and tangential comments. His first
name is officially Curtis but everyone who talks about him seems to
call him Curt.

Paul Epstein

David C. Ullrich

Aug 26, 2007, 6:40:49 AM
On Fri, 24 Aug 2007 09:49:35 -0700, bob <bob_...@hotmail.com> wrote:

>On Aug 24, 7:30 am, pauldepst...@att.net wrote:
>>
>> Douglas Zare has written a proof of exactly that assertion:
>> backgammon ends with probability 1. I'm a bit busy/lazy to pull up
>> the link but it should be an easy google now I've given the author.
>>
>> The assertion holds even if the players are trying to max the length
>> of the game, rather than playing well.
>>
>> David states correctly that there must always be a finite game-ending
>> sequence from every position. However, I don't know an easy way to
>> prove this, and Zare's proof is ingenious.
>>
>> > Paul Epstein
>
> It wasn't Zare's proof but rather his writeup of someone elses
>proof. I posed this problem on r.g.b. in 1994 and had some good
>progress and C. T. McMullen finished it off. The entire thread is
>http://groups.google.com/group/rec.games.backgammon/browse_thread/thread/e0e6e81acbdd9a30/5cea49f4b39294aa?lnk=gst&q=backgammon+terminates+with&rnum=1#5cea49f4b39294aa

Heh - that's very clever! I would have thought the argument
would be a lot more complicated.

>Zare's useful writeup of it that explains a few points in more detail
>is at http://www.bkgm.com/articles/Zare/BackgammonEnds.html
>
>Bob Koca
>


************************

David C. Ullrich

bob

Aug 26, 2007, 5:42:01 PM
On Aug 25, 8:30 pm, pauldepst...@att.net wrote:
>
> I can't resist minor corrections and tangential comments. His first
> name is officially Curtis but everyone who talks about him seems to
> call him Curt.
>

And even more tangential is that I had a friend named Curtis from
Brooklyn and my
friends and I thought it was hilarious to call him Coitus.

Bob Koca


paulde...@att.net

Aug 27, 2007, 5:08:46 AM

Was his name "Curtis" or "Curtis from Brooklyn"?
Also, what/who exactly came from Brooklyn -- Curtis or his name (or
both)?


Peter Schneider

Aug 31, 2007, 4:53:51 PM
Hi,

sorry for being late with this contribution to the question of what
can be said about the class of numbers that can arise as equities in
a backgammon game.

"bob" <bob_...@hotmail.com> wrote in
news:1187655251.8...@50g2000hsm.googlegroups.com...


> On Aug 20, 6:24 pm, Philippe Michel
> <philippe.mich...@tele1plus1.fr.invalid> wrote:
>> > If you do not assume theoretical optimal play then I think the answer
>> > is all real numbers.
>>
>> That seems impossible. The games (or moves) are countable, so the
>> possible
>> equities have to be as well. Don't you mean all rational numbers ?
>>
>
>> But can you play 3/2/off 1/pi of the time, for instance ?
>>
>
> If a random strategy is not allowed then I agree that it would not
> be all real numbers. I don't think
> though that all rational numbers would be possible.
>
> There is nothing about probability theory that precludes irrational

> numbers. [...]

Nothing in probability theory, but I think in backgammon.

Let's consider cubeless equity first. Elsewhere in this thread it was
mentioned that backgammon ends with probability 1 (i.e. infinite games are
impossible). This means that we have a tree starting at each position whose
leaves are last positions of the game. Each traversal of the tree
constitutes a possible game from the given position. (In Snowie 3-ply/gnubg
2-ply, this tree is completely computed and evaluated for any position from
where the game must end in no more than 2 half-moves, assuming perfect
play.) Each position's equity is the sum of all child nodes' equities,
weighted with their respective probabilities. All positions from which the
game must end after the next roll and half-move have an exactly computable
equity; all others can be recursively reduced to such positions by
following the tree branches. Since the probabilities, and therefore the
equities involved, are sums of multiples of powers of 1/36, they are
rational.
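For a genuinely finite tree, this recursion does produce only rational values. Here is a minimal sketch with exact rational arithmetic; the tree, its payoffs, and the probability used are invented for illustration:

```python
from fractions import Fraction

# The recursion above, run with exact rational arithmetic on a *finite*
# tree (an invented one).  A node is either an integer payoff or a list
# of (probability, child) pairs whose probabilities are multiples of 1/36.

def equity(node):
    if isinstance(node, int):
        return Fraction(node)          # terminal: integer payoff
    total = Fraction(0)
    for prob, child in node:
        total += prob * equity(child)  # probability-weighted sum of children
    return total

p_in = Fraction(11, 36)                # e.g. probability of entering
pos = [(p_in, 1),                      # enter and win
       (1 - p_in, [(p_in, -1),        # dance; opponent enters, we lose
                   (1 - p_in, 1)])]   # both dance... (truncated tree)

# Every value produced is a Fraction, i.e. rational.
```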

Now if all cubeless equities are rational, what about cube vigorish? At
first glance one would think that it is "arbitrary", i.e. usually
irrational, but I think that a similar reasoning as above makes it
plausible that only rational numbers arise: cube vigorish is the combined
equity gain, over the remaining course of the game, from (a) doubles one
is able to give because of one's cube possession, plus (b) doubles the
opponent cannot give because of a lack of cube possession. Immediately
before the last-but-one roll, cube vigorish can be exactly computed from
the possible resulting positions that would fulfill (a) for the opponent
and the probabilities of the rolls that lead to them, making the vigorish
rational.

Again, recursion should show that cube vigorish is rational for all
previous positions as well, since it is always the weighted sum of the
cube vigorish at the next ply depth.

Since cubeful equity is cubeless equity plus or minus cube vigorish times
cube value, it will also be rational.

Peter
aka the juggler on FIBS


David C. Ullrich

Sep 1, 2007, 8:40:29 AM

Yes, but it doesn't mean that this tree is finite. In fact
it doesn't even imply that there are no infinitely long branches in
the tree, just that the infinitely long branches will be traversed
with probability zero.

So pretend there are no infinite branches. It's still true that the
tree is infinite, and an infinite sum of rational numbers can
be irrational.

>Each traversal of the tree
>constitutes a possible game from the given position. (In Snowie 3ply/gnub
>2ply, this tree is completely computed and evaluated for any position from
>where the game must end in no more than 2 half moves, assuming perfect
>play.)

Alas for a general position there is no n such that the game must
end in n moves.

>Each position's equity is the sum of all child nodes' equities,
>weighted with their respective probabilities. All positions from which the
>game must end after the next roll and half-move have an exactly computable
>equity; all others can be recursively reduced to such positions by
>following the tree branches. Since the probabilities, and therefore the
>equities involved, are sums of multiples of powers of 1/36, they are rational.
>
>Now if all cubeless equities are rational, what about cube vigorish? At
>first glance one would think that it is "arbitrary", i.e. usually
>irrational, but I think that a similar reasoning as above makes it plausible
>to assume only rational numbers: cube vigorish is the combined equity gain
>during the remaining course of the game from (a) doubles one is able to give
>because of one's cube possession plus (b) doubles the opponent cannot give
>because of a lack of cube possession. Immediately before the last but one
>roll cube vigorish can be exactly computed from the possible resulting
>positions that would fulfill (a) for the opponent and the probabilities of the
>rolls that lead to them, making the vigorish rational.
>
>Again recursion should show that cube vigorish is rational for all previous
>positions as well since it is always the weighted sum of cube vigorish of
>the next play depth.
>
>Since cubeful equity is cubeless equity plus or minus cube vigorish times
>cube value, it will also be rational.
>
>Peter
>aka the juggler on FIBS
>


************************

David C. Ullrich

Peter Schneider

Sep 1, 2007, 12:18:58 PM
Hi David,

"David C. Ullrich" <ull...@math.okstate.edu> wrote in
news:d8nid3t6kl4k3vqqo...@4ax.com...


> On Fri, 31 Aug 2007 22:53:51 +0200, "Peter Schneider"
> <schne...@gmx--dot--net.removethis> wrote:
>

>>Let's consider cubeless equity first. Elsewhere in this thread it was
>>mentioned that backgammon ends with probability 1 (i.e. infinite games are
>>impossible). This means that we have a tree starting at each position
>>whose
>>leaves are last positions of the game.
>
> Yes, but it doesn't mean that this tree is finite. In fact
> it doesn't even imply that there are no infinitely long branches in
> the tree, just that the infinitely long branches will be traversed
> with probability zero.
>
> So pretend there are no infinite branches. It's still true that the
> tree is infinite, and an infinite sum of rational numbers can
> be irrational.

Ah, I see that I really misunderstood Zare's article when I glanced over it,
thinking he gave an upper limit on game lengths (which indeed seems
impossible on second thought). Yes, if we have loops and thus infinite sums,
we may get irrational numbers.

Best regards,
Peter aka the juggler


Oliver Riordan

Sep 2, 2007, 7:40:34 AM

Hi Peter (and David).

On Sat, 1 Sep 2007, Peter Schneider wrote:

> Ah, I see that I really misunderstood Zare's article when I glanced over
> it, thinking he gave an upper limit for game lengths (which seems indeed
> impossible at second thought). Yes, if we have loops and thus infinite
> sums we may have irrational numbers.

In fact, you are right that equities will be rational if they exist
(more precisely, depending on what you take the definition to be, if a
set of equities exists, then a rational set exists); however, as David
pointed out, the simple reason you gave is wrong. Here is a somewhat
more involved argument - I hope it's correct!

Let's number the possible positions 1,2,...N. Here a position is
always from the point of view of the player on roll, and the position
codes where the pieces are and also where the cube is, but not the
cube value (equities will be proportional to cube value), so there's a
finite set of positions.

Suppose for now that the usual approach to defining equities (max over
your strategy of min over his strategy of expected outcome) actually
works, because the expectation always exists. (This would be true with
no cube, since the game ends with probability 1, and any bounded
random variable has an expectation. In other words, the sums involved
all converge.) Let x_i be the expectation of position i for the player
on roll, and let x be the equity vector (x_1,x_2,..,x_N). Then the
following is true:

(*)
There exists a strategy S (choice of what to do in all situations) with
the following two properties:

(1) the equity vector is consistent with S in the obvious sense, i.e.,
for every i, either
(a) x_i=1,2 or 3, if the game is over, as appropriate
(b) x_i=1, if S says it's a double/drop
(c) x_i=-(sum of 36 terms x_j)/36, if S says no double or player on
roll can't double
[some terms repeat of course, and the - sign is because the
player on roll changes]
(d) x_i=-2*(sum of 36 terms x_j)/36, if S says double/take

and

(2) the strategy S is (one of the) optimal strategies according to the
equities x_i.
(*)

[If the usual definition doesn't work, you can take (*) as the
definition of equity, though in this case it would be better to use a
different term, calling a solution to (*) a set of consistent
equities, say. One can rewrite (*) in the form x=F(x) for a
continuous function F on the cube [-3,3]^N, so it's true that (*) has
at least one solution by the Brouwer Fixed Point Theorem. This
solution may have stupid properties though, such as decreasing if you
make the position `better'. I think I had an example of this, but have
forgotten it.]

Writing x as a column vector, condition (1) above is a matrix
condition, that x=Ax+b, where A is a matrix with rational entries (all
multiples of 1/36), and b is a vector with rational coordinates (all
0,1,2 or 3). Both A and b depend on the strategy S, of course.
Rearranging, (I-A)x=b, so x=(I-A)^{-1}b. Since I-A is a rational
matrix, so is its inverse, so the equities are rational.
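To make this linear-algebra step concrete, here is a minimal sketch over exact rationals. The two-position game and its win probabilities are hypothetical, chosen only so that I-A is invertible: from position i the roller wins at once with probability p_i, otherwise the opponent is on roll at the other position (hence the sign flip):

```python
from fractions import Fraction as F

def solve(A, b):
    """Naive Gaussian elimination over the rationals, for tiny systems.

    Assumes the matrix is invertible (raises StopIteration otherwise)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]  # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Hypothetical 2-position game (my own toy numbers):
#   x1 = p1 - (1 - p1) x2 ;  x2 = p2 - (1 - p2) x1,  i.e. (I - A) x = b
p1, p2 = F(11, 36), F(19, 36)
I_minus_A = [[F(1), 1 - p1], [1 - p2, F(1)]]
b = [p1, p2]
x1, x2 = solve(I_minus_A, b)
print(x1, x2)  # both exactly rational, as the argument predicts
```
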

Of course, there's a snag - this assumes that (I-A) is invertible. In
general, x=Ax+b will have lots of solutions, some of which will be
rational, and some not. You can't just pick any rational solution, as
it may not satisfy (2) above. Fortunately, with the strategy S fixed,
(2) is just a set of linear inequalities in the x_i with rational
coefficients (with rational constant terms as well). So the set of
vectors x satisfying (1) and (2) is given by a set of rational linear
inequalities, i.e., it's a rational polytope. If this polytope is
non-empty, it certainly contains a rational point: any of its corners
(extreme points) will do.

So if consistent equities exist for some strategy S, then rational
consistent equities exist for the same S.

In fact, if equities can be defined as the expected outcome for some
strategy S (some strategies might have finite expected outcomes,
others not) then the equation x=Ax+b does have a unique solution for
this S, given by the expected outcome, so the equities are rational.
[Playing according to S, existence of the expectation says that as k
tends to infinity, the total positive contribution to the expectation
from wins after time k tends to zero, and the same for negative
contribution. It follows that if you start with any vector x^0 and
iterate by setting x^{n+1}=Ax^n+b, then the iteration converges to the
expectation. But starting with any solution to x=Ax+b, nothing
changes, so any solution is equal to the expectation.]
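The bracketed iteration argument can be sketched as follows, again with a hypothetical two-position game of my own (roller at position i wins at once with probability p_i, else the opponent rolls from the other position); the iterates of x^{n+1} = Ax^n + b approach the unique solution of x = Ax + b:

```python
from fractions import Fraction as F

p1, p2 = F(11, 36), F(19, 36)  # made-up immediate-win probabilities

def step(x1, x2):
    # one application of x -> Ax + b, written out componentwise;
    # the minus signs reflect the change of player on roll
    return p1 - (1 - p1) * x2, p2 - (1 - p2) * x1

x1, x2 = F(0), F(0)   # start anywhere; here the map is a contraction
for _ in range(100):
    x1, x2 = step(x1, x2)

# after enough steps the iterates are numerically indistinguishable
# from the fixed point of x = Ax + b
print(float(x1), float(x2))
```
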


Oliver
