
Propagation of error in take point calculation


dbro...@my-dejanews.com

Feb 19, 1999
I tried to post this once before, and all the line lengths came out wrong. I
think this version should be okay.

Objective: Show how small errors in a match equity table are transmitted
into a take point calculation, both empirically and analytically.

Suppose the match score is (-7, -7). Using the Woolsey MET, the last roll
take point calculation would be:

Drop 44%
Take/Lose 37%
Take/Win 63%
Take point = 100*(44-37)/(63-37)=26.9%

It would not surprise me if the three numbers pulled out of the table were
off by up to 1% each (this might even be optimistic). Consider the two worst
case scenarios:

Drop 43%
Take/Lose 38%
Take/Win 64%
Take point = 100*(43-38)/(64-38)=19.2%

Drop 45%
Take/Lose 36%
Take/Win 62%
Take point = 100*(45-36)/(62-36)=34.6%

So the "nominal" take point is 27%, but the "real" take point might be as low
as 19% or as high as 35%, for a range of about 16%. That seems like a lot of
error propagation, and that's only if the tabled numbers are within 1%. If
they might be further off, the range would be even larger.
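
[A quick way to check the arithmetic above is to brute-force the corner cases.
The formula and the +/-1% perturbation are taken straight from the post; the
Python wrapper, the function name, and the exhaustive search over the eight
corners are just an illustrative sketch.]

from itertools import product

def take_point(drop, take_lose, take_win):
    # Last-roll take point from three match equity entries, all in %.
    return 100.0 * (drop - take_lose) / (take_win - take_lose)

# Nominal Woolsey MET entries for the (-7, -7) last-roll case.
nominal = (44.0, 37.0, 63.0)
print(take_point(*nominal))                      # ~26.9

# Perturb each entry by +/-1% and look at the extreme take points.
err = 1.0
corners = [take_point(nominal[0] + a, nominal[1] + b, nominal[2] + c)
           for a, b, c in product((-err, +err), repeat=3)]
print(min(corners), max(corners))                # ~19.2 and ~34.6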

Sometimes we luck out, and one or more of the numbers used in the calculation
are 0%, 50%, or 100%. These numbers should be correct to lots of decimal
places. :) So, the range on these calculations would be less, perhaps much
less.

Of course the above is an empirical result. I achieved an analytical result
by expanding the take point formula in a first order Taylor series around the
"nominal" numbers, and assuming that the errors in the "nominal" numbers are
jointly independent. The T.S. gave a linear function in three variables of
the form:

(Drop gradient)*(Drop deviation) +
(Take/lose gradient)*(Take/lose deviation) +
(Take/win gradient)*(Take/win deviation)

Gradients are found by differentiating the take point formula with respect to
"nominal Drop", "nominal Take/lose", and "nominal Take/win". Deviations are
of the form "true Drop - nominal Drop", "true Take/lose - nominal Take/lose"
and "true Take/win - nominal Take/win". [please insert suitable notation,
this is Courier font, ok? :) ]

Assuming independence and treating the gradients and "true" values as
constants, the Variance of the above expression would be:

(Drop gradient squared)*Var(nominal Drop) +
(Take/lose gradient squared)*Var(nominal Take/lose) +
(Take/win gradient squared)*Var(nominal Take/win)

and you get the standard deviation by taking a square root.

I assumed Var(Drop)=Var(Take/lose)=Var(Take/win)=0.25% squared. How did I
come up with that? Well, if the number in the table is +/- 1%, and I take
that to be about the same as 2*sigma (that's a guess), then sigma is 0.5% and
sigma squared is 0.25% squared. Yeah that's crude, but what the hey :).
Don't forget that we can luck out and get sigma=0 if we use a nominal of 0%,
50%, or 100%, we just didn't in the (-7,-7) case.

When I did the math for the (-7, -7) case, the standard deviation of the take
point was about 2.44%, so a +/- 2*sigma range would be almost 10%, or about
22% to 32% around the 27% nominal. You'd expect the analytic range to be
smaller than the 16% empirical "worst case" range since it's more like an RMS
of all the cases instead of just the difference of the two worst cases.
Also, +/- 3*sigma (analytical) would be close to 15% anyway, if the
difference bothers you :).
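
[The analytic recipe above is easy to code up as well. The sketch below uses
exact partial derivatives of the take point formula rather than a written-out
Taylor expansion, but it is the same first-order (delta method) calculation,
and it reproduces the roughly 2.44% figure quoted for the (-7,-7) case. The
function names are mine, not anything from the post.]

import math

def take_point(drop, take_lose, take_win):
    # Last-roll take point, all quantities in %.
    return 100.0 * (drop - take_lose) / (take_win - take_lose)

def take_point_sigma(drop, take_lose, take_win, sig_d, sig_l, sig_w):
    # First-order standard deviation of the take point, assuming the three
    # MET entries carry independent errors with the given sigmas (all in %).
    span = take_win - take_lose
    g_d = 100.0 / span                              # d(TP)/d(Drop)
    g_l = 100.0 * (drop - take_win) / span ** 2     # d(TP)/d(Take/lose)
    g_w = -100.0 * (drop - take_lose) / span ** 2   # d(TP)/d(Take/win)
    var = (g_d * sig_d) ** 2 + (g_l * sig_l) ** 2 + (g_w * sig_w) ** 2
    return math.sqrt(var)

print(take_point(44, 37, 63))                       # ~26.9
print(take_point_sigma(44, 37, 63, 0.5, 0.5, 0.5))  # ~2.44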

Similar empirical/analytic calculations could be done on the double point
calculation, it would be four variables instead of three and the ranges would
presumably be even wider.

So, my question is "Why bother with all this math over the board when the
error in the results might be so large?" I'll now step back and let the REAL
math folks cut this "analysis" to shreds.... :)


Regards,
Dave Brotherton

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

Kit Woolsey

Feb 19, 1999
dbro...@my-dejanews.com wrote:

: So, my question is "Why bother with all this math over the board when the
: error in the results might be so large?" I'll now step back and let the REAL
: math folks cut this "analysis" to shreds.... :)


I'm not a real math folk, so I can't dispute your analysis. And I will
be the first to admit that there may be errors in my table. The figures
were not derived on a particularly scientific basis. They are a
mishmash of empirical data, a program which was based on some assumptions
which may not be sound, a lot of judgment, and some fudging. Other
people have derived similar match equity tables, all of which are
probably just as inaccurate.

So, why use the tables? Two reasons.

First of all, they do work. Even if there are inaccuracies, our
empirical experience has shown that the tables are reasonably accurate.
For most backgammon positions, a player is likely to misassess his winning
chances by far more than any inaccuracy which would come from a match
equity table.

Secondly, we have to have some basis by which to make our cube
decisions. When faced with a cube decision which may be affected by the
match score, my approach is first to determine what winning chances I
need (taking gammons and recubes appropriately into account) to justify
taking at the match score. Having done this, I then examine the
position, make my best guess as to my winning chances, and act
accordingly. I may be wrong, but at least I am making my decision based
on objective criteria as much as possible. This is superior to just
making a blind guess about the necessary winning chances, which can lead
a player to making a huge cube blunder.

Kit

Chuck Bower

Feb 19, 1999
In article <7aj001$suf$1...@nnrp1.dejanews.com>,
<dbro...@my-dejanews.com> wrote:

(snip)


>Objective: Show how small errors in a match equity table are transmitted
>into a take point calculation, both empirically and analytically.

(snip)

This was done in Ortega/Kleinman's CUBES AND GAMMONS AT THE END
OF THE MATCH. I think they actually assumed a 0.5% error in the
Woolsey-Heinrich match equity table (MET) and took scores closer to the
end (where the problem is even more magnified, I believe).

Their solution was to go to a three digit Woolsey-Heinrich MET
(presumably computer generated) which when rounded gave the standard
one. What Dave addresses is that forcing the table to more
significant figures doesn't really make it more accurate. (It almost
certainly yields more precise answers, but there is still the
uncertainty of how 'correct' the table is to begin with.)

Maybe what Dave's question is leading to is: "is there a better
way to build a match equity table than the current methods?" I can
at least imagine (in my simple mind) an iterative simulation (possibly
with a BG robot, but maybe with a much simpler model) which could be set
in motion and converge on a self-consistent table. (I don't know that
this wouldn't lead to the current table, though!)

Of course, as is the case with any backgammon analysis, the
decisions made should depend upon the particular combatants. An MET
generated by a model won't be accurate if the actual players' actions
are different from the model's. Given that it's impractical to generate
an MET for every pair of opponents, you will usually have to live with
this shortcoming.


Chuck
bo...@bigbang.astro.indiana.edu
c_ray on FIBS


dbro...@my-dejanews.com

Feb 20, 1999
In article <7akh6i$asp$1...@flotsam.uits.indiana.edu>,

bo...@bigbang.astro.indiana.edu (Chuck Bower) wrote:
> In article <7aj001$suf$1...@nnrp1.dejanews.com>,
> <dbro...@my-dejanews.com> wrote:
>
> (snip)
> >Objective: Show how small errors in a match equity table are transmitted
> >into a take point calculation, both empirically and analytically.
> (snip)
>
> This was done in Ortega/Kleinman's CUBES AND GAMMONS AT THE END
> OF THE MATCH. I think they actually assumed a 0.5% error in the
> Woolsey-Heinrich match equity table (MET) and took scores closer to the
> end (where the problem is even more magnified, I believe)

First of all, thanks Chuck for responding to the original post. I'm
flattered that someone of your caliber thinks this is a topic worth
discussing. I read all your posts religiously. You are easily one of the
best contributors to r.g.b., IMHO.

Secondly, I made a minor mistake in my "formula" posted for the Taylor
series, but it didn't make any difference to my analytic results (I posted it
wrong, but did it right). I forgot the "true take point" term I was supposed
to put in front of all the (gradient)*(deviation) terms, of course (duh).
Just thought I'd point that out before anyone else did.

I guess Ortega/Kleinman should be the next book I pick up, it sounds
interesting. Although I'm sure the end of the match poses unique problems of
its own, I'm not sure why the end of the match should magnify this
particular problem - I'd think just the opposite.

If we go from +/- 1% to +/- 0.5% on the error of most numbers in the table
(not counting 0%, 50%, and 100%, of course), then that cuts all my analytic
sigmas, input and output, exactly in half. So the (-7,-7) case would have an
output sigma of about 1.22% instead of the 2.44% I used before.

Since most scores near the end of the match will have some 0%'s and 100%'s
floating around in the take point calculation (and maybe even some
50%'s,too), and since those terms contribute NO input sigma, then the output
sigmas should be less. Although I didn't do all the cases, I did a few using
the smaller error term and they seem to bear this out:

At (-4,-2) when the leader doubles on the last roll (no reship): Drop
17%+/-0.5%, Take/lose 0%+/-0%, Take/win 50%+/-0% gives a take point of 34%
with a 0.5% output sigma. (I'm now treating the +/-0.5% input error as my
+/-2*(input sigma) guess, so my input sigmas are 0.25% for the Drop, and 0%
for Take/lose and Take/win).

If you allow the reship for the trailer at (-4,-2), then Take/win 100%+/-0%
for a take point of 17% with an even smaller 0.25% output sigma. None of the
input sigmas changed, only the Take/win nominal value, but the output sigma
drops anyway.

Finally, look at (-1,-2) post crawford - Drop 50%, Take/lose 0%, Take/win
100%, all with 0% input sigmas, and hence the output sigma is 0% too!

So, I get the following take point results (for these few cases):

(-7,-7) last roll - 27% with 1.22% output sigma
(-4,-2) last roll - 34% with 0.50% output sigma (trailer's take point)
(-4,-2) reship - 17% with 0.25% output sigma
(-1,-2) pc - 50% with 0.00% output sigma

Maybe I was just lucky and other scores would give bigger output sigmas,
though. I didn't (and probably won't) do them all, at least not right
away....

> Maybe what Dave's question is leading to is: "is there a better
> way to build a match equity table than the current methods?" I can
> at least imagine (in my simple mind) an iterative simulation (possibly
> with a BG robot, but maybe with a much simpler model) which could be set
> in motion and converge on a self-consistent table. (I don't know that
> this wouldn't lead to the current table, though!)

This is an intriguing idea! I'm no computer programmer, but I wonder what
using the "continuous model" in such an iterative scheme would do....

dbro...@my-dejanews.com

Feb 20, 1999
In article <kwoolseyF...@netcom.com>,

kwoo...@netcom.com (Kit Woolsey) wrote:
> dbro...@my-dejanews.com wrote:
>
> : So, my question is "Why bother with all this math over the board when the
> : error in the results might be so large?"

(snip)

> So, why use the tables? Two reasons.
>
> First of all, they do work.

(snip)

Your work and results speak for themselves, Mr. Woolsey, both as a match
player and as an analyst/writer. I'm not ABOUT to argue THIS point, not one
little bit! I hereby officially take back the "why bother" question. [It
WAS a little on the whiny side, wasn't it? I had lost two straight matches
0-7 at the Flint, MI club right before I wrote it, and haven't put two
consecutive match wins together there all year (about 16 matches so far, I
think). :)]

>
> Secondly, we have to have some basis by which to make our cube
> decisions. When faced with a cube decision which may be affected by the
> match score, my approach is first to determine what winning chances I
> need (taking gammons and recubes appropriately into account) to justify
> taking at the match score. Having done this, I then examine the
> position, make my best guess as to my winning chances, and act
> accordingly. I may be wrong, but at least I am making my decision based
> on objective criteria as much as possible. This is superior to just
> making a blind guess about the necessary winning chances, which can lead
> a player to making a huge cube blunder.
>
> Kit
>

I couldn't agree more, at least in theory. In practice I don't have enough
experience keeping it all straight over the board to do it effectively
(yet....). I'm now inspired by your response to keep trying instead of
chucking the whole thing!

In retrospect, I think my comment was prompted by two things: 1) My genuine
surprise at the size of the transmitted output error from such seemingly
small input errors, and 2) "Sour grapes"! I certainly don't have an
alternative method of decision making to put forward.

Thanks for taking the time to respond, you made my day! My attitude is now
officially adjusted, and my winning streak should start RIGHT NOW.... :)

Daniel Murphy

Feb 20, 1999
On 19 Feb 1999 20:22:42 GMT, bo...@bigbang.astro.indiana.edu (Chuck Bower)
wrote:

>In article <7aj001$suf$1...@nnrp1.dejanews.com>, Dave Brotherton
> <dbro...@my-dejanews.com> wrote:
>
[Showed how small errors due to rounding in a two-decimal match equity
table can lead to cube handling errors and asks about using a three-decimal
place table]

> This was done in Ortega/Kleinman's CUBES AND GAMMONS AT THE END
>OF THE MATCH. I think they actually assumed a 0.5% error in the
>Woolsey-Heinrich match equity table (MET) and took scores closer to the
>end (where the problem is even more magnified, I believe).

Let me jump in here, since I'm in the middle of going through Antonio
Ortega and Danny Kleinman's excellent book for the umpteenth time.

Chuck, I don't think Ortega/Kleinman assumed Kit Woolsey's match table was
in error. In fact, Ortega/Kleinman (page 9) accept the Woolsey chart as
basically accurate and a corrective to earlier charts, since it, unlike
earlier charts by Danny Kleinman, Bill Robertie and Roy Friedman, is
empirically derived (based on actual match results, not mathematically
calculated). Thus they note that, compared to the Woolsey table, the
Kleinman and Robertie tables "underestimate gammon frequencies and
trailers' equities by small margins, while Friedman's chart overestimates
these by larger margins."

But Ortega/Kleinman note that all of these charts use two decimal places,
and then demonstrate, as Dave Brotherton did, how such a chart can easily
lead to the wrong cube action. And Ortega/Kleinman would, I think, rather
avoid having to make statements like "The correct cube action in this
position is double/pass, unless the errors in rounding to two decimal
places are such that the correct cube action is actually no double/take."

So Ortega/Kleinman generate the three-decimal place match equity table that
they use in their book. They do this by noting that Kleinman's original
table and Woolsey's table agree closely, and then adjust Woolsey's table in
the direction of Kleinman's table. To quote:

"The charts differ by some number, N, of thousandths. We have adjust
Woolsey's figures in the direction Danny's by the square root of N, rounded
to the nearest integer and divided by 1000. The result is a new chart
entirely consistent with Woolsey's, none of its entries deviating by more
than .005."

This exercise rather begs the questions of how accurate Woolsey's
chart is, or which chart -- Woolsey's or Kleinman's -- is more accurate,
or why a Woolsey chart adjusted in the direction of Kleinman's is more
accurate than the original. But it does seem unlikely that an interpolated
chart derived from the Woolsey and Kleinman charts (which are very close)
would be much less accurate than the original.
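
[As a rough illustration of the adjustment rule quoted above, here is how a
single entry might be processed. The entry values below are invented for the
example, and the round-to-nearest reading of "rounded to the nearest integer"
is my interpretation of the quote, not something checked against the book.]

def adjust_entry(woolsey, kleinman):
    # Nudge a Woolsey MET entry toward the corresponding Kleinman entry by
    # sqrt(N)/1000, where N is their difference in thousandths.
    n = round(abs(woolsey - kleinman) * 1000)    # difference in thousandths
    step = round(n ** 0.5) / 1000.0              # sqrt(N), rounded, as a probability
    return woolsey + step if kleinman > woolsey else woolsey - step

# Hypothetical pair of entries, expressed as decimals (0.620 = 62.0%):
print(adjust_entry(0.620, 0.629))                # N = 9, so move 0.003 -> 0.623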

As Chuck also noted, Ortega/Kleinman point out that "random deviations in
empirical data used to develop the charts, or errors in the assumptions
made" create errors larger than any created by rounding off to two decimal
places. Nevertheless, as Kit has said (many times) you have to have
something to go on, and if that something has small inaccuracies it is
still better than using nothing at all.

_______________________________________________
Daniel Murphy http://www.cityraccoon.com
Humlebæk BG Klub http://www.hbgk.dk
Raccoon on FIBS http://www.fibs.com

Chuck Bower

Feb 20, 1999

Daniel Murphy wrote:

> On 19 Feb 1999 20:22:42 GMT, bo...@bigbang.astro.indiana.edu (Chuck Bower)
> wrote:
>
> >In article <7aj001$suf$1...@nnrp1.dejanews.com>, Dave Brotherman
> > <dbro...@my-dejanews.com> wrote:
> >
> [Showed how small errors due to rounding in a two-decimal match equity
> table can lead to cube handling errors and asks about using a three-decimal
> place table]
>
> > This was done in Ortega/Kleinman's CUBES AND GAMMONS AT THE END
> >OF THE MATCH. I think they actually assumed a 0.5% error in the
> >Woolsey-Heinrich match equity table (MET) and took scores closer to the
> >end (where the problem is even more magnified, I believe).
>
> Let me jump in here, since I'm in the middle of going through Antonio
> Ortega and Danny Kleinman's excellent book for the umpteenth time.
>
> Chuck, I don't think Ortega/Kleinman assumed Kit Woolsey's match table was
> in error.

(snip)

I agree. That wasn't what I was trying to say (but apparently that is
the way it came across.) They were showing that 2-place MET's lead
to inaccurate take-points. Dave's angle is a bit different, but has similar
conclusions.

dbro...@my-dejanews.com

Feb 20, 1999
In article <36ce784e...@news.inet.tele.dk>, rac...@cityraccoon.com
wrote:

(snip)

> Let me jump in here, since I'm in the middle of going through Antonio
> Ortega and Danny Kleinman's excellent book for the umpteenth time.

That makes two unsolicited positive recommendations of the Ortega/Kleinman
book from r.g.b. contributors - I guess I really should pick it up.

(snip)


> So Ortega/Kleinman generate the three-decimal place match equity table that
> they use in their book. They do this by noting that Kleinman's original
> table and Woolsey's table agree closely, and then adjust Woolsey's table in
> the direction of Kleinman's table. To quote:
>
> "The charts differ by some number, N, of thousandths. We have adjust
> Woolsey's figures in the direction Danny's by the square root of N, rounded
> to the nearest integer and divided by 1000. The result is a new chart
> entirely consistent with Woolsey's, none of its entries deviating by more
> than .005."

I assume that the references to "thousandths" and ".005" are decimal values
rather than percent values? I think that would be consistent with Chuck
Bower's post (and my later reply showing how you catch some mathematical
breaks for at least some scores near the end of a match - I did some
recalculations using smaller input errors). If this is so, then I guess I'm
still sort of surprised by the size of the propagated output error relative
to the much smaller assumed input errors, but more so at the beginning rather
than the end of the match.

Here's some more food for thought.... I'd guess that only a small subset of
folks could reliably calculate using 3 digit numbers over the board, even if
they could remember them perfectly. Most folks don't flatly memorize the 2
digit MET, but use some approximate formula (Neil's numbers, Janowski's, and
Turner's to name the ones I'm aware of for the Woolsey table, maybe there are
others for other folks tables?) to estimate the table entries, at least when
the leader is 3+ away (on a good day, even I can remember the 1-away/2-away 2
digit numbers - up to a 7-pt match anyway). I'd guess that formula errors
are of about the same magnitude as the table entry errors, so the results
would be similar, but I could be wrong about that (maybe the formulas are
worse, but one or more of them just might be better!).

Also, things would presumably get even more hairy looking if we tried to
factor in gammons (after all, the title of the O/K book implies that this is
part of the end of the match "fun"). Figuring take points assuming some
%gammons other than 0% would add a fourth source of error, one that might
swamp the contribution of the other three!

I woke up this morning thinking about how you might guess some input errors
for estimated %gammons given a board position - this, of course, depends on
the position and the skill of the player doing the estimating, not the MET.
For a lower bound, I guess you could show a bunch of top players a bunch of
positions and have them estimate %gammons, then do a bunch of robot rollouts
to do (presumably better) estimates (but treat them as "true" values), then
compare? Does anyone want to venture an opinion about what the input errors
from this source might be? If there's interest in this topic, I promise to
try to do further error propagation analyses that factor in at least a
guesstimated lower bound for errors in estimating %gammons, for some number
of late (and early!) match scores (after all, +/-0.5% error for MET table
entries was presumably someone's opinion too).

Chuck Bower

Feb 20, 1999

dbro...@my-dejanews.com wrote:

(snip)

> I woke up this morning thinking about how you might guess some input errors
> for estimated %gammons given a board position - this, of course, depends on
> the position and the skill of the player doing the estimating, not the MET.
> For a lower bound, I guess you could show a bunch of top players a bunch of
> positions and have them estimate %gammons, then do a bunch of robot rollouts
> to do (presumably better) estimates (but treat them as "true" values), then
> compare? Does anyone want to venture an opinion about what the input errors
> from this source might be? If there's interest in this topic, I promise to
> try to do further error propagation analyses that factors in at least a
> guesstimated lower bound for errors in estimating %gammons, for some number
> of late (and early!) match scores (after all, +/-0.5% error for MET table
> entries was presumably someone's opinion too).

Here is my guesstimate of my personal errors in estimating
gammon fractions. (I could be way off, of course.) Here I refer to
gammon fraction for each player (gammon wins/total wins) and my
"error" is the difference between my estimate of this quantity and the
'true' gammon fraction divided by the true fraction (i.e. a relative error).

estimate is within:     fraction of the time:

10%                     1/2
20%                     3/4
33%                     9/10

Is this what you're looking for?

dbro...@my-dejanews.com

Feb 21, 1999
In article <36CEFF40...@bigbang.astro.indiana.edu>,

Chuck Bower <bo...@bigbang.astro.indiana.edu> wrote:
> Here is my guesstimate of my personal errors in estimating
> gammon fractions. (I could be way off, of course.) Here I refer to
> gammon fraction for each player (gammon wins/total wins) and my
> "error" is the difference between my estimate of this quantity and the
> 'true' gammon fraction divided by the true fraction (i.e. a relative error).
>
> estimate is within:     fraction of the time:
>
> 10%                     1/2
> 20%                     3/4
> 33%                     9/10
>
> Is this what you're looking for?
>
> Chuck
> bo...@bigbang.astro.indiana.edu
> c_ray on FIBS
>
>

Chuck, thanks again for your interest. I'd want to create a table (for a
given position/match score) something along the lines of (I hope this all
lines up)....

Quantity              Value   Est Error

Drop ME               x1      +/-e1
Take/lose ME          x2      +/-e2
Take/win ME           x3      +/-e3
Doubler's gam.fr.     x4      +/-e4
Taker's gam.fr.       x5      +/-e5
Nominal take point    y       +/-e(propagated)

Say all the x's, y's and e's are in %. "y" would be the appropriate f(x1,
x2, x3, x4, x5) - I think I could figure that out.... I guess there are 5
variables now, not 4, if we factor in both the Doubler's and the Taker's
gammon fraction....forgot about that earlier :(

The +/-e's for the two gammon fractions would, conceptually at least, be no
different from the +/-0.5% used for the three ME's (besides 0%, 50%, and
100%). I suppose I could use, as a lower bound, say e4=0.1*x4%, and
e5=0.1*x5% as a starting point (though this would be based on being able to
estimate both gam.fr's to within 10% relative all the time, not 1/2 the
time....). Assuming +/-e's are approximately +/- 2*sigmas, that would make
the input sigmas either 0% or 0.25% for the ME's, and (0.05*x4)% and
(0.05*x5)% for the gammon fractions. Does this sound right to you, at least
as a lower bound? Perhaps you could also suggest some values for x4 and x5
to start playing with, maybe even based on an actual position.
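
[Since the five-variable take point function f(x1,...,x5) isn't written out
yet, a generic propagation helper is one way to experiment. The sketch below
estimates the gradients numerically for whatever function it is handed and
combines them with the per-input sigmas under the same independence
assumption; the three-variable last-roll formula is used only as a stand-in
to show the call, and all names are mine.]

import math

def propagated_sigma(f, x, sigmas, h=1e-4):
    # Delta-method standard deviation of f at the point x, assuming
    # independent input errors; gradients come from central differences.
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += h
        lo[i] -= h
        grads.append((f(*hi) - f(*lo)) / (2 * h))
    return math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

def last_roll_tp(drop, take_lose, take_win):
    # Stand-in function: the three-variable last-roll formula from earlier.
    return 100.0 * (drop - take_lose) / (take_win - take_lose)

# With 0.25% input sigmas this reproduces the ~1.22% output sigma quoted
# earlier in the thread for the (-7,-7) case.
print(propagated_sigma(last_roll_tp, [44, 37, 63], [0.25, 0.25, 0.25]))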

One other thing to think about, too.... My analytic result assumes all the
e's are independent random variables (ten combos in all). Independence
between each of the three ME e's and each of the two gam.fr. e's (six combos)
seems pretty unassailable to me (at least I have no a priori reason to
suspect any). But within the three ME e's (three combos) and within the two
gam.fr. e's for a particular player (one combo) is another story! They would
NOTbe independent, for example, there was something about the way the MET was
constructed (or the way the approximating formulas work, if we use those)
that tend to make the x's we use biased in one direction away from the "true"
values instead of unbiased around the true values. There are a lot of other
ways to violate the independence assumption, but that's the one I'm most
worried about.

If someone told me that they could definitively show that all three of those
suspect combos of the ME e's had small positive covariances between them, I
wouldn't be too surprised. That would tend to inflate the output
e(propagated) calculation even more, if it were true (at least I'm pretty
sure that's the way it would work, I'd have to work it out to be absolutely
sure). Even worse, positive covariances in the e's might suggest bias in the
x's, and hence bias in the nominal take point, which would not be good
either. But....

Both positive AND NEGATIVE covariances for the gam. fr. combo of the e's
would also be easy for me to believe for certain opponents. I'm pretty sure
that negative covariances would actually decrease the e(propagated), too.
Suppose a pessimistic player habitually overestimated his own gam.fr. and
underestimated his oppo's gam.fr. The result would be a negative covariance
between that one combo of the e's. An optimistic player who did exactly the
opposite would also decrease his e(propagated). Don't be in a hurry to
cultivate one habit or the other, however, since you've also biased your
nominal take point! I SERIOUSLY doubt that the Mean Squared Error of the
nominal take point decreases for these players (remember, MSE=bias squared
plus variance, so you're only doing better if the decrease in the propagated
variance makes up for your added bias squared, and I'd have to see it to
believe it....). Variance and MSE are the same thing for unbiased estimators
of course, and I'm assuming all the e's have mean 0 (i.e. are unbiased).

Anyway, I'm assuming the covariances among all ten combos of the e's to be
exactly 0 for now (this is a result of assuming independence of the e's).

Lasse Hjorth Madsen

Feb 22, 1999
Hi all,

Chuck Bower wrote in message <7akh6i$asp$1...@flotsam.uits.indiana.edu>...

[snip]

> Maybe what Dave's question is leading to is: "is there a better
>way to build a match equity table than the current methods?" I can
>at least imagine (in my simple mind) an iterative simulation (possibly
>with a BG robot, but maybe with a much simpler model) which could be set
>in motion and converge on a self-consistent table. (I don't know that
>this wouldn't lead to the current table, though!)
>

[snip]

I did this some years ago, using a quite simple recursive algorithm.
Relatively quickly the table converges, and the result is, in fact, very
close to Woolsey's. The average difference per entry is, if I remember
correctly, about .3 percentage points, which is probably partly due to
rounding errors.

Let me know if anybody would like a copy of this table. If nothing else,
it's useful in situations involving matches longer than 15 points.
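
[For readers curious what such a recursion can look like, here is a
deliberately stripped-down sketch: cubeless, gammonless, every game worth one
point and won 50/50 by either side. That is much cruder than whatever model
Lasse actually used, so treat it purely as an illustration of the recursive
idea, not as a way to reproduce his table.]

from functools import lru_cache

@lru_cache(maxsize=None)
def match_equity(a, b):
    # Chance that the player needing `a` more points beats the player
    # needing `b` more points, in the toy model described above.
    if a <= 0:
        return 1.0
    if b <= 0:
        return 0.0
    return 0.5 * match_equity(a - 1, b) + 0.5 * match_equity(a, b - 1)

# e.g. trailing 7-away against 5-away in this toy model:
print(round(match_equity(7, 5), 3))              # ~0.274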


>
> Chuck
> bo...@bigbang.astro.indiana.edu
> c_ray on FIBS


Thanks,

Lasse

Ian Shaw

Feb 22, 1999

dbro...@my-dejanews.com wrote in message
<7amnpb$eb$1...@nnrp1.dejanews.com>...

>
>Here's some more food for thought.... I'd guess that only a small subset of
>folks could reliably calculate using 3 digit numbers over the board, even if
>they could remember them perfectly. Most folks don't flatly memorize the 2
>digit MET, but use some approximate formula (Neil's numbers, Janowski's, and
>Turner's to name the ones I'm aware of for the Woolsey table, maybe there are
>others for other folks tables?) to estimate the table entries, at least when
>the leader is 3+ away (on a good day, even I can remember the 1-away/2-away 2
>digit numbers - up to a 7-pt match anyway). I'd guess that formula errors
>are of about the same magnitude as the table entry errors, so the results
>would be similar, but I could be wrong about that (maybe the formulas are
>worse, but one or more of them just might be better!).
>


Just for interest, here is a formula I've come up with, derived from Tom
Keith's Takepoint Tables. If I remember right, Tom's tables are derived
mathematically and assume a gammon rate of 20%.

Takepoint for a 2-cube at match scores of 4-away, 4-away and higher.

Takepoint = 69% + Taker's Points Required
IF Doubler > 6 away, subtract quarter of Doubler's Points Required
IF Doubler < 6 away, add quarter of Doubler's Points Required

An Example:
Match Score 7 away, 9 away

7-away doubles. 9-away's takepoint is:
69 + 9 - (1/4 of 7 = 2) = 76%
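
[A quick sketch of the formula as code, with the round-to-nearest handling of
the quarter adjustment inferred from the worked example; the no-adjustment
behaviour at exactly 6-away follows Ian's confirmation later in the thread.]

def take_point_2cube(doubler_away, taker_away):
    # Ian Shaw's rule of thumb for the taker's take point (in %) on an
    # initial double at scores of 4-away/4-away and longer.
    tp = 69 + taker_away
    quarter = round(doubler_away / 4)    # rounding convention inferred from the example
    if doubler_away > 6:
        tp -= quarter
    elif doubler_away < 6:
        tp += quarter
    return tp                            # at exactly 6-away, no adjustment

print(take_point_2cube(doubler_away=7, taker_away=9))   # 69 + 9 - 2 = 76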

This works reasonably well up to about 11-away, 11-away. By "reasonably
well", I mean that it approximates Tom's table to within 1%, with the odd
exception. This is certainly not accurate enough for the sort of analysis
being discussed here. In practice, though, I reckon it's far more accurate
than I can calculate from the raw Match Equity Table, in my head,
before my opponent thinks I've dropped the connection. (All that long
division, yeucch!).

I'm also sure that I can't estimate the position equity to within 1%, let
alone figure in all the gammons etc, so I figure this is good enough for me,
as someone to whom match play (yea, even doubling) is a pretty new concept.

One day I'll try to find a formula for 4-cubes. Until then, I'll try to do it
straight from the MET.

Any other suggestions welcome.
--
Regards
Ian Shaw (ian on FIBS)


Dbroth02

Feb 23, 1999
I found an error in my previous results. Sometimes, two numbers in the MET are
constrained by the fact that they must sum to 100%. In this special case, you
can do a similar kind of empirical and analytic analysis, but you have one less
variable to worry about. This actually happened in the original (-7,-7) case
I posted when I started this thread. The empirical worst case numbers I gave
are reduced somewhat when you impose the constraint, but it makes less
difference in the analytic results. I'll summarize everything in another post
later, when I can throw in some analysis using non-zero gammon fractions. Stay
tuned!

dbro...@my-dejanews.com

Feb 23, 1999
In article <2IbA2.14776$Lz1.835@wards>,

"Ian Shaw" <ian....@riverauto.co.uk> wrote:
> Just for interest, here is a formula I've come up with, derived from Tom
> Keith's Takepoint Tables. If I remember right, Tom's tables are derived
> mathmatically and assume a gammon rate of 20%.
>
> Takepoint for a 2-cube at match scores of 4-away, 4-away and higher.
>
> Takepoint = 69% + Taker's Points Required
> IF Doubler > 6 away, subtract quarter of Doubler's Points Required
> IF Doubler < 6 away, add quarter of Doubler's Points Required

I don't remember seeing any formulas for take points before, what a great
idea :) And this one looks real easy, too. But....

1) I'd assume Keith's MET is based on a 20% gf and these take points are
gammonless - right?
2) What happens here at precisely 6 away? I'd guess 69% + Takers Away, no
adjustment (since more than 6 adjusts down and less than 6 adjusts up)?
3) How close is this formula for 2-cube gammonless take points based on
Woolsey's MET? If it's no good for Woolsey's, has anyone figured a good
formula?
4) Anyone have take point formulas for higher cubes?

Ian Shaw

Feb 23, 1999

dbro...@my-dejanews.com wrote in message
<7at7ft$bt0$1...@nnrp1.dejanews.com>...

>In article <2IbA2.14776$Lz1.835@wards>,
> "Ian Shaw" <ian....@riverauto.co.uk> wrote:
>> Just for interest, here is a formula I've come up with, derived from Tom
>> Keith's Takepoint Tables. If I remember right, Tom's tables are derived
>> mathmatically and assume a gammon rate of 20%.
>>
>> Takepoint for a 2-cube at match scores of 4-away, 4-away and higher.
>>
>> Takepoint = 69% + Taker's Points Required
>> IF Doubler > 6 away, subtract quarter of Doubler's Points Required
>> IF Doubler < 6 away, add quarter of Doubler's Points Required
>
>I don't remember seeing any formulas for take points before, what a great
>idea :) And this one looks real easy, too. But....
>
>1) I'd assume Keith's MET is based on a 20% gf and these take points are
>gammonless - right?

Erm, I'd assumed that since the MET is based on 20% gammons, the
takepoints also assume 20% gammons. Am I wrong? This seems to be a
reasonable starting point for mid-game positions. Obviously, things change
once the game comes down to a race where gammons are unlikely, but I've not
worked things out that far yet. I suppose the start point would be to
produce a MET which assumes a zero gammon rate and start from there.
Comparing the results would probably be quite instructive.

>2) What happens here at precisely 6 away? I'd guess 69%
>+Takers Away, no adjustment (since more than 6 adjusts down and less than 6
>adjusts up)?

Spot on.

>3) How close is this formula for 2-cube gammonless take points
>based on Woolsey's MET? If it's no good for Woolsey's, has anyone figured
a
>good formula?

Tom says that his table matches Kit's to within 1% virtually everywhere.
This should mean that they give almost the same calculated takepoints. There
will be some propagation of error though, so maybe the formula can be
tweaked slightly. I may have a go sometime. I know my formula is flawed,
probably seriously. However, it is easy to use over the board, where I have
enough trouble holding a pip count in my head, let alone working out winning
chances. I'm sure my errors there completely swamp any errors in my
takepoint evaluation.

>4) Anyone have take point formulas for higher cubes?
>
>Regards,
>Dave Brotherton
>

Gary Wong

Feb 23, 1999
"Ian Shaw" <ian....@riverauto.co.uk> writes:
> >1) I'd assume Keith's MET is based on a 20% gf and these take points are
> >gammonless - right?
>
> Erm, I'd assumed that that since the MET is based on 20% gammons, the
> takepoints also assume 20% gammons. Am I wrong? This seems to be a
> reasonable starting point for mid-game positions. Obviously, things change
> once the game comes down to a race where gammons are unlikely, but I've not
> worked things out that far yet.

Maybe I'm missing the point, but I don't think gammon rates here change
anything. You're describing gammon rates in the _current_ game, right?
But when you compute a drop point, you do it based on the match equities
at the _future_ scores; and gammons in future games are not affected by
the current position. Gammons in the current game do matter of course,
but only because they change the score after this game -- which we are
already perfectly capable of accounting for.

> I suppose the start point would be to
> produce a MET which assumes a zero gammon rate and start from there.
> Comparing the results would probably be quite instructive.

It could be interesting, but it appears that for reasonable assumptions
about gammon rates, the match equity isn't particularly sensitive to the
fraction you apply. I posted an article archived at:

http://www.dejanews.com/getdoc.xp?AN=373941296

containing tables computed at 20% and 30% gammon rates, and the two are
fairly similar. (Admittedly I didn't think to check whether drop points
derived from the tables were also similar.)

Cheers,
Gary.
--
Gary Wong, Department of Computer Science, University of Arizona
ga...@cs.arizona.edu http://www.cs.arizona.edu/~gary/

Gary Wong

Feb 23, 1999
dbro...@my-dejanews.com writes:
> In article <wtlnhpt...@brigantine.CS.Arizona.EDU>,

> Gary Wong <ga...@cs.arizona.edu> wrote:
> > "Ian Shaw" <ian....@riverauto.co.uk> writes:
> > > >1) I'd assume Keith's MET is based on a 20% gf and these take points are
> > > >gammonless - right?
> > >
> > > Erm, I'd assumed that that since the MET is based on 20% gammons, the
> > > takepoints also assume 20% gammons. Am I wrong?
> >
> > Maybe I'm missing the point, but I don't think gammon rates here change
> > anything. You're describing gammon rates in the _current_ game, right?
>
> I guess the point is I don't understand something [I was the guy with the
> question marked 1) above] :). We are, of course, "perfectly capable of
> accounting for" gammons in the current game using the usual (i.e. long)
> method, but the short take point formula Ian gave must assume something about
> the gammon fraction in the current game...

Argh, sorry. I see what you mean now. That'll teach me to leap into the
middle of a thread without paying attention to the context first :-)

dbro...@my-dejanews.com

Feb 24, 1999
In article <wtlnhpt...@brigantine.CS.Arizona.EDU>,
Gary Wong <ga...@cs.arizona.edu> wrote:
> "Ian Shaw" <ian....@riverauto.co.uk> writes:
> > >1) I'd assume Keith's MET is based on a 20% gf and these take points are
> > >gammonless - right?
> >
> > Erm, I'd assumed that that since the MET is based on 20% gammons, the
> > takepoints also assume 20% gammons. Am I wrong? This seems to be a
> > reasonable starting point for mid-game positions. Obviously, things change
> > once the game comes down to a race where gammons are unlikely, but I've not
> > worked things out that far yet.
>
> Maybe I'm missing the point, but I don't think gammon rates here change
> anything. You're describing gammon rates in the _current_ game, right?
> But when you compute a drop point, you do it based on the match equities
> at the _future_ scores; and gammons in future games are not affected by
> the current position. Gammons in the current game do matter of course,
> but only because they change the score after this game -- which we are
> already perfectly capable of accounting for.

I guess the point is I don't understand something [I was the guy with the
question marked 1) above] :). We are, of course, "perfectly capable of
accounting for" gammons in the current game using the usual (i.e. long)
method, but the short take point formula Ian gave must assume something about
the gammon fraction in the current game, whether gammonless, 20% for the
doubler, 20% for both, or something else. I'm asking if the formula Ian gave
previously for take points assumes the current game is gammonless or not. I
imagine it is, but wanted to make sure. It looks like Ian has some doubt
about it, though. Can someone (maybe Tom Keith, if he's reading this)
definitively clear this up?

Kit Woolsey

Feb 24, 1999
Ian Shaw (ian....@riverauto.co.uk) wrote:

<snip>

: Tom says that his table matches Kit's to within 1% virtually everywhere.
: This should mean that they give almost the same calculated takepoints. There
: will be some propagation of error though, so maybe the formula can be
: tweaked slightly. I may have a go sometime. I know my formula is flawed,
: probably seriously. However, it is easy to use over the board, where I have
: enough trouble holding a pip count in my head, let alone working out winning
: chances. I'm sure my errors there completely swamp any errors in my
: takepoint evaluation.

While take point tables are fine for last roll or dead cube situations,
they are very dangerous to use when the cube is still in play. The
problem is that they don't take recube vig into account, which can lead
to major blunders.

As a simple example, consider the following situation: X is 3 away, O is
4 away, and X is doubling to 2. What is O's "take point"?

The straightforward approach without taking recube vig into account gives us:

If O passes, he is 32%.
If O takes and loses, he is 17%
If O takes and wins, he is 60%

Thus, O is risking 15% to gain 28%, which comes to a take point of 34.8%.

A little common sense will show us that this is ridiculous. Let us
suppose that O is going to adopt the clearly inferior strategy of
redoubling immediately. If so, the game is for the match. Thus, if O
has 32% chances of winning he has a take, since if he passes his match
equity would be 32%. Note this is lower than the 34.8% we computed.
That 34.8% figure is based on the assumption that O never redoubles.

Just as redoubling immediately is better than never redoubling, waiting
until there are some decent market losers is better than redoubling
immediately. Therefore, the true take point is lower than 32%. How much
lower would depend upon the position type and how likely O is to make a
fairly efficient redouble. However since X's take point on a recube is
40% (and that is accurate, since the cube would then be dead), it is
clear that O wields large recube vig and will often be able to make an
effective redouble. Thus, O's true take point is probably around 26 or 27%.

My personal approach is to not use take point tables, because of this
kind of potential mistake. Instead I do the gain-loss evaluations and
adjust them for recube potential. In the above example, I might say that
O will redouble very aggressively, so aggressively that he never loses
his market. I might estimate that for a little under half of O's losses
that redouble never comes, in which case O will still have 17% match
equity being behind 4 away, 1 away. Thus O's average equity when he
loses will be about 8% (approximately midway between 0% and 17%), but his
equity when he wins will be 100% (since the cube will always be at 4 when
he wins by our assumption that he never loses his market). This gives us:

O passes: 32%
O takes and wins: 100%
O takes and loses: 8%

So O is risking 24% to gain 68%, making his real take point just about 26%.
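
[Kit's two calculations are easy to check numerically. The risk/gain framing
and all of the equity figures come straight from his post; only the small
wrapper function is mine.]

def take_point_from_equities(pass_eq, take_lose_eq, take_win_eq):
    # Match-winning chances (in %) needed to take, given match equities
    # after passing, after taking and losing, and after taking and winning.
    risk = pass_eq - take_lose_eq
    gain = take_win_eq - pass_eq
    return 100.0 * risk / (risk + gain)

# 3-away vs 4-away, O doubled to 2 (Kit's example):
print(take_point_from_equities(32, 17, 60))   # ~34.8% if O never redoubles
print(take_point_from_equities(32, 8, 100))   # ~26% with the aggressive recube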

Kit


Ian Shaw

Feb 24, 1999

Kit Woolsey wrote in message ...

>
>While take point tables are fine for last roll or dead cube situations,
>they are very dangerous to use when the cube is still in play. The
>problem is that they don't take recube vig into account, which can lead
>to major blunders.
>
>
[>A little common sense snipped]

The light slowly dawns! Thanks for your explanation, Kit. I tried to produce
a take point table from your MET, and it differed considerably from Tom's. I
now see that I had not taken any account of recube vig.

Going back to Tom's table, he gives the takepoint at 4-away, 3-away as ...
76%, pretty close to your calculation. It appears that Tom has taken recube
vig into account when calculating his table.

To answer Dave's question about the accuracy of my formula compared to Kit's
table, I would need to know the assumptions Tom used for recube action.
Unfortunately, Tom does not explain how he calculated the table, so I can't
repeat the calculation using the Woolsey-Heinrich MET as a base. All I can
say is that, providing Tom's MET matches Kit's closely and Tom's recube
assumptions are similar to Kit's, the formula should hold up.

Just out of interest, how quickly can experienced players do these sorts of
calculations, figuring in the recubes? I think I need to improve my mental
arithmetic speed and accuracy considerably.

Ian Shaw

Feb 24, 1999

dbro...@my-dejanews.com wrote in message
<7avsgc$m77$1...@nnrp1.dejanews.com>...
[confusion snipped]

> We are, of course, "perfectly capable of
>accounting for" gammons in the current game using the usual (i.e. long)
>method, but the short take point formula Ian gave must assume something about
>the gammon fraction in the current game, whether gammonless, 20% for the
>doubler, 20% for both, or something else. I'm asking if the formula Ian gave
>previously for take points assumes the current game is gammonless or not. I
>imagine it is, but wanted to make sure. It looks like Ian has some doubt
>about it, though. Can someone (maybe Tom Keith, if he's reading this)
>definitively clear this up?
>
>Regards,
>Dave Brotherton


Tom's Market Window Table, from which my formula came, assumes that the
gammon rate for the current game at the point of the cube decision is 20%.
It is incidental that the MET used to calculate the equities for the
various match scores possible as a result of the current game also assumes
that the gammon rate at the start of a game is 20%.
Tom also discusses the effect of the current game's gammon potential. You
can find the whole thing at http://www.bkgm.com/articles/mpd.html amongst
much other excellent stuff.

I hope this clears up any confusion, not least my own ;o)

Tom Keith

Feb 24, 1999
"Ian Shaw" <ian....@riverauto.co.uk> wrote:

> > 1) I'd assume Keith's MET is based on a 20% gf and these take points are
> > gammonless - right?
>
> Erm, I'd assumed that that since the MET is based on 20% gammons, the
> takepoints also assume 20% gammons. Am I wrong? This seems to be a
> reasonable starting point for mid-game positions. Obviously, things change
> once the game comes down to a race where gammons are unlikely, but I've not
> worked things out that far yet.

dbro...@my-dejanews.com wrote:

> I guess the point is I don't understand something [I was the guy with the
> question marked 1) above] :). We are, of course, "perfectly capable of
> accounting for" gammons in the current game using the usual (i.e. long)
> method, but the short take point formula Ian gave must assume something about
> the gammon fraction in the current game, whether gammonless, 20% for the
> doubler, 20% for both, or something else. I'm asking if the formula Ian gave
> previously for take points assumes the current game is gammonless or not. I
> imagine it is, but wanted to make sure. It looks like Ian has some doubt
> about it, though. Can someone (maybe Tom Keith, if he's reading this)
> definitively clear this up?
>
> Regards,
> Dave Brotherton

Hi Ian, Dave, et al.

It's been a while since I've looked at this stuff, but I think Ian is
correct. That is, the take points calculated in the article
"http://www.bkgm.com/articles/mpd.html" assume that 20% of the wins in the
current game will be gammons. Different gammon expectations for the current
game will give different take points. In fact, there is a chart in the
article ("http://www.bkgm.com/articles/mpd_gam.gif") that shows how the take
points change according to the likelihood of a gammon in that game.
(Even this chart does not cover all the possibilities because it assumes
both players win the same fraction of gammons.)

Tom Keith
