
Feb 18, 1999, 3:00:00 AM

Hi folks,

At the moment I'm playing a few longer money sessions against two guys, and (for training) 1000 money games against Snowie 1.3. While thinking about some longer winning and losing streaks, I was wondering whether there is reliable statistical material available about what may happen in a given number of games.

What I mean:

1) Let's assume you play 1000 money games against someone. You and your opponent are ranked between expert and world-class level (if that matters). You play using the Jacoby rule, beavers and raccoons, but no automatics. Let's further assume that the gammon probability is roughly 20%.

What is the interval of possible results with a given 95% probability?

2) Nearly the same, but now you play only 100 games.

3) And now you stop at 50 games.

4) To make it more difficult, the same scenario as shown above, but now

you are 5% better than your opponent (equity = +0.01 per game).

Years ago I studied economics, including two semesters of statistics. I know it's possible to calculate this, but I'm too short of time to figure it out again.

Can anyone help?

Ciao

acepoint (Achim)

If it helps: a friend played 770 games against JellyFish 3.0; points per game are 2.063.

Feb 18, 1999, 3:00:00 AM

Hi Achim.

Some time ago I made a formula which might be useful to you:

If you play n games, a 96% confidence interval for two
equally good players is 0 +/- 5*sqrt(n) points.

That is, if you play 100 games, you should be prepared

to lose 50 points.

If the players are of unequal strength, the 0 in the formula
can be replaced with ppg*n. But I don't think the ppg is an easy
thing to calculate. But hey, David Montgomery once made a formula
to translate between FIBS ratings and money play.

*searching* please wait.

Found it!

50 FIBS points are worth about 0.1 ppg.
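Putting the rule of thumb and the rating conversion together, here is a small sketch (the function name and the linear rating-to-ppg mapping are my own illustration, not something from the thread):

```python
# Sketch of Stig's rule of thumb: after n games, a ~96% confidence
# interval for the total score is ppg*n +/- 5*sqrt(n), where ppg is the
# stronger player's points-per-game advantage (roughly 0.1 ppg per
# 50 FIBS rating points, per David Montgomery's conversion).
from math import sqrt

def score_interval(n, fibs_diff=0):
    ppg = 0.1 * fibs_diff / 50      # rating gap -> ppg advantage (assumed linear)
    half_width = 5 * sqrt(n)        # roughly two standard deviations
    return ppg * n - half_width, ppg * n + half_width

print(score_interval(100))          # evenly matched: (-50.0, 50.0)
```

So after 100 games between equals you should indeed be prepared to be down 50 points.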


Stig Eide

-----------== Posted via Deja News, The Discussion Network ==----------


Feb 18, 1999, 3:00:00 AM

Achim Müller <acep...@deltacity.net> writes:

> 1) Let's assume you play against someone 1000 money games. You and your

> opponent are ranked between expert and world class level (if that

> matters). You play using the jacoby rule, beaver and raccoons, but no

> automatics. Let's further assume, that the gammon probability is round

> about 20%.

>

> What is the interval of possible results with a given 95% probability?


A reasonable model for the distribution of points after a long series of money games is that the total score after n games is normally distributed with mean pn and variance 9n (where p is the advantage you have in points per game). This is just an approximate model and is not tailored to the parameters you specified, but it seems to fit the data I have fairly well.

A 95% prediction interval for the score after 1000 games between evenly

matched players is from -186 to +186 points.

> 2) Nearly the same, but now you play only 100 games.

Now the interval is from -59 to +59 points.

> 3) And now you stop at 50 games.

-42 to +42 points.

> 4) To make it more difficult, the same scenario as shown above, but now

> you are 5% better than your opponent (equity = +0.01 per game).

I'm not sure what you mean by 5% better (expecting to win 55% of the games would be worth well over 0.01 ppg). 0.01 ppg really is pretty tiny, but if you had an advantage of that amount, the three intervals above would become -176 to +196, -58 to +60 and -41 to +42 points respectively.
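The intervals above follow directly from the normal model; a minimal sketch (function name mine), using z = 1.96 for 95%:

```python
# Under the model, the total score after n games is ~ Normal(p*n, 9*n),
# so a 95% prediction interval is p*n +/- 1.96 * 3 * sqrt(n).
from math import sqrt

def prediction_interval(n, ppg=0.0, z=1.96):
    half_width = z * 3 * sqrt(n)    # std dev of the total score is 3*sqrt(n)
    return ppg * n - half_width, ppg * n + half_width

for n in (1000, 100, 50):
    lo, hi = prediction_interval(n)
    print(f"{n:5d} games, even match: {lo:+.0f} to {hi:+.0f}")
```

Rounding to whole points reproduces the -186/+186, -59/+59 and -42/+42 figures, and passing ppg=0.01 shifts each interval by pn as quoted.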

Cheers,

Gary.

--

Gary Wong, Department of Computer Science, University of Arizona

ga...@cs.arizona.edu http://www.cs.arizona.edu/~gary/

Feb 18, 1999, 3:00:00 AM

Gary Wong <ga...@cs.arizona.edu> writes:

> A 95% prediction interval for the score after 1000 games between evenly

> matched players is from -186 to +186 points.


Yes, but note that another "95% prediction interval" for the same

hypothesis (and your mathematical model) is that the score is anything

higher than -156 points. And another 95% prediction interval is that

the score is anything less than +156 points. And another 95% prediction

interval is that the score is anywhere outside the range from -6 points

to +6 points.

All of these prediction intervals are equally valid a priori. If you

played 1000 games and at the end the score were +3 points, then you

could "reject the null hypothesis at the 95% confidence level" on the

basis of the last confidence interval mentioned above.

Whether you believe that means the null hypothesis is wrong depends, at

least in part, on how plausible you think it is that there is a

mechanism operating which causes either high or low scores to be more

likely than scores near zero.
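Each of these alternative intervals really does carry ~95% probability under the same normal model, which can be checked numerically (phi below is the standard normal CDF; numbers are for 1000 games between even players):

```python
# Checking that each interval covers ~95% probability under the model
# Normal(0, 9*1000) for the total score after 1000 even games.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

sigma = 3 * sqrt(1000)                          # std dev of the total score
print(phi(186 / sigma) - phi(-186 / sigma))     # within -186..+186: ~0.95
print(1 - phi(-156 / sigma))                    # above -156:        ~0.95
print(phi(156 / sigma))                         # below +156:        ~0.95
print(1 - (phi(6 / sigma) - phi(-6 / sigma)))   # outside -6..+6:    ~0.95
```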

David desJardins

Feb 18, 1999, 3:00:00 AM

Gary Wong wrote:

>

> > 4) To make it more difficult, the same scenario as shown above, but now

> > you are 5% better than your opponent (equity = +0.01 per game).

>

> I'm not sure what you mean by 5% better (expecting to win 55% of the games

> would be worth well over 0.01ppg).


Oops! Of course I meant 0.1 ppg. Sorry for that mistake; tough night last night ;-)

But now I have two models:

1. Stig Eide

> If you play n games, a 96% confidenceinterval for two

> equally good players is 0 +/- 5*sqrt(n) points.

2. Gary Wong

> A reasonable model for the distribution of points after a long series of
> money games is that the total score after n games is normally distributed
> with mean pn and variance 9n (where p is the advantage you have in
> points per game).

I also thought about the normal distribution, but wasn't able to get the formula. Could you, Gary, explain where the 9 in "9n" comes from? And a last question to Stig: how did you get your formula?

Ciao

acepoint (Achim)

Feb 18, 1999, 3:00:00 AM

Achim Müller <acep...@deltacity.net> writes:

> But now I have two models:
>
> 1. Stig Eide
>
>> If you play n games, a 96% confidence interval for two
>> equally good players is 0 +/- 5*sqrt(n) points.
>
> 2. Gary Wong
>
>> A reasonable model for the distribution of points after a long series of
>> money games is that the total score after n games is normally distributed
>> with mean pn and variance 9n (where p is the advantage you have in
>> points per game).
>
> I also thought about the normal distribution, but wasn't able to get the
> formula. Could you, Gary, explain where the 9 in "9n" comes from?

Sure. There's no deep theoretical reason for picking that value, but an

observation of several hundred games of mine showed a sample variance

of 8.4. This result and others I've occasionally seen posted here

lead me to estimate that the true variance is somewhere around 9.

(Obviously it will depend on the players, the rules, etc. etc., but

9 is a convenient rule of thumb.)
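The estimate is easy to reproduce from a list of game results; the scores below are invented for illustration, not Gary's actual data:

```python
# Sketch: estimating the per-game variance from recorded money-game
# results (points won or lost per game, cube included).
# These ten scores are hypothetical.
from statistics import mean, variance

results = [1, -2, 4, -1, 2, -4, 1, 1, -2, 8]
print(mean(results))        # average points per game
print(variance(results))    # sample variance of a single game's score
```

A sample variance computed this way from a few hundred real games is what gives figures like the 8.4 mentioned above.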

The variance I estimate agrees fairly closely with Stig's; a variance of

9 is a standard deviation of 3 so I would predict a +/-2 sigma interval

of 0 +/- 6*sqrt(n) points -- essentially identical to his result, at this

level of accuracy.

(The normal distribution is a bad fit for small numbers of games, of course: backgammon scores have very long tails, since the cube gives a relatively high probability of a result a long way from the mean. It's not until you add up a lot of games that the central limit theorem kicks in and the approximation becomes reasonable. In practice I wouldn't be too worried about weird cube effects once there were, say, 50 or more games.)
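The central-limit point can be illustrated with a toy simulation. The single-game distribution below is invented (cube-like doublings, not fitted to real play); it is heavily non-normal on its own, yet sums over 50 games behave like the model:

```python
# Toy CLT illustration: a made-up, long-tailed per-game score
# distribution, summed over 50-game sessions.
import random
random.seed(1)

def one_game():
    # hypothetical: +/-1 or +/-2 base result, occasionally doubled/redoubled
    return random.choice([1, -1, 1, -1, 2, -2]) * random.choice([1, 1, 1, 2, 4])

sessions = [sum(one_game() for _ in range(50)) for _ in range(10_000)]
m = sum(sessions) / len(sessions)
v = sum((s - m) ** 2 for s in sessions) / len(sessions)
print(m, v)   # mean near 0; variance near 50 * 9.2 (this toy game's variance)
```

With this particular made-up distribution the per-game variance happens to come out near 9, so session totals look much like the Normal(0, 9n) model even though individual games do not.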
