
Jan 16, 2003, 4:17:05 PM

To have a 99% chance of surviving an n-game money session against an equally

skilled opponent, you need roughly 5*sqrt(n)+20 points of capital.

Adam

Jan 16, 2003, 10:51:04 PM

Typo alert - should have read 5*sqrt(n)+10

"Adam Stocks" <ad...@stocks49.freeserve.co.uk> wrote in message

news:b076ul$oko$1...@newsg1.svr.pol.co.uk...

Jan 17, 2003, 2:05:36 PM

Ok, I'll bite.

Such formulae are useful for people with more finite bankrolls

(politically correct for poor) so that they know not to go in over their

head. But, without proper justification they are just letters and

numbers.

Can you provide some insight about how you got this? This might help

people interpret it appropriately or modify it.

e.g., assumed a large number of games is probably Gaussian, assumed

something about the variance, you have empirical evidence after lots of

money sessions, etc. I assume you'll have to make some assumptions

about gammon percentages, but the backgammon community hasn't had a

problem with this before as long as it's well stated.

I'm particularly interested in where the +10 part comes from. It seems

that the amount of capital should be directly proportional to sqrt(n).

Is this just to make sure the formula works for sessions with a low

number of games, where a Gaussian distribution is a bad assumption?

There's one way that I'd like to modify it. Since I assume most people

want to play against people who are worse than them in the long run, it

might be useful to see where the chance of winning against that opponent

in a single game fits into this formula. When you play against someone

worse than you, the necessary capital goes down, but by how much?

Thanks for the food for thought,

Chris

Jan 17, 2003, 2:06:09 PM

For those of us who are mathematically impaired, please interpret..

"Adam Stocks" <ad...@stocks49.freeserve.co.uk> wrote in message

news:b07u19$5uq$1...@news5.svr.pol.co.uk...

Jan 17, 2003, 7:17:53 PM

>...

> For those of us who are mathematically impaired, please interpret..


> >

> > "Adam Stocks" <ad...@stocks49.freeserve.co.uk> wrote in message

> > news:b076ul$oko$1...@newsg1.svr.pol.co.uk...

> > > To have a 99% chance of surviving an n-game money session against an

> equally

> > > skilled opponent, you need roughly ' 5*sqrt(n)+10 '

points of capital.

Say you agree to play someone 25 games at $10 a point. This formula

then says that you should have $350 in order to be fairly confident

that you won't go broke and can handle the normal equity swings.

But, if you are playing someone who say is 1/3 of a point per game

stronger than you... it may take more cash...
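[The arithmetic in this worked example is easy to check. A minimal Python sketch, using only the numbers from the posts above (25 games, $10 a point, and Adam's corrected rule of thumb):]

```python
import math

def bankroll_points(n_games: int) -> float:
    """Adam's corrected rule of thumb: points of capital needed for a
    99% chance of surviving an n-game money session against an
    equally skilled opponent (5*sqrt(n) + 10)."""
    return 5 * math.sqrt(n_games) + 10

points = bankroll_points(25)   # 25-game session
dollars = points * 10          # at $10 a point
print(points, dollars)         # 35.0 points -> $350.0
```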

Jan 17, 2003, 8:10:44 PM

"Christopher Alvino" <gtg...@prism.gatech.edu> wrote in message

news:3E285400...@prism.gatech.edu...

> Ok, I'll bite.

>

> Such formulae are useful for people with more finite bankrolls

> (politically correct for poor) so that they know not to go in over their

> head. But, without proper justification they are just letters and

> numbers.

Everybody's bankroll is finite, even Bill Gates'. To maximise profit

against a weaker opponent, one must play for the maximum stakes possible

within the bounds of one's personal threshold for acceptable risk of ruin -

i.e. going in NEARLY over one's head, but not quite. A similar principle

applies if the opponent is of equal skill level, except one is trying to

minimise the amount of capital required to stay within the ruin threshold

for a given session length.

> Can you provide some insight about how you got this? This might help

> people interpret it appropriately or modify it.

> e.g., assumed a large number of games is probably Gaussian, assumed

> something about the variance, you have empirical evidence after lots of

> money sessions, etc. I assume you'll have to make some assumptions

> about gammon percentages, but the backgammon community hasn't had a

> problem with this before as long as it's well stated.

I, being a simple type of folk (I'm not a mathematician), used a simple

model to determine the stated formula. The base formula I came up with

gives capital in direct proportion to sqrt(n), then I adjusted it.

The distribution of net (games won - games lost) as a result of playing n

games will tend towards a Gaussian distribution as n gets larger, i.e. of

the form (p+q)^n. The standard deviation of the net gain from an n-game

session will be sqrt(n). Don't ask me to prove it, it just does for all

the examples I tried :-). A 1% probability threshold in a Gaussian

distribution occurs at about 2.5 standard deviations above/below mean, so

the probability of losing 2.5 or more standard deviations of games is about

1%.

In the base formula, I have subjectively assumed a fixed amount of gain/loss

per game equal to 2.25*(stake per point), a figure which the backgammon

community seems to have deemed reasonable for typical (non-beginner)

players. Therefore, there is about a 1% chance of losing (2.5*2.25)*sqrt(n)

= 5.625*sqrt(n) or more points in an n-game session. Therefore there

is a 99% chance of not losing 5.625*sqrt(n) points in an n-game session. So

Capital=5.625*sqrt(n) was my base formula.
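[That 1% figure can be checked empirically. A quick Monte Carlo sketch, not from the thread, under the post's own assumption of a flat 2.25-point gain or loss per game between equal players:]

```python
import math
import random

random.seed(0)
n, trials, swing = 100, 50_000, 2.25
threshold = -5.625 * math.sqrt(n)   # 2.5 sd of games * 2.25 points each

busts = 0
for _ in range(trials):
    # net points after n games, each won or lost at a flat 2.25 points
    net = sum(random.choice((swing, -swing)) for _ in range(n))
    if net <= threshold:
        busts += 1

# P(Z <= -2.5) for a Gaussian is about 0.62%, so the estimate
# should land near 0.006 -- i.e. roughly the claimed 1% level
print(busts / trials)
```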

> I'm particularly interested in where the +10 part comes from. It seems

> that the amount of capital should be directly proportional to sqrt(n).

> Is this just to make sure the formula works for sessions with a low

> number of games, where a Gaussian distribution is a bad assumption?

Right. My base model assumes a fixed amount of 2.25 points per game as the

gain or loss, which is obviously not the case in practice. Due to big cubes

and gammons, the net gains/losses possible for each game will form a

non-rectangular (Gaussian-looking) distribution of their own, (i.e. a

distribution within a distribution) which, for a large n, won't matter, but

for a small n, will give 'fatter tails' to the overall distribution. Therefore, I weighted the formula

in respect of small n values by adding the +10 constant, and reducing the

5.625 factor down to 5.000 to help compensate for the +10 constant, giving

Capital=5*sqrt(n)+10. It's only an approximation, but it's easy to remember

(hey, it even rhymes), and looks very reasonable to me for any realistic n.

> There's one way that I'd like to modify it. Since I assume most people

> want to play against people who are worse than them in the long run, it

> might be useful to see where the chance of winning against that opponent

> in a single game fits into this formula. When you play against someone

> worse than you, the necessary capital goes down, but by how much?

That was my next quest. Unfortunately, since I'm not a mathematician, I

will have to model this using a spreadsheet to provide, say, a (0.52+0.48)^n

distribution, and see empirically what the standard deviations are etc.

Although I'm sure Doug will be delighted to enlighten us all :-).

> Thanks for the food for thought,

> Chris

Yw,

Adam

Jan 17, 2003, 8:36:19 PM

"RobAdams" <georgeha...@yahoo.com> wrote in message

news:bf147be3.03011...@posting.google.com...

That's true (see my other posting). But if you THINK you are the stronger player,

you would be prudent to work out the risk of ruin based on the assumption

that you MAY be only, say, of equal skill level, not actually stronger,

since by playing him, you are simply testing your hypothesis that you are

stronger, so you need to know the risk of ruin for the case that your

hypothesis is wrong.

Adam

Jan 18, 2003, 2:02:25 AM

Adam,

Ok, then I agree with your formula for bankroll, assuming finite length

sessions and that you are of equal skill with your opponent. I've seen

arguments like this before in poker.

Now, here's another formula to add to your collection, based on similar

assumptions. I think this is well known for poker people and maybe for

BG people too.

If you have an edge on your opponent, and you win on average P points

per game against that opponent, then you need a bankroll of:

BR = (Z^2 * S^2) / (4 * P),

to have some chance (depending on the below table) of not going bust,

where Z is the number of standard deviations away from the mean and S is

the standard deviation (where you used 2.25). Note that the number of

games disappears from this equation since you have an edge on this

opponent. Since you have an edge, we can find the local minimum of the

'bad luck' curve for a certain number of standard deviations away from

the mean.

Table:

Risk of Ruin     Z
------------------
  20%          0.85
  10%          1.3
   5%          1.7
   2%          2.1
   1%          2.3   (by the way, I get 2.3 here instead of 2.5 from my
                      Gaussian random variables reference)
 0.5%          2.6

etc.
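[Those Z values can be reproduced from the standard normal upper tail with nothing beyond the math module. A sketch added for checking; the thread's table is rounded to roughly one decimal place:]

```python
import math

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable, via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def z_for_risk(risk: float) -> float:
    """Invert the upper-tail probability by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if upper_tail(mid) > risk:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for risk in (0.20, 0.10, 0.05, 0.02, 0.01, 0.005):
    print(f"{risk:>5.1%}  Z = {z_for_risk(risk):.2f}")
```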

You can plug in whatever you like depending on what you want your risk

of ruin to be and whatever your edge on your opponent is. For instance,

say I'm Bill Robertie and I have a 0.02 point per game edge on my

opponent, TD-gammon 2.1 [http://www.research.ibm.com/massive/tdl.html].

I want my risk of ruin to be 1% and my standard deviation per game is

2.25 points per game as you stated. Then my necessary bankroll would be,

(2.25^2 * 2.3^2) / (4 * 0.02) = 334.8 points.

Which is about $1674 if I'm playing $5 per point (a stake at which I'm

sure Robertie wouldn't bother). This seems reasonable for a game with

that small of an edge and that low of a risk of ruin.
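[Plugging the numbers from this example straight into the formula (a sketch using the post's own values of Z, S and P):]

```python
def bankroll(z: float, s: float, p: float) -> float:
    """Bankroll needed to cover a Z-sigma bad-luck streak:
    BR = (Z^2 * S^2) / (4 * P)."""
    return (z ** 2 * s ** 2) / (4 * p)

br_points = bankroll(z=2.3, s=2.25, p=0.02)
print(round(br_points, 1))     # 334.8 points
print(round(br_points * 5))    # about $1674 at $5 per point
```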

One more thing to note, if P approaches 0 (that is you are equal with

your opponent) then this equation grows without bound (approaches

infinity). This makes sense since I'm assuming an infinite number of

games. Therefore, it's the same as when N approaches infinity in your

formula.

Chris

Jan 18, 2003, 1:14:01 PM

Chris,

It looks to me at first glance as if your BR formula is of the 'freeze-out'

type - i.e. gives the bankroll necessary to avoid ruin before your opponent

is ruined, during an indefinite number of plays (mine doesn't attempt to do

that, since it assumes that the opponent is always solvent). I'm not quite

sure how the P (p.p.g.) variable would be applied to poker though. Do you

have any more info on your formula's construction ?

Adam (Mindsports Olympiad No Limit Hold'em Gold medalist, 2002 { i just

winged it lol } )

"Christopher Alvino" <gtg...@prism.gatech.edu> wrote in message

news:3E28FC01...@prism.gatech.edu...

Jan 18, 2003, 1:52:46 PM

Adam,

Actually, it doesn't care what your opponent's bankroll is. It assumes

that the opponent is always there. So it's not of the freeze-out type.

It only assumes that you are a favorite over your opponent. What do you

mean about the opponent being solvent?

You can construct this formula in the following way:

Assume that at time 0, you have a bankroll of BR and you win P points

per game on average. Then your expected assets after N games is BR+N*P.

But of course this differs based on the variance. If you have bad luck

that is Z standard deviations below the mean, then your assets will

follow the curve BR+N*P-Z*S*sqrt(N). Again S is the standard deviation

per game. Keep in mind, this is accurate when the variable becomes

"more Gaussian".

So this curve BR+N*P-Z*S*sqrt(N) has the property that it dips down

because of the bad luck, then starts to go back up because of the N*P

term which eventually overcomes the bad luck term, -Z*S*sqrt(N). So we

can find the number of games M at which this curve is at its minimum

with some calculus... and that is:

M = (Z^2 * S^2) / ( 2 * P)

Now, plugging this into N in the formula BR+N*P-Z*S*sqrt(N) will give us

that our assets at the bottom of that curve (which represents the lowest

point of the bad luck streak) are:

BR - (Z^2 * S^2) / (4 * P)

Now, we simply don't want to go bust. So this has to be greater than 0

at all times, which means we want BR >= (Z^2 * S^2) / (4 * P).
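[The minimum Chris derives can be checked numerically against the curve itself. My own sketch, using brute force over N instead of calculus, with the closed forms in their corrected versions from later in the thread:]

```python
import math

Z, S, P = 2.3, 2.25, 0.02   # risk level, sd per game, edge in ppg

def luck_curve(n: float) -> float:
    """Assets minus starting bankroll along a Z-sigma bad-luck run:
    N*P - Z*S*sqrt(N)."""
    return n * P - Z * S * math.sqrt(n)

# closed forms from the (corrected) derivation
m_analytic = (Z * S) ** 2 / (2 * P) ** 2    # worst-case game count
drawdown = (Z ** 2 * S ** 2) / (4 * P)      # required bankroll

# brute-force search for the integer N minimising the curve
m_numeric = min(range(1, 50_000), key=luck_curve)

print(m_analytic, m_numeric)              # both near 16738 games
print(drawdown, -luck_curve(m_numeric))   # both near 334.8 points
```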

In poker, some people chart their hourly win and hourly standard

deviation. This information, once you have enough games for it to be

accurate, can be plugged into this formula in the same way.

Chris

Jan 18, 2003, 1:57:53 PM

Ok, my turn for a correction. M should have been,

>

> M = (Z* S)^2 / ( 2 * P)^2

>

Jan 18, 2003, 3:36:51 PM

In the light of your correction, assuming your differentiation is correct,

(I'm a wee bit too rusty to check right now), shouldn't the minimum function

read M = (Z*S)^2 / (4*(P^2)) ?

Adam

"Christopher Alvino" <gtg...@prism.gatech.edu> wrote in message

news:3E29A27E...@prism.gatech.edu...

Jan 18, 2003, 4:37:07 PM

Yes, that's the same as,

M = (Z* S)^2 / ( 2 * P)^2


Jan 19, 2003, 11:43:01 AM

"Christopher Alvino" <gtg...@prism.gatech.edu> wrote in message

news:3E29A27E...@prism.gatech.edu...

> Adam,

>

> Actually, it doesn't care what your opponent's bankroll is. It assumes

> that the opponent is always there. So it's not of the freeze-out type.

> It only assumes that you are a favorite over your opponent. What do you

> mean about the opponent being solvent?

>

> You can construct this formula in the following way:

> Assume that at time 0, you have a bankroll of BR and you win P points

> per game on average. Then your expected assets after N games is BR+N*P.

> But of course this differs based on the variance. If you have bad luck

> that is Z standard deviations below the mean, then your assets will

> follow the curve BR+N*P-Z*S*sqrt(N). Again S is the standard deviation

> per game. Keep in mind, this is accurate when the variable becomes

> "more Gaussian".

>

> So this curve BR+N*P-Z*S*sqrt(N) has the property that it dips down

> because of the bad luck, then starts to go back up because of the N*P

> term which eventually overcomes the bad luck term, -Z*S*sqrt(N). So we

> can find the number of games M at which this curve is at it's minimum

> with some calculus.. and that is:

>

> M = (Z^2 * S^2) / ( 2 * P)

Agreed. Minimum occurs when P - 0.5*Z*S/sqrt(N) = 0, which results in BR

being minimum at N = (Z^2 * S^2) / (2*P)^2, which, when plugged into the BR

formula, reduces to

BR >= (Z^2*S^2) / (4*P)

I had to check my differentiation formula was right.

> Now, plugging this into N in the formula BR+N*P-Z*S*sqrt(N) will give us

> that our assets at the bottom of that curve (which represents the lowest

> point of the bad luck streak) are:

>

> BR - (Z^2 * S^2) / (4 * P)

>

> Now, we simply don't want to go bust. So this has to be greater than 0

> at all times, which means we want BR >= (Z^2 * S^2) / (4 * P).

>

> In poker, some people chart their hourly win and hourly standard

> deviation. This information, once you have enough games for it to be

> accurate, can be plugged into this formula in the same way.

>

> Chris

I didn't have a Z reference table to hand at the time, so I interpolated

from manual (p+q)^n calculations to get my 2.5 figure, hence the

discrepancy. Your 2.3 figure is better.

There has been some work previously done to come up with a p.p.g advantage

for a given Elo rating advantage, and the most common figure seems to be

about +0.1ppg per 50 Elo (although I prefer +0.09ppg, so I will use 0.09 for

the minute). And, also in the backgammon case, S will be about 2.25ppg,

(leaving Z as a constant 2.3), we can now say that for an R Elo advantage,

BR = (Z^2 * S^2) / (4*P) = (26.780625) / (4*0.0018*R)

BR = 3719.53125 / R

e.g. Bill Robertie 0.02ppg, R = (0.02/0.09)*50 = +11.1111111 Elo

BR = 3719.53125 / R = 334.76 points

If we use 0.1ppg per 50 Elo instead of 0.09ppg, then BR = 3347.578125 / R,

so a reasonable compromise would be

BR = 3500 / R
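[The shortcut can be compared against the exact formula in a few lines (a sketch using the constants above; 0.0018 points per game per Elo point is the post's 0.09 ppg per 50 Elo):]

```python
def br_exact(r_elo: float, z: float = 2.3, s: float = 2.25,
             ppg_per_elo: float = 0.0018) -> float:
    """BR = (Z^2 * S^2) / (4 * P), with edge P = ppg_per_elo * R."""
    return (z ** 2 * s ** 2) / (4 * ppg_per_elo * r_elo)

def br_shortcut(r_elo: float) -> float:
    """Adam's compromise rule of thumb."""
    return 3500 / r_elo

r = (0.02 / 0.09) * 50         # Robertie's 0.02 ppg edge in Elo terms
print(round(br_exact(r), 2))   # about 334.76 points
print(round(br_shortcut(r)))   # about 315 points
```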

Adam

Jan 19, 2003, 12:40:24 PM

My own cube distribution calculations give an approximate probability of a

cube being 8 or more in a money game as being roughly 4%, and with a 20%

gammon rate, the probability of losing 16 or more points in a single game

would be roughly 0.004. This is significantly over the 1% Z=2.3 threshold,

and with my using Z=2.5 earlier in error, I think that my original formula

for equal players should be more like BR = 5*sqrt(N)+5, which gives a more

realistic looking BR = 10 for a 1-game session, 12.1 for 2 games, 13.7 for 3

games, 40.4 for 50 games.

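[The sample values quoted for the revised rule check out (a one-function sketch):]

```python
import math

def bankroll_revised(n: int) -> float:
    """Adam's revised rule for equal players: BR = 5*sqrt(N) + 5."""
    return 5 * math.sqrt(n) + 5

for n in (1, 2, 3, 50):
    print(n, round(bankroll_revised(n), 1))
# 1 -> 10.0, 2 -> 12.1, 3 -> 13.7, 50 -> 40.4
```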

Adam

"Adam Stocks" <ad...@stocks49.freeserve.co.uk> wrote in message

news:b0ekfa$pml$1...@newsg1.svr.pol.co.uk...
