
Jun 4, 1999, 3:00:00 AM

hi BG'ers!

Has anyone done any analysis on what optimum stakes to use given:

1) account size (money saved allocated for backgammon)

2) skill estimation (for example, assuming head-to-head play and a

53-47% skill advantage)

3) head to head; or chouette play (say with 4 players)

any thoughts?

Sent via Deja.com http://www.deja.com/

Share what you know. Learn what you don't.

Jun 4, 1999, 3:00:00 AM

I don't know if such work has been done for backgammon in particular,

but this sort of thing is well understood for other games (e.g.

blackjack and poker.) Check out the gambling theory books on my books

page (http://www.gammon.com/books/) and you'll probably find some good

information.

If I had to guess, I'd say that you probably need a $1000 bankroll to

comfortably play heads-up at $5/point, and maybe $2500 for a

four-person chouette at those stakes. But that's just a SWAG.

-Patti

--

Patti Beadles |

pat...@netcom.com/pat...@gammon.com |

http://www.gammon.com/ | The deep end isn't a place

or just yell, "Hey, Patti!" | for dipping a toe.

Jun 5, 1999, 3:00:00 AM

hi,

you will find some mathematical answers and other gambling material in

the following books :

How to Gamble If You Must: Inequalities for Stochastic Processes

Lester E. Dubins and Leonard J. Savage

Dover Books

The Theory of Gambling and Statistical Logic

Richard A. Epstein

Academic Press

Jun 5, 1999, 3:00:00 AM

In Bell's "Backgammon: Winning with the Doubling Cube," he states you

should be able to lose 100 pts per player in a chouette for any one

session without it affecting your play.

jdg

pat...@netcom.com (Patti Beadles) wrote:

**** Remove _spamme_ from e-mail address to respond. ****

Jun 5, 1999, 3:00:00 AM

In article <37598dbe...@news.thegrid.net>,

John Graas <jgraas_...@csi.com> wrote:

>In Bell's "Backgammon: Winning with the Doubling Cube," he states you

>should be able to lose 100 pts per player in a chouette for any one

>session without it affecting your play.

Interesting! That's "in one session", but doesn't really address the

long haul. It sounds like my 500 point bankroll for a four-player

chouette was low.

Does he support this number in any way?

-Patti

--

Patti Beadles |

pat...@netcom.com/pat...@gammon.com |

http://www.gammon.com/ | Try to relax

or just yell, "Hey, Patti!" | and enjoy the crisis

Jun 6, 1999, 3:00:00 AM

Support? Not really.

His guideline is built around the idea of being able to make the

correct take/double decision at any cube level within the session

without money pressure getting in the way. The idea being not to give

up any equity over the long run due to being undercapitalized.

For any one session, your bankroll looks fine. For the long haul, I'd

at least triple it: if you drop 400 pts twice in a row, you are

clearly out of your league, and you need to re-evaluate. And you'll

still have two sessions' worth remaining when you halve the stake

you're willing to play for (realising that you are out of your

league).

Going back to my 21 counting days, if you had a 1% advantage over the

house, you needed a stake of 200 average bets to give yourself a 95%

(2 std deviations) chance of doubling your money before losing it all.

Twice this -- 400 bets -- for a long-run bankroll. At no advantage

(counting for comps), you needed a bankroll of at least 1,000 bets to

be reasonably assured of avoiding "gambler's ruin."
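The "200 average bets" figure can be sanity-checked with the classical gambler's-ruin formula. A sketch, under a simplifying assumption not in the post: each bet is an even-money coin flip won with probability 0.505 (a 1% edge); real blackjack outcomes have higher variance, which pulls the true chance of doubling down toward the quoted 95%.

```python
# Gambler's-ruin check of the "200 average bets" rule of thumb.
# Assumption (mine, not the poster's): unit even-money bets won with
# probability 0.505.  Blackjack's payoff variance (> 1) lowers the result.

def p_double_before_ruin(units, win_prob):
    """Probability of reaching 2*units before 0, betting 1 unit at a time."""
    ratio = (1 - win_prob) / win_prob          # q/p
    return (1 - ratio**units) / (1 - ratio**(2 * units))

p = p_double_before_ruin(200, 0.505)
print(f"P(double a 200-unit bankroll before ruin) = {p:.3f}")  # ~0.98
```

With unit variance this comes out near 98%; the higher variance of real blackjack hands is what brings it down to roughly the 95% quoted.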

I _guess_ that the "average bet" for a bg game is a 2-cube (anybody

have the real number?). Then, for example, every 4th game of a 4-way

chou, you're averaging an effective 6-cube. Assuming a 50% win rate in

the box, you're looking at two 6-cubes and three 2-cubes every five

games: an average of a 3.6 cube. :-)
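The back-of-envelope arithmetic above, restated as a tiny sketch (the two-box-games-in-five cycle is the poster's own assumption):

```python
# jdg's model: in a 4-player chouette with an "average bet" of a 2-cube,
# a box game against 3 opponents is an effective 6-cube.  With a 50% win
# rate in the box, count two box games and three side games per five.
cubes = [6, 6, 2, 2, 2]            # two 6-cubes, three 2-cubes
average = sum(cubes) / len(cubes)
print(f"average effective cube: {average}")  # 3.6
```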

So... are we talking 3,600 pts for a long-run bankroll? Assuming even

competition? Sounds a bit high. Are we talking a 1,440 pt long-run

bankroll if you have a slight advantage? Sounds reasonable, but maybe

a bit conservative.

As for the long-haul, Keynes (economist) is often quoted as saying,

"In the long-run we will all be dead."

jdg

pat...@netcom.com (Patti Beadles) wrote:

**** Remove _spamme_ from e-mail address to respond. ****

Jun 9, 1999, 3:00:00 AM

In article <7j91hk$ida$1...@nnrp1.deja.com>, <stho...@armstronglaing.com> wrote:

>hi BG'ers!

>

>Has anyone done any analysis on what optimum stakes to use given:

>

>1) account size (money saved allocated for backgammon)

>2) skill estimation (for example, assuming head to head and a 53 - 47%

>skill advantage)

>3) head to head; or chouette play (say with 4 players)

>

>any thoughts?

Sure! (Any SOUND answers? That may be tougher. :) And since

I haven't posted in two and a half months, I need to compensate by

writing a LONG one.

This is an interesting (IMO) topic. I don't know of any detailed

work that has been done on this subject--money management for head-to-

head (and chouette) backgammon play. I can imagine that a computer

simulation could be developed which would help answer this question

in greater detail than what I am going to present BoE (back of the envelope). (I believe

John Graas's post was along a similar vein. I'm just more verbose. :)

One money management technique which HAS been studied fairly

extensively is known as the "Kelly Criterion". There is a web page

(which appears to have originally been a newsgroup post):

http://www.primenet.com/~jaygee/KELLY.HTM

This page is definitely more mathematical than what I am going to

present here, but seems to make sense (on my initial, quick reading).

First some sketchy history (and, as usual, from memory so I

may have messed up some of the details): In the 1950's, a researcher(s)

for Ma Bell (probably Bell Labs) wrote a journal article about signal

routing. I believe it was pub'ed in one of Ma Bell's own journals.

The author (or one of the authors) was named 'Kelly'. Someone who

read the article (sorry, no name here) realized that the paper's

contents actually applied well to money management in some gambling

situations (blackjack?). Starting in the early 60's, Edward O. Thorp

(of "Beat the Dealer" and "Thorp(e) Count" fame) made the Kelly method

popular as a money management technique for casino blackjack (assuming

intelligent card counting practice).

Although I don't know if it has ever been proven, Thorp (see

for example, his book "Mathematics of Gambling", 1984, Lyle Stuart

publisher) contends that the Kelly Criterion is an optimal technique

for making money as fast as possible under the condition that you are

guaranteed never to blow your entire bankroll. There are some

conditions on the Kelly method in its strictest application:

1) player MUST have an edge!

2) player has the option of varying his bet amount at each new

opportunity (e.g. before each round of cards is dealt).

3) during a particular trial (e.g. during a particular hand of

cards) the amount bet does not change.

4) bets are made "even money"; that is, you are paid the same for

a win as you would have to shell out for a loss. (NOTE: there

is an expanded Kelly System which takes into account odds being

offered, e.g. for horse racing. Although the enhancement is

simple, I don't think it applies to my model for backgammon

below so I'm not including it. See the above WWW page for

details of this enhancement.)

Before getting specifically to backgammon, let's create an

(artificial) example. (Note: this example is a modification of

one in Thorp's "Mathematics of Gambling" book.)

Suppose an HONEST (but maybe not particularly bright) person approaches

you with the following proposition:

You place a bet (size your choice). You roll a fair die. If it comes

up 1,2,3, or 4, he gives you the amount of your bet. If it comes up

5 or 6, you lose the amount of your bet.

He will play for the next four hours (or less, if YOU decide to quit).

You have $90 in your pocket. How much should you bet on each roll?

(I should also say that YOU are honest, meaning you won't bet more than

is in your pocket!)

Hopefully you realize that betting EVERYTHING each turn is likely

to end in ruin for you. But by betting very conservatively you'll

blow a chance to make a lot of money. What is the best compromise

so that you won't go broke (and have to quit such a lucrative proposition)

but still will rake in a large profit?

The simple ("oddsless") Kelly Criterion says you should bet a

percentage of your current bankroll which is equal to your percentage

edge. In the above example, you are a 2::1 favorite on each roll.

On average you will win 2/3 of the tosses and lose 1/3. Take the

difference (2/3 - 1/3) and that is your 'edge'--1/3. So you should

wager 1/3 of your current bankroll at every opportunity.

Note that you must CHANGE the amount bet on every new opportunity.

Let's take a hypothetical sequence:

    Bankroll      Bet     Result
      90.00     30.00     lose
      60.00     20.00     win
      80.00     26.67     win
     106.67     35.56     win
     142.23       ...

By lowering your bet as your bankroll decreases, you ensure that you

never go broke. But by raising it as your bankroll increases, you

give yourself the maximum opportunity for growth.
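The 1/3 figure for the die proposition can be seen directly: the Kelly bet maximizes the expected log of your bankroll, and for an even-money bet won with probability 2/3 the expected log growth per roll is g(f) = (2/3)ln(1+f) + (1/3)ln(1-f), which peaks at f = 2/3 - 1/3 = 1/3. A short sketch:

```python
# Expected log growth per roll when betting fraction f of bankroll on an
# even-money proposition won with probability 2/3 (the die game above).
from math import log

def growth(f, p_win=2/3):
    return p_win * log(1 + f) + (1 - p_win) * log(1 - f)

fractions = [i / 1000 for i in range(1, 1000)]
best = max(fractions, key=growth)
print(f"best fraction ~ {best:.3f}")          # ~0.333, i.e. 1/3
print(f"growth at 1/3: {growth(1/3):.4f}")    # positive
print(f"growth at 0.99: {growth(0.99):.4f}")  # negative: near all-in loses
```

Betting more than 1/3 actually *lowers* long-run growth, and betting nearly everything each roll has negative expected log growth, which is why the all-in strategy ends in ruin.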

OK, back to reality--backgammon. First we see that backgammon

doesn't fulfill the strict conditions stated above. For one, a game

starts off worth a point but usually ends up being worth more than

that. The value of any given (money) game is unknown before it

begins. Secondly it is not customary in a BG money session for the

stake to change, and certainly not every game! Typically two players

agree beforehand on a stake and it remains that way throughout the

session (barring certain kinds of 'steaming', of course).

This 'unknown' value of each game enters my money management

technique in two ways. First, we can assign a typical value to a

game. This has been discussed on the newsgroup before, and a good

number to use is '3'. That is the standard deviation for money play

for Jellyfish, and "not too loose, not too tight" humans as well.

The second place the value uncertainty enters is what I will

call an 'escrow'. I'm borrowing this term from finance, but most

likely I'm abusing its accepted meaning. (Sorry, bankers among you.)

Consider the MAXIMUM amount you are likely to lose on a single game

over MANY sessions. Experience enters here. I'll take myself as

an example. (Note that I tend to be conservative compared to your

typical money player in handling the cube.) In my lifetime, I'd

guess I've played around 20,000 'money' games of backgammon, head-

to-head and chouette. I only recall the cube reaching 32 twice.

16 is a rarity. So in several money sessions, the worst I can

imagine is seeing a 16 cube accepted. I set my escrow at twice that

= 32. If my bankroll EVER gets less than 32 (my escrow), I must stop

playing (or go to the bank machine...).

Next I need to estimate my percentage edge. Thanks to the

BG ratings formulas and online servers, this is a lot easier than it used

to be. I just need to know (or estimate) my opponent's online rating

and my own. The difference tells me my edge:

(see Kevin Bastian's page: http://www.northcoast.com/~mccool/fibsrate.html)

    Ratings difference    Edge in a single game
            50                     3%
           100                     6%
           150                     9%
           200                    11%

(I assumed 1-point matches. Note that the relationship between rating

difference and edge is close enough to linear that interpolation is

reasonable.)
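Since the post says the ratings-difference-to-edge relationship is close enough to linear to interpolate, the table can be wrapped in a small helper. A sketch; the (0, 0%) anchor point is my assumption, not from the table:

```python
# Piecewise-linear interpolation of the ratings-difference -> edge table.
# Table values are from the post; the (0, 0.0) anchor is an assumption.
diffs = [0, 50, 100, 150, 200]
edges = [0.0, 3.0, 6.0, 9.0, 11.0]   # edge in percent

def edge_percent(rating_diff):
    """Interpolated single-game edge; clamps at the table's endpoints."""
    if rating_diff <= diffs[0]:
        return edges[0]
    if rating_diff >= diffs[-1]:
        return edges[-1]
    for i in range(len(diffs) - 1):
        if diffs[i] <= rating_diff <= diffs[i + 1]:
            span = diffs[i + 1] - diffs[i]
            return edges[i] + (edges[i + 1] - edges[i]) * (rating_diff - diffs[i]) / span

print(edge_percent(75))   # 4.5
```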

I think we have enough info to now speculate on a bankroll size

given a known stake:

bankroll = (escrow + 300/edge) * stake

where 'edge' is in percent, 'escrow' in points, and 'stake' and 'bankroll'

in some appropriate monetary units.

Let's take me as an example (so 'escrow' = 32 points). Say I want

to play $5 per point against someone I estimate to be 50 ratings points

weaker than myself:

bankroll = (32 + 300/3) * $5.

= $660.

OK, ready to try a chouette? Let's assume n total players so a

maximum (when in the box) of n-1 opponents. How does this affect your

escrow? You must multiply it by n-1. And how about your "base amount"?

1/n of the time you are playing for n-1 times the stake (per point)

and (n-1)/n of the time you are playing for a single stake. So your

'average' stake in a chouette is 1/n * (n-1) + (n-1)/n * 1 = 2(n-1)/n.

You must multiply the 300/edge term in the above equation by this

average stake to apply the formula to chouettes:

chouette bankroll = [(n-1)*("1-on-1 escrow") + 600*(n-1)/(n*edge)] * stake

Again, suppose I get in a chouette where I am 50 ratings points better

than the BEST of my opponents (i.e. assume you are ALWAYS playing the

best player), for a $5/point session with a total of four players, I

should have:

chouette bankroll = [3*32 + 600*3/(4*3)] * $5

= $1230.
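Chuck's two formulas, restated as functions so the two worked examples can be reproduced (a direct transcription of the text, nothing added):

```python
# Chuck's bankroll formulas.  'edge' is in percent, 'escrow' in points,
# 'stake' in money per point, n the total number of chouette players.

def heads_up_bankroll(escrow, edge, stake):
    return (escrow + 300 / edge) * stake

def chouette_bankroll(escrow, edge, stake, n):
    # escrow scales by the n-1 opponents; the 300/edge term scales by the
    # average chouette stake 2*(n-1)/n, giving 600*(n-1)/(n*edge).
    return ((n - 1) * escrow + 600 * (n - 1) / (n * edge)) * stake

print(heads_up_bankroll(32, 3, 5))     # 660.0  (the $660 example)
print(chouette_bankroll(32, 3, 5, 4))  # 1230.0 (the $1230 example)
```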

For those who have read this far, I suspect 95% are going to

say "you're nuts! I get into money games all the time with nowhere

near this much cushion." And I believe you. And maybe my numbers

are completely worthless. On the other hand, how often do you have

to resort to "IOU's" or writing checks, or getting out of the game

prematurely because of an uncomfortable losing streak? And have

you ever consciously (let alone unconsciously) changed your doubling

strategy because the cube was getting too high for your (payability)

comfort?

Chuck

bo...@bigbang.astro.indiana.edu

c_ray on FIBS

Jun 9, 1999, 3:00:00 AM

Chuck, I applaud your work! You've done an excellent job of wrapping

backgammon and Kelly betting together.

But I do question one of your assumptions:

> This 'unknown' value of each game enters my money management

>technique in two ways. First, we can assign a typical value to a

>game. This has been discussed on the newsgroup before, and a good

>number to use is '3'. That is the standard deviation for money play

>for Jellyfish, and "not too loose, not too tight" humans as well.

I don't think it's that simple, unless you're inclined to settle any

time you get a four cube or higher. Otherwise, you aren't really

accounting for the swings that you'll get when you get gammoned on that

eight cube.

Of course, we could be discussing apples and oranges here. I'm

talking about total bankroll, and you seem to be talking about session

bankroll.

-Patti

--

Patti Beadles |

pat...@netcom.com/pat...@gammon.com | You are sick. It's the kind of

http://www.gammon.com/ | sick that we all like, mind you,

or just yell, "Hey, Patti!" | but it is sick.

Jun 10, 1999, 3:00:00 AM

I asked Danny Kleinman about applying the Kelly criterion to backgammon

in a letter to the Chicago Point in the January 1989 issue. Kleinman

wrote that Michelin Chabot had written two books applying "Kelly theory" to

backgammon, but Kleinman disparaged the books and didn't give any

references. Kleinman then admitted that he didn't know about the Kelly

criterion and went on to suggest "stakes low enough to absorb a 200

point loss without emotional ruin." This isn't horrible advice, but

with Kelly you can do much better.

Let r be a random variable, the result of a heads-up backgammon

game. Let's say we have the probability distribution for r.

Then the Kelly criterion is that we should set our stakes at

proportion p of our total bankroll each game, where p maximizes

E[log(1+pr)]. If we bet this way, we achieve the greatest expected

bankroll growth in the long run.

An approximation for p that is often used is E[r]/E[r^2].

For example, let's say you have a .1ppg advantage over your opponent,

and that the variance of the single game result is 10. Then this

approximation says that you should set the stakes at .1/10=.01, or

1% of your total bankroll. If your bankroll is $1000, you do best

to play the next game for $10 a point.

A few months back I wrote a program that both calculates the Kelly

approximation and explicitly maximizes E[log(1+pr)], given r's

probability distribution.
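A sketch of the kind of program described: given a discrete distribution for the game result r, compute the approximation E[r]/E[r^2] and compare it with the p that actually maximizes E[log(1+pr)] by grid search. The distribution below is made up purely for illustration (its tails are much thinner than real backgammon's, so the two values nearly agree; with realistic 16- and 32-point tails the exact maximizer falls below the approximation, as the post describes).

```python
# Compare the Kelly approximation E[r]/E[r^2] with the exact maximizer
# of E[log(1 + p*r)] for a made-up discrete result distribution.
from math import log

results = [  1,  -1,   2,  -2,   4,  -4]   # points won/lost
probs   = [.30, .27, .20, .18, .03, .02]   # made-up probabilities

e_r  = sum(q * r for q, r in zip(probs, results))
e_r2 = sum(q * r * r for q, r in zip(probs, results))
approx = e_r / e_r2

def expected_log_growth(p):
    return sum(q * log(1 + p * r) for q, r in zip(probs, results))

grid = [i * 1e-4 for i in range(1, 2000)]   # p in (0, 0.2)
exact = max(grid, key=expected_log_growth)

print(f"approximation: {approx:.4f}, exact maximizer: {exact:.4f}")
```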

For modest edges and/or decent variances, such that the approximation

indicates p < .015, the approximation was quite accurate. So if you

have a good idea of your edge E[r] and the variance E[r^2] (neither is

too hard to estimate), and if E[r]/E[r^2] < .015, then you can fairly

easily get a good estimate of what your optimum stake size is.

When the approximation indicates a bigger p, it could be substantially

too high. Playing around with my program I saw approximation values over

.04, but the p that maximized E[log(1+pr)] was in those cases a little

over .02. The possibility of extreme results in backgammon, like 16

and 32 cubes, makes it inadvisable to set the stake size as high as the

approximation suggests. Doing so would make it too likely that you

might suffer a big drawdown, after which you would not be able to earn

as much.

(All of this assumes that we can bet exactly proportion p every game,

which in fact we cannot. Because of the inability to fine-tune the

stakes after every game, p should almost certainly be somewhat lower

than predicted by the Kelly criterion.)

Chris Yep, Michael Klein, and Gary Wong gave me a lot of help in

figuring out the Kelly criterion and what it means for backgammon.

Thanks guys.

If anyone has a reference for the Chabot books Kleinman referred

to, please let me know. I would very much like to read them.

David Montgomery

mo...@cs.umd.edu

monty on FIBS and GG

Jun 10, 1999, 3:00:00 AM

Chuck Bower wrote news:7jmd7g$s7e$1...@flotsam.uits.indiana.edu...

>OK, back to reality--backgammon. First we see that backgammon

>doesn't fulfill the strict conditions stated above. For one,

>a game starts off worth a point but usually ends up being worth

>more than that. The value of any given (money) game is.....

I enjoyed reading your above extensive comments. Lately

I had been seriously considering playing against JF and

SW for money but I have also been afraid that there may

be more to gambling than to just playing for fibs points

or even for nothing, against robots in the privacy of one's own home.

A few times it crossed my mind to post an article in rgb

to ask if somebody would be willing to partner up or do

some hand-holding/coaching to ease me into the gambling

world. Of course, there may be an element of personality

required and being coached, reading books on the subject,

etc. may not help...

>Next I need to estimate my percentage edge. Thanks to the

>BG ratings formulas and online servers, this is a lot

>easier than it used to be.

I think you should have made this comment with a big "IF".

For all the reasons offered before by several people, such

ratings are mostly useless. As part of my "experiments":),

I deliberately lowered my FIBS rating to 1400's. The other

day I invited a 1640(?) rated player and was turned down as

well as put down at the same time. After an exchange of some

"friendly" :) words with that person, I was offered to play

for $5 per point. Despite how I feel about gambling playing

bg, if that person was in front of me with his cash money

on the table, I'm pretty sure I would have given in to the

temptation... :) I feel it's my duty :) to remind you that on

top of lack of necessary control and policies, popular but

bogus formulas like FIBS' and their products (ratings) are

nothing but useless...

MK

Jun 10, 1999, 3:00:00 AM

In article <pattibFD...@netcom.com>,

Patti Beadles <pat...@netcom.com> wrote:

>Chuck, I applaud your work! You've done an excellent job of wrapping

>backgammon and Kelly betting together.

Thanks!

>But I do question one of your assumptions:

>

>> This 'unknown' value of each game enters my money management

>>technique in two ways. First, we can assign a typical value to a

>>game. This has been discussed on the newsgroup before, and a good

>>number to use is '3'. That is the standard deviation for money play

>>for Jellyfish, and "not too loose, not too tight" humans as well.

>

>I don't think it's that simple, unless you're inclined to settle any

>time you get a four cube or higher. Otherwise, you aren't really

>accounting for the swings that you'll get when you get gammoned on that

>eight cube.

I agree that setting a bankroll-to-stake relationship is not as

simple as my model made it sound. However, I don't understand the

part about "inclined to settle any time you get a four cube or higher".

In the example I gave, my bankroll vs. a player rated 50 points below

me was 132 units (points). Thus assuming I was even on all other games,

I could handle FOUR 24-point losses before getting close to my escrow.

As I mentioned, for MY play just one of these would be extremely rare.

If you see these fairly often (e.g. maybe once per session), then

probably the S.D. for YOUR play (and, let's say, against that particular

opponent/chouette) is higher than 3 and you should adjust accordingly.

In addition, your escrow should be higher.

>Of course, we could be discussing apples and oranges here. I'm

>talking about total bankroll, and you seem to be talking about session

>bankroll.

Hmmm. I didn't realize there should even be a distinction. I'll

have to think about that one.

On another note, having now read David Montgomery's post, I see

that a lot more work has been done on this subject than I thought.

Apparently, though, very little has been published, or at least

made available in a form that is readily accessible. I think David's

numbers and mine agree pretty closely, which does give me SOME

confidence that what I said wasn't totally off-base.

Probably the biggest impediment to using my simplified model of

applying the Kelly Criterion is that typically you don't (can't?) change

the stake after each game. This hurts in two ways: it makes you more

likely to go broke (when you are losing) and it doesn't allow you to

maximize your earnings (when you are winning). This makes me think

there is probably a better money management scheme.

And maybe this is where the "total vs. session bankrolls" idea enters.

If you have money in reserve (total > session) then you have a chance

to adjust your bet size on the NEXT session. As in the true Kelly

method, this helps both when you are winning and when you are losing.

Besides, if you are getting hammered, maybe you underestimated your

opponent....

Finally, using the Kelly Criterion (or some other money management

optimization method) is really related to maximizing profit. If you

are just sitting down to a friendly game, that might not be your

primary goal. In that case, the size of your bankroll (or stake) may

be determined by other factors.

Jun 10, 1999, 3:00:00 AM

Chuck Bower <bo...@bigbang.astro.indiana.edu> writes:

> I agree that setting a bankroll-to-stake relationship is not as

> simple as my model made it sound. However, I don't understand the

> part about "inclined to settle any time you get a four cube or

> higher".

The central limit theorem says that if you add up a large enough sample

from a suitable distribution, then the distribution of the total will be

roughly normal. It's not clear that this applies to backgammon, because

you can have arbitrarily large payoffs and it's not even clear that the

expectation and variance exist. But even aside from that, it's fairly

tricky to tackle the question of what is "large enough". It's certainly

the case that for a distribution with "thick tails", such as backgammon

payoffs, the "large enough" you need for the sum to converge to the

normal distribution is increased. (If the payoff distribution were

normal with standard deviation 3, then paying 16 points for getting

gammoned on an 8-cube would be a 5-sigma event with probability less

than one in a million; the probability of large payoffs at backgammon

is much larger than the normal approximation suggests.)
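The 5-sigma figure is easy to verify: under a normal payoff distribution with standard deviation 3, a 16-point loss is a 16/3 ≈ 5.33-sigma event. A quick check:

```python
# How likely is a 16-point loss if payoffs were normal with SD 3?
from math import erfc, sqrt

def normal_tail(x_sigma):
    """P(Z >= x_sigma) for a standard normal Z."""
    return 0.5 * erfc(x_sigma / sqrt(2))

p = normal_tail(16 / 3)    # a 16-point loss as a 5.33-sigma event
print(f"P = {p:.2e}")      # well under one in a million
```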

> In the example I gave, my bankroll vs. a player rated 50 points below

> me was 132 units (points). Thus assuming I was even on all other games,

> I could handle FOUR 24-point losses before getting close to my escrow.

> As I mentioned, for MY play just one of these would be extremely rare.

However rare it is for you, it's a lot less so than the normal

approximation would say.

David desJardins

Jun 11, 1999, 3:00:00 AM

David Montgomery wrote in message <7jnkdc$o...@krackle.cs.umd.edu>...

>Let r be a random variable, the result of a heads-up backgammon

>game. Let's say we have the probability distribution for r.

>Then the Kelly criterion is that we should set our stakes at

>proportion p of our total bankroll each game, where p maximizes

>E[log(1+pr)]. If we bet this way, we achieve the greatest expected

>bankroll growth in the long run.

>

>An approximation for p that is often used is E[r]/E[r^2].

>

>For example, let's say you have a .1ppg advantage over your opponent,

>and that the variance of the single game result is 10. Then this

>approximation says that you should set the stakes at .1/10=.01, or

>1% of your total bankroll. If your bankroll is $1000, you do best

>to play the next game for $10 a point.

This is interesting. I suppose that your cube decisions should be

based on maximizing E[log(1+pr)], not based on maximizing E[r]. This

would mean getting more cautious as the cube gets bigger (no doubt

a sensible idea for anyone on a finite bankroll). I suspect that if

doubling is based on maximizing E[r], then E[r^2] would diverge

to infinity.
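The "more cautious as the cube gets bigger" idea can be illustrated in the simplest possible model. Assumptions, all mine: a dead cube, no gammons, and fraction p of your bankroll at stake per point, so dropping a double costs 1 point and taking turns the game into +/-2 points. The risk-neutral take point in this model is 25%; under log utility it rises with p:

```python
# Minimum winning chance needed to take a 2-cube when maximizing
# E[log(bankroll)], in a dead-cube, no-gammon model (my assumptions).
# Take iff w*log(1+2p) + (1-w)*log(1-2p) >= log(1-p); solve for w.
from math import log

def take_point(p):
    """Take point at the 2-level with fraction p of bankroll per point."""
    return (log(1 - p) - log(1 - 2 * p)) / (log(1 + 2 * p) - log(1 - 2 * p))

for p in (0.001, 0.01, 0.05):
    print(f"p = {p:5.3f}  take point = {take_point(p):.4f}")
```

For small p this matches the expansion w ≈ 1/4 + 3p/8: with a tiny fraction at risk you take at essentially 25%, but at p = 0.05 the take point is already near 27%, i.e. the player risking a larger share of their bankroll must be choosier about takes.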

An interesting side effect is that the person with the larger bankroll

actually gains some equity from this effect, but probably not too much

if the stakes are reasonable.
