
Jan 14, 2003, 2:21:22 PM

The [0,1] game and Derivatives: A Study of Some Poker-Like Games

Bill Chen/Jerrod Ankenman


[Some background: This series is the result of some late-night phone

calls between your humble narrator and former rgp poster and math guru

at large Bill Chen from his new digs in the Philadelphia area. Bill has

done some pretty significant work on a number of games related to the

topic of this series. I claim no particular credit for any of the work

herein; all I claim credit for is being able to understand Bill as he

walked me through some of these things. It is Bill's hope that there are

some out there who will be able to add to this work, or have done

similar work and would be willing to share information privately or

publicly. It is my hope that I can write about these topics in a way

that is comprehensible to people without math PhDs -- I do not myself

have any math degree or even math courses past basic calculus. However,

many of these topics are quite conceptual, and it doesn't take the

ability to integrate over 14 variables or solve differential equations

to be able to understand these things. You can address feedback that you

don't wish to post to RGP to me at jerroda...@yahoo.com (or if

you know my other address, that's fine too) and I'll forward everything

to Bill. Thanks! Hope this is worthwhile for the reader.]

Part 1: Introduction and Game #1

This is part 1 in a many-part investigation of what we will refer to as

the [0,1] game, which can in some ways be thought of as analogous to

poker and is actually interesting in its own right. The following are

the common rules of this game, which extend through all the examples:

There are two players, Player X and Player Y. Each player is "dealt" a

random real number from 0 to 1. Player X acts first, and there is one

round of betting. Each game has a different structure of betting - this

is not structure in the way that term is typically used (i.e., to

differentiate between limit and pot-limit), but instead to denote

whether check-raise is allowed and how many bets and raises are allowed.

Assume a limit structure wherever possible. After the betting, if the

last bet was called, there is a showdown, and the player with the LOWEST

card wins the pot. Each game will specify a pot size. The first examples

will begin with infinite pots, and then we will introduce more

complexity in order to solve pots of finite but arbitrary size.

As we go along, we will introduce notation and methodology that will

help us to solve more complicated problems. In each example, we will be

searching for two things: optimal strategies for both X and Y, and the

value of the game (how many bets Y wins per iteration). As we will see, and as should match the reader's intuition, Player Y never has negative EV in these games, due to his positional advantage.

------

Game #1: infinite pot, one bet.

Our first toy game is relatively simple, but begins to introduce some

concepts that will be useful later. First, consider the assumption of the infinite pot: it is functionally equivalent to a rule that "neither player may fold." In this game, there is only one bet

available, so the only possible betting sequences are:

bet-call

check-bet-call

check-check

Firstly, we will solve this game in an intuitive way: Player X is going

to bet some percentage of the time, and he is going to bet his best

hands. Betting worse hands while checking better ones is a

dominated strategy. Whatever that percentage is, player Y is forced to

call all the time. But if Player X checks, then Player Y can bet. Since

Player X checked, Player Y can bet all the hands that X would have bet,

since he is sure to win. In addition, Player Y can bet some fraction of

the hands that X would not have bet. If Player Y bets the top half of

those hands, he will gain a bet whenever X has a hand in the lower half,

and break even the rest of the time. If he bets more or less, he will

either miss bets or bet too often when losing.

So if X bets a certain percentage of hands, Y always calls. If X checks,

then Y bets all the hands that X would have bet, plus half the hands

that he would not bet. This is Player Y's optimal strategy.

Now, for Player X's optimal strategy. Player X wants to make it so that

Player Y is indifferent to checking or betting after Player X checks.

However, Player Y is always going to bet more hands than Player X,

because he will bet half the hands that Player X does not. But if Player

X bets EVERY hand, then there are no hands that Player Y will bet and

Player X will not. So Player X's optimal strategy is to bet all hands.

So now, the betting will go: bet-call 100% of the time. Since both players'

chance of winning the hand is equal, the value of this game is 0.
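As a sanity check on this conclusion, here is a small Monte Carlo sketch (the function name and trial count are my own, and it assumes the uniform [0,1] deal given in the rules). With X betting every hand and Y always calling, Y's average result counting only bets won and lost should come out near 0:

```python
import random

def simulate_game1(trials=200_000, seed=1):
    """Game #1 under the solved strategies: X bets every hand,
    Y always calls.  Returns Y's average ex-showdown result
    (bets won or lost; the pot itself is excluded).  Lowest
    number wins the showdown."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        x = random.random()  # X's "card"
        y = random.random()  # Y's "card"
        # The sequence is always bet-call: Y wins one bet if
        # his number is lower, and loses one bet otherwise.
        total += 1 if y < x else -1
    return total / trials

print(simulate_game1())  # close to 0, by symmetry
```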

Simple, right? OK, now let's look at this in a little more formulaic

way.

0          x1        y1         1
|----------|---------|----------|

This diagram represents this game, before we've actually analyzed it.

x1 is the cutoff between when player X will bet and when he will check.

y1 is the cutoff between when player Y will bet and when he will check

(if player X checks).

Now, what we want to do is find out what the optimal value of x1 is.

This is the same as asking, "where should x1 be so that player Y can't

exploit it?" To understand how we're going to do this, let's address a

question. What's going to happen at x1 anyway? For hand values to the

left of x1, X will be betting, and for hand values to the right of x1, X

will be checking. Player Y could try to exploit the value of x1.

Well, first off, we know that in the area left of x1 on the line, Y

can't exploit the strategy, because he's forced to call. But to the

right of the area, Y has a choice - to check or to bet. And the point

where Y changes his strategy is y1. The same thing that we said above

about x1 is true about y1 - to the left of it, Y will bet, and to the

right of it, Y will check.

Now if Y could do better by checking at y1, then he could move y1 a tiny

bit until he came to a point where he was indifferent and the resulting

y1 would be BETTER for him. So the idea is for X to try and make it so

that no matter what Y does at y1, he can't make any more money by

switching strategies.

We'll call this "indifference" - X wants to make it so that Y is

"indifferent" to checking or betting at y1. To do this, X has to make

the value of checking equal to the value of betting. X is going to

construct an equation (we'll call these "indifference equations")

setting these two values equal.

But it's not just y1. The same concept applies to x1. To the left of

that point, X will bet. To the right of that point, X will check. In the

process of creating an optimal strategy, Y must make it so that X is

indifferent to checking and betting at x1. This is an important part of

our methodology. What is actually going to take place here is that at

each threshold point (y1,x1 in this example), we're going to find an

indifference equation. Since we will have the same number of equations

as unknowns (2 in this example), we will then solve the resulting system

of equations for the unknowns we have.

*This will give us optimal strategies for each side.*

First let's look at y1. The value of betting at y1 is:

X's hand     Result
0 -> x1      (X already bet)
x1 -> y1     -1
y1 -> 1      +1

This is equivalent to:

-1*(y1-x1) + (1-y1)

The value of checking at y1 is:

X's hand     Result
0 -> x1      (X already bet)
x1 -> y1     0
y1 -> 1      0

Our indifference equation is the result of setting these two things (the

value of checking and the value of betting) equal:

EV(check at y1) = EV(bet at y1)

0 = -1*(y1-x1) + (1-y1)

0 = -y1 + x1 + 1 - y1

0 = -2y1 + x1 + 1

2y1 = x1 + 1 [1]

Next, we'll consider the other threshold point, x1. Y will want to make X

indifferent between checking and betting at x1 as well.

So what are the values of these two actions?

Betting at x1:

Y's hand     Result
0 -> x1      -1
x1 -> y1     +1
y1 -> 1      +1

or

-1*(x1-0) + (y1-x1) + (1-y1)

Checking at x1:

Y's hand     Result
0 -> x1      -1
x1 -> y1     +1
y1 -> 1      0

-1*(x1-0) + (y1-x1)

-1*(x1-0) + (y1-x1) = -1*(x1-0) + (y1-x1) + (1-y1)

0 = 1 - y1

y1 = 1 [2]

Now, using these two equations to solve for x1 and y1:

2y1 = x1 + 1 [1]

y1 = 1 [2]

2 = x1 + 1

x1 = 1 [3]

That's the same answer as we got before: x1 is 1, so X should bet all

his hands, and y1 is 1, so Y never gets to bet.
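The same little system can be checked mechanically. A sketch in plain Python (the helper name is mine) that just rearranges equations [1] and [2]:

```python
def solve_game1_thresholds():
    """Solve the system from the text:
         2*y1 = x1 + 1   [1]
         y1   = 1        [2]"""
    y1 = 1.0           # directly from [2]
    x1 = 2 * y1 - 1    # substitute into [1] and rearrange
    return x1, y1

x1, y1 = solve_game1_thresholds()
print(x1, y1)  # 1.0 1.0: X bets everything, Y never gets to bet
```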

OK, you're probably saying to yourself: that's so simple, it's hardly worth going through any algebra at all. I tried to flesh out all the details of where the equations came from in a very thorough manner. Later in this series I will no longer do this, because it would wear down my fingers from typing. But this algebraic method is the foundation for the analysis of some games that are quite complicated, and it is valuable to get in the habit of solving problems this way because of its repeatability.

So let's recap our methodology:

**

We assign variable values to each decision threshold point - that is, a

point where a player switches from one strategy to another. In the

optimal strategies, each player must be indifferent between the

neighboring strategies of each of his threshold points. To effect this,

we create indifference equations for each threshold point. Since we have

N threshold points, we will always have N equations. We then solve this

system of equations, and by doing so, find the optimal strategy.

**
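Since later games will have many more threshold points, it can help to mechanize the indifference step. Here is a minimal numeric sketch (the helper names are mine, not from the article) that recovers Y's indifference point by bisection, matching equation [1]'s y1 = (x1 + 1)/2:

```python
def bisect_root(f, lo, hi, tol=1e-9):
    """Find a root of f on [lo, hi] by bisection, assuming f
    changes sign (or touches zero) somewhere on the interval."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def y_indifference(x1):
    """Y's indifference threshold given that X bets [0, x1]:
    EV(bet at y) - EV(check at y) = -(y - x1) + (1 - y) - 0."""
    return bisect_root(lambda y: -(y - x1) + (1 - y), x1, 1.0)

print(y_indifference(0.5))  # about 0.75, i.e. (x1 + 1)/2
print(y_indifference(1.0))  # 1.0, the optimal solution above
```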

Next: Part 2: Raising and Game #2

Jan 14, 2003, 4:00:24 PM

Jerrod Ankenman:

> Bill Chen... ...has done some pretty significant work on a number of

> games... This is part 1 in a many-part investigation of what we will

> refer to as the [0,1] game... ...two players, Player X and Player Y.

> Each player is "dealt" a random real number from 0 to 1. Player X

> acts first, and there is one round of betting. Each game has a different

> structure of betting...whether check-raise is allowed and how many

> bets and raises are allowed. LOWEST card wins the pot. Each game

> will specify a pot size. The first examples will begin with infinite pots,

> and then...pots of finite but arbitrary size.


Jerrod.....will one of your 'examples' be a 1-unit initial pot with only a

single 1-unit bet allowed by either player (no raise, nor check-raise)...?!

P.S. Any particular reason why "LOWEST card wins the pot"...?!

Jan 14, 2003, 5:49:59 PM

In article <3E246397...@yahoo.com>, Jerrod Ankenman <jerroda...@yahoo.com> wrote:

>Game #1: infinite pot, one bet.


>EV(check at y1) = EV(bet at y1)

(Nit) You have assumed a uniform distribution across the range [0,1].

Actually Y's EV at y1 is 10 (or any other finite value).

infinite pot * (y1-x1) - infinite pot * (1-y1) = 10

infinite pot * ( 2* y1 - x1 - 1 ) = 10

2 * y1 - x1 -1 = 0

Under the covers you divided by an infinitely large number and that does not

work.

Redo the problem with a finite pot and it works.

Mike G

Jan 14, 2003, 7:28:30 PM

BTW Don't respond privately, my cyra address should bounce. As Jerrod

said, send all comments to him.


So anyway, this is sort of a reason why one should stay tuned.

Anyway, the first game isn't very exciting, but hopefully when we get

into game 15 or so, it will start to look like real HU poker. One question along the way that we hope to answer: if a player bets, what % of his hands should you raise with? There is actually a number, which we call r (it's around 40%, and irrational--see if you can guess what it is), that pops up all over the place; it could be called the golden mean of limit poker.

Also, what should the first player really do with his strong hands: check-raise or bet? How much is check-raising worth anyway? What if the two players have different distributions of hands? We model a 2-card draw (called the x^2 distribution) vs. a 1-card draw. What

about pot limit and no limit? Some of the answers are surprising,

and my hope is that these results will stimulate the current poker

theory to a new level of discussion.

How many raises in hold'em should you go without the nuts? What is the effect of having some of your opponent's cards? What happens when we apply our model to actual river play? Wait and see.

Bill

Jan 14, 2003, 9:16:09 PM

Mike Garcia wrote:

> >Game #1: infinite pot, one bet.

>

> >EV(check at y1) = EV(bet at y1)

>

> (Nit) You have assumed a uniform distribution across the range [0,1].

Correct. This was not specified.

> Actually Y's EV at y1 is 10 (or any other finite value).

Though it was not explicitly noted, we are only calculating Y's EV for

betting that takes place in this game. The pot is not included. Later we

will consider finite pots, which have more complexity than this simple

example (because players can fold).

> Under the covers you divided by an infinitely large number and that does not

> work.

> Redo the problem with a finite pot and it works.

But with a finite pot, the optimal strategies involve folding and are

more complicated. We'll get to that later. The infinite pot stipulation

isn't part of the EV calculations for these games. If you wish, simply

stipulate instead that neither player may fold.

Jerrod Ankenman

Jan 14, 2003, 9:23:10 PM

Barbara Yoon wrote:

>

> Jerrod.....will one of your 'examples' be a 1-unit initial pot with

> only a single 1-unit bet allowed by either player (no raise, nor

> check-raise)...?!

Well, currently my plan is to skip that particular example and solve

that game (1 bet left, finite pot) generally for pot size p, where p can

be anything. Of course, we will be able to plug 1 in for p to reduce to

that example.

> P.S. Any particular reason why "LOWEST card wins the pot"...?!

It makes the algebra much, much, simpler, especially when we start

looking at x_n for higher n. (Otherwise we have to subtract from 1 much

more often).

Jerrod Ankenman

Jan 15, 2003, 12:19:00 PM

This brings up a good point. We can look at game values in two ways: (1) absolute game value and (2) ex-showdown value, or the absolute game value minus the showdown value. Now for the finite pot case, the showdown value of the [0,1] game must be the same for both players, so it doesn't matter which one we consider (though I think ex-showdown is easier to deal with).

The infinite pot case is an abstraction of the finite pot case, only

dealing with ex-showdown value to avoid infinities. This will come

into play later when the two players have unequal distributions,

where if we consider infinite pots one player will have infinite EV.
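To illustrate the proposed terminology numerically, here is my own sketch (the names are hypothetical), assuming the no-fold Game #1 strategies with a dead pot of size p. Y's absolute value decomposes into his showdown equity (p/2) plus his ex-showdown value (0):

```python
import random

def game1_values(p, trials=200_000, seed=7):
    """Y's absolute and ex-showdown EV when neither player may
    fold: X always bets, Y always calls, with a dead pot of p.
    Absolute value should land near p/2 (Y's showdown equity),
    and ex-showdown value near 0."""
    random.seed(seed)
    absolute = ex_showdown = 0.0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if y < x:               # Y wins: the pot plus X's bet
            absolute += p + 1
            ex_showdown += 1    # pot excluded: just the bet won
        else:                   # Y loses only his own bet
            absolute += -1
            ex_showdown += -1
    return absolute / trials, ex_showdown / trials

abs_v, ex_v = game1_values(p=4)
print(abs_v, ex_v)  # roughly 2.0 (= p/2) and roughly 0.0
```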

So for the near future, I propose (I can only propose, since Jerrod is

actually doing all of the work writing it up) we use "game value" as

"ex-showdown value," and "EV" to mean "ex-showdown EV" and we will

explicitly use "absolute game value" to mean including showdown

equity. Is this okay Jerrod?

Bill

mt...@nowhere.cornell.edu (Mike Garcia) wrote in message news:<b0246m$pcc$1...@news01.cit.cornell.edu>...

Jan 16, 2003, 6:44:02 PM

wc...@cyra.com (Bill chen) wrote in message news:<16b07253.03011...@posting.google.com>...

> This brings up a good point. We can look at game values in two ways:

> (1) absolute game value and (2) ex-showdown value, or the absolute

> game value minus the showdown value. Now for the finite pot case,

> the showdown value of the [0,1] game must be the same for both

> players, so it doesn't matter which one we consider (though I think

> ex-showdown is easier to deal with).


Pokerroom.com has EV stats on 2 handed games listed by position and

hand. https://www.pokerroom.com/evstats/totalStatsPositions.php?players=2

This might be a better way to judge individual hands, as a sort of

playability EV. Just a thought.

Jan 16, 2003, 8:44:52 PM

Eric wrote:

I think ducks have higher centers of gravity than otters of equal height

when both stand upright. Just another thought.

Tom Weideman

Jan 17, 2003, 9:24:57 PM

Tom Weideman <zwi...@attbi.com> wrote in message news:<BA4CA012.2404A%zwi...@attbi.com>...

>

> I think ducks have higher centers of gravity than otters of equal height

> when both stand upright. Just another thought.

>

>

> Tom Weideman


I see your post avoided the math part of these posts. I guess you are

switching your area of expertise to small animals.

"Math is hard" - Bill Chen.

Jan 17, 2003, 9:46:40 PM

Bill Chen wrote:

>>>> This brings up a good point. We can look at game values in two ways:

>>>> (1) absolute game value and (2) ex-showdown value, or the absolute

>>>> game value minus the showdown value. Now for the finite pot case,

>>>> the showdown value of the [0,1] game must be the same for both

>>>> players, so it doesn't matter which one we consider (though I think

>>>> ex-showdown is easier to deal with).


Eric wrote:

>>> Pokerroom.com has EV stats on 2 handed games listed by position and

>>> hand. https://www.pokerroom.com/evstats/totalStatsPositions.php?players=2

>>>

>>> This might be a better way to judge individual hands, as a sort of

>>> playability EV. Just a thought.

Tom wrote:

>> I think ducks have higher centers of gravity than otters of equal height

>> when both stand upright. Just another thought.

Eric wrote:

> I see your post avoided the math part of these posts. I guess you are

> switching your area of expertise to small animals.

Unlike so many other usenet denizens, I cannot simply declare my area of

expertise by choosing to post about it.

In any case, I wouldn't expect you to get my joke. But here's a hint: You

and I added roughly equal amounts of useful content to this thread.

Tom Weideman

Jan 21, 2003, 10:51:03 AM

> In any case, I wouldn't expect you to get my joke. But here's a hint: You

> and I added roughly equal amounts of useful content to this thread.

>

>

> Tom Weideman


I got your joke. And those EV stats may or may not have any useful content. That's for individual posters to decide for themselves. You decided they're not useful to you; fine. Why not post something useful, instead of just adding more noise to RGP?
