
The [0,1] Game: Part 3


Jerrod Ankenman

Jan 17, 2003, 2:35:55 PM
The [0,1] game and Derivatives: A Study of Some Poker-Like Games
Bill Chen/Jerrod Ankenman

Part 3: Infinite bets, "r", and Game #3

This is part 3 in a many-part investigation of what we will refer to as
the [0,1] game, which can in some ways be thought of as analogous to
poker and is actually interesting in its own right.

Recap to this point: In Part 1, we analyzed a very simple [0,1] game
where the pot is infinite, but only one bet is allowed. We found
intuitively what the optimal strategy and value of the game were. Then
we created a method using indifference equations to solve the same
problem in a little harder way, and got the same answers. In Part 2, we
expanded the game to
allow two bets. Player Y was able to exploit his position to win money.
We introduced the y1,y2 notation and reconfirmed the ability of our
methodology to find an optimal solution.

In this part: We expand our game to infinite bets (no cap), and run into
"the golden mean of poker," 'r'.

A note on rigor: A number of respondents have made various points
surrounding our "infinite pot" stipulation. It is our belief that our
conclusions do in fact hold true as the limit of the finite pot case.
However, the point of these early games is not to engage in discussions
about 0*infinity problems and things of that nature, but to solve simple
games that help lead us into more complexity. For this reason, we're
going to (for the purposes of the discussion) define "infinite pot" to
simply mean that neither player may fold (by rule).

In our previous two parts, we examined a heads up game, without
check-raising, with caps of one and two bets, respectively. While we
could extend the methodologies we employed previously to solve games of
3, 4, or N bets, we can also take a look at the case where both parties
can raise infinitely (as is the case in many heads-up games of this
nature).

Before we get to game #3, however, let's look at a slightly simpler game
that will help to introduce some concepts and also help us to calculate
the ex-showdown value of game #3.

Game #3a: infinite pot, infinite bets allowed. X bets in the dark.

In this game, X bets all his hands. Y can either call or raise; if Y
calls, the betting is over. If Y raises, then the game is reversed, and
X is put into exactly the same position.

We can clearly see that Y will want to raise with some fraction of his
hands. Call this fraction r. Using (in the abstract) our x_n notation:

y2 = r

If Y raises, it's back to X. But now the game is the same for X as it
was for Y just a minute ago, only the range of hands Y could have is r
instead of 1 (all hands). If it was optimal for Y to raise with the
fraction r of all his hands (r*1), then it must likewise be optimal for
X to raise with the fraction r of Y's raising hands!

x3 = r^2

This is actually going to go on forever, in fact, for when Y gets the
turn back, he'll want to raise with the r fraction of X's last hands,
and so on.

y4 = r^3

We're not going to solve for the value of r yet. But let's consider the
ex-showdown value of this game; X bets dark, Y raises with r hands, X
reraises with r^2 hands, and so on.

We have a number of scenarios:

X is    Y is    Result                    Probability
< r     < r     Special (more raising)    r^2
< r     > r     X wins 1 bet.             r*(1-r)
> r     < r     Y wins 2 bets.            r*(1-r)
> r     > r     They break even.          (1-r)^2

Let's refer to these scenarios as 1-4, top to bottom.

Scenario 4, of course, contributes nothing; the players break even
there on average.
Scenarios 2 and 3 are pretty straightforward.
Scenario 1 is a little bit tricky. When both players have hands better
than r, what happens? Y raises, and then the game is reversed; X and Y
switch places.

Let's create a variable "g" that is Player Y's edge in this whole game.
In Scenario 1, what happens is we have another game just like this one,
but with X and Y reversed. Let's see if we can use this to write a
formula to solve for g. For each scenario, Y's edge will be his net on
that scenario times the probability of that scenario occurring.

g = 0*(1-r)^2 + r*(1-r)*2 - r*(1-r)*1 - r^2*g

The last term comes about because r^2 of the time, the players will play
this same game again, only player X will this time have the edge.

g = r*(1-r) - r^2*g
g + r^2*g = r*(1-r)
g = r*(1-r)/(r^2+1)

(we will simplify this later)

So Player Y's edge in this game (where X bets dark and infinite raises
are allowed) is g = r*(1-r)/(r^2+1).
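If you want to check that formula numerically, here's a rough Python
sketch (the function name and setup are just one way to code it up, not
anything canonical). Note that the derivation above only assumed that
both players keep raising with the same fraction r of the previous
raising range; it did not assume r is optimal, so the formula should
hold for any r between 0 and 1, including the one we solve for later.

import random

def simulate_game_3a(r, trials=200_000, seed=0):
    # X bets dark; the player to act raises whenever his hand is below
    # the current threshold, which starts at r and shrinks by a factor
    # of r with each raise. Returns Y's average ex-showdown result.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        hands = {'X': x, 'Y': y}
        level, thresh, to_act = 1, r, 'Y'     # X's dark bet is level 1
        while hands[to_act] < thresh:         # current player raises
            level += 1
            thresh *= r
            to_act = 'X' if to_act == 'Y' else 'Y'
        # the player to act just calls; each side has put in `level` bets
        total += level if y < x else -level   # lower hand wins
    return total / trials

for r in (0.3, 0.5, 2**0.5 - 1):
    print(r, simulate_game_3a(r), r*(1 - r)/(r**2 + 1))

# the simulated and exact columns land near 0.193, 0.200, and 0.207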

With that, let's move on to the full game.

Game #3: infinite pot, infinite bets allowed. Check-raise is not
allowed.

In the past, we drew a little line graph to show the cutoff points for
each player. Obviously, with infinite bets, we can't draw /all/ the
points (x100 gets a little hairy). But we'll draw up to y4, and then
remember that the other points are there.

      x3             x1
|--|--|----|---------|----------|----------|
0  y4      y2                   y1         1


Notice that there is no x2, nor y3. This is because, according to the
rules of this game, X cannot put in the second bet, nor can Y put in the
third bet (check-raise is not allowed). In fact, there will be no x_n
for n even, nor y_n for n odd.

The next step is to write some indifference equations.

First of all, let's write the equation for y1. At y1, player X wants to
make player Y indifferent to betting or checking, just like last time.
And so this equation is actually the same as previously:

2y1 = x1 + 1 [1]
(this from part 1)

Next, let's write the equation for x1. Player Y wants to make player X
indifferent to betting or checking at x1.

The values of actions at x1:
Y's hand    Bet    Chk    Diff
[0,y2]      -2     -1     -1
[y1,1]      +1      0     +1

(note: there are other ranges with values, but those values are the
same whether X checks or bets, so they drop out of the comparison)

So the indifference equation is:

y2 - 0 = 1 - y1 or
y2 = 1 - y1 [2]

Next, we'll write an indifference equation at y2. Player X wants to make
it indifferent for Y to call or raise at y2.

Values of actions at y2:
X's hand    Call   Rai    Diff
[0,x3]       -1    -3     -2
[x3,y2]      -1    -2     -1
[y2,x1]      +1    +2     +1

The indifference equation, then, is:

2x3 + (y2 - x3) = x1 - y2 or
x3 + 2y2 = x1 [3]

Now we have three equations. But we actually have four unknowns now:
x1,x3,y1,y2. And even doing the indifference equation for x3 won't help
us here, because it needs y4, and so on. We'll need something else. What
else do we know?

Think back to game #3a. When X bets in this game, Y certainly calls
with all hands worse than x1, since y2 (Y's raising threshold) must be
a better hand than x1. But the rest of the game (hands between 0 and
x1) is just the same as game #3a, only over a smaller interval. So we
get the same set of relationships we got there:

y2 = r*x1, x3 = r*y2 = r^2*x1 [4]

This is our fourth equation.

2y1 = x1 + 1 [1]
y2 = 1 - y1 [2]
x3 + 2y2 = x1 [3]
y2 = r*x1, x3 = r*y2 = r^2*x1 [4]

Plugging into [3]:

r^2x1 + 2rx1 = x1
r^2 + 2r = 1
r^2 + 2r - 1 = 0

and, trotting out our old familiar quadratic formula:

r = -1 +/- sqrt(2)

Since -1 - sqrt(2) would result in bad karma,

r = sqrt(2) - 1 ~= .414
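(If you'd rather let a computer grind through the algebra, here is a
short sketch that solves [1]-[4] directly. It assumes the sympy library
is available; the symbol names just mirror the notation above.)

import sympy as sp

x1, y1, y2, x3, r = sp.symbols('x1 y1 y2 x3 r')

eqs = [
    sp.Eq(2*y1, x1 + 1),    # [1]
    sp.Eq(y2, 1 - y1),      # [2]
    sp.Eq(x3 + 2*y2, x1),   # [3]
    sp.Eq(y2, r*x1),        # [4]  y2 = r*x1
    sp.Eq(x3, r*y2),        # [4]  x3 = r*y2
]

sols = sp.solve(eqs, [x1, y1, y2, x3, r], dict=True)
sol = [s for s in sols if s[r].is_positive][0]   # drop the bad-karma root
print(sol[r])                                    # -1 + sqrt(2)
print({str(k): round(float(v), 4) for k, v in sol.items()})

The numerical thresholds it prints match the values we derive by hand
below.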

Now we've solved for r. What this means is that when infinite bets are
allowed, you should generally put in another raise with about 41% of
the hands with which your opponent could have raised you this far.

This value r pops up frequently in our analysis; we are facetiously
calling it the "Golden Mean of Poker" - students of math should probably
be able to recognize some of the parallels to the mathematical concept.

OK, so now we can actually solve for all these values:

rx1 = 1 - y1            (y2 = 1 - y1 from [2], and y2 = r*x1 from [4])
rx1 = 1 - (x1+1)/2      (substituting y1 = (x1+1)/2 from [1])
2rx1 = 2 - x1 - 1
2rx1 + x1 = 1

x1 = 1/(1+2r)

y1 = (1/(1+2r) + 1)/2 = (1+r)/(1+2r)

y2 = r/(1+2r)

x3 = r^2/(1+2r)

x_n = r^(n-1)/(1+2r) (for odd n)

y_n = r^(n-1)/(1+2r) (for even n)
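As a quick numerical sanity check (a spot check, not part of the
derivation), we can plug these thresholds back in and confirm the two
indifferences we used: X holding exactly x1 should be indifferent
between betting and checking, and Y holding exactly y2, facing a bet,
should be indifferent between calling and raising. A small Python
sketch, with the EVs taken ex-showdown and summed over the opponent's
relevant range:

from math import sqrt, isclose

r  = sqrt(2) - 1
x1 = 1 / (1 + 2*r)
y1 = (1 + r) / (1 + 2*r)
y2 = r / (1 + 2*r)
x3 = r**2 / (1 + 2*r)

# X holds exactly x1.
ev_bet   = -2*y2 - (x1 - y2) + (1 - x1)  # Y raises / Y calls ahead / Y calls behind
ev_check = -x1 + (y1 - x1)               # Y bets [0,y1] and X must call
print(ev_bet, ev_check, isclose(ev_bet, ev_check))    # both ~ -0.320

# Y holds exactly y2 and faces a bet; X's betting range is [0,x1].
ev_call  = -y2 + (x1 - y2)
ev_raise = -3*x3 - 2*(y2 - x3) + 2*(x1 - y2)
print(ev_call, ev_raise, isclose(ev_call, ev_raise))  # both ~ 0.094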


Let's calculate the ex-showdown value of this game.

We could try to add up all the infinitesimal values of each raising
range (from x3 to x_inf), but luckily, there's a little trick (again
related to game #3a) we can use to simplify the math.

There are six scenarios to consider:

X is        Y is        Result
< x1        < x1        Special (includes more raising)
< x1        > x1        X wins 1 bet
> x1        < x1        Y wins 1 bet
[x1,y1]     [x1,y1]     They break even (each wins half the hands)
[y1,1]      [x1,y1]     Y wins 1 bet.
[x1,1]      [y1,1]      No betting occurs

Note that the second and third categories cancel each other out, because
they happen with equal probability and offset each other.

Now, to calculate the "Special" category:

What's happening in the "Special" category? It's game #3a again! We know
from that game that Y's edge is g.

g = r*(1-r)/(r^2+1)

now that we know the value of r:
r^2 = 1 - 2r (try it yourself and see - remember r = sqrt(2) - 1), so

g = r*(1-r)/(2-2r)
g = r/2

Note that this simplification applies back to game #3a as well - we
just couldn't do it there because we hadn't yet solved for r. All games
of that nature have an edge of r/2 bets for player Y.

So Y's net win is the 1 bet he wins when X is in [y1,1] and Y is in
[x1,y1] and the r/2 bets he wins when X and Y are both < x1.

Y's edge = x1^2*(r/2) + (y1 - x1)*(1 - y1)
         = (1/(1+2r))^2*(r/2) + (r/(1+2r))*(r/(1+2r))
         = (r/2 + r^2)/(1+2r)^2

Using r^2 = 1 - 2r again, (1+2r)^2 = 4r^2 + 4r + 1 = 5 - 4r, so this is

         = (r + 2*(1 - 2r)) / (2*(5 - 4r))
         = (2 - 3r) / (2*(5 - 4r))
         = (2 - r)/14 ~= .11327

(the last step rationalizes the denominator: substitute r = sqrt(2) - 1
and multiply top and bottom by 9 + 4*sqrt(2))
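Finally, as a check on that number, here's a rough Monte Carlo sketch
of game #3 itself under the strategies above (again, the structure and
names are just one way to code it up; with a million trials the result
is only good to about three decimals):

import random

def simulate_game_3(trials=10**6, seed=1):
    r  = 2**0.5 - 1
    x1 = 1 / (1 + 2*r)
    y1 = (1 + r) / (1 + 2*r)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        hands = {'X': x, 'Y': y}
        if x < x1:                       # X bets
            level, thresh, to_act = 1, r * x1, 'Y'   # y2 = r*x1
            while hands[to_act] < thresh:            # raise and reraise
                level += 1
                thresh *= r
                to_act = 'X' if to_act == 'Y' else 'Y'
        elif y < y1:                     # X checks, Y bets, X must call
            level = 1
        else:                            # both check
            level = 0
        if level:
            total += level if y < x else -level
    return total / trials

print(simulate_game_3())                 # ~0.113
print((2 - (2**0.5 - 1)) / 14)           # 0.11327...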

Recap of game values (for player Y):
1 bet, no folding: 0
2 bets, no folding: 1/8 (.125)
inf bets, no folding: .11327

So allowing unlimited bets (compared to the two-bet cap) favors player
X, but not by much. Of course, the
marginal value of each bet diminishes quickly, so most of the value in
infinite bets is in allowing player X to bet and reraise.

This was definitely the hairiest of the games we've done so far. In the
next installment, we'll get back to slightly easier games, and solve the
two-bet game with check-raise.

Next: Check-raises and Game #4

Barbara Yoon

Jan 18, 2003, 12:59:29 PM
Jerrod Ankenman:

> The [0,1] game and Derivatives: A Study of Some Poker-Like Games
> Bill Chen/Jerrod Ankenman
>
> Game #3: infinite pot, infinite bets allowed. Check-raise is not allowed.
>
> In the past, we drew a little line graph to show the cutoff points for
> each player. Obviously, with infinite bets, we can't draw /all/ the
> points (x100 gets a little hairy). But we'll draw up to y4, and then
> remember that the other points are there.
>
>       x3             x1
> |--|--|----|---------|----------|----------|
> 0  y4      y2                   y1         1
>
> Notice that there is no x2, nor y3. This is because according to the
> rules of this game, X cannot put in the second bet, nor can Y put in the
> third bet. (Check-raise is not allowed). In fact, there will be no x_n
> for n even, nor y_n for y odd.
>
> The next step is to write some indifference equations.
>
> First of all, let's write the equation for y1. At y1, player X wants to
> make player Y indifferent to betting or checking, just like last time.
> And so this equation is actually the same as previously:
>
> 2y1 = x1 + 1 [1] (this from part 1)
>
> Next, let's write the equation for x1. Player Y wants to make player X
> indifferent to betting or checking at x1.
>
> The values of actions at x1:
> Y's hand    Bet    Chk    Diff
> [0,y2]      -2     -1     -1
> [y1,1]      +1      0     +1

I respect that in this series of posts, you are trying to convey a lot of
very complicated thoughts -- but if possible, couldn't you make each
part somewhat more 'self-contained' -- that is, not requiring the readers'
fresh and thorough recollection of previous parts -- for example here,
the "[0,y2] -1 -2 -1" and "[y1,1] 1 0 1" and such...?!
