
Three-Way Tournament Settlements (Math)


Tom Weideman

Jun 25, 1997

Some time back, there was a raging discussion here about the proper
3-way split at the end of a percentage prize payout tournament with
players of equal playing ability. I believe it started when a Card
Player article on the subject by McEvoy, in which he butchered the
math involved, was corrected by Malmuth (in his usual tact-free
manner), who pointed to his work in _Gambling Theory_. Silly
arguments ensued along the lines of McEvoy being more of
an authority because he plays tournaments and Malmuth doesn't, etc. I
want to make it clear from the start that it is not the intention of
this post to start this nonsense back up again -- I just want to do my
part to further our knowledge in this area. I started thinking about
the problem back then, and it has remained on the back burner for some
time since. Recently I returned to it, and I'd like to share my
preliminary results. Keep in mind that these results ARE preliminary,
and that much more needs to be done.

I think of this problem as a 3-way gambler's ruin. We are given the
three bankrolls, and assuming a fair game (the players are assumed to be
of equal ability, otherwise the problem gets even tougher), we want to
know the probabilities of each player finishing in each position. I
looked through whatever literature I could find, but I saw no mention at
all of a gambler's ruin problem involving more than 2 competitors. So I
came up with my own method. Unfortunately, this particular method only
works with 3 players, but (with great effort) it certainly can be
generalized to more players if necessary.

Consider the following triangular diagram:

A
/ \
a - b
/ \ / \
c - d - e
/ \ / \ / \
f - g - h - i
/ \ / \ / \ / \
B - j - k - l - C

• In this diagram, the capital letters A, B, and C represent the
different contestants in the tournament.
• Each labelled point represents a possible state of the tournament.
• The three vertices of the large triangle represent a win for its
respective player.
• The sides of the large triangle represent eliminations of the player
whose vertex is across from it. For example, positions A, a, c, f, and
B represent possible states of the tournament where player C has been
eliminated.
• Each dashed line represents a confrontation between two players. The
two players involved are those whose side of the big triangle is
parallel to the line. For example, going along the line from d to h
represents a confrontation between A and C where C was the winner.

Since a vertex of the big triangle means that a player has all of the
chips, and the side opposite the vertex means that player has none of
them, then each successive parallel row represents the various states in
between. For simplicity, we will refer to each row as being "1 bet",
without specifying it any further. Each confrontation between two
players is therefore contesting 1 bet. Thus the example of d-to-h given
above would read "A loses 1 bet to C". The row a-b has player A with 3
bets, c-d-e two bets, f-g-h-i 1 bet, and B-j-k-l-C 0 bets. The row i-l
has player C with 3 bets, e-h-k 2 bets, and so on.

Now the question becomes: "Given an initial state of the tournament
(pick a letter in the diagram), what are the probabilities of player A
finishing 1st, 2nd, and 3rd?" The math is actually pretty easy (though
tedious). First, a couple rules regarding the use of the triangle:

1) Once an outer edge is reached, it cannot be exited (i.e. there is no
way for someone to return to the tournament once eliminated).
2) Every transition exiting a given point is equally probable (apart
from the zero probabilities generated by (1) above). For example, the
probability that the state of the tournament will go from d to a is
the same as d to b, c, e, g, and h (i.e. it is 1/6). Similarly, the
probability of the state going from c to a is the same as c to f
(i.e. it is 1/2).

Okay, now for a calculation. The number of bets in play is 4 (one less
than the number of rows), so we'll choose the starting point:

A = 2 bets, B = 1 bet, C = 1 bet

The point on the diagram for this is d. Let's start by calculating the
probability of player A being eliminated first. From here on, the
letters will be variables that equal the probability of this outcome at
the state of the tournament they represent on the diagram. We thus seek
the value of d.

The probability d is the sum of products of conditional probabilities:

d = (1/6)a + (1/6)b + (1/6)c + (1/6)e + (1/6)g + (1/6)h

Notice that a, b, c, and e are states where another player has been
eliminated, so the probability that A is eliminated first in these
states is zero:

a = b = c = e = 0

Also note that the symmetry of the triangle demands that g = h, leaving
the simple relation:

*** d = (1/3)g ***

Now we need g, so we repeat the process:

g = (1/6)c + (1/6)d + (1/6)f + (1/6)h + (1/6)j + (1/6)k

States j and k both represent eliminations of A, while c and f are
eliminations of C, so:

c = f = 0, j = k = 1

Plugging in these numbers as well as h = g gives the equation:

g = (1/6)d + (1/6)g + 1/3

*** 5g = d + 2 ***

Now solve the two starred equations simultaneously for d to get:
-----------
| d = 1/7 |   ( = Probability of A finishing third)
-----------
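This random walk on the triangle is easy to check by simulation. Here is a quick Monte Carlo sketch of the two transition rules above (the function name and code are my own illustration, not Tom's):

```python
import random

def p_third_mc(stacks, trials=200_000, seed=1):
    """Monte Carlo estimate of P(player 0 busts first) under the
    triangle model: on each step, a uniformly random ordered pair of
    players contests one bet, until someone's stack hits zero."""
    random.seed(seed)
    busts = 0
    for _ in range(trials):
        chips = list(stacks)
        alive = [i for i in range(3) if chips[i] > 0]
        while len(alive) == 3:
            winner, loser = random.sample(alive, 2)  # uniform ordered pair
            chips[winner] += 1
            chips[loser] -= 1
            if chips[loser] == 0:
                alive.remove(loser)
        if 0 not in alive:   # player 0 was the first one eliminated
            busts += 1
    return busts / trials

# p_third_mc((2, 1, 1)) should land near 1/7, about 0.143
```

With stacks (2, 1, 1) the estimate agrees with the algebraic answer of 1/7 to within Monte Carlo noise.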

This is NOT Malmuth's result. For the same starting conditions, he gets
a probability of 1/5. I believe the problem with Malmuth's calculation
is the assumption that when one player busts out, each of the other
players gets half of the busted player's chips. He also assumes that
the number of total bets held by each player is large (i.e. no one is
about to be put all-in), but his reasons for this assumption are poker
related (he wants the number of hands in which each player has position
on the other to even out), and it doesn't apply to my method, which
assumes equal probabilities for all results (washing out poker-related
edges held by players).

Now to finish off this particular example:

• The probabilities of B and C coming in 3rd are the same of course,
which makes them 3/7 each (you get this if you do it the long way too).

• Repeating the process so that it ends at the vertex A gives the
probability that A will win, and it coincidentally comes out to be the
same probability that Malmuth gets: 1/2. HOWEVER, it is not immediately
clear that there is agreement in all chip distributions. More work has
to be done in this area to prove or disprove this. Malmuth makes a good
argument for this being the case (by considering the expectation for the
freeze-out tournament situation), so if my method does not get the same
results each time regarding finishing first, then something is clearly
wrong somewhere.

• Repeating the process so that the course of the tournament eliminates
either B or C first, and then eliminates A, gives the expected
probability of 5/14 (expected because adding it to 1/2 and 1/7 gives
unity).

Recap for player A with A = 2, B = 1, C = 1:

                          1st     2nd     3rd
Malmuth probabilities:    1/2     3/10    1/5
Weideman probabilities:   1/2     5/14    1/7

Okay, what does this mean in terms of money? Well, clearly the correct
payout is more heavily weighted toward the chip leader than Malmuth
claims. Specifically, for a payout of 50%, 30%, 20%, the payoffs to
player A are:

Malmuth: (1/2)(50%) + (3/10)(30%) + (1/5)(20%) = 38.00% of prize fund
Weideman: (1/2)(50%) + (5/14)(30%) + (1/7)(20%) = 38.57% of prize fund

This is not a huge difference (only $571 in a $100k purse), but it is
not obvious whether these two methods converge or diverge for other more
complicated cases.
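The settlement arithmetic above is just a dot product of finishing probabilities with the payout schedule; a tiny helper (the name is mine, chosen for illustration) makes it easy to recompute either figure:

```python
def equity(finish_probs, payouts):
    """Expected share of the prize fund, given the probabilities of
    finishing 1st/2nd/3rd and the payout fraction for each place."""
    return sum(p * s for p, s in zip(finish_probs, payouts))

malmuth  = equity((1/2, 3/10, 1/5), (0.50, 0.30, 0.20))   # 0.3800
weideman = equity((1/2, 5/14, 1/7), (0.50, 0.30, 0.20))   # ~0.3857
```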

At this point I'd like to describe some problems with my method:

1. I have not yet proven or disproven that my method predicts that the
probability of winning the event equals the fraction of the total
chips the player holds. I plan to at least work out the 3,1,1 case to
test the conjecture.

2. I've shown a result for stack sizes of 2 bets, 1 bet, and 1 bet, but
I have not proven that the same result occurs for any number of bets in
the same proportions. I'm still working on this (I haven't even gotten
around to calculating the results for A=4, B=2, C=2 yet). If in fact
the results DO change for greater numbers of total bets, then one
consequence is that some players gain and others lose as the betting
limits of the tournament go up.

3. I haven't yet developed a simplified method for calculating the
result for three arbitrary stack sizes (even on a computer). So far I
have to do the algebra in each individual case (and don't be fooled by
the apparent simplicity of this one case -- as the number of total bets
available increases, the amount of tedious labor increases very
rapidly). Matrix methods are looking promising for solving the general
case, and work is still in progress.

4. Extending the method to a case where the players do not play equally
well is difficult. Given a reasonable mathematical model of relative
playing ability, each individual case can still be calculated, but the
generalization mentioned in #2 seems even more daunting.

5. Three-way confrontations in a single hand are not considered in my
method. Incorporating these is particularly difficult, and it is not
clear that they will contribute much to the final answer. Still, any
departure from possible real scenarios should be mentioned.
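For what it's worth, the "matrix methods" mentioned in point 3 can be sketched directly: build one linear equation per interior state of the triangle and solve the system exactly. The code below is my own illustration of that idea (not Tom's program), using Python's exact rational arithmetic:

```python
from fractions import Fraction

def p_third_exact(stacks):
    """Exact P(player 0 busts first) in the triangle model, by solving
    the linear system over all states where every stack is positive."""
    n = sum(stacks)
    interior = [(a, b, n - a - b)
                for a in range(1, n) for b in range(1, n - a)]
    idx = {s: i for i, s in enumerate(interior)}
    m = len(interior)
    A = [[Fraction(0)] * m for _ in range(m)]
    rhs = [Fraction(0)] * m
    pairs = [(w, l) for w in range(3) for l in range(3) if w != l]
    for s in interior:
        i = idx[s]
        A[i][i] = Fraction(1)
        for w, l in pairs:                  # each transition has prob 1/6
            t = list(s); t[w] += 1; t[l] -= 1; t = tuple(t)
            if 0 in t:                      # boundary: someone busts
                if t[0] == 0:               # player 0 busts first
                    rhs[i] += Fraction(1, 6)
            else:
                A[i][idx[t]] -= Fraction(1, 6)
    # Gauss-Jordan elimination over exact fractions
    for c in range(m):
        p = next(r for r in range(c, m) if A[r][c])
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        inv = 1 / A[c][c]
        A[c] = [v * inv for v in A[c]]
        rhs[c] *= inv
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [v - f * u for v, u in zip(A[r], A[c])]
                rhs[r] -= f * rhs[c]
    return rhs[idx[tuple(stacks)]]

# p_third_exact((2, 1, 1)) gives Fraction(1, 7)
```

The number of states grows quadratically with the total number of bets, and dense elimination grows quickly from there, so this sketch is only practical for small stacks; it does reproduce the 1/7 result for the 2-1-1 case.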

Any feedback from you math.weenies out there is appreciated.

Tom Weideman

Erik Reuter

Jun 25, 1997

Thanks for posting your method! I can see it now, everyone at the final
table pulling out calculators and scribbling triangles....

I haven't had time to study your method yet, but I did run a Monte Carlo
simulation. Here's a third data point to add to your list:

Reuter: 40.02% of prize fund (I think the uncertainty is about +/- .05%)

In article <33B183...@dcn.davis.ca.us>, Tom Weideman
<zugz...@dcn.davis.ca.us> wrote:
>
> Malmuth: (1/2)(50%) + (3/10)(30%) + (1/5)(20%) = 38.00% of prize fund
> Weideman: (1/2)(50%) + (5/14)(30%) + (1/7)(20%) = 38.57% of prize fund

--
Erik Reuter, e-re...@uiuc.edu

Tom Weideman

Jun 25, 1997

Mason has reminded me by private email that in fact the essays regarding
tournament settlement appearing in his _Gambling Theory_ book were
written by Mark Weitzman, rather than him. I was careless in this
regard because it was not my intention to point fingers at an
individual's "wrong" calculation, but just to compare my work with the
work that came before. Still, I apologize to Mr. Malmuth for my
mistake.

Mason further points out that HIS method appears in the 4th edition of
the same book (which I do not own -- I have the third edition). He
tells me that he does not assume that half the stack of a busted player
goes to each of the remaining players. Can anyone who owns this edition
tell me if his method is equivalent to mine, and if not, what
assumptions does he make?

I should point out that I tried plugging in a variable for the fraction
(rather than the constant 1/2) of the stack that goes to player A when
another player busts out, to see what fraction would be correct. I
found that no solution exists, i.e. that NO constant fraction gives a
result that comes out equivalent to my solution.

Tom Weideman

Harry 026

Jun 26, 1997

About a year ago, following three articles in CardPlayer, I also
researched the problem of how to equitably divide the prize pool among the
final three players in a tournament. The third article in CardPlayer was
by Mike Caro, in which he stated that the two previous articles had
erroneous results because the division of the prizes depends on the bet
sizes. My investigations supported his conclusion.

I ran a number of Markov chains, where A, B, and C (the chip holdings
of the three players) would vary between about 4 and 9. (Although I
haven't studied Tom's posting yet, I believe that my Markov chains
accomplished what he did with his case where A=2, B=1, and C=1. He is
certainly correct when he says that the arithmetic becomes formidable as
the chip holdings increase.) I ran the chains for different betting
possibilities, including the following:
1) On each hand, the lowest goes all in and the other players call.
2) On each bet, two players are selected to play (equally likely
that any pair of players are involved) and the lowest goes all in.
3) On each hand, all players bet one unit.
etc.

The results of the various runs led me to these conclusions:
1) The probability of a player winning the tournament is the same
as his chip holdings divided by the total chips in play, regardless of the
betting structure. (This is the conventional wisdom, and doesn't surprise
me, but I see no obvious mathematical justification. I'd be grateful to
anyone who can supply one.)
2) The probability of a player coming in second is a function of
his chip holdings *and* the betting pattern.

I came up with formulas for an N-way split, based on the following
assumptions:
1) All players are equally likely to win any hand, regardless of
skill or position.
2) The probability of Player A placing first is equal to the fraction
of the total chips in play held by A.
3) If Player A doesn't place first, then the probability that he will
come in second is equal to A's chips divided by the total chips of A and
the third-place finisher.

In the formulas below, these symbols apply:
A is player A's chip holdings;
B is player B's chip holdings, etc.
T is the total number of chips in play;
P1 is first-place prize (or percentage);
P2 is second-place prize, etc.

For two players, the formula is the one everyone agrees on.

For three players, A's share =

[A*P1 + B*(A*P2+C*P3)/(A+C) + C*(A*P2+B*P3)/(A+B)]/T

(To compute the amounts players B and C should receive, rename the players
and use the above formula again.)

The formula for four players is about five times as long as the one
for three players so I won't attempt to give it here.

Using Tom's case of A=2, B=1, C=1, with P1=50%, P2=30% and P3=20%, my
formula gives the following distribution:

A should get 38.33%
B should get 30.83%
C should get 30.83%.

My formula might be a good compromise, since it almost exactly splits
the difference between Tom's 38.57% and Malmuth's 38.00% share for A.
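Harry's three-player formula translates directly into code (the function name is mine); renaming the players, as he suggests, gives the other shares:

```python
def share_A(A, B, C, P1, P2, P3):
    """Harry's three-player settlement formula: player A's share of
    the prize fund, in the same units as P1..P3."""
    T = A + B + C
    return (A * P1
            + B * (A * P2 + C * P3) / (A + C)
            + C * (A * P2 + B * P3) / (A + B)) / T

a_share = share_A(2, 1, 1, 50, 30, 20)   # ~38.33
b_share = share_A(1, 2, 1, 50, 30, 20)   # renamed players: ~30.83
```

For Tom's 2-1-1 case this reproduces the 38.33% / 30.83% / 30.83% split above, and the three shares sum to the whole prize fund.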

The Markov-chain program does the necessary arithmetic in a few
seconds. Setting up the transition matrix is the bottleneck. Details
supplied upon request.

Harry

Harry 026

Jun 26, 1997

A brief correction to my previous post. The third assumption:

> 3) If Player A doesn't place first, then the probability that he
> will come in second is equal to A's chips divided by the total chips
> of A and the third-place finisher.

is for the three-player case only. A modification of this assumption is
required for the four- (or more-) player case.

I am not claiming that this assumption is true, since the true
probability of A coming in second depends on the betting structure. But,
with this assumption (and the other two), the given formula follows.

Harry

Erik Reuter

Jun 26, 1997

In article <19970626155...@ladder01.news.aol.com>,
harr...@aol.com (Harry 026) wrote:
> 1) The probability of a player winning the tournament is the same
> as his chip holdings divided by the total chips in play, regardless of the
> betting structure. (This is the conventional wisdom, and doesn't surprise
> me, but I see no obvious mathematical justification. I'd be grateful to
> anyone who can supply one.)

I'd be interested in this as well. The best I can do is to note (assume a
3 handed tournament with players of equal ability) if your chip fraction
is 0, your chance of winning first is 0, and if everyone's chip fraction
is 1/3 then your chance is 1/3, and if it is 1, then your chance of
winning is 1. So it comes down to proving that the correct way to connect
the dots is linearly.

> 2) The probability of a player coming in second is a function of
> his chip holdings *and* the betting pattern.

Yes. As Caro points out, the bigger the betting and the more multi-way
confrontations, the higher the EV of the player with the most chips. In
Weideman's problem, 2, 1, and 1 chips, it is quite likely that someone
will go all-in every time (because of antes and blinds) and everyone will
call. So the 2 chip player should have a higher EV than if the stacks were
2K,1K,1K with small antes.

Did you see Hill's article in Card Player? I like his method for anyone
trying to negotiate a tournament deal at the table. A programmable
calculator could easily be programmed with the necessary equations:

Take the example of a 0.5/0.3/0.2 payoff for each place and chip fractions
a + b + c = 1, (a > b) && (a > c)


A gets (0.5 a) for his fraction of 1st, and also since A has the most
chips one would expect him to be more likely to finish 2nd than 3rd, so
try to bracket the answer:

A will finish 2nd or 3rd with probability (1-a), so A will finish second
with probability range

[(1 - a) / 2, (1 - a)]..........A, 2nd

and 3rd with

[0, (1 - a) / 2]................A, 3rd

so A's fair share of the prize fund is between

(a)(0.5) + (1 - a)(0.3) + (0)(0.2) = 0.3 + 0.2 a

and

(a)(0.5) + ((1 - a) / 2)(0.3) + ((1 - a) / 2)(0.2) = 0.25 + 0.25 a

so negotiate A's share somewhere in the range

[(1 + a) / 4 , 0.3 + 0.2 a]........prize share range for A

For Weideman's 2/1/1 example, a = 0.5, b = c = 0.25 and the result is

[37.5% , 40%]........prize share range for A
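Hill's bracket, as described above, reduces to two one-line bounds; this is my own transcription with hypothetical names, not code from the Card Player article:

```python
def hill_bracket(a, payouts=(0.5, 0.3, 0.2)):
    """Bracket for chip leader A's fair prize share, where a is A's
    chip fraction: A finishes 2nd with probability somewhere between
    (1 - a)/2 and (1 - a)."""
    p1, p2, p3 = payouts
    low  = a * p1 + ((1 - a) / 2) * (p2 + p3)  # 2nd and 3rd equally likely
    high = a * p1 + (1 - a) * p2               # always 2nd when not 1st
    return low, high

# hill_bracket(0.5) comes out to approximately (0.375, 0.40)
```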

--
Erik Reuter, e-re...@uiuc.edu

John MacDonald

Jun 27, 1997

It's been a while since I've read it but isn't this topic covered in
Edward Packel's "The Mathematics of Games and Gambling"? I'll have to
check my library at home and get back to you next week.

"Perhaps the men of genius are the only true men.And the rest of us - what are we? Teachable animals" ... AldousHuxley
** John MacDonald - The views expressed are my own. I apologize in advance for any unintentiional **
** errors made. As with most of the posters here, I am only human and hence prone to malfunction. **


David Monaghan

Jun 28, 1997

On Thu, 26 Jun 1997 18:20:32 -0500, e-re...@uiuc.edu (Erik Reuter)
wrote:

>In article <19970626155...@ladder01.news.aol.com>,
>harr...@aol.com (Harry 026) wrote:
>> 1) The probability of a player winning the tournament is the same
>> as his chip holdings divided by the total chips in play, regardless of the
>> betting structure. (This is the conventional wisdom, and doesn't surprise
>> me, but I see no obvious mathematical justification. I'd be grateful to
>> anyone who can supply one.)
>
>I'd be interested in this as well. The best I can do is to note (assume a
>3 handed tournament with players of equal ability) if your chip fraction
>is 0, your chance of winning first is 0, and if everyone's chip fraction
>is 1/3 then your chance is 1/3, and if it is 1, then your chance of
>winning is 1. So it comes down to proving that the correct way to connect
>the dots is linearly.
>

There are a lot of contributors better at the maths than me but I can
say from experience that if you run a Monte Carlo with a single
betting unit for each confrontation and an even money proposition for
each bet the chances of winning are proportional to the percentage of
chips a player has. Of course the "all other things equal" situation
never exists in real life but it does allow one factor to be teased
out from any calculation about win chances.


DaveM

Harry 026

Jul 1, 1997

I wrote, concerning a tournament of two or more players:

> 1) The probability of a player winning the tournament
> is the same as his chip holdings divided by the total chips
> in play, regardless of the betting structure. (This is the
> conventional wisdom, and doesn't surprise me, but I see
> no obvious mathematical justification.

Here is a justification:

Suppose A holds n out of a total of N chips. Imagine that A is a
team of n players, and all of B, C, D,... form a team of N-n players, with
each player holding a single chip. Assuming equal abilities, every member
of each team has an equal probability of winning the tournament: 1/N.
Since A wins if any member of the team wins, and only one member can win,
then the probability that A will win is n/N. The betting structure is not
a factor, because a single hand might result in the A-team winning or
losing anywhere from zero to n chips. After the hand, although the number
of chips that A holds might have changed, the argument can be reapplied.
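Harry's conclusion can also be checked numerically for a particular betting structure. A sketch (my own code, assuming a "shortest stack goes all in against one random opponent at even money" structure, which is one of the structures Harry listed):

```python
import random

def win_prob(stacks, trials=100_000, seed=7):
    """Estimate P(player 0 ends with all the chips) when, on each hand,
    the shortest stack goes all in against one random opponent at even
    money.  Any fair structure makes each stack a martingale, so the
    answer should be chips/total regardless of the structure."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        chips = list(stacks)
        alive = [i for i in range(len(chips)) if chips[i] > 0]
        while len(alive) > 1:
            short = min(alive, key=lambda i: chips[i])
            opp = random.choice([i for i in alive if i != short])
            stake = min(chips[short], chips[opp])
            if random.random() < 0.5:
                winner, loser = short, opp
            else:
                winner, loser = opp, short
            chips[winner] += stake
            chips[loser] -= stake
            if chips[loser] == 0:
                alive.remove(loser)
        if alive == [0]:
            wins += 1
    return wins / trials

# win_prob((1, 2, 2)) should land near 1/5 = 0.2
```

Since every confrontation is a fair bet, each player's stack is a martingale and the win probability equals the chip fraction, which is exactly what the simulation shows for a 1-chip stack out of 5.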

Harry

Zagie

Jul 1, 1997

In article <19970701152...@ladder02.news.aol.com>,
harr...@aol.com (Harry 026) writes:

This implies that there is no inherent advantage to being the big stack,
with the opportunity to bully the other players. But that is clearly not
true. So you've convinced me of the opposite of your assertion, and I
thank you for it, because until now I had been puzzled.

To look at it another way, if you are the big stack in the
latter stages of a tournament -- especially if you have
a big lead -- then AJo is a 'good' hand for you. You
can bet it and even call with it if the bettor is in desperate
straits, and you will make money with it, over the long run.

On the other hand, for the medium stacks, AJo is
just a dog without much opportunity to earn more chips
with. Even if you don't agree with where I draw the line,
you have to agree that there are hands which are money
winners for big stacks but are too risky for the medium
stacks.

If there are hands which are money winners for the big
stacks that are not for the medium stacks, then the
big stack has an advantage that is MORE than the
linear advantage of stack size.

I think.

Regards,
Zag

Tom Weideman

Jul 1, 1997

Harry 026 wrote:
>
> I wrote, concerning a tournament of two or more players:
>
> > 1) The probability of a player winning the tournament
> > is the same as his chip holdings divided by the total chips
> > in play, regardless of the betting structure. (This is the
> > conventional wisdom, and doesn't surprise me, but I see
> > no obvious mathematical justification.

I don't recall reading this (my news server may not have received it
yet, it does that sometimes).

> Here is a justification:
>
> Suppose A holds n out of a total of N chips. Imagine that A is a
> team of n players, and all of B, C, D,... form a team of N-n players, with
> each player holding a single chip. Assuming equal abilities, every member
> of each team has an equal probability of winning the tournament: 1/N.
> Since A wins if any member of the team wins, and only one member can win,
> then the probability that A will win is n/N. The betting structure is not
> a factor, because a single hand might result in the A-team winning or
> losing anywhere from zero to n chips. After the hand, although the number
> of chips that A holds might have changed, the argument can be reapplied.

This argument seems okay to me. I also like the one given in _Gambling
Theory_. The argument there goes something like this: If the tournament
were a freezeout, and all of the players are equally skilled, then the
expected return of each player is exactly the number of chips they have
in front of them. This equals the probability of winning times the
total prize pool. Solving for the former gives the ratio of the
player's stack size to the total number of chips.

Though I have not yet proven it for a general number of bets N and stack
size n, I have gotten around to calculating a sufficient number of 1st
place probabilities using my model to convince myself that this same
probability is reached using my "triangle model" (it's amazing too,
since the answer magically falls out of tons of algebra that gives no
hint that the probability is headed toward n/N). What this means in
terms of the triangle model is that every point along a straight-line
chord through the triangle has the same win probability for the player
whose vertex is opposite that chord. The REAL trick is not in finding
the win probability, but the 2nd and 3rd place probabilities.

I have also recently discovered some other subtle aspects of the model,
which I hope will lead me to a deeper insight into the problem. I'll
post them once I've collected my thoughts in a more coherent manner.
Actually, maybe I won't after all... few people seem interested in this
topic anyway, perhaps because it doesn't result in any significant (i.e.
real-life, real money-related) changes.

Tom Weideman

RPGross

Jul 1, 1997

I think it is intuitively obvious that with 3 players of equal ability
and equal stacks, each will win 1/3 of the time. However, it is not so
obvious, at least to me, that a player with 20% of the chips will win
20% of the time. My belief is that either this will happen, or (my
alternative hypothesis) the 20% stack will win more than 20%. Of
course, there is a third possible hypothesis: less than 20%, and I
can't rule this out either.

Anyway, isn't there some software out there that could test this
empirically? Just set up a trial with 20%, 30%, and 50% stacks and run
this until two players bust enough times to get a reliable frequency
distribution. I've got Turbo Texas, but I can't think of a way to do
this on that. I'm not familiar with other software, though.

I just reread your post and you seem to say that you have already done
this. If so, why not accept that as the answer?


Mr. Bob

Jul 2, 1997

> > Suppose A holds n out of a total of N chips. Imagine that A is a
> > team of n players, and all of B, C, D,... form a team of N-n players, with
> > each player holding a single chip. Assuming equal abilities, every member
> > of each team has an equal probability of winning the tournament: 1/N.
> > Since A wins if any member of the team wins, and only one member can win,
> > then the probability that A will win is n/N. The betting structure is not
> > a factor, because a single hand might result in the A-team winning or
> > losing anywhere from zero to n chips. After the hand, although the number
> > of chips that A holds might have changed, the argument can be reapplied.
>
> This argument seems okay to me. I also like the one given in _Gambling
> Theory_. The argument there goes something like this: If the tournament
> were a freezeout, and all of the players are equally skilled, then the
> expected return of each player is exactly the number of chips they have
> in front of them. This equals the probability of winning times the
> total prize pool. Solving for the former gives the ratio of the
> player's stack size to the total number of chips.

For the two player case, you MUST make the following assumption: the two
players are identical, not merely "equally skilled." In other words, they
are using exactly the same strategy.

Even with this strict assumption, you still may not be able to obtain the
n/N result if there is any net advantage or disadvantage to being able to
go "all in" in the particular game being analyzed.

For the case of n>2 players you must also make the following assumptions:

there is no net positional (location at the table) advantage / disadvantage

there is no collusion between any of the players

For two players in a finite zero-sum game (such as heads up poker), there
must be an equilibrium set of strategies, and there must be a strategy
whose worst case scenario is a zero expectation (the maxi-min strategy).

When there are three or more players, the analysis falls apart, unless
more restrictive assumptions are made.


Mr. Bob

P.S. Mandatory PRESTO Story: Master and Grasshopper

15-30 at the Showboat in East Chicago last Friday. UTG raises pre-flop,
two callers, I re-raise on the button with KK, blinds fold, all others
call.
Flop comes AK5. UTG bets out, two callers, I raise, UTG re-raises, fold,
fold, I call. Turn: A. UTG bets, I raise, UTG raises (oops..), I call.
River 8, no flush possible. UTG bets, I raise, UTG re-raises, I call.

Now this particular player has seen me do my little PRESTO dance a few
times, and has sat next to me when I've spoken at great length about the
power of PRESTO. He proceeds to imitate me- stands up and slams down his
cards as he says, "PRESTO!"

I start to smile as I turn over my cards, and say in my best (which is
poor) Kung-Fu imitation, "Ah GrassHoppah you play PRESTO poorly. You make
four bet pre-flop, I put you on Aces and fold on turn." (I'm lying.) "You
must learn play presto right." Ahhh, the power of PRESTO.

I'm such a jerk sometimes.

Ramsey

Jul 2, 1997

Zag writes (in response to a post from Harry 026 related to the final
stages in tournament play)

>This implies that there is no inherent advantage to being the big stack,
>with the opportunity to bully the other players. But that is clearly not
>true. So you've convinced me of the opposite of your assertion, and I
>thank you for it, because until now I had been puzzled.
>
>To look at it another way, if you are the big stack in the
>latter stages of a tournament -- especially if you have
>a big lead -- then AJo is a 'good' hand for you. You
>can bet it and even call with it if the bettor is in desparate
>straits, and you will make money with it, over the long run.
>
>On the other hand, for the medium stacks, AJo is
>just a dog without much opportunity to earn more chips
>with. Even if you don't agree with where I draw the line,
>you have to agree that there are hands which are money
>winners for big stacks but are too risky for the medium
>stacks.
>
>If there are hands which are money winners for the big
>stacks that are not for the medium stacks, then the
>big stack has an advantage that is MORE than the
>linear advantage of stack size.
>
>I think.
>

Whenever someone writes "it is obvious that" or, as in this case
"clearly" I have an overwhelming desire to muddy the waters. So here
goes...

Any calculations (or simulations) related to the final stages of a
tournament are going to be iffy because there are at least 2 factors
which they are unlikely to handle realistically. These are the effect
of the blinds/antes which typically are large in relation to stack sizes
(and rise frequently) and the effect of strategy changes as a function
of stack sizes.

However to agree that the chance of winning a tournament is directly
related to the ratio of your chips to total chips is not to
(necessarily) deny that the large stack has an inherent advantage. The
large stack will have an inherent advantage if the chances of coming
second are greater proportionally than you would expect. In Tom's
original post (for a 3 player final with stacks of 2-1-1) he calculates
the chance of the big stack coming second as 5/14 with the small stacks
9/28. This implies an inherent advantage for the big stack.
Interestingly the comparison figures he quotes from Mason Malmuth's book
gives 3/10 for the large stack and 7/20 for the small stacks - this
gives the small stacks the advantage.

This is where stack size strategy and the real world intervene. If you
have a large stack and the other two players have very small stacks you
can normally engineer the situation where the other two players are
forced to go all-in. This, if we accept the original premise, does not
increase (or decrease) your chance of winning the tournament but it does
significantly increase the chance that you don't come third and
therefore is a strongly +ev strategy.

This gives us a guide to the correct strategy in the final stages of a
tournament when you have a large stack. Surprisingly the strategy is
*not* to use your stack to bully your opponents but rather you use it to
avoid confrontations. That is the large stack allows you to fold more
often and force the smaller stacks into confrontations between
themselves. This does not increase your chances of winning but it
guarantees a higher place on the payout scale.

Suppose you hold AJ (or a similar hand) in the late stages of a
tournament. Whether you are big stack or nearly all-in you should raise if
you are first-in. However if someone has already raised then as big
stack you can (and should) fold but if you are small to medium stacked
you virtually have to call - because you probably won't get a better
hand before you are blinded away.

In general the smaller your stack the weaker the hands you are forced to
play to avoid being blinded away, and the more often you have to play
hands (and each hand a small stack plays is a chance to go bust).

Suppose the game has just 1 blind and 5 players. If player A has 10
blinds then he can go 10 rounds and see 50 hands and with a fair wind
will win 10 blinds to be back level at the end of that time. If player
B has 1 blind then he has to play at least 1 of the next 5 hands, and he
has to constantly lower his standards (if you fold this hand you might
get worse hands on the remaining x opportunities you have, so do you fold
this slightly substandard hand and hope, or take a chance now?). And if
player B wins the first confrontation he will still have to pay a blind
almost immediately and so will be back in the same situation.
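The arithmetic above can be sketched in a few lines. Assuming, as in the example, one blind per round of 5 players, a stack of b blinds buys b rounds, i.e. 5*b dealt hands, and the chance of being dealt at least one "top x%" hand in h hands is 1 - (1 - x)^h. The 5% premium cutoff below is an arbitrary illustration, not a claim about correct standards:

```python
def hands_left(stack_in_blinds, players=5):
    # One blind per round of `players` hands: a stack of b blinds
    # survives b rounds, i.e. players * b dealt hands.
    return stack_in_blinds * players

def p_at_least_one_premium(hands, top_fraction):
    # Chance of being dealt at least one "top x%" hand in `hands` deals.
    return 1 - (1 - top_fraction) ** hands

for blinds in (10, 3, 1):
    h = hands_left(blinds)
    print(blinds, h, round(p_at_least_one_premium(h, 0.05), 3))
```

So the 10-blind stack is almost certain (about 92%) to see a top-5% hand before being blinded away, while the 1-blind stack gets only about a 23% chance and must lower his standards accordingly.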

So the inherent advantage of the big stack is not that they can bully
with more hands but that they will be able to play, on average, better
hands than the weaker stacks.

As a sidenote my experience is that the bullying strategy is effective
in the middle stages of the tournament when the blinds are still low
enough for them not to be overwhelmingly important and the number of
remaining players is dropping quickly. At this time players in general
become very conservative trying to stay alive to reach the final stages.
Here you can bully with great effect both with preflop raises and on the
flop/turn when it is obvious that nobody has hit anything.

Confused?...
--
Ramsey
sjri...@sjrindex.demon.co.uk

David Monaghan

unread,
Jul 3, 1997, 3:00:00 AM7/3/97
to

On Wed, 02 Jul 1997 09:38:47 -0500, bobm...@NOSPAMnwu.edu (Mr. Bob)
wrote:

>For the two player case, you MUST make the following assumption: the two
>players are identical, not merely "equally skilled." In other words, they
>are using exactly the same strategy.

This may just be semantics but I would take the phrase "equally
skilled" to mean that both their strategies and judgement are equally
good (or poor). In these circumstances neither player has an
advantage and the outcome is determined by chance and so is
proportional to stack size. I can't see any reason why the players
can't use different strategies provided neither is superior or
dominant over the other.

DaveM

Zagie

unread,
Jul 8, 1997, 3:00:00 AM7/8/97
to

<Ramsey wrote a thought-provoking letter about the
advantages of being the tall stack in the late stages
of a tourney. His general theme was that the luxury
of being able to wait for good hands without fear of
being blinded out was more important than the ability
to bully the other players.>

This makes a lot of sense, and normally I would
consider my tiny bit of experience compared
to yours and keep my mouth shut. However, in
looking over the final table of the WSOP, I see
that Stu U. clearly bullied the other players
unmercifully. A statistic in "Card Player" said that
he attacked the blinds more than any two other
players. He clearly was the chip leader at the
time, so it is exactly the situation we are talking
about. And -- nothing personal, Ramsey -- I'd take
his advice over yours in a heartbeat.

Did I misunderstand something?

Regards,
Zag

Ramsey

unread,
Jul 8, 1997, 3:00:00 AM7/8/97
to

In article <19970708160...@ladder01.news.aol.com>, Zagie
<za...@aol.com> writes

><Ramsey wrote a thought-provoking letter about the
>advantages of being the tall stack in the late stages
>of a tourney. His general theme was that the luxury
>of being able to wait for good hands without fear of
>being blinded out was more important than the ability
>to bully the other players.>
>
>This makes a lot of sense, and normally I would
>consider my tiny bit of experience compared
>to yours and keep my mouth shut. However, in
>looking over the final table of the WSOP, I see
>that Stu U. clearly bullied the other players
>unmercifully. A statistic in "Card Player" said that
>he attacked the blinds more than any two other
>players. He clearly was the chip leader at the
>time, so it is exactly the situation we are talking
>about. And -- nothing personal, Ramsey -- I'd take
>his advice over yours in a heartbeat.
>
>Did I misunderstand something?
>

No, I am sure Stu and I agree on this.

I haven't analysed this year's final table but I did last year's and posted
the results. I don't have it to hand but I remember one startling fact
about the hands Huck Seed (the winner) played: he raised far more
preflop than anyone else and the amount he won in blinds/antes when
players folded was more than the amount he paid when he raised and was
called/reraised. In other words all the times he was called preflop he
was free-rolling. This is superb bullying :)

Unfortunately you and I are unlikely to be at the final table at WSOP
and there is a fundamental difference between that and the final tables we
will normally play at: In the WSOP the blinds/antes at the final table
are still a relatively low proportion of the average stack; in our
tournaments they are geared to finish quickly and the blinds are a very
high proportion of the stack size. Typically at the start of a 10
player pot-limit final the blinds will be 1k and 1k with 100k chips in
circulation. So the average stack will not have enough to make a full
bet on the flop if there is a preflop raise.
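The claim about the average stack follows from pot-limit arithmetic. A rough sketch, assuming the structure quoted above (blinds of 1k and 1k, 100k chips among 10 players) and the usual pot-limit rule that the maximum raise equals the size of the pot after the raiser's call:

```python
# Assumed structure: 1k/1k blinds, 100k chips among 10 players.
small_blind = big_blind = 1_000
avg_stack = 100_000 // 10               # 10k average stack

pot = small_blind + big_blind           # 2k in preflop
call = big_blind                        # raiser first calls 1k...
max_raise = pot + call                  # ...then may raise by 3k (pot-limit)
raiser_in = call + max_raise            # 4k committed by the raiser
pot += raiser_in                        # 6k after the raise
pot += raiser_in - small_blind          # small blind calls 3k more
pot += raiser_in - big_blind            # big blind calls 3k more
print(pot)                              # 12000: pot going to the flop
print(avg_stack - raiser_in)            # 6000: what the raiser has behind
```

A pot-sized raise called by both blinds builds a 12k pot while the average stack has only 6k behind, so a full (pot-sized) bet on the flop is indeed out of reach.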

When you have sufficient chips to see several more rounds of hands you
can be bullied. When you are going to be blinded away in the next few
hands you can't be bullied; you have to pick a hand to make a stand. So
if you are the big stack try bullying when the other players can afford
to fold but otherwise play to conserve your chips.

This is what I was trying to get across with my last paragraph in the
previous post:

>As a sidenote my experience is that the bullying strategy is effective
>in the middle stages of the tournament when the blinds are still low
>enough for them not to be overwhelmingly important and the number of
>remaining players is dropping quickly. At this time players in general
>become very conservative trying to stay alive to reach the final stages.
>Here you can bully with great effect both with preflop raises and on the
>flop/turn when it is obvious that nobody has hit anything.

So the middle stages of low level tournaments equate (strategy-wise) to
the final table of WSOP - now there's a thought :)

--
Ramsey
sjri...@sjrindex.demon.co.uk
