
Going Too Far & Implicit Collusion


Andy Morton

Apr 3, 1997

Implicit Collusion and Going too Far


I usually enjoy reading Mike Caro's Card Player column. One from last June
made a big impression on me. In it he says:

_The real low-limit secret for today_. The most important thing
I can teach you about playing the lower limits is that you
usually should *not* raise from early positions, no matter what
you have... because all of those theories of thinning the field
and driving out opponents who might draw out on you don't hold
true in these smaller games [where] you're usually surrounded
by players who often call with nearly hopeless hands.... Which
is better, playing against a few strong and semistrong players
with possibly a small advantage for double stakes, or playing
against a whole herd of players, mostly weak, for single stakes?
Clearly, when you're not likely to win the pot outright by
chasing everyone out, you want to play against weak opponents,
and the more the merrier. So, why raise? There, I've just
described one of the costliest mistakes in low-limit poker. The
mistake is raising when many potential callers remain behind
you, thus chasing away your profit. Don't do that.

Until recently, this made a lot of sense to me. After all, the Fundamental
Theorem of Poker states (roughly) that when your opponents make mistakes, you
gain, and when they play correctly, you lose. In holdem, if all of those
calling stations in the low-limit games want to chase me with their 5 out
draws to make trips or 2 pair when I flop top pair best kicker, and they
don't have the pot odds to correctly do so, that sounds like a good situation
for me.

Yet, it seems like these players are drawing out so often that something must
be wrong. Hang around the mid-limits, holdem or stud, for any length of time
and you're sure to hear players complain that the lower limit games can't be
beat. You can't fight the huge number of callers, they say. You can't
protect your hand once the pot has grown so big, they say.

At first, I thought these players were wrong. They just don't understand the
increased variance of playing in such situations, I told myself. In one
sense, these players are right, of course. The large number of calling
stations combined with a raise or two early in a hand make the pots in these
games very large relative to the bet size. This has the effect of reducing
the magnitude of the errors made by each individual caller at each individual
decision. Heck, the pot might get so big from all that calling that the
callers _ought_ to chase. For lack of a better term, I call this behavior on
the fishes' part _schooling_. Still, tight-aggressive players are on average
wading into these pots with better than average hands, and in holdem when
they flop top pair best kicker, for example, they should be taking the best
of it against each of these long-shot draws (like second pair random kicker).
In holdem, the schooling phenomenon increases the variance of the player who
flops top pair holding AK, but probably also _increases_ his expectation in
the long run, I thought, relative to a game where these players are correctly
folding their weak draws.

Thinking this way, I was delighted to follow Caro's advice, and not try to
run players with weak draws out of the pots where I thought I held the best
hand on the flop or turn. This is contrary to a lot of advice from other
poker strategists, as Caro points out, and I found myself (successfully, I
think) trying to convince some of my poker playing buddies of Caro's point of
view in a discussion last week.

Well, some more thinking, rereading some old r.g.p. posts (thank you,
dejanews), a long discussion with Abdul Jalib, and a little algebra have
changed my mind: I think Caro's advice is dead wrong (at least in many
situations) and I think I can convince you of this, if you'll follow me for
a bit longer.

What I'm going to tell you is that if you bet the best hand with more cards
to come against two or more opponents, you will often make more money if some
of them fold, *even if they are folding correctly, and would be making a
mistake to call your bet.* Put another way, *you want your opponents to fold
correctly, because their mistaken chasing you will cost you money in the long
run.* I found this result very surprising to say the least. I've never seen
it described correctly in any book or article, although at least a few posts
to this newsgroup have concerned closely related topics.

I'm no poker authority but I think this concept has got to lead to changes in
strategy in situations where players are chasing too much (and yes, Virginia,
this happens not only in the 3-6 games, but also in the higher limits from
time to time. Curiously, I have several friends who play very well who often
complain that they can't beat 20-40 games when they get loose like this, or
at least don't do as well in these games as they do in tighter games.
hmmm....). Let's look at a specific example.


Suppose in holdem you hold AdKc and the flop is Ks9h3h, giving you top pair
best kicker. When the betting on the flop is complete you have two opponents
remaining, one of whom you know has the nut flush draw (say AhTh, giving him
9 outs) and one of whom you believe holds second pair random kicker (say
Qc9c, 4 outs), leaving you with all the remaining cards in the deck as your
outs. The turn card is an apparent blank (say the 6d) and we'll say the pot
size at that point is P, expressed in big bets.

When you bet the turn player A, holding the flush draw, is sure to call and
is almost certainly getting the correct pot odds to call your bet. Once
player A calls, player B must decide whether to call or fold. To figure out
which action player B should choose, calculate his expectation in each case.
This depends on the number of cards among the remaining 46 that will give him
the best hand, and the size of the pot when he is deciding:

E(player B|folding) = 0

E(player B|calling) = 4/46 * (P+2) - 42/46 * (1)

Player B doesn't win or lose anything by folding. When calling, he wins the
pot 4/46 of the time, and loses one big bet the remainder of the time.
Setting these two expectations equal to each other and solving for P lets us
determine the potsize at which he is indifferent to calling or folding:


E(player B|folding) = E(player B|calling) => P'_B = 8.5 Big bets

When the pot is larger than this, player B should chase you; otherwise, it's
in B's best interest to fold. This calculation is familiar to many
rec.gamblers, of course.

To figure out which action on player B's part _you_ would prefer, calculate
your expectation the same way:

E(you|B folds) = 37/46 * (P+2)

E(you|B calls) = 33/46 * (P+3)

Your expectation depends in each case on the size of the pot (ie, the pot
odds B is getting when considering his call). Setting these two equal lets
us calculate the potsize P where you are indifferent whether B calls or
folds:

E(you|B calls) = E(you|B folds) => P'_you = 6.25 Big bets.

When the pot is smaller than this, you profit when player B is chasing, but
when the pot is larger than this, your expectation is higher when B folds
instead of chasing.
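
(For anyone who wants to check the algebra, here's a quick Python sketch of
the same calculation. The out counts 33/9/4 over 46 unseen cards and the
one-big-bet turn bets are taken straight from the example above; the function
names are just mine for illustration.)

    # Turn EV calculation from the AK / nut-flush-draw / Q9 example above.
    # 46 unseen cards: A (flush draw) has 9 outs, B (second pair) has 4,
    # and the other 33 cards keep your AK in front.  Bets are one big bet.
    UNSEEN = 46
    A_OUTS, B_OUTS = 9, 4
    MY_OUTS = UNSEEN - A_OUTS - B_OUTS          # 33

    def ev_b_calls(pot):
        # B calls one bet to win pot + your bet + A's call = pot + 2.
        return B_OUTS / UNSEEN * (pot + 2) - (UNSEEN - B_OUTS) / UNSEEN

    def ev_you(pot, b_calls):
        if b_calls:
            return MY_OUTS / UNSEEN * (pot + 3)
        # If B folds, his 4 outs can no longer beat you.
        return (MY_OUTS + B_OUTS) / UNSEEN * (pot + 2)

    # B is indifferent where ev_b_calls(P) = 0               ->  P = 8.5
    # you are indifferent where the two ev_you branches meet ->  P = 6.25
    for pot in (6.0, 7.0, 8.0, 9.0):
        print(pot, round(ev_b_calls(pot), 3),
              round(ev_you(pot, True) - ev_you(pot, False), 3))

For pots of 7 or 8 big bets the printout shows both numbers negative at once:
B is making a mistake by calling, and his call still costs you expectation.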

This is very surprising. There's a range of pot sizes (in this case between
6.25 and 8.5 big bets when the turn card falls) where it's correct for B to
fold, and you make more money when he does so than when he incorrectly
chases. You can see this graphically below:

                                  |
                    B SHOULD FOLD | B SHOULD CALL
                                  |
                                  v
                         |
      YOU WANT B TO CALL | YOU WANT B TO FOLD
                         |
                         v
+---+---+---+---+---+---+---+---+---+---> POT SIZE, P, in big bets
0   1   2   3   4   5   6   7   8   9
                         XXXXXXXXXX
                             ^
                     PARADOXICAL REGION

The range of pot sizes marked with the X's is where you want your opponent to
fold correctly, because you lose expectation when he calls incorrectly.


This is an apparent violation of the Fundamental Theorem of Poker, which
results from the fact that the pot is not heads up but multiway. (While
Sklansky states in Theory of Poker that the FToP does not apply in certain
multiway situations, it would probably be better to say that in general it
does not apply to multiway situations.) In essence what is happening is that
by calling when P is in this middle region, player B is paying too high a
price for his weak draw (he will win the pot too infrequently to pay for all
his calls trying to suck out), but you are no longer the sole beneficiary of
that high price -- player A is now taking B's money those times that A makes
his flush draw. Compared to the case where you are heads up with player B,
you still stand the risk of losing the whole pot, but are no longer getting
100% of the compensation from B's loose calls.

These sorts of situations come up all the time in Hold'em, both on the flop
and on the turn. It's the existence of this middle region of pot sizes,
where you want at least some of your opponents to fold correctly, that
explains the standard poker strategy of thinning the field as much as
possible when you think you hold the best hand. Even players with incorrect
draws cost you money when they call your bets, because part of their calls
end up in the stacks of other players drawing against you. This is why
Caro's advice now seems wrong to me, in general. Those weak calling stations
are costing you money when they make the mistake of calling too much. In
practice, when you flop a best but vulnerable hand, the pot size is rarely
small enough to fall below this middle region, where you would actually want
your opponents to call. Normally, the pot is big enough that you want them to
fold even when folding is the correct play for them. In loose games, the pot
size will often be at the high end of the scale, where you would love for
them to fold, but they have the odds to call and their fishy calls become
correct.

This brings up another interesting point. In our three-handed example, both
you and player B are losing money when B chases you incorrectly (both your
and his expectations would be higher if he folded). This implies that player
A is benefiting from B's call, since poker is a zero-sum game (neglecting
rake, etc). In fact, player A is benefiting _more_ from B's call than the
magnitude of B's mistake in calling (since you are also losing expectation
due to B's call).

Because you are losing expectation from B's call, it follows that the
_aggregate_ of all other players (ie, A and B) must be gaining from B's
call. In other words, if A and B were to meet in the parking lot after the
game and split their profits, they would have been colluding against you.
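
(To put rough numbers on the collusion point: continuing the little Python
sketch from earlier, here is how everyone's expectation shifts when B calls
the turn instead of folding, using the same 33/9/4 out counts. Again the
names are only for illustration.)

    # Change in each player's expectation when B calls the turn rather than
    # folds, using the same 33/9/4 out counts over 46 unseen cards as above.
    UNSEEN = 46
    MY_OUTS, A_OUTS, B_OUTS = 33, 9, 4

    def deltas(pot):
        d_you = MY_OUTS / UNSEEN * (pot + 3) - (MY_OUTS + B_OUTS) / UNSEEN * (pot + 2)
        d_a = A_OUTS / UNSEEN * (pot + 3) - A_OUTS / UNSEEN * (pot + 2)   # always +9/46
        d_b = B_OUTS / UNSEEN * (pot + 2) - (UNSEEN - B_OUTS) / UNSEEN    # versus 0 for folding
        return d_you, d_a, d_b

    for pot in (7.0, 8.0):                       # inside the paradoxical region
        d_you, d_a, d_b = deltas(pot)
        print(pot, round(d_you, 3), round(d_a, 3), round(d_b, 3),
              round(d_you + d_a + d_b, 3))

The three deltas always sum to zero, and inside the paradoxical region you
and B both lose while A picks up exactly what the two of you give away (about
a fifth of a big bet here).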

I don't really know Roy Hashimoto or Lee Jones, but I suspect that this
situation might be what Roy had in mind when he first described what he calls
"implicit collusion" in games where there are many calling stations: one
fish makes a play which reduces his overall expectation and all fish benefit
by more than the magnitude of the first fish's mistake. That's collusion,
just as if a player reraises with the worst hand to trap a third player for
more bets when the first player's buddy has the nuts. Of course no one
realizes there's collusion going on in these situations, so the collusion is
implicit. (I'd sure like to hear from Roy or Lee on this point, because I
think there's a significant difference between what I've called 'schooling'
and what I've called 'implicit collusion', and that the two concepts are
often confused with each other, but I'd hate to further confuse the issue by
misappropriating someone else's label for this phenomenon.)

There was an interesting thread on this group last year started by Mason
Malmuth called 'Going Too Far,' about the appropriate strategy changes in a
game where many players are calling too loosely not only before the flop but
also on the later streets. I suspect that the phenomenon described here
(where both the leader and the chasers are giving up expectation to the
player who is drawing to a very strong hand) lies behind the correct response
to his discussion in that thread. One strategy change he mentions is that
you'd like your starting hand to be suited in games like these. In light of
what I've presented here I can not only understand this strategy change, but
can see others as well. If this has made sense to anyone who can think of
other strategy changes resulting from these ideas, let's hear them.

Finally, having criticized something by one of the famous poker authors,
Abdul is encouraging me to go for broke <g>: It seems pretty clear that
Sklansky also missed this idea, at least when he was writing Winning Poker,
the precursor to Theory of Poker. First, he mentions that the Fundamental
Theorem applies to all two-way and nearly all multiway pots. While I haven't
proven it, it seems likely that nearly all multiway pots will contain some
sort of region of implied collusion where the leader would prefer that
players fold correctly, ie where the Fundamental Theorem breaks down. Later,
in the chapter "Win the Big Pots Right Away," Sklansky makes his ignorance of
this concept explicit. Discussing a multiway seven stud hand in which your
hand is almost certainly best on fourth street he writes:

You must ask yourself whether an opponent would be correct to
take [the odds you are giving him] knowing what you had. If so,
you would rather have that opponent fold. If not -- that is if
the odds against your opponent's making a winning hand are
greater than the pot odds he's getting -- then you would rather
have him call. In this case, instead of winning the pot right
away, you're willing to take the tiny risk that your opponent
will outdraw you and try to win at least one more bet. ...you
would not want to put in a raise to drive people out. (p. 62)

Slowplaying is certainly correct in some cases, but your 'druthers' in a
multiway pot can never be decided so simply as by asking whether each of your
individual opponents has the right pot odds to chase you.

Erik Reuter

Apr 6, 1997

This is the type of discussion I find most interesting on r.g.p. Thanks
for the post, Andy!

Despite a small mistake in the EV calculations, I think that the "middle
region" concept you point out is true. There are no doubt multi-way poker
situations where the hand with the largest pot equity would prefer a
drawing hand to make a correct fold (correct fold = folding is the play
which maximizes the folder's EV).

In article <33442B...@ix.netcom.com>, Andy Morton
<and...@ix.netcom.com> wrote:
> Suppose in holdem you hold AdKc and the flop is Ks9h3h, giving you top pair
> best kicker. When the betting on the flop is complete you have two opponents
> remaining, one of whom you know has the nut flush draw (say AhTh, giving him
> 9 outs) and one of whom you believe holds second pair random kicker (say
> Qc9c, 4 outs), leaving you with all the remaining cards in the deck as your
> outs. The turn card is an apparent blank (say the 6d) and we'll say the pot
> size at that point is P, expressed in big bets.

However, the interpretation of this concept in practice is difficult.
Suppose the pot is in the "middle region" and another hand is added to
your example which is also drawing to the flush (Jh2h). In this case, the
AK clearly has a higher EV when the lower flush draw calls (given that the
higher flush draw calls) since the lower flush draw has no chance to win.
If the only way to get the Q9 to fold also gets the J2 to fold, then the
change in the EV of the AK is the sum of a positive from the Q9 folding
and a negative from the J2 folding. In general, I think this will go both
ways in different situations, so it is not possible to say that one ALWAYS
(or even usually) should try to eliminate or not eliminate players. It
depends on specific holdings and specific situations.

If I have AdJd pre-flop, my intuition is that I would like to have AcTs
and Td8d playing against me, and it may be beneficial to not raise
pre-flop if raising would chase them out. The concept is that of a
"dominating hand", where I expect to make extra money in later betting
rounds by giving up some in early betting rounds in order to encourage
dominated hands to play with me. On the other hand, if the dominated hands
would have called a raise anyway, or if there are lots of hands with live
draws out, raising to get more money or eliminate players is surely best.

In article <33442B...@ix.netcom.com>, Andy Morton
<and...@ix.netcom.com> wrote:
[example EV calculation deleted]


> Slowplaying is certainly correct in some cases, but your 'druthers' in a
> multiway pot can never be decided so simply as by asking whether each of your
> individual opponents has the right pot odds to chase you.

Nicely demonstrated.

Getting back to the "middle region" concept, I think this is the clearest
demonstration I have seen of the problems with the FToP when applied to
multi-way pots. I remember having an email discussion some time ago where
it was suggested that Sklansky's Fundamental Theorem of Poker wasn't
fundamental, and often wasn't even correct. I believe I added my opinion
that it is also vague in its phrasing ("you lose...you win": compared to
what? how much?) and difficult to apply quantitatively.

Nevertheless, the FToP does have its conceptual uses for heads-up pots.
Perhaps what is needed is an additional theorem: the Fundamental Theorem of
Poker for Multi-way Pots. Without careful thought, I offer for discussion:
"In a multi-way pot with knowledge of all opponents' hole cards, there
exists a mathematical expected value (EV) for each possible way of playing
a hand, and the best play is that which has the maximum EV." Is it true?
Is it useful?

--
Erik Reuter, e-re...@uiuc.edu

abd...@earthlink.net

Apr 7, 1997

Thanks Andy, for your insightful (and maybe inciteful) article.

I've been pondering the question of table image for a long while.
Opinions differ widely on the subject, ranging from culturing
a loose-crazy image to desiring respect. In a recent Card Player
column, Roy Cooke wrote that although you'll make money as a
tight-aggressive player with a tight-aggressive image, you'll
make more money as a tight-aggressive player with a loose-crazy image.
That seemed hard to argue with. However, as a tight-aggressive player
with a tight-aggressive image, I win a lot of pots due to
respect, often "too much" respect like when I'm semi-bluffing.
Winning a pot when you do not have the cards is worth a lot, and that
sort of play is difficult to pull off with a loose image. That's why
I've argued in the past that a tight-aggressive player should appear
tight-aggressive.

Now Andy's article gives me a new perspective on the issue, perhaps a
confirmation of what I've believed is the correct approach. Andy
shows that when we have a best but vulnerable hand in a multiway pot,
we often don't want the players to make mistakes by calling... we
want them to fold correctly. The pots where we want them to call
incorrectly tend to be quite small, smaller than in most of the games
I play in, I believe. So given medium to big pots, do we want a
loose crazy image, where everybody calls when we bet or raise? I
think not. We want our bets and raises to command respect, to instigate
fear, to cause good laydowns (and some bad ones too!)

A while ago, one player criticized my tight-aggressive play, pointing
out that he gets out of my way when I bet or raise, doesn't give me any
action. In multiway pots, that's usually good! (And heads up, I
routinely check-raise-bluffed him out on the flop, but that's another
issue.)

Now much of the time, we'll be the one drawing. In these cases,
we do want everybody and their mothers to call, to support our
draw. So, is a tight-aggressive image bad then? I don't think so.
It may not be the best image in those situations, but it doesn't
hurt us a significant amount. There is a big difference between a
tight-aggressive player betting/raising versus lamely calling along.
Some players are astute enough to be concerned if a good player so
much as calls, but for the most part a tight-aggressive player
calling along is no reason for the other players to fold.

I think I'll go practice fixing a permanent scowl on my face...

-Abdul


Erik Reuter

Apr 7, 1997

> Andy
> shows that when we have a best but vulnerable hand in a multiway pot,
> we often don't want the players to make mistakes by calling...

I think this is too strong a statement. Andy's example showed that in
certain situations (NOT necessarily "often") we want a certain hand to
fold correctly.

It remains to be shown how rare or frequent such situations are, and how
much difference they make, EV wise. Arguments as to how frequently such
situations occur in actual play would be interesting to see.

--
Erik Reuter, e-re...@uiuc.edu

Jazbo Burns

Apr 7, 1997

An outstanding post, Andy. It's articles like these that make r.g.p. one
of the best newsgroups on the net.

In the original post, Andy Morton <and...@ix.netcom.com>
demonstrated that David Sklansky's claim that his Fundamental Theorem
of Poker applies to multiway action is false. The particular example
he used is from hold'em where there has been a bet and a call and the
third player is faced with a decision on whether to call or not.
Assuming you are the bettor and have the best chance to win the pot
(you have the most outs), Sklansky says you want the last player to
call whenever the pot odds from his point of view are insufficient (FToP).
Andy showed that there is a region where it is incorrect for the third
player to call, yet it costs you money if they do. I will take the
liberty to recast things from Andy's original to simplify the
notation.

There are three players A, B and C, having a, b and c outs on the
river, respectively. A has bet one unit, B has called and C must
decide whether or not to call with p units (including the bets of A
and B) in the pot. Pot odds dictate that C should call if and only if

p > p_c = (a+b)/c.

Player A would prefer that C fold whenever p is large enough so the
A's fraction of the pot with one more bet on it (we assume no bets
after the flop -- perhaps A is all-in) is worth more than his fraction
when C calls. Algebraically this reduces to

(a+c)p > a(p+1), or

p > p_a = a/c.

Note that unless Player B or C has no outs (b=0 or c=0), p_a < p_c, so
there is *always* a region where Player C should fold from his point
of view but Player A would prefer that C not call (and, symmetrically,
the same holds for Player B!). Also note that this result holds for
more than three players (just consider the players that have called as
a collective named B and the players to call as a collective named C).

I propose we name this result appropriately:

Morton's Theorem: Ignoring future betting rounds, there is always a pot
size such that the next player should fold to the bet according to
the Fundamental Theorem of Poker, yet the other players do not
want him to call (unless he has no outs).


When another betting round must be considered, things become more
complicated. I doubt we can come up with anything as crisp as
Morton's Theorem, but this is an excellent topic for discussion.
Thanks for starting it, Andy!

--jazbo
--
- - - - -
Video poker strategy cards for sale:
http://www.monmouth.com/~jburns/vidpoker.html

Erik Reuter

Apr 7, 1997

In article <JBURNS.97...@wildcat.monmouth.com>, jbu...@monmouth.com
(Jazbo Burns) wrote:

> There are three players A, B and C, having a, b and c outs on the
> river, respectively. A has bet one unit, B has called and C must
> decide whether or not to call with p units (including the bets of A
> and B) in the pot.
>
> p > p_c = (a+b)/c.
>
> (a+c)p > a(p+1), or
> p > p_a = a/c.
>
> Note that unless Player B or C has no outs (b=0 or c=0), p_a < p_c, so
> there is *always* a region where Player C should fold from his point
> of view but Player A would prefer that C not call (and, symmetrically,
> the same holds for Player B!).

It's not symmetrical since A gets all of C's outs if C folds, and B gets
none of C's outs. B wants C to call, no matter what the pot size, since
b(p+1) > bp.

> Morton's Theorem: Ignoring future betting rounds, there is always a pot
> size such that the next player should fold to the bet according to
> the Fundamental Theorem of Poker, yet the other players do not
> want him to call (unless he has no outs).

This must be restated in light of the above. (SOME of the other players,
namely B, DO want him to call.) Also, it isn't true heads up, since then
b = 0 and p_c = p_a, but as stated above this isn't crystal clear.

"If the best hand bets and is called by a drawing hand, there is a range
of pot sizes such that the next player's maximal EV play is to fold, yet
the best hand has a higher EV when this player folds."

I'm more interested in applications of the theorem to poker strategy. It
would seem one application would be that certain situations may arise
where slowplaying is not correct based on the theorem, but slowplaying
would have been correct if the theorem were not considered.

--
Erik Reuter, e-re...@uiuc.edu

Lee Jones

Apr 8, 1997

In article <33442B...@ix.netcom.com>,
Andy Morton <and...@ix.netcom.com> wrote:

>Well, some more thinking, rereading some old r.g.p. posts (thank you,
>dejanews), a long discussion with Abdul Jalib, and a little algebra have
>changed my mind: I think Caro's advice is dead wrong (at least in many
>situations) and I think I can convince you of this, if you'll follow me for
>a bit longer.

Jesus, Mary, and Joseph. This may be the most important post to r.g.p. in
the last couple of years. Andy, I haven't read your entire post carefully
a second time, but I think Jazbo's right that you may have earned a theorem
with your name on it.

I'm going to forward the post to Roy and discuss it with him, but there
are many people out there (Pudaite comes to mind) who are far more capable than
I at analyzing your thoughts.

Thank you - this is *very* important.

Regards, Lee
--
Lee Jones | "Let the music move you. Get your boots on.
le...@sgi.com | Do the gum boots dance. It's township jive time."
415-933-3356 | -Ladysmith Black Mombaza

Jazbo Burns

Apr 8, 1997

Thanks e-re...@uiuc.edu (Erik Reuter) for pointing out the error in
my post -- I plead sleep deprivation after an all-nighter in AC :-).
I did realize the asymmetry of the situation when I awoke this
morning, but Erik had already caught and corrected my error before I
could --- that's what makes this group so great. [I really did try to
post this at 8AM, but my ISP was constipated or something & wouldn't
take the article.]

It's obvious that if one player makes a mistake according to FToP,
then *some* player must benefit (since the game is zero sum). What
Andy Morton pointed out was that it's possible for a player to
actually *lose* expectation when another player makes an FToP mistake,
which contradicts FToP (which claims you gain whenever some player
makes a play they should not make if they had full knowledge of the
situation). So FToP does not hold, in general, for multiway pots.

To recap: Player A bets, B calls and C is deciding whether to call or
fold with p units in the pot (including A and B's bets). In the
card(s) to come, A has a outs, B has b, and C has c. In the original
situation given by Andy, A had a made hand, so if C folds A gets the
additional benefit of C's outs, and B gets none. That's why B prefers
that C call even when the FToP says he shouldn't. In fact, B actually
prefers that C always call, since none of C's outs impact B.

Erik rewrote the theorem with this in mind in terms of the "Best Hand".
That captures the idea that the outs of the folding hand go to the
best hand, but that's not true in general. For example, Player A
has two small pair, Player B has a flush draw, and Player C has top pair.
Now if any of C's outs complete B's Flush (but not A's Full House),
the Theorem could apply to either A or B (it can only make a difference

Morton's Theorem [Precise version]: Suppose Player A has bet one unit
and has a>0 outs to win the pot, Player B has called with b>0
outs, and Player C is to act with c>0 outs. Then, ignoring future
action, either there is a pot size p such that exactly one of A or
B prefers that C fold even though C does not have pot odds for the
call (violating FToP), or C's outs are divided among A and B
exactly according to their respective chances of winning the pot.

Morton's Theorem [Informal version]: In almost any multiway situation,
if the final player to call (C in the notation above) has any outs,
then there is some player who prefers that C fold even though C does
not have pot odds for the call.

The special condition (weasel words) in the precise version of the
theorem comes about only in special cases. Suppose with 42 cards to
come a=24, b=12 and c=6. If c'=4 of c's outs go to A and c"=2 go to
B, then a/c' = (a+b)/c and b/c" = (a+b)/c, so there is no pot size
where the FToP fails. However, I haven't (with a short search) been
able to come up with a situation like this.
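
(The arithmetic of that special case, as a tiny Python check; the numbers
and the c' = 4, c'' = 2 split are the ones given above.)

    # The special case above: C's outs split between A and B in exact
    # proportion to their winning chances, so neither A nor B gains a
    # paradoxical region.  Numbers are the ones from the example.
    a, b, c = 24, 12, 6            # 42 cards to come
    c_to_a, c_to_b = 4, 2          # the split c' = 4, c'' = 2

    p_call = (a + b) / c           # C's pot-odds threshold: 6.0
    p_a = a / c_to_a               # A prefers C out above this: 6.0
    p_b = b / c_to_b               # B prefers C out above this: 6.0
    print(p_call, p_a, p_b)        # all three coincide, so no FToP gap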

I've thought a little about how to extend this result to include
implied odds, but I think I'll wait until I've fully caught up on
sleep before posting about it :-).

Andy Morton

Apr 10, 1997

"Morton's Theorem," heh, heh. Kewl. I've got to forward this thread to
my mother.... Thanks, Jazbo.

Seriously, i have a couple comments.

First, I also think this _type_ of post is the most fun on r.g.p., and
would love to see more. It was sort of frustrating that this post
languished for a week before it got its first response while 50 people
weighed in to straighten out the guy who wanted five pros at his table.
Not only that, but after 50 or so responses, it turns out he had a point
after all. Anyway, I for one would be more of an active participant and
less of a lurker on r.g.p. if the ratio of light to heat were a little
higher.

Second, I totally agree with Erik's point that while the idea i
presented is mathematically correct, the important question is how (or
even whether) it leads to changes in correct strategy. As I said in my
post, I think these situations (where you'd prefer some of your
opponents to fold, even if it's correct for them to do so) come up all
the time in a typical game.

I'm not sure how best to demonstrate that, however. Would it be at all
convincing if i just came up with some more examples of other situations
and repeated the calculation? I'm not sure, because each time someone
could say, "yea, but what if instead of having 5 outs, your opponent is
drawing dead? then you'd want him in, not out." Maybe there would be
some way to run a few hundred hands of Holdem Master or something
similar with a calling station in the game. Then simply ask, for each
call he made, a) did he have correct odds to chase, and b) did his call
cost the leader any expectation? Would that be sufficient to convince
people these situations are common? Is there a better way?

Then, if these situations do come up, are they really all that serious?
If i remember my example correctly, player B's fishy call on the turn
can cost you up to more than 5% of your EV when you bet. That seems
pretty serious to me. Try telling a blackjack player he's losing up to
5% on some of his bets and i bet you get his attention real quick.

Furthermore, even if we convince ourselves these situations come up
often enough and are potentially costly, then how do we exploit our
understanding? Are strategic changes (eg, play more drawing hands) more
useful than tactical ones (eg, checkraise the turn in a particular
situation)? I never would have thought of the implications of this
stuff on table image, but abdul's post makes at least some sense to me
and indicates that there may be all sorts of ways to exploit the things
we've been talking about here.

Erik Reuter

Apr 10, 1997

In article <334C9F...@ix.netcom.com>, Andy Morton
<and...@ix.netcom.com> wrote:

> First, I also think this _type_ of post is the most fun on r.g.p., and
> would love to see more. It was sort of frustrating that this post
> languished for a week before it got its first response while 50 people
> weighed in to straighten out the guy who wanted five pros at his table.

I think that often the best posts in r.g.p. are like the tip of an
iceberg: all the thinking and calculation that is behind such a post is
not readily apparent. The best posts have more background work supporting
them than the general run of the mill posts which generate lots of
replies. This leads to a constant quality-response figure for each thread:
post quality x number of responses = constant (holds for quality > 0). So
if a post gets few responses then it is of high quality. Either that, or
the post was worthless :-)

> As I said in my
> post, I think these situations (where you'd prefer some of your
> opponents to fold, even if it's correct for them to do so) come up all
> the time in a typical game.

I'm not so sure. I believe you will find that in most realistic cases such
situations occur only occasionally, and for a fairly small range of pot
sizes, and make a small fraction of a bet difference. When all this is
taken into account, the overall effect on strategy should be small.

As Jazbo demonstrated, if the outs of the folder get distributed to the
remaining players proportionally to their chances of winning, this
situation does not come up. Morton's example (29/9/4) gave an uneven
distribution of the outs (33/9/0). In the situations where the outs get
distributed more evenly, the range of pot sizes where this phenomenon
occurs will be smaller. Note that if we give just 1 of C's 4 outs to B
(32/10/0) then A no longer benefits by C folding correctly. I suspect that
in many realistic situations where X benefits from Y folding correctly, X
will be trailing another hand. If so, there aren't many situations where X
can raise to fold out C without losing more money to the leading hand. And
to have an image where a mere call strikes fear into the hearts of your
opponents is probably not good for maximizing your EV in other situations.
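
(A rough sketch of that point in Python, using the 29/9/4 counts over 42
unseen cards mentioned above and sweeping how many of C's 4 outs revert to
the leader when C folds. Whether they really all revert to one player
depends on the actual cards, so treat this as illustration only.)

    # Sweep of how the folder's 4 outs might split between the leader and
    # the flush draw, using the 29/9/4 counts over 42 unseen cards above.
    a, b, c = 29, 9, 4
    p_call = (a + b) / c                  # C has pot odds to call above 9.5
    for c_to_leader in range(1, c + 1):
        p_lead = a / c_to_leader          # leader prefers C fold above this
        gap = p_lead < p_call             # is there a "correct fold" gap?
        print(c_to_leader, round(p_lead, 2), gap)
    # Only when all 4 outs revert to the leader (7.25 < 9.5) does he gain
    # from a correct fold; give even one out to the flush draw (9.67 > 9.5)
    # and the gap disappears, as noted above.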

Most importantly, we have to consider how this theory could be applied in
practice. I can think of two ways: raising more often with relatively
weaker hands, or developing an image that encourages opponents to fold
more frequently when you bet or call. However, THE ISSUE IS NOT A
COMPARISON BETWEEN 0 AND WHATEVER CAN BE GAINED FROM THIS STRATEGY DUE TO
THIS PHENOMENON ALONE! It is likely that if you take either of these two
strategies, you will lose more due to other effects than you could ever
gain by influencing your opponents to make correct folds against you in
situations where those correct folds benefit you. In other words, the
change in your EV due to these strategies may include a positive term from
the issue under discussion, but a much larger negative term due to loss of
income from loose (incorrect) calls.

Admittedly, I have no data or calculations to back up my qualitative
assertions, and I don't plan to work on it. Of course, if you disagree, I
would be interested in seeing your arguments (or better yet, numbers).

> I'm not sure how best to demonstrate that, however. Would it be at all
> convincing if i just came up with some more examples of other situations
> and repeated the calculation?

Not convincing, but it may help in developing your intuition for the
situation and suggesting how to further analyze the issue.

I suggest trying examples spanning the entire range of distribution of C's
outs: you already have one extreme (all outs going to A). The other
extreme (all outs going to B) is somewhat interesting. And everything in
between. (I am always assuming that A is the player with the most outs).

> Maybe there would be
> some way to run a few hundred hands of Holdem Master or something
> similar with a calling station in the game. Then simply ask, for each
> call he made, a) did he have correct odds to chase, and b) did his call
> cost the leader any expectation? Would that be sufficient to convince
> people these situations are common? Is there a better way?

That would be an interesting demonstration. If you do it, make sure to
record enough data to figure out the EV of all players in the pot, and the
delta(EV) under various circumstances such as: player x folds, player y
raises.

It may be difficult to keep track of gains from this issue and properly
offset losses from other effects caused by implementing the strategy
necessary to obtain said gains. I can't think of an elegant simulation to
do it. Maybe someone else can.

> Then, if these situations do come up, are they really all that serious?
> If i remember my example correctly, player B's fishy call on the turn
> can cost you up to more than 5% of your EV when you bet. That seems
> pretty serious to me. Try telling a blackjack player he's losing up to
> 5% on some of his bets and i bet you get his attention real quick.

Bad comparison. It is more meaningful to compare to, say, the change in EV
when a calling station stops paying you off when you have the nuts because
you projected an image to get this player to fold to you.

> Furthermore, even if we convince ourselves these situations come up
> often enough and are potentially costly, then how do we exploit our
> understanding? Are strategic changes (eg, play more drawing hands) more
> useful than tactical ones (eg, checkraise the turn in a particular
> situation)? I never would have thought of the implications of this
> stuff on table image, but abdul's post makes at least some sense to me
> and indicates that there may be all sorts of ways to exploit the things
> we've been talking about here.

Once again, the problem is in finding a strategy that reaps the gains from
this issue without suffering greater losses from other issues. I think you
will have difficulty clearly identifying strategies that do this.

--
Erik Reuter, e-re...@uiuc.edu

Stephen H. Landrum

Apr 10, 1997

Andy Morton wrote:
> First, I also think this _type_ of post is the most fun on r.g.p., and
> would love to see more. It was sort of frustrating that this post
> languished for a week before it got its first response while 50 people
> weighed in to straighten out the guy who wanted five pros at his table.

The answer to this one is simple. Posts that require thought take
time to respond to, and obviously ones that don't can be responded to
immediately. I couldn't keep up with RGP if there were 20 seriously
thought-provoking posts a day.

> Not only that, but after 50 or so responses, it turns out he had a point
> after all.

It sure wasn't contained in his original post.

> Anyway, I for one would be more of an active participant and
> less of a lurker on r.g.p. if the ratio of light to heat were a little
> higher.

Flames are unfortunately part of the nature of delayed electronic
communication.

> Second, I totally agree with Erik's point that while the idea i
> presented is mathematically correct, the important question is how (or
> even whether) it leads to changes in correct strategy. As I said in my
> post, I think these situations (where you'd prefer some of your
> opponents to fold, even if it's correct for them to do so) come up all
> the time in a typical game.

Some of this is implicitly incorporated into the ideas of seat
selection, hand selection, selective aggression, and pot size
manipulation. It's also been discussed indirectly in the past as the
concept of fish "schooling" for protection. Each of their individual
calls is poor, but when they all call, they help each other more than
they help the bettor.

> I'm not sure how best to demonstrate that, however. Would it be at all
> convincing if i just came up with some more examples of other situations
> and repeated the calculation?

I don't think convincing is as important as learning. It may be that
the situation occurs less frequently than you feel it does, or it may
occur in almost every hand that's played with 4 or more players.

Better than examples would be some research.

> I'm not sure, because each time someone
> could say, "yea, but what if instead of having 5 outs, your opponent is
> drawing dead? then you'd want him in, not out."

That's the problem with examples, there are counterexamples. Without
some demonstration of how often the examples apply, they are not good
evidence for the assertions that these situations are frequent or rare.

> Maybe there would be
> some way to run a few hundred hands of Holdem Master or something
> similar with a calling station in the game. Then simply ask, for each
> call he made, a) did he have correct odds to chase, and b) did his call
> cost the leader any expectation? Would that be sufficient to convince
> people these situations are common?

Again, the goal should not be convincing, but determining.

> Is there a better way?

Can't think of one at the moment.

> Then, if these situations do come up, are they really all that serious?

That's certainly a question to be addressed.

> If i remember my example correctly, player B's fishy call on the turn
> can cost you up to more than 5% of your EV when you bet. That seems
> pretty serious to me.

Perhaps. It should be looked at with respect to the whole. B's fishy
calls are contributing hugely to your EV at other times, so it's
important to see where it fits into the big picture. Is your bet -EV
because B calls, or is your bet +EV but B's call just reduces the value
of it?

> Try telling a blackjack player he's losing up to
> 5% on some of his bets and i bet you get his attention real quick.

Losing up to 5% relative to what? Many times the BJ player is making
a -EV play, because it is better than all the other choices which are
also -EV. Splitting 8's against a dealer T is a -EV play, but it's
better than standing or drawing; it's only +EV when compared to the
alternatives.

If he can make a different choice that improves his EV, then he'll be
interested in what you are saying.

> Furthermore, even if we convince ourselves these situations come up
> often enough and are potentially costly, then how do we exploit our
> understanding?

Change "convince ourselves" to "learn" and I agree with you.

> Are strategic changes (eg, play more drawing hands) more
> useful than tactical ones (eg, checkraise the turn in a particular
> situation)? I never would have thought of the implications of this
> stuff on table image, but abdul's post makes at least some sense to me
> and indicates that there may be all sorts of ways to exploit the things
> we've been talking about here.

--
"Stephen H. Landrum" <slan...@pacbell.net>

Robert Copps

Apr 10, 1997

In article <334C9F...@ix.netcom.com>, and...@ix.netcom.com (Andy
Morton) writes:
>
>
> First, I also think this _type_ of post is the most fun on r.g.p., and
> would love to see more. It was sort of frustrating that this post
> languished for a week before it got its first response while 50 people
> weighed in to straighten out the guy who wanted five pros at his table.
>


Last year Mason Malmuth started a thread, also called "Going too far". It
touched on this subject. In it and elsewhere I pointed out what I called a
"limitation" of the FTOP. No one has ever commented on those points.
Perhaps if I had expressed it as a theorem... :-). Actually, in his
original discussion of the theorem, Sklansky points out that there are
situations where the FTOP does not apply. The reason I did not comment on
your first post is that I thought you were just re-stating what S had.

>
> Not only that, but after 50 or so responses, it turns out he had a point
> after all. Anyway, I for one would be more of an active participant and
> less of a lurker on r.g.p. if the ratio of light to heat were a little
> higher.


Just as we all play for different reasons, and play in different ways, we
are interested in different parts of the game (e.g., if more people talked
about gambling hormones, I would find something else to think about :-).
That's what I like about RGP. I find that the more advanced mathematical
concerns are pretty low EV for me, but I get a huge charge out of learning
about how other players evaluate play.

>
>
> Second, I totally agree with Erik's point that while the idea i
> presented is mathematically correct, the important question is how (or
> even whether) it leads to changes in correct strategy. As I said in my
> post, I think these situations (where you'd prefer some of your
> opponents to fold, even if it's correct for them to do so) come up all
> the time in a typical game.
> [...]


If I understand your main point I find myself in these situations about
once every 15 hands: perhaps once every three or four hours.

I think that if you include more in the pot than the equity represented by
the chips, if you include other factors such as the ability to outplay
opponents, potential bluffs and semi-bluffs, a lot of players have an
intuitive grasp of how to play in these situations (though they would never
recognize their algebraic representation).

These are the players we regard as "tough" as opposed to "tight". They know
intuitively when to raise with 2nd best, how to max the dead money in the
pot to justify potential moves on later rounds, and so on. They play like
they are using rapiers, building huge pots and occasionally winning them
with apparent miracles. You almost never can put them on a hand.

In short, given a pot of contestable size, if you believe you are second
best with outs and that third best has a draw to a bigger hand than yours
with some of your outs, you raise, because you want him to fold. If you
believe that the third best's draw is not in conflict with your own, you
want him to call and minimize your own investment. If there are sufficient
callers, especially if they are in conflict (all with flush draws, perhaps),
you want them all to put in more money. You can even cap it -- even though
the leading player will gain more $EV than you, your action is still
profitable.

A game with more than two players who have a grasp of these points is a
thrilling experience!


Andy Morton

Apr 10, 1997

Robert Copps wrote:
> Last year Mason Malmuth started a thread, also called "Going too far". It
> touched on this subject. In it and elsewhere I pointed out what I called a
> "limitation" of the FTOP. No one has ever commented on those points.
> Perhaps if I had expressed it as a theorem... :-). Actually, in his
> original discussion of the theorem, Sklansky points out that there are
> situations where the FTOP does not apply. The reason I did not comment on
> your first post is that I thought you were just re-stating what S had.
>

Well, i discussed mason's thread in my post, and i deliberately named my post after
his to draw attention to the parallel. I also very clearly stated that i had never
seen these things discussed correctly in any book or article, but that i had seen
them alluded to in posts on r.g.p. For the record, I was thinking of a post by Tom
Weideman a couple years ago concerning how many opponents you'd like to have when
you hold AA, and even more to the point, a reply by Mike Maurer to another post
discussing an old conundrum from lowball about how to play a pat 95 against 3
opponents who each draw one. I suspect the math behind this lowball discussion is
equivalent to the math in this current thread. But I certainly haven't read
everything on r.g.p. and i know several threads have skated around this topic, so I
certainly didn't mean to claim that this idea started with me.

As for Sklansky and the FToP, no, i was not simply re-stating what he's written
before. And as i described in my original post, it seems very likely that the FToP
does not apply to most multiway situations. Imho, the FToP has created much more
confusion on this group, where people are trying to think carefully about poker,
than it has dispelled. I'm willing to swallow whole the statement made by Erik the
other day for discussion, that in every situation there is one set of decisions
that maximize your EV (given a knowledge of everyone's holdings), and that you
should make your decisions accordingly. "Maximize your EV" -- that's _the_
fundamental theorem of poker. (hmmm, assuming the stakes aren't too high for your
bankroll.)

btw, a good example of the confusion caused by trying to use the FToP is this
situation where you flop a good hand against a lot of players, and you know that
your bet will get so many calls that each call will be correct, both here and then
again on the turn. So people sometimes suggest you should check the flop so that
calling your bet on the turn will be a mistake for the chasers. I'm not sure what
the correct play would be in those circumstances, but it seems that the way to
determine it isn't to try to manipulate the pot size so that your opponents are
making mistakes. The way to figure out your best strategy is to play the way that
will maximize your own EV, regardless of whether your opponents are making mistakes
or not. (Don't misunderstand me: manipulating pot size is still an important
weapon, it's just that you're not doing it to force mistakes, you're doing it to
maximize your own EV.)

As a sort of caveat, i'll mention that mason started another thread last year,
"Giving Up EV," which i never understood. If what he had in mind was really that
playing weakly on the flop and strongly on the turn had a greater net EV than
trying on every street to maximize your EV at that instant, then i guess his ideas
still are consistent with the belief that you should play to maximize your own EV,
not simply to maximize the mistakes made by others.

At any rate, I'd like to think that the one idea you can take to the bank from this
thread is that maximizing the mistakes made by others is not the same as maximizing
your own profit in a particular situation.

As for Stephen's comment about how to learn to exploit these ideas:
> ... the goal should not be convincing, but determining...

I agree completely. If i sounded like i was more interested in convincing than
learning what's what, i misspoke.

Regards,
Andy

William Chen

Apr 11, 1997

Yes, these situations do come up where one player makes a call that has
negative expectation for him and you and greatly adds to someone's positive
expectation. I have done extensive work on game-theoretic multi-way pots
but I'm not willing to share those calculations yet (since I don't know
whether they are right and don't know how valuable the info is...).

Let me give a simple lowball example. Say you have a straight 98 and
raise-all-in a guy who has a [one card] wheel draw (you both are all in).
You have a slight edge and expect to make money.
But now suppose someone in the field decides to call with a [one card]
96-joker draw -- you are now decidedly a dog and so is the person who
called, but now the wheel draw is a big money favorite. In both cases the
wheel draw only has to catch a 9 or better to win, but wins twice the money
in the second case. If the two players against you are partners, calling
is the correct play.

Ok, a hold'em example--say you have T9s, your opponent has AQ and the board
shows QsJh4s3d. Suppose the pot is 1.5P, and your opponent bets P and you
correctly call (say it's an all-in situation again). Another opponent
with the A-high flush draw now makes an incorrect (but seemingly reasonable)
call--now your only outs are the straight outs, and the caller has just
given his expectation and yours to the AQ.

The point is that in all games there exist such players who you know will
make these incorrect calls--not because they cheat, but because they are
bad players. You have to take this into account, and it will turn some marginal
calls into real winners (or losers).

Remember this--marginal callers increase the value of big draws and strong
hands while they decrease the value of non-nut draws and marginally leading
hands. Heads up, I will confidently play 2nd pair with a non-nut flush draw,
since I have a reasonable # of outs against any hand. In a multi-way pot with
lots of heat i may have to drop.

This concept may seem backwards to most people but let me explain.
Say i have T9s again and the flop comes AsTh2s. I have a marginal hand with a
marginal draw, but against either top pair (or even top set) I have outs.
Against a bigger draw, I currently have the lead. Say I'm jammed in
with the same hand and flop multi-way--I may be drawing almost dead vs the nut
draw and AT.

Bill


nwe...@gmail.com

Dec 27, 2012
All this theory sounds good on paper, but since players are human and most
do not follow logic, you end up losing to those you run across.

patmp...@gmail.com

Dec 28, 2012
On Thursday, April 3, 1997 4:00:00 PM UTC+8, Andy Morton wrote:
> Implicit Collusion and Going too Far
>

Thanks. I always wondered about this.

Mossingen

Dec 28, 2012
<patmp...@gmail.com> wrote in message
news:df510c6e-49b3-4ef4...@googlegroups.com...
> On Thursday, April 3, 1997 4:00:00 PM UTC+8, Andy Morton wrote:
>> Implicit Collusion and Going too Far
>>
>
> Thanks. I always wondered about this.



The thread where the likes of Caro and Sklansky discuss Morton's post is
among the best poker content RGP has ever produced.

