
Game theory and John Nash


Jaque Mate

Jul 10, 2002, 12:35:54 AM
J Nash from the story in " A beautiful mind"

1) Can someone elaborate in what way Nash equilibrium applies to Poker (for
non mathematicians)?

2)What was his work about "A Simple Three-Person Poker Game"

3)Was game theory as applied to poker unknown before his work?

thanks

Jaque

"The concept of a Nash equilibrium n-tuple is perhaps the most important
idea in noncooperative game theory. ... Whether we are analysing candidates'
election strategies, the causes of war, agenda manipulation in legislatures,
or the actions of interest groups, predictions about events reduce to a
search for and description of equilibria. Put simply, equilibrium strategies
are the things that we predict about people."

Jeff Yoak

Jul 10, 2002, 4:22:18 PM
On Tue, 09 Jul 2002 21:35:54 -0700, Jaque Mate wrote:

> J Nash from the story in " A beautiful mind"

I'll avoid going on a long rant about this, but I want to say briefly that
all the math and everything about Nash's notions in that movie was
completely wrong. Not only is that scene with the girls *not* a Nash
equilibrium; there are Nash equilibriums in that "problem," but they are
different from what he discovers.

That said, I loved the movie. :-) When movies are even peripherally
about math or computers, they tend to be a lot of fun for me, but as a
defense mechanism I've come to expect they are going to get everything
wrong. Once I got to that conclusion, I found that I could enjoy those
movies.

> 1) Can someone elaborate in what way Nash equilibrium applies to Poker
> (for non mathematicians)?

I'm really interested too. A while back I thought about trying to do some
research in this area, but decided that the problem was too wickedly
complex and there is too much to be gained in statistics and other
elementary things to bother with it. If someone else has, I'd be very
interested in hearing about it too.

For instance, consider the incredibly simplified no-limit "ouvert" aces
problem posted on here last month. All the analysis that goes into it is
statistical in nature and once you have those answers, the result can be
framed in a game theoretical fashion, but the actual game theory is really
trivial.

But... for what I can say off the cuff, my suspicion is that modelling
poker situations with game theory will lead you to looking primarily at
dominant strategy equilibriums. Some of them will also be Nash
equilibriums, but not terribly interesting ones.

When I started playing poker, the very first thing that impressed me
intellectually was that from a game theoretical standpoint it has odd
qualities. Partial information games where it could be right for one
person to bet and another to call, even if they had perfect information,
seemed counter-intuitive. Long before I read about pot and implied odds,
I started thinking along these lines. I came initially to the conclusion
that the driving force of the action in poker is the blinds and antes. It
surprised me that pots could be as large as they are with respect to the
antes and blinds if everyone was acting rationally, even with imperfect
information. Actually, this still sorta surprises me. :-) I think there
is a lot of "greater fool" thinking going on in poker. I wonder if you
spread a low-limit hold'em game in southern california with no ante, blind
or bring-in, how much action it would get. I'd bet that VERY few people
would follow the dominant strategy of only entering a pot with the mortal
nuts.

> 2)What was his work about "A Simple Three-Person Poker Game"

Was this mentioned in the movie? I would have thought that it would have
popped out at me if so. I'm hella-busy with two projects right now, but
I'll dig into this at some point.

> 3)Was game theory as applied to poker unknown before his work?

Almost certainly not significantly. Sklansky would be writing about it if
so. ;-) Well... I guess he does some in the Theory book. There is no
doubt a body of work about it, but it will either turn out to be not
significantly game theory or not significantly poker. You see a lot of,
"Each player gets one 'card' of value zero or one. With simplified
betting and calling rules, what should each do?" kind of problems. These
are INCREDIBLY difficult to answer and they aren't significantly about
poker, as they are so simplified as to provide little insight. You might also
get some broad analysis of things like cooperative strategies, but these
will be very high-level and really amount to applications of some very
broad principles of game theory garnered from simpler problems. These are
probably of more value as they might provide some actual guidance, but
will lack the thoroughness that at least I would love to find.


Cheers,
Jeff

LouKrieger

Jul 10, 2002, 7:29:15 PM
>> Subject: Re: Game theory and John Nash
From: Jeff Yoak je...@yoak.com

I wonder if you spread a low-limit hold'em game in southern california with no
ante, blind or bring-in, how much action it would get. I'd bet that VERY few
people would follow the dominant strategy of only entering a pot with the
mortal nuts. <<


Of course you're correct. The urge to gamble is much more compelling for most
poker players than the exercise of discipline required to play poker optimally.


Keep flopping aces,

Lou Krieger

Randy Hudson

Jul 10, 2002, 9:29:57 PM
In article <pan.2002.07.10.13....@yoak.com>,
Jeff Yoak <je...@yoak.com> wrote:

> On Tue, 09 Jul 2002 21:35:54 -0700, Jaque Mate wrote:
>
>> J Nash from the story in " A beautiful mind"
>
> I'll avoid going on a long rant about this, but I want to say briefly that
> all the math and everything about Nash's notions in that movie was
> completely wrong.

I suspect JM was referring to the book, not the movie. As you say, the
movie left a lot out, and what they didn't leave out, they got wrong.

> I wonder if you spread a low-limit hold'em game in southern california
> with no ante, blind or bring-in, how much action it would get.

Hasn't that experiment been tried? Isn't that the "rocks-and-beer" game?
And yes, most players do get involved with suboptimal hands.

>> 3)Was game theory as applied to poker unknown before his work?

von Neumann used simplified poker examples in his discussions of two-player
zero-sum games. He also was reportedly a poor poker player.

--
Randy Hudson <i...@panix.com>

ben morris

Jul 10, 2002, 10:41:32 PM
> is a lot of "greater fool" thinking going on in poker. I wonder if you
> spread a low-limit hold'em game in southern california with no ante, blind
> or bring-in, how much action it would get. I'd bet that VERY few people
> would follow the dominant strategy of only entering a pot with the mortal
> nuts.

After my little excursion into game theory, I looked at some simple problems
like this one, but found them to be more complicated than I thought.

In this case, for example, while it may be a good strategy in certain
conditions, it is definitely not the dominant strategy, and certainly not
necessarily the most profitable.

In Hold'em, for example, I've heard the argument made that if there were no
blind or ante, you should fold every hand until you have aces. There are a
lot of problems with this: First, if you're in an action game, where people
haven't adjusted their early strategy to the fact that there is no money in
the pot, you will be folding a lot of profitable hands. The quest to
"guarantee" a profit will cost you a lot of money.

Second, if you only play aces, you're giving too much information away.
Suddenly, when you're in the pot, implied odds mean I can play a range of
hands. Or let's put it this way... what if we played no limit hold'em with
Aces or better to open? Would that kill all action? Let's say you're in
late position and you open. I'm in early position with 77, I'll call.
(since I know your hand, I'd rather be in early position) On the flop, I'll
play a mixed strategy of bluffs and all-in with trips, etc. At this point,
you can no longer play the "only nuts" strategy, because of the money in the
pot, and it's profitable for me. Thus, opening with AA is a negative e.v.
play (either everyone folds, or someone calls with a positive expectation).

In fact, if everyone were playing rationally, the correct game strategy is
to fold every hand, but the dominant strategy would be to get up and leave
(because of time costs). But if there is any deviation from the perfectly
rational strategy among your opponents, you must play the counterstrategy.
The classic example, as I'm sure you know, is the "pick a number" game, in
which a room of people must choose a number between 1 and a hundred, and the
person who chooses the number closest to half the average wins a prize. The
common-knowledge-of-rationality equilibrium play is to choose 0, but in the
history of that game being played, 0 has almost never won. (In a group of
college kids, the winner is usually around 13).
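The iterated reasoning behind that game is easy to sketch. A minimal Python sketch (my own framing, not from the thread: I assume a naive starting guess of 50 and one halving per level of reasoning, with the 1-to-100 restriction mentioned above bottoming the chain out at 1):

```python
def level_k_guess(k, start=50.0, floor=1):
    """Guess of a level-k reasoner in the half-the-average game:
    halve the previous level's guess, never dropping below `floor`."""
    g = start
    for _ in range(k):
        g = max(g / 2.0, floor)
    return g

# A few levels of reasoning: 50, 25, 12.5, 6.25, ...
guesses = [level_k_guess(k) for k in range(8)]
# Note that level 2 already lands near 13, roughly where the thread
# says groups of college students actually end up; deep iteration
# grinds down to the equilibrium at the bottom of the range.
```

The gap between the level-2 answer and the full equilibrium is the whole point of ben's example: the equilibrium play only wins against opponents who reason all the way down.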

The interesting thing is, if everyone played poker completely rationally
(and there is, of course, a no-collusion equilibrium), the dominant strategy
would still be to get up and leave. What the blinds and antes do, however,
is stimulate action, which gives people more opportunities to make mistakes.
In a no-ante game, if it kept being spread, people would become more and
more conservative, and as they approached equilibrium play, the game would
get very boring. With antes, since the strategies are so complicated, there
is a lot of action, even in equilibrium.

just my thoughts,

ben


Rick McGrath

Jul 11, 2002, 12:08:55 AM

"Jaque Mate" <le_bl...@hotmail.com> wrote in message
news:uine1ko...@corp.supernews.com...

> J Nash from the story in " A beautiful mind"
>
> 1) Can someone elaborate in what way Nash equilibrium applies to Poker
(for
> non mathematicians)?

I can think of many things to blurt out here, but I'll try to keep it down.

The basis of a Nash Equilibrium is this: each person's actions are optimal
conditional on the expected actions of the other players and the information
available to that person. If an equilibrium is reached whereby each player
has no reason to change his or her strategy, we have found a Nash
Equilibrium. In poker the Nash Equilibrium will be a mixed strategy, meaning
that a player will not always do the same thing but has found an optimal
combination of plays. So...you can play a hand properly and lose some of the
time, but that doesn't mean you played it badly. This is the mathematical
root of many of the discussions on RPG. The focus is on playing each hand
the best way you can, knowing this will serve you well in the long run even
if the short run is full of bad beats.

Where using optimization to solve for the Nash Equilibrium has value is in
its ability to show what we should do if our opponent(s) play(s)
sub-optimally. We worked an example of this a couple months ago. If you
email me I might be able to find it in my sent mail. It is also useful to
help understand the concept of optimal bluffing. There is an optimal rate of
bluffing under each circumstance. If we bluff less, others will find it
optimal to fold more of our bets. If we bluff more they will call more. The
combination of our optimal bluffing rate and our opponents' optimal folding
rate, jointly determined, is a Nash Equilibrium. It can be shown quite
tediously that these conditions are mostly saddle points.
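The bluffing equilibrium Rick describes can be worked out exactly in the simplest textbook model (this is the standard one-bet, nuts-or-air model, not Nash's own game; the pot and bet sizes below are arbitrary choices of mine): pot of size P, a single fixed bet of size B, and a bettor holding either the nuts or nothing.

```python
def equilibrium(P, B):
    """Mixed-strategy equilibrium of the one-bet bluffing game.

    bluff_share: probability a bet is a bluff, chosen so the caller is
                 indifferent between calling and folding.
    call_freq:   caller's calling frequency, chosen so the bettor is
                 indifferent between bluffing and giving up with air.
    """
    bluff_share = B / (P + 2 * B)   # caller-indifference condition
    call_freq = P / (P + B)         # bettor-indifference condition
    return bluff_share, call_freq

P, B = 6.0, 3.0                     # a half-pot bet into a 6-unit pot
bluff_share, call_freq = equilibrium(P, B)

# Verify the indifferences that define the equilibrium:
ev_call = bluff_share * (P + B) - (1 - bluff_share) * B   # caller's EV of calling
ev_bluff = (1 - call_freq) * P - call_freq * B            # bettor's EV of bluffing
# Both are zero at equilibrium: neither side can gain by deviating,
# which is exactly the "no reason to change strategy" condition above.
```

This also shows why bluffing less makes folding more attractive for the opponent, and vice versa: moving either frequency off its equilibrium value breaks the other side's indifference.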

One last example: when members of RPG use simulations to determine how, or
how often, to play certain hands, they are usually solving for a Nash
Equilibrium.


> 2)What was his work about "A Simple Three-Person Poker Game"

I found a citation for the paper you asked about. I haven't seen the paper.

"A Simple Three-Person Poker Game", with L.S. Shapley, 1950,
Annals of Mathematical Statistics.

>
> 3)Was game theory as applied to poker unknown before his work?
>

I doubt it was done much or publicly. If it existed it was frowned upon and
buried by the academic community. The study of finance (now respectable) by
mathematicians and economists didn't start in earnest until well into the
20th century. Why? It was considered too much like studying gambling which
was considered dirty. No self respecting academic of the time would do such
a thing. FWIW, a French mathematician provided a firm mathematical
foundation for portfolio theory around 1900-1910 and was run out of the
profession for wallowing in academic filth. His career was ruined. His work
resurfaced decades later as an important contribution. So...as for
poker....not likely. This is discussed in the book "Capital Ideas" by Peter
Bernstein.

> thanks
>
> Jaque
>

Rick McGrath


Stig Eide

Jul 11, 2002, 2:43:59 AM
Jeff Yoak <je...@yoak.com> wrote (among other interesting things) in
message

> You see a lot of,
> "Each player gets one 'card' of value zero or one. With simplified
> betting and calling rules, what should each do?" kind of problems. These
> are INCREDIBLY difficult to answer and they aren't significantly about
> poker, as they are so simplified as to provide little insight.
> Cheers,
> Jeff

What is wrong with the "each player gets a card of value zero to one"
abstraction?
All poker hands can be represented as a number between zero and one,
namely their probability of winning. In my opinion, the ability to
convert your hand into its probability of winning is one of the
essential skills.
Peace, Stig.

Keith Ellul

Jul 11, 2002, 3:01:30 AM

Actually, hold'em is far more complicated than that.

There exist hands A, B, and C such that A is a heads-up showdown
favourite over B, B is a heads-up showdown favourite over C, and C is a
heads-up showdown favourite over A. There is one occasionally-repeated
(in this newsgroup) example of this phenomenon but I can't remember
what it is. So, in this case, how would you convert these hands (A, B,
and C) to numbers? Having one of them as the highest doesn't make much
sense, but neither does having all of them as equal.

Also, there are certain hands which do well heads-up but not well
multi-way. (eg, 55 is a showdown favourite over both AK and JT, but in
a 3-way contest with AK, JT, and 55, 55 will do worse than either of
them).

So, you get these "problems" even if you make the simplifying
assumption that all we care about is showdowns (which is not even close
to true). So, a starting 2-card hand can not be converted (in any
reasonably useful way) to a number between 0 and 1.

Keith


Alix Martin

Jul 11, 2002, 7:22:31 AM
The subtlety of Nash equilibriums comes into play when there are more
than two players. The Nash equilibrium must be robust not only to
individual strategy changes but also to coordinated strategy changes.

Now the bad news: the fundamental theorem of scamming

"Any Nash Equilibrium in a poker game involves exacly two coalitions."

This means that most ring games are not an equilibrium, as players
have an incentive to coordinate their strategies to gain extra ev.
I'll leave the proof to GCA. Coordinating strategies doesn't need to
involve signalling to gain ev.

An implication is that any big enough game is waiting to be scammed.
The only pure form of poker left is heads-up.

Alix Martin

Alix Martin

Jul 11, 2002, 7:26:17 AM
Btw, I haven't read Nash's article. If anyone has it, I'd be very interested.

Alix Martin

Paul L. Schwartz

Jul 11, 2002, 10:29:10 AM
"ben morris" <benjami...@yale.edu> wrote in message news:<agir9v$a5t$1...@news.ycc.yale.edu>...

>
>
>
> The classic example, as I'm sure you know, is the "pick a number" game, in
> which a room of people must choose a number between 1 and a hundred, and the
> person who chooses the number closest to half the average wins a prize. The
> common knowledge or rationality equilibrium play is to choose 0, but in the
> history of that game being played, 0 has almost never won. (In a group of
> college kids, the winner is usually around 13).
>
>

NIT ALERT:

Zero cannot be chosen because it is not a number between 1 and 100.

Werner Campen

Jul 11, 2002, 12:06:04 PM
In winning poker you are more playing the person than the cards. You are
looking for cards (and eventually a flop, say) that you think you can
play better than the opponent (with pot odds appropriately adjusted for
this advantage). A poker analysis that doesn't take sub-optimal play
into account may be interesting statistically, but has little to do with
the game theory aspects of poker.

And the maxim that you have to give action to get action is literally
correct. The blinds do provide a bit of action, but if all you could do
was win the blinds one tenth of the time, you wouldn't even bother
playing, because you have to post those blinds one tenth of the time as
well. When you fold every hand is when nobody is giving any action (and
of course you have to give some first to entice them) or when you are
evenly matched. One of the real questions is how much action to give. It
had better have some relevance to what other people are giving and the
quality of your hand. One strategy you see is the person who gives a lot
of action when they first sit down and then tightens up, trying to lure
the opponents into an uneven action exchange.

--
Werner Campen

Jerrod Ankenman

Jul 11, 2002, 1:44:54 PM
Jeff Yoak <je...@yoak.com> wrote in message
> On Tue, 09 Jul 2002 21:35:54 -0700, Jaque Mate wrote:

<liberal snippage>

> > 1) Can someone elaborate in what way Nash equilibrium applies to Poker
> > (for non mathematicians)?
>
> I'm really interested too. A while back I thought about trying to do some
> research in this area, but decided that the problem was too wickedly
> complex and there is too much to be gained in statistics and other
> elementary things to bother with it. If someone else has, I'd be very
> interested in hearing about it too.

I'm pretty sure that it's been established that Nash equilibria for 3+
player poker are all collusive. Heads-up poker can be analyzed in
terms of game theory, but it's mostly too complex to be solved at this
point. You can, however, solve smaller problems that give insight or
are useful in their own right. For example, two RGPers have solved
"jam or fold" NL holdem (producing unexploitable game-theory optimal
strategies based on stack size for jamming and calling before the
flop).

> For instance, consider the incredibly simplified no-limit "ouvert" aces
> problem posted on here last month. All the analysis that goes into it is
> statistical in nature and once you have those answers, the result can be
> framed in a game theoretical fashion, but the actual game theory is really
> trivial.
>
> But... for what I can say off the cuff, my suspicion is that modelling
> poker situations with game theory will lead you to looking primarily at
> dominant strategy equilibriums. Some of them will also be Nash
> equilibriums, but not terribly interesting ones.

Most of the game theory work that is worthwhile, in my opinion,
generally revolves around creating unexploitable strategies. The
better you are in relation to your opponents, the more you should try
to exploit their errors. The better they are in relation to you, the
more you should try to prevent them from exploiting your strategies.
Nash equilibria usually deal with actors who are rational and who make
informed choices. If you are playing poker against these people, you
should probably stop doing that and find other people to play against.
However, for example, in tournaments or in very large games, you may
not have a choice.

> When I started playing poker, the very first thing that impressed me
> intellectually was that from a game theoretical standpoint it has odd
> qualities. Partial information games where it could be right for one
> person to bet and another to call, even if they had perfect information,
> seemed counter-intuitive. Long before I read about pot and implied odds,
> I started thinking along these lines. I came initially to the conclusion
> that the driving force of the action in poker is the blinds and antes. It
> surprised me that pots could be as large as they are with respect to the
> antes and blinds if everyone was acting rationally, even with imperfect
> information. Actually, this still sorta surprises me. :-) I think there
> is a lot of "greater fool" thinking going on in poker. I wonder if you
> spread a low-limit hold'em game in southern california with no ante, blind
> or bring-in, how much action it would get. I'd bet that VERY few people
> would follow the dominant strategy of only entering a pot with the mortal
> nuts.

Sure, that's probably true, and part of what makes poker profitable.
But YOU know not to do this. (btw, to the other poster who was posting
about implied odds, I don't think you can make up the difference in
hand strengths in limit holdem, only when you can vary bet sizes by
larger amounts. In this case, you can't bet enough to make the aces
fold). Also, a little bit of game theory background provides a good
framework for identifying situations where there is a hidden
dominating strategy (ie, manipulating betting in tournaments so that
you can give people chances to fold incorrectly, etc.)

> > 3)Was game theory as applied to poker unknown before his work?
>
> Almost certainly not significantly. Sklansky would be writing about it if
> so. ;-) Well... I guess he does some in the Theory book. There is no
> doubt a body of work about it, but it will either turn out to be not
> significantly game theory or not significantly poker. You see a lot of,
> "Each player gets one 'card' of value zero or one. With simplified
> betting and calling rules, what should each do?" kind of problems. These
> are INCREDIBLY difficult to answer and they aren't significantly about
> poker, as they are so simplified as to provide little insight. You might also
> get some broad analysis of things like cooperative strategies, but these
> will be very high-level and really amount to applications of some very
> broad principles of game theory garnered from simpler problems. These are
> probably of more value as they might provide some actual guidance, but
> will lack the thoroughness that at least I would love to find.

Here's a game theory example that takes the form of your "simplified
betting and calling rules, etc," but clearly has analogies in the real
poker world (credit to Bill Chen):

The following game is played by three players, a rock, a hero, and a
maniac. They play the game three-handed; the rock sits to the hero's
left, and the maniac to the hero's right. They each ante one unit.
They are each dealt one card from a large deck that contains about 18%
aces and the rest deuces. After they receive their cards, there is a
round of betting, where any player may bet a fixed amount of 3 units,
and the other players may fold or call at their option. After the
round of betting, all hands that have not folded are shown. All aces
split the pot. If there are no aces, then all deuces split the pot. It
can be shown that if the maniac always bets and the rock only calls with
an ace, then the hero is -EV in this game, no matter what strategy he
chooses. This is a pretty important result, as we have here an example of
a completely "fair" game, where the hero knows every parameter of the
game and exactly what his opponents' strategies are, but cannot win or
even break even, despite the fact that both his opponents are making
serious mistakes in the game.

Consider the implications that might transfer from this to your
favorite poker game.

Jerrod Ankenman

Stig Eide

Jul 11, 2002, 6:11:04 PM
I don't think this example is relevant, due to the high proportion of
ties. In poker, you should be able to get +EV by mimicking the best
player (if his strategy is known, as in the example). I don't think
there is one poker strategy A that is better than B head to head, but
that loses if there are two players with strategy A against one with
strategy B.
Peace,
Stig
jerroda...@yahoo.com (Jerrod Ankenman) wrote in message news:<b3e2396.02071...@posting.google.com>...

>
> Here's a game theory example that takes the form of your "simplified
> betting and calling rules, etc," but clearly has analogies in the real
> poker world (credit to Bill Chen):
<snip>

LouKrieger

Jul 11, 2002, 6:13:47 PM
>> Subject: Re: Game theory and John Nash
From: Keith Ellul

There exist hands A, B, and C such that A is a heads-up showdown favourite over
B, B is a heads-up showdown favourite over C, and C is a heads-up showdown
favourite over A. <<

Sounds more like Roshambo than poker to me....

Rick McGrath

Jul 11, 2002, 11:19:01 PM

"Alix Martin" <al...@noos.fr> wrote in message
news:e3f24c4f.02071...@posting.google.com...

Please allow me to phrase my understanding of your comment a little
differently and see if it gets anywhere useful or interesting...

The value of a coalition is based on one premise: that taking an action that
is not in my own individual best interest in the game will provide a return
to another member of the coalition that is greater than my loss. As a group
we gain. Example: I know you have the nuts so I raise you to jack up the pot
at the expense of a third player knowing I will lose. You win more than I
lose so *we* win.
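A toy number check on that pot-jacking example (the raise and call sizes below are made-up, purely illustrative):

```python
RAISE = 4   # my raise, which I know I will lose to my partner's nuts
CALL = 4    # the third player's call induced by my raise

my_loss = RAISE                  # my raise goes into a pot I cannot win
partner_gain = RAISE + CALL      # partner's nuts collect both bets
coalition_net = partner_gain - my_loss
# My raise is a wash inside the coalition (my partner wins it back), so
# the coalition's true profit is exactly the third player's induced call:
# the partner wins more than I lose, and *we* win.
```

This is the nonzero-probability EV gain over individual play that the coalition argument turns on: no single player acting alone would make that raise.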

The Fundamental Theorem of Scamming says that such opportunities arise
enough in poker to make scamming a winning strategy. It also suggests that
when there is one opportunity to do this in a game there is either a second
opportunity for the other group or there is a counter strategy to defeat it
if all other players at the table team up.

It occurs to me that any mathematical proof of the theorem would likely rely
on the same tool Nash needed for his work: Kakutani's fixed point
theorem, simply because we need to prove the existence of such an outcome,
and might be able to prove that it exists for every hand.

Rick

Rick McGrath


tadperry

Jul 12, 2002, 4:46:18 AM

"Jeff Yoak" <je...@yoak.com> wrote in message
news:pan.2002.07.10.13....@yoak.com...

> On Tue, 09 Jul 2002 21:35:54 -0700, Jaque Mate wrote:
>
> > J Nash from the story in " A beautiful mind"
>
> I'll avoid going on a long rant about this, but I want to say briefly that
> all the math and everything about Nash's notions in that movie was
> completely wrong. Not only is that scene with the girls *not* a Nash
> Equilibrium, there are Nash equilibriums in that "problem," but they are
> different from what he discovers.
>
> That said, I loved the movie. :-) When movies are even peripherally
> about math or computers, they tend to be a lot of fun for me, but as a
> defense mechanism I've come to expect they are going to get everything
> wrong. Once I got to that conclusion, I found that I could enjoy those
> movies
>
> > 1) Can someone elaborate in what way Nash equilibrium applies to Poker
> > (for non mathematicians)?
>
> I'm really interested too. A while back I thought about trying to do some
> research in this area, but decided that the problem was too wickedly
> complex and there is too much to be gained in statistics and other
> elementary things to bother with it. If someone else has, I'd be very
> interested in hearing about it too.

It amounts to setting calling standards, folding standards, betting
standards, raising standards, and bluffing standards as a function of the
potsize and the opponent type. It's possible to just go with "typical
opponent" if you don't know.

In fact, it's best as a backup plan for difficult opponents, as usually the
key to someone is a lot simpler than needing to apply something this
advanced.

For a bot, however, I've been operating on the assumption that this may be
the best approach if combined with something like the following: Take the
past 100 hands against someone, analyze all the data, then reanalyze, say,
the last 30, and combine the two, weighting the last 30 more heavily. Then
use that analysis as a player profile to estimate the correct equilibrium
percentages for what's mentioned above, and the bot's play would be very
difficult to deal with.

It makes it possible to track changes in opponent play and track tendencies
and to exploit them properly.

It's along the lines of this pseudocode: "If opponent response is A, X% of
the time, then throw in, B, the counterplay to that strategy, Y% of the
time."

For some opponents, the numbers that pop out are 0% and 100%, or close to
it.

Now this needs to be done for all the preflop starters for preflop play and
all the possible ways to hit a flop in the general sense (it is quite
finite).

If this induces changes in opponent behavior (something that's actually
somewhat predictable) the bot can automatically adjust.

I think it's good to understand this about poker, and then just go play it,
but a bot is best to follow the above cooked formula, or something like it,
as far as I can tell.

tvp

Stig Eide

Jul 12, 2002, 8:48:40 AM
Keith Ellul <kbe...@fe02.math.uwaterloo.ca> wrote in message news:<Pine.SOL.4.44.02071...@fe02.math.uwaterloo.ca>...

> Actually, hold'em is far more complicated than that.
>
> There exist hands A, B, and C such that A is a heads-up showdown
> favourite over B, B is a heads-up showdown favourite over C, and C is a
> heads-up showdown favourite over A. There is one occasionally-repeated
> (in this newsgroup) example of this phenomenon but I can't remember
> what it is. So, in this case, how would you convert these hands (A, B,
> and C) to numbers?
> Keith

Keith!
No matter how you twist it, each hand has a probability of winning,
and that is a number between 0 and 1. Since your opponents hands are
unknown, the probability is constant, depending on the number of
opponents.

> Lou Krieger adds in:


> Sound more like Roshambo than poker to me....

hehe, good ol' Rock - nothing beats it!
Actually Roshambo, or Rock, Paper, Scissors, is a fine example of how
useless game theory is.

Peace, Stig

Jacob Johannsen

Jul 12, 2002, 5:22:23 PM
stig...@yahoo.com (Stig Eide) writes:

> Keith Ellul <kbe...@fe02.math.uwaterloo.ca> wrote in message news:<Pine.SOL.4.44.02071...@fe02.math.uwaterloo.ca>...
> > Actually, hold'em is far more complicated than that.
> >
> > There exist hands A, B, and C such that A is a heads-up showdown
> > favourite over B, B is a heads-up showdown favourite over C, and C is a
> > heads-up showdown favourite over A. There is one occasionally-repeated
> > (in this newsgroup) example of this phenomenon but I can't remember
> > what it is. So, in this case, how would you convert these hands (A, B,
> > and C) to numbers?
> > Keith
>
> Keith!
> No matter how you twist it, each hand has a probability of winning,
> and that is a number between 0 and 1. Since your opponents hands are
> unknown, the probability is constant, depending on the number of
> opponents.

How would you account for the fact that poker hands (and thus the
probability of winning with a given hand) are not independent (That
is, if I hold aces, the probability of you also holding aces is
smaller than if I held two non-aces)?

>
> > Lou Krieger adds in:
> > Sound more like Roshambo than poker to me....
>
> hehe, good ol' Rock - nothing beats it!
> Actually Roshambo or Rock, Paper, Scissors is a fine example of how
> useless gametheory is.

How so?

--
/Jacob Johannsen aka CNN
--------------------------------------------------------------------------
You're in a phrase of mythic riddle messages, all aligned.
--------------------------------------------------------------------------

Alix Martin

Jul 12, 2002, 6:13:41 PM
>
> Please allow me to phrase my understanding of your comment a little
> differently and see if it gets anywhere useful or interesting...
>
> The value of a coalition is based on one premise: that taking an action that
> is not in my own individual best interest in the game will provide a return
> to another member of the coalition that is greater than my loss. As a group
> we gain. Example: I know you have the nuts so I raise you to jack up the pot
> at the expense of a third player knowing I will lose. You win more than I
> lose so *we* win.
>

Yes.

> The Fundamental Theorem of Scamming says that such opportunities arise
> enough in poker to make scamming a winning strategy. It also suggests that
> when there is one opportunity to do this in a game there is either a second
> opportunity for the other group or there is a counter strategy to defeat it
> if all other players at the table team up.
>

The EV of a coalition is greater than or equal to the EV of an
individual player, because EV is maximised over a bigger set. The
coalition achieves at least the same EV as the individual by having
each member of the coalition play optimal "individual" poker. To show
that the EV of a coalition is strictly greater, one needs to exhibit a
situation, occurring with nonzero probability, where the coalition
gains EV over individual play.

> It occurs to me that any mathematical proof of the theorem would likely rely
> on the same proof Nash needed to do his work: Kakutani's fixed point
> theorem, simply because we need to prove the existence of such an outcome,
> and might be able to prove that it exists for every hand.
>

This might prove existence, which "the fundamental theorem of
scamming" leaves open. For 2 players or two coalitions, existence of
an optimal strategy can be proven by recognizing a matrix game (of
strategies or cross-strategies) and applying the minimax theorem.

Alix
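The minimax theorem Alix cites can be illustrated on a toy zero-sum matrix game. The sketch below brute-forces the row player's optimal mixed strategy; the payoff matrix is invented for illustration, not derived from any poker model:

```python
# Brute-force approximation of the minimax value of a 2x2 zero-sum
# matrix game. A[i][j] is the row player's payoff when the row player
# picks row i and the column player picks column j.
A = [[2, -1],
     [-1, 1]]

def row_value(p: float) -> float:
    """Guaranteed EV for the row player mixing rows with probs (p, 1-p):
    the column player responds with whichever column hurts most."""
    return min(p * A[0][j] + (1 - p) * A[1][j] for j in range(2))

# Search a fine grid of mixing probabilities for the best guarantee.
best_p = max((i / 10000 for i in range(10001)), key=row_value)
print(best_p, row_value(best_p))  # p = 0.4, game value 0.2
```

For this matrix the exact answer is p = 2/5 with value 1/5: the mix that makes the column player indifferent between his two columns.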

Patti Beadles

unread,
Jul 13, 2002, 12:49:13 AM7/13/02
to
In article <wvxd6ts...@horse07.daimi.au.dk>,
Jacob Johannsen <c...@daimi.au.dk> wrote:

>> No matter how you twist it, each hand has a probability of winning,
>> and that is a number between 0 and 1. Since your opponents hands are
>> unknown, the probability is constant, depending on the number of
>> opponents.

>How would you account for the fact that poker hands (and thus the
>probability of winning with a given hand) are not independent (That
>is, if I hold aces, the probability of you also holding aces is
>smaller than if I held two non-aces)?

That's easy!

If I calculate the probability of AA winning against two random
hands, then the other hands and the board cards are dealt from
50-card decks. If I'm holding the ace of spades, you will never
have it in your hand, and it will never show up on the board.

-Patti
--
Patti Beadles |
pat...@gammon.com | All religions are equally
http://www.gammon.com/ | ludicrous, and should be ridiculed
or just yell, "Hey, Patti!" | as often as possible. C. Bond

Stig Eide

unread,
Jul 13, 2002, 7:02:25 AM7/13/02
to
Jacob Johannsen <c...@daimi.au.dk> wrote in message news:<wvxd6ts...@horse07.daimi.au.dk>...

> stig...@yahoo.com (Stig Eide) writes:
>
> > hehe, good ol' Rock - nothing beats it!
> > > Actually, Roshambo, or Rock, Paper, Scissors, is a fine example of how
> > > useless game theory is.
>
> How so?

If you play the game-theoretically optimal strategy, you will break
even even if your opponent is playing like Bart Simpson (his
strategy goes like this: "good ol' Rock - nothing beats it!").
If you want to play RoShamBo or poker optimally, you have to study zen
and know your opponent's next move. Game theory won't help you there.
You might find the description of "Iocaine Powder" interesting, "a
heuristically designed compilation of strategies and meta-strategies
which took first place in Darse Billings' excellent First
International RoShamBo Programming Competition.":
http://www.ofb.net/~egnor/iocaine.html
Stig Eide
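Stig's break-even claim is easy to verify numerically: the uniform random strategy scores exactly zero in expectation against all-Rock, and indeed against any opponent strategy. A small sketch, with the scoring convention (win = +1, loss = -1, tie = 0) chosen for illustration:

```python
import itertools

# Expected score of the game-theoretically optimal RoShamBo strategy
# (uniform random) against Bart's all-Rock strategy.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def score(mine: str, theirs: str) -> int:
    """+1 if my move wins, -1 if it loses, 0 on a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

uniform = {m: 1 / 3 for m in BEATS}            # the equilibrium mix
bart = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}

ev = sum(uniform[m] * bart[t] * score(m, t)
         for m, t in itertools.product(BEATS, BEATS))
print(ev)  # 0.0 -- the optimal mix only guarantees break-even
```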

Keith Ellul

unread,
Jul 13, 2002, 10:47:45 AM7/13/02
to
On 12 Jul 2002, Stig Eide wrote:

> Keith Ellul <kbe...@fe02.math.uwaterloo.ca> wrote in message news:<Pine.SOL.4.44.02071...@fe02.math.uwaterloo.ca>...
> > Actually, hold'em is far more complicated than that.
> >
> > There exist hands A, B, and C such that A is a heads-up showdown
> > favourite over B, B is a heads-up showdown favourite over C, and C is a
> > heads-up showdown favourite over A. There is one occasionally-repeated
> > (in this newsgroup) example of this phenomenon but I can't remember
> > what it is. So, in this case, how would you convert these hands (A, B,
> > and C) to numbers?
> > Keith
>
> Keith!
> No matter how you twist it, each hand has a probability of winning,
> and that is a number between 0 and 1. Since your opponents hands are
> unknown, the probability is constant, depending on the number of
> opponents.

No. Do not confuse "unknown" and "random".

You know what probability each hand has of winning against x random hands.
But you are not playing against random hands. So this is irrelevant.

Keith

Jacob Johannsen

unread,
Jul 13, 2002, 11:20:07 AM7/13/02
to
Patti Beadles <pat...@rahul.net> writes:

> In article <wvxd6ts...@horse07.daimi.au.dk>,
> Jacob Johannsen <c...@daimi.au.dk> wrote:
>
> >> No matter how you twist it, each hand has a probability of winning,
> >> and that is a number between 0 and 1. Since your opponents hands are
> >> unknown, the probability is constant, depending on the number of
> >> opponents.
>
> >How would you account for the fact that poker hands (and thus the
> >probability of winning with a given hand) are not independent (That
> >is, if I hold aces, the probability of you also holding aces is
> >smaller than if I held two non-aces)?
>
> That's easy!
>
> If I calculate the probability of AA winning against two random
> hands, then the other hands and the board cards are dealt from
> 50-card decks. If I'm holding the ace of spades, you will never
> have it in your hand, and it will never show up on the board.
>

I might have misunderstood this, but I thought we were dealing
numbers, not cards.

Jacob Johannsen

unread,
Jul 13, 2002, 11:36:11 AM7/13/02
to
stig...@yahoo.com (Stig Eide) writes:

> Jacob Johannsen <c...@daimi.au.dk> wrote in message news:<wvxd6ts...@horse07.daimi.au.dk>...
> > stig...@yahoo.com (Stig Eide) writes:
> >
> > > hehe, good ol' Rock - nothing beats it!
> > > Actually, Roshambo, or Rock, Paper, Scissors, is a fine example of how
> > > useless game theory is.
> >
> > How so?
>
> If you play the game-theoretically optimal strategy, you will break
> even even if your opponent is playing like Bart Simpson (his
> strategy goes like this: "good ol' Rock - nothing beats it!").

True.

> If you want to play RoShamBo or Poker optimal,

Optimal here meaning "winning as much as possible".

> you have to study zen and know your opponents next move.
> Gametheory won't help you there.

Let's say we decide to play a RoShamBo match of 1000 games, and we
both try to win as much as possible. After playing 100 games, I'm
significantly ahead. After another 100 games, in which you try another
non-random strategy, I'm even further ahead. After another few hundred
games, you realise that I'm better at getting into your head than you
are at getting into mine. Therefore, for the rest of our match you
choose to play the game-theoretic optimal strategy (that is, random),
so as to minimize your losses. It is a somewhat "defensive" use of
game theory, but hardly useless.

> You might find the description of "Iocaine Powder" interesting, "a

> heuristically designed compilation of strategies and meta-strategies
> which took first place in Darse Billings' excellent First
> International RoShamBo Programming Competition.":
> http://www.ofb.net/~egnor/iocaine.html

Been there, but I didn't read all of it. It's on my to-do list.

Patti Beadles

unread,
Jul 13, 2002, 1:02:47 PM7/13/02
to
In article <wvxk7nz...@horse03.daimi.au.dk>,

Jacob Johannsen <c...@daimi.au.dk> wrote:
>I might have misunderstood this, but I thought we were dealing
>numbers, not cards.

We are. Actually, we're dealing both.

We know that if you're holding black aces, and your opponent has
red kings, you have (approximately) an 81% chance of winning the
hand. That's the number.

But the way we derive this number involves cards. We figure out
all of the boards that can be dealt with the remaining 48 cards,
and then figure out which hand wins on each one of them.

Make sense?
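The enumeration Patti describes is finite and, by computing standards, not even that large: with both players' hole cards known, 48 cards remain, so there are C(48,5) possible boards to check. (A full equity calculator would also need a hand evaluator to decide each board, which is omitted in this sketch.)

```python
from math import comb

# Two hole cards per player are known, so the board is dealt from the
# remaining 52 - 4 = 48 cards. Exhaustive equity calculation means
# scoring both hands on every possible 5-card board.
boards = comb(48, 5)
print(boards)  # 1,712,304 boards -- small enough to enumerate directly
```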

Gary Carson

unread,
Jul 13, 2002, 8:24:44 PM7/13/02
to
On 13 Jul 2002 17:02:47 GMT, Patti Beadles <pat...@rahul.net> wrote:

>In article <wvxk7nz...@horse03.daimi.au.dk>,
>Jacob Johannsen <c...@daimi.au.dk> wrote:
>>I might have misunderstood this, but I thought we were dealing
>>numbers, not cards.
>
>We are. Actually, we're dealing both.
>
>We know that if you're holding black aces, and your opponent has
>red kings, you have (approximately) an 81% chance of winning the
>hand. That's the number.
>
>But the way we derive this number involves cards. We figure out
>all of the boards that can be dealt with the remaining 48 cards,
>and then figure out which hand wins on each one of them.
>
>Make sense?
>

I've never seen a game theory model of poker that doesn't assume
independence of opponents' hands.

Gary Carson
http://garycarson.home.mindspring.com

Rick McGrath

unread,
Jul 14, 2002, 12:09:45 AM7/14/02
to
The problem I see with both of these responses is that they assume the
game-theoretic optimal strategy is static. The game-theoretic optimal
strategy is *only* static if the opponent is also playing his optimal
strategy. The payoff matrix for a given player is not static; it changes
with the strategy played by the opponent. If you play a poorly chosen mixed
strategy and I identify it, my optimal strategy changes because my payoff
matrix changes. Rock, Paper, Scissors is a perfect example of how game
theory directs us to avoid losing if the opponent is playing "perfectly" in
a symmetrical game, and how to change our strategy to beat the hell out of
him if he plays less than a perfect strategy. The game-theoretic solution
does NOT say to play a random 1/3-each strategy UNLESS the other player does
the same. It DOES tell us to play all paper against Bart, because our payoff
matrix will change.

The problem is not the game theory; it is how you misinterpret the concept
of optimal strategy in game theory. If you calculate the payoff matrix using
statistics, you are automatically *assuming* that the other person plays a
perfect strategy. You can't then use a counterexample of his playing a
sub-optimal strategy and beating you. You are simply violating your *own*
assumptions. Ignoring your own assumption after you've done the
calculations does not condemn game theory, only your own misunderstanding of
where the payoff matrix comes from.

The proper payoff matrix for a game of heads-up poker is not static, and
neither is the optimal play in poker. Both are based on reaction curves
derived from the opponent's play. Reaction curves can be calculated for river
play. Some are easy, some are extremely hard. The game theory payoff matrix
for heads-up can be derived from the first-order conditions of a Stackelberg
duopoly.

Rick.
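Rick's all-paper-against-Bart example can be sketched directly: estimate the opponent's move frequencies from observation, then pick the pure strategy with the highest expected score against that estimate. The observed frequencies below are hypothetical:

```python
# Best response to an observed RoShamBo mixed strategy. Once the
# opponent's frequencies are known, the exploiting strategy is
# generally pure, not the 1/3-each equilibrium mix.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def ev(mine: str, opp_freq: dict) -> float:
    """Expected score per game of always playing `mine` against an
    opponent whose moves follow the observed frequencies."""
    total = 0.0
    for theirs, p in opp_freq.items():
        if mine != theirs:
            total += p if BEATS[mine] == theirs else -p
    return total

bart = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}  # observed: all Rock
best = max(BEATS, key=lambda m: ev(m, bart))
print(best, ev(best, bart))  # paper, winning every game for EV 1.0
```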

"Jacob Johannsen" <c...@daimi.au.dk> wrote in message

news:wvxhej3...@horse03.daimi.au.dk...

Stig Eide

unread,
Jul 14, 2002, 3:28:23 AM7/14/02
to
garyc...@alumni.northwestern.edu (Gary Carson) wrote in message news:<3d30c3d8...@news.mindspring.com>...

> I've never seen a game theory model of poker that doesn't assume
> independence of opponents hands.
>
> Gary Carson

Gary,
Actually, the dependencies between hands should not matter for a
game-theoretic model of poker. Why? Because the theory is meant to be
applied _after_ the hands are dealt. There is no reason to assume anything
about _how_ they are dealt.
The "only" numbers a game-theoretic model needs to describe the hands are
the winning chance of the hand, and how this winning chance will vary
between the betting rounds. (Example: a draw has huge variance before the
river, and might be playable because of that. Another hand might have the
same winning chance, but the river probably will not do much; that is, the
variance (another number between 0 and 1) is low. Therefore, this last hand
is not as good as the draw, even though they have the same probability of
winning.)

Stig Eide

Needless to say, all this is In My Opinion.

PS: I ordered your book at Amazon, but they messed it up and sent me
"How to be a Gardener" instead... What a bad beat!

Jacob Johannsen

unread,
Jul 14, 2002, 10:43:20 AM7/14/02
to
"Rick McGrath" <rmcgr...@comcast.net> writes:

> The problem I see with both of these responses is that they assume the game
> theoretical optimal strategy is static. The game theoretical optimal
> strategy is *only* static of the opponent is also playing his optimal
> strategy. The payoff matrix for a given player is not static. It changes
> with the strategy played by the opponent. If you play a poorly chosen mixed
> strategy and I identify it, my optimal strategy changes because my payoff
> matrix changes. Rock, Paper, Scissors is a perfect example of how game
> theory directs us to avoid losing if the opponent is playing "perfectly" in
> a symmetrical game, and how to change our strategy to beat the hell out of
> him if he plays less than a perfect strategy. The game theoretic solution
> does NOT say to play a random 1/3 each strategy UNLESS the other player does
> the same. It DOES tell us to play all paper against Bart because our payoff
> matrix will change.
>
> The problem is not the game theory, it is how you misinterpret the concept
> of optimal strategy in game theory. If you calculate the payoff matrix using
> statistics you are automatically *assuming* that the other person plays
> perfect strategy. You can't then use a counter example of his playing a
> sub-optimal strategy and beating you. You are simply violating your *own*
> assumptions.

I'm not assuming anything about the way my opponent plays, except that
he's choosing a strategy that exploits my choice of strategy. Since,
apparently, this hypothetical opponent is capable of doing this regardless
of my choice of strategy, I would be better off playing randomly,
because this maximizes my winnings (= breaking even, in this
case). Therefore, game theory is not useless, which is what I was
trying to prove.

Jacob Johannsen

unread,
Jul 14, 2002, 10:45:47 AM7/14/02
to
Patti Beadles <pat...@rahul.net> writes:

> In article <wvxk7nz...@horse03.daimi.au.dk>,
> Jacob Johannsen <c...@daimi.au.dk> wrote:
> >I might have misunderstood this, but I thought we were dealing
> >numbers, not cards.
>
> We are. Actually, we're dealing both.
>
> We know that if you're holding black aces, and your opponent has
> red kings, you have (approximately) an 81% chance of winning the
> hand. That's the number.
>
> But the way we derive this number involves cards. We figure out
> all of the boards that can be dealt with the remaining 48 cards,
> and then figure out which hand wins on each one of them.
>
> Make sense?
>

Yes, it does. I got the impression that we were "dealing" numbers in
the sense that we assign a random number to each player, which is not
the same.
