Do you believe it's mathematically correct, incorrect, or neutral to
play the coin toss hand? Why?
-Patti
[1] Yes, I know there are no perfect coin tosses, and you can't know
your exact percentages in any case, yada yada yada. Think of it
as the poker equivalent of a theoretical frictionless surface.
--
Patti Beadles |
pat...@gammon.com |
http://www.gammon.com/ |
or just yell, "Hey, Patti!" | Quisque comoedus est.
>It's the first hand of a nolimit tournament in which there is a
>reasonable amount of play, and you feel that you have a skill edge.
>you have the opportunity [1] to take a coin toss to either double
>up or bust out.
>
>Do you believe it's mathematically correct, incorrect, or neutral to
>play the coin toss hand? Why?
>
If you believe you have a skill edge, why should you take a 50-50 and very
possibly knock yourself out? Give your skill a chance to work.
It would be mathematically wrong for you to take such a coin-toss
(even if 'equal skill,' without your "skill edge"), because in tournaments
(other than 'winner-take-all,' that is), each additional chip in your stack is
worth less to you (in terms of expected prize money) than the previous
chip, and so the chips that you would be risking are more valuable to you
than the equal number of chips that you could gain...OK?!
Later, Greg Raymer (FossilMan)
Barbara Yoon <by...@erols.com> wrote in message
news:a0kjs1$5a1$1...@bob.news.rcn.net...
I think the real deciding factor is how much, if any, does your edge over
the field increase if you immediately become chip leader. In other words,
if you were 15% better before the first hand was dealt, will you become 25%
better with the biggest stack? If you will remain at 15%, then I would fold
this coin-toss hand.
Later, Greg Raymer (FossilMan)
Patti Beadles <pat...@rahul.net> wrote in message
news:a0k70s$fue$1...@samba.rahul.net...
Hang on, I think I see it now: the rest of the field benefits because they
all move closer to being in the money.
E U R E K A ...!!
"Patti Beadles" <pat...@rahul.net> wrote in message
news:a0k70s$fue$1...@samba.rahul.net...
My answer will go against the grain and does not consider
favorable/unfavorable blind and ante structures in specific events ("a
reasonable amount of play" is fine for this exercise).
I understand Barbara's mathematical analogy and statement on relative chip
value (I'll even throw in the all-in value that will diminish if you were to
accept the coin flip). That said, Patti specified no-limit, thus if I were
playing a no-rebuy, five-hour-duration NL HE event, I'd accept the outcome of
the toss. Alas, while I believe I understand the pros and cons, I can't
quantify the pro side. The chip-leader fringe benefits (mainly moving
players off hands) are enough for me. Besides, I figure I rate to win 54% of
the flips by calling tails, right?
Lee
>Does anyone know of a rigorous mathematical proof of the commonly-held
>belief that each tournament chip won is worth slightly less than each one
>lost? It seems sensible intuitively and qualitatively, but I've never seen
>it proven quantitatively.
>
All the chips, when possessed by the collection of players, are worth
the prize pool.
All the chips, when possessed by a single player, are only worth first
prize, which is usually around 40% of the prize pool.
Since you start with 1/N of the chips and 1/N of the equity (N = # of
players entering the tournament), all else being equal, you cannot
arrive at the limit of all the chips being worth only a fraction of
the prize pool without the value of additional chips being worth less
than value of the chips you currently possess.
QED
Pick your favorite tournament equity formula and do the math for
different stack sizes, and you'll see it.
Or here's a fairly easy "proof"-- You buy in to a $100 tourney and get
T100 in chips. There are 100 players, and first prize is $4000.
Clearly, at the start of the tournament, each chip in your stack has a
value of $1. At the end of the tournament, each chip in your stack is
worth $.40.
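That back-of-the-envelope arithmetic is easy to check in code (a minimal
sketch using only the numbers above):

```python
# Per-chip dollar value at the start vs. the end of the example
# tournament: $100 buy-in, 100 players, T100 stacks, $4000 first prize.
buyin = 100            # dollars
players = 100
starting_stack = 100   # tournament chips (T100)
first_prize = 4000     # dollars, 40% of the $10,000 prize pool

value_per_chip_start = buyin / starting_stack    # $1.00 per chip
total_chips = players * starting_stack           # T10,000 in play
value_per_chip_end = first_prize / total_chips   # $0.40 per chip

print(value_per_chip_start, value_per_chip_end)  # 1.0 0.4
```

So the eventual winner has turned $1.00 chips into $0.40 chips along the
way; the decline in marginal value has to happen somewhere in between.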
-Patti
--
Patti Beadles |
pat...@gammon.com |
http://www.gammon.com/ | The crazy chick with
or just yell, "Hey, Patti!" | the purple hair.
"Patti Beadles" <pat...@rahul.net> wrote in message
news:a0k70s$fue$1...@samba.rahul.net...
> Say you double up on the first hand of Patti's 40 person, $100 buy-in
> tournament. It seems to me that your equity has gone from $100 to
> $200--ie, that each additional chip is worth exactly $1.
Since at the end the prize money will not be paid based on how many chips
you have, but rather based on when you busted out, the marginal value of
chips is a declining function of stack size.
Furthermore, Patti stipulated that your expectation was above-average; that
is, your starting equity exceeded $100.
So doubling your stack less than doubles your tournament equity.
Yet, that doesn't make it wrong to go for it, if there are other
opportunities for positive-equity play. Suppose you see a single-table
satellite forming with a field even weaker than your current tournament.
That creates a safety net; if you bust out of this tournament, you can jump
into that satellite with good positive expectation. In that case, losing
all your chips isn't losing all your equity.
If your own time is valuable, perhaps busting out immediately and freeing up
an expected couple hours of your time is worth a fair amount. Again, while
doubling up doesn't double your current equity, this safety net means that
busting out doesn't lose all your equity either.
Similarly, if this is a rebuy tournament, and you view your marginal equity
from doubling up as worth more than the cost of a rebuy, it's correct to
gamble and toss the coin - heads, you double up, and gain more than a rebuy
in equity; tails, bust, rebuy, and have about the same equity as before
(your share of chips is slightly smaller than before, but your rebuy makes
the prize pool bigger; overall, your equity after the rebuy will be a tiny
bit greater than before losing a rebuy stack in a place-paying tournament
unless they are charging an entry fee on rebuys or equivalently raking a
percentage of the prize pool.)
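The rebuy case can be sketched under the crude "equity proportional to chip
share" model; the 40-player field and $100 buy-in below are made-up numbers
for illustration:

```python
from fractions import Fraction

n, buyin = 40, 100   # hypothetical field size and buy-in, in dollars

# Before the flip: one of n equal stacks in an n-buyin prize pool.
equity_before = Fraction(buyin * n, n)            # $100

# Bust, then rebuy: your rebuy grows the pool by one buy-in, and you
# hold one starting stack out of n+1 stacks' worth of chips.
pool_after = (n + 1) * buyin
equity_after = Fraction(1, n + 1) * pool_after    # $100 again

print(equity_before, equity_after)   # 100 100
```

Under the pure proportional model the equities come out exactly equal; the
"tiny bit greater" above comes from the place-paying structure's premium on
short stacks, and any fee raked from rebuys pushes it the other way.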
--
Randy Hudson <i...@panix.com>
Lee Munzer wrote in message ...
This is interesting because the general received wisdom appears to be
that you should play tighter in tourneys, or at the very least try to
avoid all-ins. The reason is that even if you have a 1/2 chance of
doubling your equity with the all-in, if you survive and do the same
thing later your chances of surviving are now only .5*.5 or .25; do
it again and you go down to .125. However, by increasing your
equity you also make it *less likely* that you will be taken all-in
again, and hence this argument too may be flawed. Also, mindful of
the reluctance of people to call all-in, you can exploit this by going
all-in yourself with a raise. A while back I saw a guy go all in with
45s, get called nearly all in by a guy with AA, and promptly get
rewarded by seeing the 45s guy hit his flush. If it ain't right to
call with AA, it ain't right to call, period. If you check the stats, AA
is only about an 80:20 favorite over 45s, and of course normally you'd
expect he would have had something a lot bigger. Ironically, if he had
AKs the AA guy's chances would have been better.
I have been experimenting with overbets recently, and my reward was to
see a guy go all-in against my KK; I called to see his AA hold up
and take me out. Of course, I really should have known that the
"impossible" had happened and he had the AA. But I'm no Phil Hellmuth,
and didn't fold. The problem with overbets is that though they get rid
of drawers, the *only* calls you are likely to get are
monsters who will reraise and beat you, so you either win a small pot
or lose a big one. This makes the theory rather complex and I'll have
to do a bit more work on it. On that note, someone mentioned their
"favorite equity formula". I've never seen one - can someone tell me
one?
PW/BluffKingHal online
JP, Patti, and Randy have all answered your question in their different ways...
I myself tend to understand complicated things like this by relying more on
concrete examples... So here, let's say that you're one of the final three players,
with equal chip stacks, competing for 1st, 2nd, 3rd prizes of $1,200, $900, $600
-- at this point, your stack is worth a prize expectation of $900 (1/3rd of the
total $2,700 prize pool -- right?!). And if you LOSE your stack, you get the
$600 3rd prize, for a loss of $300 from your initial $900 expectation. But if
you WIN, knocking out one of your opponents, and DOUBLING your stack,
your prize money expectation grows to $1,100 (as you would then have 2/3rds
of the chips, competing for the 1st, 2nd prizes of $1,200, $900 -- right?!), for
a gain of only $200 from your initial $900 expectation. So losing your stack
would cost you $300, but winning another stack of equal size would gain you
only $200. I hope we've been able to clarify this complicated issue here...OK?!
>into your decisions.
>Later, Greg Raymer (FossilMan)
>
>
But the original poster said you had a skill edge. THAT is the main reason a
coin toss would be very wrong, even at the beginning of the tournament.
So are we saying that it's possible to be both ahead and behind?
I play in NL tournaments occasionally. I am good at knowing when I'm ahead, but
it's not always by a large margin. Am I actually WRONG to press things when the
edge is marginal?
Separately, what percentage edge do you need to go all-in on that first hand?
If 50-50 isn't enough, is 60-40? Where's the line? How does one figure it?
tvp
>> It would be mathematically wrong for you to take such a coin-toss
>> (even if 'equal skill,' without your "skill edge"), because in tournaments
>> (other than 'winner-take-all,' that is), each additional chip in your stack is
>> worth less to you (in terms of expected prize money) than the previous
>> chip, and so the chips that you would be risking are more valuable to you
>> than the equal number of chips that you could gain...OK?!
Greg Raymer (FossilMan):
>> Barbara is correct, and not. It is true that the chip you win is worth
>> slightly less than the chip you lose. However, early in a tourney, this
>> factor is so negligible that it can be ignored. When you get into or near
>> the money, this factor becomes quite significant and needs to be factored
>> into your decisions.
Dsklansky:
>> But the original poster said you had a skill edge. THAT is the main reason
>> a coin toss would be very wrong, even at the beginning of the tournament.
Amusing how here in 'contentious' RGP, three responders to Patti's question
can superficially appear to be 'arguing' with each other, when in fact there is
absolutely no real conflict at all between them -- just slightly different focuses...
>> ...mathematically wrong...because...each additional chip in your stack is
>> worth less to you than the previous chip...
Greg Raymer (FossilMan):
>> However, early in a tourney, this factor is so negligible that it can be ignored.
TadPerry01:
> [So then] what percentage edge do you need to go all-in on that first hand?
> If 50-50 isn't enough, is 60-40? Where's the line? How does one figure it?
"It depends"...on how many players, the prize structure, relative skill
assumptions, and such -- but typically, assuming equal skill, I suspect
that as "FossilMan" suggests, it's likely closer to your "50-50"...OK?!
Let's assume that it's incorrect for you to take the coin toss if you
have both a skill advantage, and the luxury of time to use it. (I'd
probably take the coin toss in a single table satellite with short
rounds, for example, since I wouldn't have as much time for my skill
edge to be a major factor. I wouldn't in a large tournament with a
relatively weak field.)
The key concept, I think, is that if you take the coin toss, you're
completely nullifying your skill edge 50% of the time.
Would you take 51/49 on the first hand? 55/45? Where do you draw the
line? It seems to me like you should be able to calculate that point
based on your skill edge, but how?
-Patti
--
Patti Beadles |
pat...@gammon.com |
http://www.gammon.com/ | Dammit! I've got a date tonight
or just yell, "Hey, Patti!" | and I can't find my rope.
Your "skill edge" notion here makes your already difficult question even
more complicated... But no matter what, the answer depends on the
PRIZE STRUCTURE -- for examples, assuming EQUAL skill, in
a TEN-player tournament, if it were 'winner-take-all,' then you could
advantageously go for the 'coin-toss' with anything better than "50/50;"
but if the prizes were seats in 'The Big One' for the top NINE finishers
(and nothing for tenth), then you would need "90/10"...see?!
In reality, I wonder whether anyone has played enough tournaments and kept
accurate enough records to get a statistically significant result.
"Patti Beadles" <pat...@rahul.net> wrote in message
news:a0pau3$gaf$1...@samba.rahul.net...
>> ...depends on the PRIZE STRUCTURE -- for examples, assuming
>> EQUAL skill, in a TEN-player tournament, if it were 'winner-take-all,'
>> then you could advantageously go for the 'coin-toss' with anything better
>> than "50/50;" but if the prizes were seats in 'The Big One' for the top
>> NINE finishers (and nothing for tenth), then you would need "90/10"...
The two examples given above are what you could call 'boundary limits'
-- that is, the real answer for more 'typical' prize structure lies somewhere
in between "50/50" and "90/10"...OK?!
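Those two boundary cases reduce to one line of arithmetic each, under the
crude assumptions behind them: equity proportional to chip share in the
winner-take-all case, and (for the nine-seats case) that doubling up
effectively locks up a seat while busting gets nothing:

```python
def breakeven(current_equity, equity_if_win):
    # You risk everything: p * equity_if_win must match current_equity.
    return current_equity / equity_if_win

# Winner-take-all, 10 equal players: equity == chip share.
wta = breakeven(current_equity=1/10, equity_if_win=2/10)

# Seats for the top 9 of 10: a 9/10 shot at a seat to start, and an
# (approximate) lock on one after doubling up.
seats = breakeven(current_equity=9/10, equity_if_win=1.0)

print(wta, seats)   # 0.5 0.9
```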
It's not just the size of your edge; it's how the edge manifests
itself:
I like Lee's point: If the table is risk-averse (playing tighter than
optimal vs. big stacks), a big stack may have additional value early on.
Just before the final table there is too little calling/too much
stealing in my opinion. Being a big stack here can certainly have
mathematically unwarranted value...
Personally, I tend to avoid the close ones early. I find my edge comes
in the middle to latter stages of play. If I'm still around I do well.
I think the image created by playing tight early helps me to steal
more later on, when the rewards tend to be greater.
Quite a complex issue though and more complex than DS says: If you
play well early with a big stack and struggle with average chips,
maybe you're better taking the 50/50.
What is the value of my time, for purposes of this discussion?
If I bust out on the first hand, I lose all my equity in the tournament
prize pool. If I double up, my equity in the prize pool almost-but-not-quite
doubles, and the length of time I expect to continue playing in the
tournament increases. If I bust out, my prize equity vanishes, but I
now have several more hours of time to use than I did before.
If the tournament buyin is on the order of $100, and there's a juicy $15-$30
game going on, and my mid-limit win rate is between one and two bets per
hour, then gambling on the first hand might well be correct! If I win the
hand, my tournament equity doubles; if I lose the hand, the expected win
from the extra time in the $15-$30 game replaces the lost tournament equity.
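That side-game arithmetic can be roughed out; the three freed-up hours
below are an assumption for illustration, not something specified above:

```python
big_bet = 30        # the $15-$30 game
hours_freed = 3     # hypothetical hours left in the tournament

# Expected side-game win at one and two big bets per hour:
for bets_per_hour in (1, 2):
    print(bets_per_hour, big_bet * bets_per_hour * hours_freed)
# prints 1 90 and 2 180: $90-$180, roughly a $100 buy-in's worth
```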
I was wondering this but decided "juicy" was an important factor in your
decision.
Except that if you are a skilled player, you increase your winning
chances by doubling up immediately rather than later. This is because
you should have some positive expectation for your play. Suppose you
have 2000 and the next round is an hour of 50-100. Then if you are a
1 big bet/hr winner, having 2000 at the beginning of the round should
be (to borrow David's phrase) pretty close to having 2100 at the end
of the round. Also because of the same effect, P(doubling up) is
greater earlier and less later.
This further complicates the question. In David's model if you assume
you have twice the average chance of winning in a 128-player field,
your chances of doubling up are roughly 55% on average. It's a
reasonable assumption to say P(doubling) is 55% if you are an average
stack, less if less than average. So say you are average with 1000 at
the beginning of the 50-100 round, and a genie (or some other plot
device) gives you the chance to double up immediately, what odds do
you need? Well, figure it will take you a round on average to
double up, which seems reasonable since the variance per one-hour
round is 10 big bets. If you win you will have 2000, which we have said to be
worth 2100 one round later, as opposed to taking a 55% chance to have
2000 one round later. Hence you should have a 52.5% chance of winning
the coin flip.
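The breakeven figure falls out directly from those numbers (this just
restates the calculation above, warts and all; in particular it ignores
what happens when you decline the flip and fail to double):

```python
# Win the flip: 2000 chips now, worth ~2100 a round later given a
# 1 big bet/hr edge at 50-100.  Decline: a 55% shot at 2000 chips a
# round later.  Breakeven flip probability p solves
#   p * 2100 = 0.55 * 2000
p = 0.55 * 2000 / 2100
print(round(p, 4))   # 0.5238, i.e. the ~52.5% quoted above
```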
Note that you only need a 50.6:49.4 edge to risk half your chips.
Now, many assumptions are made here and many more left out that should
be considered, but I've tried a few different models and I have gotten
similar results. What does this mean to me in practice? Well in
general I think I am a pretty quantitative guy, but I can't really say,
"Well, based on my read of the range of hands my opponent(s) have,
including implied odds, I think I have a 1% edge here, but since this
is a tournament situation I can now fold!" "But wait a minute, that
15-30 game there seems juicy. Let me see, based on the players there
my win rate is $30/hr. So like time value converted to tourney chips
is..."
Seriously though, don't sweat it too much until it gets close to the
money. At the beginning of a tourney just put in your chips if you
have an edge, fold if it just seems like a toss up. Not bad advice for
a ring game either.
Yes. That is why I said pretty close.
> In David's model if you assume
> you have twice the average chance of winning in a 128-player field,
> your chances of doubling up are roughly 55% on average.
Could you explain how you arrived at this figure? It's not obvious to
me, and the 55% is crucial to your conclusions so I want to be sure to
understand how you got it.
Thanks a lot,
Matt
Well, you need to double up 7 times to win. So say your chance of
winning is 1/64 instead of 1/128. Let x be your chance of doubling
up. Then we have x^7 = 1/64. So we solve for x.
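In code (the ~55% upthread is this value, rounded):

```python
# A player twice as likely as average to win a 128-player field must
# double up 7 times, so the per-round doubling chance x solves
# x**7 = 1/64.
x = (1 / 64) ** (1 / 7)
print(round(x, 3))   # 0.552
```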
Bill
But that's not a proof of the question as stated, and certainly not
rigorous. All that you've shown is that the accumulation of *all*
the remaining chips is worth less, on a per-chip basis, than your
original stack (and that only for place-paying tournaments, where
all players are equally skilled). In order to show that *each* chip
won is worth progressively less, you would at least need to show
that the marginal value of additional chips is monotonic with
respect to the number of chips in your stack at all stages of the
tournament, and you haven't come close to showing that yet. For example,
it might well be the case that going from 1/N to 2/N of the chips early
on more than doubles your chance of winning, but the difference between
70% and 90% of the chips is negligible if everyone else is still at 1/N.
Such a situation might arise if players are typically giving up
significant chip EV to avoid the risk of busting out. Then the first
few chips you win early might add significant value in letting you
exploit more marginal situations, but beyond that the utility might drop off.
So it might even be the case that it's right to push marginal opportunities
early if and only if most of your opponents think it's wrong.
--
Dave Wallace (Remove NOSPAM from my address to email me)
It is quite humbling to realize that the storage occupied by the longest
line from a typical Usenet posting is sufficient to provide a state space
so vast that all the computation power in the world can not conquer it.
Thanks for explaining the 55%. But wait a minute, I have a new
question.
> Well, if you figure it will take you a round on average to
> double up, which seems reasonable since the variance per hour round is
> 10 big bets. If you win you will have 2000, which we have said to be
> worth 2100 one round later, as opposed to taking a 55% chance to have
> 2000 one round later. Hence you should have a 52.5% chance of winning
> the coin flip.
Except that if I lose the coin flip, I'm out of chips. This will
happen 47.5% of the time. In the case where I decline the coin flip,
55% of the time I will have 2000 one round later, but the other 45% of
the time, I will usually still have chips, right? I would think there
are many cases where a good player does not double up, but at least
maintains his chip count. And that has to increase his overall
tournament EV. Therefore I would think you would need significantly
better than a 52.5% chance to take the coin flip.
What do you think? Are there some flaws in this reasoning?
Thanks,
Matt