
Which is greater: luck or skill


Bill Taylor

Aug 29, 1995
kai...@cip.informatik.uni-wuerzburg.de (Bernhard Kaiser) writes:

|> Some time ago someone put together a very interesting list, in
|> which he compared the skills in various games.
|> He divided the players of the different games into several classes,
|> such that one player has about a 75% chance to beat a player of
|> a class one lower. (In Backgammon, a 15 Point match was given).
... ...
|> The most skillful game by a wide margin was Go, with I think 14 classes (!),
|> only the second was chess, with 10, and third was Poker, with 6

I vaguely recall this article too.

Has anyone got a copy of it? I'd like to see the original again.
Indeed, a complete re-post would probably not go amiss.

-------------------------------------------------------------------------------
Bill Taylor w...@math.canterbury.ac.nz
-------------------------------------------------------------------------------
You are what you remember.
-------------------------------------------------------------------------------


Michael Sullivan

Aug 29, 1995
In article <41trii$8...@cantua.canterbury.ac.nz>,

Bill Taylor <w...@math.canterbury.ac.nz> wrote:
>kai...@cip.informatik.uni-wuerzburg.de (Bernhard Kaiser) writes:

>|> Some time ago someone put together a very interesting list, in
>|> which he compared the skills in various games.
>|> He divided the players of the different games into several classes,
>|> such that one player has about a 75% chance to beat a player of
>|> a class one lower. (In Backgammon, a 15 Point match was given).

>|> The most skillful game by a wide margin was Go, with I think 14 classes (!)

This doesn't make sense. It's too small, actually.

In amateur matches of people one rank apart playing even, records suggest
that the stronger player has approximately an 80% chance of winning.

There are estimated to be 40+ ranks separating a total beginner from a
top professional.

Even if we discount many of the beginning levels as fairly easy to
progress through, there are still at least 20-25 ranks among people who
play and study fairly regularly and have more than a few months
experience.

>|> , only the second was chess, with 10, and third was Poker, with 6

The chess number seems fairly close. I've heard ~150 Elo points make for
a 75% chance of winning. That would lead to about 13-15 ranks.

There's a big problem with this method of determining a game's complexity
or "skill", however.

It's easy to stretch the levels by making longer "games". Use two out of
three instead of a single game, and suddenly the stronger player has a
much better chance of winning a contest.

A good example of this is Straight pool. The higher the score one plays
to, the greater chance of the better player winning.

Does this mean that playing straight pool to 500 is more complicated than
playing to 100? It's still the same game and the same skill set. You've
just separated out the skill levels finer by playing longer.

Go is a much longer game than chess -- an average of ~120-150 moves by both
sides to ~30-35 for chess. It's not surprising that there would be
more classes if the games are equivalently "complex."

If you want to compare the two, "best of 3" or "best of 5" for the chess
games might be more reasonable.
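The amplification described here can be checked directly. A minimal sketch, assuming independent games each won by the stronger player with probability p:

```python
from math import comb

def match_win_prob(p, n):
    """Chance the stronger side wins a best-of-n match (n odd),
    assuming independent games, each won with probability p."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

print(match_win_prob(0.75, 1))   # 0.75
print(match_win_prob(0.75, 3))   # 0.84375: best-of-three already stretches the edge
print(match_win_prob(0.75, 15))  # over 0.98: a long series nearly guarantees the result
```

A 75% per-game favorite becomes an ~84% favorite over best-of-three, which is exactly the stretching effect being objected to.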


Michael

Darse Billings

Aug 30, 1995
m...@panix.com (Michael Sullivan) writes:

>In article <41trii$8...@cantua.canterbury.ac.nz>,
>Bill Taylor <w...@math.canterbury.ac.nz> wrote:
>>kai...@cip.informatik.uni-wuerzburg.de (Bernhard Kaiser) writes:

>>|> Some time ago someone put together a very interesting list, in
>>|> which he compared the skills in various games.
>>|> He divided the players of the different games into several classes,
>>|> such that one player has about a 75% chance to beat a player of
>>|> a class one lower. (In Backgammon, a 15 Point match was given).

>>|> The most skillful game by a wide margin was Go, with I think 14 classes (!)

>This doesn't make sense. It's too small, actually.

>In amateur matches of people one rank apart playing even, records suggest
>that the stronger player has approximately an 80% chance of winning.

>There are estimated to be 40+ ranks separating a total beginner from a
>top professional.

>Even if we discount many of the beginning levels as fairly easy to
>progress through, there are still at least 20-25 ranks among people who
>play and study fairly regularly and have more than a few months
>experience.

The following was posted to rec.games.backgammon, but not the other two
groups in this thread. It should clarify the claim a bit.

==== include ====

From: Peter Fankhauser <fank...@darmstadt.gmd.de>
Date: 27 Aug 1995 15:26:17 GMT
Newsgroups: rec.games.backgammon
Subject: Re: Which is greater: luck or skill

kai...@cip.informatik.uni-wuerzburg.de (Bernhard Kaiser) wrote:

> Some time ago someone put together a very interesting list, in
> which he compared the skills in various games.
> He divided the players of the different games into several classes,
> such that one player has about a 75% chance to beat a player of
> a class one lower. (In Backgammon, a 15 point match was given.)
>
> He counted 5 classes for Backgammon, meaning in the first class
> are the absolute beginners, in the fifth the experts.
> A player of the fourth class, for example, would have about a
> 98.5% chance to win a match against a player from the first class
> (1 - 0.25^3).
>
> The most skillful game by a wide margin was Go, with I think 14 classes (!),
> only the second was chess, with 10, and third was Poker, with 6
> classes, as I remember. Equal to Backgammon was "Muehle" (I don't
> know the English word, perhaps parcheesi?); other games were below.

To my knowledge it was Bill Robertie who applied this method.
I give his table from Inside Backgammon (Vol. 2, No. 1, Jan-Feb 1992),
hoping he won't sue me for that :)

Complexity Numbers:
-------------------
Go 40
Chess 14
Scrabble 10 (that surprised me a bit)
Poker 10
Backgammon 8
Checkers 8
Hearts 5
Blackjack 2
Craps 0.001
Lotteries 0.0000001 (that surprised me a bit also - in Germany
 there's a group-lottery company which is
 fairly successful by choosing those numbers
 that are unlikely to be chosen by the
 majority, like excluding birthdates etc. -
 so it should be higher)
Roulette 0
-------------------

funk

==== end include ====
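The 98.5% figure in the quoted post is just the 75% per-class edge chained across three class boundaries. A quick check of that arithmetic, under the post's (strong) assumption that loss probabilities simply multiply:

```python
def cross_class_win_prob(levels_apart, p_one_level=0.75):
    """Win chance across several classes, assuming each class boundary
    multiplies the loss chance by (1 - p_one_level) = 0.25."""
    return 1 - (1 - p_one_level) ** levels_apart

print(cross_class_win_prob(1))  # 0.75
print(cross_class_win_prob(3))  # 0.984375, the ~98.5% quoted for class 4 vs class 1
```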

I agree, roughly, with Robertie's numbers for the deterministic games,
but the methods he uses for the games with a random element seem to be
highly dubious.

Backgammon is a great game, and Robertie is a two-time world champion,
but I just can't put him in the same class as Marion Tinsley (recently
deceased checkers world champion). And with all due respect to Gerry
Tesauro's fine TD_Gammon program, Jonathan Schaeffer's Chinook program
is more impressive, and solves a *much* tougher problem.

I also don't understand the ratings for Craps and Roulette. Of course
there isn't much skill in a fixed percentage negative expectation game,
but you can still play it badly, by choosing the worst bets available in
the game. So I would think there has to be at least part of one level of
differential among such gamblers... Lotteries may be the stupidest game
going, but you can still reduce your expectation by choosing common
numbers, so again, the range shouldn't be within epsilon of zero.



>>|> , only the second was chess, with 10, and third was Poker, with 6

Based on how many hands? Or is there a different criterion?



>The chess number seems fairly close. I've heard ~150 elo points make for
>a 75% chance of winning. That would lead to about 13-15 ranks.

It's closer to 200 points, actually. So that would give about 11 ranks.
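The 200-point figure matches the standard Elo logistic model, which gives roughly a 76% expected score for a 200-point gap. A sketch; the 600-2800 rating span below is only an illustrative guess at the human range, not anything from the thread:

```python
def elo_expectation(diff):
    """Expected score for the higher-rated player under the Elo logistic
    model. Note this is expected score, not pure win probability:
    draws count as half."""
    return 1 / (1 + 10 ** (-diff / 400))

print(round(elo_expectation(150), 2))  # 0.7
print(round(elo_expectation(200), 2))  # 0.76
# Illustrative: a span from ~600 (novice) to ~2800 (world champion),
# cut into 200-point classes, gives about 11 ranks:
print((2800 - 600) / 200)  # 11.0
```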

>There's a big problem with this method of determining a game's complexity
>or "skill", however.

>It's easy to stretch the levels by making longer "games". Use two out of
>three instead of a single game, and suddenly the stronger player has a
>much better chance of winning a contest.

>A good example of this is Straight pool. The higher the score one plays
>to, the greater chance of the better player winning.

>Does this mean that playing straight pool to 500 is more complicated than
>playing to 100? It's still the same game and the same skill set. You've
>just separated out the skill levels finer by playing longer.

>Go is a much longer game than chess -- an average of ~120-150 moves by both
>sides to ~30-35 for chess. It's not surprising that there would be
>more classes if the games are equivalently "complex."

>If you want to compare the two, "best of 3" or "best of 5" for the chess
>games might be more reasonable.

It depends on what level of granularity you want to measure. The natural
choice would seem to be a single game, not individual moves. Robertie
may feel that the effect of randomness on a 15-point backgammon match is
close to the variance associated with the outcome of a single game of
chess, but I see absolutely no justification for such a claim.

Cheers, - Darse.
--

BlddLDfddFdbfRuuruBuubUF

Bernhard Kaiser

Aug 31, 1995
: To my knowledge it was Bill Robertie who applied this method.

: I give his table from (Inside Backgammon, Vol 2, No 1, Jan-Feb, 1992),
: hoping he won't sue me for that :)
:
: Complexity Numbers:
: -------------------
: Go 40
: Chess 14
: Scrabble 10 (that surprised me a bit)
: Poker 10
: Backgammon 8
: Checkers 8
: Hearts 5
: Blackjack 2
: Craps 0.001
: Lotteries 0.0000001 (that surprised me a bit also - in Germany
: there's a group-lottery company, which is
: fairly successful by chosing those numbers
: which are unlikely to be chosen by the
: majority - like excluding birthdates etc...
: should be higher thus)
: Roulette 0
: -------------------
:
: funk

: ==== end include ====
It seems I have to check my memory sometimes :(

onepointer

Gary Jackoway

Aug 31, 1995
Jared Roach (ro...@u.washington.edu) wrote:
: In article <420i4p$n...@panix2.panix.com>,
: Michael Sullivan <m...@panix.com> wrote:
: >There's a big problem with this method of determining a game's complexity
: >or "skill", however.
: >
: >It's easy to stretch the levels by making longer "games". Use two out of
: >three instead of a single game, and suddenly the stronger player has a
: >much better chance of winning a contest.

: This criticism of the method is invalid. Adding two games
: together produces a different game, as you note by implication. Clearly
: comparing two games of chess against one game of Go is different than
: comparing one game of each. However, this is almost obvious and is
: mundane enough to be nearly irrelevant to the discussion of which
: requires more skill. There is value in this observation, however, in
: that it points out by implication one _reason_ that Go might be more
: complicated than Chess (i.e. it has more moves.) This doesn't change the
: fact that Go is still more complicated, at least by this method of
: analysis.
: Watch that dialectic!

Except there are games which are intended to be played in multiple
rounds. Backgammon, bridge and poker are of this form. Indeed, it
is completely unclear what you define as one "game" for these.
In rubber bridge, for instance, a set of 4-8 hands becomes a "rubber".
Tournament bridge is played with a minimum of 24 hands. International
championships can last for several hundred hands.

In all games, the more times you play, the more likely it is that skill will
defeat luck. While this affects the "skill to luck ratio", it should
not affect the definition of the "complexity" of the game. A definition of
complexity that IS susceptible to repetition is not a good definition
of complexity.

: Jared Roach, 1d AGA

Gary J.

kl2...@student.law.duke.edu

Aug 31, 1995
In article <DE6q7...@unixhub.SLAC.Stanford.EDU> ve...@jupiter.SLAC.Stanford.EDU (Eric N. Vella) writes:


> Referring to the criterion of winning probability, or number of levels
>separated by large winning probability, as a measure of game complexity --


> Sorry, but I have to side with the first point of view. Seems like both
>agree that the "game" newChess, defined as best-of-three games of regular
>Chess, favors the stronger player and results in more levels of separation
>between players. But do you really want to conclude that newChess is a
>different game, requiring more "skill", than regular Chess? That seems
>unreasonable. The two games still require equal skill, since the best move
>in one is still the best move in the other. Yet skill can be measured more
>accurately in the longer game. So I think the original criticism is both
>valid and correct.

>--
>Eric Vella
>ve...@slac.stanford.edu

So why can we add more games of chess and not more games of go? Your logic
would support best two out of three go, which would lead to more accurate go
gradations. We could continue this ad absurdum, extending to an infinite number
of chess games and go games. Will this substantively change the ratio of
go gradations to chess gradations? I don't know, but instinctively I would
guess no. Any mathematicians willing to prove otherwise?
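Ken's instinct can be sanity-checked with a small model. Under the best-of-n view (a sketch, assuming independent games), the per-game edge needed to reach a 75% match win shrinks as n grows, and it shrinks at the same rate for any game, so lengthening both go and chess by the same factor should leave the ratio of gradations roughly unchanged:

```python
from math import comb

def match_win_prob(p, n):
    """Best-of-n win chance for a per-game win probability p (independent games)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

def edge_needed(n, target=0.75):
    """Smallest per-game edge giving a `target` best-of-n win chance (bisection)."""
    lo, hi = 0.5, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if match_win_prob(mid, n) < target:
            lo = mid
        else:
            hi = mid
    return hi

for n in (1, 3, 9, 27):
    print(n, round(edge_needed(n), 3))
# The required edge falls roughly like 1/sqrt(n), for chess and go alike,
# so stretching both games the same way multiplies both level counts by
# a similar factor and leaves their ratio about where it was.
```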

Ken

Eric N. Vella

Aug 31, 1995

Referring to the criterion of winning probability, or number of levels
separated by large winning probability, as a measure of game complexity --

|> Michael Sullivan <m...@panix.com> wrote:

|>> There's a big problem with this method of determining a game's complexity
|>> or "skill", however.
|>>
|>> It's easy to stretch the levels by making longer "games". Use two out of
|>> three instead of a single game, and suddenly the stronger player has a
|>> much better chance of winning a contest.
|>

|> ro...@u.washington.edu (Jared Roach) replied:

|> This criticism of the method is invalid. Adding two games
|> together produces a different game, as you note by implication. Clearly
|> comparing two games of chess against one game of Go is different than
|> comparing one game of each. However, this is almost obvious and is
|> mundane enough to be nearly irrelevant to the discussion of which
|> requires more skill.

Sorry, but I have to side with the first point of view. Seems like both
agree that the "game" newChess, defined as best-of-three games of regular
Chess, favors the stronger player and results in more levels of separation
between players. But do you really want to conclude that newChess is a
different game, requiring more "skill", than regular Chess? That seems
unreasonable. The two games still require equal skill, since the best move
in one is still the best move in the other. Yet skill can be measured more
accurately in the longer game. So I think the original criticism is both
valid and correct.

--
Eric Vella
ve...@slac.stanford.edu

Jared Roach

Aug 31, 1995
In article <420i4p$n...@panix2.panix.com>,

Michael Sullivan <m...@panix.com> wrote:
>There's a big problem with this method of determining a game's complexity
>or "skill", however.
>
>It's easy to stretch the levels by making longer "games". Use two out of
>three instead of a single game, and suddenly the stronger player has a
>much better chance of winning a contest.

This criticism of the method is invalid. Adding two games
together produces a different game, as you note by implication. Clearly
comparing two games of chess against one game of Go is different than
comparing one game of each. However, this is almost obvious and is
mundane enough to be nearly irrelevant to the discussion of which
requires more skill. There is value in this observation, however, in
that it points out by implication one _reason_ that Go might be more
complicated than Chess (i.e. it has more moves). This doesn't change the
fact that Go is still more complicated, at least by this method of
analysis.

Watch that dialectic!

__________________________________________
Jared Roach, 1d AGA
ro...@u.washington.edu
Seattle, Washington
"better to play Go than to do nothing..."


Jared Roach

Sep 1, 1995
Given that we are defining complexity by how many skill levels a
game can be divided into, then one must be very careful to define the
games being compared. And it clearly makes a difference on how long the
game is. Thus with this definition, one game of chess is different than
two, with two games being more complex.
People have argued that this seems an unreasonable definition of
complexity. Why should 500 hands of poker be considered more complex
than one hand of poker?
I empathize with this point. I am uneasy with the definition
of complexity following from how many skill levels the game can be divided
into. I am not sure why, and I remain unconvinced that it is not a
good definition. By defining a skill level as a group of players such
that the strongest player in the group will win a game against the
weakest player 95% of the time, one does seem to have a reasonable
definition. But what makes me uneasy is the dependence of this definition
on the existence of humans. A definition of complexity should be valid
in the void. To illustrate: I am a much weaker player when I play Go on IGS
than when I sit down at a board. Other players' strengths are also skewed
on IGS, often in one direction or another. Such skewing almost certainly
changes the number of skill classes in Go, meaning that Go is either less
or (most likely) more complex when played on IGS than when played face to
face. This last result is the kind of thing that makes me unhappy with
the definition.
Now to get back to the red herring (I believe) that has been the
mainstay of this thread for the last few postings. Although I feel
comfortable calling two games of chess a different game than one game of
chess, and thus more complex, this is more or less logical sophistry. I
wanted to get the discussion on the track that I follow in the previous
and next paragraphs. This is that the definition of complexity used
(number of skill levels) may not be satisfying. However, once one accepts
that definition, then the conclusions based on skill levels are valid, and
games when added are indeed more complex.
But the reason I feel so much that the game addition thing is a
red herring is that IT MAKES NO SENSE TO ADD GAMES IF THEY ARE
INDEPENDENT. At least for this definition of complexity. Since the moves in
one game of chess have no bearing on the next game in a series, there is
a clear logical and precise definition of how long the game of chess is.
Same for Go. A little less so for backgammon. Even less so for poker, but
definable (what is the object of poker? to win a hand? to bankrupt the
opponent(s)? answer this and you can define what makes an independent
game of poker). An example of dependent addition of two games would be
fairy chess.
Thus I feel comfortable claiming Go is more complex than chess,
and perhaps this is because it has more moves, but this is not a trivial
point. Each of the moves in Go is intimately related to all of the
previous moves (at least above the dan level, hopefully :)). This makes
it an extremely complex game. You try to plan 200 moves ahead on a 19x19
board and suddenly you know why professionals spend half an hour thinking
about move number 7 on a Go board. And believe me, if it is game one of
a 7 game series, they are not thinking about how they should play in game 2.
I made one small approximation. Sometimes games in a series do
interact, specifically by players changing strategy or tactics in a game
dependent on their evaluation of their opponent's strengths and
weaknesses in a previous game. Stronger players are more apt to benefit
from this analysis, and thus would gain an even greater advantage from a
series of games than multiplication of probabilities would imply. So I am
ignoring this in claiming that games in a series are independent. I
think this effect is small enough to ignore. I also think that we can
easily agree upon what constitutes a game of Chess or Go for purposes of
this discussion. Gambling games are more difficult, both because of the
dependency of play on the knowledge of the relative states of the bank
accounts, and because winning and losing is no longer only a yes/no
question, but also a "by how much" question. One has to impose a yes/no
conclusion to such games before they can be compared. Thus define the
loser in a two person backgammon match to be the first person to go
broke having started with $1000 apiece. Now we can figure out the number
of skill levels and thus the complexity. If we want to.

Dave Ring

Sep 1, 1995
Jared Roach <ro...@u.washington.edu> wrote:
> Given that we are defining complexity by how many skill levels a
>game can be divided into, then one must be very careful to define the
>games being compared. And it clearly makes a difference on how long the
>game is. Thus with this definition, one game of chess is different than
>two, with two games being more complex.

1. The number of 'levels' of a game has nothing to do with complexity. It
is only a measure of how much luck is involved. (how many levels are there
in most track and field events?)

2. The 'number of games' problem indicates only that this measure of luck
is qualitative, not quantitative. If one wanted a quantitative measurement,
one could demand the same amount of time per player. (ie. however many
games you can play in 24 hours or somesuch) Otherwise, I would
suggest whatever basic unit is the closest to the amount most people play.
In most board games that is one game. In bridge it is a rubber. In backgammon,
it is several games, so one should use a standard number used in
competition (15?). In poker people tend to bring to the table 100-200 times
the smallest bet, so play till you go broke.

3. I don't believe poker has 10 levels. We have all seen a beginner
take all the money from a table full of good players. I would say there are
3 levels maybe 4. I have played with people in Vegas who were _definitely_
better than me and who _might_ be able to beat me 75%. Maybe a top pro
could beat them 75%. I could beat a raw beginner 75%. As evidence, some
pro matches take days because of the number of games required to
distinguish levels.
--
Dave Ring
dwr...@tam2000.tamu.edu


Gary Jackoway

Sep 1, 1995
john allen xd/d (etl...@etlxdmx.ericsson.se) wrote:
: : Michael Sullivan <m...@panix.com> wrote:
: : >It's easy to stretch the levels by making longer "games". Use two out of
: : >three instead of a single game, and suddenly the stronger player has a
: : >much better chance of winning a contest.

: Hi,

: Adding my twopennorth: In my experience a typical tournament game of Chess
: takes about the same time as one of Go, or a rubber of Bridge. Therefore,
: I would contend that they are comparable, irrespective of the number of
: physical moves. Somewhere in the players' heads there are a great many
: other moves going on which relate to the moves on the board n:1.

: Waddya think then?

I think you don't play bridge! A rubber of bridge lasts 10-20 minutes.
A "session" of tournament bridge (24-27 hands) is much closer in length
to a Go/Chess game.

Certainly, I accept that something like "number of skill levels for
eight hours of play" is a better estimator of complexity than some
ill-defined skill level. But I still think that skill levels are a bad measure
of complexity, even if you try to eliminate the multiple-play issue by
adding time limits.

: Cheers /Yogi

Regards,
Gary J.

y...@ssdevo.enet.dec.com

Sep 1, 1995

In article <425l9n$3...@nntp4.u.washington.edu>, ro...@u.washington.edu
(Jared Roach) writes:
|> I empathize with this point. I am uneasy with the definition
|>of complexity following from how many skill levels the game can be divided
|>into.

Like any measurement problem, because the metric is not the same thing as the
property being measured, the best you can hope for is a useful correlation
between the quantities. I know that sounds obvious, but the implication is
that for non-trivial problems it will generally be easy to poke holes in a
given measurement strategy. On the other hand, it is also important to
remember that something is better than nothing, and even if a measurement is
imprecise, it may not be useless. Student performance and software reliability
are classic examples of non-trivial measurement problems.

Please don't take offense at this...it's not intended as a flame.

Hwei

Kevin Cline

Sep 1, 1995
In article <426d4o$e...@erinews.ericsson.se>,

john allen xd/d <etl...@etlxdmx.ericsson.se> wrote:
>Hi,
>
>Adding my twopennorth: In my experience a typical tournament game of Chess
>takes about the same time as one of Go, or a rubber of Bridge.

I don't know about Go, but a tournament chess game is typically concluded
in a couple of hours, and players are often expected to complete 3 games
in a single day. Experts playing a rubber of bridge for money would
probably finish it less than half an hour, and possibly in as little
as five minutes. OTOH, tournament bridge is played in sessions of 24-32
hands, at 3-4 hours per session. So perhaps the complexity of bridge
should be judged at the session level.

--
Kevin Cline


Erik Van Riper

Sep 1, 1995
In article <427fka$c...@sun132.spd.dsccc.com>,

Professionals playing go will often play a game over a period of two days
(sometimes more, I have heard).


--
ge...@imageek.york.cuny.edu http://imageek.york.cuny.edu
Erik Van Riper (718) 262-2667
Systems Administrator Go player Photon Counter
Language design is 10% science and 90% psychology. -- Larry Wall, Perl Author

Karl Juhnke

Sep 1, 1995
Dave Ring (dwr...@tam2000.tamu.edu) wrote:

: 1. The number of 'levels' of a game has nothing to do with complexity. It
: is only a measure of how much luck is involved. (how many levels are there
: in most track and field events?)

I wouldn't put it quite that way. There is no luck per se in chess, only
human variability (and perhaps outside interference during play, but this
is kept to a minimum in tournaments).

I would rather say that this measure of "complexity" gives us sort of a
ratio. We take the range of (best a human can play) to (worst a human
can play), and we want to divide this into levels. To do so, we take
advantage of variability in an individual's play, whether this
variability is caused by dice or lapses of concentration or whatever.
So, roughly speaking, we get the ratio

(best a human can play) - (worst a human can play)
--------------------------------------------------------
(best an individual plays) - (worst an individual plays)


An important point is that how many levels we divide into depends on how
much an individual varies from game to game. If everyone tended to play
exactly the same every game, there would be _more_ levels. To take an
extreme example, I hereby invent the game "being tall". The two players
stand back to back, and whoever is taller wins. I know of someone who
can beat me 75% of the time at being tall, and someone who can beat him
75% of the time, etc. With the accuracy of my tape measure, there are at
least 100 skill levels of being tall, which makes it the most complex
game on the list!

You see the problem, eh?
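Fritz's point, that the level count measures an individual's game-to-game variability as much as the game's depth, can be put in a toy model (entirely hypothetical numbers, not from the thread): let each game's performance be underlying skill plus Gaussian noise, with the higher performance winning.

```python
from math import sqrt
from statistics import NormalDist

def num_levels(skill_range, noise_sd, threshold=0.75):
    """Number of 75%-classes spanning a fixed range of underlying skill,
    when per-game performance is skill + Gaussian noise and the higher
    performance wins: P(A beats B) = Phi((sA - sB) / (noise_sd * sqrt(2)))."""
    gap = noise_sd * sqrt(2) * NormalDist().inv_cdf(threshold)
    return skill_range / gap

# Same spread of true skill, different day-to-day consistency:
print(round(num_levels(100, 10), 1))  # noisy play: ~10 levels
print(round(num_levels(100, 1), 1))   # consistent play: ~105 levels
# As noise_sd -> 0 (the "being tall" game, measured exactly),
# the number of "levels" blows up without the game getting any deeper.
```

The level count scales inversely with the noise, which is exactly the tape-measure problem: perfect consistency manufactures unlimited "levels".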

Peace,
Fritz

john allen xd/d

Sep 1, 1995
: Michael Sullivan <m...@panix.com> wrote:
: >It's easy to stretch the levels by making longer "games". Use two out of
: >three instead of a single game, and suddenly the stronger player has a
: >much better chance of winning a contest.

Hi,

Adding my twopennorth: In my experience a typical tournament game of Chess
takes about the same time as one of Go, or a rubber of Bridge. Therefore,
I would contend that they are comparable, irrespective of the number of
physical moves. Somewhere in the players' heads there are a great many
other moves going on which relate to the moves on the board n:1.

Waddya think then?
Cheers /Yogi

---
Email: etl...@etlxdmx.ericsson.se | John (Yogi) Allen (EGH3)
Vax2memo: etl.e...@memo.ericsson.se | East Grinstead Hash House Harriers
Memo: ETL.ETLJHAN | On On in E. Sussex and Kent (UK)
"I think, therefore I can" - Thomas the Tank Engine

Dave Ring

Sep 2, 1995
Karl Juhnke <fr...@emf.net> wrote:
>: 1. The number of 'levels' of a game has nothing to do with complexity. It
>: is only a measure of how much luck is involved. (how many levels are there
>: in most track and field events?)
>
>I wouldn't put it quite that way. There is no luck per se in chess, only
>human variability (and perhaps outside interference during play, but this
>is kept to a minimum in tournaments).

My point is that human variability is better described as 'luck' than
'complexity'.

>An important point is that how many levels we divide into depends on how
>much an individual varies from game to game. If everyone tended to play
>exactly the same every game, there would be _more_ levels. To take an
>extreme example, I hereby invent the game "being tall". The two players
>stand back to back, and whoever is taller wins. I know of someone who
>can beat me 75% of the time at being tall, and someone who can beat him
>75% of the time, etc. With the accuracy of my tape measure, there are at
>least 100 skill levels of being tall, which makes it the most complex
>game on the list!
>You see the problem, eh?

Yes, but I don't express it nearly as well as you. :-)

--
Dave Ring
dwr...@tam2000.tamu.edu


Jared Roach

Sep 4, 1995
The Human Mind As a Measure of Complexity

There has been an ongoing intriguing thread that has developed
from someone pointing out an article (in Backgammon Today, I think) that
develops a measure of complexity and applies it to categorize strategy
games. The measure of complexity as I understand it is to place all
players of a particular game into skill level groups such that all
players in a group have a reasonable chance of beating all other players
in that group. This chance may be set at some arbitrary value. I don't
recall what that value was, but let us say it was 85%.
So for example, take the best chess player in the world and group
with him all the weaker players who he has an 85% chance or less of
beating. Now take the strongest player not in this group and repeat.
Repeat until you have grouped all the chess players in the world. If you
like, add one more group for everyone else in the world who doesn't know
the rules. Just be consistent. Count the total number of groups you
have. This number is a measure of complexity, at least so the article
claims. Some people have disagreed. Let us consider this point.
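The grouping procedure described here is easy to make concrete. A sketch, with the win probability supplied by an Elo-style logistic curve (an assumption of this illustration, not anything from the article) and a purely illustrative rating spread:

```python
def count_groups(ratings, threshold=0.85):
    """Greedy grouping as described above: start from the strongest player,
    put with them everyone they beat no more than `threshold` of the time,
    then repeat from the strongest player left over."""
    def win_prob(diff):
        # Assumed Elo-style model for the stronger player's win chance.
        return 1 / (1 + 10 ** (-diff / 400))
    remaining = sorted(ratings, reverse=True)
    groups = 0
    while remaining:
        top = remaining[0]
        # Keep only players the top would beat more than `threshold` of the
        # time; everyone else (including the top) joins the top's group.
        remaining = [r for r in remaining if win_prob(top - r) > threshold]
        groups += 1
    return groups

# Illustrative: 41 players spaced 50 points apart over a 2000-point range.
print(count_groups(range(800, 2801, 50)))  # 6
```

Under this model each group spans roughly 300 rating points (the gap at which the win chance crosses 85%), so the group count is just the rating range divided by that gap.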
The first counterclaim is that the measure is no good because it
is not constant under addition of two games. Thus, they say that there
will be more groupings in a game called NewChess which is the winner of
two out of three games of chess played serially. Thus under this measure
NewChess is more complex than chess. This is blatantly counterintuitive
and therefore the complexity measure is no good. However this is a
dialectical red herring because each of the three chess games in NewChess
is independent (moves in one game do not influence moves in another).
Complexity derives from interdependency, at least in part. Therefore it
is reasonable to group players only with respect to the largest subgame
of the game in question such that it is independent of all other
subgames in that game. I think this is fair and obvious. It is very
easy to see how to analyze both chess and Go this way, for example.
Previous posts have addressed the need for precise definitions of games
with subtle dependencies, such as hands of bridge in a rubber, or poker
hands in a battle to bankrupt. But with precise definitions of games,
and this is a reasonable request, the definition of complexity holds
excellently. Thus non-constancy under game addition does not invalidate
the measure of game complexity.
The second counter-claim is that one can define "games" that
produce a number of groupings that appear to be unrelated to complexity.
For example, the game of who is taller is mentioned. Whoever is taller
wins. The number of groups equals the world population, given accurate
measurement. Thus there is no game more complex than this one. The
argument continues that this game is intuitively simple. Thus there is a
contradiction and the measure of complexity is invalid. Track and field
events are also mentioned.
	But again this is a red herring. Almost so obvious I am surprised
that the person who raised this point didn't immediately provide the
refutation so as to save net bandwidth (wouldn't we really clean up a lot
of threads if people would think about their postings first, and refute
their own arguments, particularly if the refutation is trivial?). These
games are not mental games. Clearly we are using the human mind as a
measure of complexity. If not explicitly stated, the implication is
obvious. It is easy to see how to apply the complexity measure to these
games: let the players be managers of these events (i.e. one step
removed). In the HeightGame each player receives a random human to
represent them. Thus there is a 50% chance of winning regardless and
there is only one grouping. There is no game more simple than this. For
track and field events, each player manages a randomly chosen (or chosen
by recruitment with equal budgets, or some similar method) team.
Management skill will apply and there will probably be multiple
groupings, but not likely as many as Go, for example. Thus track is more
complex than the HeightGame but less complex than Go. This meets the
expectation of intuition.
	Note that to counter this claim we exclude from the game any
inequalities not related to the capabilities of the human mind. This
seems reasonable and fair. Imagine a game called HandicapGo. The
shortest player in the world gets to play one extra move any time during
the game. The next taller person gets two moves, and so on up to the
world population. Now this game is almost as complex as the HeightGame!
But this is ludicrous. For one thing, it is not exclusively a skill game
of the mind. Among other things it is a biased game. I think we can
specify that sides must be chosen randomly in a biased game for our
complexity measure to be used, and this I think is reasonable, fair and
obvious. Alternately a balancing handicap must be given (as in Go).
Thus in the HeightGame the shorter player would receive a handicap equal
to the difference in heights, always producing a drawn game and making
the game very simple by this measure.
Note that chance is acceptable in games evaluated by this
measure. This is because it affects both sides equally (on average). It
is also an excellent substrate for the human mind's mental computations,
in some games more so than others. Compare 5-card stud and high-low,
for example. One of these games is intuitively more complex, and is also
more complex by our measure of complexity. So our definition of
complexity seems to be pretty good and to hold up to most criticism so far.
Now why is this so? Why does this definition work? What have
(mental) skill groupings got to do with complexity? How can the human
mind possibly be a tool to measure complexity? Well this is not
surprising. The human mind is a very complex thing. In particular, the
process of education is complex. There are very many gradations in
training the complex human mind to do something complex (like Go or chess
or backgammon). There are fewer gradations in training it to do
something simple (like play high-low).
So imagine that complexity is a height, and that more complex
games are higher. Imagine learning as a climbing process. Now let all
humans start this climb over a spread out period of time, perhaps at
different rates or from different starting points. The difference
between the height of the highest and lowest person is our measure of
complexity. It is not perfect because
a) the highest person might not be at the top. They might not be
anywhere near the top. This might particularly be true for games
recently invented or not heavily played or studied (or even for some that
have). Such games might have inaccurate measures of complexity by this
measure. This is one (for once!) significant problem with the measure.
b) there is variability in the scale depending on when and how
you measure the heights of individuals. Thus there is a slightly
different range of skills when Go is played on IGS than on a board. This
is a point I raised previously. But I don't think that it is very
significant. One can avoid it by defining them as two different games
(IGSGo and BoardGo). As new players arise and old players die and get
sick and get Alzheimer's or whatever, the measure also changes, I guess
mostly due to fluctuations at the top, assuming there are enough players
to populate all the skill levels. But with games like chess and Go,
there seem to be enough players to even these variations out on a
statistical scale.
c) it's not linear. But it does seem to be monotonically
increasing, which is acceptable to me.
d) the human mind is complex enough to have a very wide dynamic
range of complexity measures. But I bet like most instruments there are
limits to its scale. I'll bet you it can't measure the differences in
the complexities of 987x987 and 988x988 Go, for example.

It is my feeling that variability at the bottom is not a
problem. I think we as humans can be proud of the fact that there always
seems to be one of us who provides an example of the worst possible
play. I offer somebody in a coma as an example if I have to, but my play
of Go on IGS last night should suffice for at least one game (<grin>).

The human mind is truly a great thing. It is definitely
complex. I think it is very capable of measuring complexity. Both by
intuition and by this more easily definable complexity measure. I wonder
which is more accurate? For this we need an independent definition of
complexity. Any takers? Maybe try equating it with an entropy-like
quantity.


One other comment. It's tangential so you don't have to read it
(but you may want to skip to the end). The HeightGame may be simple by
intuition and by the properly defined complexity measure, but as a
professional genomicist, let me tell you it ain't simple at all. It may
even be more complex than Go (this I actually doubt). The number of
genes and environmental factors controlling height is large. Counting
environmental factors, way more than 361. Not all are independent, in
that factor A might increase height if factor B is such and thus, but
decrease height if factor B is so and factors C and D are thus. You get
the picture.
Curiously enough, I think we can use our complexity measure to
measure this. Everybody write an essay on how height is determined.
Arrange these essays in groups based on factual content. The number of
groups is a measure of complexity.

And now as promised, a simple refutation. Imagine the PiGame.
The winner is whoever can recite Pi to more significant figures. If
every human did their best to learn and play this game, it would easily
beat out Chess and Go and most other games. Is it really more complex?
Maybe it is, in which case this is not a refutation at all. Depends on
your definition. I however, am uncomfortable calling the PiGame more
complex than Chess or Go. So maybe we should try to remove
"memorization" components from our measure of complexity.
This would be useful but would almost certainly fail.
Everybody's gonna have to forget all their Chess Openings and Joseki and
Poker Probabilities and calculate on the fly for our "cleaner" definition
of complexity. This is unfortunate.
But maybe it's OK. This whole "definition of complexity" thing
is too complex for my puny mind to comprehend. Maybe I need to go back
to my Physical Chemistry textbook and re-read the definition of entropy.
To me mental complexity is: How much do I have to think about
when making a decision? How many factors do I have to take into
account? How hard is it to evaluate them? How easy is it to make a
mistake? How many choices are there? Memorization is not a component,
at least to me, and for this I almost think Chess and Go would benefit
greatly from being "Open Book." Then we can concentrate on the thinking
and decision making and forget about memorization. HehHeh. I'm
obviously starting to ramble and it is my sincere hope that most of you
have stopped reading long before you get here.


Sorry to people I didn't quote directly. I've lost your names. Thanks
for your posts.


---------------------------------------------------
"Better to play Go than to do nothing..."

Jared Roach
MD/PhD Candidate
Department of Molecular Biotechnology
University of Washington
AGA 1d
ro...@u.washington.edu

Robert Elton Maas

Sep 4, 1995
<<Seems like both agree that the "game" newChess, defined as
best-of-three games of regular Chess, favors the stronger player and
results in more levels of separation between players. But do you
really want to conclude that newChess is a different game, requiring
more "skill", than regular Chess? That seems unreasonable. The two
games still require equal skill, since the best move in one is still
the best move in the other. Yet skill can be measured more accurately
in the longer game.>>

Well said. I'm convinced that measuring the number of levels (as
defined by stronger player winning 90% or some other fixed portion of
games against one-level-lower opponent) is NOT a reasonable way to
define how "hard" (needs skill) a game is, since you can adjust the
number of games in a best-of-n newGame to achieve any large number of
levels you want without changing the true hardness of the game.

I'd like to propose an alternate method of measuring, which for a given
level of technology is well-defined (playing best-of-n and calling that
"one newGame" doesn't change the measured result): Consider all
computer programs that play the game and beat the best human player in
the world if running on the fastest computer in the world. (For Go on
full board, this is currently the empty set.) Now consider running it
on slower computers, or timeshared on the fast computer, so it doesn't
cost as much computer time to play the game but still can match the
best player. Pick that particular combination of software and hardware
and timesharing slice that minimizes total cost (rental or
depreciation, plus consumable resources, whatever) while exactly
matching the best human player in the world. That game is then rated by
how much that computer configuration costs to play for a fixed time.
For example, an hour of Checkers (Draughts) might cost only ten cents,
while an hour of Chess might cost five dollars, in each case being an
exact match for the best player of that game in the world. Any game for
which there isn't yet any program able to match the best human player,
would be rated "infinitely" expensive, hence require "more skill" than
any of the games that require only finite computer cost to match the
best human.

Because computers get much better the more time they have, whereas
humans don't get better as rapidly once they have enough time to do
their usual thinking, I suspect that for any game that involves
intuition the computer-cost of a game with short time controls will be
greater than a game with leisure time allowed (except when the latter
cost is already infinite). For example, playing Go on 9*9 board with
several months per move, maybe there's a computer program right now
that can match any human (hence finite cost), whereas with short time
controls I'm sure there isn't yet (hence infinite cost). Thus by my
proposed measure, a rapid game would require more "skill" than a
leisurely game.

But even with that variation in a single game played under different
time control, I believe that the variation between different games will
overwhelm the variation caused by time control, except for absurdly
fast or leisurely games. Thus I believe for games lasting anywhere from
a half hour to two days, Go (on medium to full-size boards) will
measure monotonically "harder" than Chess which in turn will measure
monotonically "harder" than Checkers. As for Backgammon, below Checkers
I'd guess. As for 2-person Risk, I don't have any guess where it'd fit.
As for Poker, I think I'll fold. :-)

john allen xd/d

Sep 4, 1995
In article m...@hpavla.lf.hp.com, ga...@lf.hp.com (Gary Jackoway) writes:
>
>I think you don't play bridge! A rubber of bridge lasts 10-20 minutes.
>A "session" of tournament bridge (24-27 hands) is much closer in length
>to a Go/Chess game.
>
>Regards,
>Gary J.

Well Gary,

You are partly right - I don't play rubber bridge very often, but I play
a good deal of duplicate... I would challenge you to finish a rubber of
bridge in 10 minutes! Six minutes per hand is good going at duplicate,
allowing about 24 hands in an evening. This compares with two friendly
games of Go, or one tournament game.

When not playing bridge at a club, we usually play Chicago (changing
partners) which is completed in 12 hands and generally takes about 1.5
hours. When we do play rubber bridge it often takes longer - the minimum
is two hands, but there is no maximum...

As for chess, I used to play in the N. Herts League with 1.25 hours each,
and then another 15 minutes each - max time three hours, but unless there
was a resignation, games were seldom finished. That was something I didn't
like about serious chess.

So the upshot is that a serious game of Chess, or of Go take about the
same time, and a rubber of Bridge is comparable, but you might get two in
the same time, on average...

Cheers /Yogi

Email: etl...@etlxdmx.ericsson.se | John (Yogi) Allen

East Grinstead Hash House Harriers | On On in Sussex and Kent (UK)
Brighton GO Club | British GO Association (2D)
Folk Singer (vox unpopuli) | Guitar, Melodeon, Traditional


Michael Greenwald

Sep 4, 1995
r...@BTR.Com (Robert Elton Maas) writes:

> Consider all
>computer programs that play the game and beat the best human player in

>the world if running on the fastest computer in the world. ...

Won't this be very sensitive to the tail in the distribution? Assume
that the skills of human players are normally distributed; then the
skill of the "best human player" at any point in time could vary
wildly. (Look at the distribution of the maximum element of a sample of
size N drawn from a normal distribution.)

I don't think your figure will be very stable (unless it is averaged
over a long time) since at any given moment you might have a very
strong "best" chess player, but a (relatively) weak "best" go player.

Of course, I'm probably just being crotchety, since I'm not convinced
that this is an interesting question, in any case. It seems to be
phrased in two different ways: a way to measure luck vs. skill and a
way to measure complexity or difficulty.

As a measure of complexity, it seems useless, since both Go and Chess
are hard enough so that the best human player is in no danger of
playing a perfect game, or "solving" Chess or Go. So we're limited,
at the top, by human ability --- which should be the same for Chess
and Go. If these abilities are different, then they're probably not
comparable, and this method is useless.

As a measure of luck vs. skill, we're not controlling for variation in
quality of play in a single player between games. Unless this is what
you mean by "luck".

David Montgomery

Sep 4, 1995
The original article was Bill Robertie's reply to a letter in
the January-February 1992 issue of Inside Backgammon. Robertie
does call his list of numbers "Complexity Numbers" but I prefer
to think of the numbers as providing a measure of the skill
that is attainable by humans in playing the game. As such it
is a mixture of information about the nature of the game and
the nature of humans.

As a measure of skill, I think it is a very natural measure.
What I generally mean by skill in a game is something that
gives me an edge over some other player without that skill.
That is, it's something that will allow me to win more often
than if I didn't have that skill. So it's very natural that
an overall measure of the skill in the game (when played by
humans) reflect the number of divisions that can be made
among humans in how likely they are to beat each other (or
beat the house in the case of games like Blackjack).

In Robertie's original article, he attempts to normalize by
selecting "an appropriate contest ... lasting in the four to
five hour range." In effect, he is looking at skill per
unit of time. I think this is the best way to look at things.
It makes tremendous sense for games like blackjack and poker
where there is no natural division into games (except ludicrously
short games). And I think it makes sense for games like
chess and go too.

David Montgomery
monty on FIBS


Kit Woolsey

Sep 5, 1995
I don't think that looking at computer programs which play the games is
going to produce decent results regarding complexity of games. There are
features about various games which make them relatively easy or difficult
to program a computer to play well. For example, computers are currently
very competitive at the top level in chess and backgammon (although the
backgammon improvement came about largely due to using a different
approach to create the program than had previously been tried).
However, for bridge no program has yet been written which plays remotely
close to master strength, despite efforts of many talented programmers.
Presumably some feature of bridge, probably the unknown status of the
opponents' hands, makes it difficult to write a good program. Does this
mean that bridge is more complex than backgammon or chess? I don't think
so -- merely that it is a different type of game which isn't currently as
easy to program a computer to play well.

Kit

Dave Ring

Sep 5, 1995
Jared Roach <ro...@u.washington.edu> wrote:
> The second counter-claim is that one can define "games" that
>produce a number of groupings that appear to be unrelated to complexity.
> But again this is a red herring. Almost so obvious I am surprised
>that the person who raised this point didn't immediately provide the
>refutation so as to save net bandwidth (wouldn't we really clean up a lot
>of threads if people would think about their postings first, and refute
>their own arguments, particularly if the refutation is trivial?).

Well, I raised it and I find that comment kind of insulting.

> These
>games are not mental games. Clearly we are using the human mind as a
>measure of complexity.

Here is my statement:


>>1. The number of 'levels' of a game has nothing to do with complexity. It
>>is only a measure of how much luck is involved. (how many levels are there
>>in most track and field events?)

Yes, my example, along with roulette, is not a mental game, but it is
easy to find examples of mental games along the same lines.

And true to his suggestion, Jared finds his own refutation himself:


> And now as promised, a simple refutation. Imagine the PiGame.
>The winner is whoever can recite Pi to more significant figures. If
>every human did their best to learn and play this game, it would easily
>beat out Chess and Go and most other games. Is it really more complex?

Of course not. This does not suggest a weakness in our method of
measurement, but in our use of the term 'complexity'. I have suggested
using 'luck' instead, but that is not quite right either. I think
'skillfulness' is just what we are looking for.

(-: It just occurred to me that roulette is a mental game with 2 levels,
those who bet their money, and those who don't :-)
--
Dave Ring
dwr...@tam2000.tamu.edu


Darse Billings

Sep 5, 1995
kwoo...@netcom.com (Kit Woolsey) writes:

One reason bridge programs are weak in comparison to humans is that it is
very difficult to program the bidding phase. Bridge bidding is really an
exercise in natural language processing, with it's own vocabulary and
grammar. Languages are relatively easy for humans to learn, but represent
huge challenges in the computer science domain.

This illustrates a key point of the original topic -- that the relative
measure of playing levels describes the nature of the players as much as
the game. While the idea of using a strong program as some kind of grand
standard isn't really viable, we *can* use the collection of all computer
programs as a separate population of imperfect game players, and compare
the results of the statistical analysis.

Many computer chess programs have been written over the years, and the
range of skill levels is comparable to the number of levels for human
players (albeit with a different distribution). Not as many Go programs
have been written, but it is clear that the range of playing levels is
much smaller than that of humans. The strongest Go programs play only at
the level of a weak human amateur, and it is very unlikely that a computer
will play on par with strong human players any time in the foreseeable
future. Similarly, there isn't much variation in the strength of bridge
programs, since they are all fairly weak by human standards.

This implies that Go and bridge are "less complex" games among a population
of computer players than humans, which doesn't seem right. Conversely,
many games are played better by computers than humans, but that should be
neither here nor there. To achieve a better absolute scale, we need some
way of estimating the strength of a perfect player and, at the other end,
the worst possible player (perhaps one who chooses moves at random). This
might be easy for some games like chess or Othello (using the top human and
computer standards, respectively), but may be very difficult for a game
like no-limit Hold'em poker (where there may still be a lot of room for
improvement, even among the best players in the world).

Mark A Biggar

Sep 5, 1995
In article <42eg2j$8...@nntp4.u.washington.edu> ro...@u.washington.edu (Jared Roach) writes:
> One other comment. It's tangential so you don't have to read it
>(but you may want to skip to the end). The HeightGame may be simple by
>intuition and by the properly defined complexity measure, but as a
>professional genomicist, let me tell you it ain't simple at all. It may
>even be more complex than GO (this I actually doubt). The number of
>genes and environmental factors controlling height is large. Counting
>environmental factors, way more than 361. Not all are independent, in
>that factor A might increase height if factor B is such and thus, but
>decrease height if factor B is so and factors C and D are thus. You get
>the picture.
> Curiously enough, I think we can use our complexity measure to
>measure this. Everybody write an essay on how height is determined.
>Arrange these essays in groups based on factual content. The number of
>groups is a measure of complexity.

Not to mention that a given person's height changes over the course of a day.
Most people are about 1/2 inch shorter when they go to bed than when they
got up that morning. A similarly arranged Weight game has the same problem
with wider variations.

--
Mark Biggar
m...@wdl.loral.com


Jan Ritsema van Eck

Sep 5, 1995
In article <42eg2j$8...@nntp4.u.washington.edu> ro...@u.washington.edu (Jared Roach) writes:

> The first counterclaim is that the measure is no good because it
>is not constant under addition of two games. Thus, they say that there
>will be more groupings in a game called NewChess which is the winner of
>two out of three games of chess played serially. Thus under this measure
>NewChess is more complex than chess. This is blatantly counterintuitive
>and therefore the complexity measure is no good. However this is a
>dialectical red herring because each of the three chess games in NewChess
>is independent (moves in one game do not influence moves in another).

I think it's time to point out that they are _not_ independent, and to
propose another way to mend the complexity measure to cater for NewChess
etc. In the second and third game, the result to aim at is a function of the
results of the previous games. For instance, in the first game, both players
may (or may not) be happy with a draw; but if the first game isn't a draw,
then in the second game the winner of game 1 probably _will_ be happy with a
draw, whereas the loser of game 1 will try for a win. Even if we exclude the
possibility of a draw (for instance play go with a x.5 komi) we might see
that the winner of game 1 gives herself a break in game 2 ("see if I can win
easily, otherwise save my energy for the deciding game 3").

This effect is also important in those forms of bridge where the games are
not explicitly related in the way they are in rubber bridge. For instance in
duplicate or teams events, the end score of a pair (or team) is simply the
sum of the results of all the games they played, so on first thought, the
games seem to be independent. Yet every bridge player knows they aren't. In
the second half of the session, players know (or have estimated) their
results in the first half. Every once in a while, a game comes up where they
can choose: either play safe (go for an average result or for a result that
is probably close to the opponents' result) or play 'swingy' (try for a
top score, risking a zero). The ability to recognize these games and
correctly choose between the safe and swingy lines of play, depending on
the risks involved _and_ the previous results, is an (important?) part of
bridge skill.

So I think yes, NewChess _is_ more complex than Chess, as it requires the
ability to adapt one's playing style to the previous results to play
safely, cautiously or aggressively (or even carelessly) as the situation
demands.

The problem is that for our measure of complexity this effect must be
separated from the purely statistical effect of the law of large numbers.
And the law of large numbers also works within single games: in a game of
many moves, both players have had more opportunities to make a mistake than
in a game of just a few moves. But in the long game, the better player
will more probably have made fewer mistakes than his opponent. Therefore, I
would expect that in 9x9 go, the better player loses more often than in
19x19 go so in 9x9 go, the number of levels is less than in 19x19 go; not
only because 19x19 go is more complex (of course it is) but also simply
because the game consists of more moves.

So for a fair comparison between different games like backgammon and go, I
suppose you would have to divide the number of skill-levels by the square
root of the average number of moves in the game. (Square root because the
accuracy of any statistical estimate is proportional to the square root of
the number of observations). In other words: I would expect the number of
skill-levels in best-of-3 chess to be between 1.4 (sqr2) and 1.7 (sqr3) as
large as that in normal chess, so for comparison I would divide the number
of skill levels in best-of-3-chess by, let's say, 1.55. If the result is
still larger than the number of levels in normal chess, I think that's proof
that best-of-3 really _is_ a more complex game.
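The 1.4-1.7 estimate can be sanity-checked for best-of-3 chess. A minimal sketch, assuming an Elo-style logistic curve is a fair way to turn win probabilities back into a skill gap (that conversion is my assumption, not part of the post):

```python
import math

def best_of_3(p):
    """Win probability in a best-of-3 match, given single-game win prob p."""
    # Win the first two games, or split them (either order) and win the third.
    return p * p + 2 * p * (1 - p) * p

def elo_gap(p):
    """Skill gap that the logistic Elo curve assigns to win probability p."""
    return 400.0 * math.log10(p / (1.0 - p))

p = 0.75                            # one skill level, as in the original article
amplification = elo_gap(best_of_3(p)) / elo_gap(p)
# best_of_3(0.75) is 0.84375, and the amplification comes out near 1.53,
# comfortably inside the predicted sqrt(2)..sqrt(3) = 1.41..1.73 range.
```

So at least for this one data point, the square-root heuristic holds up.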

Anyone have a good idea of the average number of moves in each game in the
list?

jan r.

Karl Juhnke

Sep 5, 1995
Jared Roach (ro...@u.washington.edu) wrote:
: The second counter-claim is that one can define "games" that
: produce a number of groupings that appear to be unrelated to complexity.
: For example, the game of who is taller is mentioned. Whoever is taller
: wins.
: But again this is a red herring. Almost so obvious I am surprised
: that the person who raised this point didn't immediately provide the
: refutation so as to save net bandwidth (wouldn't we really clean up a lot
: of threads if people would think about their postings first, and refute
: their own arguments, particularly if the refutation is trivial?).

Hee hee. Save bandwidth? Your posts are the longest, wordiest ones in
the thread, Jared. ;) Not that they aren't interesting.

And, by the way, you provide the refutation later in your own posting.
By your own logic, you should have scrapped both sections of your article
in order to save bandwidth. It is a little disturbing to be put down by
somebody who also contradicts himself. I know my logic ain't perfect,
but...

: And now as promised, a simple refutation. Imagine the PiGame.

: The winner is whoever can recite Pi to more significant figures. If
: every human did their best to learn and play this game, it would easily
: beat out Chess and Go and most other games. Is it really more complex?

No, reciting Pi is not as complex as Chess or Go, thus providing a neat
refutation to your previous "refutation".

The critical point, which I mentioned earlier, is the amount of
variation in an individual's performance. Presumably once you memorize
300 digits of Pi, you can recite it to 300 digits every time. Almost no
variation makes for a subdivision into many many classes. With Chess or
Go, individuals vary greatly in their performance from game to game,
which _reduces_ the number of levels according to the given measure of
complexity. That's why I find the definition counter-intuitive. More
individual variability = less complexity, while less individual
variability = more complexity.
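Fritz's point is easy to demonstrate with a toy simulation. This is a sketch: the model (per-game performance = fixed skill plus Gaussian noise) and every number in it are assumptions chosen for illustration.

```python
import random

def win_rate(skill_gap, noise_sd, trials=200_000, seed=1):
    """Fraction of games won by a player whose skill exceeds the
    opponent's by skill_gap, when each game's performance is
    skill plus Gaussian per-game noise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a = skill_gap + rng.gauss(0, noise_sd)   # stronger player's form today
        b = rng.gauss(0, noise_sd)               # weaker player's form today
        wins += a > b
    return wins / trials

# Same skill gap, different game-to-game variability:
steady   = win_rate(skill_gap=1.0, noise_sd=0.5)   # ~0.92: a decisive edge
volatile = win_rate(skill_gap=1.0, noise_sd=3.0)   # ~0.59: drifting to a coin flip
```

The same underlying skill gap yields a near-certain win when performance is steady, but barely better than even money when it is volatile; so, exactly as argued above, more individual variability means fewer 75%-separated levels fit into the same range of skill.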

Peace,
Fritz

Gary Jackoway

Sep 5, 1995
Karl Juhnke (fr...@emf.net) wrote:
: Jared Roach (ro...@u.washington.edu) wrote:

: : And now as promised, a simple refutation. Imagine the PiGame.

: : The winner is whoever can recite Pi to more significant figures. If
: : every human did their best to learn and play this game, it would easily
: : beat out Chess and Go and most other games. Is it really more complex?

: No, reciting Pi is not as complex as Chess or Go, thus providing a neat

: refutation to your previous "refutation".

: The critical point, which I mentioned earilier, is the amount of
: variation in an individual's performance. Presumably once you memorize
: 300 digits of Pi, you can recite it to 300 digits every time. Almost no
: variation makes for a subdivision into many many classes. With Chess or
: Go, individuals vary greatly in their performance from game to game,
: which _reduces_ the number of levels according to the given measure of
: complexity. That's why I find the definition counter-intuitive. More
: individual variability = less complexity, while less individual
: variability = more complexity.

Okay, consider this: the Sum Game.
The players start with a six digit number chosen randomly.
Player 1 has to double the number (in his/her head) and name the
digits of the new number. Player 2 has to double that number, etc.
First player to fail loses.
Indeed, this is very similar to the game "Simon" which was done
with ever increasing sequences of tones.
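A referee for the Sum Game as described takes only a few lines. A sketch, with a made-up player-as-function interface:

```python
import random

def play_sum_game(player1, player2, digits=6, seed=None):
    """Referee for the Sum Game. Each player is a function that takes the
    current number and must return its double; the first player to answer
    wrongly loses. Returns the winner (1 or 2). Note that two perfect
    players would never terminate."""
    rng = random.Random(seed)
    number = rng.randrange(10 ** (digits - 1), 10 ** digits)
    players = [player1, player2]
    turn = 0
    while True:
        answer = players[turn](number)
        if answer != number * 2:
            return 2 - turn   # the other player wins
        number = answer
        turn = 1 - turn
```

For example, play_sum_game(lambda n: 2 * n, lambda n: 2 * n + 1, seed=0) returns 1, since the second player fumbles on its first try.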

These games would certainly have a reasonable number of skill levels
among human players. But neither game is "complex" in any sense
of complex that works for me.

: Peace,
: Fritz

Regards,
Gary J.

Darse Billings

Sep 5, 1995
mo...@cs.umd.edu (David Montgomery) writes:

>short games). And I think it makes sense for games like
>chess and go too.

Thanks David, this clarifies things a lot.

This scale is a measure of *human variability*, more than a measure of
game complexity. Normalizing the length of the games to a four or five
hour session does make quite a bit of sense, given this fact.

Game complexity can be measured in various ways, such as space complexity
(the number of legal positions, say), decision complexity (the number of
reasonable candidate moves in each position, to achieve a certain level of
play), or computational complexity (a measure of the number of "computer
instructions" required to make strategic decisions). The suggested scale
of playing levels is *not* consistent with these notions of complexity,
since most games have only one such level, corresponding to perfect play.
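As a toy illustration of the space-complexity measure (my example, not from
the post), the number of legal positions in a trivial game can be counted
exactly by searching from the initial position:

```python
def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_positions():
    """Count all positions reachable from the empty tic-tac-toe board."""
    seen = {' ' * 9}
    frontier = [' ' * 9]
    while frontier:
        board = frontier.pop()
        if winner(board):
            continue  # game over; no further moves from here
        mover = 'X' if board.count('X') == board.count('O') else 'O'
        for i, cell in enumerate(board):
            if cell == ' ':
                child = board[:i] + mover + board[i + 1:]
                if child not in seen:
                    seen.add(child)
                    frontier.append(child)
    return len(seen)

print(count_positions())  # 5478 legal positions
```

By this yardstick tic-tac-toe is tiny; chess and 19x19 Go are estimated at
very roughly 10^44 and 10^170 legal positions, far beyond exhaustive
counting.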

There are still some problems with the relative skill level approach, of
course. Games like chess and Go have many levels because of the many
degrees of imperfection exhibited by human players, with "luck" playing
only a minor role in the outcome.

Backgammon, on the other hand, is a non-deterministic game, which means
the results of a five hour session are heavily influenced by the roll of
the dice. This has the effect of diluting the measure of natural human
variability, since a strong player has no defence against a bad roll at a
critical moment, and a weak player deserves no credit for getting good
dice. Duplicate bridge reduces the element of luck, but it does not seem
reasonable to call it a much more complex game than rubber bridge, merely
because there is less variance due to the random component of the game.
Ideally, we want an accurate measure of positive expectation accrued over
the five hour session, where the luck element is balanced.

The ranking of poker is also open to much criticism. It is important to
define both the variation in question and the betting structure. For
example, Texas Hold'em and 7-card Stud are much more strategically complex
games than 5-card Draw or 5-card Stud, and therefore have many more skill
levels among human players.

With regard to format, "big bet" poker (no-limit or pot-limit) is a very
different game from limit poker (where the bets are usually small,
relative to the pot size). No-limit poker places a greater emphasis on
such things as careful judgement and knowledge of the opponent, whereas
limit poker requires different skills, such as discipline and precise
knowledge of draw odds. A good no-limit player is *much* more likely to
win a large amount of money from a weaker player in a five hour session
than is a good limit player. But there is also more variance in the limit
game (since more draws are playable, and more hands go to the showdown).
As such, I am unwilling to concede that no-limit is a *much* more skillful
game, regardless of the typical expectations.

Incidentally, multiple hands of poker are *not* independent, unlike
distinct games of chess or Go. Five hours may or may not be enough time
to learn (and exploit) an opponent's style. So the time factor chosen is
important, and must be taken into account by any system aimed at measuring
the range of imperfect players.

Karl Juhnke

Sep 5, 1995
Darse Billings (da...@cs.ualberta.ca) wrote:
: mo...@cs.umd.edu (David Montgomery) writes:
: >In Robertie's original article, he attempts to normalize by
: >selecting "an appropriate contest ... lasting in the four to
: >five hour range." In effect, he is looking at skill per
: >unit of time. I think this is the best way to look at things.
: >It makes tremendous sense for games like blackjack and poker
: >where there is no natural division into games (except ludicrously
: >short games). And I think it makes sense for games like
: >chess and go too.
:
: Thanks David, this clarifies things a lot.

: This scale is a measure of *human variability*, more than a measure of
: game complexity. Normalizing the length of the games to a four or five
: hour session does make quite a bit of sense, given this fact.

Yes, normalizing by time is a very sensible thing to do. I, too, am
happy for this clarification.

Let me add that for all I and others have criticized Robertie's method,
it must have considerable merit to have provoked such sustained
discussion. The level of this thread has been very high (for a news
group :) and thought-provoking in many ways. Thanks to Robertie for
giving us the fat to chew on!

Unfortunately, I'm afraid that normalizing by time doesn't make all of
the criticisms go away. I know that for chess, skill differences are roughly
the same at different time controls. For example, if Schiller beats Bhat
two out of three at 5-hour games of chess, he will probably win two out
of three at 1-hour games, i.e. action chess.

So when we compare the complexity of 1-hour chess to 5-hour chess, we get
a counter-intuitive result. 1-hour chess is more complex, because to
normalize for time, you have to play best of five, thus skewing the
result in favor of the stronger player.
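The arithmetic behind this skew is easy to check (my sketch; it assumes
independent games and no draws):

```python
from math import comb

def best_of(n, p):
    """Probability that a player who wins each game with probability p
    wins a majority of n independent games (equivalently, a best-of-n
    match with no draws)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(need, n + 1))

p = 2 / 3                       # the same per-game edge at either time control
print(round(best_of(5, p), 3))  # 0.79: best-of-five amplifies the edge
```

So the same two players look nearly a full class further apart when the
session is spent on five fast games instead of one slow one, which is the
counter-intuitive result above.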

Are we willing to say that a faster game of chess is a more complex one?
If not, we also have to throw out the normalization of time, and fall back
on other definitions of "one game" which are vulnerable to other criticisms.

Peace,
Fritz

Malcolm Cleaton

Sep 5, 1995
In article <42fm9q$c...@twix.cs.umd.edu>,
mo...@cs.umd.edu "David Montgomery" writes:
> In Robertie's original article, he attempts to normalize by
> selecting "an appropriate contest ... lasting in the four to
> five hour range." In effect, he is looking at skill per
> unit of time. I think this is the best way to look at things.
> It makes tremendous sense for games like blackjack and poker
> where there is no natural division into games (except ludicrously
> short games). And I think it makes sense for games like
> chess and go too.

Here's a question. If I spend my 4-5 hours playing just one game of go, do
I get the same complexity number for this contest as if I spend the time
playing best three out of five faster games?

--
Cookie

sem...@ibm.net

Sep 6, 1995
In <kwoolsey...@netcom.com>, kwoo...@netcom.com (Kit Woolsey) writes:
>I don't think that looking at computer programs which play the games is
>going to produce decent results regarding complexity of games. There are
>features about various games which make them relatively easy or difficult
>to program a computer to play well. For example, computers are currently
>very competitive at the top level in chess and backgammon (although the
>backgammon improvement came about largely due to using a different
>approach to create the program than had previously been tried).
>However, for bridge no program has yet been written which plays remotely
>close to master strength, despite efforts of many talented programmers.
>Presumably some feature of bridge, probably the unknown status of the
>opponents' hands, makes it difficult to write a good program. Does this
>mean that bridge is more complex than backgammon or chess? I don't think
>so -- merely that it is a different type of game which isn't currently as
>easy to program a computer to play well.
>
> Kit

After working on a go-playing program for some years, I fully agree with
Kit: some things are easier for a computer to do, so if those things are
what matter in a game, a computer will play it well (Othello or mancala);
other things are easier for humans (pattern recognition). A game that is
difficult for a computer is not the same as a game that is difficult for
a human being.

About the luck factor ... I wonder why it is always the same people who
have it! Maybe it isn't luck.

Joan

Radford Neal

Sep 7, 1995
In article <1995Sep7.0...@kestrel.edu>,
Dick King <ki...@reasoning.com> wrote:

>In poker, unlike bridge, bluff is an issue. Correct me if i'm wrong,
>but i understand that in contract bridge it's unethical to lie in your
>bidding or play even if you don't mind deceiving your partner as well.

No, you are allowed to do this. But you're supposed to inform your
opponents before the game if you do this sort of thing often (your
partner presumably already knows :-)

Radford Neal

Dick King

Sep 7, 1995
kwoo...@netcom.com (Kit Woolsey) wrote:

>I don't think that looking at computer programs which play the games is
>going to produce decent results regarding complexity of games.

> ...


>However, for bridge no program has yet been written which plays remotely
>close to master strength, despite efforts of many talented programmers.
>Presumably some feature of bridge, probably the unknown status of the
>opponents' hands, makes it difficult to write a good program.

Interesting observation.

Does anyone out there know if there is a decent computer poker
program?

In poker, unlike bridge, bluff is an issue. Correct me if i'm wrong,
but i understand that in contract bridge it's unethical to lie in your
bidding or play even if you don't mind deceiving your partner as well.

-dk


Gary Jackoway

Sep 7, 1995
Dick King (ki...@reasoning.com) wrote:

: In poker, unlike bridge, bluff is an issue. Correct me if i'm wrong,
: but i understand that in contract bridge it's unethical to lie in your
: bidding or play even if you don't mind deceiving your partner as well.

Okay, you're wrong!
In bridge it is acceptable to lie and bluff AS LONG AS your partner
does not know you are doing it. In bridge, there are "psyches" (bidding
as if you have a strong hand when you have a weak hand, for instance),
and "false-cards" (playing a card which would usually show one sort of
holding in a suit when you have something different). These are legal,
to a point. The key problem is that partner, who plays with you frequently,
is much more likely to anticipate such unusual calls and plays. If partner
is "catching" your antics, then you are being unethical.

: -dk

Hope that clarifies.

Gary J.

Pit

Sep 7, 1995
Which is greater: luck or skill ?

In most books about game theory, there is a theorem like
"In a deterministic game where only one side can win,
it is clear from the start which side it is".

This means that if you take the winning side against a perfect player,
you will win some portion of the games. That portion I would call SKILL.

In a non-deterministic game, I would call LUCK the portion of the
games a perfect player wins against a perfect player from the
less advantageous position.
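The theorem Pit paraphrases (essentially Zermelo's) can be demonstrated on
a toy deterministic game by backward induction; this sketch uses a simple
subtraction game, not any game from the thread:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(pile):
    """Subtraction game: remove 1-3 stones per turn; taking the last
    stone wins. Backward induction labels every position, so the
    winning side is determined by the starting position alone."""
    return any(not first_player_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

# Losing starting positions for the first player: the multiples of 4.
print([n for n in range(1, 13) if not first_player_wins(n)])  # [4, 8, 12]
```

Under perfect play the side holding the winning position wins every game,
so SKILL as defined here measures only how often an imperfect opponent of
the perfect player stumbles into the losing line.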

----------------------------------------------------------------------

Of course, SKILL will be a ludicrously small number.
Perhaps we'd better take log(SKILL) as the measure ;-)
I think my definition of SKILL is perfect, but nobody will
want to apply it, because it usually is such a small number.

Pit
---------------------------------------------------------------------
I noticed this newsreader replies to the crossposted groups
as well. I left rec.games.go in because of the go quote I worked
into the text :-)
---------------------------------------------------------------------


y...@ssdevo.enet.dec.com

Sep 8, 1995

In article <95Sep7.121...@neuron.ai.toronto.edu>, rad...@cs.toronto.edu (Radford Neal) writes:
|>In article <1995Sep7.0...@kestrel.edu>,
|>Dick King <ki...@reasoning.com> wrote:
|>
|>>In poker, unlike bridge, bluff is an issue. Correct me if i'm wrong [snip]

Anyone out there enjoy pure bluffing games? There are several different games
that are collectively called I Doubt It and Bulls**t. There's one where you
can put down sets of cards in ascending or descending sequence after the lead:
this one has too much luck for my taste. The other one has very little luck
(you often know exactly what's in everyone else's hand) -- you can play or pass,
and you MUST follow with the same pip value as the lead (none of that ascending
or descending stuff). I really like that one because it's pure bluffing,
anyone who's not predictable can play, and it's hilarious. It's one of those
rare games that's pure "strategy" but not reserved for intellectuals.

Michael Sullivan

Sep 9, 1995
In article <1995Sep7.0...@kestrel.edu>,
Dick King <ki...@reasoning.com> wrote:

>In poker, unlike bridge, bluff is an issue. Correct me if i'm wrong,
>but i understand that in contract bridge it's unethical to lie in your
>bidding or play even if you don't mind deceiving your partner as well.

There is some bluff to bridge. Falsecarding as declarer is fairly common
in certain situations. It's less common on defense because most of the
time it's more important for your partner to know the truth about your
hand than for declarer to be fooled.

There's also some bluff to bidding. Competitive bidding gets fierce,
particularly at master level. When someone preempts in a high-level
game, there's often no telling whether they have what they are "supposed
to" or not. Again, because bidding is an inexact natural language, there
are special contextual clues, and there's a lot of off-the-cuff guessing
about what to do with non-textbook hands.

Different players who play the same system may bid some hands a number of
different ways, all consistent enough with the system that directors need
not be called. Depends on style and temperament. The stronger the
players, the more true this is. Weak players just screw up -- strong
players come up with a good excuse for bidding wrong :)


Michael
