
Why chess is easy to program, Go hard (long)


George Shrier

Sep 23, 1994, 2:51:11 PM

1) The State of the Art

Chess computers have already reached grandmaster strength, whereas
Go programs are still quite weak. It is usually felt that this proves chess is
primarily a fluid tactical game of calculation after all, whereas Go is a more
strategic and structural game requiring the elusive qualities of judgement,
intuition etc. that are too vague to be programmed. In fact, this
may well be so, but the explanation has no real value; there are games
that are more tactical than chess but not so easy to program effectively,
as will be seen.

At the dawn of the AI era almost 40 years ago, chess was thought
to be conceptual in nature. There were claims that any program that
played well "could be said to have penetrated to the core of human intellectual
endeavor". The reasoning was that the branching factor was so large that
straight minimax search could be dismissed as hopeless a priori for any
imaginable computer speed. Hence there could be no alternative to an
elaborately organized system that knew patterns, evaluated positions
at a high level of abstraction, constructed plans based on those results,
and generally mimicked the complexities of human reasoning.

High-IQ chess programs have not progressed far. Instead, the brute
force approach prevailed, adding small, clever tricks to raw brutality. Alpha-beta
and hash tables made the largest contributions. Various extension heuristics and, in
some cases, mild forward pruning techniques were incorporated, without
disturbing the "completeness" property of full-width search. Last but not
least, the speed of hardware has improved by many orders of magnitude.
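
To make the brute-force recipe concrete, a minimal alpha-beta searcher fits in
a dozen lines of Python. This is only a sketch, not any particular program:
evaluate(), legal_moves() and make() are assumed hooks a game engine would supply.

    # Minimal negamax search with alpha-beta cutoffs (sketch only).
    # evaluate(pos), legal_moves(pos), make(pos, m) are assumed, hypothetical hooks.
    def alphabeta(pos, depth, alpha, beta):
        if depth == 0:
            return evaluate(pos)                 # static evaluation at the leaves
        best = -float('inf')
        for m in legal_moves(pos):
            score = -alphabeta(make(pos, m), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break                            # cutoff: the opponent avoids this line
        return best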

Interestingly, evaluation functions have not developed to a
commensurate degree. This is a significant observation.


2) What is Strategy?

Beginners who play chess find frequent frustration. They know that
the queen is valuable, but when they sneak up to attack it, it moves away.
Wherever they look, the enemy is defended. The objective is to capture the
enemy King, but they see no means to advance toward the goal.

In truth, long term strategy can only be based on features that are
invariant, or nearly so. Material is such a feature. When someone is up a pawn,
he tends to remain a pawn ahead, other things being equal. However, material
is a coarse quantity and begs the question of how one PLANS to win material.
Certainly, the location of the enemy queen is too ephemeral to be the basis
of strategy.

The secret lies in a peculiarity of chess rules. Pawns capture
differently than they move, contrary to many chess variants. Therefore, as
enemy pawns approach each other, they will frequently lock in chains, immobilizing
each other. This imparts a fixed, structural aspect to chess positions. Backward
and isolated pawns are bad because there is no way for them to change status
by ordinary means, only by heroic feats of accidental tactics. Because they
can't move, they constitute fixed targets, so it is valid strategy to "pile up"
on them. The configuration of White pawns on e4, f5, g4, and h2 versus Black
pawns on e5, f6, g5, and h6 favors White, ignoring other factors. That is
because both before and after White gets around to advancing his pawn to h4,
Black has no nonmagical way to improve his structure, h5 probably being
unplayable. After White plays h2-h4, Black suffers from an isolated pawn
on an open file if he captures. If he refuses, White has the OPTION of
holding or capturing himself, in his own good time. This itself constitutes
possessing the initiative. A valid strategy would be to mass rooks on the h file
before capturing, and/or place a Knight on h5. It is safe to refer to this
as strategy, because the same plan holds valid semipermanently,
whatever distractions are in progress elsewhere. Fixed pawn structures also
affect mobility of pieces, hence bishops can be "bad" on a permanent basis,
knights may have "outpost" squares begging for their occupation, etc.

Occasionally, direct King hunts are an appropriate strategy, but
definitely not always. In crowded middlegame positions, the general area
that the King calls home is constrained severely, because it is unsafe to
wander out in the open. In this way, the target can be considered fixed.
If the pawn structure is such that the attacker also has a permanent space
advantage on that side, it may be feasible to contemplate massing pieces for
a direct assault.

It is not hard to construct positions that require theorem proving
to understand, and that will cause the typical strong chess program great
heartburn. Such positions now seem exceptional. It suffices to measure a number
of (somewhat) invariant and independent properties and just add them up. Some
even vaguer tactical measures of "activity" are often added to the mix.

The details don't matter greatly, and no program attempts to be
comprehensive. What the programs do is sift through the great mass of
meaningless moves and select those that appear to collect the handful of
(semi)permanent goodies they happen to know about. It works.
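
Concretely, the evaluator in such a program amounts to little more than a weighted
sum of semi-permanent features. The sketch below is purely illustrative; the
weights and the count_feature() detector are invented, not taken from any real program.

    # Toy chess evaluation: just add up (semi)permanent features.
    # count_feature() and the weights are invented for illustration.
    WEIGHTS = {
        'material':        1.00,   # pawn units, from the side to move's point of view
        'isolated_pawns': -0.30,   # fixed targets that cannot repair themselves
        'backward_pawns': -0.25,
        'rook_open_file':  0.40,
        'knight_outpost':  0.35,
    }

    def evaluate(pos):
        score = 0.0
        for feature, weight in WEIGHTS.items():
            score += weight * count_feature(pos, feature)   # assumed detector
        return score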

3) Other Games

The situation is clearer if we look at other unsolved games. The
table below is a subjective estimate, on a 1-4 scale, of how much effort was
put into other games, and how successful the results were in comparison to
human play.

Game            Effort   Success
---------------------------------
Chess              4        3
Go                 3        1
Othello            3        4
Checkers           2        4
Int'l Checkers     3        2
Kalah              1        4
Chinese Chess      2        1
Shogi              1        1   (from the land of consumer electronics!)


The game of Othello is particularly illuminating. The object of the
game is to end up with the most disks of your color. However, an evaluation
function that simply counts disks performs poorly. The problem is that when
a disk flips to your color, that is no achievement, since it can easily flip
again. However, a corner square once gained can never be lost. When that
happens, the squares two away begin to loom as permanent prizes also. The
invariants in Othello sometimes really are invariant in an absolute sense.
The evaluations are hence less subject to error, and the programs are very
strong indeed.
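
A toy version of such an evaluator weights squares by how permanent their
ownership tends to be. The weights below are made up for illustration; real
programs tune them and also track disk stability explicitly.

    # Toy Othello evaluation: weight squares by permanence instead of counting disks.
    # The weights are invented; corners dominate because they can never be flipped.
    CORNER, EDGE, X_SQUARE, INTERIOR = 25, 5, -10, 1

    def square_weight(r, c):
        if (r, c) in ((0, 0), (0, 7), (7, 0), (7, 7)):
            return CORNER
        if (r, c) in ((1, 1), (1, 6), (6, 1), (6, 6)):
            return X_SQUARE        # diagonal neighbours of a corner: risky to occupy early
        if r in (0, 7) or c in (0, 7):
            return EDGE
        return INTERIOR

    def evaluate(board, me, opp):   # board[r][c] holds me, opp, or None
        score = 0
        for r in range(8):
            for c in range(8):
                if board[r][c] == me:
                    score += square_weight(r, c)
                elif board[r][c] == opp:
                    score -= square_weight(r, c)
        return score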

In checkers, material advantage suggests victory more surely than in
chess. In actuality, strong programs consider a couple dozen other terms also.
For instance, control of the 14, 15, 18, and 19 squares is important and often
persistent. The greater reliability of the functions and their greater
coverage of what is important lead to greater success than in chess.

In chess, the features covered during evaluation are less complete,
and less reliable, in that they can not infrequently be overturned by tactical
noise. What the success of computer chess has shown is that the loss is
manageable, that sound, greedy play on average compensates for occasional
misunderstandings of deep positions. Of course, the amount of careful
engineering effort expended over the years has been substantial.

International Checkers presents a good contrast to the version
played in English-speaking countries. Material advantage is still important,
but not as insurmountable as in the 8x8 game. The powers of the King are so
vast that heavy sacrifices may be justified in order to get one early. This
is situation dependent, so humans need to exercise judgement. Also, it
may happen that 3 Kings vs 1 is a draw, so naive bean counting doesn't
capture the essence of the position. In consequence, successful programming
requires devoting relatively more effort to the design of evaluation functions
than to deep search. Comments from the better informed would be welcome.

Kalah is an extreme example of a purely tactical game. I suspect
that there ARE no long term mystical structural features that people can latch
on to. It is all calculation, and it is even unclear if one can improve on the
trivial evaluation function of counting the number of stones in each kalah.
Featureless games show the computer in a good light, but bore humans.
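
For what it's worth, that trivial evaluation function really is a one-liner;
all the work is in the search.

    # The "trivial" Kalah evaluation: nothing but the difference between the stores.
    def evaluate(my_kalah, opponent_kalah):
        return my_kalah - opponent_kalah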

In contrast, the last two games above prove that tactical games do not
imply easy programmability. In Chinese Chess, there are only 5 out of 9
files occupied by pawns. When they meet, one is annihilated, so the
positions are always "open" in the chess sense. The board is bigger
than in western chess, but the pieces are weaker. The King position is more
fixed, though. My impression is that strategy consists of keeping track
of tempo counts to wheel pieces from one theater to another, one theater
always being the King's palace. A stronger player would have to comment.
In Shogi, pieces may be resurrected, so material advantage is less important
than in chess but more so than in Bughouse. I do not know what human strategy
consists of here either, although it is manifestly present. I am assuming
the two Oriental games are more tactical than western chess, yet less
programmable.

All things considered, Go seems nearly as well suited as Othello for
tracking progress, at least in later stages of play. The score is merely
the sum of the situations on each front.

4) Tentative Conclusion

There are games that are highly tactical and suitable for machines
only, yet there are other games that are tactical and played best by people.
Chess is quite strategic, though with a high tactical component, yet amenable
to computer play. Go is more strategic still, yet far more refractory
to mechanization.

I think the tactics vs strategy dichotomy is simply irrelevant
to determining programmability. A slightly better determinant is to
estimate how easily the human-relevant invariants can be formalized. It may
seem that Go is hard because there is no quick-and-dirty way to measure
"shape" or "thickness", but then, comparably murky notions in chess either
proved unnecessary, or had adequate crude substitutes.

The distinctive feature of game programming for games other than Go,
is the encapsulation of evaluation knowledge in a simple function evaluated
at endpoints of the search. (There is some trivial knowledge in the search
too, such as "look at captures first", and "quiesce on captures". Adding much
more than this has proven even more difficult than finding ways to usefully
add knowledge to evaluation functions.) This separation of evaluation from
search is of course a very handy thing for developmental and theoretical
purposes. However, in Go, the security of a group and its territory may depend
on the outcome of a hypothetical ladder fight seen to end in victory for one
side. If it so happens that the points crucial to the fight in other areas
intersect those in the implicit ladder, the status of the first group must be
reassessed. Here, local search is PART of evaluation. I believe THIS is the key
peculiarity of Go.
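
To put that claim in code form: a Go evaluator would itself have to launch small
searches. The sketch below is purely hypothetical -- groups(), ladder_relevant(),
read_ladder() and territory() are all assumed helpers, and a real program would need
far more -- but it shows where the local search sits: inside evaluation, not around it.

    # Sketch: in Go, local search is PART of the evaluation function.
    # groups(), ladder_relevant(), read_ladder(), territory() are assumed helpers.
    def evaluate(position, me, opp):
        score = 0
        for colour, sign in ((me, +1), (opp, -1)):
            for group in groups(position, colour):
                if ladder_relevant(position, group):
                    # The group's life hinges on a ladder: settle it by a small
                    # local lookahead, not by a static formula.
                    alive = read_ladder(position, group)
                else:
                    alive = True
                if alive:
                    score += sign * (group.size + territory(position, group))
                else:
                    score -= sign * group.size   # dead stones count for the other side
        return score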

In all likelihood, the search and evaluation processes in strong Go
programs must be more tightly coupled. This does not necessarily imply
construction of high IQ planning modules of the sort that failed in chess.
The singular extensions and null move heuristics in chess point to a better way.
These are "weak heuristics" that dynamically (i.e. during search) determine
importance or unimportance of candidate moves without the need for incorporating
deep domain knowledge. It was a mistake in computer chess history to regard
forward pruning as an absolute. Some moves are profitably searched deeper,
and some are less worthy, but that is strictly a matter of degree. The
completeness of full width search is valuable, too valuable to give up.
Go programs will have to employ tricks like these, and more aggressively
than any chess programs have done to date.
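
For the curious, the null-move heuristic mentioned above is itself only a few lines
grafted onto a plain alpha-beta searcher. Sketch only: R is the customary depth
reduction, and in_check(), pass_move(), legal_moves(), make() and evaluate() are
assumed hooks rather than anything from a real engine.

    # Null-move heuristic sketched on top of plain alpha-beta (illustrative only).
    R = 2                                   # customary depth reduction

    def search(pos, depth, alpha, beta, allow_null=True):
        if depth == 0:
            return evaluate(pos)
        # Give the opponent a free move; if we still stand at or above beta,
        # this line is almost certainly good enough, so prune it cheaply.
        if allow_null and depth > R and not in_check(pos):
            score = -search(pass_move(pos), depth - 1 - R, -beta, -beta + 1, False)
            if score >= beta:
                return beta
        best = -float('inf')
        for m in legal_moves(pos):
            score = -search(make(pos, m), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break
        return best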

Peter Mckenzie

Sep 25, 1994, 4:55:42 AM

Thanks to George Shrier for an interesting article.
I do however have a couple of bones to pick:

> Interestingly, evaluation functions have not developed to a
>commensurate degree. This is a significant observation.

Perhaps though, they have developed more than is generally appreciated?
It seems likely to me that the evaluation functions in the best PC
programs are quite advanced compared to say, Chess 4.5.


> The situation is clearer if we look at other unsolved games. The
>table below is a subjective estimate, on a 1-4 scale, of how much effort was
>put into other games, and how successful the results were in comparison to
>human play.
>
>Game            Effort   Success
>---------------------------------
>Chess              4        3
>Go                 3        1
>Othello            3        4
>Checkers           2        4
>Int'l Checkers     3        2
>Kalah              1        4
>Chinese Chess      2        1
>Shogi              1        1   (from the land of consumer electronics!)

Perhaps you should have explained more about your estimate of effort?

While your rankings may be OK, perhaps the scale is a little off (or
at least non-linear). It would seem to me that the efforts that have
been put into computer chess probably exceed the combined efforts of
all the other games, perhaps by a factor of 10 or more. This isn't
reflected in your table.

cheers,
Peter

Roy Langston

Sep 25, 1994, 2:49:31 PM
This was an interesting and well-written (i.e., literate) post. However...

George Shrier (shr...@protocol.zycad.com) writes:

> 3) Other Games


>
> The table below is a subjective estimate, on a 1-4 scale, of how
> much effort was put into other games, and how successful the results were
> in comparison to human play.
>
> Game            Effort   Success
> ---------------------------------
> Chess              4        3
> Go                 3        1
> Othello            3        4
> Checkers           2        4
> Int'l Checkers     3        2
> Kalah              1        4
> Chinese Chess      2        1
> Shogi              1        1   (from the land of consumer electronics!)


My own subjective estimate is that checkers programming has had about the
same amount of effort devoted to it as go programming (though much of it
was early in AI history, when both hardware and software were very inferior
to today's products). Also, it is likely that more go programming effort
has been wasted, because it was often the work of lone mavericks who
reinvented each other's wheels quite a lot. Chess and checker programs
have more often been the work of teams of professionals who were able to
use a lot of ideas developed in previous efforts by others.

Also, I believe that shogi and Chinese chess programs are significantly
stronger (level 2, say) than go programs.


> I am assuming <shogi and Chinese chess> are more tactical than western
> chess, yet less programmable.


Dubious on both counts. Chinese chess is very similar to western chess (a
strong player of either game will be able to pick up the other quite
quickly), and shogi seems to have comparable strategic depth. They may
_seem_ less programmable than chess because the evaluation analysis for
chess had reached a much higher level of formal rigor before the
programming efforts began.

> All things considered, Go seems nearly as well suited as Othello
> for tracking progress, at least in later stages of play. The score is
> merely the sum of the situations on each front.


Unfortunately, it doesn't do much good to be able to finally calculate with
great precision, at move 100, that you are indeed now hopelessly behind. :-)


> 4) Tentative Conclusion
> <Interesting stuff deleted...>


> However, in Go, the security of a group and its territory may depend
> on the outcome of a hypothetical ladder fight seen to end in victory for
> one side. If it so happens that the points crucial to the fight in other
> areas intersect those in the implicit ladder, the status of the first
> group must be reassessed. Here, local search is PART of evaluation. I
> believe THIS is the key peculiarity of Go.


It probably is not. Far more important, IMHO, is the need to:
a) identify the strategic functions of the stones on the board, which no
program can yet do reliably, but even weak human players do pretty well and
strong players much better still,
b) identify the main strategic values currently at stake in the game, which
programs seem to be a little better at, but still nowhere near the level of
a human player, and
c) formulate and carry out a plan to achieve an identified strategic goal,
even if it turns out later not to have been the most important one. Again,
programs are abysmal at this, but human players do it pretty well.

One reason go programs do so badly is precisely their over-reliance on
evaluation functions. When they think they are getting the short end of
the stick, they try to change goals when it is already too late, and end up
getting nothing. Go programmers are all too familiar with the bizarre
tenukis their babies come up with when the evaluation function goes
ballistic.

Go programming is harder than chess programming primarily because human go
strength is based on mental skills that are more difficult for computers to
simulate than the mental skills human chess strength is based on.

In particular, even fairly weak human go players use a lot of pattern
recognition, which requires the ability to distinguish the essential from
the incidental, something computers are very weak at; human players can
think of stones in terms of their function; and human players learn from
their mistakes, which requires the ability to set intermediate goals, and
reject moves that have failed to achieve those goals under similar
circumstances in the past.

In chess, once you are out of the opening book, pattern recognition is much
less important, until you get to the endgame (unless you are talking about
very strong human players). There are some amazingly strong and very
compact programs that seem to consist of little more than a good evaluation
function and aggressive move-tree pruning. That's not going to cut it
in go. :-)

-- Roy Langston

Torben AEgidius Mogensen

Sep 26, 1994, 6:55:55 AM
shr...@protocol.zycad.com (George Shrier) writes:

> The game of Othello is particularly illuminating. The object of the
>game is to end up with the most disks of your color. However, an evaluation
>function that simply counts disks performs poorly. The problem is that when
>a disk flips to your color, that is no achievement, since it can easily flip
>again. However, a corner square once gained can never be lost. When that
>happens, the squares two away begin to loom as permanent prizes also. The
>invariants in Othello sometimes really are invariant in an absolute sense.
>The evaluations are hence less subject to error, and the programs are very
>strong indeed.

There are other factors that make Othello suitable for computers:

1) The endgame will often decide games, and is very difficult
to track for humans, because of the large changes each move
can cause in the position. However, a computer can easily
evaluate the last N (typically 8-12) ply _perfectly_, due
to the guaranteed decrease of the branching factor (a sketch
of such an exhaustive solver follows after this list).

2) The branching factor (number of possible moves) is
typically less than in e.g. chess.

3) The positions on the edges are important to the game, but
the number of possible configurations is small enough to
tabulate, which makes it possible to make a reasonable
board evaluation very quickly. (Obviously, edge positions
aren't everything, but they are important).
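
Here is a sketch of the exhaustive solver referred to in point 1 above.
legal_moves(), make(), opponent() and disc_difference() are assumed helpers;
the point is only that the recursion runs to the true end of the game, so the
returned margin is exact rather than an estimate.

    # Exact Othello endgame solve of the last few empty squares (sketch only).
    # legal_moves(), make(), opponent(), disc_difference() are assumed helpers.
    def solve_exact(pos, side):
        moves = legal_moves(pos, side)
        if not moves:
            if not legal_moves(pos, opponent(side)):
                return disc_difference(pos, side)       # game over: true final margin
            return -solve_exact(pos, opponent(side))    # forced pass
        best = -64                                      # worst possible disc margin
        for m in moves:
            best = max(best, -solve_exact(make(pos, m, side), opponent(side)))
        return best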

Torben Mogensen (tor...@diku.dk)

David Mechner

Sep 26, 1994, 1:51:49 PM
In article <54...@mindlink.bc.ca>, Roy_La...@mindlink.bc.ca
(Roy Langston) writes:

> Go programming is harder than chess programming primarily because human
> go strength is based on mental skills that are more difficult for
> computers to simulate than the mental skills human chess strength is
> based on.

I disagree. It seems to me that humans use quite the same set of mental
skills in go and chess -- it's not as if _humans_ do brute force search! In
both games, players use "pattern knowledge" and experience to select one or
two promising moves, hypothesize playing each in turn, and quickly abandon
lines as soon as one looks acceptable or bad (or things get too complicated).

I think perhaps you're conflating what humans do when they play with what
programmers have done (in the case of chess) to avoid having to do what
humans do.

As far as programming go goes, patterns and strategic concepts (like the
positional functions of stones) are good for finding reasonable moves or
important areas, but they won't help you tell which of a half-dozen
reasonable-looking possibilities will turn out best (or work at all for
their intended purpose). For that even humans need search. And search
doesn't work without the ability to evaluate the positions you reach. And
in go, you can't evaluate a position accurately without knowing the life and
death status of the groups on the board. In other words, I think G. Shrier
is on the mark.


-David Mechner

Roy Langston

Sep 27, 1994, 2:46:08 PM
In article <3671nl$k...@thecourier.cims.nyu.edu>, dm0...@sparky.cs.nyu.edu


This (and Shrier's) analysis does not adequately explain why go programs
are so weak. Programs can do dan-level tsume-go with ease. They can read
ladders, too. What is so hard about getting them to integrate the two
functions?

My comparison of the human skills used in chess and go was based mainly on
my own experiences as a player of the two games. One reason I prefer go to
chess is that I find chess requires much more tactical reading. In go, I
can usually just play a move that looks good, and be confident that even if
it isn't the best move, it's probably good enough for now. In chess, one
tactical oversight and you're toast.

Also, I believe a key difference between the human skills needed for the
two games is revealed by simultaneous play. An experienced go player can
play virtually without any reading at all, and give away only a few stones
of his strength. Chess grandmasters, playing strong club players in simuls
with _no handicaps_ must still do substantial reading, or face the
probability of a loss.

Watch the two kinds of simul carefully. Even though the go pro is giving
handicaps, he can usually answer much faster than the chess grandmaster.
Just count the number of moves the experts play in a given period of time:
the difference is very considerable. IMHO, that is a vital clue to the
difference between the two games.

I am willing to give pretty long odds that if a go programmer does manage
to implement an integrated life and death evaluation function to address
Shrier's hypothesis, it won't make a big difference to the program's
strength: certainly not more than a few stones' worth. Real progress
awaits a much more significant conceptual breakthrough.

-- Roy

AC Missias

Sep 27, 1994, 4:05:12 PM
This isn't directly addressing the subject line, but I'm wondering if
any of the programmers among you have any experience with or
impressions of attempts at writing bridge programs. While there is no
visual pattern recognition component to a card game, bridge, like go,
has a huge and rapidly-expanding "move tree" (during the bidding
phase), and a high degree of subtle strategy (during the play,
especially for the defense) which is hard for a program to emulate. I
think that a lot of effort has been put into writing good programs
(especially as you have to find *3* other players to play live), but I
have never seen a game which is a decent substitute for human players.
Any thoughts?

-ACM

Bill Fischofer

Sep 28, 1994, 5:50:23 PM
I believe that go is harder to program than chess for three reasons:

1. The mathematical state space of go is vastly larger than chess. As has
been noted, chess programs have taken brute force technique far beyond
the wildest imaginings of the early chess programmers. Because such
techniques have been successful, there has been little incentive to
try and develop chess programs that attempt to mimic the thought
processes of human grandmasters (or even to probe what those thought
processes might really be). But the "gap" between chess and go is
so large (at least 100 orders of magnitude) that no amount of cleverness
or improvement in hardware speeds seems likely to yield similar
successes in the realm of go. If strong go programs arise, it will be
because they manage to play go in ways recognizably more similar to
the way human masters play than is the case for chess programs.

2. Go is far more symmetric than chess. The different pieces and their
moves actually make tactical analysis easier in chess than in go.
In go the value of a stone is a function of its relationship to other
stones rather than to anything intrinsic. While sacrifice technique
can violate the normal material relationship among pieces in chess,
it is almost always the case that a queen is worth more than a knight.
Moreover, in chess significant sacrifices are almost invariably coupled
to a mating attack of some sort which will decide the game fairly
quickly. In go, the amorphous nature of stones means that sacrifices
tend to have more abstract strategic purposes even when they seemingly
involve large amounts of "material".

3. Go evaluation functions must be global to be meaningful whereas chess
does quite well with local evaluation functions. This means that not
only does a go program need to consider many more positions than a
chess program, but it needs to spend much more time considering each
position if it is to do anything meaningful with it. This only compounds
the problem of trying to duplicate the "brute force" success of
chess programs in go.

On this subject, David Ehrbach's article on computers and go in the Go
Player's Almanac (Ishi) is recommended for many other insights into this
subject. I feel confident in predicting that the Ing prize will remain
unclaimed during the lifetime of anyone reading this post. Programming go
is a very hard problem.

--
Bill Fischofer Internet: w...@bocaraton.ibm.com

T. M. Cuffel

Sep 28, 1994, 9:45:53 PM
In article <Cwv0o...@austin.ibm.com>,

Bill Fischofer <w...@calrissian.bocaraton.ibm.com> wrote:
>I believe that go is harder to program than chess for three reasons:
>
>1. The mathematical state space of go is vastly larger than chess.

So? Scrabble has a 15x15 board and 27 different playing pieces. Quite
possibly it has a state space bigger than chess or go. But I imagine it
would be almost trivial to write a Scrabble program that could beat
any human.

> If strong go programs arise, it will be
> because they manage to play go in ways recognizably more similar to
> the way human masters play than is the case for chess programs.

This is exactly the same thing they were saying about chess computers
twenty years ago.

Computer chess is a bunch of trickery. One trick is simply
representing a chess game using a computer. Another is simulating
thinking with a minimax seach. New tricks, like alpha-beta and
hash tables, were discovered that greatly enchance the old tricks.

These chess tricks have little to do with how good players think. Why
should the undiscovered go tricks be any different?

>2. Go is far more symmetric than chess. The different pieces and their
> moves actually makes tactical analysis easier in chess than in go.

Nonsense. Go may very well be more difficult to tactically analyze
than chess, but this has nothing to do with symmetry or piece
variety. In general, more piece variety means more complex
tactics. Take away a few flavors of chess piece, and you have a
tactically simpler game. Or add a new kind of go piece, and
your tactics will probably go through the roof.

> In go the value of a stone is a function of its relationship to other
> stones rather than to anything intrinsic.

Context is important in just about every game.

>3. Go evaluation functions must be global to be meaninful whereas chess
> does quite well with local evaluation functions.

I have no idea what this is supposed to mean. Most every chess evaluation
function I have seen looks at the whole position. Sounds global to me.

> This means that not
> only does a go program need to consider many more positions than a
> chess program, but it needs to spend much more time considering each
> position if it is to do anything meaningful with it.

So far, using techniques developed for chess, a computer has to look at
more positions longer in go. But this probably has something to do
with the fact that they are DIFFERENT GAMES. A car works well on land,
but takes a serious performance hit crossing an ocean. Does this
mean boats are harder to build?

David Mechner

Sep 28, 1994, 5:19:33 PM
>> I disagree. It seems to me that humans use quite the same set of mental
>> skills in go and chess
>> [. . . rest of my post omitted ]

> This (and Shrier's) analysis does not adequately explain why go
> programs are so weak. Programs can do dan-level tsume-go with ease.

Hardly. I know of dedicated tsume-go programs that can do "dan-level
tsume-go", so long as the problem is simple and clear cut, i.e. the
locality of the problem is clearly defined, and the group surrounding
the group whose status is in question is totally impervious to attack.
This rarely happens in a real game. Defining the locality of a
tactical situation in a real game is a terribly difficult problem,
because as a fight progresses, it can grow and spill over into other
areas. But without a clearly defined locality, the brute-force
methods used by life & death programs are impracticable. Current
go-playing programs are pathetically weak at life & death.


> My comparison of the human skills used in chess and go was based
> mainly on my own experiences as a player of the two games. One
> reason I prefer go to chess is that I find chess requires much more
> tactical reading. In go, I can usually just play a move that looks
> good, and be confident that even if it isn't the best move, it's
> probably good enough for now. In chess, one tactical oversight and
> you're toast.

I don't know your strength in either game, but this comment suggests
to me that you are stronger at chess than at go. In both games, if
you make any serious tactical _or_ strategic error against a strong
player, you are toast, while if you make such an error playing a weak
player, you will have chances to recoup. If go seems more forgiving
tactically, it may be because you are playing weaker opponents, or
because go is a longer game (and thus each game naturally gives you
more chances to recoup).

> Also, I believe a key difference between the human skills needed for
> the two games is revealed by simultaneous play. An experienced go
> player can play virtually without any reading at all, and give away
> only a few stones of his strength. Chess grandmasters, playing
> strong club players in simuls with _no handicaps_ must still do
> substantial reading, or face the probability of a loss.
>
> Watch the two kinds of simul carefully. Even though the go pro is
> giving handicaps, he can usually answer much faster than the chess
> grandmaster. Just count the number of moves the experts play in a
> given period of time: the difference is very considerable. IMHO,
> that is a vital clue to the difference between the two games.

I haven't noticed this difference myself, and I've seen both sorts of
simuls. Both professional go players and grandmaster level chess
players can read very far, very accurately, very fast. I've seen go
professionals read 20 move deep life & death problems in 3 seconds
(literally).

And for a counter-argument: blitz chess can be played just as fast as (or
faster than) speed go. And in either game a master playing with 2
seconds/move can destroy a weaker player with 2 hours to think.

> I am willing to give pretty long odds that if a go programmer does
> manage to implement an integrated life and death evaluation function
> to address Shrier's hypothesis, it won't make a big difference to
> the program's strength: certainly not more than a few stones' worth.
> Real progress awaits a much more significant conceptual
> breakthrough.

I guess we'll see.

-David Mechner
mec...@ultra.nyu.edu

Steve Thomas

Sep 29, 1994, 4:26:55 AM
In article <36d68i$g...@csnews.cs.Colorado.EDU>, cuf...@alumni.cs.colorado.edu (T. M. Cuffel) writes:
|> In article <Cwv0o...@austin.ibm.com>,
|> Bill Fischofer <w...@calrissian.bocaraton.ibm.com> wrote:
|> >I believe that go is harder to program than chess for three reasons:
|> >
|> >1. The mathematical state space of go is vastly larger than chess.
|>
|> So? Scrabble has a 15x15 board and 27 different playing pieces. Quite
|> possibly it has a state space bigger than chess or go. But I imagine it
|> would be almost trivial to write a Scrabble program that could beat
|> any human.
A Scrabble-playing program which simply plays the highest-scoring
move each turn will be beaten by a few hundred human players. The
current best programs are about as good as the best humans, but I
don't think this counts as "almost trivial".
--
Steve Thomas, Specialix Research Limited.
Suite 2, Cobb House, 2 Oyster Lane, Byfleet, Surrey, KT14 7DU, U.K.
Phone +44 1932 352251 Fax +44 1932 352349 Internet sth...@specialix.co.uk

sl...@cc.usu.edu

Sep 29, 1994, 5:27:41 AM
In article <Cwv0o...@austin.ibm.com>, w...@calrissian.bocaraton.ibm.com (Bill Fischofer) writes:
> I believe that go is harder to program than chess for three reasons:
>
> 1. The mathematical state space of go is vastly larger than chess. As has
> been noted, chess programs have taken brute force technique far beyond
> the wildest imaginings of the early chess programmers. Because such
> techniques have been successful, there has been little incentive to
> try and develop chess programs that attempt to mimic the thought
> processes of human grandmasters (or even to probe what those thought
> processes might really be). But the "gap" between chess and go is
> so large (at least 100 orders of magnitude) that no amount of cleverness
> or improvement in hardware speeds seems likely to yield similar
> successes in the realm of go. If strong go programs arise, it will be
> because they manage to play go in ways recognizably more similar to
> the way human masters play than is the case for chess programs.
>

I will have to disagree with this. The search space of go might be 100 orders
of magnitude larger than that of chess, but the chess search space is itself
far too large for pure "brute force" search. The binary distinction between
"brute force" and "intelligent" evaluation functions is quite artificial. Highly
ranked chess programs use pretty sophisticated and non-local evaluation functions.
The program that made news recently by beating top players like Kasparov was running
on an ordinary Pentium PC; no special purpose hardware was used.

> 2. Go is far more symmetric than chess. The different pieces and their
> moves actually makes tactical analysis easier in chess than in go.
> In go the value of a stone is a function of its relationship to other
> stones rather than to anything intrinsic. While sacrifice technique
> can violate the normal material relationship among pieces in chess,
> it is almost always the case that a queen is worth more than a knight.
> Moreover, in chess significant sacrifices are almost invariably coupled
> to a mating attack of some sort which will decide the game fairly
> quickly. In go, the amorphous nature of stones means that sacrifices
> tend to have more abstract strategic purposes even when they seemingly
> involve large amounts of "material".
>

I would guess that this could be the main reason for the difficulty in making
smart evaluation functions for go.

> 3. Go evaluation functions must be global to be meaninful whereas chess
> does quite well with local evaluation functions. This means that not
> only does a go program need to consider many more positions than a
> chess program, but it needs to spend much more time considering each
> position if it is to do anything meaningful with it. This only compounds
> the problem of trying to duplicate the "brute force" success of
> chess programs in go.
>

Relative to go, chess may get away with "local" evaluation functions, but
they are not actually so local.

> On this subject, David Ehrbach's article on computers and go in the Go
> Player's Almanac (Ishi) is recommended for many other insights into this
> subject. I feel confident in predicting that the Ing prize will remain
> unclaimed during the lifetime of anyone reading this post. Programming go
> is a very hard problem.
>

A minor contributing reason for the lack of success in go could be the amount
of effort spent. Mainly due to the popularity of chess, the notion that
"proficiency in chess == intelligence" became deeply rooted in our psyche.
Once chess is "solved", our definition of intelligence will change and more
effort will be spent on games like go. AI has rightly been defined as "whatever
hasn't been done yet". Who knows in how many years people will say something like
"The method X has allowed us to do what was beyond the wildest imaginings of the
early go programmers"?

Rutvik

--
Rutvik Desai
Utah State University
rut...@sys3.cs.usu.edu

Herbert Enderton

Sep 28, 1994, 11:53:22 PM
Why is go harder to program than chess?

The usual answers I see are:
(1) The branching factor is significantly larger
(2) The state space as a whole is much larger
(3) The program must do deeper look-ahead in go (because humans do)
(4) There seems to be no good cheap evaluation function for go

I think these answers are essentially bogus. E.g. the large state
space makes the game harder for everyone, humans included. When
one says go is hard to program that means relative to human play.

Here's what I think the real answer is:

There are several non-trivial analysis techniques which work well
in Go, and which either don't apply or work less well in chess.
Humans use these analysis techniques when playing Go, and it buys
them a LOT in comparison to brute-force search. I have in mind
things like examining a cutting point to see if it is dangerous
(i.e. if White cut there, Black could capture the cutting stone),
and then using that knowledge to reason that some black stones
are tactically connectable (no matter where White plays), and
use that to reason that Black's invasion is viable. This all
requires look-ahead search, but it's not simple brute-force search.
Brute-force search could conceivably arrive at the same answer
(e.g. Black should invade here), but that would require MUCH more
search.

In some chess positions there are similar reasoning techniques
which apply. E.g. A black bishop on a2 might be trapped behind
the white pawns on b3 and c2, and will eventually be captured
even though Black can make irrelevant threats elsewhere on the
board to delay the capture. But chess pieces attack distant
squares, so it is rare that a tactical fact like this can be
analysed independently. I.e., often one of those seemingly
irrelevant threats saves the bishop. So while fancy goal-directed
local search can help in analysing some positions (especially
some pawn endings), in typical positions it seems reasonable
to look at all the legal moves. The fancy methods help, but not
as much as they do in go, where tactical facts like trapped stones
can be determined by local search and stay true over many moves.
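
A rough sketch of what "stays true over many moves" might look like in code.
Everything here is assumed and hypothetical -- region_of(), region_key() and
local_capture_search() would be the hard part -- but it shows the idea: a small,
goal-directed local search establishes a tactical fact, and that fact is then
cached and reused for as long as the local region is untouched.

    # Sketch: a local tactical fact ("this cutting stone can be captured") is
    # settled by a small goal-directed search and cached while its region is untouched.
    # region_of(), region_key(), local_capture_search() are assumed, hypothetical helpers.
    tactical_cache = {}

    def cutting_stone_capturable(position, cut_point, defender):
        # If the opponent cuts at cut_point, can the defender capture the cutting stone?
        region = region_of(position, cut_point)            # the handful of nearby points
        key = (cut_point, region_key(position, region))    # changes only when the region does
        if key not in tactical_cache:
            # Local lookahead confined to the region, not a full-board search.
            tactical_cache[key] = local_capture_search(position, cut_point,
                                                       defender, region)
        return tactical_cache[key]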

The best go programs do use cutting-point analysis and so on, but
they don't have it all down yet. When they do, look out.

-- Bert Enderton

Vincent Diepeveen

Sep 29, 1994, 6:59:23 AM

That is for sure: you need a knowledge-based program to play go,
one that evaluates only the important moves and skips the (to a human,
obviously) nonsensical ones that brute-force programs waste time on.

But I guess there will still be programs that play go.
I'd like to know more about these programs (I want to write a knowledge-based
chess program). Does anyone know someone who has done this and wants to talk
about it?


--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| fidonet: 2:280/206.23 ||
+======================================+

Tero Sand

Sep 29, 1994, 11:39:05 AM
I don't know what relevance the following has, if indeed anything. But
anyway... one difference between humans and computers (I think, somebody
will undoubtedly correct me if I'm wrong) is that when computers analyze
a situation, then play a move, the analysis goes >poof< and is gone.
With humans, this is obviously not the case; just about _all_ analysis
done during all previous moves is carried along. This effectively means
humans, by the time only a few moves have been played, have a pretty big
'look-ahead', in breadth if not in depth (and quite deep, too, for some
moves).
I don't know how computer chess has progressed since 1983 or so; then I
used the chess program Sargon III on my Apple II+ computer. It had the
capability of thinking on the opponent's time. The analysis it did,
however, was lost if its opponent played a different move from what it
predicted. This, of course, is stunningly likely in go, except for some
specific tactical moves (peeps, ladder-blocks, etc.).
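
In fairness, chess programs do carry some analysis across moves by way of a
transposition table keyed on a hash of the position, so lines that are actually
reached again are found in the table. A sketch, with zobrist_hash() and the
underlying search() assumed; a real table would also record whether a stored
score is exact or only an alpha/beta bound.

    # Sketch: a transposition table keyed on a position hash is kept between moves,
    # so analysis of positions that are reached again is not thrown away.
    # zobrist_hash() and search() are assumed helpers.
    transposition = {}                        # hash -> (searched_depth, score)

    def search_with_memory(pos, depth, alpha, beta):
        key = zobrist_hash(pos)
        hit = transposition.get(key)
        if hit is not None and hit[0] >= depth:
            return hit[1]                     # reuse work done on an earlier move
        score = search(pos, depth, alpha, beta)
        transposition[key] = (depth, score)
        return score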

--
Tero Sand, 2 kyu (4k*) ! "We have become... the stewards of life's
! continuity on Earth. We did not ask for this
EMail: cus...@cc.helsinki.fi ! role, but we cannot abjure it. We may not be
cus...@cc.helsinki.fi ! suited for it, but here we are." -S.J. Gould

Jonathan Buss

Sep 29, 1994, 12:11:28 PM
In article <36cml5$a...@thecourier.cims.nyu.edu>,
David Mechner <dm0...@sparky.cs.nyu.edu> wrote:
[I believe in reply to Roy Langston -- apologies to Roy if not. -JB]

>> My comparison of the human skills used in chess and go was based
>> mainly on my own experiences as a player of the two games. One
>> reason I prefer go to chess is that I find chess requires much more
>> tactical reading. In go, I can usually just play a move that looks
>> good, and be confident that even if it isn't the best move, it's
>> probably good enough for now. In chess, one tactical oversight and
>> you're toast.
>
>I don't know your strength in either game, but this comment suggests
>to me that you are stronger at chess than at go. In both games, if
>you make any serious tactical _or_ strategic error against a strong
>player, you are toast, while if you make such an error playing a weak
>player, you will have chances to recoup. If go seems more forgiving
>tactically, it may be because you are playing weaker opponents, or
>because go is a longer game (and thus each game naturally gives you
>more chances to recoup).

I played chess a lot while in high school, but quit when I reached a
skill level (USCF 1450) where I felt that to do well I had to memorize
far more opening variations than I wanted to spend time on.

When I learned go some years later, I quickly noticed a difference in
the kind of mistakes I made. My chess mistakes nearly always came
from poor reading, but many of my go mistakes came from poor
understanding of the general situation.

Now, however, I believe that difference came from having human
teachers for go, who could point out my higher-level mistakes. Most
of my chess learning came from books, which couldn't point out where I
ignored their advice.

With respect to memorizing opening sequences, I know fewer than most
go players at my level (IGS 2d). But I don't feel that I have reached
a limit where improvement would require a lot of memorization. I
often come out ahead in the opening without it. (And then lose
later....)

----

I personally believe that the distinction between tactics and strategy
misses the point. I prefer to distinguish between analysis and
intuition. Although tactics relies heavily on analysis and strategy
on intuition, the reverse can also hold. My tactical play relies
heavily on intuition, and my relative strength in openings comes
partly from strategic analysis. Tsume-go problems have helped me with
both of these, by the way.

For computer play, we might define "analysis" as what the computer can
do, and "intuition" as what it can't. But that doesn't help a
programmer.


Jonathan Buss

Roy Langston

Sep 29, 1994, 3:21:04 PM
In article <36cml5$a...@thecourier.cims.nyu.edu>, dm0...@sparky.cs.nyu.edu

(David Mechner) writes:
>
>
> > This (and Shrier's) analysis does not adequately explain why go
> > programs are so weak. Programs can do dan-level tsume-go with ease.
>
> Hardly. I know of dedicated tsume-go programs that can do "dan-level
> tsume-go", so long as the problem is simple and clear cut, i.e. the
> locality of the problem is clearly defined, and the group surrounding
> the group whose status is in question is totally impervious to attack.
> This rarely happens in a real game. Defining the locality of a
> tactical situation in a real game is a terribly difficult problem,
> because as a fight progresses, it can grow and spill over into other
> areas. But without a clearly defined locality, the brute-force
> methods used by life & death programs are impracticable. Current
> go-playing programs are pathetcially weak at life & death.


This is certainly true, but is getting much closer to the real meat of the
go AI problem than Shrier's idea, which seemed to concentrate mainly on
ladder relationships (which are still pretty easy for computers to read).
Thinking about how a fight might spill across the board invokes the
distinctively human go skills again.

>
> > My comparison of the human skills used in chess and go was based
> > mainly on my own experiences as a player of the two games. One
> > reason I prefer go to chess is that I find chess requires much more
> > tactical reading. In go, I can usually just play a move that looks
> > good, and be confident that even if it isn't the best move, it's
> > probably good enough for now. In chess, one tactical oversight and
> > you're toast.
>
> I don't know your strength in either game, but this comment suggests
> to me that you are stronger at chess than at go. In both games, if
> you make any serious tactical _or_ strategic error against a strong
> player, you are toast, while if you make such an error playing a weak
> player, you will have chances to recoup. If go seems more forgiving
> tactically, it may be because you are playing weaker opponents, or
> because go is a longer game (and thus each game naturally gives you
> more chances to recoup).


Actually, I consider myself quite a bit stronger at go (though nowhere near
the 6.4 dan I am bizarrely credited with in the latest AGA ratings
readout...what's the smiley for chagrin again?) To me, a big difference
between the two games is that in go, a move that doesn't do much for one
strategic goal very often turns out to be useful for something else. You
can make a thick move, and just avoid tactical complications entirely. If
you're flexible (and a good long-term thinker) you can win almost without
playing out any tactical situations at all. In chess I think that is much
harder to do.

The sheer length of the game may also be an important consideration. I
know I've played some bonehead moves against 5 dan players, and still gone
on to win (as others have also done to me, of course :-)) just because
there was so much damn time left in the game for strange things to happen.

>
> > Also, I believe a key difference between the human skills needed for
> > the two games is revealed by simultaneous play. An experienced go
> > player can play virtually without any reading at all, and give away
> > only a few stones of his strength. Chess grandmasters, playing
> > strong club players in simuls with _no handicaps_ must still do
> > substantial reading, or face the probability of a loss.
> >
> > Watch the two kinds of simul carefully. Even though the go pro is
> > giving handicaps, he can usually answer much faster than the chess
> > grandmaster. Just count the number of moves the experts play in a
> > given period of time: the difference is very considerable. IMHO,
> > that is a vital clue to the difference between the two games.
>
> I haven't noticed this difference myself, and I've seen both sorts of
> simuls. Both professional go players and grandmaster level chess
> players can read very far, very accurately, very fast. I've seen go
> professionals read 20 move deep life & death problems in 3 seconds
> (literally).


My guess is that what you are seeing is pattern recognition in action. In
the case of the chess grandmaster, he is in his opening book much longer
than the weaker player, and is often able to gain a winning advantage just
by playing the book moves against unsound variations the weaker player
comes up with on the fly. The go pro is probably remembering a similar
problem from a book, or a pro game he went over once, maybe many years
before. In both cases, the thinking process involved is very different
from computer-style tactical reading.

-- Roy Langston
