
Is Computer Chess A Science?


Dr Nancy's Sweetie

Feb 21, 1996

I was at Game Six on Saturday, and it was a very entertaining afternoon.
The audience room burst into laughter when Kasparov put his watch back on
near the end. 8-)

At one point, someone asked a member of the Deep Blue team (might have
been CJ Tan, but I couldn't see very well) if IBM was planning to sell
the specialised chess chips in DB. Could we get them on an expansion card
and have a mini Deep Blue to play at home? He said he didn't know.

That got me to wondering about another question which was noticeably
unmentioned: an indication of where the Deep Blue team was planning to
publish their heuristics and tree-pruning algorithms. I had been under
the impression that computer chess was a scientific research field: you
discover something and you publish it, so the others working on the same
problem (computer chess) can build on your knowledge.

However, I am wondering if my impression is mistaken. Comments?


Darren F Provine / kil...@copland.rowan.edu
"I use not only all the brains I have, but all I can borrow as well."
-- Woodrow Wilson

Ray Helie

Feb 22, 1996
<kil...@copland.rowan.edu> wrote:

> That got me to wondering about another question which was noticeably
> unmentioned: an indication of where the Deep Blue team was planning to
> publish their heuristics and tree-pruning algorithms.

At 100 million positions per second, or 18 billion positions per move on
average, if it's still performing not much better than many of the micro
programs on the market much of the time (not all of the time, I know),
then I'd say their heuristics and tree-pruning algorithms are
uninteresting -- raw speed is what generates most of DB's strength. I'd
rather see the heuristics and algorithms of the micro programs that
perform almost as well while looking at a fraction of those positions. :)


Bruce Moreland

Feb 23, 1996
In article <4gfo4o$o...@cobain.rowan.edu>, kil...@copland.rowan.edu says...
>[snip]

>That got me to wondering about another question which was noticeably
>unmentioned: an indication of where the Deep Blue team was planning to
>publish their heuristics and tree-pruning algorithms. I had been under
>the impression that computer chess was a scientific research field: you
>discover something and you publish it, so the others working on the same
>problem (computer chess) can build on your knowledge.
>[snip]

Nope! It's a lot more complex than this, because there is a commercial
aspect.

1) Some people are basically working in public, notably Bob Hyatt. He
publishes articles and makes his source code public.

2) Others will publish, or converse publicly, but for one reason or another
don't distribute source, and might not distribute executables.

3) Some publish, but may keep secrets for one reason or another. I've seen
several articles on Hitech, but never a detailed description of its
positional recognizers, which to me seem one of the more unique
(publishable) features of the program. To be fair, I've never asked these
folks for examples; maybe they would tell me if I asked.

4) Some will talk to you privately, and might open up a little more if you
have something to trade.

5) Some people are very guarded in their responses, but will talk a little.
Frans Morsch, for instance, will tell a little bit about his program but not
a lot.

6) Some people are absolutely quiet. Richard Lang does not talk about
Genius, as far as I know.

7) Some people are absolutely quiet, and are nasty about it.

Most of the amateurs seem to be doing #2 or #3, the professionals are between
#4 and #6. Some of the amateurs also seem to inhabit the area between #4 and
#5, with the occasional trip out to #7.

We all owe a debt to the people who have published, notably Hyatt, Slate,
Thompson, Berliner, Hsu & co., Donninger, Beal, and many others whose names I
may or may not be able to spell correctly.

The people who write professional programs have to eat, so if someone comes
up with something new and interesting there is at least some financial
incentive to keep quiet about it.

I don't feel comfortable keeping secrets, particularly from the people who
publish, so that's why I'm in camp #2. Most of what is in my program has
been drawn directly from the published material; I started with a
pseudo-code implementation of Belle's search function and added on to that.
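
The Belle-style search Bruce mentions starting from is, at its core, an
alpha-beta search. A minimal negamax sketch in Python (purely illustrative
-- Belle's actual search was implemented in hardware, and the tiny
hand-built game tree below stands in for a real move generator and
evaluation function):

```python
# A minimal negamax alpha-beta sketch. Leaf scores are from the point of
# view of the side to move at the leaf; the toy tree is a stand-in for a
# real move generator and evaluation function.

def alphabeta(node, depth, alpha, beta):
    """Score `node` from the point of view of the side to move."""
    children = node.get("children", [])
    if depth == 0 or not children:
        return node["score"]
    for child in children:
        # Negamax: negate the child's score and flip the (alpha, beta) window.
        score = -alphabeta(child, depth - 1, -beta, -alpha)
        if score >= beta:
            return beta       # cutoff: the opponent will avoid this line
        if score > alpha:
            alpha = score     # new best line found so far
    return alpha

# Toy 2-ply game tree.
tree = {"children": [
    {"children": [{"score": 3}, {"score": -2}]},
    {"children": [{"score": 5}, {"score": 1}]},
]}
root_score = alphabeta(tree, 2, -999, 999)
```

In a real program the tree is never materialized; moves are generated,
made, and unmade as the search recurses, and alpha-beta's cutoffs are what
make searching deeply affordable.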

bruce

--
The opinions expressed in this message are my own personal views
and do not reflect the official views of Microsoft Corporation.


EEBYDEEBY

Feb 23, 1996
I have been reading a little bit about the fields of genetic algorithms and
genetic programming, and wondering if they could be applied to chess
programming to find some decent chess heuristics, particularly as they
relate to positional play.

It seems that it would be an interesting experiment... allow chess programs
to compete with each other in a battle of survival. Those that do well
"breed" with others that do well. Throw in some random mutations for kicks,
and after a few million generations, maybe the top programs could provide
some useful heuristics that traditional programmers could incorporate into
their knowledge evaluation functions.

It seems it might be worth a try. I am just an amateur but perhaps one of
you smart guys could comment on this or maybe try it out.

John Stanback

Feb 23, 1996
A couple of years ago I tried using a genetic algorithm to optimize the
coefficients of the evaluation function components in my chess program
Zarkov. I started with a population of 100 "players", each of which had a
set of coefficients randomly generated within a very wide range. For
example, the material value for each piece was a random number between 100
and 1200. Pawns were fixed at 100.

Games were played between randomly selected players.
Players were replaced when they had lost more games than they had won and
had lost at least 2 games. The replacement algorithm randomly selected
2 parents, each with an even or winning record. For each of the 30
evaluation function parameters, the child program randomly acquired the value
of one of the parents. About 1% of the time this value was randomly changed
by a small amount (mutation).
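
The selection/breeding scheme above can be sketched roughly as follows.
The chess games themselves are replaced here by a hypothetical stand-in:
whichever player's parameter vector is closer to a hidden target wins, so
only the select/breed/mutate logic mirrors the post -- nothing here is
Zarkov's actual code.

```python
import random

random.seed(1)  # make the sketch reproducible

N_PARAMS = 30
# Hidden "ideal" parameter vector -- a stand-in for real playing strength.
IDEAL = [random.uniform(100, 1200) for _ in range(N_PARAMS)]

def new_player():
    return {"params": [random.uniform(100, 1200) for _ in range(N_PARAMS)],
            "wins": 0, "losses": 0}

def error(p):
    # Distance from the hidden target; smaller means "stronger".
    return sum(abs(a - b) for a, b in zip(p["params"], IDEAL))

def play(a, b):
    # Stand-in for a 2-second-per-move game: the closer player wins.
    winner, loser = (a, b) if error(a) < error(b) else (b, a)
    winner["wins"] += 1
    loser["losses"] += 1

def breed(parents):
    # Child takes each parameter from a randomly chosen parent;
    # about 1% of inherited values get a small mutation.
    child = new_player()
    for i in range(N_PARAMS):
        v = random.choice(parents)["params"][i]
        if random.random() < 0.01:
            v += random.uniform(-20.0, 20.0)
        child["params"][i] = v
    return child

population = [new_player() for _ in range(100)]
initial_best = min(error(p) for p in population)

for _ in range(3000):
    a, b = random.sample(population, 2)
    play(a, b)
    # Replace players with a losing record and at least 2 losses by a
    # child of two parents with even-or-better records.
    for i, p in enumerate(population):
        if p["losses"] >= 2 and p["losses"] > p["wins"]:
            fit = [q for q in population if q["wins"] >= q["losses"]]
            if len(fit) >= 2:
                population[i] = breed(random.sample(fit, 2))

best = min(population, key=error)
```

With a real fitness measure each "game" costs far more than this stand-in,
which is why the post's 2000-3000 games took nontrivial time even at 2
seconds per move.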

After 2000-3000 games at a time control of 2 seconds per move the piece
values had nearly converged to reasonable values; knights and bishops were
usually a bit over 300 points, rooks a bit over 500, and queens around
1000 points. Other less important parameters had more or less converged,
but not to "ideal" values. A control program with hand-tuned
parameters still won against the best of the programs derived using this
algorithm, but only by a margin of 55-60%.

I haven't studied genetic algorithms so I'm sure there are better methods
for selection and replacement. This experiment does seem to show that
there might be some merit in using genetic algorithms for tuning evaluation
function parameters in chess programs.

John Stanback


Daniel A. Thies

Feb 24, 1996
John:

I let an experiment run for about 6 months last year, which used a very
similar fitness measure. I evolved neural networks to deliver an
evaluation function for chess positions. For the first two months, it ran
with fixed material values (lifted straight out of gnuchess - thanks).
After that, I let the material values float as well. The networks did
improve in strength over time, but after 6 months were still weaker (and
slower!) than the hand-coded evaluation function I had been using. My
conclusion was that I had better things to do with my CPU. :)

I have yet to try the same experiment *starting* with the hand-coded
parameters, but perhaps that would be the way to do it, and hope that
after several months there would be some worthwhile change.

Dan

In article <4gl1n1$7...@fcnews.fc.hp.com>, John Stanback <jhs> wrote:
[snip]

Steven J. Edwards

Feb 28, 1996
kil...@copland.rowan.edu (Dr Nancy's Sweetie) writes:

>At one point, someone asked a member of the Deep Blue team (might have
>been CJ Tan, but I couldn't see very well) if IBM was planning to sell
>the specialised chess chips in DB. Could we get them on an expansion card
>and have a mini Deep Blue to play at home? He said he didn't know.

At one point some time ago, there was a report in the press that
commercialization was being considered. But it may be that IBM is too
big to do it effectively. Perhaps they could license the technology
to a faster and smaller company.

>That got me to wondering about another question which was noticeably
>unmentioned: an indication of where the Deep Blue team was planning to
>publish their heuristics and tree-pruning algorithms. I had been under
>the impression that computer chess was a scientific research field: you
>discover something and you publish it, so the others working on the same
>problem (computer chess) can build on your knowledge.

The DB development team has published a number of papers relating to
its earlier work (ChipTest and Deep Thought).

>However, I am wondering if my impression is mistaken. Comments?

Computer chess programming is probably the most advanced field of
classical game programming. It has more published papers than all
other such game research combined. Computer go is probably in second
place with perhaps one tenth as many papers.

So I would say that computer chess is a real science. It is mostly an
empirical science, perhaps due to the fact that computer chess
experimentation is easy to do compared to computer chess theory
formulation.

It is possible to get impressive performance results with almost no
theory whatsoever. It's analogous to comparing the working process of
a professional cook with that of a theoretical biochemist.

-- Steven (s...@mv.mv.com)
