
Q: Neural Nets/Genetic Algor. and Chess


Jeff Hamm

Mar 1, 1996
Hi,
My understanding is that most computer chess programmes operate on
the brute-force method: calculate every possible move/position and select
the best. This is supplemented with database moves, such as openings and
endgames, which do not require algorithmic calculation. Deep Blue operates
on such a principle and, being the "bee's knees" of this approach, has
shown that although great chess can be produced this way, the human has
for the moment still been able to stay on top. Being a research
psychologist, I find the brute-force approach to chess "problem solving"
unfortunate, though it may be the "optimal strategy" in terms of
programming time and performance. However, it seems to me that chess would
be well suited to a neural net/genetic algorithm approach. What if a
neural net pattern-recognition programme were employed to determine
"strategy" (i.e., I'm in a position that calls for attack on the king
side, or for increased play on the queen side, etc.) and which goals would
be beneficial, while a genetic algorithm is employed to find the tactical
solution to that strategic plan? Has there been much thought given to this
approach? One of the benefits is the ability to "teach" the programme by
feeding it GM games and chess puzzles, and to give it the ability to learn
from its mistakes. Anyway, perhaps this has been tried already and the
results were not impressive. But even so, the results from the first
brute-force algorithms weren't exactly wonderful either.
- Jeff Hamm
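
The proposal is loose, but one very rough reading of the neural-net half
might look like the Python sketch below: a tiny (untrained) classifier maps
a 64-number board encoding to a strategic label. Everything here (the
labels, the encoding, the class name) is invented for illustration, and the
hard part, turning the label into actual moves, is left as a comment.

import numpy as np

PLANS = ["kingside_attack", "queenside_play", "defend", "simplify"]

class PlanClassifier:
    def __init__(self, n_features=64, n_hidden=16, seed=0):
        # weights are random and untrained here; training data would be
        # needed to make the labels meaningful
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, len(PLANS)))

    def predict(self, features):
        # features: 64 numbers encoding the board, e.g. signed piece values
        h = np.tanh(features @ self.w1)     # hidden layer
        scores = h @ self.w2                # one score per strategic label
        return PLANS[int(np.argmax(scores))]

# Usage: encode a position as 64 numbers and ask for a plan label.
board_features = np.zeros(64)               # placeholder encoding
print("suggested plan:", PlanClassifier().predict(board_features))
# The open second stage, not sketched here, would turn the label into an
# actual move, e.g. by searching candidate move sequences scored against
# the chosen plan.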


Adam Sundor

Mar 2, 1996
In your own field of research psychology, what have neural
nets or genetic algorithms produced so far? And aside
from the cachet of trendy "neural nets," why would a net
be much better than brute data base methods? Both
seem to me to be the same thing, variations on a theme.

All AI experts, am I correct here?
Neural net methods = current chess data base methods = brute force

Theoretically, perhaps the neural net could learn and become more
efficient in the long run; is that the main difference? Would a neural
net that effective have to span the globe first?

For elementary models, they can be interesting. But
at the human decision level, has anything been built that can even
remotely predict human behavior, as far as research psychology is
concerned? That may be unfair to ask. I don't think prediction of any
human behavior has been achieved by any research in psychology except
for IQ scores, i.e., results solidly above the usual 50-60%
variance-explained levels.

I'm just asking. At the human motor level, I have
seen some solid work. I just don't know of anything
above a very elementary, primitive level.

Oh, about chess, the answer is no. And yes, you can bet
your bootie that neural nets have been tried. But
aside from theory, practically they don't work yet.

My guess is that getting a neural net up and running probably takes
far more computational power, and far more aggravation, than getting
a brute-force database up and running.

One of the mathematicians here suggests he has a proof
that the data base is the most efficient method.

Adam Sundor

Jay Scott

Mar 4, 1996
In article <4h72o4$e...@News.Dal.Ca>, Jeff Hamm (in%"jph...@ac.dal.ca") wrote:
>What if a neural net pattern-recognition programme were employed to
>determine "strategy" (i.e., I'm in a position that calls for attack on
>the king side, or for increased play on the queen side, etc.) and which
>goals would be beneficial, while a genetic algorithm is employed to find
>the tactical solution to that strategic plan?

I don't understand your suggestion. How would a genetic algorithm
work out the tactical details of a plan?

There have been some attempts to write learning chess programs, but
the ones I know about don't separate strategy and tactics as sharply
as you suggest. See Morph, NeuroChess and SAL on my Machine Learning
in Games page:

http://forum.swarthmore.edu/~jay/learn-game/index.html

>One of the benefits is the ability to "teach" the programme by feeding
>it GM games and chess puzzles, and to give it the ability to learn from
>its mistakes.

I think that learning from play against computer opponents of similar
strength (e.g. self-play) will usually work better than learning from
human expert performance. Humans think so differently from any learning
mechanism we can write that human performance tends to be more confusing
than helpful for a program.

If you're talking about learning from a human teacher, instead of from
human games, that's different. I don't know any work like that.


Jay Scott, j...@forum.swarthmore.edu

Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html
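
To make the self-play suggestion concrete, here is a toy sketch (not from
any of the posts) of the loop that generates training data by having a
program play itself. It assumes the third-party python-chess package, and
the move choice is random, so it only shows the structure, not a strong
player.

import random
import chess   # third-party python-chess package (assumed available)

def self_play_game(max_plies=200):
    """Play one game against itself; return the positions seen and the result."""
    board = chess.Board()
    positions = []
    while not board.is_game_over() and board.ply() < max_plies:
        positions.append(board.fen())                    # record the position
        move = random.choice(list(board.legal_moves))    # stand-in for the engine's choice
        board.push(move)
    return positions, board.result(claim_draw=True)

# Each finished game yields (position, final outcome) pairs that a learner
# could train on, with no human games involved.
positions, result = self_play_game()
print(len(positions), "positions, result:", result)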

Pedro Mendes

Mar 5, 1996
In article <4hhi6t$f...@News.Dal.Ca>, in%"jph...@ac.dal.ca" says...
>
>[stuff deleted]
>
>The idea of the Genetic Algorithm was that calculating
>the "solution" to the best move might work on the principals of "once I've
>found a good line then maintain what I know", but if that line becomes
>"bad; i.e., I suddenly find myself in a bad situation" then mutate the
>equation.

The problem with this is that a GA requires the evaluation of each
candidate solution by a cost function (a measure of how good the
solution is). What would you suggest the cost function be? I presume
it would have to be the result of an n-ply-deep move search, with
exactly the same problems that the current programs have. This means
that a GA would carry a time penalty against current programs (both
would have to search, but the GA would also have to go on doing the
replication of chromosomes, etc.). By the way, the same would apply to
any other optimisation technique. There are some tasks that are better
done by heuristics, and I think chess may be one of them.

Of course, this is just my opinion and I have never done any chess
programming, but I am currently employed on a research project that
includes using GAs for optimisation.

--
Pedro Mendes
home: pe...@enzyme.demon.co.uk , work: p...@aber.ac.uk | Aberystwyth,
http://gepasi.dbs.aber.ac.uk/pedro/prmhome.htm | Dyfed, Wales,
PGP mail welcome! (public key on demand) | U.K. (Eur.Un.)
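
Pedro's time-penalty point shows up in the shape of any genetic algorithm:
every generation evaluates every candidate with the cost function, and if
that cost function is itself an n-ply search, the search cost is paid once
per candidate per generation, on top of the GA's own bookkeeping. Below is
a minimal generic GA skeleton in Python with a toy fitness function
standing in for the expensive search; it is an illustration of the cost
structure, not a chess program.

import random

def fitness(candidate):
    # Toy stand-in: maximise the sum of the genes.  In the chess setting
    # this would be the expensive part, e.g. the result of an n-ply search.
    return sum(candidate)

def evolve(pop_size=20, genes=10, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < mut_rate else g
                     for g in child]                    # per-gene mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness found:", round(fitness(best), 3))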

Jeff Hamm

Mar 6, 1996
In article <357cc$16131...@enzyme.demon.co.uk>, pe...@enzyme.demon.co.uk
says...

>
>In article <4hhi6t$f...@News.Dal.Ca>, in%"jph...@ac.dal.ca" says...
>>
>>[stuff deleted]
>>
>>The idea of the Genetic Algorithm was that calculating
>>the "solution" to the best move might work on the principals of "once I've
>>found a good line then maintain what I know", but if that line becomes
>>"bad; i.e., I suddenly find myself in a bad situation" then mutate the
>>equation.
>
>The problem with this is that the GA requires the evaluation of each
>candidate solution by a cost function (a measure of how good the
>solution is). What would you suggest the cost function would be? I
>presume this would have to be the result of a n-ply deep move search
>with exactly the same problems that the current programs have. This
>means that a GA would have a time penalty against current programs
>(both would have to search but the GA would have to go on doing the
>replication of chromosomes etc).

Good point. I'm thinking that the optimization would be performed after
a game is completed. That is, the programme loads its weights, equations,
and so on in order to play, and any changes to these aspects of the
programme are made only after the game, much as one does a post-hoc
evaluation of one's own games (perhaps a deep ply search could be used at
that stage for comparison against the optimal solution). So the delays
due to weight adjustments, replication and mutation would not be
incorporated into game time. Furthermore, since a ply search is not being
performed during play, move calculation might be fairly quick: the idea
is that the equations find the solution the ply search would have come up
with, without having to do the actual ply search at game time. And if a
ply search makes errors (i.e., loses the game), the equations can be
modified based on the resulting winning play, which could result in the
equations making better move selections than the ply search would have.
Anyway, thanks for responding.
- Jeff Hamm
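
One hypothetical way to realise the "adjust only after the game" idea (a
guess at such a scheme, not something specified in the thread, and again
assuming the python-chess package): keep a linear material evaluation with
frozen weights during play, record the positions, and once the result is
known nudge each weight so the evaluation agrees a little better with the
outcome.

import chess   # third-party python-chess package (assumed available)

# One weight per piece type, from White's point of view.  Frozen during play.
weights = {chess.PAWN: 1.0, chess.KNIGHT: 3.0, chess.BISHOP: 3.0,
           chess.ROOK: 5.0, chess.QUEEN: 9.0}

def material_features(board):
    """White-minus-Black piece counts, keyed by piece type."""
    feats = {pt: 0 for pt in weights}
    for piece in board.piece_map().values():
        if piece.piece_type in feats:
            feats[piece.piece_type] += 1 if piece.color == chess.WHITE else -1
    return feats

def evaluate(board):
    feats = material_features(board)
    return sum(weights[pt] * feats[pt] for pt in weights)

def post_game_update(fens, result, lr=0.01):
    """After the game: nudge weights so evaluations lean toward the result."""
    target = {"1-0": 1.0, "0-1": -1.0, "1/2-1/2": 0.0}.get(result, 0.0)
    for fen in fens:
        board = chess.Board(fen)
        feats = material_features(board)
        error = target - evaluate(board) / 10.0          # crude scaling
        for pt in weights:
            weights[pt] += lr * error * feats[pt]        # gradient-style nudge

# Usage: feed in the positions of a finished game and its result.
post_game_update([chess.Board().fen()], "1-0")
print(weights)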


MillerDG

Mar 6, 1996
In article <4hghas$g...@mothra.westworld.com>, edp...@westworld.com (Ed
Parry) writes:

>
>Ultimately, I think the optimal program would simply understand the
>rules, but have a way of "teaching" itself [somehow] by playing itself
>over and over...
>

How is this a problem? Have it play itself (very simple).


Ed Parry

Mar 9, 1996
mill...@aol.com (MillerDG) wrote:

>In article <4hghas$g...@mothra.westworld.com>, edp...@westworld.com (Ed
>Parry) writes:

>>
>>Ultimately, I think the optimal program would simply understand the
>>rules, but have a way of "teaching" itself [somehow] by playing itself

>> -------------
>>over and over...
>>

> How is this a problem? Have it play itself(very simple).

You must have decided to neglect the other requirement:

It has to LEARN from the games, not just play itself over and over
mindlessly.

Ep


Ed Parry
edp...@westworld.com
Ed Parry <au...@lafn.org>
EBBS HQ - 818-891-9350 - 28,8kb


Andrew John Walker

Mar 12, 1996
Why not try a neural net on a simple endgame such as KBN vs. K?
You could feed it random positions from an endgame database and
see how good it gets at picking the best move.
Andrew Walker
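
A sketch of what that experiment might look like, assuming python-chess
and some endgame database to supply labels (probe_value below is only a
stand-in that a real run would replace with an actual database probe):
generate random legal KBN-vs-K positions, encode them, label them, and fit
a small regression net to the labels.

import random
import chess              # third-party python-chess package (assumed)
import numpy as np

def random_kbnk_position(rng):
    """Place WK, WB, WN and BK on distinct squares; retry until legal."""
    while True:
        squares = rng.sample(range(64), 4)
        board = chess.Board(None)                       # start from an empty board
        for sq, symbol in zip(squares, "KBNk"):
            board.set_piece_at(sq, chess.Piece.from_symbol(symbol))
        board.turn = chess.WHITE
        if board.is_valid():
            return board

def probe_value(board):
    # Stand-in label.  A real run would replace this with a database probe
    # (win/draw, or distance to mate) so the net has something to learn.
    return 0.0

def encode(board):
    """Four 64-square one-hot planes, one per piece, flattened to a vector."""
    planes = np.zeros((4, 64))
    plane_of = {(chess.KING, chess.WHITE): 0, (chess.BISHOP, chess.WHITE): 1,
                (chess.KNIGHT, chess.WHITE): 2, (chess.KING, chess.BLACK): 3}
    for sq, piece in board.piece_map().items():
        planes[plane_of[(piece.piece_type, piece.color)], sq] = 1.0
    return planes.ravel()

rng = random.Random(0)
boards = [random_kbnk_position(rng) for _ in range(100)]
X = np.array([encode(b) for b in boards])
y = np.array([probe_value(b) for b in boards])           # labels from the database
print(X.shape, y.shape)
# Any small regression net can now be fitted to (X, y); at play time, pick
# the legal move whose resulting position the net values best.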

Vincent Diepeveen

Mar 15, 1996

If you feed it all possible positions, about 3612 (KK) x 62 x 61 x 1/4
(mirroring the king) = roughly 3.4 million positions, then it will
probably play the right move.

> Andrew Walker

Ten lines of hardcoded C and one array of 64 bytes are also
sufficient for this.
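
The post doesn't say what the 64-byte array holds; one guess, shown here
in Python rather than C, is the classic trick of scoring the lone king's
square so that the score rises toward the corners the bishop controls,
plus a bonus for keeping the kings close, and letting the normal search do
the rest. The table values are illustrative, not Vincent's code.

import chess   # third-party python-chess package (assumed)

# 64 scores for the lone king's square, indexed a1=0 .. h8=63.  Values rise
# toward a1 and h8 (the dark-square corners); mirror the table when the
# mating bishop is light-squared.
PUSH_TO_CORNER = [
    7, 6, 5, 4, 3, 2, 1, 0,
    6, 5, 4, 3, 2, 1, 0, 1,
    5, 4, 3, 2, 1, 0, 1, 2,
    4, 3, 2, 1, 0, 1, 2, 3,
    3, 2, 1, 0, 1, 2, 3, 4,
    2, 1, 0, 1, 2, 3, 4, 5,
    1, 0, 1, 2, 3, 4, 5, 6,
    0, 1, 2, 3, 4, 5, 6, 7,
]

def kbnk_bonus(board, winner=chess.WHITE):
    """Evaluation bonus to add when the material on the board is KBN vs K."""
    loser_king = board.king(not winner)
    winner_king = board.king(winner)
    closeness = 7 - chess.square_distance(winner_king, loser_king)
    return 10 * PUSH_TO_CORNER[loser_king] + closeness

With a nudge like this added to the material score, the usual claim is
that an ordinary search will stumble into the mate without any database.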

What about KP vs. KNN?

It seems harder to write a function for this one that can do it (where
possible) within 50 moves. I haven't managed it so far.

Vincent Diepeveen
vdie...@cs.ruu.nl


--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+
