
Artificial Neural Networks for Chess


Jet Nebula

Apr 2, 2002, 12:24:09 AM
Is the general failure of artificial neural networks in the production
of world-class computer chess players due to the configuration and
training of the ANN, or are ANNs just not suitable for creating
computer chess players? I've seen people claim the latter, but that
seems strange in light of ANNs having been used to create world-class
backgammon players. Does the uncertain nature of backgammon make it
much more amenable to ANNs? Or have all ANN-based chess players just
been poorly constructed and taught? Any thoughts?

Anders Thulin

Apr 2, 2002, 11:25:41 AM
Jet Nebula wrote:

> Is the general failure of artificial neural networks in the production
> of world-class computer chess players due to the configuration and
> training of the ANN, or are ANNs just not suitable for creating
> computer chess players?


I'm not an expert on ANNs, but I view them mainly as classification
engines. If chess playing can be modelled as a classification problem
(or some part of it can) then ANNs probably have a use.

The best chess area in which to experiment is probably in the endgame
field: here we have complete information, and can perform exhaustive
and exact testing of any ANN.


--
Anders Thulin a...@algonet.se http://www.algonet.se/~ath

Simon Waters

Apr 2, 2002, 12:16:16 PM
Anders Thulin wrote:
>
> I'm not an expert on ANN's, but I view them mainly as classification
> engines. If chess playing can be modelled as a classification problem
> (or some part of it can) then ANNs probably have a use.
>
> The best chess area in which to experiment is probably in the endgame
> field: here we have complete information, and can perform exhaustive
> and exact testing of any ANN.

Being picky, but chess is always a perfect-information game, in
the sense that both parties always know the complete state of the
board, although sometimes I wonder when I look back at the games
I played.....

My guess is that Backgammon is just an easier game in which to classify
winning/losing positions. Similarly, in draughts the limited types
of piece and move simplify the pattern-recognition problem, and
allowed draughts-playing programs to invent their own "ideas".

As Sergei said: openings teach you openings; endgames teach you
chess.

henri Arsenault

Apr 2, 2002, 2:20:34 PM
In article <gqeiau07esqkg76ci...@4ax.com>, Jet Nebula
<thi...@spamfreezone.com> wrote:

The most widely used neural network training algorithm is
back-propagation, which Kohonen has shown to be equivalent to the
least-squares solution to a matrix inversion (or solution to a system of
equations) when an exact inverse may not exist. The least-squares solution
is called the pseudoinverse, and back-propagation calculates this using
only a small subset of the data at a time, at the cost of a greatly
increased number of calculations.
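
The equivalence Henri describes can be illustrated on a toy linear problem. This is a sketch, not from the post, with full-batch gradient descent standing in for back-propagation's incremental updates: the one-shot pseudoinverse solution and the iterative solution arrive at the same weights.

```python
# Sketch: least-squares via pseudoinverse vs. iterative gradient descent.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))        # overdetermined system A w = b
b = rng.normal(size=20)

# Direct least-squares solution via the pseudoinverse.
w_pinv = np.linalg.pinv(A) @ b

# Iterative gradient descent on the same squared-error cost.
w = np.zeros(3)
lr = 0.01
for _ in range(5000):
    w -= lr * A.T @ (A @ w - b)     # gradient of 0.5 * ||A w - b||^2

print(np.allclose(w, w_pinv, atol=1e-6))
```

The iterative route does many more arithmetic operations than the direct solve, which is the trade-off mentioned above.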

Typical moderately simple back-propagation calculations can require
upwards of 50,000 iterations before reaching an acceptable solution. So a
complex problem like chess, with its exponentially increasing complexity, seems
way outside the capabilities of neural networks, at least for the moment.

It is not clear to me even how to write out the chess-solving problem in
terms of a system of equations. The only approach I can see is for a
neural network to try out moves at random and throw out moves that lead
to inferior positions, but I can't see how this could compete even with
alpha-beta pruning which is used now.
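
For readers who haven't seen it, the alpha-beta pruning used as the benchmark here can be sketched in a few lines. The toy tree below is invented purely for illustration:

```python
# Minimal alpha-beta sketch over a hand-built tree: a leaf is a number,
# an internal node is a list of children.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                        # cutoff: prune remaining children
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v

tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6
```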

Yes, one could design a system that learns from its mistakes, but in no
way does that reduce the problem of exponentially increasing numbers of
moves to evaluate.

Henri

Magnus Javerberg

Apr 2, 2002, 4:51:50 PM
There are two main problems with ANN chess:

1) Chess is not smooth. For an ANN to work well, two positions with
similar placements of pieces should have similar evaluation values.
This is reasonably true for backgammon, but not for chess. (If you
move one random stone in BG one step, you will most likely not make
a big change to the value of the position.)

2) ANNs are slow. Even if you make an evaluation algorithm that
is superior, you will still have a rather weak engine, since you search
far fewer positions than non-ANN programs.

Jet Nebula

Apr 2, 2002, 6:32:37 PM
On Tue, 02 Apr 2002 19:20:34 GMT, ars...@nospam.phy.ulaval.ca (henri
Arsenault) wrote:

>It is not clear to me even how to write out the chess-solving problem in
>terms of a system of equations. The only approach I can see is for a
>neural network to try out moves at random and throw out moves that lead
>to inferior positions, but I can't see how this could compete even with
>alpha-beta pruning which is used now.

My idea is to have a computer play itself using a minimax search to
depth 1. At first, a simple static evaluation function will be
used--probably just win, lose, draw, or comparative material value.
The ANN, then, would be fed the board state and have to "guess" the
minimax value of the current position. Training would be based on how
closely its guess matched the actual minimax value. (The actual
minimax value is used during gameplay--the ANN is only there to learn
to guess those values.)

The trick comes into play after the ANN achieves a very small error
percentage. We then take a snapshot of the ANN and use it as our
minimax searcher's static evaluation function. That snapshot will
remain static, but we'll continue training our ANN as before.

The idea is, you first train an ANN that mimics a 1-ply minimax
search. Then you use that ANN as a static evaluation function in a
real 1-ply minimax search to simulate a 2-ply minimax search, which
you use to train an ANN that mimics a 2-ply minimax search. Then you
use the 2-ply ANN eval fn in a 1-ply minimax search to simulate a
3-ply minimax search, to train a 3-ply ANN eval fn. And so on, until
you've trained an ANN to simulate a 100-ply minimax search, or
whatever you want.

Again, the ANN you're currently training is never actually used in the
games. The ANN is only there to learn how to accurately guess the
scores of board states, and the games are only being played to give
the ANN something to guess about.

Anyone want to give this a shot? :)
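
The scheme above can be sketched on a toy game. In this sketch, Nim (take 1-3 stones; whoever takes the last stone wins) stands in for chess, and a plain lookup table stands in for the ANN, so the "training" step is exact rather than approximate. Each generation freezes a snapshot of the learned values, uses it as the leaf evaluator of a 1-ply search, and refits the learner to the search results, deepening the effective ply each round:

```python
def moves(n):                       # legal numbers of stones to take
    return [m for m in (1, 2, 3) if m <= n]

def search(n, evaluate):
    """1-ply negamax: best achievable value for the side to move."""
    if n == 0:
        return -1.0                 # previous player took the last stone: loss
    return max(-evaluate(n - m) for m in moves(n))

values = {n: 0.0 for n in range(1, 21)}   # the "ANN", here a plain table

for generation in range(10):
    snapshot = dict(values)               # frozen copy used as the evaluator
    def evaluate(n, snap=snapshot):
        return -1.0 if n == 0 else snap[n]
    for n in range(1, 21):                # "train" against the 1-ply targets
        values[n] = search(n, evaluate)

# Nim theory: multiples of 4 are lost for the side to move
print([n for n in range(1, 21) if values[n] < 0])  # -> [4, 8, 12, 16, 20]
```

The learned table converges to the game-theoretic values without any single deep search, which is the heart of the proposal; whether an approximate learner like an ANN stays accurate enough over many generations is the open question.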

Paul Onstad

Apr 2, 2002, 8:27:27 PM

It would seem that would mean there's something special about a position
itself that could "give it away" some number of plies down the road without
"calculating." I can imagine such positions but they account for only a
minor percentage of what's in chess....perhaps a king attack with an
abundance of enemy pieces on that side of the board.

I see no likelihood in expecting a simulation of a 100-ply depth since the
attempt is only a bit past what can actually be calculated....even while
that "bit," in short duration, has yet to be obtained in demonstration.

I doubt a GM could "explain" most moves he or she makes in a given
positional game. (They would have reasons for making the moves but not an
explanation.) Certainly there is no component in human play of thinking 100
plies ahead unless that means an intention to win the game.


Here's another way of thinking of it. Could your ANN (whatever that is :)
observe numbers and spot those that would be prime?

-Paul

Jet Nebula

Apr 2, 2002, 9:16:49 PM
On Tue, 02 Apr 2002 19:27:27 -0600, Paul Onstad <pon...@visi.com>
wrote:

>It would seem that would mean there's something special about a position
>itself that could "give it away" some number of plies down the road without
>"calculating." I can imagine such positions but they account for only a
>minor percentage of what's in chess....perhaps a king attack with an
>abundance of enemy pieces on that side of the board.

Isn't that the spirit of the static evaluation function, though? That
there are special features of a position that give away which side has
the advantage?

>I see no likelihood in expecting a simulation of a 100-ply depth since the
>attempt is only a bit past what can actually be calculated....even while
>that "bit," in short duration, has yet to be obtained in demonstration.

I'm not sure what you mean, here.

>I doubt a GM could "explain" most moves he or she makes in a given
>positional game. (They would have reasons for making the moves but not an
>explanation.)

Well, the ANN couldn't, either. :)

>Certainly there is no component in human play of thinking 100
>plies ahead unless that means an intention to win the game.
>
>Here's another way of thinking of it. Could your ANN (whatever that is :)

ANN is an abbreviation for "artificial neural network." I'm thinking
of the standard backprop job with 1 input layer, 1 hidden layer, and 1
output layer.
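
A minimal version of that standard three-layer backprop network looks like the sketch below. XOR is used here purely as a stand-in training task (nothing from the thread); a chess evaluator would just have wider input and hidden layers.

```python
# Minimal 3-layer (input/hidden/output) network trained with backprop.
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8))    # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))    # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass: squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))          # should approach [0, 1, 1, 0]
```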

>observe numbers and spot those that would be prime?

Good question! I don't know--I've never read anything about people
using ANNs to verify the primeness of numbers.

Alejandro Dubrovsky

Apr 2, 2002, 11:14:13 PM
On Wed, 03 Apr 2002 09:32:37 +1000, Jet Nebula wrote:

This was my original intention when I started coding my chess engine, but
I got distracted along the way (well, I was planning to go in steps of
4-ply, and only iterate a couple of times, say until virtual-ply = 12,
but even one iteration would have been fine). I still plan to try it
out at some stage, but don't hold your breath.

Btw, in case you didn't know, this method (well, not exactly this, but
fitting a function of the current state to guess the output of the minimax)
was tried successfully in checkers in the '60s, the resulting program (I
cannot remember its name) playing at a very high level (beating the world
human champion? my memory fails me)

Alejandro

Paul Onstad

Apr 3, 2002, 8:58:58 AM
Jet Nebula wrote:
>
> On Tue, 02 Apr 2002 19:27:27 -0600, Paul Onstad <pon...@visi.com>
> wrote:
>
> >It would seem that would mean there's something special about a position
> >itself that could "give it away" some number of plies down the road without
> >"calculating." I can imagine such positions but they account for only a
> >minor percentage of what's in chess....perhaps a king attack with an
> >abundance of enemy pieces on that side of the board.
>
> Isn't that the spirit of the static evaluation function, though? That
> there are special features of a position that give away which side has
> the advantage?

Yes, sometimes, but not at times when a superior static evaluation would be
most useful--IOW, when traditional calculation offers little
differentiation. The fact the ANN (got it now) is taught by calculation
means much of chess remains bland and gray--to a computer.

Once the evaluation slips off the plateau of equality, becomes tactical,
then traditional calculation would show the same superiority it has now.

> >I see no likelihood in expecting a simulation of a 100-ply depth since the
> >attempt is only a bit past what can actually be calculated....even while
> >that "bit," in short duration, has yet to be obtained in demonstration.
>
> I'm not sure what you mean, here.

..Just that the problem might be better evaluated in a few plies and
confirmed before expecting results at some depth that would make a
difference.

> >I doubt a GM could "explain" most moves he or she makes in a given
> >positional game. (They would have reasons for making the moves but not an
> >explanation.)
>
> Well, the ANN couldn't, either. :)

I guess that's right. Yet there's the idea of a lifetime's experience,
summoning specific rules and heuristics, going on in one case, while the
other is a dumb hunch--perhaps based on nothing more than "this worked
yesterday."

I think the difficulty would come from a margin of error which would be
multiplied (going for depth) until there was no relationship between the
initial position and the evaluation. If the start were 90% error and 10%
significance (ratios on such order), then it breaks down quickly. (I don't
understand then how you could speak of the 100th ply.)

A slightly related analogy is using the last cut fencepost to determine the
length of the next. By the time one has encircled the field, there's a post
that is much taller/shorter than its starting neighbor.

[could the ANN...]


> >observe numbers and spot those that would be prime?
>
> Good question! I don't know--I've never read anything about people
> using ANNs to verify the primeness of numbers.

Well, it's a corresponding problem--calculation vs. single state
observation. An ANN would quickly learn that numbers ending in 2, 4, 6...
were not prime but the ratio of significance would not build much from
there. IOW--99% error and 1% significance.

-Paul

henri Arsenault

Apr 3, 2002, 9:02:58 AM
In article <l6okaukv4aq681t8u...@4ax.com>, Jet Nebula
<thi...@spamfreezone.com> wrote:

> Good question! I don't know--I've never read anything about people
> using ANNs to verify the primeness of numbers.

All prime numbers satisfy certain mathematical properties (I don't
remember exactly what they are), so not all numbers actually have to be
tried, only candidates satisfying the conditions. More difficult than
finding primes is finding the two prime factors of a product of two prime
numbers, which is what is used in the secure two-way encryption
procedures.

I don't think neural networks have anything to contribute here.

As for the procedures proposed for NN chess, in addition to the difficulty
of "evaluating" mentioned by some, it is not clear to me that the number
of operations to reach, say, 100-ply is any smaller than that required for,
say, alpha-beta pruning.

Henri

Anders Thulin

Apr 3, 2002, 11:15:59 AM
Simon Waters wrote:


> Being picky but Chess is always a perfect information game, in
> the sense both parties always know the complete state of the
> board,


Perfect in the sense of game theory. But that is not necessarily
relevant for training an ANN ...

For such training you need a small set of relevant input states,
*as*well*as* the wanted output from the ANN. If the former
correspond to the current state of the board, the latter
correspond to the correct move(s) in this position.

Only with endgame databases do we have that information,
though it's easy to see a problem: there may be several
theoretically best moves (shortest path to win), as well
as several 'good enough' moves (moves that don't risk losing
the best outcome).

Add to that the problem that the 'perfect information' you have
-- the state of the board -- may not be the best input domain to the
ANN. It may need to be transformed to other domain(s) to be most useful,
such as the factors of the scoring function, perhaps.

Simon Waters

Apr 3, 2002, 12:00:31 PM
Anders Thulin wrote:
>
> For such training you need a small set of relevant input states,
> *as*well*as* the wanted output from the ANN. If the former
> correspond to the current state of the board, the latter
> correspond to the correct move(s) in this position.

You might end up training a neural network that could only play
"simple" positions.

Reminds me of the neural net that learnt to recognise the labels
on the pictures it was being trained with ;)

> Only with endgame databases do we have that information,
> though it's easy to see a problem: there may be several
> theoretically best moves (shortest path to win), as well
> as several 'good enough' moves (moves that don't risk losing
> the best outcome).

Perhaps the games of most top GMs are of suitable quality for
predicting the next move in each position. Okay, top GMs blunder
as well, but most of your training information will be good.



> Add to that the problem that the 'perfect information' you have
> -- the state of the board -- may not be the best input domain to the
> ANN. It may need to be transformed other domain(s), to be most useful,
> such as the factors of the scoring function, perhaps.

If you go too far down this route, you end up with something
akin to an algorithm to tune your positional evaluation
terms... If you simplify the problem too much up front, you end
up wondering if perhaps you should have just performed some sort
of statistical analysis to determine the best possible
weightings.

Will Dwinnell

Apr 15, 2002, 9:56:15 AM


I imagine that at least some of the problem has been a naive
application of neural networks. More than once, I have seen it
suggested that the actual, raw board position be fed into a neural
network as an evaluation function. This strikes me as a dreadfully
flawed approach, since even a tiny difference (one piece moved by one
square) can make a large difference in the prospects of each side, and
while neural networks can learn to turn on a dime, I think there are
far too many little hills and valleys in the space of board positions
to make this practical.

I would imagine that feeding a neural network a series of heuristics
(material measures, control of center 4 squares, castled yet?, etc.)
as a position evaluator at the end of a game tree search would fare
much better.
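
That heuristic-input idea could be sketched as follows. Everything here is hypothetical illustration, not a real engine: the board is encoded as lists of (piece, square) pairs with zero-based (rank, file) squares, the feature list is the one suggested above, and a fixed linear scorer stands in for a trained network that a game-tree search would call at its leaves.

```python
# Sketch: map a position to a few heuristic features instead of raw squares.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}   # the 4 central squares

def features(white, black, white_castled, black_castled):
    """Heuristic vector: material balance, center control, castling."""
    material = (sum(PIECE_VALUE.get(p, 0) for p, _ in white)
                - sum(PIECE_VALUE.get(p, 0) for p, _ in black))
    center = (sum(sq in CENTER for _, sq in white)
              - sum(sq in CENTER for _, sq in black))
    return [material, center, int(white_castled) - int(black_castled)]

# A trained net would consume this vector; here a fixed linear stand-in.
WEIGHTS = [1.0, 0.1, 0.3]

def evaluate(white, black, wc, bc):
    return sum(w * f for w, f in zip(WEIGHTS, features(white, black, wc, bc)))

white = [("P", (4, 4)), ("N", (5, 2)), ("Q", (0, 3))]
black = [("P", (3, 3)), ("B", (2, 5))]
print(evaluate(white, black, True, False))  # -> 9.3
```

The point of the encoding is that nearby positions now map to nearby feature vectors, sidestepping the "hills and valleys" of raw board space.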

Will Dwinnell

Apr 15, 2002, 10:00:31 AM
Anders Thulin <a...@algonet.se> wrote:
"For such training you need a small set of relevant input states,
*as*well*as* the wanted output from the ANN."

This is not true. There are machine learning systems, both neural
(such as adaptive critic models) and otherwise (like BOXES), which can
learn after a series of decisions have been made.

Will Dwinnell

Apr 15, 2002, 10:17:47 AM
ars...@nospam.phy.ulaval.ca (henri Arsenault) wrote:
"Typical moderately simple back-propagation calculations can require
upwards of 50,000 iterations before reaching an acceptable solution."


I'm not sure what sort of problems you've been working on, but my
experience (largely with commercial and industrial data) is that
backpropagation-trained MLPs (in contrast to popular belief) typically
train in a few hundred if not a few tens of training passes through
the data. To be perfectly clear, I am talking about backprop with the
usual variations (momentum, adaptive learning rate, etc.). Regardless,
there are other modeling algorithms available (indeed many other
neural architectures) than backpropagation-trained MLPs.

ars...@nospam.phy.ulaval.ca (henri Arsenault) continues:


"So a complex problem like chess with exponentially increasing
complexity seems way outside the capabilities of neural networks, at
least for the moment."


Could you explain precisely what you mean by "exponentially increasing
complexity"?

ars...@nospam.phy.ulaval.ca (henri Arsenault) continues:


"It is not clear to me even how to write out the chess-solving problem
in terms of a system of equations. The only approach I can see is for
a neural network to try out moves at random and throw out moves
that lead to inferior positions, but I can't see how this could
compete even with alpha-beta pruning which is used now."


I would think that a neural network applied as an evaluation function
at the end of a game tree search might be useful. Either way, I agree
that the direct application of the board position as inputs to a
neural network, with the expectation that the optimal move be
generated as output is unrealistic.

Will Dwinnell

Apr 15, 2002, 10:25:01 AM
Paul Onstad <pon...@visi.com> wrote:
"I doubt a GM could "explain" most moves he or she makes in a given
positional game. (They would have reasons for making the moves but not
an explanation.)"

It is not clear to me why you believe that such explanations would be
necessary for training a neural network from GM games. (?)

Paul Onstad <pon...@visi.com> wrote:
"Here's another way of thinking of it. Could your ANN (whatever that
is :) observe numbers and spot those that would be prime?"

ANN stands for "artificial neural network", which is in the title of
this thread. I don't think neural networks would be a good candidate
for the evaluation of numbers as primes, but I'm not sure what you're
driving at.

Paul Onstad

Apr 15, 2002, 11:16:01 AM
Will Dwinnell wrote:
>
> Paul Onstad <pon...@visi.com> wrote:
> "I doubt a GM could "explain" most moves he or she makes in a given
> positional game. (They would have reasons for making the moves but not
> an explanation.)"
>
> It is not clear to me why you believe that such explanations would be
> necessary for training a neural network from GM games. (?)

Well, you said it yourself in another post, heuristics. If you can't explain
it, you can't program it. In that sense you might as well go back to
observing the full board and rely on "dumb learning"....which is what some
hard-AI folks think intelligence is. If, OTOH, you incorporate lower level
heuristics such as there are, then that's conventional programming all over
again....which is what some hard-AI folks think intelligence is :)

When an "AI" comp plays and beats a rank amateur--and an inspection of the
code reveals no hidden engine--then AI detractors shall have to reconsider.

-Paul (unreconsidered at present)

Will Dwinnell

Apr 15, 2002, 12:57:00 PM
Paul Onstad <pon...@visi.com> wrote:
"An ANN would quickly learn that numbers ending in 2, 4, 6... were not
prime but the ratio of significance would not build much from there."

I'm not clear on how this relates to playing chess, but the above
statement is only likely to be true if some sort of extra information
is supplied regarding the last digit. Feeding only integer values to
most neural networks and training to identify prime numbers would not
result in the identification of even numbers as non-prime.

Traveler

Apr 15, 2002, 1:06:26 PM
In article <gqeiau07esqkg76ci...@4ax.com>, Jet Nebula
<thi...@spamfreezone.com> wrote:

Conventional ANNs are unsuited for intelligent tasks that require
temporal learning and reasoning. The ANN community seems to go out of
its way to ignore all advances in neurobiology that took place over
the last 100 years as ANNs bear little resemblance to biological
neurons. Traditional ANNs are a joke in this regard.

The real exciting work in AI is no longer happening in the GOFAI
community but in the relatively new field known as computational
neuroscience. Biologically-plausible NNs are called 'spiking neural
networks' or 'pulsed networks.' They are based on the precise temporal
relationships between the discrete spikes generated by neurons.

The link below is part of an ongoing project that uses a spiking
neural network to learn chess from scratch. There is no search tree,
no alpha-beta pruning and no position evaluation function. Feel free
to download the executable on the site. For more info on spiking
neural networks, just do a search on Google.com.

Nemesis

Temporal Intelligence:
http://home1.gte.net/res02khr/AI/Temporal_Intelligence.htm

Anders Thulin

Apr 15, 2002, 1:41:23 PM
Will Dwinnell wrote:


That is true, but why use them? Is there any reason to? We
have the complete set of chess positions for 3/4/5-piece endgames,
and we have the correct evaluations (with certain restrictions as
regards castling). We have the input, and we have the output.

Paul Onstad

Apr 15, 2002, 2:12:50 PM

You'll have to go back for context, but in an initial sense the "look" of a
prime number has no more relationship to numbers that are prime than a
present chess position would to an evaluation at the 100th ply. (We may
assume the chess position is "equal," else there would be no reason to look
that far ahead.)

-Paul

Russell

Apr 15, 2002, 7:47:30 PM
Let's analyze this situation. An artificial neural network is a loose model
of a human brain. Considering that fact, let's look at how well we could
expect it to play chess and learn.

A human is born not knowing anything, not how to speak, walk, etc. A human's
brain is much more complex than a neural network, and it takes many years to
even reach the point where playing chess would be feasible. At that point a
human brain has to learn how to play the game, and it will likely take many
more years before the human is good at the game. In addition to that, very
few actually become grandmasters. Now let's see how this applies to neural
networks.

1. We're starting off with an inferior neural net model when compared to the
human brain.

2. Even IF we had a sufficiently correct model, it would take years before
the program was even capable of learning the rules of chess.

3. At that point, it would take even many more years for the program to
become good.

So until computers become faster than our brains by around a factor of, say,
10,000, we can't even begin to test the proper implementation of a
neural net in a computer program. It would take at least 20 years to even
find out if the neural net worked correctly or not. If you're lucky that
means you could do FOUR experiments in your lifetime. But we're forgetting
that we don't even have a thorough model of how the human brain works to
begin with, so it will likely never happen until we learn more about the
brain.

In addition to that, it is quite possible that the human brain is the
world's fastest computer, and that while we can't crunch numbers like a
computer can, we can process certain kinds of information better than
computers. Speech recognition or image recognition, and others, are some
examples where humans are clearly superior to computers. Computers are
getting better though, but even still, they're pretty far behind.

Russell


Will Dwinnell

Apr 15, 2002, 8:03:49 PM
Anders Thulin <a...@algonet.se> wrote:
"For such training you need a small set of relevant input states,
*as*well*as* the wanted output from the ANN."

Will Dwinnell answered:


"This is not true. There are machine learning systems, both neural
(such as adaptive critic models) and otherwise (like BOXES), which can
learn after a series of decisions have been made."

Anders Thulin <a...@algonet.se> responded:


"That is true, but why use them? Is there any reason to? We have the
complete set of chess positions for 3/4/5-piece endgames, and we have
the correct evaluations (with certain restrictions as regards
castling). We have the input, and we have the output."


According to the very next line in your first message which I quoted,
"Only with endgame databases do we have that information, ..." Would
it not then be useful to apply a learning system which does not
require immediate feedback?

Anders Thulin

Apr 16, 2002, 1:03:57 PM
Will Dwinnell wrote:


> According to the very next line in your first message which I quoted,
> "Only with endgame databases do we have that information, ..." Would
> it not then be useful to apply a learning system which does not
> require immediate feedback?


For what purpose? You still need to verify that it produces the
correct responses.


Within the domain covered by endgame databases, this is trivial
and can be done very cheaply.

henri Arsenault

Apr 16, 2002, 4:23:47 PM
In article <2b7b8021.02041...@posting.google.com>,
pred...@bellatlantic.net (Will Dwinnell) wrote:


> Could you explain precisely what you mean by "exponentially increasing
> complexity"?
>

I am referring to the exponential increase in possible number of moves
and/or positions as one looks further ahead.

Henri

Will Dwinnell

Apr 17, 2002, 8:11:20 AM
ars...@nospam.phy.ulaval.ca (henri Arsenault) wrote:
"I am referring to the exponential increase in possible number of
moves and/or positions as one looks further ahead."

I see, but I think that problem goes away if the neural network is
used as an evaluation function for a game tree search.

Will Dwinnell

Apr 17, 2002, 8:17:07 AM
Will Dwinnell wrote:
"According to the very next line in your first message which I quoted,
"Only with endgame databases do we have that information, ..." Would
it not then be useful to apply a learning system which does not
require immediate feedback?"

Anders Thulin <a...@algonet.se> asked:
"For what purpose?"

To train the computer opponent to play the entire game, not just the
endgame.


Anders Thulin <a...@algonet.se> continues:


"You still need to verify that it produces the correct responses."

No, one only needs to verify that the system is effective at playing
chess (on a game-basis), not that it produces some optimal decision
(on a particular turn).


Anders Thulin <a...@algonet.se> continues:


"Within the domain covered by endgame databases, this is trivial and
can be done very cheaply."

Yes, but an effective chess player needs to do more than play the
endgame.

Will Dwinnell

Apr 17, 2002, 8:23:39 AM
Paul Onstad <pon...@visi.com> wrote:
"I doubt a GM could "explain" most moves he or she makes in a given
positional game. (They would have reasons for making the moves but not
an explanation.)"

Will Dwinnell inquired:


"It is not clear to me why you believe that such explanations would be
necessary for training a neural network from GM games. (?)"

Paul Onstad <pon...@visi.com> answered:


"Well, you said it yourself in another post, heuristics. If you can't
explain it, you can't program it. In that sense you might as well go
back to observing the full board and rely on "dumb learning"....which
is what some hard-AI folks think intelligence is. If, OTOH, you
incorporate lower level heuristics such as there are, then that's
conventional programming all over again....which is what some hard-AI
folks think intelligence is :)"


My thought was that even though a GM (and many other players, for that
matter) might not provide a comprehensive analysis of the decision to
make a particular move ("explaining", as you say), heuristics might be
formulated from their "reasons for making the moves".

Will Dwinnell

Apr 17, 2002, 8:42:50 AM
"Russell" <rre...@attbi.com> wrote:
"An artificial neural network is a loose model of a human brain."


This is a very loose analogy. The fact is that very few artificial
neural networks are biologically plausible models. This is not to say
that they are not effective at what they do (typically, though not
always, associating outputs with inputs), but taking this "model of
the brain" analogy too seriously as an assumption will yield
misleading conclusions.

In terms of finding appropriate mappings from input to output, based
on historical data sets, neural networks (and other machine learning
algorithms) often do a very good job, and easily surpass humans at
many narrowly-defined applications.

Paul Onstad

Apr 17, 2002, 9:17:50 AM
Will Dwinnell wrote:

> Will Dwinnell inquired:
> "It is not clear to me why you believe that such explanations would be
> necessary for training a neural network from GM games. (?)"
>
> Paul Onstad <pon...@visi.com> answered:
> Well, you said it yourself in another post, heuristics. If you can't
> explain it, you can't program it. In that sense you might as well go
> back to observing the full board and rely on "dumb learning"....which
> is what some hard-AI folks think intelligence is. If, OTOH, you
> incorporate lower level heuristics such as there are, then that's
> conventional programming all over again....which is what some hard-AI
> folks think intelligence is :)"
>
> My thought was that even though a GM (and many other players, for that
> matter) might not provide a comprehensive analysis of the decision to
> make a particular move ("explaining", as you say), heuristics might be
> formulated from their "reasons for making the moves".

Sure, which is saying that if a GM and a programmer sat down together, they
could make more progress than either one alone.

-Paul

Dr A. N. Walker

Apr 18, 2002, 2:41:31 PM
In article <2b7b8021.02041...@posting.google.com>,

Will Dwinnell <pred...@bellatlantic.net> wrote:
>I would think that a neural network applied as an evaluation function
>at the end of a game tree search might be useful.

My PhD student, Stephen Milner, tried a different approach.
He trained the NN on positional differences between two moves, then
applied this to the first-level moves. This gives the positional
ordering of available moves. Then he stripped out most of the
positional stuff from the evaluation function, giving a v fast
almost-purely tactical tree search. The move played is the best
positional move from among those with equal tactical value.

So, we get a v deep search combined with some positional
awareness. Difficult to claim that it worked *well*, but it worked,
and worked *not badly*.

You don't want to do too many calls to the NN, if it's of
any size, as it slows you down. I suppose that as machines get
faster and bigger, the NN stuff could move further down the tree,
giving ever-more positional awareness to the tactics.

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
a...@maths.nott.ac.uk
