
Neural Nets in Chess? Question to experts.


George R. Barrett

Feb 2, 1997

This one goes out to all of the chess programming experts that
read this newsgroup. It's a bit long winded, but I'd appreciate
sincere responses.

I have no experience with the inner workings of chess programs; however,
I've read some of the postings here and I gather there are a lot of
methods that can be used to attack the problems of generating good
chess moves on a computer. I'm assuming some of these methods are
ad hoc, and some are well founded in dynamic programming theory.
Most, I'm sure (please correct me if I'm wrong), generate a tree of
moves out to some "practical" window length and do some sort of
evaluation of the position reached (a leaf?) by a sequence of moves.
The various methods of leaf evaluations probably all have some good
motivation behind their use (even as simple as "because it works").

Recently there has been a thread suggesting an "Advantage of Knowledge
over Game Tree" (however, I'm sure that the leaf evaluations used by
programmers are based on a knowledge of experience, e.g., "hey! this
eval works! I think I'll keep these parameters!"). I am positive that
a good understanding of chess is fundamental to most, if not all,
evaluations of moves in computer chess.

In one of his responses to the above mentioned thread, Bruce states
something to the effect that we need not model how a human approaches
chess move evaluation in order to obtain a good algorithm. I agree,
but why not take this line of reasoning a step further? Why not
remove any attempt at understanding why a leaf in a move search should
be evaluated as good or bad? That is, take what I said in the
previous paragraph about "a good understanding of chess is fundamental
to evaluations of moves" and throw that line of reasoning away.

For example, why not take every position reached in any recorded
GM level game and use it to train a neural network to do positional
evaluations? Thus, we wouldn't need to understand what the grandmaster
was thinking. We would simply use the fact of whether they won or lost.
Build a game tree and let the neural network evaluate the leaves.
It seems, at least to this non-expert, that the neural net would then
"posess" any knowledge used by grandmasters to win. I've had quite
some experience with neural networks and I think building an appropriate
training set of data is certainly feasible.
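
For concreteness, a minimal sketch of this kind of training-set construction,
written with the modern python-chess and scikit-learn libraries; the PGN
filename, the 768-element piece-placement encoding and the single 64-unit
hidden layer are arbitrary illustrative choices, not a proposal:

    # Label every position from a PGN collection with the game result and
    # fit a small feed-forward net to predict it.  (Illustrative only.)
    import chess
    import chess.pgn
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def encode(board):
        """One-hot piece placement: 64 squares x 6 piece types x 2 colours."""
        planes = np.zeros((64, 6, 2), dtype=np.float32)
        for square, piece in board.piece_map().items():
            planes[square, piece.piece_type - 1, int(piece.color)] = 1.0
        return planes.ravel()                          # 768-element vector

    X, y = [], []
    with open("gm_games.pgn") as pgn:                  # hypothetical file
        while True:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            result = game.headers.get("Result", "*")
            if result not in ("1-0", "0-1"):
                continue                               # skip draws and unfinished games
            label = 1 if result == "1-0" else 0        # did White win?
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                X.append(encode(board))
                y.append(label)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=50)
    net.fit(np.array(X), np.array(y))                  # the "positional evaluator"

The trained net's predict_proba would then serve as the leaf evaluation inside
whatever search you already have.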

Ok, now that the non-expert has spoken, here's my question to the
experts: Has the ability of neural networks to "learn" information
presented in a training data set ever been used in chess? if so,
what task was the neural network used for? what were the results?

Like I said, it's a bit long winded, but I'd appreciate responses
both good and bad.

George

Jay Scott

Feb 3, 1997

In article <32F54...@eecs.umich.edu>, "George R. Barrett" (grba...@eecs.umich.edu) wrote:
>For example, why not take every position reached in any recorded
>GM level game and use it to train a neural network to do positional
>evaluations? Thus, we wouldn't need to understand what the grandmaster
>was thinking. We would simply use the fact of whether they won or lost.
>Build a game tree and let the neural network evaluate the leaves.
>It seems, at least to this non-expert, that the neural net would then
>"posess" any knowledge used by grandmasters to win. I've had quite
>some experience with neural networks and I think building an appropriate
>training set of data is certainly feasible.

The two popular ideas are learning by self-play with a temporal
difference method, a la backgammon programs, and learning from
grandmaster games as you suggest. Each has advantages and
disadvantages. Neither one has worked well in chess, so far.
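
For reference, the temporal-difference idea in its simplest TD(0) form, with a
plain dictionary standing in for the network's value function; the learning
rate and the 0.5 default value are arbitrary choices:

    def td_update(values, positions, result, alpha=0.1):
        """positions: states visited in order of play; result: 1, 0.5 or 0 for White."""
        values[positions[-1]] = result                 # terminal position gets the game result
        for s, s_next in zip(positions[:-1], positions[1:]):
            v = values.get(s, 0.5)
            v_next = values.get(s_next, 0.5)
            values[s] = v + alpha * (v_next - v)       # nudge V(s) toward V(s')

Over many self-play games the values drift toward the eventual results, which
is the backgammon-style scheme in miniature.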

>Has the ability of neural networks to "learn" information
>presented in a training data set ever been used in chess? if so,
>what task was the neural network used for? what were the results?

See the pages about NeuroChess and SAL on my web site, address
in my signature.

Jay Scott <j...@forum.swarthmore.edu>

Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html

graham_douglass

Feb 3, 1997

In article <32F54...@eecs.umich.edu>, "George says...

>
>This one goes out to all of the chess programming experts that
>read this newsgroup. It's a bit long winded, but I'd appreciate
>sincere responses.

I'm not an expert, but I'm throwing my oar in for two reasons:

1. It's a very exciting subject

2. I doubt if anyone can sincerely say they have expertise in this area

>
>I have no experience with the inner workings of chess programs; however,
>I've read some of the postings here and I gather there are a lot of
>methods that can be used to attack the problems of generating good
>chess moves on a computer. I'm assuming some of these methods are
>ad hoc, and some are well founded in dynamic programming theory.
>Most, I'm sure (please correct me if I'm wrong), generate a tree of
>moves out to some "practical" window length and do some sort of
>evaluation of the position reached (a leaf?) by a sequence of moves.
>The various methods of leaf evaluations probably all have some good
>motivation behind their use (even as simple as "because it works").

That's about it in a nutshell.

>
>Recently there is a thread that suggests an "Advantage of Knowledge
>over Game Tree" (however, I'm sure that the leaf evaluations used by
>programmers are based on a knowledge of experience, e.g., "hey! this
>eval works! I think I'll keep these parameters!") I am positive that
>a good understanding of chess is fundamental to most, if not all,
>evaluations of moves in computer chess.

There is knowledge in the eval functions. In my opinion (based on years of
studying the topic), the key problem is that knowledge which is beneficial in
position 1 may be a liability in position 2. What is really needed for
Grandmaster-level chess is unique knowledge for about 50,000 different
positions (a figure from the book "Chess Skill In Man And Machine"), and a
way of determining which position type we are in.
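
As a toy illustration of what "knowledge keyed to position type" might look
like, with the classifier, the feature names and the two-entry table standing
in for what would really be tens of thousands of entries:

    KNOWLEDGE = {
        "open_centre":   {"bishop_pair": 0.5, "rook_on_open_file": 0.3},
        "closed_centre": {"bishop_pair": 0.1, "knight_outpost": 0.4},
    }

    def position_type(features):
        # placeholder classifier; deciding this well is the hard part
        return "closed_centre" if features.get("centre_pawns", 0) >= 4 else "open_centre"

    def evaluate(features):
        terms = KNOWLEDGE[position_type(features)]
        return sum(w * features.get(name, 0) for name, w in terms.items())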

>
>In one of his responses to the above mentioned thread, Bruce states
>something to the effect that we need not model how a human approaches
>chess move evaluation in order to obtain a good algorithm. I agree,

Well, I disagree. Yes, the basic game tree beats most players, but it doesn't
seem to hurt the likes of Nigel Short at tournament speeds.

>but why not take this line of reasoning a step further? Why not
>remove any attempt of understanding why a leaf in a move search should
>be evaluated good or bad? That is, take what I said in the
>previous paragraph about "a good understanding of chess is fundamental
>to evaluations of moves" and throw that line of reasoning away.
>

>For example, why not take every position reached in any recorded
>GM level game and use it to train a neural network to do positional
>evaluations? Thus, we wouldn't need to understand what the grandmaster
>was thinking. We would simply use the fact of whether they won or lost.
>Build a game tree and let the neural network evaluate the leaves.
>It seems, at least to this non-expert, that the neural net would then
>"posess" any knowledge used by grandmasters to win. I've had quite
>some experience with neural networks and I think building an appropriate
>training set of data is certainly feasible.

Yes yes yes yes yes!

This is the great benefit of fuzzy logic systems - they beat the problem of
transferring knowledge from human to machine - the biggest problem in AI!

The data is all there. There are hundreds of thousands of games in ChessBase
and Chess Assistant, Rebel Silver is also a chess database (correct me if I'm
wrong), and there are others out there.

I do see a drawback though. The human neural network is estimated to have
10^11 cells, each connected to an average of 10^4 other cells, making a total of
10^14 connections.

Now, your neural simulator does not need anything like this amount (it does not
have to do visual simulation of the position for example), but it is quite
likely to have to store an awful lot of data.

This idea will be well worth trying if we are ever to achieve grandmaster
level (which has not been achieved by any method yet), but to see if
contemporary computers can reach grandmaster level, the question we have to
ask is this: is your neural net able to store 50,000 position patterns in
the foreseeable future?

>
>Ok, now that the non-expert has spoken, here's my question to the
>experts: Has the ability of neural networks to "learn" information
>presented in a training data set ever been used in chess? if so,
>what task was the neural network used for? what were the results?
>

Komputer Korner

Feb 3, 1997

George R. Barrett wrote:
>
snipped

> For example, why not take every position reached in any recorded
> GM level game and use it to train a neural network to do positional
> evaluations? Thus, we wouldn't need to understand what the grandmaster
> was thinking. We would simply use the fact of whether they won or lost.
> Build a game tree and let the neural network evaluate the leaves.
> It seems, at least to this non-expert, that the neural net would then
> "posess" any knowledge used by grandmasters to win. I've had quite
> some experience with neural networks and I think building an appropriate
> training set of data is certainly feasible.
>
> Ok, now that the non-expert has spoken, here's my question to the
> experts: Has the ability of neural networks to "learn" information
> presented in a training data set ever been used in chess? if so,
> what task was the neural network used for? what were the results?
>
> Like I said, it's a bit long winded, but I'd appreciate responses
> both good and bad.
>
> George

Dap Hartmann has had some experience with this. Actually the learning
programs are attempting to do this, but as Ed Schroder says, sometimes
the program learns the wrong things. There is nothing new here.
--
Komputer Korner
The komputer that kouldn't keep a password safe from
prying eyes, kouldn't kompute the square root of 36^n,
kouldn't find the real Motive and variation tree in
ChessBase, kouldn't compute the proper time in 2 variation
mode, missed the Hiarcs functionality in Extreme
and also misread the real learning feature of Nimzo.

Tom C. Kerrigan

Feb 4, 1997

Not that I'm any sort of expert...

George R. Barrett (grba...@eecs.umich.edu) wrote:

> but why not take this line of reasoning a step further? Why not
> remove any attempt of understanding why a leaf in a move search should
> be evaluated good or bad? That is, take what I said in the

Because chess programs with this tree/leaf structure have a division of
work theme going. The search tries to find quiet positions where the
evaluation function will be effective. If you were to use a neural network
to replace eval(), you would need some method of teaching it without
giving it tactical influences. I have no idea how to do this. It sounds
like a very difficult problem, from what I know of the subjects.
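
To make the division of work concrete, here is a sketch of a capture-only
quiescence search with the evaluator left as a plug-in slot; it uses the
modern python-chess library, and eval_fn is whatever static evaluator
(hand-crafted or neural) you supply, scored from the side to move:

    import chess

    def quiesce(board, alpha, beta, eval_fn):
        stand_pat = eval_fn(board)                       # score the (hopefully quiet) node
        if stand_pat >= beta:
            return beta
        alpha = max(alpha, stand_pat)
        for move in list(board.generate_legal_captures()):  # tactical moves only
            board.push(move)
            score = -quiesce(board, -beta, -alpha, eval_fn)
            board.pop()
            if score >= beta:
                return beta
            alpha = max(alpha, score)
        return alpha

The search keeps extending captures, so whatever sits in eval_fn only ever
sees reasonably quiet positions, which is exactly the slot a trained network
would have to fill.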

Not to shoot your idea down... "*I* have no idea how to do this"...

Cheers,
Tom

Simon Read

Feb 4, 1997

In article <32F54...@eecs.umich.edu>, "George says...
>>
>>For example, why not take every position reached in any recorded
>>GM level game and use it to train a neural network to do positional
>>evaluations? [...]

>>Build a game tree and let the neural network evaluate the leaves.
>>It seems, at least to this non-expert, that the neural net would then
>>"posess" any knowledge used by grandmasters to win. I've had quite
>>some experience with neural networks and I think building an appropriate
>>training set of data is certainly feasible.


I have studied this problem. There are a few issues to address. If you
assume for now that it's a feed-forward net (so we define some inputs and
some outputs), the following design issues are relevant:

(1) What is your network going to compute? Here are some choices:
* Probability of winning, given a position
* distance to win, or some function of distance to win, given a position
* evaluation based on equivalence to material value (most current
chess programs seem to use this), given a position (see the sketch
just after this list)
* best move: this is a tricky one. You input the position, and the
network outputs the move BUT that means it has to understand
legal moves and it isn't compatible with the game tree approach.
It also effectively treats all positions as of equal complexity,
since they all get one run of the network to produce a move.
* evaluation based on position and proposed move, rather than just
position. A legal move generator supplies the position and a
set of moves; the network computes a score for each proposed
move.
* a score for each piece/square: effectively a piece/square table.
The legal move generator then makes the move which best matches
this. This means that the network has 64x13 outputs instead of 1.
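
A minimal sketch of that scalar-evaluation option; the 768-element input
encoding and the 64-unit hidden layer are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.1, (64, 768)), np.zeros(64)   # input -> hidden
    W2, b2 = rng.normal(0, 0.1, (1, 64)), np.zeros(1)      # hidden -> scalar score

    def evaluate(x):
        """x: 768-element position encoding; returns one centipawn-style score."""
        h = np.tanh(W1 @ x + b1)
        return (W2 @ h + b2).item()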

(2) What part does your network play in the complete chess-player?
Does it compute legal moves, or does it merely evaluate the goodness
of a position? With current chess programs, the making-moves part is
separate from the evaluating-positions part.

(3) How much time are you prepared for your network to take to evaluate
a position? Position evaluators in most chess programs add up some terms
to get a score. This is a lot simpler than the computations performed
in a neural network. A network performs as many multiplications as
additions (approximately) so it will take quite a lot longer to produce
an evaluation than a hand-crafted evaluation function. This extra time
may be acceptable if the network's evaluations are of high quality.
Maybe a floating-point co-processor would be useful here.
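
A back-of-envelope comparison under assumed sizes, counting multiply-accumulate
operations for a 50-term hand-crafted evaluation against one forward pass of
the 768-64-1 net sketched above:

    handcrafted_terms = 50
    net_macs = 768 * 64 + 64 * 1          # = 49,216 per position
    print(net_macs / handcrafted_terms)   # roughly a factor of 1000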

Graham Douglass:


>I do see a drawback though. The human neural network is estimated to have
>10^11 cells, each connected to an average of 10^4 other cells,
>making a total of 10^14 connections.

:) x 10^15


Graham Douglass:


> but to see if contemporary
>computers can reach grandmaster level, the question we have to ask is this: is
>your neural net able to store 50,000 position patterns in the foreseeable
>future?

One awkward question is: are there really any distinct patterns, or do
they all have overlaps and fuzzy boundaries? You mentioned fuzzy logic
which may well be part of the answer.

The answer to your question is pretty much "yes", if there are distinct
patterns, but now we have to calculate the time taken for the network to
operate, as follows.

You could use a Hopfield net. They are designed for pattern-recognition.
The auto-associators are presented with an incomplete pattern and after
a few iterations they converge to the best match. For 50,000 patterns
you would need say about 200,000 neurons in a Hopfield net. For every
connection between a neuron and another neuron you need one multiply
and add step. Let's say this network is connected in a sparse manner
so each neuron has 1000 connections to other neurons. That means 2x10^8
multiply-accumulate operations for each pattern-recognition iteration,
and you need a few of those to successfully recognise a pattern, more if
the pattern you are interested in is very similar to another pattern.
This puts some limit on the speed with which you can recognise patterns.
Presumably this is equivalent to an upper limit on the number of position
nodes you can process per second.
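
A toy-scale sketch of such an auto-associator, with Hebbian storage and
threshold updates, on a handful of +/-1 patterns rather than 50,000:

    import numpy as np

    def train(patterns):                      # patterns: list of +/-1 vectors
        n = len(patterns[0])
        W = sum(np.outer(p, p) for p in patterns).astype(float) / n
        np.fill_diagonal(W, 0.0)              # no self-connections
        return W

    def recall(W, probe, max_iters=10):
        s = probe.copy()
        for _ in range(max_iters):            # one multiply-add per connection per sweep
            new = np.where(W @ s >= 0, 1, -1)
            if np.array_equal(new, s):
                break                         # settled, ideally on the nearest stored pattern
            s = new
        return s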

Then there are hetero-associators. Given a complete pattern, they
associate that with another pattern. Humans seem to do a lot of that.
Given a queen and knight in a certain position, I think "Philidor's
legacy - smothered mate." That's associating one pattern with another.
The computing time for this is usually one iteration for one
association, but may be more if the pattern in question looks similar
to another. This sort of association may also be used to associate a
suggested move with a position.

>>Ok, now that the non-expert has spoken, here's my question to the
>>experts: Has the ability of neural networks to "learn" information
>>presented in a training data set ever been used in chess? if so,
>>what task was the neural network used for? what were the results?

I think Jay Scott's page has a bit on neural learning, but you have to
hunt for it:

Schmidt, Martin - Neural Networks and Chess -
(presentation of the paper is a little rough. The network
doesn't play chess very well. He explores various ways of
representing the input data. Feedforward net, no pattern-
recognition stuff)


and another one from somewhere else:

Thrun, Sebastian - Learning to Play the Game of Chess - to appear in
Advances in Neural Information Processing Systems 7

Downloadable from the internet - http and ftp site
http://www.informatik.uni-bonn.de/~thrun/publications.html
this site has many neural network learning papers.
(a neural network learns to play chess. It sometimes beats GNUchess.
GNUchess requires about 100 times less CPU time.
Feedforward net, no pattern-recognition stuff.)


Should we keep this to ourselves, or should we whisper it to the
comp.ai.neural-nets people?


/ cranfield.ac.uk /
Simon / @ / Spam, spam, spam, spam, lovely spam.
/ s.read /


graham_douglass

Feb 6, 1997

This is a brilliant post - visionary and well informed.

I have printed it and filed it, and after 18 months of following r.g.c.c., I
regard it as the single best post I have ever seen.

More more more!

In article <32f73...@news.cranfield.ac.uk>, Simon says...

Oops!

George R. Barrett

Feb 12, 1997

Graham Douglass wrote:
>
> In article <32f73...@news.cranfield.ac.uk>, Simon says...
> >
> >In article <32F54...@eecs.umich.edu>, "George says...
> >>>
> >>>For example, why not take every position reached in any recorded
> >>>GM level game and use it to train a neural network to do positional
> >>>evaluations? [...]
> >>>Build a game tree and let the neural network evaluate the leaves.
> >>>It seems, at least to this non-expert, that the neural net would
[much snipping]

> >may be acceptable if the network's evaluations are of high quality.
> >Maybe a floating-point co-processor would be useful here.

Yes, RISC certainly doesn't do the job.

> >
> >Graham Douglass:
> >>I do see a drawback though. The human neural network is estimated to have
> >>10^11 cells, each connected to an average of 10^4 other cells,
> >>making a total of 10^14 connections.
> >
> >:) x 10^15

We're not trying to build a human brain.

> >
> >
> >Graham Douglass:
> >> but to see if contemporary
> >>computers can reach grandmaster level, the question we have to ask is this: is
> >>your neural net able to store 50,000 position patterns in the foreseeable
> >>future?

[much snipping and gnashing of teeth]

50,000 positions? Where did you get such a low number? Current
trees are millions of positions.

> >
> >You could use a Hopfield net. They are designed for pattern-recognition.
> >The auto-associators are presented with an incomplete pattern and after
> >a few iterations they converge to the best match. For 50,000 patterns
> >you would need say about 200,000 neurons in a Hopfield net. For every
> >connection between a neuron and another neuron you need one multiply
> >and add step. Let's say this network is connected in a sparse manner
> >so each neuron has 1000 connections to other neurons. That means 2x10^8

We wouldn't try to memorize positions with the NN. If that is all that
I was interested in, I'd build a lookup table and use hash codes.

few things: 1) clearly, we wouldn't use the inefficient hopfield model
because we're not trying to memorize patterns.
Besides, a 50000 dim. space can be shattered with log
that number of neurons not 200000 neurons!
2a) We don't want to memorize patterns. We want the NN to
extract what it means for a position to be good vs. bad
(based on the outcome of the game?) .
I doubt that any evaluation functions currently in use
come even close to the computational power of
log(50000) neurons (still don't know where you get
50000 though) in a simple feedforward net.
2b) Again, 50000 positions? I'm talking of using every
position ever recorded in all GM games. That is well
over 50000. Let the neural net see them all.
3) I'm not suggesting that the computational complexity
results in a faster method of evaluation of a single
position. The idea does lend itself to ease of
parallel implementation, though. I'm suggesting that
such a method might drastically reduce the number of
positions that need to be examined, and quite possibly the
NN (since it can map *any* function) would give far
better evaluation of leaves.
4) I agree that many of the people in this newsgroup
probably don't want to attempt something so far from
standard methodology, but then everyone is entitled to
their own ideas. It appears that what the programmers
are currently doing is working quite well (and congrats
are in order); however, I doubt that even DB2 will defeat
Kasparov using current methods.

In my original post, I was also fishing to see if anyone
was interested in such a chess/neural net project.

Thanks for the replies.
George

graham_douglass

Feb 13, 1997

In article <3301E7...@eecs.umich.edu>, "George says...
{snip}

>> >>I do see a drawback though. The human neural network is estimated to have
>> >>10^11 cells, each connected to an average of 10^4 other cells,
>> >>making a total of 10^14 connections.
>> >
>> >:) x 10^15
>

>We're not trying to build a human brain.

Agreed. But we are trying to grapple with the problem of making a computer
evaluate a position as well as the human brain can, so it's worth looking at
the opposition.

>
>> >
>> >
>> >Graham Douglass:
>> >> but to see if contemporary
>> >>computers can reach grandmaster level, the question we have to ask is this: is
>> >>your neural net able to store 50,000 position patterns in the foreseeable
>> >>future?

>[much snipping and gnashing of teeth]
>
>50,000 positions? Where did you get such a low number. Current
>trees are millions of positions.

In 1984, I read "chess skill in man and machine" at the University Library in
Hull. I wish I had the book now. They referred to some studies which aimed to
find out how grand masters REALLY play chess. The number 50,000 comes from one
of these studies. As best I can remember, you need to have expert knowledge of
50,000 patterns that can arise on the chess board, and to acquire that you have
to study chess from age 5 to age 18 to the exclusion of everything else -
including social skills. An owner of the book can correct me if I'm wrong.

>
>> >
>> >You could use a Hopfield net. They are designed for pattern-recognition.
>> >The auto-associators are presented with an incomplete pattern and after
>> >a few iterations they converge to the best match. For 50,000 patterns
>> >you would need say about 200,000 neurons in a Hopfield net. For every
>> >connection between a neuron and another neuron you need one multiply
>> >and add step. Let's say this network is connected in a sparse manner
>> >so each neuron has 1000 connections to other neurons. That means 2x10^8
>

>We wouldn't try to memorize positions with the NN. If that is all that
>I was interested in, I'd build a lookup table and use hash codes.
>
>few things: 1) clearly, we wouldn't use the inefficient hopfield model
> because we're not trying to memorize patterns.
> Besides, a 50000 dim. space can be shattered with log
> that number of neurons not 200000 neurons!

Log base e?

Have you taken into account that these patterns have a degree of complexity?

> 2a) We don't want to memorize patterns. We want the NN to
> extract what it means for a position to be good vs. bad
> (based on the outcome of the game?) .
> I doubt that any evaluation functions currently in use
> come even close to the computational power of
> log(50000) neurons (still don't know where you get
> 50000 though) in a simple feedforward net.
> 2b) Again, 50000 positions? I'm talking of using every
> position ever recorded in all GM games. That is well
> over 50000. Let the neural net see them all.

Right. The NN is NOT trying to record every game in the database - GK doesn't
do that. He admits that he likes to use ChessBase for analysis. If we are not
trying to record patterns that arise in positions, what are we trying to record?
You say we're trying to record "what it means for a position to be good vs bad".
Surely this means we're looking for patterns? If you disagree, can you try to
explain it carefully for me, please? I'm not trying to argue - I'm hungry to
read what people think about these issues!!!

> 3) I'm not suggesting that the computational complexity
> results in a faster method of evaluation of a single
> position. The idea does lend itself to ease of
> parallel implementation, though. I'm suggesting that
> such a method might drastically reduce the number of
> positions that need to be examined, and quite possibly the
> NN (since it can map *any* function) would give far
> better evaluation of leaves.

If the 50,000 number is correct (we have to make some assumptions to get
anywhere), how long do you think it would take the NN to "settle down" after it
had been fed a position? Obviously, the faster it can settle, the more positions
we can make it look at. (This is VERY exciting!)

> 4) I agree that many of the people in this newsgroup
> probably don't want to attempt something so far from
> standard methodology, but then everyone is entitled to
> their own ideas. It appears that what the programmers
> are currently doing is working quite well (and congrats
> are in order); however, I doubt that even DB2 will defeat
> Kasparov using current methods.

If track record is anything to go by, this is certainly correct.

>
> In my original post, I was also fishing to see if anyone
> was interested in such a chess/neural net project.

One thing is for sure - if anyone does have a go at this, they're not going to
be wanting for attention!

I think it will be done - the question is, who is going to do it first?

In the meantime, it is nice to be able to discuss it.

>
>Thanks for the replies.

That's the least we can do.

Graham

Simon Read

Feb 16, 1997

"George R. Barrett" <grba...@eecs.umich.edu> wrote:
>We're not trying to build a human brain.

It was mentioned as an example of something which already
seems to do the job, in order to ask the question: "Do we need to
copy the human brain in order to play chess?" I think the answer
was "No." This is a convoluted way of agreeing with you.


>50,000 positions? Where did you get such a low number. Current
>trees are millions of positions.

The original phrase was not "50,000 positions". It was "50,000 patterns."

We define a pattern as an element of a position which is identifiable.
Simple patterns may be pins, a castled king, or a pawn ram.
Complex patterns may be an arrangement of pieces/pawns around a king,
such as a fianchettoed bishop or elements of pawn structure. Each pattern
may also include possible future moves, strong points, weak points and
an evaluation of how good the pattern is.

Most current chess programs look for simple patterns (passed pawns,
castled king, existence of bishop pair) and assign a numerical value
to each. Grandmasters have more patterns, and each pattern has
associated with it far more than just a numerical score.

The number 50,000 is the amount of knowledge a grandmaster is believed
to possess. He (possibly) has accumulated 50,000 items of chess-specific
information in his playing experience. The number 50,000 has been
widely circulated (since no-one else has come up with one) and
comes from the following paper:

Nievergelt, Jurg - Information content of Chess Positions: Implications
for chess-specific knowledge of Chess Players
SIGART Newsletter No. 62 April 1977 (SIGART is Special Interest Group
on Artificial Intelligence part of ACM - Association for Computing
Machinery)
(Interesting - just how many patterns do Grandmasters know? Here it
comes out as 50,000. Interesting method, using some psychology, to
deduce this number.)

That is why the question was asked, "Can your net store 50,000 patterns?"
_If_ grandmasters have this amount of knowledge, any neural net playing
chess to grandmaster level must also have this amount of knowledge.
(Other things being equal, of course, the usual nebulous requirement.)
This is separate from the question of the size of the search tree.

Read "Thought and Choice in Chess" by Adriaan de Groot, probably published
by Mouton, The Hague, The Netherlands. It's an English translation of his
Dutch PhD thesis. It includes transcripts from grandmasters speaking their
thoughts as they choose a move.

Read also "Chess Skill in Man and Machine" by Peter Frey.


>We wouldn't try to memorize positions with the NN. If that is all that
>I was interested in, I'd build a lookup table and use hash codes.

Not positions, but patterns.


>few things: 1) clearly, we wouldn't use the inefficient hopfield model
> because we're not trying to memorize patterns.
> Besides, a 50000 dim. space can be shattered with log
> that number of neurons not 200000 neurons!

Presumably you mean log2 since Hopfield neurons are bi-state,
but this only lets you have 50,000 _states_ of your network. (Come
to think of it, it would have to be 65,536 states). This is not
the same as 50,000 _patterns_ which the network can acquire.
For 50,000 memorizable patterns you need some small factor times
50,000.


> 2a) We don't want to memorize patterns. We want the NN to
> extract what it means for a position to be good vs. bad
> (based on the outcome of the game?) .

Nets can do many things which are applicable to chess. Pattern-recognition
and action based on patterns is one of many possible ways to play chess.
It's not the memorizing patterns which is useful, but the recognition
that they exist in the board position we're studying.

Feedforward nets seem more compatible with _current_ chess programs,
however, since they can easily implement an evaluation function.


> I doubt that any evaluation functions currently in use
> come even close to the computational power of
> log(50000) neurons

Want to bet?
Log2(50,000) is about 16. This is being generous: log10(50,000) is less
than 5. 16 neurons is not an awful lot, and I would wager,
ooh, let's say 50,000 pounds, that a common program, let's say "crafty",
has *MUCH* more computational power in its evaluation function than that.
Pick any other of Chess Genius, Rebel, Chess System Tal, Junior, etc.
or even any in the SSDF list which is commercially available today.
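
For reference, the arithmetic behind those figures:

    import math
    print(math.log2(50_000))    # ~15.61, hence "about 16"
    print(2 ** 16)              # 65,536 states for 16 bi-state neurons
    print(math.log10(50_000))   # ~4.70, hence "less than 5"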


s.read
Simon at
cranfielddotacdotuk


George R. Barrett

Feb 17, 1997

If you can give an example evaluation function from one of these
programs and show that that function couldn't be learned by a NN
with 16 hidden neurons, then I'll believe your statement; otherwise
I'll stick to my statement. You are correct that 16 is not a lot;
however, that is not the question; in fact, the fact that 16 is not a
large number of neurons emphasizes my point that NNs would be useful.

About the pattern/position confusion: my original post discusses
positions, hence (since no definition was given for "pattern") I assumed
they were the same.

George
