
Whatever happened to Neural Network Chess programs?


Ray Lopez

Mar 26, 2000
A few years ago I saw a "neural network" chess program that learned to play
better the more one beat it. Any progress in this area or is everything
still alpha-beta algorithm driven?

Henri H. Arsenault

Mar 27, 2000
In article <6vsD4.3482$p_6....@newsread2.prod.itd.earthlink.net>, "Ray
Lopez" <raylo...@yahoo.com> wrote:

I don't know, but considering the amount of time that it takes to train a
neural network for simple tasks, I have serious doubts about the
possibility of using a neural network approach to play chess well. In
fact, most neural networks in practical use are trained by means of the
back-propagation algorithm, which is just an iterative way to approximate
the inverse or pseudo-inverse of a matrix at the cost of a LARGE increase
of computation load, but with the advantage of reducing the memory space
required (compared to calculating the inverse by other methods). As far as
I know, chess cannot be formulated in terms of a matrix equation.
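
As a minimal sketch of that claim for the simplest case -- a single linear
layer, NumPy assumed, sizes and step length purely illustrative -- iterating
gradient steps on the squared error converges to the same answer the
pseudo-inverse gives directly:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))      # "training data" (illustrative size)
b = rng.standard_normal(20)           # targets

x = np.zeros(5)
lr = 0.01                             # step size (illustrative)
for _ in range(5000):
    grad = A.T @ (A @ x - b)          # gradient of 0.5 * ||A x - b||^2
    x -= lr * grad                    # the delta rule, i.e. backprop for one layer

print(np.allclose(x, np.linalg.pinv(A) @ b, atol=1e-6))   # True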

Chess does not seem to me to be well-suited for this kind of approach. It
is true that the Hopfield model used to find approximate solutions to
problems like the Traveling Salesman problem, being a minimizing problem,
might appear more useful for chess. However it is not clear to me that it
is possible in chess to define a cost function, like the minimum distance
in the TS problem. Chess problems where neural networks MIGHT make a
contribution are, for example, finding the smallest number of moves leading
to mate when the number is not too great.

Finally, existing programs like Fritz already allow programs to train
themselves by putting weights on moves that lead to a loss, but given the
large number of moves possible in a game, I suspect that this approach
would probably make a program weaker instead of stronger against competent
play, its usefulness being limited to avoiding moves that are obviously
bad for any medium-strength player. In addition, it has not been shown
that such a method converges to better play; I can imagine a number of
scenarios where such a program would reject the only good line in an
opening because it lost a few times using it against an equal player. Not
to mention that it would take billions of games and millions of years of
the computer playing itself before it showed any significant improvement
at the Grandmaster level...

Henri

Dann Corbit

Mar 27, 2000
"Ray Lopez" <raylo...@yahoo.com> wrote in message
news:6vsD4.3482$p_6....@newsread2.prod.itd.earthlink.net...

> A few years ago I saw a "neural network" chess program that learned to
> play better the more one beat it. Any progress in this area or is everything
> still alpha-beta algorithm driven?

There is a program called SAL that uses a neural net. It's lame. (But
still an interesting idea.)

There is a program called ChessterField that uses a neural net. It's a lot
better, but not a world beater by any stretch of the imagination.

Search is still the way to go in chess, by a landslide. Maybe someday that
will change.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm

Dr A. N. Walker

Mar 28, 2000
In article <arseno-2703...@descartes.phy.ulaval.ca>,
Henri H. Arsenault <ars...@phy.ulaval.ca> wrote:
> [...] As far as
> I know, chess cannot be formulated in terms of a matrix equation.

At the Oxford Computer Chess Conference [1975!], R. H. Atkin
[Essex], working with W. R. Hartston, presented a paper formulating
chess as a 57-dimensional [IIRC!] optimisation problem. The work
foundered because they couldn't invert 57x57 matrices. Between you,
me and the gatepost, I suspect it would have foundered even had they
managed the matrix stuff, but I've been wrong before ....

> Chess does not seem to me to be well-suited for this kind of approach.
> [...] However it is not clear to me that it
> is possible in chess to define a cost function, like the minimum distance
> in the TS problem.

We have been trying to train a neural net to recognise
positional features, based on the moves chosen by GMs in quiet
positions. I don't think there's any sense in a NN trying to
work out tactics [BIBWB].

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
a...@maths.nott.ac.uk

Henri H. Arsenault

Mar 28, 2000
In article <8bqe15$q9v$1...@oyez.ccc.nottingham.ac.uk>, a...@merlot.uucp (Dr
A. N. Walker) wrote:

> At the Oxford Computer Chess Conference [1975!], R. H. Atkin
> [Essex], working with W. R. Hartston, presented a paper formulating
> chess as a 57-dimensional [IIRC!] optimisation problem. The work
> foundered because they couldn't invert 57x57 matrices. Between you,
> me and the gatepost, I suspect it would have foundered even had they
> managed the matrix stuff, but I've been wrong before ....
>

Strange, because there are methods to invert matrices larger than this,
although I have no idea how long it takes.

Henri

Dann Corbit

Mar 28, 2000
"Henri H. Arsenault" <ars...@phy.ulaval.ca> wrote in message
news:arseno-2803...@descartes.phy.ulaval.ca...

I have accurately inverted Hilbert matrices larger than that!!!

See (for instance):
http://indigo.ie/~mscott/

Here is the inversion of a Hilbert Matrix of order 57 (takes a few
seconds!):
D:\MIRACL\SOURCE\RELEASE>hilbert
Order of Hilbert matrix H= 57

Solution is
x[1] = 57
x[2] = -185136
x[3] = 150191580
x[4] = -54068968800
x[5] = 10925311008150
x[6] = -1408928107611024
x[7] = 125746833604283892
x[8] = -8212038112932825600
x[9] = 408677209213922649000
x[10] = -15983819738144530272000
x[11] = 503330483554171258265280
x[12] = -13011716963284691701271040
x[13] = 280565147020826164808656800
x[14] = -5113258300734583358642976000
x[15] = 79646824449707566295596968000
x[16] = -1070453320604069691012823249920
x[17] = 12515104642843674160942890574260
x[18] = -128182386653346974105159017646400
x[19] = 1157202101731604627338241131530000
x[20] = -9257616813852837018705929052240000
x[21] = 65937375756666831665732979674579400
x[22] = -419846147675103091422626319560587200
x[23] = 2398501236201776958230499532200462000
x[24] = -12332558341150913660466840694868160000
x[25] = 57230778551903458705603932599622555000
x[26] = -240277700672311481029607550626255334912
x[27] = 914548112174345326463284360593720379776
x[28] = -3161400881590329523576785444027675386880
x[29] = 9939863741224696780123438927969668148800
x[30] = -28460394635991759627273770438229442214400
x[31] = 74281629999938492627184540843778844179584
x[32] = -176853662268323903362120946358549422979072
x[33] = 384276756393574887676483501609152798953550
x[34] = -762201830863289033407901160216501419412000
x[35] = 1380007294114934210140776062571918227361000
x[36] = -2280110010847858645979535306649438769125440
x[37] = 3435999113569342542899716399603668145140420
x[38] = -4718537862315824675421086071040829885218400
x[39] = 5898172327894780844276357588801037356523000
x[40] = -6700882171336082379296216905620113446464000
x[41] = 6906096687833249902162163548354729420761960
x[42] = -6441855803999128998566491638203578662554880
x[43] = 5422990855407430024303424083181584078171200
x[44] = -4106104487598919434302214016470642352320000
x[45] = 2784770243913936579152276344848116430060000
x[46] = -1683238902987979443398709257330417042169600
x[47] = 901280565730331148095811714818224248004800
x[48] = -424324032756697326400925388597081583488000
x[49] = 174039154060364137781629553916771743227500
x[50] = -61468222675213989520542216460400848920000
x[51] = 18415879513494111260354448051536094336432
x[52] = -4588039186752858168669620275815220734336
x[53] = 924734229578516161954490773047076664280
x[54] = -144849790322017483538617280220973204800
x[55] = 16541488400971132379409998049925952400
x[56] = -1224890380766126827433996549812698624
x[57] = 44136675072248830197717350168633592
Result is exact!
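
Judging from the output (x[1] = 57 and x[2] = -57*56*58 = -185136), the demo
appears to be solving H x = (1, ..., 1) for the order-57 Hilbert matrix
H[i][j] = 1/(i+j-1); that right-hand side is an inference from the numbers,
not something the post states. A sketch of the same exact computation in
Python rational arithmetic, at order 5 for brevity:

from fractions import Fraction

def solve_hilbert(n):
    # Augmented system [H | 1] with H[i][j] = 1/(i+j+1) for 0-based i, j.
    A = [[Fraction(1, i + j + 1) for j in range(n)] + [Fraction(1)]
         for i in range(n)]
    for col in range(n):            # Gauss-Jordan; H is positive definite,
        piv = A[col][col]           # so unpivoted elimination is safe
        for row in range(n):
            if row != col:
                f = A[row][col] / piv
                A[row] = [a - f * b for a, b in zip(A[row], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

print([int(v) for v in solve_hilbert(5)])   # [5, -120, 630, -1120, 630] -- exact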

Ray Lopez

Mar 28, 2000

Dann Corbit <dco...@solutionsiq.com> wrote in message
news:2g7E4.104$IH6.1600@client...

> "Henri H. Arsenault" <ars...@phy.ulaval.ca> wrote in message
> news:arseno-2803...@descartes.phy.ulaval.ca...
> > In article <8bqe15$q9v$1...@oyez.ccc.nottingham.ac.uk>, a...@merlot.uucp (Dr
> > A. N. Walker) wrote:
> >
> > > At the Oxford Computer Chess Conference [1975!], R. H. Atkin
> > > [Essex], working with W. R. Hartston, presented a paper formulating
> > > chess as a 57-dimensional [IIRC!] optimisation problem. The work
> > > foundered because they couldn't invert 57x57 matrices. Between you,
> > > me and the gatepost, I suspect it would have foundered even had they
> > > managed the matrix stuff, but I've been wrong before ....
> > >
> > Strange, because there are methods to invert matrices larger than this,
> > although I have no idea how long it takes.
>
> I have accurately inverted Hilbert matrices larger than that!!!

Here are the future tools of neural networks solving chess --> Fuzzy logic.
Analog computers. Quantum computers, with instantaneous solution of the
chess tree. Pattern recognition (a form of neural network, no?) Isn't the
human brain just a carbon-DNA based neural network? Convergence of
solutions is only a matter of transistor density. Once the billion
transistors per die is surpassed, HAL will speak.

I like this thread. Technology marches forward. Remember, not too long ago
people did not think a machine playing an algorithm (ALPHA-BETA) would ever
beat a human. Artificial life will find a way...

Andrew P. Mullhaupt

Mar 28, 2000

Dr A. N. Walker <a...@merlot.uucp> wrote in message
news:8bqe15$q9v$1...@oyez.ccc.nottingham.ac.uk...

> In article <arseno-2703...@descartes.phy.ulaval.ca>,
> Henri H. Arsenault <ars...@phy.ulaval.ca> wrote:
> > [...] As far as
> >I know, chess cannot be formulated in terms of a matrix equation.
>
> At the Oxford Computer Chess Conference [1975!], R. H. Atkin
> [Essex], working with W. R. Hartston, presented a paper formulating
> chess as a 57-dimensional [IIRC!] optimisation problem. The work
> foundered because they couldn't invert 57x57 matrices.

That can't be right. People at Oxford were inverting 57x57 matrices long
before 1975.

I'd be interested in this formulation. Sounds like Roger Brockett's stuff on
embedding discrete optimizations in differential equations or Steve
Omohundro's embedding cellular automata in partial differential equations.

Later,
Andrew Mullhaupt

Andrew P. Mullhaupt

Mar 28, 2000

Ray Lopez <raylo...@yahoo.com> wrote in message
news:aC7E4.590$64.2...@newsread2.prod.itd.earthlink.net...

>
> Here are the future tools of neural networks solving chess --> Fuzzy
> logic.

I doubt that.

> Analog computers.

Possibly.

> Quantum computers,

Now that is interesting, but still a ways over the horizon.

Later,
Andrew Mullhaupt

Dr A. N. Walker

Mar 29, 2000
Henri H. Arsenault wrote [as did others in similar vein]:
[I wrote:]
>> [...] The work
>> foundered because they couldn't [in 1975] invert 57x57 matrices.

>Strange, because there are methods to invert matrices larger than this,
>although I have no idea how long it takes.

Note: I didn't say it was impossible, but that Atkin and
Hartston couldn't do it. I told them that we were routinely inverting
100x100 matrices, and they were *amazed*, wondering how we could manage
to work out 10000 99x99 determinants within the lifetime of the Universe.
They should have talked to NA experts rather than pure mathematicians!

IIRC, they had managed to find an approximation which was in
only 6 dimensions, and they were managing to invert 6x6 matrices [again
using the "theoretical" matrix-of-cofactors approach], but it was much
too slow for practical use. Mind you, even today I don't think we'd want
to invert a 6x6 matrix at every node in the tree, though it would be a
doddle to invert a 57x57 at the root.

Henri H. Arsenault

Mar 29, 2000
In article <aC7E4.590$64.2...@newsread2.prod.itd.earthlink.net>, "Ray
Lopez" <raylo...@yahoo.com> wrote:

> Here are the future tools of neural networks solving chess --> Fuzzy logic.
> Analog computers. Quantum computers, with instantaneous solution of the
> chess tree. Pattern recognition (a form of neural network, no?) Isn't the
> human brain just a carbon-DNA based neural network? Convergence of
> solutions is only a matter of transistor density. Once the billion
> transistors per die is surpassed, HAL will speak.
>

Well, computers can already play as strongly as humans, and humans have
not "solved" the game of chess, so whether or not the human
mind is only a carbon-based NN is beside the point. The questions here are
whether or not a neural-based program could play good chess, and could it
play as well as or better than alpha-beta? At this time, only speculative
answers are possible, and for the moment the practical answer for NN-based
chess computers is "nothing worthwhile yet".

Henri

Cnidarian

Mar 30, 2000
Well, if you want another program, there is Octavius:
http://home.seol.net.au/luke/Octavius/

It has a free download and many features: interesting stuff like a 3D
visual representation of the weights, a live graph of the nodes, a
Windows program, a bunch of settings, and you can load a PGN it can
train from.

Hell if I know what most of it means though....


On Mon, 27 Mar 2000 14:00:23 -0800, "Dann Corbit"
<dco...@solutionsiq.com> wrote:

>"Ray Lopez" <raylo...@yahoo.com> wrote in message
>news:6vsD4.3482$p_6....@newsread2.prod.itd.earthlink.net...
>> A few years ago I saw a "neural network" chess program that learned to
>> play better the more one beat it. Any progress in this area or is everything
>> still alpha-beta algorithm driven?
>
>There is a program called SAL that uses a neural net. It's lame. (But
>still an interesting idea.)
>
>There is a program called ChessterField that uses a neural net. It's a lot
>better, but not a world beater by any stretch of the imagination.
>
>Search is still the way to go in chess, by a landslide. Maybe someday that
>will change.

hy...@tamu.edu

Mar 30, 2000
In article <8bt1bo$f1m$1...@oyez.ccc.nottingham.ac.uk>,
a...@merlot.uucp (Dr A. N. Walker) writes:
> Henri H. Arsenault wrote [as did others in similar vein]:
> [I wrote:]
>>> [...] The work
>>> foundered because they couldn't [in 1975] invert 57x57 matrices.
>>Strange, because there are methods to invert matrices larger than this,
>>although I have no idea how long it takes.
>
> Note: I didn't say it was impossible, but that Atkin and
> Hartston couldn't do it. I told them that we were routinely inverting
> 100x100 matrices, and they were *amazed*, wondering how we could manage
> to work out 10000 99x99 determinants within the lifetime of the Universe.
> They should have talked to NA experts rather than pure mathematicians!

I remember that the inability to invert very big matrices
caused real problems in solid state physics. The equations
looked particularly simple when one used plane waves as
basis functions for expanding wavefunctions but this series
converged very slowly. At one seminar the speaker said they'd
have to retain a thousand terms to get acceptably close to
the "real" solution, but to invert the resulting 1000 by
1000 matrix would require 25 years on the IBM mainframes of
the day (circa 1976).

Thus we all spent years learning various methods of
getting around the problem (pseudopotentials, orthogonalized
plane waves, etc, etc). I wonder if those methods have
been totally abandoned now that inverting big matrices
is routine.


William Hyde
Dept of Oceanography
Texas A&M University
hy...@rossby.tamu.edu

Dann Corbit

Mar 30, 2000
<hy...@tamu.edu> wrote in message news:8c00rn$800$1...@news.tamu.edu...

I doubt that the techniques are abandoned. They are probably used on what
are *now* considered big matrices -- perhaps 100,000 x 100,000 or larger.

Ray Lopez

Mar 31, 2000
Way off topic:
You guys lost me here. So how were 57 x 57 matrices inverted after 1975?
By increased hardware? I can't believe a Pentium PC is that much more
powerful than a 1970s IBM mainframe. So it must be memory (memory was
sorely lacking in the 70s, I've read)? Or some new algorithm? Is the
algorithm exact, or an approximation of the actual inverted matrix? And if
a new exact algorithm, why are 10000x10000 matrices so hard to invert? Is it an
exponential algorithm?

I always use Gauss's method and/or Cramer's Rule for inverting matrices. If
it's good enough for grandpa, it's good enuf for me.


Dann Corbit <dco...@solutionsiq.com> wrote in message
news:8yME4.105$JL.1191@client...

Andrew P. Mullhaupt

Apr 1, 2000

Dr A. N. Walker <a...@merlot.uucp> wrote in message
news:8bt1bo$f1m$1...@oyez.ccc.nottingham.ac.uk...

> Henri H. Arsenault wrote [as did others in similar vein]:
> [I wrote:]
> >> [...] The work
> >> foundered because they couldn't [in 1975] invert 57x57 matrices.
> >Strange, because there are methods to invert matrices larger than this,
> >although I have no idea how long it takes.
>
> Note: I didn't say it was impossible, but that Atkin and
> Hartston couldn't do it. I told them that we were routinely inverting
> 100x100 matrices, and they were *amazed*, wondering how we could manage
> to work out 10000 99x99 determinants within the lifetime of the Universe.
> They should have talked to NA experts rather than pure mathematicians!

There were big league numerical analysts at Oxford at that time, and had
been for a long time. It was one of the "cradles" of numerical analysis. I
have trouble believing that they were using cofactors in 1975, when this was
well known (and had been well known for many years by that time) to be quite
insane.

Who let these guys off the leash?

Later,
Andrew Mullhaupt

Andrew P. Mullhaupt

Apr 1, 2000

<hy...@tamu.edu> wrote in message news:8c00rn$800$1...@news.tamu.edu...
>
> At one seminar the speaker said they'd
> have to retain a thousand terms to get acceptably close to
> the "real" solution, but to invert the resulting 1000 by
> 1000 matrix would require 25 years on the IBM mainframes of
> the day (circa 1976).

That can't be right. You need roughly 1000 megaflops to invert a 1000x1000
matrix and in 1975 you could get some significant fraction of a megaflop,
meaning several thousands of seconds were needed. 1000 seconds is about 17
minutes, so no, 25 years is not reasonable.

In fact, in 1960, 31x31 matrices could be inverted in 4 seconds, giving a
flop rate of about 250 flops. This would translate to 4000000 seconds or
1111 hours, that is a bit over 46 days for a 1000x1000 matrix, assuming you
could have fit enough of it in memory. (Using an intelligent pattern of
access, you do not need to store the entire matrix in memory to get high
speed.)

By 1975 the situation was even better.

> Thus we all spent years learning various methods of
> getting around the problem (pseudopotentials, orthogonalized
> plane waves, etc, etc).

Hopefully, this work was justifiable on other grounds.

> I wonder if those methods have
> been totally abandoned now that inverting big matrices
> is routine.

No, they are still important in ab initio quantum mechanics calculations,
and other stuff, too.

This stuff kind of burns me because when my dad was doing his Ph. D. in
chemistry he had to work out a 12x12 determinant which he did with a
Marchant (hand calculator) and used the expansion by minors. That was a lot
of extra work compared to methods which were known to mathematicians at the
time.

Later,
Andrew Mullhaupt

lues...@my-deja.com

Apr 1, 2000
I have written a WinBoard program called ChessterfieldCL. It combines some
kind of a simplified neural network with alpha-beta. The evaluation of this
simplified neural network is very fast and it does not sacrifice too much of
the power of a real neural network. On my homepage
(http://home.datacomm.ch/m.luescher/download_cl.html) you can find a German
description of the program and - of course - the program. If you do not
speak German, I suggest that you have a look at Michael Buro's paper
(there is a link on the page mentioned above), which is written in English.
If you play against ChessterfieldCL, please keep in mind that the
program has learned all its positional knowledge from games played against
other WinBoard programs.

Matthias Lüscher



Paul Morphy

Apr 1, 2000
I found an English description and no program.

<lues...@my-deja.com> wrote in message news:8c4ntm$9c3$1...@nnrp1.deja.com...

Bob Simpson

Apr 1, 2000

> Later,
> Andrew Mullhaupt
>

Yo, bro'


lues...@my-deja.com

Apr 1, 2000
In article <8c4tu6$l9d$1...@slb2.atl.mindspring.net>,
"Paul Morphy" <paul_...@mailcity.com> wrote:
> I found an English description and no program.
>

> > (http://home.datacomm.ch/m.luescher/download_cl.html) you can find a German
> > description of the program and - of course - the program.

OK, there is somewhere on the page a link DOWNLOAD. Choose this link to
download the program. At the bottom of the page is a section "More
Information", and in this section you have two links. The first link
points to Michael Buro's paper, the second to my own (German,
PDF format) text about the program.

Matthias

Andrew P. Mullhaupt

Apr 1, 2000

Ray Lopez <raylo...@yahoo.com> wrote in message
news:GDXE4.8686$9m6.3...@newsread1.prod.itd.earthlink.net...

> Way off topic:
> You guys lost me here. So how were 57 x 57 matrices inverted after 1975?

57 x 57 wasn't even big in 19_65_.

> I always use Gauss's method and/or Cramer's Rule for inverting matrices.
If
> it's good enough for grandpa, it's good enuf for me.

Cramer's rule wasn't good enough for grandpa. It is not normally used by
professionals.

Later,
Andrew Mullhaupt

Stuart Cracraft

Apr 3, 2000

Plays absolutely awful chess. Class D at best.

On Thu, 30 Mar 2000 04:27:04 GMT, the_cn...@mailcity.com
(Cnidarian) wrote:

>Well, if you want another program, there is Octavius:
>http://home.seol.net.au/luke/Octavius/
>
>It has a free download and many features: interesting stuff like a 3D
>visual representation of the weights, a live graph of the nodes, a
>Windows program, a bunch of settings, and you can load a PGN it can
>train from.
>
>Hell if I know what most of it means though....
>
>On Mon, 27 Mar 2000 14:00:23 -0800, "Dann Corbit"
><dco...@solutionsiq.com> wrote:
>

>>"Ray Lopez" <raylo...@yahoo.com> wrote in message
>>news:6vsD4.3482$p_6....@newsread2.prod.itd.earthlink.net...
>>> A few years ago I saw a "neural network" chess program that learned to
>>> play better the more one beat it. Any progress in this area or is everything
>>> still alpha-beta algorithm driven?
>>
>>There is a program called SAL that uses a neural net. It's lame. (But
>>still an interesting idea.)
>>
>>There is a program called ChessterField that uses a neural net. It's a lot
>>better, but not a world beater by any stretch of the imagination.
>>
>>Search is still the way to go in chess, by a landslide. Maybe someday that
>>will change.

Richard A. Fowell

Apr 4, 2000
[ Posted per request of Richard Lang: sup...@chessgenius.com ]

ChessGenius v. 1.1 for Palm should be released soon
(free upgrade for v1.0 owners)

The current beta testers report no problems, so
a pre-release beta (53 KB) is now at:

http://www.chessgenius.com/beta.htm

for those willing to help give it a last once-over before release.

(motivation: wouldn't you hate to find out it had a
problem on your machine AFTER it was released?)

Please report any problems to sup...@chessgenius.com
If no problems are found, this will be released as the final version.

If you have any Palm device with OS 3.5 please let us have your comments.
In particular, on non-color devices, are the default gray scales alright?

If you have a Handspring Visor please report successful operation.

Installation:

If you already have ChessGenius 1.0, just install
over your current version. No need to enter the Licence Key again.

New features

* Color and Gray scale support through new 'Colors... /O' menu item

What this provides is a function of Palm hardware/software:

Color Palm IIIc:    Light and dark board squares can be any color.
Palm OS 3.5:        Light and dark squares can be any gray scale.
Palm OS 3.0 to 3.3: Dark squares can be either dark gray, light gray,
                    black checked (as v1.0), or dark gray checked.
Palm OS 2.0:        No grayscale/color support.

* Better blitz chess clocks for "big board".
- both sides' time visible as countdown time below board
for "game in" levels.

* Hardware scroll up/down buttons now function for:
- game rewind
- game replay
- move takeback
- step forward.

* Switching away from Giant board now returns to previous board size.

* The menu item 'Move Now /M' has been renamed 'Switch sides/Move /M'

* More responsive to key presses, responds even whilst pieces are sliding.

* Black move notation is now not truncated even if the move text is long
like O-O-O+ or Q8xd2+

[Posted for Richard Lang: sup...@chessgenius.com]

Dr A. N. Walker

Apr 4, 2000
In article <8c4g3a$kra$1...@nntp9.atl.mindspring.net>,
Andrew P. Mullhaupt <amul...@zen-pharaohs.com> wrote:
>In fact, in 1960, 31x31 matrices could be inverted in 4 seconds, giving a
>flop rate of about 250 flops.

Even in 1960, they couldn't invert a 1000-element matrix in
1000 operations, so make that 2500 flops. But, actually, Atlas [1959]
was at least an order of magnitude faster than that.

> This would translate to 4000000 seconds or
>1111 hours, that is a bit over 46 days for a 1000x1000 matrix,

Erm? Time goes as n-cubed, so 4s translates to around 120000s
or a day-and-a-half. Mind you, even with an order of magnitude speed-up,
that might as well have been a century-and-a-half; in the mid-60s, Atlas
was *the* computer for Manchester and several other North and Midlands
universities, including Nottingham. My then department was allocated
around 5 mins per day. I got into trouble as a postgrad for wasting
a few seconds getting the computer to produce "poetry". Luckily, "they"
never found out what else we were using Atlas for! By 1975, this dept
was up to around an hour per day on the Univ's own ICL 1906A, but chess
was still out of the question except as an underground activity. What a
relief when Unix came along shortly after, and we could do what we liked
with it ...!
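
The scaling both posters are arguing about fits in a few lines; a sketch,
with the 1960 data point taken from Mullhaupt's post and the bare n-cubed
operation count standing in for the (disputed) constants:

def inversion_seconds(n, ops_per_sec):
    # Elimination costs on the order of n**3 floating-point operations.
    return n ** 3 / ops_per_sec

rate_1960 = 31 ** 3 / 4.0                           # 31x31 in 4 s => ~7400 ops/s
print(inversion_seconds(1000, rate_1960) / 86400)   # ~1.55 days: the
                                                    # "day-and-a-half" above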

>By 1975 the situation was even better.

There was actually very little speed-up between 1960 and
1975, except for the tiny minority who had access to the real
super-computers; but massive improvements in operating systems,
disc storage, editors and other ancillary matters. Computers
became much more usable.

Andrew Mullhaupt also writes:
>There were big league numerical analysts at Oxford at that time,

Note that it was the *conference* that was at Oxford; Atkin
and Hartston were at Essex.

> [...] I
> have trouble believing that they were using cofactors in 1975,

You have obviously not talked to many pure mathematicians
about this! I would guess [reasonably informedly] that around 30%
of pure mathematicians would still, today, if asked in an unguarded
moment about matrix inversion, say "Well, work out the matrix of
cofactors and ...". That approach is much more useful for the
*theory*, and if they ever have to invert real [or complex!] matrices,
they are 3x3, or at worst 4x4 with lots of zeros. Of course, if you
sit them down and make them think about efficiency, they'll dredge
up something about elimination and row operations as a dirty-but-
practical way of doing it, but it's not their first thought.

Many, many students would say the same -- they've all done
enough linear algebra to know about matrices and determinants and
Cramer, and if they do Cramer before Gauss, that remains their method
of choice. You can rabbit on till the cows come home in a later NA or
complexity module about how much more efficient Gauss or various other
decomposition etc methods are, but it's in one ear and out the other --
until you *force* them to invert a 10x10 matrix *by hand* [and who is
that sadistic?] and they realise that Cramer is just hopeless.
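
A sketch of the raw numbers behind "Cramer is just hopeless": naive expansion
by minors costs on the order of n! multiplications, against roughly n**3 / 3
for elimination (both are the usual order-of-magnitude estimates, constants
ignored):

from math import factorial

for n in (4, 10, 57):
    print(n, factorial(n), n ** 3 // 3)
# 4     24            21
# 10    3628800       333
# 57    ~4.05e76      61731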

Will Dwinnell

Apr 8, 2000

"Ray Lopez" <raylo...@yahoo.com> wrote:
"A few years ago I saw a "neural network" chess program that learned to play
better the more one beat it. Any progress in this area or is everything still
alpha-beta algorithm driven?"

There is no reason that these (neural networks vs. game tree evaluation) need to
be mutually exclusive approaches. For example, neural networks might be used as
evaluation functions for game trees.

Henri H. Arsenault wrote:
"I don't know, but considering the amount of time that it takes to train a neural
network for simple tasks, I have serious doubts about the possibility of using a
neural network approach to play chess well."

Neural networks comprise a diverse class of tools: some are quicker to train than
others. For myself, I think the real stumbling blocks for neural networks (and
supervised machine learning methods in general) are: 1. the lack of instant
feedback and 2. the sensitivity of the true evaluation function to very small
changes in board configuration. The first issue is a problem since many moves
are made from the beginning of a game to its end, yet the only feedback available
is whether the game was won or lost. Distributing that simple yes/no information
back over so many moves makes it difficult to decide which decisions, and what
factors in each decision, were important. Some machine learning systems have been
developed to handle exactly this sort of problem (reinforcement learning, such as
the Adaptive Critic or Klopf's DRT), but this remains a challenging task. The
second issue is even more of a problem since most neural networks (again, and
other supervised machine learning systems) operate on the assumption that (to
some degree) similar inputs (in this case, board configurations) should yield
similar outputs (board evaluations). However, in chess, the movement of a single
piece by as little as one square can radically alter the evaluation of the game
(Levy gave an example of one such situation in his book "Computer Gamesmanship").
This problem is referred to in the genetic algorithm literature as a "high
deception" function.
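
The reinforcement-learning idea mentioned above can be made concrete with a
TD(0)-style sketch for a linear position evaluator; the feature count, game
length, and outcome coding are schematic, not any particular program's method:

import numpy as np

def td0_update(w, phi, outcome, alpha=0.01):
    """One game of TD(0) updates; phi holds the visited positions' features.

    outcome (+1 win, -1 loss) is the only real feedback, and it arrives
    once, at the end -- the credit-assignment problem described above.
    """
    for t in range(len(phi) - 1):
        delta = w @ phi[t + 1] - w @ phi[t]     # TD error between positions
        w = w + alpha * delta * phi[t]
    delta = outcome - w @ phi[-1]               # terminal correction
    return w + alpha * delta * phi[-1]

# Schematic usage: a 40-position game with 10 hand-crafted features each.
rng = np.random.default_rng(1)
w = np.zeros(10)
game = [rng.standard_normal(10) for _ in range(40)]
w = td0_update(w, game, outcome=1.0)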

"In fact, most neural networks in practical use are trained by means of the
back-propagation algorithm, which is just an iterative way to approximate the
inverse or pseudo-inverse of a matrix at the cost of a LARGE increase of
computation load, but with the advantage of reducing the memory space required
(compared to calculating the inverse by other methods)."

This is sort of true. In the neural networks you describe, though, there are
usually two weight matrices, as well as nonlinear transfer functions, so this is
sort of a "multi-step nonlinear" division. Matrix inversion can, of course, be
used to find the least squares fit for a single layer linear neural network
(linear regression) or single layer non-linear neural network, but multilayer
networks become more complex.

"Chess does not seem to me to be well-suited for this kind of approach."

I agree with this conclusion, but for the reasons given above.

Will Dwinnell
pred...@compuserve.com


Will Dwinnell

Apr 8, 2000
Dr A. N. Walker wrote:
"At the Oxford Computer Chess Conference [1975!], R. H. Atkin [Essex],
working with W. R. Hartston, presented a paper formulating chess as a
57-dimensional [IIRC!] optimisation problem. The work foundered because
they couldn't invert 57x57 matrices. Between you, me and the gatepost, I
suspect it would have foundered even had they managed the matrix stuff, but
I've been wrong before ...."

Henri H. Arsenault responded:
"Strange, because there are methods to invert matrices larger than this,
although I have no idea how long it takes."

I was curious about this as well. In MATLAB, I just inverted a 200x200
matrix of uniformly distributed random numbers (between 0.0 and 1.0) on an
AMD K6 (equivalent to a Pentium Pro) running at 233MHz in less than 1
second.
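
For anyone repeating the experiment today, a NumPy equivalent (the size
matches the post; timings obviously vary with hardware):

import time
import numpy as np

A = np.random.rand(200, 200)                  # uniform on [0, 1), as in the post
t0 = time.perf_counter()
A_inv = np.linalg.inv(A)
print(time.perf_counter() - t0)               # a few milliseconds nowadays
print(np.allclose(A @ A_inv, np.eye(200)))    # sanity check: True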

Will Dwinnell
pred...@compuserve.com


Will Dwinnell

Apr 8, 2000
Ray Lopez wrote:
"Here are the future tools of neural networks solving chess --> Fuzzy logic.
Analog computers. Quantum computers, with instantaneous solution of the chess
tree. Pattern recognition (a form of neural network, no?) Isn't the human
brain just a carbon-DNA based neural network?"

Is this just pipe-dreaming, or are you talking about something more solid?
There are non-neural pattern recognition methods, by the way.

Will Dwinnell
pred...@compuserve.com


Simon Read

Apr 9, 2000
In article <38EF8F1F...@compuserve.com>, Will Dwinnell
<URL:mailto:pred...@compuserve.com> wrote:
> Ray Lopez wrote:
> "Here are the future tools of neural networks solving chess --> Fuzzy logic.
> Analog computers. Quantum computers, with instantaneous solution of the chess
> tree. Pattern recognition (a form of neural network, no?) Isn't the human
> brain just a carbon-DNA based neural network?"

Aren't you forgetting something? Most artificial intelligence researchers say
"Neural Network" but what they really mean is "*ARTIFICIAL* Neural Network".

The question "Isn't the human brain just a neural network?" is completely
upside-down. It should read, "Aren't the sort of _artificial_ neural networks
we play with just simplified copies of what we think happens in the brain?"

I am (and have been for a few years) experimenting with a VERY large neural
network to evaluate chess positions. No spectacular results yet; in fact, I
got the network to play a random-mover and the random-mover won. Oh, dear.

However, the network is learning.


Simon

address:
following / netcomuk.co.uk /
at the / @ / - Simon
reach me / rocketsc /
You can
