
Genetic Algorithms for Chess Evaluation Functions


cma...@ix.netcom.com

Jul 1, 1996

Has anyone ever tried using a genetic algorithm to come up with
locally optimized evaluation functions? I was thinking of possible
ways of solving specific subclasses of chess positions, and this
particular idea came to me. Unless it has already been done, I am
considering writing a program to do this. The basic idea is to first
use a classification function to determine which eval function to use,
and optimize each individual eval function using a genetic algorithm.
I know that most programs do this to a limited extent by defining an
opening/middle/endgame classification, but I am interested in defining
much more specific subclasses.
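
A rough Python sketch of this dispatch idea (the classifier, the term
names and the weights below are all hypothetical, just to make the
structure concrete):

```python
# Hypothetical sketch: one tuned weight set ("gene") per position subclass.
def classify(position):
    # Toy classifier: queens off the board means "endgame" here.
    return "endgame" if position["queens"] == 0 else "middlegame"

# A GA would optimize each of these weight vectors independently.
WEIGHTS = {
    "middlegame": {"material": 1.0, "king_safety": 0.5},
    "endgame":    {"material": 1.0, "passed_pawns": 0.8},
}

def evaluate(position):
    # Dispatch to the weight vector for this position's subclass.
    w = WEIGHTS[classify(position)]
    return sum(w[term] * position.get(term, 0.0) for term in w)
```

Each per-subclass weight vector is the "gene" the GA would evolve.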
This also raised an interesting question. Should a system like this
be trained on databases of great games by humans to come up with
similar moves, or by playing against itself? (or possibly both?)

Thanks,
Chris Mayer

Jay Scott

Jul 1, 1996

In article <4r7vmj$8...@dfw-ixnews7.ix.netcom.com>, cma...@ix.netcom.com wrote:
>Has anyone ever tried using a genetic algorithm to come up with
>locally optimized evaluation functions?

There was a brief discussion about it earlier this year. I saved out
a couple of the articles:

http://forum.swarthmore.edu/~jay/learn-game/methods/chess-ga.html

>The basic idea is to first
>use a classification function to determine which eval function to use,
>and optimize each individual eval function using a genetic algorithm.

You also want to somehow knit the evaluation patches together at their
edges, to make the overall function smooth. After all, changes in the
evaluation function are how a chess program represents knowledge; if the
function has discontinuities between different evaluation classes, it
will apply this "knowledge" and fall over a cliff. :-)
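
One way to get that smoothness (offered as an illustration, not as a
prescription) is to interpolate between adjacent evaluation classes by
a continuous phase measure instead of switching abruptly:

```python
def tapered_eval(mg_score, eg_score, phase):
    # phase runs continuously from 1.0 (full middlegame) down to 0.0
    # (bare endgame), e.g. derived from remaining material, so the
    # overall evaluation has no cliffs at the class boundary.
    return phase * mg_score + (1.0 - phase) * eg_score
```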

I haven't heard of anyone trying this approach.

>This also raised an interesting question. Should a system like this
>be trained on databases of great games by humans to come up with
>similar moves, or by playing against itself? (or possible both?)

Training on a database is faster, but it tends to produce artifacts
because grandmasters are so different from computer programs. For example,
suppose your program notices that, statistically, a grandmaster's piece
sac on the kingside is usually followed by a win. So you fire up the machine
for its first game, and it plays 1. e3, 2. Bd3, 3. Bxh7, and confidently
predicts a win because it sacrificed on the kingside! :-) Well, usually it's
not that bad, but you get the idea.

Training by self-play avoids these artifacts. On the other hand, if the program
starts out knowing nothing, then it may have a hard time bootstrapping itself
out of ignorance. It's probably a good idea to start training with a database,
to give the program some initial knowledge, then switch to self-play.

For more info, visit my web site, Machine Learning in Games:

http://forum.swarthmore.edu/~jay/learn-game/index.html

For suggestions about what's important in training, see Susan Epstein's paper
"Toward an ideal trainer" under "other online papers". The most directly
useful other stuff might be "Sebastian Thrun's NeuroChess".

If you try it, let us know how it works out. Good luck, and have fun!

Jay Scott <j...@forum.swarthmore.edu>

Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html

John Stoneham

Jul 1, 1996

cma...@ix.netcom.com wrote:
>
> Has anyone ever tried using a genetic algorithm to come up with
> locally optimized evaluation functions? I was thinking of possible
> ways of solving specific subclasses of chess positions, and this
> particular idea came to me. Unless it has already been done, I am
> considering writing a program to do this. The basic idea is to first
> use a classification function to determine which eval function to use,
> and optimize each individual eval function using a genetic algorithm.
> I know that most programs do this to a limited extent by defining a
> opening/middle/endgame classification, but I am interested in defining
> much more specific subclasses.
> This also raised an interesting question. Should a system like this
> be trained on databases of great games by humans to come up with
> similar moves, or by playing against itself? (or possibly both?)
>
> Thanks,
> Chris Mayer


Sounds like you're not really talking about *genetic* algorithms. There
were some articles on this in past _Advances_In_Computer_Chess_ volumes
(Ed. D.F. Beal). In fact, vol. 5 contains an article about some
evaluation terms that were arrived at by analyzing grandmaster games and
determining whether and how much each term appeared in won games (poor
description - check out the paper), much like what you mention, but
this is *not* a genetic algorithm. This procedure is just an algorithm
used to determine what weight to give which terms in an evaluation
function. I think that same volume, or maybe one or two earlier, had a
paper on genetic algorithms, which, by definition, alter *themselves*
based on the result of each game actually played using the particular
function to be genetically altered -- a kind of self-teaching evaluation
function. I think the paper did discuss playing different versions
against themselves, but this is not as "instructive" a process for the
algorithm as actual opponents would be.

cma...@ix.netcom.com

Jul 4, 1996

Results of the First Experiment
-----------------------------------------------

Just to test the idea, I did the following experiment. I stripped out
all of the chess knowledge I could that was built into my program. I
disabled all of the terms of my evaluation function except the single
one under test, which was simple [non-positional] piece values. The
"gene" was simply 5 floats. I started out with all the values set to
1.0 and had the program play against itself for 24 hours. The only fitness
test was winning. A loss or draw meant death, and a win allowed a new
mutated set of values. The mutation algorithm was simply adding or
subtracting 0.1 from the values.
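
As described, this is roughly a (1+1)-style hill climb. A sketch of
that loop, with play_game standing in for the actual self-play match
(my reading of the setup, not the exact code):

```python
import random

PIECES = ["P", "N", "B", "R", "Q"]

def mutate(gene):
    # A win earns a new candidate: one value nudged by +/- 0.1.
    child = dict(gene)
    piece = random.choice(PIECES)
    child[piece] = round(child[piece] + random.choice([-0.1, 0.1]), 2)
    return child

def evolve(play_game, generations):
    # play_game(a, b) -> "a", "b", or "draw"; a loss or draw kills the mutant.
    champion = {p: 1.0 for p in PIECES}  # every piece starts at 1.0
    for _ in range(generations):
        challenger = mutate(champion)
        if play_game(champion, challenger) == "b":
            champion = challenger
    return champion
```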

At first, the games were painful to watch, but I noticed some very
interesting things.
The first was a verification of what someone in this NG posted before,
that the mobility function seemed to magically appear by itself. This
was more pronounced at deeper ply settings. Q: If mobility appears by
itself, maybe we don't need to waste time computing it? Q: Why does
it magically appear?
The second phenomenon, which may be a result of the above, is that even
though the pieces started out with identical values, the moves still
reflected (to a small extent) the traditional piece values. In other
words, it didn't trade knight for pawn etc. as you would expect it to,
given that it didn't know any better. It seemed that even if a piece has
the same material value, if it has more mobility it becomes worth
more. Q: Is material value just a meaningless echo of mobility?
All this from virtually no evaluation function!
The final results converged in about the same time regardless of the
search depth. This seemed to be because learning was faster per game
at ply 7, but more games can be played at ply 5. More data needs to
be collected, but there was no significant difference in time to
converge.
And now for the answer:
Pawn 1.0
Knight 2.3
Bishop 2.1
Rook 3.4
Queen 5.5
I was really expecting something closer to 1:3:3:5:9, so what went
wrong?
Here are some of my ideas, not that I believe them but I'm "just
thinking out loud".
1) At 1 ply, maybe 1:3:3:5:9 is right, but at ply 9 where mobility can
manifest itself, maybe the results I got when added to the 'magic
mobility factor' compensate.
2) At ply 5, the most captures you can have is 3 pieces of his and 2
of yours. Any results must be quantized to this. Because rarely is
there a Queen for Rook, Knight, and Pawn trade, it is statistically
impossible to see if this is an even trade across many situations.
3) The traditional 1:3:3:5:9 is wrong.

What I will be doing next is to add more evaluation terms into the
"gene" to see what grows.

As usual, your comments and ideas are greatly appreciated.

Chris Mayer

Chris Whittington

Jul 4, 1996

In the land of the blind the one-eyed man is king.

So why start with both sides 'blind' ?

How about giving one side the magic 1:3:3:5:9

or, better, 1 : 3.6 : 3.6 : 6 : 9.6

and the other 1:1:1:1:1

and see what happens ?

Or will this approach never work, since the 1:1:1:1:1 will always
lose (?) and thus never improve.

Maybe you could factor in some kind of distance to loss for making
random changes .... ?

Chris Whittington


Tom Kerrigan

Jul 4, 1996

>2) At ply 5, the most captures you can have is 3 pieces of his and 2
>of yours. Any results must be quantized to this. Because rarely is

This says to me that you aren't using a quiescence function, which is a horrible
mistake.

Cheers,
Tom

_______________________________________________________________________________
Tom Kerrigan kerr...@frii.com O-

Ehrman's Commentary:
(1) Things will get worse before they get better.
(2) Who said things would get better?

Jay Scott

Jul 5, 1996

In article <4rfqs4$a...@dfw-ixnews5.ix.netcom.com>, cma...@ix.netcom.com wrote:
>Just to test the idea, I did the following experiment. I stripped out
>all of the chess knowledge I could that was built into my program.

A good first test. Leaving the rest of the knowledge in place would be
fun, too.

One learning plan, when you're adding knowledge in stages, is to freeze
earlier knowledge while learning new stuff. For example, (1) get the program
to learn material values alone. (2) Freeze them, and set the program to learning
about, say, pawn structure, without touching the material values that it already
knows. (3) When pawn structure has converged, you can unfreeze all the knowledge
and let it modify its material values in light of what it now understands
about pawn structure, as well as vice versa. If it works well, stage 3 should
be mainly fine-tuning. The idea is that at each stage the program is learning
a small, manageable amount of knowledge, so it should converge quickly, sidestepping
the worst-case exponential convergence times.
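
A toy rendering of that freeze/unfreeze schedule (the hill-climb, the
weight names, and the fitness function are all stand-ins for the real
GA and match results):

```python
import random

def tune(weights, free_keys, fitness, steps=300):
    # Crude hill-climb over free_keys only; every other weight stays frozen.
    best = dict(weights)
    for _ in range(steps):
        cand = dict(best)
        k = random.choice(free_keys)
        cand[k] += random.choice([-0.1, 0.1])
        if fitness(cand) > fitness(best):
            best = cand
    return best

def staged_learning(fitness):
    # Stage 1: learn material alone.  Stage 2: freeze it, learn pawn
    # structure.  Stage 3: unfreeze everything for final fine-tuning.
    w = {"material": 1.0, "pawn_structure": 0.0}
    w = tune(w, ["material"], fitness)                    # stage 1
    w = tune(w, ["pawn_structure"], fitness)              # stage 2
    w = tune(w, ["material", "pawn_structure"], fitness)  # stage 3
    return w
```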

>I started out with all the values set to
>1.0

I suggest fixing the pawn value at 1.0, and letting the others float--
it looks like maybe you did this? It doesn't lose any information, it
makes the results easier to interpret, and it speeds up learning a tad.

For more variety in the initial population, you can start out with
random genes.

>The only fitness
>test was winning. A loss or draw meant death, and a win allowed a new
>mutated set of values.

Your fitness measure is extremely noisy. A strong candidate that has
lived for generations, defeating all opposition, can be knocked out
by one bad game. To get high performance, you're going to need a more
forgiving selection mechanism. There's a discussion of this problem in
the paper about the backgammon program HC-Gammon:

http://forum.swarthmore.edu/~jay/learn-game/systems/hc-gammon.html

>Q: If mobility appears by
>itself, maybe we don't need to waste time computing it?

Ideally. :-)

>Q: Why does
>it magically appear?

There was a theoretical analysis in a recent ICCA Journal that gave
an idea of how it magically appears for a random evaluation function.
I'm not sure how it appears in a material-only evaluator.

>Q: Is material value just a meaningless echo of mobility?

No. Mobility is an "instantaneous" measure that tells how well you're
doing right now, and material is a "long term" measure that tells how well
you can (probably) do eventually. Sometimes it's worthwhile to tie your
position into a knot to win a pawn, because you can untangle later.

Losing one pawn can lose the game, even though the pawn has at most a few
moves at any one time. The mobility measure doesn't care that the pawn may
someday turn into a queen.

>Pawn 1.0
>Knight 2.3
>Bishop 2.1
>Rook 3.4
>Queen 5.5
>I was really expecting something closer to 1:3:3:5:9, so what went
>wrong?

It's hard to say without making a few tests.

- Maybe, for this program under these conditions, these values really
are close to optimal. You turned off all the rest of the chess knowledge,
and that'll tend to mix things up a bit. Play the 1:2.3:2.1:3.4:5.5 program
against an identical one set to 1:3:3:5:9, and see how it does.

- Maybe you didn't run the experiment long enough for the values to
converge. They can only change by 0.1 at a time, and the GA needs quite
a few tests to discover that 2.2 is a better value for a bishop than 2.1.
Make a graph over time of the genes of the first member of the population--
are they still changing at the end of the run? Try a longer run.

- Maybe the noise in the fitness function is the culprit, and the values
you came up with are indistinguishable (to the GA) from optimal values.
Try this: seed an initial population with half 1:2.3:2.1:3.4:5.5's and
half 1:3:3:5:9, turn off mutation, and see if one species can drive the
other out of the population. If not, or if it happens slowly by drift
rather than quickly by selection, then the GA is having trouble detecting
the difference. You need less noise.

- Maybe the GA hit a plateau. 1:3:3:5:9 is better, and the algorithm
can tell if it gets a chance, but it doesn't get a chance because
mutations happen in 0.1 increments. If the GA can't tell the difference
between 1:2.3:2.1:3.4:5.5 and anything that's different from it by 0.1,
thanks to the noisy fitness function, then it will get stuck there, or
at best will make progress by random drift rather than by purposeful
selection. This is what seems most likely to me.

Urban Koistinen

Jul 5, 1996

cma...@ix.netcom.com wrote:
: Results of the First Experiment
: -----------------------------------------------

: Just to test the idea, I did the following experiment. I stripped out
: all of the chess knowledge I could that was built into my program. I
: disabled all of the terms of my evaluation function except the single
: one under test, which was simple [non-positional] piece values. The
: "gene" was simply 5 floats. I started out with all the values set to
: 1.0 and had the program play against itself for 24 hours. The only fitness
: test was winning. A loss or draw meant death, and a win allowed a new
: mutated set of values. The mutation algorithm was simply adding or
: subtracting 0.1 from the values.

[cut]
: And now for the answer:
: Pawn 1.0
: Knight 2.3
: Bishop 2.1
: Rook 3.4
: Queen 5.5
: I was really expecting something closer to 1:3:3:5:9, so what went
: wrong?

Making a draw count the same as a loss means a program that gets
10 wins, 0 draws, 10 losses does better than one with
9 wins, 11 draws, 0 losses.
You get a wilder playing style.
Try deciding the winner by a coin toss when they have drawn.
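
The arithmetic of that example, checked under both scoring rules:

```python
def fitness(wins, draws, losses, draw_value):
    # Total match points; a loss scores zero under either rule.
    return wins * 1.0 + draws * draw_value

# Draws counted as losses: the wild 10-0-10 program looks better.
assert fitness(10, 0, 10, 0.0) > fitness(9, 11, 0, 0.0)   # 10.0 > 9.0

# Draws worth the usual half point: the solid 9-11-0 program wins easily.
assert fitness(9, 11, 0, 0.5) > fitness(10, 0, 10, 0.5)   # 14.5 > 10.0
```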

: Here are some of my ideas, not that I believe them but I'm "just
: thinking out loud".
: 1) At 1 ply, maybe 1:3:3:5:9 is right, but at ply 9 where mobility can
: manifest itself, maybe the results I got when added to the 'magic
: mobility factor' compensate.
: 2) At ply 5, the most captures you can have is 3 pieces of his and 2
: of yours. Any results must be quantized to this. Because rarely is
: there a Queen for Rook, Knight, and Pawn trade, it is statistically
: impossible to see if this is an even trade across many situations.

Having more evaluation terms might help this.

: 3) The traditional 1:3:3:5:9 is wrong.

I think 1:3:3:5:9 is better than 1.0:2.3:2.1:3.4:5.5.

: What I will be doing next is to add more evaluation terms into the
: "gene" to see what grows.

First try giving half a point for draws.

: As usual, your comments and ideas are greatly appreciated.

Hope to hear from you again.

Peter Osterlund

Jul 5, 1996

On Thu, 4 Jul 1996, Chris Whittington wrote:

> In the land of the blind the one-eyed man is king.
>
> So why start with both sides 'blind' ?
>
> How about giving one side the magic 1:3:3:5:9
>
> or, better, 1 : 3.6 : 3.6 : 6 : 9.6

Is this really better? This means Q=R+B and Q=R+N. However, in most
situations I think the queen is better.

--
Peter Österlund Email: peter.o...@mailbox.swipnet.se
Sköndalsvägen 35 f90...@nada.kth.se
S-128 66 Sköndal Phone: +46 8 942647
Sweden

Peter Osterlund

Jul 5, 1996

On Thu, 4 Jul 1996 cma...@ix.netcom.com wrote:

> Results of the First Experiment
> -----------------------------------------------
>
> Just to test the idea, I did the following experiment. I stripped out
> all of the chess knowledge I could that was built into my program. I
> disabled all of the terms of my evaluation function except the single
> one under test, which was simple [non-positional] piece values. The
> "gene" was simply 5 floats. I started out with all the values set to
> 1.0 and had the program play against itself for 24 hours. The only fitness
> test was winning. A loss or draw meant death, and a win allowed a new
> mutated set of values. The mutation algorithm was simply adding or
> subtracting 0.1 from the values.
>
> At first, the games were painful to watch, but I noticed some very
> interesting things.
> The first was a verification of what someone in this NG posted before,
> that the mobility function seemed to magically appear by itself. This
> was more pronounced at deeper ply settings. Q: If mobility appears by
> itself, maybe we don't need to waste time computing it? Q: Why does
> it magically appear?
> The second phenomenon, which may be a result of the above, is that even
> though the pieces started out with identical values, the moves still
> reflected (to a small extent) the traditional piece values. In other
> words, it didn't trade knight for pawn etc. as you would expect it to,
> given that it didn't know any better. It seemed that even if a piece has
> the same material value, if it has more mobility it becomes worth
> more. Q: Is material value just a meaningless echo of mobility?
> All this from virtually no evaluation function!
> The final results converged in about the same time regardless of the
> search depth. This seemed to be because learning was faster per game
> at ply 7, but more games can be played at ply 5. More data needs to
> be collected, but there was no significant difference in time to
> converge.
> And now for the answer:
> Pawn 1.0
> Knight 2.3
> Bishop 2.1
> Rook 3.4
> Queen 5.5
> I was really expecting something closer to 1:3:3:5:9, so what went
> wrong?
> Here are some of my ideas, not that I believe them but I'm "just
> thinking out loud".
> 1) At 1 ply, maybe 1:3:3:5:9 is right, but at ply 9 where mobility can
> manifest itself, maybe the results I got when added to the 'magic
> mobility factor' compensate.
> 2) At ply 5, the most captures you can have is 3 pieces of his and 2
> of yours. Any results must be quantized to this. Because rarely is
> there a Queen for Rook, Knight, and Pawn trade, it is statistically
> impossible to see if this is an even trade across many situations.
> 3) The traditional 1:3:3:5:9 is wrong.
>
> What I will be doing next is to add more evaluation terms into the
> "gene" to see what grows.
>
> As usual, your comments and ideas are greatly appreciated.

This is an interesting experiment.

If I understood your experiment correctly, you start by generating N
different genes, all having the value 1:1:1:1:1. Then you play the genes
against each other, the losing gene dies and the winning gene mutates (is
it also duplicated?).

My suggestion is to generate the N starting genes randomly, for example
pick 5 random numbers in the interval [0,10] for each gene. This would
prevent the algorithm from getting trapped in locally optimal solutions,
which could be the cause of your unusual values.

Also, as Tom Kerrigan mentioned, you would probably benefit from using a
quiescence search in addition to the fixed-depth search.

My suggested test would be something like this:

for i=1 to N
    gene[i] = random starting value

while (not enough matches played) {
    pick two genes i and j randomly
    play a match between i and j
    if (i wins the match) {
        remove gene j
        duplicate gene i
        mutate both copies of i individually
    } else if (j wins the match) {
        // same as above, but with i and j swapped
    } else { // draw
        remove i and j
        somehow make two new genes (perhaps mutations of i and j)
    }
}

If you iterate the while loop many times, all genes may converge to
approximately the same value. However, it is also possible that the genes
will converge to many different values.

Is this how your test worked, except for the starting values? If so, how
did you handle the draw case? Your explanation indicates that the genes
would be removed. Are they replaced by other genes, or is the total number
of genes reduced?
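
For what it's worth, a runnable Python rendering of the pseudocode
above, under the stated assumptions (random initial genes; the loser
replaced by a mutated duplicate of the winner; play_match standing in
for a real engine match):

```python
import random

def random_gene():
    # Five piece values drawn uniformly from [0, 10].
    return [random.uniform(0.0, 10.0) for _ in range(5)]

def mutate(gene):
    child = list(gene)
    i = random.randrange(len(child))
    child[i] += random.choice([-0.1, 0.1])
    return child

def tournament(play_match, n_genes=20, n_matches=500):
    # play_match(a, b) -> "a", "b", or "draw"
    pool = [random_gene() for _ in range(n_genes)]
    for _ in range(n_matches):
        i, j = random.sample(range(n_genes), 2)
        result = play_match(pool[i], pool[j])
        if result == "draw":
            # Both genes die; replace them with mutants of themselves.
            pool[i], pool[j] = mutate(pool[i]), mutate(pool[j])
        else:
            # The loser is removed; the winner is duplicated and both
            # copies are mutated individually.
            winner = pool[i] if result == "a" else pool[j]
            pool[i], pool[j] = mutate(winner), mutate(winner)
    return pool
```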

Chris Whittington

Jul 6, 1996

Peter Osterlund <peter.o...@mailbox.swipnet.se> wrote:
>
> On Thu, 4 Jul 1996, Chris Whittington wrote:
>
> > In the land of the blind the one-eyed man is king.
> >
> > So why start with both sides 'blind' ?
> >
> > How about giving one side the magic 1:3:3:5:9
> >
> > or, better, 1 : 3.6 : 3.6 : 6 : 9.6
>
> Is this really better? This means Q=R+B and Q=R+N. However, in most
> situations I think the queen is better.
>
> --
> Peter Österlund Email: peter.o...@mailbox.swipnet.se
> Sköndalsvägen 35 f90...@nada.kth.se
> S-128 66 Sköndal Phone: +46 8 942647
> Sweden
>
>

True. More for Queen, maybe 11 or so

Chris Whittington


Robert Hyatt

Jul 6, 1996

Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:
:

Just for fun, here's what I'm using in Crafty v9.29:

P=1, B=N=4, R=6, Q=12. Notice that the proportions are nearly standard,
except that a queen is as good as two rooks. The B=N=4*P is something I had
to do to prevent trading a knight for 2 pawns plus a lot of positional
compensation that most of the time was not enough.

Larry Kaufman ran some tests and reported in the last CCR that with a computer,
the queen is at least as good as two rooks, and sometimes better. I experimented
and now agree with this. Crafty wins most games when it has a Q vs 2 rooks,
particularly against other computers.

I don't claim these numbers are magic, that they produce GM class chess, or
anything at all, other than they are better than P=1, B=N=3, R=5, Q=9,
*in Crafty*. Note the big qualification. Others may find these are gross
numbers that result in poor results. Caveat Emptor, then... :)

Bob


Chris Whittington

Jul 6, 1996


If you do 1:3:3:5:9 then one fault is an early BN for RP trade
often on f2/f7, which is invariably bad.

The other problem is an early minor piece for 3 pawns trade, which
is again bad.

Hence the NB need to be N/B > 3*P
And you'll also need R+P > B+N

Q+P versus R+R gets more complicated, usually R+R > Q+P is best.

Chris Whittington

Philippe Schnoebelen

Jul 7, 1996

> Q+P versus R+R gets more complicated, usually R+R > Q+P is best.

What do you mean with "best" ?

And with "usually" ? You mean, disregarding tests such as Larry Kaufman's
or Bob Hyatt's ?

--Philippe

Simon Read

Jul 7, 1996

CM: cma...@ix.netcom.com
CM> What I will be doing next is to add more evaluation terms into the
CM> "gene" to see what grows.
-->
I am fascinated by the above. Are the values you quote (1 2.3 2.1 3.4 5.5)
independent of the search depth used to learn them?

Can you tell us how many games were played at each search depth
before the piece values converged to those you quote above?

Regards
Simon


Simon Read

Jul 8, 1996

Other machine-learning investigators have hinted that they need some
randomness to get their programs to explore all the problem domain.
In other words, are you ensuring that a wide variety of positions
gets played? Are the opening moves random? If there is no randomness
at all, the programs will probably restrict themselves to a
limited set of openings and a limited set of positions.

Regards
Simon


Urban Koistinen

Jul 8, 1996

Simon Read (s.r...@cranfield.ac.uk) wrote:
: Other machine-learning investigators have hinted that they need some

Adding a random value with a bell-shaped distribution around 0 to
the evaluation should do the trick, but might have other, unwanted
effects, so I suggest picking starting positions from
GM games after 10 moves to make sure different games get played.

Martin Borriss

Jul 8, 1996

In article <83668499...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:
> hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:

> > Just for fun, here's what I'm using in Crafty v9.29:
> >
> > P=1, B=N=4, R=6, Q=12. Notice that the proportions are nearly standard,
> > except that a queen is as good as two rooks. The B=N=4*P is something I had
> > to do to prevent trading a knight for 2 pawns plus a lot of positional
> > compensation that most of the time was not enough.
> >

This is from gnuchess 4.0 pl75, and this is what I use as well.

/* Piece values */
#define valueP 100
#define valueN 350
#define valueB 355
#define valueR 550
#define valueQ 1100


> If you do 1:3:3:5:9 then one fault is an early BN for RP trade
> often on f2/f7, which is invariably bad.
>
> The other problem is an early minor piece for 3 pawns trade, which
> is again bad.
>

I agree, I have seen both with the first being the more serious/frequent problem.



> Hence the NB need to be N/B > 3*P
> And you'll also need R+P > B+N
>

You certainly mean B+N > (or >>) R.



> Q+P versus R+R gets more complicated, usually R+R > Q+P is best.
>

I don't know, but Bob Hyatt sounded more like Q >= R+R; still, Q+P >= R+R
is safe in any case, I think.

Martin

--
Martin....@inf.tu-dresden.de

Chris Whittington

Jul 9, 1996

mb...@irz.inf.tu-dresden.de (Martin Borriss) wrote:
>
>
> In article <83668499...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:
> > hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
>
> > > Just for fun, here's what I'm using in Crafty v9.29:
> > >
> > > P=1, B=N=4, R=6, Q=12. Notice that the proportions are nearly standard,
> > > except that a queen is as good as two rooks. The B=N=4*P is something I had
> > > to do to prevent trading a knight for 2 pawns plus a lot of positional
> > > compensation that most of the time was not enough.
> > >
>
> This is from gnuchess 4.0 pl75, and this is what I use as well.
>
> /* Piece values */
> #define valueP 100
> #define valueN 350
> #define valueB 355
> #define valueR 550
> #define valueQ 1100
>
>
> > If you do 1:3:3:5:9 then one fault is an early BN for RP trade
> > often on f2/f7, which is invariably bad.
> >
> > The other problem is an early minor piece for 3 pawns trade, which
> > is again bad.
> >
>
> I agree, I have seen both with the first being the more serious/frequent problem.
>
> > Hence the NB need to be N/B > 3*P
> > And you'll also need R+P > B+N
> >
>
> You certainly mean B+N > (or >>) R.

Yup, you're right.

>
> > Q+P versus R+R gets more complicated, usually R+R > Q+P is best.
> >
>
> I don't know, but Bob Hyatt sounded more like Q >= R+R, but Q+P >=R+R
> is safe in any case I think.
>

Disagree.
In normal chess there is almost no way I'd give up R+R for Q+P,
without some very large positional compensation as well, like a massive
attack and the enemy rooks likely to remain disjointed.
Just as soon as those rooks can start to co-operate, it would
spell big trouble.

The value of the Q depends on the number of pawns. With many pawns
the file mobility of the rooks is restricted, while the queen can still
move along diagonals.
So, as ever, it depends.

In general, evaluation functions will stress various parameters, and
not others - sorry for the truism.
Eg. bishop mobility is often measured, whereas the knight's is not
(knights are often left to the piece-square table). So, if you're not
careful, the bishop will 'grow' from 3 to 3.x, while the knight stays
at 3.

Or, eg 2: the value of the queen (and other pieces) in CSTal is
strongly dependent on king-attack status. CSTal could double or
triple the value of such a piece.

So maybe a program that doesn't do this would be better with Q=12
or so. But it's a blunt instrument.

Chris Whittington


> Martin
>
> --
> Martin....@inf.tu-dresden.de


Robert Hyatt

Jul 9, 1996

Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:
:

I vary the value of *all* pieces based on the king safety of the opponent.
However, in nearly all cases where the position is Q+pawns vs RR+pawns, the
Q is better. The two rooks don't have nearly the mobility of the queen if
there are no other pieces on the board. The two rooks are constrained to
stay connected on the same rank or file or else run into a series of queen
checks that end up forking the king and one of the hanging rooks.

Larry ran a bag full of tests using Mchess Pro, Genius, Rebel, and I don't
know who else, and came to the conclusion from letting them slug it out, that
the Q was better. I've been convinced after making the change and watching
Crafty play games against Genius/Fritz/etc where it has a Q vs two rooks and
the queen usually wins. Of course there are positions where this doesn't hold
water, but the side with the queen doesn't have to sweat the forks, while the
side with two rooks can disconnect them only after very careful analysis to make
sure one doesn't get snagged.

As far as mobility making a bishop "grow" you simply need to bias the result,
so that a bishop that attacks 1/2 of the max squares gets 0, while one
that is on two open diagonals gets X and if it is completely blocked in with
no mobility at all, it gets -X.
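
That centering scheme as a formula (constants hypothetical; 13 is a
bishop's maximum number of attacked squares, from a central square):

```python
def mobility_bonus(attacked, max_squares=13, x=0.25):
    # Linear score centered so that attacking half the maximum scores 0,
    # full mobility scores +x, and a completely blocked piece scores -x.
    return x * (2.0 * attacked / max_squares - 1.0)
```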

I came to the current P=1, N=B=4, R=6, Q=12 after several months of testing and
watching. This has worked the best for crafty, although, again, this is really
dependent on the program involved. If you aren't a deep searcher with lots of
extensions (like Crafty) then the queen's mobility advantage and forking
potential is maybe not as significant. I witnessed a Crafty vs Fritz game
earlier this week where if you counted material, things were even, with crafty
in a Q+5P vs R+R+4P ending. Normal material says this is even, Crafty said
it was +1.5 ahead, and over the next 20 moves or so, it went from +1.5 to +2.5,
to (eventually) +3.5, and then it traded the queen for the two rooks leaving it
with a trivially won KP ending. The two rooks couldn't stay connected and
cover all of the pawns, while the queen flitted around the board to find squares
where it attacked the opposing king, one of the rooks (to make them stay
connected) and a pawn or two. Something had to fall, and it did, on several
occasions.

Probably in a KRR vs KQ ending it doesn't matter what you value the pieces,
because so long as a rook isn't lost outright, this is going to be a dead
draw. But with pawns, the queen seems better because it can do so many things
at one time.

Tom Kerrigan

Jul 9, 1996

Not to question your chess skill, Chris, but I've noticed that for a few programs
(including mine) Q+P=R+R is reasonable. If the value of the queen is set too high,
the program often ends up with a very powerful, misplaced piece (the queen) vs.
two powerful pieces (the rooks) which are less likely to be misplaced simply
because there are two of 'em. :)

Cheers,
Tom

_______________________________________________________________________________
Tom Kerrigan kerr...@frii.com O-

Children are natural mimics who act like their parents despite every
effort to teach them good manners.

Chris Whittington

Jul 9, 1996

kerr...@frii.com (Tom Kerrigan) wrote:
>
> Not to question your chess skill, Chris, but I've noticed that for a few programs
> (including mine) Q+P=R+R is reasonable. If the value of the queen is set too high,
> the program often ends up with a very powerful, misplaced piece (the queen) vs.
> two powerful pieces (the rooks) which are less likely to be misplaced simply
> because there are two of 'em. :)
>
> Cheers,
> Tom
>

I know I tend to write these posts like I write programs - i.e. full
of bugs :) - but I thought I was arguing *not* to set the queen
too high. If I wrote it wrong, this is what I meant: I would not give
up two rooks for a queen and a pawn in a normal human game of chess.

I.e., I'm trying to say that R+R > Q+P,
except that there are special circumstances - which are beyond the
comprehension of a piece-square table.

Chris Whittington

Chris Whittington

Jul 9, 1996

In human chess this is just not so.
It just goes to show that if you let 'blind' boxers slug it out
then the conclusions may not be that accurate.

Give me (as a human) R + R + 4xpawns on e2f2g2h2, and let Crafty
have the equivalent Q + 5 pawns on d7e7f7g7h7. The rooks would win
hands down.

With fewer pawns it gets even easier (although the king will need a
pawn or two to shelter it from the rain of checks).

Just a matter of technique. Whilst sheltering the white king with the pawns
and rooks, as needed, double the rooks on the 7th - black is dead.

>
> As far as mobility making a bishop "grow" you simply need to bias the result,
> so that a bishop that attacks 1/2 of the max squares gets 0, while one
> that is on two open diagonals gets X and if it is completely blocked in with
> no mobility at all, it gets -X.

This is true. My point was not this. It was that evaluation functions
will tend to give points to pieces, just for existing in some way.
*If* you're not careful, then the relative piece values can get out
of kilter with the 'golden' values - whatever they might be.

To accommodate this, some programmers might want to adjust their golden
values.

My guess is that this recommendation of a high value for the
queen is just a blunt mechanism for keeping the tactically active
queen on the board, when it might be better to measure the tactically
active component of the queen directly within the evaluation function.
Otherwise, you'll end up keeping the queen in inappropriate situations.

But you can dismiss this as me banging on about knowledge against
search again, I suppose.

>
> I came to the current P=1, N=B=4, R=6, Q=12 after several months of testing and
> watching. This has worked the best for crafty, although, again, this is really
> dependent on the program involved.

Yes, of course, a blunt knife (Q=12) is better than an old piece of
flint.

But better is a sharp knife and some fine adjustables for specific
situations.


> If you aren't a deep searcher with lots of
> extensions (like Crafty) then the queen's mobility advantage and forking
> potential is maybe not as significant. I witnessed a Crafty vs Fritz game
> earlier this week where if you counted material, things were even, with crafty
> in a Q+5P vs R+R+4P ending. Normal material says this is even, Crafty said
> it was +1.5 ahead, and over the next 20 moves or so, it went from +1.5 to +2.5,
> to (eventually) +3.5, and then it traded the queen for the two rooks leaving it
> with a trivially won KP ending. The two rooks couldn't stay connected and
> cover all of the pawns, while the queen flitted around the board to find squares
> where it attacked the opposing king, one of the rooks (to make them stay connected) and a pawn or two. Something had to fall, and it did. On several
> occasions.

Crafty was playing the proverbial blind boxer. Maybe Fritz doesn't
know to connect rooks on the 7th ? (it doesn't know about rookfile
pawns being a draw according to a post last week).

The key to these positions, for the 2 rooks, is to shelter the king.
Obviously a loose queen able to check everywhere has an effective
mobility doubling (it can check, then make a free move).


>
> Probably in a KRR vs KQ ending it doesn't matter what you value the pieces,
> because so long as a rook isn't lost outright, this is going to be a dead
> draw. But with pawns, the queen seems better because it can do so many things
> at one time.
>
>


Queen is probably better when it has 7 or 8 of its own pawns to work
with. It needs a lot of pawns.

Chris Whittington

Bruce Moreland

Jul 9, 1996

In article <83694255...@cpsoft.demon.co.uk>, chr...@cpsoft.demon.co.uk
says...

>Give me (as a human) R + R + 4xpawns on e2f2g2h2, and let Crafty
>have the equivalent Q + 5 pawns on d7e7f7g7h7. The rooks would win
>hands down.
>
>With fewer pawns it gets even easier (although the king will need a
>pawn or two to shelter it from the rain of checks).
>
>Just a matter of technique. Whilst sheltering the white king with the pawns
>and rooks, as needed, double the rooks on the 7th - black is dead.

If you take a practical KQP* vs KRRP* it is going to be a matter of piece
placement. I've seen the rooks recover quickly and get the upper hand, and I
have seen the rooks start out misplaced and get smeared.

Good luck recognizing this in an eval function, perhaps you could determine
who was ahead if the material was even, but if your eval function is volatile
enough that the positional terms will overshadow a pawn-minus, I worry that
you will blow lots of other practical cases.

I hope you guys don't get into a giant fight over this, it seems like a
pretty trivial point.

bruce

--
The opinions expressed in this message are my own personal views
and do not reflect the official views of Microsoft Corporation.


D Kirkland

Jul 10, 1996
: 1) At 1 ply, maybe 1:3:3:5:9 is right, but at ply 9 where mobility can

: manifest itself, maybe the results I got when added to the 'magic
: mobility factor' compensate.
: 2) At ply 5, the most captures you can have is 3 pieces of his and 2
: of yours. Any results must be quantized to this. Because rarely is
: there a Queen for Rook, Knight, and Pawn trade, it is statistically
: impossible to see if this is an even trade across many situations.
: 3) The traditional 1:3:3:5:9 is wrong.
:
: What I will be doing next is to add more evaluation terms into the
: "gene" to see what grows.
:
: As usual, your comments and ideas are greatly appreciated.
:
: Chris Mayer

Okay, I will give this a shot...

I think it is pretty obvious that, except for the king (for
equally obvious reasons), material values reflect the mobility
(or mobility potential) of the pieces (the queen is the most
mobile, the pawn the least...).

Even with equal material values (queen=1.0, pawn=1.0, ...),
a reasonably deep search will not want to trade a queen for
a pawn because it can see that it will lose MORE than just
the queen. The deeper the search, the more true this is.

If you look at the search trees you will find that with
equal values, the search will gladly make a move that leads
to a queen_for_a_pawn trade at the horizon. But it can see
that a queen_for_a_pawn trade at the root leads to something
more like a queen_for_2+_pawns.

If you could search deep enough, equal piece values would
be just fine. And if you could see the whole tree, then
you wouldn't need piece values at all! But ALL CHESS
PLAYERS (human or machine) have a search depth problem!
That's the whole point of having piece values (or any other
scoring!). Piece values help make up for lack of search
depth!

Lack of mobility leads to lost pieces. And again, a
reasonably deep search with even simple scoring can see
this. And again, the deeper the search the better it
will do. And as above, mobility is just another way to
help with the search depth problem.

Now about your scores...
You mean to tell me that your program came up with
pawn=1.0? (Ya right!) Or did you adjust the scores so
that pawn=1.0? (Obviously!)

Scoring material is a ratio of values between pieces.
Usually the ratios are adjusted so that pawn=1.0. But in
your case, this can be misleading. Why? Because your
program is trying to find an AVERAGE value that will work.
Average values don't always work well. Which is why most
adjust the values (with things like mobility!). Pawns are
often worth something very different than 1.0. And on
average, pawns are worth MORE than 1.0. A pawn that is
about to promote can be worth more than a rook. Your
program is simply trying to adjust for this while using
an AVERAGE value!

A BETTER way (off the top of my head) to compare your scores
to the values 1,3,3,5,9 would be to adjust the scores to
have the same total (1+3+3+5+9=21). To do this, simply find
a multiplier.

(1+3+3+5+9)/(1+2.3+2.1+3.4+5.5) = 21/14.3 = ~1.47

Then use this multiplier to adjust your scores...

pawn = 1.47
knight = 3.38
bishop = 3.09
rook = 5.00
queen = 8.09

Looks much better, no?
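The adjustment can be written out in a couple of lines (a sketch, using the numbers above):

```python
# Rescale the GA's evolved values so their sum matches the sum of the
# traditional 1,3,3,5,9 values, as described above.
traditional = [1, 3, 3, 5, 9]          # P, N, B, R, Q
evolved = [1.0, 2.3, 2.1, 3.4, 5.5]    # Chris's GA result

multiplier = sum(traditional) / sum(evolved)  # 21 / 14.3, about 1.47
rescaled = [v * multiplier for v in evolved]
# roughly 1.47, 3.38, 3.09, 5.00, 8.09 -- the table above
```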

Keep in mind that your program was trying to find an
AVERAGE. Something that would work in ALL cases.

If the pawn score seems a bit high, keep in mind that
sometimes pawns can be worth many times the usual 1.0!
And your program is just trying to average ALL pawn
values! On the other hand, a queen is more likely to
lose value (due to lack of mobility) than gain value.

In other words, for average values, your program is
not doing too badly...

Now, for a next step, just put in a mobility factor
for each piece and let the program adjust this factor
along with the piece values...
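One hypothetical way to lay out such a gene (the value ranges and mutation step are made-up assumptions, just to show the shape of it):

```python
import random

# Hypothetical gene layout for the suggested next step: one material
# value plus one mobility weight per piece type, evolved together.
PIECES = ["P", "N", "B", "R", "Q"]

def random_gene(rng):
    """Random starting individual: value + mobility weight per piece."""
    return {p: {"value": rng.uniform(0.5, 12.0),
                "mobility": rng.uniform(0.0, 0.2)}
            for p in PIECES}

def mutate(gene, rng, sigma=0.1):
    """Copy the gene and nudge one randomly chosen weight."""
    child = {p: dict(w) for p, w in gene.items()}
    piece = rng.choice(PIECES)
    key = rng.choice(["value", "mobility"])
    child[piece][key] = max(0.0, child[piece][key] + rng.gauss(0.0, sigma))
    return child
```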

Good luck!
dan (kirk...@ee.utah.edu)

Tim Mirabile

Jul 10, 1996

Chris Whittington <chr...@cpsoft.demon.co.uk> wrote:

>hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
>>
>> Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:

>> : Peter Osterlund <peter.o...@mailbox.swipnet.se> wrote:
>> : >
>> : > On Thu, 4 Jul 1996, Chris Whittington wrote:
>> : >
>> : > > In the land of the blind the one-eyed man is king.
>> : > >
>> : > > So why start with both sides 'blind' ?
>> : > >
>> : > > How about giving one side the magic 1:3:3:5:9
>> : > >
>> : > > or, better, 1 : 3.6 : 3.6 : 6 : 9.6
>> : >
>> : > Is this really better? This means Q=R+B and Q=R+N. However, in most
>> : > situations I think the queen is better.
>> : >
>> : > --
>> : > Peter Österlund Email: peter.o...@mailbox.swipnet.se
>> : > Sköndalsvägen 35 f90...@nada.kth.se
>> : > S-128 66 Sköndal Phone: +46 8 942647
>> : > Sweden
>> : >
>> : >
>> :
>> : True. More for Queen, maybe 11 or so
>> :
>> : Chris Whittington
>> :
>>

>> Just for fun, here's what I'm using in Crafty v9.29:
>>
>> P=1, B=N=4, R=6, Q=12. Notice that the proportions are nearly standard,
>> except that a queen is as good as two rooks. The B=N=4*P is something I had
>> to do to prevent trading a knight for 2 pawns plus a lot of positional
>> compensation that most of the time was not enough.
>>

>> Larry Kaufman ran some tests and reported in the last CCR that with a computer,
>> the queen is at least as good as two rooks, and sometimes better. I experimented
>> and now agree with this. Crafty wins most games when it has a Q vs 2 rooks,
>> particularly against other computers.
>>
>> I don't claim these numbers are magic, that they produce GM class chess, or
>> anything at all, other than they are better than P=1, B=N=3, R=5, Q=9,
>> *in Crafty*. Note the big qualification. Others may find these are gross
>> numbers that result in poor results. Caveat Emptor, then... :)
>>
>> Bob

>If you do 1:3:3:5:9 then one fault is an early BN for RP trade


>often on f2/f7, which is invariably bad.
>
>The other problem is an early minor piece for 3 pawns trade, which
>is again bad.

Yes, another problem I see with some programs is the willingness to give up a
minor piece for two pawns in positions where it would otherwise lose one of
its own pawns (i.e. it thinks it's getting three pawns for the piece). In
most cases, it's easier to hold the pawn-down position than the
piece-down-for-two-pawns position.

>Hence the NB need to be N/B > 3*P
>And you'll also need R+P > B+N

I assume you have this backwards - B+N > R+P. In fact, in an early middlegame,
B+N > R+2*P is even better. Of course, the closer you get to an endgame, the
more the rooks and pawns will be worth.

>Q+P versus R+R gets more complicated, usually R+R > Q+P is best.

Good idea. In positions where the king is safe from Q checks, the rooks can
often win some additional pawns, and this may be well beyond the horizon. On
the other hand, a deep searcher may see this, and this may be an unnecessary
penalty.

But probably the most important thing of all to get right is the value of the
exchange, which I don't see mentioned here. A friend of mine who is >2400
commented that he is amazed by how often Chess Genius finds strong exchange
sacs.

+---------------------------------+
| Tim Mirabile <t...@mail.htp.com> |
| PGP Key ID: B7CE30D1 |
+---------------------------------+

Martin Borriss

Jul 10, 1996

In article <83693244...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:

[...]

> Ie, I'm trying to say that R+R > Q+P
> except that there are special circumstances - which are beyond the
> comprehension of a piece-square table.
>

It is interesting that you are really convinced of this. I agree that there might
be special circumstances sometimes, but my rule of thumb goes like this:

a) Q+P > R+R. Even more so if the position is complicated (unsafe kings) and there
are minor pieces left as well.
b) Q+P = R+R if the queen is alone and the 'king of the rooks :)' is safe.

The same holds for other ratios as well, e.g. in an ending you can often fight
with R+(B or N) vs. Q, but usually in a middlegame you can't.
The messier a position is, the stronger a queen becomes.

Martin

--
Martin....@inf.tu-dresden.de

Martin Borriss

Jul 10, 1996

In article <83694255...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:

[...]


>
> In human chess this is just not so.
> It just goes to show that if you let 'blind' boxers slug it out
> then the conclusions may not be that accurate.
>
> Give me (as a human) R + R + 4xpawns on e2f2g2h2, and let Crafty
> have the equivalent Q + 5 pawns on d7e7f7g7h7. The rooks would win
> hands down.
>

This should be a draw 'hands down' for both sides with the participants being of
no importance (e.g., Crafty should hold this against Kasparov, with both colors).
Maybe Judit Polgar would lose it (remembering her ending against Kasparov
recently). ;)
You have better practical chances (if any) to win this with the queen.



> With fewer pawns it gets even easier (although the king will need a
> pawn or two to shelter it from the rain of checks).
>
> Just a matter of technique. Whilst sheltering the white king with the pawns
> and rooks, as needed, double the rooks on the 7th - black is dead.
>

I have no idea why black is dead if two rooks appear on the 7th rank. The
most simpleminded 'defense' for my queen is to protect my extra pawn with K and
Q, inviting you into a drawish pawn ending.

Martin

--
Martin....@inf.tu-dresden.de

Chris Whittington

Jul 10, 1996

mb...@irz.inf.tu-dresden.de (Martin Borriss) wrote:

>
>
> In article <83693244...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:
>
> [...]
>
> > Ie, I'm trying to say that R+R > Q+P
> > except that there are special circumstances - which are beyond the
> > comprehension of a piece-square table.
> >
>
> It is interesting that you are really convinced of this. I agree that there might
> be special circumstances sometimes, but my rule of thumb goes like this:
>
> a) Q+P > R+R. Even more so if the position is complicated (unsafe kings) and there
> are minor pieces left as well.
> b) Q+P = R+R if the queen is alone and the 'king of the rooks :)' is safe.
>
> The same holds for other ratios as well, e.g. in an ending you can often fight
> with R+(B or N) vs. Q, but usually in a middlegame you can't.
> The messier a position is, the stronger a queen becomes.
>
> Martin
>

I don't really think we disagree.

Absolutely the queen is more valuable as the 'messiness' increases.
Good term 'messiness'.

It's just that I account for the 'messiness' of the position within the
evaluation function, not by a global piece-table value, so without
messiness, I posit that R+R > Q+P.

Chris Whittington

Chris Whittington

Jul 10, 1996

mb...@irz.inf.tu-dresden.de (Martin Borriss) wrote:
>
>
> In article <83694255...@cpsoft.demon.co.uk>, Chris Whittington <chr...@cpsoft.demon.co.uk> writes:
>
> [...]

> >
> > In human chess this is just not so.
> > It just goes to show that if you let 'blind' boxers slug it out
> > then the conclusions may not be that accurate.
> >
> > Give me (as a human) R + R + 4xpawns on e2f2g2h2, and let Crafty
> > have the equivalent Q + 5 pawns on d7e7f7g7h7. The rooks would win
> > hands down.
> >
>
> This should be a draw 'hands down' for both sides with the participants being of
> no importance (e.g., Crafty should hold this against Kasparov, with both colors).
> Maybe Judit Polgar would lose it (remembering her ending against Kasparov
> recently). ;)
> You have better practical chances (if any) to win this with the queen.
>
> > With fewer pawns it gets even easier (although the king will need a
> > pawn or two to shelter it from the rain of checks).
> >
> > Just a matter of technique. Whilst sheltering the white king with the pawns
> > and rooks, as needed, double the rooks on the 7th - black is dead.
> >
>
> I have no idea why black is dead if two rooks appear on the 7th rank. The
> most simpleminded 'defense' for my queen is to protect my extra pawn with K and
> Q, inviting you into a drawish pawn ending.

I decline :)

Tie the Q and K to the defence of one of the 7th-rank pawns, as you say.
Now white has a spare piece (the king): advance it, using the pawns
as a shield and the rooks to control possible check squares.

White can choose when to liquidate the R+R for Q+P, and, with
the white king suitably placed and the black king back on the 7th
should be able to win.

Or am I being over-optimistic :) ?

D.Regis

Jul 10, 1996

In article <4rfqs4$a...@dfw-ixnews5.ix.netcom.com> cma...@ix.netcom.com writes:
>And now for the answer:
>Pawn 1.0
>Knight 2.3
>Bishop 2.1
>Rook 3.4
>Queen 5.5
>I was really expecting something closer to 1:3:3:5:9, so what went
>wrong?

Fascinating, and thank you for the work and the post.

The opinion of a middling chessplayer rather than a strong programmer:

We all know not to bring the Queen out too early, and it's actually
damned difficult to get the Rooks out early.

Is it that the average 99-ply value of the Queen is something like 9
pawns, but during the opening, and in closed middlegames, it's just
another King? i.e. very manoeuvrable but not the monster that it can
be once the position opens up?

Something like this might also explain why you're getting Knights
worth more than Bishops. The technique of exploiting the advantage
of Bishop over Knight is not too obscure but I'd guess it's beyond
the few ply in use.

That will be tuppence

--

May your pieces harmonise with your Pawn structure and
your sacrifices be sound in all variations


D
--
_

Chris Whittington

Jul 10, 1996

Can't resist pointing you at WMCCC Paderborn Chess System
Tal v. Genius.
Exchange sac was by CST :)
Genius never saw it :)

Chris Whittington

Robert Hyatt

Jul 10, 1996

Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:

Not necessarily. Note that the side with the Q doesn't have to allow the
two rooks to double on the 7th, and except for unusual positions, it is
not terribly difficult for the queen to keep the two rooks out. Queens are
good at multiple threats, such as attacking a pawn, defending a key square,
and attacking one rook, making it difficult for the rooks to stay doubled.
The presence of one minor piece makes this even easier to handle, and makes
the queen even more dangerous.


Wolfgang Kuechle

Jul 10, 1996

In article <4rfqs4$a...@dfw-ixnews5.ix.netcom.com> cma...@ix.netcom.com writes:
>And now for the answer:
>Pawn 1.0
>Knight 2.3
>Bishop 2.1
>Rook 3.4
>Queen 5.5
>I was really expecting something closer to 1:3:3:5:9, so what went
>wrong?

My explanation:
With both queens on the board a queen may well be worth only 5.5 because
its ability to set up threats on different areas of the board is
considerably reduced by the opponent's queen. In addition, the strength
of the queen usually increases in low-material positions. The problem
with a self-tuning algorithm is that both queen-endings as well as
games with only one queen on the board are relatively rare and therefore
the strength of this piece in such situations may not enter the evaluation
function. Comments ?

Regards,
Wolfgang Kuechle

Ralph Betza

Jul 10, 1996

Of course, I am also interested in the values of chesspieces
( http://www.li.net/~sappe/pieceval/index.html )
but not limited to the few pieces currently used in FIDE chess.

You may be interested in my finding that the value of a piece depends on
its own mobility/capture/etcetera, of course, but also
on its initial position on the chessboard, and
on its ranking with respect to the other pieces on the board, and
on how its powers mesh with, or conflict with, the powers of other pieces
in its own army.

In particular, have a look at the Colorbound Clobberers...
( http://www.li.net/~sappe/chessvar/DAN/colclob.html )

If you found yourself interested by the above, you should take a look at
http://www.li.net/~sappe/chessvar/whatsnew.html

--
Ralph Betza (FM)


Benjamin Tracy

Jul 10, 1996

On 8 Jul 1996, Simon Read wrote:

>
> Other machine-learning investigators have hinted that they need some
> randomness to get their programs to explore all the problem domain.
> In other words, are you ensuring that a wide variety of positions
> gets played? Are the opening moves random? If there is no randomness
> at all, the programs will probably restrict themselves to a
> limited set of openings and a limited set of positions.
>

This might not be a bad thing. Consider the possibility of the machine
learning to value pieces differently when using different openings, or in
different positions in a game! By extension, the machine might be able
to learn what pieces are valuable to the opposition! This could lead to
a major advance.

Ben

Chris Whittington

Jul 10, 1996

Yup, you can add this disclaimer to every attempt at generating
a heuristic.

> Note that the side with the Q doesn't have to allow the
> two rooks to double on the 7th. and except for unusual positions, it is
> not terribly difficult for the queen to keep the two rooks out. Queens are
> good at multiple threats, such as attacking a pawn, defending a key square
> and attacking one rook making undoubling them difficult. The presence of
> one minor piece makes this even easier to handle, and makes the queen even
> more dangerous.
>

Queen gets more dangerous the more support it has from
other pieces (and pawns).

Simplification favours the rooks.

In complex (Martin Borriss nicely call them 'messy') positions the
mobility and potential mating capability of the queen are strong.

In simpler, or blocked, positions the ability of R+R to attack a
fixed objective overcomes the queen's ability to defend.

2 attackers get to win over 1 defender.

Perhaps we could just agree to differ?
If nothing else, r.g.c.c shows that there is no one correct
solution ........
What works for you may not work for me and vice versa.

Chris Whittington

Bruce Moreland

Jul 10, 1996

In article <4rusuk$n...@news.cc.utah.edu>, dbk...@cc.utah.edu says...

>
>I think it is pretty obvious that, except for the king (for
>equally obvious reasons), material values reflect the mobility
>(or mobility potential) of the pieces (the queen is the most
>mobile, the pawn the least...).

There was a recent post that attempted to directly relate average mobility
to "point value". I think this whole idea is bogus.

The point values were derived from practical play. They work because in
many practical cases you can figure out who is "ahead" in a position with
imbalanced material by counting points. Two pawns usually isn't enough
for a piece, a rook and a minor usually aren't enough for a queen, a pawn
usually isn't enough for the exchange, etc.

The pieces not only strike at X squares, the geometry of their attack
"footprint" is also different. Different combinations of pieces
coordinate well or poorly with each other, some pieces perform better in
open positions, and some pieces attain more value in the endgame.

All of this has an effect upon the point value of a piece.

bruce


Tom Kerrigan

Jul 11, 1996

>Scoring material is a ratio of values between pieces.
>Usually the ratios are adjusted so that pawn=1.0. But in
>your case, this can be misleading. Why? Because your
>program is trying to find an AVERAGE value that will work.
>Average values don't always work well. Which is why most
>adjust the values (with things like mobility!). Pawns are
>often worth something very different than 1.0. And on

When "the ratios are adjusted so that pawn=1.0" then there is no point where

"Pawns are often worth something very different than 1.0."

A pawn is a pawn.

Don't rip on somebody for normalizing a few point values.

Cheers,
Tom

_______________________________________________________________________________
Tom Kerrigan kerr...@frii.com O-

In Pocataligo, Georgia, it is a violation for a woman over 200 pounds
and attired in shorts to pilot or ride in an airplane.

Jay Scott

Jul 11, 1996

In article <4ru19v$n...@willis1.cis.uab.edu>, Robert Hyatt (hy...@crafty.cis.uab.edu) wrote:
>However, in nearly all cases where the position is Q+pawns vs RR+pawns, the
>Q is better. The two rooks don't have nearly the mobility of the queen if
>there are no other pieces on the board.

My theory is that Q beats RR for a program because programs are better
at playing the queen than the rooks. Programs understand the tactical
trickiness of the queen better than they understand the way to put pressure
on with rook maneuvers. I think that the rook is the hardest
piece for a program to understand well.

RR is better for a human because humans have different playing skills.

Of course, I've already seen people disagree in this thread. :-)

Jay Scott <j...@forum.swarthmore.edu>

Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html

Bruce Moreland

Jul 11, 1996

In article <4s3eui$m...@larch.cc.swarthmore.edu>, j...@forum.swarthmore.edu
says...

>
>In article <4ru19v$n...@willis1.cis.uab.edu>, Robert Hyatt
(hy...@crafty.cis.uab.edu) wrote:
>>However, in nearly all cases where the position is Q+pawns vs RR+pawns,
the
>>Q is better. The two rooks don't have nearly the mobility of the queen
if
>>there are no other pieces on the board.
>
>My theory is that Q beats RR for a program because programs are better
>at playing the queen than the rooks. Programs understand the tactical
>trickiness of the queen better than they understand the way to put
pressure
>on with rook maneuvers. I think that the rook is the hardest
>piece for a program to understand well.

A bishop is way harder. In fact, a rook might be the easiest piece. A
rook can get in trouble if it gets into the center of the board too early,
but every piece has problems like this (knight on a8, queen on b7, etc.).

bruce


Robert Hyatt

Jul 11, 1996

Jay Scott (j...@forum.swarthmore.edu) wrote:
: In article <4ru19v$n...@willis1.cis.uab.edu>, Robert Hyatt (hy...@crafty.cis.uab.edu) wrote:
: >However, in nearly all cases where the position is Q+pawns vs RR+pawns, the

: >Q is better. The two rooks don't have nearly the mobility of the queen if
: >there are no other pieces on the board.
:
: My theory is that Q beats RR for a program because programs are better

: at playing the queen than the rooks. Programs understand the tactical
: trickiness of the queen better than they understand the way to put pressure
: on with rook maneuvers. I think that the rook is the hardest
: piece for a program to understand well.

First, this is certainly correct in middlegames. I'd much rather see Crafty
keep queens on, even if it has to give up a pawn to do so, because its
tactical skills can use the queen very effectively. This is likely what
Larry Kaufman was measuring when he had program vs program tests with Q vs RR
and he found that the Q was much better. I don't remember the numbers from
CCR, but maybe 16 wins 4 losses for the queen? I'll try to dig it up at the
office when I get in.

Second, as material is traded, the two rooks become better as several have
pointed out. However, in developing Crafty, I've been using ICC as my
"proving ground" which means mostly playing humans (although there are
lots of P6-waving programs there now, and they frequently play many games with
Crafty). In these games (human) the queen (in the hands of a computer that's
reasonably fast like Crafty or any of the others) is really quite dangerous,
particularly at faster time controls.

:
: RR is better for a human because humans have different playing skills.

cma...@ix.netcom.com

Jul 12, 1996

I just got back from a vacation, and thank you all for the responses!

The main point of the first experiment was to determine 1) is the
approach of a genetic algorithm worth spending time on, and 2) is
there any interest. The actual results were not as exciting to me as
the fact that results actually emerged as opposed to a random walk
across the genetic space. It was extremely flawed in many [other]
respects, but it did serve its purpose. What I would like to do next
is the following:
1) Sort through the responses and e-mails.
2) Design a second experiment.
3) Post the proposal and get feedback.
4) Write the code
5) Post the results
The task I would like to accomplish by this 2nd experiment is 1)
define a single specific subset of chess positions, and 2) find the
optimal evaluation function for this subset within a predefined set of
algorithm parameters.
(The astute reader will notice that I have completely glossed over
half of the original problem, which is to first algorithmically
determine which subset a given position is a member of.)

Chris Mayer

Jay Scott

Jul 12, 1996

In article <4s4stv$4...@sjx-ixn6.ix.netcom.com>, cma...@ix.netcom.com wrote:
>The task I would like to accomplish by this 2nd experiment is 1)
>define a single specific subset of chess positions, and 2) find the
>optimal evaluation function for this subset within a predefined set of
>algorithm parameters.

I suggest a complex endgame, like B+pawns versus N+pawns for any equal
number of pawns on each side. That's simple enough that you can see what's
going on, and difficult enough to make it a fascinating problem. If that's
too hard, leave out the minor pieces. :-)

I think it makes sense to define the subsets by the material on the board,
because it handles the discontinuity-between-subsets problem automatically:
there SHOULD be an evaluation discontinuity when material changes. Other
definitions are probably as good, though.
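A sketch of such a material-keyed table (the signature format here is my own assumption, not anything from the experiment):

```python
from collections import Counter

# Key evaluation subsets by the material on the board, so each material
# balance gets its own evolved weights, as suggested above.
def material_signature(white_pieces, black_pieces):
    """Order-independent signature, e.g. B+3P vs N+3P."""
    def side(pieces):
        return tuple(sorted(Counter(pieces).items()))
    return (side(white_pieces), side(black_pieces))

# hypothetical table mapping signature -> evolved evaluation weights
weights_by_subset = {}
sig = material_signature(["B", "P", "P", "P"], ["N", "P", "P", "P"])
weights_by_subset.setdefault(sig, {"untrained": True})
```

Because the key changes exactly when material changes, the evaluation is allowed to jump between subsets, which is the one place a discontinuity is wanted.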

William Tunstall-Pedoe

Jul 13, 1996

In article <4r7vmj$8...@dfw-ixnews7.ix.netcom.com>, cma...@ix.netcom.com
writes
>Has anyone ever tried using a genetic algorithm to come up with
>locally optimized evaluation functions?

I wrote a paper on this subject about five years ago. It was published
in the ICCA Journal in 1991 as "Genetic Algorithms Optimising Evaluation
Functions", with me (William Tunstall-Pedoe) as the author.

>I was thinking of possible
>ways of solving specific subclasses of chess positions, and this
>particular idea came to me. Unless it has already been done, I am
>considering writing a program to do this. The basic idea is to first
>use a classification function to determine which eval function to use,
>and optimize each individual eval function using a genetic algorithm.
>I know that most programs do this to a limited extent by defining a
>opening/middle/endgame classification, but I am interested in defining
>much more specific subclasses.
>This also raised an interesting question. Should a system like this
>be trained on databases of great games by humans to come up with
>similar moves, or by playing against itself? (or possible both?)
>

Self-play is fraught with difficulties and is probably too slow for
this. You will need thousands of generations, and each individual in a
generation will have to play many games to get a meaningful fitness.
Multiply it all up and you end up with months or years to get results.

My method was to measure fitness by giving each individual in the
population random positions from grandmaster games, allowing it a very
short time to find a move, and awarding it a point if it chose the same
move the grandmaster actually made. This is, of course, imperfect for a
number of reasons, but it should correlate with chess strength, and a
strong statistical correlation should be all you need for a fitness function.

I set a range around what I considered to be the true values for the
various material and positional scores and let the genetic algorithm
determine where the optimum points in those ranges were.
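A minimal sketch in C of this range-bounded setup (the piece set, ranges,
and population size are illustrative only, and the fitness evaluation
against grandmaster positions is left as a hook into your own engine):

```c
#include <stdlib.h>

#define NGENES 4          /* e.g. knight, bishop, rook, queen values */
#define POP    32

/* Hand-chosen plausible ranges around the "true" values (centipawns). */
static const int lo[NGENES] = {250, 250, 450,  800};
static const int hi[NGENES] = {400, 400, 600, 1100};

typedef struct {
    int w[NGENES];
    int fitness;          /* filled in by the GM-move-matching test */
} Individual;

static int rand_in(int a, int b) { return a + rand() % (b - a + 1); }

static void init_individual(Individual *ind) {
    for (int g = 0; g < NGENES; g++)
        ind->w[g] = rand_in(lo[g], hi[g]);
    ind->fitness = 0;
}

/* Mutation re-draws a gene but stays inside its predefined range. */
static void mutate(Individual *ind, int rate_pct) {
    for (int g = 0; g < NGENES; g++)
        if (rand() % 100 < rate_pct)
            ind->w[g] = rand_in(lo[g], hi[g]);
}

/* Uniform crossover: each gene copied from a randomly chosen parent. */
static void crossover(const Individual *a, const Individual *b,
                      Individual *child) {
    for (int g = 0; g < NGENES; g++)
        child->w[g] = (rand() % 2) ? a->w[g] : b->w[g];
    child->fitness = 0;
}
```

Because every operator keeps each gene inside [lo, hi], the search can
never wander into values you already know are absurd.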

My results showed that a genetic algorithm could significantly increase
the average fitness score generation on generation, but the values it
returned did not, in my setup, play better chess than my hand-set values.

I was rather handicapped at the time by a relatively poor chess engine
(it was a final-year undergraduate project I did at Cambridge), but I
believe the idea has potential, especially with more complicated and
faster engines where hand-setting values is more difficult.

One idea I had, which I never had a chance to test, was that genetic
algorithms could be used to optimise factors within the program that
control search, e.g. the values used to decide which moves to
forward-prune during selective portions of the search. This idea has a
great deal of intellectual appeal for me, in that evolution might
improve the performance of a program by changing its execution paths.

>Thanks,
>Chris Mayer
>
>

William Tunstall-Pedoe


Chris Whittington

Jul 14, 1996

Isn't the flaw in genetic algorithms / hill-climbing and whatever
in a complex game like chess this:

If you apply the method to a low number of variable parameters then
the results are likely to be a compromise between the cases where
the value should be high, and the cases where it should be low. The
recent thread on value of the queen illustrates this - the program
ends up with an unsatisfactory kludge value for the queen.

If you then up the variable count to deal with the myriad possible
situations (eg Q by itself, Q with various types of piece, Q making
a king-attack etc. etc) then the problem becomes unmanageable due
to exponential explosion.

Best is to have a complex set of parameters, play games,
understand chess, work out why your program does stupid things,
and "genetically" vary the parameters by intelligent human intervention.

Chris Whittington


Lloyd Lim

Jul 14, 1996

In article <4s63ts$h...@larch.cc.swarthmore.edu>,

Jay Scott <j...@forum.swarthmore.edu> wrote:
>In article <4s4stv$4...@sjx-ixn6.ix.netcom.com>, cma...@ix.netcom.com wrote:
>>The task I would like to accomplish by this 2nd experiment is 1)
>>define a single specific subset of chess positions, and 2) find the
>>optimal evaluation function for this subset within a predefined set of
>>algorithm parameters.
>
>I suggest a complex endgame, like B+pawns versus N+pawns for any equal
>number of pawns on each side. That's simple enough that you can see what's
>going on, and difficult enough to make it a fascinating problem. If that's
>too hard, leave out the minor pieces. :-)

I should mention that I'm currently trying to evolve C functions which
return moves-to-win/loss in endgames with genetic programming. My goals are:
* to evolve perfect, optimal evaluation functions
(be capable of replacing a tablebase)
* to evolve functions immediately useful to chess programmers
(pure C code, no other knowledge sources assumed)
* not to provide any human input in the learning process
(except for GP setup and later on adding whatever functions GP finds)

The main problem is that there are a tremendous number of fitness cases
so it takes a long time to evaluate them. I've been trying out different
sampling techniques and different selection methods with KRK and KQK.
I've also broken up each endgame into a bunch of smaller problems
(i.e., classify win/loss/draw, classify win > 8, etc.) It looks promising,
but it's hard to do anything when the runs take so long.

+++
Lloyd Lim <Lloy...@limunltd.com>
Lim Unlimited <http://www.limunltd.com/>

cma...@ix.netcom.com

Jul 15, 1996

cma...@ix.netcom.com wrote:

>Adding to Jim's idea

Many many apologies. I mentally merged 'Jay' with 'Lim'.

Chris Mayer

cma...@ix.netcom.com

Jul 15, 1996

Lloyd Lim <Lloy...@limunltd.com> wrote:


>I should mention that I'm currently trying to evolve C functions which
>return moves-to-win/loss in endgames with genetic programming. My goals are:
> * to evolve perfect, optimal evaluations functions
> (be capable of replacing a tablebase)
> * to evolve functions immediately useful to chess programmers
> (pure C code, no other knowledge sources assumed)
> * not to provide any human input in the learning process
> (except for GP setup and later on adding whatever functions GP finds)

>The main problem is that there are a tremendous number of fitness cases
>so it takes a long time to evaluate them. I've been trying out different
>sampling techniques and different selection methods with KRK and KQK.
>I've also broken up each endgame into a bunch of smaller problems
>(ie--classify win/loss/draw, classify win > 8, etc.) It looks promising,
>but it's hard to do anything when the runs take so long.

>+++
>Lloyd Lim <Lloy...@limunltd.com>
>Lim Unlimited <http://www.limunltd.com/>

It looks like I'm trying to reinvent a wheel you already have rolling!
Please post any results you have so far, as there seems to be a lot of
interest in this area and it could potentially save me a lot of work.
Adding to Jim's idea, I was thinking of using an endgame for which a
table base is already known. The fitness test is then making a single
move and comparing it against the table base, instead of playing a
whole game and seeing who wins. This should speed things up a lot.
It would also eliminate the "training by self play versus training by
human example" problem. Like you, I was also thinking of reducing the
need for endgame table bases with a set of heuristics + a small
exception table. Would you please post the data structure of your
gene, and also your mutation algorithm?
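In C, the single-move fitness test I have in mind looks something like
this sketch (the distance array would come from the tablebase; details
such as handling drawn or lost positions are glossed over):

```c
#include <limits.h>

/* dtm[] holds the tablebase's distance-to-mate after each legal move
   (winning side to move, smaller is better), and `chosen` is the move
   index the candidate evaluation function picked.  Summing the result
   over many sampled positions gives a fitness with no games played. */
static int move_is_optimal(const int *dtm, int nmoves, int chosen) {
    int best = INT_MAX;
    for (int i = 0; i < nmoves; i++)
        if (dtm[i] < best)
            best = dtm[i];
    return dtm[chosen] == best;  /* 1 point if optimality is preserved */
}
```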
Thanks,
Chris Mayer

Lloyd Lim

Jul 15, 1996

In article <4scmo2$i...@sjx-ixn2.ix.netcom.com>, <cma...@ix.netcom.com> wrote:
>Lloyd Lim <Lloy...@limunltd.com> wrote:
>
>>I should mention that I'm currently trying to evolve C functions which
>>return moves-to-win/loss in endgames with genetic programming. My goals are:
>> * to evolve perfect, optimal evaluations functions
>> (be capable of replacing a tablebase)
>> * to evolve functions immediately useful to chess programmers
>> (pure C code, no other knowledge sources assumed)
>> * not to provide any human input in the learning process
>> (except for GP setup and later on adding whatever functions GP finds)
>
>>The main problem is that there are a tremendous number of fitness cases
>>so it takes a long time to evaluate them. I've been trying out different
>>sampling techniques and different selection methods with KRK and KQK.
>>I've also broken up each endgame into a bunch of smaller problems
>>(ie--classify win/loss/draw, classify win > 8, etc.) It looks promising,
>>but it's hard to do anything when the runs take so long.
>
>It looks like I'm trying to reinvent a wheel you already have rolling!
>Please post any results you have so far, as there seems to be a lot of
>interest in this area and it could potentially save me a lot of work.

The wheel's not rolling yet--the spokes are slowly being constructed
by nano-robots!

I will post when I get results. Keep in mind that I'm still in the
early stages. The way I've broken down KRK, I'll have to evolve 30+
functions for a complete solution. I'm hoping that GP will find
useful ADFs (subroutines) that will help in other endgames and
speed things up.

If someone else uses a better machine learning technique, has more
computing power, and/or uses human input, then they could easily
build the wheel faster than me. Ideally, I'd like the computer to
learn all by itself. I may abandon this goal, but for now I think
it's worthwhile.

>Adding to Jim's idea, I was thinking of using an endgame for which a
>table base is already known. The fitness test is then making a single
>move and comparing it against the table base, instead of playing a
>whole game and seeing who wins. This should speed things up a lot.

Yes, this is what I'm doing. My work is based on Steven Edwards'
tablebases. Fitness evaluations are still slow if you are interested
in perfection.

>Like you, I was also thinking of reducing the
>need for endgame table bases with a set of heuristics + a small
>exception table.

I'm committed to not having an exception table.

>Would you please post the data structure of your
>gene, and also your mutation algorithm?

I'm not using a GA; I'm using GP, genetic programming. The initial
function set is all of the operators available in C. Currently,
the approach is offset-based so the data type is int and the terminals
are the rank and file numbers of the pieces. My original goal was
to be bitboard-based and to use 64-bit integers as the data type and
Crafty variables as terminals. However, the offset-based approach
seemed to work ok in preliminary tests so I've stuck with it for now.
I'm currently using 90% crossover, 5% reproduction, and 5% mutation.

If the terminology sounds as much like chess programming as machine
learning, that's intentional. Even though I have an ivory tower-like
goal of not having any human input, I want the solutions to be usable
in chess programs. That's why I like GP. You can almost cut-and-paste
the evolved functions into a C program.
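For illustration, here is a hand-written example (not an actual GP
result) of the shape such an offset-based function takes: ints in, an
int out, and only C operators over the piece coordinates. The heuristic
itself is just a plausible KRK guess, not an evolved or verified one.

```c
#include <stdlib.h>

/* Arguments are rank/file (0-7) of the white king, white rook,
   and black king, with white assumed to be the winning side. */
static int krk_guess(int wkr, int wkf, int wrr, int wrf,
                     int bkr, int bkf) {
    int edge_r = bkr < 7 - bkr ? bkr : 7 - bkr;  /* black king's distance to a rank edge */
    int edge_f = bkf < 7 - bkf ? bkf : 7 - bkf;  /* ...and to a file edge */
    int edge   = edge_r < edge_f ? edge_r : edge_f;
    int kdist  = abs(wkr - bkr) > abs(wkf - bkf) ? abs(wkr - bkr)
                                                 : abs(wkf - bkf);
    /* crude moves-to-mate guess: drive the defending king to the edge,
       bring the attacking king up, penalize a rook on the king's line */
    return 2 * edge + kdist + (wrr == bkr || wrf == bkf ? 2 : 0);
}
```

Something of this form really can be cut-and-pasted straight into an
engine, which is the appeal of the pure-C terminal set.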

cma...@ix.netcom.com

Jul 15, 1996

>>
>> Self-play is fraught with difficulties and is probably too slow for
>> this. You will need numbers of generations in the thousands and each
>> individual in the generation will have to play many games to get a
>> meaningful fitness. Once you multiply all this up you will end up with
>> months or years to get results.

Most of the feedback supports a combination of self-play and
training from human game databases.

>> My results showed that a genetic algorithm could significantly increase
>> the average score that the fitness function produced generation on
>> generation but that the results returned were not useful in my set up
>> for playing better chess than my hand set results.

>>

>> One idea I had which I never had a chance to test was that genetic
>> algorithms could be used to optimise factors within the program that
>> control search. e.g. Values used in determining which moves to forward
>> prune during selective portions of the search. This idea has a great
>> deal of intellectual appeal for me in that evolution might improve the
>> performance of a program by changing its execution paths.

This is one of my personal hopes. As eval() becomes more complex,
tuning it 'by hand' won't be as good any more. (By analogy, it is
becoming rarer to write assembly code 'by hand' that beats the
current C compilers.)

>>
>>
>> William Tunstall-Pedoe
>>

>Isn't the flaw in genetic algorithms / hill-climbing and whatever
>in a complex game like chess this:

>If you apply the method to a low number of variable parameters then
>the results are likely to be a compromise between the cases where
>the value should be high, and the cases where it should be low. The
>recent thread on value of the queen illustrates this - the program
>ends up with an unsatisfactory kludge value for the queen.

>If you then up the variable count to deal with the myriad possible
>situations (eg Q by itself, Q with various types of piece, Q making
>a king-attack etc. etc) then the problem becomes unmanageable due
>to exponential explosion.

Although this is definitely a problem, I'm hoping that GAs are a
potential solution. Finding the right balance between the sheer number of
different situations and deciding when to merge similar ones to
compress the heuristics is a really tough problem. Right now, I don't
know of any program that divides the situations into more than 5
(book, opening, middle game, endgame, tablebase), so there should be
room for improvement there.
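A sketch of the kind of finer-grained classifier I have in mind, keyed
on the material signature as Jay suggested (the subclasses and the
Counts layout are invented for the example):

```c
/* Defining subsets by material puts the evaluation discontinuities
   where they belong: at captures. */
enum EvalClass { EV_GENERAL, EV_KRK, EV_KQK, EV_MINOR_ENDING };

typedef struct { int p, n, b, r, q; } Counts;  /* piece counts, one side */

static enum EvalClass classify(Counts w, Counts b) {
    int wt = w.p + w.n + w.b + w.r + w.q;
    int bt = b.p + b.n + b.b + b.r + b.q;
    if (wt == 1 && w.r == 1 && bt == 0) return EV_KRK;  /* K+R vs K */
    if (wt == 1 && w.q == 1 && bt == 0) return EV_KQK;  /* K+Q vs K */
    if (w.n + w.b == 1 && b.n + b.b == 1 && w.p == b.p &&
        w.r + w.q + b.r + b.q == 0)
        return EV_MINOR_ENDING;  /* e.g. B+pawns vs N+pawns, equal pawns */
    return EV_GENERAL;           /* fall back to the general eval */
}
```

Each subclass would then get its own GA-tuned evaluation function.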

>Best is to have a complex number of parameters, play games,
>understand chess, work out why your program does stupid things
>and genetically vary the parameters by intelligent human intervention.

>Chris Whittington

If you think about it, chess programs, as well as modern chess
strategies (and automobiles, donut recipes, etc.), are
evolving like a GA. Fitness is winning. The mutation is trying new
ideas based on intelligent human intervention. [So far] that is still
the best way. Someone wrote in an e-mail that 'chess is to AI as fruit
flies are to genetics'. GAs may or may not work here, but it's sure
exciting to check the idea out!

Chris Mayer


Benoit St-Jean

Jul 16, 1996

Lloyd Lim <Lloy...@limunltd.com> wrote:


>>>I should mention that I'm currently trying to evolve C functions which
>>>return moves-to-win/loss in endgames with genetic programming. My goals are:
>>> * to evolve perfect, optimal evaluations functions
>>> (be capable of replacing a tablebase)
>>> * to evolve functions immediately useful to chess programmers
>>> (pure C code, no other knowledge sources assumed)
>>> * not to provide any human input in the learning process
>>> (except for GP setup and later on adding whatever functions GP finds)

Check out the paper "ALEXS: an Optimization Approach for the Endgame
KNNKP(h)" in "Advances in Computer Chess 6". The authors used a
GA to fine-tune their evaluation function for this particular endgame.
Also, a close look at "Genetic Programming" by Koza would surely help,
since most of the book is dedicated to GP and the building of functions...

BTW, you could use a classifier system (See Goldberg) instead of
building functions. For each endgame, you could build a set of rules
based on appropriate parameters for each endgame.

Hope this helps.


D Kirkland

Jul 19, 1996

Tom Kerrigan (kerr...@frii.com) wrote:
: >Scoring material is a ratio of values between pieces.

: >Usually the ratios are adjusted so that pawn=1.0. But in
: >your case, this can be misleading. Why? Because your
: >program is trying to find an AVERAGE value that will work.
: >Average values don't always work well. Which is why most
: >adjust the values (with things like mobility!). Pawns are
: >often worth something very different than 1.0. And on
:
: When "the ratios are adjusted so that pawn=1.0" then there is no point where
: "Pawns are often worth something very different than 1.0."
:
: A pawn is a pawn.
:
: Don't rip on somebody for normalizing a few point values.

First, I wasn't ripping anybody!

Second, while most programs have a set material value for a pawn
that doesn't change, almost all programs make positional adjustments.
Things like bonuses for passed pawns or chained pawns are
adjustments to the pawn's material score.

With all such adjustments taken into account, most pawns will be
worth something more than the pawn's normal material score.

His program did not have any positional adjustments, so the
program was trying to make up for it in material value.
So his material value would be comparable to other programs'
material+positional values.
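As a concrete sketch of the point (the bonus numbers are made up for
the example, not taken from any real program):

```c
#define PAWN_VALUE 100   /* fixed material score, centipawns */

/* The effective value of an individual pawn is the fixed material
   value plus positional adjustments, so it varies pawn by pawn even
   though "a pawn is a pawn" in the material term. */
static int pawn_score(int passed, int chained, int doubled) {
    int s = PAWN_VALUE;
    if (passed)  s += 30;   /* passed-pawn bonus */
    if (chained) s += 10;   /* chained-pawn bonus */
    if (doubled) s -= 15;   /* doubled-pawn penalty */
    return s;
}
```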

Hope this makes sense...
dan (kirk...@ee.utah.edu)

Tom Kerrigan

Jul 20, 1996

D Kirkland (dbk...@cc.utah.edu) wrote:

: Tom Kerrigan (kerr...@frii.com) wrote:
: : >Scoring material is a ratio of values between pieces.
: : >Usually the ratios are adjusted so that pawn=1.0. But in
: : >your case, this can be misleading. Why? Because your
: : >program is trying to find an AVERAGE value that will work.
: : >Average values don't always work well. Which is why most
: : >adjust the values (with things like mobility!). Pawns are
: : >often worth something very different than 1.0. And on
: :
: : When "the ratios are adjusted so that pawn=1.0" then there is no point where
: : "Pawns are often worth something very different than 1.0."
: :
: : A pawn is a pawn.
: :
: : Don't rip on somebody for normalizing a few point values.

: First, I wasn't ripping anybody!

Oh. It sounded like you were. You said something to the effect of, "You got 1.0
for a pawn?? Yeah Right!!" These may not be your exact words, but you get the idea.

: With all such adjustments taken into account, most pawns will be
: worth something more than the pawns normal material score.

Wrong. What you are trying to say here is that some pawns are worth more than
others, which I totally agree with. What you are actually saying is that a pawn is
worth more than a pawn, which is just silly. I think we all get the idea, so no
need to argue more about this...

: His program did not have any positional adjustments, so the
: program was trying to make up for it in material value.
: So his material value would be compariable to other programs
: material+positional values.

Yes. This is not a problem. I could run the same test with the best pawn
evaluation code ever and it would still be missing terms. Same "problem".

I thought this experiment was interesting and enjoyed reading about it. Yes, it's
flawed, but at least it was run. If you think you can do better, go for it. I
would be interested in your efforts too.

Cheers,
Tom

_______________________________________________________________________________
Tom Kerrigan kerr...@frii.com O-

If you push the "extra ice" button on the soft drink vending machine,
you won't get any ice. If you push the "no ice" button, you'll get
ice, but no cup.
