
Transposition table


Matthew Bengtson

Jan 25, 1996
A simple question for chess programmers:

The advantages of transposition tables are obvious - for example, in
positions with multiple pawn levers. It seems one would want to store
the position and the evaluation reached there, but the problem is that in
the middle of search with pruning, the position may not be accurately
evaluated. For example, a search engine will prune away a move which
needlessly loses a rook, when in fact a queen is being lost. But
conceivably an accurate evaluation of such a position might be needed at
some point. What do people do about this problem? Are analysis
evaluations deep in search simply discarded? This seems like it could be
wasteful.

Please reply by mail with any suggestions. Thanks.

Matt


br...@ais.com

Jan 27, 1996

Having written several chess programs over the years (a couple of which
competed with reasonable success in the old ACM tournaments), I think I'm
qualified to answer this question.

There are a number of items that are useful to store in a transposition
table:

1) The position (obviously)
2) The best move found so far from this position. This is useful
if you encounter the same position and need to search it more
completely, say because it was cut off the previous time by
alpha-beta or because you want to search it more deeply this time.
This is particularly useful if you use the technique of iterative
deepening of the search tree.
3) The depth of search performed from this position. This, along
with the score, can be used to determine if you want to expand
the node or just use the values stored in the node.
4) The bounds on the score. (NOT the score itself - this is
important). Normally, many nodes in the tree cannot have a known
evaluation (as you note) because they were cut off by alpha-beta.
But you _do_ know that the value must lie within certain bounds --
for example, if it was cut off because it was found to be losing a
Rook, but in fact it actually loses a Queen in your example. In
that case, you would know that the bounds for the score would be
given by

mated <= score <= losing one Rook

If you now later encounter the position, if you have another line
that remains even (and depth and other factors are satisfactory),
you can cut off immediately; or if you find you are actually
trying to avoid mate, you know that you have to expand the node
since even though it loses at least a Rook it may at least avoid
mate. If you happen to know the score precisely (which actually is
true for only a minority of the nodes), you set both of the bounds
equal to the score and need not re-search the position if you
encounter it again at the same search depth.
5) (Optional) A flag for whether the score is known to be exact (or,
if you prefer, "absolute"). For example, a line that is known to be
a hopeless draw for both sides (stalemate, draw by repetition, etc)
or that is known to lead to a quick checkmate (and where the best
line is known; it is not usually a good idea to stop searching on
the first checkmate you find because it can lead to horizon problems
where you keep seeing a "mate in 3" but never actually execute it.
This does not apply to lines that are cut off, obviously, since by
definition they are not expected to happen in actual play; but if
the opponent blunders _then_ you want to checkmate him efficiently).
Or, alternatively, you could mark the depth of search performed from
the node in this case as being +infinity. This can be useful to
avoid re-searching portions of the tree that are already known (If
this flag did not exist, the node could be encountered once and
searched with a fairly shallow search, and then encountered later
and searched with a deep search. If the shallow search had already
obtained an exact score for the position, then the deep search would
be a waste of time since by definition the score for such a node is
known for any search depth). Note that you may know that either one
or both of the bounds are exact, which could enable additional
cutoffs (if the exact value is good enough for a cutoff, you can cut
off immediately at any search depth), and if both bounds are equal
and exact then you never need to expand the node. If you use this,
you should propagate the flags as appropriate to the parent nodes
(For example, if the scores from all siblings are also exact, then
the score for the parent node also becomes exact; there are other
cases as well). This allows the information to be used to maximum
benefit.
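
As a rough illustration, an entry holding all of the above might look
something like this in C (the names, field widths, and layout here are
purely hypothetical, not taken from Duchess or any other particular
program):

    /* Illustrative transposition-table entry covering items 1-5 above.
       Names and sizes are hypothetical. */
    typedef struct {
        unsigned long long key;    /* 1) hash signature of the position          */
        unsigned short     move;   /* 2) best move found so far, if any          */
        unsigned char      depth;  /* 3) depth searched below this position      */
        short              lower;  /* 4) lower bound on the score                */
        short              upper;  /* 4) upper bound on the score                */
        unsigned char      exact;  /* 5) optional: flags marking bounds as exact */
    } tt_entry;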

Note that depending on how the search is conducted it may be possible to
know more than one non-trivial bound -- that is, neither bound need be tied
to +/- infinity (getting checkmated / delivering checkmate). In particular,
you might know that the score is bounded by:

losing one Rook <= score <= losing one Knight

which may enable your opponent to prune the node if he's found another line
that's winning your Queen. This could happen, for example, if the same node
had been expanded twice before, with different values for alpha and beta.

Depending on the program, there may be other things you want to put in the
node. For example, if your program uses the heuristic that there is likely
to be no big "surprise" in the position and sets the initial bounds to, say,
+/- the gain or loss of a Pawn or two (with the expectation that if the
root node gets pruned then you have to do another search with the bound that
allowed the cutoff readjusted to +/- infinity as appropriate and as is usual
for the alpha-beta algorithm), then you may want information about that in
the node (although the simplest implementation would just propagate the
narrower values into the search bounds, but that may lead to erroneous
cutoffs in the event the original position has to be re-searched and the
old table retained. One way around the problem would be to discard the
values obtained from the first search but retain the "best moves found").
You are also likely to find that there are situations where you have
multiple searches from a single point that were performed with multiple
bounds and at multiple depths, and it is not always obvious which is the
best one to save; for example, the one that has the "deepest" search may
actually have relatively little information because it was cut off. If your
program uses a "contempt" factor that assigns a slightly negative value (for
the computer) in the event of a drawn game (in an attempt to make the
machine play more enterprisingly and avoid draws by repetitions), that too
may be able to be put to use in the node.

One other thing to consider is how to determine what nodes to retain in the
table. If two positions hash to the same location, you might use a re-hash
function (say, the first n entries after the hash location) and select some
small set of positions, apply some kind of quick evaluation of the usefulness
of the information in each (depth of search performed from the node is
usually a good measure but as noted above is not foolproof), and discard the
least valuable node from the table.
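
One possible shape for such a scheme, as a hypothetical C sketch building
on the entry above (the cluster size and the "shallowest draft loses" rule
are just examples, not a recommendation for any particular program):

    /* Sketch of a bucket-style replacement scheme: probe a few consecutive
       slots and overwrite the least valuable one (here, simply the one with
       the shallowest draft).  Names and sizes are illustrative. */
    #define HASH_SIZE (1 << 20)     /* number of entries, hypothetical */
    #define N_SLOTS   4             /* size of the re-hash cluster     */

    void tt_store(tt_entry table[], const tt_entry *fresh)
    {
        unsigned long base = (unsigned long)(fresh->key % (HASH_SIZE - N_SLOTS));
        tt_entry *victim = &table[base];
        for (int i = 0; i < N_SLOTS; i++) {
            if (table[base + i].key == fresh->key) {  /* same position: update it */
                victim = &table[base + i];
                break;
            }
            if (table[base + i].depth < victim->depth)
                victim = &table[base + i];            /* shallowest search so far */
        }
        *victim = *fresh;
    }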

This is all a rough overview and glosses over a lot of implementation
details (and indeed, different programs will use the tables differently
and may not use all these items or may use others as well). But a well-
written program will retain as much information as it can given the time
constraints placed on the search engine.

Bruce C. Wright

Robert Hyatt

Jan 27, 1996
In article <1996Jan27.124914.8690@ais>, <br...@ais.com> wrote:
-->In article <4e8tsf$7...@decaxp.harvard.edu>, beng...@scws40.harvard.edu (Matthew Bengtson) writes:
-->> A simple question for chess programmers:
-->>
-->> The advantages of transposition tables are obvious - for example, in
-->> positions with multiple pawn levers. It seems one would want to store
-->> the position and the evaluation reached there, but the problem is that in
-->> the middle of search with pruning, the position may not be accurately
-->> evaluated. For example, a search engine will prune away a move which
-->> needlessly loses a rook, when in fact a queen is being lost. But
-->> conceivably an accurate evaluation of such a position might be needed at
-->> some point. What do people do about this problem? Are analysis
-->> evaluations deep in search simply discarded? This seems like it could be
-->> wasteful.
-->
-->Having written several chess programs over the years (a couple of which
-->competed with reasonable success in the old ACM tournaments), I think I'm
-->qualified to answer this question.
-->
-->There are a number of items that are useful to store in a transposition
-->table:
-->
--> 1) The position (obviously)
--> 2) The best move found so far from this position. This is useful
--> if you encounter the same position and need to search it more
--> completely, say because it was cut off the previous time by
--> alpha-beta or because you want to search it more deeply this time.
--> This is particularly useful if you use the technique of iterative
--> deepening of the search tree.
--> 3) The depth of search performed from this position. This, along
--> with the score, can be used to determine if you want to expand
--> the node or just use the values stored in the node.
--> 4) The bounds on the score. (NOT the score itself - this is
--> important). Normally, many nodes in the tree cannot have a known
--> evaluation (as you note) because they were cut off by alpha-beta.
--> But you _do_ know that the value must lie within certain bounds --

--> for example, if it was cut off because it was found to be losing a
--> Rook, but in fact it actually loses a Queen in your example. In
--> that case, you would know that the bounds for the score would be
--> given by
-->
--> mated <= score <= losing one Rook
-->
--> If you now later encounter the position, if you have another line
--> that remains even (and depth and other factors are satisfactory),
--> you can cut off immediately; or if you find you are actually
--> trying to avoid mate, you know that you have to expand the node
--> since even though it loses at least a Rook it may at least avoid
--> mate. If you happen to know the score precisely (which actually is
--> true for only a minority of the nodes), you set both of the bounds
--> equal to the score and need not re-search the position if you
--> encounter it again at the same search depth.

All you need is one value. Any node within the tree produces one of three
possible outcomes:

1. EXACT_SCORE, obvious meaning. Score > alpha, score < beta, so it's
a true or EXACT score;

2. UPPER_BOUND: here we search every move, and each fails low because the
move at the next ply "refutes" it. As a result, after completing this ply,
we have not changed alpha, nor do we have a clue about what the best move is.
Store "alpha" (the true score can only be lower still, so the stored value is
an upper bound on it).

3. LOWER_BOUND: here we searched one move and it failed high (refuted the move
at the preceding ply), so store "beta" (the true score is at least beta, a
lower bound on it).

You use these as follows: look up the position. If you get:

EXACT_SCORE, then return this value and you are done.

UPPER_BOUND: if the value from the table is <= (less than or equal to) the
current alpha value, then we return alpha and exit this ply. Why? Because
the last time we searched this ply, we tried everything and had to back up
alpha. As long as this upper bound is <= alpha, we *know* the table result
is at least bad enough to fail here too. Should the table bound be "higher",
then just because we failed then, we might not fail now, since alpha is lower.

LOWER_BOUND: if the value from the table is >= beta, we return beta. Same
logic as above.

Note that only one of the above cases can be true, so storing either an upper
bound, a lower bound, or an exact score is all that's needed.
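
In rough C, the probe side of that scheme looks something like this (a
sketch only; the struct and the EXACT_SCORE/UPPER_BOUND/LOWER_BOUND names
are illustrative, not Crafty's or Cray Blitz's actual code):

    enum { EXACT_SCORE, UPPER_BOUND, LOWER_BOUND };

    typedef struct {
        unsigned long long key;     /* hash signature of the position    */
        int value;                  /* score, or the bound that applied  */
        int flag;                   /* one of the three cases above      */
        int depth;                  /* draft: depth searched below here  */
        int move;                   /* best move, when one is known      */
    } hash_entry;

    /* Returns 1 and sets *score if the entry settles this node at this depth. */
    int probe_hash(const hash_entry *e, int depth, int alpha, int beta, int *score)
    {
        if (e->depth < depth)
            return 0;                        /* too shallow: only the move is useful */
        if (e->flag == EXACT_SCORE) {
            *score = e->value;               /* true score: always usable            */
            return 1;
        }
        if (e->flag == UPPER_BOUND && e->value <= alpha) {
            *score = alpha;                  /* everything failed low before;        */
            return 1;                        /* it still cannot reach this alpha     */
        }
        if (e->flag == LOWER_BOUND && e->value >= beta) {
            *score = beta;                   /* failed high before; still fails high */
            return 1;
        }
        return 0;                            /* must search, but try e->move first   */
    }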

--> 5) (Optional) A flag for whether the score is known to be exact (or,
--> if you prefer, "absolute"). For example, a line that is known to be
--> a hopeless draw for both sides (stalemate, draw by repetition, etc)
--> or that is known to lead to a quick checkmate (and where the best
--> line is known; it is not usually a good idea to stop searching on
--> the first checkmate you find because it can lead to horizon problems
--> where you keep seeing a "mate in 3" but never actually execute it.
--> This does not apply to lines that are cut off, obviously, since by
--> definition they are not expected to happen in actual play; but if
--> the opponent blunders _then_ you want to checkmate him efficiently).
--> Or, alternatively, you could mark the depth of search performed from
--> the node in this case as being +infinity. This can be useful to
--> avoid re-searching portions of the tree that are already known (If
--> this flag did not exist, the node could be encountered once and
--> searched with a fairly shallow search, and then encountered later
--> and searched with a deep search. If the shallow search had already
--> obtained an exact score for the position, then the deep search would
--> be a waste of time since by definition the score for such a node is
--> known for any search depth). Note that you may know that either one
--> or both of the bounds are exact, which could enable additional
--> cutoffs (if the exact value is good enough for a cutoff, you can cut
--> off immediately at any search depth), and if both bounds are equal
--> and exact then you never need to expand the node. If you use this,
--> you should propagate the flags as appropriate to the parent nodes
--> (For example, if the scores from all siblings are also exact, then
--> the score for the parent node also becomes exact; there are other
--> cases as well). This allows the information to be used to maximum
--> benefit.

Don't see how this is "optional" since you have to know "what" you got
from the table.

-->
-->Note that depending on how the search is conducted it may be possible to
-->know more than that one bound of the score is always tied to +/- infinity
-->(checkmating/getting checkmated). In particular, you might know that the
-->score is bounded by:
-->
--> losing one Rook <= score <= losing one Knight
-->
-->which may enable your opponent to prune the node if he's found another line
-->that's winning your Queen. This could happen, for example, if the same node
-->had been expanded twice before, with different values for alpha and beta.
-->
-->Depending on the program, there may be other things you want to put in the
-->node. For example, if your program uses the heuristic that there is likely
-->to be no big "surprise" in the position and sets the initial bounds to, say,
-->+/- the gain or loss of a Pawn or two (with the expectation that if the
-->root node gets pruned then you have to do another search with the bound that
-->allowed the cutoff readjusted to +/- infinity as appropriate and as is usual
-->for the alpha-beta algorithm), then you may want information about that in
-->the node (although the simplest implementation would just propagate the
-->narrower values into the search bounds, but that may lead to erroneous
-->cutoffs in the event the original position has to be re-searched and the
-->old table retained. One way around the problem would be to discard the
-->values obtained from the first search but retain the "best moves found").
-->You are also likely to find that there are situations where you have
-->multiple searches from a single point that were performed with multiple
-->bounds and at multiple depths, and it is not always obvious which is the
-->best one to save; for example, the one that has the "deepest" search may
-->actually have relatively little information because it was cut off. If your
-->program uses a "contempt" factor that assigns a slightly negative value (for
-->the computer) in the event of a drawn game (in an attempt to make the
-->machine play more enterprisingly and avoid draws by repetitions), that too
-->may be able to be put to use in the node.
-->
-->One other thing to consider is how to determine what nodes to retain in the
-->table. If two positions hash to the same location, you might use a re-hash
-->function (say, the first n entries after the hash location) and select some
-->small set of positions, apply some kind of quick evaluation of the usefulness
-->of the information in each (depth of search performed from the node is
-->usually a good measure but as noted above is not foolproof), and discard the
-->least valuable node from the table.

One thing overlooked by nearly everybody is to also store the positional
evaluation (if you have one, as I don't do positional evals at interior nodes).
It's easy to pick up 5% (or more) of the positional scores here and avoid doing
an eval, because often a table value is useless due to depth or bounds, but the
stored positional score can still save doing some complex calculations.
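
As a sketch of that idea (the extra fields and names here are hypothetical,
not any program's real layout): keep the static evaluation in the entry and
reuse it whenever the position matches, even when the draft or bounds make
the rest of the entry useless:

    extern int full_evaluate(const void *pos);    /* hypothetical full evaluation */

    typedef struct {
        unsigned long long key;   /* position signature                       */
        short eval;               /* cached static (positional) evaluation    */
        char  has_eval;           /* nonzero once eval has been filled in     */
        /* ... plus the usual bound/draft/move fields ...                     */
    } cached_entry;

    int static_eval(const cached_entry *e, unsigned long long key, const void *pos)
    {
        if (e->key == key && e->has_eval)
            return e->eval;             /* reuse the stored positional score     */
        return full_evaluate(pos);      /* otherwise pay for the full evaluation */
    }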

-->
-->This is all a rough overview and glosses over a lot of implementation
-->details (and indeed, different programs will use the tables differently
-->and may not use all these items or may use others as well). But a well-
-->written program will retain as much information as it can given the time
-->constraints placed on the search engine.
-->
-->Bruce C. Wright


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Robert Hyatt

Jan 28, 1996
In article <1996Jan28.164544.8694@ais>, <br...@ais.com> wrote:
-->In article <4eesfp$h...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->> In article <1996Jan27.124914.8690@ais>, <br...@ais.com> wrote:
-->> -->In article <4e8tsf$7...@decaxp.harvard.edu>, beng...@scws40.harvard.edu (Matthew Bengtson) writes:
-->> -->> A simple question for chess programmers:
-->> -->>
-->> -->> The advantages of transposition tables are obvious - for example, in
-->> -->> positions with multiple pawn levers. It seems one would want to store
-->> -->> the position and the evaluation reached there, but the problem is that in
-->> -->> the middle of search with pruning, the position may not be accurately
-->> -->> evaluated. For example, a search engine will prune away a move which
-->> -->> needlessly loses a rook, when in fact a queen is being lost. But
-->> -->> conceivably an accurate evaluation of such a position might be needed at
-->> -->> some point. What do people do about this problem? Are analysis
-->> -->> evaluations deep in search simply discarded? This seems like it could be
-->> -->> wasteful.
-->> --> [...]
-->> -->
-->> --> 4) The bounds on the score. (NOT the score itself - this is
-->> --> important). Normally, many nodes in the tree cannot have a known
-->> --> evaluation (as you note) because they were cut off by alpha-beta.
-->> --> But you _do_ know that the value must lie within certain bounds --
-->> --> for example, if it was cut off because it was found to be losing a
-->> --> Rook, but in fact it actually loses a Queen in your example. In
-->> --> that case, you would know that the bounds for the score would be
-->> --> given by

-->> -->
-->> --> mated <= score <= losing one Rook
-->> -->
-->>
-->> All you need is one value. Any node within the tree produces one of three
-->> possible outcomes:
-->>
-->> 1. EXACT_SCORE, [...]
-->> 2. UPPER_BOUND, [...]
-->> 3. LOWER_BOUND, [...]
-->>
-->> Note that only one of the above cases can be true, so storing either upper,
-->> lower bound or exact score is all that's needed.
-->
-->Hi Robert - been a while since we met last. Yes, you're right, I wasn't
-->thinking. That will teach me to free associate when I haven't looked at
-->the code for about 18 years (gads, has it been that long?).
-->
-->Nevertheless I'm nearly certain that Duchess stored two values; this may
-->have been due to efficiency considerations more than anything. (We went
-->to a great deal of trouble to minimize the number of conditional branch
-->instructions since they caused the instruction queue on the machine we
-->were using to be flushed. Storing two values would require the use of
-->only two compares and two conditional branches whereas storing one value
-->and a flag would require at least two compares and *three* conditional
-->branches, or alternatively a conditional branch and an indexed branch which
-->was also a no-no on that machine. The extra two bytes were not a major
-->percentage of the entry size since we had to store the position as well).
-->
-->> --> 5) (Optional) A flag for whether the score is known to be exact (or,
-->> --> if you prefer, "absolute"). [...]
-->>
-->> Don't see how this is "optional" since you have to know "what" you got
-->> from the table.
-->
-->I think this is a terminology problem. The information is not redundant
-->with the information about the score. Duchess had a special (and cheap)
-->evaluation function that recognized fairly large classes of "hopelessly
-->drawn" positions (two bare Kings is the classic and trivial example,
-->although Duchess recognized a much larger class than that), and would not
-->search further on such lines. You can back up this information in the
-->tree and mark nodes which must also be "hopelessly drawn" and not re-search
-->them, even if they were not previously searched to the current requested
-->search depth. Since they are "hopelessly drawn", by definition no amount
-->of additional searching on them can ever be worthwhile, no matter what the
-->current values of alpha and beta are nor what the search depth is. This
-->would not always be true if all that you knew was that the value was 0.

We did this in Cray Blitz, for both draws and mates, since a mate is a mate
regardless of the table "draft" (depth searched below the current position).

-->
-->It is also useful when you are doing iterative deepening, and are marking
-->forced mates (with a known "shortest solution"). On subsequent iterations
-->you need not re-expand the nodes leading to forced mate even when you're
-->searching with a deeper search, again since by definition no amount of
-->additional searching is worthwhile. Most of the time this may not save
-->much time because the mate ought to be found again quickly, but it may
-->sometimes gain a little bit. You can achieve a somewhat similar effect by
-->always accepting an evaluation of +/- mate from the table even if it wasn't
-->searched to the requested depth, but at the cost of possibly running into
-->horizon problems if the mate had been found before during the quiescence
-->phase of the search; and this problem in turn can be solved by re-searching
-->mates which had been found previously during the quiescence phase rather
-->than during the primary search. However the use of the flag may somewhat
-->simplify the code.
-->
-->A surprising number of chess programs, at least those that are widely
-->available either as commercial products or as "user-written software" of
-->various descriptions on popular machines such as PC's and workstations,
-->do not make any attempt to identify "hopelessly drawn" positions. Adding
-->such a dictionary, if it is well-written, can both remarkably speed up and
-->substantially improve the play in a number of endgame situations, and it
-->need not require the many megabytes of storage that some of the "every
-->possible Q vs R endgame" sorts of databases require.
-->

Yes, as well as handling "probably drawn positions" like opposite-colored
bishops. Crafty has this after Roman commented about it trading rooks and
leaving itself in an even more drawish position, even though it was ahead a
pawn (or even two).

-->Bruce C. Wright


Haven't talked to Tom in years either. Missed you guys when you stopped
coming. Those were "the good old days..." :)

Bob

br...@ais.com

Jan 28, 1996
In article <4eesfp$h...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <1996Jan27.124914.8690@ais>, <br...@ais.com> wrote:
> -->In article <4e8tsf$7...@decaxp.harvard.edu>, beng...@scws40.harvard.edu (Matthew Bengtson) writes:
> -->> A simple question for chess programmers:
> -->>
> -->> The advantages of transposition tables are obvious - for example, in
> -->> positions with multiple pawn levers. It seems one would want to store
> -->> the position and the evaluation reached there, but the problem is that in
> -->> the middle of search with pruning, the position may not be accurately
> -->> evaluated. For example, a search engine will prune away a move which
> -->> needlessly loses a rook, when in fact a queen is being lost. But
> -->> conceivably an accurate evaluation of such a position might be needed at
> -->> some point. What do people do about this problem? Are analysis
> -->> evaluations deep in search simply discarded? This seems like it could be
> -->> wasteful.
> --> [...]
> -->

> --> 4) The bounds on the score. (NOT the score itself - this is
> --> important). Normally, many nodes in the tree cannot have a known
> --> evaluation (as you note) because they were cut off by alpha-beta.
> --> But you _do_ know that the value must lie within certain bounds --
> --> for example, if it was cut off because it was found to be losing a
> --> Rook, but in fact it actually loses a Queen in your example. In
> --> that case, you would know that the bounds for the score would be
> --> given by
> -->
> --> mated <= score <= losing one Rook
> -->
>
> All you need is one value. Any node within the tree produces one of three
> possible outcomes:
>
> 1. EXACT_SCORE, [...]
> 2. UPPER_BOUND, [...]
> 3. LOWER_BOUND, [...]

>
> Note that only one of the above cases can be true, so storing either upper,
> lower bound or exact score is all that's needed.

Hi Robert - been a while since we met last. Yes, you're right, I wasn't
thinking. That will teach me to free associate when I haven't looked at
the code for about 18 years (gads, has it been that long?).

Nevertheless I'm nearly certain that Duchess stored two values; this may
have been due to efficiency considerations more than anything. (We went
to a great deal of trouble to minimize the number of conditional branch
instructions since they caused the instruction queue on the machine we
were using to be flushed. Storing two values would require the use of
only two compares and two conditional branches whereas storing one value
and a flag would require at least two compares and *three* conditional
branches, or alternatively a conditional branch and an indexed branch which
was also a no-no on that machine. The extra two bytes were not a major
percentage of the entry size since we had to store the position as well).

> --> 5) (Optional) A flag for whether the score is known to be exact (or,
> --> if you prefer, "absolute"). [...]
>
> Don't see how this is "optional" since you have to know "what" you got
> from the table.

I think this is a terminology problem. The information is not redundant
with the information about the score. Duchess had a special (and cheap)
evaluation function that recognized fairly large classes of "hopelessly
drawn" positions (two bare Kings is the classic and trivial example,
although Duchess recognized a much larger class than that), and would not
search further on such lines. You can back up this information in the
tree and mark nodes which must also be "hopelessly drawn" and not re-search
them, even if they were not previously searched to the current requested
search depth. Since they are "hopelessly drawn", by definition no amount
of additional searching on them can ever be worthwhile, no matter what the
current values of alpha and beta are nor what the search depth is. This
would not always be true if all that you knew was that the value was 0.

It is also useful when you are doing iterative deepening, and are marking
forced mates (with a known "shortest solution"). On subsequent iterations
you need not re-expand the nodes leading to forced mate even when you're
searching with a deeper search, again since by definition no amount of
additional searching is worthwhile. Most of the time this may not save
much time because the mate ought to be found again quickly, but it may
sometimes gain a little bit. You can achieve a somewhat similar effect by
always accepting an evaluation of +/- mate from the table even if it wasn't
searched to the requested depth, but at the cost of possibly running into
horizon problems if the mate had been found before during the quiescence
phase of the search; and this problem in turn can be solved by re-searching
mates which had been found previously during the quiescence phase rather
than during the primary search. However the use of the flag may somewhat
simplify the code.

A surprising number of chess programs, at least those that are widely
available either as commercial products or as "user-written software" of
various descriptions on popular machines such as PC's and workstations,
do not make any attempt to identify "hopelessly drawn" positions. Adding
such a dictionary, if it is well-written, can both remarkably speed up and
substantially improve the play in a number of endgame situations, and it
need not require the many megabytes of storage that some of the "every
possible Q vs R endgame" sorts of databases require.

Bruce C. Wright

br...@ais.com

Jan 29, 1996
In article <4ehh6j$c...@ss20-1.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <1996Jan28.164544.8694@ais>, <br...@ais.com> wrote:
> -->
> -->A surprising number of chess programs, at least those that are widely
> -->available either as commercial products or as "user-written software" of
> -->various descriptions on popular machines such as PC's and workstations,
> -->do not make any attempt to identify "hopelessly drawn" positions. Adding
> -->such a dictionary, if it is well-written, can both remarkably speed up and
> -->substantially improve the play in a number of endgame situations, and it
> -->need not require the many megabytes of storage that some of the "every
> -->possible Q vs R endgame" sorts of databases require.
> -->
>
> Yes, as well as handling "probably drawn positions" like opposite-colored
> bishops. Crafty has this after Roman commented about it trading rooks and
> leaving itself in an even more drawish position, even though it was ahead a
> pawn (ror even two).
>
> Haven't talked to Tom in years either. Missed you guys when you stopped
> coming. Those were "the good old days..." :)

Obviously the "probably drawn position" evaluation must occur when you are
doing a full evaluation, usually only on the leaf nodes. I don't recall
if Duchess had the "opposite colored Bishops" evaluation or not; I know
that the full evaluation did take a lot of things like that into account
but I don't remember if that particular item was one of them. It would
have been pretty cheap to compute.

We used to apply the "bookdraw" routine at all of the interior nodes as
well (and if I understand your comment about Cray Blitz it does too),
because besides the extra useless search time that would be required if
it were only applied at the leaves, it also caused some very strange "edge
effects" similar to the classic "horizon problem" (although this would as
often as not be a "sideways" horizon rather than a "depth" horizon), where
the program would sometimes gravitate towards a position that was not known
to the evaluation function to be drawn, but where the search had to pass
through drawn positions to reach the target position. For example, in
the endgame K + P vs K, Duchess would recognize a large class of positions
that were drawn because the defender had the Opposition. Unfortunately,
if the bookdraw routine doesn't get _every_ drawn position in such an
endgame, the edge effects could crop up by allowing the program to "find"
one of the positions not known to be drawn in the search. By applying the
function at the interior nodes, you would quickly find those positions
that were missed by the bookdraw routine. You could of course put in a
database of all possible K+P vs K endgames, but that would cost a substan-
tial amount of storage for something that doesn't come up in every game
(more important then than now, but even now this would be a questionable use
of storage that would usually be better used in the transposition table).

Some of the other drawn positions recognized included K vs K+N, K vs K+B,
and K vs K+B+RP of the wrong color, as well as several others. A somewhat
trickier example is K+N vs K+P, which is sometimes won by the Pawn and
sometimes (more rarely) by the Knight, and which is very often drawn. Many
of these are quite easy to recognize, and by checking the amount of material
on the board first you don't have to do much computation in those cases for
which the evaluation could not possibly apply. It's often rather amusing to
watch a program trade down to one of these hopelessly drawn positions
because it's able to optimize its evaluation function by "winning a piece".
Fortunately for such programs, this kind of situation only comes up in a
minority of games, but still it's embarrassing when it does happen ...

I've sometimes missed the tournaments as well, but I have no idea how I'd
be able to find the time for it all any more; it takes quite a bit of
effort to make a really good chess program, and it doesn't usually pay
very well ;-). Maybe I'll be able to drop in sometime as a spectator or
we'll run into each other at some other conference - as you say, it's
been a while.

Bruce C. Wright

Jay Scott

Jan 29, 1996
In article <1996Jan29.123002.8697@ais>, br...@ais.com wrote:
>You could of course put in a
>database of all possible K+P vs K endgames, but that would cost a substan-
>tial amount of storage for something that doesn't come up in every game

Deep Blue apparently has an *on-chip* K+PvK endgame database. See

http://www.ibm.com/Stretch/EOS/chip.html

I imagine the tradeoff has to be re-evaluated depending on your
constraints. :-)

Jay Scott, j...@forum.swarthmore.edu

Machine Learning in Games:
http://forum.swarthmore.edu/~jay/learn-game/index.html

Vincent Diepeveen

Jan 29, 1996
In <1996Jan27.124914.8690@ais> br...@ais.com writes:

>In article <4e8tsf$7...@decaxp.harvard.edu>, beng...@scws40.harvard.edu (Matthew Bengtson) writes:

>> The advantages of transposition tables are obvious - for example, in
>> positions with multiple pawn levers. It seems one would want to store
>> the position and the evaluation reached there, but the problem is that in
>> the middle of search with pruning, the position may not be accurately
>> evaluated. For example, a search engine will prune away a move which
>> needlessly loses a rook, when in fact a queen is being lost. But

I guess I also have a problem with this:
suppose my program calculates at depth = p.

Assumptions: alpha = 0.00, beta = 0.01.
Suppose that you have a position where the evaluation is -0.40.

Now suppose that depth is 1. So after making 1 move you get into
your qsearch.

Now suppose that there are about n moves in the position at depth=1.

I make a very very very stupid move, after which we get to
depth 0. Here I evaluate. The score is 0.20 (-0.20 is backed up to
depth=1 using negamax). So we get a cutoff: 0.20 >= beta=0.01.
However the TRUE score should be 1.20, because it loses a pawn,
or because of a very good positional move that can be made.

Now suppose that all moves at depth=1 give scores > -1.2 and < -0.20.

This means that in my hashtable the stupid move is saved, where I
wanted another move to be saved, and the program COULD know that
there is a better move.

Now during the next iteration it will waste system time using the backed-up
move, which does give a cutoff again, now using nullmove. Again other
moves are found to be not better, because of nullmove.

This sucks. Understand my problem?
A score < alpha in the hashtable means that in that position there is no
move >= beta, but it doesn't give you a guarantee that it is REALLY better
than the other moves.

Vincent Diepeveen
vdie...@cs.ruu.nl
--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+

Steven J. Edwards

Jan 30, 1996
j...@forum.swarthmore.edu (Jay Scott) writes:
>In article <1996Jan29.123002.8697@ais>, br...@ais.com wrote:
>>You could of course put in a
>>database of all possible K+P vs K endgames, but that would cost a substan-
>>tial amount of storage for something that doesn't come up in every game

>Deep Blue apparently has an *on-chip* K+PvK endgame database. See

> http://www.ibm.com/Stretch/EOS/chip.html

A free database for K+P vs. K for either side to move can be had from
the KPK.tbw.gz and KPK.tbb.gz files in the pub/chess/TB directory at
the chess.onenet.net ftp site. It would be trivial to burn these into
a ROM. These files, along with all of the other data files in the
same directory can be used to give instant evaluations of any endgame
position covered in the database.

For those who want to experiment with perfect endgame play and
evaluations without coding or soldering, Hyatt's program Crafty can
access all of the endgame files stored at the abovementioned ftp site.
(You have to download your own copies of the files, of course.)

-- Steven (s...@mv.mv.com)

br...@ais.com

Jan 30, 1996
In article <4eji2t$l...@larch.cc.swarthmore.edu>, j...@forum.swarthmore.edu (Jay Scott) writes:
> In article <1996Jan29.123002.8697@ais>, br...@ais.com wrote:
>>You could of course put in a
>>database of all possible K+P vs K endgames, but that would cost a substan-
>>tial amount of storage for something that doesn't come up in every game
>
> Deep Blue apparently has an *on-chip* K+PvK endgame database. See
>
> http://www.ibm.com/Stretch/EOS/chip.html
>
> I imagine the tradeoff has to be re-evaluated depending on your
> constraints. :-)

Well, obviously. But I was talking about "typical" chess programs, which
run on typical general-purpose hardware, and in that case a lot of extra
entries in the transposition table will usually give more benefit than
large databases of endgame positions. However if you can afford to throw
an essentially infinite amount of memory at the problem then the tradeoff
for them changes as well.

Bruce C. Wright

Robert Hyatt

Jan 31, 1996
In article <1996Jan30.195053.8704@ais>, <br...@ais.com> wrote:
-->In article <4eji2t$l...@larch.cc.swarthmore.edu>, j...@forum.swarthmore.edu (Jay Scott) writes:
-->> In article <1996Jan29.123002.8697@ais>, br...@ais.com wrote:
-->>>You could of course put in a
-->>>database of all possible K+P vs K endgames, but that would cost a substan-
-->>>tial amount of storage for something that doesn't come up in every game
-->>
-->> Deep Blue apparently has an *on-chip* K+PvK endgame database. See
-->>
-->> http://www.ibm.com/Stretch/EOS/chip.html
-->>
-->> I imagine the tradeoff has to be re-evaluated depending on your
-->> constraints. :-)
-->
-->Well, obviously. But I was talking about "typical" chess programs, which
-->run on typical general-purpose hardware, and in that case a lot of extra
-->entries in the transposition table will usually give more benefit than
-->large databases of endgame positions. However if you can afford to throw
-->an essentially infinite amount of memory at the problem then the tradeoff
-->for them changes as well.

-->
-->Bruce C. Wright


First, it's a small database. Using byte values, 2^17 should be enough,
and it's likely it could be reduced by another factor of two, as I haven't
given a lot of thought to the issue.

Second, however, it (at first glance) seems like an odd way to use real
estate on the chip, unless it was simply a matter of "what can I possibly
put on this chip to use this remaining 2 square mm?" Having the database
somewhere makes sense, but having it at q-search tip nodes doesn't seem much
better than having it throughout the full-width part of the search, which
is where I do the probes in Crafty with virtually no performance penalty
that I can measure.

Third, it's probably moot for most of us. Very unlikely to reach a KPK
ending anyway against that machine... :)

br...@ais.com

Feb 1, 1996
In article <4enp7f$m...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <1996Jan30.195053.8704@ais>, <br...@ais.com> wrote:
> -->In article <4eji2t$l...@larch.cc.swarthmore.edu>, j...@forum.swarthmore.edu (Jay Scott) writes:
> -->> In article <1996Jan29.123002.8697@ais>, br...@ais.com wrote:
> -->>>You could of course put in a
> -->>>database of all possible K+P vs K endgames, but that would cost a
> -->>>substantial amount of storage for something that doesn't come up in
> -->>>every game

> -->>
> -->> Deep Blue apparently has an *on-chip* K+PvK endgame database. See
> -->>
> -->> http://www.ibm.com/Stretch/EOS/chip.html
> -->>
> -->> I imagine the tradeoff has to be re-evaluated depending on your
> -->> constraints. :-)
> -->
> -->Well, obviously. But I was talking about "typical" chess programs, which
> -->run on typical general-purpose hardware, and in that case a lot of extra
> -->entries in the transposition table will usually give more benefit than
> -->large databases of endgame positions. However if you can afford to throw
> -->an essentially infinite amount of memory at the problem then the tradeoff
> -->for them changes as well.
>
> First, it's a small database. using byte-values, 2^17 should be enough,
> and it's likely it could be reduced by another factor of two as I haven't
> given a lot of thought to the issue.

The problem is that taken to its logical conclusion you need such a
database for every kind of endgame you want to recognize. K+P vs K isn't
*too* bad, though I still think that unless you have lots of memory to
burn you'll usually do better with more table entries. Remember that this
isn't something that comes up every game (or even most games), while you
can almost always find a use for more table entries. Add in K+P vs K+N,
K+R vs K+Q, and so forth, and the databases start looking rather less
reasonable. You can't just call on them once the position on the board
reaches that endgame: it might be too late by then. They have to be
available to the search, which means main storage rather than disk files
or paged memory. Some of the interesting ones, K+R+P vs K+R and K+Q+P
vs K+Q for instance, run into the dozens of megabytes even with heavy
compression.

Of course, if you're putting together a program to compete at the top level,
you can arrange to throw lots of memory at the problem and the issue becomes
moot. But for smaller programs, a rule-based drawn position recognizer is
neither that complicated nor that expensive to compute; a few AND's and
OR's can recognize a rather surprising number of drawn positions and even if
it doesn't catch all of the ones that the database would catch, it can still
find enough to make a difference at times. It's rather surprising that so
many such programs don't include one.
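
As a sketch of the kind of test meant here (purely illustrative; a real
recognizer would handle many more cases and would use the program's own
board representation rather than this made-up material-count struct):

    /* Illustrative "hopelessly drawn" test using only material counts. */
    struct material { int pawns, knights, bishops, rooks, queens; };

    int is_book_draw(const struct material *white, const struct material *black)
    {
        int w_minor = white->knights + white->bishops;
        int b_minor = black->knights + black->bishops;
        int w_other = white->pawns + white->rooks + white->queens;
        int b_other = black->pawns + black->rooks + black->queens;

        if (w_other + b_other != 0)
            return 0;                 /* any pawn or heavy piece: not handled here */
        if (w_minor <= 1 && b_minor == 0)
            return 1;                 /* K vs K, K+N vs K, K+B vs K                */
        if (b_minor <= 1 && w_minor == 0)
            return 1;
        return 0;                     /* K+B+RP of the wrong color, etc. need more */
    }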

> Third, it's probably moot for most of us. Very unlikely to reach a KPK
> ending anyway against that machine... :)

True :).

Bruce C. Wright

Bruce Moreland

Feb 1, 1996
If you get a fail high after searching a move, in the hash table you store
the depth to which you searched, the move that failed high, the "bound"
(which in this case is beta), and a flag that indicates that you failed high.

When you come through here the second time, you first check to see if the
depth in the table is >= the depth you are searching now. If so, you note
that this is a fail high transposition table entry, and see if the bound in
the hash element is greater than or equal to your current beta, and if so you
cut off.

If you fail low you do the opposite of what I said above, you can figure this
out. You can't store the move that you failed low on, because there is none.

If you get an exact score after searching you can always cut off if your
depth test passes.
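
In rough C, the store side described above might look like this (a sketch
that reuses the single-value-plus-flag entry and flag names from the probe
sketch earlier in the thread; none of this is any particular program's
actual code):

    /* Record the outcome of searching one node. */
    void tt_save(hash_entry *e, unsigned long long key,
                 int depth, int score, int alpha, int beta, int best_move)
    {
        e->key   = key;
        e->depth = depth;
        if (score >= beta) {              /* failed high: beta is a lower bound  */
            e->flag  = LOWER_BOUND;
            e->value = beta;
            e->move  = best_move;         /* the move that failed high           */
        } else if (score <= alpha) {      /* failed low: alpha is an upper bound */
            e->flag  = UPPER_BOUND;
            e->value = alpha;
            e->move  = 0;                 /* no best move to store               */
        } else {                          /* inside the window: exact score      */
            e->flag  = EXACT_SCORE;
            e->value = score;
            e->move  = best_move;
        }
    }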

This isn't rocket science, but it is a giant can of worms. You can take a
perfectly sound chess program, add hashing in a day, and spend the next month
actively fixing bugs, and still be finding the occasional bug two years
later.

bruce

In article <4eiqrh$m...@krant.cs.ruu.nl>, vdie...@cs.ruu.nl says...
>
>[snip]

--
The opinions expressed in this message are my own personal views
and do not reflect the official views of Microsoft Corporation.


Steven J. Edwards

Feb 1, 1996
br...@ais.com writes:

[Concerning the use of endgame databases]

>The problem is that taken to its logical conclusion you need such a
>database for every kind of endgame you want to recognize. K+P vs K isn't
>*too* bad, though I still think that unless you have lots of memory to
>burn you'll usually do better with more table entries. Remember that this
>isn't something that comes up every game (or even most games), while you
>can almost always find a use for more table entries. Add in K+P vs K+N,
>K+R vs K+Q, and so forth, and the databases start looking rather less
>reasonable. You can't just call on them once the position on the board
>reaches that endgame: it might be too late by then. They have to be
>available to the search, which means main storage rather than disk files
>or paged memory. Some of the interesting ones, K+R+P vs K+R and K+Q+P
>vs K+Q for instance, run into the dozens of megabytes even with heavy
>compression.

Although KPK might not appear in every game, it is surprising how
often it and many other 3-4-5 man positions are reached during
searches taking place in the late middlegame.

It is not necessary to dedicate memory to endgame storage. Having
uncompressed versions of the endgame database files on disk is
sufficient. This has been shown through testing where the endgame
files may be probed at any full width search ply. The results
indicate a significant strength improvement, and so having endgame
databases is more than just being able to announce "mate in 30" (or
similar predictions) in only a few root level positions. This
improvement has been demonstrated in both Spector and Crafty, and I'm
sure it can be demonstrated in other programs as well.

-- Steven (s...@mv.mv.com)


Robert Hyatt

Feb 1, 1996
In article <4erog8$3...@ns2.spectra.net>,
Tom Elliott <tell...@spectra.net> wrote:
-->I've found this entire thread very interesting. The discussion based
-->on "these endings aren't reached very often" prompts this observation.
-->
-->It has been observed that programs are very poor at planning,
-->especially long-range planning. Wouldn't the ability to recognize the
-->possibility of a won ending and attempting to reach it be a long-range
-->plan? The threat of reaching such an ending would be formidable and
-->put strong psychological pressure on the opponent, if the threat is
-->even recognized.
-->
-->I read once that one thing to do when evaluating a position is to
-->imagine all the pieces, except Kings, removed and see who wins, or
-->draws. This is useful in assessing the permanent strength of one's
-->position. It seems that extending this to evaluating probable, more
-->complicated endgame setups, is a good strategy for programs.
-->
-->Any thoughts, anybody?
-->
-->Regards,
-->Tom Elliott
-->

One of the things I did in Cray Blitz was to use the pawn evaluation
to decide about trading pieces, rather than the old "if ahead trade
pieces, if behind trade pawns." This is the spirit of your idea: if
the pawn structure is significantly better for me (say a protected
passed pawn) then I want to reach a pawn-only ending where I should be
able to use my pawn structure advantage to win. Not in Crafty (yet)
but it will be.

Vincent Diepeveen

Feb 1, 1996

>First, it's a small database. using byte-values, 2^17 should be enough,

Wasting 128 KB. A better evaluation function (special pawn-endgame evaluation)
of only a few bytes also has perfect knowledge of K+P versus K. Of course this
is more work.

Of course this requires 1 extra CMP instruction... ...is that too much?

>and it's likely it could be reduced by another factor of two as I haven't
>given a lot of thought to the issue.

>Second, however, it (at first glance) seems like an odd way to use real


>estate on the chip, unless it was simply a matter of "what can I possibly
>put on this chip to use this remaining 2 square mm's?" Having the database
>somewhere makes sense, but having at q-search tip nodes doesn't seem much
>better than having it throughout the full-width part of the search, which
>is where I do the probes in crafty with virtually no performance penalty
>that I can measure.

True.

>Third, it's probably moot for most of us. Very unlikely to reach a KPK
>ending anyway against that machine... :)

I suppose even they can only win by fine-tuning everything...
...it gives the normal chessplayer an idea of how close the strength
of Fritz is to that of Deep Blue :)

Is it true that Karpov is getting money to be the Deep Blue operator?
And is it true that the moves Deep Blue plays are output in Feng-Hsiung Hsu's
native language?

That'll be the best investment in the short term: another ASCII set
and another operator!

However, we must not forget that Deep Blue has nothing to lose: every
reasonable human thinks that Kasparov will win (OK, some programmers MUST
believe otherwise, but that belief has to do with money, or with
misinformation received from the money-lovers).

Does Deep Blue print the best variation it finds?

What variation does it find best when calculating from the beginning position?
Can anyone post that, or are they too ashamed to post it with the opening
book turned off?

My own program wants to play a bizarre Russian variation... ...too stupid for words:
1. e4,e5 2. Nf3,Nf6 3. Nxe5,d6 4. Nc4,Nxe4 5. d3,Nc5 (10-ply calculation).
Coincidentally, it gives a score of 0.00 for this variation...

Robert Hyatt

Feb 2, 1996
In article <4esbrs$6...@vortex.worldaccess.com>,
Peter Nachtwey <pna...@worldaccess.com> wrote:

-->beng...@scws40.harvard.edu (Matthew Bengtson) writes:
-->> A simple question for chess programmers:
-->>
-->> The advantages of transposition tables are obvious - for example, in
-->> positions with multiple pawn levers. It seems one would want to store
-->> the position and the evaluation reached there, but the problem is that in
-->> the middle of search with pruning, the position may not be accurately
-->> evaluated. For example, a search engine will prune away a move which
-->> needlessly loses a rook, when in fact a queen is being lost. But
-->> conceivably an accurate evaluation of such a position might be needed at
-->> some point. What do people do about this problem? Are analysis
-->> evaluations deep in search simply discarded? This seems like it could be
-->> wasteful.
-->>
-->> Please reply mail with any suggestions. Thanks
-->>
-->> Matt
-->>
-->
-->I wrote a chess program with a hash table of about 32k positions under DOS.
-->Its positional evaluations were weak but I used some of the suggestions in
-->"All the Right Moves" by Carl Ebling. In each hash entry I saved part of the
-->hash code, the best move, depth, score, and whether the score was a true value
-->or an upper or lower bound. If in the search there was a hash hit, the program could
-->get the score and check what type of value it was. Most of the time the value was
-->just an upper or lower bound which could be used to reduce the alpha-beta window and
-->continue searching using the best move from the hash table. Only if the score is
-->an exact value and falls strictly between the endpoints can you return the score from the
-->hash table. Hope this helps.
-->
-->Peter Nachtwey
-->


You can also return the value if it represents an upper bound, and this
upper bound is <= alpha, and the depth is sufficient. If I searched all
the moves at this ply before, and they all failed low so that I returned
alpha, and that alpha is <= the current alpha, then all moves are *still*
going to fail low and I'll still return alpha.

br...@ais.com

Feb 2, 1996
In article <4er73v$e...@news.microsoft.com>, brucemo (Bruce Moreland) writes:
> If you get a fail high after searching a move, in the hash table you store
> the depth to which you searched, the move that failed high, the "bound"
> (which in this case is beta), and a flag that indicates that you failed high.
> [...]

>
> This isn't rocket science, but it is a giant can of worms. You can take a
> perfectly sound chess program, add hashing in a day, and spend the next month
> actively fixing bugs, and still be finding the occasional bug two years
> later.

An obvious potential problem, to combine two threads, is that if the
score was obtained by the `lazy evaluation' which terminated the
positional evaluation early because it was obvious that the move would
cut off, you can't use that raw score in the table because when the
position is encountered again you can't guarantee that alpha and beta
will have the same values. If the alpha and beta "window" is tighter
than it was before, you don't know if the move would cut off if you
had the complete positional score or not. The easiest way to deal
with the problem is to store the score with the most pessimistic
adjustment for what the positional adjustment might be, as appropriate
for whether this is a high bound or a low bound; or, alternatively,
store only alpha or beta as appropriate (throwing away the `quick test'
score). You could also flag the node as having an `inexact' score, but
this is going to be more complicated to program and I don't see that
having that information gets you very much. (Not that the complexity
is all _that_ great from a design standpoint, but you really have to be
careful about every instruction in a chess program; each one takes time
away from doing something else useful, so you have to spend each one
very carefully).
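
For concreteness, a lazy evaluation along those lines might be sketched
like this (hypothetical names and margin; a real program would use its own
evaluation terms and its own bound on the positional component):

    extern int material_balance(const void *pos);   /* hypothetical: cheap material count     */
    extern int positional_terms(const void *pos);   /* hypothetical: expensive positional sum */

    #define LAZY_MARGIN 200   /* assumed maximum positional swing, in centipawns */

    int evaluate(const void *pos, int alpha, int beta)
    {
        int score = material_balance(pos);           /* cheap part first                     */
        if (score + LAZY_MARGIN <= alpha)
            return alpha;                             /* can only fail low: value is inexact  */
        if (score - LAZY_MARGIN >= beta)
            return beta;                              /* can only fail high: value is inexact */
        return score + positional_terms(pos);        /* inside the window: full, exact eval  */
    }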

Bruce C. Wright

Bruce Moreland

Feb 2, 1996
In article <4enp7f$m...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu says...
>[snip]

>First, it's a small database. using byte-values, 2^17 should be enough,
>and it's likely it could be reduced by another factor of two as I haven't
>given a lot of thought to the issue.

It's like 123K bytes if you store it bitwise. There's probably a way to make
it smaller, but I don't particularly care.
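
For a sense of where such figures come from, here is one hypothetical way
such a bitbase could be indexed (illustrative only; the exact size depends
on the encoding chosen, which is presumably why the numbers quoted in this
thread differ):

    /* One bit per position: side to move, both king squares, and the pawn
       square (a2..h7 = 48 squares).  2 * 64 * 64 * 48 = 393,216 positions,
       roughly 48K bytes at one bit each. */
    unsigned long kpk_index(int wtm, int wk, int bk, int pawn_sq)
    {
        return (((unsigned long)wtm * 64 + wk) * 64 + bk) * 48
               + (unsigned long)(pawn_sq - 8);
    }

    /* probe: won = (bitbase[idx >> 3] >> (idx & 7)) & 1, with idx = kpk_index(...) */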

Someone figured out how to make the KR vs K database a little smaller; there
was an article on this in the ICCA Journal about a million years ago
(pre-1985 I think, maybe when it was still the ICCA Newsletter).

bruce

Robert Hyatt

Feb 2, 1996
In article <1996Feb2.110025.8713@ais>, <br...@ais.com> wrote:
-->In article <4er73v$e...@news.microsoft.com>, brucemo (Bruce Moreland) writes:
-->> If you get a fail high after searching a move, in the hash table you store
-->> the depth to which you searched, the move that failed high, the "bound"
-->> (which in this case is beta), and a flag that indicates that you failed high.
-->> [...]
-->>
-->> This isn't rocket science, but it is a giant can of worms. You can take a
-->> perfectly sound chess program, add hashing in a day, and spend the next month
-->> actively fixing bugs, and still be finding the occasional bug two years
-->> later.
-->
-->An obvious potential problem, to combine two threads, is that if the
-->score was obtained by the `lazy evaluation' which terminated the
-->positional evaluation early because it was obvious that the move would
-->cut off, you can't use that raw score in the table because when the
-->position is encountered again you can't guarantee that alpha and beta
-->will have the same values.

I'm not sure exactly how to interpret this, because the lazy evaluation
that I did in Cray Blitz could *never* be used if eval bailed out early
because the score couldn't be brought back into the window. I simply
returned alpha or beta depending, but it didn't matter. If it was beta,
we failed high and that worked well. If it was alpha, and no captures
could improve it, it failed low. In either case, storing the edge
that I failed on worked fine, since I did know that the score was
guaranteed to be above beta or below alpha.

--> If the alpha and beta "window" is tighter
-->than it was before, you don't know if the move would cut off if you
-->had the complete positional score or not. The easiest way to deal
-->with the problem is to store the score with the most pessimistic
-->adjustment for what the positional adjustment might be, as appropriate
-->for whether this is a high bound or a low bound; or, alternatively,
-->store only alpha or beta as appropriate (throwing away the `quick test'
-->score). You could also flag the node as having an `inexact' score, but
-->this is going to be more complicated to program and I don't see that
-->having that information gets you very much. (Not that the complexity
-->is all _that_ great from a design standpoint, but you really have to be
-->careful about every instruction in a chess program; each one takes time
-->away from doing something else useful, so you have to spend each one
-->very carefully).


-->
-->Bruce C. Wright


Maybe I misunderstand what you are saying? My "cut" on lazy evaluation is
in the spirit of alpha/beta minimax. If the score's < alpha, it's of no
use whatsoever. Ditto for > beta, because beta is the largest score I'll
accept (and I'll fail high if I get that). The score is either < alpha,
> beta, or somewhere in between. I don't see how worrying about how bad it
is matters...

Tom Elliott

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
I've found this entire thread very interesting. The discussion based
on "these endings aren't reached very often" prompts this observation.

It has been observed that programs are very poor at planning,
especially long-range planning. Wouldn't the ability to recognize the
possibility of a won ending and attempting to reach it be a long-range
plan? The threat of reaching such an ending would be formidable and
put strong psychological pressure on the opponent, if the threat is
even recognized.

I read once that one thing to do when evaluating a position is to
imagine all the pieces, except Kings, removed and see who wins, or
draws. This is useful in assessing the permanent strength of one's
position. It seems that extending this to evaluating probable, more
complicated endgame setups is a good strategy for programs.

Any thoughts, anybody?

Regards,
Tom Elliott


Peter Nachtwey

unread,
Feb 2, 1996, 3:00:00 AM2/2/96
to
beng...@scws40.harvard.edu (Matthew Bengtson) writes:
> A simple question for chess programmers:
>
> The advantages of transposition tables are obvious - for example, in
> positions with multiple pawn levers. It seems one would want to store
> the position and the evaluation reached there, but the problem is that in
> the middle of search with pruning, the position may not be accurately
> evaluated. For example, a search engine will prune away a move which
> needlessly loses a rook, when in fact a queen is being lost. But
> conceivably an accurate evaluation of such a position might be needed at
> some point. What do people do about this problem? Are analysis
> evaluations deep in search simply discarded? This seems like it could be
> wasteful.

>
> Please reply mail with any suggestions. Thanks
>
> Matt
>

I wrote a chess program with a hash table of about 32K positions under DOS.
Its positional evaluations were weak, but I used some of the suggestions in
"All the Right Moves" by Carl Ebeling. In each hash entry I saved part of the
hash code, the best move, depth, score, and whether the score was a true value
or an upper or lower bound. If in the search there was a hash hit, the program
could get the score and check what type of value it was. Most of the time the
value was just an upper or lower bound, which could be used to reduce the
alpha-beta window and continue searching using the best move from the hash
table. Only if the score is an exact value can you return the score from the
hash table directly. Hope this helps.
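
A minimal sketch in C of the probe logic just described; the entry
layout and names are assumptions for illustration, not the actual program:

#include <stdint.h>

#define SCORE_UNKNOWN 32767          /* sentinel: no usable score   */

enum bound_type { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };

struct tt_entry {
    uint64_t key;                    /* part of the hash code       */
    int16_t  score;
    uint8_t  bound;                  /* exact, lower or upper bound */
    uint8_t  depth;
    uint16_t best_move;
};

int tt_probe(const struct tt_entry *e, int depth,
             int *alpha, int *beta, int *best_move)
{
    *best_move = e->best_move;       /* always usable for ordering  */

    if (e->depth < depth)            /* not searched deeply enough  */
        return SCORE_UNKNOWN;

    switch (e->bound) {
    case BOUND_EXACT:                /* true value: return at once  */
        return e->score;
    case BOUND_LOWER:                /* real score >= stored score  */
        if (e->score >= *beta) return e->score;
        if (e->score > *alpha) *alpha = e->score;
        break;
    case BOUND_UPPER:                /* real score <= stored score  */
        if (e->score <= *alpha) return e->score;
        if (e->score < *beta) *beta = e->score;
        break;
    }
    return SCORE_UNKNOWN;            /* keep searching with the
                                        (possibly narrowed) window  */
}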

Peter Nachtwey




Paul Rubin

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <4etrsr$c...@news.microsoft.com>, Bruce Moreland <brucemo> wrote:
>Someone figured out how to make the KR vs K database a little smaller, there
>was an article on this in the ICCA Journal about a million years ago
>(pre-1985 i think, maybe when it was still the ICCA Newsletter).

KR vs K doesn't need a database :-)

br...@ais.com

unread,
Feb 3, 1996, 3:00:00 AM2/3/96
to
In article <4etrsr$c...@news.microsoft.com>, brucemo (Bruce Moreland) writes:
> In article <4enp7f$m...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu says...
>>[snip]
>>First, it's a small database. using byte-values, 2^17 should be enough,
>>and it's likely it could be reduced by another factor of two as I haven't
>>given a lot of thought to the issue.
>
> It's like 123K bytes if you store it bitwise. There's probably a way to make
> it smaller, but I don't particularly care.
>
> Someone figured out how to make the KR vs K database a little smaller, there
> was an article on this in the ICCA Journal about a million years ago
> (pre-1985 i think, maybe when it was still the ICCA Newsletter).

Not quite right. Let's assume that you always store the position with
White having the Pawn (the cases with Black having the Pawn are symmetric
and you can easily invert the position). You can very easily get rid of
reflections as well by only storing the positions with the Pawn on the
Queen-side and reflecting the position if the pawn is on the King-side.
This leaves 4*6=24 possible squares for the Pawn.

Now each of the Kings can occupy any of the squares of the board, except
for squares already occupied or from which the King not on the move would
be in check. (The case of the King on the move being in check from the
other King would be excluded by this rule since the King not on the move
would also have to be in check). However offhand I don't see a time-
efficient way to store only the +/- 56 squares that the defending King
can legally occupy, so let's just store all 64; storing only the squares
that each King can legally occupy would only save about 12% anyway.
Finally we need to double this number because either side can be on the
move. So the total number of possible positions is given by:

24*64*64*2 = 196608, or somewhat less than 2^18 which is 262144.

Assume we store this as a bit array, with one bit per position. If the
bit is 0, the position is drawn; if it is 1, the position is won. If
we ever reach one of these positions in actual play we can just continue
to search only on the won positions and select the one that advances the
Pawn the farthest or that succeeds in safely promoting the Pawn; we
really only need to know if we should stop searching because the position
is a forced draw. So the number of bytes required to store the table is:

196608/8 = 24576 bytes.
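
A minimal sketch in C of that indexing, assuming the pawn has already
been reflected to files a-d and squares are numbered 0 = a1 .. 63 = h8
(bit = 1 means the position is won, as above):

#include <stdint.h>

static uint8_t kpk_won[196608 / 8];            /* 24576 bytes           */

static unsigned kpk_index(int pawn_sq, int wk_sq, int bk_sq, int stm)
{
    int file = pawn_sq & 7;                    /* 0..3 after reflection */
    int rank = pawn_sq >> 3;                   /* 1..6, i.e. ranks 2..7 */
    int pawn = file * 6 + (rank - 1);          /* 0..23                 */

    return (((unsigned)pawn * 64 + wk_sq) * 64 + bk_sq) * 2 + stm;
}

static int kpk_is_won(int pawn_sq, int wk_sq, int bk_sq, int stm)
{
    unsigned i = kpk_index(pawn_sq, wk_sq, bk_sq, stm);
    return (kpk_won[i >> 3] >> (i & 7)) & 1;
}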

Now this isn't such an enormous amount of storage by itself; if you have
a program that you know will always have many megabytes of storage available
it may even be a reasonable thing to store. But if you can store each
transposition node in 64 bytes (this would allow for a long key to store
the exact position in a format that would be very time-efficient to compute,
as well as a fair amount of additional information), that's 384 extra nodes
in the transposition table. If you don't store the exact position but rely
on an inexact key you could get even more at the cost of possibly getting
a "false hit" with potentially disastrous results. If your transposition
table is 2MB, this is only about 1.15% of the transposition table space.
The problem is that the extra nodes will almost always be useful, but the
draw table will not be. Rule-based draw detection would deal with most of
this case in only a couple hundred bytes and not much more processor time.

The problem becomes much more intractable when you add another piece (No
other endgame containing only the Kings plus one piece or Pawn is at all
interesting; it's either an obvious win or an obvious draw). In addition,
in many of these endgames there are _three_ possible outcomes rather than
just two. Even if you just store the ones that are drawn on the theory
that the ones that are won or lost can be picked out fairly easily by the
search, that's still on the order of 1572864 bytes (=24576*64) for each
one. Add yet another piece to get to the really interesting endgames such
as K+R+P vs K+R and things get completely out of hand. (It would however
be quite easy to store all possible combinations of K,R, and P vs K and R
endgames -- of which K+P vs K would be one subset -- as additional entries
in such a unified table by placing each "missing" piece on top of its own
King. Even so the table would just be huge).

While I'm on this subject I should probably point out that you should
also check for the case of K+P vs K+B or K+N. Since these are sometimes
won by the side with the piece if the Pawn is a RP (more often with the
N than with the B since if the B can't mate on the move it requires a
helpmate), they do need to be treated with some care. I've seen not a
few programs that will prefer the side with the piece even though in the
vast majority of cases only the side with the Pawn has any winning chances,
even to the point of giving up its last Pawn (and its winning chances) to
get the extra piece. We used to have the terminal position evaluation (but
not the "drawn game detector") put a maximum value of "draw" on such
positions for the side that had the piece, on the theory that those
few cases that were won could be found by the search.

All of this of course assumes that you have a program that's good enough
to get to the endgame. If you're wiped out in the middle game (likely if
you're playing Deep Thought :-), you'll never get a chance to use any of
these things.

Bruce C. Wright

br...@ais.com

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <4eu72p$q...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <1996Feb2.110025.8713@ais>, <br...@ais.com> wrote:
> -->
> -->An obvious potential problem, to combine two threads, is that if the
> -->score was obtained by the `lazy evaluation' which terminated the
> -->positional evaluation early because it was obvious that the move would
> -->cut off, you can't use that raw score in the table because when the
> -->position is encountered again you can't guarantee that alpha and beta
> -->will have the same values.
>
> I'm not sure exactly how to interpret this, because the lazy evaluation
> that I did in Cray Blitz could *never* be used if eval bailed out early
> because the score couldn't be brought back into the window. I simply
> returned alpha or beta depending, but it didn't matter. If it was beta,
> we failed high and that worked well. If it was alpha, and no caputres
> could improve it, it failed low. In either case, storing the edge
> that I failed on worked fine, since I did know that the score was
> guaranteed to be above beta or below beta.

If I understand you correctly, that's one of the alternatives (the second)
that I gave for addressing the problem in the next paragraph. I was
talking about what score you put in the transposition table, not the
score that is backed up since you're cutting off the branch. You could
also discard the transposition table entry entirely, though we did not;
we usually needed to look up a transposition table entry anyway to see
whether the position was a repetition, either from the game history or
from the line leading back to the root position. Of course, if we later
needed an entry for another position in the table, entries that held only
leaf nodes were not considered very valuable, but they could sometimes
save calling the terminal evaluation routine again. So we could often
pick up a terminal node whose terminal evaluation had been cut short, and
in that case we had to know whether we needed to re-evaluate the
position.

> --> If the alpha and beta "window" is tighter
> -->than it was before, you don't know if the move would cut off if you
> -->had the complete positional score or not. The easiest way to deal
> -->with the problem is to store the score with the most pessimistic
> -->adjustment for what the positional adjustment might be, as appropriate
> -->for whether this is a high bound or a low bound; or, alternatively,
> -->store only alpha or beta as appropriate (throwing away the `quick test'

> -->score). [...]


>
> Maybe I misunderstand what you are saying? My "cut" on lazy evaluation is
> in the spirit of alpha/beta minimax. If the score's < alpha, it's of no
> use whatsoever. Ditto for > beta, because beta is the largest score I'll
> except (and I'll fail high if I get that). the score is either < alpha,
> --> beta, or somewhere in between. Don't see how worrying about how bad it
> is matters...

If you actually complete a terminal evaluation, you know what the score
is for the "null move" option; this is true whatever the values for
alpha and beta are. If you terminate it early, you know that the score
lies between (quick score - maximum positional adjustment) and (quick
score + maximum positional adjustment): actually rather more information
than you get from an actual search that cut off, although it's only for
a 0-ply search :-). Obviously this doesn't include any information about
the quiescence search from this position (if any).
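
A minimal sketch in C of that kind of lazy cutoff; MAX_POSITIONAL and
the two helper routines are assumed hooks into the program, and the
values are purely illustrative:

#define MAX_POSITIONAL 150              /* centipawns, illustrative   */

extern int material_score(void);        /* cheap "quick test" term    */
extern int positional_score(void);      /* expensive positional terms */

int evaluate(int alpha, int beta, int *exact)
{
    int quick = material_score();

    /* Even the largest possible positional adjustment cannot bring
       the score back inside the window, so return a bound only.      */
    if (quick + MAX_POSITIONAL <= alpha) { *exact = 0; return alpha; }
    if (quick - MAX_POSITIONAL >= beta)  { *exact = 0; return beta;  }

    *exact = 1;                         /* full, trustworthy score    */
    return quick + positional_score();
}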

I realized that in that first article my editing reversed the condition -
it should have been that the alpha and beta values were tighter the first
time through, and the second time you encounter the position the values
are wider so that they now overlap the error range for the quick terminal
evaluation. Sorry - that may have been the cause of your confusion.

Bruce C. Wright

Steven J. Edwards

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
brucemo (Bruce Moreland) writes:

>Someone figured out how to make the KR vs K database a little smaller, there
>was an article on this in the ICCA Journal about a million years ago
>(pre-1985 i think, maybe when it was still the ICCA Newsletter).

I recall the article. The author had less than 32 Kbyte RAM available
for storage, so he was able to devise a scheme to store the distance
to mate values in a nybble (4 bits). The idea was that the stored
value was interpreted differently depending on whether or not the
black king was in the central 4x4 square. If the black king was
within the square, then a small number was added to the stored
distance to get the true distance. This worked because the author
demonstrated that a forced mate could not be achieved in less than a
certain number of moves if the black king was in the central 16
squares, and that if if the black king was not in the central 16
squares then a forced mate took at most some number of moves that
could be stored without subtracting the bias.

Note: the longest distance to mate in KRK is 16 moves. In addition to
storing the distance, each entry must also have a marker for
indicating a drawn game. During construction, it is useful to have
additional marker values for "illegal position" and "undetermined
value". This totals 19 different values, which is too many for an
unbiased nybble scheme.

The same author had earlier constructed the distance to mate database
for KQK WTM. There was no need to resort to a bias scheme as the
longest distance to mate in KQK is 9 moves which fits into a nybble
with 7 values left over.
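
A minimal sketch in C of the decoding side of such a bias scheme; the
marker value and the size of the bias are illustrative assumptions, not
the scheme from the original article:

#define NYBBLE_DRAW   15                /* reserved marker value      */
#define CENTRAL_BIAS   5                /* assumed minimum mate length
                                           with the king centralized  */

static int bk_in_center(int bk_sq)      /* files c-f, ranks 3-6       */
{
    int file = bk_sq & 7, rank = bk_sq >> 3;
    return file >= 2 && file <= 5 && rank >= 2 && rank <= 5;
}

static int decode_dtm(int nybble, int bk_sq)
{
    if (nybble == NYBBLE_DRAW)
        return -1;                      /* drawn position             */
    return bk_in_center(bk_sq) ? nybble + CENTRAL_BIAS : nybble;
}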

-- Steven (s...@mv.mv.com)

Robert Hyatt

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <1996Feb3.160642.8715@ais>, <br...@ais.com> wrote:
-->In article <4etrsr$c...@news.microsoft.com>, brucemo (Bruce Moreland) writes:
-->> In article <4enp7f$m...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu says...
-->>>[snip]
-->>>First, it's a small database. using byte-values, 2^17 should be enough,
-->>>and it's likely it could be reduced by another factor of two as I haven't
-->>>given a lot of thought to the issue.
-->>
-->> It's like 123K bytes if you store it bitwise. There's probably a way to make
-->> it smaller, but I don't particularly care.
-->>
-->> Someone figured out how to make the KR vs K database a little smaller, there
-->> was an article on this in the ICCA Journal about a million years ago
-->> (pre-1985 i think, maybe when it was still the ICCA Newsletter).
-->
-->Not quite right. Let's assume that you always store the position with
-->White having the Pawn (the cases with Black having the Pawn are symmetric
-->and you can easily invert the position). You can very easily get rid of
-->reflections as well by only storing the positions with the Pawn on the
-->Queen-side and reflecting the position if the pawn is on the King-side.
-->This leaves 4*6=24 possible squares for the Pawn.
-->
-->Now each of the Kings can occupy any of the squares of the board, except
-->for squares already occupied or from which the King not on the move would
-->be in check. (The case of the King on the move being in check from the
-->other King would be excluded by this rule since the King not on the move
-->would also have to be in check). However offhand I don't see a time-
-->efficient way to store only the +/- 56 squares that the defending King
-->can legally occupy, so let's just store all 64; storing only the squares
-->that each King can legally occupy would only save about 12% anyway.
-->Finally we need to double this number because either side can be on the
-->move. So the total number of possible positions is given by:
-->
--> 24*64*64*2 = 196608, or somewhat less than 2^18 which is 262144.
-->
-->Assume we store this as a bit array, with one bit per position. If the
-->bit is 0, the position is drawn; if it is 1, the position is won. If
-->we ever reach one of these positions in actual play we can just continue
-->to search only on the won positions and select the one that advances the
-->Pawn the farthest or that succeeds in safely promoting the Pawn; we
-->really only need to know if we should stop searching because the position
-->is a forced draw. So the number of bytes required to store the table is:
-->
--> 196608/8 = 24576 bytes.
-->
-->Now this isn't such an enormous amount of storage by itself; if you have
-->a program that you know will always have many megabytes of storage available
-->it may even be a reasonable thing to store. But if you can store each
-->transposition node in 64 bytes (this would allow for a long key to store
-->the exact position in a format that would be very time-efficient to compute,
-->as well as a fair amount of additional information), that's 384 extra nodes
-->in the transposition table. If you don't store the exact position but rely
-->on an inexact key you could get even more at the cost of possibly getting
-->a "false hit" with potentially disastrous results. If your transposition
-->table is 2MB, this is only about 1.15% of the transposition table space.
-->The problem is that the extra nodes will almost always be useful, but the
-->draw table will not be. Rule-based draw detection would deal with most of
-->this case in only a couple hundred bytes and not much more processor time.
-->
-->The problem becomes much more intractable when you add another piece (No
-->other endgame containing only the Kings plus one piece or Pawn is at all
-->interesting;

This might prove more difficult than you think. Without care, you might
draw a won position, because you can't see the win *after* the promotion,
so you make a move that keeps you in a *known winning position* in the
database, and before you know it, you have exhausted all possible moves
that could win, so that now everything leads to a repetition.

I've had some really interesting things happen, database-wise, when I
didn't have *all* the 4-man endings available for Crafty. It drew won
positions, or converted to a hard win rather than an easy win, which is
not bad in theory, but in practice, if you are low on time, it can be.

I personally believe that (a) win/loss/draw is not enough in a chess
database, because the distance from "1" to "mate" can be so far that it's
impossible to see with a search (KBN vs K comes to mind), and (b) that
all possible conversion classes are also needed to avoid drawing positions
that are wins (KQ vs KR comes to mind, where you start in a KR vs KP
position and have one of the rare positions where the queen can't be taken
after the promotion. Without KQKR, it'd be *difficult* to win this from
the winning side.)

It's probably possible that KPK by itself is o.k., because most programs can
force mate with the queen with practically no search. However, getting it to
make the conversion from KPK (won) to KQK (search win) could be difficult.

Win/loss/draw might work for KPK if the win is scored as something < QUEEN, so
that the search would choose the promotion when possible, but it will
probably not work for anything with 4 pieces.

--> it's either an obvious win or an obvious draw). In addition,
-->in many of these endgames there are _three_ possible outcomes rather than
-->just two. Even if you just store the ones that are drawn on the theory
-->that the ones that are won or lost can be picked out fairly easily by the
-->search, that's still on the order of 1572864 bytes (=24576*64) for each
-->one. Add yet another piece to get to the really interesting endgames such
-->as K+R+P vs K+R and things get completely out of hand. (It would however
-->be quite easy to store all possible combinations of K,R, and P vs K and R
-->endgames -- of which K+P vs K would be one subset -- as additional entries
-->in such a unified table by placing each "missing" piece on top of its own
-->King. Even so the table would just be huge).
-->
-->While I'm on this subject I should probably point out that you should
-->also check for the case of K+P vs K+B or K+N. Since these are sometimes
-->won by the side with the piece if the Pawn is a RP (more often with the
-->N than with the B since if the B can't mate on the move it requires a
-->helpmate), they do need to be treated with some care. I've seen not a
-->few programs that will prefer the side with the piece even though in the
-->vast majority of cases only the side with the Pawn has any winning chances,
-->even to the point of giving up its last Pawn (and its winning chances) to
-->get the extra piece. We used have the terminal position evaluation (but
-->not the "drawn game detector") put a maximum value of "draw" on such
-->positions for the side that had the piece, on the theory that those
-->few cases that were won could be found by the search.
-->
-->All of this of course assumes that you have a program that's good enough
-->to get to the endgame. If you're wiped out in the middle game (likely if
-->you're playing Deep Thought :-), you'll never get a chance to use any of
-->these things.

How true... :)

Robert Hyatt

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <1996Feb4.142130.8719@ais>, <br...@ais.com> wrote:
-->In article <4eu72p$q...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:

-->> In article <1996Feb2.110025.8713@ais>, <br...@ais.com> wrote:
-->> -->
-->> -->An obvious potential problem, to combine two threads, is that if the
-->> -->score was obtained by the `lazy evaluation' which terminated the
-->> -->positional evaluation early because it was obvious that the move would
-->> -->cut off, you can't use that raw score in the table because when the
-->> -->position is encountered again you can't guarantee that alpha and beta
-->> -->will have the same values.
-->>
-->> I'm not sure exactly how to interpret this, because the lazy evaluation
-->> that I did in Cray Blitz could *never* be used if eval bailed out early
-->> because the score couldn't be brought back into the window. I simply
-->> returned alpha or beta depending, but it didn't matter. If it was beta,
-->> we failed high and that worked well. If it was alpha, and no caputres
-->> could improve it, it failed low. In either case, storing the edge
-->> that I failed on worked fine, since I did know that the score was
-->> guaranteed to be above beta or below beta.
-->
-->If I understand you correctly, that's one of the alternatives (the second)
-->that I gave for addressing the problem in the next paragraph. I was
-->talking about what score you put in the transposition table, not the
-->score that is backed up since you're cutting off the branch. You could
-->also discard the transposition table entry entirely, though we did not;
-->we usually needed to look up a transposition table entry anyway to see
-->if it was a repetition of position either of the game history or of the
-->line from the root position. Of course if we later needed an entry for
-->another position in the table the entries that had only leaf nodes
-->were not considered very valuable, but it sometimes could save calling
-->the terminal evaluation routine again. So we could often pick up a
-->terminal node whose terminal evaluation had been terminated early, and
-->in that case we had to know whether we needed to re-evaluate the
-->position.
-->
-->> --> If the alpha and beta "window" is tighter
-->> -->than it was before, you don't know if the move would cut off if you
-->> -->had the complete positional score or not. The easiest way to deal
-->> -->with the problem is to store the score with the most pessimistic
-->> -->adjustment for what the positional adjustment might be, as appropriate
-->> -->for whether this is a high bound or a low bound; or, alternatively,
-->> -->store only alpha or beta as appropriate (throwing away the `quick test'
-->> -->score). [...]
-->>
-->> Maybe I misunderstand what you are saying? My "cut" on lazy evaluation is
-->> in the spirit of alpha/beta minimax. If the score's < alpha, it's of no
-->> use whatsoever. Ditto for > beta, because beta is the largest score I'll
-->> except (and I'll fail high if I get that). the score is either < alpha,
-->> --> beta, or somewhere in between. Don't see how worrying about how bad it
-->> is matters...
-->
-->If you actually complete a terminal evaluation, you know what the score
-->is for the "null move" option; this is true whatever the values for
-->alpha and beta are. If you terminate it early, you know that the score
-->lies between (quick score - maximum positional adjustment) and (quick
-->score + maximum position adjustment): actually rather more information
-->than you get from an actual search that cut off, although it's only for
-->a 0-ply search :-). Obviously this doesn't include any information about
-->the quiescence search from this position (if any).
-->
-->I realized that in that first article my editing reversed the condition -
-->it should have been that the alpha and beta values were tighter the first
-->time through, and the second time you encounter the position the values
-->are wider so that they now overlap the error range for the quick terminal
-->evaluation. Sorry - that may have been the cause of your confusion.

-->
-->Bruce C. Wright

Actually, it might be age that confused me. :) Happens all the time.

Bob

br...@ais.com

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
In article <4ervvb$p...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <4erog8$3...@ns2.spectra.net>,
> Tom Elliott <tell...@spectra.net> wrote:
> -->I've found this entire tread very interesting. The discussion based
> -->on "these endings aren't reached very often" prompts this observation.
> -->
> -->It has been observed that programs are very poor at planning,
> -->especially long-range planning. Wouldn't the ability to recognize the
> -->possibility of a won ending and attempting to reach it be a long range
> -->plan. The threat of reaching such an ending would be formidible and
> -->put strong psychological pressure on the opponent, if the threat is
> -->even recognized.
> -->
> -->I read once that one thing to do when evaluating a position is to
> -->imagine all the pieces, except Kings, removed and see who wins, or
> -->draws. This is useful in assessing the permanent strength of one's
> -->position. It seems that extending this to evaluating probable, more
> -->complicated endgame setups, is a good stategy for programs.
>
> One of the things I did in Cray Blitz was to use the pawn evaluation
> to decide about trading pieces, rather than the old "if ahead trade
> pieces, if behind trade pawns." This is the spirit of your idea: if
> the pawn structure is significantly better for me (say a protected
> passed pawn) then I want to reach a pawn-only ending where I should be
> able to use my pawn structure advantage to win. Not in Crafty (yet)
> but it will be.

One thing that we used to do was to have a special routine that ran
before the main search that identified strong and weak points in each
side's respective position -- weak squares, passed Pawns, weak King
protection, etc, and tried to grow trees of moves to protect our own
weak squares or attack the opponent's weak squares. These would be
just sequences of moves by our own or our opponent's pieces and Pawns
to reach the targeted squares. A table was built for the value of
reaching each of these goals for each type of piece and for reaching
the middle points along the paths (the values diminishing the further
they were from the goal). Then during the search the program would
get bonus points for reaching some of the intermediate points in the
plan.
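
A minimal sketch in C of that kind of plan table; the names, the
halving taper and the piece encoding are assumptions, not the Duchess
implementation:

enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, NPIECES };

static int plan_bonus[NPIECES][64];     /* filled before the search   */

/* Credit the squares along a path towards a goal square, with the
   reward shrinking the further a square lies from the goal.  During
   the search the evaluation simply adds plan_bonus[piece][sq] for
   each piece, so intermediate progress along the plan is rewarded.   */
static void add_plan(int piece, const int *path, int len, int goal_value)
{
    for (int i = 0; i < len; i++) {
        int steps_from_goal = len - 1 - i;
        int bonus = goal_value >> steps_from_goal;   /* halve per step */
        if (bonus > plan_bonus[piece][path[i]])
            plan_bonus[piece][path[i]] = bonus;
    }
}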

This seemed most useful in the end-game when actually reaching the
culmination of a plan might be beyond the search depth, and where
tactics did not dominate as much as they do in the middle game; for
example, Duchess was quite capable of finding long King manoeuvers
that might take the King far away from simpleminded heuristics such as
"centralize the King in the endgame" and that were much too deep to be
found by a direct search. It wasn't perfect; it could not take into
account the changes in strategy that might be dictated by a radically
different structure encountered deep in the search, but it seemed to
be better than nothing.

Bruce C. Wright

Steven J. Edwards

unread,
Feb 4, 1996, 3:00:00 AM2/4/96
to
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>I personally believe that (a) win loss and draw is not enough in a chess
>database, because the distance from "1" to "mate" can be so far, it's
>impossible to see with a searchh (KBN vs K comes to mind.) (b) that
>all possible conversion classes are also needed to avoid drawing positions
>that are wins (KQ vs KR comes to mind, where you start in a KR vs KP
>position and have one of the rare positions where the queen can't be taken
>after the promotion. Without KQKR, it'd be *difficult* to win this from
>the winning side.

The first part of this has been shown to be true even in games other
than chess. The Chinook world champion checker-playing program uses
only win/loss/draw information in what is probably the largest endgame
database on the planet. But even though it is the champion, it has
still had problems at times moving from a known won N-man position to a
won (N-1)-man position.

For database fans, a mention should be made of Chinook's win/loss/draw
encoding, as the same technique could be used for chess endgames.
Chinook is able to store five position evaluations per byte by using
base-3 arithmetic coding. This is 25% better than the obvious
two-bits-per-value coding.
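
A minimal sketch in C of that packing (five values of 0/1/2 per byte,
since 3^5 = 243 fits in 8 bits); the names are assumptions, not
Chinook's code:

#include <stdint.h>

/* Pack five win/loss/draw trits (values 0..2) into one byte. */
static uint8_t pack5(const int v[5])
{
    return (uint8_t)(v[0] + 3 * (v[1] + 3 * (v[2] + 3 * (v[3] + 3 * v[4]))));
}

/* Extract the i-th (0..4) value from a packed byte. */
static int unpack(uint8_t b, int i)
{
    while (i--)
        b /= 3;
    return b % 3;
}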

The second part of this is true, but perhaps only when playing against
GM strength opponents or computer opponents known to have complete
distance to mate/loss databases. Other opponents are highly unlikely
to find the best defense/offense over the board under regular time
controls. Years ago GM Walter Browne had a hard time winning KQKR
against Belle with its KQKR WTM distance to conversion database. He
was able to just barely manage it after a week's worth of intensive
study. I doubt if things have changed much for the general case since
then.

For most players it is tough enough to win KBNK under normal time
controls if the opponent knows enough to stay out of the corners and
particularly the wrong corner pair.

I agree that it is important that if you have a database for endgame
class X, then you should also have all the successor databases of that
class. So, if KRKP is present, it's a good idea to have KQKR, KRKR,
KRKB, KRKN, KRK, and KPK available. (These are the ones needed during
the generation of KRKP.) Now, for those unable to dedicate sufficient
disk storage (ca. 240 Mbyte) for all of the three and four man
classes, there is an alternative. It is possible to get by almost as
well by having only the WTM files instead of both the WTM and BTM
files. Having only one side of the tablebase pairs will make the
program search a little longer than if both sides are present.
Specifically, it means an extra ply of search for about half of the
positions that would be solved immediately if both sides of the files
were present. Note that the terms WTM and BTM are only a convention
related to the names used for the tablebase files; the program using
them will automatically swap colors and flip the board as required.

A side note: what is the rating of a program that plays perfect chess
if the ratings are calculated only for positions for which the program has
distance to mate/loss databases? Against imperfect human (or
imperfect program) opposition, its rating will be at least as high as
the highest rated opponent's. (Assuming equally distributed starting
positions.) But no such competition will ever generate a reliable
rating because the upper limit of the opposition is variable over
time. If enough databases are present, say all the six man classes,
then I would guess that the program would win over 99% of its games
and so have a rating 400 points above the best opposition. But this
still doesn't give a reliable figure.

-- Steven (s...@mv.mv.com)

Paul Rubin

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <DM9wM...@mv.mv.com>, Steven J. Edwards <s...@mv.mv.com> wrote:
>A side note: what is the rating of a program that plays perfect chess
>if the ratings are calculated only for positions that the program has
>distance to mate/loss databases? Against imperfect human (or
>imperfect program) opposition, it's rating will be at least as high as
>the highest rated opponent. (Assuming equally distributed starting
>positions.) But no such competetion will ever generate a reliable
>rating because the upper limit of the opposition is variable over
>time. If enough databases are present, say all the six man classes,
>then I would guess that the program would win over 99% of its games
>and so have a rating 400 points above the best opposition. But this
>still doesn't give a reliable figure.

Huh? If the starting positions are equally distributed, then half of
them will be drawn or lost. In fact many of the six man classes are
totally uninteresting, like KQQQQQ vs. K: black loses immediately
(maybe some diabolical problem composer can construct an obscure
stalemate trap, but that's irrelevant to the statistical question).
So I don't see how even perfect knowledge of six-man endings can help
a program win 99% of randomly selected six-man positions against any
player that is not totally idiotic. There are totally won positions,
totally lost positions, and the in-between margin where knowledge and
skill make chess interesting. Give reasonable strategy and searching
capability to the opponent, and this margin can be quite thin even
against a perfect player.

Ernst A. Heinz

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
Steven J. Edwards wrote:

> Although KPK might not appear in every game, it is surprising how
> often it and many other 3-4-5 man positions are reached during
> searches taking place in the late middlegame.

True indeed.

> It is not necessary to dedicate memory to endgame storage. Having
> uncompressed versions of the endgame database files on disk is
> sufficient. This has been shown through testing where the endgame
> files may be probed at any full width search ply. The results
> indicate a significant strength improvement, and so having endgame
> databases is more than just being able to announce "mate in 30" (or
> similar predictions) in only a few root level positions. This
> improvement has been demonstrated in both Spector and Crafty, and I'm
> sure it can be demonstrated in other programs as well.

DarkThought uses the whole set of Ken Thompson's databases during the
search, and we have had similar experiences, e.g. during WCCC'95 and WMCCC'95.
From this we conclude that even *compressed* external endgame databases
(Huffman encoded in our case) may be accessed during the search without
intolerable cost.

=Ernst=
--
+----------------------------------------------------------------------------+
| Ernst A. Heinz, IPD, School of CS, Univ. of Karlsruhe, P.O. Box 6980, |
| D-76128 Karlsruhe, Germany. WWW: <http://wwwipd.ira.uka.de/Tichy/~heinze> |
| Mail: <hei...@ira.uka.de> Tel: +49-(0)721-6084386 Fax: +49-(0)721-694092 |
+----------------------------------------------------------------------------+
"It has recently been found out that research causes cancer in rats!"

Robert Hyatt

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <4f6lbd$p...@nz12.rz.uni-karlsruhe.de>,
Peter Gillgasch <gil...@ira.uka.de> wrote:
-->In article <1996Feb4.144843.8720@ais>, br...@ais.com writes:
-->|> One thing that we used to do was to have a special routine that ran
-->|> before the main search that identified strong and weak points in each
-->|> side's respective position -- weak squares, passed Pawns, weak King
-->|> protection, etc, and tried to grow trees of moves to protect our own
-->|> weak squares or attack the opponent's weak squares. These would be
-->|> just sequences of moves by our own or our opponent's pieces and Pawns
-->|> to reach the targeted squares. A table was built for the value of
-->|> reaching each of these goals for each type of piece and for reaching
-->|> the middle points along the paths (the values diminishing the further
-->|> they were from the goal). Then during the search the program would
-->|> get bonus points for reaching some of the intermediate points in the
-->|> plan.
-->|>
-->|> This seemed most useful in the end-game when actually reaching the
-->|> culmination of a plan might be beyond the search depth, and where
-->|> tactics did not dominate as much as they do in the middle game; for
-->|> example, Duchess was quite capable of finding long King manoeuvers
-->|> that might take the King far away from simpleminded heuristics such as
-->|> "centralize the King in the endgame" and that were much too deep to be
-->|> found by a direct search. It wasn't perfect; it could not take into
-->|> account the changes in strategy that might be dictated by a radically
-->|> different structure encountered deep in the search, but it seemed to
-->|> be better than nothing.
-->
-->I am really glad that somebody touches this subject. As I
-->started the work on the program that later became DarkThought I
-->had basically no evaluation for the pieces. After toying
-->with horrible slow code that did endpoint evaluations for
-->the pieces I got the idea (as many did before, but I swear
-->I didn't know that and hence I was quite proud of it) to
-->analyze the root position and to modify the piece/square table
-->relative to pawn structure and king locations. The piece/square
-->table scores were also used for move ordering to cut down
-->the tree size (this helped a lot since it is basically using
-->the evaluation to order the moves at close to no cost).
-->
-->Of course the program played *much* better than before but
-->it had serious trouble when the pawn structure was highly
-->flexible and if there was a way to lock its bishops with
-->own pawns it was guaranteed to like exactly this line ;)
-->
-->As you said, it is "better than nothing".
-->
-->Later the three of us modified the oracle code (to use Berliner's
-->term) and we where quite sucessfull with it - DarkThought started
-->to beat commercial PC programs. But it was quite hard to make
-->further progress, so a lot of knowledge was moved to the
-->endpoint evaluation up to the point that there was basically
-->no oracle code left. This improved playing strength but it
-->was always my impression that we suffered in the "tactical"
-->aspect a bit since the search was no longer fast as hell and
-->I was never convinced that going this way was the best decision.

A quibble with the above. When Crafty runs on its "target", it crushes
the commercial programs too. However, when it runs on a PC, it's a different
matter altogether. When you are talking about beating commercial programs,
I assume you are still talking about running on an Alpha, which is *not*
a reasonable comparison point, any more than my C90 tests last year were,
even though I "handicapped" Cray Blitz a significant amount. As a result,
it's hard to attribute "x" to the programming algorithm and "y" to faster hardware.
That's why you don't find me posting games where Crafty beats Genius, although
it does it regularly. I just don't think it will win as many when running on
a PC as it will when running on faster hardware. The real test would be to see
how you do against a commercial program when you both run on the *same* class
of machine.

-->
-->I am currently finishing a complete redesign of DarkThought
-->that takes the need of a fast endpoint officer evaluation
-->into account (using bitboards with *very* small lookup tables)
-->without the need to resort to major kludges in the evaluation code.
-->I learned a lot during the development of DarkThought v1 and
-->v2 is about 4 times faster.
-->
-->But it is obvious that a lot of long term features can be
-->determined by looking at the root position and I am convinced that
-->a "mixed" approach of endpoint evaluation and root node
-->evaluation is the way to go for a couple of reasons:
-->
-->- no serious glitches thanks to correct endpoint evaluation
-->- no hefty speed / depth penalty
-->- no need to modify optimized time critical code all the time
-->
-->So, what does r.g.c.c. think of this? I think I recall that
-->Bob doesn't like root node prescanning since the search may
-->run into a completely unexpected direction if you search deeply
-->but if the Deep Blue guys are putting a writable piece
-->square table on their chip it is obvious that the opinions on
-->that are different in the > 1MNPS camp as well.

Not necessarily. It simply depends on (a) what you put there and (b) what
your hardware is capable of evaluating. Better in (a) than nowhere, but,
as Berliner mentioned, the important things migrate from piece/square tables
to dynamic evaluation computations as a program gets better. Not the other
way around...

-->
-->-- Peter
-->
-->p.s.: Interesting side note. DarkThought has no real problems
--> against most programs with a lot of endpoint evaluation knowledge
--> and no problems against most of the prescan-based evaluators.

Hmm, I don't know how to evaluate this. The obvious conclusion is not supported
by evidence...

--> But I cannot think of any "endpoint" evaluator that really
--> scares me, only some prescan engines come to mind. My
--> conclusion is that good prescanning + speed is lethal,
--> while speed is not possible when a program is based on
--> endpoint evaluation only.
-->
-->------------------------------------------------------------
-->Peter W. Gillgasch
-->Klosterweg 28/I-113 email gil...@ira.uka.de
-->76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

The PS is questionable. Crafty's fast, and getting faster as I clean up
and optimize. Also, I ran an older version on a Cray T90 a few months back,
using a cobbled-up parallel algorithm borrowed from Cray Blitz code and saw
around 5M nodes per second, which is *not* slow. And it does *full* endpoint
evaluation, mobility and all...

I don't like dynamic info in piece/square tables because it's simply
wrong. However, centralization and other "static" features work very
well; we've *always* done these, and still do in Crafty. However, no
king-side attack stuff, no king tropism, no weak squares, because that's
too dynamic and it's wrong. A program written with (say) king-safety stuff
in piece/square tables will attack a king that's scampered away, or else
won't bother "scampering away" because it can't escape the scoring penalties,
since they are not attached to the king.

This has been done for 25 years, but *not* because it's better; rather,
because it's faster. If it comes down to a decision based on speed, then
this might be a good deal. However, all else being equal, endpoint evaluation
simply has to be better; obviously it can't be any worse. If endpoint evaluation
costs too much, I can see where you'd move slow things to piece/square tables and
the speed might be well worth it.

br...@ais.com

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <4f2jnc$r...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> In article <1996Feb3.160642.8715@ais>, <br...@ais.com> wrote:
> -->
> -->Not quite right. Let's assume that you always store the position with
> -->White having the Pawn (the cases with Black having the Pawn are symmetric
> -->and you can easily invert the position). You can very easily get rid of
> -->reflections as well by only storing the positions with the Pawn on the
> -->Queen-side and reflecting the position if the pawn is on the King-side.
> -->This leaves 4*6=24 possible squares for the Pawn. [...]

> -->
> -->The problem becomes much more intractable when you add another piece (No
> -->other endgame containing only the Kings plus one piece or Pawn is at all
> -->interesting;
>
> This might prove more difficult than you think. Without care, you might
> draw a won position, because you can't see the win *after* the promotion,
> so you make a move that keeps you in a *known winning position* in the
> database, and before you know it, you have exhausted all possible moves
> that could win, so that now everything leads to a repetition.

The idea was just to score the drawn positions and let the search deal
with the others, ie the database wouldn't score them at all. That
does allow a significant compression of the database, and although
there are some possibilities of problems if the search can't figure out
how to win a won position, that can't really be completely solved by
adding more databases (unless that is you want to go into multiple
gigabyte databases). For example, if the program knows how to deal
with K+B+N vs K only via a database, it may find itself unable to find
the win if it encounters K+B+N+N vs K+B (for example). I think that
you still have to put heuristics in the search to help it find the win
in such situations, and those heuristics should still work in the
degenerate case of just K+B+N vs K.

Not that K+B+N vs K and K+Q vs K+R are very common endgames -- I don't
think I've ever seen the latter come up in an actual game between good
players (either human or computer) that wasn't set up to start in such
an endgame, and the former only once.

You're right that there can be some really odd "edge effects" with
all of this though, and if you have the full database with "number of
moves to mate" for each position you'll eliminate the problems for
those particular endgames. But that won't get you out of writing a
good endgame evaluation module since even a slight modification to
the position will get you out of your database.

> I personally believe that (a) win loss and draw is not enough in a chess
> database, because the distance from "1" to "mate" can be so far, it's
> impossible to see with a searchh (KBN vs K comes to mind.)

Scoring a position as "won" without a best move and/or a "distance to
mate" would be deadly -- the program would have no incentive to make
any progress. That's why I was indicating it as either "drawn" or "not
drawn" and letting the search deal with assigning a score in the latter
case.

Bruce C. Wright

Bruce Moreland

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
Of course not. The only real reason you might need it is if your KP vs K
database caused you to announce mate; if you didn't have the databases that
are used after conversion, you'd get odd behavior at the point of transition.

bruce

In article <phrDM6...@netcom.com>, p...@netcom.com says...
>
>In article <4etrsr$c...@news.microsoft.com>, Bruce Moreland <brucemo> wrote:
>
>>Someone figured out how to make the KR vs K database a little smaller, there
>>was an article on this in the ICCA Journal about a million years ago
>>(pre-1985 i think, maybe when it was still the ICCA Newsletter).
>

Robert Hyatt

unread,
Feb 5, 1996, 3:00:00 AM2/5/96
to
In article <1996Feb5.121010.8723@ais>, <br...@ais.com> wrote:
-->In article <4f2jnc$r...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->> In article <1996Feb3.160642.8715@ais>, <br...@ais.com> wrote:
-->> -->

-->> -->Not quite right. Let's assume that you always store the position with
-->> -->White having the Pawn (the cases with Black having the Pawn are symmetric
-->> -->and you can easily invert the position). You can very easily get rid of
-->> -->reflections as well by only storing the positions with the Pawn on the
-->> -->Queen-side and reflecting the position if the pawn is on the King-side.
-->> -->This leaves 4*6=24 possible squares for the Pawn. [...]
-->> -->

-->> -->The problem becomes much more intractable when you add another piece (No
-->> -->other endgame containing only the Kings plus one piece or Pawn is at all
-->> -->interesting;
-->>
-->> This might prove more difficult than you think. Without care, you might
-->> draw a won position, because you can't see the win *after* the promotion,
-->> so you make a move that keeps you in a *known winning position* in the
-->> database, and before you know it, you have exhausted all possible moves
-->> that could win, so that now everything leads to a repetition.
-->
-->The idea was just to score the drawn positions and let the search deal
-->with the others, ie the database wouldn't score them at all. That
-->does allow a significant compression of the database, and although
-->there are some possibilities of problems if the search can't figure out
-->how to win a won position, that can't really be completely solved by
-->adding more databases (unless that is you want to go into multiple
-->gigabyte databases). For example, if the program knows how to deal
-->with K+B+N vs K only via a database, it may find itself unable to find
-->the win if it encounters K+B+N+N vs K+B (for example). I think that
-->you still have to put heuristics in the search to help it find the win
-->in such situations, and those heuristics should still work in the
-->degenerate case of just K+B+N vs K.
-->
-->Not that K+B+K vs K and K+Q vs K+R are very common endgames -- I don't
-->think I've ever seen the latter come up in an actual game between good
-->players (either human or computer) that wasn't set up to start in such
-->an endgame, and the former only once.

Depends on how many games you play. I've seen dozens of KQ vs KR,
KRN vs KR, KRB vs KR, etc. types of endings on ICC. Of course, this
is from playing over 30,000 games last year there, not counting all
the other "crafty" versions playing there too. However, the KR* vs KN*
case is *very* common, where Crafty wins or loses the exchange, and then
plays on evenly to reach an exchange-up or exchange-down ending. In
fact, I've seen probably at least 50 KR vs KN or KR vs KB type endings
this past year.

-->
-->You're right that there can be some really odd "edge effects" with
-->all of this though, and if you have the full database with "number of
-->moves to mate" for each position you'll eliminate the problems for
-->those particular endgames. But that won't get you out of writing a
-->good endgame evaluation module since even a slight modification to
-->the position will get you out of your database.

Correct, but the neat part of this is that with "distance to mate" which
is an *absolute* value, you can now play to force the game to a won
position, with no guesswork at all. If you can't hit the databases,
then you play with the evaluation you have, but as you get close, the
search will automatically be "drawn" the right way because it begins to
smell mate scores around the fringe of the tree...


-->
-->> I personally believe that (a) win loss and draw is not enough in a chess
-->> database, because the distance from "1" to "mate" can be so far, it's
-->> impossible to see with a searchh (KBN vs K comes to mind.)
-->
-->Scoring a position as "won" without a best move and/or a "distance to
-->mate" would be deadly -- the program would have no incentive to make
-->any progress. That's why I was indicating it as either "drawn" or "not
-->drawn" and letting the search deal with assigning a score in the latter
-->case.
-->
Also, in your last sentence above, how do you handle KQ vs KR? This is
difficult to "programmatically evaluate". All you'd know is "not drawn",
but if you are in a KQP vs KRP, you could probably guess that's not
drawn as well. In any case, this is a difficult issue.

Also remember that we are applying this knowledge way out in the search
tree, and making decisions based on it. IE, should I sac my queen for
his rook to reach a won KP vs KP ending? Crafty does it in a heartbeat,
rather than playing on for many moves in a more difficult ending. If you
don't know the conversion distance, it's hard to make a rational decision
at endpoints. Once you land in an ending, it may be more difficult. For
example, how many moves does it take to convert KQ vs KR into a KQ vs K
ending? I know that Crafty's never going to see that far, and would very
likely run into repetition draws because it can't see "over the hill" far
enough.

Peter Gillgasch

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to
In article <1996Feb4.144843.8720@ais>, br...@ais.com writes:
|> One thing that we used to do was to have a special routine that ran
|> before the main search that identified strong and weak points in each
|> side's respective position -- weak squares, passed Pawns, weak King
|> protection, etc, and tried to grow trees of moves to protect our own
|> weak squares or attack the opponent's weak squares. These would be
|> just sequences of moves by our own or our opponent's pieces and Pawns
|> to reach the targeted squares. A table was built for the value of
|> reaching each of these goals for each type of piece and for reaching
|> the middle points along the paths (the values diminishing the further
|> they were from the goal). Then during the search the program would
|> get bonus points for reaching some of the intermediate points in the
|> plan.

|>
|> This seemed most useful in the end-game when actually reaching the
|> culmination of a plan might be beyond the search depth, and where
|> tactics did not dominate as much as they do in the middle game; for
|> example, Duchess was quite capable of finding long King manoeuvers
|> that might take the King far away from simpleminded heuristics such as
|> "centralize the King in the endgame" and that were much too deep to be
|> found by a direct search. It wasn't perfect; it could not take into
|> account the changes in strategy that might be dictated by a radically
|> different structure encountered deep in the search, but it seemed to
|> be better than nothing.

I am really glad that somebody touches this subject. As I
started the work on the program that later became DarkThought I
had basically no evaluation for the pieces. After toying
with horribly slow code that did endpoint evaluations for
the pieces I got the idea (as many did before, but I swear
I didn't know that and hence I was quite proud of it) to
analyze the root position and to modify the piece/square table
relative to pawn structure and king locations. The piece/square
table scores were also used for move ordering to cut down
the tree size (this helped a lot since it is basically using
the evaluation to order the moves at close to no cost).
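
A minimal sketch in C of that kind of root-position "oracle"; the
helper hooks and the bonus values are assumptions, not DarkThought's
code:

enum { WPAWN, WKNIGHT, WBISHOP, WROOK, WQUEEN, WKING, NTYPES };

static int pst[NTYPES][64];             /* rebuilt once per search    */

extern int enemy_king_square(void);     /* assumed hooks              */
extern int pawn_is_passed(int sq);

static int square_distance(int a, int b)
{
    int df = (a & 7) - (b & 7), dr = (a >> 3) - (b >> 3);
    if (df < 0) df = -df;
    if (dr < 0) dr = -dr;
    return df > dr ? df : dr;
}

void build_oracle_tables(void)
{
    int ksq = enemy_king_square();
    for (int sq = 0; sq < 64; sq++) {
        /* pull knights towards the enemy king ...                    */
        pst[WKNIGHT][sq] += 4 * (7 - square_distance(sq, ksq));
        /* ... and rooks onto the files of passed pawns               */
        if (pawn_is_passed(sq)) {
            int file = sq & 7;
            for (int r = 0; r < 8; r++)
                pst[WROOK][r * 8 + file] += 12;
        }
    }
}

/* Cheap move-ordering key: positional gain according to the oracle.  */
int order_key(int piece, int from, int to)
{
    return pst[piece][to] - pst[piece][from];
}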

Of course the program played *much* better than before, but
it had serious trouble when the pawn structure was highly
flexible, and if there was a way to lock its bishops with
its own pawns it was guaranteed to like exactly this line ;)

As you said, it is "better than nothing".

Later the three of us modified the oracle code (to use Berliner's
term) and we were quite successful with it - DarkThought started
to beat commercial PC programs. But it was quite hard to make
further progress, so a lot of knowledge was moved to the
endpoint evaluation, up to the point that there was basically
no oracle code left. This improved playing strength, but it
was always my impression that we suffered in the "tactical"
aspect a bit since the search was no longer fast as hell, and
I was never convinced that going this way was the best decision.

I am currently finishing a complete redesign of DarkThought
that takes the need for a fast endpoint officer evaluation
into account (using bitboards with *very* small lookup tables)
without the need to resort to major kludges in the evaluation code.
I learned a lot during the development of DarkThought v1, and
v2 is about 4 times faster.

But it is obvious that a lot of long-term features can be
determined by looking at the root position, and I am convinced that
a "mixed" approach of endpoint evaluation and root node
evaluation is the way to go for a couple of reasons:

- no serious glitches thanks to correct endpoint evaluation
- no hefty speed / depth penalty
- no need to modify optimized time critical code all the time

So, what does r.g.c.c. think of this? I think I recall that
Bob doesn't like root node prescanning since the search may
run in a completely unexpected direction if you search deeply,
but if the Deep Blue guys are putting a writable piece/square
table on their chip it is obvious that the opinions on
that are different in the > 1MNPS camp as well.

-- Peter

p.s.: Interesting side note. DarkThought has no real problems
against most programs with a lot of endpoint evaluation knowledge
and no problems against most of the prescan-based evaluators.
But I cannot think of any "endpoint" evaluator that really
scares me, only some prescan engines come to mind. My
conclusion is that good prescanning + speed is lethal,
while speed is not possible when a program is based on
endpoint evaluation only.

------------------------------------------------------------
Peter W. Gillgasch
Klosterweg 28/I-113 email gil...@ira.uka.de
76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

Walter Ravenek

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to
When reading this interesting thread I got the impression that neither
Bob nor Bruce is using the fail-soft condition. Let me first say what my
understanding of that technique is: in alpha_beta you start out with a
best score of -INFINITY and pass MAX(alpha, best_score) as a parameter
to subsequent calls of alpha_beta. I have never seen a proof of it, but
the idea seems to be that if the call to alpha_beta fails high or low,
you can return a score outside the alpha-beta window and use that score
as a bound if you need to re-search.
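
In code, my understanding of the technique is roughly the following
minimal sketch (the Position type and the helper routines are made up
for illustration, this is not any particular program's search):

#define INF 32000
#define MAX(a, b) ((a) > (b) ? (a) : (b))

typedef struct Position Position;        /* opaque; details assumed */
typedef struct { int moves[256]; int count; } MoveList;

extern int  evaluate(Position *pos);                 /* assumed helpers */
extern void generate_moves(Position *pos, MoveList *list);
extern void make_move(Position *pos, int move);
extern void unmake_move(Position *pos, int move);

/* fail-soft alpha-beta: 'best' starts at -INF rather than alpha, so the
 * value returned on a fail high or fail low lies outside the window and
 * can be kept as a bound (e.g. in the transposition table, or as the
 * starting point of a re-search window). */
int alpha_beta(Position *pos, int depth, int alpha, int beta)
{
    if (depth == 0)
        return evaluate(pos);

    int best = -INF;
    MoveList list;
    generate_moves(pos, &list);

    for (int i = 0; i < list.count; i++) {
        make_move(pos, list.moves[i]);
        int score = -alpha_beta(pos, depth - 1, -beta, -MAX(alpha, best));
        unmake_move(pos, list.moves[i]);

        if (score > best) {
            best = score;
            if (best >= beta)
                return best;          /* fail high: return the real bound */
        }
    }
    return best;                      /* may be below alpha: fail low */
}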

My questions:
- Is the fail-soft technique based on assumptions that are invalid
  in chess tree searches (e.g. because of extensions)?
- Have you experimented with it? If so, what were the results?

Vincent Diepeveen

unread,
Feb 6, 1996, 3:00:00 AM2/6/96
to
In <4f6lbd$p...@nz12.rz.uni-karlsruhe.de> gil...@ira.uka.de (Peter Gillgasch) writes:

<about prescanning and putting knowledge in tables>

>I am really glad that somebody touches this subject. As I
>started the work on the program that later became DarkThought I
>had basically no evaluation for the pieces. After toying
>with horrible slow code that did endpoint evaluations for
>the pieces I got the idea (as many did before, but I swear
>I didn't know that and hence I was quite proud of it) to
>analyze the root position and to modify the piece/square table
>relative to pawn structure and king locations. The piece/square
>table scores were also used for move ordering to cut down
>the tree size (this helped a lot since it is basically using
>the evaluation to order the moves at close to no cost).

>Of course the program played *much* better than before but
>it had serious trouble when the pawn structure was highly
>flexible and if there was a way to lock its bishops with
>own pawns it was guaranteed to like exactly this line ;)

>As you said, it is "better than nothing".

Tactics are more important than positional play - until the opponent
stops making tactical mistakes... :)

>Later the three of us modified the oracle code (to use Berliner's
>term) and we where quite sucessfull with it - DarkThought started
>to beat commercial PC programs. But it was quite hard to make
>further progress, so a lot of knowledge was moved to the
>endpoint evaluation up to the point that there was basically
>no oracle code left. This improved playing strength but it
>was always my impression that we suffered in the "tactical"
>aspect a bit since the search was no longer fast as hell and

>I was never convinced that going this way was the best decision.

>I am currently finishing a complete redesign of DarkThought
>that takes the need of a fast endpoint officer evaluation
>into account (using bitboards with *very* small lookup tables)
>without the need to resort to major kludges in the evaluation code.
>I learned a lot during the development of DarkThought v1 and
>v2 is about 4 times faster.

That is some speed up!

What can you do faster with bitboards? Are you talking about 64-bit
bitboards, or 2x32 bits?

>But it is obvious that a lot of long term features can be
>determined by looking at the root position and I am convinced that
>a "mixed" approach of endpoint evaluation and root node
>evaluation is the way to go for a couple of reasons:

I agree: endpoint evaluation is very important, but because of the
slowdown of the search you will not search that deep. I currently
search only 7-10 ply (using nullmove), usually 8 ply, which is not much.

Few ply ( <= 10 ) definitely gives you planning problems. Only 5 moves...
...which is tactically a lot, but positionally hopeless.

Although a deep search might sometimes let a program stumble onto a
plan, it will still be a short-range plan. I guess you need to scan
at the root and put a little knowledge in tables to make use of a plan.

>- no serious glitches thanks to correct endpoint evaluation
>- no hefty speed / depth penalty
>- no need to modify optimized time critical code all the time

>So, what does r.g.c.c. think of this? I think I recall that
>Bob doesn't like root node prescanning since the search may
>run into a completely unexpected direction if you search deeply
>but if the Deep Blue guys are putting a writable piece
>square table on their chip it is obvious that the opinions on
>that are different in the > 1MNPS camp as well.

Is that writable square table also handling the dynamic stuff such
as good/bad bishops?

>p.s.: Interesting side note. DarkThought has no real problems
> against most programs with a lot of endpoint evaluation knowledge
> and no problems against most of the prescan-based evaluators.
> But I cannot think of any "endpoint" evaluator that really
> scares me, only some prescan engines come to mind. My
> conclusion is that good prescanning + speed is lethal,
> while speed is not possible when a program is based on
> endpoint evaluation only.

We must not forget that you are usually running on a DEC Alpha, which
gives you less horizon effect, whereas the extended endpoint evaluation
I try to write doesn't give me sufficient tactical depth on current
hardware (Pentium 100). Also: you're always lacking knowledge.

A good example is the next position, which arose in the game
Diep - Schach, DCCC 1995
white Kg1,Qa4,Ra1,Rf1,Nc3,a3,d4,e3,f2,g2,h2
black Kg8,Qg6,Rb8,Re8,Bd7,a7,b6,c6,f7,g7,h7

White to move. Here DIEP ran into the following horizon effect, because of
a) lacking kingsafety knowledge
b) lacking depth

bxc6?,Bh3 g3,b5 and Diep missed that the pawn at c6 is also hanging.
After some desperate moves it

Diep only got 8 ply.
The current version gets 9 ply; it sees at 9 ply that bxc6 is losing,
and wants to play Kh1. However, then the next horizon effect appears,
which is also missed by most programs.

Kh1,cxb5 now after Qa7 most programs see that black wins;
the problem is

Kh1,cxb5 Nxb5,a6 Qa6,Ra8 Na7 and the knight gets captured,
and so does the game.

Programs that see that Qa7 is not the way miss that Nxb5 is not the way,
and vice versa.

After the game the Schach team said 'knowledge doesn't work'.
However:
Instead of bxc6 Schach expected Rfc1, which shows a serious
bug in kingsafety (so does bxc6).

So I guess you need both:
a) huge depth
b) a lot of endpoint knowledge

The question is of course: how cheap may that endpoint knowledge be?

Marcel van Kervinck

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to
Peter Gillgasch (gil...@ira.uka.de) wrote:
: So, what does r.g.c.c. think of this? I think I recall that

: Bob doesn't like root node prescanning since the search may
: run into a completely unexpected direction if you search deeply
: but if the Deep Blue guys are putting a writable piece
: square table on their chip it is obvious that the opinions on
: that are different in the > 1MNPS camp as well.

I use adaptive pc/sq tables.

My program uses pc/sq tables that depend on the pawn structure
only. It's really fast and makes long-term planning possible.
Instead of doing a root position pre-scan, I recompute the
pc/sq tables in the tree as the goals change. Sounds simpler
than it is when searching at roughly 1000 instructions per node.
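
The simplest version of that idea might look like the sketch below
(hypothetical names; it also naively rebuilds whenever the pawn hash
changes, which in a real program would need caching of several table
sets to avoid thrashing when the search alternates between structures):

#include <stdint.h>

typedef struct {
    uint64_t pawn_key;        /* Zobrist hash of the pawn structure only */
    /* ... rest of the position, details assumed ... */
} Position;

static uint64_t pst_pawn_key;            /* pawn hash the tables belong to */

extern void rebuild_pawn_dependent_pst(const Position *pos);  /* slow part */
extern int  material_balance(const Position *pos);            /* assumed   */
extern int  pst_sum(const Position *pos);                     /* assumed   */

int evaluate(const Position *pos)
{
    /* the tables depend only on the pawn structure, so a stale pawn hash
     * key is the signal that the in-tree "goals" have changed */
    if (pos->pawn_key != pst_pawn_key) {
        rebuild_pawn_dependent_pst(pos);     /* runs rarely */
        pst_pawn_key = pos->pawn_key;
    }
    return material_balance(pos) + pst_sum(pos);   /* cheap per-node part */
}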

To return to the subject of this thread: a simple extension based
on this mechanism enables Rookie to evaluate positions like the one
below as a draw without search.

- - - K - - - -
- - - - - - - -
- - - - - - - -
* - - * - - * -
o - - o - - o -
o - - - - - - -
- - - k - - - -

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| bue...@urc.tue.nl

Robert Hyatt

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to
In article <4fb6ur$5...@nz12.rz.uni-karlsruhe.de>,
Peter Gillgasch <gil...@ira.uka.de> wrote:
-->Thanks for joining this discussion Bob!
-->
-->In article <4f6nvg$t...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->|> In article <4f6lbd$p...@nz12.rz.uni-karlsruhe.de>,
-->|> Peter Gillgasch <gil...@ira.uka.de> wrote:
-->|> -->In article <1996Feb4.144843.8720@ais>, br...@ais.com writes:
-->|> -->|> One thing that we used to do was to have a special routine that ran
-->|> -->|> before the main search that identified strong and weak points in each
-->|> -->|> side's respective position -- weak squares, passed Pawns, weak King
-->|> -->|> protection, etc, and tried to grow trees of moves to protect our own
-->|> -->|> weak squares or attack the opponent's weak squares. These would be
-->|> -->|> just sequences of moves by our own or our opponent's pieces and Pawns
-->|> -->|> to reach the targeted squares. A table was built for the value of
-->|> -->|> reaching each of these goals for each type of piece and for reaching
-->|> -->|> the middle points along the paths (the values diminishing the further
-->|> -->|> they were from the goal). Then during the search the program would
-->|> -->|> get bonus points for reaching some of the intermediate points in the
-->|> -->|> plan.
-->|> -->|>
-->|> -->|> This seemed most useful in the end-game when actually reaching the
-->|> -->|> culmination of a plan might be beyond the search depth, and where
-->|> -->|> tactics did not dominate as much as they do in the middle game; for
-->|> -->|> example, Duchess was quite capable of finding long King manoeuvers
-->|> -->|> that might take the King far away from simpleminded heuristics such as
-->|> -->|> "centralize the King in the endgame" and that were much too deep to be
-->|> -->|> found by a direct search. It wasn't perfect; it could not take into
-->|> -->|> account the changes in strategy that might be dictated by a radically
-->|> -->|> different structure encountered deep in the search, but it seemed to
-->|> -->|> be better than nothing.

-->|> -->
-->|> -->I am really glad that somebody touches this subject. As I
-->|> -->started the work on the program that later became DarkThought I
-->|> -->had basically no evaluation for the pieces. After toying
-->|> -->with horrible slow code that did endpoint evaluations for
-->|> -->the pieces I got the idea (as many did before, but I swear
-->|> -->I didn't know that and hence I was quite proud of it) to
-->|> -->analyze the root position and to modify the piece/square table
-->|> -->relative to pawn structure and king locations. The piece/square
-->|> -->table scores were also used for move ordering to cut down
-->|> -->the tree size (this helped a lot since it is basically using
-->|> -->the evaluation to order the moves at close to no cost).

-->|> -->
-->|> -->Of course the program played *much* better than before but
-->|> -->it had serious trouble when the pawn structure was highly
-->|> -->flexible and if there was a way to lock its bishops with
-->|> -->own pawns it was guaranteed to like exactly this line ;)

-->|> -->
-->|> -->As you said, it is "better than nothing".
-->|> -->
-->|> -->Later the three of us modified the oracle code (to use Berliner's
-->|> -->term) and we where quite sucessfull with it - DarkThought started
-->|> -->to beat commercial PC programs. But it was quite hard to make
-->|> -->further progress, so a lot of knowledge was moved to the
-->|> -->endpoint evaluation up to the point that there was basically
-->|> -->no oracle code left. This improved playing strength but it
-->|> -->was always my impression that we suffered in the "tactical"
-->|> -->aspect a bit since the search was no longer fast as hell and
-->|> -->I was never convinced that going this way was the best decision.
-->|>
-->|> A quibble with the above. When Crafty runs on it's "target", it crushes
-->|> the commercial programs too. However, when it runs on a PC, it's a different
-->|> matter altogether. When you are talking about beating commercial programs,
-->|> I assume you are still talking about running on an Alpha, which is *not*
-->|> a reasonable comparison point, any more than my C90 tests last year were,
-->|> even though I "handicapped" cray blitz a significant amount.
-->
-->Probably it would have been helpful if I had noted the following:
-->
-->At the point in time I was refering to DarkThought was coded in a PeeCee'ish
-->way. It was running on a 175 mhz Alpha and was about 1.3 times faster than on
-->my PowerMacintosh @80 mhz. The code didn't use any stuff that is using the
-->64 bit registers on the Alpha and the port of DarkThought from Pascal to C
-->was my first major C program and I later found a lot of things that could be
-->improved on the source code level. Live and learn...
-->
-->In other words I think that we where at this time on that specific machine not
-->much faster than a good PeeCee (at that time).
-->
-->The search speed of that version of DarkThought v1 was somewhere between 20 K
-->(worst case middle game speed) to 85 K (pawn endings). When I compare that with the
-->speed of Genius or Fritz it is comparable or even worse (assuming that they
-->do not cheat with their nps figures) and a lot of knowledge that is in the
-->commercial programs was simply missing.
-->
-->Later on I found a cute technique that allowed us to do mobility evaluation
-->at the endpoints. Due to other optimizations the program was actually a bit
-->faster with this endpoint evaluation than the pure oracle based version.
-->
-->Later on a lot of knowledge was added, the technique was modified
-->in strive for "exactness" and the search speed was basically lost.
-->The impact on the speed of the search was severe. We often dropped below
-->20 K on a 275 mhz Alpha and the max speed was 1/3rd of the original.
-->The program played better chess, but the conclusion was clear: I needed
-->a better technique that allowed fast endpoint evaluation and a lot of
-->special case code had to be moved to places other than the officer evaluation.
-->
-->That was the moment I decided to go the bitboard road for *everything*.
-->
-->|> As a result,
-->|> it's hard to attribute "x" to programming algorithm and "y" to faster hardware.
-->|> That's why you don't find me posting games where Crafty beats genius, although
-->|> it does it regularly. I just don't think it will win as many when running on
-->|> a PC as it will when running on faster hardware. The real test would be to see
-->|> how you do against a commercial program when you both run on the *same* class
-->|> of machine.
-->
-->Hmm. Such a test is *not* possible since the commercial programs are
-->partly hand-tuned in x86 assembly which makes a tremendous difference
-->(you know that since Crafty benefits from a couple of lines of x86
-->assembly pretty well).
-->
-->Later on the code evolved into a direction that made such a comparision
-->nonsensical: The basic search speed dropped on the PowerPC from around
-->20-25 K to 5 K since we used 64 bit stuff in the evaluator and in parts of
-->the move generator.
-->
-->DarkThought v2 is different since it runs very efficiently on 64 bit
-->machines and 32 bit big endian machines. On the PowerPC (32 bit architecture)
-->I even included some handcoded assembly stuff. I hope to find the time to make
-->the 32 bit code "little endian safe" to be able to run it through a commercial
-->PeeCee compiler, but the x86 with its shortage of 32 bit registers is
-->probably always handicapped beyond repair.
-->
-->In other words I doubt it that bitboards make sense on a PeeCee when
-->you want to compete with commercial 0x88 engines on that platform.

I used to agree, but Crafty's gotten pretty fast on the PeeCee platform
even though I'm not doing anything specific for this. It might be that
the bitboards will give us enough "good stuff" (say evaluation tricks)
that it'll pay off. I just noted that Crafty generates moves at 1.5M
per sec on a P133, which was surprising. Its move generator is *not*
a drawback. And remember that I don't do the incremental makemove
bitboard attack update stuff either; I generate the attack bitmaps
right in movgen as they are needed. I'm no longer sure that 0x88 is any
faster...
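
For readers who haven't seen the non-incremental approach, it amounts to
something like the sketch below (an illustration of the idea only, not
Crafty's actual code; pop_lsb, knight_attacks and the Move packing are
assumptions):

#include <stdint.h>

typedef unsigned int Move;

extern const uint64_t knight_attacks[64];   /* precomputed once at startup */
extern int pop_lsb(uint64_t *bb);           /* return and clear lowest set bit */

/* Generate knight moves by computing the attack bitmaps right here in the
 * move generator, instead of keeping per-piece attack boards incrementally
 * updated in make_move/unmake_move. */
int generate_knight_moves(uint64_t knights, uint64_t own_pieces, Move *out)
{
    int n = 0;
    while (knights) {
        int from = pop_lsb(&knights);
        uint64_t targets = knight_attacks[from] & ~own_pieces;
        while (targets) {
            int to = pop_lsb(&targets);
            out[n++] = (Move)(from | (to << 6));   /* simple from/to packing */
        }
    }
    return n;
}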

-->


-->|> -->I am currently finishing a complete redesign of DarkThought

-->|> -->that takes the need of a fast endpoint officer evaluation
-->|> -->into account (using bitboards with *very* small lookup tables)
-->|> -->without the need to resort to major kludges in the evaluation code.
-->|> -->I learned a lot during the development of DarkThought v1 and
-->|> -->v2 is about 4 times faster.


-->|> -->
-->|> -->But it is obvious that a lot of long term features can be

-->|> -->determined by looking at the root position and I am convinced that
-->|> -->a "mixed" approach of endpoint evaluation and root node
-->|> -->evaluation is the way to go for a couple of reasons:


-->|> -->
-->|> -->- no serious glitches thanks to correct endpoint evaluation

-->|> -->- no hefty speed / depth penalty
-->|> -->- no need to modify optimized time critical code all the time


-->|> -->
-->|> -->So, what does r.g.c.c. think of this? I think I recall that

-->|> -->Bob doesn't like root node prescanning since the search may
-->|> -->run into a completely unexpected direction if you search deeply
-->|> -->but if the Deep Blue guys are putting a writable piece
-->|> -->square table on their chip it is obvious that the opinions on
-->|> -->that are different in the > 1MNPS camp as well.
-->|>
-->|> Not necessarily. It simply depends on (a) what you put there and (b) what
-->|> your hardware is capable of evaluating. Better in (a) than nowhere, but,
-->|> as Berliner mentioned, the important things migrate from piece/square tables
-->|> to dynamic evaluation computations as a program gets better. Not the other
-->|> way around...
-->
-->I agree partly. If I interpret your stament correctly you say that you do not
-->reject the prescanning idea in principle. In the old code we have a few things
-->that can be moved to the pst if you invest some effort in the prescan code.
-->
-->Furthermore you have a tremendous amount of information at the root node
-->(in principle you have that info at interior nodes as well, but it is to
-->slow to use that there). And *strategy* and long term plans cannot be
-->evaluated under the time constraints you have during the search.
-->

I agree. If what you scan is truly static features, then it has to be
safe. As I said, I do board centralization, since except in abstract
geometry the center is always in the same place... :)

If you pre-scan things that are more fluid, then, maybe as Marcel pointed
out, you can re-scan when things change enough to set off a warning that
pre-computed stuff is becoming untrustworthy.

However, as a long-time chess player, I tend to think more in terms of
dynamic analysis anyway. That is, I don't give much consideration to what
I can do at the root, because I don't play like that. However, I wouldn't
presume to claim that it can't work...

-->[ snip ]
-->


-->|> -->p.s.: Interesting side note. DarkThought has no real problems

-->|> --> against most programs with a lot of endpoint evaluation knowledge
-->|> --> and no problems against most of the prescan-based evaluators.
-->|>
-->|> hmm. don't know how to evaluate this. The obvious conclusion is not supported
-->|> by evidence...
-->
-->I spoke from personal experience at test matches and tournament games.
-->Basically I wanted to express a personal opinion and I certainly wanted
-->to claim anything that can be considered as "provable"
-->
-->|> --> But I cannot think of any "endpoint" evaluator that really
-->|> --> scares me, only some prescan engines come to mind. My
-->|> --> conclusion is that good prescanning + speed is lethal,
-->|> --> while speed is not possible when a program is based on
-->|> --> endpoint evaluation only.
-->
-->[ snip ]
-->
-->|> The PS is questionable. Crafty's fast, and getting faster as I clean up
-->|> and optimize. Also, I ran an older version on a Cray T90 a few months back,
-->|> using a cobbled-up parallel algorithm borrowed from Cray Blitz code and saw
-->|> around 5M nodes per second, which is *not* slow. And it does *full* endpoint
-->|> evaluation, mobility and all...
-->
-->Yes, Crafty is pretty fast when you consider that it is not build as a "eval()
-->does not cost anything" program (which is the right decision IMHO). We haven't
-->tested against Crafty yet since we stll have to do a couple of things to get
-->v2 tournament ready and comparing with v1 won't provide us any useful information
-->since this program is put to rest forever. But I have to admit that since
-->
-->(a) the knowledge (features that it detects) is not "better" in principle than
--> ours.
-->(b) our search speed is not inferior to Crafty's
-->(c) it lacks some techniques I "borrowed" from Hitech and improved them further
--> (that is we don't have the problem of mapping things onto a limited amount
--> of gates).
-->
-->it does not scare me on equivalent machines (your T90 *does* scare me though,
-->but we are working on that ;) Note that (a) does *not* include the weights
-->you have put on the features, we will need some time to get them tuned. Also
-->note that "it scares me / scares me not" is no scientific expression just
-->a gut feeling so please *no flames, no claims*.

no problem. :)


-->
-->But if you add some "strategic planning" facility to Crafty (or to DarkThought)
-->it will be slow. A prescan program doesn't have that problem but it will
-->have other problems which are on the other hand probably offset by the
-->increased search depth since the concepts in todays end point evaluators
-->are pretty simple.
-->
-->BTW, does anybody know what the DeepBlue folks to with their PST? Is it
-->just a catch for all to offset defects in the evaluation design/implementation
-->that will certainly show up sooner or later?
-->
-->|> I don't like dynamic info in piece/square tables because it's simply
-->|> wrong. However, centralization, and other "static" features works very
-->|> well and we've *always* done these, and still do in Crafty. However, no
-->|> king-side attack stuff, no king tropism, no weak squares, because that's
-->|> too dynamic and it's wrong. A program written with (say) king-safety stuff
-->|> in piece/square tables will attack a king that's scampered away, or else
-->|> won't bother "scampering away" because it can't escape the scoring penalties
-->|> since they are not attached to the king.
-->
-->Yes, centralization works (but I like it to be adjustable) and the other
-->features you mentioned are not stable enough over a search in many cases.
-->
-->|> This has been done for 25 years, but *not* because it's better, rather,
-->|> because it's faster. if it comes down to a decision based on speed, then
-->|> this might be a good deal. however, all else being equal, endpoint evaluation
-->|> simply has to be better, obviously it can't be any worse. If endpoint evaluation
-->|> costs too much, I can see where you'd move slow things to piece/square tables and
-->|> the speed might be well worth it.
-->
-->Yes, if it costs to much it should be moved if it doesn't screw
-->up the search if you move it. When I look at Crafty the first thing
-->I would move to the PST is the fianchetto thing for example. But this
-->is not the real topic I am interested in:

But, you notice, that's "dynamic". I.e., if Crafty can trade pieces, the
fianchetto thing can go away. If it can reach an endgame *while searching*
it goes away completely. That's the thing I don't like about piece/square
tables... there's a boundary condition that fuzzes things badly as you
approach that boundary....

-->
-->I think that there are a lot of things that are in no search time
-->evaluator (simply because it relies on too many unlikely
-->things that all have to be true to make the feature present)
-->that can be determined by prescanning. Maybe this can add a
-->new quality to a program that has a *real* endpoint evaluation
-->already.


-->
-->-- Peter
-->

-->------------------------------------------------------------
-->Peter W. Gillgasch
-->Klosterweg 28/I-113 email gil...@ira.uka.de
-->76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

Peter Gillgasch

unread,
Feb 7, 1996, 3:00:00 AM2/7/96
to
Thanks for joining this discussion Bob!

Probably it would have been helpful if I had noted the following:

At the point in time I was referring to, DarkThought was coded in a
PeeCee'ish way. It was running on a 175 MHz Alpha and was about 1.3 times
faster than on my PowerMacintosh @ 80 MHz. The code didn't use any stuff
that uses the 64 bit registers on the Alpha, and the port of DarkThought
from Pascal to C was my first major C program; I later found a lot of
things that could be improved on the source code level. Live and learn...

In other words I think that we were at this time, on that specific
machine, not much faster than a good PeeCee (at that time).

The search speed of that version of DarkThought v1 was somewhere between
20 K (worst case middle game speed) and 85 K (pawn endings). When I
compare that with the speed of Genius or Fritz it is comparable or even
worse (assuming that they do not cheat with their nps figures), and a lot
of knowledge that is in the commercial programs was simply missing.

Later on I found a cute technique that allowed us to do mobility
evaluation at the endpoints. Due to other optimizations the program was
actually a bit faster with this endpoint evaluation than the pure oracle
based version.

Later on a lot of knowledge was added, the technique was modified in
striving for "exactness", and the search speed was basically lost. The
impact on the speed of the search was severe. We often dropped below
20 K on a 275 MHz Alpha and the max speed was 1/3rd of the original.
The program played better chess, but the conclusion was clear: I needed
a better technique that allowed fast endpoint evaluation, and a lot of
special case code had to be moved to places other than the officer
evaluation.

That was the moment I decided to go the bitboard road for *everything*.

|> As a result,
|> it's hard to attribute "x" to programming algorithm and "y" to faster hardware.
|> That's why you don't find me posting games where Crafty beats genius, although
|> it does it regularly. I just don't think it will win as many when running on
|> a PC as it will when running on faster hardware. The real test would be to see
|> how you do against a commercial program when you both run on the *same* class
|> of machine.

Hmm. Such a test is *not* possible since the commercial programs are
partly hand-tuned in x86 assembly, which makes a tremendous difference
(you know that since Crafty benefits from a couple of lines of x86
assembly pretty well).

Later on the code evolved in a direction that made such a comparison
nonsensical: the basic search speed dropped on the PowerPC from around
20-25 K to 5 K since we used 64 bit stuff in the evaluator and in parts
of the move generator.

DarkThought v2 is different since it runs very efficiently on 64 bit
machines and 32 bit big endian machines. On the PowerPC (32 bit
architecture) I even included some handcoded assembly stuff. I hope to
find the time to make the 32 bit code "little endian safe" to be able to
run it through a commercial PeeCee compiler, but the x86 with its
shortage of 32 bit registers is probably always handicapped beyond
repair.

In other words I doubt that bitboards make sense on a PeeCee when you
want to compete with commercial 0x88 engines on that platform.

|> -->I am currently finishing a complete redesign of DarkThought

I agree partly. If I interpret your statement correctly, you say that you
do not reject the prescanning idea in principle. In the old code we have
a few things that can be moved to the pst if you invest some effort in
the prescan code.

Furthermore you have a tremendous amount of information at the root node
(in principle you have that info at interior nodes as well, but it is too
slow to use it there). And *strategy* and long-term plans cannot be
evaluated under the time constraints you have during the search.

[ snip ]

|> -->p.s.: Interesting side note. DarkThought has no real problems
|> --> against most programs with a lot of endpoint evaluation knowledge
|> --> and no problems against most of the prescan-based evaluators.
|>
|> hmm. don't know how to evaluate this. The obvious conclusion is not supported
|> by evidence...

I spoke from personal experience at test matches and tournament games.
Basically I wanted to express a personal opinion, and I certainly didn't
want to claim anything that can be considered "provable".

|> --> But I cannot think of any "endpoint" evaluator that really


|> --> scares me, only some prescan engines come to mind. My
|> --> conclusion is that good prescanning + speed is lethal,
|> --> while speed is not possible when a program is based on
|> --> endpoint evaluation only.

[ snip ]

|> The PS is questionable. Crafty's fast, and getting faster as I clean up
|> and optimize. Also, I ran an older version on a Cray T90 a few months back,
|> using a cobbled-up parallel algorithm borrowed from Cray Blitz code and saw
|> around 5M nodes per second, which is *not* slow. And it does *full* endpoint
|> evaluation, mobility and all...

Yes, Crafty is pretty fast when you consider that it is not built as an
"eval() does not cost anything" program (which is the right decision
IMHO). We haven't tested against Crafty yet since we still have to do a
couple of things to get v2 tournament ready, and comparing with v1 won't
provide us any useful information since this program is put to rest
forever. But I have to admit that since

(a) the knowledge (features that it detects) is not "better" in principle
    than ours,
(b) our search speed is not inferior to Crafty's,
(c) it lacks some techniques I "borrowed" from Hitech and improved further
    (that is, we don't have the problem of mapping things onto a limited
    amount of gates),

it does not scare me on equivalent machines (your T90 *does* scare me
though, but we are working on that ;) Note that (a) does *not* include
the weights you have put on the features; we will need some time to get
them tuned. Also note that "it scares me / scares me not" is no
scientific expression, just a gut feeling, so please *no flames, no
claims*.

But if you add some "strategic planning" facility to Crafty (or to
DarkThought) it will be slow. A prescan program doesn't have that
problem, but it will have other problems, which on the other hand are
probably offset by the increased search depth, since the concepts in
today's endpoint evaluators are pretty simple.

BTW, does anybody know what the Deep Blue folks do with their PST? Is it
just a catch-all to offset defects in the evaluation design/implementation
that will certainly show up sooner or later?

|> I don't like dynamic info in piece/square tables because it's simply


|> wrong. However, centralization, and other "static" features works very
|> well and we've *always* done these, and still do in Crafty. However, no
|> king-side attack stuff, no king tropism, no weak squares, because that's
|> too dynamic and it's wrong. A program written with (say) king-safety stuff
|> in piece/square tables will attack a king that's scampered away, or else
|> won't bother "scampering away" because it can't escape the scoring penalties
|> since they are not attached to the king.

Yes, centralization works (but I like it to be adjustable) and the other
features you mentioned are not stable enough over a search in many cases.

|> This has been done for 25 years, but *not* because it's better, rather,

|> because it's faster. if it comes down to a decision based on speed, then
|> this might be a good deal. however, all else being equal, endpoint evaluation
|> simply has to be better, obviously it can't be any worse. If endpoint evaluation
|> costs too much, I can see where you'd move slow things to piece/square tables and
|> the speed might be well worth it.

Yes, if it costs too much it should be moved, provided it doesn't screw
up the search if you move it. When I look at Crafty, the first thing I
would move to the PST is the fianchetto thing, for example. But this is
not the real topic I am interested in:

I think that there are a lot of things that are in no search-time
evaluator (simply because a feature relies on too many unlikely things
that all have to be true to make it present) that can be determined by
prescanning. Maybe this can add a new quality to a program that has a
*real* endpoint evaluation already.

-- Peter

br...@ais.com

unread,
Feb 9, 1996, 3:00:00 AM2/9/96
to
In article <4f6nvg$t...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
> I don't like dynamic info in piece/square tables because it's simply
> wrong. However, centralization, and other "static" features works very
> well and we've *always* done these, and still do in Crafty. However, no
> king-side attack stuff, no king tropism, no weak squares, because that's
> too dynamic and it's wrong. A program written with (say) king-safety stuff
> in piece/square tables will attack a king that's scampered away, or else
> won't bother "scampering away" because it can't escape the scoring
> penalties since they are not attached to the king.

As I recall we mostly used this for things that were judged to be fairly
"static" -- that is, Pawns that were fixed on their squares and couldn't
be moved (either because they were blocked by other Pawns or because
they were "permanently backward") and so forth; there was a King
protection term in it but it was not very strong compared to the other
King protection terms in the endpoint evaluation. Also, as trees get
deeper you can encounter more problems with this -- when we were doing
Duchess, we didn't have the hardware speed to get more than something
like a 5 or 6 ply exhaustive search in the middle game, so we didn't
have as much of an opportunity for the King to "scamper away."

I should mention that we definitely did have endpoint detection and
evaluation of such things as actually occupying a "hole" in the
opponent's Pawn structure, whether the "hole" existed at the beginning
of the search or not. That's one of the advantages of bit boards;
you can do this kind of evaluation -- and Pawn structure evaluation
generally -- _much_ more quickly (and easily, I think) than you can
with arrays. (For a general-purpose computer, the Alpha is a nearly
perfect Chess machine :-). The main purpose of the planner array
was to try to guide the program towards goals that were beyond what
could have been seen in the actual search.
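
As a rough illustration of why bit boards make this cheap (my own
sketch, not Duchess code; square numbering and the "Black's half" mask
are assumptions), detecting the squares that no black Pawn can ever
defend takes only a handful of 64-bit operations:

#include <stdint.h>

#define FILE_A 0x0101010101010101ULL
#define FILE_H 0x8080808080808080ULL

/* a1 = bit 0, h8 = bit 63.  All squares a black pawn attacks now or could
 * attack after advancing: fill each pawn's file downward, then shift the
 * fill one square diagonally down-left and down-right. */
static uint64_t black_pawn_attack_span(uint64_t black_pawns)
{
    uint64_t fill = black_pawns;
    fill |= fill >> 8;
    fill |= fill >> 16;
    fill |= fill >> 32;
    return ((fill & ~FILE_A) >> 9) | ((fill & ~FILE_H) >> 7);
}

/* "holes": squares in Black's half of the board that can never be
 * defended by a black pawn, however the pawns advance. */
static uint64_t black_holes(uint64_t black_pawns)
{
    const uint64_t black_half = 0xFFFFFFFF00000000ULL;   /* ranks 5-8 */
    return black_half & ~black_pawn_attack_span(black_pawns);
}

A piece (typically a Knight) standing on one of the resulting squares is
then exactly the "occupied hole" feature described above, whether or not
the hole existed at the root.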

> This has been done for 25 years, but *not* because it's better, rather,
> because it's faster. if it comes down to a decision based on speed, then
> this might be a good deal. however, all else being equal, endpoint
> evaluation simply has to be better, obviously it can't be any worse. If
> endpoint evaluation costs too much, I can see where you'd move slow
> things to piece/square tables and the speed might be well worth it.

Agreed. But doing this correctly in the endpoint evaluation can be
very expensive, unless you have hardware assists (which can become
expensive in another sense :-).

Bruce C. Wright

br...@ais.com

unread,
Feb 9, 1996, 3:00:00 AM2/9/96
to

I can't speak for Bob's program, but back in the 70's we did do this in
Duchess. It did usually speed things up considerably, but at the cost of
sometimes having to redo the iteration. I think we wound up using it,
but setting it to something like +/- a Knight so that we didn't end up
cutting off the root node so much.
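
Assuming the technique in question is the usual aspiration window around
the previous iteration's score, the idea in outline is the sketch below
(hypothetical names; the window width of roughly a Knight matches the
description above):

#define INF    32000
#define WINDOW 300               /* roughly a Knight, in centipawns */

typedef struct Position Position;                      /* opaque; assumed */
extern int alpha_beta(Position *pos, int depth, int alpha, int beta);

/* Iterative deepening with an aspiration window around the previous
 * iteration's score.  A fail high or fail low forces the iteration to be
 * redone with the offending bound opened up (a production version would
 * keep widening if the re-search failed on the other bound). */
int iterate(Position *pos, int max_depth)
{
    int score = alpha_beta(pos, 1, -INF, +INF);

    for (int depth = 2; depth <= max_depth; depth++) {
        int alpha = score - WINDOW, beta = score + WINDOW;

        score = alpha_beta(pos, depth, alpha, beta);
        if (score <= alpha)                    /* fail low: redo the iteration */
            score = alpha_beta(pos, depth, -INF, beta);
        else if (score >= beta)                /* fail high: redo the iteration */
            score = alpha_beta(pos, depth, alpha, +INF);
    }
    return score;
}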

You do have to be careful how you integrate this with the transposition
table; as I recall, one of the more obscure bugs with the table was
caused by adding this feature to the program without properly
anticipating its effect on the transposition table :-).

Bruce C. Wright
