Repetition detection w/hash tables

Bruce Moreland

Aug 13, 1993, 4:18:13 PM
My apologies to readers who don't care about computers, but
I don't know where else to ask this question.

I was making a "normal" chess program a couple of years ago
(alpha-beta, iterative deepening, an aspiration window, hash
table, etc.) and came across an obvious problem that caused
me a lot of headache. I've never seen any reference to this
problem anywhere, and it's bugged me ever since. I wonder
if someone with experience building chess programs might
tell me what I'm missing?

I ran some of the "Peasant" pawn endings from "How Computers
Play Chess" through my program, and came across a bug. The
search would fail high when the program discovered it could
win a pawn, so it would modify its aspiration window and try
again. This time it would fail low. The program didn't expect
this case, and treated it as a normal fail-low condition, so it
would set the aspiration window up for that and try yet again,
resulting in another fail-high, and so on.

The problem seems to have been caused by repetition detection.
When the program found a repetition it would score it as zero,
so a zero would get stored in the hash entry for the position,
and this zero would be returned to the previous ply and might
affect the value of, and consequently the hash entries for,
backed-up positions as well.
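The mechanism can be reduced to a toy negamax (a minimal sketch with hypothetical names, not anyone's actual program: the "board" is just a 64-bit key and the "moves" are XOR deltas, which is enough to show how the draw score poisons the table):

```c
#include <stdint.h>

#define DRAW_SCORE 0
#define TT_SIZE 4096

typedef struct { uint64_t key; int score; int depth; } TTEntry;
static TTEntry tt[TT_SIZE];

static uint64_t history[256];   /* position keys along the current line */
static int hply;

static int is_repetition(uint64_t key) {
    for (int i = 0; i < hply; i++)
        if (history[i] == key) return 1;
    return 0;
}

static int evaluate(uint64_t key) { return (int)(key % 7) - 3; }  /* stub eval */

int search(uint64_t key, int depth) {
    if (is_repetition(key)) {
        /* the bug: a path-dependent draw lands in a path-independent table */
        TTEntry *r = &tt[key % TT_SIZE];
        r->key = key; r->score = DRAW_SCORE; r->depth = depth;
        return DRAW_SCORE;
    }
    TTEntry *e = &tt[key % TT_SIZE];
    if (e->key == key && e->depth >= depth)
        return e->score;        /* reused on EVERY path, right or wrong */
    if (depth == 0)
        return evaluate(key);
    history[hply++] = key;
    int best = -10000;
    for (int m = 1; m <= 2; m++) {   /* two fake "moves" per node */
        int s = -search(key ^ (0x9E3779B97F4A7C15ULL * (uint64_t)m), depth - 1);
        if (s > best) best = s;
    }
    hply--;
    e->key = key; e->score = best; e->depth = depth;
    return best;
}
```

Searching a position that already lies on the current line stores the draw score; a later search of the same position from a fresh line then gets that stale draw straight back from the table without searching at all.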

Other move orders would use the hash table to produce cutoffs
based upon these hash entries. But as in many cases you can
reach the same position in the same number of moves WITHOUT
repeating a position, these cutoffs were wrong. (Example: in
the initial position, Ng1-f3-g5-f3 contains a repetition and
would be scored as zero, but Ng1-h3-g5-f3 does not, and should
not use the zero value for the previous position.)
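The two knight tours can be checked concretely with a toy Zobrist scheme (a sketch: only the knight's square is keyed, since every other piece stands still in both lines; squares are numbered rank*8+file):

```c
#include <stdint.h>

/* splitmix64: fixed pseudo-random keys, one per square */
static uint64_t square_key(uint64_t sq) {
    uint64_t x = sq + 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

/* position key = key of the knight's square (nothing else moves) */
static uint64_t pos_key(int knight_sq) { return square_key((uint64_t)knight_sq); }

/* walk a tour; report whether any position recurs, and the final key */
static int tour_repeats(const int *sq, int n, uint64_t *final_key) {
    uint64_t seen[16];
    int repeat = 0;
    for (int i = 0; i < n; i++) {
        seen[i] = pos_key(sq[i]);
        for (int j = 0; j < i; j++)
            if (seen[j] == seen[i]) repeat = 1;
    }
    *final_key = seen[n - 1];
    return repeat;
}
```

Both tours end with identical keys and thus index the same hash entry, yet only one of them passed through a repeated position — exactly the information the entry cannot hold.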

The program got completely confused when it had to cope with
the same search returning different results, which might
happen if a hash entry is overwritten.

My fix was to detect repetitions in the first five plies of
the search only, and to not let the hash table cause cut-offs
during the first five plies. This worked, but isn't great
because the program missed perpetual checks that may occur
later in the search, etc.

What am I doing wrong?

bruce, (206) 936-3892

Steven J Edwards

Aug 14, 1993, 12:21:03 AM
I think that you have two somewhat different difficulties occurring,
as described in your article.

The first problem with fail-high and fail-low retries can be easily fixed.
Here's how my program Spector does it (approximately):

score = Search(alpha, beta);
if (FailedLow() || FailedHigh())
{
    if (FailedLow()) alpha = -INFINITY; else beta = +INFINITY;
    score = Search(alpha, beta);
}
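One runnable shape for such a driver (a sketch, not Spector's actual code; `search` here is a trivial fail-hard stub so the loop can be exercised):

```c
#define INF 30000
#define WINDOW 50

static int true_score;                     /* stub: the value the "search" converges to */
static int search(int alpha, int beta) {   /* fail-hard stand-in for a real search */
    if (true_score <= alpha) return alpha; /* fail low  */
    if (true_score >= beta)  return beta;  /* fail high */
    return true_score;
}

/* aspiration driver: a failed bound is opened fully, so a fail-high
   followed by a fail-low cannot ping-pong forever */
int aspirate(int guess) {
    int alpha = guess - WINDOW, beta = guess + WINDOW;
    for (;;) {
        int score = search(alpha, beta);
        if (score <= alpha)     alpha = -INF;  /* fail low:  drop the floor   */
        else if (score >= beta) beta  =  INF;  /* fail high: raise the ceiling */
        else return score;
    }
}
```

Because each failure widens its own bound all the way rather than re-centering the window, the loop runs at most three searches before the window is full-width and the score must land inside it.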

The second problem is with the general detection of repeated positions
hidden by transposition table look-up. Unfortunately, there is no good
answer to this. It has been discussed extensively in this newsgroup, and
if I still had some of the articles handy, I'd post them or send them to

Who knows? Perhaps you'll figure out some handy way to solve this problem!
Keep us informed.

-- Steven

Robert Hyatt

Aug 16, 1993, 12:55:59 PM

There are two distinct problems you mentioned.

(1) the transposition table has a dire effect on searching when draw scores
appear. As you mentioned, two *positions* are not necessarily the same when
one of them is drawn due to repetition, and the other is reached by a different
sequence of moves and is *not* a repetition. There is no solution for this
problem except to store the move path (or hash it in) so that the *path*
is encapsulated in the *position*, as the draw-by-repetition rule demands.
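One way to read "hash it in" (a sketch under the assumption that only the positions since the last irreversible move matter for repetition; the function name and mixing scheme are illustrative, not Cray-Blitz's):

```c
#include <stdint.h>

static uint64_t rotl(uint64_t x, int r) { return (x << r) | (x >> (64 - r)); }

/* fold the keys of the positions reached since the last irreversible move
   (capture or pawn push) into the table key, so two lines that reach the
   same position with different repetition-relevant histories get distinct
   entries; the per-ply rotation keeps the fold order-sensitive */
uint64_t path_aware_key(uint64_t pos_key, const uint64_t *path, int n) {
    uint64_t k = pos_key;
    for (int i = 0; i < n; i++)
        k ^= rotl(path[i], (i % 63) + 1);
    return k;
}
```

The obvious cost is the flip side of the same coin: genuinely transposed lines that reach an identical, non-drawn position no longer share an entry, so the table does less work, which is presumably part of why simply ignoring the problem is defensible.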

Another problem under this heading is that when you look up a position in the
table and don't find a draw score, you assume a real score and continue.
However, the real score might be wrong, because the position in the
table was reached by a path that, when grafted onto the current search
path, might contain a draw by repetition.

In Cray-Blitz, we simply ignore the problems as there doesn't seem to be a
reasonable work-around without making a table entry much too long (how much
path data do you store? We see search extensions well beyond 30 plies, and
have reached 50 in endgames. Should we allow 50 words for the complete path?)

(2) failing hi and then failing low is not necessarily a bug. It often
happens in tactical positions where extensions occur. An example. You
do a 10 ply search and when searching one of the branches, find that it
fails high, too high in fact, and alpha/beta throws this away. This is
entered into the search table with (failhi, 10 ply search, search_bound).

Now you start the next search after making your move and when you try the
first move, you get the "fail hi" from the table. You then indicate "fail
hi" to the operator, relax the upper bound and search again. Now, the
entry from the table won't fail hi again since the current upper bound is
now too high for this. Since we don't search as deeply as we did when
we stored the fail-hi, we now don't see "why" we failed hi as we can't
search deeply enough to see the needed extensions. We see a couple of
these fail-hi fail-lo conditions per game with Cray-Blitz. It is annoying
and can be nearly eliminated by purging the transposition table between
iterations so that these deep fail-hi scores won't be carried forward to
a shallow search. It turns out that the fail-hi moves are, in fact, quite
good moves, just that the program can't see deeply enough to see why. We
have never seen a fail-hi on a bummer of a move. The only problem with
this is that after a fail-hi, and then a fail-low, what score do you give
the move? If you use the upper bound when it failed hi, it might be possible
for another move to fail hi over this move and this new move will be
chosen. It might not be as good as the "mystery" move. We have seen
this on occasion as well.

The moral: If you want sane behavior, can the transposition table. We
have a switch in C-B so that we can eliminate it for testing so that we
can weed out all the funny things it does during some of the search
extension testing we do. It's sort of like women... can't live with
them (it adds funny results), can't live without them (the search would
be too slow).


!Robert Hyatt Computer and Information Sciences !
! University of Alabama at Birmingham !
