
hash table fail high/fail low problem


Robert Hyatt
Nov 24, 1996

Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:
: brucemo <bru...@nwlink.com> wrote:
: >
: > Chris Whittington wrote:
: >
: > > Interesting, I had assumed I had it due to some deep bug in
: > > the pruning. Something covered in one pruning function but missed
: > > in another.
: > >
: > > Maybe its just an undesired effect of doing any pruning at all ?
: > >
: > > Chris Whittington
: >
: > I believe you can get this to reproduce in a completely vanilla
: > program, as long as it uses a hash table.
: >
:
: The idea of a hash table is not too difficult to cope with, but
: unravelling the complexities of hash table behaviour can get
: exceedingly hairy.
:
: Ok, if you reckon a vanilla a-b (not an a-only, since the
: behaviour comes from the b-cuts ?), no pruning, fixed depth, does
: this; can you generate an explainable mechanism .......... ?
:
: Chris Whittington
:
: >
: > bruce
:

If you are talking about fail high / fail low, yes...

You search for a long time, and at some point search position "X" and
find the score is > beta, store this and continue. You later search a
move at the root, look up position "X", and find that when you stored
this position beta was "Z" and the current beta is < "Z", so this table
hit causes a fail high just like you'd suspect. Now, you relax beta,
search again, and the fail-high won't happen, you can't search deeply
enough to find out why it *really* failed high, so you end up failing
low. After a year of hunting, I finally caught this in Cray Blitz years
ago, and there's really nothing you can do. If you carry hash stuff
across searches (not iterations, but actual complete searches) this is
worse (I do this in Crafty), but even if you clear it after each iteration
for some ungodly reason, it can still happen...
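
For concreteness, here is a minimal sketch of the mechanism just described,
assuming a generic alpha-beta searcher with a transposition table. The types,
names and probing rule below are illustrative, not Crafty's or Cray Blitz's
actual code:

    #include <stdint.h>

    enum { LOWER_BOUND, UPPER_BOUND, EXACT_SCORE };

    typedef struct {
        uint64_t key;     /* Zobrist signature of the position             */
        int      depth;   /* draft the entry was searched to               */
        int      bound;   /* score bound proved by that search             */
        int      type;    /* LOWER_BOUND, UPPER_BOUND or EXACT_SCORE       */
    } TTEntry;

    extern TTEntry *tt_probe(uint64_t key);
    extern int      full_search(uint64_t key, int alpha, int beta, int depth);

    int search(uint64_t key, int alpha, int beta, int depth)
    {
        TTEntry *e = tt_probe(key);

        /* Position "X" was once searched against a large beta ("Z") and
           proved score >= Z, stored as a LOWER_BOUND entry.  Probed later
           from some root move with a smaller beta, the entry alone
           produces a fail high.                                            */
        if (e && e->key == key && e->depth >= depth &&
            e->type == LOWER_BOUND && e->bound >= beta)
            return e->bound;           /* cutoff purely from the table hit  */

        /* Once the root relaxes beta above the stored bound, the test above
           no longer fires; the search must prove the score on its own,
           cannot reach the depth that created the entry, and comes back
           <= alpha: the earlier fail high turns into a fail low.           */
        return full_search(key, alpha, beta, depth);
    }

Turning hashing off removes the probe entirely, which is why the fail high
disappears in the experiments mentioned later in this thread.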

Bob

Robert HYATT
Nov 25, 1996


On Mon, 25 Nov 1996, Marcel van Kervinck wrote:

> In article <579rmv$s...@juniper.cis.uab.edu> you wrote:
>
> > You search for a long time, and at some point search position "X" and
> > find the score is > beta, store this and continue. You later search a
> > move at the root, look up position "X", and find that when you stored
> > this position beta was "Z" and the current beta is < "Z", so this table
> > hit causes a fail high just like you'd suspect. Now, you relax beta,
> > search again, and the fail-high won't happen, you can't search deeply
> > enough to find out why it *really* failed high, so you end up failing
> > low. After a year of hunting, I finally caught this in Cray Blitz years
> > ago, and there's really nothing you can do.
>

> A few weeks ago I *exactly* explained what can be done against
> this. Do you forget so quickly, read so inaccurately or do you
> firmly believe vanilla table handling is the best there is?
>
> Marcel

No, but I also don't believe in "throwing the baby out with the
bath." This type of search inconsistency is *not* a bad thing. The
new move *is* better than the old, and Crafty will play it just like it
should.

I'm trying to not sound harsh, but your suggestion lumps right in the
class of suggestions that happen all the time, and all involve "limiting"
the usefulness of a strong search algorithm, rather than doing something
to avoid serious failures. The scenario I explained is both normal and
causes no problems at all, which is why I've left it in for 20 years now,
and why everyone else ignores it as well. The move *is* better, I just
don't know how to find out how much better. So what?


Chris Whittington
Nov 25, 1996

hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
>
> Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:
> : brucemo <bru...@nwlink.com> wrote:
> : >
> : > Chris Whittington wrote:
> : >
> : > > Interesting, I had assumed I had it due to some deep bug in
> : > > the pruning. Something covered in one pruning function but missed
> : > > in another.
> : > >
> : > > Maybe its just an undesired effect of doing any pruning at all ?
> : > >
> : > > Chris Whittington
> : >
> : > I believe you can get this to reproduce in a completely vanilla
> : > program, as long as it uses a hash table.
> : >
> :
> : The idea of a hash table is not too difficult to cope with, but
: > : unravelling the complexities of hash table behaviour can get
> : exceedingly hairy.
> :
> : Ok, if you reckon a vanilla a-b (not an a-only, since the
> : behaviour comes from the b-cuts ?), no pruning, fixed depth, does
> : this; can you generate an explainable mechanism .......... ?
> :
> : Chris Whittington
> :
> : >
> : > bruce
> :
>
> If you are talking about fail high / fail low, yes...
>
> You search for a long time, and at some point search position "X" and
> find the score is > beta, store this and continue. You later search a
> move at the root, look up position "X", and find that when you stored
> this position beta was "Z" and the current beta is < "Z", so this table
> hit causes a fail high just like you'd suspect. Now, you relax beta,
> search again, and the fail-high won't happen, you can't search deeply
> enough to find out why it *really* failed high, so you end up failing
> low. After a year of hunting, I finally caught this in Cray Blitz years
> ago, and there's really nothing you can do. If you carry hash stuff
> across searches (not iterations, but actual complete searches) this is
> worse (I do this in Crafty), but even if you clear it after each iteration
> for some ungodly reason, it can still happen...
>

It was the 'clear after each iteration' case that I meant.

I've always looked at it as 'for some ungodly reason' too.

But maybe we should try to analyse why.

We could try:

Waiting for a repeatable fail high/fail low.
Switch off the hash. See if it still repeats.

Chris Whittington

> Bob
>
>


Marcel van Kervinck
Nov 25, 1996

Robert HYATT (hy...@cis.uab.edu) wrote:

> On Mon, 25 Nov 1996, Marcel van Kervinck wrote:

> > In article <579rmv$s...@juniper.cis.uab.edu> you wrote:
> >

> > > You search for a long time, and at some point search position "X" and
> > > find the score is > beta, store this and continue. You later search a
> > > move at the root, look up position "X", and find that when you stored
> > > this position beta was "Z" and the current beta is < "Z", so this table
> > > hit causes a fail high just like you'd suspect. Now, you relax beta,
> > > search again, and the fail-high won't happen, you can't search deeply
> > > enough to find out why it *really* failed high, so you end up failing
> > > low. After a year of hunting, I finally caught this in Cray Blitz years
> > > ago, and there's really nothing you can do.
> >

> > A few weeks ago I *exactly* explained what can be done against
> > this. Do you forget so quickly, read so inaccurately or do you
> > firmly believe vanilla table handling is the best there is?
> >
> > Marcel

> No, but I also don't believe in "throwing the baby out with the
> bath." This type of search inconsistency is *not* a bad thing. The
> new move *is* better than the old, and Crafty will play it just like it
> should.

> I'm trying to not sound harsh, but your suggestion lumps right in the
> class of suggestions that happen all the time, and all involve "limiting"
> the usefulness of a strong search algorithm, rather than doing something
> to avoid serious failures. The scenario I explained is both normal and
> causes no problems at all, which is why I've left it in for 20 years now,
> and why everyone else ignores it as well. The move *is* better, I just
> don't know how to find out how much better. So what?

I don't see this as a major problem either. But it is easy to correct.
And the correction is not a dirty trick, nor is it "hiding"
something. Vanilla handling won't recognize the move as better than any
other move that falls between the early (deep) bound and the new
(shallow) search. My implementation does. So there must be cases where
it gives a better move, and it returns the same move otherwise.
Perhaps there aren't many. That's why I'm not really seeing
it as a big problem.

I don't see where it is "throwing the baby out with the bath water". It is
enhancing the use of table information. It uses more information from
the table; it even uses *all* the information you can get. There is no
"limiting the usefulness" of anything. It is merely fixing a minor
oversight in the canonical handling.

Perhaps I'll take Crafty one day and implement the fix. Then we can see
if it really helps much compared to the old one. I don't think there
will be huge differences, but they will be there. And I don't expect
anyone to actually use it. Just don't say there is nothing that can be
done. It can be done, and more important, it can be done without
making things worse.

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| bue...@urc.tue.nl

Robert Hyatt
Nov 25, 1996

Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:

: hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
: >
: > Chris Whittington (chr...@cpsoft.demon.co.uk) wrote:
: > : brucemo <bru...@nwlink.com> wrote:
: > : >
: > : > Chris Whittington wrote:
: > : >
: > : > > Interesting, I had assumed I had it due to some deep bug in
: > : > > the pruning. Something covered in one pruning function but missed
: > : > > in another.
: > : > >
: > : > > Maybe its just an undesired effect of doing any pruning at all ?
: > : > >
: > : > > Chris Whittington
: > : >
: > : > I believe you can get this to reproduce in a completely vanilla
: > : > program, as long as it uses a hash table.
: > : >
: > :
: > : The idea of a hash table is not too difficult to cope with, but
: > : unravelling the complexities of hash table behaviour can get
: > : exceedingly hairy.
: > :
: > : Ok, if you reckon a vanilla a-b (not an a-only, since the
: > : behaviour comes from the b-cuts ?), no pruning, fixed depth, does
: > : this; can you generate an explainable mechanism .......... ?
: > :
: > : Chris Whittington
: > :
: > : >
: > : > bruce
: > :
: >
: > If you are talking about fail high / fail low, yes...
: >
: > You search for a long time, and at some point search position "X" and
: > find the score is > beta, store this and continue. You later search a
: > move at the root, look up position "X", and find that when you stored
: > this position beta was "Z" and the current beta is < "Z", so this table
: > hit causes a fail high just like you'd suspect. Now, you relax beta,
: > search again, and the fail-high won't happen, you can't search deeply
: > enough to find out why it *really* failed high, so you end up failing
: > low. After a year of hunting, I finally caught this in Cray Blitz years
: > ago, and there's really nothing you can do. If you carry hash stuff
: > across searches (not iterations, but actual complete searches) this is
: > worse (I do this in Crafty), but even if you clear it after each iteration
: > for some ungodly reason, it can still happen...
: >
:
: It was the 'clear after each iteration' case that I meant.
:
: I've always looked at it as 'for some ungodly reason' too.
:
: But maybe we should try to analyse why.
:
: We could try:
:
: Waiting for a repeatable fail high/fail low.
: Switch off the hash. See if it still repeats.
:
: Chris Whittington
:
: > Bob
: >
: >
:

I have found two distinct causes for fail-high, fail-low:

(1) null-moves. In Crafty, when all pieces reach "optimal" squares, which is a
sort of local maximum, any move will drop the score (to a shallow search), which
is exactly what the null-move does. That makes such a move look bad, and it will get
refuted by the null-move search quickly. However, the re-search will not fail
high, because standing pat is almost *never* a good idea, and the move then fails
high (as it should), which will back up to the root as a fail-low, as it should.
In this case, the null move incorrectly found a "zug" position where null-move is
known to fail miserably, and it proceeds to fail miserably as expected. This is
a serious problem that I see and am working on solving in 11.8... (a rough
sketch of this null-move test follows (2) below)

(2) the hash bug above. If you turn off hashing (in the cases I've worked on)
the fail-high/fail-low goes away. The fail-high never happens. Even if you
clear the hash between iterations, the information stored during an iteration
can still produce this behavior... However, don't forget, this move that fails
high *is* better than the current move; you just will never find out how much
better. This is only a problem if a second move fails high as well, as you now
can't really compare them and most often will end up playing the latter, even if
the former is better. No solution I can see, other than to not allow the fail
high to occur, which would be silly, knowing it *is* a better move...
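
Below is a hedged sketch of the null-move mechanism blamed in (1), in the
textbook recursive form; the reduction NULL_R, the piece-count guard and the
helper names are illustrative, not Crafty's actual implementation:

    #define NULL_R 2                        /* depth reduction for the null */

    extern void make_null_move(void);
    extern void unmake_null_move(void);
    extern int  reduced_search(int alpha, int beta, int depth);
    extern int  side_to_move_has_pieces(void);  /* crude zugzwang guard     */

    /* Returns nonzero if "doing nothing" already holds beta, in which case
       the real moves below this node are pruned.  In a position where every
       real move drops the score a little, this reduced-depth test is
       exactly the thing that misfires.                                     */
    int null_move_cutoff(int beta, int depth)
    {
        if (depth < 2 || !side_to_move_has_pieces())
            return 0;                       /* skip the null near zugzwang  */

        make_null_move();
        int score = -reduced_search(-beta, -beta + 1, depth - 1 - NULL_R);
        unmake_null_move();

        return score >= beta;               /* fail high: prune this node   */
    }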

brucemo
Nov 25, 1996

Chris Whittington wrote:

> It was the 'clear after each iteration' case that I meant.
>
> I've always looked at it as 'for soem ungodly' reason too.
>
> But maybe we should try to analyse why.
>
> We could try:
>
> Waiting for a repeatable fail high/fail low.
> Switch off the hash. See if it still repeats.

I'd figure out why, if I was concerned that I had a real bug, meaning a bug
wherein my eval function returned a different value for the same position, or
something like that.

There are some places in mine where I prune quiescent moves if I don't think
they can get anywhere near alpha, so if I was wrong -- one of them could have --
and I come back later with a lower alpha, I might find a variation that I missed
before, and obviously this can cause anything to happen.
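
For illustration, a rough sketch of that kind of quiescence pruning (often
called delta or futility pruning); the margin and the helper names here are
invented, not taken from any particular program:

    #define DELTA_MARGIN 200   /* roughly two pawns of optimism, made up    */

    extern int static_eval(void);               /* stand-pat score          */
    extern int captured_piece_value(int move);  /* value of the piece taken */

    /* Skip a capture in quiescence when even an optimistic outcome cannot
       get near alpha.  If the guess is wrong and the root later lowers
       alpha, a re-search can find variations the first search threw away,
       which is one way results refuse to reproduce.                        */
    int qsearch_skip_capture(int move, int alpha)
    {
        return static_eval() + captured_piece_value(move) + DELTA_MARGIN < alpha;
    }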

I did track down cases where the hash table caused problems; I dumped the tree and
figured it out. At least one of the ones I found was due to draw score
problems. This problem doesn't go away even if you take the simple-minded
approach of not storing the draw score in the hash table, because detecting an
erroneous repetition can result in the value of some node above you in the tree
changing from a value that is not the draw score to another value that is also
not the draw score.

I ignore all of these problems, since I know that they can happen without there
being "real" bugs. I believe my engine is solid, but I understand that there
are some cases where it breaks down slightly because it is using approaches that
are heuristic. In cases that appear truly strange I might ask Bob to run them
on Crafty, and see if he has the same problem.

If a move fails high at the root, I re-search with (alpha, +infinity) rather than
(beta, +infinity), so if it fails low there the move appears to have been a real dog
(it came back less than alpha with a window that was a superset of the previous
window), and I ignore the move. Obviously there are cases where the move was
good, and I should have kept it. Oh well, better luck next iteration.
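
A small sketch of that root re-search policy, with invented names; it only
shows the decision described above, not the actual code:

    #define INF 32000

    extern int search_root_move(int move, int alpha, int beta, int depth);

    /* A root move failed high against the current window.  Re-search it
       with (alpha, +INF) rather than (beta, +INF); if it now comes back
       <= alpha, the fail high is not trusted and the previous best move
       is kept.                                                             */
    int resolve_root_fail_high(int move, int alpha, int depth, int *accept)
    {
        int score = search_root_move(move, alpha, INF, depth);

        if (score <= alpha) {
            *accept = 0;       /* fail-high, fail-low: ignore this move     */
            return alpha;
        }
        *accept = 1;           /* genuine improvement: adopt move and score */
        return score;
    }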

bruce

brucemo
Nov 25, 1996

Robert Hyatt wrote:

> (1) prevent the fail high condition, which would be bad, because the move *is*
> better, it's just the search can't figure this out on its own and uses the hash
> table hit to find it, but when the search relaxes beta, it discovers it can't
> figure out why it's better and fails low. The correct thing to do is still
> play this move. In every case I studied this in with Cray Blitz and with
> Crafty, the move was actually better, which is not a surprise since the
> transposition table really can't "lie" unless there's a serious bug.

I found a counter-example, and it was a position you gave me. It was one of
these cases where you take the bishop on g4 with a pawn on h3, black re-takes the
pawn with a pawn on h5, opening the rook-file, and you get blasted like 3 moves
later. 1. hxg4 did a fail-high fail-low at the root, and turned out to be a
fatal choice.

I wrote a previous post which tried to suggest circumstances where you could get
a fail-high fail-low, and I didn't mention that it makes sense that the null-move
could also cause this to happen.

bruce

Robert Hyatt
Nov 25, 1996

Marcel van Kervinck (bue...@asterix.urc.tue.nl) wrote:

: Robert HYATT (hy...@cis.uab.edu) wrote:
:
: > On Mon, 25 Nov 1996, Marcel van Kervinck wrote:
:
: > > In article <579rmv$s...@juniper.cis.uab.edu> you wrote:
: > >
: > > > You search for a long time, and at some point search position "X" and
: > > > find the score is > beta, store this and continue. You later search a
: > > > move at the root, look up position "X", and find that when you stored
: > > > this position beta was "Z" and the current beta is < "Z", so this table
: > > > hit causes a fail high just like you'd suspect. Now, you relax beta,
: > > > search again, and the fail-high won't happen, you can't search deeply
: > > > enough to find out why it *really* failed high, so you end up failing
: > > > low. After a year of hunting, I finally caught this in Cray Blitz years
: > > > ago, and there's really nothing you can do.
: > >
: > > A few weeks ago I *exactly* explained what can be done against
: > > this. Do you forget so quickly, read so inaccurately or do you
: > > firmly believe vanilla table handling is the best there is?
: > >
: > > Marcel
:
: > No, but I also don't believe in "throwing the baby out with the
: > bath." This type of search inconsistency is *not* a bad thing. The
: > new move *is* better than the old, and Crafty will play it just like it
: > should.
:
: > I'm trying to not sound harsh, but your suggestion lumps right in the
: > class of suggestions that happen all the time, and all involve "limiting"
: > the usefulness of a strong search algorithm, rather than doing something
: > to avoid serious failures. The scenario I explained is both normal and
: > causes no problems at all, which is why I've left it in for 20 years now,
: > and why everyone else ignores it as well. The move *is* better, I just
: > don't know how to find out how much better. So what?

:
: I don't see this as a major problem either. But it is easy to correct.
: But the correction is not a dirty trick and it is not "hiding"
: something. Vanilla handling won't recognize the move better than any
: other move that falls between the early (deep) bound and the new
: (shallow) search. My implementation does. So there must be cases where
: it gives a better move, and it returns the same move otherwise.
: Perhaps there aren't many. That's why I'm not really seeing
: it as a big problem.
:
: I don't see where it is "throwing the baby out with the water". It is
: enhancing the use of table information. It uses more information from
: the table, it even uses *all* information you can get. There is no
: "limiting the usefullness" of anything. It is merely fixing a minor
: oversight in the canocical handling.
:
: Perhaps I'll take Crafty one day and implement the fix. Then we can see
: if it really helps much compared to the old one. I don't think there
: will be huge differences, but they will be there. And I don't expect
: anyone to actually use it. Just don't say there is nothing that can be
: done. It can be done, and more important, it can be done without
: making things worse.
:
: Marcel

Here's what I meant: there are only two ways to fix the above problem, which
I see in maybe one out of two or three games.

(1) prevent the fail high condition, which would be bad, because the move *is*
better, it's just the search can't figure this out on its own and uses the hash
table hit to find it, but when the search relaxes beta, it discovers it can't
figure out why it's better and fails low. The correct thing to do is still
play this move. In every case I studied this in with Cray Blitz and with
Crafty, the move was actually better, which is not a surprise since the
transposition table really can't "lie" unless there's a serious bug.

(2) allow the fail high, but then there will be no way to resolve the fail high
and produce a score, because the solution is likely beyond that which can be
reached with the current search depth constraint. Another iteration might make
all the difference, but we don't know and don't know if we'll have time to find
out.

In case (1) things are easy. In case (2), how do you propose to solve that which
can't be recognized via search? It seems to me that to solve (2), you have to
prevent (1), which is bad...

Bob


Robert Hyatt
Nov 25, 1996

brucemo (bru...@nwlink.com) wrote:
: Robert Hyatt wrote:
:
: > (1) prevent the fail high condition, which would be bad, because the move *is*
: > better, it's just the search can't figure this out on its own and uses the hash
: > table hit to find it, but when the search relaxes beta, it discovers it can't
: > figure out why it's better and fails low. The correct thing to do is still
: > play this move. In every case I studied this in with Cray Blitz and with
: > Crafty, the move was actually better, which is not a surprise since the
: > transposition table really can't "lie" unless there's a serious bug.
:
: I found a counter-example, and it was a position you gave me. It was one of
: these cases where you take the bishop on g4 with a pawn on h3, black re-takes the
: pawn with a pawn on h5, opening the rook-file, and you get blasted like 3 moves
: later. 1. hxg4 did a fail-high fail-low at the root, and turned out to be a
: fatal choice.
:
: I wrote a previous post which tried to suggest circumstances where you could get
: a fail-high fail-low, and I didn't mention that it makes sense that the null-move
: could also cause this to happen.
:
: bruce

I'd almost be willing to bet that this was at least exacerbated by the null-move,
if not outright caused by it. In these cases, my first recourse is to turn hashing
off, and it almost never affects anything dealing with the fail-hi/fail-low
problem, but the instant I turn off the good old null move, everything works like
a charm once again... :)

Bob

Vincent Diepeveen
Nov 26, 1996

In <57cc1j$v...@juniper.cis.uab.edu> hy...@crafty.cis.uab.edu (Robert Hyatt) writes:

>I have found two distinct causes for fail-high, fail-low:

>(1) null-moves. In crafty, when all pieces reach "optimal" squares, which is a
>sort of local maximum, any move will drop the score (to a shallow search) which
>is exactly what null-move does. That makes such a move look bad, and it will get
>refuted by the null-move search quickly. However, the re-search will not fail
>high, because standing pat is almost *never* a good idea, and the move then fails
>high (as it should) which will back up to the root as a fail-low, as it should.
>In this case, the null move incorrectly found a "zug" position where null-move is
>known to fail miserably, and it proceeds to fail miserably as expected. This is
>a serious problem that I see and am working on solving in 11.8...
>
>(2) the hash bug above. If you turn off hashing (in the cases I've worked on)
>the fail-high/fail-low goes away. The fail-high never happens. even if you

(3) (2) is caused by:
a) lazy evaluation
b) alpha/beta-dependent extensions
c) bugs

If I turn off alpha/beta-dependent extensions (lazy evaluation
I don't use), then I never get a fail-high/fail-low.

If I turn them on, then I also never get a fail-high/fail-low outside
the window;

If I first search with window [a,b] and get a fail high,
then I re-search with [a-x,b+x]. I have seen cases where the search
returns a score y >= a-x, but I have never seen a score causing a
fail low: y <= a-x.

If I however use window [b,b+x] to re-search a fail low, then
this can give a fail low, as the score then seems only >= a.

Perhaps this is your problem: one must not use alpha/beta-dependent
extensions, or one must not use the re-search window [b,b+x]?
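
To make the two re-search choices concrete, a hedged sketch in the window
notation used above; the widening step and the function names are made up,
and no claim is made that this matches any particular program:

    #define ASP_X 50                        /* aspiration widening step     */

    extern int search_root(int alpha, int beta, int depth);

    int aspiration_search(int a, int b, int depth)
    {
        int score = search_root(a, b, depth);

        if (score >= b) {                   /* first window [a,b] fails high */
            /* Choice 1: widen on both sides and re-search with [a-x,b+x].
               A fail low of this window would require score <= a-x.        */
            score = search_root(a - ASP_X, b + ASP_X, depth);

            /* Choice 2 (shown disabled): re-search only with [b,b+x].  If
               that narrow window fails low, the iteration is left holding a
               fail low even though the first search had already failed
               high.                                                        */
            /* score = search_root(b, b + ASP_X, depth); */
        }
        return score;
    }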

Vincent Diepeveen
vdie...@cs.ruu.nl
http://www.students.cs.ruu.nl/~vdiepeve/
--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+

Marcel van Kervinck
Nov 26, 1996

> (1) prevent the fail high condition, which would be bad, because the move *is*
> better, it's just the search can't figure this out on its own and uses the hash
> table hit to find it, but when the search relaxes beta, it discovers it can't
> figure out why it's better and fails low. The correct thing to do is still
> play this move. In every case I studied this in with Cray Blitz and with
> Crafty, the move was actually better, which is not a surprise since the
> transposition table really can't "lie" unless there's a serious bug.

> (2) allow the fail high, but then there will be no way to resolve the fail high
> and produce a score, because the solution is likely beyond that which can be
> reached with the current search depth constraint. Another iteration might make
> all the difference, but we don't know and don't know if we'll have time to find
> out.

> In case (1) things are easy. In case (2) how do you propose to solve that which
> can't be recognized via search. It seems to me that to solve (2), you have to
> prevent (1), which is bad...

> Bob

The correct thing to do is to return the table value here, not the
shallow search value that fails low. My hardwindow technique I posted a
few weeks ago does exactly that. You can't search deep enough to see
why, but that doesn't mean the early fail high should be ignored by
the fail-low search. It contains information that you throw away and I
don't.
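
Since the hardwindow post itself is not quoted in this thread, what follows is
only a minimal sketch of the idea as stated here, with illustrative names:
prefer the deep lower bound from the table hit to the value of a re-search
that fails low.

    extern int re_search(int alpha, int beta, int depth);

    /* The deep table hit proved score >= table_bound, but the re-search at
       the reachable depth fails low.  Instead of reporting the fail-low
       value, report the bound, so the move still ranks above anything the
       re-search scored beneath it.                                         */
    int score_after_fail_high(int table_bound, int alpha, int beta, int depth)
    {
        int score = re_search(alpha, beta, depth);

        if (score <= alpha && table_bound > alpha)
            return table_bound;             /* keep the deep lower bound    */
        return score;
    }

Whether such a bound can safely be compared with the fully resolved scores of
other moves is exactly the point argued over in the rest of this thread.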

Robert Hyatt
Nov 26, 1996

Marcel van Kervinck (bue...@asterix.urc.tue.nl) wrote:
:

Now I'm totally lost. I don't throw this away. If I fail high, I will always
play that move, and I have never regretted doing this in post-game analysis.
The problem that I don't see how you solve is this: if you fail high, the upper bound
is probably 1 "unit" higher than the current alpha value. If you can't resolve
the fail high, you end up returning alpha, which is a fail low. You know
the current move is better than the previous best, but have no idea how much
better (beta = alpha+1, remember). If a second move fails high, and you *do*
resolve this one, it will also be better than the first move that failed high
based on what the search returned, but there's no way to compare move A with
score > 1.221 (say) and move B with score 1.383 (say). Crafty (and your
program, I'd suspect) will play move B, which *may* be completely wrong. Do
you have a cure for that? If so, that would be a breakthrough that would
certainly help solve a problem (albeit a rare problem)...
