
Hash Table test (re: move ordering)


Robert Hyatt

Feb 8, 1996
A couple of days ago, Jonathan Schaeffer recommended that I try the
following "trick": if there's not a "hash move" at a particular ply,
then generate the list of legal moves, pass through them creating a
hash key for each, then probing the "other" hash table (for the opposite
side), and if I find a move that leads to a position that would produce
a cutoff quickly, try that move here first.
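
For readers implementing this, a minimal sketch of the probe loop (all names, the table layout, and the bound convention are illustrative, not Crafty's actual code; the convention here is the usual one where a fail-low stores an upper bound):

```python
LOWER, UPPER, EXACT = "lower", "upper", "exact"

def etc_order(position, moves, tt, depth, beta, child_key):
    """Move a child whose stored score already refutes this node to the front."""
    for move in list(moves):
        entry = tt.get(child_key(position, move))   # probe the child position
        if entry is None:
            continue
        flag, score, draft = entry
        # The child's entry is from the opponent's point of view: a child
        # that failed low for them is an immediate fail-high for us.
        if draft >= depth - 1 and flag == UPPER and -score >= beta:
            moves.remove(move)
            moves.insert(0, move)                   # try the refutation first
            break
    return moves
```

In a real engine the child key would come from an incremental Zobrist update rather than a full rehash, which is why the probes are cheap.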

I wrote this code and tried it on Crafty and Cray Blitz. The results
were that it *did not* slow things down much because of the speed at
which both programs generate legal moves. However, it produced *very*
little performance improvement. In most positions it was 1 or 2 percent
slower due to the extra work, while in others it was 1 or 2 percent faster
due to slightly smaller tree sizes. I tried it on the complete set of
Win at Chess and Bratko-Kopec positions, and the net result was <nil>
improvement, and overall a 1-2% slowdown. Note that the Kopec test was
run at 5 mins per position so it wasn't a search that was "too shallow"
and that also the hash table sizes were not "immense".

I can't begin to explain why I get this result when Jonathan (and someone
else too) reported 25% improvements, particularly in a checker program.
One possible explanation is that I've always had a fairly clever hash table
replacement algorithm that goes to great lengths to keep useful information.
In any case, after about 350 test positions, the result was almost break-even.

Bruce Moreland (Ferret on ICC/FICS) also tried this, and while he saw the
tree sizes shrink very slightly as I did, he took a much bigger hit in
performance because of the way he generates moves. His result, then, was
that the idea failed miserably, because even though the tree shrank minimally,
the cost far outweighed the gain.

Maybe we need to discuss/consider/tune this some more, but at present, it
looks like whatever we are doing is working well. BTW, I even replaced
my hash table algorithm (in Crafty) with a "clone" of what Bruce does, and
re-ran the tests. Same results, so it's not just *my* hashing that's doing
well and making this not work.

Comments or suggestions?


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Daniel A. Thies

Feb 9, 1996
In article <4fdghn$2...@pelham.cis.uab.edu>, Robert Hyatt wrote:

> A couple of days ago, Jonathan Schaeffer recommended that I try the
> following "trick": if there's not a "hash move" at a particular ply,
> then generate the list of legal moves, pass through them creating a
> hash key for each, then probing the "other" hash table (for the opposite
> side), and if I find a move that leads to a position that would produce
> a cutoff quickly, try that move here first.

I think Jonathan is doing this in a checkers program, where there are
fewer moves to generate, and it is perhaps (I dunno) more likely that a
useful match is found in the opposite side's hash table. It is also
possible that you are already getting much of the benefits Jonathan found
with this by using other means.

Why not try this: turn off killer move and history move, and run your
test. Then add in Jonathan's hash table trick, and see if you don't get a
25% improvement.

It seems to me that this trick could substitute for killer/history move to
some extent, and there are a *lot* of clever things going on in Crafty
besides the hash tables. Since you got no real change when you swapped in
Ferret's hashing methods, the answer would appear to lie somewhere else.

Jonathan: if you're following this thread, perhaps you can say what other
tricks you're doing that Crafty may or may not already use.

Dan

M D. Grimminck

Feb 9, 1996
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>Comments or suggestions?

For me, it seems to only work if the hashtables are overloaded.
This overloading prevents good move ordering, and gives the algorithm
(I think it is called ETC) a chance to find some cheap cut-offs.

So possibly your move ordering is too good for ETC!

--
Michel Grimminck, Computational Physics, University of Amsterdam.
draughts/checker page: http://carol.fwi.uva.nl/~grimmink/draughts.html

Vincent Diepeveen

Feb 9, 1996
In <4fdghn$2...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>A couple of days ago, Jonathan Schaeffer recommended that I try the
>following "trick": if there's not a "hash move" at a particular ply,
>then generate the list of legal moves, pass through them creating a
>hash key for each, then probing the "other" hash table (for the opposite
>side), and if I find a move that leads to a position that would produce
>a cutoff quickly, try that move here first.

I also implemented the trick a few months ago. At that time I was also
experimenting with things like countermoves. In very tactical positions it
does not seem to work that well, but it works very well as soon as the
loading factor increases.

Using only killer moves seems to work fine with this.

We must not forget that storing the best move works better in a checkers
program than in a chess program. I don't know why. Any thoughts?

I guess this phenomenon will always work better for checkers/draughts etc.
than for chess. However, it seems to give a considerable node reduction in
Diep, although in the same order Hyatt reported, about a few percent.
This rises to 5-20% at >= 10 ply, which is a depth Diep usually doesn't
get in tournament play, whereas in checkers/draughts these depths are
already reached in blitz.

I'll try to implement it in my draughts program. Next Monday I'll report
whether it works better for draughts than for chess.

>I wrote this code and tried it on Crafty and Cray Blitz. The results
>were that it *did not* slow things down much because of the speed at
>which both programs generate legal moves. However, it produced *very*
>little performance improvement. In most positions it was 1 or 2 percent
>slower due to the extra work, while in others it was 1 or 2 percent faster
>due to slightly smaller tree sizes. I tried it on the complete set of
>win at chess and Bratko/Kopec positions, and the net result was <nil>
>improvement, and overall a 1-2% slowdown. Note that the Kopec test was
>run at 5 mins per position so it wasn't a search that was "too shallow"
>and that also the hash table sizes were not "immense".

What depths does Cray Blitz get? What is its hashtable loading factor after 5 mins?

>I can't begin to explain why I get this result when Jonathan (and someone
>else too) reported 25% improvements, particularly in a checker program.
>One possible explanation is that I've always had a fairly clever hash table
>replacement algorithm that goes to great lengths to keep useful information.
>In any case, after about 350 test positions, the result was almost break-even.

Maybe depth?

Vincent

Vincent Diepeveen
vdie...@cs.ruu.nl
--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+

Jonathan Schaeffer

Feb 10, 1996
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>A couple of days ago, Jonathan Schaeffer recommended that I try the
>following "trick": if there's not a "hash move" at a particular ply,
>then generate the list of legal moves, pass through them creating a
>hash key for each, then probing the "other" hash table (for the opposite
>side), and if I find a move that leads to a position that would produce
>a cutoff quickly, try that move here first.
>[snip]

Interesting. A few comments. This has been tried in four programs
(2 chess, 1 checker, 1 Othello). Three programs show very good
results; Othello does not (it turns out that moves in Othello
dramatically change the board, reducing the likelihood of
transpositions).

I think it is important to consider the search algorithm. Yes, I go
to a lot of work to make sure I keep the valuable information in the
trans table, so I do not believe this is an issue.

Consider what must be happening in the search for this "trick" to work:
A. two lines of play must end up almost transposing into each other. Then
this trick finds the transposition.

Consider what must be happening in the search for this "trick" to fail: either
B: two lines of play are so diverse that there is no hope for
transpositions to occur; or
C: the lines are similar enough that the transpositions are already occurring.

In the programs tested, case A is obviously happening. I have done the
experiments to verify that A is happening in at least the programs I
wrote (Phoenix, chess; Chinook, checkers). At the root, I have moves X
and Y. X gets searched. I then search Y and get lots of transpositions
from this search into nodes already considered while searching X.

This can be seen more clearly when you turn search extensions on. In
this case, assume X is the right move. It typically gets searched deeper
because there are more search extensions. With the "trick", the search of
Y transposes into X's lines and you get *better* search accuracy than
you normally would. This has been demonstrated in my programs. The Deep
Blue people also use it, and confirm that the "trick" improves their search
accuracy (it results in more positions being solved in their test sets).
[To the best of my knowledge, the "trick" is in the program that plays
Kasparov tomorrow.]

If you do not see any improvement, then clearly your program is seeing
either case B or C. I rely heavily on history based moves, which means that
many lines of play (even the ridiculous ones) tend to be similar, hence
scenario A. If your search does not have this property, then you will
see no improvement, scenario B.


What is the minimal search tree (actually graph)? The best way of estimating
it is Ebeling's method. Search the tree, save the best moves in the trans
table, and then re-search using the table as an oracle. Using this metric,
many programs confirm that they are searching close to "optimal".
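
The two-pass measurement can be sketched on a toy game tree; everything here (the dict tree encoding, the `best` oracle table, the node counter) is an illustration of the method, not Phoenix or Chinook code:

```python
import math

def alphabeta(tree, leaves, node, depth, alpha, beta, best, stats, use_oracle):
    """Negamax alpha-beta; records each node's best move in `best` so a
    second pass can use that table as a move-ordering oracle."""
    stats["nodes"] += 1
    if depth == 0 or node not in tree:
        return leaves[node]                          # leaf score, side to move
    moves = list(tree[node])
    if use_oracle and node in best:
        moves.sort(key=lambda m: m != best[node])    # oracle move goes first
    value = -math.inf
    for m in moves:
        score = -alphabeta(tree, leaves, tree[node][m], depth - 1,
                           -beta, -alpha, best, stats, use_oracle)
        if score > value:
            value, best[node] = score, m             # remember the best move
        alpha = max(alpha, score)
        if alpha >= beta:
            break                                    # cutoff
    return value
```

Searching once to fill `best`, then re-searching with `use_oracle=True`, gives the same value with a node count at or below the first pass, which is the estimate of the "minimal" tree.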

However, this algorithm does not take into account maximizing the utility
of the transposition table. It is possible to search in such a way as
to maximize transpositions. We have built trees that show that the "real"
minimal tree is at least 40% smaller (I don't have the exact number here)
than most people think. Just because you have a cutoff doesn't mean you
have the best move. When you visit a node, you search the trans table move
first, every time you visit the node. *But* there may be another move
that can also cause a cutoff, and do so by building a smaller search tree.

By making search trees similar, you can encourage extra transpositions,
resulting in a much smaller search tree - i.e. getting cheaper cutoffs.

Robert Hyatt

Feb 10, 1996
In article <4fh7dk$q...@scapa.cs.ualberta.ca>,
Jonathan Schaeffer <jona...@cs.ualberta.ca> wrote:
-->hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->[snip]
-->If you do not see any improvement, then clearly your program is seeing
-->either case B or C. I rely heavily on history based moves, which means that
-->many lines of play (even the ridiculous ones) tend to be similar, hence
-->scenario A. If your search does not have this property, then you will
-->see no improvement, scenario B.
-->[snip]

Crafty uses both history and killer moves. In fact, last Summer while
on vacation, I added killers for the heck of it, and was surprised to see
a significant tree size reduction, and a speedup because I can try the
killers before generating any moves.
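
The staged ordering described here (killers tried before paying for full move generation) might look like this in outline; the names and the `is_legal` check are assumptions, not Crafty's code:

```python
def staged_moves(position, killers, generate_moves):
    """Yield killer moves first, then the remaining generated moves.
    Full move generation is deferred, so a killer cutoff skips it entirely."""
    tried = []
    for m in killers:
        if position.is_legal(m):        # killers must be re-validated here
            tried.append(m)
            yield m
    for m in generate_moves(position):  # generation happens only if reached
        if m not in tried:
            yield m
```

Because this is a generator, a beta cutoff on a killer move means `generate_moves` is never called, which is where the speedup comes from.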

I've saved the code and will test the idea from time to time, but obviously
something in crafty's search is (at present) somehow nullifying the gain
this should produce. I should add that in one or two endgame positions,
I saw a speedup of 2x or more, while in others, nothing. I discount this
result, since in endgames it's possible to search 50 positions, then
somehow randomly alter move ordering and search again and fine a couple
that are *much* faster.

One quick explanation of how I did what I did:

I generated the list of legal moves, made each to produce a new hash
key, then looked up these positions with the opposite side on move. If
I got a hit, I looked for two cases:

1. position represents a LOWER_BOUND, which means it failed low. This
translates to an immediate fail-high if I search this move right now.

2. position represents an EXACT_VALUE. If this value is below ALPHA,
then I search it first as it will cause us to exit as well with a
fail high, since at the current ply this will result in score > beta.
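
A sketch of the two cases, with hypothetical flag names; the stored score is from the opponent's point of view, so it is negated before the comparison against our beta:

```python
LOWER_BOUND, EXACT_VALUE, UPPER_BOUND = "lower", "exact", "upper"

def child_entry_refutes(entry, beta):
    """True if the child's stored entry guarantees score >= beta here."""
    flag, score = entry
    # Case 1: the child failed low for the opponent, which is an
    # immediate fail-high for us at the current ply.
    if flag == LOWER_BOUND and -score >= beta:
        return True
    # Case 2: an exact child score that, negated, clears our beta also
    # lets us exit with a fail high right away.
    if flag == EXACT_VALUE and -score >= beta:
        return True
    return False
```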

Did I overlook something that you are doing?

Robert Hyatt

Feb 10, 1996
In article <4ffs9d$r...@krant.cs.ruu.nl>,
Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:

-->In <4fdghn$2...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->[snip]
-->What depths does Cray Blitz get? What is its hashtable loading factor after 5 mins?

typically 10-11 plies in normal middlegame. Hash table typically has 512 million
entries, although in tournaments (C90) we've used 1 billion entries also.

-->[snip]
-->Maybe depth?

Possibly, because the couple of favorable positions were endgames like Fine #70,
where crafty hits 25+ plies in seconds...


Peter Gillgasch

Feb 10, 1996
My personal guess is that Jonathan is not using a swap()
function in Phoenix (at least this is how I understand the
papers on Phoenix). Deep Blue also doesn't use it.

Maybe Jonathan is just getting the "swap()" effect (better
move ordering). Bob has (had?) a switch that set Crafty to
MVV/LVA ordering, so this hypothesis can be verified quite
easily.
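
For reference, the MVV/LVA ordering mentioned here sorts captures by most valuable victim first, breaking ties with the least valuable attacker; a sketch with illustrative piece values:

```python
# Conventional material values; real engines tune these.
VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def mvv_lva_order(captures):
    """Each capture is an (attacker, victim) pair of piece letters.
    Biggest victim first; among equal victims, cheapest attacker first."""
    return sorted(captures, key=lambda c: (-VALUE[c[1]], VALUE[c[0]]))
```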

-- Peter

------------------------------------------------------------
Peter W. Gillgasch
Klosterweg 28/I-113 email gil...@ira.uka.de
76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

Robert Hyatt

Feb 10, 1996
In article <4fj197$j...@nz12.rz.uni-karlsruhe.de>,
Peter Gillgasch <gil...@ira.uka.de> wrote:
-->My personnal guess is that Jonathan is not using a swap()
-->function in Phoenix (at least this is how I understand the
-->papers on Phoenix). DeepBlue also doesn't use it.
-->
-->Maybe Jonathan is just getting the "swap()" effect (better
-->move ordering). Bob has (had ?) a switch that set Crafty to
-->mvv/lva ordering, so this hypothesis can be verified quite
-->easily.

Alas, it's gone. In optimizing, I've removed all "crud" that was left over
from old tests. Would take some coding now to try this. Bottom line is that
right now, it looks like a barely break-even thing.

Bruce Moreland

Feb 11, 1996
In article <4fdghn$2...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu says...
>[snip]

>Bruce Moreland (Ferret on ICC/FICS) also tried this, and while he saw the
>tree sizes shrink very slightly as I did, he took a much bigger hit in
>performance because of the way he generates moves. His result, then, was
>that the idea failed miserably, because even though the tree shrank minimally,
>the cost far outweighed the gain.
>[snip]

I took a big performance hit because I didn't try everything to make this
lookup go very fast. I saw some change in tree size, but it didn't make
it a heck of a lot smaller.

I wasn't ready to report results on this, so my apologies if the above sounds
vague.

bruce


Vincent Diepeveen

Feb 12, 1996
In <4fh7dk$q...@scapa.cs.ualberta.ca> jona...@cs.ualberta.ca (Jonathan Schaeffer) writes:

>hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>>[snip]

>Interesting. A few comments. This has been tried in four programs
>(2 chess, 1 checker, 1 Othello). Three programs show very good
>results; Othello does not (it turns out that moves in Othello
>dramatically change the board, reducing the likelihood of
>transpositions).

I tried it both in a chess program (a few %) and in a draughts program.
The strange thing: in my draughts program it works less well than in
my chess program; it does NOTHING. No reduction, no increase.

>[snip]

>If you do not see any improvement, then clearly your program is seeing
>either case B or C. I rely heavily on history based moves, which means that
>many lines of play (even the ridiculous ones) tend to be similar, hence
>scenario A. If your search does not have this property, then you will
>see no improvement, scenario B.

In my draughts program (almost identical to checkers apart from a few rules
and the board size) it doesn't make things better.

Before searching, do you also order the moves tactically in Chinook and
the chess program?

>What is the minimal search tree (actually graph)? The best way of estimating
>it is Ebeling's method. Search the tree, save the best moves in the trans
>table, and then re-search using the table as an oracle. Using this metric,
>many programs confirm that they are searching close to "optimal".

This reminds me of a discussion about 'searching 20 ply'...

>However, this algorithm does not take into account maximizing the utility
>of the transposition table. It is possible to search in such a way as
>to maximize transpositions. We have built trees that show that the "real"
>minimal tree is at least 40% smaller (I don't have the exact number here)

Thanks. I don't have exact numbers either, but it seems there is an awful
lot to improve in this area, not to mention nullmove.

>[snip]

Robert Hyatt

Feb 12, 1996
In article <4for4o$m...@scapa.cs.ualberta.ca>,
Jonathan Schaeffer <jona...@cs.ualberta.ca> wrote:
-->hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->
-->>One quick explanation of how I did what I did:
-->
-->>I generated the list of legal moves, made each to produce a new hash
-->>key, then looked up these positions with the opposite side on move. If
-->>I got a hit, I looked for two cases:
-->
-->>1. position represents a LOWER_BOUND, which means it failed low. This
-->>translates to an immediate fail-high if I search this move right now.
-->
-->>2. position represents an EXACT_VALUE. If this value is below ALPHA,
-->>then I search it first as it will cause us to exit as well with a
-->>fail high, since at the current ply this will result in score > beta.
-->
-->>Did I overlook something that you are doing?
-->
-->This is it - with one point not mentioned (which I am sure you are doing).
-->If the node is at depth D, then when you look up the children in the hash
-->table, you are looking for scores of depth D-1 accuracy.
-->
-->I only used point 1 above. I found 2 did little for me.
-->


Yes. In fact, I think that when Bruce Moreland and I compared notes,
we both did this as you suggest. I agree that with PVS, 2 is just
about useless. I count exact score hits as well as fail highs and
fail lows, and exact score hits are usually 0%.

I'm saving the code and will re-try this again later. Amazing how
many ideas fail the first time around, then work the next. Internal
iterative deepening was one example I remember. Think I'm getting
too old to do reliable programming any more... :)

If anyone out there's playing around with Crafty, I can email 'em the
code to play with...

Bob

Jonathan Schaeffer

Feb 13, 1996
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>One quick explanation of how I did what I did:

>I generated the list of legal moves, made each to produce a new hash
>key, then looked up these positions with the opposite side on move. If
>I got a hit, I looked for two cases:

>1. position represents a LOWER_BOUND, which means it failed low. This
>translates to an immediate fail-high if I search this move right now.

>2. position represents an EXACT_VALUE. If this value is below ALPHA,
>then I search it first as it will cause us to exit as well with a
>fail high, since at the current ply this will result in score > beta.

>Did I overlook something that you are doing?

This is it - with one point not mentioned (which I am sure you are doing).
If the node is at depth D, then when you look up the children in the hash
table, you are looking for scores of depth D-1 accuracy.

I only used point 1 above. I found 2 did little for me.
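
That depth condition is just a draft check before trusting a child's entry; a one-line sketch with hypothetical field names:

```python
def entry_deep_enough(entry_draft, node_depth):
    """A child entry only proves a cutoff at a depth-D node if that child
    was itself searched to at least depth D-1."""
    return entry_draft >= node_depth - 1
```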

