
mvv/lva vs SEE capture ordering test results


Robert Hyatt

Aug 10, 1995

Here's the test results for the three tests I described earlier
today.

First, an explanation of the three sets (columns) of data. These
are the 24 Bratko-Kopec test positions. The first two columns,
labeled "normal", were produced using the standard Crafty search
algorithms. The first number is the number of nodes searched for
this test, the second is the maximum search depth reached for that
particular position.

The second two columns of data were created by modifying the
quiescence search of Crafty to include *all* capture moves,
rather than the forward pruning of losing captures it normally
uses. This still uses the static exchange evaluation ordering
for captures, and no other changes were made.

The third two columns of data were created by modifying the above
version of Crafty to use the mvv/lva capture ordering, rather than
the static exchange evaluation algorithm. Note that all other search
tricks, etc. were left in place as they were for test number two,
above.

With that out of the way, here's the data:

 pos       normal          all captures        mvv/lva
        nodes   depth      nodes   depth      nodes   depth

  1       376      6         440      9         496      8
  2     73661     19       59913     21       53567     20
  3     65950     16      134899     21      238209     22
  4    155863     24      204293     25      201500     25
  5    987430     23      721173     27      822292     25
  6     91284     16       67664     16       75659     18
  7    542367     17     1396770     27     1207976     30
  8     10668     17       12039     17       12536     17
  9    214456     19      402516     25      481063     26
 10    185935     20      401219     23      358073     21
 11    179948     24      545910     29      724403     31
 12    181407     23      383008     25      342934     25
 13    213022     15      396055     20      372184     20
 14    116949     16      130438     20      125038     21
 15     90143     18      115488     19      123094     20
 16    278972     23      386409     27      518904     24
 17    176827     16      434300     25      559867     27
 18   1165658     19     1347926     27     1267460     29
 19    301033     18      448709     22      336420     22
 20    320061     21      767034     23      879244     25
 21    120178     17      169037     20      146898     19
 22    294226     22      705767     27     1113788     30
 23    137277     18      244031     23      271804     22
 24    544241     22     2043373     31     2456107     31
     --------            --------            --------
      6447932            11518411            12689516

From the totals, including *all* captures nearly doubles the
size of the tree. Note that a couple of the positions produce
fewer nodes with all captures, but the overall total is almost
double.
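Those two ratios can be checked directly from the totals in the table
above (a trivial sketch; the names are mine, not Crafty's):

```c
#include <assert.h>

/* Totals from the table above: nodes searched over all 24
   Bratko-Kopec positions for each configuration. */
static const long normal_total   =  6447932L;
static const long all_caps_total = 11518411L;
static const long mvv_lva_total  = 12689516L;

/* Growth factor of one configuration's node count over another's. */
double growth(long numer, long denom)
{
    return (double)numer / (double)denom;
}
```

growth(all_caps_total, normal_total) comes out around 1.79 (the "nearly
double"), and growth(mvv_lva_total, all_caps_total) around 1.10 (the
"roughly 10%" discussed next).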

Notice that the mvv/lva column is roughly 10% larger in total
nodes than the static exchange evaluation ordering. This supports
the idea that SEE ordering is not significantly better than
mvv/lva. Of course, in Crafty, the function that does the static
exchange evaluation takes just under 5% of the total search time
(measured with gprof on some of the above tests). As a result,
SEE still comes out ahead by roughly 5% in my case.

However, note that with mvv/lva, it is impossible to cull losing
captures, since they can't be "discovered" using mvv/lva. In
Crafty's case, using SEE is about 2x faster than using mvv/lva
if you take the forward pruning of losing captures into
consideration. From the above, you can draw your own conclusions,
as it's far from clear if you aren't willing to prune the captures
that appear to lose material. Testing Cray Blitz with and without
this forward pruning, on a set of about 5,000 positions, produced
no cases where the program played a worse move with the pruning
than without. In fact, the program often played a better move,
because it could frequently search somewhat deeper, or, if not,
it was able to look at more moves on the last iteration and
occasionally "change its mind" to a better move. In the absence
of evidence that such pruning is harmful, I'll take the 2x any
day.
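The reason losing captures can't be "discovered" under mvv/lva is
visible in how its ordering key is computed: it looks only at the
victim and the attacker, never at the rest of the exchange. A minimal
sketch, with made-up piece values (not Crafty's actual tables):

```c
#include <assert.h>

/* Illustrative piece codes and centipawn values -- hypothetical,
   chosen only to demonstrate the ordering. */
enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };
static const int piece_value[] = { 100, 300, 300, 500, 900, 10000 };

/* mvv/lva key: Most Valuable Victim first, ties broken toward the
   Least Valuable Attacker.  The victim's value dominates; the
   attacker's index (0 = pawn ... 5 = king) only breaks ties.  Note
   that nothing here can tell a winning capture from a losing one:
   RxP scores the same whether or not the pawn is defended. */
int mvv_lva_score(int victim, int attacker)
{
    return piece_value[victim] - attacker;
}
```

So PxQ sorts ahead of RxQ, and any capture of a queen sorts ahead of
any capture of a rook, but culling "losers" would require something
like SEE that plays out the whole exchange.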

Note that this is a "research" program. I wouldn't begin to try to
enumerate the search extensions used, the special things included
in the q-search, etc., so comparing nodes between two different
programs is not going to produce useful data. However, using
Crafty for all three tests produces results that seem to be
consistent with what I would expect. In fact, mvv/lva works much
better than I expected, although you give up a lot by using it (you
don't have a static exchange evaluator that can be used to include
safe passed pawn pushes, safe checks, etc.)

Comments?

--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Feng-Hsiung Hsu

Aug 11, 1995

In article <40c4t5$6...@pelham.cis.uab.edu>,
Robert Hyatt <hy...@willis.cis.uab.edu> wrote:
>   normal          all captures     mvv/lva
  (SEE pruning)   (normal quies)
                  (SEE ordering)

> ------- -------- --------
> 6447932 11518411 12689516

So the mystic factor of two actually comes from the Static
Exchange Evaluation pruning. Personally, I think Cray Blitz
probably was burned more times by the pruning than Bob realized.
Capture quiescence search is not perfect, but it won't be fooled
into believing that a pinned or overloaded piece guards anything.
Maybe Bob has a SEE implementation that won't be fooled, but I doubt
it.

It seems to me that unless one believes in SEE pruning (I don't),
one is better off sticking with mvv/lva. For that matter, mvv/lva
might be more than a factor of two faster with a good implementation.

Peter Gillgasch

Aug 12, 1995

In article <40c4t5$6...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
|>
|> Here's the test results for the three tests I described earlier
|> today.

Big thanks to Bob for looking into this. The setup of the experiment
is wonderful, because it takes the forward pruning that was done
in Cray Blitz/Crafty into account.

|> normal all captures mvv/lva

[position by position results snipped]

|> ------- -------- --------
|> 6447932 11518411 12689516
|>
|> From the total column, including *all* captures effectively
|> doubles (nearly) the size of the tree. Note that a couple of
|> the positions produces fewer nodes with all captures, but that
|> the overall total is almost double.

An interesting result, which (ahem) supports my thought that the
tree size reduction is mainly due to forward pruning of "unsafe"
captures in the quiescence. Some time ago I played with that idea
as well and ran some tests (back when DarkThought was a Pascal
thing called Frankenstein). At that time I didn't have attack
bitboards, and computing a SEE was pretty expensive compared to
capture generation. I saw similar results and have used mvv/lva
ever since, although at the time I really felt it looked like a
stupid thing to do...

|> Notice that the mvv/lva column is roughly 10% larger in total
|> nodes than the static exchange evaluation. As a result, it
|> supports the idea that SEE is not significantly better than
|> mvv/lva.

Surprising, eh?

|> However, note that with mvv/lva, it is impossible to cull losing
|> captures, since they can't be "discovered" using mvv/lva. In
|> Crafty's case, using SEE is about 2x faster than using mvv/lva
|> if you take the forward pruning of losing captures into
|> consideration.

As I said, it boils down to the question whether forward
pruning in the quiescence is a good idea. A factor of 2 won't
buy you another full-width ply in the general case, and as FHH has
noted, overloaded and pinned pieces are really a problem, because often
a SEE won't see that a capture works out fine, but a search will.

|> From the above, you can draw your own conclusions,
|> as it's far from clear, if you aren't willing to prune the captures
|> that appear to lose material.

I agree, it is far from clear in that case. Obviously SEE ordering is
marginally better in terms of node count, but it is more complex than
mvv/lva. The cost of mvv/lva capture generation adds very little
overhead to the basic capture move generation (when using bitboards).
In short, we all have to generate captures, but one ordering forces us to
generate them all and to assess them before searching, while the other
ordering just generates them in an ordered sequence, one by one. Since I
don't want to use forward pruning (well, never say never) I stick with
the neat small code...

Maybe one day you will win prizes for small and fast code instead
of for winning games (^8


-- Peter

------------------------------------------------------------
Peter W. Gillgasch
Klosterweg 28/I-113 email gil...@ira.uka.de
76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

Robert Hyatt

Aug 15, 1995

sorry for the lateness of this post, but we've been "upgrading" our
news software and one of the new "features" was the inability to post
for several days... :^)

In article <40gcka$m...@casaba.srv.cs.cmu.edu>,
Feng-Hsiung Hsu <fh...@cs.cmu.edu> wrote:
>In article <40c4t5$6...@pelham.cis.uab.edu>,
>Robert Hyatt <hy...@willis.cis.uab.edu> wrote:
>>   normal          all captures     mvv/lva
>  (SEE pruning)   (normal quies)
>                  (SEE ordering)

>> ------- -------- --------
>> 6447932 11518411 12689516
>

>So the mystic factor of two actually comes from the Static
>Exchange Evaluation pruning. Personally, I think Cray Blitz
>probably was burned more times by the pruning than Bob realized.
>Capture quiescence search is not perfect, but it won't be fooled
>into believing that a pinned or overloaded piece guards anything.
>Maybe Bob has a SEE implementation that won't be fooled, but I doubt
>it.

Right, but a capture search makes *lots* of errors. It's the grossest
sort of forward-pruning that there is. I'm trying to err on the side
of conservatism. Material wins or losses ought to be scouted out by
the normal search, where all moves are tried. Letting the q-search
win things is just as risky as what we do. It turns out there are
lots of programs that are culling losing captures in q-search.
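The scheme under debate — a capture-only quiescence that also culls
captures whose swap-off value says they lose material — looks
schematically like this (C-like pseudocode; Evaluate, GenerateCaptures,
Swap and the move routines are generic placeholders, not any particular
program's API):

```
int Quiesce(POSITION *p, int alpha, int beta)
{
    int value = Evaluate(p);              /* stand-pat score */
    if (value >= beta) return beta;
    if (value > alpha) alpha = value;

    GenerateCaptures(p);                  /* captures only -- already the
                                             "grossest" forward pruning */
    for (each capture m, in SEE order) {
        if (Swap(p, m) < 0) continue;     /* cull apparently losing
                                             captures (the 2x saving) */
        MakeMove(p, m);
        value = -Quiesce(p, -beta, -alpha);
        UnmakeMove(p, m);
        if (value >= beta) return beta;
        if (value > alpha) alpha = value;
    }
    return alpha;
}
```

Hsu's objection is to the `Swap(p, m) < 0` line; Hyatt's 2x figure comes
from the subtrees that line never searches.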

>
>It seems to me that unless one believes in SEE pruning (I don't),
>one is better off sticking with mvv/lva. For that matter, mvv/lva
>might be more than a factor of two faster with a good implementation.

However, Swap() in Crafty is <5% of the *total* search time, for
either of the two cases where it was used above. If you drive that
to zero with mvv/lva, the best I would see is 5%.

Robert Hyatt

Aug 15, 1995

Ditto for this one.

In article <40gv3u$2...@nz12.rz.uni-karlsruhe.de>,
Peter Gillgasch <gil...@ira.uka.de> wrote:
>
>Big thanks to Bob for looking into this. The setup of the experiment
>is wonderful, because it takes the forward pruning that was done
>in Cray Blitz/Crafty into account.
>
>|> normal all captures mvv/lva
>
>[position by position results snipped]
>

>|> ------- -------- --------
>|> 6447932 11518411 12689516
>|>
>

<snip> sorry, had to cut some relevant text. Another new
feature of this stupid news system upgrade is that it has some
"sense" about the ratio of old text (quoted) to new text.

>Surprising, eh?

Yes it was; however, I've had *lots* of surprises with Crafty.
It was, to quote a software engineering bible, "designed for
change" to make these tests much easier. Lots of modern "lore"
has turned out to be different from what I expected, some good,
some bad. I doubt I've seen the last "surprise" yet.

>
>As I said, it boils down to the question wether forward
>pruning in the quiesence is a good idea. A factor of 2 won't
>buy you another full-width ply in the general case and as FHH has
>noted overloaded and pinned pieces are really a problem, because often
>a SEE won't see that a capture works out fine, but a search will.

But a search will also fall for other sucker moves if it looks at
only captures. After all, this is just about the "grossest" form
of forward pruning there is, culling everything but captures. My
quiescence can reliably find mates, as that's pretty easy to do,
but to find anything else is risky, due to the example I discussed
when I started this off. I've played chess long enough to note that
many times, the best move is *not* a capture (which the opponent
often expects) but, rather, is something else entirely. The q-search
misses these. If I have to be wrong, I want to be *fast* and wrong
rather than *slow* and wrong. The *fast* might just let me get to
the next iteration and find out something better.

>
>I agree, it is far from clear in that case. Obviously SEE ordering is
>marginally better in terms of node count, but it is more complex than
>mvv/lva. The cost of mvv/lva capture generation adds very little
>overhead to the basic capture move generation (when using bitboards).
>In short, we all have to generate captures, but one ordering forces us to
>generate them all and to assess them before searching, while the other
>ordering just generates them in an ordered sequence, one by one. Since I
>don't want to use forward pruning (well, never say never) I stick with
>the neat small code...

However, I'll bet that generating all captures, ignoring the ordering
code, is a lot cleaner. You don't have to save masks to get you to
the next capture. Generate_Moves() is, as a result, easier to read
and cleaner. The code to sort the captures is all of 20 lines (not
counting Swap(), which is also very short thanks to these bitboards).
In any case, on to the next discussion.
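For readers who haven't seen one, the swap-off calculation at the heart
of a Swap()-style routine can be sketched without any bitboard
machinery. This is a generic illustration of static exchange
evaluation, not Crafty's code; the part bitboards make cheap —
collecting the attackers of the target square, cheapest first for each
side — is assumed already done by the caller:

```c
#include <assert.h>

/* target_value:        value of the piece on the capture square
   first_attacker:      value of the piece making the initial capture
   recapturers[0..n-1]: values of the subsequent recapturing pieces,
                        in the order they would capture, sides
                        alternating */
int see(int target_value, int first_attacker,
        const int *recapturers, int n)
{
    int gain[34];   /* speculative material balance after each capture */
    int d, moved = first_attacker;

    gain[0] = target_value;
    for (d = 1; d <= n; d++) {
        /* each recapture wins the piece that just moved, minus
           whatever the other side had accumulated so far */
        gain[d] = moved - gain[d - 1];
        moved = recapturers[d - 1];
    }
    /* minimax backward: either side may decline to recapture */
    while (--d > 0)
        gain[d - 1] = -(gain[d] > -gain[d - 1] ? gain[d] : -gain[d - 1]);
    return gain[0];   /* material outcome of the initial capture */
}
```

A negative return is exactly what the forward pruning discussed in this
thread throws away.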

Robert Hyatt

Aug 15, 1995

In article <40rkhd$o...@watnews1.watson.ibm.com>,
Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:

>In article <40qie3$e...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>>Right, but a capture search makes *lots* of errors. It's the grossest
>>sort of forward-pruning that there is. I'm trying to err on the side
>>of conservatism. Material wins or losses ought to be scouted out by
>>the normal search, where all moves are tried. Letting the q-search
>>win things is just as risky as what we do.
>
>A 10-ply search still makes *lots* of errors, but that does not make
>it "as risky as" doing no search at all. The error from SEE pruning is
>a lower order one, that is, it requires fewer conditions to occur and
>occurs far more often. Also, to err on the side of "conservatism" is
>precisely the wrong thing to do when you are doing "dynamic" selective
>extensions. You want the quiescence search to discover possible problems,
>not to bury its head in the sand.

>
>>However, Swap() in Crafty is <5% of the *total* search time, for
>>either of the two cases where it was used above. If you drive that
>>to zero with mvv/lva, the best I would see is 5%.
>
>Check again. I believe you are ignoring some hidden cost, or possibly
>some opportunity cost. Can you honestly say that if you had written
>your code to use mvv/lva from the very beginning, it would not be faster
>by far more than just 5%? With mvv/lva, you only need to compute the
>moves for ONE side at the frontier nodes, not for BOTH sides. Just a
>simple example.
>

Remember that I'm computing the full set of attack bitboards, both
the ones that give attacks from a square to all squares it attacks,
and the ones that give all the attacks to a given square. As a result,
swap() has little to do, and is actually <5% for the entire set of
Kopec problems plus about 30 Win at Chess positions, with the profile
run for the whole problem set as one execution.

With or without mvv/lva, I don't have to compute any moves for either
side at frontier nodes; it's all part of the bitboard logic anyway. Also,
yes, it was originally written using mvv/lva, but I converted when I found
it was slightly faster to use the old Cray Blitz SEE algorithm, and,
once I did this, it was a trivial change to forward prune the
losing captures.

I'm still willing to bet that looking at losing captures in the
q-search, and measuring effectiveness, will show that there's not
much point without other types of moves included to improve the
accuracy. There will be errors in both directions, I just haven't
convinced myself that ignoring losers produces more errors than
not doing so. I suspect that those positions where the node counts
drop with mvv/lva compared to SEE represent positions where the
q-search finds a cute refutation that omitting losers doesn't find,
but the question is, does the refutation *really* work or not? It's
a difficult question to answer quantitatively, as I really don't know
how to measure the difference. I have (had) some positions that
broke the q-search if it considered all captures, and I had some
that broke it if it didn't. The factor of 2x was the convincing (to
me) argument that has led me to doing what I do. In fact, I'd be
willing to bet that the null-move search leads to *far* more tactical
errors than pruning the losers. I encountered enough null-move killer
positions to choke a horse, before I simply started ignoring them. Then
I went to R=2 and found more. And then futility cutoffs, and .....

Feng-Hsiung Hsu

Aug 16, 1995

In article <40rrkr$f...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>Remember that I'm computing the full set of attack bitboards, both
>the ones that give attacks from a square to all squares it attacks,
>and the ones that give all the attacks to a given square. As a result,
>swap() has little to do, and is actually <5% for the entire set of
>kopec problems plus about 30 win at chess positions, with profile
>run for the whole problem set as one execution.
>
>with or without mvv/lva, I don't have to compute any moves for either
>side at frontier nodes, it's all part of the bitboard logic anyway. Also,

But you did compute the moves for BOTH sides when you compute the attack
bitboards for both. At the frontier nodes, you only need to partially
update the attack boards.

>I'm still willing to bet that looking at losing captures in the
>q-search, and measuring effectiveness, will show that there's not
>much point without other types of moves included to improve the
>accuracy. There will be errors in both directions, I just haven't

Whenever you have a pinned or overloaded piece, SEE pruning is royally
screwed up. The worst part of it is that the condition cannot be
easily removed by search extensions, and also when it happens, it tends
to propagate to the top of the tree as the condition is usually somewhat
permanent.

Steven J. Edwards

Aug 16, 1995

f...@sawmill.watson.ibm.com (Feng-Hsiung Hsu) writes:

>In article <40rrkr$f...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>>Remember that I'm computing the full set of attack bitboards, both
>>the ones that give attacks from a square to all squares it attacks,
>>and the ones that give all the attacks to a given square. As a result,
>>swap() has little to do, and is actually <5% for the entire set of
>>kopec problems plus about 30 win at chess positions, with profile
>>run for the whole problem set as one execution.
>>
>>with or without mvv/lva, I don't have to compute any moves for either
>>side at frontier nodes, it's all part of the bitboard logic anyway. Also,

>But you did compute the moves for BOTH sides when you compute the attack
>bitboards for both. At the frontier nodes, you only need to partially
>update the attack boards.

A problem here is that it may be the case that a node initially
classified as being a frontier node may have its status changed as a
result of static evaluation. This depends upon exactly what one means
by "frontier" and exactly what a static evaluation can return. If the
status changes, it may require more work overall than if just all the
attack bitboards and other items were unconditionally updated.

Also, it's very handy to have all the bitboard stuff available for the
positional factors evaluator.

>>I'm still willing to bet that looking at losing captures in the
>>q-search, and measuring effectiveness, will show that there's not
>>much point without other types of moves included to improve the
>>accuracy. There will be errors in both directions, I just haven't

>Whenever you have a pinned or overloaded piece, SEE pruning is royally
>screwed up. The worst part of it is that the condition cannot be
>easily removed by search extensions, and also when it happens, it tends
>to propagate to the top of the tree as the condition is usually somewhat
>permanent.

I partially agree with this, as it is very difficult to write a
reliable and accurate SEE. But it is possible to keep track of pinned
pieces, something that is not too difficult to do with bitboards.
Overworked pieces are a bit harder, but still possible. I recall from
many years ago the SEE code in the Spracklen program Sargon, where pins
of pieces against the king and queen were calculated immediately prior
to SEE. Dan Spracklen commented that while this took some time to
compute, it significantly improved the quality of play. However,
Sargon was an early work and did not have much in the way of dynamic
extensions or many other features seen in more modern programs, and so
the utility of their SEE may not be as good in conjunction with more
recent techniques.
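Keeping track of pins amounts to scanning outward from the king along
each sliding-piece ray: if exactly one friendly piece stands between
the king and a suitable enemy slider, that piece is pinned. A minimal
sketch on a plain 64-square array (bitboards make the ray scans
cheaper, but the logic is the same; the piece codes and layout here
are illustrative, not from Sargon or Spector):

```c
#include <assert.h>

/* square = rank * 8 + file; white pieces positive, black negative */
enum { EMPTY = 0, W_PAWN = 1, W_KNIGHT, W_BISHOP, W_ROOK, W_QUEEN, W_KING,
       B_BISHOP = -2, B_ROOK = -3, B_QUEEN = -4 };

static int on_board(int rank, int file)
{
    return rank >= 0 && rank < 8 && file >= 0 && file < 8;
}

/* Returns the square of a white piece absolutely pinned against the
   white king, or -1 if none.  Directions 0-3 are rook rays, 4-7
   bishop rays. */
int find_pinned(const int board[64], int king_sq)
{
    static const int dr[8] = { 1, -1, 0, 0, 1, 1, -1, -1 };
    static const int df[8] = { 0, 0, 1, -1, 1, -1, 1, -1 };
    int d, r, f, p, blocker;

    for (d = 0; d < 8; d++) {
        blocker = -1;
        r = king_sq / 8 + dr[d];
        f = king_sq % 8 + df[d];
        for (; on_board(r, f); r += dr[d], f += df[d]) {
            p = board[r * 8 + f];
            if (p == EMPTY)
                continue;
            if (p > 0) {                      /* friendly piece */
                if (blocker != -1)
                    break;                    /* two blockers: no pin */
                blocker = r * 8 + f;
            } else {                          /* enemy piece ends the ray */
                int rook_ray = (d < 4);
                if (blocker != -1 &&
                    (p == B_QUEEN ||
                     (p == B_ROOK && rook_ray) ||
                     (p == B_BISHOP && !rook_ray)))
                    return blocker;           /* the blocker is pinned */
                break;
            }
        }
    }
    return -1;
}
```

A SEE that consulted such a detector could at least refuse to count a
pinned piece as a defender, which is the failure mode Hsu keeps
pointing at.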

Spector uses an open-ended (no fixed depth) capture search using a
specialized offset generator that is optimized for speed. It only
knows about material balance, check evasion, and
checkmates/stalemates, but it is quite accurate in its domain. (It
automatically picks up on pins and overworked pieces.) However,
because it is open depth, it is unsuitable for general use because of
the hazard of combinatorial explosion. Therefore, it is limited to
providing move ordering at certain nodes. These include the
root node, all of its first-time descendants, and most nodes with a
nonzero-width A-B window (Spector uses PVS).

Spector's regular quiescence search is more standard and looks mostly
at equal and winning captures along with a limited number of other
move classes. The ordering uses the bitboards to guess at likely
exchange outcomes, to encourage the movement of hung pieces to
unattacked squares, and a few other heuristics. The q-search rarely
blows up and usually makes up slightly less than half of all nodes
searched.

We have to keep in mind that the purpose of a q-search is much
different from that of a regular search. If it weren't, we might as
well just work on the regular search only.

Perhaps we shouldn't even expect a q-search to return a score. I
think that when humans do a mental q-search, they are more interested
in answering the question of whether or not further evaluation of the
current line is indicated. So, we might build on this and have a
q-search that returned, instead of a score or a simple bound,
something like:

1) Whether or not further search is indicated along the current line

2) A confidence factor for the decision in #1

In a way, such a new q-search replaces the code that was used for
determining whether or not various dynamic extensions are made during
the "regular" search. Instead of being applied only at the endpoints
of a regular search, it is applied at all of the nodes and determines
which ones will be endpoints.

-- Steven (s...@mv.mv.com)

Feng-Hsiung Hsu

Aug 17, 1995

In article <DDF6v...@mv.mv.com> s...@mv.mv.com (Steven J. Edwards) writes:
>by "frontier" and exactly what a static evaluation can return. If the
>status changes, it may require more work overall than if just all the
>attack bitboards and other items were unconditionally updated.

The keyword here is "may". If you are careful enough, there should be
no difference.

>Also, it's very handy to have all the bitboard stuff available for the
>positional factors evaluator.

But with a state of the art program, you probably only need the positional
evaluation 1/5 to 1/10 of the time.

Robert Hyatt

Aug 17, 1995

In article <40td2j$5...@watnews1.watson.ibm.com>,
Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:
>In article <40rrkr$f...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>>Remember that I'm computing the full set of attack bitboards, both
>>the ones that give attacks from a square to all squares it attacks,
>>and the ones that give all the attacks to a given square. As a result,
>>swap() has little to do, and is actually <5% for the entire set of
>>kopec problems plus about 30 win at chess positions, with profile
>>run for the whole problem set as one execution.
>>
>>with or without mvv/lva, I don't have to compute any moves for either
>>side at frontier nodes, it's all part of the bitboard logic anyway. Also,
>
>But you did compute the moves for BOTH sides when you compute the attack
>bitboards for both. At the frontier nodes, you only need to partially
>update the attack boards.

Depends. I use these attack maps for *both* sides in the eval. And
I'm (currently) doing it incrementally, so the cost is not particularly
high (<20% of the search time is in make_move now).


>
>>I'm still willing to bet that looking at losing captures in the
>>q-search, and measuring effectiveness, will show that there's not
>>much point without other types of moves included to improve the
>>accuracy. There will be errors in both directions, I just haven't
>
>Whenever you have a pinned or overloaded piece, SEE pruning is royally
>screwed up. The worst part of it is that the condition cannot be
>easily removed by search extensions, and also when it happens, it tends
>to propagate to the top of the tree as the condition is usually somewhat
>permanent.
>

I agree. But when you have potential pins, *any* capture search is
going to be wrong, since frequently a capture might not be the best
response to a capture, but that's the only move the q-search can try.
I can't begin to defend this losing capture code, except to say that
I've done it for about 15 years, and have yet to find a position where
turning it off has made a difference. I'm sure they exist, but by the
same token, turning it off slows things down enough that that too causes
problems. A catch-22.


I've been studying tactical errors made by Crafty for several months
now on ICC. I have the ability to turn off the forward pruning in
the q-search, turning off null moves, futility cutoffs, etc. In
nearly every case I've studied in depth, it was either (a) lack of
depth or (b) null-move problems. The null-move is *really* showing
itself to be much more dangerous than many have suspected. I'm
using R=2, which is faster than R=1, but which is also more dangerous.
Typically, R=1 cures many tactical problems, but adds others by
decreasing the overall depth Crafty reaches. Yet another catch-22.
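The null-move idea being weighed here: before searching real moves, let
the side to move pass, search the result to a depth reduced by R, and
cut off if even a free move for the opponent can't get the score below
beta. Schematically (C-like pseudocode with placeholder routines; R = 1
or 2 as in the discussion):

```
/* inside Search(), before trying real moves: */
if (depth > 1 && !InCheck(p) && OkToTryNull(p)) {
    MakeNullMove(p);                      /* side to move passes */
    value = -Search(p, -beta, -beta + 1, depth - 1 - R);
    UnmakeNullMove(p);
    if (value >= beta)
        return beta;                      /* opponent can't exploit a
                                             free move: prune */
}
/* larger R => shallower verification => faster, but more likely to
   miss zugzwang and deep threats -- the R=1 vs R=2 trade-off above */
```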

Feng-Hsiung Hsu

Aug 18, 1995

In article <411280$i...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>Depends. I use these attack maps for *both* sides in the eval. And

But you only need the full eval for about 10-20% of the nodes.

>I agree. But when you have potential pins, *any* capture search is
>going to be wrong, since frequently a capture might not be the best
>response to a capture, but that's the only move the q-search can try.

The capture search is wrong in this case because it fails to see 2-ply
combinations, while SEE pruning fails to see 0-ply combinations (from
the viewpoint of the capture quiescence). Draw your own conclusion
about the frequency and the seriousness of the errors.


Robert Hyatt

Aug 20, 1995

In article <412jff$o...@watnews1.watson.ibm.com>,

So far, I believe that the *frequency* of the above two types of
errors is similar. SEE overlooks (in particular) overloaded
pieces. MVV/LVA (or simply non-SEE forward pruning) overlooks
positions where the best response to a capture is not a capture.
It could be a pin, a check, a threat, or anything else. That's
not a small class of positions either. At least, for the present,
SEE forward pruning is about 2x faster for me. Until I find a good
way to evaluate the two in some quantitative fashion, or until I
find a circumstance where SEE causes problems that not using it
doesn't, that's a hard performance gain to turn down.

Note that I don't claim that SEE is better, because I simply don't
know. All I claim is that I've used it long enough to trust it to
do no worse than looking at all captures, and then I take the speed
advantage and search deeper, when 2x is enough to help.

Also, as I've said, many others have independently come to this
same conclusion, whether or not it's right or whether or not it's
supported by hard analysis.

Robert Hyatt

Aug 23, 1995

In article <41cr1g$i...@watnews1.watson.ibm.com>,
Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:

>In article <417jp6$l...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>>So far, I believe that the *frequency* of the above two types of
>>errors is similar. SEE overlooks (in particular) overloaded
>>pieces. MVV/LVA (or simply non-SEE forward pruning) overlooks
>>positions where the best response to a capture is not a capture.
>
>SEE pruning overlooks
> 1) discovered captures by opponent.
> a) Using a pinned piece as a guard, thus creating a new capture
> by opponent.
> b) Opponent's early capture allows a new xrayed capture by opponent.
> 2) illusionary captures by self
> Overloaded pieces that can no longer participate in the capture
> sequence.
> 3) the best response to a capture is not a (SAFE) capture.
> That is, something that requires multiple plies, unless you get
> out of the capture paradigm. This is the very same class
> of problems that MVV/LVA could not resolve. Basically, this
> is failing to see multiple-ply combinations.
>
>SEE pruning misses 0-ply combinations as well as 2-ply combinations. You
>were just observing that MVV/LVA fails to see 2-ply combinations, but so does
>SEE pruning. Remember further, the 0-ply combinations that SEE pruning misses
>could happen anywhere in the quiescence search, and may require many plies
>before a SEE pruned program can discover its error.
>
>I always had a nagging suspicion that Cray Blitz played its boo-boo move
>precisely because of SEE pruning in the game against ChipTest, because
>neither ChipTest nor Hitech at the time ever considered the Cray Blitz's
>move to be sound. Also, I was not entirely joking when I asked whether
>SEE pruning was responsible for the 10-move gap...


I can answer this last easily. The answer is a resounding "NO".
That move fell into the same "class" as a move we made in 1984,
when playing Nuchess in LA. We retreated a knight to the first
rank in a lost position. Two moves later, this knight just
happened to be in the right place to help CB trade away *all*
pieces and end up in an ending where we had a pawn Nuchess couldn't
catch. We've run this problem over and over, and it has *never*
played this move again, regardless of time control, etc. Ditto
for the ChipTest game. Harry and I ran that over and over and
over, and it never failed high there again and changed its mind
as it did during the actual game. We have simply attributed both
moves to either (a) a parallel processing search error or (b) the
famous non-deterministic behavior that parallel searches produce
from time to time.

I wasn't joking when I responded either! :^) Remember, in the
game we are talking about (not the chiptest game) you saw the
cute bishop problem around 10 moves before us, just about like
you saw the wild combination against *socrates two years ago.

As I said, I don't have *any* hard evidence that our forward
pruning in the q-search has caused any problems, but I can say
that for the past 6 months, every problem position I've looked
at was cured by eliminating the null-move search, or else by
reducing to R=1. Personally, I despise *any* forward pruning,
but against humans (at least) it seems to pay off. I'm now
trying to get Crafty's rating stabilized by restricting the
time-controls. So far it's staying at about 2550 running on our
sun Sparc. After a week or so to establish the variation, I'll
turn the forward pruning off to see how the rating changes. I
will go through the on-again off-again test several times to make
sure that the rating follows the changes and not some external
effect. I haven't found any better way to test ideas than this,
but it is *slow* since I have to go through the "cycle" many times
to produce a high confidence that a rating change is caused by the
modification I'm working on, rather than by other things like the
particular mix of players, player's moods, player's alertness, etc.

At least I can shed some light on this, although I'm not sure just
how much. More once I am convinced that Crafty is going to stay
around 2550. One real problem I have is the SPARC is *not* dedicated
which adds yet another wrinkle into the soup.

I can say, and I'm not completely sure why just yet, that Crafty is
about 2 plies faster on the Cray than Cray Blitz was. It's only
5x faster computationally, so the rest is coming from either better
ordering or something (I'm not factoring in the faster T90 machine
here, I'm assuming same machine for both.) This, too, is going to
take some analysis, although the trees are nearly impossible to
look at and compare. It's currently solving around 275 of the WAC
problems in under 1 minute on my notebook machine, which is way
better than what CB could pull off on this machine. I haven't had
enough time to run the full test on a Cray yet. I ran the WAC
positions with and without SEE forward pruning (on my notebook) and
actually solved 7 fewer with SEE pruning turned off due to slowing
the search down. More on this later as well, as I am going to run
the whole thing with lots of different options and post the results,
such as R=2, R=1, R=0 (null move disabled) among other things.

Bob

By the way, may your chips have at least one flawed transistor
each. :^) [just kidding... ]

Thomas Kerrigan

unread,
Aug 27, 1995, 3:00:00 AM8/27/95
to
This is inane. Of course pruning misses things. Hsu, just tell Hyatt that
his precious quiescence scheme misses all non-captures.

Hyatt just recently posted results for WAC that demonstrate that SEE is a
significant improvement. Also, my own SEE does NOT affect the PV in any
WAC or BK position, and the scores that are returned deviate less than
8%, with the largest deviation at 4 millipawns. At the same time, the
ordering and pruning significantly cut my tree size. Because of this, I
plan to continue using it.
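For readers following along, the SEE being discussed is usually implemented with the classic "swap" algorithm. The sketch below is a generic illustration, not Kerrigan's or Crafty's actual code; the piece values and the flattened `attackers[]` interface are assumptions made to keep it self-contained.

```c
#include <assert.h>

/* Static exchange evaluation via the classic "swap" algorithm.
 * target: value of the piece sitting on the contested square.
 * attackers[]: values of the capturing pieces, in the order they
 * would capture (side to move first, then alternating sides).
 * Returns the expected material balance for the side to move,
 * letting either side stop capturing when recapturing loses material. */
int see(int target, const int attackers[], int n)
{
    int gain[32];
    int d = 0;
    gain[0] = target;
    while (d + 1 < n) {               /* speculative capture chain */
        d++;
        gain[d] = attackers[d - 1] - gain[d - 1];
    }
    while (d > 0) {                   /* minimax backward: either side may decline */
        int stop = -gain[d - 1], go = gain[d];
        gain[d - 1] = -(go > stop ? go : stop);
        d--;
    }
    return gain[0];
}
```

An SEE-pruned quiescence search then simply skips captures whose `see()` value is negative; that skip is exactly the forward pruning this thread is arguing about.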

Cheers,
Tom

------------------------------------------------------------------------------
"You can bring any calculator you like to the midterm, as long as it
doesn't dim the lights when you turn it on." -Hepler, Systems Design 182

Thomas C. Kerrigan
'kerr...@alpha.pr1.k12.co.us'

Robert Hyatt

unread,
Aug 27, 1995, 3:00:00 AM8/27/95
to
In article <41q2o5$o...@alpha.pr1.k12.co.us>,

Thomas Kerrigan <kerr...@alpha.pr1.k12.co.us> wrote:
>This is inane. Of course pruning misses things. Hsu, just tell Hyatt that
>his precious quiescence scheme misses all non-captures.
>


Note: my q-search does *not* miss all non-captures... :^) It has lots
of checks and pawn-pushes thrown in. It *does* probably miss lots of
"important" non-captures, just like all q-searches. I'm still wading
through the two logs to see if MVV/LVA or SEE show any particularly
interesting differences, other than the speed issue. So far, no, but
the logs are big and the position-by-position analysis is slow, tedious,
and, most importantly, *boring*. :^)
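As a reference point for the comparison in the logs: MVV/LVA ordering needs no board scan at all, since it ranks captures purely by victim and attacker values. A minimal sketch, with illustrative (assumed) centipawn values:

```c
/* MVV/LVA: most valuable victim first; among equal victims,
 * least valuable attacker first.  Higher score = searched earlier.
 * Scaling the victim by 100 keeps victim value dominant. */
int mvv_lva(int victim_value, int attacker_value)
{
    return victim_value * 100 - attacker_value;
}
```

So PxQ sorts ahead of RxQ, and any capture of a queen ahead of any capture of a rook, regardless of whether the capture is sound, which is exactly why it differs from SEE ordering.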

Feng-Hsiung Hsu

unread,
Aug 29, 1995, 3:00:00 AM8/29/95
to
In article <41q2o5$o...@alpha.pr1.k12.co.us> kerr...@alpha.pr1.k12.co.us (Thomas Kerrigan) writes:
>This is inane. Of course pruning misses things. Hsu, just tell Hyatt that
>his precious quiescence scheme misses all non-captures.

You should have read more carefully. Bob was asserting that mvv/lva
capturing quiescence misses certain types of combinations involving
non-capturing moves which pin, fork, and so on, and therefore is faulty.
I was just pointing out SEE pruning only searches pseudo-"safe" captures
and is subject to the same error along with some error that involves 0-ply
combinations anywhere inside a SEE-pruned quiescence search.

Bob is further asserting that he has not ever seen a position where
a SEE-pruned search produces a "worse" move, which I find hard to
believe unless Bob was not looking very carefully. One thing that
had puzzled me for a long time was that games against Cray Blitz always
seemed far easier than Cray Blitz's search depth would suggest.
Is it SEE pruning the culprit? Or is it something else?

One thing that led me to suspect SEE pruning is the following observation.
One out of 3 or 4 test games that we played against null-move pruning
programs ended up in some sort of middle game zugzwang positions. I
suspect some of the games we played against Cray Blitz involved some
positions where SEE pruning was killing Cray Blitz.


Robert Hyatt

unread,
Aug 29, 1995, 3:00:00 AM8/29/95
to
In article <41v3j4$16...@watnews1.watson.ibm.com>,
Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:

>You should have read more carefully. Bob was asserting that mvv/lva
>capturing quiescence misses certain types of combinations involving
>non-capturing moves which pin, fork, and so on, and therefore is faulty.
>I was just pointing out SEE pruning only searches pseudo-"safe" captures
>and is subject to the same error along with some error that invloves 0-ply
>combinations anywhere inside a SEE-pruned quiescence search.
>

Not exactly. I asserted that *both* miss lots of things, from going over
enough games to choke a small horse. My q-search is "perfect" when looking
for mates. It never finds a non-existent mate, nor overlooks one unless
there's a quiet move in the middle (non-capture/non-check.) It never falls
for things like 8 checks, a capture (non-check) followed by 3 more checks
and mate, and then announces a mate. That part's easy. However, that's
the *only* thing I trust it to do. I don't trust it to find ways to win
material, since it is way too selective, no matter what I do, because
captures are simply not the only thing that plays a part in the tactical
variations. I want the basic search to find the cute tricks, and then the
q-search to pick up any hanging material, and, basically, clean up after the
real search ends. Trusting any type of capture-only search is asking for
trouble.
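The control flow being described, a stand-pat cutoff plus capture-only expansion, can be sketched abstractly. The `Pos` type below is a hypothetical stand-in for real move generation (it just replays a fixed tree of capture outcomes); it is not Crafty's q-search, only the skeleton every capture search shares.

```c
#include <assert.h>

/* Toy position: a static eval plus the positions reached by each
 * capture.  A real engine generates captures from the board; this
 * stub exists only to make the control flow runnable. */
typedef struct Pos {
    int eval;                  /* stand-pat score, side to move's view */
    int ncaps;
    const struct Pos *caps;    /* positions after each capture */
} Pos;

/* Capture-only quiescence search with a stand-pat cutoff. */
int qsearch(const Pos *p, int alpha, int beta)
{
    int stand_pat = p->eval;
    if (stand_pat >= beta)
        return beta;           /* fail high: already good enough, stop */
    if (stand_pat > alpha)
        alpha = stand_pat;     /* side to move may decline all captures */
    for (int i = 0; i < p->ncaps; i++) {
        int score = -qsearch(&p->caps[i], -beta, -alpha);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
```

The stand-pat line is what makes a capture search "selective": any quiet move that matters (a pin, a fork, a check) is invisible here, which is Bob's point about trusting it only to sweep up hanging material.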

>Bob is further asserting that he has not ever seen a position where
>a SEE-pruned searches produces a "worse" move, which I find hard to
>believe unless Bob was not looking very carefully. One thing that
>had puzzled me for a long time was that games against Cray Blitz always
>seemed far easier than Cray Blitz's search depth would suggest.
>Is it SEE pruning the culprit? Or is it something else?

I'm trying to recall, but I remember three specific games we played that
come to mind. The first, we should have won. Some typical parallel
programming glitch made it change its mind at the last minute (Orlando,
I think) and play a really bad move. We've run this over and over and it
always played the right move after that. Chalk it up to lack of testing,
caused by not having enough access to the machine to thoroughly exercise
the parallel code. A disaster waiting to happen. The next game was another
loss, where we simply played about as bad an opening as we've ever done.
The third was the famous "bishop trap" and was a result of singular
extensions working particularly well, letting you see the outcome way before
we did. None of those had any type of overloaded pieces, pins, and the
like that I remember. You simply outplayed us. However, if you look back
through earlier years, while we were still using SEE forward pruning in the
q-search, we were able to beat programs searching significantly faster than
we were, such as Belle in 1983/84 (160K nodes vs our 20K in 1983), Hitech in
1986 (searching 175K to our 80K), to name a few. I'm not
"hyper" about forward pruning in the capture search, however, you have to
admit, while the WAC problem set is very easy for machines, it is a tactical
tour de force in pins, overloaded pieces, etc. I would expect that if SEE
was inferior to MVV/LVA (probably a bad label, maybe I should be saying
forward pruning vs non-forward-pruning, since SEE can work with MVV/LVA
just fine) that there would be several of the problems in this set that
SEE would confuse. I'm still comparing, but so far, none have shown up.
This comparison includes eyeball comparisons of scores, depths, PV's, and
so forth. To me, that's pretty compelling evidence. I wouldn't bet a lot
of money that SEE won't screw up, but neither would I bet that non-forward
pruning won't either.


>
>One thing that led me to suspect SEE pruning is the following observation.
>One out of 3 or 4 test games that we played against null-move pruning
>programs ended up in some sort of middle game zugzwang positions. I
>suspect some of the games we played against Cray Blitz involved some
>positions where SEE pruning was killing Cray Blitz.

Now I'm confused. I more than agree with the above, as I've said before,
because I notice many errors caused by null-move, most of which do not
involve zugzwang at all. They simply let the depth-reduction R reduce
the search depth to a point where the threat is not seen, and it happens
often. FAR more often than the literature would have us believe. I've
got a set of problems that break null-move searches, and one day plan to
test this thoroughly. At present, with Crafty running on the SPARC, and
reaching depths of 7-8 plies at speed-chess time controls (10 secs
per move or less) the null-move speedup seems to be beneficial, even
with the tactical blunders it causes. However, running on the Cray,
with its factor of roughly 500 in computing power advantage when running
Crafty, this results in 3-4 plies of additional depth. If null-move
is worth 1.5 plies (factor of 10 is what I'm typically seeing) then
dropping it on the SPARC will reduce the depth to 5-6. Pretty significant.
I wonder, however, if dropping two plies on the Cray wouldn't be a good
trade, since we would still reach 10 quickly, but with fewer errors
thrown in. This sort of testing is on my to-do list.

I said I'm confused, because we were talking about SEE forward pruning in
the q-search, and then you tossed out the null-move problem. They don't
seem to be related to me. Null-move search ranks right up there with the
trans/ref table, as something I "tolerate" but dream about throwing out
because of the "features" they add.

Bob

jwes on BIX

unread,
Aug 30, 1995, 3:00:00 AM8/30/95
to
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>In article <41v3j4$16...@watnews1.watson.ibm.com>,
>Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:

>>One thing that led me to suspect SEE pruning is the following observation.
>>One out of 3 or 4 test games that we played against null-move pruning
>>programs ended up in some sort of middle game zugzwang positions. I
>>suspect some of the games we played against Cray Blitz involved some
>>positions where SEE pruning was killing Cray Blitz.

>I said I'm confused, because we were talking about SEE forward pruning in
>the q-search, and then you tossed out the null-move problem. They don't
>seem to be related to me. Null-move search ranks right up there with the
>trans/ref table, as something I "tolerate" but dream about throwing out
>because of the "features" they add.

I think that what Feng-Hsiung Hsu is saying is that evaluation errors will
attract the search toward the positions in which they occur, because half
the time they return a significantly higher value than the actual one.
(With null-move, the errors may be virtually all on one side.)
jw...@bix.com

Johanes Suhardjo

unread,
Sep 1, 1995, 3:00:00 AM9/1/95
to
On 29 Aug 1995, Robert Hyatt wrote:
> I said I'm confused, because we were talking about SEE forward pruning in
> the q-search, and then you tossed out the null-move problem. They don't
> seem to be related to me. Null-move search ranks right up there with the
> trans/ref table, as something I "tolerate" but dream about throwing out
^^^^^^^^^^^^^^^

> because of the "features" they add.

I thought that the hash table was always a plus without any side effects, just
like storing numbers in a calculator for later use. Could you elaborate
more on this?

P.S. Like some others, I would like to express my thanks to Mr. Hyatt for
this very educational forum.


Johanes Suhardjo (joh...@farida.cc.nd.edu)
--
Lackland's Laws:
(1) Never be first.
(2) Never be last.
(3) Never volunteer for anything

Robert Hyatt

unread,
Sep 1, 1995, 3:00:00 AM9/1/95
to
In article <Pine.SOL.3.91.950901083613.11092A-100000@farida>,

Johanes Suhardjo <johanes@farida> wrote:
>
>I thought that hash table was always a plus without any side effect, just
>like storing numbers in a calculator for later use. Could you elaborate
>more on this?
>


two problems for me:

1. the parallel search uses the hash table in a way that produces
non-deterministic search results, since the table lets parts of the tree
pass information back and forth, and gets caught up in neat timing
intricacies...

2. makes debugging hard, since it remembers things for (maybe) a long time,
and this "result" can be used whenever it's found. If you run again, you
might not have this result due to different timings of the searches. Makes
reproducibility a real issue.
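For readers who haven't implemented one, the table being complained about looks roughly like this. The entry layout, toy size, and depth-preferred replacement rule are simplifications for illustration, not Crafty's actual scheme.

```c
enum { TT_EXACT, TT_LOWER, TT_UPPER };

typedef struct {
    unsigned long long key;   /* Zobrist hash of the position */
    int depth;                /* draft the stored score was searched to */
    int score;
    int flag;                 /* exact value, or an alpha/beta bound */
} TTEntry;

#define TT_SIZE 4096          /* toy size; real tables are vastly larger */
static TTEntry tt[TT_SIZE];

void tt_store(unsigned long long key, int depth, int score, int flag)
{
    TTEntry *e = &tt[key % TT_SIZE];
    /* depth-preferred replacement: keep the deeper of the two searches */
    if (e->key != key || depth >= e->depth)
        *e = (TTEntry){ key, depth, score, flag };
}

/* Returns 1 and sets *score if the stored result is usable here. */
int tt_probe(unsigned long long key, int depth, int alpha, int beta, int *score)
{
    const TTEntry *e = &tt[key % TT_SIZE];
    if (e->key != key || e->depth < depth)
        return 0;
    if (e->flag == TT_EXACT ||
        (e->flag == TT_LOWER && e->score >= beta) ||
        (e->flag == TT_UPPER && e->score <= alpha)) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```

Both of Bob's complaints live in these few lines: in a parallel search, processors race on `tt_store`, so which result survives depends on timing; and a deep entry left over from an earlier search can satisfy a probe in one run but be gone in the next, so reruns differ.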

Bob

Joe Stella

unread,
Sep 2, 1995, 3:00:00 AM9/2/95
to
In article <DEA3F...@ecf.toronto.edu> man...@ecf.toronto.edu
(MANOHARARAJAH VALAVAN) writes:

>On a side note: Does three move repetition and the fifty move rule get
>stored along with rest of the chess info in the Hash Table?

>I would think that there is a slight problem with hash tables if it didn't
>consider this info.

>Also, I am not sure of how the fifty move rule + 3 move rep works.
>After fifty moves have passed without a capture, do you claim for a draw--which
>is then accepted or rejected by the opponent? Or is it drawn automatically..
>i.e. after 50 moves have passed, the game is stopped no matter what.


To answer your second question first: If either player claims a draw (which
he can do when it is his turn to move) then it is a draw. The opponent
has no say in the matter, unless he can prove that the claim is incorrect
(i.e. the position has *not* really been repeated etc.).

These types of draws necessarily *cannot* be included in the hash table,
because they depend on the past history of the position and not just on
the position itself. In the case of threefold repetition, for example, the
position is not a draw at first, then the *very same* position becomes a
draw on the third repetition.

My program handles such cases by checking for repetition draws *before*
consulting the hash table. If repetition is detected, the position is scored
as a draw immediately. It takes a little time to test for repetition, but
some of this time is recovered when the position does not have to be
expanded (the score is already known so expansion is not necessary).
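The ordering Joe describes, repetition check first and hash probe second, can be sketched as below. The path array and twofold shortcut are generic illustrations (most programs score a single repeat inside the search as a draw), not his program's actual code.

```c
/* Zobrist keys of the positions on the path from the game start to
 * the current node; ply indexes the current position. */
static unsigned long long path[1024];

/* A repetition requires the same position with the same side to move,
 * so only every second entry back can match.  Treating one prior
 * occurrence as a draw is the usual in-search shortcut. */
int is_repetition(int ply)
{
    for (int i = ply - 2; i >= 0; i -= 2)
        if (path[i] == path[ply])
            return 1;
    return 0;
}
```

In the search, the node pushes its key and returns the draw score when `is_repetition(ply)` holds, *before* any transposition-table probe, so the history-dependent draw never contaminates a history-free table entry.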

Joe S.


MANOHARARAJAH VALAVAN

unread,
Sep 2, 1995, 3:00:00 AM9/2/95
to
On a side note: Does three move repetition and the fifty move rule get
stored along with rest of the chess info in the Hash Table?

I would think that there is a slight problem with hash tables if it didn't
consider this info.

Also, I am not sure of how the fifty move rule + 3 move rep works.
After fifty moves have passed without a capture, do you claim for a draw--which
is then accepted or rejected by the opponent? Or is it drawn automatically..
i.e. after 50 moves have passed, the game is stopped no matter what.


--
-------------------------------------------------------------------------------
man...@ecf.utoronto.ca | 2nd year Comp Eng., University of Toronto
Valavan Manohararajah | OS/2 Warp "Operate at a higher level"
-----------------------------Team OS/2 Member----------------------------------

Thomas Kerrigan

unread,
Sep 2, 1995, 3:00:00 AM9/2/95
to
Heh heh, I'm just starting to realize the full scope of hash table flaws.
First, they are awful to implement. I'm not sure quite why this is now,
but I remember that it took days to debug Stobor's hashing. Second,
hashing doesn't consider the rest of the tree. Hashing mate scores is
painful because the program can pick up a hashed mate-in-x score that
is really much deeper. The *serious* problem involves draws. Both the
three repetition draw and the 50 move draw rules rely on what happened
previously in the tree, which isn't considered in hashing (usually). So
what you have are bugs in your program which you put there on purpose.
Imagine the sleep a "normal" programmer would lose over this...

MANOHARARAJAH VALAVAN

unread,
Sep 3, 1995, 3:00:00 AM9/3/95
to
In article <joes.453...@ultranet.com>,

Joe Stella <jo...@ultranet.com> wrote:
>In article <DEA3F...@ecf.toronto.edu> man...@ecf.toronto.edu
>(MANOHARARAJAH VALAVAN) writes:
>
>>On a side note: Does three move repetition and the fifty move rule get
>>stored along with rest of the chess info in the Hash Table?
>
>>I would think that there is a slight problem with hash tables if it didn't
>>consider this info.
>
>>Also, I am not sure of how the fifty move rule + 3 move rep works.
>>After fifty moves have passed without a capture, do you claim for a draw--which
>>is then accepted or rejected by the opponent? Or is it drawn automatically..
>>i.e. after 50 moves have passed, the game is stopped no matter what.
>
>
>To answer your second question first: If either player claims a draw (which
>he can do when it is his turn to move) then it is a draw. The opponent
>has no say in the matter, unless he can prove that the claim is incorrect
>(i.e. the position has *not* really been repeated etc.).

Ok...so I guess in a tree search, when you hit the 50-move rule, you can set
your lower bound to zero and continue searching as always.

>
>These types of draws necessarily *cannot* be included in the hash table,
>because they depend on the past history of the position and not just on
>the position itself. In the case of threefold repetition, for example, the
>position is not a draw at first, then the *very same* position becomes a
>draw on the third repetition.
>
>My program handles such cases by checking for repetition draws *before*
>consulting the hash table. If repetition is detected, the position is scored
>as a draw immediately. It takes a little time to test for repetition, but
>some of this time is recovered when the position does not have to be
>expanded (the score is already known so expansion is not necessary).
>
> Joe S.
>

Thanks for your info.

Feng-Hsiung Hsu

unread,
Sep 3, 1995, 3:00:00 AM9/3/95
to
In article <420n04$h...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>the parallel code. A disaster waiting to happen. The next game was another
>loss, where we simply played about as bad an opening as we've ever done.

Was it just the opening? I would have checked each and every move for
the (possibly bad) effects of SEE pruning.

>The third was the famous "bishop trap" and was a result of singular
>extensions working particularly well, letting you see the outcome way before

>like that I remember. You simply outplayed us. However, if you look back

I don't recall any bishop traps. All I remembered was that we were expecting
to win major material, which Cray Blitz took over 10 moves to see. The search
extensions may be part of the reasons, but 10+ moves (20+ plies) on top of
what Cray Blitz normally searched is a little bit extreme. I tend to
believe that Cray Blitz was also hallucinating due to SEE pruning.

>"hyper" about forward pruning in the capture search, however, you have to
>admit, while the WAC problem set is very easy for machines, it is a tactical
>tour de force in pins, overloaded pieces, etc. I would expect that if SEE

WAC problem set and tour de force are not what I would put in the same
sentence. We use ECM positions as the basic tactical test nowadays, and I
don't trust even them as a benchmark for the type of decisions we are
discussing here.

>was inferior to MVV/LVA (probably a bad label, maybe I should be saying
>forward pruning vs non-forward-pruning, since SEE can work with MVV/LVA
>just fine) that there would be several of the problems in this set that
>SEE would confuse. I'm still comparing, but so far, none have shown up.

I have absolutely no reason to believe that the WAC problem set is any
good in finding whether SEE pruning is potentially debilitating. At
shallow depths, SEE pruning might be somewhat beneficial, as the errors
it introduces might not yet be relatively severe. The problems come
when you start to search deeper and you start to add search extensions. SEE
pruning introduces errors of a type that search extensions have great
difficulty removing. It essentially buries the program's head in
the sand...

Robert Hyatt

unread,
Sep 7, 1995, 3:00:00 AM9/7/95
to
In article <42m2fo$4...@news2100.microsoft.com>, Bruce Moreland <brucemo> wrote:
-->Draws & hash table cause some nasty interaction bugs, no matter what you
-->do. Refusing to hash the actual move that caused the draw seems like it
-->would be helpful, but it doesn't solve the problem fully, because a draw
-->score at any particular node can affect nodes above this node in the
-->tree, and these would get hashed.
-->
-->When you hit move 50, you can return a draw at that point, as long as the
-->50th move wasn't a mate. Checking for this can be a little bit
-->complicated, given that you have to find a legal response on the 51st
-->move.
-->
-->I think the problem with storing mates (which is not referred to in the
-->following posts) can be solved by storing mate values as bounds, meaning
-->that if you find a mate in 5, store it as a "mate in at most 500" or
-->something like that. Basically "MATE-1000" as a beta bound. This works
-->for me.

I've always stored exact mate scores in CB/Crafty and have seen absolutely
no problems, since a mate is a mate. I do adjust the mate-in-n score to
'mate-in-n from *this* position' which is basically mate-score - ply. I
don't see how draws can ever be done right, but the above mate scoring has
not caused any problems I've seen.

50-move is yet another problem, because hash tables store position, but not
the "history" of the position. As a result, it's *easy* to store a position
as mate in n, when, in fact, it's a draw, because the mate-in-n score has
no history information and you may repeat a position for the 3rd time
before delivering mate. Slate always said don't store draw scores at all,
but this seemed extreme.
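The "mate-score - ply" adjustment Bob mentions can be sketched as below; the `MATE` constant and the bound window are illustrative assumptions, but the conversion itself is the standard trick for making mate scores position-relative.

```c
#define MATE       32000          /* score for being mated/mating at the root */
#define MATE_BOUND (MATE - 1000)  /* anything beyond this is a mate score */

/* Inside the search a mate score means "mate in n plies from the root".
 * Before storing, convert it to "mate in n plies from *this* position"
 * so the entry stays valid wherever the position reappears. */
int score_to_tt(int score, int ply)
{
    if (score >  MATE_BOUND) return score + ply;
    if (score < -MATE_BOUND) return score - ply;
    return score;
}

/* On a probe, convert back relative to the probing node's ply. */
int score_from_tt(int score, int ply)
{
    if (score >  MATE_BOUND) return score - ply;
    if (score < -MATE_BOUND) return score + ply;
    return score;
}
```

With this, a position stored as "mate in 8 from here" reads back as mate in 14 from the root when rediscovered 6 plies deep, which is exactly the depth consistency that raw mate scores lack.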

Bruce Moreland

unread,
Sep 7, 1995, 3:00:00 AM9/7/95
to
Draws & hash table cause some nasty interaction bugs, no matter what you
do. Refusing to hash the actual move that caused the draw seems like it
would be helpful, but it doesn't solve the problem fully, because a draw
score at any particular node can affect nodes above this node in the
tree, and these would get hashed.

When you hit move 50, you can return a draw at that point, as long as the
50th move wasn't a mate. Checking for this can be a little bit
complicated, given that you have to find a legal response on the 51st
move.

I think the problem with storing mates (which is not referred to in the
following posts) can be solved by storing mate values as bounds, meaning
that if you find a mate in 5, store it as a "mate in at most 500" or
something like that. Basically "MATE-1000" as a beta bound. This works
for me.

bruce

In article <DEBAs...@ecf.toronto.edu>, man...@ecf.toronto.edu says...


>
>In article <joes.453...@ultranet.com>,
>Joe Stella <jo...@ultranet.com> wrote:
>>In article <DEA3F...@ecf.toronto.edu> man...@ecf.toronto.edu
>>(MANOHARARAJAH VALAVAN) writes:
>>
>>>On a side note: Does three move repetition and the fifty move rule get
>>>stored along with rest of the chess info in the Hash Table?
>>
>>>I would think that there is a slight problem with hash tables if it didn't
>>>consider this info.

>[snip]


>Ok...so I guess in a tree search, when you hit a 50 move rule, you can set
>your lower bound to Zero and continue searching as always.
>
>>
>>These types of draws necessarily *cannot* be included in the hash
table,
>>because they depend on the past history of the position and not just on
>>the position itself. In the case of threefold repetition, for example, the
>>position is not a draw at first, then the *very same* position becomes a
>>draw on the third repetition.
>>
>>My program handles such cases by checking for repetition draws *before*
>>consulting the hash table. If repetition is detected, the position is scored
>>as a draw immediately. It takes a little time to test for repetition, but
>>some of this time is recovered when the position does not have to be
>>expanded (the score is already known so expansion is not necessary).

--
The opinions expressed in this message are my own personal views
and do not reflect the official views of Microsoft Corporation.

