
Null move?


MLC

Oct 7, 1997

Could someone explain to this computer chess novice what null move means?

Robert Hyatt

Oct 7, 1997

MLC (M...@email.com) wrote:
: Could someone explain to this computer chess novice what null move means?

It is a pruning method. The idea is this: if one side gets to make
two moves in a row (by the intervening move being a "null-move") then
he should be able to achieve a significant advantage. Imagine a case
where you can move to attack the opponent's queen with your knight,
then the next move you take it since he can't move.

Since two moves are *so* powerful if played back to back, what would
you think if you reached a position in the game where you could let
your opponent play two moves in a row and he *still* couldn't do anything?

This is the idea of a null-move. At any point in the search tree, the
first move we try is a null-move. But before searching the opponent's
reply to that null-move, we reduce the search depth by an extra two plies,
because it doesn't require much depth to see the benefit of two moves in
a row, if there is any benefit.

After doing this search, which is much less expensive than the normal search
because we search to a shallower depth, we look at the value returned by
the search below the null-move. If this value is enough to make the current
position "fail high" (i.e., it is at least beta), we do so, thinking "I can
safely fail high here because my opponent, even with two moves in a row,
couldn't do anything to my position."

The net result is much deeper searches. But it fails on zugzwang positions,
because there it lets you "play" exactly the move you want (no move at all).
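
[To make the mechanics concrete, here is a minimal sketch of the idea in C.
This is not Crafty's code: the Position/Move types and the helper routines
are placeholders for whatever a real engine provides, and the can_null flag
(which simply forbids two null-moves in a row) is just one common way of
guarding the recursion.]

typedef struct Position Position;  /* engine-specific board type (placeholder) */
typedef int Move;                  /* placeholder move encoding */

/* Helpers assumed to exist elsewhere in the engine (placeholders): */
int  in_check(const Position *p);
int  generate_moves(Position *p, Move *list);
void make_move(Position *p, Move m);
void unmake_move(Position *p, Move m);
void make_null(Position *p);
void unmake_null(Position *p);
int  quiesce(Position *p, int alpha, int beta);

#define R 2   /* extra depth reduction for the null-move search */

int search(Position *pos, int depth, int alpha, int beta, int can_null)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    /* Try the null-move first: hand the opponent two moves in a row and
       search the result R extra plies shallower.  If even that cannot
       bring the score below beta, fail high right away. */
    if (can_null && !in_check(pos)) {
        make_null(pos);
        int score = -search(pos, depth - 1 - R, -beta, -beta + 1, 0);
        unmake_null(pos);
        if (score >= beta)
            return beta;   /* "even two moves in a row don't help him" */
    }

    /* Otherwise fall through to the normal move loop. */
    Move moves[256];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -search(pos, depth - 1, -beta, -alpha, 1);
        unmake_move(pos, moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}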


Robert Hyatt

Oct 8, 1997

Fuming Wang (fum...@sup.phys.washington.edu) wrote:

: I see people using "PV" or "PV moves" all the time. Can anyone explain
: to me what they are?

: Thanks,

: Fuming Wang
: fum...@phys.washington.edu

This is a term frequently used to replace "principal variation", which
is the sequence of moves that leads to the position the machine thinks is
best. IE e4 e5 Nf3 Nc6 Bc4 Nf6 Ng5 d5 exd5 Nxf7 Kxf7 Qf3+ is a PV, and
each move is commonly called a PV move.

If you've ever run a chess program, the "search stuff" shows a PV,
the expected sequence of moves...
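
[For the curious, one common way an engine records that PV during the search
is a triangular array, sketched below. The names pv, pv_length, MAX_PLY and
update_pv are illustrative placeholders, not taken from any particular
program.]

#define MAX_PLY 64
typedef int Move;                 /* placeholder move encoding */

Move pv[MAX_PLY][MAX_PLY];        /* pv[ply] holds the best line found from that ply */
int  pv_length[MAX_PLY];          /* at a leaf the search sets pv_length[ply] = ply  */

/* Called inside search() whenever a move at 'ply' raises alpha: graft the
   child's best line onto this ply's line. */
void update_pv(int ply, Move best_move)
{
    pv[ply][ply] = best_move;
    for (int i = ply + 1; i < pv_length[ply + 1]; i++)
        pv[ply][i] = pv[ply + 1][i];
    pv_length[ply] = pv_length[ply + 1];
}
/* The line the program prints is then pv[0][0] ... pv[0][pv_length[0]-1]. */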


Vincent Diepeveen

Oct 11, 1997

Not true: nullmove doesn't fail if you add one condition to the
nullmove: you may not make a nullmove if you already made a nullmove
2 plies earlier (i.e., if your own previous move was a nullmove).

You ARE allowed to make a nullmove if a nullmove was made 1 ply ago
(by your opponent).

This improvement makes nullmove work correctly.

You can see that this makes nullmove work correctly:
suppose we have a zugzwang position.

This means: I make a nullmove, now my opponent makes a nullmove.
Now we get the same position with me to move, but now I may not make
a nullmove, so I can detect the zugzwang here.

Of course, the bigger your reduction factor, the more plies you need to
detect zugzwang.

Position A side to move
1 [nullmove]
Position A xside to move
2 [nullmove]
Position A side to move
3 [nullmove not allowed so side MUST move]

So nullmove is a correct algorithm, a brute-force algorithm. The
reduction factor, however, means that you sometimes need more plies
to see things which you would see faster with a full-width search.
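
[In code, the restriction Vincent describes just means remembering at which
plies a null-move was played and refusing a new one when the same side
nulled two plies earlier. A hypothetical sketch; the names are illustrative,
not from any particular program.]

#define MAX_PLY 64

int null_at[MAX_PLY];   /* null_at[ply] == 1 if the move played at that ply was a null */

int null_allowed(int ply)
{
    /* Forbidden if this same side already made a null-move 2 plies ago;
       a null directly answering the opponent's null (1 ply ago) stays legal. */
    if (ply >= 2 && null_at[ply - 2])
        return 0;
    return 1;
}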

Greetings,
Vincent Diepeveen
di...@xs4all.nl

Robert Hyatt

Oct 11, 1997

Vincent Diepeveen (di...@xs4all.nl) wrote:

not at all. It improves it in some positions, slows it down
unnecessarily in others. But there is *nothing* we can do to
make it play correctly, because this is a forward pruning
algorithm. And forward pruning will suffer on occasion..

: You can see that this makes nullmove work correctly:


: suppose we have a zugzwang position.

: This means: i make a nullmove, now my opponent make a nullmove.
: Now we get the same position with me to move, but no i may not do
: nullmove, so i can detect here zugzwang.

Unless you are at max search depth... null move just cost you 4 plies
of search, you go to quiescence and conclude something totally different
than you would if you used the regular search.

As I said, it cures some problems, but hurts others. I'm using something
that seems to work well in the "near-endgame" positions, but I still see
failures at some points. Just that the speed advantage far outstrips the
cost in terms of failures..


: Of course, the bigger your reduction factor, the more ply you need to
: detect zugzwang.

: Position A side to move
: 1 [nullmove]
: Position A xside to move
: 2 [nullmove]
: Position A side to move
: 3 [nullmove not allowed so side MUST move]

: So Nullmove is a correct algorithm. A brute force algorithm. The
: reduction factor however takes care that you sometimes need more ply
: to see things, which you would see full-width faster.

I don't like your term "correct." That implies no mistakes. Which
is impossible for a forward pruning algorithm. "more correct" we could
argue about, but not "completely correct."

Vincent Diepeveen

Oct 14, 1997

It only takes care that your branching
factor becomes less, and in certain cases a problem is found at
ply = x instead of at ply = x - n.

See description above.

Now nullmove doesn't fail anymore in zugzwang positions.
Which was the major objective of nullmove.

So it works correctly now: it is always able to find the optimal
solution and score in ALL positions, so it is a selective algorithm
which comes to the same conclusion full-width alpha-beta comes to.

In the end it gives the true score of the position, and usually
many orders of magnitude faster than any form of full-width search
will give it to you.

Therefore (because it searches the right space) it is a correct
algorithm.

Vincent Diepeveen
di...@xs4all.nl

Robert Hyatt

Oct 15, 1997

Vincent Diepeveen (di...@xs4all.nl) wrote:

: It only takes care that your branching
: factor becomes less, and in certain cases a problem is found at
: ply = x instead of at ply = x - n.

: See description above.

: Now nullmove doesn't fail anymore in zugzwang positions.

It fails every time that you reach a zug position within 3 plies of
the capture search, because now you don't do the non-capture things
needed to see what is happening.

: Which was the major objective of nullmove.

: So it works correctly now: it is always able to find the optimal
: solution and score in ALL positions, so it is a selective algorithm
: which comes to the same conclusion full-width alpha-beta comes to.

Again, this is simply incorrect. There is *no* forward pruning
algorithm that comes to the same conclusion as full-width alpha/beta.
There are *always* exceptions that break any forward pruning algorithm.

: In the end it gives the true score of the position, and usually
: many orders of magnitude faster than any form of full-width search
: will give it to you.

: Therefore (because it searches the right space) it is a correct
: algorithm.

Except that it doesn't...


Urban Koistinen

Oct 15, 1997

The key to understanding Vincents reasoning is that as you
increase search depth at the root, you will find the correct
move even when using the null move heuristic, if the heuristic
is modified as Vincent suggested.
I think the testing that we did earlier showed that allowing
back to back null moves had a small cost. We did not try much
to reduce it while keeping back to back null moves.

In short: By allowing back to back null moves, but not 3 in
a row, the correct result will be found although at a greater
ply depth.

Example: Kc6,Rh7 vs Kb8
1. Rg7 <null> 2. Rh7 <null> 3. Rg7 <null>
Never gets the correct result no matter how deep you search.
1. Rg7 <null> 2. <null> Ka8 3. Kb6 <null> 4. Rg8#
Finds the mate at depth 13 (with R=2) that without null moves
1. Rg7 Ka8 2. Kb6 Kb8 3. Rg8#
would have been found at depth 5.
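
[One way to account for the 13: counting the null itself as one ply plus its
R-ply reduction, each null on the line consumes 1+R = 3 plies of nominal
depth while the real moves consume 1 each, so the line Rg7, null, null, Ka8,
Kb6, null, Rg8# needs 1+3+3+1+1+3+1 = 13 plies of nominal depth, versus the
5 plies of the plain line 1. Rg7 Ka8 2. Kb6 Kb8 3. Rg8#.]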

That is how it is correct.

Urban Koistinen

Urban Koistinen

Oct 15, 1997

15 Oct 1997 15:28:53 GMT Robert Hyatt wrote:
! Urban Koistinen (m10...@abc.se) wrote:
! : The key to understanding Vincents reasoning is that as you
! : increase search depth at the root, you will find the correct
! : move even when using the null move heuristic, if the heuristic
! : is modified as Vincent suggested.
! : I think the testing that we did earlier showed that allowing
! : back to back null moves had a small cost. We did not try much
! : to reduce it while keeping back to back null moves.

! : In short: By allowing back to back null moves, but not 3 in
! : a row, the correct result will be found although at a greater
! : ply depth.

! : Example: Kc6,Rh7 vs Kb8
! : 1. Rg7 <null> 2. Rh7 <null> 3. Rg7 <null>
! : Never gets the correct result no matter how deep you search.
! : 1. Rg7 <null> 2. <null> Ka8 3. Kb6 <null> 4. Rg8#
! : Finds the mate at depth 13 (with R=2) that without null moves
! : 1. Rg7 Ka8 2. Kb6 Kb8 3. Rg8#
! : would have been found at depth 5.

! : That is how it is correct.

! the point being, then, that you have to get *at least* 8 plies
! deeper with null move than without. which is not possible in any
! test I know of, of course...

! Anytime the depth is decreased (as null move does) then things are
! hidden below that point. And the only way to see that hidden trap
! is to search deep enough to overcome the null-move problem. That's
! why I didn't like his "correct" adjective here. Because it has to
! be "correct *if*" but the if is unobtainable.

"The Lisp programmer knows the value of everything and the cost of nothing."
"correct" does not mean "faster and works most of the time".
It means "works all the time" but might be impractically slow.

Regular null moves are not generally correct as there are
positions where they can prevent the correct value of a
position from being found regardless of how deep you search.
That can be perfectly all right for Crafty as it is a
program that is mainly meant to play chess.

! I tried his fix, just to see how it affected performance. Here is
! one test position, although all are affected by this in roughly
! the same proportion: (position is kopec test position 22, Bxe4 is
! the correct answer.) The first run is my normal search, the second
! run is with Vincent's limitation...

! normal null-move: 10-> 23.52 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
!
! modified null move (only disallows null if in check or if move two plies
! back was a null, or if there are no pieces, only pawns):
! 10-> 54.30 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4

! Summary:

! the "fix" costs a factor of 2x over the normal null-move search. So it
! actually will search nearly a ply shallower. Is it worth it? My first
! conclusion is no, it is just another way to try to fix a hole, but the
! effect is it only slows things down.

! I have seen *hardly any* failures caused by two nulls on successive
! plies, when debugging real game positions. I see far more that are
! simply failures because the R=2 depth reduction hides something and
! convinces Crafty the position is very good, when in fact it is very
! bad, or vice-versa.

! You can take your pick of course. At present, I will keep the 2X
! speed advantage.

It ought to be easy to keep the theoretical benefits of Vincents
nullmoves without losing speed. As an example: don't do the second
nullmove unless there would be at least one regular ply search
left after it.
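
[As a hypothetical illustration (not anyone's actual code): with 'depth' the
nominal depth remaining at the node and R the null-move reduction, the test
Urban describes is simply the following.]

/* Allow the back-to-back (second consecutive) null-move only when at least
   x plies of regular search would still remain below it. */
int second_null_allowed(int depth, int R, int x)
{
    return depth - 1 - R >= x;
}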

Urban Koistinen

Robert Hyatt

Oct 15, 1997

Urban Koistinen (m10...@abc.se) wrote:

: ! Summary:

: Urban Koistinen

This is expensive (your last suggestion). This was tried last year, and
was first suggested to me by John Stanback. But the cost is high. In the
case of Vincent's post, he made the statement "using my null-move limitation
will always return the same value as minimax alpha/beta". With the clear
implication of "everything else being equal." And that was the part that
was not clear. It makes little sense to say "they are equal so long as the
null-move improvement algorithm gets to search at a speed rate unobtainable
even by deep blue"... It sounded like a simple change that eliminated *all*
null move problems. It doesn't come close. It eliminates the one class of
problems where successive nulls hide something, but it does nothing for the
other classes where the problems are simply caused by the depth reduction.
Of course, the depth reduction is why it is done in the first place...


Randy Baker

Oct 15, 1997

Robert Hyatt <hyatt@crafty> wrote in article
<622nfl$q...@juniper.cis.uab.edu>...

> Urban Koistinen (m10...@abc.se) wrote:
> : The key to understanding Vincents reasoning is that as you
> : increase search depth at the root, you will find the correct
> : move even when using the null move heuristic, if the heuristic
> : is modified as Vincent suggested.
> : I think the testing that we did earlier showed that allowing
> : back to back null moves had a small cost. We did not try much
> : to reduce it while keeping back to back null moves.
>
> : In short: By allowing back to back null moves, but not 3 in
> : a row, the correct result will be found although at a greater
> : ply depth.
>
> : Example: Kc6,Rh7 vs Kb8

> : 1. Rg7 <null> 2. Rh7 <null> 3. Rg7 <null>
> : Never gets the correct result no matter how deep you search.
> : 1. Rg7 <null> 2. <null> Ka8 3. Kb6 <null> 4. Rg8#
> : Finds the mate at depth 13 (with R=2) that without null moves
> : 1. Rg7 Ka8 2. Kb6 Kb8 3. Rg8#
> : would have been found at depth 5.
>
> : That is how it is correct.
>
> : Urban Koistinen

>
> the point being, then, that you have to get *at least* 8 plies
> deeper with null move than without. which is not possible in any
> test I know of, of course...
>
> Anytime the depth is decreased (as null move does) then things are
> hidden below that point. And the only way to see that hidden trap
> is to search deep enough to overcome the null-move problem. That's
> why I didn't like his "correct" adjective here. Because it has to
> be "correct *if*" but the if is unobtainable.
>
> I tried his fix, just to see how it affected performance. Here is
> one test position, although all are affected by this in roughly
> the same proportion: (position is kopec test position 22, Bxe4 is
> the correct answer.) The first run is my normal search, the second
> run is with Vincent's limitation...
>
> normal null-move:
>
> depth time score variation (1)
> 1 0.00 -1.430 d5 exd5
> 1 0.00 -1.415 Ne5 Nxe5 dxe5
> 1 0.00 -0.637 Nc5
> 1-> 0.01 -0.637 Nc5
> 2 0.01 -0.552 Nc5 Nxc5 Qxc5
> 2-> 0.01 -0.552 Nc5 Nxc5 Qxc5
> 3 0.03 -0.706 Nc5 Nxc5 Qxc5 Bd4
> 3-> 0.04 -0.706 Nc5 Nxc5 Qxc5 Bd4
> 4 0.05 -0.826 Nc5 Nxc5 Qxc5 Bd4 Qc7
> 4 0.17 -0.647 Nh5 Qe3 Nc5 Nxc5 Qxc5
> 4-> 0.17 -0.647 Nh5 Qe3 Nc5 Nxc5 Qxc5
> 5 0.22 -0.791 Nh5 Qe3 Nc5 Nxc5 Qxc5 Bd4
> 5-> 0.45 -0.791 Nh5 Qe3 Nc5 Nxc5 Qxc5 Bd4
> 6 0.65 -0.755 Nh5 Qe3 Bf6 Bxf6 Nhxf6 Qd4 Nc5
> 6 2.99 -0.746 Qd8 Rac1 d5 cxd5 exd5 Rxc8 Bxc8 e5
> 6 3.47 ++ Bxe4!!
> 6 3.72 0.717 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> 6-> 3.73 0.717 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> 7 4.65 0.777 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7
> 7-> 4.79 0.777 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7
> 8 6.17 0.529 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 a4
> 8-> 6.48 0.529 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 a4
> 9 9.93 0.685 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 Rb8
> 9-> 10.74 0.685 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 Rb8
> 10 19.43 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 a4 Rac1

> 10-> 23.52 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 a4 Rac1
> time: 23.52 cpu:95% mat:1 n:1884983 nps:80143

>
> modified null move (only disallows null if in check or if move two plies
> back was a null, or if there are no pieces, only pawns):
>
> depth time score variation (1)
> 1 0.00 -1.430 d5 exd5
> 1 0.00 -1.415 Ne5 Nxe5 dxe5
> 1 0.00 -0.637 Nc5
> 1-> 0.00 -0.637 Nc5
> 2 0.00 -0.552 Nc5 Nxc5 Qxc5
> 2-> 0.01 -0.552 Nc5 Nxc5 Qxc5
> 3 0.02 -0.706 Nc5 Nxc5 Qxc5 Bd4
> 3-> 0.06 -0.706 Nc5 Nxc5 Qxc5 Bd4
> 4 0.08 -0.826 Nc5 Nxc5 Qxc5 Bd4 Qc7
> 4 0.21 -0.647 Nh5 Qe3 Nc5 Nxc5 Qxc5
> 4-> 0.23 -0.647 Nh5 Qe3 Nc5 Nxc5 Qxc5
> 5 0.29 -0.791 Nh5 Qe3 Nc5 Nxc5 Qxc5 Bd4
> 5-> 0.47 -0.791 Nh5 Qe3 Nc5 Nxc5 Qxc5 Bd4
> 6 0.69 -0.755 Nh5 Qe3 Bf6 Bxf6 Nhxf6 Qd4 Nc5
> 6 3.29 -0.746 Qd8 Rac1 d5 cxd5 exd5 Rxc8 Bxc8 e5
> 6 3.59 ++ Bxe4!!
> 6 3.84 0.717 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> 6-> 3.85 0.717 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> 7 4.79 0.777 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7
> 7-> 5.42 0.777 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7
> 8 6.84 0.529 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 a4
> 8-> 8.31 0.529 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 a4
> 9 12.53 0.685 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 Rb8
> 9-> 20.02 0.685 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 Rb8
> 10 37.27 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 a4 Rac1

> 10-> 54.30 0.516 Bxe4 Bxe4 Qxc4 Qxc4 Rxc4 Nxb6 Rxe4
> Nxd7 Nxd7 Bc3 a4 Rac1
> time: 54.30 cpu:98% mat:1 n:5055000 nps:93093
>
>
> Summary:

>
> the "fix" costs a factor of 2x over the normal null-move search. So it
> actually will search nearly a ply shallower. Is it worth it? My first
> conclusion is no, it is just another way to try to fix a hole, but the
> effect is it only slows things down.
>
> I have seen *hardly any* failures caused by two nulls on successive
> plies, when debugging real game positions. I see far more that are
> simply failures because the R=2 depth reduction hides something and
> convinces Crafty the position is very good, when in fact it is very
> bad, or vice-versa.

>
> You can take your pick of course. At present, I will keep the 2X
> speed advantage.
>
>

Perhaps there is a different way to approach when to turn null move off
based solely on its failure to detect zugzwang. If every possible move a
side makes significantly worsens its position, then playing null move might
not be advisable in the position. I leave whether this is a reasonable
condition or whether it can be efficiently/practically detected in the
course of the normal search as an exercise for the experts.

--
Randy Baker (remove Z from address in email replies)


Robert Hyatt

Oct 16, 1997

Randy Baker (rsba...@msn.com) wrote:

: Perhaps there is a different way to approach when to turn null move off
: based solely on its failure to detect zugzwang. If every possible move a
: side makes significantly worsens its position, then playing null move might
: not be advisable in the position. I leave whether this is a reasonable
: condition or whether it can be efficiently/practically detected in the
: course of the normal search as an exercise for the experts.

The problem is, however, that we don't know that every move worsens
the position. We try the null-move before even generating a single
capture move, and if it fails high, we bug out at that point. It is
a matter of not knowing it shouldn't be tried; but by the time we could
find that out, we would have reached the point where trying it gains
nothing. The whole point is to avoid searching any of the current moves
if possible...

Dusan Dobes

Oct 16, 1997

Robert Hyatt (hyatt@crafty) wrote:

: I tried his fix, just to see how it affected performance. Here is

: normal null-move:

: Summary:

The "fix" actually costs only about 10%. Look at the solution times
(3.47 and 3.59). Even a small change in null move parameters causes
big (but useless) changes in search depth in this type of positions
(single move that is significantly better than the others), but only
small changes in solution times. The primary goal is to find moves,
not to reach big search depths.

Dusan


Robert Hyatt

Oct 16, 1997

Dusan Dobes (do...@bart1.math.muni.cz) wrote:

: The "fix" actually costs only about 10%. Look at the solution times


: (3.47 and 3.59). Even a small change in null move parameters causes
: big (but useless) changes in search depth in this type of positions
: (single move that is significantly better than the others), but only
: small changes in solution times. The primary goal is to find moves,
: not to reach big search depths.

: Dusan

That early result is not super-accurate in terms of time. But if you
use this "trick" over several positions, it is consistently 2x slower.
IE in the test I posted, it is possible that the solution is not found
until iteration 10, rather than at 6. There it is a clear 2x. Note
that this did not find that solution any sooner iteration-wise, so it
is a simple depth issue, as in the tests I ran this "fix" did not find
anything earlier than the normal null-move algorithm.

here is another position from wac:

normal null move:

7-> 3.43 3.362 Qe4 Kg8 Rxf7 Rxf7 Rxb7 Be5 Rxf7 Kxf7
c6 Ke6
8 6.49 3.528 Qe4 Kg8 Rxf7 Rxf7 Rxb7 Rxb7 Qxb7 Qf5
c6 Qg4+ Kf1 Be5
8 6.83 ++ Rxf7+!!
8 7.64 4.317 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Rxf7+
Kxf7 cxb7 Ke6 Bd4 Bb8 Kg2
8-> 8.71 4.317 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Rxf7+
Kxf7 cxb7 Ke6 Bd4 Bb8 Kg2
9 12.13 4.505 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Bd4 Kg6
Rxf7 Kxf7 cxb7 Bb8 Kg2
9-> 15.60 4.505 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Bd4 Kg6
Rxf7 Kxf7 cxb7 Bb8 Kg2
10 16.46 ++ Rxf7+!!
10 22.69 5.779 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 bxc6 bxc6
Be5 Bd4 Bxd4 Rxf7+ Kxf7 c7 Ke6 c8=Q+
Kd5 Qf5+ Kc4
10-> 28.61 5.779 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 bxc6 bxc6
Be5 Bd4 Bxd4 Rxf7+ Kxf7 c7 Ke6 c8=Q+
Kd5 Qf5+ Kc4
time: 28.61 cpu:97% mat:6 n:3036505 nps:106134

Vincent modified null-move:

7-> 4.92 3.440 Qe4 Rxe7 Rxe7+ Rf7 Rxb7 Rxb7 Qxb7+
Kg6 Qd5
8 10.91 3.528 Qe4 Kg8 Rxf7 Rxf7 Rxb7 Rxb7 Qxb7 Qf5
c6 Qg4+ Kf1 Be5
8 11.55 ++ Rxf7+!!
8 13.06 4.317 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Rxf7+
Kxf7 cxb7 Ke6 Bd4 Bb8 Kg2
8-> 15.12 4.317 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Rxf7+
Kxf7 cxb7 Ke6 Bd4 Bb8 Kg2
9 19.09 4.505 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Bd4 Kg6
Rxf7 Kxf7 cxb7 Bb8 Kg2
9-> 25.25 4.505 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 Be5 Bd4 Kg6
Rxf7 Kxf7 cxb7 Bb8 Kg2
10 26.81 ++ Rxf7+!!
10 38.41 5.779 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 bxc6 bxc6
Be5 Bd4 Bxd4 Rxf7+ Kxf7 c7 Ke6 c8=Q+
Kd5 Qf5+ Kc4
10-> 51.29 5.779 Rxf7+ Rxf7 Qxf7+ Qxf7 c6 bxc6 bxc6
Be5 Bd4 Bxd4 Rxf7+ Kxf7 c7 Ke6 c8=Q+
Kd5 Qf5+ Kc4
time: 51.30 cpu:99% mat:6 n:5377822 nps:104830

this is again a 2x (nearly) difference...


Marcel van Kervinck

Oct 16, 1997

Robert Hyatt (hyatt@crafty) wrote:
> The problem is, however, that we don't know that every move worsens
> the position. We try the null-move before even generating a single
> capture move, and if it fails high, we bug out at that point.

Why not simply reduce depth to depth-R and continue the search
without null move? Failing high without further inspection will
give wrong results in the end. Just reducing depth only delays
results. And the costs of searching depth-R can be neglected....
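
[A minimal sketch of the variant Marcel is describing, using the same
placeholder types and helpers as the earlier sketch: on a null-move
fail-high, instead of returning beta, drop the remaining depth by R and
keep searching the real moves with null-move switched off below the node.]

int search_reduced(Position *pos, int depth, int alpha, int beta, int can_null)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    if (can_null && !in_check(pos)) {
        make_null(pos);
        int score = -search_reduced(pos, depth - 1 - R, -beta, -beta + 1, 0);
        unmake_null(pos);
        if (score >= beta) {
            depth -= R;     /* don't fail high: just search the rest shallower... */
            can_null = 0;   /* ...and without further null-moves below this node  */
        }
    }

    Move moves[256];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -search_reduced(pos, depth - 1, -beta, -alpha, can_null);
        unmake_move(pos, moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}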

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| mar...@stack.nl

Robert Hyatt

Oct 16, 1997

Urban Koistinen (m10...@abc.se) wrote:

: 16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
: ! Urban Koistinen (m10...@abc.se) wrote:
: ! : 15 Oct 1997 22:20:32 GMT Robert Hyatt wrote:
: ! : ! Urban Koistinen wrote:
: ! : ! : It ought to be easy to keep the theoretical benefits of Vincents
: ! : ! : nullmoves without losing speed. As an example: don't do the second
: ! : ! : nullmove unless there would be at least one regular ply search
: ! : ! : left after it.

: ! : ! This is expensive (your last suggestion). This was tried last year, and
: ! : ! was first suggested to me by John Stanback. But the cost is high.

: ! : Here is my suggestion rephrased:
: ! : Only allow the backtoback null move when there still would be
: ! : at least x ply of search left after it.
: ! : (x=1 or larger to get a smaller difference compared to what
: ! : you are doing now, with x=18 you would hardly ever notice a
: ! : difference in the search tree.)

: ! This is very expensive. Again, this was suggested by John Stanback
: ! last year. I used it for a good while to evaluate it. It is worth
: ! a factor of 2x in many positions. The idea is to not let the null
: ! search collapse a position into a quiescence search.

: Wrong.
: If you can't see the difference by now I don't think it is
: worth explaining to you.

suit yourself. I've tried it. As I said, trying null moves at the
very last ply before the capture search produces a significant speedup.
Getting rid of that, plus backing up a ply or two and not using the full
R value is expensive. You can try it for yourself... Crafty's search.c
is easy to modify in that regard. Trivial in fact. Then come back and
explain where I am "wrong" about it being expensive. It might not slow
the search down by a factor of 5x, or even 3x, but I consider 50% to be
expensive. And in lots of positions it will be way worse. There are
comments in main.c about doing this, although it was quite a while back.

: ! But if you
: ! try it, you'll find it runs *much* slower. In fact, trying null-move
: ! at the very fringe of the tree saves a lot of time, because a null-move
: ! is quicker to search than the regular capture search...

: ...

: ! : ! In the
: ! : ! case of Vincent's post, he made the statement
: ! : ! "using my null-move limitation
: ! : ! will always return the same value as minimax alpha/beta".
: ! : ! With the clear
: ! : ! implication of "everything else being equal."

: ! : He specified that you might need more depth.

: ! Yes... but his "modified-null == alpha/beta" didn't include that
: ! disclaimer. And it is *dead* wrong anyway. A null-move search
: ! is *never* == alpha/beta, because of the depth reduction.

: Vincents null would give the same result as the full tree when
: the search tree gets deep enough.
: That is not always so for the null you are using.
: I have read the 2 posts by Vincent that have reached me and I
: found the disclaimer in the first.

Very good argument. By that same logic, the Search of Genius, with
its selective forward pruning, will also produce the same result as
a traditional full-width alpha/beta search. It simply needs to get
deep enough, so that the full-width part of his search is as deep as
the normal alpha/beta search. The fact that this isn't going to happen
is not very relevant I suppose. However, here I prefer to talk "practice"
rather than "theory" because "practice" plays and wins/loses/draws games,
"theory" goes back to the frog and pockets argument...


: ! It
: ! doesn't matter if you use R=1 and only allow one null-move in the
: ! entire path, it still makes mistakes because of that one ply
: ! reduction along some critical path...

: No argument.

: ! : ! And that was the part that
: ! : ! was not clear. It makes little sense to say
: ! : ! "they are equal so long as the
: ! : ! null-move improvement algorithm gets to search
: ! : ! at a speed rate unobtainable
: ! : ! even by deep blue"...

: ! : Speed != Depth (don't confuse them)

: ! I didn't. I've posted two examples that show this. The last was a
: ! win at chess position that was solved much quicker with normal null
: ! than with "modified" null.

: DB

: ! : ! It sounded like a simple change that eliminated *all*
: ! : ! null move problems. It doesn't come close.
: ! : ! It eliminates the one class of
: ! : ! problems where successive nulls hide something,
: ! : ! but it does nothing for the
: ! : ! other classes where the problems are simply
: ! : ! caused by the depth reduction.
: ! : ! Of course, the depth reduction is why it is done in the first place...

: ! : I think you got confused.
: ! : Vincent compared his null move AB with the regular AB without null moves.

: ! I agree. And they do *not* compare. Null-move is speculative. normal
: ! alpha/beta is not speculative at all... If we use null-move, we have to
: ! take hits in some types of positions, because there isn't any perfect
: ! forward-pruning algorithm...

: Vincents null is less speculative than the regular null.

I agree. But it is also less efficient in terms of time to reach a
specific depth, and in terms of time to find a specific move. The
question always is, is the gain in accuracy more important than the loss
in performance, or is it the other way around. Two of us (Ferret and
Crafty) have toyed with this for a couple of years. We've found it
is simply better to take the speed and run, and out-search our opponent
most of the time, and get beat with an occasional null-move failure.

I ran some tests with various "restrictions" of null move, in crafty vs
crafty marathons of hundreds of games. In every case, restricting null
move dropped the win/lose ratio. Bruce also ran some tests but I don't
recall what his results were... But we are both using unrestricted R=2,
so I suspect his matched mine...


: ! : BTW I think Vincent used the word "objective" when he should have used
: ! : the word "objection".

: Urban Koistinen

Marcel van Kervinck

Oct 16, 1997

Robert Hyatt <hyatt@crafty> wrote:
> : Why not simply reduce depth to depth-R and continue the search

> : without null move? Failing high without further inspection will
> : give wrong results in the end. Just reducing depth only delays
> : results. And the costs of searching depth-R can be neglected....
> This was suggested in a fairly recent JICCA article.

I should have read it. Which article? I don't remember it.

> I tried it. And
> it is not free, in terms of cost, and it still overlooks things when it
> collapses the search to the q-search...

What's the cost? Say, R is constant and R==2:

Crafty without nullmove searches nonullmove-depth-10 in N nodes.
Now Crafty needs to search a position in it's thinking, depth=10.
Crafty does its usual risky recursive nullmove, which fails high,
with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
So nullmove-depth-10 costs just N/20 nodes, right?

Now instead of failing high, we re-search with depth=8, disallowing
nullmove, and return that score. Remember, nonullmove-depth-10 costs N,
so nonullmove-depth-8 costs N/36.

This, on top of risky nullmove reduction, costs N/20 + N/36 ==
N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
as you experimentally observed, a doubling of nodes. But no risk of ever
overlooking tactics, not even in the endgame.
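
[Checking the arithmetic: N/20 + N/36 = 9N/180 + 5N/180 = 14N/180, which is
about N/12.9, i.e. roughly 7.8% of the full-width node count instead of 5%;
that is where the "92% instead of 95%" figure comes from.]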

And, do you really want to trust a depth-7 fail-high cutoff, no questions
asked? (that is: 7 ply, including more recursive nullmoves) Do you really
trust such a search over a depth-8 search? (that is 8 ply, full width,
no selective search)

All that for just a 50% speedup??????

Marcel
-- _ _
_| |_|_|

|_ |_ mar...@stack.nl
|_| Marcel van Kervinck

Urban Koistinen

Oct 16, 1997

15 Oct 1997 22:20:32 GMT Robert Hyatt wrote:
! Urban Koistinen wrote:
! : It ought to be easy to keep the theoretical benefits of Vincents
! : nullmoves without losing speed. As an example: don't do the second
! : nullmove unless there would be at least one regular ply search
! : left after it.

! This is expensive (your last suggestion). This was tried last year, and
! was first suggested to me by John Stanback. But the cost is high.

Here is my suggestion rephrased:
Only allow the backtoback null move when there still would be
at least x ply of search left after it.
(x=1 or larger to get a smaller difference compared to what
you are doing now, with x=18 you would hardly ever notice a
difference in the search tree.)

! In the
! case of Vincent's post, he made the statement "using my null-move limitation
! will always return the same value as minimax alpha/beta". With the clear
! implication of "everything else being equal."

He specified that you might need more depth.

! And that was the part that
! was not clear. It makes little sense to say "they are equal so long as the
! null-move improvement algorithm gets to search at a speed rate unobtainable
! even by deep blue"...

Speed != Depth (don't confuse them)

! It sounded like a simple change that eliminated *all*
! null move problems. It doesn't come close. It eliminates the one class of
! problems where successive nulls hide something, but it does nothing for the
! other classes where the problems are simply caused by the depth reduction.
! Of course, the depth reduction is why it is done in the first place...

I think you got confused.
Vincent compared his null move AB with the regular AB without null moves.

BTW I think Vincent used the word "objective" when he should have used
the word "objection".

Robert Hyatt

Oct 16, 1997

Urban Koistinen (m10...@abc.se) wrote:

: 15 Oct 1997 22:20:32 GMT Robert Hyatt wrote:
: ! Urban Koistinen wrote:
: ! : It ought to be easy to keep the theoretical benefits of Vincents
: ! : nullmoves without losing speed. As an example: don't do the second
: ! : nullmove unless there would be at least one regular ply search
: ! : left after it.

: ! This is expensive (your last suggestion). This was tried last year, and
: ! was first suggested to me by John Stanback. But the cost is high.

: Here is my suggestion rephrased:
: Only allow the backtoback null move when there still would be
: at least x ply of search left after it.
: (x=1 or larger to get a smaller difference compared to what
: you are doing now, with x=18 you would hardly ever notice a
: difference in the search tree.)

This is very expensive. Again, this was suggested by John Stanback
last year. I used it for a good while to evaluate it. It is worth
a factor of 2x in many positions. The idea is to not let the null
search collapse a position into a quiescence search. But if you
try it, you'll find it runs *much* slower. In fact, trying null-move
at the very fringe of the tree saves a lot of time, because a null-move
is quicker to search than the regular capture search...

: ! In the
: ! case of Vincent's post, he made the statement "using my null-move limitation
: ! will always return the same value as minimax alpha/beta". With the clear
: ! implication of "everything else being equal."

: He specified that you might need more depth.

Yes... but his "modified-null == alpha/beta" didn't include that
disclaimer. And it is *dead* wrong anyway. A null-move search
is *never* == alpha/beta, because of the depth reduction. It
doesn't matter if you use R=1 and only allow one null-move in the
entire path, it still makes mistakes because of that one ply
reduction along some critical path...

: ! And that was the part that
: ! was not clear. It makes little sense to say "they are equal so long as the
: ! null-move improvement algorithm gets to search at a speed rate unobtainable
: ! even by deep blue"...

: Speed != Depth (don't confuse them)

I didn't. I've posted two examples that show this. The last was a
win at chess position that was solved much quicker with normal null
than with "modified" null.

: ! It sounded like a simple change that eliminated *all*
: ! null move problems. It doesn't come close. It eliminates the one class of
: ! problems where successive nulls hide something, but it does nothing for the
: ! other classes where the problems are simply caused by the depth reduction.
: ! Of course, the depth reduction is why it is done in the first place...

: I think you got confused.
: Vincent compared his null move AB with the regular AB without null moves.

I agree. And they do *not* compare. Null-move is speculative. normal
alpha/beta is not speculative at all... If we use null-move, we have to
take hits in some types of positions, because there isn't any perfect
forward-pruning algorithm...

: BTW I think Vincent used the word "objective" when he should have used
: the word "objection".

Robert Hyatt

Oct 16, 1997

Marcel van Kervinck (mar...@stack.nl) wrote:

: Robert Hyatt (hyatt@crafty) wrote:
: > The problem is, however, that we don't know that every move worsens
: > the position. We try the null-move before even generating a single
: > capture move, and if it fails high, we bug out at that point.

: Why not simply reduce depth to depth-R and continue the search
: without null move? Failing high without further inspection will
: give wrong results in the end. Just reducing depth only delays
: results. And the costs of searching depth-R can be neglected....

This was suggested in a fairly recent JICCA article. I tried it. And
it is not free, in terms of cost, and it still overlooks things when it
collapses the search to the q-search...

When you search all moves to depth-R-1, you still hide threats that are
deep, which is the most likely place for this thing to fail anyway. IE
I don't see enough "zug" failures to even think about them. I do see
failures where the depth-R-1 simply hides some tactical resource that
depth-1 would see. Of course, it plays positionally much stronger
because it gets to depth=10/11/12/13 in middlegame positions, and can
find some cute positional maneuvers that would be difficult at depth=9.


Urban Koistinen

Oct 16, 1997

16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
! Urban Koistinen (m10...@abc.se) wrote:
! : 15 Oct 1997 22:20:32 GMT Robert Hyatt wrote:
! : ! Urban Koistinen wrote:
! : ! : It ought to be easy to keep the theoretical benefits of Vincents
! : ! : nullmoves without losing speed. As an example: don't do the second
! : ! : nullmove unless there would be at least one regular ply search
! : ! : left after it.

! : ! This is expensive (your last suggestion). This was tried last year, and
! : ! was first suggested to me by John Stanback. But the cost is high.

! : Here is my suggestion rephrased:
! : Only allow the backtoback null move when there still would be
! : at least x ply of search left after it.
! : (x=1 or larger to get a smaller difference compared to what
! : you are doing now, with x=18 you would hardly ever notice a
! : difference in the search tree.)

! This is very expensive. Again, this was suggested by John Stanback
! last year. I used it for a good while to evaluate it. It is worth
! a factor of 2x in many positions. The idea is to not let the null
! search collapse a position into a quiescence search.

Wrong.
If you can't see the difference by now I don't think it is
worth explaining to you.

! But if you
! try it, you'll find it runs *much* slower. In fact, trying null-move
! at the very fringe of the tree saves a lot of time, because a null-move
! is quicker to search than the regular capture search...

...

! : ! In the
! : ! case of Vincent's post, he made the statement
! : ! "using my null-move limitation
! : ! will always return the same value as minimax alpha/beta".


! : ! With the clear

! : ! implication of "everything else being equal."

! : He specified that you might need more depth.

! Yes... but his "modified-null == alpha/beta" didn't include that
! disclaimer. And it is *dead* wrong anyway. A null-move search
! is *never* == alpha/beta, because of the depth reduction.

Vincents null would give the same result as the full tree when
the search tree gets deep enough.
That is not always so for the null you are using.
I have read the 2 posts by Vincent that have reached me and I
found the disclaimer in the first.

! It
! doesn't matter if you use R=1 and only allow one null-move in the
! entire path, it still makes mistakes because of that one ply
! reduction along some critical path...

No argument.

! : ! And that was the part that
! : ! was not clear. It makes little sense to say
! : ! "they are equal so long as the
! : ! null-move improvement algorithm gets to search
! : ! at a speed rate unobtainable
! : ! even by deep blue"...

! : Speed != Depth (don't confuse them)

! I didn't. I've posted two examples that show this. The last was a
! win at chess position that was solved much quicker with normal null
! than with "modified" null.

DB

! : ! It sounded like a simple change that eliminated *all*
! : ! null move problems. It doesn't come close.
! : ! It eliminates the one class of
! : ! problems where successive nulls hide something,
! : ! but it does nothing for the
! : ! other classes where the problems are simply
! : ! caused by the depth reduction.
! : ! Of course, the depth reduction is why it is done in the first place...

! : I think you got confused.
! : Vincent compared his null move AB with the regular AB without null moves.

! I agree. And they do *not* compare. Null-move is speculative. normal
! alpha/beta is not speculative at all... If we use null-move, we have to
! take hits in some types of positions, because there isn't any perfect
! forward-pruning algorithm...

Vincents null is less speculative than the regular null.

! : BTW I think Vincent used the word "objective" when he should have used
! : the word "objection".

Urban Koistinen

Marcel van Kervinck

Oct 16, 1997

brucemo (bru...@seanet.com) wrote:
> Pardon me for jumping into the discussion (I may have missed some
> important information earlier), but this sounds easy enough to test. Have
> you tested it, and if so, what was your result?
> Did doing this increase program strength, or did it just find 10% of
> tactical shots twice as fast, while going twice as slow on the rest?

Don't know exactly. I see some strange results, like searching 8
ply full width plus 2 null move restricted plies twice as fast as
just doing 8 ply full width. Don't know what's going on here, but
it seems to have a beneficial effect on move ordering as well.

Marcel
-- _ _
_| |_|_|

Robert Hyatt

Oct 17, 1997

Marcel van Kervinck (mar...@unox.hak.nl) wrote:
: Robert Hyatt <hyatt@crafty> wrote:
: > : Why not simply reduce depth to depth-R and continue the search

: > : without null move? Failing high without further inspection will
: > : give wrong results in the end. Just reducing depth only delays
: > : results. And the costs of searching depth-R can be neglected....
: > This was suggested in a fairly recent JICCA article.

: I should have read it. Which article? I don't remember it.

: > I tried it. And
: > it is not free, in terms of cost, and it still overlooks things when it
: > collapses the search to the q-search...

: What's the cost? Say, R is constant and R==2:

: Crafty without nullmove searches nonullmove-depth-10 in N nodes.
: Now Crafty needs to search a position in it's thinking, depth=10.
: Crafty does its usual risky recursive nullmove, which fails high,
: with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
: So nullmove-depth-10 costs just N/20 nodes, right?

I'm not sure I follow. The cost to do a null-move search and discover
that it fails high is the same old 2*sqrt(W^D) as always, just that D
is reduced by two extra plies for the null move. This is not cheap at
all, but it is cheaper than the full search to D. So if you try the
above idea, it costs W times as much effort as the null move search,
which is not exactly a great savings. In fact, it is not terribly
better than just not doing the null nor the regular Depth-3 searches
when you consider that the null-move is effectively bringing the
branching factor down to about 3 or so...
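
[For context, the 2*sqrt(W^D) figure is the familiar Knuth/Moore result for a
perfectly ordered alpha/beta tree: the minimal tree has roughly W^(D/2) leaves
on the "all" side plus W^(D/2) on the "cut" side, i.e. about 2*W^(D/2) =
2*sqrt(W^D) for even depth D, where W is the branching factor.]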

: Now instead of failing high, we re-search with depth=8, disallowing
: nullmove, and return that score. Remember, nonullmove-depth-10 costs N,
: so nonullmove-depth-8 costs N/36.

: This, on top of risky nullmove reduction, costs N/20 + N/36 ==
: N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
: as you experimentally observed, a doubling of nodes. But no risk of ever
: overlooking tactics, not even in the endgame.

wait... now we differ again, because you are *still* doing a reduced-ply
search to confirm the null move, and this reduce-ply search can *still* be
wrong, because it doesn't go as deep as the normal non-null alpha/beta
search would go along all branches... So there *must* be errors, otherwise
we just did the impossible...


: And, do you really want to trust a depth-7 fail-high cutoff, no questions
: asked? (that is: 7 ply, including more recursive nullmoves) Do you really
: trust such a search over a depth-8 search? (that is 8 ply, full width,
: no selective search)

: All that for just a 50% speedup??????

It's more. Turn null off in crafty, and step to something in the late
middlegame. It is worth 2-3 *plies*, not just a factor of two. That I
am happy to take...

Ronald de Man

Oct 17, 1997

hyatt@crafty (Robert Hyatt) writes:

>Urban Koistinen (m10...@abc.se) wrote:
>: 16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
>: ! Urban Koistinen (m10...@abc.se) wrote:

>: ! : Here is my suggestion rephrased:
>: ! : Only allow the backtoback null move when there still would be
                      ^^^^^^^^^^
>: ! : at least x ply of search left after it.

Urban is talking about back to back null moves, by which he means, the
null moves done immediately after a first null move. So 2 null moves in
a row.

>: ! This is very expensive. Again, this was suggested by John Stanback
>: ! last year. I used it for a good while to evaluate it. It is worth
>: ! a factor of 2x in many positions. The idea is to not let the null
>: ! search collapse a position into a quiescence search.

No no, you are thinking of leaving out null moves totally at very shallow
depths. That certainly will cost a lot.

>: Wrong.
>: If you can't see the difference by now I don't think it is
>: worth explaining to you.

So Urban is saying: close to the leafs, only allow single null moves.
Then double null moves can be used closer to the root where they can
help solving the occasional zug problem.

>suit yourself. I've tried it. As I said, trying null moves at the
>very last ply before the capture search produces a significant speedup.

Yes, everybody agrees with that, it just isn't what was meant.

>: ! : ! In the
>: ! : ! case of Vincent's post, he made the statement
>: ! : ! "using my null-move limitation
>: ! : ! will always return the same value as minimax alpha/beta".
>: ! : ! With the clear
>: ! : ! implication of "everything else being equal."

Vincent was clearly only talking about his search being 'theoretically'
correct, in that it will always eventually solve the position. So if you
give his search the opening position and enough time, it will eventually
solve chess. No, that is not really a practical possibility.

>: ! : He specified that you might need more depth.

>: ! Yes... but his "modified-null == alpha/beta" didn't include that
>: ! disclaimer. And it is *dead* wrong anyway. A null-move search
>: ! is *never* == alpha/beta, because of the depth reduction.

>: Vincents null would give the same result as the full tree when
>: the search tree gets deep enough.
>: That is not always so for the null you are using.
>: I have read the 2 posts by Vincent that have reached me and I
>: found the disclaimer in the first.

>Very good argument. By that same logic, the Search of Genius, with
>it's selective forward pruning, will also produce the same result as
>a traditional full-width alpha/beta search. It simply needs to get
>deep enough, so that the full-width part of his search is as deep as
>the normal alpha/beta search. The fact that this isn't going to happen
>is not very relevant I suppose. However, here I prefer to talk "practice"
>rather than "theory" because "practice" plays and wins/loses/draws games,
>"theory" goes back to the frog and pockets argument...

But it still is a good property of Genius' search. As long as it doesn't
cost in actual play of course.

By the way, the fact that Vincent's search will take an additional 4 plies
for overcoming zug problems doesn't mean that his solution to the zug
problem is therefore useless: zug problems typically arise in endgames
where it is not difficult to search 15-20 or even more plies.

>: ! : Speed != Depth (don't confuse them)

>: ! I didn't. I've posted two examples that show this. The last was a
>: ! win at chess position that was solved much quicker with normal null
>: ! than with "modified" null.

Yes, I tried Vincent's suggestion too, and for me it also seemed to
increase the number of nodes needed by a non-trivial amount. One explanation
is of course that although the second null in a row doesn't cost you much,
it might refute the first null, which does cost a lot. Since nulls give
a big win when used near the leafs, Vincent's trick might become
attractive when only used at several plies from the leafs, as Urban
is suggesting.
I have however not yet tried Urban's suggestion, but will do it when I
have time. I actually don't think I will really use it in my program,
but it might be an option in positions with few pieces left on the board.

Well, I hope I interpreted all previous messages in this thread in the
way they were meant.

Ronald de Man

Marcel van Kervinck

Oct 17, 1997

Robert Hyatt (hyatt@crafty) wrote:
> : What's the cost? Say, R is constant and R==2:
> : Crafty without nullmove searches nonullmove-depth-10 in N nodes.
> : Now Crafty needs to search a position in it's thinking, depth=10.
> : Crafty does its usual risky recursive nullmove, which fails high,
> : with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
> : So nullmove-depth-10 costs just N/20 nodes, right?

> I'm not sure I follow. The cost to do a null-move search and discover
> that it fails high is the same old 2*sqrt(W^D) as always, just that D
> is reduced by two extra plies for the null move.

You don't follow. That formula is for perfect ordered full-width
open window alpha-beta. Not for recursive null move searches.

> This is not cheap at
> all, but it is cheaper than the full search to D. So if you try the
> above idea, it costs W times as much effort as the null move search,
> which is not exactly a great savings.

No, I didn't write that, and it doesn't cost W times the effort.

> : This, on top of risky nullmove reduction, costs N/20 + N/36 ==
> : N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
> : as you experimently observed, a doubling of nodes. But no risk of ever
> : overseeing tactics, not even in the endgame.

> wait... now we differ again, because you are *still* doing a reduced-ply
> search to confirm the null move, and this reduce-ply search can *still* be
> wrong, because it doesn't go as deep as the normal non-null alpha/beta
> search would go along all branches...

Sure, BUT it will only delay results for at most R ply. While
getting over 90% savings, it's always faster than full width. And
it doesn't leave you in the embarrassing situation where you cut
off the wrong deep searches simply based on a null move search.
And thereby never being able to see a 5-ply tactic, even if you're
searching 20-ply. It's just making those searches less deep, based
on that null move search result. But it doesn't cut them off right
away.

Randy Baker

Oct 17, 1997

Robert Hyatt <hyatt@crafty> wrote in article
<626e1j$9...@juniper.cis.uab.edu>...

> Marcel van Kervinck (mar...@unox.hak.nl) wrote:
> : Robert Hyatt <hyatt@crafty> wrote:
> : > : Why not simply reduce depth to depth-R and continue the search
> : > : without null move? Failing high without further inspection will
> : > : give wrong results in the end. Just reducing depth only delays
> : > : results. And the costs of searching depth-R can be neglected....
> : > This was suggested in a fairly recent JICCA article.
>
> : I should have read it. Which article? I don't remember it.
>
> : > I tried it. And
> : > it is not free, in terms of cost, and it still overlooks things when it
> : > collapses the search to the q-search...
>
> : What's the cost? Say, R is constant and R==2:
>
> : Crafty without nullmove searches nonullmove-depth-10 in N nodes.
> : Now Crafty needs to search a position in it's thinking, depth=10.
> : Crafty does its usual risky recursive nullmove, which fails high,
> : with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
> : So nullmove-depth-10 costs just N/20 nodes, right?
>
> I'm not sure I follow. The cost to do a null-move search and discover
> that it fails high is the same old 2*sqrt(W^D) as always, just that D
> is reduced by two extra plies for the null move. This is not cheap at

> all, but it is cheaper than the full search to D. So if you try the
> above idea, it costs W times as much effort as the null move search,
> which is not exactly a great savings. In fact, it is not terribly
> better than just not doing the null nor the regular Depth-3 searches
> when you consider that the null-move is effectively bringing the
> branching factor down to about 3 or so...
>
> : Now instead of failing high, we re-search with depth=8, disallowing
> : nullmove, and return that score. Remember, nonullmove-depth-10 costs N,
> : so nonullmove-depth-8 costs N/36.
>
> : This, on top of risky nullmove reduction, costs N/20 + N/36 ==
> : N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
> : as you experimentally observed, a doubling of nodes. But no risk of ever
> : overlooking tactics, not even in the endgame.
>
> wait... now we differ again, because you are *still* doing a reduced-ply
> search to confirm the null move, and this reduce-ply search can *still* be
> wrong, because it doesn't go as deep as the normal non-null alpha/beta
> search would go along all branches... So there *must* be errors, otherwise
> we just did the impossible...

Pardon the intrusion, but we discussed a similar thread in June. In some
zugzwang positions, Crafty (at least 12.6) is blind to mate in 3 even
though it has searched 20+ plies. It could search until the end of time and
not see the win. Marcel's nonullmove verification search will eventually
catch all such oversights, since iterative deepening will also increase the
number of plies which the nonullmove verification search examines, as
follows:

Search Depth    "Null move failure-free" depth
------------    ------------------------------
5               3
6               4
7               5
8               6
9               7
N               N-2 plies with no null-move errors

etc.

I don't see how you can have *any* errors (at least null-move induced ones)
within the number of plies searched by the verification search.

--
Randy Baker (remove Z from address in email replies)


Robert Hyatt

Oct 17, 1997

Marcel van Kervinck (mar...@stack.nl) wrote:
: Robert Hyatt (hyatt@crafty) wrote:
: > : What's the cost? Say, R is constant and R==2:

: > : Crafty without nullmove searches nonullmove-depth-10 in N nodes.
: > : Now Crafty needs to search a position in it's thinking, depth=10.
: > : Crafty does its usual risky recursive nullmove, which fails high,
: > : with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
: > : So nullmove-depth-10 costs just N/20 nodes, right?

: > I'm not sure I follow. The cost to do a null-move search and discover
: > that it fails high is the same old 2*sqrt(W^D) as always, just that D
: > is reduced by two extra plies for the null move.

: You don't follow. That formula is for perfect ordered full-width
: open window alpha-beta. Not for recursive null move searches.

I follow well enough however. For the tree *below* the null move, the
above formula works perfectly. and near the leaf nodes, it is exact since
no nulls follow, in the restricted sense...

: > This is not cheap at
: > all, but it is cheaper than the full search to D. So if you try the
: > above idea, it costs W times as much effort as the null move search,
: > which is not exactly a great savings.

: No, I didn't write that, and it doesn't cost W times the effort.

Sure it does. The null move costs "1" in effort. To search all the
moves at this ply costs W*1... or W...

: > : This, on top of risky nullmove reduction, costs N/20 + N/36 ==
: > : N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
: > : as you experimentally observed, a doubling of nodes. But no risk of ever
: > : overlooking tactics, not even in the endgame.

: > wait... now we differ again, because you are *still* doing a reduced-ply
: > search to confirm the null move, and this reduce-ply search can *still* be
: > wrong, because it doesn't go as deep as the normal non-null alpha/beta
: > search would go along all branches...

: Sure, BUT it will only delay results for at most R ply. While
: getting over 90% savings, it's always faster than full width. And
: it doesn't leave you in the embarrassing situation where you cut
: off the wrong deep searches simply based on a null move search.
: And thereby never being able to see a 5-ply tactic, even if you're
: searching 20-ply. It's just making those searches less deep, based
: on that null move search result. But it doesn't cut them off right
: away.

I suppose we agree to disagree. In practice, I don't see these problems
enough to even notice them. And I have a pretty good dataset of games
played to support this.

But the best test is the one I ran way back, simply version a vs version b,
for a couple of hundred games. Every restriction to null-move that I tried
had a negative influence, although not always terribly negative. But never
positive in that kind of test...


Robert Hyatt

unread,
Oct 17, 1997, 3:00:00 AM10/17/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: hyatt@crafty (Robert Hyatt) writes:

: >Urban Koistinen (m10...@abc.se) wrote:
: >: 16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
: >: ! Urban Koistinen (m10...@abc.se) wrote:

: >: ! : Here is my suggestion rephrased:
: >: ! : Only allow the backtoback null move when there still would be
: ^^^^^^^^^^
: >: ! : at least x ply of search left after it.

: Urban is talking about back to back null moves, by which he means, the
: null moves done immediately after a first null move. So 2 null moves in
: a row.

This I *never* do. It is easy to add it, but in crafty it is *always*
slower to do this. You can make the change in search.c yourself, and
just remove the "do_null &&" from the if test that decides whether to
try a null or not. This will allow back to back nulls, and the tree
gets bigger for me...
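
For readers without the source handy, the test being described has roughly
this shape (a paraphrase for illustration only, not the actual search.c
code; the lower-case helper names are generic):

    /* do_null comes in as 0 only for the ply right after a null move,
       so two nulls are never played back to back, while deeper plies
       may null again (recursive null move). */
    if (do_null && !in_check(p) && depth > 1) {
        make_null(p);
        value = -search(p, -beta, -beta + 1, depth - 1 - R, 0);
        unmake_null(p);
        if (value >= beta)
            return value;
    }
    /* normal move loop follows; children are called with do_null = 1 */

Removing the leading "do_null &&" is exactly the change described above: it
lets the reply to a null move be another null.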

: >: ! This is very expensive. Again, this was suggested by John Stanback
: >: ! last year. I used it for a good while to evaluate it. It is worth
: >: ! a factor of 2x in many positions. The idea is to not let the null
: >: ! search collapse a position into a quiescence search.

: No no, you are thinking of leaving out null moves totally at very shallow
: depths. That certainly will cost a lot.

: >: Wrong.
: >: If you can't see the difference by now I don't think it is
: >: worth explaining to you.

: So Urban is saying: close to the leafs, only allow single null moves.
: Then double null moves can be used closer to the root where they can
: help solving the occasional zug problem.

hmmm.. I never do double nulls anyway.. but I can do a null at ply=3 and
a null at ply=5, which is what Vincent says "don't allow"... This I tried
this week and it was about 2x slower. I played two versions of this against
each other, but for only 30 games and it ended +15 =8 -7 for the normal
null-move search. Might not be enough to positively say it was worse, but
it was enough to get my attention. It also did worse on win at chess,
although only missing two more than it normally gets (dropped to 295 from
297 at one minute). But most of the solution times were longer. I didn't
find any that were shorter..


: >suit yourself. I've tried it. As I said, trying null moves at the
: >very last ply before the capture search produces a significant speedup.

: Yes, everybody agrees with that, it just isn't what was meant.

: >: ! : ! In the
: >: ! : ! case of Vincent's post, he made the statement
: >: ! : ! "using my null-move limitation
: >: ! : ! will always return the same value as minimax alpha/beta".
: >: ! : ! With the clear
: >: ! : ! implication of "everything else being equal."

: Vincent was clearly only talking about his search being 'theoretically'
: correct, in that it will always eventually solve the position. So if you
: give his search the opening position and enough time, it will eventually
: solve chess. No, that is not really a practical possibility.

: >: ! : He specified that you might need more depth.

: >: ! Yes... but his "modified-null == alpha/beta" didn't include that
: >: ! disclaimer. And it is *dead* wrong anyway. A null-move search
: >: ! is *never* == alpha/beta, because of the depth reduction.

: >: Vincents null would give the same result as the full tree when
: >: the search tree gets deep enough.
: >: That is not always so for the null you are using.
: >: I have read the 2 posts by Vincent that have reached me and I
: >: found the disclaimer in the first.

: >Very good argument. By that same logic, the Search of Genius, with
: >its selective forward pruning, will also produce the same result as
: >a traditional full-width alpha/beta search. It simply needs to get

: >deep enough, so that the full-width part of his search is as deep as
: >the normal alpha/beta search. The fact that this isn't going to happen


: >is not very relevant I suppose. However, here I prefer to talk "practice"
: >rather than "theory" because "practice" plays and wins/loses/draws games,
: >"theory" goes back to the frog and pockets argument...

: But it still is a good property of Genius' search. As long as it doesn't
: cost in actual play of course.

: By the way, the fact that Vincent's search will take an additional 4 plies
: for overcoming zug problems doesn't mean that his solution to the zug
: problem is therefore useless: zug problems typically arise in endgames
: where it is not difficult to search 15-20 or even more plies.

yes, but if you search 20, there are 24 ply solutions you can't find. That's
my point. in simple endgames, null-move (unrestricted) is worth 3-4 plies
at least, sometimes even more. losing part of that may or may not pay off.
I haven't seen any hard data or positions to make the loss in speed look
worthwhile...


: >: ! : Speed != Depth (don't confuse them)

: >: ! I didn't. I've posted two examples that show this. The last was a
: >: ! win at chess position that was solved much quicker with normal null
: >: ! than with "modified" null.

: Yes, I tried Vincent's suggestion too, and for me it also seemed to
: increase the number of nodes needed by a non-trivial amount. One explanation
: is of course that although the second null in a row doesn't cost you much,
: it might refute the first null, which does cost a lot. Since nulls give
: a big win when used near the leafs, Vincent's trick might become
: attractive when only used at several plies from the leafs, as Urban
: is suggesting.

Wait... I don't do this. Vincent's idea was to allow back-to-back nulls,
but not allow two consecutive nulls by the same side. I did both and the
results were bad for me.

: I have however not yet tried Urban's suggestion, but will do it when I

Randy Baker

unread,
Oct 17, 1997, 3:00:00 AM10/17/97
to

Robert Hyatt <hyatt@crafty> wrote in article
<628bh7$1...@juniper.cis.uab.edu>...
> Randy Baker (rsba...@msn.com) wrote:
> : Robert Hyatt <hyatt@crafty> wrote in article
> : <626e1j$9...@juniper.cis.uab.edu>...

> : > Marcel van Kervinck (mar...@unox.hak.nl) wrote:
> : > : Robert Hyatt <hyatt@crafty> wrote:
> : > : > : Why not simply reduce depth to depth-R and continue the search
> : > : > : without null move? Failing high without further inspection will
> : > : > : give wrong results in the end. Just reducing depth only delays
> : > : > : results. And the costs of searching depth-R can be
neglected....
> : > : > This was suggested in a fairly recent JICCA article.
> : >
> : > : I should have read it. Which article? I don't remember it.
> : >
> : > : > I tried it. And
> : > : > it is not free, in terms of cost, and it still overlooks things
when
> : it
> : > : > collapses the search to the q-search...
> : >
> : > : What's the cost? Say, R is constant and R==2:
> : >
> : > : Crafty without nullmove searches nonullmove-depth-10 in N nodes.
> : > : Now Crafty needs to search a position in it's thinking, depth=10.
> : > : Crafty does its usual risky recursive nullmove, which fails high,
> : > : with depth-R-1 == 7 ply. Observed overall savings: 95% of search.
> : > : So nullmove-depth-10 costs just N/20 nodes, right?
> : >
> : > I'm not sure I follow. The cost to do a null-move search and discover
> : > that it fails high is the same old 2*sqrt(W^D) as always, just that D
> : > is reduced by two extra plies for the null move. This is not cheap at
> : > all, but it is cheaper than the full search to D. So if you try the
> : > above idea, it costs W times as much effort as the null move search,
> : > which is not exactly a great savings. In fact, it is not terribly
> : > better than just not doing the null nor the regular Depth-3 searches
> : > when you consider that the null-move is effectively bringing the
> : > branching factor down to about 3 or so...
> : >
> : > : Now instead of failing high, we research with depth=8, disallowing
> : > : nullmove, and return that score. Remember, nonullmove-depth-10 costs N,
> : > : so nonullmove-depth-8 costs N/36.
> : >
> : > : This, on top of risky nullmove reduction, costs N/20 + N/36 ==
> : > : N/12.9, leaving us with a reduction of 92% instead of 95%. Indeed,
> : > : as you experimently observed, a doubling of nodes. But no risk of ever
> : > : overseeing tactics, not even in the endgame.
> : >
> : > wait... now we differ again, because you are *still* doing a reduced-ply
> : > search to confirm the null move, and this reduce-ply search can *still* be
> : > wrong, because it doesn't go as deep as the normal non-null alpha/beta
> : > search would go along all branches... So there *must* be errors, otherwise
> : > we just did the impossible...
>
> : Pardon the intrusion, but we discussed a similar thread in June. In some
> : zugswang positions. Crafty (at least 12.6) is blind to mate in 3 even
> : though it has searched 20+ plies. It could search until the end of time and
> : not see the win. Marcel's nonullmove verification search will eventually
> : catch all such oversights, since iterative deepening will also increase
> : number of plies which the nonullmove verification search examines as
> : follows:
>
> : Search Depth "Null move failure-free" depth
> : ---------------- -----------------------------------
> : 5 3
> : 6 4
> : 7 5
> : 8 6
> : 9 7
> : N N-2 plies with no null-move errors
>
> : etc.
>
> : I don't see how you can have *any* errors (at least null-move induced ones)
> : within the number of plies searched by the verification search.
>
> No... but the point is, you won't get as deep. And overall which happens
> more often, null move failures or tactical wins by the extra ply of search
> finding something? I haven't found a game crafty lost due to null-move in
> a year of analysis. Granted, on the P5/133 I found a bunch with the Qh3 Bf3
> type position the biggest problem. But I have not seen that in at least 1.5
> years...
>
> It's a question of being "perfectly right" which the null-move will *never*
> be, or various levels of "almost always right." So far, what I do (and what
> Bruce does) seems to work best... for us...

No, my only point was that you insisted that there "*must* be errors" (and
I quote) when *all* null move failures are in actuality detected by the
algorithm within a very reasonable time frame. *Any* null move failure in
the search which occurred at depths < N is guaranteed to be found by depth
N+2. All "fail-highs" at depth N are re-searched full-width to N-2. A
full-width search cannot have null move failures, thus Q.E.D.

On a different tack, which should appeal to you given your advocacy of
null move, perhaps you could use Marcel's algorithm to search *more* plies
in certain endgame positions:

You turn off null move in some endgame positions to avoid null move
failure, true? (K+P vs. K endgames I would think, for example) Marcel's
algorithm offers much of the benefit of null move while incurring *zero*
risk of null move failure from zug positions. So, you don't have to turn it
off!! Use your current null-move approach unmodified until you decide the
potential for zugs is high. Switching to Marcel's approach instead of
disabling null move would actually let you search further in these endgame
positions since you don't have to turn null move off completely, but can
use the modified version and get an extra ply or so in the same amount of
time.
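
One way such a switch might look (entirely illustrative; the predicate and
the mode names are invented, and this is not a claim about how Crafty or
any other program decides anything):

    /* Illustrative only: pick the null-move flavour by material
       instead of turning null move off in zug-prone endgames. */
    if (few_pieces(p))          /* e.g. pawn endings, K+P vs K, ...  */
        null_mode = VERIFIED;   /* Marcel's re-search: zug-safe      */
    else
        null_mode = VANILLA;    /* ordinary R=2 recursive null move  */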

Randy

Komputer Korner

unread,
Oct 18, 1997, 3:00:00 AM10/18/97
to

Bob's point is that there is a tradeoff given a restricted amount of time.
For every ply of depth that the non-standard null-move approach achieves, the
standard null move will search deeper in the same time. It seems to be that
simple. And Bob has decided that the extra plies are worth more than the
holes that null move causes. I don't know what you guys are still arguing
about. What is missing in my summation?
--
- -
Komputer Korner

The inkompetent komputer

If you see a 1 in my email address, take it out before replying.
Please do not email both me and the r.g.c.c. at the same time. I read all
the postings on r.g.c.c.

Randy Baker <rsba...@msn.com> wrote in article
<01bcdb49$95d6a920$0f01010a@randy>...


> > > It's a question of being "perfectly right" which the null-move will
> *never*
> > be, or various levels of "almost always right." So far, what I do (and
> what
> > Bruce does) seems to work best... for us...
>
> No, my only point was that you insisted that there "*must* be errors"
(and
> I quote) when *all* null move failures are in actuality detected by the
> algorithm within a very reasonable time frame. *Any* null move failure in
> the search which occurred depths < N is guaranteed to be found by depth
> N+2. All "fail-highs" at depth N are researched full-width to N-2. A
> full-width search cannot have null move failures, thus Q.E.D.

>
> Randy
>

Marcel van Kervinck

unread,
Oct 18, 1997, 3:00:00 AM10/18/97
to

Robert Hyatt <hyatt@crafty> wrote:
> that it fails high is the same old 2*sqrt(W^D) as always,
> For the tree *below* the null move, the
> above formula works perfectly. and near the leaf nodes, it is exact since
> no nulls follow, in the restricted sense...

So, Crafty doesn't do nullmoves recursively?
Is that what you mean?

> : > This is not cheap at


> : > all, but it is cheaper than the full search to D. So if you try the
> : > above idea, it costs W times as much effort as the null move search,
> : > which is not exactly a great savings.

> : No, I didn't write that, and it doesn't cost W times the effort.
> Sure it does. The null move costs "1" in effort. To search all the
> moves at this ply costs W*1... or W...

You have lost me completely here. I've no idea what this is about.
I estimated the nodes in an earlier post, and there is no factor W of
relative performance in those results. Here is a more accurate analysis:

Say a program wins a factor S by using vanilla nullmove over ordinary
full-width. So full-width costs N, with nullmove it costs N/S. Just for
the idea, think of S=20, which seems reasonable to me.

The cost of doing a D-R ply safe search is at most

N/S + N/W^(R/2) (1)

We need N/S to do the selective nullmove enhanced search, plus at most N /
W^(R/2) to do the full width verifications. This equals N/W for R=2. The
W^(R/2) comes from divided endnode counts 2*sqrt(W^D) and 2*sqrt(W^(D-R)),
if you wonder, which is not entirely correct, but close enough for our
purpose. To be exact in the second term, we should divide N by

W^(0/2) + W^(1/2) + W^(2/2) + ... + W^(R/2)
which equals
(sqrt(W)^(R+1) - 1) / (sqrt(W) - 1)

But that would leave us with the ugly and unclear estimate of:

N/S + N*(sqrt(W)-1)/(sqrt(W)^(R+1)-1) (2)

However, this is not correct either, as the 2*sqrt formula is also
an approximation. If W is sufficiently large, this is close enough
to W^(R/2). So (1) is just the fast approximation of (2).

This formula is an upper bound for the worst case, when all re-searches
are needed. Otherwise it will be less. But, in practice, I think the
true value is very close to (1).

So an upper bound on the cost to do a D-R safe search, compared to vanilla
nullmove search, would be 1 + S/W^(R/2). This is not a factor W.
For R=2, S=20 and W=36 this gives 1.55, or 55% more nodes at most.
Actually it is 47% if we use (2) instead of (1). If your S is smaller,
(that is, your normal nullmove doesn't get 95% savings), the relative
costs are even less. Not enough to lose a ply.
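
As a quick way to reproduce these numbers, here is a small throwaway program
(my own sketch, not part of any engine) that evaluates estimates (1) and (2)
relative to the vanilla null-move search:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double S = 20.0;   /* savings factor of vanilla null move */
        double W = 36.0;   /* branching factor                    */
        double R = 2.0;    /* depth reduction                     */

        /* (1): (N/S + N/W^(R/2)) / (N/S) = 1 + S/W^(R/2)         */
        double est1 = 1.0 + S / pow(W, R / 2.0);

        /* (2): second term uses the geometric series from above  */
        double est2 = 1.0 + S * (sqrt(W) - 1.0)
                            / (pow(sqrt(W), R + 1.0) - 1.0);

        printf("estimate (1): %.2f  (%.0f%% extra nodes)\n",
               est1, (est1 - 1.0) * 100.0);
        printf("estimate (2): %.2f  (%.0f%% extra nodes)\n",
               est2, (est2 - 1.0) * 100.0);
        return 0;
    }

Compiled with -lm, this prints about 1.56 and 1.47 for S=20, W=36 and R=2,
i.e. the roughly 55% and 47% extra nodes quoted above.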

And, of course, by Knuth and Moore, it would be tricky to get below 2 *
W^((D-R)/2) endnodes. If that's what you're saying...

In practice, I often see this nullmove idea actually perform better than a
full width D-R search. I don't know why, there are other effects playing
a role, because the trees searched are different. Also, the formulas
are just rough approximations. But I like the idea of getting R plies
for free, and a time bonus on top.

Is one better than the other? I don't know. They are simply 'different'.
Vanilla nullmove depends on extra knowledge to turn itself off in cases
where it could fail. The other doesn't rely on that specific knowledge,
and can be safely used in the endgame. Still, it could be better to turn
it off there as well, as when W becomes smaller, the extra costs increase.
But there is enough room for experiment, as we can increase R for greater
search depths. This may decrease the costs, but could also increase S...

Another consideration is the mysterious S. It is not a parameter as it
depends on R and D. It also depends on W and on the type of position
in general. One idea is to make the two applied depth reductions (R)
differ, doing nullmoves with the R reduction, and doing the pruning with
a different R'.

Kind regards,

Marcel
-- _ _
_| |_|_|

Robert Hyatt

unread,
Oct 19, 1997, 3:00:00 AM10/19/97
to

Marcel van Kervinck (mar...@unox.hak.nl) wrote:
: Robert Hyatt <hyatt@crafty> wrote:
: > that it fails high is the same old 2*sqrt(W^D) as always,
: > For the tree *below* the null move, the
: > above formula works perfectly. and near the leaf nodes, it is exact since
: > no nulls follow, in the restricted sense...

: So, Crafty doesn't do nullmoves recursively?
: Is that what you mean?

No. But the basic "alpha/beta" formula is a good estimate for the work
done below a null-move. It's not exact due to recursive null-move, but
it is perfectly useful when comparing the effort to search below a null
move with R=2, and then searching at the null-move ply with R=2 to "confirm"
the null move.

Null move has errors. There is nothing that can be done to eliminate
them, by the very definition of null-move. They can be reduced in some
places, but doing so also hurts other places as well. The question, then,
is whether it is better to restrict null-move and slow down, making errors
because the search is shallower, or to do a full null-move search and accept
the errors that occur there.

My data points are games played on ICC. Some 6,700 this last year that
I have studied a good number of, particularly the losses. I've yet to
encounter something that I would pin on a null-move failure. However, I'm
sure they occur. Just as I am 100% sure that Cray Blitz had null-move
failures, with R=1, non-recursive, and no null-moves close enough to the
leaf nodes to prevent a normal full-width ply after the null move. And we
*still* found positions where it was wrong. If that restricted version
fails, then the suggestion you have made is still going to fail. I've
simply taken the easy/fast way out of this, and just said "pedal to the
metal, full speed ahead." When it costs me more games than it wins for
me, I'll consider restricting it. Right now, so long as Crafty plays on a
P6 or faster, it searches deep enough that null-move failures are simply
not a problem.

I follow the old praxis of "if it ain't broken, don't fix it..."


: > : > This is not cheap at
: > : > all, but it is cheaper than the full search to D. So if you try the
: > : > above idea, it costs W times as much effort as the null move search,
: > : > which is not exactly a great savings.
: > : No, I didn't write that, and it doesn't cost W times the effort.
: > Sure it does. The null move costs "1" in effort. To search all the
: > moves at this ply costs W*1... or W...

: You have lost me completely here. I've no idea what this is about.
: I estimated the nodes in an earlier post, and there is no factor W of
: relative performance in those results. Here a more accurate analyses:

W comes from the "confirmation search". If the null-move fails high,
it costs "X" nodes. Then you try a confirmation search at the current
ply (non-null, but to reduced depth R=2). How many potential moves have
to be searched to this depth at this position? W. Hence this costs W
times as much as the null-move search... Unless I am misunderstanding
what you suggested. The JICCA article I saw suggested this as a way
to eliminate "zug" failures. It doesn't. The reason it doesn't is that
it still searches to depth-1-R, and that "R" can hide the reason this
position is a zug position in the first place.

: Say a program wins a factor S by using vanilla nullmove over ordinary

: N/S + N/W^(R/2) (1)

: N/S + N*(sqrt(W)-1)/(sqrt(W)^(R+1)-1) (2)

another issue I have seen, verified, and now ignore, is that shallow searches
fail much more often than deep searches. If you look through my main.c comments,
you'll see where I tried R=2 R=1 R=2 R=2 restricted to avoid collapsing to
the q-search, and other ideas. And most were better on the P5/133, because
it was not searching deeply enough. When I went to the P6, and started to
test R=2 again, the problems went away. I was not seeing it fall into the
g2 mate problem any more, nor any other null-move grossness. This was why
I suggested to Mclane (Czub) that Crafty on a P120 vs CSTal on a P120 was
not a good match, because CSTal plays aggressively, and crafty suffers from
*severe* null-move horizon problems at that slow speed. On a P6, it is a
totally different program. On the Alpha, it is better still...

the simple cray blitz null move search R=1, non-recursive, not near the
leaves, still made it twice as fast. R=2 recursive is a dramatic search
speeder-upper... Given a deep enough search to avoid some obvious problems..


: Kind regards,

Rolf Tueschen

unread,
Oct 19, 1997, 3:00:00 AM10/19/97
to

hyatt@crafty (Robert Hyatt) wrote:

>The JICCA article I saw suggested this as a way
>to eliminate "zug" failures.


Hiho, let's talk about science again.

In the anglo-american language we have a few german wordings.

For instance 'kindergarten', which is not a garden (for flowers) or a
garden with children growing on trees, but is exactly the institution
where children of Sesame Street learn to behave in groups... Something
BH must have missed somehow...

Nobody would shorten the long word into 'kinder', because that would
simply mean children. Nor into 'kin'. Although I would agree that all
abbreviations are clear within a certain professional group. We could also
define the number '47' to stand for a 'kin'.... Of course NOT '69'...

The German word 'Zugzwang' means 'to be forced to move'. Now if you
simply say it's a 'move' problem, or simply a 'move', you
destroy the whole sense of the word.

But that is what Her Royal Highness, The Teacher Mr. Hyatt does
constantly with his new invention 'zug', which simply means '(a) move'.
And it is nothing new that a side has to move in chess.

Only with the word 'zwang' is the crucial situation sufficiently
described.


Conclusion:

We should either say the complete 'zugzwang', or, if we are too lazy, we
could shorten it to 'zwang'. But if we say 'move' (= Zug), why not say
'horse' or 'highway', or maybe 'crocodile'?


Rolf <Webster> Tueschen


mclane

unread,
Oct 19, 1997, 3:00:00 AM10/19/97
to

hyatt@crafty (Robert Hyatt) wrote:
> This was why
>I suggested to Mclane (Czub) that Crafty on a P120 vs CSTal on a P120 was
>not a good match, because CSTal plays aggressively, and crafty suffers from
>*severe* null-move horizon problems at that slow speed. On a P6, it is a
>totally different program. On the Alpha, it is better still...

Right ! I can confirm this. Since I have the k6/200 Mhz Crafty vs.
CSTal is hopeless lost for CSTal. Crafty plays really strong.
But WHAT do we see here ?

We see player A apparently playing better than player B.
Is A really better than B?
If A and B, or A or B, were HUMANS, I would agree.
If A and B are computers, I have to disagree very strongly.

Why? If A = B but B is 2 or 5 times faster, then
B plays better, but not because B knows more about chess or plays
better chess.
A is outsearched by B,
for reasons caused by the different hardware.
If A is not = B, then it could also be outsearched.
In the case where Bob and I AGREE now, Crafty is outsearching CSTal
due to a more efficient search.

The same programs run on two similar machines at half the speed
give a different score. On a P5/120 CSTal wins. On a P6/200 (or K6)
CSTal loses.

I accept that THIS game goes to Bob, who has optimized his program with
search-strategy decisions like those he has suggested and fairly explained
in this thread before. But of course this is a competition, and in any
competition there are times when A is leading, and when circumstances
change, B is winning, or C comes and kills A and B.

I was not able to register this, because the first time I got such
a fast machine as the P6/200 (or K6) was maybe 6 months or even 1
year later than Bob.

We never constructed CSTal to search deep. We worked a couple of
years on the evaluation function, and now that this work is done we
see that our competitors have gained an advantage due to
hardware progress and software progress in the field of
search strategies. When we started to tune the evaluation function I
had a 486-33 and was proud to have one!!!! That is a few years ago.
Please note that Germany is always a couple of months BEHIND the U.S.
market.

If I had known about the machines and Bob's experience in
tuning the null-move, I would have come to a different idea about the
whole subject.

I therefore apologize for my foolish comments in this context, which
were - from my point of view - mainly due to my limited horizon.
Since I have new data, my point of view is different.

We will now concentrate on search techniques and programming progress
in this area to take up the competition. Paris comes - of course - too
early to be prepared against the fast searchers. So we have to leave
the role of defending the slow programs to Hiarcs and Rebel (in the
SSDF list) and to Virtual Chess, MChess and Shredder in Paris!

may the gods of the slow programs be with them.

Thanks.

Pitters

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

In article <62dpu1$t0a$2...@steve.prima.ruhr.de>, mcl...@prima.ruhr.de
(mclane) writes:

Hi McLane,

We hope that MChess will also have a leading position in the SSDF.... ;-)

>may the gods of the slow programs be with them.

Sure, why not?

-Peter

Robert Hyatt

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: hyatt@crafty (Robert Hyatt) writes:

: >Ronald de Man (de...@win.tue.nl) wrote:
: >: hyatt@crafty (Robert Hyatt) writes:

: >: >Urban Koistinen (m10...@abc.se) wrote:
: >: >: 16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
: >: >: ! Urban Koistinen (m10...@abc.se) wrote:


: >: >: ! : Here is my suggestion rephrased:
: >: >: ! : Only allow the backtoback null move when there still would be
: >: ^^^^^^^^^^
: >: >: ! : at least x ply of search left after it.

: >: Urban is talking about back to back null moves, by which he means, the
: >: null moves done immediately after a first null move. So 2 null moves in
: >: a row.

I don't do these. They accomplish nothing in Crafty except to make the
search slower. If one is going to fail high, the other is going to fail
low, and that's a useless outcome. I use the transposition table "trick"
to avoid lots of nulls when they won't fail high, so trying them *everywhere*
is a losing proposition in my case...
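
The "trick" is not spelled out here, but one common form of the idea (my
guess at what is meant, not necessarily what Crafty actually does; the
hash-entry field names are illustrative) is to let an existing
transposition-table bound veto the null move:

    /* Illustrative sketch: if the table already holds, from a deep
       enough search, an upper bound on this position's score that is
       below beta, the position is expected to fail low and a null-move
       search cannot usefully fail high -- so don't bother trying one. */
    HASH_ENTRY *h = hash_probe(p);
    int avoid_null = 0;

    if (h != NULL &&
        h->draft >= depth - 1 - R &&     /* searched deep enough       */
        h->type == UPPER_BOUND &&        /* true score <= h->value     */
        h->value < beta)                 /* ... and that is below beta */
        avoid_null = 1;

    if (do_null && !avoid_null && !in_check(p)) {
        /* ... try the null move as usual ... */
    }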


: >This I *never* do. It is easy to add it, but in crafty it is *always*
: >slower to do this. You can make the change in search.c yourself, and
: >just remove the "do_null &&" from the if test that decides whether to
: >try a null or not. This will allow back to back nulls, and the tree
: >gets bigger for me...

: Yes, we all agree. But Urban suggests that this doubling of the tree
: might be greatly reduced when you only leave out this test (i.e. allow
: double nulls) at at least x ply from the leafs, where x has still to
: be chosen.


: >: So Urban is saying: close to the leafs, only allow single null moves.


: >: Then double null moves can be used closer to the root where they can
: >: help solving the occasional zug problem.

: >hmmm.. I never do double nulls anyway.. but I can do a null at ply=3 and
: >a null at ply=5, which is what Vincent says "don't allow"... This I tried

: A null at ply=3 and a null at ply=5 is what you are doing now, right?
: And by 'This I tried', I guess you mean allowing nulls at ply=3 and ply=4.

Yes. I allow nulls at every other ply with no restrictions. And I've tried
"double nulls." We had a big discussion about this two years ago. I tested
it carefully. For me, in crafty, it simply didn't work. It was a way to make
the tree bigger, not smaller...

: >: By the way, the fact that Vincent's search will take an additional 4 plies
: >: for overcoming zug problems doesn't mean that his solution to the zug
: >: problem is therefore useless: zug problems typically arise in endgames
: >: where it is not difficult to search 15-20 or even more plies.

: >yes, but if you search 20, there are 24 ply solutions you can't find. That's
: >my point. in simple endgames, null-move (unrestricted) is worth 3-4 plies
: >at least, sometimes even more. losing part of that may or may not pay off.
: >I haven't seen any hard data or positions to make the loss in speed look
: >worthwhile...

: Yes, but what if Urban's suggestion would cause the tree not to be
: much bigger, while still keeping the benefit of sometimes overcoming
: zug problems?

I don't see how double-nulls overcome anything at all. All it does is to
search a tree that is 4 plies shallower than nominal...

: >: Yes, I tried Vincent's suggestion too, and for me it also seemed to
: >: increase the number of nodes needed by a non-trivial amount. One explanation
: >: is of course that although the second null in a row doesn't cost you much,
: >: it might refute the first null, which does cost a lot. Since nulls give
: >: a big win when used near the leafs, Vincent's trick might become
: >: attractive when only used at several plies from the leafs, as Urban
: >: is suggesting.

: >Wait... I don't do this. Vincent's idea was to allow back-to-back nulls,
: >but not allow two consecutive nulls by the same side. I did both and the
: >results were bad for me.

: With 'second null in a row' I mean back-to-back nulls (so allow a null
: at ply=4 when there was a null at ply=3, but then don't allow one at
: ply=5; if there was a null at ply=3, but no at ply=4, then allow one
: at ply=5). I don't know if Vincent suggested not to allow two
: consecutive nulls by the same side at all. I think he allows nulls
: at ply=3 and ply=5 when there was no at ply=4, but of course he
: doesn't allow one at ply=5 when there were nulls at ply=3 and 4, since
: the idea was that now you are back in the original position (with a lot
: less depth of course), and since you don't allow a null you can detect
: zug problems (given enough depth, of course).

Yes, that's exactly what he suggested, to not allow two nulls back to back
by the same side. I tried it. It is better on concocted positions. It is
quite a bit slower overall (a factor of 2x or so for me). Overall it was a
losing proposition in my eyes...


: And just to repeat, Urban's suggestion was to only allow the ply=3,4
: double nullmove when there is enough depth left. As you explained,
: meaning that there might not be a significant penalty, while keeping
: the benefits of Vincent's approach.

yes, but I have a hard time with double nulls. If the first is going to
fail high, the second is going to be useless as it will fail low. The
only thing the second did was to chop two more plies off the tree, and
maybe hide something important... I'm careful to try nulls where they
are likely to work, but not when it is for certain they won't...


: I would have tried Urban's suggestion by now, had I not suffered from
: some irritating little bug somewhere hidden in my program, and lack
: of time. So at the moment I cannot really comment on his suggestion.
: But it seems worthwhile to at least give it a try.

: Ronald de Man


Randy Baker

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

Yes, and the other main point of my post was that the modified null move
searches deeper than a search without null move at all, and could be used to
advantage in endgame positions, without risk of overlooking a zugzwang,
instead of disabling null move completely.

--
Randy Baker (remove Z from address in email replies)

Komputer Korner <kor...@netcom.ca> wrote in article
<01bcdb85$bd3dab80$815bb5cf@ALAN>...


> Bob's point is that there is a tradeoff given a restricted amount of
time.
> For every ply depth that the non standard null move approach achieves,
the
> standard null move will search deeper. It seems to be that simple. And
Bob
> has decided that the extra plies are worth more than the holes that null
> move causes. I don't know what you guys are still arguing about. What is
> missing in my summation?
> --
> - -
> Komputer Korner
>
> The inkompetent komputer
>
> If you see a 1 in my email address, take it out before replying.
> Please do not email both me and the r.g.c.c. at the same time. I read all
> the postings on r.g.c.c.
>
> Randy Baker <rsba...@msn.com> wrote in article
> <01bcdb49$95d6a920$0f01010a@randy>...

> > > > It's a question of being "perfectly right" which the null-move will
> > *never*
> > > be, or various levels of "almost always right." So far, what I do
(and
> > what
> > > Bruce does) seems to work best... for us...
> >
> > No, my only point was that you insisted that there "*must* be errors"
> (and
> > I quote) when *all* null move failures are in actuality detected by the
> > algorithm within a very reasonable time frame. *Any* null move failure
in
> > the search which occurred depths < N is guaranteed to be found by depth
> > N+2. All "fail-highs" at depth N are researched full-width to N-2. A
> > full-width search cannot have null move failures, thus Q.E.D.
>
> >

> > Randy
> >
>

Robert Hyatt

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: hyatt@crafty (Robert Hyatt) writes:

: >Ronald de Man (de...@win.tue.nl) wrote:
: >: hyatt@crafty (Robert Hyatt) writes:

: >: >Ronald de Man (de...@win.tue.nl) wrote:
: >: >: hyatt@crafty (Robert Hyatt) writes:

: >: >: >Urban Koistinen (m10...@abc.se) wrote:
: >: >: >: 16 Oct 1997 16:36:12 GMT Robert Hyatt wrote:
: >: >: >: ! Urban Koistinen (m10...@abc.se) wrote:


: >: >: >: ! : Here is my suggestion rephrased:
: >: >: >: ! : Only allow the backtoback null move when there still would be
: >: >: ^^^^^^^^^^
: >: >: >: ! : at least x ply of search left after it.

: >: >: Urban is talking about back to back null moves, by which he means, the
: >: >: null moves done immediately after a first null move. So 2 null moves in
: >: >: a row.

: >I don't do these. They accomplish nothing in Crafty except to make the
: >search slower. If one is going to fail high, the other is going to fail
: >low, and that's a useless outcome. I use the transpositiona table "trick"

: Not always. As Vincent argued, it might point at a zug problem that you
: just detected. Of course, you have to forbid the third null then (so
: allow them at ply=3,4, and no null at ply=5). If it is white's move at
: ply=3, it is white's move again at ply=5, and if there it is a zug
: position you will find it (of course, given that you have enough depth
: and so on).

: >to avoid lots of nulls when they won't fail high, so trying them *everywhere*
: >is a losing proposition in my case...

: >: Yes, but what if Urban's suggestion would cause the tree not to be
: >: much bigger, while still keeping the benefit of sometimes overcoming
: >: zug problems?

: >I don't see how double-nulls overcome anything at all. All it does is to
: >search a tree that is 4 plies shallower than nominal...

: In that case, you're still missing the point. Please read carefully.
: If you have a zug position for white, and you allow white to make a null,
: the search clearly won't find it. The idea is, if you allow black's next
: move also to be a null, and forbid the next null for white, white is back
: in the zug position and your search will find it. So black plays the
: null in order to drive white back into the zug position, where he can't
: use a second null to solve his problems. So double-nulls obviously overcome
: zug positions (as long as you forbid the third null).
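
Spelled out in code, the scheme being described looks roughly like this (my
sketch of Vincent's double-null idea plus Urban's depth restriction, not
anyone's actual implementation; the names and the threshold value are
invented):

    #define R 2
    #define MIN_DOUBLE_NULL_DEPTH 6   /* Urban's "x ply left": made up */

    int search(POSITION *p, int alpha, int beta, int depth, int nulls_in_row)
    {
        int score, try_null;

        if (depth <= 0)
            return quiesce(p, alpha, beta);

        /* A null reply to a null move is allowed (it drives the side to
           move back into the suspected zugzwang position), but a third
           consecutive null is not, so that side must then find a real
           move.  The second null is only tried when enough depth remains. */
        try_null = (nulls_in_row == 0) ||
                   (nulls_in_row == 1 &&
                    depth - 1 - R >= MIN_DOUBLE_NULL_DEPTH);

        if (try_null && !in_check(p)) {
            make_null(p);
            score = -search(p, -beta, -beta + 1, depth - 1 - R,
                            nulls_in_row + 1);
            unmake_null(p);
            if (score >= beta)
                return score;
        }

        /* ... normal move loop; real moves reset the counter:
           -search(p, -beta, -alpha, depth - 1, 0) ... */

        return alpha;
    }

If white is in zugzwang, white nulls, black nulls back, and white then faces
the same position with the null move forbidden, but at a depth 2*R plies
lower than after two real moves; those are the "extra 4 plies" the
discussion keeps returning to.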

You are missing my point however. What if after reducing the normal search
by 4 plies you *don't* see why this is a zugzwang position? IE it takes some
search below the point where the zug position arises before you see that no
matter what he does, it leads to a losing position, and only by doing nothing
can he "hold." Those 4 plies are often more than enough to hide it. Which
is my primary point here.

But, in testing, to take this further, allowing back-to-back nulls makes
the tree bigger. And only catches a certain small percentage of the positions
where a null will fail, while still leaving the biggest chunk of them alone to
cause problems...

: This was all Vincent's reasoning. Now the problem is that this makes the
: tree larger, which is a bad thing for normal play where you don't have
: many zug problems. Now Urban's proposal is to only allow this second null
: at positions where you have still some depth left. This will still solve
: some zug problems (in principle it solves all zug problems, giving enough
: time), but hopefully makes the tree only marginally bigger. That has to
: be tested.

I disagree with "solves all positions in principle" because *most* null-move
failures are *not* due to zugzwang. They are due to the reduced search depth
overlooking something tactical and wrongly assuming that my making two moves
in a row is not very beneficial. It's the reduced depth that causes the
problems I see. In fact, in thinking back over the past three years of working
on Crafty, I've *never* seen a zug position cause me to lose a game. But I see
*plenty* where the reduced depth hides something that later turns out to be
quite significant...

Everyone is focusing on the zugzwang problem. That is *not* the big
problem here. It is "R"...


: >: >Wait... I don't do this. Vincent's idea was to allow back-to-back nulls,
: >: >but not allow two consecutive nulls by the same side. I did both and the
: >: >results were bad for me.

: >: With 'second null in a row' I mean back-to-back nulls (so allow a null
: >: at ply=4 when there was a null at ply=3, but then don't allow one at
: >: ply=5; if there was a null at ply=3, but no at ply=4, then allow one
: >: at ply=5). I don't know if Vincent suggested not to allow two
: >: consecutive nulls by the same side at all. I think he allows nulls
: >: at ply=3 and ply=5 when there was no at ply=4, but of course he
: >: doesn't allow one at ply=5 when there were nulls at ply=3 and 4, since
: >: the idea was that now you are back in the original position (with a lot
: >: less depth of course), and since you don't allow a null you can detect
: >: zug problems (given enough depth, of course).

: >Yes, that's exactly what he suggested, to not allow two nulls back to back
: >by the same side. I tried it. It is better on concocted positions. It is
: >quite a bit slower overall (a factor of 2x or so for me). Overall it was a
: >losing proposition in my eyes...

: If I did not misread Vincent's post, his suggestion was as I described
: above: if white makes a null, then allow a null for black as well, but
: after 2 such nulls, don't allow one for white again. I don't think he
: meant that you should forbid two nulls for the same side altogether. So
: after a null for white and a normal move for black, white is allowed to
: do a null again.

: >: And just to repeat, Urban's suggestion was to only allow the ply=3,4
: >: double nullmove when there is enough depth left. As you explained,
: >: null moves give a large gain when used near the leaves, and with
: >: Urban's suggestion your search would not be changed near the leaves,
: >: meaning that there might not be a significant penalty, while keeping
: >: the benefits of Vincent's approach.

: >yes, but I have a hard time with double nulls. If the first is going to
: >fail high, the second is going to be useless as it will fail low. The

: Not necessarily useless, as I explained above.

: >only thing the second did was to chop two more plies off the tree, and
: >maybe hide something important... I'm careful to try nulls where they

: It won't hide something important. If the second null fails low, it is as
: nothing has happened (yes, you could call that useless). If the second null
: fails high, the first null will fail low and you might just have detected
: a zug problem. In no way can this double null make your search less accurate!

and you might just have detected that reducing the search by 4 plies hides
something that will break things soon...

: >are likely to work, but not when it is for certain they won't...

: The only point of double nulls is to be able to overcome zug problems.
: It certainly won't help to decrease tree size. But there might be a way
: to have the advantages of double nulls without having the big disadvantage
: of a doubling in tree size. That is the whole point of this discussion.
: I'm not sure that you read everything carefully enough, because you don't
: seem to see how double nulls can help in overcoming zug problems. It is
: not that difficult to see, so please read carefully.

: Ronald de Man

As I said, I don't see zug problems anyway. I'm sure odd test positions can
find such positions, but I simply don't see them in real games. I analyze
*every* loss by Crafty. Fortunately this is a small number of games per day.
I can't even recall *one* game where the null-move caused it to lose because
of zugzwang. I can recall plenty where "the classic position" (black Kg8,
white Qh6 Pf6 can be overlooked by folling the Qh6 move by a null, which
reduces the depth by 2 extra plies and hides that hideous Qg7 mate problem.
But as you can see, this isn't a zug of any kind. It is a problem with R=2
depth reduction...


Robert Hyatt

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: hyatt@crafty (Robert Hyatt) writes:

: >Ronald de Man (de...@win.tue.nl) wrote:
: >: hyatt@crafty (Robert Hyatt) writes:

: >: >Ronald de Man (de...@win.tue.nl) wrote:
: >: >: hyatt@crafty (Robert Hyatt) writes:

: I read this as you saying that double-nulls have no benefit whatsoever.

My opinion, yes.

: >: In that case, you're still missing the point. Please read carefully.


: >: If you have a zug position for white, and you allow white to make a null,
: >: the search clearly won't find it. The idea is, if you allow black's next
: >: move also to be a null, and forbid the next null for white, white is back
: >: in the zug position and your search will find it. So black plays the
: >: null in order to drive white back into the zug position, where he can't
: >: use a second null to solve his problems. So double-nulls obviously overcome
: >: zug positions (as long as you forbid the third null).

: >You are missing my point however. What if after reducing the normal search
: >by 4 plies you *don't* see why this is a zugzwang position? IE it takes some
: >search below the point where the zug position arises before you see that no
: >matter what he does, it leads to a losing position, and only by doing nothing
: >can he "hold." Those 4 plies are often more than enough to hide it. Which
: >is my primary point here.

: In that case, you seem to have switched your point, because previously
: you said you didn't see how double-nulls overcome anything at all.
: If double-nulls would solve one position more than no-double-nulls,
: it overcomes at least something.

If you only look at the benefit, yes. but if you factor in the cost for
doing this (again, in Crafty only, other programs may well be different)
it is a net significant loss. Because, as I stated earlier "zug" positions
are *not* a problem from where I look...


: Of course 4 plies are enough to hide plenty of things. So if it costs
: 4 plies it is bad to use double-nulls in almost all positions. That's
: why it is a good idea to restrict the double-nulls to the part of the
: tree far enough from the leaves. Since null-moves gain most near the
: leaves, this might reduce the double-null-overhead greatly.
: I don't know how this would work out, however, since I've not yet had
: the time to try this. It is very well possible that the overhead
: should still be considered too high.

In my testing, restricting nulls from near-leaf and leaf positions is
a huge loss.

: Maybe this trick might be a good option in positions where you do not
: use nullmoves currently. I don't think this would be a good thing in
: positions with pawns only, but maybe it would work in positions with
: only pawns and a knight, for example.

: >: This was all Vincent's reasoning. Now the problem is that this makes the
: >: tree larger, which is a bad thing for normal play where you don't have
: >: many zug problems. Now Urban's proposal is to only allow this second null
: >: at positions where you have still some depth left. This will still solve
: >: some zug problems (in principle it solves all zug problems, giving enough
: >: time), but hopefully makes the tree only marginally bigger. That has to
: >: be tested.

: >I disagree with "solves all positions in principle" because *most* null-move
: >failures are *not* due to zugzwang. They are due to the reduced search depth
: >overlooking something tactical and wrongly assuming that my making two moves
: >in a row is not very beneficial. It's the reduced depth that causes the
: >problems I see. in fact, in thinking back over the past three years of working
: >on Crafty, I've *never* seen a zug position cause me to lose a game. but I see
: >*plenty* where the reduced depth hides something that later turns out to be
: >quite significant...

: Well, I agree that "in principle" here has nothing to do with practical
: play. It is just that it would be nice for a program to be able to solve
: any problem "in principle" (as long as this wouldn't hurt actual play,
: which in this case I agree it might).

: The problems caused by reduced depth are all solved by increasing depth,
: which of course has nothing to do with practical play.
: The positions where crafty fails because of not recognizing zugzwang
: will never be solved by crafty at all. Some people find this very
: disturbing. My program has this same problem. For practical play it
: is not really a problem (I think), but it is somewhat disturbing.
: If this trick would cure this problem and could be implemented in
: such a way that it would cost hardly anything, I will probably keep it.

there I wouldn't argue at all. But that is a mighty big "if"... :)


: >Everyone is focusing on the zugzwang problem. That is *not* the big
: >problem here. It is "R"...

: I agree that in practical play that is the big problem. But in this
: thread we are focussing on Vincent's idea, which is a solution for
: the zug problem, as we hopefully all agree now. Again, solution in
: a theoretical sense, while maybe impractical. But then we have Urban's
: suggestion that might drive the cost down sufficiently.

It's easy enough to test, that's for sure. I'm waiting to see if anyone
gets results better than I did, because the various tests I tried were
all worse.

: >: >: And just to repeat, Urban's suggestion was to only allow the ply=3,4
: >: >: double nullmove when there is enough depth left. As you explained,
: >: >: null moves give a large gain when used near the leaves, and with
: >: >: Urban's suggestion your search would not be changed near the leaves,
: >: >: meaning that there might not be a significant penalty, while keeping
: >: >: the benefits of Vincent's approach.

: >: >yes, but I have a hard time with double nulls. If the first is going to
: >: >fail high, the second is going to be useless as it will fail low. The

: >: Not necessarily useless, as I explained above.

: >: >only thing the second did was to chop two more plies off the tree, and
: >: >maybe hide something important... I'm careful to try nulls where they

: >: It won't hide something important. If the second null fails low, it is as
: >: nothing has happened (yes, you could call that useless). If the second null
: >: fails high, the first null will fail low and you might just have detected
: >: a zug problem. In no way can this double null make your search less accurate!

: >and you might just have detected that reducing the search by 4 plies hides
: >something that will break things soon...

: I read this as saying that double-nulls can make your search less accurate.
: That is false. It can only cause some first nulls to fail low where
: they previously failed high, but that can have no bad effect on
: the accuracy (efficiency is another thing, of course).
: So double-nulls cannot hide anything. What can happen is that the 4 plies
: reduction hide something that a single-null will never find. But with
: double-nulls and increased depth, this thing will be found in the end.
: Yes, there are lots of positions in real games where this increased
: depth will never be reached, but that is another thing. Yes, that is
: important for actual play, but has nothing to do with your saying
: that double-nulls can hide things.

No... what I was saying is that the so-called "zug elimination" is not a
sure thing at all, because the 4 plies the two nulls cost can easily hide
the reason this is a zug position in the first place... or make it take 4
more plies of search to see it, which probably won't be possible...


: >: >are likely to work, but not when it is for certain they won't...

: >: The only point of double nulls is to be able to overcome zug problems.
: >: It certainly won't help to decrease tree size. But there might be a way
: >: to have the advantages of double nulls without having the big disadvantage
: >: of a doubling in tree size. That is the whole point of this discussion.
: >: I'm not sure that you read everything carefully enough, because you don't
: >: seem to see how double nulls can help in overcoming zug problems. It is
: >: not that difficult to see, so please read carefully.

: >: Ronald de Man

: >As I said, I don't see zug problems anyway. I'm sure odd test positions can
: >find such positions, but I simply don't see them in real games. I analyze
: >*every* loss by Crafty. Fortunately this is a small number of games per day.
: >I can't even recall *one* game where the null-move caused it to lose because
: >of zugzwang. I can recall plenty where "the classic position" (black Kg8,
: >white Qh6 Pf6 can be overlooked by folling the Qh6 move by a null, which
: >reduces the depth by 2 extra plies and hides that hideous Qg7 mate problem.
: >But as you can see, this isn't a zug of any kind. It is a problem with R=2
: >depth reduction...

: I think it is difficult to say that null-move has never cost a loss because
: of zugzwang. There might be drawing possibilities depending on zugzwang,
: for example. But I agree that this is certainly not the most important
: problem for play. What we are talking about however, has to do with this
: zug problem. I would not mind my program solving those odd test positions.
: As long as the cost is minimal. And as I said, it might even improve
: play in endgames with pawns and one knight for example, where using
: nullmoves might be too dangerous (well, I do it at the moment), and not
: using nullmoves loses depth.

: Ronald de Man

my reference above is in looking at losses played on ICC. One of my first
tests is always to try a normal non-null search on positions that look like
something fishy happened depth-wise. I've not found any yet that this cures.
I can't guarantee that there aren't any, but I am certain that Crafty with the
unlimited null-move search plays far better than crafty with no null-move search,
and plays significantly better than any restricted null-move search I've tried
over the past 18 months or so...

Dan Thies

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

On 20 Oct 1997 17:00:56 GMT, de...@win.tue.nl (Ronald de Man) wrote:

>I agree that in practical play that is the big problem. But in this
>thread we are focussing on Vincent's idea, which is a solution for
>the zug problem, as we hopefully all agree now. Again, solution in
>a theoretical sense, while maybe impractical. But then we have Urban's
>suggestion that might drive the cost down sufficiently.

I hate to butt in and disagree, but Vincent's idea will solve *some*
zug problems. It will also create others if the depth reduction is 4
plies. This has been pointed out. You find one thing by hiding
another.

Urban and Bob were talking right past each other for a while there,
but the argument seems to be on one side, that you can use Vincent's
double-null idea and eat the added search cost. The other side of
this, that zugzwang is a pretty rare problem and you're better off
with the deeper search - well, I find this a little hard to refute.
I suppose that's why nobody is still in here trying to say that
Bob is wrong about that. Anyone who's tested something like this
knows what it does to their search.

Vincent's original "point" was that his idea actually made null-move
better in practice. It does not.

Dan

Ronald de Man

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

hyatt@crafty (Robert Hyatt) writes:

I read this as you saying that double-nulls have no benefit whatsoever.

>: In that case, you're still missing the point. Please read carefully.


>: If you have a zug position for white, and you allow white to make a null,
>: the search clearly won't find it. The idea is, if you allow black's next
>: move also to be a null, and forbid the next null for white, white is back
>: in the zug position and your search will find it. So black plays the
>: null in order to drive white back into the zug position, where he can't
>: use a second null to solve his problems. So double-nulls obviously overcome
>: zug positions (as long as you forbid the third null).

>You are missing my point however. What if after reducing the normal search
>by 4 plies you *don't* see why this is a zugzwang position? IE it takes some
>search below the point where the zug position arises before you see that no
>matter what he does, it leads to a losing position, and only by doing nothing
>can he "hold." Those 4 plies are often more than enough to hide it. Which
>is my primary point here.

In that case, you seem to have switched your point, because previously
you said you didn't see how double-nulls overcome anything at all.
If double-nulls would solve one position more than no-double-nulls,
it overcomes at least something.

Of course 4 plies are enough to hide plenty of things. So if it costs
4 plies it is bad to use double-nulls in almost all positions. That's
why it is a good idea to restrict the double-nulls to the part of the
tree far enough from the leaves. Since null-moves gain most near the
leaves, this might reduce the double-null-overhead greatly.
I don't know how this would work out, however, since I've not yet had
the time to try this. It is very well possible that the overhead
should still be considered too high.

Maybe this trick might be a good option in positions where you do not
use nullmoves currently. I don't think this would be a good thing in
positions with pawns only, but maybe it would work in positions with
only pawns and a knight, for example.

>: This was all Vincent's reasoning. Now the problem is that this makes the
>: tree larger, which is a bad thing for normal play where you don't have
>: many zug problems. Now Urban's proposal is to only allow this second null
>: at positions where you have still some depth left. This will still solve
>: some zug problems (in principle it solves all zug problems, giving enough
>: time), but hopefully makes the tree only marginally bigger. That has to
>: be tested.

>I disagree with "solves all positions in principle" because *most* null-move
>failures are *not* due to zugzwang. They are due to the reduced search depth
>overlooking something tactical and wrongly assuming that my making two moves
>in a row is not very beneficial. It's the reduced depth that causes the
>problems I see. in fact, in thinking back over the past three years of working
>on Crafty, I've *never* seen a zug position cause me to lose a game. but I see
>*plenty* where the reduced depth hides something that later turns out to be
>quite significant...

Well, I agree that "in principle" here has nothing to do with practical
play. It is just that it would be nice for a program to be able to solve
any problem "in principle" (as long as this wouldn't hurt actual play,
which in this case I agree it might).

The problems caused by reduced depth are all solved by increasing depth,
which of course has nothing to do with practical play.
The positions where crafty fails because of not recognizing zugzwang
will never be solved by crafty at all. Some people find this very
disturbing. My program has this same problem. For practical play it
is not really a problem (I think), but it is somewhat disturbing.
If this trick would cure this problem and could be implemented in
such a way that it would cost hardly anything, I will probably keep it.

>Everyone is focusing on the zugzwang problem. That is *not* the big
>problem here. It is "R"...

I agree that in practical play that is the big problem. But in this
thread we are focussing on Vincent's idea, which is a solution for
the zug problem, as we hopefully all agree now. Again, solution in
a theoretical sense, while maybe impractical. But then we have Urban's
suggestion that might drive the cost down sufficiently.

>: >: And just to repeat, Urban's suggestion was to only allow the ply=3,4
>: >: double nullmove when there is enough depth left. As you explained,
>: >: null moves give a large gain when used near the leaves, and with
>: >: Urban's suggestion your search would not be changed near the leaves,
>: >: meaning that there might not be a significant penalty, while keeping
>: >: the benefits of Vincent's approach.

>: >yes, but I have a hard time with double nulls. If the first is going to
>: >fail high, the second is going to be useless as it will fail low. The

>: Not necessarily useless, as I explained above.

>: >only thing the second did was to chop two more plies off the tree, and
>: >maybe hide something important... I'm careful to try nulls where they

>: It won't hide something important. If the second null fails low, it is as if
>: nothing has happened (yes, you could call that useless). If the second null
>: fails high, the first null will fail low and you might just have detected
>: a zug problem. In no way can this double null make your search less accurate!

>and you might just have detected that reducing the search by 4 plies hides
>something that will break things soon...

I read this as saying that double-nulls can make your search less accurate.
That is false. It can only cause some first nulls to fail low where
they previously failed high, but that can have no bad effect on
the accuracy (efficiency is another thing, of course).
So double-nulls cannot hide anything. What can happen is that the 4-ply
reduction hides something that a single-null will never find. But with
double-nulls and increased depth, this thing will be found in the end.
Yes, there are lots of positions in real games where this increased
depth will never be reached, but that is another thing. Yes, that is
important for actual play, but has nothing to do with your saying
that double-nulls can hide things.

>: >are likely to work, but not when it is for certain they won't...

>: The only point of double nulls is to be able to overcome zug problems.
>: It certainly won't help to decrease tree size. But there might be a way
>: to have the advantages of double nulls without having the big disadvantage
>: of a doubling in tree size. That is the whole point of this discussion.
>: I'm not sure that you read everything carefully enough, because you don't
>: seem to see how double nulls can help in overcoming zug problems. It is
>: not that difficult to see, so please read carefully.

>: Ronald de Man

>As I said, I don't see zug problems anyway. I'm sure odd test positions can
>find such positions, but I simply don't see them in real games. I analyze
>*every* loss by Crafty. Fortunately this is a small number of games per day.
>I can't even recall *one* game where the null-move caused it to lose because
>of zugzwang. I can recall plenty where "the classic position" (black Kg8,
>white Qh6 Pf6) can be overlooked by following the Qh6 move with a null, which
>reduces the depth by 2 extra plies and hides that hideous Qg7 mate problem.
>But as you can see, this isn't a zug of any kind. It is a problem with R=2
>depth reduction...

I think it is difficult to say that null-move has never cost a loss because
of zugzwang.

Dan Thies

unread,
Oct 20, 1997, 3:00:00 AM10/20/97
to

On 18 Oct 1997 01:37:14 EDT, "Komputer Korner" <kor...@netcom.ca>
wrote:

>Bob's point is that there is a tradeoff given a restricted amount of time.
>For every ply depth that the non standard null move approach achieves, the
>standard null move will search deeper. It seems to be that simple. And Bob
>has decided that the extra plies are worth more than the holes that null
>move causes. I don't know what you guys are still arguing about. What is
>missing in my summation?

Randy's talking about a very limited application of this - in
situations where you wouldn't use a standard null move because of the
error rate.

Still, I'm having a hard time visualizing such a situation occurring
where I didn't have either:
a) an endgame table which solved the positions outright
b) very good play from the knowledge base.

But in some cases, I'm sure that double-nulls will prove useful. The
question is how to identify these cases on the fly. It's sort of like
trying to figure out which nodes are "all" nodes while you're in the
search. It's nice if you can figure out how. The "if" in the
double-null solutions I've seen is usually pretty big, though.

Dan

Ronald de Man

unread,
Oct 21, 1997, 3:00:00 AM10/21/97
to

rt...@wt.net (Dan Thies) writes:

>On 20 Oct 1997 17:00:56 GMT, de...@win.tue.nl (Ronald de Man) wrote:

>>I agree that in practical play that is the big problem. But in this
>>thread we are focussing on Vincent's idea, which is a solution for
>>the zug problem, as we hopefully all agree now. Again, solution in
>>a theoretical sense, while maybe impractical. But then we have Urban's
>>suggestion that might drive the cost down sufficiently.

>I hate to butt in and disagree, but Vincent's idea will solve *some*
>zug problems. It will also create others if the depth reduction is 4
>plies. This has been pointed out. You find one thing by hiding
>another.

Of course, it won't solve all zug problems, just like increasing
depth will not solve all horizon problems.
And yes, if this trick reduces depth (which it will if you do it
unrestricted), it is not worth it. A depth reduction of 4 seems
exaggerated, however. In normal positions the worst I have seen is
about 50% more nodes to reach the same depth. Positions that
give a depth reduction of 4 are probably full of zugzwang.

>Urban and Bob were talking right past each other for a while there,
>but the argument seems to be on one side, that you can use Vincent's
>double-null idea and eat the added search cost. The other side of
>this, that zugzwang is a pretty rare problem and you're better off
>with the deeper search - well, I find this a little hard to refute.
>I suppose that's why nobody is still in here trying to say that
>Bob is wrong about that. Anyone who's tested something like this
>knows what it does to their search.

Yes, but I was originally trying to point out to Bob what Urban's
idea is to reduce the penalty of Vincent's trick (which is,
don't do the double-nulls near the leaves). However, every time
I say this, Bob interprets this as 'don't do nulls near the leaves',
which is something really different from what I mean, and is obviously
going to cost a lot.

Meanwhile I have tried Urban's idea in my program, and it seems
to work well. For several middlegame positions tree size even decreases
somewhat. In others, it doesn't cost much. The effect seems more
noticeable in endgames, where trees do get somewhat bigger, but also
some of the PVs change.
And in the position

8/4nB2/1p6/1P6/K7/P1k5/8/8 w

I now quickly get the right move Bh5 with an eval of +0.000, and it
stays at that, where I previously had very strange behaviour.

To make clear what I do, when I test in Search() whether I want to do
a null or not, one of the conditions that had to be satisfied was:

previous_move!=null_move

and I replaced this by:

(previous_move!=null_move ||
(move_before_that_one!=null_move && depth>2))
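
As a rough sketch of how this might look inside the null-move test
(NULL_MOVE, previous_move and friends are just made-up names for
illustration, not taken from any particular program):

  #define NULL_MOVE 0   /* hypothetical encoding of "no move" */

  /* A null is allowed if the previous move was not a null, OR (the
     double-null case) if the move two plies back was not a null and
     there is still some depth left.  Three nulls in a row can never
     happen, and the second null never fires right at the horizon. */
  int null_allowed(int depth, int previous_move, int move_before_that_one)
  {
      return previous_move != NULL_MOVE ||
             (move_before_that_one != NULL_MOVE && depth > 2);
  }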

>Vincent's original "point" was that his idea actually made null-move
>better in practice. It does not.

Yes, I certainly agree with that if you do the double-nulls unrestricted.

What I will probably do is to allow double-nulls in positions with very few
pieces where I still want to use some form of nullmove.

By the way, to make it solve zug problems in endgames with for example
a knight for white and no pieces for black (and pawns for both), you'll
have to allow a null for black if white just did a null, even though black
has no pieces (in which case I normally don't allow nulls for that side).
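
A sketch of that extra escape clause (again only illustrative names,
not code from any real program):

  /* Normally a side with only king and pawns is not allowed to null
     because of the zugzwang danger; the exception lets it answer the
     opponent's null so the double-null can still expose the zugzwang. */
  int null_allowed_for_side(int side_has_pieces, int opponent_just_nulled)
  {
      return side_has_pieces || opponent_just_nulled;
  }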

>Dan

Ronald de Man


Robert Hyatt

unread,
Oct 21, 1997, 3:00:00 AM10/21/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: rt...@wt.net (Dan Thies) writes:

: >On 20 Oct 1997 17:00:56 GMT, de...@win.tue.nl (Ronald de Man) wrote:

: >>I agree that in practical play that is the big problem. But in this
: >>thread we are focussing on Vincent's idea, which is a solution for
: >>the zug problem, as we hopefully all agree now. Again, solution in
: >>a theoretical sense, while maybe impractical. But then we have Urban's
: >>suggestion that might drive the cost down sufficiently.

: >I hate to butt in and disagree, but Vincent's idea will solve *some*
: >zug problems. It will also create others if the depth reduction is 4
: >plies. This has been pointed out. You find one thing by hiding
: >another.

: Of course, it won't solve all zug problems, just like increasing
: depth will not solve all horizon problems.
: And yes, if this trick reduces depth (which it will if you do it
: unrestricted), it is not worth it. A depth reduction of 4 seems
: exaggerated, however. In normal positions the worst I have seen is
: about 50% more nodes to reach the same depth. Positions that
: give a depth reduction of 4 are probably full of zugzwang.

wait... you mis-interpreted. when you play a *single* null move in
the search, below that null you search *two* plies less deep than normal
because of the R=2 depth reduction parameter. That's simply the definition
of the null-move search. If you do two back-to-back, you chop 4 plies off
of the nominal search depth. Not because of lost speed, but because of R=2.
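
For concreteness, a sketch of the depth bookkeeping (assumed names, not
any particular program's code; R is the reduction factor under discussion):

  #define R 2   /* the usual null-move depth reduction */

  /* Nominal depth handed to the search below a null move: the R=2
     reduction removes two plies, and the null itself uses one more.
     Two back-to-back nulls therefore lose 4 plies via R alone. */
  int depth_below_null(int depth)
  {
      return depth - 1 - R;
  }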


: >Urban and Bob were talking right past each other for a while there,

Ronald de Man

unread,
Oct 21, 1997, 3:00:00 AM10/21/97
to

hyatt@crafty (Robert Hyatt) writes:

>Ronald de Man (de...@win.tue.nl) wrote:
>: rt...@wt.net (Dan Thies) writes:

>: >On 20 Oct 1997 17:00:56 GMT, de...@win.tue.nl (Ronald de Man) wrote:

>: Of course, it won't solve all zug problems, just like increasing
>: depth will not solve all horizon problems.
>: And yes, if this trick reduces depth (which it will if you do it
>: unrestricted), it is not worth it. A depth reduction of 4 seems
>: exaggerated, however. In normal positions the worst I have seen is
>: about 50% more nodes to reach the same depth. Positions that
>: give a depth reduction of 4 are probably full of zugzwang.

>wait... you mis-interpreted. when you play a *single* null move in
>the search, below that null you search *two* plies less deep than normal
>because of the R=2 depth reduction parameter. That's simply the definition
>of the null-move search. If you do two back-to-back, you chop 4 plies off
>of the nominal search depth. Not because of lost speed, but because of R=2.

Hmmm, it really seemed that you were talking of a total reduction in
depth of 4 as an example of what could happen in an endgame: you said
if you search with this trick and you get to 20 plies, you might get to
24 without it.

Actually, here is what you wrote:

yes, but if you search 20, there are 24 ply solutions you can't find.
That's my point. in simple endgames, null-move (unrestricted) is worth
3-4 plies at least, sometimes even more. losing part of that may or may
not pay off. I haven't seen any hard data or positions to make the loss
in speed look worthwhile...

Well, here you compare it with a non-nullmove search.

Of course it is clear that a 20 ply tactic involving zugzwang will not
be found by these double-nulls at 20 ply. But that is not a good
argument against double-nulls, since you wouldn't have found it with
the normal nullmove anyway. I just cannot think of any disadvantages
of double-nulls other than that it might slow down the search. And of
course, a noticeable slowdown of the search is a very big disadvantage.
That is why methods promising part of the advantages, but not giving
a slowdown are interesting.

(By the way, it seems you're chopping off 6 plies, since the two nulls
also take a ply each.)

>: 8/4nB2/1p6/1P6/K7/P1k5/8/8 w

I just tried crafty (rather old version, v9.30) on this position. It
can't solve it. If you allow double-nulls in positions where for example
both sides have either a bishop or a knight left, it would be found,
and the slowdown doesn't have to be excessive if you do it like Urban
suggests. I think that is interesting.

>: I now quickly get the right move Bh5 with an eval of +0.000, and it
>: stays at that, where I previously had very strange behaviour.

>: To make clear what I do, when I test in Search() whether I want to do
>: a null or not, one of the conditions that had to be satisfied was:

>: previous_move!=null_move

>: and I replaced this by:

>: (previous_move!=null_move ||
>: (move_before_that_one!=null_move && depth>2))

>: By the way, to make it solve zug problems in endgames with for example
>: a knight for white and no pieces for black (and pawns for both), you'll
>: have to allow a null for black if white just did a null, even though black
>: has no pieces (in which case I normally don't allow nulls for that side).

Ronald de Man


Robert Hyatt

unread,
Oct 21, 1997, 3:00:00 AM10/21/97
to

Ronald de Man (de...@win.tue.nl) wrote:
: hyatt@crafty (Robert Hyatt) writes:

right. But the "4 plies" I mentioned were the 4 plies dropped by the two
back-to-back null move searches. If you are searching to 24 plies, any PV
with two back-to-back nulls will only reach ply 20 (actually only ply 18
because the null moves don't change the board either). And that can hide
the reason this position is a zugzwang position... In a 24 ply search, it
would be very likely that positions are searched to even shallower depths,
since this double-null can be done recursively, so at plies 2-3 it is done,
then at 9-10, etc...


: Well, here you compare it with a non-nullmove search.

: Of course it is clear that a 20 ply tactic involving zugzwang will not
: be found by these double-nulls at 20 ply. But that is not a good
: argument against double-nulls, since you wouldn't have found it with
: the normal nullmove anyway. I just cannot think of any disadvantages
: of double-nulls other than that it might slow down the search. And of
: course, a noticeable slowdown of the search is a very big disadvantage.
: That is why methods promising part of the advantages, but not giving
: a slowdown are interesting.

This is exactly the effect I saw. (a) it slowed the search down significantly.
(b) it didn't solve most of the zugzwang type positions I tried. It did solve
a few, but only the "trivial" ones. The difficult ones are the positions where
you reach a "zug" position (but don't know it yet) and a 20 ply search below that
node will reveal this, but anything less won't. Any null-move below that node
will break things, because it will reduce the depth just enough so that we don't
know this is a zug any more... Since (a) happens, and (b) doesn't help in the
positions I see, my choice was obvious. Your mileage may certainly vary.


: (By the way, it seems you're chopping off 6 plies, since the two nulls
: also take a ply each.)

yep... :)


: >: 8/4nB2/1p6/1P6/K7/P1k5/8/8 w

: I just tried crafty (rather old version, v9.30) on this position. It
: can't solve it. If you allow double-nulls in positions where for example
: both sides have either a bishop or a knight left, it would be found,
: and the slowdown doesn't have to be excessive if you do it like Urban
: suggests. I think that is interesting.

current crafty fails high at 30 seconds. Somewhat excessive, but it can
find it. But remember, this is a pathological position. The last null-
move position posted (the simple mate that Fritz couldn't find) is another
case. Endgame zugs are much different from these...


: >: I now quickly get the right move Bh5 with an eval of +0.000, and it

Dan Thies

unread,
Oct 22, 1997, 3:00:00 AM10/22/97
to

On 21 Oct 1997 09:39:46 GMT, de...@win.tue.nl (Ronald de Man) wrote:

>rt...@wt.net (Dan Thies) writes:
>>I hate to butt in and disagree, but Vincent's idea will solve *some*
>>zug problems. It will also create others if the depth reduction is 4
>>plies. This has been pointed out. You find one thing by hiding
>>another.
>

>Of course, it won't solve all zug problems, just like increasing
>depth will not solve all horizon problems.
>And yes, if this trick reduces depth (which it will if you do it
>unrestricted), it is not worth it. A depth reduction of 4 seems
>exaggerated, however. In normal positions the worst I have seen is
>about 50% more nodes to reach the same depth. Positions that
>give a depth reduction of 4 are probably full of zugzwang.

Again, I'm just pointing out that Vincent's assessment of his idea(s)
is as optimistic and unrealistic as usual.

>>Urban and Bob were talking right past each other for a while there,

>Yes, but I was originally trying to point out to Bob what Urban's
>idea is to reduce the penalty of Vincent's trick (which is,
>don't do the double-nulls near the leaves). However, everytime
>I say this, Bob interpretes this as 'don't do nulls near the leaves',
>which is something really different from what I mean, and is obviously
>going to cost a lot.

It's hard to understand whether Bob also tried this or not.

>Meanwhile I have tried Urban's idea in my program, and it seems
>to work well. For several middlegame positions treesize even decreases
>somewhat. In others, it doesn't cost much. The effect seems more
>noticeable in endgames, where trees do get somewhat bigger, but also
>some of the PVs change.
>And in the position
>
>8/4nB2/1p6/1P6/K7/P1k5/8/8 w
>

>I now quickly get the right move Bh5 with an eval of +0.000, and it
>stays at that, where I previously had very strange behaviour.

I think this will work in some endgame positions - the big question to
me is how you identify those positions.

>To make clear what I do, when I test in Search() whether I want to do
>a null or not, one of the conditions that had to be satisfied was:
>
> previous_move!=null_move
>
>and I replaced this by:
>
> (previous_move!=null_move ||
> (move_before_that_one!=null_move && depth>2))

I'll try this, and see what I get. KC had no zugzwang problems,
because of the knowledge and the way the search worked, but now that
I'm doing an alpha-beta version (with heaps o' forward pruning...)
it's going to become an issue.

>>Vincent's original "point" was that his idea actually made null-move
>>better in practice. It does not.
>
>Yes, I certainly agree with that if you do the double-nulls unrestricted.
>
>What I will probably do is to allow double-nulls in positions with very few
>pieces where I still want to use some form of nullmove.

When you write that magic code that tells you which positions this is
good for, don't forget to post it here for all your friends. :-)

>By the way, to make it solve zug problems in endgames with for example
>a knight for white and no pieces for black (and pawns for both), you'll
>have to allow a null for black if white just did a null, even though black
>has no pieces (in which case I normally don't allow nulls for that side).

Check.


Dan (Knowchess)

Robert Hyatt

unread,
Oct 22, 1997, 3:00:00 AM10/22/97
to

Dan Thies (rt...@wt.net) wrote:

: Again, I'm just pointing out that Vincent's assessment of his idea(s)
: is as optimistic and unrealistic as usual.

: >>Urban and Bob were talking right past each other for a while there,
: >Yes, but I was originally trying to point out to Bob what Urban's
: >idea is to reduce the penalty of Vincent's trick (which is,
: >don't do the double-nulls near the leaves). However, everytime
: >I say this, Bob interpretes this as 'don't do nulls near the leaves',
: >which is something really different from what I mean, and is obviously
: >going to cost a lot.

: It's hard to understand whether Bob also tried this or not.

yes. I tried double-nulls everywhere, then not near the leaves. I
tried all the way back to depth > 3 so that the depth-R-1 would not
take depth to zero and drop into the capture search.
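
Roughly (illustrative names only), that guard amounts to:

  #define R 2

  /* depth > 3 guarantees depth - R - 1 >= 1 with R = 2, so the reduced
     null-move search never drops straight into the capture search. */
  int deep_enough_for_second_null(int depth)
  {
      return depth > 3;
  }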


Vincent Diepeveen

unread,
Oct 24, 1997, 3:00:00 AM10/24/97
to


Robert Hyatt <hyatt@crafty> wrote in article
<62btah$f...@juniper.cis.uab.edu>...


> Marcel van Kervinck (mar...@unox.hak.nl) wrote:
> : Robert Hyatt <hyatt@crafty> wrote:
> : > that it fails high is the same old 2*sqrt(W^D) as always,
> : > For the tree *below* the null move, the
> : > : > above formula works perfectly. and near the leaf nodes, it is exact since
> : > : > no nulls follow, in the restricted sense...
>
> : So, Crafty doesn't do nullmoves recursively?
> : Is that what you mean?
>
> No. But the basic "alpha/beta" formula is a good estimate for the work
> done below a null-move. It's not exact due to recursive null-move, but
> it is perfectly useful when comparing the effort to search below a null
> : > move with R=2, and then searching at the null-move ply with R=2 to "confirm"
> : > the null move.
>
> Null move has errors. There is nothing that can be done to eliminate

Nullmove has no errors with the Diepeveen improvement: detecting zugzwang
and failure with another nullmove.

You just sometimes need more plies and a considerable depth,
which we will get in the future and some already have today.

Did my post of yesterday come to the group?

Robert Hyatt

unread,
Oct 25, 1997, 3:00:00 AM10/25/97
to

Vincent Diepeveen (di...@xs4all.nl) wrote:


: Robert Hyatt <hyatt@crafty> wrote in article
: <62btah$f...@juniper.cis.uab.edu>...
: > Marcel van Kervinck (mar...@unox.hak.nl) wrote:
: > : Robert Hyatt <hyatt@crafty> wrote:
: > : > that it fails high is the same old 2*sqrt(W^D) as always,
: > : > For the tree *below* the null move, the
: > : > : > above formula works perfectly. and near the leaf nodes, it is exact since
: > : > : > no nulls follow, in the restricted sense...
: >
: > : So, Crafty doesn't do nullmoves recursively?
: > : Is that what you mean?
: >
: > No. But the basic "alpha/beta" formula is a good estimate for the work
: > done below a null-move. It's not exact due to recursive null-move, but
: > it is perfectly useful when comparing the effort to search below a null
: > move with R=2, and then searching at the null-move ply with R=2 to "confirm"
: > the null move.
: >
: > Null move has errors. There is nothing that can be done to eliminate

: Nullmove has no errors with the Diepeveen improvement: detecting zugzwang
: and failure with another nullmove.

: You just sometimes need more plies and a considerable depth,
: which we will get in the future and some already have today.

But that is somewhat misleading: "if I can search deep enough, then I
don't have null-move errors". And that is dead false... because no
matter how deep you search, null-move reduces that depth along selected
branches and produces incorrect results. Yes a deeper search can solve
a particular null-move failure, but it is guaranteed to expose more
because it is a selective algorithm.

