
An interesting forward pruning experiment - with pseudo description


Vincent Diepeveen

Feb 7, 2003, 6:51:27 PM
Hello,

I did a forward pruning experiment in DIEP which I baptise "DIEP
reductions":

If DIEP nullmoves at depths >= 6 ply left and the nullmove fails low,
then I try another nullmove, at a lowered window, to be precise:

newwindow = [alfa-0.750; alfa-0.749]

If that fails high, then I assume that there is no tactical threat in
this position. Obviously, because of that, we can do more than just
sit and wait.
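The probe described above can be sketched as follows. This is a minimal illustration, not DIEP's code: the helper names (`search`, `make_null`, `unmake_null`) are hypothetical, scores are assumed to be in pawn units, sign conventions of negamax are glossed over, and R=3 is taken from the end of this post.

```python
def no_tactical_threat(pos, alpha, depth, search, make_null, unmake_null):
    """Probe a second nullmove after a normal nullmove already failed low:
    a fail high against [alfa-0.750; alfa-0.749] suggests the opponent
    has no tactical threat in this position."""
    if depth < 6:                            # only at depths >= 6 ply left
        return False
    lo, hi = alpha - 0.750, alpha - 0.749    # the lowered zero-width window
    make_null(pos)                           # give the opponent a free move
    score = search(pos, lo, hi, depth - 1 - 3)  # depth - 1 - R, with R=3 assumed
    unmake_null(pos)
    return score >= hi                       # fail high: no tactical threat found
```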

Then I did 3 experiments, and I am not sure yet which is best, but I
guess the last:

- A: directly reduce 1 ply. That is, do NOT lower the ply depth here by
1, but set a mask which takes care that we reduce another ply when
going to the next position. The advantage of not lowering directly is
that the position gets stored in the hashtable as a ply bigger. This is
dubious of course in itself, but well, the whole concept of the pruning
is dubious anyway, and we get more cutoffs like this.

- B: directly reduce 1 ply, like described under A, but such that
after we fail high we research again with the correct search depth.

- C: reduce 1 ply, like described under A, but such that after we
fail low we research again with the correct search depth.

Both B and C are very interesting. C seems most promising here.

Preliminary results: C is very promising, which went against my natural
feeling as an algorithmic expert. But later, cool coldhearted logic
concluded that C is pretty logical.
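Variant C, as I read it, can be sketched like this. It is only a sketch: `search` is a hypothetical callable, and `no_threat` stands in for the double-nullmove test; none of this is DIEP's actual code.

```python
def search_with_reduction_c(pos, alpha, beta, depth, search, no_threat):
    """Variant C: when the threat probe says the opponent has no threat,
    search one ply shallower; accept a fail high there as a cutoff, but
    verify a fail low by researching at the correct depth."""
    if no_threat(pos, alpha, depth):
        score = search(pos, alpha, beta, depth - 1)  # reduced search
        if score >= beta:
            return score          # fail high at reduced depth: cheap cutoff
        # fail low at reduced depth: do not trust it
    return search(pos, alpha, beta, depth)           # correct search depth
```

Note the cost trade-off this makes explicit: a reduced-depth fail high saves a full-depth search, while a reduced-depth fail low costs an extra depth - 1 search on top of the normal one.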

I search on average 1 ply deeper at 40-moves-in-2-hours time controls,
sometimes more.

My natural feeling says that I should now turn it off, and I will
probably go turn this forward pruning off again. Though "DIEP
reductions" is a better word for it.

On test sets it is working great. One might argue that when a move
gives check, one might in advance not want to reduce that move 1 ply.
Just that move.

That might remove some big horizon problems of the algorithm.

Of course, do not forget that I combine it with R=3, which might be
pretty dangerous, because things like moving a piece from d2 to c1 to
a3 might get missed (Deep Junior-Kasparov).

Dieter Buerssner

Feb 9, 2003, 1:27:41 PM
Vincent Diepeveen wrote:

> I did a forward pruning experiment in DIEP which I baptise "DIEP
> reductions":
>
> If DIEP nullmoves at depths >= 6 ply left and the nullmove fails low,
> then I try another nullmove, at a lowered window, to be precise:
>
> newwindow = [alfa-0.750; alfa-0.749]
>
> If that fails high, then I assume that there is no tactical threat in
> this position.

Yes. I think you mean the opponent has no tactical threat against me. I
think one must be careful that my own tactical threats are not
overlooked.

> Obviously, because of that, we can do more than just sit
> and wait.
>
> Then I did 3 experiments, and I am not sure yet which is best, but I
> guess the last:
> - A: directly reduce 1 ply. That is, do NOT lower the ply depth here by
> 1, but set a mask which takes care that we reduce another ply when
> going to the next position. The advantage of not lowering directly is
> that the position gets stored in the hashtable as a ply bigger. This is
> dubious of course in itself, but well, the whole concept of the pruning
> is dubious anyway, and we get more cutoffs like this.
>
> - B: directly reduce 1 ply, like described under A, but such that
> after we fail high we research again with the correct search depth.
>
> - C: reduce 1 ply, like described under A, but such that after we
> fail low we research again with the correct search depth.
>
> Both B and C are very interesting. C seems most promising here.

I tried A and B. I tried it in zero-window searches only, as well as in
open-window searches. Additionally, I tried using beta instead of alpha
(which will be practically the same in zero-window searches). So far the
results have not been too promising on test sets. Depth is reached a bit
faster, but other positions are only solved at a later depth. I should
recheck carefully that I implemented it correctly.

> Preliminary results: C is very promising, which went against my natural
> feeling as an algorithmic expert. But later, cool coldhearted logic
> concluded that C is pretty logical.

C should get rid of the problem that I can overlook some tactical trick
for the side to move. But then again, one would expect much smaller
savings? Basically, you can get a faster cutoff when depth - 1 would
already fail high. The price is that you do another search with
depth-R-1, and sometimes (often?) do the depth - 1 search and the
"normal" search at depth.

Can you have 2 bounds in the HT for one position? I can't at the moment.
I fear that the 2 null move searches with different windows could hurt
move ordering. In one search you need lower bound values and in the other
higher bound values for cutoffs from the HT later in the search at the
same ply.

But I didn't try it yet.
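A two-bound hashtable entry of the kind asked about here could look like this minimal sketch. The field names are mine, and real engines pack entries far more tightly and treat depth interactions between the two bounds more carefully.

```python
from dataclasses import dataclass

@dataclass
class TTEntry:
    lower: float = float("-inf")   # best proven lower bound (from fail highs)
    upper: float = float("inf")    # best proven upper bound (from fail lows)
    depth: int = -1

def store_bound(entry, score, alpha, beta, depth):
    """Record a search result as a bound: a fail high tightens the lower
    bound, a fail low the upper bound, an exact score sets both."""
    if depth < entry.depth:
        return                             # keep the deeper result
    entry.depth = depth
    if score >= beta:
        entry.lower = score                # fail high
    elif score <= alpha:
        entry.upper = score                # fail low
    else:
        entry.lower = entry.upper = score  # exact score
```

With both bounds stored, the lower bound serves the fail-high cutoffs of one null-move window and the upper bound the fail-low cutoffs of the other, which is exactly the situation described above.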

> On test sets it is working great. One might argue that when a move
> gives check, one might in advance not want to reduce that move 1 ply.
> Just that move.
>
> That might remove some big horizon problems of the algorithm.

I thought the same, and want to try this later, too.



> Of course, do not forget that I combine it with R=3, which might be
> pretty dangerous, because things like moving a piece from d2 to c1 to
> a3 might get missed (Deep Junior-Kasparov).

I used R=2 (R=1 in some endgames).

Regards,
Dieter

Vincent Diepeveen

Feb 10, 2003, 4:35:27 AM
On Sun, 9 Feb 2003 19:27:41 +0100, Dieter Buerssner <bu...@gmx.de>
wrote:

>Vincent Diepeveen wrote:
>
>> I did a forward pruning experiment in DIEP which I baptise "DIEP
>> reductions":
>>
>> If DIEP nullmoves at depths >= 6 ply left and the nullmove fails low,
>> then I try another nullmove, at a lowered window, to be precise:
>>
>> newwindow = [alfa-0.750; alfa-0.749]
>>
>> If that fails high, then I assume that there is no tactical threat in
>> this position.
>
>Yes. I think you mean the opponent has no tactical threat against me. I
>think one must be careful that my own tactical threats are not
>overlooked.

If you fail high, this is not a major problem.

>> Obviously, because of that, we can do more than just sit
>> and wait.
>>
>> Then I did 3 experiments, and I am not sure yet which is best, but I
>> guess the last:
>> - A: directly reduce 1 ply. That is, do NOT lower the ply depth here by
>> 1, but set a mask which takes care that we reduce another ply when
>> going to the next position. The advantage of not lowering directly is
>> that the position gets stored in the hashtable as a ply bigger. This is
>> dubious of course in itself, but well, the whole concept of the pruning
>> is dubious anyway, and we get more cutoffs like this.
>>
>> - B: directly reduce 1 ply, like described under A, but such that
>> after we fail high we research again with the correct search depth.
>>
>> - C: reduce 1 ply, like described under A, but such that after we
>> fail low we research again with the correct search depth.
>>
>> Both B and C are very interesting. C seems most promising here.
>
>I tried A and B. I tried it in zero-window searches only, as well as in
>open-window searches. Additionally, I tried using beta instead of alpha

One should protect oneself against doing this with huge beta values,
otherwise you burn too many nodes. As you can see from my
implementation, I only do it when I have already nullmoved. In fact,
the nullmove is just another move in DIEP, which is added to the move
list as the first move to try.

So after that fails low, I directly research without even unmaking the
nullmove.

I need to note that I am also doing singular extensions in DIEP. Just
for safety I found out quickly that when a position has stored in the
hashtable that it has a singular move, I do not allow the additional
nullmove to be done.

I disallow the nullmove anyway when beta = infinite.
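Collected into one predicate, the guards in this post might look like the sketch below. The names and the INFINITE constant are my own stand-ins, and the depth >= 6 condition is carried over from the first post.

```python
INFINITE = 32000   # stand-in for the engine's infinite/mate bound (assumed value)

def may_probe_second_nullmove(nullmove_failed_low, beta, has_singular_move, depth):
    """Allow the extra [alfa-0.750; alfa-0.749] nullmove probe only when
    it is cheap and safe enough, per the guards described above."""
    return (nullmove_failed_low          # only after the normal nullmove failed low
            and depth >= 6               # depth condition from the first post
            and beta < INFINITE          # disallowed when beta = infinite
            and not has_singular_move)   # hashtable flags a singular move: skip
```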

>(which will be practically the same in zero-window searches). So far the
>results have not been too promising on test sets. Depth is reached a bit
>faster, but other positions are only solved at a later depth. I should
>recheck carefully that I implemented it correctly.
>
>> Preliminary results: C is very promising, which went against my natural
>> feeling as an algorithmic expert. But later, cool coldhearted logic
>> concluded that C is pretty logical.
>
>C should get rid of the problem that I can overlook some tactical trick
>for the side to move. But then again, one would expect much smaller savings?

The principle is in fact that if you already have a cutoff without a
counter threat, then it is somewhat safe when there is no real threat
from the opponent.

On the other hand, C says that if you fail low with a move at reduced
depth, you still extend it. So you try to optimize your score.

>Basically, you can get a faster cutoff when depth - 1 would
>already fail high. The price is that you do another search with
>depth-R-1, and sometimes (often?) do the depth - 1 search and the
>"normal" search at depth.

Exactly.

>Can you have 2 bounds in the HT for one position? I can't at the moment.
>I fear that the 2 null move searches with different windows could hurt
>move ordering. In one search you need lower bound values and in the other

I don't follow this part of yours at all. Where do you get 2 bounds
stored for 1 position?

Oh wait, I get your point. In DIEP I do not overwrite in the hashtable
a position that is searched at depth d when the stored position has
depth d+i.
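The overwrite rule stated here, sketched with a plain dict as the table (the layout and names are my own, not DIEP's):

```python
table = {}   # hash key -> stored search result

def tt_store(key, depth, score):
    """Depth-preferred store: a result searched to depth d never
    replaces a stored result of greater depth d+i."""
    old = table.get(key)
    if old is not None and old["depth"] > depth:
        return                       # keep the deeper entry
    table[key] = {"depth": depth, "score": score}

tt_store(0x1234, 8, 0.25)
tt_store(0x1234, 6, -0.50)           # shallower: ignored, entry keeps depth 8
```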

>higher bound values for cutoffs from HT later in the search at the same
>ply.
>
>But I didn't try it yet.

You should try C.

A I concluded to be hopelessly missing everything. B needs more
research. C clearly gives a quicker cutoff to a subtree when there is
evidence that the opponent has no real threats.

>> On test sets it is working great. One might argue that when a move
>> gives check, one might in advance not want to reduce that move 1 ply.
>> Just that move.
>>
>> That might remove some big horizon problems of the algorithm.
>
>I thought the same, and want to try this later, too.

In general I feel that removing checks is never a good idea. I clearly
notice in DIEP that it nowadays does fewer checks in its selective
search.

Keep in mind with this DIEP reduction that my qsearch picks up quite
some tactics, because I try all kinds of moves in qsearch.



>> Of course, do not forget that I combine it with R=3, which might be
>> pretty dangerous, because things like moving a piece from d2 to c1 to
>> a3 might get missed (Deep Junior-Kasparov).
>
>I used R=2 (R=1 in some endgames).

Best regards,
Vincent

>Regards,
>Dieter
