
Iterative Deepening


John Scalo

Jun 27, 1997

I have a somewhat novice question about the use of iterative deepening. Of
course I know that ID is necessary because it allows the program to work
with time constraints and jump out of a search with a viable move from the
last iteration. But does it buy you anything else?

I know it fills the transposition tables up and gives you the PV to work
off of so it doesn't cost a lot, but is that enough to make searching to
depth n via 1, then 2, then 3, ..., then n faster than just a search to n?

What other information can you obtain during the last iteration that will
make the current iteration fast?

Thanks,

-j

brucemo

Jun 27, 1997

John Scalo wrote:
>
> I have a somewhat novice question about the use of iterative deepening. Of
> course I know that ID is necessary because it allows the program to work
> with time constraints and jump out of a search with a viable move from the
> last iteration. But does it buy you anything else?
>
> I know it fills the transposition tables up and gives you the PV to work
> off of so it doesn't cost a lot, but is that enough to make searching to
> depth n via 1, then 2, then 3, ..., then n faster than just a search to n?

The contention is that it is faster to search to 1..2..3..4 than it is to
search 4. I've never tried it. I've also never tried to show that this
holds true for 1..2..3..4..5..6..7..8..9..10..11..12 as opposed to some other
scheme like 1..2..3..5..7..9..12. I've also never tried messing with hash
table sizes to see if that has any effect upon iterative deepening
efficiency.

> What other information can you obtain during the last iteration that will
> make the current iteration fast?

Just the hash table and the PV. The hash table contains not only scores, but
some moves that were "best" when searched to the reduced depth. If they are
still best, you get a less bushy tree at the next ply.
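
In code terms that ordering trick is tiny; something like this sketch (hash_move
and the move list layout are made-up names, not anyone's actual code):

/* try the remembered hash-table move first (sketch) */
for (i = 0; i < n_moves; i++)
  if (moves[i] == hash_move) {    /* "best" move from the shallower iteration */
    tmp = moves[0]; moves[0] = moves[i]; moves[i] = tmp;  /* search it first */
    break;                        /* if still best, alpha/beta cuts the rest cheaply */
  }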

Actually, that's not fair; there might be more benefit to iterative
deepening. There is other information available to you after you finish a
ply. You have a value associated with the position, you know how long it
took to generate this value, and you know the depth of search you used to
generate the value. So perhaps you can decide to extend or reduce time based
upon this, or maybe you can direct the search in some other way. Maybe you
can collect other information during previous iterations as well.

bruce

Robert Hyatt

Jun 27, 1997

John Scalo (sca...@nospam.apple.com) wrote:
: I have a somewhat novice question about the use of iterative deepening. Of
: course I know that ID is necessary because it allows the program to work
: with time constraints and jump out of a search with a viable move from the
: last iteration. But does it buy you anything else?

It buys you a lot, because after doing a 2 ply search, you start on a 3
ply search, and you can order the moves at the first 2 plies nearly
optimally, which further aids alpha/beta. In fact, were you to try it,
you would discover that doing 1,2,.., 10 ply iterative deepening will
search fewer total nodes than just starting at 10. By quite a bit. And
it avoids the horrible problem of starting a 10 ply search in a position
where you can't even get something back from the first move within your
time limit. The problem then, of course, is "what do I play?" Iterative
deepening eliminates this totally. In the early 70's everyone used a
non-iterative deepening search and had lots of problems as above. It's
fortunately a thing of the past...
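
In skeleton form the driver is just a loop. A minimal sketch (Search, OutOfTime,
pv and the constants are placeholder names, not Crafty's actual routines):

/* iterative deepening driver (sketch) */
best_move = NO_MOVE;
for (depth = 1; depth <= MAX_DEPTH; depth++) {
  score = Search(-INFINITE, INFINITE, depth);  /* full search to this depth */
  best_move = pv[0];       /* a completed iteration always leaves a playable move */
  if (OutOfTime()) break;  /* stop whenever the clock says so; best_move stays valid */
}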

: I know it fills the transposition tables up and gives you the PV to work
: off of so it doesn't cost a lot, but is that enough to make searching to
: depth n via 1, then 2, then 3, ..., then n faster than just a search to n?

: What other information can you obtain during the last iteration that will
: make the current iteration fast?

Mainly move ordering. Also transpositions, where you can actually retrieve a
good score or an upper/lower search bound that applies in the current
search; this is particularly important in endgames...
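
A typical probe looks something like this (a sketch; the entry layout and
names are made up, not Crafty's actual code):

/* transposition table probe (sketch) */
entry = Probe(hash_key);
if (entry && entry->depth >= depth) {   /* stored result is deep enough to trust */
  if (entry->type == EXACT) return(entry->score);
  if (entry->type == LOWER && entry->score >= beta) return(entry->score);
  if (entry->type == UPPER && entry->score <= alpha) return(entry->score);
}
hash_move = entry ? entry->best_move : NO_MOVE;  /* even an unusable score gives ordering info */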

: Thanks,

: -j

chrisw

Jun 27, 1997

--
http://www.demon.co.uk/oxford-soft

John Scalo <sca...@nospam.apple.com> wrote in article
<scaloj-2706...@17.127.19.35>...

> I have a somewhat novice question about the use of iterative deepening. Of
> course I know that ID is necessary because it allows the program to work
> with time constraints and jump out of a search with a viable move from the
> last iteration. But does it buy you anything else?
>
> I know it fills the transposition tables up and gives you the PV to work
> off of so it doesn't cost a lot, but is that enough to make searching to
> depth n via 1, then 2, then 3, ..., then n faster than just a search to n?

Yes.

Also, since you can't predict which n will give you the one iteration that
comes in before the time control, it's pretty much forced to build up with
n=1,2,3,4 etc.

>
> What other information can you obtain during the last iteration that will
> make the current iteration fast?

Just general search guidance (hash, PV, killers, history, etc. etc.)

The likely alpha/beta window to operate within.


Chris Whittington

>
> Thanks,
>
> -j
>

R.S. Baker

Jun 27, 1997

John Scalo <sca...@nospam.apple.com> wrote in article
<scaloj-2706...@17.127.19.35>...
> I have a somewhat novice question about the use of iterative deepening. Of
> course I know that ID is necessary because it allows the program to work
> with time constraints and jump out of a search with a viable move from the
> last iteration. But does it buy you anything else?
>
> I know it fills the transposition tables up and gives you the PV to work
> off of so it doesn't cost a lot, but is that enough to make searching to
> depth n via 1, then 2, then 3, ..., then n faster than just a search to n?
>
> What other information can you obtain during the last iteration that will
> make the current iteration fast?
>
> Thanks,
>
> -j
>

This is just part of the equation, but even with iterative deepening, the
time required to search depth N is generally significantly greater than the
time required to search all previous depths *combined*.

Even with null move, which Hyatt et al. assert reduces the incremental ply
cost to a factor of 3 or so, depth N takes roughly twice as long as all the
previous depths combined, so the earlier iterations add only about 50% to
the total. It really doesn't cost very much and has all the benefits you
mentioned. For higher incremental ply costs (perfect A/B would give an
incremental cost factor of about 6), ID costs even less: a cost factor of 6
means you spend roughly 5 times as much time searching depth N as searching
all previous depths combined, so the overhead is only about 20%.
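
To spell out the arithmetic: if each extra ply multiplies the search time by
a factor b, depth N costs about b^N while depths 1 through N-1 together cost
about b^N/(b-1) (a geometric series). So the overhead of iterating is roughly
1/(b-1) of the final iteration: 50% for b=3, 20% for b=6.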

--
Randy

(Please remove .NOSPAM if replying via email)

Tom King

Jun 27, 1997

In article <33B408...@seanet.com>, brucemo <bru...@seanet.com>
writes

>John Scalo wrote:
>>
>> I have a somewhat novice question about the use of iterative deepening. Of
>> course I know that ID is necessary because it allows the program to work
>> with time constraints and jump out of a search with a viable move from the
>> last iteration. But does it buy you anything else?
>>
>> I know it fills the transposition tables up and gives you the PV to work
>> off of so it doesn't cost a lot, but is that enough to make searching to
>> depth n via 1, then 2, then 3, ..., then n faster than just a search to n?
>
>The contention is that it is faster to search to 1..2..3..4 than it is to
>search 4. I've never tried it. I've also never tried to show that this
>holds true for 1..2..3..4..5..6..7..8..9..10..11..12 as opposed to some other
>scheme like 1..2..3..5..7..9..12. I've also never tried messing with hash
>table sizes to see if that has any effect upon iterative deepening
>efficiency.
>
> [snip]

A *long* time ago, before I used null moves, my program used to search
1..3..5..7..9. With very poor move ordering and no null moves, it never
searched 9 ply, and played weak chess.

At that time I used a weird scheme where I'd search a move to a certain
depth, say 5, then if this move was the best move, I'd search it one ply
deeper, in this case depth 6. So I used a kind of "verification search".
At the time I thought it was cool, because it helped my program avoid
some tactical traps. I've toyed with similar "verification search" ideas
on and off since then, without success. Anyone else tried this kind of
thing? I think it's a loser, because your program spends even more time
than usual examining the PV move.

But the above makes sense to me as a human player. Although I'm a pretty
weak player myself, I tend to examine a line of play, then say to myself
something like "well, I'd love to play Ng5, let's just look at the line
of play following that a bit deeper.. Hmmm.."

I ditched this stuff back in 1994, and moved to 1..2..3..4..5 etc. I've
used this ever since. Not that every program does use this. The Russian
program, Mirage, for example, will search 6 ply, and then suddenly
something like 11 ply (I forget the exact figures). I suppose they're
using some sort of fractional definition of "ply"?! Who knows..

I don't want to piss on your fireworks, so give it a go! Play around
with different iterative deepening schemes. I'm sure every chess
programmer has looked down this avenue, but there's still the
possibility they've all missed a gold coin or two.

P.S. You mention that iterative deepening fills the hash tables with
useful moves and scores. This helps speed up future iterations, of
course. Further, iterative deepening seeds the history tables, killer
tables, and refutation tables with useful data, speeding up future
iterations.
--
Tom King

Robert Hyatt

Jun 28, 1997

R.S. Baker (rsbaker...@msn.com) wrote:
: John Scalo <sca...@nospam.apple.com> wrote in article
: <scaloj-2706...@17.127.19.35>...
: > I have a somewhat novice question about the use of iterative deepening. Of
: > course I know that ID is necessary because it allows the program to work
: > with time constraints and jump out of a search with a viable move from the
: > last iteration. But does it buy you anything else?
: >
: > I know it fills the transposition tables up and gives you the PV to work
: > off of so it doesn't cost a lot, but is that enough to make searching to
: > depth n via 1, then 2, then 3, ..., then n faster than just a search to n?
: >
: > What other information can you obtain during the last iteration that will
: > make the current iteration fast?
: >
: > Thanks,
: >
: > -j
: >

: This is just part of the equation, but even with iterative deepening, the
: time required to search depth N is generally significantly greater than the
: time required to search all previous depths *combined*.

Not necessarily. Watch this, taken from a 15 minute game played on ICC
by Crafty, P6/200:

depth time score variation (1)
4-> 0.12 1.775 bxc3 Rxf4 gxf4 h4
5 0.19 ++ bxc3!!
5-> 0.32 2.074 bxc3 Rxf4 gxf4 h4
6 0.43 ++ bxc3!!
6 1.10 3.424 bxc3 g3 Rxd4 Rxd4 Ne2+ Kg2 Nxd4 Rxc3
6-> 1.14 3.424 bxc3 g3 Rxd4 Rxd4 Ne2+ Kg2 Nxd4 Rxc3
7 2.14 3.410 bxc3 Kh2 e5 Rxc3 Rxd4 Re1 Nd3 Re3
<HT>
7-> 2.21 3.410 bxc3 Kh2 e5 Rxc3 Rxd4 Re1 Nd3 Re3
<HT>
8 5.89 3.472 bxc3 Rxc3 Rxd4 Rxd4 Ne2+ Kf1 Nxd4
g3 Rb8 Rc5 f5
8-> 6.13 3.472 bxc3 Rxc3 Rxd4 Rxd4 Ne2+ Kf1 Nxd4
g3 Rb8 Rc5 f5
9 9.13 3.392 bxc3 Rxc3 Rxd4 Rxd4 Ne2+ Kf1 Nxd4
b4 Kg7 a5 Rb8 Rc5
9-> 9.95 3.392 bxc3 Rxc3 Rxd4 Rxd4 Ne2+ Kf1 Nxd4
b4 Kg7 a5 Rb8 Rc5
time: 17.57 cpu:99% mat:2 n:1849608 nps:105390

So sometimes the current iteration takes way longer than all the previous
ones combined, sometimes it doesn't. Above, ply 9 takes about 1/2 of the
time for 1-8. Of course, ply 8 took 3x the time for ply=7 and earlier.


: Even with null move, which Hyatt et al. assert reduces the incremental ply
: cost to a factor of 3 or so, depth N takes roughly twice as long as all the
: previous depths combined, so the earlier iterations add only about 50% to
: the total. It really doesn't cost very much and has all the benefits you
: mentioned. For higher incremental ply costs (perfect A/B would give an
: incremental cost factor of about 6), ID costs even less: a cost factor of 6
: means you spend roughly 5 times as much time searching depth N as searching
: all previous depths combined, so the overhead is only about 20%.

: --
: Randy

Right... the point of the algorithm is that the cost of a shallow search is
offset by the savings produced by better move ordering at the next search.
And since alpha/beta is *so* sensitive to move ordering, this is a critical
issue.


Dan Kirkland

Jun 30, 1997

In article <scaloj-2706...@17.127.19.35>,
sca...@nospam.apple.com (John Scalo) writes:

>I have a somewhat novice question about the use of iterative deepening. Of
>course I know that ID is necessary because it allows the program to work
>with time constraints and jump out of a search with a viable move from the
>last iteration. But does it buy you anything else?
>
>I know it fills the transposition tables up and gives you the PV to work
>off of so it doesn't cost a lot, but is that enough to make searching to
>depth n via 1, then 2, then 3, ..., then n faster than just a search to n?
>
>What other information can you obtain during the last iteration that will
>make the current iteration fast?

Searching with a zero-width window works best (by far!) when the best
move is searched first! A search with a zero-width window is very
fast because ALL the moves either fail high or fail low. If the best
move is first, then after the first move ALL others can be searched
with a zero-width window, and all of these will fail low and not
need to be re-searched. If the best move is not first, then it (and
maybe others) will fail high on the zero-width search and need to
be re-searched with a much slower search (non-zero-width window).

Now in the real world some moves are going to need re-searching from
time to time. But by using iterative deepening these re-searches
can be kept to a minimum. Iterative deepening and zero-width searching
go together.

I don't plan on any hash tables in my calculator (HP48) chess program.
When I first converted my chess program from full alpha-beta window
searching (without iterative deepening) to PVS (with iterative deepening),
the PVS version was a little slower overall. But not by much! And I
don't expect to ever go back! I may have already made enough
improvement to make the PVS system faster. If not it will surely be
faster when I get through with it.

I recently made a couple of changes that improved the speed a little.
And I did a bit of testing on my killers that shows there are more
speedups to be made there. Also I think I can gain some speed
by keeping any old PVs around from the last iteration. And there
are likely a few things I can do that will help in place of the
missing hash tables.

As it is I think PVS (with iterative deepening) is a win even without
hash tables. And with hash tables it should be a BIG win!

(Maybe I will even get around to trying some limited hashing...)

dan

SMO CFI

Jul 2, 1997

Hi folks.

Would someone mind giving a quick rundown on "null moves?" I'm thinking
of tweaking an old program, and this sounds promising.

Thanks.

Will


Robert Hyatt

Jul 3, 1997

SMO CFI (smo...@aol.com) wrote:
: Would someone mind giving a quick rundown on "null moves?" I'm thinking
: of tweaking an old program, and this sounds promising.


Easy to implement...

what you do is play a null-move (which means do nothing, or "pass") and
then call search normally for the other side, but rather than passing it
"depth-1", you pass it "depth-1-R" (where I am using R=2 at present). If
this search returns a value >= beta, you simply fail high and return at
this point, because you just gave your opponent two moves in a row and he
couldn't do anything effective. And if that is the case, your position
must be very good itself. The savings comes from the depth-1-R: you
refuted your opponent's last move with a 2-ply shallower search than you
would normally have used.

something like this:

val=-Search(-beta,-alpha,depth-R-1);  /* reduced-depth search of the reply */
if (val >= beta) return(val);         /* opponent moved twice and still can't reach beta */

don't try the null if you are in check, for one case, since it can't
work there (it hangs the king, or close to it)...
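
Putting it together, roughly (MakeNullMove, UnmakeNullMove, and the in_check
and null-flag bookkeeping are stand-ins for whatever your program provides):

/* null-move search inside the full-width search (sketch) */
if (!in_check && !last_move_was_null && depth > R) {
  MakeNullMove();                         /* "pass": flip the side to move */
  val=-Search(-beta,-alpha,depth-R-1);    /* reduced-depth reply search */
  UnmakeNullMove();
  if (val >= beta) return(val);           /* two moves in a row and still nothing: fail high */
}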

SMO CFI

Jul 3, 1997

Robert,

Thanks for the info. I suppose it would also be possible to give the
computer two moves in a row, and see if the position got better. Don't
know about that, though.

I saw some discussion about disabling null moves until ply 5 or 6. Do you
use it only at terminal nodes, as part of the eval? If not, how do you
decide which nodes to try the null move?

-Will


Robert Hyatt

Jul 4, 1997

SMO CFI (smo...@aol.com) wrote:
: I saw some discussion about disabling null moves until ply 5 or 6. Do you
: use it only at terminal nodes, as part of the eval? If not, how do you
: decide which nodes to try the null move?

I don't do null-move in the quiescence (capture) search. I do it at
*every* node in the basic search, except at the root where it makes no
sense, and I don't do it at a ply that follows a null-move, since two
nulls in a row makes no sense...

You can disable them whenever you want, but everywhere you don't try
one, you lose a little of the advantage. Of course you also eliminate
a few of the problems too. :) Your choice. :)


Dusan Dobes

Jul 4, 1997

Robert Hyatt (hy...@crafty.cis.uab.edu) wrote:

: SMO CFI (smo...@aol.com) wrote:
: : Hi folks.

: : Would someone mind giving a quick rundown on "null moves?" I'm thinking
: : of tweaking an old program, and this sounds promising.

: : Thanks.

: : Will


: Easy to implement...

: what you do is play a null-move (which means do nothing, or "pass") and
: then call search normally for the other side, but rather than passing it
: "depth-1", you pass it "depth-1-R" (where I am using R=2 at present). If
: this search returns a value >= beta, you simply fail high and return at
: this point, because you just gave your opponent two moves in a row and he
: couldn't do anything effective. And if that is the case, your position
: must be very good itself. The savings comes from the depth-1-R: you
: refuted your opponent's last move with a 2-ply shallower search than you
: would normally have used.

: something like this:

: val=-Search(-beta,-alpha,depth-R-1);
: if (val >= beta) return(val);

What about adding one more line:

if (val > alpha) alpha=val;

This might result in more cuts in the subtree.

Dusan


Robert Hyatt

Jul 4, 1997

Dusan Dobes (do...@bart1.math.muni.cz) wrote:

: What about adding one more line:

: if (val > alpha) alpha=val;

: This might result in more cuts in the subtree.

: Dusan

Depends on your implementation of course. In Crafty I use PVS, so except
for a few dozen nodes, beta=alpha+1. So the null move either fails high,
or it fails low. And if it fails low, there's no info to be gained due
to the zero-width window used in PVS...


David Eppstein

Jul 4, 1997

hy...@crafty.cis.uab.edu (Robert Hyatt) writes:
> Depends on your implementation of course. In Crafty I use PVS, so except
> for a few dozen nodes, beta=alpha+1. So the null move either fails high,
> or it fails low. And if it fails low, there's no info to be gained due
> to the zero-width window used in PVS...

This no longer has much to do with iterated deepening, but...

In my (non-chess) program, I tried PVS, but ended up using instead
an aspiration search with a wider alpha-beta window (roughly half the
value of a pawn). When I used PVS, it was going through far too many
iterations to get to the correct window (of course, those iterations
went more quickly, especially the later ones because of the hash table,
but even so PVS ended up being slower).

I assume you've tried the same experiment in crafty and found that PVS
works better than wider windows.

I'm wondering whether my failure to get PVS to work well indicates some
other problem with my program. Like maybe a bug in the fail-softness of
the alphabeta, or (I think more likely) bad move ordering....
I suppose the other thing I could try would be some kind of binary
search for the correct window (rather than the way I have it now, in which
I use the eval that failed high or low from the previous window).
--
David Eppstein UC Irvine Dept. of Information & Computer Science
epps...@ics.uci.edu http://www.ics.uci.edu/~eppstein/

Robert Hyatt

Jul 4, 1997

David Eppstein (epps...@euclid.ics.uci.edu) wrote:

: hy...@crafty.cis.uab.edu (Robert Hyatt) writes:
: > Depends on your implementation of course. In Crafty I use PVS, so except
: > for a few dozen nodes, beta=alpha+1. So the null move either fails high,
: > or it fails low. And if it fails low, there's no info to be gained due
: > to the zero-width window used in PVS...

: This no longer has much to do with iterated deepening, but...

: In my (non-chess) program, I tried PVS, but ended up using instead
: an aspiration search with a wider alpha-beta window (roughly half the
: value of a pawn). When I used PVS, it was going through far too many
: iterations to get to the correct window (of course, those iterations
: went more quickly, especially the later ones because of the hash table,
: but even so PVS ended up being slower).

: I assume you've tried the same experiment in crafty and found that PVS
: works better than wider windows.

I'm not sure what you mean by "many iterations." In Crafty, I start
an iteration with a window of alpha=last_search-.25 and beta=last_search+.25
and then use PVS. The left-most successor of each position is searched with
this window, but after that, everyone uses alpha,alpha+1, and the only
overhead occurs when a later move "fails high" on this null-window and makes
me search again with the original beta value. For me, this was way cheaper
than the basic alpha/beta search, but it's critical that you have decent
move ordering.
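
In outline it's something like this (a sketch, assuming centipawn scores;
Search and the constants are placeholders for your own routines):

/* aspiration window around the last iteration's score (sketch) */
alpha = last_score - 25;                    /* .25 pawns */
beta  = last_score + 25;
value = Search(alpha, beta, depth);
if (value >= beta)                          /* fail high: relax the ceiling */
  value = Search(value, INFINITE, depth);
else if (value <= alpha)                    /* fail low: relax the floor */
  value = Search(-INFINITE, value, depth);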

: I'm wondering whether my failure to get PVS to work well indicates some
: other problem with my program. Like maybe a bug in the fail-softness of
: the alphabeta, or (I think more likely) bad move ordering....

failsoft isn't an issue that I can see. I'm currently running with
this (allowing search to return values outside alpha/beta window) but it
seems to be no better or worse than clamping the scores to alpha or
beta if the score falls outside the window. I'm doing this to try MTD(f)
again with a working failsoft to see if it is any better than my last
test results...

: I suppose the other thing I could try would be some kind of binary
: search for the correct window (rather than the way I have it now, in which
: I use the eval that failed high or low from the previous window).

Wait... it sounds like we are talking about different ideas.

PVS looks something like this:

while(move=next()) {
  if (first) value=-Search(-beta,-alpha,...)
  else {
    value=-Search(-alpha-1,-alpha,...)
    if (value>alpha && value<beta) value=-Search(-beta,-alpha,...)
  }
}

and there you have it. The first move is searched with the normal window
(which is still almost always X,X+1, except along the left-most branches
as mentioned before). The remainder are searched with alpha,alpha+1 (where
alpha has been raised to the best score so far), and if one fails high, we
relax and search again with the more optimistic (original) beta value. Also
notice that this won't re-search if we are already searching where
beta=alpha+1, because that window would be the same as the original window...


David Eppstein

Jul 5, 1997

hy...@crafty.cis.uab.edu (Robert Hyatt) writes:
> In Crafty, I start
> an iteration with a window of alpha=last_search-.25 and beta=last_search+.25
> ... failsoft isn't an issue that I can see. I'm currently running with
> this (allowing search to return values outside alpha/beta window) but it
> seems to be no better or worse than clamping the scores to alpha or
> beta if the score falls outside the window.

Yes, I also settled on a very similar initial search window.
Then when I fail high I set new window = [eval,eval+.5] where
the eval is the result from the search that failed high
(similarly from fail low). Clamping to alpha or beta would
shift the window more slowly when I fail high or fail low,
resulting in more search iterations...

> Wait... it sounds like we are talking about different ideas.
> PVS looks something like this:
> while(move=next()) {
>   if (first) value=-Search(-beta,-alpha,...)
>   else {
>     value=-Search(-alpha-1,-alpha,...)
>     if (value>alpha && value<beta) value=-Search(-beta,-alpha,...)
>   }
> }

Yes, I was confused into thinking that PVS meant always searching with a
zero-width window (not treating the first line differently), i.e. just
using an initial window of alpha=last and beta=last+1. Which meant lots
(sometimes hundreds) of fail-highs before finally reaching the correct
search window. This is not as bad as it sounds, because the hash table
makes most of these searches very fast; I think throwing a binary search
on top of it (instead of using the eval from a fail high or fail low to
set the next window) might actually work quite well.

I'll have to try what you suggest here, but I think first I have to get
a better move ordering. Probably I should try the method you use in
Crafty of doing a shallower search to get a good PV (when I don't get
one from the hash table).

Robert Hyatt

Jul 5, 1997

David Eppstein (epps...@euclid.ics.uci.edu) wrote:

: hy...@crafty.cis.uab.edu (Robert Hyatt) writes:
: > In Crafty, I start
: > an iteration with a window of alpha=last_search-.25 and beta=last_search+.25
: > ... failsoft isn't an issue that I can see. I'm currently running with
: > this (allowing search to return values outside alpha/beta window) but it
: > seems to be no better or worse than clamping the scores to alpha or
: > beta if the score falls outside the window.

: Yes, I also settled on a very similar initial search window.
: Then when I fail high I set new window = [eval,eval+.5] where
: the eval is the result from the search that failed high
: (similarly from fail low). Clamping to alpha or beta would
: shift the window more slowly when I fail high or fail low,
: resulting in more search iterations...

Here's what is likely the "best" way, although I don't do this in
Crafty yet (I did it in Cray Blitz):

if the score is near zero and fails high, shift the window .5 pawns up
(or down if it fails low). If that fails high, shift it up a pawn. If
that fails high shift up a queen, and if that fails high shift to +infinity.

The reason this works well is that in very wild tactical positions, you
can find zillions of mates and also a forced way to win a pawn. If you
shift the window too high, the mates won't fail high and get cut off, which
will let the tree explode. Harry found the first position where this was
critical, although I don't have it handy; the goal was to win a pawn. But
searching with beta=+infinity would *never* return a score because the pawn
was hidden in the midst of many unforced mates...
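
As a sketch, the widening schedule might look like this (the names and the
exact steps are illustrative, not Cray Blitz's actual code):

/* staged widening after a fail high (sketch, centipawn scores) */
static const int step[] = { 50, 100, 900 };   /* half pawn, pawn, queen */
i = 0;
while (value >= beta) {
  beta = (i < 3) ? value + step[i++] : INFINITE;  /* lift the ceiling one notch */
  value = Search(value, beta, depth);
  if (beta == INFINITE) break;                    /* nothing fails high against +infinity */
}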

In crafty I simply go to +inf or -inf if the first move fails high. In
PVS if a non-first move fails high, I revert to the original beta value
first, and if that fails high, I then pop to +inf. Not the most
effective, but I have not gone back to re-do this yet. It is on my list
for one day...


: > Wait... it sounds like we are talking about different ideas.
: > PVS looks something like this:
: > while(move=next()) {
: >   if (first) value=-Search(-beta,-alpha,...)
: >   else {
: >     value=-Search(-alpha-1,-alpha,...)
: >     if (value>alpha && value<beta) value=-Search(-beta,-alpha,...)
: >   }
: > }

: Yes, I was confused into thinking that PVS meant always searching with a
: zero-width window (not treating the first line differently), i.e. just
: using an initial window of alpha=last and beta=last+1. Which meant lots
: (sometimes hundreds) of fail-highs before finally reaching the correct
: search window. This is not as bad as it sounds, because the hash table
: makes most of these searches very fast; I think throwing a binary search
: on top of it (instead of using the eval from a fail high or fail low to
: set the next window) might actually work quite well.

What you are describing sounds like MTD(f)... There is a version of MTD(f)
that uses a binary search over the eval. In fact, when I tried this sort of
algorithm about 15 years ago, I only did it the binary way. There might be
a positive aspect to non-binary however, because you are continually failing
in only one direction, search after search... although I have not yet given
a lot of thought to how this is affected by a condition Jonathan Schaeffer
called "the minimax wall."


: I'll have to try what you suggest here, but I think first I have to get
: a better move ordering. Probably I should try the method you use in
: Crafty of doing a shallower search to get a good PV (when I don't get
: one from the hash table).

that is really a trivial thing (internal iterative deepening) that seems to
be worth about 10% from my older measurements...
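
In sketch form (hash_move, pv, and ply are placeholders for your own
bookkeeping, not Crafty's actual names):

/* internal iterative deepening (sketch) */
if (hash_move == NO_MOVE && depth >= 3) {
  Search(alpha, beta, depth-2);    /* shallower search, used only to find a move to try first */
  hash_move = pv[ply];             /* order that move first at this node */
}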

