HIARCS 5 Maximum Search Depth


Kevin Miller

Jan 7, 1997, 3:00:00 AM

Forgive this question, but I've been out of the chess software loop
for a while:

When the HIARCS 5 documentation states that the maximum search depth
is 31, what exactly does that mean? I fed a particular problem to it
and it fired back with something like a "mate in 22" message, which
to me would put the depth of search quite a bit beyond 31...
Thanks in advance!


Kevin Miller

"Jazz is not dead; it just smells funny" -- Frank Zappa

Tom C. Kerrigan

Jan 11, 1997, 3:00:00 AM

Kevin Miller (kmi...@tiac.net) wrote:
> Forgive this question, but I've been out of the chess software loop
> for a while:

> When the HIARCS 5 documentation states that the maximum search depth
> is 31, what exactly does that mean? I fed a particular problem to it
> and it fired back with something like a "mate in 22" message, which
> to me would put the depth of search quite a bit beyond 31...
> Thanks in advance!

Probably that iterative deepening doesn't go beyond 31 ply. Extensions and
the quiescence search account for the Mate-in-22's. Stobor has room to play
out 64-ply sequences, but I limit the iterations to 50 so it won't barf. I
assume Mark is doing something similar.

(Actually, I just fixed Stobor to play 128-ply sequences and search 100 ply
to solve a few of the problems that have been posted here recently.)
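[The split Tom describes, a hard array bound on line length versus a cap on the iteration count, can be sketched roughly as follows. The macro names and the stub driver are invented for illustration; this is not Stobor's actual code.]

```c
#include <assert.h>

#define MAX_PLY        64   /* longest line the per-ply arrays can hold */
#define MAX_ITERATIONS 50   /* deepest nominal depth the driver attempts */

/* Stub: a real search would return a score for this nominal depth. */
static int search(int depth) { return depth; }

/* Iterative-deepening driver: the nominal depth never exceeds
 * MAX_ITERATIONS, while extensions and quiescence may push single
 * lines beyond it, up to the hard MAX_PLY array bound. */
static int deepest_iteration(void) {
    int depth, last = 0;
    for (depth = 1; depth <= MAX_ITERATIONS; depth++)
        last = search(depth);
    return last;
}
```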

Cheers,
Tom

Komputer Korner

Jan 13, 1997, 3:00:00 AM

Hi Tom,
I thought there was a limit on ply searching and you have
confirmed it. Can you please tell me what limits each PC program
to a certain depth? What did your fix involve doing to Stobor?
--
Komputer Korner

The komputer that kouldn't keep a password safe from
prying eyes, kouldn't kompute the square root of 36^n,
kouldn't find the real Motive and variation tree in
ChessBase and missed the real learning feature of Nimzo.

Steven J. Edwards

Jan 14, 1997, 3:00:00 AM

hy...@cis.uab.edu (Robert Hyatt) writes:

>Can't answer for Tom, but the change is generally "trivial". In crafty, it's
>changed by a #define MAXPLY 64 modification, and then simply re-compiling. It's
>not "free" however... the array used to "back up" the PV during the search gets
>bigger, which has a negative impact on cache as well as making copying the PV
>around take longer...

>64 plies is a reasonable limit. I used it in Cray Blitz and never saw a need to
>go higher... it's also in Crafty (same value)...

Spector also has a similar definition:

#define plyL 64

As Bob noted, principal variation storage grows with maximum ply; the
usual relationship is O(n^2).
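[The O(n^2) growth comes from the usual triangular PV table; here is a minimal sketch. The array and function names are invented, not Spector's actual code, and real code typically sizes the arrays one larger so ply+1 is always valid.]

```c
#include <assert.h>

#define plyL 64   /* maximum ply, as in the definition above */

/* Triangular PV table: the line backed up at ply p can be at most
 * plyL - p moves long, so total storage is plyL*(plyL+1)/2 entries,
 * i.e. O(n^2) in the maximum ply. */
static int pv[plyL][plyL];
static int pv_length[plyL];

/* When a new best move is found at 'ply', prepend it to the line
 * already backed up from ply+1. */
static void update_pv(int ply, int move) {
    int i;
    pv[ply][0] = move;
    for (i = 0; i < pv_length[ply + 1]; i++)
        pv[ply][i + 1] = pv[ply + 1][i];
    pv_length[ply] = pv_length[ply + 1] + 1;
}
```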

Search tree depth is merely a lower limit on maximum ply. Using the
transposition table (or similar mechanism) to glue together variations
may result in a predicted variation (candidate or final) deeper than
the search which produced it.

-- Steven (s...@mv.mv.com)

Komputer Korner

Jan 14, 1997, 3:00:00 AM

Robert Hyatt wrote:
>
>snipped
> : Hi Tom,

> : I thought there was a limit on ply searching and you have
> : confirmed it. Can you please tell me what limits each PC program
> : to a certain depth. What did your fix involve doing to Stobor?
> : --
> : Komputer Korner
>
> Can't answer for Tom, but the change is generally "trivial". In crafty, it's
> changed by a #define MAXPLY 64 modification, and then simply re-compiling. It's
> not "free" however... the array used to "back up" the PV during the search get's
> bigger, which has a negative impact on cache as well as making copying the PV
> around take longer...
>
> 64 plies is a reasonable limit. I used it in Cray Blitz and never saw a need to
> go higher... it's also in Crafty (same value)...

Thanks for the answer, and I guess that 64 is enough because I don't
know of any Kasparov combination that was any longer. Lasker's longest
was 51.

Ronald de Man

Jan 14, 1997, 3:00:00 AM

hy...@cis.uab.edu (Robert Hyatt) writes:
>Can't answer for Tom, but the change is generally "trivial". In crafty, it's
>changed by a #define MAXPLY 64 modification, and then simply re-compiling. It's
>not "free" however... the array used to "back up" the PV during the search get's
>bigger, which has a negative impact on cache as well as making copying the PV
>around take longer...

If you use PVS, with some care you do not need to copy PV's during the search.
At each ply you keep one PV move, and you only overwrite this move when
you know that you're searching a new PV (which is when alpha+1<beta).

This probably saves some time, even with MAXPLY=64.

In my program MAXDEPTH (the equivalent of your MAXPLY, and set to 100 in
my case) is only used for global arrays, so increasing MAXDEPTH will
not slow down my search.
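[A rough sketch of the bookkeeping Ronald describes, with invented names: keep one move per ply and overwrite it only when the node's window is wider than a null window, i.e. only on a genuine new PV.]

```c
#include <assert.h>

#define MAXDEPTH 100   /* used only to size global arrays, as described */

static int pv_move[MAXDEPTH];

/* Record a new best move at 'ply', but only when this node was
 * searched with a window wider than a null window (alpha+1 < beta),
 * i.e. when we know we are on a new PV. */
static void record_best(int ply, int move, int alpha, int beta) {
    if (alpha + 1 < beta)
        pv_move[ply] = move;
}
```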

Ronald de Man


brucemo

Jan 14, 1997, 3:00:00 AM

Komputer Korner wrote:

> Hi Tom,
> I thought there was a limit on ply searching and you have
> confirmed it. Can you please tell me what limits each PC program
> to a certain depth. What did your fix involve doing to Stobor?

I'll answer because I'd been meaning to anyway.

Chess programs I'm familiar with use "iterative deepening". The idea
is that you search a position to one ply, and get a main line (PV)
for this position. Then you search it two plies deep, and get a new
PV. Then you search it three plies deep, etc.

When I say "three plies" I mean that this is the basic "full-width"
depth a program searches. Maybe it doesn't search all positions to
three plies, maybe it searches some to two plies, and maybe it
searches some other positions to twenty plies. But the basic idea is
to search three plies.

The reasons you do iterative deepening are:

1) You are always in a state where you can make a move, assuming you
complete a one-ply search. So if you run out of search time for this
move, you don't have to go, oops, I don't have a move to make, you
just make the move at the root of the last PV that you returned.

2) You take the PV you get back from the search to depth N, and feed
it back into the search to depth N+1. Alpha-beta is a sensitive
algorithm: if you search the moves in a position in any old order,
you might spend a lot of time getting a result back. But if you
search the best move first, you get a result back quicker. The PV
move you got back at depth N is likely to be still best at depth N+1.
So it's supposedly faster to do a nine-ply search by searching to
depth one, two, three, ..., nine, rather than just saying, "Search
nine plies now!" at the root.
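[The loop Bruce describes might be sketched like this. search_to_depth() is a stub, and pv_root_move is an invented name for the root move fed back into the next iteration for move ordering.]

```c
#include <assert.h>

static int pv_root_move;   /* root move of the last completed PV */

/* Stub: a real search would try pv_root_move first at the root and
 * return a score plus a new PV. */
static int search_to_depth(int depth) { return depth * 10; }

/* Iterative deepening: after every completed iteration there is
 * always a move ready to play, even if time runs out later. */
static int iterate(int max_iterations) {
    int depth, score = 0;
    for (depth = 1; depth <= max_iterations; depth++) {
        score = search_to_depth(depth);
        pv_root_move = depth;   /* stand-in for the PV's root move */
    }
    return score;
}
```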

Anyway. There are a couple of limits on programs.

1) The absolute depth a line can reach. Maybe you are supposed to be
searching 12 plies deep this iteration, but suppose this line has a
lot of checks and other interesting stuff, and you are out 50 plies.
Well, perhaps you didn't expect this to happen, so kaboom. An
obvious solution is to increase the size of the data structures you
use, so that you can go more than 50 plies. I've got this limit set
to about a hundred in my program. Some programs have this limit
hard-coded in their search, others just make it so large that they
never have to worry about exceeding it.

2) The number of iterations you will try. There are some rare
positions that will take very little time to search, no matter how
deeply you search them. One of them was posted here recently. If
you don't put a bound on your maximum iteration, you could end up
starting a 100-ply search, or something rude like that, and depending
upon your data-structures, something bad can happen. So the programs
I am familiar with put a lid on this value, too. In my program, it
was 40, which wasn't enough for it to solve one of the problems
posted recently, and apparently wasn't enough for Tom's program
either.

You can fiddle with either of these values, at will. The only
tradeoff is a little bit of memory usage.
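[The first limit is typically enforced by a guard at the top of the search. A toy sketch with invented names: here every move is "extended" by a full ply, so the nominal depth never decreases and only the ply guard stops the recursion.]

```c
#include <assert.h>

#define ABSOLUTE_MAX_PLY 100   /* "about a hundred", as above */

static int wall_hits;

/* Toy search in which each move costs one ply of depth but is
 * extended by one ply, so depth never shrinks: without the ply
 * guard this would recurse until the stack or arrays blew up. */
static int search(int ply, int depth) {
    if (ply >= ABSOLUTE_MAX_PLY) {
        wall_hits++;
        return 0;              /* stand-in for a static evaluation */
    }
    if (depth <= 0)
        return 0;              /* stand-in for quiescence */
    return search(ply + 1, depth - 1 + 1);  /* -1 for the move, +1 extension */
}
```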

Anyone who says there's some sort of physical limit imposed on any of
this by the structure of the game of chess combined with the
characteristics of modern computers, is wrong. You can search
anything as deep as you want, sometimes this takes a long time and
sometimes it takes 1/30 of a second. It all depends upon the
position and the way you've written your program.

bruce

Robert Hyatt

Jan 15, 1997, 3:00:00 AM

Komputer Korner (kor...@netcom.ca) wrote:
: Robert Hyatt wrote:
: >
: >snipped
: > : Hi Tom,

: > : I thought there was a limit on ply searching and you have
: > : confirmed it. Can you please tell me what limits each PC program
: > : to a certain depth. What did your fix involve doing to Stobor?
: > : --
: > : Komputer Korner
: >
: > Can't answer for Tom, but the change is generally "trivial". In crafty, it's

: > changed by a #define MAXPLY 64 modification, and then simply re-compiling. It's
: > not "free" however... the array used to "back up" the PV during the search get's
: > bigger, which has a negative impact on cache as well as making copying the PV
: > around take longer...
: >
: > 64 plies is a reasonable limit. I used it in Cray Blitz and never saw a need to

: > go higher... it's also in Crafty (same value)...

: Thanks for the answer, and I guess that 64 is enough because I don't
: know of any Kasparov combination that was any longer. Lasker's longest
: was 51.
: --
: Komputer Korner


The real issue is "does the program bump into this too often?" If so, it
screws the search up. If not, it causes no problems. I'd certainly be
concerned if Crafty was hitting this "barrier" very often, but not because
the wall is at 64, rather I'd be concerned because Crafty is getting to that
wall, which likely indicates uncontrolled search extensions. I've had 'em
of course... :)

Tom C. Kerrigan

Jan 15, 1997, 3:00:00 AM

Robert Hyatt (hy...@cis.uab.edu) wrote:

> The real issue is "does the program bump into this too often?" If so, it
> screws the search up. If not, it causes no problems. I'd certainly be
> concerned if Crafty was hitting this "barrier" very often, but not because
> the wall is at 64, rather I'd be concerned because Crafty is getting to that
> wall, which likely indicates uncontrolled search extensions. I've had 'em
> of course... :)

I don't even check to see if the wall is being bumped into, so if I have
an uncontrolled extension my program goes all weird on me. :)

A mate-in-something problem was posted recently which Stobor solves after
a 55 ply search, or so. This is why I increased the array size. Then you
posted something about Crafty and Ferret searching over 50 ply in actual
games, so I figured I'd keep the change. I haven't measured the
performance hit, but observation tells me that it's pretty tiny...

Cheers,
Tom

Cpsoft

Jan 15, 1997, 3:00:00 AM

Komputer Korner <kor...@netcom.ca> wrote in article <32D9E1...@netcom.ca>...


> Tom C. Kerrigan wrote:
> >
> > Kevin Miller (


> > > Forgive this question, but I've been out of the chess software loop
> > > for a while:
> >
> > > When the HIARCS 5 documentation states that the maximum search depth
> > > is 31, what exactly does that mean? I fed a particular problem to it
> > > and it fired back with something like a "mate in 22" message, which
> > > to me would put the depth of search quite a bit beyond 31...
> > > Thanks in advance!
> >
> > Probably that iterative deepening doesn't go beyond 31 ply. Extensions and
> > the quiescence search account for the Mate-in-22's. Stobor has room to play
> > out 64-ply sequences, but I limit the iterations to 50 so it won't barf. I
> > assume Mark is doing something similar.
> >
> > (Actually, I just fixed Stobor to play 128-ply sequences and search 100 ply
> > to solve a few of the problems that have been posted here recently.)
> >
> > Cheers,
> > Tom
>

> Hi Tom,
>  I thought there was a limit on ply searching and you have
> confirmed it.

There is no fixed limit.

If (big if) a line could continue with only one forced move per side, then it would
be perfectly feasible for a program to allow this to extend for 100, 200, 1000
moves deep.

The problem arises when the branching factor exceeds 1.0, even by a little bit.



>
> Can you please tell me what limits each PC program
> to a certain depth.

A magic number, plucked from the sky, by the programmer.

Too deep (even with 1-move forcers) still requires memory for each depth level.

Too deep is overkill.

Everything has to stop somewhere - pick a number from the sky.



> What did your fix involve doing to Stobor?

Probably increasing the magic number and allocating some more memory.

Chris Whittington

Robert Hyatt

Jan 15, 1997, 3:00:00 AM

Ronald de Man (de...@wsdw10.win.tue.nl) wrote:

: hy...@cis.uab.edu (Robert Hyatt) writes:
: >Can't answer for Tom, but the change is generally "trivial". In crafty, it's
: >changed by a #define MAXPLY 64 modification, and then simply re-compiling. It's
: >not "free" however... the array used to "back up" the PV during the search get's
: >bigger, which has a negative impact on cache as well as making copying the PV
: >around take longer...

: If you use PVS, with some care you do not need to copy PV's during the search.


: At each ply you keep one PV move, and you only overwrite this move when
: you know that you're searching a new PV (which is when alpha+1<beta).

: This probably saves some time, even with MAXPLY=64.

: In my program MAXDEPTH (the equivalent of your MAXPLY, and set to 100 in
: my case) is only used for global arrays, so increasing MAXDEPTH will
: not slow down my search.

: Ronald de Man

Will have to think about this, but using PVS, determining what is a PV node
and what is not is difficult, since you can research a node after a fail high
way down in the tree...

I'm also not sure how you make this work when a move fails high at the root,
and then fails low, so that most of us throw the fail high away... the old
PV is therefore lost it would seem... As I said, it needs some thought by
me...

Bob


Robert Hyatt

Jan 16, 1997, 3:00:00 AM

Tom C. Kerrigan (kerr...@merlin.pn.org) wrote:
: Robert Hyatt (hy...@cis.uab.edu) wrote:

What you are doing is dangerous. I debugged a problem many months ago that
led directly to this. I gave the position to Bruce and it didn't break Ferret
because he had a depth limit check where I didn't, but it produced some really
huge search depths... In crafty, if I let it hit 65, I die, instantly.


Robert Hyatt

Jan 16, 1997, 3:00:00 AM

Cpsoft (po...@cpsoft.demon.co.uk) wrote:

: Komputer Korner <kor...@netcom.ca> wrote in article

Doesn't have to *exceed* 1, just has to *reach* 1... which is not nearly
so hard. If you start with a 2 ply depth limit, and extend every ply, you
*never* terminate. That's one reason I liked the fractional ply increment
and used it in Cray Blitz and for about six months now in Crafty also...

the one-reply and capture extension are two good candidates for making this
happen... I check you and automatically extend one ply, if you do a one-reply
extension you might extend once, I check you again, and ... the search goes
on and on...
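[Fractional-ply extensions, as Bob describes them, can be sketched like this: each extension gives back less than a full ply, so a chain of mutually triggering extensions still converges instead of holding the effective depth constant. The unit scale and trigger name are invented for illustration.]

```c
#include <assert.h>

#define PLY       4   /* one ply = 4 internal units (invented scale) */
#define CHECK_EXT 3   /* a check extends by 3/4 ply, not a full ply  */

/* Remaining depth (in units) after n consecutive checking moves:
 * each move costs PLY and gives back CHECK_EXT, a net loss of 1/4
 * ply, so the chain terminates.  With CHECK_EXT == PLY the depth
 * would never decrease and the search would never end, which is
 * Bob's point about the extension rate reaching 1.0. */
static int depth_after_checks(int depth_units, int n) {
    while (n-- > 0 && depth_units > 0)
        depth_units += CHECK_EXT - PLY;
    return depth_units;
}
```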


: >
: > Can you please tell me what limits each PC program
: > to a certain depth.

: A magic number, plucked from the sky, by the programmer.

: Too deep (even with 1-move forcers) still requires memory for each depth
: level.

: Too deep is overkill.

: Everything has to stop somewhere - pick a number from the sky.

: > What did your fix involve doing to Stobor?

: Probably increasing the magic number and allocating some more memory.

: Chris Whittington

: > --
: > Komputer Korner
: >
: > The komputer that kouldn't keep a password safe from
: > prying eyes, kouldn't kompute the square root of 36^n,
: > kouldn't find the real Motive and variation tree in
: > ChessBase and missed the real learning feature of Nimzo.



Komputer Korner

Jan 16, 1997, 3:00:00 AM

Cpsoft wrote:
>
snipped

>
> Too deep is overkill.
>
> Everything has to stop somewhere - pick a number from the sky.
>
> > What did your fix involve doing to Stobor?
>
> Probably increasing the magic number and allocating some more memory.
>
> Chris Whittington
>
> > --
> > Komputer Korner
> >
> > The komputer that kouldn't keep a password safe from
> > prying eyes, kouldn't kompute the square root of 36^n,
> > kouldn't find the real Motive and variation tree in
> > ChessBase and missed the real learning feature of Nimzo.
> >

Do you agree that the cost (cost in terms of slower search speed) of
having large data arrays is too high to enable the odd 100 ply searches?
The commercial programmers obviously think so?

Ronald de Man

Jan 16, 1997, 3:00:00 AM

hy...@cis.uab.edu (Robert Hyatt) writes:

>Ronald de Man (de...@wsdw10.win.tue.nl) wrote:

>: If you use PVS, with some care you do not need to copy PV's during the search.
>: At each ply you keep one PV move, and you only overwrite this move when
>: you know that you're searching a new PV (which is when alpha+1<beta).

>: This probably saves some time, even with MAXPLY=64.

>: Ronald de Man

>Will have to think about this, but using PVS, determining what is a PV node
>and what is not is difficult, since you can research a node after a fail high
>way down in the tree...

>I'm also not sure how you make this work when a move fails high at the root,
>and then fails low, so that most of us throw the fail high away... the old
>PV is therefore lost it would seem... As I said, it needs some thought by
>me...

>Bob

I've just looked at my code and noticed that when a move fails high at the
root and then fails low I just keep it. Maybe I should do something about
that. I remember cases when my program had found a mate in 6 (or so), then
failed high to announce a mate in 5, after which the research of the move
returned a value like +24, but no mate. Assuming my search is correct, I
guess this really was a mate in 5 or less. But probably there are cases
where the move is really worse than the old best move.

Anyway, if you will throw away the move you might want to save the old PV
before, and restore it when the move turns out to fail low. But since this
is so rare, you might as well handle it the way I handle internal
fail-highs followed by fail-lows: When a fail-high occurs I research the
node. Normally, the value returned is big enough to keep this move, so
you store this move as the PV move of the current ply. If it fails low,
you do not change the PV move of the current ply, but now the PV moves of
the following plies are messed up. I solve this by setting the PV move of
the next ply to zero. This shortens the PV returned by the search (if the
hash table also doesn't contain it anymore), but since this occurs so
seldom I don't care.
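[For reference, the generic PVS shape behind all this re-search bookkeeping, as a toy sketch rather than Ronald's code: later moves at a PV node get a null-window scout search and are re-searched with the full window only on a fail-high that stays below beta.]

```c
#include <assert.h>

static int true_score;   /* stand-in for what a recursive search would return */

/* Toy stand-in for -Search(-beta, -alpha); returning the exact score
 * is enough to demonstrate the window logic. */
static int probe(int alpha, int beta) {
    (void)alpha; (void)beta;
    return true_score;
}

/* PVS handling of one non-first move at a PV node: scout with a
 * null window, then re-search with the full window only when the
 * scout fails high inside the window (a candidate new PV move). */
static int pvs_move(int alpha, int beta) {
    int score = probe(alpha, alpha + 1);   /* null-window scout */
    if (score > alpha && score < beta)
        score = probe(alpha, beta);        /* re-search full window */
    return score;
}
```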
What also can happen (at least in my program) is that I search a node,
that I know is a PV node, with a zero window (this probably has to do
with the fact that I use fail-soft PVS). Therefore I include a special
parameter in my calls to Search() and Quiesce() telling it whether this
node is a PV node or not. In case of a PV node I also store a move as the
PV move when its value equals alpha or beta (the reason I have to do this
has everything to do with researches).

Hmm, I think that without fail-soft you can also have PV nodes with zero
window. Suppose you are in a PV node with alpha+2=beta. You search a move
(but not the first) with (alpha,alpha+1). If you get a fail-high you know
its value is at least alpha+1, so you research it with window (alpha+1,beta),
which is a zero window. Without search inconsistencies, its value is either
alpha+1, in which case this is really a move of the new PV, or >=beta.
But if it is alpha+1 the research of the next node (which has a beta
value of -alpha-1) will find a move with value -alpha-1, so equal to its
beta. But as we know this move should appear in the PV, so we have to store
it as the PV move of that ply.

So I admit there are some complicating things...

Ronald de Man


Komputer Korner

Jan 16, 1997, 3:00:00 AM

Can you explain why a runaway search might break a program and exactly
what breaking a program leads to, a major rewrite or what?

brucemo

Jan 16, 1997, 3:00:00 AM

Robert Hyatt wrote:
>
> Tom C. Kerrigan (kerr...@merlin.pn.org) wrote:

> : A mate-in-something problem was posted recently which Stobor solves after
> : a 55 ply search, or so. This is why I increased the array size. Then you
> : posted something about Crafty and Ferret searching over 50 ply in actual
> : games, so I figured I'd keep the change. I haven't measured the
> : performance hit, but observation tells me that it's pretty tiny...
>
> What you are doing is dangerous. I debugged a problem many months ago that
> led directly to this. I gave the position to Bruce and it didn't break Ferret
> because he had a depth limit check where I didn't, but it produced some really
> huge search depths... In crafty, if I let it hit 65, I die, instantly.

I don't have a depth limit check, at all. If I search too deep I'll crash.

If I crash, I drop into the debugger, and it's a simple matter to check to see
if one of these limits has been exceeded.

This is not particularly great software engineering, but Ferret has always been
extremely solid, so there's not been a problem.

Actually, I have a bad bug now, the first time my "real" version has crashed in
the past six months at least. I was doing a 15-hour test suite and crashed.
I've also crashed on ICC a couple of times, maybe once every few hours. The
crash is very interesting, I crash in "unmake", but the line it's on is
unreachable given the contents of the registers at the time of the crash. I
made a version that does massive amounts of pointer verification, in order to
detect a bogus pointer somewhere, but of course the thing won't repro now.

bruce

Komputer Korner

Jan 17, 1997, 3:00:00 AM

Robert Hyatt wrote:
>
snipped
> That's the danger of extensions. If one gets excited too often, you will be
> searching 30-40-50 or even 60+ plies deep. We can guess how long *that* might
> take to complete. :)

Okay, what you meant was that there is always the danger of a never-ending
search, but wouldn't the time algorithm kick in to prevent the program
taking too long on one move?

Chris Whittington

Jan 17, 1997, 3:00:00 AM

--
http://www.demon.co.uk/oxford-soft

Robert Hyatt <hy...@cis.uab.edu> wrote in article
<5blh8e$6...@juniper.cis.uab.edu>...

>

Yeah well, this depends on how you terminate the search.

CST doesn't have a depth limit which then does or doesn't get decremented
to reach an horizon.

It has an horizon (as stipulated by the magic number, 51 in CST's case), and
then chooses whether to extend or not at each ply.

The decision engine includes:

a) chess knowledge
b) how manic/active the line has been so far
c) the general explosiveness of the search so far.
d) the likely branching factor if this node is extended

So most other programs go say N brute force and then N+20 (say) selective.

CST can go up to 51 deep as of the first iteration, and slackens the
selection criteria iteration by iteration.

Chris Whittington

Tom C. Kerrigan

Jan 17, 1997, 3:00:00 AM

Komputer Korner (kor...@netcom.ca) wrote:

> Do you agree that the cost (cost in terms of slower search speed) of
> having large data arrays is too high to enable the odd 100 ply searches?
> The commercial programmers obviously think so?

Not necessarily. The search speed issue is one of cache, which is a
tricky, tricky beast. The commercial programmers probably make their
decision based on how much memory they're comfortable using for this and
what's reasonable. 64 ply is perfectly reasonable, for example.

Cache comes in all shapes and sizes and unless you actually build the
stuff, you have a slim chance of predicting exactly how it will behave in
a given situation. Having great big arrays of nothing usually isn't so
hot, but there's a chance that they space out other arrays just right and
the program actually runs faster. Because of this unpredictability, I
generally ignore cache weirdness. I figure there are easier ways to go
faster.

It looks like switching the array sizes in Stobor costs about 100 nodes
per second out of 35,000. This is easily within measuring error, so I'm
not worried about it at all.

Cheers,
Tom

Chris Whittington

Jan 17, 1997, 3:00:00 AM

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article

<32DDCB...@netcom.ca>...


> Robert Hyatt wrote:
> >
> > Tom C. Kerrigan (kerr...@merlin.pn.org) wrote:

> > : Robert Hyatt (hy...@cis.uab.edu) wrote:
> >
> > : > The real issue is "does the program bump into this too often?" If so, it
> > : > screws the search up. If not, it causes no problems. I'd certainly be
> > : > concerned if Crafty was hitting this "barrier" very often, but not because
> > : > the wall is at 64, rather I'd be concerned because Crafty is getting to that
> > : > wall, which likely indicates uncontrolled search extensions. I've had 'em
> > : > of course... :)
> >
> > : I don't even check to see if the wall is being bumped into, so if I have
> > : an uncontrolled extension my program goes all weird on me. :)
> >
> > : A mate-in-something problem was posted recently which Stobor solves after
> > : a 55 ply search, or so. This is why I increased the array size. Then you
> > : posted something about Crafty and Ferret searching over 50 ply in actual
> > : games, so I figured I'd keep the change. I haven't measured the
> > : performance hit, but observation tells me that it's pretty tiny...
> >
> > What you are doing is dangerous. I debugged a problem many months ago that
> > led directly to this. I gave the position to Bruce and it didn't break Ferret
> > because he had a depth limit check where I didn't, but it produced some really
> > huge search depths... In crafty, if I let it hit 65, I die, instantly.
>
> Can you explain why a runaway search might break a program
>

The program failing to return from the search.

Imagine sending a rocket to go and investigate interesting planets and come
back when its finished, only you forget to tell it to come back after, say
N planets.

Then the rocket goes off and keeps finding these planets you didn't know
existed, so it never comes back. Or, if it runs out of planets and decides
to return, it's out of fuel, because you hadn't planned for this scenario.

> and exactly
> what breaking a program leads to, a major rewrite or what?

In the case of explosive search, probably a minor adjustment of the
limiting parameters, or maybe the creation of some limiting parameters in
the case where you forgot to apply them in the first place.

But this issue of search control (pruning and extensions and depth) is so
critical and complex that it's not possible to talk of minor adjustments.

I know of no 'scientific' way to control it; it's intuition and
suck-it-and-see.

Tom C. Kerrigan

Jan 17, 1997, 3:00:00 AM

Komputer Korner (kor...@netcom.ca) wrote:

> Can you explain why a runaway search might break a program and exactly


> what breaking a program leads to, a major rewrite or what?

If you're thinking of what I said about breaking my program:

Right now I have enough memory allocated for the search to "imagine" a 128
ply line. As soon as the 129th move is played, the program writes data to
memory that is not allocated for this purpose. In fact, it's most often
allocated for another purpose, and fun things start happening when _that_
gets overwritten. Illegal moves, etc.

If a program is searching something to 128 ply, it's broken, or it's Chess
System Tal. :)

Here's another wee bit of information about depth limits: I saw a position
in an actual game where there were two legal moves. The first move led to
a quick loss. The second move led to a repetition. Stobor promptly
searched this position to a hillion skillion ply and barfed. That's when I
put my depth limit in. :)

Cheers,
Tom

Robert Hyatt

Jan 17, 1997, 3:00:00 AM

Komputer Korner (kor...@netcom.ca) wrote:

: Robert Hyatt wrote:
: >
: > Tom C. Kerrigan (kerr...@merlin.pn.org) wrote:
: > : Robert Hyatt (hy...@cis.uab.edu) wrote:
: >
: > : > The real issue is "does the program bump into this too often?" If so, it
: > : > screws the search up. If not, it causes no problems. I'd certainly be
: > : > concerned if Crafty was hitting this "barrier" very often, but not because
: > : > the wall is at 64, rather I'd be concerned because Crafty is getting to that
: > : > wall, which likely indicates uncontrolled search extensions. I've had 'em
: > : > of course... :)
: >
: > : I don't even check to see if the wall is being bumped into, so if I have
: > : an uncontrolled extension my program goes all weird on me. :)
: >
: > : A mate-in-something problem was posted recently which Stobor solves after
: > : a 55 ply search, or so. This is why I increased the array size. Then you
: > : posted something about Crafty and Ferret searching over 50 ply in actual
: > : games, so I figured I'd keep the change. I haven't measured the
: > : performance hit, but observation tells me that it's pretty tiny...
: >
: > What you are doing is dangerous. I debugged a problem many months ago that
: > led directly to this. I gave the position to Bruce and it didn't break Ferret
: > because he had a depth limit check where I didn't, but it produced some really
: > huge search depths... In crafty, if I let it hit 65, I die, instantly.

: Can you explain why a runaway search might break a program and exactly
: what breaking a program leads to, a major rewrite or what?

: --
: Komputer Korner


The issue is that every program, when starting a search, has some target depth
"in mind". For every ply of search, this "target depth" is decremented by 1,
and when it reaches 0 we enter the quiescence search and get ready to stop
searching and do a static eval.

However, to complicate this, after decrementing depth by 1, you might choose to
increment it again (this is called a search extension) for various reasons.
One common one: if you are in check, increment the depth by 1, which drives
every variation below this node one ply deeper to see if the check is going to
lead to anything (this is only one reason to extend the search). You might do
this if the current move is a re-capture, because it is forced, or extend if
there is only one legal response to a check, because that is a *very* forcing
move, and so forth. If the extensions total more than 1 at any ply, the
depth gets *larger*. And if that happens often enough, you search very deeply but
never finish, because the search might literally take years to complete.

Robert Hyatt

unread,
Jan 18, 1997, 3:00:00 AM1/18/97
to

Komputer Korner (kor...@netcom.ca) wrote:
: Robert Hyatt wrote:
: >
: snipped
: > That's the danger of extensions. If one gets excited too often, you will be
: > searching 30-40-50 or even 60+ plies deep. We can guess how long *that* might
: > take to complete. :)

: Okay, what you meant was there is always the danger of a never-ending
: search, but wouldn't the time algorithm kick in to prevent the program
: taking too long on one move?
: --
: Komputer Korner


Exactly, but when you run out of time, what move do you play? You have
not yet even searched the first one... *that* is the problem. Of course,
you might also play something after only a one-ply search, because the
two-ply search blew up. And we know what might happen there too... In short,
the search might not hang, because the timer can kick it out of the search
at the right time, but the opponent might hang you instead. By the neck.
Until dead. :)


mclane

unread,
Jan 18, 1997, 3:00:00 AM1/18/97
to

Komputer Korner <kor...@netcom.ca> wrote:

>Lasker's longest
>was 51.


LASKER !! LASKER !!! LASKER !!!! LASKA !!!!!


Lasker, Capablanca and Fischer. Maybe Polgar-Sisters.
Tal. Nimzowitsch and Emil-Josef Diemer. There is so much quality in
the world of chess!


mclane

unread,
Jan 18, 1997, 3:00:00 AM1/18/97
to

kerr...@merlin.pn.org (Tom C. Kerrigan) wrote:
>If a program is searching something to 128 ply, it's broken, or it's Chess
>System Tal. :)

Well said.

>Cheers,
>Tom

Hello Tom,
maybe it is exactly the opposite:

Chess System Tal plays sacrifices that come out of the evaluation.
No computing in the tree, but knowledge.

Therefore the SEARCHERS cannot defend against those sacs.

As you have seen from your own experience with Stobor vs. Chess System Tal,
it sometimes works.

Komputer Korner

unread,
Jan 19, 1997, 3:00:00 AM1/19/97
to

So are you saying that CSTAL is a preprocessor?

mclane

unread,
Jan 24, 1997, 3:00:00 AM1/24/97
to

Komputer Korner <kor...@netcom.ca> wrote:

>mclane wrote:
>>
>> kerr...@merlin.pn.org (Tom C. Kerrigan) wrote:
>> >If a program is searching something to 128 ply, it's broken, or it's Chess
>> >System Tal. :)
>>
>> Very good said.
>>
>> >Cheers,
>> >Tom
>>
>> Hello Tom,
>> maybe it is exactly the opposite:
>>
>> Chess System Tal plays sacrifices that come out of the evaluation.
>> No computing in the tree, but knowledge.
>>
>> Therefore the SEARCHERS cannot defend against those sacs.
>>
>> As you have seen from your own experience with Stobor vs. Chess System Tal,
>> it sometimes works.

>So are you saying that CSTAL is a preprocessor?


NONONO. The evaluation is the master of the program.
And it uses this evaluation anywhere in the tree.
But it often decides to look at interesting lines that a normal
alpha-beta null-move program would have pruned away.
The null-move tree is much smaller than CSTal's tree.
(No, I don't say that CSTal is brute force. But it only looks at
INTERESTING moves, and sometimes you have MANY INTERESTING moves.)
