
Fractional depth increments


S.Read

Jan 18, 1996
I hear rumours that some programs use fractional depth increments, so that
in the principal variation they tend to search deeper than in less
likely sidelines...

I have a simple way of calculating fractional depth increment. Do a quick
(say 4 ply alpha-beta) search on a single node, and if the backed-up score
looks better than its siblings, the fractional depth to that node should
be less than to its siblings. That way, "simple" and "obvious" things like
re-capturing the queen would naturally tend to come out as obvious and
would have a low fractional depth.

This would turn out to be useful, i.e. would cause deeper searching in the
principal variation and closely related branches, if it turned out that
the principal variation mostly consists of fairly "obvious" moves.
It may contain one or two devastatingly un-obvious moves, but then, to a
4-ply search, almost everything off the principal variation would look
"wrong", "bad", etc.

Let's say your search program normally achieved 9 ply in the middle game.
Then you could do 5 ply of variable-depth search, where each node in the
variable-depth search has a 4-ply search done on it to see what depth
increment it should get. Thus you get the same total depth (4 ply beyond
the endpoints of your variable-depth search) but it is strongly biased
towards going more deeply into sensible moves.

Of course, depending upon the scaling of this variable depth, it might only
be slightly biased, but that would be controllable.

Simon


S.Read

Jan 19, 1996
You seem to be treating your fractional depths differently to the way
I imagined treating them, but it comes to the same thing in the end.
For example: you imply that getting out of check extends the subsequent
search by some amount (let's say 0.75 ply). I would have put it the
other way round:
the out-of-check move is worth 0.25 ply, so the search mechanism
realises that there is 0.75 left over and correspondingly extends the
search. But that is detail... it comes to the same thing in the end.

Part of my posting was the question, which you should be able to answer:
how true is it that the majority of moves on the principal variation
are fairly obvious to a simpler search?

The other question I'm interested in is deciding just how large or small
your fractional increment should be. The only things I have heard of
(like Berliner's Pawns Are The Soul Of Chess, reference available)
use an algorithm to DECIDE which moves get different fractional
depths (in Berliner's case, he either extends or doesn't, but there
isn't much conceptual difference between that and giving some moves
different fractional increments).

So my question has to be: have people used algorithms to _DECIDE_ which
moves get fractional increments, or have they used simple searches,
and thus freed themselves from complex logic, to _CALCULATE_ extra
fractional depths? I think the second of these is better, since it
simply follows the rules of chess, extends transparently to deeper
searches and avoids a lot of complex logic.

I also think that once you back up the scores, the moves in the principal
variation suddenly all become very good and the fractional depths can
be adjusted accordingly. This might be a lot of work since it requires
storing a branch of the tree and backed-up scores and fractional
depths and re-specifying the fractional depths once you discover that
some move IS a good move after all.

Siobhon

--
DISCLAIMER: I recently found out that my contract of employment
includes the following clause:

"If, while you are on holiday in Siberia for six months,
you should happen to invent a tomato-peeler, it shall immediately
become the property of The Company. If you accidentally invent
a new way of getting into the bath in the morning, it shall
become the property of The Company. If in doubt about any
physical or abstract entity ever invented by you or yet to be,
it shall be considered the property of The Company. As is your
body, your mind and your soul."

..so the views expressed above are, unfortunately, not mine.


Paul Rubin

Jan 19, 1996
In article <4dob7e$7...@pelham.cis.uab.edu>,
Robert Hyatt <hy...@willis.cis.uab.edu> wrote:
>This sort of search-driven extension is also behind singular extensions,
>but, again, I have not had much luck with using this. We used singular
>extensions in 1978 because it was solving some tactical problems with this
>that it couldn't without (this was Blitz, before the Cray was used by
>us.) Just before the washington event, we had GM Larry Christiansen
>on campus for a simul, and he graciously played blitz several games
>(he lost the simul game because he tried an unsound sac) while we were
>chatting after the simul. I turned singular extensions off for a couple
>of games and noticed that it was searching maybe a ply deeper. (back then
>a 5 ply search was quite good, using our hardware). Some more testing
>convinced us to remove this just before the ACM event, even though we
>had been using this for 6 months or so with (so we thought) good results,
>that were unfortunately mostly the result of problem set solutions.

I remember a paper by Hsu, Campbell et al indicating that singular
extensions were an improvement only when the search hardware
was quite fast (i.e. able to search 8 ply or more, or so).
Are you using them now?

Robert Hyatt

Jan 19, 1996
In article <phrDLF...@netcom.com>, Paul Rubin <p...@netcom.com> wrote:
-->In article <4dob7e$7...@pelham.cis.uab.edu>,
-->Robert Hyatt <hy...@willis.cis.uab.edu> wrote:
-->>This sort of search-driven extension is also behind singular extensions,
-->>but, again, I have not had much luck with using this. We used singular
-->>extensions in 1978 because it was solving some tactical problems with this
-->>that it couldn't without (this was Blitz, before the Cray was used by
-->>us.) Just before the washington event, we had GM Larry Christiansen
-->>on campus for a simul, and he graciously played blitz several games
-->>(he lost the simul game because he tried an unsound sac) while we were
-->>chatting after the simul. I turned singular extensions off for a couple
-->>of games and noticed that it was searching maybe a ply deeper. (back then
-->>a 5 ply search was quite good, using our hardware). Some more testing
-->>convinced us to remove this just before the ACM event, even though we
-->>had been using this for 6 months or so with (so we thought) good results,
-->>that were unfortunately mostly the result of problem set solutions.
-->
-->I remember a paper by Hsu, Campbell et al indicating that singular
-->extensions were an improvement only when the search hardware
-->was quite fast (i.e. able to search 8 ply or more, or so).
-->Are you using them now?


No, I'm not, although I'll probably revisit this issue again in a
few months. I've not yet done them in Crafty. However, Hsu and group
also modified the expected gain they got down to maybe 8 rating points
or some such, since when it helps it really helps, but when it doesn't
it also hurts.


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Robert Hyatt

Jan 22, 1996
In article <4e0nog$u...@watnews1.watson.ibm.com>,
Feng-Hsiung Hsu <f...@sawmill.watson.ibm.com> wrote:
-->In article <4dp0b0$8...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->>few months. I've not yet done them in Crafty. However, Hsu and group
-->>also modified the expected gain they got down to maybe 8 rating points
-->
-->Actually, only Thomas said so in his thesis. There was a conceptual bug in the
-->DT-1 implementation by the way.
--> --Hsu


Bug? You said bug? I thought I was the only one that had those nowadays.

:)

Feng-Hsiung Hsu

Jan 22, 1996
In article <4dp0b0$8...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>few months. I've not yet done them in Crafty. However, Hsu and group
>also modified the expected gain they got down to maybe 8 rating points

Actually, only Thomas said so in his thesis. There was a conceptual bug in the
DT-1 implementation by the way.

--Hsu

S.Read

Jan 23, 1996
I would have thought that fractional depth increments would be a way to
get the quiescence search to happen automatically, without explicitly
coding for it to happen. If captures (or other things like check evasion,
which would come in a quiescence search) are given a depth increment of
zero, the normal search would keep searching sequences of captures
until there were no more captures, because the captures would not
contribute anything to the depth count, so the search would keep going.
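
A bare-bones sketch of that idea (the helper functions are hypothetical,
and as a reply further down warns, an unconstrained version of this can
blow the tree up badly):

/* Sketch only: captures consume no depth, so capture sequences run
 * themselves out much like a quiescence search would.  All helpers are
 * hypothetical, and a real program would need to constrain this. */
#define ONE_PLY 4                      /* depth in quarter-plies */

extern int  evaluate(void);
extern int  generate_moves(int *moves);
extern int  is_capture(int move);
extern void make_move(int move);
extern void unmake_move(int move);

int search(int depth, int alpha, int beta)
{
    int moves[256];
    int i, n, score, cost;

    if (depth <= 0)
        return evaluate();             /* static evaluation at the tips */

    n = generate_moves(moves);
    for (i = 0; i < n; i++) {
        cost = is_capture(moves[i]) ? 0 : ONE_PLY;  /* captures are "free" */
        make_move(moves[i]);
        score = -search(depth - cost, -beta, -alpha);
        unmake_move(moves[i]);
        if (score >= beta)
            return beta;               /* fail-hard cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}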

Simon


Thorsten Greiner

Jan 23, 1996
hy...@willis.cis.uab.edu (Robert Hyatt) wrote:
>[...]
>I'm treading on *very* thin ice in Crafty already, allowing 2 plies of
>extension at any ply. Without care this obviously results in a non-terminating
>search since a ply costs 1, but can add 2 back to depth. It's constrained
>in Crafty, but it's questionable that it's done well (yet). Fractional
>extensions rather than a whole ply for each "consideration" should help to
>constrain this a little better.
>

I agree fully with you here. Another advantage is that you can classify a move
into a lot of different classes (not only gives check or is a recapture),
e.g. attacks a piece, threatens mate, etc.

This idea is not new, of course. Two places to look at are

- the PhD thesis of J.C. Weill (co-author of VirtuaChess)
(it's available in French on some ftp server; I would have
to look up the exact location)

- an article by D. Levy and others in the ICCA-Journal
entitled "The SEX algorithm in computer chess"
(Nothing obscene here, SEX=search extension :-)

--
Thorsten Greiner gre...@wpts0.physik.uni-wuppertal.de
Department of Physics
Wuppertal University


Robert Hyatt

Jan 23, 1996


It could, but the risk is a huge tree explosion. If I read you correctly,
*any* capture is given an increment of 0. This would let you start a PV
off with two captures, *then* do a 6 ply (say) full-search, which is a
*lot* bigger than doing a full-6-ply search followed by captures. That's
the danger of all of our fancy search extensions...

I'm treading on *very* thin ice in Crafty already, allowing 2 plies of
extension at any ply. Without care this obviously results in a non-terminating
search since a ply costs 1, but can add 2 back to depth. It's constrained
in Crafty, but it's questionable that it's done well (yet). Fractional
extensions rather than a whole ply for each "consideration" should help to
constrain this a little better.

M D. Grimminck

Jan 23, 1996

"S.Read" <s.r...@cranfield.ac.uk> writes:

>I hear rumours that some programs use fractional depth increments, so that
>in the principal variation they tend to search deeper than in less
>likely sidelines...

I have been using fractional depths in my draughts program 'dragon'
since the beginning of the program. I must say it is great to
have the option to extend the search by e.g. 0.52 ply. It gives much more
control over the search process. Currently the
program hardly ever decreases the search depth in steps of exactly 1.0 ply.

There is however one huge disadvantage; the way the program works
now makes the transposition tables very inefficient: when a position
is reached by various paths, each path may (and in my program it
will) have its own search-depth.

So if you only want true transpositions (with exactly the same search depth),
you have a problem. At this moment the transposition table is horribly
inefficient.

I am not so sure the transposition table problem can be solved easily
while using truly fractional increments.

Michel
--
Michel Grimminck, Computational Physics, University of Amsterdam.
draughts/checker page: http://carol.fwi.uva.nl/~grimmink/draughts.html

Jean-Christophe Weill

Jan 24, 1996
Thorsten Greiner <gre...@wpts0.physik.uni-wuppertal.de> wrote:

>This idea is not new, of course. Two places to look at are
>
> - the PhD thesis of J.C. Weill (co-author of VirtuaChess)
> (its available in french on some ftp-server, I would have
> to look up the exact location)
>


The thesis is available at ftp://seh.etca.fr/pub/echecs/phdJCW.ps.gz
Unfortunately, the translation is postponed for the moment :(
But it should be readable with only a little knowledge of French.

---
Jean-Christophe Weill
NEC Research Institute. 4 Independence Way. Princeton New Jersey 08540


David Eppstein

Jan 25, 1996
In <4e36du$s...@mail.fwi.uva.nl> grim...@fwi.uva.nl (M D. Grimminck) writes:
< I have been using fractional depths in my draughts program 'dragon'
< since the beginning of the program. ...

< There is however one huge disadvantage; the way the program works
< now makes the transposition tables very inefficient: when a position
< is reached by various paths, each path may (and in my program it
< will) have its own search-depth.

Assuming you're doing iterative deepening, this can be mitigated
somewhat: just store in the hash table the max depth adjustment of any
path to that node. Then on the next deepening iteration, you can look up
the right depth to evaluate the node. You then only have this problem
at the fringes, where maybe it's less severe.

Also, this same idea of storing the adjustment in the hashtable would
let you provide some way of combining depth increments from different
paths instead of just taking the max among them.

(Caveat: I have not actually programmed up anything like this.)
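
For what it's worth, the bookkeeping might look something like this
(the hash-entry layout and probe function are invented for the
illustration, not taken from any program mentioned here):

/* Sketch: remember, per hash entry, the largest depth adjustment any path
 * has granted this position, so a later deepening iteration can look it up. */
struct hash_entry {
    unsigned long long key;
    short              score;
    short              draft;          /* depth the stored score is good for */
    short              max_extension;  /* largest extension over all paths  */
};

extern struct hash_entry *probe_hash(unsigned long long key);

void store_extension(unsigned long long key, int extension)
{
    struct hash_entry *e = probe_hash(key);

    if (e != 0 && extension > e->max_extension)
        e->max_extension = (short)extension;   /* keep the max over all paths */
}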
--
David Eppstein UC Irvine Dept. of Information & Computer Science
epps...@ics.uci.edu http://www.ics.uci.edu/~eppstein/

Kerrigan42

Jan 28, 1996
I was using fractional depth increments for a while. This was because of
the one-reply-to-check extension. If this is a 1 ply unconstrained
extension, your program screws itself once in a while trying to search an
infinitely long line. I thought, "Instead of weird and bulky constraint
code, why not just extend 3/4 ply?" I've recently decided this isn't such
a hot idea for a few reasons. First, you're never sure where it's going to
extend if you don't have a pencil and paper at hand. Second, some hash
table scores aren't used when they should be. Third, the 1<<depth history
increment "trick" needs a division (a no-no in chess programs). Fourth,
the code is a bit harder to read. I guess I'm a wuss for reason four, but
my hand-held calculator is better than yours. :)
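
(To illustrate the third point: with depth kept in fractional units, say
quarter-plies, the usual history update has to divide back to whole plies
first. The names below are made up for the example:)

/* Hypothetical illustration of where the division comes from: depth is
 * stored in quarter-plies, so getting whole plies for the 1<<depth
 * history increment needs a divide first. */
#define ONE_PLY 4

static unsigned int history[64][64];

void history_update(int from, int to, int frac_depth)
{
    history[from][to] += 1U << (frac_depth / ONE_PLY);
}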

Cheers,
Tom

Robert Hyatt

Jan 28, 1996
In article <4efvnc$k...@newsbf02.news.aol.com>,
Kerrigan42 <kerri...@aol.com> wrote:
-->I was using fractional depth increments for a while. This was because of
-->the one-reply-to-check extension. If this is a 1 ply unconstrained
-->extension, your program screws itself once in a while trying to search an
-->infinitely long line. I thought, "Instead of weird and bulky constraint
-->code, why not just extend 3/4 ply?" I've recently decided this isn't such
-->a hot idea for a few reasons. First, you're never sure where it's going to
-->extend if you don't have a pencil and paper at hand. Second, some hash
-->table scores aren't used when they should be. Third, the 1<<depth history
-->increment "trick" needs a division (a no-no in chess programs). Fourth,
-->the code is a bit harder to read. I guess I'm a wuss for reason four, but
-->my hand-held calculator is better than yours. :)
-->
-->Cheers,
-->Tom

I haven't used the 1<<depth since February of last year, when I started
letting Crafty play standard games on ICC. 1<<depth where depth can
easily reach 16 with extensions is asking for overflows. In endgames, I
see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
real problem. :)

Vincent Diepeveen

Jan 29, 1996
In <4eg28u$h...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>In article <4efvnc$k...@newsbf02.news.aol.com>,
>Kerrigan42 <kerri...@aol.com> wrote:
>-->I was using fractional depth increments for a while. This was because of
>-->the one-reply-to-check extension. If this is a 1 ply unconstrained
>-->extension, your program screws itself once in a while trying to search an
>-->infinitely long line. I thought, "Instead of weird and bulky constraint
>-->code, why not just extend 3/4 ply?" I've recently decided this isn't such
>-->a hot idea for a few reasons. First, you're never sure where it's going to
>-->extend if you don't have a pencil and paper at hand. Second, some hash
>-->table scores aren't used when they should be. Third, the 1<<depth history
>-->increment "trick" needs a division (a no-no in chess programs). Fourth,
>-->the code is a bit harder to read. I guess I'm a wuss for reason four, but
>-->my hand-held calculator is better than yours. :)
>-->
>-->Cheers,
>-->Tom
>
>I haven't used the 1<<depth since February of last year, when I started
>letting Crafty play standard games on ICC. 1<<depth where depth can
>easily reach 16 with extensions is asking for overflows. In endgames, I
>see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
>real problem. :)

I also have problems with depth. Searching about 10 ply, my program
usually reaches search depths of up to 48 ply (it is not possible
for my program currently to search deeper). I use singular extensions,
which cause this.

Vincent Diepeveen
vdie...@cs.ruu.nl


>
>
>--
>Robert Hyatt Computer and Information Sciences
>hy...@cis.uab.edu University of Alabama at Birmingham
>(205) 934-2213 115A Campbell Hall, UAB Station
>(205) 934-5473 FAX Birmingham, AL 35294-1170

--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+

Marcel van Kervinck

Jan 29, 1996
Robert Hyatt (hy...@willis.cis.uab.edu) wrote:

: I haven't used the 1<<depth since February of last year, when I started
: letting Crafty play standard games on ICC. 1<<depth where depth can
: easily reach 16 with extensions is asking for overflows. In endgames, I
: see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
: real problem. :)

I also had this problem, mainly because I'm constrained to using
8-bit integers for the history heuristic. I'm now using

    inc(depth) = (a*depth*depth + b*depth + c) min 64

for the increment, for some chosen a, b and c. This seems to
work just fine. Not too many irritating overflows. BTW, one
could also consider not incrementing the history count, but
ORing it with some f(depth) to eliminate all overflows. I
haven't tried this yet, though...
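
A quick sketch of that kind of saturating update (not the actual code from
his program; the table size and the particular a, b, c are example values):

/* The point of the sketch: both the increment and the counter saturate
 * instead of overflowing the 8-bit history entries. */
#define HIST_MAX 255                    /* 8-bit history counters */

static unsigned char history[64][64];   /* indexed [from][to] */

static int history_inc(int depth)       /* depth = plies remaining */
{
    int inc = 1 * depth * depth + 2 * depth + 1;   /* a=1, b=2, c=1 */
    return inc > 64 ? 64 : inc;                    /* the "min 64" cap */
}

void history_good(int from, int to, int depth)
{
    int v = history[from][to] + history_inc(depth);

    history[from][to] = (unsigned char)(v > HIST_MAX ? HIST_MAX : v);
}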

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| bue...@urc.tue.nl

Robert Hyatt

Jan 29, 1996
In article <4eikvi$8...@tuegate.tue.nl>,
Marcel van Kervinck <bue...@asterix.urc.tue.nl> wrote:
-->Robert Hyatt (hy...@willis.cis.uab.edu) wrote:
-->
-->: I haven't used the 1<<depth since February of last year, when I started
-->: letting Crafty play standard games on ICC. 1<<depth where depth can
-->: easily reach 16 with extensions is asking for overflows. In endgames, I
-->: see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
-->: real problem. :)
-->
-->I also had this problem. Mainly because I'm constrained to use
-->8 bit integers for the history heuristic. I'm now using
-->
--> inc(depth) = (a*depth*depth + b*depth + c) min 64
-->
-->for the increment, for some chosen a, b and c. This seems to
-->work just fine. Not too many irritating overflows. BTW, one
-->could also consider not to increment the history count, but
-->to OR it with some f(depth) to eliminate all overflows. I
-->haven't tried this yet, though...
-->
--> Marcel
-->-- _ _
--> _| |_|_|
--> |_ |_ Marcel van Kervinck
--> |_| bue...@urc.tue.nl


I've been using "depth" for several months. I tested 1<<depth vs depth, and
didn't see any difference to speak of. Also, after a complete search is done,
and I display Crafty's chosen move, I don't clear the history values, rather
I >>8 them. This means that after 4 moves a history move will be zero,
unless it's used during some of the searches. This prevents starting a search
from scratch with no history data.
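
That aging step might look roughly like this (a generic sketch, not
Crafty's actual code; the table name is hypothetical):

/* Rather than clearing the history counters after each move, shift them
 * right so old data decays away: with 32-bit counters, >>8 four times
 * leaves zero unless the move keeps getting credited in later searches. */
static unsigned int history[64][64];

void age_history(void)
{
    int from, to;

    for (from = 0; from < 64; from++)
        for (to = 0; to < 64; to++)
            history[from][to] >>= 8;
}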

Maybe, after reading the above, I might try depth*depth to see how that
impacts things...

Robert Hyatt

Jan 30, 1996
In article <4ekkso$6...@nz12.rz.uni-karlsruhe.de>,
Peter Gillgasch <gil...@ira.uka.de> wrote:

-->In article <4eg28u$h...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->|> I haven't used the 1<<depth since February of last year, when I started
-->|> letting Crafty play standard games on ICC. 1<<depth where depth can
-->|> easily reach 16 with extensions is asking for overflows. In endgames, I
-->|> see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
-->|> real problem. :)
-->
-->Ok, 1 << 35 is of course a killer. How about using bit boards
-->(AKA 64 bit wide unsigned integers) as history counters on
-->64 bit machines ?
-->
-->-- Peter
-->
-->------------------------------------------------------------
-->Peter W. Gillgasch
-->Klosterweg 28/I-113 email gil...@ira.uka.de
-->76131 Karlsruhe/Germany Phone ++49/(0)721/6904255


on a Cray, Alpha, I860, HP PA8000, etc, you don't have any choice, and
the problem is not bad, although I've done some 50 ply endgame searches
on the Cray (fine 70 comes to mind). However, I have tried both 1<<depth
and just depth and didn't notice much difference. depth*depth might be
interesting too, and I'm planning on trying it.

Robert Hyatt

Jan 30, 1996
In article <4els63$e...@yama.mcc.ac.uk>, S.Read <s.r...@cranfield.ac.uk> wrote:
-->I'm afraid I've missed something here.
-->You're using two "less-than" signs to represent left-shift by
-->a certain number of bits, also two "greater-than" signs for
-->right-shifts.
-->
-->(1) Why? I've missed something... I don't understand why you left-shift
--> something by (depth) places.

The reason is to weight moves that occur near the root more than moves
that are good near the tips of the tree. At the root depth=n; at the
tips, depth=0, so 1<<depth will generate a large number for moves near
the root and vice-versa. depth*depth seems just as good in my tests
and doesn't overflow.

-->
-->(2) DON'T DO IT!! My Mosaic browser goes completely crazy; it tries to
--> interpret "less-than"(text)"greater-than" as some sort of HTML
--> command which means I can't read the posting properly. This is an
--> unfortunate side-effect of the old version of mosaic. My alternative
--> is using xrn, but that makes my eyeballs bleed because some
--> thoughtful systems programmer at my site has configured it
--> to use a very small font. (Bless him!) I can cometimes use
--> netscape to read news, which behaves differently, not necessarily
--> correctly. This isn't the fault of those people who want to
--> left-shift; it's just an endearing habit of my web browser. (sigh)

<<sorry>> :)

-->
-->Simon
-->
-->You can kiss a nun once, you can kiss a nun twice...
-->
-->..but you mustn't get into the habit!!
-->

S.Read

Jan 30, 1996
I'm afraid I've missed something here.
You're using two "less-than" signs to represent left-shift by
a certain number of bits, also two "greater-than" signs for
right-shifts.

(1) Why? I've missed something... I don't understand why you left-shift
    something by (depth) places.

(2) DON'T DO IT!! My Mosaic browser goes completely crazy; it tries to
    interpret "less-than"(text)"greater-than" as some sort of HTML
    command which means I can't read the posting properly. This is an
    unfortunate side-effect of the old version of mosaic. My alternative
    is using xrn, but that makes my eyeballs bleed because some
    thoughtful systems programmer at my site has configured it
    to use a very small font. (Bless him!) I can sometimes use
    netscape to read news, which behaves differently, not necessarily
    correctly. This isn't the fault of those people who want to
    left-shift; it's just an endearing habit of my web browser. (sigh)

Simon

You can kiss a nun once, you can kiss a nun twice...

..but you mustn't get into the habit!!


Peter Gillgasch

Jan 30, 1996
In article <4eg28u$h...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) writes:
|> I haven't used the 1<<depth since February of last year, when I started
|> letting Crafty play standard games on ICC. 1<<depth where depth can
|> easily reach 16 with extensions is asking for overflows. In endgames, I
|> see 20-25 ply searches regularly, with depth reaching 35. 1<<35 is a
|> real problem. :)

Ok, 1 << 35 is of course a killer. How about using bit boards
(AKA 64 bit wide unsigned integers) as history counters on
64 bit machines ?

-- Peter

------------------------------------------------------------
Peter W. Gillgasch
Klosterweg 28/I-113 email gil...@ira.uka.de
76131 Karlsruhe/Germany Phone ++49/(0)721/6904255

Bruce Moreland

Jan 31, 1996
I use "depth + 4". There's nothing magic about "1 << depth", there's nothing
that would indicate that this is the "perfect" scheme.

Come to think of it, why should a move that refutes high in the tree be
somehow "better" than one that refutes near the leaves?

To find the good scheme for your program, compile it a few different ways,
run a "positional" test suite on each version of your program such that
the whole thing takes overnight to run, and go to sleep. The next day you
can choose the best one, which would probably be no worse than any of the
others :)

bruce

In article <4ekkso$6...@nz12.rz.uni-karlsruhe.de>, gil...@ira.uka.de says...

--
The opinions expressed in this message are my own personal views
and do not reflect the official views of Microsoft Corporation.


Vincent Diepeveen

Feb 1, 1996
In <4eohof$9...@news.microsoft.com> brucemo (Bruce Moreland) writes:

>I use "depth + 4". There's nothing magic about "1 << depth", there's nothing
>that would indicate that this is the "perfect" scheme.
>
>Come to think of it, why should a move that refutes high in the tree be
>somehow "better" than one that refutes near the leaves?

True: I don't use it at all. It's wasting CPU time on a phenomenon that doesn't
work, and why it doesn't work can be explained by using the human mind:

Why should some heuristic not based on chess knowledge and not used by humans
be the best? A human doesn't think that somewhere in the tree a certain move
is always refuted by a certain other move.

Humans do use killer moves however: always trying a certain move first,
and the hashtable move (remembering). And clever knowledge-based moves and null moves:
even if I don't do a thing, what does that bring you?

>To find the good scheme for your program, compile it a few different ways,
>run a "positional" test suite such on each version of your program such that
>the whole thing takes overnight to run, and go to sleep. The next day you
>can choose the best one, which would probably be no worse than any of the
>others :)

Delete all refutation/countermove code...

I came to that conclusion when I wanted to make the countermove heuristic perform better:
simply by adding up the number of evaluations needed for a position.

Vincent Diepeveen
vdie...@cs.ruu.nl

Marcel van Kervinck

Feb 2, 1996
Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:

: >Come to think of it, why should a move that refutes high in the tree be
: >somehow "better" than one that refutes near the leaves?
: True: i don't use it at all. Wasting CPU time on a phenomena that doesn't
: work, and it can be explained why it doesn't work by using the human mind:

Depends on the program of course, and it works in a lot of them.
Refutation moves and history heuristics take exactly 4 instructions
in my code. That's roughly 0.5% cpu time per node. I don't see the
point of 'wasting time'. It works great, if tuned properly.

: Why should some heuristic not based on chessknowledge and not used by humans
: be the best? A human doesn't think that somewhere in the tree a certain move
: is refuted always by a certain other move.

If it works, it IS chess knowledge in itself. Period. The reason humans
don't use it only says something about the humans. But they don't have a
monopoly on chess knowledge...

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| bue...@urc.tue.nl

Robert Hyatt

Feb 2, 1996
In article <4eqh6q$g...@krant.cs.ruu.nl>,
Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:
-->In <4eohof$9...@news.microsoft.com> brucemo (Bruce Moreland) writes:
-->
-->>I use "depth + 4". There's nothing magic about "1 << depth", there's nothing
-->>that would indicate that this is the "perfect" scheme.
-->>
-->>Come to think of it, why should a move that refutes high in the tree be
-->>somehow "better" than one that refutes near the leaves?
-->
-->True: i don't use it at all. Wasting CPU time on a phenomena that doesn't
-->work, and it can be explained why it doesn't work by using the human mind:

Hmm.. as a human I don't do most of what a program does to play chess, but this
doesn't mean that what the program's doing is "wrong" or "bad". Rather than
trying to explain it away, it's easy to test with and without. I don't know
of anyone that has tried history and not found a tree size improvement.

These are called "uninformed search strategies" because they use no chess
knowledge; rather they use information gleaned from backed-up values to
guide things along. They are still sound, and the nice thing is, they will
work on *any* game.

-->
-->Why should some heuristic not based on chessknowledge and not used by humans
-->be the best? A human doesn't think that somewhere in the tree a certain move
-->is refuted always by a certain other move.
-->
-->Humans do use killermoves however: always trying a certain move first,
-->and hashtable move (remembering). And clever knowledgebased moves and nullmoves:
-->even if i don't do a thing, what brings that for you?
-->
-->>To find the good scheme for your program, compile it a few different ways,
-->>run a "positional" test suite such on each version of your program such that
-->>the whole thing takes overnight to run, and go to sleep. The next day you
-->>can choose the best one, which would probably be no worse than any of the
-->>others :)
-->
-->Delete all refutation/countermove code...

Easy to do in Crafty. I've done it, and it *always* slows the program down
by making it examine more moves. History move *is* killer-move, only a more
general form of it. I use *both* in Crafty, and removing *either one* will
produce a larger tree for the same search depth.

I tried (history counter) depth, 1<<depth, depth+5 and depth*depth, and found
little difference.


<snip>

Vincent Diepeveen

Feb 5, 1996
In <4et36l$p...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
>-->In <4eohof$9...@news.microsoft.com> brucemo (Bruce Moreland) writes:
>-->>I use "depth + 4". There's nothing magic about "1 << depth", there's nothing
>-->>that would indicate that this is the "perfect" scheme.
>-->>Come to think of it, why should a move that refutes high in the tree be
>-->>somehow "better" than one that refutes near the leaves?
>-->True: i don't use it at all. Wasting CPU time on a phenomena that doesn't
>-->work, and it can be explained why it doesn't work by using the human mind:

>Hmm.. as a human I don't do most of what a program does to play chess, but this
>doesn't mean that what the program's doing is "wrong" or "bad" Rather than
>trying to explain it away, it's easy to test with and without. I don't know
>of anyone that has tried history and not found a tree size improvement.

>these are called "uninformed search strategies" because they use no chess
>knowledge, rather they use information gleaned from backed-up-values to
>guide things along. They are still sound, and the nice thing is, they will
>work on *any* game.

>-->Delete all refutation/countermove code...

>Easy to do in Crafty. I've done it, and it *always* slows the program down
>by making it examine more moves. History move *is* killer-move, only a more
>general form of it. I use *both* in crafty, and removing *either one* will
>produce aa larger tree for the same search depth.

Of course: using no knowledge heuristics is better then none.

Do you use chess knowledge to sort moves?
I mean: if your evaluation is taking over 60% of the total system time
then it might be worth sorting the moves better.

When did you decide to remove the mobility factor from Crafty? Did you?
I mean, this is a big term for humans, so it should be included in a
chess program.

Has anyone done tests using both mobility and no mobility?
It takes an awful lot of system time in my program.

>I tried (history counter) depth, 1<<depth, depth+5 and depth*depth, and found
>little difference.

How many positions did you try this on?

Vincent

Robert Hyatt

Feb 5, 1996
In article <4f6g6j$i...@scapa.cs.ualberta.ca>,
Jonathan Schaeffer <jona...@cs.ualberta.ca> wrote:

-->hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->
-->>heuristics at all? In any case, I suspect you are greatly over-
-->>estimating the potential performance gains possible with better move
-->>ordering. If HiTech, Cray Blitz, and others are, in fact, within
-->>50% of optimial tree size, the remaining 50% is not even a part of a
-->>ply of additional depth.
-->
-->Interestingly enough, the best chess programs are *not* searching as
-->efficiently as we thought. Many programs have published results that
-->show them to be within 50% of optimal (including me). However, these
-->results were obtained assuming AlphaBeta's left-to-right traversal of the
-->search tree. At a cutoff node, you want more than a move that causes a
-->cutoff: you want the move that causes the *cheapest* cutoff.
-->
-->Using the cheapest cutoff, we found that the minimal search tree is
-->much smaller than previously thought.
-->
-->So, how can you take advantage of this? By hunting for cheaper
-->cutoffs. There are several ways to do it, but here is one that gives
-->a 25% node reduction for chess and checkers:
-->
-->At an interior node one normally does a trans table lookup and, if
-->no cutoff is achieved, start searching the move suggested by the table.
-->But what if a different move at this node transposes into previously seen
-->search that has been searched deep enough and the table contains a value
-->sufficient for a cutoff? In other words, try this:
-->
--> For each move:
--> Make the move
--> Check in TT for a value sufficient for a cutoff
--> Unmake the move
-->
--> If no cutoff is achieved, search this node as before
-->
-->In chess (Phoenix) and checkers (Chinook) 25% node reductions are seen.
-->This does not come for free - there is additional computational overhead.
-->By restricting this test to all but the last few ply of the search,
-->most of the benefits can be had for little cost.
-->
-->Note: the search reductions have been quantified in fixed-depth searches
-->(so it is easy to compare the effect of the enhancement). In variable
-->depth searches, it is harder to see the improvement because of the
-->search extensions. Our experience (and others) is that the enhancement
-->saves nodes *and* also improves accuracy. You see lines transposing
-->into other parts of the tree that may have been searched deeper.
-->
-->Bob, check out my WWW page (http://www.cs.ualberta.ca/~jonathan). I will
-->put the relevent papers up tomorrow (Tuesday). This is joint work with
-->Aske Plaat.


This sounds like something suggested by someone a few years ago. I tried it,
but due to the way we only "partially" generated moves in CB, the overhead
offset the savings. I'll try it again, however, although Crafty inherits
the same "design" where we try a trans/ref move, captures (after generating
only captures thanks to bitboard stuff) and then killer moves, all before
we generate the rest of the moves. Thing is, as I recall, I didn't see
25% either, but I can't remember exactly the test conditions, i.e., was it on
a deep-searching Cray, a shallow-searching VAX, etc.

I assume you are suggesting that the lookup operation is looking for a fail-low
condition, which translates into a fail-high here? In any case, I'll look
tomorrow.

Bob
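
For reference, the per-successor lookup described in the quoted post
(nowadays usually called an enhanced transposition cutoff) might be
sketched roughly as below; the probe interface is hypothetical, this is
not code from Phoenix, Chinook or Crafty, and the handling of upper
versus lower bounds is glossed over:

extern int  probe_hash_score(int draft, int *score); /* 1 if a usable entry exists */
extern void make_move(int move);
extern void unmake_move(int move);

/* returns 1 if some successor transposes into a position already known
   to be bad enough (for the opponent) to prove a fail-high here */
int enhanced_tt_cutoff(const int *moves, int n, int depth, int beta)
{
    int i, score;

    for (i = 0; i < n; i++) {
        make_move(moves[i]);
        if (probe_hash_score(depth - 1, &score) && -score >= beta) {
            unmake_move(moves[i]);
            return 1;                   /* cutoff without searching anything */
        }
        unmake_move(moves[i]);
    }
    return 0;
}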

Robert Hyatt

Feb 5, 1996
In article <4f4rjj$5...@krant.cs.ruu.nl>,
Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:
-->In <4et36l$p...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:
-->>-->In <4eohof$9...@news.microsoft.com> brucemo (Bruce Moreland) writes:
-->>-->>I use "depth + 4". There's nothing magic about "1 << depth", there's nothing
-->>-->>that would indicate that this is the "perfect" scheme.
-->>-->>Come to think of it, why should a move that refutes high in the tree be
-->>-->>somehow "better" than one that refutes near the leaves?
-->>-->True: i don't use it at all. Wasting CPU time on a phenomena that doesn't
-->>-->work, and it can be explained why it doesn't work by using the human mind:
-->
-->>Hmm.. as a human I don't do most of what a program does to play chess, but this
-->>doesn't mean that what the program's doing is "wrong" or "bad" Rather than
-->>trying to explain it away, it's easy to test with and without. I don't know
-->>of anyone that has tried history and not found a tree size improvement.
-->
-->>these are called "uninformed search strategies" because they use no chess
-->>knowledge, rather they use information gleaned from backed-up-values to
-->>guide things along. They are still sound, and the nice thing is, they will
-->>work on *any* game.
-->
-->>-->Delete all refutation/countermove code...
-->
-->>Easy to do in Crafty. I've done it, and it *always* slows the program down
-->>by making it examine more moves. History move *is* killer-move, only a more
-->>general form of it. I use *both* in crafty, and removing *either one* will
-->>produce aa larger tree for the same search depth.

I should add that I re-ran these tests when optimizing, because the history
move stuff makes one pass over the move list per move tried per ply, which
is a substantial computing cost. However, taking it out invariably makes
the search time larger. I have some experiments planned for when I begin
optimizing Search() to see if I can find something with a lesser cost.
One idea is to attempt to recognize ALL nodes and bypass history moves
completely since ordering is not important at such nodes and represents
wasted computation. I'll probably try captures as always, since they are
the most likely thing to refute a bad move when move ordering is broken,
but am going to try falling right into the "remaining_moves" phase of move
selection, skipping killers, history, etc... completely.
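
A sketch of what skipping the expensive ordering at expected ALL nodes
might look like (not Crafty's actual move-selection code; the node-type
and phase names are made up for the illustration):

/* At an expected ALL node every move must be searched anyway, so skip
 * the costly history pass and go straight from captures to the
 * remaining moves. */
enum node_type { PV_NODE, CUT_NODE, ALL_NODE };
enum phase     { HASH_MOVE, CAPTURES, KILLERS, HISTORY, REMAINING_MOVES };

int next_phase(int current, enum node_type type)
{
    if (type == ALL_NODE && current == CAPTURES)
        return REMAINING_MOVES;        /* ordering buys nothing here */
    return current + 1;                /* otherwise walk the phases in order */
}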

-->
-->Of course: using no knowledge heuristics is better then none.

I suppose this is a language problem? (nothing > none?) Maybe you
mean using no knowledge heuristics is better than using no ordering
heuristics at all? In any case, I suspect you are greatly over-
estimating the potential performance gains possible with better move
ordering. If HiTech, Cray Blitz, and others are, in fact, within
50% of optimal tree size, the remaining 50% is not even a part of a
ply of additional depth. Certainly not the multiple plies you were
talking about a few posts back (getting to 20 plies with better ordering.)

here's the way to construct an experiment to test this hypothesis:

modify the move generator to generate 30 moves exactly. Ignore illegal
positions and simply do a 6-ply search, no capture search or anything added
to the end. Turn alpha/beta off, and count nodes. Turn it on and count
nodes. Then compute 2 * (30*30*30) and see how close to this number the
alpha/beta search gets. Of course, the minimax is going to search
30*30*30*30*30*30 nodes for sanity-checking. If you search (say) 5*30*30*30
nodes, you are only off by a factor of 2.5, which is not bad. Obviously,
with best-possible ordering, you can get the 2.5, but *no more*.

-->
-->Do you use chessknowledge to sort moves?.
-->I mean: if your evaluation is taking over 60% of the total systemtime
-->then it might be worth sorting the moves better.

It's not that high, but it's a moot point anyway. No, I don't use chess
knowledge except at the root of the tree. Why? because at least 50% of
the positions searched are not affected in the least by move ordering,
because these are the "ALL" nodes (Knuth/Moore TYPE=3). At such nodes,
move ordering gets *nothing* because *all* moves must be examined and no
cutoffs occur. Of course, if the moves at the previous CUT ply are ordered
poorly, then this ALL node becomes a CUT node and bets are off.

However, I've not yet made the decision to use Evaluate() at internal nodes,
and use the resulting scores to order things. Might or might not be a good
idea. I'll certainly try one day. However, 15 years of experience with
alpha/beta has shown that "uninformed search ordering strategies" produce
good results, at low cost. The only real gains in the search are *not* going
to come as a result of move ordering. Hans Berliner did some measurements
with HiTech, I did the same with Cray Blitz, and we found that these programs
search less than 2X (on average) the number of nodes in the truly optimally
ordered tree. As a result, all the trickery in the world will only get a
factor of 2X speed improvement, and probably significantly less because it
will cost something to do better ordering...

In short, ordering is important, but I think you are under-estimating the
utility of uninformed ordering strategies by a significant amount. Remember
that the cost difference between informed and uninformed strategies can be
substantial, particularly if your evaluation is one of the things you are
using.


-->
-->When did you decide to remove the mobility factor out of crafty? Did you?
-->I mean this is a big term for humans, so it should be included in a
-->chessprogramm.

Never removed it. I just have a clever way of computing it using the rotated
bit-boards that costs almost *nothing*. Mobility was the first thing I used
for evaluation in Crafty because it's so easy to compute, and it's never been
removed.

-->
-->Has anyone done tests using both mobility and no mobility?
-->It takes an awfull lot of systemtime in my programm.

Someone did, and found that mobility + some other simple strategies was worth
approximately 1 ply of search. That is, a program searching depth N + mobility +
other simple positional ideas would play evenly with a program searching depth
N+1 without them. I think that to get the whole "ply" they used mobility+space+
something else, but memory fails me.

-->
-->>I tried (history counter) depth, 1<<depth, depth+5 and depth*depth, and found
-->>little difference.
-->
-->How many positions did you try this on?

About 50. I tried the "unsolved" ones from Win at Chess (about 10 positions
that are worthwhile out of the 15 Crafty doesn't get in 1 minute), plus the
Kopec-Bratko positions, plus a few I've collected over the years. I found
problems with 1<<depth on Fine #70 of course, where Crafty searches to depth
29 in under a minute, before hanging for a while due to seeing a queen appear.
1<<29 is obviously bad, and with extensions allowed as I do in Crafty, depth
can exceed 32 in this position, which fails to adjust the history count at all.

Bruce Moreland is using depth+something (not sure whether he said 5 or 7). In
any case, it reminds me of the SEE/MVVLVA discussion a while back. At least
two of us have found similar results, and, in fact, everyone that's reported
trying SEE is now using it due to the smaller tree produced by restricting
the quiescence search. However, as most know, I'm not "pig-headed" and will
try *anything*. I am "empirical" however, and only keep what actually produces
results that "work".

-->
-->Vincent
-->
-->Vincent Diepeveen
-->vdie...@cs.ruu.nl
-->
-->
-->--
--> +--------------------------------------+
--> || email : vdie...@cs.ruu.nl ||
--> || Vincent Diepeveen ||
--> +======================================+

Robert Hyatt

Feb 5, 1996
In article <4f66ko$p...@news-2.csn.net>, <ke...@t1sys.cfer.com> wrote:
-->Robert Hyatt writes
-->
-->[big snip]
-->
-->>I suppose this is a language problem? (nothing > none?) Maybe you
-->>mean using no knowledge heuristics is better than using no ordering
-->>heuristics at all? In any case, I suspect you are greatly over-
-->>estimating the potential performance gains possible with better move
-->>ordering. If HiTech, Cray Blitz, and others are, in fact, within
-->>50% of optimial tree size, the remaining 50% is not even a part of a
-->>ply of additional depth. Certainly not the multiple plies you were
-->>talking about a few posts back (getting to 20 plies with better ordering.)
-->>
-->>here's the way to construct an experiment to test this hypothesis:
-->>
-->>modify the move generator to generate 30 moves exactly. Ignore illegal
-->>positions and simply do a 6 ply search, no capture search or anything added
-->>to the end. turn alpha/beta off, and count nodes. Turn it on and count
-->>nodes. then compute 2 * (30*30*30) and see how close to this number the
-->>alpha/beta search gets. of course, the minimax is going to search
-->>30*30*30*30*30*30 nodes for sanity-checking. If you search (say) 5*30*30*30
-->>nodes, you are only off by a factor of 2.5, which is not bad. Obviously,
-->>with best-possible ordering, you can get the 2.5, but *no more*.
-->
-->[big snip
-->
-->I was going to argue against this until I fully grokked it and saw that it
-->is in fact true!
-->
-->Do you actually get 5*30*30*30 (better or worse)?

I have not run this since I played with this for my Ph.D. dissertation in 1987-88,
but it was "in this range".

-->
-->Is this "5" here what I saw refered to as the branching factor in some
-->earlier posts? If not what is the branching factor.

No. The branching factor here is 30. The "5" is simply 2.5 times larger
than the optimal value of 2: for even-ply searches, to a fixed depth D, with
a constant branching factor W, the minimum number of nodes is given by

nodes = 2 * W ^ (D/2) [where ^ is "to the power of"]

Obviously, this is then 2 * sqrt( W ^ D ), and, in my case, W=30, D=6, which makes
the math work out well.
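
Plugging those numbers into the formula above: 2 * 30^3 = 54,000 nodes for
the perfectly ordered tree, versus 30^6 = 729,000,000 for plain minimax, so
perfect ordering is worth a factor of about 13,500 here, and a search of
5*30*30*30 = 135,000 nodes is within a factor of 2.5 of that bound.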

The main point is that this is an absolute lower bound on the minimum number of nodes
that must be searched with the alpha/beta algorithm. No matter what you do to ordering,
you can approach this, and maybe (if you are lucky on a position) even match this
number, but it can't be beaten unless something is used to reduce W or reduce D.
Null move search plays with D significantly.

Hope this helps....

-->
-->I realize I'm showing my ignorance but isn't learning fun!
-->
-->I'd like to thank everyone for participating in these discussions, especially
-->Robert Hyatt and Bruce Moreland. When I started reading this group a few
-->weeks back I didn't even know what alpha-beta was beyond some vague
-->memories from a college class six years ago (and that no more than 1
-->day). Since that time I've been able to look at some code, and learn little
-->bit by little bit. In another year maybe I'll have something meaningful to
-->contribute.
-->
-->Kevin Barnes
-->kba...@t1sys.cfer.com
-->

In another couple of years, maybe I'll do the same. :)

Bob

M D. Grimminck

Feb 6, 1996
hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>This sounds like something suggested by someone a few years ago. I tried it,
>but due to the way we only "partially" generated moves in CB, the overhead
>offset the savings. I'll try it again, however, although Crafty inherits
>the same "design" where we try a trans/ref move, captures (after generating
>only captures thanks to bitboard stuff) and then killer moves, all before
>we generate the rest of the moves. Thing is, as I recall, I didn't see
>25% either, but I can't remember exactly the test conditions, ie, was it on
>a deep-searching cray, a shallow-searching vax, etc.

Surely you can include the above algorithm if search depth is larger
than a given limit? I only use it if depth>=3 (in draughts) and it
seems to work fine.

For depths above 6 ply (the program regularly searches 14 to 17 ply), I try
to estimate the tree size of the resulting subtrees and favor the ones
with small trees. However it may not work very well in chess programs.

BTW, instead of using the history heuristic, I am using the
(estimated!) probability of a move giving a cut-off. This seems
to give a 10% improvement in node count (and time), at least in my
program.
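
One way to keep such an estimate (purely a sketch; the counters, table
layout and scaling are invented, not taken from his program):

/* Ordering by estimated cutoff probability instead of a raw history score. */
static unsigned int tried[64][64];     /* times the move was searched   */
static unsigned int cutoffs[64][64];   /* times it produced a cutoff    */

void record_result(int from, int to, int caused_cutoff)
{
    tried[from][to]++;
    if (caused_cutoff)
        cutoffs[from][to]++;
}

int cutoff_score(int from, int to)     /* ordering key: cutoffs per attempt */
{
    if (tried[from][to] == 0)
        return 0;
    return (int)((1000ULL * cutoffs[from][to]) / tried[from][to]);
}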

Cheers,

Vincent Diepeveen

Feb 6, 1996
In <4f5bp4$s...@pelham.cis.uab.edu> hy...@willis.cis.uab.edu (Robert Hyatt) writes:

>I should add that I re-ran these tests in optimizing, because the history
>move stuff makes one pass over the move list per move tried per ply, which
>is a substantial computing cost. however, taking it out invariably makes
>the search time larger. I have some experiments planned for when I begin
>optimizing Search() to see if I can find something with a lesser cost.
>One idea is to attempt to recognize ALL nodes and bypass history moves
>completely since ordering is not important at such nodes and represents
>wasted computation. I'll probably try captures as always, since they are
>the most likely thing to refute a bad move when move ordering is broken,
>but am going to try falling right into the "remaining_moves" phase of move
>selection, skipping killers, history, etc... completely.

This is true. When score < alpha (using negamax), then it is
better to look first at checks and captures. I use this partly,
and it produces good results. The less search depth left, the greater
the chance of getting a score < alpha back.

>-->Of course: using no knowledge heuristics is better then none.

>I suppose this is a language problem? (nothing > none?) Maybe you
>mean using no knowledge heuristics is better than using no ordering
>heuristics at all?

Sorry, mistype.

> In any case, I suspect you are greatly over-
>estimating the potential performance gains possible with better move
>ordering. If HiTech, Cray Blitz, and others are, in fact, within
>50% of optimial tree size, the remaining 50% is not even a part of a
>ply of additional depth.

I don't believe that.

You can prove this for alpha-beta, that is true.

However, if you are using hash tables and
null move you already get problems.
You cannot prove the combination, so you'll never know the minimum-sized
tree. Of course null move is theoretically dubious, but it seems to work
fine.

How will a 20-ply searching program with null move score against
a 15-ply full-width searching program? Any thoughts?

> Certainly not the multiple plies you were
>talking about a few posts back (getting to 20 plies with better ordering.)

>here's the way to construct an experiment to test this hypothesis:
>modify the move generator to generate 30 moves exactly. Ignore illegal
>positions and simply do a 6 ply search, no capture search or anything added
>to the end. turn alpha/beta off, and count nodes. Turn it on and count
>nodes. then compute 2 * (30*30*30) and see how close to this number the
>alpha/beta search gets. of course, the minimax is going to search
>30*30*30*30*30*30 nodes for sanity-checking. If you search (say) 5*30*30*30
>nodes, you are only off by a factor of 2.5, which is not bad. Obviously,
>with best-possible ordering, you can get the 2.5, but *no more*.

Not in every position do you have 30 possibilities.

Let's take an example: if I change a little in the move ordering
(for example introducing a bug) by now and then evaluating randomly,
then the 170,000 nodes I currently need to search 9 ply don't
change that much, but the currently needed amount of 700,000 positions
to search the beginning position 10 plies deep then suddenly becomes
2 million or more. The effect becomes worse the deeper you search.

>-->Do you use chessknowledge to sort moves?.
>-->I mean: if your evaluation is taking over 60% of the total systemtime
>-->then it might be worth sorting the moves better.

>It's not that high, but it's a moot point anyway. No, I don't use chess
>knowledge except at the root of the tree. Why? because at least 50% of
>the positions searched are not affected in the least by move ordering,
>because these are the "ALL" nodes (Knuth/Moore TYPE=3). At such nodes,
>move ordering gets *nothing* because *all* moves must be examined and no
>cutoffs occur. Of course, if the move at the previous CUT ply are ordered
>poorly, then this ALL node becomes a CUT node and bets are off.

You are talking about 50%. How do you know it is 50%?
No one in the world has proved it. Only for alpha-beta might it be possible
to prove it, but because null move and hash tables affect it that much
we have a serious problem.

If I turn off hash tables it already gives me 50%... ...at 7-9 ply,
and more when searching deeper. But what is 50%?

I mean: 50% node reduction at a certain depth is what I was talking about,
but a 50% better branching factor gives you more:

your current branching factor: 5 ^ 20 = 9.54 x 10^13
against 2.5 ^ 20 = 9.09 x 10^7

That saves well over 99% of the tree.

>However, I've not yet made the decision to use Evaluate() at internal nodes,

In principle I'm not using Evaluate() at internal nodes: too much system time...
...here you see the trade-off already. I don't even EVALUATE at internal nodes
because it is eating too much system time.

>and use the resulting scores to order things. Might or might not be a good
>idea. I'll certainly try one day. However, 15 years of experience with
>alpha/beta has shown that "uninformed search ordering strategies" produce
>good results, at low cost.

This is true; uninformed things based on remembering from previous
searches seem to work fine.

I'm very busy with the question: what if the previous search WAS giving
a cutoff, and how do you detect whether there is a move that also gives a cutoff,
only faster!

> The only real gains in the search are *not* going
>to come as a result of move ordering.

Improving the weakest point in your program of course gives you the most.
