Evaluation function diminishing returns

brucemo

Feb 1, 1997

Is there a point beyond which it makes no sense to improve
your evaluation function, because additional time spent
detracts from search depth and therefore makes the program
weaker?

Does this point change depending upon time control?

Does this stuff change depending upon the characteristics of
your opponent (human, computer, positional, tactical, etc.)?

Have any studies been done? Didn't someone say that a great
eval function was worth a ply of search?

I've actually heard someone (not a contributor here) claim
that their eval function was worth four plies of search ...

bruce

Hans-Henrik Grand

Feb 1, 1997

>Is there a point beyond which it makes no sense to improve
>your evaluation function, because additional time spent
>detracts from search depth and therefore makes the program
>weaker?

I'm not a chess programmer, but it seems to me that most programs
need a better eval function. You will always be able to find a position
where the program plays weakly, and extra plies won't help much.
Now, if you don't improve your eval, your computer is likely to lose
that kind of position.
I think a problem for some programmers is that they can't put more
chess knowledge into their programs than they have themselves.
And sometimes there just doesn't seem to be an overall algorithm for
a type of position.

>Does this point change depending upon time control?

Indeed. When playing very fast games, blunders are much more likely
to be tactical. If you have a very fast but not all that accurate eval,
you will benefit from it in lightning games.
With slower games however, the eval becomes much more important.

>Does this stuff change depending upon the characteristics of
>your opponent (human, computer, positional, tactical, etc.)?

Of course. When playing humans the eval is of significance, when playing
computers speed is more significant. Beat your opponents with their
own weapons.

>Have any studies been done? Didn't someone say that a great
>eval function was worth a ply of search?

I'm sure there have. There have been many publications on what a
change in plies does to a program's rating. At a certain point the
improvement from extra plies drops close to zero. Only a better eval
can do a better job there.
In my opinion, of course.

Regards /hhg
--
************** Hans-Henrik Grand ****************
** ** ** Computer Science Department ** ** **
** ** ** Aarhus University ** ** **
*********** email: h...@daimi.aau.dk ****************

Robert Hyatt

Feb 1, 1997

brucemo (bru...@nwlink.com) wrote:
: Is there a point beyond which it makes no sense to improve
: your evaluation function, because additional time spent
: detracts from search depth and therefore makes the program
: weaker?

: Does this point change depending upon time control?

: Does this stuff change depending upon the characteristics of
: your opponent (human, computer, positional, tactical, etc.)?

: Have any studies been done? Didn't someone say that a great
: eval function was worth a ply of search?

: I've actually heard someone (not a contributor here) claim
: that their eval function was worth four plies of search ...

: bruce

Interesting question. With probably lots of answers. Some terms
are worth many plies (one that comes to mind is square of the king
for passed pawn races) while others are probably worth fractions of
a ply.
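The square-of-the-king rule for passed pawn races is a classic case of one line of knowledge substituting for a very deep search: without it, a program has to search the race all the way to promotion. A minimal sketch of the idea in C, for the white-pawn case only (illustrative, not taken from any program in this thread; it assumes a lone pawn and ignores obstacles on the path, the attacking king, and other real-world details):

#include <stdbool.h>
#include <stdlib.h>

/* Rule of the square.  Files and ranks run 0..7, white pawns promote on
 * rank 7.  The pawn outruns the defending king iff the king stands outside
 * the pawn's "square", i.e. it needs more king moves to reach the promotion
 * square than the pawn needs to get there. */
static int max_int(int a, int b) { return a > b ? a : b; }

bool white_pawn_outruns_king(int pawn_file, int pawn_rank,
                             int king_file, int king_rank,
                             bool white_to_move)
{
    int pawn_steps = 7 - pawn_rank;
    if (pawn_rank == 1)              /* initial double step */
        pawn_steps -= 1;

    /* Chebyshev (king-move) distance to the promotion square */
    int king_steps = max_int(abs(king_file - pawn_file),
                             abs(king_rank - 7));
    if (!white_to_move)              /* defender moves first: one free tempo */
        king_steps -= 1;

    return pawn_steps < king_steps;
}

Scoring such a race statically is worth however many plies the search would otherwise have needed to play it out to promotion.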

Speed vs knowledge. Probably can be debated from here on with no real
good answer. :)

I do recall that Berliner came up with the knowledge=1ply estimate in the
HiTech vs LoTech paper in one of the advances in computer chess books I
think. This was based on playing basically the same program with a big and
small eval against each other. I'd personally expect the difference to be
more with a better eval, but computer vs computer is probably not the right
way to measure it...

Komputer Korner

Feb 2, 1997

Robert Hyatt wrote:
>
>snipped

>
> I do recall that Berliner came up with the knowledge=1ply estimate in the
> HiTech vs LoTech paper in one of the advances in computer chess books I
> think. This was based on playing basically the same program with a big and
> small eval against each other. I'd personally expect the difference to be
> more with a better eval, but computer vs computer is probably not the right
> way to measure it...

Won't this always be a tradeoff as there is no end to the potential
speed and no end to the potential knowledge that can be added?
--
Komputer Korner
The komputer that kouldn't keep a password safe from
prying eyes, kouldn't kompute the square root of 36^n,
kouldn't find the real Motive and variation tree in
ChessBase, kouldn't compute the proper time in 2 variation
mode, missed the Hiarcs functionality in Extreme
and also misread the real learning feature of Nimzo.

Chris Whittington

Feb 2, 1997

--
http://www.demon.co.uk/oxford-soft

brucemo <bru...@nwlink.com> wrote in article <32F301...@nwlink.com>...


> Is there a point beyond which it makes no sense to improve
> your evaluation function, because additional time spent
> detracts from search depth and therefore makes the program
> weaker?
>
> Does this point change depending upon time control?
>
> Does this stuff change depending upon the characteristics of
> your opponent (human, computer, positional, tactical, etc.)?
>
> Have any studies been done? Didn't someone say that a great
> eval function was worth a ply of search?
>
> I've actually heard someone (not a contributor here) claim
> that their eval function was worth four plies of search ...

I think some components of CSTal's evaluation can be worth more than 4
plies.
Sometimes, in some situations.

Why not turn your question round ?

Can your current evaluation (or any current evaluation) with faster
hardware and maybe more efficient search ever consistently beat Kasparov
within the time frame of your interest in computer chess ?

If the answer to this question is no, or unlikely, then you're left with
what seems to be the other path, namely evaluation function development.


Or turn the question again: Do humans have sophisticated evaluation
functions ?


Or, since there are no real hard answers to these questions, it comes down
to a matter of belief. Do you believe in search, and search improvements;
or, do you believe in evaluation improvements ?

And it is just belief. Talk to other programmers; they *believe* (Vincent,
for example). Talk to another programmer, attack his program, or attack his
program behind his back; you'd have been better off attacking his wife for
the response you'll get.

Despite all the logic, and programming, and algorithms, and mechanistic
ideologies, and the loss of the power of speech, and the nerd-ness - these
programmers are the most over-emotional, mixed-up believers in the
holy grail you could hope to meet.

This whole process has zilch to do with logic - there is none.

Zilch to do with facts - no one has any of these either

Zilch to do with certainty - it doesn't exist

Everything to do with 'I believe'

Chris Whittington

>
> bruce
>

Marcel van Kervinck

Feb 2, 1997

brucemo (bru...@nwlink.com) wrote:

> I've actually heard someone (not a contributor here) claim
> that their eval function was worth four plies of search ...

Try to get the ICCA Journal article 'DISTANCE, towards the unification
of chess knowledge'. It was published in November 1993 (? I'm
not quite sure about the date, as I don't have my copy nearby. It
appeared in the same issue as Donninger's null-move article.)

It describes a neat evaluator that is worth a few plies of search
(positional knowledge). However, it is really expensive to compute.
I spent two years extracting the essentials from that algorithm
to enhance my own program, which is actually a fast searcher.

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| mar...@stack.nl

Vincent Diepeveen

Feb 3, 1997

>Is there a point beyond which it makes no sense to improve
>your evaluation function, because additional time spent
>detracts from search depth and therefore makes the program
>weaker?
>
>Does this point change depending upon time control?
>
>Does this stuff change depending upon the characteristics of
>your opponent (human, computer, positional, tactical, etc.)?
>
>Have any studies been done? Didn't someone say that a great
>eval function was worth a ply of search?
>

>I've actually heard someone (not a contributor here) claim
>that their eval function was worth four plies of search ...

It depends of course on the search depth and the two evals you compare.

I guess that if you search 5 ply with a good eval, you are better off
searching 6 ply with a somewhat worse eval.

If you search 9+ ply, however, my own experience is that the eval
is worth more than another ply. A big problem which is not defined
in any book is what I call the 'tactical barrier'. In certain positions
there are tricks. You first must see all the tricks before extra depth stops
helping, otherwise your search could produce horizon effects.

In most positions I estimate this is around 9-12 ply for grandmaster tactics.

Assuming you both search deeper than the tactical barrier, I don't see
why searching deeper gives more than a better eval.

I see this problem very clearly in the draughts program I have. After a
night of pondering it searches 20-30 ply in the middlegame (at tournament level
14-18 ply). I haven't seen a tactical middlegame trick that was used in a
real game that was over 25 ply so far... :)

The deepest one I have is found at 14 ply already (captures are extended, so
the real depth of the combination is much more).

The program has great problems with strategic decisions, which are very
easy for human players. Another ply doesn't help here.

Those are depths chess programs don't reach so far. Luckily for the chess
programs, in chess centralising the pieces is so terribly important, and most bad
moves simply fail tactically, or fail because centralising/mobility
doesn't give enough points.

On the other hand, capturing is not forced in chess, which seems to make
tactics at IGM level even more problematic for chess programs to find
than they are in draughts.

Still, the problems are very comparable. From this viewpoint I
come to the statement that I made above. So in fact I pose a new question:
what depth does one need to overcome the tactical barrier in chess?

>bruce

Vincent
--
+----------------------------------------------------+
| Vincent Diepeveen email: vdie...@cs.ruu.nl |
| http://www.students.cs.ruu.nl/~vdiepeve/ |
+----------------------------------------------------+

Vincent Diepeveen

Feb 3, 1997

In <5cv6jl$42f$1...@gjallar.daimi.aau.dk> h...@daimi.aau.dk (Hans-Henrik Grand) writes:

>In <32F301...@nwlink.com> brucemo <bru...@nwlink.com> writes:
>
>>Is there a point beyond which it makes no sense to improve
>>your evaluation function, because additional time spent
>>detracts from search depth and therefore makes the program
>>weaker?

>I'm not a chess programmer, but it seems to me that most programs
>need a better evalfunction. You will always be able to find a position
>were the program is playing weak, and extra plies won't help much.
>Now, if you don't improve your eval, your computer is likely to loose
>that kind of positions.
>I think a problem for some programmers are, that they can't put more
>chess knowledge into their programs, than they know for themselves.
>And sometimes there just don't seem to be an overall algorithm, for
>a type of positions.

If you are talking about chess programs running on fast hardware, I totally
agree with your story. I came to the same conclusion a few years ago. For this
reason I began my own program, Diep, in which I try to make a better eval function.

After a lot of mailings, angry reactions etc. I have come to the conclusion that
adding new knowledge to a chess program is not the problem, but the time
needed to add knowledge to a program is the problem, or better, the
lack of time.

For example: if you add 1 general chess pattern a month to your program,
then after 3 years you have: 3 * 12 * 1 = just 36 patterns.

After 20 years: 20 * 12 = 240, still nothing.

Also one needs a lot of games to correct the newly added knowledge and to
adjust the parameters to the new evaluation function.

This latter thing is something Diep suffered terribly from in the DCCC 96,
so the correction of the newly added knowledge and the adjustment of parameters
is definitely needed for Diep.

If you don't know a thing about chess (which is true for most chess programmers)
you need to get this chess knowledge from a second person. Still you must
find out in what cases it must work, and how to generalize it, and also you
need to know in what circumstances it absolutely is not an
advantage/disadvantage.

>>Does this point change depending upon time control?

>Indeed. When playing very fast games, blunders are much more likely
>to be tactical. If you have a very fast but not all that accurate eval,
>you will benefit from it in lightning games.
>With slower games however, the eval becomes much more important.

Make that: once you search deeper than the tactical barrier.



>>Does this stuff change depending upon the characteristics of
>>your opponent (human, computer, positional, tactical, etc.)?

>Of course. When playing humans the eval is of significance, when playing
>computers speed is more significant. Beat your opponents with their
>own weapons.

I don't know whether this is true.

Have you seen the rating of Fritz 4, and compared it to Fritz 3?

>>Have any studies been done? Didn't someone say that a great
>>eval function was worth a ply of search?

>I'm sure there have. There have been many publications on what the
>change in plies will do to programs rating. At a sudden point the
>improvement from extra plies drops close to zero. Only a better eval
>can do a better job here.

There is another problem: strategy. Diep seems to play better and better
positionally, but still sometimes makes real beginners' strategic mistakes.

I don't see how I should implement strategy in Diep.
Of course when the eval becomes better, this insight becomes better, but when
I compare the Elo rating it has for
a) tactics
b) positional insight
c) strategic insight

then I would give it 2600 for tactics,
2100 for positional insight,
and 1200 for strategic insight.

It has problems deciding which positional plan to play for.

For example: in a given position there are 2 open files,
the c-file and
the h-file.

Black has castled short, white has castled long and moved his king to b1.

Where to put the white rooks?
Currently Diep gives almost equal points for either, of course depending on
mobility, king safety etc. However, the strategic decision a human would make
has to do with looking into the future and taking many more factors into account,
and we will assume that this future cannot be reached by searching deeply.

So for its strategic insight a program should reason with patterns,
and with the conclusion from that it must reason again, etc.

So it should have a very abstract evaluation function.

>In my opinion of course.

same.

I still don't see why a program would become a world champion within 100 years.

>Regards /hhg
>--
>************** Hans-Henrik Grand ****************
>** ** ** Computer Science Department ** ** **
>** ** ** Aarhus University ** ** **
>*********** email: h...@daimi.aau.dk ****************

Vincent

graham_douglass

Feb 3, 1997

In article <01bc1118$79792100$c308...@cpsoft.demon.co.uk>, "Chris says...
>> Is there a point beyond which it makes no sense to improve
>> your evaluation function, because additional time spent
>> detracts from search depth and therefore makes the program
>> weaker?
>>
>> Does this point change depending upon time control?
>>
>> Does this stuff change depending upon the characteristics of
>> your opponent (human, computer, positional, tactical, etc.)?
>>
>> Have any studies been done? Didn't someone say that a great
>> eval function was worth a ply of search?
>>
>> I've actually heard someone (not a contributor here) claim
>> that their eval function was worth four plies of search ...
>

Good. You've just saved the taxpayer some money. All research on any subject
done by statistical or mathematical means can be scrapped!

I disagree with Chris profoundly. The Elo rating system is based on statistics
and probability. And the studies cited in "Chess Skill In Man And Machine"
demonstrate that progress has been made in the field of studying how humans
select their moves at different Elo ratings.

Maybe other people know of other studies that yield information relevant to
this discussion.

All Chris seems to be saying is that programmers display animal territorial
behaviour with their methods.

Graham

>
>Chris Whittington
>
>>
>> bruce
>>

brucemo

Feb 3, 1997

Marcel van Kervinck wrote:

> It describes a neat evaluator that is worth a few ply of search.
> (Positional knowledge). However, it is really expensive to compute.
> I spent a two years extracting essentials from that algorithm
> to enhance my own program, which is actually a fast searcher.

What is your program called? I've seen you post, and I've looked through
back-issues of ICCA Journals to try to find you, but I can't.

I'm not trying to challenge your credentials or anything, I'm just curious.

bruce

Tom C. Kerrigan

Feb 3, 1997

Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:

> If you don't know a thing about chess (which is true for most chessprogrammers)
> you need to get this chessknowledge from a second person. Still you must

Most chess programmers on the planet Goog, I assume.

I certainly know nothing about chess, but I seem to recall that the
ratings of famous programmers published in How Computers Play Chess were
seriously above average.

With regard to how search improves the evaluation function, I suggest you
read the closing statements of the CHESS 4.0 article in Chess Skill in Man
and Machine. Excellent observations there.

Cheers,
Tom

Chris Whittington

Feb 4, 1997

--
http://www.demon.co.uk/oxford-soft

Graham Douglass wrote in article <5d4fnv$3...@lana.zippo.com>...

Perhaps I didn't explain myself very well, but you seem to have missed the
point(s) so profoundly that you've gone tangential.

Chris Whittington

>
> Graham
>
> >
> >Chris Whittington
> >
> >>
> >> bruce
> >>
>

Komputer Korner

Feb 4, 1997

Vincent Diepeveen wrote:
>
>snipped

> The deepest i have is found at 14 ply already (captures are extended, so
> the real depth of the combination is much more).
>
> The program has great problems with strategical decisions, which are very
> easy for human players. Another ply doesn't work here.
>
> That are depths chessprograms don't get so far. Lucky for the chessprograms
> in chess centralising the pieces is so terrible important, and most worse
> moves simply fail tactically, or fail because centralising/mobility
> doesn't give enough points.
>
> On the other hand, capturing is not forced in chess, which seems to make
> tactics at IGM level even more problematic for chessprograms to find
> in chess than it is in draughts.
>
> Still the problems are very well comparable. From this viewpoint i
> come to the statement that i made above. So in fact i make a new question:
> what depth does one need to overcome the tactical barrier in chess?
>

>-----------------------------+

Vincent,
I think I am beginning to understand why your level of posts
is below that of other programmers. You don't seem to understand
high level chess. I don't pretend to play or think at a high level
either, but from what I know, GM chess, which is what every computer
program is aspiring to, is a very tactical, positional game
all rolled up into one. There is no tactical barrier. Sure, the
difference between 10 ply and 14 ply is less than the difference
between 6 ply and 10 ply, but there is still a difference. There will
remain a difference all the way up the scale, albeit at a decreasing
rate. Looking 30 ply ahead is better than 25 ply, which is better than
20 ply. These differences are substantial and no amount of root
evaluation is going to make them go away. As Hyatt says, speed kills,
but this really applies only to dumb chess programs and car races.
There are no safe positions in chess except for fortresses and known
book draws. Everything else can be assaulted and killed, especially
after tactical or strategical errors by the opponent. The idea
of a tactical barrier is NONSENSE. The proof of this is: you take
your program and have it search to what you think the tactical
barrier is, and then I will search 5 plies farther on every move and
we will see who will win.

Vincent Diepeveen

Feb 4, 1997

In <32F72C...@netcom.ca> Komputer Korner <kor...@netcom.ca> writes:

>Vincent Diepeveen wrote:
>>
>>snipped
>
>> The deepest i have is found at 14 ply already (captures are extended, so
>> the real depth of the combination is much more).
>>
>> The program has great problems with strategical decisions, which are very
>> easy for human players. Another ply doesn't work here.
>>
>> That are depths chessprograms don't get so far. Lucky for the chessprograms
>> in chess centralising the pieces is so terrible important, and most worse
>> moves simply fail tactically, or fail because centralising/mobility
>> doesn't give enough points.
>>
>> On the other hand, capturing is not forced in chess, which seems to make
>> tactics at IGM level even more problematic for chessprograms to find
>> in chess than it is in draughts.
>>
>> Still the problems are very well comparable. From this viewpoint i
>> come to the statement that i made above. So in fact i make a new question:
>> what depth does one need to overcome the tactical barrier in chess?
>>
>>-----------------------------+
>
>Vincent,
> I think I am beginning to understand why your level of posts
>is below that of other programmers. You don't seem to understand

It is very easy to define a theoretical tactical barrier for every position.
So every position has a tactical barrier.

Why do you have emotional problems with that?

This has nothing to do with personality. Really.

It is very true that humans don't have a horizon problem.

Now we can look at statistics, and the chance that in a game a program plays
against Kasparov, there is a position where it hasn't reached the depth
needed to see the tactical trick. This says of course NOTHING about positional
and/or strategical insight. It is just a statistical look at the chance one
loses tactically when searching at a certain depth.

I'm just curious how deep one needs to search on average to avoid losing
tactically.

For example after

1. e4,e5
2. Nf3,Nc6
3. Bb5,Nd4

the tactical problem is that after Nxe5 white loses because of Qg5.
So a program needs to see that Nxe5 is not possible.

So I would define the tactical barrier in this position to be around 7 ply
(some programs more, some programs less) for most programs.
Vincent


>Komputer Korner

Vincent Diepeveen

Feb 4, 1997

In <5d5o50$3...@merlin.pn.org> kerr...@merlin.pn.org (Tom C. Kerrigan) writes:

>Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:
>
>> If you don't know a thing about chess (which is true for most chessprogrammers)
>> you need to get this chessknowledge from a second person. Still you must
>

>Most chess programmers on the planet Goog, I assume.
>
>I certainly know nothing about chess, but I seem to recall that the
>ratings of famous programmers published in How Computers Play Chess were
>seriously above average.
>
>With regard to how search improves the evaluation function, I suggest you
>read the closing statements of the CHESS 4.0 article in Chess Skill in Man
>and Machine. Excellent observations there.

A lot of the articles in Chess Skill in Man and Machine were from 1979 or
somewhere around that year.

In the meantime programs have become stronger.

There are a few programmers with a lot of Elo, but those are usually
chess players and not programmers.

Vincent

>
>Cheers,
>Tom

Chris Whittington

Feb 4, 1997

--
http://www.demon.co.uk/oxford-soft

Vincent Diepeveen <vdie...@cs.ruu.nl> wrote in article
<5d7qpb$nml$1...@krant.cs.ruu.nl>...


> In <5d5o50$3...@merlin.pn.org> kerr...@merlin.pn.org (Tom C. Kerrigan) writes:
>
> >Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:
> >
> >> If you don't know a thing about chess (which is true for most chessprogrammers)
> >> you need to get this chessknowledge from a second person. Still you must
> >
> >Most chess programmers on the planet Goog, I assume.
> >
> >I certainly know nothing about chess, but I seem to recall that the
> >ratings of famous programmers published in How Computers Play Chess were
> >seriously above average.
> >
> >With regard to how search improves the evaluation function, I suggest you
> >read the closing statements of the CHESS 4.0 article in Chess Skill in Man
> >and Machine. Excellent observations there.
>
> Lot of articles from chess skill in man and machine were from 1979 or something
> around that year.
>
> In the mean time programs became stronger.
>
> There are few programmers with a bunch of elo, but that are usually
> chessplayers and not programmers.

Well, I agree with Vincent here.

Most programmers are not strong chess players.

But being a strong chess player leads the programmer down the 'evaluation',
slow nodes-per-second path.

Being a weak chess player leads the programmer down the 'fast', brute-force
path.

It's still an open question: search or knowledge ......

But I have to say that I slightly resent Vincent's assertion that he is the
only chess-knowledgeable programmer.

Chris Whittington

>
> Vincent
>
> >
> >Cheers,
> >Tom

Chris Whittington

Feb 4, 1997

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article
<32F72C...@netcom.ca>...

> high level chess. I don't pretend to play or think at a high level
> either but from what I know, GM chess, which is what every computer
> program is aspiring to become is a very tactical, positional game
> all rolled up into one. There is No tactical barrier. Sure the
> difference between 10 ply and 14 ply is less than the difference
> between 6 ply and 10 ply but there is still a difference. There will
> remain a difference all the way up the scale albeit at a decreasing
> rate. Looking 30 ply ahead is better than 25 ply which is better than
> 20 ply. These differences are substantial and no amount of root
> evaluation is going to make them go away. AS Hyatt says speed kills
> but this really applies only to dumb chess programs and car races.
> There are no safe positions in chess except for fortesses and known
> book draws. Everything else can be assaulted and killed especially
> with tactical or strategical errors by the opponent. The idea
> of a tactical barrier is NONSENSE. The proof of this is you take
> your program and have it search to what you think the tactical
> barrier is. And then I will search 5 plies farther on every move and
> we will see who will win.

Again, I agree with Vincent.

Obviously it's a truism to say there is no tactical barrier, since 250 ply
or whatever would solve the game.

But there is a practical barrier (search tree exponential growth,
blah-blah).

1. You can't search 5 ply more.

2. Even if you could, at those depths the better positional knowledge of the
shallower program would still win out.
Chris Whittington

Ed Schroder

Feb 4, 1997

From: Komputer Korner <kor...@netcom.ca>

: Vincent,

: I think I am beginning to understand why your level of posts
: is below that of other programmers.

I am not in agreement here.

[ snip ]

: The idea of a tactical barrier is NONSENSE.

The chance of a tactical loss gets lower and lower with every extra ply.

Suppose you have 2 versions (idea borrowed from Hans Berliner):
Version-1 : thinks 16 ply + a lot of chess knowledge
Version-2 : thinks 20 ply + less chess knowledge
My vote obviously goes to version-1.

Another point...
If version-1, due to its extra chess knowledge, is able to build a strong
position, who has the best chances on tactics? If you have a good position
the tactics will come by themselves and no 20-30-40 ply will help. A common
rule in chess. That's why chess players sacrifice pawns and even more
to get a good and strong position and then... the tactics will come.

It's certainly not the other way around, although there are exceptions.

Best example...
Deep Thought-II - Fritz, Hong Kong.
After castling DT was lost.
Ply depth didn't help.

- Ed -


: Komputer Korner


Brian Sheppard

Feb 5, 1997

> KK wrote:
> I think I am beginning to understand why your level of posts
> is below that of other programmers. You don't seem to understand
> high level chess.

Shame on you, KK. This jab is inappropriate.

> There is No tactical barrier.

There IS a tactical barrier, if you define the term properly.

Given an evaluation function F, and a position P, and a search
algorithm S, we can define the tactical barrier of P with respect
to F and S to be the smallest depth, D, of search such that a program
using F and S correctly plays P for all search depths >= D.

Tactical barrier is a useful concept for several purposes. Allow me
to elaborate.

If a program plays a game such that all positions in that game are
searched to at least their tactical barriers, then that program cannot
lose (assuming that the initial position is drawn).

Any position searched to less than the tactical barrier is at risk of
choosing a bad move. In fact, searching exactly one ply less than the
tactical barrier is guaranteed to produce an error (since the tactical
barrier is the *smallest* depth that guarantees correct play). There
may be even lower depths that play correctly, however.

It follows that if you want to improve the play of your program,
you must concentrate on those positions that have tactical barriers
deeper than you can feasibly search. "Improvements" that affect only
positions that are correctly played won't help much (unless they
induce errors from the opponent).
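One crude way to find those positions, with the caveat that no oracle for "correct play" exists, is to probe a position by iterative deepening against a reference move (say, one obtained from a much longer analysis) and record the smallest depth from which the program keeps choosing it. A rough C sketch; search_best_move() here is a hypothetical hook standing in for whatever F and S a given program uses, not any real engine's interface:

/* Approximate the tactical barrier of a position for a fixed eval F and
 * search S: the smallest depth from which every probed depth up to some
 * maximum returns the reference move.  Only an approximation, since we can
 * neither verify "correct" play nor probe beyond MAX_PROBE_DEPTH. */
#define MAX_PROBE_DEPTH 16

typedef struct Position Position;      /* engine-specific, opaque here */
typedef int Move;

extern Move search_best_move(const Position *p, int depth);   /* uses F and S */

int estimate_tactical_barrier(const Position *p, Move reference_move)
{
    int barrier = -1;                  /* -1: no barrier found in range */

    for (int depth = 1; depth <= MAX_PROBE_DEPTH; depth++) {
        if (search_best_move(p, depth) == reference_move) {
            if (barrier < 0)
                barrier = depth;       /* candidate smallest depth */
        } else {
            barrier = -1;              /* a deeper search changed its mind */
        }
    }
    return barrier;
}

Searching exactly one ply below the value this returns is then expected to change the move, which mirrors the point above about being one ply short of the barrier.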

The benefit of faster computers can be seen in this light: the faster
we search, the more positions fall within their tactical barriers.

The decreasing benefit of search as depth increases is also clarified
by the concept of tactical barrier: the percentage of real-game positions
that have a tactical depth of N is a decreasing function of N.

The usefulness of search extensions is explained using the concept
of tactical barrier: the change to the search algorithm decreases the
percentage of positions having large tactical barriers.
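For concreteness, here is roughly what one such extension (a check extension) looks like inside a bare alpha-beta search. This is a generic sketch, not code from any program in this thread; Position, Move, and the evaluate / generate_moves / make_move / unmake_move / in_check hooks are assumed engine plumbing rather than a real API:

typedef struct Position Position;
typedef int Move;
enum { MAX_MOVES = 256 };

int  evaluate(Position *pos);                  /* static eval, side to move's view */
int  generate_moves(Position *pos, Move *out); /* returns move count */
void make_move(Position *pos, Move m);
void unmake_move(Position *pos, Move m);
int  in_check(Position *pos);                  /* is the side to move in check? */

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return evaluate(pos);          /* or drop into a quiescence search */

    Move moves[MAX_MOVES];
    int n = generate_moves(pos, moves);    /* (mate/stalemate handling omitted) */

    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);

        /* The extension: a checking move keeps the forcing line alive, so it
         * is searched one ply deeper than its siblings.  Real programs bound
         * this to avoid chasing perpetual checks forever. */
        int ext = in_check(pos) ? 1 : 0;

        int score = -search(pos, depth - 1 + ext, -beta, -alpha);
        unmake_move(pos, moves[i]);

        if (score >= beta)
            return beta;               /* fail-hard cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

Giving checking moves an extra ply is exactly the kind of change described above: it pulls the tactical barrier of positions dominated by forcing checks down to a depth the program can actually reach, without deepening the whole tree.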

And evaluation ideas like Square-Of-The-Pawn are also justified by
the concept of tactical barrier.

> The idea of a tactical barrier is NONSENSE. The proof of this is
> you take your program and have it search to what you think the tactical
> barrier is. And then I will search 5 plies farther on every move and
> we will see who will win.

I think you will agree that this opinion is incorrect, if you
define the term "tactical barrier" as I have done above.

Brian

Vincent Diepeveen

Feb 5, 1997

In <01bc12d1$2c33fa40$c308...@cpsoft.demon.co.uk> "Chris Whittington" <chr...@demon.co.uk> writes:


>--
>http://www.demon.co.uk/oxford-soft
>
>Vincent Diepeveen <vdie...@cs.ruu.nl> wrote in article
><5d7qpb$nml$1...@krant.cs.ruu.nl>...
>> In <5d5o50$3...@merlin.pn.org> kerr...@merlin.pn.org (Tom C. Kerrigan) writes:
>>
>> >Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:
>> >
>> >> If you don't know a thing about chess (which is true for most chessprogrammers)
>> >> you need to get this chessknowledge from a second person. Still you must
>> >
>> >Most chess programmers on the planet Goog, I assume.
>> >
>> >I certainly know nothing about chess, but I seem to recall that the
>> >ratings of famous programmers published in How Computers Play Chess were
>> >seriously above average.
>> >
>> >With regard to how search improves the evaluation function, I suggest you
>> >read the closing statements of the CHESS 4.0 article in Chess Skill in Man
>> >and Machine. Excellent observations there.
>>
>> Lot of articles from chess skill in man and machine were from 1979 or something
>> around that year.
>>
>> In the mean time programs became stronger.
>>
>> There are few programmers with a bunch of elo, but that are usually
>> chessplayers and not programmers.
>
>Well I agree with Vincent here.
>
>Most programmers are not strong chess players.
>
>But being a strong chessplayer leads the programmer down the 'evaluation',
>slow nodes per second path.
>
>Being a weak chessplayer leads the programmer down the 'fast' 'brute-force
>path.
>
>Its still an open question: search or knowledge ......

I bet on the combination in my program.

Diep searches around 10 ply on a Pro at 3 min/move.

This is around 1 to 2 ply less than Fritz/Kallisto and some forward-pruning
programs, but that is still at least 5 moves.

>But I have to say that I slighty resent Vincent's assertion thta he is the
>only chess-knowledgeable programmer

I said few.

>Chris Whittington

Marcel van Kervinck

Feb 5, 1997

Brian Sheppard (bri...@mstone.com) wrote:

> There IS a tactical barrier, if you define the term properly.

> Given an evaluation function F, and a position P, and a search
> algorithm S, we can define the tactical barrier of P with respect
> to F and S to be the smallest depth, D, of search such that a program
> using F and S correctly plays P for all search depths >= D.

> Tactical barrier is a useful concept for several purposes. Allow me
> to elaborate.

Good definition. You may even abstract from the usual ply-by-ply
search by defining it in terms of minimal minimax trees. The
partial order in the above definition will then be defined by the
inclusion operator.

Tom C. Kerrigan

Feb 5, 1997

Bruce asked me to stop pissing on you, but this post is so insanely stupid
that I feel a moral obligation to reply.

Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:

> >With regard to how search improves the evaluation function, I suggest you
> >read the closing statements of the CHESS 4.0 article in Chess Skill in Man
> >and Machine. Excellent observations there.
> Lot of articles from chess skill in man and machine were from 1979 or something
> around that year.
> In the mean time programs became stronger.

Right. Programs today don't have search or evaluation functions. Silly me.

> There are few programmers with a bunch of elo, but that are usually
> chessplayers and not programmers.

Okay, let's read this sentence again and see if we can't spot a gaping
logical hole, shall we?

Cheers,
Tom

Komputer Korner

Feb 5, 1997

Chris Whittington wrote:
>
> --
snipped

> But there is a practical barrier (search tree exponential growth,
> blah-blah)
>
> 1. You can't search 5 ply more.
>
> 2. even if you could at those depths the better positional knowledge of the
> shallower program would still win out.
>
> Chris Whittington
>
>
Your answer is based on your innate belief that knowledge will win
out. That is true if the difference in knowledge is equal to the
difference between Deep Blue and Kasparov. However, we are not
talking about that difference. We are talking about the difference
between chess programs. The difference in knowledge between chess
programs will never make up for a 5 ply difference in search. There is
no tactical barrier at which extra search is worthless. This 5 ply
difference is about the difference between the micros and Deep Blue.
Even when the bar gets raised to micros searching 20 ply and Deep Blue
searching 25 ply, the speed difference will always be decisive no
matter how tricky or Tal-like CSTal becomes.

brucemo

Feb 5, 1997

Tom C. Kerrigan wrote:
>
> Bruce asked me to stop pissing on you, but this post is so insanely stupid
> that I feel a moral obligation to reply.

Yes, in private email, which you have felt necessary to discuss publicly,
that is what I said.

The basic idea is that in my opinion you'd do well to stop writing derisive
posts that have no independent content.

bruce

Komputer Korner

Feb 5, 1997

Vincent Diepeveen wrote:
>
snipped

>
> 1. e4,e5
> 2. Nf3,Nc6
> 3. Bb5,Nd4
>
> the tactical problem is that after Nxe5 white looses because of Qg5.
> So a program needs to see that Nxe5 is not possible.
>
> So the tactical barrier in this position is i would define to be around 7 ply
> (some programs more, some programs less) for most programs.
>
> Vincent
>
> >Komputer Korner
> --
> +----------------------------------------------------+
> | Vincent Diepeveen email: vdie...@cs.ruu.nl |
> | http://www.students.cs.ruu.nl/~vdiepeve/ |
> +----------------------------------------------------+

That assumes that the only line that matters is Nxe5. There are many
lines much deeper in the tree in that opening that are tactical and that
take many more than 7 plies to see. You can't divide up a chess game into
tactics and positional play. The 2 are intertwined, as any GM will tell you.
The whole of chess is one big tactical-positional melee. Barriers are
artificial limiters that you use to describe a quiescent position, but that
quiescent position will flare up again if you search deeply enough.
Quiescence is a term that computers like so that their search trees don't
get too big. However, since chess is one big search tree, the idea of a
barrier is trying to chop the tree down. Well, you can cut off your search
at the first quiescent position, but Deep Blue will search 5 ply deeper and
find a tactical trick that your quiescent position couldn't see. The whole
trick to knowledge is to search selectively, but to do that consistently
well you have to have human GM knowledge. If you don't, you had better be
fast like Deep Blue, because dumb (non-human) computer knowledge with slow
search will not cut it.

Robert Hyatt

Feb 6, 1997

Komputer Korner (kor...@netcom.ca) wrote:

This whole line of reasoning is "flawed". The idea of "tactical
sufficiency" is simply *wrong*. Here's a workable definition: You are
searching deep enough to avoid tactics, *if and only if* you are searching
as deep or deeper than your opponent. There is *no* number you can attach
to this. Take it from someone who has been there. 5 years ago if you had
asked me about Cray Blitz, I'd have said "no more depth", just "more
knowledge". And I'd have been wrong, as Hsu so forcefully demonstrated in
the game I've mentioned many times.

The flaw with "tactical sufficiency" is that it assumes there may be N-ply
forced sequences, but no N+1 ply forced sequences. That is
simply incorrect. For a given *position* there might be a tactical
threshold that can be defined, although one would have to work hard to
convince me that another 20 plies wouldn't uncover something completely
new in the position...

This sort of thing is seen all the time in a chess program that changes its
mind in the last iteration. If you let it search longer, it may change its
mind again... and this might continue for years if you have that long. I
don't buy the "fixed threshold" idea.. I believe chess is too dynamic for
that to be true...

Chris Whittington

Feb 6, 1997

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article <32F914...@netcom.ca>...


> Chris Whittington wrote:
> >
> > --
> snipped
> > But there is a practical barrier (search tree exponential growth,
> > blah-blah)
> >
> > 1. You can't search 5 ply more.
> >
> > 2. even if you could at those depths the better positional knowledge of the
> > shallower program would still win out.
> >
> > Chris Whittington
> >
> >
> Your answer is based on your innate belief that knowledge will win
> out. That is true if the difference in knowledge is equal to the
> difference between Deep Blue and Kasparov. However we are not
> talking about that difference. We are talking about the difference
> between chess programs. The difference between knowledge of chess
> programs will never make up for a 5 ply difference in search.
>
> There is
> no tactical barrier at which extra search is worthless.

Now, as a straight statement, what you say is true.
It would not be worthless, but it wouldn't be worth very much at the depths
you're referring to.

But your statement that programs can't have a 5-ply eval difference makes no
sense.

> This 5 ply
> difference is about the difference between the micros and Deep Blue.

Really ? I'm not convinced of that. Especially when DB's performance stats
are subject to conjecture.

> Even when the bar gets raised to micros searching 20 ply and Deep Blue
> searching 25 ply, the speed difference will always be decisive no
> matter how tricky or TAL like CSTAL becomes.

The speed difference will help but it won't be *decisive* (your word). And
I didn't know we were talking about my program specifically, I thought this
was a general thread ?

KK, you do blather on erroneously at times :)

I'll try to put it simply:

For two dissimilar programs:
At low search depths (say 2,3,4 ply) the program with the greater depth
capability will be advantaged.

At very deep search depths (say 20,25,30 ply) the program with the greater
knowledge will be advantaged.

The effect of knowledge increases with search depth.

We could draw a little graph to show this. The problem is that nobody knows
the figures; we just conjecture and guess.

I would guess that at 20+plies, superior knowledge (whatever that means)
would win out.

Chris Whittington

Komputer Korner

Feb 6, 1997

I don't see how one can arbitrarily set up a
tactical barrier point. Quiescent positions become non quiescent
deeper down in the search. The only reason that a quiescent position
is defined that way is that there are no immediate captures or
castling or sometimes checking moves. The keyword here is IMMEDIATE.
One program's tactical barrier is another program's horizon which
can be smashed by deeper searching. Under the above definition of
tactical barrier, the only 100% barrier would be the next to last
position in a game where the program is able to search the whole tree
out to the end of the game. No program can do this except deep in the
endgame so under your definition, tactical barriers only exist in the
endgame. In the opening and middlegames, all quiescent and I repeat
ALL quiescent positions dissolve into tactical messes eventually. Even
in dead drawn positions, there are lots of tactics going on where if
the program has a horizon it will never be able to achieve a so
called tactical safety threshold which you call "tactical barrier".
And I don't know of any program that doesn't have a horizon.

Komputer Korner

Feb 6, 1997

Brian Sheppard wrote:
>
snipped

> The usefulness of search extensions is explained using the concept
> of tactical barrier: the change to the search algorithm decreases the
> percentage of positions having large tactical barriers.
>
> And evaluation ideas like Square-Of-The-Pawn are also justified by
> the concept of tactical barrier.
>
> > The idea of a tactical barrier is NONSENSE. The proof of this is
> > you take your program and have it search to what you think the tactical
> > barrier is. And then I will search 5 plies farther on every move and
> > we will see who will win.
>
> I think you will agree that this opinion is incorrect, if you
> define the term "tactical barrier" as I have done above.
>
> Brian

This whole explanation sounds to me like a non-chess-playing computer
programmer who thinks that just because his program reaches a quiescent
position it is now safe to "cross the street". There is no safety in
chess except for fortress positions and known book draws. Tactical
barrier may have usefulness in "square of the pawn" and other endgame
positions, but that is it. ALL opening and middlegame QUIESCENT positions
become violent sooner or later. If you didn't accept my 5 plies
advantage argument, what about this? You have your program think to your
tactical barrier where it now thinks it is safe. Let us say that is
20 plies deep full width plus another 20 plies selective, which is
certainly far better than Deep Blue does now.

Now, let me be Gary Kasparov, who doesn't search full width but who
searches 50 ply selective if he has to. Lasker once showed a combination
that he searched 51 ply in order to win. Because Gary rarely throws out an
important line in his thinking process, his selective lines are all the
good ones, but your program's selective lines are only some of the good
ones. Who is better here? Of course the player who searches deeper and the
one that searches better selectively. In the above example, your program
will win the odd game because of a deeper full width search and Gary will
win the rest because of a better selectivity process. The controversy over
knowledge vs brute force is exactly this. Which one will win more games?

The mistake that the knowledge programmers make is in thinking that they
can duplicate the thinking processes of Kasparov. Their attempts will
always be less than successful. That is why the Deep Blue team have gone
to a 2-pronged approach, deep full width and deep selectivity. The deep
full width search is nice but without the good selectivity there is a
horizon. Look at what happened in games 5 and 6 of the last Deep
Blue-Kasparov match. They realize that only a combination of the 2 will
triumph. The hard part is the good selectivity. Deep Blue is as close to
your concept as any program is, but even it will fall to Gary's better
selectivity. Of course if one had perfect selectivity, one could never
lose, but this is the holy grail.

In short, there is no tactical barrier high enough by itself that will
defeat Kasparov. It will come from a combination of deep full width plus
another good deep selective search. Kasparov would laugh at the concept
of a tactical safety barrier.

Ed Schroder

Feb 6, 1997

From: Komputer Korner <kor...@netcom.ca>

: Vincent,

: I think I am beginning to understand why your level of posts
: is below that of other programmers.

I am not in agreement here.

Komputer Korner

Feb 7, 1997

Robert Hyatt wrote:
>

snipped


>
> This whole line of reasoning is "flawed". The idea of "tactical
> sufficiency" is simply *wrong*. Here's a workable definition: You are
> searching deep enough to avoid tactics, *if and only if* you are searching
> as deep or deeper than your opponent. There is *no* number you can attach
> to this. Take it from someone who has been there. 5 years ago if you had
> asked me about Cray Blitz, I'd have said "no more depth" just "more
> knowledge". And I'd have been wrong, as Hsu so forcefully demonstrated in
> the game I've mentioned many times.
>
> The flaw with "tacitical sufficiency" assumes that there may be N-ply
> forced sequencies, but there are no N+1 ply forced sequencies. That is
> simply incorrect. For a given *position* there might be a tactical
> threshold that can be defined, although one would have to work hard to
> convince me that another 20 plies wouldn't uncover something completely
> new in the position...
>
> This sort of thing is seen all the time in a chess program that changes its
> mind in the last iteration. If you let it search longer, it may change its
> mind again... and this might continue for years if you have that long. I
> don't buy the "fixed threshold" idea.. I believe chess is too dynamic for
> that to be true...

Then you agree with me.

Komputer Korner

Feb 7, 1997

Chris Whittington wrote:
>
> --
> http://www.demon.co.uk/oxford-soft
>

>
> The effect of knowledge increases with search depth.
>
> We could draw a little graph to show this. Problem is that nobody knows the
> figures. we just conjecture and guess.
>
> I would guess that at 20+plies, superior knowledge (whatever that means)
> would win out.
>
> Chris Whittington
>


At 20 plies deep, how many plies will the most knowledgeable computer
program's knowledge be worth in extra search plies? In other words
will you let me build a DEEEEEEP Blue that will search 30 ply deep.
Will your knowledge still be enough to win? Or are you saying that
superior knowledge is worth only 5 ply at tops. Don't forget while
you will be adding knowledge, so will DEEEEEP Blue. I think that
to make up for 5 ply, you will need all the knowledge of all the GM's.
The problem is that your program won't be able to assimilate all that
knowledge correctly. Knowledge can't make up completely the difference
at very deep search depths. My next statement will hit below the belt,
but here it is. Vincent once said that the more asymmetric a program
is the harder time it has at very deep search depths. I don't
understand why but if you agree with that statement then wouldn't
you say that adding knowledge to an asymmetric program is like trying
to teach a drunk to walk a straight line. If you disagree with
Vincent's statement then that is one more Vincentism we have to clear
up. Anyway to get back to the original argument. Even if you are right
about knowledge making up for 5 plies of extra search, that doesn't mean
that 10 ply extra won't be any good and that the knowledge will
prevent your program from falling into a tactical hole. Tactics lurk
everywhere even very deeply. Saying that a program can apply
huge knowledge to make up for deeper searching is arguing the root
processor argument and I didn't know that Chris Whittington had
jumped ship into the root processor crowd.

Chris Whittington

Feb 7, 1997

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article <32FAE6...@netcom.ca>...


> Chris Whittington wrote:
> >
> > --
> > http://www.demon.co.uk/oxford-soft
> >
>
> >
> > The effect of knowledge increases with search depth.
> >
> > We could draw a little graph to show this. Problem is that nobody knows the
> > figures. we just conjecture and guess.
> >
> > I would guess that at 20+plies, superior knowledge (whatever that means)
> > would win out.
> >
> > Chris Whittington
> >
>

It would be ever so helpful if you learnt about the concept of the
paragraph.
But I'll break your text up for you in answering :)

>
> At 20 plies deep, how many plies will the most knowledgeable computer
> program's knowledge be worth in extra search plies?

Don't know - nobody knows.

> In other words
> will you let me build a DEEEEEEP Blue that will search 30 ply deep.
> Will your knowledge still be enough to win?

What, 25 ply + knowledge against 30 ply + materialism?
I would *predict* that 25+K would win.

>
> Or are you saying that
> superior knowledge is worth only 5 ply at tops.

No, we've discussed this before. Knowledge can be worth 40, 50, 60 plies,
or 0 plies, or 1,2,3,4,5..... plies. Depends on the knowledge. Depends on
the situation.

> Don't forget while
> you will be adding knowledge, so will DEEEEEP Blue.

Except that by definition this discussion is about knowledge versus deep
material search.

If you give the deep material search knowledge then sure. But then it stops
being deep material search.

> I think that
> to make up for 5 ply, you will need all the knowledge of all the GM's.

At low depth, yes. At high depth, it's another matter. That's what we were
discussing, no ?

> The problem is that your program won't be able to assimilate all that
> knowledge correctly.

This is very probably true. But deconstruct the statement: 'correctly' is
an interesting word. I'd posit that 'correctly' isn't meaningful in this
context, since
a) nobody agrees
b) what's right one day is wrong the next

>
> Knowledge can't make up completely the difference
> at very deep search depths.

What ?

Look, sometimes it makes up the difference, sometimes it exceeds, and
sometimes it fails to make the difference.


> My next statement will hit below the belt,

ooohh, ooow ...

> but here it is. Vincent once said that the more asymmetric a program
> is the harder time it has at very deep search depths. I don't
> understand why but if you agree with that statement then wouldn't
> you say that adding knowledge to an asymmetric program is like trying
> to teach a drunk to walk a straight line. If you disagree with
> Vincent's statement then that is one more Vincentism we have to clear
> up.

Er, where was the hit ? Didn't notice it.

There's no reason why asymmetry should, per se, create problems at greater
depth.

All chess programs are about trying to make drunks walk straight.

> Anyway to get back to the original argument.

Oh, god - where in the mass of unbroken words was that :)

> Even if you are right
> about knowledge making up for 5 plies of extra search, that doesn't mean
> that 10 ply extra won't be any good and that the knowledge will
> prevent your program from falling into a tactical hole. Tactics lurk
> everywhere even very deeply.

But,

If you take my queen with your knight - you win (1 ply)

If you fork my king and queen with your knight - you win (3 ply)

If you threaten to fork my Q and K - you don't necessarily win. I can
defend (5 ply)

If you threaten to threaten to fork my Q and K - you don't necessarily win.
I now have 4 plies to defend in (7 ply)

And so on and so on.

Tactics and tactical threats are defensible by having spare moves.
Therefore tactics and tactical threats become less and less the issue for
deep search.

Positional factors become more and more the issue.

Do I win the KK post of the week for this little gem of wisdom so well
explained ? :)

> Saying that a program can apply
> huge knowledge to make up for deeper searching is arguing the root
> processor argument

No it doesn't. It argues nothing of the sort.

Chris Whittington

Vincent Diepeveen

Feb 7, 1997

>This whole line of reasoning is "flawed". The idea of "tactical
>sufficiency" is simply *wrong*. Here's a workable definition: You are
>searching deep enough to avoid tactics, *if and only if* you are searching
>as deep or deeper than your opponent. There is *no* number you can attach
>to this. Take it from someone who has been there. 5 years ago if you had
>asked me about Cray Blitz, I'd have said "no more depth" just "more
>knowledge". And I'd have been wrong, as Hsu so forcefully demonstrated in
>the game I've mentioned many times.

>The flaw with "tacitical sufficiency" assumes that there may be N-ply
>forced sequencies, but there are no N+1 ply forced sequencies. That is
>simply incorrect. For a given *position* there might be a tactical
>threshold that can be defined, although one would have to work hard to
>convince me that another 20 plies wouldn't uncover something completely
>new in the position...

Suppose you have an objective observer, Kasparov2, which is a perfect
chess player. Suppose this K2 knows everything about chess, and concludes
that in a given position X there is a tactical horizon effect when searching
i < n ply, and that the tactics are only seen correctly with an n-ply search.

Now I define n ply to be the tactical barrier needed for position X. My estimate
is that in position X, after an n-ply search, only better knowledge will
help, instead of searching deeper.

>This sort of thing is seen all the time in a chess program that changes its
>mind in the last iteration. If you let it search longer, it may change its
>mind again... and this might continue for years if you have that long. I
>don't buy the "fixed threshold" idea.. I believe chess is too dynamic for
>that to be true...

It could be the case that in certain positions the barrier is exactly
the depth needed to find a mate. This depth could in certain positions be
so huge that it will not be possible to reach it.

On the other hand, a good king evaluation could perhaps give king safety
so large a positional penalty that a material advantage is outweighed
by this king-safety penalty, so that it won't be necessary to search any deeper.

Vincent

Marcel van Kervinck

Feb 7, 1997

I wrote:

> Brian Sheppard (bri...@mstone.com) wrote:
> > There IS a tactical barrier, if you define the term properly.
> > Given an evaluation function F, and a position P, and a search
> > algorithm S, we can define the tactical barrier of P with respect
> > to F and S to be the smallest depth, D, of search such that a program
> > using F and S correctly plays P for all search depths >= D.
> > Tactical barrier is a useful concept for several purposes. Allow me
> > to elaborate.
> Good definition. You may even abstract from the usual ply-by-ply
> search by defining it in terms of minimal minimax trees. The
> partial order in the above definition will then be defined by the
> inclusion operator.

After playing with the idea for a few days, I have to reject
definitions like the above as basically flawed. It looks like a
typical program under such definitions has a 'tactical
barrier' that lies close to the endnodes of the entire
game tree.

Marcel van Kervinck

unread,
Feb 7, 1997, 3:00:00 AM2/7/97
to

brucemo (bru...@nwlink.com) wrote:
> Marcel van Kervinck wrote:

> > It describes a neat evaluator that is worth a few ply of search.
> > (Positional knowledge). However, it is really expensive to compute.
> > I spent two years extracting essentials from that algorithm
> > to enhance my own program, which is actually a fast searcher.

> What is your program called? I've seen you post, and I've looked through
> back-issues of ICCA Journals to try to find you, but I can't.

It's 'Rookie'. Version 0 played in the 1993 DCCC. After that I
started a complete rewrite from scratch, borrowing some ideas from
the DISTANCE article. That became Version 1. It searched ~40,000
nps on a 60 MHz 060. I had most of it properly implemented last
October and it was basically ready to play the 1996 DCCC. However,
due to some misfortune I lost a few months of work just before
the tournament started and I had to skip the event.

Currently I'm too busy graduating. (VLSI implementation of search
algorithms... unfortunately unrelated to chess :)

I'm planning to write Version 2 if I can find some spare time. I don't
want to spoil more work on Version 1. Number 2 will be a rewrite of 1,
this time in C. (68k seems to have died, so no more assembler). And
parallelism will be standard. Just can't wait to see 200 Suns play chess.

Regards,

Chris Whittington

unread,
Feb 7, 1997, 3:00:00 AM2/7/97
to

--
http://www.demon.co.uk/oxford-soft

Robert Hyatt <hy...@crafty.cis.uab.edu> wrote in article
<5dbkn2$c...@juniper.cis.uab.edu>...

If you're searching with two equivalent material searchers, then sure, the
deeper one will probably win. Clearly programs get better with more depth.

>
> The flaw with "tactical sufficiency" assumes that there may be N-ply
> forced sequences, but there are no N+1 ply forced sequences. That is
> simply incorrect.

Agreed, that would be incorrect in a logical sense. It's not possible to
argue that there wouldn't be any N+1 forced sequences. There might be. But
we can argue that the number of forcing sequences falls with N.

Say N=2. White captures, black must recapture. Plenty of those. Or white
attacks, black retreats, plenty of those.

Say N=3. White captures, black must recapture. But white forcing again ?
Not so often.

N=4, fewer sequences, (even fewer possibilities to force black at ply 4)
N=5, still fewer and so on.

But it gets worse: the greater N is, the more intermediate move
possibilities open up for the opponent to foil you, or do something else to
divert you, or, or, or.

Now when you get to N=20 or N=30 (and remember we've got extensions on top)
then the number of forcing moves you can make, and keep making, and keep
forcing a response - without the opponent having myriad ways of deviating
or foiling you, is going to be limited. Maybe even vanishingly limited.
Very likely vanishingly limited.

Like does 1. e4 have a 25 move forcer flowing from it ? No.

This is tactical sufficiency - you hit the point where N gets so large,
that there just aren't any forced winning lines to find.

Then what ? Well, it's positional knowledge, of course. And that's why
positional is so important as the search space expands.
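You can even put a toy number on the hand-wave: if at each of the opponent's
turns there is only some probability p that he is genuinely forced (no
in-between move, no deviation), the chance a given line stays forcing for N
plies is about p^(N/2), which collapses quickly. Back-of-envelope only - the p
below is invented and chess is not a coin flip - but it shows the shape:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* p = invented probability that any given reply is truly forced;
         only the qualitative decay matters, not the particular number. */
      double p = 0.30;
      int n;

      for (n = 2; n <= 30; n += 4)
          printf("N = %2d plies: chance a line stays forcing ~ %.6f\n",
                 n, pow(p, n / 2.0));
      return 0;
  }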

Can I claim my second KK post of the week, please :)

Chris Whittington


> For a given *position* there might be a tactical
> threshold that can be defined, although one would have to work hard to
> convince me that another 20 plies wouldn't uncover something completely
> new in the position...
>

Dan Kirkland

unread,
Feb 8, 1997, 3:00:00 AM2/8/97
to

In article <5d9bqo$4...@merlin.pn.org>,

kerr...@merlin.pn.org (Tom C. Kerrigan) writes:
>Bruce asked me to stop pissing on you, but this post is so insanely stupid
>that I feel a moral obligation to reply.
>
>Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:

[stuff deleted...]

>Okay, let's read this sentence again and see if we can't spot a gaping
>logical hole, shall we?
>
>Cheers,
>Tom

Well Tom, can't hold it till ya get to the restroom huh?

Most of us have long realized that Vincent doesn't write great
English. And most of us are more than willing to overlook it.

It looks as though you have not yet figured this out...?
Either you haven't realized Vincent's English is not so good
(You are not that stupid are you?), or, you are just too damn
lazy to go to the restroom and ...

Either way, it's pretty lame...

(Yes, I know, this post is also pretty lame. But I couldn't
resist!!!)

dan

Tom C. Kerrigan

unread,
Feb 9, 1997, 3:00:00 AM2/9/97
to

Dan Kirkland (kirk...@ee.utah.edu) wrote:

> Well Tom, can't hold it till ya get to the restroom huh?

> Most of us have long realized that Vincent doesn't write great
> English. And most of us are more than willing to overlook it.

Guess you can't hold it either.

My comment was not about Vincent's English.

Because of his not-quite-perfect English, there are several ways to
interpret what he posted, but all of them amount to programmers not being
programmers. THAT is the problem I'm having trouble with.

Cheers,
Tom

Komputer Korner

unread,
Feb 10, 1997, 3:00:00 AM2/10/97
to

Chris Whittington wrote:
>
> --

>
>
>
> But it gets worse: the greater is N, the more intermediate move
> possibilities open up for the opponent to foil you, or do something else to
> divert you, or, or, or.
>
> Now when you get to N=20 or N=30 (and remember we've got extensions on top)
> then the number of forcing moves you can make, and keep making, and keep
> forcing a response - without the opponent having myriad ways of deviating
> or foiling you, is going to be limited. Maybe even vanishingly limited.
> Very likely vanishingly limited.
>
> Like does 1. e4 have a 25 move forcer flowing from it ? No.
>
> This is tactical sufficiency - you hit the point where N gets so large,
> that there just aren't any forced winning lines to find.
>
> Then what ? Well its positional knowledge, of course. And that's why
> positional is so important as the search space expands.
>
> Can I claim my second KK post of the week, please :)
>
> Chris Whittington
>

> >
> >

The flaw in your argument is that your opponent's full width search will
prevent your program from deviating half way through to avoid the tactics
that it finally sees after the horizon bar is moved with each move. Your
argument is only valid when playing against selective searchers that have
less than 100% efficiency in picking all the important lines of
selective search. A Deep Blue that searches 6 ply farther is doing this
full width. So if it sees something important 6 ply ahead your
program's knowledge is not going to make any difference. We are really
talking here about the difference between Kasparov and Deep Blue.
Kasparov has the knowledge to decide which selective lines are good.
Your program or any other micro doesn't.
Ex: Kasparov's selective search efficiency approaches 97% let us say.
He makes about 1.33 mistakes per game on average. For 40 moves that is
3.3% in mistakes.
Deep Blue's selective search efficiency is probably less than 90%.
Your program is somewhere in between or you better hope it is.
Let us say for argument's sake that CSTAL has a selective search
efficiency of 92.5%. That means that it makes 3 mistakes a game.
Now let us say that Deep Blue makes 5 mistakes per game. For every
mistake that Deep Blue makes, it will have 3 moves or 6 plies to make up
for those mistakes within the full width window. Sometimes that is not
enough, because the mistake may be a fatal positional error, but most
of the time Deep Blue will escape the mistakes because of a combination
of 6 ply full width deeper and a longer selective search. Don't forget
that the mistakes come from the important selective lines that are not
considered by the program. On the other hand CSTAL's mistakes cannot
be corrected by a deeper horizon. It has nowhere to hide vs Deep Blue.

Kasparov defeats Deep Blue easily because of his high selective search
efficiency. CSTAL won't because the difference in the selective
search efficiency is too small to make up for the large difference in
the depth of search of those selective lines plus the difference in
the full width. Only when you approach the selective search efficiency
of Kasparov will you be able to overtake the likes of Deep Blue. Until
then there will be tactical problems that all micros will fail at
because of not having enough speed. The $64,000,000 question is will
the micros be able to add enough knowledge? I now think it will take
much longer than everyone thought. Maybe another 30 years.

Chris Whittington

unread,
Feb 10, 1997, 3:00:00 AM2/10/97
to

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article

<32FEE7...@netcom.ca>...

IF IF IF IF IF IF IF it sees something further ahead - but at the depths
we're referring to there's myriad ways of avoiding ..............

Can somebody else explain ?

> We are really
> talking here about the difference between Kasparov and Deep Blue.

No, you were clearly talking about 5 extra plies being worth more than
knowledge regardless of depth. Don't change the subject.

> Kasparov has the knowledge to decide which selective lines are good.
> Your program or any other micro doesn't.

Indeed, this is very true.

Shall I try this technique ?

What we were talking about was the feeding habits of herons.
Herons spear fish with their beaks.
Komputers can't do this.

Now I can be right, what a great feeling :)

> Ex: Kasparov's selective search efficiency approaches 97% let us say.
> He makes about 1.33 mistakes per game on average. For 40 moves that is
> 3.3% in mistakes.

Where are you getting these insane figures from ?
Korner, this is total madness.

1. What is selective search efficiency ? No, don't bother you just made it
up.

2. Who says he makes 1.33 mistakes per game ? No don't bother, it must be
the computer that can solve life, the universe and everything, and has
solved chess as well along the way.

> Deep Blue's selective search efficiency is probably less than 90%.

Gobbeldegook efficiency ration parameter of the second order is 35.675809
borons per sillisecond squared.

> Your program is somewhere in between or you better hope it is.

You're totally nuts.

> Let us say for argument's sake that CSTAL has a selective search
> efficiency of 92.5%. That means that it makes 3 mistakes a game.

I wish.

> Now let us say that Deep Blue makes 5 mistakes per game.

Let us say anything, we just make it up as we go along.

> For every
> mistake that Deep Blue makes, it will have 3 moves or 6 plies to make up
> for those mistakes within the full width window.

Ok, the reason for your bonkers logik patterns has suddenly become clear.

YOU BELIEVE THAT THE ONLY REASON PROGRAMS MAKE MISTAKES IS BECAUSE OF
SELECTIVE SEARCH.

You think full-width is fool-proof, and sees everything, and knows
everything.

Sorry, but I'm going to have to disillusion you. Full-width (or any search)
is only as good as its evaluation at the tip nodes.

Think about it. Suppose I forget to evaluate doubled pawns ?

Then does my 30 ply of full width make mistakes or not ?

Might it leave me with a truly shitty doubled pawn at the end of the PV and
not care about it ?

Hmmm ?
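Whereas a one-liner at the tips fixes it for good. Something like the sketch
below - the bitboard layout and the 12-centipawn weight are placeholders, not
anybody's real numbers - and suddenly all the full-width plies start steering
away from those positions instead of wandering into them:

  #include <stdint.h>

  typedef uint64_t bitboard;              /* one bit per square, a1 = bit 0 */

  #define FILE_A 0x0101010101010101ULL
  #define DOUBLED_PAWN_PENALTY 12         /* centipawns, pick your own */

  /* Charge a small penalty for every extra pawn sitting on a file that
     already holds a friendly pawn.  Called from the static evaluation
     at the tip nodes.                                                     */
  int doubled_pawn_penalty(bitboard own_pawns)
  {
      int file, penalty = 0;

      for (file = 0; file < 8; file++) {
          bitboard on_file = own_pawns & (FILE_A << file);
          int count = 0;
          while (on_file) {               /* popcount the file */
              on_file &= on_file - 1;
              count++;
          }
          if (count > 1)
              penalty += (count - 1) * DOUBLED_PAWN_PENALTY;
      }
      return penalty;
  }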

> Sometimes that is not
> enough, because the mistake may be a fatal positional error, but most
> of the time Deep Blue will escape the mistakes because of a combination
> of 6 ply full width deeper and a longer selective search. Don't forget
> that the mistakes come from the important selective lines that are not
> considered by the program.

Ah yes, selective search == errors. KK logik.

ChrisW logik: faulty or inadequate evaluations at the tip nodes == errors

More ChrisW logik: an inability to see the entire tree == just trying to do
the best you can under the circumstances. This may be by total brute force,
this may be by sacrificing width to try and gain depth, it may be by
......

Compared to the omnipotent one, any solution to the problem (including
Kasparov's) will contain 'errors'.


> On the other hand CSTAL's mistakes cannot
> be corrected by a deeper horizon. It has nowhere to hide vs Deep Blue.

No it can't hide, but it can run :)

>
> Kasparov defeats Deep Blue easily because of his high selective search
> efficiency.

No, he just understands chess better.

Neither you nor I understand how Kasparov does it.

> CSTAL won't because the difference in the selective
> search efficiency is too small to make up for the large difference in
> the depth of search of those selective lines plus the difference in
> the full width.

Phew. Please wait till I can parse the above into some sort of sense.

Tap, tap, tap (sound of fingers drumming).

No, sorry, just can't manage it. It's like one of those maths equations full
of greek letters and strange symbols - too complicated for little me.

CSTal is unlikely to stand a bat's chance in hell against Deep Blue.
I suspect this is true of most programs.
But you never know, DB has lost in the past. Might get it in an opening
trap :)
We've been conjecturing that in a game series against a batch of PC prgs,
DB might come a cropper. Speak to Ed, he's got ideas on this.

> Only when you approach the selective search efficiency
> of Kasparov will you be able to overtake the likes of Deep Blue. Until
> then there will be tactical problems that all micros will fail at
> because of not having enough speed.

Might I point out that you are changing the subject again ?

We spoke of what happens at depth (20 to 30 ply), see my original post at
the top.

At these depths DB has less of an advantage due to speed and extra plies.
Positional becomes relatively more and more important.

You now seem to have switched to what happens *now*, with current hardware
and current software. It's better to keep the goalposts in one place, no ?

> The $64,000,000 question is will the micros
> be able to add enough knowledge? I now think it will take much longer
> than everyone thought. Maybe another 30 years.

And let's just pluck another number from the sky.

15 Kommandments.

97% search efficiency

30 years to get the knowledge.

I spent 345.34786512938471238470192837409781236450918234 seconds writing
this post.

Can I claim my third KK post of the week for this one now ? :)

Chris Whittington



> --
> Komputer Korner
> The komputer that kouldn't keep a password safe from
> prying eyes, kouldn't kompute the square root of 36^n,
> kouldn't find the real Motive and variation tree in
> ChessBase, kouldn't compute the proper time in 2 variation
> mode, missed the Hiarcs functionality in Extreme
> and also misread the real learning feature of Nimzo.

The Komputer that .... oh, never mind.
>

Bernhard Sadlowski

unread,
Feb 10, 1997, 3:00:00 AM2/10/97
to

In article <5d7rb1$o1v$1...@krant.cs.ruu.nl>,
Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:
>Now we can look to statistics, and the chance that in a game a program plays
>against Kasparov, that there is a position where it hasn't reached the depth
>needed to see the tactical trick. This says of course NOTHING about positional
>and/or strategical insight. Just a statistical look about the chance one
>loses tactically when searching at a certain depth.
>
>I'm just curious about how deep one needs to search on average to prevent
>losing tactically.
>
>For example after

>
>1. e4,e5
>2. Nf3,Nc6
>3. Bb5,Nd4
>
>the tactical problem is that after Nxe5 white loses because of Qg5.
>So a program needs to see that Nxe5 is not possible.
>
>So the tactical barrier in this position I would define to be around 7 ply
>(some programs more, some programs less) for most programs.

This seems to me like a 1 ply barrier due to 4. Nxe5? Nxb5 :)

Bernhard
--
Bernhard Sadlowski
<sadl...@mathematik.uni-bielefeld.de>

brucemo

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

Graham, Douglass wrote:
>
> In article <01bc1118$79792100$c308...@cpsoft.demon.co.uk>, "Chris says...

> >brucemo <bru...@nwlink.com> wrote in article <32F301...@nwlink.com>...
> >> Is there a point beyond which it makes no sense to improve
> >> your evaluation function, because additional time spent
> >> detracts from search depth and therefore makes the program
> >> weaker?

> >Why not turn your question round ?
> >
> >Can your current evaluation (or any current evaluation) with faster
> >hardware and maybe more efficient search ever consistently beat Kasparov
> >within the time frame of your interest in computer chess ?
> >
> >If the answer to this question is no, or unlikely, then you're left with
> >what seems to be the other path, namely evaluation function development.

Unfortunately, I didn't get to this until Chris' response was dumped from my
provider's disk, so I'll respond to a response that includes it.

I think this is a perfectly fine way to answer this question. I wasn't trying to
take on a particular religious viewpoint so much as I was attempting to stimulate
discussion.

My own viewpoint is that I'm going to try to maintain speed while adding as much
knowledge as I can.

Back to Chris' response. I don't think that I have to choose between my approach
and the knowledge approach. I think there are plenty of ways to go about this, and
even if it is true that mine won't get me past Kasparov, it isn't necessarily true
that yours will.

> >Or turn the question again: Do humans have sophisticated evaluation
> >functions ?

I don't think simulating a human is feasible. Some problems can be solved by
watching humans solve them, and attempting to simulate their solution processes.
Other times it makes more sense to use an approach that ignores human processes
entirely. There has been more success attained to date by using non-human
processes. This is not to say that human processes won't work better eventually,
but I don't feel diminished by my own choice to not research human processes.

> >Or, since there are no real hard answers to these questions, it comes down
> >to a matter of belief. Do you believe in search, and search improvements;
> >or, do you believe in evaluation improvements ?
> >
> >And it is just belief. Talk to other programmers, they *believe* (Vincent,
> >for example). Talk to another programmer, attack his program, or attack his
> >program behind his back; you'ld have been better off attacking his wife for
> >the response you'll get.

I agree with this. For the record, attack mine all you want. I have time and
emotion invested in it, but I believe that part of the process here involves
learning about other approaches, and subjecting your own approach to criticism. If
I find that another approach is distinctly better, and have ideas that I think will
prove fruitful, I'll rewrite mine.

bruce

brucemo

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

Chris Whittington wrote:

> For two dissimilar programs:
> At low search depths (say 2,3,4 ply) the program with the greater depth
> capability will be advantaged.
>
> At very deep search depths (say 20,25,30 ply) the program with the greater
> knowledge will be advantaged.
>

> The effect of knowledge increases with search depth.
>
> We could draw a little graph to show this. Problem is that nobody knows the
> figures. we just conjecture and guess.
>
> I would guess that at 20+plies, superior knowledge (whatever that means)
> would win out.

This post, including the conclusions that Chris would probably agree are fuzzy,
is pretty representative of the speed vs knowledge debate.

The idea is that the importance of search depth decreases as search depth
increases. Problem is that it is hard to devise a test that tests it properly.

I've also never seen any sort of study about what an evaluation function
accomplishes in a program, this would be even harder to quantify.

So, while your statement about depth-effectiveness drop-off is probably true, I
don't agree that it is certainly true.

We are in agreement about the lack of rigor in this field. Could it be that
what separates us from the very top tier is that they have accurate testing
mechanisms that they've kept secret from us? :-)

bruce

brucemo

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

Komputer Korner wrote:

> at very deep search depths. My next statement will hit below the belt,
> but here it is. Vincent once said that the more asymmetric a program
> is the harder time it has at very deep search depths. I don't
> understand why but if you agree with that statement then wouldn't
> you say that adding knowledge to an asymmetric program is like trying
> to teach a drunk to walk a straight line.

We are back to the asymmetry thing. An asymmetrical evaluation function
is simply a recognition that a program plays in a different style than
its opponent does. It does this by selecting slightly different goals
for each side.

I don't understand how it could have any effect upon the search process
itself.
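For instance (the term and the weights below are invented, just to show the
shape of the thing), an asymmetrical evaluation might score the very same
pattern differently depending on whose king it is:

  /* Hypothetical asymmetric king-exposure term, in centipawns: the program
     worries more about its own exposed king than it values exposing the
     opponent's.  Same pattern on both sides, different weight.             */
  int king_exposure_term(int own_open_files, int opp_open_files)
  {
      const int OWN_WEIGHT = 50;   /* be cautious with our own king      */
      const int OPP_WEIGHT = 30;   /* value attacking chances a bit less */

      return opp_open_files * OPP_WEIGHT - own_open_files * OWN_WEIGHT;
  }

Alpha-beta just backs up whatever numbers come out of that; the mechanics of
the search don't change.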

bruce

Harald Faber

unread,
Feb 11, 1997, 3:00:00 AM2/11/97
to

quoting a mail from chrisw # demon.co.uk

Hello Chris,


CW> From: Chris Whittington <chr...@demon.co.uk>
CW> Subject: Re: Evaluation function diminishing returns
CW> Organization: Organised. moi ?

CW> Now when you get to N=20 or N=30 (and remember we've got extensions on
CW> top) then the number of forcing moves you can make, and keep making, and
CW> keep forcing a response - without the opponent having myriad ways of
CW> deviating or foiling you, is going to be limited. Maybe even vanishingly
CW> limited. Very likely vanishingly limited.
CW>
CW> Like does 1. e4 have a 25 move forcer flowing from it ? No.
CW>
CW> This is tactical sufficiency - you hit the point where N gets so large,
CW> that there just aren't any forced winning lines to find.
CW>
CW> Then what ? Well its positional knowledge, of course. And that's why
CW> positional is so important as the search space expands.
CW>
CW> Can I claim my second KK post of the week, please :)
CW> Chris Whittington

You'll get a gold medal. :-))))

Harald
--

Vincent Diepeveen

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

>Chris Whittington wrote:
>
>> For two dissimilar programs:
>> At low search depths (say 2,3,4 ply) the program with the greater depth
>> capability will be advantaged.
>>
>> At very deep search depths (say 20,25,30 ply) the program with the greater
>> knowledge will be advantaged.
>>
>> The effect of knowledge increases with search depth.
>>
>> We could draw a little graph to show this. Problem is that nobody knows the
>> figures. we just conjecture and guess.
>>
>> I would guess that at 20+plies, superior knowledge (whatever that means)
>> would win out.
>
>This post, including the conclusions that Chris would probably agree are fuzzy,
>is pretty representative of the speed vs knowledge debate.
>
>The idea is that the importance of search depth decreases as search depth
>increases. Problem is that it is hard to devise a test that tests it properly.
>
>I've also never seen any sort of study about what an evaluation function
>accomplishes in a program, this would be even harder to quantify.

It is easy to see that knowledge is at least as good as search:

For an n-ply search program versus an (n-i)-ply search program with a better
evaluation function, i > 0: just define the better evaluation function to be
an i-ply search using the original evaluation.
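In concrete terms the construction is nothing but a wrapper like the one below;
alphabeta() and static_eval() stand for whatever the original program already
has, and I assume alphabeta() scores its leaves with static_eval():

  /* The construction above: turn an i-ply search into an "evaluation
     function".  An (n-i)-ply searcher calling eval_i() at its tips then
     examines at least the lines the n-ply searcher with static_eval() sees. */
  #define INF 1000000

  extern int alphabeta(void *pos, int depth, int alpha, int beta);
  extern int static_eval(void *pos);

  int eval_i(void *pos, int i)
  {
      if (i <= 0)
          return static_eval(pos);
      return alphabeta(pos, i, -INF, +INF);
  }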

>So, while your statement about depth-effectiveness drop-off is probably true, I
>don't agree that it is certainly true.
>
>We are in agreement about the lack of rigor in this field. Could it be that
>what separates us from the very top tier is that they have accurate testing
>mechanisms that they've kept secret from us? :-)

>bruce

Vincent

Vincent Diepeveen

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

I don't have proof that it affects search. After a few experiments, however,
I concluded this for myself.

The assumption is that for every different evaluation function one needs
a different move ordering to reduce tree size.

Suppose you have a sorting algorithm that selects the best moves first.
Depending on which evaluation is backed up by alpha-beta, one should use
two different orderings, and one does not know before one searches which
of the two has the highest chance of being best.

This is the first problem of an asymmetric evaluation function. And I mean
REALLY asymmetric, not just giving different penalties for the same
pattern.
Vincent.

Martin Borriss

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

In article <33014B...@nwlink.com>,

brucemo <bru...@nwlink.com> writes:
>The idea is that the importance of search depth decreases as search depth
>increases. Problem is that it is hard to devise a test that tests it properly.
>
>I've also never seen any sort of study about what an evaluation function
>accomplishes in a program, this would be even harder to quantify.
>
>So, while your statement about depth-effectiveness drop-off is probably true, I
>don't agree that it is certainly true.
>
>We are in agreement about the lack of rigor in this field. Could it be that
>what separates us from the very top tier is that they have accurate testing
>mechanisms that they've kept secret from us? :-)

I sometimes watch my program playing on GICS. I have noted that its
evaluation is sometimes way off. That is quite normal in wild positions, but
it also sees big advantages or disadvantages in quiet, drawn positions.

Therefore, I was thinking of building an evaluation testsuite which tries to
come close to the "correct" evaluation of each position. Of course, in the
positions tested searching is not "permitted", so those positions should be
quiescent (more or less; at least they should be "sane").
This test won't be perfect at all, but I think it will help me to check
my evaluation function and improve it.

In fact, the more I think about it the better I like the idea. I am
convinced that many programs don't understand the "simple" things in a game -
like transposing into an ending etc.

Good thing about the eval testsuite is that you can do the whole test within
a second :)
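The harness could hardly be simpler. Something like the sketch below, with a
plain text file of position-plus-target-score lines; the file format, the
tolerance and the engine's evaluate_fen() hook are just how I imagine it, not
anything that exists yet:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Assumed engine hook: set up the FEN and return the static evaluation
     in centipawns from White's point of view, no search involved.        */
  extern int evaluate_fen(const char *fen);

  #define TOLERANCE 50     /* allowed error in centipawns */

  /* Each line of the suite file: "<fen>;<target score in centipawns>"    */
  int run_eval_suite(const char *filename)
  {
      char line[256];
      int passed = 0, total = 0;
      FILE *f = fopen(filename, "r");

      if (!f)
          return -1;
      while (fgets(line, sizeof line, f)) {
          char *sep = strchr(line, ';');
          int target, score, diff;

          if (!sep)
              continue;
          *sep = '\0';
          target = atoi(sep + 1);
          score  = evaluate_fen(line);
          diff   = score > target ? score - target : target - score;
          total++;
          if (diff <= TOLERANCE)
              passed++;
          else
              printf("off by %4d cp: %s\n", diff, line);
      }
      fclose(f);
      printf("%d / %d positions within %d cp\n", passed, total, TOLERANCE);
      return passed;
  }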

What do other people think about it?

Martin
--
Martin....@inf.tu-dresden.de

brucemo

unread,
Feb 12, 1997, 3:00:00 AM2/12/97
to

Vincent Diepeveen wrote:

> It is easy to see that knowledge is at least as good as search:
>
> For an n-ply search program versus an n-i ply search program with better
> evaluation function, i > 0 just define the evaluation function to be an
> i ply search with the original evaluation.

I guess this makes Fritz a heavy-duty knowledge program then.

I don't think your example accomplished anything, other than to lead to
discussion about what an eval function is.

bruce

Komputer Korner

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

Chris Whittington wrote:
>
> --
snipped

>

> No, sorry, just can't manage it. Its like one of those maths equations full
> of greek letters and strange symbols - too complicated for little me.
>
> CSTal is unlikely to stand a bat's chance in hell against Deep Blue.
> I suspect this is true of most programs.
> But you never know, DB has lost in the past. Might get it in an opening
> trap :)
> We've been conjecturing that in a game series against a batch of PC prgs,
> DB might come a cropper. Speak to Ed, he's got ideas on this.
>

>snipped


>
> Might I point out that you are changing the subject again ?
>
> We spoke of what happens at depth (20 to 30 ply), see my original post at
> the top.
>
> At these depths DB has less of an advantage due to speed and extra plies.
> Positional becomes relatively more and more important.
>
> You now seem to have switched to what happens *now*, with current hardware
> and current software. Its better to keep the goalposts in one place, no ?
>

>snipped
Okay, within the full width itself there can't be any tactical mistakes.
Since there is selective search on every move, the only mistakes can
come from not selecting all the important lines right out to mate.
The argument is how much knowledge is = to the difference in
full width plus the difference in selective search length and breadth.
Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
it away, and speed is expensive ) then in the future the difference
in knowledge will always be much less than the difference in speed.
Rich people and corporations are much higher up the bell curve of
technology which is expensive. IBM can also afford to hire GMs both
for the opening books and the input of chess knowledge if they can't
get enough knowledge for free. Speed will always rule because of
this. If you are arguing that at 30 ply speed is unimportant, then
why would anybody search further? Chess search does not exhibit
fatal ( there is a term for this that I have forgotten based on
death)??

Chris Whittington

unread,
Feb 13, 1997, 3:00:00 AM2/13/97
to

--
http://www.demon.co.uk/oxford-soft

Komputer Korner <kor...@netcom.ca> wrote in article

<3302CB...@netcom.ca>...
> Chris Whittington wrote:
> > --
> snipped
>
> > No, sorry, just can't manage it. It's like one of those maths equations full
> > of greek letters and strange symbols - too complicated for little me.
> >
> > CSTal is unlikely to stand a bat's chance in hell against Deep Blue.
> > I suspect this is true of most programs.
> > But you never know, DB has lost in the past. Might get it in an opening
> > trap :)
> > We've been conjecturing that in a game series against a batch of PC prgs,
> > DB might come a cropper. Speak to Ed, he's got ideas on this.
> >
> >snipped
> >
> > Might I point out that you are changing the subject again ?
> >
> > We spoke of what happens at depth (20 to 30 ply), see my original post at
> > the top.
> >
> > At these depths DB has less of an advantage due to speed and extra plies.
> > Positional becomes relatively more and more important.
> >
> > You now seem to have switched to what happens *now*, with current hardware
> > and current software. It's better to keep the goalposts in one place, no ?
> >

> >snipped
> Okay, within the full width itself there can't be any tactical mistakes.
> Since there is selective search on every move, the only mistakes can
> come from not selecting all the important lines right out to mate.
> The argument is how much knowledge is = to the difference in
> full width plus the difference in selective search length and breadth.
> Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
> it away, and speed is expensive ) then in the future the difference
> in knowledge will always be much less than the difference in speed.
> Rich people and corporations are much higher up the bell curve of
> technology which is expensive. IBM can also afford to hire GMs both
> for the opening books and the input of chess knowledge if they can't
> get enough knowledge for free. Speed will always rule because of
> this. If you are arguing that at 30 ply speed is unimportant, then
> why would anybody search further? Chess search does not exhibit
> fatal ( there is a term for this that I have forgotten based on
> death)??

Am I alone in having real difficulty with this 'paragraph' ? A stream of
Komputer consciousness. Ok, I'll have a go at it. First, break it up into
manageable chunks.


> Okay, within the full width itself there can't be any tactical mistakes.

This depends. If the program knows about P=1, N=3, B=3 etc. then it will
strive, within its full width search, to (a) win material, (b) not lose
material, (c) win the enemy king, (d) not lose its own king. So we can
assume that, within the horizon, it won't blunder on a material level.

You may call this 'no tactical mistake', personally I don't, I think
tactics is more than this. What about weak pawns, very low mobility pieces,
smashed up king defences etc. etc. ? You may want to set the rules to be
about material only, but there's more to it than this. Or, even if yuo'ld
like to set it to material only, then material values can change in the
game. Good bish v. bad nite etc. etc. etc.

> Since there is selective search on every move, the only mistakes can
> come from not selecting all the important lines right out to mate.

What does 'there is selective search on every move' mean ?

The second half of your sentence seems to imply that 'the only mistakes
come from not having omnipotent knowledge'. But coupled with this selective
search statement, sorry, I don't understand you.

> The argument is how much knowledge is = to the difference in
> full width plus the difference in selective search length and breadth.

Is it ? I don't think so. The 'discussion' is whether knowledge is worth
less than 5 plies at deep search levels. You're changing the subject again.
Or moving the goalposts.

> Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
> it away, and speed is expensive ) then in the future the difference
> in knowledge will always be much less than the difference in speed.

Is knowledge cheap ?
Is speed expensive ?

Both appear to require intelligent and creative people to spend much of
their time fiddling around trying to make progress. Whether this is by P6
200's or software.

Your contrapositioning of speed/knowledge relative costings appears to me
to make no sense (again).

In the case of DB, it seems to me that what was cheap for them was what
they did, namely build a bloody fast machine. This required a very large
knowledge dosage, but *not* the knowledge we're discussing here, i.e. it did
not get a very large dosage of *chess* knowledge. Now, to get this chess
knowledge into the tip nodes of DB would be difficult - not impossible, but
difficult. It's significant that, given the choice of 'let's go for speed' or
'let's go for knowledge', they went for speed.

Contradiction to your knowledge cheap, speed expensive ?

So, finally your statement: 'diff in knowledge <= diff in speed (always)',
seems to be an unsustainable assertion.

Apart from the fact that both sides of the equation use different units
and measure different things.

One example: knowledge diff (kasparov, deep blue) is very large.

speed diff (kasparov, deep blue) is very large in opposite direction.

How does your equation account for this ?

> Rich people and corporations are much higher up the bell curve of
> technology which is expensive.

Piffle. In chess software development, small people and small companies are
much higher up the bell curve of chess technology which is cheap.

> IBM can also afford to hire GMs both
> for the opening books and the input of chess knowledge if they can't
> get enough knowledge for free.

Show me anybody who can (a) work out what a GM means, (b) translate this
into software, (c) not go insane in the process.


> Speed will always rule because of
> this.

'This'. What is this 'this' that speed always rules from ? That IBM has
loadsa dosh ? Well, we'll see about that.

> If you are arguing that at 30 ply speed is unimportant, then
> why would anybody search further?

Indeed, why bother ? Trying to input some knowledge into the tip nodes would
be a cool idea.

> Chess search does not exhibit
> fatal ( there is a term for this that I have forgotten based on
> death)??

Er, yes ... there's more to this sentence ? It can't just terminate here,
surely ?

Tom C. Kerrigan

unread,
Feb 14, 1997, 3:00:00 AM2/14/97
to

Komputer Korner (kor...@netcom.ca) wrote:

> Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
> it away, and speed is expensive ) then in the future the difference

I'm having as much trouble reading this paragraph as Chris is, but this
particular bit caught my eye.

If Bob just has all of this cheap knowledge to give away, and knowledge is
equal to so-and-so many ply, why hasn't Crafty solved chess yet??

Bob has experience that he's willing to relate, but he isn't God. Don't
get them confused. When it comes to writing the perfect evaluation
function, he knows exactly as much as everybody else: NOTHING.

Write your own program, then come back and tell us that knowledge is
cheap.

Cheers,
Tom

Robert Hyatt

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

Tom C. Kerrigan (kerr...@merlin.pn.org) wrote:
: Komputer Korner (kor...@netcom.ca) wrote:

: > Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
: > it away, and speed is expensive ) then in the future the difference

: I'm having as much trouble reading this paragraph as Chris is, but this
: particular bit caught my eye.

: If Bob just has all of this cheap knowledge to give away, and knowledge is
: equal to so-and-so many ply, why hasn't Crafty solved chess yet??

: Bob has experience that he's willing to relate, but he isn't God. Don't
: get them confused. When it comes to writing the perfect evaluation
: function, he knows exactly as much as everybody else: NOTHING.

I think the above is wrong. Maybe you should have said that nobody knows
*everything*, which is a different statement. Some knowledge is cheap,
some search things are cheap, and then there's the other stuff. Is this
bit of knowledge worth that many nodes per second, or that many plies of
depth?

However, there's plenty of knowledge that current chess programs don't have,
yet it's not overly complex to add. One example, try a position on your
favorite program where the pawns are nearly blocked, with one side having
only one possible pawn lever. Find a program that sets the pieces up so
that the lever can be played. That program will be a killer in many positions
where computers currently play badly. I'll have a solution for you in a year,
because this is not an overwhelming problem, but it's a critical one if programs
are going to continue to climb the skill ladder...
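Nobody should read the following as the actual solution; it is only a cartoon of
the *kind* of term involved: once the single square from which the lever can be
pushed has been identified (somewhere else in the eval), reward piling pieces
onto it. The attack-count inputs and the 25 cp weight are invented:

  #include <stdint.h>

  typedef uint64_t bitboard;

  /* own_attackers / opp_attackers: bitboards of the pieces of each side
     that attack the lever square (the one square from which the freeing
     pawn break can be played, assumed detected elsewhere).  The term just
     rewards outgunning the opponent on that square, which pure search at
     feasible depths will not arrange by itself in blocked positions.      */
  int lever_preparation_bonus(bitboard own_attackers, bitboard opp_attackers)
  {
      int own = 0, opp = 0;

      while (own_attackers) { own_attackers &= own_attackers - 1; own++; }
      while (opp_attackers) { opp_attackers &= opp_attackers - 1; opp++; }

      if (own > opp)
          return 25 * (own - opp);   /* invented weight, centipawns */
      return 0;
  }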


: Write your own program, then come back and tell us that knowledge is
: cheap.

: Cheers,
: Tom

Rolf Tueschen

unread,
Feb 15, 1997, 3:00:00 AM2/15/97
to

kerr...@merlin.pn.org (Tom C. Kerrigan) wrote:

>Komputer Korner (kor...@netcom.ca) wrote:

>> Since knowledge is cheap,. (Look at the r.g.c.c. where Bob Hyatt gives
>> it away, and speed is expensive ) then in the future the difference

>Bob has experience that he's willing to relate, but he isn't God. Don't


>get them confused. When it comes to writing the perfect evaluation
>function, he knows exactly as much as everybody else: NOTHING.

>Cheers,
>Tom
---------------------------------------------------------------------------------------------------------------

Thank you very much. *I* always thought god when reading/writing Bob
---
who couldn't detect the difference between Thorsten and traumschiff, and who
mixed up a mere confusion of Tony/David with a brutal rape of a 4 year old
girl!!!! And who was fascinated by his wish to find some solutions to make
gasoline out of black deathrow losers and who is far more amazed when Tony is
confused with David than by the fact that several hundreds of thousands of
people were killed in Indonesia. AND many more examples. (I see I'll have to
rearrange that.)
--
You know why?

Their names (ROTFL) are so close together.


Ciao and see ya. :)
Rolf

who couldn't ....


Tom C. Kerrigan

unread,
Feb 16, 1997, 3:00:00 AM2/16/97