
Dutch computer chess championship


Johan Havegheer

unread,
Jan 2, 1999, 3:00:00 AM1/2/99
to

Hello,

The King 3.0 has just finished first at the Dutch Computer chess
championship.

Congratulations to Johan De Koning

Don Getkey

unread,
Jan 2, 1999, 3:00:00 AM1/2/99
to

In article <36643dff...@news.uunet.be>, Haveghe...@village.uunet.be
(Johan Havegheer) writes:

>The King 3.0 has just finished first at the Dutch Computer chess
>championship.

Any links to this tournament?


,::::::<
,::/^\"``.
,::/, ` •`.
,::; | '.
;::| \___,-. c)
;::| \ '-'
;::| \
;::| _.=`\
`;:|.=` _.=`\
yours in chess,
Don

Coon Rapids MN USA

Johan Havegheer

unread,
Jan 3, 1999, 3:00:00 AM1/3/99
to
On 02 Jan 1999 15:31:48 GMT, dong...@aol.com (Don Getkey) wrote:

Sorry for this message; it should have been posted a month ago.
Due to an HD crash I had to restore my backup, and the message was only
sent now.

>
>In article <36643dff...@news.uunet.be>, Haveghe...@village.uunet.be
>(Johan Havegheer) writes:
>
>>The King 3.0 has just finished first at the Dutch Computer chess
>>championship.
>
>Any links to this tournament?


Look at Theo van der Storm's page:
http://ourworld.compuserve.com/homepages/thstorm/


>
>,::::::<
> ,::/^\"``.
> ,::/, ` •`.
> ,::; | '.
> ;::| \___,-. c)
> ;::| \ '-'
> ;::| \
> ;::| _.=`\
> `;:|.=` _.=`\
>yours in chess,
>Don
>
>Coon Rapids MN USA

Johan Havegheer

Don Getkey

unread,
Jan 4, 1999, 3:00:00 AM1/4/99
to

So, just how strong is CM6000's lineage? Check out the DCCC results...


1  The King                    9.0  63.5  49.50
AMD K6  350 MHz

 2  CilkChess                   8.0  70.5  47.00
2.064 processors at 195 MHz Silicon Graphics Origin2000

 3  Arthur                      7.5  70.0  43.50
Apple Power Mac G3 at 300 MHz

 4  Kallisto II Exp.            7.5  67.0  40.25
Pentium II 450 MHz

 5  Bionic Impakt               7.5  62.0  36.75
Dual-processor Pentium II 500 MHz

 6  Nimzo'99                    6.5  71.0  37.00
Pentium II 450 MHz

 7  Diep                        6.5  61.0  28.25
Pentium II 450 MHz

 8  Alexs                       6.5  60.5  27.75
Pentium II 504 MHz

 9  Patzer                      6.5  55.5  22.25
AMD K6-II 380 MHz

10  Ant                         5.5  54.0  16.00
Pentium II 350 MHz

11  Rookie 2.0+                 5.5  53.5  15.50
Dual-processor Pentium II 350 MHz

12  Dappet                      5.0  53.5  12.00
Pentium II 400 MHz

13  BugChess                    3.5  55.0   5.75
Pentium II 450 MHz

14  Zzzzzz                      1.5  57.0   0.75
Pentium II 166 MHz

15  Morphy 3.0                  1.5  55.5   0.75
Pentium II 166 MHz

16  Delta                       0.0  58.5   0.00
Pentium MMX 450 MHz


One interesting observation is that PC programs today seem to be reaching their
limits as defined by speed. Now that the hardware can fully match the
software's need for speed, given the exponential depth limitations of chess, we
are starting to see that a PII/350-based program, for example, is at no
handicap when pitted against a dual PII/500. Once a program hits the mid-teens
in ply count in roughly the same time as a slightly faster program, they are
both searching at their maximum limits. Future gains, it seems, will only be
had depending on what you search for within the 14-17-ply outer limits.

,::::::<
,::/^\"``.
,::/, ` •`.

Robert Hyatt

unread,
Jan 4, 1999, 3:00:00 AM1/4/99
to
Don Getkey <dong...@aol.com> wrote:

: So, just how strong is CM6000's lineage? Check out the DCCC results. . .


I'm not quite sure what you are saying, but a PII/350 is definitely at a
handicap when playing a dual 500 MHz machine. We are talking a factor of 3
in computing power, which is pretty much one more ply. And one ply does make a
difference. IE ask the folks who play Crafty on ICC using their 400-500
MHz machines what a difference the 2x faster quad xeon made compared to the
quad P6/200 I was using. The results are markedly different.

Note that the "chess is exponential" is really "chess is exponential
with respect to depth". But each ply does have a linear cost impact over
the previous ply. It is just that this factor of 3 becomes messy when
you compare the cost of 10 plies to 11 plies (factor of 3 more work) and
then compare the cost of 10 plies to 12 plies (factor of 9 more work).
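The factor-of-3 arithmetic above can be sketched in a few lines of Python. The effective branching factor of 3 is an assumption taken from the post, not a measured engine constant; real values vary by program and position:

```python
# Sketch of "each ply costs a constant factor more": linear per-ply growth,
# exponential in total. The branching factor is assumed from the post.
EFFECTIVE_BRANCHING_FACTOR = 3

def relative_cost(depth):
    """Work needed to search to `depth`, relative to a depth-0 search."""
    return EFFECTIVE_BRANCHING_FACTOR ** depth

# One more ply (10 -> 11) is 3x the work; two more (10 -> 12) is 9x.
print(relative_cost(11) // relative_cost(10))  # 3
print(relative_cost(12) // relative_cost(10))  # 9
```

This is exactly why a 3x faster machine buys "pretty much one more ply" under this model, and why the gap compounds from there.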


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Don Getkey

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to

In article <76r123$4fr$1...@juniper.cis.uab.edu>, Robert Hyatt
<hy...@crafty.cis.uab.edu> writes:

>I'm not quite sure what you are saying, but a PII/350 is definitely at a
>handicap when playing a dual 500mhz machine. We are talking a factor of 3
>in computing which is pretty much one more ply. And one ply does make a
>difference.

Apparently not enough of a "difference" when faced with a higher quality
player.

>IE ask the folks the play Crafty on ICC using their 400-500
>mhz machines what a difference the 2x faster quad xeon made compared to the
>quad P6/200 I was using. The results are markedly different.

There should be some increase, I submit, but it appears that a program can only
benefit so much from additional speed once its own high-water mark has been
reached. Pushing a "dumb" program at breakneck speed does not result in
significantly better performance; otherwise how do you explain "Delta," running
on a PII/450, getting shut out of a tournament where not one but two programs
entered were running on PII/166s? Furthermore, "The King" did beat the
none-too-weak "CilkChess," which was running on much faster hardware. Or are
you saying that "The King," even on slower hardware, is heavily favored over
the other entrants?


>
>Note that the "chess is exponential" is really "chess is exponential
>with respect to depth". But each ply does have a linear cost impact over
>the previous ply. It is just that this factor of 3 becomes messy when
>you compare the cost of 10 plies to 11 plies (factor of 3 more work) and
>then compare the cost of 10 plies to 12 plies (factor of 9 more work).


Right, and the trend (as it looks to me, demonstrated in this tournament as
well as in a few hours spent on ICC) is that many a high-quality program is
able to hold its own on a somewhat slower platform against inferior programs
on faster gear. BTW, I can't tell you how many times I have seen Crafty get
beat by a slower-running Fritz 3, CM5, CM6, R9, or R10.

My guess is that 1 extra ply at such deep depths is only going to tip
the scales in favor of the faster program over the smarter program from time
to time.


But this is good! This means that there is quite a bit more room for
improvement in programming technique.


,::::::<
,::/^\"``.
,::/, ` •`.

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Don Getkey <dong...@aol.com> wrote:

: In article <76r123$4fr$1...@juniper.cis.uab.edu>, Robert Hyatt
: <hy...@crafty.cis.uab.edu> writes:

:>I'm not quite sure what you are saying, but a PII/350 is definitely at a
:>handicap when playing a dual 500mhz machine. We are talking a factor of 3
:>in computing which is pretty much one more ply. And one ply does make a
:>difference.

: Apparently not enough of a "difference" when faced with a higher quality
: player.

This is always the case. Berliner wrote a paper on this a few years ago
and played "smart vs dumb" to several depths. But he couldn't go deep
enough to get data for the 12 ply searches we see today unfortunately.

But back to the original premise... I am not one of those that believe
in the concept of "tactical sufficiency". IE I believe that a ply is a
ply, and no matter how deep I can go, another ply will make my program
better. To date, I haven't seen anything to suggest this is wrong.

:>IE ask the folks the play Crafty on ICC using their 400-500
:>mhz machines what a difference the 2x faster quad xeon made compared to the
:>quad P6/200 I was using. The results are markedly different.

: There should be some increase I submit, but it appears that a program can only
: benefit so much from additional speed once it's own high water mark has been
: reached. Pushing a "dumb" program at break neck speeds does not result in
: significantly better performance, otherwise how do you explain "Delta" running
: on a Pll450 getting shut out of a tournament where not one, but two programs
: entered were running on Pll166's? Furthermore "The King" did beat the none too
: weak "CilkChess" which was running on much faster hardware. Or are you saying
: that "The King," even on slower hardware is heavily favored over the other
: entrants?

You are looking at single games. Where anything can happen. But play a
match with two decent programs, one with a 2-3x time handicap, and the
one with the speed advantage is going to win the match every time...

Yes there are exceptions. The faster program might be weaker. Which
invalidates the whole idea of course. But given "similar or equal"
programs, the speed advantage tells...


:>
:>Note that the "chess is exponential" is really "chess is exponential
:>with respect to depth". But each ply does have a linear cost impact over
:>the previous ply. It is just that this factor of 3 becomes messy when
:>you compare the cost of 10 plies to 11 plies (factor of 3 more work) and
:>then compare the cost of 10 plies to 12 plies (factor of 9 more work).


: Right, and the trend (as it looks to me as demonstrated in this tournament as
: well as spending a few hours on ICC), looks to me to be that many a high
: quality program is able to hold it's own on somewhat slower platforms against
: inferior programs on faster gear. BTW, I can't tell you how many times I have
: seen Crafty get beat by a slower running Fritz3,CM5, CM6, R9, or R10.

Certainly true. But then I can show you 20 games in a row where it will roll
over any current commercial program on a PII/450-500, running on my quad
xeon.

: My guess is that 1 extra ply at such deep depths is only going to sparsely tip
: the scales in favor of the faster program over the smarter program from time to
: time.

Never said "dumber/faster" would beat "smarter". I said that a smarter
program appears to continue to benefit from additional plies. When you take
the current crop of commercial programs, as an example, lets say smart is
"10" on the scale, and "fast/dumb" is 1 on the scale. Pick a number for
Crafty and for Rebel, and lets compare notes. You might be surprised...

ie fill in the following:

program "rank"
Crafty _____
Hiarcs _____
Rebel _____
Fritz _____

and all you have to supply is a number between 1 and 10 for each one..
based only on how "smart" you think they are...

Then I have some data I'll share..

: But this is good! This means that there is quite a bit more room for
: improvement in programming technique.

totally agree. And also that faster hardware is also going to make the
programs better..


: ,::::::<
: ,::/^\"``.
: ,::/, ` •`.
: ,::; | '.
: ;::| \___,-. c)
: ;::| \ '-'
: ;::| \
: ;::| _.=`\
: `;:|.=` _.=`\
: yours in chess,
: Don

: Coon Rapids MN USA

--

mongrel

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Robert Hyatt wrote:

snip

> Never said "dumber/faster" would beat "smarter". I said that a smarter
> program appears to continue to benefit from additional plies. When you take
> the current crop of commercial programs, as an example, lets say smart is
> "10" on the scale, and "fast/dumb" is 1 on the scale. Pick a number for
> Crafty and for Rebel, and lets compare notes. You might be surprised...
>
> ie fill in the following:
>
> program "rank"
> Crafty _____
> Hiarcs _____
> Rebel _____
> Fritz _____
>
> and all you have to supply is a number between 1 and 10 for each one..
> based only on how "smart" you think they are...

> Then I have some data I'll share..

snip

Pardon me for intruding into the thread, but I'm eager to see this data...

Hiarcs 9
Crafty 6
Rebel 6
Fritz 2

mongrel


Komputer Korner

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Is there any reason not to believe that the growth in the number of nodes
remains constant from ply depth to ply depth? It seems that the factor would
be constant across the whole range of ply depths. In other words, at depth 20
is it still a factor of 3?

--
--
Komputer Korner
The inkompetent komputer

To send email take the 1 out of my address. My email address is
kor...@netcom.ca but take the 1 out before sending the email.
Robert Hyatt wrote in message <76r123$4fr$1...@juniper.cis.uab.edu>...

>I'm not quite sure what you are saying, but a PII/350 is definitely at a
>handicap when playing a dual 500mhz machine. We are talking a factor of 3
>in computing which is pretty much one more ply. And one ply does make a
>difference. IE ask the folks the play Crafty on ICC using their 400-500
>mhz machines what a difference the 2x faster quad xeon made compared to the
>quad P6/200 I was using. The results are markedly different.
>
>Note that the "chess is exponential" is really "chess is exponential
>with respect to depth". But each ply does have a linear cost impact over
>the previous ply. It is just that this factor of 3 becomes messy when
>you compare the cost of 10 plies to 11 plies (factor of 3 more work) and
>then compare the cost of 10 plies to 12 plies (factor of 9 more work).

Steve Maughan

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
My guess would be


Hiarcs 9

Rebel 7
Crafty 5
Fritz 4

Steve Maughan

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
mongrel <car...@bellsouth.net> wrote:
: Robert Hyatt wrote:

: snip

: snip

: mongrel


Good answer. Based on NPS, here is what I would say:

hiarcs 10

crafty 5

rebel 4

fritz 1


And I haven't tried Rebel 10, so I don't know how it compares, although
someone could run the same position on both it and crafty to get a number
for each on the same hardware. the numbers might be 10,6,4,1 or something
close. Doesn't mean crafty is better than rebel because I don't believe
it is... but it does mean that whatever Rebel is doing, it is not doing it
because it is a super-smart program. Knowledge is definitely inversely
proportional to NPS.

Alan B

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
On 5 Jan 1999 14:11:38 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>mongrel <car...@bellsouth.net> wrote:
>: Robert Hyatt wrote:
>

>:<snip><snip>


>
>
>Good answer. Based on NPS, here is what I would say:
>
>hiarcs 10
>
>crafty 5
>
>rebel 4
>
>fritz 1
>
>
>And I haven't tried Rebel 10, so I don't know how it compares, although
>someone could run the same position on both it and crafty to get a number
>for each on the same hardware. the numbers might be 10,6,4,1 or something
>close. Doesn't mean crafty is better than rebel because I don't believe
>it is... but it does mean that whatever Rebel is doing, it is not doing it
>because it is a super-smart program. Knowledge is definitely inversely
>proportional to NPS.

Out of curiosity, where would you put MChess (the latest version you would
feel comfortable evaluating)? Also, I'm really curious about how "smart"
the successive versions of Nimzo were, from 3.5 to 98 to 99, since I
vaguely recall reading a quote (which may have been wrong in the first
place) from Dr. Donninger about taking knowledge out of 3.5 to make 98
faster, and about deciding what knowledge to remove. (Actually, I'm
dying to know where you'd rate CS Tal too, but I don't want to start
another flame sequence.)

Do you think that in different phases of the game "smartness" or speed
are more or less significant? Or is it a constant from opening to
endgame? Are there programmed differences in how Crafty searches in
different phases because of the different weights of the factors?

AB

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Steve Maughan <maughanD...@bigfoot.com> wrote:
: My guess would be


: Hiarcs 9

: Steve Maughan


Now that you've done that, do you have rebel? If so, run rebel and
crafty on the _same_ machine and tell me what you conclude...

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Komputer Korner <kor...@netcom.ca> wrote:
: Is there any reason to not believe that the amount of nodes remains
: constant from ply depth to ply depth? It seems that the whole range of
: ply depth would be a constant. In other words at depth 20 is it still
: a factor of 3?

So far as I know, yes. It is possible that if you search to extreme
depths, you might overrun the hash table badly if you don't have enough
memory, and then you might see that factor of 3 ramp up to something
bigger. But not because of the depth. In fact, there is some evidence
to suggest that with proper hashing, the deeper you go the faster you
go (but not by a big amount) because the deeper you go the more likely
you are to find ways to transpose moves to reach the same position, and
improve hash hits...
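The hashing point above can be illustrated with a minimal transposition-table sketch. The `(depth, score)` entry layout and the depth-preferred replacement rule here are illustrative assumptions, not Crafty's actual scheme:

```python
# Toy transposition table: maps a position hash key to (depth, score).
# A deeper search finds more move-order transpositions into positions
# already stored here, so probes hit more often.
table = {}

def store(key, depth, score):
    """Keep the result searched to the greatest depth for each position."""
    entry = table.get(key)
    if entry is None or depth >= entry[0]:
        table[key] = (depth, score)

def probe(key, depth):
    """Reuse a stored score only if it was searched at least `depth` deep."""
    entry = table.get(key)
    if entry is not None and entry[0] >= depth:
        return entry[1]
    return None

store(0xABC, 10, 25)     # position reached via one move order
print(probe(0xABC, 8))   # reached again via a transposition: hit
print(probe(0xABC, 12))  # a deeper result is required: miss
```

Note also the memory caveat from the post: once the tree badly overruns a fixed-size table, entries get evicted before they can be reused, and the effective per-ply cost creeps up.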



Steve Maughan

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
>
>: Hiarcs 9
>
>: Rebel 7
>: Crafty 5
>: Fritz 4
>
>: Steve Maughan
>
>
>Now that you've done that, do you have rebel? If so, run rebel and
>crafty on the _same_ machine and tell me what you conclude...


FWIW this was my logic.

I was not basing my estimates on NPS alone. Clearly HIARCS is totally
knowledge-based, and I gave it 9. I gave Rebel a score of 7, higher than
Crafty, due to Rebel being a more mature product (~7 years in development) and
the fact that it is written in assembler. Fritz, well, I don't think it
could be as strong as it clearly is without some reasonable amount of
knowledge.

Steve Maughan


Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Alan B <brat...@uc.edu> wrote:
: On 5 Jan 1999 14:11:38 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
: wrote:

:>mongrel <car...@bellsouth.net> wrote:
:>: Robert Hyatt wrote:
:>
:>:<snip><snip>
:>
:>
:>Good answer. Based on NPS, here is what I would say:
:>
:>hiarcs 10
:>
:>crafty 5
:>
:>rebel 4
:>
:>fritz 1
:>
:>
:>And I haven't tried Rebel 10, so I don't know how it compares, although
:>someone could run the same position on both it and crafty to get a number
:>for each on the same hardware. the numbers might be 10,6,4,1 or something
:>close. Doesn't mean crafty is better than rebel because I don't believe
:>it is... but it does mean that whatever Rebel is doing, it is not doing it
:>because it is a super-smart program. Knowledge is definitely inversely
:>proportional to NPS.

: Out of curiosity, where would you put MChess (latest version you would
: feel comfortable evaluating)? Also I'm really curious about hw "smart"
: the successive versions of Nimzo were from 3.5 to 98 to 99, since I
: vaguely recall reading a quote (which may have been wrong in the first
: place) from Dr. Donninger about taking knowledge out of 35 to make 98
: faster, and about deciding what knowledge to remove. (Actually I'm
: dying to know where you'd rate CS TAl too but I don't want to start
: another flame sequence).

Mchess is in the same ballpark with Hiarcs IMHO. Different style,
totally, as Marty has always produced programs that play actively, like
the old Novag-type programs of Dave Kittinger's.

From numbers I have "heard" but not "seen" I'd put nimzo down at the
bottom of the "smarts" heap in the same ballpark as Fritz/Junior. All
three seem to be _very_ fast. And all three make the same kinds of
moves that lead me to believe that the endpoint evaluation is very slim,
while the piece/square stuff (and perhaps some incremental stuff) take
up the slack.

: Do you think that in different phases of the game "smartness" or speed
: are more or less significant? Or is it a constant from opening to
: endgame? Are there programmed differences in how Crafty searches in
: different phases because of the different weights of the factors?

Yes, there are differences. I turn things on/off depending on what is going
on. I'd assume most do this as well, if they have something that should or
could be enabled/disabled. Everyone is commenting on how Crafty plays
endgames, yet it is getting by with what I'd call fairly elementary code
for this part of the game. It understands passed pawns and weak pawns, of
course, and it understands that advancing passers is good, it understands
centralizing the king, and it understands "distant passed pawns" and that
they become stronger as material comes off. IE it understands enough to
play reasonably well in most endings, enough to outplay GM's many times in
endings, but not enough that it never makes stupid mistakes. All programs
will look foolish at times.


: AB

Komputer Korner

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Then chess is truly in danger even if we don't get around the memory
limitations. Speed alone will kill it. We might see this in our
lifetimes.

--
--
Komputer Korner
The inkompetent komputer

To send email take the 1 out of my address. My email address is
kor...@netcom.ca but take the 1 out before sending the email.

Robert Hyatt wrote in message <76t7vd$lep$4...@juniper.cis.uab.edu>...

Don Getkey

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to

In article <76s4i9$cn2$1...@juniper.cis.uab.edu>, Robert Hyatt
<hy...@crafty.cis.uab.edu> writes:

>Never said "dumber/faster" would beat "smarter". I said that a smarter
>program appears to continue to benefit from additional plies. When you take
>the current crop of commercial programs, as an example, lets say smart is
>"10" on the scale, and "fast/dumb" is 1 on the scale. Pick a number for
>Crafty and for Rebel, and lets compare notes. You might be surprised...
>
>ie fill in the following:
>
>program "rank"

>Crafty __7___
>Hiarcs __9___
>Rebel __8___
>Fritz __6___


>
>and all you have to supply is a number between 1 and 10 for each one..
>based only on how "smart" you think they are...
>
>Then I have some data I'll share..
>

Cool.

Dann Corbit

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Komputer Korner <kor...@netcom.ca> wrote in message
news:xZtk2.5463$7l6....@tor-nn1.netcom.ca...

>Then chess is truly in danger even if we don't get around the memory
>limitations. Speed alone will kill it. We might see this in our
>lifetimes.

Rather than seeing this sort of thing as danger, I see it as opportunity.
How often does a revolutionary advancement happen in a game such as chess?
The use of computers may open new and unexplored avenues. There is no
danger whatsoever that chess will be "solved" since there will *always* be
unexplored combinations. What exactly is the perceived danger? That chess
machines will become unbeatable? Certainly this is not true if they are
facing each other, since both cannot always win. Is the danger that chess
machines will always be able to beat every human opponent? I do not see why
this should be the case either. The research or new information found by
the computers will also be available to humans. And new chess prodigies may
come along who are ten times better than the best the world has ever seen.
In any case, I love to watch an epic battle, whether it is between Kasparov
and Anand or Rebel and Fritz (both are out of my league, after all). So
even if worst comes to worst and somehow a new discovery is made and
computers can beat every human on the planet 100% of the time, we could
still watch them play each other. And both human-versus-human and
computer-versus-computer chess would remain interesting.

Look at the calculation of pi. Up until the mid-1900s, all calculations
were made by hand. From the 1960s onward, no human could match the
ability of computers to compute this number. Has that killed the interest
in or research into finding out more about the number pi? Not at
all. It has only increased our ability to study new and fundamentally
different ways to find the best results.

What about Mersenne primes? The same is true for the calculation of
these interesting prime numbers. In the last few decades all computations
have been by machine. But this has resulted in advancement and even the
discovery of a prime with nearly a million digits. This technology even
has immediately useful side effects: the best pseudo-random number
generators use properties of the Mersenne primes to achieve their
remarkable properties.

Look at fractal calculations. We could do all of them by hand, and
meticulously paint in each dot by hand. Nobody would want to do that when
you can see a million pixels calculated in a few seconds flawlessly by
machine.

Computers don't kill anything. They only enhance.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
Find Stuff: http://www.infoseek.com
Chess Data: ftp://38.168.214.175/pub/


Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Steve Maughan <maughanD...@bigfoot.com> wrote:
:>
:>: Hiarcs 9

: Steve Maughan

Remember, my "scale" had nothing to do with "quality"... just "quantity".

Quite simply, a program that is slower is spending that time doing
something. I tend to call that "smarts". Yes, it is possible to make a
slow dumb program. It is not possible to make a fast smart program however,
unless you have Deep Blue type resources to do it in hardware where it doesn't
hurt your speed.

In every test I have run, the programs come out like this in terms of
speed, on a scale of 1 to 300:

Hiarcs 20

Crafty 60

Rebel 85

Nimzo 230
Fritz 250
Junior 260+

More data would be welcome of course...


As I said, I am a long way from being convinced that _my_ tuning is
very good yet, although it is certainly getting better. I am only looking
at quantity here, measured in NPS. Rebel has been significantly faster than
Crafty for maybe 2 years now with the gap getting wider (this being on equal
machines of course.)

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Don Getkey <dong...@aol.com> wrote:

: In article <76s4i9$cn2$1...@juniper.cis.uab.edu>, Robert Hyatt
: <hy...@crafty.cis.uab.edu> writes:

:>Never said "dumber/faster" would beat "smarter". I said that a smarter
:>program appears to continue to benefit from additional plies. When you take
:>the current crop of commercial programs, as an example, lets say smart is
:>"10" on the scale, and "fast/dumb" is 1 on the scale. Pick a number for
:>Crafty and for Rebel, and lets compare notes. You might be surprised...
:>
:>ie fill in the following:
:>
:>program "rank"
:>Crafty __7___
:>Hiarcs __9___
:>Rebel __8___
:>Fritz __6___
:>
:>and all you have to supply is a number between 1 and 10 for each one..
:>based only on how "smart" you think they are...
:>
:>Then I have some data I'll share..
:>

: Cool.

your numbers above... fritz is *way* faster than crafty. *way* faster.
Rebel is significantly faster than crafty. Hiarcs seems to be about 1/3
the speed of crafty.


: ,::::::<

: ,::/^\"``.
: ,::/, ` •`.
: ,::; | '.
: ;::| \___,-. c)
: ;::| \ '-'
: ;::| \
: ;::| _.=`\
: `;:|.=` _.=`\
: yours in chess,
: Don

: Coon Rapids MN USA

--

Robert Hyatt

unread,
Jan 5, 1999, 3:00:00 AM1/5/99
to
Komputer Korner <kor...@netcom.ca> wrote:
: Then chess is truly in danger even if we don't get around the memory

: limitations. Speed alone will kill it. We might see this in our
: lifetimes.

Just so you realize we won't see "perfect" chess _ever_. So a computer
won't be invincible, but it may well become nearly so. IE I watched Crafty
play GM Kaidanov a couple of nights ago and after 25 games the result was
24 wins and 1 draw (from crafty's point of view.) It still loses an
occasional game to a GM in blitz/bullet, but they are becoming rarer.
A few more years and who knows... 10 years ago what is happening now
was impossible on anything but a supercomputer. 10 years from now your
blender may be able to beat a GM. :)


: --

Tord Kallqvist Romstad

unread,
Jan 6, 1999, 3:00:00 AM1/6/99
to
In article <76tmi5$ooh$1...@juniper.cis.uab.edu>, Robert Hyatt wrote:

>In every test I have run, the programs come out like this in terms of
>speed, on a scale of 1 to 300:
>
>Hiarcs 20
>
>Crafty 60
>
>Rebel 85
>
>Nimzo 230
>Fritz 250
>Junior 260+
>
>More data would be welcome of course...

These numbers are different from what I get on my P6/200 MHz. Crafty and
Rebel are roughly equally fast on my machine. Rebel is usually slightly
faster in the opening, Crafty is slightly faster in the endgame. Fritz
is significantly faster than Junior. My list would look like this:

Chess System Tal 6
Hiarcs 20
The King 20
M-Chess 25
Crafty 70
Rebel 70
W-Chess 1.0 100
Junior 150
Fritz 180

Tord

Robert Hyatt

unread,
Jan 6, 1999, 3:00:00 AM1/6/99
to
Tord Kallqvist Romstad <rom...@janus.uio.no> wrote:

: Tord

Is this Junior 5 or 4.6? I recall Amir posting some numbers on CCC
that had the newest Junior a bit faster than fritz.

As far as Crafty/Rebel, if you are trying the very latest, you may
be right, as someone did some inline assembly that made it 10-15%
faster than without this...

Don Getkey

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to

In article <76s4i9$cn2$1...@juniper.cis.uab.edu>, Robert Hyatt
<hy...@crafty.cis.uab.edu> writes:

>This is always the case. Berliner wrote a paper on this a few years ago
>and played "smart vs dumb" to several depths. But he couldn't go deep
>enough to get data for the 12 ply searches we see today unfortunately.
>
>But back to the original premise... I am not one of those that believe
>in the concept of "tactical sufficiency". IE I believe that a ply is a
>ply, and no matter how deep I can go, another ply will make my program
>better. To date, I haven't seen anything to suggest this is wrong.

I believe this is where my current thinking has led me, i.e., "tactical
sufficiency." Are you also suggesting that this is a dead end? Or simply that
"ply superiority" will always be paramount based on current evidence?

Komputer Korner

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to
Ply superiority will always be paramount; however, at very deep levels
the difference between n and n+1 plies is a lot less significant than at
shallow levels. You never hit a brick wall by getting 1 more ply, but
if it is a choice between getting to the 21st ply or staying at the 20th ply
while adding knowledge, I would go for the knowledge. Faster computers
will take you to the 21st ply and beyond anyway.
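The "faster computers will take you there anyway" argument can be put in rough numbers. Both constants below are assumptions, not measurements: the ~3x cost per ply quoted earlier in the thread, and a hardware doubling every 18 months:

```python
import math

# Back-of-envelope: years of hardware progress needed to buy one extra
# ply "for free", given an assumed per-ply cost factor and an assumed
# Moore's-law doubling period.
COST_PER_PLY = 3.0            # assumed, from the factor-of-3 discussion
DOUBLING_PERIOD_YEARS = 1.5   # assumed doubling period

years_per_ply = DOUBLING_PERIOD_YEARS * math.log2(COST_PER_PLY)
print(f"~{years_per_ply:.1f} years per extra ply")  # ~2.4 years
```

Under those assumptions, hardware alone delivers roughly one additional ply every couple of years, which is the trade Korner is weighing against hand-added knowledge.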

--
--
Komputer Korner
The inkompetent komputer

To send email take the 1 out of my address. My email address is
kor...@netcom.ca but take the 1 out before sending the email.

Don Getkey wrote in message
<19990107001108...@ngol01.aol.com>...

Robert Hyatt

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to
Don Getkey <dong...@aol.com> wrote:

: In article <76s4i9$cn2$1...@juniper.cis.uab.edu>, Robert Hyatt
: <hy...@crafty.cis.uab.edu> writes:

:>This is always the case. Berliner wrote a paper on this a few years ago
:>and played "smart vs dumb" to several depths. But he couldn't go deep
:>enough to get data for the 12 ply searches we see today unfortunately.
:>
:>But back to the original premise... I am not one of those that believe
:>in the concept of "tactical sufficiency". IE I believe that a ply is a
:>ply, and no matter how deep I can go, another ply will make my program
:>better. To date, I haven't seen anything to suggest this is wrong.

: I believe this is where my current thinking has lead me i.e., "tactical
: sufficiency." Are you also suggesting that this is a dead end? Or simply that
: "ply superiority" will always be paramount based on current evidence?


I'm only saying that _I_ don't believe in this idea. IE I have never felt
that there was no use in going a ply deeper. I've played lots of games at
long time controls (IE in Paris, Jakarta, at the Pan American, and so forth)
and have seen several positions where another ply would definitely have helped.

Don Getkey

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to

In article <tzZk2.5776$7l6.1...@tor-nn1.netcom.ca>, "Komputer Korner"
<kor...@netcom.ca> writes:

>Ply superiority will always be paramount however at very deep levels,
>the difference between n and n+1 ply is a lot less significant than at
>shallow levels. You never hit a brick wall by getting 1 more ply, but
>if it is a choice between getting to the 21st ply or the 20th ply
>while adding knowledge, i would go for the knowledge. Faster computers
>will take you to the 21st ply and beyond anyway.
>

This point of interest/division sounds like the fundamental difference between
a computer approach and a more human approach. One side says smart computers
only get smarter the deeper they see; the other side says that knowing what to
do with the plies you see is the question. Humans see very few plies, but
know what to do.

I find it ironic that Bob Hyatt has attempted to create a program geared to
play humans better than a program with no such aspirations, precisely because
of Bob's emphasis on ply depth and speed over knowledge.

It would seem to me that if you wanted to defeat humans, you'd eventually have
to go to their "home field," and do it on their terms. Deep Blue II would seem
to support this in that, as fast as it was (a speed that we will never see in
a PC) and as deep as it searched, DB II would not be favored in a 21-game match
against any of the top 10-20 players in the world. Though it has the depth of
search, it has only a small grasp of what to do with all those billions of
nodes.

I think there is a "wall" out there somewhere that says, "beyond x ply thou
must think like a human to best a human." If not, then I'm afraid all of the
human mystery and esoteric wreaths that have been laid at the feet of Chess for
the past 500+ years were pure vanity.

RDavis101

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to

I don't think you have to go to the human being's home field to defeat humans.

Another way of looking at this comes from an analogy between cognitive
psychology, linguistics, neuropsychology, and AI, which have been in the
process of merging into an overarching discipline, cognitive science.

From a cognitive science perspective, there are ways of representing
information processing heuristics that are independent of the underlying
substratum in which those heuristics are instantiated. Biology versus silicon,
for example.

However, evolution has given human beings certain well-developed cognitive
abilities, while skimping on others. For example, human beings tend to be
highly focused on the possibility of negative outcomes, since these have
survival value. In the modern world, where survival is no longer much of an
issue, we have the nightly news as an example of our evolutionary heritage.

Depending on whether you think the analogy is a good one, human beings probably
have ways of playing chess that draw on ways of representing knowledge and
processing information that evolution has given us.

However, from a cognitive science perspective, this makes the human being a
biased processor. In other words, it is possible that there are ways of playing
chess that have little to do with how humans actually represent and process
information, but are quite successful against humans nevertheless. Since humans
are biased, there are heuristics that can exploit their shortcomings.

Alpha-beta is perhaps one of these. And since chess is mostly tactics, ever
deeper searches have continued to result in increased payoff. So, human beings
have their limitations.
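For anyone following along who hasn't seen it, here is a minimal sketch of the alpha-beta idea in negamax form, run over a hand-made toy game tree (an illustration only, not any real program's search code):

```python
def alphabeta(node, alpha, beta):
    """Negamax alpha-beta over a toy tree: a leaf is a number, an
    interior node is a list of children; sides alternate via negation."""
    if not isinstance(node, list):
        return node
    best = -float("inf")
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent would never allow this line
    return best

# Three root moves, each answered by three replies.  With one level of
# maximizing over one level of minimizing, the root can guarantee 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -float("inf"), float("inf")))  # prints 3
```

The cutoff line is the whole point: once one reply refutes a move, the remaining replies are never examined, which is what makes ever-deeper full-width searches affordable at all.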

But so does silicon. Fritz and Junior search a lot of nodes, but Hiarcs 7
searches fewer nodes than any of the other major programs (someone correct me
if I'm wrong), and has a great shot at getting the #1 position on the SSDF
list. It searches fewer nodes because it has more human chess knowledge built
in.

Each side has its advantages. You've got to play your strengths and patch your
holes if you want to win.

Roger

Robert Hyatt

unread,
Jan 7, 1999, 3:00:00 AM1/7/99
to
Don Getkey <dong...@aol.com> wrote:

: In article <tzZk2.5776$7l6.1...@tor-nn1.netcom.ca>, "Komputer Korner"
: <kor...@netcom.ca> writes:

:>Ply superiority will always be paramount however at very deep levels,
:>the difference between n and n+1 ply is a lot less significant than at
:>shallow levels. You never hit a brick wall by getting 1 more ply, but
:>if it is a choice between getting to the 21st ply or the 20th ply
:>while adding knowledge, i would go for the knowledge. Faster computers
:>will take you to the 21st ply and beyond anyway.
:>

: This point of interest/division sounds like the fundamental difference between
: a computer approach vs a more human approach. One side says smart computers
: only get smarter the deeper they see, and the other side that says, knowing
: what to do with the plys you see is the question. Humans see very few plys, but
: know what to do.

: I find it ironic that Bob Hyatt has attempted to create a program that is
: geared to play humans better than a program that has no such aspirations,
: precisely because of Bob's emphasis on ply depth and speed vs knowledge.

Note that what I do is not the apparent contradiction you think you are
seeing. Nothing wrong with a program getting "smarter". If you look at
Crafty's eval over the past 4 years you'd see that: it is now up
to 50% of the total time, while 4 years ago it was 10%. However, smart
is good. But smart *and* fast is better. And all I'm saying is that a
smart program continues to get better with more depth. I suspect it will
get more from going from (say) 15 to 16 plies than a truly dumb program will
get. But I am carefully jockeying with this ply vs knowledge tradeoff, and
am finding interesting things here and there.

: It would seem to me that if you wanted to defeat humans, you'd eventually have


: to go to their "home field," and do it on their terms? Deep Blue ll would seem
: to support this in that as fast as it was, ( a speed that we will never see in
: a PC) and as deep as it searched, DBll would not be favored in a 21 game match
: against any of the top 10-20 players in the world. That though it has the
: depth of search, it has a small grasp of what to do with all those billions of
: nodes.

Hard to say. How about this? In my AI class I start by asking students to
"name a flower that rhymes with nose". Everyone gets "rose". Then "name a
color that rhymes with tack" and we get black quickly. A discussion then
goes on to conclude that the mind can't possibly have all those different
sounds as linked lists, because our heads would be full of pointers and no
useful data if that were the case.

Then after this, I point out that in spite of how clever the mind is, I
can beat this performance with a good computer any day. Because I'll just
sequentially search through a full dictionary, extracting the phonetics from
each word and _still_ be able to produce "rose" quicker than any human.
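A crude version of that sequential scan is only a few lines; here word endings stand in for real phonetics (a deliberately naive assumption, purely for illustration):

```python
def find_rhyme(words, target, suffix_len=3):
    """Linearly scan a word list and return the first word whose ending
    matches the target's -- a crude stand-in for phonetic matching."""
    ending = target[-suffix_len:]
    for word in words:
        if word != target and word.endswith(ending):
            return word
    return None

flowers = ["daisy", "tulip", "rose", "lily"]
print(find_rhyme(flowers, "nose"))  # prints "rose": shares the "ose" ending
```

A real system would compare phonetic transcriptions rather than spellings, but even this naive linear pass over a full word list finishes far faster than human recall.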

So, do I _have_ to do it like a human? I'm not convinced this is true. I
_do_ have to do something that "appears" to be the way a human does some
things, but appearance can obviously be quite deceiving.

: I think there is a "wall" out there some where, that says , "beyond x ply thou
: must think like a human to best a human." If not, then I'm afraid all of the


: human mystery and esoteric wreaths that have been laid at the feet of Chess for
: the past 500+ years, was pure vanity.


: ,::::::<
: ,::/^\"``.
: ,::/, ` 描.
: ,::; | '.
: ;::| \___,-. c)
: ;::| \ '-'
: ;::| \
: ;::| _.=`\
: `;:|.=` _.=`\
: yours in chess,
: Don

: Coon Rapids MN USA

--

Don Getkey

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to

In article <772rov$1dd$1...@juniper.cis.uab.edu>, Robert Hyatt
<hy...@crafty.cis.uab.edu> writes:

Your examples are fine, and illustrate your point(s) well Bob. Thanks for your
courteous input.

It all leaves me wondering: if you don't have to "do it (chess) like a human"
to completely and consistently defeat the very best humans (someday), then do
you see a day when the present programming strategies with enough speed will
overcome man? Personally, I don't.

I can remember several predictions from as far back as 1989 that said man's
superiority at chess would be gone in 5 years. I can recall people saying,
"wait till we get our hands on that new DX4 486/100mhz (and then the Pentium
66Mhz), GM's won't have a chance!" Today we have Quad Pll450's, and
programmers are still unable to legitimately challenge GM's to match play at
tournament time controls. Funny thing is, we don't hear too many "5 year"
predictions anymore, even in these days of fantastic speeds. Hmmm.

Call me a romantic, a true believer, a keeper of the flame, but I think man's
intimate grasp of this ancient game is always going to be a ply or two ahead of
the deepest searching computers for some time to come. That is, unless someone
comes up with a uniquely human methodology, which I will submit might require
Deep Blue like speed.

Don Getkey

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to

In article <19990107131249...@ng15.aol.com>, rdav...@aol.com
(RDavis101) writes:

>I don't think you have to go to the human being's home field to defeat
>humans.


Depends on what you mean? Certainly computers have defeated humans at all
levels at one time control or another, this is true. But a single game here
and there is a much different thing than being fully superior in match play.


>
>Another way of looking at this comes from an analogy between cognitive
>psychology, linguistics, neuropsychology, and AI, which have been in the
>process of merging into an overarching discipline, cognitive science.
>
>From a cognitive science perspective, there are ways of representing
>information processing heuristics that are independent on the underlying
>substratum in which those heuristics are instantiated. Biology versus
>silicon,
>for example.
>
>However, evolution has given human beings certain well-developed cognitive
>abilities, while skimping on others. For example, human beings tend to be
>highly focused on the possibility of negative outcomes, since these have
>survival value. In the modern world, where survival is no longer much of an
>issue, we have the nightly news as an example of our evolutionary heritage.


I understand what you are saying, I just don't like the way you are saying it.
;-)
Your use of the "E" word presumes I/"we" agree with the theory of evolution.
Personally I don't. Evolution as an explanation for all things linear is
rather conveniently simplistic, especially when you can easily insert the
non-synonymous word "adaptation" as its replacement. To me, adaptation is what
is so often mistakenly used as evidence for the theory of evolution.

I also tend to disagree with the thought, "In the modern world, where survival
is no longer much of an issue." Physical survival in terms of predator vs prey
is of course in the past (somewhat), but the need to survive in modern times is
no less serious.

>
>Depending on whether you think the analogy is a good one, human beings
>probably
>have ways of playing chess that draw on ways of representing knowledge and
>processing information that evolution has given us.


Personifying evolution begins the process of creating a kind of deity equal to
any other. I'd rather not go there. If anything has been "given" to us I
prefer it to be a real person/God, and not a theory.

>
>However, from a cognitive science perspective, this makes the human being a
>biased processor. In other words, it is possible that there are ways of
>playing
>chess that little to do with how human actually represent and process
>information, but are quite successful against humans nevertheless. Since
>humans
>are biases, there are heuristics that can exploit their shortcomings.

Yes, I believe this is true, it's just that it has not been demonstrated to
date.

>
>Alpha-beta is perhaps one of these. And since chess is mostly tactics, ever
>deeper searches have continued to result in increased payoff. So, human being
>have their limitations.
>
>But so does silicon. Fritz and Junior search a lot of nodes, but Hiarcs 7
>searches fewer nodes than any of the other major programs (someone correct me
>if I'm wrong), and has a great shot at getting the #1 position on the SSDF
>list. It searches fewer nodes because it has more human chess knowledge built
>in.
>
>Each side has its advantages. You've got to play your strengths and patch
>your
>holes if you want to win.
>
>Roger
>

Again, it appears that the present methods coupled with the fastest machines
have yet to show their dominance at our grand game. My opinion is that
something more is needed, something more human-like.

Robert Hyatt

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
Don Getkey <dong...@aol.com> wrote:

: In article <772rov$1dd$1...@juniper.cis.uab.edu>, Robert Hyatt
: <hy...@crafty.cis.uab.edu> writes:

These discussions are fun. No arguing. No rancor. Just a discussion. And
that's the way things ought to be, whether we agree or not. Here we are
very likely pretty close to agreement anyway, except for the future where we
are both 'guessing' anyway...


: It all leaves me wondering, if you don't have to "do it (chess) like a human,"


: to completely and consistently defeat the very best humans (someday), then do
: you see a day when the present programming strategies with enough speed will
: over come man? Personally, I don't.

Can you spell deep blue? :) Seriously, they are _very_ strong. I am now
getting closer and closer to an average of 1M nodes per second on fairly
cheap hardware. How long before I can hit 250M? hard to say. What will
crafty be capable of when it gets there? Again, unknown. But clearly
speed can help. And clever eval and search tricks can shorten the time
needed to do some of these things... But one day someone will be able to
spend a couple of months getting a chess program up and going and just
blow all humans away, because machines _will_ get that fast one day...

I spent the early 70's doing demand paging stuff for operating systems.
Now we don't do much of that any more because memory is dirt-cheap. Makes
you wonder about other things that we 'chess programmers' spend so much
time doing today (because of hardware limitations) and how they too might
become obsolete in the future...


: I can remember several predictions from as far back as 1989 that said man's


: superiority at chess would be gone in 5 years. I can recall people saying,
: "wait till we get our hands on that new DX4 486/100mhz (and then the Pentium
: 66Mhz), GM's won't have a chance!" Today we have Quad Pll450's, and
: programmers are still unable to legitimately challenge GM's to match play at
: tournament time controls. Funny thing is, we don't hear too many "5 year"
: predictions anymore, even in these days of fantastic speeds. Hmmm.

I never bought that, and really thought it would be longer than what most
thought was necessary to even beat the human world champion in a match. But
that's come and gone. So we may argue about "when", but I doubt we would
have to argue about "if"...


: Call me a romantic, a true believer, a keeper of the flame, but, I think mans


: intimate grasp of this ancient game is always going to be a ply or two ahead of
: the deepest searching computers for some time to come. That is unless someone
: comes up with a uniquely human methodology, which I will submit might require
: Deep Blue like speed.

so long as you add "for some time to come" I am in agreement. How long? No
idea. 100 years? Hardly. 20? maybe. 5? also doubtful. But I bet in
20 years humans will be hard-pressed to beat computers. I'll bet in 50 years
they will be hard-pressed to beat my blender. :)

Vincent Diepeveen

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
On Tue, 5 Jan 1999 03:37:46 -0500, "Komputer Korner"
<kor...@netcom.ca> wrote:

>Is there any reason to not believe that the amount of nodes remains
>constant from ply depth to ply depth? It seems that the whole range of
>ply depth would be a constant. In other words at depth 20 is it still
>a factor of 3?

There is one main reason why searching deeper
reduces the branching factor:

The deeper you search, the more pieces come off the board;
the fewer pieces on the board, the fewer possibilities;
the fewer possibilities, the smaller the branching factor.

Also, as more pieces come off the board, there are fewer distinct
positions left, so more of the lines you search transpose into the
same position.

I'm using an 8-probe hash table in DIEP, and have already considered
rewriting it to 16-probe. Hashtables kick butt at huge depths. They really
give a lot of extra plies.
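Taking "N-probe" to mean examining N consecutive table slots before giving up or replacing an entry (an assumption on my part; implementations differ, and this is not DIEP's actual code), a minimal transposition-table sketch looks like this:

```python
class ProbeTable:
    """A toy N-probe transposition table: a position's hash key selects a
    home slot, and store/lookup scan up to `probes` consecutive slots."""
    def __init__(self, size=1024, probes=8):
        self.size = size
        self.probes = probes
        self.slots = [None] * size  # each slot: (key, depth, score) or None

    def store(self, key, depth, score):
        home = key % self.size
        for i in range(self.probes):
            slot = (home + i) % self.size
            entry = self.slots[slot]
            # reuse an empty slot, or the slot already holding this position
            if entry is None or entry[0] == key:
                self.slots[slot] = (key, depth, score)
                return
        # all probed slots busy: overwrite the shallowest (cheapest) entry
        victim = min(range(self.probes),
                     key=lambda i: self.slots[(home + i) % self.size][1])
        self.slots[(home + victim) % self.size] = (key, depth, score)

    def lookup(self, key):
        home = key % self.size
        for i in range(self.probes):
            entry = self.slots[(home + i) % self.size]
            if entry is not None and entry[0] == key:
                return entry
        return None

table = ProbeTable()
key = 0x3A5C9F1E27B4D680  # stand-in for a Zobrist hash of some position
table.store(key, depth=12, score=35)
print(table.lookup(key))  # recovers (key, 12, 35) without re-searching
```

The payoff at big depths comes from transpositions: deep searches reach the same position through many different move orders, and a successful probe replaces an entire subtree search.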

Greetings,
Vincent


>--
>--
>Komputer Korner
>The inkompetent komputer
>
>To send email take the 1 out of my address. My email address is
>kor...@netcom.ca but take the 1 out before sending the email.

>Robert Hyatt wrote in message <76r123$4fr$1...@juniper.cis.uab.edu>...
>
>>I'm not quite sure what you are saying, but a PII/350 is definitely at a
>>handicap when playing a dual 500mhz machine. We are talking a factor of 3
>>in computing which is pretty much one more ply. And one ply does make a
>>difference. IE ask the folks that play Crafty on ICC using their 400-500
>>mhz machines what a difference the 2x faster quad xeon made compared to the
>>quad P6/200 I was using. The results are markedly different.
>>
>>Note that the "chess is exponential" is really "chess is exponential
>>with respect to depth". But each ply does have a linear cost impact over
>>the previous ply. It is just that this factor of 3 becomes messy when
>>you compare the cost of 10 plies to 11 plies (factor of 3 more work) and
>>then compare the cost of 10 plies to 12 plies (factor of 9 more work).
>>

Don Getkey

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to

In article <773ng1$7a2$1...@juniper.cis.uab.edu>, Robert Hyatt
<hy...@crafty.cis.uab.edu> writes:

>These discussions are fun. No arguing. No rancor. Just a discussion. And
>that's the way things ought to be, whether we agree or not. Here we are
>very likely pretty close to agreement anyway, except for the future where we
>are both 'guessing' anyway...
>

Yes Bob, it is truly refreshing to explore a topic or two without defensive
interjections and insults. I have a great deal more fun earnestly teasing out
the truth of a matter with folks who are willing to do the same. It gives
everyone a chance to see the other person's side.


>
>: It all leaves me wondering, if you don't have to "do it (chess) like a human,"
>: to completely and consistently defeat the very best humans (someday), then do
>: you see a day when the present programming strategies with enough speed will
>: over come man? Personally, I don't.
>
>Can you spell deep blue? :) Seriously, they are _very_ strong. I am now
>getting closer and closer to an average of 1M nodes per second on fairly
>cheap hardware. How long before I can hit 250M? hard to say. What will
>crafty be capable of when it gets there? Again, unknown. But clearly
>speed can help. And clever eval and search tricks can shorten the time
>needed to do some of these things... But one day someone will be able to
>spend a couple of months getting a chess program up and going and just
>blow all humans away, because machines _will_ get that fast one day...
>

I wish Hsu and company would come out and attempt a follow-up rematch. A real
match, not one so limited that the odds favor inconsistent outcomes.


>I spent the early 70's doing demand paging stuff for operating systems.
>Now we don't do much of that any more because memory is dirt-cheap. Makes
>you wonder about other things that us 'chess programmers' spend so much
>time doing today (because of hardware limitations) and how they too might
>become obsolete in the future...
>
>
>: I can remember several predictions from as far back as 1989 that said man's
>: superiority at chess would be gone in 5 years. I can recall people saying,
>: "wait till we get our hands on that new DX4 486/100mhz (and then the
>Pentium
>: 66Mhz), GM's won't have a chance!" Today we have Quad Pll450's, and
>: programmers are still unable to legitimately challenge GM's to match play
>at
>: tournament time controls. Funny thing is, we don't hear too many "5 year"
>: predictions anymore, even in these days of fantastic speeds. Hmmm.
>
>I never bought that, and really thought it would be longer than what most
>thought was necessary to even beat the human world champion in a match. But
>that's come and gone. So we may argue about "when", but I doubt we would
>have to argue about "if"...


You know that really hurts. Kasparov, the Bill Clinton of chess. How could he
let us down like that!!!???


>
>
>: Call me a romantic, a true believer, a keeper of the flame, but, I think mans
>: intimate grasp of this ancient game is always going to be a ply or two ahead of
>: the deepest searching computers for some time to come. That is unless someone
>: comes up with a uniquely human methodology, which I will submit might require
>: Deep Blue like speed.
>
>so long as you add "for some time to come" I am in agreement. How long? No
>idea. 100 years? Hardly. 20? maybe. 5? also doubtful. But I bet in
>20 years humans will be hard-pressed to beat computers. I'll bet in 50 years
>they will be hard-pressed to beat my blender. :)
>
>

LOL! :-))) May I never live to see the day Westinghouse liquefies the world
champion.

Dan Kirkland

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
[many deletions...]

In article <19990108004839...@ngol01.aol.com>,
dong...@aol.com (Don Getkey) writes:
>
>In article <773ng1$7a2$1...@juniper.cis.uab.edu>, Robert Hyatt
><hy...@crafty.cis.uab.edu> writes:

>>Can you spell deep blue? :) Seriously, they are _very_ strong. I am now
>>getting closer and closer to an average of 1M nodes per second on fairly
>>cheap hardware. How long before I can hit 250M? hard to say. What will
>>crafty be capable of when it gets there? Again, unknown. But clearly
>>speed can help. And clever eval and search tricks can shorten the time
>>needed to do some of these things... But one day someone will be able to
>>spend a couple of months getting a chess program up and going and just
>>blow all humans away, because machines _will_ get that fast one day...


It's too bad that most chess programmers just look at the speed
aspect of chess programming..., because there IS something more
than just speed!!!


>I wish Hsu and company would come out and attempt a follow up re-match. A real
>match, not one so limited in which the odds favor inconsistant outcomes.


Well, Hsu likely wouldn't mind, but IBM is afraid that they would LOSE!


>>: I can remember several predictions from as far back as 1989 that said man's
>>: superiority at chess would be gone in 5 years. I can recall people saying,
>>: "wait till we get our hands on that new DX4 486/100mhz (and then the
>>Pentium
>>: 66Mhz), GM's won't have a chance!" Today we have Quad Pll450's, and
>>: programmers are still unable to legitimately challenge GM's to match play
>>at
>>: tournament time controls. Funny thing is, we don't hear too many "5 year"
>>: predictions anymore, even in these days of fantastic speeds. Hmmm.


Okay, I will make one...
Give me a GOOD chess programmer, and I WILL do it in LESS than TWO
years! (Yes, there is a condition, I DO need the help of a good
chess programmer.)

Sadly, none of the better chess programmers will give it a second
thought. Instead they make comments like 'maybe after you have
been programming for 30 plus years...' or such...
It seems they think that if they haven't found a way in all that
time, then no one else can.
BUT THEY ARE WRONG!


>>I never bought that, and really thought it would be longer than what most
>>thought was necessary to even beat the human world champion in a match. But
>>that's come and gone. So we may argue about "when", but I doubt we would
>>have to argue about "if"...


The when is here and now! I just need one of you great chess
programmers to take a chance on me...? Anybody?

(Bob? Bruce?)


>>: Call me a romantic, a true believer, a keeper of the flame, but, I think mans
>>: intimate grasp of this ancient game is always going to be a ply or two ahead of
>>: the deepest searching computers for some time to come. That is unless someone
>>: comes up with a uniquely human methodology, which I will submit might require
>>: Deep Blue like speed.
>>
>>so long as you add "for some time to come" I am in agreement. How long? No
>>idea. 100 years? Hardly. 20? maybe. 5? also doubtful. But I bet in
>>20 years humans will be hard-pressed to beat computers. I'll bet in 50 years
>>they will be hard-pressed to beat my blender. :)
>>
>>
>

>LOL! :-))) May I never live to see the day Westinghouse liquefies the world
>champion.


Okay, most will likely think this is a joke or something...
But it is NOT! I last mentioned that I had some ideas a couple
years ago... And all I got in return was some snide remarks.
So I thought I would just write a chess program on my own.
But I cannot even come up with a good computer (and my poor
little HP48 is just too slow). I guess I'm more interested in
spending my small disability payments on music than trying to
save up for a reasonable computer. And my programming skills
are pretty much limited to the HP48. I guess I am also somewhat
lacking in the motivation department... :(

So, I am looking for some help!

Did I mention that I have some IDEAS for making a truly BETTER
chess program? ;)

I first said something about this here on this newsgroup a couple
years ago. And while I had only been researching chess programming
a couple of years at that time, I really felt I had something new.
Now a couple more years have gone by, and I know chess programming
quite a bit better than I did... And I have played a bit with
a chess program I have been writing on my HP48...
And I am more sure than ever that I have some ideas that will
make for a truly BETTER chess program! Something that will surely
be REVOLUTIONARY even by Bob's definition (whatever that may be)!

I should mention that I want to make some money from these ideas.
(I really need to better my station in life.) And I really need
the help of a good chess programmer (but also someone that can
accept that I may actually know a thing or two about chess
programming!). Also, I cannot really talk about the ideas and
methods without giving them away, and I'm NOT about to just
give them away!

Now I don't really know how to go about this...
I am NOT expecting any money up front or such. But I need a
way to work with someone without them stealing my ideas.
Also, I would need to work directly with the programmer.
(I am NOT willing to do this project over the phone or internet!)
I am looking for ideas on how to handle this (sorry Bob, but
there is NO WAY I would be willing to let you try my ideas with
just a promise that you would keep mum. While you seem reasonable,
I am just not that trusting...)

So...

WHO IS WILLING TO HELP ME TAKE CHESS PROGRAMMING TO THE NEXT LEVEL???

Hope to hear from you...
dan (kirk...@ee.utah.edu)

Steve Maughan

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
>
>Call me a romantic, a true believer, a keeper of the flame, but, I think mans
>intimate grasp of this ancient game is always going to be a ply or two ahead of
>the deepest searching computers for some time to come. That is unless someone
>comes up with a uniquely human methodology, which I will submit might require
>Deep Blue like speed.
>


I agree. On a more philosophic note - I think this illustrates how much
more advanced human thought is over computers. I remember seeing a BBC
documentary in the 80s claiming that computer AI would dominate world
thought by ~2010 - i.e. computers would be able to 'think' in general terms
much better than humans. One thing that computer chess makes very plain is
that if computers cannot master a purely deterministic game with a minute
universe of 64 squares and 32 particles then they are more than a little way
away from dominating world thought!

Steve Maughan

Robert Hyatt

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
Dan Kirkland <kirk...@ee.utah.edu> wrote:
: [many deletions...]

That's easy to say. *MUCH* harder to prove. This implies that there must be
some magic bullet that can put this away quite quickly. I know the folks
that have been working on this for a long time, and contrary to popular
opinion, there are some *bright* folks in that group. They have done
nothing exceptional other than create a major operating system that
is used world-wide, design a piece of hardware that goes faster than
anyone thought possible, and build endgame databases that perfectly solve
endings that humans misunderstood for hundreds of years... etc...


:>>I never bought that, and really thought it would be longer than what most


:>>thought was necessary to even beat the human world champion in a match. But
:>>that's come and gone. So we may argue about "when", but I doubt we would
:>>have to argue about "if"...


: The when is here and now! I just need one of you great chess
: programmers to take a chance on me...? Anybody?

: (Bob? Bruce?)

I'm waiting for the 'punch line' here.

:)


:>>: Call me a romantic, a true believer, a keeper of the flame, but, I think

: So...

--

mshoe...@my-dejanews.com

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
I thought I'd mention they said the ENIAC or EDSAC (I forget which one) could
play perfect chess too, back in the 50s.

It's still pretty amazing how far computers have come in chess. Humans have
been playing chess for hundreds of years; compare that to computers. It's only
been 20 years since a computer beat a grandmaster for the first time, and now
they pose a threat to human dominance in regular chess, and have pretty much
taken over speed chess. IMHO, they aren't slowing down their advance either.

What fascinates me is the games that computers don't grasp so well, like
Bughouse (siamese) chess, and Go. It should be interesting to see if/when
they make a breakthrough and become dominant there too.

matt


In article <19990107193321...@ngol08.aol.com>,
dong...@aol.com (Don Getkey) wrote:
>
> In article <772rov$1dd$1...@juniper.cis.uab.edu>, Robert Hyatt

> It all leaves me wondering, if you don't have to "do it (chess) like a human,"
> to completely and consistently defeat the very best humans (someday), then do
> you see a day when the present programming strategies with enough speed will
> over come man? Personally, I don't.
>

> I can remember several predictions from as far back as 1989 that said man's
> superiority at chess would be gone in 5 years. I can recall people saying,
> "wait till we get our hands on that new DX4 486/100mhz (and then the Pentium
> 66Mhz), GM's won't have a chance!" Today we have Quad Pll450's, and
> programmers are still unable to legitimately challenge GM's to match play at
> tournament time controls. Funny thing is, we don't hear too many "5 year"
> predictions anymore, even in these days of fantastic speeds. Hmmm.
>

> Call me a romantic, a true believer, a keeper of the flame, but, I think mans
> intimate grasp of this ancient game is always going to be a ply or two ahead of
> the deepest searching computers for some time to come. That is unless someone
> comes up with a uniquely human methodology, which I will submit might require
> Deep Blue like speed.
>

> ,::::::<
> ,::/^\"``.
> ,::/, ` 描.
> ,::; | '.
> ;::| \___,-. c)
> ;::| \ '-'
> ;::| \
> ;::| _.=`\
> `;:|.=` _.=`\
> yours in chess,
> Don
>
> Coon Rapids MN USA
>

Matt

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

mshoe...@my-dejanews.com

unread,
Jan 8, 1999, 3:00:00 AM1/8/99
to
In article <36967895...@news.xs4all.nl>,

di...@xs4all.nl (Vincent Diepeveen) wrote:
> On Tue, 5 Jan 1999 03:37:46 -0500, "Komputer Korner"
> <kor...@netcom.ca> wrote:
>
> >Is there any reason to not believe that the amount of nodes remains
> >constant from ply depth to ply depth? It seems that the whole range of
> >ply depth would be a constant. In other words at depth 20 is it still
> >a factor of 3?
>
> There is one main reason why searching deeper
> reduces the branching factor:
>
> The deeper you search, the more pieces come off the board;
> the more pieces come off the board, the fewer possibilities;
> the fewer possibilities, the smaller the branching factor.
>
> Also, as more pieces come off the board, there are fewer distinct
> positions left, so the more transpositions there are into any given
> position.
>
> I'm using 8 probe in DIEP. I already considered rewriting it to 16
> probe. Hashtables kick butt at huge depths. They really give a lot of
> plies extra.

Sorry for my ignorance, but what are "8 probe" and "16 probe"? I've written a
very simple chess program, and am relatively familiar with the ideas behind
chess algorithms, heuristics, hash tables, etc., but I have never heard of
this.
Thanks for the help :)

> >>--
> >>Robert Hyatt Computer and Information Sciences
> >>hy...@cis.uab.edu University of Alabama at Birmingham
> >>(205) 934-2213 115A Campbell Hall, UAB Station
> >>(205) 934-5473 FAX Birmingham, AL 35294-1170
> >
> >
>
>

Matt

Bradlee Johnson

Jan 8, 1999
Roger,

Thanks for this great synopsis. Indeed, human classification systems
for knowledge reflect the bias you speak of. For example, we classify
an opening system as "The English" or "The Queen's Gambit Declined,"
and when one crosses into the historical territory of the other, we
say it has "transposed." Well, only in a historical, human
perspective. There isn't anything organic that makes the one an
English and the other a QGD at the point where they reach a tabiya.
For humans, however, that crossing from one opening to another becomes
a real problem due to our methods of classification.

A system (such as Hiarcs?) that relies heavily on games and openings
that stem from human understanding and classifications seems destined
for a dead end. It plays into the strengths and weaknesses of humans
and not those of the machine. I've often thought that the best way to
create an opening book for a computer application would be to let it
play and learn without any access to opening books. Yes, that would be
painful at first, but the opening infrastructure it creates would be
far more reflective of what the machine/software does well, maximizing
its strengths while minimizing its weaknesses. What kind of openings
and infrastructures would result? Dunno. It is unlikely that we would
replicate human history here, e.g. the Romantic era of sacrificing
material for a gain of tempo in order to launch an attack. While our
evolutionary imperative may have been survival instinct, we also have
an imperative that provides a hunting instinct which aims for the
throat; hence the gambits of yore. But what is the imperative that
arises from the heuristics and algorithms of our machines?

Bradlee

>I don't think you have to go to the human being's home field to defeat humans.
>

>Another way of looking at this comes from an analogy between cognitive
>psychology, linguistics, neuropsychology, and AI, which have been in the
>process of merging into an overarching discipline, cognitive science.
>
>From a cognitive science perspective, there are ways of representing
>information processing heuristics that are independent of the underlying
>substratum in which those heuristics are instantiated. Biology versus silicon,
>for example.
>
>However, evolution has given human beings certain well-developed cognitive
>abilities, while skimping on others. For example, human beings tend to be
>highly focused on the possibility of negative outcomes, since these have
>survival value. In the modern world, where survival is no longer much of an
>issue, we have the nightly news as an example of our evolutionary heritage.
>

>Depending on whether you think the analogy is a good one, human beings probably
>have ways of playing chess that draw on ways of representing knowledge and
>processing information that evolution has given us.
>

>However, from a cognitive science perspective, this makes the human being a
>biased processor. In other words, it is possible that there are ways of playing
>chess that have little to do with how humans actually represent and process
>information, but are quite successful against humans nevertheless. Since humans
>are biased, there are heuristics that can exploit their shortcomings.
>

Robert Hyatt

Jan 8, 1999
mshoe...@my-dejanews.com wrote:
: In article <36967895...@news.xs4all.nl>,

: di...@xs4all.nl (Vincent Diepeveen) wrote:
:> On Tue, 5 Jan 1999 03:37:46 -0500, "Komputer Korner"
:> <kor...@netcom.ca> wrote:
:>
:> >Is there any reason to not believe that the amount of nodes remains
:> >constant from ply depth to ply depth? It seems that the whole range of
:> >ply depth would be a constant. In other words at depth 20 is it still
:> >a factor of 3?
:>
:> There is 1 mainreason why searching deeper
:> reduces branching factor:
:>
:> The deeper you search, the more pieces get from the board,
:> the more pieces get from the board, the fewer possibilities,
:> the fewer possibilities, the smaller the branching factor.
:>
:> Also as there get more pieces from the board, the fewer positions
:> you transpose to, so the more transpositions there are to that
:> position.
:>
:> I'm using 8 probe in DIEP. I already considered rewriting it to 16
:> probe. Hashtables kick butt at huge depths. They really give a lot of
:> plies extra.

: Sorry for my ignorance, but what is "8 probe" and "16 probe" ? I've written a
: very simple chess program, and am relatively familiar with the ideas behind
: chess algorithms and heuristics and hash tables etc. but i have never heard of
: this?
: Thanks for the help :)

When you hash, and want to store a position, where do you store it? If you
hash to one fixed address, you get to overwrite the old, or throw out the
new. If you hash to a "bucket" with N entries, you have a choice of which
of the N entries you are going to replace. It is more expensive in terms of
execution time, but it is more effective when you don't really have enough
memory and have to choose to replace or throw away things very often...

In vincent's case, he is talking about the size of the "bucket". If you
probe the table to produce a set of N entries you can consider replacing,
you are doing an N probe algorithm...

many ways to implement it, from consecutive entry probes to random offset
probes or rehashing (as done in Cray Blitz)...



Komputer Korner

Jan 8, 1999
What this means, then, is that effectively you only need to be able to search
to around 60 moves, or 120 ply, to solve the game, because after that the
plies would be so fast to calculate that they wouldn't really factor into
the equation. Chess as we know it is in danger.

--
--
Komputer Korner
The inkompetent komputer

To send email take the 1 out of my address. My email address is
kor...@netcom.ca but take the 1 out before sending the email.

Vincent Diepeveen wrote in message
<36967895...@news.xs4all.nl>...

Paul Richards

Jan 9, 1999
On 8 Jan 1999 01:37:05 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>: It all leaves me wondering, if you don't have to "do it (chess) like a human,"
>: to completely and consistently defeat the very best humans (someday), then do
>: you see a day when the present programming strategies with enough speed will
>: over come man? Personally, I don't.
>
>Can you spell deep blue? :) Seriously, they are _very_ strong. I am now
>getting closer and closer to an average of 1M nodes per second on fairly
>cheap hardware.

[snip]

and Don Getkey wrote:

>: Call me a romantic, a true believer, a keeper of the flame, but, I think mans
>: intimate grasp of this ancient game is always going to be a ply or two ahead of
>: the deepest searching computers for some time to come. That is unless someone
>: comes up with a uniquely human methodology, which I will submit might require
>: Deep Blue like speed.

And I say...

OK, Deeper Blue defeated Kasparov fair and square (whether he admits
it or not ;) ), but the point is taken that if Kasparov played many
matches with the machine he would quickly discover its weaknesses,
since humans are quite good at adapting (survival pressure again).
But the machine is right there with him. Kasparov admits even his
notebook running Junior 5 is "right there with him". He probably just
means it's a decent opponent, but in the privacy of his own room how
many times does he have to hit the takeback key? :) One slip and
you're toast against ordinary computers these days.

In fact it's an interesting question what the real strength of
computers is vs. humans, because humans are forced to play
differently. If Kasparov thought he were playing a human he would be
tactically more bold, and pay the price against a machine. Perhaps
the real ratings of computer programs are understated as a result.
For example today I played through a "brilliant" game of Tal's. From
a human vs. human standpoint it was a great game. Tal sacrificed
pieces right and left just to open lines, and had a brilliant victory.
The psychology and boldness work against other humans, who can't
calculate with machine perfection. I put the game into Fritz 5.32, and
while Fritz easily made the same "brilliant" moves toward the end (and
in fact the lines were "obvious," as it took mere seconds to find the
pv to quite a depth), after the initial sacrifice Tal had a losing game,
and after a few more moves was at minus 6 with no chance of redemption (I
can post the game and analysis if there are any doubts). Fritz would
have squashed him like a bug, while a human would have fallen for the
same moves as did his opponent.

Getting back to Kasparov vs. Deeper Blue, how would he fare against
Pacific Blue? Frankly I don't know the difference in power between
these machines, but the point is that building crushing hardware is
already possible today. There is no longer any question of man vs.
machine. The new question is when "ordinary" machines will do it.
This year will see 600+ MHz chips, next year into the gigahertz.
Faster system buses and RAM. In terms of software the endgame
databases get bigger and bigger, while Dan Corbit's project will build
the world's biggest opening book and beyond. The only thing left will be
the middle game, if that, where computer tactics are crushing. So
again the question is simply "when", and the best answer is "soon",
or better yet "sooner than you think". :)

The important point is that it does not matter. Human vs. computer
will always be a different animal than human vs human. The human
drama and excitement will always be there because we don't have the
tactical power and precision, because we play on ideas and themes and
emotion. The analysis of the game is already so extensive that
someone with a photographic memory will do well, but the top guys are
all freaks one way or another, so what does it matter? :) Even
Kasparov has to study like a madman. The game will always be
challenging enough for humans, and more exciting. The only way we can
keep pace with machines will be through genetic engineering. Frankly
this is a much more likely and plausible future than computer implants
(who would ever want the 1.0 version of an implant??? ;) ) and this is
another discussion entirely. Likely we will engineer ourselves to be
smarter, and then perhaps the current version of chess WILL be too
easy, and we will crush the computers. Interesting, but not an issue
for the current human or the current game. The bottom line is that
computers are an interesting adjunct, they already crush most and will
soon crush the rest, but it doesn't matter. FWIW I think Stephen
Hawking has the right take on it: the strength of the human mind is
imagination and creativity, and anything that can be described in
simple rules is much better suited to machines.

Paul Richards

Jan 9, 1999
On Fri, 8 Jan 1999 15:19:23 -0500, "Komputer Korner"
<kor...@netcom.ca> wrote:

>Then what this means that effectively you only need to be able to go
>to around 60 moves or 120 ply to solve the game because after that the
>plies would be so fast to calculate that they don't really factor into

>the equation. Chess as we know it is in danger.

I don't think calculating that many plies is in any way realistic. But
I think it's unnecessary to brute-force so many moves, since in a given
position many if not most can be rejected. With the analysis
project going on with Mr. Corbit (based on your suggestion, I
might add :) ), the opening book and beyond will be well known for the
strongest variations, and these will be refined over time as people
play them. Endgame databases will also grow, and computer power will
grow, so that there will be more and more known. As I said in another
thread, though, it really doesn't matter. Suppose the worst-case
scenario, from extensive computer analysis it turns out that 1.e4 is
the best opening move, and 1...c5 is the best response. So what?
Does that mean everybody will play the Sicilian? Of course not.
People can't memorize the variations in the Sicilian to the depths
*currently* known. And if some freak with a super memory can go 30
plies into one or two variations, just play 1.d4. ;) In other words
it will be no different than it is today. The only change is that the
average computer will be unbeatable, which for most of us is also no
change from today. :)

Moritz Berger

Jan 9, 1999
On 5 Jan 1999 18:44:21 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

You claim to have run tests, can you give us any "hard" data?
(CPU, positions (EPD), time controls, nps) ???

>In every test I have run, the programs come out like this in terms of
>speed, on a scale of 1 to 300:
>
>Hiarcs 20
>
>Crafty 60
>
>Rebel 85
>
>Nimzo 230
>Fritz 250
>Junior 260+
>
>More data would be welcome of course...
>
>

>As I said, I am a long way from being convinced that _my_ tuning is
>very good yet, although it is certainly getting better. I am only looking
>at quantity here, measured in NPS. Rebel has been significantly faster than
>Crafty for maybe 2 years now with the gap getting wider (this being on equal
>machines of course.)


Moritz Berger

Jan 9, 1999
On 5 Jan 1999 20:59:44 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>Rebel is significantly faster than crafty.

On 5 Jan 1999 14:11:38 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>Good answer. Based on NPS, here is what I would say:
>
>hiarcs 10
>
>crafty 5
>
>rebel 4
>
>fritz 1
>
>
>And I haven't tried Rebel 10, so I don't know how it compares, although
>someone could run the same position on both it and crafty to get a number
>for each on the same hardware.

I took the 3 Rebel 10 benchmark positions from
http://www.rebel.nl/bench.htm
The advantage of using these positions lies in the solution-time
table for all kinds of CPUs on the same page - just divide the total
number of nodes computed (in the R10: lines below) by the solution
time and you have the NPS figure on any type of machine.

I set both WCrafty16_3.exe (96+16MB HT) and Rebel 10 (60 MB HT) to
fixed time and wrote down total computed nodes (see below). For
reference I also included NPS for Fritz, which doesn't allow a fixed-time
setting, so I tried to quote a node count after a similar amount of
time.

POS 1 Nxf7
r1bqr1k1/pp1n1ppp/5b2/4N1B1/3p3P/8/PPPQ1PP1/2K1RB1R w - - 0 1
C16.3: time=12.47 n=2390435 nps=191694
R10: time=12.00 n=1.562.181 nps=130182
F5.32: time=11.00 n=3.967.000 nps=360636

Ratio Crafty/Rebel: 1.47
Ratio Crafty/Fritz: 0.53

POS 2 Rd7
3R4/1p2kp2/p1b1nN2/6p1/8/6BP/2r1qPPK/Q7 w - - 0 1
C16.3: time=53.20 n=9975249 nps=187504
R10: time=53.00 n=6.690.644 nps=126239
F5.32: time=121 n=61.069.000 nps=504702

Ratio Crafty/Rebel: 1.49
Ratio Crafty/Fritz: 0.37

POS 3 Rxg7
r4rk1/5ppp/p2pbb2/3B3Q/qp2p3/4B3/PPP2P1P/2KR2R1 w - - 0 1
C16.3: time=1:27 n=17942327 nps=205266
R10: time=87.00 n=10.779.851 nps=123906
F5.32: time=90.00 n=35.409.000 nps=393433

Ratio Crafty/Rebel: 1.66
Ratio Crafty/Fritz: 0.52
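In C, the implicit-NPS calculation above is just a division; the figures in the comment are the POS 1 numbers quoted above:

```c
#include <assert.h>

/* Nodes per second from a benchmark run: total nodes / elapsed time.
 * E.g. POS 1 above: nps(2390435, 12.47) is about 191695 for Crafty and
 * nps(1562181, 12.0) about 130182 for Rebel, a ratio of about 1.47. */
static double nps(double nodes, double seconds)
{
    return nodes / seconds;
}
```

Note that this deliberately ignores solution times as a quality measure; it only extracts raw speed.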

> the numbers might be 10,6,4,1 or something
>close. Doesn't mean crafty is better than rebel because I don't believe
>it is... but it does mean that whatever Rebel is doing, it is not doing it
>because it is a super-smart program. Knowledge definitely is proportional to
>NPS.

... Rebel and Fritz are written in assembly language, Crafty in C.
I remember that Ed reported a 25% gain from moving from C to assembly
for his engine. Frans Morsch is known for his programming wizardry; I
think it's safe to assume that his gain from assembly language might
be at least as big.

I also remember that Rebel evaluates the majority of nodes with a
cheaper "lazy" evaluation routine. Rebel's "full" evaluation thus
easily might be several times as complex as Crafty's.
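The "lazy" evaluation idea works roughly like this. A sketch under my own assumptions: the margin value is a made-up tuning constant, and the two term functions are stand-ins for what a real engine computes from the board:

```c
#include <assert.h>

#define LAZY_MARGIN 300  /* centipawns; the margin is a tuning assumption */

/* Stand-in terms for illustration; a real engine computes these
 * from the board (material is typically updated incrementally). */
static int material_balance(void) { return 120; }
static int positional_terms(void) { return -35; }

/* Lazy evaluation: if the cheap estimate is already far outside the
 * alpha-beta window, the expensive positional terms cannot pull the
 * score back inside it, so skip them and return the cheap value. */
int evaluate(int alpha, int beta)
{
    int lazy = material_balance();
    if (lazy + LAZY_MARGIN <= alpha) return lazy;  /* hopelessly below alpha */
    if (lazy - LAZY_MARGIN >= beta)  return lazy;  /* safely above beta */
    return lazy + positional_terms();              /* full evaluation */
}
```

Bob's remark elsewhere that Crafty runs its full evaluation in only 10-20% of calls corresponds to the last line firing rarely.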


Moritz

Enrique Irazoqui

Jan 9, 1999
Moritz Berger wrote in message <369e48dd...@NEWS.SUPERNEWS.COM>...
>On 5 Jan 1999 18:44:21 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
>wrote:
>

>You claim to have run tests, can you give us any "hard" data?
>(CPU, positions (EPD), time controls, nps) ???
>
>>In every test I have run, the programs come out like this in terms of
>>speed, on a scale of 1 to 300:
>>
>>Hiarcs 20
>>
>>Crafty 60
>>
>>Rebel 85
>>
>>Nimzo 230
>>Fritz 250
>>Junior 260+
>>
>>More data would be welcome of course...


If Hiarcs = 20, then:
Mchess = 27
Tiger = 55
Rebel = 74
Genius = 85
Crafty = 108
Nimzo = 171
Junior = 200
Fritz = 217

Enrique

Robert Hyatt

Jan 9, 1999
Komputer Korner <kor...@netcom.ca> wrote:
: Then what this means that effectively you only need to be able to go
: to around 60 moves or 120 ply to solve the game because after that the
: plies would be so fast to calculate that they don't really factor into
: the equation. Chess as we know it is in danger.

Forget it. _If_ the search kept speeding up, then you would be correct.
But the search doesn't quite behave like this. Deeper searches do offer
more transpositions, but you have to store them, which takes a _lot_ of
memory. And you have to reach those depths, which isn't going to happen
in any finite time (i.e. a 60 ply search is _not_ going to be done in our
lifetimes, or our children's lifetimes, or our children's children's
lifetimes, etc.).

Robert Hyatt

Jan 9, 1999
Moritz Berger <Moritz...@email.msn.com> wrote:
: On 5 Jan 1999 20:59:44 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
: wrote:

:>Rebel is significantly faster than crafty.

: On 5 Jan 1999 14:11:38 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
: wrote:

:>Good answer. Based on NPS, here is what I would say:


:>
:>hiarcs 10
:>
:>crafty 5
:>
:>rebel 4
:>
:>fritz 1
:>
:>
:>And I haven't tried Rebel 10, so I don't know how it compares, although
:>someone could run the same position on both it and crafty to get a number
:>for each on the same hardware.

: I took the 3 Rebel 10 benchmark positions from
: http://www.rebel.nl/bench.htm
: The advantage from using this positions lies in the solution times
: table for all kind of CPUs on the same page - just divide solution
: time by total number of nodes computed (in the R10: lines below) and
: you have the NPS figure on any type of machine.

This is a bad test. You can't use "solution" times for test
positions, because some programs are very good at the tactical shots,
while others are not. And this has little to do with how "smart" or
"dumb" a program is, it only has to do with how it searches tactically.

the "NPS" value is still the best estimate...

: I set both WCrafty16_3.exe (96+16MB HT) and Rebel 10 (60 MB HT) to

This I doubt, because crafty also does "lazy" evaluation and only does
the 'full' evaluation in maybe 10-20% of the cases where eval is called.
I don't have rebel 10. My numbers were directly based on Rebel 8 which I
do have...


: Moritz

Robert Hyatt

Jan 9, 1999
Moritz Berger <Moritz...@email.msn.com> wrote:
: On 5 Jan 1999 18:44:21 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
: wrote:

: You claim to have run tests, can you give us any "hard" data?


: (CPU, positions (EPD), time controls, nps) ???

:>In every test I have run, the programs come out like this in terms of
:>speed, on a scale of 1 to 300:
:>
:>Hiarcs 20
:>
:>Crafty 60
:>
:>Rebel 85

I based the first two numbers on simple results that have been posted
here many times, plus results posted here and on CCC for positions that
hiarcs had searched. It seemed to be (at the time) about 1/3 the speed
of Crafty (ie on similar machines I was doing 60K, hiarcs was doing 20K,
although I couldn't say which hiarcs version [5 or 6]). I ran several
tests with Rebel, and consistently got 80-90K on the same machine,
which was surprising (this was rebel 8).

The numbers for nimzo/fritz/junior came from others. Amir reported on
CCC that junior was slightly faster than fritz. I am taking the fritz/
nimzo numbers 'on faith' since I don't have them and have simply taken
results posted here...

:>
:>Nimzo 230


:>Fritz 250
:>Junior 260+
:>
:>More data would be welcome of course...

:>
:>

I don't spend a lot of time testing such things, as the only NPS
figure I worry about is mine, since my search and eval is different
from the commercial programs in various ways. I only worry about
making mine faster without taking anything out (or even adding things
when needed).

I only wanted to point out that it is possible that a program is thought
of as "smart" when it is really nothing more than "well tuned". Because
there is no way a program is going to be both twice as fast as another
while having twice the knowledge as well. Unless _one_ of the programmers
is _really_ bad.

Robert Hyatt

Jan 9, 1999
Paul Richards <rich...@spamoff.hotmail.com> wrote:

: And I say...

: OK, Deeper Blue defeated Kasparov fair and square (whether he admits
: it or not ;) ), but the point is taken that if Kasparov played many
: matches with the machine he would quickly discover it's weaknesses,
: since humans are quite good at adapting (survival pressure again).
: But the machine is right there with him. Kasparov admits even his
: notebook running Junior 5 is "right there with him". He probably just
: means it's a decent opponent, but in the privacy of his own room how
: many times does he have to hit the takeback key? :) One slip and
: you're toast against ordinary computers these days.

I don't agree. IE Crafty plays blitz matches against GM players day
after day. And it wins (blitz) at least 90% of these games now. Why
can't the GM's do the same for Crafty and figure out how to beat it,
as it does have weaknesses? Answer: because _I_ don't sit still
either. When it loses I look at the game, and if it made a positional
mistake, I adjust the eval if possible, to fill the hole. If it made a
tactical mistake, I try to decide whether I can adjust the search, or
just wait for faster hardware to solve that.

Everyone seems to think that DB is 'static'. It isn't. The DB team
could adjust it enough that I don't think someone would be able to find
a big hole and exploit it game after game, any more than they can do the
same against Crafty or other programs...


: In fact it's an interesting question what the real strength of

Just so everyone understands that 'soon' is a relative term. If you
consider 50 years soon, you are probably correct. If you consider
soon 10 years, probably not...

: The important point is that it does not matter. Human vs. computer


: will always be a different animal than human vs human. The human
: drama and excitement will always be there because we don't have the
: tactical power and precision, because we base on ideas and themes and
: emotion. The analysis of the game is already so extensive that
: soomeone with a photographic memory will do well, but the top guys are
: all freaks one way or another so what does it matter? :) Even
: Kasparov has to study like a madman. The game will always be
: challenging enough for humans, and more exciting. The only way we can
: keep pace with machines will be through genetic engineering. Frankly
: this is a much more likely and plausible future than computer implants
: (who would ever want the 1.0 version of an implant??? ;) ) and this is
: another discussion entirely. Likely we will engineer ourselves to be
: smarter, and then perhaps the current version of chess WILL be too
: easy, and we will crush the computers. Interesting, but not an issue
: for the current human or the current game. The bottom line is that
: computers are an interesting adjunct, they already crush most and will
: soon crush the rest, but it doesn't matter. FWIW I think Stephen
: Hawking has the right take on it: the strength of the human mind is
: imagination and creativity, and anything that can be described in
: simple rules is much better suited to machines.

--

ShaktiFire

Jan 9, 1999
All we have to do is reach 120 ply
to solve chess? Is that all? Consider,
strictly for fun:

Maybe we get to 20 ply with a week of
computing. So we only need 100 ply more.

3**100 = 5 x 10**47

So we only need a speed increase by
a factor of 5 x 10**47.

Let's use a modern-day Connection Machine
with 64,000 processors all doing branches in
the tree (and assume a baseline of around 50,000 nodes
per second per processor, a wild guess). So now
each processor needs to increase speed
by a factor of 1 x 10**43.

Let's say processor speed doubles every 18 months
indefinitely. "Indefinitely" is a big assumption.

2**n = 1 x 10**43

n = 142 doublings x 18 months = 213 years

Hmm... those life extension people had better
work quickly if I hope to see chess solved
to 120 ply in my lifetime.

Paul Richards

Jan 9, 1999
On 9 Jan 1999 15:47:50 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>I don't agree. IE Crafty plays blitz matches against GM players day


>after day. And it wins (blitz) at least 90% of these games now. Why
>can't the GM's do the same for Crafty and figure out how to beat it,
>as it does have weaknesses?

Because there's no time to think in Blitz? ;) You're right, I was
assuming a static case for DB. My point is that in terms of man vs.
machine we are already there with existing technology if anybody cares
to prove it any further. The issue now is 'ordinary' machine vs. man.
I think humans holding out another 10 years is optimistic,
particularly because of things like the analysis project. Reliance on
brute forcing during a game will go down if the strongest variations
have already been computed and put in a database, and then the
database is updated if there is a defeat. DB was a demonstration of
computing power, but if the object is to win any way you can, an
enormous database of pre-analyzed variations will obviate much of the
effect of time control. At blitz such a database would make a
computer virtually unbeatable, and if by some miracle people win the
database gets updated anyway. Whether it's 5, 10, or 20 years,
the time will come, but it will mean little for human vs. human chess.
The game will not be 'ruined' no matter how powerful or knowledgeable
computers become.


Moritz Berger

Jan 9, 1999
On 9 Jan 1999 15:36:02 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>Moritz Berger <Moritz...@email.msn.com> wrote:
>: I took the 3 Rebel 10 benchmark positions from
>: http://www.rebel.nl/bench.htm
>: The advantage from using this positions lies in the solution times
>: table for all kind of CPUs on the same page - just divide solution
>: time by total number of nodes computed (in the R10: lines below) and
>: you have the NPS figure on any type of machine.
>
>This is a bad test. You can't use "solution" times for test
>positions, because some programs are very good at the tactical shots,
>while others are not. And this has little to do with how "smart" or
>"dumb" a program is, it only has to do with how it searches tactically.
>
>the "NPS" value is still the best estimate...

I used it *exclusively* for the *implicit* NPS values, which might
or might not coincide with solution times for Rebel - that doesn't
matter at all.

*Implicit* means that if Rebel on PII-400 needs _x_ seconds to compute
a well defined total number of nodes (as it is the case for these
examples), I assume that having Crafty on the same hardware search the
same position for the same amount of time gives a total number of
nodes computed that allows me to compare the nodes per second value
for both programs.

The advantage I see in using the 3 positions I mentioned lies in the
fact that anybody can run Crafty, e.g. on a P200MMX machine, and can
calculate the NPS value for Rebel on that machine in that position
for themselves.

I hope this clarifies my intention.


Moritz

Vincent Diepeveen

Jan 10, 1999
On 8 Jan 1999 22:07:44 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>mshoe...@my-dejanews.com wrote:


>: In article <36967895...@news.xs4all.nl>,
>: di...@xs4all.nl (Vincent Diepeveen) wrote:

>:> On Tue, 5 Jan 1999 03:37:46 -0500, "Komputer Korner"
>:> <kor...@netcom.ca> wrote:
>:>
>:> >Is there any reason to not believe that the amount of nodes remains


>:> >constant from ply depth to ply depth? It seems that the whole range of
>:> >ply depth would be a constant. In other words at depth 20 is it still
>:> >a factor of 3?
>:>
>:> There is 1 mainreason why searching deeper
>:> reduces branching factor:
>:>
>:> The deeper you search, the more pieces get from the board,
>:> the more pieces get from the board, the fewer possibilities,
>:> the fewer possibilities, the smaller the branching factor.
>:>
>:> Also as there get more pieces from the board, the fewer positions
>:> you transpose to, so the more transpositions there are to that
>:> position.

>:>
>:> I'm using 8 probe in DIEP. I already considered rewriting it to 16
>:> probe. Hashtables kick butt at huge depths. They really give a lot of
>:> plies extra.
>
>: Sorry for my ignorance, but what is "8 probe" and "16 probe" ? I've written a
>: very simple chess program, and am relatively familiar with the ideas behind
>: chess algorithms and heuristics and hash tables etc. but i have never heard of
>: this?
>: Thanks for the help :)
>
>When you hash, and want to store a position, where do you store it? If you
>hash to one fixed address, you get to overwrite the old, or throw out the
>new. If you hash to a "bucket" with N entries, you have a choice of which
>of the N entries you are going to replace. It is more expensive in terms of
>execution time, but it is more effective when you don't really have enough
>memory and have to choose to replace or throw away things very often...
>
>In vincent's case, he is talking about the size of the "bucket". If you
>probe the table to produce a set of N entries you can consider replacing,
>you are doing an N probe algorithm...

>many ways to implement it, from consecutive entry probes to random offset
>probes or rehashing (as done in Cray Blitz)...

'Random offset' probes is what I'm using (something to prevent chaining),
but the offset address might already belong to a different tuple,
so it's not exactly a bucket, but more like a shared bucket.

I've measured it for some searches:

100MB hash versus 50MB hash: after 90 minutes using 40MB hash,
I get the same PV on average as after 70 minutes using 100MB hash on
a PII-450. The transposition table is around 64 MB versus 32 MB, using 16
tuples.

Search speed of DIEP: around 20k a second in that middlegame position.

Results are from my other computer, which is now autoplaying.

>:> Greetings,
>:> Vincent

>:> >Robert Hyatt wrote in message <76r123$4fr$1...@juniper.cis.uab.edu>...
>:> >
>:> >>I'm not quite sure what you are saying, but a PII/350 is definitely
>:> >at a
>:> >>handicap when playing a dual 500mhz machine. We are talking a factor
>:> >of 3
>:> >>in computing which is pretty much one more ply. And one ply does
>:> >make a
>:> >>difference. IE ask the folks the play Crafty on ICC using their
>:> >400-500
>:> >>mhz machines what a difference the 2x faster quad xeon made compared
>:> >to the
>:> >>quad P6/200 I was using. The results are markedly different.
>:> >>
>:> >>Note that the "chess is exponential" is really "chess is exponential
>:> >>with respect to depth". But each ply does have a linear cost impact
>:> >over
>:> >>the previous ply. It is just that this factor of 3 becomes messy
>:> >when
>:> >>you compare the cost of 10 plies to 11 plies (factor of 3 more work)
>:> >and
>:> >>then compare the cost of 10 plies to 12 plies (factor of 9 more
>:> >work).
>:> >>
>:> >>

>:> >>--


>:> >>Robert Hyatt Computer and Information Sciences
>:> >>hy...@cis.uab.edu University of Alabama at Birmingham
>:> >>(205) 934-2213 115A Campbell Hall, UAB Station
>:> >>(205) 934-5473 FAX Birmingham, AL 35294-1170

>: Matt
>
>: -----------== Posted via Deja News, The Discussion Network ==----------
>: http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
>

Vincent Diepeveen

Jan 10, 1999, 3:00:00 AM
On Fri, 8 Jan 1999 15:19:23 -0500, "Komputer Korner"
<kor...@netcom.ca> wrote:

>Then what this means that effectively you only need to be able to go
>to around 60 moves or 120 ply to solve the game because after that the
>plies would be so fast to calculate that they don't really factor into

>the equation. Chess as we know it is in danger.

You're making a small calculation mistake, Komputer Korner.
Even if the branching factor goes down to 2.0, as it already is in my
draughts program, it's still a long way to go.

Getting to 19 ply in the average middlegame position
(only a little over 35 possibilities; see my post in CCC) takes
about a billion nodes for DIEP, and takes many hours of
calculation.

So getting to 'just' 40 ply, which still doesn't solve the game,
already takes about a million days.

If hardware improves 2x in speed every 2 years (not sure it will;
cars and planes aren't getting that much faster lately either), then
just to get 40 ply in 1 day we need around, say, 40 years.

That's a long time to go.


>Vincent Diepeveen wrote in message
><36967895...@news.xs4all.nl>...

Vincent Diepeveen

Jan 10, 1999, 3:00:00 AM
On 9 Jan 1999 15:31:31 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu>
wrote:

>Komputer Korner <kor...@netcom.ca> wrote:


>: Then what this means that effectively you only need to be able to go
>: to around 60 moves or 120 ply to solve the game because after that the
>: plies would be so fast to calculate that they don't really factor into
>: the equation. Chess as we know it is in danger.
>

>Forget it. _if_ the search kept speeding up, then you would be correct.
>But the search doesn't quite behave like this. Deeper searches do offer
>more transpositions. But you have to store them. Which takes a _lot_ of
>memory. And you have to reach those depths, which isn't going to happen
>in any finite time (ie a 60 ply search is _not_ going to be done in our
>lifetimes, or our children's lifetime, or our children's children's
>lifetime, etc.)

Correct. Even if processor speed doubles every 2 years, we still
have a problem if memory doesn't do the same. In draughts I have
already, for some years now, been searching depths way beyond what we
will search in chess within 10 years, and although it is tough to
beat in blitz (tactical shots everywhere!), my partner Marcel
Monteba, with a comparable rating in chess of 2100, kicks it silly.

Centralization is very important in chess, unlike in draughts (Polish
rules, 10x10); in draughts you can't mirror everything, and the pieces
all move the same initially.

When our draughts program was a little more stupid than it is now
(we worked hard on the evaluation the last few years), searching 29
plies overnight (yes, 29 plies FULLWIDTH, excluding all kinds of
extensions) in the middlegame still gave the same result we had on
our screen at 14 ply, which is what we get at tournament level (65 moves
in 60 minutes, after which adjudication takes place).

Tomorrow we have a blitz event. Hope we win it this time; the previous
year we were 2nd.

We search in blitz like 12-13 ply...
...which means that *any* tactical trick (called 'forcings') played in
tournament games of grandmasters is seen within seconds.

My partner played 2 test games against it: 1.5-0.5 in his favour...
...it's insane. He directly gets a more or less 'closed' position
(whatever it is called in draughts; I forgot its name) and kicks its
butt.

Our evaluation function in the draughts program is around 2000 lines of
efficient and compact C code.

Vincent Diepeveen

Jan 10, 1999, 3:00:00 AM
On Sat, 09 Jan 1999 07:14:10 GMT, rich...@spamoff.hotmail.com (Paul
Richards) wrote:

>On Fri, 8 Jan 1999 15:19:23 -0500, "Komputer Korner"


><kor...@netcom.ca> wrote:
>
>>Then what this means that effectively you only need to be able to go
>>to around 60 moves or 120 ply to solve the game because after that the
>>plies would be so fast to calculate that they don't really factor into
>>the equation. Chess as we know it is in danger.
>

>I don't think calculating that many plies is in any way realistic. But
>I think it's unnecessary to brute force so many moves since in a given
>position many if not most can be rejected. Between the analysis
>project going on with Mr. Corbit (and based on your suggestion, I
>might add :) ) the opening book and beyond will be well known for the
>strongest variations, and these will be refined over time as people
>play them. Endgame databases will also grow, and computer power will
>grow so that there will be more and more known. As I said in another
>thread though it really doesn't matter. Suppose the worst case
>scenario, from extensive computer analysis it turns out that 1.e4 is
>the best opening move, and 1...c5 is the best response. So what?
>Does that mean everybody will play the Sicilian? Of course not.
>People can't memorize the variations in the Sicilian to the depths
>*currently* known. And if some freak with a super memory can go 30
>plies into one or two variations, just play 1.d4. ;) In other words
>it will be no different than it is today. The only change is that the
>average computer will be unbeatable, which for most of us is also no
>change from today. :)


Actually, for those who already get beaten, things will get better:
from the time that programs become unbeatable and tend to give the
proper evaluation of the game position, you can follow grandmaster
games in real time, just as you can already follow tennis matches nowadays.

Everyone following a tennis match knows the exact score: who is
winning, who is losing, etcetera. When the computer gets really good,
that's good news for folks wanting to follow the world champs live.

It might take at least another 30 years or more, I fear, though.

Opening is still a big problem for programs.


Komputer Korner

Jan 11, 1999, 3:00:00 AM
Who says that the increase in speed will be only at a constant rate? It may
very well explode with new technologies. The bottom line is that we
can see the end. It may come in 100 or 200 years, but the end is there.
Chess as we know it, with the present rules, is indeed in danger. If we
change the rules, then the tradition of the past 500 years is broken.

Vincent Diepeveen wrote in message

<3699fc35...@news.xs4all.nl>...


>On Fri, 8 Jan 1999 15:19:23 -0500, "Komputer Korner"
><kor...@netcom.ca> wrote:
>
>>Then what this means that effectively you only need to be able to go
>>to around 60 moves or 120 ply to solve the game because after that the
>>plies would be so fast to calculate that they don't really factor into
>>the equation. Chess as we know it is in danger.
>

Robert Hyatt

Jan 11, 1999, 3:00:00 AM
Komputer Korner <kor...@netcom.ca> wrote:
: Who says that the increase in speed will be only a constant? It may

: very well explode with new technologies. The bottom line is that we
: can see the end. It may come in 100 or 200 years but the end is there.
: Chess as we know it with the present rules is indeed in danger. If we
: change the rules then the tradition of the past 500 years is broken.

I see the opposite, based purely on physics. Picosecond chips will
have to be (a) microscopic in size; and (b) have some "magic" in them
to get around RF problems at that frequency.
