
Computer Strength


Christopher Dorr

May 22, 1995

Recently, there has been much talk in several threads about the actual
strength of PC programs available today. Several people have said that
they believe the top programs (Genius, WChess, MChess Pro, Chess Master
4000) are no better than USCF 2300 on a fast processor. Some have even
implied that they are worse.

I would like someone who has these feelings to construct a theory about how
those ratings are possibly true given the evidence of IM-level computer
performance detailed below.

1. The King, Oviedo Spain, December 1993. Finished ahead of 30 GM's,
beating and drawing at least 6 GM's in the process. FIDE Performance
rating-approx 2650+.

2. Chess Genius Experimental, 2 matches against Garry Kasparov, G/25,
Pentium/90 & Pentium/120. Match total result 2-2. FIDE performance rating
2800.

3. WChess, Pentium 90. Harvard cup. 5-1 against 6 GM's. Game/25.
Performance rating approx FIDE 2800+.

4. CM4000, Pentium 90. Harvard Cup. 2.5-3.5 against 6 GM's. Game/25.
Performance rating approx FIDE 2500.

5. Fritz Experimental/Fritz3. Blitz results too numerous to count.
Victories over multiple GM's, including Kasparov.


There are many other examples of computers beating strong GM's in
tournament conditions. I can post these too, if requested. I challenge
anyone to imagine how anyone/anything of less than 2500/2600 FIDE
strength could score 2-2 against Kasparov. It is simply impossible to
believe that any 2400 could do so. On ICS, there are several computers
that play blitz at the 2500/2600 level (Ferret, CM400, GCFour, Hill, etc.).

I know that computers are much better at blitz than slow chess, but the
Oviedo tournament was 40/2, I believe.

How is it possible that the best available PC programs are only
2300/2400, and the above objective evidence can exist?

Thanks,

Chris


Robert Hyatt

May 23, 1995

In article <3prjma$h...@wabash.iac.net>,

Christopher Dorr <crd...@iac.net> wrote:
>Recently, there has been much talk in several threads about the actual
>strength of PC programs available today. Several people have said that
>they believe the top programs (Genius, WChess, MChess Pro, Chess Master
>4000) are no better than USCF 2300 on a fast processor. Some have even
>implied that they are worse.
>
>I would someone who has these feelings to construct a theory about how
>those ratings are possibly true given the evidence of IM-level computer
>performance detailed below.
>
>1. The King, Oviedo Spain, December 1993. Finished ahead of 30 GM's,
>beating and drawing at least 6 GM's in the process. FIDE Performance
>rating-approx 2650+.
>
>2. Chess Genius Experimental, 2 matches against Garry Kasparov, G/25,
>Pentium/90 & Pentium/120. Match total result 2-2. FIDE performance rating
>2800.
>
>3. WChess, Pentium 90. Harvard cup. 5-1 against 6 GM's. Game/25.
>Performance rating approx FIDE 2800+.
>
>4. CM4000, Pentium 90. Harvard Cup. 2.5-3.5 against 6 GM's. Game/25.
>Performance rating approx FIDE 2500.
>
>5. Fritz Experimental/Fritz3. Blitz results too numerous to count.
>Victories over multiple GM's, including Kasparov.
>


First, you partially answered the question yourself. Note the Game/25
time control. Cray Blitz has (for the past 5 years) maintained a
performance rating of 2650 at speed chess. If you factor out the
low-rated players (who actually drag a perf rating down even if it
wins), its rating climbs well over 2800. However, it is *not* a
2800 player at 45/2. Look at the game/25 games and note how many are
won in the last 5 minutes of time scramble.
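Hyatt's point about low-rated opponents dragging a performance rating down can be illustrated with the common linear approximation perf = average opponent rating + 400 * (wins - losses) / games. This is only a sketch; the opponent ratings and scores below are invented for illustration, not results from any actual event.

```python
# Linear performance-rating approximation:
#   perf = average opponent rating + 400 * (wins - losses) / games
# All ratings and scores here are invented for illustration.

def performance(opponent_ratings, score):
    games = len(opponent_ratings)
    avg_opp = sum(opponent_ratings) / games
    # wins - losses == 2 * score - games, even with draws
    return avg_opp + 400 * (2 * score - games) / games

# Six wins against 2500-rated opponents:
print(performance([2500] * 6, 6.0))                 # 2900.0

# The same six wins plus four wins over 1800s: still a perfect
# score, but the weaker opposition pulls the figure down.
print(performance([2500] * 6 + [1800] * 4, 10.0))   # 2620.0
```

On this formula, even winning every game against sub-2000 opposition lowers the computed performance, which is exactly why factoring such games out raises it.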

Second, watch the ratings (Performance) of these programs as they
play more games where GM's begin to take them seriously and prepare
for them. You will see a huge difference (in addition to the difference
caused by long time controls).

If you want to put genius at 2650, how do you justify deep thought
being 2550 yet being capable of completely overwhelming genius? At
the last ACM, deep thought forfeited round one, yet still won, going
through the field like a hot knife through butter. One of the numbers
is wrong; Deep Thought's rating comes from dozens of rated tournament
games at normal time controls. You draw your own conclusion...


>
>There are many other examples of computers beating strong GM's in
>tournament conditions. I can post these too, if requested. I challenge
>anyone to imagine how anyone/anything of less than 2500/2600 FIDE
>strength could score 2-2 against Kasparov. It is simply impossible to
>believe than any 2400 could do so. On ICS, there are several computers
>that play blitz at 2500/2600 level (Ferret, CM400, GCFour, Hill, etc.)
>
>I know that computers are much better at blitz than slow chess, but the
>Oviedo tournament was 40/2, I believe.
>
>How is it possible that the best available PC programs are only
>2300/2400, and the above objective evidence can exist?
>
>Thanks,
>
>Chris
>


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Robert Hyatt

May 23, 1995

In article <3ps0ed$3...@redstone.interpath.net>,
Maurice Robinson <mrob...@mercury.interpath.net> wrote:
>Joseph Albert (alb...@coral.cs.wisc.edu) wrote:
>
>: these aren't tournament conditions, but game/25 (and some blitz), which gives
>: a tremendous advantage to the computer, since searching game trees requires
>: time exponential in the ply of search. thus, computers get "most" of
>: their benefit in the early part of the search.
>
>: let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
>: here.
>
>On the same line, I wonder how many people have noticed the VERY subtle
>shift in the USCF's own "Computer Rating Agency" (CRA) results utilized
>in their chess computer ads. Previously they banned all ads claiming
>ratings which were not based on 40/2 and verified by a CRA test in open
>competition. Now, however, a rating based on 40/2 is nowhere to be found
>-- instead all chess computer ratings cited as CRA approved are all "CRA
>Action Chess" (g/30 or the like) rated.
>
>Seems to me that the USCF, who originally created the CRA to avoid
>"inflated" ratings claims by computer manufacturers, has performed their
>own smoke and mirrors attempt at inflating the apparent ratings of
>machines currently available in order to (gasp!) increase sales of
>products they choose to carry, in a manner ethically similar to those
>earlier inflated ratings claims made by the manufacturers themselves.
>
>All CRA ratings should EITHER be based on 40/2 in USCF publications OR --
>merely to be fair to previous machines -- carry out a CRA "Action Rating"
>of those previous machines to see how much their ratings miraculously
>increase. I am expecting the Fidelity 2100 Chess Challenger to become the
>amazing "Action Chess" Fidelity 2300 overnight in such a case.....*8-)
>
>Steve Schwarz from ICD posted on here a while back asking for new
>articles for computer chess reports --- maybe he would like to furnish us
>with ICD's 40/2 estimates for the same machines USCF carries versus their
>CRA Action ratings, so we can see what the REAL difference and "Hype and
>Nonsense" is all about.
>
>Maurice Robinson


Part of this is caused by the sheer magnitude of time required to play
the games. After Cray Blitz (and now Crafty) I can confirm that testing
is a real "pain". I think the best idea possible would be to put these
programs up on ICC and FICS for a month in automatic mode: any time control,
the program allocating its time automatically, and *no* human to freak around
with the program, doing things like "take a little longer here, you are
getting attacked" (i.e., no "infinite" level with operator assistance).

Publishing such a rating would be much more informative, since a month would
see well over a thousand games played against varying strength opponents
(including the ever-increasing number of computer players on ICC as well.)

I can't imagine how the CCR and Swedish Rating List are maintained. The number
of games requires an unimaginable amount of time, with someone watching over
the games as they are played out.

As a note, I never had much faith in these ratings anyway; they were just
as unreliable as the ratings published by manufacturers after they entered
4 machines in the US Open, and they picked the "best" result, when the worst
result was often 200 rating points less. Wonder how they "decided" that the
best of the four represented the "true" rating? :^)

Al Cargill

May 23, 1995

In article <3prlcb$s...@spool.cs.wisc.edu>
alb...@coral.cs.wisc.edu "Joseph Albert" writes:

> these aren't tournament conditions, but game/25 (and some blitz), which gives
> a tremendous advantage to the computer, since searching game trees requires
> time exponential in the ply of search. thus, computers get "most" of
> their benefit in the early part of the search.
>
> let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
> here.
>

> J. Albert
> alb...@cs.wisc.edu
>

Hi

I believe that the recent results from Aegon were at longer time
limits - some sort of Fischer time control I think which roughly
equated to two hours for the game. As such the results there were
pretty sensational from the computers point of view - not sure of the
exact grades but with seven computers on 5/6 some have to have
graded above 2600. If someone could post the games we would have
a better idea of the standard of play!!

Al

Hal Bogner

May 23, 1995

Chris: You are talking about two different things here. The statements made
by Hyatt and others (including myself) are about 40/2 games. Most of the
results you cite are at G/25 or other fast time limits.

You might ask "what's the difference?" If it were human vs. human, there might
be little difference other than the variation from one player to the next, but
computers' results against humans clearly improve with increasingly rapid
play. That's why I called the USCF Action Chess ratings for computers a sham
and a fraud.

Computers are damn tough at fast games...maybe too tough for the world's best,
in fact (or very close!). But at normal tournament time limits, the best of
them, Deep Blue, is "just a GM," it appears (from a small number of events),
and none of the others are that far along. Still very impressive, of course,
but not quite what is implied by claims about Harvard Cup successes and the
like.

-hal

In article <3prjma$h...@wabash.iac.net> crd...@iac.net (Christopher Dorr) writes:
>Recently, there has been much talk in several threads about the actual
>strength of PC programs available today. Several people have said that
>they believe the top programs (Genius, WChess, MChess Pro, Chess Master
>4000) are no better than USCF 2300 on a fast processor. Some have even
>implied that they are worse.
>
>I would someone who has these feelings to construct a theory about how
>those ratings are possibly true given the evidence of IM-level computer
>performance detailed below.
>
>1. The King, Oviedo Spain, December 1993. Finished ahead of 30 GM's,
>beating and drawing at least 6 GM's in the process. FIDE Performance
>rating-approx 2650+.
>
>2. Chess Genius Experimental, 2 matches against Garry Kasparov, G/25,
>Pentium/90 & Pentium/120. Match total result 2-2. FIDE performance rating
>2800.
>
>3. WChess, Pentium 90. Harvard cup. 5-1 against 6 GM's. Game/25.
>Performance rating approx FIDE 2800+.
>
>4. CM4000, Pentium 90. Harvard Cup. 2.5-3.5 against 6 GM's. Game/25.
>Performance rating approx FIDE 2500.
>
>5. Fritz Experimental/Fritz3. Blitz results too numerous to count.
>Victories over multiple GM's, including Kasparov.
>
>

Hal Bogner

May 23, 1995

In article <3ps0ed$3...@redstone.interpath.net> mrob...@mercury.interpath.net (Maurice Robinson) writes:
>Joseph Albert (alb...@coral.cs.wisc.edu) wrote:
>
>: these aren't tournament conditions, but game/25 (and some blitz), which gives
>: a tremendous advantage to the computer, since searching game trees requires
>: time exponential in the ply of search. thus, computers get "most" of
>: their benefit in the early part of the search.
>
>: let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
>: here.
>
>On the same line, I wonder how many people have noticed the VERY subtle
>shift in the USCF's own "Computer Rating Agency" (CRA) results utilized
>in their chess computer ads. Previously they banned all ads claiming
>ratings which were not based on 40/2 and verified by a CRA test in open
>competition. Now, however, a rating based on 40/2 is nowhere to be found
>-- instead all chess computer ratings cited as CRA approved are all "CRA
>Action Chess" (g/30 or the like) rated.

You're absolutely right! I helped Dave Welsh design the original CRA process
in the early 1980s, with accuracy as the only goal. The USCF has over the
years "reoriented" the process, and now it's a marketing tool only. A process
that was originally intended to advise users of what to expect has become a
method for misleading them.


>
>Seems to me that the USCF, who originally created the CRA to avoid
>"inflated" ratings claims by computer manufacturers, has performed their
>own smoke and mirrors attempt at inflating the apparent ratings of
>machines currently available in order to (gasp!) increase sales of
>products they choose to carry, in a manner ethically similar to those
>earlier inflated ratings claims made by the manufacturers themselves.
>
>All CRA ratings should EITHER be based on 40/2 in USCF publications OR --
>merely to be fair to previous machines -- carry out a CRA "Action Rating"
>of those previous machines to see how much their ratings miraculously
>increase. I am expecting the Fidelity 2100 Chess Challenger to become the
>amazing "Action Chess" Fidelity 2300 overnight in such a case.....*8-)
>
>Steve Schwarz from ICD posted on here a while back asking for new
>articles for computer chess reports --- maybe he would like to furnish us
>with ICD's 40/2 estimates for the same machines USCF carries versus their
>CRA Action ratings, so we can see what the REAL difference and "Hype and
>Nonsense" is all about.
>

Even Steve relies on data from others, and filters it through his business
preferences to maximize the sales results of those products that he can make
more money selling. Caution is needed when asking people who derive income
from the creation or sale of these products! (Of course, that includes me,
too!)

>Maurice Robinson

-hal

Hal Bogner

May 23, 1995

In article <3psir0$c...@maze.dpo.uab.edu> SHR...@UABDPO.DPO.UAB.EDU ) writes:

>In article <3prjma$h...@wabash.iac.net>, crd...@iac.net (Christopher Dorr) says:
>>
>>How is it possible that the best available PC programs are only
>>2300/2400, and the above objective evidence can exist?

Whoa!!! You're talking apples and oranges! The results Dorr cites are not at
40/2.... And your points 1 and 2 are not valid.
>
>
>There are a number of simple answers that you continue to overlook.
>
>1. GMs are not experienced in playing computers, and don't know
> how to adapt their style to beat computers. The Kasparov near-fiasco
> against Genius is a good example of this.

This varies from player to player, and over time, too.
>
>2. These programs are specially programmed for these events, using
> hard- and software not available in the for-sale units.
>
Not always true. For example, an experimental version of Genius was used
against Kasparov last Saturday, but the off-the-shelf version was used in
Aegon a few weeks ago (supplemented, I believe, by a large, specialized
opening book).

>3. These are single case studies, not longitudinal results. These are
> always new opponents for the GMs. If they had time to prepare,
> they would have near-perfect results.

Al Cargill

May 23, 1995

In article <3pskgs$e...@pelham.cis.uab.edu>
hy...@willis.cis.uab.edu "Robert Hyatt" writes:

> If you want to put genius at 2650, how do you justify deep thought
> being 2550 yet being capable of completely overwhelming genius? At
> the last ACM, deep thought forfeited round one, yet still won, going
> through the field like a hot knife thru butter. One of the numbers
> is wrong, Deep Thought's rating comes from dozens of rated tournament
> games at normal time controls. You draw your own conclusion...
>

Robert

Do you think that the best PC programs could be more difficult to
prepare for than DT for the majority of 2400-2500 ELO players? What
I am trying to work out is whether DT overwhelms its computer
opponents by sheer tactics and whether this would be less effective
against humans. I fully accept that DT is the best computer for
playing against computers and presumably against humans at fast
time controls, but wonder whether the (arguably) more "intelligent"
programs of Genius/Hiarcs would hold up longer under scrutiny/
preparation for longer time limits.
BTW, am I bad-mouthing the DT program here? At my end it is
hard to know how much of its (superb) strength is due to fast
hardware - any idea of approximately how fast the hardware is
compared to a Pentium 90, for example?

Al

Doctor SBD

May 23, 1995

How can my point two not be valid when you say yourself that something is
an "off-the-shelf version" but is supplemented by a large opening book?
Wouldn't that skew the results exactly as I said??

Joseph Albert

May 23, 1995

In article <3prjma$h...@wabash.iac.net>,
Christopher Dorr <crd...@iac.net> wrote:
:they believe the top programs (Genius, WChess, MChess Pro, Chess Master
:4000) are no better than USCF 2300 on a fast processor. Some have even

:2. Chess Genius Experimental, 2 matches against Garry Kasparov, G/25,

:Pentium/90 & Pentium/120. Match total result 2-2. FIDE performance rating
:2800.
:
:3. WChess, Pentium 90. Harvard cup. 5-1 against 6 GM's. Game/25.
:Performance rating approx FIDE 2800+.
:
:4. CM4000, Pentium 90. Harvard Cup. 2.5-3.5 against 6 GM's. Game/25.
:Performance rating approx FIDE 2500.
:
:5. Fritz Experimental/Fritz3. Blitz results too numerous to count.
:Victories over multiple GM's, including Kasparov.
:
:There are many other examples of computers beating strong GM's in
:tournament conditions. I can post these too, if requested.

these aren't tournament conditions, but game/25 (and some blitz), which gives
a tremendous advantage to the computer, since searching game trees requires
time exponential in the ply of search. thus, computers get "most" of
their benefit in the early part of the search.

let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
here.
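Albert's exponential-time argument can be made concrete with a back-of-the-envelope sketch. The node rate (100K nodes/sec) and effective branching factor (6) below are illustrative assumptions, not measurements of any particular program:

```python
import math

def reachable_depth(seconds_per_move, nodes_per_second=100_000, b=6):
    """Depth d such that b**d roughly equals the nodes searched in the time given."""
    nodes = seconds_per_move * nodes_per_second
    return math.log(nodes) / math.log(b)

fast = reachable_depth(25 * 60 / 40)    # game/25 budget spread over ~40 moves
slow = reachable_depth(120 * 60 / 40)   # 40 moves in 2 hours

print(round(fast, 1), round(slow, 1))   # 8.4 9.3
```

Going from game/25 to 40/2 multiplies thinking time by almost five, yet on this model it buys the program less than one extra ply of search, while a human can use the added time far more profitably.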

J. Albert
alb...@cs.wisc.edu

Steve Mayer

unread,
May 23, 1995, 3:00:00 AM5/23/95
to
Christopher Dorr (crd...@iac.net) wrote:
[deletia]
: I would someone who has these feelings to construct a theory about how
: those ratings are possibly true given the evidence of IM-level computer
: performance detailed below.

: 1. The King, Oviedo Spain, December 1993. Finished ahead of 30 GM's,
: beating and drawing at least 6 GM's in the process. FIDE Performance
: rating-approx 2650+.

[truly mass deletia]

: I know that computers are much better at blitz than slow chess, but the
: Oviedo tournament was 40/2, I believe.

I believe that tournament was game/25. Maybe not, but there was a
major G/25 tournament at Oviedo in '93. Jonathan Mestel played in that
one and might comment, i.e., is this result of "The King" from the same
"Oviedo G/25" he played in?

-Steve

Maurice Robinson

unread,
May 23, 1995, 3:00:00 AM5/23/95
to
Joseph Albert (alb...@coral.cs.wisc.edu) wrote:

: these aren't tournament conditions, but game/25 (and some blitz), which gives
: a tremendous advantage to the computer, since searching game trees requires
: time exponential in the ply of search. thus, computers get "most" of
: their benefit in the early part of the search.

: let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
: here.

On the same line, I wonder how many people have noticed the VERY subtle
shift in the USCF's own "Computer Rating Agency" (CRA) results utilized
in their chess computer ads. Previously they banned all ads claiming
ratings which were not based on 40/2 and verified by a CRA test in open
competition. Now, however, a rating based on 40/2 is nowhere to be found
-- instead all chess computer ratings cited as CRA approved are all "CRA
Action Chess" (g/30 or the like) rated.

Seems to me that the USCF, who originally created the CRA to avoid
"inflated" ratings claims by computer manufacturers, has performed their
own smoke and mirrors attempt at inflating the apparent ratings of
machines currently available in order to (gasp!) increase sales of
products they choose to carry, in a manner ethically similar to those
earlier inflated ratings claims made by the manufacturers themselves.

All CRA ratings should EITHER be based on 40/2 in USCF publications OR --
merely to be fair to previous machines -- carry out a CRA "Action Rating"
of those previous machines to see how much their ratings miraculously
increase. I am expecting the Fidelity 2100 Chess Challenger to become the
amazing "Action Chess" Fidelity 2300 overnight in such a case.....*8-)

Steve Schwarz from ICD posted on here a while back asking for new
articles for computer chess reports --- maybe he would like to furnish us
with ICD's 40/2 estimates for the same machines USCF carries versus their
CRA Action ratings, so we can see what the REAL difference and "Hype and
Nonsense" is all about.

Maurice Robinson

glen...@delphi.com

May 23, 1995

Could someone please post the ELO performance ratings of the
top machines at the AEGON tournament ?? Thanks........ Tom Glenn

Christopher Dorr

May 23, 1995

Joseph Albert (alb...@coral.cs.wisc.edu) wrote:

: these aren't tournament conditions, but game/25 (and some blitz), which gives
: a tremendous advantage to the computer, since searching game trees requires
: time exponential time in the ply of search. thus, computers get "most" of
: their benefit in the early part of the search.

: let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
: here.

: J. Albert
: alb...@cs.wisc.edu

I believe the Oviedo tournament in Spain was 40/2. I also believe the
Aegon tournaments, where several programs have routinely trounced strong
GM's, are 40/2 or 45/2.

The USCF Computer Rating Agency has at least 15 programs rated as
masters, and at least 6 rated over 2400, on the basis of G/30 or slower,
which the USCF rates for humans as normal tournament games.

Please correct me if my information is wrong.

Thanks, Chris

Robert Hyatt

May 23, 1995

In article <801243...@alcarg.demon.co.uk>,

Al Cargill <A...@alcarg.demon.co.uk> wrote:
>In article <3pskgs$e...@pelham.cis.uab.edu>
> hy...@willis.cis.uab.edu "Robert Hyatt" writes:
>
>> If you want to put genius at 2650, how do you justify deep thought
>> being 2550 yet being capable of completely overwhelming genius? At
>> the last ACM, deep thought forfeited round one, yet still won, going
>> through the field like a hot knife thru butter. One of the numbers
>> is wrong, Deep Thought's rating comes from dozens of rated tournament
>> games at normal time controls. You draw your own conclusion...
>>
>
>Robert
>
>Do you think that the best PC programs could be more difficult to
>prepare for than DT for the majority of 2400-2500 ELO players? What
>I am trying to work out is whether DT overwhelms its computer
>opponents by sheer tactics and whether this would be less effective
>against humans. I fully accept that DT is the best computer for
>playing against computers and presumably against humans at fast
>time controls, but wonder whether the (arguably) more "intelligent"
>programs of Genius/Hiarcs would hold up longer under scrutiny/
>preparation for longer time limits.

1. your point is well-taken, and I have mentioned this on many
occasions: it's one thing to design a program to beat humans,
and another to design a program to beat other programs. While, in
many ways the two goals are parallel, to beat other
programs, speed becomes more important than anything else... I've
been in this position before and played "that" game.

2. there is *no* evidence to lead me to believe that genius has
more knowledge than Deep Thought. I know the guys up there, and
(regardless of other opinions) believe me, they are *not* dummies.
I know for a fact that they do some things that Genius simply cannot,
because their hardware takes care of things that would be too
costly to do in a normal "program". Their search extensions are
unlike anything seen in any other program.

3. having seen the search capability of DT "up close and personal" on more
than one occasion, humans have a *real* problem facing them when they
play the machine. Even when it plays *lousy* moves, it sees so
deeply that very few can punish it. Again, genius is a really
amazing piece of work. Deep Thought is *unbelievable*.


> BTW am I bad mouthing the DT program here? - at my end it is
>hard to know how much of its (superb) strength is due to fast
>hardware - any idea of approximately how fast the hardware is
>compared to a Pentium 90 for example.

Hard to say, but with the old machine running well beyond 2M nodes
per second, and fast pentium programs claiming 100K nodes per second,
maybe a factor of 20-50 at present. Of course, this will be a factor
of 1,000 real soon...
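Because search cost grows exponentially with depth, a raw hardware factor buys depth only logarithmically. A quick sketch of what Hyatt's factors mean in plies (the effective branching factor of 6 is an illustrative assumption, not a measured value):

```python
import math

def extra_plies(speed_factor, b=6):
    """Added search depth from a raw speed factor, if depth d costs ~b**d nodes."""
    return math.log(speed_factor) / math.log(b)

for factor in (20, 50, 1000):
    print(factor, round(extra_plies(factor), 1))
# 20 -> 1.7 plies, 50 -> 2.2 plies, 1000 -> 3.9 plies
```

On this crude model, even the promised factor of 1,000 is worth only about four extra plies of search.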

Tim Mirabile

May 23, 1995

In article <3prjma$h...@wabash.iac.net>,
> Christopher Dorr <crd...@iac.net> wrote:
> :they believe the top programs (Genius, WChess, MChess Pro, Chess Master
> :4000) are no better than USCF 2300 on a fast processor. Some have even

> :2. Chess Genius Experimental, 2 matches against Garry Kasparov, G/25,
> :Pentium/90 & Pentium/120. Match total result 2-2. FIDE performance rating
> :2800.

> :3. WChess, Pentium 90. Harvard cup. 5-1 against 6 GM's. Game/25.
> :Performance rating approx FIDE 2800+.

> :4. CM4000, Pentium 90. Harvard Cup. 2.5-3.5 against 6 GM's. Game/25.
> :Performance rating approx FIDE 2500.

> :5. Fritz Experimental/Fritz3. Blitz results too numerous to count.
> :Victories over multiple GM's, including Kasparov.

> :There are many other examples of computers beating strong GM's in
> :tournament conditions. I can post these too, if requested.

alb...@coral.cs.wisc.edu (Joseph Albert) wrote:

> these aren't tournament conditions, but game/25 (and some blitz), which gives
> a tremendous advantage to the computer, since searching game trees requires
> time exponential in the ply of search. thus, computers get "most" of
> their benefit in the early part of the search.

> let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
> here.

Exactly. These G/25 results are meaningless to the discussion.
G/25 is not FIDE ratable, yet you are using the humans' FIDE
ratings, earned at much slower time controls, to calculate a
performance rating for the programs.

And it's not that the computers are much better at faster controls,
it's that the humans are much worse, i.e. the humans lose a lot more
of their strength than the computers do when you speed up the game.

Also, the anti-computer strategy needed to beat these machines
has changed rapidly over the last few years, and I'd say a lot
of the humans haven't been keeping up.

Les Elwell

May 24, 1995

hy...@willis.cis.uab.edu (Robert Hyatt) wrote:
>
> If you want to put genius at 2650, how do you justify deep thought
> being 2550 yet being capable of completely overwhelming genius? At
> the last ACM, deep thought forfeited round one, yet still won, going
> through the field like a hot knife thru butter. One of the numbers
> is wrong, Deep Thought's rating comes from dozens of rated tournament
> games at normal time controls. You draw your own conclusion...
>
Robert,

I have been reading this with massive interest - the essence of the
problem seems to be as follows:

There are no PC programs yet that can overcome some fundamental
problems.

1. Blocked positions. PC programs not knowing what to do!

2. Pawn grabbing. Genius's game against Kasparov on Saturday was
hideous to watch. I thought that such awful misplaced priorities
would have been ironed out years ago, as PC programs' knowledge of
the game improved (however, if it had won it would have been heroic!).

3. Any position that requires seeing "over the horizon", or a position
in which something "will never happen" that a computer cannot see
(e.g. a king getting itself into a trapped position which gives it
a temporary advantage, but from which it will never escape).

Most of us have come to the conclusion that speed alone will never
be enough - that PCs simply have to become more knowledgeable about
the game.

The really, truly, absolutely vital question for you Robert is this:

Can supercomputers ever become good enough to prevent these
anti-computer tactics from being a guaranteed way to kill them?

We are relying on you to provide the answer, because most of us
never get to play anything better than Genius. Unfortunately, one
can "learn" how to beat these machines WITHOUT EVEN BEING A
GRANDMASTER (or IM, or Master, or anything).

Mark Uniacke

May 24, 1995

Hi,

I know for certain the top computer in the 1995 AEGON tournament, HIARCS, was
a 100% "off-the-shelf" copy of HIARCS 3.0 (opening book included).
Its performance rating was Elo 2631 (remember this is a longer time control).
However, I would not claim ANY PC program is close to sustaining such a rating
at longer time controls >= 2 min/move, but some are >Elo 2350.


Best wishes,
Mark

--
The opinions and comments expressed herein are my own and do not in any way
represent those of BNR Europe or Northern Telecom.
ma...@bnr.co.uk

Christopher Dorr

May 24, 1995

error@hell wrote:

: In article <3prjma$h...@wabash.iac.net>, crd...@iac.net (Christopher Dorr) says:
: >
: >How is it possible that the best available PC programs are only
: >2300/2400, and the above objective evidence can exist?


: There are a number of simple answers that you continue to overlook.

: 1. GMs are not experienced in playing computers, and don't know
: how to adapt their style to beat computers. The Kasparov near-fiasco
: against Genius is a good example of this.

1. By now, virtually all GM's are experienced against computers. From
what I understand, the vast majority of GMs own and use both database
programs and playing programs. Also, this is irrelevant to an extent. Both
the GM's and the computers are playing chess, pure and simple. A GM
should be able to respond appropriately to the position on the board,
whether created by a human or a computer.


: 2. These programs are specially programmed for these events, using
: hard- and software not available in the for-sale units.

2. All programs/computers used in CRA rated events ARE the actual, for-sale
units. That is a stipulation of the test. From what I understand,
the WChess that beat the field at the Harvard Cup is the WChess you can buy
on the street.

: 3. These are single case studies, not longitudinal results. These are
: always new opponents for the GMs. If they had time to prepare,
: they would have near-perfect results.

3. Again, CRA ratings are obtained only after a significant number of
USCF rated games under tournament conditions. Kasparov had time to
prepare against Genius, but in the last match obtained a 1.5-.5 score.
Not perfect. And again, why is this relevant anyway? If programs are
truly as weak as many say, then a GM should have no trouble trouncing them
easily even without preparation. At tournaments all over the US, GMs kill
USCF 2400's in tournaments (like the National Open and World Open) on a
very regular basis, even without preparation against them. The pattern
seems to follow the old "David Levy" challenges. At first, he could kill
the computers with no preparation. Then he needed to prepare against
them, and finally, they were simply stronger than he was, and the
preparation became secondary to the strength differential. I believe that
is also the case here.


Chris


Christopher Dorr

May 24, 1995
re: the strength of Deep Thought vs. the others,

In 'Computers, Chess, and Cognition', edited by Marsland and Schaeffer,
Hsu et al. note that the 2551 USCF rating of Deep Thought was weighted
towards the early games (per the USCF formula), which included a large
number of games played by a 'buggy' version that allowed a mate in 1 by
a 2100!

They note that its overall performance was over 2600 USCF, and that its
best 25-game period had a performance of over 2650, more than 100 points
above its published rating.

If that 2600+ rating is more indicative than the 2551, then with Genius
being even 150+ points weaker, it is still a 2500 USCF player.

Another possible contributing factor in its dominance over other
computers may be the well-known fact that computer-computer matches (as
in the ACM) overestimate the difference between programs. And if one is
going to posit that the small number of games in events like the Harvard
Cup diminishes the credibility of the performance, then one must also
apply the same logic to the ACM, and thus give its outcome little
credence because of the extremely limited number of games.

Chris


Hal Bogner

May 24, 1995
In article <3pu8f0$8...@wabash.iac.net> crd...@iac.net (Christopher Dorr) writes:

>Joseph Albert (alb...@coral.cs.wisc.edu) wrote:
>
>: these aren't tournament conditions, but game/25 (and some blitz), which gives
>: a tremendous advantage to the computer, since searching game trees requires
>: time exponential in the ply of search. thus, computers get "most" of
>: their benefit in the early part of the search.
>
>: let's see some more 40 moves in 2 hrs wins against GMs before we jump the gun
>: here.
>
>: J. Albert
>: alb...@cs.wisc.edu
>
>I believe the Oviedo tournament in Spain was 40/2. I also believe the
>Aegon tournaments, where several programs have routinely trounced strong
>GM's is 40/2 or 45/2.

Steve Mayer has posted that Oviedo was G/25, and Aegon was G/90 with a
"Bronstein 15 second delay before your clock started running at the start of
each move."


>
>The USCF Computer Rating Agency has at least 15 programs rated as
>masters, and at least 6 rated over 2400, on the basis of G/30 or slower,
>which the USCF rates for humans as normal tournament games.
>

It's true that USCF rates G/30 the same for humans as 40/2, but applying it to
computers vs. humans is misleading. Studies have shown that Elo ratings hold
for players (over large groups, not necessarily individually) relative to each
other regardless of the time limit, at least down to G/30 and probably G/15.
It may be true for G/5, too - I don't know.

But it's also clear that speeding up the time limit from 40/2 to G/30 in
computer vs. human play increases the computers' winning expectation.

>Please correct me if my information is wrong.
>
>Thanks, Chris
>

Thanks for the opportunity to bring these issues out into the open. They've
been buried far too long!

You're welcome!

-hal

engelkes

May 24, 1995
Les Elwell <aw...@dial.pipex.com> writes:
>I have been reading this with massive interest - the essence of the
>problem seems to be as follows:
>There are no PC programs yet that can overcome some fundamental
>problems.
>1. Blocked positions. PC programs not knowing what to do!
>2. Pawn grabbing. Genius's game against Kasparov on Saturday was
>hideous to watch. I thought that such awful misplaced priorities
>would have been ironed out years ago, as PC programs' knowledge of
>the game improved (however, if it had won it would have been heroic!).
>3. Any position that requires seeing "over the horizon", or a postion
>in which something "will never happen" that a computer cannot see
>(e.g. a king getting itself into a trapped position which gives it
>a temporary advantage, but from which it will never escape).

>Can supercomputers ever become good enough to be able to prevent
>these anti computer tactics from guaranteeing to be able to kill
>them?
>We are relying on you to provide the answer, because most of us
>never get to play anything better than Genius. Unfortunately, one
>can "learn" how to beat these machines WITHOUT EVEN BEING A
>GRANDMASTER (or IM, or Master, or anything).

Couldn't agree more. As published "around the world" a few times, an 1800
player like myself can win against Fritz3, even giving rook handicap.
Fritz3 is commonly advertised with ELO 2800 (where's the decimal point?).
It's my estimation you only need to know the rules and a lot of trial and
error to find the way to beat every computer. ELO 1200 will be sufficient.
No, don't ask me to prove that again, Mr Hyatt, I'm still waiting for the
proof of the re-re-re-re-re-re-re (since about 1945) claimed nonsense of
"computers will win the world championship within x years".
Anyone wanting to win against their computer: try hippopotamus build-up,
or one of the openings 1. h4 2. e3 3. Rh2 4. g3 5. Qf3 6. Qh1 and wait
for the moron to castle short, or 1. h4 2. a4 3. a5 4. f3 5. Nh3, wait
for the idiot to take the pawn on h4 and double your rooks on the wing
where he will castle.

Robert Hyatt

May 24, 1995
In article <3put91$8...@soap.pipex.net>,

Les Elwell <aw...@dial.pipex.com> wrote:
>hy...@willis.cis.uab.edu (Robert Hyatt) wrote:
>>
>> If you want to put genius at 2650, how do you justify deep thought
>> being 2550 yet being capable of completely overwhelming genius? At
>> the last ACM, deep thought forfeited round one, yet still won, going
>> through the field like a hot knife thru butter. One of the numbers
>> is wrong, Deep Thought's rating comes from dozens of rated tournament
>> games at normal time controls. You draw your own conclusion...
>>
>Robert,
>
>I have been reading this with massive interest - the essence of the
>problem seems to be as follows:
>
>There are no PC programs yet that can overcome some fundamental
>problems.
>
>1. Blocked positions. PC programs not knowing what to do!
>
>2. Pawn grabbing. Genius's game against Kasparov on Saturday was
>hideous to watch. I thought that such awful misplaced priorities
>would have been ironed out years ago, as PC programs' knowledge of
>the game improved (however, if it had won it would have been heroic!).
>
>3. Any position that requires seeing "over the horizon", or a postion
>in which something "will never happen" that a computer cannot see
>(e.g. a king getting itself into a trapped position which gives it
>a temporary advantage, but from which it will never escape).
>
>Most of us have come to the conclusion that speed alone will never
>be enough - that PCs simply have to become more knowledgable about
>the game.
>
>The really, truly, absolutely vital question for you Robert is this:
>
>Can supercomputers ever become good enough to be able to prevent
>these anti computer tactics from guaranteeing to be able to kill
>them?
>
>We are relying on you to provide the answer, because most of us
>never get to play anything better than Genius. Unfortunately, one
>can "learn" how to beat these machines WITHOUT EVEN BEING A
>GRANDMASTER (or IM, or Master, or anything).


First, a quote:

"if supercomputers are the answer, what is the question?"

Computing power directly affects tactical prowess. If you have had a
chance to play against Cray Blitz at the many ACM events I've participated
in, you know what it is capable of. It has announced mates in 10-15 in speed
games (*not* 10-15 plies, but 10-15 moves...). Crafty has announced *several*
mates in 10 on ICC. This from running on a Sun sparcstation. Faster hardware
means deeper searches, which means better tactical awareness. Deep Blue will
be *frightening*...

*HOWEVER* there are some tactics that are simply so deep, *no* machine is going
to solve them with search alone. I can show you positions in the Stonewall that
Crafty simply can't cope with. Perhaps on a Cray, yes, but obviously there are
positions that 10 plies (or 15 plies) is simply not going to resolve. With that
in mind, what's left?

One idea is to twiddle with the evaluation. As a human, I do some quick
analysis and decide that Bxh6 is worthwhile, even though I can't see a forced
mate. Your king being forced to h5 (eventually) is enough of a red flag that
I'll try it. While I'll probably never be able to make Crafty exercise such
a positional "sac", I can tune its king safety so that if the queen wanders
over to the queenside and grabs a pawn while the king-side gets shattered
(say Bxf6 gxf6), the king-safety term is large enough that it will realize
this was a bad deal. In short (and I'm playing with this idea right now),
I can tell Crafty that its kingside safety is *way* more important than
yours. If I make it symmetric, it's going to start sac'ing pieces to
disrupt your king-side. It will win a lot of games, but against GM's,
most of these attacks will prove to be futile.
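A minimal sketch of the asymmetric idea described above — the weights and the "exposure" scale are illustrative assumptions, not Crafty's actual evaluation terms:

```python
# Asymmetric king-safety weighting: damage to our own king shelter is
# valued more heavily than attacking chances against the opponent's,
# so the program declines pawn grabs that wreck its own king-side.
OWN_WEIGHT = 3.0   # our exposed king costs a lot (assumed weight)
OPP_WEIGHT = 1.0   # the opponent's exposed king gains us less

def king_safety(own_exposure, opp_exposure):
    """Exposure = some count of shelter damage (open files, missing
    pawns) around each king. Positive scores favor us."""
    return OPP_WEIGHT * opp_exposure - OWN_WEIGHT * own_exposure

# A symmetric trade of king safety (say Bxf6 gxf6 while our queen
# grabs a pawn on the far wing) now scores as a net loss for us:
print(king_safety(2, 2))   # → -4.0
```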

In short: if progress is going to be made, part of the progress is going
to have to be in the form of knowledge and when to apply it, as opposed to
raw nodes-per-second. This is why working on Crafty has been so much fun.
I have the opportunity to try *anything* I want. It has some really neat
things in the eval now that I haven't seen written up anywhere, yet they
are logical, and even more importantly, *correct* ways to look at things.
I have (more or less) stopped fritzing around with its search at present, and
am working on adjusting current knowledge or adding new knowledge to try and
get better moves out of it without having to resort to a machine (Cray) that
is very difficult to get time on. If I thought the problem was hopeless, I
would not be working on it. However, taking something I "understand" (kingside
safety) and translating that to an algorithm (and a fast algorithm so that I
don't bust its tactical ability by slowing it down significantly) is an
interesting exercise.

However, "stay tuned."

Robert Hyatt

May 24, 1995
In article <3pvec9$k...@news.xs4all.nl>, engelkes <enge...@xs4all.nl> wrote:

>Les Elwell <aw...@dial.pipex.com> writes:
>>I have been reading this with massive interest - the essence of the
>>problem seems to be as follows:
>>There are no PC programs yet that can overcome some fundamental
>>problems.
>>1. Blocked positions. PC programs not knowing what to do!
>>2. Pawn grabbing. Genius's game against Kasparov on Saturday was
>>hideous to watch. I thought that such awful misplaced priorities
>>would have been ironed out years ago, as PC programs' knowledge of
>>the game improved (however, if it had won it would have been heroic!).
>>3. Any position that requires seeing "over the horizon", or a postion
>>in which something "will never happen" that a computer cannot see
>>(e.g. a king getting itself into a trapped position which gives it
>>a temporary advantage, but from which it will never escape).
>
>>Can supercomputers ever become good enough to be able to prevent
>>these anti computer tactics from guaranteeing to be able to kill
>>them?
>>We are relying on you to provide the answer, because most of us
>>never get to play anything better than Genius. Unfortunately, one
>>can "learn" how to beat these machines WITHOUT EVEN BEING A
>>GRANDMASTER (or IM, or Master, or anything).
>
>Couldn't agree more. As published "around the world" a few times, a 1800
>player like myself can win against Fritz3, even giving rook handicap.
>Fritz3 is commonly advertised with ELO 2800 (where's the decimal point?).
>It's my estimation you only need to know the rules and a lot of trial and
>error to find the way to beat every computer. ELO 1200 will be sufficient.
>No, don't ask me to prove that again, Mr Hyatt, I'm still waiting for the
>proef of the re-re-re-re-re-re-re (since about 1945) claimed nonsense of
>"computers will win the world championship within x years".
>Anyone wanting to win against their computer: try hippopotamus build-up,
>or one of the openings 1. h4 2. e3 3. Rh2 4. g3 5. Qf3 6. Qh1 and wait
>for the moron to castle short, or 1. h4 2. a4 3. a5 4. f3 5. Nh3, wait
>for the idiot to take the pawn on h4 and double your rooks on the wing
>where he will castle.


You ought to try some of that on Crafty. It has a more human-like
approach. Most programs simply castle when they can, and where they
can (often to the kingside, since there is one less piece in the way
and the king winds up on a more desirable square - g1 rather than c1).

Crafty uses a "look left then look right" algorithm. If it is considering
castling kingside, it "pretends to do it" and evaluates king safety there;
then it pretends to castle queen-side and evaluates king safety there as
well. The score for king-side castling is then the king-side safety term,
minus the difference between queen-side and king-side king safety (if
queenside safety is "better"), which makes the program "wait" and castle
to the safer side. I won't claim it always works, but it's far better than
Cray Blitz was in this regard.
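In code, the "look left then look right" idea might be sketched like this — the safety values here stand in for an assumed king-safety helper, not actual Crafty source:

```python
def castle_scores(king_safety_ks, king_safety_qs):
    """Score each castling option by its own safety term, minus the
    amount by which the *other* wing would have been safer. The
    penalty makes the program wait and castle to the safer side."""
    ks = king_safety_ks - max(0, king_safety_qs - king_safety_ks)
    qs = king_safety_qs - max(0, king_safety_ks - king_safety_qs)
    return ks, qs

# If the queenside shelter evaluates as much safer, kingside castling
# is penalized accordingly:
print(castle_scores(10, 30))   # → (-10, 30)
```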

I will avoid responding to the <expletive deleted> about ELO 1200 being
enough. I've made the challenge before. *you* provide the ELO 1200
player, I'll provide the opponent. Any time control you want.

As far as your suggested openings, they may work against Fritz, but your
sample size is too small. If your ELO rating is 2200 or less, try that
against Crafty and see how you do. It's not "great" yet, as it is being
modified every day. However, play stupid moves against it and it will
hand you your head in a sack. Ask Roman, Junior, Beetle, or any of the
IM's or GM's on ICC. Your suggestion is dated by at least 10 years. David
Levy (who *was* over 2200 ELO) tried this strategy against Deep Thought,
and had his head run up the flag pole. If an IM, with a real understanding
of how computers play chess, couldn't implement your suggestion successfully,
is there any point in continuing to trumpet this nonsense?

For your last example, I suspect Crafty will rip your pawn, castle queenside,
and then wonder why in the hell you threw that pawn away. As a human, I would
take it in a heartbeat. Don't take experiences with Fritz (or any *one* program)
and extrapolate from that how every other program is going to perform. We
already have a basket full of heads in my office, one more won't hurt.

Robert Hyatt

May 24, 1995
In article <3pvaof$c...@wabash.iac.net>,


But the number of *private* games played is *not* trivial. Hsu can comment
(if he will), but I use Genius all the time working with Cray Blitz. We
are *not* talking about a few games, but about hundreds of games.

DT's "best 25 game history" is meaningless... This is what Fidelity,
Saitek, Novag and others used to do... If you tell me that program "x"
has a rating of "y", then I expect it to play like a "y" player all the
time. DT II may well be >2600 USCF. However, I would still maintain
that Genius and the rest are >200 points lower... they *don't* win 1/4
like a 2400 player should against a 2600 player.
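The "win 1/4" figure above follows from the standard Elo expectancy formula; a quick check:

```python
def expected_score(rating, opp_rating):
    """Standard Elo expected score per game."""
    return 1.0 / (1.0 + 10.0 ** ((opp_rating - rating) / 400.0))

# A 2400 facing a 2600 should score about 24% -- roughly the
# one-game-in-four benchmark:
print(round(expected_score(2400, 2600), 2))   # → 0.24
```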

You are correct about computer vs. computer not showing much. However, one
thing I've never seen (yet, anyway) is program "a" being much better than
program "b", yet program "b" consistently getting better performance ratings
in human tournaments than program "a". I wouldn't say it couldn't happen,
but I haven't seen it happen.

Joe Stella

May 24, 1995
>In article <3pvec9$k...@news.xs4all.nl>, engelkes <enge...@xs4all.nl> wrote:
>>Anyone wanting to win against their computer: try hippopotamus build-up,
>>or one of the openings 1. h4 2. e3 3. Rh2 4. g3 5. Qf3 6. Qh1 and wait
>>for the moron to castle short, or 1. h4 2. a4 3. a5 4. f3 5. Nh3, wait
>>for the idiot to take the pawn on h4 and double your rooks on the wing
>>where he will castle.

Eventually learning algorithms will take care of this problem, if it
is not solved by other means.

Why do you have to say "moron" and "idiot"? Does doing this make you feel
intelligent or something? Programmers want to write a program that sells,
so they do not devote much time to handling the absurd type of play you
are talking about. Most people want a computer program that they can play
against to help them improve their chess. Just about all the top PC programs
do this very well. Play them as if you are playing another human and
you will get a good work-out, guaranteed to improve your game.


Joe Stella


Robert Hyatt

May 24, 1995


Exactly. This gets overlooked too often, but when you see the notation
G/* where *=anything, the rating is suspect, since this implies that for
some games, there is a time scramble at the end. The machines *always*
scramble "better".


>
>>Please correct me if my information is wrong.
>>
>>Thanks, Chris
>>
>Thanks for the opportunity to bring these issues out into the open. They've
>been buried far too long!
>
>You're welcome!
>
>-hal

Doctor SBD

May 24, 1995
Stella's point is interesting. I guess it has something to do with chess
style. I would never play a computer like it was a human because that
would not be taking advantage of its weaknesses. I suppose this is a
Laskerian approach to chess.

As a corollary point, how many of you purposely "play into" your club
master's strengths in order to improve your game (and would that even
work?)? Or do you try to figure out his weaknesses, and take advantage of
those? Maybe it's a little of both. I try to be somewhat fluid and adapt my
style to take advantage of my opponent's weaknesses.

To try to finish this thread, what software developer is willing to play
his/her machine/program in 100-200 games across the US and the world to
see what the result will be, with participation of the software announced
in advance of each tournament? For me, it would settle my doubts about the
strength of the programs. I also realize the logistical problems as well.

Robert Hyatt

May 25, 1995
In article <3q0mj6$n...@newsbf02.news.aol.com>,

Doctor SBD <doct...@aol.com> wrote:
>Stella's point is interesting. I guess it has something to do with chess
>style. I would never play a computer like it was a human because that
>would not be taking advantage of its weaknesses. I suppose this is a
>Laskerian approach to chess.

I think an example of what Joe was thinking of would be the way I
would use a computer to prepare for the US open Speed Chess
tourney. Here, I would *not* play "against" the computer, rather
I would play "wild" openings and let the machine give me a tactical
lesson. I've seen computers serve as excellent "tactical tutors"
in this regard.

In reality, I used to spend less time trying to pick on the weaknesses
of the machine than I did trying to pick on my own weaknesses and let
the computer help me improve myself. It's a "slippery slope" where
once you personalize the machine and your major goal is to beat *it*
then you are probably losing a significant advantage it offers.

>
>As a corollary point, how many of you purposely "play into" your club
>master's strengths in order to improve your game (and would that even
>work?)? Or do you try to figure out his weaknesses, and take advantage of
>those? Maybe its a little of both. I try to be somewhat fluid and adapt my
>style to take advantage of my opponent's weaknesses.
>
>To try to finish this thread, what software developer is willing to play
>his/her machine/program in 100-200 games across the US and the world to
>see what the result will be, with participation of the software announced
>in advance of each tournament? For me, it would settle my doubts about the
>strength of the programs. I also realize the logistical problems as well.

Steven Schwartz

May 25, 1995
My two cents in response to Hal Bogner... ICD does indeed quote ratings based solely upon 40/2 results because we, too, believe that the "OFFICIAL" action chess ratings "awarded" by the USCF are very misleading.

However, I must take exception with Hal's characterization of how we use our estimates to "maximize the sales results of those products that ICD can make more money selling." Our estimates are based solely upon the research done by IM Larry Kaufman and National Master Nick Schoonmaker for Computer Chess Reports. Since neither works on commission, neither has any financial interest in the top software or chess playing hardware we sell, and each is truly dedicated to searching for the truth (whatever THAT is), we feel that we are relying upon a neutral source that has no interest in what we sell. In fact, it was Larry who wrote a scathingly negative review of Kasparov's Gambit - a program he wrote!!! Since ICD sells every top chess playing program and stand-alone chess computer, we are in the enviable position of not having to resort to such questionable sales tactics.

I agree with Hal that "caution is needed when asking people who derive income from the creation or sale of products" - after all, "Buyer Beware" is a saying that has been around for decades, and we have had a myriad of competitors over the last 17 years that have kept the Better Business Bureau hopping - but you only need to speak with our customers to find out that we are a VERY DIFFERENT kind of mail order company. I think it is safe to say that we have, by far, the largest percentage of repeat customers in the industry. In fact, I am sure that many of you reading this have dealt with ICD in the past and I trust it was a pleasant experience. That will never change.

Steve

eg. johndoe@morgue.com

May 25, 1995
> LGT...@prodigy.com (Steven Schwartz) writes:
> but you only need to speak with our customers to find out that we are
> a VERY DIFFERENT kind of mail order company. I think it
> is safe to say that we have, by far, the largest percentage
> of repeat customers in the industry. In fact, I am sure
> that many of you reading this have dealt with ICD in the
> past and I trust it was a pleasant experience. That will
> never change. Steve

I have no connection with ICD, other than a Satisfied (if only occasional)
Customer for appx. the last 15 years. I have always received very
good service from this firm, and the chess computers have always
worked.

I also greatly enjoy the computer chess newsletter they put out.

my two cents

Reg Barron


Steven Schwartz

May 25, 1995
Hi Glenn,
I have a faxed copy that is somewhat unclear, but here is the best I can
do:
Chess Genius X = 2662
M-Chess Pro = 2652
Hiarcs = 2631
Socrates = 2487
W-Chess = 2424
From what I can understand, Hiarcs, Socrates and W-Chess were actual
commercially available versions. I hope this is helpful. Steve
P.S. Hiarcs won on tie-break points, and Marty Hirsch says that since
Genius ran in two different formats but with the same program, his M-Pro
really had a higher performance rating than Genius.


T. M. Cuffel

May 25, 1995
In article <3pva37$c...@wabash.iac.net>,

Christopher Dorr <crd...@iac.net> wrote:
>error@hell wrote:
>: In article <3prjma$h...@wabash.iac.net>, crd...@iac.net (Christopher Dorr) says:
>: >
>: >How is it possible that the best available PC programs are only
>: >2300/2400, and the above objective evidence can exist?
>
>
>: There are a number of simple answers that you continue to overlook.
>
>: 1. GMs are not experienced in playing computers, and don't know
>: how to adapt their style to beat computers. The Kasparov near-fiasco
>: against Genius is a good example of this.
>
>1. By now, virtually all GM's are experienced against computers. From
>what I understand, the vast majority of GMs own and use both database
>programs and playing programs.

Using a database isn't going to teach you a damn thing about computer
play.

And a lot of professionals in all fields don't use computers despite
the valuable tools they can provide. Setting up, maintaining, and
even using a computer requires a level of sophistication a lot of us
net.types take for granted. Many people prefer to do without.



>Also this is irrelevent to an extent. Both
>the GM's and the computers are playing chess, pure and simple. A GM
>should be able to respond appropriately to the position on the board,
>whether created by a human or a computer.

It is completely relevant. If Kasparov suffered brain damage and
was no longer capable of remembering Black's ideas in Bird's opening,
he would remain quite strong until people realized this and adapted
to this peculiarity in his style by playing the Bird against him,
something they ordinarily would not do. Similarly, computers have
certain peculiarities in their styles -- once GMs are aware of them,
they will exploit them by making moves they would not use against
humans.


--
Beware the advice of successful people;
They do not seek company.
- Dogbert

Al Cargill

May 26, 1995
In article <3q1sdq$i...@pelham.cis.uab.edu>
hy...@willis.cis.uab.edu "Robert Hyatt" writes:

> In reality, I used to spend less time trying to pick on the weaknesses
> of the machine than I did trying to pick on my own weaknesses and let
> the computer help me improve myself. It's a "slippery slope" where
> once you personalize the machine and your major goal is to beat *it*
> then you are probably losing a significant advantage it offers.
>

It was very interesting to see how human players approached playing
computers in the British Major Open (under approx 2200-2300 ELO) a
few years ago when I was operating one. The majority of the stronger
players, i.e. 2100 and above, chose to take the computers on at playing
chess, whilst those below chose to try and repeat lines played in
earlier rounds (3 identical computers/11 rounds, plus all games
available from previous rounds).
Al

Ralf W. Stephan

May 27, 1995
Robert Hyatt writes:
> ... However, one
> thing I've never seen (yet, anyway) is program "a" is much better than
> program "b", yet program "b" consistently has better performance ratings
> in human tournaments than program "a". I wouldn't say it couldn't happen,
> but I haven't seen it happen.

Something like this has happened at one time when the Saitek Turbo 432
consistently beat the Novag SuperConstellation, due to its killer opening
library, but was much worse against other programs.

I admit that this is a special case, and unlikely to happen nowadays.


ralf
--
aaabcdeeeeeeegghhhhhiiilmmnnoprrssssttttwyROTT25-. | PGP-mail welcome
1024/A713ECE9 Fingerprint = 22 0C 59 E3 5A C8 71 14 31 A7 6B 23 5A F9 62 ED

Joe Stella

May 29, 1995
In article <3q0mj6$n...@newsbf02.news.aol.com>
doct...@aol.com (Doctor SBD) writes:

>I would never play a computer like it was a human because that
>would not be taking advantage of its weaknesses. I suppose this is a
>Laskerian approach to chess.

_I_ suppose this is the reason why your chess is not improving.

Playing silly moves just to beat the machine will not improve your game
against humans. If you want to do that, start studying your own weaknesses
and forget about the computer's weaknesses.

Joe S.


Joe Stella

May 30, 1995
In article <3qf3k0$h...@maze.dpo.uab.edu>
SHR...@UABDPO.DPO.UAB.EDU ) writes:

>In article <joes.256...@ultranet.com>, jo...@ultranet.com (Joe Stella) says:

>>_I_ suppose this is the reason why your chess is not improving.

>> Joe S.


>When did he ever say his chess was not improving? I thought the thread
>was about beating computers??

>You really have to wonder if some people read these messages
>before they respond.

I agree. In an earlier post in this thread, I was responding to people that
were saying "computers are not such strong players because there are
certain move sequences which beat them easily". My point was that these
"silly" move sequences ("silly" because they would never work against humans)
are so effective because chess programmers are not trying very hard to handle
this type of thing. Current chess programs will give you a very good game
if you play against them as if they are human. They will point out your
weaknesses to you, which is what you should study if you want to improve.

This is the best strength of modern chess computer programs, and anyone
who spends a lot of time trying to beat the machine by probing for
"machine weaknesses" that a human does not have *cannot* be improving
their game against humans.

The person I was responding to above said "I would never play a computer as
if it were human because that is not taking advantage of its weaknesses".
Fine, but if you "take advantage of its weaknesses" then you are not
taking advantage of its strength, which is to point out your own weaknesses
to you if you proceed as I outlined above.

Joe S.


SHR...@uabdpo.dpo.uab.edu

May 31, 1995
In article <joes.260...@ultranet.com>, jo...@ultranet.com (Joe Stella) says:
>The person I was responding to above said "I would never play a computer as
>if it were human because that is not taking advantage of its weaknessses".
>Fine, but if you "take advantage of its weaknesses" then you are not
>taking advantage of its strength, which is to point out your own weaknesses
>to you if you proceed as I outlined above.

So therefore, a marathoner with a strong finishing kick should not take
advantage of that finishing kick by pacing himself, but instead run hard
with the frontrunners?
(whereupon he will lose the race)

It seems to me that when you train for any sporting event, you should try
to capitalize on your strengths. You will spend some time working on
your weaknesses, but improving those will only get you so far.

It would also make sense to me that someone who trains by identifying
the weaknesses of the computer may develop a corollary ability to do so
in humans as well. Thus maybe they *can* be improving against humans
as well. Isn't that at least one aspect of chess play, determining whether
certain move or move sequences are bad?

Finally, I think that programmers should develop chess programs with
"silly" computer mistakes in mind. I think they are trying to handle that
sort of thing, and should.

Robert Hyatt

May 31, 1995


Your analogy is not so good. Let's try this: you train for a marathon.
However, you need someone to train with for motivation. You pick someone
that is a short or middle-distance runner. Since he can't run the entire
26+ miles, you compromise to (say) one mile. You then learn his weaknesses
in this distance and learn how to pace yourself to beat him. However, what
has this done to improve your marathon performance?

Same with computers. If you continually play 1. a3 to "get 'em out of book",
how are you improving your opening skills? If you continually play h4, g4, etc.
to attack their king, since the computers are relatively poor at defending right
now, how are you improving your chess skills for playing against humans, who
recognize what you are doing immediately, relocate a couple of pieces, and
leave you with a terrible pawn weakness and nothing to show for it?

If you are preparing to play a machine in a tournament, I agree that probing
its weaknesses is the proper path. However, if the likelihood is low that you
will meet this program OTB, then learning how to beat it is about as useful
as learning how to knit.

MR. STEVEN DOWD

unread,
May 31, 1995, 3:00:00 AM5/31/95
to
In article <3qi4ur$c...@pelham.cis.uab.edu>, hy...@willis.cis.uab.edu (Robert Hyatt) says:
>

>Your analogy is not so good. Let's try this: you train for a marathon.
>However, you need someone to train with for motivation. You pick someone
>that is a short or middle-distance runner. Since he can't run the entire
>26+ miles, you compromise to (say) one mile. You then learn his weaknesses
>in this distance and learn how to pace yourself to beat him. However, what
>has this done to improve your marathon performance?

Robert Hyatt - Either you misunderstood my analogy or I did not phrase
it correctly (possible also!). We are saying the same things (I think). I am
going to try to think up another analogy since I think my following two
points are valid.


Robert Hyatt

unread,
May 31, 1995, 3:00:00 AM5/31/95
to
In article <3qib6g$b...@maze.dpo.uab.edu>,


Likely a misunderstanding. My basic premise is that if a human plays a
computer repeatedly, and begins to play in such a way that he is simply
exploiting some built-in weakness of that particular program (or of computer
programs in general), the human is not doing much to improve the quality
of his chess against other human opponents.

If you play "normal chess" against a computer, most of the time you will
get a "normal position" where your usual chess skills and positional
understanding apply, and where they will be sharpened. Playing anti-computer
chess, by contrast, is sort of like learning to box a one-armed man...
useful until you encounter one with two arms, when all you have learned
can be thrown out the door.

I would assume that everyone is working as hard as I am to eliminate the
well-known computer weaknesses. It's a hard problem, but not (IMHO) an
impossible one. However, we are talking evolution and not revolution
to make progress. Our (mine for sure) goal is to eliminate these "computer
quirks" so that they are no longer there to exploit. After all, my goal is
to crush you, not teach you. :^)

Bob
