
Berliner paper about Botvinnik


Shane Hudson

Sep 10, 1994, 5:35:22 PM
Here is, as promised yesterday, an interesting paper showing just
what Botvinnik has (or hasn't, actually) done in Computer Chess.
I ftp'ed it from some site at Carnegie-Mellon (sorry, can't remember
the exact address) sometime early last year, I think.

-------CUT HERE -------------------

Playing Computer Chess in the Human Style

Hans J. Berliner [Footnote: The opinions expressed in this
article are those of the author only, and do not necessarily reflect
those of any organization he is or has been affiliated with.]

School of Computer Science Carnegie Mellon University Pittsburgh, PA
15213

"The time has come", the Walrus said, "To talk of many things: Of
shoes -- and ships -- and sealing-wax -- Of cabbages -- and kings --
And why the sea is boiling hot -- And whether pigs have wings."

Lewis Carroll in "Through the Looking Glass".

INTRODUCTION

All of us who have achieved a chess program good enough to enter a
meaningful competition have had the following experience, which I
first had around 1969. Richard Greenblatt, whose program, MacHack VI
[7], had made some major breakthroughs in entering and surviving human
competitions, was good enough to make his program available to
Carnegie Mellon University Computer Science. Knowing it was the best,
it was immediately a challenge to try to see how my program, J. Biit,
could do against it. This was easily done by playing a pair of games
starting from the original position and looking at the result.

It did not take me long to find out that under the same settings,
MacHack VI always played the same game. Being a top-notch chess
player, it was easy for me to choose openings for J. Biit that
maximized its winning chances. However, even that was not enough,
since it would sooner or later make some blunder and lose positions in
which it was clearly favored to win. However, not to worry; we merely
extend the opening book so that it gets over these humps without
blundering. Eventually this technique resulted in J. Biit getting a
score of 1.5 - 0.5 from the starting position. Now we could say J.
Biit was able to beat MacHack VI. What a joke!!
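The exploit described above rests on determinism: under fixed settings
a search-based engine is a pure function of the position, so the same
game repeats every time, and an opening book consulted before the
engine can be extended hump by hump. A minimal sketch of that
mechanism (toy "moves" and hypothetical names; nothing here is MacHack
or J. Biit code):

```python
def legal_moves(position):
    # Toy move generator: a "move" appends one of three letters.
    # Stands in for real move generation.
    return [position + m for m in "abc"]

def engine_move(position):
    # Stand-in for a fixed-depth search under fixed settings: a pure
    # function of the position, hence perfectly repeatable games.
    return min(legal_moves(position))

def play(book, plies):
    # The opening book is consulted before the engine, so extending the
    # book over each observed blunder steers every future game -- the
    # "manicuring" described above.
    position = ""
    for _ in range(plies):
        position = book.get(position, engine_move(position))
    return position
```

Here `play({}, 4)` yields the identical game every time it is called,
while seeding the book at any single position changes every
continuation from that point onward.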

This is a classical case of what I call manicuring. The program is
manicured, just like a fine lady in a beauty shop, until her finger
nails are just right. The result is that we have a beautifully
manicured program that can do these few examples excellently, but can
do very little else. Anyone who has been through this very likely
recognizes the syndrome, and now laughs at his early behavior, as I do
today.

In many fields, competition among researchers is measured by the
excellence of individual presentations. Meetings are held and each
researcher presents his latest result on his favorite problem. There
is a lot of manicuring and the art of presentation can rise to extreme
heights in the service of those who wish to call attention to their
results. It is not surprising that progress in such fields is very
slow.

We owe it to Greenblatt who brought computer chess out of the closet,
and to Monty Newborn who had the courage to do the organizing, that
computer chess is a healthy discipline in which the world's best
testing mechanism exists. Here, we do not barrage each other with
promotional material. We compete, and it becomes clear who is
achieving and who is best. Further, we don't just play parts of the
game, or deal with our own pet problems; we play the whole game and
this forces progress in those areas that are unpleasant to deal with.
This is the path to generality, and it is what science is about. The
point of presenting the result of a program working on a problem is to
show how it can deal with problems that belong to some set that are
recognized as interesting. It is not to trot something out of the
closet and say "Oh, look how well my program does on this!".

Any person who would achieve greatness has his dreams. It is one
thing to do some manicuring to give oneself the courage to continue.
However, anyone who would be a scientist should have enough
understanding to distinguish manicured results from real ones, and to
spare his audience the pain of having to do the distinguishing that he
failed to do. In fact, anyone who would build a scientific
reputation, had better not do this more than once (when very young)
else he will find that the world turns a deaf ear to his future
publications. A *responsible* researcher has certain duties to his
readership. I have found that in talking to my peers, almost everyone
knows real research from the manicured stuff, and those that manicure
never are admitted to the inner circle of top researchers.

Now let us turn to what it means to play in the "Human Style".
Shannon in his ground-breaking paper [13] first discussed this, saying
he did not know how to do it. Allen Newell contributed an excellent
paper in 1954 [8], in which he dealt with some of the mechanics of how
to reason, and judge in the context of chess. This is still one of
the great papers on computer chess.

Since then a number of efforts on playing chess in the human style
have taken place. The Newell, Simon and Shaw effort is documented in
[9]. There was an effort by Euwe, DeGroot, Berge and others under the
auspices of the EURATOM agency in the 1960's. In 1975 I received my
Doctorate for a thesis entitled "Chess as Problem Solving: The
Development of a Tactics Analyzer" [1]. A group at the University of
Minnesota had a very competent program named CHAOS that competed
regularly in the ACM competitions in the 1970's. There were also some
very nice papers by Pitrat [12], Wilkins [14], and some others. At
present my student Chris McConnell and I are very far along with a
program called B* Hitech that uses a modification of the B* search
algorithm that was published in [2], and improved upon in subsequent
work by Palay [10, 11]. This program has already played in several
competitions and has done moderately well. Thus there is a scientific
cadre of competent researchers who have actually built programs, and
believe that it is worthwhile to spend time on building human style
chess programs.
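For readers unfamiliar with B*, the core idea is this (a sketch only,
with invented numbers; this is not the B* Hitech code): each candidate
move carries a pessimistic and an optimistic bound on its value, and
the search stops as soon as one move's pessimistic bound reaches every
rival's optimistic bound. Here "expanding" a move stands in for
searching deeper beneath it:

```python
class Move:
    """A root move with an interval [pess, opt] bracketing its value."""
    def __init__(self, name, true_value, pess, opt):
        self.name, self.true_value = name, true_value
        self.pess, self.opt = pess, opt

    def expand(self):
        # Stand-in for deepening the search under this move: each
        # expansion narrows the interval toward the (hidden) true value.
        self.pess = min(self.true_value, self.pess + 1)
        self.opt = max(self.true_value, self.opt - 1)

def b_star(moves, max_expansions=1000):
    """Best-first loop: stop when one move is proved best."""
    for _ in range(max_expansions):
        best = max(moves, key=lambda m: m.pess)
        rival_opt = max(m.opt for m in moves if m is not best)
        if best.pess >= rival_opt:
            return best            # no rival can still overtake it
        # PROVEBEST strategy: expand the most optimistic move, trying
        # to raise its pessimistic bound past the rivals.
        max(moves, key=lambda m: m.opt).expand()
    return None
```

Palay's refinement [10, 11] replaces the fixed bounds with probability
distributions, but the termination test is the same in spirit.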

DR. BOTVINNIK AND HIS ROLE IN COMPUTER CHESS

In 1970 a magnificent book by Dr. Botvinnik was published. It was
called Computers, Chess and Long-Range Planning [4]. It is not clear
what the book had to do with long-range planning, but the parts on
computer chess were magnificent. They dealt in quantitative ways with
the problem of how to encode those things that top chess players see
at a flash, and that are very difficult to encode and time consuming
to execute. I had spent quite some time thinking about these things
myself, and found Botvinnik's formulations although expensive
computationally, to be very exact and doable. They had a considerable
influence on my program J. Biit and later programs such as my thesis
program CAPS, Patsoc, and Hitech.

Having delivered his magnum opus, Botvinnik apparently set out to
develop the program of his dreams. However, doing this was not as easy
as writing about it. For 14 years the world waited to hear how he was
doing, and there were only murmurs about his slow progress and various
vicissitudes in getting programmers and computers. Then in 1984 the
English-language "Computers in Chess: Solving Inexact Search
Problems" [5] appeared. I pored over the manuscript intently to find
what the great man had done, and found instead a publicity job that
had clearly been engineered by Botvinnik and the publisher.

Why do I say this? It is easy to justify. There have never been any
published games played by the program Pioneer. There are a few
examples of its problem solving ability in the above book, but they
are considerably lacking in significant content. In his earlier work,
Botvinnik had posed 60 problems for the future great program to solve.
Of these only one, a well-known pawn endgame miniature by Reti, was
included (see Figure 1). What about the others? Clearly they were
much too difficult for Pioneer. However, two other examples of
Pioneer's prowess were conjured up to provide some additional data.

-- -- -- -- -- -- -- WK
-- -- -- -- -- -- -- --
BK -- WP -- -- -- -- --
-- -- -- -- -- -- -- BP
-- -- -- -- -- -- -- --
-- -- -- -- -- -- -- --
-- -- -- -- -- -- -- --
-- -- -- -- -- -- -- --

Figure 1

At that time any program that had a hash-table and could search to a
depth of 9 in positions of such reduced material (as all the top
programs could) would have solved the Reti study in a few seconds.
Even then, it might have been a meaningful achievement for a program
doing its investigation exclusively in the human style. However,
looking at the tree that is presented, we find that a branch of the
analysis goes 1. Kg7,Kb5??; 2. c7,h5; 3. c8=Q with a value of 8.
Another branch goes 1. c7,Kb7; 2. c8=Q,K:c8 with a value of -200.
Are we to understand that Pioneer thinks that the second position is
better for black than the first is for white? Apparently so! This
must be because the program has patterns that tell it that the black
pawn in the second variation cannot be stopped from queening. If this
is so, then how could this program play 1.--Kb5 in the first
variation, which clearly allows a pawn that could not queen before to
queen now? The whole analysis contains only 28 nodes, so it is very
sparse and to the point. Yet this branch stands out as something
unusual. Also, there is the branch 1. Kg7,Kb7; 2. c:g7 (takes the
king)!, with a value of +200. But another branch goes 1. Kg7!,h4; 2.
Kf6!,Kb6; 3. Ke5!,Kb7, but this time the value = 0, and no attempt is
made to capture the king. What gives? Another branch goes 1.
Kg7,h4; and now there are only two alternatives tried {Kf6 and c7}. A
program that tries some of the insane moves such as Kb7 and Kb5 above,
might be forgiven for trying 2. Kg6 in this situation, but it doesn't.
One could expect a "very knowledgeable" program to:

  - Not waste effort on illegal moves.

  - Realize that an unstoppable passed pawn on the 6th rank and on
    move will easily outdistance another such pawn that is only on the
    4th rank.

The point here is that the more knowledge a program has, the fewer
meaningless searches are done. Therefore it behooves the programmer
to do little things that save effort.
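The earlier claim about hash tables is easy to illustrate generically.
The sketch below uses a toy subtraction game (take 1-3 stones, taking
the last stone wins) as a stand-in for a reduced-material chess
position; none of this is code from any program named here. It shows
why a plain negamax plus a transposition ("hash") table dispatches
such positions quickly: every repeated position is evaluated once.

```python
def negamax(stones, table=None):
    """Value of the subtraction game from the side to move's view:
    +1 if the mover wins with best play, -1 otherwise."""
    if table is None:
        table = {}               # the transposition ("hash") table
    if stones == 0:
        return -1                # opponent took the last stone: we lost
    if stones in table:
        return table[stones]     # transposition hit: no re-search
    value = max(-negamax(stones - take, table)
                for take in (1, 2, 3) if take <= stones)
    table[stones] = value
    return value
```

Without the table this toy search is exponential in the depth; with
it, each distinct position is solved exactly once, which is the same
economy that made depth-9 searches of bare pawn endings routine on
1984 hardware.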

It is easy for the experienced chess programmer to detect manicuring.
This example is only one of several in this book. One gets the
feeling that the only knowledge Pioneer uses for a particular problem
is that needed to produce a tree of the type Botvinnik wants.

I found the back cover promotion for this book to be particularly
offensive. It is truly awful that a scientific book should be allowed
to spout forth on its cover "This book provides insight into important
(master-level) chess programs such as PIONEER, which will be of
interest to chess enthusiasts". Pioneer is not a master-level
program. It very likely is not even a Class D program, if it is even
capable of playing a full game of chess at all. To the best of my
knowledge there does not exist a single published game it has played.

THE LATEST BOTVINNIK PLOY

In the June 1993 issue (Vol. 16, No. 2) of the ICCA Journal we are
treated to the latest of Botvinnik's ploys. Here he chides Richard
Greenblatt, without whom we might still all be in the dark ages, for
believing that it is unlikely that a chess program modelling a
master's thought could be successful. He then presents three examples
of his program, which apparently is no longer called Pioneer
[Footnote: If Leningrad can again become St. Petersburg, then it is
probably politically correct to dispense with the name Pioneer also.]
The first of these is shown in Figure 2. It is from a game
Kasparov-Ribli played in 1989.

-- -- -- -- -- BR BK --
-- -- -- -- -- BP BP BP
BP -- WQ -- BP -- -- --
-- WR -- -- -- -- -- --
BQ -- -- -- -- -- -- --
-- -- -- -- BB -- WP --
WP -- -- WR WP WP -- WP
-- -- -- -- -- -- WK --

Figure 2

The reader is advised to read the following paragraphs very
*carefully* if he would like to understand the points at issue.

Botvinnik shows the analysis of his program on this position. At the
end he states "This puts an end to exploring further variations,
because White has a draw home and free and a win is not excluded".
One variation he gives is 1. Rd8!,Qb5; 2. Qd6!, B:f2+; 3. K:f2,Re8;
4. Qe7 satisfying the stated goal. At black's 3rd turn, no other move
than Re8 is considered. Presumably the beckoning 3.--Qf5+ is
dismissed since *it can't possibly lead to a win for black*.
Botvinnik knows that, and I know that, but can a computer program
figure this out without searching? Presumably the key idea is that a
queen alone cannot win against a king. However, if the black queen
can check at a5, f6 or g5 it can win the white rook at d8 and with it
the game. Further, if just one piece on the board, the white pawn at
g3, were removed or moved to h3 white would be lost because one of
these checks would become a reality. I doubt very, very, very much
that any program can analyze this situation correctly without search.

So this is the first faux pas of the manicurist. The second is so
gross that even the translator discovered it. In the variation above,
at the end of the line given, black wins with 4.--Qb6+. This is
alibied in a footnote as being due to a bug, but apparently the
analysis as published pleased Botvinnik. It met the manicurist's idea
of how the analysis should proceed.

To close this part, I submitted the Figure 2 position to B* Hitech,
which as most readers know is a program using a modified B* search.
It found the line 1. Rd8,Qb5; 2. Qd6,B:f2+; 3. K:f2,Re8; 4. a4!! and
now white does win since if 4.--Q:a4; 5. Qe7!, and if 4.--Qf5+; 5.
Kg2,Qe4+; 6. Kh3!,Qf5+; 7. g4, Qf1+; 8. Kg3,Qg1+; 9. Kf3,Qf1+; 10.
Ke3!,Qc1+; 11. Kf2 and wins. All this was found by B* Hitech in about
two minutes.

When I published this on the chess net, F. H. Hsu contributed the
following information. He and DT2 had analysed this position with
Kasparov, and DT2 was unable to see the whole line to the end.
However, by piecemeal analysis it became clear that white does indeed
have a win, and the longest line is 3.-Qf5+!; 4. Kg1! (not Kg2 because
of Qd5+),Qb1+; 5. Kg2,Qe4+; 6. Kh3,Qf5+; 7. g4,Qf1+; 8. Kg3,Qg1+; 9.
Kf3,Qf1+; 10. Ke3!,Qh3+!; 11. Kd4,e5+!; 12. Kd5, Qg2+; 13. Kc5,Qg1+;
14. Kc6,Qh1+ (Qg2+ and Qc1+ are also possible but make no difference);
15. Kb6,Qg1+; 16. K:a6. The total variation is 31 ply in length. It
is no wonder DT2 was not able to find such a deep variation.

With this new information it was possible to identify a serious bug in
the search algorithm, and B* Hitech now finds the whole solution in
less than 10 minutes. It saw all of the variations presented above and
many dozens more. It has also solved a large number of other problems
(including the famous Botvinnik - Capablanca position that Pioneer
specializes in). The B* search control alone encompasses more than
1000 lines of program. All the above will be the subject of a future
article.

BOTVINNIK AS PART OF THE "HUMAN STYLE" COMPUTER CHESS PICTURE

It is, perhaps, wise at this point to take stock of Botvinnik as a
candidate computer expert. What does he show he knows? In his last
book he talks about things such as "Stilman threw out the procedures
for the deblockade of trajectories" to speed up the program [5], p.
58. I suspect that most of my colleagues would use compile-time
switches for this. Or even better, how about some code that decides
whether deblockading trajectory analysis is appropriate in the given
position. There are many other strange comments that could only come
from someone who has never written a significant program. The whole
thing smacks of rank amateurism. Further, Botvinnik seems to have no
inkling that anyone else (except Euwe) has tackled these problems.

When one reads Botvinnik's writings one never sees anything about
search control. He must assume it is automatic. He spends a lot of
time on "trajectory analysis" which I take to mean planning. But
planning alone cannot do the job. It can maintain a view that the
program is trying to implement. However, there must be criteria to
decide which plan is most promising, and when to terminate a plan that
is not working. There are moments when it is necessary to switch
plans or even institute a new one when the opponent thwarts your plan.
Even when a plan has been successfully completed, it is necessary to
make sure that the opponent does not have an effective counter-plan.
In [6, 3] such issues are dealt with in the micro-domain of pawn
endings. Even here great difficulties were encountered, but the
issues above were addressed and with considerable success. The reader
who would like to understand such issues is also referred to the work
of Pitrat [12], which represents a few man-months of effort, and shows
itself to be considerably more apt at solving text book problems than
Pioneer.
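The division of labor argued for here can be caricatured in a few
lines (all names hypothetical; this is the shape of the argument, not
Hitech or Pioneer code): plans are tried in order of estimated
promise, and a shallow search acts as devil's advocate, forcing a
switch whenever some step of the current plan can be refuted.

```python
def choose_plan(plans, refuted):
    """plans: (name, steps) pairs in order of estimated promise.
    refuted: steps a shallow verification search shows the opponent
    can meet -- the devil's-advocate check the text calls for."""
    for name, steps in plans:
        if any(step in refuted for step in steps):
            continue             # search refuted a step: drop this plan
        return name              # first plan the search cannot refute
    return None                  # no plan survives: fall back to search

# E.g., with a pawn on g3 covering the a5/f6/g5 checks (Figure 2),
# the rook-winning plan is refuted and the drawing plan is chosen:
plans = [("win rook via queen check", ["Qa5+", "QxR"]),
         ("hold the draw", ["Qd6", "Qe7"])]
```

A real integration would interleave this per move rather than decide
once up front, but even this caricature has the two ingredients the
text demands: a criterion for switching plans, and a search that can
veto a plan the planner likes.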

-- -- -- -- -- -- -- --
-- -- -- -- -- -- -- --
-- -- -- -- BB -- BP --
-- -- WB BP -- -- -- BP
-- -- -- -- -- WP -- WP
-- BP WK -- WP BK -- --
-- -- -- -- -- -- -- --
-- -- -- -- -- -- -- --

Figure 3

It is unthinkable that Botvinnik can address the whole of chess
without some general mechanisms of this type. The fact that Pioneer
stopped the analysis at a point where it was going to lose a rook
(Figure 2) attests to the inadequacy of his approach. Planning and
search must be tightly integrated, and no plan can ever be so good as
to anticipate all the things that can go wrong. For that one needs to
have a search to act as devil's advocate. Just think what a failure
the total planning of the Soviet bureaucracy turned out to be.

Looking at the analysis of position 3 (Figure 3 above) in Botvinnik's
ICCA article, one is also not convinced about what it is doing. The
analysis is correct as far as it goes. However, there are some
branches that are terminated in positions that are truly difficult to
believe a machine would understand. For instance, the branch 1.--g5;
2. f:g5,d4+ is left off where white will be a pawn ahead. White has
two passed pawns and black only one, yet white is losing??! Are we to
understand that the program understands that white is lost?

It is possible that Pioneer analyzed the various trajectories and came
to the conclusion that if the pawn at h4 could not be protected the
position is lost for white. However, to do this by analysis instead
of search would require an incredible amount of specially constructed
(for this problem) knowledge. It is definitely the wrong way to do
things; even if it were successful in this particular case. To
consider how unlikely this is to work in the general case, if an
additional black pawn were placed at h7, black would not be able to
win. In that case 3. e:d,Kg3; 4. g6!, and black must attend to the
pawn at g6, when 5. Be7 assures the draw. Would Pioneer be able to
analyze the difference in these two positions without search? I very
much doubt it!

From another point of view, a program that could do the above analysis
correctly without any search would have to be 20 pages long. The
issue could probably have been settled by a search of a mere 20
additional nodes. If one had to have such a tremendous amount of
program to resolve a single issue instead of searching it out, then
the whole chess enterprise is doomed to failure since there are many
thousands of such situations that would have to be treated with
similar amounts of program.

My thesis program, CAPS [1], which was put to sleep 18 years ago, also
did all sorts of clever things and was significantly better than
anything Botvinnik and his team have done. It probably was not even a
Class D player, but at least it played. Why have I not made a lot of
hoopla about CAPS? Because I recognized its shortcomings, and have
addressed myself to correcting them. I did my thesis by myself in
four years at a time when the computer available to me was no better
than what has been available to Botvinnik in the last few years. Yet
Botvinnik and his programmers have been at it for over 20 years now.
Enough already! The only credential Botvinnik has that could cause
someone to believe him is his history as a chess player. However, he
has over-extended these many times, and his program has again and
again failed to show anything worthwhile.

CONCLUSIONS

Why am I so offended by what Botvinnik has done? A scientist must
bear some responsibility for what people in his field do and say. If
a football player attacks people in a bar, all football players
suffer in reputation. I am also directly hurt by this nonsense, as
people who fund research believe that Botvinnik is credible in the
area of computer chess, which he clearly is not. One should bear in
mind that there are at least two major computer chess tournaments
every year, and Botvinnik has never had a program of his participate.
In 1989, when the World Championships were held in Edmonton, Canada,
Botvinnik was offered a machine of the type he had been using but 100
times faster, and an all-expense paid trip for himself and an
assistant to come two weeks early and bring his program up to play in
the World Championships. He refused the offer! Instead we get this
continual sniping from the periphery about what we should be doing,
when he really has no idea what he is talking about.

Botvinnik should be ashamed of himself. I hope that when I get to be
his age, I will have enough good sense to know what I am capable of
and what not, and leave the latter alone. To try to convince others
that you are what you are not, is unbecoming of a World Class person.
Even the poor quality NSS program of 40 years ago published the games
it played. Based upon his performance, Botvinnik is not a meaningful
participant in the field of computer chess, and if he wishes to
comment upon this field, he should make clear that it is the comments
of a chess player, not a computer person.

I also wish to take issue with the editors of the ICCA Journal. It is
commendable that this journal exists; however, if they are going to
publish such an article then they should be aware of its consequences.
It is not enough to place a footnote on the last page saying "Recently
an unfortunate shortcoming in the search tree --- was discovered. ---
This mistake was due to an error that has slipped into the subroutine
that determines fork trajectories. The error will soon be corrected".
This is utter nonsense. The problem is in the program, but is anyone
so poorly informed as to believe that this was presented by Botvinnik
without his believing it was an accurate analysis of the position? It
cannot be otherwise. At the end of this example (Figure 2) Botvinnik
opines "Truly, the manner of operating of a master at the
board!"(p.72). Botvinnik is radiant about the (faulty) analysis as it
was presented. Yet the editors place his apology, some three pages
later. Is this stupidity on the part of the editors, or are they part
of the attempt to deceive?

So it becomes clear that the whole analysis was manicured to fit
Botvinnik's notion of a convincing analysis. When an obvious error
shows up in such a manicure, it makes what has happened so much
clearer. It is one thing to publish such a paper with a known error
of content by a 10 year old child. But editors are supposed to be
able to know that the world does not read footnotes, and will assume
that when a former World Champion speaks, he knows what he is talking
about. To prevent future misleading of the computer chess community,
I would strongly suggest that future articles with great scientific
import be refereed.

REFERENCES:

[1] Berliner, H. J., "Chess as Problem Solving: The Development of a
Tactics Analyzer", Carnegie-Mellon University Thesis, 1974.

[2] Berliner, H., "The B* Tree Search Algorithm: A Best-First Proof
Procedure", Artificial Intelligence, Vol. 12, No. 1, 1979, p. 23-40.

[3] Berliner, H., and Campbell, M., "Using Chunking to Play Chess Pawn
Endgames", Artificial Intelligence, Vol. 23, No. 1, 1984.

[4] Botvinnik, M. M., "Computers, Chess and Long-Range Planning",
Springer-Verlag, 1970.

[5] Botvinnik, M. M., "Computers in Chess: Solving Inexact Search
Problems", Springer-Verlag, 1984.

[6] Campbell, M., "Chunking as an Abstraction Mechanism", Carnegie
Mellon University Thesis, CMU-CS-88-116, 1988.

[7] Greenblatt, R. D., et al., "The Greenblatt Chess Program",
Proceedings of the Fall Joint Computer Conference, ACM, 1967,
p. 801-810.

[8] Newell, A., "The Chess Machine: An Example of Dealing with a
Complex Task by Adaptation", Proceedings of the Western Joint
Computer Conference, ACM-IEEE, 1955, p. 101-108.

[9] Newell, A., Simon, H., and Shaw, C., "Chess Playing Programs and
the Problem of Complexity", in "Computers and Thought", E. A.
Feigenbaum and J. Feldman (Eds.), McGraw-Hill, 1963.

[10] Palay, A. J., "The B* Tree Search Algorithm -- New Results",
Artificial Intelligence, Vol. 19, No. 2, 1982, p. 145-164.

[11] Palay, A. J., "Searching with Probabilities", Pitman Research
Notes in Artificial Intelligence, Vol. 3, 1984.

[12] Pitrat, J., "A Chess Combinations Program Which Uses Plans",
Artificial Intelligence, Vol. 8, No. 3, 1977.

[13] Shannon, C. E., "Programming a Computer for Playing Chess",
Philosophical Magazine, Vol. 41, No. 314, 1950, p. 256-275.

[14] Wilkins, D., "Using Plans in Chess", Sixth International Joint
Conference on Artificial Intelligence, 1979, p. 960-967.


Feng-Hsiung Hsu

Sep 10, 1994, 9:02:22 PM
What you got was a very early draft that contained some incorrect information
related to DT-2. I pointed out the errors on the net and Berliner corrected
them afterwards.

In article <34t8qq$p...@cantua.canterbury.ac.nz> sh...@csc.canterbury.ac.nz (Shane Hudson) writes:
>
>When I published this on the chess net, F. H. Hsu contributed the
>following information. He and DT2 had analysed this position with
>Kasparov, and DT2 was unable to see the whole line to the end.
>However, by piecemeal analysis it became clear that white does indeed
>have a win, and the longest line is 3.-Qf5+!; 4. Kg1! (not Kg2 because
>of Qd5+),Qb1+; 5. Kg2,Qe4+; 6. Kh3,Qf5+; 7. g4,Qf1+; 8. Kg3,Qg1+; 9.
>Kf3,Qf1+; 10. Ke3!,Qh3+!; 11. Kd4,e5+!; 12. Kd5, Qg2+; 13. Kc5,Qg1+;
>14. Kc6,Qh1+ (Qg2+ and Qc1+ are also possible but make no difference);
>15. Kb6,Qg1+; 16. K:a6. The total variation is 31 ply in length. It
>is no wonder DT2 was not able to find such a deep variation.

DT-1 analyzed it with Kasparov, not DT-2. DT-1, in fact, could see the
whole line to the end in an hour or so, and once given the first critical
move, saw the win in tens of seconds. DT-2 had no problem seeing the whole
refutation tree in entirety from the original position in tournament time.
I don't know where Berliner got the false idea that we were doing
piecemeal analysis, because I was making it quite clear that, when I posted
the analysis, it was solved in tournament time (3 minutes, actually about 1.5
min with the current setup). The above fact was pointed out to Berliner,
and he subsequently corrected his draft. [The revised draft was published
in ICCA Journal.] Baby Blue, when I get it running, probably would solve it
in blitz time.

One big question that is not quite clear from Berliner's description is
whether B* Hitech saw the whole line when it first played Rd8. Berliner
was making a big deal that B* did not need the "true value" of a move to
play it, and Rd8 is good for at least a draw... Maybe that is where
"piecemeal analysis" first came from? B* is OK for selecting moves, but
it does not always provide a winning variation.

Greg Kennedy

Sep 12, 1994, 3:47:12 AM
Feng-Hsiung Hsu (f...@watson.ibm.com) wrote:

: DT-1 analyzed it with Kasparov, not DT-2. DT-1, in fact, could see the
: whole line to the end in an hour or so, and once given the first critical
: move, saw the win in tens of seconds. DT-2 had no problem seeing the whole
: refutation tree in entirety from the original position in tournament time.

: Baby Blue, when I get it running, probably would solve it in blitz time.

I too got the impression that Hans was saying _your_ program was the
one doing "piecemeal" analysis, not his. :-)
You make it sound as though you can "solve" middlegame positions (win,
loss, or draw) in only minutes now. Have you considered doing the chess
public a huge service by settling once-and-for-all, some of the great
analytical disputes of history? For example: did Bobby's ...Bxh2 blunder
lose _by force_ in game 1 of his 1972 match with Spassky? And should
Boris have _moved his queen_ in the 2nd match, when Bobby attacked it
with his knight?(!!) :-) I know you're a busy man, but this would only
take a few seconds of Baby Blue's time... Thanks!
-Greg

Vincent Diepeveen

Sep 12, 1994, 8:45:15 AM
A program named CAPS is mentioned in this paper.
Can someone tell me more about it?

Greetings, Vincent Diepeveen.

vdie...@cs.ruu.nl

--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| fidonet: 2:280/206.23 ||
+======================================+

Johannes Fuernkranz

Sep 16, 1994, 6:44:22 AM
In article <CvxxJ...@hawnews.watson.ibm.com>,

Feng-Hsiung Hsu <f...@watson.ibm.com> wrote:
>in ICCA Journal.] Baby Blue, when I get it running, probably would solve it
>in blitz time.

You beg the question:
When will you be back in the ring?

We're all waiting!

Juffi
