
chess and AI


David Deininger

Apr 11, 1994, 11:02:48 AM
I saw this post in comp.sys.next.advocacy, and I'd like to hear your feelings
about Mark's statement that "chess ... is pretty much done and no longer
considered interesting" (from an AI perspective).

--david

In comp.sys.next.advocacy Mark Crispin <m...@Tomobiki-Cho.CAC.Washington.EDU>
writes:
>
> Don't believe everything you read in books, particularly about the AI lab
> hackers of the 1970s. Any book which claims that the AI industry was
> ``short lived'' (implying that it is no longer alive) is less
> than objective.
>
> What was short-lived were dedicated engines for running Lisp and other AI
> languages. It turned out that micros ran Lisp faster than dedicated Lisp
> engines ran Lisp.
>
> [Example of Expert System for credit card companies deleted.]
>
> Some of the traditional AI problems have also fallen. Chess, for
> example, is pretty much done and no longer considered interesting;
> more focus in the games sphere is on Go. Other problems, such
> as vision and reasoning, have proven to be much harder than anyone
> imagined. People no longer predict ``computers with souls'' and
> many wagers have been lost.
>
> AI these days is more big business and less glamor. This is at least in
> part because there isn't much in the way of DoD $$$ for AI to
> fund research.
>
> -- Mark -- who was there at the MIT, Stanford, and SUMEX AI labs,
> and who knows most of the players

--
--david deininger (dein...@cipher.cen.encompass.com)

Peter W. Gillgasch

Apr 12, 1994, 8:30:00 AM
In article <1994Apr11....@glv.cen.encompass.com>,
dein...@cen.encompass.com (David Deininger) writes:

>I saw this post in comp.sys.next.advocacy, and I'd like to hear your feelings
>about Mark's statement that "chess ... is pretty much done and no longer
>considered interesting" (from an AI perspective).
>
> --david
>
>>

>> Some of the traditional AI problems have also fallen. Chess, for
>> example, is pretty much done and no longer considered interesting;
>> more focus in the games sphere is on Go. Other problems, such
>> as vision and reasoning, have proven to be much harder than anyone
>> imagined. People no longer predict ``computers with souls'' and
>> many wagers have been lost.
>>

I wouldn't say that "chess is done". Maybe it is no longer considered
"interesting" by the AI community.

1. Chess is not "done"

This is in my opinion NOT related to the fact that the top human
players still have the upper hand. The problem is completely different:
Tell me ONE program that uses "traditional" AI techniques AND is
competitive with programs that use the generic CHESS 4.x approach -
full-width alpha-beta, transposition tables, move ordering heuristics
and a quiescence search. By "traditional" AI techniques I mean:

- Learning
- Classification and deduction
- Pattern matching

In other words: Selectivity that is not driven by SEARCH but by
REASONING.

The AI community has failed here.
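
For readers who have never looked inside one of these searchers, here is a
bare-bones Python sketch of the generic recipe named above: full-width negamax
with alpha-beta cut-offs, a transposition table, a crude move ordering heuristic
(captures first) and a capture-only quiescence search. The ToyPosition class is
a made-up stand-in so the fragment runs at all; this is not code from CHESS 4.x
or any real chess program, and the transposition table is simplified (a real one
also records whether a stored value is exact or only a bound).

import random

class ToyPosition:
    # A tiny synthetic "game" invented for this sketch: each node offers a
    # few moves, each worth some points to the mover, and some moves are
    # flagged as captures so the quiescence search has something to do.
    def __init__(self, key=1, score=0):
        self.key = key              # also serves as the hash key
        self.score = score          # static score, side to move's point of view

    def moves(self):
        rng = random.Random(self.key)
        return [(rng.randint(-10, 10), rng.random() < 0.25, i)   # (value, capture?, id)
                for i in range(4)]

    def play(self, move):
        value, _, i = move
        return ToyPosition(key=self.key * 7 + i + 1,
                           score=-(self.score + value))          # negamax sign flip

    def evaluate(self):
        return self.score

def quiesce(pos, alpha, beta, qdepth=4):
    # search captures only until the position is "quiet"
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    if qdepth == 0:                 # capped so the toy example surely terminates
        return alpha
    for move in pos.moves():
        if not move[1]:             # skip quiet moves
            continue
        score = -quiesce(pos.play(move), -beta, -alpha, qdepth - 1)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

TT = {}                             # transposition table: (key, depth) -> value

def search(pos, depth, alpha, beta):
    hit = TT.get((pos.key, depth))
    if hit is not None:
        return hit
    if depth == 0:
        return quiesce(pos, alpha, beta)
    for move in sorted(pos.moves(), key=lambda m: not m[1]):    # captures first
        score = -search(pos.play(move), depth - 1, -beta, -alpha)
        if score > alpha:
            alpha = score
        if alpha >= beta:           # alpha-beta cut-off
            break
    TT[(pos.key, depth)] = alpha
    return alpha

print("search value:", search(ToyPosition(), 5, -1000, 1000))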

2. Chess may no longer be "of interest" to the AI community

Over the years we have seen the astounding rise of the playing strength
of "convential" search based machine players. This implies that there
is an efficient solution to chess by algorithmic means that are
considered "trivial" or even "primitive" by the AI community. Maybe
the research on other solutions of the problem has come to a halt since
the generic approach has been such a success.

Now the AI community is shifting attention to the game of Go. This may
be due to two facts:

1. Go has a much higher branching factor. The AI community feels
that their research is "safe" because there won't be a GO 4.x
that simply kills their programs when meeting in battle.
2. Go is much more strategic than chess, which is essentially
driven by tactics.

In my opinion, the AI community will fail here too. If it is not
possible to write a REASONING chess program in 30 years, then it is
very unlikely that they will have success in any non-trivial game.

So this boils down to the question "what is AI anyway?"

AI in the traditional sense is the successful application of
computing machinery to solve problems that need some "intelligence"
from human problem solvers.

So dear AI followers: Take it or leave it - CHESS 4.x like programs
are AI applications or there are *no* AI programs. Since brute force
searchers are "trivial" and "primitive" this holds for all the stuff
you have written in the last 30 years.

Thank you.

-------------------- DISCLAIMER ------------------------------------

I don't use sigs, they are a waste of bandwidth. Nevertheless in this
special case I think it is wise to state:

THE OPINIONS EXPRESSED HERE ARE MY OWN NOT THOSE OF MY UNIVERSITY.

Kenneth Sloan

Apr 12, 1994, 8:03:00 PM
In article <199404121...@ibm3090.rz.uni-karlsruhe.de> UK...@ibm3090.rz.uni-karlsruhe.de (Peter W. Gillgasch) writes:
>...

>I wouldn't say that "chess is done". Maybe it is no longer considered
>"interesting" by the AI community.
>
>1. Chess is not "done"
>
>This is in my opinion NOT related to the fact that the top human
>players still have the upper hand. The problem is completely different:
>Tell me ONE program that uses "traditional" AI techniques AND is
>competitive with programs that use the generic CHESS 4.x approach -
>full-width alpha-beta, transposition tables, move ordering heuristics
>and a quiescence search. By "traditional" AI techniques I mean:
>
>- Learning
>- Classification and deduction
>- Pattern matching
>
>In other words: Selectivity that is not driven by SEARCH but by
>REASONING.
>
>The AI community has failed here.

Whoa...that depends strongly on what you think the goals are, and what
it means to "fail". I think it's fair to say that "the AI community"
had a lot to do with demonstrating the (somewhat surprising) strength of
"full width ...". The goal of a scientific enterprise ought to be to
find the truth, and not "My way is better than your way, NYAH, NYAH!"

Of course, this is not always widely appreciated...

--
Kenneth Sloan Computer and Information Sciences
sl...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

Robert Canright

Apr 12, 1994, 6:19:54 PM

>In article <1994Apr11....@glv.cen.encompass.com>,
>dein...@cen.encompass.com (David Deininger) writes:

>>I saw this post in comp.sys.next.advocacy, and I'd like to hear your feelings
>>about Mark's statement that "chess ... is pretty much done and no longer
>>considered interesting" (from an AI perspective).
>>
>> --david
>>
>>>
>>> Some of the traditional AI problems have also fallen. Chess, for
>>> example, is pretty much done and no longer considered interesting;
>>> more focus in the games sphere is on Go. Other problems, such
>>> as vision and reasoning, have proven to be much harder than anyone
>>> imagined. People no longer predict ``computers with souls'' and
>>> many wagers have been lost.
>>>

>The AI community has failed here.

>Over the years we have seen the astounding rise of the playing strength
>of "conventional" search-based machine players. This implies that there
>is an efficient solution to chess by algorithmic means that are
>considered "trivial" or even "primitive" by the AI community. Maybe
>the research on other solutions of the problem has come to a halt since
>the generic approach has been such a success.

...deleted stuff...

>In my opinion, the AI community will fail here too. If it is not
>possible to write a REASONING chess program in 30 years, then it is
>very unlikely that they will have success in any non-trivial game.

>So this boils down to the question "what is AI anyway?"

>AI in the traditional sense is the successful application of
>computing machinery to solve problems that need some "intelligence"
>from human problem solvers.

>So dear AI followers: Take it or leave it - CHESS 4.x like programs
>are AI applications or there are *no* AI programs. Since brute force
>searchers are "trivial" and "primitive" this holds for all the stuff
>you have written in the last 30 years.

I applauded when I read the posting above (edited down for brevity).
I agree wholeheartedly. Here's my 2 cents:

When I looked at Feng-Hsiung Hsu's dissertation & saw the curves
showing major advancement in computer skill related to computer
speed & saw the success of Deep Thought, a machine built within the
computer science discipline by men with undergraduate degrees in
electrical engineering, I concluded that chess was dead as an AI
problem. Then thinking about the lack of success in the AI field,
I concluded that AI is dead.

I'm still a believer in limited purpose expert systems, neural networks,
and fuzzy logic, but not in the broader discipline of AI.

I still think that Deep Blue & the goal of defeating Kasparov is worth
pursuing. I still believe in chess as a problem for computer automation,
but I no longer believe in AI.

bob
canr...@convex.com

T. M. Cuffel

Apr 13, 1994, 2:49:31 AM
In article <199404121...@ibm3090.rz.uni-karlsruhe.de>,

This is an interesting, and I believe somewhat inaccurate, characterization
of traditional AI. Until recently, most AI consisted of representing
problems and data in a form a machine could understand, and searching
through that representation. Chess programs rely more on throwing
their brute speed at a search than on clever representations (not
that they aren't clever, just not as important...), and therefore
seem less "artificially intelligent" than AI programs that deal more
with representation. But they both do the same thing. In fact, because
of the large body of work associated with it, chess is one of the best
paradigms for traditional AI techniques.



>The AI community has failed here.

Which is precisely why, in terms of useful AI research, chess is done.
Chess was a wonderful way to refine traditional approaches. But we
understand these approaches, and everything from here on out is bragging
rights. Chess is a terrible way to develop learning and pattern
matching techniques. Very talented people have tried, and found
this to be true. Other problems lend themselves much more
readily to developing these techniques, and much has been accomplished.
Perhaps someday a chess program that can "reason" and "learn" will
demolish all competition, but only because it employs "reasoning" and
"learning" techniques developed elsewhere. It will be doing nothing
new.

>2. Chess may no longer be "of interest" to the AI community
>
>Over the years we have seen the astounding rise of the playing strength
>of "convential" search based machine players. This implies that there
>is an efficient solution to chess by algorithmic means that are
>considered "trivial" or even "primitive" by the AI community. Maybe
>the research on other solutions of the problem has come to a halt since
>the generic approach has been such a success.

Chess algorithms are not considered trivial or primitive. Alpha-beta will
take up a goodly chunk of any beginning AI text. Chess is actually one
of surprisingly few success stories in AI. It worked, and worked well.
But now it is time to slay some new giants, and chess is just not the
medium to do this in.



>Now the AI community is shifting attention to the game of Go. This may
>be due to two facts:
>
> 1. Go has a much higher branching factor. The AI community feels
> that their research is "safe" because there won't be a GO 4.x
> that simply kills their programs when meeting in battle.
> 2. Go is much more strategic than chess, which is essentially
> driven by tactics.

Maybe a sub-sub-AI community is interested in go, but it does not hold
much interest for the rest of the field. Certainly not to the limited
extent chess ever did. They admit that go is a harder problem than
chess, but, believe it or not, most AI researchers have better things
to do than teach computers how to play games.

>In my opinion, the AI community will fail here too. If it is not
>possible to write a REASONING chess program in 30 years, then it is
>very unlikely that they will have success in any non-trivial game.

A trivial game like chess?

Funny thing about AI. Its history consists of people claiming that a
computer will never do so-and-so because it can't think/reason/learn,
then when it does, those same people redefine what it means to
think/reason/learn. You are in good company.



>So this boils down to the question "what is AI anyway?"
>
>AI in the traditional sense is the successful application of
>computing machinery to solve problems that need some "intelligence"
>from human problem solvers.

Didn't you just say traditional AI was learning, pattern matching and
deduction?



>So dear AI followers: Take it or leave it - CHESS 4.x like programs
>are AI applications or there are *no* AI programs. Since brute force
>searchers are "trivial" and "primitive" this holds for all the stuff
>you have written in the last 30 years.

You have a narrow view of AI. Chess is just a drop in the bucket.
It gave rise to some good search theory, but mostly it is proof of
concept. When you are working on making a computer understand
language or vision, it is nice to know that someone, somewhere, in
some small way, was able to emulate some small part of human intelligent
ability.

--
T. M. Cuffel "So do you have any experience watching children?"
"Only from my car."

-Some TV show

Michael David Rosen

Apr 14, 1994, 1:28:36 AM
I can't give you a good definition of AI; and I can't explain Alpha-Beta and
other algorithms used in AI, but I do know that the attempt to use so-called
"intelligent" chess programs failed miserably against the "brute force" approach.

The success of computer chess programming has been mostly the result of
better, faster hardware. While the progress of intelligent chess software
has been very slow, the sheer number of positions the computer can search
has contributed the lion's share of playing strength.

To say that AI is done with chess is simply a failure of AI to produce
an "intelligent" chess playing machine. This is called "throwing in the
towel".

It's not surprising that those working in the field of AI would make a
statement like this. To declare the failure of AI would mean the loss of
their jobs (if anyone believed them).

Chess isn't a poor environment for pattern matching, for instance. Quite
the opposite. The fact is chess is too complex for pattern matching without
"intelligence". Brute force, on the other hand can largely dispense with
the finesse of the game and simply calculate "all" the possible moves (within
the power of the computer of course).

AI can't claim any great successes in chess. Just put their programs in a
slow computer and playing strength evaporates as the depth of search
shrinks.


mrosen

Robert Hyatt

Apr 14, 1994, 11:46:58 AM
In article <2oika4...@uwm.edu> mro...@csd4.csd.uwm.edu (Michael David Rosen) writes:
>I can't give you a good definition of AI; and I can't explain Alpha-Beta and
>other algorithms used in AI, but I do know that the attempt to use so-called
>"intelligent" chess programs failed miserably against the "brute force" approach.

What this means is that we are using the strength of the computer to "simulate"
the subtleties of the human mind. An example: name me a flower that rhymes
with nose. "rose" right? now, name me a color that rhymes with tack. "black"
right? Now, do you suppose that, in your mind, you have literally thousands
of "linked lists" where every word you know is linked by various phonetic
sounds? of course not, if you did, you would have more pointers
than data. Now, can I emulate this "behavior" on the machine? easy.
I can take the entire Webster's dictionary and the phonetic spelling, and
search through this list, and still come up with "rose" faster than you did.
Does the machine have to "do it like you do it" before it can "do it well?"
Hardly. In fact, we really don't yet know how the mind plays the game of
chess (or anything else, for that matter.) How can I write a program to
emulate something that I don't understand how it does what it does? I can't,
but I *can* emulate the process and produce a result that might be better
than when I'm emulating. Chess programs do this now. They use an (admittedly)
awkward and complicated mechanism to play the game of chess, yet they can
beat most of us. Pretty convincing results that say that AI has not
"failed" unless your goal is to do it exactly like we do it. Since we don't
know how we do it, confirming such a complaint is difficult. Wouldn't it be
amusing if we one day discover that our brain does ultimately work with ones
and zeroes and *does* use alpha/beta?
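
To make the dictionary example concrete, here is a throwaway Python sketch of
that kind of brute-force lookup. The six-word list and its rough "phonetic
endings" are invented for this post; a real program would load a complete
phonetic dictionary, and it obviously is not how your head does it.

# scan a word list annotated with a rough phonetic ending and a category,
# and keep whatever matches -- no clever associative memory involved
WORDS = [
    ("rose",  "OWZ", "flower"),
    ("hose",  "OWZ", "object"),
    ("tulip", "IHP", "flower"),
    ("black", "AEK", "color"),
    ("tack",  "AEK", "object"),
    ("green", "IYN", "color"),
]

def find(ending, category):
    return [word for (word, phon, cat) in WORDS
            if phon == ending and cat == category]

print(find("OWZ", "flower"))   # ['rose']  : a flower that rhymes with "nose"
print(find("AEK", "color"))    # ['black'] : a color that rhymes with "tack"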

>
>The success of computer chess programming has been mostly the result of
>better, faster hardware. While the progress of intelligent chess software
>has been very slow, the sheer number of positions the computer can search
>has contributed the lion's share of playing strength.

This is completely wrong. A simple example. The best computer program in
1976 was chess 4.x running on a 14 mip supercomputer with an official USCF
rating of 1700. Currently, you can buy a simple Fidelity Mach III, with a
processor running at 16mhz (probably rated at 2 mips or so) which has a
rating of 2258. For that example, technology has run "backward" while
the program has gotten better.

Chess has been improved in several ways: (1) search extensions allow the
program to probe deeply along lines that need it, while searching less
deeply when it is not needed -- just like a human (2) evaluation strategies
have improved significantly as we (chess programmers) find more and more
ways to incorporate knowledge into the chess program; (3) opening books
are "better suited" to a program's playing style now as opposed to the old
idea of simply typing in all of MCO 10/11/etc. In short, lots of very
significant improvements have been made, even without considering the
technological advances.
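
To give a feel for item (1), here is a small Python sketch of the extension
idea; the Node class and its "forcing" flag are invented stand-ins for this
post only (in a real program the trigger would be something like a check or a
recapture), so this is not Cray Blitz code.

import random

class Node:
    # synthetic game-tree node, invented purely for this illustration
    def __init__(self, key=1):
        self.key = key

    def children(self):
        rng = random.Random(self.key)
        # (child, forcing?) -- "forcing" stands in for a checking or recapturing move
        return [(Node(self.key * 5 + i + 1), rng.random() < 0.25)
                for i in range(3)]

    def evaluate(self):
        return random.Random(self.key).randint(-100, 100)

def search(node, depth, ext_budget, counter):
    counter[0] += 1
    if depth == 0:
        return node.evaluate()
    best = -10**9
    for child, forcing in node.children():
        if forcing and ext_budget > 0:
            # extension: a forcing reply costs no depth, so the forcing line
            # gets probed deeper than the nominal search depth
            score = -search(child, depth, ext_budget - 1, counter)
        else:
            score = -search(child, depth - 1, ext_budget, counter)
        best = max(best, score)
    return best

visited = [0]
print("value:", search(Node(), 4, 3, visited), " nodes visited:", visited[0])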

Of course, when you add faster hardware to the equation, a couple of things
have happened (Cray Blitz is but one example here): (a) the program reaches
the point where tactical mistakes become quite rare (b) additional evaluation
strategies can be included when "time cost" becomes "no object." IE, in
years past, we would rigorously test evaluation ideas as to how they
affected program play and compare "knowledge content" to "computational
cost". Often, computational cost was high enough that the knowledge would
slow the program down enough to cause tactical oversights.

Note that every program that plays in the annual ACM events are now rated
over 2300-2400 USCF, and yet they are running on machines that can't come
close to competing with the original Cray-1 we ran on 12 years ago when
our USCF rating was 2258 after we won the Mississippi State Chess
championship in 1981. Technology has certainly provided significant
boosts to playing skill, but people like Schaeffer, Hsu, Thompson, Lang
(and perhaps even the Cray Blitz team) have also contributed significant
programming enhancements as well.

The hardware has certainly helped, but if you were to take chess 4.x, drop
it on new hardware, you would *not* have a program that would be competitive
today.

One other comment here. Often, we have had at least one "technology based"
program compete in the ACM events. This goes back to the days of "tech"
and on to "Lex'entrique" (hope that's spelled correctly). These programs
used no knowledge at all, just searched the largest tree they could while
doing little or no positional evaluation. As a general rule, such approaches
have been found lacking.


>
>To say that AI is done with chess is simply a failure of AI to produce
>an "intelligent" chess playing machine. This is called "throwing in the
>towel".
>
>It's not surprising that those working in the field of AI would make a
>statement like this. To declare the failure of AI would mean the loss of
>their jobs (if anyone believed them).
>
>Chess isn't a poor environment for pattern matching, for instance. Quite
>the opposite. The fact is chess is too complex for pattern matching without
>"intelligence". Brute force, on the other hand can largely dispense with
>the finesse of the game and simply calculate "all" the possible moves (within
>the power of the computer of course).
>
>AI can't claim any great successes in chess. Just put their programs in a
>slow computer and playing strength evaporates as the depth of search
>shrinks.

OK, let's play the following game: We'll run Cray Blitz on the slowest
YMP/EL machine we can find (one processor) and search around 10K nodes per
second rather than our normal 500K-1M. For *you* we'll have a kibitzer stand
by the board and continually give you advice on what to play and why (of course,
he will only be rated 1400 so you'll have to figure out how to ignore his
comments and not let his analysis distract you while you think.) Now, anyone
want to make a wager on the game? The point, certainly you can slow down a
machine enough to make it play weaker, although at only 1K nodes per second
Cray Blitz has beaten players rated over 2200 (we won the Mississippi State
championship searching about 1K nodes per second for reference.) To say its
strength "evaporates" is silly. I have run Chess Genius on everything from
a 386/20 to a 486/66, and while it plays well on the 486, it also plays well
on the 386. Weaker, yes; terribly? No.

>
>
>mrosen
>


--

-------------------------------------------------------------------
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham

Peter W. Gillgasch

Apr 13, 1994, 2:12:00 PM
Wow, this is the beginning of a rather interesting thread...

In article <1994Apr13.0...@cis.uab.edu>,
sl...@cis.uab.edu (Kenneth Sloan) writes:

>In article <199404121...@ibm3090.rz.uni-karlsruhe.de> UK...@ibm3090.rz.uni-karlsruhe.de (Peter W. Gillgasch) writes:
>>...
>>I wouldn't say that "chess is done". Maybe it is no longer considered
>>"interesting" by the AI community.
>>
>>1. Chess is not "done"
>>
>>This is in my opinion NOT related to the fact that the top human
>>players still have the upper hand. The problem is completely different:
>>Tell me ONE program that uses "traditional" AI techniques AND is
>>competitive with programs that use the generic CHESS 4.x approach -
>>full-width alpha-beta, transposition tables, move ordering heuristics
>>and a quiescence search. By "traditional" AI techniques I mean:
>>
>>- Learning
>>- Classification and deduction
>>- Pattern matching
>>
>>In other words: Selectivity that is not driven by SEARCH but by
>>REASONING.
>>
>>The AI community has failed here.
>
>Whoa...that depends strongly on what you think the goals are, and what
>it means to "fail". I think it's fair to say that "the AI community"
>had a lot to do with demonstrating the (somewhat surprising) strength of
>"full width ...". The goal of a scientific enterprise ought to be to
>find the truth, and not "My way is better than your way, NYAH, NYAH!"
>
>Of course, this is not always widely appreciated...
>

"My way is better than your way" ? I didn't say *that*.

In my opinion there are two severe errors in your arguments:

1. You think that "failing" in my article means the fact that programs
using a "human" approach to chess usually loose to the programs that
decide their next move using the "generic" approach.
This is only a part of the story. The real problem is that there are
to the best of my knowledge *no* programs that play chess in the
human style that can handle the full game - not only some subdomains
of chess. I don't say that they won't operate in all positions but
they play very poorly in types of positions where they do not have
"hard-coded" knowledge. "Losing" is only a symptom. It is much more
interesting why they lose and how.
2. "My way is better than your way". This is not only an error, this
is nearly insulting :-)
Say you have two methods to solve a problem. Method A is well known,
you have hundreds of implementations of this problem and it works
so well that >>95% of all human beings that can potentially deal
with this class of problem can't do any better. More specifically,
if you put them in a contest with those implementations then their
efforts to deal with the problem are worse.
Then you have method B. It looks nice and sounds reasonable. But
nobody manages to build an implementation using method B that
a) Really uses method B without using too many ideas from method A
b) Comes up with results that are at least of a quality that justifies
the additional effort or that produce spin-offs for other problems.

The interesting thing now is that the former followers of method B
claim that method A is very much in the spirit of method B. They gave up
their aim of implementing method B but now say that the problem
is dead for them since method A is a success.

Then they walk away to treat other problems that may be even more
complex...

"In the failing failing failing heartland make the places mine"

Andrew Eldritch

With best regards

Peter W. Gillgasch

Michael David Rosen

Apr 15, 1994, 4:29:49 AM
In article <1994Apr14.1...@cis.uab.edu> hy...@cis.uab.edu (Robert Hyatt) writes:
>
>Does the machine have to "do it like you do it" before it can "do it well?"
>Hardly. In fact, we really don't yet know how the mind plays the game of
>chess (or anything else, for that matter.) How can I write a program to
>emulate something that I don't understand how it does what it does? I can't,
>but I *can* emulate the process and produce a result that might be better
>than when I'm emulating. Chess programs do this now. They use an (admittedly)
>awkward and complicated mechanism to play the game of chess, yet they can
>beat most of us. Pretty convincing results that say that AI has not
>"failed" unless your goal is to do it exactly like we do it. Since we don't
>know how we do it, confirming such a complaint is difficult. Wouldn't it be
>amusing if we one day discover that our brain does ultimately work with ones
>and zeroes and *does* use alpha/beta?

I certainly understand that the computer is attempting to "emulate" thinking.
And I don't argue with computers using the best means at their disposal to
achieve success in the game. My assertion is that while using better
methods to achieve a broader and deeper search, that function is still a
"brute force" method. I seem to recall AI people scorning brute force
many years ago as a short cut to achieve stronger play, but not much of
a contribution to AI.

>>The success of computer chess programming has been mostly the result of
>>better, faster hardware. While the progress of intelligent chess software
>

>This is completely wrong. A simple example. The best computer program in
>1976 was chess 4.x running on a 14 mip supercomputer with an official USCF
>rating of 1700. Currently, you can buy a simple Fidelity Mach III, with a
>processor running at 16mhz (probably rated at 2 mips or so) which has a
>rating of 2258. For that example, technology has run "backward" while
>the program has gotten better.
>

Excuse me but a simple Fidelity Mach III with a 16mhz processor is quite
a leap from the earlier chess computers that often ran 6502's and the
like. Virtually any chess platform today is *much* faster than anything
running on a computer years ago, most of which were 8 bit processors like
C-64's and AppleII's.

Of course, the programming has become more efficient over the past 15-20
years. There are better ways to avoid transpositions, larger memory has
allowed more analysis and larger opening books.

>Chess has been improved in several ways: (1) search extensions allow the
>program to probe deeply along lines that need it, while searching less
>deeply when it is not needed -- just like a human (2) evaluation strategies
>

Again, you don't have the luxury of these selective searches without bigger,
faster cpus and memory. The selective search computers of the past were
very weak because they just couldn't crunch enough numbers.

>ways to incorporate knowledge into the chess program; (3) opening books
>are "better suited" to a program's playing style now as opposed to the old
>idea of simply typing in all of MCO 10/11/etc. In short, lots of very
>

This is not AI. The computer doesn't choose the opening that suits its
style, the programmers do.

>Of course, when you add faster hardware to the equation, a couple of things
>have happened (Cray Blitz is but one example here): (a) the program reaches
>the point where tactical mistakes become quite rare (b) additional evaluation
>strategies can be included when "time cost" becomes "no object." IE, in
>years past, we would rigorously test evaluation ideas as to how they
>affected program play and compare "knowledge content" to "computational
>cost". Often, computational cost was high enough that the knowledge would
>slow the program down enough to cause tactical oversights.
>
>Note that every program that plays in the annual ACM events are now rated
>over 2300-2400 USCF, and yet they are running on machines that can't come
>close to competing with the original Cray-1 we ran on 12 years ago when
>our USCF rating was 2258 after we won the Mississippi State Chess
>championship in 1981. Technology has certainly provided significant
>boosts to playing skill, but people like Schaeffer, Hsu, Thompson, Lang
>(and perhaps even the Cray Blitz team) have also contributed significant
>programming enhancements as well.
>

I would hope that the programming is better. Better programming doesn't
necessarily mean better AI. Most software is much better today than fifteen
years ago.

The point I was responding to was that AI was finished with chess; implying
that it had adequately solved the problems associated with computers playing
chess successfully. Computers do play chess successfully, but not because
AI solved the problems. The programmers just found other strategies to
solve the problem of computer chess play.

Maybe you can enlighten me about this. Almost any chess machine will play
a losing move in a certain position. Perhaps a huge sacrifice of material
will lead to a mate just beyond the computer's horizon. I know of no
computer program that retains the position or "solves" the problem so that
it doesn't choose incorrectly if the position arises again. Part of AI
used to be the computer "learning" from its mistakes. But if this exists
at all now, it isn't found on any "accessible" machine/program. Or does it?

You seem to be defining AI as any procedure that produces functional
results that "look like" human results. But isn't this a bit broad? As you
stated, you can create huge lists and have a computer look up keywords
and respond very quickly. This is somewhat like drilling different size
holes and having coins fall into the correct bin. It works like a human
sorting coins, better perhaps; but is it AI?

>>AI can't claim any great successes in chess. Just put their programs in a
>>slow computer and playing strength evaporates as the depth of search
>>shrinks.
>
>OK, let's play the following game: We'll run Cray Blitz on the slowest
>YMP/EL machine we can find (one processor) and search around 10K nodes per
>second rather than our normal 500K-1M. For *you* we'll have a kibitzer stand
>by the board and continually give you advice on what to play and why (of course,
>he will only be rated 1400 so you'll have to figure out how to ignore his
>comments and not let his analysis distract you while you think.) Now, anyone
>want to make a wager on the game? The point, certainly you can slow down a
>

How about letting your program run full strength, full speed and I'll just
erase random areas of its analysis on every move. That's equivalent to the
"help" you suggested for me.

Maybe I don't see it. But unless your selective searches are based on some
sort of pattern matching that "sees", for instance, a back rank mate, not
because the cpu is so fast that it can simply blanket a ply search covering
such traps, then it's still brute force and not AI. If on the other hand
the program "sees" back rank threats because the king lacks the ability to
move forward (or back) and searches for specific threats (despite its
opponent losing a queen and a rook and still being two ply from a mate),
then that looks and smells like AI to me.

BTW, thanks for a response based on logical debate rather than "You just
don't get it, Bozo". I half expected someone to reply like that.

I'm not trying to say that AI is a failure. I'm very hopeful that AI will
make breakthroughs in understanding how to process information and come up
with usable actions. AI has already demonstrated just how overwhelmingly
difficult "simple" understanding is, e.g. "what's the difference between a
ball and a disc?" Now teach your computer to tell the difference...

I just don't think AI has been as successful at playing chess as good
programming and big, fast computers.

T. M. Cuffel

Apr 15, 1994, 10:09:21 AM
In article <2oika4...@uwm.edu>,

Michael David Rosen <mro...@csd4.csd.uwm.edu> wrote:
>I can't give you a good definition of AI;

I can:

Making computers do things humans are good at and they are not.

The natural fallout is once the computers are good at it, you are done.

>and I can't explain Alpha-Beta and
>other algorithms used in AI, but I do know that the attempt to use so-called
>"intelligent" chess programs failed miserably against the "brute force" approach.

As far as I know there are no programs of any sort that exhibit
"intelligent" behavior.

>The success of computer chess programming has been mostly the result of
>better, faster hardware. While the progress of intelligent chess software
>has been very slow, the sheer number of positions the computer can search
>has contributed the lion's share of playing strength.

AI is not about computers thinking like humans, learning, or gaining sentience
and locking people out of spacecraft. It is about computers DOING things
that humans have little trouble with. Method is largely unimportant. It
has to be: in order to make a computer play chess like a human, we would have
to understand how humans play chess. If you want to blame the AI people
for not instantly ascertaining facts about human cognition that escape
top psychology researchers, then you are holding them to too high a
standard.

>To say that AI is done with chess is simply a failure of AI to produce
>an "intelligent" chess playing machine. This is called "throwing in the
>towel".

So the AI field is obliged to keep working on chess programs until they
make one that satisfies you?

Did you have a tricycle as a child? Did your parents force you to master
every nuance of tricycle skill before you moved up to a two wheeler?
Mine saw that I was ready to move on, that I was done. Today, I am not
one of the best tricyclists in the world, and I am pretty glad I didn't
make the effort.

>It's not surprising that those working in the field of AI would make a
>statement like this. To declare the failure of AI would mean the loss of
>their jobs (if anyone believed them).

Anyone concerned about job security is not working in AI.

>Chess isn't a poor environment for pattern matching, for instance. Quite
>the opposite. The fact is chess is too complex for pattern matching without
>"intelligence". Brute force, on the other hand can largely dispense with
>the finesse of the game and simply calculate "all" the possible moves (within
>the power of the computer of course).

Chess is a poor environment for learning things about pattern matching.
Would you give your child a college text and expect him to learn to read
just because it was a good college text? Pattern matching techniques
will be developed on simpler problems where pattern matching is all
you are doing. Chess provides too many distractions from the task at hand.

>AI can't claim any great successes in chess. Just put their programs in a
>slow computer and playing strength evaporates as the depth of search
>shrinks.

Any AI program will rely on computers being very very fast. If
they didn't, we wouldn't need the computer.

Also, the advances in problem representation, minimax, alpha-beta, and
other chess related AI advances are obvious and trivial things any
inclined person could whip up on an idle Sunday afternoon? Your
concept of AI seems to indicate that anything short of a program
that shouts "I think therefore I am!" while taking over the Enterprise
is not "real" AI.

Most AI consists of:

1. Rigorously defining a problem so that
2. a brute force search can easily be applied to it

When you finish, people tell you how part 1 is a cheap trick and how part
2 is trivial, and that your computer really isn't smart because it
will never write a sonnet as good as Shakespeare's, and that chess is
too easy, let's see you do Go, and...
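
To show how small that recipe can be, here is a toy Python example on a stock
textbook puzzle (measure exactly 2 units with a 4-unit jug and a 3-unit jug):
step 1 is the state encoding, step 2 is a plain breadth-first search. Both the
puzzle and the code are generic illustrations, not taken from any particular
AI system.

from collections import deque

CAP = (4, 3)        # jug capacities
GOAL = 2            # measure exactly 2 units in either jug

def successors(state):
    # step 1: the problem is "rigorously defined" as states and legal moves
    a, b = state
    yield (CAP[0], b); yield (a, CAP[1])                      # fill a jug
    yield (0, b); yield (a, 0)                                # empty a jug
    pour = min(a, CAP[1] - b); yield (a - pour, b + pour)     # pour jug a into b
    pour = min(b, CAP[0] - a); yield (a + pour, b - pour)     # pour jug b into a

def solve():
    # step 2: brute-force breadth-first search does the rest
    start = (0, 0)
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if GOAL in state:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)

print(solve())      # e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]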

Robert Hyatt

Apr 15, 1994, 11:41:22 AM
In article <2olj9t...@uwm.edu> mro...@csd4.csd.uwm.edu (Michael David Rosen) writes:
>In article <1994Apr14.1...@cis.uab.edu> hy...@cis.uab.edu (Robert Hyatt) writes:
>>
>>Does the machine have to "do it like you do it" before it can "do it well?"
>>Hardly. In fact, we really don't yet know how the mind plays the game of
>>chess (or anything else, for that matter.) How can I write a program to
>>emulate something that I don't understand how it does what it does? I can't,
>>but I *can* emulate the process and produce a result that might be better
>>than when I'm emulating. Chess programs do this now. They use an (admittedly)
>>awkward and complicated mechanism to play the game of chess, yet they can
>>beat most of us. Pretty convincing results that say that AI has not
>>"failed" unless your goal is to do it exactly like we do it. Since we don't
>>know how we do it, confirming such a complaint is difficult. Wouldn't it be
>>amusing if we one day discover that our brain does ultimately work with ones
>>and zeroes and *does* use alpha/beta?
>
>I certainly understand that the computer is attempting to "emulate" thinking.
>And I don't argue with computers using the best means at their disposal to
>achieve success in the game. My assertion is that while using better
>methods to achieve a broader and deeper search, that function is still a
>"brute force" method. I seem to recall AI people scorning brute force
>many years ago as a short cut to achieve stronger play, but not much of
>a contribution to AI.

Not *all* of us in "AI" scorn the algorithmic approach to the chess
problem. Don't forget, there are still many that scorn our modern
medicine advances, yet it's hard to argue with the results.

The AI community has divided into two separate camps: camp (1) wants to
"solve" the chess problem by using a computer. The goal is to beat any
human in the world. That will happen one day; how soon is anybody's guess.
Camp (2) also wants to solve the chess problem, but their primary goal is
to make the computer "do it like a human". Any method to derive a solution
which differs from the way a human derives that solution is wrong, even if
the solution itself is correct. My problem is that I don't know how I
play chess and choose moves. Yes, I manipulate pieces in my head, and
use some chess "theory" to evaluate positions, but I often make a move
without being able to say exactly "why" I did it, and *that* I can't get
Cray Blitz to do (of course, some would say that C-B can't ever say exactly
why it played a move... :^) )


>
>>>The success of computer chess programming has been mostly the result of
>>>better, faster hardware. While the progress of intelligent chess software
>>
>>This is completely wrong. A simple example. The best computer program in
>>1976 was chess 4.x running on a 14 mip supercomputer with an official USCF
>>rating of 1700. Currently, you can buy a simple Fidelity Mach III, with a
>>processor running at 16mhz (probably rated at 2 mips or so) which has a
>>rating of 2258. For that example, technology has run "backward" while
>>the program has gotten better.
>>
>Excuse me but a simple Fidelity Mach III with a 16mhz processor is quite
>a leap from the earlier chess computers that often ran 6502's and the
>like. Virtually any chess platform today is *much* faster than anything
>running on a computer years ago, most of which were 8 bit processors like
>C-64's and AppleII's.

Bzzzzttttt. Wrong answer; reread what I wrote. The BEST computer in 1976
was rated only 1700, yet used a machine at least one order of magnitude
faster than the Mach III machine. The Mach III is rated 500 points higher
on a machine at least 10 times (more like 100 times) slower. How do you
explain this rating improvement? Certainly not on "technology." Check your
old CL&R for chess 4.5 circa 1976 to get its rating then on a CDC Cyber
176. That machine will still blow away a 68000 at 16 MHz, and yet the
Spracklens' program will "whup" chess 4.5 soundly.


>
>Of course, the programming has become more efficient over the past 15-20
>years. There are better ways to avoid transpositions, larger memory has
>allowed more analysis and larger opening books.

The old CDC 176 had more memory than my Mach III by a couple of orders of
magnitude. This isn't the answer... keep trying.


>
>>Chess has been improved in several ways: (1) search extensions allow the
>>program to probe deeply along lines that need it, while searching less
>>deeply when it is not needed -- just like a human (2) evaluation strategies
>>
>Again, you don't have the luxury of these selective searches without bigger,
>faster cpus and memory. The selective search computers of the past were
>very weak because they just couldn't crunch enough numbers.

Again, how do you explain how well they are working on the Mach III? These
extensions are made exactly because we can't search deep enough without them
since the machines are still not fast enough to search everything to 20 plies
deep. The extensions help the slower machines more than the fast ones...


>
>>ways to incorporate knowledge into the chess program; (3) opening books
>>are "better suited" to a program's playing style now as opposed to the old
>>idea of simply typing in all of MCO 10/11/etc. In short, lots of very
>>
>This is not AI. The computer doesn't choose the opening that suits its
>style, the programmers do.

Then nothing is AI, since who is going to write the programs, debug them,
etc. Humans have to learn the style of play they prefer, and then "book up"
for it. The computer is getting better for the same reason...


>
>>Of course, when you add faster hardware to the equation, a couple of things
>>have happened (Cray Blitz is but one example here): (a) the program reaches
>>the point where tactical mistakes become quite rare (b) additional evaluation
>>strategies can be included when "time cost" becomes "no object." IE, in
>>years past, we would rigorously test evaluation ideas as to how they
>>affected program play and compare "knowledge content" to "computational
>>cost". Often, computational cost was high enough that the knowledge would
>>slow the program down enough to cause tactical oversights.
>>
>>Note that every program that plays in the annual ACM events are now rated
>>over 2300-2400 USCF, and yet they are running on machines that can't come
>>close to competing with the original Cray-1 we ran on 12 years ago when
>>our USCF rating was 2258 after we won the Mississippi State Chess
>>championship in 1981. Technology has certainly provided significant
>>boosts to playing skill, but people like Schaeffer, Hsu, Thompson, Lang
>>(and perhaps even the Cray Blitz team) have also contributed significant
>>programming enhancements as well.
>>
>I would hope that the programming is better. Better programming doesn't
>necessarily mean better AI. Most software is much better today than fifteen
>years ago.

I've programmed for 25 years, so you'll have to explain what you
mean here. I can't program any "better" now than I could in 1970... languages
are basically the same, data structures the same, etc. The thing that is now
better is the search algorithm, which has been refined steadily over the past
20 years, by *many* different people. The programming doesn't mean squat.
A sloppy piece of search code with the right search extensions will beat the
most elegantly written search code with no extensions.

>
>The point I was responding to was that AI was finished with chess; implying
>that it had adequately solved the problems associated with computers playing
>chess successfully. Computers do play chess successfully, but not because
>AI solved the problems. The programmers just found other strategies to
>solve the problem of computer chess play.
>
>Maybe you can enlighten me about this. Almost any chess machine will play
>a losing move in a certain position. Perhaps a huge sacrifice of material
>will lead to a mate just beyond the computer's horizon. I know of no
>computer program that retains the position or "solves" the problem so that
>it doesn't choose incorrectly if the position arises again. Part of AI
>used to be the computer "learning" from its mistakes. But if this exists
>at all now, it isn't found on any "accessible" machine/program. Or does it?

Several programs do just this, BeBe being one. It basically has a "permanent"
hash table that contains positions from all games it has played, and uses this
to avoid game paths where it didn't like the position later on. Very much like
what I do: I try 1. e4 e6, and after playing lots of different variations, I decide
that I simply don't like e6 and find some other defense. Whether this is
"learning" or not is still open to debate, but it *does* sort of smell like
it...
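
Here is a bare-bones Python sketch of that general idea: a table of previously
seen positions, saved to disk between games, that nudges the evaluation away
from positions the program kept losing from. The hash keys, the penalty value
and the file name are all invented for the illustration; none of this is BeBe's
(or Cray Blitz's) actual code.

import os
import pickle

BOOK_FILE = "learned_positions.pickle"
PENALTY = 25                        # centipawn-style nudge against bad memories

def load_memory():
    if not os.path.exists(BOOK_FILE):
        return {}
    with open(BOOK_FILE, "rb") as f:
        return pickle.load(f)

def remember_game(memory, position_keys, result):
    # result: +1 win, 0 draw, -1 loss, from the program's point of view
    for key in position_keys:
        wins, games = memory.get(key, (0, 0))
        memory[key] = (wins + (result > 0), games + 1)
    with open(BOOK_FILE, "wb") as f:
        pickle.dump(memory, f)      # the "permanent" part: survives between games

def adjust_score(memory, key, static_score):
    # called from the evaluation: steer away from positions we keep losing from
    wins, games = memory.get(key, (0, 0))
    if games >= 2 and wins == 0:
        return static_score - PENALTY
    return static_score

# toy usage with made-up position keys
mem = load_memory()
remember_game(mem, [101, 202, 303], result=-1)   # lost a game through these
remember_game(mem, [101, 202, 404], result=-1)   # lost again via 101 and 202
print(adjust_score(mem, 101, 10))                # lower than 10: avoid next time
print(adjust_score(mem, 999, 10))                # 10: never-seen position, unchanged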

>
>You seem to be defining AI as any procedure that produces functional
>results that "look like" human results. But isn't this a bit broad? As you
>stated, you can create huge lists and have a computer look up keywords
>and respond very quickly. This is somewhat like drilling different size
>holes and having coins fall into the correct bin. It works like a human
>sorting coins, better perhaps; but is it AI?

Good point, I guess that you simply have to define AI first. I have always
used the classic Turing Test model for chess. If you put two terminals in
a room, one connected to a GM and another connected to a Computer Chess
machine, can you determine which is which if all you can do is set up chess
positions and play through them with the machine? I maintain that the computer
can pretty well claim to be an "AI" solution in this problem domain. Of course,
several will point out that it is really easy to pick out the computer if you
set up something like a long mate in 15 and then notice that one of the two
"players" solves it in one second... not many humans can do that. Of course,
it would be trivial to have C-B "spin its wheels" for a randomly long amount
of time to "simulate" thinking...


>
>>>AI can't claim any great successes in chess. Just put their programs in a
>>>slow computer and playing strength evaporates as the depth of search
>>>shrinks.
>>
>>OK, let's play the following game: We'll run Cray Blitz on the slowest
>>YMP/EL machine we can find (one processor) and search around 10K nodes per
>>second rather than our normal 500K-1M. For *you* we'll have a kibitzer stand
>>by the board and continually give you advice on what to play and why (of course,
>>he will only be rated 1400 so you'll have to figure out how to ignore his
>>comments and not let his analysis distract you while you think.) Now, anyone
>>want to make a wager on the game? The point, certainly you can slow down a
>>
>How about letting your program run full strength, full speed and I'll just
>erase random areas of its analysis on every move. That's equivalent to the
>"help" you suggested for me.

No, because you can go back and "recreate" these "losses." This does cost you
time, and that was the point. The more interruptions you endure, the less time
you will spend on chess. So, what's the point of slowing a machine down to the
point where you can say "it's a fish"??

>
>Maybe I don't see it. But unless your selective searches are based on some
>sort of pattern matching that "sees", for instance, a back rank mate, not
>because the cpu is so fast that it can simply blanket a ply search covering
>such traps, then it's still brute force and not AI. If on the other hand
>the program "sees" back rank threats because the king lacks the ability to
>move forward (or back) and searches for specific threats (despite its
>opponent losing a queen and a rook and still being two ply from a mate),
>then that looks and smells like AI to me.

Again, CB "sees" this but in a way different than you do. It is *so* fast,
we simply search like he** and "stumble" into these mates. Sort of like the
way some of our current medical wonder-drugs and wonder-glues were found...
The machine will never "notice" such a thing without an algorithmic routine
written to detect it. I (and many others) have chosen to let the search
"algorithm" detect this since it's not really any more efficient to write a
special-purpose piece of code.

James Alan Riechel

Apr 15, 1994, 10:04:36 PM
dein...@cen.encompass.com (David Deininger) writes:

[text deleted]

>> Some of the traditional AI problems have also fallen. Chess, for
>> example, is pretty much done and no longer considered interesting;
>> more focus in the games sphere is on Go. Other problems, such
>> as vision and reasoning, have proven to be much harder than anyone
>> imagined. People no longer predict ``computers with souls'' and
>> many wagers have been lost.

[text deleted]

Ah, you're all over the place, but I will say this: The domain of
chess is still used for studying problems in AI. If you have a chance, look at
some of the work coming out of Northwestern University's Institute for the
Learning Sciences. None of it is based on search. But then again, if you're
current on your psychology studies, you'll know that humans, too, use a
knowledge-intensive rather than search-intensive approach.

While the best machines cannot yet defeat the best people, I will
agree that chess is AI-solved if one is not trying to imitate the procedure(s)
people use to "solve" chess. If one is trying to imitate them, however, then chess
is very far from AI-solved and may even be AI-complete.

--
James Riechel
jrie...@cc.gatech.edu

James Alan Riechel

Apr 15, 1994, 10:18:07 PM
canr...@convex.com (Robert Canright) writes:

[text deleted]

>When I looked at Feng-Hsiung Hsu's dissertation & saw the curves
>showing major advancement in computer skill related to computer
>speed & saw the success of Deep Thought, a machine built within the
>computer science discipline by men with undergraduate degrees in
>electrical engineering, I concluded that chess was dead as an AI
>problem. Then thinking about the lack of success in the AI field,
>I concluded that AI is dead.

[text deleted]

You should cross post this to comp.ai.religious.beliefs.

You state that "[traditional] AI is dead" (i.e. the functional-
conceptual paradigm) as a belief and not as a matter of fact. If, as
scientists, we should prefer facts, then by at least one measure your
statement is false. Traditional AI still drags in a lot of research money
from government and corporations, and they're actually producing (in many,
many cases) advanced systems for their "clients" that could not be
done solely w/ expert systems, neural nets, fuzzy logic, etc...

The only true litmus test is time. We'll have to wait and see.

--
James Riechel
jrie...@cc.gatech.edu

James Alan Riechel

Apr 15, 1994, 10:31:55 PM
cuf...@spot.Colorado.EDU (T. M. Cuffel) writes:

[text deleted]

>This is an interesting, and I believe somewhat inaccurate, characterization
>of traditional AI. Until recently, most AI consisted of representing
>problems and data in a form a machine could understand, and searching
>through that representation. Chess programs rely more on throwing
>their brute speed at a search rather than clever representations (not
>that they aren't clever, just not as important...), and therefore
>seem less "artificially intelligent" than AI programs that deal more
>with representation. But they both do the same thing. In fact, because
>of the large body of work associated with it, chess is one of the best
>paradigms for tradition AI techniques.

[text deleted]

I believe you are incorrect in your assertion that traditional
AI is comprised entirely (or nearly so) of search techniques. IMHO and
in the humble opinion of history, traditional AI has indeed included
learning, problem solving, planning, etc. theory and techniques that are
not, by and large, dependent on search techniques. ....

--
James Riechel
jrie...@cc.gatech.edu

Michael David Rosen

Apr 16, 1994, 3:29:02 AM
In article <CoB0n...@cnsnews.Colorado.EDU> cuf...@spot.Colorado.EDU (T. M. Cuffel) writes:
>In article <2oika4...@uwm.edu>,
>Michael David Rosen <mro...@csd4.csd.uwm.edu> wrote:
>>I can't give you a good definition of AI;
>
>I can:
>
> Making computers do things humans are good at and they are not.
>
Congratulations, you just made word processing an AI success.

I assume that a definition of AI requires something more rigorous than your
definition. Otherwise AI really doesn't exist at all.

>As far as I know there are no programs of any sort that exhibit
>"intelligent" behavior.
>

Well it is hard to hit a moving target. I may well be wrong in my
understanding of AI but no one has suggested it is the creation of
a "creature". I would have guessed that AI had something to do with the
computer being able to "learn" from its errors and to develop, from its
programming, possible alternatives to avoid repeating the same errors.

>AI is not about computers thinking like humans, learning, or gaining sentience
>and locking people out of spacecraft. It is about computers DOING things
>

It's strange that you assume that I think AI has something to do with
science fiction. It would be helpful if you had a more concrete explanation
of AI.

>So the AI field is obliged to keep working on chess programs until they
>make one that satisfies you?
>

This is just nonsense. I'm questioning the statement made about AI experts
claiming to have "solved" the problems of chess as it relates to their
field. It seems to me that the solutions to playing good chess were more
a success of good conventional programming and very fast computers with
very large memories.

Can it be that the very proponents of AI have diluted its "nature" to the
extent that it is indistinguishable from any other programming other than
it is difficult for a computer to achieve the task?

>Chess is a poor environment for learning things about pattern matching.

^^^^^^^^
Of course. A difficult thing for a computer to do isn't it? When AI
programmers get more proficient at pattern matching they should come
back to chess to see if they can apply it to something complex.

>Any AI program will rely on computers being very very fast. If
>they didn't, we wouldn't need the computer.
>
>Also, the advances in problem representation, minimax, alpha-beta, and
>other chess related AI advances are obvious and trivial things any
>inclined person could whip up on an idle Sunday afternoon? Your
>concept of AI seems to indicate that anything short of a program
>that shouts "I think therefore I am!" while taking over the Enterprise
>is not "real" AI.
>
>Most AI consists of:
>
> 1. Rigorously defining a problem so that
> 2. a brute force search can easily be applied to it
>

Finally, something of a reasonable point. It took a long time for you
to actually provide a usable insight. You could perhaps concentrate
on this rather than on the "science fiction" defense, which is an artifact of
your imagination rather than my inquiry.

>When you finish people tell you how part 1 is a cheap trick and how part
>2 is trivial, and that your computer really isn't smart because it
>will never write a sonnet as good as Shakespeare, and that chess is
>too easy let's see you do go, and...
>

Again, I don't know who you are replying to but this has nothing to do
with my questions or assertions. Say, are you an actual sentient being;
or are you some AI program created to reply to AI criticisms? This
might explain the nature of some of your reply. It's really quite
human-like...

Sorry about that, it's getting late and the topic's too far from chess.

Bye,

mrosen


Benoit St-Jean

Apr 17, 1994, 3:31:00 AM
TO: hy...@cis.uab.edu

RH>One other comment here. Often, we have had at least one "technology based"
RH>program compete in the ACM events. This goes back to the days of "tech"
RH>and on to "Lex'entrique" (hope that's spelled correctly). These programs
RH>used no knowledge at all, just searched the largest tree they could while

L'excentrique is better! :) A French word that means eccentric, cranky,
or crotchety.

|=============================================================|
| Benoit St-Jean, UQAM student |
| e-mail: benoit....@xonxoff.com |
| st-jean...@cafe.uqam.ca |
| st-jean...@nobel.si.uqam.ca |
| Please support genetic algorithms, Modula-2 and the Expos! |
|=============================================================|


* SLMR 2.1a *

----
XON/XOFF Information Service | (514) 683-9345 (voice/fax)
a division of XON/XOFF Computer Solutions | (514) 685-1152 (data) HST D/S
Montreal, Canada | (514) 683-6729 (data) HST D/S
------------------------------------------'

David Gomboc

unread,
Apr 17, 1994, 5:52:54 AM4/17/94
to
In article <2oo43u...@uwm.edu> mro...@csd4.csd.uwm.edu (Michael David Rosen) writes:
>In article <CoB0n...@cnsnews.Colorado.EDU> cuf...@spot.Colorado.EDU (T. M. Cuffel) writes:
>>In article <2oika4...@uwm.edu>,
>>Michael David Rosen <mro...@csd4.csd.uwm.edu> wrote:
>>>I can't give you a good definition of AI;
>>
>>I can:
>>
>> Making computers do things humans are good at and they are not.
>>
>Congratulations, you just made word processing an AI success.

!?!?

[deletia]

>This is just nonsense. I'm questioning a statement made about AI experts
>claiming to have "solved" the problem of chess as it relates to their
>field. It seems to me that the solutions for playing good chess were more
>a success of good conventional programming and very fast computers with
>very large memories.
>
>Can it be that the very proponents of AI have diluted its "nature" to the
>extent that it is indistinguishable from any other programming, other than
>that the task is difficult for a computer to achieve?
>
>>Chess is a poor environment for learning things about pattern matching.
> ^^^^^^^^
>Of course. A difficult thing for a computer to do, isn't it? When AI
>programmers get more proficient at pattern matching, they should come
>back to chess to see if they can apply it to something complex.

I happen to think Hans Berliner has done a fine job. What about
David Wilkins? Or Kenneth Church? (B.Sc. thesis, I believe.)

--
Dave Gomboc
drgo...@acs.ucalgary.ca
gom...@cpsc.ucalgary.ca

Anders Thulin

unread,
Apr 20, 1994, 1:51:17 AM4/20/94
to
In article <2oo43u...@uwm.edu> mro...@csd4.csd.uwm.edu (Michael David Rosen) writes:
>In article <CoB0n...@cnsnews.Colorado.EDU> cuf...@spot.Colorado.EDU (T. M. Cuffel) writes:
>>
>> Making computers do things humans are good at and they are not.
>>
>Congratulations, you just made word processing an AI success.

Not really. A computer can't `do' word processing. It can provide
tools and support to simplify it, but it can't do the job.

Hand it the text and pictures that go into today's newspaper, and see
what it does with them ...

--
Anders Thulin a...@linkoping.trab.se 013-23 55 32
Telia Research AB, Teknikringen 2B, S-583 30 Linkoping, Sweden

Lionel Tun

unread,
Apr 20, 1994, 1:11:42 PM4/20/94
to
What do you think of the following:

1. AI is programming a computer to do `clever stuff'.
2. If the `clever stuff' can be programmed on a computer,
then it is just a series of pre-defined instructions,
and not AI.

Therefore one must conclude that whenever `clever stuff'
is achieved it is no longer `clever stuff'. Therefore AI
is impossible.

--
________ Lionel Tun, lio...@cs.city.ac.uk ________
/ /_ __/\ Computer Vision Group /\ \__ _\
/___/_/_/\/ City University, London EC1V 0HB \ \___\_\_\
\___\_\_\/ 071-477 8000 ext 3889 \/___/_/_/

Robert Hyatt

unread,
Apr 20, 1994, 4:04:19 PM4/20/94
to
In article <2p3noe$5...@barney.cs.city.ac.uk> lio...@cs.city.ac.uk (Lionel Tun) writes:
>What do you think of the following:
>
>1. AI is programming a computer to do `clever stuff'.
>2. If the `clever stuff' can be programmed on a computer,
> then it is just a series of pre-defined instructions,
> and not AI.
>
>Therefore one must conclude that whenever `clever stuff'
>is achieved it is no longer `clever stuff'. Therefore AI
>is impossible.
>
>--

Only one flaw: what if we one day find out that *everything* is really
binary? Are *we* impossible? :^)

Jean-Marc Alliot

unread,
Apr 21, 1994, 12:14:04 PM4/21/94
to
In article <2p3noe$5...@barney.cs.city.ac.uk> lio...@cs.city.ac.uk (Lionel Tun) writes:

> 1. AI is programming a computer to do `clever stuff'.
> 2. If the `clever stuff' can be programmed on a computer,
> then it is just a series of pre-defined instructions,
> and not AI.

> Therefore one must conclude that whenever `clever stuff'
> is achieved it is no longer `clever stuff'. Therefore AI
> is impossible.

An interesting argument, and one that applies easily to cognitive/rule-based
systems. But what about the connectionist approach, with a neural network
learning "by itself" to play chess? The conclusion is not so easy then,
because we cannot explain what is in the black box of the net... the
weights have no individual significance.
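
To make the "black box" point concrete, here is a toy sketch in Python of a
connectionist evaluator (purely hypothetical, not any actual chess-learning
system): a position is reduced to a feature vector and scored by a two-layer
network, and no single weight in it means anything a human could read off.

    import numpy as np

    # Toy connectionist evaluator.  The weights below are random stand-ins;
    # a real system would adjust them by self-play or supervised training.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(16, 8))    # encoded position -> hidden units
    W2 = rng.normal(size=8)          # hidden units     -> evaluation

    def evaluate(features):
        """Score a position; no entry of W1 or W2 is individually meaningful."""
        hidden = np.tanh(features @ W1)
        return float(hidden @ W2)

    position = rng.normal(size=16)   # stand-in for an encoded chess position
    print(evaluate(position))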

Jean-Marc

James Alan Riechel

unread,
Apr 21, 1994, 4:36:32 PM4/21/94
to
jqta...@boi.hp.com (John Quill Taylor) writes:

>An intelligent chess computer would ask, "Why does White always go first?"

Better yet: Why does a white square need to fall in the lower right-hand
corner?

--
James Riechel
jriechel@cc

John Quill Taylor

unread,
Apr 21, 1994, 5:19:45 PM4/21/94
to
James Alan Riechel (jrie...@gaia.gatech.edu) wrote:

: jqta...@boi.hp.com (John Quill Taylor) writes:

: >An intelligent chess computer would ask, "Why does White always go first?"

: Better yet: Why does a white-square need to fall in the lower right-hand
: corner?

Which "rule," if changed, would more alter chess as we (sometimes only think
we) know it?


--

__
John Quill Taylor / /\
Writer at Large / / \
Hewlett-Packard, Storage Systems Division __ /_/ /\ \
Boise, Idaho U.S.A. /_/\ __\ \ \_\ \
e-mail: jqta...@hpdmd48.boi.hp.com \ \ \/ /\\ \ \/ /
Telephone: (208) 396-2328 (MST = GMT - 7) \ \ \/ \\ \ /
Snail Mail: Hewlett-Packard \ \ /\ \\ \ \
11413 Chinden Blvd \ \ \ \ \\ \ \
Boise, Idaho 83714 \ \ \_\/ \ \ \
Mailstop 430 \ \ \ \_\/
\_\/
"When in doubt, do as doubters do." -jqt

Michael David Rosen

unread,
Apr 22, 1994, 4:25:55 AM4/22/94
to
>In article <2p3noe$5...@barney.cs.city.ac.uk> lio...@cs.city.ac.uk (Lionel Tun) writes:
>>What do you think of the following:
>>
>>1. AI is programming a computer to do `clever stuff'.
>>2. If the `clever stuff' can be programmed on a computer,
>> then it is just a series of pre-defined instructions,
>> and not AI.
>>
>>Therefore one must conclude that whenever `clever stuff'
>>is achieved it is no longer `clever stuff'. Therefore AI
>>is impossible.
>>

"1." is probably a poor definition of AI.

In 2., "just a series of pre-defined instructions" sounds a little like
DNA, doesn't it?

mrosen

Doyen T. Klein

unread,
Apr 22, 1994, 7:20:53 PM4/22/94
to
In article <CoMoK...@boi.hp.com> jqta...@boi.hp.com (John Quill Taylor) writes:
>: >An intelligent chess computer would ask, "Why does White always go first?"
>: Why does a white-square need to fall in the lower right-hand corner?
>Which "rule," if changed, would more alter chess
Interesting idea, but would the 'game' be truly altered in either
case?

If you start with Black moving first, then it seems the opening
strategies would change only to the extent that the queen is now on
the other side of the opening color's king.

If the right corner square is black, again the bishop colors are
reversed, but I also assume that the King, not the Queen, would
start on his own color.

In either case, I fail to see any change in the principles of
how to develop a winning position, but the opening books would
have to reflect some changes (seemingly symmetrical for both sides).

If you changed the positions of the Knights and Bishops, would the game change?
Can anyone upload the rules proposed by Bobby Fischer, where the
opening consists of the players arranging their own pieces?

The answer, by the way, is that for people who rely on 'color clues'
more than on position, Black going first would alter the game;
for the others, changing the corner square would have the greater effect.
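
The corner-square claim is easy to check with a few lines of Python (a
throwaway sketch, assuming the usual R-N-B-Q-K-B-N-R back-rank order is
kept): flipping which color sits at h1 flips every square, so the queen
loses her color and the king gains his.

    # With the usual back rank (queen on d1, king on e1), see which piece
    # stands on its own colour when the corner square at h1 is flipped.
    def square_colour(file_idx, light_at_h1=True):
        base = "light" if file_idx % 2 == 1 else "dark"   # a1 dark when h1 is light
        if not light_at_h1:
            base = "dark" if base == "light" else "light"
        return base

    for light_at_h1 in (True, False):
        corner = "light" if light_at_h1 else "dark"
        q = square_colour(3, light_at_h1)   # d-file
        k = square_colour(4, light_at_h1)   # e-file
        print(f"h1 {corner}: queen on a {q} square, king on a {k} square")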

--
.dtk das oberbyte

Mark Kambites

unread,
Apr 22, 1994, 9:50:54 PM4/22/94
to
Quoting from jqta...@boi.hp.com:

> An intelligent chess computer would ask, "Why does White
> always go first?"

Even more intelligently, "Why does the first player always have the white pieces?".

/\/)ark

Mark Kambites

unread,
Apr 22, 1994, 9:50:57 PM4/22/94
to
Quoting from jqta...@boi.hp.com:

> Which "rule," if changed, would more alter chess as we
> (sometimes only think we) know it?

Make stalemate a loss for the player who cannot move, and you could solve the energy crisis by burning endgame books. :-)

/\/)ark

ALEX LANE

unread,
Apr 23, 1994, 11:19:05 PM4/23/94
to
Mark Kambites writes:

-> > An intelligent chess computer would ask, "Why does White
-> > always go first?"
->
-> Even more intelligently, "Why does the first player always have the
-> white pieces?".

And most intelligently, the following:

Why are the pieces always arranged the same way?
Why does a given piece always move in a particular fashion?
Why is the board square?
Why is it always the same size?
Why are there only two players?
Why aren't we playing bridge?
...etc....

Cheers...
+----------------------------------------------------------------------+
alex...@springsboard.org | voice: (303) 264-2399; fax -2363
Pagosa Springs, Colorado, USA | "You *can* get here from there!"
VC PGP fingerprint: 7F DB 06 E2 47 84 79 B4 32 3C A9 48 65 AA 5B C2
+----------------------------------------------------------------------+

John Quill Taylor

unread,
Apr 21, 1994, 2:42:59 PM4/21/94
to
An intelligent chess computer would ask, "Why does White always go first?"

Auld Orion

unread,
Apr 23, 1994, 3:02:08 PM4/23/94
to
jqta...@boi.hp.com (John Quill Taylor) writes:

>An intelligent chess computer would ask, "Why does White always go first?"

> __
>John Quill Taylor / /\
>Writer at Large / / \

You could, at the very least, cite the source for your quotation above.

John Quill Taylor

unread,
Apr 25, 1994, 2:42:14 PM4/25/94
to
Auld Orion (iv...@jove.acs.unt.edu) wrote:

: jqta...@boi.hp.com (John Quill Taylor) writes:

: >An intelligent chess computer would ask, "Why does White always go first?"

: >
: >John Quill Taylor
: >Writer at Large

: You could, at the very least, cite the source for your quotation above.

I'll cite it (again) gladly. I wrote it! About 1990.

In esse, jqt

--

__
John Quill Taylor / /\
Writer at Large / / \

Hewlett-Packard, Storage Systems Division __ /_/ /\ \
Boise, Idaho U.S.A. /_/\ __\ \ \_\ \
e-mail: jqta...@hpdmd48.boi.hp.com \ \ \/ /\\ \ \/ /
Telephone: (208) 396-2328 (MST = GMT - 7) \ \ \/ \\ \ /
Snail Mail: Hewlett-Packard \ \ /\ \\ \ \
11413 Chinden Blvd \ \ \ \ \\ \ \
Boise, Idaho 83714 \ \ \_\/ \ \ \
Mailstop 430 \ \ \ \_\/
\_\/

Steve Zimmerman

unread,
Apr 26, 1994, 5:12:56 PM4/26/94
to
In article <CoMHB...@boi.hp.com>, jqta...@boi.hp.com (John Quill Taylor) writes:
|> An intelligent chess computer would ask, "Why does White always go first?"

I disagree. A child would ask "Why does White always go first?" Computers
have no concept of color, and for the game of chess *they don't need one*.
They simply need to `know' that each player has a mutually
exclusive set of pieces and that one player must move first.
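
A minimal sketch in Python of the kind of state such a program keeps
(hypothetical, just to illustrate the point): there is only "the side to
move" and "the other side", and the labels white and black never appear.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Position:
        side_to_move: frozenset     # squares held by the side to move
        other_side: frozenset       # squares held by the opponent
        ply: int = 0

        def after(self, src, dst):
            """Play a move (legality checking omitted) and hand the turn over."""
            moved = (self.side_to_move - {src}) | {dst}
            remaining = self.other_side - {dst}   # drop a captured piece, if any
            return Position(remaining, moved, self.ply + 1)

    p = Position(frozenset({"e2"}), frozenset({"e7"}))
    print(p.after("e2", "e4"))      # the mover's pieces are now "other_side"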

Steve Zimmerman

shad augenstein

unread,
Apr 27, 1994, 10:33:50 PM4/27/94
to
ste...@code3.code3.com (Steve Zimmerman) writes:

>Steve Zimmerman

I would think that an intelligent computer would be quite similar to a
child.
-Shad

Anders Thulin

unread,
Apr 28, 1994, 2:29:47 AM4/28/94
to
In article <jriechel....@cc.gatech.edu> jrie...@gaia.gatech.edu (James Alan Riechel) writes:
>jqta...@boi.hp.com (John Quill Taylor) writes:
>
>>An intelligent chess computer would ask, "Why does White always go first?"

Give that chess computer a cigar!

It's just the kind of computer we need to handle games from the early
19th century, when there (apparently) was no such rule. Don't be
fooled by late 20th-century presentations of such games, which had to
be translated/mangled into a white-moves-first order to be entered into
modern databases.
