
Human VS computer


Julien Dumesnil

Jul 4, 1994, 1:48:44 PM

I had a discussion with a researcher who told me that chess was
not interesting anymore (for computer science ;) because we
now have computers that can beat the best human players.

Well, I must say that the last time I looked into it, which was about
5 years ago (I think), the best computer was still incapable of
winning steadily against the best players.

I am sure you guys are more up to date than me. Could you post
something about it here?

Thanks.

Julien.

James Aspnes

Jul 4, 1994, 2:14:48 PM
In article <k64fD.d...@etl.go.jp>,

Julien Dumesnil <dume...@etl.go.jp> wrote:
>
>I had a discussion with a researcher who told me that chess was
>not interesting anymore (for computer science ;) because we
>now have computers that can beat the best human players.

I think a better way of putting this is: back in the 1960's computer
chess was considered to be part of the mainstream of Good
Old-Fashioned AI, one of the problems whose solutions would help
unlock the secrets of building boxes that could substitute for
late-night parking attendants. Since then most CS types have realized
that (a) the approaches that seemed to work well for computer chess
(massive alpha-beta search with cheap heuristics) were very different
from what good human players seemed to use; and (b) chess was an
example of a problem in AI that was chosen under the assumption that
what was a sign of relative intelligence in humans (remember, this is
the 1950's and 1960's--- the picture of an intelligent human is Mr.
Spock or a lab-coated rocket scientist) would be a sign of
intelligence in computers, when in fact it's the apparently brainless
activities like walking across a room without tipping over or looking
at a scene and making out individual objects that are the really hard
problems.

So the real answer to why computer chess is not as interesting as it
once was may just be that nobody much believes any more that if we
build a good chess program we'll get anything but a good chess
program.

>Well, I must say that last time I looked into it, that was about
>5 years ago (I think), the best computer was still incapable of
>winning steadily against the best players.

I think the Deep Thought II folks currently claim about a 2550
equivalent strength. This is pretty good, but still quite a bit
short of blowing away the best humans.

Nandu S. Shah

Jul 5, 1994, 10:33:44 AM
In article <2v9jio...@PINE.THEORY.CS.YALE.EDU> aspnes...@cs.yale.edu (James Aspnes) writes:

>So the real answer to why computer chess is not as interesting as it
>once was may just be that nobody much believes any more that if we
>build a good chess program we'll get anything but a good chess
>program.

Actually, from a certain point of view, this statement holds true for humans
as well as computers-- because, for the most part, playing, practicing, and
studying chess only prepares you to play chess. Being a strong chessplayer
doesn't make you a genius, and being a genius (whatever that means)
*certainly* doesn't mean you'll be able to play chess with any competence
whatsoever.


Nandu Shah email: na...@anest4.anest.ufl.edu
Department of Anesthesiology phone: (904) 392-5389
University of Florida WWW: http://www.anest.ufl.edu/~nandu/

Robert Canright

Jul 5, 1994, 8:19:15 PM

>In article <k64fD.d...@etl.go.jp>,
>Julien Dumesnil <dume...@etl.go.jp> wrote:
>>
>>I had a discussion with a researcher who told me that chess was
>>not interesting anymore (for computer science ;) because we
>>now have computers that can beat the best human players.

deleted stuff

>I think the Deep Thought II folks currently claim about a 2550
>equivalent strength. This is pretty good, but not enough by quite a
>bit yet to blow away the best humans.

Deep Thought I & II have defeated grandmasters (Larsen at least) and
IMs. If "the best humans" means grandmasters, then Deep Thought has
defeated some of the best humans. Has it defeated the very best?
Not yet. The theory is that it will defeat Kasparov if it can compute
fast enough. I believe that the target speed has been published.

Because IBM, which is funding Deep Blue, is a business, you would
think that they have a schedule for meeting that target speed.

Julien Dumesnil

Jul 6, 1994, 5:46:12 AM
On 07/06/94(09:19) canr...@convex.com (Robert Canright) wrote

So I guess that means the problem is as good as solved, considering
that the speed of computers has been improving steadily over the years...

Now, what he told me is that research is turning towards the
Japanese game of Shogi, because its search space is so much
larger than that of chess... he also told me that the next step
would be Go.

Anybody care to comment on that?

-= Julien Dumesnil =-

Rob Ryan

Jul 6, 1994, 5:56:03 AM
In article <k6dl5.d...@etl.go.jp>,
Julien Dumesnil <dume...@etl.go.jp> wrote:

>Now, what he told me is that research is turning towards the
>Japanese game of Shogi, because its search space is so much
>larger than that of chess... he also told me that the next step
>would be Go.

Is Shogi's search space really bigger? Although the dropping of
captured pieces adds a lot to the search tree (on top of the 9x9
board), the general mobility of the pieces is much more restricted.

-- Rob

Johannes Fuernkranz

Jul 6, 1994, 8:39:38 AM

There is a TR about Shogi as an AI research target:
Hitoshi Matsubara, Shogi as the AI research target next to chess,
ETL-TR-93-23, Electrotechnical Laboratory, ??, 1993.
I ftp'ed it from somewhere, but didn't keep the address. The author's
e-mail is (was) mats...@etl.go.jp.

Anyhow, to answer your question: the introduction states that the
average branching factor of Shogi is usually over 100, compared to
around 35 for chess. The author also gives numbers for the search space:
  Chess  10^120
  Go     10^200
  Shogi  10^300
In chapter 3 he gives a position in which 593 moves are possible (which
is the maximum). Shogi games are also usually a little longer (movewise),
and position evaluation is harder.

Juffi


Johannes Fuernkranz ju...@ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria, Europe +43-1-5320652(Fax)
--------------- "Life's too short for Chess." -- Byron ------------------

Urban Koistinen

Jul 6, 1994, 10:22:04 AM
In <2ve8ma$5...@infosrv.edvz.univie.ac.at> ju...@ai.univie.ac.at (Johannes Fuernkranz) writes:
[text deleted]
:Anyhow, to answer your question: In the introduction it is stated that the

:average branching factor of Shogi is usually over 100 compared to around
:35 for chess. The author also gives numbers for the search space:

I guess
search space <==> reachable positions

:Chess 10^120

This figure might be right; do you know the motivation for it?
I am interested in references.

:Go 10^200

Has anyone managed to write a program that determines who has won
in an arbitrary Go position?
Even if it needed O(10^200) space and O(10^400) time it would be
interesting.

:Shogi 10^300


:In chapter 3 he gives a position where 593 moves are possible (which is the
:maximum). Also Shogi games are usually a little longer (movewise) and
:position evaluation is harder.

How does alpha-beta with random evaluation do?
--
Urban Koistinen - md85...@nada.kth.se
Stop software patents, interface copyrights: contact l...@uunet.uu.net
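For what it's worth, the 10^120 figure queried above is usually attributed to Claude Shannon's 1950 paper "Programming a Computer for Playing Chess", and it estimates the number of possible *games* (game-tree size), not reachable positions. A sketch of Shannon's round-number arithmetic:

```python
# Shannon's back-of-the-envelope game-tree estimate (1950 paper):
# about 10^3 possible continuations per pair of moves (one White
# move plus one Black reply), over a typical 40-move game.
continuations_per_move_pair = 10 ** 3
typical_game_length_in_moves = 40

game_tree_size = continuations_per_move_pair ** typical_game_length_in_moves
print(game_tree_size == 10 ** 120)  # the familiar "Shannon number"
```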

Dave Gomboc

Jul 7, 1994, 12:58:31 PM

It is my understanding that Shogi is at least a more tactical
game than Chess. Perhaps computers would do even better relative
to humans than in Chess were they to be geared towards Shogi.
9x9 is not nearly as convenient as 8x8, though. :-)

--
Dave Gomboc
drgo...@acs.ucalgary.ca

Johannes Fuernkranz

Jul 8, 1994, 6:01:33 AM
In article <2veemc$m...@news.kth.se>,

Urban Koistinen <md85...@somme.nada.kth.se> wrote:
>In <2ve8ma$5...@infosrv.edvz.univie.ac.at> ju...@ai.univie.ac.at (Johannes Fuernkranz) writes:
>[text deleted]
>:Anyhow, to answer your question: In the introduction it is stated that the
>:average branching factor of Shogi is usually over 100 compared to around
>:35 for chess. The author also gives numbers for the search space:
>
>I guess
> search space <==> reachable positions
Probably. I was just quoting.

>
>:Chess 10^120
>
>This figure might be good, do you know the motivation for it?
>I am interested in references.

I'm not so sure about it and I can't give you references for *this* number
(except the TR about Shogi where I took them from).

Numbers I like better, and I'm more confident in are:
#moves (max): 5899
#positions: 10^43
#games: 10^18900

I could give you references for those numbers; they are from a German
book called "Schach und Zahl" ("Chess and Number") and are based on
analyses by Nenad Petrovic, I think.
However, this is the last time I will read news and e-mail for a month.
So you have to be patient (and you'd better send me a reminder).

>:Shogi 10^300
>:In chapter 3 he gives a position where 593 moves are possible (which is the
>:maximum). Also Shogi games are usually a little longer (movewise) and
>:position evaluation is harder.
>
>How does alpha-beta with random evaluation do?

The point being made in this TR is that Alpha-Beta is less useful in general
because of the search space. I don't find the arguments convincing, however.
But I don't know much about Shogi, so I'm probably wrong.
I doubt that random evaluation would yield interesting information, as it
is just another way of encoding a piece mobility heuristic. (See D. Beal's
article in the last ICCA Journal for arguments why it might nevertheless
be useful.)


Juffi


Johannes Fuernkranz ju...@ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria, Europe +43-1-5320652(Fax)

--------------- "Life is too short for Chess." -- Byron -----------------

Don Beal

Jul 11, 1994, 9:11:25 AM
Johannes Fuernkranz (ju...@ai.univie.ac.at) wrote:
: Urban Koistinen <md85...@somme.nada.kth.se> wrote:
[ deleted ]
: >:Shogi 10^300

: >:In chapter 3 he gives a position where 593 moves are possible (which is the
: >:maximum). Also Shogi games are usually a little longer (movewise) and
: >:position evaluation is harder.
: >
: >How does alpha-beta with random evaluation do?

: The point being made in this TR is that Alpha-Beta is less useful in general
: because of the search space. I don't find the arguments convincing, however.
: But I don't know much about Shogi, so I'm probably wrong.
: I doubt that random evaluation would yield interesting information, as it
: is just another way of encoding a piece mobility heuristic. (see D. Beal's
: article in the last ICCA Journal for arguments why it might nevertheless
: be useful).

Well, since it was mentioned...
"Random evaluations" sample the structure of the full game tree.
It is incorrect to say they "encode" any heuristic, although it is true
that tree branching factor is the determining influence, so that mobility
advantages do correlate with good backed-up scores on random evaluations.

I think comparing "alpha-beta with random evaluations" on different games
_would_ give some indication of differences in character between games -
it would be at least as useful as figures such as average branching factor,
total number of positions, etc. I haven't done this though.
(Of course none of these tell you very much!)

Don Beal.
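The mobility correlation Beal describes shows up even in a one-ply toy model. The following is an editorial sketch (not Beal's actual experiment; the branching factors 5 and 35 are arbitrary illustrative choices): with independent uniformly random leaf evaluations, the value backed up through a MAX node grows with the number of children, so random evaluation systematically favours the side with more moves.

```python
# Toy model of "random evaluation rewards mobility": the average value
# backed up through a one-ply MAX node grows with its branching factor,
# since E[max of n U(0,1) draws] = n / (n + 1).
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def backed_up_value(branching_factor: int, trials: int = 20000) -> float:
    """Average value backed up through a MAX node whose children each
    receive an independent uniform(0, 1) 'random evaluation'."""
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(branching_factor))
    return total / trials

low = backed_up_value(5)    # expected near 5/6  ~ 0.83
high = backed_up_value(35)  # expected near 35/36 ~ 0.97
print(low, high)            # the higher-mobility node scores higher
```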
