
Deep Blue - Kasparov - whose side are you on?


Jouni Uski

Mar 19, 1997

I personally am completely on Deep Blue's side, and I was surprised to
read after the last match that everybody was on the human's side! BTW, if
DB loses (clearly) I think we can forget our dreams about a machine
world champion forever...

Jouni Uski

Robert Hyatt

Mar 19, 1997

Jouni Uski (Jouni...@semitechturku.com) wrote:
: I personally am completely on Deep Blue's side, and I was surprised to

: Jouni Uski

I don't know about "forever" (since that's a *long* time... :) )
but if DB can't do it then it isn't going to be done for at least
another 20-30 years probably... because we won't have their speed
for at *least* that long, if ever. However, algorithms are slowly
evolving, and programs are incrementally getting better over time
independent of machine speed. We are really at the 15 year mark for
serious computer chess since the PC is about 15 years old. Before that
it was simply large mainframe vs mainframe types of games, with the
occasional minicomputer thrown into the mix.


One experiment I wish we could do is take a program from the early
80's and run it against a program from the current day, using equal
hardware, to get an idea of the program improvements. I know a person
on ICC that might have a version of blitz version 6, which was a
basic Chess 4.x clone (pretty much exact, except for the evaluation).
Maybe a match between that program and Crafty, both on the same machine,
would be revealing, because that program eventually won the 1983 WCCC
when moved to a Cray to be fast enough to compete with Belle (it was about
1/8 as fast as Belle in 1983, but that was close enough...)


Randy

Mar 19, 1997

Jouni Uski <Jouni...@semitechturku.com> wrote:

>I personally am completely on Deep Blue's side, and I was surprised to
>read after the last match that everybody was on the human's side! BTW, if
>DB loses (clearly) I think we can forget our dreams about a machine
>world champion forever...

>Jouni Uski

personally, i am not on either 'side'. there are aspects of both
sides that can only be admired. whichever side wins, it is a triumph:
Kasparov winning would reaffirm human superiority over the machine. on the
other hand, programs are the result of the labor of human minds, and i
suspect the triumph would be even greater on the programmers' side. still,
it is difficult to even think about choosing a side. i'm surprised it
surprises you that so many people are on the human side :)

i have to disagree with you that if DB loses, the machines' chances
are over. programs have been maturing in the master stages for well
over a decade now. this may seem slow, but i think it is natural,
since programming is such a laborious process. i think the best days
of programs are yet to come. and that means a lot since they are
already among the best chess players in the world.

Randy

Randy

Mar 19, 1997

hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:

>Maybe a match between that program and Crafty, both on the same machine
>would be revealing, because that program eventually won the 1983 WCCC
>when moved to a Cray to be fast enough to compete with Belle (it was about
>1/8 as fast as belle in 1983, but that was close enough...)

i was just looking at the game Belle-Cray Blitz, New York 1983 in my
copy of "How Computers Play Chess". Were both programs running on a
Cray in that game? if so, were they the same model of machine or
different?

one other question i've wondered about for some time. what roles did
Harry Nelson and Bert Gower play on the team and where are they now?

Randy

ShaktiFire

Mar 19, 1997

I remember on ICC , a poll was taken and
the vast majority were for Kasparov. Even
Bob Hyatt, in one of the games, swung his
allegiances to the human. (ICC game discussion).

I am definitely rooting for Deep Blue. Personally I
think most fans of computer chess, especially those
who have been programmers/spectators for a number
of years, must be pulling for Deep Blue. Many must
think like I do... we want to see the computer kick
some butt. (American slang - to win decisively).

Best Wishes

Jesper Antonsson

Mar 19, 1997

In article <5got8n$j...@juniper.cis.uab.edu>, hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
>I don't know about "forever" (since that's a *long* time... :) )
>but if DB can't do it then it isn't going to be done for at least
>another 20-30 years probably... because we won't have their speed
>for at *least* that long, if ever.

Well, if current increases in computer speed continue at the same
rate as usual (about a doubling in 18 months) we'll have something
approximating DB's NPS-figures in about 15 years. That will be a
sequential machine, mind you (no parallel alpha-beta problems), with
hashtables big enough to (almost) keep up, and at least some 6-man
endgame tablebases in RAM.
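That 15-year figure is just the doubling arithmetic. A minimal sketch in C,
assuming, purely for illustration, roughly 200K NPS for a fast 1997 micro
program and roughly 200M NPS for Deep Blue (pick your own numbers, the
formula is the point):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double micro_nps = 200e3;        /* assumed fast 1997 micro program */
    double db_nps    = 200e6;        /* assumed Deep Blue throughput    */
    double doublings = log(db_nps / micro_nps) / log(2.0);
    printf("%.1f doublings -> about %.0f years at 18 months each\n",
           doublings, doublings * 1.5);
    return 0;
}

With a factor of 1000 to make up, that prints about 10 doublings, i.e.
roughly 15 years.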

In 15 years, I'm pretty convinced that the best personal computers
will play chess way better than the current DB. Now, imagine that IBM
continues with the DB project... Since today's Deep Blue doesn't
seem to be too far away from top grandmaster strength, it seems
clear that the basic current approach will produce world-champion
class programs before any real paradigm shift in computer chess
takes place.

One interesting question is for how long this exponential growth in
computer speeds is going to continue. We've not seen any sign
of it slowing down yet, and from what I've heard, the basic approaches
in computer tech allow for at least 20 years of development at the current
rate before any fundamental barriers stop it. It might well be that
practical barriers prevent them from being financially feasible before
that, but there are always other approaches. :-)

/Jesper Antonsson

Robert Hyatt

Mar 19, 1997

ShaktiFire (shakt...@aol.com) wrote:
: I remember on ICC , a poll was taken and

: Best Wishes

I find myself in the "middle" here. I'd like to see Deep Blue
pull this off, just to settle this speed issue once and for all.
However, let me take you back to the 1970's and early 1980's
with me...

Once I moved blitz to the Cray, my main competition for several
years was Belle, and chasing Ken was a lot of fun. In 1982 we
drew with him for first at the ACM event, but later that year
he came out with the "new and improved belle" (old bell was about
5K nodes per second, new belle was around 160K). So the challenge
of chasing Ken was fun. As you probably know, in 1983, Cray Blitz
beat Belle and won the 1983 World Computer Chess Championship in
New York. It was a satisfying event.

Later that year, during our yearly personnel review, my department
chairman was chatting with me during my review, and he asked "what
was the highlight of the year for you?" (I'm sure he knew the
answer already). I responded "Beating Belle in NY of course." No
surprise said he. Then he turned to a more serious question and asked
"what was the low point of the year?" looking, I suppose, for problems
with other faculty, problems in the lab/system area (I was in charge of
the campus computer systems, software support (operating systems),
networking and the like). My response was "Beating Belle in NY." He
thought I misunderstood the question, but I was serious. When you chase
a goal for a long time, and finally reach it, it's a satisfying/
thrilling/etc experience. And then you realize you've done it, you
can't really do it again since the second time around is not so
thrilling, and then the realization sets in that a goal that has driven
you for years is gone. You might remember the astronaut that walked on
the moon, came back, and went into a near-suicidal depression because he'd
trained for that for 20 years, and it was over, and he'd never go back
again.

With that story told, you can also see why I am pulling for Garry as
well. Were it my program playing him, I'd have strong mixed emotions,
because if it were to win, that's *it*. There's not much else in the
way of goals for computer chess. If it loses, there's still the thrill
of the "chase."

In effect, I'm in a pretty good position here, because if DB wins, I'll
be happy (of a sort), and if DB loses, I'll be happy, because "the game is
still afoot" (Sherlock Holmes). Or, you might say, I lose either way as
well. But I'm an eternal optimist. :)


Robert Hyatt

Mar 19, 1997

Jesper Antonsson (jes...@lysator.liu.se) wrote:

: /Jesper Antonsson

Don't know if we can continue for 20 years. A lot of the speed of current
machines is based on parallel processing already, because every decent
micro on the market is a superscalar multiple-instruction-issue-per-cycle
architecture. The 1-gigahertz barrier is an imposing boundary, because a
lot of the electrical properties we know and love at 200MHz change at those
ridiculous frequencies.

If you want to cut the cycle time by 1/2, you have to shorten all the paths
by 1/2, which means the machine has to be 1/8th the size (volume) of the old
machine (something Seymour talked about many times.)

I don't think that taking multiple-instruction-issue beyond the current
3-4 instructions per cycle (depending on whether we talk about the P6 or
the Alpha or whatever) is going to pay off as much as going from 1 to 2
or 1 to 3 did. So that trick is not going to carry us very far it seems.
It's the same as the problems we encounter when using a traditional
multiprocessing architecture.

we're already about 18 months or close to it from the P6/200 introduction
point. The next generation is apparently still a ways out so we may already
be seeing a flattening of this curve.

Memory is a huge problem. This basically killed the performance of the
Cray-2, because memory latency was *so* high on that machine, every memory
reference took forever, unless you were doing vector loads and stores to
take advantage of the streaming to/from memory the machine could do. If
we were to go to a 1ghz machine, every hash probe would take 5x longer than
it does right now because memory speeds have not changed. And there's no
new technology that promises to help (let's don't turn over the SDRAM rock
again... it's good for cache line fills, but it is no faster than the normal
memory of today for random access of a few words).

I hope it don't continue (speed improvements) but if it happens, don't forget
that DB can be re-designed to use the same new technology, and they'll get that
same factor of 1,000 speedup then, over the new hardware of that era. We
are always going to have to face the issue that special-purpose hardware is
going to "toast" general purpose hardware on any single task. That's almost
the definition of such hardware. Of course I can take my P6/200 and fiddle
with graphics images, read news, write C programs. The DB hardware can either
play chess, or it can play chess, or, of course, at times it can also play
chess. :)


Don Fong

Mar 19, 1997

In article <19970319154...@ladder01.news.aol.com>,

ShaktiFire <shakt...@aol.com> wrote:
>I remember on ICC , a poll was taken and
>the vast majority were for Kasparov. Even
>Bob Hyatt, in one of the games, swung his
>allegiances to the human. (ICC game discussion).

allegiance??

>I am definitely rooting for Deep Blue. Personally I
>think most fans of computer chess, especially those
>who have been programmers/spectators for a number
>of years, must be pulling for Deep Blue. Many must
>think like I do... we want to see the computer kick
>some butt. (American slang - to win decisively).

FWIW i want Kasparov to win. i don't think the cheering
section is going to make any difference though. there are
a few reasons why i want the human to win.

1. Kasparov is a human. i am a human. a victory for Kasparov is
a victory for humanity over machines.
2. Kasparov is an individual. whereas the machine is the product of
a huge collaborative effort. a victory for Kasparov is a victory
for the individual over corporations. you can argue that
Kasparov's knowledge also results from collaborative effort, but
not nearly the same scale as was necessary to create DB.
to me it is quite marvellous that an individual can defeat the
combined talents of so many.
3. Kasparov doesn't hide from the competition like DB.
he got where he is by playing thousands of games vs all comers.
he played the best competition and defeated them over the board;
he didn't just wave a bunch of $$.
4. on the day the machine wins, i feel it will be a turning point
for the game and sport of chess. there will be no looking back.
the machines can only get stronger.


Ron Moskovitz

Mar 19, 1997

My question is this:

Why does a loss in this particular match say something about how good
computers will get for the rest of eternity? Why is this match
more significant, as far as the future of chess computing is concerned,
than, say, Kasparov-Deep Blue I?

The computers are just going to continue to get better. Whether or not
there are significant improvements based on further speed increases,
people are spending more and more effort trying to teach machines
how to understand chess. Obviously, no one will ever teach a machine
as much about chess as the best human knows, but, given the relatively
small size of the gap now, I think the assumption that a computer's
combination of chess knowledge and search speed will never bridge it
is unsupportable, regardless of the outcome of K-DB II.

-Ron

Chris Mayer

Mar 19, 1997

Well said.

Personally, I will most likely abandon chess and start learning GO.
My first goal was to beat myself, and I've already done that. My
second goal was to get a draw with Crafty, and even though my program
gets better, so does yours! (Maybe I should lower my goals to drawing
with an old buggy version of Crafty?)

Chris Mayer


Robert Hyatt

Mar 19, 1997

Chris Mayer (cma...@ix.netcom.com) wrote:
: Well said.

: Chris Mayer

Two things to remember. First, what is your machine? If you are as fast
(or faster) than Crafty, then you should draw/win a reasonable number of
games. If you are slower, you are just banging your head into a wall. I
did this for two months with Ferret while I waited on my shiny new P6 while
Bruce already had one. I don't know if I won or drew a single game during
that period of time.

Second, a game here and there is not a good test. I had Crafty logged on a
server I won't mention the other night, and watched someone come in with a
copy of Genius 5. He was playing 15 15 games over and over. The first 3
were losses. The 4th was a draw. The next game was a loss, and it was late
enough, and it made me sick enough, that I went to bed. I returned to find
that before this match ended, Crafty finished with +1. The machine the
opponent was using was a P5/200MMX, but Genius is an oddball program and I
don't know how it does on the MMX compared to the P6, so I can't say whether
this was a fair match or not.

The moral is, however, that 3 losses in a row don't mean a lot. 30 in a row
is cause for alarm. 300 might be cause for a career change. :) I still lose
too many to Ferret. At times I get close to catching up, then he spurts
ahead again with some new trick. The information interchange with Bruce
makes it worthwhile to pound my head every now and then, however... :)

Komputer Korner

Mar 19, 1997

Jouni Uski wrote:
>
> I personally am completely on Deep Blue's side, and I was surprised to
> read after the last match that everybody was on the human's side! BTW, if
> DB loses (clearly) I think we can forget our dreams about a machine
> world champion forever...
>
> Jouni Uski

Forever is a long time!!! The machines will triumph one day, but it
will be a sad day for mankind unless the machine can solve it
perfectly, in which case it will be a sad day for chess. Either way,
it is not something to cherish. Perhaps the perfect scenario was
what happened in checkers, where the human world champion triumphed in
the 1st match of 40 games, but then died during the rematch after
an opening spate of draws. The checkers program is now the world's
strongest checkers entity, but we will never know if it would ever have
been able to defeat the deceased world champion.
--
Komputer Korner

The inkompetent komputer.

Howard Exner

Mar 20, 1997


I'm also rooting for Kasparov. I predict another 4-2 result for
Kasparov.
Don King might want to consider being Kasparov's promoter. He could
have Garry throw the match in the last round. He could also train
Garry to throw a wild tantrum and perhaps break a monitor or two.
Media .... Rematch ...... Money.

Robert Hyatt

Mar 20, 1997

Peter W. Gillgasch (gil...@ilk.de) wrote:
: [ this should be a comp.arch post, maybe... ]

: Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:

: > Don't know if we can continue for 20 years. A lot of the speed of current
: > machines is based on parallel processing already, because every decent
: > micro on the market is a superscalar multiple-instruction-issue-per-cycle
: > architecture. The 1-gigahertz barrier is an imposing boundary, because a
: > lot of the electrical properties we know and love at 200MHz change at those
: > ridiculous frequencies.

: I think the 1 ghz barrier is a number you pulled from somewhere 8^)
: Wasn't Seymour at 2 ghz internal clock rate (Cray 4)? True, he used
: GaAs, which seems like a hell of a bad idea to me for the consumer
: market <grin> (imagine tons of toxic electronic waste, much more toxic
: than everything we have right now...). Oh yeah, and he went broke. US
: gov folks should be kicked for letting this happen...
:

I'm not really sure. So far as I remember, the Cray-4 was going to be a
1ns machine, but I could easily be mistaken since I didn't use that family
of machines at all other than for fun. 500mhz on the T90 has been quite a
stretch...

: > If you want to cut the cycle time by 1/2, you have to shorten all the paths
: > by 1/2, which means the machine has to be 1/8th the size (volume) of the old
: > machine (something Seymour talked about many times.)

: Was at a systems trade fair lately. Prize for best catering goes to the
: girls of the Motorola booth. Looked at their 604e stuff, wild
: performance monitoring tools etc. I recall that I was surprised as heck
: how small the 604e is. *Really* small. About this size (assuming 9 point
: font :)

: XXXX
: XXXX
: XXXX
: XXXX

: Yes, package included. Pretty little thing that 604e.

: > I don't think that taking multiple-instruction-issue beyond the current
: > 3-4 instructions per cycle (depending on whether we talk about the P6 or
: > the Alpha or whatever) is going to pay off as much as going from 1 to 2
: > or 1 to 3 did. So that trick is not going to carry us very far it seems.
: > It's the same as the problems we encounter when using a traditional
: > multiprocessing architecture.

: I agree. Chess programs have little instruction level parallelism
: (ignoring bit ops, which are of course some sort of parallelism) with
: regard to overlapping instructions.

: > we're already about 18 months or close to it from the P6/200 introduction
: > point. The next generation is apparently still a ways out so we may already
: > be seeing a flattening of this curve.

: Hum. I am still thinking that a vector micro would be nice. Original
: Crafty version comes to mind, with attacked_from and attacked_two in two
: vector registers. I am sure this would smoke. There is this vector
: project at Berkeley I think (odd name like T0), which adds vectors to
: some common ISA (Mips or Sparc, can't remember). Can try to dig up a
: reference if needed.

Have you seen the i860 family? Looks like a baby Cray on a chip...

: What's your opinion of the future ? Even more complicated RISCs
: (internally more a data flow machine), VLIW, vector ? Pseudo stack
: machines marketed as Java chips (ROTFL) ?

Forget the damned JAVA chips. :) Any language without pointers, structures
and unions is a toy. :) and even with the smiley I'm serious...

future is hard to predict. Dec planned on 2ghz as the upper bound on the
alpha. I don't believe it, but we'll see. They are already 1/4 of the way
there of course and the machine is a hoss. VLIW is just another flavor of
superscalar when you get right down to it, and a very old flavor where the
instructions are packed into chunks at compile time, rather than at exec
time (as in the P6).

My favorite chip of the day (from one who teaches computer architecture)
is the P6. It's done about as well as I could imagine it being done, and
it's far better than the rest. If they would take that design, throw out
the X86 bullshit like the flags register and stuff that causes internal
bottlenecks, that would probably be the best architectural design I could
imagine. Unfortunately it is saddled with flags, 8 registers, a stupid
instruction set layout, and a host of things that prove the Intel guys are
a bunch of geniuses, because they got around things that other vendors
chose to eliminate. The alpha is lean and mean, for example, as is the IBM
RS chip and the MIPS group. All have a better design, but also didn't
inherit a huge number of old machines to be compatible with. :)

Will we see a gigahertz chip? Don't know. I really don't see how, but
the engineering types say yes. I do believe the curve is going to slow,
although it has been a great ride so far. :)

:
: > Memory is a huge problem. This basically killed the performance of the
: > Cray-2, because memory latency was *so* high on that machine, every memory
: > reference took forever, unless you were doing vector loads and stores to
: > take advantage of the streaming to/from memory the machine could do. If
: > we were to go to a 1ghz machine, every hash probe would take 5x longer than
: > it does right now because memory speeds have not changed.

: So you need to do away with hashing non essential stuff :)

already have I hope. :)

: > And there's no new technology that promises to help (let's don't turn over
: > the SDRAM rock again... it's good for cache line fills, but it is no faster
: > than the normal memory of today for random access of a few words).

: I agree. Random access patterns have to be avoided. They slow Demon down by
: 30-40 % on an 80 mhz system; the very thought of what this would do to the
: performance on a 200+ mhz system makes me want to puke. I could calculate
: it, just to make myself feel seasick 8^)


Peter W. Gillgasch

Mar 20, 1997

[ this should be a comp.arch post, maybe... ]

Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:

> Don't know if we can continue for 20 years. A lot of the speed of current
> machines is based on parallel processing already, because every decent
> micro on the market is a superscalar multiple-instruction-issue-per-cycle
> architecture. The 1-gigahertz barrier is an imposing boundary, because a
> lot of the electrical properties we know and love at 200MHz change at those
> ridiculous frequencies.

I think the 1 ghz barrier is a number you pulled from somewhere 8^)
Wasn't Seymour at 2 ghz internal clock rate (Cray 4)? True, he used
GaAs, which seems like a hell of a bad idea to me for the consumer
market <grin> (imagine tons of toxic electronic waste, much more toxic
than everything we have right now...). Oh yeah, and he went broke. US
gov folks should be kicked for letting this happen...

[ASCII sketch of the 604e package size - quoted in full, with its
description, in the reply above]

What's your opinion of the future? Even more complicated RISCs
(internally more a data flow machine), VLIW, vector? Pseudo stack
machines marketed as Java chips (ROTFL)?

> Memory is a huge problem. This basically killed the performance of the
> Cray-2, because memory latency was *so* high on that machine, every memory
> reference took forever, unless you were doing vector loads and stores to
> take advantage of the streaming to/from memory the machine could do. If
> we were to go to a 1ghz machine, every hash probe would take 5x longer than
> it does right now because memory speeds have not changed.

So you need to do away with hashing non essential stuff :)

> And there's no new technology that promises to help (let's don't turn over
> the SDRAM rock again... it's good for cache line fills, but it is no faster
> than the normal memory of today for random access of a few words).

I agree. Random access patterns have to be avoided. They slow Demon down by
30-40 % on an 80 mhz system; the very thought of what this would do to the
performance on a 200+ mhz system makes me want to puke. I could calculate
it, just to make myself feel seasick 8^)

-- Peter

May God grant me the serenity to accept the things I cannot change,
courage to choke the living shit out of those who piss me off,
and wisdom to know where I should hide the bodies...

jmc...@aol.com

Mar 20, 1997

In article <332F8C...@semitechturku.com>, Jouni Uski <Jouni...@semitechturku.com> writes:

>I personally am completely on Deep Blue's side, and I was surprised to
>read after the last match that everybody was on the human's side! BTW, if
>DB loses (clearly) I think we can forget our dreams about a machine
>world champion forever...
>
>Jouni Uski

If only that were true.

Tord Kallqvist Romstad

Mar 20, 1997

Chris Mayer (cma...@ix.netcom.com) wrote:
: Well said.

: Personally, I will most likely abandon chess and start learning GO.

Good idea. It's much more fun than chess anyway. :-)

Tord

: Chris Mayer


Tom C. Kerrigan

Mar 20, 1997

Jesper Antonsson (jes...@lysator.liu.se) wrote:

> Well, if current increases in computer speed continue at the same
> rate as usual (about a doubling in 18 months) we'll have something
> approximating DB's NPS-figures in about 15 years. That will be a
> sequential machine, mind you (no parallel alpha-beta problems), with
> hashtables big enough to (almost) keep up, and at least some 6-man
> endgame tablebases in RAM.

Technicality, but compare RAM 10 years ago (1 to 2 MB fairly standard, if
I recall) to RAM today (16 MB fairly standard). Now compare CPU speed
(386/16 to Pentium/166). Now you understand why I don't think we'll have 6
man tablebases in RAM anytime soon. :)

It may be possible to maintain CPU improvement for 15 years if molecule
sized transistors are realized (Feynman stuff) but otherwise I think it
might get very tricky...

Cheers,
Tom

Don Fong

Mar 20, 1997

In article <5gp7au$m...@juniper.cis.uab.edu>,

Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:
>we're already about 18 months or close to it from the P6/200 introduction
>point. The next generation is apparently still a ways out so we may already
>be seeing a flattening of this curve.

of course, that's assuming you are talking about a single processor.
imagine if you could harness the power of a fraction of the PC's on the
internet. this would surely dwarf even DB.
suppose you could automate the "arbiter" role in Ingo Althofer's
3-hirn. the 3-hirn composite supposedly can play stronger than any one
of 3 individual components. note that there is very little communication
bandwidth used between the 3 components. (:-) what would happen if
you cascaded multiple levels of 3-hirns (or N-hirns) into a tree
structure?


Enrique Irazoqui

Mar 20, 1997

Don Fong <df...@cse.ucsc.edu> wrote in article
<5gp6sk$4...@darkstar.ucsc.edu>...
> In article <19970319154...@ladder01.news.aol.com>,

> FWIW i want Kasparov to win. i don't think the cheering
> section is going to make any difference though. there are
> a few reasons why i want the human to win.

I want Kasparov to win and I have very little doubt that he will, and by a
lot. I want him to win because he is the better player.



> 1. Kasparov is a human. i am a human. a victory for Kasparov is
> a victory for humanity over machines.

That's where I disagree. Programmers are human too. I consider the strength
of current programs a victory of the human mind.

Enrique


Moritz Berger

Mar 20, 1997

On 20 Mar 1997 03:26:10 GMT, hy...@crafty.cis.uab.edu (Robert Hyatt)
wrote:
< snip >

>Forget the damned JAVA chips. :) Any language without pointers, structures
>and unions is a toy. :) and even with the smiley I'm serious...

It's very easy ... Just write your own Java compiler ... I mean,
you're THE BOB HYATT, so what's the point? ;-)))

Moritz

-------------
Moritz...@msn.com

Rolf Tueschen

Mar 20, 1997

"Howard Exner" <hex...@dlcwest.com> wrote:


I agree with you. But Howard, following the famous Alabama computer chess
etiquette of good and correct writing on usenet, your speculating about Don
King's personal character is not in order. Following the same law, you should
try to write only what you are damn sure about. Think about the
consequences if everybody just wrote his opinion right down the line ...

Rolf <European department of BH ACCUGCW> Tueschen


Francesco Di Tolla

Mar 20, 1997

Robert Hyatt wrote:

> future is hard to predict. Dec planned on 2ghz as the upper bound on the
> alpha.

Wow, that will make coffee too, just put the pot over the chip while it's on
:-)


> I don't believe it, but we'll see. They are already 1/4 of the way
> there of course and the machine is a hoss. VLIW is just another flavor of
> superscalar when you get right down to it, and a very old flavor where the
> instructions are packed into chunks at compile time, rather than at exec
> time (as in the P6).

I've just read that new experiments show that the behavior of electrons
is still reasonable when scaling further down. Now we have .35/.25 micron
technology; the current approach seemed unable to go beyond .1, but
it seems that you don't need 10000 electrons to have them behave
statistically well enough to be able to make a new chip. Apparently
they found that you can really approach the quantum limit. But
we'll never have a device triggered by half an electron. (Source:
PC Plus 4/97)

> My favorite chip of the day (from one who teaches computer architecture)
> is the P6. It's done about as well as I could imagine it being done, and
> it's far better than the rest.

My favourite would be the R10000, if I had money enough. But on April 2nd the
AMD K6 will show up and smash anything the PPro can do; apparently a 250 MHz
part is coming soon (3x83 MHz) and they are working on a chipset
with a 100 MHz data bus. That will give one of the last big boosts before
the century turns over. I suspect that in 1998 we will see 300 MHz chips,
but MMX extensions will not give much, and the market is going there.
I doubt we will see a PC more than two times faster than a PPro 200 in this
century. Not because of the chip. Because of the rest.
bye
Franz

--
Francesco Di Tolla, Center for Atomic-scale Materials Physics
Physics Department, Build. 307, Technical University of Denmark,
DK-2800 Lyngby, Denmark, Tel.: (+45) 4525 3208 Fax: (+45) 4593 2399
mailto:dit...@fysik.dtu.dk http://www.fysik.dtu.dk/persons/ditolla.html

Jesper Antonsson

Mar 21, 1997

In article hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:
>Jesper Antonsson (jes...@lysator.liu.se) wrote:
>: Well, if current increases in computer speed continue at the same
>: rate as usual (about a doubling in 18 months) we'll have something
>: approximating DB's NPS-figures in about 15 years. That will be a
>: sequential machine, mind you (no parallel alpha-beta problems), with
>: hashtables big enough to (almost) keep up, and at least some 6-man
>: endgame tablebases in RAM.
>
>: One interesting question is for how long this exponential growth in
>: computer speeds is going to continue. We've not seen any sign
>: of it slowing down yet, and from what I've heard, the basic approaches
>: in computer tech allow for at least 20 years of development at the current
>: rate before any fundamental barriers stop it. It might well be that
>: practical barriers prevent them from being financially feasible before
>: that, but there are always other approaches. :-)
>
>Don't know if we can continue for 20 years. A lot of the speed of current
>machines is based on parallel processing already, because every decent
>micro on the market is a superscalar multiple-instruction-issue-per-cycle
>architecture. The 1-gigahertz barrier is an imposing boundary, because a
>lot of the electrical properties we know and love at 200MHz change at those
>ridiculous frequencies.

I don't really believe in a 1-gigahertz barrier. From what I've read, we can
expect to hit the transistor-technology roof when we have an approximate
speedup of 10^5 compared to 1990, which should translate to 10^4 compared
to now. The fundamental reason for this is that there isn't anything that
says we can't push the length of one transistor down to 10 nm without
suffering from tunneling effects.

>If you want to cut the cycle time by 1/2, you have to shorten all the paths
>by 1/2, which means the machine has to be 1/8th the size (volume) of the old
>machine (something Seymour talked about many times.)

I hope you don't forget that if we cut the cycle time by 1/2 and shorten the
paths accordingly, we can also use 4 times the number of transistors on the
same surface, thus have a potential 8-fold increase in speed...

>we're already about 18 months or close to it from the P6/200 introduction
>point. The next generation is apparently still a ways out so we may already
>be seeing a flattening of this curve.

Are we? When I bought my machine about 1.5 years ago, the hottest commonly
sold machine was the P-133. Well, I wouldn't draw any quick conclusions either
way. This is close to a religious debate, but we have an advantage here: time
will tell. :-) As I said, I expect to have a DB-strength chess-playing computer
on my desk well within 15 years.

>Memory is a huge problem. This basically killed the performance of the
>Cray-2, because memory latency was *so* high on that machine, every memory
>reference took forever, unless you were doing vector loads and stores to
>take advantage of the streaming to/from memory the machine could do. If
>we were to go to a 1ghz machine, every hash probe would take 5x longer than
>it does right now because memory speeds have not changed. And there's no
>new technology that promises to help (let's don't turn over the SDRAM rock
>again... it's good for cache line fills, but it is no faster than the normal
>memory of today for random access of a few words).

I guess PCs will always be memory-starved, but the on-chip cache will grow
about proportionally to the processor speed. Not much of a comfort, but
anyway.

>I hope it don't continue (speed improvements) but if it happens, don't forget

>[snip]

Why? Wouldn't you like to have a 100 million NPS to burn? I know I would...
;-)

/Jesper Antonsson

Robert Hyatt

Mar 21, 1997

Moritz Berger (Moritz...@msn.com) wrote:
: On 20 Mar 1997 03:26:10 GMT, hy...@crafty.cis.uab.edu (Robert Hyatt)
: wrote:
: < snip >
: >Forget the damned JAVA chips. :) Any language without pointers, structures
: >and unions is a toy. :) and even with the smiley I'm serious...

: It's very easy ... Just write your own Java compiler ... I mean,
: you're THE BOB HYATT, so what's the point? ;-)))

: Moritz

Yes... but if I wrote a JAVA compiler... it still wouldn't have pointers,
structures and unions. Else we wouldn't call it JAVA, we'd call it "C"... :)

Robert Hyatt

Mar 21, 1997

Don Fong (df...@cse.ucsc.edu) wrote:
: In article <5gp7au$m...@juniper.cis.uab.edu>,
: Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:
: >we're already about 18 months or close to it from the P6/200 introduction
: >point. The next generation is apparently still a ways out so we may already
: >be seeing a flattening of this curve.

: of course, that's assuming you are talking about a single processor.
: imagine if you could harness the power of a fraction of the PC's on the
: internet. this would surely dwarf even DB.
: suppose you could automate the "arbiter" role in Ingo Althofer's
: 3-hirn. the 3-hirn composite supposedly can play stronger than any one
: of 3 individual components. note that there is very little communication
: bandwidth used between the 3 components. (:-) what would happen if
: you cascaded multiple levels of 3-hirns (or N-hirns) into a tree
: structure?


It's a difficult problem. I've been fooling around with this stuff for a
long time, and progress has been slow. Using 16 processors on a C90 was
non-trivial and the algorithm took a couple of years to develop and a whole
year to debug and tune. 32 would be worse. 256 would be really worse.
N,000 would be pure hell... :)

The idea has occurred to me however, and I'm going to develop a generic
parallel processing hook in crafty so that several of us can play with
different ideas...


Robert Hyatt

Mar 21, 1997

Jesper Antonsson (jes...@lysator.liu.se) wrote:

Eventually a 10nm transistor might be doable... but there's a lot of
fabrication development between here and there. And then there are other
issues, i.e. electrical properties at 1GHz, or even at any high frequency.
Things conduct that shouldn't, things don't that should... resistance,
capacitance and inductance become more annoying...


: >If you want to cut the cycle time by 1/2, you have to shorten all the paths
: >by 1/2, which means the machine has to be 1/8th the size (volume) of the old
: >machine (something Seymour talked about many times.)

: I hope you don't forget that if we cut the cycle time by 1/2 and shorten the
: paths accordingly, we can also use 4 times the number of transistors on the
: same surface, thus have a potential 8-fold increase in speed...

Possibly, but a lot of that 8-fold is then going to be in parallelism, which is
not so easy to use. I.e. a 4-way superscalar machine is one thing, but don't
toss me a 32-way superscalar... I doubt that I can feed 32 pipes with any sort
of chess algorithm...


: >we're already about 18 months or close to it from the P6/200 introduction
: >point. The next generation is apparently still a ways out so we may already
: >be seeing a flattening of this curve.

: Are we? When I bought my machine about 1.5 years ago, the hottest commonly
: sold machine was the P-133. Well, I wouldn't draw any quick conclusions either
: way. This is close to a religious debate, but we have an advantage here: time
: will tell. :-) As I said, I expect to have a DB-strength chess-playing computer
: on my desk well within 15 years.

I hope you are right, of course. But I personally believe it won't happen,
because Deep Blue is so far beyond any supercomputer technology we have right
now... and for normal high-performance type codes, the 1976-era Cray is still
faster than the fastest micro (P6/200 type) we have today...


: >Memory is a huge problem. This basically killed the performance of the
: >Cray-2, because memory latency was *so* high on that machine, every memory
: >reference took forever, unless you were doing vector loads and stores to
: >take advantage of the streaming to/from memory the machine could do. If
: >we were to go to a 1ghz machine, every hash probe would take 5x longer than
: >it does right now because memory speeds have not changed. And there's no
: >new technology that promises to help (let's don't turn over the SDRAM rock
: >again... it's good for cache line fills, but it is no faster than the normal
: >memory of today for random access of a few words).

: I guess PCs will always be memory-starved, but the on-chip cache will grow
: about proportionally to the processor speed. Not much of a comfort, but
: anyway.

: >I hope it don't continue (speed improvements) but if it happens, don't forget
: >[snip]

: Why? Wouldn't you like to have a 100 million NPS to burn? I know I would...
: ;-)

Something's wrong with the above statement. Obviously, for those that know
my "need for speed", I hope machines triple in speed every 6 months... :)


: /Jesper Antonsson

Jesper Antonsson

Mar 21, 1997

In article <5grokp$7...@merlin.pn.org>, kerr...@merlin.pn.org (Tom C. Kerrigan) wrote:
>Jesper Antonsson (jes...@lysator.liu.se) wrote:
>
>> Well, if current increases in computer speed continue at the same
>> rate as usual (about a doubling in 18 months) we'll have something
>> approximating DB's NPS-figures in about 15 years. That will be a
>> sequential machine, mind you (no parallel alpha-beta problems), with
>> hashtables big enough to (almost) keep up, and at least some 6-man
>> endgame tablebases in RAM.
>
>Technicality, but compare RAM 10 years ago (1 to 2 MB fairly standard, if
>I recall) to RAM today (16 MB fairly standard). Now compare CPU speed
>(386/16 to Pentium/166). Now you understand why I don't think we'll have 6
>man tablebases in RAM anytime soon. :)
[snip]

What you are saying is that RAM has made at least an 8-fold increase in
10 years. If that "trend" continues, we'll have a factor of 23 in 15 years,
but 64 if we use your other figure (1 MB ten years ago). Some of the machines
in the last WMCCC used 256 MB RAM, right? In fifteen years, with the
figures you mentioned, and the same growth rate, the WMCCC machines
could have 6-16 GB of RAM. And my belief is that we're being pretty
conservative here. The old formula of doubling every 1.5 years would make
for a 1024-fold increase in 15 years. That's 256 GB of RAM in the 2011
WMCCC machines...

5-256 GB should be enough for at least the simplest 6-man tablebases, right?
Have there been any estimates done regarding the total size of the 6-man and
above tablebases?

Well, this crystal-ball activity of mine is becoming somewhat silly, so I'll
stop arguing about computer speeds and memory sizes for now. We'll see. The
progress is going to slow down of course; the question is when.

/Jesper Antonsson (I am one, and God is my prophet.)

Stefano Gemma

Mar 21, 1997

ShaktiFire <shakt...@aol.com> wrote in article
<19970319154...@ladder01.news.aol.com>...

> I remember on ICC , a poll was taken and
> the vast majority were for Kasparov. Even
> Bob Hyatt, in one of the games, swung his
> allegiances to the human. (ICC game discussion).
>
> I am definitely rooting for Deep Blue. Personally I
> think most fans of computer chess, especially those
> who have been programmers/spectators for a number
> of years, must be pulling for Deep Blue. Many must
> think like I do... we want to see the computer kick
> some butt. (American slang - to win decisively).

Please, don't shoot me! :-)))

I think that the interest in this match is something like what we saw for the
Fischer/Spassky match. In 1972 there was a strong American
player against a strong Russian player. In 1996 there is a strong American
company (IBM) against a strong Russian player. Maybe there will be no more
American players who could win against a Russian player? ;-)))

Ciao!

PS: don't start flames! look at smiling faces! :-)))))))))))))))))))


Stefano Gemma

Mar 21, 1997

Robert Hyatt <hy...@crafty.cis.uab.edu> wrote in article
<5gp4mg$m...@juniper.cis.uab.edu>...
> ShaktiFire (shakt...@aol.com) wrote:
> : I remember on ICC , a poll was taken and

> : the vast majority were for Kasparov. Even
> : Bob Hyatt, in one of the games, swung his
> : allegiances to the human. (ICC game discussion).
[...]

> Once I moved blitz to the Cray, my main competition for several
> years was Belle, and chasing Ken was a lot of fun. In 1982 we
[...]

and our goal is to beat Crafty! ;-)

Ciao!

brucemo

Mar 21, 1997

Jesper Antonsson wrote:

> 5-256 GB should be enough for at least the simplest 6-man tablebases, right?
> Have there been any estimates done regarding the total size of the 6-man and
> above tablebases?

The number of positions in an N-man tablebase, assuming no pawns, and assuming
you don't get too crazy about trying tricky ways to reduce this value, is
10*64^(N-1).

The reason you have a "10" in there is that you can save some space by
reflecting the board so that one of the pieces is in the triangle delimited by
a1..d1..d4.

In a 6-piece tablebase this works out to 10,737,418,240.

Assume a byte per position (which probably isn't valid, but it may depend upon
how you generate these things), and that's pretty big. Assume that you have
white-to-move and black-to-move and you've just doubled it.

If you have at least one pawn, you can't do this funky reflection because you
end up trying to move a pawn sideways. You can still reflect through one axis,
which means that you can get away with constraining one of the pawns to the
region a2..d2..d7..a7, which is 24 squares.

So the formula for an N-man database with at least one pawn is 24*64^(N-1).

In a 6-piece tablebase this works out to 25,769,803,776, which is beyond
enormous, and still doesn't take into account possible en-passant captures.

You can reduce this size by forcing a pawn to be on one file, or even on one
square, but it's still big.

So as you can see, unless you've got some serious bucks for hardware, currently
the biggest practical tablebases are 5-man, and this is probably how things will
stay for a while.

bruce

Stefano Gemma

Mar 21, 1997

Komputer Korner <kor...@netcom.ca> wrote in article
<333080...@netcom.ca>...

> Jouni Uski wrote:
> >
> > I personally am completely on Deep Blue's side, and I was surprised to
[...]
> > world champion forever...

> Forever is a long time!!! The machines will triumph one day, but it
> will be a sad day for mankind unless the machine can solve it
> perfectly, in which case it will be a sad day for chess. Either way,
[...]

In those days, we could just add two more columns and two more pieces to the
chessboard. Supposing that we add two pieces with about 10 moves each,
the complexity of the game rises by a factor of almost 1.5 per ply.
This would hold off any brute-force program for a lot of years... and then
we could add another two columns and pieces! ;-)

Ciao!

Robert Hyatt

Mar 21, 1997

Stefano Gemma (stefan...@spiderlink.it) wrote:
: Robert Hyatt <hy...@crafty.cis.uab.edu> scritto nell'articolo

: Ciao!


Only problem is, I'm not *nearly* so hard to catch as Ken was... :)


Chris Mayer

Mar 21, 1997

On Fri, 21 Mar 1997 01:53:53 -0800, brucemo <bru...@nwlink.com>
wrote:

>The number of positions in a N-man tablebase, assuming no pawns, and assuming
>you don't get too crazy about trying tricky ways to reduce this value, is
>10*64^(N-1).
>
>The reason you have a "10" in there is that you can save some space by
>reflecting the board so that one of the pieces is in the triangle delimited by
>a1..d1..d4.
>
>In a 6-piece tablebase this works out to 10,737,418,240.

Instead of allowing a king to stay in the triangle and have values
from 0 - 9, I have always looked at both kings together and assigned a
value from 0 - 461 (462 ways to set 2 kings down, eliminating
symmetry). A simple table with 462 entries gives me the 2 squares, as
opposed to using a table with 10 entries giving a single square. This
reduces my tables to 462*64^(N-2). This makes the difference in
allowing me to keep my 5-man tables in memory for the endgame. I also
squeeze in 5 values per byte, only storing win/lose/draw info. The
40-move rule has been built into the tables, so you don't really need the
number of moves yet. This will just get you where you want to be. A
second tablebase on a CD plays the moves once the 5-man position is
actually reached. The resulting file for a 5-man class (without
pawns) is now only a little more than 23 MB (WTM only).
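A quick back-of-envelope sketch of those sizes (462 king-pair index, 64
squares for each remaining man, 5 win/lose/draw values packed per byte), just
to make the arithmetic above concrete:

#include <stdio.h>

int main(void)
{
    int n;
    for (n = 4; n <= 6; n++) {
        unsigned long long positions = 462;   /* king-pair index */
        int i;
        for (i = 0; i < n - 2; i++)
            positions *= 64;                  /* the other men   */
        unsigned long long bytes = (positions + 4) / 5;  /* 5 trits/byte */
        printf("%d men: %llu positions, %.1f MB\n",
               n, positions, bytes / (1024.0 * 1024.0));
    }
    return 0;
}

The 5-man line comes out to about 23.1 MB, which is right where the pawnless
WTM file size mentioned above lands.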

One thing I've been thinking about is a way to keep 6-man classes in
memory. When your search hits a 6-man ending, you note the position
of the kings. Because captures are usually done by non-kings, other
hits into the 6-man table have a high probability that the kings are
still in the same position. This makes for a simple 10*64^(N-3) table
given static kings. This could give a 6-man table with only 512K,
and a 7-man with 32M. If we assume only 1 static king, we are still
back to the 5-man size. I haven't tested any of this idea yet, so any
input on my assumptions about king movement is appreciated.

Chris Mayer


Marcel van Kervinck

Mar 21, 1997

Chris Mayer (cma...@ix.netcom.com) wrote:
> opposed to using a table with 10 entries giving a single square. This
> reduces my tables to 462*64^(N-2). This makes the difference in
> allowing me to keep my 5 man tables in memory for the endgame. I also
> squeeze in 5 values per byte, only storing win/lose/draw info.

If you want to keep large chunks of a TB in memory, you might even
consider splitting it in two, and use only 1 bit per position.
One database gives win/nowin, another gives loss/noloss. Given alpha-
beta search, you usually only need one of the two.

BTW, 5 'trits' in 8 bits is very close to optimal coding, given
equal probabilities for each value:
ln(3^5) is nearly ln(2^8).
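A minimal sketch of that base-3 packing (mine, just to make the coding
argument concrete): five win/lose/draw values, each 0, 1 or 2, fit in one
byte because 3^5 = 243 <= 256.

#include <assert.h>

/* pack five values in 0..2 into one byte; v[0] becomes the lowest digit */
unsigned char pack5(const unsigned char v[5])
{
    int i;
    unsigned char b = 0;
    for (i = 4; i >= 0; i--)
        b = (unsigned char)(b * 3 + v[i]);
    return b;
}

/* recover value number i (0..4) from a packed byte */
unsigned char unpack(unsigned char b, int i)
{
    while (i-- > 0)
        b /= 3;
    return b % 3;
}

int main(void)
{
    unsigned char v[5] = {2, 0, 1, 1, 2};
    unsigned char b = pack5(v);
    int i;
    for (i = 0; i < 5; i++)
        assert(unpack(b, i) == v[i]);
    return 0;
}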

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| mar...@stack.nl

Tom C. Kerrigan

Mar 21, 1997

Don Fong (df...@cse.ucsc.edu) wrote:
> In article <5gp7au$m...@juniper.cis.uab.edu>,
> Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:
> >we're already about 18 months or close to it from the P6/200 introduction
> >point. The next generation is apparently still a ways out so we may already
> >be seeing a flattening of this curve.
> of course, that's assuming you are talking about a single processor.
> imagine if you could harness the power of a fraction of the PC's on the
> internet. this would surely dwarf even DB.

Words that I see way too often. The number of PC's on the Internet at any
given time is not even remotely close to the number of users quoted in
popular magazines these days, and then if you cut out the machines that
aren't appropriate for such a project (way too slow to bother, no real
operating system, etc.) then you probably still have a fair sized number,
but only a tiny fraction of what you might think. Now getting everybody to
participate in such a project is downright impossible. I just can't
imagine your scenario.

Anyway, one of the best parallel algorithms today gets a speedup of around
200 on a 1k processor machine, and at that point you are already seeing
some major tapering off... My guess is that anything beyond 10,000
processors is next to useless...

BTW, what does "arbeiter" (sp?) mean in English? I'm trying as hard as I
possibly can to remember and just draw blanks... "worker" in German,
anyway...

Cheers,
Tom

Andrew Tridgell

Mar 22, 1997, to Robert Hyatt

Robert Hyatt wrote:
> It's a difficult problem. I've been fooling around with this stuff for a
> long time, and progress has been slow. Using 16 processors on a C90 was
> non-trivial and the algorithm took a couple of years to develop and a whole
> year to debug and tune. 32 would be worse. 256 would be really worse.
> N,000 would be pure hell... :)
>
> The idea has occurred to me however, and I'm going to develop a generic
> parallel processing hook in crafty so that several of us can play with
> different ideas...

At the risk of boring people with more KnightCap info I thought
I'd describe the parallel algorithm I use. It seems to be very
effective and is absolutely trivial to program.

Most parallel chess algorithms attempt to parallelise the chess
tree search. I tried this (in a very simple fashion) and found
it to be terrible. The communications costs and scheduling
problems were horrible, and fought with the serial nature
of alpha-beta. A parallel slowdown was not uncommon!

Instead I parallelise across the evaluation space. All CPUs
search using the same algorithm starting at the same
position. Sounds silly? It actually works!

I use only null-window searches, and the CPUs are set to
search using null windows with a spacing of one evaluation
point about the central value (taken from the MTD(f) algorithm).
Thus the CPUs are searching a "fan" of null windows around
the best guess. With our machine this means we are doing
simultaneous null-window searches on the 15 evaluation
values closest to the current best guess.

A global hash table is used to communicate results between
the CPUs, so they don't end up replicating too much work.
The individual CPUs revert to a local hash table when they are 2 ply
into the quiescence search.

When a CPU finishes a null-window search a global upper and lower
bound on the overall evaluation is updated from the result. Then
any CPUs which are searching outside these bounds are reassigned
to new null-window searches around the new best guess. If there
aren't any spare evaluation regions in the window then multiple
CPUs are used on the inner evaluation region. They help
each other because the move ordering of each CPU is different (as
they use local, not global, history and killer heuristics).
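For concreteness, here is a rough sequential sketch (mine, not KnightCap's
code) of that control loop in C. nullwindow_search() is only a stand-in for a
zero-width alpha-beta search sharing a transposition table, and in the real
thing the NCPU searches run concurrently rather than in a for loop:

#include <stdio.h>

#define NCPU 15

static int true_score = 37;   /* stub: pretend this is the exact value */

/* stand-in for alpha-beta with window (gamma-1, gamma); a real search
   returns a bound that fails high (>= gamma) or low (< gamma) */
static int nullwindow_search(int gamma)
{
    return true_score;
}

static int fan_search(int guess)
{
    int lower = -30000, upper = 30000;
    while (lower < upper) {
        int k;
        for (k = 0; k < NCPU && lower < upper; k++) {
            int gamma = guess - NCPU / 2 + k;     /* fan of test values */
            if (gamma <= lower || gamma > upper)
                continue;                         /* already decided    */
            int score = nullwindow_search(gamma);
            if (score >= gamma)
                lower = score;                    /* failed high */
            else
                upper = score;                    /* failed low  */
        }
        guess = (lower + upper) / 2;              /* recentre the fan */
    }
    return lower;                                 /* lower == upper   */
}

int main(void)
{
    printf("score %d\n", fan_search(0));
    return 0;
}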

The overall result is that I get around a 7 times speedup using 15
CPUs (one CPU is dedicated to being the master as I'm lazy). I am
quite pleased with this given the trivial implementation.

I haven't benchmarked it on a 64 CPU machine yet, but I will soon.

Of course, like all "good ideas" this has almost certainly been done
before. Can someone point me at a paper if it has been?

Cheers, Andrew

Andrew Tridgell

Mar 22, 1997, to Robert Hyatt

Robert Hyatt wrote:
> : So you need to do away with hashing non essential stuff :)
>
> already have I hope. :)

In KnightCap I've used a trick where I overlap the fetch from the
global hash table with the evaluation of the node. While waiting
for the remote memory reference to complete I call the eval fn.
This allows me to use the global hash table at higher drafts
without greatly affecting performance. It's wasted when I get
a hash hit, but it's an overall win.

Of course, I'm lucky that the AP+ hardware supports background
remote memory copies and that the latency of such operations
is only around 5 us. Fujitsu did a nice job with this hardware.
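In outline the trick is just a reordering of two calls. A sketch in C, where
start_remote_fetch / wait_remote_fetch are hypothetical stand-ins and not the
real AP+ interface:

typedef struct position_t position_t;
typedef struct {
    unsigned long long key;
    int depth, value, flags;
} hash_entry_t;

/* hypothetical async interface: start a background remote copy of the
   entry for 'key' into *dest, and later wait for it to complete */
extern void start_remote_fetch(unsigned long long key, hash_entry_t *dest);
extern void wait_remote_fetch(hash_entry_t *dest);
extern int  evaluate(const position_t *pos);

int probe_and_eval(const position_t *pos, unsigned long long key,
                   hash_entry_t *entry, int *static_eval)
{
    start_remote_fetch(key, entry);   /* kick off the remote memory copy  */
    *static_eval = evaluate(pos);     /* useful work while it's in flight */
    wait_remote_fetch(entry);         /* entry is now safe to read        */
    return entry->key == key;         /* hit: the eval was wasted, too bad */
}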

Andrew

Robert Hyatt

Mar 22, 1997

Andrew Tridgell (Andrew....@anu.edu.au) wrote:

this approach has been around for ages. My first question is about
your move ordering, because 7x is too high, and it's easy to mathematically
prove it. I can't think of the reference, but it seems like Baudot was
the first to try this approach of having each cpu search a different
"window". However, if you only search at the root with the processors,
the absolute best you can do is roughly sqrt(w) where w is the effective
branching factor at the root. That's around 5-6 for normal positions.

The place where this really falls apart is with 16 processors, because a
factor of 5-6 is not very good. On the Cray T90, with 32 processors, you'd
still be stuck at this same speedup...


: I haven't benchmarked it on a 64 CPU machine yet, but I will soon.

: Of course, like all "good ideas" this has almost certainly been done
: before. Can someone point me at a paper if it has been?

Try to find one of Marslands parallel search papers, he wrote a couple
that surveyed what was being done, and discussed this sort of approach
and the person that did it...

I played with it in the first versions of parallel Cray Blitz, but it
doesn't scale very well at all, *unless* you have grossly bad move
ordering, because you are searching something closer to a minimax tree,
which is much larger, with more potential for parallel stuff. A good
alpha/beta tree is a thing of beauty... using PVS the tree is very close
to the minimal game tree already... and if you know pretty well about the
bounds on the value, it's even better.


: Cheers, Andrew

Dave

Mar 22, 1997

df...@cse.ucsc.edu (Don Fong) wrote:

:In article <19970319154...@ladder01.news.aol.com>,
:ShaktiFire <shakt...@aol.com> wrote:
:>I remember on ICC , a poll was taken and
:>the vast majority were for Kasparov. Even
:>Bob Hyatt, in one of the games, swung his
:>allegiances to the human. (ICC game discussion).
:

: allegiance??
:
:>I am definitely rooting for Deep Blue. Personally I
:>think most fans of computer chess, especially those
:>who have been programmers/spectators for a number
:>of years, must be pulling for Deep Blue. Many must
:>think like I do... we want to see the computer kick
:>some butt. (American slang - to win decisively).

:
: FWIW i want Kasparov to win. i don't think the cheering
:section is going to make any difference though. there are
:a few reasons why i want the human to win.

:
:1. Kasparov is a human. i am a human. a victory for Kasparov is
: a victory for humanity over machines.

Nope. It would be a victory for one extremely unrepresentative human
over a machine built and programmed by a group of other extremely
unrepresentative humans, at a particular task specially chosen by
humans. If one must think in terms of "man vs. machine" the machines
won when the average commercial program started beating the average
human player. Whether Deep Blue is better than 100% of humans or only
99.99% of humans is only important to those who choose to define it as
important, and I don't. To me it's just an interesting contest. If
programs become consistently better than all human players there will
be ramifications within the sport, probably involving such programs
being banned from competition, and the top players using such programs
to memorise enormous numbers of variations (so what's new?), but apart
from repercussions within the sport, what else would it affect?
There'd be a temporary spate of media hype, and then the general
public would forget about it. Life goes on.

:2. Kasparov is an individual. whereas the machine is the product of
: a huge collaborative effort. a victory for Kasparov is a victory
: for the individual over corporations. you can argue that
: Kasparov's knowledge also results from collaborative effort, but
: not nearly the same scale as was necessary to create DB.
: to me it quite marvellous that an individual can defeat the
: combined talents of so many.

There are many things most individuals can do much better than any
machine, despite huge amounts of money being spent on AI - speaking
natural languages, reading handwriting, walking, playing baseball. The
contest is only interesting because it's reasonably well-matched. If
it was totally lop-sided, regardless of who (or what) had the
advantage, no-one would be interested.

:3. Kasparov doesn't hide from the competition like DB.
: he got where he is by playing thousands of games vs all comers.
: he played the best competition and defeated them over the board;
: he didn't just wave a bunch of $$.

This is true. Of course, Kasparov's income depends on playing
publicly, that's his job. IBM aren't professional grandmasters, this
is just one of their projects. Kasparov could have refused the
conditions if he wanted to, but they were offering rather a lot of
money...

:4. on the day the machine wins, i feel it will be a turning point
: for the game and sport of chess. there will be no looking back.
: the machines can only get stronger.
:
There will be changes, but probably only affecting the professional
players (which might well have some knock-on effects on amateur
players). The vast majority of human players are already outclassed by
programs; it doesn't stop us enjoying the game.

Dave

Mark Brockington

unread,
Mar 22, 1997, 3:00:00 AM3/22/97
to

hy...@crafty.cis.uab.edu (Robert Hyatt) writes:

>this approach has been around for ages. My first question is about
>your move ordering, because 7x is too high, and it's easy to mathematically
>prove it. I can't think of the reference, but it seems like Baudet was
>the first to try this approach of having each cpu search a different
>"window". However, if you only search at the root with the processors,
>the absolute best you can do is roughly sqrt(w) where w is the effective
>branching factor at the root. That's around 5-6 for normal positions.

>The place where this really falls apart is with 16 processors, because a
>factor of 5-6 is not very good. ON the Cray T90, with 32 processors, you'd
>still be stuck at this same speedup...

Although the work does have similarities to the algorithm presented in
Baudet's thesis, parallel aspiration searching did not use a global
transposition table. Without a global transposition table, parallel
aspiration searching is a bit of a dog. The algorithm Andrew described
uses the shared transposition table to deflect processors down unsearched
branches when the search has already been completed to that processor's
satisfaction.

I assume that each hash table entry has both an upper bound and a lower bound,
if you're using an MTD(f) style algorithm, right?

If this type of algorithm works relatively well and you have a lot of shared
memory to kick around, Andrew, you may want to consider trying ABDADA, a
combination of Young Brothers Wait and ab* search. It distributes the
processors across the global search space in an orderly manner using the
global transposition table.

Jean-Christophe Weill wrote an article about it in the ICCA Journal
("The ABDADA Distributed Minimax-Search Algorithm", volume 19, number 1,
pages 3-16, 1996), and I believe all of the pseudocode for the algorithm
is in the paper.
--
Mark Brockington | br...@cs.ualberta.ca
Dept. of Computing Science |
University of Alberta | http://web.cs.ualberta.ca/~brock/

Robert Hyatt

unread,
Mar 22, 1997, 3:00:00 AM3/22/97
to

Mark Brockington (br...@cs.ualberta.ca) wrote:
: hy...@crafty.cis.uab.edu (Robert Hyatt) writes:

: >this approach has been around for ages. My first question is about
: >your move ordering, because 7x is too high, and it's easy to mathematically
: >prove it. I can't think of the reference, but it seems like Baudet was
: >the first to try this approach of having each cpu search a different
: >"window". However, if you only search at the root with the processors,
: >the absolute best you can do is roughly sqrt(w) where w is the effective
: >branching factor at the root. That's around 5-6 for normal positions.

: >The place where this really falls apart is with 16 processors, because a
: >factor of 5-6 is not very good. ON the Cray T90, with 32 processors, you'd
: >still be stuck at this same speedup...

: Although the work does have similarities to the algorithm presented in
: Baudet's thesis, parallel aspiration searching did not use a global
: transposition table. Without a global transposition table, parallel
: aspiration searching is a bit of a dog. The algorithm Andrew described
: uses the shared transposition table to deflect processors down unsearched
: branches when the search has already been completed to that processor's
: satisfaction.

Depends. When I fooled with it I always had a shared hash table, since it
was all done on a Cray...

: I assume that each hash table entry has both an upper bound and a lower bound,
: if you're using an MTD(f) style algorithm, right?

no. I only store beta if the result is >= beta, and alpha if the result is
<= alpha... I've never stored both bounds. Although it is easy to get both
if you want, because 99.9999999999% of the nodes searched are with a window of
N and N+1...
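
(A minimal sketch of that single-bound convention -- not Crafty's actual
code, and the names are made up.)

#include <stdint.h>
#include <stdio.h>

enum bound { EXACT, LOWER, UPPER };

typedef struct { uint64_t key; int depth; int score; enum bound type; } hash_entry;

static void hash_store(hash_entry *e, uint64_t key, int depth,
                       int score, int alpha, int beta)
{
    e->key   = key;
    e->depth = depth;
    e->score = score;
    if (score >= beta)       e->type = LOWER;   /* failed high: true score >= beta  */
    else if (score <= alpha) e->type = UPPER;   /* failed low:  true score <= alpha */
    else                     e->type = EXACT;   /* landed inside the window         */
}

int main(void)
{
    hash_entry e;
    hash_store(&e, 12345, 8, 50, 30, 31);       /* null window (30,31), result 50 */
    printf("stored type %d (1 = lower bound)\n", e.type);
    return 0;
}

With a null window of (N, N+1) a search can only fail high or fail low,
so storing one flagged bound per entry loses essentially nothing.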

: If this type of algorithm works relatively well and you have a lot of shared

Robert Hyatt

unread,
Mar 22, 1997, 3:00:00 AM3/22/97
to

Andrew Tridgell (Andrew....@anu.edu.au) wrote:
: Robert Hyatt wrote:
: > this approach has been around for ages. My first question is about
: > your move ordering, because 7x is too high, and it's easy to mathematically
: > prove it. I can't think of the reference, but it seems like Baudet was
: > the first to try this approach of having each cpu search a different
: > "window". However, if you only search at the root with the processors,
: > the absolute best you can do is roughly sqrt(w) where w is the effective
: > branching factor at the root. That's around 5-6 for normal positions.

: It's perfectly possible that something like my move ordering is
: the limiting factor. I've only been into chess programming for
: a few weeks so I don't have much of a feel for what's normal.

: Currently I get around 80% move ordering. By this I mean that
: 80% of cutoffs occur on the first move in the move list. I don't
: bother measuring the move ordering when no cutoffs occur as this
: is irrelevant for null-window searches (you can't raise alpha!)

: It does vary between about 75% and 95% depending on the position,
: but 80% is "typical".

: >

: > The place where this really falls apart is with 16 processors, because a
: > factor of 5-6 is not very good. ON the Cray T90, with 32 processors, you'd
: > still be stuck at this same speedup...

: I'll try it on the 64 processor machine and see how much speedup
: I get. Hopefully it won't be less than with the 16 processor
: machine :-)

: The 64 processor machine is at the end of a piece of wet string
: in Japan, so it's a bit awkward to use, which is why I haven't
: done many runs yet. It's also often used for more serious work!

: > Try to find one of Marsland's parallel search papers; he wrote a couple
: > that surveyed what was being done, and discussed this sort of approach
: > and the person that did it...

: Thanks for the tip!

: >
: > I played with it in the first versions of parallel Cray Blitz, but it
: > doesn't scale very well at all, *unless* you have grossly bad move
: > ordering, because you are searching something closer to a minimax tree,
: > which is much larger, with more potential for parallel stuff. A good
: > alpha/beta tree is a thing of beauty... using PVS the tree is very close
: > to the minimal game tree already... and if you know pretty well about the
: > bounds on the value, it's even better.

: The other area I suspect in my code is the hashing. I typically get
: only about 6% to 12% hash hits, which seems a bit low. Does anyone
: know what is typical with MTD(f) ?

Sounds reasonable for a middlegame position... You can tell if it is working
by trying a K+P endgame... it should go up to 70%+


: Cheers, Andrew

: PS: The KnightCap thats running on FICS is on a single processor
: Ultra170 at the moment, not the AP1000+. It whispers its depth,
: eval, nodes/sec etc if anyone is interested.

Andrew Tridgell

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

Thanks for the tip!

Cheers, Andrew

Komputer Korner

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

Yes, but it wouldn't be chess any more. There is a certain beauty in
the present game, which seems to have a mathematically elegant
structure. Any rule change would upset the balance, so that there
might be too much or too little advantage for White, or there might be
too much emphasis on one of the three areas: opening, midgame, or
endgame.
--
Komputer Korner

The inkompetent komputer.

Andrew Tridgell

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

Robert Hyatt wrote:

>
> Mark Brockington (br...@cs.ualberta.ca) wrote:
> : Although the work does have similarities to the algorithm presented in
> : Baudet's thesis, parallel aspiration searching did not use a global
> : transposition table. Without a global transposition table, parallel
> : aspiration searching is a bit of a dog. The algorithm Andrew described
> : uses the shared transposition table to deflect processors down unsearched
> : branches when the search has already been completed to that processor's
> : satisfaction.

yes, I believe that's the main way parallelism occurs. It's just
kicked off by using different move ordering on each CPU (a
consequence of using non-global history and cutoff data).

> : I assume that each hash table entry has both an upper bound and a lower bound,
> : if you're using an MTD(f) style algorithm, right?
>
> no. I only store beta if the result is >= beta, and alpha if the result is
> <= alpha... I've never stored both bounds. Although it is easy to get both
> if you want, because 99.9999999999% of the nodes searched are with a window of
> N and N+1...

In KnightCap I do have separate upper and lower bounds in the
hash table. I basically follow the MTD(f) algorithm (rough sketch
after this list), except that:

1) 15 null windows around the current center are searched in
parallel

2) I stop when the MTD upper and lower bounds are within 4
points as the last bit of discrimination is quite costly
and my eval function isn't _that_ accurate!

3) The initial guess (an input to MTD) is taken as the
iterative deepening value from ply-2 if the ply-1 and
ply-2 values are within 10 points of each other, otherwise
I use the ply-1 value. This accounts for a lot of "eval
bounce" that can occur due to horizon effects and tends to
reduce the number of iterations needed for MTD
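
(Taking points 1-3 together, a rough and purely sequential sketch of the
driver -- the 15 parallel probes from item 1 are not shown, search() is a
stub, and the numbers fed in from main() are invented.)

#include <stdio.h>
#include <stdlib.h>

#define INF 100000

static int search(int alpha, int beta, int depth)   /* stub: true score 37 */
{   (void) depth; return 37 <= alpha ? alpha : (37 >= beta ? beta : 37); }

/* item 2: stop when the MTD bounds are within this many points */
#define MTD_TOLERANCE 4

static int mtd(int guess, int depth)
{
    int lower = -INF, upper = INF, g = guess;
    while (upper - lower > MTD_TOLERANCE) {
        int beta = (g == lower) ? g + 1 : g;
        g = search(beta - 1, beta, depth);          /* null-window probe   */
        if (g < beta) upper = g; else lower = g;    /* tighten one bound   */
    }
    return g;
}

/* item 3: take the initial guess from the ply-2 result unless the last
 * two iterations disagree by more than 10 points ("eval bounce") */
static int initial_guess(int ply1_value, int ply2_value)
{
    return abs(ply1_value - ply2_value) <= 10 ? ply2_value : ply1_value;
}

int main(void)
{
    int guess = initial_guess(40, 35);              /* invented iteration values */
    printf("mtd result: %d\n", mtd(guess, 10));
    return 0;
}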

> : If this type of algorithm works relatively well and you have a lot of shared
> : memory to kick around, Andrew, you may want to consider trying ABDADA, a
> : combination of Young Brothers Wait and ab* search. It distributes the
> : processors across the global search space in an orderly manner using the
> : global transposition table.
>
> : Jean-Christophe Weill wrote an article about it in the ICCA Journal
> : ("The ABDADA Distributed Minimax-Search Algorithm", volume 19, number 1,
> : pages 3-16, 1996), and I believe all of the pseudocode for the algorithm
> : is in the paper.

Thanks for the tip!

Cheers, Andrew

PS: I appear to be missing some articles in rgcc due to news
server stuffups. My apologies if I don't respond, I may not
have seen the article!

PPS: KnightCap now has a WWW page (I was bored!) at
http://samba.anu.edu.au/KnightCap

Andrew Tridgell

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

I've now done a simple experiment with up to 40 processors
and the algorithm I gave before. It's a very rough experiment
because I tested just one arbitrary position and only ran
each test once.

I generated the position by getting KnightCap to run in demo
mode (self play) for 10 moves. I then saved that position
and used it for the experiment. KnightCap has no opening
book so the position is not really a "recognisable" one,
at least to me. It was a fairly even middle game.

I did two experiments, one with an 8 ply search and the other
with a 10 ply search. The first was straight MTD(f) and the second
also had null moves and razoring (in a similar style to Crafty).

I tested with 1, 10, 20 and 40 processors. Each CPU contributes
32 MB to the global hash table. Each hash entry is 64 bits.

Here are the results:

no null:

40 CPUs 8 ply in 265 secs
20 CPUs 8 ply in 480 secs
10 CPUs 8 ply in 963 secs
1 CPU 8 ply in 3017 secs


with null moves and razoring (as KnightCap normally plays):

40 CPUs 10 ply in 401 secs
20 CPUs 10 ply in 579 secs
10 CPUs 10 ply in 282 secs
1 CPU 10 ply in 3257 secs


It's a little hard to interpret these results. There is far too
little data to say anything with confidence. On the one hand
the overall speedup is pretty poor with 40 CPUs, just a speedup
of 11 and 8 respectively. On the other hand doubling the
number of CPUs is still making a big difference even from 20 to
40.

I suspect the results would have a very high standard deviation
when run on a large sample set. I also strongly suspect that the
hash replacement scheme used could make a big difference, as
the algorithm relies heavily on the global transposition table
for its parallel efficiency.

I did some more quick runs after the above tests. I tried a new
hash replacement scheme I have been thinking of where you replace
entries with the same draft if the new entry has an evaluation
closer to the current MTD(f) estimate. The idea is that entries
close to the true value are harder to calculate and are worth more.
I actually got a 2 times speedup with this scheme in one test and
small slowdowns in others. I think this demonstrates that the
replacement scheme is critical - if only I could get it right!
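
(A sketch of that replacement rule -- the field names are hypothetical,
and the "deeper draft always wins" branch is an assumed baseline rather
than something stated above.)

#include <stdio.h>
#include <stdlib.h>

typedef struct { unsigned long long key; int draft; int score; } hash_entry;

static int should_replace(const hash_entry *old_e, const hash_entry *new_e,
                          int mtd_estimate)
{
    if (new_e->draft != old_e->draft)
        return new_e->draft > old_e->draft;   /* assumed baseline: deeper draft wins */
    /* same draft: keep whichever score is closer to the current MTD(f) estimate */
    return abs(new_e->score - mtd_estimate) < abs(old_e->score - mtd_estimate);
}

int main(void)
{
    hash_entry old_e = { 1, 6, 80 }, new_e = { 1, 6, 35 };
    printf("replace? %d\n", should_replace(&old_e, &new_e, 30));   /* prints 1 */
    return 0;
}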

Anyway, don't try to read too much into the above results. When I
get time for more extensive testing I'll post some more results.

Cheers, Andrew

Jack Nerad

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

Tom C. Kerrigan wrote:

> BTW, what does "arbeiter" (sp?) mean in English? I'm trying as hard as I
> possibly can to remember and just draw blanks... "worker" in German,
> anyway...
>
> Cheers,
> Tom

Arbeiter means "judge" or "official" or "referee" in English.

Jack Nerad

brucemo

unread,
Mar 23, 1997, 3:00:00 AM3/23/97
to

Mark Rawlings wrote:

> I just got the book "Kasparov vs. Deep Blue" about the match last Feb.
> and computer chess in general. Anyway, the last chapter, entitled
> "The Future" says: "... eventually, all six- and seven-piece
> endgames will be solved. A hundred years from now, all eight-piece
> endgames may be solved as well." Great book, btw.

That book isn't very technical, and seems to suffer from the same lack of
pseudo-code as Levy's "How Computers Play Chess", but it's still a
book that I would own.

The thing about those endings is that all you need is CPU time and disk
to solve them. The code necessary to solve them isn't trivial, but it
isn't rocket science either. It's not like you have to devise some
intricate and highly tuned heuristic; you just solve the problem, and
that's that.

I have a program that would solve them now if my machine had enough
memory and disk space, and I had enough patience to wait for them to
complete.

bruce

Mark Rawlings

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

brucemo <bru...@nwlink.com> wrote:

>Jesper Antonsson wrote:

>> 5-256 Gb's should be enough for at least the simplest 6-man tablebases, right?
>> Has there been any estimates done regarding the total size of 6-man and
>> above tablebases?

>The number of positions in an N-man tablebase, assuming no pawns, and assuming
>you don't get too crazy about trying tricky ways to reduce this value, is
>10*64^(N-1).

>The reason you have a "10" in there is that you can save some space by
>reflecting the board so that one of the pieces is in the triangle delimited by
>a1..d1..d4.

>In a 6-piece tablebase this works out to 10,737,418,240.

>Assume a byte per position (which probably isn't valid, but it may depend upon
>how you generate these things), and that's pretty big. Assume that you have
>white-to-move and black-to-move and you've just doubled it.

>If you have at least one pawn, you can't do this funky reflection because you
>end up trying to move a pawn sideways. You can still reflect through one axis,
>which means that you can get away with constraining one of the pawns to the
>region a2..d2..d7..a7, which is 24 squares.

>So the formula for an N-man database with at least one pawn is 24*64^(N-1).

>In a 6-piece tablebase this works out to 25,769,803,776, which is beyond
>enormous, and still doesn't take into account possible en-passant captures.

>You can reduce this size by forcing a pawn to be on one file, or even on one
>square, but it's still big.

>So as you can see, unless you've got some serious bucks for hardware, currently
>the biggest practical tablebases are 5-man, and this is probably how things will
>stay for a while.

>bruce
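
(For what it's worth, the two figures quoted above are easy to reproduce
with a throwaway check:)

#include <stdio.h>

int main(void)
{
    unsigned long long per_side = 1;
    int i;

    for (i = 0; i < 5; i++)                  /* 64^(N-1) with N = 6 men */
        per_side *= 64;

    printf("pawnless 6-man:    %llu\n", 10ULL * per_side);   /* 10,737,418,240 */
    printf("6-man with a pawn: %llu\n", 24ULL * per_side);   /* 25,769,803,776 */
    return 0;
}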

I just got the book "Kasparov vs. Deep Blue" about the match last Feb.
and computer chess in general. Anyway, the last chapter, entitled
"The Future" says: "... eventually, all six- and seven-piece
endgames will be solved. A hundred years from now, all eight-piece
endgames may be solved as well." Great book, btw.

Mark


Kevin Miller

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

On Sun, 23 Mar 1997 19:57:33 -0500, Jack Nerad <JNE...@concentric.net>
wrote:

I thought that was "arbiter..."

Kevin Miller

"Jazz is not dead; it just smells funny" -- Frank Zappa

Jack Nerad

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

Right you are. I was caught up in the moment. Please forgive.

Jack Nerad

Ernst A. Heinz

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

Chris Mayer (cma...@ix.netcom.com) wrote:

> Instead of allowing a king to stay in the triangle and have values
> from 0 - 9, I have always looked at both kings together and assigned a
> value from 0 - 461. (462 ways to set 2 kings down, eliminating
> symmetry). A simple table with 462 entries gives me the 2 squares, as
> opposed to using a table with 10 entries giving a single square. This
> reduces my tables to 462*64^(N-2).

That's how Ken Thompson's databases have been organized since the 1980s.

See e.g. "Retrograde analysis of certain endgames."
Ken Thompson, ICCA Journal 9(3), pages 131-139, September 1986.
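
(As a footnote, the 462 is easy to reproduce by brute force; a small
sketch, with square 0 = a1, file = square % 8, rank = square / 8:)

#include <stdio.h>
#include <stdlib.h>

static int file(int sq) { return sq % 8; }
static int rank(int sq) { return sq / 8; }

static int adjacent_or_same(int a, int b)
{
    return abs(file(a) - file(b)) <= 1 && abs(rank(a) - rank(b)) <= 1;
}

int main(void)
{
    int wk, bk, count = 0;
    for (wk = 0; wk < 64; wk++) {
        if (file(wk) > 3 || rank(wk) > file(wk))    /* keep wk in the a1-d1-d4 triangle */
            continue;
        for (bk = 0; bk < 64; bk++) {
            if (adjacent_or_same(wk, bk))           /* kings may not touch */
                continue;
            if (rank(wk) == file(wk) && rank(bk) > file(bk))
                continue;                           /* wk on the a1-h8 diagonal: keep bk on or below it */
            count++;
        }
    }
    printf("%d king pairs\n", count);               /* prints 462 */
    return 0;
}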

=Ernst=

+----------------------------------------------------------------------------+
| Ernst A. Heinz, School of CS (IPD), Univ. of Karlsruhe, P.O. Box 6980, |
| D-76128 Karlsruhe, F.R. Germany. WWW: <http://wwwipd.ira.uka.de/~heinze> |
| Mail: <hei...@ira.uka.de> Tel: +49-(0)721-6084386 Fax: +49-(0)721-694092 |
+----------------------------------------------------------------------------+
"It has recently been found out that research causes cancer in rats!"

mclane

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

Jack Nerad <JNE...@concentric.net> wrote:

>Tom C. Kerrigan wrote:

>> BTW, what does "arbeiter" (sp?) mean in English? I'm trying as hard as I
>> possibly can to remember and just draw blanks... "worker" in German,
>> anyway...
>>
>> Cheers,
>> Tom

>Arbeiter means "judge" or "official" or "referee" in English.

Arbiter means this in English! But that was not the question, Jack.
Arbeiter is "worker" in English. In German it means of
course not only somebody who is working; it means somebody of the
working class, with all the political connotations that come from
old German history and the fight against capitalism (----> MARX).

>Jack Nerad

Jack Nerad

unread,
Mar 24, 1997, 3:00:00 AM3/24/97
to

Aha! I now understand the question. Perhaps you can understand my
confusion....

Jack Nerad

Simon Read

unread,
Mar 26, 1997, 3:00:00 AM3/26/97
to

From s.read (Simon) at cranfield.ac.uk


100 points if you can guess who wrote the following:


> Any rule change would upset the balance so that
> there might be too much or too little advantage for white or there
> might be too much emphasis on one of the 3 areas, opening, midgame,
> or endgame.
>

> Komputer Korner

At the moment, the board starts off half full of pieces. Even after a
couple of exchanges, this leads to a rather tangled state of affairs
with lots of subtle restrictions, locks and gateways on the board.
If we were to change the rules to allow a larger board, the board
would be considerably less than half-full, leading to much more open
and mobile games. This would probably be to computers' advantage.


Guillem Barnolas

unread,
Mar 30, 1997, 3:00:00 AM3/30/97
to

hy...@crafty.cis.uab.edu (Robert Hyatt) wrote:

>I don't know about "forever" (since that's a *long* time... :) )
>but if DB can't do it then it isn't going to be done for at least
>another 20-30 years probably... because we won't have their speed
>for at *least* that long, if ever. However, algorithms are slowly
>evolving, and programs are incrementally getting better over time
>independent of machine speed. We are really at the 15 year mark for
>serious computer chess since the PC is about 15 years old. Before that
>it was simply large mainframe vs mainframe types of games, with the
>occasional minicomputer thrown into the mix.

>[Comments on computer programs deleted]

I think the whole idea is wrong. You're not focusing on making
programs that play chess, but on programs that are basically blind. If
a program like DB can't beat Kasparov with a ply-depth clearly superior
to the World Champion's, it's because there's something wrong with the
program, and I'll tell you why. Current programs DO NOT play chess; they
merely study all the positions they can in a specific time, and they
study positions that a simple low-level chess player would see
are undesirable, but the computer is blind, without planning.
I think we should work on computer programs using AI to develop plans
and, that way, try to beat Kasparov.
By now, I'm on Kasparov's side and I think he is going to beat DB
without effort.

Greets, Guillem.


Marcel van Kervinck

unread,
Mar 31, 1997, 3:00:00 AM3/31/97
to

Guillem Barnolas (guill...@redestb.es) wrote:
: I think the whole idea is wrong. You're not focusing on making

: programs that play chess, but on programs that are basically blind. If
: a program as DB can't win Kasparov with a ply-depth clearly superior
: to de W.Champion, it's because there's something wrong with the
: program, and i'll tell you. Actual programs, DO NOT play chess, they
: merely study all the positions they can in a specific time, and they
: study positions that a simple low-level chess player would see that
: are undesirable, but the computer is blind, without planning.
: I think we should work in computer programs using AI to develop plans
: and therefore, try to win Kasparov.

Your ignorance is showing. If you think you can do better than 25 years
of steady improvement you should prove it. Don't bother to post this
same old bullshit that people on the sideline have been screaming
for ages.

: By now, I'm on Kasparov's side and I think he is going to beat DB
: without effort.

He'll win, but not effortless.

Vincent Diepeveen

unread,
Apr 1, 1997, 3:00:00 AM4/1/97
to

In <333129...@fysik.dtu.dk> Francesco Di Tolla <dit...@fysik.dtu.dk> writes:

>Robert Hyatt wrote:
>> future is hard to predict. Dec planned on 2ghz as the upper bound on the
>> alpha.
>Wow, that will make coffee too, just put the pot over the chip while on
>:-)
>
>
>> I don't believe it, but we'll see. They are already 1/4 of the way
>> there of course and the machine is a hoss. VLIW is just another flavor of
>> superscalar when you get right down to it, and a very old flavor where the
>> instructions are packed into chunks at compile time, rather than at exec
>> time (as in the P6).
>
>I've just read that new experiments show that the behavior of electrons
>is still reasonable scaling further down. Now we have .35/.25 microns
>technology, actual approach seemed to be unable to go beyond .1, but
>it seems that you don't need 10000 electrons to have them behave
>statistically well enough to be able to make a new chip. Apparently
>they found that you can really approach the quantum limit. But
>we'll never have a device triggered by half an electron. (Source
>PC Plus 4/97)
>
>> My favorite chip of the day (from one who teaches computer architecture)
>> is the P6. It's done about as well as I could imagine it being done, and
>> it's far better than the rest.
>
>My favourite would be R10000, if I had money enough. But April 2nd
>AMD K6 will show up, and smash anything PPro can do, apparently a 250
>MHz

Perhaps this K6 will already be 2 times faster for chess programs than the
PP200!

I read it should have the same optimizations the MMX 200 has, and that
it has 4 times more L1 cache (64 KB) versus the PP200's 16 KB L1 cache (MMX = 32 KB
L1 cache).

Should be a beast for chess programs!

>is coming soon (3x83 MHz) and they are working at a chipset
>with a 100 MHZ data bus. That will give one of the last big boost before
>the century turn-over. I suspect that in 1998 we well 300 MHz chips,
>but MMX extensions will not give much, and the market goes there.
>I doubt we will see a PC more than two times faster than a PPro 200 in
>this
>century. Not for the chip. For the rest.

I read something about a new standard for video cards and multimedia, which
would by 2000 make graphics and so on around 10 times faster than the
current standard - about the same speed graphical workstations currently have.
>bye
>Franz

>--
>Francesco Di Tolla, Center for Atomic-scale Materials Physics
>Physics Department, Build. 307, Technical University of Denmark,
>DK-2800 Lyngby, Denmark, Tel.: (+45) 4525 3208 Fax: (+45) 4593 2399
>mailto:dit...@fysik.dtu.dk http://www.fysik.dtu.dk/persons/ditolla.html

Vincent
--
+----------------------------------------------------+
| Vincent Diepeveen email: vdie...@cs.ruu.nl |
| http://www.students.cs.ruu.nl/~vdiepeve/ |
+----------------------------------------------------+

Vincent Diepeveen

unread,
Apr 1, 1997, 3:00:00 AM4/1/97
to

In <5hp7bq$c...@turtle.stack.nl> mar...@stack.nl (Marcel van Kervinck) writes:

>Guillem Barnolas (guill...@redestb.es) wrote:
>: I think the whole idea is wrong. You're not focusing on making
>: programs that play chess, but on programs that are basically blind. If
>: a program as DB can't win Kasparov with a ply-depth clearly superior
>: to de W.Champion, it's because there's something wrong with the
>: program, and i'll tell you. Actual programs, DO NOT play chess, they
>: merely study all the positions they can in a specific time, and they
>: study positions that a simple low-level chess player would see that
>: are undesirable, but the computer is blind, without planning.
>: I think we should work in computer programs using AI to develop plans
>: and therefore, try to win Kasparov.
>
>Your ignorance is showing. If you think you can do better than 25 years
>of steady improvement you should prove it. Don't bother to post this
>same old bullshit that people on the sideline have been screaming
>for ages.

I also think that screaming that using AI will develop plans for you
is not very smart.

The problem, however, is that he's right:
suppose you have a program with the same evaluation Kasparov has
(assuming that Kasparov's evaluation can be independent of the
depth).

Suppose we have this evaluation; it will still lose certain games to
Kasparov tactically. Around 13 ply is just 7 moves, and this is
not much when you play Kasparov (without doubt better than the 5/6 moves which
PC programs hardly get, but that's not what the discussion is about).

The reality, of course, is that the evaluation is usually the evaluation of
a beginner - or, better said, the patterns of an expert combined with a
beginner's ability to discriminate between those patterns.

The biggest problems with all of this are that:
a) the current approach sucks, but every year gives better results.
b) no other approach has been successful so far.

c) humans don't have problems with the Turing test, as the Turing test
cannot be applied to humans. Programs suffer from the Turing test,
and therefore will never be able to evaluate and plan the same way a human
will.

>: By now, I'm on Kasparov's side and I think he is going to beat DB
>: without effort.

>He'll win, but not effortless.

I'm afraid, Marcel, that you first need some chess lessons; then you will
conclude that he'll do it without effort.

Did you ever see Kasparov play a King's Indian against a program?
For DB's sake, let's hope K. will not get the chance to do this.

I guess that Anand and especially Timman will have much more trouble
playing simultaneously! 8 boards of blitz simultaneously against Rebel is
something I tried myself, and I can assure you that it was hard, although
those were 8 games against the same opponent!

Vincent

> Marcel
>-- _ _
> _| |_|_|
> |_ |_ Marcel van Kervinck
> |_| mar...@stack.nl

--

Marcel van Kervinck

unread,
Apr 1, 1997, 3:00:00 AM4/1/97
to

Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:
> The problem is however that he's right:
> suppose you have program with the same evaluation Kasparov has
> (assuming that Kasparov's evaluation can be independant from the
> depth).

There's reason to believe the DB team has spent quite some effort
to attack the very deep tactics. It won't be enough, but progress
is still made on the evaluation part.

> c) human's don't have problems with the turing test, as turing test
> cannot be applied to humans. Programs suffer from the turing test,
> and therefore will never be able to evaluate and plan the same a human
> will.

This puzzles me. What do you mean by this 'Turing test'? The
imitation game in Turing's article 'Can Machines Think?'

mclane

unread,
Apr 1, 1997, 3:00:00 AM4/1/97
to

mar...@stack.nl (Marcel van Kervinck) wrote:

>Guillem Barnolas (guill...@redestb.es) wrote:
>: I think the whole idea is wrong. You're not focusing on making
>: programs that play chess, but on programs that are basically blind. If
>: a program as DB can't win Kasparov with a ply-depth clearly superior
>: to de W.Champion, it's because there's something wrong with the
>: program, and i'll tell you. Actual programs, DO NOT play chess, they
>: merely study all the positions they can in a specific time, and they
>: study positions that a simple low-level chess player would see that
>: are undesirable, but the computer is blind, without planning.
>: I think we should work in computer programs using AI to develop plans
>: and therefore, try to win Kasparov.

>Your ignorance is showing. If you think you can do better than 25 years
>of steady improvement you should prove it. Don't bother to post this
>same old bullshit that people on the sideline have been screaming
>for ages.

Although you are attacking this man quite harshly, I am on his side.
Sorry. You can now attack me the same way, and you have the right to do
that. But I am of the same opinion as this man.
I hope you will not kill me for having another opinion....

>: By now, I'm on Kasparov's side and I think he is going to beat DB
>: without effort.

>He'll win, but not effortless.

> Marcel

Michel Hafner

unread,
Apr 2, 1997, 3:00:00 AM4/2/97
to

In article <5hr3n3$346$1...@krant.cs.ruu.nl>,

Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:
>>: By now, I'm on Kasparov's side and I think he is going to beat DB
>>: without effort.
>
>>He'll win, but not effortless.
>I'm afraid Marcel, that you first need some chess lessons, and then will
>conclude that he'll do it without effort.

Yeah, sure. He did it last time without effort as he told everybody and
DB in the meantime got much worse as we all know. Should be a piece of cake...
Michel Hafner


>
>Did you see Kasparov ever play Kings Indian against a program?
>Let's hope for DB. K. will not get the chance to do this.
>
>I guess that Anand and especially Timman will have much more trouble
>playing simltaneously! 8 boards blitz-simultaneously against Rebel is
>something i tried myself, and i can assure you that it was hard, although
>that were 8 games against the same opponent!
>
>Vincent
>

>> Marcel
>>-- _ _
>> _| |_|_|
>> |_ |_ Marcel van Kervinck
>> |_| mar...@stack.nl
>

Simon Read

unread,
Apr 4, 1997, 3:00:00 AM4/4/97
to

I'm not sure who I want to win. I am impressed by the power of the human
brain and I would be very happy if Kasparov won. I'm also a programmer
and I'd be very impressed if Deep Blue won. It will be extremely
interesting either way.

My prediction:
In the last match, Deep Blue refused a draw and then went on to
make a subtle blunder which Kasparov exploited to win. (game 5)
Kasparov will therefore be unwilling to accept draws, because he'll
want to keep pushing the machine until well into a sterile endgame,
just to get the machine to prove it really knows how to draw that
endgame. The machine will therefore make another one or two subtle
mistakes which Kasparov will exploit.

Kasparov seems to have improved his game recently, witness
Las Palmas 96 and Linares 97.

5-1 to Kasparov. 4 wins, 2 draws.

Simon http://www.cranfield.ac.uk/~me944p/hotlist.html


Michael

unread,
Apr 7, 1997, 3:00:00 AM4/7/97
to

In article <3344f...@news.cranfield.ac.uk>,
I am most definitely on Deep Blue's side. I think the Deep Blue team have
finally decided to make Deep Blue play to its strength - tactics. Let's face
it: computers will never be able to conduct positional strategic planning
any more than I can write down a list of steps you need to follow to find the
winning plan in any chess position.
Kasparov will be looking for sterile closed positions while Deep Blue should be
looking for an open game with lots of combination possibilities, even at the
cost of material. The Tal way!!
I therefore predict a defeat for Kasparov:

4 - 2 to Deep Blue 3 wins and two draws!!

Michael

Gianluigi Masciulli

unread,
Apr 8, 1997, 3:00:00 AM4/8/97
to

een...@electeng.leeds.ac.uk (Michael) wrote:

>I am most definitely on Deep Blue's side. I think the Deep Blue's team have
>finally decided to make Deep Blue play to its strength - Tactics. Lets face
>it computers will never be able to conduct postional strategic planning
>anymore than I can write down a list of things you need to follow to find the
>winning plan in any chess position.
>Kasparov will be looking for sterile closed postions while Deep blue should be
>looking for an open game with lots of combination possibilities, even at the
>cost of material. The Tal way!!

I think this way too.
I completely agree with you, and it makes me feel like I have a twin
somewhere ;-)

IMHO this is the key point of the match, as I wrote in the
"DB and wild tactict" thread.

If I have to choose which side to be on, as a programmer
I will choose neither DB nor GK but the Hsu side.

I *hope* the match will be 3-3 without draws,
with GK spectacularly winning the last 2 games.
But maybe DB is not so sharp and I'm only dreaming.


ciao.

Gianluigi


>Michael

HowardAE

unread,
Apr 8, 1997, 3:00:00 AM4/8/97
to

While Deep Blue is calculating millions of positions Kasparov will be
eliminating millions of positions. I predict 4.5 - 1.5 for the knowledge
based entity.

ROCKJUICE

unread,
Apr 12, 1997, 3:00:00 AM4/12/97
to

I am black
