
Distributed Chess


Where It's At

Apr 30, 2000
From slashdot:

Posted by Cliff on Sunday April 30, @06:05AM
from the that-would-be-hell-to-brute-force dept.
R. Jason Valentine asks: "One of the more complex problems that computing
has tackled has been the game of Chess. The rules are simple, the strategy
complex. We now have computers, based upon current technology, that can play
as good as or better than the best humans. However, the current computing
power is still far from answering the age old question: Is there such a
thing as a perfect game of chess?"

Anyone have spare processor time on a Beowulf Cluster? Or maybe this could
be another project for distributed.net? -->
http://www.distributed.net/projects.html

J: Remy de Ruysscher writes to say he's still working on distributed chess;
to join his mailing list, email him --> r...@xs4all.nl


Brandon W. Beasley

May 3, 2000
I think this is on Sourceforge.


--
Brandon W. Beasley
mailto:bbea...@austin.rr.com

Vince Plourde

May 3, 2000
Greetings,

I think the perfect game of chess is a drawn game.

If this is so, then any way to reach a draw is a good way.
It is all those millions and millions of ways of reaching a draw that make
this game so interesting!

moonman

Robert Hyatt

May 3, 2000
Brandon W. Beasley <bra...@cs2758-214.austin.rr.com> wrote:
> I think this is on Sourceforge.

The idea is good. But it isn't going to make any great strides in solving
chess. The search is _still_ exponential. And getting a few extra plies
by using a month of cpu time distributed over the net would be interesting
(I am designing a beowulf Crafty right now). But it isn't going to tell us
who wins, even if the cluster searches for the next 10,000,000 years...

--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170
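
Hyatt's "10,000,000 years wouldn't be enough" point can be put in rough
numbers. The figures below are illustrative assumptions only (a branching
factor of ~35 and games of ~80 plies are conventional estimates, and the
aggregate speed is deliberately generous):

```python
# Back-of-envelope: why even a huge cluster cannot exhaust the game tree.
# Assumed, illustrative figures: ~35 legal moves per position, games of
# ~80 plies, and a wildly optimistic 10^12 nodes searched per second.
BRANCHING = 35
GAME_PLIES = 80
NODES_PER_SEC = 1e12
SECONDS_PER_YEAR = 3.15e7

leaf_nodes = BRANCHING ** GAME_PLIES
years = leaf_nodes / NODES_PER_SEC / SECONDS_PER_YEAR
print(f"~{leaf_nodes:.1e} leaf nodes, ~{years:.1e} years to visit them all")
```

The answer comes out around 10^104 years, which is why nobody expects the
cluster to tell us who wins.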

Brandon W. Beasley

May 4, 2000
Okay, so not solvable, but how deep could these distributed net chess engines
search? Compared to Deep Blue, for instance?

How are transpositions handled by a distributed engine? Is there a
distributed transposition table?

Seems real complicated.

Do you have a running beowulf crafty yet? Tell us about it, please.

Brandon W. Beasley

May 4, 2000
Another question:

Let's say we have a computer that searches 12 ply on average at 40/120 time
controls in the middle game.

Now, add 1,000 more PCs crunching on a distributed problem. What would
we gain theoretically by the addition of 1,000 computers? Obviously, we
ain't going to get 1,000 x 12 ply = 12,000 ply, but would it hit 20 ply
in the same time control?

Perhaps I ask the question indelicately, but I'm trying to grasp the
potential of this distributed chess idea.

Robert Hyatt

May 4, 2000
Brandon W. Beasley <bra...@cs2758-214.austin.rr.com> wrote:
> Okay, so not solvable, but how deep could these distributed net chess engines
> search? Compared to Deep Blue, for instance?

That is a non-trivial question to answer. If you mean could the
distributed chess program search as deeply as deep blue in the same time
interval (say 3 minutes), then no, not even close. If you give it a couple
of days, it could produce some serious analysis for a single position,
however.

> How are transpositions handled by a distributed engine? Is there a
> distributed transposition table?

Good question with multiple answers. Each machine would probably use a local
hash table only, as a distributed hash table over the internet would simply
not work, even with Internet2 speeds; the latency/jitter is too high. But
for a local 'beowulf cluster', yes, distributed hashing is viable.
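
To make the "local hash table" idea concrete, here is a minimal sketch of a
per-machine transposition table, using Zobrist hashing as is conventional in
chess programs. All names, sizes, and the entry layout are illustrative,
not taken from Crafty:

```python
# Per-machine transposition table of the kind described above: each
# cluster node keeps its own local table instead of sharing one over
# the network. Zobrist keys and the replacement rule are simplified.
import random

random.seed(42)
# One random 64-bit key per (piece, square); 12 piece types x 64 squares.
ZOBRIST = [[random.getrandbits(64) for _ in range(64)] for _ in range(12)]

def zobrist_hash(board):
    """board: iterable of (piece_index, square) pairs for occupied squares."""
    h = 0
    for piece, square in board:
        h ^= ZOBRIST[piece][square]
    return h

class TranspositionTable:
    """Fixed-size local table; deeper searches overwrite shallower ones."""
    def __init__(self, size=1 << 20):
        self.size = size
        self.table = {}

    def store(self, key, depth, score):
        slot = key % self.size
        old = self.table.get(slot)
        if old is None or depth >= old[1]:
            self.table[slot] = (key, depth, score)

    def probe(self, key, depth):
        entry = self.table.get(key % self.size)
        if entry and entry[0] == key and entry[1] >= depth:
            return entry[2]
        return None
```

A search would call probe before expanding a position and store after
searching it; the transpositions each node happens to see are caught
locally, and any duplication between nodes is simply tolerated.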

> Seems real complicated.

> Do you have a running beowulf crafty yet? Tell us about it, please.

No. Nothing running. I am simply going over alternative approaches to
make this work. In fact, the architecture is fairly unusual, in that I have
a cluster of 10 quad xeons, which means a two-level hierarchy exists:
shared memory threads on each node, but inter-node distributed search will
be done as well...

More once I get something going. Which won't be until this summer, for
sure. Not enough time right now.


One thing is for sure. 10M nodes per second is reachable.
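
A hypothetical sketch of that two-level split, with both levels simulated in
one process for illustration (a real version would use MPI or sockets
between nodes, and the search function here is a deterministic stand-in,
not a real evaluator):

```python
# Two-level work splitting as described above: root moves are divided
# among cluster nodes, and each node searches its share with local
# shared-memory threads. Everything below is an illustrative stand-in.
from concurrent.futures import ThreadPoolExecutor

def search(move, depth):
    # Placeholder for a real alpha-beta search; returns a fake score.
    return (sum(ord(c) for c in move) * depth) % 100

def node_worker(moves, depth, threads=4):
    """One cluster node: search its assigned root moves with local threads."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        scores = pool.map(lambda m: (search(m, depth), m), moves)
        return max(scores)

def cluster_search(root_moves, depth, nodes=10):
    # Level 1: deal the root moves out round-robin, one hand per node.
    chunks = [root_moves[i::nodes] for i in range(nodes)]
    return max(node_worker(chunk, depth) for chunk in chunks if chunk)
```

The inner level shares memory (so a shared hash table is cheap there),
while the outer level only exchanges root moves and scores, which is why it
tolerates slower links.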

Robert Hyatt

May 4, 2000
Brandon W. Beasley <bra...@cs2758-214.austin.rr.com> wrote:
> Another question:

> Let's say we have a computer that searches 12 ply on average at 40/120 time
> controls in the middle game.

> Now, add 1,000 more PCs crunching on a distributed problem. What would
> we gain theoretically by the addition of 1,000 computers? Obviously, we
> ain't going to get 1,000 x 12ply = 12,000 ply, but would it hit 20ply
> in the same time control?

No. The problem is that the internet has high latency (message transit
time) and high jitter (unpredictable transit time). This makes fast
searches impractical... Sometimes it takes a packet 3 minutes to traverse
the network. That won't fly for a real game.

To do this for a real game requires a local cluster, with less latency and
jitter to contend with...


> Perhaps I ask the question indelicately, but I'm trying to grasp the
> potential of this distributed chess idea.

I don't think you want to try this on something with a time-scale measured
in a few minutes. Correspondence chess would be a much better fit for a
WAN approach.

> On 3 May 2000 13:21:56 GMT, Robert Hyatt <hy...@crafty.cis.uab.edu> wrote:

>>Brandon W. Beasley <bra...@cs2758-214.austin.rr.com> wrote:

Dr A. N. Walker

May 4, 2000
In article <slrn8h48ct....@cs2758-214.austin.rr.com>,

Brandon W. Beasley <bbea...@austin.rr.com> wrote:
>Let's say we have a computer that searches 12 ply on average at 40/120 time
>controls in the middle game.
>Now, add 1,000 more PCs crunching on a distributed problem. What would
>we gain theoretically by the addition of 1,000 computers?

Nowhere near as much as you would like! As others have pointed
out, "it depends". But here is one simple model.

The most natural way to share the work is to let one glob of
your computers search move A, another glob move B, and so on, through
all the possible moves. Within the glob of A-searchers, one sub-glob
search the first reply Aa, another the second reply Ab, and so on, and
similarly there are searchers for Ba, Bb, ..., Zz. [There are plenty
of other schemes, but more complicated.]

As luck would have it, in a typical position there are around
30 moves and 30 replies to each move, and 30x30 is, give or take a few
percent, 1000, so on average you will get one machine searching each of
the possibilities Aa, ..., Zz. Effectively, you've shifted the entire
search two ply down the tree, so instead of 12 ply you get 14.
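
The "12 becomes 14" estimate is just the logarithm of the machine count in
the branching factor; a two-line check, using the branching factor of 30
from the paragraph above:

```python
# How many extra plies do 1000 machines buy, if each ply multiplies the
# work by ~30? Answer: log base 30 of 1000, i.e. about two.
import math

BRANCHING = 30
machines = 1000
extra_plies = math.log(machines, BRANCHING)
print(f"{machines} machines buy about {extra_plies:.2f} extra plies")
```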

Well, you can hope to do better than that. It's much easier
to confirm that a move is hopeless than that it's good [this is the
*very* simple version of "alpha-beta pruning"!], so if you expect move
A to be good and move Z to be bad, your A-glob can be much bigger than
your Z-glob. [This corresponds to the observation that your computer
will spend much more time looking at move 1 on its list than at move
30.] This gives you more resources for the good moves. But you take
the risk that move Z was good after all and has been under-resourced.
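
The pruning alluded to above fits in a dozen lines. This is a generic
textbook alpha-beta sketch, not anyone's engine code; the tree and the leaf
evaluation are supplied by caller-provided functions:

```python
# Minimal alpha-beta search. The point it illustrates: once one
# refutation of a move is found, the remaining replies in that branch
# are never examined, so bad moves are cheap to dismiss.
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:      # refutation found: prune the rest
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The asymmetry in the post follows directly: a branch that fails low is cut
off after its first refutation, while the best move's branch must be
searched in full, so it deserves the bigger glob.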

Also, if your 1000 PCs are sharing information, then they can
swap "war stories" -- "I've found a neat trick, see if it works in your
line too". Or globs that finish early can help out elsewhere. But any
communication is distracting the computers from their analysis, so it's
not all good news. There is some information that is easy to swap, and
that should speed up many of the searches. But you can't hope to share
[eg] transposition tables in real time across the internet, so two widely-
separated machines simply are not twice as fast as a single machine.

All of this stuff was being very actively researched a decade
and more ago. I get the impression that "a PC on every desk" killed
off most of that, but it's now coming back again through wider use of
the Internet and of multi-processor PCs. No doubt Bob's 40-xeon m/c
will be a monster!

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
a...@maths.nott.ac.uk
