
Extending endgame database using grid computing?

dp...@buffalo.edu

Feb 24, 2005, 4:18:19 AM
[c.p. moderator: x-posted to 2 other news groups]

Hi everyone,

I've been browsing the list of grid computing projects, wondering to
which worthy causes I should donate my precious idle cycles. Sure I've
folded a few proteins and valiantly searched for monstrous prime
numbers, but there's just something about a new PC which screams "use
me for CHESS!"

The ChessBrain website explains that they harness grid computing to
play a single chess game (albeit with enormous power behind each move).
I have a different idea, and I'd be in shock if I were to learn that
I'm the first one to think of it.

We all know that the 5-man endgame database was completed many years
ago by Ken Thompson and friends. And some work has been done on the
6-man and possibly higher-order databases, but only for limited
configurations of pieces and/or positions.

Since these kinds of calculations require tremendous processing and
memory resources, they readily lend themselves to grid computing. Just
imagine -- we get ten thousand or more participants, each eager to do
his or her part to extend human knowledge of this great game. Each
willing to sell (give, really) his computer into bondage so that
society may flourish. Why, I'll bet we could have the complete 6-man
database finished in no time at all. OK, maybe not "no time," but much
quicker than it took IBM to build Deep Blue. And perhaps quicker than
it took for the 5-man data.

Does anyone know if this kind of project exists and, if so, how this
humble chess-and-computer enthusiast can join? If no such effort is
currently underway, who can tell me what would be involved in launching
one? My Pentium is champing at the bit to start crunching.

Regards,
Daryl

--

Urban Koistinen

Feb 24, 2005, 2:07:23 PM
The reason this is not being done now is that the common way of
computing endgame databases requires plenty of local storage.
For a 6-man endgame with pawns that is about 24*64**5 positions,
and storing them requires more than 2 GiB of RAM, which most people
don't have yet.
It is possible to use disk space instead, but that requires more
complicated programming, and so far no one has done it well enough.
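
To put a number on that, here is a quick back-of-envelope check (my
arithmetic, not Urban's code): one pawn can be confined to 24 squares
by symmetry, the other five men go anywhere on 64 squares, and a bare
won/lost bitmap costs one bit per slot.

  positions = 24 * 64**5         # ~25.8e9 index slots, one 6-man table with a pawn
  bitmap_bytes = positions // 8  # one bit per position
  print(positions)               # 25769803776
  print(bitmap_bytes / 2**30)    # 3.0 -- about 3 GiB, indeed more than 2 GiB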

If anyone writes such a program, it will be easy enough to compute
the larger databases on today's supercomputers.
The computation that might profit from distributed computing is,
I think, compressing the databases.

In rec.games.chess.computer dp...@buffalo.edu wrote:
> [c.p. moderator: x-posted to 2 other news groups]
>

Vincent Diepeveen

Feb 28, 2005, 9:02:38 AM
Please let me note that Deep Blue didn't have any EGTBs above 3 men
in its hardware processors.
Perhaps they had KPK in hardware, nothing else.

In software perhaps a few 5-men tables, though I even doubt that.

"Urban Koistinen" <urb...@nada.kth.se> wrote in message
news:cvl8lb$ajn$1...@inn.nada.kth.se...


> The reason this is not being done now is that the common way of
> computing endgame databases requires plenty of local storage.
> For a 6-man endgame with pawns that is about 24*64**5 positions,
> and storing them requires more than 2 GiB of RAM, which most people
> don't have yet.
> It is possible to use disk space instead, but that requires more
> complicated programming, and so far no one has done it well enough.

> If anyone writes such a program, it will be easy enough to compute
> the larger databases on today's supercomputers.
> The computation that might profit from distributed computing is,
> I think, compressing the databases.

Diep's generator nowadays uses a modified Koistinen approach to the
computation.

The problem with generating 7-men tables is basically the huge
storage space needed during generation.

A RAID array is definitely important to have, simply because single
hard disks aren't big enough to hold one big uncompressed 7-man
table.
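
For a sense of scale (my own rough figure, ignoring the symmetry
reductions a real generator would apply): even one byte per naively
indexed position puts a single 7-man table in the terabytes, while
consumer disks in 2005 top out at a few hundred GB.

  positions = 64**7            # ~4.4e12 naive index slots for one 7-man table
  table_bytes = positions      # one byte per position, e.g. distance to mate
  print(table_bytes / 2**40)   # 4.0 -- about 4 TiB uncompressed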

Vincent

Anders Thulin

Mar 1, 2005, 11:37:41 AM

dp...@buffalo.edu wrote:
> humble chess-and-computer enthusiast can join? If no such effort is
> currently underway, who can tell me what would be involved in launching
> one?

First, a good understanding of how the databases are created now.

Next, a method of transforming that essentially serial task into a
parallel one: instead of one computer doing an enormous amount of
computation, you want an enormous number of computers doing small
and fairly well isolated tasks, which they then pass on to someone
who puts it all together.

That's the tricky part, I think. I don't know of any neat way of
doing it. I'm not sure how bad the un-neat ways are, though.
Chess endgames can be split into subclasses (a 6-piece endgame can
be split into a number of 5-piece endgames with a fixed sixth piece,
say), but not clearly in a useful way.

Then you'll need the scaffolding -- but again that's not very
tricky. Once you know how data flows, this should be fairly trivial.

The first subtask in creating an endgame database is to list all
possible positions. Not too tricky -- if you split on positions, this
can be spread over N systems. The second one is to classify each
position as either mate, stalemate, illegal, or unknown. Not too
difficult either.
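
As a concrete illustration of steps one and two, here is a slow but
straightforward sketch in modern Python for the tiny KQK ending,
leaning on the python-chess library for move generation and legality
checks. The index tuples and helper names are my own, made up for the
example; a real generator would use a packed index and its own movegen.

  import chess

  def board_of(wk, wq, bk, stm):
      """Build a KQK position; None if two men share a square."""
      if len({wk, wq, bk}) < 3:
          return None
      board = chess.Board(fen=None)          # start from an empty board
      board.set_piece_at(wk, chess.Piece(chess.KING, chess.WHITE))
      board.set_piece_at(wq, chess.Piece(chess.QUEEN, chess.WHITE))
      board.set_piece_at(bk, chess.Piece(chess.KING, chess.BLACK))
      board.turn = stm
      return board

  def classify(pos):
      """Step two: tag one slot as illegal, mate, stalemate or unknown."""
      board = board_of(*pos)
      if board is None or not board.is_valid():
          return "illegal"                   # adjacent kings, wrong-side check, ...
      if board.is_checkmate():
          return "mate"
      if board.is_stalemate():
          return "stalemate"
      return "unknown"

  # Step one: list every slot -- 64**3 square triples, both sides to move.
  table = {(wk, wq, bk, stm): classify((wk, wq, bk, stm))
           for wk in range(64) for wq in range(64) for bk in range(64)
           for stm in (chess.WHITE, chess.BLACK)}

Handing each of N machines a slice of the wk range would split these
two steps exactly as described.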

The third is to go over the unknown positions and find all 'mate in
one' positions. This means listing all legal moves, making them one
by one, looking up the resulting position to see if it's known or
not, and then undoing the moves. Minimax the results, and you'll know
if the position examined can be assigned a score. If it cannot,
you'll have to repeat the job. This is the tricky one: if you split
the endgame over N computers, these computers will have to refer to
each other, or to a central system, for those positions they are not
in charge of themselves. There seem to be a lot of those -- and so
there will need to be lots of cross-communication. And that's just
what you don't want -- because it's *slow*. Or you allow duplication
of tasks -- but then you want to impose some kind of upper bound, or
all N computers will duplicate each other's work to a very great
extent.

The fourth is, obviously, to repeat step 3, but for 'mate in two
plies', and so on and so forth until nothing changes anymore. All
remaining 'unknown' positions can then be classed as 'drawn'.
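
Continuing the toy KQK sketch above, steps three and four look roughly
like this: repeated one-ply rescans of the unknown slots until a
fixpoint, exactly the forward method just described (production
generators work backwards from the mates instead, which is far
cheaper). The scoring convention and the key_of helper are again my
own inventions for the example.

  WIN = 1000                                 # mate in p plies is stored as WIN - p

  def key_of(board):
      """Map a successor board back to an index tuple; None = queen captured."""
      wq = list(board.pieces(chess.QUEEN, chess.WHITE))
      if not wq:
          return None                        # bare kings: a dead draw
      return (board.king(chess.WHITE), wq[0],
              board.king(chess.BLACK), board.turn)

  # Scores are from the viewpoint of the side to move; checkmated now = -WIN.
  score = {pos: -WIN for pos, tag in table.items() if tag == "mate"}

  changed = True
  while changed:                             # one pass per extra ply of depth
      changed = False
      snapshot = dict(score)                 # only trust last pass's results
      for pos, tag in table.items():
          if tag != "unknown" or pos in score:
              continue
          board, replies, complete = board_of(*pos), [], True
          for move in board.legal_moves:     # the 1-ply search
              board.push(move)
              key = key_of(board)
              if key is None or table[key] == "stalemate":
                  replies.append(0)          # this reply reaches a draw
              elif key in snapshot:
                  replies.append(-snapshot[key])   # opponent's score, negated
              else:
                  complete = False           # some reply is still unscored
              board.pop()
          best = max(replies, default=None)
          if best is not None and best > 0:
              score[pos] = best - 1          # one winning reply is enough
              changed = True
          elif complete:                     # a loss or draw needs ALL replies known
              score[pos] = best + 1 if best < 0 else 0
              changed = True

  # Step four's terminal rule: anything still unscored after the fixpoint is
  # drawn (the score-0 entries assigned along the way are draws too).
  draws = [p for p, t in table.items() if t == "unknown" and p not in score]

The cross-communication problem shows up in the snapshot lookup: on
one machine it's a dictionary access, but with the table split over N
machines most of those lookups become network round-trips.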

To a certain extent, the first N steps can be done by letting each
system do an exhaustive search for the required number of plies. That
might be one way to split the job. The (N+1)th step would then be
similar to the third step above: a 1-ply search in already scored
positions. And again, the same problem as above appears, only a
little later.

There may be some clever way of partitioning the job ... it has to
be found before anything useful can be done.

--
Anders Thulin ath*algonet.se http://www.algonet.se/~ath

--
