
Parallel Prolog


Simon....@gmail.com

Jan 15, 2007, 4:37:18 AM

Hello,

does anybody know what happened to parallel Prolog(s)? It seems that in
the late 1980s and early 1990s many people believed in the idea of
making Prolog parallel. Around 1995, the idea stopped being
fashionable.

Do we not now live in a time in which a parallel Prolog could be
useful? Today many people own computers with several processors. And,
most probably, computers will become faster in the coming years not
because of higher clock rates but rather because of having more
processors.

Does this make any sense?

Do you know of any projects about parallel prolog?

Simon

Paulo Moura

Jan 15, 2007, 5:49:45 AM

Yes :-)

> Do you know of any projects about parallel prolog?

Some Prolog compilers (e.g. Qu-Prolog, SWI-Prolog, YAP, XSB) provide
low-level support for multi-threaded programming. High-level support
for multi-threading is the focus of the latest versions of Logtalk,
aiming to allow programmers to easily take advantage of the current
crop of multi-processor and multi-core desktop and portable computers.
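
As an illustration of what such high-level support looks like, recent
Logtalk versions provide a threaded/1 built-in that proves a conjunction
of goals in parallel (a sketch only; it requires a backend Prolog
compiler with multi-threading support, and the object and predicate
names here are just illustrative):

```prolog
% Logtalk sketch: prove both goals concurrently (and-parallelism)
% and combine the resulting bindings.
:- object(demo).

    :- public(test/2).
    test(X, Y) :-
        threaded((
            X is 6 * 7,
            Y is 2 ** 10
        )).

:- end_object.
```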

Cheers,

Paulo

Darren Bane

Jan 15, 2007, 5:58:17 AM

Simon....@gmail.com wrote:
!snip!

> Does this make any sense?

More every year as the number of cores on consumer-grade CPUs increases.

> Do you know of any projects about parallel prolog?

I found KLIC ( http://www.klic.org/software/klic/index.en.html ) very
interesting. However, it has significant differences from ISO Prolog
(e.g. committed choice), and could best be described as another logic
programming language. This may or may not be what you're looking for.
--
Darren Bane

Jan Wielemaker

Jan 15, 2007, 6:23:05 AM

Concurrency in Prolog is -in my opinion- really a must-have. Prolog
is now commonly used as a component in (web-)servers and indeed,
multi-CPU hardware is becoming commonplace. Traditionally, research
has aimed to introduce concurrency at a low level in the language; I
think Parlog is one of the most famous examples. To the best of my
knowledge, none of these approaches was widely adopted.

As Paulo points out, quite a few modern Prolog systems provide
multiple communicating Prolog engines operating on a shared set of
clauses. This approach allows for good interaction in multi-threaded
server environments and makes it easy to write multi-threaded servers
in Prolog. For example, SWI-Prolog comes with a Tomcat-like
multi-threaded HTTP server library.
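
To make the flavour concrete, here is a minimal sketch of this explicit,
message-passing style in SWI-Prolog (the thread and message-queue
predicates are from its threads API; exact details may vary between
versions, and the worker/queue structure is just illustrative):

```prolog
% Sketch: one worker thread consuming goals from a shared queue.
% All threads operate on the same shared set of clauses.

worker(Queue) :-
    thread_get_message(Queue, Task),
    (   Task == stop
    ->  true
    ;   call(Task),
        worker(Queue)
    ).

demo :-
    message_queue_create(Queue),
    thread_create(worker(Queue), Worker, []),
    thread_send_message(Queue, format("hello from a worker thread~n")),
    thread_send_message(Queue, stop),
    thread_join(Worker, _Status),
    message_queue_destroy(Queue).
```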

The downside of this, of course, is that we are back at explicitly
coded concurrency, where the Parlog-like approaches aimed at
exploiting concurrency automatically. I guess practical Prolog,
containing cuts, assert/retract and interaction with foreign code,
just isn't clean enough to make automatic concurrency feasible.

Cheers --- Jan

Jens Kilian

Jan 15, 2007, 12:41:42 PM

Simon....@gmail.com writes:
> does anybody know what happened to parallel Prolog(s)? It seems that in
> the late 1980s and early 1990s many people believed in the idea of
> making prolog parallel. Around 1995, this idea stopped to be
> fashionable.

I recently read Mats Carlsson's and Roland Karlsson's dissertations
about OR-parallel Prolog implementations (both from SICS). Carlsson
describes an "Aurora"-based (basically, shared memory) system, Karlsson
writes about "Muse" (a MUlti-SEquential system, with copying between
independent engines). The Muse approach seemed like it would be quite
appropriate for modern PCs.

Regards,
Jens.
--
mailto:j...@acm.org As the air to a bird, or the sea to a fish,
http://www.bawue.de/~jjk/ so is contempt to the contemptible. [Blake]
http://del.icio.us/jjk

Cameron Hughes

Jan 16, 2007, 2:48:22 PM

Jan Wielemaker wrote:

While my approach may upset some language purists, it works and I
have found it to be robust.

I use C/C++ wrappers via the (SWI-Prolog) foreign language library in
conjunction with either PVM or MPI (it depends on which cluster I'm on).

So while the core of the work is done in Prolog, the parallelism and
synchronization are done within the PVM and MPI function calls.

While this does mean that you have to design your concurrency, it's
a very practical, stable, do-it-right-now approach.

--- Cameron

Simon....@gmail.com

Jan 17, 2007, 4:46:24 AM

> The downside of this, of course, is that we are back at explicitly
> coded concurrency, where the Parlog-like approaches aimed at
> exploiting concurrency automatically. I guess practical Prolog,
> containing cuts, assert/retract and interaction with foreign code,
> just isn't clean enough to make automatic concurrency feasible.

Wouldn't it be possible to "declare" certain parts of one's code
(e.g. certain modules) "clean" in order to have the Prolog compiler
create automatic concurrency when compiling those parts? Clean
code chunks would not be allowed to use assert/retract, the cut, and
so on. This way one would retain Prolog's practicality while gaining
efficiency.

Simon

Jan Wielemaker

Jan 17, 2007, 8:48:04 AM

Prolog still suffers from subtle ordering issues. Automatic
concurrent processing may be more suitable for constraint solvers and
other more declarative sublanguages. Mercury might also be a better
candidate.

Cheers --- Jan

Michael D. Kersey

Jan 17, 2007, 3:01:13 PM

Simon....@gmail.com wrote:
<snipped>

> Do you know of any projects about parallel prolog?
>
You should visit The Computational logic, Languages,
Implementation, and Parallelism Laboratory (CLIP) site at
http://clip.dia.fi.upm.es/

CLIP maintains the Ciao-Prolog Development System WWW Site at
http://clip.dia.fi.upm.es/Software/Ciao/index.html#ciao

From that site:
> The Ciao Prolog System
>
> Ciao is a public domain, next generation multi-paradigm programming environment with a unique set of features:
>
>
>
> Ciao offers a complete Prolog system, supporting ISO-Prolog, but its novel modular design allows both restricting and extending the language. As a result, it allows working with fully declarative subsets of Prolog and also to extend these subsets (or ISO-Prolog) both syntactically and semantically. Most importantly, these restrictions and extensions can be activated separately on each program module so that several extensions can coexist in the same application for different modules.
>
> Ciao also supports (through such extensions) programming with functions, higher-order (with predicate abstractions), constraints, and objects, as well as feature terms (records), persistence, several control rules (breadth-first search, iterative deepening, ...), concurrency (threads/engines), a good base for distributed execution (agents), and parallel execution. Libraries also support WWW programming, sockets, external interfaces (C, Java, TclTk, relational databases, etc.), etc.


Alexei A. Morozov

Jan 18, 2007, 1:55:33 PM

Dear Colleagues,

Let me state my opinion that the idea of joining plain (standard)
Prolog with parallelism and/or concurrency is rather naive. My opinion
is that parallelism and concurrency can be built into logic programming
only after a (reasonable and mathematically correct) joining of logic
programming with the OOP approach.

One can read about our approach to solve the problem on the Web Site:
http://www.cplire.ru/Lab144/start/e_concur.html

Best regards,

Dr. Alexei A. Morozov

mailto: mor...@mail.cplire.ru

http://www.cplire.ru/Lab144
Institute of Radio Engineering and Electronics RAS

Paulo Moura

Jan 24, 2007, 2:22:50 PM

On Jan 18, 6:55 pm, "Alexei A. Morozov" <AlexeiMoro...@netscape.net>
wrote:

> Dear Colleagues,
>
> Let me state my opinion that the idea of joining plain (standard)
> Prolog with parallelism and/or concurrency is rather naive. My opinion
> is that parallelism and concurrency can be built into logic programming
> only after a (reasonable and mathematically correct) joining of logic
> programming with the OOP approach.

While currently working on Logtalk multi-threading support, I would say
that logic programming, objects, and concurrency are orthogonal
features. People are doing useful work with e.g. SWI-Prolog or
Qu-Prolog multi-threading support without necessarily using objects. I
believe Jan and Peter can provide us with some real world examples of
multi-threaded Prolog applications.

Cheers,

Paulo

Jan Wielemaker

Jan 24, 2007, 5:07:38 PM

I have the impression that a large proportion of the commercial users of
SWI-Prolog use concurrency, mostly as a component of a server application.
I'm involved in a larger project running a multi-threaded SWI-Prolog based
web server. Enjoy at http://e-culture.multimedian.nl/demo/search

Cheers --- Jan

Matthew Huntbach

Jan 25, 2007, 5:23:40 AM

No. The problem is that even "clean" Prolog is surprisingly dependent on
sequential execution.

Consider the left-to-right evaluation of goals. It is fundamental to a lot of
Prolog programming that one goal binds a variable and a goal to its right
then makes choices dependent on that binding. Remove that left-to-right
sequentiality and then the goal to the right may make those choices without
the variable being bound. Given that Prolog makes no distinction between
testing whether a variable has a binding and giving it a binding, the result
is that the goal to the right binds the variable to some inappropriate value
it was supposed to test for, or simply flounders on making inappropriate
choices in the absence of the variable being bound. Then the problem is
what to do when the computation that is meant to bind the variable binds
it to something else.

Consider Prolog's backtracking mechanism. That requires a centralised
global view of program execution, one that can't be divided into
concurrent parts. It really implies one global stack of goals.
Attempting to parallelise this means we have the prospect of a variable
being bound in one part of the program and another part of the program
acting on that binding, only for the variable to change its value because the
binding part has backtracked and reassigned the variable.

It was for this reason that the most successful "parallel Prologs" (KLIC,
Parlog, Strand, FGHC - all essentially the same language with very minor
differences; the plethora of names, resulting from different research
teams working on essentially the same idea but failing to accept a common
standard, did not help) abandoned both two-way unification and backtracking.
But having done this, it was really rather silly to think of them as
"parallel Prologs". What they inherited from Prolog was rather limited,
although the code had a Prolog-like appearance. They were really such different
languages that thinking of them as "Prolog variants" acted as a barrier to
constructive use of them.

The development of these languages is discussed in the book I co-authored
with Graem Ringwood, "Agent-Oriented Programming", published in the Springer
Lecture Notes in Artificial Intelligence series in 1999 as volume 1630.
Actually, the chatty bits at the front are largely Graem's, the codey bits
at the back are largely mine. I'm happy to admit the book is a bit of a
mess, it was delayed due to too much effort spent on trying to add
context-setting history of AI stuff, and a misguided attempt, having missed
the logic programming boom, to jump on the multi-agent systems boom instead.

Since writing that book, my view on this is that Parlog/KLIC/Strand isn't
really a "high level language" at all, it's a very simple low-level
concurrent calculus which is held back by its Prolog-like syntax.
The silliest thing is that its syntax disguises what is actually fundamental
to it - directed flow of data. It's far easier to understand what the
language can do if you give it a different syntax which demonstrates and
formalises what in practice is essential - that all its variables must
have one writer but may have several readers.

I call this syntax the "core language of Aldwych". Also as part of this,
I formalise what was in practice a requirement, that variables which
employ "back-communication" (leaving an unbound variable as a "hole" in
a tuple, which is bound by their reader), the thing which really makes
Parlog/KLIC/Strand more than just first-order functional programming,
must be linear - have one reader as well as one writer.

Next I recognise that this language is very low level, with none of the
structuring mechanisms a high level language needs to make its code
comprehensible. So I develop some structurings, based on and extending
previous work on "object oriented programming in concurrent logic languages".
In effect what is happening here is that design patterns in programs in
these languages - common ways of putting code together and thinking about it
on a higher level - are turned into syntax which can compile down to the
lower level form for execution. This results in the high-level language I
call "Aldwych" - originally named because "Aldwych turns into Strand" (on
the London street map as well as in these language terms). But now I don't
see any particular value in emphasising the Prolog background to the work.

See my web page:

http://www.dcs.qmul.ac.uk/~mmh/

for some papers on this.

Matthew Huntbach

Markus Triska

Jan 25, 2007, 12:12:15 PM

Matthew Huntbach <m...@dcs.qmul.ac.uk> wrote:

> Consider the left-to-right evaluation of goals. It is fundamental to
> a lot of Prolog programming that one goal binds a variable and a
> goal to its right then makes choices dependent on that binding.

It's also often the case that one goal binds a variable and several
further goals just perform tests on the binding. Such tests can be
performed in parallel. That's AND-parallelism: performing several
goals of a conjunction in parallel (Mercury already has that, it's
possible in Prolog as well).
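
As a concrete illustration, SWI-Prolog's library(thread) offers
concurrent/3, which runs the goals of a conjunction on a pool of
threads; this sketch assumes the goals are independent once X is bound,
as this kind of AND-parallelism requires:

```prolog
:- use_module(library(thread)).

% After X is bound, the two tests share no unbound variables,
% so they can safely be evaluated on separate threads.
and_parallel_check(X) :-
    X = 42,
    concurrent(2, [ X > 0, 0 =:= X mod 2 ], []).
```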

> Given that Prolog makes no distinction between testing whether a
> variable has a binding and giving it a binding

Sometimes you can deduce these properties even statically. A few
simple annotations (like in Mercury) can also help.

> Consider Prolog's backtracking mechanism. That requires a centralised
> global view of program execution, one that can't be divided into
> concurrent parts. It really implies one global stack of goals.

Not at all. Consider OR-parallelism: A disjunction like in

good(X) :-
    ( X = 0 ; X = 1 ; X = 2 ),
    X >= 0.

can be divided and yield different "stacks of goals" on different
cores. If done naively, you may obtain solutions in an order different
from the one obtained by sequential execution on one core. That's
often OK (e.g. in many constraint satisfaction tasks any solution
suffices). Also, from a previous post by Lee Naish:

"Furthermore, much of the work on or-parallel Prolog systems was to
ensure they had the same sequence of computed answers as
sequential Prolog. Similarly for independent and-parallel
systems."
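
For readers who want to experiment with a limited form of this today:
SWI-Prolog's library(thread) also provides first_solution/3, which races
alternative goals on separate threads and commits to whichever answers
first (a sketch only; this is committed-choice racing, not the
transparent OR-parallelism of systems like Aurora or Muse):

```prolog
:- use_module(library(thread)).

% Race two ways of searching for X; whichever branch succeeds
% first provides the binding, and the other thread is aborted.
good_parallel(X) :-
    first_solution(X,
                   [ ( between(0, 2, X), X >= 0 ),
                     ( member(X, [2, 1, 0]), X >= 0 )
                   ],
                   []).
```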

All the best,
Markus Triska

Matthew Huntbach

Jan 26, 2007, 7:23:06 AM

On Thu, 25 Jan 2007, Markus Triska wrote:
> Matthew Huntbach <m...@dcs.qmul.ac.uk> wrote:

>> Consider the left-to-right evaluation of goals. It is fundamental to
>> a lot of Prolog programming that one goal binds a variable and a
>> goal to its right then makes choices dependent on that binding.

> It's also often the case that one goal binds a variable and several
> further goals just perform tests on the binding. Such tests can be
> performed in parallel. That's AND-parallelism: performing several
> goals of a conjunction in parallel (Mercury already has that, it's
> possible in Prolog as well).

Yes, AND-parallelism of this sort is also in the committed choice concurrent
logic languages I mentioned (KLIC/Parlog/Strand/etc).

>> Given that Prolog makes no distinction between testing whether a
>> variable has a binding and giving it a binding

> Sometimes you can deduce these properties even statically. A few
> simple annotations (like in Mercury) can also help.

Yes, I think since the programmer has these properties in mind, and they
are essential for good parallelisation of the program, it is silly to use
a syntax which hides them. The point I'm making is that even pure
Prolog doesn't parallelise well, and the annotations and restrictions
required to parallelise it mean that what you have is better considered as
another sort of logic programming language rather than "parallel Prolog".

>> Consider Prolog's backtracking mechanism. That requires a centralised
>> global view of program execution, one that can't be divided into
>> concurrent parts. It really implies one global stack of goals.

> Not at all. Consider OR-parallelism: A disjunction like in
>
> good(X) :-
>     ( X = 0 ; X = 1 ; X = 2 ),
>     X >= 0.
>
> can be divided and yield different "stacks of goals" on different
> cores. If done naively, you may obtain solutions in an order different
> from the one obtained by sequential execution on one core. That's
> often OK (e.g. in many constraint satisfaction tasks any solution
> suffices). Also, from a previous post by Lee Naish:
>
> "Furthermore, much of the work on or-parallel Prolog systems was to
> ensure they had the same sequence of computed answers as
> sequential Prolog. Similarly for independent and-parallel
> systems."

OR-parallelism brings big efficiency issues, at worst you have to split
the whole programming environment into two every time you hit an
OR-parallel choice. You also hit the whole tricky issue of speculative
parallelism and speedup anomalies. So while it can be done, it's much
trickier than was naively supposed when the idea of parallelising Prolog was
first raised. It's because of this trickiness that the committed choice
logic languages dropped it, becoming "flat" and concentrating purely on
AND-parallelism. As was noted then, you can simulate OR-parallelism by
AND-parallelism anyway, and doing so gives you more direct control over
your search algorithm, which in practice you need if you are doing
serious search. This is all written up in my book, and if anyone is really
interested I still have a few publisher's copies I could mail you on
request.

Matthew Huntbach


Markus Triska

Jan 26, 2007, 2:02:34 PM

Matthew Huntbach <m...@dcs.qmul.ac.uk> wrote:

> the annotations and restrictions required to parallelise it mean
> what you have is better considered as another sort of logic
> programming language rather than "parallel Prolog".

Calling it "parallel Prolog" seems to fit the case perfectly (Prolog,
possibly augmented with simple annotations, or with transparent
parallelism where the restrictions can be deduced automatically). What
could legitimately be called "parallel Prolog", if not exactly that?

> OR-parallelism brings big efficiency issues, at worst you have to split
> the whole programming environment into two every time you hit an
> OR-parallel choice.

You don't "have to"; the fact that it's possible doesn't make that a
viable strategy, nor does it imply its necessity. Also, it's not the
worst case in general: Say I have a single OR branch at the start of a
program followed by a lengthy deterministic computation. Splitting on
every OR-choice (i.e., once) can be the best thing to happen then.

> You also hit the whole tricky issue of speculative parallelism and
> speedup anomalies.

You mean "speedup anomaly" in the usual sense, i.e., "the observed
speedup is greater than predicted"? Yes, that can happen.

> So while it can be done, it's much trickier than was naively
> supposed when the idea of parallelising Prolog was first raised.

That's one correct and concise reply to Simon's question.

student

Jan 26, 2007, 8:49:35 PM

Matthew Huntbach wrote:
...

> It was for this reason that the most successful "parallel Prologs" (KLIC,
> Parlog, Strand, FGHC - all essentially the same language with very minor
> differences; the plethora of names, resulting from different research
> teams working on essentially the same idea but failing to accept a common
> standard, did not help) abandoned both two-way unification and backtracking.
> But having done this, it was really rather silly to think of them as
> "parallel Prologs". What they inherited from Prolog was rather limited,
> although the code had a Prolog-like appearance. They were really such
> different languages that thinking of them as "Prolog variants" acted as a
> barrier to constructive use of them.
>
> ... [M]y view on this is that Parlog/KLIC/Strand isn't

Thank you.

In your introduction to Aldwych, you state "asynchronous ensembles are
the dominating intellectual issue in the emerging era of computing
systems research".

That statement called to my mind a paper which I happen to be reading,
"Occam, Turing, von Neumann, [and] Jaynes: How much can you get for how
little?", by Tommaso Toffoli, in which he says "One owes to Boltzmann
the bold intuition that the /entropy/ of a sample of matter -- entropy
being a very concrete physical quantity
which appears in the equations of thermodynamics alongside other
physical quantities such as volume, temperature, and energy -- should
be identified with something as abstract as its 'information content,'
namely, the number arrived at by just /counting/ ... how many individual
microscopic states are compatible with the *macroscopic* properties of
the sample."

I am not sure what Toffoli means here by the predicate "is compatible
with", but what was most surprising to me about this article is just how
small the number of microscopic states that is observably sufficient to
give rise to the macroscopic "properties" of an ensemble can be -- much
smaller, in some cases, than the number of personal computers running
Linux in existence today.

This makes me wonder, given a sufficiently large number of
intercommunicating computers -- all running, for example, the OLPC
operating system (which has automatic sensing and wireless
interconnection between neighboring OLPC systems among its main design
goals) -- should one not expect certain limiting properties of such an
ensemble to emerge, properties which are both independent of and
invariant with respect to the properties of the individual PCs within
that ensemble?

Is this a meaningful question?

If so, what might those properties be?

billh

Matthew Huntbach

Jan 29, 2007, 4:26:21 AM

On Sat, 27 Jan 2007, student wrote:
> Matthew Huntbach wrote:

>> See my web page:
>>
>> http://www.dcs.qmul.ac.uk/~mmh/
>>
>> for some papers on this.

> This makes me wonder, given a sufficiently large number of intercommunicating

> computers -- all running, for example, the OLPC operating system (which has
> automatic sensing and wireless interconnection between neighboring OLPC
> systems among its main design goals) -- should one not expect certain
> limiting properties of such an ensemble to emerge, properties which are both
> independent of and invariant with respect to the properties of the individual
> PCs within in that ensemble?
>
> Is this a meaningful question?

The particular brand of computer or operating system used has nothing to
do with the abstract properties of highly concurrent systems. There is
plenty of work on this sort of thing. Try Googling "emergent properties".

Matthew Huntbach

Simon....@gmail.com

Jan 30, 2007, 4:23:13 AM

Thanks to all who participated in this thread. Google classifies this
group as a low-activity group. It fails to notice, though, that it is
also a high-quality-activity group.

Simon

Matthew Huntbach

Jan 31, 2007, 9:40:32 AM

On Fri, 26 Jan 2007, Markus Triska wrote:
> Matthew Huntbach <m...@dcs.qmul.ac.uk> wrote:

>> the annotations and restrictions required to parallelise it means
>> what you have is better considered as another sort fo logic
>> programming language rather than "parallel Prolog".

> Calling it "parallel Prolog" seems to fit the case perfectly (Prolog,
> possibly augmented with simple annotations, or with transparent
> parallelism where the restrictions can be deduced automatically). What
> could legitimately be called "parallel Prolog", if not exactly that.

My own feeling is that the word "Prolog" has a good and useful meaning:
the sequential logic programming language with depth-first search and
backtracking, left-to-right evaluation of goals, full unification
(apart perhaps from the occurs check). I think we have another phrase
for languages which look like Prolog but behave in a different way, and that
phrase is "logic programming language".

OK, I'm prepared to accept "parallel Prolog" for something that looks
and behaves like sequential Prolog but exploits parallelism transparently.
But I'm uneasy about it going further than that, and I think the
usage of "parallel Prolog" for what ought to be "parallel logic programming
language" is regrettable, though it often happens. One of the reasons I
came to this conclusion was that when I was working on parallel logic
programming languages I often found people's beliefs that what I was doing
was "parallel Prolog" limited what they believed the language could do.
That is, they saw it as logic programming with all the restrictions of
Prolog plus the additional restrictions and/or annotations required for
efficient parallelisation. I would rather they saw it as "logic
programming which is more powerful than Prolog due to the additional
facilities opened up by the concurrency and parallelism".

>> OR-parallelism brings big efficiency issues, at worst you have to split
>> the whole programming environment into two every time you hit an
>> OR-parallel choice.

> You don't "have to"; the fact that it's possible doesn't make that a
> viable strategy, nor does it imply its necessity. Also, it's not the
> worst case in general: Say I have a single OR branch at the start of a
> program followed by a lengthy deterministic computation. Splitting on
> every OR-choice (i.e., once) can be the best thing to happen then.

In sequential Prolog you go with your first OR-choice, and if that
doesn't work out you backtrack and go with your second OR-choice.
With OR-parallelism, you do both computations, one with your first OR-choice,
one with your second, and if these also involve choices you split again, and
again ... . So I'm not saying it's never useful, just that it's tricky
and its effects and usefulness aren't always obvious.

>> You also hit the whole tricky issue of speculative parallelism and
>> speedup anomalies.

> You mean "speedup anomaly" in the usual sense, i.e., "the observed
> speedup is greater than predicted"? Yes, that can happen.

Can happen both ways round. If the solution is somewhere to the
right in the search tree, but you get to it quickly because a parallel
processor is working on that branch, you get a faster speedup than the
number of processors. On the other hand, if the solution is to the left
you get no speedup because you don't arrive at it any more quickly
with the parallel processors. A slowdown occurs when the solution
is to the left but it happens that all your parallel processors are given
over to searching for solutions in the branches to its right.

>> So while it can be done, it's much trickier than was naively
>> supposed when the idea of parallelising Prolog was first raised.

> That's one correct and concise reply to Simon's question.

Matthew Huntbach

student

Feb 3, 2007, 3:12:38 AM

Matthew Huntbach wrote:
> On Sat, 27 Jan 2007, student wrote:
>> Matthew Huntbach wrote:
>
>>> See my web page:
>>>
>>> http://www.dcs.qmul.ac.uk/~mmh/
>>>
>>> for some papers on this.
>
>> This makes me wonder, given a sufficiently large number of
>> intercommunicating computers -- all running, for example, the OLPC
>> operating system (which has automatic sensing and wireless
>> interconnection between neighboring OLPC systems among its main design
>> goals) -- should one not expect certain limiting properties of such an
>> ensemble to emerge, properties which are both independent of and
>> invariant with respect to the properties of the individual PCs within
>> that ensemble?
>>
>> Is this a meaningful question?
>
> The particular brand of computer used or operating system have nothing to
> do with the abstract properties of highly concurrent systems.

I didn't say it did. My question was, would a *population* consisting
of a sufficiently large number of functionally identical interacting
computers -- for example, OLPC systems -- exhibit recognizable "emergent
properties" of the sort exhibited by the examples given in the paper I
mentioned.

bh

epon...@cs.nmsu.edu

Feb 26, 2007, 12:28:21 AM

I would like to suggest

Parallel Execution of Prolog Programs: A Survey
Gopal Gupta, Enrico Pontelli, Khayri A. M. Ali, Mats Carlsson, and
Manuel V. Hermenegildo
ACM Transactions on Programming Languages and Systems (TOPLAS),
Volume 23, Issue 4, July 2001.

I think it answers a lot of the questions raised in this thread.

Cheers
