Cloud Haskell now on Hackage


Jeff Epstein

Oct 26, 2011, 8:40:30 PM
to parallel-haskell
Cloud Haskell is now available from Hackage!

http://hackage.haskell.org/package/remote
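
For a quick taste, here's roughly what a minimal process looks like (a sketch only; see the paper and the package documentation for exact signatures):

import Remote

-- A tiny process: wait for a String in the mailbox, log a greeting, repeat.
greeter :: ProcessM ()
greeter = do
  name <- expect
  say ("Hello, " ++ name)
  greeter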

Greetings from sunny Ulaan Baatar,
Jeff

Dylan Lukes

Oct 26, 2011, 8:43:46 PM
to parallel...@googlegroups.com
Cheers!

Johan Tibell

Oct 26, 2011, 8:46:53 PM
to parallel...@googlegroups.com
Nice!

Ryan Newton

Oct 27, 2011, 12:24:17 AM
to parallel...@googlegroups.com
Great!

Btw, I see it kept the name "remote" after all. So should we stop saying
"Cloud Haskell" then?

Jeff Epstein

Oct 27, 2011, 2:23:50 AM
to parallel...@googlegroups.com
I'm open to changing both the title and the package name in a future
version, if some consensus can be reached.

Dylan Lukes

Oct 27, 2011, 6:30:39 AM
to parallel...@googlegroups.com
I think the package name "remote" is a bit unspecific/generic to be honest.

- Dylan

Erik de Castro Lopo

Oct 27, 2011, 6:33:02 AM
to parallel...@googlegroups.com
Dylan Lukes wrote:

> I think the package name "remote" is a bit unspecific/generic to be honest.

I tend to agree, but it is better than "Cloud Haskell".

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

Ryan Newton

Oct 27, 2011, 10:19:24 AM
to parallel...@googlegroups.com
Other alternatives? Jeff is open-minded, so speak now or forever hold
your peace ;-).

... It does have an Erlang-based heritage. What about something
explicitly referencing Erlang?

Johan Tibell

Oct 27, 2011, 10:26:44 AM
to parallel...@googlegroups.com
My bike shed:

Something with "actors" in the name, given that this implements a message-passing actor model? remote-actors, distributed-actors, or just actors?

-- Johan

Dylan Lukes

Oct 27, 2011, 10:37:47 AM
to parallel...@googlegroups.com
We should probably explicitly avoid using the Erlang name. That's confusing to say the least...

Maybe we could put this in a Distributed toplevel? It somewhat fits in Network and Concurrent too though. If "Cloud" captures this hybrid, it might be a valid toplevel name.

Cloud.*, I mean.

Dylan


Ryan Newton

Oct 27, 2011, 10:59:58 AM
to parallel...@googlegroups.com
I like Johan's "actors" idea.  

  Remote.Actors  (package remote-actors?)
  Cloud.Actors  ?
  Network.Actors
  Distributed.Actors

It is shocking to see how many names get used in different message-passing models for actors -- agents / kernels / filters / nodes / processes / etc ;-).

Michael Xavier

Oct 27, 2011, 11:10:56 AM
to parallel...@googlegroups.com
I second Network.Actors. It is a pretty succinct way of describing the concurrency model (actors) and problem domain (network).
--
Michael Xavier
http://www.michaelxavier.net

Johan Tibell

Oct 27, 2011, 11:28:17 AM
to parallel...@googlegroups.com
If we use "actors", we can more easily make dismissive comments about Erlang:

"Actors? Oh, that's just 'cabal install actors' in Haskell."

;)

Dylan Lukes

Oct 27, 2011, 11:29:53 AM
to parallel...@googlegroups.com
This would require some renaming in the remote package, and I worry we'd be conflating names by using "actors". A compound name like "actorthread" or "actorprocess" might be clearer and more discernible.

Dylan Lukes

Oct 27, 2011, 11:36:19 AM
to parallel...@googlegroups.com
Erlang doesn't talk about "actors" though. They talk about spawning processes (which can send/receive...).

We need a name which won't be conflated with threads/processes, or Erlang processes...

Source: I write Erlang.

- Dylan



Simon Marlow

Oct 27, 2011, 11:37:06 AM
to parallel...@googlegroups.com

Not “Network” – it’s perfectly sensible to run this on a multicore. I could live with “Distributed” though.

Not “Cloud” – too buzzwordy, and everyone disagrees about what “cloud” means. (Some people really dislike the use of the word “cloud” in this context.)

“Actors” – that gets across the message-passing bit, but not the distributed bit. There’s already an “actor” package, FWIW.

(I’ve written a lot more email than code for Cloud Haskell, so my opinion should be weighted appropriately :-)

Cheers,
Simon

Johan Tibell

Oct 27, 2011, 11:44:49 AM
to parallel...@googlegroups.com
On Thu, Oct 27, 2011 at 8:36 AM, Dylan Lukes <lukes...@gmail.com> wrote:
> Erlang doesn't talk about "actors" though. They talk about spawning
> processes (which can send/receive...).
>
> We need a name which won't be conflated with threads/processes, or
> Erlang processes...
>
> Source: I write Erlang.

I stand corrected. The programming model is generally referred to as the actor model, but in Erlang you talk about processes, not actors.

-- Johan

Ryan Newton

Oct 27, 2011, 11:50:48 AM
to parallel...@googlegroups.com
On Thu, Oct 27, 2011 at 11:37 AM, Simon Marlow <simo...@microsoft.com> wrote:
> Not “Network” – it’s perfectly sensible to run this on a multicore. I
> could live with “Distributed” though.
>
> Not “Cloud” – too buzzwordy, and everyone disagrees about what “cloud”
> means. (Some people really dislike the use of the word “cloud” in this
> context.)
>
> “Actors” – that gets across the message-passing bit, but not the
> distributed bit. There’s already an “actor” package, FWIW.
It seems like Distributed (i.e. rather than shared memory, but not necessarily "networked") addresses Simon's concerns.  That would then recommend:

  Distributed.Actors 
or
  Distributed.Processes

I'm fine with either, or even something like:

  Distributed.Messaging

And factor out the task stuff, providing just the messaging layer.

-Ryan

Dylan Lukes

Oct 27, 2011, 12:14:21 PM
to parallel...@googlegroups.com
I'd argue Processes and Messaging are in the same layer, but picking a name is hard. Maybe Distributed.Processes is better. Messaging doesn't feel inclusive enough.

Dylan

Johan Tibell

Oct 27, 2011, 12:57:22 PM
to parallel...@googlegroups.com
Also consider whether we should use singular or plural. I think we typically use singular.

Duncan Coutts

Oct 27, 2011, 1:04:11 PM
to parallel...@googlegroups.com
On Thu, 2011-10-27 at 11:50 -0400, Ryan Newton wrote:
> On Thu, Oct 27, 2011 at 11:37 AM, Simon Marlow <simo...@microsoft.com> wrote:
> >
> > Not “Network” – it’s perfectly sensible to run this on a multicore. I
> > could live with “Distributed” though.
> >
> > Not “Cloud” – too buzzwordy, and everyone disagrees about what “cloud”
> > means. (Some people really dislike the use of the word “cloud” in this
> > context.)
> >
> > “Actors” – that gets across the message-passing bit, but not the
> > distributed bit. There’s already an “actor” package, FWIW.
>
> It seems like Distributed (i.e. rather than shared memory, but not
> necessarily "networked") addresses Simon's concerns. That would then
> recommend:

Yes, I was thinking of Distributed too.

Our existing concurrency libraries have things like:

Control.Concurrent
Control.Concurrent.STM
Control.Concurrent.MVar
etc

In keeping with that, I'd go with

Control.Distributed.*

> Distributed.Actors
> or
> Distributed.Processes

Yes, and I'd be happy with * being Actors or Processes.

So, specifically then, my suggestion is:

Control.Distributed.Actors
or
Control.Distributed.Processes

Other people working on other similar distributed memory stuff can pick
other names in Control.Distributed.*

> And factor out the task stuff, providing just the messaging layer.

Right.

Control.Distributed.Task

Then for a package name, how about "distributed-actors"?

Duncan

Johan Tibell

Oct 27, 2011, 1:16:39 PM
to parallel...@googlegroups.com
On Thu, Oct 27, 2011 at 10:04 AM, Duncan Coutts <duncan...@googlemail.com> wrote:
> So, specifically then, my suggestion is:
>
>  Control.Distributed.Actors
> or
>  Control.Distributed.Processes

Just to emphasize what I said in the previous email: I think this should be

 Control.Distributed.Actor
or
 Control.Distributed.Process

We don't have

Data.Maps
Control.Monads
Foreign.C.Errors
etc

-- Johan

Ryan Newton

Oct 27, 2011, 1:54:48 PM
to parallel...@googlegroups.com
Ok... since this thread is rolling, maybe someone can tell me at the same time what to name a package for thread-safe mutable deques:

   Control.Concurrent.Deque (like Chan)
   Data.Deque
   Data.Deque.LockFree
   Data.Concurrent.Deque

??
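
For concreteness, the kind of interface I have in mind is below (a lock-based sketch with hypothetical names; the real package would also offer lock-free variants):

import Control.Concurrent.MVar
import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (<|), (|>), ViewL(..), viewl)

-- A thread-safe deque: a single MVar guarding a pure Seq.
-- (Illustrative only; a lock-free version would replace the MVar.)
newtype Deque a = Deque (MVar (Seq a))

newDeque :: IO (Deque a)
newDeque = fmap Deque (newMVar Seq.empty)

pushFront :: Deque a -> a -> IO ()
pushFront (Deque ref) x = modifyMVar_ ref (return . (x <|))

pushBack :: Deque a -> a -> IO ()
pushBack (Deque ref) x = modifyMVar_ ref (return . (|> x))

popFront :: Deque a -> IO (Maybe a)
popFront (Deque ref) = modifyMVar ref $ \s ->
  case viewl s of
    EmptyL  -> return (s, Nothing)
    x :< s' -> return (s', Just x)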

Dylan Lukes

Oct 27, 2011, 2:09:21 PM
to parallel...@googlegroups.com
Control.Distributed.Process sounds good to me. The Task layer is separate then.

To Ryan: it depends on whether they're a control structure (like a semaphore, mutex, etc.) or a normally implementable data structure (tree, deque, etc.).

- Dylan

Johan Tibell

Oct 27, 2011, 3:01:30 PM
to parallel...@googlegroups.com
On Thu, Oct 27, 2011 at 10:54 AM, Ryan Newton <rrne...@gmail.com> wrote:
> Ok... since this thread is rolling, maybe someone can tell me at the same
> time what to name a package for thread-safe mutable deques:
>
>    Control.Concurrent.Deque (like Chan)
>    Data.Deque
>    Data.Deque.LockFree
>    Data.Concurrent.Deque
>
> ??

This is the problem with ontologies. :)

Definitely under Data.*, I say. Data.Concurrent.* sounds like a good namespace for mutable, concurrency-safe data structures (immutable data structures are already concurrency-safe).

Putting Concurrent before Deque allows us to group different concurrent data structures.

-- Johan

Ryan Newton

Oct 27, 2011, 3:27:48 PM
to parallel...@googlegroups.com
> This is the problem with ontologies. :)
>
> Definitely under Data.*, I say. Data.Concurrent.* sounds like a good
> namespace for mutable, concurrency-safe data structures (immutable data
> structures are already concurrency-safe).
>
> Putting Concurrent before Deque allows us to group different concurrent
> data structures.

Great!  I like that one too.  I assume one reason things like Chan end up in Control is that once you have blocking or synchronizing data structures, "Control" and "Data" do get mixed up.  Still... I think when you get to things like concurrent deques, stacks, bags, and hash tables, the emphasis is on the "Data", not the "Control".



Bas van Dijk

Oct 27, 2011, 3:59:34 PM
to parallel...@googlegroups.com
On 27 October 2011 02:40, Jeff Epstein <jep...@gmail.com> wrote:
> Cloud Haskell is now available from Hackage!

Exciting! Is there a source repo somewhere? It would be nice if you
could record[1] that in the cabal file.

Regards,

Bas

[1] http://www.haskell.org/cabal/users-guide/#source-repositories

Bas van Dijk

Oct 27, 2011, 5:42:38 PM
to parallel...@googlegroups.com
On 27 October 2011 21:59, Bas van Dijk <v.dij...@gmail.com> wrote:
> Is there a source repo somewhere?

Never mind: I just noticed the GitHub repo
(https://github.com/jepst/CloudHaskell) in the paper.

> It would be nice if you could record that in the cabal file.

You just got a pull request to do just that ;-)

Bas

Jeff Epstein

Oct 30, 2011, 3:19:22 AM
to parallel...@googlegroups.com
This sounds like we're approaching agreement. Have we narrowed the
options to {Data,Control}.Distributed.Process, then?

What do you think of the idea of separating the task layer abstraction
into a separate package?

Jeff

Alexander Kjeldaas

Oct 30, 2011, 9:27:49 AM
to parallel...@googlegroups.com

Isn't a defining characteristic here that the same binary runs on all nodes? That is very un-Erlang-ish.

Alexander

Dylan Lukes

Oct 30, 2011, 11:27:48 AM
to parallel...@googlegroups.com
Jeff,

You're conflating two different discussions. The current consensus, as far as I can see, is actually a top-level Distributed.*. The Data vs. Control question is a separate debate about Ryan's thread-safe mutable deques.

Ryan, please make a new thread. :P

- Dylan

On Oct 26, 2011, at 8:40 PM, Jeff Epstein wrote:

> Cloud Haskell is now available from Hackage!

Jeff Epstein

Oct 31, 2011, 2:28:25 AM
to parallel...@googlegroups.com
On 10/30/11, Alexander Kjeldaas <alexander...@gmail.com> wrote:
> Isn't a defining characteristic here that the same binary runs on all
> nodes? That is very un-erlangish.

You are correct. We welcome proposals that would help mitigate this limitation.

Jeff

Simon Peyton-Jones

Oct 31, 2011, 4:53:57 AM
to parallel...@googlegroups.com
| > Isn't a defining characteristic here that the same binary runs on all
| > nodes? That is very un-erlangish.
|
| You are correct. We welcome proposals that would help mitigate this limitation.

There are two different questions here:

1. Can you modify the code of the program while it is running?
2. Even if the program is stable, can nodes have different binaries?

Erlang answers "yes" to (1) and hence of course "yes" to (2). Of course, modifying the program while it is running can lead to all manner of strange and unpredictable errors.

In a typed language (1) is much, much harder. What does soundness mean? There is research on this question (look at Acute and Mike Hicks's work) but we made no attempt to address it in Cloud Haskell. So for (1) we answer firmly NO. That's a limitation, but it is one that is very hard to lift.


On the other hand (2) is much more nuanced. If you send a function closure over the network, you surely do not want to send the transitive closure of all the code that can be executed starting at that closure. That could be megabytes of code! No, surely you will imagine a global "code base", and pointers into it (call them labels), like "the code for function f". Now when sending a closure, just send the *label* for its code. If a node wants to run that code, it can fetch it from the global codebase; and cache it thereafter.
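
In types, the idea is roughly this (a sketch; the actual representation in the remote package may differ):

import Data.ByteString (ByteString)

-- A closure is a label naming code in the global code base, plus a
-- serialised environment; sending it ships only these two small pieces.
data Closure a = Closure String      -- code label, e.g. "Main.f"
                         ByteString  -- encoded environment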

Running the same binary is just an extreme version in which each node pre-fetches the entire code base. But it's just an implementation strategy. In principle it can all be fetched lazily.


So in Cloud Haskell (2) is very much an implementation tactic. We chose something simple for Day 1, that's all.

Simon

Simon Marlow

Oct 31, 2011, 5:19:15 AM
to parallel...@googlegroups.com

> There are two different questions here:
>
> 1. Can you modify the code of the program while it is running?
> 2. Even if the program is stable, can nodes have different binaries?
>
> [...]
>
> On the other hand (2) is much more nuanced. If you send a function
> closure over the network, you surely do not want to send the transitive
> closure of all the code that can be executed starting at that closure.
> That could be megabytes of code! No, surely you will imagine a global
> "code base", and pointers into it (call them labels), like "the code for
> function f". Now when sending a closure, just send the *label* for its
> code. If a node wants to run that code, it can fetch it from the global
> codebase; and cache it thereafter.

If we could name ABIs globally, then a code label can be an ABI identifier plus a function name. Then, when a node receives a code label that it doesn't already have, it can "cabal install" the package and dynamically link the code.

This relies on having stable ABIs, which is something we've talked about in the past. Stable ABIs are useful for other things too - e.g. upgrading packages without breaking packages that depend on them.
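
That is, a code label would become a pair, something like this (a sketch; all names hypothetical):

-- Hypothetical: a globally resolvable code label under stable ABIs.
data CodeLabel = CodeLabel
  { abiId   :: String  -- stable, globally unique ABI identifier for a package
  , funName :: String  -- fully qualified function name within that ABI
  }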

Cheers,
Simon

Alexander Kjeldaas

Oct 31, 2011, 9:49:29 AM
to parallel...@googlegroups.com

I am not sure I fully get what you are both talking about.

SPJ, in solving 2), if you change any function, then it is possible that the transitive closure of code that depends on the changed function has changed.  The same is true for all functions if the compiler changed, or if libc was upgraded.  Think in terms of the git hash of the code+build environment.  The *meaning* might have changed.

Code size is not really an issue. Collecting all the code that runs a distributed system is a solved problem if one can disregard time.  It only requires linking a huge number of libraries.  Upgrading is the interesting problem.  When upgrading we are replacing code, and introducing new code that never existed in the system to begin with, much like what you categorize as 1).

To deal with the issue of changed "meaning", an engineering solution is to use stable interfaces.  The meaning of a stable interface is by definition the same for a client, even if the implementation of that interface is upgraded.  Thinking in terms of git hash, the stable interface has a fixed hash.

Here's my quick and dirty design for stable identifiers and interfaces for a distributed Haskell, based on thinking like git. Let's define two types of identifiers: Id = CodeId | InterfaceId

- CodeId: Sha-256 hierarchical hash of the code and the Ids of its dependencies (the name would not be hashed).

- InterfaceId: Sha-256 hash of the interface definition; types and names.
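
In Haskell terms, roughly (a sketch; all names hypothetical):

import Data.ByteString (ByteString)

newtype Sha256 = Sha256 ByteString

data Id
  = CodeId Sha256       -- hash of the code plus the Ids of its dependencies
  | InterfaceId Sha256  -- hash of the interface definition: types and names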

Assuming GHC would calculate these Ids and store them somewhere as metadata, we could do the following operations:

- After building a new binary, would the remote calls it is trying to execute be serviceable by running nodes at a given service level (i.e. qps - enough % of available nodes support the remote calls)?  This should and can be answered before bringing the new build up in a production environment. If the new binary is trying to do a remote call to something that is not an InterfaceId, it might or might not work, depending on the difference between the binary and what is available on the network (are the CodeIds the same? They could be, by chance.).  If remote calls are only done to InterfaceIds, then there is a fair chance that it will work out.

- Which InterfaceIds and CodeIds do not exist on the network anymore?  The InterfaceIds can be "garbage-collected" from the code base.  The CodeIds can be garbage-collected from the artifact/binary code repository.

GHC would be the component that produces these Ids, and I do think they solve the problem of attaching a unique Id to a piece of behavior.

Alexander

Duncan Coutts

Nov 1, 2011, 8:41:02 AM
to parallel...@googlegroups.com
On Sun, 2011-10-30 at 15:19 +0800, Jeff Epstein wrote:
> This sounds like we're approaching agreement. Have we narrowed the
> options to {Data,Control}.Distributed.Process, then?

According to Dylan, the Data.* thing was a different discussion.

So as I understand it, the main question was whether we use a top-level
Distributed.* or Control.Distributed.*. I'm not sure if we decided
between .Process and .Actor, but if we decided Process, that's fine.

I've been advocating using module names that match our existing
standard module names for concurrency, since this is "just" distributed
concurrency. The base package has Control.Concurrent.*, and I argue we
should pick Control.Distributed.*

Hence Control.Distributed.Process

> What do you think of the idea of separating the task layer abstraction
> into a separate package?

Yes. These layers should not be tied together in one package. There is
plenty of scope in the design space for alternatives to the Task layer
and these might also be built on top of the Process layer. It also
simplifies the API documentation to keep them separate.
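
For reference, the split in the paper is roughly the following; these signatures are paraphrased from the paper and may not match the released package exactly:

-- Process layer: message passing between distributed processes.
spawn  :: NodeId -> Closure (ProcessM ()) -> ProcessM ProcessId
send   :: Serializable a => ProcessId -> a -> ProcessM ()
expect :: Serializable a => ProcessM a

-- Task layer: fault-tolerant, promise-based dataflow built on top.
newPromise  :: Serializable a => Closure (TaskM a) -> TaskM (Promise a)
readPromise :: Serializable a => Promise a -> TaskM a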

Duncan

Alberto G. Corona

Nov 2, 2011, 3:47:03 PM
to parallel...@googlegroups.com


2011/10/30 Jeff Epstein <jep...@gmail.com>
> What do you think of the idea of separating the task layer abstraction
> into a separate package?

Jeff:

That would be fine, IMHO. The lower layer makes sense by itself. Many people, like me, may be interested in using this layer to implement other abstractions.

Ryan Newton

Nov 3, 2011, 11:38:03 AM
to dun...@well-typed.com, parallel...@googlegroups.com
Are we near consensus then? I vote for Actor over Process, just because Process has another strong meaning (System.Process) and those will pollute Google search results.

I guess the nominees would be this:

Control.Distributed.Actor
Control.Distributed.Process

With the former having my vote. Likewise, Duncan put in another vote for splitting out the task layer into a separate package, which I also support. Hopefully, if no new strong opposition arises, these measures can both go forward...

-Ryan

Johan Tibell

Nov 3, 2011, 12:21:54 PM
to rrne...@gmail.com, dun...@well-typed.com, parallel...@googlegroups.com
On Thu, Nov 3, 2011 at 8:38 AM, Ryan Newton <rrne...@gmail.com> wrote:
> Are we near consensus then? I vote for Actor over Process, just because
> Process has another strong meaning (System.Process) and those will
> pollute Google search results.
>
> I guess the nominees would be this:
>
>   Control.Distributed.Actor

+1

Adam Foltzer

Nov 6, 2011, 12:44:55 PM
to parallel...@googlegroups.com
+1, even though haskell-mpi is in the Control.Parallel hierarchy. I think Distributed is more descriptive for what we're doing here.

Adam

Duncan Coutts

Nov 6, 2011, 2:29:34 PM
to acfo...@gmail.com, parallel...@googlegroups.com
On Sun, 2011-11-06 at 12:44 -0500, Adam Foltzer wrote:
> +1, even though haskell-mpi is in the Control.Parallel hierarchy. I think
> Distributed is more descriptive for what we're doing here.

And it's not as if we can't change the haskell-mpi package.

Duncan

Jeff Epstein

Nov 10, 2011, 11:57:19 PM
to dun...@well-typed.com, acfo...@gmail.com, parallel...@googlegroups.com
Ok, sounds like we've agreed on Control.Distributed.Actor or
Control.Distributed.Process. I lean towards the latter, as the word
"Actor" doesn't actually appear in CH's API.

If there are no further opinions, I'll make this change in the next version.

What about package/product name?

Jeff

Simon Marlow

Nov 11, 2011, 4:40:13 AM
to jep...@gmail.com, dun...@well-typed.com, acfo...@gmail.com, parallel...@googlegroups.com
On 11/11/2011 04:57, Jeff Epstein wrote:
> Ok, sounds like we've agreed on Control.Distributed.Actor or
> Control.Distributed.Process. I lean towards the latter, as the word
> "Actor" doesn't actually appear in CH's API.

FWIW, I'd be happy with just "Control.Distributed". By analogy, we have
"Control.Concurrent", not "Control.Concurrent.Thread".

Cheers,
Simon

Duncan Coutts

Nov 11, 2011, 7:14:11 AM
to Simon Marlow, jep...@gmail.com, acfo...@gmail.com, parallel...@googlegroups.com
On Fri, 2011-11-11 at 09:40 +0000, Simon Marlow wrote:
> On 11/11/2011 04:57, Jeff Epstein wrote:
> > Ok, sounds like we've agreed on Control.Distributed.Actor or
> > Control.Distributed.Process. I lean towards the latter, as the word
> > "Actor" doesn't actually appear in CH's API.
>
> FWIW, I'd be happy with just "Control.Distributed". By analogy, we have
> "Control.Concurrent", not "Control.Concurrent.Thread".

It's true, but in the case of concurrency there are fewer design choices
and Control.Concurrent is what is provided by "Concurrent Haskell",
implemented directly in the RTS and exported by the base package. The
difference here is the higher chance of other implementations /
approaches in the same Control.Distributed.* area.

--
Duncan Coutts, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

Simon Marlow

Nov 11, 2011, 7:44:35 AM
to Duncan Coutts, jep...@gmail.com, acfo...@gmail.com, parallel...@googlegroups.com

Sure, but module names do not have to be globally unique. It's unlikely
that you'd need more than one kind of Control.Distributed in any given
program.

I just have the feeling that "Control.Distributed.Process" is like
giving 3 digits of precision when you only care about 2, and making you
type in the extra stuff every time even though it doesn't change. Maybe
it's just me.

Cheers,
Simon

Mischa Dieterle

Nov 11, 2011, 8:19:33 AM
to parallel...@googlegroups.com
I agree with Duncan. And I think it's advantageous to have distinct
module names, so that one can install packages for different parallel
extensions via cabal and still distinguish them with a simple import in
a module.
Following Duncan's argument, I think Control.Distributed.Actor would be
more distinctive than Control.Distributed.Process. For example, in Eden
we also use distributed processes. MPI also conceptually has processes
which communicate, even though the API doesn't use the keyword
"process".

Cheers,
Mischa
