As for whether it will eventually be named CloudHaskell, I have no idea; that's not my project or domain at all.
I'm just building my project on top of it. OTP is the Open Telecom Platform in Erlang: a standardized, thoroughly tested set of common patterns from which you can build just about any application, comprising supervisors, finite state machines, generalized servers, and so on.
- Dylan
The module is so named because the working title for the project was
Remote Haskell, and none of the existing module hierarchies seemed to
add any semantic value, while nevertheless being verbose.
There has been talk of an MPI backend. I'm not sure that it can be
done cleanly; MPI likes to manage its own workers, and as far as I
know there is no standardized way for MPI applications to spawn new
processes, recover cleanly from hardware failure, etc. So it would
probably mean either tailoring the backend to a particular
implementation of MPI, or forgoing some useful CH features, which may
result in a dramatically changed interface. I don't know much about
MPI, though, and would love to hear from someone with more experience.
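To make the trade-off concrete, here is a rough sketch of what a backend record could look like if the fault-tolerance hooks were simply optional, so that an MPI backend could decline to provide them. Every name in it (Backend, NodeId, monitorNode, and so on) is hypothetical and not taken from the remote package; it is only meant to illustrate the design space, not to propose an interface.

module BackendSketch where

-- Placeholder types standing in for whatever the real layer would use.
newtype NodeId    = NodeId String deriving (Eq, Show)
newtype MonitorId = MonitorId Int deriving (Eq, Show)
data NodeEvent    = NodeUp NodeId | NodeDown NodeId deriving Show

data Backend = Backend
  { peers       :: IO [NodeId]                -- every backend can enumerate peers
  , sendBytes   :: NodeId -> String -> IO ()  -- ...and deliver a message to one
    -- Fault tolerance is optional: Nothing means the backend
    -- (e.g. a plain MPI one) cannot report node failure.
  , monitorNode :: Maybe (NodeId -> (NodeEvent -> IO ()) -> IO MonitorId)
  }

-- A caller can degrade gracefully when monitoring is unavailable.
watch :: Backend -> NodeId -> IO ()
watch be nid =
  case monitorNode be of
    Nothing  -> putStrLn "backend offers no failure detection; carrying on without it"
    Just mon -> do
      _ <- mon nid print       -- just log node up/down events
      return ()

main :: IO ()
main = watch mpiLikeBackend (NodeId "node-1")
  where
    -- A backend that provides transport but no failure detection.
    mpiLikeBackend = Backend
      { peers       = return []
      , sendBytes   = \_ _ -> return ()
      , monitorNode = Nothing
      }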
Jeff
Thank you Ryan for setting up this list. It's exciting how much energy there seems to be in the Cloud-Haskell area. I hope we can work together to build something (or a collection of compatible pieces) that will really bring Haskell into an entirely new area.
I think it'd be really helpful to have an overview of the various layers involved here. Perhaps it'd be useful to start a Wiki page on haskell.org to describe them?
For example, you are building a layer on top of ProcessM. An overview of
* What the layers are
* Roughly what API they support
* A pointer to the package or whatnot that implements them
would be fantastically helpful. For example, Jeff has a "Task" layer that he builds on top of ProcessM too, but it's probably different to your OTP layer.
And maybe others have different ideas again. For example, we probably want a communication/deployment layer *below* ProcessM to abstract away from the details of whether the underlying infrastructure is Infiniband or MPI or Portals or what.
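As one illustration of what such a layer might have to provide, here is a rough sketch of a minimal transport abstraction, together with a toy in-memory implementation. All of the names (Transport, EndPoint, EndPointAddress, Connection) are made up for the sketch and are not part of the existing remote package; the point is only that a TCP, Infiniband, MPI or Portals backend would each fill in the same small record.

module TransportSketch where

import Control.Concurrent.Chan (newChan, readChan, writeChan)
import qualified Data.ByteString.Char8 as BS

-- An opaque address whose interpretation is up to the transport
-- (host/port for TCP, a rank for MPI, and so on).
newtype EndPointAddress = EndPointAddress BS.ByteString
  deriving (Eq, Show)

data Transport = Transport
  { newEndPoint :: IO EndPoint              -- open a local endpoint
  }

data EndPoint = EndPoint
  { address :: EndPointAddress              -- where peers can reach us
  , receive :: IO BS.ByteString             -- block until a message arrives
  , connect :: EndPointAddress -> IO Connection
  }

data Connection = Connection
  { send  :: BS.ByteString -> IO ()         -- fire-and-forget send
  , close :: IO ()
  }

-- A toy loopback transport: every endpoint shares one channel, so a
-- message sent on any connection is seen by whoever calls 'receive'.
loopbackTransport :: IO Transport
loopbackTransport = do
  chan <- newChan
  return Transport
    { newEndPoint = return EndPoint
        { address = EndPointAddress (BS.pack "loopback")
        , receive = readChan chan
        , connect = \_ -> return Connection
            { send  = writeChan chan
            , close = return ()
            }
        }
    }

main :: IO ()
main = do
  t    <- loopbackTransport
  ep   <- newEndPoint t
  conn <- connect ep (address ep)   -- "connect" to ourselves
  send conn (BS.pack "hello over the loopback transport")
  receive ep >>= BS.putStrLn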
Simon
Well, I'm essentially just trying to put together a layer that better suits the OTP paradigm than Erlang processes themselves do. There are several problems to address.
OTP is implemented as a set of behaviours (i.e., interfaces for modules, though the requirement is less strict). In Haskell, rather than use modules, we can fill in a record with the callback closures and initialization state instead. In Erlang, the callbacks for the OTP components (supervisor, gen_server, gen_fsm, ...) all take a State parameter as their first argument. In Haskell it seems more natural to lift all the callbacks with StateT instead. However, merely using StateT s ProcessM () is somewhat nasty, so I'm trying to re-export and re-instantiate my own Process with StateT built in (I may add more later).
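To show the shape of the idea, here is a small, self-contained sketch of a gen_server-like behaviour written as a record of callbacks whose state is threaded by StateT. ProcessM below is only a stand-in alias for IO so the sketch compiles on its own; the real layer would of course wrap the remote package's ProcessM, and the names (GenServer, Server, runServer) are mine, not remote's.

module OtpSketch where

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

type ProcessM = IO                        -- stand-in for remote's ProcessM

-- The monad all callbacks run in: callback state layered over ProcessM.
type Server s = StateT s ProcessM

-- A gen_server-like behaviour, filled in as a record of closures.
data GenServer req reply s = GenServer
  { initState  :: ProcessM s              -- roughly Erlang's init/1
  , handleCall :: req -> Server s reply   -- roughly handle_call/3
  , terminate  :: Server s ()             -- roughly terminate/2
  }

-- A toy counter server, to show how the record gets filled in.
counter :: GenServer Int Int Int
counter = GenServer
  { initState  = return 0
  , handleCall = \n -> do
      total <- get
      put (total + n)
      return (total + n)
  , terminate  = do
      final <- get
      lift (putStrLn ("final count: " ++ show final))
  }

-- Drive a server over a fixed list of requests.
runServer :: GenServer req reply s -> [req] -> ProcessM [reply]
runServer gs reqs = do
  s0 <- initState gs
  flip evalStateT s0 $ do
    rs <- mapM (handleCall gs) reqs
    terminate gs
    return rs

main :: IO ()
main = runServer counter [1, 2, 3] >>= print  -- "final count: 6", then [1,3,6]

A real runServer would pull requests from the process mailbox rather than from a list; the list is only there to keep the example runnable on its own.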
I may also try to add some different sorts of Processes, e.g., pure and impure ones, ones with different specializations/features, and so forth. I'm not sure yet whether this will be beneficial, but I think there are some interesting possibilities.
Wrapping remote itself also lets me undo some of its unfortunate naming conventions...
In short, my "OTP" layer is just meant to be slightly more restricted but easier to work with. The OTP philosophy is to have a core set of tools that can be composed to do their job well. My API is currently very unstable and I'm still experimenting; however, once it evens out I'll certainly do that. I'm certainly nowhere near an expert or even relatively knowledgeable in this field; I'm just trying to hit the ground running, learn as much as I can, and hopefully make a valuable contribution.
I was wondering about the backend. If I recall correctly from the whitepaper, the current remote package's closure implementation is something of a stopgap hack, and from the source code it certainly seemed so. It would probably be best to allow for alternative transport layers too...
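For anyone who hasn't read that part of the paper, the rough idea behind the closure representation, as I understand it, is to ship the name of a registered top-level function together with its already-serialised argument, and to look that name up in a table on the receiving side. The sketch below is illustrative only and uses none of remote's actual types; Strings stand in for real serialisation.

module ClosureSketch where

import qualified Data.Map as Map

-- Illustrative only: a "closure" is the name of a registered top-level
-- function plus its already-encoded argument.
data Closure = Closure
  { closureFun :: String   -- key into the node's remote table
  , closureArg :: String   -- serialised argument
  }

-- Each node carries a table from names to decode-and-run actions.
type RemoteTable = Map.Map String (String -> IO ())

-- Receiving side: look the function up and apply it to the payload.
runClosure :: RemoteTable -> Closure -> IO ()
runClosure table (Closure name arg) =
  case Map.lookup name table of
    Just run -> run arg
    Nothing  -> putStrLn ("no registered function named " ++ name)

-- A table with one registered, remotely callable function.
exampleTable :: RemoteTable
exampleTable = Map.fromList [("greet", \who -> putStrLn ("hello, " ++ who))]

main :: IO ()
main = runClosure exampleTable (Closure "greet" "world")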
- Dylan
The root is here: http://haskell.org/haskellwiki/GHC
under “Collaborative documentation”
See for example links to “Data parallel Haskell” or “Concurrency”
Perhaps add a link to GHC/CloudHaskell, and then a sub-page GHC/CloudHaskell/Transport about transport layers?
S
From: Peter Braam [mailto:peter...@parsci.com]
Sent: 03 October 2011 13:54
To: Simon Peyton-Jones
Cc: cloudh...@googlegroups.com; Geoffrey Mainland
Subject: Re: Everest (was CloudHaskell-OTP) Suggestions
Hi -
I would be glad to write a few pages about the network libraries that I'm familiar with or studying. Could someone tell me where to create a page about this? Then I can get going. I'm not sufficiently familiar with the layout of haskell.org to choose a place myself, I think.
Thanks!
Peter
On 11 October 2011 06:49, Ryan Newton <rrne...@gmail.com> wrote:
> Whether to do an MPI backend, at the possible expense of making the
> "monitors" fault-tolerance layer only optionally provided by the API
> (i.e. not required of all backends)
Perhaps I can offer a glimmer of hope in this regard. I am working with Patrick on HdpH, and my PhD is looking at fault-tolerant distributed Haskell. As MPI is the de facto backend for (probably) all HPC systems, it seemed sensible to focus on the feasibility of fault-tolerant MPI implementations.
And yes, such things do exist! I'm aware of two MPI implementations that handle node failure; that is, jobs are not killed when the process on a node included in the mpiexec run terminates. I am using one of these implementations on Heriot-Watt's cluster with distributed Haskell, and preliminary results are promising. They also appear to adhere to the MPI standard, using MPI_RETURN_ALL, and one might speculate that other fault-tolerant MPI implementations will follow.
--
Rob Stewart
Heriot-Watt University
Is your paper (Patrick Maier, Phil Trinder and Hans-Wolfgang Loidl, "Implementing a High-level Distributed-Memory Parallel Haskell in Haskell") online yet? If not, could it be? It would give us a better idea of what you are up to. Thanks!
Could you add the link to http://www.haskell.org/haskellwiki/GHC/CloudAndHPCHaskell
Simon
| -----Original Message-----
| From: cloudh...@googlegroups.com [mailto:cloudh...@googlegroups.com]
| On Behalf Of Rob Stewart
| Sent: 11 October 2011 07:28
| To: cloudh...@googlegroups.com
| Cc: Peter Braam; Geoffrey Mainland
| Subject: Re: Everest (was CloudHaskell-OTP) Suggestions
|
With agreement of my co-authors we've added the draft HdpH paper to
http://www.haskell.org/haskellwiki/GHC/CloudAndHPCHaskell
Also added some brief thoughts in response to Peter's on abstract
language models of large systems, with a reference to another draft
paper.
Cheers,
Phil
> -----Original Message-----
> From: cloudh...@googlegroups.com
> [mailto:cloudh...@googlegroups.com] On Behalf Of Simon Peyton-Jones
> Sent: 12 October 2011 22:26
> To: cloudh...@googlegroups.com
> Subject: RE: Everest (was CloudHaskell-OTP) Suggestions
>
> Great, so does that give you confidence that the CloudHaskell API
HdpH, which looks a lot like Cloud Haskell, is implemented with an MPI backend as we target HPC platforms that mandate MPI.
We’ve even used a fault tolerant MPI, so we can kill nodes.
Best Regards
Phil
From: cloudh...@googlegroups.com [mailto:cloudh...@googlegroups.com] On Behalf Of Ryan Newton
Sent: 11 October 2011 17:48
To: cloudh...@googlegroups.com
Cc: Peter Braam; Geoffrey Mainland
Subject: Re: Everest (was CloudHaskell-OTP) Suggestions