Haskell brings us a very rich toolkit for all kinds of concurrency,
including classic thread-based concurrency, Data Parallel Haskell
(DPH) and, more recently, Cloud Haskell.
However, if we look at this ecosystem from a Big Data
perspective (i.e. distributed parallel computing), the following
components are missing:
* Integration with a distributed file system, such as HDFS (Hadoop
Distributed File System); that would allow us to perform distributed
computations on distributed data.
* A data aggregation framework on top of it (I would not call it a
MapReduce framework, just because in Haskell we'd definitely expect a
richer set of primitives).
The closest examples are Hadoop and DryadLINQ.
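To make "richer set of primitives" concrete, here is a toy, purely local sketch of what such an aggregation API might look like (all names here are illustrative, not an existing library): a dataset is modelled as a list of partitions, and an aggregation supplies both a per-partition fold and a merge for partial results, in the spirit of DryadLINQ's operator set rather than bare map/reduce.

```haskell
import qualified Data.Map.Strict as M

-- A toy stand-in for a distributed dataset: a list of partitions.
-- (Purely local; a real implementation would place partitions on nodes.)
newtype DSet a = DSet { partitions :: [[a]] }

-- Element-wise combinators that never move data between partitions.
dMap :: (a -> b) -> DSet a -> DSet b
dMap f (DSet ps) = DSet (map (map f) ps)

dFilter :: (a -> Bool) -> DSet a -> DSet a
dFilter p (DSet ps) = DSet (map (filter p) ps)

-- A combining reduce: fold each partition locally, then merge the
-- per-partition results. This is the shape of a distributed aggregation:
-- 'step' runs where the data lives, 'merge' runs at the coordinator.
dAggregate :: (b -> a -> b) -> (b -> b -> b) -> b -> DSet a -> b
dAggregate step merge z (DSet ps) =
  foldl merge z (map (foldl step z) ps)

-- Word count, the canonical example, expressed as one aggregation.
wordCount :: DSet String -> M.Map String Int
wordCount = dAggregate step (M.unionWith (+)) M.empty
  where step m w = M.insertWith (+) w 1 m
```

For example, `wordCount (DSet [["a","b","a"],["b","c"]])` yields `fromList [("a",2),("b",2),("c",1)]`; the two partitions are counted independently and the partial maps are merged with `unionWith (+)`.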
I was thinking about writing a Google Summer of Code project proposal; is
there anybody in this group potentially interested in mentoring?
Currently I think of the project scope as follows:
1. Build Haskell APIs for HDFS. This project can be used for
inspiration: https://github.com/kim/hdfs-haskell. Basically it's a
binding to the native libhdfs library.
2. Use Cloud Haskell primitives to build an execution plan for
distributed data aggregation. This requires some research; for example,
DPH could be used to parallelize local computations on a single node.
3. Build a high-level API (such as map/reduce) that is automatically
load-balanced and distributed across the cluster.
4. Performance benchmarks. Comparison with Hadoop/Dryad.
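The master/worker shape behind step 3 could first be prototyped in a single process. The sketch below is hypothetical and uses plain Control.Concurrent rather than Cloud Haskell: a master pushes data chunks into a job channel, worker threads map and locally reduce each chunk, and the master merges the partial results. With Cloud Haskell, the workers would become remote processes and the Chans would be replaced by send/expect between nodes.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan
import Control.Monad (forM_, replicateM)

-- Single-process sketch of a load-balanced map/reduce.
-- 'Nothing' on the job channel is a poison pill telling a worker to stop.
mapReduceLocal :: Int              -- number of workers
               -> (a -> b)         -- map function
               -> (b -> b -> b)    -- associative merge
               -> b                -- identity for merge
               -> [[a]]            -- input, pre-split into chunks
               -> IO b
mapReduceLocal nWorkers f merge z chunks = do
  jobs    <- newChan
  results <- newChan
  -- Workers pull chunks until they see a poison pill; pulling from a
  -- shared channel gives the load balancing for free.
  forM_ [1 .. nWorkers] $ \_ -> forkIO $
    let loop = do
          mjob <- readChan jobs
          case mjob of
            Nothing    -> return ()
            Just chunk -> do
              writeChan results (foldl merge z (map f chunk))
              loop
    in loop
  mapM_ (writeChan jobs . Just) chunks
  forM_ [1 .. nWorkers] $ \_ -> writeChan jobs Nothing
  -- Collect one partial result per chunk and merge them.
  partials <- replicateM (length chunks) (readChan results)
  return (foldl merge z partials)
```

For example, `mapReduceLocal 4 (*2) (+) 0 [[1,2],[3,4],[5]]` doubles every element and sums the results, giving 30 regardless of which worker handles which chunk (the merge is associative and commutative here, so arrival order does not matter).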