
[Haskell-cafe] [GSoC] A data parallel physics engine


Roman Cheplyaka

Mar 10, 2008, 10:30:41 AM
to Roman Leshchinskiy, Manuel Chakravarty, haskel...@haskell.org
I'm looking for an interesting project to work on during Google Summer
of Code. So I found the "A data parallel physics engine" ticket [1] and
got excited about it. I'd like to know interested mentors' and the
community's opinion about the complexity of such a project.

I don't have very deep knowledge of either NDP or physics engines. Is
it feasible to learn all the necessary stuff in the two months before
SoC starts?

1. http://hackage.haskell.org/trac/summer-of-code/ticket/1535
--
Roman I. Cheplyaka :: http://ro-che.info/
..being in love is totally punk rock...


Bryan O'Sullivan

Mar 10, 2008, 1:30:05 PM
to Roman Cheplyaka, Manuel Chakravarty, haskel...@haskell.org
Roman Cheplyaka wrote:

> I don't have very deep knowledge of either NDP or physics engines.

I've done some physics simulation work for games. One could certainly
learn enough to be able to write a straightforward implementation in
that time. Broadphase collision detection is easy; narrowphase somewhat
more difficult; integration and interpenetration resolution would
consume the bulk of your time.
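To give a sense of why broadphase is considered the easy part: the usual first cut is an axis-aligned bounding box (AABB) overlap test over all pairs. A minimal Haskell sketch (the types and names here are illustrative, not taken from any particular engine):

```haskell
-- Axis-aligned bounding box in 2D: min and max corners.
data AABB = AABB { lo :: (Double, Double), hi :: (Double, Double) }

-- Two boxes overlap iff their intervals overlap on every axis.
overlaps :: AABB -> AABB -> Bool
overlaps a b =
  fst (lo a) <= fst (hi b) && fst (lo b) <= fst (hi a) &&
  snd (lo a) <= snd (hi b) && snd (lo b) <= snd (hi a)

-- Naive O(n^2) broadphase: all candidate colliding pairs.
candidatePairs :: [AABB] -> [(Int, Int)]
candidatePairs boxes =
  [ (i, j)
  | (i, a) <- zip [0 ..] boxes
  , (j, b) <- zip [0 ..] boxes
  , i < j
  , overlaps a b
  ]

main :: IO ()
main = print (candidatePairs
  [AABB (0, 0) (2, 2), AABB (1, 1) (3, 3), AABB (5, 5) (6, 6)])
```

A real engine replaces the O(n^2) pair loop with sweep-and-prune or a spatial hash, but the per-pair test stays this simple.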

I very strongly doubt that one could both write a physics engine and
adapt it to compute anything interesting using NDP within the SoC time
frame. That's more like a meaty topic for a Master's thesis.

_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Manuel M T Chakravarty

Mar 10, 2008, 7:48:51 PM
to Roman Cheplyaka, haskel...@haskell.org
Roman Cheplyaka:

> I'm looking for an interesting project to work on during Google
> Summer of Code. So I found the "A data parallel physics engine"
> ticket [1] and got excited about it. I'd like to know interested
> mentors' and the community's opinion about the complexity of such a
> project.
>
> I don't have very deep knowledge of either NDP or physics engines. Is
> it feasible to learn all the necessary stuff in the two months before
> SoC starts?
>
> 1. http://hackage.haskell.org/trac/summer-of-code/ticket/1535

Using NDP is actually pretty easy, at least for somebody who is
already used to functional programming. The idea is: for anything you
want to have running in parallel, don't use explicit recursion, but
use list comprehensions (well, array comprehensions, but it's the same
idea) and combinators like mapP, foldP, scanP, etc., which are also
like their list variants. Here, for example, is quicksort:

> qsort :: Ord a => [:a:] -> [:a:]
> qsort [::] = [::]
> qsort xs   =
>   let
>     m  = xs !: 0
>     ss = [:x | x <- xs, x <  m:]
>     ms = [:x | x <- xs, x == m:]
>     gs = [:x | x <- xs, x >  m:]
>     qs = [:qsort ys | ys <- [:ss, gs:]:]
>   in
>     qs !: 0 +:+ ms +:+ qs !: 1

where [:a:] is the type of arrays of values of type a, [::] is the
empty array, [: .. | .. :] is an array comprehension, (!:) is array
indexing, and (+:+) is array concatenation. The only funny thing is
the binding of qs, where we put ss and gs into an array and apply
qsort recursively *inside* an array comprehension. This is so that
the two recursive calls to qsort happen in parallel.
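For readers without a DPH-enabled GHC: dropping the colons gives the ordinary list version, which runs in any Haskell. The only structural difference is that here the two recursive calls are sequential, whereas putting them inside an array comprehension, as above, is what lets DPH run them in parallel:

```haskell
-- List version of the NDP quicksort above; same shape, no parallelism.
qsortL :: Ord a => [a] -> [a]
qsortL [] = []
qsortL xs = qsortL ss ++ ms ++ qsortL gs
  where
    m  = head xs
    ss = [x | x <- xs, x <  m]
    ms = [x | x <- xs, x == m]
    gs = [x | x <- xs, x >  m]

main :: IO ()
main = print (qsortL [3, 1, 4, 1, 5, 9, 2, 6])
```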

Getting good parallel performance is the tricky bit, but in a project
like the physics engine for GSoC, a basic parallelisation strategy is
sufficient, and performance can then be improved incrementally. We
don't expect anybody to implement a fully-fledged, fully parallelised,
high-performance physics engine during GSoC. It is rather about
laying a solid foundation on which we can then build over time. (GSoC
is a lot about getting exciting open source projects off the ground,
not about producing a polished product that doesn't change anymore
after GSoC.)

Besides, there is more to physics engines than just collision
detection. Another aspect is the physical simulation of non-solid
bodies, such as liquids and cloth - regarding the latter, see
<http://graphics.snu.ac.kr/~kjchoi/publication/cloth.pdf> for an
interesting SIGGRAPH paper. Which aspect we'd cover in the GSoC
project would really be a matter of what you are most interested in.

So, overall, I agree that this is an exciting project ;)

Manuel

Sebastian Sylvan

Mar 12, 2008, 7:04:56 AM
to Manuel M T Chakravarty, haskel...@haskell.org

> http://graphics.snu.ac.kr/~kjchoi/publication/cloth.pdf
> for an interesting SIGGRAPH paper. What aspect we'd cover in the
> GSoC project would really be a matter of what you are most interested
> in.


There are a variety of approximate fluid simulation techniques (see
SIGGRAPH and GDC papers, etc.; for example, the heightfield-based
approach simplifies the equations by working in 2D and runs in real
time on most decent hardware) that aren't too complicated to implement
but look really, really cool. So I'd recommend doing something in that
area, because it's a "high coolness factor per unit time spent coding"
kind of thing :-)
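For the curious, the heightfield scheme boils down to a few lines per time step: each column carries a height and a vertical velocity, velocities are accelerated toward the average of the neighbouring heights, and heights are then integrated. A 1D toy version (the names and constants are mine, and a real implementation would use unboxed arrays rather than lists):

```haskell
-- One step of a 1D heightfield ripple simulation.
-- hs = column heights, vs = column velocities, c = wave speed factor.
step :: Double -> ([Double], [Double]) -> ([Double], [Double])
step c (hs, vs) = (hs', vs')
  where
    n       = length hs
    -- A boundary column's missing neighbour is the column itself
    -- (reflecting edge).
    at i    = hs !! max 0 (min (n - 1) i)
    -- Accelerate toward the average of the two neighbouring heights.
    accel i = c * ((at (i - 1) + at (i + 1)) / 2 - at i)
    vs'     = [v + accel i | (i, v) <- zip [0 ..] vs]
    hs'     = zipWith (+) hs vs'

main :: IO ()
main = print (step 0.5 ([0, 1, 0], [0, 0, 0]))
```

A flat surface is a fixed point of step, and disturbing a single column produces spreading ripples over repeated steps.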


--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

Neil Mitchell

Mar 12, 2008, 2:39:19 PM
to Roman Cheplyaka, Manuel Chakravarty, haskel...@haskell.org
Hi

> I'm looking for an interesting project to work on during Google
> Summer of Code. So I found the "A data parallel physics engine"
> ticket [1] and got excited about it. I'd like to know interested
> mentors' and the community's opinion about the complexity of such a
> project.

I don't think there are a great deal of Haskell users who _really_
need a physics engine right now. However, there seem to be a massive
number who are working with matrices. I am informed that a lot of
physics is just matrix stuff underneath (but don't know anything
myself).

Perhaps a nice direction to take this project would be to build an NDP
matrix library first, then use that library to build a physics engine
on top of it. A physics engine would certainly be very cool, and a
parallel matrix library would certainly be very much in demand.

Thanks

Neil

Jed Brown

Mar 12, 2008, 3:18:51 PM
to Neil Mitchell, haskel...@haskell.org
On 12 Mar 2008, ndmit...@gmail.com wrote:

>> I'm looking for an interesting project to work on during Google
>> Summer of Code. So I found the "A data parallel physics engine"
>> ticket [1] and got excited about it. I'd like to know interested
>> mentors' and the community's opinion about the complexity of such a
>> project.
>
> I don't think there are a great deal of Haskell users who _really_
> need a physics engine right now. However, there seem to be a massive
> number who are working with matrices. I am informed that a lot of
> physics is just matrix stuff underneath (but don't know anything
> myself).
>
> Perhaps a nice direction to take this project would be to build an NDP
> matrix library first, then use that library to build a physics engine
> on top of it. A physics engine would certainly be very cool, and a
> parallel matrix library would certainly be very much in demand.

Indeed, a matrix library would be really nice. Before getting serious
about this, please take a very close look at how PETSc
(http://www-unix.mcs.anl.gov/petsc/) handles matrices. The abstraction
is very important because most large matrices of interest are sparse or
have some symmetry that makes them asymptotically cheaper to apply (like
with an FFT, FMM, or tensor product). It would be a shame to put a lot
of work into something and have it miss the very important point that a
matrix is nothing more than a linear transformation between finite
dimensional spaces. Certain algorithms may need a particular
representation, but many don't (a Krylov iteration just needs to apply
the matrix to a vector; the preconditioner usually needs more, but may
not use the same matrix). At the more mundane level, there is
frequently an order of magnitude performance difference between
different sparse matrix formats, but which one wins is problem specific.
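Jed's point translates directly into types: if a matrix is nothing more than a linear transformation, represent it as the function it denotes, and algorithms that only need matrix-vector products never have to know the storage format. A sketch of the idea in Haskell (the names are illustrative, not PETSc's):

```haskell
-- A "matrix" is just a linear map on vectors.
newtype LinOp = LinOp { apply :: [Double] -> [Double] }

-- A dense matrix, row major, as a LinOp.
dense :: [[Double]] -> LinOp
dense rows = LinOp (\v -> [sum (zipWith (*) row v) | row <- rows])

-- A diagonal matrix as a LinOp: O(n) to apply, no n^2 storage.
diagonal :: [Double] -> LinOp
diagonal ds = LinOp (zipWith (*) ds)

-- One step of power iteration: it only ever applies the operator,
-- so it works unchanged for any representation.
powerStep :: LinOp -> [Double] -> [Double]
powerStep a v = map (/ norm) w
  where
    w    = apply a v
    norm = sqrt (sum (map (^ 2) w))

main :: IO ()
main = do
  print (apply (dense [[1, 2], [3, 4]]) [1, 1])
  print (apply (diagonal [2, 3]) [1, 1])
```

A sparse format, an FFT, or a fast multipole evaluation all fit behind the same LinOp wrapper, and a Krylov-style solver written against apply works for each of them.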

Jed

Bas van Dijk

Mar 12, 2008, 4:11:31 PM
to Roman Cheplyaka, Manuel Chakravarty, haskel...@haskell.org
2008/3/10 Roman Cheplyaka <ro...@ro-che.info>:

> I'm looking for an interesting project to work on during Google
> Summer of Code. So I found the "A data parallel physics engine"
> ticket [1] and got excited about it. I'd like to know interested
> mentors' and the community's opinion about the complexity of such a
> project.

A bit offtopic but slightly related:
I just added a GSoC project proposal about adding a nVidia CUDA
backend to Data Parallel Haskell:
http://hackage.haskell.org/trac/summer-of-code/ticket/1537

It would be great if this physics engine or matrix library could run
on a CUDA enabled nVidia "graphics" card!

regards,

Bas

Andrew Coppin

Mar 12, 2008, 4:20:10 PM
to haskel...@haskell.org
Bas van Dijk wrote:
> A bit offtopic but slightly related:
> I just added a GSoC project proposal about adding a nVidia CUDA
> backend to Data Parallel Haskell:
> http://hackage.haskell.org/trac/summer-of-code/ticket/1537
>
> It would be great if this physics engine or matrix library could run
> on a CUDA enabled nVidia "graphics" card!
>

That looks to me like something that would be amazingly cool if it were
done successfully (and a pretty "killer feature" for using Haskell in
general) - but it looks like a HELL of a lot of work to bring it from
being a pipe dream to a practical reality...

OTOH, I don't build compilers for a living, so...?

Dan Piponi

Mar 12, 2008, 5:08:35 PM
to Jed Brown, haskel...@haskell.org
2008/3/12 Jed Brown <j...@59a2.org>:
> It would be a shame to ...miss the very important point that a

> matrix is nothing more than a linear transformation between finite
> dimensional spaces.

I rate this obvious-seeming fact as one of the most important things
I've learnt about numerical linear algebra in my career. The number of
times I've seen (and, oops, written myself) code that copies data out
of some structure into a standard matrix structure, when the original
structure could itself have been seen as a function that transforms
vectors, is scary. It pays to think functionally.
--
Dan

Don Stewart

Mar 12, 2008, 5:26:59 PM
to Neil Mitchell, haskel...@haskell.org
ndmitchell:

> Hi
>
> > I'm looking for an interesting project to work on during Google
> > Summer of Code. So I found the "A data parallel physics engine"
> > ticket [1] and got excited about it. I'd like to know interested
> > mentors' and the community's opinion about the complexity of such a
> > project.
>
> I don't think there are a great deal of Haskell users who _really_
> need a physics engine right now. However, there seem to be a massive
> number who are working with matrices. I am informed that a lot of
> physics is just matrix stuff underneath (but don't know anything
> myself).
>
> Perhaps a nice direction to take this project would be to build an NDP
> matrix library first, then use that library to build a physics engine
> on top of it. A physics engine would certainly be very cool, and a
> parallel matrix library would certainly be very much in demand.

I'd chime in here -- actually getting arrays and parallel arrays with
list-like interfaces, and then moving on to matrices, will impact a lot
of people's work, in a good way.

Don Stewart

Mar 12, 2008, 5:28:31 PM
to Bas van Dijk, Manuel Chakravarty, haskel...@haskell.org
v.dijk.bas:

> 2008/3/10 Roman Cheplyaka <ro...@ro-che.info>:
> > I'm looking for an interesting project to work on during Google
> > Summer of Code. So I found the "A data parallel physics engine"
> > ticket [1] and got excited about it. I'd like to know interested
> > mentors' and the community's opinion about the complexity of such a
> > project.
>
> A bit offtopic but slightly related:
> I just added a GSoC project proposal about adding a nVidia CUDA
> backend to Data Parallel Haskell:
> http://hackage.haskell.org/trac/summer-of-code/ticket/1537
>
> It would be great if this physics engine or matrix library could run
> on a CUDA enabled nVidia "graphics" card!
>

Note there's already a project at UNSW, with a PhD student attached,
doing an nvidia CUDA backend to Data Parallel Haskell.

Perhaps this could be factored in somehow? At least there's a source
of mentors here.

Bas van Dijk

Mar 12, 2008, 5:47:41 PM
to Don Stewart, Manuel Chakravarty, haskel...@haskell.org
On Wed, Mar 12, 2008 at 10:27 PM, Don Stewart <do...@galois.com> wrote:
> Note there's already a project at UNSW, with a PhD student attached,
> doing an nvidia CUDA backend to Data Parallel Haskell.

Great, do you perhaps have a link to a page describing that project?
Then I can link to it from the GSoC ticket.

Bas

Dan Piponi

Mar 12, 2008, 5:54:53 PM
to Andrew Coppin, haskel...@haskell.org
On Wed, Mar 12, 2008 at 2:33 PM, Andrew Coppin
<andrew...@btinternet.com> wrote:

> Hanging around here, you really feel like you're at the cutting edge
> of... something... heh.

Another approach is not to target a CUDA back end for Haskell
directly, but to write an array library that builds computations which
can then target a CUDA (or other) back end. My first real-world
programming job was APL-based [1]. APL (and its offspring) is a
functional-ish programming language that manipulates arrays using a
relatively small number of primitives, most of which would probably
map nicely to CUDA hardware because of their potential for data
parallelism. Despite the write-only nature of APL source code, and
Dijkstra's negative comments about it, the expressivity of APL for
numerical work is unbelievable. I would love to see some of those
ideas brought into Haskell as a library.

[1] http://en.wikipedia.org/wiki/APL_%28programming_language%29
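To make that concrete, a few of APL's primitives (iota, reduce, outer product) translate into one-liners in Haskell, which hints at what the surface of such a library might look like (ordinary lists here, but the same combinators make sense over parallel arrays):

```haskell
-- APL's iota: the first n natural numbers (1-based, as in APL).
iota :: Int -> [Int]
iota n = [1 .. n]

-- APL's reduce, e.g. +/ is reduce (+).
reduce :: (a -> a -> a) -> [a] -> a
reduce = foldr1

-- APL's outer product (jot-dot): apply f to every pair of elements.
outer :: (a -> b -> c) -> [a] -> [b] -> [[c]]
outer f xs ys = [[f x y | y <- ys] | x <- xs]

main :: IO ()
main = do
  print (reduce (+) (iota 10))        -- the sum of 1..10
  print (outer (*) (iota 3) (iota 3)) -- a 3x3 times table
```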
--
Dan

Roman Leshchinskiy

Mar 12, 2008, 7:37:21 PM
to Neil Mitchell, Manuel Chakravarty, haskel...@haskell.org
Neil Mitchell wrote:
> Hi
>
>> I'm looking for an interesting project to work on during Google
>> Summer of Code. So I found the "A data parallel physics engine"
>> ticket [1] and got excited about it. I'd like to know interested
>> mentors' and the community's opinion about the complexity of such a
>> project.
>
> I don't think there are a great deal of Haskell users who _really_
> need a physics engine right now.

Well, we do :-) The idea is to provide a good testbed for NDP with some
real-world appeal and a certain coolness factor.

> Perhaps a nice direction to take this project would be to build an NDP
> matrix library first, then use that library to build a physics engine
> on top of it. A physics engine would certainly be very cool, and a
> parallel matrix library would certainly be very much in demand.

The problem with dense matrices is that they are regular and NDP isn't
too good at handling regular data structures at the moment. Our main
focus is on irregular stuff like sparse matrices, trees and so on.
Still, we'll see what happens.
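Sparse matrix-vector multiplication is in fact the textbook NDP example: a sparse matrix stored as a nested array of (column, value) rows is exactly the kind of irregular nested structure meant here, because each row has a different length. In ordinary list notation (the DPH version swaps the brackets for [: :] and sum for sumP):

```haskell
-- A sparse row is a list of (column index, value) pairs;
-- a sparse matrix is a list of such rows (irregular: row lengths vary).
type SparseRow    = [(Int, Double)]
type SparseMatrix = [SparseRow]

-- Sparse matrix-vector product: dot each row with the dense vector v.
smvm :: SparseMatrix -> [Double] -> [Double]
smvm m v = [sum [x * (v !! i) | (i, x) <- row] | row <- m]

main :: IO ()
main = print (smvm [[(0, 2)], [], [(0, 1), (2, 3)]] [1, 1, 1])
```

The point of NDP is that the vectoriser flattens the nested comprehension, so all rows are processed in one data-parallel pass no matter how unevenly the nonzeros are distributed.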

Roman

Roman Leshchinskiy

Mar 12, 2008, 7:41:13 PM
to Bas van Dijk, Manuel Chakravarty, haskel...@haskell.org
Bas van Dijk wrote:
>
> A bit offtopic but slightly related:
> I just added a GSoC project proposal about adding a nVidia CUDA
> backend to Data Parallel Haskell:
> http://hackage.haskell.org/trac/summer-of-code/ticket/1537
>
> It would be great if this physics engine or matrix library could run
> on a CUDA enabled nVidia "graphics" card!

As Don said, we're working on that. However, this is a lot more work
than just a summer project. The reason is that you can't run arbitrary
NDP computations on the GPU, just a fairly restricted subset. This
means that you need to decide at code-generation time what to put on
the GPU, which in turn means a significant amount of compiler hacking.
It's really more than enough work for a PhD thesis.

Roman

Thomas Schilling

Mar 12, 2008, 8:15:38 PM
to Dan Piponi, Haskell Cafe mailing list
There's an ongoing effort to use techniques from Lava (the
Haskell-based hardware description language) to target GPUs. Joel
Svensson [1] has written his Master's thesis on this and is now
continuing the work in his PhD, so if you ask kindly he might tell you
more about it or send you the thesis.

[1] http://www.chalmers.se/cse/EN/people/svensson-joel

Don Stewart

Mar 12, 2008, 9:29:22 PM
to Manuel M T Chakravarty, haskel...@haskell.org
chak:
> Don Stewart:

> >I'd chime in here -- actually getting arrays and parallel arrays with
> >list-like interfaces, and then onto matrices, will impact a lot of
> >people's work, in a good way.
>
> I am not quite sure what you mean by a list-like interface.
> NDP/DPH-style arrays are exactly like Haskell lists, but restricted
> to finite structures and with a more eager evaluation strategy. The
> syntactic sugar is like that for lists, just with colons thrown in
> (eg, [:1,2,3:] instead of [1,2,3]), and the Prelude functions have
> the same names as the list functions, just with a suffix P (eg, mapP
> instead of map).

Right, I was hinting that I'd like the NDP library released. Even if
not for parallelism -- just having a good array interface would be
enough :)

-- Don

Manuel M T Chakravarty

Mar 12, 2008, 9:31:05 PM
to Jed Brown, haskel...@haskell.org
Jed Brown:

> On 12 Mar 2008, ndmit...@gmail.com wrote:
>> I don't think there are a great deal of Haskell users who _really_
>> need a physics engine right now. However, there seem to be a massive
>> number who are working with matrices. I am informed that a lot of
>> physics is just matrix stuff underneath (but don't know anything
>> myself).
>>
>> Perhaps a nice direction to take this project would be to build an
>> NDP
>> matrix library first, then use that library to build a physics engine
>> on top of it. A physics engine would certainly be very cool, and a
>> parallel matrix library would certainly be very much in demand.
>
> Indeed, a matrix library would be really nice. Before getting serious
> about this, please take a very close look at how PETSc
> (http://www-unix.mcs.anl.gov/petsc/) handles matrices. The
> abstraction
> is very important because most large matrices of interest are sparse
> or
> have some symmetry that makes them asymptotically cheaper to apply
> (like
> with an FFT, FMM, or tensor product).

I agree that a good matrix library would be very useful. However, I
am less sure that starting with a general matrix library is a good way
to make progress towards a physics engine. In particular, a simple
engine uses a small set of (application-dependent) types of matrices,
requiring only a few of the many possible matrix representations. In
contrast, to write a good general-purpose matrix library, you need to
support a whole range of representations and algorithms.

In my experience, it is a lot harder to find somebody who is motivated
to write a general-purpose library than somebody who is motivated to
write an application that you can run and show to people at the end.
Don't get me wrong: if there is anybody who wants to write a matrix
library using NDP, that would be absolutely fabulous, but otherwise
I'd happily settle for a person who implements just enough matrix
operations to get some nice physics going.

Manuel

Manuel M T Chakravarty

Mar 12, 2008, 9:44:26 PM
to Roman Leshchinskiy, haskel...@haskell.org
Roman Leshchinskiy:

> Bas van Dijk wrote:
>> A bit offtopic but slightly related:
>> I just added a GSoC project proposal about adding a nVidia CUDA
>> backend to Data Parallel Haskell:
>> http://hackage.haskell.org/trac/summer-of-code/ticket/1537
>> It would be great if this physics engine or matrix library could run
>> on a CUDA enabled nVidia "graphics" card!
>
> As Don said, we're working on that. However, this is a lot more work
> than just a summer project. The reason is that you can't run
> arbitrary NDP computations on the GPU, just a fairly restricted
> subset. This means that you need to decide what to put on the GPU
> during code generation which, in turn, means a significant amount of
> compiler hacking. It's really more than enough work for a PhD thesis.

To elaborate on that, our current, more moderate goal is to implement
an embedded array DSL in Haskell/GHC, so that all array DSL code is
compiled down to CUDA and linked into the rest of the Haskell
application using the FFI. This requires the design of the array DSL
(partially done), a small addition to GHC (not done), and a compiler
from GHC Core to CUDA (partially done). All this is part of a PhD
project, and we hope to have something to show later this year.

Step 2, then, is to implement a backend to DPH using that array DSL.
Doing that efficiently is going to be another significant project
(much more than can be achieved in a GSoC project).

Manuel

Roman Cheplyaka

Mar 13, 2008, 2:08:34 AM
to haskel...@haskell.org
* Manuel M T Chakravarty <ch...@cse.unsw.edu.au> [2008-03-13 12:30:40+1100]

>> Indeed, a matrix library would be really nice. Before getting serious
>> about this, please take a very close look at how PETSc
>> (http://www-unix.mcs.anl.gov/petsc/) handles matrices. The
>> abstraction
>> is very important because most large matrices of interest are sparse
>> or
>> have some symmetry that makes them asymptotically cheaper to apply
>> (like
>> with an FFT, FMM, or tensor product).
>
> In my experience, it is a lot harder to find somebody who is motivated
> to write a general-purpose library than somebody who is motivated to
> write an application that you can run and show to people at the end.

You're absolutely right, from my (i.e., a student's) point of view :)
