Could Clojure be used to program on top of GPUs?


Victor S

Jul 18, 2010, 10:30:09 PM
to Clojure
Hi,

Just a curiosity question, I'm not sure how far off the JVM is from
running on your GPU, but I imagine that with the hundreds and even
thousands of cores, a GPU might, theoretically show us just what
Clojure can do...

Nicolas Oury

Jul 19, 2010, 8:28:53 AM
to clo...@googlegroups.com
This is a moving area (GPUs and CPUs seem to be slowly converging), but most GPUs are
good at single instruction, multiple data workloads.

(Disclaimer: I have mostly looked at NVidia GPUs from ~6 months ago)
Most architectures are made to execute the same instruction many times over different blocks of data.
You can have if...then...else... in your GPU programs, but every processor will execute both branches (with a tag saying whether
or not it is *really* executing each one).

This makes GPUs bad at running typical JVM programs, or even at exploiting Clojure's ease of concurrency.
(Again, this might change. Intel wants to put x86 in their GPU, nvidia wants to make CPUs...)

It is perfectly possible to use a native interface and OpenCL/CUDA/Stream to program your GPU, though.

A small Clojure library does that, if I remember well, but I don't recall the name.

But you will have to be explicit about how you want your program to use the GPU....

No free lunch, at least for now.


ajuc

Jul 19, 2010, 9:03:32 AM
to Clojure
Look at Penumbra:
http://github.com/ztellman/penumbra

Greetings.

Timothy Baldridge

Jul 19, 2010, 9:31:27 AM
to clo...@googlegroups.com
> Most architectures are made to execute the same instruction many times
> over different blocks of data.
> You can have if...then...else... in your GPU programs, but every processor
> will execute both branches (with a tag saying whether
> or not it is *really* executing each one).

This is kind of true... although it's not as bad as it sounds. IIRC,
modern NVidia GPUs execute threads in lockstep groups of 32; this is
called the "warp size". Modern GPUs can execute hundreds of threads
at one time, so it is possible to have some fairly branch-heavy code.
It may run slower than branch-free code, but can often still
outperform CPUs.
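A rough cost model makes the "not as bad as it sounds" point concrete (a Python sketch with assumed, not measured, costs): divergence is only paid *within* a warp, so code where whole warps agree on a branch loses nothing.

```python
# Rough cost model for warp divergence: a warp must execute a branch's
# instructions whenever ANY of its threads takes that branch.

WARP_SIZE = 32  # warp width used for this model

def warp_cost(thread_branches, then_cost, else_cost):
    """thread_branches[i] is True if thread i takes the 'then' branch."""
    total = 0
    for i in range(0, len(thread_branches), WARP_SIZE):
        warp = thread_branches[i:i + WARP_SIZE]
        if any(warp):          # some thread takes the 'then' branch
            total += then_cost
        if not all(warp):      # some thread takes the 'else' branch
            total += else_cost
    return total

# 64 threads where branches alternate per thread: both warps diverge,
# so both warps pay for both branches.
divergent = [t % 2 == 0 for t in range(64)]
# 64 threads where each whole warp agrees: each warp pays one branch.
coherent = [t < 32 for t in range(64)]
print(warp_cost(divergent, 10, 10))  # -> 40
print(warp_cost(coherent, 10, 10))   # -> 20
```

So branch-heavy code is fine as long as nearby threads tend to branch the same way.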


I would look at something like scriptjure:
http://github.com/arohner/scriptjure combined with Penumbra. This way
you could code in Clojure, have the code translated to CUDA or OpenCL,
execute it on the GPU, and have the results returned as a
sequence.
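The core of such a translation step is just walking s-expressions and emitting C-like source. This is a hypothetical sketch of the idea (in Python, modeling s-expressions as nested lists), not scriptjure's actual implementation or API:

```python
# Sketch of an s-expression -> C-like-source translator, the kind of
# step a Clojure-to-CUDA/OpenCL pipeline would perform. Hypothetical:
# not scriptjure's real code.

def emit(expr):
    if not isinstance(expr, (list, tuple)):
        return str(expr)          # atoms: numbers, variable names
    op, *args = expr
    if op in ("+", "-", "*", "/"):
        # prefix (+ a b c) becomes infix (a + b + c)
        return "(" + f" {op} ".join(emit(a) for a in args) + ")"
    # anything else becomes a plain function call: (sqrt x) -> sqrt(x)
    return op + "(" + ", ".join(emit(a) for a in args) + ")"

# (* 2 (+ x y))  translates to  (2 * (x + y))
print(emit(["*", 2, ["+", "x", "y"]]))  # -> (2 * (x + y))
print(emit(["sqrt", "x"]))              # -> sqrt(x)
```

The emitted string could then be handed to an OpenCL or CUDA compiler at runtime, which is what makes the "write Clojure, run on the GPU" pipeline plausible.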

Timothy

Zach Tellman

Jul 19, 2010, 4:08:58 PM
to Clojure
Hi Victor,

I've written Penumbra (http://github.com/ztellman/penumbra), which is
a wrapper for OpenGL that allows for some limited general purpose
computation. I've also written Calx (http://github.com/ztellman/calx)
which is a wrapper for OpenCL, and it's designed for general purpose
computation on the GPU.

The thing is, though, that when people say "general purpose GPU", they
don't mean "my webserver runs ten times faster now!". GPUs use the
space usually reserved for L1-3 caches for more computing power. Each
compute kernel has a very limited amount of memory that's fast to
access (generally as fast as registers), but otherwise you have to
reach out to main memory, which is hundreds of times slower.
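Some back-of-envelope arithmetic shows why that gap dominates kernel design (a Python sketch; the latency figures are assumed for illustration, not measurements):

```python
# Illustrative memory-latency model: fast per-kernel memory behaves
# like registers, while a trip to main (global) GPU memory costs
# hundreds of cycles. Numbers are assumed, not measured.

LOCAL_LATENCY = 1     # cycles for register-speed scratch memory
GLOBAL_LATENCY = 400  # cycles for main GPU memory

def kernel_cycles(accesses, local_fraction):
    """Total memory cycles for a kernel that services `local_fraction`
    of its accesses from fast local memory."""
    local = accesses * local_fraction
    return local * LOCAL_LATENCY + (accesses - local) * GLOBAL_LATENCY

# Keeping 90% of accesses in fast memory vs. only 50%:
print(kernel_cycles(1000, 0.9))  # -> 40900.0
print(kernel_cycles(1000, 0.5))  # -> 200500.0
```

Even at 90% fast-memory hits, the remaining 10% of global accesses account for nearly all the time, which is why "general purpose" GPU code has to be restructured around the memory hierarchy rather than ported as-is.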

So basically, if you want to write GPU targeted stuff, use Calx. It's
a pretty nice abstraction over the OpenCL API; certainly it's better
than writing it in C. It's also not much slower than writing it in C,
because all you're doing is enqueueing computations, writes, and
reads. Penumbra has a Clojure DSL that can run on the GPU, but the
further away you get from graphics operations, the leakier the
abstraction gets. The day when you can write plain Clojure and run
it on the GPU is a long way off.

Zach
