run clojure on 5,832 cores?


Raoul Duke

Feb 13, 2009, 5:47:38 PM
to clo...@googlegroups.com

http://sicortex.com/products

Christian Vest Hansen

Feb 13, 2009, 6:13:33 PM
to clo...@googlegroups.com
I see no mention of a JVM being available for those CPUs, but perhaps
the no-asm HotSpot can be built with gcc on it.

Otherwise, cool gear :)

On Fri, Feb 13, 2009 at 11:47 PM, Raoul Duke <rao...@gmail.com> wrote:
>
> http://sicortex.com/products
>



--
Venlig hilsen / Kind regards,
Christian Vest Hansen.

Stuart Sierra

Feb 13, 2009, 7:31:51 PM
to Clojure
On Feb 13, 6:13 pm, Christian Vest Hansen <karmazi...@gmail.com>
wrote:
> I see no mention of a JVM being available for those CPUs, but perhaps
> the no-asm HotSpot can be built with gcc on it.

Looks like they run Linux, so it would probably be possible. This
article <http://www.networkworld.com/news/2009/020509-sicortex.html>
says they use slower, cheaper processors that work best when you're
doing lots of small computations in parallel.
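
To make that concrete, here is a minimal Clojure sketch of the kind of
workload such a machine favors: many small, independent computations
spread across cores. The `score` function and the input sizes are made
up for illustration; `pmap` simply parallelizes over whatever cores the
JVM can see.

(defn score [n]
  ;; a small, self-contained computation: sum of squares up to n
  (reduce + (map #(* % %) (range n))))

;; sequential vs. parallel over many small inputs
(time (doall (map score (repeat 5832 1000))))
(time (doall (pmap score (repeat 5832 1000))))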

The part I get excited about is the 8 TB of memory. When can I get
THAT on my desk?

-Stuart Sierra

Mark H.

Feb 13, 2009, 9:54:29 PM
to Clojure
SiCortex had a nice booth at Supercomputing '08. They have desktop
versions of their machines too.

I've heard that the SiCortex machines have a fabulous communication
network, but they expect you to use it via their MPI stack. I don't
think they offer a shared memory abstraction that the JVM could
exploit over all the cores in the machine; maybe it would work on a
single (6-way parallel) node.
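
As a rough sketch of what that per-node limit would mean in practice,
here is some hypothetical Clojure that pins its parallelism to a fixed
pool of 6 threads, one per core on a node; `node-task` is an invented
stand-in for real work.

(import '(java.util.concurrent Executors Callable))

(defn node-task [i] (* i i))                 ; stand-in for real per-node work

(def pool (Executors/newFixedThreadPool 6))  ; one thread per local core

(let [futures (doall (for [i (range 100)]
                       (.submit pool ^Callable (fn [] (node-task i)))))]
  (println (reduce + (map #(.get %) futures))))

(.shutdown pool)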

The lack of a cross-node shared memory abstraction would be the
biggest problem, along with the possible lack of a modern JVM for the
MIPS processors. I've heard that Kaffe runs on MIPS, but Kaffe doesn't
support Java >= 1.5, so you won't be able to run Clojure with it.

mfh

Christian Vest Hansen

Feb 14, 2009, 1:00:51 PM
to clo...@googlegroups.com
You run into problems with the garbage collector when the heap gets
big: the bigger the heap, the longer it takes to compact. Azul has
hardware support for their garbage collector that lets the compaction
phase run concurrently with the application; otherwise there would be
no way they could make use of the 768 GB of RAM their kit can scale
to, short of striping it across hundreds of JVMs. If you try to scale
a normal collector to those heap sizes, you will see your
stop-the-world collections jump from sub-second to minutes or even
hours.
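
A hedged sketch, in case anyone wants to watch this happen on a stock
JVM: the standard GC MXBeans report cumulative collection counts and
time, so you can see the pause cost grow with the heap (the allocation
loop below is just made-up churn).

(import 'java.lang.management.ManagementFactory)

(defn gc-stats []
  (for [bean (ManagementFactory/getGarbageCollectorMXBeans)]
    {:name  (.getName bean)
     :count (.getCollectionCount bean)
     :ms    (.getCollectionTime bean)}))

(let [before (doall (gc-stats))]
  (dotimes [_ 50]                                   ; churn through ~5 GB of garbage
    (doall (repeatedly 100000 #(byte-array 1024))))
  (println {:before before :after (gc-stats)}))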


Mark H.

Feb 14, 2009, 7:29:42 PM
to Clojure
On Feb 14, 10:00 am, Christian Vest Hansen <karmazi...@gmail.com>
wrote:
> You run into problems with the garbage collector when the heap gets
> big: the bigger the heap, the longer it takes to compact. Azul has
> hardware support for their garbage collector that lets the compaction
> phase run concurrently with the application; otherwise there would be
> no way they could make use of the 768 GB of RAM their kit can scale
> to, short of striping it across hundreds of JVMs. If you try to scale
> a normal collector to those heap sizes, you will see your
> stop-the-world collections jump from sub-second to minutes or even
> hours.

*nods* It's not even clear that one would want to use memory as a
single shared blob for that many processors and that much memory. I
would prefer a partitioned global address space like that used by
Titanium (http://titanium.cs.berkeley.edu/), UPC (http://upc.lbl.gov/),
or Global Arrays (http://www.emsl.pnl.gov/docs/global/). If you can
distinguish between "local" and "far-away" chunks of memory, it's
easier to do garbage collection more efficiently.
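
In case the distinction is unfamiliar, here is a toy Clojure sketch of
the local/far-away idea (all names invented): every index has an
owning "node," local reads are plain lookups, and remote reads go
through an explicit step that stands in for a network hop.

(def node-count 6)

(defn owner [idx] (mod idx node-count))   ; which node owns an index

(def partitions                           ; per-node local storage
  (vec (repeatedly node-count #(atom {}))))

(defn global-put! [idx v]
  (swap! (partitions (owner idx)) assoc idx v))

(defn global-get [my-node idx]
  (let [o (owner idx)]
    (if (= o my-node)
      (get @(partitions o) idx)                   ; cheap local access
      (do (println "remote fetch from node" o)    ; stand-in for a network hop
          (get @(partitions o) idx)))))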

mfh