Fan Programming Language


Brian Frank

Apr 17, 2008, 9:20:29 AM
to JVM Languages
We finally got our website http://fandev.org/ up and running for Fan -
a new JVM language. It isn't a dynamic language per se, but has some
interesting design decisions:

We are targeting both the JVM and CLR, so we actually deploy code in
an intermediate bytecode, which can be translated into Java bytecode
or IL at load time. The two VMs are definitely very close, but there
are a couple of big differences: the CLR doesn't provide any
stack-manipulation opcodes like swap or dup_x1, and it has a more
complicated and restricted set of rules for generating
try/catch/finally blocks.

Fan is statically typed, but provides two "call" operators. If you
use "." then the call is a normal method dispatch just like Java. But
you can also use "->" to skip compiler type checking and do a dynamic
dispatch. Dynamic dispatches are just syntax sugar for calling
trap(Str name, Obj[] args), which by default uses reflection. But you
can override that method to trap any call.
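A rough Java sketch of how such a trap-based dynamic dispatch could work (a hypothetical illustration only - the DynObj, Greeter, and Echo names are invented here, and Fan's real runtime differs):

```java
import java.lang.reflect.Method;

// Hypothetical sketch: "obj->foo(x)" could lower to obj.trap("foo", new Object[]{x}).
class DynObj {
    // Default trap: dispatch by reflection.
    public Object trap(String name, Object[] args) {
        try {
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) types[i] = args[i].getClass();
            Method m = getClass().getMethod(name, types);
            return m.invoke(this, args);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

class Greeter extends DynObj {
    public String greet(String who) { return "hello " + who; }
}

class Echo extends DynObj {
    // Overriding trap intercepts every dynamic call, whatever the name.
    @Override
    public Object trap(String name, Object[] args) {
        return name + "(" + args.length + " args)";
    }
}
```

With this sketch, a dynamic call on a Greeter routes through reflection, while Echo answers any method name at all.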

We ditched primitives - our only real big trade-off in terms of
performance. But oh it is so nice and clean.

We ditched user-defined generics. We do use a special generic syntax
for a couple of special cases: List, Map, and Func. For example, Str[]
in Fan really means List<Str>.

We use mixins instead of interfaces. Under the covers they get
compiled to Java bytecode as an interface, a bunch of static methods,
and routing methods for implementing classes.
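That compilation scheme can be approximated in Java (a hypothetical sketch: the HasArea/Rect names are invented, and a Java 8 default method stands in for the generated routing method):

```java
// Hypothetical sketch of compiling a mixin: an interface, a static method
// holding the mixin body, and a routing method for implementing classes.
interface HasArea {
    double width();
    double height();

    // mixin method body, compiled to a static method
    static double areaImpl(HasArea self) {
        return self.width() * self.height();
    }

    // routing method that implementing classes pick up
    default double area() {
        return areaImpl(this);
    }
}

class Rect implements HasArea {
    public double width()  { return 2.0; }
    public double height() { return 3.0; }
}
```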

Fields have built-in accessor methods, which are auto-generated or
which you can define yourself.

Constructors are named methods.

Functions are built-in via the system class Func. You can use
reflection to get any Method as a Func. Closures are built in too - a
closure is basically an expression that evaluates to a Func. We
defined closures before the buzz about adding them to Java - we
actually chose a syntax based on Ruby.
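In Java terms, reflecting a method into a first-class function object looks something like this (a hypothetical sketch; the Funcs.toFunc helper is invented here, not Fan's API):

```java
import java.lang.reflect.Method;
import java.util.function.Function;

// Hypothetical sketch: wrap a reflective Method as a function object,
// loosely analogous to getting a Method as a Func in Fan.
class Funcs {
    static Function<Object, Object> toFunc(Object target, String name, Class<?> argType) {
        try {
            Method m = target.getClass().getMethod(name, argType);
            return arg -> {
                try {
                    return m.invoke(target, arg);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            };
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

For example, reflecting String.concat produces a function object you can pass around and apply like any other.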

You can define const classes which are guaranteed immutable by the
compiler.

There is no shared state between threads. You can define static
fields, but they must be immutable. Threads have built-in message
passing. Or you can share data between threads using a REST-like API
to map Uris to Objs.
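The no-shared-state, message-passing style can be sketched in Java with a blocking queue (a hypothetical illustration of the idea; the Mailbox class is invented, and Fan's actual Thread API differs):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a worker thread that shares no mutable state and
// communicates only through immutable messages on queues.
class Mailbox {
    private final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(16);
    private final BlockingQueue<String> replies = new ArrayBlockingQueue<>(16);

    Mailbox() {
        Thread worker = new Thread(() -> {
            try {
                String msg;
                while (!(msg = inbox.take()).equals("stop")) {
                    replies.put("echo:" + msg);   // reply; never touch shared fields
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    String sendReceive(String msg) {
        try {
            inbox.put(msg);
            return replies.take();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```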

The serialization format is designed to be human readable and
writable. In fact it is a true subset of the language itself, so you
can use serialized objects as expressions.

The runtime maintains a "type database" which indexes annotations (Fan
calls them facets). This lets you query installed types based on
different kinds of meta-data.

Hope some of you find it interesting!

Patrick Wright

Apr 18, 2008, 2:00:26 AM
to jvm-la...@googlegroups.com
Hi Brian

Toys! New Toys!

On a quick look--very nice feature set and presentation
(website/docs). Two questions--

- who are "you", e.g. the team that developed this? what is the
history of it? it seems pretty far along already
- under what license is this distributed?


Thanks
Patrick

Brian Frank

Apr 18, 2008, 9:32:06 AM
to JVM Languages
Fan is licensed under the AFL 3.0: http://www.fandev.org/doc/docIntro/Faq.html#license

"We" are primarily myself and my brother Andy. There is also a third
developer, John, who is working on sql and haven (the ORM engine).
All three of us have day jobs at Tridium, which markets a Java
framework for embedded devices such as industrial controllers, medical
equipment, etc. That market is known as M2M (Machine-to-Machine)
because it is kind of the next chapter in the Internet - the number of
autonomous devices hooked up will eventually dwarf the number of human
users.

We've sort of been keeping Fan under wraps until we had some real
stuff working and lots of documentation. It is by no means complete,
but the compiler is written in Fan, the website is powered by a web
server written in Fan, and the discussion group is a Fan application -
so it is definitely coming along.

Brian

David Pollak

Apr 18, 2008, 6:18:30 PM
to jvm-la...@googlegroups.com
Brian,

Fan looks interesting, but its key benefits look a lot like Scala's.  Can you tell me the differences between Fan and Scala?

Thanks,

David
--
Scala lift off unconference, May 10th 2008, San Francisco http://scalaliftoff.com
lift, the secure, simple, powerful web framework http://liftweb.net
Collaborative Task Management http://much4.us

Brian Frank

Apr 19, 2008, 4:30:18 PM
to JVM Languages
David,

I'm not much of a Scala expert - I've read thru the docs and played
around with it, but haven't written any real projects with it. So I'm
not too qualified to compare Fan to Scala, but I'll give it a whack:

A primary goal of Fan is to enable writing software which is portable
between the JVM and CLR. To give you an example, we bit off writing
our own DateTime and TimeZone handling to ensure exact portability
between the two platforms (because .NET doesn't use Olson timezones).
Another example: we wrote our own build system, so that build
scripts aren't dependent on a Java tool like Ant. I know Scala runs
on .NET - but is that one of the goals of Scala? I'd be really
interested in hearing more about how Scala is tackling some of these
problems.

Fan is really about the libraries, not the syntax - the syntax only
exists to aid the libraries. We tried to basically stick with Java/C#
syntax and just fix a few warts. So I would say Fan is a much closer
descendant of Java/C# than Scala, which has a syntax somewhere between
Java and Haskell.

One of Scala's most interesting features is its type system, whereas
Fan's type system is pretty boring. Scala is trying to provide an
elegant solution to static typing. Fan's approach is to just drop
down to dynamic typing whenever the type system gets in the way.
So I think Fan sits on the static/dynamic spectrum somewhere between
Java and Ruby/Python.

I think any new language on the JVM will probably be multi-paradigm -
both OO and functional. In fact even Java and C# seem to be moving
that way. So I don't think that will end up being a distinguishing
feature. Rather key differences will be in features like the type
system and libraries. I'm a framework guy, so over the next few years
you'll see most of my effort in the libraries versus the language
syntax (which we hope to keep relatively simple and stable). For
example, my experience has been that maintaining an indexed database
of installed types affects how you design libraries (such as a webapp
framework) even more than language syntax does. So I think the
divergence really happens in the upper layers of the stack.

What is awesome is the ability to build languages like Fan and Scala
to leverage the JVM and HotSpot. One thing I really wish Java would
solve is the keyword problem - for example if a Fan class declared a
method called "import", then it wouldn't be usable in Java. C# solves
this with the @ symbol. But other than that minor annoyance, I personally
think the JVM is a much better platform for alternate languages
than .NET. Some of the new stuff should be really exciting.

Brian

John Rose

Apr 19, 2008, 5:38:48 PM
to jvm-la...@googlegroups.com
On Apr 19, 2008, at 1:30 PM, Brian Frank wrote:

> One thing I really wish Java would solve is the keyword problem - for
> example if a Fan class declared a method called "import", then it
> wouldn't be usable in Java.  C# solves with the @ symbol.


There are a few parts to that:

0. The community has to decide to support some specific convention of symbolic freedom that VMs and languages will support, regardless of language-specific and VM-specific restrictions.  I think the default answer has to be Lisp's, which is any string can be a name, with social pressure against abuses.

1. The JVM has to provide a way around its own mild but peculiar restrictions against characters like slash and semicolon.  (It is unreasonable to deny Scheme a symbol named '/' just because the JVM has another internal use for that character.)

Here is the best way I know of to relax the JVM restrictions; it works today: http://blogs.sun.com/jrose/entry/symbolic_freedom_in_the_vm

2,3,4,... Each language has to admit the existence of spellings from other languages by supporting an escape syntax for exotic names.  For example, Groovy supports foo.'bar!', where the thing after the dot is lexically a string but syntactically a name.  It's a one-line hack in the parser.  I think Java should do something similar with single quotes and/or backslashes and be done with it.  Java has a special reason to do this, because it will be the "systems programming language" on the JVM for the foreseeable future.

-- John

Rodrigo B. de Oliveira

Apr 19, 2008, 6:16:40 PM
to jvm-la...@googlegroups.com
On Sat, Apr 19, 2008 at 5:30 PM, Brian Frank <brian...@gmail.com> wrote:
> ... I personally

> think the JVM is a much better platform for alternate languages
> than .NET.

Why?

Brian Frank

Apr 19, 2008, 7:33:35 PM
to JVM Languages
> > think the JVM is a much better platform for alternate languages
> > than .NET.
>
> Why?

I personally didn't write any of our C# code, but a couple of our
issues with .NET:

- my biggest issue with .NET is that to get line numbers in stack
traces you have to generate a pdb file - this is ugly from a
deployment perspective if you want line numbers in production code.
That format is Microsoft-specific (I think Mono uses a different
format). Worst of all, you actually have to generate the pdb file on
disk, which just kills performance. It is a lot of work compared to
the elegance of adding a line table attribute in Java bytecode.
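For contrast, the JVM's line numbers come straight out of the class file's LineNumberTable attribute, so any stack trace carries them with no separate debug file (a tiny sketch; the Lines helper is invented for illustration):

```java
// Line numbers in JVM stack traces come from the LineNumberTable
// attribute that javac emits into the class file by default.
class Lines {
    static int hereLine() {
        return new Throwable().getStackTrace()[0].getLineNumber();
    }
}
```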

- jars are simple pkzip files and easy to generate; I can't say the
same about generating a dll from scratch
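As a sketch of that point - a jar really is just a zip you can build in memory with nothing but the standard library (JarDemo is an invented name for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// A jar is a pkzip file: build one in memory, then read it back as a zip.
class JarDemo {
    static byte[] makeJar() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (JarOutputStream jar = new JarOutputStream(buf)) {
                jar.putNextEntry(new JarEntry("hello.txt"));
                jar.write("hi".getBytes());
                jar.closeEntry();
            }
            return buf.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static String firstEntryName(byte[] jarBytes) {
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(jarBytes))) {
            ZipEntry entry = zip.getNextEntry();
            return entry.getName();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```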

- the JVM provides more flexibility to dynamically load types because
there is an extremely clear specification to describe the binary
classfile. Writing a simple bytecode assembler is a day of work - you
can generate classes easily in memory and load them as needed at
class granularity. .NET imposes more restrictions on how bits and
pieces of an assembly are dynamically loaded. For now we just
generate the whole assembly at once to avoid the headaches, but that
is a real performance issue we still need to address.
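The class-granularity loading point can be sketched with the JDK's own tools (a hypothetical illustration; ByteLoader and Gen are invented names, and a real compiler backend would assemble the class bytes itself instead of invoking javac):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.ToolProvider;

// Sketch: produce class-file bytes at runtime and define them one class
// at a time in a fresh loader - the kind of loading the JVM makes easy.
class ByteLoader extends ClassLoader {
    ByteLoader() {
        super(null); // no parent, so nothing is delegated to the app loader
    }

    Class<?> defineFrom(String name, byte[] bytes) {
        return defineClass(name, bytes, 0, bytes.length);
    }

    static int answerFromFreshClass() {
        try {
            Path dir = Files.createTempDirectory("gen");
            Path src = dir.resolve("Gen.java");
            Files.write(src, "public class Gen { public static int answer() { return 42; } }".getBytes());
            ToolProvider.getSystemJavaCompiler()
                        .run(null, null, null, "-d", dir.toString(), src.toString());
            byte[] bytes = Files.readAllBytes(dir.resolve("Gen.class"));
            Class<?> gen = new ByteLoader().defineFrom("Gen", bytes);
            return (Integer) gen.getMethod("answer").invoke(null);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```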

- as a compiler writer, I find the lack of stack-manipulation
bytecodes in a stack-based VM frustrating, because you have to resort
to synthetic local variables

- personally I find Microsoft's online documentation exceedingly
difficult to work with (but that is just a matter of taste)

I just really like Java's simple and extremely well specified file
formats. But in the end they are both really great VMs.

How do you handle line numbers in Boo? Are we just missing something,
or do you really have to generate a pdb file to disk?

John Rose

Apr 19, 2008, 7:53:29 PM
to jvm-la...@googlegroups.com

My take as a JVM engineer (which is a limited but interesting
perspective) is that any of the good JVMs provides C-level
performance for many interesting Java codes, while the CLR provides
early-Java-level performance. The JVMs have been competing with each
other on performance for a decade, and it shows.

Performance isn't everything, but it often turns out to be important.

More thoughts here: http://blogs.sun.com/jrose/entry/bravo_for_the_dynamic_runtime

I'd love to see a Boo-like thing for the JVM someday. I enjoy
languages which cleverly integrate a small number of high-leverage
features, rather than juxtapose a bunch of shallow hacks.

-- John

Rodrigo B. de Oliveira

Apr 20, 2008, 5:08:09 AM
to jvm-la...@googlegroups.com
On Sat, Apr 19, 2008 at 8:33 PM, Brian Frank <brian...@gmail.com> wrote:
>
> > > think the JVM is a much better platform for alternate languages
> > > than .NET.
> >
> > Why?
> ...
> ...

> I just really like Java's simple and extremely well specified file
> formats. But in the end they are both really great VMs.
>

Thanks for the thoughtful response. I can definitely see your point.

> How do you handle line numbers in Boo? Are we just missing something,
> or do you really have to generate a pdb file to disk?
>

Yes, pdb/mdb files are necessary. The System.Reflection.Emit API takes
care of everything though so it's just a matter of calling
ILGenerator.MarkSequencePoint at the appropriate times.

Cecil is also a great way of reading and writing .NET assemblies and
can automatically handle debugging info generation as well.

But again I see your point. Java line table attributes provide a
simpler solution indeed.

Best wishes,
Rodrigo

Rodrigo B. de Oliveira

Apr 20, 2008, 5:11:02 AM
to jvm-la...@googlegroups.com
On Sat, Apr 19, 2008 at 8:53 PM, John Rose <John...@sun.com> wrote:
>
> ...

> I'd love to see a Boo-like thing for the JVM someday.

That day is coming. :)

Thanks, John.
Rodrigo

David Pollak

Apr 20, 2008, 9:40:17 AM
to jvm-la...@googlegroups.com

Thanks for the deep discussion.

I don't speak for the Scala team, but I understand that CLR support is very important, is part of the current 2.7.0 release, and is planned for future releases (there exist some Scala CLR-based projects in the wild).

It might be worthwhile for you to chat with the Scala folks (who are very welcoming) about writing a unified library layer under Scala for both the JVM and CLR... that'd save you the effort of writing and maintaining a language (which is what the Scala guys are very good at).

If you're in the Bay Area or are going to be at JavaOne, maybe we can talk as well.  There may be interesting things to be done with your work and the work we're doing with lift.

Thanks,

David
 



Iulian Dragos

Apr 20, 2008, 9:43:13 AM
to jvm-la...@googlegroups.com
On Sat, Apr 19, 2008 at 10:30 PM, Brian Frank <brian...@gmail.com> wrote:
...

> A primary goal of Fan is enable writing software which is portable
> between the JVM and CLR. To give you an example, we bit off writing
> our our own DateTime and TimeZone handling to ensure exact portability
> b/w the two platforms (because .NET doesn't use Olsen timezones).
> Another example is we wrote our own build system, so that build
> scripts aren't dependent on a Java tool like Ant. I know Scala runs
> on .NET - but is that one of the goals of Scala? I'd be really
> interested in hearing more about how Scala is tackling some of these
> problems.

Portability between the two platforms was never a high priority for
Scala. I'd say the main goal was to bring together object-oriented
programming and functional programming in a modern, type-safe
language. Given that most people agree now that the functional and
object-oriented paradigms should go together, Scala was quite
successful in its first goal.

I would say there is no way to write really portable Scala code.
Sooner or later, one would need to use platform libraries, even for
things as basic as I/O. The language designer favored 'seamless'
interoperability with the platform to give access to tons of existing
libraries. Of course, there could/should be Scala abstractions on top
of that, but we don't have them. It's not that it's impossible, or
even very hard, but it didn't happen. And while we're at it, how do
you deal with platform-dependent code? Say a class on top of File,
which uses either java.io.File or the .NET equivalent behind the
scenes? Conditional compilation?

> I think any new language on the JVM will probably be multi-paradigm -
> both OO and functional. In fact even Java and C# seem to be moving
> that way. So I don't think that will end up being a distinguishing
> feature. Rather key differences will be in features like the type
> system and libraries. I'm a framework guy, so over the next few years
> you'll see most of my effort in the libraries versus the language
> syntax (which we hope to keep relatively simple and stable). For
> example, my experience has been that maintaining an indexed database
> of installed types effects how you design libraries (such as a webapp
> framework) even more than language syntax does. So I think the
> divergence really happens in the upper layers of the stack.

The part that I find really interesting in Fan is the approach to
concurrency. Can you explain a bit more about how it is implemented?
Is it mostly in the libraries, or does it have some compiler support
(for message passing, for instance)? Is there a way to pattern match
on messages, like in Erlang?

Thanks,
Iulian

--
« Je déteste la montagne, ça cache le paysage »
Alphonse Allais

Jon Harrop

Apr 20, 2008, 9:50:25 AM
to jvm-la...@googlegroups.com
On Sunday 20 April 2008 00:53:29 John Rose wrote:
> On Apr 19, 2008, at 3:16 PM, Rodrigo B. de Oliveira wrote:
> > On Sat, Apr 19, 2008 at 5:30 PM, Brian Frank
> >
> > <brian...@gmail.com> wrote:
> >> ... I personally
> >> think the JVM is a much better platform for alternate languages
> >> than .NET.
> >
> > Why?
>
> My take as a JVM engineer (which is a limited but interesting
> perspective) is that any of the good JVMs provides C-level
> performance for many interesting Java codes, while the CLR provides
> early-Java-level performance. The JVMs have been competing with each
> other on performance for a decade, and it shows.
>
> Performance isn't everything, but it often turns out to be important.
>
> More thoughts here: http://blogs.sun.com/jrose/entry/bravo_for_the_dynamic_runtime

Running the SciMark benchmark on my 32-bit WinXP Athlon64 X2 4400+ 2Gb RAM
machine:

Sun JDK 6: 385
.NET 3.5: 367

Here .NET is 5% slower than the JVM.

Running my ray tracer benchmark in Java vs F#:

Sun JDK 6: 4.930s
.NET 3.5: 4.690s

Here, .NET is 5% faster than the JVM.

Indeed, I have never been able to reproduce any benchmark results that
substantiate your claim that "the CLR provides early-Java-level performance".

> I'd love to see a Boo-like thing for the JVM someday. I enjoy
> languages which cleverly integrate a small number of high-leverage
> features, rather than juxtapose a bunch of shallow hacks.

Until the JVM is brought up to date with respect to basic functionality like
tail calls, I'm afraid you won't be seeing any production-quality innovation
along the lines of F#.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?e

Brian Frank

Apr 20, 2008, 11:38:29 AM
to JVM Languages
> If you're in the Bay Area or are going to
> be at JavaOne, maybe we can talk as well.
> There may be interesting things to be done
> with your work and the work we're doing with lift.

I'm hoping to get to JavaOne, in which case it would be great
to get together. There is lots of opportunity for collaboration.

> And since we're at that, how do you deal with
> platform-dependent code? Say a class on top of
> File, which uses either java.io.File or the .NET
> equivalent behind the scenes?

Fan defines the standard APIs as Fan classes, then
where appropriate stubs some methods as "native" which
are then implemented in Java and C#. The sys pod
which defines the core is entirely written in both
Java and C#. We tried various other approaches, and
in the end that seemed the most straightforward.

> The part that I find really interesting in Fan
> is the approach to concurrency. Can you explain a
> bit more about how it is implemented? Is it mostly
> in the libraries, or it has some compiler support
> (for message passing, for instance). Is there a way
> to pattern match on messages, like in Erlang?

The only direct language support is immutable classes. Other
than that most everything is implemented via the library
APIs. For example message passing is defined as methods
on the Thread class. Fan's switch statement is a bit more
powerful than Java's, but doesn't provide full pattern matching
like Erlang or Scala (that is something we'd like to do, though).

> Until the JVM is brought up to date with respect to basic
> functionality like tail calls, I'm afraid you won't be seeing
> any production-quality innovation along the lines of F#.

Can't tail calls be generated by the compiler with an in-method
jump? I guess it would be tricky to push/pop the stack, but
it seems feasible.
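A hedged Java sketch of that transformation (the Tail class is invented for illustration): a self tail call just rebinds the parameters and jumps back to the top of the method.

```java
// The tail-recursive form and the loop form below compute the same thing;
// a compiler could turn the first into the second with an in-method jump.
class Tail {
    // Tail-recursive shape (deep n would overflow the stack).
    static long sumRec(long n, long acc) {
        if (n == 0) return acc;
        return sumRec(n - 1, acc + n); // call in tail position
    }

    // The same function after the jump transformation.
    static long sumLoop(long n, long acc) {
        while (n != 0) {   // the tail call becomes a back-edge
            acc = acc + n; // rebind the parameters...
            n = n - 1;     // ...and jump back to the top
        }
        return acc;
    }
}
```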

Jon Harrop

Apr 20, 2008, 3:24:35 PM
to jvm-la...@googlegroups.com
On Sunday 20 April 2008 14:50:25 Jon Harrop wrote:
> Running the SciMark benchmark on my 32-bit WinXP Athlon64 X2 4400+ 2Gb RAM
> machine:
>
> Sun JDK 6: 385
> .NET 3.5: 367
>
> Here .NET is 5% slower than the JVM.

I hadn't actually noticed that the .NET port of SciMark was written by a Java
programmer who had crippled it by inserting unnecessary locks in the code.
Removing these locks for a fairer comparison, I get:

Sun JDK 6: 385
.NET 3.5: 396

So .NET is not slower at all.

James Abley

Apr 20, 2008, 5:23:12 PM
to jvm-la...@googlegroups.com
On 20/04/2008, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> On Sunday 20 April 2008 14:50:25 Jon Harrop wrote:
> > Running the SciMark benchmark on my 32-bit WinXP Athlon64 X2 4400+ 2Gb RAM
> > machine:
> >
> > Sun JDK 6: 385
> > .NET 3.5: 367
> >
> > Here .NET is 5% slower than the JVM.
>
>
> I hadn't actually noticed that the .NET port of SciMark was written by a Java
> programmer who had crippled it by inserting unnecessary locks in the code.
> Removing these locks for a fairer comparison, I get:
>
> Sun JDK 6: 385
> .NET 3.5: 396
>
> So .NET is not slower at all.
>

As a historical note, my understanding is that it was a Microsoft JVM
that first introduced JIT compilation and the associated performance
improvements to the JVM platform. I think it's safe to say that there
are some smart guys at Redmond working on those sorts of technologies.

Cheers,

James

hlovatt

Apr 22, 2008, 12:21:31 AM
to JVM Languages
I suggested a language extension to Java for just this problem:

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6519124

That allowed keywords to be escaped by surrounding them with
underscores, e.g. the Java keyword class could be escaped as _class_.
You could add a similar escape mechanism to Fan. Similarly, Scala uses
the backquote ` as its escape character.

John Cowan

Apr 22, 2008, 12:34:33 AM
to jvm-la...@googlegroups.com
On Tue, Apr 22, 2008 at 12:21 AM, hlovatt <howard...@gmail.com> wrote:

> That allowed keywords to be escaped by surrounding them with an
> underscore, e.g. the Java keyword class could be escaped via _class_.

And _class_ would be escaped as __class__, and so on?

--
GMail doesn't have rotating .sigs, but you can see mine at
http://www.ccil.org/~cowan/signatures

hlovatt

Apr 22, 2008, 3:39:03 AM
to JVM Languages
@John,

Your benchmarking does not seem consistent with this paper:

http://www.orcca.on.ca/~ldragan/synasc2005/2005-synasc-scigmark-final.pdf

They show Java faster than C# on most of the benchmarks in the SciMark
suite. But not the Monte Carlo simulation to calculate Pi, which is
presumably the benchmark you are talking about (are you using just
this benchmark or all of the SciMark benchmarks?). Note that the
authors of this paper used an identical, non-synchronized random
number generator for all languages, so your comments about
synchronization are addressed by their approach.

Jon Harrop

Apr 22, 2008, 5:44:04 AM
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 08:39:03 hlovatt wrote:
> @John,
>
> Your benchmarking does not seem consistent with this paper:
>
> http://www.orcca.on.ca/~ldragan/synasc2005/2005-synasc-scigmark-final.pdf
>
> They show Java faster than C# on most of the benchmarks in the SciMark
> suite. But not the Monte Carlo simulation to calculate Pi, which is
> presumably the benchmark you are talking about (are you using just
> this benchmark or all of the SciMark benchmarks?).

I quoted the combined figures for all benchmarks. The individual figures are:

Java:
FFT 326
Jacobi 499
Monte C 71.8
Sparse 446
LU 579

C# .NET:
FFT 325
Jacobi 505
Monte C 96.5
Sparse 415
LU 629

As you can see, the Monte Carlo benchmark is several times faster (was 27.0)
without the unnecessary lock and the performance is basically identical
between Java and C#.

> Note the authors of
> this paper used an identical, non-synchronised random number generator
> for all languages, therefore your comments about syncronization are
> addressed by their approach.

They benchmarked an extremely old version of .NET that predated generics.

John Rose

Apr 22, 2008, 5:59:13 AM
to jvm-la...@googlegroups.com
On Apr 20, 2008, at 2:23 PM, James Abley wrote:

> I think it's safe to say that there are some smart guys at
> Redmond working on those sorts of technologies.


Actually, since I was in Redmond last January for three days talking personally to those guys at Lang.NET, I feel safe to say they have shelved the JIT for several years, and their optimizations have not kept pace with those in the JVM.  (See my blog entry previously mentioned.)

Case in point:  Some of the CLR customers at Lang.NET were asking for vectorized loops.  Nobody could help them, because nobody was working on loops in the JIT.  Meanwhile, HotSpot recently improved its benchmark scores in part by vectorizing some common loops.

A significant number of HotSpot techniques have no CLR equivalent, especially those which depend on profiling and deoptimization.  The CLR JIT compiles at load time, and never looks back.


According to people who use it in the CLR, tailcall is uncomfortably slow.  Serrano's CLR version of Bigloo turns tailcalls off by default as a result.  Looks like a neglected stepchild feature to me.

As far as tailcall on the JVM goes, I know at least one researcher who is working on it; I wish we had it yesterday....  See my blog for how it will probably work.

-- John

Jon Harrop

Apr 22, 2008, 6:06:14 AM
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 10:59:13 John Rose wrote:
> On Apr 20, 2008, at 2:23 PM, James Abley wrote:
> > I think it's safe to say that there are some smart guys at
> > Redmond working on those sorts of technologies.
>
> Actually, since I was in Redmond last January for three days talking
> personally to those guys at Lang.NET, I feel safe to say they have
> shelved the JIT for several years, and their optimizations have not
> kept pace with those in the JVM. (See my blog entry previously
> mentioned.)

My benchmark results disproved your belief.

> Case in point: Some of the CLR customers at Lang.NET were asking for
> vectorized loops. Nobody could help them, because nobody was working
> on loops in the JIT. Meanwhile, HotSpot recently improved its
> benchmark scores in part by vectorizing some common loops.

Then why has HotSpot's performance not improved?

> According to people who use it in CLR, tailcall is uncomfortably
> slow.

That has not been true for some time now.

> Serrano's CLR version of BigLoo turns tailcalls off by default
> as a result. Looks like a neglected stepchild feature to me.

If it were neglected, they would not have drastically improved its performance
in the latest .NET release.

They also improved the efficiency of structs which, apparently, the JVM
doesn't even have.

> As far as tailcall on the JVM goes, I know at least one researcher
> who is working on it; I wish we had it yesterday.... See my blog for
> how it will probably work.

Assuming the JVM does eventually get tail calls, how many years will it be
before their performance catches up with the CLR?

Antonio Cuni

Apr 22, 2008, 6:35:07 AM
to jvm-la...@googlegroups.com
Jon Harrop wrote:

>> Actually, since I was in Redmond last January for three days talking
>> personally to those guys at Lang.NET, I feel safe to say they have
>> shelved the JIT for several years, and their optimizations have not
>> kept pace with those in the JVM. (See my blog entry previously
>> mentioned.)
>
> My benchmark results disproved your belief.

for what it's worth, in PyPy we discovered that HotSpot produces much
better code than the CLR when the bytecode doesn't follow the standard
patterns produced by the java/c# compilers.

In particular, we heavily use exceptions to model control flow in our
RPython program (e.g., every "for" loop needs to catch StopIteration),
but the CLR JIT is not able to optimize such a case, and thus the
first versions of the CLI backend produced very slow code; to get
reasonable performance, we rely on our own inliner/malloc
removal/exception inliner, which gave a speedup of something like 30x, IIRC.

On the other hand, HotSpot produces much better code[1], and moreover
we get faster code if we *disable* our own optimizations, since using
them results in more code to analyze because of the inlining.

[1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive

ciao,
Anto

Jon Harrop

Apr 22, 2008, 7:08:21 AM
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 11:35:07 Antonio Cuni wrote:
> Jon Harrop wrote:
> >> Actually, since I was in Redmond last January for three days talking
> >> personally to those guys at Lang.NET, I feel safe to say they have
> >> shelved the JIT for several years, and their optimizations have not
> >> kept pace with those in the JVM. (See my blog entry previously
> >> mentioned.)
> >
> > My benchmark results disproved your belief.
>
> for what it's worth, in PyPy we discovered that hotspot produces much
> better code than the CLR when the bytecode doesn't follow the standard
> pattern produced by the java/c# compilers.

You generated code that turned out to be less efficient on the CLR in this
particular case but you cannot validly generalize that to all "non-standard
code".

Indeed, we know that is wrong because tail calls have the exact
opposite performance characteristics: on the JVM you have to work
around their complete absence (not just inefficiency).

> In particular, we heavily use exceptions to model control flow in our
> RPython program (e.g., every "for" loop needs to catch StopIteration),
> but the CLR JIT it not able to optimize such a case, and thus the first
> versions of the CLI backend produced very slow code; to have
> reasonable performances, we rely on our own inliner/malloc
> removal/exception inliner, which gave a speedup of something like 30x,
> IIRC.

Why didn't you use tail calls instead?

> On the other hand, hotspot produces much better code[1], and moreover we
> get faster code if we *disable* our own optimizations, since if we use
> them it results in more code to analyze because of the inlining.
>
> [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive

Sure but it looks as though you are unnecessarily applying a workaround for
the absence of tail calls on the JVM to the CLR when you could have just used
tail calls on the CLR. Moreover, they are easier to use and much faster than
anything equivalent on the JVM.

Antonio Cuni

Apr 22, 2008, 7:49:16 AM
to jvm-la...@googlegroups.com
Jon Harrop wrote:

> You generated code that turned out to be less efficient on the CLR in this
> particular case but you cannot validly generalize that to all "non-standard
> code".

right, I can't generalize to all non-standard code, but it's surely
true for the kind of non-standard code PyPy generates :-).

The exception inlining was only an example; there are other areas
where the CLR JIT was worse, like code that makes heavy use of temp
variables instead of leaving values on the stack.

[cut]


> Why didn't you use tail calls instead?

I honestly don't see how tail calls could help here; could you show me
an example please?


ciao,
Anto

Jon Harrop

Apr 22, 2008, 1:10:54 PM
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 12:49:16 Antonio Cuni wrote:
> Jon Harrop wrote:
> > You generated code that turned out to be less efficient on the CLR in
> > this particular case but you cannot validly generalize that to all
> > "non-standard code".
>
> right, I can't generalize to all non-standard code, but it's surely true
> for the kind of non-standard code pypy generates :-).

Sure. Generating exceptions unless absolutely necessary will be a very bad
idea on the CLR but it will also be a bad idea on the JVM because its
exception handling is slow.

> The exception inlining was only an example; there are other areas where
> the CLR JIT was worse, like code that makes heavy use of temp
> variables instead of leaving the values on the stack.

That's interesting.

> > Why didn't you use tail calls instead?
>
> I honestly don't see how tail calls could help here; could you show me
> an example please?

Sure. Consider the loop:

void run() {
    for (int i=0; i<3; ++i)
        if (foo(i) == 0) break;
    bar();
    baz();
}

Sounds like you were translating that into something like (F# code):

exception StopIteration

let run() =
    try
        for i=0 to 2 do
            if foo i=0 then raise StopIteration
    with StopIteration ->
        ()
    bar()
    baz()

But you could have translated it into:

let rec run_1 i =
    if foo i=0 then run_2() else
    if i<3 then run_1 (i + 1) else run_2()
and run_2() =
    bar()
    baz()

let run() =
    run_1 0

Where both calls to the continuation "run_2" inside the body of the "run_1"
function are tail calls.

Tail calls have lots of advantages here. The JIT is likely to generate a
simple branch but it may well spot that the code blocks can be rearranged to
avoid even the branch! For example, it might rewrite the code into:

let rec run_1 i =
    if foo i<>0 && i<3 then run_1 (i + 1)
    else
        bar()
        baz()

You can pass as many values as you like as arguments to a continuation, and
they are highly likely to be kept in registers wherever your control flow
takes you (what were the exceptional and non-exceptional routes are now
symmetric), for the best possible performance. This facilitates lots of
subsequent optimizations by the JIT.
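To make the shape of that translation concrete for Java readers, here is a hypothetical sketch of the same rewrite in plain Java (names are invented for illustration; note that HotSpot does not guarantee tail-call elimination, so unlike on the CLR the recursion here is merely illustrative):

```java
// Hypothetical sketch: the loop from run() rewritten in continuation style.
// run1 is the loop body; run2 is the continuation holding the code after the loop.
public class TailCallSketch {
    static int calls = 0;                       // counts invocations of foo, for the demo

    static int foo(int n) { calls++; return n - 1; }
    static void bar() {}
    static void baz() {}

    static void run1(int i) {
        if (foo(i) == 0) { run2(); return; }    // the "break" becomes a call to the continuation
        if (i < 3) run1(i + 1); else run2();    // next iteration, or fall through to the continuation
    }

    static void run2() {                        // everything that followed the loop
        bar();
        baz();
    }

    public static void main(String[] args) {
        run1(0);
        System.out.println(calls);              // foo(0) = -1, foo(1) = 0, so 2 calls
    }
}
```

Both exits of run1 end in a call to run2, which is exactly the pattern a tail-call-aware JIT can turn into plain branches.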

Doing a quick benchmark on this code, I find that 10^6 iterations using your
exception-based technique gives:

CLR: 24s
JVM: 1.3s

Holy smokes, the JVM is 18x faster!

Now try the tail calls (only available on the CLR):

CLR: 0.025s

Holy smokes, the CLR is 52x faster!

Optimizing exception handling in the JVM before implementing tail calls was
premature optimization, IMHO.

Richard Warburton

unread,
Apr 22, 2008, 1:45:28 PM4/22/08
to jvm-la...@googlegroups.com
> Doing a quick benchmark on this code, I find that 10^6 iterations using your
> exception-based technique gives:
>
> CLR: 24s
> JVM: 1.3s
>
> Holy smokes, the JVM is 18x faster!
>
> Now try the tail calls (only available on the CLR):
>
> CLR: 0.025s
>
> Holy smokes, the CLR is 52x faster!

Could you please provide the source code for this performance comparison.

Richard Warburton

Antonio Cuni

unread,
Apr 22, 2008, 2:11:28 PM4/22/08
to jvm-la...@googlegroups.com
Jon Harrop wrote:

> Sure. Generating exceptions unless absolutely necessary will be a very bad
> idea on the CLR but it will also be a bad idea on the JVM because its
> exception handling is slow.

not always; it's entirely possible that I recall wrongly, but I remember
that we had no penalty in using exceptions vs. a plain for loop.

> Sounds like you were translating that into something like (F# code):
>
> exception StopIteration
>
> let run() =
>     try
>         for i=0 to 2 do
>             if foo i=0 then raise StopIteration
>     with StopIteration ->
>         ()
>     bar()
>     baz()

yes, more or less

> But you could have translated it into:
>
> let rec run_1 i =
>     if foo i=0 then run_2() else
>     if i<3 then run_1 (i + 1) else run_2()
> and run_2() =
>     bar()
>     baz()
>
> let run() =
>     run_1 0
>
> Where both calls to the continuation "run_2" inside the body of the "run_1"
> function are tail calls.

well, but doing such a translation is not straightforward; it's doable,
but you need to write it, exactly as we wrote our own exception inliner
that compiles the original code into a plain for loop.

Honestly, I doubt that a tail call can be faster (let alone much faster)
than a plain for loop, but we should do some benchmarks, of course.

> Doing a quick benchmark on this code, I find that 10^6 iterations using your
> exception-based technique gives:
>
> CLR: 24s
> JVM: 1.3s
>
> Holy smokes, the JVM is 18x faster!
>
> Now try the tail calls (only available on the CLR):
>
> CLR: 0.025s
>
> Holy smokes, the CLR is 52x faster!

how does this compare with the first version of the loop you wrote?

void run() {
    for (int i=0; i<3; ++i)
        if (foo(i) == 0) break;
    bar();
    baz();
}


ciao,
Anto

Jon Harrop

unread,
Apr 22, 2008, 2:11:40 PM4/22/08
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 18:45:28 Richard Warburton wrote:
> Could you please provide the source code for this performance comparison.

Sure. The Java:

public class test
{
    int foo(int n)
    {
        return n - 1;
    }

    void bar()
    {
    }

    void baz()
    {
    }

    void run()
    {
        Exception e = new Exception("");
        try
        {
            for (int i=0; i<3; ++i)
            {
                if (foo(i) == 0) throw e;
            }
        }
        catch (Exception e2)
        {
        }
        bar();
        baz();
    }

    public static void main(String[] args)
    {
        for (int n=0; n<10; ++n)
        {
            long start = System.currentTimeMillis();
            for (int i=0; i<1000000; ++i)
                (new test()).run();
            System.out.println(System.currentTimeMillis() - start);
        }
    }
}

The F# (for both techniques):

#light

let foo n = n-1
let bar() = ()
let baz() = ()

exception StopIteration

let run1() =
    try
        for n=0 to 2 do
            if foo n=0 then raise StopIteration
    with StopIteration ->
        ()
    bar()
    baz()

let run2() =
    let rec run_1 n =
        if foo n=0 then run_2() else
        if n<3 then run_1(n+1) else run_2()
    and run_2() =
        bar()
        baz()
    run_1 0

do
    let t = new System.Diagnostics.Stopwatch()
    t.Start()
    for i=1 to 1000000 do
        run1()
    printf "Exceptions: %dms\n" t.ElapsedMilliseconds
    t.Reset()
    t.Start()
    for i=1 to 1000000 do
        run2()
    printf "Tail calls: %dms\n" t.ElapsedMilliseconds
    stdin.ReadLine()

Christian Vest Hansen

unread,
Apr 22, 2008, 2:38:08 PM4/22/08
to jvm-la...@googlegroups.com
How about this version:

public class Test
{
    int foo(int n)
    {
        return n - 1;
    }

    void bar()
    {
    }

    void baz()
    {
    }

    final Exception e = new Exception("");

    void run()
    {
        try
        {
            for (int i=0; i<3; ++i)
            {
                if (foo(i) == 0) throw e;
            }
        }
        catch (Exception e2)
        {
        }
        bar();
        baz();
    }

    public static void main(String[] args)
    {
        Test t = new Test();
        for (int n=0; n<10; ++n)
        {
            long start = System.currentTimeMillis();
            for (int i=0; i<1000000; ++i)
                //(new Test()).run();
                t.run();
            System.out.println(System.currentTimeMillis() - start);
        }
    }
}


--
Venlig hilsen / Kind regards,
Christian Vest Hansen.

Patrick Wright

unread,
Apr 22, 2008, 2:40:43 PM4/22/08
to jvm-la...@googlegroups.com
> public class test
> {

See http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive
for some notes on exceptions and performance. Try this change:
public class test_reuse
{
    public static final Exception EXCEPTION = new Exception("") {
        public Throwable fillInStackTrace() { return null; }
    };

    int foo(int n)
    {
        return n - 1;
    }

    void bar()
    {
    }

    void baz()
    {
    }

    void run()
    {
        //Exception e = new Exception("");
        try
        {
            for (int i=0; i<3; ++i)
            {
                if (foo(i) == 0) throw EXCEPTION;
            }
        }
        catch (Exception e2)
        {
        }
        bar();
        baz();
    }

    public static void main(String[] args)
    {
        for (int n=0; n<10; ++n)
        {
            long start = System.currentTimeMillis();
            for (int i=0; i<1000000; ++i)
                (new test_reuse()).run();
            System.out.println(System.currentTimeMillis() - start);
        }
    }
}

also, baz() and bar() are noise in this benchmark, as they should be
dropped by the JVM as no-ops.

patrick@patrick-wrights-computer:~/tmp$
>java -cp . test
1572
1575
1570
1572
1576
1572
1603
1614
1575
1576
patrick@patrick-wrights-computer:~/tmp$
>java -cp . -server test
1480
1457
1468
1461
1470
1456
1459
1464
1455
1456
patrick@patrick-wrights-computer:~/tmp$
>java -cp . test_reuse
133
127
129
129
132
129
130
128
128
127
patrick@patrick-wrights-computer:~/tmp$
>java -cp . -server test_reuse
25
12
10
9
10
10
8
9
9
9
patrick@patrick-wrights-computer:~/tmp$
>java -version
java version "1.5.0_13"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05-237)
Java HotSpot(TM) Client VM (build 1.5.0_13-119, mixed mode, sharing)

IMO, this discussion is becoming a bit of a food fight, which is
unnecessary and not really useful. You have points to make, Jon, but
everyone throwing benchmarks at each other usually just wastes time.
There are better forums to discuss these things.

Cheers!
Patrick

Richard Warburton

unread,
Apr 22, 2008, 2:41:51 PM4/22/08
to jvm-la...@googlegroups.com
> On Tuesday 22 April 2008 18:45:28 Richard Warburton wrote:
> > Could you please provide the source code for this performance comparison.
>
> Sure. The Java:
>
> public class test
> {
>     int foo(int n)
>     {
>         return n - 1;
>     }
>
>     void bar()
>     {
>     }
>
>     void baz()
>     {
>     }
>
>     void run()
>     {
>         Exception e = new Exception("");
>         try
>         {
>             for (int i=0; i<3; ++i)
>             {
>                 if (foo(i) == 0) throw e;
>             }
>         }
>         catch (Exception e2)
>         {
>         }
>         bar();
>         baz();
>     }
>
>     public static void main(String[] args)
>     {
>         for (int n=0; n<10; ++n)
>         {
>             long start = System.currentTimeMillis();
>             for (int i=0; i<1000000; ++i)
>                 (new test()).run();
>             System.out.println(System.currentTimeMillis() - start);
>         }
>     }
> }

Whilst I don't disagree with your overall comment that the JVM should
implement tail-call elimination, I'm not entirely sure why people are
so interested in using exceptions to implement their specific control
flow semantics of choice anyway. For example, replacing the contents
of the run method with:

for (int i=0; i<3; ++i)
{
    if (foo(i) == 0) break;
}
bar();
baz();

yielded performance improvements on the order of 100-200x over the
exception-based control flow approach. I would expect that writing
a similar program in C# would produce similar results. Since it was
only running for < 20 milliseconds, it's hard to gauge an accurate
time. I can see why an interprocedural control flow sequence
would map nicely onto exceptions - but if you are moving around within
a method then surely it would be preferable to stick to goto-based
control flow. This would assume that you are using bytecode as your
preferred method of output, rather than Java source, but I think that's
a reasonable assumption anyway.

Richard Warburton

ijuma

unread,
Apr 22, 2008, 2:42:04 PM4/22/08
to JVM Languages


On Apr 22, 7:11 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> On Tuesday 22 April 2008 18:45:28 Richard Warburton wrote:
>
> > Could you please provide the source code for this performance comparison.
>
> Sure. The Java:
>
> public class test
> {
>     int foo(int n)
>     {
>       return n - 1;
>     }
>
>     void bar()
>     {
>     }
>
>     void baz()
>     {
>     }
>
>     void run()
>     {
>         Exception e = new Exception("");

The scores change quite dramatically if I change this to:

Exception e = new Exception("") {
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
};

From an average of 850 to an average of 50.

Regards,
Ismael

Jon Harrop

unread,
Apr 22, 2008, 2:35:15 PM4/22/08
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 19:38:08 Christian Vest Hansen wrote:
>    final Exception e = new Exception("");

Identical performance to using "break" here.

Antonio Cuni

unread,
Apr 22, 2008, 2:43:04 PM4/22/08
to jvm-la...@googlegroups.com
Jon Harrop wrote:

> Sure. The Java:
>
[cut]


> void run()
> {
>     Exception e = new Exception("");
>     try
>     {
>         for (int i=0; i<3; ++i)
>         {
>             if (foo(i) == 0) throw e;
>         }
>     }
>     catch (Exception e2)
>     {
>     }
>     bar();
>     baz();
> }
>
> public static void main(String[] args)
> {
>     for (int n=0; n<10; ++n)
>     {
>         long start = System.currentTimeMillis();
>         for (int i=0; i<1000000; ++i)
>             (new test()).run();
>         System.out.println(System.currentTimeMillis() - start);
>     }
> }

this is not a good benchmark, for two reasons:

1) you are allocating a new object at every loop, but we are
benchmarking the loops, not the garbage collector :-); you should use
static methods instead, IMHO;

2) you are allocating a new exception every time; the optimization
described here [1] works only if the exception is pre-allocated.
[1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive

Here is my modified benchmark which tries to address these issues:

public class loop2
{
    public static final Exception exc = new Exception("");

    static int foo(int n)
    {
        return n - 1;
    }

    static void bar()
    {
    }

    static void baz()
    {
    }

    static void run()
    {
        try
        {
            for (int i=0; i<3; ++i)
            {
                if (foo(i) == 0) throw exc;
            }
        }
        catch (Exception e2)
        {
        }
        bar();
        baz();
    }

    public static void main(String[] args)
    {
        for (int n=0; n<10; ++n)
        {
            long start = System.currentTimeMillis();
            for (int i=0; i<1000000; ++i)
                run();
            System.out.println(System.currentTimeMillis() - start);
        }
    }
}

And here are the results:

antocuni@viper tmp $ java loop1
1923
2032
2032
2052
2031
2078
2058
2035
2067
2063
antocuni@viper tmp $ java loop2
9
3
1
1
1
1
1
1
1
1

Trying to interpret the numbers, I think that after the first iteration
HotSpot decided to JIT-compile the loop, and since it can inline the
exception it ends up with a completely empty loop, which is thrown away.

ciao,
Anto

Jon Harrop

unread,
Apr 22, 2008, 2:41:11 PM4/22/08
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 19:43:04 Antonio Cuni wrote:
> Jon Harrop wrote:
> this is not a good benchmark, for two reasons:
>
> 1) you are allocating a new object at every loop, but we are
> benchmarking the loops, not the garbage collector :-); you should use
> static methods instead, IMHO;

Actually I tried hoisting the allocation of "test" and it makes the code
consistently slower. I have no idea why.

> 2) you are allocating a new exception every time; the optimization
> described here [1] works only if the exception is pre-allocated.
> [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive

I think that is not thread safe. Specifically, when the branch conveys
information (passed as arguments using a tail call, or embedded in the
exception) then you must use a locally allocated exception, right?

Jon Harrop

unread,
Apr 22, 2008, 2:43:03 PM4/22/08
to jvm-la...@googlegroups.com
On Tuesday 22 April 2008 19:11:28 Antonio Cuni wrote:
> Honestly, I doubt that a tail call can be faster/much faster than a
> plain for loop, but we should do some benchmark of course.

Yes. However, tail calls are not restricted to the body of a single method.

> > CLR: 0.025s


>
> how does this compare with the first version of the loop you wrote?
>
> void run() {
>     for (int i=0; i<3; ++i)
>         if (foo(i) == 0) break;
>     bar();
>     baz();
> }

25ms with tail calls drops to 7ms with "break" on the CLR, 10ms for break on
the JVM.

Patrick Wright

unread,
Apr 22, 2008, 2:53:26 PM4/22/08
to jvm-la...@googlegroups.com
> > 2) you are allocating a new exception every time; the optimization
> > described here [1] works only if the exception is pre-allocated.
> > [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive
>
> I think that is not thread safe. Specifically, when the branch conveys
> information (passed as arguments using a tail call, or embedded in the
> exception) then you must use a locally allocated exception, right?

From TFA:
"A similar technique, not so widely used yet, is to clone a
pre-allocated exception and throw the clone. This can be handy if
there is information (such as a return value) which differs from use
to use; the variable information can be attached to the exception by
subclassing and adding a field. The generated code can still collapse
to a simple goto, and the extra information will stay completely in
registers, assuming complete escape analysis of the exception. (This
level of EA is on the horizon.)"

John Cowan

unread,
Apr 22, 2008, 3:05:04 PM4/22/08
to jvm-la...@googlegroups.com
On Tue, Apr 22, 2008 at 2:41 PM, Jon Harrop <j...@ffconsultancy.com> wrote:

> > 2) you are allocating a new exception every time; the optimization
> > described here [1] works only if the exception is pre-allocated.
> > [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive
>
> I think that is not thread safe. Specifically, when the branch conveys
> information (passed as arguments using a tail call, or embedded in the
> exception) then you must use a locally allocated exception, right?

Yes, you must. However, what makes allocating an exception expensive
is the fillInStackTrace method, which has to walk the JVM stack. If you
override that in your exception class with a do-nothing method, then
locally allocating exceptions is very cheap.
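A minimal sketch of that technique (class and method names here are hypothetical, not from the thread): override fillInStackTrace() with a do-nothing body, and a locally allocated exception can cheaply and thread-safely carry per-throw state:

```java
// A cheap control-flow exception: no stack walk on construction.
class FastBreak extends Exception {
    final int payload;           // per-throw state; local allocation keeps it thread-safe

    FastBreak(int payload) { this.payload = payload; }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;             // do nothing: skip the expensive JVM stack walk
    }
}

public class FastBreakDemo {
    // Returns the index of the first zero, using the exception as a "break with a value".
    static int firstZero(int[] xs) {
        try {
            for (int i = 0; i < xs.length; i++)
                if (xs[i] == 0) throw new FastBreak(i);
        } catch (FastBreak e) {
            return e.payload;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstZero(new int[]{5, 3, 0, 7}));   // prints 2
    }
}
```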

Mark Haniford

unread,
Apr 22, 2008, 6:11:50 PM4/22/08
to jvm-la...@googlegroups.com
We need Boo on the JVM in a bad way.

On Sat, Apr 19, 2008 at 6:53 PM, John Rose <John...@sun.com> wrote:
>
> On Apr 19, 2008, at 3:16 PM, Rodrigo B. de Oliveira wrote:
>
> > On Sat, Apr 19, 2008 at 5:30 PM, Brian Frank
> > <brian...@gmail.com> wrote:
> >> ... I personally
> >> think the JVM is a much better platform for alternate languages
> >> than .NET.
> >
> > Why?
>
> My take as a JVM engineer (which is a limited but interesting
> perspective) is that any of the good JVMs provides C-level
> performance for many interesting Java codes, while the CLR provides
> early-Java-level performance. The JVMs have been competing with each
> other on performance for a decade, and it shows.
>
> Performance isn't everything, but it often turns out to be important.
>
> More thoughts here: http://blogs.sun.com/jrose/entry/
> bravo_for_the_dynamic_runtime
>
> I'd love to see a Boo-like thing for the JVM someday. I enjoy
> languages which cleverly integrate a small number of high-leverage
> features, rather than juxtapose a bunch of shallow hacks.
>
> -- John
>
>
>
> >
>

hlovatt

unread,
Apr 22, 2008, 11:07:48 PM4/22/08
to JVM Languages
On my MacBook Pro running Windows XP via Parallels, the figures I get
are:

For C#

SciGMark 1.0 - C# - specialized

FFT (1024): 127.693325911912
SOR (100x100): 451.688663887376
Monte Carlo : 53.8252351904638
Sparse matmult (N=1000, nz=5000): 287.33058461682
LU (100x100): 281.689806463528
PolyMult (N=40): 129.801981762775
Composite Score: 222.004932972147

Platform Information
CLR Version: 2.0.50727.1433
Working Set: 17170432

For java -server:

SciGMark 1.0 - Java - specialized
FFT (1024): 296.13593880108886
SOR (100x100): 895.0093438244637
Monte Carlo : 237.23858121300037
Sparse matmult (N=10, nz=50): 467.9837360457056
LU (100x100): 1304.666388568605
PolyMult (N=40): 563.4980943142291

Composite Score: 627.4220137945155

java.vendor: Sun Microsystems Inc.
java.version: 1.6.0_06
os.arch: x86
os.name: Windows XP
os.version: 5.1

Which makes the Java version about 3 times quicker. I used the code
given for the paper I previously referenced, since this code uses
exactly the same algorithms and avoids system calls. It is quite
possible that I did not give the C# compiler the right options; I
simply ran the code from Visual Studio Express; I am far from a C#
expert.

PS We are probably both in breach of the .NET user agreement; see
section 8, which places considerable restrictions on running benchmarks.

On Apr 22, 7:44 pm, Jon Harrop <j...@ffconsultancy.com> wrote:
> On Tuesday 22 April 2008 08:39:03 hlovatt wrote:
>
> > @John,
>
> > Your benchmarking does not seem consistent with this paper:
>
> >http://www.orcca.on.ca/~ldragan/synasc2005/2005-synasc-scigmark-final...

hlovatt

unread,
Apr 23, 2008, 4:19:47 AM4/23/08
to JVM Languages
I found a bit more time in the day and worked out how to run the C#
compiler from the command line. I used the following command:

csc /optimize /out:commandline.exe *.cs

Then when I run commandline I get:

SciGMark 1.0 - C# - specialized

FFT (1024): 388.779673030229
SOR (100x100): 630.880482605193
Monte Carlo : 76.1505509057327
Sparse matmult (N=1000, nz=5000): 576.581135920473
LU (100x100): 508.721326139402
PolyMult (N=40): 327.208768923184
Composite Score: 418.053656254036

Platform Information
CLR Version: 2.0.50727.1433
Working Set: 5840896

Which is a considerable improvement over before, but still 1.5 times
slower than Java. However, on a personal note, I don't usually bother
optimising further when I get within a factor of 2, so I would say
that C# is fast enough.

Steven Shaw

unread,
Apr 23, 2008, 9:59:13 AM4/23/08
to jvm-la...@googlegroups.com
Aren't you guys ignoring the case where the StopIteration exception is
thrown by a method called in a loop? It may not be possible to inline
it. I imagine that the JVM cannot optimise that to a goto, and the CLR's
tail calls cannot help in that case...

Jon Harrop

unread,
Apr 23, 2008, 10:32:47 AM4/23/08
to jvm-la...@googlegroups.com

Actually that is exactly a case handled by tail calls: the method is
parameterized over the continuations that it will call. This is very common
in functional programming and is called continuation passing style (CPS).
Some functional compilers (e.g. SML/NJ) automatically do this to all code.
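For the curious, here is a toy illustration of the idea (hypothetical Java, not code from any of the projects discussed): in CPS a callee receives its possible exits as explicit continuations, so "throwing" is just calling the failure continuation:

```java
import java.util.function.IntConsumer;

// CPS sketch: div takes a success continuation and a "failure" continuation.
// Raising the exception is simply a (tail) call to onDivByZero.
public class CpsSketch {
    static void div(int a, int b, IntConsumer onOk, Runnable onDivByZero) {
        if (b == 0) onDivByZero.run();   // the "throw"
        else onOk.accept(a / b);         // the normal return
    }

    // Direct-style wrapper for demonstration: -1 stands in for the exceptional exit.
    static int safeDiv(int a, int b) {
        final int[] out = new int[1];
        div(a, b, r -> out[0] = r, () -> out[0] = -1);
        return out[0];
    }

    public static void main(String[] args) {
        System.out.println(safeDiv(10, 2));  // 5
        System.out.println(safeDiv(10, 0));  // -1
    }
}
```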

Steven Shaw

unread,
Apr 23, 2008, 12:59:10 PM4/23/08
to jvm-la...@googlegroups.com
2008/4/24 Jon Harrop <j...@ffconsultancy.com>:

> Actually that is exactly a case handled by tail calls: the method is
> parameterized over the continuations that it will call. This is very common
> in functional programming and is called continuation passing style (CPS).
> Some functional compilers (e.g. SML/NJ) automatically do this to all code.

So it works if you do a CPS transformation on all your code leaving
your frames on the heap. In that case you can tail call a continuation
to simulate the exception. I am interested in this approach. I like
the flexibility that CPS style gives (perhaps different exception
models to the norm). However, wouldn't this approach have other
performance consequences (mainly the heap-based frames)?

Steven Shaw

unread,
Apr 23, 2008, 1:05:27 PM4/23/08
to jvm-la...@googlegroups.com
2008/4/24 Steven Shaw <ste...@gmail.com>:

> So it works if you do a CPS transformation on all your code leaving
> your frames on the heap. In that case you can tail call a continuation
> to simulate the exception. I am interested in this approach. I like
> the flexibility that CPS style gives (perhaps different exception
> models to the norm). However, wouldn't this approach have other
> performance consequences (mainly the heap-based frames)?

Sorry to follow up my own post with more thoughts.

What I'm getting at is: if CPS transformation and tail calls were so
performant for exceptions, then why bake exceptions into the CIL and
into the JVM bytecodes? I really appreciate Scheme because it provides
the primitives to implement high-level constructs like exceptions (and
coroutines, backtracking), but I figured that because the "big guys" in
runtime systems baked in particular exception systems, it wasn't
considered fast enough for this common case.

John Cowan

unread,
Apr 23, 2008, 1:18:10 PM4/23/08
to jvm-la...@googlegroups.com
Scheme is Guy Steele's attempt to make a PL for people as smart as he
is. Java is his attempt for the rest of us/them.

--

Jon Harrop

unread,
Apr 23, 2008, 12:33:41 PM4/23/08
to jvm-la...@googlegroups.com
On Wednesday 23 April 2008 09:19:47 hlovatt wrote:
> I found a bit more time in the day and worked out how to run the C#
> compiler from the command line. I used the following command:
>
> csc /optimize /out:commandline.exe *.cs
>
> Then when I run commandline I get:
>
> SciGMark 1.0 - C# - specialized
>
> FFT (1024): 388.779673030229
> SOR (100x100): 630.880482605193
> Monte Carlo : 76.1505509057327
> Sparse matmult (N=1000, nz=5000): 576.581135920473
> LU (100x100): 508.721326139402
> PolyMult (N=40): 327.208768923184
> Composite Score: 418.053656254036
>
> Platform Information
> CLR Version: 2.0.50727.1433
> Working Set: 5840896
>
> Which is a considerable improvement over before, but still 1.5 times
> slower than Java. However on a personal note; I don't usually bother
> optimising further when I get within a factor of 2, so I would say
> that C# is fast enough.

On my machine, this benchmark is only 17% faster in Java for the small
problems and 0.04% faster for the large problems. That is well within the
variation between individual tests, the largest of which is the CLR being
2.44x faster on the polynomial multiplication test.

Charles Oliver Nutter

unread,
Apr 23, 2008, 2:07:32 PM4/23/08
to jvm-la...@googlegroups.com
John Cowan wrote:
> On Tue, Apr 22, 2008 at 2:41 PM, Jon Harrop <j...@ffconsultancy.com> wrote:
>
>> > 2) you are allocating a new exception every time; the optimization
>> > described here [1] works only if the exception is pre-allocated.
>> > [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive
>>
>> I think that is not thread safe. Specifically, when the branch conveys
>> information (passed as arguments using a tail call, or embedded in the
>> exception) then you must use a locally allocated exception, right?
>
> Yes, you must. However, what makes allocating an exception expensive
> is the fillInStackTrace method, which has to walk the JVM stack. If you
> override that in your exception class with a do-nothing method, then
> locally allocating exceptions is very cheap.

JRuby uses this technique since we frequently have flow-control
exceptions that contain different state. It's fast...very very fast. The
stack trace is basically *all* the cost, but John Rose's version also
eliminates the object allocation cost. For some of our exceptions we do
have a single instance.

- Charlie

Thomas E Enebo

unread,
Apr 23, 2008, 2:11:08 PM4/23/08
to jvm-la...@googlegroups.com
Jon, Can you specify how you are running these benchmarks? I did not
see version of arguments to JVM (or .Net runtime). You are using
-server I assume...

-Tom

--
Blog: http://www.bloglines.com/blog/ThomasEEnebo
Email: en...@acm.org , tom....@gmail.com

Charles Oliver Nutter

unread,
Apr 23, 2008, 2:15:27 PM4/23/08
to jvm-la...@googlegroups.com
Jon Harrop wrote:
> Optimizing exception handling in the JVM before implementing tail calls was
> premature optimization, IMHO.

It's worth mentioning that in order to implement non-local flow control,
IronRuby has to use exceptions just like JRuby. And any benchmarks
involving exceptions or non-local flow control are far slower on
IronRuby than on even the C version of Ruby. JRuby is consistently a lot
faster on all such benchmarks, largely because the cost of exceptions is
so low on the JVM.

- Charlie

John Wilson

unread,
Apr 23, 2008, 2:19:29 PM4/23/08
to jvm-la...@googlegroups.com


I use exceptions like this too. I have an exception instance per
thread held in thread-local storage. I currently leave the
fillInStackTrace method alone, so the stack trace points to the creating
location, which has a comment saying that if you get here via a printed
stack trace you have found a bug (as I should have caught it before you
see it).

As I only create one per thread, it's not a significant performance hit.
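A sketch of that per-thread arrangement (names are hypothetical): the exception is built once per thread, so throwing it costs neither an allocation nor a stack walk after the first use:

```java
// One reusable control-flow exception per thread, held in thread-local storage.
public class PerThreadBreak {
    static final ThreadLocal<Exception> BREAK = new ThreadLocal<Exception>() {
        @Override protected Exception initialValue() {
            // The trace is filled in once, here; if it ever prints, something escaped.
            return new Exception("control-flow exception escaped: this is a bug");
        }
    };

    static int countUpTo(int limit) {
        int n = 0;
        try {
            while (true) {
                if (n == limit) throw BREAK.get();  // reuse, never allocate
                n++;
            }
        } catch (Exception expected) {
            // used purely as a non-local jump
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countUpTo(5));   // prints 5
    }
}
```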


John Wilson

Jon Harrop

unread,
Apr 23, 2008, 2:34:06 PM4/23/08
to jvm-la...@googlegroups.com
On Wednesday 23 April 2008 19:11:08 Thomas E Enebo wrote:
> Jon, Can you specify how you are running these benchmarks?

For C++ and Java, I downloaded the source, compiled it with "make" and ran it.
For C#, I copied the source into a VS project, set it to "Release" mode,
built and ran it.

> I did not see version of arguments to JVM (or .Net runtime).

Neither did I. :-)

> You are using -server I assume...

Yes. That is the default here.

Barry Kelly

unread,
Apr 23, 2008, 5:26:47 PM4/23/08
to jvm-la...@googlegroups.com
"Steven Shaw" <ste...@gmail.com> wrote:

> 2008/4/24 Steven Shaw <ste...@gmail.com>:

> What I'm getting at is if CPS transformation and tail calls were so
> performant for exceptions then why bake exceptions into the CIL and
> into the JVM bytecodes?

For CLR/CIL: exceptions (SEH) are an OS-level primitive on Windows, and
exception control flow can pass through other languages, including
native code (C++, C, Delphi etc.). If your CIL code is being called via
a callback from native code, you may want to be able to throw an
exception and catch it on the other side. Not highly recommended, of
course.

-- Barry

--
http://barrkel.blogspot.com/

Jon Harrop

unread,
Apr 23, 2008, 7:00:39 PM4/23/08
to jvm-la...@googlegroups.com, Steven Shaw
On Wednesday 23 April 2008 18:05:27 Steven Shaw wrote:
> 2008/4/24 Steven Shaw <ste...@gmail.com>:
> > So it works if you do a CPS transformation on all your code leaving
> > your frames on the heap.

Yes.

> > In that case you can tail call a continuation
> > to simulate the exception.

Exactly.

> > I am interested in this approach.

You may like the book "Compiling with continuations" by Appel.

> > I like
> > the flexibility that CPS style gives (perhaps different exception
> > models to the norm). However, wouldn't this approach have other
> > performance consequences (mainly the heap-based frames)?

Yes. Current CPS implementations are typically ~50% slower for ordinary code
but there are lots of non-trivial trade-offs involved and lots of interesting
optimization potential elsewhere.

One important benefit is that moving stack frames onto the heap eliminates
stack overflows, simplifies the run-time (no stack to crawl!) and can improve
incrementality because the stack is often crawled atomically. You also get
callcc for free, which can be extremely useful in some circumstances.

On the other hand, the stack is often used for implicit thread-local storage
with concurrent GCs as an optimization and pushing everything onto the heap
burdens the GC. I believe CPS also complicates FFI.

> What I'm getting at is if CPS transformation and tailcalls were so
> performant for exceptions then why bake exceptions into the CIL and
> into the JVM bytecodes?

I believe there are two main reasons:

. Debugging: exceptions provide a lot of trace information. Without a stack,
you don't even get a stack trace with CPS.

. Business: industry values the old far more than it values the new, and it
wants to see minimal overhead added to old techniques. This was seen before in
C++: "you don't pay for what you don't use". The JVM and the CLR were
designed for business and largely adopted this mentality as a consequence.
Don't forget that, when they were introduced, many users feared garbage
collection, let alone tail calls!

Microsoft have put considerable effort into features not found on the JVM,
like efficient tail calls, not only supporting them but even continuing to
aggressively optimize them.

> I really appreciate Scheme because it provides
> the primitives to implement high-level constructs like exceptions (and
> coroutines, backtracking) but I figured because the "big guys" in
> runtime systems baked in particular exception systems then it wasn't
> considered fast enough for this common case.

I believe the designs of the JVM and (to a lesser extent) the CLR were much
more backward looking than forward thinking because that is essential for
commercial success. Had the same effort been put into the best theoretical
design then I'm sure we could have had something much more productive (but
totally incompatible).

John Cowan

unread,
Apr 24, 2008, 1:42:26 AM4/24/08
to jvm-la...@googlegroups.com
On Wed, Apr 23, 2008 at 7:00 PM, Jon Harrop <j...@ffconsultancy.com> wrote:

> > > So it works if you do a CPS transformation on all your code leaving
> > > your frames on the heap.
>
> Yes.

You don't have to use the heap. You can do what Chicken Scheme does:
CPS convert everything, but leave the calls as ordinary calls with the
call frames on the stack; then when the stack gets too big, long-jump
(fire an exception) to reset it and carry on. The calls never return,
so this is safe.

Chicken goes further: it allocates all objects on the stack as well,
and then when the stack is reset all live objects are copied to the
heap. This makes the stack function as the nursery generation of a
multigenerational heap.

See http://home.pipeline.com/~hbaker1/CheneyMTA.html for a brief explication.
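A related, purely portable JVM trick is a trampoline (a hypothetical sketch, not Chicken's actual mechanism, and using Java 8 lambda syntax): each step returns the next step instead of calling it, so the driver loop keeps the stack flat, much as Chicken's long-jump periodically resets it:

```java
// Trampoline sketch: CPS steps are returned, not called, so "recursion"
// of any depth runs in constant stack space.
interface Step { Step next(); }

public class Trampoline {
    static long sum;                         // accumulator for the demo

    static Step count(final long i, final long n) {
        if (i > n) return null;              // finished
        sum += i;
        return () -> count(i + 1, n);        // the tail call, reified as a thunk
    }

    static void run(Step s) {
        while (s != null) s = s.next();      // the driver loop: the stack never grows
    }

    public static void main(String[] args) {
        run(count(1, 100000));               // deep enough to overflow a naive recursion
        System.out.println(sum);
    }
}
```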

hlovatt

unread,
Apr 24, 2008, 9:05:50 PM4/24/08
to JVM Languages
Your results seem to be the outliers - are you certain you are using
the SciGMark benchmarks referenced in the paper?

When I compile the C# benchmarks with:

c:\WINDOWS\Microsoft.NET\Framework\v3.5\csc /optimize+ /debug- /checked- *.cs

And run the large set with:

commandline -large

I get:

SciGMark 1.0 - C# - specialized

FFT (1048576): 68.24358711148
SOR (1000x1000): 336.610476965462
Monte Carlo : 103.895055082435
Sparse matmult (N=100000, nz=1000000): 363.889491977595
LU (1000x1000): 472.132470273794
PolyMult (N=40): 282.144700451176
Composite Score: 271.152630310324

Platform Information
CLR Version: 2.0.50727.1433
Working Set: 21704704

When I compile the Java code with no compiler options and use the
server VM I get:

SciGMark 1.0 - Java - specialized
FFT (1048576): 66.14391779366471
SOR (1000x1000): 862.4075989952408
Monte Carlo : 210.20788286956355
Sparse matmult (N=100000, nz=1000000): 308.99216463285444
LU (1000x1000): 512.0327520949826
PolyMult (N=100): 432.48570135288116

Composite Score: 398.7116696231979

java.vendor: Sun Microsystems Inc.
java.version: 1.6.0_06
os.arch: x86
os.name: Windows XP
os.version: 5.1

Which makes Java about 1.6 times quicker (essentially the same as the
small set results).

Your results don't seem to tally with other people's - are you certain
you are running the right benchmarks? The claim that Java is faster
than C# seems quite reasonable given this set of results - why are
yours different from other people's?

Jon Harrop

unread,
Apr 25, 2008, 6:34:25 AM4/25/08
to jvm-la...@googlegroups.com
On Friday 25 April 2008 02:05:50 hlovatt wrote:
> - are you certain you are running the right benchmarks?

Yes. And I have run them on more than one machine and obtained the same
results.

I also found a bug in the SciGMark code, specifically the C++ was printing the
wrong score for the MultPoly test.

> The claim that Java is faster than C# seems quite reasonable given this set
> of results

You are cherry picking one set of results from one machine that are dominated
by one test (LU). That is bad science.

> - why are yours different than other peoples?

I've Googled for SciMark benchmark results on similar hardware and everything
indicates that my results are perfectly representative.

You never described your machine and operating system precisely. Are you
comparing the JVM running on 64-bit Mac OS X with the CLR running in emulated
32-bit Windows?

hlovatt

unread,
Apr 25, 2008, 6:54:44 AM4/25/08
to JVM Languages
All the Java, C#, and C tests are on Windows XP running under Parallels on
a MacBook Pro: 2.33 GHz Intel Core 2 Duo, 2 GB 667 MHz DDR2 SDRAM.

I also ran the large set compiled with -O3 using g++ (GCC)
3.4.5 (mingw special); the results were:

SciGMark 1.0 - C++ - specialized
Using 2.00 seconds min time per kernel.
Composite Score: 385.09
FFT Mflops: 71.93 (N=1048576)
SOR Mflops: 662.46 (1000 x 1000)
MonteCarlo: Mflops: 90.57
Sparse matmult Mflops: 452.50 (N=100000, nz=1000000)
LU Mflops: 564.25 (M=1000, N=1000)
MultPoly Mflops: 71.93 (N=40)

Which makes Java slightly faster than GCC. So you have to give kudos
to John Rose and Co. at Sun; they have produced a fast VM. Where is the
mistake in the C++ code? I couldn't find it.
> Your results don't seem to tie up with other peoples - are you certain
> you are running the right benchmarks? The claim that Java is faster
> than C# seems quite reasonable given this set of results - why are
> yours different than other peoples?
>
> On Apr 24, 4:34 am, Jon Harrop <j...@ffconsultancy.com> wrote:
>
> > On Wednesday 23 April 2008 19:11:08 Thomas E Enebo wrote:
>
> > > Jon, Can you specify how you are running these benchmarks?
>
> > For C++ and Java, I downloaded the source, compiled it with "make" and ran it.
> > For C#, I copied the source into a VS project, set it to "Release" mode,
> > built and ran it.
>
> > > I did not see version of arguments to JVM (or .Net runtime).
>
> > Neither did I. :-)
>
> > > You are using -server I assume...
>
> > Yes. That is the default here.
>

Attila Szegedi

unread,
Apr 25, 2008, 7:08:09 AM4/25/08
to jvm-la...@googlegroups.com

On 2008.04.25., at 12:54, hlovatt wrote:

> All, Java, C#, & C, tests are on Windows XP running under Parallels on
> a Mac Book Pro., 2.33 GHz Intel Core 2 Duo, 2 GB 667 MHz DDR2 SDRAM.

Benchmarking under a virtualized OS? Kirk just wrote recently about
that: <http://kirk.blog-city.com/can_i_bench_with_virtualization.htm>

Attila.

John Rose

unread,
Apr 25, 2008, 4:54:57 PM4/25/08
to jvm-la...@googlegroups.com

A Java-to-Java comparison (VMWare/Windows to Mac OS X) would be
interesting for VMWare aficionados.

Scimark is a good benchmark for basic CPU/FPU use. It is sensitive
to loop optimizations and array usage patterns, as well as to stray
oddities like how your random number generator is designed. The JVM
does well on loop opts., and there is always more to do (current
bleeding edge is SIMD).

A couple of scimark benchmarks use 2-D arrays (not surprising!) and
the JVM is a little weak there because of the lack of true 2-D
arrays. We have long known how to fix this under the covers, but as
we soberly prioritize our opportunities, we've chosen to work on
other things. An excellent outcome of the OpenJDK is that the
community can now vote with code about which optimizations are most
important.
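[Ed.: the usual hand-rolled workaround, for illustration - not something from the thread - is to flatten the matrix yourself. Java's `double[n][n]` is an array of row objects, so each element access pays an extra pointer chase and bounds check; a single `double[n*n]` with manual row-major indexing behaves like a true 2-D array.]

```java
public class FlatMatrix {
    // Sum all elements of an n-by-n matrix stored as one flat array.
    // m[i * n + j] is the row-major address of element (i, j); there is
    // one contiguous block of doubles rather than n separate row arrays.
    static double sumFlat(double[] m, int n) {
        double s = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += m[i * n + j];   // row-major index arithmetic
        return s;
    }

    public static void main(String[] args) {
        int n = 3;
        double[] m = new double[n * n];
        for (int k = 0; k < n * n; k++) m[k] = k;  // fill with 0..8
        System.out.println(sumFlat(m, n));         // 0+1+...+8 = 36.0
    }
}
```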

At best, this sort of small benchmark will reach C++ levels of
performance on the JVM. (At least until we do really aggressive task
decomposition and use our virtualization freedom to lie about data
structure layouts. But at present the state of the art is to require
heavy input from the programmer for such things.)

At the risk of prolonging the benchmark battle, I have to admit that
scimark is not the sort of app. I had in mind when I was bragging
about the JVM earlier on this thread. (Sorry Fan guys. Major thread
hijack here. Your stuff looks cool, esp. the library agnostic part.)

The JVM's most sophisticated tricks (as enumerated elsewhere) have to
do with optimistic profile-driven optimizations, with deoptimization
backoffs. These show up when the system is large and decoupled
enough that code from point A is reused at point B in a type context
that is unknown at A.

At that point, the JVM (following Smalltalk and Self traditions) can
fill in missing information accumulated during warm-up, which can
drive optimization of point A in terms of specific use cases at point B.

All of this works best when the optimizations are allowed to fail
when the use cases at B change (due to app. phase changes, e.g.) or
when points C and D show up and cause the compilation of A's use
cases to be reconsidered. Key methods get recompiled multiple times
as the app. finds its hot spot.

It is these sorts of optimistic, online optimizations that make the
JVM run faster than C++, when it does. (It does, e.g., when it
inlines hot interface calls and optimizes across the call boundary.)
Microsoft could do so with C# also, but probably not as long as the
C# JIT runs at application load time, which (as I am told by friendly
Microsoft colleagues) it does.
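[Ed.: a hedged sketch of the speculation being described; the class and method names are invented for illustration. While the call site in hot() has only ever observed one receiver class, HotSpot can treat it as monomorphic, inline the implementation, and optimize across the call boundary; the first call with a different class invalidates that speculation, triggering deoptimization and later recompilation.]

```java
interface Op { int apply(int x); }
final class Inc implements Op { public int apply(int x) { return x + 1; } }
final class Dbl implements Op { public int apply(int x) { return x * 2; } }

public class DeoptSketch {
    // Hot loop through an interface call. During warm-up, profiling records
    // which concrete classes reach op; a single observed class lets the JIT
    // inline apply() directly into this loop.
    static long hot(Op op, int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += op.apply(i);
        return acc;
    }

    public static void main(String[] args) {
        long a = hot(new Inc(), 1_000_000);  // call site sees only Inc: monomorphic
        // Passing Dbl breaks the speculation: the JVM deoptimizes hot(),
        // falls back to interpreted code, and recompiles with dispatch
        // that handles both classes.
        long b = hot(new Dbl(), 1_000_000);
        System.out.println(a + " " + b);     // 500000500000 999999000000
    }
}
```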

A final note about C# vs. Java on Intel chips. We have noticed that
the Intel (and AMD) chips are remarkably tolerant of junky object
code. Part of the challenge of JVM engineering is to find
optimizations that tend to make code run better across a range of
platforms with other core competencies (like many-core SPARC,
obviously for Sun).

I speculate that Hotspot has been driven to work harder on clever
optimizations not only because we have competed with other excellent
implementations (IBM J9, BEA JRockit), but also because Java needs to
run on a wider range of chips than C#; some of them are less
forgiving than x86. A way to quantify the "chip factor" would be to
compare the gap between server and client JITs on a range of Java
apps., especially simpler, more "static" ones like SciMark. More
forgiving chips would narrow the gap.

Best wishes,
-- John

hlovatt

unread,
Apr 25, 2008, 7:26:59 PM4/25/08
to JVM Languages
For interest, the same benchmarks on the native Mac OS running
SoyLatte with the -server option and the large data set are:

SciGMark 1.0 - Java - specialized
FFT (1048576): 66.06220438347867
SOR (1000x1000): 788.5439976789686
Monte Carlo : 241.39879385124996
Sparse matmult (N=100000, nz=1000000): 425.0726583860497
LU (1000x1000): 414.8516922439953
PolyMult (N=100): 531.6603292535906

Composite Score: 411.2649459662221

java.vendor: Sun Microsystems Inc.
java.version: 1.6.0_03-p3
os.arch: amd64
os.name: Darwin
os.version: 9.2.2

Which is a little faster - so the overhead of Parallels plus Windows
is small compared to native 10.5.2.

It would be interesting to see some other benchmarks that are more
like a typical OO application. The Scimark tests, whilst typical of
scientific applications, are not like typical OO applications; few
virtual methods, little GC, etc.

Sorry to the Fan people for hijacking their thread.

Jon Harrop

unread,
Apr 25, 2008, 9:10:23 PM4/25/08
to jvm-la...@googlegroups.com
On Friday 25 April 2008 21:54:57 John Rose wrote:
> At the risk of prolonging the benchmark battle, I have to admit that
> scimark is not the sort of app. I had in mind when I was bragging
> about the JVM earlier on this thread.

Ah, I see. Can you cite a more suitable benchmark?

John Rose

unread,
Apr 25, 2008, 9:25:49 PM4/25/08
to jvm-la...@googlegroups.com
On Apr 25, 2008, at 6:10 PM, Jon Harrop wrote:

Ah, I see. Can you cite a more suitable benchmark?


Charlie's work with JRuby performance comes to mind.  There might be interesting comparisons possible with IronRuby.

Any of the larger spec benchmarks probably has enough complexity to show off the effects of profiling and delayed optimization.

-- John

Kirk Pepperdine

unread,
Apr 26, 2008, 12:36:11 AM4/26/08
to jvm-la...@googlegroups.com
On Fri, Apr 25, 2008 at 10:54 PM, John Rose <John...@sun.com> wrote:

On Apr 25, 2008, at 4:08 AM, Attila Szegedi wrote:

> On 2008.04.25., at 12:54, hlovatt wrote:
>
>> All, Java, C#, & C, tests are on Windows XP running under
>> Parallels on
>> a Mac Book Pro., 2.33 GHz Intel Core 2 Duo, 2 GB 667 MHz DDR2 SDRAM.
>
> Benchmarking under a virtualized OS? Kirk just wrote recently about
> that: <http://kirk.blog-city.com/can_i_bench_with_virtualization.htm>
Not the best blog entry. The problems I've seen with virtualization all have to do with device interactions; sound and network are the most visible. I don't know whether it applies to straight CPU utilization, as I didn't take the time to investigate, but I get the impression that there is a bit of a hit on the CPU.


A final note about C# vs. Java on Intel chips.  We have noticed that
the Intel (and AMD) chips are remarkably tolerant of junky object
code.  Part of the challenge of JVM engineering is to find
optimizations that tend to make code run better across a range of
platforms with other core competencies (like many-core SPARC,
obviously for Sun).

What I've been able to see from my benchmarks is an ordering in conservatism: Intel appears to be very conservative, AMD less so, leaving SPARC the least conservative. I also see a wee bias from you guys towards SPARC, not that there is anything wrong with that ;-)



I speculate that Hotspot has been driven to work harder on clever
optimizations not only because we have competed with other excellent
implementations (IBM J9, BEA JRockit),

I would agree with you here, John, in that there are many features where the JVM is far ahead of the CLR. I attribute this to everyone being pushed, as well as everyone learning from each other. This just isn't happening in CLR land.

Regards,
Kirk

Best wishes,
-- John





--
Kind regards,
Kirk Pepperdine

http://www.kodewerk.com
http://www.javaperformancetuning.com
http://www.cretesoft.com

Steven Shaw

unread,
Apr 27, 2008, 10:20:49 AM4/27/08
to Jon Harrop, jvm-la...@googlegroups.com
2008/4/24 Jon Harrop <j...@ffconsultancy.com>:

> You may like the book "Compiling with continuations" by Appel.

Thanks Jon. I really appreciate your whole reply. Appel's book is on
my todo list :)

> I believe the designs of the JVM and (to a lesser extent) the CLR were much
> more backward looking than forward thinking because that is essential for
> commercial success. Had the same effort been put into the best theoretical
> design then I'm sure we could have had something much more productive (but
> totally incompatible).

Hopefully with the DVM (MLVM) we will see a more forward thinking VM
in the near future.

Perhaps some languages would benefit from a VM with heap-based frames
and primitive closures that could live on top of the DVM, optimising
heap frames to stack frames and performing other optimisations when possible.

Steve.

Charles Oliver Nutter

unread,
Apr 27, 2008, 1:55:48 PM4/27/08
to jvm-la...@googlegroups.com, Jon Harrop

Ruby would! Ruby would!

- Charlie

hlovatt

unread,
Apr 29, 2008, 10:49:48 PM4/29/08
to JVM Languages
For interest here are some benchmarks for Scigmark running on the
server VM with the large data set. First Apple 1.5:

SciGMark 1.0 - Java - specialized
FFT (1048576): 62.711646778865195
SOR (1000x1000): 802.7910624263924
Monte Carlo : 68.74147451132121
Sparse matmult (N=100000, nz=1000000): 389.20565429955906
LU (1000x1000): 485.554737191564
PolyMult (N=100): 544.318902672618

Composite Score: 392.22057964671995

java.vendor: Apple Inc.
java.version: 1.5.0_13
os.arch: i386
os.name: Mac OS X
os.version: 10.5.2

Apple 1.6:

SciGMark 1.0 - Java - specialized
FFT (1048576): 60.0700425355947
SOR (1000x1000): 809.360923579094
Monte Carlo : 247.8628256348222
Sparse matmult (N=100000, nz=1000000): 392.63803077958215
LU (1000x1000): 482.56725581280824
PolyMult (N=100): 530.6558813726939

Composite Score: 420.5258266190992

java.vendor: Apple Inc.
java.version: 1.6.0_05
os.arch: x86_64
os.name: Mac OS X
os.version: 10.5.2

SoyLatte:

SciGMark 1.0 - Java - specialized
FFT (1048576): 62.49180158350285
SOR (1000x1000): 776.1668443011343
Monte Carlo : 242.5986885023368
Sparse matmult (N=100000, nz=1000000): 414.2394617450079
LU (1000x1000): 399.2015785688642
PolyMult (N=100): 461.52110196389737

Composite Score: 392.7032461107906

java.vendor: Sun Microsystems Inc.
java.version: 1.6.0_03-p3
os.arch: amd64
os.name: Darwin
os.version: 9.2.2

All runs are for a MacBook Pro 2.33 GHz Intel Core 2 Duo, 2 GB 667 MHz
DDR2 SDRAM on OS X 10.5.2 (despite what SoyLatte thinks), with my
typical mix of programs running (Safari, Entourage, Word, Excel,
iTunes, and Firefox). The benchmarks were all run from within NetBeans.

Apple 1.6 appears to be a bit quicker, but there is not much in it for
this benchmark.