If the allocator accounts for 50% of the running time, removing the
allocator time entirely would still leave a timing about 2x slower than
Python.
Regarding "newness", there's nothing on the Go page that says "alpha",
"beta", or "untuned". It claims "fast" "safe" "concurrent" - and I
came in with a certain level of belief. If I was told this is an alpha
implementation that's not ready for prime time, I'd be willing to give
it more slack --- and let me know when it's released.
Out of my 5 microbenchmarks, the only time Go beat Python and Ruby 1.9
was in a simple computation loop with no object references or function
calls. In that test, Go was about 100x faster, implying a Mandelbrot
benchmark would run fast.
That's pretty swank.
Regrettably, little of my real-world code is numerical.
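A loop of roughly that shape - pure arithmetic, no object references,
no function calls in the body - might look like this (a hypothetical
sketch for illustration; it is not my actual benchmark code):

```go
package main

import "fmt"

// sumOfSquares is a tight numeric loop with no allocation and no
// calls in the body -- the kind of code where a compiled language
// can easily outrun an interpreter by a large constant factor.
func sumOfSquares(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i * i
	}
	return total
}

func main() {
	fmt.Println(sumOfSquares(10)) // 0+1+4+...+81 = 285
}
```

A Mandelbrot inner loop is the same kind of code: a few multiplies and
adds per iteration, nothing touching the allocator.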
I think it's a little disingenuous to create a firewall between the
speed of the language and the speed of the runtime libraries. They
*are* different, but when we're talking about the language builtins,
the distinction is thin. It's like arguing that the language might be
fast, but the implementation is slow because the compiler is bad. The
speed of a language is defined by its best implementation.
Go's designers made a choice to force all object allocation through a
GC system, with the bold statement that modern GCs are nearly
equivalent to explicit-free systems. I would expect, even at this
early stage in a language's development, to see strong runtime
performance from the core builtin collection classes. Otherwise, I
might get the impression this bet is wrong.
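For concreteness, here's a hypothetical sketch of the kind of
builtin-collection workload I mean (names and data invented for
illustration): every map insert below goes through the GC-managed
allocator, with no way to free the entries explicitly, so the
performance of code like this rides entirely on the runtime.

```go
package main

import "fmt"

// buildIndex counts word occurrences in a map. All of the map's
// internal storage is allocated by the garbage-collected runtime;
// there is no explicit-free escape hatch in Go.
func buildIndex(words []string) map[string]int {
	counts := make(map[string]int)
	for _, w := range words {
		counts[w]++
	}
	return counts
}

func main() {
	c := buildIndex([]string{"go", "python", "go"})
	fmt.Println(c["go"], c["python"]) // 2 1
}
```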
-brianb