
benchmarking various schemes


fft1976

May 19, 2009, 3:42:10 AM
Has anyone run these benchmarks on more Schemes?

http://www.iro.umontreal.ca/~gambit/bench.html

I'm especially curious how Chez and Larceny do there.

By the way, why are the results so unpredictable? Gambit-C seems to do
as well as Bigloo or better most of the time, but is over 10 times
slower on some problems.

Pascal J. Bourguignon

May 19, 2009, 4:06:27 AM
fft1976 <fft...@gmail.com> writes:

Because there are lies, damn lies and benchmarks.

--
__Pascal Bourguignon__

fft1976

May 19, 2009, 4:31:11 AM
On May 19, 1:06 am, p...@informatimago.com (Pascal J. Bourguignon)
wrote:

Say people whose implementations don't do very well in benchmarks.

namekuseijin

May 19, 2009, 1:58:29 PM
On May 19, 4:42 am, fft1976 <fft1...@gmail.com> wrote:
> Has anyone run these benchmarks on more Schemes?
>
> http://www.iro.umontreal.ca/~gambit/bench.html

Many implementations seem to use the well thought-out Gambit
benchmarks. Mosh comes with them; I think Ikarus uses them too.

higepon

May 20, 2009, 3:40:52 AM

mosh-0.1.2.tar.gz has the Gambit benchmarks in its bench directory.
You can run them easily.

% mosh bench/run-mosh.scm

Cheers

felix

May 20, 2009, 2:32:41 PM

These benchmarks are necessarily rigged. An implementor will always,
deliberately or inadvertently, skew the tests in favour of his/her own
implementation. As much as I respect Marc Feeley: using suboptimal
optimization settings and obsolete versions of some implementations
does not appear to be very scientific.

The site notes that:

"... The benchmarks were carefully written to defeat some compiler
optimizations such as partial or total evaluation of the benchmark at
compile time...".

That would need deep internal knowledge of the tested implementations,
and careful analysis of the generated code (and the latter would only
really be possible for the respective implementors of the systems
tested).

"... Moreover it is common in Bigloo to specify the type of parameters
on exported functions, so the performance obtained with this
experimental setup may not be representative of the performance
commonly achieved with Bigloo."

That makes clear that the benchmarks are purely artificial and may not
even be of relevance to someone who uses Bigloo exactly as it is
commonly used.

I don't say this to bash Marc, I just want to make sure you understand
that the results are meaningless. I recommend you write a large,
useful application, contact the implementors of the systems you want
to compare, get intimately familiar with the compilers, and tune every
single function and algorithm with the help of said implementors until
really nothing more can be done. Then compare the directly observed
performance.
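Concretely, comparing "directly observed performance" just means timing the real workload itself, repeatedly, and reporting the best run. A minimal sketch of such a harness, in Python rather than any of the Schemes discussed here (the workload is a made-up stand-in, not from any benchmark suite):

```python
import timeit

def workload():
    # stand-in for the real application kernel being compared
    return sum(i * i for i in range(10_000))

# Repeat the measurement several times and keep the best run;
# the minimum damps scheduler and cache noise better than the mean.
best = min(timeit.repeat(workload, number=100, repeat=5))
print(f"best of 5 runs of 100 calls: {best:.4f}s")
```

The point of felix's advice is that only this kind of measurement, on your own tuned code, tells you anything about your own use case.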


cheers,
felix

namekuseijin

May 20, 2009, 4:25:13 PM
felix wrote:

> get intimately familiar with the compilers, and tune every single
> function and algorithm with the help of said implementors until really
> nothing more can be done. Then compare the directly observed
> performance.

I think his point was to compare Scheme code, not Bigloo Scheme to
Gambit Scheme to PLT Scheme or whatever.

It uses truly generic Scheme code in a nice set of different tasks.
It's comparing how well pure Scheme code runs in each implementation
without resorting to specific features.

I believe comparing directly observed performance from such benchmarks
is amusing, but I still find it more amusing the benchmarks in the
so-called Language Shootout:
http://shootout.alioth.debian.org/

Here, some Scheme implementations are put against several other language
implementations, and mostly always lose. More fun than that though is
looking at fast Haskell code being compared to fast C code and realizing
they did it by basically implementing write-only and cryptic low-level,
stateful monadic C in Haskell. :)

damn lies, indeed... :)

--
a game sig: http://tinyurl.com/d3rxz9

fft1976

May 20, 2009, 6:50:03 PM
On May 20, 11:32 am, felix <bunny...@gmail.com> wrote:
> On 19 Mai, 09:42, fft1976 <fft1...@gmail.com> wrote:
>
> > Has anyone run these benchmarks on more Schemes?
>
> >http://www.iro.umontreal.ca/~gambit/bench.html
>
> > I'm especially curious how Chez and Larceny do there.
>
> > By the way, why are the results so unpredictable? Gambit-C seems to do
> > as well as Bigloo or better most of the time, but is over 10 times
> > slower on some problems.
>
> These benchmarks are necessarily rigged. An implementor will always
> deliberately or undeliberately skew the tests for his/her
> implementation.
> As much as I respect Marc Feeley: using suboptimal optimization
> settings
> and obsolete versions of some implementations does not appear to be
> very scientific.

Is there a web site that shows the results of running the same
benchmarks using non-obsolete Chicken?


> "... Moreover it is common in Bigloo to specify the type of parameters
> on exported functions,

I'm assuming the last two tables (using fixnum/flonum specializations)
take advantage of knowing the types statically.

> I recommend you write a large,
> useful
> application, contact the implementors of the systems you want to
> compare,
> get intimately familiar with the compilers, and tune every single
> function and algorithm with the help of said implementors until really
> nothing more can be done.

That's not very realistic, because (1) pure Scheme is not enough, so
you end up writing for a particular implementation, and (2) not
everyone has time to get intimately familiar with implementations
(especially all of them).

For me, Scheme is just a prototyping tool. If I end up thinking that
there is little to be gained by rewriting in C or C++, then the
prototype evolves into the final code (that's obviously a plus). If
not, speed is of secondary importance, but still, all things being
equal I would choose an implementation that compiles to faster code.

I'm sort of leaning towards Gambit: the implementor's design decisions
mostly agree with my taste, but I'm concerned about FFI limitations (I
asked here and on gambit-list - no answers there).

(Another weird thing I noticed in Gambit, by the way: structures can
be written, but not read)


Isaac Gouy

May 20, 2009, 11:12:59 PM
On May 20, 4:25 pm, namekuseijin <namekusei...@gmail.com> wrote:
-snip-

> I believe comparing directly observed performance from such benchmarks
> is amusing, but I still find it more amusing the benchmarks in the
> so-called Language Shootout: http://shootout.alioth.debian.org/


I find it instructive that someone mocking the name still hasn't
noticed that the name was changed 2 years ago - after the Virginia
Tech shootings.


> Here, some Scheme implementations are put against several other language
> implementations, and mostly always lose.

10x slower than GNU C++
10x faster than Ruby 1.8

Win some, lose some.

>  More fun than that though is
> looking at fast Haskell code being compared to fast C code and realizing
> they did it by basically implementing write-only and cryptic low-level,
> stateful monadic C in Haskell. :)
>
> damn lies, indeed... :)


Someone who actually looked would notice the other Haskell programs -
like the Haskell meteor-contest program that's so much more concise
than all the rest:

http://shootout.alioth.debian.org/u32q/benchmark.php?test=meteor&lang=all&sort=gz

fft1976

May 21, 2009, 2:20:03 AM
On May 20, 8:12 pm, Isaac Gouy <igo...@yahoo.com> wrote:

> I find it instructive that someone mocking the name still hasn't
> noticed that the name was changed 2 years ago - after the Virginia Tech
> shootings.

Was the perp a regular contributor to the project?!

Nicolas Neuss

May 21, 2009, 4:13:19 AM
Isaac Gouy <igo...@yahoo.com> writes:

> I find it instructive that someone mocking the name still hasn't
> noticed that the name was changed 2 years ago - after the Virginia Tech
> shootings.

Ah, it's "The Computer Language Benchmarks Game" now... Is there some
respectable person who can confirm that also the quality of that stuff
changed for the better and that it pays off to take another look?

My experiences from some years ago fit with those described by Juho
Snellman in this post:
http://groups.google.gr/group/comp.lang.lisp/msg/5489247d2f56a848

Nicolas


Isaac Gouy

May 21, 2009, 1:40:14 PM
On May 21, 1:13 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
wrote:


**from some years ago**

And in 2006 Juho Snellman was already speaking of his experience
during autumn/winter 2005 when the old Doug Bagley tests were being
replaced.

And in 2006 Juho Snellman seemed unaware that his source code was
still available -

http://groups.google.com/group/comp.lang.lisp/msg/32ef1a6cec1481a1


But you already know these things because you made the same comments
and were answered back in 2007.

http://groups.google.com/group/comp.lang.lisp/msg/583816770682e18a?hl=en


namekuseijin

May 21, 2009, 2:26:53 PM
On May 21, 12:12 am, Isaac Gouy <igo...@yahoo.com> wrote:
> On May 20, 4:25 pm, namekuseijin <namekusei...@gmail.com> wrote:
> > I believe comparing directly observed performance from such benchmarks
> > is amusing, but I still find it more amusing the benchmarks in the
> > so-called Language Shootout: http://shootout.alioth.debian.org/
>
> I find it instructive that someone mocking the name still hasn't
> noticed that the name was changed 2 years ago - after the Virginia Tech
> shootings.

It's still in the url though. I was not mocking, it's genuinely
amusing.

> > Here, some Scheme implementations are put against several other language
> > implementations, and mostly always lose.
>
> 10x slower than GNU C++
> 10x faster than Ruby 1.8
>
> Win some, lose some.

Win to the usual losers, lose to the best. Why compare with
performance suckers?

> >  More fun than that though is
> > looking at fast Haskell code being compared to fast C code and realizing
> > they did it by basically implementing write-only and cryptic low-level,
> > stateful monadic C in Haskell. :)
>
> > damn lies, indeed... :)
>
> Someone who actually looked would notice the other Haskell programs -
> like the Haskell meteor-contest program that's so much more concise
> than all the rest:
>

> http://shootout.alioth.debian.org/u32q/benchmark.php?test=meteor&lang...

It's not the norm. The norm is that the beautiful and concise Haskell
code is generally much slower than the cryptic C/C++ equivalents, and
thus they go as low as them to prove the performance point except code
now looks not just cryptic but also downright insane.

Isaac Gouy

May 21, 2009, 3:03:35 PM
On May 21, 2:26 pm, namekuseijin <namekusei...@gmail.com> wrote:
> On May 21, 12:12 am, Isaac Gouy <igo...@yahoo.com> wrote:
>
> > On May 20, 4:25 pm, namekuseijin <namekusei...@gmail.com> wrote:
> > > I believe comparing directly observed performance from such benchmarks
> > > is amusing, but I still find it more amusing the benchmarks in the
> > > so-called Language Shootout: http://shootout.alioth.debian.org/
>
> > I find it instructive that someone mocking the name still hasn't
> > noticed that the name was changed 2 years ago - after the Virginia Tech
> > shootings.
>
> It's still in the url though.  I was not mocking, it's genuinely
> amusing.


Is the URL what something is called?


>
> > > Here, some Scheme implementations are put against several other language
> > > implementations, and mostly always lose.
>
> > 10x slower than GNU C++
> > 10x faster than Ruby 1.8
>
> > Win some, lose some.
>
> Win to the usual losers, lose to the best.  Why compare with
> performance suckers?


Compare to SBCL if you like.
Compare to Python if you like.

>
> > >  More fun than that though is
> > > looking at fast Haskell code being compared to fast C code and realizing
> > > they did it by basically implementing write-only and cryptic low-level,
> > > stateful monadic C in Haskell. :)
>
> > > damn lies, indeed... :)
>
> > Someone who actually looked would notice the other Haskell programs -
> > like the Haskell meteor-contest program that's so much more concise
> > than all the rest:
>

> >http://shootout.alioth.debian.org/u32q/benchmark.php?test=meteor&lang...


>
> It's not the norm.  The norm is that the beautiful and concise Haskell
> code is generally much slower than the cryptic C/C++ equivalents, and
> thus they go as low as them to prove the performance point except code
> now looks not just cryptic but also downright insane.


Did you look? That beautiful and concise Haskell program /is/ nearly
40x slower than the cryptic C/C++ equivalents.

Whether low-level Haskell is "downright insane" is something you
should take up in the Haskell-Cafe.

namekuseijin

May 21, 2009, 3:18:28 PM
Isaac Gouy wrote:

> On May 21, 2:26 pm, namekuseijin <namekusei...@gmail.com> wrote:
>> On May 21, 12:12 am, Isaac Gouy <igo...@yahoo.com> wrote:
>> Win to the usual losers, lose to the best. Why compare with
>> performance suckers?
>
> Compare to SBCL if you like.
> Compare to Python if you like.

Yes, and BTW Python with Psyco has some kickass performance by just
importing a lib and initializing it. Great, but possibly not stable
enough for real-world code rather than benchmarks...
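The Psyco idiom really was just an import plus one call. A sketch, assuming a Python 2-era interpreter with the psyco package installed; on any modern Python the import simply fails and the code below runs unoptimized:

```python
# Enabling Psyco's JIT (historical, Python 2-only library):
try:
    import psyco
    psyco.full()        # JIT-compile every function as it is first called
except ImportError:
    pass                # no Psyco available: fall back to the plain interpreter

def fib(n):
    # the kind of loop/recursion-heavy kernel Psyco sped up dramatically
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))
```

Either way the program computes the same result; Psyco only changed how fast it got there.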

>>> http://shootout.alioth.debian.org/u32q/benchmark.php?test=meteor&lang...
>> It's not the norm. The norm is that the beautiful and concise Haskell
>> code is generally much slower than the cryptic C/C++ equivalents, and
>> thus they go as low as them to prove the performance point except code
>> now looks not just cryptic but also downright insane.
>
> Did you look? That beautiful and concise Haskell program /is/ nearly
> 40x slower than the cryptic C/C++ equivalents.

No, I didn't look, I looked at some of the other Haskell fast code for
these benchmarks before. If it's that slow why did you mention it
anyway when I was specifically talking about the state of their fast
code? I know their slow code is beautiful and concise...

Isaac Gouy

May 21, 2009, 9:12:08 PM
On May 21, 12:18 pm, namekuseijin <namekusei...@gmail.com> wrote:
> Isaac Gouy wrote:
>
> > On May 21, 2:26 pm, namekuseijin <namekusei...@gmail.com> wrote:
> >> On May 21, 12:12 am, Isaac Gouy <igo...@yahoo.com> wrote:
> >> Win to the usual losers, lose to the best.  Why compare with
> >> performance suckers?
>
> > Compare to SBCL if you like.
> > Compare to Python if you like.
>
> Yes, and BTW Python with Psyco has some kickass performance by just
> importing a lib and initializing it.  Great, but possibly not stable
> enough for real-world code rather than benchmarks...


Psyco was measured, development moved to PyPy.

Page-search for Psyco on the benchmarks game home page.

>
> >>>http://shootout.alioth.debian.org/u32q/benchmark.php?test=meteor&lang...
> >> It's not the norm.  The norm is that the beautiful and concise Haskell
> >> code is generally much slower than the cryptic C/C++ equivalents, and
> >> thus they go as low as them to prove the performance point except code
> >> now looks not just cryptic but also downright insane.
>
> > Did you look? That beautiful and concise Haskell program /is/ nearly
> > 40x slower than the cryptic C/C++ equivalents.
>
> No, I didn't look, I looked at some of the other Haskell fast code for
> these benchmarks before.  If it's that slow why did you mention it
> anyway when I was specifically talking about the state of their fast
> code?  I know their slow code is beautiful and concise...


I mentioned it because you didn't seem to have noticed that beautiful
and concise Haskell programs are being shown alongside low-level
Haskell programs.


felix

May 22, 2009, 3:52:06 AM
On 21 Mai, 00:50, fft1976 <fft1...@gmail.com> wrote:
>
> Is there a web site that shows the results of running the same
> benchmarks using non-obsolete Chicken?

No, there isn't.

> > I recommend you write a large,
> > useful
> > application, contact the implementors of the systems you want to
> > compare,
> > get intimately familiar with the compilers, and tune every single
> > function and algorithm with the help of said implementors until really
> > nothing more can be done.
>
> That's not very realistic, because (1) pure Scheme is not enough, you
> end up writing for a particular implementation (2) not everyone has
> time to get intimately familiar with implementations (especially all
> of them).

Precisely. A serious performance analysis takes time and effort.

>
> For me, Scheme is just a prototyping tool. If I end up thinking that
> there is little to be gained by rewriting in C or C++, then the
> prototype evolves into the final code (that's obviously a plus). If
> not, speed is of secondary importance, but still, all things being
> equal I would choose an implementation that compiles to faster code.

And I would probably do the same! Unfortunately (fortunately) all
things are never equal.

>
> I'm sort of leaning towards Gambit: the implementor's design decisions
> mostly agree with my taste, but I'm concerned about FFI limitations (I
> asked here and on gambit-list - no answers there).

And that's the important part: it fits you and your programming style,
and that is fine. But I see too often that Scheme implementors are
obsessed with performance and, even while they know better, still
can't resist the temptation to publish their necessarily biased
benchmarking results, which give a totally wrong impression and often
have some pseudo-scientific touch. It's simply the wrong thing to do.

I'm actually quite convinced that Gambit is indeed faster than, say,
CHICKEN, and I don't have a problem with that. Its compilation
strategy gives tighter code on which gcc can optimize better, even
though I think single-host mode stresses gcc in unhealthy ways. So,
I'd guess, yes, Gambit will likely generate faster code in many
situations. Note that this is a purely subjective impression and I
have no evidence for it. I'm sure such evidence cannot possibly exist.


cheers,
felix

felix

May 22, 2009, 4:01:42 AM
On 21 Mai, 19:40, Isaac Gouy <igo...@yahoo.com> wrote:
>
> > Ah, it's "The Computer Language Benchmarks Game" now...  Is there some
> > respectable person who can confirm that also the quality of that stuff
> > changed for the better and that it pays off to take another look?
>
> > My experiences from some years ago fit with those described by Juho
> > Snellman in this post:http://groups.google.gr/group/comp.lang.lisp/msg/5489247d2f56a848
>

I had the very same experience. The language shootout is a joke. There
is no such thing as idiomatic Scheme, or Haskell, or C, and the
results are a meaningless waste of clock-cycles. At least Bagley had
the decency to point out everywhere how questionable any
interpretation of the results was. Now, I know how much fun
benchmarking is, I love to do it myself, but I don't publish the
results and try to give the impression that the results have any kind
of meaning other than how much or little I know about the language
implementations I have tested.

The shootout folks should realize that they would have to become
experts in the language and implementation of every system they
test to justify what they call "idiomatic" or what programming
technique exploits implementation-specific features (which every
user of such a system would *of course* use happily).


cheers,
felix

fft1976

May 22, 2009, 5:19:26 AM
On May 22, 1:01 am, felix <bunny...@gmail.com> wrote:

> I had the very same experience. The language shootout is a joke. There
> is no such thing as idiomatic Scheme, or Haskell, or C, and the
> results are a meaningless waste of clock-cycles. At least Bagley
> had the decency to point out everywhere how questionable any
> interpretation
> of the results was.

I've seen people who are unaware, for example, that Python will tend
to be much slower than C. They may spend a lot of time learning
Python, (re)implementing something in it, only to eventually find out
that they've been misinformed and their effort was a waste. To them,
the Shootout is useful (if they find it). Of course, it could be
improved in many ways.

Nicolas Neuss

May 22, 2009, 8:57:40 AM
Isaac Gouy <igo...@yahoo.com> writes:

> But you already know these things because you made the same comments and
> were answered back in 2007.
>
> http://groups.google.com/group/comp.lang.lisp/msg/583816770682e18a?hl=en

Yes, thanks for digging that out. I was too idle and only googling for
something like "Juho Snellman shootout" which I find very amusingly
written.

You see, that is the ugly thing about a bad reputation you earned
*some years ago*. I will only look again at the shootout if someone I
can more or less trust (e.g. because of useful contributions to either
comp.lang.lisp or comp.lang.scheme) tells me that your "Benchmark
game" has improved quite a lot and is a useful endeavour now. Posts
like that of Felix, OTOH, strengthen my reservations.

Nicolas

Isaac Gouy

May 22, 2009, 12:30:03 PM
On May 22, 1:01 am, felix <bunny...@gmail.com> wrote:
> On 21 Mai, 19:40, Isaac Gouy <igo...@yahoo.com> wrote:
>
>
>
> > > Ah, it's "The Computer Language Benchmarks Game" now...  Is there some

> > > respectable person who can confirm that also the quality of that stuff
> > > changed for the better and that it pays off to take another look?
>
> > > My experiences from some years ago fit with those described by Juho
> > > Snellman in this post:http://groups.google.gr/group/comp.lang.lisp/msg/5489247d2f56a848
>
> I had the very same experience. The language shootout is a joke. There
> is no such thing as idiomatic Scheme, or Haskell, or C, and the
> results are a meaningless waste of clock-cycles. At least Bagley
> had the decency to point out everywhere how questionable any
> interpretation of the results was.


I can't force you to read the "Flawed Benchmarks" page that's been
shown on the website for the last 4 years. It's linked from the first
section of the FAQ. Every page on the website shouts "(Read the
FAQ!)".

Decency? Isn't it just ordinary to at least look at something before
denigrating it?


> Now, I know how much fun benchmarking is, I love
> do to it myself, but I don't publish the results and try to give the
> impression that the results have any kind of meaning other than how
> much or little I know about the language implementations I have
> tested.
>
> The shootout folks should realize that they would have to become
> experts in the language and implementation of every system they
> test to justify what they call "idiomatic" or what programming
> technique exploits implementation-specific features (which every
> user of such a system would *of course* use happily).


Where exactly do the 'shootout folks' say anything about 'what they
call "idiomatic"' or ... ?

Isaac Gouy

May 22, 2009, 1:00:11 PM
On May 22, 8:57 am, Nicolas Neuss <lastn...@math.uni-karlsruhe.de>
wrote:


The ugly thing I see is that someone who has made a positive
contribution in the past now seems set on nothing more than malign
gossip.

There are so many honest criticisms that could be made of the
benchmarks game but I haven't seen them here.


felix

May 23, 2009, 9:25:46 AM
On 22 Mai, 18:30, Isaac Gouy <igo...@yahoo.com> wrote:
>
> I can't force you to read the "Flawed Benchmarks" page that's been
> shown on the website for the last 4 years. It's linked from the first
> section of the FAQ. Every page on the website shouts "(Read the
> FAQ!)".

Sorry, not strong enough.

> > The shootout folks should realize that they would have to become
> > experts in the language and implementation of every system they
> > test to justify what they call "idiomatic" or what programming
> > technique exploits implementation-specific features (which every
> > user of such a system would *of course* use happily).
>
> Where exactly do the 'shootout folks' say anything about 'what they
> call "idiomatic"' or ... ?

By rejecting valid benchmark code by contributors, of course (I know
of at least one case). But what really bothers me is that people take
the shootout as something that gives one some magic sort of absolute
performance-index, something that could be taken as some indication of
quality. That is wrong, deeply wrong. In my opinion this isn't pointed
out clearly enough. Something that started as a one-man fun project
has transmogrified into a pseudo-scientific performance oracle (nice
graphs, though - a bit hard to read if you ask me) and then distorts
the results by changing the rules - arbitrarily, to the casual
observer. This in turn is not particularly encouraging for those who
wrote code which has been made obsolete.

But it is of course understandable. Comparing apples with oranges is
not an easy thing to do and will take many many attempts to get (never
completely) right.


cheers,
felix

Isaac Gouy

May 23, 2009, 2:43:45 PM
On May 22, 2:19 am, fft1976 <fft1...@gmail.com> wrote:
> On May 22, 1:01 am, felix <bunny...@gmail.com> wrote:
>
> > I had the very same experience. The language shootout is a joke. There

> > is no such thing as idiomatic Scheme, or Haskell, or C, and the
> > results are a meaningless waste of clock-cycles. At least Bagley
> > had the decency to point out everywhere how questionable any
> > interpretation
> > of the results was.
>
> I've seen people who are unaware, for example, that Python will tend
> to be much slower than C. They may spend a lot of time learning
> Python, (re)implementing something in it, only to eventually find out
> that they've been misinformed and their effort was a waste. To them,
> the Shootout is useful (if they find it). Of course, it could be
> improved in many ways.


If you have any concrete suggestions for improvement make them in the
discussion forum.

Isaac Gouy

May 23, 2009, 5:06:11 PM
On May 23, 6:25 am, felix <bunny...@gmail.com> wrote:
> On 22 Mai, 18:30, Isaac Gouy <igo...@yahoo.com> wrote:
>
>
>
> > I can't force you to read the "Flawed Benchmarks" page that's been
> > shown on the website for the last 4 years. It's linked from the first
> > section of the FAQ. Every page on the website shouts "(Read the
> > FAQ!)".
>
> Sorry, not strong enough.


It exists - which is more than you seemed to credit.

Your opinion on whether it's "strong enough" is ... opinion.

> > > The shootout folks should realize that they would have to become


> > > experts in the language and implementation of every system they
> > > test to justify what they call "idiomatic" or what programming
> > > technique exploits implementation-specific features (which every
> > > user of such a system would *of course* use happily).
>

> > Where exactly do the 'shootout folks' say anything about 'what they


> > call "idiomatic"' or ... ?
>
> By rejecting valid benchmark code by contributors, of course
> (I know of at least one case).


I ask "Where exactly..." and your response is to vaguely say you know
of a case without showing anything we can look at!

Is it a secret?

> But what really bothers me is
> that people take the shootout as something that gives one
> some magic sort of absolute performance-index, something
> that could be taken as some indication of quality. That is
> wrong, deeply wrong. In my opinion this isn't pointed out
> clearly enough.


Some will misinterpret; Some will have to be shown.

Some don't understand; Some don't want to understand.

My favorites are those who provide a URL to the benchmarks game
(sometimes a very specific URL) and then brazenly claim the complete
opposite of what the data shows - presumably they judge that no one
will actually look for themselves.

> nice graphs, though - a bit hard to read if you ask me

Even without asking, I'd hope for some useful complaint that didn't
immediately require - 'In what way is it a bit hard to read for you?'
- just to get started.

> and then distorts the results by changing the rules -
> arbitrarly, to the casual observer. This in turn is not
> particularly encouraging for those who wrote code which
> has been made obsolete.

Here we go again.
What exactly are you referring to?
Autumn/Winter 2005 when the old Doug Bagley tests were being
replaced?

Nicolas Neuss

May 24, 2009, 4:41:53 AM
Isaac Gouy <igo...@yahoo.com> writes:

> Here we go again. What exactly are you referring to? Autumn/Winter 2005
> when the old Doug Bagley tests were being replaced?

Maybe - at least my problems arose around that time. But I think felix
aims at something more fundamental: Benchmarking is a tricky subject and
the outcome depends very much on the persons running the benchmark. What
concerns me: I do not trust in these persons anymore (in their objectivity,
in their good taste, in that they will not render useless any work that I
invest in this stuff). And I am still waiting for someone with credentials
who tells me that there is hope that something has improved.

Nicolas
