I think the ray tracer and symbolic simplifier are very valuable benchmarks
but I would also like to see regexps, program evaluators (rewriters,
interpreters and compilers) and other benchmarks such as the "n"th-nearest
neighbour example from my book:
http://www.ffconsultancy.com/products/ocaml_for_scientists/complete.html
Perhaps even collating example programs where performance is of little
relevance, such as GUI Sudoku solvers, Wade's Game of Concentration, OpenGL
examples, web programming and so on.
I think such a resource would be both valuable and interesting to a wide
variety of readers but, most of all, I see it as an opportunity to
demonstrate applications where functional programming shines.
--
Dr Jon D Harrop, Flying Frog Consultancy
The OCaml Journal
http://www.ffconsultancy.com/products/ocaml_journal/?usenet
Jon Harrop wrote:
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?
Nope.
hth,kt
Not me either.
Personally, I don't really classify CL as an FP language, even though
we have some operators that are related to those used in FP languages.
Those operators are in some ways specifically designed to not work
like other languages--being a Lisp2 is a common example. The
deliberate lack of functions to combine other functions is a case in
point [I proposed CONJOIN and DISJOIN, to take predicate functions and
return functions that did the AND and OR of their effects,
respectively, and it was rejected as something our users didn't
want]. The absence of tail
call optimization is another example, encouraging people to write
loops rather than recursions--that's probably more practically
profound than the silly syntax debate over Lisp1/Lisp2. One can go on.
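(To make the combinator idea concrete for readers who haven't seen it:
here is a minimal sketch, in OCaml rather than CL, of what CONJOIN and
DISJOIN would do; the list-taking signature is my guess at the shape of
the proposal, not its actual text.)

(* conjoin/disjoin: take predicate functions, return the predicate
   that ANDs/ORs their results pointwise. *)
let conjoin preds x = List.for_all (fun p -> p x) preds
let disjoin preds x = List.exists (fun p -> p x) preds

let () =
  (* small_even holds only when both component predicates hold *)
  let small_even = conjoin [ (fun n -> n mod 2 = 0); (fun n -> n < 10) ] in
  assert (small_even 4);
  assert (not (small_even 12));
  assert (disjoin [ (fun n -> n < 0); (fun n -> n > 100) ] 200)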
More often than not, I have personally found that people who refer to
CL as an FP language end up confusing themselves by thinking CL is
secretly Scheme with bad syntax. That usually holds them back rather
than helping them. CL certainly permits a lot of programming using
functions, but "functional programming" has taken on a kind of
political correctness to it that transcends the sum of the individual
meanings of those terms. The expression part of the CL language isn't
really designed for that kind of narrow understanding of what it is to
program with functions.
Incidentally, following on another item from upthread, I also think
that the "symbolic processing" part of CL isn't really an FP task, so
it's an especially weird thing to be benchmarking. IMO, symbols in
the sense CL means them are really not used like they are in other
languages. That is, they are places to hang semantics out for view,
more like URLs, where it's just plain ridiculous on the web to talk
about a URL being private... and the same is true of CL's symbols. In
my experience, FP languages use symbols in a more "pure" way like
Scheme does, with no real attached semantics and discarding the sense
in which CL allows you to use symbols as a way of gaining a debugging
foothold. The uses for symbols in CL are subtle and take
a while to see, but the ability to do things like CLOS/MOP
introspection by just giving some symbol names (no need to name a
lexical environment) is an example. In other languages, symbols name
variables that are still not intelligible without access to a lexical
environment--in short, CL isn't really working hard to implement the
kind of strict encapsulation that other languages have, and yet it
achieves a balance: reasonable confidence that you won't clobber
others' programs, and reasonable confidence that you won't find
yourself locked out without a key either.
I don't like big public discussions of benchmarks not because
benchmarks themselves aren't sometimes useful, but because I think
these things change like the wind and it's hard to get rid of invalid
information fast enough. They are quite an opportunity to just whip
stale data into the wind and start a lot of viral misinformation
making the rounds.
Also, Lisp is EXPLICITLY designed to ALLOW the same language to behave
in different ways performance-wise, trading efficiency for other
properties in some implementations--like smallness, like the ability
to deliver compiled and interpreted implementations at different
dollar cost, etc. Lisp also has a zillion more optimization settings
with widely differing meanings than other languages have; the notion
that a single optimization setting is "obviously the right one" is
very hard to establish. The total number of optimization settings
under which you could test Lisp is huge, not even counting
opportunities to inline things, the possible use of compiler macros,
etc., and I doubt there's any commitment to actually allow or
encourage or report on that huge number of possibilities. The notion
that there is a canonical "fast setting" which is "good enough" seems
doubtful to me. Certainly a simple (declare (optimize (speed 3))) isn't likely
to do it.
The notion that by comparing the efficiency of a certain number of
specific function calls you're going to gain insight into whether Lisp
is a good language to use is a bit suspect. When anyone designing
such tests has a profit motive in selling consulting in one of the
languages being tested, that seriously compromises the likelihood of a
useful outcome in my eyes.
I'd rather work with a language that lets me say things as I want to
say them, and work to speed up the things that seem too slow where
that turns out to matter, than begin by tying my hands and saying the
only expressional paradigms I may consider are those that are known on
the day I start programming to be a particular speed.
All just my opinion, of course. Others may do as they please. I
wanted to just say that my non-participation in this is not an
accident. And I didn't want anyone who is mesmerized by benchmark
numbers to think there was no minority report on the significance
(or lack thereof) of such numbers.
I do think metering real programs that are about to be really used
is useful, btw. I just don't think that's what this is.
Benchmark programs for speed comparisons should be very simple or
trivial to write. If you pose significant programming challenges in
your benchmarks then what you end up doing is comparing algorithms and
not languages.
Mark
> Benchmark programs for speed comparisons should be very simple or
> trivial to write. If you pose significant programming challenges in
> your benchmarks then what you end up doing is comparing algorithms and
> not languages.
First you can't test languages at all, only implementations. And if the
benchmarks are too simple, you are testing only one aspect of the
implementation, e.g. floating point and boxing/unboxing performance. This
may be useful, if you have different tests to compare different aspects.
But another idea would be one big task, like a ray-tracer, and then writing
it multiple times with different algorithms and optimizations for each
language implementation, and then to compare these to derive some
conclusions about which algorithms and optimizations work best for
which language.
--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
Does this sell books? Should I write one?
No; the correct approach is to have a battery of simple benchmarks
aimed at different aspects of the language - symbol processing,
numerics etc. If you have large complex problems then you end up
comparing programmers and not languages.
Mark
If you want to compare OCaml with Lisp why don't you choose the
Gabriel benchmarks and write them in OCaml? They've been intensively
done in Lisp and are designed to test various aspects of an FPL.
Mark
> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc. If you have large complex problems then you end up
> comparing programmers and not languages.
This depends on what you want to measure. I agree that small benchmarks,
like matrix multiplication, are useful for comparing different aspects of a
language. But implementing a ray tracer could "benchmark" some more things:
How easy is it to implement such a program? Is it easy to add a GUI or some
network support for splitting parts of the rendering to a render farm? How
easy is it to change the program, if requirements change? And last but not
least: How fast is the ray tracer?
You are right that there is the danger that you are comparing programmers.
To avoid this, multiple programmers should work on the problem (if
possible, some of the best for each language), or a group of
programmers should discuss and enhance the implementation, e.g. in
this newsgroup for Lisp.
The process of implementing the solution should be monitored (blogs from
the programmers) and the time needed should be summed. This would be more
useful than just some simple benchmarks and might show some differences.
E.g. for OCaml it might be true that it is faster than some Lisp
implementations.
But I think developing programs in OCaml or Haskell is slower than with
Lisp. At least I have problems with functional programming languages like
Haskell: Once a solution is found, usually it is very smart and works
reasonably fast. But developing a solution in Haskell is more like
developing a mathematical proof: trying to describe the problem in
mathematical terms, then trying to find axioms and theorems (functions)
which solve the problem.
Developing a solution in Lisp for me is more like painting: First some
sketch, like defining some lists with test cases of the problem and some
functions which handle it, without caring too much about types or
correctness. Then doing some top-down design, mixed with OO, functional or
plain imperative programming to finish the painting.
>> And if the benchmarks are too simple, you are testing only one
>> aspect of the implementation, e.g. floating point and
>> boxing/unboxing performance.
>
> No; the correct approach is to have a battery of simple benchmarks
> aimed at different aspects of the language - symbol processing,
> numerics etc.
I think you've missed Buss's point. In Lisp, a function tested in
isolation may have to perform argument list checks, type checks,
boxing of values to be returned, which can dominate the computation
time for simple routines. In realistic programs, inlining and other
mechanisms for skipping parts of the full function call protocol can
reduce or entirely remove that overhead.
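(A rough OCaml rendering of the same point, since there the analogous
per-call overhead is generic comparison and float boxing rather than
CL's full call protocol; this is only a sketch, and timings vary:)

(* Measured in isolation, the generic version pays for a call through
   the polymorphic compare with boxed floats; the annotated version
   compiles to a bare float comparison. Inlined into a larger program,
   much of that gap disappears. *)
let max_generic x y = if compare x y > 0 then x else y
let max_float (x : float) (y : float) = if x > y then x else y

let () =
  Printf.printf "%g %g\n" (max_generic 1.0 2.0) (max_float 1.0 2.0)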
> If you have large complex problems then you end up comparing
> programmers and not languages.
I'd say you end up comparing the ways that programmers and language
implementations combine to make programs, which might be the right
thing to compare, after all.
--
RmK
> More often than not, I have personally found that people who refer to
> CL as an FP language end up confusing themselves by thinking CL is
> secretly Scheme with bad syntax. That usually holds them back rather
> than helping them. CL certainly permits a lot of programming using
> functions, but "functional programming" has taken on a kind of
> political correctness to it that transcends the sum of the individual
> meanings of those terms. The expression part of the CL language isn't
> really designed for that kind of narrow understanding of what it is to
> program with functions.
A lot of time and care has gone into pushing the limits of what is
practical to do in a (pure) functional language. One of lisp's selling
points is paradigmatic adaptability. I've always tried to use lisp in a
way that brings some of those interesting ideas into my standard set of
idioms.
However, you seem to imply that programmers that use Common Lisp
functionally will end up selecting different languages. If I like the
kind of structure and reasoning that FP supports, are you saying I
should look elsewhere?
Matt
--
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.
Time and time again you've been outed as a troll and spammer.
You constantly misrepresent what others say, twist words,
lie and deceive. You only accept what fits your crafted
agenda and furthers your nefarious purposes. In so many words,
you are a scumbag plain and simple.
Somehow I don't see many interested in your "benchmarks" and
"language comparisons".
I'd say this is pretty much dead wrong, as it doesn't account for
interactions of constructs: a) that is the area which will throw the
most wrenches into things[1], and b) it is the very area which
anything even approaching the character of a real program/application
will need to make the most use of.
/Jon
1. It's pretty easy to optimize different constructs in isolation from
one another.
--
'j' - a n t h o n y at romeo/charley/november com
> Developing a solution in Lisp for me is more like painting: First some
> sketch, like defining some lists with test cases of the problem and some
> functions which handle it, without caring too much about types or
> correctness. Then doing some top-down design, mixed with OO, functional or
> plain imperative programming to finish the painting.
Well said.
I think both of your statements have merit but, given finite time, I
suspect we shall end up with more small benchmarks and fewer large ones.
A really important point in this context though: complicated benchmarks in
languages like Java and C# can be trivial in OCaml or Lisp.
For example, a parser and evaluator for a simple dynamically typed
functional language with first-class lexical closures that can interpret
the following program:
let rec fib n =
  if n=0 then 0 else
  if n=1 then 1 else
  fib(n - 1) + fib(n - 2) in
fib 35
may be written in only 49 lines of OCaml:
open Camlp4.PreCast;;
let expr = Gram.Entry.mk "expr";;
EXTEND Gram
  expr:
  [ "prog" NONA
      [ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
      | "let"; "rec"; f=LIDENT; x=LIDENT; "="; body=expr; "in"; rest=expr ->
          `LetRec(f, x, body, rest) ]
  | "equal" LEFTA
      [ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
  | "sum" LEFTA
      [ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
      | e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
  | "product" LEFTA
      [ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
  | "apply" LEFTA
      [ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
  | "simple" NONA
      [ v = LIDENT -> `Var(v)
      | n = INT -> `Int(int_of_string n)
      | "("; e = expr; ")" -> e ] ];
END;;
(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;
let rec eval vars = function
  | `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
  | `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
  | `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
  | `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
  | `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
  | `Int i -> `Int i
  | `LetRec(var, arg, body, rest) ->
      let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
      eval vars' rest
  | `Var s -> List.assoc s vars
and apply func arg = match func with
  | `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
  | _ -> invalid_arg "Attempt to apply a non-function value";;
(* Top level *)
let string_of_value = function
  | `Int n -> string_of_int n
  | `Bool true -> "true"
  | `Bool false -> "false"
  | `Closure _ -> "<fun>";;
let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;
Could we make another benchmark test out of this? Maybe make the target
language BASIC and have a more sophisticated target program?
Personally, I'd like to see how fast and easy this is using EVAL in Lisp. I
could code up MetaOCaml and F# implementations as well...
> Personally, I'd like to see how fast and easy this is using EVAL in Lisp. I
> could code up MetaOCaml and F# implementations as well...
This reflects a fundamental misunderstanding about how Lisp works.
EVAL makes no pretense whatsoever of being fast and is not in any way
related to the execution speed of Lisp since Lisp programs ordinarily
do not pass through EVAL.
> The notion that by comparing the efficiency of a certain number of
> specific function calls you're going to gain insight into whether Lisp
> is a good language to use is a bit suspect. When anyone designing
> such tests has a profit motive in selling consulting in one of the
> languages being tested, that seriously compromises the likelihood of a
> useful outcome in my eyes.
I think there is a sort of trade-off on this. IIRC, some years ago a
Lisp-versus-C dispute ended up with a commercial Lisp implementor
finding an opportunity to improve the Lisp compiler.
[snipped]
>
> I do think metering real programs that are about to be really used
> is useful, btw. I just don't think that's what this is.
Most problems solvable by computer languages are of use to some kind
of user. Some may be so specialized that we may believe they have no
practical value, while some other user may find they are his daily
bread and butter.
my .01999....
--
Cesar Rabak
I feel you're oversimplifying... certain languages allow for
approaches which differ radically from others; this is one of the
selling points of FP languages.
--
Cesar Rabak
> Developing a solution in Lisp for me is more like painting: First some
> sketch, like defining some lists with test cases of the problem and some
> functions which handle it, without caring too much about types or
> correctness. Then doing some top-down design, mixed with OO, functional or
> plain imperative programming to finish the painting.
>
> --
> Frank Buss, f...@frank-buss.de, http://www.frank-buss.de, http://www.it4-systems.de
I would add this to my list of favourite quotes, if you don't mind. It
describes very well my attitude towards lisp, programming and the kind
of person I am.
bobi
I understood him well enough, and it's wrong for the purpose intended
and, for the rest, quite impractical.
If you want to compare solutions then it's fine. If you want to use
his method to compare performance w.r.t. languages, then it's not
fine. Complex open-ended problems (like Sudoku) admitting different
algorithms for solution - offering many choices of implementation
along the way are *bad choices* for benchmarks. You're supposed to be
benchmarking *languages* - not algorithms or programmers.
The point you are making is that a single benchmark function may
misrepresent the truth about a language because it will capitalise on
a few features. That's right - a single benchmark is a single point
on the graph. The trick is to accumulate many such points to get a
better picture.
Benchmarks are trivial and even boring to code but not trivial to
choose. A good suite of benchmarks should systematically profile all
the key aspects of the language in isolation - numerics, garbage
collection, list handling etc.
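To make that concrete, a minimal sketch of such a battery in OCaml
(the three probes and their iteration counts are illustrative choices
of mine, not a canonical suite):

let time label f =
  let t0 = Sys.time () in
  ignore (f ());
  Printf.printf "%-10s %6.3fs\n" label (Sys.time () -. t0)

let () =
  (* numerics: a tight float loop *)
  time "numerics" (fun () ->
    let s = ref 0.0 in
    for i = 1 to 10_000_000 do s := !s +. float_of_int i done;
    !s);
  (* list handling: build and traverse a large list *)
  time "lists" (fun () ->
    List.length (List.rev_map succ (List.init 1_000_000 (fun i -> i))));
  (* garbage collection: churn many short-lived blocks *)
  time "gc" (fun () ->
    for i = 1 to 1_000_000 do ignore (Array.make 10 i) done)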
A good way to think of benchmarks is to compare them to the fitness
tests that sports doctors give athletes. Simple but scientific tests
aimed at profiling vital fitness signs like lung capacity or
heartbeat. They're not exciting or challenging to watch or do -
nobody is going to pay to watch Beckham take a BP test; but that is
not the point of them.
Now if you want to follow Frank's method you'll probably have a lot
more fun and a more interesting discussion - but with no firm
conclusion. The degree of organisation required to get many of the
best programmers to work together devoting many hours to this project
(and how do we decide who is 'best'?) is not realistic. And many of
the questions (How easy is it to implement such a program?) are not
scientific questions (e.g. How easy is it to change a light bulb?
Answer: not hard if you know what you're doing). You'll have a jolly
good chin-wag and things will be learnt, no doubt. But it's not
benchmarking as I understand it.
That's why I recommend the Gabriel benchmarks to Jon as the obvious
route to choose if he wishes to prove his thesis. The benchmarks are
famous, were largely designed on the philosophy I endorse here and
have a wealth of timing results attached to them. Better still, the Lisp
code is already there. So rather than reinvent the wheel, why not
begin there?
Mark
I think the plot of verbosity vs performance for different ray tracers is
very enlightening. I would like to see similar things for Sudoku solvers
and so on.
> That's why I recommend the Gabriel benchmarks to Jon as the obvious
> route to choose if he wishes to prove his thesis. The benchmarks are
> famous, were largely designed on the philosophy I endorse here and
> have a wealth of timing results attached to them. Better still, the Lisp
> code is already there. So rather than reinvent the wheel, why not
> begin there?
They've already been done in all of these languages but I'm not terribly
impressed with the Gabriel benchmarks themselves. Time is distributed over
a tiny amount of code in each benchmark (that I've studied).
As someone who's interested in real programming problems (matching
derivative prices, compressing terabytes of trading data, serving web
pages, syntax highlighting program code, ...) when I see raw
benchmarks it can be helpful to know which are relevant to the sort of
programs I want to write. So in this respect I see a purpose in the
less granular approach. But obviously going down this route you risk
'benchmarks' that are only relevant to that *exact* problem, which is
clearly useless.
As with most programming problems, it seems the best approach is to
tackle the smaller problem as Mark is suggesting but then have some
way of tagging and grouping those results to give them meaning in real
world situations.
--
Phil
http://phil.nullable.eu/
> A good way to think of benchmarks is to compare them to the fitness
> tests that sports doctors give athletes. Simple but scientific tests
> aimed at profiling vital fitness signs like lung capacity or
> heartbeat. They're not exciting or challenging to watch or do -
> nobody is going to pay to watch Beckham take a BP test; but that is
> not the point of them.
That's a very good way to think of the kinds of benchmarks you're
talking about, but I don't think the logical conclusion is the one
you're implying. No one is going to look at two sports teams or two
individual athletes and say, "Well, A's oxygen-saturation and
reaction-time numbers are all better than B's, so obviously A will beat
B." They're going to looking instead at whether A beats B in actual
contests, or how A and B perform relative to their competition in actual
contests.
The simple tests can tell you where an athlete's strengths and
weaknesses might be, but it's well understood that they don't tell the
whole story. In the same way, simple benchmarks will produce answers you
can measure easily, but the relationship between those answers and
actual usefulness will be unclear. (In other fields of endeavor this is
sometimes called "looking under the lamp post.")
paul
Well, then that indicates that there's more than one factor affecting
an athlete's performance.
- Physical condition is one.
- Technique is another; if one sprinter is controlling his legs to
better effect than another, he may win, despite inferior condition.
- In team sports, there are "wisdom" effects which are related neither
to condition nor to technique. Passing at the right time to the
right person is what gets a goal in football.
Each of these likely has its own subcriteria, so that by the time
you're done, you probably have a whole array of measurements to test,
where results fall out of the combination of them.
You almost certainly won't be able to predict outcomes based on one or
two of the factors, but the whole set of factors are likely to be
somewhat suggestive. "World Cup" teams will probably have better
numbers, across the board, than teams that aren't in that league...
--
output = ("cbbrowne" "@" "acm.org")
http://linuxdatabases.info/info/sgml.html
?OM ERROR
> There seems to be a great deal of interest from the functional programming
> community in benchmarking. Is there enough interest to create a new
> computer language shootout that showcases more relevant/suitable tasks for
> FP?
It would be more interesting to test the language for programming speed,
not program speed (even though my inner nerd protests).
For most areas, being able to turn your ideas into executable code with
a minimum of effort is the most important feature of all and is the
very point of high-level programming languages. That's why scripting
languages like perl, python, php etc. are so popular; they make writing
certain programs very easy, even though I've never heard anyone claim
that perl was fast per se (ok, its regex and string matching stuff is
apparently quite optimized, but ordinary execution runs at typical
interpreter speed).
I think I remember the wise Kent Pitman (*wink*) say on here, "the inner
loop of a programming language is end-user programming", or something
like that. So that's what should be improved first. Arguably, languages
like Lisp are "more optimized" in that regard than languages that impose
a lot of bureaucracy on the programmer.
That is certainly a belief held by some people. However, I think it is an
unfalsifiable hypothesis. How can we test programming speed without pitting
Jo Lisper against the OCaml elite?
> I would add this to my list of favourite quotes if you don't mind .
Yes, no problem. Though I may have read something similar in one
of Paul Graham's essays. But it is true: I think Lisp supports the use of
both sides of the brain, while the complex syntax, type system and
programming style of Haskell requires lots of power from the logical half of
the brain and no power is left over for the visual part of the brain.
With Java there is another problem: You have a hard time keeping your brain
awake and not wandering away, while your fingers are running hot, trying to
implement some simple ideas with line after line of redundant code :-)
>> Arguably, languages like Lisp are "more optimized" in that regard than
>> languages that impose a lot of bureaucracy on the programmer.
>
> That is certainly a belief held by some people. However, I think it is an
> unfalsifiable hypothesis. How can we test programming speed without pitting
> Jo Lisper against the OCaml elite?
I'd think you'd have to use extensive case study. It's amazing that the
"Software Engineering" crowd hasn't really come up with such stuff yet,
instead of just churning out one visual-modelling-for-OO-practices
fooblah after another, preferably in combination with simplistic,
bureaucratic programming languages (read: Java, C#.NET, etc.), which
effectively increases the unproductive bureaucracy instead of making
programming more efficient. But then again, the focus of Software
Engineering isn't efficient programming but making the use of cheap,
low-skilled programmers feasible for large projects. We'll see if that
will work out in the long run; I'm rather doubtful.
If you do not value the vast amount of other stuff that CL has (which
entirely dwarfs any FP component of the language) then yes. If you
*do* value it, then no.
Are the OCaml and Lisp communities big enough for such a study to even be
theoretically possible?
Consider a control experiment where two groups of people:
1. Programmers
2. Lisp programmers
were asked to solve benchmark problems in the C programming language. I
believe group 2 would be significantly better, but this does not
reflect upon the capability of Lisp vs C, of course; it simply reflects
the fact that only the programming elite bother looking at
non-mainstream languages.
While I agree that a software engineering case study would be great, I just
cannot see how we can even attempt it in a way that would make the
results at all meaningful.
> It's amazing that the
> "Software Engineering" crowd hasn't really come up with such stuff yet,
> instead of just churning out one visual-modelling-for-OO-practices
> fooblah after another, preferably in combination with simplistic,
> bureaucratic programming languages (read: Java, C#.NET, etc.), which
> effectively increases the unproductive bureaucracy instead of making
> programming more efficient. But then again, the focus of Software
> Engineering isn't efficient programming but making the use of cheap,
> low-skilled programmers feasible for large projects. We'll see if that
> will work out in the long run; I'm rather doubtful.
Agreed.
--
Dr Jon D Harrop, Flying Frog Consultancy
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?usenet
> On 15 Jul, 01:59, Jon Harrop <the_flying_frog.com> wrote:
> > ...
> > http://www....
> >
> > The ... Journal http://www...
>
> If you want to compare OCaml with Lisp why don't you choose the
> Gabriel benchmarks and write them in OCaml? They've been intensively
> done in Lisp and are designed to test various aspects of an FPL.
>
> Mark
Because he then would have to work on something which would not grow the
sales of his books?
Nicolas
P.S.: I would suggest that at least the link spam should not be repeated
when citing him.
Actually, I spent yesterday doing this. Most of the Gabriel benchmarks have
already been ported to OCaml. One of the more interesting ones is the Boyer
benchmark, which is a tiny theorem prover that works by repeated symbolic
rewriting. However, it only solves one problem and takes only 0.01s to
complete.
An OCaml translation is actually part of the benchmark suite of the OCaml
distribution:
http://stuff.mit.edu/afs/sipb/project/ocaml/src/current/test/
The implementation is quite poor and, consequently, it is 4x slower than the
Lisp. However, this benchmark is trivially reducible, rendering it useless:
http://home.pipeline.com/~hbaker1/BoyerB.ps.gz
I started optimizing boyer.ml before I realised this and gave up when it
became obvious, i.e. I can make the program take any amount of time to
complete, so it measures nothing.
I've also ported Boehm's GC benchmark but this is (unsurprisingly) a very
poor benchmark of GC performance that stresses the GC so little that it
actually makes Boehm's GC look average. Moreover, it is only 10 lines long.
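(For reference, the core of that benchmark is roughly the following; a
simplified OCaml paraphrase of GCBench on my part, which just builds
and discards complete binary trees, omitting the long-lived-array
part:)

type tree = Leaf | Node of tree * tree

(* allocate a complete binary tree of the given depth *)
let rec make d = if d = 0 then Leaf else Node (make (d - 1), make (d - 1))

let () =
  (* build and immediately drop trees of increasing depth *)
  for d = 4 to 16 do ignore (make d) done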
I would say that the symbolic simplifier supersedes both of these tests, as
it performs more sophisticated term-rewriting in a way that stresses the
GC.
So, I would advocate the selection of a better benchmark that is practically
irreducible but measures the same things. In this context, two tests spring
to mind:
1. A program to evaluate BASIC programs.
2. A program to generate FFT programs.
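On the second idea, here is a minimal sketch (my own illustration, not
an existing benchmark) of what an FFT program generator can look like:
it symbolically unrolls a radix-2 Cooley-Tukey FFT of a fixed size into
straight-line expressions, leaving the twiddle factors w(k,n) =
exp(-2*pi*i*k/n) symbolic:

type e = Var of string | Add of e * e | Sub of e * e | MulW of int * int * e
(* MulW (k, n, x) denotes w(k,n) * x, a twiddle-factor multiply *)

(* split a list into elements at even and odd positions *)
let rec split = function
  | a :: b :: rest -> let e, o = split rest in a :: e, b :: o
  | l -> l, []

let rec fft n xs =
  if n = 1 then xs
  else
    let evens, odds = split xs in
    let pairs = List.combine (fft (n / 2) evens) (fft (n / 2) odds) in
    List.mapi (fun k (a, b) -> Add (a, MulW (k, n, b))) pairs
    @ List.mapi (fun k (a, b) -> Sub (a, MulW (k, n, b))) pairs

let rec show = function
  | Var v -> v
  | Add (a, b) -> "(" ^ show a ^ " + " ^ show b ^ ")"
  | Sub (a, b) -> "(" ^ show a ^ " - " ^ show b ^ ")"
  | MulW (k, n, x) -> Printf.sprintf "w(%d,%d)*%s" k n (show x)

let () =
  List.iteri (fun i y -> Printf.printf "y%d = %s\n" i (show y))
    (fft 4 [ Var "x0"; Var "x1"; Var "x2"; Var "x3" ])

For n = 4 this prints the familiar four butterfly expressions; a
benchmark version would time generation for large sizes and/or compile
and run the generated code.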
I would also like to see Haskell implementations.
Incidentally, the Haskell guys recently translated my ray tracer benchmark
and Haskell's performance is similar to Lisp's with GHC and SBCL.
--
Dr Jon D Harrop, Flying Frog Consultancy
In my case, I was speaking mostly to people who are standing outside
the language and trying to figure out what to make of it. If such a
person, not knowing CL, thinks that it is most easily approached as an
FP language, then I'm guessing that will lead to confusion. That has
been my experience anyway. Often such people are
happier with Scheme.
I didn't in any way suggest that anyone using the language should
leave it. I'm not out to persuade anyone who's happy with CL that
they shouldn't be. I was just explaining why some people are sometimes
not happy with and/or just confused by it.
Mismatched expectations are often a source of discontent.
Tell me about it. Did you read How to Crash and Burn your Java
project by Pete McBreen?
http://www.mcbreen.ab.ca/papers/CrashAndBurnJava.html
My favourite part is :
The best place to put all of the code is either behind the OK button
(you only have to go to one place to see what is happening) or in
stored procedures in the database (since these give optimal database
performance).
That's exactly what happens to me. I don't give a damn if lisp is an
order of magnitude slower as, from my own humble experience, most apps
spend most of their time waiting for user input, or 80% of slow
reaction comes from database & networking, so I would choose something
that is straightforward to understand and easy to maintain and improve
even if it's 10x slower; the user probably won't even notice a
difference (unless you're doing something like games). If someone wants
to develop with haskell or ocaml, let them try it for themselves; if it
makes them happy, great, else stick with lisp.
>
> That's exactly what happens to me. I don't give a damn if lisp is an
> order of magnitude slower as, from my own humble experience, most apps
> spend most of their time waiting for user input, or 80% of slow
> reaction comes from database & networking, so I would choose something
> that is straightforward to understand and easy to maintain and improve
> even if it's 10x slower; the user probably won't even notice a
> difference (unless you're doing something like games). If someone wants
> to develop with haskell or ocaml, let them try it for themselves; if it
> makes them happy, great, else stick with lisp.
Of course, it's not just your experience, what you're saying is borne
out by a huge amount of evidence. Performance is a much more complex
domain than many people understand it to be. In particular, outside
some limited, if high-profile (games, numerical simulations, a few
others) domains, raw compute performance is rather seldom on the
critical path. Instead things like I/O performance of various kinds,
IPC performance of various kinds (often tangled up with I/O) and just
plain good design matter. And in many contexts programmer time is
more important than any of these.
You can see this in the wholesale rush to higher-level, non-natively
compiled languages like Java and Perl, which claim to reduce
programmer time at the cost of some performance. And where
performance really does matter, if you've spent time trying to improve
performance of substantial programs written in these languages (or
Lisp) you'll rather seldom find that there's something which better
code generation would help significantly with. In relatively recent
history I've spent time trying to improve the performance of a large
Java program and a smaller but long-running Perl program. For the
Java application the problem was really awful memory access patterns
which were improved by better data structure design, and for the Perl
program (written by me, so not entirely a fair example) the big
improvement was to aggressively re-use the results of system calls to
reduce the number made, as virtually all the time was spent waiting
for them.
--tim
Do you mean an emulator for a Basic-like language?
Mark
Slobodan Blazeski wrote:
> On Jul 16, 8:55 pm, Frank Buss <f...@frank-buss.de> wrote:
>
>>Slobodan Blazeski wrote:
>>
>>>I would add this to my list of favourite quotes, if you don't mind.
>>
>>Yes, no problem. Though I may have read something similar in one
>>of Paul Graham's essays. But it is true: I think Lisp supports the use of
>>both sides of the brain, while the complex syntax, type system and
>>programming style of Haskell requires lots of power from the logical half of
>>the brain and no power is left over for the visual part of the brain.
>>
>>With Java there is another problem: You have a hard time keeping your brain
>>awake and not wandering away, while your fingers are running hot, trying to
>>implement some simple ideas with line after line of redundant code :-)
>
>
> Tell me about it. Did you read How to Crash and Burn your Java
> project by Pete McBreen?
> http://www.mcbreen.ab.ca/papers/CrashAndBurnJava.html
> My favourite part is :
>
> The best place to put all of the code is either behind the OK button
> (you only have to go to one place to see what is happening) or in
> stored procedures in the database (since these give optimal database
> performance).
>
>
> That's exactly what happens to me. I don't give a damn if lisp is an
> order of magnitude slower ...
It is? I thought it was 20% slower than C. I saw Doctor H claiming
OhCamel was 10x faster than Lisp, but that just made me curious about
OakAml being eight times faster than C.
kt
As we know, concerning point 2, the FFTW project did the work for you
already. And probably you also have an OCaml BASIC evaluator lying around
somewhere.
Let's add some interesting tasks in favor of the CL side, too. I hope you
will do them before you expect us to work on your little tasks from above.
3. An OCaml program for evaluating FORTRAN programs
4. An OCaml CAS system (about the size of Maxima or Axiom)
5. Perl-like regexps in Ocaml (as good and fast as cl-ppcre)
etc, etc
Nicolas
I bet there are people doing meta-programming with Visual BASIC,
generating Visual BASIC programs, storing them in databases, and
loading and running them...
--
__Pascal Bourguignon__ http://www.informatimago.com/
Couldn't resist... :-)
--
Cesar Rabak
Ocaml's memory allocator is much faster than the default allocator in
most C implementations. If the program uses dynamic allocation and
Ocaml has enough head room to avoid lots of GC cycles it will
outperform C.
However, using statically allocated data structures - mutable
structures in Ocaml - C is faster, though the difference can be
reduced to just a few percent by hand optimizing the Ocaml.
Ocaml's dynamic memory allocation advantage can be mostly negated by
replacing C's default linked list allocator with a better one.
Real-time C implementations, which use region and/or fixed order buddy
allocators, can closely approximate Ocaml's performance and may better
it when GC is taken into account.
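(A small OCaml illustration of the behaviour in question; my own
sketch, and the figures vary by machine and runtime version. The GC
counters make the minor-heap churn easy to observe, and each
allocation there is essentially a pointer bump:)

let () =
  let before = Gc.minor_words () in
  let rec build n acc = if n = 0 then acc else build (n - 1) (n :: acc) in
  ignore (build 1_000_000 []);  (* a million cons cells, 3 words each *)
  Printf.printf "minor words allocated: %.0f\n" (Gc.minor_words () -. before)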
George
--
for email reply remove "/" from address
Pascal Bourguignon wrote:
> Mark Tarver <dr.mt...@ukonline.co.uk> writes:
>
>
>>>So, I would advocate the selection of a better benchmark that is practically
>>>irreducible but measures the same things. In this context, two tests spring
>>>to mind:
>>>
>>>1. A program to evaluate BASIC programs.
>>
>>Do you mean an emulator for a Basic-like language?
>
>
> I bet there are people doing meta-programming with Visual BASIC,
> generating Visual BASIC programs, storing them in databases, and
> loading and running them...
>
The word you are looking for is "greenspunning".
hth,kzo
Yes. Just a minimal example.
Certainly relevant but too big to ask anyone to translate into Lisp. We can
take the most trivial reductions that FFTW implements and perform those.
> And probably you have also an OCaml BASIC evaluator lying around
> somewhere.
I have the following interpreter for a tiny dynamically-typed pure
functional language.
> 3. An OCaml program for evaluating FORTRAN programs
This might be feasible for F77. Is there already a Lisp implementation?
> 4. An OCaml CAS system (about the size of Maxima or Axiom)
That is too big and I can't do any more work on CAS as I'm under NDA from
Wolfram Research.
> 5. Perl-like regexps in Ocaml (as good and fast as cl-ppcre)
Already done. Might be interesting to benchmark but I've never had a
regexp-limited program.
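(For instance, with Markus Mottl's pcre-ocaml bindings; a minimal
usage sketch on my part, assuming the standard Pcre module API:)

let () =
  let rex = Pcre.regexp "(\\d{4})-(\\d{2})-(\\d{2})" in
  let subs = Pcre.exec ~rex "date: 2007-07-15" in
  (* group 1 is the year *)
  print_endline (Pcre.get_substring subs 1)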
Here's that interpreter:
open Camlp4.PreCast;;
let expr = Gram.Entry.mk "expr";;
EXTEND Gram
  expr:
  [ "prog" NONA
      [ "if"; p = expr; "then"; t = expr; "else"; f = expr -> `If(p, t, f)
      | "let"; "rec"; f=LIDENT; x=LIDENT; "="; body=expr; "in"; rest=expr ->
          `LetRec(f, x, body, rest) ]
  | "equal" LEFTA
      [ e1 = expr; "="; e2 = expr -> `Equal(e1, e2) ]
  | "sum" LEFTA
      [ e1 = expr; "+"; e2 = expr -> `Add(e1, e2)
      | e1 = expr; "-"; e2 = expr -> `Add(e1, `Mul(`Int(-1), e2)) ]
  | "product" LEFTA
      [ e1 = expr; "*"; e2 = expr -> `Mul(e1, e2) ]
  | "apply" LEFTA
      [ e1 = expr; e2 = expr -> `Apply(e1, e2) ]
  | "simple" NONA
      [ v = LIDENT -> `Var(v)
      | n = INT -> `Int(int_of_string n)
      | "("; e = expr; ")" -> e ] ];
END;;
(* Evaluator *)
let int = function `Int n -> n | _ -> invalid_arg "int";;
let bool = function `Bool b -> b | _ -> invalid_arg "bool";;
let rec eval vars = function
  | `Apply(func, arg) -> apply (eval vars func) (eval vars arg)
  | `Add(e1, e2) -> `Int (int(eval vars e1) + int(eval vars e2))
  | `Mul(e1, e2) -> `Int (int(eval vars e1) * int(eval vars e2))
  | `Equal(e1, e2) -> `Bool (eval vars e1 = eval vars e2)
  | `If(p, t, f) -> eval vars (if bool (eval vars p) then t else f)
  | `Int i -> `Int i
  | `LetRec(var, arg, body, rest) ->
      let rec vars' = (var, `Closure(arg, vars', body)) :: vars in
      eval vars' rest
  | `Var s -> List.assoc s vars
and apply func arg = match func with
  | `Closure(var, vars, body) -> eval ((var, arg) :: vars) body
  | _ -> invalid_arg "Attempt to apply a non-function value";;
(* Top level *)
let string_of_value = function
  | `Int n -> string_of_int n
  | `Bool true -> "true"
  | `Bool false -> "false"
  | `Closure _ -> "<fun>";;
let () =
  let program = Gram.parse expr Loc.ghost (Stream.of_channel stdin) in
  let vars = ["true", `Bool true; "false", `Bool false] in
  print_endline(string_of_value(eval vars program));;
This can be made into a BASIC interpreter easily enough. Might also be
interesting to make the target language lazy.
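On the lazy variant, a minimal self-contained sketch (my own
illustration, deliberately much simpler than the interpreter above):
bind names to suspensions in the environment and force them only on
variable lookup:

(* Call-by-need core: only Var forces its suspension. *)
type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | App of string * expr  (* apply a named, top-level unary function *)

let rec eval funcs env = function
  | Int n -> n
  | Var s -> Lazy.force (List.assoc s env)
  | Add (a, b) -> eval funcs env a + eval funcs env b
  | App (f, arg) ->
      let param, body = List.assoc f funcs in
      (* suspend the argument instead of evaluating it here *)
      eval funcs ((param, lazy (eval funcs env arg)) :: env) body

let () =
  (* const5 ignores its argument, so the suspension is never forced *)
  let funcs = [ "const5", ("x", Int 5) ] in
  Printf.printf "%d\n" (eval funcs [] (App ("const5", Add (Int 1, Int 2))))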
Well I count myself (on the whole) as pleased by Common Lisp.
However, it is so malleable I sometimes forget there are some constraints
on what is practical.
Matt
--
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.
> by replacing C's default linked list allocator with a better one.
There's no "default" allocator these days and the malloc implementations
I know don't use simple linked lists.
From my own experience based on framerates, where I implemented raycar
in C++ and Lisp (through FFI), I would rather say ~30%, but different
benchmarks would probably give different results. Neither the Lisp nor
the C++ code was optimized, but judging by readability and development
time, Lisp wins with flying colors. The only bad thing about Lisp is
that I went out of shape: I used to do push-ups while the C++ code was
compiling, and there is no such thing with a REPL, if you know what I
mean.