http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
SBCL is only about 0.9 times slower than C++, 0.7 slower than C, 0.2
times faster than Java (client), and is finally as fast as Ocaml.
I'm pretty sure that if the SBCL team continues to work, it will become
one of the fastest language implementations, very close to C++ itself.
Thank you for your work!
| Have you seen the shootout recently?:
Not to diminish the achievements of all who contribute to SBCL, of
course---
Besides Shakespeare's works, Lewis Carroll's also have a quote for
everything, more or less:
"Are five nights warmer than one night, then?" Alice ventured to
ask.
"Five times as warm, of course."
"But they should be five times as _cold_, by the same rule---"
"Just so!" cried the Red Queen. "Five times as warm, _and_ five
times as cold---just as I'm five times as rich as you are, _and_
five times as clever!"
(_Through the Looking-Glass_, Chapter IX)
---Vassil.
--
Peius melius est. ---Ricardus Gabriel.
I'm not entirely sure what you mean here. Sure, it's not directly
meaningful to say that language implementation X is z times faster
than Y, but it still has some relative value.
Especially when we see that, on average/mode/worst/best-case, SBCL's
performance in time and memory terms has improved relative to other
language implementations.
The memory use is still pretty high though.
What I'd personally like to see is a set of quite large applications,
implemented in different languages, so we could compare the
readability and code sizes. In the shootout programs, they're a)
optimised to the level where they're barely above C and b) too short
and limited in scope to be meaningful samples on those terms.
Everyone is saying that Lisp is more expressive than other languages,
but it's not visible on the shootout; in fact the Lisp programs are at
the longer, more verbose end.
Oisín
Ok, Shakespeare in c.l.l. That's the latest thing we get to see here.
You say that you don't want to diminish the SBCL team, what do you
mean with this quote then?
> Everyone is saying that Lisp is more expressive than other languages,
> but it's not visible on the shootout; in fact the Lisp programs are at
> the longer, more verbose end.
I think this is because of how long operator names are in Lisp:
(setf my-variable 99)
versus
my_var=99
But, in general, if you only count the number of lines, Lisp is
usually shorter.
Also, in big applications, defmacro and functional programming allow
us to reduce complexity and size.
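For instance, a minimal sketch of what defmacro buys you (WHILE here is a
hypothetical utility, not part of standard CL):
(defmacro while (test &body body)
  "A C-style while loop; the looping pattern is written once, then reused."
  `(do () ((not ,test))
     ,@body))
;; (let ((i 0)) (while (< i 3) (print i) (incf i))) prints 0, 1, 2.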
The horror, the horror! No spaces around the assignment operator! Is
anybody reading the GNU coding standards nowadays instead of the
guidelines to punch ForTran IV cards? :)
Cheers
--
Marco
Who cares?
> What I'd personally like to see is a set of quite large applications,
> implemented in different languages, so we could compare the
> readability and code sizes. In the shootout programs, they're a)
> optimised to the level where they're barely above C and b) too short
> and limited in scope to be meaningful samples on those terms.
> Everyone is saying that Lisp is more expressive than other languages,
> but it's not visible on the shootout; in fact the Lisp programs are at
> the longer, more verbose end.
But on large applications, you won't have the same application,
because obviously you will program it differently in different
programming languages. Or else, you'll be doing Fortran in C and in
Lisp, and you won't compare how well gcc compiles C vs. how well sbcl
compiles lisp, but how well gcc compiles Fortran vs. how well sbcl
compiles Fortran.
Therefore you won't be comparing the compilers, or the language per se
anymore. You will be comparing the whole system: (developers ide
libraries language compiler). And since this includes a human
factor, they will say, "Of course, but it's not because lisp is
better, it's just you took good lisp programmers and bad blub
programmers.".
--
__Pascal Bourguignon__
I think that's the way they are written. The implementors aimed for
performance and put in all the bells and whistles (often at the cost
of perspicuity).
Mark
How prosmicuous of you!
:)
kt
Maybe a newer SBCL would be faster. They seem to be using SBCL 1.0.12
which is from November 2007.
--
Lars Rune Nøstdal
http://nostdal.org/
Well, for example, most programmers including myself approaching a new
language have a perhaps superstitious need to compare benchmarks and
see 'how fast' or 'how big' programs are when compiled. Even if we
don't actually need cutting edge speed in our web-app/stock quote
fetcher/...etc.
The huge memory use thing is a bit offputting when you used to be
disappointed upon 'upgrading' from C to C++ and finding that some
simple application now compiles to over 100k (!!) and uses 5 megs of
heap to do its job. Then again, I read a really interesting article
called "Lisping at JPL" a month ago or so when I decided to learn
Lisp, which described useful Lisp systems on embedded platforms with a
ridiculous amount of RAM (compiled Lisp running in 2kb, then 32kb, and
an interpreter running in 8mb). Maybe the programs used in the
shootout cause a lot of things to be allocated before a GC kicks in,
or maybe SBCL carries a large runtime.
> Therefore you won't be comparing the compilers, or the language per se
> anymore. You will be comparing the whole system: (developers ide
> libraries language compiler). And since this includes a human
> factor, they will say, "Of course, but it's not because lisp is
> better, it's just you took good lisp programmers and bad blub
> programmers.".
That's true. Also, I guess the code would be quite contrived and not
representative of real Lisp, like the shootout code (too C-style-
optimised - it's a pity the compilers don't automatically do more of
that stuff so we can have automatically-optimised code which isn't so
ugly). I guess it'd be more beneficial and natural to just search up
some good Lisp code (AI archives look good) and see how I feel about
it, and whether my confusions and worries are answered; for example,
how big does an application have to be before macros and such become
useful abstractions? What's the deal with that weird gensym stuff? Are
dynamic variables good or bad?
Oisín
> On Jun 25, 4:32 pm, p...@informatimago.com (Pascal J. Bourguignon)
> wrote:
> > Oisín Mac Fhearaí <denpasho...@gmail.com> writes:
> >
> > > The memory use is still pretty high though.
> >
> > Who cares?
>
> Well, for example, most programmers including myself approaching a new
> language have a perhaps superstitious need to compare benchmarks and
> see 'how fast' or 'how big' programs are when compiled. Even if we
> don't actually need cutting edge speed in our web-app/stock quote
> fetcher/...etc.
> The huge memory use thing is a bit offputting when you used to be
> disappointed upon 'upgrading' from C to C++ and finding that some
> simple application now compiles to over 100k (!!) and uses 5 megs of
> heap to do its job. Then again, I read a really interesting article
> called "Lisping at JPL" a month ago or so when I decided to learn
> Lisp, which described useful Lisp systems on embedded platforms with a
> ridiculous amount of RAM (compiled Lisp running in 2kb, then 32kb, and
> an interpreter running in 8mb). Maybe the programs used in the
> shootout cause a lot of things to be allocated before a GC kicks in,
> or maybe SBCL carries a large runtime.
Check out Clozure CL. Its code is quite small and the runtime
is small, too. What is now Clozure CL started twenty years ago on
tiny Macs with 4MB RAM. CLISP also has a small footprint.
If you compare compiled Lisp to other code make sure that
you understand whether the compiled Lisp code contains development
information or not. Development information would be:
* original parsed source code
* symbols
* argument lists
* documentation strings
* source locations
* other debug info
Plus the Lisp application may include some facilities
that might not be needed (for example the compiler might not
be needed).
Some Lisp systems can get rid of most of that and more.
For example for LispWorks and Allegro CL there are
delivery tools which also remove unused code and
do lots of other fancy things.
But often one just delivers Lisp applications with full
development information. That makes it quite large sometimes.
Having the ability to change running software is
one of the strengths of Lisp systems.
> > Therefore you won't be comparing the compilers, or the language per se
> > anymore. You will be comparing the whole system: (developers ide
> > libraries language compiler). And since this includes a human
> > factor, they will say, "Of course, but it's not because lisp is
> > better, it's just you took good lisp programmers and bad blub
> > programmers.".
>
> That's true. Also, I guess the code would be quite contrived and not
> representative of real Lisp, like the shootout code (too C-style-
> optimised - it's a pity the compilers don't automatically do more of
> that stuff so we can have automatically-optimised code which isn't so
> ugly). I guess it'd be more beneficial and natural to just search up
> some good Lisp code (AI archives look good) and see how I feel about
> it, and whether my confusions and worries are answered; for example,
> how big does an application have to be before macros and such become
> useful abstractions?
You have to find out. ;-)
> What's the deal with that weird gensym stuff?
Used in macro-generated code, for fresh symbols that can't clash with user code.
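For instance, a minimal sketch of why that matters (SQUARE is a toy example):
(defmacro square (form)
  "Evaluate FORM only once; the GENSYM cannot collide with user variables."
  (let ((tmp (gensym)))
    `(let ((,tmp ,form))
       (* ,tmp ,tmp))))
;; (square (incf x)) increments X once, not twice, thanks to the
;; generated temporary.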
> Are dynamic variables good or bad?
Both.
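For instance (a sketch; *LOG-STREAM* and LOG-LINE are made-up names):
(defvar *log-stream* *standard-output*)
(defun log-line (text)
  (write-line text *log-stream*))
;; Good: rebinding redirects every LOG-LINE in the dynamic extent,
;; with no parameter threading. Bad: that same action at a distance
;; can surprise you.
(with-open-file (f "/tmp/example.log" :direction :output
                   :if-exists :supersede)
  (let ((*log-stream* f))
    (log-line "goes to the file")))
(log-line "back to standard output")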
>
> Oisín
On Wed, 25 Jun 2008 07:38:58 -0700 (PDT), Javier <jav...@gmail.com> said:
| On 25 jun, 07:38, Vassil Nikolov <vnikolov+use...@pobox.com> wrote:
|| ...
|| "Are five nights warmer than one night, then?" Alice ventured to
|| ask.
|| "Five times as warm, of course."
|| "But they should be five times as _cold_, by the same rule---"
|| "Just so!" cried the Red Queen. "Five times as warm, _and_ five
|| times as cold---just as I'm five times as rich as you are, _and_
|| five times as clever!"
|| (_Through the Looking-Glass_, Chapter IX)
| ...
| You say that you don't want to diminish the SBCL team, what do you
| mean with this quote then?
That it summarizes fairly well many---not all, but way too
many---discussions about "X is faster than Y". It is about such
discussions, not about SBCL.
I don't want to ask why an _implementation_---SBCL---is quoted to be
slower or faster than one or another _language_ (C++, C, etc.).
Perhaps more importantly, I don't want to ask what exactly "faster"
or "slower" mean when languages---or implementations---are compared,
either. (I do not say that such a meaning cannot be ascribed in a
sensible way, just that it requires some care, and more than I am
willing to afford at this time.) Is this horse still alive anyway?
Note that the above web page itself only gives lies ^W damn lies ^W^W
statistics, but does not speak of faster or better (except the
sarcastic question at the top; the FAQ is another matter, but then
some of it is tongue-in-cheek, of course).
By the way, Gabriel's _Performance and Evaluation of Lisp Systems_
is still worth reading for more than historic value; and it is now
available as a PDF file from Gabriel's site
(<http://www.dreamsongs.com/Files/Timrep.pdf>).
To help people understand the futility of this exercise, compare:
#include <stdio.h>
int fact(int x){
    /* note: overflows int long before x reaches 1000 */
    return((x<=1)?1:(x*fact(x-1)));
}
int main(void){
    printf("%d\n",fact(1000));
    return(0);
}
vs.
(defun fact (x)
  (if (<= x 1) 1 (* x (fact (1- x)))))
(defun main ()
  ;; unlike the C version, this is exact: Lisp integers grow into bignums
  (print (fact 1000))
  0)
                     C/C++   Lisp
Correctness            ?       ?
Development Time       ?       ?
Maintainability        ?       ?
Run Time               ?       ?
--
__Pascal Bourguignon__ http://www.informatimago.com/
In deep sleep hear sound,
Cat vomit hairball somewhere.
Will find in morning.
We are just asking about runtime speed. The other questions can be
debated separately.
We all know that Lisp code is usually correct and maintainable, and has
short development times.
But please, let's debate the OTHER thing, the one IMPORTANT thing, even
if it is not the most important one for some applications.
There is a myth that the runtime speed of programs written in language X
is not important because most of the time the computer is idle, and bla,
bla, bla. But analyze it carefully: this myth is true for a web
application running on current computers, for most sites. It might also
be true for enterprise applications up to some point. But look at your
computer: can you imagine an mp3 player taking 50% of your CPU time?
Can you imagine having to wait 0.5s every time you press a button? Can
you imagine your OS losing data when communicating with other computers?
These things are important.
People are buying 3.0 GHz computers with 2 GB of RAM, and they don't
want them to run as if they were 500 MHz ones.
The desktop is important. Server speed is also important. SPEED is
important. Yes, it is just ONE variable when choosing a language, but a
decisive one.
And microbenchmarks are right when it comes to showing how a language
implementation manages the code you write. For example, fast math is an
important prerequisite for real-time applications. If a microbenchmark
shows that the implementation can be fast on short algorithms, it means
that good programmers can build big, fast applications. We cannot say
the same for implementations that are slow on short programs.
>
> SBCL is only about 0.9 times slower than C++, 0.7 slower than C, 0.2
> times faster than Java (client), and is finally as fast as Ocaml.
For purposes of propaganda, i.e. advocating Common Lisp, this is
great.
I'm all for propaganda of this kind. I guess "marketing collateral"
would be a more euphemistic way of putting it.
As for whether it really means anything: microbenchmarking is hard,
and interpreting the results is hard. See Richard Gabriel's book
on measuring Lisp for a great discussion of why it's so hard.
See Didier Verna's excellent paper, in which he compares
Common Lisp and C++ for some simple image-processing apps to
see how masterfully he deals with the hard issues. It's called
"Beating C in Scientific Computing Applications: On the Behavior
and Performance of Lisp, Part 1". One of the many smart things
he does is to try several Common Lisp implementations. SBCL
and Allegro both come out well.
Notice that the "shootout" numbers are taking the mean of
many benchmarks, because everybody wants a single number
that measures speed, as if every microbenchmark will have
exactly the same ratio for each language. Of course that's
not true, and it depends a lot on which things you benchmark.
And if you're going to take a mean, you have to choose your
weighting. To their credit, they provide a way to do that,
but how do you know what to put in those boxes? The
"what fun!" comment on that web page shows that they
understand this perfectly.
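To make the weighting point concrete, here is a sketch (the function and
the numbers are mine, not the shootout's):
(defun weighted-geometric-mean (ratios weights)
  "Combine per-benchmark time ratios into one score; the answer
depends entirely on the weights you pick."
  (exp (/ (reduce #'+ (mapcar (lambda (r w) (* w (log r))) ratios weights))
          (reduce #'+ weights))))
;; The same three benchmark ratios, two different "single numbers":
;; (weighted-geometric-mean '(0.5 2.0 1.0) '(1 1 1)) => ~1.0
;; (weighted-geometric-mean '(0.5 2.0 1.0) '(5 1 1)) => ~0.67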
See the full comparison between SBCL and the fastest
Java that they show:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbcl&lang2=javaxx
That's more informative. The mean is being influenced
by a few benchmarks on which SBCL did a lot better,
whereas it did somewhat worse on many of the benchmarks.
What you really want is a way to predict, in advance, how
much faster or slower your app will be, so that you can
take that into account when you choose a language. The
ability of this kind of microbenchmark to help you make
that prediction is limited at best.
By the way, I often say that you get the most speed from
Lisp, compared to other languages, because you can make
your program work correctly in less time, thus leaving
more time between then and the project deadline to work
on performance improvement. And big performance wins
come from high-level approaches, which also take time
to develop, so developer productivity translates directly
into runtime performance, if you decide to care about
runtime performance.
Warm congratulations to the SBCL maintainers
(hi, Nikodemus!). They deserve lots of credit. These
results are extremely impressive!
-- Dan
I think microbenchmarking is at least as important as whole-application
benchmarking. It shows where the limit on achievable speed actually is.
I obviously agree that comparing small algorithms across language
implementations is not the same as doing so with full applications. It
is obvious, too, that getting good performance using C++ is harder.
But there are times (and desktop applications almost always require
this) when performance is so important that people still prefer to use
the fastest language implementation, even if it is very hard to code in.
Knowing that SBCL is getting close to C++, and that it is now very
close to optimal, can encourage a lot of programmers to choose Lisp.
I hope SBCL gets better and better. A lot of myths and misconceptions
may fall away.
I hope that you agree with me that if a Lisp implementation is good at
microbenchmarks, it also means, in some way, that it will be good for
full applications. The faster it is at concrete things, the faster it
should be in general. Any point at which it is not fast is a bottleneck
that prevents the full application from being fast. That's logical.
How about counting symbols or tokens rather than characters or lines?
For example,
(setf my-variable 99)
and
my_var = 99
would both count as 3. It strikes me that this would be a better
measure of verbosity, because it's the number of elements needed more
than the amount of typing that is most relevant. This is especially
true when modern environments provide word completion, so it doesn't
actually take many more keystrokes to enter `long-named-function' than
`f'.
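Something like that is easy to prototype, since the Lisp reader does the
tokenizing for you (COUNT-TOKENS is my name for it):
(defun count-tokens (form)
  "Count the atoms in a form produced by READ."
  (if (atom form)
      1
      (reduce #'+ (mapcar #'count-tokens form))))
;; (count-tokens (read-from-string "(setf my-variable 99)")) => 3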
If we were typing as fast as we can think of the solution, there would be
little difference in development time between languages. We spend most
of the time reading documentation, thinking, and understanding what we
wrote months ago.
1.- For thinking, Lisp has a lot of facilities, including functional,
object-oriented, and imperative programming, a large library, more or
less uniform semantics, a REPL for testing while typing, and automatic
memory management. The rest is up to the knowledge and intelligence of
the programmer.
2.- For understanding, Lisp is a very readable language, and has macro
facilities which allow you to construct embedded micro-languages. This
eliminates a lot of verbosity.
3.- For documentation, the language encourages it with built-in
facilities like inline documentation for functions and classes, which
can be consulted in real time using Slime or any other capable editor
(this is equivalent to Javadoc, but better).
So Lisp is a winner here.
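For instance, a minimal illustration of point 3 (CLAMP is just an
example name):
(defun clamp (x lo hi)
  "Return X limited to the interval [LO, HI]."
  (max lo (min hi x)))
;; The docstring travels with the function and can be queried at the
;; REPL (Slime shows it as you type):
;; (documentation 'clamp 'function)
;; => "Return X limited to the interval [LO, HI]."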
And rather than being a sarcastic question, "Can you manipulate the
multipliers and weights to make your favourite language the best
programming language in the Benchmarks Game?" is an invitation to play
around - iirc I once managed to get Python to be #1 ;-)
> See the full comparison between SBCL and the fastest
> Java that they show:
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=sbc...
>
> That's more informative. The mean is being influenced
> by a few benchmarks on which SBCL did a lot better,
> whereas it did somewhat worse on many of the benchmarks.
Yes, that is how the FAQ says to compare language implementations.
For some reason, the OP chose to state that SBCL was faster than
Java -client.
In contrast you have chosen a comparison with Java -server, not with
Java -client.
Just looking at the mean scores would give the impression that SBCL
was a little slower than Java -server :-)
>
> What you really want is a way to predict, in advance, how
> much faster or slower your app will be, so that you can
> take that into account when you choose a language. The
> ability of this kind of microbenchmark to help you make
> that prediction is limited at best.
>
> By the way, I often say that you get the most speed from
> Lisp, compared to other languages, because you can make
> your program work correctly in less time, thus leaving
> more time between then and the project deadline to work
> on performance improvement. And big performance wins
> come from high-level approaches, which also take time
> to develop, so developer productivity translates directly
> into runtime performance, if you decide to care about
> runtime performance.
>
> Warm congratulations to the SBCL maintainers
> (hi, Nikodemus!). They deserve lots of credit. These
> results are extremely impressive!
>
> -- Dan
(It's not obvious to me that these results are particularly different
to what they were a year ago?)
>
>
> If we were typing as fast as we can think of the solution, there would be
> little difference in development time between languages. We spend most
> of the time reading documentation, thinking, and understanding what we
> wrote months ago.
> 1.- For thinking, Lisp has a lot of facilities, including functional,
> object-oriented, and imperative programming, a large library, more or
> less uniform semantics, a REPL for testing while typing, and automatic
> memory management. The rest is up to the knowledge and intelligence of
> the programmer.
> 2.- For understanding, Lisp is a very readable language, and has macro
> facilities which allow you to construct embedded micro-languages. This
> eliminates a lot of verbosity.
> 3.- For documentation, the language encourages it with built-in
> facilities like inline documentation for functions and classes, which
> can be consulted in real time using Slime or any other capable editor
> (this is equivalent to Javadoc, but better).
>
> So Lisp is a winner here.
Fuzzy completion is awesome. Like the mouse scroll wheel: once you are
used to it, you can't do without it.
--------------
John Thingstad
This assumption of yours is exactly the cause of the disagreement on this
thread.
Programmers with more experience than you are trying to tell you that the
inference doesn't hold. You can only make very poor predictions about full
application speed, based only on microbenchmarking results.
Your assumption that microbenchmarking is a good proxy for real application
benchmarking, is false.
> The faster it is at concrete things, the faster it should be in general.
> Any point at which it is not fast is a bottleneck that prevents the full
> application from being fast. That's logical.
Science trumps logic, I'm afraid. It doesn't matter how airtight you think
your argument is. The actual experience of real programmers writing real
applications is that microbenchmarking is not a very useful guide to the
eventual speed of the final full application.
We can debate the reasons why that might be so, but the fact that it is
so should not be in doubt.
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
I'm not saying you lack the brains God gave a potato. I'm just saying that
in a similar situation a potato -- or any tuber, for that matter -- would
almost certainly have thought before proceeding. -- "Sally Forth", 2/8/2007
> On Jun 25, 1:12 am, Javier <javu...@gmail.com> wrote:
>
>>
>> SBCL is only about 0.9 times slower than C++, 0.7 slower than C, 0.2
>> times faster than Java (client), and is finally as fast as Ocaml.
>
I might add that while the efficiency model of C is simple to understand,
the efficiency model of Lisp is notoriously difficult to understand.
It is easy to get performance 40x that of C without understanding
the reason.
I recommend 'disassemble if in doubt.
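For example (a sketch; the declarations are what make the difference
visible):
(defun add2 (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))
;; On SBCL this prints a handful of machine instructions, with no
;; generic-arithmetic calls:
(disassemble 'add2)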
Thus I also take this claim with a grain of salt.
--------------
John Thingstad
I don't think it is as poor as you state.
And you don't know what my experience is.
> Your assumption that microbenchmarking is a good proxy for real application
> benchmarking, is false.
No, it is not false. Your opinion is that it is false. You cannot
demonstrate it, just as I cannot demonstrate mine. We can only believe
our respective, contradictory positions.
> > The faster it is at concrete things, the faster it should be in general.
> > Any point at which it is not fast is a bottleneck that prevents the full
> > application from being fast. That's logical.
>
> Science trumps logic, I'm afraid.
Logic is the language for explaining science. Science is the understanding
of the logic of nature. Science doesn't trump anything.
> It doesn't matter how airtight you think
> your argument is. The actual experience of real programmers writing real
> applications is that microbenchmarking is not a very useful guide to the
> eventual speed of the final full application.
It may depend upon the kind of application, and how it is done, don't
you think so? Real programmers writing real applications know that
performance problems are usually concentrated at concrete points, called
bottlenecks. You can resolve them up to the human limit. From there on,
the speed depends on the implementation of your choice. If your
implementation doesn't generate optimal machine code for the best
algorithm you can choose, and that is not enough for you, there is
nothing more to do except change the implementation.
Compare, for example, an application written in CL, compiled in both
Clisp and SBCL. It is the same application, but under SBCL it is clearly
faster than under the other. Previously, we did some microbenchmarking
on both implementations, with the same result: SBCL is clearly the
winner.
We can, then, establish the relation. If for almost every program, big
or small, and for almost every microbenchmark, SBCL is faster, we can
say that, on average, SBCL is clearly faster. Microbenchmarking is
really a helpful thing; it cannot measure every application and
circumstance, but it lets us figure out how relatively fast a specific
implementation is before choosing it.
> We can debate the reasons why that might be so, but the fact that it is
> so should not be in doubt.
Allow me to doubt it. Demonstrate it.
And... talking about experience: have you got experience of applications
written in C that are slower than similar applications written in Lisp?
I mean correctly written ones in both cases, not correct ones written in
Lisp and bad ones written in C.
Perhaps you can surprise me... I doubt it.
I agree.
> It is easy to get performance 40x that of C without understanding
> the reason.
And we can also get performance 400x slower than C in Lisp without
understanding the reason...
> I recommend 'disassemble if in doubt.
> Thus I also take this claim with a grain of salt.
I think the shootout is not the complete truth, but an approximation.
It is useful.
The point is that you can have an abstract argument that something "ought to
be so", and it might be that we can't see what the flaw in the argument is.
That is "logic".
But the real test comes from the real world. Actually do the experiment, and
observe the results. Sometimes the real world surprises you, and produces a
result that you _thought_ you had an airtight logical argument to rule out.
That appears to be the case in this example. You think that "the faster [an
implementation] is at [microexamples], the faster it should be in general."
Yet actual experience with real programmers shows that there is little
correlation here: big programming teams working on big programs don't
actually come up with faster full applications by being forced to use
languages/implementations that happen to get the best scores in a few micro
examples.
>> It doesn't matter how airtight you think your argument is. The actual
>> experience of real programmers writing real applications is that
>> microbenchmarking is not a very useful guide to the eventual speed of the
>> final full application.
>
> It may depend upon the kind of application, and how it is done, don't
> you think so?
Of course. Yet another reason why it's premature to make any grand sweeping
conclusions based only on a few small microexamples.
> Real programmers writing real applications know that performance
> problems are usually concentrated at concrete points, called
> bottlenecks.
Agreed.
> You can resolve them up to the human limit. From there on, the speed
> depends on the implementation of your choice.
Most industrial-class implementations allow FFI or embedded assembly, if
some tiny fraction of your application really was limited by the compiler.
> If your implementation doesn't generate optimal machine code for the best
> algorithm you can choose, and that is not enough for you, there is nothing
> more to do except change the implementation.
Agreed, but what you're missing is that this does NOT imply that a real
programming team on a real problem will actually wind up with faster code,
if they use a language/implementation that "won" the microbenchmark.
Let me just pose one (of many) hypotheticals: Most real-world programming
tasks are resource-limited, say by money or time. What if the microbenchmark
winner made it more difficult for programmers to express algorithms? You
might find that the higher-level language/implementation allows more
experimentation with algorithms, so -- in REAL WORLD applications -- the
two teams actually wind up with DIFFERENT algorithms. And I'm sure you know
that the choice of algorithm and/or data structure swamps all the micro things
you've been trying to measure with these benchmarks.
Where in your comparison do you account for the ability of programmers to
_find_ the best algorithm, in a given unit of programming time?
Or, let me try another way: everything you've said applies equally well to
any high-level language vs. assembly code. Why aren't you arguing that every
program should be written in assembly language? SURELY assembly language (in
principle) wins EVERY microbenchmark.
If you start to explore your (limited?) understanding of why C is "better"
than assembly language -- despite doing WORSE on the microbenchmarks -- then
you'll start to also understand why Lisp is better than C (regardless of
microbenchmark performance).
> Compare, for example, an application written in CL, compiled in both Clisp
> and SBCL. It is the same application, but under SBCL it is clearly faster
> than under the other. Previously, we did some microbenchmarking on both
> implementations, with the same result: SBCL is clearly the winner.
This is a pretty easy case, since they're the same language, just different
implementations. And one compiles to native code, while the other uses a
byte-code interpreter.
Even so, you're still wrong to quantify over every application. It turns out
that there are significant examples you can come up with where Clisp is
faster. Perhaps (for example) as a unix shell-like scripting language, where
the source file is pure text (like most Perl or shell scripts). In that
case, the time cost of compilation gets charged to the overall running time
of the application, and it is no longer clear that SBCL is the obvious winner
over Clisp.
> We can, then, establish the relation. If for almost every program, big
> or small, and for almost every microbenchmark, SBCL is faster, we can
> say that, on average, SBCL is clearly faster.
Or, you could understand better what is going on, and make a more informed
choice.
>> We can debate the reasons why that might be so, but the fact that it is
>> so should not be in doubt.
>
> Allow me to doubt it. Demonstrate it.
Let's be Socratic here. I can help you find the answer yourself.
Why is C preferred to assembly, even though assembly can beat C on
microbenchmarks?
(C is actually intermediate between assembly and most other high level
languages, so this would be even clearer if you'd pick a different one.
I don't know what your favorite HLL is. C++? C#? F#? Haskell? Python?
Perl? Compare any of those to pure assembly.)
> Have you got experience of applications written in C that are slower
> than similar applications written in Lisp? I mean correctly written ones
> in both cases, not correct ones written in Lisp and bad ones written in C.
Ah, but you see how you had to wriggle out of it, with all the exceptions?
The REAL question of interest in the REAL world is something more like:
You've got a 5-person team, and 3-months, to implement software from scratch
that does X. If that team chose to use Java or Lisp or C (assuming equal
experience in any), which would lead to a superior final product (including
features, speed, etc.)?
That's a very, very hard question to answer. Microbenchmarks tell you almost
nothing about the answer.
Consider this: it is surely harder to make "correct" programs in C, than in
Lisp. The additional effort required to make the C program correct isn't
free. You can't sweep under the rug the fact that the Lisp program gets to a
correct state much faster. So, for a fixed total amount of programming time,
there is more time left in the Lisp camp to do profiling and performance
improvement.
Again, otherwise I ask you: "Have you got experience of applications
written in assembly that are slower than similar applications written in
C? I mean correctly written ones in both cases, not correct ones written
in C and bad ones written in assembly."
If you ignore the cost of developing correct programs, you can get all sorts
of bizarre conclusions. But those conclusions don't apply to the real world.
(Note this is all BESIDE the point of how well you can optimize a
microbenchmark, which is a small but somewhat useful piece of data for a
language and/or implementation. Your mistake is not in exploring that topic,
but in thinking it's the only one that matters for a language comparison.
It's a minor issue, of small but non-zero importance. And, by the way, Lisp
isn't too bad at that micro topic as well.)
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
When I told the people of Northern Ireland that I was an atheist, a woman in
the audience stood up and said, "Yes, but is it the God of the Catholics or the
God of the Protestants in whom you don't believe?" -- Quentin Crisp
Not at all. Read other messages I wrote in this thread.
We were just discussing speed.
I've read all your messages, but I think you still haven't got my
point. Just a small example: can you imagine an application like
Cubase or Logic written in Lisp? This is a typical application which
can be measured pretty well using microbenchmarking, because it uses a
lot of math, with simple and repetitive algorithms. Speed is so
important that implementors are even willing to sacrifice
maintainability in order to compete. In such circumstances, it doesn't
matter if the development time increases by one month. If Lisp is 2x
slower in these small algorithms, it means that the program would be
able to manage fewer tracks and fewer effects, and be less interactive.
There are lots of applications like this. Look at your OS: most parts
of the kernel, web browsers, music players, photo managers, video
editing tools, framework libraries, even games... in fact, most of the
applications we usually use require speed at the low level.
Using CFFI has a cost. If you are going to use it to rewrite the
kernel of your application, what is the point of Lisp? Just to be
glue? I'd give up on that.
Music production? Recording, producing, and mixing sounds?
Uh, yeah. Sure. There's no reason why Lisp couldn't be a fine language for
implementing such an application.
Now, at some point you get down to device drivers for hardware. On the old
Lisp Machines, you could write drivers in Lisp too. With modern hardware and
modern OSes, this is a little more of a challenge, since Windows and Unix
aren't especially friendly to Lisp. I'd recommend that you write hardware
drivers in whatever language is closest to what the OS wants, which probably
means C.
But aside from that? For the basic application itself? Sure!
Let's see ... Cubase 1.0 was released in 1989 for an Atari. Uh, yeah.
Moore's Law is our friend here. I'm sure an Intel Quad core running SBCL
could keep up with a 1989 Atari.
> This is a typical application which can be measured pretty well using
> microbenchmarking, because it uses a lot of math, with simple and
> repetitive algorithms.
And a bunch of GUI, don't forget. GUI is often the largest and most complex
part of applications like these.
As for "lots of math", the numeric performance of (some implementations of)
Lisp has been favorably compared to Fortran (the gold standard of numeric
performance):
http://portal.acm.org/citation.cfm?id=200989
SBCL, in particular, is a fork of CMUCL, which itself is a public-domain
successor to CMU Common Lisp, a research project that was created specifically
to demonstrate Fortran-level numeric performance in Lisp.
> Speed is so important that implementors are even willing to sacrifice
> maintainability in order to compete. In such circumstances, it doesn't
> matter if the development time increases by one month.
You never answered my question in the previous post: why aren't they
programming in assembly language?
> If Lisp is 2x slower in these small algorithms, it means that the program
> would be able to manage fewer tracks and fewer effects, and be less
> interactive.
Again, you confuse microbenchmarks with overall application performance.
The way you optimize programs is to find the very tight loops where they
are spending 99% of their CPU time. This is only a tiny fraction of the
code, and vastly easier to optimize than the whole program.
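As a sketch of what optimizing such a loop looks like in Lisp (the
function and declarations are illustrative, not from any benchmark):
(defun dot (a b)
  "The 1% inner loop: type declarations let the compiler emit tight code."
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 0)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length a) acc)
      (incf acc (* (aref a i) (aref b i))))))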
> There are lots of applications like this. Look at your OS: most parts of
> the kernel, web browsers, music players, photo managers, video editing
> tools, framework libraries, even games... in fact, most of the applications
> we usually use require speed at the low level.
You persist in the delusion that using Lisp would force a slower application.
> Using CFFI has a cost. If you are going to use it to rewrite the kernel
> of your application, what is the point of Lisp? Just to be glue? I'd
> give up on that.
Ever hear of the 80/20 rule? The point is that the vast majority of your
programming effort can take advantage of the superior programming language,
EVEN IF the tight loops need to be rewritten in assembly language (or C).
And, of course, you haven't at all established that something other than Lisp
is required for the inner loops (vs. optimizing within the Lisp language,
using an optimizing compiler like SBCL).
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
Dear Mrs, Mr, Miss, or Mr and Mrs Daneeka: Words cannot express the deep
personal grief I experienced when your husband, son, father or brother was
killed, wounded, or reported missing in action. -- Joseph Heller, _Catch-22_
> Javier <jav...@gmail.com> wrote on Thu, 26 Jun 2008:
>> I've read all your messages, but I think you still haven't got my
>> point. Just a small example: can you imagine an application like Cubase or
>> Logic written in Lisp?
>
> Music production? Recording, producing, and mixing sounds?
>
> Uh, yeah. Sure. There's no reason why Lisp couldn't be a fine language for
> implementing such an application.
>
> Now, at some point you get down to device drivers for hardware. On the old
> Lisp Machines, you could write drivers in Lisp too. With modern hardware and
> modern OSes, this is a little more of a challenge, since Windows and Unix
> aren't especially friendly to Lisp. I'd recommend that you write hardware
> drivers in whatever language is closest to what the OS wants, which probably
> means C.
Just bypass these OSes. Use Movitz, implement your sound driver in
lisp and have fun!
--
__Pascal Bourguignon__ http://www.informatimago.com/
COMPONENT EQUIVALENCY NOTICE: The subatomic particles (electrons,
protons, etc.) comprising this product are exactly the same in every
measurable respect as those used in the products of other
manufacturers, and no claim to the contrary may legitimately be
expressed or implied.
Actually, both are important. A language can be terse (short tokens) and
concise (fewer tokens).
Lisp is sesquipedalian http://www.answers.com/sesquipedalian+?gwp=11&ver=2.3.0.624&method=3
and concise.
J / K and Q are both terse and concise. But the most important thing
to me is the style of coding a language promotes.
here's the best explanation so far from http://pozorvlak.livejournal.com/89208.html
It may interest you to know that most of my development time for that
was spent typing: paging through the library, trying things out at the
REPL, tweaking, debugging, iterating. It's a style I find much easier
than staring at a blank screen and thinking very hard, and one for
which I find forgiving languages like Perl and Lisp are much better
than Haskell.
So I guess it's a matter of what type of personality you are. I'm a
Gemini, BTW :)
You're making a large and naive assumption here about exactly how "high-
speed" these applications are.
Someone already mentioned that Cubase runs on old Atari machines - I
have Cubase and a few other commercial music applications, designed to
be powerful yet capable of running smoothly on my Falcon (16 MHz CPU
with a dodgy 50 MHz upgrade board I don't use) and even my ST (8 MHz,
4 megs of RAM!).
You think you're being fair and open-minded but you're not. I think
programs like this can be implemented efficiently in Lisp without
being twice as slow as some mystical optimal C implementation.
> In such circumstances, it doesn't
> matter if the development time increases by one month. If Lisp is 2x
> slower in these small algorithms, it means that the program would be
> able to manage fewer tracks and fewer effects, and be less interactive.
And you're kidding yourself if you think implementing an application
of that size in C instead of Lisp would cost you one month extra.
The cost might be 3-6 months, a year, or even the failure of the
project (it happens, a lot, apparently).
> There are lots of applications like this. Look at your OS: most parts
> of the kernel, web browsers, music players, photo managers, video
> editing tools, framework libraries, even games... in fact, most of the
> applications we usually use require speed at the low level.
Again, you're presuming that all of these applications (come on, a web
browser?) use 100% of the CPU time to do their jobs in their optimal C/
assembly/? implementation, and presuming that the best Lisp
implementation is dog-slow. Based on some microbenchmarks.
If Lisp is so expressive, and I think it is, why not try to implement
one of these - the easiest one; perhaps the music player or a game.
Then see if it's really so slow - I doubt it.
> Using CFFI has a cost. If you are going to use it to rewrite the
> kernel of your application, what is the point of Lisp? Just to be
> glue? I'd give up on that.
The point is that if you really need it, you can just use FFI to
implement those bottlenecks you previously mentioned, in C or assembly
or Eiffel or whatever you like.
I get the feeling you're also misjudging the size of those bottlenecks
compared to the size of the whole application. Typically, such a
program will have a few really tight loops; maybe an audio/video codec
or, say, a SNES sound chip emulator core. Maybe <5% of the whole.
You're talking about it as if these bottlenecks make up 90% of the
source.
Oisín
> If Lisp is so expressive, and I think it is, why not try to implement
> one of these - the easiest one; perhaps the music player or a game.
> Then see if it's really so slow - I doubt it.
Ok, let's start with an MP3 decoder, and some CFFI for connecting
to the OS sound driver.
I'm not very fluent in Lisp, though I am in C.
Any suggestions?
> On 27 jun, 12:57, Oisín Mac Fhearaí <denpasho...@gmail.com> wrote:
>
>> If Lisp is so expressive, and I think it is, why not try to implement
>> one of these - the easiest one; perhaps the music player or a game.
>> Then see if it's really so slow - I doubt it.
>
> Ok, let's start with an MP3 decoder, and some CFFI for connecting
> to the OS sound driver.
You don't need any CFFI.
(defun theta (frequency sampling-rate)
  ;; phase increment per sample for a sine wave at FREQUENCY
  (/ (* 2 pi frequency) sampling-rate))

;; one second of a 440 Hz tone, written straight to the OSS device:
(with-open-file (dsp "/dev/dsp" :direction :output
                     :if-exists :append
                     :element-type '(unsigned-byte 16))
  (loop
    :for i :from 0 :to 44100
    :do (write-byte (truncate (1+ (sin (* i (theta 440 44100)))) 1/32768)
                    dsp)))
should be all you need.
> I'm not very fluent in Lisp, though I am in C.
> Any suggestions?
--
__Pascal Bourguignon__
Yes, but it doesn't work on OS X, and doesn't use ALSA on Linux (just
OSS emulation).
It's ok to start with something that you're comfortable with.
But what's odd, is that you seem to be deliberately choosing poor examples.
This one -- and your microbenchmarks -- are examples where the optimal
algorithm has already been worked out and is well-known, and the ONLY question
is how close the language can come to implementing the already-known optimal
assembly language.
That's just not a common scenario for real-world, valuable software. For
most significant software, a major part of the programming problem is in
figuring out what the optimal algorithms and data structures are going to be.
This isn't to say that Lisp can't help you with the task you've suggested.
But you're going to miss a lot of the power and benefit of Lisp if this is
the kind of example you think is "typical".
Ah, so now you've changed your requirements. Now it seems that your REAL
requirement is that you have an existing OS with some existing drivers, and
you want to interface to those drivers as painlessly as possible.
Naturally, when you express your goal that way, the answer is: use whatever
language the OS happens to be written in. If you're using an OS written
in Fortran, then write your code in Fortran too.
It no longer matters how good the language is in the abstract. For your
task, all that matters is the impedance between the OS and your language.
Naturally, you should choose whatever random thing the OS implementors chose,
and don't worry about whether it is good or not.
But: if this is your goal, why did you waste our time on microbenchmarks,
as though the outcome of those tests mattered to you? In reality, you had
already selected the answer, based on the form of the question you decided
was important to you.
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
If the bark is the skin of the tree, then what are the acorns? You don't want
to know. -- Deep Thoughts, by Jack Handey [1999]
You are becoming a little paranoid now.
Yes, of course I want to interface to those drivers as painlessly as
possible. I'm not going to write my own OS!!
> Naturally, when you express your goal that way, the answer is: use whatever
> language the OS happens to be written in.
This is actually why C is so successful, but I want to give Lisp a
try. But if you, a prominent Lisp user, are trying to convince me not
to do so, shouldn't I think twice?
> If you're using an OS written
> in Fortran, then write your code in Fortran too.
There is no OS written in Fortran. Most OS's are written in C.
> It no longer matters how good the language is in the abstract. For your
> task, all that matters is the impedance between the OS and your language.
More paranoia.
"All" is not correct. It matters, but there are other things that also
matter.
> Naturally, you should choose whatever random thing the OS implementors chose,
> and don't worry about whether it is good or not.
>
> But: if this is your goal, why did you waste our time on microbenchmarks,
> as though the outcome of those tests mattered to you? In reality, you had
> already selected the answer, based on the form of the question you decided
> was important to you.
Your logic is crazy.
I really wanted to make a multi-OS audio library allowing SBCL to
interact with audio drivers and audio file formats, just to serve my
future projects.
OSS is very limiting. There are a lot of things an audio driver does,
not only play sound.
I found CL-ALSA, CL-MADHL, and CM. Perhaps that is a good start. The
first two use CFFI, of course. I don't like it, but I can live with it
if there is no other choice.
If interfacing with C libraries is crucial for you, go with ECL! You
can embed C code inline, no CFFI needed! It's maintained, and the CVS
version is getting faster and faster.
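For example (a sketch, assuming ECL's FFI:C-INLINE form; check the
manual for the exact syntax, and note it only works in compiled code):
(defun c-add (a b)
  (ffi:c-inline (a b) (:int :int) :int
                "(#0) + (#1)"
                :one-liner t))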
I'm a very happy ECL user.
-PM
Maybe something like this is interesting in this context:
http://homepages.nyu.edu/~ys453/#inline_c
..it works with SBCL etc. also.
--
Lars Rune Nøstdal
http://nostdal.org/
Your example has almost nothing EXCEPT interfacing with the OS. This is
not a good example for evaluating the quality of a number of programming
languages. You'll find that the language the OS is written in to be by
far the easiest to use, for interfacing with it.
If you really mean "...let's start with...", and that this task is just
preliminary to get you started and productive with music/audio software,
and you're not going to stop after only doing the driver interface and loudly
proclaim "C is easier to use than Lisp (in my example)!" ... well, then, ok.
In that case, go ahead and use CFFI or whatever, make your interfaces ... and
then get on with the real task of writing the actual interesting application
in Lisp (or whatever language you're evaluating).
>> Naturally, when you express your goal that way, the answer is: use whatever
>> language the OS happens to be written in.
>
> This is actually why C is so successful
Yes of course.
Also, popularity helps too. There are tons of books and conferences and
classes on Java programming.
This is all independent from whether Java or C is a well-designed, productive
language for programmers.
> but I want to give Lisp a try. But if you, a prominent Lisp user, are
> trying to convince me not to do so, shouldn't I think twice?
Lisp is certainly worth trying. But if your only task is interfacing to
unix device drivers, that's not a situation where you will find Lisp more
convenient than C.
> I really wanted to make a multi-OS audio library allowing SBCL to
> interact with audio drivers and audio file formats, just to serve my
> future projects.
OK, you can do that.
Just be sure you don't stop with the driver interfaces. Actually write some
interesting algorithms in the source code as well, if you want to learn
something about Lisp.
> I found CL-ALSA, CL-MADHL, and CM. Perhaps that is a good start. The
> first two use CFFI, of course. I don't like it, but I can live with it
> if there is no other choice.
Looks like you're off to a good start.
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
Do married people live longer, or does it just seem that way?
> > but I want to give Lisp a try. But if you, a prominent Lisp user, are
> > trying to convince me not to do so, shouldn't I think twice?
>
> Lisp is certainly worth trying. But if your only task is interfacing to
> unix device drivers, that's not a situation where you will find Lisp more
> convenient than C.
[...]
> Looks like you're off to a good start.
He's been up to this nonsense for two years now. Still claiming he
wants to give Lisp a try but something-or-other is in the way. Doesn't
sound like a good start to me.
I don't think he claimed that C is better than Lisp, but that C is
faster than Lisp.
Assembly is indeed faster than every high level language, but that
doesn't make it better.
<snip>
> Consider this: it is surely harder to make "correct" programs in C, than in
> Lisp. The additional effort required to make the C program correct isn't
> free. You can't sweep under the rug the fact that the Lisp program gets to a
> correct state much faster. So, for a fixed total amount of programming time,
> there is more time left in the Lisp camp to do profiling and performance
> improvement.
Yes, but usually C allows for more manual optimization, as far as I
know.
Unless perhaps a Lisper makes heavy use of dynamically generated code
and maybe Scheme-like continuations; but I'm not sure there are many
real-world cases where you can apply these techniques to get a
performance bonus over C (or C++ with properly implemented templates
and exceptions).
Perhaps interpreters are among the few applications where these
techniques can be used efficiently. Are interpreters written in Lisp
faster than their equivalents written in C?
<snip>
This is not true.
There exist some programs, and some (very few) programmers, for which
a faster program could be written in assembler than any equivalent
written in a higher-level programming language. But for most programs,
almost all programmers are unable to write a faster equivalent in
assembler.
There was a time when all programmers were expert assembler programmers
(because there was no other way to program a computer), and when
compilers were new enough, and lacked enough optimization algorithms,
that this was true.
But since the advent of RISC processors, compilers have been able to
optimize pipeline scheduling better than any human programmer, and have
in general produced faster code than humans, for programs of the size
we create nowadays.
> Yes, but usually C allows for more manual optimization, as far as I
> know.
But overall, the quirks of C and C++ prevent compilers from making more
automatic optimizations, which is why they're doomed: high-level
programming languages will end up with compilers generating faster code.
--
__Pascal Bourguignon__ http://www.informatimago.com/
READ THIS BEFORE OPENING PACKAGE: According to certain suggested
versions of the Grand Unified Theory, the primary particles
constituting this product may decay to nothingness within the next
four hundred million years.
If you look at c.l.l, there are many people here wasting their time
speculating about programming languages, and not doing anything really
remarkable.
For example, can you show us any significant project or general
application that you are working on in Lisp? If so, how much time do
you dedicate to it, compared with how much time you waste in this
newsgroup? And how much time did you need before you wrote your first
Lisp program?
Now ask the same of the rest of the people here.
But sooner or later, I'm going to start. Perhaps when I stop reading
c.l.l and start coding...
ECL has improved considerably since the last time I saw it.
Nice. Do you know if it supports multi-threading on OS X?
Simply put, yes :-) The performance of multi-threaded Lisp still has a
lot of room for improvement, but right now it is acceptable. BTW, OS X
is my main development platform, so it is the best supported one.
Juanjo
Nice. I've been watching ECL this morning, and reading the
documentation.
I have compiled it under OSX with default options (./configure &&
make).
Under Chapter 4, there are some functions described. It seems that
they are in the MP package, but I fail to load it:
CL-USER> (require :mp)
Module error: Don't know how to REQUIRE MP.
CL-USER> (mp:process-name)
There is no package with the name MP.
The same happens with UFFI.
CL-USER> *features*
(:DARWIN :IEEE-FLOATING-POINT :RELATIVE-PACKAGE-NAMES :DFFI
 :CLOS-STREAMS :CMU-FORMAT :DLOPEN :CLOS :BOEHM-GC :ANSI-CL
 :COMMON-LISP :ECL :COMMON :PENTIUM3 :FFI :PREFIXED-API)
Is there something I forgot to do?
Also, is there any simple way to install and use asdf-install? I want
to use and install ltk, and some database engine (like cl-sql and/or
elephant).
Thanks!
Most likely they wouldn't be able to write an equivalent program in
assembler within their time and cost constraints.
But if they managed to write it, and spent enough time optimizing it,
the result would likely be faster than the optimized high-level code,
at least until compilers become more 'intelligent' than programmers.
If you take into account development time and cost into the measure of
a language speed, then even Matlab is faster than C for many
scientific/engineering applications.
> There was a time when all programmers were expert assembler programmers
> (because there was no other way to program a computer), and when
> compilers were new enough, and lacked enough optimization algorithms,
> that this was true.
>
> But since the advent of RISC processors, compilers have been able to
> optimize pipeline scheduling better than any human programmer, and have
> in general produced faster code than humans, for programs of the size
> we create nowadays.
I might be wrong, but I think that pipeline scheduling is mostly a
local issue, so it shouldn't depend very much on the program size.
Anyway, pipeline scheduling is surely tedious to do manually, but I'm
not sure whether compilers get a more efficient result than humans do.
Obviously: (asdf-install:install :asdf-install)
Otherwise, I fail to see the unsimplicity of downloading a tarball,
extracting it, adding the path to asdf:*central-registry*, and typing
(asdf:oos 'asdf:load-op :asdf-install). But that's only me.
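Spelled out, the manual route is just this (a minimal sketch; the
path is a placeholder for wherever you extracted the tarball):

  (push #p"/home/me/src/asdf-install/" asdf:*central-registry*)
  (asdf:oos 'asdf:load-op :asdf-install)
  ;; and then, for example:
  (asdf-install:install :ltk)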
--
__Pascal Bourguignon__ http://www.informatimago.com/
"Logiciels libres : nourris au code source sans farine animale."
Yeah, I thought of this too. It's unlikely that a human programmer can
hand-generate better assembly than an optimizing compiler for a high-level
language.
But it's also true that high-level languages impose some constraints on
the generated assembly. There are calling conventions, etc. Lisp allows
run-time function redefinition, so code has to be prepared for changes like
that. Etc.
I suppose the comparison would be: a search through the space of all
possible assembly language programs. Say the goal was to write a sort
function, to sort a list of integers. One approach might be: start
enumerating assembly language programs, and stop when you get a really
fast one that matches the specs. Presumably, there does exist SOME
assembly program which is faster/smaller/better than the result of any
(current) optimizing compiler on any high-level language.
That's not a practical way to program, of course. But if we're just talking
in theory...
Well, in any case, the whole point was that many more things matter in
the practical realities of programming than just the raw absolute
speed of the final code on a few microbenchmarks.
-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
If trees could scream, would we be so cavalier about cutting them down? We
might, if they screamed all the time, for no good reason.
But how to determine that?
| Presumably, there does exist SOME assembly program which is
| faster/smaller/better than the result of any (current) optimizing
| compiler on any high-level language.
By the speed-up theorem, certainly... (Not that it is feasible to
construct it by hand, of course.)
---Vassil.
--
Peius melius est. ---Ricardus Gabriel.
The default option is _not_ to include multi-thread support. The
reason is that it slows down Common Lisp and many people don't use it.
> Under Chapter 4, there are some functions described. It seems that
> they are under MP package, but I fail to load it up:
If you configure with ./configure --enable-threads plus any other
options you need, then this package is _always_ included. There is no
need to use REQUIRE.
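Once rebuilt that way, a quick smoke test might look like this (a
minimal sketch, assuming the CMUCL-style mp:process-run-function API;
double-check against the manual for your version):

  ;; Spawn a thread that prints a greeting and exits.
  (mp:process-run-function 'hello
    (lambda () (format t "~&hello from a new thread~%")))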
> Also, is there any simple way to install and use asdf-install? I want
> to use and install ltk, and some database engine (like cl-sql and/or
> elephant).
I think asdf-install contains fixes to add support for ECL. However my
experience so far has been mixed: there are still many packages out
there which do not include support for ECL and thus fail to install
properly. For instance, this has been the situation with Hunchentoot
so far: porting was not too difficult (thanks Geo!), but getting the
patches into the distribution takes time. Hence I normally use CVS
copies + my own ASDF scripts (see mailing list) + my own patches to
those libraries if needed.
Juanjo
Hi,
a few questions:
* does ECL support X11/CLX?
* are the threads native?
* compiler compiles to C - also to byte codes?
* GC under Mac OS X is Boehm/Weiser conservative/generational? Right?
* Delivery as shared library or application? Also as dumped images?
* 32bit support on Mac OS X on PPC and Intel? Right? How about 64bit?
I was just building ECL on my MacBook Pro and it is indeed painless.
Well done!
Regards,
Rainer Joswig
It ships a possibly out-of-date version of the telent CLX library. I
must admit I have not built it for some years now and somebody should
upgrade it to the latest version -- but I lost my account on that
project and have no time to keep track of it.
> * are the threads native?
Yes. They are posix threads.
> * compiler compiles to C - also to byte codes?
It ships by default with a bytecodes compiler that works on the fly.
Interpreted in our case means compiling to bytecodes and then running
them. This is indeed the most stable component of the suite. By
comparison, the C compiler needs to be significantly improved.
> * GC under Mac OS X is Boehm/Weiser conservative/generational? Right?
Both. There is a flag --enable-gengc that activates the generational
algorithm in the Boehm/Weiser library. That brings ECL's performance
on par with SBCL on some real life code.
> * Delivery via as shared libray, application? Also as dumped images?
No dumped images because there is no portable way to do it. However,
you can link all your code into a single program, shared library or
FASL file using ASDF. See http://ecls.sourceforge.net/new-manual/ and
more precisely the section on ASDF extensions.
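For instance, building a standalone executable out of an ASDF system
looks roughly like this (a sketch of the ASDF extension described in
the manual; :my-system is a placeholder for your own system name):

  (require :asdf)
  ;; Link the system and all its dependencies into one program.
  (asdf:make-build :my-system :type :program :monolithic t)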
> * 32bit support on Mac OS X on PPC and Intel? Right? How about 64bit?
I do not have a 64bit machine right now, but if you find out the right
compiler flags, it should be rather painless to include it in the
configuration file. ECL itself is word-size and word-endian agnostic.
It simply does not care whether you use 32, 48, 64 or 128 bits.
> I was just building ECL on my MacBook Pro and it is indeed painless.
> Well done!
I am glad to read this. Should any problem occur, please contact me
either by private email or in the mailing list -- I prefer this
method.
Juanjo
The --enable-gengc (generational garbage collector) and
--enable-smallcons (conses are two-word objects) flags increase the
speed of ECL significantly, but are still only available in the CVS
unstable version. If time permits and I finish setting up the test
farm, I will release a new version of ECL this summer. This is
particularly important because only with the recent changes does
Maxima run on ECL.
Juanjo
Let me be a bit more precise here. There are several interesting
configuration flags which are only available in the unstable version,
reachable only via CVS or git. These flags are --enable-gengc (enable
generational garbage collector), --enable-smallcons (conses take only
2 words) and --enable-asmapply (on x86/32 we use assembler for faster
function dispatch).
Juanjo
Thank you very much for all your work.
I'm starting to like ECL.
Just one more question: SLIME fails to show the arg names of some
functions. It's a minor issue, but I'm used to it. Also, apropos just
shows in what package the function is defined and little more.
Any ideas on this and SLIME support in general?
I would like to help the ECL project in the future. I'm still learning.
Last questions:
* Unicode?
* Gray streams (or similar)?
* MOP?
* SSL streams?
* SLIME support?
This is a known issue. There is work in progress to ease gathering of
debug information such as argument names, positions in files, etc. It
is just a problem of coordination between the Slime and ECL projects:
those features can be added, but I want them to remain optional, so
that memory consumption is reduced; at the same time I need a precise
specification of what is needed and what should be the interface.
What I was repeatedly told is to look at the slime code myself and do
it like other implementations, which means that it automatically
becomes a second priority on my list and only improves when other nice
people like Geo Carncross push the project further.
> I would like to help the ECL project in the future. I'm still learning.
You are welcome to join whenever you feel like. There are many small
tasks like the ones mentioned before that can be carried by new users.
Currently there is only support for wide characters in strings and
#\Uxxxx codes when reading. The streams I/O system still needs to be
upgraded to support UTF-* and other input formats.
> * Gray streams (or similar)?
(use-package "GRAY")
If for some reason you really want CL:CLOSE to be a generic, then also
(REDEFINE-CL-FUNCTIONS). Otherwise it will still work but you will
have to do something like (DEFMETHOD GRAY:CLOSE (...) ...).
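To make the protocol concrete, a user-defined stream might look like
this (a minimal sketch using the standard Gray generic function names;
the class and accessor names are made up for the example):

  ;; A character output stream that just counts what is written to it.
  (defclass counting-stream (gray:fundamental-character-output-stream)
    ((written :initform 0 :accessor chars-written)))

  (defmethod gray:stream-write-char ((stream counting-stream) char)
    (incf (chars-written stream))
    char)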
> * MOP?
Look for it in the CLOS package -- though now that I come to think
about it, I should eventually export the appropriate names through a
MOP package. Some "difficult" features are missing, such as
METHOD-LAMBDA.
> * SSL streams?
Not yet, just SB-BSD-SOCKETS, but enough that Hunchentoot runs on it.
Given that FLEXI-STREAMS and similar packages work with ECL (thanks
again Geo!), support for SSL streams should be rather easy.
> * SLIME support?
There is some support on the slime side. Swank simply works.
Integrating further features such as better documentation and function
argument names can be easily done.
Juanjo
Additionally, the ECL CVS version is now incompatible with SLIME 2.0.
It does work with the SLIME CVS version, but that version is somewhat
less complete (something that I can't understand).
> What I was repeatedly told is to look at the slime code myself and do
> it like other implementations, which means that automatically becomes
> second priority on my list and only improves when other nice people
> like Geo Carncross push the project further.
>
>> I would like to help the ECL project in the future. I'm still learning.
>
> You are welcome to join whenever you feel like. There are many small
> tasks like the ones mentioned before that can be carried by new users.
I'll subscribe to the ECL list and see what I can do.
> Additionally, the ECL CVS version is now incompatible with SLIME 2.0.
> It does work with the SLIME CVS version, but that version is somewhat
> less complete (something that I can't understand).
You are confused. SLIME 2.0 is too old and doesn't work with most
modern versions of various CLs. SLIME CVS has a contrib system and some
of the functionality was moved there. I use:
(slime-setup '(slime-fancy slime-asdf slime-indentation))
--
Luís Oliveira
http://student.dei.uc.pt/~lmoliv/
>
> I hope that you agree with me that if a Lisp implementation is good at
> microbenchmarking, it also means, in some way, that it will be good
> when using it for full applications.
Well, no, I don't. That is, the ratios of speed that we see in
various microbenchmarks vary, sometimes greatly, depending on what you
measure. The problem is more severe in full applications, where the
total time tends to depend on even more variables, such as GC
overhead, demand paging overhead, CPU cache hit rates (being more
complicated than in microbenchmark situations), use of many more
language features, and so on.
I do think that being able to show that Lisp can hold its own on some
microbenchmarks is a good first step toward getting someone to believe
that Lisp might also do well on real applications. That's a very
different statement, though. To put it another way, suppose I'm
trying to convince Alice that doing our new project in Lisp is a good
idea, and she says, no, everybody knows Lisp is much too slow for
anything real. Now what do I do? Well, one thing I can do is show
her some nice microbenchmark results. It doesn't prove anything about
how fast the real project will run. But it does make it feel more
plausible that the real application has some chance of being fast, and
it might make Alice more optimistic about Lisp, and more amenable to
spending the time to do some real benchmarking rather than simply
dismissing Lisp out of hand. In that respect, I think this kind of
microbenchmarking is valuable for the Lisp "cause".
-- Dan
>
> > for most programs, almost all programmers are not able to write a
> > faster equivalent in assembler.
>
> Most likely they wouldn't be able to write an equivalent program in
> assembler, within their time and cost constraints.
>
> But if they manage to write it and spend enough time optimizing it,
> the result will likely be faster than the optimized high-level code,
> at least until compilers become more 'intelligent' than programmers.
So if we have a team of programmers with no time and cost constraints,
and who are as knowledgeable about writing assembly language as are
the best compiler-writers (who know all about how to predict the speed
of execution of a sequence of instructions, which in modern CPUs is
extremely difficult), does that mean they'd come up with a faster
program?
Well, if the job is to write a relatively small numerical algorithm,
yes, I'd concede that. If the job is to write a huge transaction
processing system that makes heavy use of a database management
system, message parsing and production, communication with other
computers, complex abstractions, arcane industry business rules, user
interfaces, transactions, timeouts, and so on? Will all that careful
hand-coding make that much difference? I suppose if we truly postulate
infinite time and resources, perhaps, but the entire point of software
engineering and of doing programming in reality is that there's no
such thing as infinite time and resources.
-- Dan
>
> I might add that while the efficiency model of C is simple to understand,
> the efficiency of Lisp is notoriously difficult to understand.
I completely disagree. Lisp contains a lot of functions that amount
to being utility libraries. If your C code uses libraries, you have
exactly the same problem of not knowing precisely how fast library
code is. If you don't use libraries, then you have vast amounts of C
code, and the total work to predict speed becomes correspondingly
vast. There's nothing about Lisp that makes it hard to understand its
performance, that doesn't exist analogously in any other language.
(And don't tell me that the big difference is garbage collection. If
you think that it's easy to predict the speed of malloc and free,
think again, and try it in a real software system.)
> I hope that you agree with me that if a Lisp implementation is good at
> microbenchmarking, it also means, in some way, that it will be good
> when using it for full applications.
Nope. Full applications (that is, bigger and more complex code) come
with additional parameters that can pretty much defeat performance, and
that microbenches will miss. I'm thinking about the quality of the
register allocation policy for example. This is a problem that doesn't
show up in the small-and-simple code of microbenches.
The way I see it, if you want to convince somebody that a language L is
good, you *need* to start with microbenches because if you lose even
for low-level stuff, it's not worth going further. However, if you win,
then you might get people's attention.
See my ELW'06 paper about that:
http://www.lrde.epita.fr/~didier/research/verna.06.ecoop.pdf
--
5th European Lisp Workshop at ECOOP 2008, July 7: http://elw.bknr.net/2008/
Didier Verna, did...@lrde.epita.fr, http://www.lrde.epita.fr/~didier
EPITA / LRDE, 14-16 rue Voltaire Tel.+33 (0)1 44 08 01 85
94276 Le Kremlin-Bicêtre, France Fax.+33 (0)1 53 14 59 22 did...@xemacs.org
> On Jun 26, 8:41 am, Javier <javu...@gmail.com> wrote:
> > On 26 jun, 13:40, dlweinr...@gmail.com wrote:
>
> >
> > I hope that you agree with me that if a Lisp implementation is good at
> > microbenchmarking, it also means, in some way, that it will be good
> > when using it for full applications.
>
> Well, no, I don't. That is, the ratios of speed that we see in
> various microbenchmarks vary, sometimes greatly, depending on what you
> measure. The problem is more severe in full applications, where the
> total time tends to depend on even more variables, such as GC
> overhead, demand paging overhead, CPU cache hit rates (being more
> complicated than in microbenchmark situations), use of many more
> language features, and so on.
True.
Often these microbenchmarks use little library code and
are compiled to unsafe code (no runtime checks, ...).
I often prefer the opposite: lots of library code
and safe code (library + application code).
So it may be equally important to measure how a Lisp system
behaves with fully safe code, lots of debug info included,
and using lots of pre-compiled libraries (hopefully also
compiled to safe code).
Safe code should also be used for the FFI. I like to see
runtime checks of argument types and return values to/from foreign
code.
> I do think that being able to show that Lisp can hold its own on some
> microbenchmarks is a good first step toward getting someone to believe
> that Lisp might also do well on real applications. That's a very
> different statement, though. To put it another way, suppose I'm
> trying to convince Alice that doing our new project in Lisp is a good
> idea, and she says, no, everybody knows Lisp is much too slow for
> anything real. Now what do I do? Well, one thing I can do is show
> her some nice microbenchmark results. It doesn't prove anything about
> how fast the real project will run. But it does make it feel more
> plausible that the real application has some chance of being fast, and
> it might make Alice more optimistic about Lisp, and more amenable to
> spending the time to do some real benchmarking rather than simply
> dismissing Lisp out of hand. In that respect, I think this kind of
> microbenchmarking is valuable for the Lisp "cause".
>
> -- Dan
> On Jun 26, 12:13 pm, "John Thingstad" <jpth...@online.no> wrote:
>> On Thu, 26 Jun 2008 13:40:20 +0200, <dlweinr...@gmail.com> wrote:
>>
>
>>
>> I might add that while the efficiency model of C is simple to understand,
>> the efficiency of Lisp is notoriously difficult to understand.
>
> I completely disagree. Lisp contains a lot of functions that amount
> to being utility libraries. If your C code uses libraries, you have
> exactly the same problem of not knowing precisely how fast library
> code is. If you don't use libraries, then you have vast amounts of C
> code, and the total work to predict speed becomes correspondingly
> vast. There's nothing about Lisp that makes it hard to understand its
> performance, that doesn't exist analogously in any other language.
Well I can mention a few things.
You have a function returning an integer.
Is it a bignum or a fixnum?
SQRT works for complex numbers.
Thus it can be surprisingly slow.
MOD returns the remaining fraction on the heap.
Thus it is 10 times slower than REM.
It is variations in type like this which make performance unpredictable.
In C++ you instantiate a template and know the type the code works with.
In Lisp it is at the implementor's discretion whether code is inlined
or not. Maybe it can use the type info, maybe not.
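Declarations narrow that variation down considerably, though. A
minimal sketch (the function name is made up; the exact win depends on
the implementation):

  (defun fast-rem (a b)
    ;; With fixnum types declared and safety off, the compiler can
    ;; emit a machine-level remainder instead of generic arithmetic.
    (declare (type fixnum a b)
             (optimize (speed 3) (safety 0)))
    (rem a b))

  (disassemble 'fast-rem)  ; inspect what the compiler actually emitted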
> (And don't tell me that the big difference is garbage collection. If
> you think that it's easy to predict the speed of malloc and free,
> think again, and try it in a real software system.)
This has nothing to do with it and is highly system dependent anyway.
But having more control over what goes on the stack and what goes on
the heap does help.
--------------
John Thingstad
Probably the difference is that Lisp _encourages_ using high-level
library constructs, while C programmers are often satisfied with the
bare minimum.
> (And don't tell me that the big difference is garbage collection. If
> you think that it's easy to predict the speed of malloc and free,
> think again, and try it in a real software system.)
While it might be true if overall throughput is measured, in the case
of real-time applications response time is critical, and malloc/free
is _orders of magnitude_ more predictable than GC in this field
(though there can still be surprises with malloc, especially with
buggy/broken implementations).
To my knowledge, no modern CL implementation implements incremental
GC with response-time guarantees.
Or do they?
I would love to see them spend their time on getting memory management
on Windows right and finally bringing the I/O layer up to par, so we
finally see a viable FOSS Lisp on that major platform.
Johann
John> Javier wrote:
>> Have you seen the shootout recently?:
>> http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
>> SBCL is only about 0.9 times slower than C++, 0.7 slower than C, 0.2
>> times faster than Java (client), and is finally as fast as Ocaml.
>> I'm pretty sure that if the SBCL team continue to work, it will
>> become
>> one of the fastest language implementation, very close to C++ itself.
>> Thank you for your work!
John> I would love to see them spend their time on getting memory management
John> on Windows right and finally bringing the I/O layer up to par, so we
John> finally see a viable FOSS Lisp on that major platform.
Clisp, gcl, and ecl run on Windows. What makes them not a viable FOSS
Lisp?
Ray
As a side note, the language shootout doesn't measure the number of
lines in the program but the gzipped bytes.
The idea is that this reduces the penalty for longer keywords etc.,
since they will be compressed down more effectively.
What you are calling a "fact" certainly is in doubt. Microbenchmarks
are used to find a huge variety of fundamental problems (e.g.
exception handling on .NET is 600x slower than OCaml's) and those
results are absolutely essential when optimizing whole programs.
IMHO, your "fact" is completely wrong.
--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
I think it is particularly enlightening to note just how far the Haskell
contributors took that idea.
For example, the Haskell implementation of k-nucleotide is 2x slower and
2.5x longer than OCaml because it is filled with unsafe pointer arithmetic
written using the FFI (which has no place in this program):
htHash (I# max) (I# size) ptr@(Ptr p) = abs . inlinePerformIO . IO $ go p 0#
where
lim = p `plusAddr#` size
go p acc !s
| p `geAddr#` lim = (# s, I# (acc `remInt#` max) #)
| otherwise = case readInt8OffAddr# p 0# s of
(# s, i #) -> go (p `plusAddr#` 1#) (5# *# acc +# i) s
There are very few significant Haskell programs in existence. At ~10kLOC,
Frag (a simple 3D game) is one of them and it segfaulted on my 64-bit
machine when I tried to run it precisely because it was full of this kind
of unsafe pointer arithmetic.
I love the way Lispers litter their non-Lisp code with superfluous
parentheses. :-)
Actually Lisp is almost always longer by any reasonable metric including
LOC. I went out of my way to measure the verbosity of Lisp on the ray
tracer benchmark:
http://www.ffconsultancy.com/languages/ray_tracer/results.html
Lisp is among the most verbose languages both by LOC and non-whitespace
bytes.
> Also, in big applications, defmacro and functional programming allow
> us to reduce complexity and size.
Macros without basic features mean that Lisp projects carry a
considerable constant overhead of boilerplate code spent Greenspunning
features like pattern matchers.
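A minimal illustration of the kind of by-hand dispatch and
destructuring meant here, in standard CL (the node shapes (:num n),
(:add l r), (:mul l r) are invented for the example):

  (defun eval-node (node)
    (case (first node)
      (:num (second node))
      (:add (+ (eval-node (second node)) (eval-node (third node))))
      (:mul (* (eval-node (second node)) (eval-node (third node))))))

  (eval-node '(:add (:num 1) (:mul (:num 2) (:num 3))))  ; => 7

In an ML-family language each CASE clause would be a pattern that
binds the subterms directly.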
If you were benchmarking programming languages for speed, wouldn't
the programs be optimized for speed? Wouldn't a speed-optimized Common Lisp
ray-tracing benchmark program have lots of declaration-heavy sections,
greatly inflating its size?
Should an honest researcher be using small, speed-optimized pieces of
benchmark source code as the basis for claims about the code density
of typical programs written in some programming language?
LOC and bytes are /not/ particularly reasonable metrics, either.
I suggest that counting the number of atoms in a program's abstract
syntax tree is a better indicator of its true size.
LOC can be reduced by cramming more code into a line of code.
A raw character count, even if whitespace is excluded, is inflated by
the use of longer identifiers. Rewriting a function with longer
identifiers doesn't change its complexity.
P.S. The sad thing is, I will die eventually, but Lisp will never die.
But who will continue my important work of insulting Common Lisp
gratuitously? Who?
Jon Harrop
You are not the first, nor will you be the last, don't worry :)
It is just a matter of getting the numbers straight. For example, your
ray tracer (with low-level optimizations) can be rewritten in at least
132 lines with the same semantics using another standard, albeit less
tight, indentation style (just spread the if's and move around the
in's).
Juho's version is just 162 lines, once you maintain CL style but "get
the hang of it" :)
Cheers
--
Marco
>Vend <ven...@virgilio.it> writes:
>> Assembly is indeed faster than every high level language, but that
>> doesn't make it better.
>
>This is not true.
>
>There exist some programs such that there exist some (very few)
>programmers who could write a faster program in assembler than any
>equivalent program written in a higher level programming language.
>
>But
>
>for most programs, almost all programmers are not able to write a
>faster equivalent in assembler.
This can be trivially shown to be incorrect.
Compilers frequently have a limited scope of awareness where several
logical code sequences are interleaved - under register pressure they
only rarely choose the most effective instruction sequencing and
frequently optimization does not completely remove superfluous code
resulting from generator templates or inserted by various analysis
phases and transformations (SSA phi functions are a prime example -
take a look at the hoops GCC jumps through to try to remove them).
With shockingly little effort, most programmers are able to take an
assembler listing from most compilers and optimize the code to run
faster (in some corner cases, much faster).
I spent years implementing hard real time systems where virtually
every cycle counted - and a great deal of time teaching young
programmers how to optimize algorithms and shave cycles when needed.
Once the programmer understands the workings of the pipeline and
studies the instruction set, she can *usually* beat the compiler
alone. In 20 years, I have only seen a handful of cases where the
compiler managed to produce optimal code to begin with.
btw: I have also written a production compiler for an embedded
DSP+FPGA system (speaking of weird constraints!) and several hobby
compilers. Compilers and runtime systems are two of my main geek
hobbies.
George
--
for email reply remove "/" from address
You're right; after all, there's more processing power in a brain than
in the processors on which compilers can work right now.
I forgot the economic term of the equation.
For most programs, for most (real) budgets (money and time), almost
all programmers are not able to write a faster equivalent in
assembler.
No human programmer would be able to translate 250 KB of C++ sources
into a 5 MB executable in less than 5 minutes and for less than $1.
But this is what gcc does for me flawlessly several times a day.
--
__Pascal Bourguignon__ http://www.informatimago.com/
You're always typing.
Well, let's see you ignore my
sitting on your hands.
> Pascal J. Bourguignon wrote:
>> To help people understand the futility of this exercise, compare:
>>
>> #include <stdio.h>
>> int fact(int x){
>> return((x<=1)?1:(x*fact(x-1)));
>> }
>> int main(void){
>> printf("%d\n",fact(1000));
>> return(0);
>> }
>
> I love the way Lispers litter their non-Lisp code with superfluous
> parentheses. :-)
They are not superfluous: they allow the use of quick editing commands
to navigate and manipulate sub-syntax-trees.
If you write:
return (x<=1)?1:(x*fact(x-1));
^
and with the cursor at the position indicated by the caret, when you
type C-M-k, it cuts only (x<=1). You would have to count the syntactic
elements to cut the whole ?: expression, or work at the level of
characters, to find the next semicolon and cut a range of characters.
Instead, with:
return((x<=1)?1:(x*fact(x-1)));
^
when you type C-M-k, you get ((x<=1)?1:(x*fact(x-1)))
Etc, for all the structured editing commands.
--
__Pascal Bourguignon__
Well, of course. That's why it would be preferable to re-implement in
Lisp the libraries written in lower-level languages. Using FFI is a
Q&D temporary solution.
--
__Pascal Bourguignon__
Compare to OCaml (43 lines) and Haskell (45 lines).
Yes, sure. I cheated by making Juho's example 162 lines. :) I suppose
we can cheat in many other ways.
Cheers
--
Marco
Much of the Lisp code is wasted performing tedious manual deconstructions of
data structures because Lisp lacks pattern matching. I wonder how much more
reasonable the code might look if you pull in cl-unification.
BTW, what exactly is the difference between paren matching and
pattern matching?
Maybe you all Common Lispers don't need the latter because you already
have the former?
Jon Harrop
pattern matching as in cl-ppcre, cl-yacc, cl-unify,...?
--------------
John Thingstad
As in Qi.
|...
|| BTW, what's exactly the difference between paren matching and pattern
|| matching?
|| ...
| pattern matching as in cl-ppcre, cl-yacc, cl-unify,...?
As in
f 0 = 0
f 1 = 1
f (n+2) = f n + f (n+1)
Unless we are talking about parent matching, that is...
---Vassil.
--
Peius melius est. ---Ricardus Gabriel.
> On Thu, 10 Jul 2008 16:52:41 +0200, "John Thingstad" <jpt...@online.no>
> said:
>
> |...
> || BTW, what's exactly the difference between paren matching and pattern
> || matching?
> || ...
> | pattern matching as in cl-ppcre, cl-yacc, cl-unify,...?
>
> As in
>
> f 0 = 0
> f 1 = 1
> f (n+2) = f n + f (n+1)
>
> Unless we are talking about parent matching, that is...
>
> ---Vassil.
>
>
I think my point was that there are many types of pattern matching.
Why should one be a part of the language and the others not?
Seems to me there are pretty few places in the code where algebraic
matching helps.
--------------
John Thingstad
| On Fri, 11 Jul 2008 00:25:13 +0200, Vassil Nikolov
| <vnikolo...@pobox.com> wrote:
|| On Thu, 10 Jul 2008 16:52:41 +0200, "John Thingstad"
|| <jpt...@online.no> said:
||
|| |...
|| || BTW, what's exactly the difference between paren matching and pattern
|| || matching?
|| || ...
|| | pattern matching as in cl-ppcre, cl-yacc, cl-unify,...?
||
|| As in
||
|| f 0 = 0
|| f 1 = 1
|| f (n+2) = f n + f (n+1)
||
|| Unless we are talking about parent matching, that is...
| I think my point was that there are many types of pattern matching.
| Why should one be a part of the language and the others not?
| Seems to me there are pretty few places in the code where algebraic
| matching helps.
I did not follow up in disagreement with you...
Indeed. Looking at functional code it is clear that users benefit from
pattern matching (e.g. in SML, OCaml, Haskell, Scala, F#, Mathematica and
even Scheme) only if it is standardized and well implemented.
> Seems to me there are pretty few places in the code where algebraic
> matching helps.
Try any of the above languages and the enormous advancement made by built-in
pattern matching will be blindingly obvious.
Compare this OCaml code:
let rec intersect orig dir (lam, _ as hit) (center, radius, scene) =
  match ray_sphere orig dir center radius, scene with
  | lam', _ when lam' >= lam -> hit
  | lam', [] -> lam', unitise (orig +| lam' *| dir -| center)
  | _, scenes -> List.fold_left (intersect orig dir) hit scenes
With the equivalent Common Lisp:
(defun intersect (orig dir scene)
  (labels ((aux (lam normal scene)
             (let* ((center (sphere-center scene))
                    (lamt (ray-sphere orig dir center
                                      (sphere-radius scene))))
               (if (>= lamt lam)
                   (values lam normal)
                   (etypecase scene
                     (group
                      (dolist (kid (group-children scene))
                        (setf (values lam normal)
                              (aux lam normal kid)))
                      (values lam normal))
                     (sphere
                      (values lamt
                              (unitise
                               (-v (+v orig (*v lamt dir)) center)))))))))
    (aux infinity zero scene)))
The considerable difference in verbosity is primarily due to pattern
matching, which is used both for destructuring (replacing
MULTIPLE-VALUE-BIND, DESTRUCTURING-BIND, CAR, CDR etc.) and for dispatch
(replacing COND etc.).
With pattern matching, the syntactic overhead is as low as possible, e.g.
destructuring a pair return value:
# let polar x y = sqrt(x *. x +. y *. y), atan2 y x;;
val polar : float -> float -> float * float = <fun>
# let r, theta = polar 3. 4.;;
val r : float = 5.
val theta : float = 0.927295218001612187
Compared to:
* (defun polar (x y)
    (values (sqrt (+ (* x x) (* y y))) (atan y x)))
POLAR
* (multiple-value-bind (r theta) (polar 3.0 4.0)
    (vector r theta))
#(5.0 0.9272952)
Pattern matching improves most function definitions.
Thinking of it, should I use K in future?! Should I create (in the
sense of: building from nothing, like God) a new K# language?
Should I?
What's your kind Common Lispers' opinion?
(I think I should. Definitely.)
Jon Harrop
You should start with COBOL# first :)
Seriously: pattern matching as a substitute for MULTIPLE-VALUE-* and
DESTRUCTURING-BIND is a good thing.
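For the record, standard CL destructuring already covers the everyday
cases; what is missing is the combined dispatch-plus-destructure of an
ML match. A small sketch:

  (destructuring-bind (a (b c) &rest more) '(1 (2 3) 4 5)
    (list a b c more))  ; => (1 2 3 (4 5))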
If you read some of the best K tutorials, even there you see hints of
a "maybe K went too far in succinctness" syndrome :)
Cheers
--
Marco
Are you sure?
If some Lisp Nobody (like JH) throws in some "advanced-language-
technology-progress", we all (real world coding Lispers) have to
forget all of our Lisp knowledge (I hate to call it "wisdom") which is
(1) *not* evident at first glance, like many ingenious theories (they
seem in fact too simple to be true...)
(2) not visible to anybody not deeply rooted inside the Lisp way of
understanding the programming process
(3) (maybe the most important point:) not flexible enough to be
applied to *any* coding problem around ("pattern matching" is in fact
nothing more than a very, very restricted and quarter-baked macro
system, which can be applied only in a very, very restricted number of
coding problem cases...)
Do I have to continue?
(I'd really like to do so, as I could bring you literally *thousands*
of examples from already *existing* Common Lisp code, which in more
"modern" languages would have to be made up...)
-PM
That's true. For most purposes, it would be completely impractical to
write any significant portion of a whole application in assembler.
And if you did, it is likely that a compiler would still provide
better performance for significant portions of it simply by brute
force and programmer inattention - assembly fu has a well deserved
reputation for being inconsistent. The right way is always to
leverage the compiler and do peephole optimization only where
necessary.
Pattern matching and macros do not even attempt to provide similar
functionality. In fact, Mathematica really highlights just how much macros
stand to benefit from pattern matching.
> Do I have to continue?
You have to restart.
I almost agree. What you should count is the number of nodes in the
parse tree (if it's a binary tree then the number of terminals is one
greater than the number of internal nodes, so the total is about twice
the number of internal nodes or terminals, and it doesn't matter
whether you count only internal nodes, only terminals, or both, as
long as you do it consistently).
With Lisp, the number of nodes is obvious from knowledge of the
internal CONS-tree representation. In other languages the parse tree
is not documented and is rather a pain to analyze. Since some
parse-tree nodes in other languages require more than one token, such
as "else if", there may be a slight discrepancy between the node count
and the literal token count. Still, what you suggest is probably close
enough for a reasonable approximation.
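In Lisp the count is a few lines of code over the source itself (a
minimal sketch; it counts the atoms in the CONS tree and ignores the
NILs that terminate proper lists):

  (defun count-atoms (form)
    (cond ((null form) 0)
          ((atom form) 1)
          (t (+ (count-atoms (car form))
                (count-atoms (cdr form))))))

  (count-atoms '(defun square (x) (* x x)))  ; => 6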
> This is especially true when modern environments provide word
> completion, so it doesn't actually take many more keystrokes to
> enter `long-named-function' than `f'.
IMO it's quite likely that Lisp has the advantage here. Compare:
In Lisp, you just start typing what you want, in hyphenated
English. For example if you want to know the count of number of
some element in a sequence (string or list), you type "cou" and
at that point the editor knows what you mean and can remind you the
sequence of the parameters to the COUNT function and the also
keyword options available. If you want to get the universal time,
you just start typing get-u and it can complete the rest already.
In C, it takes several minutes to figure out what the cryptic name
is for the library function that works with strings, which is
different from that which works with arrays, which is different
from what works with linked lists (which you never use anyway
because there's no decent GC). Then if you type it slightly wrong
you accidentally make it look more like *another* cryptic
abbreviation that does something entirely different. And you can't
even see if you got it right without writing a PRINTF statement and
compiling and loading and starting your entire program, then you
need to remove that PRINTF statement sometime later so that there
won't be so much clutter of trace output you can't see the PRINTF
for what you try next after this.
And then three months later when your boss tells you there's a bug
that has worked its way through customer support to the management
team and now it really needs to be fixed, so you need to look at your
code again, you can't figure out what all those C-cryptic things mean.
(OK, I overstated my case a little, but I think my point is
somewhat correct.)