
Java versus Lisp. Comparison needed.


comp.lang.scheme

May 16, 2009, 5:58:34 PM
I work for a group of scientists that study global warming. I am not a
scientist myself. I am a student who was hired to test programs and
install CO2 sensors. Most programs are in Common Lisp. Dr. Magda
Lombardo has chosen Common Lisp because she compared the performance
of SBCL with Java, and concluded that SBCL is faster. Dr. Lombardo is
not a programmer herself. In fact, she cannot tell a C program from a
Matlab program. Therefore I am in charge of porting computer
applications to her Windows XP system.

Dr. Martins is a scientist who works with lightning and protection of
electrical appliances against atmospheric discharges. His programmers
come from different backgrounds, and write lightning propagation
models in many computer languages. Dr. Lombardo used Dr. Martins'
programs as benchmarks, because she believes that we will face similar
problems. The Common Lisp model uses Maxima for computer algebra
operations, Matlisp for number crunching, and a genetic programming
system designed by Dr. Martins himself.

Since Dr. Lombardo works on Windows, and does not have CMUCL, I ported
Dr. Martins' Lisp programs to SBCL. I compiled Maxima following the
instructions given in the INSTALL.lisp file. BTW, one must change a
line in displa.lisp in order to make Maxima work on Windows:

#-(or gcl clisp sbcl)
(cond ((not (interactive-stream-p *standard-input*)) (fresh-line)))

After that change, it compiles flawlessly. If there is a member of the
Maxima team reading this article, I suggest that s/he fixes the above
line in the distribution package.

The genetic programming algorithm adds capacitors, coils and resistors
to the model until it matches examples generated by a computer algebra
package. The Java program turns out to be much slower than the Lisp
program. I was expecting it to be faster. To be precise, SBCL was 26
times faster than Java (SBCL: 16 minutes; Java: 7 hours). Since Dr.
Lombardo has chosen SBCL based solely on its dealing with Dr. Martins'
problem, I wonder why her results were so different from the Computer
Language Benchmarks Game, and from what I have read in the literature.
I suppose that the people who wrote the Java program made a very stupid
blunder. My questions: Does anyone on this list have experience with GP
in Java and SBCL? Are Dr. Lombardo's results compatible with the
literature?
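To show the shape of the loop I mean, here is a deliberately toy sketch. None of this comes from Dr. Martins' code (which I cannot post); the linear "model" and the two mutated coefficients stand in for the evolved circuit trees:

```java
import java.util.Random;

// Purely illustrative: a hill-climbing toy in the spirit of the GP loop
// described above. The real system evolves trees of capacitors, coils
// and resistors; here the "model" is just y = a + b*t, and we mutate
// the two coefficients until the response matches the target samples.
public class ToyGp {
    static double fitness(double a, double b, double[] ts, double[] ys) {
        double err = 0;
        for (int i = 0; i < ts.length; i++) {
            double d = (a + b * ts[i]) - ys[i];
            err += d * d;                               // squared error vs. target
        }
        return err;
    }

    public static void main(String[] args) {
        double[] ts = {0, 1, 2, 3};
        double[] ys = new double[ts.length];
        for (int i = 0; i < ts.length; i++)
            ys[i] = 2.0 + 3.0 * ts[i];                  // "examples" from the CAS

        Random rng = new Random(42);
        double a = 0, b = 0, best = fitness(a, b, ts, ys);
        for (int gen = 0; gen < 10_000; gen++) {
            double a2 = a + rng.nextGaussian() * 0.1;   // mutate a "component value"
            double b2 = b + rng.nextGaussian() * 0.1;
            double f = fitness(a2, b2, ts, ys);
            if (f < best) { a = a2; b = b2; best = f; } // keep improvements only
        }
        System.out.println("best error = " + best);
    }
}
```

The real program differs in every detail, but the structure (generate, score against CAS-produced examples, keep improvements) is the same.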

BTW, I also ported Dr. Martins's programs to GCL, and discovered that
GCL takes as long as Java to run them. How do you people explain these
results?

I know that it is hard to give an answer to my questions without
seeing the programs. Unhappily, Dr. Martins is very jealous of his
programs, and I am afraid that he will not allow me to post them.

Scott Burson

May 16, 2009, 6:36:52 PM
On May 16, 2:58 pm, "comp.lang.scheme" <phi50...@yahoo.ca> wrote:
> I work for a group of scientists that study global warming. I am not a
> scientist myself. I am a student who was hired to test programs and
> install CO2 sensors. Most programs are in Common Lisp. Dr. Magda
> Lombardo has chosen Common Lisp because she compared the performance
> of SBCL with Java, and concluded that SBCL is faster. Dr. Lombardo is
> not a programmer herself. In fact, she cannot tell a C program from a
> Matlab program. Therefore I am in charge of porting computer
> applications to her Windows XP system.
>
> Dr. Martins is a scientist who works with lightning and protection of
> electrical appliances against atmospheric discharges. His programmers
> come from different backgrounds, and write lightning propagation
> models in many computer languages. Dr. Lombardo used Dr. Martins
> programs as benchmarks, because she believes that we will face similar
> problems.  The Common Lisp model uses Maxima for Computer algebra
> operations, Matlisp for number crunching, and a genetic programming
> system designed by Dr. Martins himself.
>
> BTW, one must change a
> line in displa.lisp in order to make Maxima work on Windows:
>
> #-(or gcl clisp sbcl)
> (cond ((not (interactive-stream-p *standard-input*)) (fresh-line)))
>
> After that change, it compiles flawlessly. If there is a member of the
> Maxima team reading this article, I suggest that s/he fixes the above
> line in the distribution package.

They may or may not be reading this newsgroup, but you could certainly
send them email.

> The genetic programming algorithm adds capacitors, coils and resistors
> to the model until it matches examples generated by a computer algebra
> package. The Java program turns out to be much slower than the Lisp
> program. I was expecting it to be faster. To be precise, SBCL was 26
> times faster than Java (SBCL: 16 minutes; Java: 7 hours). Since Dr.
> Lombardo has chosen SBCL based solely on its dealing with Dr. Martins'
> problem, I wonder why her results were so different from the Computer
> Language Benchmarks Game, and from what I have read in the literature.

You have now discovered the meaning of the phrase "lies, damned lies,
and benchmarks" -- meaning "There are lies, then there are damned
lies, and then there are benchmarks".

Programming language performance is not unidimensional, any more than,
say, human intelligence. This is particularly true since a program
necessarily has to be rewritten to be run in a different language, and
that rewriting can be more or less skillful.

SBCL is a very highly optimized implementation; GCL, not so much.
Without looking at the code, I can only speculate, but it sounds like
this program uses some features of the language that are particularly
well implemented in SBCL. (Floating point, maybe?)

> I suppose that people who wrote the Java program made a very stupid
> blunder.

Could be; but maybe not.

> My questions: Does any one in this list have experience in GP
> in Java and SBCL? Are Dr. Lombardo compatible with results in the
> literature?

It doesn't work that way. "Genetic programming" is too abstract a
description of what the program is doing to be the basis of a
performance comparison. One would have to look at the specific
algorithms and data structures being used... as you suspect.

-- Scott

Raffael Cavallaro

May 16, 2009, 6:40:23 PM
On May 16, 5:58 pm, "comp.lang.scheme" <phi50...@yahoo.ca> wrote:

> BTW, I also ported Dr. Martins's programs to GCL, and discovered that
> GCL takes as long as Java to run them. How do you people explain these
> results?

SBCL is the result of decades of research and development of an
optimizing Common Lisp compiler. GCL development has not been as
focused on generating fast code.

If my aim is pure speed of compiled code, I use sbcl. Some of the
commercial implementations can match sbcl in some areas. Some open
source implementations (e.g., CCL) can match sbcl in some areas. But
for the best all-around performance of compiled code, sbcl is probably
your best bet.

I believe there was a post here recently by Dan Weinreb that said
basically the same thing - that they used sbcl for the compute
intensive part of their system, and used CCL for parts that were
modified frequently because CCL compiles code faster (i.e., it takes
less time to compile a given file) but sbcl compiles faster code
(i.e., the resulting executable runs in less time).

gugamilare

May 16, 2009, 10:56:17 PM

Wow, am I glad to hear that ;) When it comes to speed, SBCL shines
more and more, showing that it ain't no toy developed by some
children :)

Now, seriously speaking, I believe that both Java and CL (SBCL) can be
very fast if you know how to use them. And I would say that you would
not lose anything by doing what you have to do using SBCL.

> Since Dr.
> Lombardo has chosen SBCL based solely on its dealing with Dr. Martins'
> problem, I wonder why her results were so different from the Computer
> Language Benchmarks Game,

Not that different. I've seen the results, and SBCL is not behind in
speed at all. IIRC, SBCL is better on some benchmarks and worse on
others, but in general it is at the same level as Java.
And, just between us, the person who made the programs for SBCL did
not do all the performance optimizations he or she could have; I would
add a few more declarations. You can see that clearly from the number
of compiler warnings the programs give. The most efficient
program written for SBCL wouldn't emit any compiler warnings. Not to
mention that compiling and loading time shouldn't be taken into
account; SBCL isn't an interpreter. People should compile the file and
only then time the execution. Even better would be to load the
compiled file before timing. That is not what they are doing at that
Language Benchmarks Game; they are loading the source code directly.
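The same methodology point applies on the Java side, by the way: the JIT only compiles a method once it gets hot, so timing the very first calls measures interpreted/compiling speed, not steady state. A rough sketch (the workload here is made up, just to show the warm-up pattern):

```java
// Illustrative "warm up, then time" microbenchmark pattern in Java.
// Timing the first call of work() would include interpretation and JIT
// compilation; after the warm-up loop we time only steady-state code.
public class WarmupTiming {
    static double work(double[] xs) {
        double s = 0;
        for (double x : xs) s += Math.sqrt(x);  // stand-in numeric kernel
        return s;
    }

    public static void main(String[] args) {
        double[] xs = new double[1_000_000];
        for (int i = 0; i < xs.length; i++) xs[i] = i;

        for (int i = 0; i < 50; i++) work(xs);  // warm-up: let the JIT kick in

        long t0 = System.nanoTime();
        double r = work(xs);                    // now time the compiled code
        long dt = System.nanoTime() - t0;
        System.out.println("steady-state: " + dt / 1e6 + " ms (result " + r + ")");
    }
}
```

Compiling a Lisp file before timing it is the analogous step.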

> and from what I have read in the literature.
> I suppose that people who wrote the Java program made a very stupid
> blunder. My questions: Does any one in this list have experience in GP
> in Java and SBCL? Are Dr. Lombardo compatible with results in the
> literature?
>
> BTW, I also ported Dr. Martins's programs to GCL, and discovered that
> GCL takes as long as Java to run them. How do you people explain these
> results?
>

GCL is not a good implementation, it has bugs and is not ANSI
compliant. It is not fast either. It is like comparing some reasonable
C compiler with gcc.

These are the implementations that I would say to be fast: SBCL,
CMUCL, CCL, Lispworks and Allegro. There are also Clisp, ECL and ABCL
which are not that fast, but they have some advantages as well (Clisp
and ECL being particularly fast for arithmetic using big integers
because they use GMP).

> I know that it is hard to give an answer to my questions without
> seeing the programs. Unhappily, Dr. Martins is very jealous of his
> programs, and I am afraid that he will not allow me to post them.

:(

André Thieme

May 17, 2009, 7:31:06 AM
comp.lang.scheme schrieb:

> The genetic programming algorithm adds capacitors, coils and resistors
> to the model until it matches examples generated by a computer algebra
> package. The Java program turns out to be much slower than the Lisp
> program. I was expecting it to be faster. To be precise, SBCL was 26
> times faster than Java (SBCL: 16 minutes; Java: 7 hours). Since Dr.
> Lombardo has chosen SBCL based solely on its dealing with Dr. Martins'
> problem, I wonder why her results were so different from the Computer
> Language Benchmarks Game, and from what I have read in the literature.
> I suppose that people who wrote the Java program made a very stupid
> blunder. My questions: Does any one in this list have experience in GP
> in Java and SBCL? Are Dr. Lombardo compatible with results in the
> literature?

I programmed my own GP engine in SBCL.
First of all I must say that the JVM is an amazing piece of technology,
leading in many areas. For most toy problems it will easily outperform
SBCL. Check the language shootout.

Also, when a program is written in Java, every variable has a
type declaration, while in a dynamic language such as CL this is not
always possible. But SBCL does a great job at inferring types.

Now for non-trivial problems the issue is different. The Java language
offers practically no construct that is not also available in CL. But
the other way around it is different: when programming in Lisp, the
coder can come up with abstractions that greatly simplify problems that
are very hard to express (in Java).
The JVM can't help if the byte code it gets as input is an evil beast.
For example, compare the Perl regex engine with CL-PPCRE. The regex
engine for Perl was developed by a (big) team over many years, is
written in C, and got all kinds of optimization tricks. The regex
engine for CL (CL-PPCRE) was originally developed by only one person,
and it took far less time. The result is that both systems perform
very well, but the CL engine outperforms the C program in many
cases.
The CL-PPCRE developer Edi Weitz is a very good programmer, but I doubt
he could have produced such a good regexp engine if he had to write it
in C or even Java.

If you write a scientific program that wants to solve non-trivial
problems then SBCL can be a very good choice. Especially if you want to
do Genetic Programming.
When you do this in a non-Lisp then you basically have to implement your
own "mini Lisp" system. A GP system can't just output programs as
strings and then evaluate them and measure the fitness. It will always
produce a tree and run eval on that. Those things need to be coded up by
people who are not experts in that domain (= writing an interpreter and
compiler for a language that takes trees as input). The people who
write a GP system are GP experts. By using a CL like SBCL you can
profit from the work that was done by experts in the Lisp domain, which
was itself an evolutionary process.
I chose SBCL for my GP engine because that allowed me to concentrate on
the domain I want to solve problems in, without having to solve problems
in a domain in which I am not interested (such as building my own Lisp).
When you look at it from that perspective you probably agree that it
makes sense that you got the results which you reported.
Hope that helped a bit.
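
To make the "mini Lisp" point concrete, here is roughly what the Java side has to hand-write. The node types are made up for illustration; in Lisp the trees are just S-expressions and EVAL/COMPILE come with the implementation:

```java
// Illustrative expression-tree interpreter: the "mini Lisp" a GP system
// written in Java ends up containing. Node types and operators are
// invented for this sketch.
public class MiniEval {
    static final class Node {
        final String op;          // "+", "*", "x", or "const"
        final double value;       // used when op == "const"
        final Node left, right;   // used for "+" and "*"
        Node(String op, double value, Node left, Node right) {
            this.op = op; this.value = value; this.left = left; this.right = right;
        }
    }
    static Node num(double v)       { return new Node("const", v, null, null); }
    static Node x()                 { return new Node("x", 0, null, null); }
    static Node add(Node l, Node r) { return new Node("+", 0, l, r); }
    static Node mul(Node l, Node r) { return new Node("*", 0, l, r); }

    // The hand-written eval() that Lisp gives you for free.
    static double eval(Node n, double x) {
        switch (n.op) {
            case "const": return n.value;
            case "x":     return x;
            case "+":     return eval(n.left, x) + eval(n.right, x);
            case "*":     return eval(n.left, x) * eval(n.right, x);
            default: throw new IllegalArgumentException(n.op);
        }
    }

    public static void main(String[] args) {
        // (+ (* 2 x) 1), the kind of tree a GP mutation step produces
        Node e = add(mul(num(2), x()), num(1));
        System.out.println(eval(e, 3.0));
    }
}
```

Every evaluation pays the per-node dispatch cost, whereas SBCL can compile such a tree to machine code on the fly.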


André
--
Lisp is not dead. It's just the URL that has changed:
http://clojure.org/

Waldek Hebisch

May 17, 2009, 11:04:36 AM
gugamilare <gugam...@gmail.com> wrote:
> GCL is not a good implementation, it has bugs and is not ANSI
> compliant. It is not fast either. It is like comparing some reasonable
> C compiler with gcc.
>

It is not wise for a Lisper to criticize the performance of gcc. AFAIK,
on average the differences between the best compilers and gcc are on
the order of 10-20%. When comparing SBCL and gcc, SBCL typically gives
about half of gcc's speed. So if a 20% difference makes gcc
unreasonable, by the same logic there is no reasonable Lisp compiler
(at least no free one -- I have no data about the commercial ones). :)

Concerning GCL: the worst part is the bugs. If you choose the ANSI
variant, it is reasonably close to being compliant (the new version, if
it finally appears, is supposed to be fully compliant).

> These are the implementations that I would say to be fast: SBCL,
> CMUCL, CCL, Lispworks and Allegro. There are also Clisp, ECL and ABCL
> which are not that fast, but they have some advantages as well (Clisp
> and ECL being particularly fast for arithmetic using big integers
> because they use GMP).
>

Old myths die hard: I have posted measurements showing that on
bignum arithmetic Clisp is _the slowest_ one. And Clisp uses its
own library for bignums. Quite possibly in the past this
library was fast compared to the alternatives, but the others (in
particular GMP) have improved quite a lot. Actually, for bignums
GCL is quite good (it uses GMP and makes a significant effort
to optimize memory management). ECL also uses GMP, but is
a bit slower than GCL (on my test: 90ms GCL, 110ms ECL), probably
due to memory management. For moderately sized bignums SBCL
is quite competitive, but for huge numbers (several thousands
of digits) GMP will be better. If you want data about more
general performance, the following is what I got running FriCAS
on a few time-consuming tests:

             easter  mapleok  marcbench  r21bugsbig   Sum

sbcl             10       30         33          66   139
Clozure CL       16       59        146          69   290
ecl              33       69         60         154   316
gcl              29       78         54         208   369
clisp            58      184        134         575   951

All times are in seconds. The FriCAS code makes no use of CLOS,
and limited use of library functions. The code is pretty call
intensive (mostly indirect calls); there is a mixture of list
manipulations, array accesses, symbols, fixnums and some bignums.
marcbench spends a nontrivial fraction of its time in bignum
computations. A nontrivial fraction of the fixnums are between 32 and
64 bits, so implementations which limit fixnums to 32 bits (gcl and
clisp) pay an extra cost.

Note: this timing was done a few months ago; at that time Clozure CL
had a quite slow bignum implementation. Current Clozure CL has faster
bignums and gets a much better time on marcbench, but the overall
picture is similar (I did more recent timings, but have only the older
results at hand).

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Isaac Gouy

May 17, 2009, 11:15:38 AM
On May 16, 10:56 pm, gugamilare <gugamil...@gmail.com> wrote:
-snip-

> And, just between us, the person who made the programs for SBCL did
> not do all performance optimizations he or she could have, I would add
> a few more declarations.

Once you've added a few more declarations and contributed programs
everyone will be able to see how much better your Lisp programs are -

http://shootout.alioth.debian.org/u32q/faq.php#contribute

-snip-


> Not to mention that the compiling and loading time shouldn't be taken into
> account, SBCL isn't an interpreter. People should compile the file and
> only then time the execution. Even better would be to load the
> compiled file before timing. That is not what they are doing at that
> Language Benchmarks Game, they are loading the source code directly.


You seem very certain about that - what evidence do you have?

Here's the build log for one of the SBCL programs -

http://shootout.alioth.debian.org/u32q/benchmark.php?test=spectralnorm&lang=sbcl&id=3#log

Here's the content of the spectralnorm.sbcl-3.sbcl_run file loaded on
the COMMAND LINE -

(proclaim '(optimize (speed 3) (safety 0) (debug 0)
                     (compilation-speed 0) (space 0)))
(main) (quit)


The source code is loaded, compiled, and saved in the Lisp image
before timing (as it has been for iirc the last 5 years).

Waldek Hebisch

May 17, 2009, 11:20:26 AM
comp.lang.scheme <phi5...@yahoo.ca> wrote:
>
> The genetic programming algorithm adds capacitors, coils and resistors
> to the model until it matches examples generated by a computer algebra
> package. The Java program turns out to be much slower than the Lisp
> program. I was expecting it to be faster. To be precise, SBCL was 26
> times faster than Java (SBCL: 16 minutes; Java: 7 hours). Since Dr.
> Lombardo has chosen SBCL based solely on its dealing with Dr. Martins'
> problem, I wonder why her results were so different from the Computer
> Language Benchmarks Game, and from what I have read in the literature.
> I suppose that people who wrote the Java program made a very stupid
> blunder. My questions: Does any one in this list have experience in GP
> in Java and SBCL? Are Dr. Lombardo compatible with results in the
> literature?
>
> BTW, I also ported Dr. Martins's programs to GCL, and discovered that
> GCL takes as long as Java to run them. How do you people explain these
> results?
>

Answers really depend on what you are doing. One way of doing
genetic programming is to generate some pieces of code (a population
of programs), run them for a short time, mutate based on the results
of running, run again, and so on. If you do this, Lisp has a definite
advantage: traditionally, Lisp implementations contain a compiler
which can reasonably quickly generate reasonably fast machine code.
Java uses a JIT, which works very well if you have long-running
code, but a typical JVM will first run code interpreted, and
only compile it if you run it long enough. So, if
dynamically generated code runs for a relatively short time you
will get interpreted speed.

Concerning GCL: unlike SBCL, it does not compile code by default,
so you may end up interpreting dynamically generated code. OTOH,
if you try to compile such code, compilation may take too much time
(GCL compiles much more slowly than SBCL, especially for small
pieces of code).

Of course, you may be doing something completely different,
but unless you tell us more, all we can do is make wild guesses
like this one.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Tamas K Papp

May 17, 2009, 1:31:38 PM
On Sat, 16 May 2009 14:58:34 -0700, comp.lang.scheme wrote:

> problem, I wonder why her results were so different from the Computer
> Language Benchmarks Game, and from what I have read in the literature. I

Look at the name: Computer Language Benchmarks is a _game_, not
something you should take seriously. The goal of these games is to
take a super-simple algorithm and optimize it to death, then claim
that one language/implementation is "faster" than the other. This is
not indicative of the performance of complex real-life programs.

> suppose that people who wrote the Java program made a very stupid
> blunder. My questions: Does any one in this list have experience in GP
> in Java and SBCL? Are Dr. Lombardo compatible with results in the
> literature?

GP is a very general term, there is no "literature" about general
claims like this. There are some papers with benchmarks floating
around, which basically show that with tweaking, you can make things
fast. Do you need to? Is speed so supremely important, or are you
focusing on the wrong question? See below.

> I know that it is hard to give an answer to my questions without seeing
> the programs. Unhappily, Dr. Martins is very jealous of his programs,
> and I am afraid that he will not allow me to post them.

Well, too bad. It is not even clear that you have asked him and he
said no; it seems that you are just assuming that he won't. Without
specifics, you won't be able to get much help.

Also, I am wondering if you are chasing a red herring here. You have
seen how fast Lisp implementations are, and the question is whether
you need anything faster, and if you (or your employers) are willing
to make the tradeoffs that are necessary. For example, you could
always use C with hand-tuned assembly and make things 2-10 times
faster, and multiply development time by a factor of 20-100. Would
that be a sensible thing to do? I don't think so. Especially when
the problem domain is ill-defined and there is a lot to be gained by
playing with the algorithm itself, as opposed to optimizing a
well-specified and simple algorithm (eg linear algebra routines).

HTH,

Tamas

nice.a...@gmail.com

May 17, 2009, 2:15:04 PM
On 17 maio, 12:20, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:
> hebi...@math.uni.wroc.pl

Hi, Waldek.

I was a consultant for the team that wrote the Java code for Dr.
Martins, and I can say that you are right. The Lisp team relied on
SBCL in order to generate code on the fly. Compiling syntax trees for
RC circuits is straightforward in Lisp; in Java, it is necessary to
write an interpreter. Besides the issue you pointed out, I faced a few
other problems:

The Lisp team has better programmers. There are many reasons for this.
(1) In general, Lisp is taught in AI courses at good universities, and
only the best students are enthusiastic about Lisp. (2) Lisp
programmers have a hard time finding a job where they can work with
Lisp; therefore, even a good Lisp programmer is ready to enter a
project that offers him or her a small scholarship and an opportunity
to code in Lisp. It is much easier for a Java programmer to get a good
job; thus, it is very difficult to enlist good Java programmers for a
temporary job.

Lisp has very efficient list processing tools. I am not sure, but I
believe that Lisp stores lists as contiguous cells, using a cell for
the link only when it cannot find a large enough storage space to hold
the whole structure. Java needs a storage cell for each link. Usually,
Lisp has better memory management too.

Lisp has better computer algebra systems, with a lot of contributors.
The Lisp team relied on Maxima, a mature and efficient system to do
computer algebra. Your own FriCAS is another good example of Lisp CAS.
We have nothing comparable in Java. However, I bet that in a short
time we will have very good CAS systems in Java.

Since SBCL has few users, its team can update the compiler almost
every month. Java cannot do this because it has a very large installed
base.

The people who wrote the Java program did not make a stupid blunder :-)
There was an error in the prototype that the Lisp team spotted easily,
because Lisp is so different from Matlab (the prototype language) that
Lisp programmers need to rethink every algorithm they come across.
After fixing the error, the Java program performs much better.

I am fond of Lisp, which I learned as a student at Cornell. However,
I think that Magda Lombardo should reconsider her decision to use
Lisp, because she may have trouble hiring a large group of
programmers to maintain her applications and sensors.


Tamas K Papp

May 17, 2009, 2:51:34 PM
On Sun, 17 May 2009 11:15:04 -0700, nice.and.java wrote:

> Lisp team has better programmers. There are many reasons for this fact.
> (1) In general, Lisp is taught in AI courses of good universities, and
> only the best students are enthusiastic about Lisp. (2) Lisp programmers
> have a hard time to get a job, where they can work with Lisp; therefore,
> even a good Lisp programmer is ready to enter a project that gives him
> or her a small scholarship and an opportunity to code in Lisp. It is
> much easier for a Java programmer to get a good job; thus, it is very
> difficult to enlist good Java programmers in a temporary job.

So let me get this straight: Lisp programmers are better, and this
makes finding a job harder for them, so they take up temporary jobs
for peanuts? "Will Program Lisp for Food"? Huh?

> Lisp has better computer algebra systems, with a lot of contributors.
> The Lisp team relied on Maxima, a mature and efficient system to do
> computer algebra. Your own FriCAS is another good example of Lisp CAS.
> We have nothing comparable in Java. However, I bet that in a short time
> we will have very good CAS systems in Java.

Yeah, I bet that wombats will really learn to fly one of these days.
Any minute now.

I have seen attempts at CAS systems in C++ and Java, they were a very
sad sight. Cf Greenspun's 10th rule.

> Since SBCL has few users, its team can update the compiler almost every
> month. Java cannot do this because it has a very large installed base.

FUD. The SBCL team can improve the compiler constantly because CL has
a clearly defined standard, so your programs will just continue to run
fine with the updates. Number of users has nothing to do with it. I
am an SBCL user, do you think that Nikodemus Siivola calls me on the
phone to discuss each update before it is released?

> People who wrote the Java program did not make a stupid blunder :-)
> There was an error in the prototype, that Lisp team spotted easily,
> because Lisp is so different from Matlab (the prototype language) that
> Lisp programmers need to rethink every algorithm they come across. After
> fixing the error, the Java program is performing much better.

Lisp is indeed very different from Matlab, but so is Java. Anytime
one translates an algorithm from another language, it is worthwhile to
understand what you are doing. From your description, it seems that
the Java programmers just didn't make the effort. Probably because
they were constantly bombarded with excellent job offers, which can be
pretty disruptive.

> I am fond of Lisp, that I learned when a student at Cornell. However, I
> think that Magda Lombardo should reconsider her decision of using Lisp,
> because she may have trouble in hiring a large group of programmers to
> maintain her applications, and sensors.

This is shameless FUD. The story shows that Lisp programmers
performed better than the Java team, but you still suggest that it is
better to use Java? Also, you don't need a "large group" of
programmers if you are using a more powerful language, because you
don't need code monkeys copy-pasting pieces of code.

Lisp has a large comparative advantage for scientific applications,
especially when one is not using canned algorithms but developing new
ones. It seems that Dr. Lombardo's project is quite innovative,
so they can really benefit from Lisp.

Tamas

nice.a...@gmail.com

May 17, 2009, 2:51:42 PM
On 17 maio, 14:31, Tamas K Papp <tkp...@gmail.com> wrote:
> On Sat, 16 May 2009 14:58:34 -0700, comp.lang.scheme wrote:
> > problem, I wonder why her results were so different from the Computer
> > Language Benchmarks Game, and from what I have read in the literature. I
>
> > I know that it is hard to give an answer to my questions without seeing
> > the programs. Unhappily, Dr. Martins is very jealous of his programs,
> > and I am afraid that he will not allow me to post them.
>
> Well, too bad.  It is not even clear that you have asked him and said
> no, it seems that you are just assuming that he won't.  Without
> specifics, you won't be able to get much help.
>


Hi, Tamas.
Philip is right in assuming that Martins will not give him his
programs to publish. I want to point out that Martins is a contractor
who works for large electrical utilities and communication
companies. He has non-disclosure clauses in his contracts. Philip
does not work on any of Martins's projects. He works for the global
warming group.

I believe that Waldek pointed out the problem with our Java programs.
Lisp has a huge advantage in this class of problem because it compiles
the syntax tree on the fly. Martins modeled the ground as a large RCL
circuit. It is very easy for the SBCL team to represent RCL circuits
as Lisp S-expressions, pre-process them using a computer algebra
system, and compile them on the fly. In Java, we needed to write an
interpreter. In any case, the Java performance is not that poor,
because only the modeling phase is slow. Once it has the ground model,
the Java program becomes very fast.

Tamas K Papp

May 17, 2009, 3:07:24 PM
On Sun, 17 May 2009 11:51:42 -0700, nice.and.java wrote:

> Philip is right in assuming that Martins will not give him his programs
> to publish. I want to point out that Martins is a contractor that works
> for large electrical facilities, and communication companies. He has
> non-disclosure clauses in his contracts. Philip does not work in any of
> Martins's projects. He works for the global warming group.

I don't know their field, but in economics people make their source
code available. Most journals won't even publish your stuff if you
keep your sources closed, especially if the model was solved
numerically. If it is not verifiable by a third party, it is not
science. This practice was instituted because some "results" came
from programming mistakes, and there were instances of people cheating
to get some results.

My experience is that 90% of "preciously guarded" code for scientific
applications is of extremely bad quality. This of course does not
mean that Martins' code is necessarily of this kind.

> Lisp has a huge advantage in this class of problem because it compiles
> the syntax tree on the fly. Martins modeled the ground as a large RCL
> circuit. It is very easy for the SBCL team to represent RCL-circuits as
> Lisp S-expressions, pre-processing them using a computer algebra system,
> and compile them on the fly. In Java, we needed to write an interpreter.

I don't know enough of the problem domain, but when I have
experimented with similar things, I didn't need to compile code on the
fly, I got away with chaining closures together.
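In Java terms (since that is the comparison here), "chaining closures" means something like the sketch below: walk the model once, build a composed function, and then evaluation is plain chained calls with no per-node interpretation. The names are illustrative:

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative alternative to compiling code on the fly: build a chain
// of closures once from the model, then call it many times.
public class ClosureChain {
    // "Compile" x -> a*x + b into a composition of two closures.
    static DoubleUnaryOperator affine(double a, double b) {
        DoubleUnaryOperator scale = x -> a * x;
        DoubleUnaryOperator shift = x -> x + b;
        return scale.andThen(shift);    // built once, reused on every call
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = affine(2.0, 1.0);
        System.out.println(f.applyAsDouble(3.0));
    }
}
```

Not as fast as native code from a compiler, but often fast enough, and it avoids writing an interpreter that dispatches on every node.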

Tamas

gugamilare

May 17, 2009, 3:11:08 PM
On 17 maio, 12:04, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:

> gugamilare <gugamil...@gmail.com> wrote:
> > GCL is not a good implementation, it has bugs and is not ANSI
> > compliant. It is not fast either. It is like comparing some reasonable
> > C compiler with gcc.
>
> It is not wise for a Lisper to criticize performance of gcc.

No, you got what I said all wrong. Just to clarify what I meant: SBCL
is to gcc as GCL is to another, not-as-good C compiler. I meant that
both SBCL and gcc are good compilers.

> > These are the implementations that I would say to be fast: SBCL,
> > CMUCL, CCL, Lispworks and Allegro. There are also Clisp, ECL and ABCL
> > which are not that fast, but they have some advantages as well (Clisp
> > and ECL being particularly fast for arithmetic using big integers
> > because they use GMP).
>
> Old myths die hard: I have posted measurement showing that on
> bignum arithmetic Clisp is _the slowest_ one.  And Clisp uses its
> own library for bignums.

I used cl-bench and it said otherwise. Completely. In bignum
arithmetic, Clisp and ECL were both very fast and much better than all
the others. You can find the same results here: http://sbcl.boinkor.net/bench/
(see the results for BIGNUM/something on that page).

gugamilare

unread,
May 17, 2009, 3:29:36 PM5/17/09
to
On 17 maio, 12:15, Isaac Gouy <igo...@yahoo.com> wrote:
> On May 16, 10:56 pm, gugamilare <gugamil...@gmail.com> wrote:
> -snip-
>
> > And, just between us, the person who made the programs for SBCL did
> > not do all performance optimizations he or she could have, I would add
> > a few more declarations.
>
> Once you've added a few more declarations and contributed programs
> everyone will be able to see how much better your Lisp programs are -

Hey, don't say MY Lisp programs, as I am not an experienced
performance-oriented Lisp programmer. I am NOT the only one who could
improve that code; just looking at the compiler warnings is enough to
see that a few improvements can be made. I'll do that with one or two
of the benchmarks and see if I can get a good result, and then I'll
post it here, ok?


>
> http://shootout.alioth.debian.org/u32q/faq.php#contribute
>
> -snip-
>
> > Not to mention that the compiling and loading time shouldn't be taken into
> > account, SBCL isn't an interpreter. People should compile the file and
> > only then time the execution. Even better would be to load the
> > compiled file before timing. That is not what they are doing at that
> > Language Benchmarks Game, they are loading the source code directly.
>
> You seem very certain about that - what evidence do you have?

Hum, sorry for posting without evidence. I think I once saw on the
language benchmarks game that something like

sbcl --load run.lisp

was the command line used to start the program. I'll look for it, but,
unless I find it, you can just say that I was mistaken.

nice.a...@gmail.com

unread,
May 17, 2009, 3:33:54 PM5/17/09
to

Hi, Tamas.
I insist that I am not against Lisp. I am not even saying that Java is
better than Lisp. In fact, I like Lisp. Let me give a better
explanation of what I mean when I say that good Lisp programmers are
willing to take low-paying temporary jobs. Usually Lisp programmers
know other languages too. Good students become good programmers not
only in Lisp, but in many other languages. They can find a high-paying
job easily, if they are willing to program in a language other than
Lisp. However, they prefer to accept a low-paying job where they can
program in Lisp rather than a high-paying job where they will be
forced to code in C, or in Java. I can assure you that in our team,
the Lisp programmers are much better than the Java or OCaml
programmers. They could find a high-paying job easily, but I doubt
that the new employer would allow them to code in Lisp.

> for peanuts? "Will Program Lisp for Food"?

Our wages are slightly better than that.


Magda's project is very different from what Martins is doing. Magda is
a researcher, and head of a non-governmental organization that studies
global warming. Most people who work for her are not programmers or
engineers. They are scientists. Therefore, I guess that it will be
harder for her to hire good Lisp programmers than acceptable Java
programmers. I may be wrong. Let us wait and see.

Isaac Gouy

unread,
May 17, 2009, 3:52:46 PM5/17/09
to
On May 17, 12:29 pm, gugamilare <gugamil...@gmail.com> wrote:
> On 17 maio, 12:15, Isaac Gouy <igo...@yahoo.com> wrote:
>
> > On May 16, 10:56 pm, gugamilare <gugamil...@gmail.com> wrote:
> > -snip-
>
> > > And, just between us, the person who made the programs for SBCL did
> > > not do all performance optimizations he or she could have, I would add
> > > a few more declarations.
>
> > Once you've added a few more declarations and contributed programs
> > everyone will be able to see how much better your Lisp programs are -
>
> Hey, don't say MY Lisp programs, as I am not an experienced
> performance-oriented Lisp programmer. I am NOT the only one who could
> improve that code; just looking at the compiler warnings is enough to
> see that a few improvements can be made. I'll do that with one or two
> of the benchmarks and see if I can get a good result, and then I'll
> post it here, ok?


Posting it here won't get the program onto the benchmarks game; by all
means post it here but if you want to see it on the benchmarks game
then follow the instructions in the FAQ.


> >http://shootout.alioth.debian.org/u32q/faq.php#contribute
>
> > -snip-
>
> > > Not to mention that the compiling and loading time shouldn't be taken into
> > > account, SBCL isn't an interpreter. People should compile the file and
> > > only then time the execution. Even better would be to load the
> > > compiled file before timing. That is not what they are doing at that
> > > Language Benchmarks Game, they are loading the source code directly.
>
> > You seem very certain about that - what evidence do you have?
>
> Hum, sorry for posting without evidence. I think I once saw on the
> language benchmarks game that something like
>
> sbcl --load run.lisp
>
> was the command line used to start the program. I'll look for it, but,
> unless I find it, you can just say that I was mistaken.


You aren't the first to be mistaken about what they saw on the
benchmarks game website, and I daresay you won't be the last. Errare
humanum est.

Scott Burson

unread,
May 17, 2009, 5:01:04 PM5/17/09
to

I can believe this.

> Lisp has very efficient List processing tools. I am not sure, but I
> believe that Lisp stores lists as contiguous cells; it uses a cell for
> the link only when it cannot find a large enough storage space to hold
> the whole structure.

The technique you're thinking of is called "cdr-coding", but AFAIK it
has been used only on machines with hardware support for Lisp; and
none of these machines have been built for some 20 years. On standard
hardware there is always a cell for the cdr. However, Lisp
implementations are certainly optimized to deal with conses; for
instance, in many Lisps there is zero allocation overhead for a cons,
because they're allocated in a region of memory that is known to
contain only conses, and therefore don't require a header to tell the
GC what they are.

-- Scott

Waldek Hebisch

unread,
May 17, 2009, 5:21:09 PM5/17/09
to
gugamilare <gugam...@gmail.com> wrote:
> On 17 maio, 12:04, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:
> > gugamilare <gugamil...@gmail.com> wrote:
> > > These are the implementations that I would say to be fast: SBCL,
> > > CMUCL, CCL, Lispworks and Allegro. There are also Clisp, ECL and ABCL
> > > which are not that fast, but they have some advantages as well (Clisp
> > > and ECL being particularly fast for arithmetic using big integers
> > > because they use GMP).
> >
> > Old myths die hard: I have posted measurement showing that on
> > bignum arithmetic Clisp is _the slowest_ one. And Clisp uses its

> > own library for bignums.
>
> I used cl-bench and it said otherwise. Completely. In bignum
> arithmetic, Clisp and ECL were both very fast and much better than all
> the others. You can find the same results here: http://sbcl.boinkor.net/bench/
> (see the results for BIGNUM/something on that page).

OK, Clisp is good at the cl-bench bignum benchmark. But check what
this benchmark computes: it contains multiplication, division with
remainder, greatest common divisor (gcd) and integer square root in
equal proportion. If you do a more detailed comparison you will see
that SBCL is much faster at multiplication and division, slightly
slower at gcd, and significantly slower at integer square roots.

So Clisp is quite good at bignum square roots and at gcd. However,
typical code contains many more multiplications and additions than
divisions and gcds. And square roots are quite rare. So
cl-bench/bignum is rather unrealistic as a measure of bignum
performance.

FYI, I simply measured the speed of bignum multiplication. I did not
bother to measure the speed of bignum addition, because it is much
cheaper than multiplication and easier to get fast. IMHO, measuring
just multiplications is much more reasonable than the mix used in
cl-bench.
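For concreteness, here is a minimal sketch of the kind of measurement I mean. The operand size and repetition count are arbitrary; only the reported times matter when comparing implementations:

```lisp
;; Sketch: time one bignum operation in isolation, instead of the mixed
;; workload of cl-bench.  Operand size and repetition count are arbitrary.
(defun time-bignum-multiplication (bits repetitions)
  ;; Set the top bit so both operands really are about BITS bits wide.
  (let ((a (+ (ash 1 (1- bits)) (random (ash 1 (1- bits)))))
        (b (+ (ash 1 (1- bits)) (random (ash 1 (1- bits)))))
        (result 0))
    (time (dotimes (i repetitions result)
            (setf result (* a b))))))

;; e.g. (time-bignum-multiplication 10000 1000) multiplies two ~10000-bit
;; integers 1000 times; compare the reported run time across implementations.
```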

In GCL, bignum gcd is a performance disaster and isqrt is also
somewhat slow (gcd does not use GMP, which explains its slowness).
However, multiplications and divisions are very fast.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

gugamilare

unread,
May 17, 2009, 5:32:48 PM5/17/09
to
On 17 maio, 18:21, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:
> gugamilare <gugamil...@gmail.com> wrote:
> > On 17 maio, 12:04, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:
> > > Old myths die hard: I have posted measurement showing that on
> > > bignum arithmetic Clisp is _the slowest_ one.  And Clisp uses its

Wow, those are all very good points. I never thought things like this
could change your opinion about what you consider fast. And you are
right: when calculating divisions and gcds, you need to do many
multiplications, so these obviously take more time than the
multiplications themselves.

Now I understand why people here keep saying that benchmarks are a
lie. I will never look at them the same way again.

Pascal Costanza

unread,
May 17, 2009, 5:41:30 PM5/17/09
to

One of my theories is this:

If you want to scale programs, you eventually will use some form of
metaprogramming, because that's the only effective way of scaling. (By
scaling I mean here: Produce more useful code than a single programmer
could produce with just base-level means.)

The only easy way to do metaprogramming in Java is by writing
interpreters - which is quite widely used - or by hiring lots of
programmers and giving them a catalogue of patterns to instantiate.

The easiest way to do metaprogramming in Lisp is by using macros, which
can be easily compiled away, so you don't have the same overhead as that
of an interpreter at runtime.
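A minimal, hypothetical example of what "compiled away" means here (the SCALED macro and its unit table are invented for illustration):

```lisp
;; Hypothetical example of metaprogramming that compiles away: the unit
;; table is consulted at macroexpansion time, so at run time only plain
;; arithmetic remains -- no dispatch, no lookup, no interpreter loop.
(defmacro scaled (value unit)
  "Expand (SCALED 3 :KM) into meters as a literal multiplication."
  (let ((factor (ecase unit (:m 1) (:km 1000) (:cm 1/100))))
    `(* ,value ,factor)))

;; (macroexpand-1 '(scaled 3 :km)) => (* 3 1000)
;; An interpreter would instead look up :KM in a table on every call.
```

An interpreter pays the lookup cost on every evaluation; the macro pays it once, at compile time.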

So my guess is that there is some form of interpreter running in the
Java program that slows everything down.


Pascal

--
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

gugamilare

unread,
May 17, 2009, 10:43:13 PM5/17/09
to
On 17 maio, 16:52, Isaac Gouy <igo...@yahoo.com> wrote:
> Posting it here won't get the program onto the benchmarks game; by all
> means post it here but if you want to see it on the benchmarks game
> then follow the instructions in the FAQ.

I have optimized the SBCL code for fasta:
http://shootout.alioth.debian.org/u32q/benchmark.php?test=fasta&lang=sbcl&box=1
Now it emits only one compiler note, which I could not eliminate. I
also replaced the linear search with a binary search, the same way the
C code does, and some tests made me conclude that this is a good
choice, even with the small vectors that the code uses.

Compared to the current version, it is around 26% faster in some
simple tests here (with 10000 as the command-line argument, 4 tries).

I tried to send the code to the language benchmarks game but an error
is shown when I try to login ("Error - Could Not Get User"). I will
try to do it again some other time.

gugamilare

unread,
May 18, 2009, 2:13:43 AM5/18/09
to
On 17 maio, 23:43, gugamilare <gugamil...@gmail.com> wrote:
> On 17 maio, 16:52, Isaac Gouy <igo...@yahoo.com> wrote:
>
> > Posting it here won't get the program onto the benchmarks game; by all
> > means post it here but if you want to see it on the benchmarks game
> > then follow the instructions in the FAQ.
>
> I have optimized the SBCL code for fasta
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=fasta&lang=...

> Now it emits only one compiler note, which I could not eliminate. I
> also replaced the linear search with a binary search, the same way the
> C code does, and some tests made me conclude that this is a good
> choice, even with the small vectors that the code uses.
>
> Compared to the current version, it is around 26% faster in some
> simple tests here (with 10000 as the command-line argument, 4 tries).
>
> I tried to send the code to the language benchmarks game but an error
> is shown when I try to login ("Error - Could Not Get User"). I will
> try to do it again some other time.

I have just figured it out, and the code has been sent. It should be
available soon if someone wants to see it.

daniel....@excite.com

unread,
May 18, 2009, 3:29:41 PM5/18/09
to
On May 17, 2:15 pm, nice.and.j...@gmail.com wrote:

> I think that Magda Lombardo should reconsider her decision of using

> Lisp...


I thought the OP established "Dr. Lombardo" as the appellation here...
Are you name-dropping, sir? Or - probably(?) not - indulging in a
mild ad hominem by replacing title with familiarity, since you are
expressing an opposing point of view?

I suggest that in composing your response to the OP, it is most
appropriate to respect his relationship to the people he mentions.

Otherwise, thanks for the first-hand information!

Wade

unread,
May 18, 2009, 6:59:21 PM5/18/09
to
On May 17, 12:15 pm, nice.and.j...@gmail.com wrote:
> On 17 maio, 12:20, Waldek Hebisch <hebi...@math.uni.wroc.pl> wrote:
>
>
> Lisp team has better programmers. There are many reasons for this
> fact. (1) In general, Lisp is taught in AI courses of good
> universities, and only the best students are enthusiastic about Lisp.

That is an interesting assertion, but it is not correct. Learning Lisp
made them better programmers. It matters how knowledge is
transferred. The intrinsic knowledge of what a computer program is,
and how it relates to human thought, is better passed on by learning
Lisp. There is too much cognitive noise when learning programming
with a language like Java. The noise (which is mostly irrelevant
information) gets in the way of cutting to the heart of the matter.
You said it yourself: the Lisp team saw mistakes that the Java team
missed. I think the Java team was too focused on the noise. But you
are right, it's better when programmers have to "think" and "know"
what they are doing. But when isn't that appropriate?

It has been my experience that many kinds of people can learn
programming with a language like Lisp. Take the scientists, for that
matter. They learned it.

As for having Dr. Lombardo move to Java: it's too late, the genie is
out of the bottle. All she needs is to get some open-minded,
intelligent people (not even programmers); they will have no trouble
helping her out with Lisp.

Wade

nice.a...@gmail.com

unread,
May 19, 2009, 10:31:37 AM5/19/09
to

You are right, Pascal. We were forced into writing interpreters to
evaluate the output of the genetic programming module. BTW, we tried
to generate JVM bytecode directly instead of a syntax tree, but it
became even slower.

I would like to refer to the paper "When Lisp is faster than C", by
Borge Svingen, in the Proceedings of the 8th Annual Conference on
Genetic and Evolutionary Computation. The author states that the great
advantage of Lisp for this kind of problem is that it compiles the
code generated by the system. It is interesting to note that Dr.
Martins tried other languages besides Java and Lisp: Clean, OCaml,
etc. Except for Lisp, all other implementations were forced into
interpreters, and the result was the same: they were slower than Lisp.

You are also right when you state that Lisp code is full of macros.
The Lisp team is fond of Paul Graham's On Lisp, a book about macros. I
suppose that this kind of program is very hard to maintain.

A point worth noting is that people have a tendency to use a tool that
is very good for one problem on other problems as well. For instance,
antibiotics are widely used in self-medication even when the disease
is not of bacterial origin. People who worked with the Anatel project
decided to develop a contract on energy auditing in ... LISP and
Scheme (Bigloo). The new project controls a network of meters to
measure energy consumption and make suggestions to increase efficiency
while lowering costs. It is being implemented in a large area by an
electrical facility (CELG). In fact, half of CELG's customers will
receive the service. I wonder whether Lisp is the right tool for this
kind of job. You know, if a medication works for a given disease,
people start using it for almost everything...

Pascal J. Bourguignon

unread,
May 19, 2009, 10:44:01 AM5/19/09
to
nice.a...@gmail.com writes:

> On 17 maio, 18:41, Pascal Costanza <p...@p-cos.net> wrote:
>> comp.lang.scheme wrote:
> You are also right when you state that Lisp code is full of macros.
> The Lisp team is fond of Paul Graham's On Lisp, a book about macros. I
> suppose that this kind of program is very hard to maintain.

There's no reason why it should be. On the contrary, a high degree of
meta-programming should lead to a smaller number of bugs and to easier
maintenance (because features are more uniquely localized). And when
you make a correction to a program that's highly meta-programmed, you
usually correct a lot of bugs at once.

But what's true is that it might require more brains: you cannot
correct such a program by just patching the holes. You have to
understand how it works, and what you do when you correct it.
Correction by trial and error won't do.


> [...] People who worked with the Anatel project decided to develop a


> contract on energy auditing in ... LISP and Scheme (Bigloo). The new
> project controls a network of meters to measure energy consumption
> and make suggestions to increase efficiency while lowering costs. It
> is being implemented in a large area by an electrical facility
> (CELG). In fact, half of the CELG's customers will receive the
> service. I wonder whether Lisp is the right tool for this kind of

> job. [...]

Well, this looks like the kind of project that could be done in lisp,
indeed. For more kinds of project you can do in lisp:
http://www.franz.com/success/


--
__Pascal Bourguignon__

Tamas K Papp

unread,
May 19, 2009, 11:27:54 AM5/19/09
to
On Tue, 19 May 2009 07:31:37 -0700, nice.and.java wrote:

> You are also right when you state that Lisp code is full of macros. The
> Lisp team is fond of Paul Graham's On Lisp, a book about macros. I
> suppose that this kind of program is very hard to maintain.

That assumption is wrong. Macros provide a higher level of
abstraction, and when applied well, they make maintenance much
easier. Abstraction helps; that's why complex programs are written in
HLLs instead of assembly. It beats me why people keep arguing that
abstraction in the form of macros is a disadvantage.

> (CELG). In fact, half of the CELG's customers will receive the service.
> I wonder whether Lisp is the right tool for this kind of job. You know,
> if the medication works for a given disease, people started using it for
> almost everything...

I would appreciate it if you stopped spreading misinformation about
things you don't understand. Instead of resorting to metaphors about
drugs, you could argue directly about the merits and disadvantages of
Lisp. "Wondering" whether Lisp is the right tool, or "supposing" that
macros are hard to maintain, without any arguments to back your claims
up is just FUD.

Tamas

nice.a...@gmail.com

unread,
May 19, 2009, 11:33:42 AM5/19/09
to

I have no intention to offend or show lack of respect. In her papers,
M. Lombardo appears as Magda Lombardo. Please, check:

LOMBARDO, Magda. Remote Sensing - Aided Determination of the Potential
Heat Load Reduction in Cities Associated with Transpiring Vegetation.
Building Energy Efficiency, CALIFORNIA, 1986.

LOMBARDO, Magda. The Study Of Urban Climate Through Thermal Images
From Meteorological Satellites. INTERNATIONAL SYMPOSIUM ON REMOTE
SENSING OF ENVIRONMENT, 1985.

LOMBARDO, Magda Adelaide. Environmental Modification Of Metropolitan
Areas Through Satellite Images. IGARSS' 86, 1986.

LOMBARDO, Magda. Computer Processing Of Satellite Images In The Study
Of Environmental Modification In Urban Areas. ENVIRONMENTAL SOFTWARE -
U.S.A., 1986.

LOMBARDO, M. A. ; LOMBARDO, Magda Adelaide. Environmental Software
Interview. ENVIRONMENTAL SOFTWARE - USA, 1987.

Freitas, M. K. ; LOMBARDO, Magda. Urban index and environment quality.
Mercator, v. 12, p. 69-81, 2008.

Books and papers on Urban Heat Island also use "Magda Lombardo". Since
I have a strong interest in global warming, I guess that I was
influenced by the usages of the field. Anyway, I am sorry.

gugamilare

unread,
May 19, 2009, 5:53:43 PM5/19/09
to
On 17 maio, 23:43, gugamilare <gugamil...@gmail.com> wrote:
> Compared to the current version, it is around 26% faster in some
> simple tests here (with 10000 as the command-line argument, 4 tries).

It turns out that my code didn't do quite so well when I sent it to
the Language Benchmarks Game. I have no idea why; I ran many tests
here which give the opposite result.

I am just clarifying this here because I was asked to.

Wade

unread,
May 19, 2009, 11:21:28 PM5/19/09
to
On May 19, 8:31 am, nice.and.j...@gmail.com wrote:

> A point worth noting is that people have a tendency to use a tool that
> is very good for one problem on other problems as well. For instance,
> antibiotics are widely used in self-medication even when the disease
> is not of bacterial origin. People who worked with the Anatel project
> decided to develop a contract on energy auditing in ... LISP and
> Scheme (Bigloo). The new project controls a network of meters to
> measure energy consumption and make suggestions to increase efficiency
> while lowering costs. It is being implemented in a large area by an
> electrical facility (CELG). In fact, half of CELG's customers will
> receive the service. I wonder whether Lisp is the right tool for this
> kind of job. You know, if a medication works for a given disease,
> people start using it for almost everything...

Whoa.... where did that cynical comment come from?
A little dose of optimism and cheering other people's
successes please!

Wade

Jon Harrop

unread,
May 22, 2009, 4:34:57 PM5/22/09
to
Tamas K Papp wrote:
> On Tue, 19 May 2009 07:31:37 -0700, nice.and.java wrote:
>> You are also right when you state that Lisp code is full of macros. The
>> Lisp team is fond of Paul Graham's On Lisp, a book about macros. I
>> suppose that this kind of program is very hard to maintain.
>
> That assumption is wrong. Macros provide a higher level of
> abstraction...

No, they don't.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u

Jon Harrop

unread,
May 22, 2009, 4:40:46 PM5/22/09
to
nice.a...@gmail.com wrote:
> I would like to refer to the paper When lisp is faster than C, by
> Borge Svingen, in the Proceedings of the 8th Annual Conference on
> Genetic and Evolutionary Computation. The author states that the great
> advantage of Lisp for this kind of problem is that it compiles the
> code generated by the system. It is interesting to notice that Dr.
> Martins tried other languages besides Java and Lisp: Clean, OCAML,
> etc. Except for Lisp, all other implementations were forced into
> interpreters, and the result was the same: They were slower than Lisp.

Being too lazy to write a compiler is not the same as being forced to
write an interpreter. Moreover, some of the languages you are citing
(e.g. OCaml) were specifically designed for this purpose and
superseded Lisp decades ago.

If you cannot solve your problem several times faster and with less code in
OCaml than Lisp then you're doing something seriously wrong. Sounds like
you need to modernize...

Jon Harrop

unread,
May 22, 2009, 4:42:45 PM5/22/09
to
Pascal Costanza wrote:
> The only easy way to do metaprogramming in Java is by writing
> interpreters...

Writing compilers in Java that target the JVM is trivial and should be
vastly more performant than Lisp, particularly for parallel programs due to
the poor state of all existing (free and commercial) Lisp implementations
with respect to multicore.

Jon Harrop

unread,
May 22, 2009, 4:45:26 PM5/22/09
to
nice.a...@gmail.com wrote:
> I would like to refer to the paper When lisp is faster than C, by
> Borge Svingen, in the Proceedings of the 8th Annual Conference on
> Genetic and Evolutionary Computation. The author states that the great
> advantage of Lisp for this kind of problem is that it compiles the
> code generated by the system. It is interesting to notice that Dr.
> Martins tried other languages besides Java and Lisp: Clean, OCAML...

Perhaps Martins will let you publish his OCaml code on the caml-list and ask
for advice in improving it?

I suspect you could do the same with all of the other languages as well and
I would expect the results to be vastly faster than anything you can write
in Lisp...

Raffael Cavallaro

unread,
May 22, 2009, 4:52:00 PM5/22/09
to
On 2009-05-22 16:34:57 -0400, the functionally moistened one
<what.was.that.amp...@oops.no.domain.for.usenet.spam>
said, in total, with no elision on my part whatsoever:

> No, they don't.

Well and cogently argued sir! Now you'll move on to "I know you are,
but what am I?" as a fitting cap to your rhetorical brilliance.


--
Raffael Cavallaro, Ph.D.

Jon Harrop

unread,
May 22, 2009, 5:02:43 PM5/22/09
to
nice.a...@gmail.com wrote:
> I was a consultant for the team that wrote the Java code for Dr.
> Martins, and I can say that you are right. The Lisp team relied on
> SBCL in order to generate code on the fly. Compiling syntax trees for
> RC circuits is straightforward in Lisp; in Java, it is necessary to
> write an interpreter.

No, that is not true. You can metaprogram in Java using the JVM.

> Lisp has very efficient List processing tools.

Look at pattern matching over algebraic datatypes for an obvious counter
example.

> I am not sure, but I
> believe that Lisp stores lists as contiguous cells; it uses a cell for
> the link only when it cannot find a large enough storage space to hold

> the whole structure. Java needs a storage cell for each link. Usually,
> Lisp has better memory management too.

Not true. The GC in Sun's Hotspot JVM is far more advanced and efficient
than anything available in any Lisp implementation (free or commercial).

> Lisp has better computer algebra systems, with a lot of contributors.
> The Lisp team relied on Maxima, a mature and efficient system to do
> computer algebra. Your own FriCAS is another good example of Lisp CAS.
> We have nothing comparable in Java. However, I bet that in a short
> time we will have very good CAS systems in Java.

Generally, you would interoperate with production quality computer algebra
systems. For example, you can perform symbolic computations in Mathematica
via its .NET interface and compile the results using F# down to CIL for
high performance execution.

What exactly were they using Maxima for though?

> Since SBCL has few users, its team can update the compiler almost
> every month. Java cannot do this because it has a very large installed
> base.

Conversely, the man months gone into JVMs vastly outweigh those spent on
SBCL and it really shows in terms of performance and multicore capability
(unless you only know Lisp and stubbornly refuse to learn better
alternatives, apparently).

> People who wrote the Java program did not make a stupid blunder :-)

Then why did they write an interpreter instead of a compiler?

> I am fond of Lisp, that I learned when a student at Cornell.

Keep learning.

gugamilare

unread,
May 22, 2009, 4:59:23 PM5/22/09
to
On 22 maio, 17:40, Jon Harrop <j...@ffconsultancy.com> wrote:

What are you doing? Trying to create another useless flame war (like
that one which is already going on and on and on...)? Don't do this.

Kaz Kylheku

unread,
May 22, 2009, 5:20:40 PM5/22/09
to
On 2009-05-22, Jon Harrop <j...@ffconsultancy.com> wrote:
> Not true. The GC in Sun's Hotspot JVM is far more advanced and efficient
> than anything available in any Lisp implementation (free or commercial).

Even in Lisp implementations that run on the JVM? ;)

Raffael Cavallaro

unread,
May 22, 2009, 5:48:20 PM5/22/09
to
On 2009-05-22 16:59:23 -0400, gugamilare <gugam...@gmail.com> said:

> What are you doing? Trying to create another useless flame war (like
> that one which is already going on and on and on...)? Don't do this.

If one is just trying to get as many copies as possible of one's domain
name into google's archive, a flame war is good for business.

--
Raffael Cavallaro, Ph.D.

Jon Harrop

unread,
May 22, 2009, 6:31:43 PM5/22/09
to

The OP is making claims contrary to the established wisdom that Lisp
is slow, without providing a shred of evidence to back them up. I am
simply saying "put up or shut up".

You are quite right that there is no need for a repeat. We had a symbolic
benchmark with Lisp vs OCaml here before and Lisp lost badly:

http://www.lambdassociates.org/Studies/study10.htm

There are dozens of other examples, all with the same result.

This guy's conclusion should have been "we suck at everything except
Lisp" and not "We are Lisp programmers. Lisp is better. We are
better.", which is just baseless drivel. If he wanted objective
criticism in order to improve his situation, he should ask in any of
the many modern functional language forums where the clever, decent
programmers hang out, and he would get credible responses rather than
the unsubstantiated back-patting and myth propagation seen here.

Jon Harrop

unread,
May 22, 2009, 6:34:03 PM5/22/09
to

Very true. I meant standalone Lisp implementations, of course.

Clojure would be the obvious escape route to a decent language
implementation in this case. However, I assume Maxima will not be running
in Clojure any time soon because it is not a CL?

gugamilare

unread,
May 22, 2009, 9:15:09 PM5/22/09
to
On 22 maio, 19:31, Jon Harrop <j...@ffconsultancy.com> wrote:
> gugamilare wrote:
> > On 22 maio, 17:40, Jon Harrop <j...@ffconsultancy.com> wrote:
> >> If you cannot solve your problem several times faster and with less code
> >> in OCaml than Lisp then you're doing something seriously wrong. Sounds
> >> like you need to modernize...
>
> > What are you doing? Trying to create another useless flame war (like
> > that one which is already going on and on and on...)? Don't do this.
>
> The OP is making claims contrary to the established wisdom that Lisp is slow
> and without providing a shred of evidence to back it up.

Evidence?

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=all&box=1

Yes, Java, OCaml and many other languages might be faster on average
than Lisp, but Common Lisp (compiled with SBCL) itself is not that bad
either, as you can see. And, even though it is almost 4 times slower
than C (compiled with gcc), we still like it the way it is.

Pillsy

unread,
May 22, 2009, 9:16:07 PM5/22/09
to
On May 22, 4:40 pm, Jon Harrop wrote:

[...some shit I don't care about...]

What the fuck? Is it troll week? Nobody told me it was troll week!

Sheesh,
Pillsy

gugamilare

unread,
May 22, 2009, 9:24:13 PM5/22/09
to

Gee, I hope not. I just hope that the flame war in that other thread
is an isolated case caused by cross-posting. If this becomes a flame
war, I will get out of here, and I won't even read whatever shit comes
out of it.

Jon Harrop

unread,
May 22, 2009, 9:41:31 PM5/22/09
to
gugamilare wrote:
> Evidence?
>
>
http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=all&box=1
>
> Yes, Java, OCaml and many other languages might be faster on average than
> Lisp, but Common Lisp (compiled with SBCL) is not that bad either, as you
> can see. And even though it is almost 4 times slower than C (compiled with
> gcc), we still like it the way it is.

Sure. I don't doubt that. My point is that it does not substantiate any of
the claims being made by the OP in this thread. For example, the claim that
other languages "force" you to write interpreters is complete nonsense.

<<<Klu>>>

unread,
Jul 11, 2009, 10:45:58 PM7/11/09
to
Jon Harrop wrote:
> Pascal Costanza wrote:
>> The only easy way to do metaprogramming in Java is by writing
>> interpreters...
>
> Writing compilers in Java that target the JVM is trivial and should be
> vastly more performant than Lisp, particularly for parallel programs due to
> the poor state of all existing (free and commercial) Lisp implementations
> with respect to multicore.

Not quite ALL existing free Lisp implementations. There are two
JVM-targeting functional languages that I know of, Scala and Clojure
(though only the latter is a Lisp), and Clojure at least has an innovative
"software transactional memory" specifically geared towards writing
high-performance parallel code.

I have seen Clojure in action, and tight loops using double arithmetic
can easily end up running at native-code speeds, a few ns per iteration,
on Sun's HotSpot Server VM, if coded correctly. It's a black art, though;
the code I saw had type hints everywhere, unchecked-* arithmetic calls,
and other exotica that you don't typically see much of in normal Lisp
code. It did not, however, contain Java, let alone JNI, not that I saw
anyway. It also avoided actually using lists or the STM, and used
Clojure's loop/recur construct in place of tail calls.
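Not code from that post, but a minimal sketch of what such a hinted loop
typically looks like (the function name sum-squares is made up for
illustration): primitive array and return hints plus loop/recur keep the
double arithmetic unboxed.

```clojure
;; Hypothetical example of the style described above: a type-hinted,
;; loop/recur-based reduction over a primitive double array.
(defn sum-squares
  "Sum of squares of a primitive double array, with no boxing."
  ^double [^doubles xs]
  (let [n (alength xs)]
    (loop [i 0, acc 0.0]
      (if (< i n)
        ;; unchecked-inc skips overflow checks on the loop counter
        (recur (unchecked-inc i)
               (let [x (aget xs i)] (+ acc (* x x))))
        acc))))
```

Without the ^doubles and ^double hints, aget and the arithmetic would go
through reflective, boxed paths and the loop would be far slower.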

Seeing the speed it ran at, though, put me in mind of something. That
loop might have been hand-tuned, but macros could emit such code, or
even generate several variations of it and test them all on the fly.
Imagine turning a high-level description of some numerical problem
automatically into a very high-performance function to calculate it,
with Lispy code tuning and optimizing it, perhaps even using genetic
algorithms or other smart tricks to evolve ever-faster solutions. The
days of hand-tuning inner loops for speed might soon be gone, computers
having eventually beaten humans at that job too.
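As a toy illustration of that idea (again not from the post; the macro
name def-array-reduce is invented here), a macro can expand a one-line
description into exactly the kind of hinted inner loop shown earlier:

```clojure
;; Hypothetical macro: given a binary op and an initial value, emit a
;; type-hinted loop/recur reduction over a primitive double array.
(defmacro def-array-reduce [name op init]
  `(defn ~name ^double [^doubles xs#]
     (let [n# (alength xs#)]
       (loop [i# 0, acc# ~init]
         (if (< i# n#)
           (recur (unchecked-inc i#) (~op acc# (aget xs# i#)))
           acc#)))))

;; One-line "high-level description" expanding into a tuned loop:
(def-array-reduce array-sum + 0.0)
(def-array-reduce array-product * 1.0)
```

A code generator could just as well emit several such variants and
benchmark them against each other, which is the scheme described above.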

Vassil Nikolov

unread,
Jul 11, 2009, 11:24:40 PM7/11/09
to

On Sun, 17 May 2009 21:21:09 +0000 (UTC), Waldek Hebisch <heb...@math.uni.wroc.pl> said:
> ...
> typical code contains many more multiplications and additions than
> divisions and gcds.

Typical programs working with integers, quite possibly. Typical
programs working with rationals, I am not so sure.

Just my 2e-2 (and not to lessen the importance of being careful when
dealing with benchmarking programs, of course.)

---Vassil.


--
"Even when the muse is posting on Usenet, Alexander Sergeevich?"
