ordered associative arrays


Hugh Aguilar

Nov 1, 2010, 3:36:52 PM
Hello, I'm very new to CL (working my way through "Practical Common
Lisp"). Can anybody tell me if CL (or PLT Scheme) has an ordered
associative-array implementation? I want an implementation that
supports ordered traversal and also supports filtering by region: I
want to be able to copy everything within a region into another
association, or everything outside of a region into another
association, and also be able to merge two associations together.
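
To make this concrete, here is a rough Common Lisp sketch of the three
operations I mean, using a plain sorted association list. The names are
made up for illustration (not an existing library), and a real
implementation would use a balanced tree instead of a flat list:

(defun region (alist lo hi)
  "Copy every (key . value) pair whose float key lies in [LO, HI]."
  (remove-if-not (lambda (pair) (<= lo (car pair) hi)) alist))

(defun outside-region (alist lo hi)
  "Copy every pair whose key lies outside [LO, HI]."
  (remove-if (lambda (pair) (<= lo (car pair) hi)) alist))

(defun merge-assoc (a b)
  "Merge two sorted associations; on duplicate keys, pairs from A win."
  (sort (append a (remove-if (lambda (pair)
                               (assoc (car pair) a :test #'=))
                             b))
        #'< :key #'car))

;; Ordered traversal is then just a walk down the sorted list, e.g.
;; (dolist (pair (region marks 1.0 2.0)) ...)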

The reason I ask is that I wrote such an associative-array
implementation (using LLRB trees) in ANS-Forth. I want to have the
same capability in Lisp that I have in Forth. I can port my code from
Forth to Lisp (after I get better at Lisp), but I would be interested
in knowing if somebody else has already done something similar.

This is my Forth code:
http://www.forth.org/novice.html

Regards --- Hugh

RG

Nov 1, 2010, 3:43:32 PM

Hugh Aguilar

Nov 1, 2010, 3:50:41 PM

Let me further clarify. The associative arrays should be able to work
with *any* data-type as the key, including objects of the programmer's
own design. The programmer would have to write a function to compare
the keys for his specific data-type, and provide a vector to (that is,
a reference to) this function at the time that he defines his
associative array.
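
Something like this rough CL sketch is what I have in mind; the names
are invented, and the point is only that the comparison is supplied as
a function object (the Lisp analogue of a Forth execution vector):

(defstruct ordered-map
  (less-p #'< :type function)   ; user-supplied strict ordering on keys
  (pairs '()))                  ; kept sorted by LESS-P in this sketch

(defstruct point x y)           ; an example user-defined key type

(defun point< (a b)
  "Order POINT keys by X, then by Y."
  (or (< (point-x a) (point-x b))
      (and (= (point-x a) (point-x b))
           (< (point-y a) (point-y b)))))

(defun om-insert (map key value)
  "Insert KEY/VALUE, keeping the pairs sorted by the map's ordering."
  (push (cons key value) (ordered-map-pairs map))
  (setf (ordered-map-pairs map)
        (sort (ordered-map-pairs map) (ordered-map-less-p map)
              :key #'car))
  map)

;; (om-insert (make-ordered-map :less-p #'point<)
;;            (make-point :x 0.78 :y 3.33) "red")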

The region-filtering feature is primarily useful for situations in
which the key is a floating point number. The application I have in
mind is numerical.

Note that using a hash-table with an integer key, and then sorting by
another field (such as a float or whatever) isn't going to work. The
reason is that the boundaries of the region may not have
representative nodes in the associative array. For example, in my
Forth code, I may specify a region bounded by 1.0 and 2.0 --- and what
I actually get includes nodes from 1.1 to 1.9. A hash-table won't work
because searching for a 1.0 or 2.0 node will come up NIL, because
there is no such node. With hash-tables or arrays, a complete
traversal of the entire associative array would be necessary, which is
inefficient. With my Forth code, I can find the *nearest* node with a
simple search, with the same efficiency as finding an exact match for
a node. For example, I search for 1.0 and find 1.1, which is the
closest node that is >= to what I searched for. Or I search for 2.0
and find 1.9, which is the closest node that is <= to what I searched
for.
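
In Lisp terms these two lookups are a "ceiling" and a "floor" search.
A throwaway sketch on a list of pairs sorted by ascending key (made-up
names; an LLRB or any other balanced tree gives the same answers in
O(log n) instead of O(n)):

(defun ceiling-pair (alist key)
  "First pair whose key is >= KEY, or NIL if there is none."
  (find-if (lambda (pair) (>= (car pair) key)) alist))

(defun floor-pair (alist key)
  "Last pair whose key is <= KEY, or NIL if there is none."
  (find-if (lambda (pair) (<= (car pair) key)) alist :from-end t))

;; With keys 1.1 ... 1.9:
;; (ceiling-pair marks 1.0)  => the 1.1 pair
;; (floor-pair   marks 2.0)  => the 1.9 pair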

P.S. This question isn't Lisp related, but have any of you guys worked
with Lua at all? Any opinion?

Thanks for your assistance --- Hugh

Pascal J. Bourguignon

Nov 1, 2010, 4:06:17 PM
RG <rNOS...@flownet.com> writes:

And also:
http://www.informatimago.com/develop/lisp/com/informatimago/common-lisp/llrbtree.lisp

--
__Pascal Bourguignon__ http://www.informatimago.com/
A bad day in () is better than a good day in {}.

w_a_x_man

Nov 1, 2010, 4:06:12 PM
On Nov 1, 1:36 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:

MatzLisp (Ruby):

aa = {:a,22, :b,9, :c,33, :d,8, :e,7}
==>{:d=>8, :b=>9, :e=>7, :c=>33, :a=>22}

# traversal
aa.each{|x| puts x.join " "}
d 8
b 9
e 7
c 33
a 22

# filtering
aa.reject{|key,val| val < 9}
==>{:b=>9, :c=>33, :a=>22}
aa.reject{|key,val| val > 9}
==>{:d=>8, :b=>9, :e=>7}

# merging
aa.merge( {:f,0, :g,99} )
==>{:d=>8, :b=>9, :f=>0, :c=>33, :e=>7, :g=>99, :a=>22}

w_a_x_man

Nov 1, 2010, 4:29:08 PM
On Nov 1, 1:50 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:

> Let me further clarify. The associative arrays should be able to work
> with *any* data-type in the key, including objects of the programmer's
> own design. The programmer would have to write a function to compare
> the keys for his specific data-type, and provide a vector to this
> function at the time that he defines his associative array.

MatzLisp (Ruby):

dot_type = Struct.new( :x, :y, :color )
==>#<Class:0x28ff7a8>
dot = dot_type.new( 0.78, 3.33, "red" )
==>#<struct #<Class:0x28ff7a8> x=0.78, y=3.33, color="red">
hash = {dot,"hello", (2..9),"world", [2,3,5,8],"bye"}
==>{#<struct #<Class:0x28ff7a8> x=0.78, y=3.33, color="red"> =>
"hello",
2..9=>"world", [2, 3, 5, 8]=>"bye"}
hash.keys
==>[#<struct #<Class:0x28ff7a8> x=0.78, y=3.33, color="red">,
2..9, [2, 3, 5, 8]]

Frank Buss

Nov 1, 2010, 4:36:23 PM
Hugh Aguilar wrote:

> P.S. This question isn't Lisp related, but have any of you guys worked
> with Lua at all? Any opinion?

Yes, I'm the main author of Lua Player:

http://www.frank-buss.de/luaplayer/

It is a nice, small scripting-like language. In combination with
easy-to-write C extensions, it is very good for fast prototyping, even for
applications which require high performance, because the parts which need
to be fast are implemented in C, and the logic, memory handling, etc. are
implemented in Lua and handled by the Lua runtime system and GC.

Of course, Lisp doesn't need this split between two languages; it can be
fast all the time, but I think it is not as easy as Lua for beginners.

--
Frank Buss, http://www.frank-buss.de
piano and more: http://www.youtube.com/user/frankbuss


Hugh Aguilar

Nov 1, 2010, 7:10:59 PM
On Nov 1, 2:36 pm, Frank Buss <f...@frank-buss.de> wrote:
> Hugh Aguilar wrote:
> > P.S. This question isn't Lisp related, but have any of you guys worked
> > with Lua at all? Any opinion?
>
> Yes, I'm the main author of Lua Player:
>
> http://www.frank-buss.de/luaplayer/
>
> It is a nice small scripting-like language. In combination with easy to
> write C extensions, it is very good for fast prototyping, even for
> applications which requires high performance, because the parts which needs
> to be fast are implemented in C and the logic, memory handling etc. is
> implemented in Lua and by the Lua runtime system and GC.
>
> Of course, Lisp doesn't need this split between two languages, it can be
> fast all the time, but I think it is not as easy as Lua for beginners.

The application I have in mind will need to provide the user with a
domain-specific embedded-scripting language (DSL). If I write the
application in Lisp, the DSL will presumably be a subset of Lisp
(although I could possibly make it Forth). Another possibility would
be to write the application in C, and use Lua as the DSL. Yet another
possibility would be to write in C, and use FICL as the DSL. Some of
the code might require some speed, but I have heard that CL is as fast
as C these days (true?), so either should be fine.

The user will likely be somebody with minimal programming experience,
so I want a pretty simple DSL. Lua has been used in World-of-Warcraft,
and those teenage gamesters often have never programmed in any
language before, so Lua seems to be a good first-language for folks.
On the other hand, both Forth and Lisp offer more power and
flexibility --- experienced users may appreciate this.

Lua has a reputation for being easy to extend with C, and you seem to
agree. Do you think that it is feasible for me to replace the hash-
tables with LLRB trees, or would this involve too much tearing apart
of the Lua innards? I haven't examined Lua's C source-code yet, as I'm
still just learning to program in Lua itself.

In regard to your application, have you had any trouble with novices
complaining that Lua is too difficult for them?
On the other hand, have you had any trouble with experienced users
complaining that Lua is too wimpy for them?

Lisp has a long history of use as a DSL (Brief, Emacs and AutoCAD).
Forth has some history as a DSL, but mostly for use by electronics
technicians who use it for testing circuit boards or for low-level
operating-system initialization --- not so much for consumer
applications. I like Forth (it is the only language that I am
particularly good at), so I would prefer it, but I am worried that the
users might be intimidated by it and not make any effort to learn at
all. Because of this, I'm mostly only considering Lisp or Lua.

P.S. Thanks to everybody who has provided links for Lisp LLRB trees.
Some of that I could have found on my own. The reason I posted the
question here is that I only used LLRB trees in my novice Forth
package because they were simple to implement. Any kind of balanced
tree algorithm would be fine for my application, and some of the
algorithms might even be more efficient --- I don't really know very
much about the various balanced-tree algorithms and how they compare
to each other --- that is why I'm asking you Lispers, who presumably
do know something about algorithms.

Hugh Aguilar

Nov 1, 2010, 7:51:35 PM
On Nov 1, 2:06 pm, w_a_x_man <w_a_x_...@yahoo.com> wrote:
> MatzLisp (Ruby):

I had never heard of Ruby being called "MatzLisp" before. I hadn't
known that Ruby was derived from Lisp --- I had assumed that it was
yet-another SmallTalk derivative.

I don't really know anything about Ruby except that *everything* is an
object, and it is slow.

For my DSL I also considered Python, with the speed-critical innards
being written in C. Python seems to be oriented toward being a front-
end for a C program, but I have never heard of this being done in
Ruby. Isn't it true that Ruby was mostly designed for use as a stand-
alone scripting language?

I doubt that either Python or Ruby generate fast-enough code to be
used on their own, without the innards being written in C. Also, Ruby
and Python seem too big for my use --- Lua has the advantage of being
small enough that users can learn it quickly. Lua, Scheme and Forth
are the only languages I know of whose designers spent more effort on
removing features than on adding them --- their goal was to make a
language as simple as possible while still being usable. This was
also Niklaus Wirth's goal with Oberon, but that one came out *too*
simple by most accounts.

Mark Wooding

Nov 1, 2010, 8:55:31 PM
Hugh Aguilar <hughag...@yahoo.com> writes:

> I had never heard of Ruby being called "MatzLisp" before. I hadn't
> known that Ruby was derived from Lisp --- I had assumed that it was
> yet-another SmallTalk derivative.

I usually describe Ruby -- slightly unfairly -- as the shameful
love-child of Perl and Smalltalk. But there is a bit of Lisp heritage
there: Ruby's symbol syntax looks like CL keywords, and Ruby has
Scheme's first-class continuations.

-- [mdw]

Pascal J. Bourguignon

Nov 1, 2010, 9:23:42 PM
Hugh Aguilar <hughag...@yahoo.com> writes:

> On Nov 1, 2:06 pm, w_a_x_man <w_a_x_...@yahoo.com> wrote:
>> MatzLisp (Ruby):
>
> I had never heard of Ruby being called "MatzLisp" before. I hadn't
> known that Ruby was derived from Lisp --- I had assumed that it was
> yet-another SmallTalk derivative.

Ruby is a Matzacred lisp.


> I don't really know anything about Ruby except that *everything* is an
> object, and it is slow.

You can also write code like this in ruby:
http://groups.google.com/group/comp.lang.ruby/msg/56fce4adeaa79f68

Scott L. Burson

Nov 2, 2010, 12:56:41 AM
Hugh Aguilar wrote:
> Hello, I'm very new to CL (working my way through "Practical Common
> Lisp"). Can anybody tell me if CL (or PLT Scheme) has an associative
> arrays implementation? I want an implementation that supports doing an
> ordered traversal, and which also supports filtering of regions. I
> want to be able to copy everything within a region into another
> association, or everything outside of a region into another
> association, and also be able to merge two associations together.

FSet has ways to do all those things:

http://common-lisp.net/project/fset/

-- Scott

Scott L. Burson

Nov 2, 2010, 1:03:14 AM
RG wrote:
>
> http://tinyurl.com/3yb8by2

The sarcasm here is uncalled-for. I'm glad Hugh asked us and gave me a
chance to plug FSet, which I don't think he would have found on his own
-- certainly not with that query.

-- Scott

Scott L. Burson

Nov 2, 2010, 1:04:51 AM
Scott L. Burson wrote:
>
> FSet has ways to do all those things:
>
> http://common-lisp.net/project/fset/

I should have added: you can perhaps most easily download and install
FSet by using Quicklisp:

http://www.quicklisp.org/beta/

-- Scott

Hugh Aguilar

Nov 2, 2010, 3:04:32 PM
On Nov 1, 11:03 pm, "Scott L. Burson" <Sc...@ergy.com> wrote:
> I'm glad Hugh asked us and gave me a
> chance to plug FSet, which I don't think he would have found on his own
> -- certainly not with that query.

Thanks for telling me about FSet. Your FSet package seems to be
inspired by the same kind of thinking that inspired my Forth novice
package --- you are providing basic data-structures and trying to be
robust, rather than application-specific.

I noticed that you said in your documentation: "[Fset is] not the
place to start for newcomers to the language."
I'm currently working my way through "Practical Common Lisp." Can you
recommend a path for getting from here to there?
My plan right now is to learn the basics of Lisp from this book, and
then port my slide-rule program from Forth to Lisp. That program is
recursive at its crux, which seems to make it the kind of program that
would usually get written in Lisp.

I tried to learn Factor, but got bogged down and quit. The problem was
that I didn't understand dynamic-OOP (I have only used static-OOP,
including C++). Factor's documentation is a reference, and it assumes
that the reader already knows the concepts, which I didn't. I decided
to learn Lisp, which Factor is largely derived from, because there are
beaucoup books and documents teaching CLOS concepts. After I grasp
the ideas, I may give Factor another try. I may just stick with CL
though; Factor doesn't support compile-time code, which CL does, so CL
is actually closer to Forth than Factor is, despite the fact that CL
doesn't have a Forth-like parameter stack (which Factor does).

In addition to arrays and lists, my novice package now has associative
arrays too. My slide-rule program used lists for holding all of the
marks in each scale. If I were rewriting that program today, I would
use associative arrays for the marks, with float values as keys. It
would be more efficient because I wouldn't have to sequentially search
for particular nodes or regions the way that I do with the lists. On
the other hand though, lists are useful for the gcode or PostScript
programs that I am generating --- I just append each line of code to
the list as I generate it. The same is true of the shapes, which are
intermediate data that the gcode and Postscript gets generated from.
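
As a made-up CL fragment, the line-collecting part is as simple as the
usual push/nreverse idiom (the PostScript-flavored output here is only
for illustration):

(defun emit-marks (marks)
  "Collect one output line per (position . label) pair, in order."
  (let ((lines '()))
    (dolist (mark marks)
      (push (format nil "% mark ~A at ~F" (cdr mark) (car mark))
            lines))
    (nreverse lines)))   ; restore generation order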

I think that both Forth and Lisp are good languages for implementing
custom compilers, which is essentially what my slide-rule program is.
Writing a program like this in C would *not* be a good idea! It would
likely be twice as large as the Forth program, and take four times as
long to write.

Frank Buss

Nov 2, 2010, 7:40:15 PM
Hugh Aguilar wrote:

> Lua has a reputation for being easy to extend with C, and you seem to
> agree. Do you think that it is feasible for me to replace the hash-
> tables with LLRB trees, or would this involve too much tearing apart
> of the Lua innards? I haven't examined Lua's C source-code yet, as I'm
> still just learning to program in Lua itself.

I don't know much of the details of the Lua runtime system, but why do you
want to replace the hashtables with LLRB trees? What is the advantage? If
there are not many collisions, hashtables are fine and maybe even faster
for insert operations.

> In regard to your application, have you had any trouble with novices
> complaining that Lua is too difficult for them?

I have written a tutorial for novices for Lua Player:

http://wiki.ps2dev.org/psp:lua_player:tutorial

and there was a forum if someone needed more help. In my experience Lua
was easier for non-programmers than other languages. Maybe that's one of
the reasons WoW chose it, too.

> On the other hand, have you had any trouble with experienced users
> complaining that Lua is too wimpy for them?

No, they didn't use it at all if it was too wimpy for them :-) But there
were some power users who liked the idea of fast scripting in Lua and
enhanced it with their own C functions for the things which are not
possible in Lua.

Scott L. Burson

Nov 3, 2010, 1:12:34 AM
Hugh Aguilar wrote:
>
> I noticed that you said in your documentation: "[Fset is] not the
> place to start for newcomers to the language."
> I'm currently working my way through "Practical Common Lisp." Can you
> recommend a path for getting from here to there?

I think that once you get through Practical Common Lisp, FSet will not
be too much of a stretch, given your other experience. Just be sure you
understand the difference between functions and macros.

-- Scott

Rob Warnock

Nov 3, 2010, 5:10:15 AM
Frank Buss <f...@frank-buss.de> wrote:
+---------------

| Hugh Aguilar wrote:
| > Lua has a reputation for being easy to extend with C, and you seem to
| > agree. Do you think that it is feasible for me to replace the hash-
| > tables with LLRB trees, or would this involve too much tearing apart
| > of the Lua innards? I haven't examined Lua's C source-code yet, as I'm
| > still just learning to program in Lua itself.
|
| I don't know much of the details of the Lua runtime system, but why do you
| want to change the hashtables with LLRB trees? What is the advantage? If
| there are not many collisions, hashtables are fine and maybe even faster
| for insert operations.
+---------------

More importantly, Lua tables are *NOT* merely "hashtables": for
positive integer keys they are *also* "arrays" which provide O(1)
performance for keys that are "dense enough" between 1 and N [where
"dense" and "N" are dynamically discovered as the array is populated].
See <news:jf2dnZcKDNhFVsHR...@speakeasy.net> for more
details and references to the relevant Lua docs.

I'm not saying that it's totally infeasible to replace the hashtable
*part* of Lua tables with LLRB trees, just that you *MUST* preserve
the O(1) performance of the array part.


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Hugh Aguilar

Nov 3, 2010, 3:11:35 PM
On Nov 3, 3:10 am, r...@rpw3.org (Rob Warnock) wrote:
> Frank Buss  <f...@frank-buss.de> wrote:
> +---------------
> | Hugh Aguilar wrote:
>
> | > Lua has a reputation for being easy to extend with C, and you seem to
> | > agree. Do you think that it is feasible for me to replace the hash-
> | > tables with LLRB trees, or would this involve too much tearing apart
> | > of the Lua innards? I haven't examined Lua's C source-code yet, as I'm
> | > still just learning to program in Lua itself.
> |
> | I don't know much of the details of the Lua runtime system, but why do you
> | want to change the hashtables with LLRB trees? What is the advantage? If
> | there are not many collisions, hashtables are fine and maybe even faster
> | for insert operations.
> +---------------

The advantage is that I can do an ordered traversal when the key is a
non-integer value (specifically, when the key is a floating point
value). This could be accomplished in Lua-as-is by using a positive
integer as the key and then sorting the table by the floating-point
datum within the payload of the records. This would be inefficient
because, if I insert a new node, I have to resort the entire table
again. With the LLRB trees, the key can be the float, and the table
will *always* be sorted.

Another advantage is that I can find regions and filter out
everything in the region or everything outside of the region. This
could be accomplished in Lua-as-is by sorting the table and then doing
a binary search of the table to find the left-bracket and right-
bracket nodes for the region. The binary search for the left-bracket
would have to find the lowest node that is >= to the target, and the
search for the right-bracket would have to find the highest node that
is <= to the target. This is because, in many cases, there won't be a
node that is *exactly* on the specified boundary.
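
To make the bracket searches concrete, here is a rough sketch in CL
(this group's language, rather than Lua) of the two binary searches
over a sorted vector of float keys; the names are made up for the
example:

(defun left-bracket (keys lo)
  "Index of the lowest key >= LO in the sorted vector KEYS, or NIL."
  (let ((low 0) (high (1- (length keys))) (result nil))
    (loop while (<= low high)
          do (let ((mid (floor (+ low high) 2)))
               (if (>= (aref keys mid) lo)
                   (setf result mid high (1- mid))
                   (setf low (1+ mid)))))
    result))

(defun right-bracket (keys hi)
  "Index of the highest key <= HI in the sorted vector KEYS, or NIL."
  (let ((low 0) (high (1- (length keys))) (result nil))
    (loop while (<= low high)
          do (let ((mid (floor (+ low high) 2)))
               (if (<= (aref keys mid) hi)
                   (setf result mid low (1+ mid))
                   (setf high (1- mid)))))
    result))

;; (left-bracket  #(1.1 1.3 1.5 1.9) 1.0) => 0
;; (right-bracket #(1.1 1.3 1.5 1.9) 2.0) => 3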

All in all, using Lua-as-is to work with ordered data would be
inefficient. You are right that the hash-tables may be faster for
insertions, but the inefficiency arises from the need to sort the
table after every insertion. Also, my program would be much simpler
if the floating-point value were the key, rather than having an
integer key that doesn't have any particular meaning and having the
floating-point value buried in the payload. Because I'm expecting non-
programmers to be writing scripts in Lua, I want the table to be as
conceptually simple as possible, which means that the floating-point
value will be the key.

> More importantly, Lua tables are *NOT* merely "hashtables": for
> positive integer keys they are *also* "arrays" which provide O(1)
> performance for keys that are "dense enough" between 1 and N [where
> "dense" and "N" are dynamically discovered as the array is populated].
> See <news:jf2dnZcKDNhFVsHR...@speakeasy.net> for more
> details and references to the relevant Lua docs.
>
> I'm not saying that it's totally infeasible to replace the hashtable
> *part* of Lua tables with LLRB trees, just that you *MUST* preserve
> the O(1) performance of the array part.

I know that Lua tables segregate the positive integer keys into an
array, and everything else into a hash-table. I'm only intending to
replace the hash-table part with an LLRB tree. We will still have
IPAIRS for the array part. The big difference is that PAIRS will
provide the nodes in an ordered manner, rather than just saying that
the order is undefined. Also, we will get the region-filtering that
isn't currently available.

Hugh Aguilar

Nov 3, 2010, 3:23:29 PM

Thanks for your encouragement. After I finish "Practical Common Lisp,"
I will tackle the project of porting the slide-rule program to CL, and
will also upgrade it to use an associative array for the marks rather
than a list, although I will continue to use lists for everything
else.

I've pretty much decided to go with Lua for the new project that I was
discussing, but I also want to learn CL for general interest. I know
the difference between functions and macros --- that is really what
attracted me to Lisp. In Forth we have IMMEDIATE words, and in Lisp
you have macros, and afaik these are the only languages that have
anything like this. C does do string-replacement with #DEFINE, but
that is not at all the same thing as having code that executes at
compile-time.
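
A tiny made-up illustration of what I mean: the body of a CL macro
executes when the call site is compiled --- roughly what a Forth
IMMEDIATE word does --- while #define can only paste text:

(defmacro with-timing ((&key (label "timed form")) &body body)
  "Wrap BODY so that its wall-clock run time is printed under LABEL."
  ;; This FORMAT runs at macro-expansion (compile) time:
  (format t "~&; expanding WITH-TIMING for ~S~%" label)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1 (progn ,@body)
         (format t "~&~A: ~,3F s~%" ,label
                 (/ (float (- (get-internal-real-time) ,start))
                    internal-time-units-per-second))))))

;; (with-timing (:label "sorting")
;;   (sort (copy-seq *some-list*) #'<))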

namekuseijin

Nov 3, 2010, 4:06:10 PM
>     ==>{:d=>8, :b=>9, :f=>0, :c=>33, :e=>7, :g=>99, :a=>22}

care to enlighten us about how much fun it is to be a user of other
people's code rather than write code by yourself?

namekuseijin

Nov 3, 2010, 4:13:56 PM
On Nov 1, 6:36 pm, Frank Buss <f...@frank-buss.de> wrote:
> Of course, Lisp doesn't need this split between two languages, it can be
> fast all the time, but I think it is not as easy as Lua for beginners.

http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=luajit&lang2=sbcl

That's LuaJIT beating SBCL hard in CPU time, memory usage and
code size. It's the scripting language of the Gods... python and ruby
are no match, even Scala has a hard time against it...

who could imagine a highly optimized table data structure could beat the
hell out of cons cells, lists, vectors and the like, huh?

it also allows for sweet-jesus easy FFI by calling GNU gmp here and
there with such grace you hardly notice it's FFI...

Lua's original developer was a Scheme teacher back then, BTW.

Nathan

Nov 4, 2010, 7:52:54 AM
On Nov 3, 3:13 pm, namekuseijin <namekusei...@gmail.com> wrote:
> On Nov 1, 6:36 pm, Frank Buss <f...@frank-buss.de> wrote:
>
> > Of course, Lisp doesn't need this split between two languages, it can be
> > fast all the time, but I think it is not as easy as Lua for beginners.
>
> http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=lua...

>
> That's LuaJIT beating SBCL hard, both in CPU time, memory usage and
> code size.  It's the scripting language of the Gods... python and ruby
> are no match, even Scala has a hard time against it...
>
> who could imagine highly optimized table datastructure could beat the
> hell out of cons cells, lists, vectors and the like, huh?
>
> it also allows for sweet-jesus easy FFI by calling GNU gmp here and
> there with such grace you hardly notice it's FFI...
>
> Lua's original developer was a Scheme teacher back then, BTW.

Okay, this is likely to tork a lot of people off; that's not my goal.
I only want to accurately reflect what I've seen and heard. From my
own personal benchmarking, from emails with some of the implementors,
and from stats I've read: there is no such thing as a mature
implementation of Common Lisp. We learn Lisp because it is an amazing
language, not because it has an amazing implementation.

If you want your Lisp programs to perform, look into Clojure, it's a
Lisp dialect that runs in the JVM; fully bi-directional Java
compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
I've read a few pieces by other people who said they did it. According
to their statistics at least, Clojure will run about 1/10 the speed of
Java which means it beats existing implementations of Common Lisp by a
factor of 100-1000 times.

Clojure is something a lot of Lisp people like to call a toy. From
what I've seen, it doesn't have nearly the feature set of Common Lisp,
and the development tools are also not up to the same level, which is
why I personally don't use it. If I were trying to write production-level
Lisp code, however, rather than just being here to learn an amazing
language, I would absolutely go with Clojure.

Nothing about Clojure is that amazing, except that it's standing on
the shoulders of a giant (and computer scientists rarely stand on each
other's shoulders). The libraries you're going to get from Java are
much more extensive than you'll find with Common Lisp. Java has also
had a lot of very smart people working on optimizing its VM.

I read Guy Steele saying "...we were after the C++ programmers. We
managed to drag a lot of them about halfway to Lisp" while talking
about the Java programming language. Ever since then I've been
thinking that a project like Clojure might actually be the original
intent of the Java development team.

Anyway, don't give up on Lisp just because of a slow implementation.
It's true that it's something every developer likes to know they've
got good, but there's enough right with Common Lisp to put up with
almost anything.

Pascal Costanza

Nov 4, 2010, 8:16:56 AM
On 04/11/2010 12:52, Nathan wrote:
> On Nov 3, 3:13 pm, namekuseijin<namekusei...@gmail.com> wrote:
>> On Nov 1, 6:36 pm, Frank Buss<f...@frank-buss.de> wrote:
>>
>>> Of course, Lisp doesn't need this split between two languages, it can be
>>> fast all the time, but I think it is not as easy as Lua for beginners.
>>
>> http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=lua...
>>
>> That's LuaJIT beating SBCL hard, both in CPU time, memory usage and
>> code size. It's the scripting language of the Gods... python and ruby
>> are no match, even Scala has a hard time against it...
>>
>> who could imagine highly optimized table datastructure could beat the
>> hell out of cons cells, lists, vectors and the like, huh?
>>
>> it also allows for sweet-jesus easy FFI by calling GNU gmp here and
>> there with such grace you hardly notice it's FFI...
>>
>> Lua's original developer was a Scheme teacher back then, BTW.
>
> Okay, this is likely to tork a lot of people off, that's not my goal.
> I only want to accurately reflect what I've seen and heard. From my
> own personal benchmarking, from emails with some of the implementors,
> from stat's I've read: there is no such thing as a mature
> implementation of Common Lisp. We learn Lisp because it is an amazing
> language, not because it has an amazing implementation.

There are a lot of industrial applications of Common Lisp, in the real
world, here and now. This wouldn't be possible if Common Lisp weren't a
mature language.

> If you want your Lisp programs to perform, look into Clojure, it's a
> Lisp dialect that runs in the JVM; fully bi-directional Java
> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
> I've read a few pieces by other people who said they did it. According
> to their statistics at least, Clojure will run about 1/10 the speed of
> Java which means it beats existing implementations of Common Lisp by a
> factor of 100-1000 times.

I would like to see a more serious comparison, taking different features
of each language into account. While in general, I wouldn't be surprised
if Clojure did better performance-wise, there are certain aspects in the
Clojure design where I find it hard to believe that it has a good chance
at beating Common Lisp. Say, plain functions in Clojure are probably
more efficient than in Common Lisp, but Clojure's method dispatch
mechanism is too general compared to that of CLOS, which in turn was
carefully designed to be both expressive _and_ efficient.

I don't want to make any predictions here, but you can't make them either.

> Clojure is something a lot of Lisp people like to call a toy. From
> what I've see, it doesn't have nearly the feature set of Common Lisp,
> the development tools are also not up to the same level which is why I
> personally don't use it. If I were trying to write production level
> Lisp code however, rather than just being here to learn an amazing
> language, I would absolutely go with Clojure.

This depends a lot on context. See above.

Just as an example, here is an example of a product recently shipped,
implemented in LispWorks: http://www.youtube.com/watch?v=Ti2q8eK0l58

> Nothing about Clojure is that amazing, except that it's standing on
> the shoulders of a giant (and computer scientists rarely stand on each
> others shoulders). The libraries you're going to get from Java are
> much more extensive than you'll find with Common Lisp. Java has also
> had a lot of very smart people working on optimizing it's VM.

Java libraries have to center around the very limiting single
inheritance, single dispatch object-centric model, which usually makes
them too complicated.

> I read Guy Steele saying "...we were after the C++ programmers. We
> managed to drag a lot of them about halfway to Lisp" while talking
> about the Java programming language. Ever since then I've been
> thinking that a project like Clojure might actually be the original
> intent of the Java development team.

After 15 years, still waiting for the other half. Not holding my
breath... :-P


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Tim Bradshaw

Nov 4, 2010, 8:54:17 AM
On 2010-11-04 11:52:54 +0000, Nathan said:

> If you want your Lisp programs to perform, look into Clojure, it's a
> Lisp dialect that runs in the JVM; fully bi-directional Java
> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
> I've read a few pieces by other people who said they did it. According
> to their statistics at least, Clojure will run about 1/10 the speed of
> Java which means it beats existing implementations of Common Lisp by a
> factor of 100-1000 times.

This is so confused as to be tragic. For a start it makes the classic
class/instance confusion that people like to make: you can't compare the
performance of languages, only implementations of those languages. And
in fact you can't even do that; you can compare the performance of
implementations running specific programs. Java is a language, so it does
not have performance characteristics. An implementation of Java running
a specific program does.

Then you claim a factor of 1,000 to 10,000 performance difference
(presumably for the same algorithm on good implementations of both
languages), which is just mad.

I thought this kind of silliness had died out, actually.

Nicolas Neuss

Nov 4, 2010, 9:12:33 AM
Nathan <nbee...@gmail.com> writes:

> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
> I've read a few pieces by other people who said they did it.

Maybe you should. You will be surprised.

> According to their statistics at least, Clojure will run about 1/10
> the speed of Java which means it beats existing implementations of
> Common Lisp by a factor of 100-1000 times.

And can probably be beaten by other "existing" implementations of Common
Lisp by a factor of 100-1000.

> Clojure is something a lot of Lisp people like to call a toy.

I'm quite sure you won't find anyone here in this list calling it a toy.

Nicolas

Tamas K Papp

Nov 4, 2010, 10:12:41 AM
On Thu, 04 Nov 2010 04:52:54 -0700, Nathan wrote:

> only want to accurately reflect what I've seen and heard. From my own
> personal benchmarking, from emails with some of the implementors, from
> stat's I've read: there is no such thing as a mature implementation of
> Common Lisp. We learn Lisp because it is an amazing language, not
> because it has an amazing implementation.

Please speak only for yourself. Personally, I find SBCL amazing.

> If you want your Lisp programs to perform, look into Clojure, it's a
> Lisp dialect that runs in the JVM; fully bi-directional Java compatible.
> I haven't benchmarked Common Lisp vs Clojure myself, but I've read a few
> pieces by other people who said they did it. According to their
> statistics at least, Clojure will run about 1/10 the speed of Java which
> means it beats existing implementations of Common Lisp by a factor of
> 100-1000 times.

So you found that the program runs 1000-10000 times slower when
implemented in CL, compared to Java? I find this very hard to believe.

Can you share the information necessary to replicate this benchmark ---
including source code, the exact implementation(s) you used, etc.?

> Clojure is something a lot of Lisp people like to call a toy. From what

I can't say that this is a prevalent attitude. What made you form
this impression?

> Anyway, don't give up on Lisp just because of a slow implementation.

I was not about to give up on Lisp. Also, I find your premise is
false: CL has quite a few implementations that compile very efficient
code.

I think that you have made some extraordinary claims about CL, now it
is time to supply some extraordinary evidence. I am looking forward
to seeing your benchmarks.

Best,

Tamas

namekuseijin

Nov 4, 2010, 12:04:51 PM
On Nov 4, 10:54 am, Tim Bradshaw <t...@tfeb.org> wrote:
> Then you claim a factor of 1,000 to 10,000 performance difference
> (presumably for the same algorithm on good implementations of both
> languages), which is just mad.

mad it is indeed:

http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=sbcl&lang2=clojure

but this is on single core... which is not Clojure's strength...

Pascal J. Bourguignon

Nov 4, 2010, 12:24:38 PM
Nathan <nbee...@gmail.com> writes:

> If you want your Lisp programs to perform, look into Clojure, it's a
> Lisp dialect that runs in the JVM; fully bi-directional Java
> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
> I've read a few pieces by other people who said they did it. According
> to their statistics at least, Clojure will run about 1/10 the speed of
> Java which means it beats existing implementations of Common Lisp by a
> factor of 100-1000 times.

I don't believe that. I think it's a trick to have all CLispers try
Clojure and perhaps switch to it.

Next week, another lisp-like language will be invented, and they will
also try to disperse the lisp community efforts.

This is a 'babel' plot to destroy lisp.

Paul Donnelly

Nov 4, 2010, 1:12:02 PM
Nathan <nbee...@gmail.com> writes:

> Okay, this is likely to tork a lot of people off, that's not my goal.
> I only want to accurately reflect what I've seen and heard. From my
> own personal benchmarking, from emails with some of the implementors,
> from stat's I've read: there is no such thing as a mature
> implementation of Common Lisp. We learn Lisp because it is an amazing
> language, not because it has an amazing implementation.

What is your definition of "mature"? It seems different from the one I
am using.

Thomas A. Russ

Nov 4, 2010, 12:38:15 PM
Nathan <nbee...@gmail.com> writes:

> Okay, this is likely to tork a lot of people off, that's not my goal.
> I only want to accurately reflect what I've seen and heard. From my
> own personal benchmarking, from emails with some of the implementors,
> from stat's I've read: there is no such thing as a mature
> implementation of Common Lisp. We learn Lisp because it is an amazing
> language, not because it has an amazing implementation.

Um, this is patently not true.

There are several mature implementations of Common Lisp. Among them I
would note
   Allegro Common Lisp (ACL) from Franz
   Lucid Common Lisp
   Steel Bank Common Lisp (SBCL)
   CMU Common Lisp

These are mature in the sense of having been around for quite a while,
implementing the entire Common Lisp specification, and having high
quality implementations and good compilers.

> If you want your Lisp programs to perform, look into Clojure, it's a
> Lisp dialect that runs in the JVM; fully bi-directional Java
> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
> I've read a few pieces by other people who said they did it. According
> to their statistics at least, Clojure will run about 1/10 the speed of
> Java which means it beats existing implementations of Common Lisp by a
> factor of 100-1000 times.

Obviously you have not done much work with one of the quality Common
Lisp implementations if you think that is the speed you should expect.

We run a large system, PowerLoom [1], which we translate into native
Lisp, Java and C++ code. When we run our standard test suite on these
different base implementations we generally observe the following
performance characteristics:

The Java and Lisp versions perform about the same.
The C++ version is faster by a factor of 1.5 to 2x.

This is actually one of the few examples of what is essentially a single
code base translated into multiple languages. I think that gives a much
better feel for the effects of the language than random rumors.

[1] http://www.isi.edu/isd/LOOM/PowerLoom

--
Thomas A. Russ, USC/Information Sciences Institute

Hugh Aguilar

Nov 4, 2010, 5:44:10 PM
On Nov 3, 2:13 pm, namekuseijin <namekusei...@gmail.com> wrote:
> [Lua is] the scripting language of the Gods... python and ruby
> are no match, even Scala has a hard time against it...

Lua is the scripting language of the gods??? I thought the gods did
their programming in DNA. We may not be qualified to criticize their
programming skills, considering that we *are* the programs. :-)

By some accounts, the internet was invented so people could argue
about which is the "best" programming language, and this continues to
be its primary use today. ;-) What I'm trying to say is that it was
not my intention in this thread to start yet-another Language Wars
skirmish. These kinds of debates are like comparing apples and
oranges. Python and Ruby are *very* different from Lua in their size
and their intended use, so it doesn't really mean much to say that Lua
can beat them. For the application that I have in mind right now, I do
think that Lua is likely the best choice because it is *very* light-
weight. A lot of people like Ruby and Python for writing light-weight
programs, and PLT Scheme is also in this league. CL seems to be more
oriented toward heavy-weight programs though. A lot of this is just a
matter of taste though. There are people writing huge heavy-weight
programs in Scheme (Donald Knuth and his iTex), and there are also
people using CL for scripting.

My goal with Lisp or Scheme or whatever, is to learn how to do
"scripting" on desktop computers. When I worked as a Forth programmer
previously, I wrote software for micro-controllers, which is a lot
different than scripting on desktop computers. I have written software
for desktop computers in Forth, but I am increasingly unconvinced of
Forth appropriateness to the desktop environment (although I still
think that Forth is and always will be the best language for micro-
controllers). Given my background as a Forther, I think that PLT
Scheme would be a pretty good language for me, because it is lighter-
weight than CL. On the other hand though, there are more books
available for CL than Scheme, so I am learning CL first to make my
educational process easier (I prefer paper-and-ink books to online
documents).

On the subject of speed, in Forth I have always dropped down into
assembly-language as necessary. I'm generally more interested in how
easy this is for the various implementations, than in any table of
benchmarks comparing the various implementations in regard to
"typical" code (whatever that is). I asked about this once before and
was told that PLT Scheme and several of the CL implementations have
assemblers. Does anybody have an opinion on which systems lend
themselves to assembly-language programming?

Isaac Gouy

Nov 4, 2010, 5:46:12 PM
On Nov 4, 9:04 am, namekuseijin <namekusei...@gmail.com> wrote:
> On Nov 4, 10:54 am, Tim Bradshaw <t...@tfeb.org> wrote:
>
> > Then you claim a factor of 1,000 to 10,000 performance difference
> > (presumably for the same algorithm on good implementations of both
> > languages), which is just mad.
>
> mad it is indeed:
>
> http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=sbc...

>
> but this is on single core... which is not Clojure's strength...


So look at quad-core ;-)

http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=sbcl&lang2=clojure


http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=sbcl&lang2=clojure

namekuseijin

Nov 4, 2010, 9:38:06 PM
On Nov 4, 7:46 pm, Isaac Gouy <igo...@yahoo.com> wrote:
> On Nov 4, 9:04 am, namekuseijin <namekusei...@gmail.com> wrote:
>
> > On Nov 4, 10:54 am, Tim Bradshaw <t...@tfeb.org> wrote:
>
> > > Then you claim a factor of 1,000 to 10,000 performance difference
> > > (presumably for the same algorithm on good implementations of both
> > > languages), which is just mad.
>
> > mad it is indeed:
>
> >http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=sbc...
>
> > but this is on single core... which is not Clojure's strength...
>
> So look at quad-core ;-)
>
> http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=sb...
>
> http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=sb...

hey, aren't you from C++ world alone?

anyway, yeah, pretty dismal results.  Still, what I find more impressive
is that you get quite a bit of concurrency in Clojure for free, without
even asking (or writing) for it:

http://shootout.alioth.debian.org/u64q/program.php?test=spectralnorm&lang=clojure&id=3

this shares computation on 4 cores for a load of about 30% for each
one, yet I see no explicit concurrency mechanisms like in the low-
level CL one:

http://shootout.alioth.debian.org/u64q/program.php?test=spectralnorm&lang=sbcl&id=3

I wonder what would result if someone more experienced in Clojure
would program a more concurrently-geared version with operators like
doseq...

Tim Bradshaw

Nov 5, 2010, 7:06:45 AM
On 2010-11-04 16:38:15 +0000, Thomas A. Russ said:

> There are several mature implementations of Common Lisp. Among them I
> would note
> Allegro Common Lisp (ACL) from Franz
> Lucid Common Lisp
> Steel Bank Common Lisp (SBCL)
> CMU Common Lisp

I suspect Lucid (now Liquid) is mostly legacy now. However LispWorks
(from the same vendor) should definitely be on this list, as should
Clozure Common Lisp, CLISP, and I am sure others I have missed.

There are probably ~10 good mature, maintained, implementations I
should think including at least a couple with excellent commercial
support.

I guess the point I'm trying to make is that the implementation
position is currently better than it has ever been: you have to be on a
fairly left-field platform to not have at least a couple of really good
implementations.

Isaac Gouy

Nov 5, 2010, 11:30:39 AM
On Nov 4, 6:38 pm, namekuseijin <namekusei...@gmail.com> wrote:
> On Nov 4, 7:46 pm, Isaac Gouy <igo...@yahoo.com> wrote:
-snip-

> > > but this is on single core... which is not Clojure's strength...
>
> > So look at quad-core ;-)
>
> >http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=sb...
>
> >http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=sb...
>
> hey, aren't you from C++ world alone?


I have nothing to do with C++


> anyway, yeah, pretty dismal results.  Still I find more impressive is
> that you get quite a few concurrency in Clojure for granted, without
> even looking (or writing) for it:
>

> http://shootout.alioth.debian.org/u64q/program.php?test=spectralnorm&...


>
> this shares computation on 4 cores for a load of about 30% for each


Add the percentages

5,500: 20% + 20% + 43% + 17% = 100%

that's just the OS bouncing the process among the cores.


(I don't recall if JVM GC can use multi core even when the program
itself does not?)


> one, yet I see no explicit concurrency mechanisms like in the low-
> level CL one:
>

> http://shootout.alioth.debian.org/u64q/program.php?test=spectralnorm&...

namekuseijin

Nov 5, 2010, 11:54:22 AM
On 5 nov, 09:06, Tim Bradshaw <t...@tfeb.org> wrote:
> On 2010-11-04 16:38:15 +0000, Thomas A. Russ said:
>
> > There are several mature implementations of Common Lisp.  Among them I
> > would note
> >    Allegro Common Lisp (ACL) from Franz
> >    Lucid Common Lisp
> >    Steel Bank Common Lisp (SBCL)
> >    CMU Common Lisp
>
> I suspect Lucid (now Liquid) is mostly legacy now.  However LispWorks
> (from the same vendor) should definitely be on this list, as should
> Clozure Common Lisp, CLISP, and I am sure others I have missed.
>
> There are probably ~10 good mature, maintained, implementations I
> should think including at least a couple with excellent commercial
> support.

good. It gives me a warm feeling inside to know that mature CL
implementations are being beaten hard by a still quite experimental
Lua JIT compiler... :p

Thomas A. Russ

Nov 5, 2010, 12:05:39 PM
Tim Bradshaw <t...@tfeb.org> writes:

> On 2010-11-04 16:38:15 +0000, Thomas A. Russ said:
>
> > There are several mature implementations of Common Lisp. Among them I
> > would note
> > Allegro Common Lisp (ACL) from Franz
> > Lucid Common Lisp
> > Steel Bank Common Lisp (SBCL)
> > CMU Common Lisp
>
> I suspect Lucid (now Liquid) is mostly legacy now. However LispWorks
> (from the same vendor) should definitely be on this list,

You're right. I had the nagging feeling that I had gotten the name
wrong, but was too lazy to check.... I meant LispWorks.

Pascal Costanza

Nov 5, 2010, 4:04:18 PM

It's not the first time in history that CL implementations get beaten by
implementations of other languages in terms of performance. There must
be something to Common Lisp that makes it worthwhile to use it in spite
of this.

namekuseijin

Nov 5, 2010, 8:38:51 PM
On Nov 5, 6:04 pm, Pascal Costanza <p...@p-cos.net> wrote:
> > good.  It gives me a warm feeling inside to know that mature CL
> > implementations are being beaten hard by a still quite experimental
> > Lua JIT compiler... :p
>
> It's not the first time in history that CL implementations get beaten by
> implementations of other languages in terms of performance. There must
> be something to Common Lisp that makes it worthwhile to use it in spite
> of this.

perseverance?

in any case, it's the first time it's beaten by a dynamically typed
scripting language.

I have a love/hate relationship with the Lisp family of languages. I
like the basic ideas, its kernel of operators, I love the homoiconic
nature, I dig hierarchical paren-editing with proper tools, but it's
hard to cope with the idea that lesser tools often provide better
performance by ugly means. I mean, often compiled Lisp code is
beaten by tools that aren't actually compiling anything, just translating
as much as they can into something as close to real C loops as they can.

Pascal Costanza

Nov 6, 2010, 4:29:40 AM
On 06/11/2010 01:38, namekuseijin wrote:
> On Nov 5, 6:04 pm, Pascal Costanza<p...@p-cos.net> wrote:
>>> good. It gives me a warm feeling inside to know that mature CL
>>> implementations are being beaten hard by a still quite experimental
>>> Lua JIT compiler... :p
>>
>> It's not the first time in history that CL implementations get beaten by
>> implementations of other languages in terms of performance. There must
>> be something to Common Lisp that makes it worthwhile to use it in spite
>> of this.
>
> perseverance?
>
> in any case, it's the first time it's beaten by a dynamically typed
> scripting language.

I wouldn't know. Smalltalk, for example, is known to have very efficient
implementations, and I wouldn't be surprised if at some stage in history
it was better than Lisp. Or maybe not. So what?

> I have a love/hate relationship with the Lisp family of languages. I
> like the basic ideas, its kernel of operators, I love the homoiconic
> nature, I dig hierarchical paren-editing with proper tools, but it's
> hard to cope up with the idea that lesser tools often provide better
> performance out of ugly means. I mean, often Lisp compiled code is
> beaten by tools not actually compiling anything, just translating as
> much as they can into as close to C real loops as they can.

To quote Richard Gabriel: "Performance is an issue, but it is not the
only issue."

Tamas K Papp

Nov 6, 2010, 5:49:21 AM
On Fri, 05 Nov 2010 17:38:51 -0700, namekuseijin wrote:

> On Nov 5, 6:04 pm, Pascal Costanza <p...@p-cos.net> wrote:
>> > good.  It gives me a warm feeling inside to know that mature CL
>> > implementations are being beaten hard by a still quite experimental
>> > Lua JIT compiler... :p
>>
>> It's not the first time in history that CL implementations get beaten
>> by implementations of other languages in terms of performance. There
>> must be something to Common Lisp that makes it worthwhile to use it in
>> spite of this.
>
> perseverance?
>
> in any case, it's the first time it's beaten by a dynamically typed
> scripting language.

I think that you are reading too much into a few simple benchmarks.
The graph that you linked to shows that sometimes SBCL is faster than
LUA, sometimes the other way round. Consider this simple statistical
model:

log CPU time_{i,j} ~ N(alpha_j+beta_i, sigma^2_j)

for benchmark i and language (& implementation) j. Even if you care a
lot about speed, presumably you would not be interested in the log CPU
time measurements, but the comparison of alpha's. Since they are
stochastic, you would care about alpha_SBCL-alpha_LUAJIT or something
like that.

Given the small amount of data, you would have a huge posterior
uncertainty about these variables. You could overcome some of that
using a multilevel model for all the benchmarks, but even then, you
could only conclude that a language has "beaten" the other with
reasonable posterior probability when the disparity is very large.

> I have a love/hate relationship with the Lisp family of languages. I
> like the basic ideas, its kernel of operators, I love the homoiconic
> nature, I dig hierarchical paren-editing with proper tools, but it's
> hard to cope up with the idea that lesser tools often provide better
> performance out of ugly means. I mean, often Lisp compiled code is
> beaten by tools not actually compiling anything, just translating as
> much as they can into as close to C real loops as they can.

I don't see the problem --- you can always profile your code and
rewrite that 5% that is using up 80% of the CPU time in
micro-optimized CL, or C, or Fortran, whatever. So you get the
elegance and power of Lisp, and speed is always available on demand.

For example, Liam Healy's GSLL uses GSL (written in C), while my LLA
library uses LAPACK/ATLAS (written in Fortran). I wrote the bindings,
then from then on I could just ignore this part and concentrate on
programming the tricky parts in CL. So I don't share your concerns,
for me this is a non-issue.

Best,

Tamas

Hugh Aguilar

Nov 6, 2010, 3:07:34 PM
On Nov 6, 2:29 am, Pascal Costanza <p...@p-cos.net> wrote:
> On 06/11/2010 01:38, namekuseijin wrote:
>
> > On Nov 5, 6:04 pm, Pascal Costanza<p...@p-cos.net>  wrote:
> >>> good.  It gives me a warm feeling inside to know that mature CL
> >>> implementations are being beaten hard by a still quite experimental
> >>> Lua JIT compiler... :p
>
> >> It's not the first time in history that CL implementations get beaten by
> >> implementations of other languages in terms of performance. There must
> >> be something to Common Lisp that makes it worthwhile to use it in spite
> >> of this.
>
> > perseverance?

Code speed isn't a big issue for me. A modern computer can run a Lisp
program in a few seconds, compared to the same program written in 8088
assembly-language and taking a few hours back in the 1980s. Besides
that, if speed really is an issue, I wouldn't be using *any* dynamic-
OOP language for the job, so comparing the speed of dynamic-OOP
languages seems pointless to me. That is like comparing the speed of
pickup trucks, and ignoring the fact that their purpose is hauling
cargo.

According to the book, "Practical Common Lisp" (page 1, paragraph 1),
people use CL because: "you'll get more done, faster."

That is why I'm learning Lisp. I looked at Python, which makes the
same claim, but I decided that CL was a better choice because it is a
lot more powerful (it has macros, which I'm familiar with from Forth).
Also, I didn't like Python's philosophy that there is only one way to
do any job. That may be comforting for a novice (in the same way that
Christianity is comforting to children and child-like adults), but I
doubt that you'll find many Lispers claiming that there is only one
way to do any job and that due to divine inspiration they alone know
what that one true way is. That kind of thinking leads to an "End
Times" type of conclusion --- but I don't think any of us want the
world to come to an end, or computer science either. Lispers seem to
be a smarter crowd than you'll find in other language communities, but
they also are smart enough to know that they don't know everything ---
at least, that is what I'm hoping for (there may be exceptions).

As for Lua, I like it and intend to use it as an embedded scripting
language in C programs. Lua is completely different from CL though.
You can't embed CL inside of a C program because CL is too big. On the
other hand though, CL can be used for much larger programs than you
would really want to attempt in C/Lua. Those tables are pretty cool,
but they are somewhat of a Procrustean Bed, in the sense that they
aren't the best choice for every kind of data. As an analogy, C/Lua is
like an economy-hatchback that you use for carrying light loads, and
Lisp is like a full-size pickup truck that you use for carrying heavy
loads.

Tim Bradshaw

Nov 6, 2010, 3:48:42 PM
On 2010-11-05 15:54:22 +0000, namekuseijin said:

> good. It gives me a warm feeling inside to know that mature CL
> implementations are being beaten hard by a still quite experimental
> Lua JIT compiler... :p

I very much doubt these benchmarks mean anything at all.

Tamas K Papp

Nov 7, 2010, 4:21:36 AM
On Sat, 06 Nov 2010 12:07:34 -0700, Hugh Aguilar wrote:

> You can't embed CL inside of a C program because CL is too big. On the

AFAIK you can embed ECL in C/C++ programs. Their webpage has examples.

Best,

Tamas

Pascal J. Bourguignon

Nov 7, 2010, 5:14:31 AM
Hugh Aguilar <hughag...@yahoo.com> writes:

> You can't embed CL inside of a C program because CL is too big.

What a brainfart!

On my system, there are 65 libraries in /usr/lib that are bigger than
libecl! boost-regexp is bigger than libecl! (This is just a fucking
regexp library and it's bigger than a whole language including a
compiler, an interpreter, and a standard library). Or libxml2, which is
just a fucking serializer and deserializer for sexps!

If CL is too big to include in a C program, then you should just stop
writing C programs, because libc, which is the standard library for C
programs, is twice the size of a CL implementation!


-rwxr-xr-x 1 root root 1960912 Oct 22 11:15 /usr/lib/libecl.so.9.12.3*
-rw-r--r-- 1 root root 1994280 Sep 25 21:15 /usr/lib/libdb-4.5.a
-rw-r--r-- 2 root root 2051448 Sep 25 12:58 /usr/lib/libboost_regex-1_41.a
-rw-r--r-- 2 root root 2051448 Sep 25 12:58 /usr/lib/libboost_regex-mt-1_41.a
-rwxr-xr-x 1 root root 2065040 Sep 25 23:27 /usr/lib/libgsl.so.0.15.0*
-rwxr-xr-x 1 root root 2074480 Sep 26 00:23 /usr/lib/libosp.so.5.0.0*
-rw-r--r-- 2 root root 2115172 Sep 25 12:58 /usr/lib/libboost_unit_test_framework-1_41.a
-rw-r--r-- 2 root root 2115172 Sep 25 12:58 /usr/lib/libboost_unit_test_framework-mt-1_41.a
-rw-r--r-- 1 root root 2116278 Oct 24 05:28 /usr/lib/libxml2.a
-rw-r--r-- 1 root root 2146394 Sep 25 21:05 /usr/lib/librecode.a
-rw-r--r-- 1 root root 2183898 Sep 25 21:15 /usr/lib/libdb_cxx-4.5.a
-rw-r--r-- 2 root root 2187126 Sep 25 12:58 /usr/lib/libboost_math_tr1l-1_41.a
-rw-r--r-- 2 root root 2187126 Sep 25 12:58 /usr/lib/libboost_math_tr1l-mt-1_41.a
-rw-r--r-- 1 root root 2190672 Sep 25 12:58 /usr/lib/libboost_math_tr1-1_41.a
-rw-r--r-- 1 root root 2190672 Sep 25 12:58 /usr/lib/libboost_math_tr1-mt-1_41.a
-rwxr-xr-x 1 root root 2198464 Oct 22 03:23 /usr/lib/libpoppler.so.7.0.0*
-rw-r--r-- 1 root root 2200582 Sep 25 21:15 /usr/lib/libdb_java-4.5.a
-rw-r--r-- 2 root root 2229584 Sep 25 12:58 /usr/lib/libboost_math_tr1f-1_41.a
-rw-r--r-- 2 root root 2229584 Sep 25 12:58 /usr/lib/libboost_math_tr1f-mt-1_41.a
-rw-r--r-- 1 root root 2292584 Sep 25 05:12 /usr/lib/libdb-4.7.a
-rwxr-xr-x 1 root root 2390936 Nov 1 14:10 /usr/lib/libMagickCore.so.3.0.0*
-rw-r--r-- 1 root root 2408634 Sep 25 05:11 /usr/lib/libdb-4.8.a
-rw-r--r-- 1 root root 2490538 Sep 25 05:12 /usr/lib/libdb_cxx-4.7.a
-rw-r--r-- 1 root root 2520144 Sep 25 05:12 /usr/lib/libdb_java-4.7.a
-rwxr-xr-x 1 root root 2558432 Sep 26 00:24 /usr/lib/libostyle.so.0.0.1*
-rwxr-xr-x 1 root root 2567840 Oct 20 02:12 /usr/lib/libmzscheme3m-4.2.2.so*
-rwxr-xr-x 1 root root 2569096 Oct 24 06:33 /usr/lib/libosg.so.2.8.3*
-rw-r--r-- 1 root root 2622160 Sep 25 05:11 /usr/lib/libdb_cxx-4.8.a
-rw-r--r-- 1 root root 2651792 Sep 25 05:11 /usr/lib/libdb_java-4.8.a
-rw-r--r-- 1 root root 2721322 Oct 23 09:59 /usr/lib/libdns.a
-rwxr-xr-x 1 root root 2729024 Sep 25 22:29 /usr/lib/libkdecore.so.5.4.0*
-rw-r--r-- 1 root root 2786418 Sep 25 05:11 /usr/lib/libdb_stl-4.8.a
-rw-r--r-- 1 root root 2819722 Sep 25 17:40 /usr/lib/libclucene.a
-rw-r--r-- 1 root root 2828816 Oct 22 11:26 /usr/lib/libpdf.a
-rwxr-xr-x 1 root root 2878784 Oct 11 06:00 /usr/lib/libXm.so.4.0.3*
-rw-r--r-- 1 root root 2882180 Oct 22 08:59 /usr/lib/libpython2.6.a
-rwxr-xr-x 1 root root 2918016 Sep 25 22:29 /usr/lib/libkio.so.5.4.0*
-rwxr-xr-x 1 root root 2942944 Sep 25 03:35 /usr/lib/libvorbisenc.so.2.0.7*
-rw-r--r-- 1 root root 2959148 Oct 22 11:26 /usr/lib/libpdf_java.a
-rwxr-xr-x 1 root root 3027216 Sep 25 22:29 /usr/lib/libplasma.so.3.0.0*
-rw-r--r-- 1 root root 3040934 Sep 25 21:24 /usr/lib/libgtk.a
-rw-r--r-- 1 root root 3053534 Oct 22 09:01 /usr/lib/libpython3.1.a
-rwxr-xr-x 1 root root 3104176 Oct 22 09:59 /usr/lib/libpari.so.2.3.4*
-rw-r--r-- 1 root root 3223716 Oct 8 09:35 /usr/lib/libcrypto.a
-rw-r--r-- 1 root root 3265690 Sep 25 12:58 /usr/lib/libboost_wave-1_41.a
-rw-r--r-- 1 root root 3265690 Sep 25 12:58 /usr/lib/libboost_wave-mt-1_41.a
-rwxr-xr-x 1 root root 3311752 Sep 25 23:28 /usr/lib/libqalculate.so.5.0.0*
-rw-r--r-- 1 root root 3978808 Sep 25 23:27 /usr/lib/libgsl.a
-rwxr-xr-x 1 root root 4259976 Oct 24 06:25 /usr/lib/libgtk-x11-2.0.so.0.2000.1*
-rw-r--r-- 1 root root 4405634 Nov 1 14:05 /usr/lib/libc.a
-rwxr-xr-x 1 root root 4446328 Sep 25 22:29 /usr/lib/libkdeui.so.5.4.0*
-rw-r--r-- 1 root root 4533916 Oct 11 06:00 /usr/lib/libXm.a
-rwxr-xr-x 1 root root 4666216 Sep 25 21:55 /usr/lib/libgtkmm-2.4.so.1.1.0*
-rwxr-xr-x 1 root root 4820608 Oct 24 08:39 /usr/lib/libsmbclient.so.0*
-rwxr-xr-x 1 root root 5009016 Oct 24 08:39 /usr/lib/libnetapi.so.0*
-rw-r--r-- 1 root root 5338078 Sep 25 23:28 /usr/lib/libqalculate.a
-rwxr-xr-x 1 root root 6758936 Oct 24 05:58 /usr/lib/libavcodec.so.52.72.2*
-rw-r--r-- 1 root root 7071428 Oct 22 09:48 /usr/lib/libcln.a
-rwxr-xr-x 1 root root 7536312 Sep 25 22:29 /usr/lib/libkhtml.so.5.4.0*
-rwxr-xr-x 1 root root 7970544 Oct 23 07:33 /usr/lib/libgs.so.8.71*
-rwxr-xr-x 1 root root 8142598 Oct 27 19:38 /usr/lib/libcuda.so.256.44*
-rw-r--r-- 1 root root 8587306 Oct 24 05:58 /usr/lib/libavcodec.a
-rwxr-xr-x 1 root root 11296848 Oct 22 05:24 /usr/lib/libCg.so*
-rwxr-xr-x 1 root root 14930560 Sep 25 17:41 /usr/lib/libicudata.so.44.1*
-rwxr-xr-x 1 root root 15886888 Oct 27 19:38 /usr/lib/libnvidia-compiler.so.256.44*
-rwxr-xr-x 1 root root 18637080 Sep 25 22:19 /usr/lib/libwebkit-1.0.so.2.17.5*
-rwxr-xr-x 1 root root 24999088 Oct 27 19:38 /usr/lib/libnvidia-glcore.so.256.44*

namekuseijin

unread,
Nov 7, 2010, 9:05:59 AM11/7/10
to
On 7 nov, 08:14, p...@informatimago.com (Pascal J. Bourguignon) wrote:

> Hugh Aguilar <hughaguila...@yahoo.com> writes:
> > You can't embed CL inside of a C program because CL is too big.
>
> What a brainfart!
>
> On my system, there are 65 libraries in /usr/lib that are bigger than
> libecl!  boost-regexp is bigger than libecl!  (This is just a fucking
> regexp library and it's bigger than a whole language including a
> compiler, an interpreter, and a standard library).  Or libxml2, which is
> just a fucking serializer and deserializer for sexps!
>
> If CL is too big to include in a C program, then you should just stop
> writing C programs, because libc, which is the standard library for C
> programs, is twice the size of a CL implementation!
>
> -rwxr-xr-x 1 root root  1960912 Oct 22 11:15 /usr/lib/libecl.so.9.12.3*
> -rw-r--r-- 2 root root  2051448 Sep 25 12:58 /usr/lib/libboost_regex-1_41.a
> -rw-r--r-- 1 root root  2116278 Oct 24 05:28 /usr/lib/libxml2.a
> -rwxr-xr-x 1 root root  2567840 Oct 20 02:12 /usr/lib/libmzscheme3m-4.2.2.so*
> -rw-r--r-- 1 root root  2882180 Oct 22 08:59 /usr/lib/libpython2.6.a
> -rw-r--r-- 1 root root  4405634 Nov  1 14:05 /usr/lib/libc.a

PWNED

libecl is smaller than libpython too. But I suspect that's because
libecl is just some interpreter for CL code, not a lib of CL code.
There must be some /usr/share/ecl or something for CL libs code,
right?

besides, regex is not part of CL... it is part of python, though...

Mark Wooding

unread,
Nov 7, 2010, 9:42:08 AM11/7/10
to
p...@informatimago.com (Pascal J. Bourguignon) writes:

> On my system, there are 65 libraries in /usr/lib that are bigger than
> libecl! boost-regexp is bigger than libecl! (This is just a fucking
> regexp library and it's bigger than a whole language including a
> compiler, an interpreter, and a standard library).

Be fair. ECL's compiler is separately loaded from
/usr/lib/ecl-VERSION/cmp.fas. (It's still rather worrying, though.)

-- [mdw]

André Thieme

unread,
Nov 7, 2010, 10:17:56 AM11/7/10
to
On 04.11.2010 13:16, Pascal Costanza wrote:
> On 04/11/2010 12:52, Nathan wrote:

>> If you want your Lisp programs to perform, look into Clojure, it's a
>> Lisp dialect that runs in the JVM; fully bi-directional Java
>> compatible. I haven't benchmarked Common Lisp vs Clojure myself, but
>> I've read a few pieces by other people who said they did it. According
>> to their statistics at least, Clojure will run about 1/10 the speed of
>> Java which means it beats existing implementations of Common Lisp by a
>> factor of 100-1000 times.
>
> I would like to see a more serious comparison, taking different features
> of each language into account. While in general, I wouldn't be surprised
> if Clojure did better performance-wise, there are certain aspects in the
> Clojure design where I find it hard to believe that it has a good chance
> at beating Common Lisp. Say, plain functions in Clojure are probably
> more efficient than in Common Lisp, but Clojure's method dispatch
> mechanism is too general compared to that of CLOS, which in turn was
> carefully designed to be both expressive _and_ efficient.

The number 100 to 1000 is totally bogus. On many problem sets an
equivalent Clojure implementation, running on the JVM with the right
parameters set, may outperform a CL version, though not by orders of
magnitude. Plus there are still problems where (for example) SBCL will
perform better.

About the function calls:
Yes, they are pretty efficient in Clojure. And you are right that the
method dispatch for multimethods is less efficient and that some CLs
are probably better at this.
I microbenchmarked fns vs. multimethods and got roughly a factor of 10.
Since Clojure 1.2, though, there are protocols, which are a special
case of multimethods: they dispatch on the type of the first argument.
This is the most typical case, and a reason why single-dispatch
languages are able to get anything done, as one often wants to call a
method for a specific kind of object.
Protocol-method calls vs. ordinary fn calls come in at about a factor
of 2.


> Java libraries have to center around the very limiting single
> inheritance, single dispatch object-centric model, which usually makes
> them too complicated.

In my experience single dispatch works well for most cases.
For example, consider how many ANSI CL functions could be implemented
with single dispatch: it is not something like 3%, but more like, say,
90%.

Pascal Costanza

unread,
Nov 7, 2010, 10:37:42 AM11/7/10
to

CLOS is specified in such a way that it doesn't matter whether single
dispatch occurs more often than multiple dispatch or not. I think this
is better, because you don't have to worry about this anymore, and you
don't need to switch between different concepts for performance
reasons.
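
To make that concrete, here is a minimal illustrative sketch (names
made up, not anyone's production code): in CLOS the same
DEFGENERIC/DEFMETHOD machinery covers both cases, and a
"single-dispatch" method is simply a method that happens to specialize
only its first parameter.

(defgeneric collide (a b))

(defclass ship () ())
(defclass asteroid () ())

;; Multiple dispatch: specializes on both arguments.
(defmethod collide ((a ship) (b asteroid)) :boom)

;; "Single" dispatch: specializes only on the first argument.
(defmethod collide ((a ship) b) :scrape)

;; (collide (make-instance 'ship) (make-instance 'asteroid)) => :BOOM
;; (collide (make-instance 'ship) 42)                        => :SCRAPE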

Juanjo

unread,
Nov 7, 2010, 5:44:41 PM11/7/10
to

Be fair too. ECL has TWO compilers. A bytecodes compiler that is in
libecl.so and a separate Lisp-to-C compiler that is in cmp.fas :-)

Mark Wooding

unread,
Nov 7, 2010, 5:53:17 PM11/7/10
to
Juanjo <juanjose.g...@googlemail.com> writes:

> Be fair too. ECL has TWO compilers. A bytecodes compiler that is in
> libecl.so and a separate Lisp-to-C compiler that is in cmp.fas :-)

It's a fair cop. You're indeed right.

-- [mdw]

Pascal J. Bourguignon

unread,
Nov 7, 2010, 8:18:39 PM11/7/10
to
namekuseijin <nameku...@gmail.com> writes:

Only 972 macros and functions, in addition to the compiler (granted,
targeting C, not x86), the interpreter, and the debugger.

> There must be some /usr/share/ecl or something for CL libs code,
> right?

Not really.

[pjb@kuiper :0.0 lispm.dyndns.org]$ ls -lh /usr/lib/ecl-9.12.3/
total 4.2M
drwxr-xr-x 3 root root 4.0K Oct 22 11:15 ./
drwxr-xr-x 148 root root 120K Nov 7 11:18 ../
-rw-r--r-- 1 root root 753 Oct 22 11:15 BUILD-STAMP
-rwxr-xr-x 1 root root 156K Oct 22 11:15 asdf.fas*
-rw-r--r-- 1 root root 64 Oct 22 11:15 bytecmp.asd
-rwxr-xr-x 1 root root 19K Oct 22 11:15 bytecmp.fas*
-rw-r--r-- 1 root root 56 Oct 22 11:15 clx.asd
-rwxr-xr-x 1 root root 1.7M Oct 22 11:15 clx.fas*
-rw-r--r-- 1 root root 56 Oct 22 11:15 cmp.asd
-rwxr-xr-x 1 root root 653K Oct 22 11:15 cmp.fas*
-rw-r--r-- 1 root root 68 Oct 22 11:15 defsystem.asd
-rwxr-xr-x 1 root root 172K Oct 22 11:15 defsystem.fas*
-rw-r--r-- 1 root root 80K Oct 22 11:15 dpp
-rw-r--r-- 1 root root 701K Oct 22 11:15 ecl_min
drwxr-xr-x 2 root root 4.0K Oct 22 11:15 encodings/
-rw-r--r-- 1 root root 185K Oct 22 11:15 help.doc
-rw-r--r-- 1 root root 64 Oct 22 11:15 profile.asd
-rwxr-xr-x 1 root root 39K Oct 22 11:15 profile.fas*
-rw-r--r-- 1 root root 54 Oct 22 11:15 rt.asd
-rwxr-xr-x 1 root root 31K Oct 22 11:15 rt.fas*
-rw-r--r-- 1 root root 78 Oct 22 11:15 sb-bsd-sockets.asd
-rwxr-xr-x 1 root root 5.8K Oct 22 11:15 sb-bsd-sockets.fas*
-rw-r--r-- 1 root root 72 Oct 22 11:15 serve-event.asd
-rwxr-xr-x 1 root root 19K Oct 22 11:15 serve-event.fas*
-rw-r--r-- 1 root root 64 Oct 22 11:15 sockets.asd
-rwxr-xr-x 1 root root 87K Oct 22 11:15 sockets.fas*
-rw-r--r-- 1 root root 71K Oct 22 11:15 sysfun.lsp
-rw-r--r-- 1 root root 100K Oct 22 11:15 ucd.dat

ecl_min is a duplicate of libecl; it doesn't use it:

[pjb@kuiper :0.0 lispm.dyndns.org]$ ldd /usr/lib/ecl-9.12.3/ecl_min
linux-vdso.so.1 => (0x00007ffffd74b000)
libgmp.so.3 => /usr/lib/libgmp.so.3 (0x00007f86a15f6000)
libgc.so.1 => /usr/lib/libgc.so.1 (0x00007f86a139e000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00007f86a1182000)
libdl.so.2 => /lib/libdl.so.2 (0x00007f86a0f7e000)
libm.so.6 => /lib/libm.so.6 (0x00007f86a0cfd000)
libc.so.6 => /lib/libc.so.6 (0x00007f86a09a1000)
/lib64/ld-linux-x86-64.so.2 (0x00007f86a184e000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007f86a078a000)

Most of the size in /usr/lib/ecl is the unicode character encoding
descriptions, and clx and other add-on systems.

> besides, regex is not part of CL... it is part of python, though...

What is part of a language, and what is not? That's why discussing
sizes is ridiculous. CLISP provides the REGEXP package, which is an
FFI to regex(3), and you can write a reader macro to integrate regular
expressions written as /a(b|c)*d/ into CL.
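
For illustration, a minimal sketch of such a reader macro (assuming
CL-PPCRE as the matcher; CLISP's REGEXP package or any other matcher
would do as well). It installs #/.../ rather than a bare /.../ so that
the division function keeps working:

(defun regexp-reader (stream subchar arg)
  (declare (ignore subchar arg))
  ;; Collect the characters of the pattern up to the closing slash.
  (let ((pattern (with-output-to-string (out)
                   (loop for c = (read-char stream t nil t)
                         until (char= c #\/)
                         do (write-char c out)))))
    ;; Expand into a one-argument matcher closure.
    `(lambda (string)
       (cl-ppcre:scan ,pattern string))))

(set-dispatch-macro-character #\# #\/ #'regexp-reader)

;; (funcall #/a(b|c)*d/ "xabbcd") => 1, 6, ...   (a match)
;; (funcall #/a(b|c)*d/ "xyz")    => NIL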

Pascal J. Bourguignon

unread,
Nov 7, 2010, 8:19:02 PM11/7/10
to
m...@distorted.org.uk (Mark Wooding) writes:

Oops, right! I forgot cmp.fas.

Robert Maas, http://tinyurl.com/uh3t

unread,
Nov 8, 2010, 3:13:19 AM11/8/10
to
> From: Hugh Aguilar <hughaguila...@yahoo.com>
> Can anybody tell me if CL (or PLT Scheme) has an associative
> arrays implementation?

CL has three different associative-array implementations built in:
- Assoc lists
- Property lists
- Hashtables

> I want an implementation that supports doing an ordered traversal

The first two of those listed above have a well-defined traversal order.
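
For reference, a tiny illustration (keys and values made up):

(defparameter *alist* '((:a . 1) (:b . 2) (:c . 3)))   ; assoc list
(defparameter *plist* '(:a 1 :b 2 :c 3))               ; property list
(defparameter *hash*  (let ((h (make-hash-table)))
                        (setf (gethash :a h) 1
                              (gethash :b h) 2
                              (gethash :c h) 3)
                        h))

;; Alists and plists are traversed in the order their entries appear:
;; (mapcar #'car *alist*)                        => (:A :B :C)
;; (loop for (k) on *plist* by #'cddr collect k) => (:A :B :C)
;; MAPHASH over *hash* visits entries in no specified order.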

> and which also supports filtering of regions. I want to be able
> to copy everything within a region into another association, or
> everything outside of a region into another association,

Since we're talking about a linear object, by "region" do you
really mean segment (from some starting element to some ending
element, where each element is a key-value pair)?

> and also be able to merge two associations together.

By "merge" do you mean "append" or "collate" or merge per some
order-relation that each linear object satisfied before the merge?

So far it sounds like you want an assoc list. You can use the SORT
function to rearrange it per some order-relation, and after two
assoc lists have been sorted you can use the MERGE function to
merge them per that same order-relation.
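
A minimal sketch of that approach (names made up, keys sorted
numerically): keep the alist sorted, filter a region with
REMOVE-IF-NOT, and merge two sorted alists with MERGE.

(defun make-sorted-alist (pairs)
  (sort (copy-list pairs) #'< :key #'car))

(defun alist-region (alist lo hi)
  "All entries whose key lies in [LO, HI]."
  (remove-if-not (lambda (k) (<= lo k hi)) alist :key #'car))

(defun alist-merge (a b)
  ;; MERGE may destroy its arguments, hence the copies.
  (merge 'list (copy-list a) (copy-list b) #'< :key #'car))

;; (defparameter *a*
;;   (make-sorted-alist '((1.1 . one) (2.5 . three) (1.9 . two))))
;; (alist-region *a* 1.0 2.0)
;;   => ((1.1 . ONE) (1.9 . TWO))    ; bounds need not be present keys
;; (alist-merge *a* (make-sorted-alist '((0.5 . zero))))
;;   => ((0.5 . ZERO) (1.1 . ONE) (1.9 . TWO) (2.5 . THREE))

For small associations this is fine; for log(n) region queries you
would still want a balanced tree.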

> The reason why I ask, is that I wrote such an associative arrays
> implementation (using LLRB trees) in ANS-Forth. I want to have the
> same capability in Lisp that I have in Forth. I can port my code from
> Forth to Lisp (after I get better at Lisp), but I would be interested
> in knowing if somebody else has already done something similar.

Google search shows LLRB is a type of self-balancing binary search
tree. Such trees have extra properties that you didn't list above,
specifically that you can insert/delete/merge/split in log(n) time
while sharing all structure (between original and modified tree)
except the log(n) path from root to the point of change. You didn't
say whether you *need* that feature, in which case assoc lists
wouldn't satisfy your needs, or not, in which case assoc lists
would be good enough. If you typically have less than a thousand
items in each associative array, the speed difference is probably
too small to care about.

By the way, a few years ago I implemented my own SBBST, which
balances on the basis of total number of nodes rather than maximum
depth of each sub-tree, and allows you to find the nth element in
log(n) time, which some of the other SBBSTs don't allow.
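
The find-the-nth-element-in-log(n) trick usually works by caching a
subtree size in every node, so selection recurses into only one child.
A hedged sketch of just the lookup (not the original code; insertion
and rebalancing, which must keep the SIZE slots up to date, are
omitted):

(defstruct node key value left right (size 1))

(defun node-count (n) (if n (node-size n) 0))

(defun nth-node (tree n)
  "Return the N-th node (0-based, in key order) of TREE."
  (let ((left-count (node-count (node-left tree))))
    (cond ((< n left-count) (nth-node (node-left tree) n))
          ((= n left-count) tree)
          (t (nth-node (node-right tree) (- n left-count 1))))))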

Robert Maas, http://tinyurl.com/uh3t

unread,
Nov 8, 2010, 3:25:32 AM11/8/10
to
> From: p...@informatimago.com (Pascal J. Bourguignon)
> And also:
> http://www.informatimago.com/develop/lisp/com/informatimago/common-lisp/llrbtree.lisp
> ;;;;AUTHORS
> ;;;; <PJB> Pascal J. Bourguignon <p...@informatimago.com>

That's weird! All these years I thought Pascal J. Bourguignon was
just one person. How many people are you? Are the formal you and the
informal you two different people? Did you ever get in a disagreement
with yourself about how to
do some part of the program, and have a knock-down drag-out fight
with yourself? Which of you-all won the fight?

Pascal J. Bourguignon

unread,
Nov 8, 2010, 4:56:51 AM11/8/10
to
seeWeb...@rem.intarweb.org (Robert Maas, http://tinyurl.com/uh3t)
writes:

It's even worse: in some .asd files (generated automatically), we see
that the authors are Pascal J. Bourguignon and Pascal Bourguignon.

I'm just an optimist, and I dream of eventually sprouting into a
multi-national corporation with thousands of Lisp programmers helping
me.

Hugh Aguilar

unread,
Nov 8, 2010, 3:31:00 PM11/8/10
to
On Nov 7, 3:14 am, p...@informatimago.com (Pascal J. Bourguignon)
wrote:

> Hugh Aguilar <hughaguila...@yahoo.com> writes:
> > You can't embed CL inside of a C program because CL is too big.
>
> What a brainfart!
>
> On my system, there are 65 libraries in /usr/lib that are bigger than
> libecl!  boost-regexp is bigger than libecl!  (This is just a fucking
> regexp library and it's bigger than a whole language including a
> compiler, an interpreter, and a standard library).  Or libxml2, which is
> just a fucking serializer and deserializer for sexps!
>
> If CL is too big to include in a C program, then you should just stop
> writing C programs, because libc, which is the standard library for C
> programs, is twice the size of a CL implementation!

Considering that I'm an admitted newbie to CL, I think you could have
disagreed with me without all of the vulgarity.

A big part of why I'm getting away from comp.lang.forth is the
vulgarity over there. I have been told that my code is "crap" and that
it "sucks" --- that my use of binary trees rather than hash tables is
"stupid" and "incompetent" --- that I am "diseased" --- etc. If I
complain that this kind of language is inappropriate for a programming
forum, Elizabeth Rather accuses me of being "homophobic and
intolerant:"
http://groups.google.com/group/comp.lang.forth/browse_thread/thread/c37b473ec4da66f1

I'm expecting you Lispers to be a classier crowd than the Forthers.
Don't they have etiquette classes at MIT? :-)

Tim Bradshaw

unread,
Nov 8, 2010, 3:39:10 PM11/8/10