Against the Tide of Common Lisp

Feb 18, 1987, 2:14:01 AM

In <1...@lmi-angel.UUCP>, Bob Krajewski writes:

>>When it appeared last year, the results were 3:1 *against* CL, mostly
>>via Mail.
>What exactly are you trying to imply here ? What were the circumstances of
>rejection ?

Simple! Last time I started this discussion, most of the comments
received were private, not public, and most of them were of the form "I
don't like CL much either"!

>>I have so much heartburn with SETF as a "primitive" that I'll save it
>>for another day.
>Well, I'd like to hear them. It would be interesting to see what your
>objections are.

Real Soon Now :-)

>>7. >MEMBER used EQ instead of EQUAL.
>>Mea culpa, it uses EQL!
>Nitpicking aside, this is hardly arbitrary -- remember that since Common
>Lisp is a new dialect, there was only a secondary consideration in being
>compatible with other Lisp dialects.

Foolish me, I believed the book!

>In non-specialized implementations, this is less likely to be true if
>not many conventions of a ``virtual Lisp machine'' are honored in compiled

At what point should a compiler go for full out machine dependent
optimization, as opposed to honoring a "virtual LISP machine"?
(Not that any such VLM is defined by Common LISP anyway).

>>It is also one of LISP's major features that anonymous functions
>>get generated as non-compiled functions and must be interpreted.

I was referring to CONS'ed functions created at run time. Sorry if
that wasn't clear...

>OK, back to the crusade...
>>It will end up too expensive. To be accepted, LISP must be able to run
>>on general purpose, multi-user computers.
>It takes a consultant to come up with conclusions like this, and
>requirements like this...
>>There must be a greater understanding of the problems, and benefits
>>of Common LISP, particularly by the 'naive' would-be user.

It takes a LISP Machine Vendor to ignore that large a market :-)

Seriously, how are sales of LISP machines to the commercial sector?
What percentage of sales are to Universities and DARPA/DoD funded
R&D? How many LISP machines have been sold to banks? To
Insurance Companies? To aerospace companies?

The simple truth is that the perception that LISP is big and slow
is extremely common. It also happens to be *true*. The ability
of LISP implementors to dream up features has outstripped
improvements in hardware since before I
wrote my first function in LISP, finally resulting in a storage
management scheme where 50% of memory is always unused :-)

>>Selling it as the 'ultimate' LISP standard is dangerous and
>Who said that ? Common Lisp is not a step forward in terms of Lisp
>``features.'' By reining in the spec and getting diverse implementors to
>agree on something, I can write a program on a Sun, and have it work on
>(say) a Lisp Machine (Zetalisp), a Silicon Graphics box (Franz Common Lisp),
>a DEC-20 (Hedrick's Common Lisp), a VAX (NIL or DEC Standard Lisp), and so on.

Boy, are you a dreamer! Last year, when I made the rounds of non-SPICE
derived Common LISPs, I managed to break every one within 10 minutes!
And if you can get anything to run in NIL, please let me know how!

And of course, anything that gets developed on any of the LISP
machines is guaranteed to have non-CL code in it!

I'm not clear on one thing; when you say "reining in the spec", are
you referring to Common LISP as 'being a reined-in spec', or
that Common LISP needs to be 'reined in'?

>At least now there is a Lisp which is no more repugnant
>than C (actually, a lot less, in my freely admitted biased opinion) as a
>portable programming language

This is stretching the term "portable"; one assumes that portable
means "easily" transported :-) Code written in Common LISP
may be portable, but the language itself sure isn't!!!

>Robert P. Krajewski

Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
USENET: jja...@well.UUCP


Feb 18, 1987, 2:18:59 AM

In <26...@mcc-pp.UUCP>, Patrick McGehearty writes:

>... On compilers vs interpreters
>As a systems and performance measurement type, I have always been
>concerned with how fast my programs run. One of the critical
>measures of success of OS code is how fast it is perceived to be.

We not only share a common background, but a common *concern*;
that of performance! In fact, this is one of my biggest gripes
about Common LISP; in exchange for very little, if any, improvement in
functionality, it requires an enormous increase in CPU and memory
resources.
It is hard to believe, but combining all those options for
LAMBDA lists, allowing the 'initform' of DEFVAR to not be
executed until the variable value is needed, implicit lexical
closures, etc. result in a dramatic increase in CPU and memory
requirements, both directly and indirectly.

The worst part is that you *cannot* get around them! You
can elect to use SETQ instead of SETF, but you can't elect
to use MEMQ instead of MEMBER!
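Just to be concrete: the closest you can come to the old MEMQ is to
go *through* the keyword machinery, e.g.

  (member x l)             ; defaults to the EQL test
  (member x l :test #'eq)  ; the old MEMQ, keyword parsing and all

The generality is mandatory; the overhead comes along for the ride.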

>Also, old rumors about programs behaving differently in compiled
>and interpreted mode made me distrust the interpreter as a naive user.

Well, Common LISP is supposed to be the same. Most experienced
LISP programmers will tell you that even with the differences
between compilers and interpreters, it was seldom a problem.

>Breakpoints and function tracing are still available as well
>as the old, old reliable of print statements. Indeed, when at
>a breakpoint, I can rewrite and recompile any function that I am
>not currently within..
>I claim that this approach to Lisp development is followed without lossage
>by many of the new arrivals to the Lisp world.

You can't know what you are missing if you've never had it!

Interpreted LISP can provide a debugging and development
environment that is far beyond that of break, trace and print!

There is a tremendous amount of seminal work, primarily in and
from InterLISP, that is only possible in an interpreter. Such
things as automatic error correction, being able to change
a function that you are currently 'in', being able to alter and
modify the flow and results of a lengthy computation, etc.

I've worked in situations where Integration and Test literally
can take hours or days; the ability to change something that
is in progress without having to restart from scratch would be an
enormous asset in such situations.

I recommend looking at some of the capabilities of InterLISP
(or even UCI LISP) to understand what is potentially being lost.

>...on Common Lisp environments
>I recognize that Lisp machines are too expensive for most developers,
>but workstations such as Sun now have Common Lisp compilers
>(from Kyoto, Franz, and Lucid at a minimum), with runtime
>environment development continuing. I claim that reasonable
>Common Lisp development environments are available on $15,000 workstations
>and multiuser systems such as Vaxes and Sequents today, and will be
>available soon on $5000 PCs (based on high performance, large address
>space chips such as M68020 or Intel 386)

To quote a system manager, "Common LISP is a great tool for turning
a VAX 780 into a single user machine!"

To quote Charles Hedrick,
"Personally I would have wished for CL to be smaller. As the manager
of a number of timesharing systems, I cringe at a Lisp where each user
takes 8 MB of swap space"

The problem is that you have been hoodwinked into believing that you
can't have similar capabilities and functionality without a LISP
machine or dedicated workstation. (Do you think the LISP
machine vendors would have been happy with a small core of
primitives that would run on anything, and with successively
complex layers for those who want them?)

The French produce a LISP called Le_LISP; it is "standard" across
VAXes, MS/PC-DOS, Macintosh and various 68000 workstations.
There is also a VLSI implementation in progress. They began
by defining a very simple "virtual machine", with a great deal
of thought given to how people actually write LISP code (as
opposed to the CL committee, whose basic approach was how
people *might* want to write code :-).

According to Dr. Lee Rice of Marquette (DEC Professional, March
1986), Le_LISP on a VAX 780 supported an additional
37 students for an AI class with no noticeable degradation even
during peak periods!

The Rice article gave a simple FIBONACCI benchmark; interpreted,
LE_LISP on a VAX 780 took 4.25 seconds, compared to 8.9 for
VAX InterLISP, 16.1 FRANZ and 29 seconds on a Symbolics. "Optimized
Compiled" gave 0.12 for LE_LISP and 0.15 for Symbolics.
(BTW, anybody having Gabriel benchmarks for LE_LISP on
other than Mac or PC, please let me know).

The Macintosh version will run on a 512K MAC, and will execute
the BROWSE benchmark! I know of no other serious commercial
implementation that will run BROWSE in 512K. It runs TAK on a
512K MAC in 62 seconds, *interpreted* (and remember that the
MAC has a ridiculous amount of overhead).
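For those without the Gabriel suite at hand, TAK is the
call-intensive benchmark:

  (defun tak (x y z)
    (if (not (< y x))
        z
        (tak (tak (1- x) y z)
             (tak (1- y) z x)
             (tak (1- z) x y))))

The standard run is (TAK 18 12 6), which returns 7; it does nothing
but function calls, simple arithmetic and comparisons.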

So take your daily dose of salt!

>...on portability
>Implementors of major commercial programs want as wide a potential
>market for their product as possible. Thus, they chose to implement
>in the best available PORTABLE environment, rather than the best
>environment. Common Lisp appears the best choice.
>Researchers without the requirement for portability
>may chose other environments such as Scheme or InterLisp.

Most implementors of commercial Expert Systems have *abandoned*
Common LISP, primarily due to the abysmal performance.

Portability is certainly desirable in a language, but the high
cost of CL far outweighs its 'portability'. Common syntax
and semantics are wonderful, but the ability to run in a cost
effective manner is also important!

>...on commonality
>I was shocked to discover that MacLisp and InterLisp are significantly
>more different than C and Pascal. I am surprised that they
>are commonly lumped together as the same language. Scheme is
>yet farther away both in syntax and philosophy. All are in the
>same family just as C and Pascal are both related to Algol60,
>but beyond that...

You will also notice that InterLISP is *one* language, whereas
MacLISP is the root from which almost all of the other dialects

(I also don't consider SCHEME to be a LISP dialect; I consider it
a separate language, with similar syntax).

You will note that I have not defended any particular dialect of
LISP. My main complaint is that Common LISP is not only not
an improvement on other dialects, but is a major step backward in
language design.
Common LISP is enormously wasteful of CPU and memory,
and ignores nearly all of the lessons learned throughout the years
in the field of software engineering.

>Someone should write a book describing the "definitive" core of the language,
>followed by reasonable macros and library functions for the rest of
>the language.

There is no "definitive core" at this time (nor is there a conceptual
one).

If you examine the history of LISP, it started with a very
small, well-defined set of primitives and grew explosively. But
as large as it grew, it was still defined in terms of 'smaller'
operations. See the NEW UCI LISP Manual and the InterLISP
manual, or, if you can get your hands on one, an old MACLISP manual.

It is hard to believe that a language with LISP's historical roots
would result in something as broadly defined as Common LISP.

Hopefully, the ANSI Committee will improve on the situation, but
I doubt that the fundamental design flaws can be eliminated.

>-- Patrick McGehearty,
> representing at least one view of the growing community of Lisp users.

Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
USENET: jja...@well.UUCP

P.S. You do know that the real motivation behind the development of
LISP machines was to have something to run EMACS on? :-)


Feb 18, 1987, 2:22:08 AM

In <>, Rob MacLachlan writes:
>>Subject: Re: Against the Tide of Common LISP
>>Date: 13 Feb 87 19:00:00 GMT
>>Nf-From: uicsrd.CSRD.UIUC.EDU!sharma Feb 13 13:00:00 1987
>> There is a pretty good critique of Common Lisp in :
>> "A Critique of Common Lisp" by Rodney Brooks and Richard Gabriel
>>(Stanford). It appeared in the proceedings of the 1984 ACM Symposium on
>>Lisp and Functional Programming.
>Yeah, this paper is reasonably coherent, but should be taken with a grain
>of salt. Some of the arguments in it are semi-bogus in that they present a
>problem, but don't present simple, commonly used solutions that largely
>solve the problem.

>For example, in one section complaining about the inefficiency of the
>complex calling mechanisms and their use in language primitives, they
>basically construct a straw man out of SUBST (or some similar function).
>What they do is observe that SUBST is required to take keywords in Common
>Lisp and that the obvious implementation of SUBST is recursive. From this
>they leap to the conclusion that a Common Lisp SUBST must gather all the
>incoming keys into a rest arg and then use APPLY to pass the keys into
>recursive invocations. If this was really necessary, then it would be a
>major efficiency problem. Fortunately, this is easily fixed by writing an
>internal SUBST that takes normal positional arguments, and then making the
>SUBST function call this with the parsed keywords. It is also easy to make
>the compiler call the internal function directly, totally avoiding keyword overhead.
>Now presumably the authors knew that this overhead could be avoided by a
>modest increment in complexity, but this isn't at all obvious to readers not
>familiar with Common Lisp implementation techniques.

Ok, let's try another example. Let's assume that SUBST is contained in
a loop requiring 1,000,000 executions. What "largely" solves this problem?
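For reference, the two-level arrangement Rob describes would look
roughly like this (the names are illustrative, not from any actual
implementation):

  ;; Internal worker: plain positional arguments, so the recursion
  ;; never touches the keyword machinery.
  (defun %subst (new old tree test)
    (cond ((funcall test old tree) new)
          ((atom tree) tree)
          (t (cons (%subst new old (car tree) test)
                   (%subst new old (cdr tree) test)))))

  ;; Public entry point: parses the keywords exactly once.
  (defun my-subst (new old tree &key (test #'eql))
    (%subst new old tree test))

Fine as far as it goes; but the keyword parse still happens on every
single one of my 1,000,000 calls to the *entry* function.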

And to prove my case, let me quote from a REAL EXPERT:

"Common LISP has very powerful argument passing mechanisms.
>Unfortunately, two of the most powerful mechanisms, rest arguments
and keyword arguments, have a serious performance penalty
in Spice LISP.


Neither problem is serious unless thousands of calls are being made
to the function in question..."

- Spice LISP User's Guide, Chapter 5, Efficiency by Rob MacLachlan.

Of course the real problem with Common LISP is that the user has
no choice; there are no alternate primitives which don't involve the
keyword overhead, so the experienced user must instead rely on the
implementor for efficiency. There is no guarantee, or even good estimate,
of how the efficiency will vary from machine to machine, or implementation
to implementation, thus offsetting some of the great claims of
portability. (What runs well on one implementation may
run terribly on another).

>I also point out that, despite any misgivings voiced in the paper, Gabriel
>is a major player in Lucid Inc., whose sole product is a Common Lisp
>implementation. Evidently he believes that it is a practical, real-world
>programming language.

A man who combines good aesthetic judgement with good business
judgement. Sell 'em what they want, not what they need!
After all, "nobody ever went broke by underestimating
the taste of the American consumer" :-)

> Rob


Feb 18, 1987, 2:36:17 AM

In <>, Rob MacLachlan writes:

>Since some people may not have understood my claims for the desirability of
>a standard not specifying everything, I will elaborate.
>Consider the DOTIMES macro. In CMU Common Lisp,
> (dotimes (i n) body) ==>
> (do ((i 0 (1+ i))
> (<gensym> n))
> ((>= i <gensym>))
> body)

>Now, if Common Lisp required this implementation, it would imply that
>setting "I" within the body is a meaningful thing to do. Instead, Common
>Lisp simply specifies in English what DOTIMES does, and then goes on to say
>that the result of setting the loop index is undefined. This allows the
>implementation to assume that the loop index is not set, possibly increasing
>efficiency.

There are two possibilities here: 1. The implementation allows
the "i" to be set, as in some other, older languages, or 2.
the disclaimer can be made in English. "Result is undefined"
is a valid specification; not specifying things at all is a different
matter.

>The same sort of issues are present in the "destructive" functions, possibly
>to a greater degree. If an implementation was specified for NREVERSE, then
>users could count on the argument being destructively modified in a
>particular way. This is bad, since the user doesn't need to know how the
>argument is destroyed as long as he properly uses the result, and requiring
>the argument to be modified in a particular way would have strong
>interactions with highly implementation-dependent properties such as storage
>management disciplines. For example, in some implementations it might be
>most efficient to make the "destructive" operations identical to the normal
>operations, and not modify the argument at all.
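In fairness, the only safe idiom under such a spec is well known:
capture the result and forget the argument, e.g.

  (setq l (nreverse l))  ; L is now reversed; what happened to the
                         ; old cells is anybody's guess

Rely on anything else and your 'portable' code is portable only until
the storage management changes underneath you.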

I see; as long as the result of (RPLACA X Y) is any CONS cell with
the CAR set to Y and the CDR the same as before, then this is
acceptable?

>In any case, the tremendous complexity of Common Lisp would make it very
>difficult to specify it all in a formal way such as that used in the ADA
>language specification.

See the InterLISP manual, and others.

> When reading the Common Lisp manual, you must
>assume that whenever the meaning of a construct is not explicitly specified,
>it is undefined, and therefore erroneous.

I've seen enough specs in my time to know that making those kinds
of assumptions is deadly. Further, that is nearly the
definition of a bad specification!

>Completeness of specification certainly doesn't seem to predict language
>success. Consider Algol 68 and C.

C isn't successful??????


Feb 21, 1987, 10:09:25 PM

In <12...@Shasta.STANFORD.EDU>, Andy Freeman writes:

>I've forgotten why JJ is so down on macros; doesn't "real" lisp have them?

I'm not down on macros; I am down on SETF.

In a nutshell, SETF is essentially
a primitive, i.e. there are no corresponding operations for
many of its features, so, as a macro, it becomes excessively
expensive, particularly for arrays. I also don't believe that 'primitives'
should accept anything other than a fixed number of arguments.

Both of these aspects should be reserved for a "higher level".
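To make the array case concrete: a SETF of an array element is not
itself an assignment; it macroexpands into whatever setting machinery
the implementation provides, something roughly like

  (setf (aref a i) v)
  ; ==> (system-set-aref v a i)  ; illustrative name only

whereas SETQ can compile directly into a store. The cost of that
expansion is whatever the implementor felt like making it.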


Feb 21, 1987, 10:12:24 PM

In <1...@lmi-angel.UUCP>, Bob Krajewski writes:

>In other words, why would anybody do

> (let ((lose-p nil))
> (save-the-world))

>unless LOSE-P was a special variable which evidently affected the behavior
>of the SAVE-THE-WORLD function ? If the variable LOSE-P were not special,
>the compiler would probably warn about the useless binding.

Let me change the example a little bit; let's make it

  (let ((lose-p nil))
    (not_sure_yet))

Possible reasons for this:

1. NOT_SURE_YET isn't completely defined yet.

2. It is anticipated that NOT_SURE_YET might, in the future, need to
change the value of LOSE-P.

3. It is a primary function of a package, such as an editor,
with LOSE-P the key global variable. Think of calling a structure
editor with the form to be edited.

Remember that the domain under discussion is that of debugging
*other* people's code, not reading people's code for interest
or fun. If they were writing reasonable code, I wouldn't have
to be debugging it :-)


Feb 21, 1987, 10:14:49 PM

In <695...@hpfclp.HP.COM>, Chan Benson writes:

>>B.S! All the compiled code for SET need do is check that the first argument
>>be lexically equivalent to a lexically apparent variable and change
>>the appropriate cell, stack location, or whatever. Easy for a compiler
>>to do!

>I don't see how it's possible to do this (excuse my potential ignorance).

Let's take a simple example:

  (DEFUN FOO (X Y Z)
    (SET X (CONS Y Z)))

The compiler would have to generate code that would effectively
be equal to

  (COND ((EQ X 'X) (SETQ X (CONS Y Z)))
        ((EQ X 'Y) (SETQ Y (CONS Y Z)))
        ((EQ X 'Z) (SETQ Z (CONS Y Z)))
        (T (SET X (CONS Y Z))))

i.e. a hidden 'macro-expansion' at compile time. This should put
to rest assertions that it "can't be done"; it's actually trivial.
(Whether it's desirable or not is another discussion).


Feb 21, 1987, 10:17:41 PM

In <12...@mmm.UUCP>, Bill Rouner writes, asking for help in
understanding the following:
>(defun foo ()
> (let ((a '(this value set in the let)))
> (eval (car '(a)))))

Page 321 actually provides a nice, coherent explanation: "the
form is evaluated in the *current* dynamic environment and
a *null* lexical environment."

Aren't the new scoping rules fun? (Disclaimer: I don't know
Bill Rouner, and his msg was not set up to support any of
my claims about Common LISP.)
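For contrast, make A special and the same form works, since EVAL
*does* run in the current dynamic environment:

  (defvar a '(top level value))  ; proclaims A special

  (defun foo ()
    (let ((a '(this value set in the let)))  ; now a dynamic binding
      (eval (car '(a)))))                    ; => (THIS VALUE SET IN THE LET)

One DEFVAR and the answer changes completely.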

Will Clinger, in <10...@tekchips.TEK.COM>, writes a very nice
explanation of the problem, but then goes on to confuse things
with a SCHEMEr's viewpoint of the world (valid but different). To wit:

>By the way, lists are not expressions either. The general principle
>is that data structures and code are entirely disjoint. Apologists
>for Lisp have historically done their best to say that data structures
>and code are the same, and Common Lisp retains some really bizarre
>things to make that seem so, thus the confusion continues.

One of the key issues that I consider distinguishes "Real LISP" from
Common LISP is indeed the strong commitment to the equivalence of
data (meaning list structures) and code. I consider this
idea one of the foundations of "Real LISP", and Common LISP's intent
is vague at best. InterLISP, for example,
still makes a declaration of this philosophy in the manual, and it is still one
of the keystones in the teaching of LISP (although whether it
should be when teaching Common LISP is open to question).

SCHEME does not share this philosophy.

Will is correct in asserting that Common LISP does do some very
bizarre things to try and keep this; the lack of a syntactical distinction
between variables and symbols is absurd, as Bill's example points out.

Note that there is a difference between "code and data being
equivalent, i.e. list structures" and "code being data, i.e. functional
objects".

If variables and symbols are to be truly disjoint,
then there should be a *syntactic* distinction.

If code and data are to be truly disjoint (non-equivalent), then
Common LISP's syntax becomes a real loser; there's no good reason
not to support better syntactic constructs, and not much reason
to inflict Lots of Insidiously Stacked Parends on the world!

Feb 23, 1987, 6:54:47 AM
I found this highly relevant document about the Common Lisp design process
while cleaning my directory. It was written by Skef Wholey, who implemented
a large part of Spice Lisp while a full-time undergraduate student. The
Common Lisp spec was being developed at the same time that Spice Lisp was
being written. This was a cause for no little aggravation for Skef, since he
often had to rewrite code several times.


Common LISP
(to the tune of Dylan's "Maggie's Farm")

I ain't gonna hack on Common LISP no more,
I ain't gonna hack on Common LISP no more.
See, it was spawned from MACLISP,
And then they threw in Scheme,
And now everybody's made it
"just a little" short of clean.
The language specification is insane.

I ain't gonna hack on Guy Steele's LISP no more,
I ain't gonna hack on Guy Steele's LISP no more.
When you mail him a question,
And then wait for a reply,
Well you can sit for weeks now,
And begin to think he's died.
His MAIL.TXT is one great big black hole.

I ain't gonna hack on Fahlman's LISP no more,
I ain't gonna hack on Fahlman's LISP no more.
Well he gives you an X-1,
And he puts you on a Perq,
And he asks you with a grin,
"Hey son, how much can you work?"
If I reboot one more time I'll lose my brain.

I ain't gonna hack on Dave Moon's LISP no more,
I ain't gonna hack on Dave Moon's LISP no more.
We had a simple SETF,
But it choked on LDB.
So Lunar Dave done fixed it:
Go look at page eighty three.
The Gang of Five they didn't take a poll.

I ain't gonna hack on Common LISP no more,
I ain't gonna hack on Common LISP no more.
With its tons of sequence functions,
And its lexical scoping,
I've now begun to like it,
But the users are moping:
"Without EXPLODE my life is full of pain."

(harmonica and fade)


Feb 24, 1987, 2:49:48 AM
In article <26...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:
>Of course the real problem with Common LISP is that the user has
>no choice; there are no alternate primitives which don't involve the
>keyword overhead, so the experienced user must instead rely on the
>implementor for efficiency.

And if there were, you would be complaining about the fact that the
language provides two ways of doing these things (the primitive and
keyworded versions), and that it makes the language even bigger.

>There is no guarantee, or even good estimate,
>how the efficiency will vary from machine to machine, or implementation
>to implementation, thus offsetting some of the great claims of
>portability. (What runs well on one implementation may
>run terribly on another).

I hope you aren't intending to imply that only Common Lisp is subject
to this problem. It is true of all languages for which there are
multiple implementations, and true of most other standardized things
(for example, VT102's implement X3.64 more slowly than VT200's).

>>I also point out that, despite any misgivings voiced in the paper, Gabriel
>>is a major player in Lucid Inc., whose sole product is a Common Lisp
>>implementation. Evidently he believes that it is a practical, real-world
>>programming language.

I'd like to point out that Gabriel is one of the most vocal members of
X3J13 (the Common Lisp standardization committee) regarding the issues
of simplification. For example, he is one of the people arguing for
the merging of the function and value cells, in the style of Scheme
and EuLisp. Evidently he would rather work WITH the Common Lisp
community than AGAINST it in order to move it in the directions he
would prefer.
Barry Margolin
ARPA: barmar@MIT-Multics
UUCP: ..!genrad!mit-eddie!barmar


Feb 24, 1987, 3:17:52 AM
In article <26...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:

Describing a way for SET to assign to lexical variables:

>The compiler would have to generate code that would effectively
>be equal to
>(COND ((EQ X 'X) (SETQ X (CONS Y Z)))
> ((EQ X 'Y) (SETQ Y (CONS Y Z)))
> ((EQ X 'Z) (SETQ Z (CONS Y Z)))

That is the wrong thing, though. Consider FOO being used in the
body of another function:

(DEFUN FOO-CALLER ()
  (LET ((X 3))
    (FOO 'X 1 2)))

The X that is passed to FOO is in the lexical scope of FOO-CALLER, so
it is the one that one would expect to be assigned. One of the goals
of lexical scoping is that it should not make any difference to the
caller what the names of locals are in a function; if a lexical
variable is renamed, the only places you have to look for references
to the variable is within the lexical scope of that variable. The
parameters to FOO are not lexical variables because they can be
referenced outside the function.

I will admit that there are uses for this type of thing; for example,
in an object-oriented programming system implemented using lexical
variables, one might have a SET-INSTANCE-VARIABLE function that takes
the name of an instance variable. However, this would probably differ
from FOO because it wouldn't have the T clause, since it is ONLY
interested in lexical variables. I doubt that there is a use for the
generality of the FOO example, in which SET will set either a lexical
or special.
