
Against the Tide of Common LISP


jja...@well.uucp

Feb 7, 1987, 9:54:46 PM

"Against the Tide of Common LISP"

Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
Bix ID: jeffjacobs, CIS Userid 75076,2603

Reproduction by electronic means is permitted, provided that it is not
for commercial gain, and that this copyright notice remains intact."

The following are from various correspondences and notes on Common LISP:

Since you were brave enough to ask about Common Lisp, sit down for my answer:

I think CL is the WORST thing that could possibly happen to LISP. In fact, I
consider it a language different from "true" LISP. CL has everything in the
world in it, usually in 3 different forms and 4 different flavors, with 6
different options. I think the only thing they left out was FEXPRs...

It is obviously intended to be a "compileable" language, not an interpreted
language. By nature it will be very slow; somebody would have to spend quite a
bit of time and $ to make a "fast" interpreted version (say for a VAX). The
grotesque complexity and plethora of data types presents incredible problems to
the developer; it was several years before Golden Hill had lexical scoping,
and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
It just eventually eats up its entire VAX/VMS virtual memory and dies...

Further, there are inconsistencies and flat out errors in the book. So many
things are left vague, poorly defined and "to the developer".

The entire INTERLISP arena is left out of the range of compatibility.

As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
Common LISP. Once again we hear that LISP is "too slow" for such things, when
a large part of it is the use of Common LISP as opposed to a "faster" form
(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
should have left the &aux, etc as macros). Every operation in CL is very
expensive in terms of CPU...


______________________________________________________________

I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
allow both dynamic and lexical makes the performance even worse. To me,
lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
part of the language semantics. I can accept SCHEME, where you always know
that it's lexical, but CL could drive you crazy (especially if you were
testing/debugging other people's code).

This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
duper complex system that will do everything I can build. Who cares if it's
incredibly difficult and costly to build and understand, and that most of the
features will only get used because "they are there", driving up the CPU usage
and making the whole development process more costly...

BTW, I think the book is poorly written and assumes a great deal of knowledge
about LISP and MACLISP in particular. I wouldn't give it to ANYBODY to learn
LISP.

...Not only does he assume you know a lot about LISP, he assumes you know a LOT
about half the other existing implementations to boot.

I am inclined to doubt that it is possible to write a good introductory text on
Common LISP; you d**n near need to understand ALL of it before you can start
to use it. There is nowhere near the basic underlying set of primitives (or
philosophy) to start with, as there is in Real LISP (RL vs CL). You'll notice
that there is almost NO defining of functions using LISP in the Steele book.
Yet one of the best things about Real LISP is the precise definition of a
function!

Even when using Common LISP (NIL), I deliberately use a subset. I'm always
amazed when I pick up the book; I always find something that makes me curse.
Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
think, the author's name escapes me). The author uses SETF instead of SETQ,
stating that SETF will eventually replace SETQ and SET (!!). Thinking that
this was an error, I checked in Steele; lo and behold, 'tis true (sort of).
In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
of page 94! And it isn't even clear; if the variable is lexically bound AND
dynamically bound, which gets changed (or is it BOTH)? Who knows?
Where is the definitive reference?

"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
an error, (b) if it's not an error, why isn't there a definition using the
appropriate & keywords? Consistency? Generating an "insufficient args"
error seems more consistent to me...

Care to explain this to a "beginner"? Not to mention that SETF is a
MACRO, by definition, which will always take longer to evaluate.

Then try explaining why SET only affects dynamic bindings (a most glaring
error, in my opinion). Again, how many years of training, understanding
and textbooks are suddenly rendered obsolete? How many books say
(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all
but two...
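
To make the change concrete, here is a throwaway example of my own (not
from any book); the names are mine:

(defvar *d* 1)              ; DEFVAR proclaims *D* special

(defun old-way ()
  (set (quote *d*) 2)       ; same effect as (SETQ *D* 2), since *D* is dynamic
  *d*)                      ; => 2

(defun new-way ()
  (let ((x 10))             ; X is lexical; no DEFVAR, no SPECIAL declaration
    (set (quote x) 99)      ; changes the SYMBOL-VALUE of X, *not* this X
    x))                     ; => 10; the old textbook equivalence is gone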

Then try to introduce them to DEFVAR, which may or may not get
evaluated who knows when! (And which often isn't implemented correctly,
e.g. in Franz Common and Golden Hill).

I don't think you can get 40% of the points in 4 readings! I'm constantly
amazed at what I find in there, and it's always the opposite of Real LISP!

MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
used EQ instead of EQUAL. I only checked about 4 books and manuals (UCILSP,
INTERLISP, IQLISP and a couple of others). David correctly pointed out that
CL defaults to EQ unless you use the keyword syntax. So years of training,
learning and ingrained habit go out the window. How many bugs
will this introduce. MEMQ wasn't good enough?
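
For concreteness, here is the kind of call that changes meaning (an
example of my own; as later replies point out, the default test is
actually EQL rather than EQ, but the point stands):

(member '(a) '((a) (b)))                 ; NIL under the default test
(member '(a) '((a) (b)) :test #'equal)   ; => ((A) (B)), the behavior I expected
(member 'b '(a b c))                     ; => (B C) either way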

MEMBER isn't the only case...

While I'm at it, let me pick on the book itself a little. Even though CL
translates lower case to upper case, every instance of LISP names, code,
examples, etc are in **>> lower <<** case and lighter type. In fact,
everything that is not descriptive text is in lighter or smaller type.
It's VERY difficult to read just from the point of eye strain; instead of
the names and definitions leaping out to embed themselves in your brain,
you have to squint and strain, producing a nice avoidance response.
Not to mention that you can't skim it worth beans.

Although it's probably hopeless, I wish more implementors would take a stand
against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
is more than most would-be implementors can resist. Even I occasionally find
myself thinking "how would I implement that"; fortunately I then ask myself
WHY?

Jeffrey M. Jacobs <UCILSP>
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: jja...@well.UUCP

yera...@rpics.uucp

Feb 8, 1987, 11:14:15 AM
In article <25...@well.UUCP>, jja...@well.UUCP (Jeffrey Jacobs) writes:
/
/ "Against the Tide of Common LISP"
/
/ Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
/ P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
/ Bix ID: jeffjacobs, CIS Userid 75076,2603
/
/ Reproduction by electronic means is permitted, provided that it is not
/ for commercial gain, and that this copyright notice remains intact."


Do I really need to keep that there? :-)



/
/ The following are from various correspondences and notes on Common LISP:
/
/ Since you were brave enough to ask about Common Lisp, sit down for my answer:
/
/ I think CL is the WORST thing that could possibly happen to LISP. In fact, I
/ consider it a language different from "true" LISP. CL has everything in the
/ world in it, usually in 3 different forms and 4 different flavors, with 6
/ different options. I think the only thing they left out was FEXPRs...


Sorry, no. Flavors are not part of the CL definition. You can add them
yourself if you want. :-)



/ It is obviously intended to be a "compileable" language, not an interpreted
/ language. By nature it will be very slow; somebody would have to spend quite a
/ bit of time and $ to make a "fast" interpreted version (say for a VAX). The
/ grotesque complexity and plethora of data types presents incredible problems to
/ the developer; it was several years before Golden Hill had lexical scoping,
/ and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
/ It just eventually eats up it's entire VAX/VMS virtual memory and dies...

Agreed- but garbage collectors are no fun to write. The DEC product
GC's a full three megabytes in less than ten seconds- provided you have
enough physical memory. Not my (or CL's) fault if someone decides not to
complete the implementation.

/
/ Further, there are inconsistencies and flat out errors in the book. So many
/ things are left vague, poorly defined and "to the developer".
/
/ The entire INTERLISP arena is left out of the range of compatability.


True - there is no "spaghetti stack". But I've never really found
a good use for such a stack. I've never had to deal with a problem
that sat up and said to me "Hey, dummy, use the spaghetti stack!"

/
/ As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
/ Common LISP. Once again we hear that LISP is "too slow" for such things, when
/ a large part of it is the use of Common LISP as opposed to a "faster" form
/ (i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
/ should have left the &aux, etc as macros). Every operation in CL is very
/ expensive in terms of CPU...
/

That depends on what model you use to interpret/compile your lisp source
into. There's no reason why your compiler/interpreter can't remember what
few variables are lexically scoped and handle them accordingly, keeping
the rest shallowly bound in a hash table with a fixup stack.

/
/ ______________________________________________________________
/
/ I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
/ allow both dynamic and lexical makes the performance even worse. To me,
/ lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
/ part of the language semantics. I can accept SCHEME, where you always know
/ that it's lexical, but CL could drive you crazy (especially if you were
/ testing/debugging other people's code).

The rule I use is simple - if it appears in the argument list, it's
lexical. If not, it's dynamic. Then just look for PROGV's and that tells
you whether it's a binding that will get undone someday or if it
is a global.
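
On a scrap of code the rule reads like this (a sketch of mine, nobody's
production code):

(defun f (x)              ; X appears in the argument list: read it as lexical
  (* x *scale*))          ; *SCALE* is referenced freely: assume it is special

(progv '(*scale*) '(10)   ; PROGV makes a dynamic binding that gets undone
  (f 3))                  ; => 30 while the binding is in effect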

I also ignore all the warnings from the compiler "Foo has been assumed
special". (to the extent that I worry only that I didn't blow it and
write "foo" when I meant "foobar")

/
/ This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
/ duper complex system that will do everything I can build. Who cares if it's
/ incredibly difficult and costly to build and understand, and that most of the
/ features will only get used because "they are there", driving up the cpu useage
/ and making the whole development process more costly...
/

Well, I can say that the features concerning general sequences are
something I've had to kludge for myself in most other lisps, and I am
very glad to see them in CL. They make writing an optimizing compiler
much easier (that is, writing an optimizing compiler IN lisp, not necessarily
FOR lisp)

So what if they're only macros? They save me the trouble of having to
write such a thing myself, they are probably tested better than I would
bother to test my own creations, and maybe they're even somewhat
optimized (Hi Walter and Paul! :-) ).

I admit, when I sit down to do something truly bizarre, I do need a copy
of the book with me- but I really can write cleaner (that is, more
understandable and easier-to-test-and-debug) code if I use the
built-ins.

/ BTW, I think the book is poorly written and assume a great deal of knowledge
/ about LISP and MACLISP in particular. I wouldn't give it to ANYBODY to learn
/ LISP
/

True. Franz Opus 36 is better for people starting out.

/ ...Not only does he assume you know a lot about LISP, he assume you know a LOT
/ about half the other existing implementations to boot.
/
/ I am inclined to doubt that it is possible to write a good introductory text on
/ Common LISP; you d**n near need to understand ALL of it before you can start
/ to use it. There is nowhere near the basic underlying set of primitives (or
/ philosophy) to start with, as there is in Real LISP (RL vs CL). You'll notice
/ that there is almost NO defining of functions using LISP in the Steele book.
/ Yet one of the best things about Real LISP is the precise definition of a
/ function!
/
/ Even when using Common LISP (NIL), I deliberately use a subset. I'm always
/ amazed when I pick up the book; I always find something that makes me curse.
/ Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
/ think, the author's name escapes me). The author uses SETF instead of SETQ,
/ stating that SETF will eventually replace SETQ and SET (!!). Thinking that
/ this was an error, I checked in Steel; lo and behold, tis true (sort of).
/ In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
/ of page 94! And it isn't even clear; if the variable is lexically bound AND
/ dynamically bound, which gets changed (or is it BOTH)? Who knows?
/ Where is the definitive reference?
/

Yeah, I ignore SETF pretty much unless I'm making an array reference. Then
I pretend I'm typing AREF and then fix the syntax.

/ "For consistency, it is legal to write (SETF)"; (a) in my book, that should be
/ an error, (b) if it's not an error, why isn't there a definition using the
/ approprate & keywords? Consistency? Generating an "insufficient args"
/ error seems more consistent to me...
/
/ Care to explain this to a "beginner"? Not to mention that SETF is a
/ MACRO, by definition, which will always take longer to evaluate.
/
/ Then try explaining why SET only affects dynamic bindings (a most glaring
/ error, in my opinion). Again, how many years of training, understanding
/ and textbooks are suddenly rendered obsolete? How many books say
/ (SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all
/ but two...
/
/ Then try to introduce them to DEFVAR, which may or may not get
/ evaluated who knows when! (And which aren't implemented correctly
/ very often, e.g. Franz Common and Golden Hill).

Why bother DEFVARing? It says right in CLTL that it won't affect correctness,
just efficiency.

/
/ I don't think you can get 40% of the points in 4 readings! I'm constantly
/ amazed at what I find in there, and it's always the opposite of Real LISP!
/
/ MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
/ used EQ instead of EQUAL. I only checked about 4 books and manuals (UCILSP,
/ INTERLISP, IQLISP and a couple of others). David correctly pointed out that
/ CL defaults to EQ unless you use the keyword syntax. So years of training,
/ learning and ingrained habit go out the window. How many bugs
/ will this introduce. MEMQ wasn't good enough?
/

Different lisp designers have different ideas about where it's right to
EQ, EQUAL, etc. Matter of personal taste.

/ MEMBER isn't the only case...
/
/ While I'm at it, let me pick on the book itself a little. Even though CL
/ translates lower case to upper case, every instance of LISP names, code,
/ examples, etc are in **>> lower <<** case and lighter type. In fact,
/ everything that is not descriptive text is in lighter or smaller type.
/ It's VERY difficult to read just from the point of eye strain; instead of
/ the names and definitions leaping out to embed themselves in your brain,
/ you have to squint and strain, producing a nice avoidance response.
/ Not to mention that you can't skim it worth beans.
/

True. I wish I could get a copy of CLTL in TeXable form- and then modify
the function-font macro to be about 1.2 times the size of descriptive text.

/ Although it's probably hopeless, I wish more implementors would take a stand
/ against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
/ is more than most would-be implementors can resist. Even I occasionally find
/ myself thinking "how would I implement that"; fortunately I then ask myself
/ WHY?
/
/ Jeffrey M. Jacobs <UCILSP>
/ CONSART Systems Inc.
/ Technical and Managerial Consultants
/ P.O. Box 3016, Manhattan Beach, CA 90266
/ (213)376-3802
/ CIS:75076,2603
/ BIX:jeffjacobs
/ USENET: jja...@well.UUCP

Common lisp does have an "interior logic" that you can get into. It makes
sense after a while. Things like the scoping DO make a LOT of sense when
you start seriously considering a compiler. I know my code runs faster
with lexical than with dynamic. Remember, when you scope lexically and
compile, you have an absolute displacement onto the stack. This is
a fast thing. If you dynamically scope, you have to call a routine
to go into a hash table and find out where whatever-it-is is kept.
This is a not-so-fast thing for a compiled language. On calling
and return, you have to fix up the hash symbol table for each and
every dynamically-scoped argument. This is a very-not-fast thing.

I agree that this argument goes away when you interpret code.

If you wonder why I'm so worried about why the code should be compilable,
efficiency considerations, etc, it's because I'm writing a CL compiler
and so it DOES matter to me. (no, the compiler's not done yet, no
beg-to-post mail please)

-Bill Yerazunis

r...@lmi-angel.uucp

Feb 9, 1987, 5:38:13 PM
In article <> jja...@well.UUCP (Jeffrey Jacobs) writes:
>
> "Against the Tide of Common LISP"
>
>Further, there are inconsistencies and flat out errors in the book. So many
>things are left vague, poorly defined and "to the developer".

This is sadly true, though at least CL has a spec that isn't simply a summary
of what the first implementation did, unlike Interlisp or Maclisp.

>The entire INTERLISP arena is left out of the range of compatability.

Most of this can be done with a compatibility package, except for the
Interlisp ``feature'' about all arguments being optional. (This can be done
on the Lisp Machine with lambda-macros, but that's only for MIT-derived
machines.)

>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to

>allow both dynamic and lexical makes the performance even worse. To me,

>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent

>part of the language semantics.

I cannot agree with this at all; if somebody can't implement lexical scoping
in an efficient manner, they're doing something WRONG. At compile time, no time
is spent ``deciding'' whether a variable is lexical or special (it's
lexically apparent from the code, right ?). In most cases, lexical
variables can go on the stack or in registers, which IS efficient. Special
variable references go through symbols, at least in a straightforward
implementation. That can be pretty efficient, too.

>I can accept SCHEME, where you always know

>that it's lexical, but CL could drive you crazy (especially if you were

>testing/debugging other people's code).

Huh ? Whether or not a variable is lexical can be determined by looking at
its lexical context (practically an axiom, eh ?). So if it's being used
freely, you can assume it's special.

>This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super

>duper complex system that will do everything I can build. Who cares if it's

>incredibly difficult and costly to build and understand, and that most of the

>features will only get used because "they are there", driving up the cpu useage

>and making the whole development process more costly...

Well, maybe having a function like MAP (takes a result type, maps over ANY
combination of sequences) is a pain to implement, but the fact there is
quite a bit of baggage in the generic sequence functions shouldn't slow down
parts of the system that don't use it. The CORE of Common Lisp, which is
lexical scoping, 22 special forms, some data types, and
evaluation/declaration rules, is not slow at all. It is not as elegant as
Scheme, true, but there is certainly a manageable set of primitives. Quite a
bit of Common Lisp can be implemented in itself.
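
Just to show what that looks like in practice, a couple of throwaway
examples of my own:

(map 'vector #'+ '(1 2 3) #(10 20 30))   ; => #(11 22 33)
(map 'list #'char-upcase "foo")          ; => (#\F #\O #\O)
(remove-if #'oddp #(1 2 3 4))            ; => #(2 4); the result type follows
                                         ;    the input sequence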

>BTW, I think the book is poorly written and assume a great deal of knowledge

>about LISP and MACLISP in particular. I wouldn't give it to ANYBODY to learn

>LISP

If you're talking about CLtL (Steele), that's true, but it's not meant to
teach [Common] Lisp anyway.

[More stuff about trying to learn about Lisp (in general) from CLtL. Sort
of like trying to learn English from the Oxford Unabridged Dictionary.]

>...The author uses SETF instead of SETQ,


>stating that SETF will eventually replace SETQ and SET (!!).

This is silly. SETQ is very ingrained in Lisp, though ``theoretically''
it's not needed anymore. The author was drawing a conclusion not based on
the way people actually use Lisp. The reason why SETF works on symbols
(turning into SETQ) is that macros which are advertised to use ``places''
(expressions that give values and can be written into) don't have to check
for the simple case themselves -- it's just the logical way for SETF to
work.

>Thinking that this was an error,

What ?

>I checked in Steel; lo and behold, tis true (sort of).

>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom


>of page 94! And it isn't even clear; if the variable is lexically bound AND

>dynamically bound, which gets changed (or is it BOTH)? Who knows?

>Where is the definitive reference?

It follows the rules for variable scoping, so it follows the same rules that
SETQ and MULTIPLE-VALUE-SETQ do. The reason why this is not made explicit
is that the author expects (and rightly so) that if a language feature uses
a basic language concept (like setting variables), it will follow the rules
described for that concept, which were built up early in the book. In this
case, those rules are in the sections on variables (and scoping) and on the
page before the one you mentioned, which discussed the generalized variable
concept (not the
generalized ``symbol naming a particular variable which is stored specially
or lexically'' concept).

>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be

>an error, (b) if it's not an error, why isn't there a definition using the

>approprate & keywords? Consistency? Generating an "insufficient args"

>error seems more consistent to me...

Well, (SETF) does nothing. You probably wouldn't write this, but again, a
macro would find it useful. Should (LIST) signal an error too ?

>Care to explain this to a "beginner"? Not to mention that SETF is a

>MACRO, by definition, which will always take longer to evaluate.

Since you're a beginner, by your own admission, why do you think that a form
which is a macro call will be noticeably more expensive (in the interpreter,
the compiled case can't ever be slower) ? There are ways to optimize
macroexpansion, you know. Also, anybody can implement SETF as a special
form as long as they hide the fact from the user.
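
To see why the compiled case is a non-issue, look at what SETF typically
expands into (the exact expansions below are implementation-dependent;
they are only what one implementation might produce):

(macroexpand-1 '(setf x 3))
;; => (SETQ X 3), i.e. exactly the code you would have written by hand

(macroexpand-1 '(setf (car x) 3))
;; => something like (PROGN (RPLACA X 3) 3); the form varies, but once
;;    compiled it is ordinary open code, with no macro left to expand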

>Then try explaining why SET only affects dynamic bindings (a most glaring

>error, in my opinion). Again, how many years of training, understanding

>and textbooks are suddenly rendered obsolete? How many books say

>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all

>but two...

Once you acknowledge the existence of lexical scoping, then SET only makes
sense on special variables, because lexically scoped variables can be stored
in ways that (1) don't depend on the symbols that name them (2) aren't
accessible dynamically from the callee. SET is a FUNCTION that operates on
SYMBOLS, not variables.

Much of the problem is due to the fact that many textbooks on Lisp before
1982 or so (as opposed to a Scheme derivative with lexical scoping) assume
an all-special variable implementation. This is fast becoming a minority
for serious users of Lisp. So (SETQ x value) is equivalent to (SET 'x
value) in old Lisps, but even then, you're treading on thin ground. The old
Lisp Machine implementation went like this: all variables were special in
the interpreter, but in the compiler you had shallow binding and local
variables lived in the stack and were not accessible at all via SET or
SYMBOL-VALUE (called SYMEVAL in Lisp Machine Lisp). The SAME piece of CODE
behaved differently depending on whether it OR its callers OR its callees
were interpreted or compiled. I think this is true of Maclisp and maybe of
Franz.

>I don't think you can get 40% of the points in 4 readings! I'm constantly

>amazed at what I find in there, and it's always the opposite of Real LISP!

Ah, see, now maybe CL suffers from the Swiss Army Knife syndrome, but by
using the word ``Real'' you obviously have a few prejudices of your own.
(Oh, by the way, they reversed the arguments to CONS, ha ha...)

>MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER

>used EQ instead of EQUAL.

It's actually EQL...

>How many bugs will this introduce.

It won't introduce bugs into new code written by people who read the manual
and understand the interface and semantics of MEMBER. Your (legitimate)
obstacle is porting ``traditional Lisp'' code to Common Lisp. The
experience at LMI is that you use the package system to apparently redefine
functions which have the same names as different Common Lisp functions. It
is a familiar technique for me because Lisp Machine Lisp had quite a few
name conflicts with Common Lisp:

----------------------------------------
(make-package 'real-lisp)

(in-package 'real-lisp)

(shadow '(member assoc rassoc delete )) ; etc...

(export '(memq member))

;;; If you're a speed freak, change this to a macro, or hope the compiler
;;; can handle inline functions, or that the implementation can call
;;; functions quickly, or use ZL:DEFSUBST on the Lisp machine...
(defun member (x list)
  (lisp:member x list :test #'equal))

(defun memq (x list)
  (lisp:member x list :test #'eq))
----------------------------------------

Now you can move things into the REAL-LISP package. If you're more
ambitious you could make REAL-LISP a package that actually had all the right
symbols in it itself (as opposed to inheriting them) and exported them, and
then other packages could USE it.

>Although it's probably hopeless, I wish more implementors would take a stand

>against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"

>is more than most would-be implementors can resist. Even I occasionally find

>myself thinking "how would I implement that"; fortunately I then ask myself

>WHY?

Well, the main winning alternative is even further away from your Real Lisp
than Common Lisp is: Scheme, or T, which can pretty much be turned into a
systems programming language.
--
Robert P. Krajewski
Internet/MIT: R...@MC.LCS.MIT.EDU
UUCP: ...{cca,harvard,mit-eddie}!lmi-angel!rpk

pre...@ccvaxa.uucp

Feb 10, 1987, 11:12:00 AM

jja...@well.UUCP:

> "Against the Tide of Common LISP"
> ...

> MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
> used EQ instead of EQUAL. I only checked about 4 books and manuals (UCILSP,
> INTERLISP, IQLISP and a couple of others). David correctly pointed out that
> CL defaults to EQ unless you use the keyword syntax. So years of training,
> learning and ingrained habit go out the window. How many bugs
> will this introduce. MEMQ wasn't good enough?
----------
This error of fact (MEMBER defaults to EQL in CL, not to EQ) is just
one of many things that got bashed on when this was posted before
(6-9 months ago). It's often useful to have a diatribe posted to make
us think about our preconceptions and reconsider our biases, but one
would appreciate it if someone could come up with a NEW diatribe rather
than one which has already been around the block...
[I could be wrong; it could be that it was posted to the CL mailing
list and that this list didn't see it; in either case the author
should have reviewed it in the light of the discussion.]

--
scott preece
gould/csd - urbana
uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa: preece@gswd-vms

jja...@well.uucp

Feb 10, 1987, 10:16:00 PM

In <7...@rpics.RPI.EDU>, Bill Yerazunis writes:

>Do I really need to keep that there? :-)

Only if you reproduce the whole thing somewhere else :-)

>Why bother DEFVARing? It says right in CLTL that it won't affect correctness,
>just efficiency.

It says right in CLTL "DEFVAR is the recommended way to declare the
use of a special variable".

I contend that this does affect correctness (and is also a good reason to
refer to the manual even when doing *simple* things). :-)
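
A three-line demonstration of why it is a correctness issue and not just
an efficiency issue (my own example):

(defvar *x* 1)            ; proclaims *X* special

(defun show () *x*)

(let ((*x* 2))            ; this LET now makes a dynamic binding...
  (show))                 ; => 2; without the DEFVAR the LET binding would be
                          ; lexical and SHOW would not see it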

>Common lisp does have an "interior logic" that you can get into. It makes
>sense after a while. Things like the scoping DO make a LOT of sense when
>you start seriously considering a compiler.

Some of it makes sense, but not *good* sense :-)

The issue I raise is not "lexical vs dynamic"; it's the godawful mess that CL
uses!

(As a general rule of language design I agree that lexical is better;
dynamic scoping for LISP is both a personal prejudice and a
performance issue).

> I know my code runs faster
>with lexical than with dynamic. Remember, when you scope lexically and
>compile, you have an absolute displacement onto the stack. This is
>a fast thing.

>If you dynamically scope, you have to call a routine
>to go into a hash table and find out where whatever-it-is is kept.
>This is a not-so-fast thing for a compiled language. On calling
>and return, you have to fix up the hash symbol table for each and
>every dynamically-scoped argument. This is a very-not-fast thing

Say what???????????

The "value cell" is normally statically located at a known address!!!
No need to perform hash table lookup at all!!! All references are by
address.

Access time may possibly be *faster*, i.e. MOV ADDR, dest instead of
MOV INDEX(SP), dest, depending on CPU architecture!

Simplistically, dynamic binding becomes:

PUSH #SPEC_CELL_ADDR ; save address for later restoral
PUSH SPEC_CELL_ADDR ; save current value
MOV new_value, SPEC_CELL_ADDR ; set value

restoring becomes simply

POP R1 ; get address of special cell, R1 assumed to be a reg.
POP (R1) ; restore value.

Jeffrey M. Jacobs


CONSART Systems Inc.
Technical and Managerial Consultants

P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802

CIS:75076,2603
BIX:jeffjacobs
USENET: well!jjacobs

r...@spice.cs.cmu.edu.uucp

Feb 11, 1987, 6:41:44 AM
For the record, I am not replying to this message in hopes of convincing
MR. Jacobs of anything, since he has shown himself to be beyond reason in
previous discussions. I simply want to point out that almost nothing he
says is true, and the few true things he says are irrelevant.

>
>I think CL is the WORST thing that could possibly happen to LISP. In fact, I
>consider it a language different from "true" LISP. CL has everything in the
>world in it, usually in 3 different forms and 4 different flavors, with 6
>different options. I think the only thing they left out was FEXPRs...

I presume that you have some dialect in mind which is "true" lisp, but all
the Lisps I have ever used have had a great deal of duplication of
functionality, since Lisps tend to accrete rather than being designed.

>
>It is obviously intended to be a "compileable" language, not an interpreted
>language. By nature it will be very slow; somebody would have to spend quite
> a bit of time and $ to make a "fast" interpreted version (say for a VAX).

Compiled = slow? How silly of me, I thought the purpose of compilation was
to make code run faster. I presume that there is an unspoken assumption
that you will run code interpreted even though a compiler is available. In
my experience, non-stupid people debug code compiled when they have a modern
Lisp environment supporting incremental compilation.

>The grotesque complexity and plethora of data types presents incredible
>problems to the developer; it was several years before Golden Hill had
>lexical scoping, and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!

It is true that Common Lisp has a few types that are non-trivial to
implement and are not supported by some Lisps. The main examples are
bignum, complex and ratio arithmetic. Your other two assertions, while
true, have nothing to do with the complexity of Common Lisp datatypes. It
is true that implementing full lexical scoping in a compiler is non-trivial,
causing some implementors headaches; the Common Lisp designers felt that
this cost was adequately compensated for by the increment in power and
cleanliness. NIL existed after a fashion before anyone had thought of
designing Common Lisp, and it didn't have a garbage collector then either.

>Further, there are inconsistencies and flat out errors in the book. So many
>things are left vague, poorly defined and "to the developer".

True: this is a major motivation for the current ANSI standards effort.
However, some of the vaguenesses in the spec are quite deliberate. People
who have not participated in a standards effort involving many
implementations may not appreciate how much a standard can be simplified by
leaving behavior in obscure cases undefined. This is quite different from
documenting a single implementation system where you can assume that what
the implementation does is the "right" thing.

>
>The entire INTERLISP arena is left out of the range of compatability.

True, and quite deliberate. Interlisp is substantially incompatible with
all the Lisps that we wanted to be compatible with. Of course, this is
largely because all of the active members of the Common Lisp design effort
were using Maclisp family Lisps. Other Lisp communities such as
XEROX/Interlisp were hiding their heads in the sand, hoping we would never
accomplish anything.

>
>As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
>Common LISP. Once again we hear that LISP is "too slow" for such things, when
>a large part of it is the use of Common LISP as opposed to a "faster" form
>(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
>should have left the &aux, etc as macros). Every operation in CL is very
>expensive in terms of CPU...

Even if you personally insist on using an interpreter, vendors using Lisp as
an implementation substrate will be less stupid. As you mentioned earlier,
Common Lisp was designed to be efficiently compilable, and none of the
above "ineffencies" have a negative effect on compiled code. As for
fundamental innefficiency, look at Robert P. Gabriel's book on measuring
Lisp performance. He compares many Lisps, both Common and uncommon, and the
Common Lisps do quite well. For example, Lucid Common Lisp on the SUN is
2x-4x faster than Franz on the same hardware.

>
>______________________________________________________________
>
>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
>allow both dynamic and lexical makes the performance even worse.

Only in compiled code...

>To me,
>lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
>part of the language semantics.

Sticking your foot in your mouth and revealing that you have no understanding
of lexical scoping (as opposed to local scoping)...

>I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were
>testing/debugging other people's code).

For one, Common Lisp is hardly unique in having both dynamic and static
variables. Every Lisp that I know of allows dynamic binding, and every Lisp
that I know of will also statically bind variables, at least in compiled
code. I believe that Scheme allows fluid binding; certainly T does.
I have also never heard anyone but you claim that mixed
lexical/dynamic scoping makes programs hard to understand, and I have to
deal with some real dimbulb users as part of my job. In contrast, I have
frequently heard claimed (and personally experienced) obscure bugs and
interactions due to the use of dynamic scoping.

>
>BTW, I think the book is poorly written and assume a great deal of knowledge
>about LISP and MACLISP in particular. I wouldn't give it to ANYBODY to learn
>LISP
>
>...Not only does he assume you know a lot about LISP, he assume you know a LOT
>about half the other existing implementations to boot.

There is a substantial element of truth here, but then CLTL wasn't intended
to be a "learning programming through Lisp book". The problem is that you
have all these Lisp wizards defining a standard, and they find it impossible
to "think like a novice" when specifying things.

>
>I am inclined to doubt that it is possible to write a good introductory text on
>Common LISP;

This is questionable. I believe that all the Maclisp family Lisp
introductory books are going over to Common Lisp (e.g. Winston and Horn).
Of course, you probably consider these books to be a priori not good.

>you d**n near need to understand ALL of it before you can start
>to use it. There is nowhere near the basic underlying set of primitives (or
>philosophy) to start with, as there is in Real LISP (RL vs CL).

Not really true, although this is the closest that you have come to a valid
esthetic argument against Common Lisp. Once you understand it, you realize
that there actually is a primitive subset, but this is only hinted at in
CLTL.

>You'll notice
>that there is almost NO defining of functions using LISP in the Steele book.
>Yet one of the best things about Real LISP is the precise definition of a
>function!

Once again, this is largely a result of good standards practice. If you say
that a given operation is equivalent to a piece of code, then you vastly
over-specify the operation, since you require that the result be the same
for *all possible conditions*. This unnecessarily restricts the
implementation, resulting in the performance penalties you so dread.

>
>Even when using Common LISP (NIL), I deliberately use a subset. I'm always
>amazed when I pick up the book; I always find something that makes me curse.

I will avoid elaborating possible conclusions about your limited mental
capacity; this statement only shows how emotionally involved you are in
denouncing a system which takes you out of your depth.

>Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
>think, the author's name escapes me). The author uses SETF instead of SETQ,
>stating that SETF will eventually replace SETQ and SET (!!). Thinking that
>this was an error, I checked in Steel; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94! And it isn't even clear; if the variable is lexically bound AND
>dynamically bound, which gets changed (or is it BOTH)? Who knows?
>Where is the definitive reference?

Well, obviously it sets the place named, and in a particular lexical
environment, a given name only names one variable, lexical or special as the
case may be. Your incomprehension provides some evidence that the
specification is inadequate, although you do exhibit an amazing capacity for
incomprehension.

>
>"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
>an error, (b) if it's not an error, why isn't there a definition using the
>approprate & keywords? Consistency? Generating an "insufficient args"
>error seems more consistent to me...

The syntax specified in CLTL is:
SETF {place value}*
In CLTL's notation for macro syntax, this states that an arbitrary number of
(place, value) pairs may be specified. The sentence you complain about
is only restating the obvious so that even you could not miss this point.
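
So all of the following are legal (A, B and L are just placeholder names
of mine):

(setf)                      ; zero pairs: does nothing and returns NIL
(setf a 1)                  ; one pair, equivalent to (SETQ A 1)
(setf a 1 b 2 (car l) 3)    ; several pairs, assigned left to right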

>
>Then try explaining why SET only affects dynamic bindings (a most glaring
>error, in my opinion). Again, how many years of training, understanding
>and textbooks are suddenly rendered obsolete? How many books say
>(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all
>but two...

Well, the times they are a changin'... Of course, if you understood lexical
variables, you would understand why you can't compute a variable name at run
time and then reference it.

>
>Then try to introduce them to DEFVAR, which may or may not get
>evaluated who knows when! (And which aren't implemented correctly
>very often, e.g. Franz Common and Golden Hill).

It is true that DEFVAR's behavior is somewhat non-intuitive, but it is
usually the "right thing" unless you are doing wrong things in your variable
inits. This is an instance of the MIT philosophy of doing the right
thing even if it is a bit more complicated (like in ITS EMACS vs.
imitations, a subject which I could flame about with verbosity and
irrationality comparable to yours).

>
>MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
>used EQ instead of EQUAL. I only checked about 4 books and manuals (UCILSP,
>INTERLISP, IQLISP and a couple of others). David correctly pointed out that
>CL defaults to EQ unless you use the keyword syntax. So years of training,
>learning and ingrained habit go out the window. How many bugs
>will this introduce. MEMQ wasn't good enough?

Of course you are wrong here, although only in a minor way. It uses EQL,
like every other Common Lisp function that has an implicit equality test.
This particular decision was agonized over for quite a while, but it was
decided to change MEMBER in the interest of consistency (which I believe you
defended earlier).

>
>While I'm at it, let me pick on the book itself a little. Even though CL
>translates lower case to upper case, every instance of LISP names, code,
>examples, etc are in **>> lower <<** case and lighter type. In fact,
>everything that is not descriptive text is in lighter or smaller type.

Yep, Digital Press botched the typesetting pretty badly. Of course, the
reason that the code is in lower case is that everyone with any taste codes
in lower case. The reason that READ uppercases is that Maclisp did.

Flamingly yours...
Rob MacLachlan (r...@c.cs.cmu.edu)

yera...@rpics.uucp

Feb 11, 1987, 11:56:56 AM
In article <25...@well.UUCP>, jja...@well.UUCP (Jeffrey Jacobs) writes:
>
> In <7...@rpics.RPI.EDU>, Bill Yerazunis writes:
>
> > I know my code runs faster
> >with lexical than with dynamic. Remember, when you scope lexically and
> >compile, you have an absolute displacement onto the stack. This is
> >a fast thing.
>
> >If you dynamically scope, you have to call a routine
> >to go into a hash table and find out where whatever-it-is is kept.
> >This is a not-so-fast thing for a compiled language. On calling
> >and return, you have to fix up the hash symbol table for each and
> >every dynamically-scoped argument. This is a very-not-fast thing
>
> Say what???????????
>
> The "value cell" is normally statically located at a known address!!!
> No need to perform hash table lookup at all!!! All references are by
> address.
>
> Access time may possibly be *faster*, i.e. MOV ADDR, dest instead of
> MOV INDEX(SP),dest.depending on CPU architecture!

Yes- and no. The fixed address will be fine and dandy as long
as you assume a fixed symbol table size and location. In LISP,
assuming a fixed size and/or location for anything is a bad idea,
because you never can be sure that the garbage collector isn't going
to sneak up behind your back and move it on you when you aren't
looking.

Or even worse, you gensym up a few thousand temporary symbols and
you run out of symbol table space. OUCH!

The obvious cure for this is to call some sort of hashing function
and thereby circumvent the move/growth problem.


A related problem is the problem inherent in a lambda-definition. If I have

(defun foo (x)
  (setq x 3))

foo returns the value 3, but the global value of X (if it exists) should
be unchanged! Therefore, you have to save and restore EVERY formal
parameter (possibly to an undefined state). This requires a lot of
instructions.... and cycles.

Lexical scoping gives each function a discardable copy on the stack.
Therefore, no difficult restores.
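
To make the contrast concrete (a sketch of mine, in CL syntax rather than
machine code):

(defvar *y* 0)                 ; *Y* is special

(defun dynamic-version (*y*)   ; the parameter rebinds the special variable
  (callee))

(defun lexical-version (y)     ; Y is purely lexical
  (callee))

(defun callee () *y*)

;; (dynamic-version 3) => 3    ; the callee sees the rebinding, which must be
;;                             ; undone on exit
;; (lexical-version 3) => 0    ; the callee still sees the global value, and
;;                             ; there is nothing to save or restore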

-Bill Yerazunis

yera...@rpics.uucp

Feb 11, 1987, 11:58:32 AM

My apologies for the EMACS/postnews interface that added lots of
blank spaces and lines. It has been dealt with.

Regrets and apologies.

-Bill Yerazunis

pa...@osu-eddie.uucp

Feb 11, 1987, 4:02:04 PM
In article <25...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:
>(As a general rule of language design I agree that lexical is better;
>dynamic scoping for LISP is both a personal prejudice and a
>performance issue).

I must take an anti-dynamic stand at this point. The main thing I
have against dynamic scoping is that I ALWAYS run the risk of having
someone else's code do uncool things to my routine's variables just
because the names are the same. The name is what I happen to call it,
but if it is inside a routine, it should be THAT ROUTINE'S variable,
and no one else's. Lisp just happens to be a language, but that shouldn't
make any difference.

>Say what???????????
>
>The "value cell" is normally statically located at a known address!!!
>No need to perform hash table lookup at all!!! All references are by
>address.

Not always. What if I happen to be running a compiler that puts all
local variables into REGISTERS, and only pushes them when they need to
be saved (see _Structure and Interpretation of Computer Programs_,
Abelson & Sussman, 1985 MIT Press, for a good example of register
handling)? In that case, a local variable fetch is the fastest thing
your machine can do. It is possible to make a dynamic code compiler
use registers, but it is VERY much harder.

As for interpreted code, what if your interpreter has an evaluate-lambda
routine that works by compiling the lambda body (ONLY if it hasn't
done so already) and then CALLing it? Note that CALLing might
actually mean running a pseudo-machine on the compiled code. If this
part of the interpreter is written correctly, it will be just as fast as
a normal interpreter for most code, and VERY much faster for loops.
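
Something along these lines, very roughly (a sketch of mine; the names
are made up, and it ignores lambdas that close over lexical variables):

(defvar *compiled-cache* (make-hash-table :test #'eq))

(defun eval-lambda (lambda-exp args)
  ;; compile the lambda expression the first time we see it, then reuse
  ;; the compiled function on every later call
  (let ((fn (or (gethash lambda-exp *compiled-cache*)
                (setf (gethash lambda-exp *compiled-cache*)
                      (compile nil lambda-exp)))))
    (apply fn args)))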

-- Paul Placeway
Department of Computer and Information Science
SNail: The Ohio State University
2036 Neil Ave. Columbus OH USA 43210-1277
ARPA: paul@ohio-state.{arpa,csnet}
UUCP: ...!cb{osgd,att}!osu-eddie!paul

jja...@well.uucp

Feb 12, 1987, 1:08:39 AM

Some comments on "Against the Tide of Common LISP".

First, let me point out that this is a repeat of material that appeared
here last June. There are several reasons that I have repeated it:

1) To gauge the ongoing change in reaction over the past two years.
The first time parts of it appeared in 1985, the reaction was
uniformly pro-CL.

When it appeared last year, the results were 3:1 *against* CL, mostly
via Mail.

Now, being "Against the Tide..." is almost fashionable...

2) To lay the groundwork for some new material that is in progress
and will be ready RSN.

I did not edit it since it last appeared, so let me briefly repeat some
of the comments made last summer:

1. My complaint that "both dynamic and lexical makes the performance
even worse" refers *mainly* to interpreted code.

I have already pointed out that in compiled code the difference in
performance is insignificant.

2. The same thing applies to macros. In interpreted code, a
macro takes significantly more time to evaluate.

I do not believe that it is acceptable for a macro in interpreted code
to be destructively expanded, except under user control.

3. SET has always been a nasty problem; CL didn't fix the problem,
it only changed it. Getting rid of it and using a new name would
have been better.

After all, maybe somebody *wants* SET to set a lexical variable if that's
what it gets...

I will, however, concede that CL's SET is indeed generally the desired
result.

4. CL did not fix the problems associated with dynamic vs lexical
scoping and compilation, it only compounded them. My comment
that

>"lexical scoping was and should be a compiler OPTIMIZATION"

is a *historical* viewpoint. In the 'early' days, it was recognized
that most well written code was written in such a manner that
it was an easy and effective optimization to treat variables as
being lexical/local in scope. The interpreter/compiler dichotomy
is effectively a *historical accident* rather than design or intent of the
early builders of LISP.

UCI LISP should have been released with the compiler default as
SPECIAL. If it had been, would everybody now have a different
perspective?

BTW, it is trivial for a compiler to default to dynamic scoping...

5. >I checked in Steel; lo and behold, tis true (sort of).


>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!

I was picking on the book, not the language. But thanks for all
the explanations anyway...

6. >"For consistency, it is legal to write (SETF)"

I have so much heartburn with SETF as a "primitive" that I'll save it
for another day.

7. >MEMBER used EQ instead of EQUAL.

Mea culpa, it uses EQL!

8. I only refer to Common LISP as defined in the Steele Book, and
to the Common LISP community's subsequent inability to make
any meaningful changes or create a subset. (Excluding current
ANSI efforts).

Some additional points:

1. Interpreter Performance

I believe that development under an interpreter provides
a substantially better development environment, and that
compiling should be a final step in development.

It is also one of LISP's major features that anonymous functions
get generated as non-compiled functions and must be interpreted.

As such, interpreter performance is important.

3. "Against the Tide of Common LISP"

The title expresses my 'agenda'. Common LISP is not a practical,
real world language.

It will result in systems that are too slow and too expensive. To be
accepted, LISP must be able to run on general purpose, multi-user computers.

It is cutting off the chance of other avenues and paths of development
in the United States.

There must be a greater understanding of the problems, and benefits
of Common LISP, particularly by the 'naive' would be user.

Selling it as the 'ultimate' LISP standard is dangerous and
self-defeating!

Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs

USENET: jja...@well.UUCP

pat...@mcc-pp.uucp

Feb 12, 1987, 10:44:22 AM
As a new arrival to the Lisp world (2 years experience with Lisp,
vs 15 with C, Bliss, and Fortran), I think I have a different
view of Common Lisp vs other Lisps than either the definers of
the Common Lisp standard or jjacobs.

... On compilers vs interpreters
As a systems and performance measurement type, I have always been
concerned with how fast my programs run. One of the critical
measures of success of OS code is how fast it is perceived to be.
My default programmer's model says interpreters are slow.
Also, old rumors about programs behaving differently in compiled
and interpreted mode made me distrust the interpreter as a naive user.
Since I have an incremental compiler (Lisp machine), I compile everything
before I run it, except top-level commands. I have not noticed
significant impediments to development using this procedure.
Breakpoints and function tracing are still available as well
as the old, old reliable of print statements. Indeed, when at
a breakpoint, I can rewrite and recompile any function that I am
not currently within. Thus, from my viewpoint, all discussion of
how fast something is in an interpreter is irrelevant to my purposes.
Dynamically created closures can also be handled by an incremental compiler.
I claim that this approach to Lisp development is followed without lossage
by many of the new arrivals to the Lisp world.

...on Common Lisp environments
I recognize that Lisp machines are too expensive for most developers,
but workstations such as Sun now have Common Lisp compilers
(from Kyoto, Franz, and Lucid at a minimum), with runtime
environment development continuing. I claim that reasonable
Common Lisp development environments are available on $15,000 workstations
and multiuser systems such as Vaxes and Sequents today, and will be
available soon on $5000 PCs (based on high performance, large address
space chips such as M68020 or Intel 386).

...on portability
Implementors of major commercial programs want as wide a potential
market for their product as possible. Thus, they choose to implement
in the best available PORTABLE environment, rather than the best
environment. Common Lisp appears the best choice.
Researchers without the requirement for portability
may choose other environments such as Scheme or InterLisp.

...on commonality
I was shocked to discover that MacLisp and InterLisp are significantly
more different than C and Pascal. I am surprised that they
are commonly lumped together as the same language. Scheme is
yet farther away both in syntax and philosophy. All are in the
same family just as C and Pascal are both related to Algol60,
but beyond that...

...on Common Lisp the Language
I learned Common Lisp from Steele's book and found it heavy going.
An excellent first effort at defining a standard, but definitely
not a teaching or implementor's aid. The intro books are becoming
available (with some lingering historical inconsistencies).
Someone should write a book describing the "definitive" core of the language,
followed by reasonable macros and library functions for the rest of
the language. It would be a great aid to experimental implementors.
Commercial implementations would continue to set themselves apart
by the quality of their optimizers, debugging environments, etc.
<<A side note, C debuggers provide dynamic access to lexical
variables. I am sure Common Lisp ones can too, at some
implementation cost. I wonder when they will...>>
On the other hand, with only a brief exposure to Franz Lisp and
MultiLisp before plunging into Common Lisp (with ZetaLisp extensions),
I did not have the disadvantage of historical assumptions about the
definition of Lisp.

...on dynamic vs lexical scoping
Common Lisp did not go far enough in lexical scoping. Specifically,
it did not provide a way to define a lexical variable at the
outermost level. As it is, I cannot define global variables without
some risk of changing the performance of some function that
happened to use the same variable name, even if the variable
is used in a lexical way. The current recommended convention of
defining specials with *foo* leaves much to be desired.
Other than globals, the only other use I have seen for
dynamically scoped variables is to pass additional values
to and from functions without using the argument list (generally
considered poor practice by software engineering types, but
occasionally preferred for performance reasons).

I seem to have rambled on, but I agree with Jeffrey Jacobs in saying:


> There must be a greater understanding of the problems, and benefits
> of Common LISP, particularly by the 'naive' would be user.
>
> Selling it as the 'ultimate' LISP standard is dangerous and
> self-defeating!

Instead, I consider it the current Lisp standard, subject to
"slow and careful" revision and improvement.
By comparison, Fortran 77 is not Modula2, but it is far better than
Fortran II.

I hope and expect "Common Lisp 2000" will represent significant
improvements over Common Lisp, perhaps with some remaining
historical uglinesses removed or better hidden (including dynamic scoping).

-- Patrick McGehearty,
representing at least one view of the growing community of Lisp users.

wi...@tekchips.uucp

Feb 12, 1987, 1:36:56 PM
Jeffrey Jacobs (jja...@well.UUCP):

>I can accept SCHEME, where you always know
>that it's lexical, but CL could drive you crazy (especially if you were
>testing/debugging other people's code).

Robert P Krajewski (r...@mc.lcs.mit.edu):


>Huh ? Whether or not a variable is lexical can be determined by looking at
>its lexical context (practically an axiom, eh ?). So if it's being used
>freely, you can assume it's special.

Rob MacLachlan (r...@spice.cs.cmu.edu):


>I have also never heard anyone but you claim that mixed
>lexical/dynamic scoping makes programs hard to understand, and I have to
>deal with some real dimbulb users as part of my job. In contrast, I have
>frequently heard claimed (and personally experienced) obscure bugs and
>interactions due to the use of dynamic scoping.

Thanks to proclamations and DEFVARs (which perform proclamations), it
is not possible to tell whether a Common Lisp variable is lexical simply
by looking at its lexical context. See page 157 of CLtL. This is a major
lose. As Mr Jacobs observed, it drives you crazy when you try to read
code.
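
A small illustration (mine, not from the book) of the kind of thing that bites:

(defvar x)                      ; pervasively proclaims X special

(defun make-adder (x)           ; X now binds dynamically, though nothing at
  #'(lambda (y) (+ x y)))       ; this definition says so

(setq add5 (make-adder 5))
(funcall add5 1)                ; without the DEFVAR this returns 6; with it,
                                ; the dynamic binding of X has been unwound by
                                ; the time the closure runs, so X is unbound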

I certainly agree with Mr MacLachlan's point that dynamic scoping makes
programs hard to understand.

Rob MacLachlan (r...@spice.cs.cmu.edu):


>For one, Common Lisp is hardly unique in having both dynamic and static
>variables. Every Lisp that I know of allows dynamic binding, and every Lisp
>that I know of will also statically bind variables, at least in compiled
>code. I believe that Scheme allows fluid binding; certainly T does.

Neither the 1985 nor 1986 Scheme reports talk about dynamic (fluid)
variables. The reason is that many different semantics are possible for
dynamic variables, each with their own best use, and Scheme is powerful
enough that these various semantics can be implemented by portable Scheme
code. We reasoned that programmers can load whichever variety of dynamic
variables they want out of a code library. The standard procedure library
described in the Scheme reports doesn't describe any of the possibilities
for dynamic variables because we wanted to avoid premature standardization.

Peace from a real dimbulb user,
Will Clinger
willc%tekc...@tektronix.csnet

mi...@think.uucp

unread,
Feb 13, 1987, 9:10:13 AM2/13/87
to
In article <25...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:
>Some additional points:

>1. Interpreter Performance
>I believe that development under an interpreter provides
>a substantially better development environment,

Absolutely.

>and that compiling should be a final step in development.

I find that I almost always run code compiled, even code under
development. The reason is that most silly typos, such as misspelled
variables and functions, are caught by the compiler.
When I have bugs, the lisp debugger is usually sufficient on
compiled functions. If I have an unusually hard bug to understand,
then I might run interpreted with step.

>It is also one of LISP's major features that anonymous functions
>get generated as non-compiled functions and must be interpreted.

Most anonymous functions (I presume you mean lambda expressions where
you write #'(lambda (...) ...) in your code) will get compiled.
Only by saying '(lambda ...) or consing one up on the fly will you
get anonymous interpreted functions.
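
An illustration of the difference (under CLtL1 rules, where a list whose car
is LAMBDA is acceptable as a function):

(mapcar #'(lambda (x) (* x x)) '(1 2 3))   ; #'(lambda ...): compiled along with
                                           ; its caller when the file is compiled

(setq square '(lambda (x) (* x x)))        ; '(lambda ...): just a list, so a
(apply square '(4))                        ; call through it goes to the interpreter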

>As such, interpreter performance is important.

Yes, but not *that* important.

>3. "Against the Tide of Common LISP"
>The title expresses my 'agenda'. Common LISP is not a practical,
>real world language.

I find that attitude most unfortunate. If you want to say Common LISP
is a large language, and that it is difficult to implement well - then
I would agree with you totally. But I have to disagree strongly with
this statement.

> Jeffrey M. Jacobs


-- jeff
seismo!godot.think.com!mincy

sha...@uicsrd.uucp

unread,
Feb 13, 1987, 2:00:00 PM2/13/87
to


There is a pretty good critique of Common Lisp in :

"A Critique of Common Lisp" by Rodney Brooks and Richard Gabriel
(Stanford). It appeared in the proceedings of the 1984 ACM Symposium on
Lisp and Functional Programming.

r...@lmi-angel.uucp

unread,
Feb 13, 1987, 5:56:14 PM2/13/87
to
In article <> jja...@well.UUCP (Jeffrey Jacobs) writes:
>
>Some comments on "Against the Tide of Common LISP".

>
>1) To gauge the ongoing change in reaction over the past two years.
>The first time parts of it appeared in 1985, the reaction was
>uniformly pro-CL.
>
>When it appeared last year, the results were 3:1 *against* CL, mostly
>via Mail.

What exactly are you trying to imply here ? What were the circumstances of
rejection ?

>4. CL did not fix the problems associated with dynamic vs lexical
>scoping and compilation, it only compounded them. My comment
>that
>
>>"lexical scoping was and should be a compiler OPTIMIZATION"
>
>is a *historical* viewpoint... The interpreter/compiler dichotomy
>is effectively a *historical accident* rather than design or intent of the
>early builders of LISP.

Well, it's a fairly large blot on language semantics. Common Lisp decided
to get the semantics right, while not removing historical phenomena like the
names of certain list manipulation functions (NCONC or RPLACA).

>BTW, it is trivial for a compiler to default to dynamic scoping...

Yeah, the Lisp Machine compiler used to allow that. It's pretty disgusting.
It would also be trivial to put a lot of other switches in the compiler that
would permit it to be more ad hoc because it was more convenient to implement
it that way.

>I have so much heartburn with SETF as a "primitive" that I'll save it
>for another day.

Well, I'd like to hear them. It would be interesting to see what your
objections are.

>7. >MEMBER used EQ instead of EQUAL.
>
>Mea culpa, it uses EQL!

Nitpicking aside, this is hardly arbitrary -- remember that since Common
Lisp is a new dialect, there was only a secondary consideration in being
compatible with other Lisp dialects. This decision was made, so there's no
use complaining that MEMBER in Common Lisp is a different function than MEMBER
in Maclisp. In a previous posting I indicated one solution for porting
*existing* code. There is a need for various older Lisp -> Common Lisp
compatibility packages, and to a large extent they can be very portable.

>1. Interpreter Performance
>
>I believe that development under an interpreter provides
>a substantially better development environment, and that
>compiling should be a final step in development.

It depends on what implementation you're using. Because the Lisp Machine
effort was driven by system programmers and a specialized architecture,
debugging compiled code is *easier* in most cases than debugging interpreted
code. In non-specialized implementations, this is less likely to be true if
not many conventions of a ``virtual Lisp machine'' are honored in compiled
code.

>It is also one of LISP's major features that anonymous functions
>get generated as non-compiled functions and must be interpreted.

This is another a priori opinion. How many mature implementations of Lisp
actually behave like this in the first place ? What are the applications of
such an accidental behavior ?

>3. "Against the Tide of Common LISP"

>The title expresses my 'agenda'. Common LISP is not a practical,
>real world language.

OK, back to the crusade...

>It will result and too expensive. To be accepted, LISP must be able to run
>on general purpose, multi-user computers.

It takes a consultant to come up with conclusions like this, and
requirements like this...

>There must be a greater understanding of the problems, and benefits
>of Common LISP, particularly by the 'naive' would be user.

You're talking about a group with little clout. If the analogous attitude
were true in the personal computer marketplace, then everybody would have
Macs; instead ``power users'' are perfectly happy to tweak PCs. Everyone
starts out naive, but people who want programs written will not tolerate
naivete in the would-be implementors. Would-be users are not catered to by
a language definition, but by good textbooks, education, and lots of
hands-on experience.

>Selling it as the 'ultimate' LISP standard is dangerous and
>self-defeating!

Who said that ? Common Lisp is not a step forward in terms of Lisp
``features.'' By reining in the spec and getting diverse implementors to
agree on something, I can write a program on a Sun, and have it work on
(say) a Lisp Machine (Zetalisp), a Silicon Graphics box (Franz Common Lisp),
a DEC-20 (Hedrick's Common Lisp), a VAX (NIL or DEC Standard Lisp), and so
on. Before, one had to make a decision on whether to use a safe Maclisp
subset or a safe Interlisp subset, if one indeed expected portability to be
worth one's while at all. And then you got to show off your expertise in #+/-,
STATUS and SSTATUS, and your knowledge of n dialects' opinions on whether NTH
was 0 or 1-based. At least now there is a Lisp which is no more repugnant
than C (actually, a lot less, in my freely admitted biased opinion) as a
portable programming language.

jja...@well.uucp

unread,
Feb 13, 1987, 10:57:38 PM2/13/87
to

In <1...@spice.cs.cmu.edu> Rob MacLachlan writes, without using any
four letter words or sexual innuendo! The improvement in his
vocabulary and manners since last summer is to be commended!

Unfortunately, his reading skills have not improved as much.

In general, I do not hold most of the views that he attributes to me,
and his ability to misinterpret what I write amazes me, particularly
since he has already seen most of this discussed previously.

I'm sure his talent for leaping to irrelevant and unwarranted conclusions is
already legendary, as are his rapier wit, subtle sarcasm, and debating
society method of argumentation.

As such, I will only reply to those points of his which deal with
my original arguments, or which need addressing, such as his
apparent lack of understanding of the basic issues of software
engineering!

>>
>>It is obviously intended to be a "compileable" language, not an interpreted
>>language. By nature it will be very slow; somebody would have to spend quite
>> a bit of time and $ to make a "fast" interpreted version (say for a VAX).
>Compiled = slow? How silly of me, I thought the purpose of
>compilation was to make code run faster.

Read the paragraph again Rob!

>However, some of the vaguenesses in the spec are quite deliberate. People
>who have not participated in a standards effort involving many
>implementations may not appreciate how much a standard can be simplified by
>leaving behavior in obscure cases undefined. This is quite different from
>documenting a single implementation system where you can assume that what
>the implementation does is the "right" thing.

Gee, just what the world needs; deliberately vague specs!!!

(And the COMMON LISP effort certainly doesn't begin to achieve
what the rest of the world considers a "standards effort")

>>The entire INTERLISP arena is left out of the range of compatability.
>True, and quite deliberate. Interlisp is substantially incompatible with
>all the Lisps that we wanted to be compatible with. Of course, this is
>largely because all of the active members of the Common Lisp design effort
>were using Maclisp family Lisps. Other Lisp communities such as
>XEROX/Interlisp were hiding their heads in the sand, hoping we would never
>accomplish anything.

How to win friends and influence people! I hope Rob gets tenure
at CMU, cause he might have a hard time getting a job elsewhere.

>>As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
>>Common LISP. Once again we hear that LISP is "too slow" for such things, when
>>a large part of it is the use of Common LISP as opposed to a "faster" form
>>(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
>>should have left the &aux, etc as macros). Every operation in CL is very
>>expensive in terms of CPU...
>Even if you personally insist on using an interpreter, vendors using Lisp as
>an implementation substrate will be less stupid.

Many vendors have *already* abandoned *compiled* Common LISP!
Interpreter speed had nothing to do with it.

>>I forgot to leave out the fact that I do NOT like lexical scoping in LISP; to
>>allow both dynamic and lexical makes the performance even worse.
>Only in compiled code...

And I'm supposed to be ignorant about building compilers???

>>There is nowhere near the basic underlying set of primitives (or
>>philosophy) to start with, as there is in Real LISP (RL vs CL).

>Not really true, although this is the closest that you have come to a valid
>esthetic argument against Common Lisp. Once you understand it, you realize
>that there actually is a primitive subset, but this is only hinted at in
>CLTL.

There is a *BIG* difference between what I say and "hinting"! The
failed attempt to create a subset is sufficient proof of that.

Any *true* 'core' exists only in Rob's imagination!

>>You'll notice
>>that there is almost NO defining of functions using LISP in the Steele book.
>>Yet one of the best things about Real LISP is the precise definition of a
>>function!
>Once again, this is largely a result of good standards practice.

Good standards practice = vague and poorly defined?????

> If you say
>that a given operation is equivalent to a piece of code, then you vastly
>over-specify the operation, since you require that the result be the same
>for *all possible conditions*. This unnecessarily restricts the
>implementation, resulting in the performance penalties you so dread.

Oh, I see. Expecting understandable, consistent results is an
unnecessary restriction on the implementor!!!

>Well, the times they are a changin'... Of course, if you understood lexical
>variables, you would understand why you can't compute a variable name at run
>time and then reference it.

B.S! All the compiled code for SET need do is check that the first argument
be lexically equivalent to a lexically apparent variable and change
the appropriate cell, stack location, or whatever. Easy for a compiler
to do!

(This may not be what a lot of people *want*, but it is possible).

>Flamingly yours...
> Rob MacLachlan (r...@c.cs.cmu.edu)

Jeffrey M. Jacobs

jja...@well.uucp

unread,
Feb 13, 1987, 11:39:41 PM2/13/87
to

In 1...@lmi-angel.UUCP, Bob Krajewski writes:

>>I can accept SCHEME, where you always know
>>that it's lexical, but CL could drive you crazy (especially if you were
>>testing/debugging other people's code).
>

>Huh ? Whether or not a variable is lexical can be determined by looking at
>its lexical context (practically an axiom, eh ?). So if it's being used
>freely, you can assume it's special.

I was referring to *debugging* code, not compiling it. In examining
a function, it is not *lexically* apparent whether an argument is
SPECIAL or local, i.e. if I enter

(DEFUN CONFUSE_ME (X Y Z)...

Now, you tell me if the variables are going to be dynamic or lexical?
Was a DEFVAR or a PROCLAIM issued earlier? No way to tell, is
there?

And should I assume that in

(DEFUN FOO (FUM FIE) (LIST FUM FIE F1E))

F1E is SPECIAL? Would you? Especially when someone else wrote it?

Common LISP makes an old problem worse, not better.

>
>Well, maybe having a function like MAP (takes a result type, maps over ANY
>combination of sequences) is a pain to implement, but the fact there is
>quite a bit of baggage in the generic sequence functions shouldn't slow down
>parts of the system that don't use it. The CORE of Common Lisp, which is
>lexical scoping, 22 special forms, some data types, and
>evaluation/declaration rules, is not slow at all. It is not as elegant as
>Scheme, true, but there is certainly a manageable set of primitives. Quite a
>bit of Common Lisp can be implemented in itself.
>

If there is a CORE, why couldn't the committee formed to produce a subset come
up with anything? Certainly *parts* of it can be implemented in itself;
why should they then be considered a critical part of the language that
can't be done without?

I'd love to believe that what you describe is a *real* core, but I can't.
The book and the rest of the CL community say otherwise!

I will address the issue of baggage in the near future, but let me state that
there is one major piece of baggage which I believe has a more adverse
effect on CL than anything else, and that is the absurd complexity
of the LAMBDA list! Function calling will never be the same :-)

>Well, (SETF) does nothing. You probably wouldn't write this, but again, a
>macro would find it useful. Should (LIST) signal an error too ?

One of the most common errors found in software is an improper number
of arguments. This has plagued more programs and languages than
I care to recall. Common LISP has given up the built-in error
checking of previous LISPs.

A macro which generates a function with zero arguments should
almost certainly be checking for the correct number of arguments.

>>Care to explain this to a "beginner"? Not to mention that SETF is a
>>MACRO, by definition, which will always take longer to evaluate.
>
>Since you're a beginner, by your own admission, why do you think that a form
>which is a macro call will be noticeably more expensive (in the interpreter,
>the compiled case can't ever be slower) ?

You misunderstand me; I'm not a beginner. In fact, I'm an old fogey :-)
I was one of the co-developers of UCI LISP, and crawled through
INTERLISP, MACLISP and of course Stanford LISP probably before
you could read :-)

You know perfectly well that interpreting macros takes more time!
First you have to EVAL the form, and then give the result to EVAL
again! (And I *don't* consider it kosher to destructively expand
without the user's control).
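
For the record, the two passes look like this (an illustration only):

(defmacro square (x) `(* ,x ,x))

;; A naive interpreter handles (SQUARE 3) in two steps every time it is seen:
;;   (macroexpand-1 '(square 3))   =>  (* 3 3)
;;   (eval '(* 3 3))               =>  9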

>(Oh, by the way, they reversed the arguments to CONS, ha ha...)

You mean (CONS 'A 'B) => (B . A)? :-)

>It won't introduce bugs into new code written by people who read the manual
>and understand the interface and semantics of MEMBER. Your (legitimate)
>obstacle is porting ``traditional Lisp'' code to Common Lisp.

Hey, CLtL specifically states that one objective is to remain as compatible
as possible. Now MEMBER is a very basic primitive, going
back even before my time!

>Well, the main winning alternative is even further away from your Real Lisp
>than Common Lisp is: Scheme, or T, which can be pretty much turned into a
>systems programming language.
>-- Robert P. Krajewski

What about Le_LISP, or the proposed ISO/EU-LISP? Part of the
problem with the enormous amount of effort devoted to Common LISP
is its stifling of other work in the United States.

le...@ucla-cs.uucp

unread,
Feb 14, 1987, 12:41:42 AM2/14/87
to
Can we all give Jeff Jacobs some slack? We should all
recognize by now that "Against the tide..." (got that copyrighted
yet jeff?) is his pet area. :-)

It all began back in the days when he wrote UCI Lisp with Meehan...

sh...@utah-cs.uucp

unread,
Feb 14, 1987, 1:16:05 PM2/14/87
to
I really shouldn't respond to this crud, but as one of the people who has
spent quite a bit of time thinking about CL subsets, I wanted to correct
the following misstatement:

In article <25...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:

>>The CORE of Common Lisp, which is
>>lexical scoping, 22 special forms, some data types, and
>>evaluation/declaration rules, is not slow at all. It is not as elegant as
>>Scheme, true, but there is certainly a manageable set of primitives. Quite a
>>bit of Common Lisp can be implemented in itself.
>
>If there is a CORE, why couldn't the committee to produce a subset come
>up with anything? Certainly *parts* of it can be implemented in itself;
>why should they then be considered a critical part of the language that
>can't be done without?

First, there have been several proposals for subsets. They typically have
these problems:

1. Missing but desired features. Even within Utah, a subset that would make
everyone here happy contains over 400 functions. About the only thing
that everybody agrees on is that NSUBSTITUTE-IF-NOT should be omitted.
If you take all groups, there is even more variance about what is important.

2. Lack of orthogonality. If CL had a MEMBER that does EQUAL tests and a
MEMQ that does EQ tests, but only an ASSOC that does EQUAL tests and no ASSQ,
you can bet that everybody would moan and complain about it being
"non-orthogonal". Similarly for UNION and UNIONQ, INTERSECTION and INTER-
SECTIONQ, and so on. It's a no-win situation for designers; either they
add strange functions and be accused of making a fat language, or leave them
out and be accused of inconsistency.
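
For what it's worth, the keyword mechanism is what lets CL avoid the -Q twins
(straightforward CLtL usage, shown only for illustration):

(member 'b '(a b c))                          ; EQL test by default
(member "b" '("a" "b" "c") :test #'equal)     ; EQUAL test via the :TEST keyword
(assoc 'b '((a . 1) (b . 2)) :test #'eq)      ; same mechanism for ASSOC, UNION, ...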

3. Semantic interconnection. Parts of a design interact with each other.
The time functions use the Universal Time standard that counts in seconds
from 1/1/1900. It's a better choice than the Unix pseudo-standard 1/1/1970,
but unfortunately universal time is almost always a bignum, so you have to
have bignums around. Or consider keywords. If you decide that keywords
to functions are bad and throw them out, what do you do about DEFSTRUCT
constructor functions? I know what's been done in the past, and it's enough
to make one retch - like making constructor macros instead of functions :-( .
Again it's a no-win for designers, since if they didn't make things
interconnected, people would bitch about needless duplication of language concepts.
There is still quite a bit of interest in trying to modularize, but it's
beyond the state of the art. (EuLisp was supposed to be like that, and it
seems to have bogged down...)
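
To illustrate the two interconnections mentioned above (plain CLtL usage; the
exact value returned is of course machine- and date-dependent):

(get-universal-time)            ; seconds since 1/1/1900 -- around 2.7 billion
                                ; in 1987, hence a bignum on most implementations

(defstruct point x y)           ; DEFSTRUCT's constructor takes keyword
(make-point :x 3 :y 4)          ; arguments, so throwing keywords out breaks it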

4. Inconsistent extensions. If the standard does not say anything about
a SORT function, then inevitably several people will write mutually
inconsistent packages for sorting. This generates two sub-problems: first,
programs are continually loading this module or that module. PSL for instance
has hundreds of modules that can be loaded, but it's a pain in practice to
forget one of them (and autoloading has its disadvantages as well). More
importantly, one can get gray hairs trying to integrate two programs each
using a SORT with different arguments and behavior (maybe one package needs
a destructive SORT, and the other a non-destructive one). If you standardize
SORT, the problems go away.

So those are some of the more significant reasons why no subsets are favored.
I urge people to get copies of old drafts of CL and the archived discussions.
They're extremely interesting, and one gets a sense of the number of competing
interests that had to make what they considered to be major compromises
in order to produce anything at all. Would-be introducers of new Lisp
dialects should especially study the material and reflect on their chances
of success...

>...there is one major piece of baggage which I believe has a more adverse
>effect on CL than anything else and that is the absurd complexity
>of the LAMBDA list!

In PCLS we do interprocedural analysis to eliminate completely the overhead
of complex lambdas, by reducing to simple calls. It's extremely effective.
See Utah PASS Project Opnote 86-01 for details.
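
Roughly the kind of reduction being described (the internal name is hypothetical):

(defun draw (x y &key (color 'black) (width 1))
  (list x y color width))

(draw 3 4 :color 'red)          ; => (3 4 RED 1)
;; When the keywords are all visible at the call site, interprocedural analysis
;; can turn this into a plain positional call on an internal entry point,
;; something like (draw-internal 3 4 'red 1), with no keyword parsing at run time.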

>Part of the
>problem with the enormous amount of effort devoted to Common LISP
>is its stifling of other work in the United States.

That's a problem with any standard. Who is it that's being stifled
anyway? Not me...

> Jeffrey M. Jacobs
stan shebs

r...@spice.cs.cmu.edu.uucp

unread,
Feb 15, 1987, 5:07:20 PM2/15/87
to

>Subject: Re: Against the Tide of Common LISP
>Date: 13 Feb 87 19:00:00 GMT
>Nf-From: uicsrd.CSRD.UIUC.EDU!sharma Feb 13 13:00:00 1987

Yeah, this paper is reasonably coherent, but should be taken with a grain
of salt. Some of the arguments in it are semi-bogus in that they present a
problem, but don't present simple, commonly used solutions that largely
solve the problem.

For example, in one section complaining about the inefficiency of the
complex calling mechanisms and their use in language primitives, they
basically construct a straw man out of SUBST (or some similar function).

What they do is observe that SUBST is required to take keywords in Common
Lisp and that the obvious implementation of SUBST is recursive. From this
they leap to the conclusion that a Common Lisp SUBST must gather all the
incoming keys into a rest arg and then use APPLY to pass the keys into
recursive invocations. If this was really necessary, then it would be a
major efficiency problem. Fortunately, this is easily fixed by writing an
internal SUBST that takes normal positional arguments, and then making the
SUBST function call this with the parsed keywords. It is also easy to make
the compiler call the internal function directly, totally avoiding keyword
overhead.
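
A sketch of that technique (ignoring :TEST-NOT and argument checking; the
internal function name is made up):

(defun subst (new old tree &key (test #'eql) key)
  (subst-internal new old tree test key))          ; parse the keywords exactly once

(defun subst-internal (new old tree test key)
  (cond ((funcall test old (if key (funcall key tree) tree)) new)
        ((atom tree) tree)
        (t (cons (subst-internal new old (car tree) test key)
                 (subst-internal new old (cdr tree) test key)))))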

Now presumably the authors knew that this overhead could be avoided by a
modest increment in complexity, but this isn't at all obvious to readers not
familiar with Common Lisp implementation techniques.

As I remember, the paper also complained about the excessive hair in the
Common Lisp ARRAY type preventing the obvious implementation. I agree that
adjustable and displaced arrays are largely useless, and not worth the
overhead. There is no doubt that they got in the language because lisp
machine compatibility was our number one compatibility priority. The
element of bogosity comes into the argument when they neglect to mention
Common Lisp's SIMPLE-ARRAY type, which can be used as a declaration to tell
the system that you haven't done anything weird with this array, and it can
be accessed in a reasonable fashion. This invalidates any argument of
inherent inefficiency of Common Lisp arrays, although it does impose on the
user a bit by requiring the declaration.
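
For example, with standard CLtL declarations (shown as a sketch), the compiler
is free to open-code the array reference:

(defun sum-floats (v n)
  (declare (type (simple-array single-float (*)) v)
           (fixnum n))
  (let ((sum 0.0))
    (dotimes (i n sum)
      (incf sum (aref v i)))))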

Probably the best criticism that they leveled against Common Lisp was aimed
at the numeric types and operations. Since Common Lisp only supports generic
arithmetic, extensive declarations and at least some compiler smarts are
required to generate good code for conventional architectures. On the other
hand, I think that there are powerful cleanliness and portability arguments
in favor of the generic arithmetic decision.
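
The sort of declarations involved (again straight CLtL; how much a given
compiler does with them is implementation-dependent):

(defun weighted-sum (x y w)
  (declare (single-float x y w))
  (+ (* w x) (* (- 1.0 w) y)))    ; with the declarations a good compiler can emit
                                  ; machine float instructions instead of calls
                                  ; to the generic arithmetic routines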

The COMPLEX, RATIO and to a lesser degree BIGNUM types also require
substantial work to implement, yet are not used all that much by "ordinary"
code (whatever that is). A lot of the complexity of numbers in Common Lisp
was motivated by a desire to "do numbers right" in hopes that Common Lisp
would be taken seriously for number crunching. This is definitely a break
with the past, when most implementations had poorly defined and implemented
floating point support.

I also point out that, despite any misgivings voiced in the paper, Gabriel
is a major player in Lucid Inc., whose sole product is a Common Lisp
implementation. Evidently he believes that it is a practical, real-world
programming language.

I think that there is a good chance that Common Lisp will become the
"FORTRAN of Lisps". Some of the constructs will seem bizzare, and many of
the restrictions will seem arbitrary; nobody will attempt to defend it
esthetically, but many people will get lots of work done.

Rob

r...@spice.cs.cmu.edu.uucp

unread,
Feb 15, 1987, 5:54:03 PM2/15/87
to
Since some people may not have understood my claims for the desirability of
a standard not specifying everything, I will elaborate.

Consider the DOTIMES macro. In CMU Common Lisp,
(dotimes (i n) body) ==>

(do ((i 0 (1+ i))
     (<gensym> n))
    ((>= i <gensym>))
  body)

Now, if Common Lisp required this implementation, it would imply that
setting "I" within the body is a meaningful thing to do. Instead, Common
Lisp simply specifies in English what DOTIMES does, and then goes on to say
that the result of setting the loop index is undefined. This allows the
implementation to assume that the loop index is not set, possibly increasing
efficiency.

The same sort of issues are present in the "destructive" functions, possibly
to a greater degree. If an implementation was specified for NREVERSE, then
users could count on the argument being destructively modified in a
particular way. This is bad, since the user doesn't need to know how the
argument is destroyed as long as he uses the result properly, and requiring
the argument to be modified in a particular way would have strong
interactions with highly implementation-dependent properties such as storage
management disciplines. For example, in some implementations it might be
most efficient to make the "destructive" operations identical to the normal
operations, and not modify the argument at all.
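
In other words, portable code must be written like this (a trivial example,
for emphasis):

(setq x (nreverse x))    ; correct: use the result
(nreverse x)             ; wrong: counting on X itself being reversed afterwards
                         ; is exactly the kind of implementation-dependent
                         ; assumption the looser specification rules out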

In any case, the tremendous complexity of Common Lisp would make it very
difficult to specify it all in a formal way such as that used in the ADA
language specification. When reading the Common Lisp manual, you must
assume that whenever the meaning of a construct is not explicitly specified,
it is undefined, and therefore erroneous.

This difficulty of complete specification can be used as an argument against
complex languages such as Common Lisp, but you should remember that
specification is not an end in itself; languages exist to be used.
Completeness of specification certainly doesn't seem to predict language
success. Consider Algol 68 and C.

Rob

mich...@bcsaic.uucp

unread,
Feb 16, 1987, 2:39:30 PM2/16/87
to
In article <42...@utah-cs.UUCP> sh...@utah-cs.UUCP (Stanley Shebs) writes:
>...

>programs are continually loading this module or that module. PSL for instance
>has hundreds of modules that can be loaded, but it's a pain in practice to
>forget one of them (and autoloading has its disadvantages as well)...

I'm curious: what's the disadvantage (besides the time it takes to do it)
of autoloading? We're using a version of Common Lisp that takes up
something over 5 megs just for itself, before we even load in any functions.
When we ran in Franz, we used a *lot* less space. I suspect
that a lot of the difference is in the parts of CL that are over and
above Franz (ratios, real numbers, ad infinitum) and most of which I
have no use for, plus the things that Franz autoloaded (machacks, the tracer,
etc.). What's the disadvantage to autoloading?
--
Mike Maxwell
Boeing Advanced Technology Center
arpa: mich...@boeing.com
uucp: uw-beaver!uw-june!bcsaic!michaelm

an...@shasta.uucp

unread,
Feb 17, 1987, 3:24:52 AM2/17/87
to
In article <25...@well.UUCP> jja...@well.UUCP (Jeffrey Jacobs) writes:
>You know perfectly well that interpreting macros takes more time!
>First you have to EVAL the form, and then give the result to EVAL
>again! (And I *don't* consider it kosher to destructively expand
>without the user's control).

It is possible to cache the result of macro expansion without
destructively modifying user code. If this technique is implemented
correctly, this cache is flushed appropriately when a macro is redefined
(or changed to a function). In other words, speed is the only difference
between using this technique and re-expanding the macro anew each time the
relevant code is eval'd.
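
One way such a cache might look (a sketch under the stated assumptions; the
names and the redefinition hook are hypothetical):

(defvar *expansion-cache* (make-hash-table :test #'eq))

(defun cached-macroexpand (form env)
  ;; Key on the original cons cell, so user code is never modified in place.
  (or (gethash form *expansion-cache*)
      (setf (gethash form *expansion-cache*)
            (macroexpand form env))))

;; Redefining a macro would then simply flush the cache:
;;   (clrhash *expansion-cache*)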

If one is slightly more clever, one can apply a variation of this
technique to function bodies and perform some other analysis at the
same time so that interpreted code runs fairly fast. (No, you don't
have to do it when the function is first called. There are other
opportunities.) Of course, you have to do this right or it gets in
the way of debugging, but ....

I'm sure that major CL vendors use better techniques than I can come
up with in 10 minutes.

I've forgotten why JJ is so down on macros; doesn't "real" lisp have
them?

> Certainly *parts* of [Common Lisp] can be implemented in itself;
>why should they then be considered a critical part of the language that
>can't be done without?

Every lisp dialect that I've written non-trivial programs in (Interlisp,
Maclisp, Franz, T) predefines forms that I could have defined myself using
other forms. This is good. I prefer to build on other people's work;
Turing machine programming is so tedious. Since CL is not a minimal
language, each vendor can choose a different core and implement the rest
of the language using it. This too is good; it leads to higher performance.

-andy
--
Andy Freeman
UUCP: ...!decwrl!shasta!andy forwards to
ARPA: an...@sushi.stanford.edu
(415) 329-1718/723-3088 home/cubicle

sh...@utah-orion.uucp

unread,
Feb 17, 1987, 12:30:39 PM2/17/87
to
In article <3...@bcsaic.UUCP> mich...@bcsaic.UUCP (Michael Maxwell) writes:

>I'm curious: what's the disadvantage (besides the time it takes to do it)
>of autoloading?

In order to provide for autoloading of a function, you have to preload the
name of the function and the place where its code lives, i.e. a form like

(define-autoload sort "/net/fileserver/usr/lib/cl/sort.b")

or some such. You have to have both the symbol SORT and the pathname for its
file present all the time. This may not seem like a big deal, but consider
that CL has over 600 functions---you chew up a bit of space for all those
(tho admittedly not several megs!). A decent autoloading system should also
be more clever than just to look at functions - consider FORMAT with strange
options, or even ordinary /. It would be unreasonable to load Roman numeral
printing just because you happened to say (format t "hi there.~%"), or to load
ratios just because / can produce them sometimes. The observant reader will
suggest that autoloading be performed on internal functions like RATIO/ and
PRINT-ROMAN, but of course there are thousands and thousands of internal
functions. Finally, the loading time overhead can be nontrivial---if the
implementation is highly interconnected internally, you may end up loading
most of the system the hard way anyhow (it's not implausible for INTERSECT
to call SORT, which calls MAKE-ARRAY, ...).

Of course, none of these are insurmountable problems, but they do tend
to discourage implementors from building a system based on autoloading.
It *is* a useful feature to have!
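
For concreteness, a stub-based DEFINE-AUTOLOAD might be sketched like this (my
own guess at how such a form could work; it presumes the loaded file redefines
the function):

(defmacro define-autoload (name file)
  `(defun ,name (&rest args)
     (load ,file)                 ; loading replaces this stub definition...
     (apply ',name args)))        ; ...so calling through the symbol runs the real code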

>We're using a version of Common Lisp that takes up
>something over 5 megs just for itself,

Sounds suspiciously like Lucid's system, which is unusually voluminous,
partly because the compiler is resident (now *there's* a function to
autoload!). Many people believe that a full CL can be fit into one meg
of memory, but it requires serious attention to space optimization, which is
currently unfashionable - "just add another memory board!". I wonder
if compiler writers ever get kickbacks from memory manufacturers... :-(

>Mike Maxwell

stan

wa...@su-russell.uucp

unread,
Feb 18, 1987, 3:31:20 AM2/18/87
to
I see no reason to support autoloading in a system which supports a
decent implementation of virtual memory (UNIX does not fall into this
class). A good virtual memory implementation completely subsumes
autoloading by keeping only what you are likely to need in your
working set (this ignores the cost of page table size, etc.).

sh...@utah-orion.uucp

unread,
Feb 18, 1987, 10:31:35 AM2/18/87
to

This is yet another reason for implementors being indifferent about
autoloading. Unfortunately, there is a tendency to shrug and assume
that "virtual memory will take care of problem X".

Virtual memories can be sandbagged by poor implementation style, such
as a high degree of internal connection. Adding flavor to a user interface
by using kinky FORMAT options, or defining all the sequence functions
in terms of each other has severe costs in terms of paging rate. You
also need cooperation from storage management so that GC doesn't page in
every single last byte of system code. Lots of research has been done
on this, but there isn't even a consensus on reference counting vs GC
vs hybrids...

stan

ch...@hpfclp.uucp

unread,
Feb 18, 1987, 1:34:33 PM2/18/87
to
>>Well, the times they are a changin'... Of course, if you understood lexical
>>variables, you would understand why you can't compute a variable name at run
>>time and then reference it.

>B.S! All the compiled code for SET need do is check that the first argument
>be lexically equivalent to a lexically apparent variable and change
>the appropriate cell, stack location, or whatever. Easy for a compiler
>to do!

I don't see how it's possible to do this (excuse my potential ignorance).
Once the target argument for SET is evaluated and you have some symbol,
how does the compiled code decide whether or not the symbol identifies
a lexical variable? It seems to me that the information identifying the
names of lexical variables (and their place on the stack) has been compiled
away. It seems like this can't be done for the same reason that you
can't EVAL a form that contains a reference to a lexical variable.
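
Right -- CLtL in fact specifies that SET only ever affects the dynamic
(symbol) value, never a lexical binding. A small illustration:

(defun try-it ()
  (let ((secret 1))
    (set 'secret 2)      ; sets the SYMBOL-VALUE of SECRET, i.e. the dynamic value
    secret))             ; => 1 : the lexical binding is untouched

(try-it)                 ; returns 1, and leaves (symbol-value 'secret) = 2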

Anyway, if you're keeping track, I prefer to ride the wave rather than
go against the tide. Hang Ten!

-- Chan Benson
{ihnp4 | hplabs}!hpfcla!chan
Hewlett-Packard Company
Fort Collins, CO

As usual, HP has nothing to do with what I say here.

wa...@su-russell.uucp

unread,
Feb 19, 1987, 3:01:29 AM2/19/87
to
Stan Shebs is correct in saying that complex function
inter-dependencies can cause poor paging behavior; however,
autoloading doesn't help this problem one bit. In fact, it can
actually aggravate the problem by paging in many useless objects.
Autoloading is a poor substitute for a good virtual memory system
because the grain size is too large - you generally must load a
complete group of related functions and you must waste time parsing
the binary file format at run time. Virtual memory avoids both of
these problems. I'm only arguing against autoloading in a good virtual
memory system, though.

Many popular operating systems do not have wonderful virtual memory
systems. UNIX, for example, handles large processes so poorly that I
might argue that autoloading *is* a good idea when writing a Lisp to
run on it.

Wade

r...@spice.cs.cmu.edu.uucp

unread,
Feb 20, 1987, 5:20:51 AM2/20/87
to
In article <1...@utah-orion.UUCP> sh...@utah-orion.UUCP (Stanley T. Shebs) writes:
>In article <3...@bcsaic.UUCP> mich...@bcsaic.UUCP (Michael Maxwell) writes:
>
>>I'm curious: what's the disadvantage (besides the time it takes to do it)
>>of autoloading?

Another point is that unless you are careful to make fasloaded code
shareable, autoloads for popular functions will actually waste memory. Many
systems make it possible to share the initial read-only portion of a
program, but do not allow shareable reading of files after a program has
started. If you have a VM system good enough to allow those sorts of games,
then autoloading is probably of dubious benefit.

The system I am most familiar with that made extensive use of autoloads was
Maclisp. In the case of Maclisp, the autoloads were more of a way of
getting around the prohibitively small PDP-10 address space, rather than of
reducing memory usage.

Our approach to building systems here at CMU under Accent and now under Mach
is to put all standard facilities in the core. Our cold-load builder
initially allocates all objects dynamically. When we build a full system,
we load in the editor, compiler, etc., once again allocating all structures
dynamically. We then use a GC-like utility called Purify that moves all
accessible objects into statically allocated storage. Objects that are
obviously unmodifiable such as function constants are moved into a
non-scavenged (unmarked) area, allowing GC to ignore these objects since
they can only point to statically allocated objects that cannot move.

Since Mach supports copy-on-write mapping of files, the entire Lisp image is
initially shareable. Only as objects are modified do pages become
non-sharable. Since the bulk of the system is code and strings that are
never modified, even a well-used Lisp is still largely sharable. I believe
that sharing is important even on single-user machines, since it is often
useful to have several lisps running.

Purify also uses heuristics to attempt to improve the code locality in the
resulting core image. It basically does a breadth-first traversal of the
call graph, placing the code for functions in the order it reaches them.
Eyeballing of the core file suggests that this does place groups of related
functions together, and subjective reports indicate that there is a
resulting improvement in response time. Unfortunately these effects are
difficult to quantify since they are largely things such as reductions in
the time to swap in the compiler after editing for a while.

Lisp is often accused of "bad locality"; although this is true to some
degree, it is also largely a result of an apples-and-oranges comparison.
The system manager looks at this "lisp" process and observes that it has a
huge amount of memory allocated, and has only accessed scattered parts of it
in recent history. If you compare it to a nice C program like grep, then it
has bad locality; the thing is that the C programmer doesn't just sit there
all day using grep, he also uses editors and debuggers and does all kinds of
file I/O. If you mashed all that stuff together into one address space then
you would see bad locality too. The reason that the lisp thrashes the
system and the C programmer doesn't is that the changes of context that the
C programmer makes are often explicitly flagged to the system by process
creation and termination.

One of the things that Purify attempts to do is place different systems in
different parts of memory so that the Lisp behaves more like a collection of
programs than a big ball of spaghetti. This is done by specifying a list of
"root structures"; the stuff reachable from a given root structure is
transported in one shot, and then you go on to the next root structure.
Currently our root structures are the top-level loop, the compiler, the
editor, and the editor key bindings table. Other symbols with no references
are also treated as call-graph roots.

Another advantage of the strategy of initially allocating all objects
dynamically is that it allows stripped delivery vehicle Lisps to be created by
GC'ing away unreferenced objects. There is a version of purify that
uninterns all symbols and then does a GC while keeping a finger on a root
function. After the GC, symbols that still exist are reinterned, and the
root function is called when the suspended core image is resumed.

I haven't used this facility much, since it makes our life easier to
maintain only one core and inflict it on everyone. I once built a system
rooted on our editor, and it was about 50% smaller than the full system,
weighing in at about 1meg with a dense byte-coded instruction set. The
editor is a sizable system that makes no attempt to avoid hairy Common Lisp
features (the opposite if anything). I also didn't destroy the error
system, which meant that the READ and EVAL and PRINT and the debugger and
format ~:R were still all there. Things like bignum arithmetic also won't
go away unless you explicitly blast them, since they are implicitly
referenced by the standard generic arithmetic routines.


>
>>We're using a version of Common Lisp that takes up
>>something over 5 megs just for itself,
>
>Sounds suspiciously like Lucid's system, which is unusually voluminous,
>partly because the compiler is resident (now *there's* a function to
>autoload!).

This really depends on your goal. Here at CMU we are more interested in
having great Lisp development environment than in having a Lisp you can use
for minimal $$$. I am much more intimidated by the functionality present in
the Lisp machine's 20meg+ world than I am intimidated by the efficiency of a
minimal Lisp that can run a silly benchmark in 512k.

The Lucid system contains both a resident compiler and editor, which is
required to implement the sort of incremental development environment that
Lisp machines offer. I don't doubt that an interpreted development cycle
would be more efficient of resources, but I would prefer what I have.

Yes I have used Interlisp. Back in my timesharing days I preferred it to
Maclisp, since it offered an integrated development environment. Everyone
thought I was crazy since it was so big (and therefore slow). Today I am
using an integrated Lisp environment on a machine with 5x as much physical
memory as the 20 had virtual memory; I *like* it.

>Many people believe that a full CL can be fit into one meg
>of memory, but it requires serious attention to space optimization, which is
>currently unfashionable - "just add another memory board!".

I have little doubt that this is true if you used a byte-code interpreter.
The bare PERQ Spice Lisp system wasn't much more than a meg, and it was
optimized more for time than space, and had 150k of doc strings. The
question is why bother? The only use I can think of for such a toy system
would be for educational use on PC's. Once you start layering a real
development environment on top, it won't make much difference whether the
root Lisp is 500k or 5meg.

The day that Lisp is no longer perceived as being wasteful of resources will
be the day that Lisp is permanently banished from the cutting edge of
programming environment development. The current flamage shows that the day
is not here yet, but I can see an omen in the comparison to Smalltalk.
Smalltalk's pervasive object-oriented paradigm wastes CPU in a way
unthinkable in a Lisp system, and Smalltalk based systems are also
accomplishing amazing things.

>I wonder
>if compiler writers ever get kickbacks from memory manufacturers... :-(
>
>>Mike Maxwell
>
> stan

Well, I haven't gotten mine yet, but they say the check is in the mail...

Rob

ch...@mimsy.uucp

unread,
Feb 20, 1987, 10:49:54 AM2/20/87
to
In article <2...@su-russell.ARPA> wa...@su-russell.ARPA (Wade Hennessey) writes:
>Many popular operating systems do not have wonderful virtual memory
>systems. UNIX, for example, handles large processes so poorly that I
>might argue that autoloading *is* a good idea when writing a Lisp to
>run on it.

Unix? What is a Unix?

Please note that there are many different virtual memory systems
out there, all running under some Unix variant. They may indeed
all be bad, but saying `Unix handles large processes poorly' does
not provide enough information for someone who would be willing
to fix it. Which Unix? (The 4.[123]BSD VM code is generally
acknowledged as overdue for a rewrite. One is in progress, but
I have no details.)

Back to the `main point': autoloading generally gives up time in
favour of space---file system space. One can have umpteen versions
of a lisp binary that all share the same package. In a way this
is much like shared libraries. (If your system does the sharing
automatically, without requiring `autoload's or other helpful hints,
well, that *is* neat.)
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7690)
UUCP: seismo!mimsy!chris ARPA/CSNet: ch...@mimsy.umd.edu

pa...@osu-eddie.uucp

unread,
Feb 23, 1987, 5:31:45 PM2/23/87
to
In article <2...@su-russell.ARPA> wa...@su-russell.ARPA (Wade Hennessey) writes:
>
>Many popular operating systems do not have wonderful virtual memory
>systems. UNIX, for example, handles large processes so poorly that I
>might argue that autoloading *is* a good idea when writing a Lisp to
>run on it.
>
>Wade

No, this is not altogether true. Unix will handle a very large
program just fine, as long as the memory references are not scattered,
but are instead mostly local and/or sequential. The problem here is
that most Lisp implementors are too lazy to carefully watch how their
lisp deals with VM.

One can, of course, give your Lisp a compacting GC, but this opens up
a whole new can of worms, including, possibly, reduced GC performance...

wa...@su-russell.uucp

unread,
Feb 24, 1987, 1:55:28 PM2/24/87
to
When I said that Unix didn't handle VM well, I was referring to SUN
Unix. I wasn't complaining about locality problems - any virtual
memory system has a problem with scattered references.

I was complaining about the need to have the maximum amount of
swapping space (I tend to call it paging space) needed by a process
available at image startup. I've frequently been unable to run a Lisp
on the SUN because Unix doesn't think it has enough swapping space.

I also wish that Unix had features like copy-on-write and demand-zero
pages.

b...@bu-cs.uucp

unread,
Feb 28, 1987, 7:10:31 PM2/28/87
to

Re: all the problems with autoloading.

Hmm, seems to me that a solution would be to have a LISP hook into an
orderly virtual memory system so packages can be put (preferably
read-only and shared, re-binding would just stop pointing at the
read-only version, that happens in lisps now on such systems that
support read-only code) on page cluster (ie. 1 or more pages)
boundaries.

This should give all the advantages of auto-loading with the added
advantage of a possible auto-unloading automatically by a virtual
memory system. That is, all the features, none of the hassles,
transparent.

Seems like if we took a UNIX loader and convinced it to understand
some sort of hook (hmm, perhaps just "align each .o file" would be
enough, certainly as a start) to re-align packages* onto page or
segment boundaries it would just sort of happen.

Or am I missing something here?

[start stream-of-consciousness, sorry]

Well, replacing a package would now necessitate re-linking the LISP,
is that a fatal problem? (if everything was so neatly aligned how hard
could it be to just re-load [eg. with ld's -A flag] at the same address
and put it back into the runnable image, hmm, unless it didn't fit,
ok, well, try that and if not then re-build (or, hell, shuffle
everything down, no, that may cause problems with non-relocatable
code, ok, rebuild.) We are starting to see UNIX systems which re-build
on every boot (SYS/V), not free, but we're not talking about doing it
that often either (not on every start-up, just if a package changes.))

I assume I am going to hear that such-and-such did this in 1929, oh well.

-Barry Shein, Boston University

* Not necessarily in the CL sense of the word, I just mean a group
of functions which would be auto-loaded together.

Bruce Robertson

unread,
Mar 8, 1987, 1:45:23 AM3/8/87
to
In article <2...@su-russell.ARPA> wa...@su-russell.ARPA (Wade Hennessey) writes:
>I also wish that Unix had features like copy-on-write and demand-zero
>pages.

Much as I dislike System V, the virtual memory system provided by AT&T
does not require you to have enough swap space to contain the process
no matter how big it might get, and it does have copy-on-write and
demand-zero pages. Because of copy-on-write, data pages are stored in
the page cache as well as text pages, resulting in commonly used
programs starting up much more quickly.

In fact, we are using the System V virtual memory implementation in
the 4.3bsd port that we are working on, rather than Berkeley's
implementation.
--

Bruce Robertson
br...@stride.Stride.COM
cbosgd!utah-cs!utah-gr!stride!brucad
