
Which one, Lisp or Scheme?


Yunho Jeon

Jan 20, 1997

Hello,
I have an idea to experiment with, and it needs a language that can
execute code produced by the program itself. Naturally, I thought LISP would
be the best choice. But while reading the LISP FAQ, I found that there are
some variants of the language. Notably, Scheme seemed to be a more modern and
cleaner language than LISP. Because I have almost no experience with those
languages, I would like to get answers from LISP/Scheme experts
to the following questions.

1) Which language is easier to learn (for a C/C++ programmer)?
Scheme seems to be easier because it is much smaller than Common Lisp,
but how big is the difference (in terms of learning curve)?

2) The command to parse and execute a source program is 'eval', right?
If it is, is it a standard feature of Scheme? The Scheme FAQ was not
very clear on this point. Surely some implementations have it, but
the language seems to be more suited to compilers. Does the run-time
have an embedded compiler? What is its overhead (size/speed)?
If I can't use a compiler for my purposes, what do I lose and how much
(again in terms of speed)?

3) What about foreign-language (C/C++) interfaces and GUIs?
It's not essential for now, but it may be needed in the future.

4) I am going to use Linux as the experiment platform, and don't want to
buy any commercial software - I'm only a student, and it's hard to buy
and get support for such software in Korea, where I live.
Both languages have a lot of free implementations, but are they mature
enough?

Thanks in advance for any help.
Best regards,
------------------------------------------------------------------------------
Yunho Jeon Tel: +82-2-880-6482~90 ext) 416
Intelligent Control Lab +82-2-875-9183
School of Electrical Engineering Fax: +82-2-888-4182
Seoul National University, Korea Email: yu...@csl.snu.ac.kr


Rainer Joswig

Jan 20, 1997

> 1) Which language is easier to learn (for C/C++ programmer)?
> Scheme seems to be easier because it is much smaller than common lisp,
> but how big is the difference (in terms of learning curve)?

Common Lisp comes with a larger library, has some rough edges,
is more "industrial strength", ...

Scheme is much smaller, has some very different implementations,
has a lot of non-standard extensions, ..

Both Common Lisp and Scheme basics are relatively easy to learn.

> 2) The command to parse and execute a source program is 'eval', right?

Common Lisp has "READ", "EVAL" and "COMPILE".

> If it is, is it a standard feature of Scheme? The Scheme FAQ was not
> very clear on this point.

EVAL is not a standard feature of Scheme. Still, most Scheme systems
have it.
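
For a concrete (if contrived) Common Lisp sketch of how READ, EVAL and
COMPILE fit together - the SQUARE example below is made up, not from any
particular system:

  ;; READ turns text into list structure, EVAL executes that structure,
  ;; and COMPILE replaces the definition with compiled code.
  (let* ((text "(defun square (x) (* x x))")
         (form (read-from-string text)))  ; => (DEFUN SQUARE (X) (* X X))
    (eval form)                           ; defines SQUARE
    (compile 'square)                     ; compiles the new definition
    (square 12))                          ; => 144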

> 3) What about foreign language (C/C++) language interface and GUI's?
> It's not essential for now, but it may be needed in future.

FFI and GUI functionality are available, but no standard is in sight.


> 4) I am going to use Linux as the experiment platform, and don't want to
> buy any commercial software - I'm only a student and it's hard to buy
> and get support of such a software in Korea, where I live.
> Both languages have a lot of free implementations, but are they mature
> enough?

You may want to use Allegro CL 4.3 from Franz Inc. for Linux.
It is ***free*** for non-commercial use. ACL 4.3 should be mature enough. ;-)
See http://www.franz.com/ for how to get it. I got my CD-ROM (thanks, Franz!)
but haven't tried it yet.


Rainer Joswig

Jussi Mantere

Jan 20, 1997

Yunho Jeon (yu...@csl.snu.ac.kr) wrote:
: 2) The command to parse and execute a source program is 'eval', right?
: If it is, is it a standard feature of Scheme? The Scheme FAQ was not
: very clear on this point. Surely some implementations have it, but

: the language seems to be more suited to compilers. Does the run-time
: have an embedded compiler? What's the overhead of it (size/speed)?
: If I can't use a compiler for my purposes, what do I lose and how much
: (again in terms of speed)?
You're thinking way too C here.
When you install a Scheme (or any LISP) package on your computer,
you install the _interpreter_, or evaluator.
If you run the interpreter and type any command or procedure, the
evaluator automatically evaluates it and displays whatever value it's
supposed to display.

So, a "source program" is actually just a huge procedure which you'll execute
like any command.

(eval <op> <env>) as such is, afaik, a standard feature in scheme, defined in
R4RS. It evaluates whatever you feed to it in a given environment.

See SICP for more info;)


: 4) I am going to use Linux as the experiment platform, and don't want to
: buy any commercial software - I'm only a student and it's hard to buy
: and get support of such a software in Korea, where I live.
: Both languages have a lot of free implementations, but are they mature
: enough?

They are mature enough. Don't buy anything commercial.
Use Emacs and whatever scheme implementation you find convenient.
Guile, MIT Scheme... whatever.

-obs
--
(define me '((jussi mantere) (jmt 6b112a 02150 espoo) (09-468 2718)
(o...@iki.fi) (http://www.iki.fi/~obs)
(TiK abilobbari '97)))
Mikä Ihmeen Tiainen? Saakeli Cun Häiritsee - En Muista Enää!

Thant Tessman

Jan 20, 1997

Yunho Jeon wrote:

> I have some idea to experiment with, and the idea needs a language which can
> execute codes produced by the program itself. [...]

> 2) The command to parse and execute a source program is 'eval', right?

Yes, but it's probably not what you need. The magical part of Scheme is
"lambda", which is how functions build other functions.

Actually, Scheme contains three levels of enlightenment. The first is
higher-order functions (lambda). The second is continuations
(call-with-current-continuation), and the third is macros.

Each will thoroughly hurt your brain, but re-birth is a painful process.
If you persevere you will be transformed, thrice, into a higher being.

-thant

Howard R. Stearns

Jan 20, 1997

Yunho Jeon wrote:
>
> Hello,

> I have some idea to experiment with, and the idea needs a language which can
> execute codes produced by the program itself. Naturally, I thought LISP would
> be the best choice. But while reading LISP FAQ, I found that there are
> some variants of the language. Notably, Scheme seemed to be a more modern and
> cleaner language than LISP. Because I have almost no experience in those
> languages, I would like to get answers from LISP/Scheme experts
> for the following questions.
> ...
> 2) The command to parse and execute a source program is 'eval', right?
> ...

Sort of.

The function READ parses information from a character stream and creates
lisp data (lists of literals, symbols, and more lists) that represent
the program "text".

The function EVAL can be used to execute such data as a program. EVAL
does not operate on characters, strings or streams.

In practice, EVAL is usually not necessary. In my experience, EVAL is
used in most textbooks only in discussing the implementation of an
interpreter (for Lisp or some other language). This is NOT the only way
to have a program execute utilities that are produced by the program
itself. Most projects are more cleanly and efficiently written using
compiled closures.
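
For instance (a minimal, made-up sketch, not from any particular project):
instead of constructing list structure and handing it to EVAL, you can often
just return a closure, which the compiler has already processed:

  ;; EVAL-style: build list structure and evaluate it at run time.
  (defun make-adder-via-eval (n)
    (eval `(lambda (x) (+ x ,n))))

  ;; Closure-style: the (already compiled) LAMBDA simply closes over N.
  (defun make-adder-via-closure (n)
    (lambda (x) (+ x n)))

  (funcall (make-adder-via-closure 3) 4)   ; => 7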

Will Hartung

Jan 20, 1997

yu...@csl.snu.ac.kr (Yunho Jeon) writes:

>Hello,
>I have some idea to experiment with, and the idea needs a language which can
>execute codes produced by the program itself. Naturally, I thought LISP would
>be the best choice. But while reading LISP FAQ, I found that there are
>some variants of the language. Notably, Scheme seemed to be a more modern and
>cleaner language than LISP. Because I have almost no experience in those
>languages, I would like to get answers from LISP/Scheme experts
>for the following questions.

I'm no expert, but that hasn't stopped me before.

>1) Which language is easier to learn (for C/C++ programmer)?
> Scheme seems to be easier because it is much smaller than common lisp,
> but how big is the difference (in terms of learning curve)?

Frankly, I would say that Lisp would be easier to learn than Scheme
for a C/C++ programmer.

Scheme is a lovely, elegant language, and is, I believe, simpler and
easier to learn in its own right. It is hard not to like Scheme. But,
for someone who has a lot of history with C/C++, the way Scheme is
presented could throw you for a loop. You, as the student, would
probably take the approach of trying to learn "Scheme Syntax", whereas
the books spend more time on the "Scheme Way".

The "Scheme Way" of programming is very functional, lots of recursion,
local helper functions, etc. It is really a pretty nice way to go
about task of coding. However, its not the way MOST people
(particularly C/C++ people) write code. The idioms are all wrong.

If you look at how Lisp is presented, especially in something like
Paul Graham's "ANSI Common Lisp" book, it is easier to see how your
entrenched C/C++ idioms translate into Lisp syntax and structures.

Once you get past the hurdle of the fact that you don't need pointers,
it's pretty easy to write C/C++ code in a Lisp syntax. And Common Lisp
has an enormous catalog of functions to do all sorts of things. All of
the structures you are used to are in the language.

If you want to change the way you think about programming and problem
solving, then grab all of the Scheme books, dig in, strap yourself
down, and hang on for a wild ride. It's quite a trip.

If you just want to work on your task, using your current mindset,
then get into Common Lisp, and let your fingers do the talking. You can
treat CL like C/C++ much more easily. However, I do suggest you go in with
an open mind for the new, more powerful ways of solving problems that CL
can provide for you.

>2) The command to parse and execute a source program is 'eval', right?

> If it is, is it a standard feature of Scheme? The Scheme FAQ was not
> very clear on this point. Surely some implementations have it, but
> the language seems to be more suited to compilers. Does the run-time
> have an embedded compiler? What's the overhead of it (size/speed)?
> If I can't use compiler for my purposes, what do I lose and how much
> (again in terms of speed)?

'eval' is standard in CL, not standard in Scheme, but as has been
mentioned, many Schemes provide it. 'eval' can incur a pretty dramatic
hit on the execution size of a program. How much overhead depends on
the implementation.

>3) What about foreign language (C/C++) language interface and GUI's?
> It's not essential for now, but it may be needed in future.

Many systems provide foreign function interfaces. GUIs are available,
though less prominent.

>4) I am going to use Linux as the experiment platform, and don't want to
> buy any commercial software - I'm only a student and it's hard to buy
> and get support of such a software in Korea, where I live.
> Both languages have a lot of free implementations, but are they mature
> enough?

There are several systems available for Linux. The FAQ lists most of
them. I like Aubrey Jaffer's SCM Scheme package, and the latest Gambit-C
2.2 Scheme compiler is getting a lot of good press. Scheme packages
differ wildly, so check the details.

There is a lot of "real" work going on in many of these packages, and
most are very mature.

Good Luck!

--
Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison. -D. Duck

Erik Naggum

Jan 21, 1997

* Jussi Mantere

| When you install a Scheme (or any LISP) package on your computer,
| you install the _interpreter_, or evaluator.

this is factually wrong.

| (eval <op> <env>) as such is, afaik, a standard feature in scheme,
| defined in R4RS.

this is factually wrong.

| They are mature enough. Don't buy anything commercial.
| Use Emacs and whatever scheme implementation you find convenient.
| Guile, MIT Scheme... whatever.

this is disquieting. the strongest effects of free Lisp implementations to
date have been to turn people away from Lisp due to low performance, high
memory usage, etc; to perpetrate the _myth_ that all Lisps are interpreted,
that the language is slow, etc; to make people believe that Lisps don't fit
in with the rest of the operating system, that you can't make executables;
etc ad nauseam.

commercial implementations have taken Lisps out of the experimental lab and
made them shippable and supportable as useful systems. apparently, the
discipline needed to do this is not available for free, so it is safe to
assume it is hard, mostly uninspiring, work. (note, however, that my
experience is with Common Lisp. I don't know Scheme very well.)

#\Erik
--
1,3,7-trimethylxanthine -- a basic ingredient in quality software.

aro...@momotombo.austin.ibm.com

Jan 21, 1997

Thant Tessman wrote:
> Actually, Scheme contains three levels of enlightenment. The first is
> higher-order functions (lambda). The second is continuations
> (call-with-current-continuation), and the third is macros.

Let's rename Scheme 'Scheme Trismegistus'.

Chris Bitmead

Jan 22, 1997

In article <30627981...@naggum.no> Erik Naggum <er...@naggum.no> writes:

>| They are mature enough. Don't buy anything commercial.
>| Use Emacs and whatever scheme implementation you find convenient.
>| Guile, MIT Scheme... whatever.
>
>this is disquieting. the strongest effects of free Lisp implementations to
>date have been to turn people away from Lisp due to low performance, high
>memory usage, etc; to perpetrate the _myth_ that all Lisps are interpreted,
>that the language is slow, etc; to make people believe that Lisps don't fit
>in with the rest of the operating system, that you can't make executables;
>etc ad nauseam.
>
>commercial implementations have taken Lisps out of the experimental lab and
>made them shippable and supportable as useful systems. apparently, the
>discipline needed to do this is not available for free, so it is safe to
>assume it is hard, mostly uninspiring, work. (note, however, that my
>experience is with Common Lisp. I don't know Scheme very well.)

There are free Scheme and Lisp compilers capable of producing binary
executables. So you don't need a commercial product. (Although I'm
sure Franz lisp is an excellent product).

Erik Naggum

Jan 22, 1997

* Chris Bitmead

| There are free Scheme and Lisp compilers capable of producing binary
| executables. So you don't need a commercial product. (Although I'm sure
| Franz lisp is an excellent product).

it may say more about my experience than anything else, but I grabbed all
the (free) Common Lisp implementations I could get my hands on for my
SPARC, including akcl, gcl, wcl, clisp, cmucl, and since I didn't have any
experience from any "real" Lisp systems, didn't know what I missed outside
of CLtLn (n = 1 (akcl, gcl, wcl) or 2 (clisp, cmucl)). I don't want to go
advertising any products, but when I got my first commercial Lisp system
six weeks ago, I stopped working on my (Lisp) projects and sat down to
learn the _rest_ of the Lisp systems, as documented in about 1200 pages.
this has indeed paid off _very_ handsomely, yet it tells me that if all you
have ever seen are the free Lisps, you might be in for a very big surprise
when you get a Lisp-machine-like commercial implementation of Lisp.

(however, I might easily have missed similar software for free Lisps -- I
didn't know what to look for. maybe it would be useful if somebody who
knows what to look for in each compared free and commercial Lisp?)

Martin Cracauer

Jan 23, 1997

Erik Naggum <er...@naggum.no> writes:

>| They are mature enough. Don't buy anything commercial.
>| Use Emacs and whatever scheme implementation you find convenient.
>| Guile, MIT Scheme... whatever.

>this is disquieting. the strongest effects of free Lisp implementations to
>date have been to turn people away from Lisp due to low performance, high
>memory usage, etc; to perpetrate the _myth_ that all Lisps are interpreted,
>that the language is slow, etc; to make people believe that Lisps don't fit
>in with the rest of the operating system, that you can't make executables;
>etc ad nauseam.

CMUCL is as fast as, and even smaller than, most commercial implementations
on Unix. The only things I miss are threads and a better garbage
collector (although CMUCL's superior warnings help not to produce as
much garbage in the first place).

A commercial implementation has a nice environment, nice browsers,
maybe an editor written in Common Lisp and therefore controllable from Lisp,
and I found it very valuable to have such a visualization toolkit
around when I learned Common Lisp. But I think Erik missed the point
here.

I agree with the point that many free Lisp implementations are slow
and fail to point out in their documentation that one can run the same
program faster. In fact, I already had an argument with the author of
Xlisp about it after some magazine compared Lisp and perl (that is
Xlisp and perl) and headlined that Lisp is slow.

The problem here is that people choose a free implementation by other
criteria than speed and then complain it is too slow because they
underestimated the amount of efficiency they give up.

The Scheme community - with eval sometimes implemented as a write/read
to disk, a total lack of declarations, and implementations that
treat numbers as 32-bit-limited without the programmer's permission - is
another issue. While the author is not responsible, slib was what
turned me away from Scheme rather quickly. Some slib functionality is
unbelievably slow in many implementations, functionality that is
standard in Common Lisp and Perl and therefore implemented either in C
or as overhead-free, declared code.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin_...@wavehh.hanse.de http://cracauer.cons.org Fax.: +4940 5228536
"As far as I'm concerned, if something is so complicated that you can't ex-
plain it in 10 seconds, then it's probably not worth knowing anyway"- Calvin

Erik Naggum

Jan 24, 1997

* Martin Cracauer

| CMUCL is as fast and even smaller than most commercial implementations on
| Unix. The only things I miss are threads and a better garbage collector
| (although CMUCL's superior warnings help not to produce as much garbage
| in the first place).

ah, agreed, but CMUCL is a breed apart from the rest. it compiles to
native code directly for a number of platforms, which is a formidable task,
and it is exceptionally good at helping the programmer declare types of
values when it could have used a hint. the code it generates is also quite
good. however, this is not the norm for free Lisp implementations.

out of the several hundred free Lisps and Schemes out there, most are toys.
they may be fun toys, but they are still toys. I think this is because it
is so easy to write a toy Lisp or Scheme, and so hard to write a fully
functional Lisp system.

| A commercial implementation has a nice environment, nice browsers, maybe
| an editor in Common Lisp and therefore controllable from Lisp and I found
| it very valuable to have such a visualization toolkit around when I
| learned Common Lisp. But I think Erik missed the point here.

well, those have been immaterial to me, but I'm sure we meet different
aspects of Lisp systems according to our experience. I have never used a
real Lisp system for real until I got Allegro CL, so I was impressed by how
much the system knew about itself, how much time the source code manager
saved me, how well the debugging is integrated with the cross-referencing
utilities, etc. example: I have about 200 functions in a package I'm
writing, spread across several files, and I can ask who calls a given
function, directly or indirectly. I know which functions and files to
recompile after changing a macro by asking for who uses it. I can ask the
system to tell me which functions bind, set, and/or reference a variable.
I have needed this functionality for _years_, not just in Lisp. Allegro
also has search lists that can do wondrous things, too. it's everything
_outside_ of the common Common Lisp stratum that impresses me most.

| In fact, I already had an argument with the author of Xlisp about it
| after some magazine compared Lisp and perl (that is Xlisp and perl) and
| headlined that Lisp is slow.
|
| The problem here is that people choose a free implementation by other
| criteria than speed and then complain it is too slow because they
| underestimated the amount of efficiency they give up.

I see a possible pattern here. if C was slow on some machine or in some
particular implementation, nobody would blame C or headline that C is slow.
the myth is that Lisp is slow, and every time somebody meets a slow Lisp,
that myth is reinforced. that you can compile with CMUCL at high speed
settings and beat the sh*t out of C is just as much an aberration as C
running under some bounds-checking and memory-tracking package is slow.
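
(for what it's worth, a made-up sketch of what "high speed settings" look
like at the source level in Common Lisp - the function, types, and numbers
are purely illustrative:)

  (defun sum-squares (v)
    ;; tell the compiler the argument type and trade safety for speed,
    ;; so it can open-code the float arithmetic and array access
    (declare (type (simple-array double-float (*)) v)
             (optimize (speed 3) (safety 0) (debug 0)))
    (let ((sum 0d0))
      (declare (double-float sum))
      (dotimes (i (length v) sum)
        (incf sum (* (aref v i) (aref v i))))))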

yes, we all know that Lisp can be incredibly fast. we all know that Scheme
can be statically typed, too, as in the Stalin compiler. neither changes
the naive perceptions and the myths that will be reinforced the next time a
small, neat, fun toy, somewhat like SIOD, is released and tested by
somebody who carries those myths with him.

that's how I meant that the free Lisps have mostly worked to turn people
away from Lisp. I didn't mean that you can't find free Lisps that people
would have flocked to if they would only get over their prejudices. I
meant that they don't, because of the many toys they have used and think
are the norm. it seems, for instance, that in educational settings, Lisp
and Scheme are not at all presented with anything resembling speed in mind,
and so students who are used to C and C++ and ads reading "X compiles Java
at over 10,000 lines per second", will have trouble _not_ remembering that
speed was never discussed, that their compilers and interpreters were slow,
etc, etc. I mean, we have people come in here and state that Lisp is an
interpreted language at least once a week! it's an image problem, and it's
a tragic element of that story that as Lisp implementers focus on other
things, students and amateur programmers are turned into speed fanatics
because that is the only forte of those other languages and systems. (and
never mind that Allegro CL for Linux produces code that runs rings around
C++ under Windows NT on the same machine. *sigh*)

Marco Antoniotti

Jan 24, 1997

I am enjoying this thread and I believe that it would be helpful to do
some classification in the Lisp/Scheme field and to produce some
"advice paper" (or whatever you want to call it). Any magazine
article which wouldn't refer to such a pamphlet (as the aforementioned
Lisp vs. Perl bogus comparison) should immediately be dismissed
for lack of parentheses :)

First of all (and here I already know, there will be chaos) there are
only three dialects to be considered.

Common Lisp
Emacs Lisp (which, btw, can always be turned into almost a CLtL1 by
(require 'cl))
Scheme

Xlisp claims to be more and more CL (at least CLtL1) compliant.
Therefore it should not be considered as a standalone dialect.

Now we list the only reasonable alternatives for a free (gratis)
Common Lisp implementation and rank them for speed.

1 - CMUCL (Sparc, HP-PA, MIPS, X86)
4 - GCL, ECL (most Unix flavors and architectures - KCL and AKCL
are *OLD*)
5 - CLISP (Sorry Bruno! :) )
6 - Xlisp

The gap is intended :) (even if it may not be as wide as I'd like). The
latest version of Allegro CL for Linux is free, but who knows what
the marketing policies of Franz Inc. will be.

In the Scheme world, though I never tried it, I hear that the Stalin
compiler could be a 1 or a 2 on my previous scale. AFAIK all the
other Schemes (apart from being incompatible with each other at some
level) rank at 6 OR WORSE.

Emacs Lisp is reasonably fast and I would rank it 5.

In the commercial field, it looks like there are only three
alternatives for Common Lisp and their relative speed in my modest
experience would be the following.

1/2 Lucid (Un*x)
2/3 ACL, LispWorks (Un*x)
2/3 MCL (MacOS)

I left out Genera/Symbolics because it is in a league of its own. And
I am not familiar with any of the other (if surviving) commercial CL
implementations.

I have not been using ACL/PC enough to have a good idea of it, but the
environment sure looks as appealing as the fantastic MCL on the Mac
(of which I am a big fan :) )

--
Marco Antoniotti - Resistente Umano
===============================================================================
...it is simplicity that is difficult to make.
Bertholdt Brecht

Michael Sperber [Mr. Preprocessor]

Jan 24, 1997

>>>>> "DB" == David Betz <db...@xlisper.mv.com> writes:

DB> [ ... ] If Lisp is going to be compared with
DB> other languages for speed, it should be the commercial implementations
DB> that are used in the comparison. (Although perl isn't commercial as far
DB> as I know.)

Still, there are free implementations of Scheme that are *VERY* fast.
Both Gambit and Bigloo can actually compete with C on at least some
applications. I'd be surprised if, say, Chez Scheme were
significantly faster.

Cheers =8-} Mike


Guillermo (Bill) J. Rozas

Jan 24, 1997

In article <30631029...@naggum.no> Erik Naggum <er...@naggum.no> writes:

| From: Erik Naggum <er...@naggum.no>
| Date: 24 Jan 1997 13:56:07 +0000

| that's how I meant that the free Lisps have mostly worked to turn people
| away from Lisp. I didn't mean that you can't find free Lisps that people
| would have flocked to if they would only get over their prejudices. I
| meant that they don't, because of the many toys they have used and think
| are the norm. it seems, for instance, that in educational settings, Lisp
| and Scheme are not at all presented with anything resembling speed in mind,
| and so students who are used to C and C++ and ads reading "X compiles Java
| at over 10,000 lines per second", will have trouble _not_ remembering that
| speed was never discussed, that their compilers and interpreters were slow,
| etc, etc. I mean, we have people come in here and state that Lisp is an
| interpreted language at least once a week! it's an image problem, and it's
| a tragic element of that story that as Lisp implementers focus on other
| things, students and amateur programmers are turned into speed fanatics
| because that is the only forte of those other languages and systems. (and
| never mind that Allegro CL for Linux produces code that runs rings around
| C++ under Windows NT on the same machine. *sigh*)

Actually, I think that the speed of the implementation, although
important, is nowhere near as critical as other components.

Lisp/Scheme gives you so much rope that it is so much easier to hang
yourself with, especially with respect to performance.

There is an old adage that goes something like "Lisp programmers know
the value of everything and the cost of nothing", and I think that it
is very true.

I have seen people complain about slow implementations only to
discover quadratic algorithms because they were treating lists as
arrays, or some other similar problem. Because C lacks lists, they
would never have written the program that way. Even if they used
lists, they would have to implement list-ref and list-set! themselves,
and would immediately realize how expensive they are, so they would
rethink their strategy.
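
A tiny made-up Common Lisp illustration of the lists-as-arrays trap (NTH
plays the role of list-ref here):

  ;; Quadratic: NTH walks the list from the head on every iteration.
  (defun sum-as-array (items)
    (loop for i below (length items)
          sum (nth i items)))

  ;; Linear: walk the list once.
  (defun sum-as-list (items)
    (loop for x in items sum x))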

Three issues that help C in efficiency (in my view) are:

- Good efficiency model in the small. It is very clear without much
thought to most C programmers how expensive or cheap the primitive
operations are. As a counterpart, the cost of closures and
continuations in Lisp/Scheme (and even simple addition) is much harder
to tell (even by an experienced programmer) because they depend so
much more on how well the compiler was able to optimize the code,
leading to small transformations affecting performance in
non-negligible ways.

- The standard library in C is very small. Thus C programmers reinvent
the wheel (hash tables, etc.) over and over in their programs. In so
doing, the cost of these operations (which might be primitives in
Lisp/Scheme) becomes painfully obvious to them, so they use them much
more judiciously than they might if given to them freely.

- Compilers still largely compile Fortran-style code much better than
small-procedure-intensive code. In as much as the prevailing
programming style in C is closer to Fortran than it is to Scheme's
preferred style, the performance of the code will be better. Remember
that most data flow algorithms (and so-called "global" optimizations)
work within procedures. Register allocation and instruction
scheduling are understood (to the degree they are) only within
procedures, etc. There is just not that much that a compiler can do
(short of inlining which is difficult with computed function calls) if
the procedures are really small. Lisp/Scheme compilers try to do a
good job at procedure calls, often better than C/Fortran, but they
can't generally compensate for bad style (with respect to efficiency
if not modularity and elegance). That is why it is not unusual to
find that a Lisp/Scheme compiler will beat some C compiler in
Fibonacci, Tak, or some such, but not in more realistic programs.

Scott Draves

Jan 24, 1997

David Betz <db...@xlisper.mv.com> wrote

> If Lisp is going to be compared with
> other languages for speed, it should
> be the commercial implementations

why? there are plenty of free&fast
implementations of *both* C (GCC, LCC)
and Lisp/scheme (CMUCL, gcl, gambit, cscheme).

--
balance
equilibrium
death

http://www.cs.cmu.edu/~spot

Juergen Nickelsen

Jan 25, 1997

Marco Antoniotti <mar...@crawdad.icsi.berkeley.edu> wrote:

> Emacs Lisp (which, btw. can always be turned almost to a CLtL1 by
> (require 'cl))

While Emacs Lisp is perhaps the most widespread Lisp dialect (with most
of its users unaware of its existence), I would not consider it one of
the major Lisp dialects -- its scope (no pun intended) is just too
small.

cl.el does not turn Emacs Lisp into a CLtL1 Lisp. It does provide some
of the convenience functions like caadr, backquote and comma (but as
macros, not reader macros, as in "(` (3 4 (, (+ 5 6)) 7))"), and some
others.

--
Juergen Nickelsen

Erik Naggum

Jan 26, 1997

* Juergen Nickelsen

| cl.el does not turn Emacs Lisp into a CLtL1 Lisp. It does provide some
| of the convenience functions like caadr, backquote and comma (but as
| macros, not reader macros, like in "(` (3 4 (, (+ 5 6)) 7)))", and some
| other.

I'm getting _real_ sick and tired of old prejudice against various Lisps.

next time, check your facts with a recent Emacs. the backquote package was
completely rewritten 1994-03-06. that's nearly three years ago! the Lisp
reader now accepts both the old and the new style equally well.

*sigh*

Juergen Nickelsen

Jan 26, 1997

By mistake (I was too stupid to handle my newsreader) I posted the
following before the article was complete:

> Marco Antoniotti <mar...@crawdad.icsi.berkeley.edu> wrote:
>
> > Emacs Lisp (which, btw. can always be turned almost to a CLtL1 by
> > (require 'cl))
>
> While Emacs Lisp is perhaps the most widespread Lisp dialect (with most
> of its users unaware of its existence), I would not consider it one of
> the major Lisp dialects -- its scope (no pun intended) is just too
> small.
>

> cl.el does not turn Emacs Lisp into a CLtL1 Lisp. It does provide some
> of the convenience functions like caadr, backquote and comma (but as
> macros, not reader macros, like in "(` (3 4 (, (+ 5 6)) 7)))", and some
> other.

To be precise, it adds a lot of convenience functions, and this is what
is intended by cl.el. Its author Dave Gillespie writes in the cl.el
documentation:

> * Some features are too complex or bulky relative to their benefit
> to Emacs Lisp programmers. CLOS and Common Lisp streams are fine
> examples of this group.
>
> * Other features cannot be implemented without modification to the
> Emacs Lisp interpreter itself, such as multiple return values,
> lexical scoping, case-insensitive symbols, and complex numbers.
> The "CL" package generally makes no attempt to emulate these
> features.

cl.el does indeed make Emacs Lisp programming easier for programmers
familiar with Common Lisp. Emacs Lisp with cl.el loaded is still way
different from Common Lisp, though.

--
Juergen Nickelsen

Juergen Nickelsen

Jan 26, 1997

Erik Naggum <er...@naggum.no> wrote:

> * Juergen Nickelsen


> | cl.el does not turn Emacs Lisp into a CLtL1 Lisp. It does provide some
> | of the convenience functions like caadr, backquote and comma (but as
> | macros, not reader macros, like in "(` (3 4 (, (+ 5 6)) 7)))", and some
> | other.

[...]


> the backquote package was completely rewritten 1994-03-06. that's nearly
> three years ago! the Lisp reader now accepts both the old and the new
> style equally well.

My apologies for this error; it is true that I haven't followed the
development of Emacs closely.

But that only seemed like my major point here; in fact I wanted to
write more (and did in a follow-up article), but due to a mistake the
article slipped out too early. In the follow-up I quote Dave Gillespie,
the author of the recent cl.el, from the documentation of cl.el (as
contained in XEmacs 19.14), which clearly states why cl.el does not turn
Emacs Lisp into Common Lisp.

--
Juergen Nickelsen

Martin Cracauer

Jan 26, 1997

Erik Naggum <er...@naggum.no> writes:

>* Martin Cracauer


>| A commercial implementation has a nice environment, nice browsers, maybe
>| an editor in Common Lisp and therefore controllable from Lisp and I found
>| it very valuable to have such a visualization toolkit around when I
>| learned Common Lisp. But I think Erik missed the point here.

>well, those have been immaterial to me, but I'm sure we meet different
>aspects of Lisp systems according to our experience. I have never used a
>real Lisp system for real until I got Allegro CL, so I was impressed by how
>much the system knew about itself, how much time the source code manager
>saved me, how well the debugging is integrated with the cross-referencing
>utilities, etc. example: I have about 200 functions in a package I'm
>writing, spread across several files, and I can ask who calls a given
>function, directly or indirectly. I know which functions and files to
>recompile after changing a macro by asking for who uses it. I can ask the
>system to tell me which functions bind, set, and/or reference a variable.
>I have needed this functionality for _years_, not just in Lisp. Allegro
>also has search lists that can do wondrous things, too. it's everything
>_outside_ of the common Common Lisp stratum that impresses me most.

Again, I think the difference between commercial and free
implementations is not so much that free implementations don't offer
such functionality, but that in the commercial ones all this nice
functionality is offered in a very easy way. Tools like this are
available, for example, in Mark Kantrowitz's tools. Since many people
want to design their own tools - maybe for several implementations -
the available source for free tools can become even more important.

As I stated earlier, I think it can be very important for a new user
to have such functionality offered without further thinking, and
therefore to be free to concentrate on language issues. This is at
least where I found my commercial implementation useful (LispWorks).

I think the most important thing about the free commercial Lisp for Windows
and ACL for Linux is that new Common Lisp users get a much better
chance to become productive in Common Lisp before they give up.

Later, I found that CMUCL and the free tools were not second-rate anymore.

Comfortable CLOS browsing is another issue when talking about free
alternatives, though.

>| In fact, I already had an argument with the author of Xlisp about it
>| after some magazine compared Lisp and perl (that is Xlisp and perl) and
>| headlined that Lisp is slow.
>|
>| The problem here is that people choose a free implementation by other
>| criteria than speed and then complain it is too slow because they
>| underestimated the amount of efficiency they give up.

>I see a possible pattern here. if C was slow on some machine or in some
>particular implementation, nobody would blame C or headline that C is slow.
>the myth is that Lisp is slow, and every time somebody meets a slow Lisp,
>that myth is reinforced. that you can compile with CMUCL at high speed
>settings and beat the sh*t out of C is just as much as an aberration as C
>running under some bounds-checking and memory-tracking package is slow.

>yes, we all know that Lisp can be incredibly fast. we all know that Scheme
>can be statically typed, too, as in the Stalin compiler. neither changes
>the naive perceptions and the myths that will be reinforced the next time a
>small, neat, fun toy, somewhat like SIOD, is released and tested by
>somebody who carries those myths with him.

In my opinion, the problem is that many Lisp and especially Scheme
tool implementors couple their tools to an inefficient
implementation. No C programmer would do so.

Of course, many Scheme tools require extending the language, whereas C
tools usually can force the programmer to use clumsy interfaces.

But I think Lisp and especially Scheme tool implementors overrated
language elegance and gave up integrating their tools into the best
available language implementation too early.


>that's how I meant that the free Lisps have mostly worked to turn people
>away from Lisp. I didn't mean that you can't find free Lisps that people
>would have flocked to if they would only get over their prejudices. I
>meant that they don't, because of the many toys they have used and think
>are the norm. it seems, for instance, that in educational settings, Lisp
>and Scheme are not at all presented with anything resembling speed in mind,
>and so students who are used to C and C++ and ads reading "X compiles Java
>at over 10,000 lines per second", will have trouble _not_ remembering that
>speed was never discussed, that their compilers and interpreters were slow,
>etc, etc. I mean, we have people come in here and state that Lisp is an
>interpreted language at least once a week! it's an image problem, and it's
>a tragic element of that story that as Lisp implementers focus on other
>things, students and amateur programmers are turned into speed fanatics
>because that is the only forte of those other languages and systems. (and
>never mind that Allegro CL for Linux produces code that runs rings around
>C++ under Windows NT on the same machine. *sigh*)

I think it's not *that* bad.

In my opinion, people tend to rate Lisp as interpreted because they
can't get the idea of a dynamic language that is compiled. They hear
of Lisp features and automatically say "interpreted" where they may
have thought "dynamic".

In other words, the problem is not a bad image of Lisp, but the fact
that people are ignorant and tend to fail to recognize what
performance problems are really involved with a dynamic language.

Alexey Goldin

Jan 27, 1997

crac...@wavehh.hanse.de (Martin Cracauer) writes:
>
> Comfortable CLOS browsing is another issue when talking about free
> alternatives, though.
>


OOBR (object-oriented browser) for Emacs/XEmacs helps a lot. It comes
with the XEmacs distribution, but works with Emacs too.

Cyber Surfer

Jan 27, 1997

In article <30631029...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> that's how I meant that the free Lisps have mostly worked to turn people
> away from Lisp. I didn't mean that you can't find free Lisps that people
> would have flocked to if they would only get over their prejudices. I
> meant that they don't, because of the many toys they have used and think
> are the norm. it seems, for instance, that in educational settings, Lisp
> and Scheme are not at all presented with anything resembling speed in mind,
> and so students who are used to C and C++ and ads reading "X compiles Java
> at over 10,000 lines per second", will have trouble _not_ remembering that
> speed was never discussed, that their compilers and interpreters were slow,
> etc, etc. I mean, we have people come in here and state that Lisp is an
> interpreted language at least once a week! it's an image problem, and it's
> a tragic element of that story that as Lisp implementers focus on other
> things, students and amateur programmers are turned into speed fanatics
> because that is the only forte of those other languages and systems. (and
> never mind that Allegro CL for Linux produces code that runs rings around
> C++ under Windows NT on the same machine. *sigh*)

I agree with all the above. ACL can effortlessly beat the sh*t out
of C++ even on a Windows platform, and it does it using far less
memory. Perhaps I'm being unfair by comparing ACL for Windows with
VC++ and MFC, but C++ and MFC are a popular tool for developing
Windows code, so it seems like a reasonable comparison to me.
VC++ compiling MFC code demands more than 5 times as much RAM as
ACL, and still sucks in terms of compilation speed (ACL is fast
as lightning, while VC++ crawls) and ease of development.

All of which can translate into more development time for C++
than with Lisp. Why then am I not using Lisp to develop
the code I'm paid for? Simply because I'm still saving up for
ACL for Windows, which costs about 8 times more than VC++,
which I got for _free_, as that's what I'm paid to use.

While the value of Lisp is obvious to me, this doesn't write
cheques. The money has to come first, then the tools, then the
code. Not that this is a real problem, as I can get by using
free tools (the ones with no strings attached), and then say,
"Hey, this works - in spite of not using C++ - and I _know_
I can do the same in Lisp, only better and faster."

I like Rainer's point about stats and numeric accuracy. I may
well use that argument to justify replacing our existing stats
code with something written in Lisp (or Haskell, which provides
similar numerical advantages as Lisp).

I recently took another look at a Lisp interpreter that I wrote
in the late 80s. I learned a lot from writing that, but I've
learned even more since then. Revising it so it could be compiled
with GNU C - for any platform that GNU C runs on - was a very
satisfying experience.

However, I'll probably need to do a lot more work before I make
it available on the Internet. It needs some docs, and a heavy
disclaimer about how and when I wrote it, i.e. to help me learn
more about Lisp. As a result, it's no speed demon.

Thanks for the thought-provoking posts!
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
Martin Rodgers | Developer and Information Broker | London, UK
Please remove the "nospam" if you want to email me.
"Blow out the candles, HAL."


Reini Urban

Jan 27, 1997

On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
>Both Common Lisp and Scheme basics are relatively easy to learn.

In my eyes Common Lisp is quite hard to learn
(compared to standard lisp or scheme)
--
To prevent spam replies, my newsreader has misspelled my email address.
To send me email, remove the final "!". Sorry for the inconvenience!

Reini You can never surf too much!

Thant Tessman

Jan 27, 1997

Sin-Yaw Wang wrote:
>
> Thant Tessman wrote:
> > Actually, Scheme contains three levels of enlightenment. The first is
> > higher-order functions (lambda). The second is continuations
> > (call-with-current-continuation), and the third is macros.
>
> I have been a serious amateur on Scheme (never used it to do real
> work). I am in a slight disagreement with this statement. Certainly
> higher order functions is an enlightenment.

Welcome to level one.


> I actually never found any serious use of continuation that is beyond
> catch/throw. Yes, I heard all about co-routine and parallel programming
> simulation. [...] Can you enlighten me on why is continuation better
> than catch/throw?

Using co-routines completely changes the way you approach large, complicated
applications. (Below I'll include some simple task-threader code I wrote a
long time ago to teach myself call/cc. I don't vouch for its style. I was
just a beginner and it was just a learning exercise. I've since written a
better one.)


> I don't find macros to be a delight at all. The R4RS definitions are
> complicated and difficult to understand.

Yes, the macros in the R4RS aren't very clear. The "extend-syntax"
macros are much better. (Eugene E. Kohlbecker: "Syntactic Extensions
in the Programming Language Lisp", Ph.D. Thesis, Indiana University, 1990.)
If you dig around, you might be able to find an implementation for whatever
Scheme you're using.

Chez Scheme has an even newer macro system that is a bit more complicated,
but is frighteningly powerful.

-thant

---begin old code---


;; "make-dispatcher" is for building ojbects using closures

(define (make-dispatcher method-function-alist . supers)

(lambda (method . continue-search)

(if (null? continue-search)
(set! continue-search (lambda () (error "bad dispatch key" method)))
(set! continue-search (car continue-search)))

(let ((pair (assq method method-function-alist)))
(if (pair? pair)
(cdr pair)
(letrec ((get-method
(lambda (supers)
(if (pair? supers)
((car supers) method (if (null? (cdr supers))
continue-search
(lambda ()
(get-method
(cdr supers)))))
(continue-search)))))
(get-method supers))))))


;; queue

(define (make-queue)
  (define head '())
  (define tail '())

  (define (append i)
    (if (null? head)
        (begin (set! head (cons i '()))
               (set! tail head))
        (begin (set-cdr! tail (cons i '()))
               (set! tail (cdr tail))))
    i)

  (define (pop)
    (if (null? head)
        (error "queue: popping empty queue")
        (let ((return (car head)))
          (set! head (cdr head))
          (if (null? head) (set! tail '()))
          return)))

  (make-dispatcher
   `((append . ,append)
     (pop . ,pop)
     (empty? . ,(lambda () (null? head)))
     (value . ,(lambda () (car head))))))

;;;


(define (make-task-queue)
  (let* ((task-queue (make-queue))
         (append (task-queue 'append))
         (pop (task-queue 'pop))
         (empty? (task-queue 'empty?))
         (return #f))

    (define (do-task task)
      (call-with-current-continuation
       (lambda (r)
         (set! return r)
         (task)
         #f)))

    (define (switch)
      (set! return (call-with-current-continuation return)))

    (define (exit) (return #f))

    (define (step)
      (if (not (empty?))
          (let* ((task (pop))
                 (cont (task)))
            (if (procedure? cont) (append (lambda () (cont return))))
            (not (empty?)))
          #f))

    (define (go)
      (letrec ((loop (lambda (not-done)
                       (if not-done
                           (loop (step))
                           #f))))
        (loop (step))))

    (define (add-task task)
      (if (procedure? task)
          (append (lambda () (do-task task)))
          (error "task-queue: attempt to add a non-procedure" task)))

    (define (add-tasks first . rest)
      (letrec ((loop (lambda (f r)
                       (add-task f)
                       (if (pair? r)
                           (loop (car r) (cdr r))))))
        (loop first rest)))

    (make-dispatcher
     `((add-task . ,add-tasks)
       (switch . ,switch)
       (exit . ,exit)
       (step . ,step)
       (go . ,go)))))

;;;

(define q (make-task-queue))

(define (job1)
  (letrec ((loop (lambda (i)
                   (write i) (newline)
                   ((q 'switch))
                   (if (< i 3)
                       (loop (+ i 1))))))
    (loop 1)))

(define (job2)
  (letrec ((loop (lambda (i)
                   (write i) (newline)
                   ((q 'switch))
                   (if (< i 6)
                       (loop (+ i 1))))))
    (loop 1)))

(define (job3)
  (letrec ((loop (lambda (i)
                   (write i) (newline)
                   ((q 'switch))
                   (if (= i 9)
                       ((q 'exit))
                       (loop (+ i 1))))))
    (loop 1)))

((q 'add-task) job1 job2 job3)

((q 'go))

---end old code---

Erik Naggum

Jan 27, 1997

* Reini Urban

| In my eyes Common Lisp is quite hard to learn
| (compared to standard lisp or scheme)

what is "standard lisp"?

Martin Cracauer

Jan 27, 1997

g...@hplgr2.hpl.hp.com (Guillermo (Bill) J. Rozas) writes:

>In article <30631029...@naggum.no> Erik Naggum <er...@naggum.no> writes:

>| From: Erik Naggum <er...@naggum.no>
>| Date: 24 Jan 1997 13:56:07 +0000

>| that's how I meant that the free Lisps have mostly worked to turn people
>| away from Lisp. I didn't mean that you can't find free Lisps that people
>| would have flocked to if they would only get over their prejudices. I
>| meant that they don't, because of the many toys they have used and think
>| are the norm. it seems, for instance, that in educational settings, Lisp
>| and Scheme are not at all presented with anything resembling speed in mind,
>| and so students who are used to C and C++ and ads reading "X compiles Java
>| at over 10,000 lines per second", will have trouble _not_ remembering that
>| speed was never discussed, that their compilers and interpreters were slow,
>| etc, etc. I mean, we have people come in here and state that Lisp is an
>| interpreted language at least once a week! it's an image problem, and it's
>| a tragic element of that story that as Lisp implementers focus on other
>| things, students and amateur programmers are turned into speed fanatics
>| because that is the only forte of those other languages and systems. (and
>| never mind that Allegro CL for Linux produces code that runs rings around
>| C++ under Windows NT on the same machine. *sigh*)

>Actually, I think that the speed of the implementation, although
>important, is nowhere near as critical as other components.

While I think the rest of your posting is very valid, this statement
is not.

An existing performance problem in an implementation is usually a sign
of a misdesign. Several times I thought I should be clever enough to
work around such problems, only to find out that the implementors
aren't stupid either and the problem is a hard one.

I found myself quite often in a situation where apparently minor
performance problems with a given language implementation (or OS, for
that matter) persisted and got worse and worse as a project continued.

I found CMUCL to be the only free CL implementation without major
performance showstoppers, and only when not taking PCL/CLOS into
account.

William D Clinger

Jan 27, 1997

The speed of a Lisp or Scheme implementation is tricky to
characterize, because an implementation may very well be
quite fast at some things but slow at others. For example,
I was the primary author of MacScheme, which was fast on tight
loops and generic arithmetic (especially fixnum arithmetic),
but was slow on non-tail calls. An implementor of Lisp or
Scheme has a lot more scope for both creativity and stupidity
than does an implementor of C, which tends to be implemented
approximately the same way by all compilers. See Dick Gabriel's
"Performance and Evaluation of Lisp Systems" for more on this.

I don't think it's useful to get too involved in a discussion
of which implementation is faster than another, because it
usually depends on precisely what you're trying to do and also
on your particular coding style. Having wasted a fair amount
of my life studying this sort of thing, however, I feel an urge
to offer some real albeit useless information.

Marco Antoniotti <mar...@crawdad.icsi.berkeley.edu> wrote:
> In the Scheme world, though I never tried it, I hear that the Stalin
> compiler could be a 1 or a 2 in my previous scale. AFAIK all the
> other Scheme's (apart from being incompatible from each other at some
> level) rank at 6 OR WORSE.

Chez Scheme and Larceny (our unreleased implementation) perform
in the same league with the commercial implementations of Common
Lisp that Antoniotti ranked as better than a 1. Stalin may be
in that league as well, but I haven't tried it. Gambit-C and
Bigloo would be a 1 or a 2; as noted below, their performance
is limited by the fact that they compile to C. MIT Scheme might
not be quite as fast, but it is probably on the order of 100 times
as fast as xlisp, which Antoniotti ranked at 6. I haven't used
Macintosh Common Lisp in recent years, but in 1988 its performance
(on the 68020) was roughly comparable to that of MacScheme, though
its performance profile was somewhat the opposite: MCL was faster
on non-tail calls, but slower on inner loops and arithmetic.

See http://www.ccs.neu.edu/home/will/Twobit/benchmarks1.html for
the kind of numbers that should not be taken very seriously, but
are better than hearsay. In particular these numbers illustrate
how the ranking of an implementation will vary depending on the
nature of the benchmark.

Michael Sperber wrote:
> Both Gambit and Bigloo can actually compete with C on at least some
> applications. I'd be suprised if, say, Chez Scheme, were
> significantly faster.

Chez Scheme is roughly twice as fast as Gambit-C on many programs,
mainly because Gambit-C compiles to C instead of to native code,
and you lose a factor of two because of the hoops that you have to
jump through to guarantee proper tail-recursion when generating
C code. This factor of two is acknowledged by Marc Feeley, the
author of Gambit. Bigloo also compiles to C, but may not suffer
quite as much because it doesn't try as hard to conform to the
IEEE/ANSI standard for Scheme.

William D Clinger

Jeff Barnett

Jan 27, 1997

In article <30633807...@naggum.no>, Erik Naggum <er...@naggum.no> writes:
|> | In my eyes Common Lisp is quite hard to learn
|> | (compared to standard lisp or scheme)
|>
|> what is "standard lisp"?

In the current context, I guess it means an "uncommon Lisp".
So any lisp with a small distribution must be standard!

Jeff Barnett

PS It's been that kind of day.


Espen Vestre

Jan 28, 1997

Erik Naggum <er...@naggum.no> writes:

> | In my eyes Common Lisp is quite hard to learn
> | (compared to standard lisp or scheme)
>
> what is "standard lisp"?

Standard Lisp was a pre-Common Lisp attempt to standardize Lisp.
There is a reference to The Standard Lisp Report in CLtL II.

--

Espen Vestre
Telenor Online AS

John Fitch

Jan 28, 1997

Standard LISP is still very much alive. It is the basis of the REDUCE
algebra system, whose author realised the need for a standard basis
for his programs way back in the 60s. The second Standard LISP report
was written in the late 1970s; I was responsible for the IBM370
implementation at that time.

It is still active with CSL and PSL.

==John ffitch
Bath and Codemist Ltd

Rainer Joswig

Jan 28, 1997

In article <30633807...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:

> * Reini Urban


> | In my eyes Common Lisp is quite hard to learn
> | (compared to standard lisp or scheme)
>
> what is "standard lisp"?

Maybe Standard Lisp? See http://www.rrz.uni-koeln.de/REDUCE/3.6/doc/sl/
for the Standard Lisp Report.

http://www.lavielle.com/~joswig/lisp.html

--
http://www.lavielle.com/~joswig/

Cyber Surfer

Jan 28, 1997

In article <1997Jan26.1...@wavehh.hanse.de>
crac...@wavehh.hanse.de "Martin Cracauer" writes:

> In my opinion, people tend to rate Lisp as interpreted because they
> can't get the idea of a dynamic language that is compiled. They hear
> of Lisp features and automatically say "interpreted" where they may
> have thought "dynamic".

This is why I can answer almost any attack on Lisp from a C/C++
programmer by suggesting that they read PJ Brown's book. Some
people may have even forgotten that this stuff can be done even
in Basic. Yes, Basic was once interactive. I can't remember the
last time I read anything about a commercial Basic implementation
that was interactive. (Perhaps because today almost everything for
Windows is _batch_ oriented - ironic, eh?)



> In other words, the problem is not a bad image of Lisp, but the fact
> that people are ignorant and tend to fail to recognize what
> performance problems are really involved with a dynamic language.

This is why I so often find myself recommending Brown's book.
Too many people don't have any idea what "interactive" means!
You're right, they think it means "interpreted". Even worse,
they think that "compiled" always means "native code compiled".

Erik sometimes calls people stupid, but if he's right, then we're
in a hopeless situation. If you're right, and I think you are, then
it's a case of ignorance, and we can fix that. It'll take a lot
of effort and time, but it can be done.

Perhaps Java is helping to make this possible, but it might also
make things much worse if it fails. After all, the JVM is seen as
"interpreted", which it isn't. Implementations may, but they also
may be compiled to native code. Not enough people realise this,
and this may be another ignorance problem.

If Java can suffer in this case, think about what ignorance can
do to Lisp, which is much harder for the average C hacker to
understand. It's going to be hard work.

Rainer Joswig

unread,
Jan 28, 1997, 3:00:00 AM1/28/97
to

> On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
> >Both Common Lisp and Scheme basics are relatively easy to learn.
>

> In my eyes Common Lisp is quite hard to learn
> (compared to standard lisp or scheme)

Really?

Perhaps some people who don't understand CL themselves (because they
don't use it, for example) try to tell you about it.
Then some people try to tell you that CL lacks pattern matching
like some other functional languages. Not only is it easy
to integrate pattern matching, but they don't understand
that for larger software libraries pattern-based invocation
is not very maintainable. Then people begin to tell you
that CL does not let you return "tuples" of values.
Again, this is easy (use VALUES, or structures, whatever).
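
For example (just a sketch - the function name here is made up):

  (defun divide-with-remainder (a b)
    ;; return two values: the quotient and the remainder
    (values (floor a b) (mod a b)))

  (multiple-value-bind (q r) (divide-with-remainder 17 5)
    (format t "quotient ~D, remainder ~D~%" q r))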

Common Lisp is relatively easy to understand. Not
full Common Lisp - you don't need to tell them about
DEFSETF or about meta classes. But Common Lisp
has the same basic properties as Scheme.
It additionally has value and function cells and
also supports dynamic binding (aka FLUID-LET in Scheme).
Well, that is no big deal. Then Common Lisp has a
small set of special forms, some macros and functions.
The basic evaluation model is easy.

Then you start programming. You will need some library
functions. Well, Common Lisp has a lot of stuff
in the language. You want to print something -
use PRINC (or whatever). It's already there. If you
need something complicated - it's there, too.
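
For example:

  (princ "hello, world")      ; prints: hello, world
  (print '(1 2 3))            ; prints the list readably on a new line
  (format t "~R" 42)          ; prints: forty-two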

You just need an overview of the CL libraries. In case
you need something - just look into the manual.
Why should it be more difficult to program
a student project for searching a maze and
printing the results as ASCII text in Common Lisp
than in Scheme? All you need of CL looks
similar to the Scheme stuff.

I don't get it. I always thought that CL is really
easy to master (compared to, say, Haskell or C++).

Rainer Joswig

--
http://www.lavielle.com/~joswig/

Cyber Surfer

unread,
Jan 28, 1997, 3:00:00 AM1/28/97
to

In article <30633807...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> * Reini Urban


> | In my eyes Common Lisp is quite hard to learn
> | (compared to standard lisp or scheme)
>

> what is "standard lisp"?

Reini may be referring to a Lisp dialect called Standard Lisp,
which I believe dates from 1966. The version I used on the Atari
ST included features like backquote, which I suspect is a more
recent Lisp feature, but I don't know.

Think of it as a predecessor to Common Lisp. In fact, you can
find it on the front cover of the paperback edition of CLtL1,
just after Zetalisp, and before NIL.

Cyber Surfer

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

In article <joswig-ya0231800...@news.lavielle.com>
jos...@lavielle.com "Rainer Joswig" writes:

> Perhaps some people try to tell you about CL which doesn't understand
> it themselves (because they don't use it for example).
> Then some people try to tell you that CL lacks pattern matching
> like some other functional language. Not only is it easy
> to integrate pattern matching, but they don't understand,

> that CL does not allow to return "tuples" of values.
> Again this is easy (use VALUES, or structures, whatever).

I agree that pattern matching can be added to CL, and other
Lisps too, but I'm not sure why you say that "for larger
software libraries pattern-based invocation is not very
maintainable." The issue of code maintenance isn't affected
by pattern matching, as far as I'm aware.

Perhaps some people prefer the functional syntax, using
tuples. That doesn't necessarily imply that a tuple object
is CONstructed. It might be, but I think that's a detail
of the implementation, not the language.

It looks like you're defending CL by attacking another style
of programming. I hope this is not the case, as I don't believe
that it's necessary. No language is so good that it justifies
such an attack, and I know that you can find better ways
of defending CL.

So, I'm assuming that you just phrased your point badly,
by appearing to include some language politics.



> I don't get it. I always thought, that CL is really
> easy to master (compared to, say, Haskell or C++).

Haskell is _also_ easy to understand. It's just a little
different to CL (and C++), but I find I can apply a great
deal of my experience in Lisp to Haskell. Like Lisp, it
helps to learn it from a good book, and to know people who
can answer your questions.

Mastering any language worth learning takes time. After
more than 10 years, I'm still learning CL. I'm also still
learning C++, tho I feel I've used it enough by now.
I haven't _begun_ to master Scheme!

My favourite languages are still CL and Scheme, but I'm
beginning a love affair with Haskell. I don't feel that
I'm cheating any of them, as they're only tools. Damn fine
tools, just the same.

Jeff Dalton

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to Sch...@mc.lcs.mit.edu, yu...@csl.snu.ac.kr, vfr...@netcom.com

A while back, yu...@csl.snu.ac.kr (Yunho Jeon) asked:

I have some idea to experiment with, and the idea needs a language
which can execute codes produced by the program itself. Naturally, I
thought LISP would be the best choice. But while reading LISP FAQ, I
found that there are some varients of the language. Notably, Scheme
seemed to be more modern and cleaner language than LISP. Because I
have almost no experience in those languages, I would like to get
answers from LISP/Scheme experts for the following questions.

1) Which language is easier to learn (for C/C++ programmer)?
Scheme seems to be easier because it is much smaller than common lisp,
but how big is the difference (in terms of learning curve)?

vfr...@netcom.com (Will Hartung) replied:

Frankly, I would say that Lisp would be easier to learn than Scheme
for a C/C++ programmer.

Scheme is a lovely, elegant language, and is, I believe simpler and
easier to learn in its own right. It is hard not to like Scheme. But,
for someone who has a lot of history with C/C++, the way Scheme is
presented could throw you for a loop. [...]

The "Scheme Way" of programming is very functional, lots of recursion,
local helper functions, etc. It is really a pretty nice way to go
about the task of coding. However, it's not the way MOST people
(particularly C/C++ people) write code. The idioms are all wrong.

If you look at how Lisp is presented, especially in something like
Paul Graham's "ANSI Common Lisp" book, it is easier to see how your
entrenched C/C++ idioms translate into Lisp syntax and structures.

I don't know which language (Scheme or Common Lisp) would be easier
to learn for a C/C++ programmer, but Scheme is *much* smaller, greatly
reducing the "which subset do I learn?" problem.

In any case, the question reminded me of something I saw here a
while back, namely a table of approximate translations between
Scheme and C++. I've put a link to a copy in

http://www.aiai.ed.ac.uk/~jeff/scheme/

along with a link to a Scheme quick-reference (also posted here a
while back).

-- jeff

Tim Bradshaw

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

* Reini Urban wrote:
> On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
>> Both Common Lisp and Scheme basics are relatively easy to learn.

> In my eyes Common Lisp is quite hard to learn

> (compared to standard lisp or scheme)

If it's possible to ask this question without provoking endless futile
discussion, could you say why? I've taught courses on Common Lisp,
and it would be interesting to know what people find hard about basic CL,
especially compared to scheme.

I can see that CL is very *large* & therefore intimidating compared to
Scheme, but that can be fixed by teaching it the right way.  Scope &
extent people find hard, but that is common to both.  Different fn and
variable namespaces make CL harder, I think.  call/cc makes Scheme
conceptually much harder for many people, though.

So it would be quite interesting to know why CL is harder (or why
scheme is harder) and how that could be fixed, if it can.

--tim

Marc Feeley

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

William D Clinger <wi...@ccs.neu.edu> wrote:

> Chez Scheme is roughly twice as fast as Gambit-C on many programs,
> mainly because Gambit-C compiles to C instead of to native code,
> and you lose a factor of two because of the hoops that you have to
> jump through to guarantee proper tail-recursion when generating
> C code. This factor of two is acknowledged by Marc Feeley, the
> author of Gambit.

This isn't quite right. I acknowledge that Gambit-C is "on average" a
factor of 2 slower than Gambit when compiling directly to native code
(for details check the old and unpublished paper
http://www.iro.umontreal.ca/~feeley/papers/stc.ps). This factor of 2
is essentially due to the way Gambit-C implements proper
tail-recursive behavior in C.

Gambit-C has about the same performance as Chez Scheme (when Gambit-C
uses "block" compilation to unsafe code with fixnum/flonum specific
arithmetic). Of course run time performance is only one part of the
story since many other characteristics are important to compare as
well in a practical setting (compile time, portability,
interoperability, adherence to the standards, language extensions,
debugging, etc, etc).

Marc

Earl & Daniella Harris

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to Rainer Joswig

Rainer Joswig wrote:

>
> In article <32ecf05f...@news.sime.com>, rur...@sbox.tu-graz.ac.at! wrote:
>
> > On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
> > >Both Common Lisp and Scheme basics are relatively easy to learn.
> >
> > In my eyes Common Lisp is quite hard to learn
> > (compared to standard lisp or scheme)
>
> Really?

>
> Perhaps some people try to tell you about CL which doesn't understand
> it themselves (because they don't use it for example).

Ok. I don't understand CL. When I tried to teach myself CL, I was
expecting a simple language like Scheme. The details of CL seemed
overwhelming. I kept asking myself, why is this SO complicated?
This is supposed to be Lisp?

IMHO, CL looks like it is much harder to master than Scheme. The
following is my rebuttal.

> Then some people try to tell you that CL lacks pattern matching
> like some other functional language. Not only is it easy
> to integrate pattern matching, but they don't understand,
> that for larger software libraries pattern based invocation

If you need to understand pattern matching to master CL, this is one
strike against CL. Scheme doesn't have this; it isn't necessary.

Why would I want (or need) pattern matching in Lisp? Is this like
pattern matching in ML? Can you use CL and avoid using patterns?

> is not very maintainable. Then people begin to tell you
> that CL does not allow to return "tuples" of values.
> Again this is easy (use VALUES, or structures, whatever).

If you need to understand tuples and structures to master CL, this is
one strike against CL.
Scheme doesn't have tuples; it isn't necessary. Some Scheme
implementations have structures, but you don't need to learn them.

Tuples? Values? Structures? If I want to return more than one value,
I return them in a list (or vector). Why would I want tuples?
Are structures like structures in C? How are tuples different from
structures and lists? Are values a new data type in CL?

>
> Common Lisp is releatively easy to understand. Not
> full Common Lisp - you don't need to tell them about
> DEFSETF or about meta classes. But Common Lisp
> has the same basic properties like Scheme.

If you have to learn and differentiate between several DEFs, this is
a strike against CL.

While Scheme has essentially "define," Common Lisp has several
"DEFsomethings". Why does Common Lisp have so many definitions?

If you need to understand "meta classes" to master CL, this is also
one strike against CL.

Why do you need meta classes in Common Lisp?

> It additionally has values and function cells and
> supports also dynamic binding (aka FLUID-LET in Scheme).
> Well, that is no big deal. Then Common Lisp has a
> small set of special forms, some macros and functions.
> The basic evaluation model is easy.

I'll talk about the evaluation model at the end.

>
> Then you start programming. You will need some library
> functions. Well, Common Lisp has a lots of stuff
> in the language. You want to print something -
> use PRINC (or whatever). Its already there. If you

If you need to understand several flags and options in the print
functions, this is one strike against CL.

Scheme's printing options are much clearer to me. You apply the
print function to the value and it prints it.

CL has really exotic print functions with lots of flags and options.
It reminds me of C's printf function. There are more details.

> need something complicated - its there, too.
>
> You just need an overview over the CL libraries. In case
> you need something - just look into the manual.

I'm not sure a library makes a language easier to master.
A library can be a convenience to the programmer. It saves
me the trouble of writing some programs.

> Why should it be more difficult to program
> a student project for searching a maze and
> printing the results to ASCII text in Common Lisp,
> then in Scheme? All you need of CL looks
> similar to the Scheme stuff.
>

> I don't get it. I always thought, that CL is really
> easy to master (compared to, say, Haskell or C++).

Regarding the evaluation model, CL doesn't treat
functions as first-class objects. I can't pass functions
around like other values (numbers, lists, etc).

I bet CL is really easy to master, when compared to C++.

However, IMHO, it is hard to defend that CL is easier to
master than Scheme. Just compare the reference manual size.

In CL's defense, one could argue that CL has other advantages over
Scheme. I bet it is easy to print out a number in hexadecimal format.
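
(As it happens, it is; FORMAT's ~X directive does it:

  (format t "~X" 255)              ; prints: FF
  (format nil "#x~X" 3735928559)   ; => "#xDEADBEEF"

though that hardly settles the question.)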

>
> Rainer Joswig
>
> --
> http://www.lavielle.com/~joswig/

Earl Harris Jr.

Seth Tisue

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <32F06A...@widomaker.com>,

Earl & Daniella Harris <esha...@widomaker.com> wrote:
>Regarding the evaluation model, CL's doesn't treat
>functions as first class objects. I can't pass functions
>around like other values (numbers, lists, etc).

This is totally incorrect. Functions are 100% first class objects in
Common Lisp, same as in Scheme.
--
== Seth Tisue <s-t...@nwu.edu> http://www.cs.nwu.edu/~tisue/

Erik Naggum

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

* Earl Esharris Daniella Harris

| IMHO, CL looks like it is much harder to master the Scheme.

FWIW, I found the opposite to be true. my favorite example is `member',
which Common Lisp calls `member', but which Scheme calls `memq', `memv', or
`member' according to which function should do the testing. in Common
Lisp, I can choose the test function with :test, and use `eq', `eql' or
`equal' as I see fit. should I want a different function, such as
`string-equal', I can use that, too. in Scheme, I must implement my own
`member' with `string-equal' as the predicate. in practice, I implement a
new `member' (which must be called something other than `member' since
Scheme doesn't have packages and redefining isn't kosher), which takes a
function as argument. in like manner, I must reimplement everything else I
need with a higher level of abstraction than Scheme provides. I have
concluded that Scheme is a pretty dumb language as standardized. had all
of this hype about functions as first-class arguments been true, wouldn't
Scheme have used them more often, I wonder.
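
for instance:

  (member "foo" '("FOO" "bar" "baz") :test #'string-equal)
  ;; => ("FOO" "bar" "baz")

  (member 2.0 '(1 2 3) :test #'=)
  ;; => (2 3)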

| If you need to understand pattern matching to master CL, this is one
| strike against CL. Scheme doesn't have this; it isn't necessary.

you don't need to.

| If you need to understand tuples and structures to master CL, this is one
| strike against CL.

you don't need to.

| Are values a new data type in CL?

no, all functions return multiple values in Common Lisp.

| If you have to learn and differentiate between several DEFs, this is a
| strike against CL.

you need to know only `defvar' and `defun' to get going.

| While scheme has essentially "define," Common Lisp has several
| "DEFsomethings". Why does Common Lisp have some many definitions?

(1) because Common Lisp recognizes that a single namespace for functions
and variables is bad for you. (2) because Common Lisp has features that
Scheme does not have.
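
e.g., the two namespaces let you write, without any renaming:

  (let ((list (list 1 2 3)))
    ;; the variable LIST and the function LIST do not collide
    (list list list))                   ; => ((1 2 3) (1 2 3))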

`defsetf' was mentioned. in Common Lisp, if you have a function (foo x)
that returns some piece of information, the typical function to make that
function return some new value known by your program is (setf (foo x)
new-value). e.g., if you retrieve elements from an array with (aref A i),
you store a new value with (setf (aref A i) x). in Scheme, you use
specialized functions to access different kinds of arrays, and you must use
different functions to store values into them, too. you define your own
setf methods (or "setter functions") with defsetf. you can also define
them with (defun (setf foo) ...) just like other functions. I find this
very elegant, and certainly much more so than functions named `set-foo!'.
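
for instance, a sketch (the accessor name `temperature' is made up):

  (defvar *temperatures* (make-hash-table))

  (defun temperature (city)
    (gethash city *temperatures*))

  (defun (setf temperature) (new-value city)
    ;; the "setter function" invoked by (setf (temperature ...) ...)
    (setf (gethash city *temperatures*) new-value))

  (setf (temperature 'oslo) -7)
  (temperature 'oslo)                   ; => -7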

| If you need to understand "meta classes" to master CL, this is also one
| strike against CL.

sigh. you don't need to.

| If you need to understand several flags and options in the print
| functions, this is one strike against CL.

sigh. you don't need to.

| Regarding the evaluation model, CL's doesn't treat functions as first
| class objects. I can't pass functions around like other values (numbers,
| lists, etc).

you should have asked a question. the above is as untrue as you can get.
however, functions aren't normally values of variables. this is seldom as
useful as Schemers think it is. the only difference between Scheme and
Common Lisp regarding functions is that in Scheme the first element of an
expression is evaluated like the rest of the elements, whereas in Common
Lisp, it is evaluated specially. evaluating a function call form in Scheme
means (apply #'funcall (mapcar #'eval form)), except that Scheme is allowed
to evaluate arguments in any order, whereas in Common Lisp it means
(apply (car form) (mapcar #'eval (cdr form))), keeping with Common Lisp
syntax in both cases for the sake of comparison.
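
a small illustration of passing functions around as values in Common
Lisp (the name `compose' is just for illustration):

  (defun compose (f g)
    ;; F and G are function objects, passed in as ordinary values
    (lambda (x) (funcall f (funcall g x))))

  (funcall (compose #'1+ #'abs) -4)     ; => 5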

| However, IMHO, it is hard to defend that CL is easier to master than
| Scheme. Just compare the reference manual size.

Scheme is a relatively low-level Lisp. you _can_ build tall abstractions,
but you run into many serious problems in scaling, not to mention the fact
that Scheme is a language of bare necessities, like living in a cave, while
Common Lisp is a language of complex societies where all the infrastructure
you want to take for granted is indeed already implemented.

however, I can sympathize with you on the cognitive level. Common Lisp
seems large and unwieldy. let me illustrate with an anecdote. I moved to
California to work on a project and stayed for five months. when I first
got there, I found that I experienced cognitive overload in supermarkets.
this was the first time I had had to buy my own groceries in the U.S. and
was not at all used to the variety or the brand names or anything. there
were dozens of different maple syrups, a hundred kinds of bread, etc. back
home in Oslo, there is much less variety, and some stores even specialize
in having a small number of products, like 800. gradually, over many
years, I had come to choose around 40 different products that I bought on a
regular basis. I could shop sleeping. however, in California, my nearest
supermarket was quite small by U.S. standards, and offered only 3000 or so
different products, none of which even looked like what I was used to. it
dawned on me after having become quite tired of the first couple shopping
experiences that I had tried to take in all of them at the same time, that
I had no grounds for comparisons, and that I still didn't think in dollars
so I didn't even have a working economic model to fit things into. in
response to this cognitive overload, I systematically went through a small,
new section of the store every time I went there, after I had found a
subset of their products that I could eat. it still took two months before
I had a working knowledge of brand names, product categories, price levels,
etc. however, my point is that although I recognized a problem in my
approach to something new and very large by my standards, I didn't starve
while I tried to sort out which maple syrup to pour on which breakfast
waffles with which fruits on it. (I learned later that I had skipped the
best maple syrup, too.) when I got home, I had acquired a taste for the
variety, and found myself buying at least 5 times more kinds of goods on a
regular basis than I used to. if you like, Scheme is like a bakery that
produces three kinds of bread according to clearly specified nutritional
models, while Common Lisp is like a supermarket with baked goods from a
large variety of bakeries. once you enjoy the variety, you won't find the
bakery confined to "only what's good for you" to be much of a treat.

it may be worth noting that I had already become really annoyed by C's and
Unix' lack of useful functions and all the manual work that was necessary
to get even simple things done. (such as: in a system-wide shell init
file, you need to set the search list (path) for interactive shells. it is
important that a few directories be present for all users, but their order
is immaterial and may be chosen by the user. if some directory is not
present in the search list, it should be made the new first element of the
sublist that contains only elements from the required list, in other words:
it should be added before any other required directories, but after any
that the user might have set up. exercise: do this in the various shells,
in Scheme, and in Common Lisp. for extra bonus points, do it in perl.)
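
(one possible Common Lisp reading of the exercise, with made-up names
and the search list represented as a list of directory strings:

  (defun ensure-required-directories (path required)
    ;; insert any missing required directories just ahead of the first
    ;; directory in PATH that is itself one of the required ones.
    (let ((missing (remove-if (lambda (dir)
                                (member dir path :test #'string=))
                              required))
          (pos (or (position-if (lambda (dir)
                                  (member dir required :test #'string=))
                                path)
                   (length path))))
      (append (subseq path 0 pos) missing (subseq path pos))))

  (ensure-required-directories '("/home/me/bin" "/usr/bin" "/bin")
                               '("/usr/bin" "/bin" "/usr/local/bin"))
  ;; => ("/home/me/bin" "/usr/local/bin" "/usr/bin" "/bin")

the point being that member, remove-if, position-if, and subseq are
already there.)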

Matthew R Wette

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Here's another stick on the fire:

On a SPARCStation an Allegro CL uses *11 meg* to print "hello, world".
SCM uses *1 meg* to print "hello, world".

CL requires more $$ for ram and disk.

Matt
--
matthew...@jpl.nasa.gov -- I speak for myself, not for JPL.

Martin Cracauer

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Tim Bradshaw <t...@aiai.ed.ac.uk> writes:

>* Reini Urban wrote:
>> On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
>>> Both Common Lisp and Scheme basics are relatively easy to learn.

>> In my eyes Common Lisp is quite hard to learn
>> (compared to standard lisp or scheme)

>If it's possible to ask this question without provoking endless futile
>discussion, could you say why? I've taught courses on Common Lisp,
>and it would be interesting to know what people find hard about basic CL,
>especially compared to scheme.

>I can see that CL is very *large* & therefore intimidating compared to
>Scheme, but that can be fixed by teaching it the right way. Scope &
>extent people find hard, but is common between them. Different fn and
>variable namespaces make CL harder I think. call/cc makes scheme very
>much conceptually harder though for many people.

>So it would be quite interesting to know why CL is harder (or why
>scheme is harder) and how that could be fixed, if it can.

As someone who likes Common Lisp, I also found it quite hard to
learn.

1) Some concepts were quite hard for me to get: scoping, multiple
namespaces, reader macros (understanding existing programs is a lot
easier once you get the idea that all these #-constructs are just the
same as calling an S-expression macro) and, in some ways,
setf-constructs.

2) I learn best by reading existing sources. In Common Lisp, you will
face the whole range of the language.

On the other hand, while it was hard to read such programs, it was
very useful. For example, a sensible programmer chooses the best
sequence type for a given task. In Common Lisp, he will most likely
use constructs that you can look up in CLtL2. In C++ before STL, he
will usually implement his own stuff or use a non-standard lib, and in
C people are very likely to push everything into arrays.

3) While I liked the syntax, I found it to be pretty uncomfortable
when it comes to accessing sequence members, struct entries, and
instance variables.

Also, the syntax for declarations is not really intuitive.

4) It is not easy to get decent performance out of Common Lisp when
you don't have an idea what makes a hashtable different from a
non-hashing association table, why people invented lists, and why
it is useful to keep a pointer to the end of a list.
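
For example, the association-list versus hashtable distinction looks
like this (a tiny sketch):

  (cdr (assoc 'b '((a . 1) (b . 2))))     ; linear search  => 2

  (let ((h (make-hash-table)))
    (setf (gethash 'b h) 2)
    (gethash 'b h))                        ; hashed lookup  => 2

Same answer, very different cost once the table gets large.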

Of course, you will not write well-performing C programs either, but in C
you are likely to use arrays of inlined data members, which was fast
enough for my applications. In Common Lisp, you might end up using lists
like arrays and get no compile-time typechecking at all.

5) Usually no source-level debugging. I found it very useful to step
through a C program with variable watches turned on.

6) Profiling requires you to set up the symbols you want traced in
advance. With GNU gprof for C, you just say "profile this program" and
it uses all functions it has an entry for.

7) Environment issues. It takes some time to get used to working in a
permanent image. For example, why can't you load a package that
contains a symbol you just tried to access? Because you just caused
the symbol to be created in the current package. Not easy to get for a
batch-language user.

I don't think Scheme is much easier to learn. C is at least easier to
understand for someone who has an understanding of how a CPU works and
how data is arranged in a computer (which was the case for me).

I don't want to start a language flame war, also. After all, this is a
description of a past state of mine and you're not going to change
history, no matter how wrong I was :-)

Happy Lisping

cosc...@bayou.uh.edu

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Earl & Daniella Harris (esha...@widomaker.com) wrote:

[Snip]

: Ok. I don't understand CL. When I tried to teach myself CL, I was
: expecting a simple language like Scheme. The details of CL seemed
: overwhelming. I kept asking myself, why is this SO complicated?
: This is suppose to be Lisp?

It's so complicated because it's so powerful. There's so much
that you can do in Common Lisp that there are naturally many
things to learn. Note however that you can put off learning
many of these things and still write effective programs.
A testament to this fact is that many of Common Lisp's
capabilities can be (and often are) written in Common Lisp
itself, using a few core primitives.
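
A toy illustration: the standard WHEN macro could be (and in many
implementations is) just a macro over IF, like this made-up version:

  (defmacro my-when (test &body body)
    `(if ,test (progn ,@body) nil))

  (my-when (> 3 2)
    (princ "yes")
    'done)                ; prints yes, returns DONE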


: IMHO, CL looks like it is much harder to master the Scheme. The
: following is my rebuttal.

Again it's harder to master, but then that's because there is
more to master. That's like saying "Chess is Harder to
Master than Checkers". Well sure it is, but that's not
a disadvantage of chess!


: > Then some people try to tell you that CL lacks pattern matching
: > like some other functional language. Not only is it easy
: > to integrate pattern matching, but they don't understand,
: > that for larger software libraries pattern based invocation

: If you need to understand pattern matching to master CL, this is one
: strike against CL. Scheme doesn't have this; it isn't necessary.

Common Lisp doesn't offer pattern matching. Maybe you should
actually try to learn it or look at it in detail before reaching
such hasty conclusions.


: Why would I want (or need) pattern matching in Lisp? Is this like
: pattern matching in ML? Can you use CL and avoid using patterns?

Scheme is basically a tiny version of Common Lisp (this may be a bit
of an oversimplification), so that alone should give you an
idea of what Common Lisp can and can't do.


: > is not very maintainable. Then people begin to tell you
: > that CL does not allow to return "tuples" of values.
: > Again this is easy (use VALUES, or structures, whatever).

: If you need to understand tuples and structures to master CL, this is
: one strike against CL.

Mastery of any language means that you should know the ins and outs
of that language. Contrast this with being able to effectively
use a language. With Common Lisp you can effectively use the
language without understanding structures (I don't even know
if tuples are supported). You can simply use lists instead of
structures.

Structures however are very simple to use, and very well designed,
and they are there for when you are ready for them.


: Scheme doesn't have tuples; it isn't necessary. Some scheme
: implementations have structures, but you don't need to learn it.

I don't think that Common Lisp even has tuples either, and you
don't have to learn structures in Common Lisp. Again you've
got lists, just use them.


: Tuples? Values? Structures? If I want to return more than value,
: I return them in a list (or vector). Why would I want tuples?

You could do the same in Common Lisp.


: Are structures like structures in C? How are tuples different from
: structures and lists? Are values a new data type in CL?

I can't speak for tuples, and I'm not sure what you mean by
"values", but I can tell you how the structures in Lisp work.

Basically you define a structure (much like you would in C), but
instead of using a "." to access structure fields, Lisp creates
specialized functions -- accessor functions for you to access
the structure fields with. This hides the implementation details
and makes it a snap for you to later replace them with another
implementation (if you so desire). Structures in Lisp are
basically ADTs (abstract data types), and so they are like structures
in other languages in that you access particular fields and can
refer to them as a whole, but are different in that they
are abstracted.
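
A quick sketch:

  (defstruct point
    x
    y)

  ;; DEFSTRUCT writes MAKE-POINT, POINT-P, POINT-X, POINT-Y, etc. for you
  (let ((p (make-point :x 3 :y 4)))
    (list (point-x p) (point-y p)))    ; => (3 4)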


: >
: > Common Lisp is releatively easy to understand. Not
: > full Common Lisp - you don't need to tell them about
: > DEFSETF or about meta classes. But Common Lisp
: > has the same basic properties like Scheme.

: If you have to learn and differentiate between several DEFs, this is
: a strike against CL.

Again you can do a lot (possibly more than Scheme) without
differentiating between several DEFs. Again, think of Scheme
as a tiny subset of Common Lisp. With Common Lisp, you can
choose to use a tiny subset, and so it can be very much like
Scheme (in terms of simplicity). No one is forcing you to
use all these features, but they are there for when you
need them.


: While scheme has essentially "define," Common Lisp has several
: "DEFsomethings". Why does Common Lisp have some many definitions?

Because they do different things. Again you are not forced
to use all these definitions.


: If you need to understand "meta classes" to master CL, this is also
: one strike against CL.

The same answers apply, let's just fast forward since this is
redundant.

[Snip]

: Scheme's printing options are much clearer to me. You apply the
: print function to the value and it prints it.

: CL has really exotic print functions with lots of flags and options.
: It reminds me of C's print function. There are more details.

Again use what subset makes you feel comfortable. Scheme is simpler
because it's *WEAKER*. Get it?


[Snip]

: I'm not sure a libary makes a language easier to master.
: A libary can be a convenience to the programmer. It saves
: me the trouble of writing some programs.

Think of much of Common Lisp as optional libraries for you
to use when you decide you need them.


[Snip]

: Regarding the evaluation model, CL's doesn't treat
: functions as first class objects. I can't pass functions
: around like other values (numbers, lists, etc).

Yes you can. The thing is with Common Lisp (as contrasted
with Haskell), you'll need a special quoting notation
to keep things clear, but that's it.

That's how functions like funcall and apply work, by
taking functions as arguments. If you couldn't do that
in Common Lisp then how do these functions even exist?
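
For example:

  (funcall #'+ 1 2 3)           ; => 6
  (apply #'max '(3 1 4 1 5))    ; => 5
  (mapcar #'evenp '(1 2 3 4))   ; => (NIL T NIL T)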


: I bet CL is really easy to master, when compared to C++.

: However, IMHO, it is hard to defend that CL is easier to
: master than Scheme. Just compare the reference manual size.

I'm not arguing that CL is harder to master than Scheme, I'm
merely trying to point out that mastery is one thing, and
using something productively is another, and you can use
CL productively without all that much effort.


: In CL defense, one could argue that CL has other advantages over
: Scheme. I bet it is easy print out a number in hexidecimal format.

That was uncalled for. Common Lisp is vastly more powerful than
Scheme and is therefore larger. It's that simple. If you can't
come to grips with this simple fact, then maybe Common Lisp
really is beyond you and you should stick with something simpler
like Scheme.


[Snip]

: Earl Harris Jr.

--
Cya,
Ahmed

In order to satisfy their mania for conquest, lives are squandered
Discharge

Rainer Joswig

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <854545...@wildcard.demon.co.uk>,
cyber_...@wildcard.demon.co.uk wrote:

> I agree that pattern matching can be added to CL, and other
> Lisps too, but I'm not sure why you say that "for larger
> software libraries pattern based invocation is not very
> maintainable." The issue of code maintainence isn't effected
> by pattern matching, as far as I'm aware.

Not? Right now most of these functional languages
have tuples and lists, but not records. Write
large software with lots of datatypes (a window system, ...)
using pattern matching. It is possible, but now try
to change the underlying data representation.
What effects does this have on external users if
the implementation changes, etc.? Could it
be difficult to understand the software if you
have lots of patterns and you have to look very
closely to determine which function will be invoked
when?

> It looks like you're defending CL by attacking another style
> of programming. I hope this not the case, as I don't believe
> that it's necessary. No language is so good that it justifies
> such an attack, and I'm know that you can find better ways
> of defending CL.

This is your interpretation. I don't like it.

But some people were already
struggling with similar approaches years ago. It's
a bit like what people experienced with rule-based
languages in real-world software (like OPS5-based
configuration systems, or even the infamous sendmail).

> Haskell is _also_ easy to understand. It's just a little
> different to CL (and C++), but I find I can apply a great
> deal of my experience in Lisp to Haskell. Like Lisp, it
> helps you learn it using a good book, and know people who
> can answer your questions.

Until someone has understood monadic IO, he may already
have successfully written some 10000 lines of
Common Lisp stream-based IO code. Also,
I might add, the Haskell type system is not *that*
easy to understand.

I'm not saying anything bad about Haskell, it's just that
even with FP knowledge it is not easy to master and some
books (there is a nice German one) about Haskell do look
like white noise to the uninitiated.

> Mastering any language worth learning takes time. After
> more than 10 years, I'm still learning CL.

After more than 10 years, I'm still writing lots
of code with Common Lisp.

> My favourite languages are still CL and Scheme, but I'm
> beginning a love affair with Haskell. I don't feel that
> I'm cheating any of them, as they're only tools. Damn fine
> tools, just the same.

You have still failed to ground your "love affair" in
rationalism (as far as that is possible).

Steve Austin

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

On 30 Jan 1997 20:15:48 +0000, Erik Naggum <er...@naggum.no> wrote:

>(1) because Common Lisp recognizes that a single namespace for functions
>and variables is bad for you.

Could you clarify this for me please? I'm very much a newcomer to
Common Lisp, and I naively assumed that the originators of Scheme used
a common namespace to simplify the syntax of higher order functions.
What advantages do separate namespaces provide?

Steve Austin
sau...@nf.sympatico.ca


Michael Sperber [Mr. Preprocessor]

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

Some misconceptions about Scheme from the view of CL programmers need
clarification.

>>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:

Erik> in Scheme, I must implement my own `member' with `string-equal'
Erik> as the predicate.

In Scheme, equal? works on strings. No need to.

Erik> in practice, I implement a new `member' (which must be called
Erik> something other than `member' since Scheme doesn't have packages
Erik> and redefining isn't kosher), which takes a function as
Erik> argument.

Redefining *is* kosher in Scheme as of the IEEE standard.

Erik> in like manner, I must reimplement everything else I
Erik> need with a higher level of abstraction than Scheme provides.

At least that is easy in Scheme. In Common Lisp, if I want call/cc
(and it is *much* more useful than Common Lisp programmers usually
care to acknowledge), I cannot express it in terms of other Common
Lisp primitives.
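
(The nearest Common Lisp gets - BLOCK/RETURN-FROM and CATCH/THROW - is
escaping, dynamic-extent exits only.  A sketch, with a made-up name:

  (defun find-first-even (list)
    (dolist (x list)
      (when (evenp x)
        ;; escape outward; unlike a continuation captured by call/cc,
        ;; this exit cannot be stored and re-entered later.
        (return-from find-first-even x))))

  (find-first-even '(1 3 8 5))   ; => 8

which covers early exits, but not coroutines, backtracking, and the like.)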

The high-level macro system that Scheme has (about to be made a
mandatory feature for R5RS) (or something with equivalent
functionality) is very hard to implement right in Common Lisp. I
doubt that it's been done. (Except maybe as part of PseudoScheme :-})
I'd be happy to be educated on the subject.

Erik> (1) because Common Lisp recognizes that a single namespace for functions
Erik> and variables is bad for you.

Again, that's an assertion without any proof. Multiple namespaces
greatly complicate dealing with names conceptually, especially when
the same name has multiple bindings with disjoint meanings. Possibly
a matter of taste, admittedly.

Erik> (2) because Common Lisp has features that Scheme does not have.

So? Scheme has features that Common Lisp does not have.

Erik> `defsetf' was mentioned.

defsetf is trivial to define with Scheme high-level macros.

Erik> you should have asked a question. the above is as untrue as you can get.
Erik> however, functions aren't normally values of variables. this is seldom as
Erik> useful as Schemers think it is.

Erik, you should have asked a question. It is immensely useful all
the time.  I'd be happy to send you oodles of source code where having
to use funcall would greatly screw up the code. Admittedly, code that
CL programmers would

Erik> Scheme is a relatively low-level Lisp. you _can_ build tall abstractions,
Erik> but you run into many serious problems in scaling,

Such as?

Erik> not to mention the fact that Scheme is a language of bare
Erik> necessities, like living in a cave, while Common Lisp is a
Erik> language of complex societies where all the infrastructure you
Erik> want to take for granted is indeed already implemented.

As far as infrastructure for building abstractions is concerned, I
want (and need) call/cc and macros. So?


Erik Naggum

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

* Matthew R. Wette

| On a SPARCStation an Allegro CL uses *11 meg* to print "hello, world".
| SCM uses *1 meg* to print "hello, world".

this is very odd. ACL 4.3 on my SPARCstation has a swap footprint close to
5M. CMUCL has a swap footprint of about 1M. scsh uses 9M, and MIT Scheme
eats 12M.

| CL requires more $$ for ram and disk.

some Scheme _implementations_ require far more RAM and disk than some
Common Lisp _implementations_, and vice versa, I'm sure.

Erik Naggum

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

* Michael Sperber

| Some misconceptions about Scheme from the view of CL programmers need
| clarification.

that may be, but please do not add more of them.

| >>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:
|
| Erik> in Scheme, I must implement my own `member' with `string-equal'
| Erik> as the predicate.
|
| In Scheme, equal? works on strings. No need to.

`equal' is case sensitive. `string-equal' is not. `equal?' in Scheme is
also case sensitive. if this is not sufficient, choose a different
function, and get the point.

| Erik> in like manner, I must reimplement everything else I
| Erik> need with a higher level of abstraction than Scheme provides.
|
| At least that is easy in Scheme.

sigh. it may be hard, it may be easy. in Common Lisp I don't have to.

| Erik> (1) because Common Lisp recognizes that a single namespace for functions
| Erik> and variables is bad for you.
|
| Again, that's an assertion without any proof. Multiple namespaces
| greatly complicate dealing with names conceptually, especially when
| the same name has multiple bindings with disjoint meanings. Possibly
| a matter of taste, admittedly.

where was the first "assertion without proof"? your own?

| Erik> `defsetf' was mentioned.
|
| defsetf is trivial to define with Scheme high-level macros.

again, you need to roll your own. all those "trivial" things add up.

| Erik> however, functions aren't normally values of variables. this is
| Erik> seldom as useful as Schemers think it is.


|
| Erik, you should have asked a question. It is immensely useful all the
| time.

because in Scheme, you have no other choice. if you need it in Common
Lisp, you've implemented a different evaluation model before all those
trivial issues in Scheme have been implemented.

| I'd be happy to send to oodles of source code where having to use funcall
| would greatly screw up the code.

"greatly screw up the code"? misconceptions, eh? you're marketing.

| Erik> Scheme is a relatively low-level Lisp. you _can_ build tall
| Erik> abstractions, but you run into many serious problems in scaling,
|
| Such as?

lack of a standard package system, for starters.

| Erik> not to mention the fact that Scheme is a language of bare
| Erik> necessities, like living in a cave, while Common Lisp is a
| Erik> language of complex societies where all the infrastructure you
| Erik> want to take for granted is indeed already implemented.
|
| As far as infrastructure for building abstractions is concerned, I
| want (and need) call/cc and macros. So?

as if Common Lisp didn't have macros. sheesh!

call-with-current-continuation is unique to Scheme. somehow, people can
actually get work done in other languages. listening to Schemers, I wonder
how this is at all possible without call-with-current-continuation. could
it be that Scheme has removed all the _other_ mechanisms and replaced them
with a single very complex idea that is then used to reimplement them all?

in Scheme, you have to implement a lot of minor stuff. this creates one
Scheme environment per user or group of users. such is indeed the case.
in Common Lisp, it's there.

Michael Sperber [Mr. Preprocessor]

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

>>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:

Erik> * Michael Sperber
Erik> | Some misconceptions about Scheme from the view of CL programmers need
Erik> | clarification.

Still ...

Erik> | Erik> (1) because Common Lisp recognizes that a single namespace for functions
Erik> | Erik> and variables is bad for you.
Erik> |
Erik> | Again, that's an assertion without any proof. Multiple namespaces
Erik> | greatly complicate dealing with names conceptually, especially when
Erik> | the same name has multiple bindings with disjoint meanings. Possibly
Erik> | a matter of taste, admittedly.

Erik> where was the first "assertion without proof"? your own?

The assertion was "a single namespace for functions and variables is
bad for you", to quote you. Yours. No proof.

Erik> lack of a standard package system, for starters.

Admitted, but also possible to build yourself. The code is out there
Erik, just download it. Of course, all these "little things add up."
A matter of taste if you'd rather be able to choose which ones and how
they work, and then grab them, or if Common Lisp pushes all this stuff
at you. For some programmers (me, for instance), Common Lisp rarely
provides the right abstractions, but rather something which is only
almost right. For others, it may be perfect.

This seems to be the difference in design philosophies between Scheme
and CL. Language elements make it into Scheme only on unanimous
consent of the RnRS authors, pretty good evidence that they are "The
Right Thing".

Read Dick Gabriel's paper on Common Lisp for some evidence on why.
I'm not trying to argue that Scheme is "better" than CL (which Erik is
trying to push at me), I'm just saying that people exist who prefer
Scheme to CL. (And that they, too, are getting serious work done in
Scheme.)

Erik> | As far as infrastructure for building abstractions is concerned, I
Erik> | want (and need) call/cc and macros. So?

Erik> as if Common Lisp didn't have macros. sheesh!

CL's macro system is nowhere near as convenient as Scheme macros
and (worse) is far more unsafe.

Erik> call-with-current-continuation is unique to Scheme. somehow, people can
Erik> actually get work done in other languages.

Certain things you can't do without call/cc (or some equivalent
mechanism such as shift/reset), such as building mechanisms for
coroutines, threads etc. With call/cc, however, you can build *any*
control structure.

Erik> listening to Schemers, I wonder how this is at all possible
Erik> without call-with-current-continuation.

I've never seen anybody claim that. Quote someone, Erik, just once!

Erik> could it be that Scheme has removed all the _other_ mechanisms
Erik> and replaced them with a single very complex idea that is then
Erik> used to reimplement them all?

Common Lisp's idea of non-local control transfer is at least as
complex as call/cc, but nevertheless not as powerful. The formal
semantics in the Scheme standard takes up 12 4-inch lines, none of
which has more than 2 inches of stuff on it. Two of those lines are
declaration lines, two are error messages, which leaves 8 operational
lines. Those lines would easily fit on one or two lines on a full
page. How long is the explanation of non-local jumps in CL?

Erik> in Scheme, you have to implement a lot of minor stuff. this creates one
Erik> Scheme environment per user or group of users. such is indeed the case.
Erik> in Common Lisp, it's there.

True. Is this a bad thing for Scheme?

Cheers =8-} Mike

Steinar Bang

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

>>>>> spe...@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor]):

>>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:

Erik> in Scheme, you have to implement a lot of minor stuff. this

Erik> creates one Scheme environment per user or group of users. such
Erik> is indeed the case. in Common Lisp, it's there.

> True. Is this a bad thing for Scheme?

It gets in the way of Scheme becoming a "real" systems programming
language.

Now whether Scheme *should* become one is a completely different
issue.


- Steinar

Bradley J Lucier

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

I don't know if I should comment on a thread that I will soon add to
my kill file, but . . .

I know and like Scheme. Perhaps, as a mathematician, Scheme's small
size and uniform notation (single namespace for variables and functions)
appeals to me. Someone once said that a mathematician tries to
forget as much as possible, in fact anything he can look up, so the small
amount of knowledge I need to remember to use Scheme in a reasonable way
is an advantage.

I don't know Common Lisp, but I've recently read two books on CL that
gave me two different impressions of the language. The first, Paradigms
of Artificial Intelligence, by Norvig, is a great book. It made me
realize that certain CL features, like multiple return values and various
iteration macros, are valuable and useful tools to write code. I've started
to use several of the techniques that Norvig lays out in his book. I
can write the iteration macros in Scheme, and I can box and unbox multiple
return values; it's just more of a pain. Still, for the code I've been
writing, the Scheme notation is shorter and clearer to me.

I'm working through Graham's On Lisp, also a great book, on macros,
nondeterminacy, and other advanced topics in CL programming. My impression
of this book is that Graham starts several chapters saying ``Let's see how
the following would be done in Scheme, where it would be easier, and then
we'll write some CL macros to simulate some limited version of the Scheme
code.'' Consequently, this book gave me a greater appreciation for Scheme's
call-with-current-continuation and what could be done with it.

Brad Lucier luc...@math.purdue.edu

Erik Naggum

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

* Michael Sperber

| Erik> where was the first "assertion without proof"? your own?
|
| The assertion was "a single namespace for functions and variables is
| bad for you", to quote you. Yours. No proof.

FYI, you used the word "again", which implies that there is more than one,
and that those other ones precede the instance for which you use "again".
I can find no mention of any such from you. ergo my question. I'm
sorry to be pedantic about this, but since you revel in rhetorical devices
and accuse me of assertions without proof, it must have been conscious on your
part, although the above suggests that you didn't even read what you wrote.

this is also very odd in light of the rest of your articles. you keep
making claims without proof all through them. only when you can't argue
against something do you need proof, it seems.

this is also odd because it is far from clear _how_ one would "prove" that
a single namespace for functions and variables is bad for anyone, even
though it is. or do I have an opponent who believes that that which cannot
be restated in other terms, in such a way as to constitute a proof, is of
no use to him? in sum, I conclude that you request proof only as a
rhetorical device. next, you'll challenge any possible axioms I might
employ if you can't counter the arguments. et cetera ad nauseam.

| Admitted, but also possible to build yourself.

yeah, "full employment" seems to an argument in favor of Scheme.

| I'm not trying to argue that Scheme is "better" than CL (which Erik is
| trying to push at me), I'm just saying that people exist who prefer
| Scheme to CL.

look, you may engage in marketing and other lies as much as you want, but
please don't blame your opponents for it. the next quotation from you
really annoys me when you argue the way you do.

| CL's macro system is by far not as convenient and (worse) far more unsafe
| than Scheme macros.

in addition to being a blatant case of "argue that Scheme is `better' than
CL", this also seems like an assertion without proof. if you want to argue
against these things, please do so with respect to your own articles first.

| I've never seen anybody claim that. Quote someone, Erik, just once!

you're being insufferably silly. if call-with-current-continuation is the
be-all and end-all of control structures, and other control mechanisms are
not satisfying to Scheme programmers, then _obviously_ they are unable to
get their work done in other languages, right? when Scheme programmers
make the claim that they can't do without call-with-current-continuation,
such as you do, the only possible conclusion is that they need it for
things that other languages don't provide. this argument is repeated every
time somebody wants to compare Common Lisp to Scheme. ergo, one must
conclude that Scheme programmers are unable to get their work done in any
other language. since this looks pretty amazing compared to the fact that
people _do_ get their job done in any number of languages, I must conclude
that the argument for the necessity of call-with-current-continuation is
constructed ex post facto, and as such is specious at best.

| Common Lisp's idea of non-local control transfer is at least as
| complex as call/cc, but nevertheless not as powerful.

what does this mean if not that you _need_ this power, and that Common Lisp
(and every other language) would not be able to provide what you need?

| The formal semantics in the Scheme standard takes up 12 4-inch lines,
| none of which has more than 2 inches of stuff on it. Two of those lines
| are declaration lines, two are error messages, which leaves 8 operational
| lines. Those lines would easily fit on one or two lines on a full page.
| How long is the explanation of non-local jumps in CL?

this argument is so charmingly irrelevant. it suggests that people don't
program in Scheme, they only prove how elegant it would have been. but,
let me quote something you said just above: "I'm not trying to argue that
Scheme is `better' than CL", and contrast it to the above paragraph. do
you get what I get? is it a contradiction?

| True. Is this a bad thing for Scheme?

I started to work with Scheme some time ago. I got "the Unix feeling",
i.e., that of a system being sub-minimalistic in all interesting areas.
oh, sure, lots of things could just be downloaded from somewhere, but (1)
the same name was used in different implementations of unrelated features,
(2) everything worked well with standard Scheme, but little else, (3) that
which was "most useful" was not portable or combinable with other "most
useful" features.

moreover, if you start to use a language, and the best answer to your
request for some functionality is not "look it up in the standard", but "the
code is out there, Erik, just download it", or "it's trivial to build", I'm
hard pressed to accept an argument that the language is actually easy to
learn. in fact, I'm more convinced after this brief discussion than before
that if I want to get my job done in finite time, I should not use Scheme.

the curious thing is that the exact same argument (build it yourself if it
is not in the standard) is used of C, another sadly lacking language. I
wanted to get _away_ from C and the "you want a glass of beer? why,
there's sand on the beach to make the glass and all the ingredients you
need to make beer are out there, Erik, just go and collect them"-type of
"do it yourself"-ism.

I can also understand why cave dwellers don't like cities: they're full of
noise and pollution and so many things that are just inherited from the
past without redesigning them to fit a pure, formal model. but, somehow, I
like cities. they make it possible for me to make a living working from
home in my comfortable living-room-cum-office with only a purring cat to
distract me, instead of having to kill the animal whose remains are
sizzling in the pan and go pick the rice that's boiling or the herbs and
spices I think are needed to make the sauce that I instead make from water,
milk, and prefabricated, powdered sauce.

I close with two quotes from Michael A. Padlipsky's Elements of Networking
Style[1], Appendix 3, "the self-framed slogans suitable for mounting":

"Just because you think you need steel-belted radial tires and the store
only has polyglas-belted ones at present is still no excuse for going off
in a corner and reinventing the travois."

"The `it's _my_ ball' syndrome would be more understandable if home-made
sandboxes really were superior to store-bought sandboxes."

#\Erik

-------
[1] ISBN 0-13-268111-0

Alan Bawden

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to sch...@mc.lcs.mit.edu

I'm not interested in the "Lisp or Scheme" debate. I couldn't actually
tell you what the message I am about to quote from is all about. But one
remark just happened to catch my eye:

Date: 31 Jan 1997 08:28:26 +0100
From: spe...@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])
...
Erik> `defsetf' was mentioned.

defsetf is trivial to define with Scheme high-level macros.

...

Seems to me that this might actually be quite hard.

I guess it depends on exactly what you think makes an adequate replacement
for Common Lisp's whole `setf' facility. If you want proper name scoping
(what some people call "hygiene") -and- you want something efficient (e.g.
`(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
dispatch), then I think the R4RS/R5RS high-level macro system can't do it
alone -- you'll need to use some facilities from your low-level macro
system. The probem is that you need to be able to determine whether the
`car' in the first sub-form of a `setf'-expression still refers to the
usual thing.
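
[For readers who haven't used the facility being discussed: the short form
of CL's `defsetf' just records which update function goes with an accessor,
so that `setf' can expand a call on the accessor into a call on the updater.
A minimal sketch, with a made-up accessor `middle' and updater `set-middle':

    (defun middle (x) (second x))
    (defun set-middle (x v) (setf (second x) v) v)  ; updater must return the new value
    (defsetf middle set-middle)

    (let ((l (list 1 2 3)))
      (setf (middle l) 99)       ; expands into (SET-MIDDLE L 99)
      l)                         ; => (1 99 3)

The harder cases Alan alludes to involve the long form of `defsetf' and
`define-setf-method', where the expansion is computed by arbitrary Lisp code
at macro-expansion time.]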

William Paul Vrotney

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

In article <y9liv4e...@modas.informatik.uni-tuebingen.de> spe...@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor]) writes:

>
> Erik> you should have asked a question. the above is as untrue as you can get.
> Erik> however, functions aren't normally values of variables. this is seldom as
> Erik> useful as Schemers think it is.


>
> Erik, you should have asked a question. It is immensely useful all
> the time. I'd be happy to send you oodles of source code where having
> to use funcall would greatly screw up the code. Admittedly, code that
> CL programmers would
>

Instead of oodles, could you just post one good example in Scheme? I'm not
doubting, I would just like to see other people's view of how funcall is not
as good. Thanks.

--

William P. Vrotney - vro...@netcom.com

Cyber Surfer

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

In article <30636876...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> * Matthew R. Wette
> | On a SPARCStation an Allegro CL uses *11 meg* to print "hello, world".
> | SCM uses *1 meg* to print "hello, world".
>
> this is very odd. ACL 4.3 on my SPARCstation has a swap footprint close to
> 5M. CMUCL has a swap footprint of about 1M. scsh uses 9M, and MIT Scheme
> eats 12M.

A more fair comparison would be with CLISP. This is, as far as I
know, a complete CL system, and yet it can use as little as 1.5 MB.
That's not its working set, either. I used to run it very comfortably
in 8 MB of RAM, using _no_ virtual memory at all, but that's probably
6 MB more than necessary. Since I've not yet found a DOS machine
with only 2 MB of RAM, I've not been able to test it myself.



> | CL requires more $$ for ram and disk.
>
> some Scheme _implementations_ require far more RAM and disk than some
> Common Lisp _implementations_, and vice versa, I'm sure.

This "bean counting" proves nothing. It's always the same with
bean counting. The first Lisp that I can remember reading about
ran in 16K of RAM, on a machine with a Z80 clocked at 1.76 MHz.
I have a C++ compiler and framework that may require more than
80 MB of RAM (my current number of memory beans), but I can also
use it without that framework, or use a totally different C++
compiler. What do these beans tell us? Nothing. If they did mean
anything, then we might be programming in assembly language,
instead of a high level language.

Leave this kind of squabbling to the C and Pascal programmers,
and instead let us demonstrate the value of our enlightenment.

Cyber Surfer

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

In article <joswig-ya0231800...@news.lavielle.com>
jos...@lavielle.com "Rainer Joswig" writes:

> > I agree that pattern matching can be added to CL, and other
> > Lisps too, but I'm not sure why you say that "for larger
> > software libraries pattern based invocation is not very
> > maintainable." The issue of code maintenance isn't affected
> > by pattern matching, as far as I'm aware.
>
> Not? Right now most of these functional languages
> have tuples and lists, but not records. Write
> large software with lots of datatypes (a windows system, ...)
> using pattern matching. It is possible, but now try
> to change the underlying data representation.

Have you read anything about functional languages recently?
You're demonstrating the kind of ignorance of C++ programmers,
when discussing Lisp. User defined aggregate data types have
been available in functional languages since the early 70s,
and perhaps even earlier.

> What effects does this have to external users if
> the implementation changes, etc. Could it
> be difficult to understand software if you
> have lots of patterns and you have to look very
> closely to determine which function will be invoked
> when?

As Erik might say, Bzzzt! Wrong. I'll let you figure out
how to find the appropriate FAQ to read...



> > It looks like you're defending CL by attacking another style
> > of programming. I hope this not the case, as I don't believe
> > that it's necessary. No language is so good that it justifies
> > such an attack, and I'm know that you can find better ways
> > of defending CL.
>
> This is your interpretation. I don't like it.

Fair enough. I don't like your interpretation of the abilities
of functional languages, which is not only wrong, but just a little
bizarre, considering how much FP and Lisp have in common.



> But some people have been
> struggling with similar approaches years ago. Its
> a bit like what people experienced with rule based
> languages in real world software (like OPS5-based
> configuration systems, or even the infamous sendmail).

Huh? These have nothing to do with FP. Try reading some
up-to-date information for a change.



> > Haskell is _also_ easy to understand. It's just a little
> > different to CL (and C++), but I find I can apply a great
> > deal of my experience in Lisp to Haskell. Like Lisp, it
> > helps you learn it using a good book, and know people who
> > can answer your questions.
>
> Until someone has understood monadic IO, he may already
> have successfully written some 10000 lines of
> Common Lisp stream-based IO code. Also
> I might add the Haskell type system is not *that*
> easy to understand.

I agree that it's not so easy, but this is partly due to
it being different to what we, as Lisp programmers, expect.
This doesn't mean that it doesn't work, or that it isn't
a powerful tool. Haskell is also a very young tool, while Lisp
in general has been around long enough for some excellent
tutorials to be written, and for a strong culture to evolve.

> I'm not saying anything bad about Haskell, it's just that


> even with FP knowledge it is not easy to master and some
> books (there is a nice German one) about Haskell do look
> like white noise to the uninitiated.

I agree that the books available may leave a lot to be
desired. (See above.) It's way too early to use this in
a fair comparison. How many Lisp books were available 30
years ago?

This is a dumb way to judge languages. After all, look at
the vast shelves of books for C++. What does that prove?
Absolutely nothing, but that there's a lot of interest in
that particular language.



> > Mastering any language worth learning takes time. After
> > more than 10 years, I'm still learning CL.
>
> After more than 10 years, I'm still writing lots
> of code with Common Lisp.

So am I. This proves nothing. We don't measure the quality
of a language only by KLOCs written per year.



> > My favourite languages are still CL and Scheme, but I'm
> > beginning a love affair with Haskell. I don't feel that
> > I'm cheating any of them, as they're only tools. Damn fine
> > tools, just the same.
>
> You still have failed to ground your "love affair" on
> rationalism (as far as possible).

Wander over to comp.lang.functional, and a number of us may
explain some of it to you. However, this is not the right place.
I'd like to avoid language politics - life is too damn short.

Just consider this: I'm writing CGI code in Haskell. The fact
that I can do this should tell you a great deal. Everything else
is hot air. If you're not interested in the truth about FP, then
you're only doing what so many C++ programmers like to do to us.
I don't wish to have any part in that!

Jeff Dalton

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to Sch...@mc.lcs.mit.edu

> Date: 31 Jan 1997 08:28:26 +0100
> From: spe...@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor])

> Erik> `defsetf' was mentioned.
>
> defsetf is trivial to define with Scheme high-level macros.

That is false. I hereby challenge you to trivially implement
full defsetf with Scheme high-level macros. Your implementation
must be able to handle such things as the example on page 139
of CLtL II.

> Erik, you should have asked a question. It is immensely useful all
> the time. I'd be happy to send you oodles of source code where having
> to use funcall would greatly screw up the code.

Ok, send some source code where having to use funcall would greatly
screw up the code.

> Erik> in like manner, I must reimplement everything else I
> Erik> need with a higher level of abstraction than Scheme provides.
>

> At least that is easy in Scheme. In Common Lisp, if I want call/cc
> (and it is *much* more useful than Common Lisp programmers usually
> care to acknowledge), I cannot express it in terms of other Common
> Lisp primitives.

I, for one, am glad CL does not have call/cc. It makes it too
hard to determine what programs do.

> The high-level macro system that Scheme has (about to be made a
> mandatory feature for R5RS) (or something with equivalent
> functionality) is very hard to implement right in Common Lisp.

How do you know? What about CL makes it hard to "implement right"?

> I doubt that it's been done.

So? You overestimate how valuable it is.

(FWIW, the pattern language is available for CL. I don't know whether
anyone has bothered to implement the hygiene, but there's less need
for it in CL than in Scheme in any case.)

BTW, I happen to think that both Common Lisp and Scheme are good
languages, and I am glad that both are available.

-- jd

Barak Pearlmutter

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to Alan Bawden

Alan Bawden wrote:
>
> defsetf is trivial to define with Scheme high-level macros.
> ...
>
> Seems to me that this might actually be quite hard.
>
> I guess it depends on exactly what you think makes an adequate replacement
> for Common Lisp's whole `setf' facility. If you want proper name scoping
> (what some people call "hygiene") -and- you want something efficient (e.g.
> `(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
> dispatch), then I think the R4RS/R5RS high-level macro system can't do it
> alone.

Huh. In T and Oaklisp, setf (well, set! actually) is defined as a simple macro,
and the form (set! (car x) y) macro-expands to ((setter car) x y), which the
compiler constant-folds (setter car), assuming it is the right setter and the
right car in that context of course, and then the compiler notices that an
open-codable procedure is being applied, so it open codes it.

I don't know if it was hard to do, but it certainly used general purpose
mechanisms, which speed up other aspects of the system also.

I'd be unpleasantly surprised if Scheme48 doesn't also do this.

% oaklisp
Welcome to Oaklisp 1.2 -
> (cc '(car x))
((LOAD-GLO-CON X) (CAR) (RETURN))
> (cc '(set! (car x) y))
((LOAD-GLO-CON Y) (LOAD-GLO-CON X) (SET-CAR) (RETURN))
> (cc '(let ((car blah)) (set! (car x) y)))
((LOAD-GLO-CON BLAH) (LOAD-GLO-CON Y) (LOAD-GLO-CON X) (LOAD-STK 2 CAR) (LOAD-IMM #<LocatableOp 1040> SETTER) (STORE-NARGS 1) (FUNCALL-CXT) ...)
--
Barak Pearlmutter <b...@cs.unm.edu>, http://www.cs.unm.edu/~bap/

Alaric B. Williams

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

On 31 Jan 1997 08:40:29 +0000, Erik Naggum <er...@naggum.no> wrote:

>| >>>>> "Erik" == Erik Naggum <er...@naggum.no> writes:
>|

>| Erik> in Scheme, I must implement my own `member' with `string-equal'
>| Erik> as the predicate.
>|
>| In Scheme, equal? works on strings. No need to.
>
>`equal' is case sensitive. `string-equal' is not. `equal?' in Scheme is
>also case sensitive. if this is not sufficient, choose a different
>function, and get the point.

So there should be a 'first-that' or something that takes a lambda
parameter. An oversight in the standard procedure library isn't much
of a problem IMHO... things can be added to Scheme, but nothing can be
removed from CL...

>| Erik> Scheme is a relatively low-level Lisp. you _can_ build tall
>| Erik> abstractions, but you run into many serious problems in scaling,
>|
>| Such as?


>
>lack of a standard package system, for starters.

There's at least one good package system I've seen. Can't remember
where, but it's there if anyone wants it :-)

>call-with-current-continuation is unique to Scheme. somehow, people can
>actually get work done in other languages. listening to Schemers, I wonder
>how this is at all possible without call-with-current-continuation. could
>it be that Scheme has removed all the _other_ mechanisms and replaced them
>with a single very complex idea that is then used to reimplement them all?

>in Scheme, you have to implement a lot of minor stuff. this creates one
>Scheme environment per user or group of users. such is indeed the case.


>in Common Lisp, it's there.

This just needs the slow extension of the standard functions. IMHO
there should be a standard set of extension packages, like
dictionaries in FORTH. I.e., a package that standardises an exception
system based on call/cc, etc.

The objective of Scheme is to find the basic core that is needed to
implement everything else - or as much else as possible; who knows
what constructs CS research will devise in the future? Clearly, to
make full use of the fruits of that effort, the libraries that extend
that basic functionality are needed - but if they are all defined in
terms of that basic core, implementing Scheme is easy and
/manipulating/ Scheme is easy. Scheme source can be thought of as the
combination of a small set of primitives - well suited for axiomatic
transformations and all that fun stuff...

ABW
--

"Simply drag your mother in law's cellphone number from the
Address Book to the Laser Satellite icon, and the Targeting
Wizard will locate her. Then follow the onscreen prompts for
gigawattage and dispersion pattern..."

(Windows for Early Warning and Defence User's manual P385)

Alaric B. Williams Internet : ala...@abwillms.demon.co.uk
<A HREF="http://www.abwillms.demon.co.uk/">Hello :-)</A>

Erik Naggum

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

* Alaric B. Williams

| So there should be a 'first-that' or something that takes a lambda
| parameter. An oversight in the standard procedure library isn't much of
| a problem IMHO... things can be added to Scheme, but nothing can be
| removed from CL...

sigh. who can choose a language with such proponents, and such arguments
in its favor? it's legitimate for any Scheme user to point out flaws in
Common Lisp like they could win an olympic medal for it, but if you point
out a design problem with Scheme the same people will readily pardon any
and all flaws in Scheme as "oversights" or even worse trivializations.

it is impossible to argue with people who have detached their emotional
involvement in a language (which any language worth using will inspire in
its users) from rational appreciation of its role, relevance, and value.

Scheme is the only language I have ever seen where people will actually
argue in _favor_ of its flaws, explicitly or implicitly by some stupid
non-argument about some other language. once upon a time, I used to think
that a language (SGML) had such wondrous potential that I would ignore all
present flaws and practical problems. I gradually came to understand that
that potential would never be realized, precisely because nobody cared to
fix the present flaws and practical problems -- those who saw the potential
ignored them and talked about how SGML changed the idea of information and
all that fine management-level nonsense, and those who had to deal with
them just found ways to live with them, even arguing against changes!

take a look at Common Lisp's `member' some day. the `first-that' that you
seem to think of is called `member-if' in Common Lisp. it is different
from a `member' with a :test argument. also note the :key argument.
(and _please_ note that :test-not and `member-if-not' are deprecated.)
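
[A few illustrative calls, in case the distinctions are unfamiliar:

    (member "foo" '("FOO" "bar"))                       ; => NIL, default test is EQL
    (member "foo" '("FOO" "bar") :test #'string-equal)  ; => ("FOO" "bar")
    (member-if #'oddp '(2 4 5 6))                       ; => (5 6)
    (member 3 '((1 a) (3 b)) :key #'first)              ; => ((3 B))
]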

Simon Brooke

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

In article <ey3915b...@staffa.aiai.ed.ac.uk>,

Tim Bradshaw <t...@aiai.ed.ac.uk> writes:
> * Reini Urban wrote:
>> On Mon, 20 Jan 1997 11:19:30 +0100, jos...@lavielle.com (Rainer Joswig) wrote:
>>> Both Common Lisp and Scheme basics are relatively easy to learn.
>
>> In my eyes Common Lisp is quite hard to learn
>> (compared to standard lisp or scheme)
>
> If it's possible to ask this question without provoking endless futile
> discussion, could you say why? I've taught courses on Common Lisp,
> and it would be interesting to know what people find hard about basic CL,
> especially compared to scheme.

Hi Tim!

OK, lets start:

(i) LISP2: Why is a function different from a variable? Why is there more
than one name space? (see e.g. Gabriel and Pitman, _Technical Issues
of Separation in Function Cells and Value Cells_, in Lisp and Symbolic
Computation 1, 1 1988).

(ii) Weird lambda-list syntax. I *still* have trouble with this. &KEY,
&OPTIONAL, &REST... Both Scheme and Standard LISP are (to my way
of thinking) much more intuitive in this regard. Having things in
lambda-lists which don't get bound, but which affect the way other
things do get bound, seems ... I don't know. Prejudice, I suppose;
I didn't learn it that way. But I don't like it!
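
[A minimal sketch of the keywords mentioned in (ii), with made-up functions,
for readers who haven't met them:

    (defun greet (name &optional (greeting "hello") &key loud)
      (let ((s (format nil "~a, ~a" greeting name)))
        (if loud (string-upcase s) s)))

    (greet "world")               ; => "hello, world"
    (greet "world" "hi")          ; => "hi, world"
    (greet "world" "hi" :loud t)  ; => "HI, WORLD"

    (defun average (&rest numbers)
      (/ (reduce #'+ numbers) (length numbers)))

    (average 1 2 3)               ; => 2
]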

Those are the two major criticisms I would still make about Common
LISP, ten years down the track. I also dislike the way that the reader
(according to the standard) ignores comments, so that comments are not
(according to the standard) available to an in-core editor; and the
way the reader (according to the standard) ignores case. But these are
details.

--
si...@intelligent.co.uk (Simon Brooke) http://www.intelligent.co.uk/~simon

If you really want a Tory government for ever, keep on voting
Labour. If you want a Labour government soon, vote SNP just once.

Alaric B. Williams

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

On 01 Feb 1997 02:20:53 +0000, Erik Naggum <er...@naggum.no> wrote:

>* Alaric B. Williams
>| So there should be a 'first-that' or something that takes a lambda
>| parameter. An oversight in the standard procedure library isn't much of
>| a problem IMHO... things can be added to Scheme, but nothing can be
>| removed from CL...

>sigh. who can choose a language with such proponents, and such arguments
>in its favor? it's legitimate for any Scheme user to point out flaws in
>Common Lisp like they could win an olympic medal for it,

If only ;-)

> but if you point
>out a design problem with Scheme the same people will readily pardon any
>and all flaws in Scheme as "oversights" or even worse trivializations.

The thing is, I see Scheme as a "work in progress". People invent
stuff, toy with it, get fed up with it, reinvent it in a better way -
then it makes it into some standard.

>it is impossible to argue with people who have detached their emotional
>involvement in a language (which any language worth using will inspire in
>its users) from rational appreciation of its role, relevance, and value.

Yup!

>Scheme is the only language I have ever seen where people will actually
>argue in _favor_ of its flaws, explicitly or implicitly by some stupid
>non-argument about some other language.

Try assembly language :-)

> once upon a time, I used to think
>that a language (SGML) had such wondrous potential that I would ignore all
>present flaws and practical problems. I gradually came to understand that
>that potential would never be realized, precisely because nobody cared to
>fix the present flaws and practical problems -- those who saw the potential
>ignored them and talked about how SGML changed the idea of information and
>all that fine management-level nonsense, and those who had to deal with
>them just found ways to live with them, even arguing against changes!

It ain't a perfect world - too many bigots and idiots :-(

>take a look at Common Lisp's `member' some day. the `first-that' that you
>seem to think of is called `member-if' in Common Lisp. it is different
>from a `member' with a :test argument. also note the :key argument.
>(and _please_ note that :test-not and `member-if-not' are deprecated.)

Exactly. Now, CL has the background and development to make it a
powerful language. The Scheme philosophy is to start again from the
bottom and build from cleaner foundations to - eventually - make a
nicer end product... the 'mem*' thing will probably be an unremovable
flaw in the standard for ever more, but everyone makes mistakes and
ends up deprecating stuff when the new improved "first-that" comes
out!

Ray S. Dillinger

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

Seth Tisue wrote:

> Earl & Daniella Harris <esha...@widomaker.com> wrote:

> >Regarding the evaluation model, CL's doesn't treat
> >functions as first class objects. I can't pass functions
> >around like other values (numbers, lists, etc).
>

> This is totally incorrect. Functions are 100% first class objects in
> Common Lisp, same as in Scheme.

CL's insistence on a separate namespace obscures the issue. It
requires various limitations and constraints to be imposed which
are not needed in Scheme.

In scheme, a function is simply a value, like a number or a
string. In CL, a function is in a separate class -- it has to
be funcalled for example if you're evaluating to it.

((function-that-returns-another-function) argument1 argument2)

is a legal expression in Scheme, because it follows the convention
that *any* value in an expression may be replaced by an expression
which evaluates to it. In CL, you have to write something like

(define-function ?foo (function-that-returns-another-function))
(funcall ?foo argument1 argument2)

I'm just guessing about the use of the define-function syntax; I
must admit it's darn weird as far as I can tell to have different
defines for everything when it's only one operation.

But the point is that functions are treated differently than other
values in CL. You can't insert a subexpression evaluating to your
function in place of the function the way you can with any other
value. You can't use your standard assignment statement to assign
a value to a variable if that value happens to be of type function.

And numerous other warts and inconsistencies.

I value scheme more for what it does *NOT* have -- in the
form of exceptions to its clean, simple rules -- than for
what it does. CL is famous for its libraries, and rightly
so. But Scheme has the cleanest, simplest, and most
consistent rules of operation and evaluation of any language
I've ever come across, and that enables me to write correct
programs much more easily; I always know exactly what an
operation means, because it always means exactly the same
thing, and doesn't have weird exceptions like "you can
always substitute an expression evaluating to a value for
the value itself in any expression -- UNLESS it's a
function..." and "the value of ?foo is 27 -- UNLESS it's
the function ?foo instead of the variable ?foo -- and so
on.

I don't even use the "syntactic sugar" forms in scheme for
defining functions, etc; I *want* to think of lambda as a
function that returns a value, and define as a function
that binds a value to a variable name -- that's what they
do, that's what I want to do, so I use them. It's the same
operation, and therefore the same syntax, as assigning the
value returned from any other function to any variable.
Syntactic sugar, to me, just obscures the issue and introduces
another unnecessary exception to the rules to remember. And
if I don't have to memorize unnecessary junk, I'd rather not.

That, I suppose, is why I love scheme -- no other language
gives me such returns of precision and capability for such a
trifling effort of memorization and practice.

Bear
---
Inventions have long since reached their limit, and I see no
hope for further development.
-- Julius Sextus Frontinus
(Highly regarded engineer in Rome, 1st century A.D.)

Erik Naggum

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

* Simon Brooke

| I also dislike the way that the reader (according to the standard)
| ignores comments, so that comments are not (according to the standard)
| available to an in-core editor; and the way the reader (according to the
| standard) ignores case. But these are details.

which in-core editor (according to the standard) are you talking about?

Robert Sanders

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

"Ray S. Dillinger" <be...@sonic.net> writes:

> ((function-that-returns-another-function) argument1 argument2)
>
> is a legal expression in Scheme, because it follows the convention
> that *any* value in an expression may be replaced by an expression
> which evaluates to it. In CL, you have to write something like
>
> (define-function ?foo (function-that-returns-another-function))
> (funcall ?foo argument1 argument2)

That's not necessary.

(defun make-adder (x)
#'(lambda (y) (+ x y)))
=> MAKE-ADDER

(funcall (make-adder 3) 4)
=> 7

As was stated, functions are first-class objects. Funcall takes a
function argument.

> I don't even use the "syntactic sugar" forms in scheme for
> defining functions, etc; I *want* to think of lambda as an
> function that returns a value, and define as a function
> that binds a value to a variable name -- that's what they
> do, that's what I want to do, so I use them. It's the same

Neither define nor lambda is a function.

Also, you can think of the lack of an explicit "funcall" in Scheme as
a kind of syntactic sugar :-)

regards,
-- Robert

Erik Naggum

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

I find it symptomatic of Schemers' attacks on Common Lisp that they are
ignorant, arrogant, and so devoid of correctness and precision as to merit
no other badge than "prejudice".

* Ray S. Dillinger


| In scheme, a function is simply a value, like a number or a string. In
| CL, a function is in a separate class -- it has to be funcalled for
| example if you're evaluating to it.

Ray, you don't know what you're talking about. it also looks as if you
don't know what a namespace is. a namespace is simply a mechanism to refer
to objects by names taken from different sets of possible names. whether
an object is found by a name in one namespace or in another makes no
difference to the object. you seem to believe it does.

let me paraphrase your complaint with an example of a similar complaint
about Scheme:

in CL, a function is simply a value, like a number or string. to use
any value as a function, apply `funcall' to it. in Scheme, a value has
to be placed first in a form if you wish to call it as a function.
this is unlike the use of any other variable, which does not need this
special position in a form when using its value.

I guess you will automatically think the above is a specious argument, and
will ignore it, without having taken the care to examine the fact that it
is entirely valid, and that its main function is to show that your position
is not one of understanding.

occasionally, we hear that syntax is unimportant, that what matters is the
semantics of languages, indeed that arguments about syntax are _invalid_
because all languages are Turing equivalent. however, when some Scheme
users need to debunk Common Lisp, it is tragically common for them to
stumble in the syntax, and make such utterly false accusations as the above
complaints about namespaces.

let's look at one of your ignorant, invalid examples:

| ((function-that-returns-another-function) argument1 argument2)

vs

| (define-function ?foo (function-that-returns-another-function))
| (funcall ?foo argument1 argument2)

I find it interesting that most Common Lisp users who criticize Scheme do
so with syntactically correct Scheme programs or real examples, whereas
Scheme users who criticize Common Lisp generally don't have a clue. here's
how it would be done by somebody who had actually _seen_ Common Lisp:

(funcall (function-that-returns-another-function) argument1 argument2)

| I'm just guessing about the use of the define-function syntax; I must
| admit it's darn weird as far as I can tell to have different defines for
| everything when it's only one operation.

maybe if you didn't guess as much, you wouldn't make a fool of yourself.
and if you had paid attention, you would have known that it isn't only one
"operation" that is performed by differently-named functions, unless you
define "operation" so generally as to make all computers perform the same
operation, namely heat their immediate surroundings.

I wonder why so many people who love Scheme tend to be so unbelievably
arrogant in their belief that Scheme is superior that they don't even
bother to _learn_ anything about the languages they compare to. this
really makes me wonder what made them love Scheme in the first place. it
surely cannot be intelligent, rational, or informed balancing of features.
it cannot be based on a desire to study and learn languages. instead, Ray,
explains to us, it's based on "a trifling effort of memorization and
practice", if I understood what he meant by that, alien as it is to me.

| But the point is that functions are treated differently than other values
| in CL.

that may be your point, but it is a _fact_ that functions are treated just
like other values in Common Lisp. the difference between Common Lisp and
Scheme is that Scheme evaluates the first element of a form to find which
function to call, whereas Common Lisp regards the first element of a form
as a constant symbol, and looks up the function definition of that symbol.
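
[perhaps the shortest way to see the two lookups side by side -- the symbol
DOUBLE here is made up, and both bindings are legal at the same time:

    (defun double (x) (* 2 x))   ; function binding of the symbol DOUBLE
    (defvar double #'1+)         ; value binding of the same symbol

    (double 10)                  ; => 20, call position uses the function namespace
    (funcall double 10)          ; => 11, argument position uses the value namespace
]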

I'm rather surprised by the many Scheme users who fail to understand such a
simple _fact_, instead preferring to make various bogus "points". every
such "point" reduces the credibility of the entire Scheme community.

| You can't insert a subexpression evaluating to your function in place of
| the function the way you can with any other value.

yes, you can. the problem is that you're very confused, mostly because you
treat Common Lisp as if it were Scheme, and then complain when it isn't.
Common Lisp does not _evaluate_ symbols to a function. period.

| You can't use your standard assignment statement to assign a value to a
| variable if that value happens to be of type function.

Ray, you don't know what you're talking about.

| And numerous other warts and inconsistencies.

I'm sorry to say that you seem to produce the warts you wish to attack
straight out of thin air, and that you wouldn't find half as many if you
had bothered to study Common Lisp before your ignorant attacks on it.

| But Scheme has the cleanest, simplest, and most consistent rules of
| operation and evaluation of any language I've ever come across, and that
| enables me to write correct programs much more easily; I always know
| exactly what an operation means, because it always means exactly the same
| thing, and doesn't have weird exceptions like "you can always substitute
| an expression evaluating to a value for the value itself in any
| expression -- UNLESS it's a function..." and "the value of ?foo is 27 --
| UNLESS it's the function ?foo instead of the variable ?foo -- and so on.

I find this entertaining. you don't know diddlysquat about Common Lisp,
yet speak with a stunning degree of certainty in which you prove beyond
_any_ reasonable doubt that you don't even care to look things up, yet you
know _exactly_ what Scheme does. yeah, right. I think the latter is pure
hyperbole -- that the degree to which you "know exactly" what Scheme does
is the same degree to which you know vaguely how Common Lisp works.

perhaps the difference between Scheme and Common Lisp programmers is that
the Common Lisp programmers _know_ they need to look things up, whereas the
Scheme programmers always _think_ they never need to. I guess that's also
why Common Lisp has documentation strings and Scheme doesn't. the lack of
documentation strings has always bothered me in Scheme. (yes, it's been
rehashed a few times. I know all the arguments.)

| That, I suppose, is why I love scheme -- no other language gives me such
| returns of precision and capability for such a trifling effort of
| memorization and practice.

this is _very_ amusing. your utter lack of precision in your description
of Common Lisp really drives a stake through the heart of your arguments.
but I guess that's the _real_ return you get from a trifling effort of
memorization and practice.

why do so few Scheme users seem to care enough to be accurate? could it be
influence from the language? or could it be that people who _don't_ care
flock to Scheme? I'm led to wonder by the many Scheme proponents who like
to attack strawmen and problems only of their own imagination.

Alan Bawden

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to b...@cs.unm.edu, Sch...@mc.lcs.mit.edu

Date: Fri, 31 Jan 1997 17:44:02 -0700
From: Barak Pearlmutter <b...@cs.unm.edu>

Alan Bawden wrote:
>
> defsetf is trivial to define with Scheme high-level macros.
> ...
>
> Seems to me that this might actually be quite hard.
>
> I guess it depends on exactly what you think makes an adequate replacement
> for Common Lisp's whole `setf' facility. If you want proper name scoping
> (what some people call "hygiene") -and- you want something efficient (e.g.
> `(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
> dispatch), then I think the R4RS/R5RS high-level macro system can't do it
> alone.

Huh. In T and Oaklisp, setf (well, set! actually) is defined as a
simple macro, and the form (set! (car x) y) macro-expands to ((setter
car) x y), which the compiler constant-folds (setter car), assuming it
is the right setter and the right car in that context of course, and
then the compiler notices that an open-codable procedure is being
applied, so it open codes it.

Of course I knew about this technique, but I didn't want to get bogged down
in the details. I always forget that around here, getting bogged down in
the details is the name of the game.

Permit me to amend my original statement:

If you want proper name scoping (what some people call "hygiene") -and-
you want something efficient (e.g. `(setf (car x) 3)' compiles into
`(set-car! x 3)' and not some runtime dispatch), then I think the

R4RS/R5RS high-level macro system can't do it alone. Either you will
need to resort to using the low-level facilities of your implementation,
or you will need to know that your compiler knows how to optimize
something like `setter'.

Since neither the low-level facilities, nor a guarantee about optimizing
`setter', is present in the standard, no properly scoped AND efficient
implementation of `defsetf' seems possible.

Barak Pearlmutter

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to Alan Bawden, Sch...@mc.lcs.mit.edu

Well sure, I suppose if the compiler won't do even the simplest
optimizations (inlining and constant folding) then you can't expect
some particular kinds of macros to be fast. But heck, under those
conditions you can't expect anything else to be fast either - so why
pick on poor little SETF ?

Lyman S. Taylor

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

I had this at the bottom of my original draft of this. I thought I put
it at the top....

For programming "in the small" Scheme is cool. It doesn't scale up well,
though. Right tool for the right job...

Although the Common Lisp approach is perhaps a tad bit more "complicated",
having a separate namespace for function names isn't a totally bad thing.


In article <32F3A4...@sonic.net>, Ray S. Dillinger <be...@sonic.net> wrote:
>Seth Tisue wrote:
>
>> Earl & Daniella Harris <esha...@widomaker.com> wrote:
>> >Regarding the evaluation model, CL's doesn't treat
>> >functions as first class objects. I can't pass functions
>> >around like other values (numbers, lists, etc).
>>
>> This is totally incorrect. Functions are 100% first class objects in
>> Common Lisp, same as in Scheme.
>
>CL's insistence on a separate namespace obscures the issue. It
>requires various limitations and constraints to be imposed which
>are not needed in Scheme.
>

>In scheme, a function is simply a value, like a number or a
>string. In CL, a function is in a separate class -- it has to
>be funcalled for example if you're evaluating to it.

No, it is not a separate "class"... it is a value in a separate namespace.
What is different is the evaluation rule, not the "classification".
In Common Lisp when looking for the function "value" of an s-expression
to be evaluated you look in the function namespace for that value.

[ If you wished to stick non function "values" into the function
namespace you could do the following:

(setf (symbol-function 'foo ) 3 )

However, it is not likely that setf will allow such an operation
since it is by no means "productive". More on how this makes
things simpler later...]


>((function-that-returns-another-function) argument1 argument2)
>
>is a legal expression in Scheme, because it follows the convention
>that *any* value in an expression may be replaced by an expression
>which evaluates to it.

You might then be perplexed by the following which works in CL...

( (lambda ( x y ) (+ x y )) argument1 argument2 )

[ Other than in "function name position" of a function call if you
just put a #' in front of everywhere you see a lambda in Scheme
I think that works out ok. I tend to think of it as lambda placing
values into the function namespace by default. I think that is to
make the above expression "consistent". A call to LAMBDA is the
only "function call" allowed in this position. This does mean that
in other placements you'll need to preface the lambda expression
with #' to get the value back. ]

>
>(define-function ?foo (function-that-returns-another-function))
>(funcall ?foo argument1 argument2)

...

Perhaps you mean:

(setf foo (function-that-returns-another-function))
(funcall foo argument1 argument2 )

Or you could just drop the setting of the variable altogether:

(funcall (function-that-returns-another-function) argument1 argument2 )

Which compared with the Scheme version involves only inserting a FUNCALL
just after the left paren... "Radical" huh? :-)


>I'm just guessing about the use of the define-function syntax; I
>must admit it's darn weird as far as I can tell to have different
>defines for everything when it's only one operation.

...
>value. You can't use your standard assignment statement to assign

>a value to a variable if that value happens to be of type function.

Buzz....

(setf baz #'(lambda (x ) x ))

(functionp baz ) ==> T

If I choose to "invoke" the function then I must follow CL evaluation
rules. By which:

( baz .... )

doesn't work because BAZ doesn't have a binding in the function
namespace... which is where I'm supposed to look for such a value.

If you really only wanted one definer you could use the "all powerful"
SETF everywhere. ;-)

(setf (symbol-function 'foo ) #'(lambda ( ... ) .... ) )

instead of

(defun foo ( ...) .... )


>And numerous other warts and inconsistencies.
>

...


>the value itself in any expression -- UNLESS it's a
>function..." and "the value of ?foo is 27 -- UNLESS it's
>the function ?foo instead of the variable ?foo -- and so
>on.

This is the wrong way to look at it. ?foo has multiple properties,
one of which is its "value" and one is its "function value" ( i.e.
binding in the function namespace).

(setf ?foo '( 23 24 ) )

[ or if you prefer (setf (symbol-value '?foo) '( 23 24) ) ]

(setf (symbol-function '?foo) #'car )

?foo ==> ( 23 24)
(?foo '( a b )) ==> A
(?foo ?foo) ==> 23

The "complication" is recognizing which position in the function call
expression you are in to choose which "namespace" to look into for the value.
To some extent, in scheme you have to worry about position in the
expression too. After all

( (+ 1 3 ) .... )

isn't going to work. Not just any old expression can be in that first position.
I don't see the increase in "cognitive" workload from having to think
about value versus namespace as all that great. Especially when the semantics
of the setters are set up so that only function values can be found in the
function namespace.

Way too many languages use multiple namespaces to positive effect
to discount their usefulness [ e.g. record element names ( or struct member
names in 'C'-land) are defined in the namespace of the record itself.
Imagine how much of a pain it would be if all these names had to be unique.]
In programming in a team of 3-5 people I would rather not have to get
consensus as to what ALL of my function and module "globals" names have to
be in order not to have a name collision. I'd rather specify an interface
and say "this" is what I'm going to "provide" and "this" is what I need.
[ I don't find the practice of putting a unique prefix on all of my
module vars and functions very appealing either... ]
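
[ A small sketch of the "interface" point in CL terms; the package and
function names here are made up:

    (defpackage :parser
      (:use :common-lisp)
      (:export :parse))          ; only PARSE is part of the published interface

    (in-package :parser)

    (defun tokenize (string)     ; internal helper; someone else's TOKENIZE can't collide
      (read-from-string (concatenate 'string "(" string ")")))

    (defun parse (string)
      (tokenize string))
]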


--

Lyman S. Taylor "Computers are too reliable to replace
(ly...@cc.gatech.edu) humans effectively."
Commander Nathan Spring, "Starcops"

Rainer Joswig

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

In article <854739...@wildcard.demon.co.uk>,
cyber_...@wildcard.demon.co.uk wrote:

> when discussing Lisp. User defined aggregate data types have
> been available in functional languages since the early 70s,
> and perhaps even earlier.

Yeah, and Haskell has added them in 1996.
I'll gladly forward you a mail which summarized
some of the design problems/alternatives, which was posted by
Mark P Jones on the Haskell mailing list in 1995.

> > But some people have been
> > struggling with similar approaches years ago. Its
> > a bit like what people experienced with rule based
> > languages in real world software (like OPS5-based
> > configuration systems, or even the infamous sendmail).
>
> Huh? These have nothing to do with FP.

I was referring to pattern matching which is
common in both functional languages and rule based
languages. What do you think are common problems
with using patterns - can you imagine some? How
would you avoid them?

> Just consider this: I'm writing CGI code in Haskell. The fact

You could use Fortran, too. If you could not write CGIs
in Haskell, then I would worry.

Rainer Joswig

[To improve the style in this newsgroup, I have deleted and
omitted all personal attacks. Yeah, looks shorter now. :-) ]

--
http://www.lavielle.com/~joswig/

Alan Bawden

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to b...@cs.unm.edu, Sch...@mc.lcs.mit.edu

Date: Sat, 1 Feb 97 16:48 MST
From: Barak Pearlmutter <b...@cs.unm.edu>
Reply-To: b...@cs.unm.edu

Well I guess I'm just stupid or something, but I don't see how to write a
portable version of `setter' (either as a macro or as a procedure) so that
I can be confident that simple optimizations (such as inlining and constant
folding) cause `(setter car)' to become `set-car!'. And that also allows
me to write a `defsetf'.

The problem is that the "database" that maps `car' to `set-car!' has to be
stored somehow. But it can't be something as simple as a hashtable or an
alist, because the compiler has to be able to figure out that the mappings
are immutable. But the database has to absorb new mappings from `defsetf'
somehow. I don't see how to do it.

Without `defsetf', I don't see any problem. I don't need `setter' or
low-level macro tools. I just build all the mappings directly into the
definition of `setf'. But the addition of `defsetf' adds this additional
problem of communicating the information given in the `defsetf' expression
to the `setf' macro (or the code that the `setf' macro expands into).
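
[For contrast, the reason this is easy in Common Lisp is that macros there
are ordinary code run at expansion time, so the database can simply be a
hash table that the defining macro fills in and the `setf'-like macro
consults. A toy sketch -- not how real implementations do it, and the
starred names are made up to avoid the real operators:

    (defvar *setters* (make-hash-table))

    (defmacro defsetf* (reader writer)
      `(eval-when (:compile-toplevel :load-toplevel :execute)
         (setf (gethash ',reader *setters*) ',writer)))

    (defmacro setf* (place value)
      (if (consp place)
          (let ((writer (gethash (first place) *setters*)))
            (unless writer (error "no setter known for ~S" (first place)))
            `(,writer ,@(rest place) ,value))
          `(setq ,place ,value)))

    (defsetf* second set-second)   ; assuming some SET-SECOND function exists
    ;; (setf* (second x) 9) now macroexpands into (SET-SECOND X 9)

Getting the same effect portably in Scheme is exactly the problem described
above, since a syntax-rules macro cannot run this kind of code at expansion
time.]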

Barak Pearlmutter

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to Alan Bawden, Sch...@mc.lcs.mit.edu

My point is that there is nothing particularly nasty or inefficient
about SETF in Scheme, as opposed to most any other complicated
construction.

You could implement SETF and DEFSETF using an accessor -> mutator
a-list. A sufficiently advanced compiler would flow-analyze the storage
location holding the a-list, notice that the relevent parts of it's
value are fixed in a particular lexical context, constant fold the
access to the a-list, inline the procedures that result, and shazam.

To make up for the deficiencies of current scheme compilers, which
cannot be counted on to perform this feat, many implementations provide
mechanisms for declaring constants, requesting inlining, promising that
a procedure is side-effect-free, etc. If these work well then you can
use them to get SETF to be fast. Just like you can use them to get
other things to be fast. There's nothing peculiarly unwieldy or slow
about SETF/DEFSETF with respect to either Scheme in general or hygienic
macros in particular.

Scheme tries to provide a few general purpose mechanisms. When the
result is slow, you're supposed to blame the implementors rather than
the language. Then you're supposed to roll your eyes, roll up your
sleeves, and grudgingly use whatever non-portable speed hooks you must.

Cyber Surfer

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

In article <30637524...@naggum.no> er...@naggum.no "Erik Naggum" writes:

> Scheme is the only language I have ever seen where people will actually
> argue in _favor_ of its flaws, explicitly or implicitly by some stupid

> non-argument about some other language. once upon a time, I used to think

Well, I've found that there's always someone who will argue in
favour of the flaws of a language (or OS, or editor, etc etc).
The saying, "one man's meat", refers to this tendency. <sigh>

> that a language (SGML) had such wondrous potential that I would ignore all
> present flaws and practical problems. I gradually came to understand that
> that potential would never be realized, precisely because nobody cared to
> fix the present flaws and practical problems -- those who saw the potential
> ignored them and talked about how SGML changed the idea of information and
> all that fine management-level nonsense, and those who had to deal with
> them just found ways to live with them, even arguing against changes!

Yes, I know what you mean. I first noticed this with CPUs like
the Z80 and 6502. Perhaps that was because at the time the Z80
vs 6502 question was felt to be an important one. Today, few
people using computers even know what a CPU is, nor should they.
Yet that didn't stop us from arguing over such choices!

If we can worry about such trivial issues, what hope is there
for tools that are likely to survive a little longer, like SGML?
(I'm using the word "survive" in a way that requires a little
qualification: computers with 8-bit CPUs are no longer a big issue.
In fact the same is true for 16-bit CPUs, and who knows, maybe
the same is true for the 32-bit CPU issue. The hardware itself
is another matter.)

I'm not suggesting that SGML is a fad, of course. On the other
hand, how many people using HTML know of its origins? Alas,
too few. At the very least, it could be of historical interest.

It's useful to reflect on how far we've come, and how much
further we've yet to travel. It's a very personal journey,
which may be why people disagree so much. We can't follow in
each others footsteps, but must instead find our own path to
enlightenment. It may, however, be fun to compare notes!

Cyber Surfer

unread,
Feb 1, 1997, 3:00:00 AM2/1/97
to

In article <32F3A4...@sonic.net> be...@sonic.net "Ray S. Dillinger" writes:

> In scheme, a function is simply a value, like a number or a
> string. In CL, a function is in a separate class -- it has to
> be funcalled for example if you're evaluating to it.

Well, the _names_ are treated differently. Function objects
are another matter. I think that this is the distinction that
you're making - please correct me if I'm mistaken.

Alan Bawden

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to b...@cs.unm.edu, Sch...@mc.lcs.mit.edu

I said:

If you want proper name scoping (what some people call "hygiene") -and-
you want something efficient (e.g. `(setf (car x) 3)' compiles into
`(set-car! x 3)' and not some runtime dispatch), then I think the
R4RS/R5RS high-level macro system can't do it alone. Either you will
need to resort to using the low-level facilities of your implementation,
or you will need to know that your compiler knows how to optimize
something like `setter'.

I fail to see any way in which you have contradicted this. Note that I was
very careful from the very beginning to always say:

-and- you want something efficient

You apparently agree that if you want something efficient, it can't be
portable (at least until such time as we're all using that super smart
Scheme compiler we've been waiting twenty years for), so I guess the
discussion is over.

Barak Pearlmutter

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to Alan Bawden, Sch...@mc.lcs.mit.edu

By that logic nothing can be expressed in portable Scheme -and- be
efficient, since you just never know how bogus your compiler is. So
portable Scheme can't do catch/throw efficiently, since the compiler
might not be able to compile call/cc into something efficient even
when the continuation doesn't escape. And portable Scheme can't do
tight loops efficiently, since after all who knows how slow procedure
calls might be. And arithmetic: a disaster.

The point is, portable Scheme is perfectly capable of expressing
SETF/DEFSETF in a fashion amenable to efficient implementation using
reasonable compiler technologies. Technologies embodied in e.g. SELF
and in fielded machine code emulators.

Our current Scheme implementations cannot achieve the speed we would
wish for this sort of construction, at least not without our holding
their hands. That's true of a lot of constructions, actually.

But the deficiencies in our implementations in this regard are
unrelated to the issue of hygiene, except to the extent that using
non-hygienic macros is one way to hold an implementation's hand. And
not the best way.

Alan Bawden

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to b...@cs.unm.edu, Sch...@mc.lcs.mit.edu

No one has any expectation that any current Scheme compiler can pull off
the optimization you outlined. Thus, as a practical matter, you -cannot-
write `defsetf' portably and efficiently in Scheme. Other things, as a
practical matter, -can- be written portably and efficiently in Scheme.
Case closed.

Cyber Surfer

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to

In article <joswig-ya0231800...@news.lavielle.com>
jos...@lavielle.com "Rainer Joswig" writes:

> > when discussing Lisp. User defined aggregate data types have
> > been available in functional languages since the early 70s,
> > and perhaps even earlier.
>
> Yeah, and Haskell has added them in 1996.
> I'll gladly forward you a mail which summarized
> some of the design problems/alternatives, which was posted by
> Mark P Jones on the Haskell mailing list in 1995.

FP does not begin and end with Haskell. Did you miss my
comments about ML? (Note: not just SML - that's more recent.)

> I was referring to pattern matching which is
> common in both functional languages and rule based
> languages. What do you think are common problems
> with using patterns - can you image some? How
> would you avoid them?

Ah, more word games. There's more to pattern matching than that.

>
> > Just consider this: I'm writing CGI code in Haskell. The fact
>
> You could use Fortran, too. If you could not write CGIs
> in Haskell, then I would worry.

Exactly. You _can_ do it - end of story. Everything else is
childish politics, and I'm sure you can do better than that.

> [To improve the style in this newsgroup, I have deleted and
> omitted all personal attacks. Yeah, looks shorter now. :-) ]

What personal attacks? Are lies about languages permitted,
but a reference to the lack of truth _not_ permitted?

DOUBLE STANDARD ALERT!

William Clodius

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to

Generalizing and paraphrasing Erik's comment:

I find it symptomatic of attacks on almost any language by almost any
advocate of another language that they are ignorant, arrogant, and so
devoid of correctness and precision as to merit no other badge than
"prejudice".

--

William B. Clodius Phone: (505)-665-9370
Los Alamos Nat. Lab., NIS-2 FAX: (505)-667-3815
PO Box 1663, MS-C323 Group office: (505)-667-5776
Los Alamos, NM 87545 Email: wclo...@lanl.gov

Jeff Dalton

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to Sch...@mc.lcs.mit.edu

> Date: 1 Feb 1997 11:51:06 GMT
> From: si...@caleddon.intelligent.co.uk (Simon Brooke)

> In article <ey3915b...@staffa.aiai.ed.ac.uk>,
> Tim Bradshaw <t...@aiai.ed.ac.uk> writes:

> >> In my eyes Common Lisp is quite hard to learn
> >> (compared to standard lisp or scheme)
> >
> > If it's possible to ask this question without provoking endless futile
> > discussion, could you say why? I've taught courses on Common Lisp,
> > and it would be interesting to know what people find hard about basic CL,
> > especially compared to scheme.

Hi, Simon! I thought you used to have some other major criticisms
of Common Lisp, and if the two below are the (only) (major) ones
you "would still make", perhaps you think the language isn't so
bad after all.

> OK, lets start:
>
> (i) LISP2: Why is a function different from a variable?

Why is a list different from a variable? (^_^) Oh, I see, you
want to ask:

> Why is there more
> than one name space? (see e.g. Gabriel and Pitman, _Technical Issues
> in Separation in Function Cells and Value Cells_, in Lisp and Symbolic
> Computation 1, 1 1988).

The history of this in Lisp is rather complex. But ... skipping
over lots of stuff ... it seemed a pretty natural division at one
time. By the mid to late 80s, let's say, it seemed that no one
would design a Lisp with separate function and value namespaces.
But before then, it was pretty common and many would defend it.

It's tempting to think there are decisive arguments against separate
namespaces and that those arguments eventually prevailed. I would
say, instead, that there are good arguments on both sides and that
there was a cultural shift, with all the complexities that involves,
rather than a triumph of clear thinking over muddle and
backwards-compatibility.

Anyway, I use higher-order functions often in Common Lisp code, and
though funcall and #' are a bit awkward, I don't find them a serious
obstacle to using functional values. They do, however, make an
already difficult subject (higher-order function) harder to teach and
learn, because, in additional to the conceptual difficulties there are
these tricky syntactic details to get right.

> (ii)Weird lambda-list syntax. I *still* have trouble with this. &KEY,
> &OPTIONAL, &REST... Both Scheme and Standard LISP are (to my way
> of thinking) much more intuitive in this regard. Having things in
> lambda-lists which don't get bound, but which affect the way other
> things do get bound, seems ... I don't know. Prejudice, I supose;
> I didn't learn it that way. But I don't like it!

Historical reasons, and then it was too useful to get rid of.

But is this really a problem when teaching "basic CL" (see Tim's
question)? I would have thought that when teaching basic CL you
wouldn't have to go into such things in much detail.

Anyway, I think the main teaching / learning difficulty with CL is
something else, namely that there's so much in the language. There's
a tendency for textbooks and courses to be too broad and shallow.
With Scheme, it's easier to concentrate on conceptual issues; with CL,
you kind of feel you need to cover a fair amount of the language, if
you're really going to be teaching Common Lisp, so you spend too much
time on relatively trivial stuff, students are overloaded with
things-to-remember, and so on.

> Those are the two major criticisms I would still make about Common
> LISP, ten years down the track. I also dislike the way that the reader
> (according to the standard) ignores comments, so that comments are not
> (according to the standard) available to an in-core editor; and the
> way the reader (according to the standard) ignores case. But these are
> details.

BTW, one of the reasons readtable-case exists (it lets you change this
aspect of the reader) is because you and others in the UK complained
about the reader ignoring case. I didn't like that either, was tired
of the complaints, and so proposed readtable-case as a cleanup issue.

(It's unfortunate that readtable-case is so complex, but it was
impossible to change CL from using upper case internally.)
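
[A minimal illustration, using a fresh readtable so that the example itself
still reads normally:

    (let ((rt (copy-readtable nil)))        ; copy of the standard readtable
      (setf (readtable-case rt) :preserve)  ; other values: :upcase :downcase :invert
      (let ((*readtable* rt))
        (read-from-string "Foo")))          ; => |Foo| -- case survives into the symbol

With the default :upcase the same call returns the symbol FOO.]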

-- jeff

Simon Brooke

unread,
Feb 2, 1997, 3:00:00 AM2/2/97
to

In article <5crort$hj4$1...@news1.sympatico.ca>,
sau...@nf.sympatico.ca (Steve Austin) writes:
> On 30 Jan 1997 20:15:48 +0000, Erik Naggum <er...@naggum.no> wrote:
>
>>(1) because Common Lisp recognizes that a single namespace for functions
>>and variables is bad for you.
>
> Could you clarify this for me please? I'm very much a newcomer to
> Common Lisp, and I naively assumed that the originators of Scheme used
> a common namespace to simplify the syntax of higher order functions.
> What advantages do separate namespaces provide?

This is a *highly* contentious issue, even a religious issue. Basically
people who believe in a single name-space (as I do) would argue for
its orthogonality and cleanliness; it makes treating code as data (and
data as code) far more straightforward, and let's face it that's a very
LISPy programming style. People who believe in multiple name-spaces
will point out (correctly) that in a single name-space finding new
names for things can get awkward (but that's a problem largely solved
by the package system) and will claim that treating data as code (and
vice-versa) is a dangerous thing to do, and ought to be marked by
special rituals so you remember when you're doing it.
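
To make that concrete, this is roughly what data-as-code looks like in
Common Lisp; in a single-namespace Lisp the coerced function could
simply be called in operator position:

;; A list is turned into a function and then has to be invoked through
;; FUNCALL, because the result is a value, not a name in the function slot.
(let ((code '(lambda (x) (* x x))))
  (funcall (coerce code 'function) 4))   ; => 16

(eval '(+ 1 2))                          ; => 3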

Simon

I'm fed up with Life 1.0. I never liked it much and now it's getting
me down. I think I'll upgrade to MSLife 97 -- you know, the one that
comes in a flash new box and within weeks you're crawling with bugs.

Chris Bitmead

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

In article <31Jan1997....@LCS.MIT.EDU> Al...@LCS.MIT.EDU (Alan Bawden) writes:

>`(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
>dispatch), then I think the R4RS/R5RS high-level macro system can't do it
>alone -- you'll need to use some facilities from your low-level macro
>system. The problem is that you need to be able to determine whether the
>`car' in the first sub-form of a `setf'-expression still refers to the
>usual thing.

Why would you want to write (setf (car ... instead of (set-car! ...?
This is a serious question. What was the design trade-off between Lisp
and Scheme? The Scheme system seems more sensible to me.
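
For reference, the generality being traded away looks roughly like
this -- one SETF form that expands into whichever type-specific
mutator applies (a rough sketch):

(let ((x (list 1 2 3))
      (h (make-hash-table))
      (v (vector 'a 'b 'c)))
  (setf (car x) 10)             ; what Scheme spells set-car!
  (setf (gethash :key h) 42)    ; hash-table entry
  (setf (aref v 1) 'z)          ; vector element
  (values x (gethash :key h) v))
;; => (10 2 3), 42, #(A Z C)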


Simon Brooke

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

In article <30638175...@naggum.no>,
Erik Naggum <er...@naggum.no> writes:
> * Simon Brooke

>| I also dislike the way that the reader (according to the standard)
>| ignores comments, so that comments are not (according to the standard)
>| available to an in-core editor; and the way the reader (according to the
>| standard) ignores case. But these are details.
>
> which in-core editor (according to the standard) are you talking about?

Well, that's just the point. Common LISP assumes that you will edit
the file, not the structure. But as a LISP programmer, I'm not that
interested in the textual representation of my code, I'm interested in
its structure. While the integration between Emacs and a Common LISP
system can be extremely good, and extremely quick, I still find it
much less intuitive to drop out of the LISP environment to a separate
editor which sees my code as just text than to use an in-core editor
(eg the InterLISP DEDIT, or the Cambridge LISP fedit/sedit) which
understands its structure and can ensure I don't make silly
bracketing or lexical errors (it is also, of course, immensely easier
to hack up your own structure editor than to write a text editor).

The issue of comments in LISP is of course very difficult, because if
they are part of the structure they have to evaluate to something, and
consequently putting comments in the wrong place can affect the
computation (cf, again, InterLISP, where comments evaluated to
NIL). Of course this should not happen. Richard Barbour's (Procyon
Common LISP) solution of holding the comment structure on the
property-list of the function symbol was an interesting
work-around. But I feel that the solution of treating the comments as
things which are entirely lost when a function is loaded into core is
a cop-out. It also has very unfortunate results for people who write
code which manipulates code, because all internal documentation gets
lost in the process.
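
A hypothetical sketch of that style of work-around (the names are
invented; I don't know Procyon's actual interface):

;; Keep source comments on the symbol's property list rather than in
;; the loaded code.
(defun remember-comments (name comments)
  (setf (get name 'source-comments) comments))

(defun recall-comments (name)
  (get name 'source-comments))

(remember-comments 'frobnicate '("assumes the widget is already locked"))
(recall-comments 'frobnicate)   ; => ("assumes the widget is already locked")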

Simon

There is no Kay but Kay, and Little is his profit.

Ray S. Dillinger

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

William Clodius wrote:
>
> Generalizing and paraphrasing Erik's comment:
>
> I find it symptomatic of attacks on almost any language by almost any
> advocate of another language that they are ignorant, arrogant, and so
> devoid of correctness and precision as to merit no other badge than
> "prejudice".

This may be true. I have better things to do with my time than
participate in a pointless language flamewar or talk about CL
when, as has been rightly pointed out, I know little about it.

I'll just say briefly what I *like* about Scheme. I like having
one namespace instead of more than one. I like having one form
of (define) instead of more than one. I like never needing
funcall. I like being able to manipulate *any* variables and
values using *exactly* the same few forms and calls.

I like having call/cc instead of bunches of prefab control
structures. I like having very few rules and procedures that I
need to remember. I like that it has an absolute minimum number
of special forms. I like its simplicity. And most of all I like
the way it makes me think about programming and process -- I
put together the absolute fundamentals of a process, or a control
structure, and I get insight into it.

My primary other language is Pascal -- strongly typed, rigid,
with lots of syntactic rules and special forms. Scheme
changed utterly the way I thought about programs and process;
it's like one of those 'simple' strategy games where you learn
the rules -- ALL the rules -- in one minute and then discover
there's no top end to learning the strategy. Well, Scheme takes
less than two days to learn, and it will change your perspective
on programming utterly. Perhaps the same can be said of CL;
but never mind, I'm just saying it about Scheme.

Any flames, attacks, or further invites into language wars, will
be duly ignored.

Bear

Erik Naggum

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

* Steve Austin

| I'm very much a newcomer to Common Lisp, and I naively assumed that the
| originators of Scheme used a common namespace to simplify the syntax of
| higher order functions. What advantages do separate namespaces provide?

as others have observed, there are (at least) two schools of thought here.

however, I'd like to approach this issue from a natural language point of
view, instead of a formal language point of view. clearly, if you define a
formal language to have only one namespace, you can argue all sorts of
things from there, but the question is not ex post facto arguments, but
rather the genesis of the idea.

in natural languages, we are used to context. indeed, contextual meaning
is what makes natural languages natural. we have `list' as a verb, and we
have `list' as a noun. we have `listless' as an adjective describing
something (like a programming language) that does not have lists, and an
adjective describing someone who is sort of permanently tired. when we
need to disambiguate, we do so with more words.

in Common Lisp, I can call some temporary variable `list' without having
removed my ability to create new lists with the `list' function. like the
natural language equivalent, `list' is both a verb and a noun, both a
function and a variable. I find that this rhymes very well with me, and I
also find that I would have severe problems if I could not use a word in a
natural language just because it was "used up" by another part of speech.
English is more prone to this than many other languages, but I happen to
like English, too.
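
concretely:

;; `list' the variable and `list' the function occupy different
;; namespaces, so neither shadows the other.
(let ((list '(1 2 3)))
  (list list (length list)))    ; => ((1 2 3) 3)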

why is just one namespace bad for you? first, name space management is
difficult. it is made more difficult by the lack of packages or other
means of creating new namespaces. it is made more difficult by any means
that artificially increase the collision rate of names. most languages
that try to scale have had namespace manipulators added to them. e.g., in
K&R C, struct members shared a single namespace, which nevertheless was
different from that of variables and functions. ANSI C made each struct a
separate namespace. C++ introduced the pervasive typedef, which not only
made class names a new type, but also a reserved word, which leads me to
the second reason. by having one namespace only, you effectively create a
new reserved word every time you name something globally. in Common Lisp,
you can't redefine the functional meaning of symbols imported from standard
packages, but you can use them in (almost) any other way, and you can
(must) declare that you intend to shadow symbols. in Scheme, you need to
be careful that you don't redefine symbols you will later need in their
global sense.
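
a minimal sketch of that machinery (the package name is made up):

;; shadowing must be declared; within MY-APP, the name LIST can then be
;; given a new functional meaning without touching CL:LIST.
(defpackage :my-app
  (:use :common-lisp)
  (:shadow #:list))

(in-package :my-app)

(defun list (&rest items)
  "MY-APP::LIST, not CL:LIST -- drops NILs, purely as an example."
  (remove nil items))

(list 1 nil 2)    ; => (1 2), while the standard function remains as cl:list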

various counter-measures are necessary if you have only one namespace.
e.g., in C, the standard prescribes certain prefixes as belonging to the
compiler and the rest are up for grabs among modules that you might want to
link with. of course, using lexical scope, you reduce the impact of this
problem. still, you can't use reserved words where they have no other use
than to make the compiler barf. `default' is a perfectly reasonable
variable name. then, some compilers will introduce new reserved words just
for fun, like `try' and `catch'. Scheme, lacking both a package system and
a useful number of namespaces, opens up namespace management problems
that we know so well from large C programs (C being slightly better than
Scheme in the namespace division). a single namespace in the linker also
forced C++ to include a gross counter-measure appropriately called "name
mangling". lacking packages, lacking symbol types, lacking everything, a
C++ name as seen by the linker is some _implementation-specific_ junk that
makes life "interesting" for everything that wishes to talk with the C++
modules. as a counter-counter-measure against the collision-avoidance that
you need in one namespace, C++ has C linkage (extern "C") as an option to
make names visible in a predictable namespace.

now, C and C++ are languages we love to hate, and the more we know the more
we hate them, partly because of these problems, but my point is that Scheme
is even _less_ scalable because of its severe restriction on names, and
doubly so because Schemers, like most Lispers, like descriptive names, not
cryptic naming conventions in somewhat less than 8 characters, which means
that artificial naming in Scheme looks a lot worse than artificial naming
in C.

it is often said that small is beautiful. now, anything can be beautiful
when it is small. the ugliest person you can think of was probably a quite
pretty baby. it doesn't take much effort to find a beautiful 16-year-old
girl, either. in fact, our modern notions of beauty and elegance are
_defined_ in terms of size and maturity, so the chance of anything small
and immature being beautiful is vastly higher than anything big or mature.
now, despite all the marketing that seems to be aimed at telling me that I
should dump a girlfriend when she becomes 25 and get a new 16-year-old (or
even younger), I plan to stay with mine partly because of her ability to
grow older in a way I like. consequently, I take exception to the
pedophilic attitudes to beauty and elegance that our societies have adopted
over the years. this is why I don't like the "small is beautiful" model of
aesthetics. this is why I think that almost anybody could make something
small and beautiful, but only a few can create something that grows from
small to huge and still remains beautiful. but then again, look at
interior architecture -- with huge spaces come a need for size-reducing
ornamentation. the scaling process _itself_ adds "junk" to what was "clean
surfaces" in a small model. Schemers refer to Common Lisp's "warts", and
prefer to think of Scheme as "clean". now, I wonder, would Schemers prefer
to live in small houses with nothing on their walls? would they still
prefer this if the walls were a 100 feet high and 200 feet long, or would
they, too, desire some ornamentation that would have looked _very_ bad if
it had been on a 10 by 20 feet wall?

Scheme's single namespace is a function of its size. Scheme with more than
one namespace _would_ have had bags on its side -- it would be very
inelegant. however, as applications grow and as Scheme environments grow,
the single namespace becomes disproportionately _small_. therefore, people
resist a growth path that would have been natural, because their notion of
beauty forbids it. Common Lisp with a single namespace would be confined
and forbidding, for the same reason. an analogy may be in order. in very
small towns, houses may have unique names. as the town grows in size, this
becomes too hard to even imagine working, and houses are instead numbered,
and the number space is managed by a street name. as the town grows more,
streets in neighboring towns it merges with may have the same name. but
towns have names, too, and states may have many towns. the United States
has lots of towns with the same name. there are even towns that bear the
name of countries in the global namespace. some people may still wish to
name their house, but it would be foolish to hope that that name would be
globally unique. all over the place, we invent namespaces to manage the
huge number of things we deal with. in Scheme, there are few things to
deal with, so few names are necessary. in Common Lisp, there are many
things to deal with, so means to keep names apart is _necessary_. in
consequence, Common Lisp has packages and symbol slots and namespaces.

why is a single name space bad for you? in addition to the reasons given
above, I'd like to add a problem as a conclusion: nothing restricts your
growth path more than a restricted ability to name your inventions or
creations. the psychological factor known as "cognitive load" imposes a
very heavy burden on our design, namely by having to avoid excesses in that
load. a single namespace is good if you have few names, and more than one
namespace would be bad. at some size of the set of names, however, a
single namespace becomes bad because what you once knew (namely, what a
symbol meant), _ceases_ to be rememberable. namespaces introduce context
to a language. I think communication without context is a contradiction in
terms, so naturally I applaud such introduction.

Erik Naggum

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

* Simon Brooke

| Common LISP assumes that you will edit the file, not the structure.

this is of course untrue. there is no assumption at all about how or what
you will edit if you want to edit Common Lisp programs. all the standard
says is that if you put things in a file, and use `read' to convert the
external representation into the internal representation, the semantics of
that operation is well-defined. editing is certainly outside the scope of
the standard.
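
for example:

;; READ turns the external (textual) representation into structure;
;; where the text came from, or how it was edited, is not specified.
(with-input-from-string (s "(defun f (x) (* x x))")
  (read s))
;; => (DEFUN F (X) (* X X))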

| The issue of comments in LISP is of course very difficult, because if
| they are part of the structure they have to evaluate to something, and
| consequently putting comments in the wrong place can affect the
| computation (cf, again, InterLISP, where comments evaluated to NIL).

I find the issue of comments to be simple. if you need them, you bind the
semicolon and the sharp vertical bar to reader macro functions that return
an object of the appropriate comment type, which also prints as a comment
in the usual syntax. your codewalkers then need to learn to skip such
objects. shouldn't be too hard. if you need to load it as code after
you have edited it, it should then be no harder than removing the
comment forms first. the way I see it, this can be done entirely
outside the language.
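
a rough sketch of what such a reader macro could look like -- the
SOURCE-COMMENT type and the *EDITING-READTABLE* name are inventions
for illustration, not part of any existing system:

;; read semicolon comments as objects instead of discarding them, so a
;; structure editor can see them.  done in a copied readtable so the
;; normal reader is left alone.
(defstruct source-comment text)

(defun semicolon-reader (stream char)
  (declare (ignore char))
  (make-source-comment :text (read-line stream nil "")))

(defparameter *editing-readtable*
  (let ((rt (copy-readtable nil)))
    (set-macro-character #\; #'semicolon-reader nil rt)
    rt))

;; with *READTABLE* bound to *EDITING-READTABLE*, reading
;;   (f x ;do the thing
;;      y)
;; yields (F X #S(SOURCE-COMMENT :TEXT "do the thing") Y), so the
;; comment survives as part of the structure and can be printed back
;; out in the usual syntax by a suitable PRINT-OBJECT method.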

| Of course this should not happen. Richard Barbour's (Procyon Common
| LISP) solution of holding the comment structure on the property-list of
| the function symbol was an interesting work-around. But I feel that the
| solution of treating the comments as things which are entirely lost when
| a function is loaded into core is a cop-out. It also has very
| unfortunate results for people who write code which manipulates code,
| because all internal documentation gets lost in the process.

I had this problem in SGML a few years back. it is not a problem as long
as you don't confuse code-as-data and code-as-code. the conversion is not
trivial to begin with, and a pre-pass to delete unwanted elements is not
really an issue. the issue is fundamentally the same as retaining white
space in processing many other languages. in Common Lisp, it's easy to
modify the behavior of the reader.

Barry Margolin

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

In article <30639660...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>I find the issue of comments to be simple. if you need them, you bind the
>semicolon and the sharp vertical bar to reader macro functions that return
>an object of the appropriate comment type, which also prints as a comment
>in the usual syntax. your codewalkers then need to learn to skip such
>objects. shouldn't be too hard.

If the comments aren't removed by the reader, how do you ensure that all
user-written and third-party macros don't see them? Common Lisp (and most
Lisp-family languages) doesn't have a standard interface to the code
walker, so you can't depend on that to remove them.
--
Barry Margolin
BBN Corporation, Cambridge, MA
bar...@bbnplanet.com
(BBN customers, call (800) 632-7638 option 1 for support)

William D Clinger

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to Al...@lcs.mit.edu, will

Alan Bawden (Al...@LCS.MIT.EDU) wrote:

> I guess it depends on exactly what you think makes an adequate replacement
> for Common Lisp's whole `setf' facility. If you want proper name scoping
> (what some people call "hygiene") -and- you want something efficient (e.g.
> `(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
> dispatch), then I think the R4RS/R5RS high-level macro system can't do it
> alone -- you'll need to use some facilities from your low-level macro
> system. The problem is that you need to be able to determine whether the
> `car' in the first sub-form of a `setf'-expression still refers to the
> usual thing.

No, the R4RS high-level macro system does this for you automatically.
I showed this in "Macros in Scheme", an introduction that was printed
with the R4RS in Lisp Pointers and also in the MIT and University of
Oregon technical reports.

Perhaps Bawden misinterpreted the following paragraph from that
paper:

    Alas, I cannot easily implement an analogue of Common Lisp's
    SETF in Scheme using the high-level macro system. Although
    local macros can be either recursive (LETREC-SYNTAX) or
    non-recursive (LET-SYNTAX), global macros are always recursive
    (DEFINE-SYNTAX). This should be fixed.

The problem here is that the absence of non-recursive global macros
makes it difficult to write a SET! macro that can redefine itself
incrementally every time a new data type is defined globally. This
is what is done by Common Lisp's DEFSETF, and an analogue of Common
Lisp's SETF needs to be able to do this because you can define new
data types in Common Lisp. Thus the claim that DEFSETF is difficult
to implement in R4RS Scheme was correct, but not for the reason given
by Bawden. DEFSETF becomes trivial when non-recursive global macros
are added to R4RS Scheme.

You can't define new data types in R4RS Scheme, so it is absolutely
trivial to implement a hygienic SETF in R4RS Scheme.

Will

Cyber Surfer

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

In article <5d582t$g...@tools.bbnplanet.com>
bar...@tools.bbnplanet.com "Barry Margolin" writes:

> If the comments aren't removed by the reader, how do you ensure that all
> user-written and third-party macros don't see them? Common Lisp (and most
> Lisp-family languages) doesn't have a standard interface to the code
> walker, so you can't depend on that to remove them.

You could declare a macro for comments, so that when it expands
the expression, it discards the comment and leaves just code.

(defmacro rem (remark &rest body)
  `(progn ,@body))

This is crude, but it should allow Simon to use a structure
editor to edit his code, and yet maintain comments to document
what the code does etc.

Obviously, a better name could be chosen...
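
For instance, with a non-clashing name (REM already names a standard
Common Lisp function, so everything below is illustrative only):

(defmacro remark (comment &body body)
  "Attach COMMENT to BODY in the source; it vanishes at macroexpansion."
  (declare (ignore comment))
  `(progn ,@body))

(macroexpand-1 '(remark "square X" (* x x)))
;; => (PROGN (* X X)), T  -- the comment never reaches EVAL or COMPILE.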

Alan Bawden

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to wi...@ccs.neu.edu, Sch...@mc.lcs.mit.edu

Date: Mon, 03 Feb 1997 15:03:33 +0000
From: William D Clinger <wi...@ccs.neu.edu>

Alan Bawden (Al...@LCS.MIT.EDU) wrote:

> I guess it depends on exactly what you think makes an adequate replacement
> for Common Lisp's whole `setf' facility. If you want proper name scoping
> (what some people call "hygiene") -and- you want something efficient (e.g.
> `(setf (car x) 3)' compiles into `(set-car! x 3)' and not some runtime
> dispatch), then I think the R4RS/R5RS high-level macro system can't do it
> alone -- you'll need to use some facilities from your low-level macro
> system. The problem is that you need to be able to determine whether the
> `car' in the first sub-form of a `setf'-expression still refers to the
> usual thing.

No, the R4RS high-level macro system does this for you automatically.

*Bzzzt!*

Will, I guess you weren't paying attention to how this thread evolved. My
original message was to someone who said (as an example of some point he
was trying to make) that `defsetf' would be trivial to implement. I just
pointed out that `defsetf' was a poor example for him to have picked,
because the problem was actually quite hard.

By the time we get to my message quoted above, I was saying "Common Lisp's
whole `setf' facility" as a way of referring to the problem, because the
difficulty isn't in writing `defsetf' in isolation, but rather designing
the rest of the `setf' facility so that something like `defsetf' is
possible. I didn't notice that the word "defsetf" didn't actually appear
in this paragraph. If I had noticed, I would have added it somewhere, so
that the reader wouldn't be able to forget what we were talking about.

As usual, I have come to bitterly regret sending my initial message to this
mailing list.

Rainer Joswig

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

In article <854885...@wildcard.demon.co.uk>,
cyber_...@wildcard.demon.co.uk wrote:

> > Yeah, and Haskell has added them in 1996.
> > I'll gladly forward you a mail which summarized
> > some of the design problems/alternatives, which was posted by
> > Mark P Jones on the Haskell mailing list in 1995.
>
> FP does not begin and end with Haskell.

But it seems like a very prominent member.
Haskell tries to be a standard which unifies
non strict, typed FP languages. The ongoing evolution
of Haskell gives a good example about the problems
the designers are facing and which solutions they
are choosing. I find it very telling that they added
field access very late in the game. If they felt
more comfortable with one of the various approaches
(and there have been some), they would have included
it earlier into the Haskell standard.

> Did you miss my
> comments about ML? (Note: not just SML - that's more recent.)

Have you used it? Tell us a bit about that.

> There's more to pattern matching.

What do you mean? Please give some examples. How do you
use pattern matching?

[To improve the style in this newsgroup, I have deleted and
omitted all personal attacks. Yeah, looks shorter now. :-) ]

--
http://www.lavielle.com/~joswig/

Jeffrey Mark Siskind

unread,
Feb 3, 1997, 3:00:00 AM2/3/97
to

Can someone post the portable Scheme implementation of SETF/DEFSETF? I'd like
to try it in Stalin to see if Stalin's constant propagation and inlining can
convert (SETF (CAR X) Y) to (SET-CAR! X Y) using the portable implementation
without any handholding.
Jeff (home page http://www.emba.uvm.edu/~qobi)

Erik Naggum

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to

* Erik Naggum

| I find the issue of comments to be simple. if you need them, you bind
| the semicolon and the sharp vertical bar to reader macro functions that
| return an object of the appropriate comment type, which also prints as a
| comment in the usual syntax. your codewalkers then need to learn to skip
| such objects. shouldn't be too hard.

* Barry Margolin


| If the comments aren't removed by the reader, how do you ensure that all
| user-written and third-party macros don't see them? Common Lisp (and most
| Lisp-family languages) doesn't have a standard interface to the code
| walker, so you can't depend on that to remove them.

I think macroexpansion sees the code qua code, so if you submit something
for macroexpansion, you must remove the comments. I also need to clarify
what I meant by "codewalker". I assume that in an editing setting, a
different kind of code walker is needed than in a compilation setting, and
that never the twain shall meet. in essence, I see editing and compiling
code as very different tasks. e.g., during editing whitespace means a lot,
during compiling it means nothing. in fact, during editing, a whole lot of
issues come up that don't in compiling. another example is the #+ and #-
reader macros. they must be retained in an edited function. I think
comments are just a special case of the many differing needs, and that we
delude ourselves if we think that executing directly from an editable form
of the code is going to be much simpler than reading forms from a file.
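
a rough sketch of that removal pass, reusing the invented
SOURCE-COMMENT type from the earlier sketch; it assumes code built
from proper lists:

;; strip comment objects out of an edited form before handing it to
;; EVAL, COMPILE, or MACROEXPAND (code qua code).  a real editor-side
;; walker would also have to handle dotted pairs, vectors, and so on.
(defun strip-comments (form)
  (if (consp form)
      (mapcar #'strip-comments
              (remove-if #'source-comment-p form))
      form))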

Simon Brooke

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to

In article <855007...@wildcard.demon.co.uk>,
cyber_...@nospam.wildcard.demon.co.uk (Cyber Surfer) writes:
> In article <5d582t$g...@tools.bbnplanet.com>
> bar...@tools.bbnplanet.com "Barry Margolin" writes:
>
>> If the comments aren't removed by the reader, how do you ensure that all
>> user-written and third-party macros don't see them? Common Lisp (and most
>> Lisp-family languages) doesn't have a standard interface to the code
>> walker, so you can't depend on that to remove them.
>
> You could declare a macro for comments, so that when it expands
> the expression, it discards the comment and leaves just code.
>
> (defmacro rem (remark &rest body)
>   `(progn ,@body))
>
> This is crude, but it should allow Simon to use a structure
> editor to edit his code, and yet maintain comments to document
> what the code does etc.

It's actually not as simple as this, because while the code-walker and
the compiler can trivially be programmed to ignore specially marked
forms, EVAL cannot (at least not trivially). Much cleverer people than
I have given this a lot of thought and not come up with a real
solution (although as I said earlier Richard Barbour's was a
reasonable work-around -- does anyone know what Richard is up to these
days?).

-- mens vacua in medio vacuo --

William D Clinger

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to Alan Bawden, b...@cs.unm.edu

Alan Bawden (Al...@LCS.MIT.EDU) wrote:
> *Bzzzt!*
>
> Will, I guess you weren't paying attention...

To make my argument easier to follow, I will _emphasize_ the
most important words.

I asked Jeff Dalton why he thought DEFSETF was hard to implement
in Scheme, and he pointed out that the complex form of Common
Lisp's DEFSETF is essentially a low-level macro in which _arbitrary_
Common Lisp code can be written. It is not trivial to implement
_all_ of Common Lisp in Scheme, so now I know what Dalton meant.

As I pointed out in my previous message, it is difficult to
implement even an _analogue_ of DEFSETF in R4RS Scheme because
the high-level macro system has a couple of design bugs.

Nonetheless it is _trivial_ to implement an _analogue_ of DEFSETF
using R4RS high-level macros _extended_ by a non-recursive
form of DEFINE-SYNTAX and by the "..." quotation feature for
writing macros that can define other macros. Both of these
extensions are easy to implement, free implementations are
available, and I used them four years ago to solve the very
problems that Alan Bawden is worried about.

In particular it is trivial to implement a DEFSETF such that

  (defsetf (setf (car x) y)
    (set-car! x y))

turns into

  (define-syntax setf let*       ; note the let* scoping extension
    (syntax-rules (car)          ; from my implementation of the
      ((setf (car x) y)          ; "Macros That Work" implementation
       (set-car! x y))))         ; in the Scheme repository; also slib.

This DEFSETF provides proper name scoping (_hygiene_) and is also
_efficient_. In particular (SETF (CAR X) Y) normally expands into
(SET-CAR! X Y), but

  (let ((car 'toyota))
    (setf (car (list 3 4)) 97))

causes the macro system to _report_ an error.

spe...@informatik.uni-tuebingen.de
(Michael Sperber [Mr. Preprocessor]) wrote:
> Could someone enlighten me if Common Lisp setf respects
> hygiene in some way?

In Common Lisp the FLET analogue of the above LET is an error,
but implementations of Common Lisp are not required to _detect_
or to _report_ this error, and most don't. In other words,
Common Lisp's SETF is _not_ hygienic.

Will

Cyber Surfer

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to

In article <joswig-ya0231800...@news.lavielle.com>
jos...@lavielle.com "Rainer Joswig" writes:

> But it seems like a very prominent member.
> Haskell tries to be a standard which unifies
> non strict, typed FP languages. The ongoing evolution
> of Haskell gives a good example about the problems
> the designers are facing and which solutions they
> are choosing. I find it very telling that they added
> field access very late in the game. If they felt
> more comfortable with one of the varius approaches
> (and there have been some), they would have included
> it earlier into the Haskell standard.

I don't see this as a problem. If I did, then I'd be more
interested in using SML. There are plenty of alternatives!



> > Did you miss my
> > comments about ML? (Note: not just SML - that's more recent.)
>
> Have you used it? Tell us a bit about that.

I've used the evaluation version of MLWorks. The only
thing that bothers me about this implementation is the
lack of an integrated editor, but this could be fixed
by customising a programmable editor. For Unix, Harlequin
recommend Emacs, which should do nicely.



> > There's more to pattern matching.
>
> What do you mean? Please give some examples. How do you
> use pattern matching?

I use it for simple branching. I've found that the compilers
I've used can branch very efficiently using little more than
matching. It's essentially just a "case" control structure,
but with a nicer syntax. So far, I've never had to use "case"
in Haskell, and I hardly ever use "if".

File handling is where Haskell is weakest, IMHO. If a compiler
has support for PackedString and I/O for this data type, then
it could be as efficient as any other language.



> [To improve the style in this newsgroup, I have deleted and
> omitted all personal attacks. Yeah, looks shorter now. :-) ]

I deny that they were personal. You've made some claims that I
believe are wrong, or at best, misleading. I'm merely challenging
your statements. If you wish to dump on FP, please do it in an
FP newsgroup, where you'll find some people far better informed
than myself, who can answer you.

Followup-To: comp.lang.functional

Jeff Dalton

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to Sch...@mc.lcs.mit.edu, si...@intelligent.co.uk

> From: si...@caleddon.intelligent.co.uk (Simon Brooke)

> > which in-core editor (according to the standard) are you talking about?
>
> Well, that's just the point. Common LISP assumes that you will edit
> the file, not the structure. But as a LISP programmer, I'm not that
> interested in the textual representation of my code, I'm interested in
> it's structure.

Sorry, Simon, but I have to defend Common Lisp from the InterLisp
critique as well as from the Scheme, EuLisp, and Dylan ones.

Surely you've used a structure editor? Right? You may have noticed
that it displays the code as 2-D text, and that it typically formats
it by using a pretty-printer. The pretty-printer uses ... (wait for
it) ... indentation to indicate structure. Just like in those
much-despised files.

There was a time when pretty-printers did a better job of laying
out Lisp code than programmers typically did. That time passed.

(And then there's the way structure editors encourage you to
nest code to unreadable depths. From time to time, someone
gives me code written that way, and I just have to discard it.)

Moreover, the structure editor wants to preserve structure. None
of those pesky paren errors, right? (Or at least not the ones that
involve the wrong _number_ of parens.) So these editors often make
a number of should-be-trivial editing operations annoyingly
difficult.

(I decided that the advocates of structure editing were totally
discredited by the advent of S-edit on the Xerox D-Machine.
Finally a structure editor that might actually be worth using.
But the fans of structure editors were saying how wonderful
they were back when the editor was D-edit! For them, it seemed, the
fact that an editor was a structure editor outweighed anything else.)

Meanwhile, the much-despised text editors had developed a number
of Lisp-friendly features that allowed, e.g. manipulation of
S-exprs as units, thus eliminating many of the cases where
structure editors make editing easier.

Of course, it's possible to write a structure editor for Common Lisp.
(Couldn't CL be edited that way on D-machines?) But, amazingly
enough, many people actually prefer text editors. There's not
much pressure _to_ write structure editors for CL (or for Scheme,
for that matter).

> While the integration between Emacs and a Common LISP
> system can be extremely good, and extremely quick, I still find it
> much less intuitive to drop out of the LISP environment to a separate
> editor

Emacs doesn't have to be separate. Don't confuse structure with
not-separate.

> which sees my code as just text than to use an in-core editor
> (eg the InterLISP DEDIT, or the Cambridge LISP fedit/sedit) which
> understands it's structure and can ensure I don't make silly
> bracketing or lexical errors (it is also, of course, immensely easier
> to hack up your own structure editor than to write a text editor).

It's worse than I thought! (^_^) Not only does Simon prefer DEDIT
to text editors, he prefers the easy-to-write structure-editors!

> The issue of comments in LISP is of course very difficult, because if
> they are part of the structure they have to evaluate to something,

This is confusing the representation used by the editor with
what you think the interpreter and compiler probably use if they're
written in the way you expect. That is, you're supposing that
since the latter would have problems with comments, comments
cannot be included in the former.

> and
> consequently putting comments in the wrong place can affect the
> computation (cf, again, InterLISP, where comments evaluated to
> NIL).

And that never affected the computation! (GMAB)

> Of course this should not happen. Richard Barbour's (Procyon
> Common LISP) solution of holding the comment structure on the
> property-list of the function symbol was an interesting
> work-around. But I feel that the solution of treating the comments as
> things which are entirely lost when a function is loaded into core

Same confusion as above.

> is a cop-out. It also has very unfortunate results for people who write
> code which manipulates code, because all internal documentation gets
> lost in the process.

It makes it easier to manipulate code and moreover makes it possible
to write meaning-preserving transformations that don't have to figure
out what comments mean and how they need to be modified.

(Of course, you could include the comments if you wanted to.)

[Simon -- I'll mail you a copy of this, since you're probably reading
this thread in comp.lang.lisp, while I'm reading it on a Scheme
mailing list, not News at all.]

Erik Naggum <er...@naggum.no> writes in reply to the same message:

> * Simon Brooke
> | Common LISP assumes that you will edit the file, not the structure.
>
> this is of course untrue. there are no assumption at all on how or what
> you will edit if you want to edit Common Lisp programs. all the standard
> says is that if you put things in a file, and use `read' to convert the
> external representation into the internal representation, the semantics of
> that operation is well-defined. editing is certainly outside the scope of
> the standard.

Are you sure? An "ed" function was in CLtL. Simon -- note that
the definition of (ed symbol) on CLtL II p 699 clearly does not
assume the function definition is in a file.

-- jeff

Thant Tessman

unread,
Feb 4, 1997, 3:00:00 AM2/4/97
to

Tim Bradshaw wrote:
>
> Reini Urban wrote:
[...]

> > In my eyes Common Lisp is quite hard to learn
> > (compared to standard lisp or scheme)
>
> If it's possible to ask this question without provoking endless futile
> discussion, could you say why? I've taught courses on Common Lisp,
> and it would be interesting to know what people find hard about basic
> CL, especially compared to scheme.

My problem with Common Lisp is that predicates end in "p" instead
of "?". This drives me up the wall.
