
Future of Lisp


Jonathan Cohen

Jul 20, 1995
One of the advantages lisp has over C and C++ is that it avoids memory
leakage by using garbage collection. I've recently started using a
product called Purify that seems to do a very good job of detecting
memory leakage in C++ programs. It even seems to handle circular
data structures properly in that it reports as leakage a pair of
objects that point to each other. Since with enough testing it seems
reasonable that all leaks could be removed from even a large C++ program,
I'm wondering if Lisp has lost its advantage in this regard.
Garbage collection imposes two problems as I see it: the interruptions
it causes and also pinning down the address of a non-static object,
which makes calling into Lisp from another language too complicated.
Is there even a lisp tool that can track down points in source code
that generate garbage, tracing back the call stack, as Purify does?

Many of the other advantages of Lisp, such as interpreting constructed
objects, can be replicated in any language. Even the ability to
define new, specialized languages is not unique to Lisp with all the
tools available that are based on parser generators (e.g. yacc).

Given the realities of the job marketplace, it's hard to find project
managers willing to use Lisp. And now it seems that the commercial
Lisp compilers are getting very expensive (although I haven't researched
that thoroughly). It's to the point that even a lisphead like myself
must seriously consider porting existing Lisp code to C++ in order
to more easily attract people willing to maintain this code.

So what are the future prospects of Lisp?


Jon Cohen
jco...@tasc.com


Stefan Monnier

Jul 21, 1995
In article <3ulpfn$m...@hazel.Read.TASC.COM>,
Jonathan Cohen <co...@luke.Read.TASC.COM> wrote:
] I'm wondering if Lisp has lost its advantage in this regard.

Well, you still avoid expensive, useless, time-consuming, annoying testing and
debugging.

] Garbage collection imposes two problems as I see it: the interruptions
] it causes and also pinning down the address of a non-static object,

Come on! Not the same old story again. Garbage collection doesn't have to
interrupt your program. Old GCs do; current GCs still do, but rarely; and fancy
(or future) GCs don't do it any more. Only real-time GC is still not trivial,
even though solutions exist.

] which makes calling into Lisp from another language too complicated.

"another language" here obviously means C, C++, pascal and fortran, right ?
since approximately every other language is garbage collected and has hence to
provide the same kind of info. But note that the opposite is true also: calling
C from Lisp is tricky. The problem is not with Lisp or with C, it's with the
difference between the two languages !

] Is there even a lisp tool that can track down points in source code
] that generate garbage, tracing back the call stack, as Purify does?

Aside from that, we can reiterate the other "advantages" of malloc/free:

- malloc/free is slow most of the time. (With a copying GC, allocation is just
a push, same as stack allocation, which makes alloca pointless.) Of course GC
takes time also, but overall it's not obvious which one is the best, speed-wise.
- tracking which object can be freed is often tricky enough to make people avoid
the problem as much as possible by using copying instead of sharing. This
can have a negative impact on speed.
- keeping track of which object can be freed (when not using copying to avoid
the problem) can require additional code and data. Typical example: the
"manual" reference counting used in many C++ libraries. Ref-counting is far
from the best known GC. Leaving aside the problem with cyclic references (these
are generally avoided in those libraries), this manual ref-counting is slow as
hell because optimisations the compiler could do if it knew about it (there
are many tricks with ref-counts) cannot be easily (and safely) done. And
ref-counting is not too good for locality either.
- a GC, due to its asynchronous nature, can be implemented as a separate
process, and hence removed from the "critical path", reducing the latency of
some programs (taking advantage of idle time).
- library interfaces often get ugly due to the problem of knowing who has to
free the object.

] Many of the other advantages of Lisp, such as interpreting constructed
] objects, can be replicated in any language. Even the ability to
] define new, specialized languages is not unique to Lisp with all the
] tools available that are based on parser generators (e.g. yacc).

Nothing's unique to any language. It's just that Lisp defines a syntax and a
function to read that syntax and evaluate it, so if you choose the syntax of
your mini-language correctly, it's very easy to implement. But of course, you
can have a C++ library doing exactly the same!


Stefan

David B. Kuznick

Jul 21, 1995
co...@luke.Read.TASC.COM (Jonathan Cohen) writes:

|> One of the advantages lisp has over C and C++ is that it avoids memory
|> leakage by using garbage collection.

It doesn't *avoid* memory leakage (nothing to stop you from continually pushing
stuff onto a list and forgetting to set the list to NIL when you no longer need
it), but it frees you from having to explicitly manage heap-allocated memory
yourself.
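A tiny sketch of that failure mode (the names here are made up for illustration):

```lisp
;; The list stays reachable through the global, so no GC will ever
;; reclaim it -- the garbage-collected equivalent of a leak.
(defvar *event-log* '())

(defun handle-event (event)
  (push event *event-log*)   ; grows without bound
  event)

;; The "fix" is still manual: (setf *event-log* '()) once the
;; history is no longer needed.
```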

|> I've recently started using a
|> product called Purify that seems to do a very good job of detecting
|> memory leakage in C++ programs. It even seems to handle circular
|> data structures properly in that it reports as leakage a pair of
|> objects that point to each other. Since with enough testing it seems
|> reasonable that all leaks could be removed from even a large C++ program,

What does "enough testing" mean? I have my doubts that Purify can detect/remove
ALL leaks, but never having used it, I'll have to take your word for it.
Besides, it still doesn't remove the burden of having to manage memory yourself.

|> I'm wondering if Lisp has lost its advantage in this regard.

|> Garbage collection imposes two problems as I see it: the interruptions
|> it causes

Just curious as to what modern Lisp implementations you are basing your data on.
GCs are hardly noticeable nowadays on the workstations I use. In fact, they're
usually less noticeable than the typical network and disk pauses which can plague
any program. And in C/C++ you can get interruptions during malloc and free also.

|> and also pinning down the address of a non-static object,

|> which makes calling into Lisp from another language too complicated.

Again, I ask what modern implementation are you basing your data on? In Lucid
and Harlequin, callbacks are *trivial*. In Allegro they require a little extra
work, but not much.

|> Is there even a lisp tool that can track down points in source code
|> that generate garbage, tracing back the call stack, as Purify does?

A good profiler helps here. It can show you where consing occurs in which
functions. Granted, I haven't seen anything as sophisticated as Purify (and it
would be nice to), but this is mainly because there isn't the same burning need
as in C/C++. Still, it's something I'd like to see.

|>
|> Many of the other advantages of Lisp, such as interpreting constructed
|> objects, can be replicated in any language.

Let's say "simulated". There are certain things you just can't do in an
environment-less language that you can in an environment-ful language. (Notice I
stayed away from the compiled vs. interpreted issue. It's really the presence of
an environment in Lisp that makes it very different from C and friends, since
there are Lisps out there that always compile everything.)

|> Even the ability to
|> define new, specialized languages is not unique to Lisp with all the
|> tools available that are based on parser generators (e.g. yacc).

Oh boy! Having used both Yacc and Lisp macros *extensively*, there's no way I
can buy into that statement! No, it's not "unique" to Lisp, but the ease and
power of its macro system pretty much is!
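For a flavor of what that macro system buys you, here is a minimal sketch (not from the thread, and `while` is a made-up name): a new control construct added to the language in a few lines, the kind of thing the yacc route would need a grammar and a generated parser for.

```lisp
;; WHILE is not a standard Common Lisp operator; one macro adds it.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; Usage: prints 0, 1, 2.
(let ((i 0))
  (while (< i 3)
    (print i)
    (incf i)))
```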

|> Given the realities of the job marketplace, its hard to find project
|> managers willing to use Lisp.

You have to know where to look! I seem to. :-)

|> And now it seems that the commercial
|> Lisp compilers are getting very expensive (although I haven't researched
|> that thoroughly).

Then maybe you should hold off commenting for now.

|> It's to the point that even a lisphead like myself
|> must seriously consider porting existing Lisp code to C++ in order
|> to more easily attract people willing to maintain this code.

Certainly a valid issue. Just make sure you really weigh the pros and cons.
Ever been involved in a *huge* C++ project or a *huge* Lisp project? Have you
ever ported a *huge* Lisp system to C++? Two words: it sucks.

|> So what are the future prospects of Lisp?

I'd love to know.

David Kuznick - da...@ci.com (preferred) or dkuz...@world.std.com ___
All your spirit rack abuses come to haunt you back by day {~._.~}
All your byzantine excuses given time give you away ( Y )
Don't be surprised when daylight comes ()~L~()
To find that memory prick your thumbs (_)-(_)
You'll tell them where we run to hide - "Afraid of Sunlight" - MARILLION


Jonathan Cohen

Jul 26, 1995
David B. Kuznick (da...@pharlap.CI.COM) wrote:
: co...@luke.Read.TASC.COM (Jonathan Cohen) writes:

: |> I've recently started using a
: |> product called Purify that seems to do a very good job of detecting
: |> memory leakage in C++ programs. It even seems to handle circular
: |> data structures properly in that it reports as leakage a pair of
: |> objects that point to each other. Since with enough testing it seems
: |> reasonable that all leaks could be removed from even a large C++ program,

: What does "enough testing" mean? I have my doubts that Purify can detect/remove
: ALL leaks, but never having used it, I'll have to take your word for it.
: Besides, it still doesn't remove the burden of having to manage memory yourself.

My own view is that software is never completely bug-free, including
undetected memory leaks due to unusual circumstances. Purify appears
to detect all the leaks that occur during a series of test cases just
as those test cases expose all other types of bugs. It records N levels of
the call stack, and so you can easily tell where each piece of memory was
allocated. Purify shows you that stack for each leaked object when you tell
it to scan memory to look for leaks.

There is an important difference between what Purify does and
profiling for conses. For example, in the handling for a single input
event there may be both permanent allocation and generation of
garbage. It is important to keep the two separate because figuring out
what causes a leak is hard enough without having to wade through
allocations that are not part of the error. I think that this
capability would be useful in a Lisp environment also. Moreover, Lisp
could do it better because generational GC would allow the leakage to
be reported sooner. By default Purify does a scan for leaks only when
the program exits, and the user needs to manually initiate scans
during the middle of a run.


: |> I'm wondering if Lisp has lost its advantage in this regard.

: |> Garbage collection imposes two problems as I see it: the interruptions
: |> it causes

: Just curious as to what modern Lisp implementations you are basing your data on.
: GC's are hardly noticeable nowadays on the workstations I use. In fact they're
: usually less noticeable than the typical network and disk pauses which can plague
: any program. And in C/C++ you can get interruptions during malloc and free also.


Due to budgetary constraints I'm just using CMU Lisp, which is excellent
in many ways but not in GC. Thanks to an email response to my earlier post
I may be getting a new environment :).

In any case, the trend these days is towards distributed objects and
objects that reside in databases. Do "modern Lisp implementations"
provide generational GC in those situations? In these cases the GC
approach causes very significant delays, and it becomes important to
prevent garbage in the first place.

: Ever been involved in a *huge* C++ project or a *huge* Lisp project? Have you
: ever ported a *huge* Lisp system to C++? Two words: it sucks.

There have been articles showing up, such as "The nightmare of C++" in the Nov
94 issue of Advanced Systems, where people gripe about C++, but that
language still remains the safest bet for project managers to choose
as a standard language. Arguments about performance do not matter in
comparison to maintainability of code and availability of programmers.
The tools available to Lisp and C++ are comparable, but keeping around
two sets of tools is difficult to justify. My original message
was an attempt to help me do just that for the project I am on now, by
eliciting specific reasons why C++ code is not as maintainable as Lisp.

I would like to thank Lawrence G. Mayka for several suggestions, including:

The future is what we make it; no more and no less. If you give up,
you have by definition already lost.

Jon Cohen
jco...@tasc.com

Richard M. Alderson III

Jul 26, 1995
In article <ZIGGY.95J...@montreux.ai.mit.edu> zi...@montreux.ai.mit.edu
(Michael R. Blair) writes:

>Garbage collection is only one advantage of LISP over other languages.

>Others include (in no particular order):

[a long shopping list]

>Other languages may have some of these features but I know of none other
>that has all of them.

What's scary is how many of them have been around since long before Common
Lisp. I don't have my MDL manuals handy here, but other than an object system,
how many of these items was it lacking?
--
Rich Alderson You know the sort of thing that you can find in any dictionary
of a strange language, and which so excites the amateur philo-
logists, itching to derive one tongue from another that they
know better: a word that is nearly the same in form and meaning
as the corresponding word in English, or Latin, or Hebrew, or
what not.
--J. R. R. Tolkien,
alde...@netcom.com _The Notion Club Papers_

Billy Tanksley

Jul 26, 1995
Michael R. Blair <zi...@montreux.ai.mit.edu> wrote:
>Garbage collection is only one advantage of LISP over other languages.

>Others include (in no particular order):

> - multiple value returns

Hey, that's useful. I use that enough in Forth. Out of curiosity, what
do the multiple returns look like? I don't see how they'd fit in Lisp's
rigid syntax :) :) (sorry, had to say it, I'm kidding, though). How's it
done?

> - beautiful syntax (well, ok, I may be a little biased on this one).

This is Lisp's greatest downfall. It seems that all Lispers believe that
Lisp has a beautiful syntax-- but to a non-Lisper, it's horrid. I've
tried to learn Lisp quite a few times now, and every time I've given up
in disgust over the overdose of parentheses. I can handle different
"syntax"; after all, I've worked in Forth for a while now, and there's no
weirder one (actually, Forth has no syntax, but that's a different story).

The problem appears to me to be in the books; every one assumes that the
parenthetical notation is easy and intuitive, and it just never ceases to
throw me. I can understand writing that code with a good editor to check
parens, but reading it is (right now) horrible! Is Lisp just not
intended to be read, or is it something else?

I DO want to learn Lisp, but it's frustrating!

-Billy


John Atwood

Jul 27, 1995
Billy Tanksley <tank...@owl.csusm.edu> wrote:

<snip>


>
>The problem appears to me to be in the books; every one assumes that the
>parenthetical notation is easy and intuitive, and it just never ceases to
>throw me. I can understand writing that code with a good editor to check
>parens, but reading it is (right now) horrible! Is Lisp just not
>intended to be read, or is it something else?

One tip: use the editor's paren matching to read code also. Have it tell
you where the "then" leg of an "if" ends and the "else" begins, where that
seven-line method call ends, etc.

John
--
_________________________________________________________________
Office phone: 503-737-5583 (Batcheller 349);home: 503-757-8772
Office mail: 303 Dearborn Hall, OSU, Corvallis, OR 97331
_________________________________________________________________

John Doner

Jul 27, 1995
In article <3v776o$c...@owl.csusm.edu>,

Billy Tanksley <tank...@owl.csusm.edu> wrote:
>The problem appears to me to be in the books; every one assumes that the
>parenthetical notation is easy and intuitive, and it just never ceases to
>throw me. I can understand writing that code with a good editor to check
>parens, but reading it is (right now) horrible! Is Lisp just not
>intended to be read, or is it something else?

In mathematics, we customarily write function invocations as f(x), or
maybe more complex combinations like f(g(x,y),z). We could just as
well tuck the function names inside the parentheses, to get

(f,x) (f,(g,x,y),z)

If we then use whitespace instead of commas for separation, we get

(f x) (f (g x y) z)

So Lisp notation is really not far from standard mathematical notation
for prefix functions.

On the other hand, mathematics uses different sizes and kinds of
parentheses, and often infix notation, to enhance readability. It's
not hard to make Lisp allow [ ] and { }, although that's not the
standard. Different sizes are out. So is infix notation. Other
programming languages use not only infix notation and other kinds of
parentheses, but also keywords (begin ... end, if ... endif) for
grouping. These things really do help readability, although a lot of
what they offer can be gained just by systematic indentation, as a
good editor automatically provides.

When all the smoke clears, though, I think you're right: Lisp is
harder to read. But not a whole lot harder. And the payoff is that
it's easy to remember, easy to learn, and easy for a program to parse
(so it's easy to write macros).

John Doner


Shriram Krishnamurthi

Jul 27, 1995
zi...@montreux.ai.mit.edu (Michael R. Blair) writes:

> - advanced control mechanisms (catch/throw and/or
> call-with-current-continuation)

If you're arguing in favor of catch/throw, I think the more accurate
argument is that these are *safe*: you don't end up with warnings of
the form

    The calling function must not itself have returned in the interim,
    otherwise longjmp() will be returning control to a possibly
    non-existent environment.

(from the SunOS man pages).

> - a clean formal mathematically well-founded semantics (namely,
> lambda-calculus)

Um. More like the lambda-*value* calculus, and one can argue about
how clean the formalization is.

> - beautiful syntax (well, ok, I may be a little biased on this one).

Hear, hear! (Or, in the best Usenet fashion, Here, here!)

'shriram

Jon Bodner

Jul 27, 1995
tank...@owl.csusm.edu (Billy Tanksley) writes:

>Michael R. Blair <zi...@montreux.ai.mit.edu> wrote:
>>Garbage collection is only one advantage of LISP over other languages.

>>Others include (in no particular order):

>> - multiple value returns

>Hey, that's useful. I use that enough in Forth. Out of curiosity, what
>do the multiple returns look like? I don't see how they'd fit in Lisp's
>rigid syntax :) :) (sorry, had to say it, I'm kidding, though). How's it
>done?

There is a set of functions and macros defined to handle
multiple values. You can check out CLtL2 via WWW:

http://www.cs.cmu.edu/Web/Groups/AI/html/cltl/clm/node93.html#SECTION0011100000000000000000

to get more info about anything in CL. The above link is to the
section on multiple values.
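For instance, a minimal sketch of how those look in practice (FLOOR and VALUES are standard Common Lisp; MIN-AND-MAX is a made-up example function):

```lisp
;; FLOOR returns two values: the quotient and the remainder.
;; MULTIPLE-VALUE-BIND receives both.
(multiple-value-bind (quotient remainder)
    (floor 7 2)
  (list quotient remainder))          ; => (3 1)

;; VALUES produces multiple values; a caller that ignores the
;; extras simply sees the first one.
(defun min-and-max (xs)
  (values (reduce #'min xs)
          (reduce #'max xs)))

(min-and-max '(3 1 4 1 5))            ; values 1 and 5
```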


>> - beautiful syntax (well, ok, I may be a little biased on this one).

>This is Lisp's greatest downfall. It seems that all Lispers believe that
>Lisp has a beautiful syntax-- but to a non-Lisper, it's horrid. I've
>tried to learn Lisp quite a few times now, and every time I've given up
>in disgust over the overdose of parenthesis. I can handle different
>"syntax"; after all, I've worked in Forth for a while now, and there's no
>wierder one (actually, Forth has no syntax, but that's a different story).

>The problem appears to me to be in the books; every one assumes that the
>parenthetical notation is easy and intuitive, and it just never ceases to
>throw me. I can understand writing that code with a good editor to check
>parens, but reading it is (right now) horrible! Is Lisp just not
>intended to be read, or is it something else?

The biggest problem is formatting, not the parentheses. If indented
well, then Lisp becomes _far_ easier to read than C (at least to me).
If not...

Just like C, however, everyone has their own 1TBS (1TPS, I guess).
That is probably the hardest part of getting a handle on things.

>I DO want to learn Lisp, but it's frustrating!

I recommend Winston and Horn's book. I first learned Lisp from the
second edition, and gave that one to a friend who needed to learn it
when I got the third edition. It has an AI focus in the later
chapters, though.

For more advanced stuff, Paul Graham's book, "On Lisp," is
fascinating. He does things with Lisp and Scheme that might
qualify as witchcraft...

-jon

Erik Naggum

Jul 27, 1995
[Billy Tanksley]

| [Syntax] is Lisp's greatest downfall. It seems that all Lispers
| believe that Lisp has a beautiful syntax-- but to a non-Lisper, it's
| horrid. I've tried to learn Lisp quite a few times now, and every time
| I've given up in disgust over the overdose of parenthesis.

in a private exchange, I learned that parentheses are regarded as important
in infix syntaxes, since they change the order of evaluation, perhaps
leading to a cognitive load on those who have developed "reroute sign"
associations with parentheses. in Lisp, they are much less important, and
serve a completely different role. just as the semicolon is regarded as
unimportant in C, parentheses in Lisp serve as innocuous delimiters of the
interesting stuff. if you had a semicolon fixation, Pascal syntax would be
really hard, as it is not at all obvious where they should go. likewise,
the parenthesis fixation that comes from exposure to languages that are so
badly designed that you need to put up "reroute signs" to go where you want
will hurt you in better designed languages.

| The problem appears to me to be in the books; every one assumes that
| the parenthetical notation is easy and intuitive, and it just never
| ceases to throw me. I can understand writing that code with a good
| editor to check parens, but reading it is (right now) horrible! Is
| Lisp just not intended to be read, or is it something else?

Lisp must be indented to be read. most other languages can do without
proper indentation, but Lisp can't.

| I DO want to learn Lisp, but it's frustrating!

hang in there. you should try to desensitize yourself to parentheses, try
to find out why you see them so clearly, and perhaps try to regard other
characters as unusually prominent, for balance, like asterisks or
semicolons in C.

once you've gotten past the parenthesis fixation, you will find that a
regular syntax for both data and code makes certain operations very easy,
such as programs that analyze code and functions that generate code. also,
the fact that you have access to the same functions Lisp uses to read and
evaluate your code makes it easier to create small application languages on
the side or top of Lisp, store data in files to be read back by any Lisp
program, etc. when you get all this because of a unified syntax, it
becomes even more important to overcome your parenthophobia.
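a minimal sketch of that last point, with made-up example data: the same READ that parses Lisp source parses your files, so a small application language or data format needs no parser of its own.

```lisp
;; READ-FROM-STRING turns text into ordinary list structure,
;; ready for a program to walk or evaluate.
(let ((form (read-from-string "(rotate :angle 90 :axis z)")))
  (list (first form)                  ; the operator: ROTATE
        (getf (rest form) :angle)))   ; a named parameter: 90
```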

#<Erik 3015815764>
--
NETSCAPISM /net-'sca-,pi-z*m/ n (1995): habitual diversion of the mind to
purely imaginative activity or entertainment as an escape from the
realization that the Internet was built by and for someone else.

Erik Naggum

Jul 27, 1995
[John Doner]

| On the other hand, mathematics uses different sizes and kinds of
| parentheses, and often infix notation, to enhance readability. It's
| not hard to make Lisp allow [ ] and { }, although that's not the
| standard.

yes, programming the lisp reader is quite easy.

| Different sizes are out.

not really. it only depends on how they are represented.

| So is infix notation.

all you need is a reader macro that parses a delimited expression and
returns a regular prefix form. in fact, one such package exists that
allows forms like

$ foo * bar + zot $

producing

(+ (* foo bar) zot)

I don't use it, so I forget where you can find it. perhaps somebody else
could reiterate that information?
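for the curious, a minimal sketch of how such a reader macro could be wired up. this is not the package in question; PARSE-INFIX here is a deliberately dumb stub that only handles a single A OP B triple, and the macro mutates the global readtable.

```lisp
(defun parse-infix (tokens)
  ;; a real implementation would do precedence parsing here;
  ;; this stub only understands (A OP B).
  (destructuring-bind (a op b) tokens
    (list op a b)))

;; make $ read everything up to the closing $, tokenize it with
;; the ordinary Lisp reader, and return a prefix form.
(set-macro-character #\$
  (lambda (stream char)
    (declare (ignore char))
    (let ((text (with-output-to-string (out)
                  (loop for c = (read-char stream)
                        until (char= c #\$)
                        do (write-char c out)))))
      (parse-infix
       (with-input-from-string (in text)
         (loop for tok = (read in nil in)
               until (eq tok in)
               collect tok))))))

;; after this, "$ foo + bar $" reads as (+ FOO BAR).
```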

| When all the smoke clears, though, I think you're right: Lisp is harder
| to read. But not a whole lot harder. And the payoff is that it's easy
| to remember, easy to learn, and easy for a program to parse (so it's
| easy to write macros).

add "easy to write with intelligent editors" (such as Emacs). personally,
I find C, a language I used daily for a decade, and still do some stuff in,
harder to read, write, and generally deal with than Lisp. I could never
remember the precedence rules in C, and when I tried to learn C++, I was
literally drowning in syntax and an arbitrary semantic mess. some C++
aficionados think they have a "superior genetic trait" that allows them to
deal with C++'s random residue of a syntax disaster. (there was actually a
discussion about this on comp.lang.c++.) if so, I hope we find a cure.
conversely, it could be that dealing with _elegant_ syntaxes is also a
genetically induced preference.

now I use Emacs Lisp and Common Lisp almost exclusively, supplanting the
many Unix utilities I used to use. when I first saw Perl, my bogometer
blew up, and the proverbial mirror broke. it could be that I spend most of
my time writing programs that are intended to write programs, and it's just
so much easier to deal with a fully delimited prefix language. I also work
with SGML, and it has similar syntactic properties.

#<Erik 3015861644>

Frank Adrian

Jul 28, 1995
Shriram Krishnamurthi (shr...@europa.cs.rice.edu) wrote:

: zi...@montreux.ai.mit.edu (Michael R. Blair) writes:

: > - advanted control mechanisms (catch/throw and/or
: > call-with-current-continuation)

: If you're arguing in favor of catch/throw, I think the more accurate
: argument is that these are *safe*: you don't end up with warnings of
: the form

: The calling
: function must not itself have returned in the interim, oth-
: erwise longjmp() will be returning control to a possibly
: non-existent environment.

: (from the SunOS man pages).

This is not necessarily true (at least for catch/throw). It is possible to
return a procedure which tries to throw to a tag whose dynamic extent has
ended. For example:

(defun bad () (catch 'x #'(lambda (q) (if q 1 (throw 'x 0)))))

;;; Apologies in advance for any mangled syntax/incorrectness/etc.
;;; In the best tradition of USENET posting, I have not tested this code :-).

(funcall (bad) t) => 1
(funcall (bad) nil) => error

In the second case, the extent of the catch form has ended. Trying to
throw to it barfs. See the following web link for more info:
http://www.cs.cmu.edu/Web/Groups/AI/html/cltl/clm/node96.html#7

: > - beautiful syntax (well, ok, I may be a little biased on this one).

: Hear, hear! (Or, in the best Usenet fashion, Here, here!)

Can't argue with this one!
___________________________________________________________________________
Frank A. Adrian ancar technology Object Analysis,
fra...@europa.com PO Box 1624 Design,
Portland, OR 97207 Implementation,
Voice: (503) 281-0724 and Training...
FAX: (503) 335-8976


Marco Antoniotti

Jul 28, 1995
In article <3v89nv$a...@news.aero.org> do...@aerospace.aero.org (John Doner) writes:


In article <3v776o$c...@owl.csusm.edu>,
Billy Tanksley <tank...@owl.csusm.edu> wrote:

>The problem appears to me to be in the books; every one assumes that the
>parenthetical notation is easy and intuitive, and it just never ceases to
>throw me. I can understand writing that code with a good editor to check
>parens, but reading it is (right now) horrible! Is Lisp just not
>intended to be read, or is it something else?

In mathematics, we customarily write function invocations as f(x), or
maybe more complex combinations like f(g(x,y),z). We could just as
well tuck the function names inside the parentheses, to get

(f,x) (f,(g,x,y),z)

If we then use whitespace instead of commas for separation, we get

(f x) (f (g x y) z)

So Lisp notation is really not far from standard mathematical notation
for prefix functions.

I do not remember where I read this, but wasn't the first
implementation of Lisp 1.5 *required* to have the 'commas'? I remember
that the paper said that the commas did not get printed out because of
a bug in the program. The result was so much better that the bug was
promoted to "feature". :)

Cheers

--
Marco G. Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab | room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU | e-mail: mar...@cs.nyu.edu
| WWW: http://found.cs.nyu.edu/marcoxa

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
Bertolt Brecht

Shriram Krishnamurthi

Jul 28, 1995
Following up to Erik Naggum's and John Donner's responses to Billy
Tanksley, I wanted to add that one tool you (Billy) might find useful
is the many "pretty-printers" for the Lisp family. The one I'm most
familiar with is SLaTeX, which works with both Scheme (hence the `S')
and Common Lisp. SLaTeX puts different parts of the program in
different typefaces -- boldface for keywords, roman for identifiers,
and so forth. It's also vastly customizable.

Some people (not including myself) find SLaTeXed code *vastly* more
readable than generic Lisp; it may be that you will find it alleviates
the problem of parentheses. For a fine book on Lisp, which uses
SLaTeX for formatting, see

Daniel Friedman and Matthias Felleisen
/The Little Lisper/
MIT Press

Also, you can get SLaTeX from

ftp://cs.rice.edu/public/dorai/slatex24h.tar.gz

'shriram

David Neves

Jul 29, 1995
In article <MARCOXA.95...@mosaic.robotics>,
mar...@mosaic.robotics (Marco Antoniotti) wrote:

: I do not remember where I read this, but wasn't the first
: implementation of Lisp 1.5 *required* to have the 'commas'?

From the Lisp 1.5 Programmer's manual:
"The LISP read program ... recognizes the ... separators "." "," and
(blank). The comma and blank are completely equivalent."

Marco Antoniotti

Jul 29, 1995
In article <19950727...@naggum.no> Erik Naggum <er...@naggum.no> writes:



[John Doner]

| On the other hand, mathematics uses different sizes and kinds of
| parentheses, and often infix notation, to enhance readability. It's
| not hard to make Lisp allow [ ] and { }, although that's not the
| standard.

yes, programming the lisp reader is quite easy.

| Different sizes are out.

not really. it only depends on how they are represented.

| So is infix notation.

all you need is a reader macro that parses a delimited expression and
returns a regular prefix form. in fact, one such package exists that
allows forms like

$ foo * bar + zot $

producing

(+ (* foo bar) zot)

I don't use it, so I forget where you can find it. perhaps somebody
else could reiterate that information?

It is the INFIX package and can be found in the AI.Repository at CMU.
The second version changed the syntax by using the #I() form.

The person to be thanked for this package is Mark Kantrowitz.
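For flavor, here is a toy sketch of how such a reader macro can work.
This is *not* the INFIX package itself, just a minimal illustration
assuming `$' delimiters, binary + - * /, and the usual precedence:

```lisp
(defun parse-infix (tokens min-prec)
  ;; precedence-climbing parse over a token list;
  ;; returns (values prefix-form remaining-tokens)
  (let ((lhs (pop tokens)))
    (loop
      (let* ((op (first tokens))
             (prec (case op ((+ -) 1) ((* /) 2) (t nil))))
        (when (or (null prec) (< prec min-prec))
          (return (values lhs tokens)))
        (pop tokens)
        (multiple-value-bind (rhs rest)
            (parse-infix tokens (1+ prec))
          (setf lhs (list op lhs rhs)
                tokens rest))))))

(defun read-infix (stream char)
  (declare (ignore char))
  ;; collect tokens up to the closing $; we PEEK for it rather than
  ;; READ it, since $ is itself a macro character
  (let ((tokens (loop for c = (peek-char t stream t nil t)
                      until (char= c #\$)
                      collect (read stream t nil t)
                      finally (read-char stream t nil t))))
    (values (parse-infix tokens 0))))

(set-macro-character #\$ #'read-infix)
```

With this in effect, reading $ foo * bar + zot $ yields the prefix
form (+ (* foo bar) zot), as above.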

Shriram Krishnamurthi

Jul 31, 1995
fra...@europa.com (Frank Adrian) writes:

> : If you're arguing in favor of catch/throw, I think the more accurate
> : argument is that these are *safe*:

> This is not necessarily true (at least for catch/throw). It is
> possible to return a procedure which tries to throw to a tag whose
> dynamic extent has ended.

You don't understand. The difference between C and Lisp is that Lisp
will signal an error of some sort. In C, no error will necessarily be
signalled. Your program will continue executing, quite likely
generating nonsensical results -- different ones on each architecture.

This is why I used the word "safe" when distinguishing C and Lisp. In
terms of expressiveness, catch/throw offers nothing beyond
setjmp/longjmp, and indeed I didn't say it does.
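A small sketch of what that looks like in Common Lisp (the tag and
function names here are arbitrary):

```lisp
;; The CATCH below returns a closure without anything having thrown,
;; so by the time the closure is called, the tag's dynamic extent is
;; over.  Common Lisp signals a CONTROL-ERROR; the analogous longjmp
;; to a dead setjmp buffer in C is undefined behaviour.

(defun make-escaper ()
  (catch 'tag
    (lambda () (throw 'tag :escaped))))

(let ((f (make-escaper)))
  (handler-case (funcall f)
    (control-error () :caught-control-error)))
;; => :CAUGHT-CONTROL-ERROR
```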

'shriram

Billy Tanksley

Jul 31, 1995
William Paul Vrotney <vro...@netcom.com> wrote:
>In article <3vbtq5$b...@owl.csusm.edu> tank...@owl.csusm.edu (Billy Tanksley) writes:

>> By the way, my main problem with Lisp's use of parenthesis arose from the
>> fact that I'm used to a language that uses no delimiters of any kind,
>> Forth. Has anyone made a RPN Lisp? That might be a good start for me.
>> Postscript is the closest I could find, and it's not close enough :).

>It's not possible unless you use, you guessed it, PARENTHESIS, since Common
>Lisp allows variable argument number functions. For example

> 3 4 5 6 + *
>is ambiguous, whereas
> 3 (4 5 6 +) * or 3 4 (5 6 +) * are almost ok

That's not a difficulty. Indeed, indefinite-uncounted arguments require
delimiters; so I'd just forbid such arguments. No big deal.

>I say almost because the RPN expression

> 3 4 +

>could be interpreted as

> (progn (+ 3 4)) or (progn 3 (+ 4)) or (progn 3 4 (+))

Now this is a problem. It would require a restructuring that probably
wouldn't be worth it.

>William P. Vrotney - vro...@netcom.com

-Billy
Thanks!

Frank Adrian

Aug 2, 1995
Shriram Krishnamurthi (shr...@europa.cs.rice.edu) wrote:
: You don't understand. The difference between C and Lisp is that Lisp
: will signal an error of some sort. In C, no error will necessarily be
: signalled. Your program will continue executing, quite likely
: generating nonsensical results -- different ones on each architecture.

This is true, although in my experience, the program usually crashes, due
to expectations about values made by the programmer not being met, rather
than running to completion.

: This is why I used the word "safe" when distinguishing C and Lisp. In
: terms of expressiveness, catch/throw offers nothing beyond
: setjmp/longjmp, and indeed I didn't say it does.

My mistake. I did misunderstand your usage of the word "safe". You are
correct in your assertions.

Clint Hyde

Aug 3, 1995
In article <vrotneyD...@netcom.com> vro...@netcom.com (William Paul Vrotney) writes:

Hi Bill!

--> In article <3vbtq5$b...@owl.csusm.edu> tank...@owl.csusm.edu (Billy Tanksley) writes:
--> >
--> > By the way, my main problem with Lisp's use of parenthesis arose from the
--> > fact that I'm used to a language that uses no delimiters of any kind,
--> > Forth. Has anyone made a RPN Lisp? That might be a good start for me.
--> > Postscript is the closest I could find, and it's not close enough :).
--> >
-->
--> It's not possible unless you use, you guessed it, PARENTHESIS, since Common
--> Lisp allows variable argument number functions. For example
-->
--> 3 4 5 6 + *

this is probably not clear, but not bad, postfix.

try this for comparison:

3 4 + 5 6 + *

the answer is 77 [i.e., (3+4)*(5+6)] (well, assuming something possibly
not valid about operator precedence and how that gets handled)

--> is ambiguous, where as
-->
--> 3 (4 5 6 +) * or 3 4 (5 6 +) * are almost ok

these are actually considerably clearer...

--> I say almost because the RPN expression
-->
--> 3 4 +
-->
--> could be interpreted as
-->
--> (progn (+ 3 4)) or (progn 3 (+ 4)) or (progn 3 4 (+))

in Postscript at least (to the extent that I've looked at postscript,
and Forth, which ain't much), there is no possible misinterpretation of
operators. an operator, either built-in or defined by you, is ALWAYS an
operator, and encountering one means that now you eval that operator
with the args you've been collecting (again, what about operator
precedence?)

so therefore

3 4 + 5 6 + *

means (assuming something about precedence hierarchy)

collect 3 collect 4 add-all-stack-values-and-push-onto-stack
collect 5 collect 6 add-all-stack-values-and-push-onto-stack
multiply-all-stack-values-and-push-onto-stack

and the resulting value is 77

this shouldn't be too hard to do in lisp...after all, lisp is touted
as being good at this sort of thing.

maybe I'll spend a few minutes on it...

...and having done so I find a conceptual problem. how do I know that

3 4 + 5 6 + *

doesn't mean [3 4 5 6 +] *, or 18 ? an operator precedence problem. and
the answer is you don't, so you have to make a decision.

so if we assume that the precedence hierarchy behaves a specific way,
then we either have to carry along the operators while computing the
precedences so we know when to evaluate what things together, or we just
eval as soon as possible with whatever has been collected.

OR, if NO operators have the equiv of &rest, then the problem goes
away--we always know when to eval, and when there's a num-of-args error.

so if + is only allowed 2 args, and * is only allowed two args, then
when you eval an operator, you use N args off the stack, where N is how
many that operator takes, not "all those on the stack so far"

unfortunately this makes for somewhat verbose code

(+ 1 2 3 4 5) turns into

1 2 + 3 + 4 + 5 +

or

1 2 + 3 4 + 5 + +

having never had an HP calculator in my life, I haven't dealt with this
weirdness much. (11 years of lisp doesn't help, but the parens eliminate
the problem of knowing what args go with what operator (thank god :)
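here is a minimal sketch of that fixed-arity scheme (the arity table
and the function name are my own invention, just for illustration):

```lisp
;; Fixed-arity RPN evaluator: every operator has a known argument
;; count, so there is never any ambiguity about when to apply it.
(defparameter *arities* '((+ . 2) (- . 2) (* . 2) (/ . 2)))

(defun rpn-eval (tokens)
  (let ((stack '()))
    (dolist (tok tokens (first stack))
      (let ((arity (cdr (assoc tok *arities*))))
        (if arity
            ;; pop ARITY operands (restoring their original order),
            ;; apply the operator, and push the result
            (let ((args (reverse (loop repeat arity
                                       collect (pop stack)))))
              (push (apply tok args) stack))
            ;; anything that isn't an operator is an operand
            (push tok stack))))))

(rpn-eval '(3 4 + 5 6 + *))   ; => 77
```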

-- clint


Jeff Dalton

Aug 10, 1995
>>> - beautiful syntax (well, ok, I may be a little biased on this one).
>
>>This is Lisp's greatest downfall. It seems that all Lispers believe that
>>Lisp has a beautiful syntax-- but to a non-Lisper, it's horrid. I've
>>tried to learn Lisp quite a few times now, and every time I've given up
>>in disgust over the overdose of parenthesis. [...]

>
>>The problem appears to me to be in the books; every one assumes that the
>>parenthetical notation is easy and intuitive, and it just never ceases to
>>throw me. I can understand writing that code with a good editor to check
>>parens, but reading it is (right now) horrible! Is Lisp just not
>>intended to be read, or is it something else?

I can tell you why it's readable to me and to a number of other Lisp
programmers. It may not account for all of them, of course.

Don't read the parentheses.

In well-indented Lisp code, it is not necessary to pay attention to
individual parens, except in a few cases where you'd usually have to
do the same thing in other languages. The "pattern" of parens should
reinforce the sense of the indentation, so that the parens help
without getting in the way.

Remember that Lisp uses a Polish prefix notation so that, except for
operations that can take differing numbers of arguments, parens
aren't actually needed at all. And in most of the cases where
parens are needed, the indentation provides the same information.
This leaves a few expression-on-one-line cases where you may
have to pay attention to individual parens.

Such advice won't instantly make Lisp readable. Some practice in
writing Lisp and in **reading well-written Lisp** will help.

>The biggest problem is formatting, not the parenthesis. If indented
>well, then Lisp becomes _far_ easier to read than C (at least to me).
>If not...

Just so. The indentation is crucial.

>Just like C, however, everyone has their own 1TBS (1TPS, I guess).
>That is probably the hardest part of getting a handle on things.
>

>>I DO want to learn Lisp, but it's frustrating!

For learning, you might find Scheme better than Common Lisp, because
it's a much smaller language. OTOH, Common Lisp gives you more
built-in stuff to play around with.

>I recommend Winston and Horn's book. I first learned Lisp from the
>second edition, and gave that one to a friend who needed to learn it
>when I got the third edition. It has an AI focus in the later
>chapters, though.
>
>For more advanced stuff, Paul Graham's book, "On Lisp," is
>fascinating. He does things with Lisp and Scheme that might
>qualify as witchcraft...

These days, you want the 3rd edition of Winston and Horn.
I'm not sure it's the best book for learning Lisp, but neither
do I have a definite alternative to recommend instead.

The Abelson and Sussman _Structure and Interpretation of Computer
Programs_ is in certain ways the best Lisp text, though it uses
Scheme. It's oriented towards teaching concepts rather than
teaching the language, which is to my mind better.

Norvig's _Paradigms of AI Programming_ is also good. It's probably
the best Common Lisp text overall. Though it's more advanced than
Winston and Horn, an experienced programmer might well be able to
learn Common Lisp from Norvig's intro chapter.

All of these books are good sources of well-written code.

-- jeff
