
extension languages can be darn small, yet still powerful


lawrence.g.mayka

Aug 26, 1990, 4:50:18 PM
In article <1990Aug24.1...@uwslh.slh.wisc.edu> lis...@uwslh.slh.wisc.edu (a.k.a. Chri) writes:
>My point is that there is nothing special that makes *Lisp* a great
>extension language. Some might argue that it is the "simple syntax,"
>but that same simple syntax is hated by many (myself *not* included).

Simplicity is nice, but extensibility is the much more important
virtue of Lisp syntax. (The two are of course related to some
degree.) Lisp's other main advantage as an extension language is its
dynamic typing, which greatly reduces the effort required to make
small changes to a program's behavior.
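
[Editorial aside: a minimal Common Lisp sketch of the dynamic-typing
point, mine rather than Lawrence's; the function name is invented. One
definition serves integers, ratios, and floats, so there are no type
declarations to update when a program's behavior changes:]

    (defun scale-all (xs factor)
      ;; No declared types: works for whatever numbers the caller supplies.
      (mapcar (lambda (x) (* x factor)) xs))

    (scale-all '(1 3/2 2.5) 2)   ; => (2 3 5.0)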

I have found that those who "hate" Lisp syntax almost invariably have
never used a powerful Lisp development environment such as Symbolics
Genera, or even Harlequin's LispWorks. Indeed, the "haters" usually
have never used any Lisp system beyond the 1962-vintage Lisp 1.5 that
most of us oldsters were introduced to in undergraduate school.
Opinions formed in ignorance carry little weight with me.


Lawrence G. Mayka
AT&T Bell Laboratories
l...@iexist.att.com

Standard disclaimer.

Steve Knight

Aug 27, 1990, 6:31:11 AM
With trepidation, I'd like to try to make a couple of points about Lisp
syntax. The basic point is that the virtues claimed for Lisp syntax
are not exclusive to Lisp.

Lawrence writes:
> Simplicity is nice, but extensibility is the much more important
> virtue of Lisp syntax.

I agree with this. However, there's no reason to think that extensible
languages are obliged to adopt the approach of Lisp and Prolog in which
the programmatic syntax is identical to that of some concrete data types.
The main advantage of the Lisp/Prolog approach, as I see it, is that
the learning curve for extension-programming is shortened. However, here's
an alternative look.

It is quite easy to imagine a programming language with (say) an Algol-like
syntax in which the parse tree is available as a standard data-structure.
There are several approaches to extending such a language. The elegant one might
be to create new grammar rules with associated rewrite actions. A more
brute-force approach, used in Pop11 (a Lisp derivative), is to supply the
compiler's parsing & tokenising procedures.
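
[Editorial aside: the parse-tree-as-data idea in miniature, as a Common
Lisp sketch of my own -- the function name and the shape of the WHILE
node are invented for illustration. When the tree is ordinary list
data, a rewrite action is just a function from trees to trees:]

    (defun rewrite-while (tree)
      ;; Rewrite a (while TEST . BODY) node into a DO loop;
      ;; leave any other node alone.
      (if (and (consp tree) (eq (first tree) 'while))
          `(do () ((not ,(second tree))) ,@(cddr tree))
          tree))

    (rewrite-while '(while (< i 10) (incf i)))
    ;; => (DO () ((NOT (< I 10))) (INCF I))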

The advantage I see in this approach is two-fold. Firstly, the programmer
gets to deal with a more familiar kind of syntax. Secondly, the parse
tree can be made more regular, making extension programming easier.

Speaking personally, I am no fan of the Lisp or Prolog style of syntax.
It seems to me to be an unfortunate conflation of issues -- external
syntax is there to make programs readable etc -- internal structure is
there to make manipulation of programs convenient. I think that Prolog
makes the better job of it, having the advantage of being relational so
that quoting rules aren't required, and providing a good infix/prefix/
postfix operator system. I should point out that I've happily
programmed in both of these languages for quite a few years and am
very comfortable programming with them. But I don't think that they are
beyond improvement.

Lawrence continues:


> I have found that those who "hate" Lisp syntax almost invariably have
> never used a powerful Lisp development environment such as Symbolics
> Genera, or even Harlequin's LispWorks.

Cause and effect -- but which way round do you think it operates? We
have a Symbolics with Genera sitting around turned off. We used it for
a couple of projects but the incremental benefits were too small.
I agree with Lawrence in
thinking that many folks pre-judge on the basis of ignorance -- it would
be odd if it wasn't true! However, the issue of Lisp's syntax is largely
independent of programming environment -- you don't need Lisp's syntax
to do sensible things with programs.

I am inclined to think that the best interpretation of Lawrence's
observation is that expensive Lisp equipment is of much more interest to
folks who are prepared to tolerate the problems with Lisp. And I would
count the syntax as, overall, a problem because the benefits could be
achieved without the cost.

> Indeed, the "haters" usually
> have never used any Lisp system beyond the 1962-vintage Lisp 1.5 that
> most of us oldsters were introduced to in undergraduate school.

That doesn't ring a bell with me. My (UK-based) experience is that
people's exposure is to Lisps such as Lucid. This doesn't really sweeten
things, to be fair, since so many commercial Lisp systems have intense
memory requirements -- which ends up spoiling the experience.

Another factor in the development of attitudes hostile to Lisp, in my view,
is the bias of university teaching. (I'm only talking about the UK here.)
I've had a great deal of trouble getting UK graduates to accept Lisp
as a viable language -- even for prototyping. The presentation of Prolog
is much fairer, although I encounter very set attitudes about what can
be done with Prolog.

Steve "Just the opinions, ma'am" Knight

lawrence.g.mayka

Aug 28, 1990, 7:52:40 PM
In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:
>odd if it wasn't true! However, the issue of Lisp's syntax is largely
>independent of programming environment -- you don't need Lisp's syntax
>to do sensible things with programs.

I was referring more than anything to the fear some people have that
they "would never be able to keep track of all those parentheses." If
they use a reasonable editor (i.e., one that keeps track of
parentheses, at the very least!), they shouldn't have any difficulty.

Bob Riemenschneider

Aug 29, 1990, 11:11:39 PM
To save some digging, Allen's _Anatomy of Lisp_ discusses EL1 -- not
in much greater detail than in the posting, though -- and refers you to

B. Wegbreit
"ECL Programmer's Manual"
Harvard Center for Research in Computing Technology TR23-74
December, 1974

-- rar

a.k.a. Chri

Aug 29, 1990, 11:32:43 AM
l...@cbnewsc.att.com (lawrence.g.mayka) writes:

>In article <1990Aug24.1...@uwslh.slh.wisc.edu> lis...@uwslh.slh.wisc.edu (a.k.a. Chri) writes:
>>My point is that there is nothing special that makes *Lisp* a great
>>extension language. Some might argue that it is the "simple syntax,"
>>but that same simple syntax is hated by many (myself *not* included).

>Simplicity is nice, but extensibility is the much more important
>virtue of Lisp syntax. (The two are of course related to some
>degree.) Lisp's other main advantage as an extension language is its
>dynamic typing, which greatly reduces the effort required to make
>small changes to a program's behavior.

But other languages can be as extensible. In fact, ARexx (Rexx for
the Amiga) is likely the most extensible "extension language" I have
heard of. If a program has a Rexx-port, it can be used as an
extension to ARexx. In the Amiga community, it is very common to hear
of people custom-making editor/compiler/hot-key setups with random
editors, compilers, and ARexx. Languages like shell languages
(including ARexx) are incredibly extensible, because you can use other
programs as language "operators". Yes, lisp is naturally extensible,
but so are *many* other languages.

I will agree that lisp's dynamic typing is an important part of its
usefulness.

>I have found that those who "hate" Lisp syntax almost invariably have
>never used a powerful Lisp development environment such as Symbolics
>Genera, or even Harlequin's LispWorks. Indeed, the "haters" usually
>have never used any Lisp system beyond the 1962-vintage Lisp 1.5 that
>most of us oldsters were introduced to in undergraduate school.

Many of the lisp-haters that I am thinking of used the Xerox InterLisp
workstation environment. What got many people were the structured
editors. (I didn't mind these workstations much, aside from the
slowness.) Many people I have spoken to hate lisp because they are
used to Pascal-like languages. Some of these people had programmed in
lisp for several months (i.e. a few AI courses in school).

>Opinions formed in ignorance carry little weight with me.

I agree, but many of the lisp-haters I have met have given lisp a
chance, and found they just didn't like it. Be careful of assuming
that lisp-haters have formed their opinions out of ignorance.

.oO Chris Oo.
--
Christopher Lishka 608-262-4485 "Dad, don't give in to mob mentality!"
Wisconsin State Lab. of Hygiene -- Bart Simpson
lis...@uwslh.slh.wisc.edu "I'm not, Son. I'm jumping on the bandwagon."
uunet!uwvax!uwslh!lishka -- Homer Simpson

Tom Faulhaber

Aug 29, 1990, 7:59:47 PM
In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:
>It is quite easy to imagine a programming language with (say) an Algol-like
>syntax in which the parse tree is available as a standard data-structure.
>Extending the language has several solutions. The elegant solution might
>be to create new grammar rules with associated rewrite actions.

Actually a language that I think meets your criteria was developed at
Harvard in the early 70s. It was called EL1 (Extensible Language 1) and
it was like Algol on the outside and like Lisp on the inside. A system
called PDS (Program Development System) was built around it which
supplied the capability to write rewrite rules. These rules were very
similar to macros in modern Lisps, but with an Algol-like syntax. The
approach commonly used could be described as somewhat object-oriented in
that programmers would develop interfaces to their datatypes at a high
level without concern for the underlying implementation and then specify
the rewrite rules to translate the high-level primitives into actual
code.

Interfaces included not only functions but operators (any symbol could
be an operator) and iterators (and any syntactic sugar you could think
of to make the job easier).
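
[Editorial aside: I can't reproduce EL1's own notation here, so this is
a rough Common Lisp analogue of the approach Tom describes, with
invented names -- a ``rewrite rule'' (macro) that gives callers a
high-level stack interface while the expansion picks the representation:]

    (defmacro stack-push (item stack)
      ;; Callers never see that the representation is a list.
      `(setf ,stack (cons ,item ,stack)))

    (defmacro stack-pop (stack)
      `(let ((top (car ,stack)))
         (setf ,stack (cdr ,stack))
         top))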

EL1 had many of the shortcomings of the Lisps of its day, including
totally dynamic scoping, but it fostered an interesting approach to
programming.

EL1 was taught to upper-level undergraduates in a course called AM113
throughout the seventies and into the early eighties. Some ex-students
still haven't fully recovered :-).

I don't have the references on EL1 handy, but the primary source was Ben
Wegbreit's Ph.D. thesis at Harvard (around 1974, I think). Others who
wrote about aspects of the system were Tom Cheatham (the Prof behind it
all), Judy Townley, and Glenn Holloway.

Send me mail if you want more info and I'll try to dig it out.

Tom


--
-------------------------------------------------------------------------
Thomas A. Faulhaber, Jr. t...@marble.com
Marble Associates (408) 295-5099
Pacific Region/San Jose, CA

Tim Barnes

Aug 31, 1990, 6:55:55 PM
In article <1990Aug28.2...@cbnewsc.att.com> l...@cbnewsc.att.com (lawrence.g.mayka) writes:

...> I was referring more than anything to the fear some people have that
...> they "would never be able to keep track of all those parentheses." If
...> they use a reasonable editor (i.e., one that keeps track of
...> parentheses, at the very least!), they shouldn't have any difficulty.

It's also the case that there are a variety of "syntax packages"
available for Lisp: for example Cadence's Skill(tm) allows the following
semantically equivalent syntaxes:
x = 3*sin( y->z )
(setq x (times 3 (sin (getq y z))))

Mentor Graphics' (used to be from Silicon Compiler Systems) GENIE(tm)
extension language allows either lisp-like, C-shell-like (I think) or
C-like syntax, though I believe they use separate parsers for each
syntax, where Skill lets you mix the two syntaxes in the same
expression.

There are some public domain syntax processors available too: for
example CGOL.

I find I'm happy with either syntactic style, but I like the mixture
best of all. And of course I use emacs to help me format whatever I
write.

--
/ / Manager, Framework Technology - Cadence Design Systems
-/- o /__ ___ __ ___ ___ ___ 2455 Augustine Drive
/ / /\ /\ / / ___/ / / / /__/ /__ Santa Clara, CA 95054-3082
/__ / / \/ \ /__/ /__/ / / / /__ ___/ bar...@cadence.com (408) 987 5417

lawrence.g.mayka

Aug 31, 1990, 10:30:47 PM
In article <1990Aug29....@uwslh.slh.wisc.edu>, lis...@uwslh.slh.wisc.edu (a.k.a. Chri) writes:
> editors, compilers, and ARexx. Languages like shell languages
> (including ARexx) are incredibly extensible, because you can use other
> programs as language "operators". Yes, lisp is naturally extensible,
> but so are *many* other languages.

Extensible shell languages often use *prefix* notation, don't they?
Hmmm...

> Many of the lisp-haters that I am thinking of used the Xerox InterLisp
> workstation environment. What got many people were the structured
> editors. (I didn't mind these workstations much, aside from the

I have read of the '70s debate between syntax-sensitive text editing
(a la MIT) and true structure editing (a la Xerox). I am very pleased
with the former; the latter seems to be dying out, so I may never even
have a chance to sample it.

> slowness.) Many people I have spoken to hate lisp because they are
> used to Pascal-like languages. Some of these people had programmed in
> lisp for several months (i.e. a few AI courses in school).

Several months may or may not be a fair test, depending on the
intensity of Lisp usage, the brand of Lisp used, the degree of
emotional investment in other languages and environments, the degree
of resistance to change (of any kind), etc.

> chance, and found they just didn't like it. Be careful of assuming
> that lisp-haters have formed their opinions out of ignorance.

No, not all, of course. But enough to bother me, I guess.

Jeff Dalton

Sep 5, 1990, 2:07:59 PM
In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:
>With trepidation, I'd like to try to make a couple of points about Lisp
>syntax.

The important thing to understand about (many, perhaps most, of) the
people who like Lisp is that they like the syntax and indeed prefer it
to the more Algol-like alternatives.

They are not, in their view, sacrificing something in order to get
extensibility (or whatever). And if they wanted a different syntax,
they would build it on top of Lisp.

So I think it's important to get the issues right. What it comes down
to, in my opinion, is that some people don't like Lisp syntax and are
consequently willing to use more complicated mechanisms for getting
some of the same benefits.

There isn't much that can be done about this except, when necessary,
pointing out some better ways of using and understanding Lisp. No one
has to like Lisp in the end, but at least then they'll have given it
a fair shot.

>The basic point is that the virtues claimed for Lisp syntax
>are not exclusive to Lisp.

Actually, the full set of virtues is exclusive to Lisp in the sense
that no other language offers the same set. (I am of course counting
Scheme as a kind of Lisp here.)

>Lawrence writes:
>> Simplicity is nice, but extensibility is the much more important
>> virtue of Lisp syntax.

I think it's misleading to consider such things in isolation.

>It is quite easy to imagine a programming language with (say) an Algol-like
>syntax in which the parse tree is available as a standard data-structure.

And many such languages have been built on top of Lisp.

>Speaking personally, I am no fan of the Lisp or Prolog style of syntax.
>It seems to me to be an unfortunate conflation of issues -- external
>syntax is there to make programs readable etc -- internal structure is
>there to make manipulation of programs convenient.

My view is just the opposite. It's fortunate that a readable
external syntax can correspond so closely to a flexible, easily
manipulated data structure.

> I think that Prolog
>makes the better job of it, having the advantage of being relational so
>that quoting rules aren't required, and providing a good infix/prefix/
>postfix operator system.

It's not because Prolog is relational that quoting rules aren't
required. And instead of quoting, Prolog has case conventions.
Variables begin in upper case, symbolic constants ("atoms") don't.
And to have something that begins in upper case treated as an
atom you have to ... wait for it ... put (single) quotes around
it. (Yes, I know we can argue about whether these quotes are
like the Lisp quote or like the |...| syntax.)

BTW, the Lisp quoting rules needn't be any more confusing than
the quoting rules of, say, Basic.

>I agree with Lawrence in
>thinking that many folks pre-judge on the basis of ignorance -- it would
>be odd if it wasn't true! However, the issue of Lisp's syntax is largely
>independent of programming environment -- you don't need Lisp's syntax
>to do sensible things with programs.

The syntax issue is not independent of programming environment.
But this is not a claim that you need Lisp in order to do "sensible
things" -- it's possible to do sensible things with a variety of
languages -- but rather the observation that Lisp *requires* a certain
kind of programming environment if it's to be used effectively.
(In particular, indentation is the key to making Lisp readable,
and an editor that does much of it for you is a big help.)
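
[Editorial aside: the same toy function twice -- first as a Lisp-mode
editor would indent it, then as it tends to come out with no editor
support. The parentheses are identical; only the layout exposes the
structure:]

    (defun classify (n)
      (cond ((minusp n) 'negative)
            ((zerop n) 'zero)
            (t 'positive)))

    (defun classify (n) (cond ((minusp n) 'negative) ((zerop n)
    'zero) (t 'positive)))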

When students, for example, try to write Lisp programs without an
editor with a good Lisp mode, it's not surprising that they find it
difficult. And since they don't know what it would be like to use a
better editor, it's not surprising that they think Lisp is to blame.

On the other hand, Lisp *does* make it easier than other languages to
do certain sensible things.

>I am inclined to think that the best interpretation of Lawrence's
>observation is that expensive Lisp equipment is of much more interest to
>folks who are prepared to tolerate the problems with Lisp.

That's true, of course, if you think of Lisp programmers as
tolerating problems.

>And I would count the syntax as, overall, a problem because the
>benefits could be achieved without the cost.

Let me try to be clear at the risk of repetition. Many Lisp
programmers do not regard it as a cost; they regard the syntax
as a benefit. Many, and not just because they're "used
to" Lisp, find Lisp *easier* to read than other languages that
have a more complicated syntax. Moreover, the combination of
benefits available in Lisp cannot be achieved (without giving
up something) in other languages.

>> Indeed, the "haters" usually
>> have never used any Lisp system beyond the 1962-vintage Lisp 1.5 that
>> most of us oldsters were introduced to in undergraduate school.
>
>That doesn't ring a bell with me. My (UK-based) experience is that
>people's exposure is to Lisps such as Lucid. This doesn't really sweeten
>things, to be fair, since so many commercial Lisp systems have intense
>memory requirements -- which ends up spoiling the experience.

This has been my experience as well. Scheme is much better, I think,
at creating a good initial impression.

>Another factor in the development of attitudes hostile to Lisp, in my view,
>is the bias of university teaching. (I'm only talking about the UK here.)

I agree, and it can be true in the US as well.

Jeff Dalton, JANET: J.Da...@uk.ac.ed
AI Applications Institute, ARPA: J.Dalton%uk.a...@nsfnet-relay.ac.uk
Edinburgh University. UUCP: ...!ukc!ed.ac.uk!J.Dalton

Jay R. Freeman

Sep 6, 1990, 1:11:47 PM

I have forgotten the details, but there used to be a non-Lispy
front-end parser for the once highly-popular "Portable Standard Lisp"
dialect. I believe it was called "RLisp", and accepted source in
a quite conventional Algol-like syntax. I myself did not prefer it to
conventional Lisp syntax (and though at the time I was much more familiar
with conventional lisp than with any language with Algol-like syntax, I
nevertheless got well familiar with this parser, since my assignment at
the time was to convert it into something else).

Perhaps it would be interesting to find out if there are any other
users or former users of PSL on the net, and what if anything they
thought of RLisp. I had the feeling that it had not caught on and did
not look as if it were going to.

-- Jay Freeman

<canonical disclaimer -- I speak only for myself>

Andy Freeman

Sep 5, 1990, 6:18:30 PM
In article <33...@skye.ed.ac.uk> je...@aiai.UUCP (Jeff Dalton) writes:
>In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:
> I think that Prolog
>makes the better job of it, having the advantage of being relational so
>that quoting rules aren't required, and providing a good infix/prefix/
>postfix operator system.

Prolog most definitely has quoting rules; case matters in obscure ways
and constants that begin with the wrong case have to be quoted. The result
isn't simpler.

As to the advantages of infix/postfix/prefix syntax, I note that
operator arity is not limited to 1 and 2. Again, prolog uses two
styles of syntax to handle a case that is handled by one syntax in
lisp. (I'm referring to the opr(i,j,k,l,m) vs a <- b , c business.)

-andy
--
UUCP: {arpa gateways, sun, decwrl, uunet, rutgers}!neon.stanford.edu!andy
ARPA: an...@neon.stanford.edu
BELLNET: (415) 723-3088

lou

Sep 6, 1990, 10:00:24 AM
In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:

>Speaking personally, I am no fan of the Lisp or Prolog style of syntax.
>It seems to me to be an unfortunate conflation of issues -- external
>syntax is there to make programs readable etc -- internal structure is
>there to make manipulation of programs convenient.

The problem with this distinction between internal syntax (only
programs see it) and external syntax (for people) is that in an
interactive programming environment you need to be able to take
something in internal syntax and display it to the user (e.g. as part
of a debugger). But if this display is in internal syntax, then there
is little point in having the external syntax - the cost of dealing
with two syntaxes generally (in my experience and I think in that of
others) is very high, and outweighs any advantage of a nice external
syntax.

Thus, if internal and external syntaxes are different, the debugger has
to be able to display things in external syntax. I.e., you have to
back-translate from internal to external or else you have to somehow
access the original external syntax from which a given piece of
internal syntax was produced. These are both possible, and I know of
systems that do them, but either approach is more complex and more
expensive than simply using a common internal/external syntax. So the
question is, is the advantage of having separate syntaxes worth the
cost?

I think most people who use lisp would agree with me that, if you have
a good programming environment, including an editor that does
indentation for you, lisp syntax is just as humanly-usable as Pascal's,
so the advantage of a separate syntax is 0, and thus not worth any
cost at all. The one exception to this is in arithmetic expressions,
where I find that the lispish form is less readable than the Pascal
form. (I know that much of this is a matter of taste and experience,
and I do not claim that everyone should agree that lisp syntax is as
usable as Pascal.)

--
Lou Steinberg

uucp: {pretty much any major site}!rutgers!aramis.rutgers.edu!lou
arpa: l...@cs.rutgers.edu

Robert Krajewski

Sep 6, 1990, 1:48:50 PM
In article <LOU.90Se...@atanasoff.rutgers.edu> l...@cs.rutgers.edu writes:
>...But if this display is in internal syntax, then there
>is little point in having the external syntax - the cost of dealing
>with two syntaxes generally (in my experience and I think in that of
>others) is very high, and outweighs any advantage of a nice external
>syntax.

This is exactly why nobody uses the ``original'' McCarthy Lisp syntax
anymore. People threw it out 25 years ago...
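
[Editorial aside, hedged from memory: the ``original'' syntax here means
McCarthy's M-expressions, which wrote application with square brackets
and semicolons and were defined by translation into S-expressions -- the
S-expressions being what everyone actually typed. Roughly:]

    car[cons[x; y]]       M-expression, as in the Lisp 1.5 manual
    (car (cons x y))      the S-expression it translates to
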
Robert P. Krajewski
Internet: r...@ai.mit.edu ; Lotus: robert_kraj...@crd.dnet.lotus.com

Steve Knight

Sep 6, 1990, 12:04:50 PM
Andy (& Jeff too) point out that:

>Prolog most definitely has quoting rules; case matters in obscure ways
>and constants that begin with the wrong case have to be quoted. The result
>isn't simpler.

OK - you can see the distinction between variables (begin with upper case)
and atoms (anything else) as a quoting scheme. However, it certainly has
none of the complexity of Lisp's quoting schemes -- there's nothing that
corresponds to backquoting. It hardly deserves the adjective 'obscure'.

As evidence of this, I've never seen a failure-to-quote error committed in
Prolog, though I've seen it many times in Lisp. Is this just a UK thing?
Perhaps teaching methods in the US mean that students rarely or never
make those errors? I know it is a big problem for Lisp acceptance for
students in the UK.

>As to the advantages of infix/postfix/prefix syntax, I note that
>operator arity is not limited to 1 and 2. Again, prolog uses two
>styles of syntax to handle a case that is handled by one syntax in
>lisp. (I'm referring to the opr(i,j,k,l,m) vs a <- b , c business.)

This point eludes me. Prolog has a limited way of extending its own
syntax, it is true. I was simply stating my view that it is able to
create a more satisfactory syntax even with those limits.

Obviously, the attractiveness of syntax is in the eye of the beholder.
I have to accept that folks like Jeff are sincere when they argue that the
syntax of Lisp is very attractive for them. (I am even inclined to agree
when the alternative is C++.)

My argument, which wasn't contradicted, I think, was only that you could
have the same benefits of Lisp syntax without the exact same syntax.

Steve

Eric C. Olson

Sep 7, 1990, 3:58:37 PM
People often confuse the printed representation of Lisp with its
actual representation. That is, Lisp functions are lists with the
symbol lambda as their car. It happens that common Lisp readers use
a fairly direct representation of Lisp functions, namely the body of a
defun form. However, one could imagine a defun-algol function
that translates Algol-like syntax into Lisp functions. Additionally,
one could have a print-algol function that would coerce Lisp functions
into Algol-like print representations.
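
[Editorial aside: Eric's defun-algol and print-algol are imagined
functions, but the underlying point is easy to demonstrate in Common
Lisp -- the function is list data first and printed syntax second:]

    (defparameter *square* '(lambda (x) (* x x)))  ; code built as a list

    (funcall (coerce *square* 'function) 5)        ; => 25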

I think of Lisp as an algorithmic assembler. It allows the programmer
to deal with the issues of a problem without getting involved in the
details of a specific machine. Oftentimes, it's useful to implement
a new language on top of Lisp to provide data abstraction. A rule-based
expert system comes to mind.
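
[Editorial aside: a toy version of the rule layer Eric mentions, with
invented names -- the ``new language'' is just s-expressions plus one
macro and a three-line interpreter:]

    (defvar *rules* '())

    (defmacro defrule (name (var) test action)
      ;; A rule is data: a name, a test closure, and an action closure.
      `(push (list ',name (lambda (,var) ,test) (lambda (,var) ,action))
             *rules*))

    (defun run-rules (fact)
      (dolist (rule *rules*)
        (when (funcall (second rule) fact)
          (funcall (third rule) fact))))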

It's easy to translate into Lisp from a language using another syntax.
The reverse is not true, especially with the advent of aggressive
automatic translators, which can, for example, prove whether or not
a binding can be allocated on a stack instead of a heap (in a Lisp-to-C
translator).

The only other type of language that provides this level of
functionality is the stack-oriented languages. I find them more
difficult to understand -- perhaps because they don't have all the
development tools that other languages have.

IMHO,
Eric

Eric
eri...@ssl.berkeley.edu

Bill Birch

Sep 7, 1990, 6:30:22 AM
As beauty is in the eye of the beholder, a discussion about the merits of
LISP syntax shows the different views of the language. I guess my view
of the LISP syntax could take some people by surprise. As far as I am
concerned LISP is a software development tool. It allows me to express
solutions to problems in small languages of my own creation. The interpreters
and compilers for these languages are LISP programs.

For this use, the syntax of LISP is ideal since it places no restrictions
on the format of my own languages. For example, I have implemented an
assembler for some very nasty state-engines (usually coded in hex by
"binary aboriginal" contractors). Just browsing through the average
introduction to LISP text will show many examples of mini-languages
implemented with the humble s-expression.
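
[Editorial aside: a miniature of the mini-language idea, names mine. A
state-engine written as s-expressions, with a few lines of Lisp as its
entire ``implementation'':]

    (defparameter *machine*
      '((start  (a seen-a) (b start))
        (seen-a (a seen-a) (b done))))

    (defun step-state (state input)
      ;; Follow the arc labelled INPUT out of STATE, or reject.
      (let ((arc (assoc input (rest (assoc state *machine*)))))
        (if arc (second arc) 'reject)))

    (step-state 'start 'a)   ; => SEEN-A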

This is what LISP is all about in my view. LISP is a powerful tool,
the Play-Dough (Plasticene) of programming languages.

Bill


--
Automatic Disclaimer:
The views expressed above are those of the author alone and may not
represent the views of the IBM PC User Group.
--

Paul Frederick Snively

Sep 9, 1990, 2:16:48 AM
I believe Jay is referring to MLisp.

MLisp is indeed an Algol-like syntax for Lisp. The version that I have is
written in PLisp (Pattern-matching Lisp), which in turn is written in Common
Lisp.

Paul Snively
Macintosh Developer Technical Support
Apple Computer, Inc.
Hanging out at Ch...@cup.portal.com

Ulf Dahlen

Sep 9, 1990, 7:34:17 AM
In article <10...@life.ai.mit.edu> r...@rice-chex.ai.mit.edu (Robert Krajewski) writes:
>This is exactly why nobody uses the ``original'' McCarthy Lisp syntax
>anymore. People threw it out 25 years ago...

What did the ``original'' McCarthy Lisp syntax look like?


--Ulf Dahlen
Linkoping University, Sweden and Edinburgh University, Scotland
Internet: u...@ida.liu.se

Reuben_B...@cup.portal.com

Sep 9, 1990, 5:34:32 PM
Jay is referring to RLisp. MLisp is yet another effort along the same lines.
RLisp used to be distributed with the REDUCE computer algebra package.
(Maybe it still is; I haven't used Reduce lately.) Portable Standard Lisp
was originally developed to support portability of Reduce. RLisp was covered
in a chapter in the book by Organick, Forsythe, and Plummer, _Programming
Language Structures_, Academic Press (1978).

Personally I didn't find RLisp attractive. A few control statements such as
IF..THEN were in Algol syntax, but the guts of the work still had to be done
with Lisp 1.5-like functions, e.g. cons(car(a), b), so you felt like you
were switching between two different languages within the same expression!


Scheme, now, _feels_ like Algol-60 (the world's sweetest version of Fortran),
and I'd say that feel is more important than look.
-- Bert

John M. Blasik

Sep 9, 1990, 7:20:52 PM
In article <BARNES.90A...@cds839.cadence.com> bar...@cadence.com (Tim Barnes) writes:
>
>It's also the case that there are a variety of "syntax packages"
>available for Lisp: for example Cadence's Skill(tm) allows the following
>semantically equivalent syntaxes:
> x = 3*sin( y->z )
> (setq x (times 3 (sin (getq y z))))
>

Too bad the designers brewed Skill(tm) with an overabundance of
syntactic sugar at the expense of the time-honored hops and malt of lisp:

(cons 1 2)
*Error* cons: argument #2 should be a list (type template = "gl") - 2

-- john

Robert R. Kessler

Sep 10, 1990, 11:19:10 AM
RLISP was an Algol-like syntax, implemented as a translator into
Portable Standard Lisp. Among those of us who implemented and
subsequently used it, there were many mixed views. Some people felt
that they liked it because they didn't like the ``ugly'' Lisp syntax.
They still had access to all of the Lisp operations, but didn't have to
put up with the parens (remember that when we were writing RLISP, we
didn't have fancy text editors that did paren bouncing, auto-indentation,
etc. -- try writing Lisp code without the editor features; it really is
much more difficult). The others among us felt that RLISP just got in
the way, so we used PSL. RLISP has since diverged somewhat. PSL is
still being distributed (by us and others) and supports a flavor of
RLISP. That version is still in use by the Alpha_1 group here at Utah,
whose solid modelling package has a mode where users can define models
in RLISP. The REDUCE algebra system (which is also
still being distributed) has a slightly different version for
supporting computer algebra (in that case, RLISP works well -- the
most common users of REDUCE are non-computer scientists who find
things like infix operators a requirement). Finally, there is
something called RLISP-88 from Rand, which has extended RLISP with
concurrency operations, an object system, and other neat features.

B.

Hakan Soderstrom

Sep 11, 1990, 8:18:59 AM
The syntax of Lisp is about the same as the syntax of
Assembler: it doesn't exactly stop you from doing what you
want, but it doesn't help either. Almost all kinds of errors
appear as run time errors.

Jeff Dalton writes,

>My view is just the opposite. It's fortunate that a readable
>external syntax can correspond so closely to a flexible, easily
>manipulated data structure.

Yes, this is the crux of the matter. It also means that the
syntax is a compromise between machine readability and human
readability. Because it was designed in the 60's, there is a
bias towards machine readability. You help the compiler
build its data structure.

Goodness. I promised never to enter a syntax argument again
... it is one of those sure-to-flame topics. But it is fun!
And where would we be without Lisp?

- Hakan

--
----------------------------------------------------
Hakan Soderstrom Phone: +46 (8) 752 1138
NMP-CAD Fax: +46 (8) 750 8056
P.O. Box 1193 E-mail: so...@nmpcad.se
S-164 22 Kista, Sweden

Ned Nowotny

Sep 11, 1990, 12:13:13 PM
In article <1990Sep10.0...@hellgate.utah.edu> kessler%cons.u...@cs.utah.edu (Robert R. Kessler) writes:
=>RLISP was an Algol-like syntax, implemented as a translator into
=>Portable Standard Lisp. Among those of us who implemented and
=>subsequently used it, there were many mixed views. Some people felt
=>that they liked it because they didn't like the ``ugly'' Lisp syntax.
=>They still had access to all of the Lisp operations, but didn't have to
=>put up with the parens (remember that when we were writing RLISP, we
=>didn't have fancy text editors that did paren bouncing, auto-indentation,
=>etc. -- try writing Lisp code without the editor features; it really is
=>much more difficult). The others among us felt that RLISP just got in
=>the way, so we used PSL. RLISP has since diverged somewhat. PSL is
=>still being distributed (by us and others) and supports a flavor of
=>RLISP. That version is still in use by the Alpha_1 group here at Utah,
=>whose solid modelling package has a mode where users can define models
=>in RLISP. The REDUCE algebra system (which is also
=>still being distributed) has a slightly different version for
=>supporting computer algebra (in that case, RLISP works well -- the
=>most common users of REDUCE are non-computer scientists who find
=>things like infix operators a requirement).

In so far as extension languages are concerned, this is the most
important argument against unsugared Lisp syntax. Most people
learned mathematics with infix operators and most people are more
accustomed to communicating in a written form where keywords and
separators are the typical delimiters, obviating the need for
parenthesis or bracket matching. In fact, most users are not
persuaded by arguments that Lisp syntax is "elegant" or "easy
to learn." They are far more likely to believe that the programmer
was too lazy to build a simple parser and therefore decided, because
of the obvious intrinsic value of the product, that the user should
be willing to be the parser for an otherwise unfamiliar notation.
This attitude, at best, is not customer-oriented and, in any case,
is unproductive. Parsing technology is well developed. Extension
languages can fairly easily accommodate an ALGOL-like syntax while
still providing all the semantics of Lisp (or Scheme, for that
matter.)

=>Finally, there is
=>something called RLISP-88 from Rand, which has extended RLISP with
=>concurrency operations, an object system, and other neat features.
=>
=>B.


Ned Nowotny, MCC CAD Program, Box 200195, Austin, TX 78720 Ph: (512) 338-3715
ARPA: n...@mcc.com UUCP: ...!cs.utexas.edu!milano!cadillac!ned
-------------------------------------------------------------------------------
"We have ways to make you scream." - Intel advertisement in the June 1989 DDJ.

Andy Freeman

Sep 11, 1990, 10:12:38 PM
to ned...@mcc.com
In article <11...@cadillac.CAD.MCC.COM> ned%c...@MCC.COM (Ned Nowotny) writes:
>In so far as extension languages are concerned, this is the most
>important argument against unsugared Lisp syntax. Most people
>learned mathematics with infix operators and most people are more
>accustomed to communicating in a written form where keywords and
>separators are the typical delimiters, obviating the need for
>parenthesis or bracket matching. In fact, most users are not
>persuaded by arguments that Lisp syntax is "elegant" or "easy
>to learn." They are far more likely to believe that the programmer
>was too lazy to build a simple parser and therefore decided, because
>of the obvious intrinsic value of the product, that the user should
>be willing to be the parser for an otherwise unfamiliar notation.
>This attitude, at best, is not customer-oriented and, in any case,
>is unproductive. Parsing technology is well developed. Extension
>languages can fairly easily accommodate an ALGOL-like syntax while
>still providing all the semantics of Lisp (or Scheme, for that
>matter.)

This makes a couple of assumptions that are unlikely to be true.

1) We're not doing +,-,*,/ arithmetic, we're programming. (BTW - "+"
isn't really a binary operator, neither is "*"; there are surprisingly
few true binary, or unary, operations.)
2) One consequence is that binary and unary operators are the exception;
in fact, operators with arbitrary arity are common, or at least would
be if "modern" languages were as upto date as lisp. That being
the case, infix notation doesn't work and prefix notation requires
delimiters, which brings us back to lisp-like syntaxes.
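
[Editorial aside, a two-line Common Lisp illustration of the arity
point -- the operator applies to any number of operands directly, where
infix must be repeated pairwise:]

    (+ 1 2 3 4 5)              ; one n-ary application
    ;; infix needs: 1 + 2 + 3 + 4 + 5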

As to the development of parsing technology, the state-of-the-art
syntax for n-ary operators, user-defined or system defined, is:
op(<operands, possibly separated by commas>)

I don't see that that is a big improvement over lisp syntax.

Simon E Spero

Sep 12, 1990, 12:32:11 PM

One thing that a lot of people seem to be ignoring is the way that all
modern lisps make it so easy to augment lisp's syntax to suit the job at
hand. When it comes to building complex mathematical expressions, prefix
notation is absolutely hopeless.

When you can add a simple operator-precedence parser and attach it to a
macro in a few minutes, there is no need to bother writing it out long-hand.
Surely the main reason lisp has survived so long is its ability to take on
and mimic the special features of newer languages that evolve around it.
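
[Editorial aside: not the operator-precedence parser Simon describes,
but a hint of the shape in Common Lisp, with an invented macro name -- a
few lines that accept fully parenthesized infix and emit ordinary Lisp:]

    (defmacro infix (expr)
      ;; EXPR is an atom or a (LHS OP RHS) triple, possibly nested.
      (if (atom expr)
          expr
          (destructuring-bind (lhs op rhs) expr
            (list op (list 'infix lhs) (list 'infix rhs)))))

    (infix ((3 * 4) + (10 / 2)))   ; => 17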

Simon
--
zma...@uk.ac.ic.doc | sis...@cix.co.uk | ..!mcsun!ukc!slxsys!cix!sispero
------------------------------------------------------------------------------
The Poll Tax. | Saddam Hussein runs Lotus 123 on | DoC,IC,London SW7 2BZ
I'm Not. Are you?| Apple Macs.| I love the smell of Sarin in the morning

Jeff Dalton

Sep 12, 1990, 3:00:53 PM
In article <SODER.90S...@basm.nmpcad.se> so...@nmpcad.se (Hakan Soderstrom) writes:
>The syntax of Lisp is about the same as the syntax of
>Assembler: it doesn't exactly stop you from doing what you
>want, but it doesn't help either. Almost all kinds of errors
>appear as run time errors.

Actually, Lisp syntax does help many people to do what they want.
It's certainly much more helpful than assembler. Maybe it doesn't
help *you* to do what you want, but so what? No one ever claimed
Lisp was the answer to all problems.

Of course people who think run-time checking is the worst of all
possible sins won't like Lisp. Those people would do well to use
another language instead. ML is a good choice if they want most
of the type work done for them.

>Jeff Dalton writes,
>
>>My view is just the opposite. It's fortunate that a readable
>>external syntax can correspond so closely to a flexible, easily
>>manipulated data structure.
>
>Yes, this is the crux of the matter. It also means that the
>syntax is a compromise between machine readability and human
>readability.

That's exactly what it doesn't mean. In order to be a compromise it
would have to be worse for humans (as compared to other programming
languages -- because every programming language makes such compromises
to some extent) in order to be better for machines.

But, as I pointed out before, (many) Lisp programmers don't regard it
as worse for humans: they prefer it to the more Algol-like syntaxes.
Critics of Lisp's syntax consistently ignore this point and suppose
that the syntax must be a cost rather than a benefit.

Of course, some people who dislike Lisp syntax may also happen to
think the syntax is good for machines. But it's only because they
prefer the syntax of other programming languages that they see Lisp as
making a greater compromise. And different preferences are just what
we expect on questions of syntax. Different people prefer different
things. It might be nice if everyone preferred the same syntax, but
it isn't so.

In any case, the idea that Lisp is more of a compromise than other
languages seems rather bizarre. It may seem plausible (to some) if
we restrict ourselves to syntax. But Lisp is notorious for being
unsuited to "conventional" machines. (Before anyone flames me,
let me point out that I think Lisp can be implemented effectively
on conventional machines. Nonetheless, it has a reputation that
is not entirely unjustified.)

>Because it was designed in the 60's, there is a
>bias towards machine readability. You help the compiler
>build its data structure.

There might be something to this, were it not that other languages
designed at about the same time, such as FORTRAN and Algol 60, don't
show the same "bias".

In any case, you're confusing the origin of Lisp syntax with
the question of whether it really is readable.

-- Jeff

Jeff Dalton

Sep 12, 1990, 3:32:14 PM
In article <135...@otter.hpl.hp.com> s...@otter.hpl.hp.com (Steve Knight) writes:
>Andy (& Jeff too) point out that:
>>Prolog most definitely has quoting rules; case matters in obscure ways
>>and constants that begin with the wrong case have to be quoted. The result
>>isn't simpler.
>
>OK - you can see the distinction between variables (begin with upper case)
>and atoms (anything else) as a quoting scheme. However, it certainly has
>none of the complexity of Lisp's quoting schemes -- there's nothing that
>corresponds to backquoting. It hardly deserves the adjective 'obscure'.

Backquote is, in my opinion, a separate issue. Let me put it this
way: the rules for the use of QUOTE in Lisp (and for its abbreviation
the single quote) are neither obscure nor difficult to understand.
As I said in my previous message, the quoting rules for Lisp needn't
be any more confusing than those of, say, Basic.

As far as Prolog is concerned, the original claim was that Prolog
didn't have quoting rules. I don't really want to quarrel about
whether Prolog is better or worse than Lisp in this respect or in
general.

>As evidence to this, I've never seen a failure-to-quote error committed in
>Prolog, though I've seen it many times in Lisp. Is this just a UK thing?
>Perhaps teaching methods in the US mean that students rarely or never
>make those errors? I know it is a big problem for Lisp acceptance for
>students in the UK.

In my experience, in both the US and the UK, it is true that students
often make quoting mistakes in Lisp and that they often find the
quoting rules confusing. I think there are a number of possible
contributing factors.

A common mistake in presentation, I think, is to make non-evaluation
seem too much of a special case. For example, some people see SETQ as
confusing because it evaluates one "argument" and not another and so
they prefer to present SET first. This doesn't work so well in Scheme
or even in Common Lisp, but it sort of worked for older Lisps.
Unfortunately, it makes SETQ seem like a special case when it's
actually a pretty standard assignment:

    Lisp                      Basic

    (setq a 'apples)          let a$ = "apples"
    (setq b a)                let b$ = a$
 vs (setq b 'a)            vs let b$ = "a$"

Indeed, the whole terminology in which SETQ, COND, etc. are presented
as funny kinds of "functions" and where functions are described as
"evaluating an argument" (or not) may be a mistake.

>Obviously, the attractiveness of syntax is in the eye of the beholder.
>I have to accept that folks like Jeff are sincere when they argue that the
>syntax of Lisp is very attractive for them. (I am even inclined to agree
>when the alternative is C++.)

I would agree, provided we don't take this "eye of the beholder"
stuff too far. It's true that different people will prefer different
syntaxes and that we can't say they're wrong to do so. However, we
shouldn't go on to conclude that all views on the virtues or otherwise
of a syntax are equally valid. Sometimes we can say, for example,
that someone hasn't learned good techniques for reading and writing
code in a particular syntax and that's why they find it so hard
to read and write.

>My argument, which wasn't contradicted, I think, was only that you could
>have the same benefits of Lisp syntax without the exact same syntax.

Actually, I did disagree with you on this point. Perhaps I should
have said so more explicitly. I don't think you can get the same
benefits. You can get *some* of the benefits, but by sacrificing some
of the others. Lisp occupies something like a local maximum in
"benefit space".

-- Jeff

Aaron Sloman

Sep 12, 1990, 11:34:40 PM
It's interesting to see this debate surface yet again.

Jeff Dalton writes


> In my experience, in both the US and the UK, it is true that students
> often make quoting mistakes in Lisp and that they often find the
> quoting rules confusing. I think there are a number of possible
> contributing factors.
>
> A common mistake in presentation, I think, is to make non-evaluation
> seem too much of a special case. For example, some people see SETQ as
> confusing because it evaluates one "argument" and not another and so
> they prefer to present SET first.

....


>
> Indeed, the whole terminology in which SETQ, COND, etc. are presented
> as funny kinds of "functions" and where functions are described as
> "evaluating an argument" (or not) may be a mistake.

It wasn't till I read this remark of Jeff's that I realised that one
reason I don't like Lisp is that, apart from "(" and ")", Lisp
doesn't help one to distinguish syntax words and function names.

Actually, as a sort of failed mathematician, I do appreciate the
elegance and economy of lisp syntax and could probably even like
using some versions (i.e. T or Scheme, though not Common Lisp). But
for teaching most of the kinds of students I have met I would far
rather use Pop-11 which, in many ways, is similar to, and owes a
great deal to, Lisp, though its syntax is much more redundant.
(There are other differences that are irrelevant to this
discussion.)

In teaching Pop-11 I always try to get students to think in terms of
a distinction between

(a) "syntax" words (e.g. the assignment arrow "->", parentheses,
list brackets "[", "]", "if", "endif", "for", "endfor", "while",
"define" etc.)
and
(b) words that represent procedures (this includes functions and
predicates, which return results, and subroutines, which don't).

Most of its syntax forms have distinctive opening and closing
brackets, e.g. "for" ... "endfor", "define" ... "enddefine" etc.
This is verbose and inelegant but it helps learners to grasp the
sort of distinction (I think) Jeff is making.

There are exceptions, like the assignment arrow "->" which looks
just like an infix procedure name, but since students don't
naturally class it with the infix operators they already know (i.e.
the arithmetic operators) they easily accept it as a different kind
of beast, playing a special and important role in their programs.
(Actually taking something from the stack and putting it somewhere,
or running a procedure "in updater mode").

Another kind of exception that helps to muddle the distinction is
the use of special brackets for constructing lists and vectors

[a list of words] {a vector of words}

The brackets have both a syntactic role (delimiting expressions) and
also implicitly identify functions that return results. Again these
are very special forms that are easily learnt as special cases.
(Natural languages are full of special cases: the human brain seems
good at coping with lots of special cases.)

By contrast the uniform syntax of lisp makes it not so easy to grasp
that car, cdr, sqrt, append, etc. are different beasts from setq,
cond, quote, loop, etc. Hence the tendency to make the kind of
mistake that Jeff describes, i.e. talking about different kinds of
functions.

It should be possible to investigate this conjecture about lisp
syntax causing more confusion empirically. I wonder if anyone has?

Moreover, the syntactic redundancy involved in using different
closing brackets for each construct in Pop-11 really does make it
much easier for many readers to take in programs, which is partly
analogous to the reasons why modern lisp programmers eschew the
syntactic economy of APL programmers and use long and redundant
procedure names in this sort of style
(sum_of_squares_of x y)
rather than
(ssq x y)

Having extra syntactic redundancy also makes it easier to provide
helpful compile time checks and error messages, e.g. if an editing
mistake produces
if isinteger(x)
else
foo(y)
endif

the compiler will complain:

MISPLACED SYNTAX WORD : FOUND else READING TO then

which can be especially helpful where sub-expressions are more
complex.

> Steve wrote


> >Obviously, the attractiveness of syntax is in the eye of the beholder.
> >I have to accept that folks like Jeff are sincere when they argue that the
> >syntax of Lisp is very attractive for them. (I am even inclined to agree
> >when the alternative is C++.)

Jeff replied


> I would agree, provided we don't take this "eye of the beholder"
> stuff too far. It's true that different people will prefer different
> syntaxes and that we can't say they're wrong to do so. However, we
> shouldn't go on to conclude that all views on the virtues or otherwise
> of a syntax are equally valid. Sometimes we can say, for example,
> that someone hasn't learned good techniques for reading and writing
> code in a particular syntax and that's why they find it so hard
> to read and write.

I agree.

It should also be possible to spell out precisely the cognitive
requirements for particular kinds of learners and users at
particular stages in their development, or for particular kinds of
programming tasks, and establish the strengths and weaknesses of
alternative languages by argument and evidence and not by simply
agreeing to differ.

E.g. the use of essentially redundant keywords like "then", "elseif"
and "else" in conditionals, and the use of distinct closing brackets
like "endif", "endwhile" that remind you what sort of construct they
are terminating, has a particularly important consequence. It
reduces short term memory load, and allows local attention
focussing, by providing local cues or reminders as to the nature of
the context, whereas without them one has to parse a larger
expression to know that something plays the role of a condition, or
a consequent. (Some lisp constructs also use these extra keywords,
e.g. "for" ... "from" ... "to". So the fault is one of degree.)

The point about memory load is an objective fact. Whether it matters
for a particular user is going to depend on the kinds of cognitive
skills that user has already developed. It's a bit like the question
whether you should repeat the key signature at the beginning of
every line in a musical score, or simply indicate it when the key
changes. For certain experienced musicians the economical method
will suffice. For most ordinary mortals the frequent reminder is
useful, even if it does increase the clutter on the page and
probably the printing costs! Note that the time signature is not
usually repeated on every line because that is (mostly) evident from
the contents of each bar, or at least determined to within a small
range of alternatives.

Another analogy is with the difference between the musical
annotation "crescendo al fine" (= get louder from here to the end),
which requires the reader to remember the instruction from there on,
and the alternative notation which has a pair of lines getting
further and further apart, immediately above or below the stave, as
a _continual_ reminder that you have to be getting louder. For many
would-be performers the first form will not be as effective as
the second.

Exactly how important all these differences in syntactic redundancy
and memory load, etc. are, and in what way, will to some extent
remain unsettled until we have good theories describing the various
kinds of cognitive processes that go on in various kinds of people
when they read, or write, or design, or debug, programs.

But most designers of programming languages don't think about human
cognitive processes.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
EMAIL aar...@cogs.sussex.ac.uk

Piercarlo Grandi

Sep 13, 1990, 8:52:42 AM
On 12 Sep 90 02:12:38 GMT, an...@Neon.Stanford.EDU (Andy Freeman) said:

andy> In article <11...@cadillac.CAD.MCC.COM> ned%c...@MCC.COM (Ned
andy> Nowotny) writes:

ned> In so far as extension languages are concerned, this is the most
ned> important argument against unsugared Lisp syntax. Most people
ned> learned mathematics with infix operators and most people are more
ned> accustomed to communicating in a written form where keywords and
ned> separators are the typical delimiters, obviating the need for
ned> parenthesis or bracket matching.

I would also like to observe that while providing users with a familiar,
mathematics-like syntax may help sales, it is actually extremely
misleading: even if they look like mathematical expressions, the
semantics of expressions in programs are only weakly related to those of
the mathematical expressions they resemble, especially for
floating point, or for unsigned in C (which actually uses modular arithmetic).

andy> This makes a couple of assumptions that are unlikely to be true.

andy> 1) We're not doing +,-,*,/ arithmetic, we're programming. (BTW - "+"
andy> isn't really a binary operator, neither is "*"; there are
andy> surprisingly few true binary, or unary, operations.)

Precisely. Agreed. Even the semantics are different.

andy> 2) One consequence is that binary and unary operators are the exception;
andy> in fact, operators with arbitrary arity are common, or at least would
andy> be if "modern" languages were as upto date as lisp. That being
andy> the case, infix notation doesn't work and prefix notation requires
andy> delimiters, which brings us back to lisp-like syntaxes.

The real challenge here is that we want some syntax that says, apply
this operator symbol to these arguments and return these value_s_. Even
lisp syntax does not really allow us to easily produce multiple values.

So either we say that, after all, all functions take just one argument and
return one result (and they may be both structured), which may be an
appealing solution, or we are stuck; mainly because our underlying
mathematical habits do not cope well with program technology (and I am
not happy with those that would like, like the functionalists, to reduce
programming to what is compatible with *their* notion of maths).

andy> As to the development of parsing technology, the state-of the art
andy> syntax for n-ary operators, user-defined or system defined, is:
andy> op(<operands, possibly separated by commas>)

andy> I don't see that that is a big improvement over lisp syntax.

Actually, there is state-of-the-art technology for multiple arguments to
multiple results, and it is *Forth* of all things, or maybe POP-2. A lot
of power of Forth and Pop-2 indeed comes from their functions being able
to map the top N arguments on the stack to a new top of M results. Maybe
postfix is not that bad after all. Something like

10 3 / ( 10 and 3 replaced with 1 and 3 ) quot pop rem pop

or with some sugaring like Pop-11.

Another alternative is that used in languages like CDL or ALEPH, based
on the affix/2-level grammar style, which junks function notation entirely
and just uses imperative prefix syntax. Something like

divide + 10 + 3 - quot - rem.

I have seen substantial programs written like this, and this notation
actually is not as verbose as it looks, and is remarkably clear. For
example, Smalltalk syntax is essentially equivalent to this, as in:

10 dividedBy: 3 quotient: quot remainder: rem!

There are many interesting alternatives...
--
Piercarlo "Peter" Grandi | ARPA: pcg%uk.ac....@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@cs.aber.ac.uk

Ozan Yigit

unread,
Sep 13, 1990, 12:33:00 PM9/13/90
to
[ongoing discussion regarding lisp syntax]

In article <34...@skye.ed.ac.uk> je...@aiai.UUCP (Jeff Dalton) writes:

>I would agree, provided we don't take this "eye of the beholder"
>stuff too far.

Why stop now, after we have come this far ?? ;-)

>... It's true that different people will prefer different
>syntaxes and that we can't say they're wrong to do so. However, we
>shouldn't go on to conclude that all views on the virtues or otherwise
>of a syntax are equally valid.

It follows therefore that one should try to substantiate some or all of the
claims regarding the effectiveness and benefits of a syntax, such as that
of lisp, instead of just presenting opinions. I have seen studies on
"natural artifical languages" (Gary Perlman's term for programming/command
languages), effects of punctuation in programming etc. but I don't recall
seeing a study that has a definitive word on overwhelming benefits of one
syntax over another. If you know of any that substantiate various claims
[so far made] about lisp syntax, I would be very interested in it.

[I am curious: Does anyone know why Papert chose lisp-like semantics but
not the syntax for Logo?]

[and regarding the benefits of syntax]

>You can get *some* of the benefits, but by sacrificing some
>of the others. Lisp occupies something like a local maximum in
>"benefit space".

Really. QED is it?

oz
---
The king: If there's no meaning              Usenet: o...@nexus.yorku.ca
in it, that saves a world of trouble         ......!uunet!utai!yunexus!oz
you know, as we needn't try to find any.     Bitnet: oz@[yulibra|yuyetti]
Lewis Carroll (Alice in Wonderland)          Phonet: +1 416 7365257x33976

Jeff Dalton

unread,
Sep 13, 1990, 4:56:45 PM9/13/90
to
In article <11...@cadillac.CAD.MCC.COM> ned%c...@MCC.COM (Ned Nowotny) writes:

>In so far as extension languages are concerned, this is the most
>important argument against unsugared Lisp syntax. Most people
>learned mathematics with infix operators and most people are more
>accustomed to communicating in a written form where keywords and
>separators are the typical delimiters, obviating the need for
>parenthesis or bracket matching.

Um, when's the last time *you* wrote expressions in an infix
language? Parentheses and bracket-matching are definitely
involved. That is, the difference is, to some extent, a
matter of degree.

Of course, you're right that there are more user-friendly syntaxes than
that provided by Lisp, at least if the users are not already familiar
with Lisp. However, (1) the implementation of a Lisp-based extension
language tends to be simpler and smaller, (2) the result is a proper
programming language rather than something more restricted, (3) Lisp
is at least as "friendly" as some of the alternatives such as Post-
Script, (4) experience with a number of implementations of Emacs (eg,
Multics Emacs, GNU Emacs) -- and of other things -- has shown that
users, even "non-programmers", can use Lisp effectively as an
extension language and even find such use pleasant.

> In fact, most users are not
>persuaded by arguments that Lisp syntax is "elegant" or "easy
>to learn." They are far more likely to believe that the programmer
>was too lazy to build a simple parser and therefore decided, because
>of the obvious intrinsic value of the product, that the user should
>be willing to be the parser for an otherwise unfamiliar notation.

I also think you overestimate the extent to which users will be
comfortable with mathematics and the rigidities imposed by programming
languages in general. That is, many users will feel they are parsing
an unfamiliar notation regardless.

>This attitude, at best, is not customer-oriented and, in any case,
>is unproductive. Parsing technology is well developed. Extension
>languages can fairly easily accommodate an ALGOL-like syntax while
>still providing all the semantics of Lisp (or Scheme, for that
>matter.)

True.

-- JD

Richard A. O'Keefe

unread,
Sep 13, 1990, 9:10:22 PM9/13/90
to
In article <34...@syma.sussex.ac.uk>, aar...@syma.sussex.ac.uk (Aaron Sloman) writes:
> It's interesting to see this debate surface yet again.

> It wasn't till I read this remark of Jeff's that I realised that one
> reason I don't like Lisp is that, apart from "(" and ")", Lisp
> doesn't help one to distinguish syntax words and function names.

I once built a programming language which was a sort of hybrid between
Lisp and Pop. In RILL, one wrote e.g.
$IF (> X Y) $THEN (SETQ MAX X) $ELSE (SETQ MAX Y) $FI
Basically, I used keywords for control structures ($PROC for lambda,
$BEGIN for let, $IF, $FOR, $WHILE, and so on) and Lisp syntax for the rest.
The parser _was_ better able to notice typing mistakes, and I _did_ make
far fewer parenthesis errors than I did with straight Lisp when I later
got my hands on it. But that was before I met Emacs.

> By contrast the uniform syntax of lisp makes it not so easy to grasp
> that car, cdr, sqrt, append, etc. are different beasts from setq,
> cond, quote, loop, etc. Hence the tendency to make the kind of
> mistake that Jeff describes, i.e. talking about different kinds of
> functions.

The thing about Pop (as it was last time I used it) is that there is
no defined internal form for code. At one end of the spectrum we have
"token stream" and at the other end we have "compiled code", and there
is nothing in between. I don't know how Pop-11 handles it these days,
but in WonderPop the easiest way to write a macro was e.g.

form for x:id in e:expr do s:stmts enddo;
formvars L;
vars L; e -> L;
while L.null.not do L.dest -> x -> L; s enddo
endform

which is roughly the equivalent of

(defmacro for (x e &rest s)
  (let ((L (gensym)))
    `(do ((,L ,e (cdr ,L)))
         ((null ,L))
       (setq ,x (car ,L))
       ,@s)))

but it worked rather differently. When the parser found the keyword
'for' it would call the function defined by the form. That function
would call .readid to read the identifier for x. It would then check
that the next token was "in". It would then call .readexpr to read the
expression e. It would then check that the next token was "do". It
would then call .readstmts to read the body s. It would then check that
the next token was "enddo". Then it would start on the expansion. If
any of the reads or tests failed, it would backtrack and try another
form (any number of forms could start with the same keyword). What were
x, e, and s bound to? *lists of tokens*. The body of the form was
processed by making a list of tokens and pushing the lot back on the
input token stream.

I'm sure I have the names of the reading functions wrong, but that's
basically how macros worked in WonderPop, as transformations of
sequences of tokens.

It works very well. I found Pop "forms" easy to use.

But macros aren't the only use for an internal representation.
There was no debugging interpreter, for example. (Though you could
quite easily trace functions.)

> It should also be possible to spell out precisely the cognitive
> requirements for particular kinds of learners and users at
> particular stages in their development, or for particular kinds of
> programming tasks, and establish the strengths and weaknesses of
> alternative languages by argument and evidence and not by simply
> agreeing to differ.

Agreed!

> But most designers of programming languages don't think about human
> cognitive processes.

You should read C.J.Date's comments on SQL...

--
Heuer's Law: Any feature is a bug unless it can be turned off.

Richard A. O'Keefe

unread,
Sep 14, 1990, 3:45:24 AM9/14/90
to
In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
> On 12 Sep 90 02:12:38 GMT, an...@Neon.Stanford.EDU (Andy Freeman) said:
> andy> 1) We're not doing +,-,*,/ arithmetic, we're programming. (BTW - "+"
> andy> isn't really a binary operator, neither is "*"; there are
> andy> surprisingly few true binary, or unary, operations.)

> Precisely. Agreed. Even the semantics are different.

I missed this the first time it came around.
I have some bad news for the two of you: in floating-point arithmetic
"+" _is_ a binary operation. Floating-point "+" and "*" are not
associative. If one Lisp compiler turns (+ X Y Z) into
(plus (plus X Y) Z) and another turns it into (plus X (plus Y Z))
then they are going to produce _different_ results. For integer
and rational arithmetic, there's no problem, but anyone doing floating
point calculations in Lisp has to be very wary of non-binary + and * .
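
A concrete demonstration, as a minimal sketch (assuming IEEE double
floats, where 1.0d20 + 1.0d0 rounds back to 1.0d20):

(+ (+ 1.0d20 -1.0d20) 1.0d0)   ; => 1.0d0
(+ 1.0d20 (+ -1.0d20 1.0d0))   ; => 0.0d0, the 1.0 is lost to rounding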

Jeffrey Jacobs

unread,
Sep 14, 1990, 3:06:08 PM9/14/90
to

Having watched the LISP Syntax thread for a while, I thought a little
history might be in order...

Way back in the old days, development in LISP and the underlying
philosophy of the language were substantially different.

LISP was an interpreted language. Debugging and development was done in
interpreted mode; compilation was considered the *last* step in the
development process.

LISP had two fundamental data types, CONS cells and ATOMs. ATOMs were
"indivisible", and included "interned symbols", numbers and strings.
(Arrays were added, but were less important to the underlying
concept).

A key aspect of the language was the "equivalence of data and (source)code",
i.e. code consisted of LISP structures, which could be manipulated *exactly*
like any other LISP structure. Note that this is substantially
different from the modern view, where "functions (*not* code) are a data
_type_" and not directly modifiable; e.g. even "interpreted" code in
most modern implementations gets converted to something other than a
CONS-based structure.

This "equivalence" allowed some very interesting capabilities that are
no longer available in modern implementations. Since the interpreter
operated on list structures, it was possible to dynamically modify code
while in the process of execution. Now, most of us didn't write
self-modifying code (although we probably all tried it at least once).
But we were able to stop an execution, and make changes to code and
continue from the breakpoint without having to recompile or start over.
We could issue a break, put a trace point around an expression *within*
a defined function, and continue. Or we could fix it, and continue;
the "fix" would be propagated even to pending calls. E.g. if you
had
(DEFUN FOO (X Y)
  expr1
  expr2
  ...)

and expr1 invoked FOO recursively, you could break the execution,
change expr2 (or TRACE it, or BREAK it or...), and all of the pending
invocations on the stack were affected. (You can't do this with
compiled code).

It allowed things like structure editors (where you didn't need to
worry about messing up parends), DWIM, and other features that have
been lost in the pursuit of performance.

With this view (and combined with the mathematical purity/simplicity
of McCarthy's original concept) LISP syntax not only makes sense,
it is virtually mandatory!

Of course, it also effectively mandated dynamic scoping. "Local"/lexical
scoping really came about as the default for the compiler primarily because
most well written LISP code didn't use function arguments as free/special
variables, so it was an obvious optimization. However, several years
ago, Daryle Lewis confided in me that he had intended that UCI LISP be
released with the compiler default set to everything being SPECIAL.
Given the historical problems in reconciling local and free variables,
and the fact that the vast majority of LISPers who learned the language
in the '70s and early '80s learned UCI LISP, I can't help but wonder
what effect this might have had on Common LISP...

(FWIW, REDUCE was originally done in UCI LISP way back in the early '70s,
and BBN/INTERLISP supported MLISP, an Algol like syntax. Seems to me that
RLISP must go back that far as well. Given that structure editors are
incredibly ancient, I wonder why the people at Utah didn't use one of
those. Oh, and almost nobody ever learned LISP using 1.5...)

Personally, I think the whole reason LISP machines were created
was so that people could run EMACS :-)

However, if compilation is the primary development strategy (which it
is with CL), then the LISP syntax is not particularly useful. Modern
block structured syntax is much easier to read and maintain; it also
allows constructs such as type declarations, etc. to be much
more readable. Infix notation is indeed much more familiar to most
people. Keyword syntax in most languages is much more obvious, readable
and certainly less prone to errors and abuse than CL. The elimination
of direct interpretation of structures (as read) and the almost total use
of text editors does indeed leave LISP syntax a relic from the past.


Jeffrey M. Jacobs (co-developer of UCI LISP, 1973)
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 7670...@COMPUSERVE.COM

Jeff Dalton

unread,
Sep 14, 1990, 4:21:11 PM9/14/90
to
In article <34...@syma.sussex.ac.uk> aar...@syma.sussex.ac.uk (Aaron
Sloman) writes:

>It wasn't till I read this remark of Jeff's that I realised that one
>reason I don't like Lisp is that, apart from "(" and ")", Lisp
>doesn't help one to distinguish syntax words and function names.

Actually, Pop-11 doesn't do much along those lines either.
It's not like it uses a different font for them (cf Algol).

>Moreover, the syntactic redundancy involved in using different
>closing brackets for each construct in Pop-11 really does make it
>much easier for many readers to take in programs,

>E.g. the use of essentially redundant keywords like "then", "elseif"
>and "else" in conditionals, and the use of distinct closing brackets
>like "endif", "endwhile" that remind you what sort of construct they
>are terminating, has a particularly important consequence. It
>reduces short term memory load, [...]

Now, there's no doubt something to what you say. However, I don't
think there's as much to it as you suppose.

One of the *mistakes* some people make when writing Lisp is to
try to add the redundancy you describe by putting certain close
parens on a line of their own followed by a comment such as
" ; end cond". It makes the code *harder* to read, not easier.

People just starting to use Lisp, and people who use editors without a
good Lisp mode (which at least used to include the PopLog editor that
comes with Pop-11), may well find it helpful; but experienced Lisp
programmers generally do not.

Lisp procedures should be fairly short and indented so that it's easy
to see the scope of a construct: everything up to the next line not
indented more to the right. Putting in lots of end markers makes this
harder to see, and short-term memory doesn't have much problem
keeping track.

Of course, it's no doubt possible to write procedures (such as ones
that are too long) where end markers might make a difference. But
it is also possible to avoid those cases, just as it is possible to
avoid other bad practices.

Moreover, if you want to argue for the advantages of distinct closing
brackets it's not necessary to compare Pop-11 with Lisp. How about
comparing it with a language that uses "begin" and "end" (or "{" and
"}") for everything rather than "endif" "endwhile", etc.? I think
there are too many other differences between Pop-11 and Lisp.

-- Jeff

Barry Margolin

unread,
Sep 14, 1990, 5:49:46 PM9/14/90
to
In article <20...@well.sf.ca.us> jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
>Way back in the old days, ...

>LISP had two fundamental data types, CONS cells and ATOMs. ATOMs were
>"indivisible", and included "interned symbols", numbers and strings.
>(Arrays were added, but were less important to the underlying
>concept).

I don't know which "old days" you're referring to, but in the old days I
remember (I learned MacLisp in 1980) arrays predated strings. PDP-10
MacLisp had just recently acquired a kludgey fake string mechanism, but
there was little support for anything but input and output of them.
Arrays, on the other hand, had existed since at least the early 70's.
--
Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Robert Krajewski

unread,
Sep 15, 1990, 12:55:25 PM9/15/90
to
In article <62...@castle.ed.ac.uk> exp...@castle.ed.ac.uk (Ulf Dahlen) writes:
>What did the ``original'' McCarthy Lisp syntax look like?

Umm, actually, I can't recall exactly...

But I think it was something like:

The list (a b c) ==> (a, b, c)

(cons x y) ==> cons[x;y]

Piercarlo Grandi

unread,
Sep 15, 1990, 3:30:24 PM9/15/90
to
On 12 Sep 90 02:12:38 GMT, an...@Neon.Stanford.EDU (Andy Freeman) said:

andy> 1) We're not doing +,-,*,/ arithmetic, we're programming. (BTW - "+"
andy> isn't really a binary operator, neither is "*"; there are
andy> surprisingly few true binary, or unary, operations.)

In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk
(Piercarlo Grandi) writes:

pcg> Precisely. Agreed. Even the semantics are different.

On 14 Sep 90 07:45:24 GMT, o...@goanna.cs.rmit.oz.au (Richard A. O'Keefe)
said:

ok> I missed this the first time it came around. I have some bad news
ok> for the two of you: in floating-point arithmetic "+" _is_ a binary
ok> operation. Floating-point "+" and "*" are not associative.

The semantics *are* different. Didn't I write that?

ok> If one Lisp compiler turns (+ X Y Z) into (plus (plus X Y) Z) and
ok> another turns it into (plus X (plus Y Z)) then they are going to
ok> produce _different_ results.

As long as people *know* that the semantics are different, and this is
the big problem, they can choose to code the thing in any of
the three ways.

ok> For integer and rational arithmetic, there's no problem,

Well, this is a case in point for my argument about the pitfalls; there
is still a problem. Nobody constrains you to have only positive numbers
as the operands to n-ary fixed point +, so that

(+ -10 +32767 +10)

is not well defined on a 16 bit machine, unless you do modular
arithmetic throughout.
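
To make the grouping hazard concrete, here is a small sketch simulating
16-bit two's-complement addition (ADD16 is my own name). Under
wraparound both groupings agree, but the second overflows transiently
and would trap on an overflow-checking machine:

(defun add16 (x y)
  ;; wrap the sum to 16 bits, then reinterpret bit 15 as the sign
  (let ((s (ldb (byte 16 0) (+ x y))))
    (if (logbitp 15 s) (- s 65536) s)))

(add16 (add16 -10 32767) 10)   ; => 32767, no intermediate overflow
(add16 -10 (add16 32767 10))   ; => 32767, but 32767+10 wrapped to -32759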

As we already know but sometimes forget, arithmetic on computers follows
*very* different rules from arithmetic in vanilla mathematics, and using
a notation that resembles the latter can be utterly misleading, even to
very competent people, unless they pause for hard thought.

(+ a b c d e)

simply means apply repeated *computer* addition on *computer* fixed or
floating point throughout. It is the *programmer's* responsibility to
make sure this makes sense -- the unfamiliar syntax may make him pause,
at least -- and somehow maps into vanilla mathematics not too
inaccurately.

ok> but anyone doing floating point calculations in Lisp has to be very
ok> wary of non-binary + and * .

Anyone doing floating point arithmetic on *any* machine, IEEE standard
or Cray, has to be very very wary of assuming it is the same as
arithmetic on reals, and yet a lot of people do (and then complain that
two computers with different floating representations print utterly
different results to their programs!).

The real problem is that *lots* of people still believe that floating
point numbers are reals, and fixed point ones have infinite precision!

They miss out completely on the non-associativity or modularity, and all
the other funny, funny properties of floating and fixed point.

My contention is not with lisp n-ary operators (which are damn useful
and safe in other domains); it is that even the simplest existing
operators on floats and fixeds have entirely non-obvious semantics, and
that this is carefully disguised by conventional, mathematics-looking
syntax. Ah, sales!

Aaron Sloman

unread,
Sep 16, 1990, 8:58:54 AM9/16/90
to
je...@aiai.ed.ac.uk (Jeff Dalton) (14 Sep 90 20:21:11 GMT)
commented on my remarks on the syntactic poverty of Lisp.

(From me:)


> >It wasn't till I read this remark of Jeff's that I realised that one
> >reason I don't like Lisp is that, apart from "(" and ")", Lisp
> >doesn't help one to distinguish syntax words and function names.
>

(From Jeff:)


> Actually, Pop-11 doesn't do much along those lines either.
> It's not like it uses a different font for them (cf Algol).

I agree that Pop doesn't make the syntax words look different in
isolation. That wasn't what I meant (though perhaps that would be a
good idea especially in printed text). Rather, I meant that in
Pop-11 the syntax words (or many of them) play an obviously
different role in syntactic constructs, e.g. having matching
brackets and associated keywords. This helps students grasp that
they have a different role from function names, i.e. they are
concerned with how program text is grouped into sub-structures with
(e.g.) control relations between them (evaluating THIS expression
determines whether THAT one is executed, or whether iteration
continues, etc.)

Getting a good conceptual understanding of all this takes students
some time. Using collections of related syntax words that indicate
different kinds of syntactic "fields" or "contexts" in program text,
seems to help. (I now think this is more than just an aid to short
term memory as suggested in my earlier message. But the point needs
more thought.)

> ....
(From me:)


> >E.g. the use of essentially redundant keywords like "then", "elseif"
> >and "else" in conditionals, and the use of distinct closing brackets
> >like "endif", "endwhile" that remind you what sort of construct they
> >are terminating, has a particularly important consequence. It
> >reduces short term memory load, [...]

(From jeff:)


> Now, there's no doubt something to what you say. However, I don't
> think there's as much to it as you suppose.
>
> One of the *mistakes* some people make when writing Lisp is to
> try to add the redundancy you describe by putting certain close
> parens on a line of their own followed by a comment such as
> " ; end cond". It makes the code *harder* to read, not easier.

(He goes on to justify this by saying that indentation plus a good
editor helps, and that GOOD lisp programs have short procedure
definitions and that adding such comments makes them longer and
harder to take in.)

> Of course, it's no doubt possible to write procedures (such as ones
> that are too long) where end markers might make a difference. But
> is is also possible to avoid those cases, just as it is possible to
> avoid other bad practices.

I agree that a good editor helps a lot (though people often have to
read code without an editor, e.g. printed examples in text books,
etc) and I agree that well-written short lisp definitions
(e.g. up to five or six lines) are often (though not always) easily
parsed by the human brain and don't need much explanatory clutter.

But I doubt that it is always desirable or possible to build good
programs out of definitions that are short enough to make the extra
contextual reminders unnecessary. It probably depends on the field
of application. E.g. I suspect that in symbolic math programs you
can get away with lots of short procedures, whereas in graphics,
vision, operating system design, compilers(??) and building complex
interactive programs like text editors and formatters, some at least
of the procedures are going to have longish stretches of nested case
analysis, several nested loops, etc.

Even Common_Lisp_The_Language (second edition) has examples that I
think are long enough to benefit from these aids that you say are
unnecessary. (Scanning more or less at random for pages with lisp
code, I found examples on pages 340-349, 667, 759, 962 and 965.) Or
are these just examples of bad lisp style? (I've seen much worse!)


>
> Moreover, if you want to argue for the advantages of distinct closing
> brackets it's not necessary to compare Pop-11 with Lisp. How about
> comparing it with a language that uses "begin" and "end" (or "{" and
> "}") for everything rather than "endif" "endwhile", etc.?

Yes, Pascal and C (especially C) are open to some of the same
objections as lisp, because they don't have sufficiently distinct
opening and closing brackets, though the use of "else" is a step
in the right direction.

This is why many programmers using these languages add the kinds of
comments you disapprove of. (Some Pop-11 programmers do also.)

ML is another language which, from my observations and reports of
student difficulties, has a syntax that is too economical, though in
a very different way from Lisp. I don't know the language well, but
I think it requires the reader to depend too much on remembered
operator precedences in different contexts. This is no problem for a
computer but very hard for people. So students often have great
trouble understanding how compile-time error messages relate to the
code they have written, which "looks" obviously right to them.
Additional use of brackets might help. (Perhaps this syntactic
poverty will prevent ML ever being used widely for large scale
software engineering.)

Prolog has yet another kind of syntactic poverty, inherited from its
dependence on a logical interpretation. (E.g. textual order has very
different procedural meanings depending on context: within a rule
concatenation means something like "and then", whereas between rules
with the same predicate it means something like "and if that fails
then try...")

My general point is that human perception of complex structures is
easiest and most reliable when there is well chosen redundancy in
the structures, and most difficult and error-prone when there isn't.
However, as you point out, too much redundancy can sometimes get in
the way, and we don't yet know the trade-offs. The use of
indentation in lisp and C is an example of redundancy that is an
essential aid for humans although totally unnecessary in theory. But
it is only one type of redundancy, and is useful only on a small
scale (as you imply).

(Maybe this stuff should have been cross-posted to comp.cog-eng,
since cognitive engineering is what we are talking about.)

For the record, I should also say that I don't think there's much
difference in readability between the following:

a(b(c(d, e), f(g, h))) [Pop-11 and Pascal, etc]

(a (b (c d e) (f g h))) [Lisp]

Though I do find the latter more visually elegant.


> I think
> there are too many other differences between Pop-11 and Lisp.

Yes, there are other differences, including the differences
mentioned by Richard O'keefe to which I'll respond in another
message.

Aaron

Aaron Sloman

unread,
Sep 16, 1990, 9:05:16 AM9/16/90
to
o...@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (14 Sep 90)
Organization: Comp Sci, RMIT, Melbourne, Australia

Richard commented on my comments on Jeff Dalton's remarks about Lisp.

(From me)


> > It wasn't till I read this remark of Jeff's that I realised that one
> > reason I don't like Lisp is that, apart from "(" and ")", Lisp
> > doesn't help one to distinguish syntax words and function names.
>

(From Richard)


> I once built a programming language which was a sort of hybrid between
> Lisp and Pop. In RILL, one wrote e.g.
> $IF (> X Y) $THEN (SETQ MAX X) $ELSE (SETQ MAX Y) $FI
> Basically, I used keywords for control structures ($PROC for lambda,
> $BEGIN for let, $IF, $FOR, $WHILE, and so on) and Lisp syntax for the rest.
> The parser _was_ better able to notice typing mistakes, and I _did_ make
> far fewer parenthesis errors than I did with straight Lisp when I later
> got my hands on it. But that was before I met Emacs.
>
> > By contrast the uniform syntax of lisp makes it not so easy to grasp
> > that car, cdr, sqrt, append, etc. are different beasts from setq,
> > cond, quote, loop, etc. Hence the tendency to make the kind of
> > mistake that Jeff describes, i.e. talking about different kinds of
> > functions.
>
> The thing about Pop (as it was last time I used it) is that there is
> no defined internal form for code. At one end of the spectrum we have
> "token stream" and at the other end we have "compiled code", and there
> is nothing in between.

> .....

Nowadays, at least in Poplog, there is the Poplog virtual
machine (PVM) in between, but that is still not easy for user
programs to get hold of, and in any case it is still not the sort of
thing you want.

The lack of something like a well defined internal parse tree
structure for Pop-11 is both a great weakness and a great strength.

It is a weakness because it limits the ability of programs to
analyse Pop-11 programs and makes debugging aids harder to build. It
also makes macros harder to write: you have to read in a "flat"
sequence of text items, then replace it on the input list with a new
sequence, instead of being able to reorganise a parse tree (though
pattern matching utilities of the sort described in your letter can
help).

It (lack of a defined internal form) is a strength because Pop-11
(unlike older versions of Pop) provides an alternative to writing
macros. I.e. you can define a syntactic extension by creating a
syntax word that reads in text and then plants instructions for the
PVM. You are not restricted to extensions that map onto what the
Pop-11 compiler can handle, as you would be with macros. A syntax
word can plant any code that makes sense for the PVM, which provides
a richer range of possibilities. Just to illustrate, the PVM can
support ML and Prolog as well as Pop-11 and Common Lisp. PVM
instructions get compiled to machine code. Could a compiled ML or
Prolog be defined using Lisp macros?

The lack of a defined internal form enhances extendability: if there
were a defined internal form you could not create an extension that
did not map onto that form. (I am assuming that any internal form that
is "high level" enough to be useful in the way you want is going to
be more restrictive than the PVM. I don't know how to prove this!)

It may be possible to have some sort of compromise: a core language
with a defined internal form, with extensions permitted that don't
have to map onto it. I have no idea how feasible this would be.

(From me)


> > But most designers of programming languages don't think about human
> > cognitive processes.
>

(From Richard)


> You should read C.J.Date's comments on SQL...

Perhaps you should post a reference and a summary???

Aaron

Scott Simpson

unread,
Sep 17, 1990, 10:05:10 AM9/17/90
to
In article <PCG.90Se...@odin.cs.aber.ac.uk> p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
>Anyone doing floating point arithmetic on *any* machine, IEEE standard
>or Cray, has to be very very wary of assuming it is the same as
>arithmetic on reals, and yet a lot of people do (and then complain that
>two computers with different floating representations print utterly
>different results to their programs!).

Real programmers don't use real numbers. Donald Knuth did all the
arithmetic in TeX in fixed point numbers he called FIXes, which are
numbers scaled by 2^20 (i.e., he uses the lower 20 bits for the
fractional part). This gives the same results on all (32 bit or
greater) machines but you still must be wary with your arithmetic.
(Come to think of it, maybe he did it in scaled points (which are
scaled by 2^16). I don't have the source here. I'll have to check. My point is
the same though.)
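
A sketch of the idea in Lisp, assuming the 2^16 scaled-point
representation (the names are mine):

(defconstant unity 65536)   ; 1.0 in scaled points (2^16)

(defun sp* (a b)
  ;; fixed-point multiply: exact integer product, then rescale
  (truncate (* a b) unity))

(sp* (* 3 unity) (floor unity 2))   ; 3.0 * 0.5 => 98304, i.e. 1.5
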
Scott Simpson TRW sc...@coyote.trw.com

Eric C. Olson

unread,
Sep 17, 1990, 12:53:05 PM9/17/90
to
One way of evaluating the "virtues" of a language, is to compare their
expressiveness. For example, someone already mentioned that an infix
notation parser can be readily added to Lisp. I mentioned previously
that for rule-based expert systems, a lhs/rhs parser is also a simple
extension to Lisp. Making a similar system in an Algol language is
more complex. In order for the rules to be expressed in the native
language, the generated "meta" level code must be passed through the
compiler. This may require system calls, or an inferior process, or
that the compiler be embedded into the program. A good working example
of this is C++, which (typically) is translated into C code.

However there are significant limitations on what can be done in C++.
For example, suppose you need to convolve an image as quickly as
possible, and that the convolution kernel is not known at compile
time. In Lisp, one can easily write code to make a function that applies
a specific convolution kernel to an image. The generated code only
contains a single loop and a bunch of conditional statements for
boundaries, a sum, and a bunch of multiplications. Almost
any modern Lisp compiler can generate efficient code for this. In
addition, special cases can be handled. In particular, kernels with
zero values and ones can be optimized.
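
For the curious, a minimal sketch of the idea (1-D case, boundaries
ignored, all names mine): build the specialized body for a known
kernel, dropping zero coefficients and multiplications by 1.0, and hand
it to COMPILE at run time.

(defun make-convolver (kernel)   ; kernel: a vector of single-floats
  (compile nil
    `(lambda (in out)
       (declare (type (simple-array single-float (*)) in out))
       (loop for i from 0 below (- (length in) ,(length kernel))
             do (setf (aref out i)
                      (+ 0.0   ; keeps the sum a float even if all taps vanish
                         ,@(loop for c across kernel
                                 for j from 0
                                 unless (zerop c)        ; skip zero taps
                                 collect (if (= c 1.0)   ; skip unit multiplies
                                             `(aref in (+ i ,j))
                                             `(* ,c (aref in (+ i ,j)))))))))))

;; e.g. (funcall (make-convolver #(0.25 0.5 0.25)) data result)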

My impression is that writing a similar piece of code in Algol
languages would be needlessly complicated (although doable).

I've also seen Lisp implementations in C, and C implementations in
Lisp. Although I've seen more Lisp implementations in C, I rather
enjoyed using C in Lisp -- since it caught all my illegal pointers
during program development and safely returned me to the Lisp debugger.

Anyway, I've yet to find an application that can be easily implemented
in an Algol language that cannot be readily implemented in Lisp.
However, the above examples are easily implemented in Lisp, but, I
think, are not readily implemented in Algol languages.

But, hey, someone tell me I'm wrong. I'll listen.
Eric

Eric
eri...@ssl.berkeley.edu

Jeff Dalton

unread,
Sep 17, 1990, 3:02:33 PM9/17/90
to
In article <15...@yunexus.YorkU.CA> o...@yunexus.yorku.ca (Ozan Yigit) writes:
>In article <34...@skye.ed.ac.uk> je...@aiai.UUCP (Jeff Dalton) writes:

>>... It's true that different people will prefer different
>>syntaxes and that we can't say they're wrong to do so. However, we
>>shouldn't go on to conclude that all views on the virtues or otherwise
>>of a syntax are equally valid.

>It follows therefore that one should try to substantiate some or all of the
>claims regarding the effectiveness and benefits of a syntax, such as that
>of lisp, instead of just presenting opinions.

What follows from what I said is that people should find out how good
Lisp programmers use the language before deciding that it has an
inherently losing syntax.

>I have seen studies on "natural artificial languages" [...] but I don't recall
>seeing a study that has a definitive word on overwhelming benefits of one
>syntax over another. If you know of any that substantiate various claims
>[so far made] about lisp syntax, I would be very interested in it.

I thought I was reasonably clear about what I had in mind when making
the statement you quote above, namely such things as the fact that
some people decide against Lisp syntax when they haven't used a good
editor or haven't learned some of the more effective techniques for
understanding Lisp.

The point I was making above was, more or less, just that informed
opinion should carry more weight than uninformed opinion. This has
rather little to do with definitive words on the overwhelming benefits
of one syntax over another.

I suppose, however, I have claimed that Lisp's syntax is simpler than
some of the alternatives. That seems to me rather self-evident. In
any case, I don't intend to argue about it.

I have also claimed that Lisp's quoting rules needn't be more
confusing than those of other languages such as (for example) Basic.
I don't expect everyone to agree with me on this, but it's based
on the observation that quotes are needed in the same cases, namely
when the external representation of some data object might otherwise
be mistaken for an expression. This suggests that it should be
possible to explain (enough of) Lisp's quotation rules in a way that
is not more confusing. If you want to say this is "just opinion", I
suppose we'll just have to disagree.
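
A minimal illustration of the parallel (the quote marks off data that
would otherwise read as program, much as string quotes do elsewhere):

(list 'a 'b)   ; the symbols A and B themselves
(list a b)     ; the values of the variables A and B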

>>You can get *some* of the benefits, but by sacrificing some
>>of the others. Lisp occupies something like a local maximum in
>>"benefit space".
>
>Really. QED is it?

That's right, at least until someone provides a counterexample :-)

Steve Knight provided one, but of the wrong sort. He showed that
one of the benefits of Lisp syntax -- extensibility -- could be
obtained with another syntax. But it was done by giving up some
of the simplicity of the Lisp approach.

I am quite happy for someone to decide that the syntax of some other
language is better than that of Lisp, or that they prefer to give up
some of the simplicity (or whatever other virtues might be claimed).
Indeed, I think these are questions on which reasonable people can
reasonably disagree. It's a mistake, in my view, to suppose that
some sort of study will give us the definitive answers.

However, when people start saying that Lisp's syntax is a cost
rather than a benefit, with implication that everyone ought to
see it that way, I think it makes sense to point out that some
people *don't* see it that way, that they don't feel they are
tolerating the syntax in order to get extensibility (or whatever),
that they aren't trying to make life easier for machines at
their own expense, and so on. Any definitive answer about
Lisp syntax has to take these views into account.

-- Jeff

Jeffrey Jacobs

unread,
Sep 17, 1990, 4:20:07 PM9/17/90
to

> old days (Barry Margolin learned LISP in 1980)

Early '70s; UCI LISP and BBN(/INTER)LISP both had strings. But implementation
was a bit different than today.

Jeffrey M. Jacobs

Jeffrey Jacobs

unread,
Sep 17, 1990, 4:24:34 PM9/17/90
to

> closing brackets

Back in the good ol' days, when we had to slog through 10 feet of snow
to get to our glass TTYs, we had another level of parentheses, consisting
of [,] and <esc>. "]" closed off all open "(" or an open "["; <esc>
closed off all open "(" and/or "[".

David Vinayak Wallace

unread,
Sep 17, 1990, 7:48:59 PM9/17/90
to
Why is it good to make a distinction between "syntax words" and
"function names?" Sure you need to understand the semantics of QUOTE
before you use it, but then again you need to understand the semantics
of PLUS (as has been ably pointed out) before you use it, too.

Personally, the thing that drives me up the wall about non-lispy
languages like C is the artificial distinction between "expressions"
and "statements." Apart from making it harder for me to think about
my code, they don't let me tell my compiler various things it ought to
know (by, for instance, requiring that I make lots of assignments).

lawrence.g.mayka

unread,
Sep 17, 1990, 8:21:37 PM9/17/90
to
In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
> The real challenge here is that we want some syntax that says, apply
> this operator symbol to these arguments and return these value_s_. Even
> lisp syntax does not really allow us to easily produce multiple values.

Common Lisp supports multiple value return with a fairly simple syntax.
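
For instance (TRUNCATE itself returns the quotient and the remainder as
two values):

(multiple-value-bind (quot rem)
    (truncate 10 3)
  (list quot rem))   ; => (3 1)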


Lawrence G. Mayka
AT&T Bell Laboratories
l...@iexist.att.com

Standard disclaimer.

lawrence.g.mayka

unread,
Sep 17, 1990, 9:21:30 PM9/17/90
to
In article <20...@well.sf.ca.us>, jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
> the "fix" would be propagated even to pending calls. E.g. if you
> had
> (DEFUN FOO (X Y)
> expr1
> expr2
> ...)
>
> and expr1 invoked FOO recursively, you could break the execution,
> change expr2 (or TRACE it, or BREAK it or...), all of the pending
> invocations on the stack were affected. (You can't do this with
> compiled code).

You can if you use the Set Breakpoint command on a Symbolics
workstation. TRACE :WHEREIN and ADVISE-WITHIN are also applicable if
'expr2' is a function call. Or if you mean to change FOO permanently,
recompile it and then reinvoke it at its earliest occurrence on the
stack.

> However, if compilation is the primary development strategy (which it
> is with CL), then the LISP syntax is not particularly useful. Modern

The great power of Common Lisp's unique syntax extension facilities
(a.k.a. its macro capability) depends on the language's regular syntax.

lawrence.g.mayka

unread,
Sep 17, 1990, 9:46:09 PM9/17/90
to
In article <MASINTER.90...@origami.parc.xerox.com>, masi...@parc.xerox.com (Larry Masinter) writes:
> about how it does nesting. I think a lot of people find Lisp hard to
> read and maintain, and that the difficulty is intrinsic, but I don't
> think there is as strong a correlation with editor technology as you
> imply.

Most complaints I've heard about Lisp syntax, from both novices and
regular users, boil down to the claim that the repetitive,
positionally dependent syntax of most Lisp constructs has insufficient
redundancy for easy recognition by the human eye. Repetition of
parentheses could be reduced by defining (e.g., via the macro
character facility) a character pair such as {} to be synonymous with
(). Positional dependency could be reduced simply by making greater
use of keywords (e.g., defining macros synonymous with common
constructs but taking keyword arguments instead of positional ones).
The difficulty some people have in reading Lisp is hence not intrinsic
to its syntax, but rather an accident of common practice.
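
The {} synonym, for example, takes two lines with the standard
macro-character facility (a sketch):

(set-macro-character #\{ (get-macro-character #\())
(set-macro-character #\} (get-macro-character #\)))

'{a {b c}}   ; now reads as (A (B C))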

lawrence.g.mayka

unread,
Sep 17, 1990, 9:53:17 PM9/17/90
to
In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
> is still a problem. Nobody constrains you to have only positive numbers
> as the operands to n-ary fixed point +, so that
>
> (+ -10 +32767 +10)
>
> is not well defined on a 16 bit machine, unless you do modular
> arithmetic throughout.

Common Lisp integers have arbitrary precision (bignums), so Lisp
programmers don't need to worry about 16-bit machine words vs. 32-bit
machine words. Rational numbers in Lisp are likewise exact.

Richard A. O'Keefe

unread,
Sep 18, 1990, 2:46:13 AM9/18/90
to
In article <34...@skye.ed.ac.uk>, je...@aiai.ed.ac.uk (Jeff Dalton) writes:
> In article <34...@syma.sussex.ac.uk> aar...@syma.sussex.ac.uk (Aaron
> Sloman) writes:

> >It wasn't till I read this remark of Jeff's that I realised that one
> >reason I don't like Lisp is that, apart from "(" and ")", Lisp
> >doesn't help one to distinguish syntax words and function names.

> Actually, Pop-11 doesn't do much along those lines either.
> It's not like it uses a different font for them (cf Algol).

Come come. All that's needed for that is a suitable vgrind(1) description.
Surely there must be one available already; Pop-11 syntax is close enough
to Pascal/Modula syntax for it to work.

(I must admit that Interlisp-D's editors did a nice job of displaying
functions with "keywords" and comments very clearly distinguished.)

I've been marking some 2nd year Pascal assignments recently, and have
come to the conclusion that Lisp programmers probably have _less_ trouble
with parentheses. Pascal's precedence rules are so counterintuitive that
people seem to throw in lots of parentheses just because they're never
quite sure what Pascal will do to them if they're left out. At least in
Lisp the rules are _simple_.

Richard A. O'Keefe

unread,
Sep 18, 1990, 8:43:53 AM9/18/90
to
In article <34...@syma.sussex.ac.uk>, aar...@syma.sussex.ac.uk (Aaron Sloman) writes:
> Prolog has yet another kind of syntactic poverty, inherited from its
> dependence on a logical interpretation. (E.g. textual order has very
> different procedural meanings depending on context: within a rule
> concatenation means something like "and then", whereas between rules
> with the same predicate it means something like "and if that fails
> then try...")

Not to defend Prolog syntax overmuch, but I had to stop and think for
several minutes before I understood what Aaron Sloman was getting at
here. It had never occurred to me that

p :-        % p <if>
    a,      % a <and-then>
    b.      % b.

and

p(a).       % either p(a)
p(b).       % or-else p(b)

were both "concatenation". For sequencing of clauses, yes it is plain
juxtaposition. But and-then is an explicit operator ",", not mere
concatenation. I have always experienced these two constructions as
utterly different, and it is very hard for me to see them as similar.
If we wrote
(((p)      ; p if
  (a)      ; a and-then
  (b)))    ; b
and
(((p a))   ; either p(a)
 ((p b)))  ; or-else p(b)
_then_ we would be using juxtaposition of lists for both.

> For the record, I should also say that I don't think there's much
> difference in readability between the following:

> a(b(c(d, e), f(g, h))) [Pop-11 and Pascal, etc]

> (a (b (c d e) (f g h))) [Lisp]

I was about to agree, and then I suddenly realised that I find the
second distinctly easier to read. Hang on a minute, I'm a _Prolog_
programmer, what's going on here?

Just for fun, here's a Pop-2 statement that does the same function calls:

d, e.c, g, h.f.b.a;

Not to knock Pop. If Scheme is the Messiah, Pop is John the Baptist.
[It's a _joke_, Joyce.]

Richard A. O'Keefe

unread,
Sep 18, 1990, 9:10:57 AM9/18/90
to
In article <34...@syma.sussex.ac.uk>, aar...@syma.sussex.ac.uk (Aaron Sloman) writes:
> It (lack of a defined internal form) is a strength because Pop-11
> (unlike older versions of Pop) provides an alternative to writing
> macros. I.e. you can define a syntactic extension by creating a
> syntax word that reads in text and then plants instructions for the
> PVM.

Was it Madcap that used to do that?

> Could a compiled ML or
> Prolog be defined using Lisp macros?

Yes. All you have to do is provide appropriate syntactic forms for
the macros to expand into. In Interlisp-D, for example, it would be
quite straightforward. It's not the *top* level that's special here,
it's the *bottom* level.

> The lack of a defined internal form enhances extendability: if there
> were a defined internal form you could not create an extension that
> did not map onto that form.

But if every PVM instruction were available as a construct that could
appear in text, there would be no such extension possible.

For example, in many C compilers, there is no assembly code program
which cannot be generated by that compiler, because there is an asm()
construct which can be used to generate the instruction of your choice.
Nearer "home" (in the sense that it is more like the way suitable Lisp
compilers work) the Sun C compilers use ".il" files which let you map
what looks like a function call to the instructions of your choice, and
peephole optimisation happens after such expansion. It's the _bottom_
level that matters here.


[Aaron Sloman]


> But most designers of programming languages don't think about human
> cognitive processes.

[Richard O'Keefe]


> You should read C.J.Date's comments on SQL...

I had in mind his "A Guide to the SQL Standard".
I think the mildest thing he has ever said is
"it has to be said that the SQL standard is not a particularly _good_ one".
Next mildest: "SQL in its present form is extremely _un_orthogonal and is
chock full of apparently arbitrary restrictions, exceptions, and special
rules."
And the relevant quote is in the Guide, which I haven't here, so from
memory "There are a lot of principles known for designing programming
languages. Unfortunately the design of SQL respects none of them."
Of course the design principles he refers to: consistency, orthogonality,
and so on are basically cognitive principles. In "Relational Database
writings 1985-1989" chapter 13 is "EXISTS is not 'Exists'! (Some Logical
Flaws in SQL)" whose abstract is (and I quote literally) "SQL is not sound."
His point is that the logical connectives of SQL do not behave like the
logical connectives of logic. (Neither do and-then and or-else in Prolog,
but they are closer than SQL.) Again: this is a cognitive point; perhaps
the operations in SQL can be justified, but can introducing such a clash
between what the user expects (because of the names) and what the user
_gets_ be justified?

To return to Lisp: now that I have read part of CLtL-2, I have become
convinced that Scheme is the Lisp for ->me<-. Why? Because I think I
understand it, and I'm quite sure that I don't understand CLtL-2.

Cliff Click

unread,
Sep 18, 1990, 1:48:51 PM9/18/90
to
For all those folks involved in the Lisp syntax is good/bad wars...

I've done some Scheme programming, darn little ML programming and scads
of C/Fortran/Pascal/etc programming. I find ML syntax to be incredibly
unreadable, while Scheme & C syntax isn't that bad. Is this a function of:

1) Not enough experience with ML,

2) the difficulty of having a type system in a functional language,

3) lousy design of ML syntax or

4) none of the above?


Gratefully awaiting your enlightened responses,
Cliff Click
--
Cliff Click
cli...@owlnet.rice.edu

Jeffrey Jacobs

unread,
Sep 18, 1990, 5:36:03 PM9/18/90
to

Larry Masinter writes:

> *Way* back in the old days, Lisp was batch. Punch cards & JCL.
> Although the first generation (Lisp 1.5 on 7040 and its ilk) may have
> been primarily interpreted, most of the second generation Lisps were
> compiled (I think I remember that in Stanford's Lisp/360 the DEFINE
> function invoked the compiler.)

I'm not sure which "second generation LISPs" Larry refers to. Having
fortunately been spared any "batch LISPs" other than briefly and
mercifully SDC's, I can't comment. Certainly, Maclisp, BBN/Inter-, UCI,
Stanford 1.6 meet my definition of being primarily interpreted during
the development process. (I can't imagine anything more horrible than
programming LISP in a batch environment ;-)

>BBN Lisp had strings and arrays hash tables and a few other data types
>in the early 70's, and I don't think it was unique. Lisp 1.5 only had
>a few datatypes, though.

UCI and 1.6 also had similar data types. What I meant to point out was
that the conceptual focus was on CONS cells and ATOMS as the *primary*
data types of interest. As opposed to modern Common LISPs, where the
term "atom" has disappeared, people are deathly afraid of using CONS
cells, and the focus seems to be primarily on sequences, hash-tables,
vectors, etc. (This reached its peak in the SPICE literature where
users were warned to 'avoid the use of CONS whenever possible').

>if you deleted, added, or renamed an argument, you might start
>executing code where the variable bindings of the old stack frame
>didn't match what the new code expected.

True, but protecting the programmer from dumb mistakes is not something
for which LISP is famous. Even in a compiled environment, changing
argument lists still causes problems. And in the interpreted mode,
you could often fix the problem and continue, as opposed to starting over.


>[As a side note, many learning LISP programmers frequently do
> encounter self-modifying code and are mystified by it, e.g.,
> (let ((var '(a b c)))
> ...
> (nconc var value))
> ]

This isn't self modifying code, it's destructive modification of a data
structure (and let's avoid a circular argument about "var" being "code";
the learning programmer encounters this as a data example, not an
example of code).

As an aside, destructive modification of structures, or even
the building of any structure that is not a fairly simple list,
seems to be something that programmers either don't learn or greatly
fear. I know one former member of the committee who confessed
to never having used RPLACA or its equivalent. See also the
recent thread on building lists and TCONC...

>As for dynamic error correction, insertion of breakpoints and trace
>points, however, the major impediment is not that Lisp is compiled:
>the major problem is the presence of *MACROS* in the language.

Sorry, this one doesn't fly. The information needed for dynamic correction,
tracing, etc. is simply gone. You can't issue a break, followed
by a "TRACE FOO IN FUN" where "FUN" is a compiled function, continue,
and have pending calls to FOO in previous invocations of FUN traced.
(The Medley example simply demonstrates this).

Nor can I buy the "MACROS" argument. Most applications programmers (as
opposed to "language implementors"), make very little use of macros. Nor
do they have much reason to do source level debugging of macros supplied
with their particular implementation. IOW, *most* of their debugging is
done on functions, not macros. And if they do use macros, they are usually
fairly simple and straightforward.

>I don't really know the demographics, but my impression was that the
>popular Lisp implementations of the '70s were MacLisp, Lisp 1.6, Franz
>Lisp, Interlisp, and that UCI Lisp wasn't so widely used. Maybe the
>distinction is between the research vs. the student community.

The distinction is indeed the "student community". UCI LISP was,
according to my records and correspondence, the most widely used system
for *teaching* LISP. In fact, it was used for classes at Stanford around '75
or '76, and was used at CMU into the '80s (CMU rewrote it). I don't think
it ever was adopted by SAIL, although I'm sure they incorporated many of the
enhancements. Certainly BBN/Interlisp and MacLisp were the premier
versions used in the research community, followed by Franz later in the
decade. (I'm not real clear on what happened with 1.6; UCI was a
melding of Stanford 1.6 and BBN).

>I'm not sure about the REDUCE bit, although it doesn't ring true (my
>'standard lisp' history is a bit rusty)

Trust me; I was providing support to Utah while I was at ISI in 1974!

> but you certainly shouldn't
>confuse MLISP (which is the meta-lisp used in McCarthy's books) with
>CLISP (which is the DWIM-based support for infix notation that was
>added to Interlisp in the mid-70s.)

Mea culpa, it was indeed CLISP! This was available at least as early
as '74. My earlier BBN/Inter manuals are in storage, but I think it
might even go back a little earlier.

>I'm not sure what editor Griss
>and Hearn used for REDUCE, but I'd guess it was some variant of TECO.
>Remember that in the early 70's, people still actually used teletype
>machines and paper-based terminals to talk to computers; neither text
>or structure based editors actually let you *see* anything unless you
>asked it to print it out for you.

I remember, literally. I used to have a TI Silentwriter next to two terminals
in my office...

>The ability to write programs that create or analyze other programs
>without resorting to extensive string manipulation or pattern matching
>has always been a strength of Lisp, whether or not you can do so on
>the fly with programs that are running on the stack.

Quite true! And, I might add, the simple syntax of LISP makes it much
easier to write code that generates LISP code than is the case with other
languages.

>I guess C doesn't have a 'Modern block structured syntax' (I certainly
>get cross-eyed looking at some of the constructs that cross my
>screen); perhaps it is a relic from the past, too.

Heck, I get *serious* eye strain trying to read most C code. Nobody
on this planet will ever accuse me of being a C-booster. (And it
certainly qualifies as a "relic"; the biggest influence on C was
probably the difficulty in pushing the keys on the old ASR teletypes!)

>> Personally, I think the whole reason LISP machines were created
>> was so that people could run EMACS :-)

>I believe EMACS predated LISP machines by some margin

Hey, my point exactly. More (?) than one EMACS user on a PDP-10 brought
it to its knees, so the LISP machine had to be invented :-)

>I think a lot of people find Lisp hard to
>read and maintain, and that the difficulty is intrinsic, but I don't
>think there is as strong a correlation with editor technology as you
>imply.

I didn't mean to imply any such correlation; in fact, I'm quite happy to
use ?MACS. But I'd also like to have a structure editor, and DWIM, and
a lot of other things that are no longer available. More importantly,
I'd like to have seen more of the underlying concepts and philosophy
maintained!

>(This may sound a little like a flame, but I don't mean it to be. Just
>comparing my recollection of History to Mr. Jacobs.)

No flame taken...

Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 7670...@COMPUSERVE.COM

"I hope that when I get old I won't jes sit around talking about glory days,
but I probably will" - Bruce Springsteen

Thomas M. Breuel

unread,
Sep 18, 1990, 7:41:18 PM9/18/90
to
In article <1990Sep18....@rice.edu>, cli...@libya.rice.edu
(Cliff Click) writes:
|> I've done some Scheme programming, darn little ML programming and scads
|> of C/Fortran/Pascal/etc programming. I find ML syntax to be incredibly
|> unreadable, while Scheme & C syntax isn't that bad.

ML syntax is a little tricky, and quite different from C or Algol
syntax. However, it is very convenient for functional programming,
in particular if you use lots of currying.

Tim Moore

unread,
Sep 18, 1990, 8:08:29 PM9/18/90
to
In article <20...@well.sf.ca.us> jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
>Larry Masinter writes:
>>...

>True, but protecting the programmer from dumb mistakes is not something
>for which LISP is famous. Even in a compiled environment, changing
>argument lists still causes problems. And in the interpreted mode,
>you could often fix the problem and continue, as opposed to starting over.
>

I'd say that Lisp, with its copious runtime typechecking, does a
pretty good job of protecting the user from dumb mistakes, compared
with other languages like C.
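
For instance, something like

(defun average (xs)
  (/ (reduce #'+ xs) (length xs)))

(average '(1 2 "three"))

signals an error at the bad addition, with the offending object in
hand (the exact message varies by implementation). The C analogue is
happy to add a pointer to an int and march on.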

>
>>[As a side note, many learning LISP programmers frequently do
>> encounter self-modifying code and are mystified by it, e.g.,
>> (let ((var '(a b c)))
>> ...
>> (nconc var value))
>> ]
>
>This isn't self modifying code, it's destructive modification of a data
>structure (and let's avoid a circular argument about "var" being "code";
>the learning programmer encounters this as a data example, not an
>example of code).
>

It is self-modifying code in the sense that it changes the behavior of
the form from one evaluation to the next by tweaking a structure that
is not obviously part of the global state of the program. It's clearly
a hack that has been made obsolete by closures. It's not legal Common
Lisp code, because the constant may be in read-only memory.
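
To see the effect, wrap the example in a function:

(defun sticky ()
  (let ((var '(a b c)))
    (nconc var (list 'foo))))

In an implementation that doesn't protect constants, the first call
returns (A B C FOO), the second (A B C FOO FOO), and so on: the
"constant" remembers every previous call.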

>As an aside, destructive modification of structures, or even
>the building of any structure that is not a fairly simple list,
>seems to be something that either programmers don't learn or fear
>greatly. I know one former member of the committee that confessed
>to never having used RPLACA or its equivalent. See also the
>recent thread on building lists and TCONC...
>

A reason for this may be that one of the most influential Lisp books
of the 80's (IMHO), Abelson and Sussman's "Structure and
Interpretation of Computer Programs", does a pretty good job of
discouraging the pitfalls of destructive modification. Also, avoiding
destructive modification is sound engineering practice. Far too often
have I stumbled over someone's clever destructive hack when trying to
modify code.

I think many Lisp programmers go through a phase where they discover
the forbidden pleasures of destructive modification and go overboard
using it. I myself am starting to grow out of it. It causes too much
grief in the long run.

Also, in modern Lisps that use a generational garbage collector,
destructive modification of a structure can be as expensive as, if not
more expensive than, allocating a fresh structure.

>>As for dynamic error correction, insertion of breakpoints and trace
>>points, however, the major impediment is not that Lisp is compiled:
>>the major problem is the presence of *MACROS* in the language.

>...

>Nor can I buy the "MACROS" argument. Most applications programmers(as
>opposed to "language implementors"), make very little use of macros. Nor
>do they have much reason to do source level debugging of macros supplied
>with their particular implementation. IOW, *most* of their debugging is
>done on functions, not macros. And if they do use macros, they are usually
>fairly simple and straightforward.
>

It's not the macros that the application programmers write themselves
that cause problems; at least the programmer knows what the expansions
of those macros look like. Rather, it's the macros that are a part of
the language that cause trouble. Consider how often Common Lisp
programmers use cond, setf, dotimes, dolist, do, and do*. The
expansions of these macros can be pretty hairy. Maintaining the
correspondence between the internal representation of a function and
the form that produced it is not trivial.
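
For example, even

(macroexpand '(dotimes (i n) (f i)))

comes back as something along these lines (every implementation
differs in the details):

(BLOCK NIL
  (LET ((I 0))
    (TAGBODY
     #:LOOP
      (WHEN (>= I N) (RETURN NIL))
      (F I)
      (SETQ I (1+ I))
      (GO #:LOOP))))

and it's the TAGBODY, not the DOTIMES you wrote, that the debugger
shows you.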

>>I'm not sure about the REDUCE bit, although it doesn't ring true (my
>>'standard lisp' history is a bit rusty)
>
>Trust me; I was providing support to Utah while I was at ISI in 1974!

So that explains the old copy of the UCI Lisp manual in the PASS group
library...

>>The ability to write programs that create or analyze other programs
>>without resorting to extensive string manipulation or pattern matching
>>has always been a strength of Lisp, whether or not you can do so on
>>the fly with programs that are running on the stack.
>
>Quite true! And, I might add, the simple syntax of LISP makes it much
>easier to write code that generates LISP code than is the case with other
>languages.

I third this point. Many problems can be solved cleverly and
efficiently by generating code for the solution instead of solving the
problem directly. Pattern matchers come to mind. To return briefly to
the original subject of this thread, I don't think that the trend
towards an opaque function representation has affected this capability
at all.
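
A cheap illustration: instead of interpreting a pattern on every
call, compile it once (a hypothetical helper, with ? as the wildcard):

(defun make-matcher (pattern)
  ;; Build the *source* of a predicate for PATTERN, then hand it
  ;; to COMPILE to get a real function.
  (compile nil
           `(lambda (input)
              (and (= (length input) ,(length pattern))
                   ,@(loop for elt in pattern
                           for i from 0
                           unless (eq elt '?)
                             collect `(eql (nth ,i input) ',elt))))))

(funcall (make-matcher '(on ? table)) '(on block table))   ; => T

All the pattern-walking happens once, when the matcher is built; what
runs afterwards is straight-line code.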

>
>Jeffrey M. Jacobs
>ConsArt Systems Inc, Technology & Management Consulting
>P.O. Box 3016, Manhattan Beach, CA 90266
>voice: (213)376-3802, E-Mail: 7670...@COMPUSERVE.COM
>
>"I hope that when I get old I won't jes sit around talking about glory days,
>but I probably will" - Bruce Springsteen

Tim Moore mo...@cs.utah.edu {bellcore,hplabs}!utah-cs!moore
"Ah, youth. Ah, statute of limitations."
-John Waters

Jamie Zawinski

unread,
Sep 19, 1990, 4:10:08 AM9/19/90
to
In article <20...@well.sf.ca.us> jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
>
> Most applications programmers (as opposed to "language implementors"), make
> very little use of macros. Nor do they have much reason to do source level
> debugging of macros supplied with their particular implementation. IOW,
> *most* of their debugging is done on functions, not macros. And if they do
> use macros, they are usually fairly simple and straightforward.

I disagree; macros are Common Lisp's only serious hook into the compiler, and
I use them all the time when writing code that needs to be efficient. In a
pagination system I worked on, we used macros to turn a structure describing a
finite-state machine into a parser for it. The strength of the thing was the
fact that, once all of the macroexpansion was done, there were a lot of
clauses of the form (= 4 4) which the compiler optimized away. No need to
learn the meta-language that things like lex use. And what other language
lets you define new control structures?

(And though I'm a "language implementor" now, I wasn't then. :-))
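
Something in the same spirit, shrunk to toy size (this is not the
pagination code, just the flavor of it):

(defmacro define-fsm (name start accepting &rest transitions)
  ;; Each transition is (state char next-state).  The table turns
  ;; into nested CASEs, which the compiler can optimize like any
  ;; hand-written dispatch code.
  `(defun ,name (string)
     (let ((state ',start))
       (loop for ch across string
             do (setf state
                      (case state
                        ,@(loop for st in (remove-duplicates
                                           (mapcar #'first transitions))
                                collect
                                `(,st (case ch
                                        ,@(loop for (s c n) in transitions
                                                when (eq s st)
                                                  collect `(,c ',n))))))))
       (and (member state ',accepting) t))))

(define-fsm even-a-p start (start)
  (start #\a odd)
  (odd   #\a start))

(even-a-p "aaaa")   ; => T
(even-a-p "aaa")    ; => NIL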

Saying that most macros are fairly simple is like saying that most functions
aren't recursive. It's probably true, but it's not really that meaningful a
generalization.

-- Jamie

Piercarlo Grandi

unread,
Sep 20, 1990, 9:37:57 AM9/20/90
to
On 18 Sep 90 00:21:37 GMT, l...@cbnewsc.att.com (lawrence.g.mayka) said:

lgm> In article <PCG.90Se...@odin.cs.aber.ac.uk>,
lgm> p...@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> The real challenge here is that we want some syntax that says, apply
pcg> this operator symbol to these arguments and return these value_s_. Even
pcg> lisp syntax does not really allow us to easily produce multiple values.

lgm> Common Lisp supports multiple value return with a fairly simple syntax.

It is fairly ad hoc, ugly and "inefficient". It essentially implies that
you are assigning a list to a list. It is also slightly inconsistent in
flavour with the rest of the language.

Conventions like that used by Aleph (no return values, only in and out
parameters), or Forth (take N from the stack, return M from the stack)
seem to be much more consistent...

You have a point that it is simple, though. Still, it is a bit of a fixup
job (IMNHO of course).

Thomas M. Breuel

unread,
Sep 20, 1990, 5:03:11 PM9/20/90
to
In article <12...@ists.ists.ca>, mi...@ists.ists.ca (Mike Clarkson) writes:

|> moore%cdr.ut...@cs.utah.edu (Tim Moore) writes:
|> >It's not the macros that the application programmers write themselves
|> >that cause problems [...], it's the macros that are a part of
|> >the language that cause trouble.
|>
|> When I look at the kind of code environment I write Common Lisp code in
|> these days, I feel that most application programmers make very heavy use
|> of macro packages, some of which are getting very large. How many files
|> that you have start with
|>
|> (require 'pcl)
|> (require 'clx)
|> (require 'loop)
|>
|> etc. This doesn't invalidate Jeff Jacobs or Tim Moore's arguments, but it
|> leads me to an observation. As we go on as lisp programmers (in Common Lisp),
|> we are building higher and higher levels of abstraction, relying on
|> larger and larger programming bases above the underlying language.
|> [...]
|> I dearly love Scheme as a language, but because of the lack of
|> standardization [of macros] it's difficult to find the abstracting packages
|> on which we increasingly rely. I feel that the approach of encouraging
|> implementations to continue to experiment with different solutions has
|> had its merits (first class continuations as an example), but the time
|> has come to solidify the language and build upwards on the very real
|> accomplishments of the base language.

The use of macros to implement PCL, CLX, and LOOP is
highly questionable:

* In a language like Scheme (as opposed to CommonLisp)
that provides efficient implementations of higher order
constructs with natural syntax, there is little need or
excuse for something as horrible as LOOP in the first
place.

* I see no reason for an X interface to make heavy use of
macros (everything can be implemented nicely as functions
or integrable functions).

* It's perhaps OK to prototype an object system as a macro
package, but for production use, an object system (or any
equally complex extension to the language) should be part
of the compiler, for reasons of efficiency and
debuggability.

Altogether, I think the decision not to standardise macros
in Scheme has had the beneficial side effect of
discouraging their use (maybe that was one of the intents
in the first place). On the other hand, I doubt that the
differences between DEFINE-MACRO in different implementations
of Scheme are so significant that it is a serious obstacle for
writing a portable X window system interface, portable
iteration constructs, or portable prototypes of something
like an object system (even PCL doesn't run out of the box
on every implementation of CommonLisp).

Much more important to my taste than a standard macro
facility (there exists a de-facto standard anyway), would
be guidelines for a standard foreign function interface
to Scheme (at least for numerical code), and guidelines
for optimization features such as MAKE-MONOTYPE-VECTOR.
I say "guidelines" because I realize that such extensions
should not really be part of the language (they may not
even be implementable on some systems), but that there
should be a common consensus about how extensions of
this kind should look.

Mike Clarkson

unread,
Sep 19, 1990, 10:46:13 PM9/19/90
to
In article <1990Sep18....@hellgate.utah.edu> moore%cdr.ut...@cs.utah.edu (Tim Moore) writes:
>>Nor can I buy the "MACROS" argument. Most applications programmers(as
>>opposed to "language implementors"), make very little use of macros. Nor
>>do they have much reason to do source level debugging of macros supplied
>>with their particular implementation. IOW, *most* of their debugging is
>>done on functions, not macros. And if they do use macros, they are usually
>>fairly simple and straightforward.
>>
>
>It's not the macros that the application programmers write themselves
>that cause problems; at least the programmer knows what the expansions
>of those macros look like. Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy. Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

A bit of a tangent on macros and Scheme:

When I look at the kind of code environment I write Common Lisp code in
these days, I feel that most application programmers make very heavy use
of macro packages, some of which are getting very large. How many files
that you have start with

(require 'pcl)
(require 'clx)
(require 'loop)

etc. This doesn't invalidate Jeff Jacobs or Tim Moore's arguments, but it
leads me to an observation. As we go on as lisp programmers (in Common Lisp),
we are building higher and higher levels of abstraction, relying on
larger and larger programming bases above the underlying language.

The large packages hide many of the bookkeeping details from the applications
programmer, and assuming they work as described, s/he is able to write
much denser, more compact code. Look at programming in Picasso as an example.

This is "a good thing" in the Abselson and Sussman sense, and each or
these packages I mentioned is becoming a standard of sorts, getting
incorporated back into the evolving Common Lisp standard. Contrast this
with the case in Scheme: there is no standard definition of macros or
structures or advanced loops or any of the things neccessary to build
these abstracting packages. As a result, they are difficult to build
and by definition, ex-standard.

The lack of these parts of the Scheme standard is intentional. The
current Scheme point of view is that, "The ability to alter the syntax
of the language creates numerous problems. All current implementations
of Scheme have macro facilities that solve those problems to one degree
or another, but the solutions are quite different and it isn't clear at
this time which solution is best, or indeed whether any of the solutions
are truly adequate. Rather than standardize, we are encouraging
implementations to continue to experiment with different solutions." But
in the interim, time marches on, and it is crippling the language from
development into areas that are also very important.

I dearly love Scheme as a language, but because of the lack of
standardization it's difficult to find the abstracting packages on which
we increasingly rely. I feel that the approach of encouraging
implementations to continue to experiment with different solutions has
had its merits (first class continuations as an example), but the time
has come to solidify the language and build upwards on the very real
accomplishments of the base language.


Mike.

--
Mike Clarkson mi...@ists.ists.ca
Institute for Space and Terrestrial Science uunet!attcan!ists!mike
York University, North York, Ontario, FORTRAN - just say no.
CANADA M3J 1P3 +1 (416) 736-5611

lawrence.g.mayka

unread,
Sep 20, 1990, 7:47:52 PM9/20/90
to
In article <1990Sep2...@ai.mit.edu>, t...@ai.mit.edu (Thomas M. Breuel) writes:
> * In a language like Scheme (as opposed to CommonLisp)
> that provides efficient implementations of higher order
> constructs with natural syntax, there is little need or
> excuse for something as horrible as LOOP in the first
> place.

We can talk about Dick Waters' SERIES package instead, if you prefer.
Though functional rather than iterative in style, the SERIES package
still makes heavy use of macros. Why? To perform source code
rearrangement during compilation, especially the unraveling of series
expressions into iteration to improve performance. In general, I see
the primary purpose of macros not as simple syntactic sugar but as a
means of manipulating source code "surreptitiously" at compile time
for reasons of efficiency improvement, tool applicability (e.g.,
recording the source file of a function definition so that Meta-. can
find it on request), existence in the compilation environment (e.g.,
DEFPACKAGE), etc.
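
The flavor of the transformation, boiled down to a few lines (an
illustration of the idea only, not the actual SERIES machinery):

(defmacro fused-sum (fn list-form)
  ;; Rewrite "map FN over a list and sum the results" into a
  ;; single loop at macroexpansion time; no intermediate list
  ;; is ever consed.
  (let ((x (gensym)) (acc (gensym)))
    `(let ((,acc 0))
       (dolist (,x ,list-form ,acc)
         (incf ,acc (funcall ,fn ,x))))))

(fused-sum #'1+ '(1 2 3))   ; => 9, in one pass with no garbage

A function could only compose the pieces at run time; only a macro
gets to rearrange the source before the compiler sees it.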

Andy Freeman

unread,
Sep 21, 1990, 2:59:44 PM9/21/90
to t...@ai.mit.edu
In article <1990Sep2...@ai.mit.edu> t...@ai.mit.edu writes:
>* It's perhaps OK to prototype an object system as a macro
>package, but for production use, an object system (or any
>equally complex extension to the language) should be part
>of the compiler, for reasons of efficiency and
>debuggability.

I know that I'd rather debug macros than a compiler. In addition, it
isn't necessarily true that sticking something in the compiler makes
it more efficient, especially since "in the compiler" may well mean
"some expansion that the compiler does before it starts real work".

BTW - Which compiler and which object system? Group A may control
one, but not the other; does that mean that they shouldn't have an
object system?

>Much more important to my taste than a standard macro
>facility (there exists a de-facto standard anyway), would
>be guidelines for a standard foreign function interface
>to Scheme (at least for numerical code), and guidelines
>for optimization features such as MAKE-MONOTYPE-VECTOR.

The people working on macros seem to disagree. Go for it.

-andy
--
UUCP: {arpa gateways, sun, decwrl, uunet, rutgers}!neon.stanford.edu!andy
ARPA: an...@neon.stanford.edu
BELLNET: (415) 723-3088

Bruce R. Miller

unread,
Sep 21, 1990, 6:09:56 PM9/21/90
to

In article <20...@well.sf.ca.us>, Jeffrey Jacobs writes:
>
> Larry Masinter wrote:
> >>>[As a side note, many learning LISP programmers frequently do
> >>> encounter self-modifying code and are mystified by it, e.g.,
> >>> (let ((var '(a b c)))
> >>> ...
> >>> (nconc var value))
> To which I replied:
> >>This isn't self modifying code, it's ...
> And Tim Moore wrote:
> >It is self-modifying code in the sense ...
> Tim is quite correct, and points out what I (now) assume Larry Masinter
> ...
> >>It's not legal Common Lisp code, because ...
> Correct me if I'm wrong, but it seems to me that this *is* legal and...
> In any case, it's basically poor programming style, which is found in
> every language.

That last point is RIGHT in any case.
Without digging thru CLtL to determine its legality, it seems pretty
clear that it is unclear what it SHOULD do! And that different
interpreters/compilers will make different choices. And that the
Committee probably didn't even want to specify.

Having written such things accidentally in the past (most likely on a
misguided optimization binge :>), and been, of course, severely bitten, I
was curious what Rel 8 Symbolics (on Ivory) did. Interpreted, it does
permanently modify the data. Compiled it gave an error to the effect:

Error: Attempt to RPLACD a list that is embedded in a structure and
therefore cannot be RPLACD'ed.

BRAVO! (and, to the best of my knowledge, they don't even have
read-only memory; or maybe Ivory does now?)

In any case, one could imagine a perfectly legitimate interpreter
consing the list fresh every time, giving a 3rd behavior. Before you
groan, consider that this might in fact be the most consistent behavior!
The (QUOTE (A B C)) form is (conceptually, at least) evaluated each time
the function is called! Should (QUOTE (A B C)) sometimes
return (A B C FOO)?

In any case, I suspect we've wandered off on a tangent here...

> >Rather, it's the macros that are a part of
> >the language that cause trouble. Consider how often Common Lisp

YOW! For my tastes (note that I said Taste!) the lisp macro may be the
single most dramatic and convincing feature in FAVOR of lisp!
[Not that I dont like its other features].

Some people argue that the lisp code/data equivalence is seldom used --
If they mean programs that explicitly build other programs, yes I write
relatively few of them. BUT, the power & flexibility of macros comes
from being able to write macros almost exactly like writing any other
lisp. And most lisp programmers I know, myself included, use macros
quite heavily. Even when I use a continuation style (sort of), I
usually hide it in a cozy with-foo type macro.
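
E.g., a throwaway sketch:

(defmacro with-collected ((collect) &body body)
  ;; BODY calls COLLECT to accumulate items; the whole form
  ;; returns them in the order collected.
  (let ((items (gensym)))
    `(let ((,items '()))
       (flet ((,collect (item) (push item ,items)))
         ,@body
         (nreverse ,items)))))

(with-collected (save)
  (dotimes (i 4) (save (* i i))))   ; => (0 1 4 9)

The caller never sees the accumulator or the final NREVERSE, which is
the whole point of the with- style.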

bruce.

Peter da Silva

unread,
Sep 22, 1990, 12:28:59 AM9/22/90
to
In article <20...@well.sf.ca.us> jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
> Having watched the LISP Syntax thread for a while, I thought a little
> history might be in order...

> Way back in the old days, [...] LISP was an interpreted language.

Extension languages, by and large, still *are* interpreted...

> However, if compilation is the primary development strategy (which it
> is with CL), then the LISP syntax is not particularly useful. [...]

... so the LISP syntax *remains* particularly useful in this domain. And,
remember, that's where this discussion started...
--
Peter da Silva. `-_-'
+1 713 274 5180. 'U`
pe...@ferranti.com

Aaron Sloman

unread,
Sep 21, 1990, 3:37:10 PM9/21/90
to

Lawrence G. Mayka
AT&T Bell Laboratories (l...@cbnewsc.att.com)
writes:

>
> Most complaints I've heard about Lisp syntax, from both novices and
> regular users, boil down to the claim that the repetitive,
> positionally dependent syntax of most Lisp constructs has insufficient
> redundancy for easy recognition by the human eye.

And for the system to provide compile-time help for the user who
makes mistakes.

> ....Repetition of
> parentheses could be reduced by defining (e.g., via the macro
> character facility) a character pair such as {} to be synonymous with
> (). Positional dependency could be reduced simply by making greater
> use of keywords (e.g., defining macros synonymous with common
> constructs but taking keyword arguments instead of positional ones).
> The difficulty some people have in reading Lisp is hence not intrinsic
> to its syntax, but rather an accident of common practice.
>

If this sort of enhancement of redundancy and readability were done
in some standard, generally agreed, way, then some of the main
objections that I and others have to the lisp family of languages
(Common Lisp, Scheme, T, ....) would go away. I _might_ even
consider using T in place of Pop-11 one day???
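
(For the record, the macro-character trick Lawrence mentions really
is just a couple of lines in Common Lisp -- essentially the readtable
example from CLtL:

(set-macro-character #\{
  #'(lambda (stream char)
      (declare (ignore char))
      (read-delimited-list #\} stream t)))
(set-macro-character #\} (get-macro-character #\)))

after which {a {b c}} reads as (A (B C)). Getting everyone to agree
on a *standard* set of such conventions is, of course, the hard part.)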


Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
EMAIL aar...@cogs.sussex.ac.uk
or:
aarons%uk.ac.su...@nsfnet-relay.ac.uk
aarons%uk.ac.sussex.cogs%nsfnet-re...@relay.cs.net
BITNET: aarons%uk.ac.su...@uk.ac
UUCP: ...mcvax!ukc!cogs!aarons
or aar...@cogs.uucp

Aaron Sloman

unread,
Sep 21, 1990, 3:41:05 PM9/21/90
to
j...@lucid.com (Jamie Zawinski) writes:

> .....And what other language
> lets you define new control structures?

Just for the record -- Pop-11 does, either using macros (though they
are somewhat different from Lisp macros, as pointed out in previous
discussion in comp.lang.lisp) or by defining new "syntax words" that
read in some text and plant code for the incremental compiler.

Jeffrey Jacobs

unread,
Sep 21, 1990, 2:31:45 PM9/21/90
to

Larry Masinter wrote:
>>>[As a side note, many learning LISP programmers frequently do
>>> encounter self-modifying code and are mystified by it, e.g.,
>>> (let ((var '(a b c)))
>>> ...
>>> (nconc var value))
>>> ]

To which I replied:


>>This isn't self modifying code, it's destructive modification of a data
>>structure (and let's avoid a circular argument about "var" being "code";
>>the learning programmer encounters this as a data example, not an
>>example of code).

And Tim Moore wrote:

>It is self-modifying code in the sense that it changes the behavior of
>the form from one evaluation to the next by tweaking a structure that
>is not obviously part of the global state of the program.

Tim is quite correct, and points out what I (now) assume Larry Masinter
meant. I (mis-)took the example in a different context.

>>It's not legal Common
>>Lisp code, because the constant may be in read-only memory.

Correct me if I'm wrong, but it seems to me that this *is* legal and
that the compiler should specifically *not* allocate quoted lists such
as that to read-only memory!

In any case, it's basically poor programming style, which is found in
every language.

>Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy.
>Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

This assumes a destructive replacement at the point of invocation of
these forms. In the dark distant past, macro expansion during
interpretation occurred *every* time the form was encountered (and
COND wasn't a macro); one paid the price during interpretation, trading
off ease of debugging and maintaining the original form for the improvement
in efficiency during compilation. As time went on, destructive replacement
during interpretation became an option, and subsequently became the
de-facto default. (Note that this provides its own set of problems,
i.e. a macro being changed, but previous invocations don't get updated).
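
For those who never saw the trick, it amounts to this (a sketch of
the idea, not any particular Lisp's implementation):

(defun displace (call expansion)
  ;; Destructively overwrite the macro CALL with its EXPANSION,
  ;; so later evaluations skip the expansion step entirely.
  ;; This is fast -- and it is exactly how the form you wrote
  ;; disappears from under the debugger.
  (setf (car call) (car expansion)
        (cdr call) (cdr expansion))
  call)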

But none of these arguments is particularly compelling, IMHO. They
are throwing the baby out with the bath water, i.e. "because the
programmer may have difficulty dealing with certain things in such
an environment, we will eliminate all such capabilities"!


>"Ah, youth.
I remember youth...
>Ah, statute of limitations."
Thank god!
> -John Waters

Jeff Dalton

unread,
Sep 21, 1990, 2:52:05 PM9/21/90
to
In article <37...@goanna.cs.rmit.oz.au> o...@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>In article <34...@skye.ed.ac.uk>, je...@aiai.ed.ac.uk (Jeff Dalton) writes:
>> In article <34...@syma.sussex.ac.uk> aar...@syma.sussex.ac.uk (Aaron
>> Sloman) writes:

>> >It wasn't till I read this remark of Jeff's that I realised that one
>> >reason I don't like Lisp is that, apart from "(" and ")", Lisp
>> >doesn't help one to distinguish syntax words and function names.

>> Actually, Pop-11 doesn't do much along those lines either.
>> It's not like it uses a different font for them (cf Algol).

>Come come. All that's needed for that is a suitable vgrind(1) description.
>Surely there must be one available already; Pop-11 syntax is close enough
>to Pascal/Modula syntax for it to work.

Come come yourself. Haven't you ever seen that sort of thing
done for Lisp? If not, check out _The Little Lisper_ sometime.

>(I must admit that Interlisp-D's editors did a nice job of displaying
>functions with "keywords" and comments very clearly distinguished.)

That too.

>I've been marking some 2nd year Pascal assignments recently, and have
>come to the conclusion that Lisp programmers probably have _less_ trouble
>with parentheses. Pascal's precedence rules are so counterintuitive that
>people seem to throw in lots of parentheses just because they're never
>quite sure what Pascal will do to them if they're left out. At least in
>Lisp the rules are _simple_.

Just so.

Peter da Silva

unread,
Sep 22, 1990, 11:35:21 AM9/22/90
to
In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
> The real challenge here is that we want some syntax that says, apply
> this operator symbol to these arguments and return these value_s_. Even
> lisp syntax does not really allow us to easily produce multiple values.

I may be being more than usually dense, here, but what's wrong with returning
a list? What could be more natural?

Tim Moore

unread,
Sep 22, 1990, 4:43:02 PM9/22/90
to
In article <20...@well.sf.ca.us> jja...@well.sf.ca.us (Jeffrey Jacobs) writes:
>
>Larry Masinter wrote:
>>>>[As a side note, many learning LISP programmers frequently do
>>>> encounter self-modifying code and are mystified by it, e.g.,
>>>> (let ((var '(a b c)))
>>>> ...
>>>> (nconc var value))
...
>And Tim Moore wrote:
...

>>>It's not legal Common
>>>Lisp code, because the constant may be in read-only memory.
>
>Correct me if I'm wrong, but it seems to me that this *is* legal and
>that the compiler should specifically *not* allocate quoted lists such
>as that to read-only memory!

For better or worse, this is from CLtL2, pg 115:
X3J13 [the body specifying ANSI Common Lisp] voted in January 1989 to
clarify that it is an error to destructively modify any object that
appears as a constant in executable code, whether within a quote
special form or as a self-evaluating form.

I'm not sure where this was specified in the original CLtL; obviously
it wasn't specified too clearly or X3J13 wouldn't have needed to
clarify this point.

>Jeffrey M. Jacobs
>ConsArt Systems Inc, Technology & Management Consulting
>P.O. Box 3016, Manhattan Beach, CA 90266
>voice: (213)376-3802, E-Mail: 7670...@COMPUSERVE.COM

Peter da Silva

unread,
Sep 22, 1990, 11:41:25 AM9/22/90
to
In article <12...@ists.ists.ca> mi...@ists.ists.ca (Mike Clarkson) writes:
> I feel that the approach of encouraging
> implementations to continue to experiment with different solutions has
> had its merits (first class continuations as an example), but the time
> has come to solidify the language and build upwards on the very real
> accomplishments of the base language.

As an object lesson on what can happen to a language if you let this sort
of experimentation run unchecked, look at Forth. They are finally working
on an ANSI standard for the language, and they can't even agree on how
division should work!

David Vinayak Wallace

unread,
Sep 22, 1990, 6:07:17 PM9/22/90
to

Date: 21 Sep 90 22:09:56 GMT
From: mil...@GEM.cam.nist.gov (Bruce R. Miller)

Larry Masinter wrote:
>[As a side note, many learning LISP programmers frequently do
> encounter self-modifying code and are mystified by it, e.g.,
> (let ((var '(a b c)))
> ...
> (nconc var value))

Without digging thru CLtL to determine its legality, it seems pretty
clear that it is unclear what it SHOULD do! And that different
interpreters/compilers will make different choices. And that the
Committee probably didn't even want to specify.

In any case, one could imagine a perfectly legitimate interpreter
consing the list fresh every time, giving a 3rd behavior. Before you
groan, consider that this might in fact be the most consistent behavior!
The (QUOTE (A B C)) form is (conceptually, at least) evaluated each time
the function is called! Should (QUOTE (A B C)) sometimes
return (A B C FOO)?

The interpretation of either case is straightforward.

Quote returns the object it was handed. It doesn't try to guess "what
you meant." After all, the code isn't text; quote just has a pointer
to the car of the list you handed it!

I would be unhappy if quote performed computation each time you used
it! A very common idiom is to pass a quoted, uninterned symbol around
as a marker in some structure. (I used to use '(())). This is the
only way I can get a token I guarantee won't appear in my input
stream!

Anyway, the capability to side-effect constants in storage was not
removed for taste reasons, but for efficiency; it permits me as a lisp
implementor to place code into read-only storage (and possibly share
it among processes).

Piercarlo Grandi

unread,
Sep 23, 1990, 2:28:29 PM9/23/90
to
On 22 Sep 90 15:35:21 GMT, pe...@ficc.ferranti.com (Peter da Silva) said:

peter> In article <PCG.90Se...@odin.cs.aber.ac.uk>,
peter> p...@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> The real challenge here is that we want some syntax that says,
pcg> apply this operator symbol to these arguments and return these
pcg> value_s_. Even lisp syntax does not really allow us to easily
pcg> produce multiple values.

In other words there is some difference between say

(f (a b c d e)) vs. (f a b c d e)
(return (a b c)) vs. (return a b c)

For example,

(lambda (x y) (list (div x y) (rem x y)))

could be rewritten as

(lambda (x y) (div x y) (rem x y))

if we removed the implicit-progn rule. But this opens up other
questions...

peter> I may be being more than usually dense, here, but what's wrong
peter> with returning a list? What could be more natural?

That it is not returning multiple values -- it is returning a single
value. You can always get out of the multiple value difficulty like
that. Unfortunately it is also a good reason to also require functions
to have a single parameter. Maybe this is the right way to do things, or
maybe a function over a cartesian product or a curried function is not
quite the same thing as a function over a list.

One argument I could make is that Aleph or Forth seem to handle, each in
their way, the multiple-parameter/multiple-result problem more
elegantly, in a more fundamental way than passing around lists and
multiple-bind.

The effect may be largely the same, though.

Richard A. O'Keefe

unread,
Sep 24, 1990, 4:16:28 AM9/24/90
to
On 14 Sep 90 07:45:24 GMT, o...@goanna.cs.rmit.oz.au (I) wrote
> operation. Floating-point "+" and "*" are not associative.

In article <PCG.90Se...@odin.cs.aber.ac.uk>, p...@cs.aber.ac.uk (Piercarlo Grandi) writes:

> The semantics *are* different. Didn't I write that?

I was responding to this exchange:
Andy Freeman wrote:
"+" isn't really a binary operator, neither is "*";
there are surprisingly few true binary operations
to which Pierlecarlo Grandi replied
Precisely. Agreed.

and surely to agree that "+" and "*" are "not really" binary operations
but are really N-ary "associative" operations "precisely" is to say that
they are associative?

> ok> For integer and rational arithmetic, there's no problem,
>
> Well, this is a case in point for my argument about the pitfalls; there
> is still a problem. Nobody constrains you to have only positive numbers
> as the operands to n-ary fixed point +, so that
>
> (+ -10 +32767 +10)
>
> is not well defined on a 16 bit machine, unless you do modular
> arithmetic throughout.

This turns out not to be the case. Given any arithmetic expression made
up of 16-bit integer constants, variables having suitable values, and
the operations "+", unary "-", binary "-", and "*", if the result of the
whole expression should be in range, the answer is guaranteed correct.
If the implementation detects integer overflow, then arithmetic
expressions are only weakly equivalent to their mathematical counterparts,
but if integer overflow is ignored, then the answer is right iff it is in
range. In particular, on a 16-bit machine, if
(+ -10 +32767 +10)
gives you any answer other than 32767 or "overflow", your Lisp is broken.

Given that I referred to "integer and rational arithmetic", I think it's
reasonably clear that I was referring to Common Lisp (and of course T!)
where integer and rational arithmetic are in fact offered.


> As we already know but sometimes forget, arithmetic on computers follows
> *very* different rules from arithmetic in vanilla mathematics,

Not for integer and rational arithmetic it doesn't, and even N-bit
arithmetic is correct *modular* arithmetic (except for division).

> (+ a b c d e)
>
> simply means apply repeated *computer* addition on *computer* fixed or
> floating point throughout.

But it is an utterly hopeless notation for that! The only excuse for
using N-ary notation for addition is that it genuinely doesn't matter
which way the additions are done. For 2s-complement (with overflow
ignored) and for integer and rational arithmetic, this is indeed the
case, and (+ ...) notation can be used without misleading. But for
FP it matters.

> ok> but anyone doing floating point calculations in Lisp has to be very
> ok> wary of non-binary + and * .
>
> Anyone doing floating point arithmetic on *any* machine, IEEE standard
> or Cray, has to be very very wary of assuming it is the same as
> arithmetic on reals

This completely misses the point. I am supposing someone who thoroughly
understands floating-point arithmetic. (+ a b c) is a trap for _that_
person, no matter how much he understands IEEE or CRAY or even /360
arithmetic, because there is no promise about how it will turn into
>floating-point< operations (which we assume the user thoroughly grasps).
(+ a b c) may be compiled as (+ (+ a b) c) or as (+ a (+ b c)) or even
as (+ (+ a c) b). For floating-point, these are different formulas.
The trap here is that the particular Lisp used by a floating-point
expert may compile (+ a b c) into (+ (+ a b) c) and he may mistake this
for something guaranteed by the language. The copy of CLtL2 I've
borrowed is at home right now, but there is nothing in CLtL1 to require
this. The entire description of "+" is
+ &rest numbers [Function]
This returns the sum of the arguments. If there are no arguments,
the result is 0, which is an identity for this operation.
There is no reason to expect that + will always involve an addition.
As far as I can see, there is nothing to prevent a CL implementation
compiling
(declare (type float X))
(+ X X X)
as
(* 3 X)
even though on the given machine the two imagined floating-point formulas
may yield different answers. I'm sure CLtL2 must clarify this.

Steve Knight

unread,
Sep 24, 1990, 2:28:18 PM9/24/90
to
Eric Olsen points out:
> However there are significant limitations on what can be done in C++.
> For example, suppose you need to convolve an image as quickly as
> possible, and that the convolution kernel is not known at compile
> time. In Lisp, one can easily write code to make a function that applies
> a specific convolution kernel to an image. [...]
> My impression is that writing a similar piece of code in Algol
> languages would be needlessly complicated (although doable).

This point is one that illustrates the usefulness of creating code on the
fly. Where I differ in view is in thinking that this capability does not
entail the direct relationship between internal and external syntax that Lisp's
syntax possesses.

Jeff Dalton has argued, and I concede, that if you don't have such a direct
relationship then a certain simplicity is lost. I justify this, in my mind,
by believing that the benefit of having a block structured syntax outweighs
the minor complications introduced. Of course, you'd have to agree the
complications were minor, the simplicity was not significant, and that
a different syntax might be better -- and it looks to me as if Jeff has a
different assessment on all of these.

> Anyway, I've yet to find an application that can be easily implemented
> in an Algol language that cannot be readily implemented in Lisp.
> However, the above examples are easily implemented in Lisp, but, I
> think, are not readily implemented in Algol languages.

I'd be happy to prove that this was not possible, on the simple basis of
matching construct & concepts between the two schools. I've never heard anyone
state the contrary. Of course, this is NOT the same as stating that Lisp
is an unsuitable delivery vehicle, alas, which is all too often the case.

Steve

Roman Budzianowski

unread,
Sep 24, 1990, 1:36:52 PM9/24/90
to
In article <E++...@xds13.ferranti.com>, pe...@ficc.ferranti.com (Peter da Silva) writes:

My understanding was that the major reason is efficiency: multiple
values are returned on the stack, instead of consing a new list. The
rest is syntactic sugar (important in that if you are interested only
in the first value, there is no additional syntax).
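
For example:

(floor 22 7)                ; => 3 and 1 (two values, no consing)

(+ (floor 22 7) 1)          ; => 4; the second value silently drops

(multiple-value-bind (q r) (floor 22 7)
  (list q r))               ; => (3 1), only if you ask for both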

Steve Knight

unread,
Sep 24, 1990, 2:02:15 PM9/24/90
to
Cliff wonders:

> I find ML syntax to be incredibly
> unreadable, while Scheme & C syntax isn't that bad. [...]

> 1) Not enough experience with ML,

Nope -- the more you use it, the more you hate it.

> 2) the difficulty of having a type system in a functional language,

Nope -- the type system is a pain, but that's got nothing to do with the
syntax. It's easy to write a ML-style type-checker for S-expression syntax
languages.

> 3) lousy design of ML syntax or

Alas, it is true. ML's syntax is riddled with problems - the lack of a
closing keyword for IF means that you are left guessing how it binds with
respect to (for example) a HANDLE statement. The overloading of END means
that you are frequently left guessing, or measuring indentation with a ruler,
to work out what closes with what. The fact that datatype constructors are
tupled rather than curried (as in Miranda) means that pattern matching
involves many superfluous brackets. The incorrect precedence for FN almost
always forces the insertion of disambiguating brackets. The failure to
properly distinguish variable namespaces from constructor namespaces
(as is done correctly in Prolog) means that it is trivial to write
incorrect programs ..... and so on.

As an example of this last point, I will leave you with a rather uncomfortable
observation about ML. Taking this definition in isolation out of a program,
you cannot determine what it means because the context is too important.
fun f x = x;
(Get it? "x" might be a constructor or a variable. You don't know.)

> Gratefully awaiting your enlightened responses,

Sorryfully, I have to say that ML is one of the few languages that makes me
wish I was using S-expression syntax. Now stop that sniggering at the back...

Steve

Tim Moore

unread,
Sep 24, 1990, 6:27:35 PM9/24/90
to
In article <28631...@ARTEMIS.cam.nist.gov> mil...@cam.nist.gov (Bruce R. Miller) writes:

>
>In article <GUMBY.90S...@Cygnus.COM>, David Vinayak Wallace writes:
>> Date: 21 Sep 90 22:09:56 GMT
>> From: mil...@GEM.cam.nist.gov (Bruce R. Miller)
>> Larry Masinter wrote:
>> > (let ((var '(a b c)))
>> > ...
>> > (nconc var value))
>> ...

>> Quote returns the object it was handed. It doesn't try to guess "what
>> you meant." After all, the code isn't text; quote just has a pointer
>> to the car of the list you handed it!
>>
>I'm a little confused about what you're saying. In the simplest case,
>READ consed up the list, and QUOTE just returns its arg.
>Clearly if you repeatedly INTERPRETED the above form, you get the same
>result each time. (but perhaps not within a defun which may or may
>not digest the form!)

Regardless of any processing done by defun, the form is read only
once. It is true that the cons cell that is the value of var is
the same from invocation to invocation. However, the intention is that
the last element of the list whose head is that cons cell will be
changed each time through the code. So the result, (say, the return
value of nconc) is different each time.

>If you WANT the permanent change then you should presumably write
>
>(let ((var '(a b c)))
>  (defun foo (..)
>    .. (nconc var value)))
>to make clear the extent of var.

This implies that foo is a closure. The previous code is an idiom that
dates from Lisps without closures. You're still trying to modify a
quoted constant here. If you have closures you might as well do

(let ((var '(c b a)))
(defun foo (...)
(push value var)))

>> Anyway, the capability to side-effect constants in storage was not
>> removed for taste reasons, but efficiency; it permits me as a lisp
>> implementor to place code into read-only storage (and possibly share
>> it among processes).
>

>Ah, if it's in read-only storage then you can NOT modify it, no?

If you want to modify it, don't quote it.

>
>BTW; Tim Moore, in another posting pointed out that this is considered
>`illegal' although he didn't say if CLtL specified whether an error is
>signalled or whether it is simply `undefined'.

In the terminology of CLtL, "it is an error". This means that valid
Common Lisp programs can't do it, but if it's done the results are
undefined and an error need not be signalled.

>
>bruce

Bruce R. Miller

unread,
Sep 24, 1990, 2:52:17 PM9/24/90
to

In article <GUMBY.90S...@Cygnus.COM>, David Vinayak Wallace writes:
> Date: 21 Sep 90 22:09:56 GMT
> From: mil...@GEM.cam.nist.gov (Bruce R. Miller)
> Larry Masinter wrote:
> > (let ((var '(a b c)))
> > ...
> > (nconc var value))
> ...

> In any case, one could imagine a perfectly legitimate interpreter
> consing the list fresh every time, giving a 3rd behavior. Before you
> groan, consider that this might in fact be the most consistent behavior!
> The (QUOTE (A B C)) form is (conceptually, at least) evaluated each time
> the function is called! Should (QUOTE (A B C)) sometimes
> return (A B C FOO)?
>
> The interpretation of either case is straightforward.
>
> Quote returns the object it was handed. It doesn't try to guess "what
> you meant." After all, the code isn't text; quote just has a pointer
> to the car of the list you handed it!
>
I'm a little confused about what you're saying. In the simplest case,
READ consed up the list, and QUOTE just returns its arg.
Clearly if you repeatedly INTERPRETED the above form, you get the same
result each time. (but perhaps not within a defun which may or may
not digest the form!) Compilation makes it a bit less clear; READ is no
longer involved. What pointer are we handing it?

If you WANT the permanent change then you should presumably write

(let ((var '(a b c)))
  (defun foo (..)
    .. (nconc var value)))

to make clear the extent of var.

> I would be unhappy if quote performed computation each time you used
> it! A very common idiom is to pass a quoted, uninterned symbol around
> as a marker in some structure. (I used to use '(())). This is the
> only way I can get a token I guarantee won't appear in my input
> stream!
>

So would I!!!! I don't quite follow your example, but I'm certainly not
suggesting that quote should create a new list -- nor even that the (a
b c) should be consed anew each time. Indeed, I expect the quote should
`disappear' upon compilation. I just wanted to say that the EFFECT of
that approach MAY be the most appropriate behavior.

> Anyway, the capability to side-effect constants in storage was not
> removed for taste reasons, but efficiency; it permits me as a lisp
> implementor to place code into read-only storage (and possibly share
> it among processes).

Ah, if it's in read-only storage then you can NOT modify it, no?

BTW; Tim Moore, in another posting, pointed out that this is considered
`illegal' although he didn't say if CLtL specified whether an error is
signalled or whether it is simply `undefined'.

bruce

Jeff Dalton

unread,
Sep 24, 1990, 4:34:53 PM9/24/90
to
In article <34...@syma.sussex.ac.uk> aar...@syma.sussex.ac.uk (Aaron Sloman) writes:
>je...@aiai.ed.ac.uk (Jeff Dalton) (14 Sep 90 20:21:11 GMT)
>commented on my remarks on the syntactic poverty of Lisp.

>> One of the *mistakes* some people make when writing Lisp is to
>> try to add the redundancy you describe by putting certain close
>> parens on a line of their own followed by a comment such as
>> " ; end cond". It makes the code *harder* to read, not easier.
>
>(He goes on to justify this by saying that indentation plus a good
>editor helps, and that GOOD lisp programs have short procedure
>definitions and that adding such comments makes them longer and
>harder to take in.)

Humm. After reading this message from you, and some e-mail, I've
decided that I must not have been as clear as I thought, especially
when I brought in the question of procedure length.

What happens in Pop-11 is that every "if" ends in "endif", every
"define" ends in "enddefine", and so on. I think that doing that
(or something close to it) in Lisp is (a) unnecessary, and (b) a
mistake. It makes the code harder to read, but not just, or even
primarily, because it makes the code longer. The main problem is
that the code is more cluttered, the indentation obscured, and it
becomes harder to pick out more important names that appear in the
code, because there are all these other things made out of letters.

In the Lisp indentation style used in, for example, most of the
good Lisp textbooks, close parens are not emphasised (they're
just on the end of a line containing some code) and so don't
get in the way when reading. They contribute to the overall
"shape" of the code, but don't have to be processed individually
by the (human) reader. If instead they're put on lines of their
own, and especially if an "end" comment is attached, then they
become more significant and it takes more work to process them.

Moreover, to the extent that one has to read the end marker, to see
whether it's an "endif" or an "endfor", one has failed to see this
scope information in the indentation itself; so I'm not sure the
increase in redundancy is as much as one might think.

Anyway, I don't know whether having distinct end brackets ("endif" vs
"endfor") is better than what's done in Pascal (all such groupings
use "begin" and "end" -- Pascal seems to have dropped the Algol 60
provision for comments following the "end") or C (all groupings use
"{" and "}"). I just don't think it works that well in Lisp.

But what about "long" procedures?

One thing to note is that it isn't just a matter of the number of
characters or lines involved. If we have a DEFUN, then a LET, and
everything else is in a CASE, and each CASE clause is fairly short,
the whole procedure can be fairly long without becoming especially
hard to read. So it's really a question of complexity, in some
sense.

A good way to make long procedures more readable is to break them
up, visually, into manageable sections. Blank lines and a brief
comment introducing a section are more effective, in my experience,
than emphasising end markers.

Another important technique is to make the procedures shorter by
using appropriate macros and higher-order functions (and, of course,
by putting some things into separate procedures).

In many languages, loops are written out using a "for" statement
(or equivalent), and the programmer has to recognize certain common
patterns of code. Lisp is often written that way too, but since
it's fairly easy to write a function or macro that embodies the
same pattern in a shorter, more easily recognized, form, the Lisp
programmer has some effective alternatives to writing out loops
in-line. Hence functions such as MAPCAR, macros that implement
"list comprehensions", and so on.

>> Of course, it's no doubt possible to write procedures (such as ones
>> that are too long) where end markers might make a difference. But
>> is is also possible to avoid those cases, just as it is possible to
>> avoid other bad practices.

>I agree that a good editor helps a lot (though people often have to
>read code without an editor, e.g. printed examples in text books,

The editor is to help you *write* code that can be read. The whole
point of good indentation is to make it possible to read the code
without having to pay attention to individual parentheses. And
the code is readable whether it's in a book, a file, or whatever.

The reason people without a good Lisp editor may find end markers
helpful is that they have difficulty matching parens when they're
writing and find it difficult to check whether they've made a mistake.
And, since they're new to Lisp, they may also find that syntax easier
to read.

But I certainly didn't mean to say (if I did say it) that a good
editor was necessary in order to read the code. (Which is not to
say there *couldn't* be an editor that helped.)

>etc) and I agree that well-written short lisp definitions
>(e.g. up to five or six lines) are often (though not always) easily
>parsed by the human brain and don't need much explanatory clutter.

I don't know where you got the "five or six lines" from; it's
certainly not the limit I had in mind for "short". And *of course*
such functions are not *always* easy to read; good indentation, and
other good practices, are still necessary.

There are readable Lisp procedure definitions up to a page long in
some textbooks (and of course in other places). Once definitions
are longer than a page, they are usually significantly harder
to read in any language, although there are certain combinations
of constructs for which long definitions aren't so great a problem.
(Such as the CASE example mentioned above.)

I don't think Lisp is necessarily much worse than other languages
in this respect (if it's worse at all).

>But I doubt that it is always desirable or possible to build good
>programs out of definitions that are short enough to make the extra
>contextual reminders unnecessary.

Given your "five or six lines", I'm not surprised to find you
think this way.

And there are contextual reminders other than end markers, such
as introductory comments, which I find more effective.

>Even Common_Lisp_The_Language (second edition) has examples that I
>think are long enough to benefit from these aids that you say are
>unnecessary. (Scanning more or less at random for pages with lisp
>code, I found examples on pages 340-349, 667, 759, 962 and 965. Or
>are these just examples of bad lisp style? (I've seen much worse!)

I have also seen code that is harder to read. Look at "AI Practice:
Examples in Pop-11" sometime. [I'm sorry. I'm sure this book has
many virtues. But some of the procedures cross a couple of page
boundaries or are, in other ways, somewhat hard to read.]

Moreover, we have to be clear on just what aids I think are
unnecessary. I think that imitating Pop-11 end markers is not
a good idea (because there are so many of them) but that end
markers are sometimes helpful. This was discussed in more
detail above.

>My general point is that human perception of complex structures is
>easiest and most reliable when there is well chosen redundancy in
>the structures, and most difficult and error-prone when there isn't.
>However, as you point out, too much redundancy can sometimes get in
>the way, and we don't yet know the trade-offs.

Here I agree. Well-chosen redundancy is important. However, it's
important to bear in mind that many programmers new to Lisp haven't
yet learned to "see" the redundancy or else chose programming styles
that obscure it. The redundancy comes from three things:

* Arity (eg CAR has one argument).
* Indentation.
* The overall shape of the parenthesis groups
(note that this does not require that the reader pay
much attention to individual parens)

To show what I mean by the last, here's an example. Code
that looks like this:

( (( )
( ))
(( )
( ))
(
( )))

is usually a COND. (COND is perhaps the least readable Lisp
construct.) Breaking up the paren groups tends to remove this
information. Putting in end markers adds a different redundancy
but tends to make the indentation less effective.

Anyway, I'm not interested in a language war or in a discussion
that more properly belongs in alt.religion.computers. I just
want to keep some space for the view that Lisp is (or can be)
readable and to suggest how some people may have been approaching
the language in less effective ways.

-- Jeff

Jeff Dalton

unread,
Sep 25, 1990, 1:11:25 PM9/25/90
to
I couldn't quite make out who wrote the stuff quoted below, so I
decided to avoid the risk of misattribution by not attributing it
to anyone.

>> I would be unhappy if quote performed computation each time you used
>> it! A very common idiom is to pass a quoted, uninterned symbol around
>> as a marker in some structure. (I used to use '(())). This is the
>> only way I can get a token I guarantee won't appear in my input
>> stream!

I always call LIST or CONS to get a unique object, rather than rely on
anything that involves QUOTE, although some people have pointed out
that, if you want an object that cannot appear in a stream, the stream
itself will do (unless, thanks to #., the stream is held in a global
variable). That is, you do something like:

(with-open-stream (s ...)
  (do ((input (read s nil s) (read s nil s)))
      ((eq input s))
    (process input)))

-- Jeff

Jeff Dalton

unread,
Sep 26, 1990, 10:11:20 AM9/26/90
to
In article <1990Sep20.2...@cbnewsc.att.com> l...@cbnewsc.att.com (lawrence.g.mayka) writes:
>In article <1990Sep2...@ai.mit.edu>, t...@ai.mit.edu (Thomas M. Breuel) writes:
>> * In a language like Scheme (as opposed to CommonLisp)
>> that provides efficient implementations of higher order
>> constructs with natural syntax, there is little need or
>> excuse for something as horrible as LOOP in the first
>> place.

I think this ritual Common Lisp bashing is becoming a bit of a bore,
don't you?

The truth is that I can easily write a named-let macro in Common Lisp
and write a large class of loops in exactly the way I would write them
in Scheme. The efficiency isn't bad either, because most CL compilers
can optimize self-tail-recursion for local functions. (Yes, I know
Scheme does better than that.)

Moreover, if I want something like the Abelson & Sussman COLLECT macro
for streams I can write it in portable Common Lisp. I would argue
that code using such macros is significantly easier to understand than
the same thing done with nested recursive loops or nested calls to
MAP-STREAM and FLATMAP.

Until Scheme has a standard way of writing macros, I can't write
such things in portable Scheme. So I am very much in favor of
having such a standard mechanism for Scheme.

Of course, if you prefer higher-order functions instead of macros,
there is already a large set of them available in Common Lisp (MAPCAR,
SOME, REDUCE, etc.) and it is easy to write more.

So (1) Scheme's advantages over Common Lisp in this area, while real
and significant, are not as great as one might suppose, and (2) there
are iteration macros that are worth having regardless of one's views
on LOOP.

>We can talk about Dick Waters' SERIES package instead, if you prefer.

Yes, let's. I don't expect Scheme folk to flock to this beastie,
but it does show one of the benefits of having macros, namely that
it enables such work to be done.

Moreover, one of the arguments for Scheme, I would think, is that
anyone doing such work in Scheme would be encouraged by the nature of
the language to look for simple, general, "deep" mechanisms rather
than the relatively broad, "shallow" mechanisms that would fit better
in a big language such as Common Lisp. (This is not to imply that the
Series package is shallow or in any way bad (or even that shallow in
this sense is bad). Anyway, I'm speaking in general here and no
longer taking Series as my example.)

-- Jeff

lou

unread,
Sep 26, 1990, 3:18:49 PM9/26/90
to

>It's not the macros that the application programmers write themselves
>that cause problems; at least the programmer knows what the expansions
>of those macros look like. Rather, it's the macros that are a part of
>the language that cause trouble. Consider how often Common Lisp
>programmers use cond, setf, dotimes, dolist, do, and do*. The
>expansions of these macros can be pretty hairy. Maintaining the
>correspondence between the internal representation of a function and
>the form that produced it is not trivial.

Here is a case in point, that actually happened to a student in my AI
Programming class. He was getting an error message about bad
variables in a let, and could not figure out what was going wrong,
since the code he had written did not have a let in it! It turns out
that in our lisp, prog is implemented as a macro that expands into a
form that involves a let. He did have a prog, and the syntax of that
prog was bad - not bad enough to prevent the macro expansion but bad
enough to cause the let it expanded into to give an error. If the
error message were in terms of a prog he would have seen his error
quite quickly. In fact he did not, and wasted time on it.

On the other hand, using the interactive debugger (and my vast
experience :-) I was able to figure it out pretty quickly. But this
does point out that any time the code as seen in the debugger / error
messages / etc. differs from the code as written by the programmer,
there is a price to pay.
--
Lou Steinberg

uucp: {pretty much any major site}!rutgers!aramis.rutgers.edu!lou
internet: l...@cs.rutgers.edu

Rob MacLachlan

unread,
Sep 29, 1990, 1:20:28 PM9/29/90
to
You just weren't using the right compiler. foo.lisp is:
(defun test ()
  (prog ((:end 42))
    (print :end)))

________________________________
* (compile-file "test:foo.lisp")

Python version 0.0, VM version DECstation 3100/Mach 0.0 on 29 SEP 90 01:13:48 pm.
Compiling: /afs/cs.cmu.edu/project/clisp/new-compiler/tests/foo.lisp 29 SEP 90 01:13:19 pm


In: DEFUN TEST
(PROG ((:END 42))
(PRINT :END))
--> BLOCK
==>
(LET ((:END 42)) (TAGBODY (PRINT :END)))
Error: Name of lambda-variable is a constant: :END.
________________________________

Rob (r...@cs.cmu.edu)
