
Better Dylan syntax?


Kelly Murray

Aug 29, 1997

>From: Andrew LM Shalit <al...@folly.org>:
>...
> In addition, we found that there was an intense negative market
> reaction to association with Lisp and Lisp-like syntaxes. We
> chose to acknowledge this fact of life and remove the association.
>...
> There were a number of obstacles, any one of which could have been
> a show stopper. Having worked with many potential users over the years
> and been in a number of focus groups, I can honestly say that retaining
> the Lisp-like syntax would have been a show stopper. At the very
> least, it would have made it much, much harder to sell the tool to
> a mass audience.
>...

Hmm. The approach that I'm taking for my web server
scripting language, which I call SilkScript, is to keep the same
functional syntactic style, but eliminate the truly useless
parentheses in a couple of very commonly used forms, in particular let.
So I changed let from

  (let ((a 10)
        (b 20))
    ...)

to

  (let a = 10
       b = 20
   do
     ...)
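
(For concreteness, such a form can be had as an ordinary Common Lisp
macro. The following is only an illustrative sketch -- the name
silk-let and its handling of the = and do markers are invented here,
not the actual SilkScript implementation:

  (defmacro silk-let (&rest forms)
    ;; collect VAR = VALUE triples until the DO marker
    (let ((bindings '()))
      (loop while (and forms (not (eq (first forms) 'do)))
            do (destructuring-bind (var marker value &rest rest) forms
                 (assert (eq marker '=) () "expected = after ~S" var)
                 (push (list var value) bindings)
                 (setq forms rest)))
      `(let ,(nreverse bindings)
         ,@(rest forms))))        ; everything after DO is the body

  ;; (silk-let a = 10 b = 20 do (+ a b))  => 30
)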

And also rely on LOOP, which has a non-parenthesis-crazy syntax.

Do you think this is doomed as well?

Another thing I think is confusing in Lisp is the single quote
character. Just about every language in the world uses
single quotes in matching pairs. It is quite unsettling to
see (list 'a) if you've never seen or understood Lisp before.
(not to mention many editors/wordprocs want to match quote chars)
There is no reason single quotes can't be matched, and now that
I've written code this way (as I have), I'm totally perturbed
over having to code (list 'symbol) instead of (list 'symbol').
And what the hell is #'equalp as a function notation?
Try to explain THAT one to a non-Lisper,
or better yet #'(lambda () ).
Sure anyone can get used to it, and perhaps quickly,
but will they get past it?

I've found the reaction to my syntactic changes highly negative
from just about all existing Lisp programmers.
In my opinion, Lisp people complain loudly about how others are hung up
on syntax, but they are the worst complainers of all
when someone wants to change Lisp syntax.

We will all just end up using C/Java syntax in the end it seems.

-Kelly Murray k...@franz.com http://www.franz.com/

Martin Rodgers

Aug 29, 1997

Kelly Murray wheezed these wise words:

[excellent observations about syntax snipped]

> Try to explain THAT one to a non-Lisper,
> or better yet #'(lambda () ).
> Sure anyone can get used to it, and perhaps quickly,
> but will they get past it?

This is a good question. Once you get past the "Blank expression"
reaction to Lisp itself, maybe your chances will improve. In my
experience, that first hurdle is the tough one, and I've never
succeeded. That's why I'm now intending to try an alternative
strategy, using Dylan. Get the new ideas across, _then_ try the
syntax. Why put the biggest hurdle first? I'd like to put it last, and
see if that helps avoid the "Blank expressions".

This may seem obvious, but it could be a point worth making (again).
Most people who use computers are not programmers. There, I said it.
Even those who have some programming experience may still not have
experienced Lisp, Smalltalk, Actor, or even Java. If they no longer
program, but now choose the tools that other programmers will use,
then it could be very hard to justify Lisp to them. As managers,
they'll be more interested in advantages that mean something to them.
In order to "sell" Lisp to them, we must understand what it is that
they want, and then find a way of describing Lisp in terms that match
these demands.

Java's current success seems to come from using this technique. It
uses a syntax that will be vaguely familiar to most programmers.
There's nothing too radical there. Ironically, this appears to be what
Lisp programmers dislike about Java. Meanwhile, Lisp's differences are
what alienate non-Lispers, as you've described so well.



> I've found the reaction to my syntactic changes highly negative
> from just about all existing Lisp programmers.

See above. One man's meat is another man's poison. It's as if the most
vocal Lispers feel that admitting the existence of another way of doing
something, like syntax, is to admit a failure in Lisp. I don't share
this attitude, but I'm amused that C++ programmers _do_ share it. The
differences between Java and C++ are far smaller than between Lisp and
C++, even if you only consider syntax.

> In my opinion, Lisp people complain loudly about how others are hung up
> on syntax, but they are the worst complainers of all
> when someone wants to change Lisp syntax.

Oh yes. Am I alone in thinking that, at present, C++ is changing more
rapidly than Common Lisp? Some of these changes affect the syntax, and
yet I too notice Lisp programmers howling in agony whenever a minor
change to the syntax, or criticism of Lisp syntax, is made.

It's fun, at these times, to see someone point out the existence of M
expressions. ;) Religious zealots like to rewrite history, so it's
vital that we should remember the _true_ beginnings of Lisp.

> We will all just end up using C/Java syntax in the end it seems.

I'm currently rather enamoured with the Haskell syntax. ;) It has a
lot of properties of the Lisp syntax, like freedom, but without the
need for lots of "noise", whether parenthesis-noise or whatever.

Alas, some Lispers feel a need to dump on Haskell, just like they need
to dump on Java, Dylan etc - even Scheme! Maybe they do this for the
simple reason that "it's not (Common) Lisp". Well, yes. It's _not_.
Of course not! That's why we give it a different name, so that we can
distinguish it from any other language.

(Perhaps if we called everything "Lisp", this would all be simpler.
After all, anything possible in another language can also be done in
Lisp. Wouldn't that make all other languages just a subset of Lisp?
Well, perhaps not. We still need to _implement_ those semantics. The
potential isn't the same thing.)

So we use different languages to address different needs, and we give
them different semantics, different syntax, and different names. Dylan
is a language that is distinct from Common Lisp and Scheme, just as CL
and Scheme are distinct from each other. Yet there are Lispers who get
hung up about these differences, as if they shouldn't exist!

While it may be tempting to try to solve all problems with just one
language, we should not assume that all programmers will - or should -
feel the same way. They may - and do - choose to use other languages,
and sometimes more than one. Even if they're wrong, that's _their_
choice, not ours. Telling them that they've got it all wrong is no
better than a C++ programmer telling us that _we've_ got it wrong.

Is this too obvious for some people, or are they just zealots, blinded
by their religious fervour?
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"As you read this: Am I dead yet?" - Rudy Rucker
Please note: my email address is gubbish

Martin Rodgers

Aug 29, 1997

William Paul Vrotney wheezed these wise words:

> That's because there are good reasons for preferring Lisp syntax and keeping
> it the way it is:

Yes, this all works, once you accept it. Stating the obvious, I know,
but not everyone accepts it. Hence the Dylan syntax.

Or is that too obvious? You're using the same kind of arguments that
C++ people use for justifying C++ syntax: "we use it, so it's good."
Never mind the poor sods who don't like it.

Cedric Adjih

Aug 29, 1997

William Paul Vrotney (vro...@netcom.com) wrote:
:
: That's because there are good reasons for preferring Lisp syntax and keeping
: it the way it is:
:
: 1. Lisp syntax is elegant and regular. This is important when you are
: involved with writing lots of parsers. True that one can define crazy
: macros like the extended LOOP macro, but at least in Common Lisp we can
: macroexpand and then easily parse the resulting regular lists.
:
: 2. S-expressions are nice for representing data. Even some C programmers,
: such as some robotics programmers, use Lisp s-expression syntax for data.
:
: 3. Lisp syntax text is easier to manipulate in a text editor such as Emacs.
:
: 4. Polish notations make more sense than infix. They are more concise for
: one. They are more *stackable* for another. I do fine with my HP
: calculator, but I still have not figured out in general how to do fast
: complex calculations on my infix calculator.
:
: 5. Basic Lisp syntax is not cluttered with commas, colons, semicolons etc.
: That's why in Common Lisp we can use such symbols for readermacro
: abbreviations for QUOTE, LIST, and a myriad of other useful abbreviations.
:
: Other than that it is just vogue, I can't come up with one reason for
: preferring C/Java syntax.

I have recently tried Scheme, and the "simplicity of the syntax" that
is sometimes pointed out is not an obvious advantage, in my opinion.

First, you could write all your code in a Turing machine language with
only 0s and 1s. Imagine what an amazingly simple, regular and elegant
syntax that would be!

Ok, I grant you that in that case the semantics won't map trivially to
the syntax (i.e. the code would be unreadable). But that's a problem I
have with Scheme. The code I wrote was cluttered with things like:

(vector-set! (mytype->content container) i
             (+ (vector-ref (mytype->content container) i)
                (vector-ref some-data i)))

while in Python that would be written:
container.content[i]=container.content[i]+someData[i]


And there were many instances of things like:

(let ((result 0)) ; MIT Scheme doesn't allow (define result 0) :-(
  (do ((index 0 (1+ index)))
      ((= index (vector-length (mytype->field myinstance))))
    (let ((field1 (vector-ref (mytype->field myinstance) index))
          ...)
      ...)))

OR

(do ((index 0 (1+ index))
     (result 0))
    ((= index (vector-length (mytype->field myinstance))) result)
  (let ((field1 (vector-ref (mytype->field myinstance) index))
        ...)
    ...))

while in Python it would be written as:

result=0
for index in range(0, len(myinstance.field)):
    field1=myinstance.field[index]
    ...

It looked somewhat like Tcl programming to me (except that Scheme is much
more fun) and I still prefer Python syntax. At least until I become
enlightened about the wonderful Scheme macros (it seems easier to find PhD
theses than tutorials on this topic). Of course, I tried to switch to
Common Lisp, but, well, I'm still reading CLtL :-)
For now, I can courageously cope with the internal complexity of
the parsers/software tools of other languages (especially since I don't
write them). Your mileage varies.

BTW it's not fair to compare Lisp syntax exclusively to C++. There are
other languages with syntaxes less clumsy than the latter.


-- Cedric Adjih

Andreas Bogk

Aug 29, 1997

Hi,

I've found an interesting opinion on the syntax matter on

http://www.ai.mit.edu/projects/transit/tn93/tn93.html ,

a document which I highly recommend.

It says:

Syntax is Dead -- We are enlightened enough today to understand the
role which syntax plays in programming languages. It is an artifact
and a matter of taste rather than a fundamental issue. Syntax can
rapidly become a non-issue with the basic technology we have today. It
is a simple matter for the machine to take programming descriptions
which admit to the same semantic meaning from any of the multitudes of
syntactic expressions. Similarly, it is a simple matter for the
machine to present ``programs'' to the programmer in just about any
syntactic form.


Wouldn't that be nice...

Andreas

--
Never underestimate the value of fprintf() for debugging purposes.


Chris Page

Aug 29, 1997

I liked the (now defunct) idea of supporting two syntaxes in Dylan.
Actually, I'd much rather have a syntax-neutral storage form (perhaps
something like a pre-parse Lisp) and editors that operate at the syntactic
level instead of on characters. Imagine editors that even allow for
user-customized "syntaxes" that help them visualize/edit code in ways
specific to their needs.

Also, someone mentioned infix vs. polish notation. I think this is a
perfect case for supporting "multiple syntaxes". Some expressions are more
appropriately expressed in infix or polish, or even in the 2D notation I
use on paper. Perhaps more importantly, readers of the code may be more
comfortable reading one notation vs. another. Another good case for this is
static data definitions. Imagine being able to view/edit these definitions
in a format customized for the data type.

I think the argument for one syntax vs. another is always a red herring.
The semantics are fundamentally important, and syntax should be dynamic,
meeting the user's changing needs instead of the compiler's static
requirements.

..........................................................................
Chris Page - Dylan Hacker - Harlequin, Inc. - <http://www.best.com/~page/>


Erik Naggum

Aug 29, 1997

* Kelly Murray

| Another thing I think is confusing in Lisp is the single quote character.

I think I'll make the objection a little wider than just single quote
characters. reader macros are bad, one might argue, because they introduce
new syntax. the "syntax disease" is to create new syntax just because you
need a new language feature. Common Lisp should not suffer from this
illness, one might argue, because it destroys all the benefits of the
language.

instead of (list 'a), one could equally well write (list (quote a)), and
instead of (member <item> <list> :test #'equalp), one could equally well
write (member <item> <list> :test (function equalp)), but things do get a
little hairier with other constructs that have been given special reader
macros. #(foo bar zot) must be written #.(make-array 3 :initial-contents
(quote (foo bar zot))) -- oops, I meant (load-time-value (make-array 3
:initial-contents (quote (foo bar zot)))). suddenly, we find ourselves
arguing against read-time construction of non-trivial constants.

while we could get rid of ' and #' in exchange for a little more typing, we
could _not_ get rid of reader macros as such without serious overhaul of
the way data is read and written in Lisp. first, we don't want to go the
route of the spastic syntactics (Perl, C++) and create languages so hard to
parse nobody gets it right except the compiler (and that only if you're
lucky). second, we really do _not_ want to break the distinction between
read-time construction of constants and evaluation (compile-time or
run-time, although the latter is so hard to get right today that a simpler
syntax would be tempting :) -- or we end up with different languages for
code and data, and we can't read source programs as data, anymore.

note that reader macros control the interpretation of the input stream as
data, not the grammar of the language. while "need a new language feature"
is a commonality between any creation of new syntax, their kinds and the
reasons for their needs are radically different between Lisp and most other
languages.

| It is quite unsettling to see (list 'a) if you've never seen or
| understood Lisp before. (not to mention many editors/wordprocs want to
| match quote chars)

I saw (list (quote a)) before I ever saw (list 'a), and I have never even
considered what you think is a problem as any more than a 5-second lesson.

| There is no reason single quotes can't be matched ...

well, if (list 'foo) would become (list 'foo'), then '(foo) would become
'(foo)', and (list '|foo bar|) would probably become (list 'foo bar')
because it's obvious that the space would be a stupid error and it might as
well be put to use, but then '(foo)' in the uncommon Lisp would be
interpreted as '|(foo)| in Common Lisp, right? likewise, is (list 'foo
'bar) in the uncommon Lisp the same as (list 'foo' bar) or an error?

is (list 'foo''bar') really a list of a single symbol |foo'bar|, the way
doubled quoting symbols are used in many languages?

is '(foo bar 'zot' quux)' and (quote (foo bar (quote zot) quux)) the same,
or is it the error it looks like it is? anybody remember the arcane syntax
of the `` process output operators in Unix shells and its interaction with
internal quoting mechanisms in Bourne and C shell? modern shells use a new
syntax $(...) instead of `...`, and, amazingly, the quoting problems just
disappeared. I see no reason to reinvent the syntax of the C shell.

so, how about using matching delimiters for quoted constants, instead?
then we could write [(foo bar [zot] quux)] without ambiguity at any level,
and the pretty-printer could just be told to print (quote x) as [x], but
before we re-invent the syntax for `quote', it would be of immense benefit
to all programmers everywhere if _strings_ could have opening and closing
delimiters so it was possible to determine whether you were inside or
outside a literal string without parsing the entire file from the start to
wherever you are. to those who have ISO 8859-1 available to them, and get
this in the full glory of 8-bit transport, I would like strings to look
like this: «hello, elegant» instead of "goodbye, stupid". I can easily add
reader macros that read this correctly, but printing strings from the Lisp
with this new syntax is much harder. I don't think it should be.
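
the reading half of such experiments really is only a few lines. a
sketch, in standard Common Lisp, of the [x] => (quote x) idea above
(illustrative code, not from the original post; it assumes well-formed
input):

  (set-macro-character #\] (get-macro-character #\)) nil)

  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (let ((form (read stream t nil t)))
        (peek-char t stream t nil t)   ; skip whitespace up to the ]
        (read-char stream)             ; consume the ]
        (list 'quote form))))

  ;; (read-from-string "[foo]")       => (QUOTE FOO)
  ;; (read-from-string "[(a [b] c)]") => (QUOTE (A (QUOTE B) C))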

| Try to explain THAT one to a non-Lisper, or better yet #'(lambda () ).
| Sure anyone can get used to it, and perhaps quickly, but will they get
| past it?

once people have heard that 'x is _identical_ to (quote x) and #'x is
identical to (function x), I have never seen them have any problems of any
kind using these abbreviated forms. I suppose that if you never told them
about this, they would never realize that (car (read-from-string "'foo"))
is the symbol `quote', not the symbol `foo', and they would also think they
needed ' before symbols that weren't to be evaluated, even in quoted lists.
(well, that is actually a question that was posted to comp.emacs a few
months ago, but the person who asked was positively _delighted_ to learn
what 'x actually meant. seems somebody had just taught him "raw syntax" in
the beautiful C/Perl/C++ tradition.)
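
for reference, the abbreviations really are pure reader shorthand, which
any Common Lisp listener will confirm:

  (read-from-string "'foo")     ; => (QUOTE FOO)
  (read-from-string "#'foo")    ; => (FUNCTION FOO)
  (eq 'foo (quote foo))         ; => T
  (eq #'car (function car))     ; => T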

| I've found the reaction to my syntactic changes highly negative from just
| about all existing Lisp programmers.

well, the reason _could_ be that they're just plain bad suggestions.

| In my opinion, Lisp people complain loudly how others are hung up about
| syntax, but they are the worst complainers of all when someone wants to
| change Lisp syntax.

yes, they are, but there is no contradiction in this observation.

we observe that "the others" are hung up in their syntax for _grammatical_
reasons, while Lisp purists maintain that each _type_ should have a unique
read and write syntax that facilitates transparent interchange of permanent
objects over text streams. (unfortunately, there are serious flaws in how
this is done in practice when the writer doesn't know what the reader will
accept, and the reader doesn't know what the writer used to think when it
wrote what it tries to read, as alluded to above for strings.)

whenever somebody wants to change the Lisp syntax, it's a pretty safe bet
that it is for grammatical reasons, but this is against the Lisp tradition
for its type-centric syntax, and thus it will be rejected, and it _should_
be rejected, because grammatical syntax is fundamentally _wrong_. if Lisp
is afflicted with it, the reactions are just as fierce as when we find that
the syntax-grammarians insist on adding more hieroglyphics to their caves.

also, Lisp's syntax is 100% configurable. nothing whatsoever keeps you
from installing a brand new readtable that doesn't even recognize Lisp, but
you get to install a brand new printer, as well, and this is more boring.
you could even read your input with an LR(k) parser with "keywords" and
conflicts and whatnot, instead of the simple LL(1) language that Lisp
parses with its recursive descent technology.

| We will all just end up using C/Java syntax in the end it seems.

only if we buy into their _grammatical_ syntax and abandon the _type_
syntax unique to Lisp (or nearly so, anyway). Lisp will never abandon its
type-centric view of syntax, so we will not end up using syntax for grammar
any time soon.

(note: I just named these concepts "grammatical syntax" and "type syntax",
ignorant as I am of any prior art in this important field, and I'm not even
sure these concepts are all that valid in the first place.)

[also, greetings to all from my cat, Xyzzy, who stretched out in her sleep
and rested her head long enough on the `delete' key while I got up to get
some more coffee that I had to retype major parts of this message, and thus
had time to figure out the two syntax concepts.]

#\Erik
--
404 You're better off without that file. Trust me.

joe davison

Aug 29, 1997

I suspect many of those who find lisp's syntax to be unpleasant and
unnecessary don't understand why lispers talk about the simplicity
of the syntax, and particularly why they think it's an advantage.

Let me see if I can motivate it.

I'll start by admitting that many, perhaps even most, human readers
of the language often find the lisp syntax somewhat harder to
read and understand than C or Pascal syntax, certainly so when first
exposed to the language. I'll also admit that all those parentheses
are a pain, particularly with editors that don't help balance them.

"Well," you might say, "if you start off by admitting that, it's
obvious that C/Pascal syntax is better -- why not just get rid of
the stupid lisp syntax?"

The answer to that is what most lispers fail to mention. Humans
are not the only ones who read lisp programs. That is, of course,
also true even for C++ -- the compiler has to read it. Aside from
the compiler, however, it is unlikely that many C/Pascal/C++ programs
will have any need to read a program -- there's not much value in it --
what's JoeAverageProgram going to do with another program after it
reads it? Execute it? Doesn't seem likely. Maybe pretty print it,
but one can get by without actually parsing the input for that.
Almost any other reason why one would want to read a C/C++/Pascal
program will cause the programmer to reach for a parser generator, like
Lex/Yacc or their successors, because the problem of writing a
parser by hand for such languages is harder than is worth it.

That is exactly the point. If I have to write a parser for a program,
it's much easier to write one to handle lisp syntax, because the
syntax is simpler. Furthermore, because of the nature of the language,
it's also much easier to do something with the program after you've parsed
it! It's even easy to execute it, if eval() is available! Functions
can be written in the input stream and called from compiled code!
Programs can construct functions at runtime and execute them. (OK, Dylan
isn't Lisp, so some of the latter arguments don't apply to dylan, but
I sensed a more general question.)
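
As a tiny illustration (a Common Lisp sketch, not anything from the
original post): the whole "parser" is one call to read, and the result
is immediately usable both as data and as a program:

  (with-input-from-string (s "(+ 1 (* 2 3))")
    (let ((program (read s)))
      (list program (eval program))))
  ;; => ((+ 1 (* 2 3)) 7)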

Because those things are easier with the simpler syntax, people are
more likely to find interesting applications for the capabilities, and
will resist losing those capabilities. Since the other syntaxes are
harder, people using those syntaxes don't try to do those things, and
they don't see the value in the simpler syntax. So, it's more than
just personal preference, but it probably starts there.

After using great macro processors, such as the ones in lisp and in SWAP
assembler at Bell Labs, it's really a pain either to treat the one in
C as the best that's available or to switch to the Other truly extensible
language, Forth.

joe davison

Martin Rodgers

Aug 30, 1997

Rainer Joswig wheezed these wise words:

> Sorry, but when posting all this shit over and over, have you ever
> written a working piece of software?

Sure. Does this attempt at an insult mean that you don't wish to
engage in debate? ISTR Peaceman trying a similar tactic.

Martin Rodgers

Aug 30, 1997

William Paul Vrotney wheezed these wise words:

> I did not argue that Lisp syntax was better than C/Java syntax I was just
> giving some reasons to the original poster's question of why most Lisp
> programmers prefer Lisp syntax the way it is. There is a difference.

I'm aware of that. There is most certainly a difference. There's also
a degree of hostility toward the Lisp syntax, even if this is based on
ignorance. The danger, I think, could be to assume that it is _only_
based on ignorance. There are actually people who find infix easier to
deal with. I'm not one of them, but I can appreciate their problem. I
just imagine what it would be like if I felt the same way about infix.

> All that you are doing is implying the "vogue" reason for justifying C++ and
> Dylan syntax. Popular acceptability is a persistent force. After Arabic
> numerals were introduced it took society using Roman numerals hundreds of
> years before it accepted Arabic numerals.

I'm being pragmatic. Instead of expecting to win everyone over in one
step, I see a long series of very small steps, each of which has a
good chance of winning many converts. I see both Dylan and Java as
such small steps. Ok, neither language is Common Lisp (that's plain
enough), but I don't expect many C++ programmers to appreciate CL as
easily as Dylan or Java. Once they've made that small step, they'll be
significantly closer to where we are.

Assuming, of course, that CL is indeed the language that we all want
to use. ;) Assuming also that we actually do want to convert people to
Lisp, instead of alienating them. Right now, the message that I get is
that we _are_ alienating most programmers, because of trivial
things like the syntax. Lisp isn't the only language to "suffer" from
this kind of small-minded thinking. Consider the fuss over semicolons
in other languages. To me, this is one very good reason for not using
semicolons - so we can avoid the syntax politics. Unfortunately, there
are other syntax issues for programmers to fight over, and the
semicolon business is only there because so many languages use that
character for similar purposes, like statement delimiters.

People can even argue over which character is used for comments! I
recently read a review of an app in which the reviewer used this
irrelevancy in his summary, listing it under the negative points.
When you consider the infix/prefix question, is it any wonder that
people have so much trouble with Lisp syntax? This is far more
profound than the choice of which character you use to signify
comments, or whether you use another character as a separator or a
terminator.

So, I think this problem will be around for a long time. That's why I
accept the Dylan syntax, I accept that Dylan is not Common Lisp, nor
is it Scheme. I just hope that a sufficient number of other
programmers also accept it. I also hope that a greater percentage of
Lisp programmers accept it than posts to UseNet would suggest. _We_
should know better than to argue over something as trivial as syntax.

Brian Rogoff

Aug 30, 1997

On 29 Aug 1997, Kelly Murray wrote:
> ... stuff about how people hate Lisp syntax deleted ...
>
> Hmm. The approach that I'm taking for my web server
> scripting language, which I call SilkScript, is to keep the same
> functional syntactic style, but eliminate the truly useless
> parentheses in a couple of very commonly used forms, in particular let.
> So I changed let from
>   (let ((a 10)
>         (b 20))
>     ...)
> to
>   (let a = 10
>        b = 20
>    do
>      ...)
>
> And also rely on LOOP, which has a non-parenthesis-crazy syntax.
>
> Do you think this is doomed as well?

IMO, yes. Why would someone who can't get past Lisp syntax find this any
better? And why would people who find Lisp syntax acceptable like this
any more? Personally, I find loop very un-Lisp-like, but I realize that
many (most?) Lispers love it. If anything, I wish that Common Lisp had
adopted the EuLisp/Dylan convention of enclosing <class> names in angle
brackets, and maybe regularized more of the function names along the lines
of Scheme. Those tiny changes would do a bit to enhance readability
without changing Lisp syntax at all.



> Another thing I think is confusing in Lisp is the single quote
> character. Just about every language in the world uses
> single quotes in matching pairs. It is quite unsettling to
Ada uses an unmatched '. Probably VHDL and PL/SQL too. I find Ada syntax
comparatively pleasant, certainly way better IMO than C/C++/Java syntax
(and semantics too ;-).

> I've found the reaction to my syntactic changes highly negative
> from just about all existing Lisp programmers.

I don't think that they are good changes, but I acknowledge that I
could be wrong. I think messing with syntax is a big waste of time at
this point. Standardizing FFIs, developing UI libraries (whither CLIM?),
and more mundane stuff like that seems more important.

With respect to Dylan, I hope someone with some pull read that last
paragraph. *Way* too much time was wasted on Dylan syntax. I don't find
it awful, nor do I find it awe inspiring. I do think that a lot of people
who had high hopes for Dylan in 1992 consider it dead now. I know I do,
though I hope something good might come out of it.

> We will all just end up using C/Java syntax in the end it seems.

Many of us are using it right now :-). But if you mean that Lisp will adopt
C syntax in the end, I sincerely doubt it.

-- Brian

Erik Naggum

Aug 30, 1997

* Martin Rodgers -> Rainer Joswig

| Sure. Does this attempt at an insult mean that you don't wish to a
| engage in debate? ISTR Peaceman trying a similar tactic.

it could also mean that Rainer is as tired of your repetitiveness as he is
of "the Peaceman's" repetitiveness. personally, I find this possibility a
lot more probable than your lame attempt at a counter-insult. I also have
the distinct impression that he would not be alone in such a sentiment.

William Paul Vrotney

Aug 31, 1997

In article <MPG.e71e4d43...@news.demon.co.uk>
mcr@this_email_address_intentionally_left_crap_wildcard.demon.co.uk (Martin
Rodgers) writes:

>
> William Paul Vrotney wheezed these wise words:
>
> > I did not argue that Lisp syntax was better than C/Java syntax I was just
> > giving some reasons to the original poster's question of why most Lisp
> > programmers prefer Lisp syntax the way it is. There is a difference.
>
> I'm aware of that. There is most certainly a difference. There's also

[middle of response]

> Lisp programmers accept it than posts to UseNet would suggest. _We_
> should know better than to argue over something as trivial as syntax.

Overall your response post was thoughtful. But just one more time: I did not
argue over syntax, I merely listed what I thought were some reasons why the
original poster's associates preferred Lisp syntax the way it is. He had a
legitimate question about Lisp syntax. You started off (see above) by
acknowledging this; however, your last sentence made me wonder if you forgot
it.

For the most part I agree with you, however I do not agree that syntax is
trivial. The Common Lisp group put a lot of energy and thought into Common
Lisp syntax. Lisp syntax is very important to me and a lot of other Lisp
programmers, including you I presume. For AI people, discussing artificial
or natural languages, syntax is crucial. We should not cower from
discussing Lisp syntax for fear that we will turn off people turned off by
Lisp syntax. After all, this is comp.lang.lisp, where one would expect to
find discussions on Lisp syntax. To some people discussing syntax is
boring, fine, but that does not deny the fact that syntax is essential.


--

William P. Vrotney - vro...@netcom.com

William Paul Vrotney

Aug 31, 1997

>
> IMO, yes. Why would someone who can't get past Lisp syntax find this any
> better? And why would people who find Lisp syntax acceptable like this
> any more? Personally, I find loop very un-Lisp-like, but I realize that
> many (most?) Lispers love it.

For one, I don't. I agree with you and find the extended loop macro to be
very un-Lisp-like. Most *extended* loop macro advocates that I've known
make a case for its power, but I find 99% of all the solving power and
efficiency that I need in the smaller syntax of the Common Lisp iteration
constructs; for the occasional 1% that I can't solve that way, I use the
*un-extended* loop macro with a return.

The extended loop macro is a whole language unto itself. I helped debug the
original implementation of the extended loop macro before there was a Common
Lisp version, and believe me, there is a ***lot*** of code to the extended
loop macro.

Martin Rodgers

Aug 31, 1997

William Paul Vrotney wheezed these wise words:

> For the most part I agree with you; however, I do not agree that syntax is
> trivial. The Common Lisp group put a lot of energy and thought into Common
> Lisp syntax. Lisp syntax is very important to me and to a lot of other Lisp
> programmers, including you, I presume. For AI people, discussing artificial
> or natural languages, syntax is crucial. We should not shrink from
> discussing Lisp syntax for fear that we will turn off people turned off by
> Lisp syntax. After all, this is comp.lang.lisp, where one would expect to
> find discussions of Lisp syntax. To some people discussing syntax is
> boring, fine, but that does not change the fact that syntax is essential.

In Lisp, we can appreciate syntax for what it truly is: just another
detail to be processed. No, people do not all find syntax boring;
instead they instigate long and tedious flamewars over the use of a
single character. In Lisp, we can set that character ourselves, making
such wars pointless. That detail _is_ trivial.

Please forgive me if I'm a little obsessed with this, but I've been
observing computer-related Jihads for as long as I've been using these
things, and it now feels like watching a bunch of children. There must
surely be better things for us to do. Ok, you can't live without
syntax, but let's _not_ treat it like something sacred.

As I pointed out above, Lisp syntax isn't "written in stone". If the
Dylan syntax bothers anyone, then perhaps they should be using another
language, like Lisp. I'm far more interested in semantics...

Martin Rodgers

Aug 31, 1997

Erik Naggum wheezed these wise words:

> it could also mean that Rainer is as tired of your repetitiveness as he is
> of "the Peaceman's" repetitiveness. personally, I find this possibility a
> lot more probable than your lame attempt at a counter-insult. I also have
> the distinct impression that he would not be alone in such a sentiment.

If I were attacking Lisp, and insisting that only C++ is a serious
language, then I'd be playing the "one language" game that Peaceman
used in his troll. I, however, am (hopefully) doing something else,
like trying to understand and explain why it is that so many people
fail to appreciate Lisp. I don't believe it's as simple as "they're
idiots". They're most certainly ignorant, but that's no crime. (We
could perhaps blame mass marketing for that. It would at least be
closer to the truth.)

It could be that we're not very good at explaining Lisp to non-
Lispers. I know that we can explain Lisp to ourselves, and to people
who are willing to learn. What's not so easy is explaining Lisp to
those who are _hostile_ to Lisp, esp. when that hostility is based on
misunderstandings about not just Lisp, but also the alternatives.
The general ignorance of compiler theory is a part of this, I think.

To appreciate Lisp syntax, it helps to appreciate that the syntax is
for data, and that Lisp data can be used to represent Lisp code. This
simple idea isn't as obvious to everyone. This could be one of the
major causes of confusion about Lisp, and thus of hostility toward Lisp.

If you disagree, just say so. There's no need for insults. I merely
ask that you consider this explanation, and try to appreciate the
point of view of people less well educated than ourselves. If that's
too much to ask, then you have every right to flame me.

Martin Rodgers

Aug 31, 1997

William Paul Vrotney wheezed these wise words:

> The extended loop macro is a whole language unto itself. I helped debug the
> original implementation of the extended loop macro before there was a Common
> Lisp version, and believe me, there is a ***lot*** of code to the extended
> loop macro.

I've always preferred the series functions, and regret that they didn't
make it into CL. The implementation of the series functions available
from the Internet fails to compile in ACL/PC or LispWorks, and I'm not
entirely sure that I understand why. LWW reports 160+ errors, even
after I've added compiler-let. What else is missing?

Someday I'll take a longer look at this code, but until then I can
only wonder what it would be like to use these functions.

Erik Naggum

Aug 31, 1997

* Martin Rodgers
| I'm asking you for a contribution to this thread.

and my answer should be quite simple to understand: I don't want to discuss
anything at all with you on your premises, I neither agree nor disagree
with you, and I generally don't read what you write except when I make the
mistake of posting something you later follow up to. I _reject_ your
questions, your opinions, your reasoning. what you have to say is truly
uninteresting. just how bluntly must this be put for you to "get it"?

let me say this, so I can hope that you will at least remember it and stop
asking people for "contributions" to your threads: every single thread I
have read from you or stumbled across is a diversion from any possibly
interesting topic that could be discussed in comp.lang.lisp. when I
realized that and that I was helping such diversions along, I stopped
replying to you, but _still_ you keep going, asking me for a contribution
to your threads. I have no desire, and no obligation, to take part in your
bogus discussions. it's sad, indeed, that I have to tell you this instead
of you getting the message on your own. I would have thought that it ought
to have been bordering on the bloody obvious by now.

#\Erik
--
Lady Died

Martin Rodgers

Aug 31, 1997

Erik Naggum wheezed these wise words:

> and my answer should be quite simple to understand: I don't want to discuss
> anything at all with you on your premises, I neither agree nor disagree
> with you, and I generally don't read what you write except when I make the
> mistake of posting something you later follow up to. I _reject_ your
> questions, your opinions, your reasoning. what you have to say is truly
> uninteresting. just how bluntly must this be put for you to "get it"?

May I remind you that I only invited a comment on Dylan's syntax from
you _after_ you criticised me. _You_ may find what I have to say
uninteresting; the feeling is mutual. All I ever get from you is pure
hostility. Why should I take your insults seriously?



> let me say this, so I can hope that you will at least remember it and stop
> asking people for "contributions" to your threads: every single thread I
> have read from you or stumbled across is a diversion from any possibly
> interesting topic that could be discussed in comp.lang.lisp. when I
> realized that and that I was helping such diversions along, I stopped
> replying to you, but _still_ you keep going, asking me for a contribution
> to your threads. I have no desire, and no obligation, to take part in your
> bogus discussions. it's sad, indeed, that I have to tell you this instead
> of you getting the message on your own. I would have thought that it ought
> to have been bordering on the bloody obvious by now.

Ahh, so anything that you disagree with is "uninteresting". Right, now
I get what you're saying. Sorry, Erik, I didn't mean to disagree with
you. I was instead commenting on the people who have an inflexible
attitude to an issue, and refuse to even discuss it. Their religious
fanaticism makes their arguments sound like a jihad.

I wasn't inviting you to say anything at all - you did that, after
Rainer. I've no idea why you should do that, nor why Rainer should
wish to post anything here. Neither of you appear to be interested in
Dylan, and I certainly recall Rainer claiming that "Dylan is dead" at
least a few times. If Dylan is indeed dead, then this entire thread is
academic, and there's nothing worth discussing.

This is the only reason that I asked how you feel about Dylan. If you
have no interest in Dylan, what is bothering you? If you do have an
interest in Dylan, I still don't see what is bothering you, as you've
yet to say. You do, however, have a great deal to say about _Common
Lisp_. Fair enough. Kelly Murray was also discussing CL's syntax.

> Lady Died

I've seen much better trolls than this - this is a troller's field
day. ;-) Try the newsgroups alt.conspiracy.princess-diana and/or
uk.current-events.princess-diana. You'll find some sick examples.

Followups set to comp.lang.dylan

Andreas Bogk

Sep 1, 1997

>>>>> "Martin" == Martin Rodgers <mcr@this_email_address_intentionally_left_crap_wildcard.demon.co.uk> writes:

Martin> To appreciate Lisp syntax, it helps to appreciate that the
Martin> syntax is for data, and that Lisp data can be used to
Martin> represent Lisp code. This simple idea isn't as obvious to
Martin> everyone. This could be one of the major causes of
Martin> confusion about Lisp, and so hostilty toward Lisp.

This is of course way cool. But I've stopped treating my code as data
when I stopped programming assembler. If I don't want to do that, why
should I suffer from a harder-to-read syntax?

Andreas

P.S.: Followup set.

David H Wild

Sep 1, 1997

In article <MPG.e7377997...@news.demon.co.uk>,
Martin Rodgers
<mcr@this_email_address_intentionally_left_crap_wildcard.demon.co.uk>
wrote:

> It could be that we're not very good at explaining Lisp to non-
> Lispers.

As a relative newcomer to Lisp I think that this is an important point. In
the arguments with the Peaceman, several people mentioned closures as
something in Lisp not paralleled in other languages. I wasn't sure what
they were so I consulted one or two of the books.

My first port of call was Tatar's "A Programmer's Guide to Common Lisp". The
word closure was not in the index, but there was something under "Lexical
Closure - see function objects". When I looked here there were some
examples, but, although there was the usual thing about "demonstrating the
power and elegance of Lisp", there was nothing showing *why* these
techniques were powerful.

I moved on to Graham's "ANSI Common Lisp" and, on page 107, there was a
section about closures. The examples were more detailed than in the
previous book, and one of them, "make-adder", was interesting - but it
didn't explain why it was worth writing such a function when the '+'
function was already available.

Only when I looked at Wilensky's "Common Lispcraft" and found examples
which defined private counters, unalterable by any other function, did I
begin to understand why the idea was significant. Graham's "On Lisp" has a
lot more about closures and I am beginning to understand what the
advantages are.
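
The kind of example that finally made it click was roughly this -- my
reconstruction of the private-counter idea, not Wilensky's actual code:

  (defun make-counter ()
    (let ((count 0))
      (lambda () (incf count))))  ; COUNT lives on inside the closure

  (defvar *counter* (make-counter))
  (funcall *counter*)  ; => 1
  (funcall *counter*)  ; => 2, and no other function can touch COUNT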

As I say, I have begun to understand, but it has taken a lot of digging and
without access to *several* books I wouldn't have got there. To be fair,
and I am not trying to disparage Lisp, I suspect that much of the
difficulty comes from the fact that if you don't appreciate the problem it
is difficult to see the significance of the solution. When I was at school
we learned the formula for solving quadratic equations, but no-one ever
told us where we might meet a quadratic equation - most of the examples
were expressed as pure algebra - or why we might want to solve it. Because
of this, an error in the working wasn't obvious in the way it often is in
ordinary arithmetic. I might not know the exact total at the supermarket
but I still have a fair idea of what I have spent without adding it up as I
go on.

As with many other Lisp elements what we solitary learners need is a lot
more worked examples with comments about the technical aspects, including
criteria for picking the appropriate technique for our problem.

--
__ __ __ __ __ ___ _____________________________________________
|__||__)/ __/ \|\ ||_ | /
| || \\__/\__/| \||__ | /...Internet access for all Acorn RISC machines
___________________________/ dhw...@argonet.co.uk
Uploaded to newnews.dial.pipex.com on Mon, 01 Sep 1997 20:02:23


Vassili Bykov

Sep 2, 1997

In article <y8a2038...@horten.artcom.de>,

Andreas Bogk <and...@artcom.de> wrote:
> This is of course way cool. But I've stopped treating my code as data
> when I stopped programming assembler. If I don't want to that, why
> should I suffer from a harder-to-read syntax?

Your assembler parallel implies that any function of a running Lisp
program is represented as a list that can be destructively modified by
that same program at run time. This is *not* the case in modern Lisps.
When people talk about treating Lisp code as data, they most often mean
macroexpansion - which occurs at compilation as opposed to run time. (I
hope you are not about to say "but Lisp is an interpreted language!") At
compilation time, regardless of the programming language, a program is,
indeed, just data. The difference Lisp makes is that that same program
can specify functions to manipulate that data at compilation time. So
the true assembler or C parallel would be an assembler or C program
containing functions that extend the _compiler_ before or while the
program is compiled. (Granted, a pretty fantastic thing to imagine -
talk about Stone Age and flint knives.)
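
A small concrete sketch of that difference in ordinary standard Common
Lisp (the SQUARE macro is invented here for illustration): its body runs
at compile time and computes new code from the program-as-list it
receives:

  (defmacro square (x)
    ;; X arrives as unevaluated list structure, at compile time
    (if (atom x)
        `(* ,x ,x)              ; a plain variable is safe to repeat
        (let ((tmp (gensym)))   ; a complex form must be evaluated only once
          `(let ((,tmp ,x)) (* ,tmp ,tmp)))))

  ;; (macroexpand-1 '(square (f y)))
  ;; => (LET ((#:G42 (F Y))) (* #:G42 #:G42)), roughly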

Why would anyone want to do that and "suffer from a harder-to-read
syntax"? Because it allows you to adjust the language to bring it closer
to the problem domain, thus making programs *easier to read and
understand* -- contrary to the "syntax problem"! Speaking of which, that
complaint has about as much credibility as the "Lisp is slow" mantra. If
you have trouble understanding Lisp code, you are inexperienced, that's
all. The same thing applies to any other language. And if we are talking
of languages as tools used by professionals, as opposed to gawkers
wandering through newsgroups, I won't believe that an experienced Lisper
would have more trouble understanding someone else's Lisp code than an
experienced C programmer understanding someone else's C code -- and, at
that, the fragment analyzed by the Lisper in the same time would *mean*
much more.

--Vassili

-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet

Martin Rodgers

Sep 2, 1997

Rainer Joswig wheezed these wise words:

> Sorry, but when posting all this shit over and over, have you ever
> written a working piece of software?

Neither you nor Erik seems willing to discuss the real issue, which is
not about me, but about - in this case - Dylan syntax. It was changed,
apparently, to make the language more appealing to non-Lispers. This
would be consistent with the goals set in the 1992 DRM.

I've just been speaking to Erik by phone, and he tried the same
tactic. Unfortunately, I indulged him, and then we ran out of time. I
was hoping to discuss syntax etc, but no matter. There's very little
worth saying, really. After all, it has already been said.

As for Erik's flaming...We didn't have time to discuss that, either.


--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough

Please note: my email address is gubbish

ignorance is better than stupidity
you can cure ignorance

Martin Rodgers

Sep 2, 1997

David H Wild wheezed these wise words:

> As I say, I have begun to understand, but it has taken a lot of digging and
> without access to *several* books I wouldn't have got there. To be fair,
> and I am not trying to disparage Lisp, I suspect that much of the
> difficulty comes from the fact that if you don't appreciate the problem it
> is difficult to see the significance of the solution. When I was at school
> we learned the formula for solving quadratic equations, but no-one ever
> told us where we might meet a quadratic equation - most of the examples
> were expressed as pure algebra - or why we might want to solve it. Because
> of this, an error in the working wasn't obvious in the way it often is in
> ordinary arithmetic. I might not know the exact total at the supermarket
> but I still have a fair idea of what I have spent without adding it up as I
> go on.

I recall mentioning Boolean Algebra to Peaceman. I guess he either
missed the point, or refused to acknowledge it. I learned a _little_
about this branch of mathematics, when I was 9 years old, from a
Disney encyclopedia, a reading lamp with two switches (AND), and a
hall light with two switches (XOR). This is what a child can teach
themselves, if they have the right book.

A golden rule that I found years ago, in a book review, is never to
introduce a term without also explaining it. The example picked on in
that review was, I think, 'pixel'. I've certainly noticed it used on
live TV (so it may have been forgivable) without any explanation at all,
so I wouldn't be surprised if some people learning about computers from
that TV series - which was specifically teaching about computers - are
a little confused. Still, it was the first show of its kind in the UK,
and a few mistakes are expected.

Perhaps we should also be forgiving of the various Lisp tutorials that
makes this mistake. Let's not forget the point that Steele makes in
the preface to CLtL2:

> The 1984 definition of Common Lisp was imperfect and incomplete. In
> some cases this was inadvertent: some odd boundary situation was
> overlooked and its consequences not specified, or different passages
> were in conflict, or some property of Lisp was so well-known and
> traditionally relied upon that I forgot to write it down.

This may well be a common mistake (pun intended).



> As with many other Lisp elements what we solitary learners need is a lot
> more worked examples with comments about the technical aspects, including
> criteria for picking the appropriate technique for our problem.

Ah, yes. Steele gives a lot of excellent examples.

Some Lisp tutorials, however, could use more complete working
programs. The Tatar book that you mentioned has only two useful
programs in it, while all the code that Winston and Horn give us in
their book has some practical value. Not every reader may need to use
quadratic equations, but you can be sure that some of them will.
Enough to get a realistic feel for how Lisp can be applied to such
problems.

So, the quality of the examples is also important. Still, quantity is
good. If you have quantity _and_ quality, then you've probably got a
very good book. Another characteristic that I like is dividing a book
into two parts: teaching the language itself and programming with it.
These are two different, but strongly related, things. Lisp style is
not something you can learn from a reference manual. You can, however,
learn it from good examples.


--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
Please note: my email address is gubbish

will write lisp code for food

Andreas Bogk

Sep 2, 1997

>>>>> "Vassili" == Vassili Bykov <vby...@cam.org> writes:

Vassili> time. This is *not* the case in modern Lisps. When
Vassili> people talk about treating Lisp code as data, they most
Vassili> often mean macroexpansion - which occurs at compilation
Vassili> as opposed to run time. (I hope you are not about to say

Ok, now we're talking features. Dylan has powerful macros too, despite
its infix syntax.

Vassili> "but Lisp is an interpreted language!") At compilation

Hey, I'm much smarter than that :). Even if it is interpreted Lisp I'm
using to write this posting.

Vassili> Why would anyone want to do that and "suffer from a
Vassili> harder-to-read syntax"? Because it allows you to adjust the
Vassili> language to bring it closer to the problem domain, thus
Vassili> making programs *easier to read and understand* --

You can do all this with Dylan.

Vassili> contrary to the "syntax problem"! Speaking of which, that
Vassili> complaint has about as much credibility as the "Lisp is
Vassili> slow" mantra. If you

I'm not so sure. Maybe it's just that I've been using infix syntax
ever since I learnt it in school at the age of 7. Maybe it's
because I've never had an RPN pocket calculator. But

a[7] := 3;

is just closer to how I think than

(aref-setter 'a 7 3) ; and I'm not even sure if this is correct
                     ; because the syntax gives me so few hints on
                     ; what is what

and

b := c * d + e * f;

feels much more natural to me than

(setq b (+ (* c d) (* e f))) .

Of course I do understand both forms, but by cultural bias I prefer
infix. In fact, most people do.

And if I gain nothing but a steep learning curve by switching to
prefix, why should I?

Vassili> wandering through newsgroups, I won't believe that an
Vassili> experienced Lisper would have more trouble understanding
Vassili> someone else's Lisp code than an experienced C programmer
Vassili> understanding someone else's C code -- and, at that, the

In fact, I believe the contrary. Have you ever seen submissions to the
International Obfuscated C Code Contest?

Andreas

Thant Tessman

Sep 2, 1997

William Paul Vrotney wrote:

> [...] Popular acceptability is a persistent force. After

> Arabic numerals were introduced it took society using Roman
> numerals hundreds of years before it accepted Arabic numerals.

And we still write them backwards. (That is, since we write
English from left to right, we should be writing numbers
least-significant digit first from left to right.)

-thant

--
thant at acm dot org

Erik Naggum

Sep 2, 1997

* Andreas Bogk

| Maybe it's just that I've been using infix syntax ever since I learnt
| it in school at the age of 7. Maybe it's because I've never had an RPN
| pocket calculator.

I wonder how many Lisp programmers like RPN calculators compared to the
rest of the population. (I don't leave home without my HP48GX, and I've
used HP calculators since a friend of my father's showed me his when I
was only an egg.)

| But
|
| a[7] := 3;
|
| is just closer to how I think than
|
| (aref-setter 'a 7 3) ; and I'm not even sure if this is correct
| ; because the syntax gives me so few hints on
| ; what is what

you know, a significant part of the problem with non-Lispers not liking
Lisp is that they invent such horrible "Lisp" when they don't know what it
would actually look like. I think this is indicative of something.

your a[7] is (aref a 7) in Common Lisp.
your a[7] := x is (setf (aref a 7) x) in Common Lisp.

depending on how your multidimensional tables look, if you have any, we
have a[7][6][5] or a[7,6,5] vs (aref a 7 6 5).

| b := c * d + e * f;
|
| feels much more natural to me than
|
| (setq b (+ (* c d) (* e f))) .
|
| Of course I do understand both forms, but by cultural bias I prefer
| infix. In fact, most people do.

for trivial examples, most people do. I find it odd that mostly trivial
examples are shown to debunk prefix syntax.

a + b + c                      (+ a b c)
a + b * 4 + c * 16             (+ a (* b 4) (* c 16))
a * (b + 4) * (c + 16)         (* a (+ b 4) (+ c 16))

a < b AND b < c                (< a b c)
a /= b AND b /= c AND a /= c   (/= a b c)

it is also typical of infix languages to run into a terrible mess with the
precedence of operators, which causes programmers to use temporary variables
in order to simplify their expressions enough to be readable. also, multiline
expressions are actually frowned upon in classes because infix just gets
too complex. it is not uncommon to find Lisp code that adds together a
bunch of values taken from nested expressions in a single form:

(+ (foo 17)
   (bar 47 11)
   (zot 23 69))

| And if I gain nothing but a steep learning curve by switching to
| prefix, why should I?

you do in fact gain very much. it is just hard for infix people to imagine
that prefix forms could yield more than a trivial rewrite of _their_ forms
of expression. you won't find Lisp programmers who write (+ (+ a b) c)
just because a + b + c is really (a + b) + c, and (and (< a b) (< b c)) is
really uncommon in Lisp code even though infix folks can't hack a < b < c,
although that's precisely what they learned in their early years in school.

incidentally, reader macros are available that take infix expressions
enclosed in some special syntax (like a pair of brackets) and turn them
into prefix form for Lisp to see. this only works for trivial arithmetic.
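
a minimal sketch of the idea in Common Lisp, ignoring precedence
entirely (one reason such macros only suit trivial arithmetic); the
bracket characters and the helper name are illustrative, not taken from
any particular package:

  (defun infix->prefix (tokens)
    ;; left-associative and precedence-free:
    ;; (a + b + c) => (+ (+ a b) c)
    (if (null (rest tokens))
        (first tokens)
        (destructuring-bind (lhs op rhs &rest more) tokens
          (infix->prefix (cons (list op lhs rhs) more)))))

  (set-macro-character #\] (get-macro-character #\)))  ; ] now closes like )

  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (infix->prefix (read-delimited-list #\] stream t))))

  ;; now [a + b + c] reads as (+ (+ a b) c)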

Erik Naggum

unread,
Sep 2, 1997, 3:00:00 AM9/2/97
to

* Martin Rodgers

| I've just been speaking to Erik by phone, and he tried the same
| tactic. Unfortunately, I indulged him, and then we ran out of time. I
| was hoping to discuss syntax etc, but no matter. There's very little
| worth saying, really. After all, it has already been said.
|
| As for Erik's flaming...We didn't have time to discuss that, either.

you know, for a person who professes not to talk about people but about so
and so random and irrelevant issue, you have a bizarre hangup with
discussing those people who _reject_ your pathetic attempts at "contact".

the first thing I told you in your unsolicited, unwanted phone call was that
it wasn't a good time to call because I had a friend coming over and had to
clean up a little first. I had to tell you that I had to hang up because
of my invited guest _five_ times and you still kept talking and talking and
I had no option but to hang up on you. I keep the phone number in the
Organization header and on my home page and stuff like that because I want
to give people I'd like to talk to a chance to reach me. I do _not_ keep
it there to give stalkers and pathetic wannabes a chance to annoy me.

I also seem to remember you telling me that you had lots of people who
agreed with you and me encouraging you to talk to them instead of bothering
_me_ all the time.

and we didn't run out of time, you pathetic liar, _you_ had no time to run
out of to begin with since you were going on overtime from the very start.
yes, I have blocked your e-mail with the same software that blocks equally
undesired unsolicited commercial e-mail, but you keep mailing me from other
addresses, flat out refusing to take a hint. how many ways and how many
times do I need to tell you that I DO NOT WANT TO COMMUNICATE WITH YOU. I
thought I had made this exceptionally clear, but when I don't want to talk
to you, you take this public with despicable lies about what you did. this
is _unbelievably_ tasteless. I cannot fathom what you think you can gain
from me that you want so badly as to attempt to extort it from me with this
abominable form of blackmail that you engage in.

GO AWAY, Martin Rodgers!

Thant Tessman

unread,
Sep 3, 1997, 3:00:00 AM9/3/97
to

Aaron Gross wrote:

> Why should we be writing numbers least-significant digit first?
> This makes you skip ahead to get to the important part. [...]

Arabic is written from right to left (isn't it?), which is why
Arabic numbers are written right to left, least significant digit
first. Think about doing math with natural numbers; everything
starts with the least significant digits at the right and extends
to the left.

Just trying to figure out how culturally sticky syntax can be.

Which end of the egg do you eat first?

Gareth McCaughan

unread,
Sep 3, 1997, 3:00:00 AM9/3/97
to

Maurizio Vitale wrote:

> That wasn't the point. Erik's point (at least as I understood it) was
> that most non-lisp programmers when faced with a lisp-in-3-days kind
> of introduction are tempted into writing
>
> (and (< a b) (< b c))
>
> simply because they think in the language they come from and simply
> map && into and, < into < and switch from infix to prefix.

I'm sure that's true, but it's a point about the specific languages
they're coming from rather than about infix notation as such.

If there were a lot of people saying "look how ugly prefix notation
is -- you have to write (and (< a b) (< b c))", then it would be very
relevant to point out that you don't. But the usual objection is
"simple arithmetic expressions are much more cumbersome", and to that
it simply isn't relevant that some languages using infix notation
are unnecessarily ugly too.

What's more, infix notation can lead to winnage here too; you can
(in principle) say things like "a<b<=c/=d", which *would* have to
be represented by a big "and" in Lisp.
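
(For instance, that chain would come out as
(and (< a b) (<= b c) (/= c d)).)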

Erik's main argument for prefix notation (it's regular and copes better
than infix with complicated stuff) is absolutely right, and when
combined with the facts that

- prefix notation is much nicer for machines to work with, making
  e.g. clever macros easier to write

- prefix notation is exactly what just about all languages use
  for function calls, so that the alleged advantages of infix
  notation only apply to a very limited class of expressions

it's a good argument. But to say, as Erik did, that "infix folks
can't hack a < b < c" is just plain wrong. C and Pascal folks may
be unable to hack a<b<c, but that doesn't have much to do with
infixness (infixity? infixitude?).

--
Gareth McCaughan Dept. of Pure Mathematics & Mathematical Statistics,
gj...@dpmms.cam.ac.uk Cambridge University, England.

Erik Naggum

unread,
Sep 3, 1997, 3:00:00 AM9/3/97
to

* Gareth McCaughan

| But to say, as Erik did, that "infix folks can't hack a < b < c" is just
| plain wrong. C and Pascal folks may be unable to hack a<b<c, but that
| doesn't have much to do with infixness (infixity? infixitude?).

hm. in a < b `<' is a binary operator, while it is ternary in a < b < c,
and n-ary in a < b < ... < z. although I know of no language that has
anything but binary infix operators, I get the impression from the above
that you do. (if not, I will continue to believe that arity _is_ connected
to infix vs prefix. in particular, n-ary functions are usually written in
a prefix form in infix languages, too, so one might argue that infix vs
prefix should only be fought on the grounds of binary operators, but this
destroys the motivation for the fight, which was to show that infix was
more natural in general. if something is more natural only for extremely
restricted parts of what people wish to do, and one can demonstrate that
they are, in fact, restricted in their thinking as a result, I think we
have a pretty good argument that such restrictions should be lifted, and
thus that n-ary operators must be allowed to compete, which would mean that
infix must plead nolo contendere and prefix must win.)

there is also an important difference between n-ary infix operators (if
they do exist) that basically expand to a long sequence of binary operators
with `and' glue in between, the glue itself being binary in nature (plus
some guarantees about one-time evaluation of duplicated terms), and true
multi-argument functions, as is the natural thing in prefix syntax, such as
for ordering in particular.

to a Lisper, (< a b c) probably communicates "monotonically increasing
order of" instead of "less than". at least it does to me, and I find
similar ways to "read" the prefix form vastly more expressive and easy to
deal with. e.g., (first foo) is read as if it were "first of foo". (it
is, of course, no more read out loud than any other reading is.)

I also find it interesting to see how ranges are expressed in the integer
subtypes. an inclusive range like [0,10] would be written (integer 0 10),
but an exclusive range like (0,10) would be written (integer (0) (10)), so
one would be tempted to define new _types_ for specific ranges and use,
say, (check-type x state-index) where state-index might be (integer 0 31),
instead of an explicit test like (<= 0 x 31). what does this have to do
with infix vs prefix? I'm trying to show where prefix _gains_ a user above
being a mechanical rewrite of the infix forms, showing a more mathematical
thinking than the simple, syntactic arithmetic forms that infix favors.
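
to make this concrete, here is a sketch (deftype and check-type are
standard Common Lisp; state-index is just the example name above):

  (deftype state-index ()
    "An integer in the inclusive range [0,31]."
    '(integer 0 31))

  (defun set-state (x)
    (check-type x state-index)  ; signals a correctable type-error if not
    x)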

I believe infix syntax restricts us to think about that which can be
expressed with binary operators and that it would actually be _relieving_
to be able to work with n-ary operators in practice, without incurring a
high syntactic cost. (I know ML can use infix syntax for user-defined
functions, too, but there are other factors that make ML's syntax less
bothersome than the infix syntax as found in Pascal, C/C++, Perl, etc.)

Bruce Tobin

unread,
Sep 3, 1997, 3:00:00 AM9/3/97
to Cedric Adjih

Cedric Adjih wrote:
>
> William Paul Vrotney (vro...@netcom.com) wrote:
> :
> : [some points in favor of Common Lisp syntax]
> :
> I have recently tried Scheme, and the "simplicity of the syntax" that
> is sometimes pointed out is not an obvious advantage in my opinion.
>
> The code I wrote was cluttered with things like:
>
> (vector-set! (mytype->content container) i
>              (+ (vector-ref (mytype->content container) i)
>                 (vector-ref some-data i)))
>
> while in Python that would be written:
> container.content[i]=container.content[i]+someData[i]
>

I'm not a Schemer, so I can't say whether what you've written is
perspicuous Scheme. In Common Lisp, though, I'd write something like:

(incf (svref (container-content container) i)
      (svref some-data i))

That's if I had to do this by addressing individual array elements. Most of
the time code like this occurs while iterating over array elements,
which I would do using a mapping function; see below.

> And there were many instances of things like:
>
> (let ((result 0)) ; MIT scheme doesn't allow (define result 0) :-(

Hmm. It doesn't? Doesn't Scheme allow globals? Anyway, the let
structure is better from a software engineering standpoint; if you learn
to program without relying on side-effects you'll be amazed at how much
easier your code is to debug.

> (do ((index 0 (1+ i))
>      ((= index (vector-length (mytype->field myinstance)))))
>   (let ((field1 (vector-ref (mytype->field myinstance) i))
>     ...)))
> OR
> (do ((index 0 (1+ index))
>      (result 0))
>     ((= index (vector-length (mytype->field myinstance))) result)
>   (let ((field1 (vector-ref (mytype->field myinstance) index))
>     ...)))
>
> while in Python would be written as:
>
> result=0
> for index in range(0, len(myinstance.field)):
>     field1=myinstance.field[index]
>     ...

In CL I'd probably say

(let ((result 0))
  (map nil #'(lambda (field1)
               ...)
       (mytype-field myinstance))
  result)

This is one line longer than your code. But see what happens when we
combine both examples. Python code:

for i in range(0, len(container.content)):
    container.content[i]=container.content[i]+someData[i]

Lisp code:

(map nil #'(lambda (x y) (incf x y))
     (container-content container) some-data)

Shorter and (once you're comfortable with the anonymous function syntax
in CL, which I admit could stand improvement) clearer.

>
> It looked somewhat like Tcl-programming to me (except that scheme is much
> more fun) and I still prefer Python syntax. At least until I become
> enlightened about the wonderful scheme macros (it seems easier to find PhD
> theses than tutorials on this topic). Of course, I tried to switch to
> Common Lisp, but, well, I'm still reading CLTL :-)

CLTL is a good reference but a lousy way to learn CL. I'd recommend Paul
Graham's 'ANSI Common Lisp'. It has a good introduction to CL macros, and
the same author's 'On Lisp' treats the subject in depth. CL macros differ
from Scheme macros but are at least as powerful. For example, a library in
the CMU archive contains the macro 'doseq', with the aid of which our
second example could have been written:

(let ((result 0))
  (doseq (field1 (mytype-field myinstance) result)
    ...))
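
(For reference, a doseq along these lines can be written as a thin
wrapper around MAP -- this is a sketch, not the actual CMU archive
code:

  (defmacro doseq ((var sequence &optional result) &body body)
    `(progn
       (map nil #'(lambda (,var) ,@body) ,sequence)
       ,result))

It expands the example above into essentially the MAP version given
earlier.)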


> For now, I can courageously cope with the internal complexity of
> parsers/software tools of other languages (especially since I don't write
> them). Your mileage varies.

Well, once you've worked with a language that makes it easy for you to
write them, it's hard to go back.

>
> BTW it's not fair to compare Lisp syntax exclusively to C++. There are
> other languages with syntaxes less clumsy than the latter.

Sure, but Python isn't suitable for real applications. Too slow, and
the use of indentation as a control flow construct makes it unsuitable
for writing anything that has to be maintained over a long period of
time. It makes a nice scripting language, though. Of course, Python
isn't the only other contender, so your point stands.

>
> -- Cedric Adjih

Maurizio Vitale

unread,
Sep 4, 1997, 3:00:00 AM9/4/97
to

Erik Naggum <cle...@naggum.no> writes:

> * Gareth McCaughan
> | But to say, as Erik did, that "infix folks can't hack a < b < c" is just
> | plain wrong. C and Pascal folks may be unable to hack a<b<c, but that
> | doesn't have much to do with infixness (infixity? infixitude?).
>
> hm. in a < b `<' is a binary operator, while it is ternary in a < b < c,
> and n-ary in a < b < ... < z. although I know of no language that has
> anything but binary infix operators, I get the impression from the above
> that you do. (if not, I will continue to believe that arity _is_ connected
> to infix vs prefix.

There are languages with infix N-ary operators (including C with ? :).
I'm also sure there are languages with user-definable infix N-ary
operators, but those are much rarer and I cannot name one offhand;
I'll look it up.
There was a paper (by S.P. Jones of Haskell fame, if memory serves) about
parsing those languages, but none of the languages I know in that class
(KRC, Miranda and Haskell) allows user-defined N-ary operators.

In some sense message sending in Smalltalk could qualify, as you write
things like object selector1: val1 selector2: val2, but that's
stretching the definition a bit.

Maybe someone else can help my memory (Snobol4 or Icon?)

-mav

Lyn A Headley

unread,
Sep 4, 1997, 3:00:00 AM9/4/97
to

Bruce Tobin <bto...@infinet.com> writes:

> Sure, but Python isn't suitable for real applications. Too slow, and
> the use of indentation as a control flow construct makes it unsuitable
> for writing anything that has to be maintained over a long period of
> time. It makes a nice scripting language, though. Of course, Python
> isn't the only other contender, so your point stands.

sheesh. I thought your post was pretty reasonable until I read this
bunk. I have no idea what you mean by "real applications," but what
you wrote sounds suspiciously like flame-bait. I once wrote a 2000-line
application in Python (took me about 2 weeks); it felt pretty
real to me. (you may be talking about 100,000 line apps, but how many
apps get that big?) As for the speed issue, Python is like any other
language: you write your code, and if performance is a problem, you
optimize (and in python, you can optimize like crazy by rewriting
time-critical parts in C). And stop making wild, unsupported claims
like the one about indentation-based syntax ruining maintainability.
What are you talking about? The only reason Python's syntax is like
that is to _improve_ readability and hence, maintainability. And that
is just what it does.

I don't mean to bite your head off, but what do you expect when
you insult a language so near and dear to my heart?

-Lyn

-------------------------------------------------------
remove the word "bogus" from my address for the real one.

Barry Margolin

unread,
Sep 4, 1997, 3:00:00 AM9/4/97
to

In article <862035x...@g.pet.cam.ac.uk>,
Gareth McCaughan <gj...@dpmms.cam.ac.uk> wrote:
> - infix languages encourage a possibly misleading distinction
> between built-in binary operations and user-defined functions;

I don't think this is quite the distinction they make. Rather, the
distinction is between "common arithmetic and similar operations" and
"other operations", since many infix languages have built-in operations
that use function call notation (PL/I and Fortran have quite a few), and
some infix languages allow user-defined functions with infix notation
(e.g. C++'s operator overloading).

> - infix languages accordingly tend to make their builtins binary
> when they could, with different syntax, be variable-arity;

True. Luckily, some of the most common infix operators, such as + and *,
also happen to be associative and commutative, so they act like they're
variable-arity.

> - infix languages are less easily parsed;

This is an issue for quick and dirty code, but it's not an issue for
implementors -- compiler technology to parse infix notation is quite
mature.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Cambridge, MA
Support the anti-spam movement; see <http://www.cauce.org/>
Please don't send technical questions directly to me, post them to newsgroups.

William Paul Vrotney

unread,
Sep 5, 1997, 3:00:00 AM9/5/97
to

In article <5unec0$j...@tools.bbnplanet.com> Barry Margolin
<bar...@bbnplanet.com> writes:
>
> In article <862035x...@g.pet.cam.ac.uk>,
> Gareth McCaughan <gj...@dpmms.cam.ac.uk> wrote:
>
> > - infix languages are less easily parsed;
>
> This is an issue for quick and dirty code, but it's not an issue for
> implementors -- compiler technology to parse infix notation is quite
> mature.
>

The fact "infix languages are less easily parsed" applies to programmers as
well as compilers. I can't count the number of times I've wasted lots of
time on bugs in C and C++ expressions because of an oversight "infixing" the
precedence rules. This never happens in Lisp.

Funny thing is, I've noticed that to save time many C and C++ programmers
(and I'm guilty myself) will put in a lot of parentheses to avoid spending
the time to parse the expression and to be sure of the outcome.

Richard A. O'Keefe

unread,
Sep 5, 1997, 3:00:00 AM9/5/97
to

mcr@this_email_address_intentionally_left_crap_wildcard.demon.co.uk (Martin Rodgers) writes:
>To appreciate Lisp syntax, it helps to appreciate that the syntax is
>for data, and that Lisp data can be used to represent Lisp code. This
>simple idea isn't as obvious to everyone. This could be one of the
>major causes of confusion about Lisp, and so hostility toward Lisp.

I'd like to toss in a word here: Prolog.
Prolog and Lisp have a heck of a lot in common.
In particular, they share the idea that "to read code" is the same thing
as "to read data". Just as you can read a function OR a data value in
Lisp by calling (read), you can read a clause OR a data value in Prolog
by calling read(Result). Prolog allows user-defined prefix, infix, and
postfix operators in input _and_ output. These don't get in the way of
macro processing, because it's defined that X + Y is just sugar for +(X,Y).
Some Prolog systems have supported distfix operators as well, so that e.g.
if X then Y else Z ==> 'if then else'(X, Y, Z).

There are quite a number of things that Lisp reader-macros deal with that
have no Prolog equivalent (like ' and #'), but I don't see any particular
reason they couldn't be supported as well.
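
(For the record, the two Lisp reader macros in question are pure
abbreviation:

  'x      ; reads as (quote x)
  #'car   ; reads as (function car)

so a Prolog equivalent would only need to expand the same kind of sugar
at read time.)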

--
Unsolicited commercial E-mail to this account is prohibited; see section 76E
of the Commonwealth Crimes Act 1914 as amended by the Crimes Legislation
Amendment Act No 108 of 1989. Maximum penalty: 10 years in gaol.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.

Bruce Tobin

unread,
Sep 5, 1997, 3:00:00 AM9/5/97
to

Lyn A Headley wrote:
>
> Bruce Tobin <bto...@infinet.com> writes:
>
> > Sure, but Python isn't suitable for real applications. Too slow, and
> > the use of indentation as a control flow construct makes it unsuitable
> > for writing anything that has to be maintained over a long period of
> > time.
>
> sheesh. I thought your post was pretty reasonable

At least if you ignore the lisp code not working.

> until I read this
> bunk. I have no idea what you mean by "real applications," but what
> you wrote sounds suspiciously like flame-bait.

Poor choice of words on my part. What I meant to say is that Python is
not yet suitable for medium-to-large scale application development. Of
course, lots of folks would disagree with even this modified statement,
but I'm prepared to defend it.

> I once wrote a 2000
> line application in Python; (took me about 2 weeks) it felt pretty
> real to me. (you may be talking about 100,000 line apps, but how many
> apps get that big?)

Well, many of the apps I work on get that big. Too many of them,
unfortunately. Of course, some of those who read the code in my last
post may feel compelled to observe that if I wrote code more correctly I
might not need to write so much of it.

> As for the speed issue, Python is like any other
> language: you write your code, and if performance is a problem, you
> optimize (and in python, you can optimize like crazy by rewriting
> time-critical parts in C).

Sometimes that's feasible, and sometimes not. Contrary to accepted
wisdom on this subject I've found that paying a little attention to
performance issues from the outset can save you a lot of grief later on;
YMMV.

> And stop making wild, unsupported claims
> like the one about indentation-based syntax ruining maintainability.
> What are you talking about? The only reason Python's syntax is like
> that is to _improve_ readability and hence, maintainability. And that
> is just what it does.

There are just too many ways that indentation information can disappear
from code that has to be maintained over a long period of time by many
different people: sections of code get cut and pasted, editors convert
spaces to tabs and disagree as to how many spaces a tab should expand
into, etc. People (and tools) are used to treating whitespace as
meaningless.

>
> I don't mean to bite your head off, but what do you expect when
> you insult a language so near and dear to my heart?
>

I'm just relieved that you saw fit to ignore my lousy code.

Martin Rodgers

unread,
Sep 6, 1997, 3:00:00 AM9/6/97
to

Richard A. O'Keefe wheezed these wise words:

> I'd like to toss in a word here: Prolog.

Indeed. I probably only prefer Lisp's syntax to Prolog's because I
have less experience with Prolog. Yet every Prolog tutorial that I
found in magazines left me bewildered. It was only when I started
reading tutorial _books_ that I finally "got it". It was one of
Prolog's features that I now appreciate most that used to confuse me
the most: the ability to write goals like "not(X)". A definition of
'not' would've clarified the meaning for me!

It illustrates how easy it is for someone used to one kind of
semantics to be confused by a language that uses a similar syntax with
totally different semantics. The goal "not(X)" doesn't find the value
of X and then apply 'not' to it. It isn't the same as a function that
negates a boolean value, which is how someone unfamiliar with Prolog
might read it.

Still, for me that confusion only lasted a short time, and didn't do
any harm. In fact, it made me even more curious about Prolog!

I agree about ' and #'. If you want them badly enough, then you could
write your own Prolog parser and add them to it. Perhaps this has
already been done? I've certainly seen examples that looked like
extensions to Prolog's syntax.

Are there any other languages we could mention? ;-)


--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
Please note: my email address is gubbish

ET

unread,
Sep 6, 1997, 3:00:00 AM9/6/97
to


Cedric Adjih wrote in article <5u6og8$d...@news-rocq.inria.fr>...

> Ok, I grant you that in that case semantics won't map trivially to
>syntax (i.e. code would be unreadable). But that's a problem that I
>have with scheme. The code I wrote was cluttered with things like:


>
> (vector-set! (mytype->content container) i
>              (+ (vector-ref (mytype->content container) i)
>                 (vector-ref some-data i)))
>
>while in Python that would be written:
> container.content[i]=container.content[i]+someData[i]
>

Without some surrounding context, I can't be sure what you were
doing, but having code that is ``cluttered'' with certain types of
expressions usually means that there is some abstraction that could
be expressed. For example, if you are performing an elementwise
computation on vectors, perhaps code along the lines of this
would be what you want:

  (vector-add! (mytype->content container) some-data)

for some suitable definition of vector-add! (with the usual caveats about
destructive operations, of course).

>
>And there were many instances of things like:
>
> (let ((result 0)) ; MIT scheme doesn't allow (define result 0) :-(

> (do ((index 0 (1+ i))
>      ((= index (vector-length (mytype->field myinstance)))))
>   (let ((field1 (vector-ref (mytype->field myinstance) i))
>     ...)))
> OR
> (do ((index 0 (1+ index))
>      (result 0))
>     ((= index (vector-length (mytype->field myinstance))) result)
>   (let ((field1 (vector-ref (mytype->field myinstance) index))
>     ...)))

again, you might want to define and use a vector-mapping procedure:
(define (vector-map f v)
  (do ((index 0 (1+ index)))
      ((= index (vector-length v)))
    (f (vector-ref v index))))

(vector-map (lambda (field1) ...) (mytype->field myinstance))

(and once again, I'm ignoring the result variable that must have been
mutated.)

The point being, if you have duplicated code, wrap it up in a function
and pass in the varying code with a lambda expression. I'll grant you that
it does take a little getting used to.


Marco Antoniotti

unread,
Sep 6, 1997, 3:00:00 AM9/6/97
to

In article <5uqln2$hon$1...@newsie2.cent.net> "ET" <emer...@eval-apply.com> writes:

From: "ET" <emer...@eval-apply.com>
Newsgroups: comp.lang.dylan,comp.lang.lisp
Date: Sat, 6 Sep 1997 00:22:11 -0400
Organization: CENTnet, Inc.
Lines: 61
NNTP-Posting-Host: tsa-139.cape.com
X-Newsreader: Microsoft Outlook Express 4.71.1008.3
X-MimeOle: Produced By Microsoft MimeOLE Engine V4.71.1008.3
Xref: agate comp.lang.dylan:8456 comp.lang.lisp:30556

Cedric Adjih wrote in article <5u6og8$d...@news-rocq.inria.fr>...

> Ok, I grant you that in that case semantics won't map trivially to
>syntax (i.e. code would be unreadable). But that's a problem that I
>have with scheme. The code I wrote was cluttered with things like:
>
> (vector-set! (mytype->content container) i
>              (+ (vector-ref (mytype->content container) i)
>                 (vector-ref some-data i)))
>
>while in Python that would be written:
> container.content[i]=container.content[i]+someData[i]
>

Just to be fussy, in Common Lisp (using M. Kantrowitz's INFIX package)
you can write

#I(content(container)[i] += someData[i])

Cheers
--
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472

Paul Prescod

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

In article <340792...@interaccess.com>,
joe davison <jo...@interaccess.com> wrote:
>The answer to that is what most lispers fail to mention. Humans
>are not the only ones who read lisp programs.

This is very true. For all its faults, I learned to appreciate Lisp syntax
the first time I debugged a function by evaluating its sub-expressions
by hand in MIT Scheme's editor. One day I was doing "type, save, compile,
link, execute" and the next day "type, execute, type execute, save (when it
works)". I haven't yet found an editor for an infix language that can
do that. (I'm not talking about a REPL, but an editor)

I'm not convinced that the benefit of this feature overwhelms the cost in
education and popularity, but I can see how it is addictive.

Paul Prescod


Paul Prescod

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

In article <vrotneyE...@netcom.com>,

William Paul Vrotney <vro...@netcom.com> wrote:
>Funny thing is, I've noticed that to save time many C and C++ programmers
>(and I'm guilty myself) will put in a lot of parentheses to avoid spending
>the time to parse the expression and to be sure of the outcome.

Why is this something to be guilty of? If parentheses make the expression
easier to understand then putting them in is a good thing, right?

Paul Prescod


Andreas Bogk

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

>>>>> "William" == William Paul Vrotney <vro...@netcom.com> writes:

William> The fact "infix languages are less easily parsed" applies
William> to programmers as well as compilers. I can't count the
William> number of times I've wasted lots of time on bugs in C and
William> C++ expressions because of an oversight "infixing" the
William> precedence rules. This never happens in Lisp.

C precedences are broken. Even more so since C involves lots of
pointer arithmetic. And C++ has this stupid notation of using << and
>> for input and output.

Martin Rodgers

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

Paul Prescod wheezed these wise words:

> I'm not convinced that the benefit of this feature overwhelms the cost in
> education and popularity, but I can see how it is addictive.

Perhaps the cost is that we may alienate some programmers. It may just
be a question of _which_ programmers: those who choose prefix before
anything else, or those who choose infix before anything else.

I know what you mean about the addictiveness. Of course, these days we
can just hit buttons on a toolbar, instead of typing, but the
principles are the same - as long as you do actually have a toolbar.

Yep, I'm a point and click addict. ;)

Paul Prescod

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

In article <86wwkvn...@g.pet.cam.ac.uk>,
Gareth McCaughan <gj...@dpmms.cam.ac.uk> wrote:
>I'm not sure I agree. I'm thinking not of compiler writers but of
>ordinary programmers writing macros and other such things. That
>happens all the time, unlike compiler writing which doesn't need
>to be done from scratch too often, so it's not enough for the
>technology to be mature -- it's better if it's pretty easy too.

It's a little redundant to write a parser anyhow. Maybe it would be better if
the compiler's parser were provided as a library that you could use from your
favourite macro language. This is, for example, how people typically work
with SGML -- through an API or pre-parsed "pablum" form. It seems to work, but
people still complain that parsing SGML is too hard so obviously they want to
write their own parsers for whatever reason (habit?).

Paul Prescod


Martin Rodgers

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

Paul Prescod wheezed these wise words:

> Why is this something to be guilty of? If parentheses make the expression
> easier to understand then putting them in is a good thing, right?

That's why I've always done it! That, and my discomfort with infix. If
it helps me write code that I know will be compiled correctly, with
the meaning that I intended, then I won't feel guilty about it.

Maybe this helped me accept Lisp's syntax...No parenthesis phobia, but
a very strong aversion to infix. This "parenthesis guilt" is for those
people with a parenthesis aversion. It's probably made worse by the
use of editors unable to do parenthesis matching. Strangely, I can
cope without parenthesis matching even when using Lisp, but I appreciate
it in almost _any_ language. (One obvious exception would be Forth.)

My motto could've once been "Give me parentheses or syntax errors". ;)

Marco Antoniotti

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

In article <y8an2lp...@quelle.artcom.de> Andreas Bogk <and...@artcom.de> writes:


...

> C precedences are broken. Even more so since C involves lots of
> pointer arithmetic. And C++ has this stupid notation of using << and
> >> for input and output.

C/C++ precedences are not that broken. They are just very complex, due
to the fact that practically every character in C/C++ has a meaning as
an operator.

In passing, I'd say that one of the nicest things in C++ is the << >>
stream operators. You can write pretty sophisticated and elegant code
with them.

Axel Schairer

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

Barry Margolin wrote:
> Gareth McCaughan <gj...@dpmms.cam.ac.uk> wrote:
> > - infix languages are less easily parsed;
>
> This is an issue for quick and dirty code, but it's not an issue for
> implementors -- compiler technology to parse infix notation is quite
> mature.

This might be true. It is true, of course, if you know all your infix
operators when you build your parser/parse tables. But I do not know
how to handle the situation where you

- have user-defined infixes _and_
- you want/need to use tools like bison/yacc/zebu ...

Is there something I should know and obviously don't?

Thanks, Axel

--
=== Axel Schairer, htt p://www .df ki.de/v se/staf f/sch airer/ ===
[So long as you predefine a parser rule for each precedence level, it's
not hard to fiddle the lexer to return user-defined operators as op tokens
at the appropriate level. -John]
--
Send compilers articles to comp...@iecc.com,
meta-mail to compiler...@iecc.com.

Erik Naggum

unread,
Sep 7, 1997, 3:00:00 AM9/7/97
to

* Paul Prescod

| It's a little redundant to write a parser anyhow. Maybe it would be
| better if the compiler's parser were provided as a library that you could
| use from your favourite macro language. This is, for example, how people
| typically work with SGML -- through an API or pre-parsed "pablum"
| form. It seems to work, but people still complain that parsing SGML is
| too hard so obviously they want to write their own parsers for whatever
| reason (habit?).

there is no agreement on what an SGML document "means", and widely varying
ideas about what should and should not be _unavailable_ after the parsing
has completed. this means people _will_ want to handle the raw text form
of SGML as opposed to some arbitrarily chosen non-textual form. there is
some consensus in the SGML community that two SGML documents are the same
only if their textual forms are bitwise identical. there are many attempts
to use _comments_ for various half-witted purposes, and comments are
typically ignored in the parsed forms. etc.

had SGML defined a "read syntax" and talked about objects at a post-parse
level, SGML would have been a lot more interesting. in SGML today, every
whitespace is sacred, every inferred delimiter must be marked as not
present in the text, etc. you just can't handle this mess intelligently by
defining a _single_ "API", and you will always find people who are smarter
than the last guy who defined a "complete API". it's sad, really.

William Paul Vrotney

unread,
Sep 8, 1997, 3:00:00 AM9/8/97
to


In article <EG51...@undergrad.math.uwaterloo.ca>
papr...@calum.csclub.uwaterloo.ca (Paul Prescod) writes:

>
> In article <vrotneyE...@netcom.com>,
> William Paul Vrotney <vro...@netcom.com> wrote:
> >Funny thing is, I've noticed that to save time many C and C++ programmers
> >(and I'm guilty myself) will put in a lot of parentheses to avoid spending
> >the time to parse the expression and to be sure of the outcome.
>

> Why is this something to be guilty of? If parentheses make the expression
> easier to understand then putting them in is a good thing, right?
>

Right. What I meant was that I am guilty of not stopping to look up the
infix precedence rules. Many good programmers don't. Which raises the
question "Why bother with infix notation to begin with if it introduces a
weak spot in programming languages?".

We recognize infix notation as having value as an extemporaneous
abbreviation for hand-written mathematics. Note that much of formal math is
expressed using prefix notation, e.g. f(g(x, y)) where g could be +. However
for computer languages one might question whether infix syntax has any value
that outweighs its problems. Besides, as you say above, it is certainly
clearer to put the parentheses into infix syntax anyway -- and those are
exactly the parentheses that Lisp's prefix syntax requires from the start.

Polish notations (both prefix and postfix (RPN)), on the other hand, open up
a world of uniform expression nesting that is natural to programming. This
can be seen in both Lisp (prefix) and Forth (postfix). This uniform
expression nesting, coupled with the fact that Polish syntax allows the
language definition to be a lot simpler, is a *huge* benefit. Why it is not
more popular seems to have the same explanation as why Arabic numerals could
not displace Roman numerals for hundreds of years. Politics.

Andreas Bogk

unread,
Sep 8, 1997, 3:00:00 AM9/8/97
to

>>>>> "William" == William Paul Vrotney <vro...@netcom.com> writes:

William> Note that much of formal math is expressed using prefix
William> notation, e.g. f(g(x, y)) where g could be +. However for

Funny you mention that. Interestingly enough, it's not (f (g x
y)). That's one of the points in Lisp syntax that struck me as
odd. And most mathematicians write + when they mean it. They even use
infix operators when they mean the general class of commutative,
associative functions. That's for a reason: it makes the formulas
easier to understand, because they are more graphical this way.

And while braces might be easier to parse for a computer (remember:
barcodes are easy to parse for computers too), the additional
syntactical context of infix languages does make the program easier to
read for humans. In Lisp, you have to know which argument has which
meaning. In infix languages, you can grok that directly from the
statements. In a loop expression, for instance, Lisp doesn't make a
real difference between the condition and the loop body, they are just
arguments. This is of course a legitimate theoretical approach, after
all, both are just expressions. But there's a difference in my head,
and that's why I give them different names, and that's why I prefer to
recognize both at a glance without having to count endless braces.

The whole "if it's easy to parse for a computer, it's easy to parse
for a human" argument is fundamentally flawed. Computers are good at
counting (braces, for instance), but humans are good at recognizing
patterns.

Andreas Bogk

unread,
Sep 8, 1997, 3:00:00 AM9/8/97
to

>>>>> "Marco" == Marco Antoniotti <mar...@infiniti.PATH.Berkeley.EDU> writes:

Marco> C/C++ precedence are not that broken. They are just very
Marco> complex, due to the fact that practically every character
Marco> in C/C++ has a meaning as an operator.

There are 18 precedence levels, and & has a lower precedence than
==. And too complex is equivalent to broken.

Marco> In passing, I'd say that one of the nicest things of C++
Marco> are the << >> stream operators. You can write pretty
Marco> sophisticated and elegant code with them.

They are a kludge to circumvent C++'s problem of having no type-safe
variable argument lists.

Marco Antoniotti

unread,
Sep 8, 1997, 3:00:00 AM9/8/97
to

In article <y8araaz...@horten.artcom.de> Andreas Bogk <and...@artcom.de> writes:


>>>>> "William" == William Paul Vrotney <vro...@netcom.com> writes:

William> Note that much of formal math is expressed using prefix
William> notation, e.g. f(g(x, y)) where g could be +. However for

...

> And while braces might be easier to parse for a computer (remember:
> barcodes are easy to parse for computers too), the additional
> syntactical context of infix languages does make the program easier to
> read for humans. In Lisp, you have to know which argument has which
> meaning. In infix languages, you can grok that directly from the
> statements. In a loop expression, for instance, Lisp doesn't make a
> real difference between the condition and the loop body, they are just
> arguments. This is of course a legitimate theoretical approach, after
> all, both are just expressions. But there's a difference in my head,
> and that's why I give them different names, and that's why I prefer to
> recognize both at a glance without having to count endless braces.

As I said earlier, in CL you can use the INFIX package to have infix
syntax when you need it. The CL LOOP macro and the iteration
constructs DOTIMES and DOLIST give you all the information you need to
infer what they actually mean. Emacs counts all the braces for you :)
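
for example, the variable, the sequence or count, and the optional
result value all sit in fixed header positions, so the body is never
ambiguous:

  (dolist (x '(a b c))     ; X ranges over the elements of the list
    (print x))

  (let ((total 0))
    (dotimes (i 10 total)  ; I runs from 0 to 9; TOTAL is the result
      (incf total i)))     ; => 45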

Cheers

Barry Margolin

unread,
Sep 8, 1997, 3:00:00 AM9/8/97
to

In article <vrotneyE...@netcom.com>,
William Paul Vrotney <vro...@netcom.com> wrote:
>Right. What I meant was that I am guilty of not stopping to look up the
>infix precedence rules. Many good programmers don't. Which begs the
>question "Why bother with infix notation to begin with if it introduces a
>weak spot in programming languages?".

Because 75-90% of the time your expressions aren't so complex that you get
confused about the precedence rules. There are some common precedence
rules that just about all infix languages share (APL is the only exception
I can think of offhand -- it eschews precedence rules in favor of a
consistent left-to-right rule): multiplication/division binds tighter than
addition/subtraction, and every programmer learned this back when they were
babies. And many other precedence rules are also intuitive when combined
with these basic operators; it only becomes confusing when you have
multiple advanced operators in the same expression, and in that 10% of the
time you add parentheses.

I've done plenty of programming in both Lisp and infix languages, and I can
deal with either. I like Lisp for its higher features. The simple syntax
is useful for macro writing, but at the level of writing individual
expressions I don't need the language forcing me to parenthesize
everything. Infix languages allow common things to be written tersely,
which I think is important -- you can recognize common patterns and idioms
more easily if they're compact.

William Paul Vrotney

unread,
Sep 9, 1997, 3:00:00 AM9/9/97
to

In article <5v21ml$j...@tools.bbnplanet.com> Barry Margolin <bar...@bbnplanet.com> writes:

>
> In article <vrotneyE...@netcom.com>,
> William Paul Vrotney <vro...@netcom.com> wrote:
> >Right. What I meant was that I am guilty of not stopping to look up the
> >infix precedence rules. Many good programmers don't. Which begs the
> >question "Why bother with infix notation to begin with if it introduces a
> >weak spot in programming languages?".
>
> Because 75-90% of the time your expressions aren't so complex that you get
> confused about the precedence rules. There are some common precedence
> rules that just about all infix languages share (APL is the only exception
> I can think of offhand -- it eschews precedence rules in favor of a
> consistent left-to-right rule): multiplication/division binds tighter than
> addition/subtraction, and every programmer learned this back when they were
> babies. And many other precedence rules are also intuitive when combined
> with these basic operators; it only becomes confusing when you have
> multiple advanced operators in the same expression, and in that 10% of the
> time you add parentheses.
>

I appreciate what you are saying here, but that was not the point. It was
the simple assertion that infix causes more programming errors to be made
than prefix for many programmers.

What you are saying above seems similar to me to the assertion "Cast iron
plumbing doesn't cause problems most of the time."

William Paul Vrotney

unread,
Sep 9, 1997, 3:00:00 AM9/9/97
to

In article <y8araaz...@horten.artcom.de> Andreas Bogk
<and...@artcom.de> writes:

>
> >>>>> "William" == William Paul Vrotney <vro...@netcom.com> writes:
>
> William> Note that much of formal math is expressed using prefix
> William> notation, e.g. f(g(x, y)) where g could be +. However for
>

> Funny you mention that. Interestingly enough, it's not (f (g x
> y)). That's one of the points in Lisp syntax that struck me as
> odd. And most mathematicians write + when they mean it. They even use
> infix operators when they mean the general class of commutative,
> associative functions. That's for a reason, it makes the formulas
> easier to understand, because they are more graphical this way.
>

What you are talking about here is a personal preference thing. Some people
have better pattern recognition with infix, others with prefix. For example
I personally find prefix notation more graphical.

The original point being made was not this personal preference thing. It
was the simple assertion that infix causes more programming errors to be
made than prefix for many programmers.

> And while braces might be easier to parse for a computer (remember:
> barcodes are easy to parse for computers too), the additional
> syntactical context of infix languages does make the program easier to
> read for humans.

You should have said "for some humans". I don't find infix languages easier
to read and I am human, at least I think I am. And there are many humans
who feel the same as me.

> In Lisp, you have to know which argument has which
> meaning. In infix languages, you can grok that directly from the
> statements.

Unless I don't understand what you mean here, you have just contradicted
yourself. In Lisp the meaning of

(OP1 a (OP2 b c d) e)

can be inferred directly from the statement, but in an infix language the
meaning of

a OP1 b OP2 c OP2 d OP1 e

can *not* be directly inferred from the statement. You need additional
information which in most cases are the precedence rules.


> In a loop expression, for instance, Lisp doesn't make a
> real difference between the condition and the loop body, they are just
> arguments.


You need to be more specific: are you talking about the loop macro? The
extended loop macro? Lisp looping expressions in general, like do, dolist
etc.?


> This is of course a legitimate theoretical approach, after
> all, both are just expressions. But there's a difference in my head,
> and that's why I give them different names, and that's why I prefer to
> recognize both at a glance without having to count endless braces.

If you are counting parentheses (or braces) then you are doing the wrong
kind of pattern recognition. At least, it is wrong in that it is not a
very efficient use of your time.

A good Lisp programmer rarely counts parentheses, especially trailing
parentheses. In fact in INTERLISP you did not even need trailing
parentheses, you could replace any number of trailing parentheses with a
"]". For example

(defun faa (x)
  (cond ((numberp x) (+ x x))
        (t 'anomalous]

The whole point here being that in a prefix language the patterns are not
properly recognized by the number of expression delimiters, but by the
nesting of the expressions. This nesting of expressions textually is
usually aided by graphical devices such as indentation.

>
> The whole "if it's easy to parse for a computer, it's easy to parse
> for a human" argument is fundamentally flawed. Computers are good at
> counting (braces, for instance), but humans are good at recognizing
> patterns.
>

It is not fundamentally flawed in the case of Polish notations, however.
For example, as I pointed out above, some humans recognize patterns more
easily in prefix form.

The converse is also not fundamentally flawed, ie. "Sometimes if it is
easier to parse by a human it is also easier to parse for a computer". The
classical example here is when a C programmer creates a macro that generates
an expression without enclosing parentheses. The resulting computer
interpretation is not always what was intended, and this is a well-documented
C trap that has wasted untold hours of programmers' time with bugs.

Bill Coderre

unread,
Sep 9, 1997, 3:00:00 AM9/9/97
to

Bottom line:

prefix/postfix notation: awkward until you get used to it (1 hour). Simple
enough to understand if you indent it right. Rarely causes mis-evaluated
expressions. Incredible consistency allows nifty macros, which most people
don't use.

infix notation: people are already used to it. Put in some extra
parentheses to make sure that you don't get mis-evaluated. Macros are
trickier.

Overall, I think the "problems" debated here are far less than the C
problem of = versus == or the similar Lisp problem of eq versus equal.
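
(For reference, the eq-versus-equal distinction:

  (eq (list 1 2) (list 1 2))     ; => NIL -- eq tests object identity
  (equal (list 1 2) (list 1 2))  ; => T   -- equal tests structure
  (eq 'foo 'foo)                 ; => T   -- same symbol

Picking the wrong one, like typing = for ==, compiles fine and simply
gives the wrong answer at run time.)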

bc

Paul Prescod

unread,
Sep 9, 1997, 3:00:00 AM9/9/97
to

In article <30826543...@naggum.no>, Erik Naggum <cle...@naggum.no> wrote:
>had SGML defined a "read syntax" and talked about objects at a post-parse
>level, SGML would have been a lot more interesting. in SGML today, every
>whitespace is sacred, every inferred delimiter must be marked as not
>present in the text, etc. you just can't handle this mess intelligently by
>defining a _single_ "API", and you will always find people who are smarter
>than the last guy who defined a "complete API". it's sad, really.

Doesn't the grove formalism solve this problem? For those not familiar
with it, a grove is a description of the structure of an SGML document,
similar to a "parse tree" from language theory.

Getting back to Lisp, the problem shouldn't exist in Lisp (or Dylan) in
the first place because they *do* have "read syntaxes" and well defined
semantics. So a Dylan programmer with access to a parse tree "API"should be
able to do the same transformations and metaprogramming that Lisp programmers
do.

On the other hand, I can see how it would be quite hard to implement the
Dylan equivalent of "evaluate-last-sexpr".

Paul Prescod



Paul Prescod

unread,
Sep 10, 1997, 3:00:00 AM9/10/97
to

In article <vrotneyE...@netcom.com>,
William Paul Vrotney <vro...@netcom.com> wrote:

>Polish notations (both prefix and postfix (rpn)) on the other hand open up a
>world of uniform expression nesting that is natural to programming. This
>can be seen in both Lisp (prefix) and Forth (postfix). This uniform
>expression nesting coupled with the fact that Polish syntax allows the
>language definition to be a lot simpler is a *huge* benefit. Why it is not
>more popular seems to be explained by the same reason as to why Arabic
>numerals were not more popular than Roman numerals for hundreds of years.
>Politics.

I don't see conformance with the dominant cultural standard as "politics"
but as "appealing to intuition and experience." This is the same reason we use
the "+" symbol for addition and not, say, the "|" symbol.

I think it is highly debatable whether a uniform syntax is really superior.
Print-based mathematics has devised a wide variety of symbols on the theory
that the more diverse your notation, the easier it is to scan and interpret.
Another feature of print-based math is that operators occur as close to their
operands as is possible. Again this is supposed to make it easier to scan
and interpret.

I think that it is understandable that people will be reluctant to give
up these benefits for something new and different without a reason that they
can understand.

Paul Prescod


William Paul Vrotney

unread,
Sep 11, 1997, 3:00:00 AM9/11/97
to


In article <EGALz...@undergrad.math.uwaterloo.ca>
papr...@calum.csclub.uwaterloo.ca (Paul Prescod) writes:

>
> In article <vrotneyE...@netcom.com>,
> William Paul Vrotney <vro...@netcom.com> wrote:
> >Polish notations (both prefix and postfix (rpn)) on the other hand open up a
> >world of uniform expression nesting that is natural to programming. This
> >can be seen in both Lisp (prefix) and Forth (postfix). This uniform
> >expression nesting coupled with the fact that Polish syntax allows the
> >language definition to be a lot simpler is a *huge* benefit. Why it is not
> >more popular seems to be explained by the same reason as to why Arabic
> >numerals were not more popular than Roman numerals for hundreds of years.
> >Politics.
>
> I don't see conformance with the dominant cultural standard as "politics"
> but as "appealing to intuition and experience." This is the same reason we use
> the "+" symbol for addition and not, say, the "|" symbol.
>
> I think it is highly debatable whether a uniform syntax is really superior.
> Print-based mathematics has devised a wide variety of symbols on the theory
> that the more diverse your notation, the easier it is to scan and interpret.
> Another feature of print-based math is that operators occur as close to their
> operands as is possible. Again this is supposed to make it easier to scan
> and interpret.

Once again, we were talking about general purpose programming languages in
this thread, not print-based mathematics. As I said in a previous post, we
recognize the importance of infix notations to mathematics. We also
recognize the importance of special symbols for mathematics, chemistry,
physics, economics, etc. In fact these diverse notations are critical to
special purpose programming/modeling languages for these specific
disciplines.

But the gist of this thread is general purpose programming language syntax,
and we would like to keep such syntax as simple and elegant as possible. It
seems we keep learning this lesson over and over again in developing
programming languages, yet have a hard time reaching any consensus in the
wider population.

>
> I think that it is understandable that people will be reluctant to give
> up these benefits for something new and different without a reason that they
> can understand.
>

Which people? If we are talking about special purpose modeling language
users, like Mathematica users for example, then I agree and they should not
give up anything.

But if we are talking about general purpose programming language users then
I would like to make a pitch for simplicity, elegance and beauty. These
qualities have been put down for too long now in favor of this
"efficiency gain" issue. Not that efficiency is not important, but there are
many other qualities of a programming language that are more important than
efficiency. We have learned that simplicity, elegance and beauty in
programming language syntax as well as semantics offer more benefits than
just the sheer joy of programming, such as a flexible, robust macro language
as in Lisp. Besides, there is nothing wrong with the sheer joy of programming
anyway; it makes for productive programmers.

The elegance of Lisp is attractive to a lot of top notch computer scientists
and AI people, not because it is some sort of fetish with them, but because
they hold the same principles dear as other scientists. These principles
are manifest in the simplicity and elegance of physics formulas for example.
Simplicity and elegance is important in physics formulas so that the
scientist can explain and predict events in his world. Scientists abhor
complex formulas and models, they will spend enormous time and energy
simplifying and unifying. The outcome almost always proves that this is the
correct approach. This is one way to convince the world that general
purpose programming should follow suit. Since engineering is an out-crop of
science, much carries over.

Jonathan Eifrig

unread,
Sep 12, 1997, 3:00:00 AM9/12/97
to

Barry Margolin wrote:
> Gareth McCaughan <gj...@dpmms.cam.ac.uk> wrote:
> > - infix languages are less easily parsed;
>
> This is an issue for quick and dirty code, but it's not an issue for
> implementors -- compiler technology to parse infix notation is quite
> mature.

Axel Schairer <scha...@dfki.de> wrote


> This might be true. It is true, of course, if you know all your infix
> operators when you build your parser/parse tables. But I do not know
> how to handle the situation where you
>
> - have user-defined infixes _and_
> - you want/need to use tools like bison/yacc/zebu ...

Standard ML (sort of) falls into this category: you have
user-definable "infix-able" identifiers with user-defined precedence
levels.

Actually, it's a bit worse than that, since function application
doesn't require parentheses around the argument. Thus,

ID1 ID2 ID3

can be parsed as either (ID1 ID2) ID3 (if ID2 is *not* an infix
operator) or ID2 (ID1,ID3) (if it is defined as a binary infix
operator).

The SML/NJ compiler handles this situation by not attempting to handle
the problem using a fixed set of parsing rules. Instead, sequences of
"ID-like" subexpressions (basically, expressions separated by
identifiers) are glommed together into a list and fed into a separate
operator-precedence style parser that uses the current symbol table
to figure out which identifiers are infix and at what precedence. This
scheme seems to work pretty well in practice.

Indeed, it seems the biggest problem with user-defined infix
identifiers is coming up with some syntax for defining the parsing
behavior of the identifiers!

- Jonathan Eifrig (eif...@acm.org)

Norman Ramsey

unread,
Sep 12, 1997, 3:00:00 AM9/12/97
to

Axel Schairer <scha...@dfki.de> wrote:
>This might be true. It is true, of course, if you know all your infix
>operators when you build your parser/parse tables. But I do not know
>how to handle the situation where you
>
> - have user-defined infixes _and_
> - you want/need to use tools like bison/yacc/zebu ...
>
>Is there something I should know and obviously don't?

The thing to do is to use yacc-like tools to parse a sequence of
``atomic expressions'', then use a little operator-precedence parser
to handle the infix operators. My paper `Unparsing Expressions with
Prefix and Postfix Operators' at
http://www.cs.virginia.edu/~nr/pubs/unparse-abstract.html has such a
parser in the appendix. The advantage of this technique is that you
don't have to limit the number of levels of user-defined precedence.
I've found such limits bothersome when I'm trying to embed one
language in another (e.g., defining analogs of C operators in ML is
tedious because you have to cram C's 14 levels of precedence into ML's
10).


Excerpt from an EBNF grammar that uses this trick:

Exp    : Atomic {Atomic} ;
Atomic : "(" Exp {"," Exp} ")"
       | "[" Exp {"," Exp} "]"
       | "let" {Decl} "in" Exp "end"
       | "if" Exp "then" Exp "else" Exp "fi"
       | Ident
       | Literal
       ;

If you don't mind ugly ML code, you can see the rest of the grammar at
http://www.cs.virginia.edu/~nr/toolkit/working/sml/rtl/grammar.html
and the post-pass at
http://www.cs.virginia.edu/~nr/toolkit/working/sml/rtl/elab.html.
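
The post-pass itself is small. Here is the idea transcribed into Common
Lisp as a sketch -- a fixed precedence table stands in for the
user-defined declarations, and this is illustrative, not the code from
the paper:

  (defparameter *precedence* '((+ . 1) (- . 1) (* . 2) (/ . 2)))

  (defun parse-infix (tokens &optional (min-prec 0))
    ;; Precedence-climbing over a flat token list;
    ;; returns (tree . remaining-tokens).
    (let ((lhs (pop tokens)))
      (loop
        (let* ((op (first tokens))
               (prec (cdr (assoc op *precedence*))))
          (when (or (null prec) (< prec min-prec))
            (return (cons lhs tokens)))
          (pop tokens)
          (destructuring-bind (rhs . rest)
              (parse-infix tokens (1+ prec))
            (setf lhs (list op lhs rhs)
                  tokens rest))))))

  ;; (car (parse-infix '(a + b * 4 + c)))  =>  (+ (+ a (* b 4)) c)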


Norman

--
Norman Ramsey -- moderator, comp.programming.literate
http://www.cs.virginia.edu/~nr
