
Time for a Fresh Scheme Standard: Say Goodbye to the RnRS Relic


New Scheme

Dec 21, 2001, 7:14:04 PM
***************************************************************
Time for a Fresh Scheme Standard
And to Say Goodbye to the RnRS Relic
----------------------------------------------------

Is it time for a new Scheme standard? Is it time to make a break from
the ossified RnRS document? Is it time to bring Scheme into the 21st
century?

Scheme has become very dated. The RnRS series of documents is a
relic of a more dynamic past, now little more than a fossil record of
the language's ever-slowing development. With each year that passes,
Scheme becomes more irrelevant to the practical and academic worlds of
software development, education and research.

So what should be done? Fix the uncertainties, clear up the undefined
areas. Don't be scared to admit weaknesses and mistakes in the
current standard. Solicit help from the Common Lisp community and
draw upon their extensive practical experience. Learn from the
Functional community and their many strong ideas. And ask the
compiler vendors about practicalities.

It's time for a fresh look at Scheme. It's time to break away from the
RnRS and its brotherhood of old men in their isolated,
self-referential world. It's time to reinvigorate the language.

It's time for a new standard.

israel r t

Dec 23, 2001, 8:51:54 PM

Lisp needs to reinvent itself.
The last standard was released in 1994, i.e. nearly a decade ago.

As Paul Graham said:
"It's about time for a new dialect of Lisp. The two leading dialects,
Common Lisp and Scheme, have not been substantially changed since the
1980s.

What a language is has changed since then. In 1985, a programming
language was just a spec. Now, thanks to Perl, it means not just (and
maybe not even) a spec, but also a good free implementation, huge
libraries, and constant updates."

Lisp is no longer taught* at leading universities.
Lisp jobs are increasingly scarce.

Lisp is viewed in the real world as akin to COBOL**, only less likely
to provide a paying job.

Lisp has a severe image problem.***
Eventually, it will go the way of Jovial and the Titan command
language.

Paul Graham is moving in the right direction with his lisp dialect
Arc. From his talk at the Lightweight Languages Workshop
MIT Artificial Intelligence Lab :

" In The Periodic Table, Primo Levi tells a story that happened when
he was working in a varnish factory. He was a chemist, and he was
fascinated by the fact that the varnish recipe included a raw onion.
What could it be for? No one knew; it was just part of the recipe. So
he investigated, and eventually discovered that they had started
throwing the onion in years ago to test the temperature of the
varnish: if it was hot enough, the onion would fry.

We're going to try not to include any onions in Arc. Everything is
open to question. "
http://www.paulgraham.com/arcll1.html


Footnotes:

* except in increasingly marginalised AI courses.

** The mega-LOCs of COBOL in the finance sector will ensure jobs for
COBOL drudges well into the next millennium.

*** and I am not referring to Naggum either...


Frank A. Adrian

Dec 23, 2001, 11:37:54 PM
israel r t wrote:
> Lisp needs to reinvent itself.
> The last standard was released in 1994, i.e. nearly a decade ago.
>
> As Paul Graham said:
> "It's about time for a new dialect of Lisp. The two leading dialects,
> Common Lisp and Scheme, have not been substantially changed since the
> 1980s.

Fine. Do you have (a) suggestions or (b) funding for people to participate
in such an effort?

> What a language is has changed since then. In 1985, a programming
> language was just a spec. Now, thanks to Perl, it means not just (and
> maybe not even) a spec, but also a good free implementation, huge
> libraries, and constant updates."

I guess. Inventors of Dylan, Curl, and AutoLisp, at the very least, would
tend to disagree with you.

> Lisp is no longer taught * at leading universities.
> Lisp jobs are increasingly scarce.

Sad. 90% of anything is crap, to quote Ted Sturgeon. I would assume this
applies to incomplete university educations and most jobs, as well.

> Lisp is viewed in the real world as akin to COBOL only less likely to
> provide a paying job.

Probably true, but see my last comment. When exposed to crap long enough,
even good people start having trouble telling the difference.

> Lisp has a severe image problem . ***
> Eventually, it will go the way of Jovial and the Titan command
> language.

Yup, just like Fortran. It's been around for 50 years. I guess it'll be
good for another 50. By then, if it hasn't been updated, I'll start to
worry.

> Paul Graham is moving in the right direction with his lisp dialect
> Arc.

It's nice to have an opinion. You're entitled to yours, no matter how
misguided.

[Obligatory anecdote about needless process step/ingredient snipped.]

There are many people that believe Common Lisp needs a bit of updating. Do
you have (a) suggestions or (b) funding? If not, are you just trying to
raise hackles, showing that people here are less friendly than on
c.l.scheme? If you have come here with that objective, you should have
also noticed that your comments were answered without rancor (albeit
briefly) and that most participants in this forum would answer you in
that tone (note that, had you come with the aforementioned goal in
mind, you would not have deserved this much courtesy). Of course,
cluelessness in followups and arguments might be handled with much
less forgiveness.

faa

Erik Naggum

Dec 24, 2001, 2:24:03 AM
* israel r t <isra...@antispam.optushome.com.au>

| Lisp needs to reinvent itself.
| The last standard was released in 1994, i.e. nearly a decade ago.

How old are you?

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.


Andreas Bogk

Dec 24, 2001, 5:16:18 PM

> Lisp needs to reinvent itself.

I suggest to take a look at Dylan. It's a pretty recent Lisp-like
language, and it's got a few things right (but on the other hand
omitted some features some people consider essential). I've also got
a list of things to do better on the next iteration.

You can learn a lot from Dylan when designing a new Lisp.

Andreas

--
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
(Michel Serres)

Jeffrey Siegal

Dec 24, 2001, 7:26:36 PM
Andreas Bogk wrote:
> I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> language, and it's got a few things right (but on the other hand
> omitted some features some people consider essential).

I consider Lisp syntax (or something similarly elegant) to be
essential. I suspect that many proponents of Dylan-like languages would
consider it unacceptable. I strongly suspect there is no middle ground.

(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
got abandoned.)

Andreas Bogk

Dec 24, 2001, 8:08:16 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

For the language user, there may be no middle ground. From the
perspective of the language designer, the syntax is just one issue of
many, so even if you prefer S-expressions, there's still a lot of Lisp
to discover in Dylan.

> (Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> got abandoned.)

The reason was that a lot of people, especially those who should be
persuaded to use Dylan, consider infix syntax to be more readable.
I'm well aware that this is paid with increased complexity in macros,
and I'm still not firm enough in macrology to know whether this is a
substantial complaint or not. Still, Dylan provides valuable input
for designing the next Lisp/Scheme/whatever.

Jeffrey Siegal

Dec 24, 2001, 8:37:32 PM
Andreas Bogk wrote:
> > I consider Lisp syntax (or something similarly elegant) to be
> > essential. I suspect that many proponents of Dylan-like languages would
> > consider it unacceptable. I strongly suspect there is no middle ground.
>
> For the language user, there may be no middle ground. From the
> perspective of the language designer, the syntax is just one issue of
> many, so even if you prefer S-expressions, there's still a lot of Lisp
> to discover in Dylan.

Did you mean "A lot _for_ Lisp to discover?" There is little in Dylan
that didn't originate with Lisp, except the syntax. What does Dylan
have that Scheme + CLOS + "a collections library" doesn't have?

Thaddeus L Olczyk

Dec 24, 2001, 10:05:39 PM
On 24 Dec 2001 23:16:18 +0100, Andreas Bogk <and...@andreas.org>
wrote:

>israel r t <isra...@antispam.optushome.com.au> writes:
>
>> Lisp needs to reinvent itself.
>
>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
>language, and it's got a few things right (but on the other hand
>omitted some features some people consider essential). I've also got
>a list of things to do better on the next iteration.
>
>You can learn a lot from Dylan when designing a new Lisp.
>
>Andreas

What I saw of Dylan looked good, but it is a dead language. Stillborn.

Andreas Bogk

Dec 24, 2001, 10:13:44 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> > many, so even if you prefer S-expressions, there's still a lot of Lisp
> > to discover in Dylan.
> Did you mean "A lot _for_ Lisp to discover?" There is little in Dylan
> that didn't originate with Lisp, except the syntax.

No, I agree that most of Dylan originated in one Lisp dialect or
another. But I think that Dylan is a well-balanced blend of these
features, it feels good. That's why I suggest to at least take a look
at it when designing the next Lisp[0].

> What does Dylan have that Scheme + CLOS + "a collections library"
> doesn't have?

That would be conditions, type annotations and a useful module/library
system. Oh, and dynamism vs. performance tradeoffs like sealing,
primary classes and limited types.

Andreas

[0] Or "Lisp" or successor of Scheme which is "not a Lisp" or
whatever.

Jeffrey Siegal

Dec 25, 2001, 5:08:15 AM
Andreas Bogk wrote:
> > What does Dylan have that Scheme + CLOS + "a collections library"
> > doesn't have?
>
> That would be conditions, type annotations and a useful module/library
> system.

I agree about modules, although I don't really like the way Dylan uses
multiple files to define a simple module. There should be a way of
doing that in-line. A CLOS-style object system does have type
annotations, at least at the method level (which is probably enough),
because they're necessary for dispatch. As for conditions, I prefer
passing condition handlers as explicit arguments. With proper tail
calls and limited use of call/cc to escape out of CPS, it works fine.
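A minimal Scheme sketch of that style (my illustration, not Siegal's code; all names invented): the handler is an ordinary argument, and call/cc supplies the non-local exit.

```scheme
;; Illustrative only: the error handler is passed explicitly, and
;; call-with-current-continuation provides the escape.
(define (safe-div x y on-error)
  (if (zero? y)
      (on-error 'division-by-zero)   ; invoke the caller's handler
      (/ x y)))

(define (try-div x y)
  (call-with-current-continuation
   (lambda (escape)
     ;; the handler escapes the whole computation with a tagged value
     (safe-div x y (lambda (c) (escape (list 'error c)))))))

;; (try-div 10 2) => 5
;; (try-div 10 0) => (error division-by-zero)
```

The caller can never forget the condition case: safe-div will not even accept a call without a handler argument.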

> Oh, and dynamism vs. performance tradeoffs like sealing,
> primary classes and limited types.

I think these are overhyped features which have been adequately
addressed in Lisp/Scheme using different implementations as needed,
declarations, etc.

David Madore

Dec 25, 2001, 9:55:38 AM
Jeffrey Siegal in litteris <3C27C7BC...@quiotix.com> scripsit:

> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

Concrete syntax is irrelevant, isn't it? The same compiler could be
made to accept a dozen different syntaxes, or even configurable
syntaxes, and prefer none (Caml can handle configurable syntaxes
through the CamlP4 preprocessor, but it still prefers one).

--
David A. Madore
(david....@ens.fr,
http://www.eleves.ens.fr:8080/home/madore/ )

Andreas Bogk

Dec 25, 2001, 2:19:12 PM
olc...@interaccess.com (Thaddeus L Olczyk) writes:

> What I saw of Dylan looked good, but it is a dead language. Stillborn.

There are two Dylan compilers being actively maintained, one
commercial, one free. That's not exactly dead.

Andreas Bogk

Dec 25, 2001, 2:42:11 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> I agree about modules, although I don't really like the way Dylan uses
> multiple files to define a simple module. There should be a way of

The idea behind Dylan was that the source code resides in a code
database, and the file format is just used for interchange. Of
course, in reality there are source files, and the interchange format
is a little awkward to use. That should be easier, I agree.

> doing that in-line. A CLOS-style object system does have type
> annotations, at least at the method level (which is probably enough),
> because they're necessary for dispatch.

Having type annotations for bindings gives the optimizer a lot of meat
to work on.

> As for conditions, I prefer
> passing condition handlers as explicit arguments. With proper tail
> calls and limited use of call/cc to escape out of CPS, it works fine.

I don't think so. Having to pass around handlers for all sorts of
conditions is a nuisance. This is something CL and Dylan got right,
IMHO.

> > Oh, and dynamism vs. performance tradeoffs like sealing,
> > primary classes and limited types.
> I think these are overhyped features which have been adequately
> addressed in Lisp/Scheme using different implementations as needed,
> declarations, etc.

The point is that you can start writing code without caring about
performance. Once the design has settled, you can sprinkle some
adjectives here and there, and the code becomes fast, without having
to re-implement performance-critical code. I consider sealing to be a
good thing.

Andreas

Bruce Hoult

Dec 26, 2001, 12:18:37 AM
In article <3C27C7BC...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> > language, and it's got a few things right (but on the other hand
> > omitted some features some people consider essential).
>
> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
(PostScript, Forth). But even after much use of the others I find that
I do prefer "conventional" syntax.


> (Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> got abandoned.)

The reason as I understand it is that no one could figure out how to
bidirectionally map macros between infix and prefix.

I'm not sure whether this is impossible or merely hard.

It's interesting that some of the more complex macros in Common Lisp
look uncommonly like the "infix" syntax in Dylan. e.g. the "loop"
macro, which is nearly identical to the Dylan "for" statement macro.
Thus it might be acceptable to the Lisp-syntax people to essentially
retain (nearly?) the same syntax for statement macros in both modes.
Function macros are easy to translate. That leaves Dylan's declaration
macros to think about.

Another solution might be to explicitly define both syntaxes when you
define a macro. More work, but then you don't define new syntax quite
as often as you define functions.

-- Bruce

Bruce Hoult

Dec 26, 2001, 12:26:16 AM
In article <3C27D85C...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

Perhaps not a lot that is radical, but simply a lot of nice cleaning up.

- having a "let" where the scope is implicit (to the end of the current
progn) is a big win in uncluttering code

- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
read for some people.

- same goes for "[]" vs "element()".

- why do aref and gethash in CL have opposite argument orders?
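The first point can be sketched like this (my illustration, not from the post; Scheme's let* mitigates the nesting but still brackets the whole body):

```scheme
;; In Scheme or CL, each new binding adds a level of nesting:
(define (hypot a b)
  (let ((a2 (* a a)))
    (let ((b2 (* b b)))
      (sqrt (+ a2 b2)))))

;; In Dylan, a `let` is implicitly scoped to the end of the enclosing
;; body, so the equivalent code stays flat (shown here as a comment):
;;   let a2 = a * a;
;;   let b2 = b * b;
;;   sqrt(a2 + b2);
```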


I think you get the point.

None of these (or other) items are critical in themselves, but I find
that put all together they provide a cleaner, easier to use (and
remember) language.

-- Bruce

Bruce Hoult

Dec 26, 2001, 12:53:26 AM
In article <3C28500F...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > > What does Dylan have that Scheme + CLOS + "a collections library"
> > > doesn't have?
> >
> > That would be conditions, type annotations and a useful module/library
> > system.
>
> I agree about modules, although I don't really like the way Dylan uses
> multiple files to define a simple module. There should be a way of
> doing that in-line.

No one does. That was supposed to be just an interchange format, not
something that users had to deal with. That was the case in the Apple
IDE, where all the code was kept in a database.

We've had a bit of discussion recently on a way to put various modules
into the same source file. Nothing has been agreed yet, but in Gwydion
we have recently done a related thing in implementing a "single-file
mode" that lets you write small programs without a library or module
declaration at all, with a default set of imports. If/when your program
outgrows that you can always add the .lid file.

The ability to put imports/exports in the same file with code is
something we definitely plan for fairly soon.

-- Bruce

Jeffrey Siegal

Dec 26, 2001, 9:38:23 AM
Bruce Hoult wrote:

>>Andreas Bogk wrote:
>>
>>>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
>>>language, and it's got a few things right (but on the other hand
>>>omitted some features some people consider essential).
>>>
>>I consider Lisp syntax (or something similarly elegant) to be
>>essential. I suspect that many proponents of Dylan-like languages would
>>consider it unacceptable. I strongly suspect there is no middle ground.
>
> I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> (PostScript, Forth). But even after much use of the others I find that
> I do prefer "conventional" syntax.

It isn't a question of using. It is a question of being able to define
new syntax without stretching or breaking the inherent limits of the
existing syntax. Lisp lives essentially forever in the world of
computer languages because it almost can't be outgrown. To the extent
that Dylan lives at all, it will still die when the world decides that
objects aren't that central to programming after all, and moves on to
some other model, or when someone comes up with a new syntactic
construct that is incompatible with Dylan's syntax. Lisp will live on.


>>(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
>>got abandoned.)
>
> The reason as I understand it is that no one could figure out how to
> bidirectionally map macros between infix and prefix.
>
> I'm not sure whether this is impossible or merely hard.

And the reason the decision was made to drop prefix rather than infix
when that happened was the overriding goal of trying to sell Dylan
alongside Java or C as a language for the great masses. (Which today
seems utterly absurd.)

Many smart people have observed that when you encounter a "hard" (if not
impossible) problem, you have already made a mistake somewhere back down
the road. Trying to "add" an infix syntax without recognizing that this
almost certainly means losing expressive power and generality was just
such a mistake.

> Another solution might be to explicitly define both syntaxes when you
> define a macro. More work, but then you don't define new syntax quite
> as often as you define functions.

That would be very error prone.

Jeffrey Siegal

Dec 26, 2001, 9:42:22 AM
Andreas Bogk wrote:

>>doing that in-line. A CLOS-style object system does have type
>>annotations, at least at the method level (which is probably enough),
>>because they're necessary for dispatch.
>>
>
> Having type annotations for bindings gives the optimizer a lot of meat
> to work on.

I'm not so sure about that, given good type inference, and methods that
are kept reasonably small. In any case, it is a trivially small matter
to add type bindings to let statements once they exist for methods.

>>As for conditions, I prefer
>>passing condition handlers as explicit arguments. With proper tail
>>calls and limited use of call/cc to escape out of CPS, it works fine.
>
> I don't think so. Having to pass around handlers for all sorts of
> conditions is a nuisance. This is something CL and Dylan got right,
> IMHO.

Chocolate and vanilla. I would add that explicitly passing condition
handlers around is a bit like explicit typing, because it prevents you
from leaving conditions unhandled.

>>>Oh, and dynamism vs. performance tradeoffs like sealing,
>>>primary classes and limited types.
>>>
>>I think these are overhyped features which have been adequately
>>addressed in Lisp/Scheme using different implementations as needed,
>>declarations, etc.
>
> The point is that you can start writing code without caring about
> performance. Once the design has settled, you can sprinkle some
> adjectives here and there, and the code becomes fast, without having
> to re-implement performance-critical code. I consider sealing to be a
> good thing.

I do this in Scheme today, and I don't even sprinkle adjectives here and
there, by developing in a developer-friendly environment and then
switching to a highly-optimized block compiler for tuning and production.

Bruce Hoult

Dec 26, 2001, 11:05:12 AM
In article <3C29E0DF...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
>
> >>Andreas Bogk wrote:
> >>
> >>>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> >>>language, and it's got a few things right (but on the other hand
> >>>omitted some features some people consider essential).
> >>>
> >>I consider Lisp syntax (or something similarly elegant) to be
> >>essential. I suspect that many proponents of Dylan-like languages would
> >>consider it unacceptable. I strongly suspect there is no middle ground.
> >
> > I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> > (PostScript, Forth). But even after much use of the others I find that
> > I do prefer "conventional" syntax.
>
> It isn't a question of using. It is a question of being able to define
> new syntax without stretching or breaking the inherent limits of the
> existing syntax. Lisp lives essentially forever in the world of
> computer languages because it almost can't be outgrown.

That's true only in the trivial sense that Lisp has no syntax, so Lisp
syntax can't be outgrown. Dylan has pretty much all the same semantics
as Lisp, and a malleable syntax.


> To the extent
> that Dylan lives at all, it will still die when the world decides that
> objects aren't that central to programming after all, and moves on to
> some other model, or when someone comes up with a new syntactic
> construct that is incompatible with Dylan's syntax. Lisp will live on.

There is no such construct. If it can be fitted into Lisp's
functions-only notation then it can also be fitted into Dylan's
functions and function-macros. In Dylan it may well be *better* fitted
into statement macros, but that's an additional possibility, not a
restriction.


> >>(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> >>got abandoned.)
> >
> > The reason as I understand it is that no one could figure out how to
> > bidirectionally map macros between infix and prefix.
> >
> > I'm not sure whether this is impossible or merely hard.
>
> And the reason the decision was made to drop prefix rather than infix
> when that happened was the overriding goal of trying to sell Dylan
> alongside Java or C as a language for the great masses. (Which today
> seems utterly absurd.)

Why? Since that decision was made, the great masses have adopted both
Java and Perl, while Lisp has remained in the wilderness. I don't see
any reason to think that infix syntax is a *disadvantage* to the goal of
attaining popularity. The time may simply be not yet right. After all,
it is only just now that reasonably mature Dylan implementations are
becoming available.


> Many smart people have observed that when you encounter a "hard" (if not
> impossible) problem, you have already made a mistake somewhere back down
> the road.

Or no one had the correct "ah-ha" moment yet.


> Trying to "add" an infix syntax without recognizing that this
> almost certainly means losing expressive power and generality was just
> such a mistake.

In your opinion.


> > Another solution might be to explicitly define both syntaxes when you
> > define a macro. More work, but then you don't define new syntax quite
> > as often as you define functions.
>
> That would be very error prone.

A great many things in programming are error prone. In fact anything in
which it is impossible to make a mistake is almost certainly not
powerful enough to be useful. It is reasonable to expect that
programmers have *some* skill. Also, even if a compiler can't
reasonably translate an infix macro to a prefix macro (or the reverse),
it seems entirely reasonable for it to apply some consistency checks to
two such macros supplied by a human.

-- Bruce

Jeffrey Siegal

Dec 26, 2001, 12:07:40 PM
Bruce Hoult wrote:
> That's true only in the trivial sense that Lisp has no syntax, so Lisp
> syntax can't be outgrown.

Hardly. It just has a syntax with a very simple and powerful basic
construction rule. However, on top of that construction rule,
enormously powerful syntactic abstractions can be (and are) built. What
Algol-like languages lack is the basic construction rule which allows
you to decompose the syntax into elemental components. That makes
any macro system either enormously complex or lacking in power, or both.

Consider, for example, what low-level Lisp macros would look like in an
Algol-like language. They can be done but the result is enormously
complex (and also fragile; if the language syntax is extended, macros
written that way will likely break).
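As one concrete illustration (a textbook example, not from the post): because Lisp code is built from a single construction rule, a macro can take code apart and reassemble it by plain pattern matching, with no parser in sight.

```scheme
;; swap! rewrites (swap! x y) into a let/set! expansion; the macro
;; sees its operands as ordinary subforms of the s-expression.
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))

(define x 1)
(define y 2)
(swap! x y)
;; now x => 2 and y => 1
```

An Algol-syntax macro system has to expose grammar categories (statements, expressions, declarations) to achieve the same effect, which is exactly the complexity being discussed here.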

> There is no such construct. If it can be fitted into Lisp's
> functions-only notation then it can also be fitted into Dylan's
> functions and function-macros.

Of course it can, just as you could write a Lisp interpreter in Dylan
and use that. But at some point it becomes language-abuse, not
language-use, because the facilities the language provides to help you
end up either being in the way, or being useless warts. I can tell you
from experience that trying to do extremely complex things with
function-style macros in an Algol-like language is far, far worse than
doing the same thing in Lisp, since such things are a natural extension
of the Lisp syntax but strongly conflict with the flavor of an
Algol-style language. Yes, it can be done that way, but it might as
well not be possible because no one will want to use it.

> > And the reason the decision was made to drop prefix rather than infix
> > when that happened was the overriding goal of trying to sell Dylan
> > alongside Java or C as a language for the great masses. (Which today
> > seems utterly absurd.)
>
> Why?

I didn't mean the decision was absurd at the time, just that the
possibility of Dylan being sold to the great masses today is absurd.
Dylan is a useful niche language, which is all it will ever be. As a
niche language, though, you don't need to sell it with a candy-coated
syntax. I might be using it today if the Lisp syntax had been retained,
but I have no interest whatsoever in an Algol-syntax niche language. If
I'm going to use such a language, it is going to at least be a
mainstream one with all of the benefits that accrue from that status
(i.e., all things considered I'd rather use Java, and I do, than Dylan,
despite recognizing that Dylan is a much nicer language).

> Since that decision was made, the great masses have adopted both
> Java and Perl, while Lisp has remained in the wilderness. I don't see
> any reason to think that infix syntax is a *disadvantage* to the goal of
> attaining popularity.

I wasn't suggesting that.

> The time may simply be not yet right. After all,
> it is only just now that reasonably mature Dylan implementations are
> becoming available.

With all due respect, I think you are dreaming, and I think some honest
self-reflection would confirm that.

> > Many smart people have observed that when you encounter a "hard" (if not
> > impossible) problem, you have already made a mistake somewhere back down
> > the road.
>
> Or no one had the correct "ah-ha" moment yet.

Taking a path which requires an as-yet-unknown "ah ha" to succeed is a
design error. It is those moments which make new paths feasible. Blind
leaps occasionally do lead there (I'm a big fan of evolutionary
learning), but when they don't, you should be willing to accept that the
leap was a mistake and backtrack.

> > Trying to "add" an infix syntax without recognizing that this
> > almost certainly means losing expressive power and generality was just
> > such a mistake.
>
> In your opinion.

Absolutely true.

> > > Another solution might be to explicitly define both syntaxes when you
> > > define a macro. More work, but then you don't define new syntax quite
> > > as often as you define functions.
> >
> > That would be very error prone.
>
> A great many things in programming are error prone. In fact anything in
> which it is impossible to make a mistake is almost certainly not
> powerful enough to be useful. It is reasonable to expect that
> programmers have *some* skill.

Requiring a programmer to maintain two distinct pieces of code which are
supposed to have the same effect is something that experience shows to
be extremely difficult and error prone. As development practices go,
such an approach is best avoided.

Francois-Rene Rideau

Dec 26, 2001, 2:49:26 PM
Jeffrey Siegal <j...@quiotix.com> writes Re: New Lisp ?

> Consider, for example, what low-level Lisp macros would look like in an
> Algol-like language. They can be done but the result is enormously
> complex (and also fragile; if the language syntax is extended, macros
> written that way will likely break).
I wonder what you, or someone who knows them as well as LISP macros,
think of CamlP4 or of parse-tree filtering in Erlang.
These may not be as seamlessly integrated into their mother language as
LISP macros are, but they look very promising.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
A language that doesn't affect the way you think about programming,
is not worth knowing. -- Alan Perlis

Jeffrey Siegal

Dec 26, 2001, 3:33:23 PM
Francois-Rene Rideau wrote:

>>Consider, for example, what low-level Lisp macros would look like in an
>>Algol-like language. They can be done but the result is enormously
>>complex (and also fragile; if the language syntax is extended, macros
>>written that way will likely break).
>>
> I wonder what you, or someone who knows them as well as they know LISP
> macros, think of CamlP4 or of parse-tree filtering in Erlang.

I have not looked at them before, so I am not very familiar with them. I
looked quickly at CamlP4 and it looked very similar to what I've seen
before in terms of attempts to do this. In particular, fairly complex,
and requiring the programmer to understand quite a bit about parsing
theory and practice (an interesting field, but not one that every
programmer necessarily knows about or wants to know about).

Anyone who cannot see that the complexity of such things is a strong
argument in favor of a simple Lisp-like syntax[*] is blind or
prejudiced. Perhaps not an overriding argument that would cause one to
use a Lisp syntax despite other issues, but still...

[*] By "Lisp-like" syntax I mean a syntax that can be constructed and
decomposed using a few simple, easy-to-understand rules. It doesn't
necessarily need to be Lisp syntax itself. For example, it might use
indentation rather than parentheses to indicate nesting. Or it might be
something else. But whatever it is, it should reduce to some sort of
logical and simple internal form, not some mostly random collection of
Algol-like constructs that exist largely as the result of a string of
historical accidents.

Feuer

Dec 26, 2001, 9:20:28 PM
Bruce Hoult wrote:

> In article <3C27C7BC...@quiotix.com>, Jeffrey Siegal
> <j...@quiotix.com> wrote:

>
> > I consider Lisp syntax (or something similarly elegant) to be
> > essential. I suspect that many proponents of Dylan-like languages would
> > consider it unacceptable. I strongly suspect there is no middle ground.
>
> I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> (PostScript, Forth). But even after much use of the others I find that
> I do prefer "conventional" syntax.

The advantage of a language with significant syntax is that it allows the
programmer to quickly and easily write certain kinds of code. For example, ML
and Haskell syntax make it easy to write curried functions and function
applications, as well as infix operators. It would be quite annoying to call
a simple function by saying
(((foldl f) h) lst)

Infix is probably less important, but is convenient.
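For instance, with Haskell's syntax the call is just `foldl f h lst`; in a hypothetical fully-curried Scheme (a sketch for illustration, not standard practice), every argument needs its own application:

```scheme
;; A hypothetical curried foldl: each argument takes an explicit
;; application, hence the nested parentheses at every call site.
(define (foldl f)
  (lambda (acc)
    (lambda (lst)
      (if (null? lst)
          acc
          (((foldl f) (f acc (car lst))) (cdr lst))))))

(((foldl +) 0) '(1 2 3))   ; => 6
```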

Bruce Hoult

Dec 26, 2001, 7:20:19 PM
In article <3C2A03DC...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > That's true only in the trivial sense that Lisp has no syntax, so Lisp
> > syntax can't be outgrown.
>
> Hardly. It just has a syntax with a very simple and powerful basic
> construction rule. However, on top of that construction rule,
> enormously powerful syntactic abstractions can be (and are) built. What
> Algol-like languages lack is the basic construction rule which allows
> you to decompose the syntax down into elemental components. That makes
> any macro system either enormously complex or lacking in power, or both.
>
> Consider, for example, what low-level Lisp macros would look like in an
> Algol-like language. They can be done but the result is enormously
> complex (and also fragile; if the language syntax is extended, macros
> written that way will likely break).

There are examples of the same thing happening in reverse. When macros
get sufficiently complex and have enough combinations of different
possibilities, it becomes too difficult to follow a purely S-expression
syntax. Consider the example of the Dylan "for" macro. How would you
map all that functionality and all those options onto an S-expression
syntax? Well, we can look at what is done in Common Lisp with the
"loop" macro. The same idea, very nearly exactly the same options. And
we find that in fact it does *not* use S-expression syntax, but instead
makes a little infix language that ends up very similar to that part of
Dylan.

As you say: "at some point it becomes language-abuse, not language-use".

Now, I happen to think that the "loop" macro is a *good* thing about
Common Lisp, but:

1) building such things yourself isn't well supported in CL (it is in
Dylan)

2) I find that I actually *prefer* this sort of thing to have infix
syntax, and prefer all loops and other control structures to use it. If
nothing else, it means that you know immediately whether you're looking
at a standard function application or a special form. That's what Dylan
does.


> > There is no such construct. If it can be fitted into Lisp's
> > functions-only notation then it can also be fitted into Dylan's
> > functions and function-macros.
>
> Of course it can, just as you could write a Lisp interpreter in Dylan
> and use that. But at some point it becomes language-abuse, not
> language-use, because the facilities the language provides to help you
> end up either being in the way, or being useless warts. I can tell you
> from experience that trying to do extremely complex things with
> function-style macros in an Algol-like language is far, far worse than
> doing the same thing in Lisp

Which algol-like language?


> I didn't mean the decision was absurd at the time, just that the
> possibility of Dylan being sold to the great masses today is absurd.
> Dylan is a useful niche language, which is all it will ever be.

Presumably, then, you feel the same way about Lisp?


> As a niche language, though, you don't need to sell it with a
> candy-coated syntax. I might be using it today if the Lisp syntax
> had been retained, but I have no interest whatsoever in an
> Algol-syntax niche language.

*You* may not, but not everyone feels that way. Apart from Dylan, there
are people out there using OCaml, Haskell and a bunch of lesser-known
niche languages with Algol-like syntaxes. Not all of them intend to
remain niche languages.


> If I'm going to use such a language, it is going to at least be a
> mainstream one with all of the benefits that accrue from that status
> (i.e., all things considered I'd rather use Java, and I do, than Dylan,
> despite recognizing that Dylan is a much nicer language).

Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
I started using it) C++ wasn't mainstream. The same goes for Perl
before the WWW happened. Plenty of languages have made the transition
from niche to mainstream in the past, and there is every reason to think
that plenty more will in the future. C++ and Perl and Java are not the
last word in language design for the masses.


> > The time may simply be not yet right. After all, it is only just
> > now that reasonably mature Dylan implementations are becoming
> > available.
>
> With all due respect, I think you are dreaming, and I think some honest
> self-reflection would confirm that.

Dreaming in what respect? Are you saying that reasonably mature Dylan
implementations are not yet available? Or that they have been for some
time? Or something else?

I'm certainly under no illusions that "a reasonably mature
implementation" is sufficient for market success. But it's surely
necessary.


> > > Many smart people have observed that when you encounter a "hard"
> > > (if not impossible) problem, you have already made a mistake
> > > somewhere back down the road.
> >
> > Or no one had the correct "ah-ha" moment yet.
>
> Taking a path which requires an as-yet-unknown "ah ha" to succeed is a
> design error.

I agree. And that path -- attempting to support both infix and prefix
syntaxes -- has *not* been taken. A clean switch was made from one to
the other.


> > > > Another solution might be to explicitly define both syntaxes
> > > > when you define a macro. More work, but then you don't define
> > > > new syntax quite as often as you define functions.
> > >
> > > That would be very error prone.
> >
> > A great many things in programming are error prone. In fact
> > anything in which it is impossible to make a mistake is almost
> > certainly not powerful enough to be useful. It is reasonable to
> > expect that programmers have *some* skill.
>
> Requiring a programmer to maintain two distinct pieces of code which are
> supposed to have the same effect is something that experience shows to
> be extremely difficult and error prone. As development practices go,
> such an approach is best avoided.

Mainstream programmers are expected to keep functions and prototypes in
synch. They are expected to maintain quite complex invariants over
large bodies of code, usually without benefit of anything more powerful
than "assert". They are expected to declare the type of a variable in
one place and then use it appropriately in other places. They are
expected to make sure that variables are correctly initialized over all
execution paths. They are expected to explicitly free dynamic memory at
those points -- and only those points -- where it is no longer needed.

Compared to some of those, correctly setting up alternate syntaxes in
the odd macro definition is hardly onerous or error-prone. And it might
be totally optional -- needed *only* if you want to use both. Most
mainstream programmers would presumably be satisfied with the Algol-like
syntax in the first place.

-- Bruce

Bruce Hoult

Dec 26, 2001, 7:27:15 PM
In article <3C2A856C...@his.com>, Feuer <fe...@his.com> wrote:

> Bruce Hoult wrote:
> > I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> > (PostScript, Forth). But even after much use of the others I find that
> > I do prefer "conventional" syntax.
>
> The advantage of a language with significant syntax is that it allows
> the programmer to quickly and easily write certain kinds of code. For
> example, ML and Haskell syntax make it easy to write curried functions
> and function applications, as well as infix operators. It would be
> quite annoying to call a simple function by saying (((foldl f) h) lst)

That's true, but it seems awfully arbitrary.

How often do you find that you need to put function arguments in an
uncomfortable order so that the automatic currying works out right?

-- Bruce

Kaz Kylheku

Dec 26, 2001, 8:25:57 PM
In article <87zo45sq...@Samaris.tunes.org>, Francois-Rene Rideau wrote:
>Jeffrey Siegal <j...@quiotix.com> writes Re: New Lisp ?
>> Consider, for example, what low-level Lisp macros would look like in an
>> Algol-like language. They can be done but the result is enormously
>> complex (and also fragile; if the language syntax is extended, macros
>> written that way will likely break).
>I wonder what you, or someone who knows them as well as they know LISP
>macros, think of CamlP4 or of parse-tree filtering in Erlang.

Once you introduce parse tree filtering, don't you think that users will
eventually want a way to specify any arbitrary parse tree, not just ones
that correspond to the few shapes determined by a hardcoded parser?

Then you are looking at some bracketed notation.

Coby Beck

Dec 26, 2001, 8:39:06 PM

"Bruce Hoult" <br...@hoult.org> wrote in message
news:bruce-9D1859....@news.paradise.net.nz...

>
> - why do aref and gethash in CL have opposite argument orders?
>
>

aref is also opposite to nth. I think one "justification", if not
"reason", for this is the need to accommodate multiple indices in aref.
For nth, the second argument is the list; but for aref, if the array
were not the first argument, it could be the 2nd, 3rd, etc., depending
on how many dimensions the array has.
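Concretely, the argument orders in question (standard CL, sketched from memory):

```lisp
(nth 2 '(a b c d))     ; index first, then the list        => C
(aref arr 2 3)         ; array first, then any number of indices
(gethash 'key table)   ; key first, then the hash table
```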

--
Coby
(remove #\space "coby . beck @ opentechgroup . com")


David Rush

Dec 26, 2001, 9:08:03 PM
Andreas Bogk <and...@andreas.org> writes:

> Jeffrey Siegal <j...@quiotix.com> writes:
> > As for conditions, I prefer
> > passing condition handlers as explicit arguments. With proper tail
> > calls and limited use of call/cc to escape out of CPS, it works fine.
>
> I don't think so. Having to pass around handlers for all sorts of
> conditions is a nuisance. This is something CL and Dylan got right,
> IMHO.

Well, I've not written any large reactive systems using CPS for
condition-handling, but it certainly seems to work well in my
data-mining code. As things stand today, I'd probably not choose
Scheme for a large GUI application, although I'm cooking up ideas to
try out in PLT Scheme just to see if their GUI support is as good as
it looks. Maybe sometime in this millennium I'll get around to it.

> > > Oh, and dynamism vs. performance tradeoffs like sealing,

Huh? What is this feature?

> > I think these are overhyped features which have been adequately
> > addressed in Lisp/Scheme using either different implementations as
> > needed, declarations, etc.
>
> The point is that you can start writing code without caring about
> performance.

Surely you *don't* really mean this. Big-O issues will jump up and get
you if you don't think about them.

> Once the design has settled, you can sprinkle some
> adjectives here and there, and the code becomes fast, without having
> to re-implement performance-critical code. I consider sealing to be a
> good thing.

Do you not also get the same benefits if you develop using good
functional abstractions?

david rush
--
The beginning of wisdom for a [software engineer] is to recognize the
difference between getting a program to work, and getting it right.
-- M A Jackson, 1975

David Rush

Dec 26, 2001, 9:24:50 PM
Bruce Hoult <br...@hoult.org> writes:
> In article <3C2A856C...@his.com>, Feuer <fe...@his.com> wrote:
> > The advantage of a language with significant syntax is that it allows
> > the programmer to quickly and easily write certain kinds of code. For
> > example, ML and Haskell syntax make it easy to write curried functions
> > and function applications, as well as infix operators. It would be
> > quite annoying to call a simple function by saying (((foldl f) h) lst)
>
> How often do you find that you need to put function arguments in an
> uncomfortable order so that the automatic currying works out right?

Well, comparing my experiences from 5 years ago programming constraint
solvers and scheduling systems in SML to the present where I'm data
mining in Scheme, I'd say I encounter about equal hassle in both
systems w/rt curried functions and partial applications. I use a lot
more HO functions in Scheme, at least partly to simulate SML
functors, and I get mildly annoyed at the number of explicit lambdas I
need to include. OTOH, I spent a fair amount of time in SML fretting
over the most `natural' partial application order (not to mention the
whole tuple vs curried API issue).

On the whole I prefer Scheme, but I might find that I also like SML
better (not that I ever *dis*liked it) now that my fluency in the
functional paradigm has grown.

Just $0.02.

david rush
--
"From the start...the flute has been associated with pure (some might
say impure) energy. Its sound releases something naturally untamed, as
if a squirrel were let loose in a church." --Seamus Heaney

Jeffrey Siegal

Dec 26, 2001, 9:51:46 PM
Bruce Hoult wrote:
> There are examples of the same thing happening in reverse. When macros
> get sufficiently complex and have enough combinations of different
> possibilities, it becomes too difficult to follow a purely S-expression
> syntax.

Not in my experience. YMMV.

> Consider the example of the Dylan "for" macro. How would you
> map all that functionality and all those options onto an S-expression
> syntax?

I haven't looked closely at the Dylan for macro, but I suspect by not
including all that functionality into a single construct.

> > > There is no such construct. If it can be fitted into Lisp's
> > > functions-only notation then it can also be fitted into Dylan's
> > > functions and function-macros.
> >
> > Of course it can, just as you could write a Lisp interpreter in Dylan
> > and use that. But at some point it becomes language-abuse, not
> > language-use, because the facilities the language provides to help you
> > end up either being in the way, or being useless warts. I can tell you
> > from experience that trying to do extremely complex things with
> > function-style macros in an Algol-like language is far, far worse than
> > doing the same thing in Lisp
>
> Which algol-like language?

C (preprocessor macros) and Java (a proprietary preprocessor). Other
people have done similar things with C++ templates and the result is
similarly unwieldy.

> > I didn't mean the decision was absurd at the time, just that the
> > possibility of Dylan being sold to the great masses today is absurd.
> > Dylan is a useful niche language, which is all it will ever be.
>
> Presumably, then, you feel the same way about Lisp?

Absolutely.



> > If I'm going to use such a language, it is going to at least be a
> > mainstream one with all of the benefits that accrue from that status
> > (i.e., all things considered I'd rather use Java, and I do, than Dylan,
> > despite recognizing that Dylan is a much nicer language).
>
> Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
> I started using it) C++ wasn't mainstream. The same goes for Perl
> before the WWW happened. Plenty of languages have made the transition
> from niche to mainstream in the past, and there is every reason to think
> that plenty more will in the future. C++ and Perl and Java are not the
> last word in language design for the masses.

That's all true, but for every language that "breaks out" there are a
zillion that don't, and those that break out generally have a big
promoter, though there are occasional exceptions, on the order of
perhaps one per decade. Not unlike pop artists.

> > > The time may simply be not yet right. After all, it is only just
> > > now that reasonably mature Dylan implementations are becoming
> > > available.
> >
> > With all due respect, I think you are dreaming, and I think some honest
> > self-reflection would confirm that.
>
> Dreaming in what respect? Are you saying that reasonably mature Dylan
> implementations are not yet available? Or that they have been for some
> time? Or something else?

That Dylan is going to go mainstream when "the time is right", and also
about reasonably mature implementations becoming available "just now."
I consider Harlequin's product to have been "reasonably mature" some
time ago.

Dylan will almost certainly never break into the mainstream without a
big promoter. The opportunity was largely lost when Apple dropped it.
Perhaps with Apple's backing it could have given Java a good run, but
without it, no way.

Bruce Hoult

Dec 26, 2001, 11:14:40 PM
In article <3C2A8CC2...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > Consider the example of the Dylan "for" macro. How would you
> > map all that functionality and all those options onto an S-expression
> > syntax?
>
> I haven't looked closely at the Dylan for macro, but I suspect by not
> including all that functionality into a single construct.

Which wouldn't be very satisfactory.

Both CL's "loop" and Dylan's "for" are basically similar to "do" in
Scheme, in that they allow you to iterate with a number of variables
updated in parallel. But unlike Scheme they allow not just "var = init
then update-expression", but also automatic stepping through numeric
ranges ("var from foo to bar by baz") and multiple termination tests.
CL allows stepping through lists and hashes, Dylan allows stepping
through arbitrary collections. CL allows collecting expressions into a
result list (or summing them).

It's hard to see how to decompose this functionality into different
constructs while still allowing different loop variables to
simultaneously be controlled in different ways. Which is very desirable.

On the other hand, Scheme's "do" is already near the limits of
S-expression comprehensibility. Trying to extend it to do what Dylan's
"for" or CL's "loop" do would I think take it well past.
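To make that concrete, here is an illustrative sketch (mine, nothing from either spec) of the same parallel loop in each style:

```scheme
;; What CL's LOOP writes in its infix mini-language as
;;   (loop for i from 0 below 10 by 2
;;         for x in '(a b c d e)
;;         collect (list i x))
;; needs explicit (var init step) triples in Scheme's DO:
(do ((i   0            (+ i 2))
     (xs  '(a b c d e) (cdr xs))
     (acc '()          (cons (list i (car xs)) acc)))
    ((null? xs) (reverse acc)))
;; => ((0 a) (2 b) (4 c) (6 d) (8 e))
```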


> > > I can tell you
> > > from experience that trying to do extremely complex things with
> > > function-style macros in an Algol-like language is far, far worse
> > > than doing the same thing in Lisp
> >
> > Which algol-like language?
>
> C (preprocessor macros) and Java (a proprietary preprocessor). Other
> people have done similar things with C++ templates and the result is
> similarly unwieldy.

I agree in each of those cases.

None of those language syntaxes (well, basically the same one) were
designed to be amenable to sophisticated macro processing. Dylan's
*was*. All the declarations and control structures were designed from
the outset to be implemented as macros. And in d2c, at least, they are.


> > > I didn't mean the decision was absurd at the time, just that the
> > > possibility of Dylan being sold to the great masses today is absurd.
> > > Dylan is a useful niche language, which is all it will ever be.
> >
> > Presumably, then, you feel the same way about Lisp?
>
> Absolutely.

Fair enough then.


> > > If I'm going to use such a language, it is going to at least be a
> > > mainstream one with all of the benefits that accrue from that status
> > > (i.e., all things considered I'd rather use Java, and I do, than
> > > Dylan,
> > > despite recognizing that Dylan is a much nicer language).
> >
> > Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
> > I started using it) C++ wasn't mainstream. The same goes for Perl
> > before the WWW happened. Plenty of languages have made the transition
> > from niche to mainstream in the past, and there is every reason to
> > think
> > that plenty more will in the future. C++ and Perl and Java are not the
> > last word in language design for the masses.
>
> That's all true, but for every language that "breaks out" there are a
> zillion that don't,

Sure.

> and those that break out generally have a big promoter

Actually, that appears to be the exception. Java had huge promotion. C
and C++ might have come from AT&T, but AT&T can't really be said to have
*promoted* them. The authors pushed them personally, just as Larry Wall
did with Perl and Guido did with Python.


> > > > The time may simply be not yet right. After all, it is only just
> > > > now that reasonably mature Dylan implementations are becoming
> > > > available.
> > >
> > > With all due respect, I think you are dreaming, and I think some
> > > honest self-reflection would confirm that.
> >
> > Dreaming in what respect? Are you saying that reasonably mature Dylan
> > implementations are not yet available? Or that they have been for some
> > time? Or something else?
>
> That Dylan is going to go mainstream when "the time is right",

I certainly wouldn't put it as strongly as "is going to". "Might have a
shot" is more like it.


> and also
> about reasonably mature implementations becoming available "just now."
> I consider Harlequin's product to have been "reasonably mature" some
> time ago.

Yes, but it's only been on one OS -- and technical people's least
favourite one, at that.

They now have a Linux beta out, which is good.


Gwydion is on probably every interesting platform: Un*x, MacOS, MacOS X,
Windows (Cygwin), BeOS. But it's not as mature as Harlequin/Fun-O and
won't be in a position to even attempt to "break out" for a year or two
yet at the current rate.


> Dylan will almost certainly never break into the mainstream without a
> big promoter. The opportunity was largely lost when Apple dropped it.

That was a sad day, yes. And it's taking a while to recover from. The
good news is that 1) the implementations are getting there, and 2) most
mainstream people have never even heard of it, so when we're ready it'll
be "new" to them, not recycled.


> Perhaps with Apple's backing it could have given Java a good run, but
> without it, no way.

It's a long shot, for sure. I think that probably OCaml has a better
shot at it. Maybe Erlang, with its big backer. Both those are probably
a bit cryptic for the average punter though. We get people turning up
on the Gwydion mailing list saying things like "I never saw Dylan before
but I just browsed through [the compiler | your ICFP entry] and I COULD
ACTUALLY UNDERSTAND IT".

-- Bruce

Jeffrey Siegal

Dec 27, 2001, 12:07:36 AM
Bruce Hoult wrote:
> Which wouldn't be very satisfactory.
>
> Both CL's "loop" and Dylan's "for" are basically similar to "do" in
> Scheme, in that they allow you to iterate with a number of variables
> updated in parallel. But unlike Scheme they allow not just "var = init
> then update-expression", but also automatic stepping through numeric
> ranges ("var from foo to bar by baz") and multiple termination tests.
> CL allows stepping through lists and hashes, Dylan allows stepping
> through arbitrary collections. CL allows collecting expressions into a
> result list (or summing them).
>
> It's hard to see how to decompose this functionality into different
> constructs while still allowing different loop variables to
> simultaneously be controlled in different ways. Which is very desirable.
>
> On the other hand, Scheme's "do" is already near the limits of
> S-expression comprehensibility. Trying to extend it to do what Dylan's
> "for" or CL's "loop" do would I think take it well past.

It isn't as if I started programming yesterday, and I just don't see the
need for all that nonsense. Frankly, I rarely even use the Scheme do
macro. The basic iteration mechanisms are usually powerful enough for
me, and if I need something specific for a particular use, I build it.
If I want to step by 57 and I don't feel like doing it explicitly, I'll
build a loop-by-57 form.

Mostly, though, I don't explicitly iterate very much, preferring
map-style approaches; when I have to use a collection that doesn't
support them, and I don't care much about performance, I'll just resort
to generating a list and using map or for-each directly. (And with a
good compiler, the cost of doing this may be small anyway.) For example,
I have a private collection of map-style iterators for matrices that
allow various degrees of control over stepping and so forth. I find the
overall approach far superior to a for or loop type widget. But, YMMV,
of course.
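To sketch the sort of thing I mean (for-each-by is a name made up for this example, not a library function):

```scheme
;; A purpose-built stepping iterator instead of a general loop
;; construct. Proper tail calls make the recursion a plain loop.
(define (for-each-by step lo hi proc)
  (if (< lo hi)
      (begin
        (proc lo)
        (for-each-by step (+ lo step) hi proc))))

(for-each-by 57 0 200
  (lambda (n) (display n) (newline)))  ; prints 0, 57, 114, 171
```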

> > > > I can tell you
> > > > from experience that trying to do extremely complex things with
> > > > function-style macros in an Algol-like language is far, far worse
> > > > than doing the same thing in Lisp
> > >
> > > Which algol-like language?
> >
> > C (preprocessor macros) and Java (a proprietary preprocessor). Other
> > people have done similar things with C++ templates and the result is
> > similarly unwieldy.
>
> I agree in each of those cases.
>
> None of those language syntaxes (well, basically the same one) were
> designed to be amenable to sophisticated macro processing. Dylan's
> *was*. All the declarations and control structures were designed from
> the outset to be implemented as macros. And in d2c, at least, they are.

You've confused yourself. We were discussing your claim that Dylan
could do anything that Lisp can do because it has function-style
macros. Resorting to function-style macros leaves you in essentially
the same place as doing function-style macros in C or Java or C++.

> Actually, that appears to be the exception. Java had huge promotion. C
> and C++ might have been from AT&T but they can't really be said to have
> *promoted* them.

C was the exception of the 80s. C++ was heavily promoted by Microsoft
(and the other Windows compiler vendors, when there were any). Perl was
perhaps the exception of the 90s (I consider Perl marginal and Python to
be clearly niche).

Andreas Bogk

Dec 26, 2001, 11:26:35 PM
David Rush <ku...@bellsouth.net> writes:

> > I don't think so. Having to pass around handlers for all sorts of
> > conditions is a nuisance. This is something CL and Dylan got right,
> > IMHO.
> Well, I've not written any large reactive systems using CPS for
> condition-handling, but it certainly seems to work well in my
> data-mining code. As things stand today, I'd probably not choose
> Scheme for a large GUI application, although I'm cooking up ideas to

As soon as you pile up some layers of code, it quickly becomes tedious
to pass around handlers everywhere. Just imagine passing a GUI dialog
for resolving a "disk full" condition all the way through the GUI,
your application code, your storage abstraction down to the actual
disk access.

Imagine that you have some OS-agnostic code in the middle layers, and
that you don't even know that some condition might arise and that it
can be fixed. Proper exceptions allow code at distant places to
communicate efficiently.

> > > > Oh, and dynamism vs. performance tradeoffs like sealing,
> Huh? What is this feature?

It allows you to specify that you won't add a new method to a certain
generic function, or to some application domain of that function. For
instance, the Dylan <integer> type is a regular class which cannot be
subclassed. The + function is a generic function sealed over the
domain (<integer>, <integer>), so nobody can override that definition.

So you get all the performance benefits you'd get when implementing
integers specially, like Java does, but still <integer>s are regular
objects, and you can get the same kind of performance benefits for
your own classes.
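In Dylan syntax that looks roughly like this (a sketch; the <point> class is made up for the example):

```dylan
// A class that cannot be subclassed:
define sealed class <point> (<object>)
  slot x :: <integer>;
  slot y :: <integer>;
end class <point>;

// Promise that no method on \+ will ever be added over this
// domain, so the compiler may dispatch statically or inline:
define sealed domain \+ (<point>, <point>);
```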

> > The point is that you can start writing code without caring about
> > performance.
> Surely you *don't* really mean this. Big-O issues will jump up and get
> you if you don't think about them.

Actually, I usually nail down *what* I want to do with a naive
implementation, which isn't really intended to solve the problem, but
just serves me as some kind of formal specification of the problem,
which happens to be executable too. Starting from that, I can
experiment with *how* to solve certain aspects, at which time big-O
complexity comes into play. Only after having found the right
algorithms and a correct implementation for them do I start to think
about low-level performance issues. And I want my language to support
this kind of process.

> > Once the design has settled, you can sprinkle some
> > adjectives here and there, and the code becomes fast, without having
> > to re-implement performance-critical code. I consider sealing to be a
> > good thing.
> Do you not also get the same benefits if you develop using good
> functional abstractions?

Of course you can. I just happen to prefer generic functions, and I
want them to be as fast as the functional approach, which can be done
with sealing.

Andreas Bogk

Dec 27, 2001, 12:11:03 AM
Jeffrey Siegal <j...@quiotix.com> writes:

> Dylan will almost certainly never break into the mainstream without a
> big promoter. The opportunity was largely lost when Apple dropped it.
> Perhaps with Apple's backing it could have given Java a good run, but
> without it, no way.

It need not be a commercial promoter. I think there are enough people
out there (and I guess a lot of them are reading these newsgroups) who
wish there was something like a modern LISP machine, or more
realistically, an operating system running on commodity hardware that
was written from scratch in a dynamic language. There might be enough
people (and smart enough people) to start the next Linux, who knows?

If such an OS (and especially its APIs) allowed for multiple
syntaxes, or even multiple languages, it would appeal both to the
experts and to the masses.

Jeffrey Siegal

Dec 27, 2001, 12:18:23 AM
Andreas Bogk wrote:
> It need not be a commercial promoter. I think there are enough people
> out there (and I guess a lot of them are reading these newsgroups) who
> wish there was something like a modern LISP machine, or more
> realistically, an operating system running on commodity hardware that
> was written from scratch in a dynamic language. There might be enough
> people (and smart enough people) to start the next Linux, who knows?

I agree with this.

However, Dylan is so far from that it might as well be in the next
universe. You've got a much better shot with Java, frankly.

Jeffrey Siegal

unread,
Dec 27, 2001, 12:23:24 AM12/27/01
to
Andreas Bogk wrote:
> As soon as you pile up some layers of code, it quickly becomes tedious
> to pass around handlers everywhere. Just imagine passing a GUI dialog
> for resolving a "disk full" condition all the way through the GUI,
> your application code, your storage abstraction down to the actual
> disk access.

What happens in Java is that you have to at least declare the exceptions
up the chain anyway (the compiler will reject a method that doesn't
catch or declare E but calls a method declared to throw E). It isn't
that much harder to explicitly pass the handler.
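The rule Siegal is describing can be sketched in a few lines (all class and method names here are made up for illustration; deleting the `throws` clause from the intermediate method would be a compile-time error):

```java
// Hypothetical sketch of Java's checked-exception rule: every method
// between the thrower and the handler must either catch the exception
// or declare it in its own throws clause.
class DiskFullException extends Exception {
    DiskFullException(String msg) { super(msg); }
}

public class Checked {
    // The low-level thrower declares the exception...
    static void writeBlock() throws DiskFullException {
        throw new DiskFullException("disk full");
    }

    // ...and so must every intermediate caller, even though it does
    // nothing with the condition itself.
    static void saveDocument() throws DiskFullException {
        writeBlock();
    }

    // Only the handler actually deals with the condition.
    static String trySave() {
        try {
            saveDocument();
            return "saved";
        } catch (DiskFullException e) {
            return "handled: " + e.getMessage();
        }
    }
}
```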

> So you get all the performance benefits you'd get when implementing
> integers specially, like Java does, but still <integer>s are regular
> objects, and you can get the same kind of performance benefits for
> your own classes.

Yes and no. They're not "regular objects" because they can't be
subclassed. I think what you'd see in any kind of production
environment if Dylan were used is that almost everything would get
sealed off, much the way a lot of Java code makes extensive use of
"final." At that point, you might as well just use a static block
compiler and let the compiler recognize what is subclassed and what
isn't.
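The `final` analogy can be made concrete (class names invented for the sketch): a `final` class cannot be subclassed, so the compiler knows every call on it resolves to exactly one method, which is roughly the dispatch benefit sealing is after.

```java
// A final class cannot be subclassed, so calls on it can be resolved
// statically -- roughly the benefit sealing aims for in Dylan.
final class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
    double norm() { return Math.sqrt(x * x + y * y); }
}

// class Point3D extends Point { }  // compile-time error: cannot
//                                  // inherit from final Point
```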

Bruce Hoult

unread,
Dec 27, 2001, 12:32:18 AM12/27/01
to
In article <3C2AB04C...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > As soon as you pile up some layers of code, it quickly becomes tedious
> > to pass around handlers everywhere. Just imagine passing a GUI dialog
> > for resolving a "disk full" condition all the way through the GUI,
> > your application code, your storage abstraction down to the actual
> > disk access.
>
> What happens in Java is that you have to at least declare the exceptions
> up the chain anyway (the compiler will reject a method that doesn't
> catch or declare E but calls a method declared to throw E). It isn't
> that much harder to explicitly pass the handler.

That's perfectly true.

It's also true that dealing with exception specifications in Java
*sucks*. In large Java projects I've worked on, the vast majority of CVS
commits end up being maintenance on the exception specifications. It's
just a lot of pointless makework.

Being "not much harder" than Java is no recommendation.

-- Bruce

israel r t

unread,
Dec 27, 2001, 12:36:54 AM12/27/01
to
On Thu, 27 Dec 2001 17:14:40 +1300, Bruce Hoult <br...@hoult.org>
wrote:

>> Dylan will almost certainly never break into the mainstream without a
>> big promoter. The opportunity was largely lost when Apple dropped it.
>
>That was a sad day, yes. And it's taking a while to recover from. The
>good news is that 1) the implementations are getting there, and 2) most
>mainstream people have never even heard of it, so when we're ready it'll
>be "new" to them, not recycled.

Dylan's biggest liability is its name ( named after an elderly
has-been 1960's rocker that only my parents would have been seen dead
listening to ) and the perception that it was "dropped by Apple".

Perhaps renaming it and changing its Pascal-like syntax either towards
Scheme or towards C might get some disillusioned Schemers, Lispers,
or even some apostates from Java and C#...
As for names: Skylan / Skylark for the Schemefied version, or Cyclan for
the C-syntax version (I was an EC Tubbs fan...). Or if you want a
musical name, Bach or Fugue would be nice (Mozart is already taken by
the Oz/Mozart language).

Andreas Bogk

unread,
Dec 27, 2001, 12:45:11 AM12/27/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> What happens in Java is that you have to at least declare the exceptions
> up the chain anyway (the compiler will reject a method that doesn't

Yes, and that bothers me no end. I want to have specific code that
knows about an exception in exactly two places: where it is generated,
and where it can be handled. All the code in between doesn't need to
know more than that an operation has failed and that it needs to clean
up.

> Yes and no. They're not "regular objects" because they can't be
> subclassed.

That's true. The point of sealing is to offer the option of turning
off certain OO features while retaining the benefits of others (the
user can still specialize his own generic functions on integers, for
instance).

> I think what you'd see in any kind of production
> environment if Dylan were used is that almost everything would get
> sealed off, much the way a lot of Java code makes extensive use of
> "final." At that point, you might as well just use a static block
> compiler and let the compiler recognize what is subclassed and what
> isn't.

I'd hate to use a static block compiler, the turnaround time would be
a nightmare. And I'd like to keep the option of adding classes and gf
methods at runtime.

Andreas

Andreas Bogk

unread,
Dec 27, 2001, 12:53:45 AM12/27/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> However, Dylan is so far from that it might as well be in the next
> universe. You've got a much better shot with Java, frankly.

Java might get the masses, but it's not in the heart of the wizards.
But it's the wizards who would be able to start such a project.

Bruce Hoult

unread,
Dec 27, 2001, 1:13:27 AM12/27/01
to
In article <3C2AAC98...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > Which wouldn't be very satisfactory.
> >
> > Both CL's "loop" and Dylan's "for" are basically similar to "do" in
> > Scheme, in that they allow you to iterate with a number of variables
> > updated in parallel. But unlike Scheme they allow not just "var = init
> > then update-expression", but also automatic stepping through numeric
> > ranges ("var from foo to bar by baz") and multiple termination tests.
> > CL allows stepping through lists and hashes, Dylan allows stepping
> > through arbitrary collections. CL allows collecting expressions into a
> > result list (or summing them).
> >
> > It's hard to see how to decompose this functionality into different
> > constructs while still allowing different loop variables to
> > simultaneously be controlled in different ways. Which is very
> > desirable.
> >
> > On the other hand, Scheme's "do" is already near the limits of
> > S-expression comprehensibility. Trying to extend it to do what Dylan's
> > "for" or CL's "loop" do would I think take it well past.
>
> It isn't as if I started programming yesterday, and I just don't see the
> need for all that nonsense. Frankly, I rarely even use the Scheme do
> macro. The basic iteration mechanisms are usually powerful enough for
> me, and if I need something specific for a particular use, I build it.
> If I want to step by 57 and I don't feel like doing it explicitly, I'll
> build a loop-by-57 form.

I suspect that this is pretty much where Scheme people on one hand and
CL and Dylan people on the other hand part ways. Everyone appreciates
generality and power when they need them, but the latter two groups also
value notational convenience for the common cases. Dylan expands the
"for" macro into a tail-recursive function (and CL does something
similar) precisely because many people find that easier to write, read,
and understand than the explicit tail-recursive form, for most common
cases.


> > > > > I can tell you
> > > > > from experience that trying to do extremely complex things with
> > > > > function-style macros in an Algol-like language is far, far worse
> > > > > than doing the same thing in Lisp
> > > >
> > > > Which algol-like language?
> > >
> > > C (preprocessor macros) and Java (a proprietary preprocessor). Other
> > > people have done similar things with C++ templates and the result is
> > > similarly unwieldy.
> >
> > I agree in each of those cases.
> >
> > None of those language syntaxes (well, basically the same one) were
> > designed to be amenable to sophisticated macro processing. Dylan's
> > *was*. All the declarations and control structures were designed from
> > the outset to be implemented as macros. And in d2c, at least, they
> > are.
>
> You've confused yourself. We were discussing your claim that Dylan
> could do anything that Lisp can do because it has function-style
> macros. Resorting to function-style macros leaves you in essentially
> the same place as doing function-style macros in C or Java or C++.

Rather better off, I think, since the C and C++ preprocessor is crap and
Java doesn't have one at all. Macro expansion in Dylan is *far* better
behaved, since it is hygienic and obeys lexical scoping (both with
respect to which macro is in scope where, and respecting the scoping of
arguments to the macro).


> > Actually, that appears to be the exception. Java had huge promotion.
> > C and C++ might have been from AT&T but they can't really be said
> > to have *promoted* them.
>
> C was the exception of the 80s.

So what was Pascal?


> C++ was heavily promoted by Microsoft

!!!

Microsoft didn't even *have* a C++ compiler until I'd been using the
language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
it was total crap anyway, and VC++ wasn't really usable until 4.1 or 4.2
or something like that.

-- Bruce

Erik Naggum

unread,
Dec 27, 2001, 2:06:44 AM12/27/01
to
* israel r t
> Lisp needs to reinvent itself.

* Andreas Bogk


| I suggest to take a look at Dylan.

Since the whole thread is a spoof of an article that caused the Scheme
community to explode in rage and the resident Dylan fans completely fail
to understand that this is a stupid attempt to inflame the Lisp community
likewise, but rather once again take part in it with their unsolicited
Dylan marketing campaign -- which is no surprise, like Scheme fans, they
also erroneously think their language is a Lisp and annoy comp.lang.lisp
with marketing for their Lisp-wannabe languages every once in a while --
the conclusion is clear: Dylan is worth a look if and only if Lisp needs
to reinvent itself, which it does not need any more than Scheme does.

Both Dylan and Scheme have distanced themselves from the Lisp community
in a number of important ways, but when they lose ground, they return to
their Lisp heritage, and when they gain ground, they point out how Lisp
is no longer worth anyone's time. This closely parallels the behavior
of immature children who want to distance themselves from their parents
as long as they risk nothing by doing so. If Dylan and Scheme were for
real, they would make a clean cut with Lisp and attempt to make it on
their own without constant references to their heritage, good or bad.
"Members of the Lisp family" try to point out much better they are than
their parent, whatever "Lisp" as a parent means. Even Paul Graham, the
inventor of the silly new toy language "arc", needs to point out how
Common Lisp is superior to his new toy by knocking Common Lisp before he
has anything to show for himself.

Attacking Common Lisp is primarily a way to say "I don't understand
feature X, therefore it must be bad and should be removed". If they
spent as much time trying to understand what was going on as they do
trying to fight against Common Lisp, they would not need to fight, either.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Frank A. Adrian

unread,
Dec 27, 2001, 2:20:35 AM12/27/01
to
israel r t wrote:
> Dylan's biggest liability is its name ( named after an elderly
> has-been 1960's rocker that only my parents would have been seen dead
> listening to )...

Even I know that Dylan was named after Dylan Thomas, and because Dylan
was an acronym for DYnamic LANguage. Hell, the first Dylan compiler
(written in MIT Scheme) was named Thomas. Bob Dylan's lawyers also tried
to sue Apple over this mis-perceived naming idea, and the case was won by
Apple.

> ... and the perception that it was "dropped by Apple".

Most people who the purveyors of Dylan are targeting aren't even aware of
the role Apple played with the language.

Not that any of this will help the language in the long run (just so no one
thinks I have any sort of soft spot for the language as it is now).

faa

israel r t

unread,
Dec 27, 2001, 5:36:15 AM12/27/01
to
On 27 Dec 2001 06:53:45 +0100, Andreas Bogk <and...@andreas.org>
wrote:

>Jeffrey Siegal <j...@quiotix.com> writes:
>
>> However, Dylan is so far from that it might as well be in the next
>> universe. You've got a much better shot with Java, frankly.
>
>Java might get the masses, but it's not in the heart of the wizards.
>But it's the wizards who would be able to start such a project.

Don't be so sure...
The Demeter project and AspectJ (adaptive and aspect-oriented
programming using extensions to Java) may spawn the next step after
OOP and functional programming. It is certainly wizardly enough for
me.

http://www.ccs.neu.edu/research/demeter/
http://aspectj.org


Janos Blazi

unread,
Dec 27, 2001, 6:21:48 AM12/27/01
to
[...]

> Even Paul Graham, the
> inventor of the silly new toy language "arc", needs to point out how
> Common Lisp is superior to his new toy by knocking Common Lisp before he
> has anything to show for himself.

How can you judge the silliness of arc when it has not even been specified
yet? (The articles he wrote about arc are very clever in my opinion.)

Mr Graham's criticism of CL is sincere and he genuinely loves Lisp. I
remember when we talked about free implementations of CL you were very angry
and said something like it could not be expected of anybody to give away his
work for free, etc. Maybe Mr Graham will do this and then many of us will
support him by buying his O'Reilly book on arc (which hopefully will become
Archlisp). Maybe he will give Lisp the bright future it deserves.

J.B.



Kaz Kylheku

unread,
Dec 27, 2001, 10:36:11 AM12/27/01
to

In Lisp, rather than Scheme, that would be:

(funcall (funcall (foldl f) h) lst)

A ``curried'' function in the first position of a list is not
automatically called.

Now the *disadvantage* of significant syntax is that it allows the programmer
to quickly and easily write the kinds of code that the language designer
thought the programmer ought to be able to write quickly and easily.
That approach assumes that the language designer can foresee how the language
will be used, which could turn into a self-fulfilling prophecy.

If you don't like chaining the funcalls using nested expressions, you
can invent some syntax which folds up the nesting, like

(chained-funcall #'foldfl f h lst)

Another way might be to do it as a mapper:

(mapchain #'start l-1 l-2 ... l-n)

where start has to be a function that takes as many arguments
as there are lists. The first element from every list is taken,
and turned into an argument list for start, which returns
another function that takes as many arguments, and is used
for the second elements of the list and so on.
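As a rough Java sketch of the chaining protocol described above, restricted to the single-list case and with invented names (`Step`, `mapchain`, `adder`): each application consumes one element and returns the function to use for the next one.

```java
import java.util.List;
import java.util.function.Function;

// Each step consumes one element and yields the step for the next.
interface Step extends Function<Integer, Step> { }

public class MapChain {
    // Thread the function through the list: the result of applying the
    // current step to an element becomes the step for the next element.
    static Step mapchain(Step start, List<Integer> xs) {
        Step f = start;
        for (int x : xs) {
            f = f.apply(x);
        }
        return f;
    }

    // Example step: accumulate a running sum, exposed through `out`.
    static Step adder(int acc, int[] out) {
        return x -> {
            out[0] = acc + x;
            return adder(acc + x, out);
        };
    }
}
```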

Now, I have no idea how important it is to make this kind of chaining
more convenient, or in what way. So if I were designing a language, I
would view it as a grave mistake to introduce a special purpose syntax
for it, given that people can just experiment with their own, and settle
on solutions that are right for them. Then if a pattern of useful
primitives emerges from the community, which many people find useful,
their functions and macros can be incorporated into the language.

You also mentioned infix. You can find implementations of infix evaluators
for Common Lisp, so if you really want infix, it's there.

The point is that special syntax is really a form of premature
optimization. Worse, it's a form of optimization based entirely on
guessing what is going to be needed. The resulting language might attract
users who are looking for exactly that, so that the optimization later
appears to have been correct.

Kaz Kylheku

unread,
Dec 27, 2001, 11:10:14 AM12/27/01
to
In article <bruce-9D1859....@news.paradise.net.nz>, Bruce
Hoult wrote:
>- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
>read for some people.

Without some formal study, what is readable is just a guess.

Coming to Lisp after half a lifetime using programming languages that
have some kind of infix assignment, I have no problem reading setf at all.
(That is just anecdotal evidence based on my own experience, and does
not extrapolate into what people find readable, hint, hint).

Even if the := notation is found more readable, it may not be worthwhile.
What is more important is to be able to manage large, complex programs.
Having a simple, programmable syntax is a way of trading some small-scale
readability for a more significant goal.

The real question is whether the given language is a suitable target
language for one's abstractions.

Daniel C. Wang

unread,
Dec 27, 2001, 11:40:45 AM12/27/01
to

k...@ashi.footprints.net (Kaz Kylheku) writes:

> In article <bruce-9D1859....@news.paradise.net.nz>, Bruce
> Hoult wrote:
> >- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
> >read for some people.
>
> Without some formal study, what is readable is just a guess.
>
> Coming to Lisp after half a lifetime using programming languages that
> have some kind of infix assignment, I have no problem reading setf at all.
> (That is just anecdotal evidence based on my own experience, and does
> not extrapolate into what people find readable, hint, hint).

The human visual system has very low-level feature detectors for finding
vertical and horizontal lines. You can easily pick out a horizontal line
hiding in a field of vertical lines, just as picking out the green dot in
a field of yellow is pretty easy. Having the human visual system parse
"setf" takes quite a bit more work.

How many setfs are in this string?

skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf

How many := are in this string?

skdjvsdkjjlk:=lkasfjsflsklsdflkjwiescsfjeriosadflk:=kkfslkfjskfjskkksjdfliwa:=

Or this

---_+_+-==-=++-=-=--=-:==-=-=-=+=-=--=-++=-=--:=-=-=-=--0--=:=---+=--=-

Most people should find the middle tasks the easiest. If there's any
readability advantage to := I doubt it's an infix vs prefix issue. It's
probably more low level. Syntax highlighting probably makes the difference
go away completely.

Janos Blazi

unread,
Dec 27, 2001, 12:01:56 PM12/27/01
to
> How many setfs? are in this string
>
> skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf

Do you come across strings like that frequently in your work? Are you using
a Lisp *without parentheses*?

Kaz Kylheku

unread,
Dec 27, 2001, 12:52:26 PM12/27/01
to
In article <3c2b5...@news.newsgroups.com>, Janos Blazi wrote:
>> How many setfs? are in this string
>>
>>
>skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdf
>liwasetf
>
>Do you come across strings like that frequently in your work? Are you using
>a Lisp *without parentheses*?

Maybe he's using Fortran. When people developed Fortran, compiler writing
was a completely new field. It didn't occur to anyone that removing
all spaces in an early phase of translation was a bad idea. This
led to atrocities, like the infamous:

DOI=1.3

versus

DOI=1,3

The first means assign 1.3 to variable DOI. The second indicates the
start of a DO loop over the variable I from 1 to 3.

Jeffrey Siegal

unread,
Dec 27, 2001, 2:09:03 PM12/27/01
to
Andreas Bogk wrote:

>>What happens in Java is that you have to at least declare the exceptions
>>up the chain anyway (the compiler will reject a method that doesn't
>
> Yes, and that bothers me to no end. I want to have specific code that
> knows about an exception in exactly two places: where it is generated,
> and where it can be handled. All the code inbetween doesn't need to
> know more than that an operation has failed and that it needs to clean
> up.

[This is also a reply to Bruce's earlier comments.]

You will find views on both sides, not unlike the issue of static
typing. Code that isn't written with exceptions passing through it is
not always safe; it may fail to clean up. Making sure that each
method has the appropriate declarations is a way to catch these things
at compile time. (Some methodological discipline is required to get any
benefit out of this, of course, but when isn't it?)

>>I think what you'd see in any kind of production
>>environment if Dylan were used is that almost everything would get
>>sealed off, much the way a lot of Java code makes extensive use of
>>"final." At that point, you might as well just use a static block
>>compiler and let the compiler recognize what is subclassed and what
>>isn't.
>
> I'd hate to use a static block compiler, the turnaround time would be
> a nightmare.


You don't generally use a static block compiler when code is in active
development. For example, I develop in DrScheme and then block compile
with Stalin for performance tuning and production use.

> And I'd like to keep the option of adding classes and gf
> methods at runtime.

Then you can't use sealing (much).

Jeffrey Siegal

unread,
Dec 27, 2001, 2:17:28 PM12/27/01
to
Bruce Hoult wrote:

> I suspect that this is pretty much where Scheme people on one hand and
> CL and Dylan people on the other hand part ways. Everyone appreciates
> generality and power when they need them, but the latter two groups also
> value notational convenience for the common cases. Dylan expands the
> "for" macro into a tail-recursive function (and CL does something
> similar) precisely because many people find that easier to write, read,
> and understand than the explicit tail-recursive form, for most common
> cases.

What you snipped is that I rarely use explicit iteration in complex
programs. I strongly prefer map-like forms, which are both conceptually
powerful and notationally convenient. I don't usually write explicit
iteration beyond the standard (let loop ...) idiom.

>>C was the exception of the 80s.
>>
>
> So what was Pascal?

A flop, basically. It had a short stint in academia, and as the
programming language for the Macintosh, before being overrun by C, but
there was very little use in commercial shops, which would be a
necessity for a mainstream language.

>>C++ was heavily promoted by Microsoft
>>
>
> !!!
>
> Microsoft didn't even *have* a C++ compiler until I'd been using the
> language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
> it was total crap anyway, and VC++ wasn't really usable until 4.1 or 4.2
> or something like that.

That was well before C++ became a mainstream "hit."


Ray Blaak

unread,
Dec 27, 2001, 2:25:13 PM12/27/01
to
Andreas Bogk <and...@andreas.org> writes:
> Jeffrey Siegal <j...@quiotix.com> writes:
>
> > What happens in Java is that you have to at least declare the exceptions
> > up the chain anyway (the compiler will reject a method that doesn't
>
> Yes, and that bothers me to no end. I want to have specific code that
> knows about an exception in exactly two places: where it is generated,
> and where it can be handled. All the code inbetween doesn't need to
> know more than that an operation has failed and that it needs to clean
> up.

Well you actually do have the choice in Java. Just have your exceptions extend
from RuntimeException or Error, and they are no longer required to be declared
in the throws clauses.

Then you have the knowledge of the exception to be exactly in the desired
places.

To "hide" the fact that you are possibly abusing the notion of "error", have a
base application exception class extend from RuntimeException or Error, and
have all you other exceptions extend from that.

Personally, though, I prefer having the explicit throws clauses, for then the
compiler forces me to be aware of the error issues. At the very least I know to
put in the necessary "finally" blocks and rethrow if handling the error is not
appropriate.

I have not found the maintenance issue to be too bad. One trick is to rethrow
in terms of a more general exception class, so that the methods in between the
low-level error and the final handler have just one or two exception classes in
their throws clauses, as opposed to a myriad of particular ones (which cause
the maintenance problem).
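The rethrow-in-terms-of-a-general-exception trick can be sketched as follows (class names are invented): only the general application exception appears in intermediate throws clauses, while the original low-level cause is preserved for the final handler.

```java
// General application exception that intermediate layers declare.
class AppException extends Exception {
    AppException(String msg, Throwable cause) { super(msg, cause); }
}

// One of a myriad of specific low-level exceptions.
class LowLevelIOException extends Exception {
    LowLevelIOException(String msg) { super(msg); }
}

public class Wrapping {
    static void readSector() throws LowLevelIOException {
        throw new LowLevelIOException("bad sector");
    }

    // Rethrow wrapped: from here up, only AppException is declared.
    static void loadFile() throws AppException {
        try {
            readSector();
        } catch (LowLevelIOException e) {
            throw new AppException("load failed", e);
        }
    }

    // The final handler can still inspect the original cause.
    static String describeFailure() {
        try {
            loadFile();
            return "ok";
        } catch (AppException e) {
            return e.getMessage() + " <- " + e.getCause().getMessage();
        }
    }
}
```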

The mistake that a lot of Java programmers make is to simply swallow the
unexpected exceptions with a stack trace and continue. This is the worst of all
possibilities, defeating the purpose of exceptions in the first place.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Daniel C. Wang

unread,
Dec 27, 2001, 2:21:51 PM12/27/01
to

"Janos Blazi" <jbl...@hotmail.com> writes:

> > How many setfs? are in this string
> >
> >
> skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdf
> liwasetf
>
> Do you come across strings like that frequently in your work? Are you using
> a Lisp *without parentheses*?

Even without parentheses, the task of picking out ":=" is *relatively*
easier than the "setf" task. I'm sure that with parentheses *both* tasks get
easier, but the relative difference is still there. You can argue that
adding parentheses makes both tasks so easy that the relative difference
becomes meaningless. I personally do not think this is the case.

I don't have any Lisp code handy. However, if you take a larger piece of
source code and ask humans to underline every occurrence of "setf" in it, it
will take them longer than a similar task where "setf" is replaced by ":=",
even if you keep prefix notation... i.e.

(:= e1 e2)
is easier to visually recognize than

(setf e1 e2)
when dumped in a sea of program text.

If you let people syntax-highlight things, the task might get easier for
setf. Anyway, that's just my take on it. I don't have any real experimental
evidence for it. However, I just described a relatively simple experiment
you can do yourself. I'd be happy to hear any evidence either way.

The ease of visually parsing programs is one reason why I prefer { } to
BEGIN END. Picking out { and } is simply an easier task for the human
visual system when compared to BEGIN END.

Janos Blazi

unread,
Dec 27, 2001, 2:49:26 PM12/27/01
to
> Even without parentheses the other task of picking out ":=" is *relatively*
> easier than the "setf" task.

But (:= a 5) is not very readable either and looks a bit clumsy as well. In
this case I should prefer (= a 5) and (eq a 5) or something like this.

Jeffrey Siegal

unread,
Dec 27, 2001, 2:51:29 PM12/27/01
to
Janos Blazi wrote:

> But (:= a 5) is not very readable either and looks a bit clumsy as well. In
> this case I should prefer (= a 5) and (eq a 5) or something like this.

I prefer (setf a 5) or (set! a 5) because it reads naturally ("Set a to
five"). I'm not sure why being able to pick out these forms visually is
an advantage anyway, any more so than any of the other forms one might use.


Thomas F. Burdick

unread,
Dec 27, 2001, 3:56:54 PM12/27/01
to
Bruce Hoult <br...@hoult.org> writes:

> In article <3C2A03DC...@quiotix.com>, Jeffrey Siegal
> <j...@quiotix.com> wrote:
>
> > Bruce Hoult wrote:
> > > That's true only in the trivial sense that Lisp has no syntax, so Lisp
> > > syntax can't be outgrown.
> >
> > Hardly. It just has a syntax with a very simple and powerful basic
> > construction rule. However, on top of that construction rule,
> > enormously powerful syntactic abstractions can be (and are) built. What
> > Algol-like languages lack is the basic construction rule which allows
> > you to decompose the syntax down into elemental componets. That makes
> > any macro system either enormously complex or lacking in power, or both.
> >
> > Consider, for example, what low-level Lisp macros would look like in an
> > Algol-like language. They can be done but the result is enormously
> > complex (and also fragile; if the language syntax is extended, macros
> > written that way will likely break).
>
> There are examples of the same thing happening in reverse. When macros
> get sufficiently complex and have enough combinations of different
> possibilities, it becomes too difficult to follow a purely S-expresion
> syntax. Consider the example of the Dylan "for" macro. How would you
> map all that functionality and all those options onto an S-expression
> syntax? Well, we can look at what is done in Common Lisp with the
> "loop" macro. The same idea, very nearly exactly the same options. And
> we find that in fact it does *not* use S-expression syntax, but instead
> makes a little infix language that ends up very similar to that part of
> Dylan.

But this is an intentional feature of LOOP, it's a little Algol-like
mini-language inside of CL. There are other non-standardized packages
that provide similar functionality, but use a more lispy approach.
One thing that's missing from LOOP as specified in the spec is any way
to extend it. Which is too bad, because that would make it even more
useful, but this is a (mis)feature of the spec, and isn't inherent in
a LOOP macro. MIT LOOP has such a feature, for example.

>
> Now, I happen to think that the "loop" macro is a *good* thing about
> Common Lisp, but:
>
> 1) building such things yourself isn't well supported in CL (it is in
> Dylan)

That's because it's a weird exception in CL. Complicated macros that
define a Lispy mini-language *are* well supported, and there's nothing
about iteration that makes it require a LOOP-like syntax.

> 2) I find that I actually *prefer* this sort of thing to have infix
> syntax, and prefer all loops and other control structures to use it. If
> nothing else, it means that you know immediately whether you're looking
> at a standard function application or a special form. That's what Dylan
> does.

I'm not sure what to say to this. It's usually pretty obvious if
something isn't a function call, unless it's supposed to behave like
one, in which case, I like that there isn't a visual distinction. If
you can't spot

(iterate ((i (interval :from 0))
(elt (list-elements some-list)))
...)

as a macro call, you just haven't read enough Lisp.

In a later post, you mention DO (which is also a Lisp macro). I tend
to think of it as a low-level iteration construct, mostly appropriate
for building sugary facilities on top of, much like TAGBODY and GO.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Thomas F. Burdick

unread,
Dec 27, 2001, 4:33:25 PM12/27/01
to
"Daniel C. Wang" <danwan...@cs.princeton.edu> writes:

> "Janos Blazi" <jbl...@hotmail.com> writes:
>
> > > How many setfs? are in this string
> > >
> > >
> > skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdf
> > liwasetf
> >
> > Do you come across strings like that frequently in your work? Are you using
> > a Lisp *without parentheses*?
>
> Even without parentheses the other task of picking out ":=" is *relatively*
> easier than the "setf" task. I'm sure that with parentheses *both* tasks get
> easier, but the relative difference is still there. You can argue that
> adding parenthesese makes both tasks so easy that the relative difference
> becomes meaningless. I personally do not think this is the case.
>
> I don't have any Lisp code handy. However, if you take a larger piece of
> > source code and ask humans to underline every occurrence of "setf" in it, it
> > will take them longer than a similar task where "setf" is replaced by ":="
> even if you keep prefix notation... i.e.
>
> (:= e1 e2)
> is easier to visually recognize than
>
> (setf e1 e2)
> when dumped in a sea of program text.

But how often are you looking at a sea of program text? I'm usually
looking at a specific part of a program, in which case setting is one
of many things that can be done, and I can't think of any reason I'd
care so much more about it than anything else the program could be
doing. If you want to find all the instances where e1 is assigned to,
use your editor's search facility. And if you really prefer :=, you
can define
(defmacro := (&rest stuff) (cons 'setf stuff))
That's a bad idea, not least because := is the symbol (intern "="
"KEYWORD"), but you could use the more visually telling <- if you
wanted:

(<- e1 e2
    (gethash 'e h1) e2)

But I find that a lot uglier than

(setf e1 e2
      (gethash 'e h1) e2)

Tom Breton

unread,
Dec 27, 2001, 5:43:08 PM12/27/01
to
"Janos Blazi" <jbl...@hotmail.com> writes:

> [...]
>
> > Even Paul Graham, the
> > inventor of the silly new toy language "arc", needs to point out how
> > Common Lisp is superior to his new toy by knocking Common Lisp before he
> > has anything to show for himself.
>
> How can you judge the silliness of arc when it has not even been specified
> yet? (The articles he wrote about arc are very clever in my opinion.)

One way is to read his webpage about it, cited at the beginning of
this thread. I did, and it seemed to me that his ideas were
wrong-headed.

--
Tom Breton at panix.com, username tehom. http://www.panix.com/~tehom

Daniel C. Wang

unread,
Dec 27, 2001, 6:36:38 PM12/27/01
to

t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
{stuff deleted}

> But how often are you looking at a sea of program text? I'm usually
> looking at a specific part of a program, in which case setting is one
> of many things that can be done, and I can't think of any reason I'd
> care so much more about it than anything else the program could be
> doing. If you want to find all the instances where e1 is assigned to,
> use your editor's search facility. And if you really prefer :=, you
> can define

Scheme/Lisp has uniform syntax. This makes it easier to play all sorts of
crazy macro games with. The :=/setf example is one where Algol languages
trade the uniformity of syntax for readability. IMNHO readability has less
to do with infix/prefix issues than with uniform/non-uniform. My only claim
is that choosing too uniform a syntax can affect readability, and that these
readability issues are not just a matter of taste, but have some connection
with basic visual processing.

Graphic artists get paid to design logos and visual designs that are easy to
parse quickly. They manipulate color, layout, and shape to make things
easier to visually parse. Properly indented and syntax-highlighted
Scheme/Lisp is pretty easy to read. Languages with non-uniform syntax that
use infix and more punctuation do not need to rely so much on indentation
or color cues to achieve the same level of readability.

Bruce Hoult

unread,
Dec 27, 2001, 6:45:43 PM12/27/01
to
In article <3C2B7BC1...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

Because when you're programming in a mostly-functional language you want
to have the places that use mutation readily obvious.

-- Bruce

Bruce Hoult

unread,
Dec 27, 2001, 6:49:25 PM12/27/01
to
In article <3C2B73C8...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
>
> > I suspect that this is pretty much where Scheme people on one hand and
> > CL and Dylan people on the other hand part ways. Everyone appreciates
> > generality and power when they need them, but the latter two groups
> > also
> > value notational convenience for the common cases. Dylan expands the
> > "for" macro into a tail-recursive function (and CL does something
> > similar) precisely because many people find that easier to write, read,
> > and understand than the explicit tail-recursive form, for most common
> > cases.
>
> What you snipped is that I rarely use explicit iteration in complex
> programs.

I snipped it because I agree with it.


> >>C was the exception of the 80s.
> >>
> >
> > So what was Pascal?
>
> A flop basically. It had a short stint in academia, and as the
> programming language for the Macintosh, before being overrun by C, but
> there was very little use in commercial shops, which would be a
> necessity for a mainstream language.

It may be different where you are, but here in New Zealand I see a *lot*
of commercial work being done in Pascal (Delphi) even today.


> >>C++ was heavily promoted by Microsoft
> >>
> >
> > !!!
> >
> > Microsoft didn't even *have* a C++ compiler until I'd been using the
> > language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
> > it was total crap anyway, and VC++ wasn't really usable until 4.1 or
> > 4.2 or something like that.
>
> That was well before C++ became a mainstream "hit."

I believe you are using circular definitions here.

-- Bruce

Janos Blazi

unread,
Dec 27, 2001, 7:24:00 PM12/27/01
to
> One way is to read his webpage about it, cited at the beginning of
> this thread. I did, and it seemed to me that his ideas were
> wrong-headed.

(1)
The webpage contains ideas only and no final product. So in my opinion you
can judge his intentions and his opinions but you cannot judge "arc",
whatever that may become. It is Eric's general attitude to get angry at the
slightest inaccuracy, and what is sauce for the goose is sauce for the
gander.

(2)
At first reading I thought that Mr Graham had some points but it would be
much better (even in his sense) to create a new CL implementation and
augment it with a nice standard library and a GUI toolkit. Then I read the
text a second time and now I think I understand better what he is after.

Being a pessimistic man I just wonder as to what extent he will succeed and
whether there will be a new Lisp at all. But I hope he will succeed as his
success would make life more interesting.

Jeffrey Siegal

unread,
Dec 27, 2001, 7:27:49 PM12/27/01
to
Bruce Hoult wrote:

>>>>C was the exception of the 80s.
>>>>
>>>>
>>>So what was Pascal?
>>>
>>A flop basically. It had a short stint in academia, and as the
>>programming language for the Macintosh, before being overrun by C, but
>>there was very little use in commercial shops, which would be a
>>necessity for a mainstream language.
>>
>
> It may be different where you are, but here in New Zealand I see a *lot*
> of commercial work being done in Pascal (Delphi) even today.

You're right about Delphi, but that's certainly a case of a language
having a commercial promoter. I seem to recall that Delphi became
popular long after Pascal had all but died. Call it a second coming if
you like.

>>>>C++ was heavily promoted by Microsoft
>>>>
>>>>
>>>!!!
>>>
>>>Microsoft didn't even *have* a C++ compiler until I'd been using the
>>>language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
>>>it was total crap anyway, and VC++ wasn't really usable until 4.1 or
>>>4.2 or something like that.
>>>
>>That was well before C++ became a mainstream "hit."
>>
>
> I believe you are using circular definitions here.

Huh? You speak of using C++ in 1989 or 1990. It was not mainstream at
that time, C was.


Neelakantan Krishnaswami

unread,
Dec 27, 2001, 8:16:33 PM12/27/01
to
Followups set to comp.lang.lisp only.

On 27 Dec 2001 14:21:51 -0500, Daniel C. Wang <danwan...@cs.princeton.edu>
wrote:


>
> I don't have any Lisp code handy. However, if you take a larger
> piece of source code and ask humans to underline every occurence of
> "setf" in it. It will take them longer than a similar task where
> "setf" is replaced by ":=" even if you keep prefix notation... i.e.
>
> (:= e1 e2)
> is easier to visually recognize than
>
> (setf e1 e2)
> when dumped in a sea of program text.

Another data point:

At the LL1 workshop, Paul Graham said that he was using '=' instead
of SETF in his new Arc dialect of Lisp. He said he tried it as an
experiment, and was surprised by how much easier it made the code
read.


Neel

Jochen Schmidt

unread,
Dec 28, 2001, 12:27:02 AM12/28/01
to
Daniel C. Wang wrote:

>
> t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> {stuff deleted}
>> But how often are you looking at a sea of program text? I'm usually
>> looking at a specific part of a program, in which case setting is one
>> of many things that can be done, and I can't think of any reason I'd
>> care so much more about it than anthing else the program could be
>> doing. If you want to find all the instances where e1 is assigned to,
>> use your editor's search facility. And if you really prefer :=, you
>> can define
>
> Scheme/Lisp has uniform syntax. This makes it easier to play all sorts of
> crazy macro games with. The :=/setf example is one where Algol languages
> trade the uniformity of syntax for readability. IMNHO readability has less
> to do with infix/prefix issues than with uniform/non-uniform. My only
> claim is that choosing too uniform a syntax can effect readability, and
> that these readability issues are not just a matter of taste, but have
> some connection with basic visual processing.

Please try to understand that "readability" is a person-specific quality.
While _you_ may read syntax written that way more easily, do not claim that
it is more readable for everyone.

The human brain is highly adaptive - you may be astonished how much your
brain adapts to a certain kind of syntax. At the start - coming from
algol-like languages - I had my problems reading lisp-syntax. But after
some months of using it I suddenly realized that I could read lisp-code
far more easily than algol-like code. You may think that := is always more
readable than setf because it is smaller - I could respond that it is
actually so small that it is more easily overlooked. If you really
want to emphasise an operator in your source-code you can even write SETF
instead of :=. You cannot upcase :=...

ciao,
Jochen

--
http://www.dataheaven.de

Sander Vesik

unread,
Dec 28, 2001, 9:08:45 AM12/28/01
to

Lots of people spend lots of time looking at a sea of program - usually
a sea of program they didn't write. And this can be for a wide number
of reasons, whether to find and fix a bug or to enhance some section.

> use your editor's search facility. And if you really prefer :=, you
> can define
> (defmacro := (&rest stuff) (cons 'setf stuff))
> That's a bad idea, not the least because := is the symbol (intern "="
> "KEYWORD"), but you could use the more visually telling <- if you
> wanted:

> (<- e1 e2
> (gethash 'e h1) e2)

Which assumes <- is more visually telling or easier to find than :=

> But I find that a lot uglier than

> (setf e1 e2
> (gethash 'e h1) e2)


--
Sander

+++ Out of cheese error +++

Wade Humeniuk

unread,
Dec 28, 2001, 12:21:25 PM12/28/01
to
> Even without parentheses the other task of picking out ":=" is *relatively*
> easier than the "setf" task. I'm sure that with parentheses *both* tasks get
> easier, but the relative difference is still there. You can argue that
> adding parentheses makes both tasks so easy that the relative difference
> becomes meaningless. I personally do not think this is the case.
>
> I don't have any Lisp code handy. However, if you take a larger piece of
> source code and ask humans to underline every occurrence of "setf" in it, it
> will take them longer than a similar task where "setf" is replaced by ":="
> even if you keep prefix notation... i.e.
>
> (:= e1 e2)
> is easier to visually recognize than
>
> (setf e1 e2)
> when dumped in a sea of program text.

(setf e1 e2) is good enough. Is this what constitutes programming language
improvements in people's minds? Afraid to type a few extra characters? We
end up splitting hairs??? Infix, prefix or postfix, Lisp gets the job done
in a simple way. So there are a few inconsistencies; what is the problem?

Wade


Bob

unread,
Dec 28, 2001, 12:41:14 PM12/28/01
to
there are some annoying inconsistencies...but...man!...:= vs set! ??
wow...now that is petty...

Anton van Straaten

unread,
Dec 28, 2001, 1:21:36 PM12/28/01
to
Bob wrote:
>there are some annoying inconsistencies...but...man!...:= vs set! ??
>wow...now that is petty...

Daniel was talking about := vs. 'setf', and I think he's right about that:
it's easier to pick out := from a body of code. (How important that is, is
a separate question.) However, the exclamation mark in set! is similarly
easy to pick out, so the relative advantage of := which Daniel refers to,
doesn't really exist in languages with set!, i.e. Scheme.

Anton

Ray Blaak

unread,
Dec 28, 2001, 2:09:21 PM12/28/01
to

[Followups not obeyed, this is a pan-language issue]

ne...@alum.mit.edu (Neelakantan Krishnaswami) writes:
> At the LL1 workshop, Paul Graham said that he was using '=' instead
> of SETF in his new Arc dialect of Lisp. He said he tried it as an
> experiment, and was surprised by how much easier it made the code
> read.

This is the worst thing to do, since = has a better meaning as a (kind of)
equality predicate. C/C++/Java aside, a dedicated assignment token is
preferred, whether :=, setf, set!, whatever.

Languages that change the historical meaning of = cause unnecessary bugs,
e.g. the classic "if (a = 0) ..." business.

Kaz Kylheku

unread,
Dec 28, 2001, 2:14:04 PM12/28/01
to
In article <QK2X7.3087$5c4.3...@newsread1.prod.itd.earthlink.net>,

Anton van Straaten wrote:
>Bob wrote:
>>there are some annoying inconsistencies...but...man!...:= vs set! ??
>>wow...now that is petty...
>
>Daniel was talking about := vs. 'setf', and I think he's right about that:
>it's easier to pick out := from a body of code.

I can't find any study which confirms this hypothesis. It's easier to
come up with hypotheses. For instance, perhaps it's more important to be
able to spot := in a body of code because it's not the leftmost element
of a form, and because there might not be any whitespace surrounding it.

setf's are quite easy to spot in Lisp code, because they are on the left,
because one rarely writes more than one setf on one physical line in Lisp,
and because they are separated by whitespace from what follows.

>(How important that is, is
>a separate question.) However, the exclamation mark in set! is similarly
>easy to pick out, so the relative advantage of := which Daniel refers to,
>doesn't really exist in languages with set!, i.e. Scheme.

The Scheme operator set! is not really equivalent to the setf or to a
generic := in another language like Dylan.

The Lisp operator setf can assign to a generalized place. There are many
standard places, and the repertoire of places can be extended by the user.

For example, if you want to assign the integer 42 to the third element
of a list, in Lisp you can do that as

(setf (third L) 42)

or assign to the third element of an array:

(setf (aref A 3) 42)

or to a hash table entry keyed on the string "blorg":

(setf (gethash "blorg" H) 42)

replace characters in string S starting at position 3:

(setf (subseq S 3) "abc")

You see, setf is actually a macro which groks the place form and
generates code for storing a value to it, thus providing a concept
that other languages call an lvalue or whatever. You don't have to
remember a myriad of different assignment operations for every conceivable
data structure that can be destructively manipulated. Moreover, you
can invent new kinds of places and extend setf to handle them in
a standard way.
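To make that extensibility concrete, here is a sketch of teaching SETF a new user-defined place with DEFSETF. The accessor MIDDLE and its setter are hypothetical names invented purely for illustration, not part of any standard library.

```lisp
;; Sketch: extending SETF with a user-defined place via DEFSETF.
;; MIDDLE and SET-MIDDLE are hypothetical, for illustration only.
(defun middle (list)
  "Return the middle element of LIST."
  (nth (floor (length list) 2) list))

(defun set-middle (list new-value)
  "Store NEW-VALUE in the middle of LIST, returning NEW-VALUE."
  (setf (nth (floor (length list) 2) list) new-value)
  new-value)

;; Teach SETF that (middle x) names a place:
(defsetf middle set-middle)

;; After this, (setf (middle some-list) 42) works like any
;; built-in place.
```

The short form of DEFSETF shown here requires the setter function to take the access arguments followed by the new value, and to return that value.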

In scheme, you get a few things like set!, set-car!, set-cdr!
vector-set! and string-set! without any standard, extensible macro to
unify assignment.

Now we are talking semantics, the domain where the truly legitimate
improvements in programming languages happen.

Wade Humeniuk

unread,
Dec 28, 2001, 2:23:03 PM12/28/01
to
I am neutral on whether it's easier.

See some real code in which I replaced the setf's. Careful: if your browser
tries to format the files as HTML they look funny; download the code instead.

http://www.cadvision.com/humeniuw/html-report.lisp

http://www.cadvision.com/humeniuw/html-report-nonsetf.lisp

HTML macros at

http://www.cadvision.com/humeniuw/primitive-html.lisp


:= might only stand out because it looks special. If all function-symbols
looked like it, none would stand out either.

Wade

"Anton van Straaten" <an...@appsolutions.com> wrote in message
news:QK2X7.3087$5c4.3...@newsread1.prod.itd.earthlink.net...

Erik Naggum

unread,
Dec 28, 2001, 2:30:59 PM12/28/01
to
* Sander Vesik

| Lots of people spend lots of time looking at a sea of program - usually a
| sea of program they didn't write. And this can be for a wide number of
| reasons, whetever to find or fix a bug or enhance some section.

I used to be doing that for a living, while I still thought C was cool.
Certain kinds of bugs stand out from the source like bad spelling. But
some people do not see spelling mistakes, so make a lot of them and need
machine help to signal them to their visual system, like wavy red lines.
This is not how you learn to spell correctly. It is as bad as singing
off key and having a computer beep back at you until you no longer get it
wrong in _its_ view -- it can never get _good_ that way. I also think
programmers who need color to see their code need to drop the keyboard
and pick up some crayons -- if color helps, it is _not_ a good thing.

Whatever your language is, if you cannot scan it quickly or slowly as
your needs require, you are not a production quality programmer, yet, or
maybe you are not reading production quality code written by another
immature specimen. If the goal is to increase the number of production
quality programmers, you train them, reward them when they get closer to
that goal and get rid of those who refuse to improve when necessary.
Rewarding incompetence and ignorance increases the number of incompetent
programmers. Designing programming languages and tools so incompetent
programmers can feel better about themselves is not the way to go.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Jeffrey Siegal

unread,
Dec 28, 2001, 3:47:51 PM12/28/01
to
Kaz Kylheku wrote:

> The Scheme operator set! is not really equivalent to the setf or to a
> generic := in another language like Dylan.

An extension to Scheme's set! operator to make it more like setf has
been proposed. See http://srfi.schemers.org/srfi-17/

Whether or not it is a good idea is another question. There doesn't
seem to be any consensus in the Scheme community. I don't know which
Scheme systems implement srfi-17.
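For the curious, SRFI-17 generalizes set! so that its first operand may be a procedure call. A minimal sketch of how it reads, usable only in a Scheme that actually provides the SRFI:

```scheme
;; Sketch of SRFI-17 style generalized set!; requires a Scheme
;; implementation that supports SRFI-17.
(define v (vector 1 2 3))
(set! (vector-ref v 0) 42)   ; behaves like (vector-set! v 0 42)

;; User extension: attach a setter to one's own getter procedure.
(define (kar p) (car p))
(set! (setter kar) (lambda (p val) (set-car! p val)))
(define cell (list 1 2))
(set! (kar cell) 99)         ; now behaves like (set-car! cell 99)
```

This gives Scheme roughly the generalized-place convenience discussed for setf, at the cost of overloading set!.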

Kaz Kylheku

unread,
Dec 28, 2001, 5:24:31 PM12/28/01
to
In article <10095485...@haldjas.folklore.ee>, Sander Vesik wrote:
>In comp.lang.scheme Thomas F. Burdick <t...@famine.ocf.berkeley.edu> wrote:
>> "Daniel C. Wang" <danwan...@cs.princeton.edu> writes:
>
>>> "Janos Blazi" <jbl...@hotmail.com> writes:
>>> (:= e1 e2)
>>> is easier to visually recognize than
>>>
>>> (setf e1 e2)
>>> when dumped in a sea of program text.
>
>> But how often are you looking at a sea of program text? I'm usually
>> looking at a specific part of a program, in which case setting is one
>> of many things that can be done, and I can't think of any reason I'd
>> care so much more about it than anthing else the program could be
>> doing. If you want to find all the instances where e1 is assigned to,
>
>Lots of people spend lots of time looking at a sea of program - usually
>a sea of program they didn't write. And this can be for a wide number
>of reasons, whether to find and fix a bug or to enhance some section.

Pardon me if I'm overly skeptical here.

The difficulty in grappling with a sea (or shall we say ``morass'')
of code does not rest in the spelling of the operators, or trivial
issues of arrangement like prefix versus postfix.

A morass of code astounds the mind with a cognitive overload when it
is not arranged into abstractions that can be understood on multiple
levels of chunking.

A large program should be understandable at some level where you don't
have to worry about individual assignment operations.

If it isn't, then I don't believe that changing the spelling of these
operations, or their syntax from prefix to infix, is going to help much.
It is their sheer number and mind-boggling interdependencies that
are the problem, not their spelling.

At the magnification level at which these trivial spelling conventions
matter, *any* convention becomes readable when one is accustomed to it.

The most surprising element of readability is that, in the first place,
the human mind can be trained to automatically recognize letters and
words. Three sticks arranged in the shape of a roof jump out not as
three sticks but as the letter A. This jumping is pushed entirely into
low level ``subroutines'' in your brain so that it doesn't require any
conscious effort; you just perceive an entire independent signal path
related to the image in front of you. You are aware that there
is a shape, but also that it ``speaks'' to you. A similar adaptation
occurs to the detailed elements of the syntax of a programming language.

Harald Hanche-Olsen

unread,
Dec 28, 2001, 4:46:17 PM12/28/01
to
+ Erik Naggum <er...@naggum.net>:

| I also think programmers who need color to see their code need to
| drop the keyboard and pick up some crayons -- if color helps, it
| is _not_ a good thing.

This reminds me of an often heard argument against the traffic
regulation requiring headlights to be on at all times: "If you can't
see an approaching car a hundred meters away in broad daylight, you
should not be driving a car in the first place." True, but this
misses the point: There are lots of valid reasons why your attention
may be momentarily directed elsewhere, and then those headlights alert
you to the presence of the approaching car earlier. If a dangerous
situation is averted that way, you are probably not even aware of the
fact, as you took appropriate action without a second thought. In the
same way, I find some use of colour quite helpful in reading code.
Like indentation, it's really just another pathway into your brain,
alerting you to structure that is not immediately apparent at first
glance. I certainly find it easier to concentrate on the essence of
any programming problem the more clues, like indentation and
colouring, are there to make structure apparent, allowing me to spend
less mental energy on discovering what the text actually says and
correspondingly more on why it says what it says. In Lisp, however,
colour definitely takes the back seat to indentation, which is at
least one or two orders of magnitudes more helpful in making structure
obvious to the eye. To consider two opposite extremes, I can happily
read and edit Lisp code without the use of colouring - but in working
with markup languages like TeX or HTML, I really appreciate having the
markup stick out from the text like a sore thumb - which, in many
cases, is an accurate description of its nature.

Another issue is that colouring done wrong is terribly distracting,
just like bad indentation. And font-locking code in emacs seems to
get it wrong just a bit too often: If I naïvely type

(let ((case :upcase))
  ...)

in a Lisp buffer with font-locking turned on, emacs will highlight
"case" as if it were a keyword. And then there are horrors like what
happens to latex.ltx, where everything after the dollar sign in

\def\@argarraycr[#1]{%
  \ifnum0=`{\fi}${}\ifdim #1>\z@ \@xargarraycr{#1}\else
  \@yargarraycr{#1}\fi}

seems to be in math mode as far as emacs is concerned, turning the
final third of the file a uniform brownish colour. Ugh. You can tell
if the author used emacs for editing this file if a place like this is
immediately followed by a comment containing a lone dollar sign to get
the font-locking code back on the right track.

--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Yes it works in practice - but does it work in theory?

Johan Ur Riise

unread,
Dec 28, 2001, 5:58:49 PM12/28/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> Andreas Bogk wrote:
> > As soon as you pile up some layers of code, it quickly becomes tedious
> > to pass around handlers everywhere. Just imagine passing a GUI dialog
> > for resolving a "disk full" condition all the way through the GUI,
> > your application code, your storage abstraction down to the actual
> > disk access.


>
> What happens in Java is that you have to at least declare the exceptions
> up the chain anyway (the compiler will reject a method that doesn't

> catch or throw E which involves a method declared to throw E. It isn't
> that much harder to explicitly pass the handler.

Not completely true for Java. If you use RuntimeException or an
instance of a subclass of RuntimeException, you don't have to
declare it in throws clauses.

Ref.:
http://java.sun.com/j2se/1.3/docs/api/java/lang/RuntimeException.html

Feuer

unread,
Dec 28, 2001, 10:29:06 PM12/28/01
to

Ray Blaak wrote:

> This is the worst thing to do, since = has a better meaning as a (kind of)
> equality predicate. C/C++/Java aside, a dedicated assignment token is
> preferred, whether :=, setf, set!, whatever.
>
> Languages that change the historical meaning of = cause unnecessary bugs,
> e.g. the classic "if (a = 0) ..." business.

Tell that to the ML or Haskell programmers (or Prolog?). In Haskell, = is
used to define variables (== is used to test). In ML it is used both to
define variables and test for (a sort of) equality. Assignment is :=.
Confusing definition with equality would probably lead to a syntax error in
these languages. Confusing equality with assignment in ML would be bad, but
that's not too likely.

Barry Margolin

unread,
Dec 28, 2001, 7:40:33 PM12/28/01
to
In article <m33d1v5f...@blight.transcend.org>,

Ray Blaak <bl...@telus.net> wrote:
>Languages that change the historical meaning of = cause unnecessary bugs,
>e.g. the classic "if (a = 0) ..." business.

PL/I and Fortran never had that problem, even though they use = for *both*
assignment and equality testing. This is possible in languages where
statements and expressions are distinct and not allowed in each other's
context. The test clause of an "if" statement is an expression, so = is
determined to be equality; a standalone "a = b" is a statement, so = must
be assignment.

I believe the historical reason for using = in assignments is due to
mathematicians. They indicate assignment by writing "Let <var> = <expr>",
and programming language designers then chose to elide the "let" for
brevity. BASIC originally required the LET verb, but the implementors
noticed that it could be disambiguated by context just as in Fortran, so
they made it optional. I guess assignments are so common that programmers
really prefer to keep them as terse as possible (this eventually led to C's
"x <op>= y" abbreviation for the "x = x <op> y" idiom).

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Anton van Straaten

unread,
Dec 28, 2001, 8:43:22 PM12/28/01
to
>The Scheme operator set! is not really equivalent to the setf or to a
>generic := in another language like Dylan.

Happily, the ! convention easily extends to the names of other mutating
operations.

>The Lisp operator setf can assign to a generalized place. There are many
>standard places, and the repertoire of places can be extended by the user.

This is a nice feature for the level of language which Lisp is, in which the
details of operations are more commonly hidden from the programmer. Pity
it's not called set!, though! ;o)

Anton

Anton van Straaten

unread,
Dec 28, 2001, 8:54:15 PM12/28/01
to
Wade Humeniuk wrote:
>I am neutral on whether its easier.
>
>:= might only stand out because it looks special. If all function-symbols
>looked like it none would stand out either.

Actually, in your example, I thought := *did* stand out better than setf,
but I agree, whether something stands out does depend on what it's
surrounded by, obviously. When I said "it's easier to pick out := from a
body of code", I admit I wasn't thinking of HTMLisp code, which to me, tends
to be hard to read "globally" whether you're using JSP, ASP, PHP, etc., or
some variety of HTML embedded in Lisp.

Anton

Jeffrey Siegal

unread,
Dec 28, 2001, 9:43:37 PM12/28/01
to
Barry Margolin wrote:

> I believe the historical reason for using = in assignments is due to
> mathematicians. They indicate assignment by writing "Let <var> = <expr>",

Not exactly. Let var = exp in mathematics is an assertion of equality,
not assignment. Assignment is a different operation which doesn't exist
in traditional mathematics but can of course be represented
mathematically using e.g. denotational semantics.

Languages such as ML are slightly different, where the let var = exp
operator introduces a binding, which, absent subsequent assignment
(impossible in ML), is equivalent to a mathematical assertion of equality.

I do agree that <var> = <exp> in programming came about as a derivative
of the mathematical usage, but it shouldn't have, since the two are
fundamentally different in most programming languages.

Kaz Kylheku

unread,
Dec 28, 2001, 10:19:36 PM12/28/01
to
In article <_c9X7.3900$5c4.4...@newsread1.prod.itd.earthlink.net>,

You know, many an assembly language allows you to use the same mnemonic
to move data into various kinds of locations. Which is a nice feature
for the level of language it is: one or two levels above binary opcodes.

Anton van Straaten

unread,
Dec 29, 2001, 12:19:04 AM12/29/01
to
Kaz Kylheku wrote:
>You know, many an assembly language allows you to use the same mnemonic
>to move data into various kinds of locations. Which is a nice feature
>for the level of language it is: one or two levels above binary opcodes.

That's probably a good analogy: Scheme is perhaps one level "above" the
lambda calculus, and Lisp is at least two. All their differences and the
reasons why those differences *should* exist can be explained by this. The
second level introduces many more choices, thus many more potential
differences in direction.

Anton

Raymond Toy

unread,
Dec 28, 2001, 8:00:28 PM12/28/01
to
Barry Margolin wrote:

> PL/I and Fortran never had that problem, even though they use = for *both*
> assignment and equality testing.


I've never used PL/I, but in Fortran equality testing is done with .eq.
as in

if (a .eq. b) then
endif

Ray

Coby Beck

unread,
Dec 29, 2001, 11:00:40 AM12/29/01
to

"Kaz Kylheku" <k...@ashi.footprints.net> wrote in message
news:zi6X7.3727$L4.1...@news2.calgary.shaw.ca...

> In article <10095485...@haldjas.folklore.ee>, Sander Vesik wrote:
> >In comp.lang.scheme Thomas F. Burdick <t...@famine.ocf.berkeley.edu> wrote:
> >> "Daniel C. Wang" <danwan...@cs.princeton.edu> writes:
> >
> >>> "Janos Blazi" <jbl...@hotmail.com> writes:
> >>> (:= e1 e2)
> >>> is easier to visually recognize than
> >>>
> >>> (setf e1 e2)
> >>> when dumped in a sea of program text.
> >
> >> But how often are you looking at a sea of program text? I'm usually
> >> looking at a specific part of a program, in which case setting is one
> >> of many things that can be done, and I can't think of any reason I'd
> >> care so much more about it than anthing else the program could be
> >> doing. If you want to find all the instances where e1 is assigned to,
> >
> >Lots of people spend lots of time looking at a sea of program - usually
> >a sea of program they didn't write. And this can be for a wide number
> >of reasons, whetever to find or fix a bug or enhance some section.
>
> At the magnification level at which these trivial spelling conventions
> matter, *any* convention becomes readable when one is accustomed to it.
>

Hear, hear. This sums up my feelings on this precisely (along with the
arguments, snipped for brevity, that led here).

--
Coby
(remove #\space "coby . beck @ opentechgroup . com")


Jochen Schmidt

unread,
Dec 29, 2001, 3:02:40 PM12/29/01
to
Wade Humeniuk wrote:

> I am neutral on whether its easier.
>
> See some real code in which I replaced the setf's. Careful: if your browser
> tries to format the files as HTML they will look funny; download the code instead.
>
> http://www.cadvision.com/humeniuw/html-report.lisp
>
> http://www.cadvision.com/humeniuw/html-report-nonsetf.lisp
>
> HTML macros at
>
> http://www.cadvision.com/humeniuw/primitive-html.lisp
>
>
> := might only stand out because it looks special. If all function-symbols
> looked like it none would stand out either.


Sidenote: := is the symbol named "=" in the KEYWORD package, so an operator
named like this would have other bad side effects in Common Lisp. You _can_
define a function named by a keyword, but keyword symbols are not meant to
be bound to anything.
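
The reader behavior behind this sidenote can be checked at any Common Lisp
REPL (a minimal sketch; the => comments show what the standard reader and
evaluator yield):

```lisp
;; A leading colon interns a symbol in the KEYWORD package, so the
;; token := is just the keyword symbol whose name is "=".
(symbol-name ':=)               ; => "="
(eq ':= (intern "=" :keyword))  ; => T

;; Keyword symbols are self-evaluating constants; an assignment
;; operator spelled := would collide with that convention.
:=                              ; => :=
```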

Lieven Marchand

unread,
Dec 29, 2001, 1:55:33 PM12/29/01
to
Harald Hanche-Olsen <han...@math.ntnu.no> writes:

> This reminds me of an often heard argument against the traffic
> regulation requiring headlights to be on at all times: "If you can't
> see an approaching car a hundred meters away in broad daylight, you
> should not be driving a car in the first place." True, but this
> misses the point: There are lots of valid reasons why your attention
> may be momentarily directed elsewhere, and then those headlights alert
> you to the presence of the approaching car earlier. If a dangerous
> situation is averted that way, you are probably not even aware of the
> fact, as you took appropriate action without a second thought.

One of the most persuasive arguments against headlights during day
time is the false sense of safety that everything of importance will
announce its presence by headlights, which will make traffic more
dangerous for cyclists and pedestrians.

To bring this back on topic: I've used Emacs's Lisp font-lock mode and am
now using the LispWorks editor, which doesn't have that functionality, and
I don't miss it at all in Lisp.

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Lieven Marchand

unread,
Dec 29, 2001, 1:59:05 PM12/29/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> Barry Margolin wrote:
>
> > I believe the historical reason for using = in assignments is due to
> > mathematicians. They indicate assignment by writing "Let <var> = <expr>",
>
> Not exactly. Let var = exp in mathematics is an assertion of equality,
> not assignment. Assignment is a different operation which doesn't exist
> in traditional mathematics but can of course be represented
> mathematically using e.g. denotational semantics.

Some texts distinguish between assertions of equality between already
existing concepts, like sin^2 x + cos^2 x = 1, and an assertion of
equality that introduces or defines a new concept, usually written with
:= or with = topped by "def", as in tan x := sin x / cos x.
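
For reference, the two kinds of equality mentioned above are typically
typeset in LaTeX as follows (a sketch; `\coloneqq` is provided by the
mathtools package):

```latex
\documentclass{article}
\usepackage{mathtools} % provides \coloneqq
\begin{document}

% An assertion about already-defined concepts:
\[ \sin^2 x + \cos^2 x = 1 \]

% A defining equality, written := or with "def" over the = sign:
\[ \tan x \coloneqq \frac{\sin x}{\cos x}
   \qquad\text{or}\qquad
   \tan x \stackrel{\mathrm{def}}{=} \frac{\sin x}{\cos x} \]

\end{document}
```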

Jochen Schmidt

unread,
Dec 29, 2001, 5:13:19 PM12/29/01
to
Lieven Marchand wrote:

> Harald Hanche-Olsen <han...@math.ntnu.no> writes:
>
>> This reminds me of an often heard argument against the traffic
>> regulation requiring headlights to be on at all times: "If you can't
>> see an approaching car a hundred meters away in broad daylight, you
>> should not be driving a car in the first place." True, but this
>> misses the point: There are lots of valid reasons why your attention
>> may be momentarily directed elsewhere, and then those headlights alert
>> you to the presence of the approaching car earlier. If a dangerous
>> situation is averted that way, you are probably not even aware of the
>> fact, as you took appropriate action without a second thought.
>
> One of the most persuasive arguments against headlights during day
> time is the false sense of safety that everything of importance will
> announce its presence by headlights, which will make traffic more
> dangerous for cyclists and pedestrians.
>
> To track this back on topic, I've used emacs's lisp fontlock mode and
> am now using LispWorks editor which doesn't have that functionality
> and I don't miss it at all in Lisp.

Sidenote:
The new 4.2 Release of LispWorks supports syntax-highlighting in the editor.

Russell Wallace

unread,
Dec 29, 2001, 9:46:05 PM12/29/01
to
On Fri, 28 Dec 2001 19:09:21 GMT, Ray Blaak <bl...@telus.net> wrote:

>This is the worst thing to do, since = has a better meaning as a (kind of)
>equality predicate. C/C++/Java aside, a dedicated assignment token is
>preferred, whether :=, setf, set!, whatever.

They should be distinct symbols, but since assignment is used much
more often than equality testing, it should get the more concise
symbol.

Yes, this offends mathematicians. Here's a proposal: you
mathematicians stop using the Greek alphabet for variable names, and
we programmers will stop using = for assignment ^.^

--
"Pity for the guilty is treachery to the innocent."
mailto:rwal...@esatclear.ie
http://www.esatclear.ie/~rwallace

Jeffrey Siegal

unread,
Dec 29, 2001, 10:00:36 PM12/29/01
to
Russell Wallace wrote:

> They should be distinct symbols, but since assignment is used much
> more often than equality testing, it should get the more concise
> symbol.

That depends very much on the language and style of programming. In
most Scheme programs, equality testing (though often not the numerical
equality that the = symbol is used to represent) is used *far* more than
assignment.

Kenny Tilton

unread,
Dec 29, 2001, 10:13:04 PM12/29/01
to
Agreed. Every (rare) time I find myself coding SETF I feel like Ace
Ventura handling the white bat. Yecch.

Shikaka!

kenny
clinisys

Russell Wallace

unread,
Dec 29, 2001, 10:22:52 PM12/29/01
to
On Sat, 29 Dec 2001 19:00:36 -0800, Jeffrey Siegal <j...@quiotix.com>
wrote:

Okay, that's a fair point - if you're designing a language like
Scheme, where assignment is intended to be rare, then by all means go
ahead and use the shorter symbol for equality testing.

Bruce Hoult

unread,
Dec 29, 2001, 11:25:15 PM12/29/01
to
In article <3c2e7fa0....@news.eircom.net>, r...@eircom.net (Russell
Wallace) wrote:

> On Fri, 28 Dec 2001 19:09:21 GMT, Ray Blaak <bl...@telus.net> wrote:
>
> >This is the worst thing to do, since = has a better meaning as a (kind
> >of)
> >equality predicate. C/C++/Java aside, a dedicated assignment token is
> >preferred, whether :=, setf, set!, whatever.
>
> They should be distinct symbols, but since assignment is used much
> more often than equality testing, it should get the more concise
> symbol.

Not in any of the languages represented by the groups being crossposted
to.

-- Bruce

Dr. Edmund Weitz

unread,
Dec 30, 2001, 4:34:21 AM12/30/01
to
r...@eircom.net (Russell Wallace) writes:

> On Fri, 28 Dec 2001 19:09:21 GMT, Ray Blaak <bl...@telus.net> wrote:
>
> Yes, this offends mathematicians. Here's a proposal: you
> mathematicians stop using the Greek alphabet for variable names, and
> we programmers will stop using = for assignment ^.^

Would it still be OK to use Hebrew letters for cardinal arithmetic?

:)

Edi.

David Madore

unread,
Dec 30, 2001, 11:27:07 AM12/30/01
to
Russell Wallace wrote in <3c2e7fa0....@news.eircom.net>:

> Yes, this offends mathematicians. Here's a proposal: you
> mathematicians stop using the Greek alphabet for variable names, and
> we programmers will stop using = for assignment ^.^

Doesn't every (decent) programming language now support the use of
Unicode characters, including Greek letters, in identifiers? And with
Emacs 21 or vim 6, entering them should be as easy as selecting the
right input method / compose table.

(Actually, I'm seriously worried about this. I've seen programs which
were entirely commented in Japanese, and I was fortunate that the
identifier names were still (remotely) English. If even this moderate
help is lost, the program might just as well be in Unlambda for all I
can understand it - heck, I can't even tell most ideograms apart.
Well, I guess at least it's a good thing for those who like to use
one-character names for their variables. :-))

--
David A. Madore
(david....@ens.fr,
http://www.eleves.ens.fr:8080/home/madore/ )
