Time for a Fresh Scheme Standard: Say Goodbye to the RnRS Relic

New Scheme

Dec 21, 2001, 7:14:04 PM
***************************************************************
Time for a Fresh Scheme Standard
And to Say Goodbye to the RnRS Relic
----------------------------------------------------

Is it time for a new Scheme standard? Is it time to make a break from
the ossified RnRS document? Is it time to bring Scheme into the 21st
century?

Scheme has become very dated. The RnRS series of documents is a
relic of a more dynamic past, now little more than a fossil record
of the language's ever-slowing development. With each passing year,
Scheme becomes more irrelevant to the worlds of practical and academic
software development, education and research.

So what should be done? Fix the uncertainties, clear up the undefined
areas. Don't be scared to admit weaknesses and mistakes in the
current standard. Solicit help from the Common Lisp community and
draw upon their extensive practical experience. Learn from the
Functional community and their many strong ideas. And ask the
compiler vendors about practicalities.

It's time for a fresh look at Scheme. It's time to break away from
the RnRS and its brotherhood of old men in their isolated,
self-referential world. It's time to reinvigorate the language.

It's time for a new standard.

israel r t

Dec 23, 2001, 8:51:54 PM

Lisp needs to reinvent itself.
The last standard was released in 1994, i.e. nearly a decade ago.

As Paul Graham said:
"It's about time for a new dialect of Lisp. The two leading dialects,
Common Lisp and Scheme, have not been substantially changed since the
1980s.

What a language is has changed since then. In 1985, a programming
language was just a spec. Now, thanks to Perl, it means not just (and
maybe not even) a spec, but also a good free implementation, huge
libraries, and constant updates."

Lisp is no longer taught * at leading universities.
Lisp jobs are increasingly scarce.

Lisp is viewed in the real world as akin to COBOL ** only less likely
to provide a paying job.

Lisp has a severe image problem. ***
Eventually, it will go the way of Jovial and the Titan command
language.

Paul Graham is moving in the right direction with his lisp dialect
Arc. From his talk at the Lightweight Languages Workshop
MIT Artificial Intelligence Lab :

" In The Periodic Table, Primo Levi tells a story that happened when
he was working in a varnish factory. He was a chemist, and he was
fascinated by the fact that the varnish recipe included a raw onion.
What could it be for? No one knew; it was just part of the recipe. So
he investigated, and eventually discovered that they had started
throwing the onion in years ago to test the temperature of the
varnish: if it was hot enough, the onion would fry.

We're going to try not to include any onions in Arc. Everything is
open to question. "
http://www.paulgraham.com/arcll1.html


Footnotes:

* except in increasingly marginalised AI courses.

** The mega-LOCs of COBOL in the finance sector will ensure jobs for
COBOL drudges well into the next millennium.

*** and I am not referring to Naggum either...


Frank A. Adrian

Dec 23, 2001, 11:37:54 PM
israel r t wrote:
> Lisp needs to reinvent itself.
> The last standard was released in 1994, i.e. nearly a decade ago.
>
> As Paul Graham said:
> "It's about time for a new dialect of Lisp. The two leading dialects,
> Common Lisp and Scheme, have not been substantially changed since the
> 1980s.

Fine. Do you have (a) suggestions or (b) funding for people to participate
in such an effort?

> What a language is has changed since then. In 1985, a programming
> language was just a spec. Now, thanks to Perl, it means not just (and
> maybe not even) a spec, but also a good free implementation, huge
> libraries, and constant updates."

I guess. Inventors of Dylan, Curl, and AutoLisp, at the very least, would
tend to disagree with you.

> Lisp is no longer taught * at leading universities.
> Lisp jobs are increasingly scarce.

Sad. 90% of anything is crap, to quote Ted Sturgeon. I would assume this
applies to incomplete university educations and most jobs, as well.

> Lisp is viewed in the real world as akin to COBOL only less likely to
> provide a paying job.

Probably true, but see my last comment. When exposed to crap long enough,
even good people start having trouble telling the difference.

> Lisp has a severe image problem . ***
> Eventually, it will go the way of Jovial and the Titan command
> language.

Yup, just like Fortran. It's been around for 50 years. I guess it'll be
good for another 50. By then, if it hasn't been updated, I'll start to
worry.

> Paul Graham is moving in the right direction with his lisp dialect
> Arc.

It's nice to have an opinion. You're entitled to yours, no matter how
misguided.

[Obligatory anecdote about needless process step/ingredient snipped.]

There are many people that believe Common Lisp needs a bit of updating. Do
you have (a) suggestions or (b) funding? If not, are you just trying to
raise hackles, showing that people here are less friendly than on
c.l.scheme? If you have come here with that objective, you should have
also noticed that your comments were answered without rancor (albeit
briefly) and that most participants in this forum would answer you in
that tone (nor, had you come with the aforementioned goal in mind, would
you have deserved this much courtesy). Of course, cluelessness
in followups and arguments might be handled with much less forgiveness.

faa

Erik Naggum

Dec 24, 2001, 2:24:03 AM
* israel r t <isra...@antispam.optushome.com.au>

| Lisp needs to reinvent itself.
| The last standard was released in 1994, i.e. nearly a decade ago.

How old are you?

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Andreas Bogk

Dec 24, 2001, 5:16:18 PM

> Lisp needs to reinvent itself.

I suggest to take a look at Dylan. It's a pretty recent Lisp-like
language, and it's got a few things right (but on the other hand
omitted some features some people consider essential). I've also got
a list of things to do better on the next iteration.

You can learn a lot from Dylan when designing a new Lisp.

Andreas

--
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
(Michel Serres)

Jeffrey Siegal

Dec 24, 2001, 7:26:36 PM
Andreas Bogk wrote:
> I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> language, and it's got a few things right (but on the other hand
> omitted some features some people consider essential).

I consider Lisp syntax (or something similarly elegant) to be
essential. I suspect that many proponents of Dylan-like languages would
consider it unacceptable. I strongly suspect there is no middle ground.

(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
got abandoned.)

Andreas Bogk

Dec 24, 2001, 8:08:16 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

For the language user, there may be no middle ground. From the
perspective of the language designer, the syntax is just one issue of
many, so even if you prefer S-expressions, there's still a lot of Lisp
to discover in Dylan.

> (Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> got abandoned.)

The reason was that a lot of people, especially those who should be
persuaded to use Dylan, consider infix syntax to be more readable.
I'm well aware that this is paid for with increased complexity in macros,
and I'm still not firm enough in macrology to know whether this is a
substantial complaint or not. Still, Dylan provides valuable input
for designing the next Lisp/Scheme/whatever.

Jeffrey Siegal

Dec 24, 2001, 8:37:32 PM
Andreas Bogk wrote:
> > I consider Lisp syntax (or something similarly elegant) to be
> > essential. I suspect that many proponents of Dylan-like languages would
> > consider it unacceptable. I strongly suspect there is no middle ground.
>
> For the language user, there may be no middle ground. From the
> perspective of the language designer, the syntax is just one issue of
> many, so even if you prefer S-expressions, there's still a lot of Lisp
> to discover in Dylan.

Did you mean "A lot _for_ Lisp to discover?" There is little in Dylan
that didn't originate with Lisp, except the syntax. What does Dylan
have that Scheme + CLOS + "a collections library" doesn't have?

Thaddeus L Olczyk

Dec 24, 2001, 10:05:39 PM
On 24 Dec 2001 23:16:18 +0100, Andreas Bogk <and...@andreas.org>
wrote:

>israel r t <isra...@antispam.optushome.com.au> writes:
>
>> Lisp needs to reinvent itself.
>
>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
>language, and it's got a few things right (but on the other hand
>omitted some features some people consider essential). I've also got
>a list of things to do better on the next iteration.
>
>You can learn a lot from Dylan when designing a new Lisp.
>
>Andreas

What I saw of Dylan looked good, but it is a dead language. Stillborn.

Andreas Bogk

Dec 24, 2001, 10:13:44 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> > many, so even if you prefer S-expressions, there's still a lot of Lisp
> > to discover in Dylan.
> Did you mean "A lot _for_ Lisp to discover?" There is little in Dylan
> that didn't originate with Lisp, except the syntax.

No, I agree that most of Dylan originated in one Lisp dialect or
another. But I think that Dylan is a well-balanced blend of these
features; it feels good. That's why I suggest to at least take a look
at it when designing the next Lisp[0].

> What does Dylan have that Scheme + CLOS + "a collections library"
> doesn't have?

That would be conditions, type annotations and a useful module/library
system. Oh, and dynamism vs. performance tradeoffs like sealing,
primary classes and limited types.

Andreas

[0] Or "Lisp" or successor of Scheme which is "not a Lisp" or
whatever.

Jeffrey Siegal

Dec 25, 2001, 5:08:15 AM
Andreas Bogk wrote:
> > What does Dylan have that Scheme + CLOS + "a collections library"
> > doesn't have?
>
> That would be conditions, type annotations and a useful module/library
> system.

I agree about modules, although I don't really like the way Dylan uses
multiple files to define a simple module. There should be a way of
doing that in-line. A CLOS-style object system does have type
annotations, at least at the method level (which is probably enough),
because they're necessary for dispatch. As for conditions, I prefer
passing condition handlers as explicit arguments. With proper tail
calls and limited use of call/cc to escape out of CPS, it works fine.
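A minimal sketch of the handler-as-argument style described above (rendered in Common Lisp for concreteness; `safe-div` and `on-zero` are hypothetical names, and a Scheme version would rely on tail calls or a captured continuation as noted):

```lisp
;; Hypothetical sketch: the failure handler is an ordinary argument,
;; so the caller can never accidentally leave the condition unhandled.
(defun safe-div (x y on-zero)
  (if (zerop y)
      (funcall on-zero)          ; caller decides what division by zero means
      (/ x y)))

(safe-div 10 2 (lambda () :undefined))   ; => 5
(safe-div 10 0 (lambda () :undefined))   ; => :UNDEFINED
```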

> Oh, and dynamism vs. performance tradeoffs like sealing,
> primary classes and limited types.

I think these are overhyped features which have been adequately
addressed in Lisp/Scheme using different implementations as
needed, declarations, etc.

David Madore

Dec 25, 2001, 9:55:38 AM
Jeffrey Siegal in litteris <3C27C7BC...@quiotix.com> scripsit:

> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

Concrete syntax is irrelevant, isn't it? The same compiler could be
made to accept a dozen different syntaxes, or even configurable
syntaxes, and prefer none (Caml can handle configurable syntaxes
through the CamlP4 preprocessor, but it still prefers one).

--
David A. Madore
(david....@ens.fr,
http://www.eleves.ens.fr:8080/home/madore/ )

Andreas Bogk

Dec 25, 2001, 2:19:12 PM
olc...@interaccess.com (Thaddeus L Olczyk) writes:

> What I saw of Dylan looked good, but it is a dead language. Stillborn.

There are two Dylan compilers being actively maintained, one
commercial, one free. That's not exactly dead.

Andreas Bogk

Dec 25, 2001, 2:42:11 PM
Jeffrey Siegal <j...@quiotix.com> writes:

> I agree about modules, although I don't really like the way Dylan uses
> multiple files to define a simple module. There should be a way of

The idea behind Dylan was that the source code resides in a code
database, and the file format is just used for interchange. Of
course, in reality there are source files, and the interchange format
is a little awkward to use. That should be easier, I agree.

> doing that in-line. A CLOS-style object system does have type
> annotations, at least at the method level (which is probably enough),
> because they're necessary for dispatch.

Having type annotations for bindings gives the optimizer a lot of meat
to work on.

> As for conditions, I prefer
> passing condition handlers as explicit arguments. With proper tail
> calls and limited use of call/cc to escape out of CPS, it works fine.

I don't think so. Having to pass around handlers for all sorts of
conditions is a nuisance. This is something CL and Dylan got right,
IMHO.

> > Oh, and dynamism vs. performance tradeoffs like sealing,
> > primary classes and limited types.
> I think these are overhyped features which have been adequately
> addressed in Lisp/Scheme using either different implementations as
> needed, declarations, etc.

The point is that you can start writing code without caring about
performance. Once the design has settled, you can sprinkle some
adjectives here and there, and the code becomes fast, without having
to re-implement performance-critical code. I consider sealing to be a
good thing.

Andreas

Bruce Hoult

Dec 26, 2001, 12:18:37 AM
In article <3C27C7BC...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> > language, and it's got a few things right (but on the other hand
> > omitted some features some people consider essential).
>
> I consider Lisp syntax (or something similarly elegant) to be
> essential. I suspect that many proponents of Dylan-like languages would
> consider it unacceptable. I strongly suspect there is no middle ground.

I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
(PostScript, Forth). But even after much use of the others I find that
I do prefer "conventional" syntax.


> (Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> got abandoned.)

The reason as I understand it is that no one could figure out how to
bidirectionally map macros between infix and prefix.

I'm not sure whether this is impossible or merely hard.

It's interesting that some of the more complex macros in Common Lisp
look uncommonly like the "infix" syntax in Dylan. e.g. the "loop"
macro, which is nearly identical to the Dylan "for" statement macro.
Thus it might be acceptable to the Lisp-syntax people to essentially
retain (nearly?) the same syntax for statement macros in both modes.
Function macros are easy to translate. That leaves Dylan's declaration
macros to think about.

Another solution might be to explicitly define both syntaxes when you
define a macro. More work, but then you don't define new syntax quite
as often as you define functions.

-- Bruce

Bruce Hoult

Dec 26, 2001, 12:26:16 AM
In article <3C27D85C...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

Perhaps not a lot that is radical, but simply a lot of nice cleaning up.

- having a "let" where the scope is implicit (to the end of the current
progn) is a big win in uncluttering code

- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
read for some people.

- same goes for "[]" vs "element()".

- why do aref and gethash in CL have opposite argument orders?
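For concreteness, the aref/gethash asymmetry looks like this (a minimal Common Lisp sketch; `*v*` and `*h*` are hypothetical bindings):

```lisp
;; aref takes the aggregate first; gethash takes the key first.
(defparameter *v* (vector 'a 'b 'c))
(defparameter *h* (make-hash-table))
(setf (gethash :x *h*) 42)

(aref *v* 1)       ; (aref ARRAY INDEX)       => B
(gethash :x *h*)   ; (gethash KEY HASH-TABLE) => 42, T
```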


I think you get the point.

None of these (or other) items are critical in themselves, but I find
that put all together they provide a cleaner, easier to use (and
remember) language.

-- Bruce

Bruce Hoult

Dec 26, 2001, 12:53:26 AM
In article <3C28500F...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > > What does Dylan have that Scheme + CLOS + "a collections library"
> > > doesn't have?
> >
> > That would be conditions, type annotations and a useful module/library
> > system.
>
> I agree about modules, although I don't really like the way Dylan uses
> multiple files to define a simple module. There should be a way of
> doing that in-line.

No one does. That was supposed to be just an interchange format, not
something that users had to deal with. That was the case in the Apple
IDE, where all the code was kept in a database.

We've had a bit of discussion recently on a way to put various modules
into the same source file. Nothing has been agreed yet, but in Gwydion
we have recently done a related thing in implementing a "single-file
mode" that lets you write small programs without a library or module
declaration at all, with a default set of imports. If/when your program
outgrows that you can always add the .lid file.

The ability to put imports/exports in the same file with code is
something we definitely plan for fairly soon.

-- Bruce

Jeffrey Siegal

Dec 26, 2001, 9:38:23 AM
Bruce Hoult wrote:

>>Andreas Bogk wrote:
>>
>>>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
>>>language, and it's got a few things right (but on the other hand
>>>omitted some features some people consider essential).
>>>
>>I consider Lisp syntax (or something similarly elegant) to be
>>essential. I suspect that many proponents of Dylan-like languages would
>>consider it unacceptable. I strongly suspect there is no middle ground.
>
> I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> (PostScript, Forth). But even after much use of the others I find that
> I do prefer "conventional" syntax.

It isn't a question of using. It is a question of being able to define
new syntax without stretching or breaking the inherent limits of the
existing syntax. Lisp lives essentially forever in the world of
computer languages because it almost can't be outgrown. To the extent
that Dylan lives at all, it will still die when the world decides that
objects aren't that central to programming after all, and moves on to
some other model, or when someone comes up with a new syntactic
construct that is incompatible with Dylan's syntax. Lisp will live on.


>>(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
>>got abandoned.)
>
> The reason as I understand it is that no one could figure out how to
> bidirectionally map macros between infix and prefix.
>
> I'm not sure whether this is impossible or merely hard.

And the reason the decision was made to drop prefix rather than infix
when that happened was the overriding goal of trying to sell Dylan
alongside Java or C as a language for the great masses. (Which today
seems utterly absurd.)

Many smart people have observed that when you encounter a "hard" (if not
impossible) problem, you have already made a mistake somewhere back down
the road. Trying to "add" an infix syntax without recognizing that this
almost certainly means losing expressive power and generality was just
such a mistake.

> Another solution might be to explicitly define both syntaxes when you
> define a macro. More work, but then you don't define new syntax quite
> as often as you define functions.

That would be very error prone.

Jeffrey Siegal

Dec 26, 2001, 9:42:22 AM
Andreas Bogk wrote:

>>doing that in-line. A CLOS-style object system does have type
>>annotations, at least at the method level (which is probably enough),
>>because they're necessary for dispatch.
>>
>
> Having type annotations for bindings gives the optimizer a lot of meat
> to work on.

I'm not so sure about that, given good type inference, and methods that
are kept reasonably small. In any case, it is a trivially small matter
to add type bindings to let statements once they exist for methods.

>>As for conditions, I prefer
>>passing condition handlers as explicit arguments. With proper tail
>>calls and limited use of call/cc to escape out of CPS, it works fine.
>
> I don't think so. Having to pass around handlers for all sorts of
> conditions is a nuisance. This is something CL and Dylan got right,
> IMHO.

Chocolate and vanilla. I would add that explicitly passing condition
handlers around is a bit like explicit typing, because it prevents you
from leaving conditions unhandled.

>>>Oh, and dynamism vs. performance tradeoffs like sealing,
>>>primary classes and limited types.
>>>
>>I think these are overhyped features which have been adequately
>>addressed in Lisp/Scheme using either different implementations as
>>needed, declarations, etc.
>
> The point is that you can start writing code without caring about
> performance. Once the design has settled, you can sprinkle some
> adjectives here and there, and the code becomes fast, without having
> to re-implement performance-critical code. I consider sealing to be a
> good thing.

I do this in Scheme today, and I don't even sprinkle adjectives here and
there, by developing in a developer-friendly environment and then
switching to a highly-optimized block compiler for tuning and production.

Bruce Hoult

Dec 26, 2001, 11:05:12 AM
In article <3C29E0DF...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
>
> >>Andreas Bogk wrote:
> >>
> >>>I suggest to take a look at Dylan. It's a pretty recent Lisp-like
> >>>language, and it's got a few things right (but on the other hand
> >>>omitted some features some people consider essential).
> >>>
> >>I consider Lisp syntax (or something similarly elegant) to be
> >>essential. I suspect that many proponents of Dylan-like languages would
> >>consider it unacceptable. I strongly suspect there is no middle ground.
> >
> > I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> > (PostScript, Forth). But even after much use of the others I find that
> > I do prefer "conventional" syntax.
>
> It isn't a question of using. It is a question of being able to define
> new syntax without stretching or breaking the inherent limits of the
> existing syntax. Lisp lives essentially forever in the world of
> computer languages because it almost can't be outgrown.

That's true only in the trivial sense that Lisp has no syntax, so Lisp
syntax can't be outgrown. Dylan has pretty much all the same semantics
as Lisp, and a malleable syntax.


> To the extent
> that Dylan lives at all, it will still die when the world decides that
> objects aren't that central to programming after all, and moves on to
> some other model, or when someone comes up with a new syntactic
> construct that it is incompatible with Dylan's syntax. Lisp will live on.

There is no such construct. If it can be fitted into Lisp's
functions-only notation then it can also be fitted into Dylan's
functions and function-macros. In Dylan it may well be *better* fitted
into statement macros, but that's an additional possibility, not a
restriction.


> >>(Yes, I'm aware of Lisp-syntax Dylan, but I think there's a reason it
> >>got abandoned.)
> >
> > The reason as I understand it is that no one could figure out how to
> > bidirectionally map macros between infix and prefix.
> >
> > I'm not sure whether this is impossible or merely hard.
>
> And the reason the decision was made to drop prefix rather than infix
> when that happened was the overriding goal of trying to sell Dylan
> alongside Java or C as a language for the great masses. (Which today
> seems utterly absurd.)

Why? Since that decision was made, the great masses have adopted both
Java and Perl, while Lisp has remained in the wilderness. I don't see
any reason to think that infix syntax is a *disadvantage* to the goal of
attaining popularity. The time may simply be not yet right. After all,
it is only just now that reasonably mature Dylan implementations are
becoming available.


> Many smart people have observed that when you encounter a "hard" (if not
> impossible) problem, you have already made a mistake somewhere back down
> the road.

Or no one had the correct "ah-ha" moment yet.


> Trying to "add" an infix syntax without recognizing that this
> almost certainly means losing expressive power and generality was just
> such a mistake.

In your opinion.


> > Another solution might be to explicitly define both syntaxes when you
> > define a macro. More work, but then you don't define new syntax quite
> > as often as you define functions.
>
> That would be very error prone.

A great many things in programming are error prone. In fact anything in
which it is impossible to make a mistake is almost certainly not
powerful enough to be useful. It is reasonable to expect that
programmers have *some* skill. Also, even if a compiler can't
reasonably translate an infix macro to a prefix macro (or the reverse),
it seems entirely reasonable for it to apply some consistency checks to
two such macros supplied by a human.

-- Bruce

Jeffrey Siegal

Dec 26, 2001, 12:07:40 PM
Bruce Hoult wrote:
> That's true only in the trivial sense that Lisp has no syntax, so Lisp
> syntax can't be outgrown.

Hardly. It just has a syntax with a very simple and powerful basic
construction rule. However, on top of that construction rule,
enormously powerful syntactic abstractions can be (and are) built. What
Algol-like languages lack is the basic construction rule which allows
you to decompose the syntax down into elemental components. That makes
any macro system either enormously complex or lacking in power, or both.

Consider, for example, what low-level Lisp macros would look like in an
Algol-like language. They can be done but the result is enormously
complex (and also fragile; if the language syntax is extended, macros
written that way will likely break).
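For contrast, a sketch of what the simple construction rule buys: a Lisp macro receives its arguments as plain s-expressions and rebuilds them with ordinary list operations (`swap!` is a hypothetical example, not from the thread):

```lisp
;; The macro body is just list surgery on the source expressions;
;; no parser or grammar knowledge is needed.
(defmacro swap! (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (let ((x 1) (y 2)) (swap! x y) (list x y))  => (2 1)
```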

> There is no such construct. If it can be fitted into Lisp's
> functions-only notation then it can also be fitted into Dylan's
> functions and function-macros.

Of course it can, just as you could write a Lisp interpreter in Dylan
and use that. But at some point it becomes language-abuse, not
language-use, because the facilities the language provides to help you
end up either being in the way, or being useless warts. I can tell you
from experience that trying to do extremely complex things with
function-style macros in an Algol-like language is far, far worse than
doing the same thing in Lisp, since such things are a natural extension
of the Lisp syntax but strongly conflict with the flavor of an
Algol-style language. Yes, it can be done that way, but it might as
well not be possible because no one will want to use it.

> > And the reason the decision was made to drop prefix rather than infix
> > when that happened was the overriding goal of trying to sell Dylan
> > alongside Java or C as a language for the great masses. (Which today
> > seems utterly absurd.)
>
> Why?

I didn't mean the decision was absurd at the time, just that the
possibility of Dylan being sold to the great masses today is absurd.
Dylan is a useful niche language, which is all it will ever be. As a
niche language, though, you don't need to sell it with a candy-coated
syntax. I might be using it today if the Lisp syntax had been retained,
but I have no interest whatsoever in an Algol-syntax niche language. If
I'm going to use such a language, it is going to at least be a
mainstream one with all of the benefits that accrue from that status
(i.e., all things considered I'd rather use Java, and I do, than Dylan,
despite recognizing that Dylan is a much nicer language).

> Since that decision was made, the great masses have adopted both
> Java and Perl, while Lisp has remained in the wilderness. I don't see
> any reason to think that infix syntax is a *disadvantage* to the goal of
> attaining popularity.

I wasn't suggesting that.

> The time may simply be not yet right. After all,
> it is only just now that reasonably mature Dylan implementations are
> becoming available.

With all due respect, I think you are dreaming, and I think some honest
self-reflection would confirm that.

> > Many smart people have observed that when you encounter a "hard" (if not
> > impossible) problem, you have already made a mistake somewhere back down
> > the road.
>
> Or no one had the correct "ah-ha" moment yet.

Taking a path which requires an as-yet-unknown "ah ha" to succeed is a
design error. It is those moments which make new paths feasible. Blind
leaps occasionally do lead there (I'm a big fan of evolutionary
learning), but when they don't, you should be willing to accept that the
leap was a mistake and backtrack.

> > Trying to "add" an infix syntax without recognizing that this
> > almost certainly means losing expressive power and generality was just
> > such a mistake.
>
> In your opinion.

Absolutely true.

> > > Another solution might be to explicitly define both syntaxes when you
> > > define a macro. More work, but then you don't define new syntax quite
> > > as often as you define functions.
> >
> > That would be very error prone.
>
> A great many things in programming are error prone. In fact anything in
> which it is impossible to make a mistake is almost certainly not
> powerful enough to be useful. It is reasonable to expect that
> programmers have *some* skill.

Requiring a programmer to maintain two distinct pieces of code which are
supposed to have the same effect is something that experience shows to
be extremely difficult and error prone. As development practices go,
such an approach is best avoided.

Francois-Rene Rideau

Dec 26, 2001, 2:49:26 PM
Jeffrey Siegal <j...@quiotix.com> writes Re: New Lisp ?

> Consider, for example, what low-level Lisp macros would look like in an
> Algol-like language. They can be done but the result is enormously
> complex (and also fragile; if the language syntax is extended, macros
> written that way will likely break).
I wonder what you, or someone who knows them as well as LISP macros,
think of CamlP4 or of parse-tree filtering in Erlang.
These may not be as seamlessly integrated in their mother language as
LISP macros are, but they look very promising.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
A language that doesn't affect the way you think about programming,
is not worth knowing. -- Alan Perlis

Jeffrey Siegal

Dec 26, 2001, 3:33:23 PM
Francois-Rene Rideau wrote:

>>Consider, for example, what low-level Lisp macros would look like in an
>>Algol-like language. They can be done but the result is enormously
>>complex (and also fragile; if the language syntax is extended, macros
>>written that way will likely break).
>>
> I wonder what you, or someone who knows them as well as they know LISP
> macros, think of CamlP4 or of parse-tree filtering in Erlang.

I have not looked at them before, so I am not very familiar with them. I
looked quickly at CamlP4 and it looked very similar to what I've seen
before in terms of attempts to do this. In particular, fairly complex,
and requiring the programmer to understand quite a bit about parsing
theory and practice (an interesting field, but not one that every
programmer necessarily knows about or wants to know about).

Anyone who cannot see that the complexity of such things is a strong
argument in favor of a simple Lisp-like syntax[*] is blind or
prejudiced. Perhaps not an overriding argument that would cause one to
use a Lisp-syntax despite other issues, but still...

[*] By "Lisp-like" syntax I mean a syntax that can be constructed and
decomposed using a few simple, easy-to-understand rules. It doesn't
necessarily need to be Lisp syntax itself. For example, it might use
indentation rather than parentheses to indicate nesting. Or it might be
something else. But whatever it is, it should reduce to some sort of
logical and simple internal form, not some mostly random collection of
Algol-like constructs that exist largely as the result of a string of
historical accidents.
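
To make that construction rule concrete, here is a small sketch in
Python (used purely as a neutral notation; the tuple representation
and the example expression are invented for illustration, not taken
from any Lisp): when every expression is just an operator followed by
subexpressions, one rule suffices to build or take apart any program
fragment, which is exactly what a macro system exploits.

```python
# S-expression-style trees as nested tuples: one rule to construct,
# one rule to decompose.
expr = ("if", ("<", "x", 0), ("-", "x"), "x")

def decompose(e):
    """Split any compound expression into its operator and argument trees."""
    if not isinstance(e, tuple):
        return e  # an atom decomposes to itself
    op, *args = e
    return op, [decompose(a) for a in args]

op, args = decompose(expr)  # 'if' plus its three argument trees
```

An Algol-like grammar has no such single rule, so a macro writer there
must deal with many distinct statement and expression shapes.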

Feuer

unread,
Dec 26, 2001, 9:20:28 PM12/26/01
to
Bruce Hoult wrote:

> In article <3C27C7BC...@quiotix.com>, Jeffrey Siegal
> <j...@quiotix.com> wrote:

>
> > I consider Lisp syntax (or something similarly elegant) to be
> > essential. I suspect that many proponents of Dylan-like languages would
> > consider it unacceptable. I strongly suspect there is no middle ground.
>
> I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> (PostScript, Forth). But even after much use of the others I find that
> I do prefer "conventional" syntax.

The advantage of a language with significant syntax is that it allows the
programmer to quickly and easily write certain kinds of code. For example, ML
and Haskell syntax make it easy to write curried functions and function
applications, as well as infix operators. It would be quite annoying to call
a simple function by saying
(((foldl f) h) lst)

Infix is probably less important, but is convenient.
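
To see the currying point outside ML, here is a rough Python analogue
(Python chosen only for readability; `foldl` and the example functions
are made up for this illustration): a manually curried fold forces
exactly the nested-call shape complained about above.

```python
from functools import reduce

def foldl(f):
    """Manually curried left fold: called as foldl(f)(init)(lst)."""
    def with_init(h):
        def with_list(lst):
            return reduce(f, lst, h)
        return with_list
    return with_init

# Without syntactic support for currying, every partial application
# is an explicit extra call:
total = foldl(lambda acc, x: acc + x)(0)([1, 2, 3])  # 6
```

In ML or Haskell the same call is simply `foldl f 0 [1, 2, 3]`, which
is the notational convenience being described.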

Bruce Hoult

unread,
Dec 26, 2001, 7:20:19 PM12/26/01
to
In article <3C2A03DC...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > That's true only in the trivial sense that Lisp has no syntax, so Lisp
> > syntax can't be outgrown.
>
> Hardly. It just has a syntax with a very simple and powerful basic
> construction rule. However, on top of that construction rule,
> enormously powerful syntactic abstractions can be (and are) built. What
> Algol-like languages lack is the basic construction rule which allows
> you to decompose the syntax down into elemental components. That makes
> any macro system either enormously complex or lacking in power, or both.
>
> Consider, for example, what low-level Lisp macros would look like in an
> Algol-like language. They can be done but the result is enormously
> complex (and also fragile; if the language syntax is extended, macros
> written that way will likely break).

There are examples of the same thing happening in reverse. When macros
get sufficiently complex and have enough combinations of different
possibilities, it becomes too difficult to follow a purely S-expression
syntax. Consider the example of the Dylan "for" macro. How would you
map all that functionality and all those options onto an S-expression
syntax? Well, we can look at what is done in Common Lisp with the
"loop" macro. The same idea, very nearly exactly the same options. And
we find that in fact it does *not* use S-expression syntax, but instead
makes a little infix language that ends up very similar to that part of
Dylan.

As you say: "at some point it becomes language-abuse, not language-use".

Now, I happen to think that the "loop" macro is a *good* thing about
Common Lisp, but:

1) building such things yourself isn't well supported in CL (it is in
Dylan)

2) I find that I actually *prefer* this sort of thing to have infix
syntax, and prefer all loops and other control structures to use it. If
nothing else, it means that you know immediately whether you're looking
at a standard function application or a special form. That's what Dylan
does.


> > There is no such construct. If it can be fitted into Lisp's
> > functions-only notation then it can also be fitted into Dylan's
> > functions and function-macros.
>
> Of course it can, just as you could write a Lisp interpreter in Dylan
> and use that. But at some point it becomes language-abuse, not
> language-use, because the facilities the language provides to help you
> end up either being in the way, or being useless warts. I can tell you
> from experience that trying to do extremely complex things with
> function-style macros in an Algol-like language is far, far worse than
> doing the same thing in Lisp

Which algol-like language?


> I didn't mean the decision was absurd at the time, just that the
> possibility of Dylan being sold to the great masses today is absurd.
> Dylan is a useful niche language, which is all it will ever be.

Presumably, then, you feel the same way about Lisp?


> As a niche language, though, you don't need to sell it with a
> candy-coated syntax. I might be using it today if the Lisp syntax
> had been retained, but I have no interest whatsoever in an
> Algol-syntax niche language.

*You* may not, but not everyone feels that way. Apart from Dylan, there
are people out there using OCaml, Haskell and a bunch of lesser-known
niche languages with Algol-like syntaxes. Not all of them intend to
remain niche languages.


> If I'm going to use such a language, it is going to at least be a
> mainstream one with all of the benefits that accrue from that status
> (i.e., all things considered I'd rather use Java, and I do, than Dylan,
> despite recognizing that Dylan is a much nicer language).

Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
I started using it) C++ wasn't mainstream. The same goes for Perl
before the WWW happened. Plenty of languages have made the transition
from niche to mainstream in the past, and there is every reason to think
that plenty more will in the future. C++ and Perl and Java are not the
last word in language design for the masses.


> > The time may simply be not yet right. After all, it is only just
> > now that reasonably mature Dylan implementations are becoming
> > available.
>
> With all due respect, I think you are dreaming, and I think some honest
> self-reflection would confirm that.

Dreaming in what respect? Are you saying that reasonably mature Dylan
implementations are not yet available? Or that they have been for some
time? Or something else?

I'm certainly under no illusions that "a reasonably mature
implementation" is sufficient for market success. But it's surely
necessary.


> > > Many smart people have observed that when you encounter a "hard"
> > > (if not impossible) problem, you have already made a mistake
> > > somewhere back down the road.
> >
> > Or no one had the correct "ah-ha" moment yet.
>
> Taking a path which requires an as-yet-unknown "ah ha" to succeed is a
> design error.

I agree. And that path -- attempting to support both infix and prefix
syntaxes -- has *not* been taken. A clean switch was made from one to
the other.


> > > > Another solution might be to explicitly define both syntaxes
> > > > when you define a macro. More work, but then you don't define
> > > > new syntax quite as often as you define functions.
> > >
> > > That would be very error prone.
> >
> > A great many things in programming are error prone. In fact
> > anything in which it is impossible to make a mistake is almost
> > certainly not powerful enough to be useful. It is reasonable to
> > expect that programmers have *some* skill.
>
> Requiring a programmer to maintain two distinct pieces of code which are
> supposed to have the same effect is something that experience shows to
> be extremely difficult and error prone. As development practices go,
> such an approach is best avoided.

Mainstream programmers are expected to keep functions and prototypes in
synch. They are expected to maintain quite complex invariants over
large bodies of code, usually without benefit of anything more powerful
than "assert". They are expected to declare the type of a variable in
one place and then use it appropriately in other places. They are
expected to make sure that variables are correctly initialized over all
execution paths. They are expected to explicitly free dynamic memory at
those points -- and only those points -- where it is no longer needed.

Compared to some of those, correctly setting up alternate syntaxes in
the odd macro definition is hardly onerous or error-prone. And it might
be totally optional -- needed *only* if you want to use both. Most
mainstream programmers would presumably be satisfied with the Algol-like
syntax in the first place.

-- Bruce

Bruce Hoult

unread,
Dec 26, 2001, 7:27:15 PM12/26/01
to
In article <3C2A856C...@his.com>, Feuer <fe...@his.com> wrote:

> Bruce Hoult wrote:
> > I can happily use either. Or paren-less prefix (Logo, ML). Or postfix
> > (PostScript, Forth). But even after much use of the others I find that
> > I do prefer "conventional" syntax.
>
> The advantage of a language with significant syntax is that it allows
> the programmer to quickly and easily write certain kinds of code. For
> example, ML and Haskell syntax make it easy to write curried functions
> and function applications, as well as infix operators. It would be
> quite annoying to call a simple function by saying (((foldl f) h) lst)

That's true, but it seems awfully arbitrary.

How often do you find that you need to put function arguments in an
uncomfortable order so that the automatic currying works out right?

-- Bruce

Kaz Kylheku

unread,
Dec 26, 2001, 8:25:57 PM12/26/01
to
In article <87zo45sq...@Samaris.tunes.org>, Francois-Rene Rideau wrote:
>Jeffrey Siegal <j...@quiotix.com> writes Re: New Lisp ?
>> Consider, for example, what low-level Lisp macros would look like in an
>> Algol-like language. They can be done but the result is enormously
>> complex (and also fragile; if the language syntax is extended, macros
>> written that way will likely break).
>I wonder what you, or someone who knows them as well as they know LISP
>macros, think of CamlP4 or of parse-tree filtering in Erlang.

Once you introduce parse tree filtering, don't you think that users will
eventually want a way to specify any arbitrary parse tree, not just ones
that correspond to the few shapes determined by a hardcoded parser?

Then you are looking at some bracketed notation.

Coby Beck

unread,
Dec 26, 2001, 8:39:06 PM12/26/01
to

"Bruce Hoult" <br...@hoult.org> wrote in message
news:bruce-9D1859....@news.paradise.net.nz...

>
> - why do aref and gethash in CL have opposite argument orders?
>
>

aref is also opposite to nth. I think one "justification", if not
"reason", for this is the need to accommodate multiple indices in aref.
For nth, the second argument is the list, but for aref, if not the first,
it could be the 2nd, 3rd, etc., depending on how many array dimensions
you have.

--
Coby
(remove #\space "coby . beck @ opentechgroup . com")


David Rush

unread,
Dec 26, 2001, 9:08:03 PM12/26/01
to
Andreas Bogk <and...@andreas.org> writes:

> Jeffrey Siegal <j...@quiotix.com> writes:
> > As for conditions, I prefer
> > passing condition handlers as explicit arguments. With proper tail
> > calls and limited use of call/cc to escape out of CPS, it works fine.
>
> I don't think so. Having to pass around handlers for all sorts of
> conditions is a nuisance. This is something CL and Dylan got right,
> IMHO.

Well, I've not written any large reactive systems using CPS for
condition-handling, but it certainly seems to work well in my
data-mining code. As things stand today, I'd probably not choose
Scheme for a large GUI application, although I'm cooking up ideas to
try out in PLT Scheme just to see if their GUI support is as good as
it looks. Maybe sometime in this millennium I'll get around to it.
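
For readers unfamiliar with the explicit-handler style under
discussion, a minimal Python sketch (the function name and the
"missing file" policy are invented for illustration): the caller
passes the recovery policy down as an ordinary argument, instead of
installing a dynamically scoped handler as CL or Dylan would.

```python
def read_config(path, on_missing):
    """Explicit-handler style: the recovery policy is just an argument."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return on_missing(path)

# Each caller decides locally what a missing file should mean:
text = read_config("/no/such/file", on_missing=lambda p: "")
```

The cost Andreas points out below is that in a deep call chain every
intermediate layer must thread such arguments through.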

> > > Oh, and dynamism vs. performance tradeoffs like sealing,

Huh? What is this feature?

> > I think these are overhyped features which have been adequately
> > addressed in Lisp/Scheme using either different implementations as
> > needed, declarations, etc.
>
> The point is that you can start writing code without caring about
> performance.

Surely you *don't* really mean this. Big-O issues will jump up and get
you if you don't think about them.

> Once the design has settled, you can sprinkle some
> adjectives here and there, and the code becomes fast, without having
> to re-implement performance-critical code. I consider sealing to be a
> good thing.

Do you not also get the same benefits if you develop using good
functional abstractions?

david rush
--
The beginning of wisdom for a [software engineer] is to recognize the
difference between getting a program to work, and getting it right.
-- M A Jackson, 1975

David Rush

unread,
Dec 26, 2001, 9:24:50 PM12/26/01
to
Bruce Hoult <br...@hoult.org> writes:
> In article <3C2A856C...@his.com>, Feuer <fe...@his.com> wrote:
> > The advantage of a language with significant syntax is that it allows
> > the programmer to quickly and easily write certain kinds of code. For
> > example, ML and Haskell syntax make it easy to write curried functions
> > and function applications, as well as infix operators. It would be
> > quite annoying to call a simple function by saying (((foldl f) h) lst)
>
> How often do you find that you need to put function arguments in an
> uncomfortable order so that the automatic currying works out right?

Well comparing my experiences from 5 years ago programming constraint
solvers and scheduling systems in SML to the present where I'm data
mining in Scheme, I'd say I encounter about equal hassle in both
systems w/rt curried functions and partial applications. I use a lot
more HO functions in Scheme, at least partly to simulate SML
functors, and I get mildly annoyed at the number of explicit lambdas I
need to include. OTOH, I spent a fair amount of time in SML fretting
over the most `natural' partial application order (not to mention the
whole tuple vs curried API issue).

On the whole I prefer Scheme, but I might find that I also like SML
better (not that I ever *dis*liked it) now that my fluency in the
functional paradigm has grown.

Just $0.02.

david rush
--
From the start...the flute has been associated with pure (some might
say impure) energy. Its sound releases something naturally untamed, as
if a squirrel were let loose in a church." --Seamus Heaney

Jeffrey Siegal

unread,
Dec 26, 2001, 9:51:46 PM12/26/01
to
Bruce Hoult wrote:
> There are examples of the same thing happening in reverse. When macros
> get sufficiently complex and have enough combinations of different
> possibilities, it becomes too difficult to follow a purely S-expression
> syntax.

Not in my experience. YMMV.

> Consider the example of the Dylan "for" macro. How would you
> map all that functionality and all those options onto an S-expression
> syntax?

I haven't looked closely at the Dylan for macro, but I suspect by not
including all that functionality into a single construct.

> > > There is no such construct. If it can be fitted into Lisp's
> > > functions-only notation then it can also be fitted into Dylan's
> > > functions and function-macros.
> >
> > Of course it can, just as you could write a Lisp interpreter in Dylan
> > and use that. But at some point it becomes language-abuse, not
> > language-use, because the facilities the language provides to help you
> > end up either being in the way, or being useless warts. I can tell you
> > from experience that trying to do extremely complex things with
> > function-style macros in an Algol-like language is far, far worse than
> > doing the same thing in Lisp
>
> Which algol-like language?

C (preprocessor macros) and Java (a proprietary preprocessor). Other
people have done similar things with C++ templates and the result is
similarly unwieldy.

> > I didn't mean the decision was absurd at the time, just that the
> > possibility of Dylan being sold to the great masses today is absurd.
> > Dylan is a useful niche language, which is all it will ever be.
>
> Presumably, then, you feel the same way about Lisp?

Absolutely.



> > If I'm going to use such a language, it is going to at least be a
> > mainstream one with all of the benefits that accrue from that status
> > (i.e., all things considered I'd rather use Java, and I do, than Dylan,
> > despite recognizing that Dylan is a much nicer language).
>
> Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
> I started using it) C++ wasn't mainstream. The same goes for Perl
> before the WWW happened. Plenty of languages have made the transition
> from niche to mainstream in the past, and there is every reason to think
> that plenty more will in the future. C++ and Perl and Java are not the
> last word in language design for the masses.

That's all true, but for every language that "breaks out" there are a
zillion that don't, and those that break out generally have a big
promoter, though there are occasional exceptions, on the order of
perhaps one per decade. Not unlike pop artists.

> > > The time may simply be not yet right. After all, it is only just
> > > now that reasonably mature Dylan implementations are becoming
> > > available.
> >
> > With all due respect, I think you are dreaming, and I think some honest
> > self-reflection would confirm that.
>
> Dreaming in what respect? Are you saying that reasonably mature Dylan
> implementations are not yet available? Or that they have been for some
> time? Or something else?

That Dylan is going to go mainstream when "the time is right", and also
about reasonably mature implementations becoming available "just now."
I consider Harlequin's product to have been "reasonably mature" some
time ago.

Dylan will almost certainly never break into the mainstream without a
big promoter. The opportunity was largely lost when Apple dropped it.
Perhaps with Apple's backing it could have given Java a good run, but
without it, no way.

Bruce Hoult

unread,
Dec 26, 2001, 11:14:40 PM12/26/01
to
In article <3C2A8CC2...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > Consider the example of the Dylan "for" macro. How would you
> > map all that functionality and all those options onto an S-expression
> > syntax?
>
> I haven't looked closely at the Dylan for macro, but I suspect by not
> including all that functionality into a single construct.

Which wouldn't be very satisfactory.

Both CL's "loop" and Dylan's "for" are basically similar to "do" in
Scheme, in that they allow you to iterate with a number of variables
updated in parallel. But unlike Scheme they allow not just "var = init
then update-expression", but also automatic stepping through numeric
ranges ("var from foo to bar by baz") and multiple termination tests.
CL allows stepping through lists and hashes, Dylan allows stepping
through arbitrary collections. CL allows collecting expressions into a
result list (or summing them).

It's hard to see how to decompose this functionality into different
constructs while still allowing different loop variables to
simultaneously be controlled in different ways. Which is very desirable.

On the other hand, Scheme's "do" is already near the limits of
S-expression comprehensibility. Trying to extend it to do what Dylan's
"for" or CL's "loop" do would I think take it well past.
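
A hedged sketch of the "different variables controlled differently"
point, using Python generators as a stand-in (none of this is CL or
Dylan syntax; the values are invented): one variable steps through a
numeric progression, another through a collection, with an extra
termination test, all driving a single loop in parallel.

```python
from itertools import count

ks = count(0, 2)            # numeric stepping: 0, 2, 4, ...
xs = iter([3, 1, 4, 1, 5])  # stepping through a collection

acc = []
for k, x in zip(ks, xs):    # both variables advance in parallel
    if x > 4:               # an additional termination test
        break
    acc.append(k * x)
# acc collects the products seen before termination
```

CL's "loop" and Dylan's "for" express all of this declaratively in one
header, which is the functionality at issue.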


> > > I can tell you
> > > from experience that trying to do extremely complex things with
> > > function-style macros in an Algol-like language is far, far worse
> > > than doing the same thing in Lisp
> >
> > Which algol-like language?
>
> C (preprocessor macros) and Java (a proprietary preprocessor). Other
> people have done similar things with C++ templates and the result is
> similarly unwieldy.

I agree in each of those cases.

None of those language syntaxes (well, basically the same one) were
designed to be amenable to sophisticated macro processing. Dylan's
*was*. All the declarations and control structures were designed from
the outset to be implemented as macros. And in d2c, at least, they are.


> > > I didn't mean the decision was absurd at the time, just that the
> > > possibility of Dylan being sold to the great masses today is absurd.
> > > Dylan is a useful niche language, which is all it will ever be.
> >
> > Presumably, then, you feel the same way about Lisp?
>
> Absolutely.

Fair enough then.


> > > If I'm going to use such a language, it is going to at least be a
> > > mainstream one with all of the benefits that accrue from that status
> > > (i.e., all things considered I'd rather use Java, and I do, than
> > > Dylan,
> > > despite recognizing that Dylan is a much nicer language).
> >
> > Half a dozen years ago Java wasn't mainstream. A dozen years ago (when
> > I started using it) C++ wasn't mainstream. The same goes for Perl
> > before the WWW happened. Plenty of languages have made the transition
> > from niche to mainstream in the past, and there is every reason to
> > think
> > that plenty more will in the future. C++ and Perl and Java are not the
> > last word in language design for the masses.
>
> That's all true, but for every language that "breaks out" there are a
> zillion that don't,

Sure.

> and those that break out generally have a big promoter

Actually, that appears to be the exception. Java had huge promotion. C
and C++ might have been from AT&T but they can't really be said to have
*promoted* them. The authors pushed them personally, just as Larry Wall
did with Perl and Guido did with Python.


> > > > The time may simply be not yet right. After all, it is only just
> > > > now that reasonably mature Dylan implementations are becoming
> > > > available.
> > >
> > > With all due respect, I think you are dreaming, and I think some
> > > honest self-reflection would confirm that.
> >
> > Dreaming in what respect? Are you saying that reasonably mature Dylan
> > implementations are not yet available? Or that they have been for some
> > time? Or something else?
>
> That Dylan is going to go mainstream when "the time is right",

I certainly wouldn't put it as strongly as "is going to". "Might have a
shot" is more like it.


> and also
> about reasonably mature implementations becoming available "just now."
> I consider Harlequin's product to have been "reasonably mature" some
> time ago.

Yes, but it's only been on one OS -- and technical people's least
favourite one, at that.

They now have a Linux beta out, which is good.


Gwydion is on probably every interesting platform: Un*x, MacOS, MacOS X,
Windows (Cygwin), BeOS. But it's not as mature as Harlequin/Fun-O and
won't be in a position to even attempt to "break out" for a year or two
yet at the current rate.


> Dylan will almost certainly never break into the mainstream without a
> big promoter. The opportunity was largely lost when Apple dropped it.

That was a sad day, yes. And it's taking a while to recover from. The
good news is that 1) the implementations are getting there, and 2) most
mainstream people have never even heard of it, so when we're ready it'll
be "new" to them, not recycled.


> Perhaps with Apple's backing it could have given Java a good run, but
> without it, no way.

It's a long shot, for sure. I think that probably OCaml has a better
shot at it. Maybe Erlang, with its big backer. Both those are probably
a bit cryptic for the average punter though. We get people turning up
on the Gwydion mailing list saying things like "I never saw Dylan before
but I just browsed through [the compiler | your ICFP entry] and I COULD
ACTUALLY UNDERSTAND IT".

-- Bruce

Jeffrey Siegal

unread,
Dec 27, 2001, 12:07:36 AM12/27/01
to
Bruce Hoult wrote:
> Which wouldn't be very satisfactory.
>
> Both CL's "loop" and Dylan's "for" are basically similar to "do" in
> Scheme, in that they allow you to iterate with a number of variables
> updated in parallel. But unlike Scheme they allow not just "var = init
> then update-expression", but also automatic stepping through numeric
> ranges ("var from foo to bar by baz") and multiple termination tests.
> CL allows stepping through lists and hashes, Dylan allows stepping
> through arbitrary collections. CL allows collecting expressions into a
> result list (or summing them).
>
> It's hard to see how to decompose this functionality into different
> constructs while still allowing different loop variables to
> simultaneously be controlled in different ways. Which is very desirable.
>
> On the other hand, Scheme's "do" is already near the limits of
> S-expression comprehensibility. Trying to extend it to do what Dylan's
> "for" or CL's "loop" do would I think take it well past.

It isn't as if I started programming yesterday, and I just don't see the
need for all that nonsense. Frankly, I rarely even use the Scheme do
macro. The basic iteration mechanisms are usually powerful enough for
me, and if I need something specific for a particular use, I build it.
If I want to step by 57 and I don't feel like doing it explicitly, I'll
build a loop-by-57 form.

Mostly, though, I don't explicitly iterate very much, preferring
map-style approaches. When I have to use a collection that doesn't
implement them, and I don't care much about performance, I'll just
resort to generating a list and using map or for-each directly. (And
with a good compiler, the cost of doing this may be small anyway.) For
example, I have a private collection of map-style iterators for
matrices that allow various degrees of control over stepping and so
forth. I find the overall approach far superior to a for or loop type
widget. But, YMMV, of course.
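
A small sketch of this "build it yourself" approach in Python (the
helper is invented, not from Scheme or any of the languages under
discussion): a stepping iterator composes with ordinary map-style
code, so no monolithic loop construct is needed.

```python
from itertools import islice

def step(start, by):
    """Infinite arithmetic progression: the 'step by 57' clause as a helper."""
    n = start
    while True:
        yield n
        n += by

# Compose with generic tools instead of a special loop form:
firsts = list(islice(step(0, 57), 4))             # first four multiples of 57
squares = [n * n for n in islice(step(1, 2), 3)]  # map-style use over odd numbers
```

The point being illustrated: each piece of loop functionality becomes
a small reusable function rather than an option of one giant macro.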

> > > > I can tell you
> > > > from experience that trying to do extremely complex things with
> > > > function-style macros in an Algol-like language is far, far worse
> > > > than doing the same thing in Lisp
> > >
> > > Which algol-like language?
> >
> > C (preprocessor macros) and Java (a proprietary preprocessor). Other
> > people have done similar things with C++ templates and the result is
> > similarly unwieldy.
>
> I agree in each of those cases.
>
> None of those language syntaxes (well, basically the same one) were
> designed to be amenable to sophisticated macro processing. Dylan's
> *was*. All the declarations and control structures were designed from
> the outset to be implemented as macros. And in d2c, at least, they are.

You've confused yourself. We were discussing your claim that Dylan
could do anything that Lisp can do because it has function-style
macros. Resorting to function-style macros leaves you in essentially
the same place as doing function-style macros in C or Java or C++.

> Actually, that appears to be the exception. Java had huge promotion. C
> and C++ might have been from AT&T but they can't really be said to have
> *promoted* them.

C was the exception of the 80s. C++ was heavily promoted by Microsoft
(and the other Windows compiler vendors, when there were any). Perl was
perhaps the exception of the 90s (I consider Perl marginal and Python to
be clearly niche).

Andreas Bogk

unread,
Dec 26, 2001, 11:26:35 PM12/26/01
to
David Rush <ku...@bellsouth.net> writes:

> > I don't think so. Having to pass around handlers for all sorts of
> > conditions is a nuisance. This is something CL and Dylan got right,
> > IMHO.
> Well, I've not written any large reactive systems using CPS for
> condition-handling, but it certainly seems to work well in my
> data-mining code. As things stand today, I'd probably not choose
> Scheme for a large GUI application, although I'm cooking up ideas to

As soon as you pile up some layers of code, it quickly becomes tedious
to pass around handlers everywhere. Just imagine passing a GUI dialog
for resolving a "disk full" condition all the way through the GUI,
your application code, your storage abstraction down to the actual
disk access.

Imagine that you have some OS-agnostic code in the middle layers, and
that you don't even know that some condition might arise and that it
can be fixed. Proper exceptions allow code at distant places to
communicate efficiently.

> > > > Oh, and dynamism vs. performance tradeoffs like sealing,
> Huh? What is this feature?

It allows you to specify that you won't add a new method to a certain
generic function, or some application domain of that function. For
instance, the Dylan <integer> type is a regular class which cannot be
subclassed. The + function is a generic function sealed over the
> domain (<integer>, <integer>), so nobody can override that definition.

So you get all the performance benefits you'd get when implementing
integers specially, like Java does, but still <integer>s are regular
objects, and you can get the same kind of performance benefits for
your own classes.
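
A toy model of the idea in Python (entirely invented; real Dylan
sealing is a compile-time declaration checked by the compiler, not a
runtime check): once a domain of a generic function is sealed, no
further methods may be added to it, which is the guarantee an
optimizer can exploit.

```python
class Generic:
    """Toy generic function: per-type methods plus a seal switch."""
    def __init__(self):
        self.methods = {}
        self.sealed = set()

    def define(self, typ, fn):
        if typ in self.sealed:
            raise RuntimeError(f"domain {typ.__name__} is sealed")
        self.methods[typ] = fn

    def seal(self, typ):
        self.sealed.add(typ)  # promise: no new methods on this domain

    def __call__(self, x):
        return self.methods[type(x)](x)

double = Generic()
double.define(int, lambda x: x + x)
double.seal(int)
# double.define(int, ...) would now raise; a compiler that knows the
# integer domain is closed could inline or specialize that case.
```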

> > The point is that you can start writing code without caring about
> > performance.
> Surely you *don't* really mean this. Big-O issues will jump up and get
> you if you don't think about them.

Actually, I usually nail down *what* I want to do with a naive
implementation, which isn't really intended to solve the problem, but
just serves me as some kind of formal specification of the problem,
which happens to be executable too. Starting from that, I can
experiment with *how* to solve certain aspects, at which time big-O
complexity comes into play. Only after having found the right
algorithms and a correct implementation for them I start to think
about low-level performance issues. And I want my language to support
this kind of process.

> > Once the design has settled, you can sprinkle some
> > adjectives here and there, and the code becomes fast, without having
> > to re-implement performance-critical code. I consider sealing to be a
> > good thing.
> Do you not also get the same benefits if you develop using good
> functional abstractions?

Of course you can. I just happen to prefer generic functions, and I
want them to be as fast as the functional approach, which can be done
with sealing.

Andreas Bogk

unread,
Dec 27, 2001, 12:11:03 AM12/27/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> Dylan will almost certainly never break into the mainstream without a
> big promoter. The opportunity was largely lost when Apple dropped it.
> Perhaps with Apple's backing it could have given Java a good run, but
> without it, no way.

It need not be a commercial promoter. I think there are enough people
out there (and I guess a lot of them are reading these newsgroups) who
wish there was something like a modern LISP machine, or more
realistically, an operating system running on commodity hardware that
was written from scratch in a dynamic language. There might be enough
people (and smart enough people) to start the next Linux, who knows?

If such an OS (and especially its APIs) allowed for multiple
syntaxes, or even multiple languages, it would appeal both to the
experts and to the masses.

Jeffrey Siegal

unread,
Dec 27, 2001, 12:18:23 AM12/27/01
to
Andreas Bogk wrote:
> It need not be a commercial promoter. I think there are enough people
> out there (and I guess a lot of them are reading these newsgroups) who
> wish there was something like a modern LISP machine, or more
> realistically, an operating system running on commodity hardware that
> was written from scratch in a dynamic language. There might be enough
> people (and smart enough people) to start the next Linux, who knows?

I agree with this.

However, Dylan is so far from that it might as well be in the next
universe. You've got a much better shot with Java, frankly.

Jeffrey Siegal

unread,
Dec 27, 2001, 12:23:24 AM12/27/01
to
Andreas Bogk wrote:
> As soon as you pile up some layers of code, it quickly becomes tedious
> to pass around handlers everywhere. Just imagine passing a GUI dialog
> for resolving a "disk full" condition all the way through the GUI,
> your application code, your storage abstraction down to the actual
> disk access.

What happens in Java is that you have to at least declare the exceptions
up the chain anyway (the compiler will reject a method that doesn't
catch or throw E which involves a method declared to throw E). It isn't
that much harder to explicitly pass the handler.

> So you get all the performance benefits you'd get when implementing
> integers specially, like Java does, but still <integer>s are regular
> objects, and you can get the same kind of performance benefits for
> your own classes.

Yes and no. They're not "regular objects" because they can't be
subclassed. I think what you'd see in any kind of production
environment if Dylan were used is that almost everything would get
sealed off, much the way a lot of Java code makes extensive use of
"final." At that point, you might as well just use a static block
compiler and let the compiler recognize what is subclassed and what
isn't.

Bruce Hoult

unread,
Dec 27, 2001, 12:32:18 AM12/27/01
to
In article <3C2AB04C...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Andreas Bogk wrote:
> > As soon as you pile up some layers of code, it quickly becomes tedious
> > to pass around handlers everywhere. Just imagine passing a GUI dialog
> > for resolving a "disk full" condition all the way through the GUI,
> > your application code, your storage abstraction down to the actual
> > disk access.
>
> What happens in Java is that you have to at least declare the exceptions
> up the chain anyway (the compiler will reject a method that doesn't
> catch or throw E which involves a method declared to throw E. It isn't
> that much harder to explicitly pass the handler.

That's perfectly true.

It's also true that dealing with exception specifications in Java
*sucks*. In large Java projects I've worked on, the vast majority of CVS
commits end up being maintenance on the exception specifications. It's
just a lot of pointless make-work.

Being "not much harder" than Java is no recommendation.

-- Bruce

israel r t

unread,
Dec 27, 2001, 12:36:54 AM12/27/01
to
On Thu, 27 Dec 2001 17:14:40 +1300, Bruce Hoult <br...@hoult.org>
wrote:

>> Dylan will almost certainly never break into the mainstream without a
>> big promoter. The opportunity was largely lost when Apple dropped it.
>
>That was a sad day, yes. And it's taking a while to recover from. The
>good news is that 1) the implementations are getting there, and 2) most
>mainstream people have never even heard of it, so when we're ready it'll
>be "new" to them, not recycled.

Dylan's biggest liability is its name (named after an elderly
has-been 1960s rocker that only my parents would have been seen dead
listening to) and the perception that it was "dropped by Apple".

Perhaps renaming it and changing its Pascal-like syntax either towards
Scheme or towards C might get some disillusioned Schemers, Lispers,
or even some apostates from Java and C#....
As for names: Skylan/Skylark for the Schemefied version, or Cyclan for
the C-syntax version (I was an E.C. Tubb fan...). Or if you want a
musical name, Bach or Fugue would be nice (Mozart is already taken by
the Oz/Mozart language).

Andreas Bogk

unread,
Dec 27, 2001, 12:45:11 AM12/27/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> What happens in Java is that you have to at least declare the exceptions
> up the chain anyway (the compiler will reject a method that doesn't

Yes, and that bothers me to no end. I want to have specific code that
knows about an exception in exactly two places: where it is generated,
and where it can be handled. All the code inbetween doesn't need to
know more than that an operation has failed and that it needs to clean
up.
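
For comparison (an editor's sketch, not code from the thread; the function names are made up for illustration): the Common Lisp condition system realizes exactly this discipline. Only the signalling site and the handling site name the condition; the layers in between carry no declarations:

```lisp
;; Sketch (illustrative names, not from the thread).
(define-condition disk-full (error) ())

(defun write-block (data)          ; low level: signals the condition
  (declare (ignore data))
  (error 'disk-full))

(defun save-document (doc)         ; middle layer: mentions no conditions
  (write-block doc))

(defun ui-save (doc)               ; top level: handles the condition
  (handler-case (save-document doc)
    (disk-full ()
      (format t "Disk full -- please free some space.~%"))))
```

Dylan's condition system is broadly similar, so the contrast with Java's checked exceptions carries over.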

> Yes and no. They're not "regular objects" because they can't be
> subclassed.

That's true. The point of sealing is to offer the option of turning
off certain OO features while retaining the benefits of others (the
user can still specialize his own generic functions on integers, for
instance).

> I think what you'd see in any kind of production
> environment if Dylan were used is that almost everything would get
> sealed off, much the way a lot of Java code makes extensive use of
> "final." At that point, you might as well just use a static block
> compiler and let the compiler recognize what is subclassed and what
> isn't.

I'd hate to use a static block compiler, the turnaround time would be
a nightmare. And I'd like to keep the option of adding classes and gf
methods at runtime.

Andreas

Andreas Bogk

unread,
Dec 27, 2001, 12:53:45 AM12/27/01
to
Jeffrey Siegal <j...@quiotix.com> writes:

> However, Dylan is so far from that it might as well be in the next
> universe. You've got a much better shot with Java, frankly.

Java might get the masses, but it's not in the heart of the wizards.
But it's the wizards who would be able to start such a project.

Bruce Hoult

unread,
Dec 27, 2001, 1:13:27 AM12/27/01
to
In article <3C2AAC98...@quiotix.com>, Jeffrey Siegal
<j...@quiotix.com> wrote:

> Bruce Hoult wrote:
> > Which wouldn't be very satisfactory.
> >
> > Both CL's "loop" and Dylan's "for" are basically similar to "do" in
> > Scheme, in that they allow you to iterate with a number of variables
> > updated in parallel. But unlike Scheme they allow not just "var = init
> > then update-expression", but also automatic stepping through numeric
> > ranges ("var from foo to bar by baz") and multiple termination tests.
> > CL allows stepping through lists and hashes, Dylan allows stepping
> > through arbitrary collections. CL allows collecting expressions into a
> > result list (or summing them).
> >
> > It's hard to see how to decompose this functionality into different
> > constructs while still allowing different loop variables to
> > simultaneously be controlled in different ways. Which is very
> > desirable.
> >
> > On the other hand, Scheme's "do" is already near the limits of
> > S-expression comprehensibility. Trying to extend it to do what Dylan's
> > "for" or CL's "loop" do would I think take it well past.
>
> It isn't as if I started programming yesterday, and I just don't see the
> need for all that nonsense. Frankly, I rarely even use the Scheme do
> macro. The basic iteration mechanisms are usually powerful enough for
> me, and if I need something specific for a particular use, I build it.
> If I want to step by 57 and I don't feel like doing it explicitly, I'll
> build a loop-by-57 form.

I suspect that this is pretty much where Scheme people on one hand and
CL and Dylan people on the other hand part ways. Everyone appreciates
generality and power when they need them, but the latter two groups also
value notational convenience for the common cases. Dylan expands the
"for" macro into a tail-recursive function (and CL does something
similar) precisely because many people find that easier to write, read,
and understand than the explicit tail-recursive form, for most common
cases.
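
As an editor's illustration of the parallel-stepping point (not code from the posts), here is CL's `loop` driving three variables in different ways in a single form:

```lisp
;; Sketch: a numeric range with a step, a "var = init then
;; update-expression" pair, and traversal of a list, all stepped on
;; each iteration; COLLECT gathers the results.  Iteration stops when
;; the shortest driver (here the six-element list) runs out.
(loop for i from 0 to 10 by 2
      for x = 1 then (* x 3)
      for item in '(a b c d e f)
      collect (list i x item))
```

Writing the same thing with Scheme's `do` means encoding the range arithmetic and the list traversal by hand in the update expressions.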


> > > > > I can tell you
> > > > > from experience that trying to do extremely complex things with
> > > > > function-style macros in an Algol-like language is far, far worse
> > > > > than doing the same thing in Lisp
> > > >
> > > > Which algol-like language?
> > >
> > > C (preprocessor macros) and Java (a proprietary preprocessor). Other
> > > people have done similar things with C++ templates and the result is
> > > similarly unwieldy.
> >
> > I agree in each of those cases.
> >
> > None of those language syntaxes (well, basically the same one) were
> > designed to be amenable to sophisticated macro processing. Dylan's
> > *was*. All the declarations and control structures were designed from
> > the outset to be implemented as macros. And in d2c, at least, they
> > are.
>
> You've confused yourself. We were discussing your claim that Dylan
> could do anything that Lisp can do because it has function-style
> macros. Resorting to function-style macros leaves you in essentially
> the same place as doing function-style macros in C or Java or C++.

Rather better off, I think, since the C and C++ preprocessor is crap and
Java doesn't have one at all. Macro expansion in Dylan is *far* better
behaved, since it is hygienic and obeys lexical scoping (both with
respect to which macro is in scope where, and respecting the scoping of
arguments to the macro).


> > Actually, that appears to be the exception. Java had huge promotion.
> > C and C++ might have been from AT&T but they can't really be said
> > to have *promoted* them.
>
> C was the exception of the 80s.

So what was Pascal?


> C++ was heavily promoted by Microsoft

!!!

Microsoft didn't even *have* a C++ compiler until I'd been using the
language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
it was total crap anyway, and VC++ wasn't really usable until 4.1 or 4.2
or something like that.

-- Bruce

Erik Naggum

unread,
Dec 27, 2001, 2:06:44 AM12/27/01
to
* israel r t
> Lisp needs to reinvent itself.

* Andreas Bogk


| I suggest to take a look at Dylan.

Since the whole thread is a spoof of an article that caused the Scheme
community to explode in rage and the resident Dylan fans completely fail
to understand that this is a stupid attempt to inflame the Lisp community
likewise, but rather once again take part in it with their unsolicited
Dylan marketing campaign -- which is no surprise, like Scheme fans, they
also erroneously think their language is a Lisp and annoy comp.lang.lisp
with marketing for their Lisp-wannabe languages every once in a while --
the conclusion is clear: Dylan is worth a look if and only if Lisp needs
to reinvent itself, which it does not need any more than Scheme does.

Both Dylan and Scheme have distanced themselves from the Lisp community
in a number of important ways, but when they lose ground, they return to
their Lisp heritage, and when they gain ground, they point out how Lisp
is no longer worth anyone's time. This closely parallels the behavior
of immature children who want to distance themselves from their parents
as long as they risk nothing by doing so. If Dylan and Scheme were for
real, they would make a clean cut with Lisp and attempt to make it on
their own without constant references to their heritage, good or bad.
"Members of the Lisp family" try to point out how much better they are
than their parent, whatever "Lisp" as a parent means. Even Paul Graham, the
inventor of the silly new toy language "arc", needs to point out how
Common Lisp is superior to his new toy by knocking Common Lisp before he
has anything to show for himself.

Attacking Common Lisp is primarily a way to say "I don't understand
feature X, therefore it must be bad and should be removed". If they
spent as much time trying to understand what was going on as they do
trying to fight against Common Lisp, they would not need to fight, either.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Frank A. Adrian

unread,
Dec 27, 2001, 2:20:35 AM12/27/01
to
israel r t wrote:
> Dylan's biggest liability is its name ( named after an elderly
> has-been 1960's rocker that only my parents would have been seen dead
> listening to )...

Even I know that Dylan was named after Dylan Thomas, and because Dylan
was an acronym for DYnamic LANguage. Hell, the first Dylan compiler
(written in MIT Scheme) was named Thomas. Bob Dylan's lawyers also tried
to sue Apple over the perceived naming connection, and Apple won the
case.

> ... and the perception that it was "dropped by Apple".

Most of the people the purveyors of Dylan are targeting aren't even
aware of the role Apple played with the language.

Not that any of this will help the language in the long run (just so no one
thinks I have any sort of soft spot for the language as it is now).

faa

israel r t

unread,
Dec 27, 2001, 5:36:15 AM12/27/01
to
On 27 Dec 2001 06:53:45 +0100, Andreas Bogk <and...@andreas.org>
wrote:

>Jeffrey Siegal <j...@quiotix.com> writes:
>
>> However, Dylan is so far from that it might as well be in the next
>> universe. You've got a much better shot with Java, frankly.
>
>Java might get the masses, but it's not in the heart of the wizards.
>But it's the wizards who would be able to start such a project.

Don't be so sure...
The Demeter project and AspectJ (adaptive and aspect-oriented
programming using extensions to Java) may spawn the next step after
OOP and functional programming. It is certainly wizardly enough for
me.

http://www.ccs.neu.edu/research/demeter/
http://aspectj.org


Janos Blazi

unread,
Dec 27, 2001, 6:21:48 AM12/27/01
to
[...]

> Even Paul Graham, the
> inventor of the silly new toy language "arc", needs to point out how
> Common Lisp is superior to his new toy by knocking Common Lisp before he
> has anything to show for himself.

How can you judge the silliness of arc when it has not even been specified
yet? (The articles he wrote about arc are very clever, in my opinion.)

Mr Graham's criticism of CL is sincere and he genuinely loves Lisp. I
remember when we talked about free implementations of CL, you were very
angry and said something like it could not be expected of anybody to give
away his work for free, etc. Maybe Mr Graham will do this, and then many of
us will support him by buying his O'Reilly book on arc (which hopefully will
become Archlisp). Maybe he will give Lisp the bright future it deserves.

J.B.



Kaz Kylheku

unread,
Dec 27, 2001, 10:36:11 AM12/27/01
to

In Lisp, rather than Scheme, that would be:

(funcall (funcall (foldl f) h) lst)

A ``curried'' function in the first position of a list is not
automatically called.

Now the *disadvantage* of significant syntax is that it allows the programmer
to quickly and easily write the kinds of code that the language designer
thought the programmer ought to be able to write quickly and easily.
That approach assumes that the language designer can foresee how the language
will be used, which could turn into a self-fulfilling prophecy.

If you don't like chaining the funcalls using nested expressions, you
can invent some syntax which folds up the nesting, like

(chained-funcall #'foldl f h lst)
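
One way to realize that invented form (an editor's sketch, not from the post; `chained-funcall` and the curried `foldl` here are illustrative definitions) is as an ordinary function, with no new syntax at all:

```lisp
;; Sketch: apply a curried function to each argument in turn, so that
;; (chained-funcall f a b c) == (funcall (funcall (funcall f a) b) c).
(defun chained-funcall (fn &rest args)
  (reduce #'funcall args :initial-value fn))

;; An illustrative curried foldl:
(defun foldl (f)
  (lambda (h)
    (lambda (lst)
      (reduce f lst :initial-value h))))

;; (funcall (funcall (funcall #'foldl #'+) 0) '(1 2 3)) and
;; (chained-funcall #'foldl #'+ 0 '(1 2 3)) both evaluate to 6.
```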

Another way might be to do it as a mapper:

(mapchain #'start l-1 l-2 ... l-n)

where start has to be a function that takes as many arguments
as there are lists. The first element from every list is taken,
and turned into an argument list for start, which returns
another function that takes as many arguments, and is used
for the second elements of the list and so on.

Now, I have no idea how important it is to make this kind of chaining
more convenient, or in what way. So if I were designing a language, I
would view it as a grave mistake to introduce a special purpose syntax
for it, given that people can just experiment with their own, and settle
on solutions that are right for them. Then if a pattern of useful
primitives emerges from the community, which many people find useful,
their functions and macros can be incorporated into the language.

You also mentioned infix. You can find implementations of infix evaluators
for Common Lisp, so if you really want infix, it's there.

The point is that special syntax is really a form of premature
optimization. Worse, it's a form of optimization based entirely on
guessing what is going to be needed. The resulting language might attract
users who are looking for exactly that, so that the optimization later
appears to have been correct.

Kaz Kylheku

unread,
Dec 27, 2001, 11:10:14 AM12/27/01
to
In article <bruce-9D1859....@news.paradise.net.nz>, Bruce
Hoult wrote:
>- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
>read for some people.

Without some formal study, what is readable is just a guess.

Coming to Lisp after half a lifetime using programming languages that
have some kind of infix assignment, I have no problem reading setf at all.
(That is just anecdotal evidence based on my own experience, and does
not extrapolate into what people find readable, hint, hint).

Even if the := notation is found more readable, it may not be worthwhile.
What is more important is to be able to manage large, complex programs.
Having a simple, programmable syntax is a way of trading some small-scale
readability for a more significant goal.

The real question is whether the given language is a suitable target
language for one's abstractions.

Daniel C. Wang

unread,
Dec 27, 2001, 11:40:45 AM12/27/01
to

k...@ashi.footprints.net (Kaz Kylheku) writes:

> In article <bruce-9D1859....@news.paradise.net.nz>, Bruce
> Hoult wrote:
> >- Dylan's ":=" and CL's "setf" are the same idea, but := is easier to
> >read for some people.
>
> Without some formal study, what is readable is just a guess.
>
> Coming to Lisp after half a lifetime using programming languages that
> have some kind of infix assignment, I have no problem reading setf at all.
> (That is just anecdotal evidence based on my own experience, and does
> not extrapolate into what people find readable, hint, hint).

The human visual system has very low-level feature detectors for finding
vertical and horizontal lines. You can easily pick out a horizontal line
hiding in a field of vertical lines, just like picking out the green dot in
a field of yellow is pretty easy. Having the human visual system parse
"setf" takes quite a bit more work.

How many setfs are in this string?

skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf

How many :=s are in this string?

skdjvsdkjjlk:=lkasfjsflsklsdflkjwiescsfjeriosadflk:=kkfslkfjskfjskkksjdfliwa:=

Or this

---_+_+-==-=++-=-=--=-:==-=-=-=+=-=--=-++=-=--:=-=-=-=--0--=:=---+=--=-

Most people should find the middle task the easiest. If there's any
readability advantage to :=, I doubt it's an infix vs. prefix issue. It's
probably more low-level. Syntax highlighting probably makes the difference
go away completely.

Janos Blazi

unread,
Dec 27, 2001, 12:01:56 PM12/27/01
to
> How many setfs are in this string?
>
> skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf

Do you come across strings like that frequently in your work? Are you using
a Lisp *without parentheses*?

Kaz Kylheku

unread,
Dec 27, 2001, 12:52:26 PM12/27/01
to
In article <3c2b5...@news.newsgroups.com>, Janos Blazi wrote:
>> How many setfs? are in this string
>>
>>
>skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf
>
>Do you come across strings like that frequently in your work? Are you using
>a Lisp *without parentheses*?

Maybe he's using Fortran. When people developed Fortran, compiler writing
was a completely new field. It didn't occur to anyone that removing
all spaces in an early phase of translation was a bad idea. This
led to atrocities, like the infamous:

DOI=1.3

versus

DOI=1,3

The first means assign 1.3 to variable DOI. The second indicates the
start of a DO loop over the variable I from 1 to 3.

Jeffrey Siegal

unread,
Dec 27, 2001, 2:09:03 PM12/27/01
to
Andreas Bogk wrote:

>>What happens in Java is that you have to at least declare the exceptions
>>up the chain anyway (the compiler will reject a method that doesn't
>
> Yes, and that bothers me to no end. I want to have specific code that
> knows about an exception in exactly two places: where it is generated,
> and where it can be handled. All the code inbetween doesn't need to
> know more than that an operation has failed and that it needs to clean
> up.

[This is also a reply to Bruce's earlier comments.]

You will find views on both sides, not unlike the issue of static
typing. Code that wasn't written with exceptions being thrown through
it is not always safe; it may fail to clean up. Making sure that each
method has the appropriate declarations is a way to catch these things
at compile time. (Some methodological discipline is required to get any
benefit out of this, of course, but when isn't it?)

>>I think what you'd see in any kind of production
>>environment if Dylan were used is that almost everything would get
>>sealed off, much the way a lot of Java code makes extensive use of
>>"final." At that point, you might as well just use a static block
>>compiler and let the compiler recognize what is subclassed and what
>>isn't.
>
> I'd hate to use a static block compiler, the turnaround time would be
> a nightmare.


You don't generally use a static block compiler when code is in active
development. For example, I develop in DrScheme and then block compile
with Stalin for performance tuning and production use.

> And I'd like to keep the option of adding classes and gf
> methods at runtime.

Then you can't use sealing (much).

Jeffrey Siegal

unread,
Dec 27, 2001, 2:17:28 PM12/27/01
to
Bruce Hoult wrote:

> I suspect that this is pretty much where Scheme people on one hand and
> CL and Dylan people on the other hand part ways. Everyone appreciates
> generality and power when they need them, but the latter two groups also
> value notational convenience for the common cases. Dylan expands the
> "for" macro into a tail-recursive function (and CL does something
> similar) precisely because many people find that easier to write, read,
> and understand than the explicit tail-recursive form, for most common
> cases.

What you snipped is that I rarely use explicit iteration in complex
programs. I strongly prefer map-like forms, which are both conceptually
powerful and notationally convenient. I don't usually write explicit
iteration beyond the standard (let loop ...) idiom.

>>C was the exception of the 80s.
>>
>
> So what was Pascal?

A flop, basically. It had a short stint in academia, and as the
programming language for the Macintosh, before being overrun by C, but
it saw very little use in commercial shops, which would be a
necessity for a mainstream language.

>>C++ was heavily promoted by Microsoft
>>
>
> !!!
>
> Microsoft didn't even *have* a C++ compiler until I'd been using the
> language for three or four years. Wasn't 1.0 out in 1993 or so? Well,
> it was total crap anyway, and VC++ wasn't really usable until 4.1 or 4.2
> or something like that.

That was well before C++ became a mainstream "hit."


Ray Blaak

unread,
Dec 27, 2001, 2:25:13 PM12/27/01
to
Andreas Bogk <and...@andreas.org> writes:
> Jeffrey Siegal <j...@quiotix.com> writes:
>
> > What happens in Java is that you have to at least declare the exceptions
> > up the chain anyway (the compiler will reject a method that doesn't
>
> Yes, and that bothers me to no end. I want to have specific code that
> knows about an exception in exactly two places: where it is generated,
> and where it can be handled. All the code inbetween doesn't need to
> know more than that an operation has failed and that it needs to clean
> up.

Well you actually do have the choice in Java. Just have your exceptions extend
from RuntimeException or Error, and they are no longer required to be declared
in the throws clauses.

Then you have the knowledge of the exception to be exactly in the desired
places.

To "hide" the fact that you are possibly abusing the notion of "error", have a
base application exception class extend from RuntimeException or Error, and
have all your other exceptions extend from that.

Personally, though, I prefer having the explicit throws clauses, for then the
compiler forces me to be aware of the error issues. At the very least I know to
put in the necessary "finally" blocks and rethrow if handling the error is not
appropriate.

I have not found the maintenance issue to be too bad. One trick is to rethrow
in terms of a more general exception class, so that the methods in between the
low-level error and the final handler have just one or two exception classes in
their throws clauses, as opposed to a myriad of particular ones (which cause
the maintenance problem).

The mistake that a lot of Java programmers make is to simply swallow the
unexpected exceptions with a stack trace and continue. This is the worst of all
possibilities, defeating the purpose of exceptions in the first place.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Daniel C. Wang

unread,
Dec 27, 2001, 2:21:51 PM12/27/01
to

"Janos Blazi" <jbl...@hotmail.com> writes:

> > How many setfs are in this string?
> >
> > skdjvsdkjjlksetflkasfjsflsklsdflkjwiescsfjeriosadflksetfkkfslkfjskfjskkksjdfliwasetf
>
> Do you come across strings like that frequently in your work? Are you using
> a Lisp *without parentheses*?

Even without parentheses the other task of picking out ":=" is *relatively*
easier than the "setf" task. I'm sure that with parentheses *both* tasks get
easier, but the relative difference is still there. You can argue that
adding parentheses makes both tasks so easy that the relative difference
becomes meaningless. I personally do not think this is the case.

I don't have any Lisp code handy. However, if you take a larger piece of
source code and ask humans to underline every occurrence of "setf" in it,
it will take them longer than a similar task where "setf" is replaced by
":=", even if you keep prefix notation... i.e.

(:= e1 e2)
is easier to visually recognize than

(setf e1 e2)
when dumped in a sea of program text.

If you let people syntax-highlight things, the task might get easier for
setf. Anyway, that's just my take on it. I don't have any real experimental
evidence for it. However, I just described a relatively simple experiment
you can do yourself. I'd be happy to hear any evidence either way.

The ease of visually parsing programs is one reason why I prefer { } to
BEGIN END. Picking out { and } is simply an easier task for the human
visual system when compared to BEGIN END.
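
As an editorial sketch (not from the thread), Kaz's earlier point that people can experiment with their own notation applies directly here: a `:=` that expands to `setf` is two lines of Common Lisp. One caveat: `:=` reads as a symbol in the KEYWORD package, and while most implementations accept macro definitions on keyword names, this is a stylistic experiment, not portable production advice:

```lisp
;; Sketch: := as a thin macro over SETF, keeping prefix notation but
;; gaining the more visually distinctive glyph.
(defmacro := (place value)
  `(setf ,place ,value))

;; Usage:
;;   (defvar *x* 0)
;;   (:= *x* 5)                ; same as (setf *x* 5)
;;   (:= (car some-list) 'a)   ; SETF places still work
```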

Janos Blazi

unread,
Dec 27, 2001, 2:49:26 PM12/27/01
to
> Even without parentheses the other task of picking out ":=" is
> *relatively* easier than the "setf" task.

But (:= a 5) is not very readable either and looks a bit clumsy as well. In
this case I should prefer (= a 5) and (eq a 5) or something like this.

Jeffrey Siegal

unread,
Dec 27, 2001, 2:51:29 PM12/27/01