
Reflections on a classic Lisp Paper


Ray Dillinger

Jul 4, 2006, 4:30:20 PM

Reflections on a classic Lisp Paper

Note: if you're only interested in Modern Common Lisp or
Scheme, skip this article. It deals with history and the
semantics of constructs which no longer appear in modern
lisp dialects.

Note: Crossposted to CLS and CLL. Please remember how
easy it is to be polite to one another.

In 1980, Kent Pitman (and if you're reading this, Hi)
published a paper entitled, 'Special Forms In Lisp', which
explored three different mechanisms for producing
user-defined special forms - Macros, Fexprs, and Nlambda.

In it, he briefly explained all three systems (as they were
understood at the time) and then made comparisons of their
advantages and disadvantages, concluding that macros were
really all a Lisp dialect needed.

You can read his paper online at
http://www.nhplace.com/kent/Papers/Special-Forms.html .

Having read Pitman's paper, it is my contention that:

1) Modern Common-Lisp style macros are not as clearly good
relative to other mechanisms as the rather different macros
he was writing about were, in that several advantages claimed
for the old formulation of macros do not apply to the new
formulations.

2) Modern compilation techniques have extended what were
once advantages obtaining only to macros to callable
functions, potentially including fexprs and nlambdas.

3) Fexprs in particular, retaining their acknowledged
advantages of applicability and first-class status at
runtime, can be made drastically better than the fexprs this
paper talked about by handling environments explicitly.

My arguments for these points are as follows:

1) The Macros he was talking about in this paper were
run-time macros rather than compile-time macros. He
cites the ability to use the macro evaluation to alter
the text of the macro form into a non-macro form (expand
once, run many times) or not as the user requires as an
advantage of Macros. In modern dialects, where all
macroexpansion is pushed to the compilation phase, there
is no remaining choice and this is no longer an
advantage.
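
For concreteness, here is a rough Common Lisp sketch of that old
"displacing" (memoizing) style of macro. It is a toy reconstruction,
not how CL macros work today, and the helper names are made up:

  ;; The first expansion destructively overwrites the call form in the
  ;; retained source, so an interpreter re-running the same code never
  ;; expands it again: expand once, run many times.
  (defun displace (form expansion)
    "Destructively replace FORM (a cons) with EXPANSION (also a cons)."
    (setf (car form) (car expansion)
          (cdr form) (cdr expansion))
    form)

  (defmacro first-of (x)          ; an ordinary example macro
    `(car ,x))

  (defun expand-and-displace (form)
    "Expand FORM one step and splice the expansion into the source."
    (displace form (macroexpand-1 form)))

  ;; Usage (copying first so we don't mutate a literal constant):
  ;;   (defparameter *src* (copy-tree '(first-of lst)))
  ;;   (expand-and-displace *src*)
  ;;   *src* is now (CAR LST) and stays expanded on later runs.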

2) The cited advantages of macros include inline expansion
for named abstractions, not available at that time with
function calls (including normal as well as fexpr and
nlambda calls). Modern compilers do function inlining
just fine, so this is also no longer an advantage for
macros.
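
For example, in Common Lisp (a minimal sketch; INLINE is only
advisory, so whether the compiler actually inlines is up to the
implementation):

  (declaim (inline square))
  (defun square (x) (* x x))

  (defun sum-of-squares (a b)
    ;; A compiler honoring the INLINE declaration can expand SQUARE
    ;; in place, much as a macro would, yet SQUARE remains an
    ;; ordinary first-class function you can FUNCALL or APPLY.
    (+ (square a) (square b)))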

3) The cited advantages of macros include the macroexpansion
being in the same lexical contour as local non-special
variables and therefore having the ability to provide
scope rules that other types of definitions cannot. But:

First, that very ability to capture local variables is
now widely regarded as "breaking hygiene" and gives
rise to a nest of smallish semantic problems, not
really formalized at the time the paper was published.

Second, Lispers have been dealing or refusing to deal
with this nest of smallish semantic problems for
years, and have pretty much spec'd out good methods of
enforcing hygiene when and where it's needed. Many of
these methods (including the very simple one of
getting a local variable bound to the lexical
environment of the call site and using eval with that
environment explicitly when a capture is intended) are
equally applicable to fexprs and nlambdas.

Third, the exact formulation of fexprs and nlambdas
implemented in the systems of 1980 is not the only
possible formulation. There is no inherent problem
with a fexpr or nlambda form that gets a local
variable bound to the lexical environment of the call
site when it's called. Such fexprs or nlambdas
would then have the same ability as macros to do
things in the local scope of the call site, without
the risk of unintentional capture.
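
To make that concrete, here is a minimal sketch of a toy evaluator
in Common Lisp (not any existing dialect; every name is made up) in
which a fexpr receives the raw operand forms plus the caller's
environment, and calls the evaluator itself whenever it wants a value:

  (defstruct fexpr fn)               ; FN takes (operands env)

  (defun lookup (sym env) (cdr (assoc sym env)))

  (defun eval* (form env)
    (cond ((symbolp form) (lookup form env))
          ((atom form) form)
          (t (apply-form (eval* (car form) env) (cdr form) env))))

  (defun apply-form (op operands env)
    (if (fexpr-p op)
        ;; a fexpr gets the operands unevaluated, along with ENV
        (funcall (fexpr-fn op) operands env)
        ;; an ordinary function gets them evaluated first
        (apply op (mapcar (lambda (x) (eval* x env)) operands))))

  ;; IF as a fexpr: it decides what to evaluate, and evaluates it in
  ;; the caller's environment, so nothing is captured by accident.
  (defparameter *toy-env*
    (list (cons 'my-if
                (make-fexpr
                 :fn (lambda (operands env)
                       (destructuring-bind (test then else) operands
                         (if (eval* test env)
                             (eval* then env)
                             (eval* else env))))))
          (cons '+ #'+)
          (cons 'x 10)))

  ;; (eval* '(my-if x (+ x 1) 0) *toy-env*)  =>  11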

4) A cited disadvantage of both Fexprs and Macros is the
need for declaration in the compiler. A Fexpr
declaration would alert the system to suppress argument
evaluation prior to the function call, whereas if a macro
definition does not precede its call site the call is
compiled as a function call to an as-yet-undefined
function.

First, modules have greatly diminished the
inconvenience of having calls resolve to declarations
in nearly-arbitrary orders.

Second, an alternate semantics for function calls
wherein _all_ arguments are evaluated under the
control of the called functions rather than before the
call would completely eliminate the need for fexpr
declarations (see the sketch below).

Third, there is no formulation I can think of that
would entirely eliminate the need for macro
definitions visible when the macro call site is
compiled.
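
To illustrate the second sub-point above, here is a sketch of
"arguments evaluated under the control of the called function":
each argument is passed as a thunk and the callee forces only what
it needs. In the alternate semantics the evaluator would build the
thunks itself; here a macro stands in for that step, and the names
are hypothetical.

  (defmacro lazy-call (fn &rest args)
    `(,fn ,@(mapcar (lambda (a) `(lambda () ,a)) args)))

  (defun my-if* (test then else)
    (if (funcall test) (funcall then) (funcall else)))

  ;; (lazy-call my-if* (> 2 1) (print 'yes) (print 'no))
  ;; prints YES only; the untaken branch is never evaluated, so
  ;; MY-IF* acts like a special form with no fexpr declaration.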


Kent M Pitman

Jul 4, 2006, 5:39:19 PM

[ Replying to comp.lang.lisp only.
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Ray Dillinger <be...@sonic.net> writes:

> Reflections on a classic Lisp Paper
>
> Note: if you're only interested in Modern Common Lisp or
> Scheme, skip this article. It deals with history and the
> semantics of constructs which no longer appear in modern
> lisp dialects.

Apropos the Fourth of July, on which I'm posting this, that remark is
like saying if you're interested only in the future of the nation, don't
go see the movie 1776 today.

(Btw, if you've never seen it, that's on my list of top ten movies of all
time, not even just top ten musicals, which it is. Quite an awesome movie,
and it speaks to tomorrow as much as today because it talks about recurrent
issues and about how decisions are made and the fact that decisions are
made by individual people with individual fears, frailties, passions,
obsessions, diseases, shady interests, and so on. Without giving away the
climax, the story offered [I don't even know if it's true, but it somewhat
doesn't matter] of how the last vote was acquired is a lesson to all of us
about political reality.)

I don't know that everyone should read my 1980 paper. But I do know that
the set of those who should is not limited to those "only interested in
Modern Common Lisp" unless you pejoratively mean "only those who value the
latest and greatest and think that a new release of anything means the
rest of eternity has been obsoleted". But I don't think those describe the
same sets.

I care desperately about the present and future of the US, and I find
documents like the Federalist papers directly relevant because they inform
me of pitfalls that were avoided and of things that were tried and failed, and so on.

> Note: Crossposted to CLS and CLL. Please remember how
> easy it is to be polite to one another.
>
> In 1980, Kent Pitman (and if you're reading this, Hi)
> published a paper entitled, 'Special Forms In Lisp', which
> explored three different mechanisms for producing
> user-defined special forms - Macros, Fexprs, and Nlambda.

Incidentally, just to establish my position here, it wasn't me that "caused"
these changes so much as that acted as scribe in summarizing a mood of
a lot of people. I came to the conclusions in the paper by surveying a whole
community, which was of very similar mind. What got the paper noticed was that
it summarized a mood of the times, sort of like Gorbachev was said to have
more summarized the political mood of the USSR at the time than to have
single-handedly brought down communism. (Not to blow the importance
of my paper or of Lisp out of proportion with grandiose comparisons, but just
to make an easy-to-understand metaphor.) Some things are just due and the
person who first says them is sometimes credited but is not always the real
cause.

> In it, he briefly explained all three systems (as they were
> understood at the time) and then made comparisons of their
> advantages and disadvantages, concluding that macros were
> really all a Lisp dialect needed.
>
> You can read his paper online at
> http://www.nhplace.com/kent/Papers/Special-Forms.html .
>
> Having read Pitman's paper, it is my contention that:
>
> 1) Modern Common-Lisp style macros are not as clearly good
> relative to other mechanisms as the rather different macros
> he was writing about were, in that several advantages claimed
> for the old formulation of macros do not apply to the new
> formulations.

Such as...? I'm not being defensive here, just curious.
Claims like this do require evidence though.

> 2) Modern compilation techniques have extended what were
> once advantages obtaining only to macros to callable
> functions, potentially including fexprs and nlambda's.

Again, I find it hard to make sense of this or evaluate its
truth value without examples.



> 3) Fexprs in particular, retaining their acknowledged
> advantages of applicability and first-class status at
> runtime, can be made drastically better than the fexprs this
> paper talked about by handling environments explicitly.

This is a possible but somewhat difficult claim to make. In order to
make it fully, there are probably some things you have to do which you
overlook here. In particular, you pretty much must count in the
positive effect of being able to do universal quantification across
all special forms, which was the result of the CL decision to follow
up on my paper's thesis and to limit special forms to a fixed set.
This allows the writing of portable code-walkers, and even though no
one has done that, I don't think it's impossible in CL--I just think
it's hard. I have one somewhere that I wrote, for example, and it
worked pretty well. But the point is that the ways it fell down were
over environment access (which you presuppose here) not due to
impenetrability of fixed special forms. Once you open the space to new
creation, people can't have code-walkers, even if the environment problem
is fixed, unless you create a protocol for understanding and code-walking
new special forms. And I'm not sure that's possible without a general
purpose AI description language to fully describe the effect of the
new special form, or without a reflective formal semantics that can be
dynamically understood and processed by codewalkers to assure that
appropriate semantics are maintained. (And the problem there may well
be that once you can do that, it's possible that you've reduced the language
from a fixed set of special forms to some other fixed space dictated by
the semantics and that all you're offering is new syntax--which is all
that macros do. I don't know this for sure--I'd have to read up on
denotational semantics to find out--but it's a "point of due diligence"
I wouldn't let pass before declaring success if I were you).
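
(A bare-bones sketch of what such a walker's skeleton looks like,
handling only QUOTE, IF, macros, and ordinary calls; a real one must
track lexical bindings, declarations, and the rest of the fixed set
of special operators:

  (defun walk-all (forms fn env)
    (mapcar (lambda (f) (walk f fn env)) forms))

  (defun walk (form fn &optional env)
    "Call FN on FORM, then recurse into the subforms that are code."
    (let ((form (funcall fn form)))
      (cond ((atom form) form)
            ((eq (car form) 'quote) form)            ; data: stop here
            ((eq (car form) 'if)                     ; all subforms are code
             (cons 'if (walk-all (cdr form) fn env)))
            ((and (symbolp (car form)) (macro-function (car form) env))
             (walk (macroexpand-1 form env) fn env)) ; expand, keep walking
            (t (cons (car form) (walk-all (cdr form) fn env))))))

The point is that the COND above can be written at all only because
the set of special operators is fixed and known.)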

> My arguments of these points are thus:
>
> 1) The Macros he was talking about in this paper were
> run-time macros rather than compile-time macros. He
> cites the ability to use the macro evaluation to alter
> the text of the macro form into a non-macro form (expand
> once, run many times) or not as the user requires as an
> advantage of Macros.

Yes, we found through later experimentation that this wasn't very interesting
computationally and mostly addressed short-term small address space concerns
and so we got rid of the "memoization" kind of macros in CL. No one complained
until now. You need examples to put teeth in your remarks. Claims of this
sort are effectively assertions that there are expressions that cannot be
rewritten properly and efficiently without what you propose, and if you offer
no such challenge expressions, your argument is weak.

> In modern dialects, where all
> macroexpansion is pushed to the compilation phase, there
> is no remaining choice and this is no longer an
> advantage.

I don't think those that were involved in this change would define it this
way. I think they'd mostly say that we did two rounds of simplification:
one to remove fexprs and another to remove self-modifying macros since both
seemed to add little power and mostly the opportunity to let users confuse
themselves. Your claim that this was ill-advised requires examples.
But certainly it's wrong to suggest that the "round two" changes as I've
called them here were done by people oblivious to the motivation of "round one",
and so saying "Lisp has changed so much that we should revamp it" is like
saying "I decided to repaint my house and then I also repainted the trim
on the house in a matching color, so I therefore have changed the house so
much I should go back and reconsider the original change". I think these
changes were similarly motivated: to keep users from being caught in complex
code-analysis halting problems, and I think they succeeded. A claim that
they should be re-armed with foot-pointing weapons because users have gotten
smarter and less spazz-prone requires some elaboration.

> 2) The cited advantages of macros include inline expansion
> for named abstractions, not available at that time with
> function calls (including normal as well as fexpr and
> nlambda calls). Modern compilers do function inlining
> just fine, so this is also no longer an advantage for
> macros.
>
> 3) The cited advantages of macros include the macroexpansion
> being in the same lexical contour as local non-special
> variables and therefore having the ability to provide
> scope rules that other types of definitions cannot. But:
>
> First, that very ability to capture local variables is
> now widely regarded as "breaking hygiene" and gives
> rise to a nest of smallish semantic problems, not
> really formalized at the time the paper was published.
>
> Second, Lispers have been dealing or refusing to deal
> with this nest of smallish semantic problems for
> years, and have pretty much spec'd out good methods of
> enforcing hygiene when and where it's needed. Many of
> these methods (including the very simple one of
> getting a local variable bound to the lexical
> environment of the call site and using eval with that
> environment explicitly when a capture is intended) are
> equally applicable to fexprs and nlambdas.

I think you are underestimating the degree to which you might thwart
the ability to compile.

In Maclisp, I think some special forms were not compilable or not fully
compilable because the compiler just couldn't do it. Nowadays, we
have special forms that we can compile, but if you re-open it to users
doing arbitrary things, you may or may not be able to still compile them.
The burden is on you to show that, even with such a change, all uses of
even user-defined special forms can be compiled... not on me to show that
you've been held back by a concern that they might not be.

> Third, the exact formulation of fexprs and nlambdas
> implemented in the systems of 1980 is not the only
> possible formulation. There is no inherent problem
> with a fexpr or nlambda form that gets a local
> variable bound to the lexical environment of the call
> site when it's called. Such fexpr's or nlambda's
> would then have the same ability as macros to do
> things in the local scope of the call site, without
> the risk of unintentional capture.

Some of what compilers do is to prove by exhaustive analysis that various
effects cannot happen, too. If you create a situation where there is a
proliferation of operators with the potential to do changes that the compiler
cannot know, such proofs can break. The burden is again on you to show
they will not.



> 4) A cited disadvantage of both Fexprs and Macros is the
> need for declaration in the compiler. A Fexpr
> declaration would alert the system to suppress argument
> evaluation prior to the function call, whereas if a macro
> definition does not precede its call site the call is
> compiled as a function call to an as-yet-undefined
> function.
>
> First, modules have greatly diminished the
> inconvenience of having calls resolve to declarations
> in nearly-arbitrary orders.
>
> Second, an alternate semantics for function calls
> wherein _all_ arguments are evaluated under the
> control of the called functions rather than before the
> call would completely eliminate the need for fexpr
> declarations.

And would make compiler optimization hard.

> Third, there is no formulation I can think of that
> would entirely eliminate the need for macro
> definitions visible when the macro call site is
> compiled.

I can't even figure out what the relevance of your point is on this last.

In any case, I await the version of this with lots of worked examples.

The only languages I know of that do this kind of thing have no compiler.
I don't regard that as a proof that they can't have one. But it doesn't
give me confidence that semantics like this leads to good compilers.

William D Clinger

Jul 4, 2006, 6:19:27 PM

The following paper nails down one of the more important
things to understand about FEXPRs:

Mitchell Wand. The Theory of Fexprs is Trivial.
Higher-Order and Symbolic Computation 10(3),
May 1998, pages 189-199.

Abstract: We provide a very simple model of a reflective
facility based on the pure lambda-calculus, and we show
that its theory of contextual equivalence is trivial: two
terms in the language are contextually equivalent iff
they are alpha-congruent.
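
One way to feel the force of the result is to let an ordinary Common
Lisp macro stand in for an operator that receives its argument as
unevaluated source (a sketch, not Wand's formalism):

  (defmacro source-of (form)
    "Return the unevaluated source text of FORM as data."
    `',form)

  ;; (source-of (+ 1 1))  =>  (+ 1 1)
  ;; (source-of 2)        =>  2
  ;;
  ;; A context built from such an operator separates (+ 1 1) from 2
  ;; even though both evaluate to 2.  With a first-class fexpr-like
  ;; facility, any two terms that are not mere renamings of one
  ;; another can be told apart this way, which is why contextual
  ;; equivalence collapses to alpha-congruence.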

Will

Ray Dillinger

Jul 4, 2006, 10:05:09 PM

Kent M Pitman wrote:
> [ Replying to comp.lang.lisp only.
> http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

>>1) Modern Common-Lisp style macros are not as clearly good
>>relative to other mechanisms as the rather different macros
>>he was writing about were, in that several advantages claimed
>>for the old formulation of macros do not apply to the new
>>formulations.
>
>
> Such as...? I'm not being defensive here, just curious.
> Claims like this do require evidence though.

Contentions first, arguments later. I'm being didactic,
like people who present abstracts before the body of their
papers. :-) Although, as you mentioned when you actually
got to the arguments, the arguments are weak in the absence
of a functioning fexpr-based lisp.

>>3) Fexprs in particular, retaining their acknowledged
>>advantages of applicability and first-class status at
>>runtime, can be made drastically better than the fexprs this
>>paper talked about by handling environments explicitly.
>
>
> This is a possible but somewhat difficult claim to make. In order to
> make it fully, there are probably some things you have to do which you
> overlook here.

One thing only, which is to produce a working system that isn't
agonizingly dog-slow. As you note, I haven't done it yet.

> In particular, you pretty much must count in the
> positive effect of being able to do universal quantification across
> all special forms, which was the result of the CL decision to follow
> up on my paper's thesis and to limit special forms to a fixed set.
> This allows the writing of portable code-walkers, and even though no
> one has done that, I don't think it's impossible in CL--I just think
> it's hard. I have one somewhere that I wrote, for example, and it
> worked pretty well.

I have one I wrote in scheme, and it works okay as long as it's
R4RS Scheme (which shows how long ago I wrote it, I guess), but it
trips hard on implementation-specific extensions, which are a large
part of the game in Scheme dialects. Thing is, I haven't seen very
many compelling examples of utility derived from the ability to
code-walk. Environment access is both more and less tractable in
Scheme, due to the hygienic macros and absence of special variables.

> Once you open the space to new
> creation, people can't have code-walkers, even if the environment problem
> is fixed, unless you create a protocol for understanding and code-walking
> new special forms.

Actually, an idea I'm now toying with in that direction is one I
got from the very paper we're talking about. What I really *want* in
Fexprs is that they're applicable and have first-class runtime
existence so they can be stored in variables, returned from functions,
etc. I'm thinking of using your "conversion between" function as an
example and creating fexprs that mirror macros and vice versa, so that
a coexistence of the two can happen. User defines a macro, system
defines a fexpr derived from it, and then the macro source can be used
to drive a protocol for understanding and faux code-walking the fexpr
<insert slides of me waving my hands madly here>.
Both would exist in the system; the simple use cases handled as macros,
and anything that requires runtime existence handled by the
corresponding fexpr. I dunno yet whether this will actually work;
this is a handwave and I'm still trying to figure it out.


> And I'm not sure that's possible without a general
> purpose AI description language to fully describe the effect of the
> new special form, or without a reflective formal semantics that can be
> dynamically understood and processed by codewalkers to assure that
> appropriate semantics are maintained. (And the problem there may well
> be that once you can do that, it's possible that you've reduced the language
> from a fixed set of special forms to some other fixed space dictated by
> the semantics and that all you're offering is new syntax--which is all
> that macros do. I don't know this for sure--I'd have to read up on
> denotational semantics to find out--but it's a "point of due diligence"
> I wouldn't let pass before declaring success if I were you).

Right. Don't get me wrong here, I'm not claiming that macros are
a bad or wrong thing -- only that the reasons presented in that paper
no longer support claims of their superiority. Anything else
definitely has to be tried and proven before huge claims can be made.

>>My arguments of these points are thus:
>>
>>1) The Macros he was talking about in this paper were
>> run-time macros rather than compile-time macros. He
>> cites the ability to use the macro evaluation to alter
>> the text of the macro form into a non-macro form (expand
>> once, run many times) or not as the user requires as an
>> advantage of Macros.
>
>
> Yes, we found through later experimentation that this wasn't very interesting
> computationally and mostly addressed short-term small address space concerns
> and so we got rid of the "memoization" kind of macros in CL. No one complained
> until now. You need examples to put teeth in your remarks. Claims of this
> sort are effectively assertions that there are expressions that cannot be
> rewritten properly and efficiently without what you propose, and if you offer
> no such challenge expressions, your argument is weak.

I think that is tantamount to saying, "Yes, that turned out not
to be an advantage of macros after all. We didn't miss it when
we got rid of it."

And I'm not complaining about its absence; only saying that the
capability, now abandoned, does not represent a compelling argument
in favor of macros as currently implemented (and apparently not
a very compelling argument in favor of macros as implemented then,
either).


> I don't think those that were involved in this change would define it this
> way. I think they'd mostly say that we did two rounds of simplification:
> one to remove fexprs and another to remove self-modifying macros since both
> seemed to add little power and mostly the opportunity to let users confuse
> themselves. Your claim that this was ill-advised requires examples.

I'm sorry; that's not what I claimed. What I claimed was that the
reasoning presented in the paper is not an argument which supports
modern-style macros. To ask whether the removal of self-modifying
macrology was ill-advised or not, is a completely different thing,
and I have no real evidence of that one way or another.

> But certainly it's wrong to suggest that the "round two" changes as I've
> called them here were done by people oblivious to the motivation of "round one",
> and so saying "Lisp has changed so much that we should revamp it" is like
> saying "I decided to repaint my house and then I also repainted the trim
> on the house in a matching color, so I therefore have changed the house so
> much I should go back and reconsider the original change".


Ack! I'm not calling for a revamp; I believe that there should be
experimental dialects, yes, and that the question should be explored
again. But a revamp of existing dialects would be maniacal. Common
Lisp macros work, are well understood and happily used by thousands
of programmers, and are simple and powerful enough for all ordinary
(and most extraordinary) uses!

Every good Lisp dialect is a collection of design decisions that
*WORK* together. Scheme's hygienic macrology works better with
its Lisp-1 semantics than defmacro, but would not serve CL as well.
CL's lisp-2 semantics work better with its defmacro, but would
not serve as gracefully as lisp-1 semantics for the kind of
functional programming that Scheme excels at. And so on. It
would be counterproductive to mess up existing dialects by
introducing features not suited for them and not consistent with
the rest of their design decisions.

I believe that CL and Scheme, which are both excellent dialects,
would be made worse (or completely unrecognizable) by introducing
fexprs into those languages. But while they are self-consistent
and occupy "sweet spots" in the design space, I don't think that
they occupy the *ONLY* sweet spots in the design space, and I do
think that fexprs ought to be revisited as a valid choice by the
experimenters.

> I think these
> changes were similarly motivated: to keep users from being caught in complex
> code-analysis halting problems, and I think they succeeded. A claim that
> they should be re-armed with foot-pointing weapons because users have gotten
> smarter and less spazz-prone requires some elaboration.

:-) I don't believe that users have gotten smarter
or less spazz-prone. But I generally think that tools
should NEVER prevent people from doing anything. Maybe
you can just think of me as someone who *Likes* foot-
pointing weapons.

Right now, I'm annoyed that I can't apply or funcall
"and" and "if" and "or" and "lambda" and other important
forms, some of them user-defined. I can't do these
things because an applicable or funcallable or first-
class form of these first-order entities would
necessarily be a fexpr (of some sort) and modern Lisps
don't have, nor want, fexprs.
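
(To illustrate: the best I can do in present-day CL is wrap such
operators in functions, and the wrapping visibly changes the
semantics. AND-FN below is a made-up name, not part of any standard.)

  (defun and-fn (&rest args)
    ;; Same truth table as AND, but no short-circuiting, and it
    ;; returns a boolean rather than the last argument's value.
    (every #'identity args))

  ;; (apply #'and-fn (list 1 2 3))  =>  T    ; APPLY works now...
  ;; (and nil (print 'boom))        =>  NIL  ; nothing printed
  ;; (and-fn nil (print 'boom))     =>  NIL  ; ...but BOOM prints,
  ;;   because every argument was evaluated before AND-FN was
  ;;   entered, which is exactly what a fexpr-style AND avoids.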

I think that there *is* a sweet-spot in the lispy design
space for a dialect where syntax (and all other routines)
are defined as fexprs rather than macros-and-functions.
The abolition of the division between macros and functions
is aesthetically tasty, in terms of producing a more
unified and seamless whole. If macros exist at all in
such a lisp, they would exist purely internally, derived
from fexpr definitions, as ways to do the equivalent of
function inlining. Other design decisions would have
to work with that, however, and those design decisions
would create a lisp so fundamentally different that it
could not be Common Lisp or Scheme, nor could it be a
successor to either of those languages.

> I think you are underestimating the degree to which you might thwart
> the ability to compile.

Well... no. I pretty much know that you would have to keep
a representation of the source code around and that any machine-
code vectors you do produce could be invalidated at the drop of
a hat by further execution of self-modifying code. I'm not
underestimating that. I might be underestimating the importance
of it, but I'm aware of the technical issues.

My reasoning is that the general public used Java 1.2 and found
its performance acceptable. They obviously don't *really* care
about compilation as such, nor execution speed. The fact that
they gripe about Lisp being slow has nothing to do with lisp
actually being slow; they just think it's alien and scary so
they spread long-outdated stories to give each other excuses
about why they don't like it. For most of them, it's a folklore
process, not a real technical requirement.

What I need to do, I think, is produce something that's not quite
as dog-slow as Java 1.2. Interpreted, compiled, or something in
the middle doesn't matter quite so much to me.

> Some of what compilers do is to prove by exhaustive analysis that various
> effects cannot happen, too. If you create a situation where there is a
> proliferation of operators with the potential to do changes that the compiler
> cannot know, such proofs can break. The burden is again on you to show
> they will not.

I think I regard that as being on a par with the burden that
static-typing fanatics always throw at Lispers' feet; to show
that type errors cannot happen at runtime and runtime type checks
will never be needed. It may be possible to prove true for
particular programs, and static typing and type inference are
good things for compiler optimizations. With declarations the
class of programs for which it is true may be expanded. But
just as modern Lisps retain the ability to emit code that does
runtime type checks and code that can encounter type errors at
runtime, I believe a fexpr-based Lisp would never be completely
free of the need for the capability to recompile functions on
the fly at runtime. Particular programs may be provably free
of this need (and may satisfy static type requirements too),
and the development tools should help you produce such programs
if that is your wish - but the language would need the ability
to do runtime partial recompilations, the same way other Lisps
need the ability to do runtime typechecks.

I don't think that's a showstopper, any more than most lispers
think runtime type checks are a showstopper. Perhaps I am a
maniac.

Bear

Ken Tilton

Jul 4, 2006, 11:28:33 PM

Ray Dillinger wrote:


> Kent M Pitman wrote:
>
> I believe that CL and Scheme, which are both excellent dialects,
> would be made worse (or completely unrecognizable) by introducing
> fexprs into those languages. But while they are self-consistent
> and occupy "sweet spots" in the design space, I don't think that
> they occupy the *ONLY* sweet spots in the design space, and I do
> think that fexprs ought to be revisited as a valid choice by the
> experimenters.
>
>> I think these changes were similarly motivated: to keep users from
>> being caught in complex code-analysis halting problems, and I think
>> they succeeded. A claim that they should be re-armed with foot-pointing
>> weapons because users have gotten smarter and less spazz-prone requires
>> some elaboration.
>
>
> :-) I don't believe that users have gotten smarter
> or less spazz-prone. But I generally think that tools
> should NEVER prevent people from doing anything. Maybe
> you can just think of me as someone who *Likes* foot-
> pointing weapons.

Certainly the Lisp spirit, but I have come down on the other side of
this question in re Cells, and I point to the KR House of Backdoors as
justification.

The general problem arises when the FPW also serves as a nice way to
duck careful thinking, providing a quick win at the cost of first a lost
toe and then for want of a toe the foot etc etc.

When the PyCells Summer of Code student (before even knowing much at all
about Cells) wanted to allow KR-esque backdoors, I encouraged him to
have fun, learn a lot, and change the name. Fortunately for the
Pythonistas, it looks as if at least the first release will stay closer
to its roots.

Anyway, I have no clue what you all are talking about, but my approach
to Cells FPWs is "show me the use case". If it cannot be handled without
an FPW, then Cells gets extended with a TFW that achieves the same.

kenneth

--
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
-- Smiling husband to scowling wife, New Yorker cartoon

Kent M Pitman

Jul 5, 2006, 12:19:10 AM

Ray Dillinger <be...@sonic.net> writes:

> Contentions first, arguments later. I'm being didactic,
> like people who present abstracts before the body of their
> papers. :-) Although, as you mentioned when you actually
> got to the arguments, the arguments are weak in the absence
> of a functioning fexpr-based lisp.

Ok. I'll try to indulge the old "willful suspension of disbelief"
until I see the examples. I don't actually mean to insist you can't
be right or to sound overly defensive. If my paper got you thinking
and you ended up concluding new things from it, I've done my job
anyway. But I do happen to believe what was in the paper at least
until I've seen something else better demonstrated. :)

> One thing only, which is to produce a working system that isn't
> agonizingly dog-slow. As you note, I haven't done it yet.

Well, slow is ok if it's "amenable to compilation". I don't mean
you have to show a compiler, just show that in principle it's not
an obvious barrier to being compiled.

> Every good Lisp dialect is a collection of design decisions that
> *WORK* together. Scheme's hygienic macrology works better with
> its Lisp-1 semantics than defmacro, but would not serve CL as well.
> CL's lisp-2 semantics work better with its defmacro, but would
> not serve as gracefully as lisp-1 semantics for the kind of
> functional programming that Scheme excels at. And so on.
> It would be counterproductive to mess up existing dialects by
> introducing features not suited for them and not consistent with
> the rest of their design decisions.

We're in total sync on this whole paragraph.

> I believe that CL and Scheme, which are both excellent dialects,
> would be made worse (or completely unrecognizable) by introducing
> fexprs into those languages.

I could possibly be convinced that with a reflective, extensible
formal semantics, Scheme might be possible to integrate this ok. In
CL, rather than toying with semantics, I could imagine some sort of
variation on protocols (a la MOP) could do it in an integrated
fashion. But I have nothing to offer offhand and am not sure it's
ultimately needed (an inherently subjective judgment, of course).

While you're mining ideas for these languages, I hope you've perused
my http://www.nhplace.com/kent/Half-Baked/ pages. Given that you're
looking at namespaces and hygiene, and even if you ignore all of the
detail of what's there, I'd hate to waste the chance to get someone
else thinking about that idea...

> But while they are self-consistent
> and occupy "sweet spots" in the design space, I don't think that
> they occupy the *ONLY* sweet spots in the design space, and I do
> think that fexprs ought to be revisited as a valid choice by the
> experimenters.

Yes, I'm in agreement with this. I hope the entire community doesn't
spend all its energy (as I sometimes think the Scheme community does)
reimplementing the same or similar languages... but yet, I don't think
things should stagnate either.

In upcoming months, I'm hoping to be offering a couple of unrelated
proposals for how to address such issues myself. I'm not quite ready
to roll that out yet. But I'm actively working on some ideas again
lately. I wouldn't want to be caught appearing to be hypocritical and
saying others shouldn't do the same. Having good strong stable
languages is good. Figuring out how to experiment and extend without
disturbing that is the real art.

> > I think these changes were similarly motivated: to keep users from
> > being caught in complex code-analysis halting problems, and I
> > think they succeeded. A claim that they should be re-armed with
> > foot-pointing weapons because users have gotten smarter and less
> > spazz-prone requires some elaboration.
>
> :-) I don't believe that users have gotten smarter
> or less spazz-prone. But I generally think that tools
> should NEVER prevent people from doing anything. Maybe
> you can just think of me as someone who *Likes* foot-
> pointing weapons.

FWIW, this was the Lisp Machine's philosophy that led to offering
&QUOTE. They had to use it to bootstrap their system, and they didn't
want to deny any power to users that they used themselves. It was
part of the whole open source philosophy. I think it was a noble
sentiment, but I also think that the problem with open source is that
implementors cannot know what users are depending on and what they're
not.

> Right now, I'm annoyed that I can't apply or funcall
> "and" and "if" and "or" and "lambda" and other important
> forms, some of them user-defined. I can't do these
> things because an applicable or funcallable or first-
> class form of these first-order entities would
> necessarily be a fexpr (of some sort) and modern Lisps
> don't have, nor want, fexprs.

This specific problem may have solutions that are short of fexprs.
But again I'll wait to see the theatrical rollout of your movie.
It's hard to tell what's going on from the trailers.
(As I'm always muttering in the theatre when I see trailers these
days: "Coming in August: See it again in the right order.")

> I think that there *is* a sweet-spot in the lispy design
> space for a dialect where syntax (and all other routines)
> are defined as fexprs rather than macros-and-functions.

I wrote a dialect that had first class fexprs once. It wasn't
compilable though, as far as I know.

> The abolition of the division between macros and functions
> is aesthetically tasty, in terms of producing a more
> unified and seamless whole. If macros exist at all in
> such a lisp, they would exist purely internally, derived
> from fexpr definitions, as ways to do the equivalent of
> function inlining. Other design decisions would have
> to work with that, however, and those design decisions
> would create a lisp so fundamentally different that it
> could not be Common Lisp or Scheme, nor could it be a
> successor to either of those languages.

Linear ordering of dialects is not a concern, so don't let that hold
you back.

As long as you're free of compatibility constraints, it certainly
gives you more license to think.

There is a downside to Lisp if a popular interpreted-only or
non-compilable dialect appears, though, in that it may spread
misconceptions anew. But it's a bit early to accuse you of that.
First you have to win on the popular part. :)

> > I think you are underestimating the degree to which you might
> > thwart the ability to compile.
>
> Well... no. I pretty much know that you would have to keep
> a representation of the source code around and that any machine-
> code vectors you do produce could be invalidated at the drop of
> a hat by further execution of self-modifying code. I'm not
> underestimating that. I might be underestimating the importance
> of it, but I'm aware of the technical issues.
>
> My reasoning is that the general public used Java 1.2 and found
> its performance acceptable. They obviously don't *really* care
> about compilation as such, nor execution speed. The fact that
> they gripe about Lisp being slow has nothing to do with lisp
> actually being slow; they just think it's alien and scary so
> they spread long-outdated stories to give each other excuses
> about why they don't like it. For most of them, it's a folklore
> process, not a real technical requirement.
>
> What I need to do, I think, is produce something that's not quite
> as dog-slow as Java 1.2. Interpreted, compiled, or something in
> the middle doesn't matter quite so much to me.

Well, if there are questions you're up against and you want to drop me
private mail, I'm not above offering hints. I don't have it in for
you to fail. I just can't figure out what you're doing enough to
offer much more than caveats for you to avoid so you don't redo
mistakes we already lived. You should, as much as possible, make new
mistakes so we can learn new things to avoid.



> > Some of what compilers do is to prove by exhaustive analysis that
> > various effects cannot happen, too. If you create a situation
> > where there is a proliferation of operators with the potential to
> > do changes that the compiler cannot know, such proofs can break.
> > The burden is again on you to show they will not.
>
> I think I regard that as being on a par with the burden that
> static-typing fanatics always throw at Lispers' feet; to show
> that type errors cannot happen at runtime and runtime type checks
> will never be needed. It may be possible to prove true for
> particular programs, and static typing and type inference are
> good things for compiler optimizations. With declarations the
> class of programs for which it is true may be expanded. But
> just as modern Lisps retain the ability to emit code that does
> runtime type checks and code that can encounter type errors at
> runtime, I believe a fexpr-based Lisp would never be completely
> free of the need for the capability to recompile functions on
> the fly at runtime. Particular programs may be provably free
> of this need (and may satisfy static type requirements too),
> and the development tools should help you produce such programs
> if that is your wish - but the language would need the ability
> to do runtime partial recompilations, the same way other Lisps
> need the ability to do runtime typechecks.

I understand what you're getting at. I just lived the nightmare of
Maclisp being uncompilable in places. Of course, then, we didn't yet
have an evolved model in which it mattered that, interpreted or compiled,
it had the same semantics. So the consequence was that as it shifted
from interpreted to compiled, variables went from special to almost
lexical. That was really obvious. So you could tell what parts of
your program were running interpreted and what weren't. THAT is
something that has legitimately changed, and given that JIT technology
is not a license to dynamically change program semantics, it's
possible there are some other unexplored choice points.

> I don't think that's a showstopper, any more than most lispers
> think runtime type checks are a showstopper. Perhaps I am a
> maniac.

This is not something that I think will hold you back, so I won't fuss
about this. Energy is good, if you can harness it. I'm always
saying, for example, that good programmers are often borderline
afflicted with OCD (obsessive compulsive disorder). If you can
harness that, it's a mass of usefully directable energy...

Anyway, what you say doesn't sound like it must be crazy. Just like
it might be. But it's your life energy and your time on earth to
decide how to spend ... and what would life be without a bit of
passion?

Pascal Costanza

Jul 8, 2006, 9:06:46 AM

Ray Dillinger wrote:
>
> Reflections on a classic Lisp Paper
>

You should definitely read about 3-Lisp. The paper "Control-related
meta-level facilities in LISP" by Jim des Rivieres gives an excellent
overview. It's published in the book "Meta-Level Architectures and
Reflection" by Pattie Maes and Daniele Nardi. It's hard to find, but
definitely worth the effort.


Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

shr...@gmail.com

Jul 9, 2006, 9:24:34 PM

Ray Dillinger wrote:

> Third, the exact formulation of fexprs and nlambdas
> implemented in the systems of 1980 is not the only
> possible formulation. There is no inherent problem
> with a fexpr or nlambda form that gets a local
> variable bound to the lexical environment of the call
> site when it's called. Such fexpr's or nlambda's
> would then have the same ability as macros to do
> things in the local scope of the call site, without
> the risk of unintentional capture.

I encourage people interested in the above point to look at John
Shutt's work, which addresses this quite elegantly:

http://web.cs.wpi.edu/~jshutt/kernel.html

Shriram

Kent M Pitman

Jul 10, 2006, 12:59:49 AM

[ Replying to comp.lang.lisp only.
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

I asked Will in private e-mail what the bottom line summary was on
this, since he offered neither a URL nor a hint as to what it was
about. Here is the relevant part of his reply:

| The abstract said it all. I will offer you a couple
| of paraphrases, however, which you may cite as you
| wish. [...]
|
| Paraphrase 1: In the lambda calculus plus a simple
| FEXPR-like feature, the *only* semantics-preserving
| syntax-directed translation from the language to
| itself is the identity function.
|
| Paraphrase 2: You can't write a compiler for the
| language unless it's a whole-program compiler that
| can see the entire context and all possible uses
| of any program phrase it attempts to compile.
|
| Paraphrase 3: FEXPRs screw compilation.

Ray Dillinger

Jul 10, 2006, 2:46:14 PM

Kent M Pitman wrote:
> [ Replying to comp.lang.lisp only.
> http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

> I asked Will in private e-mail what the bottom line summary was on
> this, since he offered neither a URL nor a hint as to what it was
> about. Here is the relevant part of his reply:
>
> | The abstract said it all. I will offer you a couple
> | of paraphrases, however, which you may cite as you
> | wish. [...]
> |
> | Paraphrase 1: In the lambda calculus plus a simple
> | FEXPR-like feature, the *only* semantics-preserving
> | syntax-directed translation from the language to
> | itself is the identity function.
> |
> | Paraphrase 2: You can't write a compiler for the
> | language unless it's a whole-program compiler that
> | can see the entire context and all possible uses
> | of any program phrase it attempts to compile.
> |
> | Paraphrase 3: FEXPRs screw compilation.

Yep, that was my conclusion too. Glad to see it validated by
an academic whose paper went through rigorous review.

The "Compilation" I envision doesn't (and can't, unless it's a
whole-program compilation) throw away any source text. It can
calculate machine-code vectors for anticipated uses and register
the types of uses or events that will invalidate those machine-
code vectors, meaning you'd have to recompile that bit.

Such a lisp during separate compilation *cannot* rely on the
ability to get rid of source during compilation while preserving
semantics. It can only make assumptions, produce machine code
on the basis of those assumptions, and register machine code based
on those assumptions for invalidation (and runtime replacement)
if those assumptions should ever be violated.
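
Concretely, the bookkeeping is nothing deeper than a pair of tables
like these (a sketch; every name is made up):

  ;; KEY identifies a compiled unit (say, a function name); the
  ;; DEPENDS-ON list names the definitions its machine code assumed.
  ;; Redefining any of those flushes the cached code, and execution
  ;; falls back to the retained source until it is recompiled.
  (defvar *compiled* (make-hash-table :test #'equal))  ; key -> code
  (defvar *dependents* (make-hash-table))              ; name -> keys

  (defun register-compiled (key code depends-on)
    (setf (gethash key *compiled*) code)
    (dolist (name depends-on)
      (push key (gethash name *dependents*))))

  (defun invalidate (name)
    "Flush every compiled entry whose assumptions mention NAME."
    (dolist (key (gethash name *dependents*))
      (remhash key *compiled*))
    (remhash name *dependents*))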

In a whole-program compilation, where you can say things rigorously
(for example, you can prove that there are no functions in the system
which can possibly store or return a non-simple function) then you
can get rid of source and get rid of the compiler in the image. But
short of that, you can't.

You've been using the word "compile" without saying exactly what
you mean by it. The only sense in which I've intended to "compile"
this dialect is in the sense of adding machine-code vectors to it
to speed anticipated usages, and adding machinery to keep track of
when those machine-code vectors become invalid so you have to
calculate new ones. If you claim this is "not compilation" then
a dialect with fexpr-based semantics "cannot be compiled."

But I claim that the class of programs which "cannot be compiled"
for whatever definition of "compilation" applies to other lisp
dialects is exactly the class of programs which "cannot be expressed"
in a compiled dialect with no first-class fexprs. In other words,
if I treat the extended dialect in the same way we treat common
lisp, I never return non-simple functions from other functions or
store them in structures nor create them at runtime nor apply/
funcall them, etc -- then compilation proceeds exactly as it
does in CL, where one does not do these things with macros
anyway.


Bear

Kent M Pitman

Jul 10, 2006, 3:26:26 PM

Ray Dillinger <be...@sonic.net> writes:

> You've been using the word "compile" without saying exactly what
> you mean by it. The only sense in which I've intended to "compile"
> this dialect is in the sense of adding machine-code vectors to it
> to speed anticipated usages, and adding machinery to keep track of
> when those machine-code vectors become invalid so you have to
> calculate new ones. If you claim this is "not compilation" then
> a dialect with fexpr-based semantics "cannot be compiled."

I construe the term in a reasonably loose way since it means so
many things to so many people.

In my own usage, translate usually means source-to-source and
compile usually means high-level-source to low-level-source, but
little more than that. I'm influenced in my understanding by a
compiler Steele once made, called "cheapy", if I recall correctly,
where it generated pseudocode slower than the original source. It was
important to me to come to an understanding that compilation does not
necessarily make things better--we just hope it does--rather like
compression. For still more cost, you can check the result and if
you've lost you can just do without. :)

In the context of this conversation, my use of compile, though, is
not even about that level of detail. I just meant that the compiler
needs the ability to poke at source code (usually by some form of
recursive descent, where it takes notes about preceding and outer
contours and applies them to manipulations of the later and inner
ones). The thing I'm worried about is that once inside a call to
a fexpr, you can't easily make even simple choices like "am I looking
at code or data", "will this be evaluated in the lexical environment
I expect or not", "will this be evaluated in the dynamic environment".

The paper that you refer to in the original post on this thread was
my second published paper, so I was pretty green when I presented it.
Someone asked me a question in the Q&A about why I didn't think an
approach such as yours would work, and I couldn't quite formulate
an answer. It was rather embarrassing. I fumbled around and said
that I'd thought about it hard and was pretty sure I was right but
couldn't remember why. Someone (for some reason I think it might
have been Joachim Laubsch, who I met later in other settings) came up
afterward and said they thought my conclusion was correct and that
I had probably been looking in my response to the question for the
phrase "halting problem". To this day, I believe that's the barrier
you are up against--it's not that you can't do analysis, it's that
if the analysis has to execute code in its mind to know what the
fexpr will do, then it had better hope the fexpr is written in a
super-clear way, or it had better limit itself in time or steps
and admit it just can't always win, or else it is vulnerable to
not halting under some circumstances...

> But I claim that the class of programs which "cannot be compiled"
> for whatever definition of "compilation" applies to other lisp
> dialects is exactly the class of programs which "cannot be expressed"
> in a compiled dialect with no first-class fexprs. In other words,
> if I treat the extended dialect in the same way we treat common
> lisp, I never return non-simple functions from other functions or
> store them in structures nor create them at runtime nor apply/
> funcall them, etc -- then compilation proceeds exactly as it
> does in CL, where one does not do these things with macros
> anyway.

I'm not sure I follow you here. But I'm in a hurry because I have
other things on my plate. I'll look again on the weekend if I have
time, or maybe the weekend after that. It's a busy time for me in
both my personal and work life.
