
Scheme macros


Jacek Generowicz

Feb 19, 2004, 10:44:12 AM
[Don't want to start a flamefest, so I'm not crossposting this to
c.l.s]

In _Lambda: the ultimate "little language"_, Olin Shivers suggests
(second paragraph of section 9) that " ... Scheme has the most
sophisticated macro system of any language in this family ...", where
'this family' is explicitly stated to include Common Lisp.

Now, I don't know much about the Scheme macro system ... my impression
is that the main difference from CL's macros lies in Scheme's
so-called hygiene, which I (mis?)understand to be a mechanism for
preventing variable capture (which in CL is considered to be a
feature, not a bug, of the macro system (particularly given that
variable capture is less "problematic" in a Lisp-N, where N>1)).
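
To make sure we're talking about the same thing, here is the kind of
capture I have in mind, sketched in standard Scheme (syntax-rules);
the names swap! and tmp are just for illustration:

(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))       ; binding introduced by the macro
       (set! a b)
       (set! b tmp)))))

(let ((tmp 1) (x 2))
  (swap! tmp x)           ; hygiene renames the macro's tmp,
  (list tmp x))           ; so this yields (2 1), not (1 1)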

My question is ... how should I interpret the claim that Scheme's
macro system is more "sophisticated" than CL's? Does it merely reflect
a preference for Scheme's "hygiene", or is there something else behind
the statement?

Barry Margolin

Feb 19, 2004, 11:54:14 AM
In article <tyfad3f...@pcepsft001.cern.ch>,
Jacek Generowicz <jacek.ge...@cern.ch> wrote:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

That may be part of it. He could also be referring to Scheme's
pattern-based macros using syntax-case.

--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***

Paul Dietz

Feb 19, 2004, 12:07:57 PM
Jacek Generowicz wrote:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

One definition of sophisticated: 'lacking natural simplicity'.

Seems right to me.

Paul

Anton van Straaten

Feb 20, 2004, 1:43:07 AM

It's not only hygiene. Like Barry, I would assume that Olin was talking
about the syntax-case macro system:
http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?Syntax-Case
http://www.scheme.com/syntax-case/
http://www.scheme.com/tspl/syntax.html#./syntax:h3

Syntax-case is a rich system that's hard to summarize briefly, but I'll take
a stab at it. I've identified four key features below:

* Although syntax-case is a hygienic macro system, it allows you to
selectively break hygiene. This allows it to do the same kinds of things
defmacro can do, but from the opposite direction: you need to take special
action to break hygiene, rather than having to take action to preserve
hygiene. All things being equal, the syntax-case approach ought to be
preferable; but defmacro fans will tell you that all things aren't equal,
that syntax-case pays a price in terms of complexity.
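
To make that concrete, here is a minimal sketch of deliberately breaking
hygiene: an anaphoric if that binds the identifier 'it' in the caller's
code. I'm using the portable system's datum->syntax-object; some
implementations spell this differently:

(define-syntax aif
  (lambda (x)
    (syntax-case x ()
      ((k test then else)
       ;; build an 'it' that behaves as if the caller had written it
       (with-syntax ((it (datum->syntax-object (syntax k) 'it)))
         (syntax (let ((it test))
                   (if it then else))))))))

(aif (assq 'b '((a 1) (b 2)))
     (cdr it)         ; 'it' is visible here, by design
     'not-found)      ; => (2)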

* Syntax-case supports pattern-matching of syntax, but also supports use of
procedural code, as defmacro does.

* With syntax-case, uses of pattern variables or macro variables don't have
to be escaped in the same way as they do with defmacro, i.e. you get rid of
all the quasiquote, unquote, etc., along with the need to think about that
issue when writing macros. In a 9-line example on c.l.s. the other day[*],
a hygienic pattern-matching macro eliminated "two gensyms, one quasiquote,
six unquotes, and one unquote-splicing." The only time you really need to
think about issues related to escaping of variables is when breaking
hygiene, which is the exception rather than the rule.
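
A deliberately tiny contrast (or2 is hypothetical; define-macro and
gensym are the non-standard defmacro equivalents found in most Schemes):

;; defmacro style: quasiquote, unquotes, and an explicit gensym
(define-macro (or2 a b)
  (let ((t (gensym)))
    `(let ((,t ,a))
       (if ,t ,t ,b))))

;; syntax-rules style: the same macro with nothing escaped at all
(define-syntax or2
  (syntax-rules ()
    ((_ a b)
     (let ((t a))
       (if t t b)))))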

* Syntax-case represents syntax using syntax objects with their own
specialized structure, rather than using ordinary lists, as defmacro does.
This is something of a tradeoff, which can be thought of as analogous to the
tradeoff that can arise in almost any Lisp program, when choosing between
implementing a data structure using only lists, or implementing it using
some kind of structure mechanism (defstruct, CLOS). The flexibility of
lists still has some advantages in some situations, but I don't know of
anyone who uses lists exclusively in real programs today. It can be argued
that a similar situation applies to macros, especially more complex ones:
using nothing but lists to manipulate syntax leads to exactly the same sort
of problems and limitations that you have when using nothing but lists to
manipulate an ordinary program's data; but using non-list structures also
loses some simplicity.

My own feeling is that it's useful to have access to more than one macro
system. They all have a different flavor and different pros and cons.
For example, although syntax-rules is the most restricted of these systems,
it makes for cleaner and simpler macros than defmacro, in many cases where a
purely hygienic macro is wanted. Happily, Lisp being the ultra-flexible
language that it is, anyone who wants to play with syntax-rules in CL can
take a look at:
http://www.ccs.neu.edu/home/dorai/mbe/mbe-lsp.html

Anton

[*]
http://groups.google.com/groups?selm=f0wWb.458%24hm4.393%40newsread3.news.atl.earthlink.net


Pascal Costanza

Feb 20, 2004, 5:41:13 AM

Anton van Straaten wrote:

> It's not only hygiene. Like Barry, I would assume that Olin was talking
> about the syntax-case macro system:
> http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?Syntax-Case
> http://www.scheme.com/syntax-case/
> http://www.scheme.com/tspl/syntax.html#./syntax:h3

Thanks for these links.

> Syntax-case is a rich system that's hard to summarize briefly, but I'll take
> a stab at it. I've identified four key features below:
>
> * Although syntax-case is a hygienic macro system, it allows you to
> selectively break hygiene. This allows it to do the same kinds of things
> defmacro can do, but from the opposite direction: you need to take special
> action to break hygiene, rather than having to take action to preserve
> hygiene. All things being equal, the syntax-case approach ought to be
> preferable; but defmacro fans will tell you that all things aren't equal,
> that syntax-case pays a price in terms of complexity.

Hm, not sure. I think the main issue is perhaps somewhat different.

To me, the conceptual simplicity of CL-style macros is striking: It's
just a transformation of s-expressions. That's it. Once understood, it's
clear that you can do anything with this conceptual model.

Here is an analogy:

In Scheme, you have continuations and space-safe tail recursion. This
guarantees that you can model any kind of iteration construct that you
might need with some guaranteed properties. In Common Lisp you have the
LOOP macro which covers the presumably important cases.

To me, things like syntax-rules and syntax-case look to macro
programming like the LOOP macro looks to iteration. Maybe they really
cover the important cases, but they seem hard to learn. And it
immediately makes me wonder whether it is really worth it. After all, I
know how to make things work with DEFMACRO.

Strange thing is: I like the LOOP macro. There's something mysterious
going on here...

BTW, what you really need to make something like DEFMACRO work is, on
top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
string->uninterned-symbol and most probably a Lisp-2.
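
For instance, here is the kind of thing I mean, sketched in a Scheme
that provides define-macro and gensym (both non-standard, but common):

(define-macro (while test . body)
  (let ((loop (gensym)))              ; fresh, uncapturable name
    `(let ,loop ()
       (if ,test
           (begin ,@body (,loop))))))

(let ((i 0))
  (while (< i 3) (set! i (+ i 1)))
  i)   ; => 3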

My feeling is that Common Lispniks would have an easier time to consider
using Scheme when appropriate if Scheme implementations would more
clearly document whether they support these features (except, of course,
for the Lisp-2 thing). It's important that you can create uninterned
symbols.

> * Syntax-case supports pattern-matching of syntax, but also supports use of
> procedural code, as defmacro does.
>
> * With syntax-case, uses of pattern variables or macro variables don't have
> to be escaped in the same way as they do with defmacro, i.e. you get rid of
> all the quasiquote, unquote, etc., along with the need to think about that
> issue when writing macros. In a 9-line example on c.l.s. the other day[*],
> a hygienic pattern-matching macro eliminated "two gensyms, one quasiquote,
> six unquotes, and one unquote-splicing." The only time you really need to
> think about issues related to escaping of variables is when breaking
> hygiene, which is the exception rather than the rule.

I have looked at this example, and the funny thing is that I immediately
start to wonder which elements of the macro definition refer to
macro-expansion-time entities and which refer to run-time stuff. I don't
have to think about these issues with quasiquotations because I
immediately see it from the code.

I am not saying that this makes syntax-case worse than quasiquotation.
Maybe I am just missing something.

> * Syntax-case represents syntax using syntax objects with their own
> specialized structure, rather than using ordinary lists, as defmacro does.
> This is something of a tradeoff, which can be thought of as analogous to the
> tradeoff that can arise in almost any Lisp program, when choosing between
> implementing a data structure using only lists, or implementing it using
> some kind of structure mechanism (defstruct, CLOS). The flexibility of
> lists still has some advantages in some situations, but I don't know of
> anyone who uses lists exclusively in real programs today. It can be argued
> that a similar situation applies to macros, especially more complex ones:
> using nothing but lists to manipulate syntax leads to exactly the same sort
> of problems and limitations that you have when using nothing but lists to
> manipulate an ordinary program's data; but using non-list structures also
> loses some simplicity.

Hm, I recall reading that syntax-case allows for recording line numbers
of the original expression. Are there more advantages?

> My own feeling is that it's useful to have access to more than one macro
> system. They all have a different flavor and different pros and cons.
> For example, although syntax-rules is the most restricted of these systems,
> it makes for cleaner and simpler macros than defmacro, in many cases where a
> purely hygienic macro is wanted. Happily, Lisp being the ultra-flexible
> language that it is, anyone who wants to play with syntax-rules in CL can
> take a look at:
> http://www.ccs.neu.edu/home/dorai/mbe/mbe-lsp.html

Having more options is certainly better. (Such a statement from a fan of
a supposedly minimal language?!? ;)

Anyway, thanks for your insightful posting. I appreciate your efforts to
bridge the gaps between the Scheme and Common Lisp communities,
especially because this isn't always easy.


Pascal

--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."

Tim Bradshaw

Feb 20, 2004, 7:59:55 AM
Jacek Generowicz <jacek.ge...@cern.ch> wrote in message news:<tyfad3f...@pcepsft001.cern.ch>...

>
> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

It requires more pages to describe it. Indeed, isn't the macro
section of the Scheme standard bigger than the rest of the standard
put together or something?

Joe Marshall

Feb 20, 2004, 9:42:29 AM
Jacek Generowicz <jacek.ge...@cern.ch> writes:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

You should interpret the claim based on the biases of the person
making the claim.

One way of looking at it is that the macro system performs a
source->source transformation. To do this you need a mechanism for
destructuring the original code that preserves the meaning of the
subforms that are not being transformed, a mechanism for transforming
the fragments, a mechanism for introducing new forms with a meaning
independent of the source context, and a mechanism for combining
these.

The Common Lisp macro system provides a hook and a small amount of
destructuring. The mechanisms for transforming the fragments and
combining the results are provided by the built-in list manipulation
facilities.

The rest of the work is a burden that the macro writer
must take on himself. The macro writer must avoid introducing
colliding identifiers so that the original forms, when re-injected,
retain their meaning. He must ensure that introduced gensyms are
correctly referred to in the expanded code so that the new forms are
independent of context. Admittedly, this is not a large burden to
bear for the power of macros.

The Scheme macro system provides a much richer destructuring facility,
and automates the entire process of renaming to preserve meaning. It
provides a richer set of base tools *for that particular problem*, so
in this way it is more sophisticated than the CL macro system.

However, the Scheme macro system does *not* provide much in the way of
transforming the program fragments or combining the fragments and new
code into a result. This is done via pattern matching and templates
that do not give you easy access to the underlying list structure. It
can be amazingly difficult to generate the patterns and templates
needed to do what would be simple list manipulation.
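
A small sketch of that pain, using the common string-tag idiom (the
macro itself is hypothetical): calling a procedure with its arguments
reversed is one call to reverse with lists, but with syntax-rules it
takes a recursive accumulator:

(define-syntax rev-apply
  (syntax-rules ()
    ((_ "walk" f (acc ...)) (f acc ...))
    ((_ "walk" f (acc ...) x rest ...)
     (rev-apply "walk" f (x acc ...) rest ...))
    ((_ f args ...) (rev-apply "walk" f () args ...))))

(rev-apply list 1 2 3)   ; => (3 2 1)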

The end result is that the Scheme macro system can be a pain in the
ass to understand and use. From what I've seen, I'm not the only one
in the Scheme community who finds the hygienic macro system
difficult. It seems that the burden of maintaining hygiene manually is
smaller than the burden introduced by maintaining it automatically.


Grzegorz Chrupała

Feb 20, 2004, 11:28:53 AM
Anton van Straaten wrote:


> It's not only hygiene. Like Barry, I would assume that Olin was talking
> about the syntax-case macro system:

Given that Olin's paper is about scsh/Scheme48, and given that AFAIK
Scheme48 does not provide syntax-case, this seems unlikely. Anyway, it's
not terribly difficult for a macro system to be more "sophisticated" (as
opposed to more general) than Common Lisp's defmacro.

--
Grzegorz Chrupała | http://pithekos.net | grze...@jabber.org
gosh -bE "begin (print \
(map (cut number->string <> 36) '(18 806564 1714020422))) (exit)"

Jens Axel Søgaard

Feb 20, 2004, 5:58:57 PM
Pascal Costanza wrote:
> Anton van Straaten wrote:

> My feeling is that Common Lispniks would have an easier time to consider
> using Scheme when appropriate if Scheme implementations would more
> clearly document whether they support these features (except, of course,
> for the Lisp-2 thing). It's important that you can create uninterned
> symbols.

I don't know any Scheme with defmacro that doesn't support gensym.
Note en passant that define-macro in DrScheme is implemented in terms
of syntax-case (one page of code AFAIR).

> I have looked at this example, and the funny thing is that I immediately
> start to wonder which elements of the macro definition refer to
> macro-expansion-time entities and which refer to run-time stuff. I don't
> have to think about these issues with quasiquotations because I
> immediately see it from the code.
>
> I am not saying that this makes syntax-case worse than quasiquotation.
> Maybe I am just missing something.

Colors. Colors and arrows. That's what you are missing:

<http://www.scheme.dk/macros-in-color.png>

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 6:41:06 PM

Jens Axel Søgaard wrote:

> Pascal Costanza wrote:
>
>> Anton van Straaten wrote:
>
>> My feeling is that Common Lispniks would have an easier time to
>> consider using Scheme when appropriate if Scheme implementations would
>> more clearly document whether they support these features (except, of
>> course, for the Lisp-2 thing). It's important that you can create
>> uninterned symbols.
>
> I don't know any Scheme with defmacro that doesn't support gensym.

I have read docs of some Schemes where I had the feeling that the
respectively provided gensym functions just tried hard to generate
unique strings, but didn't actually rely on object identity / symbol
identity to ensure uniqueness. Maybe I misunderstood those docs. But
that's the whole point of my argument.

In fact, I have just browsed through some of the Scheme docs in order to
double-check my impression. I have to admit it was easier this time to
track down the relevant sections. When a Scheme implements
string->uninterned-symbol, I am relatively happy. When it doesn't, but
it implements gensym, then I wonder how gensym is implemented. Some docs
say that gensym creates a unique symbol, some even state that it's
uninterned. What do the others do?
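
What I want the docs to promise is object identity, not just unique
printed names. A sketch (gensym and string->uninterned-symbol are
non-standard, so the names here are whatever a given implementation
provides):

(define g (string->uninterned-symbol "temp"))
(eq? g (string->symbol "temp"))   ; => #f, despite the same name
(symbol->string g)                ; => "temp"
(eq? (gensym) (gensym))           ; => #f, whatever gets printed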

So at least, I know what to look for by now. Still, I have the feeling
that things could be easier. For example, the Scheme standard could just
standardize string->uninterned-symbol, couldn't it?

> Note en passant that define-macro in DrScheme is implemented in terms
> of syntax-case (one page of code AFAIR).

...but this isn't a Turing equivalence argument, is it? ;)

>> I have looked at this example, and the funny thing is that I
>> immediately start to wonder which elements of the macro definition
>> refer to macro-expansion-time entities and which refer to run-time
>> stuff. I don't have to think about these issues with quasiquotations
>> because I immediately see it from the code.
>>
>> I am not saying that this makes syntax-case worse than quasiquotation.
>> Maybe I am just missing something.
>
> Colors. Colors and arrows. That's what you are missing:
>
> <http://www.scheme.dk/macros-in-color.png>

No, I am red/green colorblind.

Jens Axel Søgaard

Feb 20, 2004, 6:46:17 PM
Pascal Costanza wrote:

>> Colors. Colors and arrows. That's what you are missing:

>> <http://www.scheme.dk/macros-in-color.png>

> No, I am red/green colorblind.

You are in luck, nothing is colored red.

--
Jens Axel Søgaard

Jens Axel Søgaard

Feb 20, 2004, 6:50:53 PM
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:

>>> My feeling is that Common Lispniks would have an easier time to
>>> consider using Scheme when appropriate if Scheme implementations
>>> would more clearly document whether they support these features
>>> (except, of course, for the Lisp-2 thing). It's important that you
>>> can create uninterned symbols.

>> I don't know any Scheme with defmacro that doesn't support gensym.

> I have read docs of some Schemes where I had the feeling that the
> respectively provided gensym functions just tried hard to generate
> unique strings, but didn't actually rely on object identity / symbol
> identity to ensure uniqueness. Maybe I misunderstood those docs. But
> that's the whole point of my argument.

That's a valid concern. Documentation is worth nothing if it isn't
precise.

> In fact, I have just browsed through some of the Scheme docs in order to
> double-check my impression. I have to admit it was easier this time to
> track down the relevant sections. When a Scheme implements
> string->uninterned-symbol, I am relatively happy. When it doesn't, but
> it implements gensym, then I wonder how gensym is implemented. Some docs
> say that gensym creates a unique symbol, some even state that it's
> uninterned. What do the others do?
>
> So at least, I know what to look for by now. Still, I have the feeling
> that things could be easier. For example, the Scheme standard could just
> standardize string->uninterned-symbol, couldn't it?

I suppose so.

>> Note en passant that define-macro in DrScheme is implemented in terms
>> of syntax-case (one page of code AFAIR).

> ...but this isn't a Turing equivalence argument, is it? ;)

Hence the "en passant" :-)

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 7:17:31 PM

Jens Axel Søgaard wrote:

Red/green colorblindness doesn't "work" like that. For example, it's
really hard for me to parse any useful information from the coloring of
the (define-syntax for ...) form in the example you have linked to.
Unless my eyes are only a few inches away from the screen. It is only
called "red/green colorblindness" because of some biological details,
but the effect doesn't really have to do with the colors red and green
as such, at least not as we perceive them.

In general, it's a good idea to have a colorblind person check colors in
a program, presentation, etc., in order to ensure that they have the
desired effect on them, if it is really important. About 10% of the male
population are colorblind, so I don't think it's negligible.


Pascal

P.S.: Hey, this is still on topic! ;)

Pascal Costanza

Feb 20, 2004, 7:18:42 PM

Jens Axel Søgaard wrote:

>>> Note en passant that define-macro in DrScheme is implemented in terms
>>> of syntax-case (one page of code AFAIR).
>
>
>> ...but this isn't a Turing equivalence argument, is it? ;)
>
>
> Hence the "en passant" :-)

:-))

Jens Axel Søgaard

Feb 20, 2004, 7:39:20 PM
Pascal Costanza wrote:
>
> Jens Axel Søgaard wrote:
>
>> Pascal Costanza wrote:
>>
>>>> Colors. Colors and arrows. That's what you are missing:
>>
>>
>>>> <http://www.scheme.dk/macros-in-color.png>
>>
>>
>>> No, I am red/green colorblind.
>>
>>
>> You are in luck, nothing is colored red.
>
>
> Red/green colorblindness doesn't "work" like that. For example, it's
> really hard for me to parse any useful information from the coloring of
> the (define-syntax for ...) form in the example you have linked to.
> Unlesss my eyes are only a few inches away from the screen.

I apologize. I sincerely thought you were making a joke (a good one even).

> It is only
> called "red/green colorblindness" because of some biological details,
> but the effect doesn't really have to do with the colors red and green
> as such, at least not as we perceive them.

Does it affect the perception of other colors as well?
I mean, is there a problem with distinguishing, say, a mixture of 50% red and 50% blue,
from a mixture with 50% green and 50% blue?

> In general, it's a good idea to have a colorblind person check colors in
> a program, presentation, etc., in order to ensure that they have the
> desired effect on them, if it is really important. About 10% of the male
> population are colorblind, so I don't think it's negligible.

Which colors would you prefer? Hm. An option
to use alternative means of indicating the various categories,
say underline, bold, italic etc would be nice.

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 7:57:22 PM

Jens Axel Søgaard wrote:

>> Red/green colorblindness doesn't "work" like that. For example, it's
>> really hard for me to parse any useful information from the coloring
>> of the (define-syntax for ...) form in the example you have linked to.
>> Unless my eyes are only a few inches away from the screen.
>
> I apologize. I sincerely thought you were making a joke (a good one even).

No problem. Really. ;)

>> It is only called "red/green colorblindness" because of some
>> biological details, but the effect doesn't really have to do with the
>> colors red and green as such, at least not as we perceive them.
>
> Does it affect the perception of other colors as well?
> I mean, is there a problem with distinguishing, say, a mixture of 50% red
> and 50% blue, from a mixture with 50% green and 50% blue?

I can't describe the effects, because I really don't know the details.
I am only a victim. ;) Indeed, the mixed colors are the most problematic
ones.

>> In general, it's a good idea to have a colorblind person check colors
>> in a program, presentation, etc., in order to ensure that they have
>> the desired effect on them, if it is really important. About 10% of
>> the male population are colorblind, so I don't think it's negligible.
>
> Which colors would you prefer?

Colors like yellow, orange and blue are generally ok. But the best thing
in my experience is to sit down together with someone affected, explain
to him what you want to achieve, and then agree on some colors.

> Hm. An option
> to use alternative means of indicating the various categories,
> say underline, bold, italic etc would be nice.

Yes, that's also a good idea. Bold and italic can have negative
consequences for the layout, though. Shades of grey are usually also
very good when you don't use too many. This also helps in getting usable
printouts on b/w printers. (This is also a useful rule of thumb: If you
can still distinguish the colors when converted to grey scale, then it's
likely that colorblind people can also distinguish them.)


Pascal

Kaz Kylheku

Feb 21, 2004, 4:00:24 AM
Jacek Generowicz <jacek.ge...@cern.ch> wrote in message news:<tyfad3f...@pcepsft001.cern.ch>...
> [Don't want to start a flamefest, so I'm not crossposting this to
> c.l.s]
>
> In _Lambda: the ultimate "little language"_, Olin Shivers suggests
> (second paragraph of section 9) that " ... Scheme has the most
> sophisticated macro system of any language in this family ...", where
> 'this family' is explicitly stated to include Common Lisp.

This just illustrates the difference between what Schemers tend to
find important or interesting and what Lispers tend to find important
or interesting.

Scheme comes with a sophisticated macro system. Lisp comes with
sophisticated macros.

Given a Lisp system, I can show ready-made macros like LOOP to Lisp
newbies. In a five minute demo, I can expose how an expressive little
language that brings a clear benefit to everyday programming tasks is
implemented by source-to-source transformations on lists.

A slick way to write macros is important if you want to showcase your
macro-writing system to people who already understand macro-writing
systems and their surrounding issues. It's not that imporant if you
want to get a software engineering task done, because the cost of
writing a macro is amortized over all of its uses. Given that
amortization, maintaining a nested backquote stuffed with MAPCAR calls
and whatnot isn't such a big deal. It's not important what the
expander of a macro looks like, and quite often it's not even
important what the expansion looks like as long as it is correct.

The Lisp way of writing macros is powerful and general, because you
are writing code, rather than expressing your intent in a tightly
constrained pattern-matching language. If you want some ad-hoc
irregular syntax, you can just write code to handle it. If you
deliberately want non-hygiene, that is easy. If you want to break open
a macro argument that is a string literal and parse a language
embedded in that, no problem, just write the code. If you want to
treat symbols as name-equivalent so :FOR, #:FOR and MYPACKAGE::FOR
denote the same thing, just write the code. And so forth.

Can someone show me the syntax-rules implementation of something that
closely resembles the CL LOOP macro, with all the clauses? Surely,
this should be a piece of cake using the ``most sophisticated macro
system''. What does syntax-rules bring to the table when you want to
write something like this?

How about the FORMATTER macro? What would be the syntax-rules for
that?

pete kirkham

Feb 21, 2004, 6:10:42 AM
Pascal Costanza wrote:
> Shades of grey are usually also
> very good when you don't use too many. This also helps in getting usable
> printouts on b/w printers. (This is also a useful rule of thumb: If you
> can still distinguish the colors when converted to grey scale, then it's
> likely that colorblind people can also distinguish them.)

Though some kinds of dyslexics find luminosity variations confusing (I
do, though more for backgrounds than foregrounds), so it's better
to have both colour and grey-scale default modes, and the option for
the user to configure these.

One of the projects I'm on uses background colour to differentiate the
status of input fields (unset, entered, inherited from parent,
calculated from child, etc.), and one of the users is colour blind. He
can't tell the difference between the fields with my settings, and I
can't read the entries with his.


Pete
(who ended up studying electronics as he was the only non-colour blind
member of his family, so had to help wire all the plugs as a kid)

Anton van Straaten

Feb 21, 2004, 8:57:30 PM
Pascal Costanza wrote:
> > Syntax-case is a rich system that's hard to summarize briefly, but I'll take
> > a stab at it. I've identified four key features below:
> >
> > * Although syntax-case is a hygienic macro system, it allows you to
> > selectively break hygiene. This allows it to do the same kinds of things
> > defmacro can do, but from the opposite direction: you need to take special
> > action to break hygiene, rather than having to take action to preserve
> > hygiene. All things being equal, the syntax-case approach ought to be
> > preferable; but defmacro fans will tell you that all things aren't equal,
> > that syntax-case pays a price in terms of complexity.
>
> Hm, not sure. I think the main issue is perhaps somewhat different.
>
> To me, the conceptual simplicity of CL-style macros is striking: It's
> just a transformation of s-expressions. That's it.

That's what all of these macro systems are.

> Once understood, it's clear that you can do anything
> with this conceptual model.

The same is true of syntax-case.

> To me, things like syntax-rules and syntax-case look to macro
> programming like the LOOP macro looks to iteration. Maybe they really
> cover the important cases, but they seem hard to learn.

Syntax-rules is not hard to learn. If anything, it suffers from being
almost too simple; as well as from lacking good, short introductory
material. You specify a pattern, and specify the syntax that should replace
that pattern. That's all there is to it.

Syntax-case is more complex, and I do think that's a drawback when compared
to defmacro. It increases the temptation to conclude the following:

> And it immediately makes me wonder whether it is really worth it.
> After all, I know how to make things work with DEFMACRO.

I might wonder something similar if I were a Python programmer looking at
Lisp: Lisp seems hard to learn, and I would know how to make things work
with Python.

> BTW, what you really need to make something like DEFMACRO work is, on
> top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
> string->uninterned-symbol and most probably a Lisp-2.

I don't see that Lisp-2 is an issue. As Jens pointed out, all the major
Schemes implement defmacro & gensym, and it works fine. And, of course,
quasiquotation is standardized by R5RS.

> My feeling is that Common Lispniks would have an easier time to consider
> using Scheme when appropriate if Scheme implementations would more
> clearly document whether they support these features (except, of course,
> for the Lisp-2 thing). It's important that you can create uninterned
> symbols.

I wouldn't expect many Common Lispniks to use Scheme when appropriate.
Unless they already like Scheme, why should they? And if they already
like Scheme, they're unlikely to be put off by such vague concerns as the
above. For the record, I'm not aware of any limitations on the defmacro
capabilities in the many Schemes which implement it.

> I have looked at this example, and the funny thing is that I immediately
> start to wonder which elements of the macro definition refer to
> macro-expansion-time entities and which refer to run-time stuff. I don't
> have to think about these issues with quasiquotations because I
> immediately see it from the code.

You don't have to think about these issues with syntax-rules - you're only
used to thinking about them because you're forced to, with defmacro.

In terms of name scoping, syntax-rules macros work exactly the same way name
scoping does with lambda. If you're comfortable with the way name scoping
works with lambda, you should have no problems with syntax-rules.

To back that up: in a given lambda expression, all names used are defined in
one of the following places:

* the lambda formals list;
* within the lambda body;
* in a lexically enclosing scope;
* in the dynamic environment.

In a syntax-rules macro, all names used are defined in:

* the syntax-rules pattern;
* within the syntax-rules body;
* in a lexically enclosing scope;
* in the dynamic environment.

There's no more need to quote or unquote names in a syntax-rules macro, than
there is in an ordinary lambda expression.
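
A throwaway sketch of that analogy (my-or2 is hypothetical):

(define-syntax my-or2
  (syntax-rules ()
    ((_ e1 e2)
     (let ((t e1))     ; t is local to the template, like a lambda formal
       (if t t e2)))))

(let ((t 'outer))
  (my-or2 #f t))       ; => outer -- the caller's t, not the template's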

The reason you have to worry about quoting and unquoting of names in
defmacro is because you need to control whether or not you capture names
from the invoking environment. With pure hygienic macros like syntax-rules,
you simply can't do that, so it's not an issue.

Of course, quoting in defmacro is not just about names: it's also about
syntax. The reason syntax doesn't have to be quoted in syntax-rules macros
is that they rely on pattern matching and template replacement, not
procedural code. No procedural code means that except for macro variables
(addressed above), and special tokens like "...", everything in a macro
template is syntax at the same level, so there's no need to quote it.

The result is that it's actually easier to reason about syntax-rules
macros - which makes them easier to write, and easier to read. As a result,
and also because of the enforced hygiene, they're less error-prone.

Note that I'm not claiming syntax-rules macros are appropriate in all
cases - obviously, where you want to be able to break hygiene, they're not a
good solution (although it's possible to "break hygiene" with syntax-rules,
by redefining binding operators like lambda and let [*]).

> I am not saying that this makes syntax-case worse than quasiquotation.
> Maybe I am just missing something.

I don't think it's possible to compare without actually learning to use both
systems. If you're interested in becoming a bit more familiar, I'd
recommend starting with syntax-rules, which is very simple and easy to get
up to speed with. It also has the benefit that you could use it from within
CL via Dorai Sitaram's package.

> Hm, I recall reading that syntax-case allows for recording line numbers
> of the original expression. Are there more advantages?

Using domain-specific structures instead of lists to represent syntax means
that you can associate any information you want to with source code. As
Jens pointed out, PLT uses this to good effect. I think there's an argument
that this is an obvious way forward for Lisp-like languages - things like
refactoring tools and any other kinds of automated code processing can
benefit from it.

> Having more options is certainly better. (Such a statement from a fan of
> a supposedly minimal language?!? ;)

I think options are required here because none of these systems are perfect
in all respects. But the core language is still minimal - these macro
systems are all surface features, for syntax transformation. The portable
syntax-case system can be implemented in almost any standard Scheme, and
both syntax-rules and defmacro are easy to implement in terms of
syntax-case. You can view syntax-rules and defmacro as applications of
syntax-case. The reverse is not the case.

> Anyway, thanks for your insightful posting. I appreciate your efforts to
> bridge the gaps between the Scheme and Common Lisp communities,
> especially because this isn't always easy.

Thanks. I'm more interested in the underlying ideas, than in language
advocacy. Systems for transforming s-expressions are relevant to all Lisp
dialects.

Anton

[*]
http://groups.google.com/groups?selm=7eb8ac3e.0203271253.74bb0819%40posting.google.com


Anton van Straaten

Feb 21, 2004, 11:16:47 PM
Pascal Costanza wrote:
> I have read docs of some Schemes where I had the feeling that the
> respectively provided gensym functions just tried hard to generate
> unique strings, but didn't actually rely on object identity / symbol
> identity to ensure uniqueness. Maybe I misunderstood those docs.
> But that's the whole point of my argument.

It seems to me that you're exercising unwarranted suspicion. Do you have
any reason to believe gensym won't work the way it should? I suspect the
authors of whichever docs you're referring to may not have bothered to
specifically address the details you're concerned about, simply because it
wouldn't make sense to implement a gensym that doesn't work reliably in all
cases, and it seems almost too obvious to mention that.

> In fact, I have just browsed through some of the Scheme docs in order to
> double-check my impression. I have to admit it was easier this time to
> track down the relevant sections. When a Scheme implements
> string->uninterned-symbol, I am relatively happy. When it doesn't, but
> it implements gensym, then I wonder how gensym is implemented. Some docs
> say that gensym creates a unique symbol, some even state that it's
> uninterned. What do the others do?

You're worrying about internal implementation details, which only matter in
two cases: if the observed behavior is incorrect (which afaik, is not the
case); or if you have some particular interest in implementation details.
In the latter case, you shouldn't expect the documentation that tells you
how to use the language, to give all the specifics of the internals.
However, many Schemes do have papers available which go into some detail
about their internals.

> So at least, I know what to look for by now. Still, I have the feeling
> that things could be easier.

Do you mean that it could be easier for a CL user to find information that
satisfies him that something in Scheme has the same observable semantics as
it does in CL? I could make the same claim about CL. One way to address
that would be "CL for Scheme users" and "Scheme for CL users" (I think I've
seen at least one of those somewhere.)

> For example, the Scheme standard could just
> standardize string->uninterned-symbol, couldn't it?

You could submit a SRFI for it. But are you suggesting standardizing
string->uninterned-symbol simply to force a particular implementation style
for gensym? If so, why is that necessary?

> > Note en passant that define-macro in DrScheme is implemented in terms
> > of syntax-case (one page of code AFAIR).
>
> ...but this isn't a Turing equivalence argument, is it? ;)

No, but it does say something about the generality and power of syntax-case.
The PLT defmacro implementation is about 110 lines in total, much of which
is handling exceptions and two syntax variations.

Anton


Joe Marshall

Feb 22, 2004, 12:38:55 PM
k...@ashi.footprints.net (Kaz Kylheku) writes:

> Can someone show me the syntax-rules implementation of something that
> closely resembles the CL LOOP macro, with all the clauses? Surely,
> this should be a piece of cake using the ``most sophisticated macro
> system''.

See http://okmij.org/ftp/Scheme/macros.html

Oleg has written a compiler that transforms Scheme code to the
appropriate pattern-matching rules.

> What does syntax-rules bring to the table when you want to write
> something like this?

Common Lisp has some restrictions to ensure that macros can work
correctly. For instance, you are not permitted to shadow identifiers
in the CL package. Syntax-rules does not have this restriction.
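
A sketch of what that permits (my-unless is hypothetical): a local
rebinding of IF doesn't break a macro defined outside it, because the
macro's template refers to the IF visible at its definition site:

(define-syntax my-unless
  (syntax-rules ()
    ((_ test expr) (if test #f expr))))

(let-syntax ((if (syntax-rules ()
                   ((_ . anything) 'shadowed))))
  (my-unless #f 'ran))   ; => ran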


--
~jrm

Pascal Costanza

Feb 22, 2004, 2:19:10 PM
Anton van Straaten wrote:

> Pascal Costanza wrote:
>
>>To me, the conceptual simplicity of CL-style macros is striking: It's
>>just a transformation of s-expressions. That's it.
>
> That's what all of these macro systems are.

R5RS doesn't say so. At least, I don't see where the term "macro
transformer" is defined. It seems to me that the standard tries hard to
hide that fact. (But I might simply not have found the relevant sections.)

>>Once understood, it's clear that you can do anything
>>with this conceptual model.
>
> The same is true of syntax-case.

Of course, I will take your word for that. But I still don't understand
what syntax-case does. I have browsed through the various links that are
usually referred to (mainly papers and a book by Dybvig), but I find it
very hard to follow the contents. It would be good if there existed
some kind of high-level overview of syntax-case for people who
already know DEFMACRO.

>>To me, things like syntax-rules and syntax-case look to macro
>>programming like the LOOP macro looks to iteration. Maybe they really
>>cover the important cases, but they seem hard to learn.
>
> Syntax-rules is not hard to learn. If anything, it suffers from being
> almost too simple; as well as from lacking good, short introductory
> material. You specify a pattern, and specify the syntax that should replace
> that pattern. That's all there is to it.

Examples like those given in
http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
that syntax-rules just trades one set of possible pitfalls for a
different set, but along the way the conceptual simplicity is lost.

Here are the examples from that reference implemented with DEFMACRO:

(defun foo-f (x)
  (flet ((id (x) x))
    (id (1+ x))))

(defmacro foo-m (x)
  `(macrolet ((id (x) x))
     (id (1+ ,x))))

(defmacro bar-m2 (var &body body)
  `(macrolet ((helper (&body body)
                `(lambda (,',var) ,@body)))
     (helper ,@body)))


I really don't see the problem. Seriously.

> Syntax-case is more complex, and I do think that's a drawback when compared
> to defmacro. It increases the temptation to conclude the following:
>
>>And it immediately makes me wonder whether it is really worth it.
>>After all, I know how to make things work with DEFMACRO.
>
> I might wonder something similar if I were a Python programmer looking at
> Lisp: Lisp seems hard to learn, and I would know how to make things work
> with Python.

Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
program generators, it's clear that their language is missing something
important. Of course, they can write a Lisp interpreter in Python, but
that's beside the point.

Do you really think that syntax-case is an equally important step forward?

>>BTW, what you really need to make something like DEFMACRO work is, on
>>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
>>string->uninterned-symbol and most probably a Lisp-2.
>
> I don't see that Lisp-2 is an issue.

See http://citeseer.nj.nec.com/bawden88syntactic.html

Here are comments on the examples given in that paper from a Lisp-2
point of view:

1. Lexical variable bindings in the client code don't capture function
definitions used in macros:

(let ((cons 5))
(push 'foo stack))

is not an issue.

2. A client's lexical reference wouldn't conflict with a binding
introduced by a macro because the macro would make sure to use unique
symbols:

(defmacro or (exp-1 exp-2)
  (let ((temp (gensym)))
    `(let ((,temp ,exp-1))
       (if ,temp ,temp ,exp-2))))

The expansion of (or (memq x y) temp) would be correct. This isn't
really a Lisp-2 issue. Note, however, that there are good examples of why
one would want to introduce a new binding in a macro. As long as this is
documented, there is no real problem here. This example boils down to
the question of what a reasonable default is and how hard it is to get the
other variant.

3. The third example in that paper shows how a newly introduced lexical
function binding may interfere with a function that is used in a macro
expansion. However, the names of global function definitions are an
important part of an application's ontology. You know that you redefine
an important element of your application when meaningful names are used.
Different ontologies can be separated by proper namespace mechanisms.

Here it helps that a Lisp-2 separates variables and functions by
default. Variables are usually not important parts of an application's
ontology. If they are, the convention in Common Lisp is to use proper
naming schemes, like asterisks for special variables. Effectively, this
creates a new namespace. (ISLISP is a little bit more explicit than
Common Lisp in this regard.)

4. The fourth example can be solved with a proper GENSYM for "use" in
the "contorted" macro. Again, there are good examples of why one would
want to introduce a new binding in a macro.

> As Jens pointed out, all the major
> Schemes implement defmacro & gensym, and it works fine. And, of course,
> quasiquotation is standardized by R5RS.

OK, I believe you.

>>My feeling is that Common Lispniks would have an easier time to consider
>>using Scheme when appropriate if Scheme implementations would more
>>clearly document whether they support these features (except, of course,
>>for the Lisp-2 thing). It's important that you can create uninterned symbols.
>
> I wouldn't expect many Common Lispniks to use Scheme when appropriate.
> Unless they like already like Scheme, why should they? And if they already
> like Scheme, they're unlikely to be put off by such vague concerns as the
> above. For the record, I'm not aware of any limitations on the defmacro
> capabilities in the many Schemes which implement it.

Some while ago, I wanted to experiment with continuations in Scheme.
Apart from the fact that not all Schemes seem to implement continuations
fully and/or correctly (see
http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
respective documentation makes me feel uneasy about whether I have to
relearn programming techniques for totally unrelated areas is a clear
downside IMHO.

[...]


> The result is that it's actually easier to reason about syntax-rules
> macros - which makes them easier to write, and easier to read. As a result,
> and also because of the enforced hygiene, they're less error-prone.

I don't mind using DEFMACRO for simple things. I don't find them hard to
write or read, and I don't know why they would be more error-prone.
Sounds similar to some of the claims made by advocates of static type
systems. Maybe this boils down to just a matter of taste.

>>I am not saying that this makes syntax-case worse than quasiquotation.
>>Maybe I am just missing something.
>
> I don't think it's possible to compare without actually learning to use both
> systems. If you're interested in becoming a bit more familiar, I'd
> recommend starting with syntax-rules, which is very simple and easy to get
> up to speed with. It also has the benefit that you could use it from within
> CL via Dorai Sitaram's package.

OK.

>>Hm, I recall reading that syntax-case allows for recording line numbers
>>of the original expression. Are there more advantages?
>
> Using domain-specific structures instead of lists to represent syntax means
> that you can associate any information you want to with source code. As
> Jens pointed out, PLT uses this to good effect. I think there's an argument
> that this is an obvious way forward for Lisp-like languages - things like
> refactoring tools and any other kinds of automated code processing can
> benefit from it.

OK

>>Having more options is certainly better. (Such a statement from a fan of
>>a supposedly minimal language?!? ;)
>
> I think options are required here because none of these systems are perfect
> in all respects. But the core language is still minimal - these macro
> systems are all surface features, for syntax transformation. The portable
> syntax-case system can be implemented in almost any standard Scheme, and
> both syntax-rules and defmacro are easy to implement in terms of
> syntax-case. You can view syntax-rules and defmacro as applications of
> syntax-case. The reverse is not the case.

What stands in the way of implementing syntax-case on top of DEFMACRO?
(This is not a rhetorical question.)

>>Anyway, thanks for your insightful posting. I appreciate your efforts to
>>bridge the gaps between the Scheme and Common Lisp communities,
>>especially because this isn't always easy.
>
> Thanks. I'm more interested in the underlying ideas, than in language
> advocacy. Systems for transforming s-expressions are relevant to all Lisp
> dialects.

Agreed.

Pascal Costanza

Feb 22, 2004, 2:36:23 PM

Anton van Straaten wrote:

> Pascal Costanza wrote:
>
>>I have read docs of some Schemes where I had the feeling that the
>>respectively provided gensym functions just tried hard to generate
>>unique strings, but didn't actually rely on object identity / symbol
>>identity to ensure uniqueness. Maybe I misunderstood those docs.
>>But that's the whole point of my argument.
>
> It seems to me that you're exercising unwarranted suspicion. Do you have
> any reason to believe gensym won't work the way it should? I suspect the
> authors of whichever docs you're referring to may not have bothered to
> specifically address the details you're concerned about, simply because it
> wouldn't make sense to implement a gensym that doesn't work reliably in all
> cases, and it seems almost too obvious to mention that.

OK, I have to admit that this is my mistake. The last time I wanted to
know whether a Scheme implementation suited my needs, I looked at
SISC's documentation, among others. At that time I didn't know about
string->uninterned-symbol, but expected something more Common Lispish.

I found the following section, which made me very skeptical:
http://sisc.sourceforge.net/manual/html/apc.html#N15287

To my ears, this sounded a lot like they were trying to achieve
uniqueness by properly named strings. However, now that I have checked
the docs again I have noticed that the details of that section serve a
different purpose.

Someone should write a highly opinionated guide to Scheme. ;) (not me)

>>In fact, I have just browsed through some of the Scheme docs in order to
>>double-check my impression. I have to admit it was easier this time to
>>track down the relevant sections. When a Scheme implements
>>string->uninterned-symbol, I am relatively happy. When it doesn't, but
>>it implements gensym, then I wonder how gensym is implemented. Some docs
>>say that gensym creates a unique symbol, some even state that it's
>>uninterned. What do the others do?
>
> You're worrying about internal implementation details, which only matter in
> two cases: if the observed behavior is incorrect (which afaik, is not the
> case); or if you have some particular interest in implementation details.
> In the latter case, you shouldn't expect the documentation that tells you
> how to use the language, to give all the specifics of the internals.

OK, thanks for insisting.

>>So at least, I know what to look for by now. Still, I have the feeling
>>that things could be easier.
>
> Do you mean that it could be easier for a CL user to find information that
> satisfies him that something in Scheme has the same observable semantics as
> it does in CL? I could make the same claim about CL. One way to address
> that would be "CL for Scheme users" and "Scheme for CL users" (I think I've
> seen at least one of those somewhere.)

...maybe a joint paper for both communities...

>>For example, the Scheme standard could just
>>standardize string->uninterned-symbol, couldn't it?
>
> You could submit a SRFI for it. But are you suggesting standardizing
> string->uninterned-symbol simply to force a particular implementation style
> for gensym? If so, why is that necessary?

It's not. Just a wrong expectation on my side.

Jens Axel Søgaard

Feb 22, 2004, 3:31:00 PM
Pascal Costanza wrote:

> Some while ago, I wanted to experiment with continuations in Scheme.
> Apart from the fact that not all Schemes seem to implement continuations
> fully and/or correctly (see
> http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
> respective documentation makes me feel uneasy about whether I have to
> relearn programming techniques for totally unrelated areas is a clear
> downside IMHO.

Most of these errors are not about call/cc but letrec
(e.g. 1.1 and 1.2 [I was too lazy to check the others]), where some
implementors have chosen to deviate slightly from the standard (some
refer to it as letrec*). This deviation requires the use of call/cc to
observe, and that's why the test examples are filled with call/cc.

I wouldn't be surprised if the behaviour becomes sanctioned in R6RS.

>>> Hm, I recall reading that syntax-case allows for recording line numbers
>>> of the original expression. Are there more advantages?

Perhaps you have already read it, but Dybvig's "Writing Hygienic Macros
in Scheme with Syntax-Case", available at

<ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz>

is one of the best expositions of syntax-case from a user's viewpoint.

> What stands in the way of implementing syntax-case on top of DEFMACRO?
> (This is not a rhetorical question.)

I can't see any. Hm. Perhaps one could simply throw the Scheme source
of syntax-case through PseudoScheme?

--
Jens Axel Søgaard

Jens Axel Søgaard

Feb 22, 2004, 3:35:59 PM
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:

>> Does it affect the perception of other colors as well?
>> I mean, is there a problem with distinguishing, say, a mixture of 50% red
>> and 50% blue, from a mixture with 50% green and 50% blue?

> I can't describe the effects, because I really don't know the details.
> I am only a victim. ;) Indeed, the mixed colors are the most problematic
> ones.

OK

>>> In general, it's a good idea to have a colorblind person check colors
>>> in a program, presentation, etc., in order to ensure that they have
>>> the desired effect on them, if it is really important. About 10% of
>>> the male population are colorblind, so I don't think it's negligible.

>> Which colors would you prefer?

> Colors like yellow, orange and blue are generally ok. But the best thing
> in my experience is to sit down together with someone affected, explain
> to him what you want to achieve, and then agree on some colors.
>
>> Hm. An option
>> to use alternative means of indicating the various categories,
>> say underline, bold, italic etc would be nice.
>
>
> Yes, that's also a good idea. Bold and italic can have negative
> consequences for the layout, though. Shades of grey are usually also
> very good when you don't use too many. This also helps in getting usable
> printouts on b/w printers. (This is also a useful rule of thumb: If you
> can still distinguish the colors when converted to grey scale, then it's
> likely that colorblind people can also distinguish them.)

I'll keep that in mind.

--
Jens Axel Søgaard

Brian Mastenbrook

Feb 22, 2004, 4:29:03 PM
In article <P2WZb.14144$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> It seems to me that you're exercising unwarranted suspicion. Do you have
> any reason to believe gensym won't work the way it should? I suspect the
> authors of whichever docs you're referring to may not have bothered to
> specifically address the details you're concerned about, simply because it
> wouldn't make sense to implement a gensym that doesn't work reliably in all
> cases, and it seems almost too obvious to mention that.

Perhaps someone should tell that to Dybvig:

> (define g1 '#{g18 |%japmchQ4DMDM\\%+|})
> (define g2 (gensym))
> g2

Error in intern-gensym: unique name "%japmchQ4DMDM\\\\%+" already
interned.
Type (debug) to enter the debugger.

I did this by observing the pattern of printed gensyms and guessing the
next value in the sequence.

--
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/

Anton van Straaten

Feb 22, 2004, 6:03:46 PM
Pascal Costanza wrote:
> Anton van Straaten wrote:
>
> > Pascal Costanza wrote:
> >
> >>To me, the conceptual simplicity of CL-style macros is striking: It's
> >>just a transformation of s-expressions. That's it.
> >
> > That's what all of these macro systems are.
>
> R5RS doesn't say so. At least, I don't see where the term "macro
> transformer" is defined. It seems to me that the standard tries hard to
> hide that fact. (But I might simply not have found the relevant sections.)

If you look up "macro transformer" in the index, it points you to a page
which contains the following definition (Sec. 4.3, Macros):

"The set of rules that specifies how a use of a macro is transcribed into a
more primitive expression is called the 'transformer' of the macro."

I don't think it's hiding anything. Do you think otherwise?

> >>Once understood, it's clear that you can do anything
> >>with this conceptual model.
> >
> > The same is true of syntax-case.
>
> Of course, I will take your word for that. But I still don't understand
> what syntax-case does. I have browsed through the various links that are
> usually referred to (mainly papers and a book by Dybvig), but I find it
> very hard to follow the contents. It would be good if there existed
> some kind of high-level overview of syntax-case for people who
> already know DEFMACRO.

I agree the docs don't make it easy to get into at first. I learned
syntax-case (up to a point - I'm not an expert by any means) after first
learning and using syntax-rules for some time, and having previously been
familiar with defmacro. I think syntax-rules makes a good starting point,
because it teaches the hygienic pattern matching approach in a simpler
context. That same approach is used by syntax-case, but augmented with a
much more powerful procedural syntax manipulation capability.

But I'll ignore my own advice and take a stab at explaining syntax-case,
starting from a defmacro perspective. Perhaps the gentlest introduction to
syntax-case is Dybvig's paper, "Writing Hygienic Macros in Scheme with
Syntax-Case":
ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz ,
and I'll use some of its examples below.

Start with defmacro, and imagine that instead of quoting syntax using
quasiquote, you use a special form, 'syntax', which instead of returning a
list as quasiquote does, returns a syntax object, i.e. an instance of an
abstract data type which represents a piece of syntax. This type has an API
which supports various code walking & manipulation capabilities. It can
also be converted to a list (or whatever value the original syntax
represented) via 'syntax-object->datum'.

An important thing to note here is that a syntax object "understands" the
syntax it represents - it's not just an undifferentiated list. It knows
which values are identifiers, it knows things about where those identifiers
are bound, and as we've touched on, it can track things like line numbers
(which may be implementation-dependent). If you're developing code
manipulation tools - editors, debuggers etc. - these syntax objects give you
a capability which defmacro doesn't even attempt to address. Syntax objects
are a richer way to represent program syntax than lists, and their uses go
beyond just macros.

Within a syntax expression of the form (syntax ...), any references to macro
variables are replaced with their values, i.e. there's no need to unquote
references to macro variables. So to steal an example from Dybvig, here's
an 'and2' macro which works like 'and', but for only two operands:

(define-syntax and2
  (lambda (x)
    (syntax-case x ()
      ((_ x y)
       (syntax (if x y #f))))))

The (lambda (x) ...) binds a syntax object to x, representing the syntax of
the expression which invoked the macro. In theory, you can do whatever you
want with that syntax object. Most commonly, you'll use syntax-case to do a
pattern match on it, which is what the above example does with the
expression (syntax-case x () ...). The () is for literals, like 'else' in
cond.

Within the above syntax-case expression, there's a single pattern and a
corresponding template:

((_ x y)
 (syntax (if x y #f)))

The underscore in (_ x y) represents the name of the macro - you could also
write (and2 x y), it doesn't matter. This pattern will match any
invocations of AND2 with two parameters. After the pattern is the
expression which will be executed when that pattern is matched. In this
case, it's simply a syntax expression which returns the expression (if x y
#f). The return value from syntax-case must be a syntax object, which
represents the actual syntax of the final program.

In the above, since x and y are macro variables (strictly speaking, "pattern
variables"), their values will be substituted when the syntax object is
created, so that (and2 4 5) becomes (if 4 5 #f).

I think I'll stop there for now, since I have other things to do! I've
touched on some of the more important points about syntax-case. There's a
lot more to it than the above - particularly, breaking hygiene, and
executing procedural code rather than simply applying a template. But all
of it fits into the above framework, and involves manipulating syntax
objects, in a way similar to what you would do in defmacro, but through the
syntax object API rather than manipulating lists. There are plenty more
examples in the Dybvig paper.
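
Actually, one quick taste of the hygiene-breaking part, since I mentioned
it: here's a sketch (untested, in the style of the paper's examples) of a
'loop' macro that deliberately makes a 'break' binding visible at the
call site:

(define-syntax loop
  (lambda (x)
    (syntax-case x ()
      ((k e ...)
       ;; Build an identifier named 'break' carrying the context of the
       ;; keyword k, i.e. the call site - a controlled violation of
       ;; hygiene, done through the syntax object API.
       (with-syntax ((break (datum->syntax-object (syntax k) 'break)))
         (syntax
          (call-with-current-continuation
           (lambda (break)
             (let f () e ... (f))))))))))

So (loop (if done? (break 'out)) ...) escapes via 'break', even though
the user never bound that name - exactly what plain syntax-rules won't
let you do.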

> > Syntax-rules is not hard to learn. If anything, it suffers from being
> > almost too simple; as well as from lacking good, short introductory
> > material. You specify a pattern, and specify the syntax that should
> > replace that pattern. That's all there is to it.
>
> Examples like those given in
> http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
> that syntax-rules just trade one set of possible pitfalls with a
> different set, but along that way the conceptual simplicity is lost.

I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
a different approach, which in some ways is conceptually simpler than
defmacro, since it doesn't require the user to manually keep track of the
different levels at which the macro operates. The pitfalls you mention may
indeed be flaws in syntax-rules - I'm not familiar enough with them to
comment - but I find that syntax-rules works very well for many kinds of
macros, better than defmacro in fact.

Of course, the latter claim is hardly ever going to be accepted by someone
only familiar with defmacro. For the record, I learned and used defmacro
before ever using syntax-rules or syntax-case, and I still use defmacro from
time to time, so I think I have a good basis for comparison.

> Here are the examples from that reference implemented with DEFMACRO:
>
> (defun foo-f (x)
>   (flet ((id (x) x))
>     (id (1+ x))))
>
> (defmacro foo-m (x)
>   `(macrolet ((id (x) x))
>      (id (1+ ,x))))
>
> (defmacro bar-m2 (var &body body)
>   `(macrolet ((helper (&body body)
>                 `(lambda (,',var) ,@body)))
>      (helper ,@body)))
>
>
> I really don't see the problem. Seriously not.

I'm not sure what you mean about not seeing the problem. One of the
problems mentioned in the article is that syntax-rules pattern variables
don't shadow. I don't know if there's a justification for that, or it's
simply a bug in the design of syntax-rules. But you usually get an error if
you make this mistake, and it's easy to fix, and easy to avoid. It doesn't
mean that syntax-rules is not useful, and it's still better than defmacro,
which you can't dispute until you've learned syntax-rules. ;)

> > Syntax-case is more complex, and I do think that's a drawback when
> > compared to defmacro. It increases the temptation to conclude the
> > following:
> >
> >>And it immediately makes me wonder whether it is really worth it.
> >>After all, I know how to make things work with DEFMACRO.
> >
> > I might wonder something similar if I were a Python programmer looking
> > at Lisp: Lisp seems hard to learn, and I would know how to make things work
> > with Python.
>
> Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
> program generators, it's clear that their language is missing something
> important. Of course, they can write a Lisp interpreter in Python, but
> that's beside the point.
>
> Do you really think that syntax-case is an equally important step forward?

In some respects, yes, but that's not what I really meant. It's easy to
look at something from the outside and find reasons not to try it, and that
was my main point. But the points you've been picking on don't seem very
substantial to me - it seems as though you're looking for reasons to ignore
these systems, rather than looking for reasons you might want to learn them.
To conclude from the points you've raised that these systems can be ignored
seems to me to throw a bunch of babies out with the bathwater. (They're
much cuter babies than that warty defmacro baby, too! ;)

Of course, a CL programmer who wants to write standard CL code, obviously
has little incentive to be interested in other macro systems. But if your
interests cross multiple languages, then there's value in the Scheme macro
systems, at the very least in the sense that learning more than one approach
to the same problem expands the horizons of your understanding of the
problem.

> >>BTW, what you really need to make something like DEFMACRO work is, on
> >>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
> >>string->uninterned-symbol and most probably a Lisp-2.
> >
> > I don't see that Lisp-2 is an issue.
>
> See http://citeseer.nj.nec.com/bawden88syntactic.html

I'm familiar with why people claim it's an issue, but in practice I think
it's not significantly worse than the issue of hygiene in defmacros in
general. As I've said and defended here once before, Lisp-1 can express any
Lisp-2 program, simply by changing any conflicting names so as not to
conflict - a conceptually trivial transformation, with consequences which
are primarily subjective. It would have an impact on porting Lisp-2 macros
to Lisp-1, but it doesn't limit what you can easily express in Lisp-1.

Put another way, having the ability to accidentally compensate for hygiene
violations in some cases - where multiple namespaces happen to prevent the
problem - isn't a solution to the general problem of not having hygiene.
Since you haven't solved the general problem, you still have to address
questions of hygiene, in various low-level ways. A single namespace doesn't
make this problem worse in any significant way.
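
A concrete (if well-worn) illustration, sketched in syntax-rules:
multiple namespaces do nothing for this case, because the macro's
temporary and the user's variable both live in the value namespace.

(define-syntax my-or
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))       ; hygiene renames this tmp automatically
       (if tmp tmp b)))))

;; (let ((tmp 'user)) (my-or #f tmp))  =>  user
;;
;; An unhygienic expansion would give (let ((tmp #f)) (if tmp tmp tmp))
;; and wrongly return #f - and a second, function namespace would not
;; have prevented it.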

> Here it helps that a Lisp-2 separates variables and functions by
> default. Variables are usually not important parts of an application's
> ontology. If they are, the convention in Common Lisp is to use proper
> naming schemes, like asterisks for special variables. Effectively, this
> creates a new namespace.

Mmm, asterisks. This, to me, is why the whole Lisp-1/2 debate is moot. The
solution is simply Lisp-N, where you can define namespaces, modules, etc.
and control how they're used. See PLT Scheme etc.

> 4. The fourth example can be solved with a proper GENSYM for "use" in
> the "contorted" macro.

The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
low level of abstraction. Why don't you use GENSYM when declaring normal
lexical variables in a procedure? Rhetorical question, of course - the
point is, GENSYM is a kludge. It's not a particularly onerous one, but it's
part of what makes defmacro worth improving on.
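
Compare two versions of the same macro - the first in defmacro style
(sketched with the define-macro form that some Schemes, e.g. Guile and
Gambit, provide), the second hygienic:

;; defmacro style: the temporary's name must be constructed by hand
(define-macro (swap! a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (set! ,a ,b)
       (set! ,b ,tmp))))

;; hygienic style: just write tmp; the expander keeps it distinct
;; from any tmp at the call site
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))

The gensym in the first version is pure bookkeeping; it says nothing
about swapping.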

> Some while ago, I wanted to experiment with continuations in Scheme.
> Apart from the fact that not all Schemes seem to implement continuations
> fully and/or correctly (see
> http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
> respective documentations make me feel uneasy about whether I have to
> relearn programming techniques for totally unrelated areas is a clear
> downside IMHO.

We're straying far afield here. ;) But I'll give my opinion about
continuations, too. Re the quality of implementations, once again you're
looking at edge cases. Forget about those, they're not important, except
in, well, edge cases. All of the major Schemes either support continuations
well, or tell you when they don't - e.g., some of the Scheme to C compilers
deliberately provide restricted continuations.

As far as relearning programming techniques goes, first and foremost,
continuations are a general conceptual model for control flow. If you only
write single-threaded code in a language with a traditional linear
stack-based control flow, you won't have much use for continuations -
they're far more powerful and general than is needed to deal with that case.
But for systems with more complex control flow, continuations can provide a
very useful model - web servers are just one example, but really any system
which involves multiple threads, distributed processing, etc. can benefit
from modeling via continuations.

Scheme is one of very few languages - along with SML/NJ, Stackless Python,
and the RhinoWithContinuations version of Javascript - which implements
first-class continuations. If you're developing tools in the spaces
mentioned above, this is a useful capability. Stackless Python uses
continuations to support microthreading; Scheme has a number of web server
solutions which use continuations to "invert control" so that the structure
of a web application's code can be decoupled from its web page structure;
and RhinoWithContinuations does something similar for web applications in
the Cocoon web framework. For applications which need them, continuations
are very useful, and have little competition. Their competition is mainly
OS-level threads, which really solve a different problem, and conceptually
are a stream of continuations anyway.
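
To make the inversion-of-control trick concrete, here's a toy sketch
(the names are my own invention) of a computation that suspends itself
and is resumed later - between HTTP requests, in the web server case:

(define resume #f)               ; will hold the captured continuation

(define (task exit)
  (display "step 1") (newline)
  (call-with-current-continuation
   (lambda (k)
     (set! resume k)             ; remember where we stopped...
     (exit 'suspended)))         ; ...and escape back to the caller
  (display "step 2") (newline))

;; (call-with-current-continuation task) prints "step 1", returns 'suspended
;; (resume #f) later picks up where task left off and prints "step 2"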

For ordinary programming, though, continuations are more or less
irrelevant - they should be dealt with under the hood, whether by tools like
those web server frameworks, or by language constructs like exception
handlers and microthreads. The only reason to learn about use of
first-class continuations as a programming construct is either for the sake
of learning, to deepen your understanding of programming; or if you are
interested in developing language or system tools that use them. If you are
interested in any of this, then yes, you're going to have to do some
learning and relearning - there's no way around that. But for most ordinary
applications, you can safely ignore continuations.

> [...]
> > The result is that it's actually easier to reason about syntax-rules
> > macros - which makes them easier to write, and easier to read. As a
> > result, and also because of the enforced hygiene, they're less error-prone.
>
> I don't mind using DEFMACRO for simple things. I don't find them hard to
> write or read, and I don't know why they would be more error-prone.
> Sounds similar to some of the claims made by advocates of static type
> systems. Maybe this boils down to just a matter of taste.

Maybe - let's talk once you've tried syntax-rules. But you gave a clue to
your reading & writing process for DEFMACRO when you said that when reading
a syntax-rules macro, you were immediately worrying about which level the
various tokens were at. You've learned to look for, and expect something
that, with syntax-rules, you can simply forget about. You don't do these
things when writing ordinary functions - why do you put up with it when
writing macros? What would you think of Lisp if you had to use gensym to
initialize every variable you use? You've simply become very used to a
low-level technique, so that you don't believe there's any need for a higher
level technique.

> What stands in the way of implementing syntax-case on top of DEFMACRO?
> (This is not a rhetorical question.)

I don't think it would make much sense. The implementation of syntax
objects has little do with what defmacro does. The pattern matching forms
of syntax-case might be defined via DEFMACRO at the surface level, but their
definitions deal with syntax objects, so there'd be little for defmacro to
do once the syntax objects had been constructed. It wouldn't help much when
constructing the syntax objects, either, since the 'syntax' form doesn't use
defmacro syntax, and I can't see any point in converting it internally.

The reason that it's easy to implement DEFMACRO in syntax-case is that a
syntax object is a superset of the list representations of syntax used by
DEFMACRO. You can translate syntax-as-lists to syntax objects and back
again, without losing anything - it's part of the standard syntax object
API, so nothing additional is needed to do that, which is partly why a
syntax-case implementation of DEFMACRO is short. Going the other way is
more problematic, since syntax-as-lists has less information than a syntax
object.
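
For the curious, here's roughly what that short implementation looks
like - a sketch modeled on well-known versions (Chez-era names), and
glossing over details such as when the transformer expression gets
evaluated:

(define-syntax define-macro
  (lambda (x)
    (syntax-case x ()
      ((_ name transformer)
       (syntax
        (define-syntax name
          (lambda (y)
            (syntax-case y ()
              ((k . args)
               ;; strip the syntax objects down to plain data, run the
               ;; defmacro-style transformer, then rewrap the result in
               ;; the call site's context via the keyword k
               (datum->syntax-object
                (syntax k)
                (apply transformer
                       (syntax-object->datum (syntax args)))))))))))))

Note the round trip: syntax-object->datum going in, datum->syntax-object
coming out. That conversion is the whole translation layer.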

Anton


Anton van Straaten

unread,
Feb 22, 2004, 6:27:43 PM2/22/04
to

Ouch! Is that with Chez? Petite Chez 6.0a doesn't do that.

Anton


Brian Mastenbrook

unread,
Feb 22, 2004, 10:41:35 PM2/22/04
to
In article <mza_b.14958$W74...@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
> a different approach, which in some ways is conceptually simpler than
> defmacro, since it doesn't require the user to manually keep track of the
> different levels at which the macro operates. The pitfalls you mention may
> indeed be flaws in syntax-rules - I'm not familiar enough with them to
> comment - but I find that syntax-rules works very well for many kinds of
> macros, better than defmacro in fact.
>
> Of course, the latter claim is hardly ever going to be accepted by someone
> only familiar with defmacro. For the record, I learned and used defmacro
> before ever using syntax-rules or syntax-case, and I still use defmacro from
> time to time, so I think I have a good basis for comparison.

For the record, I'm familiar with both defmacro and syntax-rules,
though I am considerably more familiar with the first. With helpful
tools for list destructuring and mass generation of gensyms, defmacro
macros can be pretty easy to write, whereas I often have to squint at a
syntax-rules macro to figure out what ...'s correspond to what. Perhaps
this is simply a consequence of my level of experience. Perhaps it's
just personal preference.

> > 4. The fourth example can be solved with a proper GENSYM for "use" in
> > the "contorted" macro.
>
> The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
> low level of abstraction. Why don't you use GENSYM when declaring normal
> lexical variables in a procedure? Rhetorical question, of course - the
> point is, GENSYM is a kludge. It's not a particularly onerous one, but it's
> part of what makes defmacro worth improving on.

I don't understand what makes GENSYM such a kludge, nor why it is such
a low level of abstraction. GENSYM is not doing anything different than
CONS does - it returns something which is unique to eq? with the
specified contents. In CL, a GENSYM-alike can minimally be written as
(make-symbol "").

> We're straying far afield here. ;) But I'll give my opinion about
> continuations, too. Re the quality of implementations, once again you're
> looking at edge cases. Forget about those, they're not important, except
> in, well, edge cases. All of the major Schemes either support continuations
> well, or tell you when they don't - e.g., some of the Scheme to C compilers
> deliberately provide restricted continuations.

Fortunately, this is what Chicken is for.

> Scheme is one of very few languages - along with SML/NJ, Stackless Python,
> and the RhinoWithContinuations version of Javascript - which implements
> first-class continuations. If you're developing tools in the spaces
> mentioned above, this is a useful capability. Stackless Python uses
> continuations to support microthreading; Scheme has a number of web server
> solutions which use continuations to "invert control" so that the structure
> of a web application's code can be decoupled from its web page structure;
> and RhinoWithContinuations does something similar for web applications in
> the Cocoon web framework. For applications which need them, continuations
> are very useful, and have little competition. Their competition is mainly
> OS-level threads, which really solve a different problem, and conceptually
> are a stream of continuations anyway.

And here is where the next Lisp/Scheme debate is going to start up.
When Schemers speak of continuations, they really mean implicit
continuations - the idea that the program should be granted access to
the continuations that are flying around under the hood in the
implementation. However, the name "continuation" is considerably more
general, or else anybody using CPS needs to find a new term.

In fact I would argue that the main competition to call/cc
continuations is going to be CPS, and that CPS has a really big
advantage: after you write your program, you can identify what types of
continuations you use, and then change their representation to a list
of a name and arguments which describes the continuation of that type.
For instance, a standard CPS example:

(defun fib-cps (n k)
  (if (< n 3)
      (funcall k 1)
      (fib-cps (- n 2)
               (lambda (v1)
                 (fib-cps (- n 1)
                          (lambda (v2)
                            (funcall k (+ v1 v2))))))))

Becomes:

(defun fib-cps (n k)
  (if (< n 3)
      (do-continuation k 1)
      (fib-cps (- n 2)
               (make-val1-continuation (- n 1) k))))

(defun make-val1-continuation (n k)
  `(val1-continuation ,n ,k))

(defun make-add1-continuation (n k)
  `(add1-continuation ,n ,k))

(defun do-continuation (k arg)
  (ecase (car k)
    (return-k arg)
    (val1-continuation
     (fib-cps (cadr k) (make-add1-continuation arg (caddr k))))
    (add1-continuation
     (do-continuation (caddr k) (+ arg (cadr k))))))
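
(To run it, assuming I've read my own dispatch right: (fib-cps 10
'(return-k)) evaluates to 55, with '(return-k) serving as the initial,
"top-level" continuation.)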

Of course this is all rather complicated to write by hand, but there is
one huge advantage: the continuations are now serializable, even across
different lisps! Provided with sufficient tools to generate code like
this, I don't see why anyone would prefer call/cc continuations (which
are inherently fragile) for this use case.

CPS also brings with it a clearer view of dynamically scoped variables,
which are useful in writing network applications. With call/cc
continuations, calling a saved continuation (eg for a web process) will
trigger wind points on the way out, meaning you can't store eg the
current connection in a simple dynamic variable.

One interesting thing to note is that continuations can be described in
terms of the standard UNIX call fork() with shared memory. If you do
use shared memory, fork() will simply copy the stack alone - so, to
exploit this, when you call/cc, fork() a new process, and have it
immediately suspend itself. When it is unsuspended, it should fork()
itself into a running process and then re-suspend itself. When you call
the associated continuation, unsuspend that process, and kill yourself.
Make sure to store return values on the heap, and you're all set! Thus
continuations are really a subset of standard UNIX multiprocessing
semantics :-)

Brian Mastenbrook

unread,
Feb 22, 2004, 10:48:19 PM2/22/04
to
In article <PVa_b.14987$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> Ouch! Is that with Chez? Petite Chez 6.0a doesn't do that.
>
> Anton

I tried that example with Chez 6.9b, which corresponds to "whatever is
on the CS computers for us to use". I think the real issue here is with
print-gensym - it's not generating a unique representation for the
gensym, so it simply collides. Of course I have issues with
print-gensym to begin with - gensyms should not be READable to the same
value. To do otherwise makes it not a gensym. Otherwise gensym could be
boiled down to something evil like (loop for i from 1 do (if (not
(find-symbol (format nil "g~A" i))) (return (intern (format nil "g~A"
i))))) .

Anton van Straaten

unread,
Feb 23, 2004, 2:19:33 AM2/23/04
to
Brian Mastenbrook wrote:
> In article <mza_b.14958$W74...@newsread1.news.atl.earthlink.net>,
> Anton van Straaten <an...@appsolutions.com> wrote:
>
> > I find that syntax-rules works very well for many kinds of
> > macros, better than defmacro in fact.
> >
> Of course, the latter claim is hardly ever going to be accepted by
> someone only familiar with defmacro. For the record, I learned and used
> defmacro before ever using syntax-rules or syntax-case, and I still use
> defmacro from time to time, so I think I have a good basis for comparison.
>
> For the record, I'm familiar with both defmacro and syntax-rules,
> though I am considerably more familiar with the first. With helpful
> tools for list destructuring and mass generation of gensyms, defmacro
> macros can be pretty easy to write, whereas I often have to squint at a
> syntax-rules macro to figure out what ...'s correspond to what. Perhaps
> this is simply a consequence of my level of experience. Perhaps it's
> just personal preference.

I'm sure there is a large subjective element. My real point relates to the
initial reaction that defmacro users often have, to not knowing what's going
on when they don't see syntax and variables being quoted or unquoted.
However, this is actually a benefit of syntax-rules that they're simply
unfamiliar with. I notice you didn't mention that issue as being something
that you have to squint for, which isn't surprising if you're somewhat
familiar with syntax-rules.

Re the ...'s, since they appear after the expression which is being
repeated, I tend to think of them like a postfix token - along the lines of
the quote and quasiquote tokens, but appearing after the expression to which
it applies, instead of before. The equivalent operation in defmacro is
usually procedural code, which I don't think is any clearer to absorb at a
glance.
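
For anyone following along, an example of what I mean (plain
syntax-rules, nothing exotic):

(define-syntax my-let
  (syntax-rules ()
    ((_ ((name val) ...) body1 body2 ...)
     ;; each repetition of (name val) in the input drives the
     ;; repetitions of name and val in the output, in step
     ((lambda (name ...) body1 body2 ...) val ...))))

;; (my-let ((x 1) (y 2)) (+ x y))
;;   expands to ((lambda (x y) (+ x y)) 1 2), i.e. 3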

> I don't understand what makes GENSYM such a kludge, nor why it is such
> a low level of abstraction. GENSYM is not doing anything different than
> CONS does - it returns something which is unique to eq? with the
> specified contents. In CL, a GENSYM-alike can minimally be written as
> (make-symbol "").

Yes, GENSYM is just a constructor. But you don't normally have to
"construct" your variable names. The only reason you do with defmacro, is
because defmacro doesn't deal with hygiene. It's low-level because you're
implementing the sort of feature that is usually handled by the language.

You're right that with some wrappers that generate gensyms, you can make this
easier. But as I said, you don't have to do this with normal lexical or
other variables, and I doubt you'd be defending it if you did have to gensym
all your variables.

What makes macros different? That's semi-rhetorical - it's an interesting
question to answer. I think the answer ends up being that defmacro exists
at an intersection point between ease of implementation, sufficient ease of
use, and the macro equivalent of Turing-completeness. There are other
interesting intersection points, though.

> > All of the major Schemes either support continuations
> > well, or tell you when they don't - e.g., some of the Scheme to C
> > compilers deliberately provide restricted continuations.
>
> Fortunately, this is what Chicken is for.

Yes, Chicken warms the cockles of my cool-hack-loving heart. For those who
aren't familiar, it's a Scheme which generates C code that uses a version of
Henry Baker's Cheney-on-the-MTA mechanism for supporting tail recursion and
continuations in C:
http://home.pipeline.com/~hbaker1/CheneyMTA.html

> > For applications which need them, continuations
> > are very useful, and have little competition. Their competition is
> > mainly OS-level threads, which really solve a different problem, and
> > conceptually are a stream of continuations anyway.
>
> And here is where the next Lisp/Scheme debate is going to start up.
> When Schemers speak of continuations, they really mean implicit
> continuations - the idea that the program should be granted access to
> the continuations that are flying around under the hood in the
> implementation. However, the name "continuation" is considerably more
> general, or else anybody using CPS needs to find a new term.

No argument about the term. The full term for the Scheme thingies is
"first-class continuation", and you can produce them in other ways than
call/cc.

> In fact I would argue that the main competition to call/cc
> continuations is going to be CPS, and that CPS has a really big
> advantage: after you write your program, you can identify what types of
> continuations you use, and then change their representation to a list
> of a name and arguments which describes the continuation of that type.

CPS is very useful, but one of the applications for first-class
continuations is implementing language-level tools - exception systems,
unusual control flows like goal seeking, web server inversion of control,
etc. In all of these cases, you don't want to require the end programmer to
write their code in CPS. You raised that point:

> Of course this is all rather complicated to write by hand, but there is
> one huge advantage: the continuations are now serializable, even across
> different lisps! Provided with sufficient tools to generate code like
> this, I don't see why anyone would prefer call/cc continuations (which
> are inherently fragile) for this use case.

If you write tools to generate CPS code behind the scenes, so that the user
ends up with serializable continuations, then you've implemented a language
which offers first-class continuations. Whether it provides them through
call/cc or not isn't particularly important - call/cc just happens to be one
of the simplest and most general ways of giving access to continuations. If
such tools actually existed as a layer over Lisp or Scheme, I'm sure it
would be useful.

Those tools might still find it useful to offer something like call/cc, if
it's possible given the typing issues, so that users can do their own
control manipulations rather than relying on whatever the tools support. I
suspect it'll be tricky to eliminate the relevance of call/cc - it's a bit
like saying that given function definition syntax & variable binding syntax
etc., we could eliminate the need for lambda. It'd still be there, just
hidden, which is the way it's supposed to be anyway.

> One interesting thing to note is that continuations can be described in
> terms of the standard UNIX call fork() with shared memory. If you do
> use shared memory, fork() will simply copy the stack alone - so, to
> exploit this, when you call/cc, fork() a new process, and have it
> immediately suspend itself. When it is unsuspended, it should fork()
> itself into a running process and then re-suspend itself. When you call
> the associated continuation, unsuspend that process, and kill yourself.
> Make sure to store return values on the heap, and you're all set! Thus
> continuations are really a subset of standard UNIX multiprocessing
> semantics :-)

I presume you'd have to implement mutable variables via heap-allocated ref
cells, or something, otherwise you'd end up cloning variables that should be
shared (if I've understood the model correctly). So, since this model is
restricted, it's actually a subset of the continuation model, not the other
way around. :)

Anton


Brian Mastenbrook

unread,
Feb 23, 2004, 5:38:30 AM2/23/04
to
In article <9Qh_b.15739$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> > I don't understand what makes GENSYM such a kludge, nor why it is such
> > a low level of abstraction. GENSYM is not doing anything different than
> > CONS does - it returns something which is unique to eq? with the
> > specified contents. In CL, a GENSYM-alike can minimally be written as
> > (make-symbol "").
>
> Yes, GENSYM is just a constructor. But you don't normally have to
> "construct" your variable names. The only reason you do with defmacro, is
> because defmacro doesn't deal with hygiene. It's low-level because you're
> implementing the sort of feature that is usually handled by the language.

Wait, stop. Who said anything about GENSYM being a variable name? Not
me, for sure. GENSYM constructs a symbol. The fact that the evaluation
semantics for symbols is to treat them as variables isn't necessarily
relevant to the existance of GENSYM - it has the same use if you are
writing an interpreter or compiler for a language with hygienic macros.

I guess what you are really arguing is that Lisp exposes a level of
abstraction to represent its macros which is normally hidden in a Scheme
implementation. This much is true, but I suspect that lispers do not
respond to "LOW-LEVEL" the way you do: on the contrary, I'd prefer to
work in a language that exposes its low-level technologies to me via
appropriate reflection.

> CPS is very useful, but one of the applications for first-class
> continuations is implementing language-level tools - exception systems,
> unusual control flows like goal seeking, web server inversion of control,
> etc. In all of these cases, you don't want to require the end programmer to
> write their code in CPS. You raised that point:

[elided]

> If you write tools to generate CPS code behind the scenes, so that the user
> ends up with serializable continuations, then you've implemented a language
> which offers first-class continuations. Whether it provides them through
> call/cc or not isn't particularly important - call/cc just happens to be one
> of the most simple and general way of giving access to continuations. If
> such tools actually existed as a layer over Lisp or Scheme, I'm sure it
> would be useful.

I was not arguing about the semantics or need for a tool like call/cc
in general. The issue I was trying to raise was whether call/cc was
necessary in the host language, especially for the specific problem of
inversion of control in a web server. call/cc is a very powerful but
very brutal tool in a sense, and a CPSer can produce code with
advantages (such as serializable continuations) that the host
continuations can't offer. It's just like shifting from use of LAMBDA
to an explicit data structure, to separate out the data being closed
over from the code in question. Both of these things are very useful
when you are actually trying to write a web application which needs
live upgrading, load balancing, serializable state, et al. Similarly
I'm sure a goal-directed reasoner might want to have serializable
state, for when you're trying to reproduce your experiments :-)

> Those tools might still find it useful to offer something like call/cc, if
> it's possible given the typing issues, so that users can do their own
> control manipulations rather than relying on whatever the tools support. I
> suspect it'll be tricky to eliminate the relevance of call/cc - it's a bit
> like saying that given function definition syntax & variable binding syntax
> etc., we could eliminate the need for lambda. It'd still be there, just
> hidden, which is the way it's supposed to be anyway.

Of course, but even if call/cc is offered in full, it will still not be
a language level tool to the host language. I believe this to be an
important distinction.

> > One interesting thing to note is that continuations can be described in
> > terms of the standard UNIX call fork() with shared memory. If you do
> > use shared memory, fork() will simply copy the stack alone - so, to
> > exploit this, when you call/cc, fork() a new process, and have it
> > immediately suspend itself. When it is unsuspended, it should fork()
> > itself into a running process and then re-suspend itself. When you call
> > the associated continuation, unsuspend that process, and kill yourself.
> > Make sure to store return values on the heap, and you're all set! Thus
> > continuations are really a subset of standard UNIX multiprocessing
> > semantics :-)
>
> I presume you'd have to implement mutable variables via heap-allocated ref
> cells, or something, otherwise you'd end up cloning variables that should be
> shared (if I've understood the model correctly). So, since this model is
> restricted, it's actually a subset of the continuation model, not the other
> way around. :)

Actually, since it provides the continuation model when everything is
heap-allocated, and something slightly different when some things are
stack-allocated, doesn't that technically make continuations a subset
of this? :-)

Joe Marshall

unread,
Feb 23, 2004, 5:40:45 AM2/23/04
to
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> And here is where the next Lisp/Scheme debate is going to start up.
> When Schemers speak of continuations, they really mean implicit
> continuations - the idea that the program should be granted access to
> the continuations that are flying around under the hood in the
> implementation. However, the name "continuation" is considerably more
> general, or else anybody using CPS needs to find a new term.
>
> In fact I would argue that the main competition to call/cc
> continuations is going to be CPS, and that CPS has a really big
> advantage: after you write your program, you can identify what types of
> continuations you use, and then change their representation to a list
> of a name and arguments which describes the continuation of that type.

The problem with this approach is that this requires a global
transformation and you may not have access to all the code.

--
~jrm

Brian Mastenbrook

unread,
Feb 23, 2004, 8:43:46 AM2/23/04
to
In article <k72e9i...@comcast.net>, Joe Marshall
<prunes...@comcast.net> wrote:

> The problem with this approach is that this requires a global
> transformation and you may not have access to all the code.

Where is it? Locked away somewhere?

You only need to transform your own code; I would hope that any other
function you would be calling from a web page generator would be
simple.

Joe Marshall

unread,
Feb 23, 2004, 9:20:43 AM2/23/04
to
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> In article <k72e9i...@comcast.net>, Joe Marshall
> <prunes...@comcast.net> wrote:
>
>> The problem with this approach is that this requires a global
>> transformation and you may not have access to all the code.
>
> Where is it? Locked away somewhere?

Essentially. Are you going to CPS convert the entire system? Even
the primitives?

> You only need to transform your own code; I would hope that any other
> function you would be calling from a web page generator would be
> simple.

Like mapcar? CPS conversion is a model for
call-with-current-continuation, but it is not a practical
substitution.

Marcin 'Qrczak' Kowalczyk

unread,
Feb 23, 2004, 9:22:36 AM2/23/04
to
On Mon, 23 Feb 2004 08:43:46 -0500, Brian Mastenbrook wrote:

>> The problem with this approach is that this requires a global
>> transformation and you may not have access to all the code.
>
> Where is it? Locked away somewhere?
>
> You only need to transform your own code; I would hope that any other
> function you would be calling from a web page generator would be
> simple.

Calling higher order functions like mapcar and passing them functions
which capture or invoke continuations requires a CPS transformer to
reimplement these functions (if their source is not visible).

Some builtin functions are hard to transform. For example funcall: you
would have to determine whether its argument is a transformed function or
not, which is in general not known until runtime.

CPS transformation by code walking is necessarily incomplete because
it's not possible to transform higher order black boxes.
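
For example (a sketch in Scheme), the CPS counterpart of map must thread
a continuation through the passed function itself, so the builtin cannot
be reused as-is:

;; map rewritten in CPS: f now takes its own continuation
(define (map-cps f lst k)
  (if (null? lst)
      (k '())
      (f (car lst)
         (lambda (head)
           (map-cps f (cdr lst)
                    (lambda (tail)
                      (k (cons head tail))))))))

;; (map-cps (lambda (x k) (k (* x x))) '(1 2 3) (lambda (r) r))
;;   => (1 4 9)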

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Anton van Straaten

unread,
Feb 23, 2004, 2:58:44 PM2/23/04
to
Brian Mastenbrook wrote:
> In article <9Qh_b.15739$W74....@newsread1.news.atl.earthlink.net>,
> Anton van Straaten <an...@appsolutions.com> wrote:
>
> > > I don't understand what makes GENSYM such a kludge, nor why it
> > > is such a low level of abstraction. GENSYM is not doing anything
> > > different than CONS does - it returns something which is unique to
> > > eq? with the specified contents. In CL, a GENSYM-alike can
> > > minimally be written as (make-symbol "").
> >
> > Yes, GENSYM is just a constructor. But you don't normally have to
> > "construct" your variable names. The only reason you do with defmacro,
> > is because defmacro doesn't deal with hygiene. It's low-level because
> > you're implementing the sort of feature that is usually handled by the
> > language.
>
> Wait, stop. Who said anything about GENSYM being a variable name?
> Not me, for sure.

Me neither. The second sentence in my paragraph above is right-associative,
and beta-substitutes for the "do" in the sentence to its right: applying an
inlining transform, the third sentence would read: "The only reason you have
to "construct" your variable names with defmacro..."

> GENSYM constructs a symbol. The fact that the evaluation
> semantics for symbols is to treat them as variables isn't necessarily
> relevant to the existence of GENSYM - it has the same use if you are
> writing an interpreter or compiler for a language with hygienic macros.

Right, I wasn't objecting to GENSYM's existence, but to its use to create
variables safe for use in defmacro, and the fact that the consequences of
that can't be hidden, even if the GENSYM itself is.

> I guess what you are really arguing is that Lisp exposes a level of
> abstraction to represent its macros which is normally hidden in a Scheme
> implementation. This much is true, but I suspect that lispers do not
> respond to "LOW-LEVEL" the way you do: on the contrary, I'd prefer to
> work in a language that exposes its low-level technologies to me via
> appropriate reflection.

I have no problem with that - but it doesn't make sense to have to deal with
that in every macro that's ever written. You might use a feature like
GENSYM to implement some language/system-level construct, but what I'm
saying is that your average, ordinary macro shouldn't have to deal with that
level of abstraction - it has nothing to do with the problem domain.
Wrapping the GENSYM in higher-level functions helps, but you're still
dealing with the consequences of the issue - the need for GENSYM is just the
symptom.

One of the strengths of Lisp family languages is that code can be written at
a level closer to the problem domain. When programming in many other
languages, even some of the more modern languages with garbage collection
etc., programmers are more often forced to deal with issues that amount to
limitations or quirks of the language. Lisp, because of macros,
unrestricted procedural abstraction, and the things that have been built on
top of that (like CLOS), suffers much less from this.

However, use of DEFMACRO is an exception - the programmer is required to
deal with instantiating certain variable names before using them, much like
a C programmer has to allocate memory before storing anything. Hiding the
GENSYM use doesn't help any more than hiding the allocation in C++ (via
'new') - if you don't use the proper construct, or do the correct escaping
of references to a name, it's an error. My analogy is actually startlingly
appropriate: escaping a name using unquote is analogous to dereferencing a
pointer in C, using '*'. A design at the appropriate level of abstraction
would know when a dereference or an unquote needs to occur, and not require
the programmer to worry about it with every use.

The main difference between the way variables work in defmacro, and the way
memory allocation & access works in C, is that the consequences of getting
defmacro wrong are nowhere near as severe. But that doesn't excuse the
abstraction level violation that's taking place in every macro.

Note that I'm not saying DEFMACRO should be tossed in the trash. I don't
think any of the alternatives exceed DEFMACRO's utility & simplicity in
every dimension. But they do improve on it in some important ways, and in
the absence of a perfect macro system which achieves a perfect score in
every dimension, I think it's useful to have more than one system.

> I was not arguing about the semantics or need for a tool like call/cc
> in general. The issue I was trying to raise was whether call/cc was
> necessary in the host language, especially for the specific problem of
> inversion of control in a web server. call/cc is a very powerful but
> very brutal tool in a sense, and a CPSer can produce code with
> advantages (such as serializable continuations) that the host
> continuations can't offer. It's just like shifting from use of LAMBDA
> to an explicit data structure, to separate out the data being closed
> over from the code in question. Both of these things are very useful
> when you are actually trying to write a web application which needs
> live upgrading, load balancing, serializable state, et al.

If you deliver a language or tools that helps me do all of these things, I'd
love to use it. As others have pointed out, there are some issues to
address. In the meantime, I think call/cc is a useful tool to have. If it
later turns out to have been a step on the road to better things, there's
nothing wrong with that.

> > > Thus continuations are really a subset of standard UNIX
> > > multiprocessing semantics :-)
> >
> > I presume you'd have to implement mutable variables via heap-allocated
> > ref cells, or something, otherwise you'd end up cloning variables that
> > should be shared (if I've understood the model correctly). So, since this
> > model is restricted, it's actually a subset of the continuation model,
> > not the other way around. :)
>
> Actually, since it provides the continuation model when everything is
> heap-allocated, and something slightly different when some things are
> stack-allocated, doesn't that technically make continuations a subset
> of this? :-)

No. :oP

Anton


Pascal Costanza

unread,
Feb 23, 2004, 6:12:43 PM2/23/04
to

Jens Axel Søgaard wrote:

> Perhaps you have already read it, but Dybvig's "Writing Hygienic Macros
> in Scheme with Syntax-Case" available at
>
> <ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz>
>
> is one of the best expositions of syntax-case from an user view point.

Thanks for the link. At first glance it looks quite good. (No, I
haven't read it yet.)

Pascal Costanza

unread,
Feb 23, 2004, 7:49:07 PM2/23/04
to

Anton van Straaten wrote:

> If you look up "macro transformer" in the index, it points you to a page
> which contains the following definition (Sec. 4.3, Macros):
>
> "The set of rules that specifies how a use of a macro is transcribed into a
> more primitive expression is called the 'transformer' of the macro."
>
> I don't think it's hiding anything. Do you think otherwise?

I think Dybvig's explanations in "Writing Hygienic Macros in Scheme with
Syntax-Case" are much clearer:

"Macro transformers are procedures of one argument. The argument to a
macro transformer is a syntax object, which contains contextual
information about an expression in addition to its structure. [...]"

Generally, I need mental models of how language constructs are mapped to
the lower levels in order to understand and trust them. This is true for
almost every language feature that I use. Schemers seem to be more
mathematically inclined and prefer other perspectives to understanding
language features, and maybe I am just not in the right target audience.

> But I'll ignore my own advice and take a stab at explaining syntax-case,
> starting from a defmacro perspective.

Thanks for your explanations. They help a lot.

> Start with defmacro, and imagine that instead of quoting syntax using
> quasiquote, you use a special form, 'syntax', which instead of returning a
> list as quasiquote does, returns a syntax object, i.e. an instance of an
> abstract data type which represents a piece of syntax. This type has an API
> which supports various code walking & manipulation capabilities. It can
> also be converted to a list (or whatever value the original syntax
> represented) via 'syntax-object->datum'.

[...]


> Syntax objects
> are a richer way to represent program syntax than lists, and their uses go
> beyond just macros.

That's the first glimpse I have caught that this might be something
worthwhile to learn. Where is that API documented/specified?

> I think I'll stop there for now, since I have other things to do!

Thanks a lot for taking your time to provide these explanations. They
help a lot.

>>Examples like those given in
>>http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
>>that syntax-rules just trade one set of possible pitfalls with a
>>different set, but along that way the conceptual simplicity is lost.
>
> I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
> a different approach, which in some ways is conceptually simpler than
> defmacro, since it doesn't require the user to manually keep track of the
> different levels at which the macro operates. The pitfalls you mention may
> indeed be flaws in syntax-rules - I'm not familiar enough with them to
> comment - but I find that syntax-rules works very well for many kinds of
> macros, better than defmacro in fact.

OK, my preliminary conclusion is that it is just a different programming
style for expressing macros.

> I'm not sure what you mean about not seeing the problem. One of the
> problems mentioned in the article is that syntax-rules pattern variables
> don't shadow. I don't know if there's a justification for that, or it's
> simply a bug in the design of syntax-rules. But you usually get an error if
> you make this mistake, and it's easy to fix, and easy to avoid. It doesn't
> mean that syntax-rules is not useful, and it's still better than defmacro,
> which you can't dispute until you've learned syntax-rules. ;)

Until now I have found defmacro very easy to learn, and most of the
stuff I have read so far about Scheme's macro system(s) is very
inaccessible to me. This makes defmacro de facto more useful to me. Of
course, this might change in the future.

>>Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
>>program generators, it's clear that their language is missing something
>>important. Of course, they can write a Lisp interpreter in Python, but
>>that's beside the point.
>>
>>Do you really think that syntax-case is an equally important step forward?
>
> In some respects, yes, but that's not what I really meant. It's easy to
> look at something from the outside and find reasons not to try it, and that
> was my main point. But the points you've been picking on don't seem very
> substantial to me - it seems as though you're looking for reasons to ignore
> these systems, rather than looking for reasons you might want to learn them.

That's not so far from the truth. See below.

> Of course, a CL programmer who wants to write standard CL code, obviously
> has little incentive to be interested in other macro systems. But if your
> interests cross multiple languages, then there's value in the Scheme macro
> systems, at the very least in the sense that learning more than one approach
> to the same problem expands the horizons of your understanding of the
> problem.

I accept that.

>>>>BTW, what you really need to make something like DEFMACRO work is, on
>>>>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
>>>>string->uninterned-symbol and most probably a Lisp-2.
>>>
>>>I don't see that Lisp-2 is an issue.
>>
>>See http://citeseer.nj.nec.com/bawden88syntactic.html
>
> I'm familiar with why people claim it's an issue, but in practice I think
> it's not significantly worse than the issue of hygiene in defmacros in
> general. As I've said and defended here once before, Lisp-1 can express any
> Lisp-2 program, simply by changing any conflicting names so as not to
> conflict - a conceptually trivial transformation, with consequences which
> are primarily subjective. It would have an impact on porting Lisp-2 macros
> to Lisp-1, but it doesn't limit what you can easily express in Lisp-1.

If you are talking about an automatic transformation of names here, then
I wouldn't agree that this is a relevant argument. Programmers choose
names because they are descriptive of the nature of the conceptual
entities these names stand for. An automatic translation loses this
aspect, at least to a certain degree.

If I say (let ((list ...)) ...) in Common Lisp, I have chosen the name
"list" to say something about the variable that it denotes. If that name
gets automatically translated to some arbitrary other name it either
looses a certain amount of that descriptive quality ("lst"), or it is
amended with some irrelevant information ("list-var").

It's true that this doesn't essentially limit what you can express in
Lisp-1, but it's also true that there _is_ a fundamental difference
between functions and values, and even though the separation of these
two spaces was an accident in the history of Lisp, it still matches an
important qualitative distinction.

> Put another way, having the ability to accidentally compensate for hygiene
> violations in some cases - where multiple namespaces happen to prevent the
> problem - isn't a solution to the general problem of not having hygiene.
> Since you haven't solved the general problem, you still have to address
> questions of hygiene, in various low-level ways. A single namespace doesn't
> make this problem worse in any significant way.

I disagree. Names for functions and values cannot accidentally clash in
a Lisp-2, and this is an important category of potential clashes in a
Lisp-1, even outside the domain of macro programming.

>>Here it helps that a Lisp-2 separates variables and functions by
>>default. Variables are usually not important parts of an application's
>>ontology. If they are, the convention in Common Lisp is to use proper
>>naming schemes, like asterisks for special variables. Effectively, this
>>creates a new namespace.
>
> Mmm, asterisks. This, to me, is why the whole Lisp-1/2 debate is moot. The
> solution is simply Lisp-N, where you can define namespaces, modules, etc.
> and control how they're used. See PLT Scheme etc.

Unfortunately, there don't seem to be conventions to separate names
for functions and values that Schemers adhere to. This is an important
aspect of Common Lisp's naming convention for special variables.

The point is that the number of cases in which hygiene is a real issue
is considerably reduced by the combination of Lisp-2-ness and naming
conventions in Common Lisp, so that the remaining cases aren't pressing
anymore. For macros, you can just use a set of idioms that you can
easily memorize and you're done with it.

>>4. The fourth example can be solved with a proper GENSYM for "use" in
>>the "contorted" macro.
>
> The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
> low level of abstraction. Why don't you use GENSYM when declaring normal
> lexical variables in a procedure? Rhetorical question, of course

[...]

No, not really. Some time ago, I started some thought experiments on
how one could design a Lisp dialect that works like that, and I think
this has the potential to add some interesting features.

>>Some while ago, I wanted to experiment with continuations in Scheme.
>>Apart from the fact that not all Schemes seem to implement continuations
>>fully and/or correctly (see
>>http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
>>respective documentations make me feel uneasy about whether I have to
>>relearn programming techniques for totally unrelated areas is a clear
>>downside IMHO.
>
> We're straying far afield here. ;) But I'll give my opinion about
> continuations, too.

Maybe my posting was too ambiguous, but I really didn't want to
talk about continuations here. I understand them (because I have a
mental model how they are mapped to parameter passing and procedure
invocation mechanisms!), and I have a fair understanding of what they
can be used for.

However, the point is that in order to experiment with continuations, I
have to put up with all the other design decisions of Scheme. For most
of Scheme's features that's ok, but wrt macros this can be annoying.

I have made a few attempts to get an understanding of Scheme macros in
the past, and I have always found them too hard to understand, especially
in comparison to the only minor improvements they seemed to make. The
optimal thing would have been a Common Lisp implementation with call/cc
built in, but such a beast doesn't seem to exist.

So yes: If you want to use Scheme for something that's not really
related to macros, and you don't _want_ to learn syntax-rules or
syntax-case for some reason, and you are not sure whether gensym
actually works, because most Schemers tell you that this evil anyway,
then this can be annoying. I think that this is actually a disservice to
Scheme and Lisp in general.

I have no problems to agree with you that defmacro, syntax-rules and
syntax-case are just different ways to implement macros, maybe with
their own respective strengths in the latter two cases that I cannot
judge at the moment. However, it is a fact that defmacro is generally
described in the Scheme community as fundamentally problematic, and this
is clearly wrong. (See for example the first paragraph of "Syntactic
Abstractions in Scheme" by Hieb, Dybvig, Bruggeman.)

>>I don't mind using DEFMACRO for simple things. I don't find them hard to
>>write or read, and I don't know why they would be more error-prone.
>>Sounds similar to some of the claims made by advocates of static type
>>systems. Maybe this boils down to just a matter of taste.
>
> Maybe - let's talk once you've tried syntax-rules. But you gave a clue to
> your reading & writing process for DEFMACRO when you said that when reading
> a syntax-rules macro, you were immediately worrying about which level the
> various tokens were at. You've learned to look for, and expect something
> that, with syntax-rules, you can simply forget about. You don't do these
> things when writing ordinary functions

Yes, I do.

> - why do you put up with it when
> writing macros? What would you think of Lisp if you had to use gensym to
> initialize every variable you use? You've simply become very used to a
> low-level technique, so that you don't believe there's any need for a higher
> level technique.

Right. At least, this was right until I heard from you that syntax
objects can be used for things that go beyond macros.

Rahul Jain

unread,
Feb 23, 2004, 9:07:22 PM2/23/04
to
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> Wait, stop. Who said anything about GENSYM being a variable name? Not
> me, for sure. GENSYM constructs a symbol. The fact that the evaluation
> semantics for symbols is to treat them as variables isn't necessarily
> relevant to the existance of GENSYM - it has the same use if you are
> writing an interpreter or compiler for a language with hygienic macros.

I think this is a key issue. Earlier, it was claimed that a Lisp-1 can
emulate a Lisp-2 by just renaming variables. This doesn't do anything
for the case where the plist of a symbol is used by the macro-function
for that symbol. That macro-function can then be attached to any symbols
and the plist can be used to customize its behavior. Given a MOP, I'd
rather implement this using funcallable-instances, but I don't see how
having a Lisp-1 gives you those for free. :)

--
Rahul Jain
rj...@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist

Rahul Jain

unread,
Feb 23, 2004, 9:13:20 PM2/23/04
to
"Anton van Straaten" <an...@appsolutions.com> writes:

> I have no problem with that - but it doesn't make sense to have to deal with
> that in every macro that's ever written. You might use a feature like
> GENSYM to implement some language/system-level construct, but what I'm
> saying is that your average, ordinary macro shouldn't have to deal with that
> level of abstraction - it has nothing to do with the problem domain.
> Wrapping the GENSYM in higher-level functions helps, but you're still
> dealing with the consequences of the issue - the need for GENSYM is just the
> symptom.

First you say you have no problem with using GENSYM under the covers.
Then you say that having abstractions that use GENSYM under the covers
is a problem. I don't get it. FWIW, there is a quasi-standard REBINDING
(a.k.a. WITH-GENSYMS) macro that provides just such an abstraction. I
should probably use it, but I'm too lazy to. :)
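
For concreteness, a minimal sketch of such a helper - the quasi-standard
versions differ in details such as gensym prefixes, so this one is only
illustrative:

(defmacro with-gensyms ((&rest names) &body body)
  ;; Bind each NAME to a fresh uninterned symbol for use in a template.
  `(let ,(loop for name in names
               collect `(,name (gensym ,(string name))))
     ,@body))

;; Typical use inside a macro definition:
(defmacro squared (form)
  (with-gensyms (val)
    `(let ((,val ,form))
       (* ,val ,val))))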

> However, use of DEFMACRO is an exception - the programmer is required to
> deal with instantiating certain variable names before using them, much like
> a C programmer has to allocate memory before storing anything.

The same gripe could be given about MOP. One has to instantiate the
slot-definition before using it. It's a question of layered protocols.
Use the layer that fits the problem. This reminds me: you still haven't
shown an implementation of LOOP in syntax-rules (or maybe I haven't
gotten that far in the thread).

Anton van Straaten

unread,
Feb 23, 2004, 11:15:06 PM2/23/04
to
Rahul Jain wrote:
> "Anton van Straaten" <an...@appsolutions.com> writes:
>
> > I have no problem with that - but it doesn't make sense to have to deal with
> > that in every macro that's ever written. You might use a feature like
> > GENSYM to implement some language/system-level construct, but what I'm
> > saying is that your average, ordinary macro shouldn't have to deal with that
> > level of abstraction - it has nothing to do with the problem domain.
> > Wrapping the GENSYM in higher-level functions helps, but you're still
> > dealing with the consequences of the issue - the need for GENSYM is just the
> > symptom.
>
> First you say you have no problem with using GENSYM under the covers.
> Then you say that having abstractions that use GENSYM under the covers
> is a problem. I don't get it.

Sorry for not being clear. I'm saying that in the case of DEFMACRO, the
abstraction necessarily leaks, which reduces its quality as an abstraction.
As I said, the need for GENSYM is just a symptom of the hygiene issue.

> FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
> macro that provides just such an abstraction.

Even if you use that, you still have to unquote the references to the
variables it declares. If you forget to unquote them, it's a bug - not only
that, but it's one that isn't necessarily detected by the compiler, and can
result in erroneous variable capture. Now, I'm not saying that's the end of
the world, but you have to admit that in other contexts - such as when
dealing with ordinary variables - we don't accept that sort of thing. Why
the exception for DEFMACRO?
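
A concrete sketch of the failure mode, assuming the WITH-GENSYMS helper
sketched earlier:

(defmacro my-or (a b)
  (with-gensyms (tmp)
    `(let ((,tmp ,a))        ; correct: ,TMP names the gensym
       (if ,tmp ,tmp ,b))))

;; Forget the commas before TMP and the compiler still accepts it,
;; but TMP is now the literal symbol:
(defmacro my-or-buggy (a b)
  (with-gensyms (tmp)
    `(let ((tmp ,a))
       (if tmp tmp ,b))))

;; (let ((tmp 3)) (my-or nil tmp))        => 3
;; (let ((tmp 3)) (my-or-buggy nil tmp))  => NIL: the user's TMP was
;;                                           silently captured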

> > However, use of DEFMACRO is an exception - the programmer is required to
> > deal with instantiating certain variable names before using them, much
like
> > a C programmer has to allocate memory before storing anything.
>
> The same gripe could be given about MOP. One has to instantiate the
> slot-definition before using it. It's a question of layered protocols.
> Use the layer that fits the problem.

Yes, that's what I'm saying. With DEFMACRO, there is no higher layer, but
there should be. That's the exact issue that syntax-rules and syntax-case
address.

> This reminds me: you still haven't shown an implementation of LOOP
> in syntax-rules (or maybe I haven't gotten that far in the thread).

You'd have to pay me to implement LOOP, in any language. It could be
implemented straightforwardly enough with syntax-case. It may very well be
difficult to implement in syntax-rules, but that's not particularly
relevant - I've repeatedly said that I don't consider syntax-rules a
complete replacement for DEFMACRO. However, syntax-rules does show how a
higher layer than DEFMACRO can work well, so there are lessons to be learned
from it. At the very least, it offers an alternative way of thinking about
macros, to help combat the Sapir-Whorf hypothesis as it applies to CL
macros.

Anton


Rahul Jain

unread,
Feb 24, 2004, 12:04:17 AM2/24/04
to
"Anton van Straaten" <an...@appsolutions.com> writes:

> Sorry for not being clear. I'm saying that in the case of DEFMACRO, the
> abstraction necessarily leaks, which reduces its quality as an abstraction.

No. DEFMACRO is an abstraction over syntax. It is not an abstraction
over semantics. Compare this to CAR/CDR/CDAR vs.
FIRST/SECOND/THIRD/REST. Note that CDAR has no equivalent among the list operators.
Macros are like cons cells. Syntax-rules are (I guess) like lists. Would
you claim that the existence of CDAR reduces the quality of cons cells
as an abstraction?

> As I said, the need for GENSYM is just a symptom, of the hygiene issue.

Like lack of side-effects, hygiene is NOT a goal. It's just a property
of _certain_ macros.

>> FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
>> macro that provides just such an abstraction.
>
> Even if you use that, you still have to unquote the references to the
> variables it declares. If you forget to unquote them, it's a bug - not only
> that, but it's one that isn't necessarily detected by the compiler, and can
> result in erroneous variable capture. Now, I'm not saying that's the end of
> the world, but you have to admit that in other contexts - such as when
> dealing with ordinary variables - we don't accept that sort of thing. Why
> the exception for DEFMACRO?

(let ((x 1)
      (y 2))
  (let ((x 2)) ;; OOPS. this was supposed to be z!
    ...))

Lisp compilers accept this sort of thing, and so do scheme compilers, afaik.

> Yes, that's what I'm saying. With DEFMACRO, there is no higher layer, but
> there should be. That's the exact issue that syntax-rules and syntax-case
> address.

OK, but that says nothing about DEFMACRO itself being considered
harmful. In other posts, you have described the way that syntax-rules
adds information to the expansion, such as what is used as a binding
and what isn't (in order to help identify what should be
captured/shadowed, and what should be gensymed, I assume). I'm not sure
if you could then use a DEFMACRO-defined macro in the definition of a
syntax-rules-defined macro. If that's the case, that _severely_ limits
the usefulness of syntax-rules.

Jacek Generowicz

unread,
Feb 24, 2004, 3:19:20 AM2/24/04
to
Pascal Costanza <cost...@web.de> writes:

> Anton van Straaten wrote:
>
> > I think I'll stop there for now, since I have other things to do!
>
> Thanks a lot for taking your time to provide these explanations. They
> help a lot.

Seconded.

Pascal Costanza

unread,
Feb 24, 2004, 4:34:20 AM2/24/04
to

Anton van Straaten wrote:

>>FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
>>macro that provides just such an abstraction.
>
> Even if you use that, you still have to unquote the references to the
> variables it declares. If you forget to unquote them, it's a bug - not only
> that, but it's one that isn't necessarily detected by the compiler, and can
> result in erroneous variable capture. Now, I'm not saying that's the end of
> the world, but you have to admit that in other contexts - such as when
> dealing with ordinary variables - we don't accept that sort of thing. Why
> the exception for DEFMACRO?

Just reword that section, and IMHO one can see that this is just a
question of reasonable defaults:

"Even if you use syntax-case, you still have to take special action to
break hygiene. If you forget to do that, it's a bug - not only that,
but it's one that isn't necessarily detected by the compiler, and can
result in erroneous absence of variable capture."

I understand that syntax-rules and syntax-case are a different style for
expressing macros, but I don't accept the hygiene part. The purpose of
an abstraction is to suppress irrelevant elements. Name capture is an
important and useful concept that you have to know about anyway, so it's
not irrelevant.

>>This reminds me: you still haven't shown an implementation of LOOP
>>in syntax-rules (or maybe I haven't gotten that far in the thread).

I don't think anyone has claimed that syntax-rules is appropriate for
expressing LOOP. If DEFMACRO's only purpose was to be able to express
LOOP then we wouldn't need it, because LOOP is part of the standard.

Pascal Costanza

unread,
Feb 24, 2004, 5:32:09 PM2/24/04
to

Pascal Costanza wrote:

> I have no problem agreeing with you that defmacro, syntax-rules and
> syntax-case are just different ways to implement macros, maybe with
> their own respective strengths in the latter two cases that I cannot
> judge at the moment.

Here is the result of my first experiment with syntax-rules...

(define-syntax special-let
  (syntax-rules ()
    ((_ () form ...)
     (let () form ...))
    ((_ ((var binding) more ...) form ...)
     (let ((var binding))
       (special-let (more ...) form ...)))
    ((_ (var more ...) form ...)
     (let ((var '())) ; quoted: a bare () is not a valid expression
       (special-let (more ...) form ...)))))

...and for comparison purposes, here is how I would implement it in
Common Lisp:

(defmacro special-let (bindings &body body)
  (reduce (lambda (binding body)
            (cond ((symbolp binding)
                   `(let ((,binding nil))
                      ,body))
                  (t `(let (,binding)
                        ,body))))
          bindings
          :from-end t :initial-value `(progn ,@body)))
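
For comparison, both definitions aim at the same nested expansion; the
binding list is invented for illustration (the Scheme version binds '()
where the CL version binds NIL, and ends in (let () ...) rather than
(progn ...), which makes no difference):

;; (special-let (a (b 1) c) body)
;; ==>
;; (let ((a nil))
;;   (let ((b 1))
;;     (let ((c nil))
;;       (progn body))))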


I am not sure what to think of this. Comments?

Jens Axel Søgaard

unread,
Feb 24, 2004, 8:24:07 PM2/24/04
to

I am not sure. What happens in the CL macro
if the body contains a macro that expands to
one of the variables being bound in the special
let?

Example:


(define a 0)

(define-syntax foo
  (syntax-rules ()
    ((_)
     (+ a 100))))

(special-let ((a 42))
  (foo))


Evaluates to 100.

But

(defparameter a 0)

(defmacro foo ()
  '(+ a 100))

(special-let ((a 42))
  (foo))

evaluates to 142.


I tried (defvar a 0) and (defconstant a 0)
instead of (defparameter a 0). The first
also evaluates to 142, the second gives an
error.


In order to give the defmacro the same behaviour
as the syntax-rules one, you need to rename the user
given names in bindings using gensym. The tricky part
is then the renaming in the body.


However, the possibility also exists that I messed up
in the translation of the example from Scheme to CL.
In that case, forget the paragraph above.
(But please tell me what the correct translation
would be.)


--
Jens Axel Søgaard

dw

unread,
Feb 25, 2004, 1:07:09 AM2/25/04
to
"Anton van Straaten" <an...@appsolutions.com> wrote in message news:<mza_b.14958$W74...@newsread1.news.atl.earthlink.net>...

>
> (define-syntax and2
>   (lambda (x)
>     (syntax-case x ()
>       ((_ x y)
>        (syntax (if x y #f))))))
>
> The (lambda (x) ...) binds a syntax object to x, representing the syntax of
> the expression which invoked the macro. In theory, you can do whatever you
> want with that syntax object. Most commonly, you'll use syntax-case to do a
> pattern match on it, which is what the above example does with the
> expression (syntax-case x () ...). The () is for literals, like 'else' in
> cond.
>
> Within the above syntax-case expression, there's a single pattern and a
> corresponding template:
>
> ((_ x y)
>  (syntax (if x y #f))))))
>
> The underscore in (_ x y) represents the name of the macro - you could also
> write (and2 x y), it doesn't matter. This pattern will match any
> invocations of AND2 with two parameters. After the pattern, is the
> expression which will be executed when that pattern is matched. In this
> case, it's simply a syntax expression which returns the expression (if x y
> #f). The return value from syntax-case must be a syntax object, which
> represents the actual syntax of the final program.

From what I understood, it looks like Scheme macros are a bit like C++
templates; that is, instead of directly manipulating lists, they use
pattern matching. Both are Turing-complete systems.

If you are familiar with templates, can you compare syntax-case and
syntax-rules to them?

If you were designing a new computer language "for the masses" what kind of
macros would it have?

Pascal Costanza

unread,
Feb 25, 2004, 8:08:24 AM2/25/04
to

Jens Axel Søgaard wrote:

If Common Lispniks write such macros, they explicitly want to capture
the variable A defined in the surrounding code. For example, this could
be part of the specification of the macro FOO: "captures variable A and
adds 100"

If you don't want that, you have to write the following:

(defmacro foo ()
  (let ((a (gensym)))
    `(+ ,a 100)))

...or better:

(defmacro foo ()
  (with-unique-names (a)
    `(+ ,a 100)))

WITH-UNIQUE-NAMES is not defined in ANSI Common Lisp, but is easy to
implement and provided with some CL implementations. Sometimes, it is
called WITH-GENSYMS.

It's possible to capture names with syntax-case via some explicit
manipulations of syntax objects. But I don't understand the details yet.
Contrary to what _some_ literature about hygienic macros suggests, name
capture is a very useful concept.
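
For the record, here is roughly what that looks like - the standard
anaphoric-if example for syntax-case (the procedure is spelled
datum->syntax-object in some older systems; AIF and IT are conventional
names, not part of any standard):

(define-syntax aif
  (lambda (x)
    (syntax-case x ()
      ((k test then)
       ;; Manufacture an identifier IT that behaves as if the *user*
       ;; had written it at the macro call site - deliberate capture.
       (with-syntax ((it (datum->syntax (syntax k) 'it)))
         (syntax (let ((it test))
                   (if it then #f))))))))

;; (aif (assv 1 '((1 . one))) (cdr it))  =>  one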

Pascal Costanza

unread,
Feb 25, 2004, 8:25:54 AM2/25/04
to

Pascal Costanza wrote:

> If you don't want that, you have to write the following:
>
> (defmacro foo ()
>   (let ((a (gensym)))
>     `(+ ,a 100)))
>
> ...or better:
>
> (defmacro foo ()
>   (with-unique-names (a)
>     `(+ ,a 100)))

...nonsense, these macros would produce errors because they would
attempt to add 100 to an unbound variable.

The question is what you want to achieve with such a macro. If you want
to capture a name, then capture it. If you want to make sure that you
don't capture an arbitrary A, give it a more meaningful name or put it
in a package created for this purpose.

(in-package "A-CAPTURER")

(defvar a 0) ;; this should rather be *a*, but for the sake of the example...

(defmacro foo ()
  '(+ a 100))


(in-package "OTHER-PACKAGE")

(special-let ((a 42))
  (foo))

Joe Marshall

unread,
Feb 25, 2004, 9:05:07 AM2/25/04
to
Pascal Costanza <cost...@web.de> writes:

> Contrary to what _some_ literature about hygienic macros
> suggests, name capture is a very useful concept.

Name capture *is* quite useful. Inadvertent name capture is much less
desirable.
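
The classic example of the useful, deliberate kind is an anaphoric
macro; AIF is a name from the folklore, not part of the standard:

(defmacro aif (test then &optional else)
  `(let ((it ,test))       ; IT is captured on purpose
     (if it ,then ,else)))

;; (aif (assoc :b '((:a . 1) (:b . 2)))
;;      (cdr it))
;; => 2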

Mario S. Mommer

unread,
Feb 25, 2004, 9:23:24 AM2/25/04
to

I agree, but I have /never/ found a bug caused by this. Has it ever
happened to you? In CL, that is.


james anderson

unread,
Feb 25, 2004, 10:57:36 AM2/25/04
to

Pascal Costanza wrote:
>
> ...


>
> WITH-UNIQUE-NAMES is not defined in ANSI Common Lisp, but is easy to
> implement and provided with some CL implementations. Sometimes, it is
> called WITH-GENSYMS.
>

there have been several references to the WITH-UNIQUE-NAMES / WITH-GENSYMS
operators lately. all with the intended application, to establish unique names
for use in a constructed macro expansion. is there any reason not to formulate
the operation as a compile-time rather than run-time operation? as in

(defmacro foo (x)
  (gensym-macrolet (a)
    `(+ a ,x)))

where gensym-macrolet itself is a macro?

...

Jeremy Yallop

unread,
Feb 25, 2004, 11:41:39 AM2/25/04
to
Jens Axel Søgaard wrote:
> Pascal Costanza wrote:

Exactly the same as happens in the Scheme macro.

> (define a 0)
>
> (define-syntax foo
>   (syntax-rules ()
>     ((_)
>      (+ a 100))))
>
> (special-let ((a 42))
>   (foo))

(define-macro (foo)
  '(+ a 3))

(special-let ((a 42))
  (foo))

=> 45

> (defmacro foo ()
> '(+ a 100))

Your "foo" is the "unhygienic" macro that causes the name capture here;
Pascal's defmacro and define-syntax versions of special-let behave
equivalently in this respect. The "foo" macro is really a red herring
here.

> In order to give the defmacro the same behaviour
> as the syntax-rules one, you need to rename the user
> given names in bindings using gensym.

I don't think that would work very well.

Jeremy.

Jens Axel Søgaard

unread,
Feb 25, 2004, 11:44:11 AM2/25/04
to

Ok. I'll explain the Scheme example in a little more depth, and
refrain from any attempt to write it in CL.

The variable A mentioned in the expansion of FOO has lexical scope
as seen from the definition of FOO.

Thus

(define a 0)

(define-syntax foo
  (syntax-rules ()
    ((_)
     (+ a 100))))

(display (special-let ((a 42)) (foo)))
(newline)

(set! a 10)

(display (special-let ((a 42)) (foo)))
(newline)

displays

100
110

> If you don't want that, you have to write the following:
>
> (defmacro foo ()
>   (let ((a (gensym)))
>     `(+ ,a 100)))
>
> ...or better:
>
> (defmacro foo ()
>   (with-unique-names (a)
>     `(+ ,a 100)))

But then a new variable is introduced. I want variables referred
to in macros (not bound by code introduced by the macro) to refer to
the variables in the lexical scope of the definition of the macro,
as opposed to the places of use of the macro.

As far as I can tell, your definitions above don't give the same
behaviour as the elaborated Scheme example above.

The question is: Is it possible to write a defmacro without making an
alpha conversion of the code in the body of special-let?

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Feb 25, 2004, 12:10:13 PM2/25/04
to

But now you are defining a new macro. The point of syntax-rules
macros is that they make it easy to ensure that you don't
inadvertently capture a variable. In other words, a variable name
in a macro definition should refer to the variable in the lexical
scope of the macro definition (not the macro use).


The question was not how to write FOO such that the variable
is captured. The question is:

How do I write the macro FOO in CL

(define-syntax foo
  (syntax-rules ()
    ((_) (+ a 100))))

such that Pascal's definition of special-let doesn't
capture the variable?

If that's not possible, then the conclusion is that
the two macros Pascal wanted us to compare do not
have the same semantics (which, by the way, I assume
was the intention).

The next question is then: What needs to be done to
the define-macro definition of special-let to get
the same semantics? One way is this:

>>In order to give the defmacro the same behaviour
>>as the syntax-rules one, you need to rename the user
>>given names in bindings using gensym.

[Here I was talking about the definition of special-let]

> I don't think that would work very well.

I do. The algorithm behind syntax-rules performs the
renaming for you. In the defmacro case, you have to
do it yourself.
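
A sketch of what the renaming amounts to; the fresh name A~1 is
invented for illustration:

;; (special-let ((a 42)) (foo))
;; ==> (let ((a~1 42)) (foo))
;;
;; The binding A written by the *user* gets a fresh name (as do any
;; references to it the user wrote in the body), while the free A inside
;; FOO's expansion keeps pointing at FOO's definition site.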

--
Jens Axel Søgaard

Marco Baringer

unread,
Feb 25, 2004, 12:28:32 PM2/25/04
to
Jens Axel Søgaard <use...@jasoegaard.dk> writes:

> I do. The algorithm behind syntax-rules performs the
> renaming for you. In the defmacro case, you have to
> do it yourself.

doesn't syntax-rules have to do more than just renaming? Since, iirc,
scheme macros also capture their lexical context, with something like
this:

(let ((a 5))
  (define-syntax foo
    (syntax-rules ()
      ((_) (+ a 100)))))

There is no way, in portable Common Lisp, to write a define-syntax
with the same semantics as the above foo macro.

--
-Marco
Ring the bells that still can ring.
Forget your perfect offering.
There is a crack in everything.
That's how the light gets in.
-Leonard Cohen

Jeremy Yallop

unread,
Feb 25, 2004, 12:32:45 PM2/25/04
to

Yes, but Pascal's special-let is entirely irrelevant to this: it
doesn't cause any inadvertent name capture. Your CL "foo" macro does
cause "inadvertent" name capture, so if you don't want that to happen,
don't write the macro like that. Your examples behave exactly the
same in (builtin) let as in special-let, which shows that special-let
is not at fault here.

> In other words, a variable name in a macro definition should refer
> to the variable in the lexical scope of the macro definition (not
> the macro use).
>
> The question was not how to write FOO such that the variable
> is captured.

The claim was that the defmacro version of special-let captures the
variable. I have shown that this is not the case: your "foo" macro
captures the variable, and does so in both the defmacro and the
syntax-rules versions of special-let.

> The qustion is:
>
> How do I write the macro FOO in CL
>
> (define-syntax foo
>   (syntax-rules ()
>     ((_) (+ a 100))))
>
> such that Pascal's definition of special-let doesn't
> capture the variable?

Which variable? The problem is that there is no top-level lexical
environment to refer to in CL. CL "let" bindings are dynamic for
variables created with defparameter, so the top-level value isn't
active within the body of the "let".

> If that's not possible, then the conclusion is that
> the two macros Pascal wanted us to compare does not
> have the same semantics (which by the way, I assume
> was the intention).

Well, perhaps. The different semantics, though, have nothing to do
with name capture.

> The next question is then: What needs to be done to
> the define-macro definition of special-let to get
> the same semantics? One way is this:
>
>>>In order to give the defmacro the same behaviour
>>>as the syntax-rules one, you need to rename the user
>>>given names in bindings using gensym.
>
> [Here I was talking about the definition of special-let]
>
>> I don't think that would work very well.
>
> I do. The algorithm behind syntax-rules performs the
> renaming for you. In the defmacro case, you have to
> do it yourself.

I think this entirely misses the point. There is no name capture in
special-let; bindings are established for variable names that the user
supplies and these bindings are active within the body of the macro
invocation. No renaming is necessary (or possible).

Jeremy.

Christopher C. Stacy

unread,
Feb 25, 2004, 1:44:13 PM2/25/04
to
>>>>> On Wed, 25 Feb 2004 09:05:07 -0500, Joe Marshall ("Joe") writes:

Joe> Pascal Costanza <cost...@web.de> writes:
>> Contrary to what _some_ literature about hygienic macros
>> suggests, name capture is a very useful concept.

Joe> Name capture *is* quite useful.

Usually in MACROLET.

Jens Axel Søgaard

unread,
Feb 25, 2004, 2:26:48 PM2/25/04
to

Forget my CL version of the FOO macro. That was just my (bad)
attempt to translate it.

>>The question is:
>>
>> How do I write the macro FOO in CL
>>
>> (define-syntax foo
>>   (syntax-rules ()
>>     ((_) (+ a 100))))
>>
>> such that Pascal's definition of special-let doesn't
>> capture the variable?
>
>
> Which variable?

The variable A.

> The problem is that there is no top-level lexical
> environment to refer to in CL.

> CL "let" bindings are dynamic for
> variables created with defparameter, so the top-level value isn't
> active within the body of the "let".

Ah! Thanks for pointing that out. No harm done. Whether A is in the
top level or not is fortunately not important to the example.
Here it is with a local variable.

(let ((a 0))
  (let-syntax ((foo (syntax-rules ()
                      ((_)
                       (+ a 100)))))

    (display (special-let ((a 42)) (foo)))
    (newline)

    (set! a 10)

    (display (special-let ((a 42)) (foo)))
    (newline)))

Prints:

100
110


>>If that's not possible, then the conclusion is that
>>the two macros Pascal wanted us to compare does not
>>have the same semantics (which by the way, I assume
>>was the intention).

> Well, perhaps. The different semantics, though, have nothing
> to do with name capture.

What are you thinking about?

>>I do. The algorithm behind syntax-rules performs the
>>renaming for you. In the defmacro case, you have to
>>do it yourself.
>
>
> I think this entirely misses the point. There is no name capture in
> special-let; bindings are established for variable names that the user
> supplies and these bindings are active within the body of the macro
> invocation. No renaming is necessary (or possible).

If one never uses macros whose expansions refer to variables in the
lexical scope of the definition of the macro, then there is no difference.

How do I in a defmacro refer to a variable in the lexical scope of
the definition of the variable?

--
Jens Axel Søgaard

Ray Dillinger

unread,
Feb 25, 2004, 5:51:07 PM2/25/04
to
Rahul Jain wrote:

> I think this is a key issue. Earlier, it was claimed that a Lisp-1 can
> emulate a Lisp-2 by just renaming variables. This doesn't do anything
> for the case where the plist of a symbol is used by the macro-function
> for that symbol. That macro-function can then be attached to any symbols
> and the plist can be used to customize its behavior. Given a MOP, I'd
> rather implement this using funcallable-instances, but I don't see how
> having a Lisp-1 gives you those for free. :)

It's worth a note that scheme - the only currently popular Lisp-1 -
doesn't require plists and most scheme implementations don't have them.
While in common lisp it's common to have a whole bunch of values stored
in a symbol under different names in the symbol's property list, this is
not possible in most schemes. In scheme, a symbol is merely a value, and
its only distinguishing feature is the spelling of its name.

Arguably, use of a plist means we're not really talking about a lisp-2
anymore; at that point we're talking about a lisp-n, with n unbounded.
And while it is strictly true that straightforward lexical
transformations can turn any lisp-N code into lisp-1 code where N is
known, the argument breaks down on plists because N is unbounded.

Bear

Kaz Kylheku

unread,
Feb 25, 2004, 8:07:10 PM2/25/04
to
james anderson <james.a...@setf.de> wrote in message news:<403CC5E5...@setf.de>...

> there have been several references to the WITH-UNIQUE-NAMES / WITH-GENSYMS
> operators lately. all with the intended application, to establish unique names
> for use in a constructed macro expansion. is there any reason not to formulate
> the operation as a compile-time rather than run-time operation? as in

WITH-GENSYMS and similar things work at compile time, or whenever
macro-expansion takes place.

> > (defmacro foo (x)
> >   (gensym-macrolet (a)
> >     `(+ a ,x)))
>
> where gensym-macrolet itself is a macro?

The problem is that GENSYM-MACROLET has to implement a code-walker
which scans the unevaluated backquote template and replaces every
occurrence of A by a gensym.

That's a lot of work just for the sake of avoiding a blister on your
middle finger when you type the extra commas:

(with-gensyms (a)
  `(+ ,a ,x))

You already have a backquote expander working for you, capable of
looking for markers in a template and substituting values, so the
obvious thing is to use it.

Common Lisp has packages which you can use to achieve hygiene, so none
of these tricks are necessary. If you put your macro in a package, all
of the symbols in the macro body will be interned in your package (or
else be symbols that your package has imported: you have control over
that).

(in-package :my-macro)

(defmacro foo (x)
  `(+ a ,x))

The symbols here are MY-MACRO::FOO, MY-MACRO::X and MY-MACRO::A. No
WITH-GENSYMS needed, no code walker, no Scheme features, nothing.
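
A quick check of what that buys you, sticking with these package names:

(in-package :cl-user)

(macroexpand-1 '(my-macro::foo 2))
;; => (+ MY-MACRO::A 2)
;; The A in the expansion is MY-MACRO::A; a user's (let ((a ...)) ...)
;; binds CL-USER::A, a different symbol, and so cannot capture it.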

Pascal Costanza

unread,
Feb 25, 2004, 8:32:18 PM2/25/04
to

Jens Axel Søgaard wrote:

> How do I in a defmacro refer to a variable in the lexical scope of
> the definition of the variable?

Here is an attempt to do that.

(let ((a 42))
  (defmacro foo ()
    `(symbol-macrolet ((a (funcall ',(lambda () a))))
       (+ a 100))))

...but it seems hard to me to generalize this. On the other hand, it
also seems unusual to me. I guess that one would either use special
variables or factor things out in their own packages for such a purpose.

However, I would be interested in other opinions.

Pascal Costanza

unread,
Feb 25, 2004, 8:35:10 PM2/25/04
to

james anderson wrote:

This is the first time I hear about gensym-macrolet. What is this?

james anderson

unread,
Feb 26, 2004, 1:39:51 AM2/26/04
to

Kaz Kylheku wrote:
>
> james anderson <james.a...@setf.de> wrote in message news:<403CC5E5...@setf.de>...
> > there have been several references to the WITH-UNIQUE-NAMES / WITH-GENSYMS
> > operators lately. all with the intended application, to establish unique names
> > for use in a constructed macro expansion. is there any reason not to formulate
> > the operation as a compile-time rather than run-time operation? as in
>
> WITH-GENSYMS and similar things work at compile time, or whenever
> macro-expansion takes place.

which is the macro definition's run-time. the macro itself also has a compile-time.

>
> > > (defmacro foo (x)
> > >   (gensym-macrolet (a)
> > >     `(+ a ,x)))
> >
> > where gensym-macrolet itself is a macro?
>
> The problem is that GENSYM-MACROLET has to implement a code-walker
> which scans the unevaluated backquote template and replaces every
> occurence of A by a gensym.

as the intended behaviour would not be entirely analogous to symbol-macrolet -
neither in the operator/value namespace distinction nor in the observance of
lexical constraints, one conceivable implementation is as a naive sublis.

>
> That's a lot of work just for the sake of avoiding a blister on your
> middle finger when you type the extra commas:
>

> ...


>
> (in-package :my-macro)
>
> (defmacro foo (x)
>   `(+ a ,x))
>
> The symbols here are MY-MACRO::FOO, MY-MACRO::X and MY-MACRO::A. No
> WITH-GENSYMS needed, no code walker, no Scheme features, nothing.

all well and true. i'm just wondering.

...

Björn Lindberg

unread,
Feb 26, 2004, 7:04:03 AM2/26/04
to
"Anton van Straaten" <an...@appsolutions.com> writes:

> > FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
> > macro that provides just such an abstraction.
>
> Even if you use that, you still have to unquote the references to the
> variables it declares. If you forget to unquote them, it's a bug - not only
> that, but it's one that isn't necessarily detected by the compiler, and can
> result in erroneous variable capture. Now, I'm not saying that's the end of
> the world, but you have to admit that in other contexts - such as when
> dealing with ordinary variables - we don't accept that sort of thing. Why
> the exception for DEFMACRO?

You don't *have* to unquote. If you want the macro invocation (foo 7)
to expand into (bar a 7), you can either write it as `(bar a ,x) or
(list 'bar 'a x). The unquoting in the first case is just a
convenience. But if you want to be able to refer to both variables
local to the macro as well as variables at the place of macro
expansion, I don't see how you can avoid having some kind of
quoting/unquoting to distinguish them. How is this distinction made in
the scheme macro systems?
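
A sketch of the answer: in syntax-rules the pattern language itself
makes the distinction, with no quoting marks in either direction:

(define-syntax my-and2
  (syntax-rules ()
    ((_ x y) (if x y #f))))

;; X and Y are pattern variables, so they are substituted automatically;
;; IF and #F are not pattern variables, so they refer to whatever they
;; mean where MY-AND2 was defined.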


Björn

Björn Lindberg

unread,
Feb 26, 2004, 7:18:12 AM2/26/04
to
Jens Axel Søgaard <use...@jasoegaard.dk> writes:

I don't think you can, but that does not have anything to do with
defmacro, but rather the lack in CL of global lexicals. For instance,
if I write the following:

(let ((a 0)) ; a is a lexical
  (defun set-a (x)
    (setf a x))
  (defmacro foo ()
    `(+ ,a 100)))

I get:

* (special-let ((a 42)) (foo))
==> 100
* (set-a 10)
==> 10
* (special-let ((a 42)) (foo))
==> 110


Björn

Pascal Costanza

unread,
Feb 26, 2004, 8:22:41 AM2/26/04
to
Björn Lindberg wrote:
> Jens Axel Søgaard <use...@jasoegaard.dk> writes:
[...]

> I don't think you can, but that does not have anything to do with
> defmacro, but rather the lack in CL of global lexicals. For instance,
> if I write the following:
>
> (let ((a 0)) ; a is a lexical
>   (defun set-a (x)
>     (setf a x))
>   (defmacro foo ()
>     `(+ ,a 100)))
>
> I get:
>
> * (special-let ((a 42)) (foo))
> ==> 100
> * (set-a 10)
> ==> 10
> * (special-let ((a 42)) (foo))
> ==> 110

Right, this is simpler than I thought.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Joe Marshall

unread,
Feb 26, 2004, 9:15:51 AM2/26/04
to

Yes, but the macros were quite hairy. I don't have an off-hand
example, though.

--
~jrm

Rahul Jain

unread,
Feb 26, 2004, 10:30:25 AM2/26/04
to
Ray Dillinger <be...@sonic.net> writes:

> Arguably, use of a plist means we're not really talking about a lisp-2
> anymore; at that point we're talking about a lisp-n, with n unbounded.
> And while it is strictly true that straightforward lexical transformations
> can turn any lisp-N code into lisp-1 code where N is known, the argument
> breaks down on plists because N is unbounded.

I disagree. The plist is merely some property of the symbol. The only
real difference between a lisp-1 and a lisp-2 is in how the compiler
_treats_ the symbols in different locations in the form it's given.

The problem is that by renaming a symbol when it's used in a
different context, we lose the association between the bindings in those
two contexts. Take, for example, the english word `record'. The name is
the same, but in different syntactic contexts, it has slightly different
(but related) meanings. Since symbols in lisp are objects with identity,
we can use this association to good effect.

In lisp, a symbol is really treated as an object, for use by
applications. In scheme, it's just a name for a lexical binding,
specifically for the use of the compiler. Any other uses will end up
causing conflicts.

--
Rahul Jain
rj...@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist

Tim Bradshaw

unread,
Feb 26, 2004, 12:11:26 PM2/26/04
to
* james anderson wrote:

> which is the macro definition's run-time. the macro itself also has a compile-time.

So does the compiler. Why is this interesting?

--tim

james anderson

unread,
Feb 26, 2004, 12:50:54 PM2/26/04
to

because i was enquiring about substitution mechanisms which act at the macro
definition's compile time and the observation, to which the above responded, was

Kaz Kylheku wrote:
>
> james anderson <james.a...@setf.de> wrote in message news:<403CC5E5...@setf.de>...

> > ...


>
> WITH-GENSYMS and similar things work at compile time, or whenever
> macro-expansion takes place.

which describes an operator which substitutes at the macro's run-time rather
than at its compile-time. [at least, given the definitions of which i am
aware.]

...

Ray Dillinger

unread,
Feb 27, 2004, 12:54:33 PM2/27/04
to
Rahul Jain wrote:
>
> Ray Dillinger <be...@sonic.net> writes:
>
> > Arguably, use of a plist means we're not really talking about a lisp-2
> > anymore; at that point we're talking about a lisp-n, with n unbounded.
> > And while it is strictly true that straightforward lexical transformations
> > can turn any lisp-N code into lisp-1 code where N is known, the argument
> > breaks down on plists because N is unbounded.
>
> I disagree. The plist is merely some property of the symbol. The only
> real difference between a lisp-1 and a lisp-2 is in how the compiler
> _treats_ the symbols in different locations in the form it's given.
>
> The problem is that by renaming a symbol when it's used in a
> different context, we lose the association between the bindings in those
> two contexts.

There is no association between the bindings in a lisp-1 to lose.
Different symbols are merely different values. You can't rename
a symbol in a pure lisp-1, any more than you can rename 23 or
#\a or the string value "rename." So talking about what happens
when a symbol is renamed in a different context is like talking
about what happens when the moon turns into cheese.

If you can rename it, meaningfully, then its name is a value that
can be changed without changing its identity, and there is at least
one other value stored in the identified structure. That means we
are simply not talking about a lisp-1 any more.

> Take, for example, the english word `record'. The name is
> the same, but in different syntactic contexts, it has slightly different
> (but related) meanings. Since symbols in lisp are objects with identity,
> we can use this association to good effect.

I know what you're talking about; but the argument simply does not
apply to lisp in general. It only applies to lisp-2's and greater.


> In lisp, a symbol is really treated as an object, for use by
> applications. In scheme, it's just a name for a lexical binding,
> specifically for the use of the compiler. Any other uses will end up
> causing conflicts.

*blip.* Sorry, I had a mental trainwreck when I read this. Up to now,
I've been assuming when you used "lisp" that you were specifically
including all dialects.

For clarity, please, If you're comparing one lisp dialect to another,
or comparing families of dialects such as lisp-1's and lisp-n's,
use the names of the dialects you're talking about.

Bear

Kaz Kylheku

unread,
Feb 27, 2004, 1:34:34 PM2/27/04
to
Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<403cf721$0$255

> How do I in a defmacro refer to a variable in the lexical scope of
> the definition of the variable?

I don't see how you can fulfill this requirement without taking away
the idea that a macro produces the source code to a form which is
substituted somewhere, or changing what lexical scope means or what an
expression is.

In Common Lisp, a form can only be in two distinct lexical scopes
simultaneously if those scopes are nested.

What you are asking for is to be able to write a form (the macro body)
which is simultaneously in the lexical scope of the macro definition,
and in the lexical scope of the substitution site. Some of the free
variable references in the body resolve to one scope, other ones to
the other.

One way to do that is to change the inputs to the macro to be higher
order functions. Even simple variable references become lambdas. This
is how it works in lazy functional languages. A simple expression like
X is really (LAMBDA () X), so that when you pass X down through a
chain of function calls, its evaluation can be delayed, yet take place
in the right lexical environment.

We can turn all of the argument forms supplied by the macro user into
lambdas, and we can write the macro body as a lambda. Ensure that the
closures are created in the appropriate environments, and it's done.

The question is, would you want the macro system do do all this
inefficient, complicated junk automatically, under the hood, just to
solve some problem that is better addressed using dynamically scoped
module variables in a package?

Example: suppose we want to implement a REPEAT macro that repeatedly
evaluates the supplied form. The repetition count comes implicitly
from a lexical variable set up around the macro:

(let ((repeat-count 10))
  (defmacro repeat (form)
    `(funcall ,(lambda (form-closure)
                 (dotimes (i repeat-count)
                   (funcall form-closure)))
              (lambda () ,form))))

If expressions were lazy, and if functions had destructuring lambda
lists, we could do this as an inline function:

(declaim (inline repeat))

(let ((repeat-count 10))
  (defun repeat (lazy-value)
    (dotimes (i repeat-count)
      lazy-value))) ;; assumption: force happens here, no caching.

Or we could write a macro-writing macro that would do all the LAMBDA's
and FUNCALL's for us: it would take all of the argument pieces, make
closures out of them, turn the body into a closure, and use a MACROLET
to rewrite the references into funcalls.

But doing so takes away power from the macro system and complicates its
output.

I don't want the inputs to a macro to be some opaque objects; I want
them to be lists that I can subject to arbitrary analysis and
synthesis. If I leave free variables in the resulting form, I want
them to connect to lexical or dynamic variables in the substitution
environment only. I don't want the object output by a macro to carry
bindings back to the factory from where it came, unless I put them in.

Nils Gösche

unread,
Feb 27, 2004, 5:16:47 PM2/27/04
to
Jens Axel Søgaard <use...@jasoegaard.dk> writes:

[...]

I am not sure what you guys are talking about in this thread, but I
have a feeling you might want to play around with

(defmacro foo ()
  `(+ ,a 100))

Regards,
--
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID #xEEFBA4AF

Nils Gösche

unread,
Feb 27, 2004, 5:25:24 PM2/27/04
to
I wrote:

> (defmacro foo ()
>   `(+ ,a 100))

Oops, somebody else posted the same thing already. Sorry for the
redundancy.

Pascal Costanza

unread,
Feb 29, 2004, 4:01:32 PM2/29/04
to

Jens Axel Søgaard wrote:

It's a pity that we have got sidetracked again by the hygiene issue. I
think the thread has shown that there are several options to take care
of this by now. The most natural way in Common Lisp seems to be to use
special variables and/or to use packages for namespacing issues.

In the meantime, I have experimented a little with ways to simulate
syntax-rules/syntax-case in Common Lisp. I think today I have got the
important insight: The Common Lisp way is to separate the concerns more
clearly.

There are three issues, as far as I can see:

1) hygiene: The difference between CL and Scheme is that they just have
different defaults here. The respective communities prefer their
respective defaults, for various reasons, but there is no clear winner here.

2) referential transparency: We have discussed this in this thread.

3) pattern matching: This allows for a different programming style.

In CL, we have WITH-UNIQUE-NAMES and REBINDING for hygiene/referential
transparency issues.

Now here comes the pattern matching stuff!

I have fiddled with DESTRUCTURING-BIND, error handling and dynamic
scoping, and I think this works. See code below.


(use-package :lispworks)

(defvar *destructuring*)

(defmacro destructuring-case (expression &body forms)
  (with-unique-names (block)
    (rebinding (expression)
      `(block ,block
         (let ((*destructuring* t))
           (tagbody
             ,@(loop for form in forms
                     for tag = (gensym)
                     collect `(handler-bind
                                  ((error (lambda (err)
                                            (declare (ignore err))
                                            (when *destructuring*
                                              (go ,tag)))))
                                (destructuring-bind
                                    ,(car form) ,expression
                                  (declare (ignorable _ __ ___))
                                  (let ((*destructuring* nil))
                                    (return-from ,block ,@(cdr form)))))
                     collect tag)
             (error "No match in destructuring-case.")))))))


;; test

(defmacro special-let (&whole form &rest args)
  (declare (ignore args))
  (destructuring-case form
    ((_ ((var binding) &rest more) &rest forms)
     `(let ((,var ,binding))
        (special-let (,@more) ,@forms)))
    ((_ (var &rest more) &rest forms)
     `(let ((,var nil))
        (special-let (,@more) ,@forms)))
    ((_ () &rest forms)
     `(let () ,@forms))))

For illustration purposes, here is the expansion of the
destructuring-case form.

(LET* ((#:EXPRESSION829 FORM))
  (BLOCK #:BLOCK828
    (LET ((*DESTRUCTURING* T))
      (TAGBODY
        (HANDLER-BIND ((ERROR (LAMBDA (ERR)
                                (DECLARE (IGNORE ERR))
                                (WHEN *DESTRUCTURING* (GO #:G830)))))
          (DESTRUCTURING-BIND (_ ((VAR BINDING) &REST MORE) &REST FORMS)
              #:EXPRESSION829
            (DECLARE (IGNORABLE _ __ ___))
            (LET ((*DESTRUCTURING* NIL))
              (RETURN-FROM #:BLOCK828
                `(LET ((,VAR ,BINDING))
                   (SPECIAL-LET (,@MORE) ,@FORMS))))))
        #:G830
        (HANDLER-BIND ((ERROR (LAMBDA (ERR)
                                (DECLARE (IGNORE ERR))
                                (WHEN *DESTRUCTURING* (GO #:G831)))))
          (DESTRUCTURING-BIND (_ (VAR &REST MORE) &REST FORMS)
              #:EXPRESSION829
            (DECLARE (IGNORABLE _ __ ___))
            (LET ((*DESTRUCTURING* NIL))
              (RETURN-FROM #:BLOCK828
                `(LET ((,VAR NIL))
                   (SPECIAL-LET (,@MORE) ,@FORMS))))))
        #:G831
        (HANDLER-BIND ((ERROR (LAMBDA (ERR)
                                (DECLARE (IGNORE ERR))
                                (WHEN *DESTRUCTURING* (GO #:G832)))))
          (DESTRUCTURING-BIND (_ NIL &REST FORMS)
              #:EXPRESSION829
            (DECLARE (IGNORABLE _ __ ___))
            (LET ((*DESTRUCTURING* NIL))
              (RETURN-FROM #:BLOCK828 `(LET () ,@FORMS)))))
        #:G832
        (ERROR "No match in destructuring-case.")))))

Comments?

Brian Mastenbrook

unread,
Feb 29, 2004, 4:33:56 PM2/29/04
to
In article <c1tjve$nsd$1...@newsreader2.netcologne.de>, Pascal Costanza
<cost...@web.de> wrote:

> I have fiddled with DESTRUCTURING-BIND, error handling and dynamic
> scoping, and I think this works. See code below.
>

> Comments?

There's no good idea that can't be reinvented! I've been using an
essentially identical macro for quite a while now; it's part of my
common-idioms package on cliki. (Also happens to be the first google
hit for destructuring-case.)

Brian

--
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/

Pascal Costanza

unread,
Feb 29, 2004, 7:59:21 PM2/29/04
to

Brian Mastenbrook wrote:

> In article <c1tjve$nsd$1...@newsreader2.netcologne.de>, Pascal Costanza
> <cost...@web.de> wrote:
>
>>I have fiddled with DESTRUCTURING-BIND, error handling and dynamic
>>scoping, and I think this works. See code below.
>>
>>Comments?
>
> There's no good idea that can't be reinvented! I've been using an
> essentially identical macro for quite a while now; it's part of my
> common-idioms package on cliki. (Also happens to be the first google
> hit for destructuring-case.)

I wasn't aware of that. Your version even supports literals and what
Schemers call a "fender". Cool!

Rahul Jain

unread,
Feb 29, 2004, 11:31:02 PM2/29/04
to
Ray Dillinger <be...@sonic.net> writes:

> Rahul Jain wrote:
>> What's the problem is that by renaming a symbol when it's used in a
>> different context, we lose the association between the bindings in those
>> two contexts.
>
> There is no association between the bindings in a lisp-1 to lose.

The point is that we were trying to convert a lisp-2 program into a
lisp-1 program. The fact that there is an association in a lisp-2 and
there is none in a lisp-1 is the problem.

>> Take, for example, the english word `record'. The name is
>> the same, but in different syntactic contexts, it has slightly different
>> (but related) meanings. Since symbols in lisp are objects with identity,
>> we can use this association to good effect.
>
> I know what you're talking about; but the argument simply does not
> apply to lisp in general. It only applies to lisp-2's and greater.

Yes, that's my point. You aren't allowed to do this in a lisp-1.
Therefore, symbol munging doesn't fix the problem.

>> In lisp, a symbol is really treated as an object, for use by
>> applications. In scheme, it's just a name for a lexical binding,
>> specifically for the use of the compiler. Any other uses will end up
>> causing conflicts.
>
> *blip.* Sorry, I had a mental trainwreck when I read this. Up to now,
> I've been assuming when you used "lisp" that you were specifically
> including all dialects.

Personally, I don't consider Scheme to be a Lisp dialect any more than I
consider C to be an Algol dialect.

Ray Dillinger

unread,
Mar 1, 2004, 1:36:53 AM3/1/04
to
Rahul Jain wrote:
>
> Ray Dillinger <be...@sonic.net> writes:
>
> > Rahul Jain wrote:
> >> The problem is that by renaming a symbol when it's used in a
> >> different context, we lose the association between the bindings in those
> >> two contexts.
> >
> > There is no association between the bindings in a lisp-1 to lose.
>
> The point is that we were trying to convert a lisp-2 program into a
> lisp-1 program. The fact that there is an association in a lisp-2 and
> there is none in a lisp-1 is the problem.

It's still a simple syntactic transformation. You wind up using
N variables in a lisp-1 to represent what you have in a single
symbol in a lisp-N.

The syntactic transformation is that all the derived symbols get
passed around together; effectively there are now N times as many
variables in every scope and every call.

When N is unknown this is no longer possible to do without actually
analyzing the program, ie, it's no longer merely syntactic.

However, a tiny scrap of analysis suffices. All you do is read the
program looking for different properties, including syntactic sugar for
them like accessing a 'function' property via position in a combination.

The number of different properties you find is N, and then you convert
from lisp-N to lisp-1 syntactically.
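
A sketch of the transformation for N = 2; the names are invented for
the example, and the second form is written in lisp-1 (Scheme) notation:

;; Lisp-2 source: ITEM simultaneously names a function and a value.
(flet ((item (x) (car x)))
  (let ((item '(1 2 3)))
    (item item)))             ; => 1

;; After splitting the namespaces - one variable per role - the same
;; program works in a lisp-1:
(let ((item-fun (lambda (x) (car x))))
  (let ((item-val '(1 2 3)))
    (item-fun item-val)))     ; => 1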


> > I know what you're talking about; but the argument simply does not
> > apply to lisp in general. It only applies to lisp-2's and greater.
>
> Yes, that's my point. You aren't allowed to do this in a lisp-1.
> Therefore, symbol munging doesn't fix the problem.

Of course it does. You just wind up with more symbols is all,
and instead of being bound by their name, they are "bound" by being
explicitly passed around together.

> > *blip.* Sorry, I had a mental trainwreck when I read this. Up to now,
> > I've been assuming when you used "lisp" that you were specifically
> > including all dialects.
>
> Personally, I don't consider Scheme to be a Lisp dialect any more than I
> consider C to be an Algol dialect.

Hmmm. Fully parenthesized prefix notation, higher-order functions,
anonymous functions, code-as-data, dynamic typing, car, cons, cdr,
lambda, quote, eval,.... sure looks like a lisp to me. I think
you're wrong about that.

Interesting take on things, though. Are there any lisp-1's which
you do consider to be a Lisp dialect? If so, what's the distinguishing
feature?

Bear

Rahul Jain

unread,
Mar 1, 2004, 11:19:20 AM3/1/04
to
Ray Dillinger <be...@sonic.net> writes:

> However, a tiny scrap of analysis suffices. All you do is read the
> program looking for different properties, including syntactic sugar for
> them like accessing a 'function' property via position in a combination.
>
> The number of different properties you find is N, and then you convert
> from lisp-N to lisp-1 syntactically.

This sounds like an interesting theory. I don't know if it's doable in
practice. How would you figure out all semantic connotations of a symbol
throughout all the code that is being called in the presence of dynamic
redefinition and forward references? Wouldn't it be easier to just treat
symbols as objects when they're used as objects? You might as well pass
around standard-instances by exploding them into their component slots.

>> Personally, I don't consider Scheme to be a Lisp dialect any more than I
>> consider C to be an Algol dialect.
>
> Hmmm. Fully parenthesized prefix notation, higher-order functions,
> anonymous functions, code-as-data, dynamic typing, car, cons, cdr,
> lambda, quote, eval,.... sure looks like a lisp to me. I think
> you're wrong about that.

Note that I consider dylan to be more of a lisp than scheme. Also, CAR
and CDR don't even do the same things in lisp and scheme. I can add them
to C rather trivially, too, once I choose lisp semantics or scheme
semantics.

Scheme code (in fully-parenthesized prefix notation, which, as a type of
criterion, would put perl and C into the same language family) is a
bunch of text that gets parsed into some structured intermediate form
and then has semantic analysis applied to it. All the syntax is defined
textually, not structurally. The "code data" that exists in scheme is
about as close to lisp's s-expressions as s-expressions are to python's
code data, as we've learned in the discussion about scheme macros.

Perl has lambda (anon. functions, higher-order functions) and eval.

Creating literals (quote) is nothing special to any language. Even C has
it.

Dynamic typing is hardly unique or even universal in the family of
languages that includes lisp and scheme (take ML, for example).

> Interesting take on things, though. Are there any lisp-1's which
> you do consider to be a Lisp dialect? If so, what's the distinguishing
> feature?

As with all terms, it's not a single feature: there is a center point
and there is a range which varies based on the context of a given
communication. I consider there to be a language family that contains
lisp, scheme, dylan, ocaml, and perl, as they all share some fundamental
features.

Ray Dillinger

unread,
Mar 2, 2004, 10:04:29 PM3/2/04
to
Rahul Jain wrote:
>
> > Hmmm. Fully parenthesized prefix notation, higher-order functions,
> > anonymous functions, code-as-data, dynamic typing, car, cons, cdr,
> > lambda, quote, eval,.... sure looks like a lisp to me. I think
> > you're wrong about that.
>
> Note that I consider dylan to be more of a lisp than scheme. Also, CAR
> and CDR don't even do the same things in lisp and scheme. I can add them
> to C rather trivially, too, once I choose lisp semantics or scheme
> semantics.

That's weird, to me. Dylan code isn't expressed by a sublanguage
of the language it uses for data, so it wasn't even in the running
to be a lisp.

After a bit of reflection, I get what you're angling at with CAR and CDR
having different semantics in Scheme and Common Lisp; Scheme has them
returning values, all the time, whereas CL has them returning values in
some contexts and mutable locations in others. So scheme is biased
much more strongly in favor of functional code. Or, more precisely speaking,
against side-effecting code.

However, I don't think either way is any more or less "lispy;" to the
extent that it is, Scheme's CAR and CDR have the semantics described by
McCarthy way back when and CL has an extension invented since, so this
argues for CL being less a lisp than scheme(!).

Also, the difference in CAR/CDR has nothing to do with the fundamental
differences between lisp-1's and lisp-n's, which we've been discussing
up to now.


> Scheme code is a
> bunch of text that gets parsed into some structured intermediate form
> and then has semantic analysis applied to it.

Scheme's eval takes structured list data, the same as Common Lisp's.
I can create and evaluate scheme code on the fly by manipulating list
data, so this is just clearly wrong. There is *NO* reading of R5RS
that allows eval to be defined on mere text, period. You must stop
making this absurd claim; it can only damage your credibility.

The only difference is that the standard doesn't have to talk about
it as much in terms of abstract forms because scheme doesn't have
reader macros to cause the distinction between forms and text to be
as clear.

> > Interesting take on things, though. Are there any lisp-1's which
> > you do consider to be a Lisp dialect? If so, what's the distinguishing
> > feature?
>
> As with all terms, it's not a single feature: there is a center point
> and there is a range which varies based on the context of a given
> communication. I consider there to be a language family that contains
> lisp, scheme, dylan, ocaml, and perl, as they all share some fundamental
> features.

So, and this is just a guess, do you consider Common Lisp to be the precise
center of what Lisp is? Is Lisp defined in terms of distance-from-CL
for you, or do you see CL as occupying its own particular spot of the
universe of Lisp? As a lexically-scoped Lisp-2, it is representative of
less than 10% of Lisp's history.

Bear

Rahul Jain

unread,
Mar 2, 2004, 10:25:24 PM3/2/04
to
Ray Dillinger <be...@sonic.net> writes:

> That's weird, to me. Dylan code isn't expressed by a sublanguage
> of the language it uses for data, so it wasn't even in the running
> to be a lisp.

Well, neither is scheme code, as I said we learned in the discussion
about scheme macros. Scheme code is a totally different kind of data
structure that includes lots of semantic information, unlike lisp code.

> After a bit of reflection, I get what you're angling at with CAR and CDR
> having different semantics in Scheme and Common Lisp; Scheme has them
> returning values, all the time, whereas CL has them returning values in
> some contexts and mutable locations in others.

No, that's done by the modifier macros themselves. They run the
setf-expander you defined and get the 5 values needed to manipulate that
operator's conceptual "place".

> So scheme is biased much more strongly in favor of functional code.
> Or, more precisely speaking, against side effecting code.

No, scheme is just biased toward making you _suffer_ if you want to make
side-effecting code efficient. (Not that any side-effect-happy language
is any better.)

> Also, the difference in CAR/CDR has nothing to do with the fundamental
> differences between lisp-1's and lisp-n's, which we've been discussing
> up to now.

Yes, I was using them as examples.

>> Scheme code is a
>> bunch of text that gets parsed into some structured intermediate form
>> and then has semantic analysis applied to it.
>
> Scheme's eval takes structured list data, the same as Common Lisp's.

But the syntax isn't defined in terms of structured list data.

> I can create and evaluate scheme code on the fly by manipulating list
> data, so this is just clearly wrong. There is *NO* reading of R5RS
> that allows eval to be defined on mere text, period. You must stop
> making this absurd claim; it can only damage your credibility.

EVAL operates on something other than the definition of the syntax in
the standard.

> The only difference is that the standard doesn't have to talk about
> it as much in terms of abstract forms because scheme doesn't have
> reader macros to cause the distinction between forms and text to be
> as clear.

It also doesn't have syntactic macros, just semantic ones. (Officially)

> So, and this is just a guess, do you consider Common Lisp to be the precise
> center of what Lisp is? Is Lisp defined in terms of distance-from-CL
> for you, or do you see CL as occupying its own particular spot of the
> universe of Lisp? As a lexically-scoped Lisp-2, it is representative of
> less than 10% of Lisp's history.

Well, it's the only current general purpose language that calls itself
"Lisp". Including scheme in any family with CL makes for a very broad
family, as I said.

Note also that communities change over time. Is "English" defined in
terms of distance from Modern English or Old English?

Jens Axel Søgaard

unread,
Mar 3, 2004, 12:51:41 PM3/3/04
to
Kaz Kylheku wrote:

> Common Lisp has packages which you can use to achieve hygiene, so none
> of these tricks are necessary. If you put your macro in a package, all
> of the symbols in the macro body will be interned in your package (or
> else be symbols that your package has imported: you have control over
> that).
>
> (in-package :my-macro)
>
> (defmacro foo (x)
>   `(+ a ,x))
>
> The symbols here are MY-MACRO::FOO, MY-MACRO::X and MY-MACRO::A. No
> WITH-GENSYMS needed, no code walker, no Scheme features, nothing.

Thanks. That answered my question about how to refer to a variable
in the lexical scope at the macro's definition site.

This also explains why Common Lispers are satisfied with defmacro
and Schemers are not. There is no standard module system in Scheme,
and thus the same solution doesn't apply to Scheme.
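
To check my understanding, a small sketch (assuming the MY-MACRO package
from your post exists and MY-MACRO::A is given a value):

(in-package :my-macro)
(defvar a 42)

(in-package :cl-user)
(my-macro::foo 3)                ; expands to (+ MY-MACRO::A 3) => 45
(let ((a 0)) (my-macro::foo 3))  ; CL-USER::A is a different symbol => 45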

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Mar 3, 2004, 12:56:15 PM3/3/04
to
Kaz Kylheku wrote:
> Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<403cf721$0$255
>
>>How do I in a defmacro refer to a variable in the lexical scope of
>>the definition of the variable?

> I don't see how you can fulfill this requirement without taking away
> the idea that a macro produces the source code to a form which is
> substituted somewhere, or changing what lexical scope means or what an
> expression is.

The package solution you gave in another post has the effect I wanted.

> In Common Lisp, a form can only be in two distinct lexical scopes
> simultaneously if those scopes are nested.
>
> What you are asking for is to be able to write a form (the macro body)
> which is simultaneously in the lexical scope of the macro definition,
> and in the lexical scope of the substitution site. Some of the free
> variable references in the body resolve to one scope, other ones to
> the other.
>
> One way to do that is to change the inputs to the macro to be higher
> order functions. Even simple variable references become lambdas. This
> is how it works in lazy functional languages. A simple expression like
> X is really (LAMBDA () X), so that when you pass X down through a
> chain of function calls, its evaluation can be delayed, yet take place
> in the right lexical environment.
>
> We can turn all of the argument forms supplied by the macro user into
> lambdas, and we can write the macro body as a lambda. Ensure that the
> closures are created in the appropriate environments, and it's done.
>
> The question is, would you want the macro system to do all this
> inefficient, complicated junk automatically, under the hood, just to
> solve some problem that is better addressed using dynamically scoped
> module variables in a package?

I didn't know the package solution at the time.

Thanks for a thorough explanation.
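
To make the closure idea concrete for myself, a minimal sketch:

(let ((x 5))
  (let ((get-x (lambda () x)))  ; the closure fixes X to this binding
    (let ((x 55))
      (funcall get-x))))        ; => 5, not 55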

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Mar 3, 2004, 1:18:40 PM3/3/04
to
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:
>>
>>> Here is the result of my first experiment with syntax-rules...
>>>
>>> (define-syntax special-let
>>>   (syntax-rules ()
>>>     ((_ () form ...)
>>>      (let () form ...))
>>>     ((_ ((var binding) more ...) form ...)
>>>      (let ((var binding))
>>>        (special-let (more ...) form ...)))
>>>     ((_ (var more ...) form ...)
>>>      (let ((var ()))
>>>        (special-let (more ...) form ...)))))
>>>
>>> ...and for comparison purposes, here is how I would implement it in
>>> Common Lisp:

>>> I am not sure what to think of this. Comments?

>> I am not sure. What happens in the CL macro
>> if the body contains a macro that expands to
>> one of the variables being bound in the special
>> let?

> It's a pity that we have got sidetracked again by the hygiene issue. I
> think the thread has shown that there are several options to take care
> of this by now. The most natural way in Common Lisp seems to be to use
> special variables and/or to use packages for namespacing issues.

Yes. I think I focused on the hygiene problem, because that's the
hard problem (if you don't have packages).
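
For reference, the classic defmacro-side workaround is GENSYM; a minimal
sketch:

(defmacro my-or (a b)
  (let ((tmp (gensym)))      ; fresh uninterned symbol per expansion
    `(let ((,tmp ,a))
       (if ,tmp ,tmp ,b))))

(let ((tmp 1))
  (my-or nil tmp))           ; => 1 -- the user's TMP is not captured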

> In the meantime, I have experimented a little with ways to simulate
> syntax-rules/syntax-case in Common Lisp. I think today I have got the
> important insight: The Common Lisp way is to separate the concerns more
> clearly.
>
> There are three issues, as far as I can see:
>
> 1) hygiene: The difference between CL and Scheme is that they just have
> different defaults here. The respective communities prefer their
> respective defaults, for various reasons, but there is no clear winner
> here.
>
> 2) referential transparency: We have discussed this in this thread.
>
> 3) pattern matching: This allows for a different programming style.

As you and Masterbrook demonstrate, it is relatively easy to
copy the pattern-matching style of writing macros when you
have defmacro.


> (defmacro special-let (&whole form &rest args)
>   (declare (ignore args))
>   (destructuring-case form
>     ((_ ((var binding) &rest more) &rest forms)
>      `(let ((,var ,binding))
>         (special-let (,@more) ,@forms)))
>     ((_ (var &rest more) &rest forms)
>      `(let ((,var nil))
>         (special-let (,@more) ,@forms)))
>     ((_ () &rest forms)
>      `(let () ,@forms))))

I find this code much more readable than your original CL macro,
since it concentrates on the code transformation at hand.
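
By the way, DESTRUCTURING-CASE is not part of ANSI CL. One way to
sketch it, assuming first-match-wins semantics and that the
implementation signals a catchable error on a DESTRUCTURING-BIND
mismatch (the standard leaves this undefined):

(defmacro destructuring-case (form &body clauses)
  ;; Try each clause's lambda list against FORM; a mismatch error
  ;; makes us fall through to the next clause. Naive: an error
  ;; signalled by a clause body also falls through.
  (let ((f (gensym "FORM")))
    `(let ((,f ,form))
       (block nil
         ,@(loop for (pattern . body) in clauses
                 collect `(handler-case
                              (return (destructuring-bind ,pattern ,f
                                        ,@body))
                            (error () nil)))
         (error "No clause matched ~S" ,f)))))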

For fun I have written a little desugarer for R5RS Scheme
(without macros) using (the PLT version of) syntax-case.

It can be found here:

<http://www.scheme.dk/desugar.scm>

The forms quasisyntax (also written #`) and unsyntax (also written #,)
correspond to normal quasiquote and unquote, but construct syntax objects
instead of lists.

The forms quasisyntax/loc and syntax/loc (abbreviated qs/l and s/l in the code)
let the macro writer associate a source location with the syntax to be constructed.

An example (the desugarer is called d):

(syntax-case* stx (quote if begin set! lambda define let
                   let* letrec cond else => case and or do)
    identifier=?
  ...
  [(begin e) (d (s/l stx e))]
  ...)

The form (begin e) (= stx) desugars to e. The form (s/l stx e) is used
to associate the syntax location of (begin e) (i.e. the file, module, line number,
and column span) with the desugared syntax e. If an error occurs during the
desugaring of e, it's possible to give an error message to the user in terms
of the original code.

The desugaring of let is as follows:

[(let ([vs es] ...) e)
 (let ([des (smap d #'(es ...))])
   (qs/l stx (let #,(smap2 list #'(vs ...) des) #,(d #'e))))]
[(let ([vs es] ...) e e* ...)
 (d (s/l stx (let ([vs es] ...) (begin e e* ...))))]
[(let t ([vs es] ...) e e* ...)
 (d (s/l stx (letrec ((t (lambda (vs ...) e e* ...))) (t es ...))))]

where smap and smap2 are like MAP but work on syntax objects representing lists.

--
Jens Axel Søgaard

Pascal Costanza

unread,
Mar 3, 2004, 5:07:43 PM3/3/04
to

Jens Axel Søgaard wrote:

> Pascal Costanza wrote:
>

>> It's a pity that we have got sidetracked again by the hygiene issue. I
>> think the thread has shown that there are several options to take care
>> of this by now. The most natural way in Common Lisp seems to be to use
>> special variables and/or to use packages for namespacing issues.
>
> Yes. I think I focused on the hygiene problem, because that's the
> hard problem (if you don't have packages).

OK, I see.

>> (defmacro special-let (&whole form &rest args)
>>   (declare (ignore args))
>>   (destructuring-case form
>>     ((_ ((var binding) &rest more) &rest forms)
>>      `(let ((,var ,binding))
>>         (special-let (,@more) ,@forms)))
>>     ((_ (var &rest more) &rest forms)
>>      `(let ((,var nil))
>>         (special-let (,@more) ,@forms)))
>>     ((_ () &rest forms)
>>      `(let () ,@forms))))
>
>
> I find this code much more readable than your original CL macro,
> since it concentrates on the code transformation at hand.

Funny that I find my original code easier to understand, because it more
clearly states the flow of control IMHO. With destructuring-case /
syntax-rules/case one uses recursion instead of iteration, and in this
particular example I find iteration more natural. But this is certainly
just a matter of taste.

Anyway, the destructuring-case is extremely close to the syntax-rules
variant. It uses &rest and ,@ instead of ... - it would be interesting
to see if one could make effective use of other lambda keywords.

> For fun I have written a little desugarer for R5RS Scheme
> (without macros) using (the PLT version of) syntax-case.
>
> It can be found here:
>
> <http://www.scheme.dk/desugar.scm>

Hm, interesting. But what would you use that for? (This is not a rhetorical
question.)

My first guess would be that Common Lispers would rather trust their
vendors to do these things right for the ANSI stuff. It would be
interesting to have such facilities for user-supplied macros, but I
guess that's what environment objects are supposed to be used for in
combination with *MACROEXPAND-HOOK*. (Environment objects are not
supported in ANSI, but see 8.5 in CLtL2.)

But as I said, this is my first guess, and I haven't thought about the
details. It's not unlikely that I am missing something.


Pascal

--
1st European Lisp and Scheme Workshop
June 13 - Oslo, Norway - co-located with ECOOP 2004
http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/

Jens Axel Søgaard

unread,
Mar 3, 2004, 5:30:49 PM3/3/04
to
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:

>> I find this code much more readable than you original CL macro,
>> since it concentrates at the code transformation at hand.

> Funny that I find my original code easier to understand, because it more
> clearly states the flow of control IMHO. With destructuring-case /
> syntax-rules/case one uses recursion instead of iteration, and in this
> particular example I find iteration more natural. But this is certainly
> just a matter of taste.

If I need to convince myself that the transformation is correct,
then in

>>> (defmacro special-let (&whole form &rest args)
>>>   (declare (ignore args))
>>>   (destructuring-case form
>>>     ((_ ((var binding) &rest more) &rest forms)
>>>      `(let ((,var ,binding))
>>>         (special-let (,@more) ,@forms)))
>>>     ((_ (var &rest more) &rest forms)
>>>      `(let ((,var nil))
>>>         (special-let (,@more) ,@forms)))
>>>     ((_ () &rest forms)
>>>      `(let () ,@forms))))

I need to focus on three small transformations (which can be
checked separately) rather than having one large transformation.

E.g. the first clause states that

(let ((<var1> <exp1>)
      (<var2> <exp2>)
      ...)
  <body-exp> ...)

can be rewritten as

(let ((<var1> <exp1>))
  (let ((<var2> <exp2>)
        ...)
    <body-exp> ...)) .

That this transformation is correct is easy to see.

> Anyway, the destructuring-case is extremely close to the syntax-rules
> variant. It uses &rest and ,@ instead of ... - it would be interesting
> to see if one could make effective use of other lambda keywords.

Indeed.

>> For fun I have written a little desugarer for R5RS Scheme
>> (without macros) using (the PLT version of) syntax-case.
>>
>> It can be found here:
>>
>> <http://www.scheme.dk/desugar.scm>
>
>
> Hm, interesting. But what would you use that for? (This is not a rhetorical
> question.)

First, as a (hopefully) interesting example of the use of syntax-case.

Second, in a Scheme compiler as a first approximation for the desugaring
phase (since it doesn't handle macros).

> My first guess would be that Common Lispers would rather trust their
> vendors to do these things right for the ANSI stuff. It would be
> interesting to have such facilities for user-supplied macros, but I
> guess that's what environment objects are supposed to be used for in
> combination with *MACROEXPAND-HOOK*. (Environment objects are not
> supported in ANSI, but see 8.5 in CLtL2.)
>
>> But as I said, this is my first guess, and I haven't thought about the
>> details. It's not unlikely that I am missing something.

It was just a toy example. I am not going to replace my system's
normal macro expander :-)

--
Jens Axel Søgaard

William D Clinger

unread,
Mar 4, 2004, 2:07:50 AM3/4/04
to
Pascal Costanza <cost...@web.de> wrote:
> It's a pity that we have got sidetracked again by the hygiene issue. I
> think the thread has shown that there are several options to take care
> of this by now. The most natural way in Common Lisp seems to be to use
> special variables and/or to use packages for namespacing issues.

I don't see how special variables and/or namespaces could give you
an equivalent of referential transparency for local macros.

> There are three issues, as far as I can see:
>
> 1) hygiene: The difference between CL and Scheme is that they just have
> different defaults here. The respective communities prefer their
> respective defaults, for various reasons, but there is no clear winner here.
>
> 2) referential transparency: We have discussed this in this thread.
>
> 3) pattern matching: This allows for a different programming style.

Those are indeed the three differences between macros in CL and
Scheme.

The importance of these differences lies not so much in whether
you can get your work done as in what kinds of languages you can
design without losing the benefits of reliable macros. Scheme-style
macros have been implemented for rather different languages like
Modula-2. If you look at all the features of Common Lisp that you
are using to get the same effect, it's a little hard to believe that
the CL approach would generalize as well to other languages.

Will

Marcin 'Qrczak' Kowalczyk

unread,
Mar 4, 2004, 5:08:27 AM3/4/04
to
On Tue, 02 Mar 2004 22:25:24 -0500, Rahul Jain wrote:

>> Scheme's eval takes structured list data, the same as Common Lisp's.
>
> But the syntax isn't defined in terms of structured list data.

But it's a question of language description, no? It could be, while being
equivalent. Or is there something which prevents defining Scheme semantics
in terms of the list data? Is it legal to write (foo . (bar)) instead of
(foo bar) as procedure application? Guile allows that.

Ironically, Common Lisp semantics must be defined in terms of characters,
not list data. Because of readtable changes, and changes of variables
which influence the reader (e.g. the current package), you can't read the
whole file and then define the semantics in terms of the read data. You
must execute statements one by one while reading them. So either you
define the semantics of a single statement at a time, and say that all
forms in a file are just read and executed one after another (is that true
without exceptions?), or the semantics must be a function of the character
data in the file.
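
A minimal sketch of why: a reader macro defined earlier in a file
changes how the rest of the same file is read.

(eval-when (:compile-toplevel :load-toplevel :execute)
  (set-macro-character #\!
                       (lambda (stream char)
                         (declare (ignore char))
                         (list 'not (read stream t nil t)))))

!nil   ; from here on, this reads as (NOT NIL) and evaluates to T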

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Pascal Costanza

unread,
Mar 4, 2004, 5:47:08 AM3/4/04
to

Jens Axel Søgaard wrote:

> If I need to convince myself that the transformation is correct,
> then in
>
> >>> (defmacro special-let (&whole form &rest args)
> >>>   (declare (ignore args))
> >>>   (destructuring-case form
> >>>     ((_ ((var binding) &rest more) &rest forms)
> >>>      `(let ((,var ,binding))
> >>>         (special-let (,@more) ,@forms)))
> >>>     ((_ (var &rest more) &rest forms)
> >>>      `(let ((,var nil))
> >>>         (special-let (,@more) ,@forms)))
> >>>     ((_ () &rest forms)
> >>>      `(let () ,@forms))))
>
> I need to focus on three small transformations (which can be
> checked separately) rather than having one large transformation.

Yes, I understand that.

>>> For fun I have written a little desugarer for R5RS Scheme
>>> (without macros) using (the PLT version of) syntax-case.
>>>
>>> It can be found here:
>>>
>>> <http://www.scheme.dk/desugar.scm>
>>
>> Hm, interesting. But what would you use that for? (This is not a
>> rhetorical question.)
>
> First, as a (hopefully) interesting example of the use of syntax-case.
>
> Second, in a Scheme compiler as a first approximation for the desugaring
> phase (since it doesn't handle macros).
>
>> My first guess would be that Common Lispers would rather trust their
>> vendors to do these things right for the ANSI stuff. It would be
>> interesting to have such facilities for user-supplied macros, but I
>> guess that's what environment objects are supposed to be used for in
>> combination with *MACROEXPAND-HOOK*. (Environment objects are not
>> supported in ANSI, but see 8.5 in CLtL2.)
>>
>> But as I said, this is my first guess, and I haven't thought about the
>> details. It's not unlikely that I am missing something.
>
> It was just a toy example. I am not going to replace my system's
> normal macro expander :-)

:)

However, I think by now that my first guess was wrong. Take the very
simple case of a symbol in Common Lisp source code: it is always
eq-identical to itself, no matter where it is mentioned. So there is no
way to unambiguously map it to additional information, say, in a
hashtable. Instead, one really needs a more sophisticated data structure
to represent both the symbol and, say, source code locations. This is
what syntax objects are actually there for.
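
A quick sketch of the point about symbol identity:

(let ((form1 (read-from-string "(+ x 1)"))
      (form2 (read-from-string "(- x 2)")))
  (eq (second form1) (second form2)))   ; => T -- the very same object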

Thanks for your patience - I have learned something.

Pascal Costanza

unread,
Mar 4, 2004, 7:16:02 AM3/4/04
to

William D Clinger wrote:

> Pascal Costanza <cost...@web.de> wrote:
>
>>It's a pity that we have got sidetracked again by the hygiene issue. I
>>think the thread has shown that there are several options to take care
>>of this by now. The most natural way in Common Lisp seems to be to use
>>special variables and/or to use packages for namespacing issues.
>
> I don't see how special variables and/or namespaces could give you
> an equivalent of referential transparency for local macros.

This should do the job:

(defmacro macrolet* (vars defs &body body)
  (let ((syms (mapcar (lambda (var)
                        (cons var (copy-symbol var)))
                      vars)))
    `(let ,(mapcar (lambda (var)
                     `(,(cdr (assoc var syms))
                       (lambda () ,var)))
                   vars)
       (declare (special ,@(mapcar #'cdr syms)))
       (macrolet
           ,(mapcar
             (lambda (def)
               `(,(car def)
                 ,(cadr def)
                 `(let ()
                    (declare (special ,@',(mapcar #'cdr syms)))
                    (symbol-macrolet
                        ,',(mapcar
                            (lambda (var)
                              `(,var (funcall ,(cdr (assoc var syms)))))
                            vars)
                      ,@',(cddr def)))))
             defs)
         ,@body))))

(let ((x 5))
  (macrolet* (x)
      ((test () (list x)))
    (let ((x 55))
      (test))))
=> (5)

The expansion is this:

(LET ((X 5))
  (LET ((#:X (LAMBDA () X)))
    (DECLARE (SPECIAL #:X))
    (MACROLET
        ((TEST ()
           `(LET ()
              (DECLARE (SPECIAL #:X))
              (SYMBOL-MACROLET ((X (FUNCALL #:X)))
                (LIST X)))))
      (LET ((X 55))
        (LET ()
          (DECLARE (SPECIAL #:X))
          (SYMBOL-MACROLET ((X (FUNCALL #:X)))
            (LIST (FUNCALL #:X))))))))

(This macro doesn't yet check whether variable names clash with
parameter names of local macros.)

>>There are three issues, as far as I can see:
>>
>>1) hygiene: The difference between CL and Scheme is that they just have
>>different defaults here. The respective communities prefer their
>>respective defaults, for various reasons, but there is no clear winner here.
>>
>>2) referential transparency: We have discussed this in this thread.
>>
>>3) pattern matching: This allows for a different programming style.
>
> Those are indeed the three differences between macros in CL and
> Scheme.
>
> The importance of these differences lies not so much in whether
> you can get your work done as in what kinds of languages you can
> design without losing the benefits of reliable macros. Scheme-style
> macros have been implemented for rather different languages like
> Modula-2. If you look at all the features of Common Lisp that you
> are using to get the same effect, it's a little hard to believe that
> the CL approach would generalize as well to other languages.

This is a very interesting perspective that I haven't thought of at all.
You're right, and this resonates with the notion that Scheme is a
testbed for language features that go beyond Lisp. (I don't recall
exactly where I read this...)

Thanks a lot to Anton, Jens and you for a very fruitful discussion!

Kaz Kylheku

unread,
Mar 4, 2004, 4:44:02 PM3/4/04
to
cesur...@verizon.net (William D Clinger) wrote in message news:<fb74251e.04030...@posting.google.com>...

> Pascal Costanza <cost...@web.de> wrote:
> > There are three issues, as far as I can see:
> >
> > 1) hygiene: The difference between CL and Scheme is that they just have
> > different defaults here. The respective communities prefer their
> > respective defaults, for various reasons, but there is no clear winner here.
> >
> > 2) referential transparency: We have discussed this in this thread.
> >
> > 3) pattern matching: This allows for a different programming style.
>
> Those are indeed the three differences between macros in CL and
> Scheme.
>
> The importance of these differences lies not so much in whether
> you can get your work done as in what kinds of languages you can
> design without losing the benefits of reliable macros. Scheme-style
> macros have been implemented for rather different languages like
> Modula-2. If you look at all the features of Common Lisp that you
> are using to get the same effect, it's a little hard to believe that
> the CL approach would generalize as well to other languages.

So what? Maybe it can't be easily generalized to other languages
because it hasn't been braindamaged to fit.

Lisp macros require that the source code be a programmer-visible data
structure, full stop. So to have Lisp-like macros in Modula-2, you'd
have to have a way of operating on Modula-2 source as an abstract
syntax tree.

The Scheme macro system does not rely on the program having a nested
list structure. It does not pass unevaluated source code to a macro
transformer. It implements some hodge-podge semantics in which the
inputs to the macro are not pure source, but pieces of a program
containing bindings. The pattern matcher behaves differently based on
whether symbols occuring in the macro call have bindings or not in the
calling original environment. The template itself isn't pure code
either because its free variables magically attach to bindings in the
defining environment.

I disagree with the insinuation that Modula-2 is ``rather different''
from Scheme. It's rather different from Lisp, that's certain. Add
lexical closures, macros and continuations to Modula-2 and you have
Scheme with Pascal-like syntax.

Gareth McCaughan

unread,
Mar 4, 2004, 8:09:32 PM3/4/04
to
Kaz Kylheku wrote:

> I disagree with the insinuation that Modula-2 is ``rather different''
> from Scheme. It's rather different from Lisp, that's certain. Add
> lexical closures, macros and continuations to Modula-2 and you have
> Scheme with Pascal-like syntax.

The mind boggles. Those are pretty major features, and so is the
difference in syntax between Scheme-like and Pascal-like. And you
forgot to mention dynamic typing, and the entirely different
standard library, and symbols, and automatic memory management,
and literal notation for linked lists, and functions with variable
numbers of arguments, and a completely different set of control
structures. But yes, apart from those trifling matters and a few
more I might have forgotten, I'm sure Scheme and Modula-2 are
exactly the same.

--
Gareth McCaughan
.sig under construc

William D Clinger

unread,
Mar 5, 2004, 1:45:43 AM3/5/04
to
k...@ashi.footprints.net (Kaz Kylheku) wrote:
> Lisp macros require that the source code be a programmer-visible data
> structure, full stop. So to have Lisp-like macros in Modula-2, you'd
> have to have a way of operating on Modula-2 source as an abstract
> syntax tree.

True.

> The Scheme macro system does not rely on the program having a nested
> list structure.

Untrue. The Scheme macro system *does* rely on this. That's the
main problem you have to solve when implementing Scheme-like macros
for Modula-2. The obvious approach is to figure out some way for the
macro system to operate on Modula-2 source as abstract syntax trees.

> It does not pass unevaluated source code to a macro
> transformer.

Untrue.

> It implements some hodge-podge semantics in which the
> inputs to the macro are not pure source, but pieces of a program
> containing bindings.

This doesn't sound true to me, but perhaps you understand these
things better than I.

> The pattern matcher behaves differently based on
> whether symbols occuring in the macro call have bindings or not in the
> calling original environment.

Untrue. You may be confusing pattern matching with transcription
and not understand transcription, but that's just a guess.

> The template itself isn't pure code
> either because its free variables magically attach to bindings in the
> defining environment.

True.

> I disagree with the insinuation that Modula-2 is ``rather different''
> from Scheme.

That is probably true.

By the way, can you explain to me why Pascal Costanza's MACROLET*
doesn't work with this minor variation of the example he gave?
Can you explain how to fix his code so things like this will work
reliably?

(let ((x 5))
  (macrolet* (x)
      ((test () (list x))
       (set37 () (setf x 37)))
    (let ((x 55))
      (set37)
      (test))))

Will

Pascal Costanza

unread,
Mar 5, 2004, 3:28:44 AM3/5/04
to

William D Clinger wrote:

> By the way, can you explain to me why Pascal Costanza's MACROLET*
> doesn't work with this minor variation of the example he gave?
> Can you explain how to fix his code so things like this will work
> reliably?
>
> (let ((x 5))
>   (macrolet* (x)
>       ((test () (list x))
>        (set37 () (setf x 37)))
>     (let ((x 55))
>       (set37)
>       (test))))

LispWorks tells me that there is a setf expander missing for the funcall
of the generated special symbol. So it should probably be possible to
add a setf expander for that, but I haven't checked.

Independent of whether this works or not, this thing starts to turn into
a Turing-equivalence argument, and that's pretty boring. To me, what
Anton said at the beginning is clear by now:
syntax-rules/syntax-case have their respective advantages wrt
expressivity. The fact that these advantages don't seem to be important
in Common Lisp is largely irrelevant - it surely doesn't falsify that
assessment.

To put it another way: assume you are forced to program in Java instead of
Common Lisp - this happens in the "real world" - then it's probably
better to have a Scheme-style macro system than to have no Lisp feature
at all. ;)

Joe Marshall

unread,
Mar 5, 2004, 4:20:24 AM3/5/04
to
Pascal Costanza <cost...@web.de> writes:

> William D Clinger wrote:
>
>> By the way, can you explain to me why Pascal Costanza's MACROLET*
>> doesn't work with this minor variation of the example he gave?
>> Can you explain how to fix his code so things like this will work
>> reliably?
>> (let ((x 5))
>> (macrolet* (x)
>> ((test () (list x)) (set37 () (setf x 37)))
>> (let ((x 55))
>> (set37)
>> (test))))
>
> LispWorks tells me that there is a setf expander missing for the
> funcall of the generated special symbol. So it should probably be
> possible to add a setf expander for that, but I haven't checked.
>
> Independent of whether this works or not, this thing starts to turn
> into a Turing-equivalence argument, and that's pretty boring.

Will's example is a bit more subtle than that...

--
~jrm

Pascal Costanza

unread,
Mar 5, 2004, 1:34:47 PM3/5/04
to

Joe Marshall wrote:

First, note that I meant my remark as an acknowledgement of
syntax-rules/syntax-case. The code I need to write to make things work
with DEFMACRO starts to get very hairy, just to prove that it can be
made to work. But that's not the really interesting point. What's
interesting is how convenient it is to express something, and
syntax-rules/syntax-case indeed seem to do a much better job here.

Nevertheless, because of your remark I have tried to make it work, and I
have to admit that I don't get the subtlety. I am probably missing
something. What's the point? Is it about the reliability? The code below
seems to work...

Here's the code.

(defmacro macrolet* (vars defs &body body)
  (let ((syms (mapcar (lambda (var)
                        (cons var (copy-symbol var)))
                      vars)))
    `(let ,(mapcar (lambda (var)
                     `(,(cdr (assoc var syms))
                       (cons (lambda () ,var)
                             (lambda (value)
                               (setf ,var value)))))
                   vars)
       (declare (special ,@(mapcar #'cdr syms)))
       (macrolet
           ,(mapcar
             (lambda (def)
               `(,(car def)
                 ,(cadr def)
                 `(let ()
                    (declare (special ,@',(mapcar #'cdr syms)))
                    (flet ,',(loop for var in vars
                                   for sym = (cdr (assoc var syms))
                                   collect `(,sym ()
                                              (funcall (car ,sym)))
                                   collect `((setf ,sym) (value)
                                              (funcall (cdr ,sym) value)))
                      (symbol-macrolet
                          ,',(mapcar
                              (lambda (var)
                                `(,var (,(cdr (assoc var syms)))))
                              vars)
                        ,@',(cddr def))))))
             defs)
         ,@body))))

(let ((x 5))
  (macrolet* (x)
      ((test () (list x))
       (set37 () (setf x 37)))
    (let ((x 55))
      (set37)
      (test))))

==> (37)

Here's the expansion.

(LET ((X 5))
  (LET ((#:X (CONS (LAMBDA () X)
                   (LAMBDA (VALUE)
                     (LET* ((#:G919 VALUE))
                       (SETQ X #:G919))))))
    (DECLARE (SPECIAL #:X))
    (MACROLET ((TEST ()
                 `(LET ()
                    (DECLARE (SPECIAL #:X))
                    (FLET ((#:X ()
                             (FUNCALL (CAR #:X)))
                           ((SETF #:X) (VALUE)
                             (FUNCALL (CDR #:X) VALUE)))
                      (SYMBOL-MACROLET ((X (#:X)))
                        (LIST X)))))
               (SET37 ()
                 `(LET ()
                    (DECLARE (SPECIAL #:X))
                    (FLET ((#:X ()
                             (FUNCALL (CAR #:X)))
                           ((SETF #:X) (VALUE)
                             (FUNCALL (CDR #:X) VALUE)))
                      (SYMBOL-MACROLET ((X (#:X)))
                        (SETF X 37))))))
      (LET ((X 55))
        (LET ()
          (DECLARE (SPECIAL #:X))
          (FLET ((#:X ()
                   (FUNCALL (CAR #:X)))
                 ((SETF #:X) (VALUE)
                   (FUNCALL (CDR #:X) VALUE)))
            (SYMBOL-MACROLET ((X (#:X)))
              (LET* ((#:G928 37))
                (SETF::GENSYM\ \"X\" #:G928)))))
        (LET ()
          (DECLARE (SPECIAL #:X))
          (FLET ((#:X ()
                   (FUNCALL (CAR #:X)))
                 ((SETF #:X) (VALUE)
                   (FUNCALL (CDR #:X) VALUE)))
            (SYMBOL-MACROLET ((X (#:X)))
              (LIST (#:X)))))))))

Joe Marshall

unread,
Mar 5, 2004, 3:20:16 PM3/5/04
to
Pascal Costanza <cost...@web.de> writes:

> Nevertheless, because of your remark I have tried to make it work, and
> I have to admit that I don't get the subtlety. I am probably missing
> something. What's the point? Is it about the reliability? The code
> below seems to work...

Before I make a complete ass of myself, let me make sure I understand
what macrolet* is supposed to do. Am I correct in that the VARS list
indicates which variables are to be scoped at the macro-definition
site and all other variables are scoped at the macro-use site?
