Scheme macros


Jacek Generowicz

Feb 19, 2004, 10:44:12 AM
[Don't want to start a flamefest, so I'm not crossposting this to
c.l.s]

In _Lambda: the ultimate "little language"_, Olin Shivers suggests
(second paragraph of section 9) that " ... Scheme has the most
sophisticated macro system of any language in this family ...", where
'this family' is explicitly stated to include Common Lisp.

Now, I don't know much about the Scheme macro system ... my impression
is that the main difference from CL's macros lies in Scheme's
so-called hygiene, which I (mis?)understand to be a mechanism for
preventing variable capture (which in CL is considered to be a
feature, not a bug, of the macro system (particularly given that
variable capture is less "problematic" in a Lisp-N, where N>1)).
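To make the capture question concrete, here is a sketch in Scheme (DEFINE-MACRO is a non-standard, defmacro-style facility offered by many implementations; the macro names and variables are illustrative):

```scheme
;; Non-hygienic sketch: the macro's TMP can capture a user's TMP.
(define-macro (swap-dm! a b)
  `(let ((tmp ,a))
     (set! ,a ,b)
     (set! ,b tmp)))

(let ((tmp 1) (x 2))
  (swap-dm! tmp x))   ; the user's TMP is shadowed by the macro's TMP,
                      ; so the swap silently fails

;; Hygienic SYNTAX-RULES version: TMP is renamed automatically,
;; so this capture cannot happen.
(define-syntax swap-sr!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))
```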

My question is ... how should I interpret the claim that Scheme's
macro system is more "sophisticated" than CL's? Does it merely reflect
a preference for Scheme's "hygiene", or is there something else behind
the statement?

Barry Margolin

Feb 19, 2004, 11:54:14 AM
In article <tyfad3f...@pcepsft001.cern.ch>,
Jacek Generowicz <jacek.ge...@cern.ch> wrote:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

That may be part of it. He could also be referring to Scheme's
pattern-based macros using syntax-case.

--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***

Paul Dietz

Feb 19, 2004, 12:07:57 PM
Jacek Generowicz wrote:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

One definition of sophisticated: 'lacking natural simplicity'.

Seems right to me.

Paul

Anton van Straaten

Feb 20, 2004, 1:43:07 AM

It's not only hygiene. Like Barry, I would assume that Olin was talking
about the syntax-case macro system:
http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?Syntax-Case
http://www.scheme.com/syntax-case/
http://www.scheme.com/tspl/syntax.html#./syntax:h3

Syntax-case is a rich system that's hard to summarize briefly, but I'll take
a stab at it. I've identified four key features below:

* Although syntax-case is a hygienic macro system, it allows you to
selectively break hygiene. This allows it to do the same kinds of things
defmacro can do, but from the opposite direction: you need to take special
action to break hygiene, rather than having to take action to preserve
hygiene. All things being equal, the syntax-case approach ought to be
preferable; but defmacro fans will tell you that all things aren't equal,
that syntax-case pays a price in terms of complexity.

* Syntax-case supports pattern-matching of syntax, but also supports use of
procedural code, as defmacro does.

* With syntax-case, uses of pattern variables or macro variables don't have
to be escaped in the same way as they do with defmacro, i.e. you get rid of
all the quasiquote, unquote, etc., along with the need to think about that
issue when writing macros. In a 9-line example on c.l.s. the other day[*],
a hygienic pattern-matching macro eliminated "two gensyms, one quasiquote,
six unquotes, and one unquote-splicing." The only time you really need to
think about issues related to escaping of variables is when breaking
hygiene, which is the exception rather than the rule.

* Syntax-case represents syntax using syntax objects with their own
specialized structure, rather than using ordinary lists, as defmacro does.
This is something of a tradeoff, which can be thought of as analogous to the
tradeoff that can arise in almost any Lisp program, when choosing between
implementing a data structure using only lists, or implementing it using
some kind of structure mechanism (defstruct, CLOS). The flexibility of
lists still has some advantages in some situations, but I don't know of
anyone who uses lists exclusively in real programs today. It can be argued
that a similar situation applies to macros, especially more complex ones:
using nothing but lists to manipulate syntax leads to exactly the same sort
of problems and limitations that you have when using nothing but lists to
manipulate an ordinary program's data; but using non-list structures also
loses some simplicity.
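To give a flavor of the first point, here is the classic loop-with-break example (adapted from the pattern described in TSPL; assumes an R6RS-style syntax-case with WITH-SYNTAX and DATUM->SYNTAX):

```scheme
;; WHILE deliberately breaks hygiene so that BREAK, which the macro
;; introduces, is visible in the user-supplied body.
(define-syntax while
  (lambda (stx)
    (syntax-case stx ()
      ((k test body ...)
       ;; DATUM->SYNTAX gives 'break the lexical context of the macro
       ;; *call*, so the user's occurrences of BREAK refer to it.
       (with-syntax ((break (datum->syntax #'k 'break)))
         #'(call-with-current-continuation
            (lambda (break)
              (let loop ()
                (when test body ... (loop))))))))))
```

Without the WITH-SYNTAX/DATUM->SYNTAX step, hygiene would rename the introduced BREAK and the body could never see it.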

My own feeling is that it's useful to have access to more than one macro
system. They all have a different flavor and have different pros and cons.
For example, although syntax-rules is the most restricted of these systems,
it makes for cleaner and simpler macros than defmacro, in many cases where a
purely hygienic macro is wanted. Happily, Lisp being the ultra-flexible
language that it is, anyone who wants to play with syntax-rules in CL can
take a look at:
http://www.ccs.neu.edu/home/dorai/mbe/mbe-lsp.html

Anton

[*]
http://groups.google.com/groups?selm=f0wWb.458%24hm4.393%40newsread3.news.atl.earthlink.net


Pascal Costanza

Feb 20, 2004, 5:41:13 AM

Anton van Straaten wrote:

> It's not only hygiene. Like Barry, I would assume that Olin was talking
> about the syntax-case macro system:
> http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?Syntax-Case
> http://www.scheme.com/syntax-case/
> http://www.scheme.com/tspl/syntax.html#./syntax:h3

Thanks for these links.

> Syntax-case is a rich system that's hard to summarize briefly, but I'll take
> a stab at it. I've identified four key features below:
>
> * Although syntax-case is a hygienic macro system, it allows you to
> selectively break hygiene. This allows it to do the same kinds of things
> defmacro can do, but from the opposite direction: you need to take special
> action to break hygiene, rather than having to take action to preserve
> hygiene. All things being equal, the syntax-case approach ought to be
> preferable; but defmacro fans will tell you that all things aren't equal,
> that syntax-case pays a price in terms of complexity.

Hm, not sure. I think the main issue is perhaps somewhat different.

To me, the conceptual simplicity of CL-style macros is striking: It's
just a transformation of s-expressions. That's it. Once understood, it's
clear that you can do anything with this conceptual model.

Here is an analogy:

In Scheme, you have continuations and space-safe tail recursion. This
guarantees that you can model any kind of iteration construct that you
might need with some guaranteed properties. In Common Lisp you have the
LOOP macro which covers the presumably important cases.

To me, things like syntax-rules and syntax-case look to macro
programming like the LOOP macro looks to iteration. Maybe they really
cover the important cases, but they seem hard to learn. And it
immediately makes me wonder whether it is really worth it. After all, I
know how to make things work with DEFMACRO.

Strange thing is: I like the LOOP macro. There's something mysterious
going on here...

BTW, what you really need to make something like DEFMACRO work is, on
top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
string->uninterned-symbol and most probably a Lisp-2.

My feeling is that Common Lispniks would have an easier time considering
Scheme when appropriate if Scheme implementations would more
clearly document whether they support these features (except, of course,
for the Lisp-2 thing). It's important that you can create uninterned
symbols.
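That recipe can be sketched with the non-standard DEFINE-MACRO found in many Schemes (the macro name OR2 is illustrative, and the gensym spelling varies by implementation):

```scheme
;; All three ingredients in one place: quasiquotation, GENSYM, and
;; plain list construction.
(define-macro (or2 a b)
  (let ((tmp (gensym)))       ; fresh, uninterned symbol: no capture
    `(let ((,tmp ,a))         ; quasiquote/unquote splice in run-time code
       (if ,tmp ,tmp ,b))))

;; (or2 #f 'x) expands to something like
;;   (let ((#:g42 #f)) (if #:g42 #:g42 'x))
;; where #:g42 stands for whatever fresh symbol GENSYM returned.
```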

> * Syntax-case supports pattern-matching of syntax, but also supports use of
> procedural code, as defmacro does.
>
> * With syntax-case, uses of pattern variables or macro variables don't have
> to be escaped in the same way as they do with defmacro, i.e. you get rid of
> all the quasiquote, unquote, etc., along with the need to think about that
> issue when writing macros. In a 9-line example on c.l.s. the other day[*],
> a hygienic pattern-matching macro eliminated "two gensyms, one quasiquote,
> six unquotes, and one unquote-splicing." The only time you really need to
> think about issues related to escaping of variables is when breaking
> hygiene, which is the exception rather than the rule.

I have looked at this example, and the funny thing is that I immediately
start to wonder which elements of the macro definition refer to
macro-expansion-time entities and which refer to run-time stuff. I don't
have to think about these issues with quasiquotations because I
immediately see it from the code.

I am not saying that this makes syntax-case worse than quasiquotation.
Maybe I am just missing something.

> * Syntax-case represents syntax using syntax objects with their own
> specialized structure, rather than using ordinary lists, as defmacro does.
> This is something of a tradeoff, which can be thought of as analogous to the
> tradeoff that can arise in almost any Lisp program, when choosing between
> implementing a data structure using only lists, or implementing it using
> some kind of structure mechanism (defstruct, CLOS). The flexibility of
> lists still has some advantages in some situations, but I don't know of
> anyone who uses lists exclusively in real programs today. It can be argued
> that a similar situation applies to macros, especially more complex ones:
> using nothing but lists to manipulate syntax leads to exactly the same sort
> of problems and limitations that you have when using nothing but lists to
> manipulate an ordinary program's data; but using non-list structures also
> loses some simplicity.

Hm, I recall reading that syntax-case allows for recording line numbers
of the original expression. Are there more advantages?

> My own feeling is that it's useful to have access to more than one macro
> system. They all have a different flavor and have different pros and cons.
> For example, although syntax-rules is the most restricted of these systems,
> it makes for cleaner and simpler macros than defmacro, in many cases where a
> purely hygienic macro is wanted. Happily, Lisp being the ultra-flexible
> language that it is, anyone who wants to play with syntax-rules in CL can
> take a look at:
> http://www.ccs.neu.edu/home/dorai/mbe/mbe-lsp.html

Having more options is certainly better. (Such a statement from a fan of
a supposedly minimal language?!? ;)

Anyway, thanks for your insightful posting. I appreciate your efforts to
bridge the gaps between the Scheme and Common Lisp communities,
especially because this isn't always easy.


Pascal

--
Tyler: "How's that working out for you?"
Jack: "Great."
Tyler: "Keep it up, then."

Tim Bradshaw

Feb 20, 2004, 7:59:55 AM
Jacek Generowicz <jacek.ge...@cern.ch> wrote in message news:<tyfad3f...@pcepsft001.cern.ch>...

>
> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

It requires more pages to describe it. Indeed, isn't the macro
section of the scheme standard bigger than the rest of the standard
put together or something?

Joe Marshall

Feb 20, 2004, 9:42:29 AM
Jacek Generowicz <jacek.ge...@cern.ch> writes:

> My question is ... how should I interpret the claim that Scheme's
> macro system is more "sophisticated" than CL's? Does it merely reflect
> a preference for Scheme's "hygiene", or is there something else behind
> the statement?

You should interpret the claim based on the biases of the person
making the claim.

One way of looking at it is that the macro system performs a
source->source transformation. To do this you need a mechanism for
destructuring the original code that preserves the meaning of the
subforms that are not being transformed, a mechanism for transforming
the fragments, a mechanism for introducing new forms with a meaning
independent of the source context, and a mechanism for combining
these.

The Common Lisp macro system provides a hook and a small amount of
destructuring. The mechanism for transforming the fragments and
combining the results are provided by the built-in list manipulation
facilities.

The rest of the work is a burden that the macro writer
must take on himself. The macro writer must avoid introducing
colliding identifiers so that the original forms, when re-injected,
retain their meaning. He must ensure that introduced gensyms are
correctly referred to in the expanded code so that the new forms are
independent of context. Admittedly, this is not a large burden to
bear for the power of macros.

The Scheme macro system provides a much richer destructuring facility,
and automates the entire process of renaming to preserve meaning. It
provides a richer set of base tools *for that particular problem*, so
in this way it is more sophisticated than the CL macro system.

However, the Scheme macro system does *not* provide much in the way of
transforming the program fragments or combining the fragments and new
code into a result. This is done via pattern matching and templates
that do not give you easy access to the underlying list structure. It
can be amazingly difficult to generate the patterns and templates
needed to do what would be simple list manipulation.
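A small illustration of that asymmetry (REV-BEGIN is a hypothetical macro): reversing the subforms is one call to REVERSE in a defmacro-style macro, but needs a recursive accumulator idiom in syntax-rules:

```scheme
;; syntax-rules: thread an explicit accumulator through the expansion.
(define-syntax rev-begin
  (syntax-rules ()
    ((_ (acc ...))         (begin acc ...))
    ((_ (acc ...) x y ...) (rev-begin (x acc ...) y ...))))

;; (rev-begin () (display 1) (display 2))
;;   expands to (begin (display 2) (display 1))

;; defmacro-style: plain list manipulation does it in one line.
;; (define-macro (rev-begin . forms) `(begin ,@(reverse forms)))
```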

The end result is that the Scheme macro system can be a pain in the
ass to understand and use. From what I've seen, I'm not the only one
in the Scheme community that finds the hygienic macro system
difficult. It seems that the burden of maintaining hygiene manually is
smaller than the burden introduced by maintaining it automatically.


Grzegorz Chrupała

Feb 20, 2004, 11:28:53 AM
Anton van Straaten wrote:


> It's not only hygiene. Like Barry, I would assume that Olin was talking
> about the syntax-case macro system:

Given that Olin's paper is about scsh/Scheme48, and given that AFAIK
Scheme48 does not provide syntax-case, this seems unlikely. Anyway, it's
not terribly difficult for a macro system to be more "sophisticated" (as
opposed to more general) than Common Lisp's defmacro.

--
Grzegorz Chrupała | http://pithekos.net | grze...@jabber.org
gosh -bE "begin (print \
(map (cut number->string <> 36) '(18 806564 1714020422))) (exit)"

Jens Axel Søgaard

Feb 20, 2004, 5:58:57 PM
Pascal Costanza wrote:
> Anton van Straaten wrote:

> My feeling is that Common Lispniks would have an easier time to consider
> using Scheme when appropriate if Scheme implementations would more
> clearly document whether they support these features (except, of course,
> for the Lisp-2 thing). It's important that you can create uninterned
> symbols.

I don't know any Scheme with defmacro that doesn't support gensym.
Note en passant that define-macro in DrScheme is implemented in terms
of syntax-case (one page of code AFAIR).

> I have looked at this example, and the funny thing is that I immediately
> start to wonder which elements of the macro definition refer to
> macro-expansion-time entities and which refer to run-time stuff. I don't
> have to think about these issues with quasiquotations because I
> immediately see it from the code.
>
> I am not saying that this makes syntax-case worse than quasiquotation.
> Maybe I am just missing something.

Colors. Colors and arrows. That's what you are missing:

<http://www.scheme.dk/macros-in-color.png>

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 6:41:06 PM

Jens Axel Søgaard wrote:

> Pascal Costanza wrote:
>
>> Anton van Straaten wrote:
>
>> My feeling is that Common Lispniks would have an easier time to
>> consider using Scheme when appropriate if Scheme implementations would
>> more clearly document whether they support these features (except, of
>> course, for the Lisp-2 thing). It's important that you can create
>> uninterned symbols.
>
> I don't know any Scheme with defmacro that doesn't support gensym.

I have read docs of some Schemes where I had the feeling that the
gensym functions they provided just tried hard to generate
unique strings, but didn't actually rely on object identity / symbol
identity to ensure uniqueness. Maybe I misunderstood those docs. But
that's the whole point of my argument.

In fact, I have just browsed through some of the Scheme docs in order to
double-check my impression. I have to admit it was easier this time to
track down the relevant sections. When a Scheme implements
string->uninterned-symbol, I am relatively happy. When it doesn't, but
it implements gensym, then I wonder how gensym is implemented. Some docs
say that gensym creates a unique symbol, some even state that it's
uninterned. What do the others do?

So at least, I know what to look for by now. Still, I have the feeling
that things could be easier. For example, the Scheme standard could just
standardize string->uninterned-symbol, couldn't it?
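What object identity buys here can be sketched as follows (string->uninterned-symbol is non-standard; it appears under that name in implementations such as MIT Scheme and Gambit):

```scheme
(eq? 'foo 'foo)                           ; #t : interned, one object
(eq? (string->symbol "foo") 'foo)         ; #t : interning finds the
                                          ;      existing symbol
(eq? (string->uninterned-symbol "foo")
     (string->uninterned-symbol "foo"))   ; #f : a fresh object each time

;; A gensym that merely produced unique-looking *names* like G42 could
;; still collide with a symbol the user happens to write; an uninterned
;; symbol can never be EQ? to any other symbol.
```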

> Note en passant that define-macro in DrScheme is implemented in terms
> of syntax-case (one page of code AFAIR).

...but this isn't a Turing equivalence argument, is it? ;)

>> I have looked at this example, and the funny thing is that I
>> immediately start to wonder which elements of the macro definition
>> refer to macro-expansion-time entities and which refer to run-time
>> stuff. I don't have to think about these issues with quasiquotations
>> because I immediately see it from the code.
>>
>> I am not saying that this makes syntax-case worse than quasiquotation.
>> Maybe I am just missing something.
>
> Colors. Colors and arrows. That's what you are missing:
>
> <http://www.scheme.dk/macros-in-color.png>

No, I am red/green colorblind.

Jens Axel Søgaard

Feb 20, 2004, 6:46:17 PM
Pascal Costanza wrote:

>> Colors. Colors and arrows. That's what you are missing:

>> <http://www.scheme.dk/macros-in-color.png>

> No, I am red/green colorblind.

You are in luck, nothing is colored red.

--
Jens Axel Søgaard

Jens Axel Søgaard

Feb 20, 2004, 6:50:53 PM
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:

>>> My feeling is that Common Lispniks would have an easier time to
>>> consider using Scheme when appropriate if Scheme implementations
>>> would more clearly document whether they support these features
>>> (except, of course, for the Lisp-2 thing). It's important that you
>>> can create uninterned symbols.

>> I don't know any Scheme with defmacro that doesn't support gensym.

> I have read docs of some Schemes where I had the feeling that the
> gensym functions they provided just tried hard to generate
> unique strings, but didn't actually rely on object identity / symbol
> identity to ensure uniqueness. Maybe I misunderstood those docs. But
> that's the whole point of my argument.

That's a valid concern. Documentation is worth nothing if it isn't
precise.

> In fact, I have just browsed through some of the Scheme docs in order to
> double-check my impression. I have to admit it was easier this time to
> track down the relevant sections. When a Scheme implements
> string->uninterned-symbol, I am relatively happy. When it doesn't, but
> it implements gensym, then I wonder how gensym is implemented. Some docs
> say that gensym creates a unique symbol, some even state that it's
> uninterned. What do the others do?
>
> So at least, I know what to look for by now. Still, I have the feeling
> that things could be easier. For example, the Scheme standard could just
> standardize string->uninterned-symbol, couldn't it?

I suppose so.

>> Note en passant that define-macro in DrScheme is implemented in terms
>> of syntax-case (one page of code AFAIR).

> ...but this isn't a Turing equivalence argument, is it? ;)

Hence the "en passant" :-)

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 7:17:31 PM

Jens Axel Søgaard wrote:

Red/green colorblindness doesn't "work" like that. For example, it's
really hard for me to parse any useful information from the coloring of
the (define-syntax for ...) form in the example you have linked to.
Unless my eyes are only a few inches away from the screen. It is only
called "red/green colorblindness" because of some biological details,
but the effect doesn't really have to do with the colors red and green
as such, at least not as we perceive them.

In general, it's a good idea to have a colorblind person check colors in
a program, presentation, etc., in order to ensure that they have the
desired effect on them, if it is really important. About 10% of the male
population are colorblind, so I don't think it's negligible.


Pascal

P.S.: Hey, this is still on topic! ;)

Pascal Costanza

Feb 20, 2004, 7:18:42 PM

Jens Axel Søgaard wrote:

>>> Note en passant that define-macro in DrScheme is implemented in terms
>>> of syntax-case (one page of code AFAIR).
>
>
>> ...but this isn't a Turing equivalence argument, is it? ;)
>
>
> Hence the "en passant" :-)

:-))

Jens Axel Søgaard

Feb 20, 2004, 7:39:20 PM
Pascal Costanza wrote:
>
> Jens Axel Søgaard wrote:
>
>> Pascal Costanza wrote:
>>
>>>> Colors. Colors and arrows. That's what you are missing:
>>
>>
>>>> <http://www.scheme.dk/macros-in-color.png>
>>
>>
>>> No, I am red/green colorblind.
>>
>>
>> You are in luck, nothing is colored red.
>
>
> Red/green colorblindness doesn't "work" like that. For example, it's
> really hard for me to parse any useful information from the coloring of
> the (define-syntax for ...) form in the example you have linked to.
> Unless my eyes are only a few inches away from the screen.

I apologize. I sincerely thought you were making a joke (a good one even).

> It is only
> called "red/green colorblindness" because of some biological details,
> but the effect doesn't really have to do with the colors red and green
> as such, at least not as we perceive them.

Does it affect the perception of other colors as well?
I mean, is there a problem with distinguishing, say, a mixture of 50% red
and 50% blue, from a mixture of 50% green and 50% blue?

> In general, it's a good idea to have a colorblind person check colors in
> a program, presentation, etc., in order to ensure that they have the
> desired effect on them, if it is really important. About 10% of the male
> population are colorblind, so I don't think it's negligible.

Which colors would you prefer? Hm. An option
to use alternative means of indicating the various categories,
say underline, bold, italic, etc., would be nice.

--
Jens Axel Søgaard

Pascal Costanza

Feb 20, 2004, 7:57:22 PM

Jens Axel Søgaard wrote:

>> Red/green colorblindness doesn't "work" like that. For example, it's
>> really hard for me to parse any useful information from the coloring
>> of the (define-syntax for ...) form in the example you have linked to.
>> Unless my eyes are only a few inches away from the screen.
>
> I apologize. I sincerely thought you were making a joke (a good one even).

No problem. Really. ;)

>> It is only called "red/green colorblindness" because of some
>> biological details, but the effect doesn't really have to do with the
>> colors red and green as such, at least not as we perceive them.
>
> Does it affect the perception of other colors as well?
> I mean, is there a problem with distinguishing, say, a mixture of 50% red
> and 50% blue, from a mixture of 50% green and 50% blue?

I can't describe the effects, because I really don't know the details.
I am only a victim. ;) Indeed, the mixed colors are the most problematic
ones.

>> In general, it's a good idea to have a colorblind person check colors
>> in a program, presentation, etc., in order to ensure that they have
>> the desired effect on them, if it is really important. About 10% of
>> the male population are colorblind, so I don't think it's negligible.
>
> Which colors would you prefer?

Colors like yellow, orange and blue are generally ok. But the best thing
in my experience is to sit down together with someone affected, explain
to him what you want to achieve, and then agree on some colors.

> Hm. An option
> to use alternative means of indicating the various categories,
> say underline, bold, italic etc would be nice.

Yes, that's also a good idea. Bold and italic can have negative
consequences for the layout, though. Shades of grey are usually also
very good when you don't use too many. This also helps in getting usable
printouts on b/w printers. (This is also a useful rule of thumb: If you
can still distinguish the colors when converted to grey scale, then it's
likely that colorblind people can also distinguish them.)


Pascal

Kaz Kylheku

Feb 21, 2004, 4:00:24 AM
Jacek Generowicz <jacek.ge...@cern.ch> wrote in message news:<tyfad3f...@pcepsft001.cern.ch>...
> [Don't want to start a flamefest, so I'm not crossposting this to
> c.l.s]
>
> In _Lambda: the ultimate "little language"_, Olin Shivers suggests
> (second paragraph of section 9) that " ... Scheme has the most
> sophisticated macro system of any language in this family ...", where
> 'this family' is explicitly stated to include Common Lisp.

This just illustrates the difference between what Schemers tend to
find important or interesting and what Lispers tend to find important or
interesting.

Scheme comes with a sophisticated macro system. Lisp comes with
sophisticated macros.

Given a Lisp system, I can show ready-made macros like LOOP to Lisp
newbies. In a five minute demo, I can expose how an expressive little
language that brings a clear benefit to everyday programming tasks is
implemented by source-to-source transformations on lists.

A slick way to write macros is important if you want to showcase your
macro-writing system to people who already understand macro-writing
systems and their surrounding issues. It's not that imporant if you
want to get a software engineering task done, because the cost of
writing a macro is amortized over all of its uses. Given that
amortization, maintaining a nested backquote stuffed with MAPCAR calls
and whatnot isn't such a big deal. It's not important what the
expander of a macro looks like, and quite often it's not even
important what the expansion looks like as long as it is correct.

The Lisp way of writing macros is powerful and general, because you
are writing code, rather than expressing your intent in a tightly
constrained pattern-matching language. If you want some ad-hoc
irregular syntax, you can just write code to handle it. If you
deliberately want non-hygiene, that is easy. If you want to break open
a macro argument that is a string literal and parse a language
embedded in that, no problem, just write the code. If you want to
treat symbols as name-equivalent so :FOR, #:FOR and MYPACKAGE::FOR
denote the same thing, just write the code. And so forth.

Can someone show me the syntax-rules implementation of something that
closely resembles the CL LOOP macro, with all the clauses? Surely,
this should be a piece of cake using the ``most sophisticated macro
system''. What does syntax-rules bring to the table when you want to
write something like this?

How about the FORMATTER macro? What would be the syntax-rules for
that?

pete kirkham

Feb 21, 2004, 6:10:42 AM
Pascal Costanza wrote:
> Shades of grey are usually also
> very good when you don't use too many. This also helps in getting usable
> printouts on b/w printers. (This is also a useful rule of thumb: If you
> can still distinguish the colors when converted to grey scale, then it's
> likely that colorblind people can also distinguish them.)

Though some kinds of dyslexics find luminosity variations confusing (I
do, though more for backgrounds than foregrounds), so it's better
to have both colour and grey-scale default modes, and the option for
the user to configure these.

One of the projects I'm on uses background colour to differentiate the
status of input fields (unset, entered, inherited from parent,
calculated from child, etc.), and one of the users is colour blind. He
can't tell the difference between the fields with my settings, and I
can't read the entries with his.


Pete
(who ended up studying electronics as he was the only non-colour blind
member of his family, so had to help wire all the plugs as a kid)

Anton van Straaten

Feb 21, 2004, 8:57:30 PM
Pascal Costanza wrote:
> > Syntax-case is a rich system that's hard to summarize briefly, but I'll
> > take a stab at it. I've identified four key features below:
> >
> > * Although syntax-case is a hygienic macro system, it allows you to
> > selectively break hygiene. This allows it to do the same kinds of things
> > defmacro can do, but from the opposite direction: you need to take special
> > action to break hygiene, rather than having to take action to preserve
> > hygiene. All things being equal, the syntax-case approach ought to be
> > preferable; but defmacro fans will tell you that all things aren't equal,
> > that syntax-case pays a price in terms of complexity.
>
> Hm, not sure. I think the main issue is perhaps somewhat different.
>
> To me, the conceptual simplicity of CL-style macros is striking: It's
> just a transformation of s-expressions. That's it.

That's what all of these macro systems are.

> Once understood, it's clear that you can do anything
> with this conceptual model.

The same is true of syntax-case.

> To me, things like syntax-rules and syntax-case look to macro
> programming like the LOOP macro looks to iteration. Maybe they really
> cover the important cases, but they seem hard to learn.

Syntax-rules is not hard to learn. If anything, it suffers from being
almost too simple; as well as from lacking good, short introductory
material. You specify a pattern, and specify the syntax that should replace
that pattern. That's all there is to it.

Syntax-case is more complex, and I do think that's a drawback when compared
to defmacro. It increases the temptation to conclude the following:

> And it immediately makes me wonder whether it is really worth it.
> After all, I know how to make things work with DEFMACRO.

I might wonder something similar if I were a Python programmer looking at
Lisp: Lisp seems hard to learn, and I would know how to make things work
with Python.

> BTW, what you really need to make something like DEFMACRO work is, on
> top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
> string->uninterned-symbol and most probably a Lisp-2.

I don't see that Lisp-2 is an issue. As Jens pointed out, all the major
Schemes implement defmacro & gensym, and it works fine. And, of course,
quasiquotation is standardized by R5RS.

> My feeling is that Common Lispniks would have an easier time to consider
> using Scheme when appropriate if Scheme implementations would more
> clearly document whether they support these features (except, of course,
> for the Lisp-2 thing). It's important that you can create uninterned
> symbols.

I wouldn't expect many Common Lispniks to use Scheme when appropriate.
Unless they already like Scheme, why should they? And if they already
like Scheme, they're unlikely to be put off by such vague concerns as the
above. For the record, I'm not aware of any limitations on the defmacro
capabilities in the many Schemes which implement it.

> I have looked at this example, and the funny thing is that I immediately
> start to wonder which elements of the macro definition refer to
> macro-expansion-time entities and which refer to run-time stuff. I don't
> have to think about these issues with quasiquotations because I
> immediately see it from the code.

You don't have to think about these issues with syntax-rules - you're only
used to thinking about them because you're forced to, with defmacro.

In terms of name scoping, syntax-rules macros work exactly the same way name
scoping does with lambda. If you're comfortable with the way name scoping
works with lambda, you should have no problems with syntax-rules.

To back that up: in a given lambda expression, all names used are defined in
one of the following places:

* the lambda formals list;
* within the lambda body;
* in a lexically enclosing scope;
* in the dynamic environment.

In a syntax-rules macro, all names used are defined in:

* the syntax-rules pattern;
* within the syntax-rules body;
* in a lexically enclosing scope;
* in the dynamic environment.

There's no more need to quote or unquote names in a syntax-rules macro, than
there is in an ordinary lambda expression.
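To illustrate with a made-up example: a hygienic two-armed `or' written in syntax-rules. Nothing in the template is quoted or unquoted, and the temporary needs no gensym:

```scheme
(define-syntax my-or2
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (if tmp tmp b)))))

;; Hygiene renames the macro's tmp automatically, so a caller's
;; own tmp is never captured:
;; (let ((tmp 5)) (my-or2 #f tmp))  ; evaluates to 5, not #f
```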

The reason you have to worry about quoting and unquoting of names in
defmacro is because you need to control whether or not you capture names
from the invoking environment. With pure hygienic macros like syntax-rules,
you simply can't do that, so it's not an issue.

Of course, quoting in defmacro is not just about names: it's also about
syntax. The reason syntax doesn't have to be quoted in syntax-rules macros
is that they rely on pattern matching and template replacement, not
procedural code. No procedural code means that except for macro variables
(addressed above), and special tokens like "...", everything in a macro
template is syntax at the same level, so there's no need to quote it.

The result is that it's actually easier to reason about syntax-rules
macros - which makes them easier to write, and easier to read. As a result,
and also because of the enforced hygiene, they're less error-prone.

Note that I'm not claiming syntax-rules macros are appropriate in all
cases - obviously, where you want to be able to break hygiene, they're not a
good solution (although it's possible to "break hygiene" with syntax-rules,
by redefining binding operators like lambda and let [*]).

> I am not saying that this makes syntax-case worse than quasiquotation.
> Maybe I am just missing something.

I don't think it's possible to compare without actually learning to use both
systems. If you're interested in becoming a bit more familiar, I'd
recommend starting with syntax-rules, which is very simple and easy to get
up to speed with. It also has the benefit that you could use it from within
CL via Dorai Sitaram's package.

> Hm, I recall reading that syntax-case allows for recording line numbers
> of the original expression. Are there more advantages?

Using domain-specific structures instead of lists to represent syntax means
that you can associate any information you want to with source code. As
Jens pointed out, PLT uses this to good effect. I think there's an argument
that this is an obvious way forward for Lisp-like languages - things like
refactoring tools and any other kinds of automated code processing can
benefit from it.

> Having more options is certainly better. (Such a statement from a fan of
> a supposedly minimal language?!? ;)

I think options are required here because none of these systems are perfect
in all respects. But the core language is still minimal - these macro
systems are all surface features, for syntax transformation. The portable
syntax-case system can be implemented in almost any standard Scheme, and
both syntax-rules and defmacro are easy to implement in terms of
syntax-case. You can view syntax-rules and defmacro as applications of
syntax-case. The reverse is not the case.

> Anyway, thanks for your insightful posting. I appreciate your efforts to
> bridge the gaps between the Scheme and Common Lisp communities,
> especially because this isn't always easy.

Thanks. I'm more interested in the underlying ideas, than in language
advocacy. Systems for transforming s-expressions are relevant to all Lisp
dialects.

Anton

[*]
http://groups.google.com/groups?selm=7eb8ac3e.0203271253.74bb0819%40posting.google.com


Anton van Straaten

Feb 21, 2004, 11:16:47 PM
Pascal Costanza wrote:
> I have read docs of some Schemes where I had the feeling that the
> respectively provided gensym functions just tried hard to generate
> unique strings, but didn't actually rely on object identity / symbol
> identity to ensure uniqueness. Maybe I misunderstood those docs.
> But that's the whole point of my argument.

It seems to me that you're exercising unwarranted suspicion. Do you have
any reason to believe gensym won't work the way it should? I suspect the
authors of whichever docs you're referring to may not have bothered to
specifically address the details you're concerned about, simply because it
wouldn't make sense to implement a gensym that doesn't work reliably in all
cases, and it seems almost too obvious to mention that.

> In fact, I have just browsed through some of the Scheme docs in order to
> double-check my impression. I have to admit it was easier this time to
> track down the relevant sections. When a Scheme implements
> string->uninterned-symbol, I am relatively happy. When it doesn't, but
> it implements gensym, then I wonder how gensym is implemented. Some docs
> say that gensym creates a unique symbol, some even state that it's
> uninterned. What do the others do?

You're worrying about internal implementation details, which only matter in
two cases: if the observed behavior is incorrect (which afaik, is not the
case); or if you have some particular interest in implementation details.
In the latter case, you shouldn't expect the documentation that tells you
how to use the language to give all the specifics of the internals.
However, many Schemes do have papers available which go into some detail
about their internals.

> So at least, I know what to look for by now. Still, I have the feeling
> that things could be easier.

Do you mean that it could be easier for a CL user to find information that
satisfies him that something in Scheme has the same observable semantics as
it does in CL? I could make the same claim about CL. One way to address
that would be "CL for Scheme users" and "Scheme for CL users" (I think I've
seen at least one of those somewhere.)

> For example, the Scheme standard could just
> standardize string->uninterned-symbol, couldn't it?

You could submit a SRFI for it. But are you suggesting standardizing
string->uninterned-symbol simply to force a particular implementation style
for gensym? If so, why is that necessary?

> > Note en passant that define-macro in DrScheme is implemented in terms
> > of syntax-case (one page of code AFAIR).
>
> ...but this isn't a Turing equivalence argument, is it? ;)

No, but it does say something about the generality and power of syntax-case.
The PLT defmacro implementation is about 110 lines in total, much of which
is handling exceptions and two syntax variations.

Anton


Joe Marshall

Feb 22, 2004, 12:38:55 PM
k...@ashi.footprints.net (Kaz Kylheku) writes:

> Can someone show me the syntax-rules implementation of something that
> closely resembles the CL LOOP macro, with all the clauses? Surely,
> this should be a piece of cake using the ``most sophisticated macro
> system''.

See http://okmij.org/ftp/Scheme/macros.html

Oleg has written a compiler that transforms Scheme code to the
appropriate pattern-matching rules.

> What does syntax-rules bring to the table when you want to write
> something like this?

Common Lisp has some restrictions to ensure that macros can work
correctly. For instance, you are not permitted to shadow identifiers
in the CL package. Syntax-rules does not have this restriction.


--
~jrm

Pascal Costanza

Feb 22, 2004, 2:19:10 PM
Anton van Straaten wrote:

> Pascal Costanza wrote:
>
>>To me, the conceptual simplicity of CL-style macros is striking: It's
>>just a transformation of s-expression. That's it.
>
> That's what all of these macro systems are.

R5RS doesn't say so. At least, I don't see where the term "macro
transformer" is defined. It seems to me that the standard tries hard to
hide that fact. (But I might simply not have found the relevant sections.)

>>Once understood, it's clear that you can do anything
>>with this conceptual model.
>
> The same is true of syntax-case.

Of course, I will take your word for that. But I still don't understand
what syntax-case does. I have browsed through the various links that are
usually referred to (mainly papers and a book by Dybvig), but I find it
very hard to follow the contents. It would be good if there existed
some kind of high-level overview of syntax-case for people who
already know DEFMACRO.

>>To me, things like syntax-rules and syntax-case look to macro
>>programming like the LOOP macro looks to iteration. Maybe they really
>>cover the important cases, but they seem hard to learn.
>
> Syntax-rules is not hard to learn. If anything, it suffers from being
> almost too simple; as well as from lacking good, short introductory
> material. You specify a pattern, and specify the syntax that should replace
> that pattern. That's all there is to it.

Examples like those given in
http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
that syntax-rules just trades one set of possible pitfalls for a
different set, but along the way the conceptual simplicity is lost.

Here are the examples from that reference implemented with DEFMACRO:

(defun foo-f (x)
  (flet ((id (x) x))
    (id (1+ x))))

(defmacro foo-m (x)
  `(macrolet ((id (x) x))
     (id (1+ ,x))))

(defmacro bar-m2 (var &body body)
  `(macrolet ((helper (&body body)
                `(lambda (,',var) ,@body)))
     (helper ,@body)))


I really don't see the problem. Seriously not.

> Syntax-case is more complex, and I do think that's a drawback when compared
> to defmacro. It increases the temptation to conclude the following:
>
>>And it immediately makes me wonder whether it is really worth it.
>>After all, I know how to make things work with DEFMACRO.
>
> I might wonder something similar if I were a Python programmer looking at
> Lisp: Lisp seems hard to learn, and I would know how to make things work
> with Python.

Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
program generators, it's clear that their language is missing something
important. Of course, they can write a Lisp interpreter in Python, but
that's beside the point.

Do you really think that syntax-case is an equally important step forward?

>>BTW, what you really need to make something like DEFMACRO work is, on
>>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
>>string->uninterned-symbol and most probably a Lisp-2.
>
> I don't see that Lisp-2 is an issue.

See http://citeseer.nj.nec.com/bawden88syntactic.html

Here are comments on the examples given in that paper from a Lisp-2
point of view:

1. Lexical variable bindings in the client code don't capture function
definitions used in macros:

(let ((cons 5))
  (push 'foo stack))

is not an issue.

2. A client's lexical reference wouldn't conflict with a binding
introduced by a macro because the macro would make sure to use unique
symbols:

(defmacro or (exp-1 exp-2)
  (let ((temp (gensym)))
    `(let ((,temp ,exp-1))
       (if ,temp ,temp ,exp-2))))

The expansion of (or (memq x y) temp) would be correct. This isn't
really a Lisp-2 issue. Note, however, that there are good reasons why
one would want to introduce a new binding in a macro. As long as this is
documented, there is no real problem here. This example boils down to
the question of what a reasonable default is and how hard it is to get
the other variant.

3. The third example in that paper shows how a newly introduced lexical
function binding may interfere with a function that is used in a macro
expansion. However, the names of global function definitions are an
important part of an application's ontology. You know that you are
redefining an important element of your application when meaningful
names are used. Different ontologies can be separated by proper
namespace mechanisms.

Here it helps that a Lisp-2 separates variables and functions by
default. Variables are usually not important parts of an application's
ontology. If they are, the convention in Common Lisp is to use proper
naming schemes, like asterisks for special variables. Effectively, this
creates a new namespace. (ISLISP is a little bit more explicit than
Common Lisp in this regard.)

4. The fourth example can be solved with a proper GENSYM for "use" in
the "contorted" macro. Again, there are good reasons why one would want
to introduce a new binding in a macro.

> As Jens pointed out, all the major
> Schemes implement defmacro & gensym, and it works fine. And, of course,
> quasiquotation is standardized by R5RS.

OK, I believe you.

>>My feeling is that Common Lispniks would have an easier time to consider
>>using Scheme when appropriate if Scheme implementations would more
>>clearly document whether they support these features (except, of course,
>>for the Lisp-2 thing). It's important that you can create uninterned symbols.
>
> I wouldn't expect many Common Lispniks to use Scheme when appropriate.
> Unless they already like Scheme, why should they? And if they already
> like Scheme, they're unlikely to be put off by such vague concerns as the
> above. For the record, I'm not aware of any limitations on the defmacro
> capabilities in the many Schemes which implement it.

Some time ago, I wanted to experiment with continuations in Scheme.
Apart from the fact that not all Schemes seem to implement continuations
fully and/or correctly (see
http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
respective documentation makes me feel uneasy about whether I have to
relearn programming techniques for totally unrelated areas is a clear
downside IMHO.

[...]


> The result is that it's actually easier to reason about syntax-rules
> macros - which makes them easier to write, and easier to read. As a result,
> and also because of the enforced hygiene, they're less error-prone.

I don't mind using DEFMACRO for simple things. I don't find them hard to
write or read, and I don't know why they would be more error-prone.
Sounds similar to some of the claims made by advocates of static type
systems. Maybe this boils down to just a matter of taste.

>>I am not saying that this makes syntax-case worse than quasiquotation.
>>Maybe I am just missing something.
>
> I don't think it's possible to compare without actually learning to use both
> systems. If you're interested in becoming a bit more familiar, I'd
> recommend starting with syntax-rules, which is very simple and easy to get
> up to speed with. It also has the benefit that you could use it from within
> CL via Dorai Sitaram's package.

OK.

>>Hm, I recall reading that syntax-case allows for recording line numbers
>>of the original expression. Are there more advantages?
>
> Using domain-specific structures instead of lists to represent syntax means
> that you can associate any information you want to with source code. As
> Jens pointed out, PLT uses this to good effect. I think there's an argument
> that this is an obvious way forward for Lisp-like languages - things like
> refactoring tools and any other kinds of automated code processing can
> benefit from it.

OK

>>Having more options is certainly better. (Such a statement from a fan of
>>a supposedly minimal language?!? ;)
>
> I think options are required here because none of these systems are perfect
> in all respects. But the core language is still minimal - these macro
> systems are all surface features, for syntax transformation. The portable
> syntax-case system can be implemented in almost any standard Scheme, and
> both syntax-rules and defmacro are easy to implement in terms of
> syntax-case. You can view syntax-rules and defmacro as applications of
> syntax-case. The reverse is not the case.

What stands in the way of implementing syntax-case on top of DEFMACRO?
(This is not a rhetorical question.)

>>Anyway, thanks for your insightful posting. I appreciate your efforts to
>>bridge the gaps between the Scheme and Common Lisp communities,
>>especially because this isn't always easy.
>
> Thanks. I'm more interested in the underlying ideas, than in language
> advocacy. Systems for transforming s-expressions are relevant to all Lisp
> dialects.

Agreed.

Pascal Costanza

Feb 22, 2004, 2:36:23 PM

Anton van Straaten wrote:

> Pascal Costanza wrote:
>
>>I have read docs of some Schemes where I had the feeling that the
>>respectively provided gensym functions just tried hard to generate
>>unique strings, but didn't actually rely on object identity / symbol
>>identity to ensure uniqueness. Maybe I misunderstood those docs.
>>But that's the whole point of my argument.
>
> It seems to me that you're exercising unwarranted suspicion. Do you have
> any reason to believe gensym won't work the way it should? I suspect the
> authors of whichever docs you're referring to may not have bothered to
> specifically address the details you're concerned about, simply because it
> wouldn't make sense to implement a gensym that doesn't work reliably in all
> cases, and it seems almost too obvious to mention that.

OK, I have to admit that this is my mistake. The last time I wanted to
know whether a Scheme implementation suited my needs, I looked at
SISC's documentation, among others. At that time I didn't know about
string->uninterned-symbol, but expected something more Common Lispish.

I have found the following section which made me very skeptical:
http://sisc.sourceforge.net/manual/html/apc.html#N15287

To my ears, this sounded a lot like they were trying to achieve
uniqueness by properly named strings. However, now that I have checked
the docs again I have noticed that the details of that section serve a
different purpose.

Someone should write a highly opinionated guide to Scheme. ;) (not me)

>>In fact, I have just browsed through some of the Scheme docs in order to
>>double-check my impression. I have to admit it was easier this time to
>>track down the relevant sections. When a Scheme implements
>>string->uninterned-symbol, I am relatively happy. When it doesn't, but
>>it implements gensym, then I wonder how gensym is implemented. Some docs
>>say that gensym creates a unique symbol, some even state that it's
>>uninterned. What do the others do?
>
> You're worrying about internal implementation details, which only matter in
> two cases: if the observed behavior is incorrect (which afaik, is not the
> case); or if you have some particular interest in implementation details.
> In the latter case, you shouldn't expect the documentation that tells you
> how to use the language, to give all the specifics of the internals.

OK, thanks for being insistent.

>>So at least, I know what to look for by now. Still, I have the feeling
>>that things could be easier.
>
> Do you mean that it could be easier for a CL user to find information that
> satisfies him that something in Scheme has the same observable semantics as
> it does in CL? I could make the same claim about CL. One way to address
> that would be "CL for Scheme users" and "Scheme for CL users" (I think I've
> seen at least one of those somewhere.)

...maybe a joint paper for both communities...

>>For example, the Scheme standard could just
>>standardize string->uninterned-symbol, couldn't it?
>
> You could submit a SRFI for it. But are you suggesting standardizing
> string->uninterned-symbol simply to force a particular implementation style
> for gensym? If so, why is that necessary?

It's not. Just a wrong expectation on my side.

Jens Axel Søgaard

Feb 22, 2004, 3:31:00 PM
Pascal Costanza wrote:

> Some while ago, I wanted to experiment with continuations in Scheme.
> Apart from the fact that not all Schemes seem to implement continuations
> fully and/or correctly (see
> http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
> respective documentations make me feel uneasy about whether I have to
> relearn programming techniques for totally unrelated areas is a clear
> downside IMHO.

Most of these errors are not about call/cc but letrec
(e.g. 1.1 and 1.2 [I was too lazy to check the others]), where some
implementors have chosen to deviate slightly from the standard (some
refer to it as letrec*). This deviation requires the use of call/cc to
observe, and that's why the test examples are filled with call/cc.

I wouldn't be surprised if the behaviour becomes sanctioned in R6RS.

>>> Hm, I recall reading that syntax-case allows for recording line numbers
>>> of the original expression. Are there more advantages?

Perhaps you have already read it, but Dybvig's "Writing Hygienic Macros
in Scheme with Syntax-Case", available at

<ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz>

is one of the best expositions of syntax-case from a user's viewpoint.

> What stands in the way of implementing syntax-case on top of DEFMACRO?
> (This is not a rhetorical question.)

I can't see any. Hm. Perhaps one could simply throw the Scheme source
of syntax-case through PseudoScheme?

--
Jens Axel Søgaard

Jens Axel Søgaard

Feb 22, 2004, 3:35:59 PM
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:

>> Does it affect the perception of other colors as well?
>> I mean, is there a problem with distinguishing, say, a mixture of 50% red
>> and 50% blue, from a mixture with 50% green and 50% blue?

> I can't describe the effects, because I really don't know the details.
> I am only a victim. ;) Indeed, the mixed colors are the most problematic
> ones.

OK

>>> In general, it's a good idea to have a colorblind person check colors
>>> in a program, presentation, etc., in order to ensure that they have
>>> the desired effect on them, if it is really important. About 10% of
>>> the male population are colorblind, so I don't think it's negligible.

>> Which colors would you prefer?

> Colors like yellow, orange and blue are generally ok. But the best thing
> in my experience is to sit down together with someone affected, explain
> to him what you want to achieve, and then agree on some colors.
>
>> Hm. An option
>> to use alternative means of indicating the various categories,
>> say underline, bold, italic etc would be nice.
>
>
> Yes, that's also a good idea. Bold and italic can have negative
> consequences for the layout, though. Shades of grey are usually also
> very good when you don't use too many. This also helps in getting usable
> printouts on b/w printers. (This is also a useful rule of thumb: If you
> can still distinguish the colors when converted to grey scale, then it's
> likely that colorblind people can also distinguish them.)

I'll keep that in mind.

--
Jens Axel Søgaard

Brian Mastenbrook

Feb 22, 2004, 4:29:03 PM
In article <P2WZb.14144$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> It seems to me that you're exercising unwarranted suspicion. Do you have
> any reason to believe gensym won't work the way it should? I suspect the
> authors of whichever docs you're referring to may not have bothered to
> specifically address the details you're concerned about, simply because it
> wouldn't make sense to implement a gensym that doesn't work reliably in all
> cases, and it seems almost too obvious to mention that.

Perhaps someone should tell that to Dybvig:

> (define g1 '#{g18 |%japmchQ4DMDM\\%+|})
> (define g2 (gensym))
> g2

Error in intern-gensym: unique name "%japmchQ4DMDM\\\\%+" already
interned.
Type (debug) to enter the debugger.

I did this by observing the pattern of printed gensyms and guessing the
next value in the sequence.

--
Brian Mastenbrook
http://www.cs.indiana.edu/~bmastenb/

Anton van Straaten

Feb 22, 2004, 6:03:46 PM
Pascal Costanza wrote:
> Anton van Straaten wrote:
>
> > Pascal Costanza wrote:
> >
> >>To me, the conceptual simplicity of CL-style macros is striking: It's
> >>just a transformation of s-expression. That's it.
> >
> > That's what all of these macro systems are.
>
> R5RS doesn't say so. At least, I don't see where the term "macro
> transformer" is defined. It seems to me that the standard tries hard to
> hide that fact. (But I might simply not have found the relevant sections.)

If you look up "macro transformer" in the index, it points you to a page
which contains the following definition (Sec. 4.3, Macros):

"The set of rules that specifies how a use of a macro is transcribed into a
more primitive expression is called the 'transformer' of the macro."

I don't think it's hiding anything. Do you think otherwise?

> >>Once understood, it's clear that you can do anything
> >>with this conceptual model.
> >
> > The same is true of syntax-case.
>
> Of course, I will take your word for that. But I still don't understand
> what syntax-case does. I have browsed through the various links that are
> usually referred (mainly papers and a book by Dybvig), but I find it
> very hard to follow the contents. It would be good if there would exist
> some kind of high-level overview about syntax-case for people who
> already know DEFMACRO.

I agree the docs don't make it easy to get into at first. I learned
syntax-case (up to a point - I'm not an expert by any means) after first
learning and using syntax-rules for some time, and having previously been
familiar with defmacro. I think syntax-rules makes a good starting point,
because it teaches the hygienic pattern matching approach in a simpler
context. That same approach is used by syntax-case, but augmented with a
much more powerful procedural syntax manipulation capability.

But I'll ignore my own advice and take a stab at explaining syntax-case,
starting from a defmacro perspective. Perhaps the gentlest introduction to
syntax-case is Dybvig's paper, "Writing Hygienic Macros in Scheme with
Syntax-Case":
ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz ,
and I'll use some of its examples below.

Start with defmacro, and imagine that instead of quoting syntax using
quasiquote, you use a special form, 'syntax', which instead of returning a
list as quasiquote does, returns a syntax object, i.e. an instance of an
abstract data type which represents a piece of syntax. This type has an API
which supports various code walking & manipulation capabilities. It can
also be converted to a list (or whatever value the original syntax
represented) via 'syntax-object->datum'.

An important thing to note here is that a syntax object "understands" the
syntax it represents - it's not just an undifferentiated list. It knows
which values are identifiers, it knows things about where those identifiers
are bound, and as we've touched on, it can track things like line numbers
(which may be implementation-dependent). If you're developing code
manipulation tools - editors, debuggers etc. - these syntax objects give you
a capability which defmacro doesn't even attempt to address. Syntax objects
are a richer way to represent program syntax than lists, and their uses go
beyond just macros.

Within a syntax expression of the form (syntax ...), any references to macro
variables are replaced with their values, i.e. there's no need to unquote
references to macro variables. So to steal an example from Dybvig, here's
an 'and2' macro which works like 'and', but for only two operands:

(define-syntax and2
  (lambda (x)
    (syntax-case x ()
      ((_ x y)
       (syntax (if x y #f))))))

The (lambda (x) ...) binds a syntax object to x, representing the syntax of
the expression which invoked the macro. In theory, you can do whatever you
want with that syntax object. Most commonly, you'll use syntax-case to do a
pattern match on it, which is what the above example does with the
expression (syntax-case x () ...). The () is for literals, like 'else' in
cond.

Within the above syntax-case expression, there's a single pattern and a
corresponding template:

((_ x y)
 (syntax (if x y #f)))

The underscore in (_ x y) represents the name of the macro - you could also
write (and2 x y), it doesn't matter. This pattern will match any
invocation of AND2 with two parameters. After the pattern is the
expression which will be executed when that pattern is matched. In this
case, it's simply a syntax expression which returns the expression
(if x y #f). The return value from syntax-case must be a syntax object,
which represents the actual syntax of the final program.

In the above, since x and y are macro variables (strictly speaking, "pattern
variables"), their values will be substituted when the syntax object is
created, so that (and2 4 5) becomes (if 4 5 #f).
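To see the hygiene at work (my own example, not from Dybvig's paper): identifiers in the template which aren't pattern variables - here, `if' - keep the meaning they had where the macro was defined, even if the caller shadows them. In a hygienic implementation:

```scheme
;; Scheme keywords aren't reserved words, so a caller may
;; lexically bind the name `if' - but and2's template still
;; refers to the real `if' from its definition environment.
(let ((if (lambda (a b c) 'broken)))
  (and2 1 2))   ; expands using the definition-site if; yields 2
```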

I think I'll stop there for now, since I have other things to do! I've
touched on some of the more important points about syntax-case. There's a
lot more to it than the above - particularly, breaking hygiene, and
executing procedural code rather than simply applying a template. But all
of it fits into the above framework, and involves manipulating syntax
objects, in a way similar to what you would do in defmacro, but through the
syntax object API rather than manipulating lists. There are plenty more
examples in the Dybvig paper.

> > Syntax-rules is not hard to learn. If anything, it suffers from being
> > almost too simple; as well as from lacking good, short introductory
> > material. You specify a pattern, and specify the syntax that should
> > replace that pattern. That's all there is to it.
>
> Examples like those given in
> http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
> that syntax-rules just trade one set of possible pitfalls with a
> different set, but along that way the conceptual simplicity is lost.

I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
a different approach, which in some ways is conceptually simpler than
defmacro, since it doesn't require the user to manually keep track of the
different levels at which the macro operates. The pitfalls you mention may
indeed be flaws in syntax-rules - I'm not familiar enough with them to
comment - but I find that syntax-rules works very well for many kinds of
macros, better than defmacro in fact.

Of course, the latter claim is hardly ever going to be accepted by someone
only familiar with defmacro. For the record, I learned and used defmacro
before ever using syntax-rules or syntax-case, and I still use defmacro from
time to time, so I think I have a good basis for comparison.

> Here are the examples from that reference implemented with DEFMACRO:
>
> (defun foo-f (x)
>   (flet ((id (x) x))
>     (id (1+ x))))
>
> (defmacro foo-m (x)
>   `(macrolet ((id (x) x))
>      (id (1+ ,x))))
>
> (defmacro bar-m2 (var &body body)
>   `(macrolet ((helper (&body body)
>                 `(lambda (,',var) ,@body)))
>      (helper ,@body)))
>
>
> I really don't see the problem. Seriously not.

I'm not sure what you mean about not seeing the problem. One of the
problems mentioned in the article is that syntax-rules pattern variables
don't shadow. I don't know if there's a justification for that, or it's
simply a bug in the design of syntax-rules. But you usually get an error if
you make this mistake, and it's easy to fix, and easy to avoid. It doesn't
mean that syntax-rules is not useful, and it's still better than defmacro,
which you can't dispute until you've learned syntax-rules. ;)

> > Syntax-case is more complex, and I do think that's a drawback when
> > compared to defmacro. It increases the temptation to conclude the following:
> >
> >>And it immediately makes me wonder whether it is really worth it.
> >>After all, I know how to make things work with DEFMACRO.
> >
> > I might wonder something similar if I were a Python programmer looking
> > at Lisp: Lisp seems hard to learn, and I would know how to make things work
> > with Python.
>
> Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
> program generators, it's clear that their language is missing something
> important. Of course, they can write a Lisp interpreter in Python, but
> that's beside the point.
>
> Do you really think that syntax-case is an equally important step forward?

In some respects, yes, but that's not what I really meant. It's easy to
look at something from the outside and find reasons not to try it, and that
was my main point. But the points you've been picking on don't seem very
substantial to me - it seems as though you're looking for reasons to ignore
these systems, rather than looking for reasons you might want to learn them.
To conclude from the points you've raised that these systems can be ignored,
seems to me to throw a bunch of babies out with the bathwater. (They're
much cuter babies than that warty defmacro baby, too! ;)

Of course, a CL programmer who wants to write standard CL code, obviously
has little incentive to be interested in other macro systems. But if your
interests cross multiple languages, then there's value in the Scheme macro
systems, at the very least in the sense that learning more than one approach
to the same problem expands the horizons of your understanding of the
problem.

> >>BTW, what you really need to make something like DEFMACRO work is, on
> >>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
> >>string->uninterned-symbol and most probably a Lisp-2.
> >
> > I don't see that Lisp-2 is an issue.
>
> See http://citeseer.nj.nec.com/bawden88syntactic.html

I'm familiar with why people claim it's an issue, but in practice I think
it's not significantly worse than the issue of hygiene in defmacros in
general. As I've said and defended here once before, Lisp-1 can express any
Lisp-2 program, simply by changing any conflicting names so as not to
conflict - a conceptually trivial transformation, with consequences which
are primarily subjective. It would have an impact on porting Lisp-2 macros
to Lisp-1, but it doesn't limit what you can easily express in Lisp-1.

Put another way, having the ability to accidentally compensate for hygiene
violations in some cases - where multiple namespaces happen to prevent the
problem - isn't a solution to the general problem of not having hygiene.
Since you haven't solved the general problem, you still have to address
questions of hygiene, in various low-level ways.  A single namespace doesn't
make this problem worse in any significant way.

> Here it helps that a Lisp-2 separates variables and functions by
> default. Variables are usually not important parts of an application's
> ontology. If they are, the convention in Common Lisp is to use proper
> naming schemes, like asterisks for special variables. Effectively, this
> creates a new namespace.

Mmm, asterisks. This, to me, is why the whole Lisp-1/2 debate is moot. The
solution is simply Lisp-N, where you can define namespaces, modules, etc.
and control how they're used. See PLT Scheme etc.

> 4. The fourth example can be solved with a proper GENSYM for "use" in
> the "contorted" macro.

The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
low level of abstraction. Why don't you use GENSYM when declaring normal
lexical variables in a procedure? Rhetorical question, of course - the
point is, GENSYM is a kludge. It's not a particularly onerous one, but it's
part of what makes defmacro worth improving on.
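
To make the point concrete (a standard textbook example, not taken from the
thread): without a gensym, a defmacro that introduces a binding can capture
the caller's variables, and the gensym is the manual repair:

```lisp
;; Unhygienic: if the caller's code mentions TMP, this macro breaks.
(defmacro bad-swap (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))

;; The conventional repair: manufacture a fresh name with GENSYM.
(defmacro swap (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))
```

Nothing forces you to remember the gensym; forgetting it yields a macro that
works until someone happens to use the wrong variable name at a call site.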

> Some while ago, I wanted to experiment with continuations in Scheme.
> Apart from the fact that not all Schemes seem to implement continuations
> fully and/or correctly (see
> http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
> respective documentations make me feel uneasy about whether I have to
> relearn programming techniques for totally unrelated areas is a clear
> downside IMHO.

We're straying far afield here. ;) But I'll give my opinion about
continuations, too. Re the quality of implementations, once again you're
looking at edge cases. Forget about those, they're not important, except
in, well, edge cases. All of the major Schemes either support continuations
well, or tell you when they don't - e.g., some of the Scheme to C compilers
deliberately provide restricted continuations.

As far as relearning programming techniques goes, first and foremost,
continuations are a general conceptual model for control flow. If you only
write single-threaded code in a language with a traditional linear
stack-based control flow, you won't have much use for continuations -
they're far more powerful and general than is needed to deal with that case.
But for systems with more complex control flow, continuations can provide a
very useful model - web servers are just one example, but really any system
which involves multiple threads, distributed processing, etc. can benefit
from modeling via continuations.

Scheme is one of very few languages - along with SML/NJ, Stackless Python,
and the RhinoWithContinuations version of Javascript - which implements
first-class continuations. If you're developing tools in the spaces
mentioned above, this is a useful capability. Stackless Python uses
continuations to support microthreading; Scheme has a number of web server
solutions which use continuations to "invert control" so that the structure
of a web application's code can be decoupled from its web page structure;
and RhinoWithContinuations does something similar for web applications in
the Cocoon web framework. For applications which need them, continuations
are very useful, and have little competition. Their competition is mainly
OS-level threads, which really solve a different problem, and conceptually
are a stream of continuations anyway.

For ordinary programming, though, continuations are more or less
irrelevant - they should be dealt with under the hood, whether by tools like
those web server frameworks, or by language constructs like exception
handlers and microthreads. The only reason to learn about use of
first-class continuations as a programming construct is either for the sake
of learning, to deepen your understanding of programming; or if you are
interested in developing language or system tools that use them. If you are
interested in any of this, then yes, you're going to have to do some
learning and relearning - there's no way around that. But for most ordinary
applications, you can safely ignore continuations.
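
As a concrete (and deliberately mundane) sketch of what a first-class
continuation buys you, here is the standard early-exit idiom, using only
R5RS forms:

```scheme
;; Return the first even element of lst, or #f if there is none.
;; call/cc captures the "rest of the computation" as the procedure
;; RETURN; invoking it abandons the traversal immediately.
(define (first-even lst)
  (call-with-current-continuation
    (lambda (return)
      (for-each (lambda (x)
                  (if (even? x)
                      (return x)))   ; jump out of for-each
                lst)
      #f)))

;; (first-even '(1 3 4 5)) => 4
```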

> [...]
> > The result is that it's actually easier to reason about syntax-rules
> > macros - which makes them easier to write, and easier to read. As a
> > result, and also because of the enforced hygiene, they're less
> > error-prone.
>
> I don't mind using DEFMACRO for simple things. I don't find them hard to
> write or read, and I don't know why they would be more error-prone.
> Sounds similar to some of the claims made by advocates of static type
> systems. Maybe this boils down to just a matter of taste.

Maybe - let's talk once you've tried syntax-rules. But you gave a clue to
your reading & writing process for DEFMACRO when you said that when reading
a syntax-rules macro, you were immediately worrying about which level the
various tokens were at. You've learned to look for, and expect something
that, with syntax-rules, you can simply forget about. You don't do these
things when writing ordinary functions - why do you put up with it when
writing macros? What would you think of Lisp if you had to use gensym to
initialize every variable you use? You've simply become very used to a
low-level technique, so that you don't believe there's any need for a higher
level technique.

> What stands in the way of implementing syntax-case on top of DEFMACRO?
> (This is not a rhetorical question.)

I don't think it would make much sense. The implementation of syntax
objects has little to do with what defmacro does.  The pattern matching forms
of syntax-case might be defined via DEFMACRO at the surface level, but their
definitions deal with syntax objects, so there'd be little for defmacro to
do once the syntax objects had been constructed. It wouldn't help much when
constructing the syntax objects, either, since the 'syntax' form doesn't use
defmacro syntax, and I can't see any point in converting it internally.

The reason that it's easy to implement DEFMACRO in syntax-case is that a
syntax object is a superset of the list representations of syntax used by
DEFMACRO. You can translate syntax-as-lists to syntax objects and back
again, without losing anything - it's part of the standard syntax object
API, so nothing additional is needed to do that, which is partly why a
syntax-case implementation of DEFMACRO is short. Going the other way is
more problematic, since syntax-as-lists has less information than a syntax
object.

Anton


Anton van Straaten

unread,
Feb 22, 2004, 6:27:43 PM2/22/04
to

Ouch! Is that with Chez? Petite Chez 6.0a doesn't do that.

Anton


Brian Mastenbrook

unread,
Feb 22, 2004, 10:41:35 PM2/22/04
to
In article <mza_b.14958$W74...@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
> a different approach, which in some ways is conceptually simpler than
> defmacro, since it doesn't require the user to manually keep track of the
> different levels at which the macro operates. The pitfalls you mention may
> indeed be flaws in syntax-rules - I'm not familiar enough with them to
> comment - but I find that syntax-rules works very well for many kinds of
> macros, better than defmacro in fact.
>
> Of course, the latter claim is hardly ever going to be accepted by someone
> only familiar with defmacro. For the record, I learned and used defmacro
> before ever using syntax-rules or syntax-case, and I still use defmacro from
> time to time, so I think I have a good basis for comparison.

For the record, I'm familiar with both defmacro and syntax-rules,
though I am considerably more familiar with the first. With helpful
tools for list destructuring and mass generation of gensyms, defmacro
macros can be pretty easy to write, whereas I often have to squint at a
syntax-rules macro to figure out what ...'s correspond to what. Perhaps
this is simply a consequence of my level of experience. Perhaps it's
just personal preference.

> > 4. The fourth example can be solved with a proper GENSYM for "use" in
> > the "contorted" macro.
>
> The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
> low level of abstraction. Why don't you use GENSYM when declaring normal
> lexical variables in a procedure? Rhetorical question, of course - the
> point is, GENSYM is a kludge. It's not a particularly onerous one, but it's
> part of what makes defmacro worth improving on.

I don't understand what makes GENSYM such a kludge, nor why it is such
a low level of abstraction. GENSYM is not doing anything different than
CONS does - it returns something which is unique to eq? with the
specified contents. In CL, a GENSYM-alike can minimally be written as
(make-symbol "").

> We're straying far afield here. ;) But I'll give my opinion about
> continuations, too. Re the quality of implementations, once again you're
> looking at edge cases. Forget about those, they're not important, except
> in, well, edge cases. All of the major Schemes either support continuations
> well, or tell you when they don't - e.g., some of the Scheme to C compilers
> deliberately provide restricted continuations.

Fortunately, this is what Chicken is for.

> Scheme is one of very few languages - along with SML/NJ, Stackless Python,
> and the RhinoWithContinuations version of Javascript - which implements
> first-class continuations. If you're developing tools in the spaces
> mentioned above, this is a useful capability. Stackless Python uses
> continuations to support microthreading; Scheme has a number of web server
> solutions which use continuations to "invert control" so that the structure
> of a web application's code can be decoupled from its web page structure;
> and RhinoWithContinuations does something similar for web applications in
> the Cocoon web framework. For applications which need them, continuations
> are very useful, and have little competition. Their competition is mainly
> OS-level threads, which really solve a different problem, and conceptually
> are a stream of continuations anyway.

And here is where the next Lisp/Scheme debate is going to start up.
When Schemers speak of continuations, they really mean implicit
continuations - the idea that the program should be granted access to
the continuations that are flying around under the hood in the
implementation. However, the name "continuation" is considerably more
general, or else anybody using CPS needs to find a new term.

In fact I would argue that the main competition to call/cc
continuations is going to be CPS, and that CPS has a really big
advantage: after you write your program, you can identify what types of
continuations you use, and then change their representation to a list
of a name and arguments which describes the continuation of that type.
For instance, a standard CPS example:

(defun fib-cps (n k)
  (if (< n 3)
      (funcall k 1)
      (fib-cps (- n 2)
               (lambda (v1)
                 (fib-cps (- n 1)
                          (lambda (v2)
                            (funcall k (+ v1 v2))))))))

Becomes:

(defun fib-cps (n k)
  (if (< n 3)
      (do-continuation k 1)
      (fib-cps (- n 2)
               (make-val1-continuation (- n 1) k))))

(defun make-val1-continuation (n k)
  `(val1-continuation ,n ,k))

(defun make-add1-continuation (n k)
  `(add1-continuation ,n ,k))

(defun do-continuation (k arg)
  (ecase (car k)
    (return-k arg)
    (val1-continuation (fib-cps (cadr k)
                                (make-add1-continuation arg (caddr k))))
    (add1-continuation (do-continuation (caddr k)
                                        (+ arg (cadr k))))))

Of course this is all rather complicated to write by hand, but there is
one huge advantage: the continuations are now serializable, even across
different lisps! Provided with sufficient tools to generate code like
this, I don't see why anyone would prefer call/cc continuations (which
are inherently fragile) for this use case.

CPS also brings with it a clearer view of dynamically scoped variables,
which are useful in writing network applications. With call/cc
continuations, calling a saved continuation (eg for a web process) will
trigger wind points on the way out, meaning you can't store eg the
current connection in a simple dynamic variable.

One interesting thing to note is that continuations can be described in
terms of the standard UNIX call fork() with shared memory. If you do
use shared memory, fork() will simply copy the stack alone - so, to
exploit this, when you call/cc, fork() a new process, and have it
immediately suspend itself. When it is unsuspended, it should fork()
itself into a running process and then re-suspend itself. When you call
the associated continuation, unsuspend that process, and kill yourself.
Make sure to store return values on the heap, and you're all set! Thus
continuations are really a subset of standard UNIX multiprocessing
semantics :-)

Brian Mastenbrook

unread,
Feb 22, 2004, 10:48:19 PM2/22/04
to
In article <PVa_b.14987$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> Ouch! Is that with Chez? Petite Chez 6.0a doesn't do that.
>
> Anton

I tried that example with Chez 6.9b, which corresponds to "whatever is
on the CS computers for us to use". I think the real issue here is with
print-gensym - it's not generating a unique representation for the
gensym, so it simply collides. Of course I have issues with
print-gensym to begin with - gensyms should not be READable to the same
value. To do otherwise makes it not a gensym. Otherwise gensym could be
boiled down to something evil like:

(loop for i from 1
      unless (find-symbol (format nil "g~A" i))
        return (intern (format nil "g~A" i)))

Anton van Straaten

unread,
Feb 23, 2004, 2:19:33 AM2/23/04
to
Brian Mastenbrook wrote:
> In article <mza_b.14958$W74...@newsread1.news.atl.earthlink.net>,
> Anton van Straaten <an...@appsolutions.com> wrote:
>
> > I find that syntax-rules works very well for many kinds of
> > macros, better than defmacro in fact.
> >
> > Of course, the latter claim is hardly ever going to be accepted by
> > someone only familiar with defmacro. For the record, I learned and
> > used defmacro before ever using syntax-rules or syntax-case, and I
> > still use defmacro from time to time, so I think I have a good basis
> > for comparison.
>
> For the record, I'm familiar with both defmacro and syntax-rules,
> though I am considerably more familiar with the first. With helpful
> tools for list destructuring and mass generation of gensyms, defmacro
> macros can be pretty easy to write, whereas I often have to squint at a
> syntax-rules macro to figure out what ...'s correspond to what. Perhaps
> this is simply a consequence of my level of experience. Perhaps it's
> just personal preference.

I'm sure there is a large subjective element. My real point relates to the
initial reaction that defmacro users often have, to not knowing what's going
on when they don't see syntax and variables being quoted or unquoted.
However, this is actually a benefit of syntax-rules, that they're simply
unfamiliar with. I notice you didn't mention that issue as being something
that you have to squint for, which isn't surprising if you're somewhat
familiar with syntax-rules.

Re the ...'s, since they appear after the expression which is being
repeated, I tend to think of them as a postfix token - along the lines of
the quote and quasiquote tokens, but appearing after the expression to
which they apply, instead of before.  The equivalent operation in defmacro
is usually procedural code, which I don't think is any clearer to absorb
at a glance.
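
For example (the classic definition of let, given here as a sketch): each
... repeats the pattern immediately before it, and a pattern variable used
under ... in the template expands once per match, in order:

```scheme
;; my-let: each (name val) pair in the pattern is repeated by ...,
;; and name ... / val ... in the template expand in matching order.
(define-syntax my-let
  (syntax-rules ()
    ((_ ((name val) ...) body1 body ...)
     ((lambda (name ...) body1 body ...) val ...))))

;; (my-let ((x 1) (y 2)) (+ x y)) => 3
```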

> I don't understand what makes GENSYM such a kludge, nor why it is such
> a low level of abstraction. GENSYM is not doing anything different than
> CONS does - it returns something which is unique to eq? with the
> specified contents. In CL, a GENSYM-alike can minimally be written as
> (make-symbol "").

Yes, GENSYM is just a constructor. But you don't normally have to
"construct" your variable names. The only reason you do with defmacro, is
because defmacro doesn't deal with hygiene. It's low-level because you're
implementing the sort of feature that is usually handled by the language.

You're right, that with some wrappers to generate gensym, you can make this
easier. But as I said, you don't have to do this with normal lexical or
other variables, and I doubt you'd be defending it if you did have to gensym
all your variables.

What makes macros different? That's semi-rhetorical - it's an interesting
question to answer. I think the answer ends up being that defmacro exists
at an intersection point between ease of implementation, sufficient ease of
use, and the macro equivalent of Turing-completeness. There are other
interesting intersection points, though.

> > All of the major Schemes either support continuations
> > well, or tell you when they don't - e.g., some of the Scheme to C
> > compilers deliberately provide restricted continuations.
>
> Fortunately, this is what Chicken is for.

Yes, Chicken warms the cockles of my cool-hack-loving heart. For those who
aren't familiar, it's a Scheme which generates C code that uses a version of
Henry Baker's Cheney-on-the-MTA mechanism for supporting tail recursion and
continuations in C (http://home.pipeline.com/~hbaker1/CheneyMTA.html).

> > For applications which need them, continuations
> > are very useful, and have little competition. Their competition is
> > mainly OS-level threads, which really solve a different problem, and
> > conceptually are a stream of continuations anyway.
>
> And here is where the next Lisp/Scheme debate is going to start up.
> When Schemers speak of continuations, they really mean implicit
> continuations - the idea that the program should be granted access to
> the continuations that are flying around under the hood in the
> implementation. However, the name "continuation" is considerably more
> general, or else anybody using CPS needs to find a new term.

No argument about the term. The full term for the Scheme thingies is
"first-class continuation", and you can produce them in other ways than
call/cc.

> In fact I would argue that the main competition to call/cc
> continuations is going to be CPS, and that CPS has a really big
> advantage: after you write your program, you can identify what types of
> continuations you use, and then change their representation to a list
> of a name and arguments which describes the continuation of that type.

CPS is very useful, but one of the applications for first-class
continuations is implementing language-level tools - exception systems,
unusual control flows like goal seeking, web server inversion of control,
etc. In all of these cases, you don't want to require the end programmer to
write their code in CPS. You raised that point:

> Of course this is all rather complicated to write by hand, but there is
> one huge advantage: the continuations are now serializable, even across
> different lisps! Provided with sufficient tools to generate code like
> this, I don't see why anyone would prefer call/cc continuations (which
> are inherently fragile) for this use case.

If you write tools to generate CPS code behind the scenes, so that the user
ends up with serializable continuations, then you've implemented a language
which offers first-class continuations. Whether it provides them through
call/cc or not isn't particularly important - call/cc just happens to be one
of the simplest and most general ways of giving access to continuations.  If
such tools actually existed as a layer over Lisp or Scheme, I'm sure it
would be useful.

Those tools might still find it useful to offer something like call/cc, if
it's possible given the typing issues, so that users can do their own
control manipulations rather than relying on whatever the tools support. I
suspect it'll be tricky to eliminate the relevance of call/cc - it's a bit
like saying that given function definition syntax & variable binding syntax
etc., we could eliminate the need for lambda. It'd still be there, just
hidden, which is the way it's supposed to be anyway.

> One interesting thing to note is that continuations can be described in
> terms of the standard UNIX call fork() with shared memory. If you do
> use shared memory, fork() will simply copy the stack alone - so, to
> exploit this, when you call/cc, fork() a new process, and have it
> immediately suspend itself. When it is unsuspended, it should fork()
> itself into a running process and then re-suspend itself. When you call
> the associated continuation, unsuspend that process, and kill yourself.
> Make sure to store return values on the heap, and you're all set! Thus
> continuations are really a subset of standard UNIX multiprocessing
> semantics :-)

I presume you'd have to implement mutable variables via heap-allocated ref
cells, or something, otherwise you'd end up cloning variables that should be
shared (if I've understood the model correctly). So, since this model is
restricted, it's actually a subset of the continuation model, not the other
way around. :)

Anton


Brian Mastenbrook

unread,
Feb 23, 2004, 5:38:30 AM2/23/04
to
In article <9Qh_b.15739$W74....@newsread1.news.atl.earthlink.net>,

Anton van Straaten <an...@appsolutions.com> wrote:

> > I don't understand what makes GENSYM such a kludge, nor why it is such
> > a low level of abstraction. GENSYM is not doing anything different than
> > CONS does - it returns something which is unique to eq? with the
> > specified contents. In CL, a GENSYM-alike can minimally be written as
> > (make-symbol "").
>
> Yes, GENSYM is just a constructor. But you don't normally have to
> "construct" your variable names. The only reason you do with defmacro, is
> because defmacro doesn't deal with hygiene. It's low-level because you're
> implementing the sort of feature that is usually handled by the language.

Wait, stop. Who said anything about GENSYM being a variable name? Not
me, for sure. GENSYM constructs a symbol. The fact that the evaluation
semantics for symbols is to treat them as variables isn't necessarily
relevant to the existance of GENSYM - it has the same use if you are
writing an interpreter or compiler for a language with hygienic macros.

I guess what you are really arguing is that Lisp exposes a level of
abstraction to represent its macros which is normally hidden in a Scheme
implementation. This much is true, but I suspect that lispers do not
respond to "LOW-LEVEL" the way you do: on the contrary, I'd prefer to
work in a language that exposes its low-level technologies to me via
appropriate reflection.

> CPS is very useful, but one of the applications for first-class
> continuations is implementing language-level tools - exception systems,
> unusual control flows like goal seeking, web server inversion of control,
> etc. In all of these cases, you don't want to require the end programmer to
> write their code in CPS. You raised that point:

[elided]

> If you write tools to generate CPS code behind the scenes, so that the user
> ends up with serializable continuations, then you've implemented a language
> which offers first-class continuations. Whether it provides them through
> call/cc or not isn't particularly important - call/cc just happens to be one
> of the simplest and most general ways of giving access to continuations. If
> such tools actually existed as a layer over Lisp or Scheme, I'm sure it
> would be useful.

I was not arguing about the semantics or need for a tool like call/cc
in general. The issue I was trying to raise was whether call/cc was
necessary in the host language, especially for the specific problem of
inversion of control in a web server. call/cc is a very powerful but
very brutal tool in a sense, and a CPSer can produce code with
advantages (such as serializable continuations) that the host
continuations can't offer. It's just like shifting from use of LAMBDA
to an explicit data structure, to separate out the data being closed
over from the code in question. Both of these things are very useful
when you are actually trying to write a web application which needs
live upgrading, load balancing, serializable state, et al. Similarly
I'm sure a goal-directed reasoner might want to have serializable
state, for when you're trying to reproduce your experiments :-)

> Those tools might still find it useful to offer something like call/cc, if
> it's possible given the typing issues, so that users can do their own
> control manipulations rather than relying on whatever the tools support. I
> suspect it'll be tricky to eliminate the relevance of call/cc - it's a bit
> like saying that given function definition syntax & variable binding syntax
> etc., we could eliminate the need for lambda. It'd still be there, just
> hidden, which is the way it's supposed to be anyway.

Of course, but even if call/cc is offered in full, it will still not be
a language level tool to the host language. I believe this to be an
important distinction.

> > One interesting thing to note is that continuations can be described in
> > terms of the standard UNIX call fork() with shared memory. If you do
> > use shared memory, fork() will simply copy the stack alone - so, to
> > exploit this, when you call/cc, fork() a new process, and have it
> > immediately suspend itself. When it is unsuspended, it should fork()
> > itself into a running process and then re-suspend itself. When you call
> > the associated continuation, unsuspend that process, and kill yourself.
> > Make sure to store return values on the heap, and you're all set! Thus
> > continuations are really a subset of standard UNIX multiprocessing
> > semantics :-)
>
> I presume you'd have to implement mutable variables via heap allocated ref
> cells, or something, otherwise you'd end cloning variables that should be
> shared (if I've understood the model correctly). So, since this model is
> restricted, it's actually a subset of the continuation model, not the other
> way around. :)

Actually, since it provides the continuation model when everything is
heap-allocated, and something slightly different when some things are
stack-allocated, doesn't that technically make continuations a subset
of this? :-)

Joe Marshall

unread,
Feb 23, 2004, 5:40:45 AM2/23/04
to
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> And here is where the next Lisp/Scheme debate is going to start up.
> When Schemers speak of continuations, they really mean implicit
> continuations - the idea that the program should be granted access to
> the continuations that are flying around under the hood in the
> implementation. However, the name "continuation" is considerably more
> general, or else anybody using CPS needs to find a new term.
>
> In fact I would argue that the main competition to call/cc
> continuations is going to be CPS, and that CPS has a really big
> advantage: after you write your program, you can identify what types of
> continuations you use, and then change their representation to a list
> of a name and arguments which describes the continuation of that type.

The problem with this approach is that this requires a global
transformation and you may not have access to all the code.

--
~jrm

Brian Mastenbrook

unread,
Feb 23, 2004, 8:43:46 AM2/23/04
to
In article <k72e9i...@comcast.net>, Joe Marshall
<prunes...@comcast.net> wrote:

> The problem with this approach is that this requires a global
> transformation and you may not have access to all the code.

Where is it? Locked away somewhere?

You only need to transform your own code; I would hope that any other
function you would be calling from a web page generator would be
simple.

Joe Marshall

unread,
Feb 23, 2004, 9:20:43 AM2/23/04
to
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> In article <k72e9i...@comcast.net>, Joe Marshall
> <prunes...@comcast.net> wrote:
>
>> The problem with this approach is that this requires a global
>> transformation and you may not have access to all the code.
>
> Where is it? Locked away somewhere?

Essentially. Are you going to CPS convert the entire system? Even
the primitives?

> You only need to transform your own code; I would hope that any other
> function you would be calling from a web page generator would be
> simple.

Like mapcar? CPS conversion is a model for
call-with-current-continuation, but it is not a practical
substitution.

Marcin 'Qrczak' Kowalczyk

unread,
Feb 23, 2004, 9:22:36 AM2/23/04
to
On Mon, 23 Feb 2004 08:43:46 -0500, Brian Mastenbrook wrote:

>> The problem with this approach is that this requires a global
>> transformation and you may not have access to all the code.
>
> Where is it? Locked away somewhere?
>
> You only need to transform your own code; I would hope that any other
> function you would be calling from a web page generator would be
> simple.

Calling higher order functions like mapcar and passing them functions
which capture or invoke continuations requires a CPS transformer to
reimplement these functions (if their source is not visible).

Some builtin functions are hard to transform. For example funcall: you
must determine whether its argument is a transformed function or not,
which is in general not known until runtime.

CPS transformation by code walking is necessarily incomplete because
it's not possible to transform higher order black boxes.

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Anton van Straaten

unread,
Feb 23, 2004, 2:58:44 PM2/23/04
to
Brian Mastenbrook wrote:
> In article <9Qh_b.15739$W74....@newsread1.news.atl.earthlink.net>,
> Anton van Straaten <an...@appsolutions.com> wrote:
>
> > > I don't understand what makes GENSYM such a kludge, nor why it
> > > is such a low level of abstraction. GENSYM is not doing anything
> > > different than CONS does - it returns something which is unique to
> > > eq? with the specified contents. In CL, a GENSYM-alike can
> > > minimally be written as (make-symbol "").
> >
> > Yes, GENSYM is just a constructor. But you don't normally have to
> > "construct" your variable names. The only reason you do with defmacro,
> > is because defmacro doesn't deal with hygiene. It's low-level because
> > you're implementing the sort of feature that is usually handled by the
> > language.
>
> Wait, stop. Who said anything about GENSYM being a variable name?
> Not me, for sure.

Me neither. The second sentence in my paragraph above is right-associative,
and beta-substitutes for the "do" in the sentence to its right: applying an
inlining transform, the third sentence would read: "The only reason you have
to "construct" your variable names with defmacro..."

> GENSYM constructs a symbol. The fact that the evaluation
> semantics for symbols is to treat them as variables isn't necessarily
> relevant to the existence of GENSYM - it has the same use if you are
> writing an interpreter or compiler for a language with hygienic macros.

Right, I wasn't objecting to GENSYM's existence, but to its use to create
variables safe for use in defmacro, and the fact that the consequences of
that can't be hidden, even if the GENSYM itself is.

> I guess what you are really arguing is that Lisp exposes a level of
> abstraction to represent its macros which is normally hidden in a Scheme
> implementation. This much is true, but I suspect that lispers do not
> respond to "LOW-LEVEL" the way you do: on the contrary, I'd prefer to
> work in a language that exposes its low-level technologies to me via
> appropriate reflection.

I have no problem with that - but it doesn't make sense to have to deal with
that in every macro that's ever written. You might use a feature like
GENSYM to implement some language/system-level construct, but what I'm
saying is that your average, ordinary macro shouldn't have to deal with that
level of abstraction - it has nothing to do with the problem domain.
Wrapping the GENSYM in higher-level functions helps, but you're still
dealing with the consequences of the issue - the need for GENSYM is just the
symptom.

One of the strengths of Lisp family languages is that code can be written at
a level closer to the problem domain. When programming in many other
languages, even some of the more modern languages with garbage collection
etc., programmers are more often forced to deal with issues that amount to
limitations or quirks of the language. Lisp, because of macros,
unrestricted procedural abstraction, and the things that have been built on
top of that (like CLOS), suffers much less from this.

However, use of DEFMACRO is an exception - the programmer is required to
deal with instantiating certain variable names before using them, much like
a C programmer has to allocate memory before storing anything. Hiding the
GENSYM use doesn't help any more than hiding the allocation in C++ (via
'new') - if you don't use the proper construct, or do the correct escaping
of references to a name, it's an error. My analogy is actually startlingly
appropriate: escaping a name using unquote is analogous to dereferencing a
pointer in C, using '*'. A design at the appropriate level of abstraction
would know when a dereference or an unquote needs to occur, and not require
the programmer to worry about it with every use.
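The allocate/dereference analogy can be made concrete with a standard
textbook macro (my illustration, not code from the thread):

```lisp
;; GENSYM "allocates" a fresh variable name; each comma "dereferences" it.
(defmacro square (form)
  (let ((tmp (gensym)))       ; construct the name
    `(let ((,tmp ,form))      ; use it -- the comma is mandatory
       (* ,tmp ,tmp))))       ; forgetting a comma is a silent bug
```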

The main difference between the way variables work in defmacro, and the way
memory allocation & access works in C, is that the consequences of getting
defmacro wrong are nowhere near as severe. But that doesn't excuse the
abstraction level violation that's taking place in every macro.

Note that I'm not saying DEFMACRO should be tossed in the trash. I don't
think any of the alternatives exceed DEFMACRO's utility & simplicity in
every dimension. But they do improve on it in some important ways, and in
the absence of a macro system which achieves a perfect score in
every dimension, I think it's useful to have more than one system.

> I was not arguing about the semantics or need for a tool like call/cc
> in general. The issue I was trying to raise was whether call/cc was
> necessary in the host language, especially for the specific problem of
> inversion of control in a web server. call/cc is a very powerful but
> very brutal tool in a sense, and a CPSer can produce code with
> advantages (such as serializable continuations) that the host
> continuations can't offer. It's just like shifting from use of LAMBDA
> to an explicit data structure, to separate out the data being closed
> over from the code in question. Both of these things are very useful
> when you are actually trying to write a web application which needs
> live upgrading, load balancing, serializable state, et al.

If you deliver a language or tools that help me do all of these things, I'd
love to use it. As others have pointed out, there are some issues to
address. In the meantime, I think call/cc is a useful tool to have. If it
later turns out to have been a step on the road to better things, there's
nothing wrong with that.

> > > Thus continuations are really a subset of standard UNIX
> > > multiprocessing semantics :-)
> >
> > I presume you'd have to implement mutable variables via heap-allocated
> > ref cells, or something, otherwise you'd end up cloning variables that
> > should be shared (if I've understood the model correctly). So, since
> > this model is restricted, it's actually a subset of the continuation
> > model, not the other way around. :)
>
> Actually, since it provides the continuation model when everything is
> heap-allocated, and something slightly different when some things are
> stack-allocated, doesn't that technically make continuations a subset
> of this? :-)

No. :oP

Anton


Pascal Costanza

Feb 23, 2004, 6:12:43 PM

Jens Axel Søgaard wrote:

> Perhaps you have already read it, but Dybvig's "Writing Hygienic Macros
> in Scheme with Syntax-Case" available at
>
> <ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/iucstr356.ps.gz>
>
> is one of the best expositions of syntax-case from a user viewpoint.

Thanks for the link. At a first glance it looks quite good. (No, I
haven't read it yet.)

Pascal Costanza

Feb 23, 2004, 7:49:07 PM

Anton van Straaten wrote:

> If you look up "macro transformer" in the index, it points you to a page
> which contains the following definition (Sec. 4.3, Macros):
>
> "The set of rules that specifies how a use of a macro is transcribed into a
> more primitive expression is called the 'transformer' of the macro."
>
> I don't think it's hiding anything. Do you think otherwise?

I think Dybvig's explanations in "Writing Hygienic Macros in Scheme with
Syntax-Case" are much clearer:

"Macro transformers are procedures of one argument. The argument to a
macro transformer is a syntax object, which contains contextual
information about an expression in addition to its structure. [...]"

Generally, I need mental models of how language constructs are mapped to
the lower levels in order to understand and trust them. This is true for
almost every language feature that I use. Schemers seem to be more
mathematically inclined and prefer other perspectives for understanding
language features, and maybe I am just not in the right target audience.

> But I'll ignore my own advice and take a stab at explaining syntax-case,
> starting from a defmacro perspective.

Thanks for your explanations. They help a lot.

> Start with defmacro, and imagine that instead of quoting syntax using
> quasiquote, you use a special form, 'syntax', which instead of returning a
> list as quasiquote does, returns a syntax object, i.e. an instance of an
> abstract data type which represents a piece of syntax. This type has an API
> which supports various code walking & manipulation capabilities. It can
> also be converted to a list (or whatever value the original syntax
> represented) via 'syntax-object->datum'.

[...]


> Syntax objects
> are a richer way to represent program syntax than lists, and their uses go
> beyond just macros.

That's the first indication I've seen that this might be something
worth learning. Where is that API documented/specified?

> I think I'll stop there for now, since I have other things to do!

Thanks a lot for taking your time to provide these explanations. They
help a lot.

>>Examples like those given in
>>http://okmij.org/ftp/Scheme/r5rs-macros-pitfalls.txt seem to indicate
>>that syntax-rules just trade one set of possible pitfalls with a
>>different set, but along that way the conceptual simplicity is lost.
>
> I don't accept that "conceptual simplicity is lost" with syntax-rules. It's
> a different approach, which in some ways is conceptually simpler than
> defmacro, since it doesn't require the user to manually keep track of the
> different levels at which the macro operates. The pitfalls you mention may
> indeed be flaws in syntax-rules - I'm not familiar enough with them to
> comment - but I find that syntax-rules works very well for many kinds of
> macros, better than defmacro in fact.

OK, my preliminary conclusion is that it is just a different programming
style for expressing macros.

> I'm not sure what you mean about not seeing the problem. One of the
> problems mentioned in the article is that syntax-rules pattern variables
> don't shadow. I don't know if there's a justification for that, or it's
> simply a bug in the design of syntax-rules. But you usually get an error if
> you make this mistake, and it's easy to fix, and easy to avoid. It doesn't
> mean that syntax-rules is not useful, and it's still better than defmacro,
> which you can't dispute until you've learned syntax-rules. ;)

Until now I have found defmacro very easy to learn, and most of the
stuff I have read so far about Scheme's macro system(s) is very
inaccessible for me. This makes defmacro de facto more useful to me. Of
course, this might change in the future.

>>Lisp and Scheme bring you metacircularity. As soon as Pythonistas write
>>program generators, it's clear that their language is missing something
>>important. Of course, they can write a Lisp interpreter in Python, but
>>that's beside the point.
>>
>>Do you really think that syntax-case is an equally important step forward?
>
> In some respects, yes, but that's not what I really meant. It's easy to
> look at something from the outside and find reasons not to try it, and that
> was my main point. But the points you've been picking on don't seem very
> substantial to me - it seems as though you're looking for reasons to ignore
> these systems, rather than looking for reasons you might want to learn them.

That's not so far from the truth. See below.

> Of course, a CL programmer who wants to write standard CL code, obviously
> has little incentive to be interested in other macro systems. But if your
> interests cross multiple languages, then there's value in the Scheme macro
> systems, at the very least in the sense that learning more than one approach
> to the same problem expands the horizons of your understanding of the
> problem.

I accept that.

>>>>BTW, what you really need to make something like DEFMACRO work is, on
>>>>top of that, of course quasiquotation, GENSYM/MAKE-SYMBOL or
>>>>string->uninterned-symbol and most probably a Lisp-2.
>>>
>>>I don't see that Lisp-2 is an issue.
>>
>>See http://citeseer.nj.nec.com/bawden88syntactic.html
>
> I'm familiar with why people claim it's an issue, but in practice I think
> it's not significantly worse than the issue of hygiene in defmacros in
> general. As I've said and defended here once before, Lisp-1 can express any
> Lisp-2 program, simply by changing any conflicting names so as not to
> conflict - a conceptually trivial transformation, with consequences which
> are primarily subjective. It would have an impact on porting Lisp-2 macros
> to Lisp-1, but it doesn't limit what you can easily express in Lisp-1.

If you are talking about an automatic transformation of names here, then
I wouldn't agree that this is a relevant argument. Programmers choose
names because they are descriptive of the nature of the conceptual
entities these names stand for. An automatic translation loses this
aspect, at least to a certain degree.

If I say (let ((list ...)) ...) in Common Lisp, I have chosen the name
"list" to say something about the variable that it denotes. If that name
gets automatically translated to some arbitrary other name it either
loses a certain amount of that descriptive quality ("lst"), or it is
amended with some irrelevant information ("list-var").

It's true that this doesn't essentially limit what you can express in
Lisp-1, but it's also true that there _is_ a fundamental difference
between functions and values, and even though the separation of these
two spaces was an accident in the history of Lisp, it still matches an
important qualitative distinction.

> Put another way, having the ability to accidentally compensate for hygiene
> violations in some cases - where multiple namespaces happen to prevent the
> problem - isn't a solution to the general problem of not having hygiene.
> Since you haven't solved the general problem, you still have to address
> questions of hygiene, in various low-level ways. A single namespace doesn't
make this problem worse in any significant way.

I disagree. Names for functions and values cannot accidentally clash in
a Lisp-2, and this is an important category of potential clashes in a
Lisp-1, even outside the domain of macro programming.

>>Here it helps that a Lisp-2 seperates variables and functions by
>>default. Variables are usually not important parts of an application's
>>ontology. If they are, the convention in Common Lisp is to use proper
>>naming schemes, like asterisks for special variables. Effectively, this
>>creates a new namespace.
>
> Mmm, asterisks. This, to me, is why the whole Lisp-1/2 debate is moot. The
> solution is simply Lisp-N, where you can define namespaces, modules, etc.
> and control how they're used. See PLT Scheme etc.

Unfortunately, there don't seem to be conventions to separate names
for functions and values that Schemers adhere to. This is an important
aspect of Common Lisp's naming convention for special variables.

The point is that the number of cases in which hygiene is a real issue
is considerably reduced by the combination of Lisp-2-ness and naming
conventions in Common Lisp, so that the remaining cases aren't pressing
anymore. For macros, you can just use a set of idioms that you can
easily memorize and you're done with it.

>>4. The fourth example can be solved with a proper GENSYM for "use" in
>>the "contorted" macro.
>
> The phrase "proper GENSYM" is an oxymoron. GENSYM operates at a strangely
> low level of abstraction. Why don't you use GENSYM when declaring normal
> lexical variables in a procedure? Rhetorical question, of course

[...]

No, not really. Some time ago I started some thought experiments on
how one could design a Lisp dialect that works like that, and I think
this has the potential to add some interesting features.

>>Some while ago, I wanted to experiment with continuations in Scheme.
>>Apart from the fact that not all Schemes seem to implement continuations
>>fully and/or correctly (see
>>http://sisc.sourceforge.net/r5rspitresults.html ), the fact that the
>>respective documentations make me feel uneasy about whether I have to
>>relearn programming techniques for totally unrelated areas is a clear
>>downside IMHO.
>
> We're straying far afield here. ;) But I'll give my opinion about
> continuations, too.

Maybe my posting was too ambiguous, but I really didn't want to talk
about continuations here. I understand them (because I have a
mental model how they are mapped to parameter passing and procedure
invocation mechanisms!), and I have a fair understanding of what they
can be used for.

However, the point is that in order to experiment with continuations, I
have to put up with all the other design decisions of Scheme. For most
of Scheme's features that's ok, but wrt macros this can be annoying.

I have made a few attempts to get an understanding of Scheme macros in
the past, and I have always found them too hard to understand, especially
in comparison to the seemingly minor improvements they make. The
optimal thing would have been a Common Lisp implementation with call/cc
built in, but such a beast doesn't seem to exist.

So yes: If you want to use Scheme for something that's not really
related to macros, and you don't _want_ to learn syntax-rules or
syntax-case for some reason, and you are not sure whether gensym
actually works, because most Schemers tell you that it is evil anyway,
then this can be annoying. I think that this is actually a disservice to
Scheme and Lisp in general.

I have no problem agreeing with you that defmacro, syntax-rules and
syntax-case are just different ways to implement macros, maybe with
their own respective strengths in the latter two cases that I cannot
judge at the moment. However, it is a fact that defmacro is generally
described in the Scheme community as fundamentally problematic, and this
is clearly wrong. (See for example the first paragraph of "Syntactic
Abstractions in Scheme" by Hieb, Dybvig, Bruggeman.)

>>I don't mind using DEFMACRO for simple things. I don't find them hard to
>>write or read, and I don't know why they would be more error-prone.
>>Sounds similar to some of the claims made by advocates of static type
>>systems. Maybe this boils down to just a matter of taste.
>
> Maybe - let's talk once you've tried syntax-rules. But you gave a clue to
> your reading & writing process for DEFMACRO when you said that when reading
> a syntax-rules macro, you were immediately worrying about which level the
> various tokens were at. You've learned to look for, and expect something
> that, with syntax-rules, you can simply forget about. You don't do these
> things when writing ordinary functions

Yes, I do.

> - why do you put up with it when
> writing macros? What would you think of Lisp if you had to use gensym to
> initialize every variable you use? You've simply become very used to a
> low-level technique, so that you don't believe there's any need for a higher
> level technique.

Right. At least, this was right until I heard from you that syntax
objects can be used for things that go beyond macros.

Rahul Jain

Feb 23, 2004, 9:07:22 PM
Brian Mastenbrook <NOSPAMbmas...@cs.indiana.edu> writes:

> Wait, stop. Who said anything about GENSYM being a variable name? Not
> me, for sure. GENSYM constructs a symbol. The fact that the evaluation
> semantics for symbols is to treat them as variables isn't necessarily
> relevant to the existence of GENSYM - it has the same use if you are
> writing an interpreter or compiler for a language with hygienic macros.

I think this is a key issue. Earlier, it was claimed that a Lisp-1 can
emulate a Lisp-2 by just renaming variables. This doesn't do anything
for the case where the plist of a symbol is used by the macro-function
for that symbol. That macro-function can then be attached to any symbols
and the plist can be used to customize its behavior. Given a MOP, I'd
rather implement this using funcallable-instances, but I don't see how
having a Lisp-1 gives you those for free. :)
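A sketch of the technique described above (all names invented): one
expander attached to several macro names, customized through each
symbol's plist.

```lisp
;; A single macro-function shared by two macro names; the expander
;; looks up per-symbol behavior on the plist of the macro's own name.
(defvar *counter* 0)

(defun counting-expander (form env)
  (declare (ignore env))
  (let ((step (get (first form) 'step 1)))
    `(incf *counter* ,step)))

(setf (macro-function 'tick) #'counting-expander)
(setf (macro-function 'big-tick) #'counting-expander)
(setf (get 'big-tick 'step) 10)

;; (macroexpand-1 '(tick))     => (INCF *COUNTER* 1)
;; (macroexpand-1 '(big-tick)) => (INCF *COUNTER* 10)
```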

--
Rahul Jain
rj...@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist

Rahul Jain

Feb 23, 2004, 9:13:20 PM
"Anton van Straaten" <an...@appsolutions.com> writes:

> I have no problem with that - but it doesn't make sense to have to deal with
> that in every macro that's ever written. You might use a feature like
> GENSYM to implement some language/system-level construct, but what I'm
> saying is that your average, ordinary macro shouldn't have to deal with that
> level of abstraction - it has nothing to do with the problem domain.
> Wrapping the GENSYM in higher-level functions helps, but you're still
> dealing with the consequences of the issue - the need for GENSYM is just the
> symptom.

First you say you have no problem with using GENSYM under the covers.
Then you say that having abstractions that use GENSYM under the covers
is a problem. I don't get it. FWIW, there is a quasi-standard REBINDING
(a.k.a. WITH-GENSYMS) macro that provides just such an abstraction. I
should probably use it, but I'm too lazy to. :)

> However, use of DEFMACRO is an exception - the programmer is required to
> deal with instantiating certain variable names before using them, much like
> a C programmer has to allocate memory before storing anything.

The same gripe could be given about MOP. One has to instantiate the
slot-definition before using it. It's a question of layered protocols.
Use the layer that fits the problem. This reminds me: you still haven't
shown an implementation of LOOP in syntax-rules (or maybe I haven't
gotten that far in the thread).

Anton van Straaten

Feb 23, 2004, 11:15:06 PM
Rahul Jain wrote:
> "Anton van Straaten" <an...@appsolutions.com> writes:
>
> > I have no problem with that - but it doesn't make sense to have to
> > deal with that in every macro that's ever written. You might use a
> > feature like GENSYM to implement some language/system-level construct,
> > but what I'm saying is that your average, ordinary macro shouldn't
> > have to deal with that level of abstraction - it has nothing to do
> > with the problem domain. Wrapping the GENSYM in higher-level functions
> > helps, but you're still dealing with the consequences of the issue -
> > the need for GENSYM is just the symptom.
>
> First you say you have no problem with using GENSYM under the covers.
> Then you say that having abstractions that use GENSYM under the covers
> is a problem. I don't get it.

Sorry for not being clear. I'm saying that in the case of DEFMACRO, the
abstraction necessarily leaks, which reduces its quality as an abstraction.
As I said, the need for GENSYM is just a symptom of the hygiene issue.

> FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
> macro that provides just such an abstraction.

Even if you use that, you still have to unquote the references to the
variables it declares. If you forget to unquote them, it's a bug - not only
that, but it's one that isn't necessarily detected by the compiler, and can
result in erroneous variable capture. Now, I'm not saying that's the end of
the world, but you have to admit that in other contexts - such as when
dealing with ordinary variables - we don't accept that sort of thing. Why
the exception for DEFMACRO?
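The failure mode being described looks like this (my illustration; the
bug is deliberate):

```lisp
;; The GENSYM is there, but the unquotes were forgotten: the expansion
;; binds the literal symbol VAL, which a caller's VAL can collide with.
(defmacro numeric-if (expr pos zero neg)
  (let ((val (gensym)))
    `(let ((val ,expr))                 ; BUG: should be (,val ,expr)
       (cond ((plusp val) ,pos)         ; BUG: should be ,val
             ((zerop val) ,zero)
             (t ,neg)))))

;; (let ((val 5)) (numeric-if -3 'pos 'zero val))
;; returns -3 rather than 5: the caller's VAL was captured.
```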

> > However, use of DEFMACRO is an exception - the programmer is required
> > to deal with instantiating certain variable names before using them,
> > much like a C programmer has to allocate memory before storing
> > anything.
>
> The same gripe could be given about MOP. One has to instantiate the
> slot-definition before using it. It's a question of layered protocols.
> Use the layer that fits the problem.

Yes, that's what I'm saying. With DEFMACRO, there is no higher layer, but
there should be. That's the exact issue that syntax-rules and syntax-case
address.

> This reminds me: you still haven't shown an implementation of LOOP
> in syntax-rules (or maybe I haven't gotten that far in the thread).

You'd have to pay me to implement LOOP, in any language. It could be
implemented straightforwardly enough with syntax-case. It may very well be
difficult to implement in syntax-rules, but that's not particularly
relevant - I've repeatedly said that I don't consider syntax-rules a
complete replacement for DEFMACRO. However, syntax-rules does show how a
higher layer than DEFMACRO can work well, so there are lessons to be learned
from it. At the very least, it offers an alternative way of thinking about
macros, to help combat the Sapir-Whorf hypothesis as it applies to CL
macros.

Anton


Rahul Jain

Feb 24, 2004, 12:04:17 AM
"Anton van Straaten" <an...@appsolutions.com> writes:

> Sorry for not being clear. I'm saying that in the case of DEFMACRO, the
> abstraction necessarily leaks, which reduces its quality as an abstraction.

No. DEFMACRO is an abstraction over syntax. It is not an abstraction
over semantics. Compare this to CAR/CDR/CDAR vs.
FIRST/SECOND/THIRD/REST. Note the absence of an equivalent of CDAR as a list operator.
Macros are like cons cells. Syntax-rules are (I guess) like lists. Would
you claim that the existence of CDAR reduces the quality of cons cells
as an abstraction?

> As I said, the need for GENSYM is just a symptom of the hygiene issue.

Like lack of side-effects, hygiene is NOT a goal. It's just a property
of _certain_ macros.

>> FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
>> macro that provides just such an abstraction.
>
> Even if you use that, you still have to unquote the references to the
> variables it declares. If you forget to unquote them, it's a bug - not only
> that, but it's one that isn't necessarily detected by the compiler, and can
> result in erroneous variable capture. Now, I'm not saying that's the end of
> the world, but you have to admit that in other contexts - such as when
> dealing with ordinary variables - we don't accept that sort of thing. Why
> the exception for DEFMACRO?

(let ((x 1)
      (y 2))
  (let ((x 2)) ;; OOPS. this was supposed to be z!
    ...))

Lisp compilers accept this sort of thing, and so do Scheme compilers, AFAIK.

> Yes, that's what I'm saying. With DEFMACRO, there is no higher layer, but
> there should be. That's the exact issue that syntax-rules and syntax-case
> address.

OK, but that says nothing about DEFMACRO itself being considered
harmful. Other posts have described the way that syntax-rules adds
information to the expansion, such as what is used as a binding and what
isn't (in order to help identify what should be captured/shadowed, and
what should be gensymed, I assume). I'm not sure if you could then use a
DEFMACRO-defined macro in the definition of a syntax-rules-defined
macro. If you can't, that _severely_ limits the usefulness of
syntax-rules.

Jacek Generowicz

Feb 24, 2004, 3:19:20 AM
Pascal Costanza <cost...@web.de> writes:

> Anton van Straaten wrote:
>
> > I think I'll stop there for now, since I have other things to do!
>
> Thanks a lot for taking your time to provide these explanations. They
> help a lot.

Seconded.

Pascal Costanza

Feb 24, 2004, 4:34:20 AM

Anton van Straaten wrote:

>>FWIW, there is a quasi-standard REBINDING (a.k.a. WITH-GENSYMS)
>>macro that provides just such an abstraction.
>
> Even if you use that, you still have to unquote the references to the
> variables it declares. If you forget to unquote them, it's a bug - not only
> that, but it's one that isn't necessarily detected by the compiler, and can
> result in erroneous variable capture. Now, I'm not saying that's the end of
> the world, but you have to admit that in other contexts - such as when
> dealing with ordinary variables - we don't accept that sort of thing. Why
> the exception for DEFMACRO?

Just reword that section, and IMHO one can see that this is just a
question of reasonable defaults:

"Even if you use syntax-case, you still have to take special action to
break hygiene. If you forget to do that, it's a bug - not only that,
but it's one that isn't necessarily detected by the compiler, and can
result in erroneous absence of variable capture."

I understand that syntax-rules and syntax-case are different styles for
expressing macros, but I don't accept the hygiene part. The purpose of
an abstraction is to suppress irrelevant elements. Name capture is an
important and useful concept that you have to know about anyway, so it's
not irrelevant.
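For completeness, the explicit hygiene-breaking manipulation mentioned
above looks roughly like this (a sketch in Dybvig's psyntax style;
R6RS names the conversion datum->syntax):

```scheme
;; Deliberate capture: AIF injects the identifier IT into the caller's
;; scope by constructing it in the macro call's own lexical context.
(define-syntax aif
  (lambda (x)
    (syntax-case x ()
      ((k test then else)
       (with-syntax ((it (datum->syntax-object (syntax k) 'it)))
         (syntax (let ((it test))
                   (if it then else))))))))

;; (aif (assv 1 '((1 . one))) (cdr it) 'nothing)  =>  one
```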

>>This reminds me: you still haven't shown an implementation of LOOP
>>in syntax-rules (or maybe I haven't gotten that far in the thread).

I don't think anyone has claimed that syntax-rules is appropriate for
expressing LOOP. If DEFMACRO's only purpose was to be able to express
LOOP then we wouldn't need it, because LOOP is part of the standard.

Pascal Costanza

Feb 24, 2004, 5:32:09 PM

Pascal Costanza wrote:

> I have no problems to agree with you that defmacro, syntax-rules and
> syntax-case are just different ways to implement macros, maybe with
> their own respective strengths in the latter two cases that I cannot
> judge at the moment.

Here is the result of my first experiment with syntax-rules...

(define-syntax special-let
  (syntax-rules ()
    ((_ () form ...)
     (let () form ...))
    ((_ ((var binding) more ...) form ...)
     (let ((var binding))
       (special-let (more ...) form ...)))
    ((_ (var more ...) form ...)
     (let ((var '()))   ; '() rather than the unevaluable bare ()
       (special-let (more ...) form ...)))))

...and for comparison purposes, here is how I would implement it in
Common Lisp:

(defmacro special-let (bindings &body body)
  (reduce (lambda (binding body)
            (cond ((symbolp binding)
                   `(let ((,binding nil))
                      ,body))
                  (t `(let (,binding)
                        ,body))))
          bindings
          :from-end t :initial-value `(progn ,@body)))
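A usage sketch, assuming I've read both versions correctly (my own
example, not from either definition):

```lisp
;; Bare symbols bind to NIL; (var value) pairs bind as usual.
(special-let (x (y 42) z)
  (list x y z))
;; expands to nested LETs and evaluates to (NIL 42 NIL)
```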


I am not sure what to think of this. Comments?

Jens Axel Søgaard

Feb 24, 2004, 8:24:07 PM

I am not sure. What happens in the CL macro
if the body contains a macro that expands to
one of the variables being bound in the special
let?

Example:


(define a 0)

(define-syntax foo
  (syntax-rules ()
    ((_)
     (+ a 100))))

(special-let ((a 42))
  (foo))


Evaluates to 100.

But

(defparameter a 0)

(defmacro foo ()
  '(+ a 100))

(special-let ((a 42))
  (foo))

evaluates to 142.


I tried (defvar a 0) and (defconstant a 0)
instead of (defparameter a 0). The first
also evaluates to 142, the second gives an
error.


In order to give the defmacro the same behaviour
as the syntax-rules one, you need to rename the user-given
names in bindings using gensym. The tricky part
is then the renaming in the body.


However, the possibility also exists that I messed up
in the translation of the example from Scheme to CL.
In that case, forget the paragraph above.
(But please tell me what the correct translation
would be.)


--
Jens Axel Søgaard

dw

Feb 25, 2004, 1:07:09 AM
"Anton van Straaten" <an...@appsolutions.com> wrote in message news:<mza_b.14958$W74...@newsread1.news.atl.earthlink.net>...

>
> (define-syntax and2
>   (lambda (x)
>     (syntax-case x ()
>       ((_ x y)
>        (syntax (if x y #f))))))
>
> The (lambda (x) ...) binds a syntax object to x, representing the syntax of
> the expression which invoked the macro. In theory, you can do whatever you
> want with that syntax object. Most commonly, you'll use syntax-case to do a
> pattern match on it, which is what the above example does with the
> expression (syntax-case x () ...). The () is for literals, like 'else' in
> cond.
>
> Within the above syntax-case expression, there's a single pattern and a
> corresponding template:
>
> ((_ x y)
> (syntax (if x y #f))))))
>
> The underscore in (_ x y) represents the name of the macro - you could also
> write (and2 x y), it doesn't matter. This pattern will match any
> invocations of AND2 with two parameters. After the pattern, is the
> expression which will be executed when that pattern is matched. In this
> case, it's simply a syntax expression which returns the expression (if x y
> #f). The return value from syntax-case must be a syntax object, which
> represents the actual syntax of the final program.

From what I understood, it looks like Scheme macros are a bit like C++
templates: that is, instead of directly manipulating lists, they use
pattern matching. Both are Turing-complete systems.

If you are familiar with templates, can you compare syntax-case and
syntax-rules to them?

If you were designing a new computer language "for the masses", what kind
of macros would it have?

Pascal Costanza

Feb 25, 2004, 8:08:24 AM

Jens Axel Søgaard wrote:

If Common Lispniks write such macros, they explicitly want to capture
the variable A defined in the surrounding code. For example, this could
be part of the specification of the macro FOO: "captures variable A and
adds 100".

If you don't want that, you have to write the following:

(defmacro foo ()
  (let ((a (gensym)))
    `(+ ,a 100)))

...or better:

(defmacro foo ()
  (with-unique-names (a)
    `(+ ,a 100)))

WITH-UNIQUE-NAMES is not defined in ANSI Common Lisp, but is easy to
implement and provided with some CL implementations. Sometimes, it is
called WITH-GENSYMS.
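One common formulation, for reference (implementations vary; this is a
minimal sketch):

```lisp
(defmacro with-unique-names ((&rest names) &body body)
  ;; Bind each NAME, at macroexpansion time, to a fresh symbol whose
  ;; print name echoes the original for readable expansions.
  `(let ,(mapcar (lambda (name)
                   `(,name (gensym ,(symbol-name name))))
                 names)
     ,@body))
```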

It's possible to capture names with syntax-case via some explicit
manipulations of syntax objects. But I don't understand the details yet.
Contrary to what _some_ literature about hygienic macros suggests, name
capture is a very useful concept.

Pascal Costanza

Feb 25, 2004, 8:25:54 AM

Pascal Costanza wrote:

> If you don't want that, you have to write the following:
>
> (defmacro foo ()
>   (let ((a (gensym)))
>     `(+ ,a 100)))
>
> ...or better:
>
> (defmacro foo ()
>   (with-unique-names (a)
>     `(+ ,a 100)))

...nonsense, these macros would produce errors because they would
attempt to add 100 to an unbound variable.

The question is what you want to achieve with such a macro. If you want
to capture a name, then capture it. If you want to make sure that you
don't capture an arbitrary A, give it a more meaningful name or put it
in a package created for this purpose.

(in-package "A-CAPTURER")

(defvar a 0) ;; this should rather be *a*, but for the sake of the example...

(defmacro foo ()
  '(+ a 100))


(in-package "OTHER-PACKAGE")

(special-let ((a 42))
  (foo))

Joe Marshall

Feb 25, 2004, 9:05:07 AM2/25/04