Re: er-macro-transformer/sc-macro-transformer


John Cowan

Sep 25, 2020, 1:01:40 PM
to Marc Nieper-Wißkirchen, srfi...@srfi.schemers.org, scheme-re...@googlegroups.com
[+ srfi-211, +]


On Fri, Sep 25, 2020 at 2:17 AM Marc Nieper-Wißkirchen <ma...@nieper-wisskirchen.de> wrote:

>> In that case, implicit renaming a la Chicken and Picrin should perhaps be added.

> I can do that.

And so you have.
 
>> I don't consider Larceny to have an implementation of ER macros (or Racket either), though I did list Larceny under ER systems faute de mieux.

> I wouldn't abandon Larceny's implementation so easily. It is a sane
> implementation.

I don't want to reject it, just to distinguish it.  The fact is, we have eight Schemes with implementations of ER, and there may very well be that many implementation strategies.  Here's what I know or guess:

Gauche, Scheme48: Native implementations

Chicken, Picrin: Native implementations (they also have IR in parallel)

Chibi: On top of syntactic closures.

MIT: Probably on top of syntactic closures.

Sagittarius: Also has syntax-case, but I don't know the relationship.

Larceny: SRFI 72.
 
> The main difference is that in order to test whether
> binding one identifier would bind the other identifier one has to use
> bound-identifier=? instead of eq?. But this abstraction is easy to add
> to the other ER systems (by just defining bound-identifier=? to be
> eq?).
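
A minimal sketch of that abstraction, assuming an ER system whose
renamed identifiers are interned objects compared with eq? (the names
here are only illustrative):

  ;; In such a system the SRFI 72-style predicate reduces to identity:
  (define (bound-identifier=? id1 id2)
    (eq? id1 id2))

A transformer written against bound-identifier=? then runs unchanged
on both kinds of system.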

That seems reasonable, given that there isn't even a de facto spec for ER.
 
> Another difference is that the other ER systems allow raw
> symbols in the output, but the semantics of that is questionable
> anyway and should be left as an error in a possible standardization.

We should warn that that limits portability, but not outlaw it.
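
To make the raw-symbol case concrete, here is a minimal sketch
(assuming the Chicken-style er-macro-transformer signature) in which
the temporary is left as a raw symbol:

  (define-syntax swap!
    (er-macro-transformer
     (lambda (expr rename compare)
       (let ((a (cadr expr)) (b (caddr expr)))
         ;; `let' and `set!' are renamed hygienically, but `tmp' is a
         ;; raw symbol; which binding it denotes, and whether it can
         ;; collide with a user variable named tmp, is exactly what
         ;; differs between ER systems.
         `(,(rename 'let) ((tmp ,a))
            (,(rename 'set!) ,a ,b)
            (,(rename 'set!) ,b tmp))))))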
 
>>> In any case, I hope that we will be able to reach a consensus before
>>> any voting because the results of such votings are often erratic.

General rant (which is why I have cc'ed scheme-reports-wg2):

Reason is not the deciding factor in this process, and is less important than consensus (if we can get it) or a vote (if we cannot).  Both of these are erratic processes: it is easy to reach a consensus on something that is, sub specie aeternitatis, a mistake.  We have done it many times.  One reason for having open voting in WG2 is that it allows the source of decisions to be more broad-based and inclusive.

Rather, reason is a rhetorical technique for getting people to vote for what you want.  It is a (morally) good technique, and we encourage people to use it rather than the (very effective) techniques for "making the worse cause appear the better."  But in the end, that's all it is.  Decisions come from committee votes (and people rarely or never explain their votes), and ultimately from implementers and programmers.

>> I think that is very unlikely.  Nobody is likely to come up with an unshakable explanation for how one can be rooted in the other; many have tried, and some have gone on record that they don't think it can be done.

> See the R4RS system I added to the SRFI, which can serve as a common
> denominator.

Very nice!

I think asking people to read a paper or (mutable) implementation documentation is inadequate for something that is itself part of the permanent record and may become (as a whole or in part) an element of R7RS-large.  I think therefore that the exports of (srfi 211 er-macro-transformer), (srfi 211 ir-macro-transformer), (srfi 211 with-ellipsis), and (srfi 211 sc-macro-transformer) should be fully documented here, most of which can be done by cut and paste.  This is particularly important if you are going to change the exports or the semantics.

In addition, since Racket has a greatly expanded syntax-case system, the library name should be r6rs to indicate what version of syntax-case the SRFI conforms to.  I also think it is clearer and easier if you use explicit-renaming, implicit-renaming, and syntactic-closures in the library names.  People who use these facilities know they are macro transformers, so let's emphasize the difference rather than the similarity.

> Of course, to reach consensus, implementers must be willing to do some
> work on their implementations.

Naturally, but how much, and what existing code will it break?  It's one thing to ask ER implementers to add bound-identifier=?, another to disallow raw symbols in the output.

Another general rant:

It is important when designing a SRFI for which there is a lot of semi-compatible prior art to try to leverage that art as much as possible.  When writing SRFI 143 (fixnums), I designed the SRFI to be as system-independent as possible, but when the implementation is running on Chicken, it exploits (chicken fixnum), which the compiler knows how to optimize.  There is a portable implementation, rubber-chicken.scm, of the parts of (chicken fixnum) that are needed to use it on another Scheme, so the sample implementation as a whole remains portable.

Of course this is only an exemplar.  If some other Scheme has a different set of fixnum primitives, it can and should adapt the sample implementation to use its own as the basis.  In particular, the Chibi version simply uses the generic operators, as Alex said they are no slower than specialized ones would be, and since overflow is an error, that's not a concern.
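
The dispatch itself is short; a sketch of the pattern as it might
appear among the library declarations (this is not the actual SRFI 143
source, just an illustration):

  (cond-expand
    (chicken (import (chicken fixnum)))     ; native primitives the compiler optimizes
    (else (include "rubber-chicken.scm")))  ; portable definitions of the same names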
 
>>> I cannot believe that the non-destructability is meant as a feature
>>> because it makes it impossible for a macro to expand into another
>>> macro written in that language.
>>
>>
>> Can you give an example?  I have a concrete-operational mind.

> Consider the definition of, say, letrec in the appendix of the R7RS.
> Its expansion produces let expressions, which are again derived. Now
> assume that both the letrec and the let macro are written in syntactic
> closures. Invoking the letrec macro will yield a syntactic closure.
> The underlying system unwraps the syntactic closure until it sees the
> let keyword. The expression received by the let transformer will,
> however, still be (partially) wrapped in closures.
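
A minimal sketch of the effect, using the MIT-style sc-macro-transformer
interface (simplified, and imagining for the sake of the example that
if and begin were themselves macros written the same way):

  (define-syntax my-when
    (sc-macro-transformer
     (lambda (form use-env)
       (let ((test (make-syntactic-closure use-env '() (cadr form)))
             (body (map (lambda (e) (make-syntactic-closure use-env '() e))
                        (cddr form))))
         ;; The output as a whole is closed in my-when's definition
         ;; environment, while the user's test and body subforms arrive
         ;; wrapped as opaque syntactic closures.  A transformer for
         ;; `if' or `begin' written in the same style could not take
         ;; those wrapped subforms apart again.
         `(if ,test (begin ,@body) #f)))))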

I get it now (I think).  Thanks.



John Cowan          http://vrici.lojban.org/~cowan        co...@ccil.org
If a soldier is asked why he kills people who have done him no harm, or a
terrorist why he kills innocent people with his bombs, they can always
reply that war has been declared, and there are no innocent people in an
enemy country in wartime.  The answer is psychotic, but it is the answer
that humanity has given to every act of aggression in history.  --Northrop Frye

Marc Nieper-Wißkirchen

Sep 25, 2020, 1:49:16 PM
to scheme-re...@googlegroups.com
---------- Forwarded message ---------
From: Marc Nieper-Wißkirchen <ma...@nieper-wisskirchen.de>
Date: Fri, Sep 25, 2020 at 7:46 PM
Subject: Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer
To: <scheme-re...@googlegroups.com>
Cc: <srfi...@srfi.schemers.org>, <scheme-re...@googlegroups.com>


On Fri, Sep 25, 2020 at 7:01 PM, John Cowan <co...@ccil.org> wrote:

[...]

>> > I don't consider Larceny to have an implementation of ER macros (or Racket either), though I did list Larceny under ER systems faute de mieux.
>>
>> I wouldn't abandon Larceny's implementation so easily. It is a sane
>> implementation.
>
>
> I don't want to reject it, just to distinguish it. The fact is, we have eight Schemes with implementations of ER, and there may very well be that many implementation strategies. Here's what I know or guess:
>
> Gauche, Scheme48: Native implementations
>
> Chicken, Picrin: Native implementations (they also have IR in parallel)
>
> Chibi: On top of syntactic closures.
>
> MIT: Probably on top of syntactic closures.
>
> Sagittarius: Also has syntax-case, but I don't know the relationship.
>
> Larceny: SRFI 72.

I am not sure whether the term "native implementation" is
well-defined. The only thing one can state with certainty is that the
above systems have some core in common.

[...]

>> Another difference is that the other ER systems allow raw
>> symbols in the output, but the semantics of that is questionable
>> anyway and should be left as an error in a possible standardization.
>
>
> We should warn that that limits portability, but not outlaw it.

Saying that "it is an error" seems to be effectively the same.

In any case, with the current ER systems, outputting raw symbols is
broken anyway.

In Unsyntax, I made the change of closing raw symbols in the use
environment, which is the only sane way to handle them.

>> >> In any case, I hope that we will be able to reach a consensus before
>> >> any voting because the results of such votings are often erratic.
>
>
> General rant (which is why I have cc'ed scheme-reports-wg2):
>
> Reason is not the deciding factor in this process, and is less important than consensus (if we can get it) or a vote (if we cannot). Both of these are erratic processes: it is easy to reach a consensus on something that is, sub specie aeternitatis, a mistake. We have done it many times. One reason for having open voting in WG2 is that it allows the source of decisions to be more broad-based and inclusive.
>
> Rather, reason is a rhetorical technique for getting people to vote for what you want. It is a (morally) good technique, and we encourage people to use it rather than the (very effective) techniques for "making the worse cause appear the better. But in the end, that's all it is. Decisions come from committee votes (and people rarely or never explain their votes), and ultimately from implementers and programmers.

Maybe I can follow you here, maybe not.

I will answer with an example. Consider the question of whether "1 is
a prime number". Even though it is a question about how to define a
concept, there is only one legitimate sane answer. Although I may use
reasoning to explain the correct answer, the truth of the answer is
independent of any reasoning, let alone of any voting.

Not all questions are like this one, of course.

>> > I think that is very unlikely. Nobody is likely to come up with an unshakable explanation for how one can be rooted in the other; many have tried, and some have gone on record that they don't think it can be done.
>>
>> See the R4RS system I added to the SRFI, which can serve as a common
>> denominator.
>
>
> Very nice!
>
> I think asking people to read a paper or (mutable) implementation documentation is inadequate for something that itself is part of the permanent record and and may become (as a whole or in part) an element of R7RS-large. I think therefore that the exports of (srfi 211 er-macro-transformer), (srfi 211 ir-macro-transformer), (srfi 211 with-ellipsis), and (srfi 211 sc-macro-transformer) should be fully documented here, most of which can be done by cut and paste. This is particularly important if you are going to change the exports or the semantics.

The licenses for the things that are to be copied and pasted have to
be checked. The documentation at least for sc-macro-transformer and
with-ellipsis comes from GNU projects that archive earlier versions,
so when we fix the version, mutability matters less.

That said, I can fill in specifications but these will probably (have
to) make choices, which I am currently evading (especially in the case
of ER macros).

> In addition, since Racket has a greatly expanded syntax-case system, the library name should be r6rs to indicate what version of syntax-case the SRFI conforms to. I also think it is clearer and easier if you use explicit-renaming, implicit-renaming, and syntactic-closures in the library names. People who use these facilities know they are macro transformers, so let's emphasize the difference rather than the similarity.

I would rather want to find a substitute for the (srfi 211 r4rs)
library name, which I just chose for the lack of a better name. Maybe,
(srfi 211 low-level) is already a better name. I won't rename (srfi
211 syntax-case) to (srfi 211 r6rs) for another reason: The R6RS macro
procedures/syntax are split into several sublibraries here because
identifier syntax or variable transformers are completely orthogonal
to syntax-case and apply (when present) equally well to all the other
macro systems.

The change of the XX-macro-transformer library names to something more
specific sounds reasonable.

>> Of course, to reach consensus, implementers must be willing to do some
>> work on their implementations.
>
>
> Naturally, but how much, and what existing code will it break? It's one thing to ask ER implementers to add bound-identifier=?, another to disallow raw symbols in the output.

As I wrote above, macro transformers that output raw symbols are, in
currently existing systems, generally broken anyway. In any case,
making it an error to output raw symbols in a standard doesn't mean
that each individual system has to forbid them. We are not producing
R6RS mustard. :)

That said, I would advise not to ban raw symbols if the underlying
macro system ensures that the resulting symbols are closed in the use
environment, which is what you want anyway when you produce symbols.

As for how much work: Initially, Unsyntax only had syntax-case. Now
(after hacking 2 days) it supports all the systems described here.

> Another general rant:
>
> It is important when designing a SRFI for which there is a lot of semi-compatible prior art to try to leverage that art as much as possible. When writing SRFI 143 (fixnums), I designed the SRFI to be as system-independent as possible, but when the implementation is running on Chicken, it exploits (chicken fixnum), which the compiler knows how to optimize. There is a portable implementation, rubber-chicken.scm, of the parts of (chicken fixnum) that are needed to use it on another Scheme, so the sample implementation as a whole remains portable.
>
> Of course this is only an exemplar. If some other Scheme has a different set of fixnum primitives, it can and should adapt the sample implementation to use its own as the basis. In particular, the Chibi version simply uses the generic operators, as Alex said they are no slower than specialized ones would be, and since overflow is an error, that's not a concern.

I am slightly at a loss here. Aren't we doing exactly this by trying
to find some common denominator?

PS: The current "pre-next-draft" version of SRFI 211 is here:
https://htmlpreview.github.io/?https://github.com/mnieper/srfi-211/blob/master/srfi-211.html.
I had already added a number of comments to ER macros before I
received your email.

Marc Nieper-Wißkirchen

Sep 25, 2020, 2:01:42 PM
to scheme-re...@googlegroups.com, srfi...@srfi.schemers.org, John Cowan
On Fri, Sep 25, 2020 at 7:49 PM, Marc Nieper-Wißkirchen
<marc....@gmail.com> wrote:

> >> Of course, to reach consensus, implementers must be willing to do some
> >> work on their implementations.
> >
> >
> > Naturally, but how much, and what existing code will it break? It's one thing to ask ER implementers to add bound-identifier=?, another to disallow raw symbols in the output.

PPS We have almost only talked about the implementations but the users
matter at least as much. Given the scope of R7RS (large), high-quality
implementations will matter more than toy implementations. And toy
examples of code are less relevant than code already written or
code that will be written. For example, the Nanopass framework is an
existing, huge, and successful piece of Scheme software, written with
syntax-case. (I doubt anyone wants to write applied macros like these
with er-macro-transformer.)

John Cowan

Sep 25, 2020, 6:07:31 PM
to Marc Nieper-Wißkirchen, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Fri, Sep 25, 2020 at 1:47 PM Marc Nieper-Wißkirchen <ma...@nieper-wisskirchen.de> wrote:

> I am not sure whether the term "native implementation" is
> well-defined.

By that I mean that it is not built on top of any other implementation.
 
> I will answer with an example. Consider the question of whether "1 is
> a prime number". Even though it is a question about how to define a
> concept, there is only one legitimate sane answer. Although I may use
> reasoning to explain the correct answer, the truth of the answer is
> independent of any reasoning, let alone of any voting.

Without doubt.
 
> Not all questions are like this one, of course.

Indeed.  And the questions "What belongs in this SRFI?" and "What goes into R7RS-large?" are very much unlike "Is 1 prime?"  They are much more like "What are the boundaries of country X?", which is a matter of fact until it is not.

> That said, I can fill in specifications but these will probably (have
> to) make choices, which I am currently evading (especially in the case
> of ER macros).

You will have to, I think.
 
> I would rather want to find a substitute for the (srfi 211 r4rs)
> library name, which I just chose for the lack of a better name. Maybe,
> (srfi 211 low-level) is already a better name.

That makes sense to me.
 
> I won't rename (srfi
> 211 syntax-case) to (srfi 211 r6rs) for another reason: The R6RS macro
> procedures/syntax are split into several sublibraries here because
> identifier syntax or variable transformers are completely orthogonal
> to syntax-case and apply (when present) equally well to all the other
> macro systems.

Clear.
 
> I am slightly at a loss here. Aren't we doing exactly this by trying
> to find some common denominator?

Yes.  However, sometimes providing only the common elements leaves things too underspecified.

> PPS We have almost only talked about the implementations but the users
> matter at least as much. Given the scope of R7RS (large), high-quality
> implementations will matter more than toy implementations.

That's true.  But I am constrained by my position as chair of R7RS-large from having public views on which implementations are high-quality and which are toys.

> (I doubt anyone wants to write applied macros like these
> with er-macro-transformer.)

 I certainly would not.  But then I don't want to write low-level macros at all.  If I were writing a multipass compiler like that, I'd do it at the runtime level.



John Cowan          http://vrici.lojban.org/~cowan        co...@ccil.org
C'est là pourtant que se livre le sens du dire, de ce que, s'y conjuguant
le nyania qui bruit des sexes en compagnie, il supplée à ce qu'entre eux,
de rapport nyait pas.               --Jacques Lacan, "L'Etourdit"

Marc Nieper-Wißkirchen

Sep 26, 2020, 4:56:45 AM
to scheme-re...@googlegroups.com, John Cowan, srfi...@srfi.schemers.org
On Sat, Sep 26, 2020 at 12:07 AM, John Cowan <co...@ccil.org> wrote:

>> I am not sure whether the term "native implementation" is
>> well-defined.
>
>
> By that I mean that it is not built on top of any other implementation.

We mustn't mix up implementations with interfaces. For example,
syntax-case is not an implementation, but an interface, which can very
well be provided by fundamentally different types of implementations
(like, say, Chez/Racket/Larceny show).

For example, Unsyntax uses the marks and substitutions algorithm of
Dybvig together with a notion of a syntactic closure to implement
syntax-case/er-macro-transformer/ir-macro-transformer.

In the end, a user doesn't care what the underlying algorithm is as
long as it works. All algorithms are well-understood and sufficiently
simple to implement so that they can be switched out in case an
implementation hits a wall.

> Indeed. And the questions "What belongs in this SRFI?" and "What goes into R7RS-large?" are very much unlike "Is 1 prime?" They are much more like "What are the boundaries of country X?", which is a matter of fact until it is not.

Agreed. Questions I had in mind are, say, whether pattern variables
are matched with those in a template in syntax-rules through
bound-identifier=? or free-identifier=?. Such a question came up in
the discussion of SRFI 148 as the various systems had implemented them
differently. But there is only one "right answer"
(bound-identifier=?), which became clear in the discussion.

Similarly, if we just specified the current behavior of ER macro
implementations in the case of raw symbols in the output, we would get
a specification, which is factually bad.

>> That said, I can fill in specifications but these will probably (have
>> to) make choices, which I am currently evading (especially in the case
>> of ER macros).
>
>
> You will have to, I think.

I have heard you and Alex. :)

>> I would rather want to find a substitute for the (srfi 211 r4rs)
>> library name, which I just chose for the lack of a better name. Maybe,
>> (srfi 211 low-level) is already a better name.
>
>
> That makes sense to me.

So... (srfi 211 low-level).

>> PPS We have almost only talked about the implementations but the users
>> matter at least as much. Given the scope of R7RS (large), high-quality
>> implementations will matter more than toy implementations.
>
>
> That's true. But I am constrained by my position as chair of R7RS-large from having public views on which implementations are high-quality and which are toys.

And the definition of "high-quality" is not of the same type as the
definition of a prime number.

My point is that one reason for splitting the language was to have a
language that can be easily supported by almost all existing
implementations and to have a language in the scope of R6RS++, for
which we don't expect every hobby implementation to have the manpower
to provide a complete implementation.

That said, when it comes to syntax, I'll try to keep the Unsyntax
frontend as up to date as possible so that it can be used as a
drop-in replacement, much like psyntax was for R6RS. This way, I
hope, we can solve some questions more from a user perspective than
from a perspective of what is already available in the implementations
of those active on these mailing lists.

>
>> (I doubt anyone wants to write applied macros like these
>>
>> with er-macro-transformer.)
>
>
> I certainly would not. But then I don't want to write low-level macros at all. If I were writing a multipass compiler like that, I'd do it at the runtime level.

In this context, "procedural macro" is probably the better term.

What do you mean by "at the runtime level"? That you would write a
code generator that produces the Scheme code, which is then finally
compiled? But this way, you would have to reinvent everything because
your code generator would effectively be an expander.

The nice thing about Scheme procedural macros is that one's
expand-time level is the other one's run-time level. As soon as you
pass to the right-hand side of define-syntax, you have done the shift.

It is a pretty nice feature of the syntax-case system that you can
actually test your macro transformer at the absolute runtime level,
meaning at the REPL, because a transformer is nothing but a procedure,
which you can experiment with like any other procedure. No other
system is as simple in this regard (apart from the R4RS system).
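
For instance (a small sketch; the printed form of the result is
implementation-dependent):

  (define swap-transformer
    (lambda (stx)
      (syntax-case stx ()
        ((_ a b) #'(let ((tmp a)) (set! a b) (set! b tmp))))))

  (swap-transformer #'(swap! x y))
  ;; => a syntax object for the let form, inspectable at the REPL

  ;; At a REPL where run time and expand time share definitions (as in
  ;; Chez), the very same procedure can then be installed as a macro:
  (define-syntax swap! swap-transformer)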

John Cowan

Sep 27, 2020, 7:28:47 PM
to Marc Nieper-Wißkirchen, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Sat, Sep 26, 2020 at 4:56 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
 
> We mustn't mix up implementations with interfaces. For example,
> syntax-case is not an implementation, but an interface, which can very
> well be provided by fundamentally different types of implementations
> (like, say, Chez/Racket/Larceny show).

Certainly.  However, if examination of a Scheme implementation's code shows (say) that sc-transformer is implemented using er-transformer, it is pretty much safe to suppose that the two interfaces are compatible.

> Agreed. Questions I had in mind are, say, whether pattern variables
> are matched with those in a template in syntax-rules through
> bound-identifier=? or free-identifier=?. Such a question came up in
> the discussion of SRFI 148 as the various systems had implemented them
> differently. But there is only one "right answer"
> (bound-identifier=?), which became clear in the discussion.

It "became evident" in the sense that there was consensus, which means either that all agreed or that those who disagreed were no longer willing to argue their positions.  A different group might well come to a different consensus.
 
> Similarly, if we just specified the current behavior of ER macro
> implementations in the case of raw symbols in the output, we would get
> a specification, which is factually bad.

Again, this means that some macros will behave in a way that some people would not expect.  One can get used to anything in time, "even hanging" as the saying is.  Calling it "factually bad" is essentially a judgment of taste rather than of reason.

> My point is that one reason for splitting the language was [...]
> to have a language in the scope of R6RS++, for
> which we don't expect that every hobby implementation has the manpower
> to provide a complete implementation.

Absolutely.  But the further we move away from R5RS+ behavior (where by "+" I mean "commonly accepted extensions") the less likely actual implementation becomes.
 
> In this context, "procedural macro" is probably the better term.

Yes.
 
> What do you mean by "at the runtime level"? That you would write a
> code generator that produces the Scheme code, which is then finally
> compiled?
 
Just so, for good and bad.  Embedded DSLs are a reasonable use case for macros.  But a CL-to-Scheme, to say nothing of a Fortran-to-Scheme, compiler would not be.  There are many intermediate cases.
 
> It is pretty nice a feature of the syntax-case system that you can
> actually test your macro transformer at the absolute runtime level,
> meaning at the REPL because a transformer is nothing but a procedure,
> which you can experiment with as with any other procedure.

How is that distinct from an ER-transformer?



John Cowan          http://vrici.lojban.org/~cowan        co...@ccil.org
Mark Twain on Cecil Rhodes: I admire him, I freely admit it,
and when his time comes I shall buy a piece of the rope for a keepsake.

Marc Nieper-Wißkirchen

Sep 29, 2020, 11:44:14 AM
to John Cowan, scheme-re...@googlegroups.com, srfi...@srfi.schemers.org
On Mon, Sep 28, 2020 at 1:28 AM, John Cowan <co...@ccil.org> wrote:

> On Sat, Sep 26, 2020 at 4:56 AM Marc Nieper-Wißkirchen <marc....@gmail.com> wrote:
>
>>
>> We mustn't mix up implementations with interfaces. For example,
>> syntax-case is not an implementation, but an interface, which can very
>> well be provided by fundamentally different types of implementations
>> (like, say, Chez/Racket/Larceny show).
>
>
> Certainly. However, if examination of a Scheme implementation's code shows (say) that sc-transformer is implemented using er-transformer, it is pretty much safe to suppose that the two interfaces are compatible.

One always needs to take a close look. For example, Chibi says that
er-macro-transformer is implemented in terms of syntactic closures.
But Chibi's syntactic closures do not implement the (full) SC macro
facility. The same is true for Picrin. So, in this case, syntactic
closures are an implementation technique for both ER (Chibi) and SC
(MIT/GNU), but Chibi's existence doesn't necessarily prove the
compatibility of the two interfaces.

That said, as long as we do not incorporate implementation-specific
quirks (e.g. forcing bound-identifier=? to be eq?) into the various
interfaces, Unsyntax now shows that all of the interfaces in SRFI 211
can be implemented on the same base at the same time, making them
compatible. Of course, different implementations may handle one
interface more efficiently than another one.

>> Agreed. Questions I had in mind are, say, whether pattern variables
>> are matched with those in a template in syntax-rules through
>> bound-identifier=? or free-identifier=?. Such a question came up in
>> the discussion of SRFI 148 as the various systems had implemented them
>> differently. But there is only one "right answer"
>> (bound-identifier=?), which became clear in the discussion.
>
>
> It "became evident" in the sense that there was consensus, which means either that all agreed or that those who disagreed were no longer willing to argue their positions. A different group might well come to a different consensus.

I chose this example because I believe that we found the "right
answer". (This presupposes, of course, that there is a right answer,
as in the example of primes.) There was consensus, but it was
consensus that we had found the right answer, not merely that we
could agree on some answer.

>> Similarly, if we just specified the current behavior of ER macro
>> implementations in the case of raw symbols in the output, we would get
>> a specification, which is factually bad.
>
>
> Again, this means that some macros will behave in a way that some people would not expect. One can get used to anything in time, "even hanging" as the saying is. Calling it "factually bad" is essentially a judgment of taste rather than of reason.

"Bad" may not come from the word family that is appropriate here. How
would you call it if the norm of sunglasses were such that the right
lens was so dark that one couldn't look through it? Of course, one can
get used to it, but nevertheless, I would call such a norm a poor one,
actually a factually poor one. (Again, you may know a better word than
me for "poor".)

>
>> My point is that one reason for splitting the language was [...]
>>
>> to have a language in the scope of R6RS++, for
>> which we don't expect that every hobby implementation has the manpower
>> to provide a complete implementation.
>
>
> Absolutely. But the further we move away from R5RS+ behavior (where by "+" I mean "commonly accepted extensions") the less likely actual implementation becomes.

At the moment, we haven't moved very far away from R5RS+. And we
haven't moved very far from R6RS either, which is likewise good
because there is no reason why R6RS platforms shouldn't support R7RS
(large) in the long run.

Moreover, syntactic extensions (which are the scope of SRFI 211) are
far less of a problem for implementations (barring animosities of
their maintainers) than additions to the evaluation model (system
calls, threads, continuations, environments, tail-call contexts,
signals, garbage collection, ...) because they can be implemented
relatively trivially by replacing the frontend.

On the other hand, even by adding only a bit of R5RS+ stuff, we may
already move away from existing implementations. Of course, nominally
it is easy for an R7RS (small) implementation to implement, say, the
Tangerine edition. And yet, with all the sample implementations of
the SRFIs bundled, it may not be efficient enough for the intended
applications of R7RS (large). (To give you an example, when I tried
to pretty-print about 500 kB of Scheme code using Chibi's
implementation of SRFI 166, it seemingly took forever. On the other
hand, Chez's built-in pretty-printer did the job in an instant. Of
course, SRFI 166 operates at a much higher level and Chibi is not an
optimizing compiler, but it shows that current implementations of
R7RS (large) cannot yet keep up with existing tools.)

Whatever the final outcome of the R7RS (large) process will be (and
whether we will get high-quality implementations, for your preferred
meaning of "high quality"), the whole process has already benefited
the Scheme world tremendously through the standardization of so many
libraries that have been created and described in SRFIs along the
way.


>> What do you mean by "at the runtime level"? That you would write a
>> code generator that produces the Scheme code, which is then finally
>> compiled?
>
>
> Just so, for good and bad. Embedded DSLs are a reasonable use case for macros. But a CL-to-Scheme, to say nothing of a Fortran-to-Scheme, compiler would not be. There are many intermediate cases.

If you want to use your CL or Fortran inside Scheme (so that it is
integrated into the module system and aware of the lexical context),
it is by far easiest to implement this as a procedural macro.
Otherwise, you would have to write a Scheme compiler as well.

Note that, at least in principle, turning a procedure into a macro is
just one instance of `define-syntax'. For example, if you have your
Fortran-to-Scheme compiler as a procedure, you can immediately use it
at expand-time.

A more likely wish than a Fortran-to-Scheme compiler is probably
something like lex/yacc in Scheme. In the C world, these are
source-code-generating tools with all the associated problems. In
Scheme with procedural macros, you would first write a lex/yacc
analogue that does the translation at run time, much as the
traditional lex/yacc does. But then you can lift everything to
expand time through `define-syntax'.
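
A minimal sketch of that lift, with a hypothetical compile-grammar
procedure standing in for the run-time lex/yacc analogue (it must be
visible at expand time as well, e.g. through a suitable meta-level
import):

  (define-syntax define-parser
    (lambda (stx)
      (syntax-case stx ()
        ((_ name grammar)
         ;; compile-grammar runs during expansion and returns ordinary
         ;; Scheme code as a datum; datum->syntax gives that code the
         ;; lexical context of the use site.
         (with-syntax ((code (datum->syntax #'name
                               (compile-grammar (syntax->datum #'grammar)))))
           #'(define name code))))))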

SRFI 115 is another good use case: regular expressions can be
compiled at expansion time rather than at runtime.

>> It is pretty nice a feature of the syntax-case system that you can
>> actually test your macro transformer at the absolute runtime level,
>> meaning at the REPL because a transformer is nothing but a procedure,
>> which you can experiment with as with any other procedure.
>
>
> How is that distinct from an ER-transformer?

An ER transformer itself (the result of calling/expanding
er-macro-transformer) is an opaque object. Of course, you can test
the procedure you plug into er-macro-transformer, but then you have
to forge the rename and the compare procedures. It's doable somehow,
but with transformer procedures, it is much more direct.
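
For example, a rough sketch of such forging (good enough for
eyeballing the output, not for checking hygiene):

  (define (fake-rename sym) sym)          ; pretend renaming is the identity
  (define (fake-compare a b) (eq? a b))

  (define (swap-er expr rename compare)
    (let ((a (cadr expr)) (b (caddr expr)))
      `(,(rename 'let) ((,(rename 'tmp) ,a))
         (,(rename 'set!) ,a ,b)
         (,(rename 'set!) ,b ,(rename 'tmp)))))

  (swap-er '(swap! x y) fake-rename fake-compare)
  ;; => (let ((tmp x)) (set! x y) (set! y tmp))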