
R6 Counterproposal


Tom Lord

May 25, 2007, 2:49:04 PM
R6 should be completely different from the current draft,
in my opinion.

The addition of a few, parsimoniously chosen features
eliminates the need for almost everything that is new
in the R6 draft. Nearly *ALL* of the new hacks could
be done as SRFIs, if only R6 would add these few OPTIONAL
features:


* Reader Extensibility

The Scheme reader should have something *similar* to
Common Lisp's reader tables and reader hooks. It should
be possible, for example, to program the reader to
parse Lua code or Python code. It should be
possible to program the reader to parse XML or (lax) HTML.
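
To make this concrete, here is a minimal sketch of what such an API
might look like. SET-READER-MACRO! and READ-DELIMITED-LIST are
hypothetical names, loosely modeled on Common Lisp's
SET-MACRO-CHARACTER; neither is standard Scheme:

;; Hypothetical: make [a b c] read as (vector a b c).
(set-reader-macro! #\[
  (lambda (port char)
    (cons 'vector (read-delimited-list #\] port))))

;; After this, (read) applied to the text "[1 2 3]" yields
;; (vector 1 2 3).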


* First Class Reflection of Types and Environments

** There should be a new atomic type, TYPE?

There should be one unique (under EQV?) type value
for each value type recognized by the implementation.
Each value type should have exactly one TYPE. Programs
should be able to create new type values. Programs
should be able to box arbitrary values under a given
TYPE. Access (reading and writing the contents of) a
value should require access to the corresponding TYPE
value.

"Records" should not be in R6. Given the simpler proposal
of TYPE objects, records can be implemented in a SRFI.
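
As an illustration of how such a SRFI might look, here is a sketch of
a record-like abstraction built on three hypothetical TYPE
primitives -- MAKE-TYPE, BOX, and UNBOX. None of these names is
standard; their signatures are assumptions for the sake of the
example:

;; (make-type)        => a fresh type value, unique under EQV?
;; (box type val)     => VAL sealed under TYPE
;; (unbox type boxed) => the sealed value; an error unless BOXED
;;                       was boxed under exactly this TYPE

(define point-type (make-type))

(define (make-point x y) (box point-type (vector x y)))
(define (point-x p) (vector-ref (unbox point-type p) 0))
(define (point-y p) (vector-ref (unbox point-type p) 1))

;; Only code holding POINT-TYPE can open a point, so the vector
;; representation stays hidden -- which is what records provide.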


** There should be first class environments and locatives

Programs should be able to capture, at run-time, an
associative data structure which encapsulates all of the
variables within the lexical scope of the point of capture.
It should be possible to modify the binding of any variable
and to add or remove bindings in the innermost frame of
a first-class environment. (Of course, EVAL should accept
these environments as a parameter.)

Of course, it must be a statically decidable property
of each non-top-level lexical scope whether or not
environments representing that scope are ever captured,
whether captured environments escape, and whether and how
non-escaping environments might be modified.

Programs should also be able to construct run-time binding
contours which "share variables" in ways which are not
statically apparent in any of the source code -- hence,
locatives.

These features are straightforward completion of the idea of
having environments present in the semantics of the
language. They fit naturally into all of the classic
techniques for implementing interpreters.

These features admit a straightforward implementation of
many different kinds of module system.

These features provide Scheme with the (efficient, natural)
basis of a dynamic, prototype-based object system such
as is found in Python, Lua, Javascript, Self, and so forth.
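
A sketch of that last point, using the proposal's (non-standard)
THE-ENVIRONMENT, ENVIRONMENT-REF, ENVIRONMENT-SET! and
ENVIRONMENT-DEFINE! primitives -- the names are assumptions:

(define (make-counter)
  (let ((count 0))
    (define (next!) (set! count (+ count 1)) count)
    (the-environment)))          ; capture COUNT and NEXT! as an "object"

(define c (make-counter))
((environment-ref c 'next!))     ; => 1
(environment-ref c 'count)       ; => 1
(environment-define! c 'reset!   ; add a "method" after the fact
  (lambda () (environment-set! c 'count 0)))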

* FEXPRs (Normal Order Functions) and First Class Closures

LAMBDA defines applicative-order procedures.

FLAMBDA should define normal-order procedures. The
evaluation rule for applications of an FLAMBDA procedure
is explained below.

A first-class CLOSURE value encapsulates a first-class
environment, a source form, a parameter specification,
and a designation as either an FLAMBDA or a LAMBDA
procedure. (CLOSURE values are a subset of PROCEDURE
values).

When an FLAMBDA is used in a procedure call:

(<exp0> <exp1> <exp2> ....)

where <exp0> evaluates to an FLAMBDA procedure

The FLAMBDA body is evaluated, with parameters bound,
as if it were an ordinary lambda that had been invoked:

(<exp0> (make-closure (the-environment) `(,lambda () ,'<exp1>))
        (make-closure (the-environment) `(,lambda () ,'<exp2>))
        ....)


Thus, normal-order evaluation is supported -- but it is not
"all" that FLAMBDA can do.

Macros should be removed from the language, and promoted via
SRFIs.

Module systems should be removed from the language, and ....

Record systems should be .....

You get the idea.


Thanks,
-t

Kjetil S. Matheussen

May 26, 2007, 2:16:48 AM

On Fri, 25 May 2007, Tom Lord wrote:

> R6 should be completely different from the current draft,
> in my opinion.
>
> The addition of a few, parsimoniously chosen features
> eliminates the need for almost everything that is new
> in the R6 draft. Nearly *ALL* of the new hacks could
> be done as SRFIs, if only R6 would add these few OPTIONAL
> features:
>
>
> * Reader Extensibility
>
> The Scheme reader should have something *similar* to
> Common Lisp's reader tables and reader hooks. It should
> be possible, for example, to program the reader to
> parse Lua code or Python code. It should be
> possible to program the reader to parse XML or (lax) HTML.
>

Maybe. And what about access to line and column numbers?
Those are what I miss with the current reader. (But perhaps line and
column numbers are a triviality compared to what the above would imply?)

> * First Class Reflection of Types and Environments

Sounds like a very good idea to me, but I'm no expert.

> * FEXPRs (Normal Order Functions) and First Class Closures
>
> LAMBDA defines applicative-order procedures.
>
> FLAMBDA should define normal-order procedures. The
> evaluation rule for applications of an FLAMBDA procedure
> is explained below.
>

Yes, I support this very much. It would make everything very natural and
beautiful. :-) I was thinking about something similar a while ago, to make
it possible to (apply) macros, but wasn't able to fit it into
current Scheme in a convenient way, so I ended up with an ugly hack
instead (see the r7rs post).

Graham

May 28, 2007, 2:18:58 PM
On May 25, 2:49 pm, Tom Lord <l...@emf.net> wrote:
> The addition of a few, parsimoniously chosen features
> eliminates the need for almost everything that is new
> in the R6 draft.

I find this to be a very attractive idea.

As a relatively new Schemer, and a user of primarily one Scheme system
-- Chicken, whose author is not exactly a fan of R6RS -- I feel that
R6 is the first push toward the true balkanization of the Scheme
community. That's a sad thing. Upon reading your proposal, I could see
how your features could (probably) be implemented in Chicken -- though
perhaps not efficiently -- meaning that this excellent Scheme could
remain a cooperative player in the future Scheme community. And that
is a good thing.

Perhaps the precise set of core features required for R6 is not
exactly as you have specified; I hope that you can provide additional
rationale for your feature set. It's a fundamentally good idea to find
out what that set *is* and distill R6 into *that*, plus a set of
well-written SRFIs.

> Nearly *ALL* of the new hacks could
> be done as SRFIs, if only R6 would add these few OPTIONAL
> features:

When you say "nearly all", are there R6RS features that you think
might not be implementable using your features?

> The Scheme reader should have something *similar* to
> Common Lisp's reader tables and reader hooks. It should
> be possible, for example, to program the reader to
> parse Lua code or Python code. It should be
> possible to program the reader to parse XML or (lax) HTML.

Absolutely. Surely this is low-hanging fruit.

> * First Class Reflection of Types and Environments
> ** There should be a new atomic type, TYPE?
> There should be one unique (under EQV?) type value
> for each value type recognized by the implementation.
> Each value type should have exactly one TYPE. Programs
> should be able to create new type values. Programs
> should be able to box arbitrary values under a given
> TYPE. Access (reading and writing contents of) a
> value should require access to the corresponding TYPE
> value.

How would you enforce the "should require access" clause, and what
exactly does it mean? Are you calling for generic functions? I
wouldn't mind hearing a bit more rationale, how this forms a basis for
the R6 feature set.

> "Records" should not be in R6. Given the simpler proposal

> of TYPE objects, records can be implemented in a SRFIO [sic].

Yes, given extensible types this makes sense.

> ** There should be first class environments and locatives
> Programs should be able to capture, at run-time, an
> associative data structure which encapsulates all of the
> variables within the lexical scope of the point of capture.
> It should be possible to modify the binding of any variable
> and to add or remove bindings in the innermost frame of
> a first-class environment. (Of course, EVAL should accept
> these environments as a parameter.)

Sort of "opening closures", like the Siskind/Pearlmutter proposal for
a (map-closure) construct? (http://www.bcl.hamilton.ie/~qobi/map-
closure/)

> Of course, it must be a statically decidable property
> of each non-top-level lexical scope whether or not
> environments representing that scope are ever captured,
> whether captured environments escape, and whether and how
> non-escaping environments might be modified.

OK, but this sounds like something I'd want to disable in
performance-sensitive code. The Siskind paper suggests you could have both open
closures and optimized, compiled code, but that sounds like a research
project to me. :-)

I'm not a language expert, but your static analysis sounds like
lambda-lifting. Is there more to it than that? ("It" being the analysis, not
necessarily the code transformation.)

> Programs should also be able to construct run-time binding
> contours which "share variables" in ways which are not
> statically apparent in any of the source code -- hence,
> locatives.

I've never used a "run-time binding contour" before, though it sounds
exciting. :-) Do you mean something like parameters, or dynamic
scoping? Is this a "nice to have" feature, or (again) can you refer
back to some element of R6 you're proposing to replace?

> These features are straightforward completion of the idea of
> having environments present in the semantics of the
> language. They fit naturally into all of the classic
> techniques for implementing interpreters.

Yet I would hope that they are not suboptimal for compiled Scheme.

> These features provide Scheme with the (efficient, natural)
> basis of a dynamic, prototype-based object system such
> as is found in Python, Lua, Javascript, Self, and so forth.

Ibid.

> * FEXPRs (Normal Order Functions) and First Class Closures
> LAMBDA defines applicative-order procedures.
> FLAMBDA should define normal-order procedures. The
> evaluation rule for applications of an FLAMBDA procedure
> is explained below.

> ...


> Thus, normal-order evaluation is supported -- but it is not
> "all" that FLAMBDA can do.

I sense that I've missed an important nuance (or a more important
history lesson). This looks a lot like runtime macro-expansion. Is
there something fundamentally different than (defmacro ...) going on
here?

> Macros should be removed from the language, and promoted via
> SRFIs.
> Module systems should be removed from the language, and ....
> Record systems should be .....
> You get the idea.

Yes, and if your reasoning is sound, and your proposed features can be
implemented efficiently, then I think this a *much* better direction
to take Scheme than to expand and enterprisify the core language.
Thank you for posting this very interesting proposal.

Graham

--
Graham Fawcett
Application Developer/Consultant
Centre for Teaching and Learning
University of Windsor
Windsor, Ontario, CANADA N9B 3P4
faw...@uwindsor.ca

Ivan Raikov

May 28, 2007, 9:23:49 PM

I like most of this proposal, but I don't understand the rationale
for first-class types. Wouldn't that make it much harder for a Scheme
compiler to generate efficient code? What are some situations where
it would make sense to have first-class types as part of the core
language?

-Ivan

Ray Blaak

May 29, 2007, 12:56:01 AM
Tom Lord <lo...@emf.net> writes:
> "Records" should not be in R6. Given the simpler proposal
> of TYPE objects, records can be implemented in a SRFI.

Bah! Records are so basic it is astounding that a current language like
Scheme does not have them predefined.

Sure, they can be derivable from a more primitive type construction, or even
macros, but please let's just standardize on them so they can be relied on
without thinking about it.

Sure, put them into an SRFI, but just be sure to make that SRFI mandatory.

Now there's a thought: what other SRFIs should be mandatory?

--
Cheers,                        The Rhythm is around me,
                               The Rhythm has control.
Ray Blaak                      The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net     The Rhythm has my soul.

dan...@gmail.com

May 29, 2007, 4:13:05 AM
> Sure, put them into an SRFI, but just be sure to make that SRFI mandatory.
>
> Now there's a thought: what other SRFI's should be mandatory?

I find this a much better approach. My own R5RS code has no
portability problems between Schemes that support the following:

SRFI-0 cond-expand (just to identify the implementation)
SRFI-55 require-extension (even with just the (srfi n) convention)

These two are really the most important. Then,

SRFI-18 (where it makes sense)
SRFI-6 (string ports)
SRFI-23 (error)
SRFI-39 (parameters)
syntax-case (whatever the SRFI is; I think there are a bunch of them
now) -- not because I have the least respect for this bloated,
verbose, un-Schemely joke of a macro system, but because it's already
in wide use. I'm open to discussion on this one, though.

With this foundation, you can include the reference implementations
from other SRFIs whenever they're not in by default. That includes
SRFI-9, Ray, though I agree that it should be in by default ("or else
complain to your implementor").
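
A sketch of such a prelude (COND-EXPAND and REQUIRE-EXTENSION are the
real SRFI-0 and SRFI-55 forms; the particular branches and the
fallback file name are only illustrative):

(require-extension (srfi 6))    ; string ports
(require-extension (srfi 23))   ; error
(require-extension (srfi 39))   ; parameters

(cond-expand
  (chicken (use srfi-1))                    ; Chicken's native loading
  (guile   (use-modules (srfi srfi-1)))     ; Guile's module system
  (else    (load "srfi-1-reference.scm")))  ; fall back to reference code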

The most problematic Schemes try to argue about SRFI-0 and SRFI-55,
with (in my view) unrealistic arguments. The main idea of the
opponents of these SRFIs seems to be that rather than standardizing
something simple, it's better to standardize something perfect (but of
course, they have different ideas of what's perfect).

Besides these SRFIs, I'd find the following very useful:

* a socket SRFI, for Schemes that support networking

* non-blocking streams that interact correctly with SRFI-18 threads

* a binary I/O SRFI (that's binary as opposed to text, not
bidirectional!)

* a SRFI about creating custom text streams (by specifying
char-ready?, read-char and write-char), as well as custom binary streams

And the following features:

* define-macro WITH macro-expand, to get back the power of LISP macros
that was unwisely suppressed in R5RS (no, syntax-case does not offer
macro-expand, it offers just define-macro). If macro-expand doesn't
work with syntax-rules

* reader macros (again, let's please get back to the LISP camp)

* include, with defined meaning for relative paths from a second
nested include (load needs the same clarification)

Additionally,

* Unicode support needs to be clarified. I'm not sure how.

And finally, I find the following SRFIs useful (but they can
always be gotten via the reference implementation):

SRFI-8 receive
SRFI-9 records
SRFI-13 strings
SRFI-14 char sets
SRFI-16 case-lambda
SRFI-19 time
SRFI-26 cut
SRFI-31 rec
SRFI-42 eager comprehensions
SRFI-43 vectors
SRFI-60 bit twiddling
SRFI-69 hash tables
SRFI-63 arrays

These should be in, unless there's an argument about how exactly to
implement them in a particular implementation (as was the case with
SRFI-69 in SISC for a while).

I have written a wide range of Scheme code, from web applications to a
Scheme-to-Javascript translator to file systems. My favorite
implementations have been Chicken, SISC, Guile and OCS. Never have I
felt that R5RS + SRFIs is inadequate. I have had lots of trouble
porting my code to PLT and Scheme 48 (and implicitly SCSH), which seem
to have large "right thing" communities that don't particularly care
about cross-Scheme portability (that means portability across
*present* Schemes, not their ideal of what Scheme is "supposed" to be
or will be like in 10 years).

This proposal is a very conservative evolution from R5RS that would
preserve backward compatibility. I think evolution should be gradual
and in small steps. R5RS already has 99% of what we need and a
wonderful ecosystem of implementations. Let's not blow it with R6RS
(OTOH, useful parts of R6RS should be cherry-picked as needed).

While I like many parts of Tom Lord's proposals, I think that the
foundations need to be strengthened first, before talking about
environments and first-class types. One note: whenever you add a new
core feature, think whether all existing Schemes can support it, and
whether you really want to leave their users in the dark. This should
temper enthusiasm for unrestrained innovation.

Cheers,
Dan Muresan

dan...@gmail.com

May 29, 2007, 4:21:16 AM
> * define-macro WITH macro-expand, to get back the power of LISP macros
> that was unwisely suppressed in R5RS (no, syntax-case does not offer
> macro-expand, it offers just define-macro). If macro-expand doesn't
> work with syntax-rules

I meant to say that if macro-expand doesn't work with syntax-rules:

1) at least make it work with define-macro. Yes, this would mean
having two overlapping macro systems and an incomplete macro-expand --
so what?

2) that's yet another downside of syntax-rules, NOT an argument
against macro-expand, for all hygiene fanboys out there :)
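
To make point 1 concrete, here is a sketch of the DEFINE-MACRO /
MACRO-EXPAND pairing (both are non-standard, but exist in roughly this
form in several Schemes; SWAP! and the expansion shown are
illustrative):

(define-macro (swap! a b)
  (let ((tmp (gensym)))                ; gensym is likewise non-standard
    `(let ((,tmp ,a)) (set! ,a ,b) (set! ,b ,tmp))))

(macro-expand '(swap! x y))
;; => (let ((g42 x)) (set! x y) (set! y g42))   ; gensym'd name varies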

-- Dan

tj

May 29, 2007, 5:26:00 AM
> When an FLAMBDA is used in a procedure call:
>
> (<exp0> <exp1> <exp2> ....)
>
> where <exp0> evaluates to an FLAMBDA procedure
>
> The FLAMBDA body is evaluated, with parameters bound,
> as if it were an ordinary lambda that had been invoked:
>
> (<exp0> (make-closure (the-environment) `(,lambda () ,'<exp1>))
>         (make-closure (the-environment) `(,lambda () ,'<exp2>))
>         ....)
>
> Thus, normal-order evaluation is supported -- but it is not
> "all" that FLAMBDA can do.
>
> Macros should be removed from the language, and promoted via
> SRFIs.

We can't use literals in your FLAMBDA system, e.g. the ELSE in a COND
form:

(cond ((acceptable? R6RS) 'ok)
      (else 'fubarred))

I like your environment and type proposals a lot, though.

TJ

Jens Axel Søgaard

May 29, 2007, 6:33:41 AM
Tom Lord wrote:
> R6 should be completely different from the current draft,
> in my opinion.
>
> The addition of a few, parsimoniously chosen features
> eliminates the need for almost everything that is new
> in the R6 draft. Nearly *ALL* of the new hacks could
> be done as SRFIs, if only R6 would add these few OPTIONAL
> features:
>
>
> * Reader Extensibility
>
> The Scheme reader should have something *similar* to
> Common Lisp's reader tables and reader hooks. It should
> be possible, for example, to program the reader to
> parse Lua code or Python code. It should be
> possible to program the reader to parse XML or (lax) HTML.

Do you have a specific proposal in mind?


> * First Class Reflection of Types and Environments
>
> ** There should be a new atomic type, TYPE?
>
> There should be one unique (under EQV?) type value
> for each value type recognized by the implementation.
> Each value type should have exactly one TYPE. Programs
> should be able to create new type values. Programs
> should be able to box arbitrary values under a given
> TYPE. Access (reading and writing contents of) a
> value should require access to the corresponding TYPE
> value.
>
> "Records" should not be in R6. Given the simpler proposal
> of TYPE objects, records can be implemented in a SRFI.

Are you serious?

The resources in the Scheme community are few. We need to
be able to write libraries that work on more than one
implementation. There is a lot of legacy code around
that uses old record systems - and that's okay!

The current record proposal is rather involved, but it
makes it possible to implement the old systems in R6RS,
so that we can run old code.

> ** There should be first class environments and locatives
>
> Programs should be able to capture, at run-time, an
> associative data structure which encapsulates all of the
> variables within the lexical scope of the point of capture.
> It should be possible to modify the binding of any variable
> and to add or remove bindings in the innermost frame of
> a first-class environment. (Of course, EVAL should accept
> these environments as a parameter.)
>
> Of course, it must be a statically decidable property
> of each non-top-level lexical scope whether or not
> environments representing that scope are ever captured,
> whether captured environments escape, and whether and how
> non-escaping environments might be modified.

How will you make it statically decidable?

> * FEXPRs (Normal Order Functions) and First Class Closures
>
> LAMBDA defines applicative-order procedures.
>
> FLAMBDA should define normal-order procedures. The
> evaluation rule for applications of an FLAMBDA procedure
> is explained below.
>
> A first-class CLOSURE value encapsulates a first-class
> environment, a source form, a parameter specification,
> and a designation as either an FLAMBDA or a LAMBDA
> procedure. (CLOSURE values are a subset of PROCEDURE
> values).
>
> When an FLAMBDA is used in a procedure call:
>
> (<exp0> <exp1> <exp2> ....)
>
> where <exp0> evaluates to an FLAMBDA procedure
>
> The FLAMBDA body is evaluated, with parameters bound,
> as if it were an ordinary lambda that had been invoked:
>
> (<exp0> (make-closure (the-environment) `(,lambda () ,'<exp1>))
>         (make-closure (the-environment) `(,lambda () ,'<exp2>))
>         ....)

This is too dynamic for my taste. It has always been
characteristic of Scheme that it gave compilers a fair
chance of doing static analysis. In a sense it is the
most static of the dynamic languages. (The same argument
was made at the R6RS list).

> Thus, normal-order evaluation is supported -- but it is not
> "all" that FLAMBDA can do.
>
> Macros should be removed from the language, and promoted via
> SRFIs.
>
> Module systems should be removed from the language, and ....

Without a standardized macro and module system, we'll never
see cross-implementation libraries.

--
Jens Axel Søgaard

Kjetil Svalastog Matheussen

May 29, 2007, 8:45:43 AM
On Tue, 29 May 2007, Jens Axel Søgaard wrote:

> > () ,'<exp1>))
> > .... )
>
> This is too dynamic for my taste. It has always been
> characteristic of Scheme that it gave compilers a fair
> chance of doing static analysis. In a sense it is the
> most static of the dynamic languages. (The same argument
> was made at the R6RS list).
>

Couldn't compilers that struggle enough still, in theory,
produce code as fast as before?

And why is it important? Raw computation speed is very seldom
important at all. What's more important is to keep the time between when
the code is written and when it's running as low as possible, and this
proposal encourages that, because it makes it possible to delay macro
expansion until the very last moment it's needed.

> > Thus, normal-order evaluation is supported -- but it is not
> > "all" that FLAMBDA can do.
> >
> > Macros should be removed from the language, and promoted via
> > SRFIs.
> >
> > Module systems should be removed from the language, and ....
>
> Without a standardized macro and module system, we'll never

I think the point was that such should be put into SRFIs:
"and promoted via SRFIs".

Anton van Straaten

May 29, 2007, 12:46:21 PM
Kjetil Svalastog Matheussen wrote:
> On Tue, 29 May 2007, Jens Axel Søgaard wrote:
>
>
>>>() ,'<exp1>))
>>> .... )
>>
>>This is too dynamic for my taste. It has always been
>>characteristic of Scheme that it gave compilers a fair
>>chance of doing static analysis. In a sense it is the
>>most static of the dynamic languages. (The same argument
>>was made at the R6RS list).
>>
>
>
> Couldn't compilers that struggle enough still, in theory,
> produce code as fast as before?
>
> And why is it important?

Because reasoning about static behavior is easier than reasoning about
dynamic behavior, for programmers as well as compilers.

> Raw computation speed is very seldom important at all.

The above statement contains a hidden assumption, which is that it
applies to programs implemented in programming languages whose features
are implemented efficiently. If the core features of a programming
language are inefficient, performance tends to become an important issue
quickly and often.

Luckily for us, language implementors soon hear about performance
problems in their languages, and then spend time optimizing them away.
This process has been going on for decades in the Scheme community. The
result is that you end up with language implementations that are
optimized in all sorts of ways, the reasons for which aren't always
obvious to a casual observer, and which may not even affect the programs
which that observer wants to write.

Moving away from FEXPRs is one of those optimizations that has happened
over and over, as described in an earlier message (by Joe Marshall, I
believe).

The reasons for avoiding FEXPRs go beyond performance: there are also
issues like distribution of programs, since FEXPRs can require that
macro-expansion and compilation of source code take place on the end
user's system, which often isn't viable. This is essentially a
worst-case example of the inability to perform static analysis.

Switching to FEXPRs as a core feature would be a huge step backwards for
most modern Scheme implementations. It's pretty safe to say that this
isn't going to happen, no matter what. FEXPRs are worse than useless as
a core feature in terms of which other features can putatively be
implemented. Core features should aid reasoning ability, not undermine it.

Anton

Kjetil S. Matheussen

May 29, 2007, 3:02:02 PM

On Tue, 29 May 2007, Anton van Straaten wrote:

> Kjetil Svalastog Matheussen wrote:
>> On Tue, 29 May 2007, Jens Axel Søgaard wrote:
>>
>>
>> > > () ,'<exp1>))
>> > > .... )
>> >
>> > This is too dynamic for my taste. It has always been
>> > characteristic of Scheme that it gave compilers a fair
>> > chance of doing static analysis. In a sense it is the
>> > most static of the dynamic languages. (The same argument
>> > was made at the R6RS list).
>> >
>>
>>
>> Couldn't compilers that struggle enough still, in theory,
>> produce code as fast as before?
>>
>> And why is it important?
>
> Because reasoning about static behavior is easier than reasoning about
> dynamic behavior, for programmers as well as compilers.
>

Okay. I'm not sure it's a good reason (in this case) though, but maybe.


>> Raw computation speed is very seldom important at all.
>
> The above statement contains a hidden assumption, which is that it applies to
> programs implemented in programming languages whose features are implemented
> efficiently. If the core features of a programmming language are
> inefficient, performance tends to become an important issue quickly and
> often.
>

Yes, but would this change really have that much of an impact? SCM, which
works like this, is a fairly quick Scheme implementation even though it's
just an interpreter. Well, I guess there's no right or wrong answer here.

> Luckily for us, language implementors soon hear about performance problems in
> their languages, and then spend time optimizing them away. This process has
> been going on for decades in the Scheme community. The result is that you
> end up with language implementations that are optimized in all sorts of ways,
> the reasons for which aren't always obvious to a casual observer, and which
> may not even affect the programs which that observer wants to write.
>

Maybe, but fexprs do provide some sexy features. Maybe it could be
worth it.

> Moving away from FEXPRs is one of those optimizations that has happened over
> and over, as described in an earlier message (by Joe Marshall, I believe).
>

Can't remember anything about that. In case you are thinking about the
discussion on the r6rs list, I think Joe Marshall was thinking about
macros as variables, so it was just a misunderstanding.

> The reasons for avoiding FEXPRs go beyond performance: there are also issues
> like distribution of programs, since FEXPRs can require that macro-expansion
> and compilation of source code take place on the end user's system, which
> often isn't viable. This is essentially a worst-case example of the
> inability to perform static analysis.
>

FEXPRs were requested as an optional feature, and SCM's Hobbit compiler
can't do macro-expansion at run-time. Obviously, if you want to distribute
a program in a way where it can't be compiled on the end user's system,
you must avoid macros that require such. In SCM and Guile (which can also
use the Hobbit compiler), you just test your program by running the Hobbit
compiler to see if it still works when the macros are expanded
only at compile time. And that works: some programs fail, others don't.
(Sorry for my bad language.)


> Switching to FEXPRs as a core feature would be a huge step backwards for most
> modern Scheme implementations. It's pretty safe to say that this isn't going
> to happen, no matter what. FEXPRs are worse than useless as a core feature
> in terms of which other features can putatively be implemented. Core
> features should aid reasoning ability, not undermine it.
>

Well, maybe. But it's a nice feature.

wayo.c...@gmail.com

May 29, 2007, 8:08:57 PM
On May 25, 1:49 pm, Tom Lord <l...@emf.net> wrote:

> * FEXPRs (Normal Order Functions) and First Class Closures

Tom, check out the Kernel programming language:

http://web.cs.wpi.edu/~jshutt/kernel.html

Ed

Anton van Straaten

May 31, 2007, 1:38:08 AM
Kjetil S. Matheussen wrote:
> Yes, but would this change really have that much of an impact? SCM,
> which works like this, is a fairly quick Scheme implementation even
> though it's just an interpreter. Well, I guess there's no right or wrong
> answer here.

Right, I don't think every Scheme implementation wants to be constrained
to the choices that SCM has made here.

> Maybe, but fexpr's does provide some sexy features. Maybe it could be
> worth it.

Experimenting with features like this can be great fun. But if anything
should be a SRFI as opposed to a core feature, it's FEXPRs. The only
reason to standardize them beyond that would be if you wanted to use
FEXPRs as a core feature on which to build other important features,
which seems to be what Tom had in mind.

>> Moving away from FEXPRs is one of those optimizations that has
>> happened over and over, as described in an earlier message (by Joe
>> Marshall, I believe).
>>
>
> Can't remember anything about that. In case you are thinking about the
> discussion on the r6rs list,

You're correct, I was thinking of this message:
http://lists.r6rs.org/pipermail/r6rs-discuss/2007-May/002405.html

> I think Joe Marshalls was thinking about
> macros as variables, so it was just a misunderstanding.

I believe that Joe's comments in the beginning of the above message
apply to what Tom was suggesting.

> FEXPRs were requested as an optional feature

Well, Tom's proposal wasn't completely clear. In multiple messages, he
said that FEXPRs (or FLAMBDAs) would be sufficient to implement module
systems, and that module systems, macro systems, etc. should be a SRFI.
The implication is that the SRFI module systems would be implemented
in terms of FEXPRs, or maybe in terms of some other SRFI macro system
(also implemented in terms of FEXPRs?) It's this idea, of using FEXPRs
as a building block for other important features, that I'm saying isn't
consistent with the direction of most of the major Scheme systems. That
situation isn't likely to change.

Anton

Tom Lord

May 31, 2007, 12:41:16 PM
On May 30, 10:38 pm, Anton van Straaten <a...@appsolutions.com> wrote:

> Right, I don't think every Scheme implementation
> wants to be constrained to the choices that SCM
> has made here.


My proposal would not constrain implementations
that way. Fexprs, environments, and fully general
reader extensions should be strictly optional
features: portable code cannot rely on their
presence.

The features can and likely will be present in
some implementations, namely some interpreters
(those that try to give EVAL low latency and
as high throughput as practical).

Therefore, the features will be useful for writing
standard implementations of things like syntax-case
or PLT modules. For example, the reference
implementation in a SRFI could reasonably use these
features. People will be able to play around with
the features in interpreters that support them and
can decide, in their own implementations, to provide
the SRFIs but not the very dynamic features themselves.

An interesting question is whether there can be
implementations that contain both an interpreter
and a compiler such that if code does make "crazy"
use of fexprs the compiler just gives up (and the
interpreter takes over) but if code uses only,
say, syntax-case macros and PLT modules an optimization
pass eliminates all fexprs and first-class environments
from that code.


> > FEXPRs were requested as an optional feature

> Well, Tom's proposal wasn't completely clear. In multiple messages, he
> said that FEXPRs (or FLAMBDAs) would be sufficient to implement module
> systems, and that module systems, macro systems, etc. should be a SRFI.
> The implication is that the SRFI module systems would be implemented
> in terms of FEXPRs, or maybe in terms of some other SRFI macro system
> (also implemented in terms of FEXPRs?) It's this idea, of using FEXPRs
> as a building block for other important features, that I'm saying isn't
> consistent with the direction of most of the major Scheme systems. That
> situation isn't likely to change.


Consider something like a syntax-case SRFI. The "API"
defined in that SRFI would depend on no other SRFIs. The
reference implementation would depend on two optional
features: first-class environments and FEXPRs.

Now, an implementation has at least three choices:

     Provide FEXPRs/     Provide
     environments?       syntax-case?

  1)      no                no
  2)      no                yes
  3)      yes               yes

Note that there is no "yes/no" choice because of the
reference implementation of the SRFI: if you provide
FEXPRs/environments then you get syntax-case "for free".

Most existing implementations of R5 immediately
qualify as conforming in sense (2): no fexprs and
environments but, yes, they have syntax-case.

There is another way to look at the options
implementations will have. Consider implementations
in classes (2) and (3), above -- those that provide
syntax-case. They have a choice: they can
implement syntax-case directly, as a special case
in the compiler or interpreter, or they can use the
reference implementation. For example, a compiler
might not support "fexprs and environments in general"
but might tolerate them in an early "pass" if they
can be more-or-less eliminated from the code by
constant folding and other forms of partial evaluation.

It seems to me that Scheme's history is unduly
influenced by a period of time when it seemed very,
very important to benchmark serious Scheme compilers
against languages like C. So, there
was a lot of sentiment to make sure that the core,
standardized language was 100% compilable (cf. the
resistance to EVAL).

I think that that emphasis on compilation is, more
than anything, what has all but killed Scheme's
real-world successes. It forces every author of a
Scheme program to solve constraints imposed by a
compiler writer, and for very little return:

If you need a low-level language characterized by
tightly compiled code, C and C++ are still the best
choices by far. If you need a high-level language
with tight-as-practical compilation, Java wins over
Scheme and now Haskell is moving up.

And, tragically: if you want to just hack in a high-level
language, innovating freely, not terribly concerned
about performance, and often adding features by *in
effect* building domain specific languages (one of
Scheme's alleged strengths) -- evidence on the ground
is that you're best off using Lua, Ruby, Perl, PHP,
Python, or any of similar ilk. Those languages
are, each and every one of them, filled with horrible warts
compared to Scheme but they have this strength:
millions of programmers can well-enough grok how they
work, and they work flexibly enough, to experiment with
"gross" hacks that get a job done (e.g., ActiveRecord in
Ruby, objects in Lua, etc.) It's as if Schemers look
at what's going on in those languages, correctly decide
"that's the wrong way to do it," but then mysteriously
and sadly conclude "there must be no *right* way to do
it."

Scheme's historic strength is to provide a very wide
range of functionality out of just a few, parsimoniously
chosen features.

I see no parsimony in things like module systems and
macro systems, especially when they are so complex,
so limited, so arbitrarily chosen from among the options,
and so overlapping in what underlying (but unsurfaced)
concepts they rely on (such as syntactic environments).

Fexprs and environments surface the underlying concepts
in a practical way, directly, allowing not only the presently
considered macro and module systems but much more beyond.
Fexprs and environments, in an interpreted setting, put
Scheme (and its advantages) in the game against already-
flexible interpreted languages like Lua, Ruby, et al.

I think Scheme is going to go in the more dynamic direction
on its own. It's just too simple and correct an idea.
In some sense, the only question is whether or not the
revised report series will remain a legitimate source of
authority on Scheme.

-t


Kjetil S. Matheussen

May 31, 2007, 1:35:24 PM

On Thu, 31 May 2007, Tom Lord wrote:

>
> It seems to me that Scheme's history is unduly
> influenced by a period of time when it seemed very,
> very important to benchmark serious Scheme compilers
> against languages like C. So, there
> was a lot of sentiment to make sure that the core,
> standardized language was 100% compilable (cf. the
> resistance to EVAL).
>

It seems so to me as well. Also, I'm very sure the average Scheme program
would gain orders of magnitude in performance if the standard had
required a profiler, rather than limiting the language to make it more
convenient for the compiler writers. (Note that I don't think the standard
should require a profiler, I just think it would be a far more efficient
way to increase the performance of Scheme programs.)

Kjetil S. Matheussen

May 31, 2007, 1:47:50 PM

On Thu, 31 May 2007, Kjetil S. Matheussen wrote:

>
> On Thu, 31 May 2007, Tom Lord wrote:
>
>>
>> It seems to me that Scheme's history is unduly
>> influenced by a period of time when it seemed very,
>> very important to benchmark serious Scheme compilers
>> against languages like C. So, there
>> was a lot of sentiment to make sure that the core,
>> standardized language was 100% compilable (cf. the
>> resistance to EVAL).
>>
>
> It seems so to me as well. Also, I'm very sure the average Scheme program
> would gain orders of magnitude in performance if the standard had required a
> profiler,

On second thought, I might have exaggerated a little bit here... :-)

Joe Marshall

May 31, 2007, 8:08:41 PM
On May 31, 9:41 am, Tom Lord <l...@emf.net> wrote:
> On May 30, 10:38 pm, Anton van Straaten <a...@appsolutions.com> wrote:
>
> > Right, I don't think every Scheme implementation
> > wants to be constrained to the choices that SCM
> > has made here.
>
> My proposal would not constrain implementations
> that way. Fexprs, environments, and fully general
> reader extensions should be strictly optional
> features: portable code cannot rely on their
> presence.

My poor brain can't keep up here. I'm still confused
over what aspects of fexprs you want.

But if you are suggesting purely optional features,
why on earth are you calling this a `counterproposal'?
It doesn't seem to have anything to do with r6rs.

>
> It seems to me that Scheme's history is unduly
> influenced by a period of time when it seemed very,
> very important to benchmark serious Scheme compilers
> against languages like C. So, there
> was a lot of sentiment to make sure that the core,
> standardized language was 100% compilable (cf. the
> resistance to EVAL).

I won't argue how it seems to you, but this is not the case.

The emphasis has never been on performance, but on the
ability to reason about the code. One would *like* to
assign a semantics to the code that is a little more
predictive and useful than just `run it and see what
it does'. Some language features complicate the semantics,
but are generally useful. An example is typed objects
such as integers and strings. Other features may add
more complexity to the semantics than is worth it. Object
systems come to mind here.

Features like FEXPRs and certain implementations of
first-class environments (like the ability to extract
the environment from any arbitrary closure) completely
*obliterate* the semantics. In general, *anything* can
be mutated or changed at any time and there is no
way to reason about the program *except* by running it
and seeing what happens.

The key point is this: if the semantics cannot be
reasoned about, you cannot compile the language.

This should be obvious. The compiler reasons (naively)
about the code.

What is less obvious is that the *programmer* also
must reason about the code. He can usually reason
much more effectively than the compiler can, but he
often reasons much less accurately. The human
programmer often employs a simplified model of the
evaluation process in his reasoning.

This leads to the converse of the point above: if
you cannot compile the language, the semantics cannot
be reasoned about.

It *might* be possible to apply a little human
intuition to the program, but you are asking a lot
from the programmer. There are some really smart
people that work on program language design and on
teaching. The consensus among them is that if the
*compiler* cannot figure out what is meant, then a
*human* has no hope at all.

This is why features like FEXPRs and general first-class
environments have fallen out of favor.

>
> Scheme's historic strength is to provide a very wide
> range of functionality out of just a few, parsimoniously
> chosen features.
>
> I see no parsimony in things like module systems and
> macro systems, especially when they are so complex,
> so limited, so arbitrarily chosen from among the options,
> and so overlapping in what underlying (but unsurfaced)
> concepts they rely on (such as syntactic environments).

I don't know why not. Macros give you syntactic abstraction
like FEXPRs, but since they can be expanded away in a
separate phase, they add little semantic overhead. *That*
is parsimonious.

I don't like syntax-case that much because it greatly
complicates macros without adding a whole lot in the way
of features. You can get *most* of what you want with
syntax-rules.
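
For instance, an UNLESS-style form as a syntax-rules macro (standard
R5RS) can be expanded away entirely before the program runs, so a
compiler ultimately sees only a plain IF:

(define-syntax my-unless
  (syntax-rules ()
    ((_ test body ...)
     (if test #f (begin body ...)))))

(my-unless (pair? x) (display "not a pair"))
;; expands to (if (pair? x) #f (begin (display "not a pair")))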

I have not read the modules proposal.

> Fexprs and environments surface the underlying concepts
> in a practical way, directly, allowing not only the presently
> considered macro and module systems but much more beyond.
> Fexprs and environments, in an interpreted setting, put
> Scheme (and its advantages) in the game against already-
> flexible interpreted languages like Lua, Ruby, et al.

I don't recall there being a competition. What's the prize?


Anton van Straaten

May 31, 2007, 8:29:08 PM
Tom Lord wrote:
> Therefore, the features will be useful for writing
> standard implementations of things like syntax-case
> or PLT modules. For example, the reference
> implementation in a SRFI could reasonably use these
> features. People will be able to play around with
> the features in interpreters that support them and
> can decide, in their own implementations, to provide
> the SRFIs but not the very dynamic features themselves.

That doesn't sound very attractive: you're addressing the inherent
weaknesses of these features by saying that implementors might want to
avoid using them to actually build on. So they don't really achieve
what they're purporting to achieve: to eliminate the need to define, e.g.,
a module system as a core feature of the language.

> An interesting question is whether there can be
> implementations that contain both an interpreter
> and a compiler such that if code does make "crazy"
> use of fexprs the compiler just gives up (and the
> interpreter takes over) but if code uses only,
> say, syntax-case macros and PLT modules an optimization
> pass eliminates all fexprs and first-class environments
> from that code.

The usual way such interesting questions are dealt with is that someone
builds or extends an implementation to explore them. Nothing stops
anyone from doing that today, with almost any Scheme system.

> It seems to me that Scheme's history is unduly
> influenced by a period of time when it seemed very,
> very important to benchmark serious Scheme compilers
> against languages like C. So, there
> was a lot of sentiment to make sure that the core,
> standardized language was 100% compilable (cf. the
> resistance to EVAL).
>
> I think that that emphasis on compilation is, more
> than anything, what has all but killed Scheme's
> real-world successes.

This perspective misses an important characteristic of Scheme, which is
also very relevant to this discussion: that one of the major drivers of
Scheme's development has been research. Some of that research has been
compiler research (and very good compiler research, at that), but
there's been plenty of research in other areas, too.

In that context, behavior such as the resistance to EVAL is not driven
only by a desire for efficient compilation, but also by the desire for
coherent and tractable theories of programs -- the desire to understand
programs better, which enables better language implementations, and
better programs.

It's lots of fun, as a hacker, to just smoosh up all the stages of a
program's evaluation into nearly a single phase, and damn the
consequences. Some languages and implementations have done something
along those lines, and had some success with it (Lisp, Smalltalk, Self,
and the modern descendants of scripting languages). But it's not as
though this is the final word in language design. In fact, those
languages have all also run into the problems with the approach in
various ways, ranging from intractability to performance.

A big part of Scheme's research story has been exploring these kinds of
problems. As I argued in a recent exchange with Pascal Costanza in the
"Vote on R6RS" thread, that research has focused on solving some of
these problems in ways that are more tractable than the simple, highly
dynamic solutions. Often, in this process, some properties are given up
in exchange for others, and it can take more research and time to regain
those properties, if that's considered desirable (it isn't always).

I agree that this process has been a big factor detracting from Scheme's
suitability as a mainstream programming language (whether that's
important, good, or bad is a separate question), but characterizing this
as an "emphasis on compilation" is, I dunno, like characterizing the
space program as having "an emphasis on rockets".

Research into the relationship between source code and running programs
does tend to result in principled techniques to transform the one to the
other, part of which is called "compilation", so yes, there's some
emphasis on that.

> And, tragically: if you want to just hack in a high-level
> language, innovating freely, not terribly concerned
> about performance, and often adding features by *in
> effect* building domain specific languages (one of
> Scheme's alleged strengths) -- evidence on the ground
> is that you're best off using Lua, Ruby, Perl, PHP,
> Python, or any of similar ilk. Those languages
> are, each and every one of them, filled with horrible warts
> compared to Scheme but they have this strength:
> millions of programmers can well-enough grok how they
> work, and they work flexibly enough, to experiment with
> "gross" hacks that get a job done (e.g., ActiveRecord in
> Ruby, objects in Lua, etc.) It's as if Schemers look
> at what's going on in those languages, correctly decide
> "that's the wrong way to do it," but then mysteriously
> and sadly conclude "there must be no *right* way to do
> it."

There are right ways to do it, implemented in Schemes today, backed by
the research I was talking about. But if you're convinced that you
need, say, FEXPRs to do the sort of hacking you want, then few of the
available choices are going to look right to you. I'd suggest that's a
question of perspective, not a problem with Scheme.

BTW, I'm not sure that there's much stopping anyone from taking an
existing Scheme, implementing a nice metacircular interpreter on it (see
EIOD), and getting the sort of functionality you want. It ought to be
possible to do that quite portably. Perhaps integration between the
interpreter and the host would need careful handling, but it should be
doable.
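
To give a flavor of that, here is a toy metacircular core -- not EIOD
itself, just enough to evaluate variables, QUOTE, LAMBDA and
applications (ENV is an assoc list of (name . value) pairs):

(define (m-eval exp env)
  (cond ((symbol? exp) (cdr (assq exp env)))
        ((not (pair? exp)) exp)                  ; self-evaluating
        ((eq? (car exp) 'quote) (cadr exp))
        ((eq? (car exp) 'lambda)                 ; close over ENV
         (lambda args
           (m-eval (caddr exp)
                   (append (map cons (cadr exp) args) env))))
        (else (apply (m-eval (car exp) env)
                     (map (lambda (e) (m-eval e env)) (cdr exp))))))

(m-eval '((lambda (x) x) 'hello) '())   ; => hello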

> Scheme's historic strength is to provide a very wide
> range of functionality out of just a few, parsimoniously
> chosen features.

Sure, but it doesn't follow that any such feature is automatically a
good choice for Scheme. There are reasons that the features you mention
have traditionally been treated with caution, at best.

> I see no parsimony in things like module systems and
> macro systems, especially when they are so complex,
> so limited, so arbitrarily chosen from among the options,
> and so overlapping in what underlying (but unsurfaced)
> concepts they rely on (such as syntactic environments).

They're building blocks of a different kind from, say, lambda or
first-class continuations, and there are reasons for the differences.
I'll respond on the parsimony issue separately, in the next few days.
(This also overlaps with Alex Shinn's summary of controversial R6RS
features.)

> Fexprs and environments surface the underlying concepts
> in a practical way, directly, allowing not only the presently
> considered macro and module systems but much more beyond.
> Fexprs and environments, in an interpreted setting, put
> Scheme (and its advantages) in the game against already-
> flexible interpreted languages like Lua, Ruby, et al.

Scheme is already in that game, in better shape than the competitors you
mention *from a technical perspective*. Where Scheme has a challenge is
in catering to the "millions of programmers [that] can well-enough grok
how they work, and they work flexibly enough, to experiment with "gross"
hacks that get a job done". But I think you underestimate the strength
of the connection between the languages that you see as being "filled
with horrible warts compared to Scheme", and their support for "gross
hacks".

Are you really sure that you want Scheme to be a language designed to
support and encourage gross hacks? When a new user asks about using
EVAL, do you think the answer should be "sure, go ahead, knock yourself
out," or isn't there some value in pointing out that there are more
elegant and precise ways to solve whatever problem is being asked about,
and in explaining those approaches, and in being able to point to books
and papers that describe them?

> I think Scheme is going to go in the more dynamic direction
> on its own. It's just too simple and correct an idea.

A number of Scheme implementations have put effort into supporting
dynamic features, and in that sense are becoming more dynamic, but
they're taking different approaches to the ones you favor. This
discussion involves more than a single dimension of more/less dynamic.

> In some sense, the only question is whether or not the
> revised report series will remain a legitimate source of
> authority on Scheme.

Any revised report which is accepted and followed by enough implementors
to be useful will, of course, continue to be relevant.

The role of the revised reports as a "legitimate source of authority" on
Scheme is often misunderstood. One only has to compare R5RS to what
Scheme implementations actually do, to get a hint of the issues here.

As Will pointed out recently, R6RS adds a new paradigm for portable code
(or at least can usefully be seen as such), but it's no more the
comprehensive arbiter of what Scheme implementations can and will do
than R5RS was, except in certain well-defined and encapsulated areas.

Anton

Ray Dillinger

May 31, 2007, 11:33:24 PM
Anton van Straaten wrote:

> This perspective misses an important characteristic of Scheme, which is
> also very relevant to this discussion: that one of the major drivers of
> Scheme's development has been research. Some of that research has been
> compiler research (and very good compiler research, at that), but
> there's been plenty of research in other areas, too.
>
> In that context, behavior such as the resistance to EVAL is not driven
> only by a desire for efficient compilation, but also by the desire for
> coherent and tractable theories of programs -- the desire to understand
> programs better, which enables better language implementations, and
> better programs.

I would like to observe that this is not the desire to understand
programs better. This is the desire to create a language for
expressing *only* those programs which are easier to understand.

There is a difference.

We will not have fully climbed this curve until we have figured out
how to reason about and analyze even the intractable programs that
can be created by radical abuse of dynamic features such as eval.
A language without those features will do only a very limited amount
in bringing about such understanding.

Bear

Tom Lord

Jun 1, 2007, 12:33:58 AM
Seconded.

-t

Joe Marshall

Jun 1, 2007, 1:39:12 AM
On May 31, 8:33 pm, Ray Dillinger <b...@sonic.net> wrote:
>
> I would like to observe that this is not the desire to understand
> programs better. This is the desire to create a language for
> expressing *only* those programs which are easier to understand.
>
> There is a difference.
>
> We will not have fully climbed this curve until we have figured out
> how to reason about and analyze even the intractable programs that
> can be created by radical abuse of dynamic features such as eval.
> A language without those features will do only a very limited amount
> in bringing about such understanding.

Yes, but a language *with* those features will at least have the same
problems. This is a simple consequence of Godel's incompleteness
theorem, and there's no getting around it.

Anton van Straaten

Jun 1, 2007, 2:08:54 AM
Ray Dillinger wrote:
> Anton van Straaten wrote:
>
>> This perspective misses an important characteristic of Scheme, which
>> is also very relevant to this discussion: that one of the major
>> drivers of Scheme's development has been research. Some of that
>> research has been compiler research (and very good compiler research,
>> at that), but there's been plenty of research in other areas, too.
>>
>> In that context, behavior such as the resistance to EVAL is not driven
>> only by a desire for efficient compilation, but also by the desire for
>> coherent and tractable theories of programs -- the desire to
>> understand programs better, which enables better language
>> implementations, and better programs.
>
>
> I would like to observe that this is not the desire to understand
> programs better. This is the desire to create a language for
> expressing *only* those programs which are easier to understand.
>
> There is a difference.

I agree that there's a difference; I disagree about the desire.
The desire is to do what any good programmer does: decompose a problem
into its component parts, separate what can be separated, and figure out
how to achieve the goal in a manageable way. IOW, to apply the same
principles to metaprogramming (in this case the design of the language)
as you do to ordinary programming.

> We will not have fully climbed this curve until we have figured out
> how to reason about and analyze even the intractable programs that
> can be created by radical abuse of dynamic features such as eval.

There's not really much mystery about what's going on with "radical
abuse of dynamic features such as eval".

> A language without those features will do only a very limited amount
> in bringing about such understanding.

It would be good to look at some examples. For example, the "static"
library system in R6RS is being used to provide much better support for
eval than R5RS had. So what are the limitations you're concerned about,
the things that make you think that the language is designed for
"expressing *only* those programs which are easier to understand"?

Anton

samth

Jun 1, 2007, 9:24:55 AM
On May 31, 11:33 pm, Ray Dillinger <b...@sonic.net> wrote:
> Anton van Straaten wrote:
> > This perspective misses an important characteristic of Scheme, which is
> > also very relevant to this discussion: that one of the major drivers of
> > Scheme's development has been research. Some of that research has been
> > compiler research (and very good compiler research, at that), but
> > there's been plenty of research in other areas, too.
>
> > In that context, behavior such as the resistance to EVAL is not driven
> > only by a desire for efficient compilation, but also by the desire for
> > coherent and tractable theories of programs -- the desire to understand
> > programs better, which enables better language implementations, and
> > better programs.
>
> I would like to observe that this is not the desire to understand
> programs better. This is the desire to create a language for
> expressing *only* those programs which are easier to understand.
>
> There is a difference.

This difference is less than it seems. Unless a feature is carefully
controlled, using it in one part of a program can invalidate reasoning
about other parts of the program. call/cc and set! are excellent
examples of this - using them in one place can have non-local effects,
and that's the whole point. But there are costs to reasoning just from
having a feature in the language, unless you only ever reason about
complete programs (which is not the case for anyone).

> We will not have fully climbed this curve until we have figured out
> how to reason about and analyze even the intractable programs that
> can be created by radical abuse of dynamic features such as eval.
> A language without those features will do only a very limited amount
> in bringing about such understanding.

For many such features, the amount of analysis that's possible is
well-understood. For example, Mitch Wand's paper on FEXPRs. Also, what
analysis can you do on this program:

(begin
  (eval (read))
  x)

Is x bound? No analysis can possibly save you here.

sam th

Pascal Costanza

Jun 1, 2007, 3:28:32 PM
samth wrote:

> For many such features, the amount of analysis that's possible is
> well-understood. For example, Mitch Wand's paper on FEXPRs.

That paper is a red herring. Yes, FEXPRs make static analysis
impossible. But that doesn't say anything about analysis in general.

The Self project has shown that a lot is possible if you do it at runtime.

> Also, what analysis can you do on this program:
>
> (begin
>   (eval (read))
>   x)
>
> Is x bound? No analysis can possibly save you here.

Of course - exactly after executing (eval (read)), but still before
looking up x.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
