
What is dynamic-wind?


Thomas Baruchel

Feb 2, 2002, 12:46:22 PM
Brest, Saturday, 2 February

Probably asked several times, but I had a look in the archives and
read R5RS carefully, and I still can't really understand what
dynamic-wind is or what its main uses are. Is there a good tutorial?
Can you explain it to me?

--
Reply-To: name @ provider
My name is: thomas.baruchel
My email provider is: libertysurf.fr
I was against this system 'til I received 750 identical messages in a few min.

Michael Sperber [Mr. Preprocessor]

Feb 3, 2002, 10:58:20 AM
>>>>> "Thomas" == Thomas Baruchel <baru...@libertysurf.france> writes:

Thomas> Brest, Saturday, 2 February

Thomas> Probably asked several times, but I had a look in the archives and
Thomas> read R5RS carefully, and I still can't really understand what
Thomas> dynamic-wind is or what its main uses are. Is there a good tutorial?
Thomas> Can you explain it to me?

Sometimes a piece of code can only run if certain conditions are met
upon entry, say, a certain file has been opened. Moreover, you might
want to make sure some clean-up code is run once the piece of code is
finished.

This is no problem (just do (begin (prepare) (code) (cleanup))) as
long as there's no CALL/CC involved. As soon as CALL/CC is involved,
it's possible that the code doesn't return regularly but rather calls
some other continuation, or that a jump back into the code occurs
after it's finished. DYNAMIC-WIND makes sure that the preparation
code and the cleanup code get run regardless of how the code is
entered or exited.
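
For instance, something along these lines (a rough sketch; the file name
and procedure names are made up for illustration):

(define (with-open-file name proc)
  (let ((port #f))
    (dynamic-wind
      (lambda () (set! port (open-input-file name)))   ; preparation
      (lambda () (proc port))                          ; the guarded code
      (lambda () (close-input-port port)))))           ; clean-up

;; Even if PROC jumps out through a continuation captured by CALL/CC,
;; the clean-up thunk still runs and the port gets closed:
(call-with-current-continuation
  (lambda (give-up)
    (with-open-file "data.txt"
      (lambda (port)
        (let ((x (read port)))
          (if (eof-object? x)
              (give-up 'empty-file)   ; early exit: the AFTER thunk runs anyway
              x))))))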

--
Cheers =8-} Mike
Peace, international understanding, and all that blah blah

Matthias Blume

Feb 3, 2002, 11:27:13 PM

Thomas, please be aware that the above explanation is the *intent* behind
dynamic-wind, but not what it really does. For example (and discussed here
to death in the past), dynamic-wind will *not* make sure that the preparation
or cleanup code gets to run *regardless* of how the code is entered/exited.
Instead, it will only deal with entries or exits due to continuations captured
by the built-in call/wc (which, misleadingly, is still being called call/cc :-).
Entries or exits due to explicitly created continuations (i.e., via lambda)
will *not* cause them to run.
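
For illustration (names invented): consider a hand-rolled, CPS-style
continuation -- just a lambda that gets passed around:

(define (with-file-cps name body k)
  (let ((port (open-input-file name)))
    (dynamic-wind
      (lambda () #f)                     ; nothing to do on entry
      (lambda () (body port k))          ; body may invoke k directly
      (lambda () (close-input-port port)))))

If body invokes k and k never returns (say it re-enters a scheduler or an
interpreter loop written in CPS), the after thunk never runs even though the
computation has, morally, left the file-reading context; dynamic-wind simply
never sees the transfer of control.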

In short, dynamic-wind is a hack that can be used to tie state management to
dynamic (call-)context. This is just like C++ where memory allocation state
is being tied to dynamic context via constructors and destructors. And
the same problems arise: Just as you can easily have memory leaks or dangling
pointers in C++, dynamic-wind is not the appropriate tool for defending against
being in the wrong state at the wrong time in Scheme.

Visit your favorite netnews archive for more on this.

Regards,
Matthias

felix

Feb 4, 2002, 11:32:31 AM
>(and discussed here to death in the past)

So why start it again?


felix


David Rush

Feb 5, 2002, 5:35:25 AM
"felix" <felixu...@freenet.de> writes:
> >(and discussed here to death in the past)
>
> So why start it again?

because OP asked...

NOT.

Perhaps the dynamic-wind debate should go into the FAQ...

david rush
--
Any clod can have the facts - having opinions is an art
-- Infinite Void (on comp.lang.scheme)

Matthias Blume

Feb 5, 2002, 10:07:54 AM
felix wrote:

>>(and discussed here to death in the past)
>>
>
> So why start it again?


Someone asked.

BTW, I didn't ("start it again"). I just gave a (very) brief
summary and a pointer to past discussions.

--
-Matthias

Michael Sperber [Mr. Preprocessor]

Feb 5, 2002, 11:28:36 AM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> In short, dynamic-wind is a hack that can be used to tie
Matthias> state management to dynamic (call-)context. This is just
Matthias> like C++ where memory allocation state is being tied to
Matthias> dynamic context via constructors and destructors.

Unlike C++, nothing is tied to DYNAMIC-WIND in the language. The
programmer can just as well leave it alone.

Matthias> And the same problems arise:

No, as the Scheme programmer need not worry about memory allocation
state the same way as in C++.

Matthias> Just as you can easily have memory leaks or dangling
Matthias> pointers in C++, dynamic-wind is not the appropriate tool
Matthias> for defending against being in the wrong state at the wrong
Matthias> time in Scheme.

DYNAMIC-WIND is in fact the appropriate tool for precisely this in
many situations. It's an extremely low-level tool which needs to be
used with great care. Its shortcomings (which you've pointed out) are
pretty much undisputed.

(I'm sorry to dredge up these statements once again. The original
poster simply asked what DYNAMIC-WIND does. He sure didn't ask for
personal opinions on its shortcomings.)

Matthias Blume

Feb 5, 2002, 5:44:09 PM
Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>"Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>>>>>>
>
> Matthias> In short, dynamic-wind is a hack that can be used to tie
> Matthias> state management to dynamic (call-)context. This is just
> Matthias> like C++ where memory allocation state is being tied to
> Matthias> dynamic context via constructors and destructors.
>
> Unlike C++, nothing is tied to DYNAMIC-WIND in the language. The
> programmer can just as well leave it alone.

I said: "can be used to tie ...".

> Matthias> And the same problems arise:
>
> No, as the Scheme programmer need not worry about memory allocation
> state the same way as in C++.

s/same/analogous/

> Matthias> Just as you can easily have memory leaks or dangling
> Matthias> pointers in C++, dynamic-wind is not the appropriate tool
> Matthias> for defending against being in the wrong state at the wrong
> Matthias> time in Scheme.
>
> DYNAMIC-WIND is in fact the appropriate tool for precisely this in
> many situations. It's an extremely low-level tool which needs to be
> used with great care.

No, the problem is that it is not dynamic-wind that needs to be used with
great care, but the rest of the program that needs to be written with
great care -- keeping in mind that dynamic-wind is lurking somewhere.

> Its shortcomings (which you've pointed out) are pretty much undisputed.

To me, it has only shortcomings since:

- It does not do what it is supposed to do.
- What it actually does is not really useful in practice. Or, to put
it differently, its presence in a program severely constrains one's
programming style in the remainder of the program.
- It destroys a very useful property of call/cc, taking away the
*only* useful application for it (namely, implementing context switches).

> (I'm sorry to dredge up these statements once again. The original
> poster simply asked what DYNAMIC-WIND does. He sure didn't ask for
> personal opinions on its shortcomings.)

Right. The original poster asked what dynamic-wind does, so he deserves
an honest answer and not the glossy advertising.

--
-Matthias

Michael Sperber [Mr. Preprocessor]

Feb 6, 2002, 4:15:20 AM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> No, the problem is that it is not dynamic-wind that needs to
Matthias> be used with great care, but the rest of the program that
Matthias> needs to be written with great care -- keeping in mind that
Matthias> dynamic-wind is lurking somewhere.

In my experience, this is difficult, but not nearly as hard as you
make it out to be.

felix

Feb 6, 2002, 5:22:47 AM

Matthias Blume wrote in message <3C5FF54...@shimizu-blume.com>...

>felix wrote:
>
>>>(and discussed here to death in the past)
>>>
>>
>> So why start it again?
>
>
>Someone asked.
>


And his question has been answered (by Sperber).

So what?


felix


Matthias Blume

Feb 6, 2002, 8:15:42 AM

And I thought that the answer was lacking somewhat.

Anyway, if you think what I write is drivel (and I know you do),
you don't have to read it.

Matthias

Matthias Blume

Feb 6, 2002, 10:25:19 AM
Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>"Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>>>>>>
>
> Matthias> No, the problem is that it is not dynamic-wind that needs to
> Matthias> be used with great care, but the rest of the program that
> Matthias> needs to be written with great care -- keeping in mind that
> Matthias> dynamic-wind is lurking somewhere.
>
> In my experience, this is difficult, but not nearly as hard as you
> make it out to be.

In my experience, I have not come across a single instance of a problem where
I would have been comfortable using dynamic-wind. To me, it is just one
of those "features piled on top of features" that we once loathed in the
Scheme community. It is seriously broken in many ways, and it has very few
redeeming qualities (if any).

--
-Matthias

Michael Sperber [Mr. Preprocessor]

Feb 6, 2002, 11:57:36 AM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>>>>>>>
Matthias> No, the problem is that it is not dynamic-wind that needs to
Matthias> be used with great care, but the rest of the program that
Matthias> needs to be written with great care -- keeping in mind that
Matthias> dynamic-wind is lurking somewhere.
>> In my experience, this is difficult, but not nearly as hard as you
>> make it out to be.

Matthias> In my experience, I have not come across a single instance of a problem where
Matthias> I would have been comfortable using dynamic-wind.

The source code of Scheme 48 provides a number of appropriate uses of
DYNAMIC-WIND, in its kernel as well as its libraries. Their number
(about a dozen), compared to the size of the code base, suggests the
caution warranted in using it.

Marc Feeley

Feb 6, 2002, 2:37:39 PM
> In my experience, I have not come across a single instance of a problem where
> I would have been comfortable using dynamic-wind. To me, it is just one
> of those "features piled on top of features" that we once loathed in the
> Scheme community. It is seriously broken in many ways, and it has very few
> redeeming qualities (if any).

I agree. dynamic-wind is an awkward construct with a non-trivial
semantics that isn't even well defined in R5RS (what happens if a
continuation is captured from within an after/before thunk **while**
the continuation is being wound/unwound?). dynamic-wind should not
have been added to Scheme (the same is true of multiple-values but
that's a different subject). I remember voting against dynamic-wind
at the 1992 meeting in Palo Alto. I have never used it in my code but
for R5RS conformance I implemented it in Gambit-C (I've appended below
the implementation of dynamic-wind, and of call/cc which is closely
coupled with dynamic-wind, for the benefit of those who haven't yet
appreciated the implementation complexity in a Scheme system that
supports threads).

Marc

(define (dynamic-wind before thunk after)
  (##continuation-capture ; see my Scheme workshop paper for the definition of continuation-capture
   (lambda (cont thunk)
     (before)
     (let* ((dynwind
             (macro-denv-dynwind
              (macro-thread-denv (macro-current-thread))))
            (new-dynwind
             (macro-make-dynwind
              (##fixnum.+ (macro-dynwind-level dynwind) 1)
              before
              after
              cont))
            (results ; may get bound to a multiple-values object
             (macro-dynamic-bind dynwind new-dynwind thunk)))
       (after)
       results))
   thunk))

(define (call-with-current-continuation receiver)
  (##continuation-capture
   (lambda (cont receiver)
     (receiver
      (lambda (#!optional
               (val1 (macro-absent-obj))
               (val2 (macro-absent-obj))
               #!rest
               others)

        (define (unwind-src src dst continue)
          (##continuation-graft
           (macro-dynwind-cont src)
           (lambda ()
             ((macro-dynwind-after src))
             (let ((new-src
                    (macro-denv-dynwind
                     (macro-continuation-denv (macro-dynwind-cont src)))))
               (cond ((##eq? new-src dst)
                      (continue))
                     ((##fixnum.< (macro-dynwind-level dst)
                                  (macro-dynwind-level new-src))
                      (unwind-src new-src dst continue))
                     (else
                      (unwind-src-wind-dst new-src dst continue)))))))

        (define (wind-dst src dst continue)
          (let* ((new-dst
                  (macro-denv-dynwind
                   (macro-continuation-denv (macro-dynwind-cont dst))))
                 (new-continue
                  (lambda ()
                    (##continuation-graft
                     (macro-dynwind-cont dst)
                     (lambda ()
                       ((macro-dynwind-before dst))
                       (continue))))))
            (if (##eq? src new-dst)
                (new-continue)
                (unwind-src-wind-dst
                 src
                 new-dst
                 (lambda () (new-continue))))))

        (define (unwind-src-wind-dst src dst continue)
          (##continuation-graft
           (macro-dynwind-cont src)
           (lambda ()
             ((macro-dynwind-after src))
             (let* ((new-src
                     (macro-denv-dynwind
                      (macro-continuation-denv (macro-dynwind-cont src))))
                    (new-dst
                     (macro-denv-dynwind
                      (macro-continuation-denv (macro-dynwind-cont dst))))
                    (new-continue
                     (lambda ()
                       (##continuation-graft
                        (macro-dynwind-cont dst)
                        (lambda ()
                          ((macro-dynwind-before dst))
                          (continue))))))
               (if (##eq? new-src new-dst)
                   (new-continue)
                   (unwind-src-wind-dst
                    new-src
                    new-dst
                    (lambda () (new-continue))))))))

        (define (continuation-return-with-winding cont results)
          (let* ((src
                  (macro-denv-dynwind
                   (macro-thread-denv (macro-current-thread))))
                 (dst
                  (macro-denv-dynwind
                   (macro-continuation-denv cont))))
            (if (##eq? src dst) ; check common case (same dynamic-wind context)
                (##continuation-return cont results)
                (let ((continue
                       (lambda () (##continuation-return cont results)))
                      (src-level
                       (macro-dynwind-level src))
                      (dst-level
                       (macro-dynwind-level dst)))
                  (cond ((##fixnum.< dst-level src-level)
                         (unwind-src src dst continue))
                        ((##fixnum.< src-level dst-level)
                         (wind-dst src dst continue))
                        (else
                         (unwind-src-wind-dst src dst continue)))))))

        (continuation-return-with-winding
         cont
         (cond ((##eq? val1 (macro-absent-obj))
                (##values))
               ((##eq? val2 (macro-absent-obj))
                val1)
               ((##null? others)
                (##values val1 val2))
               (else
                (##subtype-set!
                 (##list->vector (##cons val1 (##cons val2 others)))
                 (macro-subtype-boxvalues))))))))
   receiver))

felix

Feb 7, 2002, 12:50:54 PM
>And I thought that the answer was lacking somewhat.
>
>Anyway, if you think what I write is drivel (and I know you do),
>you don't have to read it.
>


(Oh, come on, Matthias!)

David Rush is right. This is a FAQ. You know what? Why don't
you write the whole dynamic-wind stuff down once and for all
and submit it to the maintainer of the Scheme FAQ?


felix


felix

Feb 7, 2002, 12:46:00 PM

Marc Feeley wrote in message ...

>
>I agree. dynamic-wind is an awkward construct with a non-trivial
>semantics that isn't even well defined in R5RS (what happens if a
>continuation is captured from within an after/before thunk **while**
>the continuation is being wound/unwound?). dynamic-wind should not
>have been added to Scheme (the same is true of multiple-values but
>that's a different subject). I remember voting against dynamic-wind
>at the 1992 meeting in Palo Alto.


Yet it makes it possible to implement constructs like parameterize, fluid-let,
let-fluid and with-exception-handler in a reliable and portable manner.
Or consider wrapper code for debuggers, tracing or profiling. Sometimes
you just need dynamic-wind or you are left with hacking the implementation
(if it's possible at all).
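
For instance, a single-binding fluid-let can be written portably along these
lines (a rough sketch: single-threaded, one variable only):

(define-syntax fluid-let
  (syntax-rules ()
    ((_ ((var new-val)) body ...)
     (let ((saved new-val))
       (dynamic-wind
         ;; swap on every entry to the body's dynamic extent ...
         (lambda () (let ((tmp var)) (set! var saved) (set! saved tmp)))
         (lambda () body ...)
         ;; ... and swap back on every exit, so the binding is restored
         ;; (and re-established) when continuations leave or re-enter
         (lambda () (let ((tmp var)) (set! var saved) (set! saved tmp))))))))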


felix


Matthias Blume

Feb 7, 2002, 2:44:25 PM
felix wrote:
> Marc Feeley wrote in message ...
>
>>I agree. dynamic-wind is an awkward construct with a non-trivial
>>semantics that isn't even well defined in R5RS (what happens if a
>>continuation is captured from within an after/before thunk **while**
>>the continuation is being wound/unwound?). dynamic-wind should not
>>have been added to Scheme (the same is true of multiple-values but
>>that's a different subject). I remember voting against dynamic-wind
>>at the 1992 meeting in Palo Alto.
>>
>
>
> Yet it makes it possible to implement constructs like parameterize, fluid-let,
> let-fluid and with-exception-handler in a reliable and portable manner.

This is not all that surprising, given that they all suffer from precisely the
same conceptual problems that I gripe about in the case of dynamic-wind.
In any case, dynamic-wind is a *terrible* way (in terms of efficiency) of
implementing them, given that saving/restoring of the exception handler or a
fluid variable can be done in constant time (per variable) -- independently
of the "distance" between the two dynamic contexts in question -- whereas
dynamic-wind can take arbitrary amounts of time, going through tons and tons
of unnecessary unwinders and rewinders, the effects of all but the last of
which are discarded anyway.

In my view, exceptions and exception handling should be native concepts of the
language. In a CPS-based compiler, "with-exception-handler" can easily and
efficiently be implemented using continuation-passing-handler-passing style.
This results in very little overhead, is semantically clean and well-defined,
and comes with none of the implementation complexity that Marc referred to.
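
Something like the following toy rendering (illustrative names only; a real
compiler does this on its intermediate representation, not in source Scheme):

;; Every "compiled" procedure receives a normal continuation K and a
;; handler continuation H.
(define (div-cps x y k h)
  (if (zero? y)
      (h 'divide-by-zero)                ; raise: invoke the handler
      (k (quotient x y))))

;; Installing a handler just means passing a different H to the body, so
;; saving and restoring it is constant time.
(define (with-exception-handler-cps handler body k h)
  (body k (lambda (condition) (handler condition k h))))

;; (with-exception-handler-cps
;;   (lambda (c k h) (k 0))              ; return 0 on any error
;;   (lambda (k h) (div-cps 10 0 k h))
;;   (lambda (v) v)
;;   (lambda (c) (error "unhandled" c))) ; => 0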

--
-Matthias

Michael Sperber [Mr. Preprocessor]

Feb 8, 2002, 4:10:45 AM

>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> In my view, exceptions and exception handling should be native concepts of the
Matthias> language. In a CPS-based compiler, "with-exception-handler" can easily and
Matthias> efficiently be implemented using continuation-passing-handler-passing style.
Matthias> This results in very little overhead, is semantically clean and well-defined,
Matthias> and comes with none of the implementation complexity that Marc referred to.

Only that exceptions also have subtle interactions with call/cc which
specifically affect context-switching. There's some excellent work
from the Fox project on that:

Edoardo Biagioni, Ken Cline, Peter Lee, Chris Okasaki, and Chris
Stone. Safe-for-space threads in Standard ML. In Olivier Danvy and
Carolyn Talcott, editors, Proceedings of the ACM SIGPLAN Workshop on
Continuations, Paris, France, January 1997. Available in special issue
of Higher-Order and Symbolic Computation, 11(2):209-225, December
1998.

http://foxnet.cs.cmu.edu/papers/cokasaki-threads.ps

Matthias Blume

Feb 8, 2002, 10:43:14 AM
Michael Sperber [Mr. Preprocessor] wrote:

To refresh everyone's memory: The problem in question is that in a certain "naive"
implementation of threads on top of call/cc, a new thread (created by "fork") will
inherit both its parent's regular continuation and that parent's exception continuation.
Since the thread never returns regularly, it is reasonable to expect that the
compiler can get rid of the regular continuation. The same is not true for
the exception continuation.
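
The "naive" scheme in question looks roughly like this (a cooperative,
single-queue toy; the names are invented):

(define *ready* '())
(define (enqueue! t) (set! *ready* (append *ready* (list t))))
(define (dequeue!) (let ((t (car *ready*))) (set! *ready* (cdr *ready*)) t))

(define (fork thunk)
  (call-with-current-continuation
   (lambda (parent)
     (enqueue! (lambda () (parent #f)))  ; let the parent resume later
     (thunk)                             ; the child runs on top of the
     ((dequeue!)))))                     ; parent's captured context

The child never returns to the parent's continuation, yet that continuation
(and, in a handler-passing system, the parent's handler) remains reachable
from the child's context.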

This fact follows directly from the semantics of call/cc and is in no
way "subtle". It also has nothing to do with whether or not one uses
handler-passing-style.

The obvious remedy is to use some kind of trampoline technology, i.e., having a special
context that fork delegates the task of creating a new thread to.
When implementing CML (which is being named explicitly in the above-mentioned paper),
John Reppy knew about this problem and its solution all along. He did not think that
it would be a problem in practice, and believed that the cost of the trampoline
solution would outweigh its benefits.

A better solution to the problem is to have another low-level operator "isolate" that
runs a function in an empty context (where both the regular and the exn continuation
are trivial).

By the way, since this is still under the subject line of "dynamic-wind", I would like to
point out that precisely the same problems that arise from having an exn handler context
also arise if you have a winding context.

--
-Matthias

Michael Sperber [Mr. Preprocessor]

Feb 8, 2002, 12:05:52 PM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>>>>>>>
Matthias> In my view, exceptions and exception handling should be native
Matthias> concepts of the language. In a CPS-based compiler, "with-exception-handler" can easily and
Matthias> efficiently be implemented using continuation-passing-handler-passing style.
Matthias> This results in very little overhead, is semantically clean and well-defined,
Matthias> and comes with none of the implementation complexity that Marc referred to.
>> Only that exceptions also have subtle interactions with call/cc which
>> specifically affect context-switching. There's some excellent work
>> from the Fox project on that:
>> Edoardo Biagioni, Ken Cline, Peter Lee, Chris Okasaki, and Chris
>> Stone. Safe-for-space threads in Standard ML. In Olivier Danvy and
>> Carolyn Talcott, editors, Proceedings of the ACM SIGPLAN Workshop on
>> Continuations, Paris, France, January 1997. Available in special issue
>> of Higher-Order and Symbolic Computation, 11(2):209-225, December
>> 1998.
>> http://foxnet.cs.cmu.edu/papers/cokasaki-threads.ps

Matthias> This fact follows directly from the semantics of call/cc and
Matthias> is in no way "subtle".

That depends on your definition of "subtle" of course. I find it at
about the same level of subtlety as dealing with DYNAMIC-WIND. It
sure was subtle enough to warrant a research paper and a full session
at the Advanced Functional Programming School.

Matthias> It also has nothing to do with whether or not one uses
Matthias> handler-passing-style.

Right. I see now I quoted too much. Sorry about that.

Matthias> By the way, since this is still under the subject line of
Matthias> "dynamic-wind", I would like to point out that the precise
Matthias> same problems that arise from having an exn handler context
Matthias> also arise if you have a winding context.

Exactly. (I did say "also".) We're in full agreement here.

felix

Feb 9, 2002, 6:23:28 AM

Matthias Blume wrote in message <3C62D919...@shimizu-blume.com>...

>>
>> Yet it makes it possible to implement constructs like parameterize, fluid-let,
>> let-fluid and with-exception-handler in a reliable and portable manner.
>
>This is not all that surprising given that all three suffer from precisely the
>same conceptual problems that I gripe about in the case of dynamic-wind.
>In any case, dynamic-wind is a *terrible* way (in terms of efficiency) of
>implementing them given that saving/restoring of the exception handler or a
>fluid variable can be done in constant time (per variable) -- independently
>of the "distance" between the two dynamic contexts in question, but dynamic-wind
>can take arbitrary amounts of time -- going through tons and tons of unnecessary
>unwinders and rewinders the effects of all but the last of which are discarded
>anyway.


In practice I experienced no such problems.

>
>In my view, exceptions and exception handling should be native concepts of the
>language. In a CPS-based compiler, "with-exception-handler" can easily and
>efficiently be implemented using continuation-passing-handler-passing style.
>This results in very little overhead, is semantically clean and well-defined,
>and comes with none of the implementation complexity that Marc referred to.
>


What attracts me in Scheme is that it provides a few powerful constructs
that can be used to implement all the stuff I need. These constructs are so
general that they can be hairy to use in certain situations. "dynamic-wind"
is very low-level, that is true. It has implications that forbid using it lightheartedly.
Yet sometimes it's essential to have it at your disposal, for exactly the same
reasons I stated in my previous posting (and which you addressed only partially).

What if someone thinks of a new language construct that needs establishing
a reliable dynamic context that is call/cc-safe? Should she wait until the
language implementor provides a new construct?
In Scheme she can implement it herself, right away.
That's the beauty of it.


felix


Matthias Blume

Feb 9, 2002, 8:06:13 AM
felix wrote:

>
> What attracts me in Scheme is that it provides a few powerful constructs
> that can be used to implement all the stuff I need. These constructs are so
> general that they can be hairy to use in certain situations. "dynamic-wind"
> is very low-level, that is true. It has implications that forbid using it lightheartedly.
> Yet sometimes it's essential to have it at your disposal, for exactly the same
> reasons I stated in my previous posting (and which you addressed only partially).

No, it is not "essential" precisely in the sense of your own words -- namely because there
are *already* powerful enough constructs in the language that let you implement it
if you really need it. (I still doubt very heavily that you *really* need it, but
you might *think* that you really need it, so you may be served.) The mistake is
to put the sucker gratuitously into the language definition itself.

By the way, what did I address only partially? Do you mean those "debuggers" and "tracers"?
If so, I don't see any inherent connection between debuggers and dynamic-wind. Maybe
you should shed some light before we discuss this any further.

> What if someone thinks of a new language construct that needs establishing
> a reliable dynamic context that is call/cc-safe?

I can think of no such language construct. Why would one need a call/cc-safe dynamic
context other than for its own sake -- especially in a language where computations
can easily escape such contexts anyway?

> Should she wait until the language implementor provides a new construct?

She should rethink her problem until she realizes that she didn't need such a thing after
all -- or, which is more likely, that even if she's got it, it wouldn't really solve her
problem in a reliable, general manner.

> In Scheme she can implement it herself, right away.

If she is really that desperate, she could do it even without having dynamic-wind. It will
take only those few extra lines that define dynamic-wind together with call/wc. If she wants
all instances of call/cc to behave like call/wc, she can even set! the former to the latter.
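
Roughly like this (a simplified sketch: single return values only, and only
the unwinding direction -- a complete version must also re-run BEFORE thunks
when a continuation re-enters a context):

(define *winders* '())   ; innermost-first list of (before . after) pairs

(define (dynamic-wind before thunk after)
  (before)
  (set! *winders* (cons (cons before after) *winders*))
  (let ((result (thunk)))
    (set! *winders* (cdr *winders*))
    (after)
    result))

(define (call/wc proc)   ; a "winding" call/cc on top of the raw call/cc
  (call-with-current-continuation
   (lambda (k)
     (let ((saved *winders*))
       (proc (lambda (v)
               ;; run the AFTER thunks of everything entered since capture
               (do () ((eq? *winders* saved))
                 (let ((w (car *winders*)))
                   (set! *winders* (cdr *winders*))
                   ((cdr w))))
               (k v)))))))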

> That's the beauty of it.

Well, it's pretty ugly, actually. But then, beauty is in the eye of the beholder.

Matthias


felix

Feb 9, 2002, 11:21:27 AM

Matthias Blume wrote in message <3C651F36...@shimizu-blume.com>...

>
>By the way, what did I address only partially? Do you mean those "debuggers" and "tracers"?
>If so, I don't see any inherent connection between debuggers and dynamic-wind. Maybe
>you should shed some light before we discuss this any further.

Consider keeping information about pending call-frames. You have to guard
against escaping continuations, don't you? Say you instrument a source-file, and
each call should record information (time, etc.).

>
>> What if someone thinks of a new language construct that needs establishing
>> a reliable dynamic context that is call/cc-safe?
>
>I can think of no such language construct. Why would one need a call/cc-safe dynamic
>context other than for its own sake -- especially in a language where computations
>can easily escape such contexts anyway?


I can think of some (and I already mentioned them).
And the fact that *you* can't think of anything doesn't mean such a thing might not exist.
(Well, you obviously are of a different opinion - omniscience must be a wonderful gift!).

>
>> Should she wait until the language implementor provides a new construct?
>
>She should rethink her problem until she realizes that she didn't need such a thing after
>all -- or, which is more likely, that even if she's got it, it wouldn't really solve her
>problem in a reliable, general manner.

It is just as likely that she needs exactly such a thing. People might need (for example)
"parameterize". It is not for you to judge wether it's sensible to implement/use such
a thing.

You say: "nobody is ever gonna need it because *I* say it's ugly and can be implemented
differently".
I say: "let the user decide and give him the tools to do it".

Fortunately R5RS is on my side.

>
>> In Scheme she can implement it herself, right away.
>
>If she is really that desparate, she could do it even without having dynamic-wind. It will
>take only those few extra lines that define dynamic-wind together with call/wc. If she wants
>all instances of call/cc behave like call/wc, she can even set! the former to the latter.


So, effectively you say that if the user wants "dynamic-wind", she should implement
it herself? Hm. Why not give it to her right away? Might be more efficient, also.

>Well, it's pretty ugly, actually. But then, beauty is in the eye of the beholder.


Right.


felix


Jeffrey Siegal

Feb 9, 2002, 2:01:12 PM
felix wrote:
> >If she is really that desperate, she could do it even without having dynamic-wind. It will
> >take only those few extra lines that define dynamic-wind together with call/wc. If she wants
> >all instances of call/cc to behave like call/wc, she can even set! the former to the latter.
>
> So, effectively you say that if the user wants "dynamic-wind", she should implement
> it herself? Hm. Why not give it to her right away? Might be more efficient, also.

I think his objection is that R5RS's call/cc is actually call/wc, which
is conceptually a derived construct (or should be).

Matthias Blume

Feb 9, 2002, 8:32:44 PM

To be precise, if call/wc existed as a separate construct (which could be a derived one
in this case), I could simply ignore its existence (together with that of dynamic-wind)
in my everyday programming.

Matthias

Matthias Blume

Feb 9, 2002, 9:23:46 PM
felix wrote:

> Consider keeping information about pending call-frames. You have to guard
> against escaping continuations, don't you? Say you instrument a source-file, and
> each call should record information (time, etc.).

This could easily be dealt with by having a debug version of call/cc. If you are willing
to instrument your source code, this is just a tiny extra step to take. There is no need,
however, to throw this beast into the language proper.

>>>What if someone thinks of a new language construct that needs establishing
>>>a reliable dynamic context that is call/cc-safe?
>>>
>>I can think of no such language construct. Why would one need a call/cc-safe dynamic
>>context other than for its own sake -- especially in a language where computations
>>can easily escape such contexts anyway?
>>
>
> I can think of some (and I already mentioned them).

No, you did not mention them. Every single example so far can be dealt with without need
for dynamic-wind. And in most (if not all) of these cases, doing so is preferable not
just because it is "the solution that does not require dynamic-wind", but because it is
"the solution that is superior in robustness and/or efficiency".

(For example, a fluid-let-like construct -- as questionable as it may be in its own right --
does not have to inhibit tail recursiveness (i.e., it does not need to grow the stack or
any other data structure), but a dynamic-wind-based implementation will definitely not be
properly tail recursive. Not to mention (since I already did) the overhead of running all
those useless winders.)
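
Concretely -- using the swap-based fluid-let sketched earlier in the thread --
the recursive call below sits inside dynamic-wind's thunk, with a pending
AFTER thunk around it, so it is no longer a tail call and the loop grows
without bound even though it looks iterative:

(define *depth* 0)

(define (walk n)
  (fluid-let ((*depth* n))
    (if (zero? n)
        'done
        (walk (- n 1)))))   ; not a tail call with a dynamic-wind fluid-let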

>>>Should she wait until the language implementor provides a new construct?
>>>
>>She should rethink her problem until she realizes that she didn't need such a thing after
>>all -- or, which is more likely, that even if she's got it, it wouldn't really solve her
>>problem in a reliable, general manner.
>>
>
> It is just as likely that she needs exactly such a thing.

I doubt that very much. In fact, I think it is highly unlikely.

> People might need (for example)
> "parameterize". It is not for you to judge wether it's sensible to implement/use such
> a thing.

Now you are justifying the existence of one questionable language construct with that of
another. Plus, once again, using dynamic-wind to implement parameterize is highly
inefficient.

And, by the way, since all I am doing here is stating an opinion, it is very well for me
to make a judgement of the above kind. I do think that dynamic-wind and friends should not
be used and should be absent from a programming language. (The original Scheme design was
quite clear in that it did not like dynamic scoping. And all this "dynamic-this" and
"fluid-that" stuff is basically an attempt of resurrecting an old, evil ghost.)
It is not for you to judge whether it is for me to judge. This is netnews.

> You say: "nobody is ever gonna need it because *I* say it's ugly and can be implemented
> differently".

No, you got it wrong: I do not say "because I SAY it's ugly", but rather "because it actually
is ugly, whether I say so or not". (In other words, you don't need to mix levels here. You
might disagree with what I say, but the line of reasoning in what I am saying does not itself
refer to the fact that I am saying it... :-)

> I say: "let the user decide and give him the tools to do it".

In high-level languages there are all kinds of things that we decide not to give users access to.
Raw memory manipulation is one good example. It is all a matter of where to draw the
line. I would draw the line in such a way that it keeps dynamic-wind out.

> So, effectively you say that if the user wants "dynamic-wind", she should implement
> it herself? Hm. Why not give it to her right away? Might be more efficient, also.

First of all, I do not say "she should implement it herself". In fact, I think she definitely
shouldn't. Instead, she should rework her design in such a way that dynamic-wind is
unnecessary. However, if she insists (probably because of her extremely poor judgement :-), she
*can* implement it herself. Basically, what I am saying is "well then, go ahead if you must.
it's your own risk, though".

By your line of reasoning ("someone might need it, so why don't we throw it in?"), why don't we
just throw in the kitchen sink -- along with all the other stuff that "might be useful to
someone" and that "we might be able to provide more efficiently".
Wait a minute, that has already been done. It's called Common Lisp. :-)

Matthias

felix

Feb 10, 2002, 7:37:11 AM

Matthias Blume wrote in message <3C65DA24...@shimizu-blume.com>...

>
>
>(For example, a fluid-let-like construct -- as questionable as it may be in its own right --
>does not have to inhibit tail recursiveness (i.e., it does not need to grow the stack or
>any other data structure), but a dynamic-wind-based implementation will definitely not be
>properly tail recursive. Not to mention (since I already did) the overhead of running all
>those useless winders.)
>

Would you care to provide an implementation of "fluid-let" that does the above?
I would also be interested in a performance comparison with a version that uses
a native "dynamic-wind".

>>
>> It is just as likely that she needs exactly such a thing.
>
>I doubt that very much. In fact, I think it is highly unlikely.


I don't. What now?

>
> > People might need (for example)
>> "parameterize". It is not for you to judge wether it's sensible to implement/use such
>> a thing.
>
>Now you are justifying the existence of one questionable language construct with that of
>another. Plus, once again, using dynamic-wind to implement parameterize is highly
>inefficient.


Once again, please provide a more efficient one.

>
>And, by the way, since all I am doing here is stating an opinion, it is very well for me
>to make a judgement of the above kind. I do think that dynamic-wind and friends should not
>be used and should be absent from a programming language. (The original Scheme design was
>quite clear in that it did not like dynamic scoping. And all this "dynamic-this" and
>"fluid-that" stuff is basically an attempt of resurrecting an old, evil ghost.)


Interestingly quite a number of Scheme implementations provide "dynamic-wind"
and constructs built on top of it. How would you explain that?

(I just want to point out that it's not just unwashed pariah coders like me that
have the odd use for "dynamic-wind" and mates).

Why is "dynamic-wind" given in R5RS if it's so questionable? (dont' bother -
it's a rethorical question).

People *are* actually using it!

>No, you got it wrong: I do not say "because I SAY it's ugly", but rather "because it actually
>is ugly, whether I say so or not".

In your last posting you claimed that "beauty is in the eye of the beholder."
How does that relate to what you say above?

>
>> I say: "let the user decide and give him the tools to do it".
>
>In high-level languages there are all kinds of things that we decide not to give users access to.
>Raw memory manipulation is one example good example. It is all a matter of where to draw the
>line. I would draw the line in such a way that it keeps dynamic-wind out.


A perfectly valid standpoint.

>
>By your line of reasoning ("someone might need it, so why don't we throw it in?"), why don't we
>just throw in the kitchen sink -- along with all the other stuff that "might be useful to
>someone" and that "we might be able to provide more efficiently".


You got it completely wrong (naturally).

My reasoning is: "it's a standard construct, it's actually used (directly or indirectly),
so why don't we leave it where it is?"


felix


Matthias Blume

Feb 10, 2002, 2:07:36 PM
felix wrote:
> Matthias Blume wrote in message <3C65DA24...@shimizu-blume.com>...
>
>>
>>(For example, a fluid-let-like construct -- as questionable as it may be in its own right --
>>does not have to inhibit tail recursiveness (i.e., it does not need to grow the stack or
>>any other data structure), but a dynamic-wind-based implementation will definitely not be
>>properly tail recursive. Not to mention (since I already did) the overhead of running all
>>those useless winders.)
>>
>>
>
> Would you care to provide an implementation of "fluid-let" that does the above?

Well, it requires compiler support.

1. Identify occurrences of fluid-bound variables.
2. Make all functions (internally) take an extra argument which is a dictionary
(a functional finite map).
3. Replace all bound occurrences of the fluid variable with an access to the
dictionary.
4. Implement fluid-let by simply (non-destructively) updating the existing
dictionary, using the new one for the body. In a tail-recursive loop that
goes through a fluid-let, most of the dictionaries would quickly become
garbage, and the space used at any one time (on behalf of the fluid variable)
is constant.

Disclaimer: I have not done such an implementation, and I may have overlooked some
details, but I am pretty certain that it can be done along those lines.
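
Hand-translated, the idea might look like this (denv-lookup and friends are
invented names; a real compiler would work on its intermediate form):

(define (denv-lookup denv name) (cdr (assq name denv)))
(define (denv-extend denv name val) (cons (cons name val) denv))

;; Source:   (define (f x) (+ x *depth*))        ; *depth* is fluid-bound
;; Compiled: every procedure takes the dictionary as an extra argument.
(define (f denv x) (+ x (denv-lookup denv '*depth*)))

;; Source:   (define (loop n)
;;             (if (zero? n) (f n) (fluid-let ((*depth* n)) (loop (- n 1)))))
;; Compiled: fluid-let just extends the dictionary for its body, so the
;; recursive call stays a tail call and old dictionaries become garbage.
(define (loop denv n)
  (if (zero? n)
      (f denv n)
      (loop (denv-extend denv '*depth* n) (- n 1))))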

> I would also be interested in a performance comparison with a version that uses
> a native "dynamic-wind".

This obviously depends on which kind of program you use it in. If it is a
CML-style implementation of concurrency on top of call/cc where hundreds of thousands
of call/cc-s happen every second, the difference can be devastating.

>>>People might need (for example)
>>>"parameterize". It is not for you to judge wether it's sensible to implement/use such
>>>a thing.
>>>
>>Now you are justifying the existence of one questionable language construct with that of
>>another. Plus, once again, using dynamic-wind to implement parameterize is highly
>>inefficient.
>>
>
>
> Once again, please provide a more efficient one.

It's pretty much along the same lines as above.

> Interestingly quite a number of Scheme implementations provide "dynamic-wind"
> and constructs built on top of it. How would you explain that?

Well, how do I explain it? I don't. It's a fact -- and a sad one at that.
Just because 20,000 people do it, it does not have to be right, right?

> Why is "dynamic-wind" given in R5RS if it's so questionable? (dont' bother -
> it's a rethorical question).

No, it's a good question. I'd like to hear the answer to it, too. Especially given
the fact that voices were raised against it.

> People *are* actually using it!

... which is a shame. There are a lot of things that "people" do, and which I more
or less violently disagree with.

>>No, you got it wrong: I do not say "because I SAY it's ugly", but rather "because it actually
>>is ugly, whether I say so or not".
>>
>
> In your last posting you claimed that "beauty is in the eye of the beholder."
> How does that relate to what you say above?

Not at all. All I was pointing out was that I said "because Y, X" and not "because I say Y, X".
The two statements are quite different, don't you see? I am the first to admit that nothing
becomes true or false just because I happen to say so. But that does not stop me from believing
in truths and falsehoods of certain things, or from having reasons for these beliefs.
(You can, of course, disagree with me and say "not X" -- which perhaps is because "not Y" or
because "not (because Y, X)" etc. You can also say "not (because Matthias says Y, X)", but that
does not help in our argument since I never claimed the opposite.)

> You got it completely wrong (naturally).

Naturally.

> My reasoning is: "it's a standard construct, it's actually used (directly or indirectly),
> so why don't we leave it where it is?"

I am asking to leave it where it was. (It wasn't in Scheme until R5RS came along.)
Dynamic scope was also a "standard construct" in Lisp 1.5 and on, so why did we not leave it?
The distinction between value cells and function cells was in Lisp, so why did we not leave it?
Nil was identified with both #f and (), so why did we not leave that alone?
No call/cc was in the language, so why did we not leave that alone?

The list goes on and on. If we are forbidden to question the status quo, we will be stuck
forever.

Matthias

felix

Feb 11, 2002, 4:56:40 AM

Matthias Blume wrote in message <3C66C569...@shimizu-blume.com>...

>
>Well, it requires compiler support.
>


This is not what I was looking for. "dynamic-wind" is all the
compiler support I need (and get). Hacking the compiler is exactly the
thing that should *not* be needed in Scheme.


felix


David Rush

Feb 11, 2002, 5:42:36 AM
"felix" <felixu...@freenet.de> writes:
> Matthias Blume wrote in message <3C65DA24...@shimizu-blume.com>...
> >I do think that dynamic-wind and friends should not
> >be used and should be absent from a programming language. (The
> >original Scheme design was quite clear in that it did not like
> >dynamic scoping. And all this "dynamic-this" and "fluid-that"
> >stuff is basically an attempt of resurrecting an old, evil ghost.)

Matthias, while I agree with you on dynamic-wind, I feel like I should
point out that the attempted resurrection of that evil dynamic-scoping
ghost is a direct outgrowth of people's felt needs for expressivity in
programming languages. It's an attempt (of sorts) to tame a beast
which exists in Scheme anyway because of the semantics of top-level
define. I might be full of shit, but I suspect that the whole issue of
dynamic scoping arises immediately from having mutable data.

> Interestingly quite a number of Scheme implementations provide
> "dynamic-wind" and constructs built on top of it. How would you
> explain that?

<sarcasm>
The great sense of personal worth that one feels from being able to
claim 'standards compliance' in the computer industry has nothing to
do with it, I'm sure.
</sarcasm>

Just look at the hassles Jeff Siskind gets from Stalin's
"non-compliance" to R4RS - when Stalin is *in fact* compliant enough
to R4RS that you hardly notice the differences. Stalin's only 'sin'
(if you will) is that JMS was kind enough to very clearly document the
areas in which Stalin is non-compliant: other (well-known) Schemes
have similar incompatibilities but do not take such pains to point
them out. So what is his reward for saying that requirement X or Y of
R5RS is incompatible with his implementation goals? An endless run of
slagging for having the audacity to suggest (and implement) the idea
that some of the features of R[45]RS might be inefficient or mistaken.

> (I just want to point out that it's not just unwashed pariah coders
> like me that have the odd use for "dynamic-wind" and mates).

I'm not saying that dynamic-wind is not useful. Just that it should
not be in R5RS. If the SRFI process had existed at the time that R5RS
was adopted, dynamic-wind (and call/wc) would have been a *great* SRFI
candidate. There is at least as much controversy over dynamic-wind as
there is about SRFI-17, yet no-one is still whining about *that*
abomination.

There is a large connotational difference between "conformant" and
"implements". This is part of the beauty of the SRFI system.

> Why is "dynamic-wind" given in R5RS if it's so questionable? (dont' bother -
> it's a rethorical question).

It shouldn't be a rhetorical question. In fact dynamic-wind has been
cited as being a violation of the 'widely implemented and
uncontroversial' rule for RnRS inclusion. It should be *removed* from
any future RnRS.

> My reasoning is: "it's a standard construct, it's actually used
> (directly or indirectly), so why don't we leave it where it is?"

It shouldn't be left alone because:

1) it's broken
2) it can be implemented in terms of R4RS call/cc and not
conversely
3) it shouldn't have been standardized in the first place
4) nothing breaks if it gets removed from RnRS

Even my .sigmonster knows this is The Right Thing (tm)

david rush
--
Scheme: Language, the way Church would have wanted it.
-- Anton van Straaten (the Scheme Marketing Dept from c.l.s)

Matthias Blume

Feb 11, 2002, 8:12:56 AM
felix wrote:
> Matthias Blume wrote in message <3C66C569...@shimizu-blume.com>...
>
>>Well, it requires compiler support.
>>
>>
>
>
> This is not what I was looking for. "dynamic-wind" is all the
> compiler-support I need (and get).

Why? I thought you were implementing a compiler. Doing what I described is
trivial compared to the complexity of the rest of a compiler.

> Hacking the compiler is exactly the
> thing that should *not* be needed in Scheme.

But we do, one way or another. Otherwise we end up with inefficient solutions.

Matthias

Sander Vesik

Feb 11, 2002, 8:46:56 AM
Matthias Blume <matt...@shimizu-blume.com> wrote:
> felix wrote:

[snip]

>>
>> Would you care to provide an implementation of "fluid-let" that does the above?
>
> Well, it requires compiler support.

You lose. You specifically lose because you acknowledge that in your proposed world
something that is desirable to be an independent add-on cannot be such but has
to be implemented in the compiler using quite horrible hacks.

[big snip]

>
> Matthias
>

--
Sander

+++ Out of cheese error +++

Matthias Blume

Feb 11, 2002, 10:29:34 AM
Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>"Moiself" == Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:
>>>>>>
>
>>>>>>"Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>>>>>>
>
> Matthias> 1. Identify occurrences of fluid-bound variables.
> Matthias> 2. Make all functions (internally) take an extra argument which is a dictionary
> Matthias> (a functional finite map).
> Matthias> 3. Replace all bound occurrences of the fluid variable with an access to the
> Matthias> dictionary.
> Matthias> 4. Implement fluid-let by simply (non-destructively) updating the existing
> Matthias> dictionary, using the new one for the body. In a tail-recursive loop that
> Matthias> goes through a fluid-let, most of the dictionaries would quickly become
> Matthias> garbage, and the space used at any one time (on behalf of the fluid variable)
> Matthias> is constant.
>
> Moiself> I'm not getting your last point: In a recursive setting, it seems to
> Moiself> me the still-needed old fluids can stack up indefinitely.
>
> Never mind, I misparsed your last sentence. Sorry about that.

While trying to think through the details I realized that things aren't quite as easy
as I imagined. The problem is with potentially very subtle interactions between
regular set! and fluid-let. This could be dealt with if all variables were statically
known to be mutable and/or subject to fluid-let -- which (short of whole-program analysis)
is, unfortunately, not the case for global variables. The trouble is, of course, that
almost all variables subject to fluid-let are global ones.

(It is really true that there are several distinct murky corners of the Scheme language --
the nature of its toplevel bindings being one of them. And sometimes several of these
corners conspire and it all comes to a head.)

The situation is (I think) much better with "parameterize" which does not seem to suffer from
the global variable problem.
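
One concrete instance of the interaction (using the classic swap-based
fluid-let; the variable names are invented): a set! inside the extent mutates
the fluid binding and is undone on exit, and a pure dictionary-passing
translation has to account for such assignments to the global as well:

(define *mode* 'normal)
(define (tweak!) (set! *mode* 'debug))

(fluid-let ((*mode* 'verbose))
  (tweak!)
  *mode*)    ; => debug   inside the extent
*mode*       ; => normal  restored afterwards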

--
-Matthias

felix

Feb 11, 2002, 10:25:07 AM

Matthias Blume wrote in message <3C67C3CC...@shimizu-blume.com>...

>felix wrote:
>> Matthias Blume wrote in message <3C66C569...@shimizu-blume.com>...
>>
>>>Well, it requires compiler support.
>>>
>>>
>>
>>
>> This is not what I was looking for. "dynamic-wind" is all the
>> compiler-support I need (and get).
>
>Why? I thought you were implementing a compiler. Doing what I described is
>trivial compared to the complexity of the rest of a compiler.


You still don't get it. It's not about me. What I find useful is that
R5RS gives me (as a user) the means to implement certain things
(like "fluid-let") WITHOUT HACKING THE COMPILER as such.
That's the reason why I find "dynamic-wind" useful.


felix


felix

Feb 11, 2002, 10:48:22 AM

David Rush wrote in message ...

>
>Just look at the hassles Jeff Siskind gets from Stalin's
>"non-compliance" to R4RS - when Stalin is *in fact* compliant enough
>to R4RS that you hardly notice the differences. Stalin's only 'sin'
>(if you will) is that JMS was kind enough to very clearly document the
>areas in which Stalin is non-compliant: other (well-known) Schemes
>have similar incompatibilities but do not take such pains to point
>them out. So what is his reward for saying that requirement X or Y of
>R5RS is incompatible with his implementation goals? An endless run of
>slagging for having the audacity to suggest (and implement) the idea
>that some of the features of R[45]RS might be inefficient or mistaken.


The point is that I can understand Eli Barzilay, when he says that
"Stalin is not a real Scheme" (and I assume he includes those other
(well-known) Schemes).
That doesn't say anything about Stalin's achievements regarding
performance. I don't question those (no way!). But that is immaterial
for someone who needs certain Scheme features that are
not provided.
And TR (proper tail recursion) is such a fundamental feature of Scheme that I take it for granted.

[dynamic-wind]


>
>It shouldn't be left alone because:
>
> 1) it's broken


In my experience, it works pretty well (at least for me).


felix


Matthias Blume

Feb 11, 2002, 10:53:29 AM
Sander Vesik wrote:
> Matthias Blume <matt...@shimizu-blume.com> wrote:
>
>>felix wrote:
>>
>
> [snip]
>
>
>>>Would you care of providing an implementation of "fluid-let" that does the above?
>>>
>>Well, it requires compiler support.
>>
>
> You lose. You specifically lose because you acknowledge that in your proposed world
> something that is desirable to be an independent add-on cannot be such but has
> to be implemented in the compiler using quite horrible hacks.

I never claimed that these things could be implemented as independent add-ons. I just
said that it can be done more efficiently. Compiler support is definitely (in my view)
a viable way of implementing things.

Anyway, in the present case, it may very well be that things are not quite as easy as
I imagined. (See my other article on that.)

By the way, why "horrible hacks"? Every decent Scheme implementation does a very
similar analysis pass over a program to discover immutable variables.

[On a final note: I don't understand why people are so obsessed with everything being
"independent ad-ons". If you put feature X into the language, you might as well have
compilers implement X directly and not just as some macro.
Of course, there is the never-ending quest for simplicity, but as Einstein said:
"Things should be made as simple as possible -- but no simpler."
Sacrificing efficiency for (very questionable in this case) simplicity is not the way
to go, IMHO.]

--
-Matthias

Jeffrey Siegal

Feb 11, 2002, 11:10:46 AM
felix wrote:
> And TR is such a fundamental feature of Scheme that I take it for granted.

Given that, it seems likely that you'd want to implement fluid-let
(assuming you want it at all) in a tail-recursive manner. Which means
not using dynamic-wind to implement it.

Matthias Blume

Feb 11, 2002, 11:32:56 AM
felix wrote:

> You still don't get it.

Pot. Kettle. Black.

> It's not about me. What I find useful is that
> R5RS gives me (as a user) the means to implement certain things
> (like "fluid-let") WITHOUT HACKING THE COMPILER as such.
> That's the reason why I find "dynamic-wind" useful.

Paraphrasing your own words: If it is so useful (I am speaking of
fluid-let), then why not provide it directly, why force the user
to implement it herself? After all, we might be able to provide
it in a more efficient fashion!

Plus, for the last time, you already *can* implement dynamic-wind
(and fluid-let on top of it) in any R4RS-compliant Scheme. Providing
dynamic-wind as part of the language definition is not required for
that!

Now, which one do you want: Provide "useful" features in an efficient
manner by the implementation (i.e., the compiler), or let the user
implement it herself? You can't have it both ways.

Anyway, aside from all this, fluid-let is not a good justification for
dynamic-wind since both of them ought to be absent because they are
ugly. At least, that's how I see it.

Ok, we've been around this circle for the fifth time, we have
gotten to the stage of personal attacks, so it is high time to quit...

Over and out.
Matthias

David Feuer

Feb 11, 2002, 3:39:44 PM
Matthias Blume wrote:

> Anyway, in the present case, it may very well be that things are not quite as easy as

> I imagined. (See my other article on that.)

I think something very similar was described in one of the Steele & Sussman "Lambda the
Ultimate" papers, which can be found at readscheme.org

David Feuer

David Feuer

Feb 11, 2002, 3:44:47 PM
Matthias Blume wrote:

> Anyway, aside from all this, fluid-let is not a good justification for
> dynamic-wind since both of them ought to be absent because they are
> ugly. At least, that's how I see it.

fluid-let is not the most beautiful thing in the world. However, it
allows for a more controlled introduction of variables into a computation
than global variables do. It can be useful in some applications.

David Feuer

Doug Quale

Feb 11, 2002, 4:54:05 PM
David Rush <ku...@bellsouth.net> writes:

> Just look at the hassles Jeff Siskind gets from Stalin's
> "non-compliance" to R4RS - when Stalin is *in fact* compliant enough
> to R4RS that you hardly notice the differences. Stalin's only 'sin'
> (if you will) is that JMS was kind enough to very clearly document the
> areas in which Stalin is non-compliant: other (well-known) Schemes
> have similar incompatibilities but do not take such pains to point
> them out. So what is his reward for saying that requirement X or Y of
> R5RS is incompatible with his implementation goals? An endless run of
> slagging for having the audacity to suggest (and implement) the idea
> that some of the features of R[45]RS might be inefficient or mistaken.

Is this really a correct characterization of what people say about
Stalin? I haven't seen it. On this forum I see Stalin mentioned two
ways:

1. Stalin is the greatest thing since sliced bread. All Scheme compilers
should aspire to be like Stalin.

2. Stalin is very impressive, but it compiles a Scheme-like language,
not Scheme. In particular, a Scheme compiler requires correct
implementation of tail recursion.

Why can't both of these views be correct? Why are Stalin's defenders
so thin-skinned as to consider #2 to be a personal attack? Jeff
Siskind has made it clear that he doesn't consider tail call
optimization to be an interesting problem. In many (most?) cases
Stalin will optimize tail calls anyway, but I don't know how you know
for sure what it will do to any particular program.

In any case, those who maintain that Stalin compiles a Scheme-like
language that isn't Scheme aren't trolling this newsgroup trying to
stir up trouble or hurt Jeff Siskind's feelings. It's almost always a
response to someone asking "Stalin does X. Why doesn't Scheme Y do
X?". Sometimes the answer really is that Stalin doesn't compile the
same language. Other times the answer may be that we don't understand
all the techniques that Stalin uses to produce efficient code. Still
other times the answer may be that not everyone is patient enough to
wait for a compiler that is as slow as Stalin.

I have never seen anyone say that Stalin was not interesting,
worthwhile and useful software. In fact usually people don't even say
that Scheme is a better language than the language Stalin compiles,
they just point out the fact that they are different languages. Sadly
that is invariably labelled as an attack. I think that Stalin is
clearly the best compiler for some problems, and it is not as suitable
for some others. Apparently that isn't enough to make the Stalin camp
happy.

Tail call optimization (TCO) has been a part of Scheme from the very
beginning, over 25 years now. Tail call optimization is not an
example of "some of the features of R[45]RS [that] might be
inefficient or mistaken". A language without TCO can be interesting,
worthwhile and useful, but it can never be Scheme. Anyone who finds
that fact really troubling should either implement his own Scheme-like
language or implement a language like Common Lisp that doesn't require
TCO. In fact Stalin takes the first approach and is very
successful. I don't understand why the Stalin camp can't be satisfied
with this. Unfortunately anyone who acknowledges that Stalin is
interesting but still prefers to implement or work within the Scheme
standards rather than experimenting with his own language is accused
of "An endless run of slagging".

Sigh.

--
Doug Quale

David Feuer

unread,
Feb 11, 2002, 5:30:15 PM2/11/02
to
Doug Quale wrote:

> Tail call optimization (TCO) has been a part of Scheme from the very
> beginning, over 25 years now. Tail call optimization is not an
> example of "some of the features of R[45]RS [that] might be
> inefficient or mistaken". A language without TCO can be interesting,
> worthwhile and useful, but it can never be Scheme.

I don't really understand what tail-call optimization (the
compilation/interpretation technique) really has to do with Scheme (the
language). It is rather interesting that tail-call optimization is required
by a language specification that says pretty much nothing else about the
mechanics of how programs are to be executed, and nothing else about the
efficiency that programs should have. There is no guarantee whatsoever that
(let loop ((n 1)) (if (< n k) (loop (+ n 1)))) will not take O(e^k) time and
space. It is perhaps unfortunate that program performance can very rarely
be calculated by applying standard requirements, but that is a fact of life
for most languages. Different implementation techniques will make different
programs fast and slow, and any programmer engaged in time-space hacking
really needs to be at least reasonably familiar with the techniques of
his/her implementation.

It seems to me that tail-call optimization and Scheme are really connected
only historically (proper TCO was first written for Scheme). I am no
authority, but it seems to me that tail-call requirements should be removed
from the standard, and placed instead into a list of recommendations for
implementors, along with some suggestions on memory management and other
things not relating directly to the language itself.

Side note:
As Siskind has demonstrated, the tail-call requirements (while possibly
wonderful to the standard writers) are not universally appreciated by
implementors. Therefore, it seems odd that they are in the standard at all.

David Feuer

Jeffrey Siegal

unread,
Feb 11, 2002, 5:51:41 PM2/11/02
to
David Feuer wrote:
> There is no guarantee whatsoever that
> (let loop ((n 1)) (if (< n k) (loop (+ n 1)))) will not take O(e^k) time and
> space.

What about:

"1.1 ... This allows the execution of an iterative computation in
constant space, even if the iterative computation is described by a
syntactically recursive procedure"

or

"3.5 ... A Scheme implementation is properly tail-recursive if it
supports an unbounded number of active tail calls" (given that any real
implementation has a bounded amount of space available, this effectively
requires constant space)?

By the way, Stalin will execute the above code in constant space because
it is a self-call. Stalin only fails to implement tail calls in
constant space when they are not self-calls.
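
To make the self-call distinction concrete, the standard example of tail calls
that are not self-calls is mutual recursion; a minimal sketch (the names are
made up, not taken from Stalin's documentation):

;; Every call below is a tail call, but none is a self-call.  A fully
;; tail-call-safe implementation runs this in constant space; one that
;; only treats self-calls specially may grow without bound.
(define (my-even? n)
  (if (= n 0) #t (my-odd? (- n 1))))

(define (my-odd? n)
  (if (= n 0) #f (my-even? (- n 1))))

(my-even? 1000000)        ; => #t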

David Feuer

unread,
Feb 11, 2002, 6:07:33 PM2/11/02
to
My essential point holds despite possibly bad examples. There are no guarantees
on the time or space efficiency of cons, car, cdr, +, if, call/cc, eq?, set!,
or just about anything else, except in the way that they are informally required
to interact with tail-recursion requirements (note especially that program
execution may be interrupted at any time for garbage collection or anything
else). The tail-call requirement says that iterative computations can be
described recursively, but makes no specific requirements about when those
iterative computations must be efficient. For example, take the simple
factorial example:
(define (fact n)
  (define (f n acc)
    (if (= n 0) acc (f (- n 1) (* n acc))))
  (f n 1))

We are guaranteed that f will be called "tail-recursively", but we are not
guaranteed that multiplication, subtraction, or comparison will take constant
time or space, or that branching (if evaluation) will take constant time or
space.

While these will probably hold (approximately) in any reasonable implementation,
none of them are specified in the standard, as far as I know. I don't think the
standard even specifies any bounds on the time or space that may be required for
lambda abstraction! It therefore seems strange to me that the standard chooses
tail recursion as a "special case" that needs to be specified, and therefore
held above all other efficiency considerations.

David Feuer

unread,
Feb 11, 2002, 6:10:51 PM2/11/02
to
David Feuer wrote:

> My essential point holds despite possibly bad examples. There are no guarantees
> on the time or space efficiency of cons, car, cdr, +, if, call/cc, eq?, set!,
> or just about anything else, except in the way that they are informally required

Sorry to reply to my own post, but I just wanted to say that even function
application (in and of itself) is not guaranteed to take constant time. In some
implementations, calling captured continuations may not.

David Feuer

David Feuer

unread,
Feb 11, 2002, 6:14:30 PM2/11/02
to
Final (I hope) point: There are languages which are specifically designed to enable
efficiency guarantees, and whose efficiency guarantees are codified in the language
standard. These are generally called real-time languages. Scheme is not a real-time
language. It would certainly be possible to write a standard for Real-Time Scheme,
which would place efficiency requirements on real-time Scheme implementations,
including bounds on basic operations, specific limitations on GC time, and so on, but
R5RS is not such a standard. It only makes one guarantee, which is not nearly enough
for real-time programming.

David Feuer

Barry Margolin

unread,
Feb 11, 2002, 6:23:22 PM2/11/02
to
In article <3C684EB5...@brown.edu>,

David Feuer <David...@brown.edu> wrote:
>My essential point holds despite possibly bad examples. There are no guarantees
>on the time or space efficiency of cons, car, cdr, +, if, call/cc, eq?, set!,
>or just about anything else, except in the way that they are informally required
>to interact with tail-recursion requirements (note especially that program
>execution may be interrupted at any time for garbage collection or anything
>else). The tail-call requirement says that iterative computations can be
>described recursively, but makes no specific requirements about when those
>iterative computations must be efficient. For example, take the simple
>factorial example:
>(define (fact n)
>  (define (f n acc)
>    (if (= n 0) acc (f (- n 1) (* n acc))))
>  (f n 1))
>
>We are guaranteed that f will be called "tail-recursively", but we are not
>guaranteed that multiplication, subtraction, or comparison will take constant
>time or space, or that branching (if evaluation) will take constant time or
>space.

True. But the requirement for tail-call optimization allows you to make
correct predictions about the space efficiency of the parts of the program
that you *do* have control over.

Since Scheme doesn't have "goto" or "return" operators, function calls are
the only methods of non-local control transfer. In order for it to be able
to perform analogously to these primitives, tail-call optimization is
required. Otherwise, even invoking a continuation that's supposed to
return would actually push deeper in the stack, and you'd never get out!

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Jeffrey Siegal

unread,
Feb 11, 2002, 6:36:47 PM2/11/02
to
One important point is that the tail recursion requirement has never
been about time, so you can discard all of your comments about time and
focus entirely on space. For example, your comment about execution
being interrupted for GC is particularly irrelevant; GC is clearly one
way in which the space requirement of tail recursion can be met.

That said, there is probably some truth in what you wrote. There is a
certain amount of implicit understanding of space behavior on the Scheme
Reports. The explicit requirement for tail recursion was probably
inserted because that understanding is not implicit to implementors of
other languages (particularly non-Scheme Lisp-family languages). To be
more technically correct, it would probably be better to have explicit
discussion of space behavior elsewhere in the document as well.

Jeffrey Siegal

unread,
Feb 11, 2002, 6:39:17 PM2/11/02
to
Barry Margolin wrote:
> Otherwise, even invoking a continuation that's supposed to
> return would actually push deeper in the stack, and you'd never get out!

Except that Scheme doesn't have a "stack" as part of the language
(although some implementations may use one or more). Restating that to
say that invoking such a continuation would incur an incremental space
cost (creating an additional, useless, continuation) would be more
precisely correct.

David Feuer

unread,
Feb 11, 2002, 6:33:58 PM2/11/02
to
Barry Margolin wrote:

> True. But the requirement for tail-call optimization allows you to make
> correct predictions about the space efficiency of the parts of the program
> that you *do* have control over.
>
> Since Scheme doesn't have "goto" or "return" operators, function calls are
> the only methods of non-local control transfer. In order for it to be able
> to perform analogously to these primitives, tail-call optimization is
> required. Otherwise, even invoking a continuation that's supposed to
> return would actually push deeper in the stack, and you'd never get out!

So it's important. I agree with that. But that is not really the question. The
question is whether it belongs in the _language_ standard. Efficiency is not part
of the language (you don't see bizarre constructs like (declare foo
extra-fast-and-cheap)), and R5RS makes no other efficiency requirements (so, in
particular, it cannot be reasonably seen as a combined language and performance
standard). So I don't see why tail-call optimization is there. Tail-call
optimization is an implementation detail. Some implementations provide it, some
don't. These implementations may both be implementing the same language, but one
will be more efficient for certain tasks than for others.

P.S. If enough people would like a version of Stalin that supported full tail-call
optimization, would it be possible to add this to the compiler without significant
changes? If so, why haven't any of these people done so?

David Feuer

David Feuer

unread,
Feb 11, 2002, 6:44:21 PM2/11/02
to
Jeffrey Siegal wrote:

Correct. Sorry to confuse things. I guess my basic point was that the
language does not otherwise make any requirements about performance
generally, either in time or in space.

I would still argue that this does not belong in the language standard. A
separate document could give these implementation recommendations. The
Scheme community would then be more free (within the standard) to
experiment with different implementation strategies.

David Feuer

Jeffrey Siegal

unread,
Feb 11, 2002, 6:54:46 PM2/11/02
to
David Feuer wrote:
> So it's important. I agree with that. But that is not really the question. The
> question is whether it belongs in the _language_ standard. Efficiency is not part
> of the language (you don't see bizarre constructs like (declare foo
> extra-fast-and-cheap)), and R5RS makes no other efficiency requirements (so, in
> particular, it cannot be reasonably seen as a combined language and performance
> standard).

The tail call requirement is not and has never been about "efficiency."
There is no requirement that tail calls use a *small* amount of space
(or meet any other requirement for space- or time-efficiency in absolute
terms), only that the amount of space required be constant.

The point of the requirement is to promise to Scheme programmers
that if they are using a compliant implementation, their program will not
run out of space *because* it makes repeated tail calls. Whatever
*else* it does may or may not cause it to run out of space, but the
requirements on this in the Reports are *terribly* weak. For example,
"The reason that implementations of Scheme do not (usually!) run out of
storage is that they are permitted to reclaim the storage occupied by an
object if they can prove that the object cannot possibly matter to any
future computation." That doesn't even require a Scheme implementation
to implement GC at all! It can simply run out of storage after
allocating a limited number of objects, whether or not those objects remain
in use! (As long as it doesn't allocate objects in order to perform a
tail call.)

David Feuer

unread,
Feb 11, 2002, 7:10:14 PM2/11/02
to
Jeffrey Siegal wrote:

> The tail call requirement is not and has never been about "efficiency."
> There is no requirement that tail calls use a *small* amount of space
> (or meet any other requirement for space- or time-efficiency in absolute
> terms), only that the amount of space required be constant.

I guess this depends on what you mean by efficiency. As far as I know, space-efficient
can mean either "capable of running in the amount of space available" or "takes a small
(in some sense) amount of space". You seem to limit it to this second meaning.
Similarly, time-efficient can mean either "capable of meeting certain specific
deadlines" or "takes a small (in some sense) amount of time". Tail-call requirements
are for space-efficiency what real-time guarantees are for time-efficiency.
Neither of them, to my mind, is actually relevant to the _language_.

> The point of the requirement is to promise to Scheme programmers
> that if they are using a compliant implementation, their program will not
> run out of space *because* it makes repeated tail calls. Whatever
> *else* it does may or may not cause it to run out of space, but the
> requirements on this in the Reports are *terribly* weak. For example,
> "The reason that implementations of Scheme do not (usually!) run out of
> storage is that they are permitted to reclaim the storage occupied by an
> object if they can prove that the object cannot possibly matter to any
> future computation." That doesn't even require a Scheme implementation
> to implement GC at all! It can simply run out of storage after
> allocating a limited number of objects, whether or not those objects remain
> in use! (As long as it doesn't allocate objects in order to perform a
> tail call.)

Exactly. So any non-trivial program will depend for space-boundedness on
implementation issues not discussed in the standard. So it seems there are two
reasonable choices: add a whole bunch of space-efficiency requirements to the standard
(probably increasing the size of the standard a good deal), or remove the one
requirement from the standard and put it somewhere else that could hold the rest of the
requirements without breaking. I personally prefer the latter option, but nobody has
to care what I think. I think it would be reasonable to have a "Bounded-Space Scheme"
standard that would allow programmers to guarantee certain space behavior of their
programs, and a "Real-Time Scheme" standard that would do the same for time behavior.
Quite likely, most implementations would choose to support the bounded-space standard,
but not the real-time standard. A number of implementations could reasonably choose to
support neither of them, but still comply with a hypothetical R6RS.

David Feuer

Jeffrey Siegal

unread,
Feb 11, 2002, 7:40:57 PM2/11/02
to
David Feuer wrote:
> > The tail call requirement is not and has never been about "efficiency."
> > There is no requirement that tail calls use a *small* amount of space
> > (or meet any other requirement for space- or time-efficiency in absolute
> > terms), only that the amount of space required be constant.
>
> I guess this depends on what you mean by efficiency.

I think the most natural definition in this context is, "Acting or
producing effectively with a minimum of waste, expense, or unnecessary
effort" (from dictionary.com).

> > "The reason that implementations of Scheme do not (usually!) run out of
> > storage is that they are permitted to reclaim the storage occupied by an
> > object if they can prove that the object cannot possibly matter to any
> > future computation." That doesn't even require a Scheme implementation
> > to implement GC at all! It can simply run out of storage after
> > allocating a limited number of objects, whether or not those objects remain
> > in use! (As long as it doesn't allocate objects in order to perform a
> > tail call.)
>
> Exactly. So any non-trivial program will depend for space-boundedness on
> implementation issues not discussed in the standard.

I wonder if people would say that a Scheme that only allowed you to
make a fixed number of calls to object allocators and then failed
(assuming it implemented tail recursion properly) was a "normal Scheme"
or a "real Scheme." Technically according to R5RS, it would be.

I suspect, though, that most people would rather go in the direction of
specifying at least a few of the things which have always been widely if
not universally understood to be part of a "normal Scheme" (such as the
above not being the case) as opposed to going the other way and
weakening the language definition further.

David Rush

unread,
Feb 11, 2002, 8:01:10 PM2/11/02
to
Late at night jumping back into the middle of something that I
(apparently) started although the vagaries of Usenet propagation have
eliminated much of the recent context from my view...

David Feuer <David...@brown.edu> writes:
> Sorry to reply to my own post, but I just wanted to say that even function
> application (in and of itself) is not guaranteed to take constant
> time.

Indeed. The model in the DS suggests that varargs lambdas are likely to
take time linear in the number of optional arguments passed.
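
A rough source-level illustration of why that is plausible: a rest-argument
procedure receives its extra arguments as a freshly consed list, so the call
itself does work proportional to their number (how much of this a given
implementation optimizes away is another matter):

(define (sum . args)                 ; args is a fresh list built per call
  (let loop ((xs args) (acc 0))
    (if (null? xs)
        acc
        (loop (cdr xs) (+ acc (car xs))))))

(sum 1 2 3 4 5)                      ; => 15, after consing (1 2 3 4 5)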

david rush
--
There's man all over for you, blaming on his boots the faults of his feet.
-- Samuel Becket (Waiting For Godot)

Jeffrey Siegal

unread,
Feb 11, 2002, 8:11:46 PM2/11/02
to
David Rush wrote:
> Indeed. The model in the DS suggests that varargs lambdas are likely to
> take time linear in the number of optional arguments passed.

All procedure calls tend to be linear in the number of arguments,
optional or otherwise, because they have to do *something* to each of
the arguments. This may not be apparent in the DS, but I wouldn't look
to the DS for any indication of cost.

David Feuer

unread,
Feb 11, 2002, 8:06:12 PM2/11/02
to
Jeffrey Siegal wrote:

> I think the most natural definition in this context is, "Acting or
> producing effectively with a minimum of waste, expense, or unnecessary
> effort" (from dictionary.com).

OK. It doesn't matter what you call it. I was classifying space boundedness
under the heading of space efficiency, but if you like, they can be classed
together under "good space behavior".

> I wonder if people would say that a Scheme that only allowed you to
> make a fixed number of calls to object allocators and then failed
> (assuming it implemented tail recursion properly) was a "normal Scheme"
> or a "real Scheme." Technically according to R5RS, it would be.

Sounds like a simple real-time scheme to me... This behavior may be perfectly
acceptable if your deadline comes before you run out of memory. Not a "normal
scheme", no, but possibly a useful one for very small real-time apps.

> I suspect, though, that most people would rather go in the direction of
> specifying at least a few of the things which have always been widely if
> not universally understood to be part of a "normal Scheme" (such as the
> above not being the case) as opposed to going the other way and
> weakening the langauge definition further.

Stalin is not the only compiler that does not provide full tail-call
optimization. So I don't know if this is quite universal.

David Feuer

Doug Quale

unread,
Feb 11, 2002, 8:19:10 PM2/11/02
to
David Feuer <David...@brown.edu> writes:

> Doug Quale wrote:
>
> > Tail call optimization (TCO) has been a part of Scheme from the very
> > beginning, over 25 years now. Tail call optimization is not an
> > example of "some of the features of R[45]RS [that] might be
> > inefficient or mistaken". A language without TCO can be interesting,
> > worthwhile and useful, but it can never be Scheme.
>
> I don't really understand what tail-call optimization (the
> compilation/interpretation technique) really has to do with Scheme (the
> language).

As you suspect, you have to look at the history of Scheme to answer
this question. In 1975 Guy Steele and Gerald Sussman were designing
and writing some experimental lisp interpreters and compilers. Their
experiments used MacLisp as a base. Compared to MacLisp, the biggest
innovation in Scheme was lexical scoping. Lexical scope was in wide
use outside the lisp world at that time but hard as it is to believe
today, lexical scope was still considered risky and unproven in the
lisp world. Scheme was not the first lexically scoped lisp (someone
may be able to help here, perhaps MDL?), but it was the first to gain
widespread attention.

The second major innovation in Scheme was the elimination of a MacLisp
abomination that I believe was called a FEXPR. This was a macro-like
"function" which didn't evaluate its arguments. In addition to being
ugly, FEXPRs are hard (impossible) to compile efficiently. Steele
realized that tail call optimization along with lexical scoping would
allow macros to do the same job, and when implemented properly could
be very efficient. In fact Steele realized that almost all syntax
could be replaced by macros efficiently if the compiler were written
correctly. That's how Scheme was born. (It's ironic. Although Steele
intended that macros would be in common use to extend Scheme syntax,
macros fell into disfavor with many in the Scheme crowd early on.
Macros did not reappear in the standard until R4RS put them in an
optional appendix in 1991.)

Historically Scheme was actually designed with efficient compilation
in mind, and that's one reason why tail call optimization was
required.

See:
Guy Lewis Steele, Jr.. "Debunking the 'Expensive Procedure Call' Myth,
or, Procedure Call Implementations Considered Harmful, or, Lambda: The
Ultimate GOTO". MIT AI Lab. AI Lab Memo AIM-443. October 1977

Better yet go to http://library.readscheme.org/page1.html and read all
of the lambda papers on that page. Scheme's recorded history goes
back to 1975. In 1975 I was still in high school so I didn't see
these papers until a few years later when I was in college. Still, to
someone who was programming in Pascal at the time, "The Art of the
Interpreter, or the Modularity Complex (Parts Zero, One, and Two)"
made a tremendous impression on me. Scheme changed the way that I
thought about programming and computing forever.

> It is rather interesting that tail-call optimization is required
> by a language specification that says pretty much nothing else about the
> mechanics of how programs are to be executed, and nothing else about the
> efficiency that programs should have. There is no guarantee whatsoever that
> (let loop ((n 1)) (if (< n k) (loop (+ n 1)))) will not take O(e^k) time and
> space.

Nope. As was pointed out by someone else, the Scheme standard does
require O(1) space for that loop. In fact it must -- Scheme has no
iteration constructs other than recursion. Even the DO loop is
defined as a recursion. There is simply no way in general to write a
reliable program in Scheme if tail call optimization is not
guaranteed.
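
For readers who haven't looked at the report's definition of DO, the
correspondence is the usual one between a DO loop and a tail-recursive named
LET; a hand-expanded sketch (not the report's exact derivation):

(do ((i 0 (+ i 1))
     (acc 0 (+ acc i)))
    ((= i 10) acc))                ; => 45

;; behaves like

(let loop ((i 0) (acc 0))
  (if (= i 10)
      acc
      (loop (+ i 1) (+ acc i))))   ; => 45, and the call to loop is a tail call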

> It is perhaps unfortunate that program performance can very rarely
> be calculated by applying standard requirements, but that is a fact of life
> for most languages. Different implementation techniques will make different
> programs fast and slow, and any programmer engaged in time-space hacking
> really needs to be at least reasonably familiar with the techniques of
> his/her implementation.

I'm not sure about this. Would a C implementation be allowed to use
exponential space for

for (n = 1; n < k; ++n)
    ;

Perhaps the C standard allows it, but any such implementation would
rightly be considered broken. Looping is a fundamental programming
concept and any general purpose language needs loops that are O(1)
space. Any programmer rightly considers constant space/linear time
loops to be part of the contract he has with the language and its
implementation.

> It seems to me that tail-call optimization and Scheme are really connected
> only historically (proper TCO was first written for Scheme). I am no
> authority, but it seems to me that tail-call requirements should be removed
> from the standard, and placed instead into a list of recommendations for
> implementors, along with some suggestions on memory management and other
> things not relating directly to the language itself.

You can do this only if you are willing to change the definition of
the DO construct to guarantee constant space and linear time or add
another iteration construct with those guarantees. This is perfectly
legitimate. Common Lisp does it, for instance.

> Side note:
> As Siskind has demonstrated, the tail-call requirements (while possibly
> wonderful to the standard writers) are not universally appreciated by
> implementors. Therefore, it seems odd that they are in the standard at all.

Implementors who don't like TCO generally are using C as an
intermediate language. (This also leads to the unfortunate temptation
to ignore numeric overflow, something else I consider to be a
serious drawback.) The only major Scheme implementations that do not
guarantee TCO that I am familiar with are Bigloo and Stalin.

TCO is quite possible if you use C as an intermediate language but
there are tradeoffs involved. Gambit and chicken are two Scheme
compilers that compile to C but do full TCO. There was a version of
SML/NJ that compiled to C and did full TCO. Anyone who chooses not to
implement TCO is perfectly free to do so. The resulting language may
look a lot like Scheme, but it won't be Scheme.
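
One well-known way to get general tail calls when compiling through C is a
trampoline; at the Scheme level the idea looks roughly like this (purely an
illustration of the idea; Gambit and Chicken each use their own, more elaborate
techniques):

(define (bounce x)                        ; driver loop
  (if (procedure? x) (bounce (x)) x))

(define (count-down n)
  (if (= n 0)
      'done
      (lambda () (count-down (- n 1)))))  ; return the tail call as a thunk

;; Works here only because ordinary results are never procedures.
(bounce (count-down 1000000))             ; => done, one "C frame" at a time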

In Scheme, proper implementation of tail recursion is non-negotiable.


--
Doug Quale

Matthias Blume

unread,
Feb 11, 2002, 8:57:11 PM2/11/02
to
David Feuer wrote:
> Doug Quale wrote:
>
>
>>Tail call optimization (TCO) has been a part of Scheme from the very
>>beginning, over 25 years now. Tail call optimization is not an
>>example of "some of the features of R[45]RS [that] might be
>>inefficient or mistaken". A language without TCO can be interesting,
>>worthwhile and useful, but it can never be Scheme.
>>
>
> I don't really understand what tail-call optimization (the
> compilation/interpretation technique) really has to do with Scheme (the
> language). It is rather interesting that tail-call optimization is required
> by a language specification that says pretty much nothing else about the
> mechanics of how programs are to be executed, and nothing else about the
> efficiency that programs should have. There is no guarantee whatsoever that
> (let loop ((n 1)) (if (< n k) (loop (+ n 1)))) will not take O(e^k) time and
> space. It is perhaps unfortunate that program performance can very rarely
> be calculated by applying standard requirements, but that is a fact of life
> for most languages. Different implementation techniques will make different
> programs fast and slow, and any programmer engaged in time-space hacking
> really needs to be at least reasonably familiar with the techniques of
> his/her implementation.

This is an old argument -- and an infamous one in which I have been heavily
involved myself. Basically, the TR requirement wants to make sure that
certain programs don't run out of space because of stack overflow. I have
argued in the past (and so have others such as Andrew Appel) that this
requirement -- when stated in isolation -- is a bit silly because nowhere is there
any indication of which language constructs are allowed to use up how much
space -- stack space or otherwise. We all have an *intuitive* understanding
of what's meant by the requirement, but formally the requirement is incomplete
and therefore meaningless.

In the meantime, I believe, most authors have accepted this point of view --
at least to some extent (correct me if I am wrong), and there has been work by
Will Clinger et al. that states much more precisely how space is being consumed
by a Scheme program. With this background one can then state requirements such
as the TR requirement. R5RS (I think -- I don't have it handy right now) at least
refers to Clinger's work, and it also states much more precisely than has been done
in RnRS (n <= 4) what a tail call is. (The latter is the easiest part because there
is a simple syntactic characterization.)

Matthias


Matthias Blume

unread,
Feb 11, 2002, 9:05:18 PM2/11/02
to

Sorry, this is really re-warming old, old stuff, but you are wrong:
The Scheme "standard" as it has been written until now does not say, for
example, how much space will be used up by, e.g., + or <. Thus, the above example
might run out of space (and in an implementation that heap-allocates the
stack, doing so will be indistinguishable from running out of stack space).

As a matter of fact, the following loop *will* run out of space in any
Scheme implementation that uses arbitrary-precision integers:

(let loop ((n 1)) (if (something-that-is-always-true) (loop (+ n 1)) #f))

... it will just do so very slowly. :-)

Matthias


Rob Warnock

unread,
Feb 11, 2002, 9:16:40 PM2/11/02
to
Doug Quale <qua...@charter.net> wrote:
+---------------

| David Feuer <David...@brown.edu> writes:
| > I don't really understand what tail-call optimization (the
| > compilation/interpretation technique) really has to do with Scheme (the
| > language).
|
| As you suspect, you have to look at the history of Scheme to answer
| this question. In 1975 Guy Steele and Gerald Sussman were designing
| and writing some experimental lisp interpreters and compilers...
...

| Better yet go to http://library.readscheme.org/page1.html and read all
| of the lambda papers on that page. Scheme's recorded history goes
| back to 1975. In 1975 I was still in high school so I didn't see
| these papers until a few years later when I was in college. Still, to
| someone who was programming in Pascal at the time, "The Art of the
| Interpreter, or the Modularity Complex (Parts Zero, One, and Two)"
| made a tremendous impression on me. Scheme changed the way that I
| thought about programming and computing forever.
+---------------

In particular, in those very papers it is stated explicitly that
tail-call optimization was what allowed the merger of the ideas
of "actors" and "procedures". [Early Scheme was largely about
experimenting with "actors".] *If* (and only if) procedure calls
in tail position were equivalent to "jump-with-arguments", then
"actors" (stateful objects that were sent messages) and "procedures"
turned out to be *identical* in the implementation of that early
interpreter. At which point, Steele realized he could get rid of
a separate "actor" object type, and just use (TCO-safe) procedures.

+---------------


| Anyone who chooses not to implement TCO is perfectly free to do so.
| The resulting language may look a lot like Scheme, but it won't be Scheme.
| In Scheme, proper implementation of tail recursion is non-negotiable.

+---------------

Exactly so. IMHO.

An example of "actor-like" procedures is a long-running finite-state
machine written as one procedure per state, somewhat in this style:

(define (stateXXX entry-event)
  (begin ...do stuff we always do when entering this state,
         possibly conditioned by "entry-event"...)
  (let ((x (get-next-event)))
    (case x
      ((this)
       ...do something [perhaps output?]...
       (stateYYY x))          ; switch to state YYY
      ((that)
       ...do something...
       (stateZZZ x))          ; switch to state ZZZ
      ((the other things)
       ...do something with one of {the,other,things}...
       (stateWWW x))          ; switch to state WWW
      (else
       ...maybe output, maybe not...
       (stateXXX x)))))       ; loop in state XXX

(define (stateYYY entry-event) ...whatever...)
(define (stateZZZ entry-event) ...whatever...)
(define (stateWWW entry-event) ...whatever...)

You can view the "entry-event" as a "message" "sent" to the next-state
"actor", or as a parameter to a tail-called procedure, your choice.
*Because* Scheme guarantees TCO, the above will not run out of space,
no matter how long the state machine runs [good plan, if the state
machine is, say, an operating system!!].


-Rob

-----
Rob Warnock, 30-3-510 <rp...@sgi.com>
SGI Network Engineering <http://www.meer.net/~rpw3/>
1600 Amphitheatre Pkwy. Phone: 650-933-1673
Mountain View, CA 94043 PP-ASEL-IA

Jeffrey Siegal

unread,
Feb 11, 2002, 9:24:51 PM2/11/02
to
Rob Warnock wrote:
> You can view the "entry-event" as a "message" "sent" to the next-state
> "actor", or as a parameter to a tail-called procedure, your choice.
> *Because* Scheme guarantees TCO, the above will not run out of space,
> no matter how long the state machine runs [good plan, if the state
> machine is, say, an operating system!!].

More precisely, the tail calls won't cause it to run out of space. It
still might well run out of space for some other reason.

Jeffrey Siegal

unread,
Feb 11, 2002, 9:30:06 PM2/11/02
to
Matthias Blume wrote:
> As a matter of fact, the following loop *will* run out of space in any
> Scheme implementation that uses arbitrary-precision integers:
>
> (let loop ((n 1)) (if (something-that-is-always-true) (loop (+ n 1)) #f))
>
> ... it will just do so very slowly. :-)

I think it is reasonable to conceptually decompose computations such as
this into two parts

1. Loop
2. Stuff inside loop

The "Stuff inside loop" can always, potentially, cause the system to run
out of space. It could run out of space on the very first iteration,
the tenth, or some other time.

The "Loop" part should never cause the system to run out of space. As
long as the "Stuff inside loop" behaves itself, the loop can continue
indefinitely. This would not be the case without the tail call
requirement.

In other words, the tail call requirement doesn't allow you to say that
an entire program will not run out of space, but it allows you to make
the conditional statement that the program will not run out of space as long
as the rest of the program (other than the tail calls) does not cause it to.

David Rush

unread,
Feb 11, 2002, 9:36:23 PM2/11/02
to

D'oh. It's very late here. I *knew* I shouldn't have posted.

david rush
--
Property is theft.
-- What is Property? (Pierre-Joseph Proudhon)

Doug Quale

unread,
Feb 11, 2002, 10:07:50 PM2/11/02
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> Doug Quale wrote:
> >
> >>(let loop ((n 1)) (if (< n k) (loop (+ n 1)))) will not take O(e^k) time and
> >>space.
> >>
> > Nope. As was pointed out by someone else, the Scheme standard does
> > require O(1) space for that loop.
>
> Sorry, this is really re-warming old, old stuff, but you are wrong:
> The Scheme "standard" as it has been written until now does not say,
> for example, how much space will be used up by, e.g., + or <. Thus,
> the above example might run out of space (and in an implementation
> that heap-allocates the stack, doing so will be indistinguishable
> from running out of stack space).
>
> As a matter of fact, the following loop *will* run out of space in any
> Scheme implementation that uses arbitrary-precision integers:
>
> (let loop ((n 1)) (if (something-that-is-always-true) (loop (+ n 1)) #f))
>
> ... it will just do so very slowly. :-)

True, but only in a lawyerly way. I should simply have said that
Scheme requires that

(let loop () (loop))

take constant space. As was pointed out by someone else, the standard
doesn't say what that constant space is, so a conforming
implementation could use (for instance) a thousand words or a billion
words for this call but it must be a constant amount.

Kent Pitman has pointed out in a different context that a conforming
implementation could be as simple as

(display "Out of space!")

for all programs. This is true, but I don't think most programmers
are very worried about this possibility, nor do they have to be.

--
Doug Quale

Matthias Blume

unread,
Feb 11, 2002, 10:56:32 PM2/11/02
to

This is all true, of course. The problem is that in order to make this
precise (and a language definition ought to), one must say very precisely
what it is that makes a loop a loop, and what parts of the computation are
merely "stuff inside the loop". Moreover, in practical terms, just posing
requirements on the loop part is meaningless if the "stuff inside the loop"
part can arbitrarily run out of space. As far as observable behavior is
concerned, running out of space is running out of space.
To avoid further wars of words on this, let me point out that we are basically
in agreement -- except I am taking a bit more of a language lawyerly stance
here. (Well, it is every language implementer's job to be very pedantic with
the language definition.)

Matthias

Jeffrey Siegal

unread,
Feb 11, 2002, 11:09:17 PM2/11/02
to
Matthias Blume wrote:
> This is all true, of course. The problem is that in order to make this
> precise (and a language definition ought to), one must say very precisely
> what it is that makes a loop a loop, and what parts of the computation are
> merely "stuff inside the loop".

The report does say (or require by inference) that the tail calls
themselves must take constant space. So to the extent that you build a
loop out of only tail calls, the loop so built must take constant
space. Other than that, all bets are off.

> To avoid further wars of words on this, let me point out that we are basically
> in agreement

I agree.

Richard Uhtenwoldt

unread,
Feb 12, 2002, 2:28:30 AM2/12/02
to
The argument that a language that has feature X is better because you
never know when you might need X is flawed. You see that, right? A
large part of expertise in most fields is knowing what *not* to do. A
language that does not let inexperienced programmers do things that
experts have learned never to do is superior to one that has more
features and thereby gives the programmer more "freedom".

(This is a special case of the more general observation that
restrictions are more important than features in computer science,
but that's an article for another day.)

So, for example, the fact that call/cc can be used to implement
virtually any control construct is not by itself a compelling argument
for having it because it could be the case that you only need a fixed
set of control constructs with significantly less generality than what
call/cc gives you --and that anything more general just gives the inexperienced
programmer rope he will tend to use to hang himself more often than to
win.

(People pressed for time can skip to the last paragraph now.)

Given that most of the world's programs are written in languages that do
not even have anonymous functions (defined for the purpose of this note
as functions you can refer to without naming them), of course a
programmer does not *need* dynamic-wind: Scheme without it is Turing
complete after all. Does the benefit outweigh the cost in
implementation overhead and implementor effort and more importantly
education and intelligence required of the readers of code with the
dynamic-wind in it? That's the question.

On the one hand, dynamic-wind is hard to learn unless you have really
good teachers, compared to most constructs in most languages. On the
other hand, if it allows a clean programming solution in relatively few
lines of source code, the resultant increase in
code-understandability and -brevity might be worth the high learning
barrier.

Another good question is, if your code has dynamic-wind in it, what
fraction of the world's population of programmers will be willing and
able to understand your code?

I think the same questions could be asked of call/cc.
Most programmers can easily grasp downward continuations,
which do not have any properties exceptions do not have,
but upward continuations require major learning effort.
"Well, students need to learn it because that's part of
computing reality," some of you are saying. No, it becomes
part of computing reality only if enough people decide it will be.
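
A sketch of the "downward" case, for reference: the captured continuation is
only invoked while the capturing call is still active, so it reads like an
exception-style early exit (the procedure names here are made up):

(define (find-first pred lst)
  (call-with-current-continuation
   (lambda (return)                       ; return escapes find-first early
     (for-each (lambda (x)
                 (if (pred x) (return x)))
               lst)
     #f)))

(find-first even? '(1 3 4 7))             ; => 4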

Call/cc and dynamic-wind need to be able to make some set of useful
programs much simpler and shorter if they are to justify their cost in
requirements for programmer education. Where are these examples of
useful programs made much simpler by call/cc and dynamic-wind? I've
seen the implementation of coroutines via call/cc in Friedman et al's
EOPL and am not impressed because (1) coroutines have been added to, eg,
C and one of the Modulas as primitives --works fine-- and (2) because
the damned code is really hard to grok. How many people could get that
code right without a lot of edit and test cycles?
Very few I wager. Code that you cannot understand by sitting
and staring at it like you would a math proof but rather you have
to edit and test repeatedly. Sounds like Perl!

I suppose there are people who can grok that code just by sitting and
staring at it. It has been said that young academics are attracted to
research areas and problems where they can show how clever they are. I
sometimes think that that explains a large part of the attraction to
call/cc and dynamic-wind.

--
Richard Uhtenwoldt

felix

unread,
Feb 12, 2002, 6:14:05 AM2/12/02
to

Matthias Blume wrote in message <3C67F238...@shimizu-blume.com>...
>
>Paraphrasing your own words: If it is so useful (I am speaking of
>fluid-let), then why not provide it directly, why force the user
>to implement it herself? After all, we might be able to provide
>it in a more efficient fashion!

>


Correct. But "dynamic-wind" is so general (some say it's too general),
that it gives me ways to implement *numerous* constructs in a portable
manner.

I'm not claiming that these constructs are universally The Right Thing.

Contrary to you (if I understand correctly) I don't think I can anticipate
every possible practical application of "dynamic-wind".
Once in a while I come upon a situation where it gives me exactly what
I need.

>Ok, we've been around this circle for the fifth time, we have
>gotten to the stage of personal attacks, so it is high time to quit...

Personal attacks??? Geez, and I haven't even really tried! ;-)


felix

felix

unread,
Feb 12, 2002, 6:14:10 AM2/12/02
to

Jeffrey Siegal wrote in message <3C67ED06...@quiotix.com>...
>felix wrote:
>> And TR is such a fundamental feature of Scheme that I take it for granted.
>
>Given that, it seems likely that you'd want to implement fluid-let
>(assuming you want it at all) in a tail-recursive manner. Which means
>not using dynamic-wind to implement it.

A reasonable point of view. When I use "fluid-let" (using "dynamic-wind")
then I'm aware that it's not tail-recursive, but that is an exception that I can
live with. Normal procedure calls are a different matter (IMHO).


felix


felix

unread,
Feb 12, 2002, 6:45:52 AM2/12/02
to

Richard Uhtenwoldt wrote in message ...

>The argument that a language that has feature X is better because you
>never know when you might needed X is flawed. You see that, right? A
>large part of expertise in most fields is knowing what *not* to do.

Right. But a language that has a feature X that is used in several interesting
ways loses expressiveness when that feature is removed (IMHO).

If X can be implemented on top of that language, then it is probably
even better (and I basically agree with Matthias Blume here - strangely
enough).
What I find surprising is that the opponents of "dynamic-wind" seem to
give reasons for its brokenness that are mostly based on aesthetic,
purist and highly academic grounds.

The only practical issue I have seen so far is that multithreading
can't be implemented with R5RS call/cc. OK, but it doesn't give you
timer interrupts either.

I also haven't experienced any earth-shaking loss of efficiency, yet.
Perhaps someone can post some code?


felix


Matthias Blume

unread,
Feb 12, 2002, 7:30:14 AM2/12/02
to
Jeffrey Siegal wrote:
> Matthias Blume wrote:
>
>>This is all true, of course. The problem is that in order to make this
>>precise (and a language definition ought to), one must say very precisely
>>what it is that makes a loop a loop, and what parts of the computation are
>>merely "stuff inside the loop".
>>
>
> The report does say (or require by inference) that the tail calls
> themselves must take constant space. So to the extent that you build a
> loop out of only tail calls, the loop so built must take constant
> space. Other than that, all bets are off.

If your program consists of *nothing* but tail calls, then it must run in
constant space, I agree. But not many useful programs are of that nature. :-)

Matthias


Matthias Blume

unread,
Feb 12, 2002, 7:39:53 AM2/12/02
to
felix wrote:
> Richard Uhtenwoldt wrote in message ...
>
>>The argument that a language that has feature X is better because you
>>never know when you might needed X is flawed. You see that, right? A
>>large part of expertise in most fields is knowing what *not* to do.
>>
>
> Right. But a language that has a feature X that is used in several interesting
> ways loses expressiveness when that feature is removed (IMHO).

Not necessarily. Lisp programmers have used (and use) dynamic scope in
many different interesting ways. It still was removed when Scheme was
created.

In particular, it is possible that a language feature X -- even though being
used in interesting ways -- interferes with other language features that
are used in even more interesting ways. Removing X from the language thus
has a net positive effect on expressiveness.

> What I find surprising is that the opponents of "dynamic-wind" seem to
> give reasons for its brokenness that are mostly based on aesthetic,
> purist and highly academic grounds.

No, it is precisely a case of the above: interference with other language features
(original call/cc, higher-order programming, ...).

> The only practical issue I have seen so far is that multithreading
> can't be implemented with R5RS call/cc. OK, but it doesn't give you
> timer interrupts either.

You don't need timer interrupts to see the problem. A fine-grained cooperatively
multithreaded program will suffer the same problem.

Matthias

Doug Quale

unread,
Feb 12, 2002, 1:13:44 PM2/12/02
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> If your program consists of *nothing* but tail calls, then it must
> run in constant space, I agree. But not many useful programs are of
> that nature. :-)

Write it in CPS. Then every call is a tail call. ;-)
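
(For anyone following along, a CPS factorial looks like this; note, as is
pointed out further down the thread, that the continuation closures still
consume space even though the control-transferring calls are all tail calls.)

(define (fact-cps n k)
  (if (= n 0)
      (k 1)
      (fact-cps (- n 1)
                (lambda (r) (k (* n r))))))   ; builds a closure per step

(fact-cps 5 (lambda (x) x))                   ; => 120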

Barry Margolin

unread,
Feb 12, 2002, 2:11:21 PM2/12/02
to
In article <3C685755...@brown.edu>,

David Feuer <David...@brown.edu> wrote:
>I would still argue that this does not belong in the language standard. A
>separate document could give these implementation recommendations.

This was one of the fundamental issues that led to the original design of
Scheme; read all the "Lambda -- the Ultimate XXX" papers. Would it really
be appropriate to relegate this to a separate "implementation
recommendation" document?

It's critical to the design of Scheme that a tail-recursive algorithm can
be known not to ever result in a stack overflow -- its stack use is
required to be equivalent to the corresponding iterative algorithm. You
may not know how much space an individual stack frame takes up, but you can
know that it will not multiply based on the repeated invocations of the
function.

This traditional version of factorial is permitted to die due to stack
overflow:

(define (fact-recursive n)
  (if (= n 0)
      1
      (* n (fact-recursive (- n 1)))))

This tail-recursive version is not:

(define (fact-tail-recursive n)
  (define (recurse-internal n product)
    (if (= n 0)
        product
        (recurse-internal (- n 1) (* n product))))
  (recurse-internal n 1))

Programmers must be able to depend on this, otherwise they might have to
use languages that have GOTO or FOR-loops in order to avoid problems with
infinite recursion.

If you didn't have tail-call optimization, how would you write an infinite
loop in Scheme, i.e. the equivalent of C's 'while(1) ...'?
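
(For reference, the Scheme rendering of while(1) is simply a procedure that
tail-calls itself; handle-one-request below is a made-up stand-in for the loop
body.)

(define (server-loop)
  (handle-one-request)          ; not a tail call
  (server-loop))                ; tail call: with TCO this runs forever
                                ; in constant space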

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Barry Margolin

unread,
Feb 12, 2002, 2:19:06 PM2/12/02
to
In article <3C6854E6...@brown.edu>,
David Feuer <David...@brown.edu> wrote:
>Barry Margolin wrote:
>
>> True. But the requirement for tail-call optimization allows you to make
>> correct predictions about the space efficiency of the parts of the program
>> that you *do* have control over.
>>
>> Since Scheme doesn't have "goto" or "return" operators, function calls are
>> the only methods of non-local control transfer. In order for it to be able
>> to perform analogously to these primitives, tail-call optimization is
>> required. Otherwise, even invoking a continuation that's supposed to
>> return would actually push deeper in the stack, and you'd never get out!

>
>So it's important. I agree with that. But that is not really the question. The
>question is whether it belongs in the _language_ standard.

Yes. Without it, the language is incomplete. At the very least you would
need to add a GOTO or iteration statement to the language so that you could
accomplish what tail-recursion provides.

Barry Margolin

unread,
Feb 12, 2002, 2:23:09 PM2/12/02
to
In article <3C685625...@quiotix.com>,
Jeffrey Siegal <j...@quiotix.com> wrote:

>Barry Margolin wrote:
>> Otherwise, even invoking a continuation that's supposed to
>> return would actually push deeper in the stack, and you'd never get out!
>
>Except that Scheme doesn't have a "stack" as part of the language
>(although some implementations may use one or more). Restating that to
>say that invoking such a continuation would incur an incremental space
>cost (creating an additional, useless, continuation) would be more
>precisely correct.

Call it whatever you want. Stack doesn't have to be a linear piece of
memory, it's any conceptual data structure that retains information about
all the pending calls. It doesn't matter if it's in the heap or in a
special area of memory.

Tail-call optimization simply means that when a tail-call occurs, no return
information needs to be retained -- you can use the information from the
call that was already in progress (since you're just going to return to a
return).
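
Put in source terms (a stock example, not taken from any post above): the first
recursive call below must retain return information because the + is still
pending; the second need not.

(define (len lst)                          ; needs a pending frame per call:
  (if (null? lst)
      0
      (+ 1 (len (cdr lst)))))              ; must come back to do the +

(define (len-iter lst acc)                 ; nothing pending after the call:
  (if (null? lst)
      acc
      (len-iter (cdr lst) (+ acc 1))))     ; tail call, no return info kept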

Sander Vesik

unread,
Feb 12, 2002, 2:44:52 PM2/12/02
to
Jeffrey Siegal <j...@quiotix.com> wrote:
>
> The point of the requirement is to promise to Scheme programmers
> that if they are using a compliant implementation, their program will not
> run out of space *because* it makes repeated tail calls. Whatever
> *else* it does may or may not cause it to run out of space, but the
> requirements on this in the Reports are *terribly* weak. For example,
> "The reason that implementations of Scheme do not (usually!) run out of
> storage is that they are permitted to reclaim the storage occupied by an
> object if they can prove that the object cannot possibly matter to any
> future computation." That doesn't even require a Scheme implementation
> to implement GC at all! It can simply run out of storage after
> allocating a limited number of objects, whether or not those objects remain
> in use! (As long as it doesn't allocate objects in order to perform a
> tail call.)

OTOH, there are a lot of people who associate "proper" GC (and not just, say,
ref counting) with a "proper" implementation of Scheme, largely just like
for tail calls.

--
Sander

+++ Out of cheese error +++

Sander Vesik

unread,
Feb 12, 2002, 3:05:44 PM2/12/02
to
David Feuer <David...@brown.edu> wrote:
> Jeffrey Siegal wrote:
>
>> One important point is that the tail recursion requirement has never
>> been about time, so you can discard all of your comments about time and
>> focus entirely on space. For example, your comment about execution
>> being interrupted for GC is particularly irrelevant; GC is clearly one
>> way in which the space requirement of tail recursion can be met.
>>
>> That said, there is probably some truth in what you wrote. There is a
>> certain amount of implicit understanding of space behavior on the Scheme
>> Reports. The explicit requirement for tail recursion was probably
>> inserted because that understanding is not implicit to implementors of
>> other languages (particularly non-Scheme Lisp-family languages). To be
>> more technically correct, it would probably be better to have explicit
>> discussion of space behavior elsewhere in the document as well.
>
> Correct. Sorry to confuse things. I guess my basic point was that the
> language does not otherwise make any requirements about performance
> generally, either in time or in space.

>
> I would still argue that this does not belong in the language standard. A
> separate document could give these implementation recommendations. The
> Scheme community would then be more free (within the standard) to
> experiment with different implementation strategies.

I guess a problem here might be that at the very least, self tail-recursion
*is* required? After all, if you don't have self tail-recursion, the
practice of using recursion to replace loops would suddenly become a very
questionable behaviour.

>
> David Feuer

Matthias Blume

unread,
Feb 12, 2002, 3:21:52 PM2/12/02
to

You misunderstood:

What I was (jokingly) talking about were programs that have *only* tail
calls: no closure creation, no arithmetic, no list consing, no I/O, ...

(A CPS-converted program will run out of space just like its DS counterpart
because in general it will contain LAMBDA expressions that will consume
space due to the closures that are allocated on their behalf.)

--
-Matthias

Jeffrey Siegal

unread,
Feb 12, 2002, 4:33:53 PM2/12/02
to
Sander Vesik wrote:
> I guess a problem here might be that at the very least, self tail-recursion
> *is* required? After all, if you don't have self tail-recursion, the
> practice of using recursion to replace loops would suddenly become a very
> questionable behaviour.

How would it be any more questionable than allocating memory in a
language which doesn't promise that any memory allocated will *ever* be
reclaimed?

While I disagree with Mr. Feuer's suggestion that the tail call
requirement be removed, I do agree with his assessment that the Scheme
reports are in a very strange place when it comes to making space
requirements about tail calls but nothing else.


Jeffrey Siegal

unread,
Feb 12, 2002, 4:35:07 PM2/12/02
to
Sander Vesik wrote:
> OTOH, there are a lot of people who associate "proper" GC (and not just say
> ref counting) with a "proper" implementation of scheme, largely just like
> for tail-calls.

Except that GC is not like tail calls because it isn't required by the
report. Perhaps it should be; that's a different issue.

Barry Margolin

unread,
Feb 12, 2002, 5:21:56 PM2/12/02
to
In article <3C698A4...@quiotix.com>,

This is a good point. The report mentions that most implementations
perform garbage collection, but doesn't require it. The lack of an
explicit "free" operation can be seen as strongly implying that GC is
necessary. But by the same token, the lack of "goto", "for", "while", and
"return" could be interpreted as strongly implying tail-call optimization
is necessary, without actually having to mandate it.

I think they did it this way for historical reasons. It's generally
expected that any language that has dynamic allocation either provides
manual deallocation or performs automatic garbage collection. So it didn't
really need to be mandated -- any implementor knows that consing is taking
place continuously, and if they don't GC they'll run out of memory very
quickly. You could create an implementation of Scheme that doesn't GC, but
it would be considered a toy, only suitable for simple demonstrations.

Tail-call optimization, on the other hand, was not a common thing. Scheme
was one of the first languages to need it (and is probably still one of
very few). If left to their own devices, implementors probably wouldn't do
it (some Common Lisp implementations do it in limited cases, such as
self-calls and/or only when certain optimization settings are enabled). As
this was one of the things that distinguished Scheme from other languages
in the Lisp family (it was pretty much the reason Scheme was invented), it
was felt important enough to set in stone in the standard.

David Feuer

unread,
Feb 12, 2002, 7:00:54 PM2/12/02
to
Barry Margolin wrote:

> This is a good point. The report mentions that most implementations
> perform garbage collection, but doesn't require it. The lack of an
> explicit "free" operation can be seen as strongly implying that GC is
> necessary. But by the same token, the lack of "goto", "for", "while", and
> "return" could be interpreted as strongly implying tail-call optimization
> is necessary, without actually having to mandate it.

Agreed. However, one could imagine a non-toy Scheme implementation that did
not use tracing garbage collection, but did provide explicit memory
management primitives (cf. ML Kit). It would similarly be possible to
imagine a non-toy Scheme implementation that did not support tail calls as
required by R5RS, but did support other control operators to enable
efficient loops. I don't personally have any idea what the performance
tradeoffs are with TCO, but I could imagine that this would be beneficial
for some applications.

Alternatively, one could imagine a highly-optimizing compiler that performed
program transformations that usually made code more efficient, but under
rare circumstances obscured the tail-recursive nature of the program,
disabling TCO. This could be acceptable for most applications, as long as
they did not rely on TCO under those particular circumstances.

> Tail-call optimization, on the other hand, was not a common thing. Scheme
> was one of the first languages to need it (and is probably still one of
> very few). If left to their own devices, implementors probably wouldn't do
> it (some Common Lisp implementations do it in limited cases, such as
> self-calls and/or only when certain optimization settings are enabled). As
> this was one of the things that distinguished Scheme from other languages
> in the Lisp family (it was pretty much the reason Scheme was invented), it
> was felt important enough to set in stone in the standard.

Any implementation that does not provide tail-call optimization in at least
some cases would break given any real Scheme program. Most real
implementations will want to provide it in all cases; some will only be able
to provide it under some circumstances. This sort of compromise is
equivalent to doing some simple sort of reference-counting garbage
collection.

David Feuer

A Non-mouse

unread,
Feb 13, 2002, 12:05:57 AM2/13/02
to
David Feuer wrote:
>
> I would still argue that [Tail Call optimization] does not
> belong in the language standard. A separate document could
> give these implementation recommendations.

I would argue that Tail Call Optimization is a flatly necessary
part of the language. It is necessary because tail calls are
how iteration is expressed in the language. If I write a loop
that executes a thousand times, using tail call syntax, I expect
it to operate in constant space. A C programmer using the
syntax of that language to express the same loop would have
the same expectation.
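
For concreteness, the kind of loop I mean is the usual named-LET idiom
(a minimal sketch):

(define (count-to n)
  (let loop ((i 0))
    (if (< i n)
        (loop (+ i 1))   ; tail call
        i)))

(count-to 1000)   ; => 1000; with proper tail calls this runs in
                  ; constant space, just as a C for-loop would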

I would also argue that the RnRS standards underspecify the
language in many ways and should be at least twice as long.
My most recent frustration involves different implementations
giving different branch cuts in trig functions on complex
numbers.

> The Scheme community would then be more free (within the
> standard) to experiment with different implementation
> strategies.

Experimentation that affects program semantics (which tail
call optimization does) is not done and ought not be done
within a standard. We are free to experiment, of course;
but in order to write well-behaved code I want well-defined
semantics, and that means standards conformance.

Bear

felix

unread,
Feb 13, 2002, 6:38:26 AM2/13/02
to

Matthias Blume wrote in message <3C690D8D...@shimizu-blume.com>...

>felix wrote:
>>
>> Right. But a language that has a feature X that is used in several interesting
>> ways loses expressiveness when that feature is removed (IMHO).
>
>Not necessarily. Lisp programmmers have used (and use) dynamic scope in
>many different interesting ways. It still was removed when Scheme was
>created.


And given back to it (in more general form) later because it has been
found useful...

>
>> What I find surprising is that the opponents of "dynamic-wind" seem to
>> give reasons for it's brokenness that are mostly based on esthetical,
>> puristic and highly academic ground.
>
>No, it is precisely a case of the above: interference with other language features
>(original call/cc, higher-order programming, ...).


That "interference" is something that (at least for me) hasn't shown to be
a serious problem in the real world.

>
>> The only practical issue I have seen so far is that multithreading
>> can't be implemented with R5RS call/cc. Ok. but it neither gives you
>> timer interrupts.
>
>You don't need timer interrupts to see the problem. A fine-grained cooperatively
>multithreaded program will suffer the same.


I don't claim the opposite - we are in agreement here (what I mean is that
multithreading on call/cc might not be possible - but for any *serious* use you
need timer interrupts anyway, which have to be provided by non-standard
means).


felix


Matthias Blume

unread,
Feb 13, 2002, 10:32:55 AM2/13/02
to
A Non-mouse wrote:

> [ ... ] If I write a loop
> that executes a thousand times, using tail call syntax, I expect
> it to operate in constant space. A C programmer using the
> syntax of that language to express the same loop would have
> the same expectation.

Would you care to cite the part of the C standard that actually
*mandates* this? (It could be that it is really written, but
I doubt it somehow. Don't have the standard text here, so I can't
check myself.)

Expectations are one thing, guarantees made by a language definition
are another.

--
-Matthias

David Rush

unread,
Feb 13, 2002, 11:38:17 AM2/13/02
to
Matthias Blume <matt...@shimizu-blume.com> writes:
> Doug Quale wrote:
> > Matthias Blume <matt...@shimizu-blume.com> writes:
> >>If your program consists of *nothing* but tail calls, then it must
> >>run in constant space, I agree. But not many useful programs are of
> >>that nature. :-)
> >>
> > Write it in CPS. Then every call is a tail call. ;-)

> (A CPS-converted program will run out of space just like its DS counterpart
> because in general it will contain LAMBDA expressions that will consume
> space due to the closures that are allocated on their behalf.)

So Scheme should not only do the tail-call optimization, but also be
safe-for-space, no? *That* ought to shut down a lot of the minor
implementations ;)

david rush
--
There's man all over for you, blaming on his boots the faults of his feet.
-- Samuel Becket (Waiting For Godot)

Jeffrey Siegal

unread,
Feb 13, 2002, 12:43:02 PM2/13/02
to
felix wrote:
> I don't claim the opposite - we are in agreement here (what I mean is that
> multithreading on call/cc might not be possible - but for any *serious* use you
> need timer interrupts anyway, which have to be provided by non-standard
> means).

I disagree with that. I consider preemptive multitasking to be
conceptually quite different from cooperative multitasking. Preemptive
threads should probably have operating system support and usually
involve a concrete thread object; cooperative threads, by contrast, are
basically a generalized form of coroutines. I find it much more
reasonable, and even desirable, for cooperative multitasking to be
modeled with call/cc, but not preemptive multitasking.

Indeed, it is reasonable in some cases to do cooperative multitasking
(just as with coroutines or any other control mechanism) within a single
OS thread while at the same time doing preemptive multitasking between
that thread and others.
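
For what it's worth, here is a bare-bones sketch of the cooperative case
built on call/cc (illustrative only; the names are made up and a real
system would want a proper scheduler):

;; A queue of suspended tasks; SPAWN enqueues a new task, YIELD
;; suspends the current one and hands control to the next.
(define *tasks* '())

(define (spawn thunk)
  (set! *tasks*
        (append *tasks*
                (list (lambda (v) (thunk) (run-next #f))))))

(define (yield)
  (call-with-current-continuation
    (lambda (k)
      (set! *tasks* (append *tasks* (list k)))
      (run-next #f))))

(define (run-next v)
  (if (null? *tasks*)
      v
      (let ((next (car *tasks*)))
        (set! *tasks* (cdr *tasks*))
        (next v))))

(spawn (lambda () (display "a1 ") (yield) (display "a2 ")))
(spawn (lambda () (display "b1 ") (yield) (display "b2 ")))
(run-next #f)   ; prints: a1 b1 a2 b2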

Sander Vesik

unread,
Feb 13, 2002, 2:24:09 PM2/13/02
to
Jeffrey Siegal <j...@quiotix.com> wrote:
> Sander Vesik wrote:
>> I guess a problem here might be that at the very least, self tail-recursion
>> *is* required? After all, if you don't have self tail-recursion, the
>> practice of using recursion to replace loops would suddenly become a very
>> questionable behaviour.
>
> How would it be any more questionable than allocating memory in a
> language which doesn't promise that any memory allocated will *ever* be
> reclaimed?

Yes, a Scheme implementation that never freed any memory would probably
be a valid implementation. But the language's failure to require that
memory be freed could be fixed by an implementation simply providing a
"free" or "unallocate" operation - no comparable fix is possible for an
ever-growing stack caused by all calls being ordinary, stack-increasing
calls.


>
> While I disagree with Mr. Feuer's suggestion that the tail call
> requirement be removed, I do agree with his assessment that the Scheme
> reports are in a very strange place when it comes to making space
> requirements about tail calls but nothing else.
>

Oh yes - I definitely agree with that.

Doug Quale

unread,
Feb 13, 2002, 3:01:15 PM2/13/02
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> A Non-mouse wrote:
>
> > [ ... ] If I write a loop that executes a thousand times, using tail
> > call syntax, I expect it to operate in constant space. A C
> > programmer using the syntax of that language to express the same
> > loop would have the same expectation.
>
> Would you care to cite the part of the C standard that actually
> *mandates* this? (It could be that it is really written, but
> I doubt it somehow. Don't have the standard text here, so I can't
> check myself.)

I think he made it clear that he is simply saying a C programmer
would have that expectation. His statement makes no representation
about the C standard whatsoever.

In fact he says

> > A C programmer using the syntax of that language to express the same
> > loop would have the same expectation.

Why do you think this statement indicates that A Non-mouse is saying
the C standard would mandate anything?

> Expectations are one thing, guarantees made by a language definition
> are another.

Certainly, and this is exactly the point. Proper tail recursion
agrees with A Non-Mouse's expectations (and mine). It's clear from
reading this group that this expectation is not universal. This is
precisely why the Scheme language definition does not rely on
expectations and instead simply guarantees proper tail recursion.


--
Doug Quale

William D Clinger

unread,
Feb 13, 2002, 4:05:28 PM2/13/02
to
Matthias Blume wrote:
> As a matter of fact, the following loop *will* run out of space in any
> Scheme implementation that uses arbitrary-precision integers:
>
> (let loop ((n 1)) (if (something-that-is-always-true) (loop (+ n 1)) #f))
>
> ... it will just do so very slowly. :-)

The universe is believed to be about 10^10 years old.

Current implementations of Scheme typically use 8388608 or
more bits of precision for exact integers. This limit will
be reached long before that loop runs out of space. On my
workstation, it will take more than 10^20000 years to reach
that limit.

> In the meantime, I believe, most authors have accepted this point of
> view -- at least to some extend (correct me if I am wrong), and there
> has been work by Will Clinger et al. that states much more precisely
> how space is being consumed by a Scheme program. With this background
> one can then state requirements such as the TR requirement. R5RS (I
> think -- I don't have it handy right now) at least refers to Clinger's
> work, and it also states much more precisely than has been done in RnRS
> (n <= 4) what a tail call is. (The latter is the easiest part because
> there is a simple syntactic characterization.)

The paper to which Matthias refers is online at
ftp://ftp.ccs.neu.edu/pub/people/will/tail.ps.gz

Will

Erik Max Francis

unread,
Feb 13, 2002, 4:01:50 PM2/13/02
to
Matthias Blume wrote:

> Would you care to cite the part of the C standard that actually
> *mandates* this? (It could be that it is really written, but
> I doubt it somehow. Don't have the standard text here, so I can't
> check myself.)
>
> Expectations are one thing, guarantees made by a language definition
> are another.

That would be hard to do, since neither C89 nor C99 make any such
guarantee.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, US / 37 20 N 121 53 W / ICQ16063900 / &tSftDotIotE
/ \ Laws are silent in time of war.
\__/ Cicero
Esperanto reference / http://www.alcyone.com/max/lang/esperanto/
An Esperanto reference for English speakers.

Matthias Blume

unread,
Feb 13, 2002, 4:23:02 PM2/13/02
to
Jeffrey Siegal wrote:
> felix wrote:
>
>> I don't claim the opposite - we are in agreement here (what I mean is
>> that multithreading on call/cc might not be possible - but for any
>> *serious* use you need timer interrupts anyway, which have to be
>> provided by non-standard means).
>
>
> I disagree with that. I think preemptive multitasking to be conceptually
> quite different from cooperative multitasking. Preemptive threads
> should probably have operating system support and usually involve a
> concrete thread object.

And I disagree with *that*.

OS threads tend to be very heavy-weight -- especially when it comes
to inter-thread communication and locking. Some popular implementations
also reserve exorbitant amounts of address space per thread, effectively
limiting the number of threads that one can have.

A good example for call/cc-based preemptive threads is CML.

> Indeed, it is reasonable in some cases to do cooperative multitasking
> (just as with coroutines or any other control mechanism) within a single
> OS thread while at the same time doing preemptive multitasking between
> that thread and others.

... and it is also reasonable to have multiple OS threads ("execution contexts"),
each being preemptively multiplexed between many call/cc-based threads.
Basically, execution contexts (= OS threads) give you access to SMP hardware
(if available). But for most context *switching* (i.e., where one CPU must
be multiplexed among many threads) as opposed to having genuine parallel execution,
a local mechanism such as call/cc can be much faster while consuming
far fewer resources.

--
-Matthias

Jeffrey Siegal

unread,
Feb 13, 2002, 5:48:50 PM2/13/02
to
Matthias Blume wrote:
>> I disagree with that. I think preemptive multitasking to be
>> conceptually quite different from cooperative multitasking.
>> Preemptive threads should probably have operating system support and
>> usually involve a concrete thread object.
>
> And I disagree with *that*.
>
> OS threads tend to be very heavy-weight -- especially when it comes
> to inter-thread communication and locking. Some popular implementations
> also reserve exorbitant amounts of address space per thread, effectively
> limiting the number of threads that one can have.

Those are *bad* implementations. The place to deal with problems in
operating system implementation is in the operating system, by either
fixing it or using a different operating system, not by compensating for
it in a programming language or in applications.

By the way, "operating system" support doesn't necessarily mean kernel
support. Some or all of the thread model might be implemented in user
space using some sort of NxM facility, in cooperation with a kernel. But
I still differentiate between preemptive multitasking, which must
necessarily interact through some sort of interthread locking or
communications mechanism, and cooperative multitasking, where the
program itself decides when to stop executing one thread and start
executing another, and where communication is usually just a matter of
shared data, just as with various functions of a program sharing data.

Erann Gat

unread,
Feb 13, 2002, 5:54:31 PM2/13/02
to
In article <b84e9a9f.02021...@posting.google.com>,

ces...@qnci.net (William D Clinger) wrote:

> Matthias Blume wrote:
> > As a matter of fact, the following loop *will* run out of space in any
> > Scheme implementation that uses arbitrary-precision integers:
> >
> > (let loop ((n 1)) (if (something-that-is-always-true) (loop (+ n 1)) #f))
> >
> > ... it will just do so very slowly. :-)
>
> The universe is believed to be about 10^10 years old.
>
> Current implementations of Scheme typically use 8388608 or
> more bits of precision for exact integers. This limit will
> be reached long before that loop runs out of space. On my
> workstation, it will take more than 10^20000 years to reach
> that limit.

Well, then you obviously need a faster machine.

;-) ;-) ;-) ;-)

E.

Jeffrey Mark Siskind

unread,
Feb 13, 2002, 7:45:22 PM2/13/02
to
> It's clear from
> reading this group that this expectation is not universal. This is
> precisely why the Scheme language definition does not rely on
> expectations and instead simply guarantees proper tail recursion.

The issue is not whether R5RS requires TCO. As a point of fact it does.
The issue that people are debating is whether it should require TCO. And that
is what there is disagreement on.

But that question is just one reflection of a much deeper question: Should
there even be a Scheme standard in the first place? And if so, what is its
purpose?

Standards are typically adopted for one reason: to facilitate interoperability
between a large community of users. That is the reason we have standard line
voltages, plugs and sockets, frequency and bandwidth allocations, modulation
and coding strategies, signaling conventions, etc. In all these cases we have
a huge community of users that would be significantly and adversely affected
if they had no standard or did not conform to the standard. Software systems
in general, and programming languages in particular, can also have a
sufficiently large community of users with a sufficiently compelling need for
interoperability to warrant standardization. In the case of Scheme, however,
it is different. While some of us might like it to be different, the
unfortunate state of affairs is that we do not have a large community of users.
And the standard that we have does little to facilitate interoperability.
(I for one, find that my code won't port to different Scheme implementations,
not because of noncompliance to things specified in RnRS but because of things
not specified in RnRS that are implemented differently in different
implementations or, more often, implemented in one implementation but not
another.) Given this, the standard that we have is useless, at least for the
typical reasons to adopt standards.

Now RnRS does serve other purposes. It is a guideline to implementors. A
guideline, not a law. It is useful as a manual. Implementors who don't have
time to write their own manual can use RnRS as a starting point. (Joel
Bartlett did exactly that with Scheme->C. He distributed R4RS plus an
addendum.) It documents the current state of thinking in the Scheme community.
That state changes over time which is why we have such a large set
{RnRS|n=1,...}. And it is a starting point for a lot of interesting research
and innovation. Among other things, the standard itself is a testbed for
research and innovation in the process of writing programming-language
standards. All this is very useful.

But unfortunately, the existence of RnRS has negative side effects. One is that,
more often than not, it is used as an excuse to exclude people from the
community and denigrate their work. Even after I pointed that out a week ago,
there has still been a tremendous amount of flame war about what does and
doesn't constitute Scheme. Purely on the basis of what the standard says. And
second, and even more importantly, there is a fundamental principle: standards
and innovation are at odds with each other. Standards preclude innovation and
innovation prevents standardization. Historically, we tend to standardize
things when the benefits of standardization outweigh the benefits of further
innovation. And standards fall by the wayside when the pressure of innovation
becomes too strong (for example the IEEE standard for the S-100 bus).

I believe that what Scheme has to offer the world is the innovation that it
fosters. At least at the current time, it does not have the potential of a
large community of users that need interoperability. Weighing the positive and
negative effects that Scheme standards have had on the community has driven me
to the opinion that the Scheme community would be better off without a Scheme
standard. Therefore I advocate that we adopt the empty string as R6RS.

P.S. I've always envied John Cage for composing and publishing silence as
music.

Jeffrey Mark Siskind

unread,
Feb 13, 2002, 9:39:42 PM2/13/02
to
> By the way, Stalin will execute the above code in constant space because
> it is a self-call. Stalin only fails to implement tail calls in
> constant space when they are not self-calls.

As I elaborate below, Stalin implements TCO in many more cases than just self
tail calls.

> Alternatively, one could imagine a highly-optimizing compiler that performed
> program transformations that usually made code more efficient, but under
> rare circumstances obscured the tail-recursive nature of the program,
> disabling TCO. This could be acceptable for most applications, as
> long as they did not rely on TCO under those particular circumstances.

As I elaborate below, the cases where Stalin fails to perform TCO are quite
rare.

> Just look at the hassles Jeff Siskind gets from Stalin's
> "non-compliance" to R4RS - when Stalin is *in fact* compliant enough
> to R4RS that you hardly notice the differences. Stalin's only 'sin'
> (if you will) is that JMS was kind enough to very clearly document the
> areas in which Stalin is non-compliant: other (well-known) Schemes
> have similar incompatibilities but do not take such pains to point
> them out.

As I elaborate below, even in the case of TCO, you will find it difficult to
find the cases where Stalin does not comply with R4RS.

I'd like to enlighten people with an explanation of the level of support for
TCO in Stalin.

1. Stalin performs TCO on a tail call site E when E is in or in-lined in the
   procedure P that is the target of that call site E. (For this email, I'm
   going to ignore the possibility that a higher-order call site can have
   multiple targets, such that the call site can be in-lined in some targets
   but not others. Stalin handles this possibility in an appropriate
   fashion.)
2. Stalin in-lines a procedure at a call site if that is the only
   non-self-tail call site for that procedure. (For this email, I'm going to
   ignore the possibility that there are multiple non-self-tail call sites but
   all but one are in unreachable code. Stalin handles this possibility in an
   appropriate fashion.)
3. Stalin makes duplicate copies of procedures that are called more than
   once, calling different copies at different call sites, when appropriate
   circumstances hold and compiler switches are set.

Together, these three principles cause TCO to occur in a surprisingly large
number of situations. Much more than simply saying self-calls. For example,
in the following:

(begin (define (f)
         (f)) ; TCO
       (f))

The indicated call site undergoes TCO, because it is a self tail call. In the
following:

(begin (define (f)
         (let ((x 0))
           (f))) ; TCO
       (f))

the indicated call site also undergoes TCO. Technically, this is not a self
tail call because the LET expands into a LAMBDA so this constitutes a mutually
recursive pair of procedures that each tail call each other. But the LAMBDA
into which the LET expands is called only once and by principle (2) gets
inlined in F so the indicated call site undergoes TCO by way of principle (1)
above.
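
Roughly speaking, the LET in that example expands into something like

(begin (define (f)
         ((lambda (x)
            (f))   ; TCO
          0))
       (f))

which is why F and the LAMBDA form the mutually tail-recursive pair just
described. (This is only a sketch of the expansion, not Stalin's output.)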

In the following:

(let loop ()
  (loop)) ; TCO

the indicated call site also undergoes TCO because named LET gets expanded
according to the definition in the appendix of R4RS and the same general
principles (1) and (2) above apply. Even in code like the following:

(let outer ()
  (let inner ()
    (if e
        (outer)      ; TCO
        (inner))))   ; TCO

with nested loops where the inner loop can escape to the outer loop, both
indicated call sites undergo TCO by way of the same principles (1) and (2)
above. The same thing even happens with mutually recursive procedures like the
following:

(begin (define (f)
         (g)) ; TCO
       (define (g)
         (f)) ; TCO
       (f))

or even with the following:

(begin (define (f)
         (if e1
             (f)    ; TCO
             (g)))  ; TCO
       (define (g)
         (if e2
             (f)    ; TCO
             (g)))  ; TCO
       (f))

because, in both cases, G is inlined in F because it has only one
non-self-tail call site. None of these cases requires any code duplication.
With the following code:

(begin (define (f)
         (if e1
             (f)    ; TCO
             (g)))  ; TCO
       (define (g)
         (if e2
             (f)    ; TCO
             (g)))
       (if e3
           (f)
           (g)))

principle (3) kicks in to make duplicate copies which then allow TCO to happen
at all of the indicated call sites.

Together, these allow Stalin to perform TCO in a large collection of cases.
Stalin performs TCO in a larger collection of cases than other implementations
that only partially support TCO. As a matter of practice, I have *NEVER* found
a case of a real program that I wished to write that could not run in Stalin
because of lack of full TCO. For that matter, I have *NEVER* found a case of a
real program that I wished to write that could not run in Scheme->C because of
lack of full TCO. And Scheme->C performs less TCO than Stalin. Now, of course,
you can construct toy examples which break Stalin or Scheme->C. And people
have done so. But that is irrelevant. The only real-world programming cliches
that I am aware of that Stalin cannot handle are

1. writing finite state machines as collections of mutually tail-recursive
   procedures (sketched below),
2. handwritten CPS, and
3. performing cooperative multitasking via CALL/CC.
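
By way of illustration, item 1 refers to code shaped roughly like the
following made-up sketch: each state is a procedure that tail calls the
next state, so without full TCO the stack grows with the length of the
input.

(define (state-a input)
  (cond ((null? input)       'accept)
        ((zero? (car input)) (state-a (cdr input)))
        (else                (state-b (cdr input)))))

(define (state-b input)
  (cond ((null? input)       'reject)
        ((zero? (car input)) (state-a (cdr input)))
        (else                (state-b (cdr input)))))

(state-a '(0 1 1 0))   ; => accept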

I make this claim with the experience of having written hundreds of thousands
of lines of Scheme code (perhaps more than a million) over ten years.

Before anyone claims that Stalin accepts a different language than other
Scheme compilers because of lack of full TCO, I welcome them to demonstrate a
real-world example, not from the above list, that Stalin fails to compile into
working code because of lack of TCO.

Jeffrey Mark Siskind

unread,
Feb 13, 2002, 9:57:15 PM2/13/02
to
One thing I forgot to point out in my previous message is that Stalin
gives you a warning when it is not compliant with the tail recursion
requirement, i.e. when there is a tail call site in a call-graph
cycle for which TCO is not performed. It even tells you the offending
call site. So if you compile your program with Stalin and do not get
such a warning then Stalin is compliant with the tail recursion
requirement for your program.

Note that it is fairly straightforward to change the code generator
to use a trampoline for such call sites. And thus be fully compliant.
I have intended to do so for quite some time now. But it is low on my
list.
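
Very roughly, the trampoline idea looks like this (an illustrative
sketch, not Stalin's actual code generator): an offending tail call
returns a thunk instead of calling its target directly, and a driver
loop keeps invoking thunks until a non-procedure result comes back.

(define (trampoline thunk)
  (let loop ((r (thunk)))
    (if (procedure? r)
        (loop (r))
        r)))

;; Mutually recursive procedures rewritten to bounce on the trampoline:
(define (even?* n) (if (zero? n) #t (lambda () (odd?*  (- n 1)))))
(define (odd?*  n) (if (zero? n) #f (lambda () (even?* (- n 1)))))

(trampoline (lambda () (even?* 100001)))   ; => #f, in bounded stack space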

Doug Quale

unread,
Feb 14, 2002, 1:32:17 AM2/14/02
to
qo...@purdue.edu (Jeffrey Mark Siskind) writes:

> The only real-world programming cliches
> that I am aware of that Stalin cannot handle are
>
> 1. writing finite state machines as collections of mutually tail-recursive
> procedures,
> 2. handwritten CPS, and
> 3. performing cooperative multitasking via CALL/CC.
>
> I make this claim with the experience of having written hundreds of thousands
> of lines of Scheme code (perhaps more than a million) over ten years.
>
> Before anyone claims that Stalin accepts a different language than
> other Scheme compilers because of lack of full TCO, I welcome them
> to demonstrate a real-world example, not from the above list, that
> Stalin fails to compile into working code because of lack of TCO.

I don't understand. You just explained some real world examples
yourself. In particular, Rob Warnock (I think) earlier talked about
the specific case of FSMs in some detail. This is commonly given as
an example of the benefits of full tail call optimization. Why does
anyone have to provide a different example? If someone did provide a
different example, wouldn't you just add it to your list and say "The
only real world programming cliches that I am aware of that Stalin
cannot handle are (these 4)."

Unfortunately you may characterize this question as a flame, just as
you sometimes seem to do with any comment or question about Stalin
that you perceive as negative even when that was not the intent of the
writer. (Actually you seem to characterize any argument that Scheme
requires full tail call optimization as a flame against Stalin, even
when Stalin isn't mentioned in the discussion.)

(BTW, does the CPS in item 2 have to be handwritten? What if a
program were to generate the CPS code itself, and that were used as
input to Stalin? Couldn't the same problem arise?)

I have never seen anyone claim that Stalin was not a valuable tool
simply because it does not comply with RnRS for any n. You seem to be
simultaneously admitting that Stalin doesn't implement the same
language as Liar, Gambit and Chez (and others) and denying it. I
guess I'm confused as to what you are really taking issue with.

At some point you may put full tail call optimization into Stalin and
then you will be able to thumb your nose at those of us who (rightly
or wrongly) insist that Scheme requires it.

(I fear you expect that people will then complain about the lack of
the full numeric tower or some such. I don't think that would happen.
The full tower is not required by the Scheme standard and many people
happily use Schemes that lack full numeric support. All the standard
actually requires is exact numbers large enough to use as vector and
string indices. [I would still like to detect numeric overflow,
however.] I really don't see the vendetta against Stalin that you
seem to see. I think you are just surprised that some people value
full tail call optimization as much as they do. I think they would be
very pleased if Stalin provided full TCO (and detected numeric
overflow).)

But even if Stalin never provides full tail call optimization, I think
if you don't like the Scheme standard you should feel free to ignore
it. (Of course you should also feel free to try to get the standard
changed, but Scheme appears to be "functionally stable" in the sense
that IBM uses (used?) for any software that it was no longer
developing.) Stalin will still have many fans who appreciate its
advantages compared to its competitors, namely that Stalin is free
software that produces small, efficient executables that are easy to
deliver. Anyone who needs (or thinks she needs) something not
provided by Stalin can choose to use a different implementation. It
seems to me that's exactly as things should be.


--
Doug Quale
