
Why no call with current continuation?


Tim Josling

Sep 30, 2002, 6:50:11 PM
I learned scheme first and now I am moving to lisp.

I am wondering why there is no call/cc in lisp. It seems to be potentially
useful. For example, in the book 'On Lisp', a significant part of this
excellent book seems to consist of reinventing various cut-down versions of
call/cc.

(Call/cc is a way to save the current state of the program in a variable. You
can use the variable later on to resume processing at that point. It can be
used for coroutines, exception handling, nondeterministic search, etc.)
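A minimal Scheme illustration of that description, as an early-exit search
(find-first is a made-up name, not from the thread):

```scheme
;; The continuation K captured here is "everything that happens after
;; FIND-FIRST returns"; calling (k x) jumps straight there with x.
(define (find-first pred lst)
  (call-with-current-continuation
    (lambda (k)
      (for-each (lambda (x)
                  (if (pred x) (k x)))  ; abandon the loop immediately
                lst)
      #f)))                             ; normal fall-through: not found

(find-first even? '(1 3 4 5))  ; => 4
```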

The lisp approach seems to be to provide specific features for the common uses
of call/cc, such as exception handling. To my naive point of view this seems
to be contrary to the usual lisp approach. Normally you would expect to have a
very general facility, with macros/special forms used to provide specialised
versions. An example of this approach would be the iteration facilities which
can be constructed from goto and recursion, but are much more convenient.
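For instance, a WHILE loop can be built as a macro over the goto primitive,
roughly in the way described (a sketch only; CL's built-in DO/LOOP are of
course far more capable):

```lisp
;; WHILE built from TAGBODY/GO, Common Lisp's goto facility.
(defmacro while (test &body body)
  (let ((top (gensym)) (end (gensym)))
    `(tagbody
        ,top
        (unless ,test (go ,end))
        ,@body
        (go ,top)
        ,end)))

;; (let ((i 0)) (while (< i 3) (print i) (incf i)))  ; prints 0 1 2
```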

Three possibilities as to why call/cc is not in lisp occur to me:

1. For some reason it is not seen as useful. But to me it seems useful.

2. It is too hard to implement efficiently - period. I don't know enough about
implementing functional languages to say for sure, though it does look like it
might be a challenge to implement; e.g., maybe you need to copy the stack.

3. It can be implemented efficiently in scheme, but is too hard to implement
efficiently in lisp. I can't see any reason why this would be the case. Lisp
is bigger than scheme but the core of the language does not look that much
bigger to me.

Can anyone shed any light on this? If someone wants to use this as a
springboard for a lisp versus scheme discussion, please start another thread.
(In my view lisp and scheme both have good reasons to exist.)

Tim Josling

Pascal Costanza

Oct 2, 2002, 4:20:35 PM
Tim Josling wrote:
> I learned scheme first and now I am moving to lisp.
>
> I am wondering why there is no call/cc in lisp. It seems to be potentially
> useful. For example, in the book 'On Lisp', a significant part of this
> excellent book seems to consists of reinventing various cut down versions of
> call/cc.

The standard argument I have read several times is that call/cc turned
out to be too powerful and is hard to use correctly. For example, I
recall reading a paper that formally proved that continuations and
exception handling are _not_ compatible. If you are interested I can try
to find out again which paper this was.

I don't know the motivations for including a call/cc simulation in "On
Lisp". To me it seemed a little like a defensive argument, but this is
only my gut feeling. I don't have enough experience to be able to tell
if continuations are really useful beyond language design experiments,
but I would definitely also be interested to hear what other, more
experienced people think.


Pascal

Edi Weitz

Oct 2, 2002, 4:55:35 PM
Tim Josling <t...@melbpc.org.au> writes:

It may be worthwhile reading this article

<http://world.std.com/~pitman/pfaq/unwind-protect-vs-continuations.html>

by Kent Pitman.

Edi.

Martti Halminen

Oct 2, 2002, 4:40:35 PM
Tim Josling wrote:

> I am wondering why there is no call/cc in lisp. It seems to be potentially
> useful. For example, in the book 'On Lisp', a significant part of this
> excellent book seems to consists of reinventing various cut down versions of
> call/cc.

A Google Groups search for call/cc in c.l.l. gives about 496 hits.
Reading some of those would probably answer most of your questions;
you're not the first to ask this.

--

Dorai Sitaram

Oct 2, 2002, 5:09:10 PM
In article <3D98D523...@melbpc.org.au>,
Tim Josling <t...@melbpc.org.au> wrote:
>I am wondering why there is no call/cc in lisp. It seems to be potentially
>useful. For example, in the book 'On Lisp', a significant part of this
>excellent book seems to consists of reinventing various cut down versions of
>call/cc.

Baby steps, please. The voices of reason are
still negotiating to get mandatory tail-call
elimination into the next iteration of CL. call/cc can
wait.

Christophe Rhodes

Oct 2, 2002, 5:27:40 PM
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

If the "voices of reason" could come up with a specification of what
"mandatory tail-call elimination" actually means, phrased in a fashion
that is actually understandable by implementors, ideally with a
reference implementation of the specification's semantics (for cmucl
or one of the other "free" implementations, for extra credit, so
everything is clear), then perhaps those people working on CL
implementations who seem to be behaving so "unreasonably" might
actually sit up and take notice.

But until someone comes up with a proposal dealing with whether FOO
is in a "mandatory tail-call" position in the following:
  (let ((*print-length* 3))
    (foo))
[ bearing in mind that FOO could end up throwing an error, putting you
into the debugger, where complex restarts may be available ]
and sundry other issues, then I think describing people who don't
actively work towards implementing more tail-call elimination by
implication as "unreasonable" is a little bit rich.

Christophe
--
http://www-jcsu.jesus.cam.ac.uk/~csr21/ +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%") (pprint #36rJesusCollegeCambridge)

Gareth McCaughan

Oct 2, 2002, 5:25:30 PM
Tim Josling wrote:

> I am wondering why there is no call/cc in lisp. It seems to be potentially
> useful. For example, in the book 'On Lisp', a significant part of this
> excellent book seems to consists of reinventing various cut down versions of
> call/cc.

...

> Three possibilities as to why call/cc is not in lisp occur to me:
>
> 1. For some reason it is not seen as useful. But to me it seems useful.
>
> 2. It is too hard to implement efficiently - period. I don't know enough about
> implementing functional languages to say for sure, though it does look like it
> might be a challenge to implement e.g. maybe you need to copy the stack.
>
> 3. It can be implemented efficiently in scheme, but is too hard to implement
> efficiently in lisp. I can't see any reason why this would be the case. Lisp
> is bigger than scheme but the core of the language does not look that much
> bigger to me.

1. Efficient implementability was a major consideration when
Common Lisp was being defined. call/cc is very difficult
to implement efficiently. I don't think it would add
substantially more difficulty to CL than it does to Scheme,
but it adds quite a lot to Scheme. There's one reason why
it might hurt CL more than Scheme: producing efficient code
in the presence of call/cc requires sophisticated control-flow
analysis, which is harder for very large systems. CL is,
on the whole, much more interested in very large systems
than Scheme is.

2. call/cc is indeed useful, but most of the things it's useful
for can be provided without needing the machinery of call/cc.
The condition system, throw/catch, etc, provide most of what
you'd use call/cc for. But, indeed, not everything.

3. Common Lisp was more a consolidation of existing practice
than a locus of innovation. Among the Lisp users and implementors
involved in Common Lisp, call/cc just wasn't an issue.

Perhaps I should expand on the statement that "call/cc is very
difficult to implement efficiently". It's not just that making
call/cc work fast is difficult; it's that having a system that
contains call/cc makes *other* things harder to do efficiently.
The trouble is that if you can't prove that *nothing* in a given
piece of code can cause an instance of its continuation to
escape, then a bunch of optimizations on that code are blocked.
But any call to an external function can cause the current
continuation to escape. So you end up having to do whole-program
analysis, and that hurts.

--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc

Will Deakin

Oct 2, 2002, 5:46:56 PM
I think Kent Pitman's article -- kindly referenced by Edi Weitz --
provides an excellent answer to why call/cc might not be such a good idea.
However, I also think there are some other problems as well.

Tim Josling wrote:
> Three possibilities as to why call/cc is not in lisp occur to me:
> 1. For some reason it is not seen as useful. But to me it seems useful.

How do you currently use call/cc? As Paul Graham demonstrates, you can
get much of the functionality of call/cc without it. So when do you need
to go beyond this? FWIW I don't think the marginal gains are worth losing
the kind of guarantees stuff like unwind-protect gives you.

> 2. It is too hard to implement efficiently - period.

If you were writing an implementation that you wanted to run with more
than one thread of execution at a time, how would you integrate this
with the use of call/cc? (I really don't know the answer to this. I
realise that threading is not in the ANSI standard. However, there are
implementations out there with extensions that provide good threading
support.) I would then see call/cc as being a *really* big headache. So
again, why bother?

> I don't know enough about implementing functional languages to say for

> sure...
Hmmm. I could read this as saying that common lisp *is* a functional
language. Are you sure?

:)w

ozan s yigit

Oct 2, 2002, 5:56:41 PM
Tim Josling <t...@melbpc.org.au> writes:

> I am wondering why there is no call/cc in lisp. [...]

because r. gabriel hates it. :) :) :)

oz
---
bang go the blobs. -- ponder stibbons

Brad Miller

Oct 2, 2002, 6:01:16 PM

"Dorai Sitaram" <ds...@goldshoe.gte.com> wrote in message
news:anfn9m$d74$1...@news.gte.com...

Tail call elimination does not change the semantics of the language (at
worst, it breaks them), while call-with-current-continuation and its ilk
actually add something fundamentally new, and not directly possible in CL
(though you could easily implement call/cc in Interlisp, as stacks were
first class. Those were the days!). The best you can do in CL is use dynamic
closures (not actually part of CL but a common extension, aka coroutines or
stack groups), but it can be difficult to correctly close over the entire
virtual machine state without significant (human and machine) effort. If all
you need is downward closures, life is considerably simpler, and there are
several packages supplying this functionality on the net.


Tim Bradshaw

Oct 2, 2002, 5:44:00 PM
* Dorai Sitaram wrote:
> Baby steps, please. The voices of reason are
> still negotiating to get mandatory tail-call
> elimination into the next iteration of CL. call/cc can
> wait.

You are just trying to be offensive here, aren't you? Why? What
purpose does this serve? Could you perhaps not write something
constructive, instead?

--tim

Tim Bradshaw

Oct 2, 2002, 6:45:00 PM
* Tim Josling wrote:

> The lisp approach seems to be to provide specific features for the
> common uses of call/cc, such as exception handling. To my naive
> point of view this seems to be contrary to the usual lisp
> approach. Normally you would expect to have a very general facility,
> with macros/special forms used to provide specialised versions. An
> example of this approach would be the iteration facilities which can
> be constructed from goto and recursion, but are much more
> convenient.

There are a number of reasons for not having call/cc. One is that the
moment you have it you have to deal with a million issues of code that
might be run many times. For instance, the moment you have call/cc,
things like UNWIND-PROTECT become no good: when (if ever) should the
protecting forms happen? So instead you need something like
DYNAMIC-WIND, which gets to specify code that runs on the way out and
then again on the way back in. So everything now looks kind of nice
(if about 10x as complicated to reason about as before). But now what
if you want to interact with external resources like, say, files or
streams? How should WITH-OPEN-STREAM work in the presence of upward
continuations? When exactly is it safe to close the stream, since you
won't get it back once you do, no matter how much cleverness you do
with DYNAMIC-WIND?
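The re-entry behaviour at issue can be sketched in a few lines of Scheme
(illustrative only; saved-k is a name invented for the example):

```scheme
;; DYNAMIC-WIND runs its first thunk on every entry into the middle
;; thunk's extent and its third thunk on every exit -- including
;; re-entries through a captured continuation.
(define saved-k #f)

(dynamic-wind
  (lambda () (display "enter\n"))   ; re-run if we come back in
  (lambda ()
    (call-with-current-continuation
      (lambda (k) (set! saved-k k))))
  (lambda () (display "exit\n")))   ; re-run on each way out

;; Calling (saved-k #f) later prints "enter" and "exit" once more.
;; If the exit thunk had closed a stream, the enter thunk has no way
;; to restore it, which is exactly the WITH-OPEN-STREAM problem.
```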

This has all been discussed *many* times on cll before - do a google
search. Inevitably these discussions degenerate into sniping between
scheme people and Lisp people, just as this one will (I suppose Dorai
Sitaram's post could charitably be seen as an attempt to stifle the
thread at birth by causing such degeneration to happen instantly).

And I think your take on `the Lisp approach' is basically completely
backward, at least as regards CL. The CL approach is to be pragmatic
- implement what can be agreed on and what can be seen to be
efficiently implementable and useful: don't implement stuff that
suddenly opens huge questions about what everything means, or whether
it can be implemented efficiently.

--tim

Bruce Hoult

Oct 2, 2002, 7:14:13 PM
In article <sqy99gy...@lambda.jcn.srcf.net>,
Christophe Rhodes <cs...@cam.ac.uk> wrote:

> But until someone comes up with a proposal dealing with whether FOO
> is in a "mandatory tail-call" position in the following:
>   (let ((*print-length* 3))
>     (foo))

If *print-length* is (following the usual convention) elsewhere declared
special, then (foo) is clearly not in tail position, the restoration of
*print-length* is.

Otherwise (foo) clearly *is* in tail position.

What's the problem?

-- Bruce

Barry Margolin

Oct 2, 2002, 6:50:57 PM
In article <ey3n0pw...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>And I think your take on `the Lisp approach' is basically completely
>backward, at least as regards CL. The CL approach is to be pragmatic
>- implement what can be agreed on and what can be seen to be
>efficiently implementable and useful: don't implement stuff that
>suddenly opens huge questions about what everything means, or whether
>it can be implemented efficiently.

I agree with Tim here. The CL philosophy is to provide a library of
high-level, proven facilities that are directly useful in applications,
like hash tables and sequence functions. The Scheme approach is to provide
basic mechanisms that can be used as building blocks for any programming
paradigm. Common Lisp was intended for industrial programming, while
Scheme was designed for academics. That doesn't mean they're only fitted
for those environments, but their general approaches are based on the needs
of those different types of uses.

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Erik Naggum

Oct 2, 2002, 7:21:15 PM
* Tim Josling

| I am wondering why there is no call/cc in lisp. It seems to be potentially
| useful.

Please get back to us when it is /actually/ useful.

| For example, in the book 'On Lisp', a significant part of this excellent
| book seems to consists of reinventing various cut down versions of
| call/cc.

Well, duh.

| (Call/cc is a way to save the current state of the program in a
| variable. You can use the variable later on onto resume processing at
| that point. It can be used for coroutines, exception handling,
| nondeterministic search, etc.)

None of us know Scheme at all, so thanks for this lecture.

| To my naive point of view this seems to be contrary to the usual lisp
| approach.

In your next life, learn a real Lisp before Scheme. This will change
what you consider "usual" as well as actually help you grasp a hell of a
lot more than you do now.

| Three possibilities as to why call/cc is not in lisp occur to me:
|
| 1. For some reason it is not seen as useful. But to me it seems useful.

That is because you have yet to learn how useful the Common Lisp
constructs are. A common affliction of Scheme users is to believe that
their ignorance of Common Lisp gives them a right to tell people about
Scheme, which they know. This is actually nuts.

| 2. It is too hard to implement efficiently - period. I don't know enough
| about implementing functional languages to say for sure, though it does
| look like it might be a challenge to implement e.g. maybe you need to
| copy the stack.

You do not implement languages with call/cc with ordinary stacks. Stacks
are normally used to contain call frames. This works well because calls
are strictly hierarchical. With call/cc, this is no longer true, so call
frames need to be allocated from the heap and garbage-collected instead
of the stack memory being trivially reusable.

| 3. It can be implemented efficiently in scheme, but is too hard to
| implement efficiently in lisp. I can't see any reason why this would be
| the case. Lisp is bigger than scheme but the core of the language does
| not look that much bigger to me.

Do you actually believe this nonsense to be true?

| Can anyone shed any light on this? If someone wants to use this as a
| springboard for a lisp versus scheme discussion, please start another
| thread. (In my view lisp and scheme both have good reasons to exist.

I think both antibiotics and bacteria have good reason to exist, too.

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Erik Naggum

Oct 2, 2002, 7:23:48 PM
* Dorai Sitaram
| The voices of reason

Tim, you are well advised to ignore this nutjob if you want to learn and
understand Common Lisp. Dorai is a known Scheme troll in this newsgroup
and wastes no opportunity to tell us how superior Scheme is to actually
getting work done.

Jens Axel Søgaard

Oct 2, 2002, 8:12:14 PM
Erik Naggum wrote:
> * Tim Josling

>> Three possibilities as to why call/cc is not in lisp
>> occur to me:
>>
>> 3. It can be implemented efficiently in scheme, but is
>> too hard to implement efficiently in lisp. I can't see
>> any reason why this would be the case. Lisp is bigger
>> than scheme but the core of the language does not look
>> that much bigger to me.
>
> Do you actually believe this nonsense to be true?

He is asking about the reason.

That Tim is ignoring that call/cc is expensive in Scheme
is another thing.

--
Jens Axel Søgaard

Attribute not to malice that which might be better explained
by ignorance.

Kaz Kylheku

Oct 2, 2002, 11:45:36 PM
Tim Josling <t...@melbpc.org.au> wrote in message news:<3D98D523...@melbpc.org.au>...

> I learned scheme first and now I am moving to lisp.
>
> I am wondering why there is no call/cc in lisp.

One possible answer is that it requires a complex stack structure. Do
a web search for ``spaghetti stack''.

> It seems to be potentially
> useful. For example, in the book 'On Lisp', a significant part of this
> excellent book seems to consists of reinventing various cut down versions of
> call/cc.

The question is whether actual Lisp software in production use
contains such reinventions, or whether they were just whipped up for
someone's amusement.

> (Call/cc is a way to save the current state of the program in a variable. You
> can use the variable later on to resume processing at that point. It can be
> used for coroutines, exception handling, nondeterministic search, etc.)

Yes; but that state remains saved even when the context in which it
was created terminates; you can jump in and out of continuations. Each
continuation is in fact nearly a whole thread, requiring a chunk of
stack space.

> The lisp approach seems to be to provide specific features for the common uses
> of call/cc, such as exception handling. To my naive point of view this seems
> to be contrary to the usual lisp approach. Normally you would expect to have a
> very general facility, with macros/special forms used to provide specialised
> versions. An example of this approach would be the iteration facilities which
> can be constructed from goto and recursion, but are much more convenient.

But the difference is that GO doesn't have a whole facet of
functionality that is ignored by iterative constructs built on top of
it. It is used in its simple entirety to build up these higher level
control mechanisms.

On the other hand, you do not need call/cc in its entirety to
implement conditions.

How conditions work is that the program is suspended to search for a
handler. The handler is called, and then that handler can execute a
non-local exit somewhere, by invoking a restart, or other mechanism.
At that point, the stack is unwound. Restarting is possible because a
condition does not automatically unwind; when the handler is invoked,
all the dynamic context is still there, so invoking a restart is
nothing more than a non-local exit up through the point where the
condition was signaled to some enclosing restart.

So as you can see, it can all be done with a simple stack structure;
an implementation based on continuations would just generate useless
garbage.
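That description corresponds to standard HANDLER-BIND/RESTART-CASE usage;
here is a small sketch (the condition and restart names are invented for
the example):

```lisp
(define-condition parse-glitch (error) ())

(defun parse-item (x)
  (restart-case
      (if (integerp x)
          x
          (error 'parse-glitch))
    ;; A restart is just a named point to exit to.
    (use-zero () 0)))

;; The handler runs while the stack below the ERROR call is still
;; intact; INVOKE-RESTART then performs the non-local exit.
(handler-bind ((parse-glitch
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'use-zero))))
  (mapcar #'parse-item '(1 "two" 3)))
;; => (1 0 3)
```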

> Three possibilities as to why call/cc is not in lisp occur to me:
>
> 1. For some reason it is not seen as useful. But to me it seems useful.

Here is a consideration: most modern implementations of Common Lisp
have threads! One way to look at continuations is that they are just a
very low level mechanism by which the programmer can explicitly juggle
multiple contexts. Threads are a simpler, higher level abstraction
than continuations. For one thing, you don't need to save and restore
your own context when switching between threads. This is really what
dynamic-wind is all about; setting up some context and tearing it down
upon entering and leaving a continuation. That's what threads are for
so you don't have to juggle! When dispatched, your thread continues
magically, all by itself, at the point where it was last suspended.

Dorai Sitaram

Oct 3, 2002, 9:32:04 AM
In article <sqy99gy...@lambda.jcn.srcf.net>,
Christophe Rhodes <cs...@cam.ac.uk> wrote:
>
>But until someone comes up with a proposal dealing with whether FOO
>is in a "mandatory tail-call" position in the following:
>   (let ((*print-length* 3))
>     (foo))

*print-length* being a special variable, right? Then,
the call to foo is not in a tail-call position.

The other common objection I've heard is that when you
trace a procedure that calls itself tail-recursively,
the tail-calls cease to be tail-calls. That
would be precisely correct and expected.

Tail-call elimination (TCE) only eliminates tail-calls.
It obviously cannot and should not eliminate non-tail
calls (what does that mean anyway? [1]). In cases
where a call "looks like" a tail-call but really isn't,
TCE won't eliminate it; but neither would a non-TCE
scenario, so you're losing nothing.

[1] That said, I do know of a rather academic paper
from the early 1980s that treats getting something like
special variables to not interfere with
tail-calls, but it may not be of interest here.
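To make the distinction concrete, a Common Lisp sketch (whether any given
implementation actually merges the tail call depends on its optimization
settings):

```lisp
;; Tail call: the recursive call's value is returned unchanged, so an
;; implementation doing TCE can reuse the current frame.
(defun len-acc (lst n)
  (if (null lst)
      n
      (len-acc (cdr lst) (1+ n))))   ; tail position

;; Non-tail call: 1+ still has work to do after the call returns, so
;; the frame must be kept; TCE cannot and should not touch this.
(defun len (lst)
  (if (null lst)
      0
      (1+ (len (cdr lst)))))
```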

>[ bearing in mind that FOO could end up throwing an error, putting you
> into the debugger, where complex restarts may be available ]
>and sundry other issues, then I think describing people who don't
>actively work towards implementing more tail-call elimination by
>implication as "unreasonable" is a little bit rich.

Actually I was using "voices of reason" quite
literally, not as a synecdoche for reasonable
individuals. The effect of a recent re-reading of
Julian Jaynes, I guess. Let's not manufacture an
offense where none was intended.

To clarify: I too doubt that TCE will become part of
CL. In a hypothetical hierarchy however, I would class
adding call/cc to CL to be even lower than adding TCE.
To reiterate my answer to the OP, I do think that
adding call/cc to CL would be extremely low-priority
and have minimal (or at least non-worthwhile)
payoff. IOW: while both TCE and call/cc are unlikely,
call/cc is unlikelier. People also have a better
understanding of TCE than call/cc, and in languages
that have both, tend to use TCE oftener. Hence my
relativizing.

Erik Naggum

Oct 3, 2002, 10:01:40 AM
* Dorai Sitaram

| *print-length* being a special variable, right? Then, the call to foo is
| not in a tail-call position.

This is not as obvious as you think, either. The unwinding that would
take place upon a return may well include special variable unbinding and
unwind-protect execution so a call to a function inside both may well be
able to inherit the call frame of the caller. The smart way to ensure
that special variables are unbound and unwind-protect forms are always
run is to arrange for the function to return to a system function that
does the necessary clean-up. When you do this, neither special bindings
nor unwind-protect forms would stand between you and tail-call merging.

| [1] That said, I do know of a rather academic paper from the early 1980s
| that treats getting something like special variables to not interfere
| with tail-calls, but it may not be of interest here.

Perhaps it is already widely implemented.

| To clarify: I too doubt that TCE will become part of CL.

It is already part of the implementations under suitable optimization.
Unlike the Scheme world, Common Lisp vendors are allowed to be smarter
than the specification.

| IOW: while both TCE and call/cc are unlikely, call/cc is unlikelier.

Perhaps you should try to study reality a little? Or is that unlikely?

Tim Bradshaw

Oct 3, 2002, 10:01:52 AM
* Dorai Sitaram wrote:
> Tail-call elimination (TCE) only eliminates tail-calls.
> It obviously cannot and should not eliminate non-tail
> calls (what does that mean anyway? [1]). In cases
> where what "looks like" a tail-call but really isn't,
> TCE won't eliminate it, but so wouldn't a non-TCE
> scenario either, so you're losing nothing.

In order to add TCE *to the language definition* you have to specify
*in the definition* all the places that are in tail position; it's
not enough to say `we expect tail calls to be eliminated'. Given the
number of WITH-x style constructs in CL and the many possible
implementation techniques, this is probably rather a lot of work, and
it's probably not possible to arrive at anything other than a ludicrously
conservative definition without rendering several implementations
*which currently do TCE* non-conforming.

The current situation is that almost all implementations already
eliminate tail calls in many cases: the difference between that and
having them specified in the language is mostly just spending a whole
lot of money, and rendering some implementations that are currently
conforming non-conforming.

--tim

Barry Margolin

Oct 3, 2002, 10:11:02 AM
In article <ey3smzn...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>In order to add TCE *to the language definition* you have to specify
>*in the definition* all the places that are in tail position - It's
>not enough to say `we expect tail calls to be eliminated'. Given the
>number of WITH-x style constructs in CL and the many possible
>implementation techniques, this is probably rather a lot of work, and
>it's probably not possible to arrive at other than a ludicrously
>conservative definition without rendering several implementations
>*which currently do TCE* non-conforming.

If the standard were conservative, then I'd expect that most of these
implementations would conform. After all, the standard would only specify
the places where TCE is *required*; that wouldn't prohibit TCE in other
places (just as the current standard, which doesn't require any TCE, does
not prohibit these implementations that perform TCE).

Joe Marshall

Oct 3, 2002, 11:01:03 AM
Pascal Costanza <cost...@web.de> writes:

> Tim Josling wrote:
> > I learned scheme first and now I am moving to lisp. I am wondering
> > why there is no call/cc in lisp. It seems to be potentially
> > useful. For example, in the book 'On Lisp', a significant part of this
> > excellent book seems to consists of reinventing various cut down versions of
> > call/cc.
>
> The standard argument I have read several times is that call/cc turned
> out to be too powerful and is hard to use correctly. For example, I
> recall reading a paper that formally proved that continuations and
> exception handling are _not_ compatible. If you are interested I can
> try to find out again which paper this was.

If I recall correctly, the `proof' was not fully general.

Duane Rettig

Oct 3, 2002, 12:00:05 PM

[ This is a terminology gripe and a request for different language
usage]

ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> Tail-call elimination (TCE) only eliminates tail-calls.
> It obviously cannot and should not eliminate non-tail
> calls (what does that mean anyway? [1]). In cases
> where what "looks like" a tail-call but really isn't,
> TCE won't eliminate it, but so wouldn't a non-TCE
> scenario either, so you're losing nothing.

I detest the usage of the term Tail Call _Elimination_. I don't
know how old it is, and it seems to be as perennial as the "lisp
is big and slow" myth. Tail calls are never "eliminated" (as in
turned into nothing), they are changed into something else (as
in changed from a call to a jump). Using the word "elimination"
causes confusion. [I suppose that a tail call could be eliminated
if the compiler could prove that the called function returns exactly
the same value as the caller and has no side effects, but that kind
of "tail-call-elimination" would be less interesting than its greater
and more general category of "call-elimination", and is not what we're
discussing here.]

There are several ways to say what is being meant here in a more
accurate or less confusing way. "Tail Call Optimization" is perfectly
good, as is "Tail Merging" (the former is more descriptive, and the
latter is self-defining because it has no confusing predefinitions).

You might object to this request based on your own definition of
"elimination". However, my contention is that it is confusing to
others, and given alternatives, should not be used.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Duane Rettig

Oct 3, 2002, 12:00:06 PM
Barry Margolin <bar...@genuity.net> writes:

> In article <ey3smzn...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
> >In order to add TCE *to the language definition* you have to specify
> >*in the definition* all the places that are in tail position - It's
> >not enough to say `we expect tail calls to be eliminated'. Given the
> >number of WITH-x style constructs in CL and the many possible
> >implementation techniques, this is probably rather a lot of work, and
> >it's probably not possible to arrive at other than a ludicrously
> >conservative definition without rendering several implementations
> >*which currently do TCE* non-conforming.
>
> If the standard were conservative, then I'd expect that most of these
> implementations would conform. After all, the standard would only specify
> the places where TCE is *required*; that wouldn't prohibit TCE in other
> places (just as the current standard, which doesn't require any TCE, does
> not prohibit these implementations that perform TCE).

This depends on your definition of conservative. I believe that the
standard _is_ conservative in not forcing implementations into a
particular Tail Call Optimization practice. I also agree with Tim
that many (if not all) of the native-code CL compilers already provide
some sort of Tail Call Optimization under various circumstances.

Dorai Sitaram

Oct 3, 2002, 1:07:14 PM
to
In article <44rc34...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>
>[ This is a terminology gripe and a request for different language
>usage]...
>
>I detest the usage of the term Tail Call _Elimination_...

>
>There are several ways to say what is being meant here in a more
>accurate or less confusing way. "Tail Call Optimization" is perfectly
>good, as is "Tail Merging" (the former is more descriptive, and the
>latter is self-defining because it has no confusing predefinitions).
>
>You might object to this request based on your own definition of
>"elimination". However, my contention is that it is confusing to
>others, and given alternatives, should not be used.

I am just using what seems to be the most standard
term. I have no particular liking for or investment
in the term.

I do think Tail Call Optimization is a clear misnomer
_for what we're talking about_, because optimization
suggests optionality. _Mandatory_ TCE/O is what allows
certain drastic decisions about program writing to be
made. (I'm being purely morphological here,
Duane, not entering into whether those decisions
are good or bad. I know you and I have differing
opinions on their goodness, and that's OK.)

If it were only a possible and sometime optimization,
those decisions simply can't be made. This latter
kind of TCO may be a good thing in its own right,
but it is a different subject, and interesting
primarily to implementors, as it doesn't have the same
opportunity for impact on the user's programming
approach.

I'd use Tail Merging if it had guaranteed
understandability. (To my ear, it sounds like
something to do with list-substructure sharing, or
maybe some recherche sorting routine, etc.)

Thomas F. Burdick

Oct 3, 2002, 1:16:47 PM
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> I'd use Tail Merging if it had guaranteed
> understandability. (To my ear, it sounds like
> something to do with list-substructure sharing, or
> maybe some recherche sorting routine, etc.)

How about Tail Call Merging? The explicit "call"[*] makes it pretty
clear what's being talked about, and "merging" really is a good word
for what's being talked about.

[*] It's particularly nice that "call" makes sense bound to either
"tail" or "merging". We're talking about tail calls, and the type of
merging we're talking about is call merging.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Joe Marshall

Oct 3, 2002, 1:59:20 PM
to
Duane Rettig <du...@franz.com> writes:

> I detest the usage of the term Tail Call _Elimination_. I don't
> know how old it is, and it seems to be as perennial as the "lisp
> is big and slow" myth. Tail calls are never "eliminated" (as in
> turned into nothing), they are changed into something else (as
> in changed from a call to a jump). Using the word "elimination"
> causes confusion.

Can we go back to calling it the "JRST hack"?

Duane Rettig

Oct 3, 2002, 3:00:01 PM
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> In article <44rc34...@beta.franz.com>,
> Duane Rettig <du...@franz.com> wrote:
> >
> >[ This is a terminology gripe and a request for different language
> >usage]...
> >
> >I detest the usage of the term Tail Call _Elimination_...
> >
> >There are several ways to say what is being meant here in a more
> >accurate or less confusing way. "Tail Call Optimization" is perfectly
> >good, as is "Tail Merging" (the former is more descriptive, and the
> >latter is self-defining because it has no confusing predefinitions).
> >
> >You might object to this request based on your own definition of
> >"elimination". However, my contention is that it is confusing to
> >others, and given alternatives, should not be used.
>
> I am just using what seems to be the most standard
> term. I have no particular liking for or investment
> in the term.
>
> I do think Tail Call Optimization is a clear misnomer
> _for what we're talking about_, because optimization
> suggests optionality. _Mandatory_ TCE/O is what allows
> certain drastic decisions about program writing to be
> made. (I'm being purely morphological here,
> Duane, not entering into whether those decisions
> are good or bad. I know you and I have differing
> opinions on their goodness, and that's OK.)

Yes, I understand your desire to reject Tail Call Optimization
for the purposes that you are seeking. So reject it and
move to something having to do with Merging. Thomas Burdick
suggested Tail Call Merging, and although I have seen this
term used as well and have probably even used it myself, out
of laziness I didn't mention all of the possibilities,
including something like Tail Call Iterative Rewriting (I've
never seen this one - I just made it up).

> If it were only a possible and sometime optimization,
> those decisions simply can't be made. This latter
> kind of TCO may be a good thing in its own right,
> but it is a different subject, and interesting
> primarily to implementors, as it doesn't have the same
> opportunity for impact on the user's programming
> approach.

And that is precisely why we in the Common Lisp realm are
so picky about the terminology. I use Tail Call Optimization
and Tail [Call] Merging interchangeably, because from my point of
view optimizations are the only tradeoffs to be made in tail-call
rewriting. And if you really think that such is not the case in
Scheme, then why don't you consider the call to bar here:

(defun foo ()
  (let ((*print-pretty* t))
    (bar)))

to be in tail position?

I made it a CL example to demonstrate that it is really a
point-of-view situation; this call is most certainly in
tail-position, as I believe Erik mentioned before. It's just that
we CL implementors don't tend to change the call to a jump, which
would optimize stack usage, because the tradeoffs in extra code
needed to ensure the proper bindings when the called function
returns just don't make it practical to do such an optimization.

To get it back to Scheme, if the Scheme equivalent of the above
function doesn't consider such a call to be in tail position,
then Scheme is creating its own specialized definition of
Tail Position. No more or less correct than the traditional
definition used in Common Lisp circles, but in fact different and
thus confusing when the terminology is too specific.

When I first heard that Scheme mandated the rewriting of all tail
calls, I was amazed, because I knew how hard that is to do, and I
thought that they must be extremely hairy implementations to get
it right. But then I learned, in a similar conversation as this
one, that Scheme punts on some of the harder tail-call situations
by defining them _not_ to be tail calls. This is perfectly acceptable
to me, but please don't bring that terminology unqualified here.
[You can even call it "Scheme Tail Call Elimination", because it
then carries with it all of the definitional requirements that
Scheme gives to the term, and thus ceases to be confusing (unless
someone wants to actually get into Scheme vs CL differences, which
we don't want to do here, right? :-) It would also then reduce the
tendency for Scheme advocates to try to get STCE into CL, because
CL would never do STCE (otherwise it would become CLTCE :-)]

> I'd use Tail Merging if it had guaranteed
> understandability. (To my ear, it sounds like
>something to do with list-substructure sharing, or
> maybe some recherche sorting routine, etc.)

Oh, isn't that how they implement them (and call/cc) in Scheme? :-) :-)

Dorai Sitaram

Oct 3, 2002, 4:16:53 PM
to
In article <4vg4j2...@beta.franz.com>,
Duane Rettig <du...@franz.com> wrote:

>ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>> If it were only a possible and sometime optimization,
>> those decisions simply can't be made. This latter
>> kind of TCO may be a good thing in its own right,
>> but it is a different subject, and interesting
>> primarily to implementors, as it doesn't have the same
>> opportunity for impact on the user's programming
>> approach.
>
>And that is precisely why we in the Common Lisp realm are
>so picky about the terminology. I use Tail Call Optimization
>and Tail [Call] Merging interchangeably, because from my point of
>view optimizations are the only tradeoffs to be made in tail-call
>rewriting.

Yes, I am indeed describing something that is
different from optimization. To bring out the contrast
in stark terms: It is quite possible, and even quite
probable, for an unoptimized, slow-as-molasses language
to have TCE, and a really superfast language to not
have it. (To those who want to pipe up with a witty
example: yes, I know.)

A consequence of TCE being different from the
optimizations you are talking about is that there
are programs -- and I am _not_ just talking about
straight iterations -- that would become incorrect,
rather than just unoptimized, if the user wrote them
expecting TCE but the language didn't support it.

Another aspect of this is that if you added TCE to a
language for optimization reasons, you could
easily be making a big mistake.

>And if you really think that such is not the case in
>Scheme, then why don't you consider the call to bar here:
>
>(defun foo ()
> (let ((*print-pretty* t))
> (bar)))
>
>to be in tail position?

My reason is that the call to bar wouldn't be the last
thing that the enclosing call to foo does.

Paul Dietz

Oct 3, 2002, 4:36:50 PM
to
Dorai Sitaram wrote:

> >And if you really think that such is not the case in
> >Scheme, then why don't you consider the call to bar here:
> >
> >(defun foo ()
> > (let ((*print-pretty* t))
> > (bar)))
> >
> >to be in tail position?
>
> My reason is that the call to bar wouldn't be the last
> thing that the enclosing call to foo does.


Do you view this as a tail call?

(defun foo ()
  (let ((x ...))
    (declare (dynamic-extent x))
    (bar x)))


Paul

Duane Rettig

Oct 3, 2002, 5:00:01 PM
to
Joe Marshall <j...@ccs.neu.edu> writes:

Heh. Then you would have an architecture difference between assembler
mnemonics. Or perhaps yet another CAR/CDR crystallization.

Besides, "hack" has taken on bad connotations in some circles :-)


We tried to be as descriptive as possible, even in the
compiler switch names we used:

CL-USER(1): (apropos "MERGE-SWITCH")
COMP:TAIL-CALL-NON-SELF-MERGE-SWITCH value: #<Function SPEED > 1 AND DEBUG < 2 @ #x710e928a>
COMP:TAIL-CALL-SELF-MERGE-SWITCH value: #<Function SPEED > 0 @ #x710e9122>
CL-USER(2):

Note that we even distinguish between self-calls and non-self
calls, because there are different appropriate defaults for
each, to obtain optimal compromise between speed and debug.

Tim Josling

Oct 3, 2002, 5:00:25 PM
to
Edi Weitz wrote:
> ...
> It may be worthwhile reading this article
>
> <http://world.std.com/~pitman/pfaq/unwind-protect-vs-continuations.html>
>
> by Kent Pitman.
>
> Edi.

Thanks.

Tim Josling

Tim Josling

Oct 3, 2002, 5:08:40 PM
to
Will Deakin wrote:
>
>
> > I don't know enough about implementing functional languages to say for
> > sure...
> Hmmm. I could read this as saying that common lisp *is* a functional
> language. Are you sure?
>
> :)w

In the sense that 'words mean whatever I want them to mean', yes. That's all
I'm saying.

Tim Josling

Tim Josling

Oct 3, 2002, 5:10:44 PM
to
Erik Naggum wrote:
>
> * Dorai Sitaram
> | The voices of reason
>
> Tim, you are well advised to ignore this nutjob if you want to learn and
> understand Common Lisp. Dorai is a known Scheme troll in this newsgroup
> and wastes no opportunity to tell us how superior Scheme is to actually
> getting work done.
>
> --
> Erik Naggum, Oslo, Norway
>

You learn more from your enemies than your friends, I have found.

It would indeed be nice to have tail recursion guaranteed in specific known
circumstances, but maybe that's not reasonable.

Tim Josling

Tim Josling

Oct 3, 2002, 5:19:41 PM
to


I was positing three possible reasons why call/cc was not in lisp. I was
saying 'here are three possible reasons, are they valid?'

So in no sense was I 'ignoring' the possible inefficiency of call/cc in
scheme. Hypothetical reason (2) actually suggests that maybe call/cc is too
hard to implement, period.

Why Erik misunderstood, I don't know. Perhaps I was not clear, perhaps his
English let him down, perhaps his apparent enthusiasm for seeing others as
'untermensch' blinded him to what I said. Just to be clear here, I don't know
which of these is true, or even if any of these explanations are true.

Tim Josling

Will Deakin

Oct 3, 2002, 5:28:31 PM
to
Tim Josling wrote:
> You learn more from your enemies than your friends, I have found.
Maybe you need to be a bit more careful about who your friends are[1].

> It would indeeed be nice to have tail recursion guaranteed in specific known
> circumstances, but maybe that's not reasonable.

As has been pointed out elsewhere, in all implementations that I know
of[2] it *is* `guaranteed.[3]' It is simply that this optimisation isn't
mandated in the ANSI standard.

:)w

[1] On re-reading I now realise that I sound like a primary school
teacher ;)
[2] I presume somebody will now tell me of an implementation that under no
circumstances carries out this optimisation.
[3] ...and yes, I do get to choose what guaranteed means... :)

Marc Spitzer

Oct 3, 2002, 5:29:05 PM
to
Tim Josling <t...@melbpc.org.au> wrote in
news:3D9CB254...@melbpc.org.au:

> You learn more from your enemies than your friends, I have found.

I have found the reverse to be true.

>
> It would indeed be nice to have tail recursion guaranteed in specific
> known circumstances, but maybe that's not reasonable.
>

Why? From what little I understand, the need for tail-call elimination
came about to help emulate looping constructs. They needed to come
up with some way to say "do this 1,000,000,000 times" and not have to
save 1,000,000,000 stack frames doing it. CL does not have this problem:
you use loop or one of the others to get it done.
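That contrast can be sketched in a few lines of CL (a hypothetical illustration, not code from the thread): the first function only runs in bounded stack space if the compiler merges the self tail call, while the second is iterative by construction.

```lisp
;; Tail-recursive style: needs tail-call merging to run in bounded stack.
(defun count-down-rec (n)
  (if (zerop n)
      'done
      (count-down-rec (1- n))))   ; self call in tail position

;; CL iterative style: bounded stack by construction, no merging needed.
(defun count-down-loop (n)
  (loop repeat n)                 ; LOOP does the iteration directly
  'done)
```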

marc

> Tim Josling
>

Tim Josling

Oct 3, 2002, 5:27:55 PM
to
Erik Naggum wrote:
>
> * Tim Josling

> | (Call/cc is a way to save the current state of the program in a


> | variable. You can use the variable later on onto resume processing at
> | that point. It can be used for coroutines, exception handling,
> | nondeterministic search, etc.)
>
> None of us know Scheme at all, so thanks for this lecture.
>

No, but some may not know scheme to the extent of knowing call/cc. Therefore I
included a brief description.

I can't see any rational reason for objecting to that. I knew from experience
that some people object to having the obvious explained to them, so I put the
explanation in parentheses.

Tim Josling

Dorai Sitaram

Oct 3, 2002, 5:37:50 PM
to
In article <3D9CAA62...@motorola.com>,

Paul Dietz <paul.f...@motorola.com> wrote:
>Do you view this as a tail call?
>
>(defun foo ()
> (let ((x ...))
> (declare (dynamic-extent x))
> (bar x)))

Yes, the call to `bar' is a tail call, unless I'm
thoroughly misunderstanding what (declare
(dynamic-extent ...)) does.

It depends on whether the variable x needs to be
restored to some "previous" value or not once its `let'
is done. I think not in this case, but maybe I am
mistaken in so thinking. Frankly, I've never used
dynamic-extent in my own code -- all my specials
have tended to be globals that I had carefully defvar'd
before first use.

Will Deakin

Oct 3, 2002, 5:40:21 PM
to
Tim Josling wrote:
>>Hmmm. I could read this as saying that common lisp *is* a functional
>>language. Are you sure?
> In the sense that 'words mean whatever I want them to mean', yes. That's all
> I'm saying.
Enjoy.

:)w

Duane Rettig

Oct 3, 2002, 6:00:01 PM
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> In article <4vg4j2...@beta.franz.com>,
> Duane Rettig <du...@franz.com> wrote:
> >ds...@goldshoe.gte.com (Dorai Sitaram) writes:
> >> If it were only a possible and sometime optimization,
> >> those decisions simply can't be made. This latter
> >> kind of TCO may be a good thing in its own right,
> >> but it is a different subject, and interesting
> >> primarily to implementors, as it doesn't have the same
> >> opportunity for impact on the user's programming
> >> approach.
> >
> >And that is precisely why we in the Common Lisp realm are
> >so picky about the terminology. I use Tail Call Optimization
> >and Tail [Call] Merging interchangably, because from my point of
> >view optimizations are the only tradeoffs to be made in tail-call
> >rewriting.
>
> Yes, I am indeed describing something that is
> different from optimization. To bring out the contrast
> in stark terms: It is quite possible, and even quite
> probable, for an unoptimized, slow-as-molasses language
> to have TCE, and a really superfast language to not
> have it. (To those who want to pipe up with a witty
> example: yes, I know.)

[I notice you're still calling it TCE, instead of STCE.]

> A consequence of TCE being different from the
> optimizations you are talking about is that there
> are programs -- and I am _not_ just talking about
> straight iterations -- that would become incorrect,
> rather than just unoptimized, if the user wrote them
> expecting TCE but the language didn't support it.

I'm assuming you mean STCE here, and of course, since Scheme
_defines_ guarantees, then it stands to reason that programmers
will take advantage of those guarantees. This has to do with
specification, not optimization, and this is not what I'm talking
about.

I am talking about the actual _design_ decision Scheme made
as to what constitutes tail-position. I submit that such a
decision _is_ a performance tradeoff. But we should set this
aside until we take care of the more pertinent question, below.

> Another aspect of this is that if you added TCE to a
> language for optimization reasons, you could
> easily be making a big mistake.

What version of TCE are you talking about here, and to what
mistake are you referring? (I know of several potential pitfalls,
and the language would have to take care about what the vendor and
user can and cannot assume, but I want to hear from you what mistake
it could be).

> >And if you really think that such is not the case in
> >Scheme, then why don't you consider the call to bar here:
> >
> >(defun foo ()
> > (let ((*print-pretty* t))
> > (bar)))
> >
> >to be in tail position?
>
> My reason is that the call to bar wouldn't be the last
> thing that the enclosing call to foo does.

Why does foo have to do anything? It would no longer even be on the
stack! Why can't the mechanism that returns from a function see to
it that bindstacks are kept consistent? I don't know the Lisp Machine
architectures intimately, so I don't know if they ever had this
capability, but I could easily envision such an architecture, and it
could even be emulated in GP hardware. In short, if the unbinding
of specials were part of the Lisp calling convention, then the above
example would indeed describe a tail-position situation.

Jens Axel Søgaard

Oct 3, 2002, 7:13:44 PM
to
Tim Josling wrote:
> "Jens Axel Søgaard" wrote:
>>
>> Erik Naggum wrote:
>>> * Tim Josling
>>
>>>> Three possibilities as to why call/cc is not in lisp
>>>> occur to me:
>>
>>>> 3. It can be implemented efficiently in scheme,

> So in no sense was I 'ignoring' the possible inefficiency
> of call/cc in scheme.

Then I must have misunderstood 3.

--
Jens Axel Søgaard

Paul F. Dietz

Oct 3, 2002, 7:24:05 PM
to
Dorai Sitaram wrote:

>>(defun foo ()
>> (let ((x ...))
>> (declare (dynamic-extent x))
>> (bar x)))
>
>
> Yes, the call to `bar' is a tail call, unless I'm
> thoroughly misunderstanding what (declare
> (dynamic-extent ...)) does.
>
> It depends on whether the variable x needs to be
> restored to some "previous" value or not once its `let'
> is done. I think not in this case, but maybe I am
> mistaken in so thinking. Frankly, I've never used
> dynamic-extent in my own code -- all my specials
> have tended to be globals that I had carefully defvar'd
> before first use.


dynamic-extent doesn't have anything to do with special
variables. It tells the compiler that it can allocate
the object(s) referred to by x on the stack instead of on
the heap. If the ... were, say, (list 10), then this list
could be stack allocated.
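For instance, a minimal sketch of the usual use (SUM-PAIR is a made-up name for illustration):

```lisp
(defun sum-pair (a b)
  (let ((temps (list a b)))           ; this fresh list may go on the stack...
    (declare (dynamic-extent temps))  ; ...because we promise it won't escape
    (reduce #'+ temps)))              ; used only within the LET's extent
```

The declaration is purely a permission to the compiler; the function behaves identically with or without it, it just may allocate less on the heap.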

Paul

Geoff Miller

Oct 3, 2002, 8:08:50 PM
to
Erik Naggum <er...@naggum.no> wrote in message news:<32426425...@naggum.no>...

> It is already part of the implementations under suitable optimization.
> Unlike the Scheme world, Common Lisp vendors are allowed to be smarter
> than the specification.

What is the basis for this claim? I know of no such restrictions on
Scheme implementations.

> Perhaps you should try to study reality a little? Or is that unlikely?

I don't see how this adds to your posted contribution. It's rude,
frankly. You could stand to be a little more constructive.

Geoff

ozan s yigit

Oct 3, 2002, 8:13:51 PM
to
Duane Rettig <du...@franz.com> writes:

> I'm assuming you mean STCE here, and of course, since Scheme

> _defines_ guarantees, [...]

that is r5rs section on proper tail recursion.

> I am talking about the actual _design_ decision Scheme made
> as to what constitutes tail-position.

this is an interesting point. are there many circumstances (other than
those defined in R5RS, say) where a tail-recursive call could be converted
to iteration but scheme is somehow forbidden to do such a transformation?
[eg. see the explicit note on the section about "tail recursion" in
r5rs that i will not repeat here.] scheme implementations too are
allowed to be smart you know... (you should talk to kent dybvig :)

> ... In short, if the unbinding


> of specials were part of the Lisp calling convention, then the above
> example would indeed describe a tail-position situation.

and it seems to me that there would be nothing stopping any scheme
implementation from taking advantage of that situation.

oz
---
freedom has a mental cost. -- peter roosen-runge

Rob Warnock

Oct 4, 2002, 4:54:20 AM
to
Joe Marshall <j...@ccs.neu.edu> wrote:
+---------------

| Duane Rettig <du...@franz.com> writes:
| > I detest the usage of the term Tail Call _Elimination_.
|
| Can we go back to calling it the "JRST hack"?
+---------------

Don't you really mean the "PJRST" hack? ;-}


-Rob

p.s. Hmmm... Maybe not. The PJRST macro (opdef, really) was used
as early as TOPS-10 Monitor 4S72 (circa 1970), but it may not have
been officially mandated for denoting TCE until 5.0 (~1972?). The
"JRST hack" was probably used long before that in ITS/MacLisp, yes?

-----
Rob Warnock, PP-ASEL-IA <rp...@rpw3.org>
627 26th Avenue <URL:http://www.rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Tim Bradshaw

Oct 4, 2002, 5:40:03 AM
to
* Duane Rettig wrote:

> Why does foo have to do anything? It would no longer even be on the
> stack! Why can't the mechanism that returns from a function see to
> it that bindstacks are kept consistent? I don't know the Lisp Machine
> architectures intimately, so I don't know if they ever had this
> capability, but I could easily envision such an architecture, and it
> could even be emulated in GP hardware. In short, if the unbinding
> of specials were part of the Lisp calling convention, then the above
> example would indeed describe a tail-position situation.

It might in some annoying technical sense be in tail-position, but it
isn't in the useful sense that a recursion through this tail-position
call will not use up unbounded memory. The memory eaten will be
stored restorations of special variables, not stack, but who cares?

I suppose that a very smart system might be able to keep a record of
whether the previous restoration was a `tail restoration' and if both
it and the current one are, then don't add a new one, since there is
no purpose in doing so. This test has to be done at runtime as far as
I can see, because you can't know statically who is calling you.

(I apologise for the somewhat made-up terminology in this article).

--tim

Frode Vatvedt Fjeld

Oct 4, 2002, 6:41:03 AM
to
Christophe Rhodes <cs...@cam.ac.uk> writes:

> If the "voices of reason" could come up with a specification of what
> "mandatory tail-call elimination" actually means, phrased in a
> fashion that is actually understandable by implementors, ideally
> with a reference implementation of the specification's semantics
> (for cmucl or one of the other "free" implementations, for extra
> credit, so everything is clear), then perhaps those people working
> on CL implementations who seem to be behaving so "unreasonably"
> might actually sit up and take notice.

I don't make any claims about how reasonably this is voiced, but
still:

(defmacro goto-function-from (from-function to-function &rest args)
  `(return-from ,from-function (funcall ,to-function ,@args)))

and/or:

(defmacro goto-function (&environment env function &rest args)
  `(return-from ,(implementation::function-block-name env)
     (funcall ,function ,@args)))

plus:

(define-compiler-macro goto-function
    (&whole form &environment env function &rest args)
  (cond
    ((and (not compiler:*never-merge-calls-p*)
          (not (compiler::current-stack-frame-needs-unwinding env)))
     `(implementation::replace-current-stack-frame ,function ,@args))

    ;; maybe more (implementation-specific) clever things can be
    ;; tried at this point.

    (t (when compiler:*warn-when-goto-not-mergable-p*
         (warn 'compiler::cant-merge-goto-function form))
       form)))

And, possibly (i.e. not mandatory, but optionally), in the compiler,
something along the lines of this:

(defun compiler::compile-funcall (form env ...)
  (let ((function (car form))
        (args (cdr form)))
    (if (and compiler:*automatically-merge-tail-calls-p*
             (compiler::tail-call-position-p env ..))
        (compile-form `(goto-function ,function ,@args) env ...)
        (compiler::compile-funcall-normally form env ...))))

The macros define the semantics of the goto-function (-from) operator,
and show that it can easily be implemented in any existing CL, compiling
or interpreting. In fact, I believe it would be possible, having this
operator implemented like the compiler-macro suggests, to implement an
eval that behaves similarly to a tail-call-merging compiler wrt
stack space. (Whether this is something one would want to have, I'm
not sure.)

In my opinion, mandatory automatic tail-call merging is not desirable
at all, because I very rarely use recursion to achieve iteration that
is linear to the input. So I don't think the (expectedly) very slight
increase in efficiency makes up for the reduced debuggability. By
having an explicit operator like goto-function, I think one gets
pretty much the best of both worlds.

The goto-function operator could also be quite valuable in a certain
programming style I sometimes find myself using, where the program is
like a data-driven pipeline and I have lots of relatively small
functions that mostly do data dispatching. In such circumstances, I
find that the call stacks tend to become very long, and that they are
more often than not in the way of efficient debugging. If I'd had
goto-function, the data dispatching would become more like a primitive
dispatch, and the call stacks would, I suspect, be a closer match to
my mental model of what goes on.

--
Frode Vatvedt Fjeld

Dorai Sitaram

Oct 4, 2002, 9:38:23 AM
to
In article <4n0pv2...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
>ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>
>> In article <4vg4j2...@beta.franz.com>,
>> Duane Rettig <du...@franz.com> wrote:
>> >... why don't you consider the call to bar here:

>> >
>> >(defun foo ()
>> > (let ((*print-pretty* t))
>> > (bar)))
>> >
>> >to be in tail position?
>>
>> My reason is that the call to bar wouldn't be the last
>> thing that the enclosing call to foo does.
>
>Why does foo have to do anything? It would no longer even be on the
>stack! Why can't the mechanism that returns from a function see to
>it that bindstacks are kept consistent? I don't know the Lisp Machine
>architectures intimately, so I don't know if they ever had this
>capability, but I could easily envision such an architecture, and it
>could even be emulated in GP hardware. In short, if the unbinding
>of specials were part of the Lisp calling convention, then the above
>example would indeed describe a tail-position situation.

That would of course be all right. If specials were
indeed "neat" wrt the merging of tail-calls, then the
number of recursively definable iterative processes
effectively increases. Impls that do this would
still be conforming to a spec that didn't require it.

At the same time, mandating this in a spec would
put at a disadvantage those impls that don't or can't
do this trick -- no doubt the motivation for folks to
have put forward the example (similar to the) above in
the first place -- and for other folks realistically to
disqualify such calls as tail-calls. Is tail-call merging
without keeping specials neat still of worth to the
user? I don't know. There may not be a definitive
answer.

BTW, I've only ever seen CL specials thrown up as the
sticking point. What, if any, are the others?

Frode Vatvedt Fjeld

Oct 4, 2002, 10:08:36 AM
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> BTW, I've only ever seen CL specials thrown up as the sticking
> point. What, if any, are the others?

unwind-protect and catch, at least.
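A quick sketch of the unwind-protect case (hypothetical example; BAR and RELEASE-THE-RESOURCE are made-up stand-ins): the call to bar is syntactically last, yet it cannot be merged, because the cleanup form must still run after bar returns.

```lisp
;; Hypothetical helpers so the sketch is self-contained.
(defun bar () 'result)
(defun release-the-resource () nil)

(defun foo ()
  (unwind-protect
      (bar)                    ; last form in FOO, yet not a mergeable call:
    (release-the-resource)))   ; this cleanup must run after BAR returns
```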

--
Frode Vatvedt Fjeld

Wade Humeniuk

Oct 4, 2002, 10:21:35 AM
to

"Dorai Sitaram" <ds...@goldshoe.gte.com> wrote in message news:ank5kf$gbq$1...@news.gte.com...

> BTW, I've only ever seen CL specials thrown up as the
> sticking point. What, if any, are the others?

Another, for me, is: what does it matter? I write my code, and it runs whether
it does tail-call-whatever or not. I should not have to be concerned whether a
function is tail-call-whatevered. When I use iteration I know everything is
safe, but how do I know what the compiler has done unless I disassemble the
code? Will the compiler act the same from version to version? If I make a
slight change, will it still do it? If I write a large function and make 20-50
mods over a few-day period, it will waste my time deciding.

Sure you can eventually define tail-call-whatever, but just because you can
does not mean you should do it. Keep it simple.

Wade

Erik Naggum

Oct 4, 2002, 10:34:09 AM
to
* Tim Josling

| It would indeeed be nice to have tail recursion guaranteed in specific
| known circumstances, but maybe that's not reasonable.

The remaining question is who should pay for what you think "would be
nice" and why. Not only implementation costs, but just like a world
where you went to the food depot to pick up free food would have a
different supply and kinds of food from a world where you have to pay for
it, a world where tail recursion is costless would have consequences that
you may not be aware of. Likewise, if you are used to them, but can no
longer have them, what do you do? Squatting in front of your grocery
store and demanding free food like you were used to won't cut it.

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Erik Naggum

Oct 4, 2002, 10:37:13 AM
to
* Tim Josling

| Why Erik misunderstood, I don't know. Perhaps I was not clear, perhaps his
| English let him down, perhaps his apparent enthusiasm for seeing others as
| 'untermensch' blinded him to what I said. Just to be clear here, I don't know
| which of these is true, or even if any of these explanations are true.

I do not know if you are a child molester, but if you were, it would
explain some fairly odd things in your defensive behavior. Not that I
claim that you are, of course, but it would just explain some things.

Let's see if you get the point this time. I kind of doubt it.

Erik Naggum

unread,
Oct 4, 2002, 10:39:01 AM10/4/02
to
* Tim Josling

| No, but some may not know scheme to the extent of knowing call/cc.
| Therefore I included a brief description.

This explains some more things about your odd defensive behavior.

| I can't see any rational reason for objecting to that.

And therefore you assume there are none? Fascinating psychology.

Erik Naggum

unread,
Oct 4, 2002, 10:43:41 AM10/4/02
to
* Geoff Miller

| I don't see how this adds to your posted contribution. It's rude,
| frankly. You could stand to be a little more constructive.

In what edition of your etiquette standard is it /not/ rude to behave the
way you do in public? You are a rude and unconstructive arrogant snot.
If you really think you are constructive, then I was to begin with. OK?

Tim Bradshaw

unread,
Oct 4, 2002, 10:45:19 AM10/4/02
to
* Dorai Sitaram wrote:
> BTW, I've only ever seen CL specials thrown up as the
> sticking point. What, if any, are the others?

UNWIND-PROTECT, CATCH / THROW, HANDLER-BIND / HANDLER-CASE. Many
WITH-x constructs (which probably use UNWIND-PROTECT in their
implementation). Probably some others.

--tim

Fred Gilham

unread,
Oct 4, 2002, 10:58:15 AM10/4/02
to

Frode Vatvedt Fjeld <fro...@cs.uit.no> writes:

> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>
> > BTW, I've only ever seen CL specials thrown up as the sticking
> > point. What, if any, are the others?
>
> unwind-protect and catch, at least.

Here are the specific instances where CMUCL doesn't do tail-call
merging. From the CMUCL users manual:

Tail Recursion Exceptions

Although Python is claimed to be "properly" tail-recursive, some
might dispute this, since there are situations where tail
recursion is inhibited:

* When the call is enclosed by a special binding, or

* When the call is enclosed by a `catch' or `unwind-protect',
or

* When the call is enclosed by a `block' or `tagbody' and the
block name or `go' tag has been closed over.

These dynamic extent binding forms inhibit tail recursion because
they allocate stack space to represent the binding.
Shallow-binding implementations of dynamic scoping also require
cleanup code to be evaluated when the scope is exited.

CMU Lisp merges calls in the tail-call position, recursive or not, as
part of what it calls its `local call' optimization. Local call is
not as general as full lisp calls but much more efficient. It's used
to implement block compilation, among other things.
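The first of those exceptions is easy to reproduce (a hedged sketch; COUNT-DOWN is a made-up name):

```lisp
;; COUNT-DOWN is written tail-recursively, but the special binding
;; of *DEPTH* must be undone when each call exits, so the compiler
;; cannot merge the recursive call into a jump.
(defvar *depth* 0)

(defun count-down (n)
  (let ((*depth* (1+ *depth*)))   ; dynamic (special) binding
    (if (zerop n)
        *depth*
        (count-down (1- n)))))    ; syntactically in tail position,
                                  ; but the binding frame pins the stack
```

Rewrite it with a lexical accumulator instead of the special and the call can be merged.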

--
Fred Gilham gil...@csl.sri.com
"Come to me, all who labor and are heavy laden, and I will give you
rest. Take my yoke upon you, and learn from me, for I am gentle and
lowly in heart, and you will find rest for your souls. For my yoke
is easy, and my burden is light." --Jesus of Nazareth

Duane Rettig

unread,
Oct 4, 2002, 11:00:01 AM10/4/02
to
Tim Bradshaw <t...@cley.com> writes:

> * Duane Rettig wrote:
>
> > Why does foo have to do anything? It would no longer even be on the
> > stack! Why can't the mechanism that returns from a function see to
> > it that bindstacks are kept consistent? I don't know the Lisp Machine
> > architectures intimately, so I don't know if they ever had this
> > capability, but I could easily envision such an architecture, and it
> > could even be emulated in GP hardware. In short, if the unbinding
> > of specials were part of the Lisp calling convention, then the above
> > example would indeed describe a tail-position situation.
>
> It might in some annoying technical sense be in tail-position, but it
> isn't in the useful sense that a recursion through this tail-position
> call will not use up unbounded memory. The memory eaten will be
> stored restorations of special variables, not stack, but who cares?

Right, and I would never offer to do such an implementation, but this
continues to strengthen the whole point of my argument which has always
been that it _is_ an optimization decision to exclude specials
definitionally from tail position.

> I suppose that a very smart system might be able to keep a record of
> whether the previous restoration was a `tail restoration' and if both
> it and the current one are, then don't add a new one, since there is
> no purpose in doing so. This test has to be done at runtime as far as
> I can see, because you can't know statically who is calling you.

The bindstack we implement has depth, and the current depth is saved
at various times, for example, it is one of the values saved in a
catchframe before a catch is done, so that the process of throwing
to that catch tag can know how far to unbind. I could envision (but
only if it were mandated - I would probably never do something like
this voluntarily) that the Lisp calling sequence would mandate storing
the bindstack in the stack before every call, and then the return to
that function would need to ensure that the bindstack is the proper
size, and unbind some number of bindings if not. Thus, if FOO calls
BAR, which then binds *print-pretty* and then tail-jumps to BAS, then
FOO will have saved the bindstack index (size) in its frame, and
when BAS returns, FOO's call cleanup (or some utility returning-routine)
would have to unbind one special to get the bindstack to the right
size.
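A sketch of the scenario just described (BAS is a hypothetical name):

```lisp
(defun foo ()
  (bar))                      ; FOO saves the bindstack index here

(defun bar ()
  (let ((*print-pretty* t))   ; pushes one special binding
    (bas)))                   ; tail-jump to BAS; the unbind is
                              ; deferred to FOO's call cleanup
```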

> (I apologise for the somewhat made-up terminology in this article).

No problem; it is all theoretical, and the point is the possibility
vs the practicality of such theory.

Dorai Sitaram

unread,
Oct 4, 2002, 11:12:05 AM10/4/02
to
In article <3D9CD18F...@dls.net>, Paul F. Dietz <di...@dls.net> wrote:
>Dorai Sitaram wrote:
>
>>>(defun foo ()
>>> (let ((x ...))
>>> (declare (dynamic-extent x))
>>> (bar x)))
>>
>> Yes, the call to `bar' is a tail call, unless I'm
>> thoroughly misunderstanding what (declare
>> (dynamic-extent ...)) does.
>
>dynamic-extent doesn't have anything to do with special
>variables. It tells the compiler that it can allocate
>the object(s) refered to by x on the stack instead of on
>the heap. If the ... were, say, (list 10), then this list
>could be stack allocated.

Thank you very much for the explanation. You
have just taught me that (declare dynamic-extent) is a
second problem area (after specials) for identifying
tail-calls in CL.

I did thoroughly misunderstand the nature of
dynamic-extent, and I think now that the call to (bar
x) can't afford to be a tail call after all -- because
the part of the stack representing the value of x can't
afford to be erased until bar has returned.

There may be heroic or even easy ways out of this,
which impl experts may be better positioned to tell
about. An unheroic way I can think of is for the impl
to prefer to make it a valid tail call and simply
ignore the dynamic-extent decl, which (the ignoring
part, I mean) I think is currently allowed for
all declare's except (declare special).
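For reference, the example under discussion, with the conflict spelled out in comments (BAR is hypothetical):

```lisp
(defun foo ()
  (let ((x (list 10)))
    (declare (dynamic-extent x))  ; X's list may live in FOO's frame
    (bar x)))                     ; FOO's frame cannot be reclaimed
                                  ; while BAR can still reach the
                                  ; list allocated in it
```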

Kaz Kylheku

unread,
Oct 4, 2002, 11:59:32 AM10/4/02
to
Tim Josling <t...@melbpc.org.au> wrote in message news:<3D9CB65B...@melbpc.org.au>...

> Erik Naggum wrote:
> > None of us know Scheme at all, so thanks for this lecture.
> >
>
> No, but some may not know scheme to the extent of knowing call/cc. Therefore I
> included a brief description.
>
> I can't see any rational reason for objecting to that. I knew from experience
> that some people object to having the obvious explained to them, so I put the
> explanation in parentheses.

Right, so the readership's affinity for parentheses would
counterbalance its hatred of lectures. I see!

Duane Rettig

unread,
Oct 4, 2002, 12:00:01 PM10/4/02
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

Correct. My whole argument is not about conformance to a spec,
but instead how that spec was derived in the first place. Thus,
whatever the spec requires, the user can count on. In Scheme, some
tail calls can be counted on to be merged, but not all (by reason
of performance, as above). In CL, the spec doesn't try to decide
what kinds of tail calls should be optimizable, so that if someone
suddenly discovers a way to optimize tail-calls in the presence
of specials, or if they build a Lisp Machine that does so in
hardware, that optimization would not go against the definitional
notation that the language has given to what a tail-call is.

> At the same time, mandating this in a spec would
> put at a disadvantage those impls that don't or can't
> do this trick -- no doubt the motivation for folks to
> have put forward the example (similar to the) above in
> the first place -- and for other folks to realistically
> fail such calls as tail-calls. Is tail-call merging
> without keeping specials neat still of worth to the
> user? I don't know. There may not be a definitive
> answer.

Precisely my point! Although I would not word it as possibly not
having a definitive answer; I would prefer to say that the pure
definition of tail-call is too hard to implement, and so the
places where tail-call-merging is done are and should be decided
in an optimizational tradeoff, just as many good things are done
in computing.

> BTW, I've only ever seen CL specials thrown up as the
> sticking point. What, if any, are the others?

Binding of specials is certainly the most up-front and obvious.
Others are:

1. Catches (including unwind-protects). A catch-frame is established
and must be preserved for as long as the catch is active. Thus,
in something like

(defun foo ()
(catch 'blarg
(bas)))

the tail-call to bas cannot be optimized unless the 'blarg catchframe
is saved, because all of the state saved in that catch must be restored
(e.g. stack, bindstack, any callee-saves registers, etc)

2. Any stack allocations. This includes stack-allocated &rest arguments
(i.e. a &rest with a dynamic-extent declaration), any conses or arrays
that have been declared dynamic-extent and are allocated on the stack, and
in the case of Allegro CL, we also stack-allocate flets/labels and/or
their environments when they are determined or declared to be dynamic-extent.
All of these must be evacuated in some way if a tail-call "optimization"
is to be performed. (Obviously, we make the call _not_ to perform the
merging, because the optimization tradeoffs are not good in most cases
for stack-allocated objects.)

3. Argument count. We like to track the standard calling convention
of each architecture as much as possible; many times it saves a lot
of headaches when dealing with debuggers and system exception-handlers.
Some calling conventions specify a pre-allocated frame (many RISC
architectures do this) where enough stack is allocated for the
worst-case function call. For example, if FOO calls BAR with 1 argument,
and BAS with 10 arguments, then enough stack is allocated when FOO is
entered to call with 10 arguments. This static call-frame allows the
stack pointer to be moved exactly twice in a function; once at the
beginning to allocate the stack space, and once at the end to deallocate
it. It is philosophically different than the CISC-based architectures,
which have to carefully optimize all push/pop instructions in order to
achieve decent performance (I must say that Intel have done an impressive
job, but I digress...). Because the stack frames are pre-calculated,
any tail-call to a function with more arguments than have been allocated
would require extra work to set up the stack frame; this is especially
cumbersome when some of the arguments passed are in the stack frame being
reused/destroyed. Thus, in the example above, BAR could easily tail-jump
to any function with up to 10 arguments and still reuse FOO's
argument-space. But if BAR wants to call a function with 11 arguments,
then some rearranging of FOO's argument space (FOO still being an active
function) would be required and would carry with it some issues.

Sorry about this last one; I know of no way to describe it more simply.
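A sketch of the frame-sizing point (all names hypothetical):

```lisp
(defun foo (x)
  (bar x)                         ; 1 argument
  (bas x 2 3 4 5 6 7 8 9 10))    ; 10 arguments
;; FOO's frame is pre-allocated for the worst case (10 arguments),
;; so code called from FOO could tail-jump to any function of up to
;; 10 arguments by reusing that space, but a call with 11 arguments
;; would force FOO's still-live argument area to be rearranged.
```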

Duane Rettig

unread,
Oct 4, 2002, 12:00:01 PM10/4/02
to

[responding to my own article]

Duane Rettig <du...@franz.com> writes:

> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>
> > BTW, I've only ever seen CL specials thrown up as the
> > sticking point. What, if any, are the others?

> 1. Catches (including unwind-protects). A catch-frame is established

I should mention that others have included GO tags that have been
"closed over". I presume that this means that there is some
runtime environment that needs to be adjusted between the GO and its
tag location (otherwise the GO would want to turn into just a jump
instruction). For such situations, we treat these tags as catch
nodes (and their GOs as throws), even if later optimization is done
on them. However, I should have mentioned these situations, which
we regard as if they are catches, for the purposes of
tail-call-optimization decisions.

Dorai Sitaram

unread,
Oct 4, 2002, 12:55:33 PM10/4/02
to
In article <4zntun...@beta.franz.com>,

Duane Rettig <du...@franz.com> wrote:
> 1. Catches (including unwind-protects). A catch-frame is established
>and must be preserved for as long as the catch is active. Thus,
>in something like
>
>(defun foo ()
> (catch 'blarg
> (bas)))
>
>the tail-call to bas cannot be optimized unless the 'blarg catchframe
>is saved, because all of the state saved in that catch must be restored
>(e.g. stack, bindstack, any callee-saves registers, etc)


Thank you for the rest of your post also!
Snipped only in the interest of OCR.

Re the above quoted part: If you will permit a
small eyebrow raise here: I never would have considered
the (bas) above a tail call (of foo). I can't even
discern a potential ambiguity about it (as was the case
with the specials and the dynamic-extent). Perhaps you
are being a smidge too liberal in identifying tail
calls?

Duane Rettig

unread,
Oct 4, 2002, 2:00:01 PM10/4/02
to
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

> In article <4zntun...@beta.franz.com>,
> Duane Rettig <du...@franz.com> wrote:
> > 1. Catches (including unwind-protects). A catch-frame is established
> >and must be preserved for as long as the catch is active. Thus,
> >in something like
> >
> >(defun foo ()
> > (catch 'blarg
> > (bas)))
> >
> >the tail-call to bas cannot be optimized unless the 'blarg catchframe
> >is saved, because all of the state saved in that catch must be restored
> >(e.g. stack, bindstack, any callee-saves registers, etc)
>
>
> Thank you for the rest of your post also!
> Snipped only in the interest of OCR.

You're welcome.

> Re the above quoted part: If you will permit a
> small eyebrow raise here: I never would have considered
> the (bas) above a tail call (of foo). I can't even
> discern a potential ambiguity about it (as was the case
> with the specials and the dynamic-extent). Perhaps you
> are being a smidge too liberal in identifying tail
> calls?

No. It's exactly the same situation. FOO has nothing more to
do by the time it is calling BAR, and BAR's return value(s)
is(are) FOO's return value(s). Any throw to 'blarg must be able
to identify that catch, and that information must be saved
somewhere (just as the special binding has to be saved somewhere,
or a stack-allocated object must be saved somewhere). In all cases,
the amount of storage required to save the extra information might
be (and probably is) less than the storage required by FOO for its
stack. So at the possibly perverse cost of huge amounts of code
executed, some stack could be optimized. However, the perversity
of the cost of this optimization does not remove the conceptual
purity of the fact that this call is indeed in tail position.

I should, however, correct myself with respect to unwind-protects.
Any unwind-protect that has non-null cleanup forms by definition
cannot make a tail-call in its protected form (because the cleanup
form must run afterwards, and thus there is more to do after the
tail call candidate returns).
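In other words (a minimal sketch; BAR and CLEANUP are hypothetical):

```lisp
(defun foo ()
  (unwind-protect
      (bar)        ; not a tail call: FOO still has work to do
    (cleanup)))    ; the cleanup form must run after BAR returns
```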

Thomas F. Burdick

unread,
Oct 4, 2002, 2:19:52 PM10/4/02
to
Tim Bradshaw <t...@cley.com> writes:

Hmm. If you're designing a system where you really care about tail
call merging, I don't see what's so crazy about being sure to reuse
all stack frames (control and binding). If I were designing such a
system, I'd have the value slot of symbols contain the value, a
pointer to the value-frame to restore when this one is undone, and a
pointer that the next frame should use for its restore pointer. That
way, if you know you're setting up a "tail binding", you set the
next-restoration pointer to the same frame as your restoration pointer.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Tim Josling

unread,
Oct 4, 2002, 3:58:47 PM10/4/02
to

Possibility (3) posited that it can be done OK in scheme but not in lisp. (1)
posited that it was not useful, and (2) that it could not be implemented
efficiently in lisp or scheme. So I provided the three possibilities I had
thought of, without necessarily saying one or the other was true or valid.

It was not (3) that you misunderstood but the overall context.

If you misunderstood it, that means I may not have explained it sufficiently
clearly. Or perhaps this occurred because Erik only quoted a fragment of what
I wrote.

Tim Josling

Jim White

unread,
Oct 4, 2002, 8:31:59 PM10/4/02
to
Barry Margolin wrote:
> In article <ey3n0pw...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>
>>And I think your take on `the Lisp approach' is basically completely
>>backward, at least as regards CL. The CL approach is to be pragmatic
>>- implement what can be agreed on and what can be seen to be
>>efficiently implementable and useful: don't implement stuff that
>>suddenly opens huge questions about what everything means, or whether
>>it can be implemented efficiently.
>
>
> I agree with Tim here. The CL philosophy is to provide a library of
> high-level, proven facilities that are directly useful in applications,
> like hash tables and sequence functions. The Scheme approach is to provide
> basic mechanisms that can be used as building blocks for any programming
> paradigm. Common Lisp was intended for industrial programming, while
> Scheme was designed for academics. That doesn't mean they're only fitted
> for those environments, but their general approaches are based on the needs
> of those different types of uses.


I agree that what you say is roughly how the two camps have divided in a
rather general way. But that certainly was not the intention.

I believe that Steele intended Scheme to be the bullet-proof base
language for system implementors. Then all those CL libraries and funky
language features would simply be a big toolbox of Scheme for any sort
of "industrial" purpose.

This is illustrated clearly by this whole discussion of control
structures in CL and how making any systematic changes is effectively
impossible.

In contrast, Scheme has exactly the necessary set of control structures
needed to implement *all* of CL's control structures *and* have nifty
extras like call/cc and TCE.


jim

Wade Humeniuk

unread,
Oct 4, 2002, 9:10:17 PM10/4/02
to

"Jim White" <j...@pagesmiths.com> wrote in message news:3D9E32EF...@pagesmiths.com...

> I agree that what you say is roughly how the two camps have divided in a
> rather general way. But that certainly was not the intention.
>
> I believe that Steele intended Scheme to be the bullet-proof base
> language for system implementors. Then all those CL libraries and funky
> language features would simply be a big toolbox of Scheme for any sort
> of "industrial" purpose.
>
> This is illustrated clearly by this whole discussion of control
> structures in CL and how making any systematic changes is effectively
> impossible.
>
> In contrast, Scheme has exactly the necessary set of control structures
> needed to implement *all* of CL's control structures *and* have nifty
> extras like call/cc and TCE.

The way I see it is that the people on the Common Lisp side see the programmer
as more important than the language and the Scheme side considers the language
more important than the programmer.

CL has been accommodating of many programming styles and practical issues,
while Scheme will not bend for anyone.

Wade

Kenny Tilton

unread,
Oct 5, 2002, 12:02:32 AM10/5/02
to

Jim White wrote:
> In contrast, Scheme has exactly the necessary set of control structures
> needed to implement *all* of CL's control structures *and* have nifty
> extras like call/cc and TCE.

Sorry, I have been listening with only one ear, but I thought I read a
few articles detailing how call/cc was incompatible with *all* of CL,
ie, certain specific very cool features. I note you say "*all* of CL's
/control structures/ [emphasis added]". Are we playing word games here?

kenny
clinisys

Jim White

unread,
Oct 5, 2002, 8:41:40 AM10/5/02
to

No word games. I haven't read all the arguments about the problems with
CL features either, but I feel reasonably confident in relying on Guy
Steele's analysis as he is the author of both of the original
specifications and that was his conclusion. Scheme arose as the answer
of how to have a well defined semantics for what a system like CL does.
Kent Pitman does quibble with the current Scheme specification,
but that is something else entirely.

One thing Kent mentions is that Scheme's model of concurrency has
problems with parallel processors using shared memory. But that is
inherent in the architecture. People continue to try and use threading
and shared memory models for parallel processors, but there is no way to
arrive at a useful and correct semantic model that way. The only model
I'm aware of that can deal successfully with that type of implementation
is message passing.

My solution is to use Kawa, the Java-based Scheme system. It is also
being extended to implement Elisp and CL. Being based on Java and
useful with all code for the JVM, integration with even the largest
systems is easy and reliable. The grief with this approach is that
continuations don't really work and the tragedy is that the JVM could
have specified continuations (many JVM's actually have the necessary
internal structure) instead of the usual ad-hoc threads model which
lacks rigor and hence got largely deprecated (but essential for many
systems).

While Java (as does CL) presently accepts not having a concurrency model
that is well specified with respect to parallel processors and memory,
eventually there will be such a specification for the JVM. It will be
interesting to see how they solve the problem as people are going to
want to continue using their familiar threads-and-memory code. Given
that Java is a *very* "industrial" system and community, chances are
they will work out a design that specifies the tricks a VM must do to
meet that expectation (as opposed to taking an "academic" approach of using
continuations and message passing).

jim

Will Deakin

unread,
Oct 5, 2002, 9:14:42 AM10/5/02
to
Jim White wrote:
> I haven't read all the arguments about the problems with CL features
> either, but I feel reasonably confident in relying on Guy
> Steele's analysis as he is the author of both of the original
> specifications and that was his conclusion.
I would humbly suggest that you should do so, as I am reasonably confident
that your confidence is misplaced.

> People continue to try and use threading and shared memory models for
> parallel processors, but there is no way to arrive at a useful and
> correct semantic model that way.

I disagree. Is this statement again based on some form of personal
belief? Or could you provide some justification for this statement?

> Given that Java is a *very* "industrial" system and community, chances
> are they will work out a design that specifies the tricks a VM must

> do to meet that expectation...
We live in hope. However, my experience of "industrial" java systems has
been that in the industrial context it is lacking. I again would hope
that you are not confusing "industrial" with "popular."

:)w

Jim White

unread,
Oct 5, 2002, 10:47:55 PM10/5/02
to
Will Deakin wrote:
> Jim White wrote:
>
>> I haven't read all the arguments about the problems with CL features
>> either, but I feel reasonably confident in relying on Guy Steele's
>> analysis as he is the author of both of the original specifications
>> and that was his conclusion.
>
> I would humbly suggest that you should do so as I am resonably confident
> that your confidence is misplaced.

Do you have any specific pointers? I've looked at all the messages in
this thread, and they only refer to the problems of extending CL, and
are not disputing the assertion that CL can be implemented in Scheme.

>> People continue to try and use threading and shared memory models for
>> parallel processors, but there is no way to arrive at a useful and
>> correct semantic model that way.
>
> I disagree. Is this statement again based on some form of personal
> belief? Or could you provide some justification for this statement?

There is plenty of support for this view. No formal language
correctness system has been developed that supports the semantics of
threads-and-memory (with the attendant locking and cache coherency
warranties) as used in systems like CL (or Java) on parallel processors
and memory.

In contrast, message passing has been successful in exactly that
application dating back more than twenty years (Thoth and subsequent
proof systems for network communications) and is used directly in Mach
kernels for this very reason.

>> Given that Java is a *very* "industrial" system and community,
>> chances are they will work out a design that specifies the tricks a VM
>> must
>> do to meet that expectation...
>
> We live in hope. However, my experience of "industrial" java systems has
> been that in the industrial context it is lacking. I again would hope
> that you are not confusing "industrial" with "popular."

Well, the term "industrial" was used (not by me) to distinguish CL's
ready-to-do-convenient-application-development-work character from
Scheme implementations' tendency to be ala carte with respect to many of
those features. Certainly Java is not lacking in comparison to CL in
terms of its readiness or features for development of any application.

Java has even become the language for most RDF Schema-based (Semantic
Web) knowledge engineering applications, an area in which Lisp (or other
even less popular languages) used to be dominant.

And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
still no complete CL implementation for the JVM, even though there are
several for Scheme. Although Kawa will probably get there one day, it
seems that the advantages for those interested in developing and
deploying CL applications would have made it a priority long before now.

jim

Christopher C. Stacy

unread,
Oct 6, 2002, 1:30:43 AM10/6/02
to
>>>>> On Sun, 06 Oct 2002 02:47:55 GMT, Jim White ("Jim") writes:
Jim> Java has even become the language for most RDF Schema-based
Jim> (Semantic Web) knowledge engineering applications, an area in
Jim> which Lisp (or other even less popular languages) used to be
Jim> dominant.

Are you saying that Lisp was widely used for the Semantic Web project,
and then JAVA took over? I think it's more likely that Java was always
the language widely used by most or all people to do that work.
Lisp had already fallen from popularity more than a decade before
that work was started, and Java was on the rise when it started.

Lisp certainly has a long history of being the language used for
programming semantic networks, and is doubtless a better choice
than Java in most or all respects. (But we're talking popularity
contests here, not technical considerations.)

Vassil Nikolov

unread,
Oct 6, 2002, 2:00:27 AM10/6/02
to
On Sun, 06 Oct 2002 02:47:55 GMT, Jim White <j...@pagesmiths.com> said:

[...]
JW> And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
JW> still no complete CL implementation for the JVM, even though there are
JW> several for Scheme. Although Kawa will probably get there one day, it
JW> seems that the advantages for those interested in developing and
JW> deploying CL applications would have made it a priority long before now.

I guess not. I don't have a complete description of all the reasons
why, but one technical reason I can see is this. If I implement
(Common) Lisp in Java, I would want^1 to use the JVM's garbage
collector to collect Lisp objects, including cons cells. This
practically requires that cons cells be represented as instances of
a Java class (with, say, two instance variables for the car and the
cdr), which in turn would IMO make consing too expensive.
Arithmetic would be slower, too, because all integers would have to
be boxed, and even if some (say, -1023 to 1024) are `interned' to
decrease the amount of fixnum consing, this would help only so
much.
_________
^1 probably; there might be sophisticated reasons why a Java
implementation of Common Lisp cannot actually make use of the
JVM's garbage collector

---Vassil.


--
Garbage collection is charged at 0.19e-9 cents a cons. Bulk rates
are also available: please contact memory management for details.

Tim Bradshaw

unread,
Oct 6, 2002, 4:39:54 AM10/6/02
to
* Jim White wrote:

> There is plenty of support for this view. No formal language
> correctness system has been developed that supports the semantics of
> threads-and-memory (with the attendant locking and cache coherency
> warranties) as used in systems like CL (or Java) on parallel
> processors and memory.

And that's the problem. Scheme people will stress away in their ivory
towers about `formal correctness systems', while the rest of the world
deploys CC shared memory machines with hundreds of processors.

--tim

Duane Rettig

unread,
Oct 6, 2002, 5:00:01 AM10/6/02
to
Jim White <j...@pagesmiths.com> writes:

> And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
> still no complete CL implementation for the JVM, even though there are
> several for Scheme. Although Kawa will probably get there one day, it
> seems that the advantages for those interested in developing and
> deploying CL applications would have made it a priority long before
> now.

We see no demand at all for this, and I really can't see why one
would expect to see such a demand, except for academic interest.
We have had similar conversations on this newsgroup before about
the issue of putting CL on .NET, and the answer is the same for
both. The goal should not be to run a CL toplevel REPL on top of
a JVM or a CTS/.NET virtual machine, where the slowdown would not
justify its existence, but instead to communicate and connect to it,
including generating code that will run on it. See:

http://www.franz.com/products/connectivity_tools/

for examples of these kinds of tools.

Geoff Miller

unread,
Oct 6, 2002, 10:16:39 AM10/6/02
to
Erik Naggum <er...@naggum.no> wrote in message news:<32426425...@naggum.no>...

> It is already part of the implementations under suitable optimization.
> Unlike the Scheme world, Common Lisp vendors are allowed to be smarter
> than the specification.

Still wondering: what is the basis for this claim? I know of no
such restrictions on Scheme implementations. Isn't a Scheme
implementer allowed to pretty much do what he wants, for, against,
or in addition to specs, as with any other standard?

Erik Naggum <er...@naggum.no> wrote in message news:<32427314...@naggum.no>...


> If you really think you are constructive, then I was to begin with. OK?

Yes, I think it's quite constructive to ask for support for a
statement that appears to be inaccurate.

Geoff

Erik Naggum

unread,
Oct 6, 2002, 11:17:56 AM10/6/02
to
* Geoff Miller

| Still wondering: what is the basis for this claim?

Watching the Scheme community. Get the joke at their expense, OK?

Will Deakin

Oct 6, 2002, 2:56:44 PM

Jim White wrote:
> There is plenty of support for this view. No formal language
> correctness system has been developed that supports the semantics of
> threads-and-memory (with the attendant locking and cache coherency
> warranties) as used in systems like CL (or Java) on parallel processors
> and memory.
Then this is where we will have to differ. To me the existence -- or
non-existence -- of formal language correctness *really* is something
that I don't care about. My concern has always been more pragmatic and
involves questions about whether these systems exist and whether their
fitness for purpose can be demonstrated -- as in measured or
tested. It's the physical scientist in me.

> Well, the term "industrial" was used (not by me) to distinguish CL's
> ready-to-do-convenient-application-development-work character from
> Scheme implementations' tendency to be ala carte with respect to many of
> those features.

Meaning stems from context, and in this context -- cl compared to scheme
-- I would tend to agree with the use of `industrial'. However, for me,
this does not hold when the meaning is transferred to java. This is
clearly a belief based on painful experience.

> Certainly Java is not lacking in comparison to CL in terms of its
> readiness or features for development of any application.

As an ex-coder -- I now work in a support function -- I do not value
these things as metrics for the deployment of systems.

> Java has even become the language for most RDF Schema-based (Semantic
> Web) knowledge engineering applications, an area in which Lisp (or other
> even less popular languages) used to be dominant.

Again I am worried about using popularity as a metric.

:)w

Russell Wallace

Oct 6, 2002, 4:03:51 PM

On 2 Oct 2002 20:45:36 -0700, k...@ashi.footprints.net (Kaz Kylheku)
wrote:

>Tim Josling <t...@melbpc.org.au> wrote in message news:<3D98D523...@melbpc.org.au>...
>> I am wondering why there is no call/cc in lisp.
>
>One possible answer is that it requires a complex stack structure.

Don't closures already require this, or am I missing something? (I
think call/cc has costs which outweigh its advantages, but I didn't
think that was one of them, if you have closures anyway.)

>Do
>a web search for ``spaghetti stack''.

One of the contexts it shows up in is Smalltalk, which lacks call/cc
but has closures.

--
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace

Jens Axel Søgaard

Oct 6, 2002, 4:32:06 PM

Russell Wallace wrote:
> On 2 Oct 2002 20:45:36 -0700, k...@ashi.footprints.net (Kaz Kylheku)
> wrote:
>
>> Tim Josling <t...@melbpc.org.au> wrote in message
>> news:<3D98D523...@melbpc.org.au>...
>>> I am wondering why there is no call/cc in lisp.
>>
>> One possible answer is that it requires a complex stack structure.

Heap not stack.

> Don't closures already require this, or am I missing something?

You don't have to use the same allocation strategy for call frames
and closures. If there is no call/cc, then the call stack is a real
stack and needs no garbage collection. This is much faster
than allocating the call frames in the garbage-collected heap.

Some Scheme implementations have managed to move the call graph
from the heap to a stack, but this 1) makes call/cc much more expensive
than it already is [but other things become cheaper], and 2) is harder
to implement.
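A minimal sketch of why call/cc forces call frames to outlive their
activations (assuming an R7RS-style Scheme; `demo', `saved' and
`results' are made-up names for illustration):

```scheme
(define saved #f)      ; will hold the captured continuation
(define results '())

(define (demo)
  ;; call/cc captures "the rest of demo, plus demo's caller" in saved
  (+ 1 (call/cc (lambda (k) (set! saved k) 1))))

(set! results (cons (demo) results))   ; first, normal return: 2
(when (< (length results) 2)
  (saved 41))                          ; re-enter the finished frame: 42
;; results is now (42 2); the frame was resumed after demo had
;; already returned, so it cannot live on a plain stack
```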

See more info in Kent Dybvig's "Three Implementation Models of Scheme":

http://library.readscheme.org/page8.html

Which is exceptionally clear and well written.

--
Jens Axel Søgaard

Pascal Costanza

Oct 6, 2002, 4:48:22 PM

Russell Wallace wrote:

>>Do
>>a web search for ``spaghetti stack''.
>
> One of the contexts it shows up in is Smalltalk, which lacks call/cc
> but has closures.

Actually, Smalltalk's returns from blocks are like Scheme's call/cc.
However, not all Smalltalk implementations get this right. For example,
some Smalltalkers told me that Squeak doesn't implement this correctly,
while VisualWorks does.

(When you return from a block in Smalltalk, you return to the place
where the block was defined, not where the block was called.)


Pascal

Russell Wallace

Oct 6, 2002, 5:13:15 PM

On Sun, 6 Oct 2002 22:32:06 +0200, "Jens Axel Søgaard"
<use...@soegaard.net> wrote:

>Russell Wallace wrote:
>> Don't closures already require this, or am I missing something?
>
>You don't have to use the same allocation strategy for call frames
>and closures. If there is no call/cc then the call stack is a real
>stack and needs no garbage collection. This is much faster
>than allocating the call frames in the garbage-collected heap.

Yes, static analysis could optimize for the common case by using a
regular stack where heap storage isn't needed.

Wouldn't this apply just as much to call/cc as to lambda, though?

>See more info in Kent Dybvig's "Three Implementation Models of Scheme":
>
> http://library.readscheme.org/page8.html
>
>Which is exceptionally clear and well written.

Thanks - I don't suppose it's available anywhere in something other
than .ps.gz format?

Russell Wallace

Oct 6, 2002, 5:14:57 PM

On Sun, 06 Oct 2002 22:48:22 +0200, Pascal Costanza <cost...@web.de>
wrote:

>(When you return from a block in Smalltalk, you return to the place
>where the block was defined, not where the block was called.)

*blink* Now that's odd - I wonder why they did it that way? Maybe for
the same reasons call/cc was invented, I guess.

Still, doesn't the existence of Lisp-style closures still break the
"you can always allocate call frames on a stack" property just as much
as call/cc does?

Pascal Costanza

Oct 6, 2002, 5:25:02 PM

Russell Wallace wrote:
> On Sun, 06 Oct 2002 22:48:22 +0200, Pascal Costanza <cost...@web.de>
> wrote:
>
>
>>(When you return from a block in Smalltalk, you return to the place
>>where the block was defined, not where the block was called.)
>
> *blink* Now that's odd - I wonder why they did it that way? Maybe for
> the same reasons call/cc was invented, I guess.

I can't reconstruct the argument completely, but I think it was
something along the following lines. When you have an if-expression in
Smalltalk, you may want to return from the defining method inside of the
then- and/or else-part. So in the following example, the caret
returns (^) do the job:

expression
    ifTrue: [ ... ^someValue ... ]
    ifFalse: [ ... ^otherValue ... ]

If the blocks returned to their call site instead, there would be no
way to return from the surrounding method.

(I am sorry, but I don't know Smalltalk syntax by heart, so the
details may still be off.)

Can someone confirm whether I got this right?

An additional comment: Squeak gets it right in the above example, but it
doesn't get it right when blocks are passed around and returned from later.

> Still, doesn't the existence of Lisp-style closures still break the
> "you can always allocate call frames on a stack" property just as much
> as call/cc does?

I am sorry, but I haven't followed the discussion in this regard.


Pascal

Jens Axel Søgaard

Oct 6, 2002, 5:26:53 PM

Russell Wallace wrote:
> On Sun, 6 Oct 2002 22:32:06 +0200, "Jens Axel Søgaard"
> <use...@soegaard.net> wrote:
>
>> Russell Wallace wrote:
>>> Don't closures already require this, or am I missing something?
>>
>> You don't have to use the same allocation strategy for call frames
>> and closures. If there is no call/cc then the call stack is a real
>> stack and needs no garbage collection. This is much faster
>> than allocating the call frames in the garbage-collected heap.
>
> Yes, static analysis could optimize for the common case by using a
> regular stack where heap storage isn't needed.
>
> Wouldn't this apply just as much to call/cc as to lambda, though?

Yes - but Dybvig doesn't use static analysis.

>> See more info in Kent Dybvig's "Three Implementation Models of
>> Scheme":
>>
>> http://library.readscheme.org/page8.html
>>
>> Which is exceptionally clear and well written.
>
> Thanks - I don't suppose it's available anywhere in something other
> than .ps.gz format?

I have converted it to pdf. For some reason it doesn't display
that well, but it prints ok.

http://ja.soegaard.net/3imp.pdf

--
Jens Axel Søgaard

ozan s yigit

Oct 6, 2002, 6:10:12 PM

Jens Axel Søgaard:

> Some Scheme implementations have managed to move the call graph
> from the heap to a stack, but this 1) makes call/cc much more expensive
> than it already is [but other things become cheaper], and 2) is harder
> to implement.
>
> See more info in Kent Dybvig's "Three Implementation Models of Scheme":
>
> http://library.readscheme.org/page8.html
>
> Which is exceptionally clear and well written.

yes. i have implemented directly from that thesis, and it was great
fun. but for implementation strategies and costs, i would recommend
[also implemented from] clinger/hartheimer/ost "implementation
strategies for continuations" from the 1988 L&FP conference. [1] this
also shows strategies used by other languages like smalltalk.

oz
---
[1] http://portal.acm.org/citation.cfm?doid=62678.62692
--
take the long short cut. -- animation rule of thumb (richard williams)

Marco Antoniotti

Oct 6, 2002, 6:36:06 PM


I am going to open up a different thread because it may be more
appropriate.


Jim White <j...@pagesmiths.com> writes:

...

> And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
> still no complete CL implementation for the JVM, even though there are
> several for Scheme. Although Kawa will probably get there one day, it
> seems that the advantages for those interested in developing and
> deploying CL applications would have made it a priority long before
> now.

Aren't there some serious problems in the current JVM definition that
prevent a good implementation of CL (meaning with CLOS) on top of it?
I am just asking, as I have not really looked into this.

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
715 Broadway 10th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Jens Axel Søgaard

Oct 6, 2002, 6:35:26 PM

ozan s yigit wrote:
> Jens Axel Søgaard:

>> See more info in Kent Dybvig's "Three Implementation Models of
>> Scheme":
>>
>> http://library.readscheme.org/page8.html
>>
>> Which is exceptionally clear and well written.
>
> yes. i have implemented from that thesis directly, and it was great
> fun. but for implementation strategies and costs, i would recommend
> [also implemented from] clinger/hartheimer/ost "implementation
> strategies for continuations" from 1988 L&FP conference. [1] this
> also shows strategies used by other languages like smalltalk.

> [1] http://portal.acm.org/citation.cfm?doid=62678.62692

I don't know it - but I sure would like to read it. Unfortunately
I can't access the document - only universities have access.

--
Jens Axel Søgaard

ozan s yigit

Oct 6, 2002, 7:21:37 PM

sorry for the not so general link:

> > [1] http://portal.acm.org/citation.cfm?doid=62678.62692

this one may work:

http://leonora.org/~vladimir/tmp/p124-clinger.pdf

oz
--
All theories are subject to change upon discovery of new evidence.
-- Disclaimer for Buckethead's FAQ

Pascal Costanza

Oct 6, 2002, 9:16:29 PM

Marco Antoniotti wrote:
> I am going to open up a different thread because it may be more
> appropriate.
>
>
> Jim White <j...@pagesmiths.com> writes:
>
> ...
>
>
>>And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
>>still no complete CL implementation for the JVM, even though there are
>>several for Scheme. Although Kawa will probably get there one day, it
>>seems that the advantages for those interested in developing and
>>deploying CL applications would have made it a priority long before
>>now.
>
>
> Aren't there some serious problems in the current JVM definition that
> prevent a good implementation of CL (meaning with CLOS) on top of it?
> I am just asking, as I have not really looked into this.

This would be a _huge_ effort, if you wanted to reuse Java's object
model. I think it can be done, by making use of several hacks and
exploiting some features, like for example the "hot swapping" of methods
in the newer JDKs, and one or another runtime aspect-weaving
framework. But you would always have to deal with unexpected
limitations. Furthermore, you would have to fiddle with Java's security
model, and this would mean that such an implementation cannot be
deployed in certain "real-world" scenarios. (Not necessarily because of
technical reasons, but rather because of political issues.)

Actually it would be much simpler to implement Common Lisp on top of
.NET, because you can always circumvent the CLR by switching to native
code on the fly. (As far as I know you can also manipulate virtual
method tables in .NET - you can't do that in Java.) But then again,
there would be no point in using .NET if a fair amount of effort would
be needed to just circumvent it. ;) Except, of course, for simpler
interoperability with other .NET languages.

Does this answer your question?


Pascal

Kaz Kylheku

Oct 6, 2002, 10:33:01 PM

r...@vorpalbunnyeircom.net (Russell Wallace) wrote in message news:<3da096c3....@news.eircom.net>...

> On 2 Oct 2002 20:45:36 -0700, k...@ashi.footprints.net (Kaz Kylheku)
> wrote:
>
> >Tim Josling <t...@melbpc.org.au> wrote in message news:<3D98D523...@melbpc.org.au>...
> >> I am wondering why there is no call/cc in lisp.
> >
> >One possible answer is that it requires a complex stack structure.
>
> Don't closures already require this, or am I missing something? (I
> think call/cc has costs which outweigh its advantages, but I didn't
> think that was one of them, if you have closures anyway.)

Closures require complexity, but not in the stack structure. They
capture only the bindings that are lexically apparent, not ones that
are dynamically apparent. A closure does not carry a copy of ancestral
stack frames just in case it may want to escape up into them.

You cannot for instance do something like this:

(defun foo ()
  #'(lambda () (return-from foo 42)))

;; go back into foo via closure and try to return upward
(funcall (foo)) ;; error, no block called FOO visible

The closure does not capture the invocation of foo from which it can
return; it's just a disconnected capsule of lexical bindings. There is
no record of the stack which led up to the creation of the closure.
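In Scheme, by contrast, the analogous escape does work, because the
captured object is a continuation (control context), not just a capsule
of bindings. A sketch, with made-up names `foo' and `drive':

```scheme
(define (foo)
  ;; capture foo's own return point and hand back a thunk that re-enters it
  (call/cc (lambda (k)
             (lambda () (k 42)))))

(define (drive)
  (let ((r (foo)))
    ;; first time through, r is the thunk; calling it makes foo
    ;; "return a second time", now with 42
    (if (procedure? r) (r) r)))

(drive)  ; => 42
```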

> >Do
> >a web search for ``spaghetti stack''.
>
> One of the contexts it shows up in is Smalltalk, which lacks call/cc
> but has closures.

I don't know Smalltalk. Are these *lexical* closures?

Kaz Kylheku

Oct 6, 2002, 10:39:09 PM

r...@vorpalbunnyeircom.net (Russell Wallace) wrote in message news:<3da0a76e....@news.eircom.net>...

> On Sun, 06 Oct 2002 22:48:22 +0200, Pascal Costanza <cost...@web.de>
> wrote:
>
> >(When you return from a block in Smalltalk, you return to the place
> >where the block was defined, not where the block was called.)
>
> *blink* Now that's odd - I wonder why they did it that way? Maybe for
> the same reasons call/cc was invented, I guess.
>
> Still, doesn't the existence of Lisp-style closures still break the
> "you can always allocate call frames on a stack" property just as much
> as call/cc does?

Not just as much. If closures are present, then obviously you need to
shunt the bindings into an object that will survive the disappearance
of the stack frame. But that's it; the object is a self-contained
unit, a capsule containing only a snapshot of the current instance of
lexically-apparent material.

If a given lexical environment contains no closures, then it can be
pure stack. The compiler doesn't have to worry that some closure in
some other lexical environment will want to retain any portion of the
given environment.

Also, the stack frame can always be used for non-lexical bindings,
like blocks, restarts, catch tags, or the saved values of special
variables; none of that information is shunted into the storage
associated with closures, only the bindings of lexical variables.

Kaz Kylheku

Oct 6, 2002, 11:56:13 PM

Marco Antoniotti <mar...@cs.nyu.edu> wrote in message news:<y6celb3i4...@octagon.valis.nyu.edu>...

> I am going to open up a different thread because it may be more
> appropriate.
>
>
> Jim White <j...@pagesmiths.com> writes:
>
> ...
>
> > And speaking of CL, Scheme, and Java, it is odd that (AFAIK) there is
> > still no complete CL implementation for the JVM, even though there are
> > several for Scheme. Although Kawa will probably get there one day, it
> > seems that the advantages for those interested in developing and
> > deploying CL applications would have made it a priority long before
> > now.
>
> Aren't there some serious problems in the current JVM definition that
> prevent a good implementation of CL (meaning with CLOS) on top of it?
> I am just asking, as I have not really looked into this.

I believe that the type system of the virtual machine prevents you
from even doing reasonable things, like making use of the bits of a
word to efficiently implement various kinds of cells, conses and the
like. You can't spare a few bits in a word to indicate that it's a
pointer, or that there is a wider type tag that tells you it is a
fixnum and so on.

Security through type safety is implemented at an entirely
inappropriate level in the JVM.

The designers completely missed the boat; you can implement safety and
security on a much higher level representation of a program, and trust
the machine language level.

The trusted compiler and interpreter can then have the freedom to use
the hardware however they see fit.

And if people want to run hard machine language, they can do that too,
if they obtain the privileges.

Russell Wallace

Oct 7, 2002, 3:38:12 AM


Thanks!

Russell Wallace

Oct 7, 2002, 3:40:50 AM

On 6 Oct 2002 19:33:01 -0700, k...@ashi.footprints.net (Kaz Kylheku)
wrote:

>Closures require complexity, but not in the stack structure. They
>capture only the bindings that are lexically apparent, not ones that
>are dynamically apparent. A closure does not carry a copy of ancestral
>stack frames just in case it may want to escape up into them.

...Whereas with continuations a lot less is knowable in advance,
right, good point.

>I don't know Smalltalk. Are these *lexical* closures?

I don't either, but I've since been told they're not, an explicit
return will jump back to their creation context, so they're not exact
equivalents of Lisp closures.

Pekka P. Pirinen

Oct 7, 2002, 10:15:17 AM

k...@ashi.footprints.net (Kaz Kylheku) writes:
> r...@vorpalbunnyeircom.net (Russell Wallace) wrote in message news:<3da0a76e....@news.eircom.net>...
> > Still, doesn't the existence of Lisp-style closures still break the
> > "you can always allocate call frames on a stack" property just as much
> > as call/cc does?
>
> Not just as much. If closures are present, then obviously you need to
> shunt the bindings into an object that will survive the disappearance
> of the stack frame. But that's it; the object is a self-contained
> unit, a capsule containing only a snapshot of the current instance of
> lexically-apparent material.

That describes the nature of CL closures well. As you implied, the
compiler can easily work out which bindings go in the closure.

Also, invoking a closure is just another function call on top of the
current call stack (with some funky initialization to establish the
bindings, usually); it doesn't set up a new stack. A closure can
close over a block or a go tag, but the jump may only go upwards in
the call tree, not sideways into a dead branch.

> Also, the stack frame can always be used for non-lexical bindings,
> like blocks, restarts, catch tags, or the saved values of special
> variables; none of that information is shunted into the storage
> associated with closures, only the bindings of lexical variables.

We need to mention lexical functions separately, since we're not
talking about Scheme here.

As I mentioned above, blocks and go tags are lexical, but since a dead
block can't be returned to, you don't need to heap-allocate for them.
Some implementations don't allocate anything for BLOCK, and just store
a stack pointer and a code pointer in the closure for closed-over
blocks and go tags.
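As a small illustration of the "upwards only" rule (a made-up CL
example, not from the thread): a closure may return through a block
while the block's dynamic extent is still live:

```lisp
(defun find-first-even (lst)
  ;; the closure handed to MAPC closes over the implicit block named
  ;; FIND-FIRST-EVEN and returns upward through it -- legal, because
  ;; that block is still on the stack when the closure is called
  (mapc (lambda (x)
          (when (evenp x)
            (return-from find-first-even x)))
        lst)
  nil)

(find-first-even '(1 3 4 5))  ; => 4
(find-first-even '(1 3 5))    ; => NIL
```

Calling such a closure after the defining function has returned would
be exactly the sideways jump into a dead branch, and is an error.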
--
Pekka P. Pirinen
I have yet to see any problem, however complicated, which,
when you looked at it in the right way, did not become still
more complicated. - Poul Anderson

Jens Axel Søgaard

Oct 7, 2002, 10:32:22 AM

ozan s yigit wrote:

> yes. i have implemented from that thesis directly, and it was great
> fun. but for implementation strategies and costs, i would recommend
> [also implemented from] clinger/hartheimer/ost "implementation
> strategies for continuations" from 1988 L&FP conference. [1] this
> also shows strategies used by other languages like smalltalk.

An interesting read. I knew that there were several different
strategies, but I wasn't aware that so many had actually been used.
One lesson I learned was that you just can't say "X is fastest". The
proper choice of allocation strategy depends on several factors. To
be sure, one needs to try them.

Thanks for the read.
--
Jens Axel Søgaard

Hannah Schroeter

Oct 7, 2002, 11:14:41 AM

Hello!

Russell Wallace <r...@vorpalbunnyeircom.net> wrote:
>[...]

>Wouldn't this apply just as much to call/cc as to lambda, though?

No. Lambdas capture only data, and it's not too difficult to analyse
whether they actually refer to variables from the enclosing scopes
at all. First-class continuations capture control, too.

>[...]

Kind regards,

Hannah.
