From: ces...@qnci.net (William D Clinger)
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 19 Apr 2003 11:15:06 -0700
Quoting me, Erann Gat wrote:
> > For example, it is not at all clear to me whether Duane intends for
> > his preferred terms, "tail merging" or "tail-call merging", to mean
> > tail call optimization or proper tail recursion. It is not at all
> > clear that Duane even understands the difference.
> I certainly don't understand the difference (didn't even know there was
> one until just now). Would you please explain it, or point us to an
> explanation? Thanks.
The following paper contains several relevant citations:
William D Clinger. Proper tail recursion and space efficiency.
In the Proceedings of the 1998 ACM Conference on Programming
Language Design and Implementation, June 1998, pages 174-185.
Abstract: The IEEE/ANSI standard for Scheme requires implementations
to be properly tail recursive. This ensures that portable code can
rely upon the space efficiency of continuation-passing style and
other idioms. On its face, proper tail recursion concerns the
efficiency of procedure calls that occur within a tail context. When
examined closely, proper tail recursion also depends upon the fact
that garbage collection can be asymptotically more space-efficient
than Algol-like stack allocation.
Proper tail recursion is not the same as ad hoc tail call optimization
in stack-based languages. Proper tail recursion often precludes stack
allocation of variables, but yields a well-defined asymptotic space
complexity that can be relied upon by portable programs.
This paper offers a formal and implementation-independent definition
of proper tail recursion for Scheme. It also shows how an entire
family of reference implementations can be used to characterize
related safe-for-space properties, and proves the asymptotic
inequalities that hold between them.
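The abstract's claim about continuation-passing style can be seen in a
small sketch (my example, not the paper's): written in CPS, every call
is a tail call, so a properly tail-recursive implementation runs it in
constant control space, with the pending work held in heap-allocated
closures that the garbage collector can reclaim.

```scheme
;; Illustrative sketch: summing 0..n in continuation-passing style.
;; Every recursive call is a tail call, so a properly tail-recursive
;; implementation uses constant stack; the pending additions live in
;; heap-allocated closures instead.
(define (sum-cps n k)
  (if (zero? n)
      (k 0)
      (sum-cps (- n 1)
               (lambda (r) (k (+ n r))))))

(sum-cps 10 (lambda (r) r))  ; => 55
```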
I regard this paper as a commentary on the classic papers by Steele
and Sussman from the early 1970s. If you read those papers carefully,
you will see that the property they called "proper tail recursion" has
to do with space efficiency. Somewhere (and I regret that I don't
remember the exact citation) Steele actually wrote that it should be
possible to formalize proper tail recursion as a matter of asymptotic
space complexity. Several later authors (Appel, Harper, Morrisett,
Felleisen) have said the same thing, often adding that it should be
trivial to do this. The chief technical contribution of my paper is
that I actually did this trivial thing.
The motivations for my paper were twofold. First of all, in addition
to the very smart people who had said the formalization of proper tail
recursion should be trivial, there were other very smart people who
said it couldn't be done. I got tired of arguing with them, so I did it.
Second, having done this trivial thing, I decided to use it to explain the
distinction between proper tail recursion as defined by Steele and
Sussman and the various tail call optimizations that had sprung up
in the compiler community.
The problem here is that Steele and Sussman were not all that careful
to distinguish the space efficiency property that they called proper
tail recursion from the implementation techniques that they used to
achieve that property. The compiler community zeroed in on the
techniques without understanding the property. In the process, the
compiler community added restrictions as necessary to prevent these
techniques from interfering with stack allocation, which they took
for granted, completely missing the point that these added restrictions
destroyed the property that Steele and Sussman were at such pains to
describe. This happened even in the Lisp community, where the terms
"tail merging" and "tail call merging" were used to refer to various
optimizations, not to the space efficiency property.
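A standard illustration of the gap (my example here, not from any
particular paper): an optimizer restricted to self tail calls does
nothing for mutual recursion, while proper tail recursion guarantees
constant control space for it as well.

```scheme
;; Mutual recursion: neither procedure calls itself, so an ad hoc
;; optimization limited to self tail calls leaves the stack growing
;; linearly in n.  Proper tail recursion guarantees constant control
;; space, because every call below is in tail position.
(define (even-n? n) (if (zero? n) #t (odd-n? (- n 1))))
(define (odd-n? n)  (if (zero? n) #f (even-n? (- n 1))))

(even-n? 1000001)  ; => #f, in constant space
```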
On the other hand, there are a few notable papers in which the authors
understood full well that proper tail recursion is a space efficiency
property, and their concern is whether a proposed optimization is legal
with respect to that property (that is, does the optimization preserve
proper tail recursion?). See for example the papers I cited by Appel,
Chase, and Hanson, and by Serrano and Feeley. These papers give real-world
examples of the conflict between stack allocation (or some related
optimization) and proper tail recursion.
The examples in my paper aren't at all realistic; they were the simplest
I could contrive. Here is the example I gave:
(define (f n)
  (let ((v (make-vector n)))
    (if (zero? n)
        0
        (f (- n 1)))))
If v is stack-allocated, then this will not be properly tail-recursive.
This is not a convincing example, because most compilers (though not most
interpreters) would recognize that v isn't used, and wouldn't allocate
any storage for it at all. But you can modify this example in various
ways to make it more convincing. For example:
(define (f v)
  (let ((n (vector-length v)))
    (if (zero? n)
        0
        (f (make-vector (- n 1))))))
This still isn't very convincing in Lisp-like languages, because the
location allocated for v in f is quite clearly dead once it has been
dereferenced to evaluate the actual parameter for the tail call to f.
To keep that location alive beyond a tail call in Lisp or Scheme, we'd
have to introduce a lambda expression that closes over v. In some
other languages we might pass v by reference or pass its address.
Either way, it becomes clear that allocating v on a stack would
interfere with proper tail recursion.
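A sketch of that variant (hypothetical code, not from the examples
above): the continuation passed along in the tail call closes over v,
so v must remain live after the point where f's frame would normally
be discarded.

```scheme
;; Hypothetical variant: each tail call passes a closure that captures
;; v, so v stays live beyond the call.  If v were allocated in f's
;; stack frame and that frame were reused for the tail call, the
;; closure's reference to v would dangle.
(define (f v k)
  (let ((n (vector-length v)))
    (if (zero? n)
        (k 0)
        (f (make-vector (- n 1) 0)
           (lambda (acc) (k (+ acc (vector-length v))))))))  ; closes over v

(f (make-vector 3 0) (lambda (acc) acc))  ; => 6 (3 + 2 + 1)
```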