
Tail recursion & CL


Aleksandr Skobelev

Sep 14, 2001, 3:49:17 PM
I wonder why CL, as I read somewhere, is not properly tail recursive?
Wasn't Scheme already there when the ANSI CL standard was adopted, and
wasn't Scheme claimed to be a properly tail-recursive language?

Thank you in advance,
Aleksandr

Barry Margolin

Sep 14, 2001, 4:17:12 PM
In article <tvmtn9...@hermit.athome>,

Aleksandr Skobelev <22...@mail.ru> wrote:
>I wonder why CL, as I read somewhere, is not properly tail recursive?
>Wasn't Scheme already there when the ANSI CL standard was adopted, and
>wasn't Scheme claimed to be a properly tail-recursive language?

The designers of CL considered this to be an optimization that implementors
can use, rather than something that should be required by the language.
The dialects of Lisp that Common Lisp was designed based on were not
tail-recursive, and it wasn't considered a significant defect.

In my experience, most recursive functions are not written in a
tail-recursive manner, so the benefit is not even that significant.
Writing tail-recursive code often requires contorting the way the algorithm
is expressed. For instance, the simple recursive factorial:

(defun fact (n)
  (if (<= n 2)
      1
      (* n (fact (1- n)))))

must become:

(defun fact (n)
  (flet ((fact-internal (n accumulator)
           (if (<= n 2)
               accumulator
               (fact-internal (1- n) (* n accumulator)))))
    (fact-internal n 1)))

The reason why Scheme needs to be tail-recursive is because this is the
only means of control flow that it provides; it doesn't have TAGBODY/GO,
CATCH/THROW, or BLOCK/RETURN-FROM. Instead, it subsumes all of these with
continuations and function calling, and tail-call elimination is necessary
so that it works similarly to the operations that are being replaced.
Common Lisp has all these as standard features of the language, so
programmers aren't likely to try to use function calling in place of them.
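A sketch of the iterative style this paragraph alludes to (the name FACT-ITERATIVE is mine, not from the post): the factorial written with DO, one of CL's standard looping constructs, instead of tail calls.

```lisp
;; Factorial via DO: the loop variables play the role the accumulator
;; plays in the tail-recursive version, with no recursion at all.
;; DO steps its variables in parallel, so (* acc i) sees the old i.
(defun fact-iterative (n)
  (do ((i n (1- i))        ; count down from n
       (acc 1 (* acc i)))  ; fold each old i into the product
      ((<= i 1) acc)))     ; when i reaches 1, return the product
```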

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Thomas F. Burdick

Sep 14, 2001, 5:11:23 PM
Barry Margolin <bar...@genuity.net> writes:

> In article <tvmtn9...@hermit.athome>,
> Aleksandr Skobelev <22...@mail.ru> wrote:
> >I wonder why CL, as I read somewhere, is not properly tail recursive?
> >Wasn't Scheme already there when the ANSI CL standard was adopted, and
> >wasn't Scheme claimed to be a properly tail-recursive language?
>
> The designers of CL considered this to be an optimization that implementors
> can use, rather than something that should be required by the language.
> The dialects of Lisp that Common Lisp was designed based on were not
> tail-recursive, and it wasn't considered a significant defect.

And it's an easy enough optimization that you can implement it
yourself if you can't trust the implementation you're deploying on to
do it for you.
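One way to do it yourself, sketched here (the trampoline idea and every name below are mine, not from the post): have would-be tail calls return thunks, and let a small driver loop invoke them, so stack depth stays constant even without compiler support.

```lisp
;; Driver: keep calling as long as the result is itself a function.
;; (This sketch assumes the computation never legitimately returns a
;; function as its final value.)
(defun trampoline (f &rest args)
  (let ((result (apply f args)))
    (loop while (functionp result)
          do (setf result (funcall result)))
    result))

;; Accumulator-style factorial where each "tail call" is returned as a
;; thunk instead of being made directly.
(defun fact-step (n acc)
  (if (<= n 1)
      acc
      (lambda () (fact-step (1- n) (* n acc)))))

;; (trampoline #'fact-step 10 1) => 3628800, at constant stack depth.
```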

> In my experience, most recursive functions are not written in a
> tail-recursive manner, so the benefit is not even that significant.
> Writing tail-recursive code often requires contorting the way the algorithm
> is expressed. For instance, the simple recursive factorial:
>
> (defun fact (n)
>   (if (<= n 2)
>       1

        ^-- you meant n, of course :)

>       (* n (fact (1- n)))))
>
> must become:
>
> (defun fact (n)
>   (flet ((fact-internal (n accumulator)

     ^^^^-- you meant LABELS, of course

>            (if (<= n 2)
>                accumulator
>                (fact-internal (1- n) (* n accumulator)))))
>     (fact-internal n 1)))

Yep, that's certainly unnecessarily obscure. I find recursion over a
data stack to be a far more readable way to transform recursive
functions:

(defun fact (n)
  (let ((stack ()))
    (loop (cond ((<= n 2)
                 (push n stack)
                 (return))
                (t (push n stack)
                   (setf n (1- n)))))
    (reduce #'* stack)))

Of course, I think the value of tail-call optimization comes not in
simple cases of self calls, but in cases of n-mutual tail calls.
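A minimal instance of the mutual case (my toy example, not from the post): neither call below is a self call, so no per-function loop rewrite applies; only genuine tail-call elimination keeps the stack flat on large inputs.

```lisp
(defun my-even-p (n)
  (if (zerop n)
      t
      (my-odd-p (1- n))))   ; tail call into the *other* function

(defun my-odd-p (n)
  (if (zerop n)
      nil
      (my-even-p (1- n))))  ; and back again
```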

Kent M Pitman

Sep 14, 2001, 6:19:48 PM
Aleksandr Skobelev <22...@mail.ru> writes:

> I wonder why CL, as I read somewhere, is not properly tail recursive?

Because the language is not defined to be.

The form of your question appears to presuppose that you don't think there
would be any reason for a language not to be "properly tail recursive".
Just because of the word "proper" in that phrase, don't assume that the
whole world thinks it's proper (in the general English sense) to do this.

Some of us find stacks with missing elements hard to debug, for example.
I rely critically in my debugging on being able to look up the stack of
callers at the time of a bug to see how I got where I am. In Scheme, I have
often found it hugely difficult to debug because the stack is not a record of
how I got to the current point. The marginal benefit afforded to me by
the opportunity to program in other syntactic ways does not outweigh this
loss of debuggability.

Your mileage may, of course, vary.

But my key point is not that you should believe me that stack debugging
is more important; rather, you should believe me that there exist a set of
people who think that a move in the direction of tail recursion elimination
is not unambiguously a virtue.

> ... Wasn't Scheme already there when the ANSI CL standard was
> adopted,

Scheme predates CL. Sussman and Steele designed Scheme in 1978, I believe.

Steele was editor of the first CL spec (CLTL); work began around 1980 and
the book was published in 1984.

There is no question that Steele was aware of his own prior work, so
accidental omission is not a serious possibility.

> and wasn't Scheme claimed to be a properly tail-recursive language?

CL is not Scheme, so what Scheme claims is not relevant to CL.

Think of languages as political parties and their features as planks
of their party platform, and you will find yourself surprised less often.

Frank A. Adrian

Sep 15, 2001, 1:10:32 AM
Barry Margolin wrote:
For instance, the simple recursive factorial:
> must become:
>
> (defun fact (n)
>   (flet ((fact-internal (n accumulator)
>            (if (<= n 2)
>                accumulator
>                (fact-internal (1- n) (* n accumulator)))))
>     (fact-internal n 1)))

Can't this just be done as:

(defun fact (n &optional (acc 1))
  (if (< n 2)
      acc
      (fact (1- n) (* n acc))))

allowing tail recursion without (as much) pain? Or does the use of
optional parameters mess that up?

faa

Aleksandr Skobelev

Sep 15, 2001, 5:38:40 PM

Thanks. So, as far as I understood, the reason why CL isn't claimed to be
"properly tail recursive" is the existence of the constructs TAGBODY/GO,
CATCH/THROW, and BLOCK/RETURN-FROM, and the binding of special variables,
where tail recursion is inhibited? (The last point I found in the CMU CL
User's Manual. I definitely should have read it before.) :)

In other cases Lisp compilers might be "properly tail recursive" ones. At
least the CMU CL Python compiler is such a one. And I believe that this
compiler can also transform some non-tail-recursive forms, like FACT in
your example, into tail-recursive ones. With such a compiler one need not
do the manual transformation into tail-recursive form. But because it is
not required by the specification, one can't rely on such behaviour when
working with any other compiler. Am I wrong?


What started me asking about TR is that devilish and foolish thing,
benchmarking. I compared the speed of the recursive algorithm for the
Ackermann function with parameters 3 12 across different compilers and
different languages. On my computer I got
for gcc++ ~48 sec
for gforth ~40 sec
for cmucl ~32 sec (I was a little proud) :)
for ocaml and clean ~3 sec! 8-( ) (I couldn't believe it)

I think that such good results in the last case can be explained only by
the ability of the ocaml and clean compilers to transform a call (func n
(func m l)) at the end of a form into a tail call. Because in the same
benchmarking of calculating Fibonacci numbers cmucl beats ocaml by a
small margin.
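For reference, the function being benchmarked, in its usual recursive form (my transcription of the standard definition, not code from the post); note that in the last clause only the outer call is in tail position, which is the (func n (func m l)) shape mentioned above.

```lisp
(defun ackermann (m n)
  (cond ((zerop m) (1+ n))
        ((zerop n) (ackermann (1- m) 1))        ; a tail call
        (t (ackermann (1- m)
                      (ackermann m (1- n))))))  ; outer call is a tail
                                                ; call, the inner is not
```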


I must say these are mainly my own thoughts and guesswork, based on very
little expertise in Lisp, ocaml, and recursion. :) So any comments are
appreciated and welcome.

Aleksandr

Aleksandr Skobelev

Sep 15, 2001, 6:24:54 PM

Kent M Pitman <pit...@world.std.com> wrote:
> Aleksandr Skobelev <22...@mail.ru> writes:
>
>> I wonder why CL, as I read somewhere, is not properly tail recursive?
>
> Because the language is not defined to be.
>
> The form of your question appears to presuppose that you don't think there
> would be any reason for a language not to be "properly tail recursive".
> Just because of the word "proper" in that phrase, don't assume that the
> whole world thinks it's proper (in the general English sense) to do this.

OK. Let it be not "properly" but "just" tail recursive, in cases where
tail recursion definitely can take place.

>
> Some of us find stacks with missing elements hard to debug, for example.
> I rely critically in my debugging on being able to look up the stack of
> callers at the time of a bug to see how I got where I am. In Scheme, I have
> often found it hugely difficult to debug because the stack is not a record of
> how I got to the current point. The marginal benefit afforded to me by
> the opportunity to program in other syntactic ways does not outweigh this
> loss of debuggability.

I see your point. But the ability to eliminate tail calls could be
considered a quality that could be switched on/off with the usual
(declare (optimize ...)) forms.

>
> Your mileage may, of course, vary.
>
> But my key point is not that you should believe me that stack debugging
> is more important; rather, you should believe me that there exist a set of
> people who think that a move in the direction of tail recursion elimination
> is not unambiguously a virtue.
>

I see. To speak honestly, being a C++ programmer I look at general
recursion with a little fear, for I see some vagueness in it. This is why
I so 'like' the concept of tail recursion elimination. But, of course, I
agree that this may be inspired mostly by a lack of experience here, and
that it is possible to write reliable programs just by using general
recursion reasonably.

> ...

Aleksandr

Kent M Pitman

Sep 16, 2001, 2:37:41 PM
Aleksandr Skobelev <22...@mail.ru> writes:

> I see your point. But the ability to eliminate tail calls could be
> considered a quality that could be switched on/off with the usual
> (declare (optimize ...)) forms.

If we were defining the language anew (or modifying the core, which for
various reasons of economic expense, we can't at this time), I wouldn't
mind seeing a
(declare (optimize (tail-calls 3)))
being a way to get reliable tail call optimization and lesser values giving
you only certain tail call optimizations (perhaps depending on debug settings).

[It's a pity that we don't have a portable agreement about whether, in general,
declarations work in absolute terms or relative terms, that is whether
(> speed safety) is the test or (> speed 2) is the test for various things.
Declarations are non-portable to begin with.]

It would be fine by me, and I think conforming, if vendors who wanted to
experiment with tail call removal did so. I would suggest if they did it that
when the debug quality's value was higher than tail-call quality's value,
they should not do the removal unless they had a scheme-style call history
cube or something like that they could substitute for debugging (such as it
is--I don't like those things).

> I see. To speak honestly, being a C++ programmer I look at general
> recursion with a little fear, for I see some vagueness in it. This is why
> I so 'like' the concept of tail recursion elimination.

Interesting. I bet that's more the exception than the rule. I remember
when answering machines came out for telephones, and a lot of my
telephone-hating friends wouldn't buy one. I kept saying "no, an answering
machine is not a kind of telephone thing, it's a kind of anti-telephone
thing". But it was a sophisticated observation to realize this, and they
shunned these helpful phone-shunning devices for a long time. You're
basically doing likewise here by saying you don't like recursion but you
like it where you can prove to yourself it isn't happening. That requires
a degree of sophistication that I bet most people, certainly most people
new to recursion, aren't going to have. For one thing, it's very hard for
new recursion users to "see" when tail recursion is happening and when not.
It's much easier for them to know when they're using looping primitives,
which is why many of us in the CL community prefer them. :-)

Still, I found your remark here quite interesting, and worth having had this
whole discussion to get to.

Juliusz Chroboczek

Sep 16, 2001, 5:17:16 PM
Barry Margolin:

BM> In my experience, most recursive functions are not written in a
BM> tail-recursive manner, so the benefit is not even that significant.

I don't see tail recursion as a restricted case of recursion, but
rather as an alternative notation for iteration.

Saying that Common Lisp doesn't need tail recursion because it already
has TAGBODY is rather unconvincing. Pushing your argument to its
absurd conclusion, we don't need TAGBODY either because we've already
got PROG.

I have seen quite a few cases where tail recursion was the most
natural way to express a construction. The obvious example are finite
state automata, which are elegantly expressed as a set of mutually
tail-recursive functions. On one occasion, I made a set of coroutines
into a set of mutually tail-recursive functions by making the
coroutines' state explicit.

Obviously, tail recursion elimination is more than a mere
optimisation. In the FSA case, lack of tail recursion elimination
would make a constant-space algorithm into a linear space one, which
is perfectly unacceptable most of the time.
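A tiny concrete instance of the FSA pattern described above (the automaton, which accepts lists containing an even number of 1s, is my own example): each state is a function whose transitions are mutual tail calls, so with tail-call elimination it runs in constant space however long the input is.

```lisp
(defun state-even (input)             ; seen an even number of 1s so far
  (cond ((null input) :accept)
        ((eql (first input) 1) (state-odd (rest input)))
        (t (state-even (rest input)))))

(defun state-odd (input)              ; seen an odd number of 1s so far
  (cond ((null input) :reject)
        ((eql (first input) 1) (state-even (rest input)))
        (t (state-odd (rest input)))))

;; (state-even '(1 0 1)) => :ACCEPT   (two 1s)
;; (state-even '(1 0))   => :REJECT   (one 1)
```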

I personally consider the lack of guarantees on the behaviour of tail
calls as a weak point of the Common Lisp standard. When faced with a
situation that calls for tail recursion, I usually give up on writing
portable Common Lisp and write for CMUCL.

Juliusz

Aleksandr Skobelev

Sep 17, 2001, 3:15:30 AM
Kent M Pitman <pit...@world.std.com> wrote:
> Aleksandr Skobelev <22...@mail.ru> writes:
>
>> I see your point. But the ability to eliminate tail calls could be
>> considered a quality that could be switched on/off with the usual
>> (declare (optimize ...)) forms.
>
> If we were defining the language anew (or modifying the core, which for
> various reasons of economic expense, we can't at this time), I wouldn't
> mind seeing a
> (declare (optimize (tail-calls 3)))
> being a way to get reliable tail call optimization and lesser values giving
> you only certain tail call optimizations (perhaps depending on debug settings).

I'd offer a plainer rule: if (zerop tail-calls), then tail call
optimizations shouldn't happen; otherwise they should happen, but only in
the described cases.

>
> [It's a pity that we don't have a portable agreement about whether, in general,
> declarations work in absolute terms or relative terms, that is whether
> (> speed safety) is the test or (> speed 2) is the test for various things.
> Declarations are non-portable to begin with.]

Hmm... That is interesting. I didn't know about that.

>
> It would be fine by me, and I think conforming, if vendors who wanted to
> experiment with tail call removal did so. I would suggest if they did it that
> when the debug quality's value was higher than tail-call quality's value,
> they should not do the removal unless they had a scheme-style call history
> cube or something like that they could substitute for debugging (such as it
> is--I don't like those things).
>
>> I see. To speak honestly, being a C++ programmer I look at general
>> recursion with a little fear, for I see some vagueness in it. This is why
>> I so 'like' the concept of tail recursion elimination.
>
> Interesting. I bet that's more the exception than the rule. I remember
> when answering machines came out for telephones, and a lot of my
> telephone-hating friends wouldn't buy one. I kept saying "no, an answering
> machine is not a kind of telephone thing, it's a kind of anti-telephone
> thing". But it was a sophisticated observation to realize this, and they
> shunned these helpful phone-shunning devices for a long time.

:)

> You're basically doing likewise here by saying you don't like recursion
> but you like it where you can prove to yourself it isn't happening.

You understand me better than I do myself. :)
Really, I like recursion as an abstraction for representing some algorithms,
but not as their realization at the machine level.

> That requires a degree of sophistication that I bet most people,
> certainly most people new to recursion, aren't going to have. For one
> thing, it's very hard for new recursion users to "see" when tail
> recursion is happening and when not. It's much easier for them to know
> when they're using looping primitives, which is why many of us in the CL
> community prefer them. :-)

So, if we return to the beginning: would it be useful if the CL
specification described the situations in which recursive calls should be
eliminated by tail-call optimization? (It is not a question that requires
an answer just now.) :)

Barry Margolin

Sep 17, 2001, 10:58:04 AM
In article <2kBo7.1924$7I2.6...@news.uswest.net>,

Yes, it could be written that way. But since the accumulator isn't
supposed to be part of the public interface of the function, it seems wrong
to have it there.

Barry Margolin

Sep 17, 2001, 11:08:27 AM
In article <0ph0o9...@hermit.athome>,

I don't think tail recursion would normally be expected to result in much
of a speedup of the program. Pushing a stack frame is not that expensive,
and a tail-recursive call still has to copy the parameters just like a
non-TR call. The final return will be faster, since it's not necessary to
unwind so many stack frames, but I can't imagine that resulting in 1000%
performance improvement. Tail recursion reduces stack space usage, which
could improve paging, but again I don't think it would be a significant
improvement; recent stack frames are almost always in RAM and cache, and
pushing and popping frames will only rarely produce page faults.

Kalle Olavi Niemitalo

Sep 17, 2001, 12:32:40 PM
Barry Margolin <bar...@genuity.net> writes:

> But since the accumulator isn't supposed to be part of the
> public interface of the function, it seems wrong to have it
> there.

We can hide it:

(defun fact (n &key ((#1=#:acc acc) 1))
  (if (< n 2)
      acc
      (fact (1- n) '#1# (* n acc))))

;-)

Has any of you written a macro that gensyms parameter keyword names?
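One possible shape for such a macro, sketched as an answer to the question (the names DEFUN-HIDDEN-ACC and RECURSE are invented for illustration): it gensyms the keyword name at expansion time and provides a local RECURSE function that knows the secret keyword.

```lisp
(defmacro defun-hidden-acc (name (arg acc init) &body body)
  "Define NAME with an accumulator ACC passed under an uninterned
keyword symbol, so callers cannot easily supply it.  BODY may call
(RECURSE new-arg new-acc) to make the accumulating call."
  (let ((key (gensym "ACC")))
    `(defun ,name (,arg &key ((,key ,acc) ,init))
       ;; RECURSE closes over the gensym, quoting it in the call so it
       ;; matches the keyword name in the lambda list.
       (flet ((recurse (a v) (,name a ',key v)))
         ,@body))))

;; Usage:
(defun-hidden-acc fact (n acc 1)
  (if (< n 2)
      acc
      (recurse (1- n) (* n acc))))

;; (fact 5) => 120, with no usable accumulator keyword exposed.
```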

Geoff Summerhayes

Sep 17, 2001, 5:08:23 PM

"Kalle Olavi Niemitalo" <k...@iki.fi> wrote in message
news:87ofo9y...@Astalo.y2000.kon.iki.fi...

> Barry Margolin <bar...@genuity.net> writes:
>
> > But since the accumulator isn't supposed to be part of the
> > public interface of the function, it seems wrong to have it
> > there.
>
> We can hide it:
>
> (defun fact (n &key ((#1=#:acc acc) 1))
> (if (< n 2)
> acc
> (fact (1- n) '#1# (* n acc))))
>

Hidden? Not really...

<LW>

CL-USER 1 > (defun fact (n &key ((#1=#:acc acc) 1))
              (if (< n 2)
                  acc
                  (fact (1- n) '#1# (* n acc))))

FACT

CL-USER 2 > (function-lambda-list 'fact)
(N &KEY ((#:ACC ACC) 1))

CL-USER 3 > (fact 5 (first (first (third (function-lambda-list
'fact)))) -1)
-120

<CLISP v2.27 function-lambda-list not defined>

(fact 5 (first (first (third (second (function-lambda-expression
#'fact))))) -1)
-120

-----------
Geoff

Gareth McCaughan

Sep 17, 2001, 8:10:14 PM
Juliusz Chroboczek wrote:

> Obviously, tail recursion elimination is more than a mere
> optimisation. In the FSA case, lack of tail recursion elimination
> would make a constant-space algorithm into a linear space one, which
> is perfectly unacceptable most of the time.

So write a macro.

(defmacro with-state-machine (states &body body)
  (let* ((names (mapcar #'first states))
         (arglists (mapcar #'second states))
         (bodies (mapcar #'cddr states))
         (symbols (reduce #'union arglists)))
    `(let ,symbols
       (macrolet
           ,(mapcar (lambda (name vars)
                      (let ((args (loop for x in vars collect (gensym))))
                        `(,name ,args
                                `(progn
                                   (psetq . ,',(loop for arg in args for var in vars
                                                     nconc `(,var `,,arg)))
                                   (go ,',name)))))
                    names arglists)
         (block nil
           (tagbody
              ,@body
              (return)
              . ,(mapcan (lambda (name arglist body)
                           (declare (ignore arglist))
                           (cons name (append body (list '(return)))))
                         names arglists bodies)))))))

That's decidedly primitive, of course; a more serious version
would take steps to avoid unwanted symbol capture, for instance.

It's also wrong; it's too late at night for me to wrap my brain
around the backquoting and unquoting required to get the right
thing into the MACROLET expansions. "`,,arg" is definitely wrong;
I'm sure someone with a clearer head than mine can say what the
right thing is.

(Actually, the right thing probably involves using auxiliary
functions to clarify what's going on and reduce the amount
of nesting. Again, I'm not sufficiently awake.)

--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc

Frank A. Adrian

Sep 18, 2001, 2:42:20 AM
Barry Margolin wrote:

> In article <2kBo7.1924$7I2.6...@news.uswest.net>,
>>Barry Margolin wrote:

> Yes, it could be written that way. But since the accumulator isn't
> supposed to be part of the public interface of the function, it seems
> wrong to have it there.
>

I never said it was prettier than the original recursive solution :-)!

I still prefer it over the use of the hideous (but more rigorous)
tail-recursive flet example that provoked this. Besides, if you use this
technique to compute Fibonacci Numbers, you get a generator for Lucas
Numbers and other generalized Fibonacci Numbers.

I.e., when

(defun fib (n &optional (base1 0) (base2 1))
  (if (= n 0)
      base2
      (fib (1- n) base2 (+ base1 base2))))

is invoked as (fib 1 3), you get the Lucas Numbers, etc. Actually, the
fact function coded the way I described gives a set of functions equivalent
to integral multiples of the factorial function with the multiplier given
by the base case - and who's to say that the function family isn't more
interesting or proper (in the general mathematical sense)?
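Checking a couple of invocations against that definition (the specific values here are worked out by hand from the code above, repeated so the examples stand alone):

```lisp
;; The definition from the post, repeated for self-containment.
(defun fib (n &optional (base1 0) (base2 1))
  (if (= n 0)
      base2
      (fib (1- n) base2 (+ base1 base2))))

(fib 5)      ; => 8   (default bases: the Fibonacci numbers 1, 1, 2, 3, 5, 8, ...)
(fib 5 2 1)  ; => 18  (bases 2 and 1 walk the Lucas sequence 2, 1, 3, 4, 7, 11, 18, ...)
```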

I guess my point is that, although the &optional method of coding tail
recursion seems to be a "hacky" sort of thing, many times it might also be
useful for generating a set of functions just as useful as the one used for
the "normative" case.

faa

Kalle Olavi Niemitalo

Sep 18, 2001, 1:51:25 AM
"Geoff Summerhayes" <sNuOmS...@hNoOtSmPaAiMl.com> writes:

> <CLISP v2.27 function-lambda-list not defined>

Not in CLHS either. Is it expected to reveal &AUX variables too?

> (fact 5 (first (first (third (second (function-lambda-expression
> #'fact))))) -1)

That's cheating! If you use FUNCTION-LAMBDA-EXPRESSION then not
even LABELS is hidden.

Geoff Summerhayes

Sep 18, 2001, 1:39:32 PM

"Kalle Olavi Niemitalo" <k...@iki.fi> wrote in message
news:87u1y1t...@Astalo.y2000.kon.iki.fi...

> "Geoff Summerhayes" <sNuOmS...@hNoOtSmPaAiMl.com> writes:
>
> > <CLISP v2.27 function-lambda-list not defined>
>
> Not in CLHS either. Is it expected to reveal &AUX variables too?

Realized this after posting when I looked for it in the HS instead
of relying on apropos. *SIGH*

>
> > (fact 5 (first (first (third (second (function-lambda-expression
> > #'fact))))) -1)
>
> That's cheating! If you use FUNCTION-LAMBDA-EXPRESSION then not
> even LABELS is hidden.

Is it possible to call the function? The only ways I can think
of involve EVAL, and that still leaves the actual helper
hidden.

This is all moot anyway since FUNCTION-LAMBDA-EXPRESSION is
not guaranteed to return something useful for either of our
examples.

It's a bit of a shame the FUNCTION-LAMBDA-LIST isn't in the
standard, I've found it very handy in my exploration of the
language, it succeeds in cases where FUNCTION-LAMBDA-EXPRESSION
returns nil for the expression. In fact, I haven't found a
reasonable case where it fails.

Geoff

Barry Margolin

Sep 18, 2001, 4:04:29 PM
In article <tqf1c1d...@corp.supernews.com>,

Geoff Summerhayes <sNuOmS...@hNoOtSmPaAiMl.com> wrote:
>It's a bit of a shame the FUNCTION-LAMBDA-LIST isn't in the
>standard, I've found it very handy in my exploration of the
>language, it succeeds in cases where FUNCTION-LAMBDA-EXPRESSION
>returns nil for the expression. In fact, I haven't found a
>reasonable case where it fails.

FUNCTION-LAMBDA-LIST isn't in the standard, and FUNCTION-LAMBDA-EXPRESSION
isn't required to return anything useful, because we didn't want to
*require* implementations to retain all this source-code information at run
time. While most implementations do include this type of information for
debugging purposes, they often have options when compiling or dumping
production executables that will prune it out.

Juliusz Chroboczek

Sep 18, 2001, 7:18:17 PM
KP> I wouldn't mind seeing a

KP> (declare (optimize (tail-calls 3)))

KP> being a way to get reliable tail call optimization and lesser
KP> values giving you only certain tail call optimizations (perhaps
KP> depending on debug settings).

It is my understanding that OPTIMIZE settings are not supposed to
change the semantics of a correct program.

Tail call elimination does change the semantics of correct programs.
The obvious example is

(defun loop ()
  (loop))

which terminates without tail call elimination, but loops forever with
tail call elimination.

Juliusz

Juliusz Chroboczek

Sep 18, 2001, 7:23:05 PM
>> Obviously, tail recursion elimination is more than a mere
>> optimisation.

GM> So write a macro.

Gareth,

You're missing my point. I know there are other ways. I know half a
dozen techniques for implementing FSMs.

My point is that I like to implement them in a certain way, and that
the Common Lisp standard allows a conforming implementation to
mis-compile my code. Fortunately, CMUCL doesn't take this liberty,
which is usually good enough for me.

Juliusz

Barry Margolin

Sep 19, 2001, 10:45:56 AM
In article <87itegh...@pps.jussieu.fr>,

Juliusz Chroboczek <j...@pps.jussieu.fr> wrote:
>Tail call elimination does change the semantics of correct programs.
>The obvious example is
>
> (defun loop ()
>   (loop))
>
>which terminates without tail call elimination, but loops forever with
>tail call elimination.

The only reason that terminates is if there's a stack overflow. That's an
implementation limitation, not part of the semantics. Nowhere in the CL
specification does it require that the stack must have a limited size. The
virtual machine described by the standard has infinite memory, but physics
prevents us from implementing the machine as described.

Kent M Pitman

Sep 19, 2001, 12:37:48 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> KP> I wouldn't mind seeing a
>
> KP> (declare (optimize (tail-calls 3)))
>
> KP> being a way to get reliable tail call optimization and lesser
> KP> values giving you only certain tail call optimizations (perhaps
> KP> depending on debug settings).
>
> It is my understanding that OPTIMIZE settings are not supposed to
> change the semantics of a correct program.

[See Barry's remarks.]



> Tail call elimination does change the semantics of correct programs.
> The obvious example is
>
> (defun loop ()
> (loop))
>
> which terminates without tail call elimination, but loops forever with
> tail call elimination.

I couldn't read this example. Can you rewrite it with another function name?
I can't tell if you mean to access the system LOOP primitive, which you are
not permitted to redefine anyway.

Even so, tail call elimination doesn't appear to me to affect looping
forever (other than resource limitations).

I once, for fun, added a halts? function to the T (Yale Scheme) language
dialect, and made it always return T, figuring all machines go down sometime.
But, you know, that's really not the kind of halting that the halting problem
is about. I think in the above loop example, if I understand you (and I
am not sure I do), there is termination in both cases: in one case due to
a resource limitation on stack, and in the other case in a resource limitation
brought by the mean time between failure of the host OS, its hardware, etc.,
which I assume is finite.

j...@itasoftware.com

Sep 19, 2001, 1:50:14 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

What is the return value?

Barry Margolin

Sep 19, 2001, 2:12:49 PM
In article <hetzax...@itasoftware.com>, <j...@itasoftware.com> wrote:

>Juliusz Chroboczek <j...@pps.jussieu.fr> writes:
>> Tail call elimination does change the semantics of correct programs.
>> The obvious example is
>>
>> (defun loop ()
>>   (loop))
>>
>> which terminates without tail call elimination, but loops forever with
>> tail call elimination.
>
>What is the return value?

What's the return value of the following function:

(defun doesnt-return ()
  (error "Stack overflow"))

Thomas F. Burdick

Sep 19, 2001, 3:08:03 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

It doesn't mis-compile your code, it does exactly what it says it will
do: it calls the function. I think Gareth had the right impulse,
though: write a macro. It's just that WITH-STATE-MACHINE wasn't the
right one. Write yourself WITH-TAILCALL-OPTIMIZATION. I did for
Emacs when it was easier than changing my code to produce something
other than n-mutual tail-recursive code. Though I admit to not having
ported it to CL (I just continue to hand rewrite or leave it to
CMUCL). It's something that would have been nice to have in the
language, but since it doesn't *require* vendor support, and it's easy
enough to do yourself, it's not that big of a deal.

Juliusz Chroboczek

Sep 22, 2001, 12:27:38 PM
Barry,

I think this discussion is getting off-topic, and doesn't really apply
to the point I was trying to make.

But I enjoy nit-picking about semantics, and will gladly join you.

BM> The only reason that terminates is if there's a stack overflow.
BM> That's an implementation limitation, not part of the semantics.
BM> Nowhere in the CL specification does it require that the stack
BM> must have a limited size.

There are two ways of understanding a specification ``assuming
unbounded resources'':

(i) assuming availability of potentially unbounded resources, said
program does FOO; and

(ii) there exists a finite (but arbitrarily large) amount of resources
within which said program does FOO.

Obviously, you're thinking in terms of (i), while I'm thinking in
terms of (ii). (Exercise for the reader: specify a class of
properties FOO such that (i) and (ii) coincide.)

BM> The virtual machine described by the standard has infinite memory,

I will grant you that most well-known formal semantic frameworks do
not deal with resource tracking at all, and therefore use model (i).
It is not clear to me, however, which model is used by the CL
standard, or whether the CL standard is explicit about the issue in
the first place. Perhaps Kent can enlighten us?

Juliusz

Juliusz Chroboczek

Sep 22, 2001, 12:27:39 PM
KP> I think in the above loop example, if I understand you (and I am
KP> not sure I do), there is termination in both cases: in one case
KP> due to a resource limitation on stack, and in the other case in a
KP> resource limitation brought by the mean time between failure of
KP> the host OS, its hardware, etc., which I assume is finite.

Fair enough.

I think we're back to a rather fundamental problem, which is that of
program equivalence.

In practical terms, would you accept a definition of program
equivalence that identifies a program which runs in constant space
with one which runs in linear space?

I think that one can fairly argue for both answers. (``Fairly,'' in
this context, means ``without invoking Adam Smith's Invisible Appendages.'')

Juliusz

Kent M Pitman

unread,
Sep 22, 2001, 2:02:55 PM9/22/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> KP> I think in the above loop example, if I understand you (and I am
> KP> not sure I do), there is termination in both cases: in one case
> KP> due to a resource limitation on stack, and in the other case in a
> KP> resource limitation brought by the mean time between failure of
> KP> the host OS, its hardware, etc., which I assume is finite.
>
> Fair enough.
>
> I think we're back to a rather fundamental problem, which is that of
> program equivalence.
>
> In practical terms, would you accept a definition of program
> equivalence that identifies a program which runs in constant space
> with one which runs in linear space?

I'm not sure what "linear space" is unless you mean "linear with
respect to some input". Maybe (following on the subject line) you
mean "linear in the number of tail calls". But if I understand the
question correctly, I think I think that it doesn't matter whether
it's "linear" or even "O(Ackermann(n))". What I think matters to me is
that it's O(finite_valued_fn(n)). I distinguish O(infinity) as a
non-constant that no finite valued function can keep up with. Put
another way, even with infinite tape on your turing machine, I think
you're not going to get to (foo) in (progn (loop) (foo)); and
therefore I think removing (loop) and letting you get to (foo) is not
a valid transformation. On the other hand, in (progn (bar) (foo)), if
(bar) is an expression that will run for an arbitrarily long but
known-to-be-finite time with no defined effects (other than memory use
or other things we don't quantify), I think I think it's ok to
optimize to (progn (foo)).

I don't purport to be a mathematician, but my understanding is that
the question is one of a discontinuity rather than a mere question of
resources.
That is, a function whose value goes to infinity at a certain point is
different than "gee, that function goes off my page of graph paper"
and different even than "gee, I'll never be able to buy enough graph
paper to show where that function tops out" to the point of "If I set
out to buy graph paper sufficient to try to show where that function
tops out, I would never do anything afterward because there is no end
to it." Also, I don't think it's the same to say "I recognize that
pursuing that purchase of graph paper is not worth doing; I'd better stop
here and do something else." (which is what I'm saying, with emphasis
on "STOP", meaning the program is done) and "I recognize that
pursuing that purchase of graph paper is not worth doing, so I'll just
blithely continue with the next step as if it didn't matter that I
skipped an instruction and I'll pretend nothing that followed was contingent
on my successful completion of the prior step" which is a very "C-language"
thing to do but which is NOT what I see Lisp as being about.

> I think that one can fairly argue for both answers. (``Fairly,'' in
> this context, means ``without invoking Adam Smith's Invisible Appendages.'')

This reference escapes me.

Erik Naggum

unread,
Sep 22, 2001, 2:22:06 PM9/22/01
to
* Juliusz Chroboczek

> Obviously, you're thinking in terms of (i), while I'm thinking in
> terms of (ii). (Exercise for the reader: specify a class of
> properties FOO such that (i) and (ii) coincide.)

It seems to me that arguing about the flaws of limited memory is
pointless when you accept limited execution time. These two may well
coincide if the system halts execution when it runs out of memory, as
when it invokes the debugger on the out-of-memory condition.

If there is no upper bound on the consumption of resources of a program,
it is ipso facto _broken_, and no silly quibbling over which resource is
exhausted _first_ can possibly change any semantics of said program.

It is _typical_ that this has to come up in relation to a Scheme concept.

///
--
Why did that stupid George W. Bush turn to Christian fundamentalism to
fight Islamic fundamentalism? Why use terms like "crusade", which only
invokes fear of a repetition of that disgraceful period of Christianity
with its _sustained_ terrorist attacks on Islam? He is _such_ an idiot.

Juliusz Chroboczek

unread,
Sep 22, 2001, 2:40:28 PM9/22/01
to
>> In practical terms, would you accept a definition of program
>> equivalence that identifies a program which runs in constant space
>> with one which runs in linear space?

KMP> I'm not sure what "linear space" is unless you mean "linear with
KMP> respect to some input".

Given some problem with a well-defined notion of input size, and given
two (correct) programs that solve this problem, one of which does so
in bounded space, the other in linear space, can the two programs be
considered as equivalent?

I think one can argue both ways. (Or else: the correct notion of
equivalence depends on the application.)
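The question can be made concrete in Lisp terms (a sketch; LEN-LINEAR and LEN-ACC are names invented here): two correct programs computing list length, one of which needs stack proportional to the input, the other of which runs in bounded space under tail-call elimination.

```lisp
;; LEN-LINEAR recurses in non-tail position: without tail-call
;; elimination it consumes stack linear in the length of the list.
(defun len-linear (list)
  (if (null list)
      0
      (1+ (len-linear (cdr list)))))

;; LEN-ACC makes only tail calls: under tail-call elimination it runs
;; in constant stack space.  (CL implementations may, but need not,
;; perform that elimination.)
(defun len-acc (list)
  (labels ((walk (l n)
             (if (null l) n (walk (cdr l) (1+ n)))))
    (walk list 0)))
```

Both compute the same function on every input; whether they count as "equivalent programs" is exactly the question being argued.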

KMP> my understanding is that the question is one of a discontinuity
KMP> rather than a mere question of resources.

Both Barry and I have been very careful to avoid the term ``infinite''.
We were speaking of unbounded resources.

When modelling a finitary process such as deterministic computation,
infinity can only appear as the limit of an unbounded process. Thus,
the question of whether the process is continuous cannot even be
formulated: the very assumption that leads us to introduce the notion
of infinity is that the process is continuous.

If you prefer, we were not even considering the notion of actual
infinity. The largest (in some intuitive sense) notion we were
willing to consider was that of potentially unbounded usage of
resources: a usage that remains finite at any time, but with no a
priori bound.

>> (``Fairly,'' in this context, means ``without invoking Adam Smith's
>> Invisible Appendages.'')

KMP> This reference escapes me.

This modern form of mysticism, ``the market's invisible hand.''

Juliusz

Kent M Pitman

unread,
Sep 22, 2001, 5:56:55 PM9/22/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> >> In practical terms, would you accept a definition of program
> >> equivalence that identifies a program which runs in constant space
> >> with one which runs in linear space?
>
> KMP> I'm not sure what "linear space" is unless you mean "linear with
> KMP> respect to some input".
>
> Given some problem with a well-defined notion of input size, and given
> two (correct) programs that solve this problem, one of which does so
> in bounded space, the other in linear space, can the two programs be
> considered as equivalent?
>
> I think one can argue both ways. (Or else: the correct notion of
> equivalence depends on the application.)
>
> KMP> my understanding is that the question is one of a discontinuity
> KMP> rather than a mere question of resources.
>
> Both Barry and I have been very careful to avoid the term ``infinite''.
> We were speaking of unbounded resources.

But the issue of (progn (loop) (foo))
is not a question of unbounded resources, it's a question of
infinite resources. (foo) will never be reached.

> When modelling a finitary process such as deterministic computation,
> infinity can only appear as the limit of an unbounded process. Thus,
> the question of whether the process is continuous cannot even be
> formulated: the very assumption that leads us to introduce the notion
> of infinity is that the process is continuous.

I guess I'm just not mathy enough to make sense of this. In my
parlance, (loop) is an "infinite loop". There isn't any ambiguity
about this. Nor is there any ambiguity that (foo) in (progn (loop) (foo))
cannot be reached. Any application of mathematics which obscures this
quite obvious fact is either accidental or intentional obfuscation, IMO.
But whatever the motivation, it does not aid the understanding of the problem.

> If you prefer, we were not even considering the notion of actual
> infinity.

Then in my parlance, you are not addressing the form '(loop)'.

> The largest (in some intuitive sense) notion we were
> willing to consider was that of potentially unbounded usage of
> resources: a usage that remains finite at any time, but with no a
> priori bound.

This is trivially true of all computations but does not address the
"meaning" of those computations. You don't understand infinity by walking
a ways and saying "it seems finite to me" and then walking a ways farther
and saying "it still seems finite" and so on. You understand infinity
exactly by considering the fact that in the two step process "1. do infinite
work, then 2. do something else finite" that no matter how much you work on
step 1, you are no closer to doing step 2, and, moreover, understanding
that step 2 will never be done. That is the definition of infinity.
That is what (progn (loop) (foo)) says. If you have a mathematics that says
it means something else, you are using that math in the wrong place.

> >> (``Fairly,'' in this context, means ``without invoking Adam Smith's
> >> Invisible Appendages.'')
>
> KMP> This reference escapes me.
>
> This modern form of mysticism, ``the market's invisible hand.''

Hmm.

Juliusz Chroboczek

unread,
Sep 23, 2001, 11:22:01 AM9/23/01
to
KMP> But the issue of (progn (loop) (foo))
KMP> is not a question of unbounded resources, it's a question of
KMP> infinite resources. (foo) will never be reached.

Yes, we fully agree about that. There is no such thing as an actually
infinite amount of work. This was emphatically not the issue I was
trying to get you to comment on.

>> If you prefer, we were not even considering the notion of actual
>> infinity.

KMP> Then in my parlance, you are not addressing the form '(loop)'.

We agree about that, I am definitely not.

As you say, there is no ambiguity. With any reasonable semantics, the
programs (loop) and (progn (loop) (error "Foo!")) are equivalent.

On the other hand, consider the following:

(defun foo ()
  (foo))

and suppose that you compile FOO without tail-call elimination. Does
FOO terminate?

The obvious answer is that my machine has finite memory, hence FOO
will run out of stack at some point, hence FOO terminates on my
machine. This answer invokes a specific maximum stack size, and so is
not acceptable as part of a language definition.

The answer that the best-known formal semantic frameworks give is that
assuming a potentially unbounded stack, FOO will never run out of
resources, hence FOO doesn't terminate. I claim that this answer is
not realistic.

Finally, you can say that, assuming any arbitrarily large but finite
amount of stack, FOO will terminate. As FOO terminates in all finite
models, FOO terminates. I claim that this last answer is more
realistic than the second one while avoiding mentioning a specific
amount of available resources.

The point I'm trying to make is that FOO terminates when compiled
without tail call elimination, and that FOO does not terminate if
compiled with tail call elimination. Thus, tail call elimination is
not a mere optimisation, but a semantic feature.
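The same point can be made observable with a bounded variant (a sketch; COUNT-DOWN is a name invented here for illustration):

```lisp
;; The self-call is in tail position.  A compiler that eliminates tail
;; calls emits a jump, so the depth N is irrelevant to stack use; one
;; that does not pushes a frame per call, and a large N may exhaust
;; the stack (typically signalling STORAGE-CONDITION, though that
;; behavior is implementation-defined).
(defun count-down (n)
  (if (zerop n)
      :done
      (count-down (1- n))))

;; (count-down 100000000) returns :DONE under tail-call elimination;
;; without it, the outcome is implementation-dependent.
```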

And, incidentally, a feature that I miss in my favourite programming
language.

Regards,

Juliusz

Erik Naggum

unread,
Sep 23, 2001, 11:54:36 AM9/23/01
to
* Juliusz Chroboczek <j...@pps.jussieu.fr>

> On the other hand, consider the following:
>
> (defun foo ()
>   (foo))
>
> and suppose that you compile FOO without tail-call elimination. Does
> FOO terminate?

Why do you not distinguish between "terminate" and "signal an error"?

> Thus, tail call elimination is not a mere optimisation, but a semantic
> feature.

This is just plain wrong. Your semantic feature rests on an equivocation
between termination and crashing. I fail to see how this can reasonably
be considered anything but _completely_ bogus. What is the _value_ of a
crash? Rather, a crash is the antithesis of termination, because it
_prevents_ every reasonably expectation from a termination. The same is
true for a non-idiotic example which takes enough time to terminate that
it can run out of some resources, like electric power or memory or disk
space or whatever -- hell, that might even happen immediately after it has
_started_ running. It is profoundly counter-productive to treat each and
every cessation of execution the same way. I wonder what kind of silly
theoretical framework has found it useful to abstract away the value of a
computation so one could study the conditions for cessation of execution
without concern for what the computation yields. It seems to be one of
those useless theories that are used only to prove something that is
wrong and that every reasonable person knows to be wrong, but which can
be shown to be true if you accept a whole bunch of insane premises that
have been divorced from the reality and purpose of computing.

Again, it is typical that this discussion centers around a Scheme feature.

j...@itasoftware.com

unread,
Sep 24, 2001, 10:02:40 AM9/24/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> KMP> But the issue of (progn (loop) (foo))
> KMP> is not a question of unbounded resources, it's a question of
> KMP> infinite resources. (foo) will never be reached.
>
> Yes, we fully agree about that. There is no such thing as an actually
> infinite amount of work. This was emphatically not the issue I was
> trying to get you to comment on.
>
> >> If you prefer, we were not even considering the notion of actual
> >> infinity.
>
> KMP> Then in my parlance, you are not addressing the form '(loop)'.
>
> We agree about that, I am definitely not.
>
> As you say, there is no ambiguity. With any reasonable semantics, the
> programs (loop) and (progn (loop) (error "Foo!")) are equivalent.

Normal order semantics are unreasonable? First I've heard of that.

> On the other hand, consider the following:
>
> (defun foo ()
> (foo))
>
> and suppose that you compile FOO without tail-call elimination. Does
> FOO terminate?
>
> The obvious answer is that my machine has finite memory, hence FOO
> will run out of stack at some point, hence FOO terminates on my
> machine. This answer invokes a specific maximum stack size, and so is
> not acceptable as part of a language definition.

It also assumes that each call to (foo) consumes some amount of stack.
This is an unnecessary requirement.

Kent M Pitman

unread,
Sep 24, 2001, 10:21:39 AM9/24/01
to
j...@itasoftware.com writes:

> > KMP> Then in my parlance, you are not addressing the form '(loop)'.
> >
> > We agree about that, I am definitely not.
> >
> > As you say, there is no ambiguity. With any reasonable semantics, the
> > programs (loop) and (progn (loop) (error "Foo!")) are equivalent.
>
> Normal order semantics are unreasonable? First I've heard of that.

I believe the context was "reasonable semantics for CL" -- i.e.,
"reasonable transcription of the semantics which are present and fixed
for CL even if not specified formally". The absence of a formal
semantics presentation is not license to do new design.

It's been a long time since I had to deal with these. Is "normal
order" the one where you don't have to do things in order? If so, you
should remember that "reasonableness" is like "goodness"--it depends
on context and is not a universal property of the universe. In CL,
evaluation works left to right, so you have to first dispense with one
form to get to the next. If you are unable to dispense with a certain
form, you're done.



> > On the other hand, consider the following:
> >
> > (defun foo ()
> >   (foo))
> >
> > and suppose that you compile FOO without tail-call elimination. Does
> > FOO terminate?
> >
> > The obvious answer is that my machine has finite memory, hence FOO
> > will run out of stack at some point, hence FOO terminates on my
> > machine. This answer invokes a specific maximum stack size, and so is
> > not acceptable as part of a language definition.

The language definition does not call "out of stack" a semantic
error but a resource limitation. Read about the difference between
the classes SERIOUS-CONDITION and ERROR. It is not an error to do this;
it just will probably result in a serious condition (stack resource exhausted),
though it might also result in lots of time passing and then the machine
having to be rebooted; that's less likely to get trapped but is also
a resource limitation. Neither is a semantic error per se.

> It also assumes that each call to (foo) consumes some amount of stack.
> This is an unnecessary requirement.

It doesn't matter. CL semantics deal neither with stack nor time, so either
of those commodities can run out and you have no control of it. This is not
to say those issues are not important, but they are left, as a matter of
language design, to the vendor. A vendor is not prohibited from doing
tail call elimination, they are simply not required to.

What is not left to the vendor is the right to hop over inconvenient
statements in order to get to one that is more convenient...

j...@itasoftware.com

unread,
Sep 24, 2001, 12:15:14 PM9/24/01
to
Kent M Pitman <pit...@world.std.com> writes:

> j...@itasoftware.com writes:
>
> > > KMP> Then in my parlance, you are not addressing the form '(loop)'.
> > >
> > > We agree about that, I am definitely not.
> > >
> > > As you say, there is no ambiguity. With any reasonable semantics, the
> > > programs (loop) and (progn (loop) (error "Foo!")) are equivalent.
> >
> > Normal order semantics are unreasonable? First I've heard of that.
>
> I believe the context was "reasonable semantics for CL" -- i.e.,
> "reasonable transcription of the semantics which are present and fixed
> for CL even if not specified formally".

Ah, I was thrown by your use of the word `any'.

> It's been a long time since I had to deal with these. Is "normal
> order" the one where you don't have to do things in order?

Technically, normal order is when you reduce the leftmost reducible
redex at each step. You can look upon this as a `lazy evaluation'
model. When you have no more reducible redexes, you have found the
`normal form'. Both normal order and applicative order will reduce to
the same normal form (when both reduce), but there are some
expressions for which applicative order reduction won't reduce.
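In CL terms the distinction can be sketched like this (CONST-1 is a name invented here): applicative order evaluates the argument before the call, so passing a diverging expression diverges, while wrapping the argument in a thunk that is never forced produces the normal-order outcome.

```lisp
;; CL uses applicative order, so ((lambda (x) 1) (loop)) never
;; returns: the argument is evaluated before the call.  A normal-order
;; reducer would substitute first and yield 1.  Delaying the argument
;; in a closure that CONST-1 never calls simulates that result:
(defun const-1 (thunk)
  (declare (ignore thunk))
  1)

(const-1 (lambda () (loop)))   ; => 1; the diverging body is never run
```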

Juliusz Chroboczek

unread,
Sep 28, 2001, 6:58:57 PM9/28/01
to
EN> Why do you not distinguish between "terminate" and "signal an error"?

[...]

EN> This is just plain wrong. Your semantic feature rests on an
EN> equivocation between termination and crashing.

No. It rests on *distinguishing* between non-termination and
the signalling of STORAGE-CONDITION.

Suppose you identify signalling a condition and not terminating, i.e.

(loop)

and

(error "Foo!")

are equivalent.

Then (assuming a compositional semantics) you have to identify

(progn (ignore-errors (loop)) t)

and

(progn (ignore-errors (error "Foo!")) t)

If you choose to identify the latter program with T (a rather natural
hypothesis), and the former one with (LOOP), then you've just
identified (LOOP) with T. With a little more work (and some similarly
natural hypotheses), you've just identified all Lisp programs.

Not a very interesting semantics.

Juliusz

P.S. Okay, I've cheated. Whether a stack overflow generates a
STORAGE-CONDITION or causes a nuclear holocaust is
``implementation-defined'' in ANSI Common Lisp. Still, I think it's
natural to distinguish between a ``crash'' and non-termination.

The accepted terminology in the Semantics community, by the way, is to
include abnormal termination under the term ``termination''.

Erik Naggum

unread,
Sep 28, 2001, 7:03:36 PM9/28/01
to
* Juliusz Chroboczek

> The accepted terminology in the Semantics community, by the way, is to
> include abnormal termination under the term ``termination''.

That seems like a very good way to cheat yourself out of a large number
of really hairy problems. Cleaning up such things is not interesting.

///
--
Why, yes, I love freedom, Mr. President, but freedom means accepting risks.
So, no, Mr. President, I am not with you, and I am not with the terrorists.
I would be happier if you left the airports alone and took care of all the
cars that are a much bigger threat to my personal safety than any hijacking.

Juliusz Chroboczek

unread,
Sep 28, 2001, 7:12:54 PM9/28/01
to
KMP> This is not to say those issues are not important, but they are
KMP> left, as a matter of language design, to the vendor. A vendor is
KMP> not prohibited from doing tail call elimination, they are simply
KMP> not required to.

I understand that. I wasn't complaining about the vendors' behaviour,
but about the guarantees given by the standard being too weak for my
personal programming style.

People like you have taught me to program by the book, not for the
implementation. The lack of strong guarantees about tail-call
elimination in the book forces me to program for the implementation
(or to give up on a number of techniques that I happen to find useful).

Of course, I am not implying that you (plural) have done anything
wrong when designing ANSI Common Lisp. I am quite willing to believe
that this was the right compromise at the time (possibly for the very
same reasons as the ones you gave recently for the omission of a MOP).

Regards,

Juliusz

Kent M Pitman

unread,
Sep 28, 2001, 7:59:22 PM9/28/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

>
> Barry,
>
> I think this discussion is getting off-topic, and doesn't really apply
> to the point I was trying to make.
>
> But I enjoy nit-picking about semantics, and will gladly join you.
>
> BM> The only reason that terminates is if there's a stack overflow.
> BM> That's an implementation limitation, not part of the semantics.
> BM> Nowhere in the CL specification does it require that the stack
> BM> must have a limited size.
>
> There are two ways of understanding a specification ``assuming
> unbounded resources'':
>
> (i) assuming availability of potentially unbounded resources, said
> program does FOO; and
>
> (ii) there exists a finite (but arbitrarily large) amount of resources
> within which said program does FOO.
>

Lisp needs both (i) and (ii) together.

> Obviously, you're thinking in terms of (i), while I'm thinking in
> terms of (ii). (Exercise for the reader: specify a class of
> properties FOO such that (i) and (ii) coincide.)
>
> BM> The virtual machine described by the standard has infinite memory,
>
> I will grant you that most well-known formal semantic frameworks do
> not deal with resource tracking at all, and therefore use model (i).
> It is not clear to me, however, which model is used by the CL
> standard, or whether the CL standard is explicit about the issue in
> the first place. Perhaps Kent can enlighten us?

Necessarily because the language is reflective, it uses a hybrid model
for which I'm pretty sure your semantics just doesn't apply. The
standard is expressed in English and it is the burden of anyone who
wants to apply any out-of-band lemmas from some other domain to CL to
first show that they have a correct semantic description of the language.

We specifically chose English because it was democratic and accessible to
the programming community we needed to reach, such as myself, for example.
I am far from the "guiding force" behind CL. I was there, I had a lot of
opinions that got incorporated, and I was the "neutral editor" once votes
were taken. But people like Steele and Gabriel were there and fully
aware of enough issues of formal semantics that they could have objected at
any time if we were too far afield. My own personal understanding of
semantics says that something is "semantically well-formed" if I think I can
read it and understand it and implement it without ambiguity. I'm sorry
that's probably not very scientific, but I've found it operationally
useful over the years. The things I intuitively see as semantically
well-formed seem to be exactly the set of things the math guys are happy
describing, given that they ever take the patience to properly describe
it and don't just dive in with prefab scaffolding of some system they like
to use and hope it works in the current situation.

The system we use describes BOTH a virtual semantics in which you can recurse
indefinitely and not worry about stack, in which you can cons indefinitely
and not worry about heap, etc. AND a practical semantics in which you can
limit these things and still be a correct implementation. A conforming
implementation must not tell the user he has made a semantic violation if
he needs a deeper stack than is available--SERIOUS-CONDITION is ok, but ERROR is
not. Ditto for consing too much, though handling that case is going to be
hard for other reasons. Still, it doesn't make an implementation
non-conforming.

(ignore-errors (loop)) does not terminate.

neither does

(defun foo () (foo))
(ignore-errors (foo))

The latter, though, creates a resource violation in some (but not all)
implementations. This is permitted. And Lisp is reflective, so the
resource violation is permitted to be signaled. It would be an unuusal
Lisp that signaled a resource violation for (ignore-errors (loop)) but
technically I think signalling a TIME-EXHAUSTED situation would be
permissible, as long as it was a serious condition and not an error.
It should not be trapped by IGNORE-ERRORS.

Lisp does allow even resource limitations to be trapped, so any formal
semantics you apply must be one that deals not only with bottom and its
infectious nature compositionally, but also with some operator that can
trap bottom and keep it in check, at least some of the time. My intuitive
sense is that this is overpowerful and probably gets messy because it's
got analogies to a system with both an unstoppable force and an
infinite barrier. One has to give. Probably it will be the unstoppable
force, since writing infinite barriers probably can't keep lisp from
stopping. But conceptually, Lisp creates a reflective battleground in
which the struggle is permitted to take place. And if it's uncomputable
who will win, that's too bad, because in practice it's still worth trying
to navigate this space in the name of robustness. I would not want a weaker
language just because it promised more deterministic behavior.

Lisp is a hybrid of your modes (i) and (ii) above because it is intended
both to recognize an idealized semantics it may not be able to implement,
and also to accept its own limitations. (In this way, it's probably not
that different from being religious, for those of you who are.) It is both
trying to enforce the ideal semantics but also to still HAVE a semantics
in the case where its semantics are violated. In this regard, it is
partly reflective. It is not 3lisp nor some other dialect that purports
to be wholly reflective, but it plainly involves some of the complex issues
that come up in such situations, and embracing it with the same tools for
semantics that you use for other languages is not meeting it on its own
terms, I think. Lisp has traditionally begun by trying to figure out what
it wants to do and leaving it to the theorists to catch up; the problem with
working the other way around is that you always have a language which is
limited by the knowledge of the day it was designed and only gets old from
there; Lisp continues to challenge the theorists even after its design,
which may be puzzling but I regard as appropriate.

Kent M Pitman

unread,
Sep 28, 2001, 8:05:58 PM9/28/01
to
Juliusz Chroboczek <j...@glory.dcs.ed.ac.uk> writes:

> EN> Why do you not distinguish between "terminate" and "signal an error"?
>
> [...]
>
> EN> This is just plain wrong. Your semantic feature rests on an
> EN> equivocation between termination and crashing.
>
> No. It rests on *distinguishing* between non-termination and
> the signalling of STORAGE-CONDITION.

But why? STORAGE-CONDITION merely signals a lack of resource in the
ability to wait. It is not predictive of whether the waiting would
have done the same thing; it is utterly agnostic on the issue. No
implementation is required to stop. Implementations are permitted to
do tail call elimination.

> Suppose you identify signalling a condition and not terminating, i.e.
>
> (loop)
>
> and
>
> (error "Foo!")
>
> are equivalent.

They don't do the same thing. There are situations in which they are
operationally the same and situations in which they are not.

> Then (assuming a compositional semantics) you have to identify

I don't know what a "compositional semantics" implies here--sounds
like it means "a sequence of words not included in the spec". Would
that be a fair interpretation? Lisp is a partly reflective language,
and error invokes the reflective mechanism, while LOOP does not. So
there are many ways in which the two are alike and some ways in which
they are not.



> (progn (ignore-errors (loop)) t)
>
> and
>
> (progn (ignore-errors (error "Foo!")) t)

This is an example of how the two are not alike.



> If you choose to identify the latter program with T (a rather natural
> hypothesis), and the former one with (LOOP), then you've just identi-
> fied (LOOP) with T. With a little more work (and some similarly
> natural hypotheses), you've just identified all Lisp programs.

I don't know what this "identified with" stuff here is, but it's not in the
language definition, so it's really just not relevant. I see this as an
improper application of the mathy game of "deltas and epsilons" where you
have played deltas but are obliged to play epsilons. It's your burden to
show that your reasoning conforms to the language definition, which is
democratically expressed in English and open to all to understand; the
language is not offered in some elitist mathy formal semantics and that
means that attempts to apply such semantics should be made only where you
can prove that such semantics are appropriate. That is in contrast with
Scheme where the language definition is offered in two allegedly redundant
forms, but where if there is ever a dispute a certain part of the public will
be locked out of it because they will be overwhelmed by obscure notation.
In the design of CL, we specifically addressed the idea of such redundancy
and decided it was a bad plan.

> Not a very interesting semantics.

You're welcome to conclude as you like (though you somewhat put the
lie to the claim by continuing to discuss it... sort of like someone
claiming that Jerry Springer is boring TV but tuning in every day to
watch the fist fights). We're busy building programs that are interesting,
not semantics that are interesting. Our dull semantics has not seemed
to impede that.

> Juliusz
>
> P.S. Okay, I've cheated. Whether a stack overflow generates a
> STORAGE-CONDITION or causes a nuclear holocaust is ``implementation-
> defined'' in ANSI Common Lisp. Still, I think it's natural to
> distinguish between a ``crash'' and non-termination.

I don't get it. IGNORE-ERRORS doesn't trap stack overflow and it doesn't
trap infinite loops. They are both the same. They are resource limitations.

ERROR is trapped because it involves a semantic violation.
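The distinction is visible directly in the condition system (a sketch; SIGNAL stands in here for a real stack overflow, whose actual behavior is implementation-defined):

```lisp
;; IGNORE-ERRORS establishes a handler for type ERROR only.
;; STORAGE-CONDITION is a SERIOUS-CONDITION but not an ERROR, so it
;; passes through IGNORE-ERRORS to an outer handler:
(handler-case
    (ignore-errors (signal 'storage-condition))
  (storage-condition () :passed-through))
;; => :PASSED-THROUGH

;; Whereas a condition of type ERROR is trapped:
(ignore-errors (error "semantic violation"))
;; => NIL, with the SIMPLE-ERROR condition as a second value
```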

> The accepted terminology in the Semantics community, by the way, is to
> include abnormal termination under the term ``termination''.

Too bad, but you are misapplying the semantics of the semantics community.
The "abnormal termination" here is due to an inability of the processor
to have the resources to supply the correct semantics. The correct semantics
allow the program you want. HOWEVER, a processor is considered conforming
for not having those resources. The following is the definition of a
completely conforming common lisp processor; it's not likely to be
marketworthy, but it is allowed to claim conformance:

(defun common-lisp ()
  (write-string "Resources exhausted."))

It probably also dumps out into a killer-small hello world, by the way, for
those who like to optimize such things and worry that CL has missed that boat.

CL has the notion of an ideal semantics separate from the notion of an
implemented semantics. I'm sorry if the semantics community doesn't
like that, but they'll just have to deal. We describe arbitrary
precision integers but we don't require all conforming processors to
have the memory needed to house them. We describe programs
that can have arbitrary stack depth but we leave it to implementations
to sort out how deep the stacks really are and whether they are
extensible. We SAY that these are not to be identified as errors but
as resource limitations.

We say programs in CL have a meaning independent of the question of
whether processors will reliably run them. Processors for CL are
intended to try their best to meet that meaning, but are allowed to
fail under certain cases and still be conforming. It is specifically
left to the market to sort out whether this is worth money to buy. We
figured that was a much better test of worthiness since in a
pluralistic society we expect there to be cases where one solution is
ok for some and not for others and vice versa... Some people will
want implementations with tail call elimination, some won't. I don't,
for example. You do. So we'll prefer different implementations. We
can still agree on the meaning of the programs that might differ in
behavior between them, though. And we vendors can decide whose money
they want, without losing their right to claim conformance. Seems ok to
me...

Jochen Schmidt

Sep 28, 2001, 8:47:25 PM9/28/01
to
If you run your (loop) example on a laptop running on batteries
you will see in not so long time that even (loop) is doomed to
"terminate" because some resource is exhausted.

bye,
Jochen

--
http://www.dataheaven.de

Juliusz Chroboczek

Sep 29, 2001, 7:44:18 PM9/29/01
to
>> No. It rests on *distinguishing* between non-termination and
>> the signalling of STORAGE-CONDITION.

KMP> But why? STORAGE-CONDITION merely signals a lack of resource in
KMP> the ability to wait. It is not predictive of whether the waiting
KMP> would have done the same thing; it is utterly agnostic on the
KMP> issue. No implementation is required to stop. Implementations
KMP> are permitted to do tail call elimination.

Sure.

The question is the other way around: would it make sense for a
hypothetical future standard to require that a given piece of code is
*not* allowed to signal STORAGE-CONDITION or crash? In other words,
is there a technically correct way to *mandate* tail-call elimination
in a (hypothetical) standard of the quality of ANSI CL?

My claim is that there is; but this claim relies fundamentally on the
fact that STORAGE-CONDITION *is* different from looping.

In the rest of the discussion, you would appear to have dualised my
position: I am claiming that non-termination and signalling a
condition (or crashing) *are* different results...

>> (progn (ignore-errors (loop)) t)
>>
>> and
>>
>> (progn (ignore-errors (error "Foo!")) t)

KMP> This is an example of how the two are not alike.

...which you would appear to at least partly agree with.

Regards,

Juliusz


Kent M Pitman

Sep 29, 2001, 11:54:21 PM9/29/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> >> No. It rests on *distinguishing* between non-termination and
> >> the signalling of STORAGE-CONDITION.
>
> KMP> But why? STORAGE-CONDITION merely signals a lack of resource in
> KMP> the ability to wait. It is not predictive of whether the waiting
> KMP> would have done the same thing; it is utterly agnostic on the
> KMP> issue. No implementation is required to stop. Implementations
> KMP> are permitted to do tail call elimination.
>
> Sure.
>
> The question is the other way around: would it make sense for a
> hypothetical future standard to require that a given piece of code is
> *not* allowed to signal STORAGE-CONDITION or crash? In other words,
> is there a technically correct way to *mandate* tail-call elimination
> in a (hypothetical) standard of the quality of ANSI CL.

I think you're really asking me whether it is a valid thing to want
implementations to do, and I think "sure". But if your question is
literally, as worded, "should standards be the mechanism by which this
is achieved", I'm less sure.

I will say that I have in the past advocated that the standard would
be more useful if it told you things like "the worst case time in
doing an AREF will be O(1)" vs "the worst case time in doing ELT will
be O(n) for lists of length n or O(1) for arrays". There are places
where we've left it open and as a consequence an implementation can do
really stupid things that are baffling to users. SET-FILE-POSITION
was discussed recently; some people expect it to be O(1) and if it's
O(n) [for n=filepos], that's pretty scary. But the consensus was that
this should be left to an implementation.

To some extent, one might characterize my position as "there are
bigger fish to fry than tail recursion" or one might say "it's just
not consistent with the overall design to talk about memory use when
nothing really does". You might appeal back to the "infinity" vs "finite"
argument and say it was particularly important for the tail call case;
I don't know. I'd rather tail call elimination be under OPTIMIZE
declaration control and not a required feature of the language, so that
kind of issue would need to get factored in, too.

And then there's the practical matter that the current climate is that none
of this rises to the level of it being worth it to the community to change
the standard. Merely opening the standard for the fixing of ANY technical
item exposes the process to the risk of major politicking; it is not allowed
to just make a work item to make a small correction under ANSI rules, I
believe. I think you have to just open the spec for change and then other
things you didn't expect might creep in. Stability is best maintained by
not touching it, and there's nothing now that cries out as needing fixing.

I'd almost say that if something DID come up that required fixing, ANSI would
not be the vehicle of choice to do it.

The reason I think people wanted to leave speed issues entirely out of the
standard is that the market is really better at sorting things out. Let's
take the example of the MOP. It's not standard either, but consensus seems
to me to be that it's ok to use because most implementations do have it.
Other things, like tail call elimination, haven't even been voluntarily
adopted by implementations even though they are allowed to. If this came back
to ANSI and we operated as we have done in the past, the first thing that
would be asked when your hypothetical proposal to define this came to the
table would be "what is the current practice?" and the answer would be "no
one does it". And that would immediately be translated to statements like
"sounds dicey" and "sounds like it will impose cost on all vendors that they
don't feel a need for" and "sounds like there is no call for it".

The first stage of making the change is not to look to the standard unless
the standard forbids the change. The first stage is to get vendors to
experiment, and then to drive the wedge by asking that code that runs in
one vendor run in another. Best, I think, would be to operate under a switch
so that code that did this had to self-identify and other compilers could
minimally say "I don't support that option, your code will likely lose" rather
than quietly compiling the code and then losing on it at runtime...

> My claim is that there is; but this claim relies fundamentally on the
> fact that STORAGE-CONDITION *is* different from looping.
>
> In the rest of the discussion, you would appear to have dualised my
> position: I am claiming that non-termination and signalling a
> condition (or crashing) *are* different results...
>
> >> (progn (ignore-errors (loop)) t)
> >>
> >> and
> >>
> >> (progn (ignore-errors (error "Foo!")) t)
>
> KMP> This is an example of how the two are not alike.
>
> ...which you would appear to at least partly agree with.

Apparently. :-)

Lieven Marchand

Sep 30, 2001, 7:29:19 AM9/30/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> The question is the other way around: would it make sense for a
> hypothetical future standard to require that a given piece of code is
> *not* allowed to signal STORAGE-CONDITION or crash? In other words,
> is there a technically correct way to *mandate* tail-call elimination
> in a (hypothetical) standard of the quality of ANSI CL.

Defining tail call elimination is trickier than most people expect. In
fact, although Scheme has for a long time expressed the intention to
be "properly tail recursive", only the most recent report (R^5RS)
makes an attempt at defining this concept, and for the formal
definition it refers to a paper by Clinger(1998): "William D
Clinger. Proper tail recursion and space efficiency."

In the abstract of this article he says: "Proper tail recursion is not
the same as ad hoc tail call optimization in stack-based
languages. Proper tail recursion often precludes stack allocation of
variables, but yields a well-defined asymptotic space complexity that
can be relied upon by portable programs."

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Kent M Pitman

Sep 30, 2001, 2:11:52 PM9/30/01
to
Lieven Marchand <m...@wyrd.be> writes:

> Defining tail call elimination is trickier than most people expect. In
> fact, although Scheme has for a long time expressed the intention to
> be "properly tail recursive", only the most recent report (R^5RS)
> makes an attempt at defining this concept, and for the formal
> definition it refers to a paper by Clinger(1998): "William D

> Clinger. Proper tail recursion and space efficiency." [...]

I can hear some people saying "how could it be complicated?". For those
who are looking for at least one concrete example, I believe the following
kind of thing was raised as ambiguous even under the Scheme spec:

(defun factorial (x n)
  (if (zerop x) n
      (let ((result (factorial (- x 1) (* x n))))
        result)))

As should be obvious, this affects the whole issue of how the language
treats variables--i.e., is it just "allowed" or is it actively "required"
to fold uses of variables, because this in turn forces implementations to
do some code analysis that they might not be prepared to do, especially in
an interpreter, the major point of which is to have lazy application of
semantics by analyzing things only locally as they are encountered; this
would effectively push an interpreter closer to a compiler, almost calling
into question whether an interpreter was even allowed.

Of course, you could just say it wasn't a tail call, but some people are
apparently offended by that because they want (let ((x y)) x) and y to be
semantically the same thing. If introducing such bindings creates
unwanted additional effects (beyond just naming a quantity), it inhibits
the usefulness of code transformations--or, at least, makes such
transformations a lot harder to write.
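To make the ambiguity concrete, here is a sketch (hypothetical function
names, not code from the thread): the first version makes a syntactic tail
call, while the second returns the same value but routes it through a LET
binding, so an implementation must prove that RESULT is returned unchanged
before it may eliminate the call.

```lisp
;; FACTORIAL-PLAIN and FACTORIAL-LET are illustrative names.
(defun factorial-plain (x n)
  (if (zerop x)
      n
      (factorial-plain (- x 1) (* x n))))  ; syntactically in tail position

(defun factorial-let (x n)
  (if (zerop x)
      n
      (let ((result (factorial-let (- x 1) (* x n))))
        result)))                          ; a tail call only after analysis
```

An interpreter that evaluates forms one at a time sees the LET frame as
live while the inner call runs; only analysis of the whole body shows that
the binding is immediately returned.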

[Note that I often beat people up for wanting to apply certain optimizations
that are not part of the language and complaining that they can't, but this
is different: we're not in the context of "how do I optimize the fixed
language?" but rather "how should I define the language such that optimizations
I know of will be applicable?".]

Anyway, I hope this brief overview gives the sense that this issue of
tail call elimination isn't just a "no brainer" that can just be
dropped into the language spec trivially. It's much easier to have an
implementation maintainer take on the responsibility because that
maintainer is financially responsible for any impact on the usability
of their own implementation. For the language designers to make the
same choice, they have to be sensitive to revenue to implementations
they don't own, and that's a considerably bigger responsibility. ANSI
defines away such responsibility legally by allowing organizations to
freely join and vote, and by requiring even minority parties that are
economically impacted to be seriously treated. And the result of the vote
the last time around was that there was even majority concern ...

Francois-Rene Rideau

Sep 30, 2001, 6:25:27 PM9/30/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:
> The question is the other way around: would it make sense for a
> hypothetical future standard to require that a given piece of code is
> *not* allowed to signal STORAGE-CONDITION or crash? In other words,
> is there a technically correct way to *mandate* tail-call elimination
> in a (hypothetical) standard of the quality of ANSI CL.

Why not instead have an explicit (TAIL-CALL (F X)) special form,
or otherwise two special forms (TAIL-APPLY #'F (LIST X))
and (TAIL-FUNCALL #'F X) ? It would seem much more in line with
the rest of the Common LISP tradition: if it has a special,
different meaning, then have a special, different, syntax for it,
that signals the difference.

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
*EULA: By reading or responding to this message you agree that all my stated
opinions are correct.* "EULA" -- patent pending.

Rahul Jain

Sep 30, 2001, 9:23:46 PM9/30/01
to
Francois-Rene Rideau <fare+...@tunes.org> writes:

> Why not instead have an explicit (TAIL-CALL (F X)) special form,
> or otherwise two special forms (TAIL-APPLY #'F (LIST X))
> and (TAIL-FUNCALL #'F X) ? It would seem much more in line with
> the rest of the Common LISP tradition: if it has a special,
> different meaning, then have a special, different, syntax for it,
> that signals the difference.

Because we already have DO and LOOP for that.

I suppose someone could write macros that transformed these tail-apply
and tail-funcall -containing forms into DO or LOOP forms, but that
sounds to me like part of the process of writing a compiler. :)
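For what it's worth, the accumulator-style factorial from the head of the
thread can be written directly with DO or LOOP; this is a sketch of that
transformation (FACT-DO and FACT-LOOP are illustrative names):

```lisp
;; Iterative factorial: the loop variable plays the role of the
;; tail-recursive parameter, the accumulator the role of the extra argument.
(defun fact-do (n)
  (do ((i n (1- i))
       (acc 1 (* acc i)))
      ((<= i 1) acc)))

(defun fact-loop (n)
  (loop with acc = 1
        for i downfrom n above 1
        do (setf acc (* acc i))
        finally (return acc)))
```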

--
-> -/- - Rahul Jain - -\- <-
-> -\- http://linux.rice.edu/~rahul -=- mailto:rahul...@usa.net -/- <-
-> -/- "I never could get the hang of Thursdays." - HHGTTG by DNA -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
Version 11.423.999.220020101.23.50110101.042
(c)1996-2000, All rights reserved. Disclaimer available upon request.

Thomas F. Burdick

Oct 1, 2001, 1:22:50 AM10/1/01
to
Rahul Jain <rj...@rice.edu> writes:

> Francois-Rene Rideau <fare+...@tunes.org> writes:
>
> > Why not instead have an explicit (TAIL-CALL (F X)) special form,
> > or otherwise two special forms (TAIL-APPLY #'F (LIST X))
> > and (TAIL-FUNCALL #'F X) ? It would seem much more in line with
> > the rest of the Common LISP tradition: if it has a special,
> > different meaning, then have a special, different, syntax for it,
> > that signals the difference.
>
> Because we already have DO and LOOP for that.

(defun foo (n)
  (format t "There are ~D stack frames left~%" n))

(defun my-fun (stack-frames-left)
  (if (not (zerop stack-frames-left))
      (my-fun (1- stack-frames-left))
      (tail-call (foo stack-frames-left))))

How are DO and LOOP going to help here?

Rahul Jain

Oct 1, 2001, 2:25:30 AM10/1/01
to
t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> (defun foo (n)
>   (format t "There are ~D stack frames left~%" n))

> (defun my-fun (stack-frames-left)
>   (if (not (zerop stack-frames-left))
>       (my-fun (1- stack-frames-left))
>       (tail-call (foo stack-frames-left))))

> How are DO and LOOP going to help here?

Technically, they're not. You could use LABELS or something, but it's
immaterial. Why not just use a compiler that has the behavior you
want?

Kent M Pitman

Oct 1, 2001, 3:46:03 AM10/1/01
to
Rahul Jain <rj...@rice.edu> writes:

> t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > (defun foo (n)
> >   (format t "There are ~D stack frames left~%" n))
>
> > (defun my-fun (stack-frames-left)
> >   (if (not (zerop stack-frames-left))
> >       (my-fun (1- stack-frames-left))
> >       (tail-call (foo stack-frames-left))))
>
> > How are DO and LOOP going to help here?
>
> Technically, they're not. You could use LABELS or something, but it's
> immaterial. Why not just use a compiler that has the behavior you
> want?

I honestly don't understand what this TAIL-CALL is going to do.

The only TAIL-CALL operator I've ever been familiar with was in Teco
and it was used to allow you to "GO" into another macro without popping
the q-register state, so would correspond more to:

(defun foo1 () n)
(defun foo (n) (tail-call (foo1)))
(foo 3) => 3

I can't see a tail-call in CL meaning *that*, so I don't know what it would
mean. What if I wrote:

(defun factorial (x)
  (if (zerop x) 1
      (* x (tail-call (factorial (1- x))))))

A lot of this has to do with whether the implementation is recognizing that
it *could* tail-call the function. Some compilers might not know. Making
them have to be able to tell if it's valid might impose a lot of work on
them. The operator may be optional for the user, but it's not for the
implementation.

Bruce Hoult

Oct 1, 2001, 4:51:08 AM10/1/01
to
In article <sfwk7yf...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> I can't see a tail-call in CL meaning *that*, so I don't know what it
> would mean. What if I wrote:
>
> (defun factorial (x)
>   (if (zerop x) 1
>       (* x (tail-call (factorial (1- x))))))
>
> A lot of this has to do with whether the implementation is recognizing
> that it *could* tail-call the function. Some compilers might not
> know. Making them have to be able to tell if it's valid might
> impose a lot of work on them. The operator may be optional for
> the user, but it's not for the implementation.

You seem to have lost me. What would you be expecting tail-call to do
here?


The only thing that can be tail called here is the "*". More than that
-- it *will* be tail called by any implementation that does tail-call
optimization (if it doesn't optimize even further and inline it).

You can't tail-call "factorial" because the result of the enclosing
function isn't identical to the result of the call of "factorial".

-- Bruce

Tim Bradshaw

Oct 1, 2001, 6:03:56 AM10/1/01
to
* Bruce Hoult wrote:

> You seem to have lost me. What would you be expecting tail-call to do
> here?

> The only thing that can be tail called here is the "*". More than that
> -- it *will* be tail called by any implementation that does tail-call
> optimization (if it doesn't optimize even further and inline it).

> You can't tail-call "factorial" because the result of the enclosing
> function isn't identical to the result of the call of "factorial".

I think the issue is that, in order to compile correct code, the
compiler has to *know* that it can or can not tail call something, or
equivalently that it should or should not ignore the TAIL-CALL form.
And that's probably most of the work it takes to do tail-call
elimination, so why have the form?

I once had a love-affair with named-LET-style iteration (I'm all cured
now, I use LOOP like Real Men). CMUCL has an ITERATE form which does
this sort of thing, and I wrote a clone using the obvious expansion to
LABELS. The problem is when you use an implementation which doesn't do
tail-call elimination, of course. So I hacked away some more and I
made my macro convert these `recursive' calls to GOs. Except now it
breaks if they aren't tail-calls, so I did some yet further hack which
required the name of the local function to have the string "LOOP" in
it, and only in that case would iterative code be compiled. Very
shortly after this I realised that this was all just stupid, and that
the only way to do this right was to do the whole analysis and
actually compile GO iff the thing was a tail call, and that doing
that would require me to write a reasonable chunk of a CL compiler.
This was the moment when I realised that I should have used LOOP all
along.

--tim

Bruce Hoult

Oct 1, 2001, 6:56:46 AM10/1/01
to
In article <ey3snd3...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

> * Bruce Hoult wrote:
>
> > You seem to have lost me. What would you be expecting tail-call to do
> > here?
>
> > The only thing that can be tail called here is the "*". More than that
> > -- it *will* be tail called by any implementation that does tail-call
> > optimization (if it doesn't optimize even further and inline it).
>
> > You can't tail-call "factorial" because the result of the enclosing
> > function isn't identical to the result of the call of "factorial".
>
> I think the issue is that, in order to compile correct code, the
> compiler has to *know* that it can or can not tail call something, or
> equivalently that it should or should not ignore the TAIL-CALL form.
> And that's probably most of the work it takes to do tail-call
> elimination, so why have the form?

Ah, it was an argument against a special form? Fair enough, I agree.

So, what is the difficulty in deciding whether or not a function should
be tail called? It should be tail called If and Only If the result of
the current function is exactly the result of the function to be called.
That is, no calculations are to be done using the result of the function
being called and no environment cleanup is required (exception handlers,
destructors in languages which have them, etc...).
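A minimal sketch of that criterion (G, RELEASE-RESOURCE, and the enclosing
functions are all hypothetical names):

```lisp
;; BAZ returns exactly what G returns, so the call may become a jump.
(defun baz (x)
  (g x))                       ; tail call: result returned as-is

;; QUUX computes with G's result afterwards, so G is not in tail position.
(defun quux (x)
  (1+ (g x)))                  ; not a tail call

;; FROB must run cleanup after G returns, so G is not in tail position.
(defun frob (x)
  (unwind-protect (g x)
    (release-resource x)))
```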

It seems to me that the only problems that arise are:

1) the actual mechanics of doing the tail-call at the machine level.
This can require a certain amount of stack-munging to pop off the return
address and old arguments, push the new arguments and re-push the return
address, and then jump to the tail-called function. This is easier with
e.g. RISC CPUs which don't use the stack for function calls and have
arguments in registers and the return address in a special (or at least
fixed) register. It's virtually impossible with anything that uses a
caller-cleanup function calling sequence (most C compilers used to do
that by default in order to support varargs, newer ones tend to use
callee-cleanup when they can) unless the current function and the
function being tail-called have the same number of bytes of arguments.

2) optimization problems. Things which are prima facie tail calls are
easy, but Kent is appearing to indicate that users want compilers -- and
even interpreters -- to do sufficient analysis to recognize some things
that are NOT tail calls but that can be *optimized* into tail calls.

I guess Kent's example is a good one:

(define (factorial x n)
  (if (zero? x) n
      (let ((result (factorial (- x 1) (* x n))))
        result)))

He says some (undefined) people would expect the recursive call to
factorial to be a tail call even though it's not prima facie a tail call.

Expanding the definition of "let" this actually means...

(define (factorial x n)
  (if (zero? x) n
      ((lambda (result) result)
       (factorial (- x 1) (* x n)))))

... which is ...

(define (factorial x n)
  (if (zero? x) n
      (anon-lambda (factorial (- x 1) (* x n)))))

(define (anon-lambda result)
  result)

Either way it's pretty clear that you need some pretty serious compiler
work to implement "inlining" in order to allow the call to factorial to
be a tail-call. And even then it would *not* be a tail call if there
was another expression in the body of the lambda/let ... especially one
with side-effects such as something like (format "factorial %d = %d\n" x
result).

I don't have a problem with saying that this is too complicated a
situation to expect an interpreter or simple compiler to analyse and
that if something isn't prima facie a tail-call then you can't *count*
on it being optimized as one.

But you might get lucky with a sufficiently smart compiler.

-- Bruce

Kent M Pitman

Oct 1, 2001, 10:58:35 AM10/1/01
to
Bruce Hoult <br...@hoult.org> writes:

> So, what is the difficulty in deciding whether or not a function should
> be tail called? It should be tail called If and Only If the result of
> the current function is exactly the result of the function to be called.

And otherwise what? Signal an error? Do a regular call?

> That is, no calculations are to be done using the result of the function
> being called and no environment cleanup is required (exception handlers,
> destructors in languages which have them, etc...).

What is "the function being called"? Can I tail-call from
(defun foo (x) (foo #'(lambda () (tail-call (foo 3)))) (bar))
What if I have a with-foo macro that lets me write this as
(defun foo (x) (with-foo (tail-call (foo 3))) (bar))
Does the user have to be told I'm wrapping a lambda in there?
This all sounds messy and ad hoc.

My main point is not that it can't be done, but that the RIGHT way to
do it is for a vendor to do it, present it to users, let it get
pounded on for a while, and then report EXPERIENCE. Vendors are equipped
to sense whether risks like this are worthwhile and to track the response.
If no vendor wants to do it (and I included "free software" makers as
vendors here, since they aren't selling for money but they still are
putting their reputation on the line, and risking their email box will fill),
there's little point to bugging the community as a whole to do it.
Standards work is not a place to do design; it is a place to consolidate
design among vendors that have either all done the same thing or have made
divergent solutions to a common problem that need tweaking in order to get
in line.

> It seems to me that the only problems that arise are:
>
> 1) the actual mechanics of doing the tail-call at the machine level.
> This can require a certain amount of stack-munging to pop off the return
> address and old arguments, push the new arguments and re-push the return
> address, and then jump to the tail-called function. This is easier with
> e.g. RISC CPUs which don't use the stack for function calls and have
> arguments in registers and the return address in a special (or at least
> fixed) register. It's virtually impossible with anything that uses a
> caller-cleanup function calling sequence (most C compilers used to do
> that by default in order to support varargs, newer ones tend to use
> callee-cleanup when they can) unless the current function and the
> function being tail-called have the same number of bytes of arguments.
>
> 2) optimization problems. Things which are prima facie tail calls are
> easy, but Kent is appearing to indicate that users want compilers -- and
> even interpreters -- to do sufficient analysis to recognize some things
> that are NOT tail calls but that can be *optimized* into tail calls.

I think it's backwards of this--that people doing code-transformation
want to know that the mere introduction of a named binding doesn't
obscure the meaning. But it's equivalent, I guess. I didn't mean
this to be a claim that it can't be done--just that a lot of people
who want this want it BECAUSE it can be depended upon, and so you have
to specify what can be depended upon clearly enough that they can
reliably tell the difference between what they can and cannot depend
upon.

Thomas F. Burdick

Oct 1, 2001, 4:45:17 PM10/1/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Rahul Jain <rj...@rice.edu> writes:
>
> > t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> >
> > > (defun foo (n)
> > >   (format t "There are ~D stack frames left~%" n))
> >
> > > (defun my-fun (stack-frames-left)
> > >   (if (not (zerop stack-frames-left))
> > >       (my-fun (1- stack-frames-left))
> > >       (tail-call (foo stack-frames-left))))
> >
> > > How are DO and LOOP going to help here?
> >
> > Technically, they're not. You could use LABELS or something, but it's
> > immaterial.

Say what? I was responding to the following:

Rahul Jain <rj...@rice.edu> writes:

> Francois-Rene Rideau <fare+...@tunes.org> writes:
>
> > Why not instead have an explicit (TAIL-CALL (F X)) special form,
> > or otherwise two special forms (TAIL-APPLY #'F (LIST X))
> > and (TAIL-FUNCALL #'F X) ? It would seem much more in line with
> > the rest of the Common LISP tradition: if it has a special,
> > different meaning, then have a special, different, syntax for it,
> > that signals the difference.
>
> Because we already have DO and LOOP for that.
>

> I suppose someone could write macros that transformed these tail-apply
> and tail-funcall -containing forms into DO or LOOP forms, but that
> sounds to me like part of the process of writing a compiler. :)

It's certainly material, it's the whole point of your post! I was
just trying to point out there are cases where tail calls are not just
loops in disguise. It's certainly not a loop, and if FOO is a
generally useful function, I don't want to put it in a LABELS or FLET.

> Why not just use a compiler that has the behavior you want?

Isn't that what the TAIL-CALL, TAIL-FUNCALL, TAIL-APPLY operators were
proposed as a part of?

KMP> I honestly don't understand what this TAIL-CALL is going to do.

Well, I didn't propose it, and I wasn't particularly arguing for it
(just that it's not the same as DO), but I can actually think of a
nice use for it: request that the compiler eliminates that tail call.
Obviously it would need all the capabilities of a compiler that does
it more or less quietly. If it could not eliminate the call, it could
signal an error when compiling, and it would not eliminate tail calls
that weren't marked for elimination.

Actually, I kind of like that idea.

[...]


> I can't see a tail-call in CL meaning *that*, so I don't know what it would
> mean. What if I wrote:
>
> (defun factorial (x)
>   (if (zerop x) 1
>       (* x (tail-call (factorial (1- x))))))

The compiler could signal an error.

> A lot of this has to do with whether the implementation is recognizing that
> it *could* tail-call the function. Some compilers might not know. Making
> them have to be able to tell if it's valid might impose a lot of work on
> them. The operator may be optional for the user, but it's not for the
> implementation.

Actually, it could be optional for the compiler: if the compiler
writer didn't want to deal with the analysis, it could always signal
an error when it came across a TAIL-CALL form.

Bruce Hoult

Oct 1, 2001, 5:44:24 PM10/1/01
to
In article <sfwk7yf...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > So, what is the difficulty in deciding whether or not a function should
> > be tail called? It should be tail called If and Only If the result of
> > the current function is exactly the result of the function to be
> > called.
>
> And otherwise what? Signal an error? Do a regular call?

?? there is no otherwise.

A function call is either the last function call in the body of the
function or else it is not. If it's last then tail-call it (with a
"jump"), if it's not last then call it the regular way (with a "jump to
subroutine").

"Last" means the last expression evaluated in the function. This
means either being physically last, or else being the last expression in
an IF or COND (etc) that is physically last in the function.
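As a sketch of that rule (CLASSIFY and the HANDLE-* helpers are
hypothetical), each branch of a trailing COND is itself in tail position,
but only when the call's value is returned unchanged:

```lisp
(defun classify (x)
  (cond ((null x) (handle-empty))          ; tail position
        ((atom x) (list (handle-atom x)))  ; not a tail call: LIST wraps it
        (t (classify (cdr x)))))           ; tail position, physically last
```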


> > That is, no calculations are to be done using the result of the
> > function
> > being called and no environment cleanup is required (exception
> > handlers,
> > destructors in languages which have them, etc...).
>
> What is "the function being called"?

Any function being called from another function.


> Can I tail-call from
> (defun foo (x) (foo #'(lambda () (tail-call (foo 3)))) (bar))

I don't know what "#'" means. And why are you introducing this
"tail-call" construct again after I pointed out that it is totally
unnecessary?


> What if I have a with-foo macro that lets me write this as
> (defun foo (x) (with-foo (tail-call (foo 3))) (bar))
> Does the user have to be told I'm wrapping a lambda in there?
> This all sounds messy and ad hoc.

Any "with-XXX" is a red flag that there is cleanup code executed after
the user's code. This automatically makes tail-calls of the user's code
impossible, since it is not in tail position within the function.
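A hedged illustration of the "with-XXX" point, using the standard
WITH-OPEN-FILE macro (SLURP-STREAM is a made-up helper, not part of the
language):

```lisp
(defun slurp-file (pathname)
  (with-open-file (in pathname)
    (slurp-stream in)))   ; looks physically last, but WITH-OPEN-FILE
                          ; expands into an UNWIND-PROTECT that must
                          ; close IN after SLURP-STREAM returns, so
                          ; this call is NOT in tail position
```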

-- Bruce

Erik Naggum

Oct 1, 2001, 7:40:39 PM10/1/01
to
* Bruce Hoult

| A function call is either the last function call in the body of the
| function or else it is not. If it's last then tail-call it (with a
| "jump"), if it's not last then call it the regular way (with a "jump to
| subroutine").

But this is not quite as trivial as people seem to think. There are all
kinds of low-level cleanup issues that may not be visible or even known
to the programmer and which the compiler writer may need to be told about
(either through a formal specification of a requirement or by invoking a
local declaration) before it makes sense to allow for making a tail call
into a jump. Also note that the callee needs to believe that it was
called, unless the language has specified semantics to this effect, too.

| "Last" means either the last expression evaluated in the function. This
| means either being physically last, or else being the last expression in
| an IF or COND (etc) that is physically last in the function.

I think you might mean the form that evaluates to the value of the
function. It might, for instance, be possible to unwind special bindings
and unwind-protect forms in such a way that you could jump to a function
with it returning to the cleanup forms, so the notion of "last" might be
very confusing and counterproductive.

///

Francois-Rene Rideau

Oct 1, 2001, 7:44:36 PM10/1/01
to
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> It's certainly material, it's the whole point of your post! I was
> just trying to point out there are cases where tail calls are not just
> loops in disguise.
Indeed. Tail calling a global function is just not translatable into a
labels or flet or go without losing the dynamic semantics of being able
to modify, trace, etc., the function. Conversely, you mightn't like to
do tail call elimination when not needed, because of relative inefficiency
(considering available implementation technique) or of loss during
some kind of debugging or security audit.

Actually, so as to eliminate confusion, instead of a TAIL-CALL special form,
or of TAIL-APPLY and TAIL-FUNCALL special forms, I think it might be better
to extend the RETURN and RETURN-FROM forms with a keyword argument :TAIL-CALL
or to provide TAIL-RETURN and TAIL-RETURN-FROM as in

(defun mult-fact (x n)
  (declare (type unsigned-byte n))
  (if (< n 2) x
      (return (mult-fact (* x n) (1- n)) :tail-call t)))

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]

Laziness is mother of Intelligence. Father unknown.
-- Faré

Bruce Hoult

Oct 2, 2001, 12:11:46 AM10/2/01
to
In article <32109684...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Bruce Hoult
> | A function call is either the last function call in the body of the
> | function or else it is not. If it's last then tail-call it (with a
> | "jump"), if it's not last then call it the regular way (with a "jump to
> | subroutine").
>
> But this is not quite as trivial as people seem to think. There
> are all kinds of low-level cleanup issues that may not be visible
> or even known to the programmer

Yes and I covered several of these in my previous message:

<bruce-76CFD2....@news.akl.ihug.co.nz>


> I think you might mean the form that evaluates to the value of the
> function. It might, for instance, be possible to unwind special
> bindings and unwind-protect forms in such a way that you could jump
> to a function with it returning to the cleanup forms, so the notion
> of "last" might be very confusing and counterproductive.

I covered this in that previous message as well.

Anything subject to unwind protect or the like is prima facie NOT in
tail position. It is possible that a smart compiler may be able to
optimize things so that it can be moved into tail position, but that's
not something you could depend on.

It seems especially dangerous to move something out from the body of an
unwind-protect -- you'd have to prove first that it was impossible for
it to throw.

-- Bruce

Erik Naggum

Oct 2, 2001, 4:38:30 AM10/2/01
to
* Erik Naggum

| I think you might mean the form that evaluates to the value of the
| function. It might, for instance, be possible to unwind special
| bindings and unwind-protect forms in such a way that you could jump
| to a function with it returning to the cleanup forms, so the notion
| of "last" might be very confusing and counterproductive.

* Bruce Hoult


| I covered this in that previous message as well.
|
| Anything subject to unwind protect or the like is prima facie NOT in
| tail position. It is possible that a smart compiler may be able to
| optimize things so that it can be moved into tail position, but that's
| not something you could depend on.

If you had covered it, you would have known that this is not simply an
"optimization", but a design decision that would have to be taken if you
wanted true support for tail calls. Both special bindings and unwind-
protect are so common in Common Lisp programs that separating them from
the call stack would have been a _necessary_ design decision, in order
that tail calls actually produce useful results. Since you said you had
covered this, contrary to my usually flawless short-term memory (I study
SQL intensely right now, so there may be some irregular lossage :), I
looked for where you had, but I cannot find it.

| It seems especially dangerous to move something out from the body of an
| unwind-protect -- you'd have to prove first that it was impossible for it
| to throw.

This is utter nonsense. Please try to understand how a system that does
unwind forms _must_ set up these things. The body forms in practice _do_
return, throws or not, through a chain of unwind forms, including special
bindings, and whether you do this by setting up in-band catcher code or
through a separate stack of unwind-forms-to-be-executed, is immaterial.
If you do the latter on a separate stack, there is no difference between
a normal return from the code that sets it up and a return from somewhere
else: the setup has already been done and the execution is performed as
part of the return procedure.

If you decide to mandate tail-call optimization into jumps, this design
would have to be "required" of an implementation because refusing to do
tail calls simply in the face of special bindings would just be stupid --
having an elegant solution as it does. If you do not mandate it, you can
get away with only "simple" tail-call optimization, and you can get an
even more inexpensive tail-call technique when you get it, if you get it.

In essence, I believe "properly tail-recursive" is a design decision that
affects how the call stack must be designed (and continuations fall right
out of a heap-allocated call frame and chain design) and how you can deal
with special bindings and unwind-protect (it is no accident that Scheme
does not), while tail-call _optimization_ is possible even in languages
such as ANSI C. I therefore also believe that it is a _disservice_ to
the community to require "properly tail-recursive" because it requires
too many negative and unwanted properties of the implementation, but that
it is perfectly legitimate to ask the implementations for such a feature.
Amazingly, this is the actual situation: E.g., Allegro CL does a very
good job of optimizing tail calls to both self and non-self if you ask
for it in their proprietary way, but optimization _is_ a "proprietary"
process, anyway. I think some language designers fail to understand this
and believe in _mandated_ optimization, which only limit and restrict how
their languages may be implemented to the techniques known at the time
their favorite optimization theory was in vogue, or techniques developed
along that evolutionary branch. The less you prescribe in terms of how
to optimize, the more you can take advantage of free-ranging research in
optimization, both software and hardware. This is why C is _still_ best
optimized for PDP-11-style processors.

///

Bruce Hoult

Oct 2, 2001, 7:11:18 AM10/2/01
to
In article <32110007...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Erik Naggum


> | I think you might mean the form that evaluates to the value of the
> | function. It might, for instance, be possible to unwind special
> | bindings and unwind-protect forms in such a way that you could jump
> | to a function with it returning to the cleanup forms, so the notion
> | of "last" might be very confusing and counterproductive.
>
> * Bruce Hoult
> | I covered this in that previous message as well.
> |
> | Anything subject to unwind protect or the like is prima facie NOT in
> | tail position. It is possible that a smart compiler may be able to
> | optimize things so that it can be moved into tail position, but that's
> | not something you could depend on.
>
> If you had covered it, you would have known that this is not simply an
> "optimization", but a design decision that would have to be taken if
> you
> wanted true support for tail calls.

I don't know why, but you and Kent seem to want a lot more from "tail
call optimization" than I do.

Or, rather, you *don't* want it and so insist that if it can't work in
every conceivable situation then it isn't worth having at all.


> Both special bindings and unwind-
> protect are so common in Common Lisp programs that separating them from
> the call stack would have been a _necessary_ design decision, in order
> that tail calls actually produce useful results.

That would depend on what you regard as "useful" results.

So you can't do useful tail call elimination on things which are wrapped
inside "clean up after use" constucts? I agree and that's just fine by
me. Why do you think such constructs are a useful target for tail call
optimizations? Do you frequently write recursive functions or sets of
mutually-recursive functions in which each call invokes another layer of
cleanup? Can you give an example? In such situations it's always
seemed considerably cleaner to me to put the cleanup constructs at an
outer level and let some inner stuff recurse away merrily sans
intermediate cleanup.

One of the main reasons to support tail call optimization is so that you
can CPS-convert code in certain circumstances. Such code certainly
never has cleanup after the tail call -- the tail calls of the
continuation are *never* going to return. That's the point.

The same goes for loops of various sorts expressed as recursive
functions or sets of mutually-recursive functions. Think of them as a
different way of expressing iteration. Would you want to insert another
layer of cleanup code every time you executed the start of a loop? Why?
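A toy example of such a "loop" written as a pair of mutually
tail-calling functions (not from the thread; with tail-call elimination
each call becomes a jump and the pair runs in constant stack space):

```lisp
(defun even-natural-p (n)
  (if (zerop n)
      t
      (odd-natural-p (1- n))))   ; tail call

(defun odd-natural-p (n)
  (if (zerop n)
      nil
      (even-natural-p (1- n))))  ; tail call
```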


> | It seems especially dangerous to move something out from the body of an
> | unwind-protect -- you'd have to prove first that it was impossible for
> | it to throw.
>
> This is utter nonsense.

No it is not. You'd have to prove a bunch of other things as well, of
course, but that one alone is sufficient to show that it's very very
difficult to impossible to convert something inside an unwind-protect
into something that can be tail-called.


> Please try to understand how a system that does
> unwind forms _must_ set up these things. The body forms in practice
> _do_ return, throws or not, through a chain of unwind form

Indeed. Which is precisely why using a tail-call for things inside an
unwind-protect is a nonsense. The whole *point* of the tail call is
that it doesn't return and that you can therefore immediately throw away
(and make available for immediate reuse) all resources used by the
current function because the entire result of that function is now
described by the function you're about to call (together with the
arguments being passed to it). Anything that you have to come back and
do later (e.g. unwind) totally nullifies that.


> whether you do this by setting up in-band catcher code or
> through a separate stack of unwind-forms-to-be-executed, is immaterial.
> If you do the latter on a separate stack, there is no difference
> between
> a normal return from the code that sets it upand a return from
> somewhere
> else: the setup has already been done and the execution is performed as
> part of the return procedure.

This is true, but it's pretty pointless. In a recursive call that
contains an unwind-form you're going to grow your stack of
unwind-forms-to-be-executed without bound. Sure, you'll save a little
memory because the structure that you put on that stack each time is
probably a little smaller than the entire stack frame for the function
would be, but that's just a savings of a constant factor -- it doesn't
make a tail-recursive function (or set of mutually tail-calling
functions) execute in constant space, which is the purpose of the
optimization.


> I therefore also believe that it is a _disservice_ to
> the community to require "properly tail-recursive" because it requires
> too many negative and unwanted properties of the implementation

I certainly agree that it is undesirable to require what you appear to
mean by "properly tail-recursive", party because it would appear to be
impossible, and partly because I can't see any situation in which it
would be more useful than a much simpler definition of "properly
tail-recursive".

-- Bruce

Erik Naggum

Oct 2, 2001, 9:34:18 AM10/2/01
to
* Bruce Hoult

| I don't know why, but you and Kent seem to want a lot more from "tail
| call optimization" than I do.

Huh? I have argued for a differentiation of "properly tail-recursive"
and "tail call optimization". What I want from tail call optimization is
meant to be a severe restriction of what Scheme freaks want from their
undesirable "properly tail-recursive" concept.

| Or, rather, you *don't* want it and so insist that if it can't work in
| every conceivable situation then it isn't worth having at all.

Are you nuts? Where did you get this garbage? Of course I want it.
However, I do not want it mandated by the standard, because that has a
lot of negative aspects to it. Hell, if I want a standard that says
something so annoying as requiring a properly tail-recursive execution
model, I would use Scheme. I loathe Scheme. Get a grip, Bruce, quit
pretending that you are fighting an enemy you are not. Please realize
that there is a difference between a desire for a feature and a desire
for a law (or specification) that require others to provide a feature.
You seem to confuse the two, or at least not allow other people (Kent and
me?) to make a distinction, and get all flustered because you think we do
not want what we do not want there to be a law to _require_.

I believe the rest of your message rests on these false premises, too.

| No it is not.

Christ, Bruce, THINK! I have no patience with this crap. Go back and
read what I wrote. You have clearly not understood what I said, and now
you argue against something you have misunderstood. Quit that stupidity.

| You'd have to prove a bunch of other things as well, of course, but that
| one alone is sufficient to show that it's very very difficult to
| impossible to convert something inside an unwind-protect into something
| that can be tail-called.

This is just simply wrong. _Listen_, now, OK?

| Indeed. Which is precisely why using a tail-call for things inside an
| unwind-protect is a nonsense.

You obviously fail to think about this, and that annoys me. Consider
this simple fact: What a function returns to is _not_ under its control.
If a function returns to its caller, which does some work before it also
returns, it could as well return to a different function that does that
work. This is the core of the argument for tail-call optimzation, is it
not? Why does it _not_ apply to unwind forms? _Think_ about this, OK?

| The whole *point* of the tail call is that it doesn't return

Huh? What is this nonsense? On the contrary, the whole point is that it
returns to the caller of the caller in place of the caller, which has
_redirected_ it to return elsewhere from what _its_ caller said it should
return to. The whole *point* with tail calls is that there is a simple
semantic equivalence between call/return/return and jump/return, and that
this is completely independent of _both_ what you jump _and_ return to.
What I suggest is an _expansion_ of the mere implementation of tail-call
optimization such that you can call functions for their side-effect after
a call that returns a value simply by returning to them instead of
callling them. That is well within the conceptual framework of what you
seem to be clamoring for. Why do you not get this simple application of
your concept of tail-call optimization?

| Anything that you have to come back and do later (e.g. unwind) totally
| nullifies that.

_Wrong_. _THINK_ about this, will you? _How_ you arrange for something
to be done after you have returned is ORTHOGONAL to arranging to do it.

| This is true, but it's pretty pointless. In a recursive call that
| contains an unwind-form you're going to grow your stack of
| unwind-forms-to-be-executed without bound.

So you think tail-call optimization is _only_ for recursion? What _is_
this? Bruce, you _are_ not the idiot you seem to be insisting on showing
me right now. What has gotten into you? The Scheme freaks want a
properly tail recursive execution model for a number of reasons. Tail
call optimization is a much weaker concept, and it will actually save a
lot of stack space -- namely the stack space consumed by arguments and
call frames -- _even_ if you have special bindings and unwind-protect
forms. _Those_ are the worth-while savings. And what is this silly
thing about "without bound"? There will be no more and no less bound on
the recursion regardless of tail-call optimization! If you can save on
_some_ resources, but not all, why are _you_ opposed to that? Was it not
_you_ who accused Kent and me of the following just a few lines up?

| Or, rather, you *don't* want it and so insist that if it can't work in
| every conceivable situation then it isn't worth having at all.

Now _you_ are arguing this silly position which nobody else holds. Quit
the stupidity and _think_ about what you are saying, damn it!

| Sure, you'll save a little memory because the structure that you put on
| that stack each time is probably a little smaller than the entire stack
| frame for the function would be, but that's just a savings of a constant
| factor -- it doesn't make a tail-recursive function (or set of mutually
| tail-calling functions) execute in constant space, which is the purpose
| of the optimization.

Why do you restrict an optimization to such a purpose? Why do you need
there to be "constant space" when there is a clear and present benefit
from not over-using another resource? And why do you drag in the _size_
of call frames? Christ, Bruce, that is _so_ beside the point. Call
frames can be *large* these days. An unwind-protect/special stack can be
very small. Now _you_ want to discard the stack space savings because
you will still need special/unwind-protect space? What _is_ this?

Whatever _are_ you defending, Bruce Hoult? Whatever it is, I can assure
you that it is _not_ under attack. Just _think_ about the arguments you
get and you might learn something that you are currently insisting on
_not_ seeing.

///

Kent M Pitman

Oct 2, 2001, 10:39:27 AM10/2/01
to
Bruce Hoult <br...@hoult.org> writes:

> I don't know why, but you and Kent seem to want a lot more from "tail
> call optimization" than I do.
>
> Or, rather, you *don't* want it and so insist that if it can't work in
> every conceivable situation then it isn't worth having at all.

Bruce, I'm just engaging in conversation in my free time here. I'm
tossing out issues I'm aware of for the sake of mixing the
conversation. I'm not waging a campaign.

The fact is that the language isn't changing any time soon.

But the fact is also that when the language does change, there are certain
"reasonable care" requirements that implicitly apply. Users often don't
mind if all kinds of issues are unresolved--most users only use 90% of the
language anyway, and would kill for us to offer only 90% of a certain
functionality and toss the rest. Like XML without DTD's for example.
A user can write a library that only implements 90% of the spec and it can
be very popular because they have to answer to no one for not doing the rest.
"Fine, don't use it if you don't like it" is all they have to say. A vendor,
on the other hand, has a higher degree of care required because he might
advertise an XML implementation and people would take them "more seriously"
and be more surprised if it wasn't complete, even if not advertised as such.
Or they might want it to be more complete next release because they assume
they're buying into a process, whereas user libraries are often one-off
take it or leave it. Language standards are even more complex because the
designers are NOT gating the introduction of extensions which can already
mostly be done by vendors but RATHER are imposing requirements on vendors.

In spite of how it seems to users, the situation is more like in the US
where we have the all-encompassing Federal government and a local State
government both affecting individuals, and the Federal government is
often creating what are called "unfunded state mandates", where they
require the State government to do something but don't say where the
money is supposed to come from or how the program is to be implemented
in finite resources. There is, or should be at least, a large burden on
the people who impose such burdens to really show that it can be done.
So in a sense, and as a result, the Federal government's key "design
job" should be to be skeptical about the inclusion of anything,
because it's so hard to tell that it will be universally a good idea.
One state may love it and already implement it, while another state may
be full of people who left that state to escape it!

I hope you can see that this analogy is true of language design as well.
It works better on the first round, when no one has anything and you
impose a bunch of things because you're in "attract mode" and you hope
people will gravitate to your language. But then when they have done it
you have a more complex problem because people have already adhered
"geographically" to vendors they like and now if you mix the soup, you
are not just affecting your "attraction pitch" but you are potentially
driving people away from something they have grown to like. People don't
like having to up and leave a country they've settled just because some
big government suddenly decides the local laws aren't good enough.

As such, I see my role as a language designer is to keep track of lots
of facts and ideas and preconceptions that people have and regurgitate
them at the right time so that a later discussion on an idea is not left
with the simple feeling that just because the people who argued about
it before are out of the room, it has no opposition. It is sometimes,
therefore, the case that I will raise an issue in a slightly ill-formed
way because I can't quite remember how it was best expressed, whether
it was me or someone else who expressed it. I'd rather you beat me up
for being vague or confused than have someone else beat me up for having
forgotten they cared about some issue at all.

And, in general, whenever I have marked an issue as "complex and
controversial" in my mind, I try not to let anyone get away with
convincing me it's "simple". It might be simple. And I might
eventually be convinced such. But I have an obligation because of how
I see my role to be considerably more obstinate than others in the
same discussion might be unless I'm sure that all parties with the
relevant concerns are properly represented by competent members of the
discussion.

So please don't oversimplify my position as to say "oh, you just don't
want this". The fact is that I know there are a lot of people who think
that this is not a no-brainer, and I regard standards-making as something
that has to involve consensus-building, so the solutions and techniques
I seek are based on the standard ways of doing that.

In the past, "lexical scoping", "CLOS", the "CL condition system" and
even (I kid you not) "base 10 numbers" (it was base 8 in Maclisp, and
it was a BIG fight to get it changed to base 10 by default) were
controversial. Sometimes people worry over things that are either
nothing to worry about at all (like "lexical scoping") or things that
even though controversial are too big a win to ignore (like "CLOS").
Whatever the case, starting small, and demonstrating "no ill effects"
in a small community is a better way to push forward. For lexical
scoping, "starting small" meant pointing to Algol and enough people
bought in at the start of CL that we took the "big risk". For CLOS,
"starting small" meant a portable implementation that people could
try; ditto for the CL condition system. (Base 10 was just achieved by
political coup, with no more demonstration of workableness than "all
human history to date", but I'm proud to say I supported it all along.
:-)

My obstinance here is more of the form "wrong community to try to convince
first" and "here are some issues to think about" than "this is why you
will fail". I'm only trying to say "start small" and "think about these
things". Beyond that, I'm willing to sit back and wait and watch.

> > Both special bindings and unwind-
> > protect are so common in Common Lisp programs that separating them from
> > the call stack would have been a _necessary_ design decision, in order
> > that tail calls actually produce useful results.
>
> That would depend on what you regard as "useful" results.

Language designers are obliged to think of something being a concern if
anyone might not think something useful. All that is being raised in this
context is "lack of specificity". You can't just say "tail recursion is
good" and expect people to join on. Make a clear spec of what you're
after and you'll win at least some converts just by making something clear
that people can inspect. They can't inspect your intent if you have no
spec--it enables you to be (accidentally or intentionally, I'll assume
accidentally but others may not) slippery, and that won't help you win
the debate.

> So you can't do useful tail call elimination on things which are wrapped
> inside "clean up after use" constucts? I agree and that's just fine by
> me.

Users say this all the time. They'd gladly sacrifice any feature they
don't use in order to get the part they do. But your real debate error
here is making this "exception" part of your debate about why to accept
this notion rather than making it part of the definition of the notion
you want to debate. You're effectively, by taking this posture, asking
people to debate multiple possible specs. Yes, picking a particular
semantics is riskier -- you want anything in this area and nailing down
one semantics may lose you the debate when you were entitled to win
in another. But you can always have a second debate from what you learn
on this one. Don't have the debate on all at once or people will think
it imprecise and you'll lose anyway.

> Why do you think such constructs are a useful target for tail call
> optimizations?

Because they might be and because you have not specified otherwise.
We're debating something too fuzzy and in the wrong forum.

> Do you frequently write recursive functions or sets of
> mutually-recursive functions in which each call invokes another layer of
> cleanup? Can you give an example? In such situations it's always
> seemed considerably cleaner to me to put the cleanup constructs at an
> outer level and let some inner stuff recurse away merrily sans
> intermediate cleanup.

The burden is not on them, it's on you. You've asked people to consider
an abstract. If you'd made it concrete, they might give you an example.
But it's worth no one's time to make an example if you can change the
semantics out from underneath them. They need to construct the example
based on fixed rules, not construct an example that can be debunked by
changing the rules.



> One of the main reasons to support tail call optimization is so that you
> can CPS-convert code in certain circumstances.

I could be mistaken, but I don't think you're losing this argument for
lack of positives. I think you're losing it for lack of helping people
dismiss their negatives. People may or may not want to CPS convert their
code; some may not realize they want to use tools that want this. But it
doesn't matter. This is the wrong discussion focus, IMO.

> Such code certainly
> never has cleanup after the tail call -- the tail calls of the
> continuation are *never* going to return. That's the point.

One of the reasons for tail call elimination is that people can rely
on it. Lisp has macros that might obscure that, and they need to have
the sense that the "guarantee" it offers will be meaningful. This is
not an irrelevant point.

Consider the US "star wars" missile defense system. No one who opposes
it is opposing the argument "you want to get rid of incoming missiles".
They're opposing it because they're not sure it can really guarantee
that claim; to change their mind, you're wasting your breath to expound
on the virtues. You'd better start enumerating the negatives.

Whatever you might think here, you're really addressing the negative
whose name is "I'm not sure I can see when this is going to fail".
You will not address this by saying "I am sure." You must address it by
(a) making a fixed set of rules and (b) under that fixed set of rules,
calling for examples of potential problems and then debugging the
concerns in concrete fashion. Or you will get nowhere.

[some stuff omitted]


> I certainly agree that it is undesirable to require what you appear to
> mean by "properly tail-recursive", party because it would appear to be
> impossible, and partly because I can't see any situation in which it
> would be more useful than a much simpler definition of "properly
> tail-recursive".


But the burden is not on the people not asking for the facility to define
the facility. You have not defined what YOU mean in formal terms, and yet
you ask those who have no sense of needing or wanting it to first define
it to your satisfaction and THEN debunk it. That's more burden than they
ought be obliged to take on. Rather like asking the Amish community in
America to sit on US trade boards so they can define clearly the technologies
they don't want to accept into their lifestyle. It isn't their burden to
do that. They are entitled to the luxury of sitting back, waiting for
the design to happen, and then deciding. Yes, that involves more up-front
risk on your part, but you are the one who wants to move ahead, not they.

Tim Bradshaw

Oct 2, 2001, 11:03:21 AM10/2/01
to
Erik Naggum <er...@naggum.net> writes:

>
> Are you nuts? Where did you get this garbage? Of course I want it.
> However, I do not want it mandated by the standard, because that has a
> lot of negative aspects to it.

I sometimes think that some of these issues would be clearer if
something slightly less emotive than tail-call elimination was at
stake.

Imagine, for instance if the CL language standard mandated that at
certain levels of optimisation, bounds checking should not be done for
arrays. Is this reasonable? No, it's obviously absurd, since it
makes all sorts of hairy requirements on implementations. Worse than
this, it's absurd for implementations which manage to transform
something like this:

(dotimes (i 10000)
  (setf (aref a i) 0))

into

(progn
  (when (< (length a) 10000)
    (error ...))
  (dotimes (i 10000)
    (setf (system::unchecked-aref a i) 0)))

Such an implementation has already essentially removed the
bounds-checking overhead: should it be required to remove the
remaining check, and if so why? (perhaps to make C programmers more
comfortable by introducing subtle and obscure vulnerabilities for no
benefit).

It's also absurd for implementations which run on hardware that
supports bounds checking.

Finally, it's absurd because attempting to overcome problems like the
above *in the standard* would result in a vast document which
attempted to cover all sorts of obscure possible implementation
quirks. Even if such a standard could be written, it would very
likely become obsolete as new implementation techniques were
discovered.

So a standard should not mandate this. At best it should hint that it
would be a good idea, but language like that has little place in a
standard!

But this is obviously completely different than saying that
implementations should not remove bounds checks at certain levels of
optimisation. Of course they should be allowed to do that, and good
implementations probably will (and, I hope, some will do this raising
of checks out of loops to get the best of both worlds).

One thing a standard *could* do is to mandate that something
*equivalent* to bounds checking *must* be done at certain levels of
optimisation. This is much easier to assert in a standard because it
merely has to say that `an error should be signaled if ...' in the
terminology of the CL standard (I don't, off the top of my head, know
if the CL standard does in fact mandate this).

The same, I think is true for tail-call elimination. For the standard
to mandate that it must happen in a way that has meaning would require
an enormous amount of hair in the standard, some small amount of which
Erik has demonstrated. This hair would probably become obsolete in a
rather short time. Far better for the standard to remain silent on
the issue and leave it up to implementors.

--tim


Juliusz Chroboczek

unread,
Oct 7, 2001, 10:07:43 AM10/7/01
to
>> Not a very interesting semantics.

KMP> We're busy building programs that are interesting, not semantics
KMP> that are interesting. Our dull semantics has not seemed to impede
KMP> that.

Sorry, I was arguing that this is *not* the semantics of Common Lisp.

KMP> It is specifically left to the market to sort out ...

With all due respect, Kent, you overuse this argument. Pushing it to
its logical extreme, we have no use for a carefully drafted Common
Lisp definition, because the market will choose right anyhow.

When I want something outside the standard implemented in my CL
system, I either go and speak with my vendor, or implement it myself.
Notwithstanding that possibility, I am grateful for the existence of a
rigorously drafted definition of Common Lisp that allows me to write
reliable programs reasonably independently of the implementation.
Alas, guarantees about tail-call elimination are not included in that
definition; I would like to see such guarantees included in a future
version. (Whether my vendor does
actually perform tail-call elimination or not is a completely distinct
argument).

At the same time, I am conscious of the fact that the CL community
(including myself) is not ready right now for a new revision of the
standard. This doesn't preclude people from making notes about what
users would want to see in a future version; tail-call elimination is
high on my list of desired features.

(Another thing I would like to see are stronger guarantees about what
conditions are signalled in exceptional situations; again, I do not
have problems with my vendor here, as I can always test my
implementation's behaviour, but with the standard itself.)

Juliusz

Juliusz Chroboczek

unread,
Oct 7, 2001, 10:24:01 AM10/7/01
to
FR> Why not instead have an explicit (TAIL-CALL (F X)) special form,
FR> or otherwise two special forms (TAIL-APPLY #'F (LIST X)) and
FR> (TAIL-FUNCALL #'F X) ?

Hmm, interesting idea. Seems Forth-like to me, though, rather than
Lisp-like. I'm not sure whether I like it, but perhaps I've simply
had too much exposure to ML.

Note, by the way, that including such a function in CL naively would
break certain structural invariants. Changing your proposed syntax
slightly, the following

(declaim (special *foo*))

(defun break-the-system ()
  (let ((*foo* 47))
    (tail-call #'identity nil)))

would cause the virtual machine to push a dynamic environment and jump
into IDENTITY without leaving the dynamic stack unwinding code in the
continuation. A similar example could be built with UNWIND-PROTECT.

Of course, such cases can be detected statically (i.e. at compile
time) and could be forbidden, but requiring that an implementation
perform such checks imposes a similar burden to requiring automatic
tail-call elimination.

Juliusz

Juliusz Chroboczek

unread,
Oct 7, 2001, 10:45:34 AM10/7/01
to
EN> Both special bindings and unwind-protect are so common in
EN> Common Lisp programs that separating them from the call stack
EN> would have been a _necessary_ design decision, in order that
EN> tail calls actually produce useful results.

No, Erik, that would not be reasonable. It would merely replace space
used on the call stack to space used on the dynamic bindings stack or
the unwind stack.

I think the correct approach here would be to ask the users who
actually do require tail-call elimination for their programming style
to specify a minimal set of conditions under which tail-call
elimination should be performed. In my personal case, I need the
following:

- tail-call elimination between source files and (less often) between
packages;
- tail-call elimination within and to a CLOS generic function;
- eliminating a tail FUNCALL to a funarg, at least one with a null
lexical environment.

The obvious case -- tail call to self -- is not interesting, as I
usually find it more natural to replace it with one of the standard
iteration constructs.
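
For concreteness, the trade-off in that last case can be sketched as
follows (hypothetical code, not from the original post):

```lisp
;; Tail call to self -- runs in constant space only if the
;; implementation merges the call:
(defun count-down (n)
  (if (zerop n)
      'done
      (count-down (1- n))))

;; The same computation with a standard iteration construct, which
;; makes no assumption about tail-call merging:
(defun count-down-loop (n)
  (do ((i n (1- i)))
      ((zerop i) 'done)))
```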

Regards,

Juliusz

P.S. Sympathy for your SQL work. I'm studying XSLT right now; any
chance for a rant on the subject?

Juliusz Chroboczek

unread,
Oct 7, 2001, 10:59:13 AM10/7/01
to
EN> Hell, if I want a standard that says something so annoying as
EN> requiring a properly tail-recursive execution model, I would use
EN> Scheme.

I, on the other hand, want to use CL (which I happen to find more
comfortable than Scheme) and still require that tail-call elimination
should be performed in some circumstances.

Tail-call elimination, by the way, is not specific to Scheme, and is
mandated for example by ML. (Both major dialects, I believe.)

EN> So you think tail-call optimization is _only_ for recursion?
EN> What _is_ this?

It is only with recursion that tail-call elimination makes a
qualitative (as opposed to merely quantitative) difference.

Without recursion, tail-call elimination can provide you with a
(possibly large) *constant* space saving. While this may be worth
having, it is not something that belongs in a language definition.

With recursion (possibly indirect), tail-call elimination will, under
the right circumstances, convert unbounded space usage into constant
space. With some programming techniques, this means turning a
guaranteed crash into a reliable program.
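
A sketch of that qualitative difference (hypothetical example): a pair
of mutually tail-recursive functions implementing a state machine.
Without tail-call elimination every transition consumes a stack frame,
so a sufficiently long input is a guaranteed crash; with it, the
traversal runs in constant space.

```lisp
;; Two states of an automaton as mutually tail-recursive functions.
(defun state-a (input)
  (if (null input)
      :ended-in-a
      (state-b (rest input))))   ; tail call to the other state

(defun state-b (input)
  (if (null input)
      :ended-in-b
      (state-a (rest input))))   ; tail call back
```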

Regards,

Juliusz

Juliusz Chroboczek

unread,
Oct 7, 2001, 11:02:50 AM10/7/01
to
JS> If you run your (loop) example on a laptop running on batteries
JS> you will see in not so long time that even (loop) is doomed to
JS> "terminate" because some resource is exhausted.

I see. It's actually useless to try to write reliable programs,
because my machine might very well run out of batteries.

Juliusz

Erik Naggum

unread,
Oct 8, 2001, 11:12:34 AM10/8/01
to
* Juliusz Chroboczek

| With all due respect, Kent, you overuse this argument. Pushing it to
| its logical extreme, we have no use for a carefully drafted Common
| Lisp definition, because the market will choose right anyhow.

Since the argument _only_ makes sense within the framework of a standard,
I fail to see how "logical" it could possibly be to drop the context and
still believe you have the same argument.

| Alas, the said definition does not include guarantees about tail-call
| elimination are not included in that definition; I would like to see such
| guarantees included in a future version.

You will not see such guarantees in any future version. If you want
this, there is Scheme. So much of the rest of what you argue for are
signs of a Scheme freak that I also fail to see why you do not just work
happily within the Scheme community instead of unhappily in the Common
Lisp community? What does it do for you to whine about tail-calls and
never get it?

| (Another thing I would like to see are stronger guarantees about what
| conditions are signalled in exceptional situations; again, I do not have
| problems with my vendor here, as I can always test my implementation's
| behaviour, but with the standard itself.)

What do these stronger guarantees actually entail? Is it the ability to
sue vendors if they decide to claim to purport to conform to a voluntary
standard? If so, nobody will volunteer to conform to it. We already
have vendors who think it is OK to claim to purport to something they
also display a strong disdain for when push comes to shove, and which
they undermine when they see an opportunity to do so, and the only thing
we can do with them is expose their destructiveness and applaud their
constructiveness, but the whole attitude towards the standard is that it
does not matter if you violate parts of it in ways that you cannot choose
to ignore. In my view, there should have been viable ways to solve these
"incompatibility" problems so they could co-exist with standard ways, or
even be _within_ the standard, but this is indeed the path less travelled.

///
--
My hero, George W. Bush, has taught me how to deal with people. "Make no
mistake", he has said about 2500 times in the past three weeks, and those
who make mistakes now feel his infinite wrath, or was that enduring care?

Kent M Pitman

unread,
Oct 8, 2001, 11:36:10 AM10/8/01
to
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> KMP> It is specifically left to the market to sort out ...
>
> With all due respect, Kent, you overuse this argument.

This is not an argument. It is, to the best of my understanding, a fact.

Two major points follow. If you find yourself hopelessly mired in the
first, skip to the second (there are separator markers) so you don't
lose it.

- - - - -

My personal technical position has generally been to add a great deal
more to the language than got in. In CLHS, I've published the cleanup
issues that got accepted; you should see the heaps of things I have
proposed that did NOT get in, and you should hear the arguments
against them.

My position on this issue has been that tail recursion should be
something people could enable on a lexical basis with declarations. I
don't plan to use it, but I understand that others might. As long as
it's not invasive, I'm all for it.

The truth is that I spent a lot of time taking ideas from the Scheme
community and trying to get them into CL to narrow the gratuitous
distance between the two languages. But that hasn't been seen as
a priority.

The reasons for rejection vary. But two of the most common were "there's
no current practice" (i.e., let a vendor implement it and we'll see how
successful it was in practice) or "this might be painful for some vendor
in a way we don't understand" (i.e., let a vendor implement it and prove
to us that it doesn't get us in trouble). The other most common reasons
for rejecting otherwise well-formed proposals were "it's too late, we
feature-froze in 1988" or "gee, the spec is getting kind of big, do we
really need this?" or "if you require that, it will cost me a lot of money
to implement, can't we leave it to vendors to decide if it's a priority?".

I try to fairly represent the reasons for various things getting
rejected, to the best of my recollection. Steve Haflich and I were
there for most of it. Barry was there for quite a bit of it, though
not quite as much. Barry and Steve and I are pretty quick to jump on
each other if there's a mis-remembered element, and I think that keeps
the discussion pretty fair and as true to the record as one can be.

But the fact is that it was a major theme of the design to leave
vendors a fair amount of autonomy, especially in the transition from
CLTL to ANSI CL both because the first book had tried to overdefine
some things it shouldn't have, and also because we were already doing
a fair amount of new design with the condition system, clos, etc. and
we didn't want to take more risks than we had to. One of Larry
Masinter's invaluable contributions to the standard was the creation
of the cleanup form and the requirement to talk about current
practice, among other things. So the absence of current practice was
conspicuous both for the possibility that it might not be something
any customer was clamoring for (as judged by "enough to make a vendor
fear it was influencing sales", not as judged by "number of newsgroup
posts") and/or it might not be something well enough understood to know
the exact right way to do it or whether it had any ill effects.

This issue WAS discussed. This issue may or may not have had a
specific proposal--I'm busy bringing my personal notes back online
from backup tape and I may soon be able to answer such questions more
reliably. But this issue had no action taken for exactly the reason I
cited.

So the frequency with which I use that argument is really irrelevant.
It's like asking why things fall when you drop them and being annoyed
that "gravity" is used too often to explain the answer. Truth is truth.

- - - - -

Then again, maybe your real gripe is that "leave it to vendors" is too
vendor-centric an answer. But to understand why THAT happened, you have
only to note that users were (and are) entitled to be (X3)J13 members,
but over the period of the design, nearly all user members decided it
wasn't worth the economic bother (that is, they decided to stop attending
and trust who was left to vote in their favor). That left only vendors.
And vendors act in their own interests, so of COURSE the votes were
vendor-centric. When you voluntarily give up your right to vote, you
deserve what you get.

Even as a vendor employee, and at some risk of annoying my employers,
I regarded this as a war between users and vendors and was angry at
the users for dropping out and apparently expecting vendors to vote
all the right ways. I spoke out at public forums about this,
blatantly berating users at ALU meetings saying "why don't you stand
up to these vendors and operate as a block to get what you want?",
"why don't you show up and outvote these vendors?". Honestly, I was
surprised I was not directly fired for my remarks, which in some
companies would have constituted insubordination and would have
merited immediate dismissal. It is to the credit of both Symbolics
and Harlequin that they quietly permitted me this degree of public
disagreement with their own policies without being angry at me. But
there was a limit to what I could do, and the users didn't exactly
start streaming into meetings. It was all I could do to get the CLIM
folks to show up at one meeting to argue for compiler optimizers when
vendors were of a mind not to include those in the language because
they each already had their own way of doing things and were not
motivated to impose a burden on themselves to add yet another facility
just for the purpose of satisfying people who didn't care to represent
their own concerns.

We went from about 60 meeting attendees (about 15-20 organizations plus some
alternates, more users than vendors) at the outset to about 8-10 meeting
attendees at the end, representing about 5 or 6 organizations, almost all
vendors. I think Kim Barrett and Sandra Loosemore, key contributors to the
process, might have been individual members. That change affected things.

In the end, fairness does not exist. ANSI "defines" fairness. Fairness
under their definition means "you had the right to be there, and you
weren't there, so your opinion doesn't matter". That may not be your
personal definition of fair, but it is sufficient to overcome the anti-trust
issues which are the reasons for the creation of ANSI in the first place.
(Ordinarily, companies cannot conspire to control an industry, but allowing
others to freely enter and vote nullifies that concern; if no one wants to
come, the vendors aren't required to make them. This detail transforms an
involuntary control situation to a voluntary one.)

Coby Beck

unread,
Oct 8, 2001, 1:39:32 PM10/8/01
to

"Erik Naggum" <er...@naggum.net> wrote in message
news:32115427...@naggum.net...
> * Juliusz Chroboczek
[reinserted]

KMP> It is specifically left to the market to sort out ...
> |
> | With all due respect, Kent, you overuse this argument. Pushing it to
> | its logical extreme, we have no use for a carefully drafted Common
> | Lisp definition, because the market will choose right anyhow.
>
> Since the argument _only_ makes sense within the framework of a standard,
> I fail to see how "logical" it could possibly be to drop the context and
> still believe you have the same argument.
>

Excellent point, I knew something was wrong with that argument, but couldn't
pinpoint it. Letting the community come to a consensus and formalizing it is
not the same as not formalizing anything, obviously.

I don't think characterizing that as an "overused argument" is even close to
accurate. It is a position Kent has stated was used in making design decisions
and as such can only be stated and explained, it is not an argument to be
debunked or proven. We are free to disagree and/or discuss the merits of such
a position, but you can't refute it logically.

It seems to me to be a very reasonable and productive approach.

As an onlooker to this thread, unfamiliar with the theory of tail call
elimination, I have yet to see any convincing efforts to:
a. define what it is *exactly* pro-tail callers are proposing is added to the
standard.
b. answer any of the specific questions/objections/complications from those
opposed.

> | Alas, the said definition does not include guarantees about tail-call
> | elimination are not included in that definition; I would like to see such
> | guarantees included in a future version.
>
> You will not see such guarantees in any future version. If you want
> this, there is Scheme. So much of the rest of what you argue for are
> signs of a Scheme freak that I also fail to see why you do not just work
> happily within the Scheme community instead of unhappily in the Common
> Lisp community? What does it do for you to whine about tail-calls and
> never get it?
>

This smacks so much of an "America, love it or leave it" kind of argument.

Not being familiar with Scheme outside of discussions in this forum, I am
blissfully unaware of the signs that give away a "Scheme Freak" and in my
ignorance I still find their arguments educational. So there is at least some
good that comes out of "unhappy" lispers "whining" away.....

Coby

--
(remove #\space "coby . beck @ opentechgroup . com")


Thomas F. Burdick

unread,
Oct 8, 2001, 5:38:56 PM10/8/01
to
Erik Naggum <er...@naggum.net> writes:

> * Juliusz Chroboczek

> | Alas, the said definition does not include guarantees about tail-call
> | elimination are not included in that definition; I would like to see such
> | guarantees included in a future version.
>
> You will not see such guarantees in any future version. If you want
> this, there is Scheme. So much of the rest of what you argue for are
> signs of a Scheme freak that I also fail to see why you do not just work
> happily within the Scheme community instead of unhappily in the Common
> Lisp community? What does it do for you to whine about tail-calls and
> never get it?

Oh, come on. Imagine it's pre-ANSI CL and he's asking for an object
system to be included in the standard. Would you accuse him of being
a SmallTalk freak who should go is-a and refactor his way back to
where he belongs? CL is a multi-paradigm language, and tail-call
elimination is useful for more than just CPS and "I refuse to
iterate"-style. It would be very nice to have some *portable* way of
ensuring that (1) certain important tail-calls are eliminated; or (2)
an error is raised. This is obviously something the vendors thought
their user base wanted: CMUCL, CLISP, ECLS, Allegro, LispWorks, and
MCL all have some form of tail-call elimination. This sounds to me
like a good candidate for standardization.
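
One shape such a portable request could take (purely hypothetical
syntax; no such declaration exists in any standard or implementation
mentioned here) is a declaration the compiler must either honour or
reject:

```lisp
(defun walk (list)
  ;; Hypothetical declaration: either the tail call below is merged,
  ;; or the compiler signals an error at compile time.
  (declare (require-tail-call-merging))
  (if (null (cdr list))
      (car list)
      (walk (cdr list))))
```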

> | (Another thing I would like to see are stronger guarantees about what
> | conditions are signalled in exceptional situations; again, I do not have
> | problems with my vendor here, as I can always test my implementation's
> | behaviour, but with the standard itself.)
>
> What do these stronger guarantees actually entail? Is it the ability to
> sue vendors if they decide to claim to purport to conform to a voluntary
> standard? If so, nobody will volunteer to conform to it.

Uhm, I'm pretty sure he meant changing the wording to say that certain
conditions *will* be signalled in certain situations, instead of
*may*. I see no reason why this would be an unreasonable request for
a future version of the standard. Perhaps it wouldn't make it in, but
it sounds like a normal user request for portability.

Jochen Schmidt

unread,
Oct 8, 2001, 6:35:53 PM10/8/01
to
Juliusz Chroboczek wrote:

yes you got it ;-)

ciao,
Jochen

--
http://www.dataheaven.de

Kent M Pitman

unread,
Oct 8, 2001, 6:39:49 PM10/8/01
to
t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> I'm pretty sure he meant changing the wording to say that certain
> conditions *will* be signalled in certain situations, instead of
> *may*. I see no reason why this would be an unreasonable request for
> a future version of the standard.

Depends on a case by case basis. Every requirement to signal is a
requirement to check. Every requirement to check is time lost not
doing the application. In an application where data is already
correct, such time is wasted. It is between the application writer
and the vendor how much pre-testing has been done and how much is
needed at runtime.

IMO, one of the coolest and strangest features of CL is the manner
in which its declarations are treated. They are "promises to have done
right". They can sometimes be checked (as CMU CL has shown) but they
sometimes cannot be. In most languages with declarations, declarations
that can't be checked are failures rather than treating the programmer as
an "oracle" who can tell the compiler things it might not be able to infer.
Taking away this ability in favor of requirements to assure these truths
has the potential effect of slowing the language down in places where you
could have told the program a truth that proving would either be hard to
do at compile time or expensive to do at runtime.

> Perhaps it wouldn't make it in, but
> it sounds like a normal user request for portability.

And it sounded like a normal reaction of another member of the
community to one person's seemingly trivial suggestion. Steele came
to the first meeting of the CL committee post-CLTL (it was in 1986 in
Monterrey, I believe) with a page full of "trivial" corrections he
thought sure everyone would just rubber stamp. He was amazed to find
that, in constrast to the willingness of a community pre-publication
who had no investment in their implementation or user-code to accept
new ideas, it was suddenly much harder to get change. New people had
arrived on the scene with different points of view. People had
started to invest in a particular point of view, etc. In the end, we
accepted none of his trivial changes and in fact realized from that
meeting that something formalized like ANSI would be needed (we
weren't sure what the mechanism was, but we knew "friendly consensus"
would no longer cut it) to move forward from there, since there were
irresolvable divides afoot...

Erik Naggum

unread,
Oct 8, 2001, 6:53:59 PM10/8/01
to
* Juliusz Chroboczek <j...@pps.jussieu.fr>

Juliusz, I find your approach in this thread to be irrational, and I
suspect that it has some philosophical underpinnings that I cannot quite
get a grip on. You have removed the real world from your reasoning, and
when people reintroduce it, they fall apart like they depend on a reality
different from what is normally experienced. There is something wrong
with a philosophy when that happens. In this case, your probably half-
facetious response betrays, I think, a failure to differentiate between
the normal and the exceptional. This is akin to what we find in politics
when people make no distinction between normal and emergency ethics --
under normal circumstances, no person's life is in danger and endangering
a person's life is wrong, but in an emergency, a person's life is in
danger and it may be ethical to endanger another person's life, such as
the attacker of the first person. If you attempt to apply normal ethics
to the emergency situation, you _will_ endanger someone's life and that
is wrong whichever way you look at it, so there is no right answer,
anymore, meaning your ethics has failed, but you cannot argue that your
normal ethics has failed across the board -- it has only failed in an
emergency. Likewise with software: If your theory assumes a normal
course of execution and termination, the theory _fails_ if you apply it to
exceptional termination, but that does not mean the theory is wrong apart
from being abused for exceptional situations.

This issue is of course getting more complex in software where the normal
course of execution includes a large number of exceptional situations,
but running out of resources is clearly such an emergency that no theory
of reliable _software_ can deal with it. The hardware exists. It cannot
be abstracted away while the theories are still expected to have their
usual predictive power. Regardless of how reliable your software is, if
you are running it on a laptop computer in, say, Kabul, it just becomes
so incredibly stupid to argue that the _software_ failed if the hardware
evaporates and that it is useless to write reliable software when it
might as well blow up on you, literally. On the other hand, we do have
people from vendors in this newsgroup who argue that just because you
need sockets, which are not standardized, you might as well disobey some
other parts of the standard because the standard is not perfect in its
power to encompass the needs of the world. I think it is the same bogus
ideas that underlie the attitude to perfection and normalcy of both of
you guys, that if you define a perfection that is inherently unattainable
and impossible, you can go on griping about irrelevant flaws forever.

As far as I can tell, Common Lisp is good enough that it attracts people
who can make up a desire for an irrationally perfect world and thus never
actually be satisfied with what they are able to get in any real life.
For instance, you will never get a guarantee for tail-call merging in the
standard, and it is _completely_ useless to ask for it or even pretend
that you can get more than actual support for what you want from vendors.
The standard is not at fault for not giving you the irrationally perfect.

Erik Naggum

unread,
Oct 8, 2001, 8:39:55 PM10/8/01
to
* Juliusz Chroboczek <j...@pps.jussieu.fr>

| I, on the other hand, want to use CL (which I happen to find more
| comfortable than Scheme) and still require that tail-call elimination
| should be performed in some circumstances.

But you are not satisfied with it being done when you ask for it. You
seem to think that if it is required in the standard, you will also be
"guaranteed" to actually get it from your vendors, but it is the other
way around. There are still lots of things that the standard mandates
that people are not implementing exactly the same way. Like directory,
which we saw just recently. However, instead of arguing for how it
should be implemented and finding ways to get this, people get agitated
to the point of looking like they are _betrayed_ because the standard is
not sufficiently clear, as if it owed them that. I find this very odd,
but it _is_ also something that happens again and again in this forum.

| With recursion (possibly indirect), tail-call elimination will, under the
| right circumstances, convert unbounded space usage into constant space.
| With some programming techniques, this means turning a guaranteed crash
| into a reliable program.

If so, why is it not sufficient to actually get what you want? Why do
you have to have a "guarantee" to get it via the standard? Do you think
you will pay less for it if people "had" to do it than if you ask for it?

Jochen Schmidt

unread,
Oct 9, 2001, 5:21:35 AM10/9/01
to
Juliusz Chroboczek wrote:

The interesting thing is what you do to make it useful again: you buy
a laptop with a longer runtime and you make sure to charge the batteries
before everything goes down. None of this is standard in Scheme, but it
seems to work in reality. So why should it be a problem to do the same
by using a CL that supports tail-calls?

Tim Bradshaw

unread,
Oct 9, 2001, 5:53:38 AM10/9/01
to
* Thomas F Burdick wrote:
> It would be very nice to have some *portable* way of
> ensuring that (1) certain important tail-calls are eliminated; or (2)
> an error is raised. This is obviously something the vendors thought
> their user base wanted: CMUCL, CLISP, ECLS, Allegro, LispWorks, and
> MCL all have some form of tail-call elimination. This sounds to me
> like a good candidate for standardization.

I think that the pro-tail-call-elimination-in-the-standard people
should come up with a description, in language suitable for a standard,
of just what it is they want.

Judging by the experience of Scheme, where, despite Scheme being a
much simpler language than CL from the point of view of
tail-call-elimination, it has taken a *long* time to tie down what is
meant by the requirement (witness article
<m3ite0z...@localhost.localdomain> in this thread), and it may
actually not be tied down yet, I'm not sure.

In order for a CL *standard* to mandate tail-call-elimination in a way
that is actually useful to programmers using the language it has to be
described in a fairly precise way, and I haven't seen such a precise
description in this thread, while I *have* seen a number of arguments
from the anti-tail-call-elimination-in-the-standard people that such a
precise description might be very hard. Rather we've had a lot of
vague handwaving as the above quote (I'm not trying to pick on this
article or its author in particular, it was just the closest to hand).

So I think it behooves the PTCEITS people to come up with a form of
wording which is precise enough that it is suitable for use in a
standard. At that point this whole discussion might have more
purpose.

--tim

Erik Naggum

unread,
Oct 9, 2001, 6:36:02 AM10/9/01
to
* Thomas F. Burdick

| Oh, come on. Imagine it's pre-ANSI CL and he's asking for an object
| system to be included in the standard.

Oh, come on. We are simply not in that position, anymore.

| It would be very nice to have some *portable* way of ensuring that (1)
| certain important tail-calls are eliminated; or (2) an error is raised.

I keep rewriting it "tail-call merging" because I think _elimination_ is
just plain wrong terminology. Is that just me?

What you seem to want is a portable way to query a function at the source
level for the ability of all compilers to unwind its call stack to the
point where it can redirect a particular function call to return to its
caller. What I keep arguing is that this requires a large set of other
requirements that are simply not in the Common Lisp spirit. The various
conditions under which you can actually get tail-call merging in today's
implementations of Common Lisp are what they are because of the freedom
of these implementations to do their work in various smart ways. If you
_require_ a specific set of tail-call merging conditions, you will also
mandate a particular set of implementation techniques for function calls
of _all_ kinds, and you may actually find that a particular vendor is
unable to comply with the new standard because function calls in Common
Lisp are incredibly hairy things at the machine level, especially when
you take CLOS into consideration. [I have had the good fortune to work
with Duane Rettig's code in Allegro CL, and I am quite simply in _awe_ of
the attention to detail and efficiency and the cleverness of both the
code transformations and the resulting machine code on such a weird and
varied bunch of hardware architectures. My own training and set of skills
predispose me to strong hesitation about the kinds of claims that
Scheme makes, and I have studied Scheme implementations sufficiently to
understand how this "properly tail-recursive" requirement has serious
repercussions for the design and implementation of the whole language.]

| This is obviously something the vendors thought their user base wanted:
| CMUCL, CLISP, ECLS, Allegro, LispWorks, and MCL all have some form of
| tail-call elimination. This sounds to me like a good candidate for
| standardization.

It may indeed sound like that until you look beneath the hood and see how
_differently_ they need to implement this seemingly simple concept
because of their other design choices in implementing the function call.

Contrary to popular belief in certain circles, it is not _sufficient_ to
specify something in a standard to actually get it. My own struggles
with trusting people who claim publicly to be in favor of the standard
while they sneer at it and ridicule it in private communication cause me
to believe that if you require something that people are only politically
motivated to back, they will find ways to implement it that are not worth
much to you. It is somewhat like implementing a relational database with
SQL "support" that does not _actually_ support transactions and rollback,
like MySQL, because of "efficiency" reasons. This is fine as long as I
do not need it, but the day I do, I may have to invest serious effort in
recoding my application and move to a real relational database. Now, why
do people not implement something that is so fundamental to the task at
hand to begin with? Misguided ideas of their ability to judge the
appropriateness of standard requirements are most often to blame, but
irrational personal hangups can be just as powerful -- the upper-case
nature of Common Lisp symbols is one of them, where one vendor argues
strongly for a "modern" mode in lower-case, but does _not_ default the
case-insensitive argument to their apropos to true so their users can at
least give lower-case strings to apropos, and neither do they offer a way
to use modern and standard mode side-by-side, which would have made it so
much easier to experiment with modern mode. This kind of "anti-standard
attitude problem" is not at all specific to our community -- I have
worked with standards of many kinds for 15 years, and in every single one
of the areas where I have experience, there have been rogue vendors who
think they know better or (more often than not) who simply fail to care
enough to understand what they are doing. Microsoft is the archetypical
enemy of specifications because of its holy mission of supporting what it
claims are "customer needs", while one such need, trust in conformance,
is obviously ignored.

| > | (Another thing I would like to see are stronger guarantees about what
| > | conditions are signalled in exceptional situations; again, I do not have
| > | problems with my vendor here, as I can always test my implementation's
| > | behaviour, but with the standard itself.)
| >
| > What do these stronger guarantees actually entail? Is it the ability to
| > sue vendors if they decide to claim to purport to conform to a voluntary
| > standard? If so, nobody will volunteer to conform to it.
|
| Uhm, I'm pretty sure he meant changing the wording to say that certain
| conditions *will* be signalled in certain situations, instead of
| *may*. I see no reason why this would be an unreasonable request for
| a future version of the standard. Perhaps it wouldn't make it in, but
| it sounds like a normal user request for portability.

Sigh. If you write something down in a law or a specification, you need
a way to enforce it. How do you envision enforcing the "guarantees"
that you have gotten rolled into the standard, if you do?
The unreasonableness of the whole desire to see tail-call merging required
is that the consequences of such a requirement lead to enforceability
issues that I for one do not welcome at all. It is just like bad law --
it has the _sole_ effect of making people ignore and break less bad laws.

Juliusz Chroboczek

Oct 9, 2001, 8:53:13 AM
EN> However, instead of arguing for how it should be implemented and
EN> finding ways to get this, people get agitated to the point of
EN> looking like they are _betrayed_ because the standard is not
EN> sufficiently clear, as if it owed them that.

I don't think that's a fair assessment. Unlike some other languages,
Common Lisp is specified with enough rigour to make it possible to
write real code to the spec. I have found that the only cases when I
consciously wrote non-portable Common Lisp code were either when I was
writing intrinsically platform-dependent stuff, or when relying on the
tail-call elimination behaviour of the implementation that I use.

In the light of the above, I think it is only natural to want to
attract the community's attention to this issue.

Juliusz

Juliusz Chroboczek

Oct 9, 2001, 8:44:00 AM
KMP> It is specifically left to the market to sort out ...

[...]

KMP> This is not an argument. It is, to the best of understanding, a fact.

Okay, I read you now. You were not arguing why this sort of thing
should not get into a hypothetical future standard, but why it didn't
get in.

KMP> Two major points follow. If you find yourself hopelessly mired in the
KMP> first [....]

Actually, your first point makes a lot of sense. I think I understand
why X3J13 was not the right time to push tail-call elimination: it's
very difficult to specify right (and you didn't have a lot of manpower),
and may require reengineering of already present functionality
on the part of vendors. These seem like excellent reasons to me.

I am glad that you should agree that tail-call elimination is something
that deserves at least serious consideration (I hope I am not
abusively putting words into your mouth here). Please count me as a
happy CL user that would be even happier with guarantees about
tail-call elimination. As Thomas Burdick noted, the fact that most
current CL implementations do perform tail-call elimination in some
form seems to indicate that I am not alone in this situation.

Regards,

Juliusz

Juliusz Chroboczek

Oct 9, 2001, 9:15:58 AM
EN> Juliusz, I find your approach in this thread to be irrational,
EN> and I suspect that it has some philosophical underpinnings that
EN> I cannot quite get a grip on.

Okay, I'll try to make my ``philosophical underpinnings'' explicit.

One of the early proponents of rigour in the discipline of programming
was Edsger W. Dijkstra. Together with Hoare and other people,
Dijkstra developed a number of more or less formal techniques to aid
in the writing of reliable software. (An important point to
understand is that a formal technique is often useful in an informal
setting: for instance, while Dijkstra introduced the notions of loop
invariant and weakest precondition in a formal framework, most good
programmers use them to reason informally about their programs.)

In one of his notes, Dijkstra mentions that when presenting his ideas,
he sometimes met the objection ``but what if the compiler (resp. the
hardware) is unreliable?''. Dijkstra argued (and I fully agree with
this point) that this is immaterial to the argument at hand, and that
the person asking this question had completely missed the point. The
fact that we use compilers with bugs, hardware that breaks at random
points, or power companies that arbitrarily decide that darkness is
the norm does not make the task of writing reliable software any less
meritorious.

Similarly, I would argue that the fact that no Common Lisp implementation
fully conforms to the standard, or that my Common Lisp implementation
runs on hardware that may very well not survive the day
(touch wood) does not make the task of writing a rigorous and precise
specification for the language futile. The existence of the Common Lisp
standard, which I regard as a brilliant example of how it is possible
to be intellectually rigorous without relying on formal tools, seems to
imply that at least part of the Common Lisp community shares this
view.

Regards,

Juliusz

Juliusz Chroboczek

Oct 9, 2001, 9:29:29 AM
>> | (Another thing I would like to see are stronger guarantees about what
>> | conditions are signalled in exceptional situations;
>>
>> What do these stronger guarantees actually entail?

TB> Uhm, I'm pretty sure he meant changing the wording to say that certain
TB> conditions *will* be signalled in certain situations, instead of
TB> *may*.

I was actually being much more modest, and thinking of changing a
number of requirements that ``an error is signalled[1]'' to a precise
specification of which proper subclass of ERROR should be signalled.
I have sometimes found myself establishing handlers for ERROR in
distributed code, which always makes me nervous, and in my personal
opinion should never be necessary.

(As far as I can tell, there should be no cost associated with such a
change other than the presence of a few extra condition classes in the
Lisp runtime. In addition, I would like to mention that I do fully
realise that the current wording does allow implementations to signal
a more precise condition than ERROR -- but I hope you'll agree that
this is of little comfort to portable code.)

A case in point is that of DESTRUCTURING-BIND, which in its current
state requires handling ERROR whenever applying it to data that has
not been pre-validated -- in which case I usually end up doing the
destructuring by hand instead.
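For concreteness, here is a minimal sketch of the pattern Juliusz is describing (PARSE-POINT is an illustrative name, not anything from the thread). Because no specific condition class is guaranteed for a destructuring mismatch, portable code can only trap the overly broad ERROR:

```lisp
;; Sketch only: portable code guarding DESTRUCTURING-BIND against
;; malformed input has no narrower conforming handler than ERROR.
(defun parse-point (form)
  (handler-case
      (destructuring-bind (x y) form
        (list :x x :y y))
    (error () nil)))   ; any mismatch, whatever its class, collapses to NIL

;; (parse-point '(1 2))   => (:X 1 :Y 2)
;; (parse-point '(1 2 3)) => NIL in typical implementations
```

The cost is exactly the one he names: the ERROR handler also swallows unrelated errors signalled while evaluating X and Y's uses, which is why he ends up destructuring by hand instead.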

When I complained about that (IMHO) misdesign a few years ago on this
respectable forum, Kent replied that X3J13 didn't have enough manpower
to specify the exceptional situations more precisely. I fully accept
this point, which I hope is clear from the wording I've used.

Regards,

Juliusz

[1] er... that is ``signaled.''

Valeriy E. Ushakov

Oct 9, 2001, 10:39:43 AM
Juliusz Chroboczek <j...@pps.jussieu.fr> wrote:

> On the other hand, consider the following:
>
> (defun foo ()
>   (foo))
>
> and suppose that you compile FOO without tail-call elimination. Does
> FOO terminate?
[...]
> The point I'm trying to make is that FOO terminates when compiled
> without tail call elimination, and that FOO does not terminate if
> compiled with tail call elimination. Thus, tail call elimination is
> not a mere optimisation, but a semantic feature.

Have you considered the possibility of call frames allocated in the
heap and being garbage-collectable? In that case the tail-call
merging of the tail self-calls is *only* an optimization that saves
you the trouble of allocating a new call frame and gc'ing the previous one
sometime later when gc is run.

I think this was actually done for some functional programming
language (istr it was TIM (three instruction machine), but I just
don't remember...).

SY, Uwe
--
u...@ptc.spbu.ru | Zu Grunde kommen
http://www.ptc.spbu.ru/~uwe/ | Ist zu Grunde gehen

Erik Naggum

Oct 9, 2001, 11:01:27 AM
* Juliusz Chroboczek <j...@pps.jussieu.fr>

| The existence of the Common Lisp standard, which I regard as a brilliant
| example of how it is possible to be intelectually rigorous without
| relying on formal tools, seems to imply that at least part of the Common
| Lisp community shares this view.

I wonder if anyone does _not_ share it. Seriously. The resistance to
"formal tools" is, however, pretty strong among those who have studied
Scheme well and are sick and tired of completely fruitless formalisms
that turn into huge implementation costs or restrictions on the freedom
of the implementors. If there is one lesson one learns from having dealt
with many language specifications, it is that overspecifying is far worse
than underspecifying.

It seems to me that you are fighting an enemy that does not exist when
you argue as if there are people who do not share what you have written
and it seems pretty arrogant to suggest that yours is the only way to express or
desire the values and results of "rigor". I actually think _everybody_
here is sufficiently well educated that you should presume that their
objections to "formalism" are to its application, not the principles.
Thus, it does not help to argue for the principles when people are opposed
to their application. Quite the contrary, arguing for the principles
when the application is challenged only makes people more hostile to the
application, because it implies that you think the principles are not
understood unless they are agreed to be applied just your way.

I also think what Tim Bradshaw said here is very relevant: As long as you
only ask for something that meet objections, there will be no progress.
Write up an _exact_ proposal for what you want the community to agree to,
and you might actually find people able to accommodate you, but if you
only demand something that people have serious objections to, there is no
way anyone can figure out what you are _really_ after, i.e., what you
would be happy if you got. In general, if people cannot figure out when
your complaints will end, you lose credibility fast, as it is essentially
indistinguishable from an attitude problem.

Kent M Pitman

Oct 9, 2001, 12:11:48 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> >> | (Another thing I would like to see are stronger guarantees about what
> >> | conditions are signalled in exceptional situations;
> >>
> >> What do these stronger guarantees actually entail?
>
> TB> Uhm, I'm pretty sure he meant changing the wording to say that certain
> TB> conditions *will* be signalled in certain situations, instead of
> TB> *may*.
>
> I was actually being much more modest, and thinking of changing a
> number of requirements that ``an error is signalled[1]'' to a precise
> specification of which proper subclass of ERROR should be signalled.

Oh, this is a lot more complex than you imagine. It's not that we
don't understand your need, it's that it's very tricky for the
language to specify this (and it would have involved more personpower
than we had available even at the specification level, not to mention
the burden it imposes on implementors).

> I have sometimes found myself establishing handlers for ERROR in
> distributed code, which always makes me nervous, and in my personal
> opinion should never be necessary.

People do understand this issue, but...

Let's work historically to show you how it came to be as it is.

First there was the non-typed condition system. ERROR had a message.
ERRSET trapped things. "Sophisticated" systems could see the error
message in limited cases and on the basis of the string could do certain
very specialized actions.

Then there was the Symbolics "New Error System" (NES) [based on long
experience with Multics PL/1 condition system] allowing object-oriented
error handling, separating the "error kind" from the message. The very
first roll-out was hugely detailed with a tree of error classes.

I did the "port" of NES to a proposal I felt was appropriate to CL.
In so doing, I asked Dave Moon at Symbolics what depth of the error class
tree I thought we should include in CL. He said that if he had it to
do over (which is effectively what CL was, a doing over), he would not
specify nearly as many classes.

He observed first that it isn't clear that it's good style, for
example, for people to be making decisions on the basis of "unseen go
tag" vs "unseen catch tag"; programs relying on this level of detail
have serious style/design problems beyond the actual error that is
manifesting. So we did some cutbacks based on what we thought
reasonable programs should do, and I think those cutbacks were
well-advised. I will defend those cutbacks to this day, and I think
even you and others who want more elaboration would agree.

There is a second class where it would be better style to have a bit
more elaboration. An example might be "file not found" and
"directory not found" instead of just "file error". I don't doubt this
would be radically useful.

In the historical context, these were not added for two reasons. One
was that CLOS was new and untrusted, and it was a design constraint,
which you can see if you look, that none of the system classes have multiple
inheritance in primitive types (so we could back out of multiple
inheritance if it was a disaster, which some (mostly those who had
never used it or who had stock hardware and worried the performance
would not be as good as lisp-specialized hardware) felt was an
essential option to keep open). (I believe the only exceptions are
type NULL, which is handled in some bizarre primitive way by all
systems, and the types SIMPLE-CONDITION, SIMPLE-ERROR, etc. which are
specified in a way that you might think had to involve multiple
inheritance because you have used a reasonable language that allowed
this, but that actually can be done by stupid other ways like Java
does (replicating functionality, etc.) to cover for lack of multiple
inheritance, where that happens in an implementation.) Anyway, the
second reason was somewhat independent, but had some bad play-in from
the single-inheritance restriction: we couldn't agree on the type
tree because it would be different for different vendors. (And,
related to this, even if we could, it might bind us to a particular
implementation that we later decided we needed to grow out of. You'll
see how that plays out in my example below.)

Consider something like directory-not-found. In something like
LispWorks or Allegro, I believe the Lisp file system can only open the
local file system. Well, maybe except for NFS, so the issue probably
still comes up. Anyway, mostly when you do OPEN a file is either
there or it is not. Same with a directory. But the only "current
practice" we had was the Lisp Machine, and its model was more
elaborate. OPEN could open any file on the internet by saying (open
"hostname:filename"). [You could manipulate independently what
protocols, network paths, etc. were used to get to that hostname, so
you might be using FTP to one host while using TFTP or SNA or
something to another host.] But a file not found might be temporary
(host down or not responding) or permanent (i got to the host but it
says there's no file there). It might be a network problem or not.
We couldn't easily make DIRECTORY-NOT-FOUND a subclass of
NETWORK-ERROR, nor could we make it not a subclass of NETWORK-ERROR.
We could have left the classes free-floating but that could have led
to other confusions we hoped to avoid, making it hard to order the
cases correctly when discriminating several error types, for example.
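The dilemma is easy to see in code. DEFINE-CONDITION does allow multiple parent types, so with multiple inheritance trusted, the class Kent describes could be expressed directly. Note that this is a hypothetical sketch: ANSI CL defines FILE-ERROR but no NETWORK-ERROR, so the second parent is an assumed class standing in for the Lisp Machine style hierarchy discussed above:

```lisp
;; Assumed class, for illustration only -- not part of ANSI CL.
(define-condition network-error (error) ())

;; With multiple inheritance, DIRECTORY-NOT-FOUND need not choose
;; between being a file error and being a network error.
(define-condition directory-not-found (file-error network-error)
  ((host :initarg :host :reader directory-not-found-host)))
```

Without multiple inheritance available as a safe assumption, the committee would have had to pick one parent and be wrong for somebody's implementation.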

We ultimately decided to leave this area blank and to see what error
classes ended up springing up and then try to repair the language later
by adding specificity based on experience.

We did not know at the time that there would not be a near-term "later"
and might never be a "later".

When we got done doing the standard, it had cost a lot of money to make
and even more in the community to adopt (change code to be ANSI compliant).
It cost just shy of a half million in salaries to produce, and I'm sure
at least as much to adopt. That's money not spent producing product.
We met after 5 years to see whether we should open the standard for
change or just leave it laying, and the result was that it was thought to
be not broken enough to be worth the risk/cost of tinkering. Yes, there
are things people would want to change and this probably among them.

Here I won't use the "we wanted to leave it to the marketplace" argument
but rather the "we were willing to leave it to the marketplace" argument. :-)
That's less strong, I think all would agree. With multiple inheritance
firmly in place, I'm confident that if the spec were open to tinkering,
we could fix this. But opening to tinkering might break 180 other things
that you didn't want broken, too, because democratic organizations are
strange and often vote in things that are inconsistent with any single
member's world view. Democracy optimizes worst case design better than
it optimizes best case design, which it really doesn't address at all.

So you've got what you've got for the near term, and that's pretty much that.

Personally, I think the right thing is to have community consensus and I
wish the ALU would play a bigger role in that. Properly acting as a block,
users could probably get some vendors to take note. Vendors don't want
to make a random change for a random user whose personal sense of order is
offended by what they offer, but if they had reasonable confidence they were
talking to a group that represented enough users that they could make a change
once and then stand by it as "what the community wanted", ESPECIALLY if, as
here, it was a compatible change and did not disturb the spec, I think
they would probably do it. Certainly there'd be a better chance.

I hope that helps you understand and accept the status quo as the best
that could have been done then, and still for different reasons close
to the best we can do now, modulo the question of whether smaller
layered standards or just "conventions" could help. That last is
still an open question, but numerous attempts by various people to
make progress in this area have failed, so it's not like it hasn't
been tried. Ultimately, people speak with their wallets as to what
they really need, and I sense most people would rather write a few
#+/#- conditionals than spend real dollars trying to solve this issue
in an aesthetic way. They may be just saying they have better uses
for their money in this struggling economy, and speaking practically,
I can't really say 100% that that's a bad thing... CL is and always has
been a language that caters to pragmatic folks.

Kent M Pitman

Oct 9, 2001, 12:15:52 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> I am glad that you should agree that tail-call elimination is something
> that deserves at least serious consideration (I hope I am not
> abusively putting words into your mouth here).

No, that's fine. But see my other post of just a few moments ago on the
issue of specializing condition types, where I discuss the problem that
the community doesn't have a forum right now to consider such changes, and
doesn't seem likely to.

I'm not an official representative of any committee, btw. Even when I was
I couldn't speak for them. (X3)J13 speaks through its votes, not through
a publicist. Members always just express their own opinions, nothing more.
I've resigned my membership of the committee because I think it's done
as much as it can do and I now support "other ways of moving ahead".
The committee still exists, though, and it may or may not have another
agenda--I don't know. Maybe Steve Haflich of Franz, who was chair of it
last I heard, will join in with some remarks on what the status of the
committee's work or non-work is. I'd heard some rumor of a vote to put
the committee into official hibernation, but I don't know if that vote was
taken or, if so, what the outcome might have been.

Kent M Pitman

Oct 9, 2001, 12:18:09 PM
Juliusz Chroboczek <j...@pps.jussieu.fr> writes:

> Similarly, I would argue that the fact that no Common Lisp implementation
> fully conforms to the standard,

The standard does not require that you don't have a resource problem.
Conformance is straightforward.

(defun common-lisp ()
  (write-string "Resources exhausted."))

There. That's a fully conforming implementation that stands as a
counterexample to your claim. Heh...

You might not like this as a standard of conformance, but at least we
tried to set achievable goals for implementors. ;-)

Paolo Amoroso

Oct 9, 2001, 4:17:46 PM
On Tue, 09 Oct 2001 10:36:02 GMT, Erik Naggum <er...@naggum.net> wrote:

> I keep rewriting it "tail-call merging" because I think _elimination_ is
> just plain wrong terminology. Is that just me?

I had actually never thought much about it. But now that you point this
out, "merging" seems more appropriate than "elimination".


> It may indeed sound like that until you look beneath the hood and see how
> _differently_ they need to implement this seemingly simple concept
> because of their other design choices in implementing the function call.

Maybe one of the reasons why the implementation of tail-call merging looks
simple comes from the influence of textbook examples. A non-merging Scheme
interpreter/compiler is often illustrated, and then changing a bunch of
lines of code turns it into a merging one. For the toy examples discussed
in books this is indeed simple.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://web.mclink.it/amoroso/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]

Andy Freeman

Oct 9, 2001, 5:26:36 PM
j...@itasoftware.com wrote in message news:<r8swad...@itasoftware.com>...
> Normal order semantics are unreasonable? First I've heard of that.

You hang out in the wrong bars.

Normal order semantics has nice properties for certain kinds of
analysis on certain kinds of programs. However, it's hard to
keep those properties and write certain kinds of programs in a
"natural" way. Since more people are interested in those programs
than in those properties....

Normal order semantics also has a distinct lack of locality.
That's "inefficient" for machines (which is not a big issue) and
inefficient for humans (which is a HUGE issue).

I like to play with myself as much as the next guy, but the formal
language community's mathturbating has gotten the rest of us C, C++,
and Java. The popularity of those languages is not irrational - they,
and their implementations, are better at what most users need than
"good" languages; that superiority is the fault of the designers of
"good" languages.

-andy

Roger Corman

Oct 10, 2001, 3:22:03 AM
On Tue, 09 Oct 2001 21:17:46 +0100, Paolo Amoroso <amo...@mclink.it> wrote:

>On Tue, 09 Oct 2001 10:36:02 GMT, Erik Naggum <er...@naggum.net> wrote:
>
>> It may indeed sound like that until you look beneath the hood and see how
>> _differently_ they need to implement this seemingly simple concept
>> because of their other design choices in implementing the function call.
>
>Maybe one of the reasons why the implementation of tail-call merging looks
>simple comes from the influence of textbook examples. A non-merging Scheme
>interpreter/compiler is often illustrated, and then changing a bunch of
>lines of code turns it into a merging one. For the toy examples discussed
>in books this is indeed simple.

These are very good points. I can't tell you how many times I have read that
tail call elimination is trivial and any decent implementation will do it all
the time--only to struggle with it and tear my hair out trying to implement it.

Here are some problems I have run into, most recently with Corman Lisp. (I had earlier
implemented a version in PowerLisp that occasionally generated the wrong code,
and ended up taking it out).

In Corman Lisp, I made design decisions to use C-like calling conventions. This
has some advantages over alternatives, as well as some disadvantages. In order
to eliminate all tail calls, the function needs to generate code which will tear
down the current stack frame, push the new arguments on the stack, and branch.
However, once the frame is gone, it is not possible to evaluate the arguments
any more. And if you evaluate them first, then you need to store them somewhere
as you create them, but then you have to shuffle them around when you are done.
And, in Corman Lisp, the garbage collector can run at any time (on another
thread, for instance) and the stack needs to be in a reasonable looking state
when it does. Shuffling of items on the top of the stack is messy, and it might
actually be less efficient than just using a normal call.

The stack is so large that you only rarely run out of stack space--if
you keep in mind how recursion works and use it appropriately.

I ended up implementing what I think is a half-way version in Corman Lisp.
It optimizes recursive tail calls, in a way that is very efficient, but only in
functions which have a fixed number of arguments. In this case, you don't have
to tear down and rebuild the stack frame, and you just overwrite the existing
arguments, giving the biggest bang for the buck. In addition, it is mostly done
at the source level, using GO to restart the function, making code generation
simple and not error-prone. I added some special compiler macros to overwrite
the arguments, bypassing any inner scope variables that might have the same
name.
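The rewrite Roger describes can be sketched by hand (the names and exact expansion here are illustrative, not Corman Lisp's actual output): a self tail-call becomes a parallel assignment to the parameters followed by a GO back to the top of the body.

```lisp
;; What a programmer would write:
;;   (defun fact (n acc)
;;     (if (<= n 1) acc (fact (1- n) (* n acc))))
;;
;; Roughly what the source-level transformation produces.  PSETQ
;; assigns the new argument values in parallel, which sidesteps the
;; shuffling problem when new arguments depend on the old ones:
(defun fact (n acc)
  (prog ()
   restart
     (when (<= n 1)
       (return acc))            ; return from the implicit NIL block
     (psetq n   (1- n)
            acc (* n acc))
     (go restart)))             ; loop instead of a recursive call
```

Because this is a loop, (fact 1000 1) runs in constant stack space, and the code generation stays simple since it never touches the frame layout.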

This only works to call recursively however. If you try to call another
function, keeping the same frame, it may take a different number of arguments or
require a different sized stack frame. Even if these are the same at compile
time, the called function can always be redefined to take a different number of
parameters or a different sized frame.

I know there are probably some tricks I haven't thought of, but my point here is
that, as Erik said, many design decisions are made, and some compromises are
made, when implementing a compiler. I don't believe the benefits of implementing
full tail call merging are sufficient to justify forsaking the function calling
mechanism in this case. I am glad that tail call merging is an implementation decision and
not a mandate of the standard. For heaven's sake, the standard doesn't even
require a garbage collector. If an implementor designs a system that doesn't
provide sufficient resources to use, then it won't get much use.

Interestingly, I have never had one user complain to me about the lack of tail
recursion elimination (I only implemented it a couple months ago).
Without the tail call elimination, a simple function can make >80,000 recursive
calls before hitting the stack limit.
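A claim like this is easy to check in any implementation (the depth you reach will vary with stack size and optimization settings) by using a probe function that no compiler can tail-call merge:

```lisp
;; DEPTH is deliberately NOT tail recursive: the 1+ happens after the
;; recursive call returns, so each call must keep its stack frame
;; regardless of any tail-call merging the compiler performs.
(defun depth (n)
  (if (zerop n)
      0
      (1+ (depth (1- n)))))

;; (depth 80000) either returns 80000 or signals a stack overflow,
;; depending on the implementation's stack limit.
```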

Roger


j...@itasoftware.com

Oct 10, 2001, 3:56:18 PM
ana...@earthlink.net (Andy Freeman) writes:

> j...@itasoftware.com wrote in message news:<r8swad...@itasoftware.com>...
> > Normal order semantics are unreasonable? First I've heard of that.
>
> You hang out in the wrong bars.

I suspected as much. Where do the hot babes who hack lisp hang out?

> Normal order semantics has nice properties for certain kinds of
> analysis on certain kinds of programs. However, it's hard to
> keep those properties and write certain kinds of programs in a
> "natural" way. Since more people are interested in those programs
> than in those properties....

Agreed, but that doesn't make the semantics unreasonable, just
impractical. (And note that every applicative order language has a
few normal-order constructs in it.)

As a followup, people who are interested in the debate over preserving
termination characteristics when compiling might want to read these
papers:

@inproceedings{ rozas169695taming,
author = "Guillermo Juan Rozas",
title = "Taming the {Y} operator",
pages = "226--234",
url = "citeseer.nj.nec.com/169695.html" }

@inproceedings{ weise91automatic,
author = "D. Weise and R. Conybeare and E. Ruf and S. Seligman",
title = "Automatic Online Partial Evaluation",
booktitle = "Functional Programming Languages and Computer Architecture, Cambridge, Massachusetts, August 1991 (Lecture Notes in Computer Science, vol. 523)",
publisher = "Berlin:\ Springer-Verlag",
editor = "J. Hughes",
pages = "165--191",
year = "1991"
}

@misc{ mulmuley87full,
author = "K. Mulmuley",
title = "Full Abstraction and Semantic Equivalence",
text = "K. Mulmuley. Full Abstraction and Semantic Equivalence. MIT Press, 1987.",
year = "1987" }

Thomas F. Burdick

Oct 10, 2001, 8:39:49 PM
Tim Bradshaw <t...@cley.com> writes:

> * Thomas F Burdick wrote:
> > It would be very nice to have some *portable* way of
> > ensuring that (1) certain important tail-calls are eliminated; or (2)
> > an error is raised. This is obviously something the vendors thought
> > their user base wanted: CMUCL, CLISP, ECLS, Allegro, LispWorks, and
> > MCL all have some form of tail-call elimination. This sounds to me
> > like a good candidate for standardization.
>
> I think that the pro-tail-call-elimination-in-the-standard people
> should come up with a description, in language suitable for a standard
> of just what it is they want.

That's not a bad idea.

> Judging by the experience of Scheme, where, despite Scheme being a
> much simpler language than CL from the point of view of
> tail-call-elimination, it has taken a *long* time to tie down what is
> meant by the requirement (witness article
> <m3ite0z...@localhost.localdomain> in this thread), and it may
> actually not be tied down yet, I'm not sure.

But Scheme claims to be "properly tail recursive", which is not
something that I, for one, am advocating. I'd just like some portable
way of requesting tail-call merging, and of finding out if it was
accomplished. These are *very* different goals.

Kent M Pitman

Oct 10, 2001, 9:25:52 PM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> But Scheme claims to be "properly tail recursive", which is not
> something that I, for one, am advocating. I'd just like some portable
> way of requesting tail-call merging, and of finding out if it was
> accomplished. These are *very* different goals.

It depends on your purpose. Some people want to do things like:

(defun count-odd-numbers (upto n-so-far)
(if (= upto 0)
n-so-far
(count-odd-numbers (1- upto)
(if (oddp upto) (+ n-so-far 1) n-so-far))))


Merely "requesting" merging is not going to make this a correct program,
even with 80,000 stack frames being ok (as Corman says his implementation
allows), if upto is given initially as most-positive-fixnum.

In one of the first meetings of ANSI CL standardization, we (X3J13)
wrote a charter. One goal was "establish normative Common Lisp
programming practice." See
http://world.std.com/~pitman/CL/x3j13-86-020.html
for context.

We first agreed this was a good phrase and then talked for a while
about what it might mean. :-) Someone (my memory says it was McCarthy,
actually, who came to some of the first meetings, but I'm not 100% sure),
suggested that it meant that we would try not to include things that
encouraged people to do stuff they shouldn't.

Curiously, one cited example of that in the discussion was the inclusion
of APPEND, though we decided to include it anyway. But as I recall it was
used to illustrate the point that we don't want people calling APPEND all
the time because it will encourage people to write O(n^2) programs where
O(n) programs are better written either by CONS+NREVERSE paradigms or by
something similar hidden in functionality/macrology such as LOOP.

I guess the idea was not just to say "this is what this operator does"
but to also give some guidance about where it was and was not good to
use it.

In Structure and Interpretation of Computer Programs (S&ICP), which is the
approximate equivalent in the Scheme world to CLTL in our world, in terms
of user reverence anyway, tail recursion is presented as, at minimum,
"just another way to write a loop"... it's ALMOST presented as if "writing
a loop is buggy unless you express it this way". A whole virtual religion
has been spawned on the worship of the tail recursive call as a mechanism
for iterating.

But in CLTL, the fact is that it is simply NOT a synonym for looping because
looping has a well-defined storage usage and tail calling does not.
Specifying that you WANT it to be otherwise does not change the fact.
There are reliable ways to write loops, and tail recursion is not one of
them in either present-day CL or in a CL where you can request this without
being told the request will be granted.
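
For comparison, here is a sketch of the earlier counting example written as an
explicit DO loop; unlike the tail-recursive version, its constant stack usage is
guaranteed in every conforming implementation, since DO specifies iteration
outright:

```lisp
;; Iterative count of the odd numbers from 1 to UPTO.  Stack depth
;; does not grow with UPTO, regardless of compiler settings.
(defun count-odd-numbers (upto)
  (do ((i upto (1- i))
       (n-so-far 0 (if (oddp i) (1+ n-so-far) n-so-far)))
      ((= i 0) n-so-far)))
```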

It would work to have the request include programmer supplied information
that could distinguish between "necessary" and "optional" as we have
discussed for an optimize setting where (OPTIMIZE (TAIL-CALLS 3)) would
mean "must have", (OPTIMIZE (TAIL-CALLS 0)) would mean "must not have",
and anywhere in between would mean "optional". It's not like
DYNAMIC-EXTENT where it's safe to ignore; it's more like SAFETY where if
the programmer makes the request and the implementation can't oblige
him/her, the implementation must signal an error rather than just do the
wrong thing.
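
As a purely hypothetical sketch (TAIL-CALLS is not an ANSI CL optimize quality;
the name and semantics are invented here for illustration only), such a request
might look like:

```lisp
;; Hypothetical: (OPTIMIZE (TAIL-CALLS 3)) would mean "merge or signal
;; an error"; (OPTIMIZE (TAIL-CALLS 0)) would mean "must not merge".
;; No such quality exists in any standard or implementation.
(defun count-odd-numbers (upto n-so-far)
  (declare (optimize (tail-calls 3)))
  (if (= upto 0)
      n-so-far
      (count-odd-numbers (1- upto)
                         (if (oddp upto) (+ n-so-far 1) n-so-far))))
```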

(Maybe this is what you meant by "and finding out if it was accomplished".
But doing a request and not having it imperatively stop the program if
it failed would be more like C (with error codes you can forget to check).
The style of CL is to signal errors actively, not passively, so that the
default is that if you fail to arrange to handle something, you know an
error has not been detected because it would have been signaled if it HAD
been detected.)

Thomas F. Burdick

Oct 10, 2001, 10:00:48 PM
Erik Naggum <er...@naggum.net> writes:

> * Thomas F. Burdick
> | Oh, come on. Imagine it's pre-ANSI CL and he's asking for an object
> | system to be included in the standard.
>
> Oh, come on. We are simply not in that position, anymore.

That's not much of an argument against the analogy.

> | It would be very nice to have some *portable* way of ensuring that (1)
> | certain important tail-calls are eliminated; or (2) an error is raised.
>
> I keep rewriting it "tail-call merging" because I think _elimination_ is
> just plain wrong terminology. Is that just me?

You're right; I usually say "tail-call optimization", which I think is
fine terminology, but I do think "merging" is better.

> What you seem to want is a portable way to query a function at the source
> level for the ability of all compilers to unwind its call stack to the
> point where it can redirect a particular function call to return to its
> caller.

Yes. In other words: "I want this tail call merged; tell me if you
cannot do that".

> What I keep arguing is that this requires a large set of other
> requirements that are simply not in the Common Lisp spirit. The
> various conditions under which you can actually get tail-call
> merging in today's implementations of Common Lisp are what they
> are because of the freedom of these implementations to do their
> work in various smart ways. If you _require_ a specific set of
> tail-call merging conditions,

But that's not what I'm proposing.

> you will also mandate a particular set of implementation
> techniques for function calls of _all_ kinds, and you may actually
> find that a particular vendor is unable to comply with the new
> standard because function calls in Common Lisp are incredibly
> hairy things at the machine level, especially when you take CLOS
> into consideration.

Sure they can. If a requested tail-call merge can't be performed,
raise a condition.

> [I have had the good fortune to work with Duane Rettig's code in
> Allegro CL, and I am quite simply in _awe_ of the attention to
> detail and efficiency and the cleverness of both the code
> transformations and the resulting machine code on such a weird and
> varied bunch of hardware archictures. My own training and set of
> skills predisposes me towards strong hesitation towards the kinds
> of claims that Scheme makes, and I have studied Scheme
> implementations sufficiently to understand how this "properly
> tail-recursive" requirement has serious repercussions for the
> design and implementation of the whole language.]

I would not want to argue for making CL "properly tail-recursive".

> | This is obviously something the vendors thought their user base wanted:
> | CMUCL, CLISP, ECLS, Allegro, LispWorks, and MCL all have some form of
> | tail-call elimination. This sounds to me like a good candidate for
> | standardization.
>
> It may indeed sound like that until you look beneath the hood and see how
> _differently_ they need to implement this seemingly simple concept
> because of their other design choices in implementing the function call.

Once again, I'm not proposing that CL become "properly tail-recursive"
nor that we mandate the conditions under which tail-call merging
occurs. Nor am I proposing something that would require a massive
change to implement (I don't think).

> Contrary to popular belief in certain circles, it is not _sufficient_ to
> specify something in a standard to actually get it.

Of course it's not. And people will sometimes claim to conform to
standards they don't; that doesn't make standards worthless.

You're right. The entire CL standard is worthless. Not only should
we not think about what we'd like in a future standard, we should
throw the whole thing we have now, out. Because, after all, how do we
enforce it?

Tim Bradshaw

Oct 11, 2001, 5:33:59 AM

Well, come up with a design, and publish it. Probably the best group
of people to ask if it is reasonable are implementors right now, since
they'll bear all the cost of it (well, they'll pass it on in license
fees of course). You should also probably start of with at least one
non-trivial implementation (by `non-trivial' I mean one that doesn't
answer `don't know' in all cases). If you can tie it down precisely,
get vendors to agree to implement it and users like it, you're
basically done. Duane Rettig's simple streams specification and
implementation is an example of how to do this reasonably well, I
think (although I'm not sure the simple streams spec is `standards
quality' yet - I haven't read it recently - and it may be the case
that it's hard to implement in other CLs).

Just do it!

--tim

Erik Naggum

Oct 11, 2001, 11:33:15 AM
* Thomas F. Burdick

| That's not much of an argument against the analogy.

That someone can imagine something does not make it an analogy worth
arguing against. It is incumbent on the analogy-producer to cough up
something worth considering. Lacking that, _you_ have no argument, and
it is counterproductive and _indecent_ to pretend that someone who
rejects your analogy is at fault for doing so, and I do not generally
reward silliness.

| Yes. In other words: "I want this tail call merged; tell me if you
| cannot do that".

So we can get that today by defining something that always yields a
warning or even an error saying that it cannot tail-merge the call. That
seems like _such_ a great thing to add to the standard! The moment you go
beyond this silliness, however, you run into problems. I second Tim
Bradshaw's request that you (or that formal theory dude) actually come
up with something that can be subjected to careful scrutiny. As long as
you only "want" something and make up silly imaginary analogies, there
is absolutely no reason to believe that you _understand_ what you want.
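
That degenerate "solution" is expressible today; a sketch, with the macro name
invented here for illustration:

```lisp
;; A conforming but useless implementation of the request: it always
;; reports, at macroexpansion time, that merging is not guaranteed,
;; then expands to the form unchanged.
(defmacro with-tail-call-request (form)
  (warn "Tail-call merging cannot be guaranteed for ~S." form)
  form)
```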

| If a requested tail-call merge can't be performed, raise a condition.

At compile-time? Should the interpreter raise a condition, too?

| You're right. The entire CL standard is worthless.

Please turn your brain back on.

| Not only should we not think about what we'd like in a future standard,
| we should throw the whole thing we have now, out. Because, after all,
| how to we enforce it?

If you fail to appreciate that standards are specifications and that
clauses in standards are requirements, I can certainly fully understand
why you "want" useless tail-call merging-requesting special forms and
have no idea what you are embarking on.

Some astonishingly unintelligent politicians always suggest "good ideas"
that cannot be enforced, in the fantastically irrational belief that
people will sort of follow the spirit of the law when nothing bad happens
to those who do not, and they actually believe that a law is the proper
place to make feeble suggestions about people's "good behavior". Every
now and then, this feeble-minded moralism passing for law-making is not
obvious to sufficiently many politicians in time to avoid embarrassing
laws to pass. If you do not understand that if you write it into law,
there _will_ be a court of law facing those who disobey it and you _will_
impose punishment on those who "disagree" with you, you should not be
messing with laws in the first place. The same goes for standards,
although we have weaker means of dealing with the criminally-minded (i.e.,
those who believe themselves elevated far above the community consensus
and not at all bound by its decisions) in our community than in the legal
and law enforcement community. However, do not expect that those who
profess a criminal mind's arrogance towards community consensus to be
treated nicely. When you are in fact trying to change the community
consensus through a change to a specification that requires and survives
_only_ because it has the respect of law-abiding citizens, the people who
will most likely be affected by an undesirable change are those who want
to fuel their arrogance towards the specification. You have to recognize
that some people in our community already harbor _massively_ irrational
arrogance towards the standard and go out of their way to publish code
that actively diminishes the value of adhering to the standard where the
standard has spoken. These are people who are so hell-bent on their own
views that _they_ know better than the whole community consensus, that it
has no merit whatsoever to respect those who desire a community consensus
over some personal likes and dislikes. Search for "religious zealot" in
the archives of this newsgroup if you need to find a person who has shown
us that opening up the standard to changes will be _very_ dangerous and
that it should not be done for irrelevant and petty changes like getting
a silly form to get an error if you cannot get tail-call merging.

Those of us who want tail-call merging already _have_ the guarantees we
need because we have read the fine _documentation_ of our Common Lisp
environments and recognize that this will _have_ to be an implementation-
dependent quality. It has absolutely nothing to do with the semantics of
the code, misguided formal theories to the contrary notwithstanding, and
there is absolutely nothing in the language _today_ to suggest that you
_can_ guarantee this feature, any more than you can "add" it to other
languages that have not had the forethought to decide on "properly tail-
recursive" _early_ in the design process.

Let me make an analogy. It is sometimes possible to turn a regular
double-action pistol into an automatic pistol by breaking parts inside.
Now, automatic firing of up to 19 rounds from a handgun can be a really
good thing in approximately one situation, and we have wisely chosen not
to include that situation in _lawful_ shooting activities, but some are
not bound by the law so they want this feature. With the kind of guns
that you can achieve this feature, it is hard enough to hit the target in
its normal single-shot mode and the temperature of the gun rises so fast
that you normally want to restrict your firing to five rounds at a time.
The effect of automatic firing in this hand-gun is thus to cause the
muzzle to rise with every shot, usually by a significant angle, and to
raise the temperature of the barrel and thus _decrease_ its diameter, not
to mention the angle and temperature of the ejected cartridge. The
hazardous nature of the modification should be fairly obvious even to
people who think guns are only used to kill people. Still, there are
automatic pistols on the market that are _designed_ with this feature and
they are quite safe, at least for the guy pulling the trigger. I have no
desire to own such a gun at all -- I shoot because it is a sport
requiring strength, concentration, body control, etc, not because I like
the actually _annoying_ sound effects, so when somebody who wants such
guns looks at my guns and argues for the merits of automatic firing, even
if I do think they are genuinely interested in automatic pistols, I think
they are fucking nuts and I would feel a lot safer if they left the range. This
is approximately how I feel about Scheme freaks in the Common Lisp world.
