
how to validate input?


Friedrich Dominicus

Apr 18, 2000
I guess this is a trivial question, and anyway I came up with a somewhat
workable solution. I can't quite imagine that one can't do better, so here's
what I use at the moment:

(defun get-input ()
  "Prompt user for a number if it is a number
return it otherwise return nil"
  (format t "Please give me a number (e for Exit) ")
  (let ((line (read)))
    (if (numberp line)
        line
        (if (y-or-n-p "Not a number, really exit?")
            nil
            (get-input)))))

A bunch of improvements come to my mind and could be implemented relatively
easily, e.g. replacing the test with another test the input should obey, and of
course printing other messages if the input is not what it should be.

Anyway, I think that this has been done before and I guess even in a portable
way. Could you give me a hand here?
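
One way to carry out the generalization sketched above, as an illustration
only: the validation test, the prompt, and the complaint become parameters.
PROMPT-FOR and its keyword arguments are invented names, not from any
library in the thread.

```lisp
(defun prompt-for (test &key (prompt "? ")
                             (complaint "Bad input, try again: ")
                             (in *standard-input*)
                             (out *standard-output*))
  "Print PROMPT on OUT, then READ objects from IN until one
satisfies TEST.  Returns that object, or NIL at end of file."
  (format out "~a" prompt)
  (do ((input (read in nil nil) (read in nil nil)))
      ((or (null input) (funcall test input)) input)
    (format out "~a" complaint)))

;; e.g. (prompt-for #'numberp :prompt "Please give me a number: ")
```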

Regards
Friedrich

Coby Beck

Apr 18, 2000

Friedrich Dominicus <Friedrich...@inka.de> wrote in message
news:87wvlv7...@inka.de...

because you asked....

My first reaction is that recursion is not really the best idea here; it really is
(logically) a "loop until" situation. But perhaps that's just a matter of style, and
there is no danger...

Otherwise, I don't do much "read"ing, so I can't help. I do remember, with Allegro 3,
having my (student) program crash if the user hit <enter> without typing anything! Not
the most bullet-proof design :-) (Have you handled that?)

Coby


Rainer Joswig

Apr 18, 2000
In article <87wvlv7...@inka.de>, Friedrich Dominicus
<Friedrich...@inka.de> wrote:

> (defun get-input ()
>   "Prompt user for a number if it is a number
> return it otherwise return nil"
>   (format t "Please give me a number (e for Exit) ")
>   (let ((line (read)))
>     (if (numberp line)
>         line
>         (if (y-or-n-p "Not a number, really exit?")
>             nil
>             (get-input)))))

- READ neither reads a line nor returns one; it reads the printed
  representation of a Lisp object and returns a corresponding Lisp object.
- don't use recursion, use a loop. CL implementations don't need
  to be tail recursive.

> Anyway, I think that this has been done before and I guess even in a portable
> way. Could you give me a hand here?

If you have CLIM do:

(clim:accept 'number)

See the myriad of options CLIM:ACCEPT has:

CLIM:ACCEPT type
            &rest accept-args
            &key stream view default default-type history
                 provide-default prompt prompt-mode display-default
                 query-identifier activation-gestures
                 additional-activation-gestures delimiter-gestures
                 additional-delimiter-gestures insert-default replace-input
                 present-p active-p

Rainer Joswig, ISION Internet AG, Harburger Schlossstrasse 1,
21079 Hamburg, Germany, Tel: +49 40 77175 226
Email: rainer...@ision.de , WWW: http://www.ision.de/

Coby Beck

Apr 18, 2000
How about this? (the only thing that doesn't work is the test for "e")

Coby

(defun get-input ()
  (let (raw-input number)
    (loop until number
          do (format t "Please Enter a Number. e will exit.~%")
             (setf raw-input (read-line))
             (setf number (parse-integer raw-input :junk-allowed t))
             (when (and (null number) (equal raw-input "e")) (return)))
    number))

Friedrich Dominicus <Friedrich...@inka.de> wrote in message
news:87og76f...@inka.de...
> >>>>> "RJ" == Rainer Joswig <rainer...@ision.net> writes:
>
> RJ> In article <87wvlv7...@inka.de>, Friedrich Dominicus
> RJ> <Friedrich...@inka.de> wrote:
>
> >> (defun get-input ()
> >>   "Prompt user for a number if it is a number
> >> return it otherwise return nil"
> >>   (format t "Please give me a number (e for Exit) ")
> >>   (let ((line (read)))
> >>     (if (numberp line)
> >>         line
> >>         (if (y-or-n-p "Not a number, really exit?")
> >>             nil
> >>             (get-input)))))
>
> RJ> - READ neither reads a line nor returns one; it reads the printed
> RJ>   representation of a Lisp object and returns a corresponding Lisp
> RJ>   object.
> Ok, I can use read-line instead.
> RJ> - don't use recursion, use a loop. CL implementations don't need
> RJ>   to be tail recursive.
> Why? I have written the same with a loop but it looks IMO more ugly. But
> maybe it's just that I've written it poorly. Here it is:
> (defun get-input ()
>   "see 'get-input-recursive'"
>   (let ((line))
>     (do ((exit nil))
>         (exit line)
>       (format t "Please give me a number (e for Exit): ")
>       (setf line (read))
>       (if (numberp line)
>           (progn
>             (setf exit t)
>             line)
>           (when (y-or-n-p "Not a number, really exit?")
>             (setf exit t)
>             (setf line nil))))))
>
> >> Anyway, I think that this has been done before and I guess even in a
> >> portable way. Could you give me a hand here?
>
> RJ> If you have CLIM do:
>
> RJ> (clim:accept 'number)
>
> RJ> See the myriad of options CLIM:ACCEPT has:
>
> RJ> CLIM:ACCEPT type
> RJ>             &rest accept-args
> RJ>             &key stream view default default-type history
> RJ>                  provide-default prompt prompt-mode display-default
> RJ>                  query-identifier activation-gestures
> RJ>                  additional-activation-gestures delimiter-gestures
> RJ>                  additional-delimiter-gestures insert-default replace-input
> RJ>                  present-p active-p
>
> I guessed there would be a way ;-)
>
> Regards
> Friedrich
Friedrich Dominicus

Apr 19, 2000
>>>>> "CB" == Coby Beck <cb...@mercury.bc.ca> writes:

CB> How about this? (the only thing that doesn't work is the test for "e")
CB> Coby

CB> (defun get-input ()
CB>   (let (raw-input number)
CB>     (loop until number
CB>           do (format t "Please Enter a Number. e will exit.~%")
CB>              (setf raw-input (read-line))
CB>              (setf number (parse-integer raw-input :junk-allowed t))
CB>              (when (and (null number) (equal raw-input "e")) (return)))
CB>     number))

This is really nicer, thanks

Friedrich


Erik Naggum

Apr 19, 2000
* Friedrich Dominicus <Friedrich...@inka.de>

| I have written the same with a loop but it looks IMO more ugly.

it's quite strange, but people who have been trained to write iteration
using tail recursion tend to write the most amazingly ugly loops, and it
isn't a question of lack of training in writing loops, it's conceptual.
(yet another case of Scheme hurting people.)

| (defun get-input ()
|   (let ((line))
|     (do ((exit nil))
|         (exit line)
|       (format t "Please give me a number (e for Exit): ")
|       (setf line (read))
|       (if (numberp line)
|           (progn
|             (setf exit t)
|             line)
|           (when (y-or-n-p "Not a number, really exit?")
|             (setf exit t)
|             (setf line nil))))))

(format t "please enter a number: ")
(do ((input (read nil nil nil) (read nil nil nil)))
    ((or (null input) (numberp input)) input)
  (format t "not a number, try again: "))

I'd use a different input function than READ, but that's your call.
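
One possible variant along those lines, as a sketch only (GET-NUMBER is an
invented name, not from the post): reading whole lines with READ-LINE and
parsing them with PARSE-INTEGER avoids handing arbitrary reader syntax
typed by the user straight to READ.

```lisp
(defun get-number (&optional (in *standard-input*) (out *standard-output*))
  "Loop until a line parses as an integer; return it.
Returns NIL at end of file."
  (format out "please enter a number: ")
  (do ((line (read-line in nil nil) (read-line in nil nil)))
      ((or (null line) (parse-integer line :junk-allowed t))
       ;; LINE is NIL at EOF; otherwise re-parse the accepted line
       (and line (parse-integer line :junk-allowed t)))
    (format out "not a number, try again: ")))
```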

#:Erik

Friedrich Dominicus

Apr 19, 2000
>>>>> "EN" == Erik Naggum <er...@naggum.no> writes:

EN> * Friedrich Dominicus <Friedrich...@inka.de>
EN> | I have written the same with a loop but it looks IMO more ugly.

EN> it's quite strange, but people who have been trained to write iteration
EN> using tail recursion tend to write the most amazingly ugly loops, and it
EN> isn't a question of lack of training in writing loops, it's conceptual.
EN> (yet another case of Scheme hurting people.)
Thanks, I take it as a compliment. Maybe I do understand a bit of Scheme
;-)

EN> (format t "please enter a number: ")
EN> (do ((input (read nil nil nil) (read nil nil nil)))
EN>     ((or (null input) (numberp input)) input)
EN>   (format t "not a number, try again: "))
I agree this is really nicer and I'll keep it under my pillow ;-)

Thanks
Friedrich

Fernando

Apr 24, 2000
On Tue, 18 Apr 2000 15:25:02 +0200, Rainer Joswig
<rainer...@ision.net> wrote:


>- don't use recursion, use a loop. CL implementations don't need
> to be tail recursive.

Do you know why this was left out of the standard? Coming
from a Scheme background, it sounds a bit weird to me...

TIA


//-----------------------------------------------
// Fernando Rodriguez Romero
//
// frr at mindless dot com
//------------------------------------------------

Joe Marshall

Apr 24, 2000
Fernando <spa...@must.die> writes:

> On Tue, 18 Apr 2000 15:25:02 +0200, Rainer Joswig
> <rainer...@ision.net> wrote:
>
> >- don't use recursion, use a loop. CL implementations don't need
> > to be tail recursive.
>
> Do you know why this was left out of the standard? Coming
> from a Scheme background, it sounds a bit weird to me...

CL has a rich set of iterative constructs, so it was not seen as
necessary to require proper tail recursion. Proper tail recursion can
be difficult to achieve in a number of situations (witness all the
`Scheme' interpreters that *don't* have proper tail recursion). At
the time CL was being designed, it was rare for a Lisp to be properly
tail recursive, and people were getting along fine without it.

That being said, although CL isn't required to be properly tail
recursive, I think *most* CL implementations have the ability to
optimize tail calls between functions, and even more have the ability
to optimize `self' tail calls or tail calls between internal labels
expressions. You may need to (declare (optimize (speed 3))) or
(declare (optimize (debug 0))) or set some internal compiler switch.

(Are there any popular CommonLisps that don't do *at least* self tail
recursion?)
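
The kind of tail call between internal LABELS functions mentioned above can
be illustrated like this (a sketch; PARITY is an invented example, and
whether the calls actually become jumps is implementation-dependent, as the
post says):

```lisp
(defun parity (n)
  ;; EVEN* and ODD* call each other in tail position; under high
  ;; optimization settings many compilers compile these calls as jumps,
  ;; so the stack does not grow with N.  Nothing in the standard
  ;; requires this.
  (declare (optimize (speed 3) (debug 0)))
  (labels ((even* (k) (if (zerop k) :even (odd* (1- k))))
           (odd*  (k) (if (zerop k) :odd  (even* (1- k)))))
    (even* n)))
```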

--
~jrm


Rainer Joswig

Apr 24, 2000
In article <u4s8rs...@alum.mit.edu>, Joe Marshall
<jmar...@alum.mit.edu> wrote:

> (Are there any popular CommonLisps that don't do *at least* self tail
> recursion?)

OpenGenera.

Erik Naggum

Apr 25, 2000
* Fernando <spa...@must.die>

| Do you know why this was left out of the standard? Coming
| from a Scheme background, it sounds a bit weird to me...

coming from a Scheme background, it should be easy for you to explain why
Scheme needs it so badly and what implementation costs are associated
with this requirement. then you should consider whether there are any
needs in Common Lisp that are unfulfilled without it, like there would be
in Scheme without it, and whether they are worth the costs. please note
that proper tail recursion is closely related to first-class continuations.

incidentally, it is never productive to ask why things are "left out" of
a standard unless you know that that was what happened. tail recursion
is simply a non-issue in most languages, and where it matters, it had
been a choice of the compiler writers and optimization settings, not a
hard requirement in a standard.

that said, one reason to leave out random "good ideas" from a standard is
that requiring something that nobody needs doesn't mean people will
abide by it, but it does mean they will have to start to ignore parts of
the standard for _good_ reasons. this is very, very bad when it happens.

#:Erik

Joe Marshall

Apr 25, 2000
Erik Naggum <er...@naggum.no> writes:

> please note that proper tail recursion is closely related to
> first-class continuations.

Perhaps insofar as both are required to be in a compliant Scheme
implementation but are not often specified as a requirement of any
other language.

Technically, tail recursion and _first-class_ continuations have little
to do with each other. An implementation that is properly tail
recursive could not have first-class continuations, and an
implementation that has first-class continuations need not be
tail-recursive. (There are some implementation techniques that give
you both, however.)

--
~jrm

Joe Marshall

Apr 25, 2000

To clarify my previous post, I said:
> An implementation that is properly tail
> recursive could not have first-class continuations.

Meaning `a properly tail-recursive implementation *need not* have
first-class continuations', not `a properly tail-recursive
implementation *can not* have first-class continuations' (which is
patently absurd).

--
~jrm

Erik Naggum

Apr 25, 2000
* Joe Marshall <jmar...@alum.mit.edu>

| Technically, tail recursion and _first-class_ continuations have little
| to do with each other.

not so, they are conceptual twins, and it would be silly to try to
specify one without the other, even if it is technically possible to
ignore the other half of the conceptual solution. at the heart of the
matter is whether a function call frame is heap-based or stack-based, but
this is _not_ an implementation issue alone, it has to do with the whole
concept of the "lifetime" of a function call. tail recursion keeps the
function call alive across function entry boundaries, i.e., entering a
function does not mean you have a fresh call frame. continuations keep
the function call alive across function exit boundaries, i.e., exiting a
function does not mean you dispose of all inferior call frames. both
invalidate the trivial concept of a function call that always exists at
the leaf of a call tree and which can be talked about with a concept of
identity, or in brief: the function call that differs conceptually from
function calls in the popular literature and programming community.

I find it interesting that several for-education languages have flirted
with the heap-based call frame, but none of them have become popular, and
current (very) popular hardware is also "unwilling" to make this run as
fast as the much simpler stack-based call frame. even register-rich
hardware has chosen to make stack-based call frames very efficient, and
heap-based call frames slow and requiring a lot of manual caretaking.
this is not accidental, and it should not be dismissed as "oh, that's
just hardware", which is an expression of the unhealthy arrogance I see
in these for-education languages.

in my view, requiring proper tail recursion and first-class continuations
is evidence of an unwillingness to deal with implementation issues. in a
specification, such unwillingness is just too arrogant to be a good sign,
especially when it is attempted covered up or not even explained.

#:Erik

Chuck Fry

Apr 25, 2000
In article <31656620...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
> I find it interesting that several for-education languages have flirted
> with the heap-based call frame, but none of them have become popular, and
> current (very) popular hardware is also "unwilling" to make this run as
> fast as the much simpler stack-based call frame. even register-rich
> hardware has chosen to make stack-based call frames very efficient, and
> heap-based call frames slow and requiring a lot of manual caretaking.
> this is not accidental, and it should not be dismissed as "oh, that's
> just hardware", which is an expression of the unhealthy arrogance I see
> in these for-education languages.

As a one-time student of computer architecture, this begs the question:
What choices could the CPU designer make to lessen the expense of
heap-based call frames?

-- Chuck
--
Chuck Fry -- Jack of all trades, master of none
chu...@chucko.com (text only please) chuc...@home.com (MIME enabled)
Lisp bigot, car nut, photographer, sometime guitarist and mountain biker
The addresses above are real. All spammers will be reported to their ISPs.

Joe Marshall

Apr 25, 2000
Erik Naggum <er...@naggum.no> writes:

> * Joe Marshall <jmar...@alum.mit.edu>
> | Technically, tail recursion and _first-class_ continuations have little
> | to do with each other.
>
> not so, they are conceptual twins, and it would be silly to try to
> specify one without the other, even if it is technically possible to
> ignore the other half of the conceptual solution.

Not at all. It can be quite difficult to make first-class
continuations work correctly in a language; making proper
tail-recursion work is far easier. Supporting first-class
continuations involves addressing significant performance issues,
there is usually no implementation penalty for providing proper tail
recursion, and there is often a gain.

Anyone who has implemented both would agree that it makes sense to
specify one without the other. In fact, there are several properly
tail recursive Scheme implementations that do not have first-class
continuations, and there are several properly tail-recursive
CommonLisps, also without first-class continuations.

> at the heart of the
> matter is whether a function call frame is heap-based or stack-based, but
> this is _not_ an implementation issue alone, it has to do with the whole
> concept of the "lifetime" of a function call.

This is one way of looking at it, but the notion of a `call frame' and
whether it is `heap-based' or `stack-based' is very much an
implementation issue.

Consider looking at it from a denotational semantic viewpoint. When
you do this you will notice that first-class continuations are
explicitly modeled by reifying the implicit interpreter continuation
into a denotable value. Tail recursion doesn't appear *at all* in the
denotational equations.

If you cannot reify a continuation, you can prove that the lifetime of
the continuation is no shorter than the lifetime of its children, thus
when you implement your language, you can allocate your continuations
in a stack like manner.

What about tail recursion? It doesn't enter into the denotational
semantics because it has no semantic meaning. It is purely an
implementation decision to ensure that you don't needlessly nest
continuations. If the stack were of infinite size, tail recursion
would be unnecessary.

> tail recursion keeps the
> function call alive across function entry boundaries, i.e., entering a
> function does not mean you have a fresh call frame.

I'm not sure what you mean by `alive' here, but it seems you are
conflating two orthogonal issues. An implementation may `re-use' the
topmost stack frame to achieve tail recursion, but you are in a new
body of code, with new arguments. The only thing being `kept alive'
is the stack pointer, but the stack pointer is often re-used by
various functions.

Tail recursion involves freeing the current stack frame when a call to
another function will supply the value to be returned from this call.
i.e., rather than leaving the stack frame on the stack in sort of a
zombie state (it does nothing but pop itself when returned to), we
dispose of the frame immediately.

There are a few mechanisms by which you might do this: if you have a
stack, you can pop it and immediately re-use it, if you have a heap,
you could immediately put the frame on the free-list, or you could
abandon the frame.

But the presence of tail recursion doesn't invalidate the notion of
stack allocation: Every frame is managed in a last-in/first-out
manner.

> continuations keep
> the function call alive across function exit boundaries, i.e., exiting a
> function does not mean you dispose of all inferior call frames.

Yes. This changes your call-record allocation from stack-like to
heap-like. Some implementations achieve this by copying the stack, others
just punt and keep the continuations in the heap.

> both
> invalidate the trivial concept of a function call that always exists at
> the leaf of a call tree and which can be talked about with a concept of
> identity, or in brief: the function call that differs conceptually from
> function calls in the popular literature and programming community.
>

> I find it interesting that several for-education languages have flirted
> with the heap-based call frame, but none of them have become popular, and
> current (very) popular hardware is also "unwilling" to make this run as
> fast as the much simpler stack-based call frame.

Bill Rozas and Jim Miller showed that stack-allocation is simply
faster than heap allocation and it is worthwhile for the compiler to
use stack-based call frames.

> even register-rich
> hardware has chosen to make stack-based call frames very efficient, and
> heap-based call frames slow and requiring a lot of manual caretaking.
> this is not accidental, and it should not be dismissed as "oh, that's
> just hardware", which is an expression of the unhealthy arrogance I see
> in these for-education languages.

Actually, using stack-based call frames requires *more* care than
using a heap based system. This is the primary reason many
educational implementations just punt on using the stack. The heap is
fast enough and is a lot simpler to manage.

I should point out that with the right hardware, stack-oriented
tail-recursion can be made incredibly fast.

> in my view, requiring proper tail recursion and first-class continuations
> is evidence of an unwillingness to deal with implementation issues. in a
> specification, such unwillingness is just too arrogant to be a good sign,
> especially when it is attempted covered up or not even explained.

I'd be happy to explain the rationale behind any facet of either.
--
~jrm


Dorai Sitaram

Apr 25, 2000
In article <31656620...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>* Joe Marshall <jmar...@alum.mit.edu>
>| Technically, tail recursion and _first-class_ continuations have little
>| to do with each other.
>
> not so, they are conceptual twins, and it would be silly to try to
> specify one without the other, even if it is technically possible to
> ignore the other half of the conceptual solution. at the heart of the
> matter is whether a function call frame is heap-based or stack-based, but
> this is _not_ an implementation issue alone, it has to do with the whole
> concept of the "lifetime" of a function call. ...
>
> I find it interesting that several for-education languages have flirted
> with the heap-based call frame, but none of them have become popular, and
> current (very) popular hardware is also "unwilling" to make this run as
> fast as the much simpler stack-based call frame. even register-rich
> hardware has chosen to make stack-based call frames very efficient, and
> heap-based call frames slow and requiring a lot of manual caretaking.
> this is not accidental, and it should not be dismissed as "oh, that's
> just hardware", which is an expression of the unhealthy arrogance I see
> in these for-education languages.

But tail-call optimization doesn't suffer from this
arrogance. Quite to the contrary, it makes more
efficient use of the popular stack-based call frame on
current, popular hardware.

> in my view, requiring proper tail recursion and first-class continuations
> is evidence of an unwillingness to deal with implementation issues. in a
> specification, such unwillingness is just too arrogant to be a good sign,
> especially when it is attempted covered up or not even explained.

Again, I don't see how tail-call optimization
can be tarred with the same brush as first-class
continuations in this regard. TCO is precisely a
willingness to deal with implementation issues. In
fact, I've known semanticists to express dismay that
TCO deals with "grubby" implementation choices.

--d

Barry Margolin

Apr 25, 2000
In article <8e4j47$87o$1...@news.gte.com>,

>Again, I don't see how tail-call optimization
>can be tarred with the same brush as first-class
>continuations in this regard. TCO is precisely a
>willingness to deal with implementation issues. In
>fact, I've known semanticists to express dismay that
>TCO deals with "grubby" implementation choices.

TCO is more than just an implementation detail, as it has an impact on
application design. If you know that you can depend on TCO, you can write
tail-recursive programs, safe in the knowledge that they won't complain of
stack overflow on any proper implementation. If the language doesn't
guarantee TCO, and your application needs to deal with very large data
sets, you will probably have to write it iteratively, even though recursion
might be the more natural style.

This is the reason Scheme requires TCO -- the designers wanted programmers
to be able to express iteration using recursive syntax, and didn't want the
traditional limits of recursion to apply.

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Kragen Sitaker

Apr 26, 2000
In article <Q2lN4.41$my3.836@burlma1-snr2>,
Barry Margolin <bar...@genuity.net> wrote:
>This is the reason Scheme requires TCO [tail-call optimization] -- the
>designers wanted programmers to be able to express iteration using
>recursive syntax, and didn't want the traditional limits of recursion
>to apply.

Knuth's "Structured Programming Using Go To Statements" listed a
variety of control structures that were often useful and were not
included as parts of most languages at the time --- the
N-and-a-half-times loop, the state machine, the binary-tree search, and
exception handling (especially abnormal loop exits), if I remember
correctly.

Some of these (especially N-and-a-half-times and exception handling)
have been added to languages designed since Knuth's article, but others
haven't.

Tail-call optimization allows for natural expression of all of them ---
indeed, any kind of control flow you like --- without paying the space
penalty for recursion.

I used this ability last week to write a tokenizer and a shift-reduce
parser for an APL-like language by hand, one Scheme function per
state. (In retrospect, this was a stupid thing to do; my code is hard
to follow, and I would be much better off with a table-driven lexer and
parser from lex and yacc or some equivalent. But it was fun, and now I
think I understand shift-reduce parsers.)
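
A toy version of that one-function-per-state style, written in CL for
concreteness (COUNT-WORDS is an invented example, not Kragen's code): each
state is a local function that tail-calls the next state, so the whole
machine is just mutually tail-recursive functions.

```lisp
(defun count-words (string)
  "Count space-separated words by running a two-state machine over
STRING: the BETWEEN and IN-WORD states tail-call each other."
  (let ((n (length string)))
    (labels ((between (i count)               ; state: outside a word
               (cond ((>= i n) count)
                     ((char= (char string i) #\Space) (between (1+ i) count))
                     (t (in-word (1+ i) (1+ count)))))
             (in-word (i count)               ; state: inside a word
               (cond ((>= i n) count)
                     ((char= (char string i) #\Space) (between (1+ i) count))
                     (t (in-word (1+ i) count)))))
      (between 0 0))))
```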
--
<kra...@pobox.com> Kragen Sitaker <http://www.pobox.com/~kragen/>
The Internet stock bubble didn't burst on 1999-11-08. Hurrah!
<URL:http://www.pobox.com/~kragen/bubble.html>
The power didn't go out on 2000-01-01 either. :)

Kragen Sitaker

Apr 26, 2000
In article <ug0saq...@alum.mit.edu>,
Joe Marshall <jmar...@alum.mit.edu> wrote:
>Erik Naggum <er...@naggum.no> writes:
>> * Joe Marshall <jmar...@alum.mit.edu>
>> I find it interesting that several for-education languages have flirted
>> with the heap-based call frame, but none of them have become popular, and
>> current (very) popular hardware is also "unwilling" to make this run as
>> fast as the much simpler stack-based call frame.
>
>Bill Rozas and Jim Miller showed that stack-allocation is simply
>faster than heap allocation and it is worthwhile for the compiler to
>use stack-based call frames.

Do you have a URL for a paper on this topic? A casual search for their
names doesn't turn up anything.

I'd like to point out that each step of the Baeza-Yates string search
involves ANDing many pairs of bits, which is inherently more
computation than each step of the Knuth-Morris-Pratt algorithm (which
only requires running a simple state machine). But the Baeza-Yates
algorithm runs at very close to the same speed on modern hardware
(well, most computers, past and present) because that hardware is good
at ANDing many pairs of bits in parallel for reasons unrelated to
Baeza-Yates string searching.

Likewise, floating-point addition is far more complex --- and
inherently slower --- than integer addition. But modern hardware has
huge floating-point units to speed up the floating-point addition,
which make it nearly as fast as integer addition.

"Inherently slower" and "actually slower" can apparently have quite a
lot of distance between them when Intel and Sun spend lots of money
making inherently slower things fast. :)

Erik Naggum

Apr 26, 2000
* ds...@goldshoe.gte.com (Dorai Sitaram)

| But tail-call optimization doesn't suffer from this arrogance. Quite to
| the contrary, it is more efficient use of the popular stack-based call
| frame in current, popular hardware.

I have looked around at implementations, and what I have found is that
the old call-frame is basically unwound to the exact same point where a
return from the function would occur, then a jump to the new function
takes place instead of a call, and a new call frame is set up, just as if
it were a normal function call. the savings is realized in that the
depth of the stack is not increased, but other than that, nothing, and
there is much loss to go with this gain: information about the caller
no longer exists.

| Again, I don't see how tail-call optimization can be tarred with the same
| brush as first-class continuations in this regard. TCO is precisely a
| willingness to deal with implementation issues. In fact, I've known
| semanticists to express dismay that TCO deals with "grubby"
| implementation choices.

the point I'm trying to get across is that _requiring_ proper tail
recursion is different from _optimizing_ for tail calls as an option.
the requirement is very much a semantic issue. I'm all for optimizing
tail calls when _I_ want to, and can make an informed choice. I'm
against a requirement. please understand the difference.

#:Erik

Erik Naggum

Apr 26, 2000
* Joe Marshall <jmar...@alum.mit.edu>

| This is one way of looking at it, but the notion of a `call frame' and
| whether it is `heap-based' or `stack-based' is very much an implementation issue.

I also said "this is not an implementation issue _alone_". therefore, I
would appreciate if you argued against the semantic issues that I think
exist in the design (and requirements) of both of these mechanisms.

| Consider looking at it from a denotational semantic viewpoint. ... Tail
| recursion doesn't appear *at all* in the denotational equations.

function calls don't appear as such in denotational equations, either, so
that's really no surprise.

rewriting a self-tail call to a jump affects recursion. _requiring_ all
tail function calls to be jumps affects the entire system adversely, and
does affect the semantics of the language, even if it should vanish along
with many other important issues in denotational semantics.

| If the stack were of infinite size, tail recursion would be unnecessary.

unfortunately, language design has to be performed in the real world.
contrary to popular belief in some quarters, programming language design
is not pure mathematics. there are _severe_ constraints on what we can
do, and programming language pragmatics do enter the picture. there are
times when you can ignore these issues to great benefit, and there are
times when you can't lest you do something terribly stupid. confusing
these times is perhaps the worst thing you can do when you design a
language. I maintain that Scheme does exactly that with both the proper
tail call _requirement_ and with the continuations, for the same reason:
they have a different function call notion than other languages.

in my view, it doesn't make much sense to _require_ tail call recursion
unless you do something that would otherwise make this expensive to add
as an obvious option. the reason is that there is a huge difference
between a self-tail (recursive) call and a normal call, but a requirement
must make them behave identically. it would make _much_ more sense to
require only that self-tail calls be syntactic sugar for a loop.
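
The equivalence alluded to can be made concrete (a sketch; SUM-TO is an
invented example): a self-tail call does the same work as rebinding the
parameters and jumping back to the top of the function, which CL's DO
expresses directly.

```lisp
(defun sum-to-rec (n acc)
  ;; self-tail-recursive version: the recursive call is the last thing done
  (if (zerop n)
      acc
      (sum-to-rec (1- n) (+ acc n))))

(defun sum-to-loop (n acc)
  ;; the same function with the self-tail call spelled as a DO loop:
  ;; the step forms rebind N and ACC exactly as the recursive call does
  (do ((n n (1- n))
       (acc acc (+ acc n)))
      ((zerop n) acc)))
```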

#:Erik

Erik Naggum

Apr 26, 2000
* chu...@best.com (Chuck Fry)

| As a one-time student of computer architecture, this begs the question:
| What choices could the CPU designer make to lessen the expense of
| heap-based call frames?

predictive cache line acquisition in the heap allocation direction.
the stack is fast because it's nearly always in the fastest cache.
if it weren't, it would be disastrously slow on modern hardware.

#:Erik

Flemming Gram Christensen

Apr 26, 2000
Erik Naggum <er...@naggum.no> writes:

> * chu...@best.com (Chuck Fry)
> | As a one-time student of computer architecture, this begs the question:
> | What choices could the CPU designer make to lessen the expense of
> | heap-based call frames?
>
> predictive cache line acquisition in the heap allocation direction.

How do you mean? The UltraSparc II, the Athlon, and the like
have prediction instructions already.

Flemming


>
> #:Erik

Raymond Wiker

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to

That's for instruction prefetch, not data prefetch, no?

--
Raymond Wiker, Orion Systems AS
+47 370 61150

Erik Naggum

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
* Flemming Gram Christensen <gr...@ull.mjolner.dk>

| How do you mean? The UltraSparc II, the Athlon, and the like
| have prediction instructions already.

I believe these instructions are called "prefetch", which is different
from prediction. "prediction" usually applies to branch prediction, but
I'm talking about a similar automatic heap cache line prefetch or priming
(when the memory is known to be written first) when a function call is
coming up in the instruction stream.

these instructions are fun to watch do weird things with one's ideas
about cache line costs, but using them requires their actual presence and
some computation for the effect you get for free from reusing the same
memory the way a stack works. most stacks are amazingly shallow and the
cache hit rates for stacks (including temporaries in call frames) are
typically >99.9%. to get such good hit rates for a heap allocation
scheme that walks over fresh memory while allocating and causes much
scattering of allocated call frames unless the heap is optimized like a
stack, you have to do a huge amount of more or less explicit cache line
prefetching in the absence of lots of prefetch instructions that add to
the computational load without necessarily having any effect.

#:Erik

Flemming Gram Christensen

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
Raymond Wiker <ray...@orion.no> writes:

> Flemming Gram Christensen <gr...@ull.mjolner.dk> writes:

> > How do you mean? The UltraSparc II, the Athlon, and the like
> > have prediction instructions already.
>

> That's for instruction prefetch, not data prefetch, no?

No. Sparc has a prefetch instruction for data. You can
use BNP (branch never predicted) to prefetch instructions.

Athlons also have data prefetch in several variants (write or read only).
Please look at http://www.amd.com/products/cpg/athlon/techdocs/index.html
if you are interested.

Regards Flemming

Joe Marshall

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
kra...@dnaco.net (Kragen Sitaker) writes:

> In article <ug0saq...@alum.mit.edu>,
> Joe Marshall <jmar...@alum.mit.edu> wrote:

> >Erik Naggum <er...@naggum.no> writes:
> >> * Joe Marshall <jmar...@alum.mit.edu>

> >> I find it interesting that several for-education languages have flirted
> >> with the heap-based call frame, but none of them have become popular, and
> >> current (very) popular hardware is also "unwilling" to make this run as
> >> fast as the much simpler stack-based call frame.
> >
> >Bill Rozas and Jim Miller showed that stack-allocation is simply
> >faster than heap allocation and it is worthwhile for the compiler to
> >use stack-based call frames.
>

> Do you have a URL for a paper on this topic? A casual search for their
> names doesn't turn up anything.
>

You can download it from http://www.ai.mit.edu/pubs.html
Here is the title and abstract:

Garbage Collection is Fast, but a Stack is Faster
By James S. Miller and Guillermo J. Rozas

AI memo 1462, March 1994, 37 pages

Prompted by claims that garbage collection can outperform stack
allocation when sufficient physical memory is available, we present a
careful analysis and set of cross-architecture measurements comparing
these two approaches for the implementation of continuation (procedure
call) frames. When the frames are allocated on a heap they require
additional space, increase the amount of data transferred between
memory and registers, and, on current architectures, require more
instructions. We find that stack allocation of continuation frames
outperforms heap allocation in some cases by almost a factor of
three. Thus, stacks remain an important implementation technique for
procedure calls, even in the presence of an efficient, compacting
garbage collector and large amounts of memory.

They use a very reduced model of stack and heap allocation that shows
that the irreducible minimum amount of overhead for stack allocation
is smaller than that for heap allocation. In addition, they benchmark
the results on a real machine.


> "Inherently slower" and "actually slower" can apparently have quite a
> lot of distance between them when Intel and Sun spend lots of money
> making inherently slower things fast. :)

If only they didn't seem to spend so much effort on making inherently
fast things slower.

--
~jrm

Joe Marshall

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
Erik Naggum <er...@naggum.no> writes:

> * Joe Marshall <jmar...@alum.mit.edu>


> | This is one way of looking at it, but the notion of a `call frame' and
> | whether it is `heap-based' or `stack-based' is very much implementation.
>

> I also said "this is not an implementation issue _alone_". therefore, I
> would appreciate if you argued against the semantic issues that I think
> exist in the design (and requirements) of both of these mechanisms.

I am. I assert that the _semantics_ of the language are unchanged
when tail-recursion is present: the results of *all* computations
_that produce results_ are identical under both; thus there is no
semantic issue at all.

In a particular _implementation_ of a language, a properly tail
recursive implementation will produce correct results for all programs
which produce results under a non-tail-recursive implementation.
Furthermore, there exist programs that will not run to completion in a
non-tail recursive implementation (because of stack overflow) that
will produce a result in a tail-recursive implementation.

Behaviorally, yes, they differ, but this is a strictly implementation
issue, not a semantic one.

> | Consider looking at it from a denotational semantic viewpoint. ... Tail


> | recursion doesn't appear *at all* in the denotational equations.
>

> function calls don't appear as such in denotational equations, either, so
> that's really no surprise.

Function calls do appear in the denotational equations. I draw your
attention to page 41 of R5RS, second column, second equation
(which I won't reproduce here because it contains too many greek
letters).

It maps expressions of the form (<expr 0> <expr>*) to mean function
invocation.

> rewriting a self-tail call to a jump affects recursion. _requiring_ all
> tail function calls to be jumps affects the entire system adversely, and
> does affect the semantics of the language, even if it should vanish along
> with many other important issues in denotational semantics.

Requiring all tail function calls to be jumps affects the system in
the following manner:

1. Some programs that will not run without tail recursion will run.

2. Performance may be increased or reduced. (It is often increased
when locality is improved, and often decreased when the
underlying language makes it difficult to discard vacuous
continuation frames, as when you compile scheme to C.)

3. Debugging becomes more difficult because backtrace information is
lost.

However, it has no effect on the results of the computation: the
answers will always agree. It also has no adverse effect on the range
of runnable programs. All programs that ran without tail recursion
will continue to run.
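Point 1 can be made concrete with a short Scheme sketch (the COUNT
procedure here is a hypothetical illustration, not code from the thread):

```scheme
;; under a properly tail-recursive implementation the self-call below
;; is compiled as a jump, so the loop runs in constant stack space;
;; without tail-call support each call pushes a frame, and a large
;; enough LIMIT overflows the stack
(define (count n limit)
  (if (= n limit)
      n
      (count (+ n 1) limit)))

(count 0 10000000)  ; => 10000000 with proper tail calls
```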

> | If the stack were of infinite size, tail recursion would be unnecessary.
>

> unfortunately, language design has to be performed in the real world.
> contrary to popular belief in some quarters, programming language design
> is not pure mathematics. there are _severe_ constraints on what we can
> do, and programming language pragmatics do enter the picture. there are
> times when you can ignore these issues to great benefit, and there are
> times when you can't lest you do something terribly stupid. confusing
> these times is perhaps the worst thing you can do when you design a
> language. I maintain that Scheme does exactly that with both the proper
> tail call _requirement_ and with the continuations, for the same reason:
> they have a different function call notion than other languages.

The issue of pragmatics is a much thornier one. I agree that there
are pragmatic costs and benefits to requiring proper tail recursion
and first-class continuations. The cost of proper tail recursion in
those implementations that compile to other `high-level' languages
such as C or Java is often too high to justify. The cost of
first-class continuations is rarely justified, but in those cases that
can make good use of them, they are an incredibly powerful tool.

I don't advocate requiring proper tail recursion *or* first-class
continuations in CommonLisp, but I would greatly favor a CommonLisp
implementation that allowed me to turn on proper tail recursion. I've
never seen a CommonLisp with first-class continuations, and I rarely
use them when I am programming in Scheme.

Scheme wasn't designed as a `one-size-fits-all' language, so it isn't
surprising that many people find it unsuitable for whatever issue they
are currently dealing with.

> in my view, it doesn't make much sense to _require_ tail call recursion
> unless you do something that would otherwise make this expensive to add
> as an obvious option. the reason is that there is a huge difference
> between a self-tail (recursive) call and a normal call, but a requirement
> must make them behave identically. it would make _much_ more sense to
> require only that self-tail calls be syntactic sugar for a loop.

Certainly if full proper tail-recursion is too expensive to implement,
but no tail-recursion whatsoever is undesirable, self-tail-recursion
is an attractive alternative. Indeed, many Scheme implementations
only provide self-tail recursion.

--
~jrm

Erik Naggum

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
* Joe Marshall <jmar...@alum.mit.edu>

| I am. I assert that the _semantics_ of the language are unchanged
| when tail-recursion is present: the results of *all* computations
| _that produce results_ are identical under both; thus there is no
| semantic issue at all.

this is a very narrow definition of "semantics", and I don't think
it's useful to discuss such definitions, so suffice to say that I do
not respect the view that the only semantics are those that can be
described by denotational semantics.

| Certainly if full proper tail-recursion is too expensive to implement,
| but no tail-recursion whatsoever is undesirable, self-tail-recursion
| is an attractive alternative. Indeed, many Scheme implementations
| only provide self-tail recursion.

I still have the impression that I'm not getting through with the
distinction between _requiring_ a particular implementation of tail
calls and _allowing_ it as and when desired. there is _nothing_
whatsoever that affects "allows" if one argues against "requires".
I have no interest in countering arguments against "allows", as I
have explicitly argued _for_ "allow", already.

among the reasons I'm no fan of Scheme is that the _requirements_ in
its language design are very serious mistakes, and they affect the
way Scheme people think about language design so adversely that it
is sometimes impossible to penetrate the view that "if it isn't
required in the standard, you can't trust smart people to do it".
e.g, I don't see anyone (except Scheme people who knock down
strawman arguments) even bring up "no tail-recursion whatsoever".

it's a cheap argument, but it accurately reflects my sentiments
towards most of what distinguishes Scheme from everything else: if
it's so smart, other smart people will want to do it, but if they
don't, maybe it wasn't so smart to begin with. this is the best
simple answer to the sentiment nearly unique to Scheme: "why does
language X _lack_ random feature Y (from Scheme)", which is raised
in nearly those words in c.l.l from time to time.

as an even cheaper argument, I'd summarize Scheme by saying that all
its good ideas have been absorbed elsewhere and the only stuff left
to brag about is that which wasn't smart to begin with, which tends
to reinforce the silliness of the Scheme superiority.

in contrast to this, Common Lisp doesn't suffer from a need to feel
superior, mostly because it has so much "impure" and historic stuff
in it that any "superiority" would be ridiculous, anyway, but it has
a unique blend of pragmatic and elegant design that means you can
always attack specifics, but you can't attack the design without
implicitly or explicitly arguing for a _worse_ design. that's why
the language specification doesn't _require_ a smart feature that
isn't smart across the board and always, but also doesn't disallow
it (like some other languages do through sheer incompetence).

#:Erik

Dorai Sitaram

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
In article <u66t4r...@alum.mit.edu>,

Joe Marshall <jmar...@alum.mit.edu> wrote:
>
>Certainly if full proper tail-recursion is too expensive to implement,
>but no tail-recursion whatsoever is undesirable, self-tail-recursion
>is an attractive alternative.

But self-tail-recursion is not so necessary when you
already have iterative constructs. Indeed, a little
basic macrology can be used to give your
iterative patterns a self-tail-recursive look, as a
thread some time back discussed.
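One form such macrology can take, sketched in Common Lisp (NLET is a
hypothetical macro; the earlier thread Dorai refers to is not
reproduced here):

```lisp
;; Named-LET lookalike: a call to NAME in tail position restarts the
;; loop with PSETQ and GO instead of making a function call, so the
;; self-tail-recursive *look* is guaranteed to compile as iteration.
(defmacro nlet (name bindings &body body)
  (let ((vars (mapcar #'first bindings))
        (start (gensym "START")))
    `(block ,name
       (let ,bindings
         (macrolet ((,name (&rest args)
                      (list 'progn
                            (cons 'psetq (mapcan #'list ',vars args))
                            (list 'go ',start))))
           (tagbody
              ,start
              (return-from ,name (progn ,@body))))))))

;; reads like self-tail-recursion, runs as a plain loop:
(nlet iter ((i 0) (acc 0))
  (if (> i 10)
      acc
      (iter (1+ i) (+ acc i))))  ; => 55
```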

Cross-tail-recursion, OTOH, permits a pass-the-baton
style way of writing procedures that would be
prohibitively expensive to run as is in a non-TCO
world, and very difficult (impossible even?) to
rewrite iteratively. This is why TCO is appealing.
Self-TCO is small potatoes and not worth doing
implementational gyrations over.

>Indeed, many Scheme implementations
>only provide self-tail recursion.

In Scheme, this half a loaf is indeed better than no
bread, because there is no alternative iterative
construct to take up the slack.

--d

Kragen Sitaker

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
In article <8e7986$1d3$1...@news.gte.com>,

Dorai Sitaram <ds...@goldshoe.gte.com> wrote:
>Cross-tail-recursion, OTOH, permits a pass-the-baton
>style way of writing procedures that would be
>prohibitively expensive to run as is in a non-TCO
>world, and very difficult (impossible even?) to
>rewrite iteratively. This is why TCO is appealing.
>Self-TCO is small potatoes and not worth doing
>implementational gyrations over.

Pass-the-baton-style programs can be written as a case statement inside
a loop. It doesn't help the readability, that's for sure, but it can
be done.
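For concreteness, here is one way that case-in-a-loop shape might look
in Common Lisp (a hypothetical EVEN-LENGTH-P, mirroring a two-procedure
baton pass with an explicit state variable):

```lisp
;; the "baton" is the STATE variable; each iteration dispatches on it
;; instead of tail-calling the next procedure
(defun even-length-p (items)
  (let ((state :even))
    (loop
      (when (null items)
        (return (eq state :even)))
      (pop items)
      (ecase state
        (:even (setq state :odd))
        (:odd  (setq state :even))))))

;; (even-length-p '(a b)) => T, (even-length-p '(a b c)) => NIL
```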

Joe Marshall

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
Erik Naggum <er...@naggum.no> writes:

> * Joe Marshall <jmar...@alum.mit.edu>


> | I am. I assert that the _semantics_ of the language are unchanged
> | when tail-recursion is present: the results of *all* computations
> | _that produce results_ are identical under both; thus there is no
> | semantic issue at all.
>

> this is a very narrow definition of "semantics", and I don't think
> it's useful to discuss such definitions, so suffice to say that I do
> not respect the view that the only semantics are those that can be
> described by denotational semantics.

Denotational semantics, axiomatic semantics, and operational semantics
all deal with assigning meaning to programs. Whether a particular
implementation is tail recursive or not does not change the semantic
meaning of a program.

You are certainly free to broaden the definition of `semantics' to
include the concrete behavior of particular implementation techniques,
but you will run the risk of confusing people who are used to the
standard definition of semantics.

> | Certainly if full proper tail-recursion is too expensive to implement,
> | but no tail-recursion whatsoever is undesirable, self-tail-recursion
> | is an attractive alternative. Indeed, many Scheme implementations
> | only provide self-tail recursion.
>

> I still have the impression that I'm not getting through with the
> distinction between _requiring_ a particular implementation of tail
> calls and _allowing_ it as and when desired.

I understand completely. Few people would argue that allowing an
option under user control is less desirable than omitting it altogether.

Kragen Sitaker

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
In article <uln20p...@alum.mit.edu>,

Joe Marshall <jmar...@alum.mit.edu> wrote:
>I understand completely. Few people would argue that allowing an
>option under user control is less desirable than omitting it altogether.

In the case of tail-call optimization, I agree in some cases.

In general, I disagree. Allowing only useful options is much better
than allowing all possible options; options contain bugs and require
time to learn about.

In the tail-call optimization case, I can't imagine why you'd want to
give the user the option to turn off tail-call optimization if it were
only a win for your particular implementation. (I think most
implementations that compile directly to machine code and can do
tail-call optimization lose nothing by doing it.)

In the case of allowing an option under *implementor* control, I'm
still not sure I agree. Every degree of freedom for a language
implementor is something language users must contend with changing
unpredictably.

(I have no comment on whether requiring tail-call optimization in a
language standard is good or bad.)

Erik Naggum

unread,
Apr 26, 2000, 3:00:00 AM4/26/00
to
* kra...@dnaco.net (Kragen Sitaker)

| In the tail-call optimization case, I can't imagine why you'd want to
| give the user the option to turn off tail-call optimization if it were
| only a win for your particular implementation.

consider full and accurate backtraces during debugging, profiling
that reports correct call trees, and simply correct error messages.

programmers prefer to be able to control optimization levels for a
number of reasons. Scheme doesn't have a standard way to express
such preferences, either. in other words, Scheme is not an option.
(sorry, I had to.)

#:Erik

Espen Vestre

unread,
Apr 27, 2000, 3:00:00 AM4/27/00
to
Joe Marshall <jmar...@alum.mit.edu> writes:

> Denotational semantics, axiomatic semantics, and operational semantics
> all deal with assigning meaning to programs. Whether a particular
> implementation is tail recursive or not does not change the semantic
> meaning of a program.

Because of the dynamic and interactive nature of Common Lisp programming,
one can very well question the usability of this view of programming
language semantics.

One could even argue that it doesn't fit Common Lisp at all, since
e.g. the behavior of modifying the symbol-function of a function will
depend on the compilation technique.

(There's a parallel in natural language semantics: Those who demand a
strictly denotational view on semantics usually end up with putting a
lot of burden on the field of _pragmatics_. In several theories
there has been a counter-reaction to this: Objects usually considered
elements of pragmatics are integrated into the semantic theory
itself, like e.g. the states of mind of the speaker and the listener.)
--
(espen)

Rob Warnock

unread,
Apr 27, 2000, 3:00:00 AM4/27/00
to
Erik Naggum <er...@naggum.no> wrote:
+---------------

| cache hit rates for stacks (including temporaries in call frames) are
| typically >99.9%. to get such good hit rates for a heap allocation
| scheme that walks over fresh memory while allocating and causes much
| scattering of allocated call frames unless the heap is optimized like a
| stack...
+---------------

IIRC from my reading of the GC literature, Ungar introduced the notion
of a fixed "nursery" area [that may not have been his term for it] for
allocation in generational copying collectors, with the size chosen to
be the size of the secondary data cache (or slightly smaller). When you
fill up the nursery area, you do a minor collection, copying all of the
live data out into the main heap, then immediately reusing the *same*
nursery area for subsequent allocation. This causes the nursery area
to stay "hot" in the cache, much like a stack would.

Using such techniques, people [Appel, in particular] have claimed that
heap allocation can be as cheap as stack allocation, on average. (Yes,
I know others have disputed that as well.)


-Rob

p.s. Conversely, Henry Baker's "CONS Should Not CONS Its Arguments, Part II:
Cheney on the M.T.A." <URL:ftp://ftp.netcom.com/pub/hb/hbaker/CheneyMTA.html>
shows how to use the normal C stack *as* the nursery area of a generational
collector -- by writing in a CPS style that "never returns", allocating in
normal C stack-local variables, and limiting stack growth by periodically
combining a "longjmp" to trim the stack with a minor garbage collection.
Again, you get the same result: a copying collector that keeps its
generation-zero allocation area hot in the cache.

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Rob Warnock

unread,
Apr 27, 2000, 3:00:00 AM4/27/00
to
Kragen Sitaker <kra...@dnaco.net> wrote:
+---------------

| Dorai Sitaram <ds...@goldshoe.gte.com> wrote:
| >Cross-tail-recursion, OTOH, permits a pass-the-baton
| >style way of writing procedures...

|
| Pass-the-baton-style programs can be written as a case statement inside a
| loop. It doesn't help the readability, that's for sure, but it can be done.
+---------------

Sure, and all subroutine calls and returns and random GOTOs can be converted
into one huge case statement inside a loop, too [with a manual stack to hold
the case indices for the returns]. I believe it was Bill Wulf who pointed
this out decades ago in a paper titled "Global Variable Considered Harmful".
That still doesn't make me want to do it very often.

The problem is that readability *does* matter, and both pass-the-baton-style
tail-call-optimized programs and CPS-style programs have their place, where
they're the most natural & readable style of doing some task. In particular,
having written a couple of lexical analyzers in Scheme in the pass-the-baton
style, I find that I really like it for that sort of thing. Sadly, it doesn't
really translate well to non-TCO environments.

Does that mean that I will always choose to accept the *other* pains of
writing in Scheme just for the benefits of TCO? Not at all, if by using
some other environment (such as CL) I can gain *more* convenience and/or
readability for the whole task than what's been lost by having to rewrite
some pass-the-baton or CPS pieces. But the presence of cross-routine TCO
*is* one (but just one) of the considerations that affects my choice of
language for a given task...


-Rob

Paolo Amoroso

unread,
Apr 27, 2000, 3:00:00 AM4/27/00
to
On 26 Apr 2000 17:32:54 GMT, ds...@goldshoe.gte.com (Dorai Sitaram) wrote:

> But self-tail-recursion is not so necessary when you

If I get things right, self-tail-recursion is something similar to:

(define (myfun ...)
.
.
.
(myfun ...))


> Cross-tail-recursion, OTOH, permits a pass-the-baton

What is cross-tail-recursion? Indirect recursion via a call to another
function in tail position?


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Dorai Sitaram

unread,
Apr 27, 2000, 3:00:00 AM4/27/00
to
In article <MS4IOVlssobImJ...@4ax.com>,

Paolo Amoroso <amo...@mclink.it> wrote:
>On 26 Apr 2000 17:32:54 GMT, ds...@goldshoe.gte.com (Dorai Sitaram) wrote:
>
>> But self-tail-recursion is not so necessary when you
>
>If I get things right, self-tail-recursion is something similar to:
>
> (define (myfun ...)
> .
> .
> .
> (myfun ...))

Yes.

>> Cross-tail-recursion, OTOH, permits a pass-the-baton
>
>What is cross-tail-recursion? Indirect recursion via a call to another
>function in tail position?

Cross-tail-recursion is just general tail-recursion.
I was informally using "cross" to cover the
cases not covered by the more limited
self-tail-recursion.

If procedure A's last call is to proc B_1, whose
last call is to proc B_2, etc., then these calls
to the B_i can each afford to not store information
about their immediate caller, as they can directly
return to the caller of proc A.

This suggests a programming style whereby each proc
takes care of what should be called next, thus
obviating the need for a centralized clearinghouse
(like Kragen's suggestion) whose duty is to wait for
returning procs and then dispatch on "what to do
next". It is a way of developing an overall order by
having each element take care of just its private
contribution to that order by checking up with its
neighbor. A handy decentralization tactic much used
by Ms Nature and human imitations of her (e.g., life
and Life).
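In the small, the pass-the-baton style looks like this in Scheme (a
hypothetical EVEN-LENGTH?/ODD-LENGTH? pair, not code from the thread):

```scheme
;; each procedure hands control to the other in tail position; under
;; TCO the whole chain runs in constant stack space with no central
;; dispatcher deciding "what to do next"
(define (even-length? items)
  (if (null? items)
      #t
      (odd-length? (cdr items))))

(define (odd-length? items)
  (if (null? items)
      #f
      (even-length? (cdr items))))

(even-length? '(a b c d))  ; => #t
```

Without TCO the same definitions still compute the right answers, but
each baton pass consumes a stack frame, which is exactly the pragmatic
difference described above.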

Of course such a decentralized program will run
semantically correctly "as is" in a non-TCO
world too, but the pragmatics differ considerably.
(I'm using the terms "semantics" and "pragmatics" in
Joe Marshall's sense; pl translate appropriately if
these terms mean something else to you.) In a TCO
world it will incur no stack penalty. In a non-TCO
world it will, and that may be enough in many
situations to make this approach a non-option.

As my example limns, cross-tail-recursion, unlike
self-tail-recursion, may not involve recursion at all,
in the sense of a procedure calling itself, directly
or indirectly. That's why I've tried to be consistent
in using "tail-call optimization" instead of
"(proper) tail-recursion". I guess I should
have used "cross-tail-call-optimization", but I didn't
want to frighten the horse.

--d

David McClain

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
Actually, I have implemented a language that is tail-pure and at the same
time has no concept of continuations. I have also implemented a version of
this same language with continuations, and I found it to run about 30%
slower. Both were based on OCaml, an ML language that is quite efficient,
uses no stack-based frames except for occasional and rare exception
trapping frames, and runs at near C speeds or better.

My point is only that first class continuations can be nice to have, but are
not necessary for the production of high-performance tail-pure language
design.

- DM

Erik Naggum <er...@naggum.no> wrote in message
news:31656620...@naggum.no...


> * Joe Marshall <jmar...@alum.mit.edu>
> | Technically, tail recursion and _first-class_ continuations have little
> | to do with each other.
>
> not so, they are conceptual twins, and it would be silly to try to
> specify one without the other, even if it is technically possible to
> ignore the other half of the conceptual solution. at the heart of the
> matter is whether a function call frame is heap-based or stack-based, but
> this is _not_ an implementation issue alone, it has to do with the whole
> concept of the "lifetime" of a function call. tail recursion keeps the
> function call alive across function entry boundaries, i.e., entering a
> function does not mean you have a fresh call frame. continuations keep
> the function call alive across function exit boundaries, i.e., exiting a
> function does not mean you dispose of all inferior call frames. both
> invalidate the trivial concept of a function call that always exists at
> the leaf of a call tree and which can be talked about with a concept of
> identity, or in brief: the function call that differs conceptually from
> function calls in the popular literature and programming community.
>
> I find it interesting that several for-education languages have flirted
> with the heap-based call frame, but none of them have become popular, and
> current (very) popular hardware is also "unwilling" to make this run as
> fast as the much simpler stack-based call frame. even register-rich
> hardware has chosen to make stack-based call frames very efficient, and
> heap-based call frames slow and requiring a lot of manual caretaking.
> this is not accidental, and it should not be dismissed as "oh, that's
> just hardware", which is an expression of the unhealthy arrogance I see
> in these for-education languages.
>
> in my view, requiring proper tail recursion and first-class continuations
> is evidence of an unwillingness to deal with implementation issues. in a
> specification, such unwillingness is just too arrogant to be a good sign,
> especially when it is attempted covered up or not even explained.
>
> #:Erik

David McClain

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to

Rob Warnock <rp...@rigden.engr.sgi.com> wrote in message
news:8e95fu$4e6r2$1...@fido.engr.sgi.com...

> Erik Naggum <er...@naggum.no> wrote:
> +---------------
> | cache hit rates for stacks (including temporaries in call frames) are
> | typically >99.9%. to get such good hit rates for a heap allocation
> | scheme that walks over fresh memory while allocating and causes much
> | scattering of allocated call frames unless the heap is optimized like a
> | stack...
> +---------------
>
> IIRC from my reading of the GC literature, Ungar introduced the notion
> of a fixed "nursery" area [that may not have been his term for it] for
> allocation in generational copying collectors, with the size chosen to
> be the size of the secondary data cache (or slightly smaller). When you
> fill up the nursery area, you do a minor collection, copying all of the
> live data out into the main heap, then immediately reusing the *same*
> nursery area for subsequent allocation. This causes the nursery area
> to stay "hot" in the cache, much like a stack would.
>
> Using such techniques, people [Appel, in particular] have claimed that
> heap allocation can be as cheap as stack allocation, on average. (Yes,
> I know others have disputed that as well.)

I, for one, can attest to the truth of this claim. This is what OCaml does,
and it runs quite fast; on the order of 0.7 x C-speed for slower routines,
to 1.8 x C-speed for faster routines.

>
>
> -Rob
>
> p.s. Conversely, Henry Baker's "CONS Should Not CONS Its Arguments,
> Part II: Cheney on the M.T.A."
> <URL:ftp://ftp.netcom.com/pub/hb/hbaker/CheneyMTA.html>
> shows how to use the normal C stack *as* the nursery area of a generational
> collector -- by writing in a CPS style that "never returns", allocating in
> normal C stack-local variables, and limiting stack growth by periodically
> combining a "longjmp" to trim the stack with a minor garbage collection.
> Again, you get the same result: a copying collector that keeps its
> generation-zero allocation area hot in the cache.
>

David McClain

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to

Joe Marshall <jmar...@alum.mit.edu> wrote in message
news:u66t4r...@alum.mit.edu...

> Erik Naggum <er...@naggum.no> writes:
>
> > * Joe Marshall <jmar...@alum.mit.edu>
> I don't advocate requiring proper tail recursion *or* first-class
> continuations in CommonLisp, but I would greatly favor a CommonLisp
> implementation that allowed me to turn on proper tail recursion. I've
> never seen a CommonLisp with first-class continuations, and I rarely
> use them when I am programming in Scheme.
>
I have found continuations exceedingly helpful in creating GUI based
programs, where continuations can be used to allow a more "procedural" style
of programming WRT system events, instead of requiring that an event-loop
take dominant position and thereby requiring that one invert the
"procedural" logic of the program.

- DM


at ncal point verio point com x@x.x tom

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
Courageous <jkra...@san.rr.com> writes:
> > But are you just using continuations here to hand-craft some sort
> > of threads? Couldn't you do the same thing *without* first-class
> > continuations if the implementation provided sufficiently-flexible
> > Lisp-level threads?
>
> Being a once-been Motif/X11 expert, I can tell you without a
> shadow of a doubt that neither threads nor continuations are
> required to achieve procedural/"synchronous" call functionality
> in an event-driven venue. It's quite possible, for example, for
> the called function to temporarily begin a slave event loop.
> Using this technique, I wrote many functions ala
> AskUserToInputAList(), which had the apparent effect of synchronicity
> in this otherwise asynchronous environment. Spinning up a thread
> just to handle that would have been expensive & spurious, IMO.

As a user of programs like that, I must say that I often find
their behavior quite annoying. Maybe your programs did better, but
in that style, large chunks of the UI functionality are often
unavailable while a particular input action is going on.

Granted, C/C++ threads are too heavyweight to do this sort of thing,
but in languages like Lisp, threads can be extremely lightweight and
cheap, to the point where every UI element has its own thread
associated with it and most of the UI behavior is implemented by
message passing.

Tom.
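A minimal sketch of that per-element-thread, message-passing style, in Python for illustration (Queue stands in for the mailbox, and real lightweight Lisp threads would be far cheaper): application code touches no locks, only messages.

```python
# Each UI element is a thread reading from its own mailbox; its whole
# behavior is a message loop. All names are invented for the sketch.

import threading
import queue

def button(mailbox, out):
    while True:
        msg = mailbox.get()
        if msg == "quit":
            break
        if msg == "click":
            out.put("button: clicked")

mailbox, out = queue.Queue(), queue.Queue()
t = threading.Thread(target=button, args=(mailbox, out))
t.start()
mailbox.put("click")     # send a message, don't call a callback
mailbox.put("quit")
t.join()
received = out.get()
```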

David McClain

Apr 30, 2000, 3:00:00 AM

Rob Warnock <rp...@rigden.engr.sgi.com> wrote in message
news:8eitcf$5nul3$1...@fido.engr.sgi.com...
> David McClain <dmcc...@azstarnet.com> wrote:
> +---------------

> | I have found continuations exceedingly helpful in creating GUI based
> | programs, where continuations can be used to allow a more "procedural" style
> | of programming WRT system events, instead of requiring that an event-loop
> | take dominant position and thereby requiring that one invert the
> | "procedural" logic of the program.
> +---------------

>
> But are you just using continuations here to hand-craft some sort of threads?
> Couldn't you do the same thing *without* first-class continuations if the
> implementation provided sufficiently-flexible Lisp-level threads?
>
Yes, I suppose one could use threads here... I found that using
continuations allowed one to proceed as though the program had been written
in the old-fashioned "do-some-work, request-some-input, do-some-more-work,
..." style, using continuations rather like coroutine calls. Threads would
also work here, as evidenced by the Harlequin LispWorks environment, but
they require conscious use of threading and of inter-thread coordination
and communication, e.g., locks, mutexes, channels, etc.

However, to play the devil's advocate, I have also had the experience of
using continuations in a manner that permits the reestablishment of an old
environment, long after that environment had expired. What happened in the
interim were side effects that could not be unwound before restarting the
continuation, and disaster struck in several instances. These side effects
so altered the global "world" environment that it made no sense to restart
the continuation's environment. So in this sense, one has "continuations
considered harmful..."

- DM


David McClain

Apr 30, 2000, 3:00:00 AM
Along the line of "continuations considered harmful..." I have often
wondered about the lack of first-class continuations in CL. Scheme has a
primitive (I can't recall the name) that rewinds the stack (if necessary) to
reinstate a continuation.

I have the utmost respect for the collective judgement of the team that
created the CLTL2 and ANSI standards. And so I have wondered whether this
problem, of continuations being reinstated after the global "world" state
has been irreversibly changed, was in the minds of this group when they
decided to leave first-class continuations out of CL.

Perhaps in their collective judgement this was a problem to be left to
individual implementors of applications that would need such a capability --
one to be solved in some other more CL-like manner?

- DM


David McClain <dmcc...@azstarnet.com> wrote in message
news:sgq5eh3...@corp.supernews.com...

Rob Warnock

May 1, 2000, 3:00:00 AM
David McClain <dmcc...@azstarnet.com> wrote:
+---------------
| I have found continuations exceedingly helpful in creating GUI based
| programs, where continuations can be used to allow a more "procedural" style
| of programming WRT system events, instead of requiring that an event-loop
| take dominant position and thereby requiring that one invert the
| "procedural" logic of the program.
+---------------

But are you just using continuations here to hand-craft some sort of threads?
Couldn't you do the same thing *without* first-class continuations if the
implementation provided sufficiently-flexible Lisp-level threads?


-Rob

Courageous

May 1, 2000, 3:00:00 AM
Rob Warnock wrote:
>
> David McClain <dmcc...@azstarnet.com> wrote:
> +---------------
> | I have found continuations exceedingly helpful in creating GUI based
> | programs, where continuations can be used to allow a more "procedural" style
> | of programming WRT system events, instead of requiring that an event-loop
> | take dominant position and thereby requiring that one invert the
> | "procedural" logic of the program.
> +---------------
>
> But are you just using continuations here to hand-craft some sort of threads?
> Couldn't you do the same thing *without* first-class continuations if the
> implementation provided sufficiently-flexible Lisp-level threads?

Being a once-been Motif/X11 expert, I can tell you without a
shadow of a doubt that neither threads nor continuations are
required to achieve procedural/"synchronous" call functionality
in an event-driven venue. It's quite possible, for example, for
the called function to temporarily begin a slave event loop.
Using this technique, I wrote many functions ala
AskUserToInputAList(), which had the apparent effect of synchronicity
in this otherwise asynchronous environment. Spinning up a thread
just to handle that would have been expensive & spurious, IMO.

(whether or not a *particular* GUI environment allows you this level
of flexibility is, I suppose, something you'd have to determine for
that environment).

C/
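The slave-event-loop trick reads roughly like this hypothetical Python sketch: a blocking-style ask_user() that, instead of returning control to the main loop, pumps the shared event queue itself until the answer it wants arrives. All names (ask_user, dispatch, the event kinds) are invented for illustration, AskUserToInputAList-style.

```python
# A blocking-style prompt implemented as a nested ("slave") event loop
# over the application's one event queue.

from collections import deque

events = deque()         # the application's event queue
log = []

def dispatch(ev):
    """Handle non-modal events so the UI stays (mostly) live."""
    kind, payload = ev
    if kind == "paint":
        log.append("repaint")
    return kind, payload

def ask_user(prompt, wanted="answer"):
    log.append(prompt)
    while True:                      # the slave event loop
        ev = events.popleft()        # a real loop would block here
        kind, payload = dispatch(ev)
        if kind == wanted:           # modal result arrived: "return"
            return payload

events.extend([("paint", None), ("answer", [1, 2, 3])])
result = ask_user("Input a list:")   # apparent synchronicity
```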

Courageous

May 1, 2000, 3:00:00 AM
tom wrote:

>
> Courageous <jkra...@san.rr.com> writes:
> > > But are you just using continuations here to hand-craft some sort
> > > of threads? Couldn't you do the same thing *without* first-class
> > > continuations if the implementation provided sufficiently-flexible
> > > Lisp-level threads?
> >
> > Being a once-been Motif/X11 expert, I can tell you without a
> > shadow of a doubt that neither threads nor continuations are
> > required to achieve procedural/"synchronous" call functionality
> > in an event-driven venue. It's quite possible, for example, for
> > the called function to temporarily begin a slave event loop.
> > Using this technique, I wrote many functions ala
> > AskUserToInputAList(), which had the apparent effect of synchronicity
> > in this otherwise asynchronous environment. Spinning up a thread
> > just to handle that would have been expensive & spurious, IMO.
>
> As a user of programs like that, I must say that I often find
> their behavior quite annoying. Maybe your programs did better,
> but in a style like that, often, large chunks of the UI
> functionality are unavailable when there is a particular input
> action going on.

In Motif'sville, this is called "APPLICATION_MODAL", meaning
that the dialog must be responded to before the application
can continue. In certain circumstances, this is the right
thing, but not always.

Do keep in mind that a slave event loop doesn't require
that this happen. The slave can quite easily act with the
same authority as the primary event loop, albeit you would
have to take care to make certain that inconsistent states
do not pop up (I myself always confined myself to
APPLICATION_MODAL functionality as a general rule, so I
can't offer an opinion on the more general use without
going beyond my experience).

Do keep in mind that the callback/event-handler abstraction
of Xt generally allows one to avoid the very ugly
type of program where you're listening for events in the
loop itself. Rather, you establish callbacks which are
dispatched from a general mechanism; ergo, the author of the
slave event loop only needs to make certain that events
continue to be processed, and never has the responsibility
of determining what each event means.

(the same can't be said of other gui models, of course,
so your point is quite apt in those contexts).

C/

tom (at ncal point verio point com)

May 1, 2000, 3:00:00 AM
"David McClain" <dmcc...@azstarnet.com> writes:
> > But are you just using continuations here to hand-craft some sort of threads?
> > Couldn't you do the same thing *without* first-class continuations if the
> > implementation provided sufficiently-flexible Lisp-level threads?
> >
> Yes, I suppose one could use threads here... I found that using
> continuations allowed one to proceed as though the program had been written
> in the old-fashioned "do-some-work, request-some-input, do-some-more-work,
> ...." style, using continuations rather like coroutine calls. To use threads
> here would also work, as quite evidenced by the Harlequin LispWorks
> environment, but it requires a conscious use of threading and the use of
> inter-thread-coordination and communication, e.g., locks, mutexes, channels,
> etc.

It seems to me that the reason threading seems more complex than
continuations is that threading is often preemptive while
continuations (out of the box) aren't. You get roughly the same
simplicity as continuation-based programming if you use a cooperative
thread implementation (with an explicit "yield"). Lack of preemption
could actually be a problem with pure continuation-based approaches if
your GUI involves long-running computations.

Threaded GUI implementations can be very simple from the application
programmer's point of view. A good approach is to minimize the amount
of shared state among GUI elements and to use a simple message passing
or mailbox paradigm for communication. With such an approach, there
is almost no need for the programmer ever to worry about locks,
mutexes, or anything else. The code looks roughly like any procedural
GUI code, except that signals between GUI elements are handled
apparently "in parallel" and that you can write bits and pieces of
"old-fashioned" code as you used to.

For people interested in this sort of thing, I recommend looking at
the SML eXene UI, which uses multithreading extensively. Also, Erlang
uses threads very naturally and without most of the hassles that
people traditionally associate with threaded programming.

Tom.
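A cooperative scheduler with an explicit yield can be sketched in a few lines of Python (generators play the role of the suspendable tasks; purely illustrative): control changes hands only at a yield point, which is exactly why no locking is needed.

```python
# Round-robin scheduler over generator "tasks"; each next() runs a task
# up to its next explicit yield.

from collections import deque

def scheduler(tasks):
    ready, trace = deque(tasks), []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run task up to its next yield
            ready.append(task)         # then put it back in line
        except StopIteration:
            pass                       # task finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # explicit yield point

trace = scheduler([worker("a", 2), worker("b", 2)])
```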

Kragen Sitaker

May 3, 2000, 3:00:00 AM
In article <8e97j2$4dumc$1...@fido.engr.sgi.com>,

Rob Warnock <rp...@rigden.engr.sgi.com> wrote:
>Kragen Sitaker <kra...@dnaco.net> wrote:
>+---------------
>| Dorai Sitaram <ds...@goldshoe.gte.com> wrote:
>| >Cross-tail-recursion, OTOH, permits a pass-the-baton
>| >style way of writing procedures...
>|
>| Pass-the-baton-style programs can be written as a case statement inside a
>| loop. It doesn't help the readability, that's for sure, but it can be done.
>+---------------
>
>Sure, and all subroutine calls and returns and random GOTOs can be converted
>into one huge case statement inside a loop, too [with a manual stack to hold
>the case indices for the returns]. I believe it was Bill Wulf who pointed
>this out decades ago in a paper titled "Global Variable Considered Harmful".
>That still doesn't make me want to do it very often.

Exactly :)

>The problem is that readability *does* matter, and both pass-the-baton-style
>tail-call-optimized programs and CPS-style programs have their place, where
>they're the most natural & readable style of doing some task. In particular,
>having written a couple of lexical analyzers in Scheme in the pass-the-baton
>style, I find that I really like it for that sort of thing.

Sure, readability is the whole point of not writing things in machine
code. I've only written a couple of things in this pass-the-baton
style (a lexical analyzer and a shift-reduce parser, as I think I may
have said in a previous post) but I'm not too happy with the
readability of the results.

Maybe it's just that I'm fairly new to programming in Scheme, and maybe
it's that *any* method of doing logic-heavy stuff like this would be
hard to read, and this is the least of all available evils.

Still, I'm really tempted to go back and reimplement these guys with an
explicit state-transition table. It might not be as fast, but it seems
like it should be more readable.
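For what it's worth, an explicit state-transition table is easy to sketch. This toy Python classifier (not a real lexer; all states and character classes are invented for the illustration) shows the shape: one loop, one table, and the control flow lives in data.

```python
# A (state, char-class) -> state table driven by a single loop.

def classify(ch):
    if ch.isdigit():
        return "digit"
    if ch.isalpha():
        return "alpha"
    return "other"

TABLE = {
    ("start",  "digit"): "number",
    ("start",  "alpha"): "ident",
    ("number", "digit"): "number",
    ("ident",  "alpha"): "ident",
    ("ident",  "digit"): "ident",
}

def scan(text):
    """Return the final state after consuming text."""
    state = "start"
    for ch in text:
        state = TABLE.get((state, classify(ch)), "error")
    return state

results = (scan("x42"), scan("123"), scan("12a"))   # ident, number, error
```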

When do you like using CPS?

>Sadly, it doesn't really translate well to non-TCO environments.

Baton-passing translates OK to C --- each function becomes a block with
a label at the beginning, and each tail-call becomes a goto :)
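And in a language with neither TCO nor goto, the same translation lands on the case-statement-inside-a-loop mentioned earlier. Here is a toy Python version of two baton-passing procedures (mutually tail-recursive even?/odd?) flattened into one loop with a dispatch variable; the sketch is illustrative, not from any of the posters' code.

```python
# Mutual tail calls become assignments to a dispatch variable: each
# "tail call" just hands the baton and loops, so the stack stays flat
# even for large n.

def is_even(n):
    state = "even"
    while True:
        if state == "even":            # (define (even? n) ...)
            if n == 0:
                return True
            state, n = "odd", n - 1    # tail call: hand baton to odd?
        else:                          # (define (odd? n) ...)
            if n == 0:
                return False
            state, n = "even", n - 1   # tail call: hand baton to even?

answers = (is_even(10), is_even(7), is_even(100_000))
```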
