
Midfunction Recursion


Travis Whitton

Oct 23, 2002, 10:59:12 AM
Hello, I'm steadily working my way through Paul Graham's ANSI Common Lisp,
and I've come to something that I don't quite understand. While I don't have
any problem understanding recursive functions where the function call is
the last statement in the function, this particular function calls itself
in the middle, and I can't rationalize exactly how it works.

(defun uncompress (lst)
  (if (null lst)
      nil
      (let ((elt (car lst))
            (rest (uncompress (cdr lst))))
        (if (consp elt)
            (append (apply #'list-of elt)
                    rest)
            (cons elt rest)))))

(defun list-of (n elt)
  (if (zerop n)
      nil
      (cons elt (list-of (- n 1) elt))))


Can someone explain how calling uncompress inside of uncompress' let statement
works? Example usage of the uncompress function is as follows:

(uncompress '((3 1) 0 (2 1) 0))
(1 1 1 0 1 1 0)

Thanks in advance,
Travis Whitton <whi...@atlantic.net>

Erik Naggum

Oct 23, 2002, 11:31:57 AM
* Travis Whitton

| While I don't have any problem understanding recursive functions where
| the function call is the last statement in the function, this particular
| function calls itself in the middle, and I can't rationalize exactly how
| it works.

Pardon me, but this made me laugh out loud. All that work by the Scheme
freaks to teach people recursion and to abuse it for iteration, and the
end result is that someone does not get it except for iteration! It had
to happen. Nothing personal, Travis, this is your teachers' fault.

| Can someone explain how calling uncompress inside of uncompress' let
| statement works?

Exactly like calling from somewhere else. But watch the arguments. The
recursive call causes the tail to be processed before the head. This
means that when you call (uncompress '((3 1) (2 0))), the first thing it
does is compute (uncompress '((2 0))), which yields (0 0) by prepending a
list of two 0's to the empty list. Then the rest of the first call takes
control and prepends a list of three 1's to the tail it computed first,
(0 0), and you get (1 1 1 0 0).

The smart thing about this solution is that it avoids copying the list it
has already constructed. That is indeed a good algorithm, but if one is
concerned about wasted resources, reversing the list first does exactly
the same good without the cost of the recursion.
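That reverse-first alternative can be sketched like this (the function name uncompress-iter and its structure are my own illustration, not code from the post):

```lisp
;; Illustrative sketch, not from the original post: walk the reversed
;; input and PUSH each expansion onto the front of the result, so the
;; part of the list already built is never copied.
(defun list-of (n elt)
  (if (zerop n)
      nil
      (cons elt (list-of (- n 1) elt))))

(defun uncompress-iter (lst)
  (let ((result '()))
    (dolist (elt (reverse lst) result)
      (if (consp elt)
          (loop repeat (first elt)
                do (push (second elt) result))
          (push elt result)))))

;; (uncompress-iter '((3 1) 0 (2 1) 0)) => (1 1 1 0 1 1 0)
```

Only the initial REVERSE copies anything; the PUSHes build the final list directly.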

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Kenny Tilton

Oct 23, 2002, 11:42:43 AM

Travis Whitton wrote:
> Hello, I'm steadily working my way through Paul Graham's ANSI Common Lisp,
> and I've come to something that I don't quite understand. While I don't have
> any problem understanding recursive functions where the function call is
> the last statement in the function, this particular function calls itself
> in the middle, and I can't rationalize exactly how it works.

The function list-of does the kind of recursion with which you are
comfortable, yes? Where the last form is:

(cons elt (list-of ... ...))

OK. list-of gets called recursively and returns a result which Lisp
silently puts someplace safe (the stack). Then cons gets called with elt
and the recursive result. Note that, just as with uncompress, the
temporary recursive result is indeed /computed/, but unlike uncompress
it is not /bound/ to a lexical variable you can look at in the source.

The only thing different with uncompress is that the programmer stashes
the recursive result (no difference in the recursion per se) in the temp
variable rest, then looks at the type of elt to see how it should be
processed. In both cases, consp or not, the recursive result is passed
to another function (here append or cons) for inclusion in the net
result.

Exercise: rewrite uncompress so it ends lexically with the recursive call.

>
> (defun uncompress (lst)
>   (if (null lst)
>       nil
>       (let ((elt (car lst))
>             (rest (uncompress (cdr lst))))
>         (if (consp elt)
>             (append (apply #'list-of elt)
>                     rest)
>             (cons elt rest)))))
>
> (defun list-of (n elt)
>   (if (zerop n)
>       nil
>       (cons elt (list-of (- n 1) elt))))
>
>
> Can someone explain how calling uncompress inside of uncompress' let statement
> works? Example usage of the uncompress function is as follows:
>
> (uncompress '((3 1) 0 (2 1) 0))
> (1 1 1 0 1 1 0)
>
> Thanks in advance,
> Travis Whitton <whi...@atlantic.net>


--

kenny tilton
clinisys, inc
---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
and I'm happy to state I finally won out over it."
Elwood P. Dowd

Nils Goesche

Oct 23, 2002, 11:49:15 AM
Travis Whitton <whi...@atlantic.net> writes:

> Hello, I'm steadily working my way through Paul Graham's ANSI Common
> Lisp, and I've come to something that I don't quite
> understand. While I don't have any problem understanding recursive
> functions where the function call is the last statement in the
> function, this particular function calls itself in the middle, and I
> can't rationalize exactly how it works.

Is this really code from ANSI Common Lisp? It looks like it was
written by a Schemer...

You learn to understand recursive functions this way: Start with
recursive functions where you already know what they do. When you are
reasonably comfortable with that, you'll know how to think about them
and understanding new ones won't be hard either.

In this case, we already know what uncompress is supposed to do:
Given a list of elements x_1, ..., x_n, return a new list where each
of the x_i is replaced either by x_i itself, if x_i is a number, or
by x occurrences of k, if x_i is a list of the form (x k).

What you do now is prove, by induction on the length n of the input
list, that this implementation of UNCOMPRESS does in fact do this.

First we have to show that uncompress works for a list with n = 0
elements. It obviously does. Now just /assume/ that uncompress works
with input lists of length less than n. We'll show that it works for
lists of length n, too, then, which means that, by induction,
UNCOMPRESS works for all lists (that are legal input).

> (defun uncompress (list) ; Unlike Scheme, in Lisp you can call a
>                          ; list a list
>   (if (null list)
>       nil
>       (let ((elt (car list))
>             (rest (uncompress (cdr list))))

Note here that (cdr list) contains a (legal) list of length n - 1, and
we already know, by assumption, that uncompress works for such lists.
So, REST now contains the correct expansion of (cdr list). Now read
the rest of the code and you'll see that it works for our list LIST of
length n, too, QED.

>         (if (consp elt)
>             (append (apply #'list-of elt)
>                     rest)
>             (cons elt rest)))))
>
> (defun list-of (n elt)
>   (if (zerop n)
>       nil
>       (cons elt (list-of (- n 1) elt))))
>
>
> Can someone explain how calling uncompress inside of uncompress' let
> statement works? Example usage of the uncompress function is as
> follows:
>
> (uncompress '((3 1) 0 (2 1) 0))
> (1 1 1 0 1 1 0)

This is how you understand recursive functions: You do /not/ try to
follow the call chain, you simply prove, by induction, that they work.
As you have seen, for the most part this means that you can simply
assume that the function already works (for smaller input at least),
which usually makes things very easy.

Note also that this is a rather bad way of implementing such a
function. Consider rewriting it with MAPCAN or LOOP (or both).
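For instance, a sketch of the MAPCAN suggestion (the name uncompress-mapcan and the use of MAKE-LIST are mine, not from the post):

```lisp
;; Illustrative sketch: MAPCAN splices together the per-element
;; expansions, each of which is a freshly consed list.
(defun uncompress-mapcan (lst)
  (mapcan (lambda (elt)
            (if (consp elt)
                (make-list (first elt) :initial-element (second elt))
                (list elt)))
          lst))

;; (uncompress-mapcan '((3 1) 0 (2 1) 0)) => (1 1 1 0 1 1 0)
```

MAPCAN uses NCONC on the lambda's results, so each must be a fresh list, as it is here.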

Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0

Jacek Generowicz

Oct 23, 2002, 11:49:54 AM
Travis Whitton <whi...@atlantic.net> writes:

> While I don't have any problem understanding recursive functions
> where the function call is the last statement in the function, this
> particular function calls itself in the middle, and I can't
> rationalize exactly how it works.

Calling itself at the end (tail-recursion) allows you to re-express
the recursive function as a loop, but that is not how recursion works
in general.

> Can someone explain how calling uncompress inside of uncompress' let
> statement works?

Just like calling any other function inside the let statement would
work. You push onto the stack whatever data are necessary to pick up
execution of the function where it left off, and go off and run
whatever function you called. Typically the stack is piled high with
information about "half-completed" functions. You don't much care
about what appears in the stack below you: there is nothing to prevent
(information about half-completed states of) the same function
appearing in the stack more than once.

Travis Whitton

Oct 23, 2002, 12:08:35 PM
> Pardon me, but this made me laugh out loud. All that work by the Scheme
> freaks to teach people recursion and to abuse it for iteration, and the
> end result is that someone does not get it except for iteration! It had
> to happen. Nothing personal, Travis, this is your teachers' fault.

Well... I'm teaching myself, so I guess it's my own fault. I'm trying to
expand my programming repertoire by learning a functional language. I have
been programming imperatively for a good while, and I've always avoided
recursion mainly because I found it confusing. I started off reading a gentle
introduction to SML, and almost all of its examples use recursion to
iterate, e.g.,

fun length nil = 0
| length(h::t) = 1 + length(t);

Things like that are very easy to understand so, until now, I have simply
viewed recursion as an elegant way to iterate. I've never tried Scheme (I
like fast languages), and I've never spoken with a "Scheme freak"... In the
ANSI CL book, it says that you shouldn't attempt to trace through a recursive
function; however, the SML tutorial does exactly that. Is this the wrong way of
going about things? Maybe it's the nature of functional programming that just
as you feel everything is starting to click, you're back at square one...

-
Travis Whitton <whi...@atlantic.net>

Travis Whitton

Oct 23, 2002, 12:19:41 PM
Thanks to all of you for the help. I shall digest this information and
hopefully one day achieve recursive enlightenment.

-
Travis Whitton <whi...@atlantic.net>

Erik Naggum

Oct 23, 2002, 12:41:00 PM
* Travis Whitton

| Well... I'm teaching myself, so I guess it's my own fault.

Damn. I really thought I had something I could whack the Scheme freaks
with this time.

| I have been programming imperatively for a good while, and I've always
| avoided recursion mainly because I found it confusing.

Do you find function calls confusing? Why?

| I've never tried Scheme (I like fast languages),

I see. This tells me that you are too quick to judge and do not pay
attention to the details that would encourage you to look for better
conclusions after the first one you reached has failed to produce the
expected or desired results.

| Maybe it's the nature of functional programming that just as you feel
| everything is starting to click, you're back at square one...

No, that would only be the nature of not getting the point.

You need to become more comfortable with not understanding something
before you can learn it well enough to understand it. There is a very
strong sense of rushing to conclusions in the way you speak of your
problems and your approaches to learning something. Acquire patience.

Nils Goesche

Oct 23, 2002, 1:06:58 PM
Travis Whitton <whi...@atlantic.net> writes:

> > Pardon me, but this made me laugh out loud. All that work by the
> > Scheme freaks to teach people recursion and to abuse it for
> > iteration, and the end result is that someone does not get it
> > except for iteration! It had to happen. Nothing personal,
> > Travis, this is your teachers' fault.
>
> Well... I'm teaching myself, so I guess it's my own fault.

Not really. SML freaks are very similar to Scheme freaks in this
regard :-)

> I'm trying to expand my programming repertoire by learning a
> functional language. I have been programming imperatively for a good
> while, and I've always avoided recursion mainly because I found it
> confusing. I started off reading a gentle introduction to SML, and
> almost all of its examples use recursion to iterate, e.g.,
>
> fun length nil = 0
> | length(h::t) = 1 + length(t);
>
> Things like that are very easy to understand so, until now, I have
> simply viewed recursion as an elegant way to iterate.

Yes, tail recursion is easy to understand. In fact, tail recursion is
equivalent to iteration, so there is no practical reason to prefer one
way to the other. There is a theoretical difference, though. Let's
write, for example, a function that prints Hello n times.

Here is the tail recursive way:

(defun hello (n)
  (format t "~&Printing Hello ~A times." n)
  (labels ((print-times (k)
             (when (plusp k)
               (format t "~&Hello ~A!" (1+ (- n k)))
               (print-times (1- k)))))
    (print-times n))
  (format t "~&Done."))

And here the iterative way:

(defun hello (n)
  (format t "~&Printing Hello ~A times." n)
  (loop for k from 1 to n
        do (format t "~&Hello ~A!" k))
  (format t "~&Done."))

The iterative way looks much easier, in fact, and this example is
rather typical. The reason functional programming freaks prefer the
recursive way, anyway, is that the iterative version uses assignment:
First, "Hello" is printed with k being 1, then k is reassigned to 2,
and so on (this is hidden by the LOOP macro, here). In the recursive
version no such thing happens; print-times is called with a different
argument each time instead.

And it is for this rather theoretical reason that functional
programming freaks think it is ``cleaner'' to iterate using recursion,
even if the code looks much messier.

If you think this is a rather crazy attitude, then you are right: It
is. In fact, people who use SML or Scheme know that taking the
functional programming paradigm too far isn't a good thing (otherwise
they'd be using Haskell instead, where in order to write even a simple
function like our HELLO above, you'd have to introduce an esoteric
thing called a ``monad''!), but somehow many of them still have
something like a ``bad conscience'' whenever they use assignment and
thus try to avoid loops. Lispers are a rather more practical people:
They recognize that loops and tail recursion are equivalent, anyway,
and use whatever is easier for the case at hand.

> I've never tried Scheme (I like fast languages), and I've never
> spoken with a "Scheme freak"... In the ANSI CL book, it says that
> you shouldn't attempt to trace through a recursive function;
> however, the SML tutorial does exactly that. Is this the wrong way
> of going about things? Maybe it's the nature of functional
> programming that just as you feel everything is starting to click,
> you're back at square one...

No, what you have to understand is when to be functional and when
rather not :-)

Erik Naggum

Oct 23, 2002, 1:34:33 PM
* Nils Goesche

| Yes, tail recursion is easy to understand.

It is the first time in this newsgroup, but I have heard from people who
have taken Scheme courses at various places who actually think recursion is
merely a form of iteration. It would be silly to just claim this without
some evidence, but nobody has wanted to come forward to admit it when
they suspected the consequences or even grew the prerequisite clue.

Teaching iteration as a form of recursion is just plain wrong, as people
get it backward and conclude: recursion is a form of iteration. It sounds too silly to
be believable, but ask around, and more than our unwitting contributor
here will effectively say this. I blame this squarely on stupid teaching
methods and silly languages that teach the wrong thing to people who are
a little too eager to jump to conclusions. But those people /abound/ and
it is the responsibility of teachers /not/ to send their students across
mine fields to sort them out the hard way.

I think teaching tail recursion as something separate from recursion is a
really bad thing. If you cannot deal with reality when you want to teach
iteration, send your students to a class on theatrical make-up and ask
them to come back to you when they understand the difference between
staged make-believe and the real story.

Barry Margolin

Oct 23, 2002, 1:48:00 PM
In article <slrnardivu....@grub.atlantic.net>,

Travis Whitton <whi...@atlantic.net> wrote:
>> Pardon me, but this made me laugh out loud. All that work by the Scheme
>> freaks to teach people recursion and to abuse it for iteration, and the
>> end result is that someone does not get it except for iteration! It had
>> to happen. Nothing personal, Travis, this is your teachers' fault.
>
>Well... I'm teaching myself, so I guess it's my own fault. I'm trying to
>expand my programming repertoire by learning a functional language. I have
>been programming imperatively for a good while, and I've always avoided
>recursion mainly because I found it confusing. I started off reading a gentle
>introduction to SML, and almost all of its examples use recursion to
>iterate, e.g.,
>
>fun length nil = 0
>| length(h::t) = 1 + length(t);

That's not tail recursion, it's recursing in the middle, which you claimed
not to understand. It calls length() recursively, and *then* adds 1 to
that before returning. An equivalent Lisp version would be:

(defun length (list)
  (if (null list)
      0
      (let ((tail-length (length (cdr list))))
        (+ 1 tail-length))))

Recursion is only equivalent to iteration if you're returning the value of
the recursive call immediately, without doing anything to it. A
tail-recursive LENGTH function might look like:

(defun length (list &optional (length-so-far 0))
  (if (null list)
      length-so-far
      (length (cdr list) (+ 1 length-so-far))))

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Marco Antoniotti

Oct 23, 2002, 2:01:35 PM

Hi

many people have already answered you.

However, let me add a comment here. There are algorithms that are
inherently recursive.

If you look at the code of the QuickSort algorithm or of Binary Tree
Traversal in *any* language, you will see that one of the (two in the
above cases) recursive calls is "in the middle".

You cannot eliminate this call, unless you explicitly manipulate a
stack.
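For instance, an in-order binary-tree traversal with an explicit stack might look like this (a sketch of my own; the (left value right) tree representation is an assumption for the example, not from the post):

```lisp
;; Illustrative sketch: in-order traversal of a binary tree, written
;; without recursion by manipulating a stack explicitly.  A tree here
;; is NIL or a list (left value right).
(defun inorder (tree)
  (let ((stack '())
        (result '()))
    (loop
      (cond (tree                       ; descend to the leftmost node
             (push tree stack)
             (setf tree (first tree)))
            ((null stack)               ; nothing left to visit
             (return (nreverse result)))
            (t                          ; visit node, then go right
             (let ((node (pop stack)))
               (push (second node) result)
               (setf tree (third node))))))))

;; (inorder '((nil 1 nil) 2 (nil 3 nil))) => (1 2 3)
```

The stack holds exactly the "half-completed" frames that the recursive version keeps on the call stack.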

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
715 Broadway 10th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Nils Goesche

Oct 23, 2002, 2:08:45 PM
Erik Naggum <er...@naggum.no> writes:

> * Nils Goesche
> | Yes, tail recursion is easy to understand.
>
> It is the first time in this newsgroup, but I have heard from
> people who have taken Scheme courses various places who actually
> think recursion is merely a form of iteration.

...


> Teaching iteration as a form of recursion is just plain wrong as
> people get it backward: recursion is a form of iteration. It
> sounds too silly to be believable, but ask around, and more than
> our unwitting contributor here will effectively say this.

Ugh! I probably should have written ``Tail recursive code is easy to
understand for people who already understand recursion (in general)''.
It would have never occurred to me that there might be people who get
it wrong this way. I guess this is because I heard of tail recursion
only many years after I understood recursion for the first time, when
I was still programming in Pascal. But, obviously, the danger is
real...

> I blame this squarely on stupid teaching methods and silly
> languages that teach the wrong thing to people who are a little
> too eager to jump to conclusions.

...


> I think teaching tail recursion as something separate from
> recursion is a really bad thing.

Yes, this is clear now. I can't remember where I learned about tail
recursion, only that it was some Scheme text claiming that this was a
better way to write loops. I'd like to look back whether they
properly explained recursion first... I think they didn't. How about
SICP? Do they treat general recursion first? (I don't have my copy
handy at the moment)

Barry Margolin

Oct 23, 2002, 2:29:15 PM
In article <y6csmyx...@octagon.valis.nyu.edu>,

Marco Antoniotti <mar...@cs.nyu.edu> wrote:
>
>Hi
>
>many people have already answered you.
>
>However, let me add a comment here. There are algorithms that are
>inherently recursive.
>
>If you look at the code of the QuickSort algorithm or of Binary Tree
>Traversal in *any* language, you will see that one of the (two in the
>above cases) recursive calls is "in the middle".

Another well-known example that uses mid-function recursion is the
Fibonacci number series:

F(n) = F(n-1) + F(n-2)

This is difficult to express using tail recursion because you have to do
*two* recursive calls -- they can't *both* be the last thing you do.

Note: I didn't say it *can't* be done tail-recursively. Any recursive
procedure can be transformed into a tail-recursive version, but the result
isn't always pretty because you have to pass the state around as parameters
rather than just stacking it up in the caller. I haven't figured out what
the tail-recursive version of Fibonacci looks like (I think it would
involve two helper functions, one for the first subcall and the other for
the second subcall).

Joe Marshall

Oct 23, 2002, 4:07:49 PM
Nils Goesche <car...@cartan.de> writes:

> How about
> SICP? Do they treat general recursion first? (I don't have my copy
> handy at the moment)

It is presented simultaneously with general recursion.

Michael Sullivan

Oct 23, 2002, 3:51:50 PM
Marco Antoniotti <mar...@cs.nyu.edu> wrote:

> many people have already answered you.
>
> However, let me add a comment here. There are algorithms that are
> inherently recursive.
>
> If you look at the code of the QuickSort algorithm or of Binary Tree
> Traversal in *any* language, you will see that one of the (two in the
> above cases) recursive calls is "in the middle".
>
> You cannot eliminate this call, unless you explicitly manipulate a
> stack.

And those are great examples of algorithms that should almost never be
written without recursion.

The first time I ever saw a QuickSort algorithm was in high school; I
hadn't done all that much programming, and what I had done was in
brain-damaging languages. The algorithm was implemented in FORTRAN. It was
only 15-20 lines of code, but it took me hours to figure out what the
heck it was doing, and when I wanted to write a sort a month later, I
couldn't remember it.

A few years later, I saw it written recursively. *bing*. Why was this
so hard to understand before?


Michael

--
Michael Sullivan
Business Card Express of CT Thermographers to the Trade
Cheshire, CT mic...@bcect.com

Michael Sullivan

Oct 23, 2002, 3:51:49 PM
Nils Goesche <car...@cartan.de> wrote:

> How about SICP? Do they treat general recursion first? (I don't
> have my copy handy at the moment)

Yes. I don't have my copy handy either, but I read through that section
only a couple months ago, so I'm willing to talk from memory. They do
get right into tail recursion in the same discussion. I would have said
they take a reasonable attitude toward the subject, but they clearly
want students to try out the functional "freak" style of using tail
recursion in place of iteration. They don't imply that it's the be-all
and end-all, just that it's worth trying out for yourself, which seems
fair to me.

There's a fairly long discussion early on of what makes a function tail
recursive, and how to write functions so that they are tail recursive
when possible. It involves tracing through some functions to show what
happens to the stack with various kinds of recursion, and why tail
recursive functions can be optimized easily.

I found it informative (as someone who hadn't worked with lisp, scheme
or functional style for a dozen years). I have to agree with you that
plain iteration is usually clearer in languages that support it. (Not
always though, once you're comfortable reading recursive code). I think
there's some value in avoiding assignment, but that it is so minuscule
as not to matter when you have lexical scope and are dealing with a
variable that only lives through a few lines of simple code.

Erann Gat

Oct 23, 2002, 4:10:53 PM
In article <%TBt9.23$9F6....@paloalto-snr1.gtei.net>, Barry Margolin
<bar...@genuity.net> wrote:

> I haven't figured out what
> the tail-recursive version of Fibonacci looks like

(defun tail-fib (n &optional (n1 1) (n2 1))
  (if (<= n 1) n1 (tail-fib (1- n) n2 (+ n1 n2))))

;-)

E.

Kalle Olavi Niemitalo

Oct 23, 2002, 4:53:03 PM
Barry Margolin <bar...@genuity.net> writes:

> I haven't figured out what
> the tail-recursive version of Fibonacci looks like (I think it would
> involve two helper functions, one for the first subcall and the other for
> the second subcall).

(defun fib (x)
  (labels ((helper (i fib-i-1 fib-i)
             (if (>= i x)
                 fib-i
                 (helper (+ i 1) fib-i (+ fib-i-1 fib-i)))))
    (helper 0 1 0)))

I suppose this is cheating.

Nils Goesche

Oct 23, 2002, 5:48:07 PM
mic...@bcect.com (Michael Sullivan) writes:

> Nils Goesche <car...@cartan.de> wrote:
>
> > How about SICP? Do they treat general recursion first? (I
> > don't have my copy handy at the moment)
>
> Yes. I don't have my copy handy either, but I read through that section
> only a couple months ago, so I'm willing to talk from memory. They do
> get right into tail recursion in the same discussion. I would have said
> they take a reasonable attitude toward the subject, but they clearly
> want students to try out the functional "freak" style of using tail
> recursion in place of iteration. They don't imply that it's the be-all
> and end-all, just that it's worth trying out for yourself, which seems
> fair to me.

I tend to agree, in principle, but we had a discussion about this
a while ago where it seemed that there are too many students who
get it wrong when it is done this way.

> I have to agree with you that plain iteration is usually
> clearer in languages that support it.

It has always been my impression that designers of languages
which don't support iteration sufficiently, like Scheme which has
only the awkward `do', do so quite deliberately in order to force
people into the tail recursive style.

Regards,
--
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0

Barry Margolin

Oct 23, 2002, 5:55:16 PM
In article <87elagg...@darkstar.cartan>,

Nils Goesche <n...@cartan.de> wrote:
>It has always been my impression that designers of languages
>which don't support iteration sufficiently, like Scheme which has
>only the awkward `do', do so quite deliberately in order to force
>people into the tail recursive style.

You have to remember that Scheme was designed to support the paradigms
espoused in all the "Lambda: The Ultimate XXX" papers. The idea was that
function calls could be used as the basis for almost all control
structures.

Nils Goesche

Oct 23, 2002, 6:03:09 PM
Barry Margolin <bar...@genuity.net> writes:

> In article <87elagg...@darkstar.cartan>,
> Nils Goesche <n...@cartan.de> wrote:
> >It has always been my impression that designers of languages
> >which don't support iteration sufficiently, like Scheme which has
> >only the awkward `do', do so quite deliberately in order to force
> >people into the tail recursive style.
>
> You have to remember that Scheme was designed to support the
> paradigms espoused in all the "Lambda: The Ultimate XXX"
> papers. The idea was that function calls could be used as the
> basis for almost all control structures.

Hm, ok, but couldn't they have omitted `do' altogether, then?

Barry Margolin

Oct 23, 2002, 6:10:06 PM
In article <87adl4g...@darkstar.cartan>,

Nils Goesche <n...@cartan.de> wrote:
>Barry Margolin <bar...@genuity.net> writes:
>
>> In article <87elagg...@darkstar.cartan>,
>> Nils Goesche <n...@cartan.de> wrote:
>> >It has always been my impression that designers of languages
>> >which don't support iteration sufficiently, like Scheme which has
>> >only the awkward `do', do so quite deliberately in order to force
>> >people into the tail recursive style.
>>
>> You have to remember that Scheme was designed to support the
>> paradigms espoused in all the "Lambda: The Ultimate XXX"
>> papers. The idea was that function calls could be used as the
>> basis for almost all control structures.
>
>Hm, ok, but couldn't they have omitted `do' altogether, then?

Yes. Over time a few "practical" features were added to the language, as
it gained more use outside the academic fold.

Erik Naggum

Oct 23, 2002, 6:11:37 PM
* Nils Goesche

| It has always been my impression that designers of languages which don't
| support iteration sufficiently, like Scheme which has only the awkward
| `do', do so quite deliberately in order to force people into the tail
| recursive style.

I think `do' is pretty strong as far as support for iteration goes. At
least it supports multi-variable sequential and parallel stepping, which
is /far/ better than most of the languages that should have had good
support for iteration because they are misdesigned as far as function
calls are concerned, such as by not having multiple value return.

However, when you encourage people to use recursion, the iteration
support does not evolve. People who tried to make improvements in the
iteration department would be encouraged to use recursion, instead.

Since recursion cannot evolve as a linguistic element, either, what you
get is a stagnant language that requires more code and "patterns" to get
things done. E.g., collecting values while traversing a list.

There are many ways to the same target, but finding the minimal set of
operations necessary to accomplish it is far more important than whether
you use an iterative or a tail-recursive or even recursive algorithm. It
appears to me that the focus on tail recursion forces people to play with
loaded dice.
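To make the collecting example concrete (a sketch of my own, not from the post): LOOP's COLLECT clause does the bookkeeping that the tail-recursive style must spell out with an accumulator.

```lisp
;; Illustrative sketch: collecting the even squares from a list.
;; Iteratively, LOOP's COLLECT clause hides the accumulation:
(defun even-squares (lst)
  (loop for x in lst
        when (evenp x)
          collect (* x x)))

;; Tail-recursively, the same collection needs an explicit accumulator
;; and a final NREVERSE -- the "pattern" the iteration construct hides:
(defun even-squares-rec (lst &optional acc)
  (cond ((null lst) (nreverse acc))
        ((evenp (car lst))
         (even-squares-rec (cdr lst) (cons (* (car lst) (car lst)) acc)))
        (t (even-squares-rec (cdr lst) acc))))

;; (even-squares '(1 2 3 4)) => (4 16)
```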

Nils Goesche

Oct 23, 2002, 6:42:25 PM
Erik Naggum <er...@naggum.no> writes:

> * Nils Goesche

> | It has always been my impression that designers of languages
> | which don't support iteration sufficiently, like Scheme which
> | has only the awkward `do', do so quite deliberately in order
> | to force people into the tail recursive style.
>
> I think `do' is pretty strong as far as support for iteration
> goes. At least it supports multi-variable sequential and
> parallel stepping, which is /far/ better than most of the
> languages that should have had good support for iteration
> because they are misdesigned as far as function calls are
> concerned, such as by not having multiple value return.

Come to think of it, that's true: Some languages have only
`while' and a `for' that works only on fixnum intervals (and both
don't return a useful value). Guess I was comparing it to LOOP :-)
But even though I always used DO until I discovered that LOOP
isn't that hard to understand, after all, I never really got used
to it -- I still think it is somewhat hard to read (unlike LOOP).

> There are many ways to the same target, but finding the
> minimal set of operations necessary to accomplish it is far
> more important than whether you use an iterative or a
> tail-recursive or even recursive algorithm. It appears to me
> that the focus on tail recursion forces people to play with
> loaded dice.

Heh. Obviously there are people who think that submitting to a
certain ideology is far more important than accomplishing
anything...

Tagore Smith

Oct 23, 2002, 9:46:42 PM
Travis Whitton wrote:
> Thanks to all of you for the help. I shall digest this information and
> hopefully one day achieve recursive enlightenment.

For most people learning to write recursive functions there is
eventually a kind of "Aha" experience. Before that moment they are on
thin ice writing recursive functions; afterward they can't recall why
they found it difficult. I remember the exact moment I had that
experience. You sound very close to getting it, so don't put it off till
"someday". It is important enough that you should try to understand it
today.

There is a trick to thinking about recursion. It is a bit difficult
to explain without resorting to formalisms that presuppose that you
understand recursion. Nils Goesche's post explains it very well, but I
can see how it might make your eyes glaze over if you haven't learned to
do inductive proofs. Induction and recursion are two sides of the same
coin, and induction requires exactly the same "Aha" as recursion. Both
require that you deal with only the part of the problem at hand, and
assume that the rest of the problem is solved. This is why thinking
about the stack is the wrong way to approach recursion (although
thinking about the stack can be helpful in getting to the "Aha" moment).

As Nils suggests it is best to begin with a function that you
already understand. On page 24 of my edition of ANSI Common Lisp Graham
gives the following:

(defun our-length (lst)
  (if (null lst)
      0
      (+ (our-length (cdr lst)) 1)))

There are two ways to understand this function. One is by mentally
tracing the stack. When you call it with:

(our-length (list 1 2 3 4))

it repeatedly calls itself until the argument passed is the empty
list, at which point it returns 0. Then as each stack frame returns 1 is
added to the return value of the preceding call, and the result
returned, until the stack is empty of calls. This is the way that the
machine implements recursion, but it is not the best way to think about
it. As you have found, if your procedure is more complicated than
our-length, it becomes very difficult to hold the details in your mind.

The second way to think about this function is to _assume_ that the
recursive call to our-length will return the correct value. Then it is
easy to see that the length of lst is 1 + the length of the cdr of lst.
This assumption does require that there be a base case (the length of
the empty list is 0) and that that base case is eventually reached (the
length of the list becomes less each time, and there is a least element
to the natural numbers). The key to this way of thinking about recursion
is that you _assume_ that the recursive call returns what it says it
does -- the length of the rest of the list.

The first and the second way of thinking about recursive functions
are each about as easy as the other when you are dealing with very
simple functions, like those found when using recursion in place of
iteration. But the first becomes intractable when thinking about more
complicated functions, for instance those that do deep recursion, where
there is more than one branch to a computation. Try looking at the
uncompress function again, assuming that any function calls contained in
its body do the right thing.
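To make that concrete, here are UNCOMPRESS and LIST-OF as given at the start of the thread, annotated with the inductive reading: assume REST is already the correctly uncompressed tail, and only check what this level does with it.

```lisp
(defun list-of (n elt)
  (if (zerop n)
      nil
      (cons elt (list-of (- n 1) elt))))

(defun uncompress (lst)
  (if (null lst)
      nil                                   ; base case
      (let ((elt (car lst))
            (rest (uncompress (cdr lst))))  ; assumed correct by induction
        (if (consp elt)
            (append (apply #'list-of elt) rest)
            (cons elt rest)))))
```

As in the original post, (uncompress '((3 1) 0 (2 1) 0)) yields (1 1 1 0 1 1 0).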

Nils Goesche's post explains this more formally; I'd suggest that
you read it again. You might also find it useful to learn to do
inductive proofs, as doing so recapitulates these ideas.

Tagore Smith

Nils Goesche

Oct 23, 2002, 10:12:24 PM
Tagore Smith <tag...@tagoresmith.com> writes:

> Induction and recursion are two sides of the same coin, and
> induction requires exactly the same "Aha" as recursion.

Hmm, you may have a point here. I still remember very well the
moment when I first got the point of recursion; I was desperately
trying to understand a Pascal program that solved the tower of
Hanoi problem recursively when I suddenly got the idea to prove
that it works... and I was already very familiar with inductive
proofs at the time. But there are a lot of people who aren't. I
guess I should take this into account in the future. Thanks!

Ray Blaak

Oct 24, 2002, 2:14:54 AM
Nils Goesche <car...@cartan.de> writes:
> The iterative way looks much easier, in fact, and this example is
> rather typical. The reason functional programming freaks prefer the
> recursive way, anyway, is that the iterative version uses assignment:
> First, "Hello" is printed with k being 1, then k is reassigned to 2,
> and so on (this is hidden by the LOOP macro, here). In the recursive
> version no such thing happens; print-times called with a different
> argument each time instead.

Actually, there is no guarantee that assignment is necessarily being used in
the LOOP case (that is, in general -- it might well be in CL). Consider that
the particular loop could have been implemented as a macro that expands into a
local tail-recursive closure.

Ultimately it's a non-issue whether it's "true" iteration or tail recursion, since
they are semantically equivalent. Instead, one should use the notation that
most clearly expresses the solution, eases maintenance, clarifies
understanding, etc.

Iterative looking solutions seem to map better to how people naturally think.
Recursive looking solutions tend to be more amenable to inductive proofs.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Jason Kantz

Oct 24, 2002, 8:19:01 AM

Travis Whitton <whi...@atlantic.net> wrote in message
news:slrnardetr....@grub.atlantic.net...

> in the middle, and I can't rationalize exactly how it works.
>
> (defun uncompress (lst)
>   (if (null lst)
>       nil
>       (let ((elt (car lst))
>             (rest (uncompress (cdr lst))))
>         (if (consp elt)
>             (append (apply #'list-of elt)
>                     rest)
>             (cons elt rest)))))
>
> (defun list-of (n elt)
>   (if (zerop n)
>       nil
>       (cons elt (list-of (- n 1) elt))))
>

When looking at a recursive function, I start by looking at the base
cases. So in uncompress it is,

(uncompress nil) -> nil

Then I look at other simple cases,

(uncompress '(1)) -> (1) ; elt is 1 rest is nil

(uncompress '((3 1))) -> (1 1 1) ; elt is (3 1) rest is nil

I see that uncompress is always returning a list, and I reason forward
to see that a list is being built from the base cases with cons and
append.


Gordon Joly

Oct 24, 2002, 9:53:05 AM
In article <slrnardetr....@grub.atlantic.net>,
Travis Whitton <whi...@atlantic.net> wrote:
>Hello, I'm steadily working my way through Paul Graham's ANSI Common Lisp,
>and I've come to something that I don't quite understand. While I don't have
>any problem understanding recursive functions where the function call is
>the last statement in the function, this particular function calls itself
>in the middle, and I can't rationalize exactly how it works.
>[...]

How about this instead? A more succinct and more recursive(?) example
of "Midfunction Recursion"?

****

(defun tak (x y z)
  (cond ((not (< y x)) z)
        (t (tak (tak (1- x) y z)
                (tak (1- y) z x)
                (tak (1- z) x y)))))

****

ends:-)

Gordo

Gordon Joly

Oct 24, 2002, 9:59:47 AM

(defun tak (x y z)
  (cond ((not (< y x)) z)
        (t (tak (tak (1- x) y z)
                (tak (1- y) z x)
                (tak (1- z) x y)))))


Is the famous "Tak function", devised by I. Takeuchi.

Gordo

Erik Naggum

Oct 24, 2002, 10:42:43 AM
* Ray Blaak

| Actually, there is no guarantee that assignment is necessarily being used
| in the LOOP case (that is, in general -- it might well be in CL).
| Consider that the particular loop could have been implemented as a macro
| that expands into a local tail-recursive closure.

The test for whether you have assignment or not is not found with the
syntactic sugar. Try this in both Common Lisp and Scheme:

(do ((i 1 (1+ i))
     (foo () (cons (lambda () i) foo)))
    ((= i 5) foo))

If typed at the listener, you should now have a list of four closures in
the variable *. Scheme victims may have to write more. (map apply *)
yields (4 3 2 1), while (mapcar #'funcall *) yields (5 5 5 5). The reason
for the dramatic difference should be intuitively evident.

In fact, Common Lisp semantics is defined this way and Scheme semantics
is defined that way, so it is not the option that Scheme freaks believe.

| Ultimately its a non-issue if its "true" iteration vs tail recursion,
| since they are semantically equivalent.

No, they are not. The semantic distinction is precisely in the binding
behavior of the variables. Scheme does not actually offer iteration.

It annoys me that I know more about Scheme minutiae than Scheme freaks.

Nils Goesche

Oct 24, 2002, 11:02:23 AM
Ray Blaak <bl...@telus.net> writes:

> Nils Goesche <car...@cartan.de> writes:

> > First, "Hello" is printed with k being 1, then k is reassigned to
> > 2, and so on (this is hidden by the LOOP macro, here). In the
> > recursive version no such thing happens; print-times is called with a
> > different argument each time instead.
>
> Actually, there is no guarantee that assignment is necessarily being
> used in the LOOP case (that is, in general -- it might well be in
> CL). Consider that the particular loop could have been implemented
> as a macro that expands into a local tail-recursive closure.

I was quite aware of that and considered writing it with DO, making
the assignment explicit. Had I been addressing you, I'd probably have
done it that way; however, I also wanted to show a newbie how easy it
is to write such a loop with LOOP; you know it, he might not. (And I
doubt that there is any implementation of LOOP that wouldn't translate
this into a loop with assignment.) Moreover, it is also quite
interesting that there are people who even think that using tail
recursion is still better than, say, writing

(dotimes (i 10)
  (print i))

/even though/ no explicit assignment is visible here (and strictly
speaking might not even be there at all in the expansion as you
correctly point out). It is this irrationality that I find so
baffling.

> Ultimately its a non-issue if its "true" iteration vs tail
> recursion, since they are semantically equivalent. Instead, one
> should use the notation that most clearly expresses the solution,
> eases maintenance, clarifies understanding, etc.

Exactly my point.

William D Clinger

Oct 24, 2002, 11:05:24 AM
Ray Blaak wrote:
> Ultimately its a non-issue if its "true" iteration vs tail recursion, since
> they are semantically equivalent.

For many people the word "iteration" implies assignment rather than
binding. For them, iteration is not the same as tail recursion.

I am not claiming that their view of iteration is superior to mine.

Will

; The difference between a DO loop in Scheme and the corresponding
; loop in Common Lisp is one of the standard examples of the difference
; between binding and assignment. People whose concept of iteration is
; sufficiently impoverished are likely to think that this example shows
; a difference between tail recursion and iteration.

; This Scheme expression returns the values 10, 362880, 7, 720.

(let ((f (lambda (x y) (values x y -1 -1))))
  (do ((i 1 (+ i 1))
       (j 1 (* i j)))
      ((= i 10)
       (f i j))
    (if (= i 7)
        (set! f (lambda (x y) (values x y i j))))))

; This (untested) CL expression returns the values 10, 362880, 10, 362880.

(let ((f #'(lambda (x y) (values x y -1 -1))))
  (do ((i 1 (+ i 1))
       (j 1 (* i j)))
      ((= i 10)
       (funcall f i j))
    (if (= i 7)
        (setq f #'(lambda (x y) (values x y i j))))))

Barry Margolin

Oct 24, 2002, 10:58:39 AM
In article <3DB74EF8...@tagoresmith.com>,

Tagore Smith <tag...@tagoresmith.com> wrote:
> There is a trick to thinking about recursion. It is a bit difficult
>to explain without resorting to formalisms that presuppose that you
>understand recursion.

So the explanation is recursive!

Nils Goesche

Oct 24, 2002, 11:30:04 AM
I <car...@cartan.de> wrote:

> Ray Blaak <bl...@telus.net> writes:
>
> > Ultimately its a non-issue if its "true" iteration vs tail
> > recursion, since they are semantically equivalent.

> Exactly my point.

And I was terribly wrong. Fascinating! Thanks to Erik and William
for pointing this out.

Kaz Kylheku

Oct 24, 2002, 11:36:57 AM
Travis Whitton <whi...@atlantic.net> wrote in message news:<slrnardetr....@grub.atlantic.net>...
> Hello, I'm steadily working my way through Paul Graham's ANSI Common Lisp,
> and I've come to something that I don't quite understand. While I don't have
> any problem understanding recursive functions where the function call is
> the last statement in the function, this particular function calls itself
> in the middle, and I can't rationalize exactly how it works.

A function call suspends the calling procedure while another one
executes. The called procedure has its own fresh bindings for all of
its lexical variables, including the parameters.

> (defun uncompress (lst)

You can use the symbol list as a variable, even though it's the name
of a standard Lisp function. Lisp symbols have a separate binding for
functions and variables.

> (if (null lst)
> nil
> (let ((elt (car lst))
> (rest (uncompress (cdr lst))))

For instance, REST is also a standard Lisp function, but you were able
to use the symbol as a variable name.

> (if (consp elt)
> (append (apply #'list-of elt)
> rest)
> (cons elt rest)))))
>
> (defun list-of (n elt)
> (if (zerop n)
> nil
> (cons elt (list-of (- n 1) elt))))
>
>

> Can someone explain how calling uncompress inside of uncompress' let statement
> works? Example usage of the uncompress function is as follows:

The call is completely independent; it has its own private bindings
for the LST parameter and the ELT and REST variables.

A function call, or activation, consists of some code, and of an
environment that is set up for the evaluation of that code containing
the bindings of variables.

All of the nested activations of uncompress share the same code, and
use the same symbols to refer to variables, but each has its own
binding environment, so there is no interference.

You need not worry that the inner call to uncompress will do anything
to the local variables of the call that is already in progress; that
call is suspended and its bindings are intact upon its resumption.
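A small sketch of that independence (COUNTDOWN is a made-up example): each inner activation gets a fresh binding of N, and the suspended outer call finds its own N intact when it resumes.

```lisp
;; Each activation has its own binding of N; the recursive call below
;; cannot disturb the N of the call already in progress.
(defun countdown (n)
  (if (zerop n)
      nil
      (cons n (countdown (- n 1)))))
```

(countdown 3) returns (3 2 1): each level consed its own N after its inner call returned.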

Ray Blaak

Oct 24, 2002, 12:28:32 PM
Erik Naggum <er...@naggum.no> writes:
> The test for whether you have assignment or not is not found with the
> syntactic sugar. Try this in both Common Lisp and Scheme:
>
> (do ((i 1 (1+ i))
>      (foo () (cons (lambda () i) foo)))
>     ((= i 5) foo))
>
> If typed at the listener, you should now have a list of four closures in
> the variable *. Scheme victims may have to write more. (map apply *)
> yields (4 3 2 1), while (mapcar #'funcall *) yields (5 5 5 5). The reason
> for the dramatic difference should be intuitively evident.

A remarkably clarifying example. I have been educated.

Greg Menke

Oct 24, 2002, 12:48:44 PM
Erik Naggum <er...@naggum.no> writes:

> The test for whether you have assignment or not is not found with the
> syntactic sugar. Try this in both Common Lisp and Scheme:
>
> (do ((i 1 (1+ i))
>      (foo () (cons (lambda () i) foo)))
>     ((= i 5) foo))
>
> If typed at the listener, you should now have a list of four closures in
> the variable *. Scheme victims may have to write more. (map apply *)
> yields (4 3 2 1), while (mapcar #'funcall *) yields (5 5 5 5). The reason
> for the dramatic difference should be intuitively evident.

I don't have scheme installed & have never used it, but am trying to
understand what you're getting at here.

Given the code snippet, I duplicated the mapcar results you give
above. My interpretation is that each closure is over the variable i
itself, not its value at the time the closure is formed -- thus
returning 5 for each closure. This is because i is 5 when do exits.
Is this correct?

Assuming the map results you give are for scheme, it seems Scheme is
closing over the value of i instead of the variable itself. If so,
are all Scheme closures formed that way?

Thanks- I'm having problems with my intuition today...

Greg Menke

Pascal Costanza

Oct 24, 2002, 12:57:09 PM
Greg Menke wrote:
> Erik Naggum <er...@naggum.no> writes:
>
>> The test for whether you have assignment or not is not found with the
>> syntactic sugar. Try this in both Common Lisp and Scheme:
>>
>> (do ((i 1 (1+ i))
>>      (foo () (cons (lambda () i) foo)))
>>     ((= i 5) foo))
>>
>> If typed at the listener, you should now have a list of four closures in
>> the variable *. Scheme victims may have to write more. (map apply *)
>> yields (4 3 2 1), while (mapcar #'funcall *) yields (5 5 5 5). The reason
>> for the dramatic difference should be intuitively evident.
>
> I don't have scheme installed & have never used it, but am trying to
> understand what you're getting at here.
[...]

> Assuming the map results you give are for scheme, it seems Scheme is
> closing over the value of i instead of the variable itself.

No, Scheme closes over the variable as well, but it's a different
variable at each step.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Nils Goesche

Oct 24, 2002, 1:10:07 PM
Pascal Costanza <cost...@web.de> writes:

> Greg Menke wrote:
> > Erik Naggum <er...@naggum.no> writes:
> >
> >> The test for whether you have assignment or not is not found with the
> >> syntactic sugar. Try this in both Common Lisp and Scheme:
> >>
> >> (do ((i 1 (1+ i))
> >>      (foo () (cons (lambda () i) foo)))
> >>     ((= i 5) foo))
> >>
> >> If typed at the listener, you should now have a list of four closures in
> >> the variable *. Scheme victims may have to write more. (map apply *)
> >> yields (4 3 2 1), while (mapcar #'funcall *) yields (5 5 5 5). The reason
> >> for the dramatic difference should be intuitively evident.
> > I don't have scheme installed & have never used it, but am trying to
> > understand what you're getting at here.
> [...]
>
> > Assuming the map results you give are for scheme, it seems Scheme is
> > closing over the value of i instead of the variable itself.
>
> No, Scheme closes over the variable as well, but it's a different
> variable at each step.

It might be helpful to compare these two examples:

(defun loop-closures ()
  (let ((i 0)
        (ret nil))
    (loop while (< i 5) do
          (push (lambda () i) ret)
          (incf i))
    ret))

(defun recursive-closures ()
  (labels ((help (i acc)
             (if (< i 5)
                 (help (1+ i) (cons (lambda () i) acc))
                 acc)))
    (help 0 nil)))
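For what it's worth, evaluating these two at the listener should make the difference visible (the definitions are repeated here so the snippet is self-contained):

```lisp
;; One shared binding of I, mutated by INCF: every closure sees the
;; final value of I.
(defun loop-closures ()
  (let ((i 0)
        (ret nil))
    (loop while (< i 5) do
          (push (lambda () i) ret)
          (incf i))
    ret))

;; A fresh binding of I per call to HELP: each closure keeps its own.
(defun recursive-closures ()
  (labels ((help (i acc)
             (if (< i 5)
                 (help (1+ i) (cons (lambda () i) acc))
                 acc)))
    (help 0 nil)))

;; (mapcar #'funcall (loop-closures))      => (5 5 5 5 5)
;; (mapcar #'funcall (recursive-closures)) => (4 3 2 1 0)
```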

Steven E. Harris

Oct 24, 2002, 1:26:01 PM
Erik Naggum <er...@naggum.no> writes:

> (do ((i 1 (1+ i))
> (foo () (cons (lambda () i) foo)))
> ((= i 5) foo))
>
> If typed at the listener, you should now have a list of four
> closures in the variable *. Scheme victims may have to write more.
> (map apply *) yields (4 3 2 1), while (mapcar #'funcall *) yields (5
> 5 5 5). The reason for the dramatic difference should be
> intuitively evident.

It's not yet intuitively evident to me, but I'd like to get to the
point where it is. I consulted the HyperSpec on DO and found this
clause:

,----[ HyperSpec: DO Description ]
| Before the first iteration, all the /init-forms/ are evaluated, and
| each /var/ is bound to the value of its respective /init-form/, if
| supplied. This is a /binding/, not an assignment; when the loop
| terminates, the old values of those variables will be restored.
`----

Is the idea that in the example, the /i/ captured in the /foo/
closures is not a constant number, but is instead a reference to some
mutable value, such that evaluating /i/ gives one the current value of
that referenced number?

If so, how could one force the capture of the current value of /i/ to
eventually present (4 3 2 1)? I tried to treat /i/ as a symbol and
capture its value, to no avail:

,----[ GNU CLISP 2.28 ]
| > (do ((i 1 (1+ i))
|        (foo () (cons (lambda () (symbol-value 'i)) foo)))
|       ((= i 5) foo))
|
| (#<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
|  #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
|  #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
|  #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>)
| > (mapcar #'funcall *)
|
| *** - SYMBOL-VALUE: I has no dynamic value
`----

> The semantic distinction is precisely in the binding behavior of the
> variables.

I see what CL is doing in the first example -- that there is a level
of indirection -- but I don't see how one can "dereference" this level
of indirection if desired.

--
Steven E. Harris :: seha...@raytheon.com
Raytheon :: http://www.raytheon.com

Edi Weitz

Oct 24, 2002, 1:34:42 PM
Steven E. Harris <seha...@raytheon.com> writes:

> Is the idea that in the example, the /i/ captured in the /foo/
> closures is not a constant number, but is instead a reference to
> some mutable value, such that evaluating /i/ gives one the current
> value of that referenced number?
>
> If so, how could one force the capture of the current value of /i/
> to eventually present (4 3 2 1)? I tried to treat /i/ as a symbol
> and tried to capture its value, to no avail:
>
> ,----[ GNU CLISP 2.28 ]
> | > (do ((i 1 (1+ i))
> |        (foo () (cons (lambda () (symbol-value 'i)) foo)))
> |       ((= i 5) foo))
> |
> | (#<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
> | #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
> | #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>
> | #<CLOSURE :LAMBDA NIL (SYMBOL-VALUE 'I)>)
> | > (mapcar #'funcall *)
> |
> | *** - SYMBOL-VALUE: I has no dynamic value
> `----

You mean something like this?

* (do ((i 1 (1+ i))
       (foo () (cons (let ((j i)) (lambda () j)) foo)))
      ((= i 5) foo))
(#<Interpreted Function "DO ((I 1 #) (FOO NIL #))" {481F5111}>
#<Interpreted Function "DO ((I 1 #) (FOO NIL #))" {481F5069}>
#<Interpreted Function "DO ((I 1 #) (FOO NIL #))" {481F4FC1}>
#<Interpreted Function "DO ((I 1 #) (FOO NIL #))" {481F4F19}>)
* (mapcar #'funcall *)
(4 3 2 1)

Edi.

Steven E. Harris

Oct 24, 2002, 1:56:44 PM
Nils Goesche <car...@cartan.de> writes:

>> No, Scheme closes over the variable as well, but it's a different
>> variable at each step.
>
> It might be helpful to compare these two examples:

Thank you for these. Please confirm: In recursive-closures, each time
that /help/ calls itself with (1+ i) as an argument, a new value is
created to which a different /i/ variable is bound, so each closure in
/acc/ has captured a different /i/ variable, each pointing to a
different value.

Tim Bradshaw

Oct 24, 2002, 2:01:48 PM
* Greg Menke wrote:

> Assuming the map results you give are for scheme, it seems Scheme is
> closing over the value of i instead of the variable itself. If so,
> are all Scheme closures formed that way?

No. The issue is that the scheme code is something like this in CL:

(labels ((next (i foo)
           (if (= i 5)
               foo
               (next (1+ i) (cons #'(lambda () i) foo)))))
  (next 1 '()))

While the CL code is something like this:

(let ((foo '())
      (i 1))
  (tagbody
   loop
     (when (= i 5) (go end))
     (setf foo (cons #'(lambda () i) foo))
     (setf i (1+ i))
     (go loop)
   end)
  foo)

- In CL there is one binding of I (and FOO) which is repeatedly
changed. In Scheme there are multiple bindings. In each case the
closure simply captures the current (and in CL, unique) binding.

--tim

Erik Naggum

Oct 24, 2002, 2:08:10 PM
* Greg Menke

| Assuming the map results you give are for scheme, it seems Scheme is
| closing over the value of i instead of the variable itself. If so, are
| all Scheme closures formed that way?

Scheme specifies that each new "iteration" has its own bindings, just as
if it had used (local) recursion, while Common Lisp has one binding and
iterates by assigning new values to it. The more basic Scheme facility
"named let" makes this somewhat more explicit.

Steven E. Harris

Oct 24, 2002, 2:23:49 PM
Edi Weitz <e...@agharta.de> writes:

> You mean something like this?
>
> * (do ((i 1 (1+ i))
>        (foo () (cons (let ((j i)) (lambda () j)) foo)))
>       ((= i 5) foo))

Yes. Thank you.

But I'm still left wondering: Here, why does /j's/ binding to /i/
"freeze" the current value of /i/ in /j/? Is it that numbers are
atomic and so are stored with each variable bound to a numeric value?
If so, which types work as "by-value" copies like this as opposed to
"by-reference"?¹

Is it ever possible to have two variables pointing at, say, a number,
where an update to one variable's value is visible to the other? I
would picture that arrangement like this:

+---+
| i |----> 1
+---+ ^
|
+---+ |
| j |------+
+---+

> (setq i 1)
1
> (setq j i)
1
> (incf i)
2
> j
1 <--- How could we make this answer be 2?


Footnotes:
¹ I'm using C/C++ terminology here because I don't know the proper
Lisp terminology. Perhaps that's a sign that the question is moot.

William D Clinger

Oct 24, 2002, 2:39:53 PM
Earlier today I responded to Ray Blaak's remark that iteration and
tail recursion are semantically equivalent by pointing out that this
is not true for people whose concept of iteration is sufficiently
limited. I would like to point out that I submitted that response
before the post quoted below had showed up on my news server.

Erik Naggum wrote:
> No, they are not. The semantic distinction is precisely in the binding
> behavior of the variables. Scheme does not actually offer iteration.
>
> It annoys me that I know more about Scheme minutiae than Scheme freaks.

Will

Erik Naggum

Oct 24, 2002, 2:42:10 PM
* Steven E. Harris <seha...@raytheon.com>

| It's not yet intuitively evident to me, but I'd like to get to the point
| where it is. I consulted the HyperSpec on DO and found this clause:

Investigate what the standard has to say on /bindings/.

| If so, how could one force the capture of the current value of /i/ to
| eventually present (4 3 2 1)?

As you would capture any other value in a closure, create a binding and a
closure over it. E.g., to return a function that captures a binding:

(let ((x <initial>))
  (lambda (&optional (new nil new-p))
    (if new-p
        (setq x new)
        x)))

Call the returned function with a value to set the value of the binding,
with no value to return the initial or last set value.

A more Common Lisp way to do the same would be

(let ((x <initial>))
  (defun x ()
    x)
  (defun (setf x) (new)
    (setq x new)))
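A concrete instance of that pattern, with the placeholder <initial> filled in and COUNTER as a hypothetical name:

```lisp
;; A closed-over binding exposed through a reader function and a
;; corresponding SETF function.
(let ((x 0))
  (defun counter () x)
  (defun (setf counter) (new) (setq x new)))

;; (counter)           => 0
;; (setf (counter) 42) => 42
;; (counter)           => 42
```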

Tim Bradshaw

Oct 24, 2002, 2:44:24 PM
* Steven E Harris wrote:

> But I'm still left wondering: Here, why does /j's/ binding to /i/
> "freeze" the current value of /i/ in /j/? Is it that numbers are
> atomic and so are stored with each variable bound to a numeric value?
> If so, which types work as "by-value" copies like this as opposed to
> "by-reference?"น

Everything works by value, but most values are what C people would
think of as pointers. If you look up something like `call by value'
on the google cll archive you will find enormous flamewars over the
definitions of this kind of thing however.

*However* informally, you can be sure that assigning to a variable
(really: modifying a binding) never alters the value of any other
variable (really: never modifies any other binding). There are three
caveats to this (none of which make it untrue):

1. If the value of the variable is a mutable object, and you
destructively alter that object, then clearly any other variable
which has the same object as its value will see that the object has
been changed. Example:

(let ((i (cons 1 2))
      (j i))
  (setf (cdr i) 3)
  j)

-> (1 . 3)

Note that you have *not* assigned to a variable here.

2. There may be several ways of getting at the *same* variable:

(let ((i 1))
  (list #'(lambda (x) (setf i x))
        #'(lambda () i)))

will return two closures (functions) which refer to the *same*
variable. Calling the first with an argument will change the value
the second will return.

3. Symbol macros. If you need to worry about these, then you
understand the issues.


In all the cases where I've said `variable' above I really mean
`binding'.

--tim

Tim Bradshaw

Oct 24, 2002, 2:48:47 PM
* I wrote:

> (let ((i (cons 1 2))
>       (j i))
>   (setf (cdr i) 3)
>   j)

But I meant LET*. Sorry.

Steven E. Harris

Oct 24, 2002, 2:01:36 PM
Pascal Costanza <cost...@web.de> writes:

>> Assuming the map results you give are for scheme, it seems Scheme is
>> closing over the value of i instead of the variable itself.
>
> No, Scheme closes over the variable as well, but it's a different
> variable at each step.

Is that because Scheme is using recursion for each step of the loop?
If not, that would imply that in Scheme changing the value of variable
/i/ produces a new, distinct /i/ variable. Is that so?

Erann Gat

Oct 24, 2002, 1:57:24 PM
In article <873cqvd...@harris.sdo.us.ray.com>, Steven E. Harris
<seha...@raytheon.com> wrote:

> Erik Naggum <er...@naggum.no> writes:
>
> > (do ((i 1 (1+ i))
> >      (foo () (cons (lambda () i) foo)))
> >     ((= i 5) foo))
> >
> > If typed at the listener, you should now have a list of four
> > closures in the variable *. Scheme victims may have to write more.
> > (map apply *) yields (4 3 2 1), while (mapcar #'funcall *) yields (5
> > 5 5 5). The reason for the dramatic difference should be
> > intuitively evident.
>
> It's not yet intuitively evident to me, but I'd like to get to the
> point where it is. I consulted the HyperSpec on DO and found this
> clause:
>
> ,----[ HyperSpec: DO Description ]
> | Before the first iteration, all the /init-forms/ are evaluated, and
> | each /var/ is bound to the value of its respective /init-form/, if
> | supplied. This is a /binding/, not an assignment; when the loop
> | terminates, the old values of those variables will be restored.
> `----
>
> Is the idea that in the example, the /i/ captured in the /foo/
> closures is not a constant number, but is instead a reference to some
> mutable value, such that evaluating /i/ gives one the current value of
> that referenced number?

No. The idea is that on each iteration, /i/ is a *different variable*
(which happens to have the same name each time), as contrasted with /i/
being the same variable that has a different value each time.

The conceptual leap that is hard to make is that there is a difference
between the *value* of a variable, and the *memory location* where that
value is stored (the binding). There is a difference between an
*assignment* to a variable (storing a new value in the same memory
location) and *rebinding* a variable (assigning a new memory location to
contain the value of that variable). DO works by rebinding, not
assignment, so in the example you get closures over four different
bindings (memory locations) each of which can hold a different value.
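A hedged sketch of that distinction (ASSIGNED and REBOUND are made-up names): the first loop repeatedly assigns to one binding, so every closure sees the final value; the second creates a fresh binding per step, so each closure keeps its own.

```lisp
;; Assignment: one binding of I, overwritten by INCF.
(defun assigned ()
  (let ((fs '())
        (i 0))
    (loop while (< i 3)
          do (push (lambda () i) fs)
             (incf i))
    (mapcar #'funcall fs)))          ; => (3 3 3)

;; Rebinding: a fresh binding of I per step, via the inner LET.
(defun rebound ()
  (let ((fs '()))
    (dolist (i '(0 1 2) (mapcar #'funcall fs))
      (let ((i i))                   ; new binding shadowing the step variable
        (push (lambda () i) fs)))))  ; => (2 1 0)
```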

Did that make sense?

E.

Nils Goesche

Oct 24, 2002, 3:05:46 PM
Steven E. Harris <seha...@raytheon.com> writes:

> Nils Goesche <car...@cartan.de> writes:
>
> >> No, Scheme closes over the variable as well, but it's a different
> >> variable at each step.
> >
> > It might be helpful to compare these two examples:
>
> Thank you for these. Please confirm: In recursive-closures, each
> time that /help/ calls itself with (1+ i) as an argument, a new
> value is created to which is bound a different /i/ variable, so each
> closure in /acc/ has captured a different /i/ variable, each
> pointing to a different value.

That's it.

Nils Goesche

Oct 24, 2002, 3:23:46 PM
I <car...@cartan.de> wrote:

> Steven E. Harris <seha...@raytheon.com> writes:
>
> > Nils Goesche <car...@cartan.de> writes:
> >
> > >> No, Scheme closes over the variable as well, but it's a different
> > >> variable at each step.
> > >
> > > It might be helpful to compare these two examples:
> >
> > Thank you for these. Please confirm: In recursive-closures, each
> > time that /help/ calls itself with (1+ i) as an argument, a new
> > value is created to which is bound a different /i/ variable, so each
> > closure in /acc/ has captured a different /i/ variable, each
> > pointing to a different value.
>
> That's it.

Or, to be more precise, whenever the local function HELP is called
with an argument value <X>, a new binding is established (from the
local variable i to <X>). It is this /binding/ of i that is captured,
and that 1+ creates a new value is not so essential here.
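(Nils's original examples are upthread; a minimal reconstruction of the
recursive version -- names illustrative -- might look like this:

```lisp
;; Each call to HELP establishes a fresh binding of I, so every
;; closure captures a different binding.
(defun recursive-closures (n)
  (let ((acc '()))
    (labels ((help (i)
               (when (< i n)
                 (push (lambda () i) acc)  ; captures THIS call's binding
                 (help (1+ i)))))
      (help 0))
    (mapcar #'funcall (nreverse acc))))

;; (recursive-closures 4) => (0 1 2 3)
```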

Erik Naggum

unread,
Oct 24, 2002, 3:53:22 PM10/24/02
to
* Steven E. Harris <seha...@raytheon.com>
| But I'm still left wondering: Here, why does /j's/ binding to /i/
| "freeze" the current value of /i/ in /j/?

Because it is a new binding.

The tree you are barking up is in the wrong forest.

Jens Axel Søgaard

unread,
Oct 24, 2002, 6:06:37 PM10/24/02
to
Kalle Olavi Niemitalo wrote:
> Barry Margolin <bar...@genuity.net> writes:
>
>> I haven't figured out what
>> the tail-recursive version of Fibonacci looks like (I think it would
>> involve two helper functions, one for the first subcall and the
>> other for the second subcall).
>
> (defun fib (x)
>   (labels ((helper (i fib-i-1 fib-i)
>              (if (>= i x)
>                  fib-i
>                  (helper (+ i 1) fib-i (+ fib-i-1 fib-i)))))
>     (helper 0 1 0)))
>
> I suppose this is cheating.

If that is cheating, what do you think about this formula

Fn = nint( [phi^n / sqrt(5)] ),

where phi is the golden ratio and nint is the nearest integer function.
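A sketch of that closed form in Common Lisp -- exact only while
double-float precision holds (roughly n <= 70):

```lisp
;; Fn = nint(phi^n / sqrt 5), with phi the golden ratio.
(defun fib-binet (n)
  (let ((phi (/ (+ 1 (sqrt 5d0)) 2)))
    (values (round (/ (expt phi n) (sqrt 5d0))))))

;; (fib-binet 10) => 55
```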

--
Jens Axel Søgaard

Pascal Costanza

unread,
Oct 24, 2002, 9:11:12 PM10/24/02
to
Steven E. Harris wrote:
> Pascal Costanza <cost...@web.de> writes:
>
>>>Assuming the map results you give are for scheme, it seems Scheme is
>>>closing over the value of i instead of the variable itself.
>>
>>No, Scheme closes over the variable as well, but it's a different
>>variable at each step.
>
> Is that because Scheme is using recursion for each step of the loop?
> If not, that would imply that in Scheme changing the value of variable
> /i/ produces a new, distinct /i/ variable. Is that so?

No, it's because of the recursion. (But this has been posted several
times by now.)


Pascal

--
Given any rule, however ‘fundamental’ or ‘necessary’ for science, there
are always circumstances when it is advisable not only to ignore the
rule, but to adopt its opposite. - Paul Feyerabend

Vassil Nikolov

unread,
Oct 25, 2002, 12:51:38 AM10/25/02
to
On 24 Oct 2002 19:44:24 +0100, Tim Bradshaw <t...@cley.com> said:

[...]

TB> 2. There may be several ways of getting at the *same* variable:

TB> (let ((i 1))
TB>   (list #'(lambda (x) (setf i x))
TB>         #'(lambda () i)))

TB> will return two closures (functions) which refer to the *same*
TB> variable. Calling the first with an argument will change the value
TB> the second will return.
[...]
TB> In all the cases where I've said `variable' above I really mean
TB> `binding'.

For me, the memorable phrase has been "bindings, not just values"
and entering it into Google will give you chapter and verse.
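A sketch of driving the two closures from Tim's example, to make the
shared binding visible:

```lisp
;; Both closures refer to the SAME binding of I, so the setter's
;; effect is seen by the getter.
(destructuring-bind (set-i get-i)
    (let ((i 1))
      (list #'(lambda (x) (setf i x))
            #'(lambda () i)))
  (list (funcall get-i)            ; 1
        (progn (funcall set-i 42)
               (funcall get-i))))  ; 42 -- same binding
;; => (1 42)
```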

---Vassil.

--
Non-googlable is googlable.

Vassil Nikolov

unread,
Oct 25, 2002, 1:05:03 AM10/25/02
to
On Wed, 23 Oct 2002 18:29:15 GMT, Barry Margolin <bar...@genuity.net> said:

[...]
BM> Another well-known example that uses mid-function recursion is the
BM> Fibonacci number series

Well, this is only superficially non-tail-recursive (for a suitable
meaning of `superficially'...) Now, stuff like Ackermann's
function, that's the Real Thing in Recursion.

[...]
BM> Note: I didn't say it *can't* be done tail-recursively. Any recursive
BM> procedure can be transformed into a tail-recursive version, but the result
BM> isn't always pretty because you have to pass the state around as parameters
BM> rather than just stacking it up in the caller.

Yes, think about the continuation for the above-mentioned beast...

Ray Blaak

unread,
Oct 25, 2002, 1:55:46 AM10/25/02
to
Erik Naggum <er...@naggum.no> writes:
> * Ray Blaak

> | Ultimately it's a non-issue if it's "true" iteration vs tail recursion,
> | since they are semantically equivalent.
>
> No, they are not. The semantic distinction is precisely in the binding
> behavior of the variables.

I buy this. Your example clearly showed it.

> Scheme does not actually offer iteration.

I dispute this. We understand the various machineries involved, so this is a
dispute about the meaning of "iteration".

I submit that iteration is primarily about "doing something again and again",
and not "doing something again and again with mutable bindings".

Consider that one can have an iterative construct that introduces no bindings
at all:

(dotimes 3
(whatever))

Is this iteration? Only if it expands to a loop that mutates an internal
counter? Not if implemented as a tail recursive closure? I think yes always
because the result is the same either way: the implementation is irrelevant.

I also submit that "iterative constructs" in languages are called such because
their concern is about "doing something again and again" without any other
confusing distractions, such as, say, function calls.

Whether any bindings introduced in an iterative construct are mutable or not
is an important part of the semantics, to be sure, but that is a secondary
concern: one is still iterating.

Different languages go either way with regards to these secondary concerns.

If your position is that iteration is "doing something again and again such
that any state change to achieve the next iteration necessarily involves
mutation alone", then you are on firmer ground.

However, in that case I would observe that in those cases where tail-recursion
can be optimized, this is indeed what is happening, even with your previous
wonderful example.

Nils Goesche

unread,
Oct 25, 2002, 10:51:13 AM10/25/02
to
Ray Blaak <bl...@telus.net> writes:

> Erik Naggum <er...@naggum.no> writes:
> > * Ray Blaak
> > | Ultimately it's a non-issue if it's "true" iteration vs tail
> > | recursion, since they are semantically equivalent.
> >
> > No, they are not. The semantic distinction is precisely in the
> > binding behavior of the variables.
>
> I buy this. Your example clearly showed it.
>
> > Scheme does not actually offer iteration.
>
> I dispute this. We understand the various machineries involved, so
> this is a dispute about the meaning of "iteration".

Obviously. However, people have been using the term ``iteration'' for
a very long time now.

> I submit that iteration is primarily about "doing something again
> and again", and not "doing something again and again with mutable
> bindings".

Maybe this can be resolved by looking at how iteration was achieved in
the old times.

> Consider that one can have an iterative construct that introduces no
> bindings at all:
>
> (dotimes 3
> (whatever))
>
> Is this iteration? Only if it expands to a loop that mutates an
> internal counter? Not if implemented as a tail recursive closure? I
> think yes always because the result is the same either way: the
> implementation is irrelevant.

It might make no difference in this particular special case of
iteration. But even then -- when I think of doing the same thing a
number of times again and again, what I have in mind is something like
this instead:

(defun iterate ()
  (prog ((i 0))
   start (print 'hello)
         (setq i (1+ i))
         (if (< i 10)
             (go start))))

(and not some language construct that doesn't even exist, although you
could use (loop repeat n do (whatever))). Not that I'd actually write
it that way :-), but that's what I think is typical iteration, and I
guess most programmers would agree with me that the traditional
meaning of ``iteration'' is exactly this -- not only doing something a
number of times, but implementing it in this way. Other language
constructs to achieve this would only be syntactic sugar.

And if I am right in so far that this is the traditional meaning of
iteration (thoughts?), then I would strongly oppose the idea of
changing its meaning to something else for whatever reason.

For instance, the word ``liberal'' traditionally (and still more or
less in Europe) meant something very different from what it means now
in America. Where I would correctly say ``I am a liberal'', Americans
nowadays act as if I had said ``I am an insane crypto-communist
tree-hugger.'' So, if I want to be understood, I have to say ``I am
an evil right-wing hate-monger.'' Some people might understand what I
mean, however it would still be unclear if I might be what would
traditionally be called a ``liberal'', or a /real/ ``evil right-wing
hate-monger'', as /those/ didn't have their name changed. Which was
the goal of the people who created this language stunt in the first
place, of course: Real liberals are mistaken for right-wingers and
/they/ can pose as harmless liberals hoping that people don't
recognize them for the communist thugs they really are.

So, while name changes might be a valuable tool for people who have a
secret political agenda, they are dangerous if what we're after is
truth and clarity, rather than to manipulate by deception.

Pascal Costanza

unread,
Oct 25, 2002, 11:07:46 AM10/25/02
to
Nils Goesche wrote:

> So, while name changes might be a valuable tool for people who have a
> secret political agenda, they are dangerous if what we're after is
> truth and clarity, rather than to manipulate by deception.

Name changes seldom happen because of secret political agendas. The
kind of name changes you describe, including the "(re)definition" of the
term iteration, are cases of so-called classificatory looping.

See "The Social Construction of What?" by Ian Hacking.
(http://makeashorterlink.com/?M2BE31D32)

Nils Goesche

unread,
Oct 25, 2002, 11:59:19 AM10/25/02
to
Pascal Costanza <cost...@web.de> writes:

> Nils Goesche wrote:
>
> > So, while name changes might be a valuable tool for people who
> > have a secret political agenda, they are dangerous if what we're
> > after is truth and clarity, rather than to manipulate by
> > deception.
>
> Name changes happen seldomly because of secret political
> agendas. The kind of name changes you describe, including the
> "(re)definition" of the term iteration, are cases of so-called
> classificatory looping.
>
> See "The Social Construction of What?" by Ian
> Hacking. (http://makeashorterlink.com/?M2BE31D32)

I am not sure what to make of this. So there is a philosopher named
Hacking, apparently you are one of his followers, who introduced a
term ``classificatory looping'' and who apparently claims that reality
is ``socially constructed'' (*lots of alarm bells ringing*), whatever
he personally means by this (I have read a lot of books by authors who
made a claim like this; all of them were marxists or leftist
relativists; that doesn't mean Hacking is a marxist but might explain
why I am not very motivated to read any of it; I've had enough of
this).

So, in order to respond, I would have to read at least one of his
books, then find out if there is any value to it, then find out if you
are interpreting him correctly, then find out how this applies to the
history of the Left, then find out how this applies to Programming
Language semantics, then find out if your particular way of applying
this philosophy to PL semantics and the history of the Left are
identical with his.

Sorry, but, I think this is a bit too much to ask.

Pascal Costanza

unread,
Oct 25, 2002, 12:25:54 PM10/25/02
to
Nils Goesche wrote:
> Pascal Costanza <cost...@web.de> writes:

>>Name changes happen seldomly because of secret political
>>agendas. The kind of name changes you describe, including the
>>"(re)definition" of the term iteration, are cases of so-called
>>classificatory looping.
>>
>>See "The Social Construction of What?" by Ian
>>Hacking. (http://makeashorterlink.com/?M2BE31D32)
>
> I am not sure what to make of this. So there is a philosopher named
> Hacking, apparently you are one of his followers, who introduced a
> term ``classificatory looping'' and who apparently claims that reality
> is ``socially constructed'' (*lots of alarm bells ringing*),

No, he isn't a social constructivist, he just tries to analyze and
explain what the term "social construction" means, without actually
making a commitment to social constructivism. It's a very insightful and
worthwhile book. There is no need for alarm bells. ;)

> So, in order to respond, I would have to read at least one of his
> books, then find out if there is any value to it, then find out if you
> are interpreting him correctly, then find out how this applies to the
> history of the Left, then find out how this applies to Programming
> Language semantics, then find out if your particular way of applying
> this philosophy to PL semantics and the history of the Left are
> identical with his.
>
> Sorry, but, I think this is a bit too much to ask.

I don't ask you to do anything. However, you /could/ follow the link I
have provided and use Amazon's "look inside" feature.

A little googling revealed the following summary of the chapter that
introduces the term: http://makeashorterlink.com/?G4F212E32

I don't know if this is actually a good summary. The important terms to
watch out for are "indifferent kind", "interactive kind", "natural
sciences", "social sciences" and "classificatory looping".

I hope this helps.

Kalle Olavi Niemitalo

unread,
Oct 25, 2002, 5:52:02 PM10/25/02
to
"Jens Axel Søgaard" <use...@soegaard.net> writes:

> If that is cheating, what do you think about this formula
>
> Fn = nint( [phi^n / sqrt(5)] ),

I think it is not tail-recursive at all... or is it?
There was a discussion about atoms as improper lists....

Ray Blaak

unread,
Oct 26, 2002, 1:45:46 AM10/26/02
to
Nils Goesche <car...@cartan.de> writes:
> And if I am right in so far that this is the traditional meaning of
> iteration (thoughts?), then I would strongly oppose the idea of
> changing its meaning to something else for whatever reason.

I don't think I am changing any meaning. Rather, I am trying to unearth the
essence of the term, such that it spans multiple languages, not just Common
Lisp or Scheme.

You seem to be saying that iteration is essentially a test and a goto that
allows for a repeated action. Fine. Within this definition there is room for
no bindings, mutable bindings, readonly bindings, fresh bindings, etc.
Different languages do all these things.

Also, optimized tail recursion ultimately is implemented as a test and a goto
in the end.
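For instance, the PROG loop upthread can be restated as a tail call; a
compiler that optimizes tail calls turns the self-call into exactly that
test and goto (a sketch, names illustrative):

```lisp
;; Prints HELLO ten times; HELPER's self-call is in tail position,
;; so it can be compiled to a GO rather than a new stack frame.
(defun iterate-tr ()
  (labels ((helper (i)
             (print 'hello)
             (when (< (1+ i) 10)
               (helper (1+ i)))))   ; tail call: becomes a jump
    (helper 0)))
```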

My main point, however, is that it doesn't matter (in terms of being called
iteration, that is). If your construct has the meaning of repeating an action
and nothing else, it doesn't matter how it is implemented -- its focus on
repetition makes it an iterative construct.

There is also a subtlety I need to clarify: I am not saying that explicit,
visible tail recursion is necessarily iteration (opinions can go either way).
I am saying that constructs that deal with repetition *in terms of their
notation* are iterative constructs, based on what they do, their focus on
repetition.

I suppose one could argue that if the stack is growing with each cycle, then
something is wrong, but it is still not clear that it is not iteration. Perhaps it
really is non-tail recursion. Or perhaps it is a test and a goto with memory
leaks involving the local iteration variables.

Eventually the distinctions become a little arbitrary and artificial and there
are fuzzy boundary cases. "Test and goto" is as good a heuristic for the
meaning of "iteration" as any.

But even in that case, Scheme *does* have iteration. It doesn't have CL's
notion of iteration (predefined, that is), since (do ...) gives fresh
bindings, but it is iteration nonetheless, in this general sense.

Furthermore, there is nothing preventing anyone from writing a macro that
implements CL's iteration semantics in Scheme, just as there is nothing
stopping one from implementing the typical Scheme iteration semantics in CL,
as examples have already shown.
