
Please help!!


Mike Chu

Feb 6, 1999
Excuse me, could somebody give me a hand explaining what the
following is all about?
I really have no clue about it, thanks in advance!!!

from Mike
;
; call this with (solve 8) or (solve 12) etc.
;

(defun solve (N)
  (setq Final-Depth N)
  (setq Actions (reverse (make-list N)))
  (search 0 nil Actions))

(defun make-list (N)
  (cond
    ((equal N 0) nil)
    (t (cons N (make-list (- N 1))))))

;
; can anyone see how to add a few characters so that it prints ALL answers?
;

(defun search (Level State Remaining-Actions)
  ; (print State) (terpri)
  (cond
    ((and (ok State) (> Level (- Final-Depth 1))) State)
    ((and (ok State)
          (search (+ Level 1)
                  (cons (list (+ Level 1) (car Actions)) State)
                  (cdr Actions))))
    ((null Remaining-Actions) nil)
    (t (search Level
               (cons (list Level (car Remaining-Actions)) (cdr State))
               (cdr Remaining-Actions)))))

(defun ok (State)
  (cond
    ((null State) t)
    ((null (cdr State)) t)
    ((attacks (car State) (car (cdr State))) nil)
    (t (ok (cons (car State) (cddr State))))))

(defun attacks (S1 S2)
  (cond
    ((equal (first S1) (first S2)) t)
    ((equal (second S1) (second S2)) t)
    ((equal (abs (- (first S1) (first S2)))
            (abs (- (second S1) (second S2))))
     t)
    (t nil)))

Steve Gonedes

Feb 6, 1999

Mike Chu <yac...@hotmail.com> writes:

< Excuse me, could somebody give me a hand explaining what the
< following is all about?
< I really have no clue about it, thanks in advance!!!
<
< from Mike
< ;
< ; call this with (solve 8) or (solve 12) etc.
< ;
<
< (defun solve (N)
<   (setq Final-Depth N)
<   (setq Actions (reverse (make-list N)))
<   (search 0 nil Actions))
<
< (defun make-list (N)
<   (cond
<     ((equal N 0) nil)
<     (t (cons N (make-list (- N 1))))))


This is grossly inefficient. Recursion is great for recursive
problems; `loop' seems better suited for this task IMHO. I would
recommend the following.

(defun make-list-of-numbers (n)
  (loop for number upfrom 1 to n collect number))

(make-list-of-numbers 5) => (1 2 3 4 5)

This also eliminates the call to reverse, which was copying the list
for no reason. You could have called nreverse, which may reuse the
list - but there really is no need to reverse the list at all.

Also, `make-list' is already defined in the COMMON-LISP package
so it's probably not a good idea to redefine it. I would use another
package (actually I would rename the function since it doesn't do what
make-list does). This is of course assuming you're using an ANSI Common
Lisp.

< ... [code] ...

This looks like a queens problem. What don't you understand about it?
Is the code unclear or are you unfamiliar with the techniques of
searching?


Rainer Joswig

Feb 7, 1999
In article <m2zp6rr...@KludgeUnix.com>, Steve Gonedes <jgon...@worldnet.att.net> wrote:

> < (defun make-list (N)
> < (cond
> < ((equal N 0) nil)
> < (t (cons N (make-list (- N 1))))))
>
>
> This is grossly inefficient. Recursion is great for recursive
> problems; `loop' seems better suited for this task IMHO. I would
> recommend the following.

It is a recursive problem. In a Lisp implementation
with tail-call-elimination (-> Scheme) ...

--
http://www.lavielle.com/~joswig

Erik Naggum

Feb 7, 1999
* jos...@lavielle.com (Rainer Joswig)

| It is a recursive problem.

that's kind of intriguing. what distinguishes a recursive _problem_?

#:Erik
--
Y2K conversion simplified: Januark, Februark, March, April, Mak, June,
Julk, August, September, October, November, December.

Steve Gonedes

Feb 7, 1999

jos...@lavielle.com (Rainer Joswig) writes:

I was really thinking about the call to reverse and equal on top of
the recursion. I guess it is a recursive problem.

Can the stack really remain constant for this though? I don't doubt
it, as it seems to be a typical candidate for this kind of
elimination.

I always get confused and mix up the names of these optimizations. I
think the following example gets some kind of special treatment,
called `tail merging' I think, dunno how this relates to
`constant-stack' tail-elimination in recursive functions though.

The example is from the ACL manual, so it may be specific to ACL?

(defun foo (lis)
  (pprint lis)
  (list-length lis))

The following example is what usually pops to mind when hearing
mention of `tail-call-elimination'. What I don't understand is how the
stack could be kept constant with the accumulation happening in
`result'. The variable `result' must be put in the heap right? Maybe
my model of the whole lisp-function call thing is out of whack.

I probably have no idea what I'm talking about, but wouldn't the
function lookup still be just as expensive and maintained just like
normal? Maybe lisp compilers know about this stuff - wouldn't
surprise me either way.

(defun this (x)
  (cond ((zerop x)
         (setf (fdefinition 'this) #'(lambda (x) (+ x 2)))
         (this x)) ; This is tail recursive right?
        ((minusp x) (this (- x)))
        (t (this (1- x)))))

(this 4) => 2
(this 4) => 6

A bit extreme, but the function can't be stashed anywhere special - a
full lookup is required as usual? So if the function location and the
arguments can't be stored on the stack - I just don't get it.

In scheme re-defining a function inside of a function declares it as a
local function. I think I see why now. Two different ways to deal with
a problem I guess? Those being iteration and local redefinition.

Anyway, is the following function candidate for tail-call-elimination?
I just don't see how it is possible - even if the compiler knows no
redefinition is going to occur.

(defun make-nums (count &optional (result ()))
  (cond ((zerop count) result)
        (t (make-nums (1- count) (cons count result)))))

Maybe there is another optimization for when there is no accumulation
of values on the heap in a recursive call?

(defun add-all-numbers (list &optional (result 0))
  (declare (fixnum result))
  (cond ((endp list) result)
        (t (add-all-numbers (cdr list) (+ (car list) result)))))

I dunno, I'm tired and confused. I am always amazed that anyone or group
of people could implement a common lisp as well as they do - so many
damn things to think about. Time to re-read SICP ?


Rainer Joswig

Feb 7, 1999
In article <31273560...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:

> * jos...@lavielle.com (Rainer Joswig)


> | It is a recursive problem.
>

> that's kind of intriguing. what distinguishes a recursive _problem_?

In this case we have a data structure (a Lisp list) that is easily defined
recursive. Recursive algorithms are very natural solutions
for problems like traversal, creation, filtering, etc.

--
http://www.lavielle.com/~joswig

Rainer Joswig

Feb 7, 1999
In article <m2ww1uw...@KludgeUnix.com>, Steve Gonedes <sgon...@worldnet.att.net> wrote:

> I was really thinking about the call to reverse and equal on top of
> the recursion. I guess it is a recursive problem.

You can remove the REVERSE by defining the recursive function a
bit differently and EQUAL makes no big difference (you may
want to use "=").
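
For what it's worth, here is a minimal sketch of what "defining the
recursive function a bit differently" could look like: counting up from a
start index so the list comes out in ascending order and no REVERSE is
needed. The function name and the optional START parameter are mine, not
from the thread.

(defun make-number-list (n &optional (start 1))
  ;; Build (start start+1 ... n) directly, so no REVERSE is needed.
  (if (> start n)
      nil
      (cons start (make-number-list n (+ start 1)))))

;; (make-number-list 5) => (1 2 3 4 5)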

> The example is from the ACL manual, so it may be specific to ACL?
>
> (defun foo (lis)
>   (pprint lis)
>   (list-length lis))

You can implement the call of the function LIST-LENGTH with
a jump.

> (defun this (x)
>   (cond ((zerop x)
>          (setf (fdefinition 'this) #'(lambda (x) (+ x 2)))
>          (this x)) ; This is tail recursive right?
>         ((minusp x) (this (- x)))
>         (t (this (1- x)))))

All the local calls to THIS in the above definition can
be implemented by a JUMP.

The above function looks a bit confused. Replacing the global
definition from within itself makes code hard to read.
The new definition of THIS will be used the next time the global
function THIS is called.

What do you think happens when you call THIS with 0 the first time?

(THIS 0)

Should it return 2?
I fear you will get an endless loop.

We better look at your other examples. ;-)

> In scheme re-defining a function inside of a function declares it as a
> local function. I think I see why now. Two different ways to deal with
> a problem I guess? Those being iteration and local redefinition.

In Common Lisp you would use FLET and LABELS for local
functions.
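
For illustration, a small sketch of the difference (the names here are
made up): FLET introduces local functions that cannot call themselves,
while LABELS makes the local definitions visible to each other, so a
local helper can recurse.

(defun count-down-list (n)
  ;; LABELS is needed here because COUNT-DOWN calls itself;
  ;; with FLET the recursive call would not refer to the local function.
  (labels ((count-down (i acc)
             (if (> i n)
                 acc
                 (count-down (+ i 1) (cons i acc)))))
    (count-down 1 '())))

;; (count-down-list 5) => (5 4 3 2 1)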

> Anyway, is the following function candidate for tail-call-elimination?
> I just don't see how it is possible - even if the compiler knows no
> redefinition is going to occur.
>
> (defun make-nums (count &optional (result ()))
>   (cond ((zerop count) result)
>         (t (make-nums (1- count) (cons count result)))))

MAKE-NUMS is being called in tail-call position. You can
implement this call with a jump instruction. COUNT and
RESULT can be reused.

> (defun add-all-numbers (list &optional (result 0))
>   (declare (fixnum result))
>   (cond ((endp list) result)
>         (t (add-all-numbers (cdr list) (+ (car list) result)))))

Same here.

> I dunno, I'm tired and confused. I am always amazed that anyone or group
> of people could implement a common lisp as well as they do - so many
> damn things to think about. Time to re-read SICP ?

Remember that Common Lisp implementations are not required to
provide this facility. Sigh. Several implementations don't provide
it. Sigh.

--
http://www.lavielle.com/~joswig

Erik Naggum

Feb 7, 1999
* Steve Gonedes <sgon...@worldnet.att.net>

| Anyway, is the following function candidate for tail-call-elimination?
| I just don't see how it is possible - even if the compiler knows no
| redefinition is going to occur.
|
| (defun make-nums (count &optional (result ()))
|   (cond ((zerop count) result)
|         (t (make-nums (1- count) (cons count result)))))

here's the last function call compiled with tail call merging.

leave
movl ebx,[esi+50] ; make-nums
jmp *[edi+39] ; tramp-two

and here it is without:

movl ebx,[esi+50] ; make-nums
call *[edi+39] ; tramp-two
leave
ret

(yeah, the CPU is an Intel Pentium. sorry about that, but at least it
may be well known.)

to understand this, consider that a function call pushes the instruction
pointer on the stack, _jumps_ to the new code, which sets up a stack
frame with an ENTER instruction, does its work, performs LEAVE to unwind
the stack, and finally returns to the caller with a RET instruction which
pops the return address from the stack, and _jumps_ to it.

consider the normal call case first: every call is made within the stack
frame set up at each call, so every call consumes a fairly large amount
of space, typically 50 to 100 bytes. then all it does when it returns is
unwind the stack frame and return to its caller, which does the same
as many times as the function has called itself. pretty boring. (we can
all be thankful that CPU's aren't self-aware or they'd just revolt.)

instead of making a new function call, we can unwind the stack frame if
there is nothing on it that we need (even Intel has half a register left
that can be used to pass arguments), and then call the function with our
caller's return address still on the stack. all we keep around is the
address of our caller. everything else is still set up at every call,
but it happens over and over in the _same_ area of memory instead of new
memory at every call.

some people believe that a tail call is as efficient as a loop, but they
either don't know their CPU very well (typical of some people's arrogance
towards their tools), or they believe in magic or propaganda, neither
of which is a proven reliable source of technical information. it is
generally unwise to assume that you can use an existing stack frame when
entering a function anew, even the _same_ function. to do that reliably,
you use a looping construct that has control over the variable assignment
and works _inside_ the stack frame and the known function body.

in the event that the compiler actually transforms a self-recursive tail
call into a loop, things turn a bit different, since we're no longer
calling the function via the variable's value (in Scheme). to make such
a transformation, one would have to know that no side effects are made
during the loop, as you have pointed out. so generally, you can't, and
the special cases where you can are probably a lot easier done with a
real loop, anyway, despite the attraction of syntactic recursion.

Erik Naggum

Feb 7, 1999
* jos...@lavielle.com (Rainer Joswig)
| It is a recursive problem.

* Erik Naggum <er...@naggum.no>


| that's kind of intriguing. what distinguishes a recursive _problem_?

* jos...@lavielle.com (Rainer Joswig)


| In this case we have a data structure (a Lisp list) that is easily defined
| recursive. Recursive algorithms are very natural solutions
| for problems like traversal, creation, filtering, etc.

pardon me for insisting, but definitions, algorithms, implementations,
and _solutions_ are easily described as recursive. my question relates
to calling things recursive _problems_. if it is "a problem for which a
recursive solution is deemed most aesthetic", it is just meaningless, but
if it actually means something specific, I'd like to know. I find the
term puzzling, apart from the more or less meaningless interpretation
that doesn't refer to the problem at all.

Johan Kullstam

Feb 7, 1999
Steve Gonedes <jgon...@worldnet.att.net> writes:

> Mike Chu <yac...@hotmail.com> writes:
>
> < (defun make-list (N)
> < (cond
> < ((equal N 0) nil)
> < (t (cons N (make-list (- N 1))))))
>
>
> This is grossly inefficient. Recursion is great for recursive
> problems; `loop' seems better suited for this task IMHO. I would
> recommend the following.
>

> (defun make-list-of-numbers (n)
>   (loop for number upfrom 1 to n collect number))
>
> (make-list-of-numbers 5) => (1 2 3 4 5)
>
> This also eliminates the call to reverse, which was copying the list
> for no reason. You could have called nreverse, which may reuse the
> list - but there really is no need to reverse the list at all.

well, you could eliminate the need for nreverse by building the list
backwards. i do this all the time. for example:

(defun enumerate (n)
  "build list (1 2 3 ... n) by working backwards. n must be positive."
  (let ((acc nil))
    (labels ((rec (n acc)
               (if (zerop n)
                   acc
                   (rec (- n 1) (cons n acc)))))
      (rec n acc))))

i probably don't need to drag `acc' around in the recursor. i haven't
bothered to check if `enumerate' is taken. if it is, call it
something else. there are just so many pre-defined things out
there...

--
Johan Kullstam [kull...@ne.mediaone.net] Don't Fear the Penguin!

Kent M Pitman

Feb 7, 1999
Erik Naggum <er...@naggum.no> writes:

> * jos...@lavielle.com (Rainer Joswig)
> | It is a recursive problem.

> ...


> pardon me for insisting, but definitions, algorithms, implementations,
> and _solutions_ are easily described as recursive. my question relates
> to calling things recursive _problems_. if it is "a problem for which a
> recursive solution is deemed most aesthetic", it is just meaningless, but
> if it actually means something specific, I'd like to know. I find the
> term puzzling, apart from the more or less meaningless interpretation
> that doesn't refer to the problem at all.

Warning: Kent's going to make an analogy to a Clintonesque issue again.

I had to laugh at this because it touched a place in my analogy engine
that I had not previously focused on as a computational issue.

I normally associate this issue with ethics. Consider Clinton's dilemma
about getting Monica a job. Some say he shouldn't have helped her because
she was a friend and it was a conflict of interest. Clinton has said, on
occasion, things that lead me to believe he has what I see as the fully
symmetric position--that it wasn't fair to not try to get her a job just
because she was a friend, since he has helped many people who are not
friends. We can all (in another forum!) debate about whether he should have
been in that situation, but it's clear to me that it's the question, not
the answer, that is the conflict of interest.

Another related point I would make, again from recent US politics:
Congress claims it cannot find Clinton innocent because that would
make a mockery of the rule of law. Weird. To me, the rule of law
says they should make a choice between conviction and innocence. To
say that there is no choice for innocence is to say there is no choice
to be made. To me, to say there is no choice to be made is what is a
mockery of the rule of law.

I have often said in political arenas: there are no political answers,
only political questions. If a question is political, the answer is
apolitical. People often have a political way of viewing a situation
that obscures this symmetry, but it's there.

I've said essentially the same in ethics: there are no ethical choices,
only ethical dilemmas.

And so returning to the computational arena, and `informed by recent US
politics', here's my take:

I think there probably is something to joswig's implicit claim (not
his statement per se quoted above, but his choice of terms in the
statement quoted) that there is such a thing as a recursive problem.
At least, I know of no general purpose way to reduce a recursive solution
into an iterative one. (Well, let me constrain that: if I know of such
a technique, I don't presently acknowledge it as such. I do know about
continuation passing and CPS conversion, and I do understand how
interpreters manage continuations as data in Scheme so that they can
implement even fully recursive programs as a kind of iterative procedure.
But I haven't convinced myself this is the same...) But certainly, I
know of general purpose equivalences between conventional "do loops"
and "tail recursion". And it's commonly remarked for this reason that
"tail recursion" is not really "recursion" but rather just a syntactic
way of expressing "iteration". So maybe what I think is that when Erik
shows that a tail recursion compiles to an iteration, the
compiler has done a proof that certain problems are equivalent to
non-recursive problems. And I suppose I think that's not really a proof
that there are no recursive problems--it's only a proof that there is
some set of problems that may appear to be recursive because they use
recursive solutions but that turn out not to be when the recursion is
folded out. Maybe that means all solutions are of this kind--capable of
being reduced to a non-recursive form--and Erik is right. Or maybe
there are some solutions that don't yield to this technique. In which case,
Rainer is right. Or maybe, as is more likely, the notion of a recursive
problem is like the notion of a recursive solution, and is just a way of
saying that "if you think about the problem in this formalism, you'll get
a useful point of view that may yield leverage" and not a way of saying
"this problem is fundamentally of a certain representation class".

Have I said recently that there are no total solutions, only trade-offs?
Perhaps the choice to view something as recursive is only a trade-off.
But that doesn't mean there's no value in viewing it that way. Or that
there's only value in viewing it that way.

Our job should be to provide people with options in how to view things,
so they can optimize their pain and progress according to subjective
measures that optimize quality of life.

Does this help or just obfuscate things, Erik? Or did you fall asleep
before getting to this question...? :-)

Rainer Joswig

Feb 7, 1999
In article <sfw679e...@world.std.com>, Kent M Pitman <pit...@world.std.com> wrote:

<many complicated political/philosophical/CS topics and something
about Bill Clinton and Monica Lewinsky. Especially this latter
part I could not really follow - maybe my thoughts are always
wandering around when it comes to this topic...>

...

...

...

Well, I just wanted to respond to the original statement
where the poster said that it is not a "recursive problem".
Hopefully my attempt did not puzzle him too much. If I left him
even more confused, then I have to apologize.

--
http://www.lavielle.com/~joswig

Erik Naggum

Feb 8, 1999
* Erik Naggum <er...@naggum.no>
| if [recursive problem] is "a problem for which a recursive solution is

| deemed most aesthetic", it is just meaningless, but if it actually means
| something specific, I'd like to know.

* Kent M Pitman <pit...@world.std.com>


| Maybe that means all solutions are of this kind--capable of being reduced
| to a non-recursive form--and Erik is right.

that was not at all what I was trying to say or ask. all I was asking
was: is it valid to talk about _problems_ as being recursive. as I have
already stated, the obvious meaning is "a problem for which a recursive
solution is deemed most aesthetic", which is like defining philosophy as
"anything whoever is called a philosopher does".

it seems that there is nothing more to "recursive problem" than just that
-- a problem to which a recursive solution is deemed most aesthetic.

so, in conclusion, nothing distinguishes a recursive problem from any
other problem other than the fact that after all has been said and done,
the recursive solution was most appropriate. I had initially hoped there
had been somewhat more of an analytical approach to this, which would
have meant that "recursive problem" actually added something to the
nature of recursion or of problems. maybe some other time.

Kellomäki Pertti

Feb 8, 1999
In article <31273767...@naggum.no> Erik Naggum <er...@naggum.no> writes:
   some people believe that a tail call is as efficient as a loop, [...]

   it is generally unwise to assume that you can use an existing stack
   frame when entering a function anew, even the _same_ function. to do
   that reliably, you use a looping construct that has control over the
   variable assignment and works _inside_ the stack frame and the known
   function body.

Could you elaborate on this? I can believe that there may be some
complications in more complicated situations, but I fail to see the
problems in compiling e.g.

(define (last lst)
  (if (null? (cdr lst))
      (car lst)
      (last (cdr lst))))

Or are you referring to the possibility of a redefinition?
--
Pertti Kellomäki, Tampere Univ. of Technology, Software Systems Lab

Harley Davis

Feb 8, 1999
>* Erik Naggum <er...@naggum.no>

> so, in conclusion, nothing distinguishes a recursive problem from any
> other problem other than the fact that after all has been said and done,
> the recursive solution was most appropriate. I had initially hoped there
> had been somewhat more of an analytical approach to this, which would
> have meant that "recursive problem" actually added something to the
> nature of recursion or of problems. maybe some other time.


I guess the naive interpretation of "recursive problem" would be a problem
that is itself described in terms of a simpler version of itself and one or
more degenerate cases. I'm having trouble thinking of a reasonable example
but the concept doesn't seem completely nutty. However, I suppose one could
object that I described a "recursive problem definition" and that such
problems could perhaps be otherwise described in non-recursive ways, and
that the problem itself is not in the category of objects to which the word
"recursive" applies.

-- Harley

Barry Margolin

Feb 8, 1999
In article <79nb45$c15$1...@news1-alterdial.uu.net>,

Harley Davis <spamless_davis@spamless_ilog.com> wrote:
>I guess the naive interpretation of "recursive problem" would be a problem
>that is itself described in terms of a simpler version of itself and one or
>more degenerate cases. I'm having trouble thinking of a reasonable example
>but the concept doesn't seem completely nutty.

The Fibonacci series is naturally described in a recursive manner.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Don't bother cc'ing followups to me.

Thomas A. Russ

Feb 8, 1999

Erik Naggum <er...@naggum.no> writes:

> pardon me for insisting, but definitions, algorithms, implementations,
> and _solutions_ are easily described as recursive. my question relates
> to calling things recursive _problems_.

I believe that there are. A quick perusal of my complexity books didn't
help jog any specific memories, but I'll make an attempt to give an
example anyway.

Some problems really are inherently recursive in nature. For example,
the enumeration of the leaves of a (binary or other) tree. You can't
write an iterative algorithm for solving this problem in general, since
you would need (potentially) arbitrary amounts of memory in order to
hold the state of your traversal.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Kent M Pitman

Feb 9, 1999

There are two problems, which by their nature operate at different
meta-levels.

The trivial problem is that the language does not require a compiler
to optimize what you have just said if you are using CL. Scheme
requires tail call elimination, but CL does not. Expecting a compiler
to perform any service not required of it (even one you can convince
yourself it would be a good idea to do and one that it would not be
non-conforming not to do) is a potential pitfall by definition. The
minimal expectation of a compiler is that it will do what it is
defined to do, and since it is not defined to do this, it is not a
reasonable minimal expectation to expect that "at minimum" it will do
this.

The meta-issue is that the reason that the language does not require
this is that it defies stack debugging, and some people like to do that.
Debugging programs in Scheme is way hard, IMO, because often Scheme
debuggers do not have enough context to answer the simple and most
important question I always have when I get into the debugger, which
is "how did I get here"? CL permits an implementation to be driven by
its commercial marketplace in deciding whether tail call elimination
is more important or whether stack debugging is more important or whether
people should use declarations or compiler options to control it. But
a cost of this is that it's not good to use recursion on data structures
of unbounded size because even if those programs run in some implementations,
they are not (effectively by definition) portable.

Kent M Pitman

Feb 9, 1999
t...@sevak.isi.edu (Thomas A. Russ) writes:

> Some problems really are inherently recursive in nature. For example,
> the enumeration of the leaves of a (binary or other) tree. You can't
> write an interative algorithm for solving this problem in general, since
> you would need (potentially) arbitrary amounts of memory in order to
> hold the state of your traversal.

Well, I'm not sure that's true. Don't you just need log(n) amount of
space? You either use the fact that the stack in your programming
language accumulates log(n) space or else you use no stack and carry
log(n) data. I usually prefer using the stack because it gc's faster. :-)
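
A sketch of the "no stack, carry the data yourself" variant Kent mentions:
enumerating the leaves of a tree iteratively with an explicit list of
pending subtrees. The representation (a cons is an interior node, an atom
is a leaf) and the function name are assumptions for the example only.

(defun leaves (tree)
  ;; Keep a worklist of pending subtrees instead of recursing; the
  ;; space used is roughly proportional to the depth of the tree.
  (let ((pending (list tree))
        (result '()))
    (loop while pending do
          (let ((node (pop pending)))
            (if (consp node)
                (progn (push (cdr node) pending)
                       (push (car node) pending))
                (push node result))))
    (nreverse result)))

;; (leaves '((1 . 2) . (3 . (4 . 5)))) => (1 2 3 4 5)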

I do agree with your premise, though, that there are recursive problems.
I just think that I'd express the justification differently. I'd say that
if the best you can do is reduce the problem to something that requires
that you maintain something that looks like a stack, then it's a recursive
problem. Or, at least, that's my working theory. I haven't convinced myself
I've thought hard enough to have a 100% firm stand on this--I'm just pondering
this idly for fun between "real work".


Christopher R. Barry

Feb 9, 1999
Barry Margolin <bar...@bbnplanet.com> writes:

> In article <79nb45$c15$1...@news1-alterdial.uu.net>,
> Harley Davis <spamless_davis@spamless_ilog.com> wrote:
> >I guess the naive interpretation of "recursive problem" would be a problem
> >that is itself described in terms of a simpler version of itself and one or
> >more degenerate cases. I'm having trouble thinking of a reasonable example
> >but the concept doesn't seem completely nutty.
>
> The Fibonacci series is naturally described in a recursive manner.

And I've never seen any Common Lisp or C compiler compile the
exponential recursive version of the fibonacci function to the
equivalent code of the efficient recursive or iterative version.

Does anyone have any insightful commentary on why current compilers
can't optimize out the millions of unneeded function calls for code
like the below? Finding the 40th term takes the better part of a
minute on my Pentium 200MMX, while only a small fraction of a second
with the efficient versions.


(defun fibonacci (n)
  (if (> n 2)
      (+ (fibonacci (- n 1))
         (fibonacci (- n 2)))
      1))


#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  long double i, fibonacci(long double);

  printf("%.0Lf\n", fibonacci(atof(argv[1])));

  return EXIT_SUCCESS;
}

long double fibonacci(long double n)
{
  if (n > 2.)
    return (fibonacci(n - 1.) + fibonacci(n - 2.));
  else
    return 1.;
}

Christopher

Erik Naggum

Feb 9, 1999
* Erik Naggum <er...@naggum.no>

| it is generally unwise to assume that you can use an existing stack frame
| when entering a function anew, even the _same_ function. to do that
| reliably, you use a looping construct that has control over the variable
| assignment and works _inside_ the stack frame and the known function body.

* p...@kuovi.cs.tut.fi (Kellomäki Pertti)


| Could you elaborate on this? I can believe that there may be some
| complications in more complicated situations, but I fail to see the
| problems in compiling e.g.
|
| (define (last lst)
|   (if (null? (cdr lst))
|       (car lst)
|       (last (cdr lst))))
|
| Or are you referring to the possibility of a redefinition?

no, redefinition is an additional reason to avoid optimizing away the
stack frame release.

let's consider a function call in all its glory. the arguments are
evaluated before anything interesting happens, but we'll skip that part
and just assume that all the values to pass are available. there are
several ways to pass arguments to functions, but let's consider the
simplest case: a stack-allocated vector. in languages with variable
numbers of arguments, you need to pass the length of this vector. that's
typically passed in a register, so not to complicate things, let's assume
it is not part of the vector. (the stack-allocated vector is typically
set up with PUSH instructions, but it helps to view it as a vector.)

the actual function call is made, and this is typically very fast, like
keeping the next instruction address somewhere and transferring control to
the start of the new function, but in Lisp there typically needs to be
some lookup to find the actual address. this is usefully handled by a
trampoline -- a function that does something important but does not
disturb the stack frame other than in the expected ways. (it's called a
trampoline because you jump on/to it and it helps you jump elsewhere.)

we have arrived at the start of our code, where there's a prologue
waiting for us. the prologue may check the argument count, do call
counting statistics, check for interrupts, check for space on the stack,
set up a stack frame, process the argument list (&optional with default
values, &key arguments), initialize &aux variables, etc, and the order of
these things is determined by a lot of factors. the interesting things
to us are: error handling, the location of the argument vector, the space
requirements, the initialization of lambda-bound variables.

the body of the function is not relevant to us, but after the body (in
time) is an epilogue that needs to clean up from the prologue, unwind the
stack, and return to wherever we were called from. a function is allowed
to maintain an inconsistent stack state as long as it unwinds correctly.
also note that CPUs suffering from register starvation (Intel) use the
stack for temporary memory in some cases, and that local variables are
allocated as part of this procedure.

now, to make a tail-call that does not need to unwind the stack, and
which jumps to the location after the stack frame has been set up, we
need to know the prologue very intimately. of course, a compiler writer
does that, but we also need to make sure that everything that happens
before the stack frame is set up is handled by the tail-call jump.
suppose that this is a non-trivial amount of work in the general case, such as
computing the size of a stack-allocated &rest list and such. is it worth
the trouble to recognize simple functions rather than go through the
epilogue and the prologue again?

and since you cannot actually optimize away the self-tail call completely
in the general case, it may be a _lot_ of work to recognize the simple
case in Common Lisp. now, Scheme has decided to make this decision pay
off, but I do wonder whether they actually manage to make tail calls as
efficient as loops. it is not possible in the general case.

further complications, by the way, occur when you make a tail call from
inside a let binding, and that's what we normally do. if the values are
stack-allocated, the semantics of tail-call changes from a jump before
the epilogue to a jump after the epilogue. a special binding prohibits
the epilogue from running before the callee has returned. etc, etc.
these further complications are, however, fairly easy to spot and argue
with. the harder problems occur when even their absence makes it unsafe
to assume you can just jump directly to the point after the prologue.

now for an illustrative example. when compiled with (optimize (speed 3)
(safety 0) (debug 0)), the simple function

(defun foo (x y z)
  (list x y z))

should cause a very simple jump to the LIST function, or whatever takes
care of it with three arguments. here's the SPARC disassembly with
Allegro CL 4.1.3:

ld [%g4 + 95], %g2 ; qlist3
jmp %g2 + 0
xor %g0, #x3, %g3

(the SPARC is explicitly pipelined, so the XOR is actually executed
before the first instruction at the jump target. %G0 is a constant 0, so
this sets %G3 to 3. that's the argument count register. where the other
argument values are is immaterial since list is given the exact same
arguments as foo got.)

the Intel disassembly with ACL 5.0 is somewhat different.

pushl ebp
movl ebp,esp
pushl esi
subl esp,$44
addl esp,$12
pushl [ebp+16] ; (argument 2)
pushl edx
pushl eax
xorl ecx,ecx
movb cl,$3
call *[edi+95] ; qlist3
leave
movl esi,[ebp-4]
ret

note that the tail-call did in fact not get merged, but consider the
slightly simpler function

(defun bar (x y)
  (list x y))

the SPARC assembly is identical to the previous function, except that
QLIST2 is called instead of QLIST3. the interesting thing happens to the
Intel code:

xorl ecx,ecx
movb cl,$2
jmp *[edi+47] ; qlist2

in other words, the number of arguments can itself be an impediment to
certain optimizations, and the number may vary from architecture to
architecture.

your particular function compiles to a self-tail-call merge, but still
goes through the symbol-function slot of the symbol naming the function.

Marco Antoniotti

Feb 9, 1999

cba...@2xtreme.net (Christopher R. Barry) writes:

> Barry Margolin <bar...@bbnplanet.com> writes:
>
> > In article <79nb45$c15$1...@news1-alterdial.uu.net>,
> > Harley Davis <spamless_davis@spamless_ilog.com> wrote:
> > >I guess the naive interpretation of "recursive problem" would be a problem
> > >that is itself described in terms of a simpler version of itself and one or
> > >more degenerate cases. I'm having trouble thinking of a reasonable example
> > >but the concept doesn't seem completely nutty.
> >
> > The Fibonacci series is naturally described in a recursive manner.
>
> And I've never seen any Common Lisp or C compiler compile the
> exponential recursive version of the fibonacci function to the
> equivalent code of the efficient recursive or iterative version.

Because the fibonacci specification you wrote is *inherently*
recursive. You can optimize the last tail recursion but not the first
one. And the iterative version of fibonacci is inherently
different. Asking a compiler to rewrite your inherently recursive code
would be quite a request indeed.

To this add that AFAIK not very many C compilers can do tail recursion
optimization. Of the CL compilers around, AFAIK the only one that does
not do the right thing on tail calls (at least not completely) is GCL.

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - (0)6 - 68 10 03 17, fax. +39 - (0)6 - 68 80 79 26
http://www.parades.rm.cnr.it

Stig Hemmer

Feb 9, 1999
cba...@2xtreme.net (Christopher R. Barry) writes:
> Does anyone have any insightful commentary on why current compilers
> can't optimize out the millions of unneeded function calls for code
> like the below?

[Doubly recursive fibonacci]

The short answer is that the programming work wouldn't be worth the
effort.

It is obvious to smart humans how to optimize fibonacci, but it is
much harder to describe to a computer in a way that will be useful.

The problem is that you will spend a lot of effort making an
optimization routine that will only be used very, very seldom. Most
programs do _not_ look like fibonacci.

And even those programs that can in fact be improved the way fibonacci
can are different in ways that are unimportant to a human, but which
are critical to another computer program, the compiler.

Stig Hemmer,
Jack of a Few Trades.

Erik Naggum

Feb 9, 1999
* Stig Hemmer <st...@pvv.ntnu.no>

| The problem is that you will spend a lot of effort making an
| optimization routine that will only be used very, very seldom. Most
| programs do _not_ look like fibonacci.

well... apart from wanting to avoid the first recursive function call
(which can be done with a simple iteration from a lower limit to the
value in question, but this is not something I want the compiler to
decide for me), optimizing fibonacci is mostly a matter of memoizing. it
_would_ be useful if a compiler could see that there is no way a function
would return a different value if given the same arguments, i.e.,
automated detection of side-effect-free functions and various means of
propagation of this information.
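
For concreteness, here is what the memoizing could look like when done by
hand (a sketch; the hash table and the function name are illustrative, and
this is the programmer doing it rather than the compiler):

(let ((table (make-hash-table)))
  (defun memo-fibonacci (n)
    ;; cache each result so every argument is computed only once,
    ;; turning the exponential double recursion into linear work
    (or (gethash n table)
        (setf (gethash n table)
              (if (> n 2)
                  (+ (memo-fibonacci (- n 1))
                     (memo-fibonacci (- n 2)))
                  1)))))

;; (memo-fibonacci 40) => 102334155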

Barry Margolin

Feb 9, 1999
In article <31275413...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
> and since you cannot actually optimize away the self-tail call completely
> in the general case, it may be a _lot_ of work to recognize the simple
> case in Common Lisp. now, Scheme has decided to make this decision pay
> off, but I do wonder whether they actually manage to make tail calls as
> efficient as loops. it is not possible in the general case.

I'm not sure that they make the claim that tail-call elimination is as
efficient as looping, merely as *space-efficient* as iteration. It allows
you to write recursive functions that will never die of stack overflow.

Note that in the case of direct tail-recursion, the compiler often does
have enough information available to compile away the prologue and
epilogue. It can tell that the function takes a fixed number of arguments,
and that the call has the correct number, and simply rebind the lambda
variables and do a goto as in a DO loop.
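
For illustration, the correspondence at the source level (a sketch with
made-up names): a direct tail recursion and the DO loop it amounts to,
where the lambda variables are simply rebound and control jumps back to
the top.

(defun sum-list (list &optional (acc 0))
  ;; direct tail recursion: the recursive call is the last thing done
  (if (endp list)
      acc
      (sum-list (cdr list) (+ (car list) acc))))

(defun sum-list-do (list &optional (acc 0))
  ;; the same computation as the DO loop a compiler could produce:
  ;; rebind the variables in parallel and go around again
  (do ((l list (cdr l))
       (a acc (+ (car l) a)))
      ((endp l) a)))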

I haven't read the Sussman/Steele "Lambda: the Ultimate ..." papers in quite
a while, but I can't imagine they didn't address most of the issues you
raise.

Juliusz Chroboczek

Feb 9, 1999
Erik Naggum:

EN> further complications, by the way, occur when you make a tail call
EN> from inside a let binding, and that's what we normally do.

There's a case that I find more disturbing. Consider the following:

(defun my-fact (n &optional (m 1))
  (declare (integer n m))
  (cond
    ((= n 0) "A string to break the typechecker")
    ((= n 1) m)
    ((> n 1) (the integer (my-fact (- n 1) (* n m))))
    (t (error ...))))

If the THE declaration were removed, the third clause would become a
tail self call. However, as it stands, it is only a tail call if the
compiler decides not to check the validity of the THE. This is a
decision which is typically made depending on the OPTIMIZE settings.

Therefore, the recursion elimination is only likely to happen for low
safety settings of the OPTIMIZE declaration. This means that if the
standard were to mandate tail-call elimination, the semantics of a
correct program could depend on the level of OPTIMIZE-ation.

(Why's that? Consider

(defun do-nothing-forever ()
  (do-nothing-forever))

In a language such as Scheme, the programmer may rely on this function
to loop forever; in Common Lisp, which does not mandate elimination of
tail recursion, he cannot. This is an important difference in the
semantics of the two languages.)

J.

Tim Bradshaw

Feb 9, 1999
* Stig Hemmer wrote:

> The problem is that you will spend a lot of effort making an
> optimization routine that will only be used very, very seldom. Most
> programs do _not_ look like fibonacci.

And it's trivial to write, at the user level, a memoized version of
such functions which does not have the problem. So there's really no
reason a compiler should want to optimise this.

(In CL you can write DEF-MEMOIZED-FUNCTION really pretty easily.
Peter Norvig's book has a version, and there are others floating
around the net I'm sure.)
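
For what it's worth, one way such a macro could look -- a sketch, not
Norvig's version, assuming a simple required-argument lambda list and
keying an EQUAL hash table on the argument list:

(defmacro def-memoized-function (name args &body body)
  ;; like DEFUN, but results are cached per argument list
  (let ((table (gensym "TABLE"))
        (key   (gensym "KEY"))
        (value (gensym "VALUE"))
        (found (gensym "FOUND")))
    `(let ((,table (make-hash-table :test #'equal)))
       (defun ,name ,args
         (let ((,key (list ,@args)))
           (multiple-value-bind (,value ,found) (gethash ,key ,table)
             (if ,found
                 ,value
                 (setf (gethash ,key ,table) (progn ,@body)))))))))

;; (def-memoized-function fib (n)
;;   (if (> n 2) (+ (fib (- n 1)) (fib (- n 2))) 1))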

--tim

--
Tim Bradshaw, Formerly system manager,
Artificial Intelligence Applications Institute,
University of Edinburgh
(Present address: t...@tfeb.org)

Harvey J. Stein

Feb 9, 1999
cba...@2xtreme.net (Christopher R. Barry) writes:

> Does anyone have any insightful commentary on why current compilers
> can't optimize out the millions of unneeded function calls for code

> like the below? Finding the 40th term takes the better part of a
> minute on my Pentium 200MMX, while only a small fraction of a second
> with the efficient versions.

You're asking why the compilers can't replace the algorithm you
specified with a different algorithm that's faster? Maybe because
they're compiling your code, not writing it? Maybe because analysing
your algorithm and replacing it with a better algorithm is
equivalent to the stopping problem?

--
Harvey J. Stein
BFM Financial Research
hjs...@bfr.co.il

Christopher R. Barry

Feb 10, 1999
hjs...@bfr.co.il (Harvey J. Stein) writes:

> cba...@2xtreme.net (Christopher R. Barry) writes:
>
> > Does anyone have any insightful commentary on why current compilers
> > can't optimize out the millions of unneeded function calls for code
> > like the below? Finding the 40th term takes the better part of a
> > minute on my Pentium 200MMX, while only a small fraction of a second
> > with the efficient versions.
>
> You're asking why the compilers can't replace the algorithm you
> specified with a different algorithm that's faster? Maybe because
> they're compiling your code, not writing it?

Is not tail-call elimination tweaking the fundamental algorithm behind
the scenes? Doing it for double recursion would be taking this a big
step furthur, but a lot of tree operations and complex sorts are most
naturally expressed as double recursive algorithms and I'm sure there
are even more complex examples that I've yet to acquaint myself
with. Freeing a programmer to not spend time and sacrifice
maintainability and readability converting a double recursive prototype
into production code would be a nice win.

But thinking hard about this I do realize that there are more
important things to work on in a compiler. But in an ideal world, where
compiler design, implementation and maintenance are not prohibitively
expensive and time-consuming and there isn't already a ton of other
work to be done, it would be a nice thing.

> Maybe because analysing your algorithm and replacing it with a
> better algorithm is equivalent to the stopping problem?

If the "stopping problem" is different from the halting problem then
I'm not familiar with it. Otherwise, it kind of depends on how you're
defining "analyze".

"Recognizing" certain usage patterns and forms as having properties
that allow certain provably equivalent reductions does not necessarily
get you into trouble ("provably equivalent" being the hard part, but
provably or intuitively not impossible or intractable at least). In
fact it is done all the time, just maybe not at this level yet, or for
any time soon to come....

I'd like to know more about the specific technical hurdles involved in
this type of thing, but this probably wasn't the right newsgroup to
bring it up in. And the answer is quite probably above my current
level anyways, but it would give me a start.

Christopher

Paul Rudin

Feb 10, 1999
Kent M Pitman <pit...@world.std.com> writes:

> I think there probably is something to joswig's implicit claim (not
> his statement per se quoted above, but his choice of terms in the
> statement quoted) that there is such a thing as a recursive problem.
> At least, I know of no general purpose way to reduce a recursive solution
> into an iterative one. (Well, let me constrain that: if I know of such
> a technique, I don't presently acknowledge it as such.

Depends what you mean by this. You can write a compiler that will
translate arbitrary Lisp code into a Turing machine (or say, a Fortran
77 program) that will produce the same outputs on the same inputs.

Whether this is "iterative" or not I'm not sure, but there's certainly no
explicit recursion.

More generally the question turns on whether there is a distinction
between some abstract platonic ideal of a particular problem and a
given description thereof. Any algorithm can be described without
explicit reference to recursion. Similarly any algorithm can be
described using recursion with no explicit iteration.

In normal parlance I would argue that "recursive problem" carries
enough meaning to be a useful phrase: they are problems where the data
structures and processes involved in the description are naturally
expressed in recursive terms.

Of course what seems "natural" varies from person to person, but there
are many data structures, e.g. lists and trees, that it would be
perverse to describe without using recursion.


Whether or not a given compiler optimises away tail recursion is
neither here nor there. The "recursiveness" or "iterativeness" of a
problem surely lies in the problem description or associated program
implementation, not in the machine code that a compiler eventually
produces.


Marco Antoniotti

Feb 10, 1999

cba...@2xtreme.net (Christopher R. Barry) writes:

> hjs...@bfr.co.il (Harvey J. Stein) writes:
>
> > cba...@2xtreme.net (Christopher R. Barry) writes:
> >
> > > Does anyone have any insightful commentary on why current compilers
> > > can't optimize out the millions of unneeded function calls for code
> > > like the below? Finding the 40th term takes the better part of a
> > > minute on my Pentium 200MMX, while only a small fraction of a second
> > > with the efficient versions.
> >
> > You're asking why the compilers can't replace the algorithm you
> > specified with a different algorithm that's faster? Maybe because
> > they're compiling your code, not writing it?
>
> Is not tail-call elimination tweaking the fundamental algorithm behind
> the scenes? Doing it for double recursion would be taking this a big
> step further, but a lot of tree operations and complex sorts are most
> naturally expressed as double recursive algorithms and I'm sure there
> are even more complex examples that I've yet to acquaint myself
> with. Freeing a programmer to not spend time and sacrifice
> maintainability and readability converting a double recursive prototype
> into production code would be a nice win.

I think you are really underestimating the problem. Let's consider the
following number sequence.

F3(0) = 1;
F3(1) = 1;
F3(2) = 3;

F3(n) = F3(n - 1) + F3(n - 2) + F3(n - 3), for n >= 3;

In Common Lisp

(defun f3 (n)
  (cond ((zerop n) 1)
        ((= n 1) 1)
        ((= n 2) 3)
        ((>= n 3) #I(f3(n - 1) + f3(n - 2) + f3(n - 3)))))
;;; with the infix package thrown in, just to remind everybody of its
;;; existence. :)

Now you have a triple recursion. What do you do? While there is a
great iterative algorithm to compute the Fibonacci sequence (even a
'closed form' one which buries the iteration in the exponential
computation), I don't recall any good one for this triple recursive
one (although I believe there might be).

Now let's go one step further.

You have something like (bear with the Italian :) )

(defun uno (x)
  (if (zup-p x)
      'something
      (due (foo x))))

(defun due (y)
  (if (zot-p y)
      'something-else
      (tre (bar y))))

(defun tre (z)
  (if (elura-p z)
      'again-something
      (uno (baz z))))

;;; where foo, bar, and baz somehow "reduce" the size of the input.

Now you have three mutually tail recursive functions which a good compiler
can 'optimize' away. The problem is that the overall algorithm
becomes (probably) exponential with just a little twist.


(defun tre (z)
  (cond ((elura-p z) 'again-something)
        (t
         (uno (baz (foo z)))
         (uno (baz (bar z))))))

Now, is this an immediately recognizable pattern?

As usual

fast program = appropriate data structures + appropriate algorithms

>
> But thinking hard about this I do realize that there are more
> important things to work on in a compiler but in an ideal world where
> compiler design, implementation and maintainance are not prohibitively
> expensive and time-consuming and there isn't already a ton of other
> work to be done it would be nice thing.
>
> > Maybe because analysing your algorithm and replacing it with a
> > better algorithm is equivalent to the stopping problem?
>
> If the "stopping problem" is different from the halting problem then
> I'm not familiar with it. Otherwise, it kind of depends on how you're
> defining "analyze".
>
> "Recognizing" certain usage patterns and forms as having properties
> that allow certain provably equivalent reductions does not nessecarily
> get you into trouble ("provably equivalent" being the hard part, but
> provably or intuitively not impossible or intractable at least). In
> fact it is done all the time, just maybe not at this level yet, or for
> any time soon to come....

Yes. It is done all the time, but usually on paper or on very specific
problems and formalisms. This is not really "compiler technology"
(not yet, at least). It is more like theorem proving or model checking.

>
> I'd like to know more about the specific technical hurdles involved in
> this type of thing, but this probably wasn't the right newsgroup to
> bring it up in. And the answer is quite probably above my current
> level anyways, but it would give me a start.

To have a glimpse of just what may be involved in compiling some
complex semantics in the 'proper way' (i.e. provably correct way) you
may look at some work on synchronous languages like Esterel and Lustre
(http://www.inria.fr/meije).

The topic of "algorithm discovery" has been the field of study of
Professor Paige at Courant Institute of NYU, you can check out his
pages and work at http://www.cs.nyu.edu or http://www.cims.nyu.edu.

As you can see it is not stuff for the faint of heart :)

Marco Antoniotti

Feb 10, 1999

Juliusz Chroboczek <j...@dcs.ed.ac.uk> writes:

> (defun do-nothing-forever ()
>   (do-nothing-forever))
>
> In a language such as Scheme, the programmer may rely on this function
> to loop forever; in Common Lisp, which does not mandate elimination of
> tail recursion, he cannot. This is an important difference in the
> semantics of the two languages.)
>

Most implementations of Common Lisp do optimize the tail call. I do
not know how many implementations of Scheme respect the standard's
dictate.

Kellom{ki Pertti

Feb 10, 1999
It has been most instructive to read about the issues connected with
tail call optimization in Common Lisp. I can see why vendors and many
users would not be happy about making such an optimization a required
feature of the language.

However, even after this discussion I do not see any reason why
expressing iteration using tail calls would be inherently any less
efficient than when explicit looping constructs are used, given that
certain assumptions about the programming language are true. If a
procedure call does not involve any special magic (such as &optional),
and the absence of redefinition of a symbol's function value is easily
statically checkable, then I fail to see any efficiency problems.

Whether one wants to program in such a language (e.g. Scheme without
macros) is a different matter, but I would be genuinely interested
about any problems with tail recursion. That is, please do not
consider this as yet another schemer's troll.

Gareth McCaughan

Feb 10, 1999
Marco Antoniotti wrote:

> F3(0) = 1;
> F3(1) = 1;
> F3(2) = 3;
>
> F3(n) = F3(n - 1) + F3(n - 2) + F3(n - 3), for n >= 3;

...


> Now you have a triple recursion. What do you do? While there is a
> great iterative algorithm to compute the Fibonacci sequence (even a
> 'closed form' one which buries the iteration in the exponential
> computation), I don't recall any good one for this triple recursive
> one (although I believe there might be).

Of course there is.

(let ((a 1) (b 1) (c 3))
  (loop for i from 2 to n do
        (shiftf a b c (+ a b c)))
  c)

--
Gareth McCaughan Dept. of Pure Mathematics & Mathematical Statistics,
gj...@dpmms.cam.ac.uk Cambridge University, England.

Marco Antoniotti

Feb 10, 1999

Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> Marco Antoniotti wrote:
>
> > F3(0) = 1;
> > F3(1) = 1;
> > F3(2) = 3;
> >
> > F3(n) = F3(n - 1) + F3(n - 2) + F3(n - 3), for n >= 3;
> ...
> > Now you have a triple recursion. What do you do? While there is a
> > great iterative algorithm to compute the Fibonacci sequence (even a
> > 'closed form' one which buries the iteration in the exponential
> > computation), I don't recall any good one for this triple recursive
> > one (although I believe there might be).
>
> Of course there is.
>
> (let ((a 1) (b 1) (c 3))
>   (loop for i from 2 to n do
>         (shiftf a b c (+ a b c)))
>   c)
>

Very nice. As usual, I should be a little slower in hitting the 'send'
key :}

Raymond Toy

Feb 10, 1999
>>>>> "Gareth" == Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

Gareth> Marco Antoniotti wrote:
>> F3(0) = 1;
>> F3(1) = 1;
>> F3(2) = 3;
>>
>> F3(n) = F3(n - 1) + F3(n - 2) + F3(n - 3), for n >= 3;

Gareth> ...


>> Now you have a triple recursion. What do you do? While there is a
>> great iterative algorithm to compute the Fibonacci sequence (even a
>> 'closed form' one which buries the iteration in the exponential
>> computation), I don't recall any good one for this triple recursive
>> one (although I believe there might be).

Gareth> Of course there is.

Gareth> (let ((a 1) (b 1) (c 3))
Gareth>   (loop for i from 2 to n do
Gareth>         (shiftf a b c (+ a b c)))
Gareth>   c)

Cool!

In addition, the closed form solution can be easily derived.

F3(n) = c1*r1^n + c2*r2^n + c3*r3^n

where r1, r2, r3 are the roots of the cubic:

x^3 = x^2 + x + 1

(The roots are approximately 1.839, -0.4196 +/- 0.606*i)

and c1, c2, c3 are the solution to the system of equations


   [ 1     1     1    ] [ c1 ]   [ 1 ]
   [ r1    r2    r3   ] [ c2 ] = [ 1 ]
   [ r1^2  r2^2  r3^2 ] [ c3 ]   [ 3 ]

Since r2 and r3 are complex conjugates, c2 and c3 must also be complex
conjugates to ensure that F3(n) is still real.

Of course, this has nothing to do with Lisp anymore....

Ray

Barry Margolin

Feb 10, 1999
In article <m3btj2i...@shodan.demon.co.uk>,

Paul Rudin <pa...@shodan.demon.co.uk> wrote:
>Depends what you mean by this. You can write a compiler that will
>translate arbitrary Lisp code into a Turing machine (or say, a Fortran
>77 program) that will produce the same outputs on the same inputs.
>
>Whether this is "iterative" or not I'm not sure, but there's certainly no
>explicit recursion.

In fact, when you look at compiled code for typical processors, there isn't
really any "recursion". The stack is just a big array in memory; function
calling is just a matter of appending parameters and the PC to this array,
updating some registers that index into the array, and performing a GOTO.

But the reason we use high level languages is because they allow us to
express control and data flow in more natural manners. Recursion is a
common mode of thought about processes and data.
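
(To make that point concrete, here is a small sketch in standard Common Lisp,
with a made-up function name: the "stack" is just an ordinary list, and the
only control transfer is GO.)

(defun factorial-with-explicit-stack (n)
  ;; the list STACK stands in for the machine stack; pushing an
  ;; argument and jumping back to CALL plays the role of a function call
  (let ((stack '())
        (result 1))
    (tagbody
     call
       (when (> n 1)
         (push n stack)
         (setf n (1- n))
         (go call))
     unwind
       (when stack
         (setf result (* result (pop stack)))
         (go unwind)))
    result))

(factorial-with-explicit-stack 6) => 720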

Erann Gat

unread,
Feb 10, 1999, 3:00:00 AM2/10/99
to
In article <m3hfsv8...@eho.eaglets.com>, s...@goems.com wrote:

> recursion doesn't have to be slow. the above calls `fibonacci' about 2^n
> times; my code below calls it about log n times:
>
> (defun fibonacci (nn)
>   "Return 2 consecutive Fibonacci numbers."
>   (declare (type (integer 0) nn) (values (integer 0) (integer 0)))
>   (case nn
>     (0 (values 0 0)) (1 (values 1 0)) (2 (values 1 1)) (3 (values 2 1))
>     (t (multiple-value-bind (mm rr) (floor nn 2)
>          (declare (integer mm) (type (integer 0 1) rr))
>          (multiple-value-bind (f0 f1) (fibonacci mm)
>            (declare (type (integer 0) f0 f1))
>            (if (zerop rr)
>                (values (* f0 (+ (* f1 2) f0))
>                        (+ (* f0 f0) (* f1 f1)))
>                (values (+ (* f0 f0) (sqr (+ f0 f1)))
>                        (* f0 (+ (* f1 2) f0)))))))))

Here's an even faster version. It's still O(log(n)) but it optimizes
the multiple value bindings and the floor operation. It's also a
little easier to read IMO.

(defun fib (n &aux (r (integer-length n)) (f0 0) (f1 1))
  (dotimes (i r)
    (let ((s1 (* f0 f0))
          (s2 (* 2 f0 f1)))
      (if (logbitp (- r i 1) n)
          (psetf f0 (+ s1 s1 s2 (* f1 f1))
                 f1 (+ s1 s2))
          (psetf f0 (+ s1 s2)
                 f1 (+ s1 (* f1 f1))))))
  f0)

Gareth McCaughan wrote:

>Of course there is.
>
> (let ((a 1) (b 1) (c 3))
>   (loop for i from 2 to n do
>     (shiftf a b c (+ a b c)))
>   c)

Of course there is also:

(let ((a 0) (b 1))
  (dotimes (i n)
    (psetf a b b (+ a b)))
  b)

E.

Erik Naggum

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
* cba...@2xtreme.net (Christopher R. Barry)

| Freeing a programmer to not spend time and sacrifice maintainability and
| readability converting a double recursive prototype into production code
| would be a nice win.

I hope I have not contributed to this confusion by trying to force people
to rethink this "recursive problem" thingie. see, I don't _believe_ in
recursion in the moral sense of being _superior_ to iteration. I think I
made the case that recursion is not as efficient as looping, but Barry is
quite right that the argument in favor of recursion doesn't seem to be
_total_ efficiency, just space efficiency. however, to the people who
_believe_ in recursion, it should be valuable to understand that omitting
the prologue and epilogue of a machine-language function, as it were, is
not something you do lightly -- it adds significantly to the complexity
of the compiler.

some problems have not yet found solutions that can avoid space
requirements linear or worse in the length of the input. I suspect those
are the ones people call "recursive problems", but when it is possible to
write algorithms that are sublinear in the length of the input, and smart
people continue to astound with more efficient algorithms like that all
the time, I'm not willing to buy the "intrinsically recursive" argument.

take Fibonacci, which is _defined_ recursively, but which can _easily_ be
implemented in constant memory and linear time. so even if Fibonacci is
called a "recursive problem", the solution doesn't have to be.

in any case, if an algorithm has a space requirement linear in the length
of the input, you have a choice between real recursion and managing the
space requirements elsewhere. sometimes, the stack space requirements of
function calls make it unpalatable to use actual recursion, and it's
easier to cons up the values on one or more lists. however, this would
reduce the memory consumption by a constant number of bytes per call,
and you should be pretty hard pressed for memory before this begins to be
a serious impediment between "prototype" (another thing I don't _believe_
in) and production code.



| I'd like to know more about the specific technical hurdles involved in
| this type of thing, but this probably wasn't the right newsgroup to bring
| it up in. And the answer is quite probably above my current level
| anyways, but it would give me a start.

discussions like these have raged in comp.compilers in the past, when I
had time to follow that group. I learned a lot from following them, and,
incidentally, much more fundamental stuff than the compiler course at the
university, where the bondage with static typing was horribly stifling.
(just because it makes sense in some cases to declare or compute types in
the compiler doesn't mean it's smart to forget everything about types at
run-time, but for some people it seems that the entire goal of static
typing is to exclude type information at run-time, and I find that to be
curiously irrational.) however, what has become clear is that a lot of
people have no clue what to do _without_ type declarations, and become
very insecure when they see Lisp programs without any explicit type
information at all, so I recommend to learn all there is about compilers
from the point of view that you _don't_ know the type of anything. this
will improve your understanding greatly, as the explicit-typing crowd
tends to gloss over type problems in the simple cases, and then tackle
them inefficiently and clumsily in object systems and the like because
they have never had an incentive to make it super efficient.

sometimes, it helps to have been hurt by C, but I hope it's possible to
learn this stuff without the pain and suffering. (a colleague was once
watching me working out how to deal with unclosed sockets in Allegro CL
under Linux, including comparing the symlinks in /proc/<pid>/fd/ to the
table in /proc/net/tcp by hand and isolating the stream object that used
it with a heap-walker -- this was before my auto-closing sockets -- and
he went "wow, I didn't know you were such an expert!". I looked at him
wearily and said "`oooh, I didn't know you had been hurt that much' is a
lot more accurate.")

Kent M Pitman

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Barry Margolin <bar...@bbnplanet.com> writes:

>
> In article <m3btj2i...@shodan.demon.co.uk>,
> Paul Rudin <pa...@shodan.demon.co.uk> wrote:
> >Depends what you mean by this. You can write a compiler that will
> >translate arbitrary lisp code into a turing machine (or say, a fortran
> >77 program) that will produce the same outputs on the same inputs.
> >
> >Whether this "iterative" or not I'm not sure, but there's certainly no
> >explicit recurion.
>
> In fact, when you look at compiled code for typical processors, there isn't
> really any "recursion". The stack is just a big array in memory; function
> calling is just a matter of appending parameters and the PC to this array,
> updating some registers that index into the array, and performing a GOTO.
>
> But the reason we use high level languages is because they allow us to
> express control and data flow in more natural manners. Recursion is a
> common mode of thought about processes and data.

Right. Plus I specifically ruled out (not sure in the message being
replied to by Paul or in a clarification later to someone) the idea of
translating to something where the "stack" just became a datastructure
passed explicitly. To me, that's not really getting rid of the
recursion, it's just changing how we perceive it. If it ever was there
(and not, as Barry suggests, just a way of thinking about things),
it is still there.

The kind of issue I had meant to allude to was the following...

If you have something like factorial and you write it as
(defun factorial (x) ;Step 1
  (if (zerop x) 1 (* x (factorial (- x 1)))))
you can translate this to
(defun factorial (nn) ;Step 2
  (labels ((f (n l)
             (if (zerop n)
                 (reduce #'* l)
                 (f (1- n) (cons n l)))))
    (f nn '())))
which, if it were compiled with tail-call elimination, would
not accumulate traditional stack but would keep space proportional to a stack
as an explicit argument, l, which amounts to a stack because it keeps track
of "things to do next" albeit in a kind of stylized way.

Normally no serious programmer would write the listy version like in step
2. One would instead do the further shift to
(defun factorial (nn) ;Step 3
  (labels ((f (n r)
             (if (zerop n) r
                 (f (1- n) (* n r)))))
    (f nn 1)))
which does NOT take space proportional to the stack. And probably in CL
one would do the mechanical translation to the following to avoid the
issue of lack of tail call elimination:
(defun factorial (nn) ;Step 4
  (do ((n nn (1- n))
       (r 1 (* n r)))
      ((zerop n) r)))
And voila we have an efficient means of computing factorial iteratively
from the recursive definition.

BUT the reason step 3 (the non-bloating definition) can be computed
from step 2 (a bloating definition) is because there is a specific
property of this problem which is that the operation which is doing
the combining is associative, and so we can choose to apply the
operation BEFORE the recursive step instead of, as in the step 1
above, AFTER. This keeps the intermediate result tidy (a single
integer representing a partial product) instead of messy (a list of
integers representing things to later multiply). I'm pretty sure that
optimization is not something you can in general rely on being
available. And when I made the remark about getting rid of the
recursion, I meant getting rid of this property that the deeper the
problem, the more storage it took. Space bloat may well not be a good
definition of recursion, but it was what I was focusing on as the way
of measuring whether the recursion was gone, or just transformed.

In Scheme, for example, the tail call elimination requirement seems to be
evolving to be a clear statement about the memory consumption that a
tail recursive program takes up. There was a change in verbiage about
this for r5rs and it will be probably refined some more in r6rs since
it is an area of active work.

But it's an optimization that is only required for tail calling, not
for all recursion, and I suspect for good reasons.

(Incidentally, in a rare move, I actually did test all 4 definitions
above on at least small values and I think I got it right.)

Kent M Pitman

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> writes:

> Juliusz Chroboczek <j...@dcs.ed.ac.uk> writes:
>
> > (defun do-nothing-forever ()
> > (do-nothing-forever))
> >
> > In a language such as Scheme, the programmer may rely on this function
> > to loop forever; in Common Lisp, which does not mandate elimination of
> > tail recursion, he cannot. This is an important difference in the
> > semantics of the two languages.)
> >
>
> Most implementations of Common Lisp do optimize the tail call. I do
> not know how many implementations of Scheme respect the standard
> dictate.

Uh, some control it by the optimization settings, so don't be so quick
to make it a binary issue. Also, some distinguish between self-call
and call of another function. It's more likely that the above will be
optimized, and I think the standard even somewhere recommends that programs
that don't want the optimization should declare themselves notinline.
But it's another matter entirely if you have mutually recursive functions
tail calling each other. I think some implementations will optimize
that as well, but it may in some or all cases depend on the optimization
settings. And anyway, it's something you should both get an explicit
promise from your vendor about AND put a piece of code in to remind you
that if you port the code, you have to check again.

#-(or Acme-Lisp Frobozz-Lisp) ;List of implementations checked
(eval-when (:execute :compile-toplevel)
  (error "This code depends on tail call elimination.~
          ~%Check that this implementation does it, and update this check."))

Barry Margolin

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
In article <sfwd83h...@world.std.com>,

Kent M Pitman <pit...@world.std.com> wrote:
>you can translate this to
> (defun factorial (nn) ;Step 2
>   (labels ((f (n l)
>              (if (zerop n)
>                  (reduce #'* l)
>                  (f (1- n) (cons n l)))))
>     (f nn '())))
>which, if it were compiled with tail-call elimination, would
>not accumulate traditional stack but would keep space proportional to a stack
>as an explicit argument, l, which amounts to a stack because it keeps track
>of "things to do next" albeit in a kind of stylized way.

There's a low-level, technical reason why this version may be useful.
Traditionally, stacks are given a limited amount of space (possibly because
most programs are not highly recursive, so deep recursion usually indicates
an infinite recursion bug), while the heap is allowed to consume all
remaining process memory. So, while moving data from the stack to the heap
doesn't change the memory use of the program, it allows it to use "cheaper"
memory.

On the other hand, the local stack frame is almost always in the data
cache, so it can be faster to access. In that sense, stack memory is
"cheaper".

Paul Dietz

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Barry Margolin wrote:

> The Fibonacci series is naturally described in a recursive manner.

The sequence (not series!) is also naturally described in
an iterative manner.

Let A be the 2x2 matrix

( 1 1 )
( )
( 1 0 )

Then, A^n (n>0) is the matrix

( F(n+1) F(n) )
( )
( F(n) F(n-1) )

where F(0) = 0, F(1) = 1, F(2) = 1, etc. are the
fibonacci numbers.

This means F(n) can be computed in O(log n)
integer arithmetic operations (on bignums, granted)
by using the logarithmic method for computing
powers. Moreover, the sizes of the numbers
roughly double at each step, so most of the time
is spent in the last few squarings. The running
time is proportional to the time to do multiplication
on bignums of O(n) bits.

This technique extends to any sequence defined
by a linear recurrence.
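
A minimal sketch of this in Lisp, assuming 2x2 matrices represented as flat
lists (a b c d) for the matrix ((a b) (c d)); the function names are made up
for illustration:

(defun mat* (x y)
  ;; multiply two 2x2 matrices
  (destructuring-bind (a b c d) x
    (destructuring-bind (e f g h) y
      (list (+ (* a e) (* b g)) (+ (* a f) (* b h))
            (+ (* c e) (* d g)) (+ (* c f) (* d h))))))

(defun mat-expt (m n)
  ;; raise a 2x2 matrix to a positive integer power by repeated squaring
  (cond ((= n 1) m)
        ((evenp n) (mat-expt (mat* m m) (/ n 2)))
        (t (mat* m (mat-expt (mat* m m) (/ (1- n) 2))))))

(defun fib-matrix (n)
  (if (zerop n)
      0
      (second (mat-expt '(1 1 1 0) n))))  ; upper-right entry of A^n is F(n)

(fib-matrix 10) => 55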

Paul

Jens Kilian

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Paul Dietz <di...@interaccess.com> writes:
> Barry Margolin wrote:
>
> > The Fibonacci series is naturally described in a recursive manner.
>
> The sequence (not series!) is also naturally described in
> an iterative manner.
>
> Let A be the 2x2 matrix
>
> ( 1 1 )
> ( )
> ( 1 0 )
>
> Then, A^n (n>0) is the matrix
>
> ( F(n+1) F(n) )
> ( )
> ( F(n) F(n-1) )
>
> where F(0) = 0, F(1) = 1, F(2) = 1, etc. are the
> fibonacci numbers.
>
> This means F(n) can be computed in O(log n)
> integer arithmetic operations (on bignums, granted)
> by using the logarithmic method for computing
> powers.

In fact, F(n) can be found in O(1) time by computing the eigenvalues of
the above matrix.

Jens.
--
mailto:j...@acm.org phone:+49-7031-14-7698 (HP TELNET 778-7698)
http://www.bawue.de/~jjk/ fax:+49-7031-14-7351
PGP: 06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]

Kent M Pitman

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Barry Margolin <bar...@bbnplanet.com> writes:

> In article <sfwd83h...@world.std.com>,
> Kent M Pitman <pit...@world.std.com> wrote:
> >you can translate this to
> > (defun factorial (nn) ;Step 2
> >   (labels ((f (n l)
> >              (if (zerop n)
> >                  (reduce #'* l)
> >                  (f (1- n) (cons n l)))))
> >     (f nn '())))
> >which, if it were compiled with tail-call elimination, would not
> >accumulate traditional stack but would keep space proportional to a
> >stack as an explicit argument, l, which amounts to a stack because
> >it keeps track of "things to do next" albeit in a kind of stylized
> >way.
>
> There's a low-level, technical reason why this version may be
> useful. Traditionally, stacks are given a limited amount of space
> (possibly because most programs are not highly recursive, so deep
> recursion usually indicates an infinite recursion bug), while the
> heap is allowed to consume all remaining process memory. So, while
> moving data from the stack to the heap doesn't change the memory use
> of the program, it allows it to use "cheaper" memory.

Interesting point. Thanks for highlighting this.

> On the other hand, the local stack frame is almost always in the data
> cache, so it can be faster to access. In that sense, stack memory is
> "cheaper".

And stack memory is gc'd more cheaply.

Marco Antoniotti

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to

Raymond Toy <t...@rtp.ericsson.se> writes:


Apart from the fact that I am overly lazy these days :) I hereby state
that Raymond's last sentence is false.

You can derive the result using Macsyma :)

Gareth McCaughan

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Erann Gat wrote:

> Gareth McCaughan wrote:
>
>> Of course there is.
>>
>> (let ((a 1) (b 1) (c 3))
>>   (loop for i from 2 to n do
>>     (shiftf a b c (+ a b c)))
>>   c)
>
> Of course there is also:
>
> (let ((a 0) (b 1))
>   (dotimes (i n)
>     (psetf a b b (+ a b)))
>   b)

Which computes a different sequence. (Possibly you know this
perfectly well and are just observing that the same trick can
be used for the original Fibonacci numbers, in which case I
apologise for pointing it out.)

Do you really prefer (psetf a b b (+ a b)) to (shiftf a b (+ a b)),
by the way? I admit to being surprised.

Juliusz Chroboczek

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Me:

>> In a language such as Scheme, the programmer may rely on this
>> function to loop forever; in Common Lisp, which does not mandate
>> elimination of tail recursion, he cannot.

Marco Antoniotti <mar...@copernico.parades.rm.cnr.it>:

MA> Most implementations of Common Lisp do optimize the tail call.

The point I was trying to make is that tail-call elimination is more
than a mere optimisation. Tail-call elimination changes the semantics
of the language, and a language specification that mandates it must
therefore make it very clear when it is compulsory so that the
programmer may rely on it. I tried to show that this is much more
difficult to do properly in Common Lisp than in Scheme.

I guess my conclusion is that, while tail-call elimination is useful,
I am happier with a Common Lisp spec that does leave it to the
implementation, than with a spec that would mention it but not make it
perfectly clear when it must apply.

J.

Gareth McCaughan

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Jens Kilian wrote:

> In fact, F(n) can be found in O(1) time by computing the eigenvalues of
> the above matrix.

If you consider that operations on real numbers take O(1) time,
sure. I do not recommend this.

Calculating F(n) accurately using the explicit formula requires
O(log n) operations on numbers of length O(n), just as calculating
it by repeated squaring of the matrix does.

Andreas Eder

unread,
Feb 11, 1999, 3:00:00 AM2/11/99
to
Erik Naggum <er...@naggum.no> writes:

> take Fibonacci, which is _defined_ recursively, but which can _easily_ be
> implemented in constant memory and linear time. so even if Fibonacci is
> called a "recursive problem", the solution doesn't have to be.

You can even do much better than linear time, and to program it either
iteratively or recursively is not difficult. Just observe that for the
Fibonacci numbers you have the following formulae:

F(2n-1) = F(n)^2 + F(n-1)^2
F(2n) = F(n)^2 + 2 F(n) F(n-1)

Using this you can reduce the time complexity to O(log(n))
(Except that for large numbers you should consider the time for
multiplications and additions, which then is not constant)
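
A minimal recursive sketch of those identities (FIB2 returns F(n) and F(n-1),
with F(0)=0 and F(1)=1; the names are made up for illustration):

(defun fib2 (n)
  ;; returns F(n) and F(n-1), for n >= 1
  (if (= n 1)
      (values 1 0)
      (multiple-value-bind (fm fm-1) (fib2 (floor n 2))  ; m = floor(n/2)
        (let ((f-even   (+ (* fm fm) (* 2 fm fm-1)))     ; F(2m)
              (f-even-1 (+ (* fm fm) (* fm-1 fm-1))))    ; F(2m-1)
          (if (evenp n)
              (values f-even f-even-1)
              (values (+ f-even f-even-1) f-even))))))   ; F(2m+1) = F(2m) + F(2m-1)

(defun fib-fast (n)
  (if (zerop n) 0 (nth-value 0 (fib2 n))))

(fib-fast 10) => 55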

Andreas
--
Wherever I lay my .emacs, there's my $HOME

Jens Kilian

unread,
Feb 12, 1999, 3:00:00 AM2/12/99
to
Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:
> Jens Kilian wrote:
>
> > In fact, F(n) can be found in O(1) time by computing the eigenvalues of
> > the above matrix.
>
> If you consider that operations on real numbers take O(1) time,
> sure. I do not recommend this.
>
> Calculating F(n) accurately using the explicit formula requires
> O(log n) operations on numbers of length O(n), just as calculating
> it by repeated squaring of the matrix does.

Oops. I should have said "there is a closed form for F(n) which can be found
by computing etc."

Paul Dietz

unread,
Feb 12, 1999, 3:00:00 AM2/12/99
to
Jens Kilian wrote:
>
> Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> > Calculating F(n) accurately using the explicit formula requires
> > O(log n) operations on numbers of length O(n), just as calculating
> > it by repeated squaring of the matrix does.
>
> Oops. I should have said "there is a closed form for F(n) which can be found
> by computing etc."


Actually, calculating F(n) by repeated squaring of
the matrix requires only O(1) operations on numbers
of length O(n). Most of the O(log n) stages operate
on much shorter numbers.

Paul

Gareth McCaughan

unread,
Feb 12, 1999, 3:00:00 AM2/12/99
to
Paul Dietz wrote:

[I said:]


>>> Calculating F(n) accurately using the explicit formula requires
>>> O(log n) operations on numbers of length O(n), just as calculating
>>> it by repeated squaring of the matrix does.

...


> Actually, calculating F(n) by repeated squaring of
> the matrix requires only O(1) operations on numbers
> of length O(n). Most of the O(log n) stages operate
> on much shorter numbers.

Er, quite right. My apologies. (But then, the same is
true in effect for the explicit formula; you can evaluate
the powers there by repeated squaring, too. My point,
that neither method is asymptotically much faster than
the other, stands.)

Frank A. Adrian

unread,
Feb 12, 1999, 3:00:00 AM2/12/99
to
Actually, if you're willing to put up with round-off issues, the closed
form:

fact (n) = ((phi ^ n) - (-1/phi ^ n))/sqrt(5) where phi = (1+ sqrt(5))/2

can be computed rather quickly...

faa

Gareth McCaughan wrote in message <86u2wra...@g.pet.cam.ac.uk>...

Paul Dietz

unread,
Feb 13, 1999, 3:00:00 AM2/13/99
to
"Frank A. Adrian" wrote:
>
> Actually, if you're willing to put up with round-off issues, the closed
> form:
>
> fact (n) = ((phi ^ n) - (-1/phi ^ n))/sqrt(5) where phi = (1+ sqrt(5))/2
>
> can be computed rather quickly...

(That's Fib(n).)

Let's see how much roundoff is tolerable.

(phi + epsilon)^n = phi^n + n epsilon phi^(n-1) + l.o.t.

If we want the error to be small, then

n epsilon < 1/phi^(n-1)

So, log(1/epsilon) is O(n). This means phi has to
be computed to O(n) significant bits, and all the
computations are on O(n) bits.

Contrast this to the matrix squaring approach, where
only the last O(1) stages involve O(n) bit numbers.
This integer approach is asymptotically faster by
a factor of O(log n).

Paul

Paul Rudin

unread,
Feb 13, 1999, 3:00:00 AM2/13/99
to
Kent M Pitman <pit...@world.std.com> writes:

>
> Right. Plus I specifically ruled out (not sure in the message being
> replied to by Paul or in a clarification later to someone) the idea of
> translating to something where the "stack" just became a datastructure
> passed explicitly. To me, that's not really getting rid of the
> recursion, it's just changing how we perceive it. If it ever was there
> (and not, as Barry suggests, just a way of thinking about things),
> it is still there.

Yes, even without explicit recursion you may have it there in some
hidden form, and the extent to which it's hidden may vary greatly.

The trouble is that this doesn't give you a clear idea of where the
boundary between "iterative" and "recursive" is. Any algorithm can be
made to use explicit recursion, so the question becomes: how hidden or
obfuscated do you want the recursion to be before something counts as
iterative?

Gareth McCaughan

unread,
Feb 15, 1999, 3:00:00 AM2/15/99
to
Paul Dietz wrote:

> Let's see how much roundoff is tolerable.

...


> So, log(1/epsilon) is O(n). This means phi has to
> be computed to O(n) significant bits, and all the
> computations are on O(n) bits.

Yep. Which means that something I said earlier is wrong;
the matrix-squaring approach is *better* than the explicit
formula, whereas I just said it was *as good*. I had assumed,
without actually thinking about it, that you could use lower
precision in the earlier stages of the calculation of phi^n.

One thing the advocates of the explicit formula haven't mentioned
is that you only actually need to evaluate half of it, because
it turns out that the nearest integer to phi^n/root(5) is always
equal to the Fibonacci number (except maybe when n=1 or something).
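
(A quick sketch of that observation, assuming double floats, which only stay
accurate up to roughly n = 70; the name is made up:)

(defun fib-float (n)
  ;; nearest integer to phi^n / sqrt(5); only trustworthy for small n
  (let ((phi (/ (+ 1 (sqrt 5.0d0)) 2)))
    (values (round (/ (expt phi n) (sqrt 5.0d0))))))

(fib-float 10) => 55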

Bruno Haible

unread,
Feb 16, 1999, 3:00:00 AM2/16/99
to
Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> Calculating F(n) accurately using the explicit formula requires
> O(log n) operations on numbers of length O(n), just as calculating
> it by repeated squaring of the matrix does.

Yes. Nevertheless, the differences in computing time are impressive.
Here are timings, done with CLN-1.0 [1], in milliseconds, on a Pentium
with 100 MHz.

              integer    floating
      n        matrix       point

     10          0.03        0.20
    100          0.12        0.34
   1000          0.37        1.82
  10000          3.3         63
 100000           122       ~3000

Bruno

[1] http://clisp.cons.org/~haible/packages-cln.html
See cln-1.0/examples/fibonacci.cc

Thomas A. Russ

unread,
Feb 16, 1999, 3:00:00 AM2/16/99
to
Kent M Pitman <pit...@world.std.com> writes:

>
> t...@sevak.isi.edu (Thomas A. Russ) writes:
>
> > Some problems really are inherently recursive in nature. For example,
> > the enumeration of the leaves of a (binary or other) tree. You can't
> > write an iterative algorithm for solving this problem in general, since
> > you would need (potentially) arbitrary amounts of memory in order to
> > hold the state of your traversal.
>
> Well, I'm not sure that's true. Don't you just need log(n) amount of
> space? You either use the fact that the stack in your programming
> language accumulates log(n) space or else you use no stack and carry
> log(n) data. I usually prefer using the stack because it gc's faster. :-)

Well, technically you could need N space in the worst case if the trees
weren't balanced. The difficulty is that you can't figure out what N is
from the input argument, which is typically a node in the tree. This is
a weak argument, however, since the same could be said of a linked list,
which is, in fact, easily made into iteration.

> I do agree with your premise, though, that there are recursive problems.
> I just think that I'd express the justification differently.

Unfortunately, my once meager knowledge of mathematical complexity has
been atrophying, so I can't quite put the right argument together.

We really need to find a computer science theoretician who can deal with
this problem.

Perhaps a better justification for an inherently recursive problem would
be one where it is necessary to retain information about the state of
the computation, where the size of this state information is not
constrained in advance.

In the case of moving down a (linked) list, the only state information
that is needed is the current location in the list and possibly the
number of items visited so far. This is a fixed number.

To do a tree traversal, one needs to keep track of the next branch of
the tree for every node visited. This is not a fixed amount of state,
so this makes for an inherently recursive problem.
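
A small sketch of that point, assuming cons trees where any non-cons atom
counts as a leaf (the name is made up): the "iterative" version simply moves
the per-node bookkeeping onto an explicit stack whose size grows with the
depth of the tree.

(defun count-leaves (tree)
  ;; walk the tree with an explicit stack instead of recursive calls
  (let ((stack (list tree))
        (leaves 0))
    (loop
      (when (null stack)
        (return leaves))
      (let ((node (pop stack)))
        (if (consp node)
            (progn (push (car node) stack)
                   (push (cdr node) stack))
            (incf leaves))))))

(count-leaves '((a . b) c . d)) => 4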

I will make a few conjectures here, but I don't have proof for any of
them:

(1) An inherently recursive problem will require at least 2 recursive
calls in the algorithm. This is necessary, but not sufficient,
since Fibonacci has two recursive calls.

(2) An algorithm with only a single recursive call can be transformed
into iteration. For tail recursion, this is trivial. For other
cases, I assume (without proof) that the problem can be recast into a
tail recursive version.


Disclaimer: I am not a theorist. I don't even play one on the Net.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu
