
Recursion Usage and Concepts - Newbie Question


Taria

Oct 12, 2007, 7:52:19 AM
Hello all,

I'm struggling with an assignment that involves using a 'quicksort'
search for a user defined kth element within a list of numbers. The
use of recursion is a requirement of this assignment.

From what I understand, recursion breaks down a big task into smaller
tasks until it solves the problem. My question is this: (I hope I
make sense in asking this) what if I was only interested in the result
at the very end of the recursion, how would I pass that value I found
to the first calling statement?

I figured out (and I'm sure my program is clumsy) how to recursively
solve the assignment, but I don't know how to pass back the value I
find (after I split the list into smaller bits.) I could put a print
statement right there to indicate my answer, but after looking at a
few examples of other people's recursive Java routines, I feel
uncomfortable with this solution. I feel like I should be able to pass
back a value at the end of a recursive call without it being changed
when it's returning to the main routine (yes, that's what happened,
i.e. I HAD a statement that looked similar to this:

return (aValue);

and it changed the value of aValue to something else. Conceptually,
should I be able to use that statement above?

Also, because I'm so intimidated by the idea of recursion and still
somewhat afraid of objects, I resorted to using static methods instead
of objects. Am I hurting myself by doing this? Isn't the concept of
recursion the same whether it's done with objects or with static methods?
All the examples I'm looking at use objects hence I've begun to
question my pursuit of using static methods. (Really, I was going
to rewrite this program using object-oriented code after I figured out
the recursive part, I know I can do it! :p)

Any advice is appreciated and I apologize in advance for anything that
sounds 'wrong.'

-t

Roedy Green

Oct 12, 2007, 8:17:44 AM
On Fri, 12 Oct 2007 04:52:19 -0700, Taria <mch...@hotmail.com> wrote,
quoted or indirectly quoted someone who said :

>I'm struggling with an assignment that involves using a 'quicksort'
>search for a user defined kth element within a list of numbers. The
>use of recursion is a requirement of this assignment.

You can have a look at a working Quicksort by downloading
http://mindprod.com/products.html#QUICKSORT
--
Roedy Green Canadian Mind Products
The Java Glossary
http://mindprod.com

Roedy Green

Oct 12, 2007, 8:19:00 AM
On Fri, 12 Oct 2007 04:52:19 -0700, Taria <mch...@hotmail.com> wrote,
quoted or indirectly quoted someone who said :

>Also, because I'm so intimidated by the idea of recursion and still
>somewhat afraid of objects,

see http://mindprod.com/jgloss/recursion.html

You might write something simple to get over your fear. The key thing
to understand is that each layer of call has its own local variables.
When a call returns, the caller's locals are exactly as they were
before the call.
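
Something like this little sketch (my example, not from Roedy's page)
shows it: each call's n is its own copy.

public class CountDown {
    static void countDown(int n) {
        if (n == 0) {
            System.out.println("liftoff");
            return;                       // ground case: stop recursing
        }
        System.out.println("before inner call: n = " + n);
        countDown(n - 1);
        System.out.println("after inner call:  n = " + n); // n unchanged
    }

    public static void main(String[] args) {
        countDown(3);
    }
}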

Eric Sosman

Oct 12, 2007, 8:38:08 AM
Taria wrote:
> Hello all,
>
> I'm struggling with an assignment that involves using a 'quicksort'
> search for a user defined kth element within a list of numbers. The
> use of recursion is a requirement of this assignment.
>
> From what I understand, recursion breaks down a big task into smaller
> tasks until it solves the problem. My question is this: (I hope I
> make sense in asking this) what if I was only interested in the result
> at the very end of the recursion, how would I pass that value I found
> to the first calling statement?
> [...]

The usual way to apply recursion is to divide a maxi-problem
into two or more mini-problems, solve the minis, and combine
their solutions to find the answer to the maxi-problem. The
"recursive" aspect means that you apply the same technique to
solve the mini-problems: Split them into micro-problems, solve
the micros, and combine their solutions to solve the minis.
And the micro-problems split into nano-problems, and so on.
As the levels descend you eventually arrive at problems that
are so small they can be solved by some other means (this is
sometimes called the "ground case").

The magic piece I think you may be missing is that each
small problem's solution is passed back to the recursive step
that asked for it, where it gets combined with the solutions
of other small problems to solve the larger problem. I don't
much blame you for this, because the assignment you've been
told to solve recursively isn't really suited to recursion:
most of the split-and-recombine steps are degenerate, vacuous,
and easy to overlook.

But let's look at the structure anyhow. You rearrange
the original list of numbers into sub-lists, and learn that
one of those sub-lists holds the number you want (you don't
know where it is in that sub-list, but you know it's there).
The *other* sub-lists are "solved" trivially: They don't
hold the number of interest, so you're finished with them.
Their "solutions," if you like, are empty.

The "hot" list still needs to be solved, though, so you
can call your own method to get a solution for it. (It's
best at this point to pretend that this second-level call
just works by magic or something; we'll get to the details
later.) The inner call returns its answer, which you "combine"
with the answers from the other sub-lists (by ignoring them;
you already know they don't matter), so the answer returned
by the inner call is also the answer you return.

Now to the magic piece. The outer call was asked to find
the Kth-largest of a big list, and the inner method is told
to find the Lth-largest of a smaller list. If the small list
is *really* small -- one number, for example -- you can find
the solution easily and return it. If it's longer you can
apply the same technique the outer call did: divide the short
list into still shorter lists, ignore most of them, and call
yourself yet again to find the Mth-largest of the one interesting
very short list.

As I said, it's not the best example for illustrating the
recursive technique. At each stage you ignore all but one of
the sub-problems (usually you would need to solve them, too),
and the combine-the-answers step is trivial (usually you need
to do a little more computing at this point).

Here's a better one: Imagine adding a million numbers with
pencil and paper. It'd take forever, but fortunately you have
a hundred friends: You give each of them ten thousand of the
numbers and ask for the sub-totals, then you add the hundred
sub-totals to get the grand total. Clear?

Here's the recursive bit: The hundred friends don't want
to add ten thousand numbers apiece, but luckily each of them
has another hundred friends. So they divide their batches of
ten thousand into a hundred batches of a hundred and dole those
out to *their* friends. Each friend-of-a-friend adds his group
of a hundred numbers (the "ground case"), then they pass their
totals to one of your friends. Your friends add up their
hundred sub-sub-totals and pass them back to you, and you (as
we said before) add them to get the grand total.

And that's recursion: Divide into sub-problems, solve
sub-problems recursively or directly, combine the solutions,
and report the result to the caller.
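
To make the structure concrete, here is a rough sketch of the
"find the kth smallest" recursion in Java. (The partition helper and
the 0-based index convention are my assumptions, not part of the
assignment.)

// Returns the kth smallest (k is 0-based) element of a[lo..hi].
// partition(a, lo, hi) is assumed to pick a pivot, rearrange
// a[lo..hi] around it, and return the pivot's final index.
static int select(int[] a, int lo, int hi, int k) {
    if (lo == hi) {
        return a[lo];                      // one element left: ground case
    }
    int p = partition(a, lo, hi);          // split into two sub-lists
    if (k == p) {
        return a[p];                       // pivot landed exactly at k
    } else if (k < p) {
        return select(a, lo, p - 1, k);    // answer is in the left sub-list
    } else {
        return select(a, p + 1, hi, k);    // answer is in the right sub-list
    }
}

Note how each call simply returns the value handed back by the inner
call: that is how the answer found at the bottom travels, unchanged,
all the way up to the first caller.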

--
Eric Sosman
eso...@ieee-dot-org.invalid

Lew

Oct 12, 2007, 9:04:29 AM
Taria wrote:
>> I'm struggling with an assignment that involves using a 'quicksort'
>> search for a user defined kth element within a list of numbers. The
>> use of recursion is a requirement of this assignment.
>>
>>> From what I understand, recursion breaks down a big task into smaller
>> tasks until it solves the problem.

No.

Eric Sosman wrote:
> The usual way to apply recursion is to divide a maxi-problem
> into two or more mini-problems, solve the minis, and combine
> their solutions to find the answer to the maxi-problem.

No.

Recursion refers to a routine that calls itself, as with the classic Fibonacci
sequence method:

public static int fibonacci( int n )
{
    if ( n < 0 ) return 0;
    if ( n == 0 || n == 1 )
    {
        return 1;
    }
    return fibonacci( n - 1 ) + fibonacci( n - 2 );
}

Notice that fibonacci() calls itself, unless certain conditions are met. If
those conditions didn't exist, the recursion would never end.

If the routine doesn't call itself, you will not get a good grade on the
assignment.

Try looking up "Recursion (computer science)" in Wikipedia.

--
Lew
recursion /n/: /see/ recursion.

Jean-Baptiste Nizet

Oct 12, 2007, 9:42:26 AM
You could combine the explanations from Eric and Lew with this
problem:

Count the number of files in a directory tree.

The problem is easily solved using recursion. Here's the structure of
the code:

int countFilesInTreeWithRoot(File directory) {
    int numberOfFilesInTheDirectory = countFilesInDirectory(directory);
    int result = numberOfFilesInTheDirectory;
    File[] subDirectories = getSubDirectories(directory);
    for (File subDirectory : subDirectories) {
        result += countFilesInTreeWithRoot(subDirectory);
    }
    return result;
}

The ground cases here are the "leaf" directories, i.e. directories
which don't have any subdirectory. In this case, the answer to the
problem is trivial: the tree is reduced to the directory itself, and
the answer is the number of files it contains.
For a non-leaf directory, the result is the number of files directly
stored under this directory + the number of files found recursively in
each of its subdirectories.

You end up with the above method, which calls itself recursively,
unless the directory given as argument is a leaf directory.

JB.

Lew

Oct 12, 2007, 9:46:46 AM
Jean-Baptiste Nizet wrote:
> int countFilesInTreeWithRoot(File directory) {
> int numberOfFilesInTheDirectory = countFilesInDirectory(directory);
> int result = numberOfFilesInTheDirectory;

Why do you feel that you need two variables for the same value?

> File[] subDirectories = getSubDirectories(directory);
> for (File subDirectory : subDirectories) {
> result += countFilesInTreeWithRoot(subDirectory);
> }
> return result;
> }

int countFilesInTreeWithRoot(File directory) {
    int result = countFilesInDirectory(directory);

    File[] subDirectories = getSubDirectories(directory);
    for (File subDirectory : subDirectories) {
        result += countFilesInTreeWithRoot(subDirectory);
    }
    return result;
}

--
Lew

Taria

Oct 12, 2007, 10:12:54 AM

Wow, don't you guys sleep? It's 4am here! :p

Ok, well, I figured out how to make the recursive routine I wrote
perform the way I wanted it to (yay!) and the form sort of looks like
what was posted above.

And I have only one more question about the definition of recursion.

Given the fibonacci recursion routine above, let's say I were to have
a more complex method where I have to determine certain variables (the
parameters) before calling it again...is that considered recursion?
For example, in pseudocode and Java:

public static int myMethod (int parm1, int parm2, int parm3) {

    //if (the parms don't give me what I'm looking for)
    //then keep on searching the list

    //arbitrary data manipulation here
    //the parms shrink in length as they are called over and over.
    int parm4 = parm3 - parm1;
    int parm5 = parm2 - 1;

    if (someVar1 > someVar2) {
        aResult = myMethod (parm1, parm5, parm4);
    }
    else {
        if (someVar2 > someVar1) {
            aResult = myMethod (parm1, parm4, parm3);
        }
        else {
            //I found my value here!
            return (happyValue); //this would be the base case
        }
    }
    return (aResult);
}

Why I'm changing the values of the parms is not
important, but I'm calling myMethod again inside itself. Is that the
same concept of recursion as shown in the Fibonacci
routine? It's certainly not as pretty, but it feels right, though that
doesn't say much. :p

The example code above is conceptually what I wrote for this
assignment. The two returns in it bother me, but it wouldn't
compile unless I put a return at the very bottom (outside of all the
ifs, whiles, and what-have-you statements.)

I've been advised by more than one professor not to use Wikipedia as a
resource! I still go to it though, it's everywhere and lots of
info. :) I thank all of you for your help.

-t

One thing good about insomnia is that you can spend time coding
recursive Java programs. :)

Taria

Oct 12, 2007, 10:27:26 AM

Sometimes I wish they had a delete-message feature here! I just
discovered that you can put a method call in your return statement!
Wow!! It looks better, works the same, and makes more sense, too!

(Thanks to the Fibonacci example code ... I noticed something on it
that I didn't before.)

Does it matter how many returns you have in your recursive method? Is
it OK to have 3 returns in it? Like if I want it to sort just the
right side of the array, I call it again with the right-side array values,
vice versa for the left, and the base case has its own return too.

And I hope you guys realize that the directory code was a bit
overwhelming for me right now. I see what you are trying to show.

thanks again!
-t

Now onward to trying to implement this with object oriented code. :)

Jean-Baptiste Nizet

Oct 12, 2007, 10:42:27 AM
On Oct 12, 15:46, Lew <l...@lewscanon.com> wrote:
> Jean-Baptiste Nizet wrote:
> > int countFilesInTreeWithRoot(File directory) {
> > int numberOfFilesInTheDirectory = countFilesInDirectory(directory);
> > int result = numberOfFilesInTheDirectory;
>
> Why do you feel that you need two variables for the same value?
>

Err, I don't know. You're right. It's stupid.

Jean-Baptiste Nizet

Oct 12, 2007, 10:55:55 AM
On Oct 12, 16:27, Taria <mche...@hotmail.com> wrote:
> Sometimes I wish they had a delete-message feature here! I just
> discovered that you can put a method call in your return statement!
> Wow!! It looks better, works the same, and makes more sense, too!
>
> (Thanks to the Fibonacci example code ... I noticed something on it
> that I didn't before.)
>
> Does it matter how many returns you have in your recursive method? Is
> it OK to have 3 returns in it? Like if I want it to sort just the
> right side of the array, I call it again with the right-side array values,
> vice versa for the left, and the base case has its own return too.
>

You probably don't realize it, but you could start a religious war by
asking such a question :-)

It's a matter of taste, readability and maintainability. Some people
don't like multiple return statements at all in a method. One of the
often-heard arguments is that having only one return statement helps
in debugging: you just put a log and/or breakpoint just before the
unique return statement to know what the method returns.
On the other hand, it's sometimes easier to read and follow when you
have several return statements, and it helps you avoid deeply
nested code. For example, code like

public String foo(String s) {
    if (s == null) {
        return null;
    }

    // some complex code
}

is easier to read (usually) than

public String foo(String s) {
    String result;
    if (s == null) {
        result = null;
    }
    else {
        // some complex code
    }
    return result;
}

> And I hope you guys realize that the directory code was a bit
> overwhelming for me right now. I see what you are trying to show.
>

Sorry. I wanted to give you this example because I consider it more
concrete, though a bit more complex, than the traditional fibonacci
example.
If the fibonacci example is sufficient for you to understand the
concept, then fine.

JB.

Eric Sosman

Oct 12, 2007, 12:39:07 PM
Lew wrote on 10/12/07 09:04:

> Taria wrote:
>
>>>I'm struggling with an assignment that involves using a 'quicksort'
>>>search for a user defined kth element within a list of numbers. The
>>>use of recursion is a requirement of this assignment.
>>>
>>>
>>>From what I understand, recursion breaks down a big task into smaller
>>>
>>>tasks until it solves the problem.
>
>
> No.
>
> Eric Sosman wrote:
>
>> The usual way to apply recursion is to divide a maxi-problem
>>into two or more mini-problems, solve the minis, and combine
>>their solutions to find the answer to the maxi-problem.
>
>
> No.

You've misspelled "Yes."

> Recursion refers to a routine that calls itself, as with the classic Fibonacci
> sequence method:
>
> public static int fibonacci( int n )
> {
>     if ( n < 0 ) return 0;
>     if ( n == 0 || n == 1 )
>     {
>         return 1;
>     }
>     return fibonacci( n - 1 ) + fibonacci( n - 2 );
> }
>
> Notice that fibonacci() calls itself, unless certain conditions are met. If
> those conditions didn't exist, the recursion would never end.

Notice also that the function does exactly as I
described: It decomposes the maxi-problem of finding
fibonacci(n) into the mini-problems of fibonacci(n-1)
and fibonacci(n-2), then it combines the solutions of
those smaller problems to reach its own solution. QED.

As an aside, I have always been appalled at the use
of Fibonacci numbers to illustrate recursion, because
recursion is probably the worst[*] way to compute them.
The obvious iterative solution executes in O(n) time, and
there's also an O(1) method, albeit with some drawbacks.
Why would anyone in his right mind choose the O(e^n)
recursive method? Oh, yeah: Because he's a classroom
drudge trying to teach recursion, and it's the only
problem that comes to mind ...

The O.P.'s teacher has not chosen a good problem to
illustrate recursion, but has at least stayed away from
the hackneyed, overused, and unsuitable Fibonacci numbers.

[*] Well, "worst" isn't right, because there is always
a way to disimprove any algorithm. This worse algorithm
can itself be disimproved, and so on ad infinitum: "worst
algorithm" is like "largest integer." But among what we
might call "non-bogus" algorithms that compute Fibonacci
numbers, recursion is probably the worst.

--
Eric....@sun.com

Mark Space

Oct 12, 2007, 2:16:07 PM
Jean-Baptiste Nizet wrote:

>
> You probably don't realize it, but you could start a religious war by
> asking such a question :-)
>

NOBODY expects the Spanish Inquisition! Our chief weapon is surprise...!

Surprise and fear...fear and surprise.... Our two weapons are fear and
surprise...!

And ruthless posting efficiency....

Our *three* weapons are fear, surprise, and ruthless efficiency...and an
almost fanatical devotion to the Fred Brooks....

Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are
such elements as fear, surprise....


I'll just post again.

Taria

Oct 12, 2007, 2:19:46 PM

I've always wondered why recursion performs so badly on the Fibonacci
numbers. I've read quite a few forums about this bad
performance but no one's ever explained why. Thanks Eric, your
explanation was interesting. :)

Yes, JB, I realized too that the classical recursion problem usually
"adds up" a result (I imagine a snowball) as it backs out after
finding its base case, while the problem I have at hand doesn't care
about the results of the other mini-problems it solved, other than
that they didn't yield an answer, in which case I call the routine
again with different parms after I decide which side of the list to
look in. This is where I became confused about recursion: I expected it
to snowball into an answer, when really I only care about one answer
(the base-case result.)

I had never differentiated between the ways recursion can be used
before (or understood it beyond the textbook, for that matter) till today.
I understand the directory recursion problem a lot more now (sleep
does wonders!) and I'm glad you gave me a practical approach on how to
use recursion in that way. That was a question that arose in my
mind while reading numerous web sites and recursive Java sorting code
(if the performance is so bad, why anyone would want to use the code
was a mystery to me, but I figured it was a way to illustrate a
point.)

I would have thought having only one return in a method would be
easier to debug/maintain, but as I wrote this code it made perfect
sense why I'd have multiple returns. I just wasn't sure whether this
was acceptable as good coding style. I appreciate your thoughts on this.

Awesome learning experience! :) Thanks everyone.

-t

Patricia Shanahan

Oct 12, 2007, 2:33:22 PM
Eric Sosman wrote:
...

> As an aside, I have always been appalled at the use
> of Fibonacci numbers to illustrate recursion, because
> recursion is probably the worst[*] way to compute them.
...

Is there a good problem for the job?

As far as I can tell, problems divide, for this purpose, into two
categories:

1. Problems that would be better done by other means.

2. Problems that are too difficult, or involve too much background
knowledge, for use in teaching the technique.

Can anyone suggest a problem that is both simple and best solved by recursion?

Patricia

Taria

Oct 12, 2007, 3:59:03 PM

How about the factorial problem?

public static int factorial (n)
if n = 0
return 1
else
return (factorial (n*(n-1)))

I'm not very good at analyzing the time complexity of this problem so
I don't know if this is efficient or not, but this allows me to see
the recursive property to where I understand it.

Patricia Shanahan

Oct 12, 2007, 4:23:23 PM
Taria wrote:
> On Oct 12, 8:33 am, Patricia Shanahan <p...@acm.org> wrote:
>> Eric Sosman wrote:
>>
>> ...
>>> As an aside, I have always been appalled at the use
>>> of Fibonacci numbers to illustrate recursion, because
>>> recursion is probably the worst[*] way to compute them.
>> ...
>>
>> Is there a good problem for the job?
...

> How about the factorial problem?

It is indeed a popular choice.

>
> public static int factorial (n)
> if n = 0
> return 1
> else
> return (factorial (n*(n-1)))

I think you mean "return n * factorial(n-1)".

> I'm not very good at analyzing the time complexity of this problem so
> I don't know if this is efficient or not, but this allows me to see
> the recursive property to where I understand it.

public static int factorial(int n) {
    int fact = 1;
    for (int i = 1; i <= n; i++) {
        fact *= i;
    }
    return fact;
}

is more direct, and usually faster. There is no need for all those
calls. The time complexity in both cases is O(n).

Patricia

Martin Gregorie

Oct 12, 2007, 4:46:36 PM
Patricia Shanahan wrote:
>
> Can anyone suggest a problem that is both simple and best solved by recursion?
>
Yes. Tree walking.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |

Eric Sosman

Oct 12, 2007, 6:17:00 PM
Taria wrote on 10/12/07 14:19:
> [...]

> I've always wondered why recursion performed badly for the fibonacci
> numbers recursion algorithm. I've read quite a few forums on this bad
> performance but no one's ever explained why. Thanks Eric, your
> explanation was interesting. :)

I didn't really "explain" the bad performance; I
just claimed it. To understand what's going on, try
to figure out how many times you call fib(0) in the
recursive calculation of fib(n).

fib(1) is a ground case: no calls to fib(0);

fib(2) calls fib(1) and fib(0), getting to
fib(0) once.

fib(3) calls fib(2) and fib(1), making 1+0=1
calls to fib(0).

fib(4) calls fib(3) and fib(2), making 1+1=2
calls to fib(0).

fib(5) calls fib(4) and fib(3), making 2+1=3
calls to fib(0).

Does a familiar pattern begin to emerge here? Would
you care to guess how many times fib(40) will wind up
calling fib(0)?
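
If you'd like to check your guess empirically, a quick sketch (mine,
not Eric's) counts the calls by following the same shape as fib itself:

// How many times does the recursive fib(n) reach fib(0)?
static long callsToFibZero(int n) {
    if (n == 0) return 1;   // we have just arrived at fib(0)
    if (n == 1) return 0;   // ground case, no further calls
    return callsToFibZero(n - 1) + callsToFibZero(n - 2);
}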

--
Eric....@sun.com


Lew

Oct 12, 2007, 6:55:05 PM
Mark Space wrote:
> NOBODY expects the Spanish Inquisition! Our chief weapon is surprise...!
>
> Surprise and fear...fear and surprise.... Our two weapons are fear and
> surprise...!
>
> And ruthless posting efficiency....
>
> Our *three* weapons are fear, surprise, and ruthless efficiency...and an
> almost fanatical devotion to the Fred Brooks....
>
> Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are
> such elements as fear, surprise....
>
>
> I'll just post again.

That is one of the two best paraphrases of that routine I've seen in these
groups. Actually, on balance it's better than the other one - that is the
best paraphrase of that routine I've seen in these groups.

--
Lew

Lew

Oct 12, 2007, 7:18:38 PM
Patricia Shanahan wrote:
> public static int factorial(int n) {
>     int fact = 1;
>     for (int i = 1; i <= n; i++) {
>         fact *= i;
>     }
>     return fact;
> }
>
> is more direct, and usually faster. There is no need for all those
> calls. The time complexity in both cases is O(n).

Tail recursion can always be refactored to a loop with little difficulty.
<http://en.wikipedia.org/wiki/Tail_recursion>
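
For instance, factorial written with an accumulator is tail recursive:
the multiply happens before the call, so nothing remains to be done
when the inner call returns (a sketch; note that Java compilers do not
actually eliminate tail calls):

static int factorial(int n) {
    return factTail(n, 1);
}

static int factTail(int n, int acc) {
    if (n <= 1) return acc;            // the accumulator holds the answer
    return factTail(n - 1, acc * n);   // tail call: result returned unchanged
}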

--
Lew

Patricia Shanahan

Oct 12, 2007, 7:24:02 PM
Martin Gregorie wrote:
> Patricia Shanahan wrote:
>>
>> Can anyone suggest a simple problem that is both best solved by
>> recursion?
>>
> Yes. Tree walking.

Are trees usually introduced before recursion?

Patricia

Steve Wampler

Oct 12, 2007, 7:09:06 PM
Stefan Ram wrote:
> In my programming classes, I once had the exercise to read in
> a 0-terminated sequence of numbers from the user (for example,
> via the console) and print them in the reverse order. For example:
...
> This exercise was given at a time when arrays and containers
> were not yet introduced, and - anyway - it also does not
> contain a specific limit for the number of numbers to be read in.

Heh. I assigned something like that once (not in Java, but translates
well). The 'cleverest' solution turned in was generally (ignoring
possible failure of obtaining console and any typos):

public static void main(String[] args) {
    String line;
    if (null != (line = System.console().readLine())) {
        main(args);
        System.out.println(line);
    }
}

Wasn't what I was looking for, but showed recursion and a
good understanding of how methods/functions work.
(The original solution was in a language "Icon" and was
simply:

procedure main()
write(1(read(),main()))
return
end
)
--
Steve Wampler -- swam...@noao.edu
The gods that smiled on your birth are now laughing out loud.


Mark Space

Oct 12, 2007, 8:32:48 PM
Lew wrote:

> That is one of the two best paraphrases of that routine I've seen in
> these groups. Actually, on balance it's better than the other one -
> that is the best paraphrase of that routine I've seen in these groups.
>

^_^

Which is kind of surprising considering how little actual paraphrasing I
did. I added the word "post" a couple of times and substituted Fred
Brooks for the Pope. Incorrectly, I see, because I left the word "the"
in there, dang it.

Patricia Shanahan

Oct 12, 2007, 9:31:24 PM
Stefan Ram wrote:

> Patricia Shanahan <pa...@acm.org> writes:
>>>> Can anyone suggest a problem that is
>>>> both simple and best solved by recursion?
>>> Yes. Tree walking.
>> Are trees usually introduced before recursion?
>
> The problem might be: to determine for
> a directory given by an object of type
>
> http://download.java.net/jdk7/docs/api/java/io/File.html
>
> whether it contains (directly or indirectly in
> any subdirectory) any file with an underscore
> in its name.
>
> It is sufficient to give the above documentation as
> a reference. No need to explicitly introduce trees
> before. (Most pupils in a programming class already
> have heard the term »directory« and »file« before).
>

I like that one. It does meet both the requirements I suggested earlier.
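
A rough sketch of that exercise (the traversal details are my own,
not Stefan's):

import java.io.File;

// Does any file under dir (directly or in any subdirectory)
// have an underscore in its name?
static boolean containsUnderscore(File dir) {
    File[] entries = dir.listFiles();
    if (entries == null) return false;       // unreadable, or not a directory
    for (File entry : entries) {
        if (entry.isDirectory()) {
            if (containsUnderscore(entry)) return true;  // recurse into subtree
        } else if (entry.getName().indexOf('_') >= 0) {
            return true;
        }
    }
    return false;
}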

Patricia

Roedy Green

Oct 13, 2007, 1:06:37 AM
On Fri, 12 Oct 2007 12:59:03 -0700, Taria <mch...@hotmail.com> wrote,

quoted or indirectly quoted someone who said :

>public static int factorial (n)


>if n = 0
> return 1
>else
> return (factorial (n*(n-1)))
>
>I'm not very good at analyzing the time complexity of this problem so
>I don't know if this is efficient or not, but this allows me to see
>the recursive property to where I understand it.

You can do factorial more efficiently with iteration. Way back in the
olden days of Fortran I wrote an iterative version of QuickSort. It
was hideously more complicated, though.

Recursion is pretty well the only way to traverse a tree (e.g.
enumerate all the files in a directory tree).

Thomas Fritsch

Oct 13, 2007, 7:22:50 AM
Patricia Shanahan wrote:
> As far as I can tell, problems divide, for this purpose, into two
> categories:
>
> 1. Problems that would be better done by other means.
>
> 2. Problems that are too difficult, or involve too much background
> knowledge, for use in teaching the technique.
>
> Can anyone suggest a problem that is both simple and best solved by recursion?
I'd suggest the Towers of Hanoi.
<http://en.wikipedia.org/wiki/Tower_of_Hanoi>
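
For reference, the whole solution is a three-line recursion (a sketch;
the peg labels are arbitrary):

// Move n disks from peg 'from' to peg 'to', using 'via' as the spare.
static void hanoi(int n, char from, char to, char via) {
    if (n == 0) return;                  // nothing to move: ground case
    hanoi(n - 1, from, via, to);         // clear the top n-1 disks aside
    System.out.println("disk " + n + ": " + from + " -> " + to);
    hanoi(n - 1, via, to, from);         // stack them on the moved disk
}

Iterative solutions exist but are famously harder to follow, which is
part of what makes this a good teaching problem.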

--
Thomas

Martin Gregorie

Oct 13, 2007, 8:05:48 AM
I think I was introduced to trees and recursion at nearly the same time,
but it was a long time ago and I don't really remember. I'm pretty
certain factorial() was the first recursive example I saw, but the first
useful examples of recursion I met were connected with building and
walking ordered binary trees - quite probably they were in Niklaus
Wirth's book "Algorithms + Data Structures = Programs".

Eric Sosman

Oct 13, 2007, 9:46:34 AM

Perhaps a program to play a fairly simple game like
tic-tac-toe ("noughts and crosses") would be a candidate.
Yes, it's a tree search and the beginning student may not
yet know about trees, but it seems to me the problem could
be posed and discussed and solved without using the word
"tree" at all.

Any kind of partitioning sort might do well, if "sort"
won't scare the students. For arrays a radix sort or a
Quicksort seems a reasonable problem. If knowledge of
linked lists can be assumed, sorting them with a top-down
straight merge is an easy recursion (whether it's best or
not depends on what you mean by "best").

Recursive-descent parsing might be too intricate for
someone who hasn't yet met recursion, but maybe a simple
expression evaluator: Numeric operands only, all operators
at equal precedence. Think of it as a four-function
calculator with ( ) keys to invoke and terminate sub-
instances of itself. ("Best solution" is debatable again,
though; an explicit stack is probably more convenient.)

--
Eric Sosman
eso...@ieee-dot-org.invalid

Daniel Pitts

Oct 13, 2007, 1:02:24 PM
Lew wrote:
> Jean-Baptiste Nizet wrote:
>> int countFilesInTreeWithRoot(File directory) {
>>     int numberOfFilesInTheDirectory = countFilesInDirectory(directory);
>>     int result = numberOfFilesInTheDirectory;
>
> Why do you feel that you need two variables for the same value?
>
>>     File[] subDirectories = getSubDirectories(directory);
>>     for (File subDirectory : subDirectories) {
>>         result += countFilesInTreeWithRoot(subDirectory);
>>     }
>>     return result;
>> }
>
> int countFilesInTreeWithRoot(File directory) {
>     int result = countFilesInDirectory(directory);
>     File[] subDirectories = getSubDirectories(directory);
>     for (File subDirectory : subDirectories) {
>         result += countFilesInTreeWithRoot(subDirectory);
>     }
>     return result;
> }
Probably the same reason you feel like having unnecessary variables...

int countFilesInTreeWithRoot(File directory) {
    int result = countFilesInDirectory(directory);
    for (File subDirectory : getSubDirectories(directory)) {
        result += countFilesInTreeWithRoot(subDirectory);
    }
    return result;
}

--
Daniel Pitts' Tech Blog: <http://virtualinfinity.net/wordpress/>

Daniel Pitts

Oct 13, 2007, 1:09:57 PM
You can have as many return statements in your code as make sense, but
only the first one that gets executed will do anything...

public String bob(int value) {
    if (value == 0) {
        return "Foo";
    }
    if (value < 2) {
        return "Bar";
    }
    if (value > 2) {
        return "Baz";
    }
    return "Value is 2!";
}

There is a school of thought, which I disagree with completely, that
you should have exactly one exit point from a function/procedure. They
would have you write the above code like:

public String bob(int value) {
    final String result;
    if (value == 0) {
        result = "Foo";
    } else if (value < 2) {
        result = "Bar";
    } else if (value > 2) {
        result = "Baz";
    } else {
        result = "Value is 2!";
    }
    return result;
}

In my opinion, that is actually more complicated to understand. On the
other hand, if you have a method that is 100 lines long, and a return
statement somewhere in the middle, you can confuse people who don't
notice that return. In practice, that doesn't matter, because you
shouldn't have methods that are longer than, say, 15 lines (give or
take). If you're writing a method that's long, try to think about the
subtasks inside of it and break them out into their own methods.

Hope this helps,
Daniel.

Daniel Pitts

Oct 13, 2007, 1:11:53 PM

xkcd on Monty Python quotes:
http://xkcd.com/16/

Daniel Pitts

Oct 13, 2007, 1:17:19 PM
This is actually a space-complexity issue. If you called factorial(500),
you'd end up with a stack 500 frames deep. Actually, in this case you'd
also end up with an integer overflow, but let's assume you're using a
special int that can hold 500! (hint: BigInteger can).

For these reasons, Factorial is best solved iteratively too.
public static int factorial(int n) {
    int f = 1;
    while (n > 1) {
        f *= n--;
    }
    return f;
}
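
For what it's worth, a BigInteger version of the same loop (a sketch)
really can hold 500!:

import java.math.BigInteger;

static BigInteger factorial(int n) {
    BigInteger f = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
        f = f.multiply(BigInteger.valueOf(i));   // exact, no overflow
    }
    return f;
}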

I think tree traversal is probably one of the better examples of
recursion, but it implies understanding of trees.

Lew

Oct 13, 2007, 2:24:58 PM

Well, I could say that I felt it unnecessary to point out every example, and
that I should leave one for you to find.

--
Lew

Lasse Reichstein Nielsen

Oct 14, 2007, 5:40:06 AM
Lew <l...@lewscanon.com> writes:

> Tail recursion can always be refactored to a loop with little difficulty.
> <http://en.wikipedia.org/wiki/Tail_recursion>

But the traditional recursive factorial isn't tail-recursive. It calls
recursively and then multiplies the result afterwards.

However, if you write it with an accumulator, it will be.
This is a tail-recursive version corresponding to Patricia Shanahan's
Java code.

fun factorial(n) =
    let fun fact_for(fact, i) = if (i <= n)
                                then fact_for(fact * i, i + 1)
                                else fact
    in fact_for(1, 1)
    end
/L
--
Lasse Reichstein Nielsen - l...@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'

Lasse Reichstein Nielsen

Oct 14, 2007, 5:53:14 AM
Eric Sosman <Eric....@sun.com> writes:

> fib(5) calls fib(4) and fib(3), making 2+1=3
> calls to fib(0).
>
> Does a familiar pattern begin to emerge here? Would
> you care to guess how many times fib(40) will wind up
> calling fib(0)?

Yes, that's the inductive argument. The inductive methods can usually
be handwaved. More formally:

A call to fib(n) makes fib(n)-1 additions.
This is the case for the base cases (0 additions, just return 1).
For a non-base-case (n>1), it makes two calls, to fib(n-1) and
fib(n-2) and adds these. If these use fib(n-1)-1 and fib(n-2)-1
additions, then calculating the result uses a total of
  (fib(n-1) - 1) + (fib(n-2) - 1) + 1
= fib(n-1) + fib(n-2) - 1
= fib(n) - 1
additions.

Or more handwavy: Notice that the only constant occurring in an
addition is 1, and that nowhere is a calculated value used more
than once, so the result is found by adding 1's until you reach
fib(n).
This is the version that enlightened me :)

Roedy Green

Oct 14, 2007, 6:36:29 AM
On Sat, 13 Oct 2007 10:17:19 -0700, Daniel Pitts
<newsgroup....@virtualinfinity.net> wrote, quoted or indirectly
quoted someone who said :

>For these reasons, Factorial is best solved iteratively too.

for three different factorial algorithms see
http://mindprod.com/jgloss/factorial.html

Roedy Green

Oct 14, 2007, 6:40:26 AM
On Sat, 13 Oct 2007 13:05:48 +0100, Martin Gregorie
<mar...@see.sig.for.address> wrote, quoted or indirectly quoted
someone who said :

>I'm pretty

>certain factorial() was the first recursive example I saw, but the first
>useful examples of recursion I met were connected with building and
>walking ordered binary trees - quite probably they were in Niklaus
>Wirth's book "Algorithms + Data Structures = Programs".

Recursion is not big in my toolbox. I have used it for:

1. traversing directory trees.

2. traversing tree structures in RAM.

3. parsing.

4. generating nonsense sentences.

5. QuickSort

I can't think of anything else offhand.

It is not nearly as useful as induction is in mathematics. Your depth
is pretty seriously limited.

Mark Space

Oct 14, 2007, 2:11:25 PM
Martin Gregorie wrote:
> Patricia Shanahan wrote:
>>
>> Can anyone suggest a problem that is both simple and best solved by
>> recursion?
>>
> Yes. Tree walking.
>
>

I actually might have to disagree with you here. CPUs have a physical
limit regarding stack use. Several use local stacks (internal cache) to
implement call and return stacks. Go beyond that limit and you incur a
significant performance penalty.

AMD explicitly recommends implementing tree walks with a loop and a
local stack data structure, not using recursion.
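
Such a walk keeps the pending subtrees in an explicit stack instead of
on the call stack. A sketch (the file-counting task is just an example):

import java.io.File;
import java.util.ArrayDeque;
import java.util.Deque;

static int countFiles(File root) {
    int count = 0;
    Deque<File> pending = new ArrayDeque<File>();
    pending.push(root);
    while (!pending.isEmpty()) {
        File dir = pending.pop();
        File[] entries = dir.listFiles();
        if (entries == null) continue;       // not a directory, or I/O error
        for (File entry : entries) {
            if (entry.isDirectory()) {
                pending.push(entry);         // remember subtree for later
            } else {
                count++;
            }
        }
    }
    return count;
}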

I think the aforementioned directory tree walk should probably be
implemented the same way. Hmmm, although maybe the I/O involved would
make stack performance a moot issue.

Lew

Oct 14, 2007, 2:53:03 PM
Patricia Shanahan wrote:
>>> Can anyone suggest a problem that is both simple and best solved by
>>> recursion?

Martin Gregorie wrote:
>> Yes. Tree walking.

Mark Space wrote:
> I actually might have to disagree with you here. CPUs have a physical
> limit regarding stack use. Several use local stacks (internal cache) to
> implement call and return stacks. Go beyond that limit and you incur a
> significant performance penalty.
>
> AMD explicitly recommends implementing tree walks with a loop and a
> local stack data structure, not using recursion.

Brian Goetz's superb /Java Concurrency in Practice/ has an example of a
recursive, sequential tree-walk algorithm (Listing 8.15), which is stack
bound, transformed to a thread-based concurrent algorithm (Listing 8.16). The
concurrent version is also recursive, but puts its intermediate state on the
heap, not the stack, so goes much, much deeper. This is an interesting case
of a parallel algorithm providing benefits even on a single-processor system.

It shows that part of the problem is not with recursion, but with use of the
stack to support recursion.

Of course, heap is finite, too. Recursion must maintain state proportionate
to problem size, versus a loop's summary state like a running total, with no
intermediate state to unwind. The state required by recursive algorithms
supports a certain economy of algorithmic expression, but often is too high a
price to pay.

> I think the afore mentioned directory tree walk should probably be
> implemented the same way. Hmmm, although maybe the IO involved would
> make stack performance a moot issue.

CPUs use cache for heap memory, too. Locality of reference is something to
think about, but for a cross-platform (and evolving-platform) world like
Java's, I suggest we not get too hung up on the specifics of how a CPU
implements program stack vs. program heap. The real issue is the depth of
that memory. Java's execution model has logically separate stack and heap,
and heap is (nearly?) always much larger. (I'm explicitly disclaiming JME here.)

The parallel algorithm suggested in /Concurrency/ not only shifts state to the
heap from the stack, thus buying larger capacity, it is inherently scalable to
more processor threads.

I assess that the recursive expression of Goetz's example directly lends
itself to parallelization. A loop idiom might have been more difficult to
render as multithreaded.

--
Lew

Martin Gregorie

Oct 14, 2007, 3:00:57 PM
I agree that its usefulness is limited, but I can add one further case:

6. Handling Bill Of Materials structures in databases.

These are usually represented in a database with the following
entity-relationship diagram where 'part' describes a product,
subassembly or indivisible item and 'component' serves both to implement
the many:many relationship between parts and to say how many components
of a given type there are in something.

               --------
              (  part   )
               --------
 is_made_from  |      |  is_a_subassembly_of
              /|\    /|\
             -------------
            (  component  )
             -------------

This is a self-referential many-to-many structure and as such is
inherently recursive: You walk the 'is_made_from' relationship to find
which components a part (which can be a subassembly or unitary, such as
a washer) is made from and you walk the 'is_a_subassembly_of'
relationship to discover where a part is used. IOW 'is_made_from' can be
used for costing products and 'is_a_subassembly_of' would be used in
stock control.

I've also seen these structures used in financial systems to define
financial products, e.g, a current account is assembled from lower level
components such as a balance sheet, interest rate, a direct debit
facility, a cheque drawing facility and an overdraft facility. A balance
sheet consists of a balance and a transaction list. A transaction list
consists of.....

I've tried both iterative and recursive methods of handling these
structures and recursion was both more elegant and faster (at least when
implemented in a VAX/VMS Rdb environment).

lsch...@d.umn.edu

Oct 14, 2007, 6:39:48 PM
> As an aside, I have always been appalled at the use
> of Fibonacci numbers to illustrate recursion, because
> recursion is probably the worst[*] way to compute them.
> The obvious iterative solution executes in O(n) time, and
> there's also an O(1) method, albeit with some drawbacks.
> Why would anyone in his right mind choose the O(e^n)
> recursive method? Oh, yeah: Because he's a classroom
> drudge trying to teach recursion, and it's the only
> problem that comes to mind ...

You might be interested to know that there is also an O(log n) method
for computing the Fibonacci sequence by matrix exponentiation. I've
used this for job interviews before. If a candidate knows the O(2^n),
O(n), O(log n) and O(1) methods of computing a Fibonacci sequence and
can explain the different approaches, then I am almost 100% confident
that they have an extremely solid computer science foundation. If
they can derive the O(1) solution on the fly, they're hired. ;)

For the curious, let the matrix A be defined as

            [1 1]          n   [F(n+1)  F(n)  ]
    A   =   [1 0]  ,  and A  = [F(n)    F(n-1)]

Since computing the power of a matrix can be efficiently done by
recursion(!), the Fibonacci sequence is efficiently computed as well.
Here is the recursive exponentiation in pseudo-code since this is a
discussion about recursive examples.

public static Matrix matrixPower(Matrix matrix, int exponent)
{
    if ( exponent == 0 )
        return IdentityMatrix;

    if ( exponent == 1 )
        return matrix;

    if ( exponent.isOdd() ) {
        return matrix.times( matrixPower( matrix, exponent-1 ));
    } else {
        // This value must be cached to make the recursion
        // efficient! Taria, can you explain why?
        Matrix halfPower = matrixPower( matrix, exponent / 2 );
        return halfPower.times( halfPower );
    }
}

See http://www.ics.uci.edu/~eppstein/161/960109.html

Note that this decomposition implies the following Fibonacci
relationship

[F(2n+1)  F(2n)  ]   [F(n+1)  F(n)  ] [F(n+1)  F(n)  ]
[F(2n)    F(2n-1)] = [F(n)    F(n-1)] [F(n)    F(n-1)]

                     [--  F(n+1)F(n) + F(n)F(n-1)]
                   = [--  ---                    ]

Thus

F(2n) = F(n) * (F(n+1) + F(n-1))

You could use this new identity to write a new recursive function that
makes three recursive calls when n is even and the standard recursion
when n is odd. This would be slightly more efficient than using the
matrix multiplications directly since you would only be computing with
the non-zero entries of the matrix.

public static int fib( int n )
{
    if ( n < 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        return fib(h) * ( fib(h+1) + fib(h-1) );
    }
    else
    {
        return fib(n-1) + fib(n-2);
    }
}

Aren't algorithms fun? :)

-Lucas

Mark Space

Oct 14, 2007, 7:38:01 PM
Lew wrote:

> Brian Goetz's superb /Java Concurrency in Practice/ has an example of a
> recursive, sequential tree-walk algorithm (Listing 8.15), which is stack
> bound, transformed to a thread-based concurrent algorithm (Listing
> 8.16).

Thanks, I added this to my Amazon wish list. I've been meaning to for a
while and you just reminded me.

I have a couple of algorithms I fall back on when I need faster tree
walks. One is Michael Abrash's algorithm from The Zen of Graphics
Programming. It uses a fixed length array of pointer allocated locally
(on the stack). Good if your tree can publish it's max depth in
advance. A simple alteration of Abrash can use a regular Java stack
structure to provide tree traversal up to the depth allowed my heap memory.

(Actually, I have no idea how the speeds of these two compare. Hmmm,
gonna need to test this.)

The other is J. M. Morris's simple tree walk. (Information Processing
Letters, Dec. 1979.) It uses no extra memory at all, but does transform
the tree as it walks. Good for limited memory devices where multiple
threads aren't an issue, but I've never had a chance to try it out in
practice.

You make good points. I hadn't thought about using multiple threads to
speed up a tree walk. It's something I should investigate.

John W. Kennedy

Oct 14, 2007, 10:19:11 PM

As a thought experiment only, I've always liked "Implement the #include
directive" (or %INCLUDE or COPY or whatever) -- but that's no good for
someone starting with Java, and, as I say, works only as a thought
experiment.

Walking the directory tree may indeed be the best example. It's a
concept that modern beginners will normally have an intuitive grasp of,
even if they don't know about other sorts of treewalking. We old farts,
of course, may not have been exposed to computers at all before our
first programming courses, but that's not normal today.

--
John W. Kennedy
"The whole modern world has divided itself into Conservatives and
Progressives. The business of Progressives is to go on making mistakes.
The business of the Conservatives is to prevent the mistakes from being
corrected."
-- G. K. Chesterton

Taria

Oct 14, 2007, 11:31:38 PM

>
> Aren't algorithms fun? :)
>
> -Lucas

It's a lot of fun when I finally understand what's being said. :p My
class is an intro to algorithms, as a matter of fact, and if we were to
put the class on a standard grading curve, we'd all be failing. From
this, at least I know I'm not the only person struggling in the class.

I am not sure why I'm having such a hard time with the big-O concept
and everything else I'm learning about; sometimes it seems so simple
until someone asks me how to compute a fib series in O(1). lol. I'm
going to have to look that up, I have no clue about that one. And I
didn't know there were different time complexities tagged onto the fib
series (other than iterative vs. recursive.)

Interesting web page, Lucas; I bookmarked it so I can read it for a
different perspective on concepts when I get stuck trying to
understand them. :)

-t

And no, I can't answer the cache question either. I'll think
about it for a bit, though whether I get any usable answers is
another matter. :p

Lasse Reichstein Nielsen

Oct 15, 2007, 12:55:21 AM
lsch...@d.umn.edu writes:

> You might be interested to know that there is also a O(log n) method
> for computing the Fibonacci sequence by matrix exponentiation.

...


> If they can derive the O(1) solution on the fly, they're hired. ;)

> For the curious, let the matrix A be defined as
>
> n
> A = [1 1] = [F(n+1) F(n) ]
> [1 0] [F(n) F(n-1)]

I'm guessing the O(1) solution is the one that uses the eigenvalues of
that matrix.
I'm hard pressed to call it O(1), though. Calculating the power of a
real number also takes logarithmic time as soon as you leave the range
of the machine-supported doubles. But if you limit yourself to a fixed
range of output values, then there is an O(1) algorithm: table lookup.
For an int result, the table only needs about 47 entries :)

Actually, if you need arbitrarily large results, the simple operations
like addition and multiplication won't be O(1) any more, but probably
log(n) and log(n)^2 where n is the size of the numbers added or
multiplied.

Christian

Oct 15, 2007, 6:28:10 AM
Taria wrote:
> Hello all,
>
> I'm struggling with an assignment that involves using a 'quicksort'
> search for a user defined kth element within a list of numbers. The
> use of recursion is a requirement of this assignment.
>
> From what I understand, recursion breaks down a big task into smaller
> tasks until it solves the problem. My question is this: (I hope I
> make sense in asking this) what if I was only interested in the result
> at the very end of the recursion, how would I pass that value I found
> to the first calling statement?
>
> I figured out (and I'm sure my program is clumsy) how to recursively
> solve the assignment, but I don't know how to pass back the value I
> find (after I split the list into smaller bits.) I could put a print
> statement right there to indicate my answer, but after looking at a
> few examples of other people's recursive Java routines, I feel
> uncomfortable with this solution. I feel like I should be able to pass
> back a value at the end of a recursive call without it being changed
> when it's returning to the main routine (yes, that's what happened,
> i.e. I HAD a statement that looked similar to this:
>
> return (aValue);
>
> and it changed the value of aValue to something else. Conceptually,
> should I be able to use that statement above?
>
> Also, because I'm so intimidated by the idea of recursion and still
> somewhat afraid of objects, I resorted to using static methods instead
> of objects. Am I hurting myself by doing this? Isn't the concept of
> recursion the same whether it's done with objects or with static methods?
> All the examples I'm looking at use objects hence I've begun to
> question my pursuit of using static methods. (Really, I was going
> to rewrite this program using object-oriented code after I figured out
> the recursive part, I know I can do it! :p)
>
> Any advice is appreciated and I apologize in advance for anything that
> sounds 'wrong.'
>
> -t
>

Recursion is just a function calling itself. What you are talking about
is one of the typical algorithm design paradigms for solving a problem:
"Divide and Conquer."
There are other algorithm paradigms that may use recursion to work.
Another paradigm that uses recursion is Dynamic Programming.
While you can do divide and conquer with static method calls, dynamic
programming needs some kind of object that holds the results of earlier
computed values. So either create an object that does the computation
and holds these values, or pass the computed values with
each method call.

In other parts of the thread we had the Fibonacci series:
with only divide and conquer you get an exponential number of method calls,
while if you use dynamic programming and store solved subproblems
you stay in O(n) (which is still not great for Fibonacci, but better than
nothing.)
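
For instance, a memoized fib (a sketch, using the same F(0)=F(1)=1
convention as Lew's example) stores each solved subproblem so it is
computed only once:

import java.util.HashMap;
import java.util.Map;

static Map<Integer, Integer> memo = new HashMap<Integer, Integer>();

static int fib(int n) {
    if (n == 0 || n == 1) return 1;
    Integer cached = memo.get(n);
    if (cached != null) return cached;       // subproblem already solved
    int result = fib(n - 1) + fib(n - 2);    // solved exactly once
    memo.put(n, result);
    return result;
}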

Christian

Taria

Oct 15, 2007, 8:10:12 AM
> Recursion is just a function calling itself. What you are talking about
> is one of the typical algorithm design paradigms for solving a problem:
> "Divide and Conquer."
> There are other algorithm paradigms that may use recursion to work.

> Another paradigm that uses recursion is Dynamic Programming.

Dynamic Programming sounds like the start of smart programs that
'learn' solutions by remembering things they've done in the past. This
sounds like an interesting subject!

> While you can do divide and conquer with static method calls, dynamic
> programming needs some kind of object that holds the results of earlier
> computed values. So either create an object that does the computation
> and holds these values, or pass the computed values with
> each method call.

I did a combination of what you suggested: in a static method, I
passed computed parameters (left and right ArrayList indexes and the
value of k) using recursion, and stored the list values in objects.
That was kinda fun...I enjoyed doing it once I had the recursive
concept intact. I can see why object-oriented languages became so
popular; they're really easy to use. (Once you get the hang of it,
anyway.) I have yet to put computation inside an
object...that's new to me and sounds fun!

>
> In other parts of the thread we had the Fibonacci series:
> with only divide and conquer you get an exponential number of method calls,
> while if you use dynamic programming and store solved subproblems
> you stay in O(n) (which is still not great for Fibonacci, but better than
> nothing.)
>


Reading different perspectives on this subject (or any other subject)
helps tremendously in expanding 'beyond the textbook' stuff that is
presented in class. I'm enjoying this thread. :)

-t

Ingo Menger

Oct 15, 2007, 8:29:36 AM
On Oct 13, 01:24, Patricia Shanahan <p...@acm.org> wrote:
> Martin Gregorie wrote:
> > Patricia Shanahan wrote:
>
> >> Can anyone suggest a problem that is both simple and best solved by
> >> recursion?
>
> > Yes. Tree walking.
>
> Are trees usually introduced before recursion?

This would be quite impossible, wouldn't it?
Since trees are recursive data structures, when you introduce them,
you also introduce recursion.
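
A sketch makes the point: the data type refers to itself, so the
natural operations do too.

// A binary tree node: each child is itself a (sub)tree, possibly null.
class Node {
    int value;
    Node left, right;
}

// Counting the nodes recurses exactly where the type does.
static int size(Node root) {
    if (root == null) return 0;              // empty tree: base case
    return 1 + size(root.left) + size(root.right);
}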

Eric Sosman

Oct 15, 2007, 9:47:13 AM
Lasse Reichstein Nielsen wrote:
> [...]

> I'm guessing the O(1) solution is the one that uses the eigenvalues of
> that matrix.

See TAOCP equation 1.2.8(15).

--
Eric Sosman
eso...@ieee-dot-org.invalid

Roedy Green

Oct 15, 2007, 11:12:55 AM
On Mon, 15 Oct 2007 12:28:10 +0200, Christian <fake...@xyz.de> wrote,

quoted or indirectly quoted someone who said :

>Dynamic
>programming needs some kind of object that holds the results of earlier
>computed values

I did all kinds of dynamic programming back in the days of Fortran
where you don't have recursion. It never dawned on me until now that
you could handle tracking the history with recursion.

Steve Wampler

Oct 15, 2007, 12:35:24 PM
lsch...@d.umn.edu wrote:
> public static int fib( int n )
> {
>     if ( n < 2 )
>         return 1;
>
>     if ( isEven( n ))
>     {
>         int h = n / 2;
>         return fib(h) * ( fib(h+1) + fib(h-1) );
>     }
>     else
>     {
>         return fib(n-1) + fib(n-2);
>     }
> }
>
> Aren't algorithms fun? :)

Sure are! (What happens when this is called with n == 2?)

--
Steve Wampler -- swam...@noao.edu
The gods that smiled on your birth are now laughing out loud.

Christian

Oct 15, 2007, 2:19:52 PM
Roedy Green wrote:

> On Mon, 15 Oct 2007 12:28:10 +0200, Christian <fake...@xyz.de> wrote,
> quoted or indirectly quoted someone who said :
>
>> Dynamic
>> programming needs some kind of object that holds the results of earlier
>> computed values
>
> I did all kinds of dynamic programming back in the days of Fortran
> where you don't have recursion. It never dawned on me until now that
> you could handle tracking the history with recursion.
I meant that your dynamic-programming-based algorithm must somehow
provide the already-calculated subproblem results to each call of the
function.

So either you pass these in as a parameter, e.g.

public static Integer calc(Integer what, Map<Integer, Integer> subresults) { ... }

or you use a calc object:

class CalcObj {
    private Map<Integer, Integer> subresults = ...;
    public Integer calc(Integer what) { ... }
}

I don't see any reason why a dynamic programming algorithm shouldn't
also use recursion, if the problem is predestined for recursion but
too hard to solve without dynamic programming, e.g. Fibonacci (again a
bad example, as O(1) is possible).

lsch...@d.umn.edu

Oct 15, 2007, 2:36:06 PM

> > Aren't algorithms fun? :)
>
> Sure are! (What happens when this is called with n == 2?)

Yeah, I noticed that it would go into an infinite recursion after I
posted it. I wasn't careful in stating whether I was counting from
zero or one. Let me be explicit:

Count the Fibonacci numbers starting from 1:

           { 0                n < 1
    F(n) = { 1                n = 1
           { 1                n = 2
           { F(n-1) + F(n-2)  else

The relation still holds,

F(2) = F(1) * (F(0) + F(2))
     = 1 * (0 + F(2))
     = F(2)

but we now have a base case that returns F(2) immediately. Here is
the correct implementation. I hope. It's always dangerous posting
code for everyone to see. :)

public static int fib( int n )
{
    if ( n < 1 )
        return 0;

    if ( n == 1 || n == 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        return fib(h) * ( fib(h+1) + fib(h-1) );
    }
    else
    {
        return fib(n-1) + fib(n-2);
    }
}
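
(The isEven helper never appears in the thread; a minimal assumed
version, plus a quick sanity check of the fix, might look like this:)

// Assumed helper -- not shown in the original post.
public static boolean isEven(int n)
{
    return n % 2 == 0;
}

public static void main(String[] args)
{
    for (int i = 1; i <= 10; i++)
        System.out.print(fib(i) + " ");   // expected: 1 1 2 3 5 8 13 21 34 55
}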

-Lucas

lsch...@d.umn.edu

unread,
Oct 15, 2007, 3:01:20 PM10/15/07
to
On Oct 14, 8:31 pm, Taria <mche...@hotmail.com> wrote:

> I am not sure why I'm having such a hard time with the O(n) concept
> and everything else I'm learning about, sometimes it seems so simple
> until someone asks me how to compute O(1) for a fib series. lol. I'm
> going to have to look that up, I have no clue about that one. And I
> didn't know there were different time complexities tagged onto the fib
> series (other than if it was computed using iterative vs recursion.)

Well, to be fair, the time complexity of an algorithm is totally
independent of whether one computes via recursion or iteration.
Also, the O(1) solution to the Fibonacci sequence requires analysis
techniques beyond an introductory course.

There are subtle issues here, too, in terms of accurately
characterizing the big-O running time -- one can get different
asymptotic results assuming a RAM computer (like almost every computer
these days) versus a Turing machine model. If you take an analysis
of algorithms course you will be exposed to these topics.

If you want to look up the closed-form solution to the nth Fibonacci
number, please do. There is some neat math in there, and it shows how
the Golden Ratio is related to the Fibonacci sequence. Since the
golden ratio is considered the "most aesthetic" ratio and many natural
processes follow the Fibonacci sequence, some conjecture that this is
a reason humans find natural forms and structures so visually pleasing,
and this is why the golden ratio is used by practicing architects.

http://en.wikipedia.org/wiki/Golden_ratio#Architecture
http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fibnat.html

But it is probably best that you just treat this as a bit of "trivia"
for the moment. You know such a solution exists, so if it is
presented to you later on, the analysis may be more relevant. When
you have the background to understand "why" a particular algorithm or
analysis technique can be useful, it's often much easier to
internalize that knowledge.


> Interesting web page, Lucas; I bookmarked it so I can read it for a
> different perspective on concepts when I get stuck trying to
> understand them. :)
>

> And no, I can't answer the cache question either. I'll think
> about it for a bit, though whether I get any usable answers is
> another matter. :p

Hint: This is (loosely) related to dynamic programming. You have
solved a subproblem of size n/2 and can recycle that answer to solve a
problem of size n. Consider how much extra computation would be
required if you recomputed the n/2 solution. Remember, this is
repeated at each step of the recursion on sub-problems of size n/2,
n/4, n/8, ...
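
(One hedged sketch of where the hint leads, based on the corrected
F(1) = F(2) = 1 version above: the even case makes three recursive
calls, but fib(h+1) is just fib(h) + fib(h-1), so two of the results
can be recycled into the third.)

// Sketch only: recycle the two half-size answers instead of
// recomputing fib(h+1) with a third recursive call.
public static int fib(int n)
{
    if ( n < 1 )
        return 0;

    if ( n == 1 || n == 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        int a = fib(h - 1);
        int b = fib(h);
        return b * (a + (a + b));   // fib(h+1) = a + b, reused
    }

    return fib(n-1) + fib(n-2);
}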

-Lucas


Steve Wampler

unread,
Oct 15, 2007, 4:46:56 PM10/15/07
to
lsch...@d.umn.edu wrote:
> Yeah, I noticed that it would go into an infinite recursion after I
> posted it. I wasn't careful in stating whether I was counting from
> zero or one.

Interestingly, because in the end it makes little difference
whether one starts counting at 0 or 1, the original solution
works if:

if ( isEven(n) )

is simply replaced with:

if ( !isEven(n) ) // i.e. isOdd(n)

(A quick check, using the original convention fib(0) = fib(1) = 1:
fib(5) becomes h = 2, fib(2) * ( fib(3) + fib(1) ) = 2 * (3 + 1) = 8,
which is correct.)

Taria

unread,
Oct 15, 2007, 5:11:54 PM10/15/07
to
>
> > And no, I can't answer the cache question either. I'll think
> > about it for a bit, though whether I get any usable answers is
> > another matter. :p
>
> Hint: This is (loosely) related to dynamic programming. You have
> solved a subproblem of size n/2 and can recycle that answer to solve a
> problem of size n. Consider how much extra computation would be
> required if you recomputed the n/2 solution. Remember, this is
> repeated at each step of the recursion on sub-problems of size n/2,
> n/4, n/8, ...
>
> -Lucas

I see the answer now. It must be cached for 'instant' access
to the already computed subparts for it to be efficient.
The dynamic programming approach is actually a good example
for the fib series because it demonstrates the weakness, and
once you compensate for that by using an if statement:

if O(1) then
    here's the answer
else
    perform recursive solve of fib(n)

it is a good replacement for the classical recursive solution
(and also a good contrast).

Anyway, those are my thoughts for this morning. :)

-t
Why is it I understand everything so much more in the morning
than 2am at night? :p

lsch...@d.umn.edu

unread,
Oct 15, 2007, 8:16:02 PM10/15/07
to

What you wrote in pseudo code is actually a technique called
"memoization". Yes, "memo"-ize, not "memor"-ize. It's the direct way
to make recursive functions efficient by caching values. This is a
top-down method -- you begin with a problem of size n and work down to
the base cases. With dynamic programming, you work bottom-up by
starting with the base case and constructing larger solutions. Here
are the three approaches presented together -- recursive, memoized
recursion and dynamic programming:

// Fibonacci numbers, starting at zero, F(0) = F(1) = 1

////
// A standard recursive method that is
// inefficient
public static int fib_recursive(int n)
{
    if ( n < 2 )
        return 1;

    return fib_recursive(n-1) + fib_recursive(n-2);
}


////
// A memoized version that is efficient
// and retains the recursive structure.
public static int memos[];

public static int fib_memoize(int n)
{
    // n+1 slots, but at least 2 so the base cases always fit
    memos = new int[Math.max(n + 1, 2)];
    memos[0] = 1;
    memos[1] = 1;

    return fib_helper( n );
}

public static int fib_helper(int n)
{
    // This is your, 'if O(1) then here's the answer'
    if ( memos[n] != 0 )
        return memos[n];

    // Memoize the results
    memos[n-1] = fib_helper(n-1);
    memos[n-2] = fib_helper(n-2);

    return memos[n-1] + memos[n-2];
}


////
// A dynamic programming solution
// that builds the answer up from
// the base cases
public static int fib_dynamic(int n)
{
    if ( n < 2 )
        return 1;

    // define the base cases
    int n_2 = 1;
    int n_1 = 1;

    // A variable to hold the answer
    // (initialized so it is definitely
    // assigned before the return)
    int ans = 1;

    // iteratively build up the
    // solution. Dynamic programming
    // is trivial for this problem
    for ( int i = 1; i < n; i++ )
    {
        // F(n) = F(n-1) + F(n-2);
        ans = n_2 + n_1;

        // Update the dynamic programming state
        n_2 = n_1;
        n_1 = ans;
    }

    return ans;
}


As you can see, the three implementations have different
characteristics:

Algorithm        Time Complexity    Space Complexity
---------        ---------------    ----------------
Recursive        O(1.618^n)         O(n)
Memoize          O(n)               O(n)
Dynamic Prog.    O(n)               O(1)

(The base of the exponential is the golden ratio, phi ~= 1.618; for
n = 30 that means well over a million recursive calls, versus 29 loop
iterations for the dynamic programming version.)

The more complicated approaches have superior algorithmic properties,
but we pay for that performance at the cost of a more complex
implementation. Also, I should note that the O(n) space complexity of
the recursive approach comes from the stack usage of each recursive
call. This may not be obvious at first glance.

-Lucas

Taria

unread,
Oct 16, 2007, 1:03:28 AM10/16/07
to
> With dynamic programming, you work bottom-up by
> starting with the base case and constructing larger
> solutions.

Question here: So if dynamic programming works from the bottom up,
then it really is similar to an iterative approach, isn't it? So is
the only difference here that it uses a recursive call to compute and
caches that, instead of accumulating the answer into a variable?

And if I were to compare the two, iterative vs. dynamic programming,
the iterative approach would seem 1) more readable, and 2) to have the
same time complexity. I can't even begin to figure out what the space
overhead would be for an iterative approach. But if I were to exercise
my newly formed idea of what it would be, the only thing it needs to
remember is the sum. So my guess would be O(1). (I am afraid to post
what I think are answers to problems for fear of getting 'shot!' :p
but I'll take a chance this time because mistakes are the best to
learn from.)

> As you can see, the three implementations have different
> characteristics:
>
> Algorithm        Time Complexity    Space Complexity
> ---------        ---------------    ----------------
> Recursive        O(1.618^n)         O(n)
> Memoize          O(n)               O(n)
> Dynamic Prog.    O(n)               O(1)
>

Another question here... why is the space complexity only O(1) for the
dynamic programming? Does that represent where the answers are stored?

Dynamic programming is a subject that will be covered this week, I'm
quite curious about it now.

-t

Patricia Shanahan

unread,
Oct 16, 2007, 1:14:19 AM10/16/07
to
Taria wrote:
>> With dynamic programming, you work bottom-up by
>> starting with the base case and constructing larger
>> solutions.
>
> Question here: So if dynamic programming works from the bottom up,
> then it really is similar to an iterative approach, isn't it? So is
> the only difference here that it uses a recursive call to compute and
> caches that, instead of accumulating the answer into a variable?

Dynamic programming can be done with no recursion. The distinguishing
feature is that dynamic programming algorithms work by building up
subproblem solutions in an organized fashion.

Patricia

lsch...@d.umn.edu

unread,
Oct 16, 2007, 1:32:30 AM10/16/07
to
On Oct 15, 10:03 pm, Taria <mche...@hotmail.com> wrote:
> > With dynamic programming, you work bottom-up by
> > starting with the base case and constructing larger
> > solutions.
>
> Question here: So if dynamic programming works from the bottom up,
> then it really is similar to an iterative approach, isn't it? So is
> the only difference here that it uses a recursive call to compute and
> caches that, instead of accumulating the answer into a variable?
>
> And if I were to compare the two, iterative vs. dynamic programming,
> the iterative approach would seem 1) more readable, and 2) to have the
> same time complexity. I can't even begin to figure out what the space
> overhead would be for an iterative approach. But if I were to exercise
> my newly formed idea of what it would be, the only thing it needs to
> remember is the sum. So my guess would be O(1). (I am afraid to post
> what I think are answers to problems for fear of getting 'shot!' :p
> but I'll take a chance this time because mistakes are the best to
> learn from.)

Dynamic programming *is* the iterative solution. Also, dynamic
programming and recursion are opposite ways of solving the same
problem, so when you say the only difference is that it (dynamic
programming) uses a recursive call to compute, that is confusing the
two approaches. Take another look at the dynamic programming solution
to the fibonacci sequence; you can see that it *never* calls itself
(no calls to fib_dynamic), so it does not perform *any* recursion at
all.

As far as the space complexity goes, I'm afraid I might have skipped
ahead too far and muddled the relationship between memoization and
dynamic programming. Let me try again. Compare these two methods, and
assume that the array cached[] contains enough space to store n+1
values.

public static int fib_memoize(int n)
{
    // Base cases: the two smallest subproblems.
    if ( n < 2 )
        return 1;

    // If we have already solved this subproblem,
    // just return the answer immediately.
    if ( cached[n] != 0 )
        return cached[n];

    // Memoize the results. Notice that we started
    // with a solution of size n and are saving the
    // solutions to the subproblems. We are
    // recursively invoking fib_memoize
    cached[n-1] = fib_memoize(n-1);
    cached[n-2] = fib_memoize(n-2);

    return cached[n-1] + cached[n-2];
}

public static int fib_dynamic(int n)
{
    // Define the base cases. Think of
    // this as solving the two smallest
    // problems
    cached[0] = 1;
    cached[1] = 1;

    // Build the solution by dynamic
    // programming. Notice we do not
    // call fib_dynamic at any point, so
    // this is *not* a recursive solution
    for ( int i = 2; i <= n; i++ )
        cached[i] = cached[i-1] + cached[i-2];

    return cached[n];
}

We can write the same functions for factorial, too. Memoization is
not needed in this case to make the recursive solution efficient. The
recursive factorial will use O(n) space because of the n calls to
itself. The dynamic programming solution will use O(n) space as it
fills up the array.

public static int factorial(int n)
{
    if ( n == 0 )
        return 1;

    return n * factorial(n - 1);
}

public static int factorial_dynamic(int n)
{
    // "solve" the base case
    cached[0] = 1;

    // build up the solution from the smallest problem to
    // largest problem
    for ( int i = 1; i <= n; i++ )
        cached[i] = i * cached[i-1];

    return cached[n];
}

Of course the dynamic programming solution can be optimized because
computing the next factorial value requires only the previous value.
Thus we can write the factorial function to use only O(1) space. The
same approach is used for the fibonacci dynamic programming solution.

public static int factorial_dynamic_2(int n)
{
    // "solve" the base case (n = 0) and make
    // it the current solution
    int solution = 1;

    // build up the solution from the smallest problem to
    // largest problem
    for ( int i = 1; i <= n; i++ )
        solution = i * solution;

    return solution;
}

> > As you can see, the three implementation have different characteristic
>
> > Algorithm Time Complexity Space Complexity
> > --------- --------------- ----------------
> > Recursive O(1.5^n) O(n)
> > Memoize O(n) O(n)
> > Dynamic Prog. O(n) O(1)
>
> Another question here...why is the space complexity only O(1) for the
> dynamic programming? Does that represent where the answers are stored
> at?

Hopefully the new examples make this clear. There is no inherent
reason for dynamic programming to be more space efficient than
recursion. In the specific case of the Fibonacci sequence, we only
need the last two values in order to calculate the new value, so the
space complexity can be improved from O(n) (the size of the cached[]
array) to O(1) (the space for holding 2 values). Consider the O(1)
dynamic programming solution to be an optimization.

-Lucas


Taria

unread,
Oct 17, 2007, 6:22:04 AM10/17/07
to
On Oct 15, 7:32 pm, lscha...@d.umn.edu wrote:
> On Oct 15, 10:03 pm, Taria <mche...@hotmail.com> wrote:
>
> > > With dynamic programming, you work bottom-up by
> > > starting with the base case and constructing larger
> > > solutions.
>
> Hopefully the new examples make this clear. There is no inherent
> reason for dynamic programming to be more space efficient than
> recursion. In the specific case of the Fibonacci sequence, we only
> need the last two values in order to calculate the new value, so the
> space complexity can be improved from O(n) (the size of the cached[]
> array) to O(1) (the space for holding 2 values). Consider the O(1)
> dynamic programming solution to be an optimization.
>
> -Lucas

Your explanations have helped make the concept of DP clearer. Thanks
a million!

However, in the lecture I just had today, dynamic programming was
intertwined with the notion of optimal binary search trees. :x

I tried hard to fit the definition of DP into the lecture, but couldn't
spend too much time doing that because there was a lot of focus on how
to derive the numerical value of the optimal path (?) of an optimal
binary tree given X amount of nodes with their probabilities attached.
Thankfully, the professor tried to make it easier by not including the
dummy nodes that Cormen mentions. In the book, Cormen explains how he
uses DP by creating a table that contains the root and an e (something
I have no clue what he's referring to, not yet anyway) once he
calculates the values between this node and that node. (That part
makes sense.)

Since I have a few days free of homework, I'm going to write an
iterative program that will solve this for me and then see if I
can do it recursively, then try to add in the stored-values lookup
table for values already calculated. It doesn't seem hard right now,
but I won't know till I try it. :p
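
(For reference, the "e" in Cormen is the table of expected search
costs: e[i][j] holds the cost of an optimal BST over keys i..j, and
root[i][j] records which key sits at its root. Below is a hedged
sketch of the iterative, dummy-node-free version described above --
the method name, the loose CLRS-style indexing, and p[1..n] holding
the key probabilities are all assumptions for illustration:)

// Optimal BST via dynamic programming, without dummy keys.
// p[1..n] are the search probabilities of the n keys, in key order.
public static double[][] optimalBst(double[] p, int n)
{
    double[][] e = new double[n + 2][n + 1];  // e[i][j]: cost of keys i..j
    double[][] w = new double[n + 2][n + 1];  // w[i][j]: sum of p[i..j]
    int[][] root = new int[n + 1][n + 1];     // root[i][j]: root of keys i..j

    for ( int i = 1; i <= n + 1; i++ )
    {
        e[i][i - 1] = 0.0;                    // empty subtree costs nothing
        w[i][i - 1] = 0.0;
    }

    for ( int len = 1; len <= n; len++ )      // subproblems by size
    {
        for ( int i = 1; i + len - 1 <= n; i++ )
        {
            int j = i + len - 1;
            e[i][j] = Double.POSITIVE_INFINITY;
            w[i][j] = w[i][j - 1] + p[j];

            for ( int r = i; r <= j; r++ )    // try each key as the root
            {
                double cost = e[i][r - 1] + e[r + 1][j] + w[i][j];
                if ( cost < e[i][j] )
                {
                    e[i][j] = cost;
                    root[i][j] = r;
                }
            }
        }
    }

    return e;    // e[1][n] is the optimal expected search cost
}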

I appreciate all the help given to me in the past posts, btw.

-t

bugbear

unread,
Oct 17, 2007, 6:57:29 AM10/17/07
to
Eric Sosman wrote:
>> Recursion refers to a routine that calls itself, as with the classic Fibonacci
>> sequence method:
>>
>> public static int fibonacci( int n )
>> {
>> if ( n < 0 ) return 0;
>> if ( n == 0 || n == 1 )
>> {
>> return 1;
>> }
>> return fibonacci( n - 1 ) + fibonacci( n - 2 );
>> }
>>
>> Notice that fibonacci() calls itself, unless certain conditions are met. If
>> those conditions didn't exist, the recursion would never end.
>
> Notice also that the function does exactly as I
> described: It decomposes the maxi-problem of finding
> fibonacci(n) into the mini-problems of fibonacci(n-1)
> and fibonacci(n-2), then it combines the solutions of
> those smaller problems to reach its own solution. QED.

>
> As an aside, I have always been appalled at the use
> of Fibonacci numbers to illustrate recursion, because
> recursion is probably the worst[*] way to compute them.
> The obvious iterative solution executes in O(n) time, and
> there's also an O(1) method, albeit with some drawbacks.
> Why would anyone in his right mind choose the O(e^n)
> recursive method? Oh, yeah: Because he's a classroom
> drudge trying to teach recursion, and it's the only
> problem that comes to mind ...

Before dismissing recursion as a solution
to fibonacci, read this page:

http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Recn/Binary/

I found it highly instructive.

BugBear
