
Defence of recursive programming


Jonathan Bartlett

Jun 20, 2005, 5:43:56 PM
You all are probably familiar with these ideas, but if you wanted to
give an article to your imperative-loving friends about why recursion is
better than imperative looping, you might show them this article I wrote
for DeveloperWorks:

http://www-128.ibm.com/developerworks/linux/library/l-recurs.html

Jon
----
Learn to program using Linux assembly language
http://www.cafeshops.com/bartlettpublish.8640017

Dinko Tenev

Jun 21, 2005, 4:10:37 AM
Jonathan Bartlett wrote:
> You all are probably familiar with these ideas, but if you wanted to
> give an article to your imperative-loving friends about why recursion is
> better than imperative looping, you might show them this article I wrote
> for DeveloperWorks:
>
> http://www-128.ibm.com/developerworks/linux/library/l-recurs.html

I am slightly puzzled by the code proposed for the tail-call
optimization example. It unwinds the stack first, then calls itself
(i.e. pushes again args and all,) which is quite a lot of work to be
done by -erm- optimized code :)

My expectation was that the optimized tail-call would just replace the
params in the current frame, as in the following example:

int factorial( int n, int acc = 1 ) {
factorial_start:
    if( n <= 1 ) {
        return acc;
    } else {
        // return factorial( n - 1, acc * n );

        // becomes this after optimization
        // (note: acc must be updated before n):
        acc = acc * n;
        n = n - 1;
        goto factorial_start;
    }
}

I guess that, in certain cases, you actually have to call some
destructors or finalizers before you replace entities in the frame, but
where this optimization really shines (e.g. simulating simple loops,)
full stack unwinding is overkill, and the example is a bit
discouraging to "imperative" coders :)

Cheers,

D. Tenev

Jonathan Bartlett

Jun 21, 2005, 9:51:41 AM
> I am slightly puzzled by the code proposed for the tail-call
> optimization example. It unwinds the stack first, then calls itself
> (i.e. pushes again args and all,) which is quite a lot of work to be
> done by -erm- optimized code :)

This is done to allow tail-calls to standard C functions.

> My expectation was that the optimized tail-call would just replace the
> params in the current frame, as in the following example:

I could have optimized it better, but I was more interested in
explaining the point of what is occurring than giving the absolute
fastest approach. As I mentioned in the article, when doing
tail-recursive functions (as opposed to the generic tail-call), then
copy+goto is the best way. But you have to do a little more work when
doing a generic tail-call to a function not expecting it, especially if
it has a different number of arguments than the current function.
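
To illustrate the distinction with something compact (a Haskell sketch rather
than the article's C and assembly; the names here are made up): a self
tail-call can be handled with the copy+goto trick, while a tail call to a
*different* function has to hand control elsewhere without growing the stack.

    -- Self tail-call: amenable to the copy-the-arguments-and-goto treatment.
    factAcc :: Integer -> Integer -> Integer
    factAcc acc n
      | n <= 1    = acc
      | otherwise = factAcc (acc * n) (n - 1)

    -- Mutual tail-calls: neither call is a self-call, so the compiler needs
    -- general tail-call elimination, not just tail-recursion elimination.
    isEven, isOdd :: Int -> Bool
    isEven 0 = True
    isEven n = isOdd (n - 1)
    isOdd 0 = False
    isOdd n = isEven (n - 1)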

Now, Randall Hyde read the article and posted on alt.lang.asm a very
interesting assembly language macro (for his HLA assembler) which
defines two entry points for any defined function -- a normal one and a
tail-call one, so that you can get better optimizations when
tail-calling functions written that way. His macro is very specialized,
but his HLA is very expandable and you could likely write a much more
generic macro.

> I guess that, in certain cases, you actually have to call some
> destructors or finalizers before you replace entities in the frame, but
> where this optimization really shines (e.g. simulating simple loops,)
> full stack unwinding is overkill, and the example is a bit
> discouraging to "imperative" coders :)

Perhaps I did not state clearly enough how the optimization works for
tail-recursive calls as opposed to generic tail-calls.

Thanks for the suggestion!

Albert Lai

Jun 21, 2005, 3:02:07 PM
Jonathan Bartlett <joh...@eskimo.com> writes:

> http://www-128.ibm.com/developerworks/linux/library/l-recurs.html

This is a great service, especially from someone known to be hardcore
because of his work in assembly. I thank you for this.

I do not believe that recursion is unconditionally unintuitive
(equivalently iteration is unconditionally intuitive). I believe that
iteration is intuitive when following directions without thinking.
I believe that recursion is intuitive when thinking how to solve a
problem, as it is just divide and conquer: we all try to break up the
problem into hopefully smaller ones, or transform the problem into a
more familiar or smaller one... and quite often it's just natural for
the smaller one to be like the original one except with smaller
parameters.

As for giving directions, it depends on whether the recipient wants to
think or not. Someone who says "just tell me what to do" will
appreciate the tail-call-optimized iterative code; someone who says
"but why does it work?" will appreciate the original,
divide-and-conquer thought process. I know because I'm among the
latter.

I believe that programming students lean towards iteration because
their first lessons are on how to execute code, not how to solve
problems.

Jerzy Karczmarczuk

Jun 21, 2005, 6:04:01 PM
Albert Lai:

> I do not believe that recursion is unconditionally unintuitive
> (equivalently iteration is unconditionally intuitive). I believe that
> iteration is intuitive when following directions without thinking.
> I believe that recursion is intuitive when thinking how to solve a
> problem, as it is just divide and conquer: we all try to break up the
> problem into hopefully smaller ones, or transform the problem into a
> more familiar or smaller one... and quite often it's just natural for
> the smaller one to be like the original one except with smaller
> parameters.

Actually, this - for me - is a biased, anthropintellectomorphic view.
Take any mind-less beast such as snail. It makes its shell which has
the form of a logarithmic spiral.
Do you think that it does it by iteration or recursion?

I tried to discuss such fuzzy problems with some biologists. We didn't
get to any unambiguous conclusion, obviously (apart from the conclusion
that the second bottle was better...)

... but, when I tried to specify what IS an iteration: looping
construct,
backward branching, and thus a *globally designed* control structure,
they were rather inclined to refuse to believe that the snail has
something of the kind in its highly developed brain...

On the other hand the *self-reference* -- why not?... Mathematics is
often self referring. A periodic movement, thus a "loop" is everywhere
in physics. You have some differential equations, where changes of
someThing are related to the same Thing.

This, actually, is rather a case for co-recursion, the extrapolating
recursion, not a reducing one. The reducing one is somehow related to
a finite fixed point of some transformation. To the very notion of
equilibrium. So, this is also something fundamental.

Anyway, I believe in an almost sectarian way, that Nature is inherently
recursive, and iterations have been invented by the devil. It doesn't
change anything in our programming or pedagogic habits, but it makes
me feel different.

Use the Force.

Jerzy Karczmarczuk

--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG

Bruce Hoult

Jun 21, 2005, 7:37:40 PM
In article <4u4qbrl...@shell.vex.net>,
Albert Lai <tre...@shell.vex.net> wrote:

> I do not believe that recursion is unconditionally unintuitive
> (equivalently iteration is unconditionally intuitive). I believe that
> iteration is intuitive when following directions without thinking.

If iteration was intuitive then there wouldn't have been so many
articles in academic journals in the 70's and 80's on "how to design
loops" and "loop invariants" and "weakest preconditions" -- let alone
the huge debate over structured vs unstructured loops!

--
Bruce | 41.1670S | \ spoken | -+-
Hoult | 174.8263E | /\ here. | ----------O----------

Greg Buchholz

Jun 21, 2005, 7:36:27 PM
Jerzy Karczmarczuk wrote:
> Actually, this - for me - is a biased, anthropintellectomorphic view.
> Take any mind-less beast such as snail. It makes its shell which has
> the form of a logarithmic spiral.
> Do you think that it does it by iteration or recursion?

Well, Stephen Wolfram makes the case that mollusk shell growth and
pigmentation can be modeled with cellular automata, but I don't know if
that would put him in the recursion or iteration camp...

http://www.wolframscience.com/nksonline/page-414
http://www.wolframscience.com/nksonline/page-423


Greg Buchholz

Dinko Tenev

Jun 22, 2005, 3:06:15 AM
Bruce Hoult wrote:
> In article <4u4qbrl...@shell.vex.net>,

> If iteration was intuitive then there wouldn't have been so many
> articles in academic journals in the 70's and 80's on "how to design
> loops" and "loop invariants" and "weakest preconditions" -- let alone
> the huge debate over structured vs unstructured loops!

Can you say that recursion has received less attention?

Cheers,

D. Tenev

Joachim Durchholz

Jun 22, 2005, 3:22:18 AM
Greg Buchholz wrote:

> Well, Stephen Wolfram makes the case that mollusk shell growth and
> pigmentation can be modeled with cellular automata,

Well, mollusks are composed of cells, and local communication
mechanisms are the easiest ones, so that's very near to cellular automata.

> but I don't know if that would put him in the recursion or iteration
> camp...

Cellular automata are inherently iterative. You can use recursion to
describe the evolving pattern, but the construction is iterative.

Of course, the FPL implementation of a set of cellular automata would be
recursive anyway :-)

Regards,
Jo

Chris F Clark

Jun 22, 2005, 1:46:24 PM
I would argue just the opposite. I believe the universe is inherently
quantized (or is it quanticized?) and that iteration is the natural
representation, of "you do it first to this quanta and then to the
"next" quanta, and so on". Recursion works only with infinitely
subdividable quantities, which I don't think the universe has--I don't
think there are any actual lines, points, or real numbers in the
universe (just finite approximations thereto). On the other hand, the
numbers of elementary quanta are so enormous, that for practical
purposes, the universe "seems" infinitely subdividable and recursive
equations work over the classical scale and the artifacts of the
quantization are mostly unobservable.

I am less certain of whether the universe is finite or not (i.e.
whether there is a finite number of quanta). However, I am convinced that the
observable universe is finite (but perhaps unbounded over the
progression of time). In any case, my universe looks a lot like a
cellular automata or a Turing machine, or something else with discrete
parts that I would want to "iterate" over and not that I would want to
recursively divide.

Now, perhaps this is simply an artifact of my own mind. Perhaps the
nature of experiences my mind is capable of having, simply enforce a
quantization. Clearly if the universe is quantized, so is my mind
(and I would argue that it is literally a finite state machine). The
inverse is not necessarily true. The contrapositive would, of course,
disprove the theorem. If I could have a truly infinite and not
quantized thought, then the universe would not be quantized. However,
I don't think anyone is capable of having such thoughts.

As a result, I think when people think about recursion, they are only
imagining large finite sets, which are simply too big for them to
enumerate, and treating those sets as infinitely subdivisible, when in
reality they aren't. The set could be enumerated, but not by the
person imagining it.

That is not to say that I think recursive algorithms are inherently
flawed and iterative algorithms are better. I think with formal
methods we can handle recursive algorithms simply and get them correct
and at times do that for a recursive algorithm that we couldn't
formulate correctly as an iterative algorithm. This is a direct
consequence of our minds being finite, and only being able to
manipulate a (small) finite number of things at a time. Thus, we can
have systems that we can manipulate that take something and recursively
divide it, which we could not manipulate from the quantized atoms,
simply because there would be too many atoms. Just look at the
large prime numbers, can you really grasp one of them? Personally, I
find dealing with the primes at the far end of the 1..100 range quite
challenging enough to visualize. I can't even be certain that I can
make a mental image of something with exactly 17 elements. I could
draw it, but to picture it in my brain and manipulate it--no. I'm not
likely to believe you if you claim to be able to do so, either.

The inverse is also true. There are things we can only deal with from
the elementary quanta level, and which we cannot deal with at the
combination level, because we cannot build large enough sets (we simply
substitute our large finite sets for the truly larger sets in our
mental models and the sets aren't really the same). To my mind
diagonalization proofs are a good example of something that we can
only deal with iteratively (at least at the conceptual level). We
need to have that nice matching of one element at a time. If you try
to picture the diagonalization process as operating on whole sets at a
time, I think you will find yourself making all sorts of conceptual
errors. You need fixed finite things to visualize. Not fuzzy
infinitely subdividable things.

I think it is no surprise that we don't argue about infinite state
machines, in which between any two states there is another state such
that . . . . That world is too fuzzy for us to imagine and deal with
reliably.

Even when we recursively divide a problem we do it in fixed steps.
The point is we are imposing a nice finite world onto the infinite
subdivision. We do the first division, then we do the second, and so
forth. That sounds a lot like iteration to me.

-Chris

Joachim Durchholz

Jun 22, 2005, 2:03:47 PM
Jerzy Karczmarczuk wrote:

> Joachim Durchholz
>
>> Cellular automata are inherently iterative. You can use recursion to
>> describe the evolving pattern, but the construction is iterative.
>

> There is NOTHING inherently iterative in cellular automata, if by this
> word you understand an *iterative control structure* (loops, etc.)

No. It's just that cellular automata are (usually) described as a cyclic
process. Which is - at least in my book - iterative, and not recursive;
YMMV.
I think "iterative" and "recursive" nature aren't precise terms; it's
how people perceive something. Perception can vary wildly; one person's
iteration may be another person's recursion and vice versa.

> What is iterative in a planet which circles around the Sun? (or any
> natural system which obeys a differential equation).

It's neither iterative nor recursive, it's a differential equation over
continuous variables. Iteration doesn't work (anything iteration could
produce would be just an approximation, not the actual behavior), and
recursion doesn't work because sets of reals don't always have a least
element (or highest element, depending on how you do your fixed-point
theory).

I'm talking from a classical standpoint here. For quantum theory, I'm
not even sure that space and time quantisation are explored well enough
to say anything about the applicability of fixed-point theory.

Regards,
Jo

Patrick

Jun 22, 2005, 2:06:01 PM
Chris F Clark wrote:
> Recursion works only with infinitely subdividable quantities,

Any iterative procedure can be implemented as a recursive procedure,
but not the other way around.

If recursion works only with infinitely subdividable quantities,
as you say, then it implies that iteration also only works with
infinitely subdividable quantities.

Jerzy Karczmarczuk

Jun 22, 2005, 5:53:25 AM
Joachim Durchholz

> Cellular automata are inherently iterative. You can use recursion to
> describe the evolving pattern, but the construction is iterative.
>
> Of course, the FPL implementation of a set of cellular automata would be
> recursive anyway :-)

There is NOTHING inherently iterative in cellular automata, if by this
word you understand an *iterative control structure* (loops, etc.)

What is iterative in a planet which circles around the Sun? (or any
natural system which obeys a differential equation).

Now, the 'Sicilian defense' would be the question: "what's recursive
in them, hm?"

Well, forgetting the existence of computers... The functions which
describe the states of the system, the observables, etc. [In classical
world it is the same, in quantum, no] obey some recurrential equations.

So, their mathematical basis is closer to recursion than to
iterwhatever.

But I won't organize a church of the True Recursivists around it...

Joachim Durchholz

Jun 22, 2005, 4:06:41 PM
Patrick wrote:
> I had a fuzzy recollection that a non-primitive recursive function,
> like Ackermann, couldn't be implemented as an iterative algorithm.
> This recollection may simply be wrong...
>
> Even if "but not the other way around" was wrong, it doesn't change
> the fact that, if "recursion works only with infinitely subdivisible
> quantities" then so does iteration. So I think my main point is intact.
> Which was: iterative procedures aren't a completely different kind of
> thing than recursive procedures, they are in one sense (at least) a
> subset of them.

You're right if you add the restriction that the iterative version must
not use unbounded space (i.e. it must be O(1) in terms of space).

In other words: to implement Ackermann iteratively, you need a stack -
and in that case, the transformation from recursive to iterative is (a)
automatic (and hence not very interesting) and (b) has the same
asymptotic space complexity as the recursive variant.

For simpler recursion schemes, going from recursive to iterative
typically removes a factor of O(N) from asymptotic space complexity, which is
why that transformation is so heavily used in FPL implementations.
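
As a concrete sketch of that O(N) space factor (Haskell, not taken from the
thread): the naive sum builds a chain of pending additions, while the
accumulator version is a tail call that an implementation can run as a loop.

    sumRec :: [Integer] -> Integer
    sumRec []     = 0
    sumRec (x:xs) = x + sumRec xs          -- O(n) pending additions

    sumAcc :: [Integer] -> Integer
    sumAcc = go 0
      where
        go acc []     = acc                -- tail call: runs in constant stack
        go acc (x:xs) = let acc' = acc + x
                        in acc' `seq` go acc' xs   -- force acc to avoid thunk build-up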

Regards,
Jo

Matthias Blume

Jun 22, 2005, 3:01:09 PM
Patrick <mathgee...@hotmail.com> writes:

> Chris F Clark wrote:
>> Recursion works only with infinitely subdividable quantities,
>
> Any iterative procedure can be implemented as a recursive procedure,
> but not the other way around.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^

That, of course, is not true. Witness: Turing Machines are not
recursive.

(That is not to say that I agree with Chris F. Clark's comments, or
that I even care whether or not the universe would be inherently
imperative. Also, just because there aren't any infinitely
subdividable quantities does not rule out recursion. Only infinite
recursion would require infinitely subdividable quantities.)

Patrick

Jun 22, 2005, 3:48:14 PM

I think I was a little knee-jerk in my response. After posting
I realized I might have made a mistake :(

Chris F Clark

Jun 22, 2005, 5:03:01 PM
Chris F Clark wrote (some nonsense including):

> Recursion works only with infinitely subdividable quantities,

Patrick <mathgee...@hotmail.com> writes:

> Any iterative procedure can be implemented as a recursive procedure,
> but not the other way around.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Matthias Blume wrote:
> That, of course, is not true. Witness: Turing Machines are not
> recursive.

Yes, excuse me for some of the nonsense I wrote (after I posted it I
regretted some of these exact points, but it was too late). I was
thinking about the kind of recursion that makes differential equations
work.

It is well-known (even by me) that at some level recursion equals
iteration plus a stack (or more potent data storage media). That is,
you cannot express a recursive routine with iteration in an FSA, but
you can in a PDA. If I recall correctly, other general forms of recursion
are equivalent in power to a TM. (I think that was Patrick's point
and it is a valid one.)

Moreover, one doesn't only apply recursion to infinite sets. (Again,
Matthias: Only infinite recursion would require infinitely subdividable
quantities.) We have lots of recursive algorithms that we apply to
finite sets and they work just fine.

Moreover, one cannot argue that the implicit stack in a recursive
function does not conveniently simplify some calculations. I tried to
point that out when I mentioned that there are some areas we can only reason
about from their recursive formulations and not by building them up
from iterations on the primitive parts.

------------------------------------------------------------------------

My point, which I truly botched, is that mentally we think in discrete
steps and generally in discrete ordered steps. That stepwise process
has a nice mapping to iteration. You do one thing, then you do the
next, and so on. Even things which we treat primarily recursively,
such as the Fibonacci function, we generally learn by 1 of 2 methods,
both of which have discrete ordered steps.

The 1st method is we start with 1 1 as the 1st 2 Fibonacci numbers and
then say from then on each number is the sum of the two previous.
Clearly, that is a nice iterative process.

In the 2nd method, we start with a recursive version, but we plug in
some actual values and "step through" the recursive calculations until
an answer is determined. I mean[t] to argue that that is our mental
transformation of a recursive process into an iterative one.

What we don't do, until we have some sophistication, is reason
directly and exclusively from the recursive formulation without any
recourse to discrete examples. Most of us are not capable of that
level of abstraction.

Eventually, we do have the sophistication to understand recursion and
can deal with it without "dealing with it iteratively". However, I
[would] argue that that is not something one does naively/natively.
Moreover, I [would further] argue that that is a level of
sophistication not all of us obtain. I think many programmers never
get to the point of understanding recursion directly (and some not
even at all). Can one argue that there is someone who doesn't
understand simple iteration (and who does understand recursion)?

------------------------------------------------------------------------

Now many programmers fall into certain traps of not fully generally
dealing with their problem when they think iteratively. We too often
have some example in mind which doesn't cover all the subtle aspects
of the problem. However, I'm not convinced that being forced to
formulate the algorithm recursively would necessarily cause one not to
make such errors. I do think that being forced to formalize ones
model and write pre- and post-conditions, invariants, and assertions
(a la Dijkstra, Hoare, Gries, et al.) helps somewhat.

As such, I think that the advantage of FP languages is not that they
force one to formulate recursive solutions to problems, but that they
have immutable objects (forcing one to distinguish input from output),
which make reasoning about them simpler. In fact, I would say
that the advantage of FP languages is that they make it easy to
construct output values which are complicated structures. Imperative
languages tend to be quite lacking in their ability to do that. I
think it could be argued that the subtle semantics of most imperative
programming languages is due to the fact that within them we write
imperative loops generally to modify something in place. I don't
think if one used recursion to modify things in place, the average
recursive solution would be any simpler, clearer, or more correct.

I doubt that this has made my point any clearer. I am still trying to
iteratively converge on what I mean :-).

-Chris

Albert Lai

Jun 22, 2005, 11:35:53 PM
Bruce Hoult <br...@hoult.org> writes:

> If iteration was intuitive then there wouldn't have been so many
> articles in academic journals in the 70's and 80's on "how to design
> loops" and "loop invariants" and "weakest preconditions" -- let alone
> the huge debate over structured vs unstructured loops!

YES! See also an old post of mine:
http://groups-beta.google.com/group/comp.lang.functional/msg/51df24fbf33b7059?hl=en

Albert Lai

Jun 23, 2005, 12:47:37 AM
Chris F Clark <c...@shell01.TheWorld.com> writes:

> Can one argue that there is someone who doesn't
> understand simple iteration (and who does understand recursion)?

When I learned inserting and deleting nodes in linked lists, I learned
both an iterative algorithm and a recursive algorithm. I immediately
saw the correctness of the recursive algorithm; I struggled, like all
other students, through animated sample executions before I could
handwave the correctness of the iterative algorithm. (The other
students, indoctrinated to fear recursion, were deprived of the chance
of understanding either.) Moreover, the invention of the recursive
algorithm in the first place was obvious and straightforward; the
invention of the iterative algorithm seemed a mystery, a painful night
of trial and error, or a rabbit pulled out of a hat.
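
For the record, the recursive algorithm Albert describes might look roughly
like this (a Haskell sketch over plain lists, not the Pascal pointer code
from the course):

    -- Insert x into an already-sorted list. The definition reads like its
    -- own correctness argument: either x belongs at the front, or it belongs
    -- somewhere in the (smaller) tail, which the recursive call handles by
    -- the induction hypothesis.
    insertSorted :: Ord a => a -> [a] -> [a]
    insertSorted x [] = [x]
    insertSorted x (y:ys)
      | x <= y    = x : y : ys
      | otherwise = y : insertSorted x ys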

Disclosure: I was a first-year student like everybody else. We used
Pascal. Before that, I had written a few short programs in BASIC
during highschool. That was all the programming exposure I had had.
My highschool math grade was D, though admittedly it was D in Hong
Kong (under British rule at the time) rather than D in the USA. So I
should have been a rather naive, imperatively trained, even
math-deprived slate. My first exposure to functional programming
would be in my second year with Lisp.

I am nothing in terms of statistics, but you just asked for existence.

But some years later I met a friend who shared the sentiment. When
she learned recursion, she listened to two different instructors, one
telling you to unfold it and the other telling you to just trust
induction. She liked the induction story and not the unfolding story.
We both didn't understand why other students liked the unfolding
story.

We were both religious. Perhaps we had less trouble having faith in a
higher power that promises to do the right thing, letting go, and not
asking to see everything unfolded. :)

Bruce Hoult

Jun 23, 2005, 1:13:27 AM
In article <4upsud3...@shell.vex.net>,
Albert Lai <tre...@shell.vex.net> wrote:

"Writing a working loop is swim-or-sink in education and dark magic in
practice."

Right.

It's a long time ago (81 or 82) but I remember having a *lot* of trouble
being really confident that complex loops I was writing in Pascal or
FORTRAN or C or COBOL were really correct in all cases. In fact,
despite acing all my assignments and exams I never really understood it
properly and didn't become confident until I studied Dijkstra's
programming calculus.

And I don't think most other people understand loops either. Ok, so
they might be *confident*, but they are wrong, and create loops with
bugs in them.

Recursion seems harder at first, but it's far easier to master.

Ketil Malde

Jun 23, 2005, 3:00:40 AM
Chris F Clark <c...@shell01.TheWorld.com> writes:

> I would argue just the opposite. I believe the universe is inherently

> quantized [...]

But from a quantum mechanics POV, things like the position of a
particle are just a probability distributions, until actually
observed, at which point the actual value is determined.

Doesn't that mean the universe is non-strict?

:-)

-k
--
If I haven't seen further, it is by standing in the footprints of giants

Dinko Tenev

Jun 23, 2005, 4:12:37 AM
Bruce Hoult wrote:
> In article <4upsud3...@shell.vex.net>,

> "Writing a working loop is swim-or-sink in education and dark magic in
> practice."
>
> Right.

Oh please...

If you have to convert each element of an array and store it someplace
else, it just takes a loop. If you have to output something for each
element of an array, it just takes a loop. How much simpler does it
get than "for each element, do this and do that"?

I don't know what "complex loops" you're talking about, even though I
do recognize some of the frustration. Writing loops is a somewhat more
tedious task than writing a recursive definition, but industry
practices have improved, and "getting it wrong" today is much harder
than it was in the days of FORTRAN.

Cheers,

D. Tenev

Jerzy Karczmarczuk

Jun 23, 2005, 5:01:22 AM
Dinko Tenev wrote:

> ...industry
> practices have improved, and "getting it wrong" today is much harder
> than it was in the days of FORTRAN.

I don't think that I accept this phrasing.

First, the days of Fortran are *not* over. Fortran evolves with time,
adapts, and its security is sometimes much higher than for "C", unless
you are negligent. Using "Fortran" as a swearword is as silly, or worse,
as the word "Lisp" for number crunchers, for whom all languages which
use lists and symbols are inherently maniaco-intello-academic.

Second, doing things wrong is *much easier* nowadays than it was some
50 years ago. Certainly, an airplane is modelled in such a way that the
test pilots who first shouted 'décollage' in the cockpit of A380 were
much less afraid than those who tested the first big Boeings, but -

this is the result of *massive* testing, and repeating, and repeating.

I believe - somehow religiously, since I can't convince myself that it
is a universal practical methodology - that "getting it wrong" will
*really* drop significantly when the industry learns (and the research
develops it well) more of FORMAL testing, proofs of correctness, etc.
This won't replace testing, but it will rationalize it.


Jerzy Karczmarczuk

Ketil Malde

Jun 23, 2005, 5:02:48 AM
"Dinko Tenev" <dinko...@gmail.com> writes:

> If you have to convert each element of an array and store it someplace
> else, it just takes a loop. If you have to output something for each
> element of an array, it just takes a loop. How much simpler does it
> get than "for each element, do this and do that"?

A little bit: "map (this°that) array".

Dinko Tenev

Jun 23, 2005, 5:35:01 AM
Jerzy Karczmarczuk wrote:
> Dinko Tenev wrote:
>
> > ...industry
> > practices have improved, and "getting it wrong" today is much harder
> > than it was in the days of FORTRAN.
>
> I don't think that I accept this phrasing.

That's OK, I don't quite insist on this particular phrasing. In case
the point might have been lost on you: "was" is the keyword.

> Using "Fortran" as a swearword is as silly, or worse,
> as the word "Lisp" for number crunchers, for whom all languages which
> use lists and symbols are inherently maniaco-intello-academic.

For one thing, I don't mind LISP. For another, the shortest way of
describing an outdated, unstructured, and generally
programmer-unfriendly language is "FORTRAN," and it is going to be that
way for a long time, whether you like it or not :) What I concede
though, is that today's Fortran has little to do with early Fortran,
that is - it's not really that much of a Fortran :)))

> Second, doing things wrong is *much easier* nowadays than it was some
> 50 years ago. Certainly, an airplane is modelled in such a way that the
> test pilots who first shouted 'décollage' in the cockpit of A380 were
> much less afraid than those who tested the first big Boeings, but -
>
> this is the result of *massive* testing, and repeating, and repeating.

Yeah, alright - can we switch back to programming now? :)

> I believe - somehow religiously, since I can't convince myself that it
> is a universal practical methodology - that "getting it wrong" will
> *really* drop significantly when the industry learns (and the research
> develops it well) more of FORMAL testing, proofs of correctness, etc.
> This won't replace testing, but it will rationalize it.

"Getting it wrong" can only drop as common understanding and
comprehension of everyday practices improves. Proofs of correctness
are a Good Thing, but way too expensive for certain (most?)
applications, and I just can't be sure about "FORMAL testing" or "etc."

Cheers,

D. Tenev

Jerzy Karczmarczuk

Jun 23, 2005, 5:39:43 AM
Ketil Malde wrote:
> Chris F Clark <c...@shell01.TheWorld.com> writes:
>
>
>>I would argue just the opposite. I believe the universe is inherently
>>quantized [...]
>
>
> But from a quantum mechanics POV, things like the position of a
> particle are just a probability distributions, until actually
> observed, at which point the actual value is determined.
>
> Doesn't that mean the universe is non-strict?
>
> :-)


People, you are horrible... Both of you.

Chris Clark wrote several things about quantization, finiteness, etc.,
which are absolutely unrelated to recursiveness or not. Now Ketil says
about the position of a particle something which many physicists won't
accept (I am one of those). Since we *should not* discuss physics here,
I would like just to point out that in quantum theories you have plenty
of *recurrential equations*, e.g. the Schwinger Dyson equation for
the amplitude of propagation of a particle which undergoes interaction:

G = INTEGRAL{ ..... G [OtherStuff] ... G ...}

and *all* theoretical distributions, wave functions, etc. are (or may
be) continuous independently of the quantization of this or that
spectrum.

Now, for most of you, true computer physicists (while I have never
passed a single CS exam in my life) the concepts such as iteration or
recursion are *protocols*, something which is implemented, which uses
stacks, non-local counters or whatever.

For me the recursivity -- although the previous facet exists in my
mind, after all I have taught that stuff for many years -- is something
more fundamental; it MEANS that the model of an entity is self-
referring.

In classical physics, a differential equation solved:

y(t) = y0 + Int{F(s,y(s))*ds}

is a recurrence. This fact has *nothing* to do with infinite
divisibility, or the existence of Master Yoda in a far away galaxy.

This auto-reference, "strange loops" --

You MUST, repeat MUST see this: http://escherdroste.math.leidenuniv.nl/

--

is ubiquitous in math. Iteration not so. Well, as Joachim pointed out
what is the real definition of 'iteration'?
For me, unless somebody convinces me otherwise, it is the existence of
a *global* control mechanism, the "loop driver". I don't see those
loop drivers in Nature. Cyclic or not cyclic, development processes
seem for me *co-recursive*.

The real programs written by us from this perspective are all iterative,
the recursion just needs some stacks. You see, for me the
conceptualization is much more important than the implementation...

+

Back to Ketil's conjecture:
You don't need quanta in order to "see the non-strictness". For me the
most natural solution of a differential equation in classic theory is
through a lazy stream.
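
A minimal sketch of that idea (Haskell, forward Euler; function names and
step size are illustrative only, not anything from the thread):

    -- Successive approximate states of y' = f t y as an infinite lazy
    -- stream; consumers just take as much of it as they need.
    euler :: Double -> (Double -> Double -> Double) -> Double -> Double -> [Double]
    euler h f = go
      where
        go t y = y : go (t + h) (y + h * f t y)

    -- e.g. take 4 (euler 0.1 (\_ y -> -y) 0 1) gives roughly [1.0, 0.9, 0.81, 0.729]
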

More! (But now you are entering the world of the Dark Side of the Force)
In quanta the evolution is *reversible in time*, and there are no
irreversible assignments, no imperative constructs in nature. The
particle doesn't "change its position", but a NEW STATE of the universe
is created, with the particle elsewhere. The Universe is composed of
lazy functional objects.

==

OK. Now I can take off my ritual robe, and throw away the candles.
May Wisdom and Beauty be with you. Forget the Force.


Jerzy Karczmarczuk

Dinko Tenev

Jun 23, 2005, 5:41:27 AM
Ketil Malde wrote:
> "Dinko Tenev" <dinko...@gmail.com> writes:
>
> > If you have to convert each element of an array and store it someplace
> > else, it just takes a loop. If you have to output something for each
> > element of an array, it just takes a loop. How much simpler does it
> > get than "for each element, do this and do that"?
>
> A little bit: "map (this°that) array".

Or "map (this°that) $ elems array" perhaps? I am left wondering how
it can be (apparently) so much easier to get it wrong in Haskell than
in, say, Java :P

Cheers,

D. Tenev

Ketil Malde

Jun 23, 2005, 6:49:18 AM
"Dinko Tenev" <dinko...@gmail.com> writes:

>> A little bit: "map (this°that) array".

> Or "map (this°that) $ elems array" perhaps?

Well, it was intended as functional pseudocode, rather than Haskell.
In Haskell, the exact form would depend on what 'this' and 'that' does
(i.e. are they IO operations? pass state?), and whether you want the
result as an array, list, or whatever. For that matter, function
composition is . (dot) rather than ° (degrees, I couldn't find the
circle).

So "map" could be "fmap", or "map" with "elems" (like in your
example), or perhaps a fold. "°" could be (pure) function
composition, or something else.
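
Spelled out as actual Haskell, two of those possibilities might read as
follows (with `this' and `that' as placeholder pure functions):

    import Data.Array (Array, elems, listArray)

    this, that :: Int -> Int          -- placeholder transformations
    this = (* 2)
    that = (+ 1)

    arr :: Array Int Int
    arr = listArray (0, 4) [1 .. 5]

    asArray :: Array Int Int          -- result stays an array
    asArray = fmap (this . that) arr

    asList :: [Int]                   -- or go through a list of elements
    asList = map (this . that) (elems arr)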

> I am left wondering how it can be (apparently) so much easier to get
> it wrong in Haskell than in, say, Java :P

Two observations: 1) The compiler would catch "map (this°that)
array". And 2) "for each element, do this and do that" is hardly any
more correct as a Java implementation.

So there.

Dinko Tenev

Jun 23, 2005, 8:13:51 AM
Ketil Malde wrote:
> "Dinko Tenev" <dinko...@gmail.com> writes:
>
> >> A little bit: "map (this°that) array".
>
> > Or "map (this°that) $ elems array" perhaps?
>
> Well, it was intended as functional pseudocode, rather than Haskell.

My bad. But it did look more like actual code, and I can't shake off
the feeling that it aimed to impress with that (unless you're willing
to testify to the contrary ;)

> In Haskell, the exact form would depend on what 'this' and 'that' does
> (i.e. are they IO operations? pass state?), and whether you want the
> result as an array, list, or whatever. For that matter, function
> composition is . (dot) rather than ° (degrees, I couldn't find the
> circle).
>
> So "map" could be "fmap", or "map" with "elems" (like in your
> example), or perhaps a fold. "°" could be (pure) function
> composition, or something else.

All well, but how does this promote recursion over iteration? (I just
remembered what the actual point was - sorry for the digression :)
You're not expressing yourself in terms of recursion, but in terms of
high-level primitives, which could (and, in many cases, actually would)
have been implemented using loops, for all you care at the call-site.

> > I am left wondering how it can be (apparently) so much easier to get
> > it wrong in Haskell than in, say, Java :P
>
> Two observations: 1) The compiler would catch "map (this°that)
> array". And 2) "for each element, do this and do that" is hardly any
> more correct as a Java implementation.

Ah, but the following is:

for( e: array ) {
// do this with e
// do that with e
}

And this isn't substantially more elaborate than the verbal
description. Actually, it's a typical one-liner with e.g. shell
scripts (and hard to get wrong, too ;) But I take your point.

Cheers,

D. Tenev

Chris F Clark

Jun 23, 2005, 12:09:23 PM
Albert Lai wrote an interesting counter-point to my musings on the
topic of iteration versus recursion, starting with:

> When I learned inserting and deleting nodes in linked lists, I learned
> both an iterative algorithm and a recursive algorithm. I immediately
> saw the correctness of the recursive algorithm; I struggled, like all
> other students, through animated sample executions before I could
> handwave the correctness of the iterative algorithm.

I never meant to say that for all algorithms the most transparent
version is the iterative version. In fact, there are many algorithms
and areas where I myself can only understand the recursive version.
It is true, that some things we only understand from the outside in.

However, I think we still generally understand most things
one-step-at-a-time, and this is "my definition" of iteration. That's
a little different from iteration as the master control operation.

To me iteration is a way of anthropomorphising a program and its
state. In iteration there is generally the concept of the "current
state" (current index, the loop variable) something that we map onto
"ourselves at the present time". When we envision iteration, we
imagine ourselves with some set of data laid out in front of us and
we take some action, rearranging and changing that data, until we get
to some next "present state in the future" (which is the next
iteration).

The point of iteration is that it makes that focus explicit. The loop
variable is a proxy for ourselves. It takes action and things change.
It is the cause of the cause-and-effect relationship. It reminds me
of a story I heard about the learning process of infants. At some
early age infants do not distinguish self from other, they think that
everything happens because they cause it to happen. It's only later
(and not that late) that they learn that their mother is a separate
entity and they do not make her disappear and more importantly that
their wanting her does not make her reappear, but that she has
separate and independent existence. Whether infants really develop
that way or not, it is a good analogy for iteration. Iteration is the
encapsulation of I do something, it has an effect, and if I do it
enough times, it solves the entire problem.

Now, the point that complicated iteration is problematic to understand
is valid. Nested loops quickly exceed one's grasp of what's going
on. The real problem in imperative programming is not that we can't
write simple loops and get them right (where we could write simple
recursions and get them right, although I will grant as above there
are some programs which are simpler and more understandable
recursively and are actually intractable iteratively). It's that we
are changing one thing, and in the middle of that we invoke some
function that has its own simple loop and changes something else, and
that may invoke yet another thing, and pretty soon we are not certain
what is changing and what is stable. The worst case, is when we think
we know what is changing, but we are subtly wrong.

It is my fervent opinion that rewriting that code using recursion
rather than iteration does not solve that problem. The solution to
the problem is not modelling the data as mutable and something we
modify to get into the correct shape (especially through complex
nested modifications). No, even that's not quite what I mean. I think
there is a good place for mutating some data. I think the solution is
having languages where we can create complicated results without fear.
The problem with imperative programming is that we don't have easy
ways of creating complex outputs. Since, it is hard (and inefficient)
in an imperative language to make a new structure that represents what
we want, we are continually forced to change the old structure into
the new one. Moreover, we are forced to do all the bookkeeping
ourselves, and if we don't get all the details right, the program
fails on some cases.

And that brings us back to iteration. What makes iteration so simple
to begin with, the fact that one has made the concept of self (the
loop variable) going through and changing things explicit, is now a
downfall. That explicitness exposes to oneself the bookkeeping
details. When one needs the control that can be a good thing.

However, it is usually a terrible burden. As humans, we are not
capable of coordinating all the details of an entire program. The
only solution is to abstract some of those details away. It's my
argument that it is the FP style of creating new complex outputs
rather than mutating the input which facilitates that abstraction, not
the use of iteration versus recursion.

The key point of FP functions is not that they are recursive, but that
they are pure, and one can assume that one's input arguments are not
affected by the output one is constructing. That is what is
liberating. That is what makes them easily composable. That is what
helps solve the bugs in complex applications.

Finally, in my opinion, what made the work of Dijkstra
et al. significant was not how they dealt with loops, but how they dealt
with mutable data. The importance of an invariant is not that it
captures a loop, but how it makes explicit what one expects to have
changed in the past (the precondition) versus what will be changed in
the future (the postcondition) and what one does in the present to
effect that change. Hmm, or maybe that does have to do with iteration
after all, but the kind of iteration that even recursion uses.

-Chris

Chris F Clark

Jun 23, 2005, 12:21:40 PM
Jerzy Karczmarczuk wrote:

> For me the recursivity -- although the previous facette exists in my
> mind, after all I teach that stuff for many years -- is something
> more fundamental; it MEANS that the model of an entity is self-
> referring.

Self-reference is important. No doubt.

And in response to your question, the "loop driver" of iteration is
the observer. Iteration makes the position of the observer explicit
and extracts it from the self-referential recursion.

However, I am not a quantum physicist, so I can not say if that
analogy is accurate or makes sense. It just does to me from my naive
position.

-Chris

Joachim Durchholz

Jun 23, 2005, 3:00:34 PM
Dinko Tenev wrote:
> Bruce Hoult wrote:
>
>>In article <4upsud3...@shell.vex.net>,
>>"Writing a working loop is swim-or-sink in education and dark magic in
>> practice."
>>
>>Right.
>
>
> Oh please...
>
> If you have to convert each element of an array and store it someplace
> else, it just takes a loop. If you have to output something for each
> element of an array, it just takes a loop. How much simpler does it
> get than "for each element, do this and do that"?

Use a higher-order function.

Or a language construct if it offers one (such as PHP's "foreach" for
its associative arrays).

Recursion vs. iteration vanishes under the hood of the HOF (or language
construct).

Regards,
Jo

Neelakantan Krishnaswami

Jun 23, 2005, 3:00:10 PM
In article <bruce-75C037....@news.clear.net.nz>, Bruce Hoult wrote:
>
> It's a long time ago (81 or 82) but I remember having a *lot* of
> trouble being really confident that complex loops I was writing in
> Pascal or FORTRAN or C or COBOL were really correct in all cases.
> In fact, despite acing all my assignments and exams I never really
> understood it properly and didn't become confident until I studied
> Dijkstra's programming calculus.

Yeah, I had the same experience as you, only time-shifted.

> Recursion seems harder at first, but it's far easier to master.

But I don't really think recursive functions are much easier to
write. To prove a function correct (whether recursive or with a loop),
you need to give a termination condition, and show that it either gets
smaller at each recursive call or trip through the loop, and then give
an inductive argument showing that correctness is maintained. This is
the same amount of work.

Where functional style wins is mostly from the freedom from aliasing.
This makes understanding VASTLY easier. I just can't overstate how
much this helps.

There's a secondary win from higher order functions, because they let
you package up much of the proof into a function (like map or fold)
and do it once and for all. Contrariwise, with loops you have to repeat
the proof each time you write the loop. But it's not as important as
aliasing-free code.
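
A small illustration of that second point (Haskell, not from the post): once
fold's behaviour is established, each use inherits the argument instead of
redoing a loop proof.

    -- foldr is specified and reasoned about once; each instantiation only
    -- needs an argument about (+)/0 or (*)/1, not about any loop mechanics.
    sumList, productList :: [Integer] -> Integer
    sumList     = foldr (+) 0
    productList = foldr (*) 1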

--
Neel Krishnaswami
ne...@cs.cmu.edu

Donn Cave

Jun 23, 2005, 4:04:44 PM
In article <slrndbm1lp...@gs3106.sp.cs.cmu.edu>,
Neelakantan Krishnaswami <ne...@cs.cmu.edu> wrote:
...

> Where functional style wins is mostly from the freedom from aliasing.
> This makes understanding VASTLY easier. I just can't overstate how
> much this helps.

Could I impose on you for a definition for "aliasing"? The way
I understand the term in this context, it's about assumptions
that can be made about the value of a variable, in a context where
it's exposed to modifications outside the present program scope.
Particularly where C's flexibility with pointers is exploited in
a way that runs a high risk of bugs and can confound an optimizer.

Donn Cave, do...@u.washington.edu

Neelakantan Krishnaswami

Jun 23, 2005, 6:31:27 PM
In article <donn-C27D39.1...@gnus01.u.washington.edu>, Donn Cave
wrote:

I mean that, only a little more broadly. I say that aliasing occurs
whenever two variables refer to the same mutable data structure. In a
local scope, you can keep track of the aliasing relationships
manually, but it's very tedious -- Hoare logic proofs of pointer or
array programs usually have vast clouds of no-aliasing assertions.
This problem is one of the main reasons that program proof never
caught on, IMO.

In a purely functional setting, these difficulties just go away. It
doesn't matter how many aliases there are, since you never update.
It's very, very pleasant.

There are very recent techniques that let you effectively reason about
aliasing; separation logic is the one I'm most familiar with -- for an
introduction see:

<ftp://ftp.cs.cmu.edu/user/jcr/seplogic.ps.gz>

For further info:

<http://www.dcs.qmw.ac.uk/~ohearn/localreasoning.html>


--
Neel Krishnaswami
ne...@cs.cmu.edu

Jerzy Karczmarczuk

Jun 23, 2005, 7:30:01 PM
Neelakantan Krishnaswami

> ...I don't really think recursive functions are much easier to
> write. To prove a function correct (whether recursive or with a loop),
> you need to give a termination condition, and show that it either gets
> smaller at each recursive call or trip through the loop, and then give
> an inductive argument showing that correctness is maintained. This is
> the same amount of work.

Unless you pay some attention to the *fundamental role* of co-recursion
(but it seems that this is not so popular, no doubt because of historic
reasons, and conservative pedagogy...)

In co-recursive algorithms you have *NO* termination. You use such
beasts as bisimulation etc. to prove the *finite progress* upon a
co-recursive step.
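
A standard example of what Jerzy means, as a Haskell sketch: the definition
below never terminates; what one argues instead is productivity, i.e. that
every step yields at least one more element.

    -- Co-recursive stream of Fibonacci numbers: no base case that "ends"
    -- the computation, only a guarantee that each step produces more output.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- Any finite observation is well-defined, e.g. take 8 fibs.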

Jerzy Karczmarczuk

PS. Anyway, I thank sincerely all the people who take part in this
discussion. It is really inspiring. Even if Chris has some very dubious
ideas about quanta, I am grateful, because he made me think a bit more.

Peter Seibel

Jun 24, 2005, 12:40:49 AM
Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:

> Where functional style wins is mostly from the freedom from
> aliasing. This makes understanding VASTLY easier. I just can't
> overstate how much this helps.

Okay, I think I understand all the words but I don't understand what
you mean. Can you give an example of what you mean by aliasing here?
"Aliasing", as I have always understood it, is a consequence of
allowing pointers, not a consequence of non-functional styles. But
that just suggests my understanding is incomplete or wrong.

-Peter

--
Peter Seibel * pe...@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp * http://www.gigamonkeys.com/book/

Dinko Tenev

Jun 24, 2005, 4:01:39 AM

Sure. But, whether you did resort to higher-level abstraction or not
in either case, how is recursion so much simpler, and how is iteration
"dark magic"? :)

Cheers,

D. Tenev

Joachim Durchholz

Jun 24, 2005, 5:22:52 AM
Dinko Tenev wrote:

When I'm using a higher-level abstraction, I don't use recursion or
iteration myself. It's done by the HOF, and I don't have to "understand"
anything - the HOF's specification says "applies this-and-that to all
elements of the container, yielding a such-and-so result". I don't have
to think about termination, fencepost errors, and whatnot.

Besides, I never had problems with iterating over containers, whether
iteratively or recursively. (Actually I never did it recursively, but
that's because the languages I've been working with were either
imperative or had higher-order functions that would do the recursion for
me, not because I have any troubles with recursion as a concept.)

The difficult cases are when you iterate over more abstract domains.
It's been quite a long time, but I dimly recall having done some really
awful loops, debugging along and wrestling with the code, actually
having to think about the first, the second, the next-to-last, and the
last iteration, to get entry into and exit from the loop right. I was
raised with a purely imperative paradigm, so I have no way to compare
whether recursion would have been easier or not, but I tend to think
that recursion makes it easier because you have only two cases to check.

Um, when thinking about that, it's sometimes three cases: when you have
to put the recursion into a helper function. Given that you don't really
check all four cases with loops, the difference isn't that large anymore.

I think the difference between learning iterative and recursive is more
one of thinking habits:
With recursion, you ignore the step-by-step nature of computing and work
in terms of equations. You need to learn and practise stating an
induction hypothesis, and that's a nontrivial thing.
With iteration, you start with the step-by-step nature of computing.
When practicing, you discover that you have trouble "bending the control
flow back": variables have to serve a dual purpose, depending on whether
you're first-time into the loop or in a later iteration. (Similar
problems can exist when exiting from the loop. At that point, writing
loops turns really nasty.) If you're lucky, somebody introduces you to
Dijkstra's notion of loop invariants, at which point things become quite
manageable again (but then stating a loop invariant is still nontrivial:
it's equivalent to stating the induction hypothesis).

I think it's probably best to give students a choice. Show them both
recursion and loops (with invariants), and let them explore both.
They'll pick up the concept that's easier on them, then explore the
analogies with the other one.

Regards,
Jo

Joachim Durchholz

Jun 24, 2005, 5:37:25 AM
Peter Seibel wrote:

> Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:
>
>> Where functional style wins is mostly from the freedom from
>> aliasing. This makes understanding VASTLY easier. I just can't
>> overstate how much this helps.
>
> Okay, I think I understand all the words but I don't understand what
> you mean. Can you give an example of what you mean by aliasing here?
> "Aliasing", as I have always understood it, is a consequence of
> allowing pointers, not a consequence of non-functional styles. But
> that just suggests my understanding is incomplete or wrong.

A slight correction: Aliasing can also occur when e.g. two indexes refer
to the same array slot. It's not limited to pointers.

Aliasing is a problem because modifying a value through one alias will
modify the other object, *without the source code showing that this is
done*.
An example:
exchange_in_place (*a, *b)
*a ^= *b;
*b ^= *a;
This exchanges two values by XOR-ing them over each other. (It's a trick
from C folklore. It makes two values swap places without using a
temporary variable, and in just two machine operations.)

Now if a and b are aliases, i.e. if they refer to the same variable,
then this isn't a no-op, it will zero the value, since the first
assignment essentially says
*a ^= *a;
under these conditions.

A more realistic example would be
transfer_funds (amount, source, destination)
source.balance -= amount;
destination.balance += amount;
assert
source.balance == old (source.balance) - amount
destination.balance == old (destination.balance) + amount
which will break the assertion if source and destination accounts happen
to be the same. One could argue that the bug is in the assertion, of
course...


... but the point of all this is that if you have a potential for
aliasing, you have to check whether your code is correct in that case,
too. And the bad thing is that the source code won't warn you.

Actually the problem needs to be considered only if data is modified
through an alias. If data cannot be changed, the code will behave
exactly the same whether the data is aliased or not.
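
A tiny sketch of that last point in Haskell (names invented): two names may
share one structure, but since nothing is updated in place, the sharing
cannot be observed.

    xs, ys, ys' :: [Int]
    xs  = [1, 2, 3]
    ys  = xs             -- "alias": the very same structure
    ys' = 0 : drop 1 ys  -- an "update" just builds a new value;
                         -- xs and ys are still [1, 2, 3]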

Regards,
Jo

Jerzy Karczmarczuk

Jun 24, 2005, 5:43:01 AM
Joachim Durchholz wrote:

> Aliasing is a problem because modifying a value through one alias will
> modify the other object, *without the source code showing that this is
> done*.
> An example:
> exchange_in_place (*a, *b)
> *a ^= *b;
> *b ^= *a;
> This exchanges two values by XOR-ing them over each other. (It's a trick
> from C folklore. It makes two values swap places without using a
> temporary variable, and in just two machine operations.)

Jo, this is false. The restoration of *a is missing.

Jerzy K.

Joachim Durchholz

Jun 24, 2005, 7:59:00 AM
Jerzy Karczmarczuk wrote:

Indeed. It should have been

exchange_in_place (*a, *b)
*a ^= *b;
*b ^= *a;
*a ^= *b;

exchange_in_place (&foo, &foo) will still zero out foo.

Note that aliasing bugs can be rather innocuous. For example, code to
transpose a matrix could do

for i from 1 to x_size
for j from 1 to i
exchange_in_place (matrix [i, j], matrix [j, i])

This will inadvertently zero out the diagonal of the matrix, instead of
transposing it.
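
Here is the same bug as a small compilable C program (array size and
element type picked arbitrarily for illustration), using the corrected
three-step swap from above:

#include <stdio.h>

#define N 3

/* Swaps *a and *b without a temporary -- but zeroes the slot
 * if a and b point to the same place. */
void exchange_in_place(int *a, int *b) {
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}

int main(void) {
    int m[N][N] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };

    /* The transpose loop from above, 0-based: the diagonal entries
     * get exchanged with themselves. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j <= i; j++)
            exchange_in_place(&m[i][j], &m[j][i]);

    /* Prints the transpose -- except that the diagonal is now all zeros. */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", m[i][j]);
        printf("\n");
    }
    return 0;
}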

Regards,
Jo

Peter Seibel

unread,
Jun 24, 2005, 12:05:40 PM6/24/05
to
Joachim Durchholz <j...@durchholz.org> writes:

> Peter Seibel wrote:
>
>> Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:
>>
>>> Where functional style wins is mostly from the freedom from
>>> aliasing. This makes understanding VASTLY easier. I just can't
>>> overstate how much this helps.
>> Okay, I think I understand all the words but I don't understand what
>> you mean. Can you give an example of what you mean by aliasing here?
>> "Aliasing", as I have always understood it, is a consequence of
>> allowing pointers, not a consequence of non-functional styles. But
>> that just suggests my understanding is incomplete or wrong.
>
> A slight correction: Aliasing can also occur when e.g. two indexes
> refer to the same array slot. It's not limited to pointers.

Of course. Duh. Thanks.

Chris F Clark

unread,
Jun 24, 2005, 4:52:41 PM6/24/05
to
Joachim Durchholz <j...@durchholz.org> writes:

> Use a higher-order function.

I think there is no argument that a HOF is more concise than either
iteration or recursion. I also think it is more sophisticated (and
less fundamental) than either iteration or recursion.

> I think the difference between learning iterative and recursive is
> more one of thinking habits:
> With recursion, you ignore the step-by-step nature of computing and
> work in terms of equations. You need to learn and practise stating an
> induction hypothesis, and that's a nontrivial thing.
> With iteration, you start with the step-by-step nature of

> computing. (quote continued below)

With more of my dubious ideas (thanks Jerry!), I still think iteration
(as do this, do that, do more until you have done enough) is more
"fundamental" than recursion. I think step-by-step thinking is more
primitive than equational.

> When practicing, you discover that you have trouble
> "bending the control flow back": variables have to serve a dual
> purpose, depending on whether you're first-time into the loop or in a
> later iteration. (Similar problems can exist when exiting from the
> loop. At that point, writing loops turns really nasty.) If you're
> lucky, somebody introduces you to Dijkstra's notion of loop
> invariants, at which point things become quite manageable again (but
> then stating a loop invariant is still nontrivial: it's equivalent to
> stating the induction hypothesis).

That is in some sense the advantage of recursive (equational)
thinking. You are already thinking (naturally) in terms of the
induction hypothesis, in terms of forward progress, in terms of how to
reduce this problem to a smaller one.

I suspect that recursive thinking may be simpler than forming loop
invariants, but I'm not certain. I do know that when a problem
reaches a certain level of complexity, I quit trying to write a loop
and instead call a routine (which may be a recursive instance of the
routine I'm writing).

At this point, I want to mention a specific paper written by Arch
Robinson (then of Kuck Associates) about recursive tree traversals,
something like "Recursive Tree Traversals Considered Harmful". The
point of his paper is that many programmers of the system he supported
wrote mutually recursive routines that iterated over parse trees to
calculate results. They then wondered why (and complained that) the
performance degraded non-linearly as their inputs grew. The problem
was that the users weren't properly accounting for the number of
traversals they were doing, and the result was n**3 and worse
algorithms. When the recursion was presented to them as loops, the
non-linear performance was immediately apparent, and the users were
able to resolve their problems.
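
This isn't the code from that paper, just a hedged C sketch of the
effect: the second function below reads like a single traversal, yet it
calls a full sub-traversal at every node, so on a list-shaped tree it
does O(n**2) work where one pass would do.

struct node { struct node *left, *right; };

/* One complete walk of the subtree rooted at t. */
unsigned long count_nodes(const struct node *t) {
    return t ? 1 + count_nodes(t->left) + count_nodes(t->right) : 0;
}

/* Looks like one traversal, but hides a traversal per node.
 * Written as an explicit loop over a worklist, the extra walks
 * would be much harder to overlook. */
unsigned long sum_of_subtree_sizes(const struct node *t) {
    if (!t) return 0;
    return count_nodes(t)
         + sum_of_subtree_sizes(t->left)
         + sum_of_subtree_sizes(t->right);
}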

> I think it's probably best to give students a choice. Show them both
> recursion and loops (with invariants), and let them explore
> both. They'll pick up the concept that's easier on them, then explore
> the analogies with the other one.

No question that every (imperative) student should learn both. I
think every student should also be exposed to HOF's as the ultimate
solution, where you don't care about order, just that something gets
done to all the right parts.

BTW, this discussion is relevant to my own work in parser generation.
I am in the process of adding the "visitor pattern" to my tool. I
have noticed that in the visitor pattern, there is often a step when
the pattern recurses. I had previously realized that this recursion
is not related to the visitor itself, but part of the traversal. That
made me realize that the "pattern" has three parts, which I call V, I,
and T. The V is the visitor, the action we take at each node of the
tree. The I is the iterator, which is the order we walk over the tree,
such as pre-order, post-order, in-order, or something special. The T is
the tree itself, or the data structure being walked over. At some
level the three parts are separable--you may want to walk over many
different trees, with many different visitors, but do the walks all in
pre-order fashion. They are also interconnected: what a pre-order
walk means has much to do with the shape of the tree itself.
Similarly, the visitor is like pattern-matching, and what is done at
each node of the tree depends on the way the tree is labelled.
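
A minimal C sketch of that split (all identifiers are mine, purely for
illustration): T is the data type, V is a function pointer, and I is the
routine that decides the order.

/* T: the tree being walked over. */
struct tree { int label; struct tree *left, *right; };

/* V: the visitor -- the action taken at each node. */
typedef void (*visitor)(struct tree *node, void *state);

/* I: one particular iterator -- a pre-order walk applying V over T. */
void walk_preorder(struct tree *t, visitor v, void *state) {
    if (!t) return;
    v(t, state);
    walk_preorder(t->left, v, state);
    walk_preorder(t->right, v, state);
}

A post-order or in-order walk would change only I; printing labels or
summing them would change only V; a different parse-tree representation
would change only T.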

In any case, I intend to present the pattern as applying a function
(the visitor) using a HOF (the iterator) to a type (the tree). I was
also thinking of allowing the user to have a for-loop version of the
same. I think it is clear now that I want a recursive representation also.

F(n) = F(n.next) + F(n.next.next)

Thanks,
-Chris

Neelakantan Krishnaswami

unread,
Jun 25, 2005, 12:48:26 PM6/25/05
to
In article <7618f40ed15c4a660d0...@mygate.mailgate.org>,

Jerzy Karczmarczuk wrote:
> Neelakantan Krishnaswami
>
>> ...I don't really think recursive functions are much easier to
>> write. To prove a function correct (whether recursive or with a loop),
>> you need to give a termination condition, and show that it either gets
>> smaller at each recursive call or trip through the loop, and then give
>> an inductive argument showing that correctness is maintained. This is
>> the same amount of work.
>
> Unless you pay some attention to the *fundamental role* of co-recursion
> (but it seems that this is not so popular, no doubt because of historic
> reasons, and conservative pedagogy...)

The reason in my case is that I'm mostly an ML programmer, so I tend
to think of datatypes as inductively defined least fixpoints. I do
realize that things are different in the Haskell world, and that
Haskell datatypes are more like greatest fixpoints.

> In co-recursive algorithms you have *NO* termination. You use such
> beasts as bisimulation etc. to prove the *finite progress* upon a
> co-recursive step.

Sure, but you still have to prove this. Showing that responses to
webserver requests happen in finite time is a proof obligation of the
same kind as showing that a compiler walking over an AST terminates.

Regardless of whether you must show termination or finite progress,
the obligation arises from the presence of unrestricted recursion in
the language. If you consider the logical, Curry-Howard interpretation
of the type of a fixpoint operator

rec : forall a. (a -> a) -> a

you'll see that it is an axiom that causes inconsistency. That is,
since "A -> A" is true for all "A", the fixpoint axiom means that all
propositions are true. So the programmer's proof obligation to show
that his loop terminates or a process makes progress precisely
corresponds to the logician's obligation to show that the fixpoints in
her proof aren't being used to prove a false statement.
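
Spelled out as a sketch of the standard argument, instantiating the
fixpoint type at an arbitrary proposition A and feeding it the identity:

\[
\frac{\mathrm{rec} : \forall \alpha.\ (\alpha \to \alpha) \to \alpha
      \qquad \mathrm{id} : A \to A}
     {\mathrm{rec}\ \mathrm{id} : A}
\]

So every A acquires a "proof"; operationally, rec id just reduces to
id (rec id) and back to rec id, i.e. a computation that loops forever
instead of delivering a value.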


--
Neel Krishnaswami
ne...@cs.cmu.edu
