_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
Instead of writing "fac x = x * fac (x-1)", we write:
fac x = fac_essence fac x
fac_essence f 0 = 1
fac_essence f x = x * f (x-1)
And instead of writing "fib x = fib (x-1) + fib (x-2)", we write:
fib x = fib_essence fib x
fib_essence f 0 = 0
fib_essence f 1 = 1
fib_essence f x = f (x-1) + f (x-2)
The general pattern: write a helper "essence" function that takes one
extra parameter, a function; don't recurse yet, call that function
parameter instead. Leave the actual recursion to the main function.
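One way to make "leave the recursion to the main function" concrete is to tie the knot with fix from Data.Function, which is equivalent to the hand-written "fac x = fac_essence fac x". A minimal sketch (the type signatures are my own additions):

```haskell
import Data.Function (fix)

fac_essence :: (Integer -> Integer) -> Integer -> Integer
fac_essence _ 0 = 1
fac_essence f x = x * f (x - 1)

-- fix fac_essence = fac_essence (fix fac_essence), i.e. the knot is tied
-- for us: this is the same function as the plain recursive fac.
fac :: Integer -> Integer
fac = fix fac_essence
```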
Why would we do this?
One reason is philosophical, like you said, to factor out the recursion
from the essence of one iteration of the process.
Another reason is that you don't have to merely put back the recursion
in the main function. You can now play some dependency injection trick
before recursing, or maybe you don't even recurse.
For example, how about injecting a dependency on a lookup-list, with the
content of said lookup-list coming from said injection --- there hides
the recursion.
fib_list = [fib_essence (fib_list !!) i | i <- [0..]]
or maybe you prefer
fib_list = map (fib_essence (fib_list !!)) [0..]
You will find that "fib_list !! 100" is less slow than "fib 100".
Sometimes you prefer a function-application interface. So we go on to write:
fib_notslow x = fib_list !! x
In fact sometimes we turn it around, hide away the lookup table, and write:
fib_notslow = (fib_list !!)
  where
    fib_list = [fib_essence fib_notslow i | i <- [0..]]
    -- I could write "fib_essence (fib_list !!)",
    -- but now that (fib_list !!) has a name, "fib_notslow"...
Thus we have obtained a revisionist-history definition of dynamic
programming and memoization: Dependency injection of a lookup table into
a recursion.
A list is still not fast enough, because (!!) takes linear time;
a lazily generated binary search tree is better. The "memoize" library
(http://hackage.haskell.org/package/memoize) does that.
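To see the tree idea without the library, here is a hand-rolled sketch (my own code, not the memoize library's actual internals): an infinite, lazily generated binary tree indexed by the naturals, with the same dependency injection as before.

```haskell
-- An infinite binary tree; index n is reachable in O(log n) steps.
data Tree a = Tree (Tree a) a (Tree a)

instance Functor Tree where
  fmap f (Tree l m v) = Tree (fmap f l) (f m) (fmap f v)

index :: Tree a -> Integer -> a
index (Tree _ m _) 0 = m
index (Tree l _ r) n = case (n - 1) `divMod` 2 of
  (q, 0) -> index l q
  (q, _) -> index r q

-- The tree containing 0, 1, 2, ... arranged so that index nats n == n.
nats :: Tree Integer
nats = go 0 1
  where go n s = Tree (go (n + s) s') n (go (n + s') s')
          where s' = 2 * s

fib_essence :: (Integer -> Integer) -> Integer -> Integer
fib_essence _ 0 = 0
fib_essence _ 1 = 1
fib_essence f x = f (x - 1) + f (x - 2)

-- Inject a lookup into the tree itself; laziness fills it in on demand.
fib_tree :: Tree Integer
fib_tree = fmap (fib_essence (index fib_tree)) nats

fib_fast :: Integer -> Integer
fib_fast = index fib_tree
```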
Your next food for thought: You have seen open recursion for recursive
programs. How about open recursion for recursive data?
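One possible answer to that food for thought (my own sketch, not the author's): factor the recursive occurrence of a data type out into a type parameter, then tie the knot with a fixpoint type. Recursion over the data can then be written once and for all as a catamorphism.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The "essence" of a list: the recursive tail becomes a parameter r.
data ListF a r = NilF | ConsF a r deriving Functor

-- Tying the knot at the type level.
newtype Fix f = Fix { unFix :: f (Fix f) }

type List a = Fix (ListF a)

-- Generic fold: the recursion is written once, for every such functor.
cata :: Functor f => (f b -> b) -> Fix f -> b
cata alg = alg . fmap (cata alg) . unFix

fromList :: [a] -> List a
fromList = foldr (\x r -> Fix (ConsF x r)) (Fix NilF)

sumList :: List Int -> Int
sumList = cata alg
  where alg NilF        = 0
        alg (ConsF x r) = x + r
```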
Hi Lawrence, I too worked on land parcels -- a local government
rates billing and land valuation application.
>> ... how property changed hands, and especially how property lines
>> changed due to properties either merging or being split up.
>> That is to say, how the parcel map changed over time.
I think the important aspect is that a parcel
can be split, and one of the splits merged with another parcel
that was not originally with it.
(I don't know how it was in your application, but in mine
a legal parcel might consist of discontiguous areas of land
-- for example a house in a terrace parcelled with a garage
amongst a block of garages at the end of the street.)
That is, looking over history you cannot group the parcels
into a strict hierarchy of splits.
>> Somewhere in the functional paradigm, specifically
recursion,
>> would seem to be a model for this issue.
No, I'm not seeing anything specifically functional about this.
It's a directed graph, where each node is an instance of
parcel ownership.
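For concreteness, that graph could be modelled as below. This is a hypothetical sketch: the names (Parcel, derivedFrom, History) and the example data are my own invention, not anything from either application.

```haskell
import qualified Data.Map.Strict as Map

type ParcelId = Int

-- Each node is one instance of parcel ownership; edges point back to
-- the parcel instances it was carved out of (splits and mergers alike).
data Parcel = Parcel
  { owner       :: String
  , derivedFrom :: [ParcelId]  -- empty for the oldest known instances
  } deriving Show

type History = Map.Map ParcelId Parcel

-- Walk the ancestry of one parcel instance (duplicates possible in a DAG).
ancestors :: History -> ParcelId -> [ParcelId]
ancestors h pid = case Map.lookup pid h of
  Nothing -> []
  Just p  -> let ps = derivedFrom p
             in ps ++ concatMap (ancestors h) ps

example :: History
example = Map.fromList
  [ (1, Parcel "A" [])      -- an original parcel
  , (2, Parcel "B" [])      -- a neighbouring parcel
  , (3, Parcel "A" [1])     -- parcel 1 was split; one piece kept
  , (4, Parcel "C" [1, 2])  -- the other piece merged with parcel 2
  ]
```

Note that parcel 4 has two parents, which is exactly why the history is a directed graph and not a strict hierarchy of splits.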
>> So in chapter 1 of any functional programming tutorial is the factorial
>> calculation function done with recursion. We beginners see the recursion
>> "going out" to the "last one," then "coming back," adding up the results
>> of each stage as it returns . . . like a yo-yo winding out,
>> then winding up again.
Nice image, but there's no "last one" here (nor a "first one"
that we could make sense of). There's no boundary
from where we could be "coming back".
Recursion does not apply [see more below].
>> David Turner dct25-561bs at mythic-beasts.com
>> Mon Jan 16 07:47:41 UTC 2017
>>
>> I suspect you have stumbled onto the dual concept of
corecursion: ...
Nor corecursion.
a) because of the infinite and arbitrary potential for
splits/mergers;
b) because even if there was some time in the distant past
when the whole country was a single parcel with a single owner
(The Emperor? But even empires and countries split and merge.
Or no owner, only hunter-gatherers/nomads,
which might as well be a single parcel),
we're never going to try to rebuild that history;
c) because there's never going to be a future end-state.
Arbitrary splits/mergers will continue forever.
We can't predict what might be the smallest parcels
some area of land could be split into.
(Note that one trick to prevent mineral-exploitation companies
from buying up glorious scenery is to split it into
$1 parcels and sell each to a different owner.)
>> ... Haskell being the most pure functional language, is, hence,
>> my starting point on this question.
There's nothing specifically Haskell in this.
A better place to ask might be lambda-the-ultimate.
Anthony
Question: Does every parcel have its own time, or is there a universal clock ticking in the background?
If you have a universal clock, you could model your world as a linear series of state changes, stretching infinitely into the past and future. Hence two streams going out from a "now" would be the overarching data type. This has been used e.g. for doubly linked lists. Doesn't functional reactive programming use a similar model?
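The "two streams out of a now" idea can be sketched as a zipper over an infinite timeline of states (the names here are my own invention):

```haskell
-- Past states (most recent first), the present state, future states.
data Timeline a = Timeline [a] a [a]

now :: Timeline a -> a
now (Timeline _ x _) = x

-- Moving the "now" marker; these assume both streams are infinite.
back, forward :: Timeline a -> Timeline a
back    (Timeline (p:ps) x fs) = Timeline ps p (x:fs)
forward (Timeline ps x (f:fs)) = Timeline (x:ps) f fs

-- e.g. all integers, centred on 0:
ints :: Timeline Integer
ints = Timeline [-1, -2 ..] 0 [1, 2 ..]
```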
Question: How does one identify a parcel of land? Is it a polygon of coordinates in some reference system? Hence can the state be seen as a finite set of (Polygon,Owner) pairs?
Question: What is more important: The state or the state change? If the state is more important, a kind of graph might be the appropriate data structure. If the change is important, it might be interesting to model the world (and time) as a network of functions.
Question to all mathematicians on this list: Suppose (again assuming a universal clock) there is a type (a set?) X and a function
change :: Integer -> (X -> X)
where the state of "now" is to be understood as the value
now = change 0 . change (-1) . change (-2) . {...ad infinitum}
Under which circumstances does this determine the value of "now"? The only instances I can think of are contractions on a (compact) metric space (Brouwer's/Banach's fixed point theorems).
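The contraction case can be illustrated with a finite approximation of that infinite composition. This is a toy example of my own (one fixed contraction on the reals, not anything from the thread): when every change is the contraction x/2 + 1, the composition converges to its fixed point 2 no matter what "seed" state the far past held.

```haskell
-- Each time step contributes one state-change function.
change :: Int -> (Double -> Double)
change _ x = x / 2 + 1  -- a contraction with Lipschitz constant 1/2

-- Approximate change 0 . change (-1) . ... by cutting off at -depth.
now_approx :: Int -> Double -> Double
now_approx depth seed =
  foldr (.) id [change k | k <- [0, -1 .. negate depth]] seed
```

For large depth the seed is forgotten: both now_approx 60 1e6 and now_approx 60 0 are within 1e-6 of the fixed point 2, which is the Banach fixed-point phenomenon in miniature.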
I second the contributors who suggest that this has not much to do with recursion nor corecursion. To me, the essence of recursion is self-reference. A function building e.g. an infinite stream from a seed is not self-referential in itself; it is the resulting data that is self-referential.
Olaf