[Haskell-cafe] Hyper-recursion?


Lawrence Bottorff

Jan 15, 2017, 11:44:16 PM1/15/17
to haskel...@haskell.org, lp...@cam.ac.uk
A while back I worked at an assessor's office, i.e., the people responsible for handling properties as land parcels. I was brought in to do GIS. One of the things we wanted was to get a sense of the history of land use, specifically how property changed hands, and especially how property lines changed due to properties either merging or being split up. That is to say, how the parcel map changed over time.

Problematic was how the rugged old mainframe system (IBM AS/400 and a Cobol app) simply updated the data -- with no concept of remembering how things used to be. My GIS system (ESRI) also lacked any state memory -- other than simply snapshotting everything ad nauseam.

Years later I became familiar with the world of functional programming. Somewhere in the functional paradigm, specifically recursion, would seem to be a model for this issue. That is, the whole issue of parcels changing through merges or splits might be a simple question of recursion. If functional is "beyond state," then changes in a map would only be stages, places in a never-ending outward-growing recursion. Seen as "infinite mirrors," one map change would simply be a mirror layer -- click on it to call it up.

So in chapter 1 of any functional programming tutorial is the factorial calculation function done with recursion. We beginners see the recursion "going out" to the "last one," then "coming back," adding up the results of each stage as it returns . . . like a yo-yo winding out, then winding up again.

But what I'm describing with changing parcel maps would seem to be a process that never comes out of recursion, but rather lives inside a "hyper-recursion." That is to say, an overarching recursion is the whole world. However, today's mentality would simply say: obviously, you want to save off each change you make to your parcel map . . . again, the whole snapshot idea. This thinking is typical data management coupled, perhaps, with the human penchant to see things in terms of time and timelines. So, again, the standard solution would be to timestamp a parcel change and save it off, i.e., a "change" is a tick on the timeline: mark it, make note of it.

But again, couldn't we see this whole issue of parcel changes as (using my yo-yo analogy) just levels in what seems like a continual winding out -- which in a topological sense doesn't even need to be seen as geographical? This would seem to put the process in a recursion, rather than recursion(s) in a process. Haskell being the most pure functional language, is, hence, my starting point on this question. I'm a bright enough person, but I can't believe I'm the first person to notice this Alice in Wonderland situation . . . BTW, we could see a chess game in a similar light . . . if not many, many other evolving phenomena of supposed (cumulative, consequential?) change.

Thanks,
Lawrence Bottorff

David Turner

Jan 16, 2017, 2:50:03 AM1/16/17
to Lawrence Bottorff, lp...@cam.ac.uk, haskel...@haskell.org
Hi Lawrence,

I suspect you have stumbled onto the dual concept of corecursion: rather than starting from a complex structure and recursively consuming it until you reach a simple base case, you're starting from a simple base case and corecursively producing more complex ones.
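
To make that concrete, here is a minimal corecursive sketch in Haskell (all the names and the stand-in ParcelMap type are purely illustrative, not anything from your system):

type ParcelMap = [(Int, String)]  -- stand-in for the real parcel structure
type Change    = ParcelMap -> ParcelMap

-- Corecursively produce the (possibly endless) stream of map states
-- from a starting map and a stream of changes; no state is discarded,
-- and laziness means only the states you inspect are ever computed.
history :: ParcelMap -> [Change] -> [ParcelMap]
history = scanl (flip ($))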

See also Event Sourcing.
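
Event Sourcing is then the recursive half of the same picture: keep the log of changes as the primary record, and fold over it to recover any state. A sketch, reusing the illustrative types above:

-- The current state is just a fold over the full log of changes.
replay :: ParcelMap -> [Change] -> ParcelMap
replay = foldl (flip ($))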

Best wishes,

David



Albert Y. C. Lai

Jan 16, 2017, 8:56:23 PM1/16/17
to haskel...@haskell.org
There is an idea called "open recursion". Illustrating by Fibonacci (the
direct but naively slow way) and factorial (the direct but naively slow
way again, see
http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
):

Instead of writing "fac x = x * fac (x-1)", we write:

fac x = fac_essence fac x

fac_essence f 0 = 1
fac_essence f x = x * f (x-1)

And instead of writing "fib x = fib (x-1) + fib (x-2)", we write:

fib x = fib_essence fib x

fib_essence f 0 = 0
fib_essence f 1 = 1
fib_essence f x = f (x-1) + f (x-2)

The general pattern: Write a helper essence function that takes one more
parameter, a function parameter; don't recurse yet, use the function
parameter instead. Leave the recursion to the main function.

Why would we do this?

One reason is philosophical, like you said, to factor out the recursion
from the essence of one iteration of the process.

Another reason is that you don't have to merely put back the recursion
in the main function. You can now play some dependency injection trick
before recursing, or maybe you don't even recurse.

For example, how about injecting a dependency on a lookup-list, with the
content of said lookup-list coming from said injection --- there hides
the recursion.

fib_list = [fib_essence (fib_list !!) i | i <- [0..]]

or maybe you prefer

fib_list = map (fib_essence (fib_list !!)) [0..]

You will find that "fib_list !! 100" is less slow than "fib 100".

Sometimes you prefer a function-application interface. So we go on to write:

fib_notslow x = fib_list !! x

In fact sometimes we turn it around, hide away the lookup table, and write:

fib_notslow = (fib_list !!)
  where
    fib_list = [fib_essence fib_notslow i | i <- [0..]]
    -- I could write "fib_essence (fib_list !!)",
    -- but now that (fib_list !!) has a name, "fib_notslow"...

Thus we have obtained a revisionist-history definition of dynamic
programming and memoization: Dependency injection of a lookup table into
a recursion.

List is still not fast enough; a binary search tree (lazily generated)
is even better. The "memoize" library
(http://hackage.haskell.org/package/memoize) does that.
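
For instance (a sketch, assuming I recall the library's API correctly: its Data.Function.Memoize exposes memoFix, which fits the open-recursion shape above directly):

import Data.Function.Memoize (memoFix)

-- Inject the library's lazily built lookup tree instead of plain recursion.
fib_fast :: Integer -> Integer
fib_fast = memoFix fib_essence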

Your next food for thought: You have seen open recursion for recursive
programs. How about open recursion for recursive data?

Anthony Clayden

Jan 16, 2017, 9:10:45 PM1/16/17
to haskel...@haskell.org
>> On 16 Jan 2017 04:43, "Lawrence Bottorff" <borgauf at gmail.com> wrote:
>>
>> A while back I worked at an assessor's office, i.e., the people
>> responsible for handling properties as land parcels. ...

Hi Lawrence, I too worked on land parcels -- a local government rates
billing and land valuation application.

>> ... how property changed hands, and especially how property lines
>> changed due to properties either merging or being split up.
>> That is to say, how the parcel map changed over time.

I think the important aspect is that a parcel can be split, and one of
the splits merged with another parcel that was not originally with it.

(I don't know how it was in your application, but in mine a legal parcel
might consist of discontiguous areas of land -- for example, a house in
a terrace parcelled with a garage amongst a block of garages at the end
of the street.)

That is, looking over history you cannot group the parcels into a
strict hierarchy of splits.

>> Somewhere in the functional paradigm, specifically recursion,
>> would seem to be a model for this issue.

No, I'm not seeing anything specifically functional about this.
It's a directed graph, where each node is an instance of parcel
ownership.
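
As a sketch of that graph in Haskell types (the names are mine, purely
illustrative):

type ParcelId = Int
type Owner    = String

-- Each node is one instance of parcel ownership; the edges point back
-- at the parcel(s) the land came from, so splits and merges both fit,
-- with no strict hierarchy required.
data Node = Node
  { parcelId    :: ParcelId
  , owner       :: Owner
  , derivedFrom :: [ParcelId]  -- several after a merge; one after a split
  }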

>> So in chapter 1 of any functional programming tutorial is the factorial
>> calculation function done with recursion. We beginners see the recursion
>> "going out" to the "last one," then "coming back," adding up the results
>> of each stage as it returns . . . like a yo-yo winding out,
>> then winding up again.

Nice image, but there's no "last one" here (nor a "first one" that we
could make sense of). There's no boundary from where we could be
"coming back".

Recursion does not apply [see more below].

>> David Turner dct25-561bs at mythic-beasts.com
>> Mon Jan 16 07:47:41 UTC 2017
>>
>> I suspect you have stumbled onto the dual concept of corecursion: ...

Nor corecursion.

a) because of the infinite and arbitrary potential for splits/mergers;

b) because even if there was some time in the distant past when the
whole country was a single parcel with a single owner, we're never
going to try to rebuild that history. (The Emperor? But even empires
and countries split and merge. Or no owner, only
hunter-gatherers/nomads, which might as well be a single parcel.)

c) because there's never going to be a future end-state. Arbitrary
splits/mergers will continue forever. We can't predict what might be
the smallest parcels some area of land could be split into. (Note that
one trick to prevent mineral exploitation companies from buying up
glorious scenery is to split it into $1-parcels and sell each to a
different owner.)


>> ... Haskell being the most pure functional language, is, hence,
>> my starting point on this question.

There's nothing specifically Haskell in this.
A better place to ask might be lambda-the-ultimate.


Anthony

Lawrence Bottorff

Jan 16, 2017, 11:59:57 PM1/16/17
to haskel...@haskell.org
@Albert Y. C. Lai, Yes, I'll look into open recursion.

@Anthony Clayden, I like the idea of the directed graph, no matter what the final answer. Land parcel changes can be seen as hierarchical trees growing branches -- until something needs to jump to another tree -- or a tree pruned and moved to a new home. The WWW is a great example of this, i.e., a DOM page inside a hierarchical site -- with links jumping outside. True, the splitting and recombining of parcels goes on forever. But I see that as recursion, like one of the classic examples, "infinite mirrors" (the cascading layers produced when two mirrors face one another and you stand in the middle). Basically, anything that is changing is bifurcating and, thus, can be seen as recursion -- whether it "comes back" or not.

Programs "execute," which means they do steps, and then they're through and give an answer or result. They leave logs, paper printouts, new things on the disk, answers on the screen, etc. These are terminating discrete events -- with no natural sense of connection other than piping to a next discrete step or a historical recounting we set up ("chess software, show me my last two moves."). Sure, looped GUI apps or your bash terminal or a REPL place a wrappers around this; but there is no concept of having an event be just one layer of the persistent mirror stack. Of course my infinite mirrors analogy breaks down, because progressing from one layer to the next can totally reshape the whole landscape, like completing a row in Tetris can collapse lots of real estate instantly. And yet with today's Tetris (or chess) software, what you see is just a graphical of the continually updated memory field underneath. Again, any chaining of the events or remembering a previous state is unnatural, intentional after-the-fact incursion we have done, not a look at a truly stateless, timeless "unfolding." This may be getting a bit theoretical, but consider the WWW. It grows like an organism -- and, surely, not inside a "program" doing it, imposing direction; there's no program stepping through tasks, no loop creating sites and pages. This is recursion in the wild. For that matter, life is recursion in the wild.

Maybe software simply can't go there. The best we can do is have stuff live in a REPL session; but I'm not sure a REPL could be true hyper-recursion.

I harp on all this because if Haskell is stateless/immutable and in total disregard of any underlying memory field, then it seems it should progress to hyper-recursion. All the examples -- parcel maps, Tetris screens, chess apps changing -- are just, as I said above, graphical views of an underlying memory field's undoubtedly destructive overwriting. I guess I'm asking, what is the opposite of this? What would we have if the memory field wasn't changing state? The most obvious answer would be -- a runaway memory leak. But could we avoid the runaway memory consumption, but have the "unfolding," the statelessness by some comp-sci sleight of hand?

William Yager

Jan 17, 2017, 12:28:08 AM1/17/17
to Lawrence Bottorff, Haskell Cafe
I'm not sure if this is what you're looking for, but all this talk of bifurcation and infinite recursion suggests to me that you might be interested in Control.Monad.Free and Data.Fix. They may correspond to the concept you're getting at.
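
For a taste of the Data.Fix idea (hand-rolled here so it is self-contained; the history constructors are my own illustration, not anything from the thread):

{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix (f (Fix f))

type ParcelId = Int

-- One layer of parcel history, with the recursion left open in r;
-- Fix ties the knot, allowing arbitrarily deep (even infinite) histories.
data HistoryF r
  = Today ParcelId       -- the parcel as it currently stands
  | Split ParcelId [r]   -- it was split into these pieces
  | Merged [ParcelId] r  -- these parcels were merged into one
  deriving Functor

type History = Fix HistoryF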

> But could we avoid the runaway memory consumption, but have the "unfolding," the statelessness by some comp-sci sleight of hand?

Garbage collection?
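
One concrete reading of that answer, as a sketch (illustrative values only): persistent structures share rather than copy, so old versions stay cheap to keep, and the collector reclaims whatever nothing points at any more.

import qualified Data.Map as Map

v1, v2 :: Map.Map Int String
v1 = Map.fromList [(1, "parcel A"), (2, "parcel B")]
v2 = Map.insert 2 "parcel B, split" v1
-- v1 is still intact; the unchanged parts of the map are shared, not duplicated.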

--Will

Sergiu Ivanov

Jan 17, 2017, 6:34:29 AM1/17/17
to Lawrence Bottorff, haskel...@haskell.org

Hi Lawrence,

Thus quoth Lawrence Bottorff at 04:56 on Tue, Jan 17 2017:
>
> I harp on all this because if Haskell is stateless/immutable and in
> total disregard of any underlying memory field, then it seems it
> should progress to hyper-recursion.

The characteristics you are putting forward are generally attributed to
declarative programming (kinda): loosely put, programming in which you
describe your intentions instead of describing how they should actually
be implemented.

While Haskell falls rather heavily within declarative programming, some
issues (theoretically proven undecidability, for one) prevent it from
entirely having the mentioned characteristics.


On the other hand, hyper-recursion seems to be an informal metaphor
overhanging a couple of formal concepts (fixed points and corecursion,
among others). Haskell aims to be a "well formalised" language, so
while it can handle fixed points and corecursion, it cannot handle some
abstractions which are too loose and therefore difficult to formalise
(but still useful as tools for intuitive exploration).
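
For what it's worth, the fixed-point half is literally a one-liner in Haskell. A sketch, using Albert's fib_essence from earlier in the thread:

import Data.Function (fix)

-- fix f = let x = f x in x; tying the open recursion back into
-- plain recursion recovers the direct, slow fib.
fib_slow :: Integer -> Integer
fib_slow = fix fib_essence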


Conclusion: you are describing an intuition of a model of computing,
which could fall within declarative programming once formalised.
Haskell also falls within declarative programming, but this does not
mean it is particularly better suited than any other programming
language.


<optional-part1>
I believe I'm indirectly citing what people have already said in this
discussion.
</optional-part1>

<optional-part2>
I do not mean to criticise your point of view, I actually find it rather
interesting.
</optional-part2>

--
Sergiu

https://en.wikipedia.org/wiki/Declarative_programming

Olaf Klinke

Jan 17, 2017, 3:15:39 PM1/17/17
to bor...@gmail.com, haskel...@haskell.org
If I may step into this discussion, I'd like to suggest you start modelling your world of parcels more concretely, as data types. From there it should be easier to grasp your hyper-recursion.

Question: Does every parcel have its own time, or is there a universal clock ticking in the background?

If you have a universal clock, you could model your world as a linear series of state changes, stretching infinitely into the past and future. Hence two streams going out from a "now" would be the overarching data type. This has been used e.g. for doubly linked lists. Doesn't functional reactive programming use a similar model?
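
As a sketch of that shape (a zipper over a conceptually infinite timeline; the names are mine):

data Timeline a = Timeline
  { past   :: [a]  -- states receding from "now", most recent first
  , now    :: a
  , future :: [a]  -- states unfolding ahead of "now"
  }

-- Advance the universal clock one tick, if a next state is known.
tick :: Timeline a -> Timeline a
tick (Timeline p n (f:fs)) = Timeline (n:p) f fs
tick t                     = t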

Question: How does one identify a parcel of land? Is it a polygon of coordinates in some reference system? Hence can the state be seen as a finite set of (Polygon,Owner) pairs?
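
Spelled out as types, again only as a sketch:

type Point      = (Double, Double)  -- coordinates in some reference system
type Polygon    = [Point]
type Owner      = String
type WorldState = [(Polygon, Owner)]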

Question: What is more important: The state or the state change? If the state is more important, a kind of graph might be the appropriate data structure. If the change is important, it might be interesting to model the world (and time) as a network of functions.

Question to all mathematicians on this list: Suppose (again assuming a universal clock) there is a type (a set?) X and a function

change :: Integer -> (X -> X)

where the state of "now" is to be understood as the value

now = change 0 . change (-1) . change (-2) . {...ad infinitum}

Under which circumstances does this determine the value of "now"? The only instance I can think of are contractions on a (compact) metric space. (Brouwer's/Banach's fixed point theorems)

I second the contributors who suggest that this has not much to do with recursion nor corecursion. To me, the essence of recursion is self-reference. A function building e.g. an infinite stream from a seed is not self-referential in itself; it is the resulting data that is self-referential.

Olaf

Lawrence Bottorff

Jan 17, 2017, 8:13:14 PM1/17/17
to Olaf Klinke, haskel...@haskell.org
I guess I'm using a looser definition of recursion, something not exactly "self-referential." For example, I'm on the phone with someone, and then I accept a call-waiting call. You can go into multiple depths of call waiting, then wind your way back out as you hang up on each -- all the way back to the original caller. Or, with my example, I could have a state represented as some polynomial, a_n x^n + a_{n-1} x^{n-1} + ..., and then a "change" happens when a new polynomial "goes into" the first:

a_n (b_n y^n + b_{n-1} y^{n-1} + ...)^n + a_{n-1} x^{n-1} + ...

This could go on and on, new polynomials substituted into one of the variables of the previous. All I want to say with this is that I consider this also recursive in nature, even though I can't imagine how this would be "self-referential" or look typically inductive.

I'm not an expert on the whole Bitcoin blockchain, but your (Olaf)

now = change 0 . change (-1) . change (-2) . {...ad infinitum}

looks like one. And no, there shouldn't be a clock, per se. Yes, parcels would be polygons, with sides shared -- although this isn't germane to the bigger question.

Sergiu Ivanov

Jan 18, 2017, 3:34:39 AM1/18/17
to Lawrence Bottorff, Olaf Klinke, haskel...@haskell.org

Thus quoth Lawrence Bottorff at 01:11 on Wed, Jan 18 2017:
>
> I guess I'm using a looser definition of recursion, something not
> exactly "self-referential." For example, I'm on the phone with
> someone, and then I accept a call-waiting call. You can go into
> multiple depths of call waiting, then wind your way back out as you
> hang up on each -- all the way back to the original caller.
[...]
> This could go on and on, new polynomials substituted into one of the
> variables of the previous.

This sounds a lot like recursive (inductive/coinductive) data
structures.

--
Sergiu