
[Haskell-cafe] Clearly, Haskell is ill-founded


Thomas Conway

Jul 9, 2007, 1:42:41 AM
to Haskell Cafe
I don't know if you saw the following linked off /.

http://www.itwire.com.au/content/view/13339/53/

An amazon link for the book is here:

http://www.amazon.com/Computer-Science-Reconsidered-Invocation-Expression/dp/0471798142

The basic claim appears to be that discrete mathematics is a bad
foundation for computer science. I suspect the subscribers to this
list would beg to disagree.

Enjoy,
T.
--
Dr Thomas Conway
drt...@gmail.com

Silence is the perfectest herald of joy:
I were but little happy, if I could say how much.
_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Donald Bruce Stewart

Jul 9, 2007, 1:48:27 AM
to Thomas Conway, Haskell Cafe
drtomc:

> I don't know if you saw the following linked off /.
>
> http://www.itwire.com.au/content/view/13339/53/
>
> An amazon link for the book is here:
>
> http://www.amazon.com/Computer-Science-Reconsidered-Invocation-Expression/dp/0471798142
>
> The basic claim appears to be that discrete mathematics is a bad
> foundation for computer science. I suspect the subscribers to this
> list would beg to disagree.
>
> Enjoy,

:-)

And he's patented it...

http://www.patentstorm.us/patents/5355496-description.html

SUMMARY OF THE INVENTION

A method and system for process expression and resolution is described. A
first language structure comprising a possibility expression having at least
one definition which is inherently and generally concurrent is provided.
Further, a second language structure comprising an actuality expression
including a fully formed input data name to be resolved is provided.
Furthermore, a third language structure comprising an active expression
initially having at least one invocation, the invocation comprising an
association with a particular definition and the fully formed input data name
of the actuality expression is provided. Subsequently, the process of resolving
invocations begins in the active expression with fully formed input data names
in relation to their associated definition to produce at least one or both of
the following: (1) an invocation with a fully formed input data name and (2) a
result data name.

Interesting...

-- Don

Thomas Conway

Jul 9, 2007, 2:04:42 AM
to Donald Bruce Stewart, Haskell Cafe
On 7/9/07, Donald Bruce Stewart <do...@cse.unsw.edu.au> wrote:
> And he's patented it...
>
> http://www.patentstorm.us/patents/5355496-description.html

Clearly a winner then. :-)

T.
--
Dr Thomas Conway
drt...@gmail.com

Silence is the perfectest herald of joy:
I were but little happy, if I could say how much.

Daniel McAllansmith

Jul 9, 2007, 2:15:32 AM
to haskel...@haskell.org
On Monday 09 July 2007 17:42, Thomas Conway wrote:
> I don't know if you saw the following linked off /.
>
> http://www.itwire.com.au/content/view/13339/53/
>
> An amazon link for the book is here:
>
> http://www.amazon.com/Computer-Science-Reconsidered-Invocation-Expression/dp/0471798142
>
> The basic claim appears to be that discrete mathematics is a bad
> foundation for computer science. I suspect the subscribers to this
> list would beg to disagree.

I wouldn't want to comment on the validity of his claim, maybe he's wrong, or
maybe he's... well, anyway... what I will say is I got a chuckle out of
the 'Citations' that Amazon lists.

I especially like it that Mr. Fant's book is apparently cited in 'The
Essential Guide to Psychiatric Drugs: Includes The Most Recent Information
On: Antidepressants, Tranquilizers and Antianxiety Drugs,
Antipsychotics, ...'

I shudder to think of the creative processes involved in the creation of the
book.

Stephen Forrest

Jul 9, 2007, 3:18:46 AM
to haskel...@haskell.org
On 7/9/07, Daniel McAllansmith <dm...@gmail.com> wrote:

> I wouldn't want to comment on the validity of his claim, maybe he's wrong, or
> maybe he's... well, anyway... what I will say is I got a chuckle out of
> the 'Citations' that Amazon lists.

As amusing as that thought is, it seems that this is regrettably an
error on Amazon's part. Looking at the actual page images where the
alleged citations occur, I found no mention of this book anywhere.
(How could there be? It was just published.)

It looks like Amazon's citation database is mistakenly using the index
for the book _Beating Depression_ by John Rush (Toronto: John Wiley &
Sons, Canada Ltd., 1983).

Steve

Puneet

Jul 9, 2007, 4:20:34 AM
to haskel...@haskell.org
> It looks like Amazon's citation database is mistakenly using the index
> for the book _Beating Depression_ by John Rush (Toronto: John Wiley &
> Sons, Canada Ltd., 1983).
>

Yes, that is so. Amazon.com mistakenly thinks that the given book is a
new edition of the book titled "Beating Depression".

Amazon also links the hardcover and softcover editions of "Beating
Depression" just below where the price and availability of the book
are listed.

Regards

Pasqualino 'Titto' Assini

Jul 9, 2007, 6:05:37 AM
to haskel...@haskell.org
Doesn't Haskell already implement the 3-valued logic (True, False, NULL) that
Karl Fant proposes (see papers at
http://www.theseusresearch.com/invocation%20model.htm) as an alternative to
centralised clock-based coordination, by postulating that every data type
includes the bottom value?

I like his concept that:

"concurrency is simple and primitive and sequentiality is a complex and risky
derivative of concurrency."

Can someone remind me why, in a language like Haskell that is referentially
transparent and therefore inherently 'concurrent', we need explicit
concurrency (threads, etc.)?

titto

Conor McBride

Jul 9, 2007, 6:11:38 AM
to Haskell Cafe
Hi all

On 9 Jul 2007, at 06:42, Thomas Conway wrote:

> I don't know if you saw the following linked off /.
>
> http://www.itwire.com.au/content/view/13339/53/

[..]

> The basic claim appears to be that discrete mathematics is a bad
> foundation for computer science. I suspect the subscribers to this
> list would beg to disagree.

It's true that some systems are better characterised as corecursive
"coprograms", rather than as recursive "programs". This is not a
popular or well-understood distinction. In my career as an advocate
for total programming (in some carefully delineated fragment of a
language) I have many times been gotcha'ed thus: "but an operating
system is a program which isn't supposed to terminate". No, an
operating system is supposed to remain responsive. And that's what
total coprograms do.

By the looks of this article, the program versus coprogram distinction
seems to have occasioned an unprecedented degree of existential angst
for this individual.

Even so, I'd say that it's worth raising awareness of it. Haskell's
identification of inductive data with coinductive data, however well
motivated, has allowed people to be lazy. People aren't so likely to
be thinking "do I mean inductive or coinductive here?", "is this
function productive?" etc. The usual style is to write as if
everything is inductive, and if it still works on infinite data, to
pat ourselves on the back for using Haskell rather than ML. I'm
certainly guilty of that.
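
To make the distinction concrete, here is a rough sketch (with made-up
names) in today's Haskell, where one declaration does double duty for
both readings:

> data List x = Nil | Cons x (List x)
>
> -- a recursive "program": total only on the inductive (finite) reading
> len :: List x -> Int
> len Nil         = 0
> len (Cons _ xs) = 1 + len xs
>
> -- a corecursive "coprogram": productive, and only sensible on the
> -- coinductive (possibly infinite) reading
> from :: Int -> List Int
> from n = Cons n (from (n + 1))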

I'd go as far as to suggest that "codata" be made a keyword, at
present doubling for "data", but with the documentary purpose of
indicating that a different mode of (co)programming is in order. It
might also be the basis of better warnings, optimisations, etc.
Moreover, it becomes a necessary distinction if we ever need
to identify a total fragment of Haskell. Overkill, perhaps, but
I often find it's something I want to express.

Just a thought

Conor

Asumu Takikawa

Jul 9, 2007, 6:34:30 AM
to Thomas Conway, Haskell Cafe
On 15:42 Mon 09 Jul , Thomas Conway wrote:
> I don't know if you saw the following linked off /.
>
> http://www.itwire.com.au/content/view/13339/53/
>

I read that earlier and his comments, such as

"This concept of 'process expression' is, he says, a common thread
running through the various disciplines of computer science",

made me think of arrows and category theory.

And I wonder what kind of aberration a Monte Carlo algorithm would be if
this excerpt is to be taken seriously:

"Any program utilising random input to carry out its process, such...is
not an algorithm."

Cheers,
Asumu Takikawa


Dan Piponi

Jul 9, 2007, 2:20:01 PM
to Haskell Cafe
On 7/8/07, Thomas Conway <drt...@gmail.com> wrote:
> The basic claim appears to be that discrete mathematics is a bad
> foundation for computer science. I suspect the subscribers to this
> list would beg to disagree.

Wearing my tin foil hat for the moment, I think that there is a
conspiracy by some computer scientists to drive a wedge between
mathematicians and computer scientists. You can see hints of it in
many places where both mathematicians and computer scientists hang out
and there have been quite a few recent articles setting up mathematics
and computer science as somehow in competition with each other.

Many of the structures studied by mathematicians are algebraic. Many
of the structures studied by computer scientists are coalgebraic (eg.
the web itself can be seen as a vast coalgebraic structure). Sometimes
I wonder if the only difference between mathematicians and computer
scientists is the direction of their arrows.

Andrew Coppin

Jul 10, 2007, 3:08:55 PM
to haskel...@haskell.org
Conor McBride wrote:
> Hi all

Erm... Wait a sec... coroutines, comonads, coprograms, codata... what in
the name of goodness does "co" actually *mean* anyway??

Stefan O'Rear

Jul 10, 2007, 3:26:47 PM
to Andrew Coppin, haskel...@haskell.org
On Tue, Jul 10, 2007 at 08:08:52PM +0100, Andrew Coppin wrote:
> Erm... Wait a sec... coroutines, comonads, coprograms, codata... what in
> the name of goodness does "co" actually *mean* anyway??

Nothing.

When mathematicians find a new thing completely unlike an OldThing, but
related by some symmetry, they often call the new thing a CoOldThing.

Data can only be constructed using constructors, but can be
deconstructed using recursive folds;
Codata can only be deconstructed using case analysis, but can be
constructed using recursive unfolds.
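
A small sketch with the standard list functions (total and countdown are
just illustrative names):

import Data.List (unfoldr)

-- folding consumes (inductive) data: collapse a finite list to a summary
total :: [Int] -> Int
total = foldr (+) 0

-- unfolding produces codata: grow a list from a seed, possibly forever
-- (this particular one happens to stop)
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n < 0 then Nothing else Just (n, n - 1))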

Monads keep things inside.
Comonads keep things outside.

Homology theory studies the boundaries of shapes.
Cohomology theory studies the insides of curves.

..

Stefan


Creighton Hogg

Jul 10, 2007, 3:30:13 PM
to Dan Piponi, Haskell Cafe
On 7/9/07, Dan Piponi <dpi...@gmail.com> wrote:
>
> On 7/8/07, Thomas Conway <drt...@gmail.com> wrote:
> > The basic claim appears to be that discrete mathematics is a bad
> > foundation for computer science. I suspect the subscribers to this
> > list would beg to disagree.
>
> Wearing my tin foil hat for the moment, I think that there is a
> conspiracy by some computer scientists to drive a wedge between
> mathematicians and computer scientists. You can see hints of it in
> many places where both mathematicians and computer scientists hang out
> and there have been quite a few recent articles setting up mathematics
> and computer science as somehow in competition with each other.
>
> Many of the structures studied by mathematicians are algebraic. Many
> of the structures studied by computer scientists are coalgebraic (eg.
> the web itself can be seen as a vast coalgebraic structure).


Okay Mr. Piponi, as a math geek I can't let that comment about the web slide
without further explanation. Is it just the idea that coalgebras can
capture the idea of side effects (a -> F a) or is there something more
specific that you're thinking of?

Creighton Hogg

Jul 10, 2007, 3:35:25 PM
to Conor McBride, Haskell Cafe
On 7/9/07, Conor McBride <c...@cs.nott.ac.uk> wrote:
>
> Hi all
>
> On 9 Jul 2007, at 06:42, Thomas Conway wrote:
>
> > I don't know if you saw the following linked off /.
> >
> > http://www.itwire.com.au/content/view/13339/53/
>
> [..]
>
> > The basic claim appears to be that discrete mathematics is a bad
> > foundation for computer science. I suspect the subscribers to this
> > list would beg to disagree.
>
> It's true that some systems are better characterised as corecursive
> "coprograms", rather than as recursive "programs". This is not a
> popular or well-understood distinction. In my career as an advocate
> for total programming (in some carefully delineated fragment of a
> language) I have many times been gotcha'ed thus: "but an operating
> system is a program which isn't supposed to terminate". No, an
> operating system is supposed to remain responsive. And that's what
> total coprograms do.


I'm sorry, but can you expand a little further on this? I guess I don't
understand how corecursion implies being responsive to input but not
terminating. Where does the idea of waiting for input fit into corecursion?

Dan Piponi

Jul 10, 2007, 3:58:27 PM
to Haskell Cafe, Creighton Hogg
On 7/10/07, Creighton Hogg <wch...@gmail.com> wrote:
> Okay Mr. Piponi, as a math geek I can't let that comment about the web slide
> without further explanation. Is it just the idea that coalgebras can
> capture the idea of side effects (a -> F a) or is there something more
> specific that you're thinking of?

First a quick bit of background on algebras.

If F is a functor, an F-algebra is an arrow FX->X. For example if we
choose FX = 1+X+X^2 (using + to mean disjoint union) then an F-algebra
is a function 1+X+X^2->X. The 1->X part just picks out a constant, the
image of 1. The X^2->X part defines a binary operator, and the X->X part is
an endomorphism. A group has a constant element (the identity), an
endomorphism (the inverse), and a binary operator (multiplication). So
a group is an example of an F-algebra (with some extra equations added
in so that a group isn't *just* an F-algebra).

An F-coalgebra is an arrow X->FX. As an example, let's pick
FX=(String,[X]). So an F-coalgebra is a function X->(String,[X]). We
can view this as two functions, 'appearance' of type X->String and
'links' of type X->[X]. If X is the type of web pages, then interpret
'appearance' as the rendering (as plain text) of the web page and
links as the function that gives a list of links in the page. So the
web forms a coalgebra. (Though you'll need some extra work to deal
with persistent state like cookies.)
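
In Haskell the two shapes might be sketched like this (hypothetical
rendering; F, Page, appearance and links are made-up names):

-- F X = 1 + X + X^2, written as a functor
data F x = Unit | Single x | Pair x x

-- an F-algebra FX -> X: here X = Integer, with 0 as the constant,
-- negation as the endomorphism and (+) as the binary operator
alg :: F Integer -> Integer
alg Unit       = 0
alg (Single x) = negate x
alg (Pair x y) = x + y

-- G X = (String, [X]); a G-coalgebra X -> G X, with pages standing in
-- for the web
data Page = Page { appearance :: String, links :: [Page] }

coalg :: Page -> (String, [Page])
coalg p = (appearance p, links p)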

The theme is that mathematicians often like to study objects with some
kind of 'combination' operation like (generalised) addition or
multiplication. These form algebras with maps FX->X. Computer
scientists often like to study things that generate more stuff (eg.
when you press a button or input something). So you end up with
something of the form X->FX. This includes many familiar things like
web pages, state machines and formal languages. This isn't a sharp
divide (of course) but I think it reflects a real difference in
emphasis.

A great source for this stuff is the book 'Vicious Circles' by Barwise
and Moss. It's full of computer sciencey stuff but it seems to be
written for an audience that includes mathematicians and computer
scientists. (It has quite a few typos and more serious errors
however.)

Andrew Coppin

Jul 10, 2007, 4:02:16 PM
to haskel...@haskell.org
Dan Piponi wrote:
> First a quick bit of background on algebras.
>
> If F is a functor, an F-algebra is an arrow FX->X. For example if we
> choose FX = 1+X+X^2 (using + to mean disjoint union) then an F-algebra
> is a function 1+X+X^2->X. The 1->X part just picks out a constant, the
> image of 1. The X^2->X part defines a binary operator, and the X->X part is
> an endomorphism. A group has a constant element (the identity), an
> endomorphism (the inverse), and a binary operator (multiplication). So
> a group is an example of an F-algebra (with some extra equations added
> in so that a group isn't *just* an F-algebra).
>
> An F-coalgebra is an arrow X->FX. As an example, let's pick
> FX=(String,[X]). So an F-coalgebra is a function X->(String,[X]). We
> can view this as two functions, 'appearance' of type X->String and
> 'links' of type X->[X]. If X is the type of web pages, then interpret
> 'appearance' as the rendering (as plain text) of the web page and
> links as the function that gives a list of links in the page. So the
> web forms a coalgebra. (Though you'll need some extra work to deal
> with persistent state like cookies.)

..wooooosh...

..and now I know what normal people must feel like when *I* open my
mouth. o_O

Andrew Coppin

Jul 10, 2007, 4:07:51 PM
to haskel...@haskell.org
> ...
>

..so it's similar to the term "normal"?

As in

Normal vector - a vector perpendicular to a surface.
Normal distribution - a common unimodal distribution following a
characteristic Gaussian bell curve.
Normal subgroup - a subgroup that is left unchanged by conjugation with
any element of the whole group.
..

Dan Piponi

Jul 10, 2007, 4:17:56 PM
to Andrew Coppin, haskel...@haskell.org
On 7/10/07, Andrew Coppin <andrew...@btinternet.com> wrote:
> Stefan O'Rear wrote:
> > On Tue, Jul 10, 2007 at 08:08:52PM +0100, Andrew Coppin wrote:
> >
> >> Erm... Wait a sec... coroutines, comonads, coprograms, codata... what in
> >> the name of goodness does "co" actually *mean* anyway??
> > Nothing.
> > When mathematicians find a new thing completely unlike an OldThing, but
> > related by some symmetry, they often call the new thing a CoOldThing.

(I got lost somewhere with the levels of quotation there...)

It's more specific than this. Coalgebra, cohomology, codata, comonads
and so on derive their name from the fact that they can be described
using category theory. In category theory you draw lots of diagrams
with arrows in them. When you flip all the arrows round you get a
description of something else. Pairs of concepts connected in this way
often differ by the prefix "co-". Often theorems you prove about
objects have analogous theorems about the respective co-objects. In
fact, often the proof is the same, just written with all the arrows
pointing the other way.
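
A tiny instance of such a pair (sketch only; pairIn and sumOut are just
illustrative names): the universal arrow into a product combines two maps
out of a common source, while the universal arrow out of a coproduct (sum)
combines two maps into a common target, the same diagram with every arrow
reversed.

pairIn :: (c -> a) -> (c -> b) -> c -> (a, b)
pairIn f g c = (f c, g c)

sumOut :: (a -> c) -> (b -> c) -> Either a b -> c
sumOut f g (Left a)  = f a
sumOut f g (Right b) = g b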

This carries over to Haskell too. You can sometimes write functional
(as in useful) code simply by taking an already existing piece of code
and figuring out what flipping the arrows means. It often means
something very different, but it still makes sense. A really cool
example is the relationship between fold and unfold. But I'll leave
that for someone else.
--
Dan

Andrew Coppin

Jul 10, 2007, 4:21:57 PM
to haskel...@haskell.org
Dan Piponi wrote:
> (I got lost somewhere with the levels of quotation there...)
>
> It's more specific than this. Coalgebra, cohomology, codata, comonads
> and so on derive their name from the fact that they can be described
> using category theory. In category theory you draw lots of diagrams
> with arrows in them. When you flip all the arrows round you get a
> description of something else. Pairs of concepts connected in this way
> often differ by the prefix "co-". Often theorems you prove about
> objects have analogous theorems about the respective co-objects. In
> fact, often the proof is the same, just written with all the arrows
> pointing the other way.
>
> This carries over to Haskell too. You can sometimes write functional
> (as in useful) code simply by taking an already existing piece of code
> and figuring out what flipping the arrows means. It often means
> something very different, but it still makes sense. A really cool
> example is the relationship between fold and unfold. But I'll leave
> that for someone else.

Sounds a lot like the Boolean duality principle. (If a statement works
one way, then flipping all the trues/falses and ands/ors gives you a
brand new statement, which also works.)

Dan Piponi

Jul 10, 2007, 4:28:03 PM
to Andrew Coppin, haskel...@haskell.org
On 7/10/07, Andrew Coppin <andrew...@btinternet.com> wrote:
> Sounds a lot like the Boolean duality principle.

That is, in fact, very closely related.
--
Dan

a...@spamcop.net

Jul 10, 2007, 10:29:27 PM
to haskel...@haskell.org
G'day all.

Quoting Stefan O'Rear <stef...@cox.net>:

> Data can only be constructed using constructors, but can be

> deconstructed [...]

You mean "nstructed".

Now if you'll excuse me, I'm off to write some "de".

Cheers,
Andrew Bromage

Dougal Stanton

Jul 11, 2007, 4:40:23 AM
to a...@spamcop.net, haskel...@haskell.org
On 11/07/07, a...@spamcop.net <a...@spamcop.net> wrote:
> You mean "nstructed".
>
> Now if you'll excuse me, I'm off to write some "de".

Ah, everyone's a median ;-)

Cheers,

Dougal.

Conor McBride

Jul 15, 2007, 9:30:00 PM
to Creighton Hogg, Haskell Cafe
Hi

Whoops! Only just spotted this. Many apologies.

On 10 Jul 2007, at 20:35, Creighton Hogg wrote:

> Me:

> > No, an
> > operating system is supposed to remain responsive. And that's what
> > total coprograms do.
>
> I'm sorry, but can you expand a little further on this? I guess I
> don't understand how corecursion implies being responsive to input but not
> terminating. Where does the idea of waiting for input fit into
> corecursion?
>


You'll be needing a bit of higher-order corecursion for that.
Here's a coprogram for haskell-cafe:

> data{-codata-} Punter = Speak String (String -> Punter)

A Punter is guaranteed to ask a question, and whatever answer you give
them, they've always got another question, forever! Meanwhile, a
String -> Punter is a good-natured soul, always up for answering questions.
Hence haskell-cafe is a productive coprogram(*) producing a stream of
questions and answers!

> data{-codata-} Stream x = x :> (Stream x)

> cafe :: Punter -> (String -> Punter) -> Stream (String, String)
> cafe (Speak question learn) guru =
>   let Speak answer guru' = guru question
>   in  (question, answer) :> (cafe (learn answer) guru')
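
For instance, with a hypothetical punter who numbers their questions and
a guru who merely echoes them:

> punter :: Int -> Punter
> punter n = Speak ("question " ++ show n) (\_ -> punter (n + 1))
>
> echoGuru :: String -> Punter
> echoGuru q = Speak ("re: " ++ q) echoGuru

so that cafe (punter 1) echoGuru begins

("question 1", "re: question 1") :> ("question 2", "re: question 2") :> ...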

All the best

Conor

(*) and yes, I know what that means in Greek...

Derek Elkins

Jul 15, 2007, 9:48:45 PM
to Conor McBride, Haskell Cafe

If the Punter asks the appropriate question, perhaps the guru will spend
the rest of time thinking about an answer.

Conor McBride

Jul 16, 2007, 3:15:16 AM
to Derek Elkins, Haskell Cafe
Hi Derek

On 16 Jul 2007, at 02:48, Derek Elkins wrote:

> On Mon, 2007-07-16 at 02:29 +0100, Conor McBride wrote:
>> Hi
>>

>>> data{-codata-} Punter = Speak String (String -> Punter)
>>

[..]

>>
>>> data{-codata-} Stream x = x :> (Stream x)
>>
>>> cafe :: Punter -> (String -> Punter) -> Stream (String, String)
>>> cafe (Speak question learn) guru =
>>>   let Speak answer guru' = guru question
>>>   in  (question, answer) :> (cafe (learn answer) guru')
>
> If the Punter asks the appropriate question, perhaps the guru will
> spend
> the rest of time thinking about an answer.

It's true that answers can take a while, but not forever if the guru is
also a productive coprogram. In more realistic examples, mere productivity
might not be enough: you might want to be sure that questions will
eventually be answered, after some initial segment of "busy" responses.

To that end, an exercise. Implement a codata type

data{-codata-} Mux x y = ...

which intersperses x's and y's in such a way that

(1) an initial segment of a Mux does not determine whether the next
element is an x or a y (ie, no forced *pattern* of alternation)

(2) there are productive coprograms

demuxL :: Mux x y -> Stream x
demuxR :: Mux x y -> Stream y

(ie, alternation is none the less forced)

You may need to introduce some (inductive) data to achieve this. If you
always think "always", then you need codata, but if you eventually think
"eventually", you need data.

All the best

Conor

apfelmus

Jul 16, 2007, 11:54:42 AM
to haskel...@haskell.org
Conor McBride wrote:
> To that end, an exercise. Implement a codata type
>
> data{-codata-} Mux x y = ...
>
> which intersperses x's and y's in such a way that
>
> (1) an initial segment of a Mux does not determine whether the next
> element is an x or a y (ie, no forced *pattern* of alternation)
>
> (2) there are productive coprograms
>
> demuxL :: Mux x y -> Stream x
> demuxR :: Mux x y -> Stream y
>
> (ie, alternation is none the less forced)
>
> You may need to introduce some (inductive) data to achieve this. If you
> always think "always", then you need codata, but if you eventually think
> "eventually", you need data.

------------- Spoiler warning: significant λs follow -------------

A very interesting exercise! Here's a solution:

-- lists with at least one element
data List1 x = One x | Cons x (List1 x)

append :: List1 x -> Stream x -> Stream x
append (One x) ys = x :> ys
append (Cons x xs) ys = x :> append xs ys


-- stream of alternating runs of xs and ys
codata Mix x y = Stream (List1 x, List1 y)

demixL ((xs,ys) :> xys) = xs `append` demixL xys
demixR ((xs,ys) :> xys) = ys `append` demixR xys

-- remove x-bias
codata Mux x y = Either (Mix x y) (Mix y x)

demuxL (Left xys) = demixL xys
demuxL (Right yxs) = demixR yxs

demuxR (Left xys) = demixR xys
demuxR (Right yxs) = demixL yxs


A non-solution would simply be the pair (Stream x, Stream y), but this
doesn't capture the order in which xs and ys interleave. I think this
can be formalized with the obvious operations

consL :: x -> Mux x y -> Mux x y
consR :: y -> Mux x y -> Mux x y

by requiring that they don't commute

consL x . consR y ≠ consR y . consL x

Or rather, one should require that the observation

observe :: Mux x y -> Stream (Either x y)

respects consL and consR:

observe . consL x = (Left x :>)
observe . consR y = (Right y :>)
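
For what it's worth, consL can indeed be implemented productively on the
Mux above (a sketch; consR is symmetric, and shift is just a helper name
I made up):

consL :: x -> Mux x y -> Mux x y
consL x (Left  ((xs, ys) :> xys)) = Left ((Cons x xs, ys) :> xys)
consL x (Right ((ys, xs) :> yxs)) = Left ((One x, ys) :> shift xs yxs)
  where
    -- pair each x-run with the following y-run
    shift xs' ((ys', xs'') :> rest) = (xs', ys') :> shift xs'' rest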


Regards,
apfelmus

Stefan Holdermans

Jul 16, 2007, 2:34:16 PM
to haskell-cafe Café
Conor's exercise:

> To that end, an exercise. Implement a codata type
>
> data{-codata-} Mux x y = ...
>
> which intersperses x's and y's in such a way that
>
> (1) an initial segment of a Mux does not determine whether the next
> element is an x or a y (ie, no forced *pattern* of alternation)
>
> (2) there are productive coprograms
>
> demuxL :: Mux x y -> Stream x
> demuxR :: Mux x y -> Stream y
>
> (ie, alternation is none the less forced)
>
> You may need to introduce some (inductive) data to achieve this. If
> you
> always think "always", then you need codata, but if you eventually
> think
> "eventually", you need data.

I came up with:

data Stream a = ConsS a (Stream a) -- CODATA
data Mux a b = Mux (L a b) (R a b) (Mux a b) -- CODATA

data L a b = LL a | LR b (L a b)
data R a b = RL a (R a b) | RR b

lastL :: L a b -> a
lastL (LL x) = x
lastL (LR y l) = lastL l

initL :: L a b -> Stream b -> Stream b
initL (LL x) = id
initL (LR y l) = ConsS y . initL l

lastR :: R a b -> b
lastR (RL x r) = lastR r
lastR (RR y) = y

initR :: R a b -> Stream a -> Stream a
initR (RL x r) = ConsS x . initR r
initR (RR y) = id

demuxL :: Mux a b -> Stream a
demuxL (Mux l r m) = ConsS (lastL l) (initR r (demuxL m))

demuxR :: Mux a b -> Stream b
demuxR (Mux l r m) = initL l (ConsS (lastR r) (demuxR m))
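
A quick sanity check (takeS and the example value ex are hypothetical,
added just to watch the two streams come out):

takeS :: Int -> Stream a -> [a]
takeS 0 _            = []
takeS n (ConsS x xs) = x : takeS (n - 1) xs

-- a Mux repeating the run "one 1, then one 'x'" forever
ex :: Mux Int Char
ex = Mux (LL 1) (RR 'x') ex

-- takeS 3 (demuxL ex) == [1, 1, 1]
-- takeS 3 (demuxR ex) == "xxx"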

Cheers,

Stefan

Stefan Holdermans

Jul 16, 2007, 2:53:48 PM
to haskell-cafe Café
I wrote:

> I came up with [...]

apfelmus' solution is of course more elegant, but I guess it boils
down to the same basic idea.

Conor McBride

Jul 17, 2007, 9:51:40 AM
to Stefan Holdermans, apfelmus, haskell-cafe Café

On 16 Jul 2007, at 19:53, Stefan Holdermans wrote:

> I wrote:
>
>> I came up with [...]
>
> apfelmus' solution is of course more elegant, but I guess it boils
> down to the same basic idea.

Yep, you need inductive data to guarantee that you eventually stop
spitting out one sort of thing and flip over to the other. Here's my
version.

Mux...

> data{-codata-} Mux x y = Mux (Muy x y)

..is defined by mutual induction with...

> data Muy x y = y :- Muy x y | x :~ Mux y x

..which guarantees that we will get more x, despite the odd y
in the way. It's inductively defined, so we can't (y :-) forever;
we must eventually (x :~), then flip over. So,

> demuxL :: Mux x y -> Stream x

> demuxL (Mux muy) =
>   let (x, mux) = skipper muy
>   in  x :> demuxR mux

is productive, given this helper function

> skipper :: Muy x y -> (x, Mux y x)
> skipper (y :- muy) = skipper muy
> skipper (x :~ mux) = (x, mux)

which is just plain structural recursion. Once we've got our x out,
we carry on with a Mux y x, flipping round the obligation to
ensure that we don't stick with x forever. To carry on getting
xs out...

> demuxR :: Mux y x -> Stream x
> demuxR (Mux (x :- muy)) = x :> demuxR (Mux muy)
> demuxR (Mux (y :~ mux)) = demuxL mux

..just pass them as they come, then flip back when the y arrives.

Apfelmus, thanks for

| Or rather, one should require that the observation
|
| observe :: Mux x y -> Stream (Either x y)
|
| respects consL and consR:
|
| observe . consL x = (Left x :>)
| observe . consR y = (Right y :>)

which is a very nice way to clean up my waffly spec.

I rather like this peculiar world of mixed coinductive and inductive
definitions. I haven't really got a feel for it yet, but I'm glad I've
been introduced to its rich sources of amusement and distraction, as
well as its potential utility. (My colleague, Peter Hancock, is the
culprit; also the man with the plan. For more, google

Peter Hancock eating streams

and poke about.)

Cheers for now

Conor

Derek Elkins

Jul 17, 2007, 12:42:58 PM
to Conor McBride, apfelmus, haskell-cafe Café
On Tue, 2007-07-17 at 13:23 +0100, Conor McBride wrote:
> On 16 Jul 2007, at 19:53, Stefan Holdermans wrote:
>
> > I wrote:
> >
> >> I came up with [...]
> >
> > apfelmus' solution is of course more elegant, but I guess it boils
> > down to the same basic idea.
>
> Yep, you need inductive data to guarantee that you eventually stop
> spitting out one sort of thing and flip over to the other. Here's my
> version.
>
> Mux...
>
> > data{-codata-} Mux x y = Mux (Muy x y)
>
> ...is defined by mutual induction with...

>
> > data Muy x y = y :- Muy x y | x :~ Mux y x

As an inductive data type, isn't this empty?

Isaac Dupree

Jul 17, 2007, 5:43:09 PM
to Conor McBride, Haskell Cafe
Conor McBride wrote:
> Hi all
>
> On 9 Jul 2007, at 06:42, Thomas Conway wrote:
>
>> I don't know if you saw the following linked off /.
>>
>> http://www.itwire.com.au/content/view/13339/53/
>
> [..]
>
>> The basic claim appears to be that discrete mathematics is a bad
>> foundation for computer science. I suspect the subscribers to this
>> list would beg to disagree.
>
> It's true that some systems are better characterised as corecursive
> "coprograms", rather than as recursive "programs". This is not a
> popular or well-understood distinction. In my career as an advocate
> for total programming (in some carefully delineated fragment of a
> language) I have many times been gotcha'ed thus: "but an operating
> system is a program which isn't supposed to terminate". No, an
> operating system is supposed to remain responsive. And that's what
> total coprograms do.

I like that distinction.. how is "shutting down" or "executing
(switching to) another arbitrary OS's kernel" modeled (in response to
input, in a total way, of course)?


> Even so, I'd say that it's worth raising awareness of it. Haskell's
> identification of inductive data with coinductive data, however well
> motivated, has allowed people to be lazy. People aren't so likely to
> be thinking "do I mean inductive or coinductive here?", "is this
> function productive?" etc. The usual style is to write as if
> everything is inductive, and if it still works on infinite data, to
> pat ourselves on the back for using Haskell rather than ML. I'm
> certainly guilty of that.
>
> I'd go as far as to suggest that "codata" be made a keyword, at
> present doubling for "data", but with the documentary purpose of
> indicating that a different mode of (co)programming is in order. It
> might also be the basis of better warnings, optimisations, etc.
> Moreover, it becomes a necessary distinction if we ever need
> to identify a total fragment of Haskell. Overkill, perhaps, but
> I often find it's something I want to express.

I find that one of Haskell's faults is that it's too hard to avoid the
"everything is lazy" default even when I want to - partly because I don't
understand inductive vs. coinductive very well (particularly, not in
practice). If "data List a = Nil | Cons a (List a)" is finite and
"codata Stream a = Cons a (Stream a)" is infinite, what is "codata
CoList a = Nil | Cons a (CoList a)"? I need a tutorial on "more-total"
programming in Haskell =)

(leading to a keener awareness of just where the untamed laziness of
Haskell can occasionally make code much more concise and understandable,
and where the laziness actually has a formal structure that one can stay
within)


Non-inductive, finite structures - cyclic directed graphs, usually - are
quite annoying to implement and use in Haskell. (Especially if you want
garbage collection and sharing to work well with them... or if different
nodes should be different types, only allowed in particular arrangements
- I'm pretty sure that dependent types would alleviate that last one,
not sure about the other irritations.)

Isaac

Conor McBride

Jul 19, 2007, 6:04:19 AM
to Derek Elkins, apfelmus, haskell-cafe Café
Hi Derek

On 17 Jul 2007, at 17:42, Derek Elkins wrote:

> On Tue, 2007-07-17 at 13:23 +0100, Conor McBride wrote:
>>
>> Mux...
>>
>>> data{-codata-} Mux x y = Mux (Muy x y)
>>
>> ...is defined by mutual induction with...
>>
>>> data Muy x y = y :- Muy x y | x :~ Mux y x
>
> As an inductive data type, isn't this empty?

Not in the manner which I intended. But it's a good question whether what I
wrote unambiguously represented what I intended. In full-on nuspeak, I meant

Mux = Nu mux. Mu muy. /\x y. Either (y, muy x y) (x, mux y x)

with the inductive definition nested inside the coinductive one, so that
we always eventually twist. Just once a summer, perhaps. I've basically
written Stefan's version, exploiting fixpoints at kind * -> * -> * to
deliver the symmetry by the twisted :~ constructor.

As Apfelmus made rather more explicit, we have a coinductive sequence of
(nonempty inductive sequences which end with a twist). I was hoping to
signify this nesting by defining Mux to pack Muy, but I'm not sure that's
a clear way to achieve the effect. With the Nu-Mu interpretation, the fair
multiplexer is a coprogram:

mux :: Stream x -> Stream y -> Mux x y
mux (x :> xs) (y :> ys) = Mux (x :~ Mux (y :~ mux xs ys))
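
The slightly biased multiplexer, which comes up below and produces two xs
for every y, also fits this type (a sketch):

mux2 :: Stream x -> Stream y -> Mux x y
mux2 (x1 :> (x2 :> xs)) (y :> ys) = Mux (x1 :~ Mux (x2 :- (y :~ mux2 xs ys)))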

Nesting the other way around, we get this rather odd beast

Xum = Mu muy. Nu mux. /\x y. Either (y, muy x y) (x, mux y x)

which isn't empty either. Rather, it only allows finitely many uses of :-
before it settles down to use :~ forever. That is, we eventually always
twist. So this way round allows the fair multiplexer, but not the slightly
biased one which produces two xs for every y.

All of which goes to show that mixed co/programming is quite a sensitive
business, and it's easy to be too casual with notation.

All the best

Conor

apfelmus

Jul 19, 2007, 6:52:25 AM
to haskel...@haskell.org
Conor McBride wrote:

> Derek Elkins wrote:
>> As an inductive data type, isn't this empty?
>
> Not in the manner which I intended. But it's a good question whether what I
> wrote unambiguously represented what I intended. In full-on nuspeak, I
> meant
>
> Mux = Nu mux. Mu muy. /\x y. Either (y, muy x y) (x, mux y x)
>
> with the inductive definition nested inside the coinductive one, so that
> we always eventually twist. Just once a summer, perhaps. I've basically
> written Stefan's version, exploiting fixpoints at kind * -> * -> * to
> deliver the symmetry by the twisted :~ constructor.

I too thought that the Mu fixed point should be empty since it's missing
a base case. But that's not correct:

Mu muy . /\ x y. Either (y, muy x y) (x, mux y x)
~ Mu muy . /\ y . Either (y, muy y) (Constant1, Constant2 y)
~ Mu muy . Either (Constant3 , muy) Constant4

The result is just a list of Constant3s that ends with a Constant4
instead of []. In other words, Constant4 = (x, mux y x) is the base
case for the induction. Very cunning!

> Nesting the other way around, we get this rather odd beast
>
> Xum = Mu muy. Nu mux. /\x y. Either (y, muy x y) (x, mux y x)
>
> which isn't empty either. Rather, it only allows finitely many uses of :-
> before it settles down to use :~ forever. That is, we eventually always
> twist. So this way round allows the fair multiplexer, but not the slightly
> biased one which produces two xs for every y.
>
> All of which goes to show that mixed co/programming is quite a sensitive
> business, and it's easy to be too casual with notation.

Can the order of quantifiers be deduced from the declarations?

codata Mux x y = Mux (Muy x y)


data Muy x y = y :- Muy x y | x :~ Mux y x

Probably, since Xum would be declared as

data Yum x y = Yum (Xum x y)
codata Xum x y = y :- Yum x y | x :~ Xum y x

Regards,
apfelmus
