Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.
Dismiss

Why purely functional languages?

5 views
Skip to first unread message

rincewind

unread,
Nov 21, 2006, 2:53:27 PM11/21/06
to
What are advantages of purely functional languages?

I mean, functions as values, tail recursion optimization and other
attributes of functional languages are great, I can easily appreciate
them. But why prohibit operations with side-effects completely? How can
the language be better by explicitly lacking some fundamental feature?

I don't think any argument is needed to state that some kind of global
state-changing operations is needed, sometimes. Existence of monads in
Haskell kind of proves this. But why invent some complicated concept,
why not just bring in all the traditional stuff (assignments, etc)? How
monads are better?

Mark Tarver

unread,
Nov 21, 2006, 3:25:20 PM11/21/06
to

I don't think you can scientifically try to prove that they are better
- not without committing the prescriptive fallacy (see
www.lambdassociates.org/fallacy.htm). Its certainly onerous sometimes
to have to work within the confines of a purely functional language.

I think that *one* advantage of a purely functional implementation
comes in formal proofs of correctness. Its fairly easy to reduce such
implementations to a series of assertions in a formal language like FOL
and then build a proof. However in the presence of side-effects even
the Law of Self Identity breaks down.

Mark

Mark Tarver

unread,
Nov 21, 2006, 3:31:50 PM11/21/06
to

> However in the presence of side-effects even the Law of Self Identity breaks down.

As an example.

(0-) (define increment
-> (set *counter* (+ 1 (value *counter*))))
increment

(1-) (set *counter* 0)
0

(2-) (= (increment) (increment))
false

Mark

Taylor Venable

unread,
Nov 21, 2006, 3:48:44 PM11/21/06
to
On Tue, 21 Nov 2006 22:53:27 +0300
rincewind <fa...@not.real> wrote:

> I don't think any argument is needed to state that some kind of
> global state-changing operations is needed, sometimes. Existence of
> monads in Haskell kind of proves this. But why invent some
> complicated concept, why not just bring in all the traditional stuff
> (assignments, etc)? How monads are better?

From what I understand about them, the point of using a monad is that
they not only allow side effects, but they also *contain* them. In
other words, they are used to prevent side effects from leaking into
other parts of the system. That allows you to verify formally that the
rest of your program is working correctly, without having to worry about
any side-effects produced by the monad that might otherwise affect it.

--
Taylor Venable
tay...@metasyntax.net
http://www.metasyntax.net/

Message has been deleted

Dan Bensen

unread,
Nov 22, 2006, 12:35:50 AM11/22/06
to
Stefan Ram wrote:
> If we already would know now that this is "better" than other
> languages, other languages could be dropped immediatly and we
> all would be using only functional programming languages.
No, that takes time. Languages never get dropped right away once
they're entrenched in legacy code.

How is Haskell doing in the real world these days? Has it proven to be
commercially useful yet, or is it still mostly academic?

--
Dan
www.prairienet.org/~dsb

Stefan Holdermans

unread,
Nov 22, 2006, 1:59:54 AM11/22/06
to

> rincewind wrote:
> > I mean, functions as values, tail recursion optimization and other
> > attributes of functional languages are great, I can easily appreciate
> > them. But why prohibit operations with side-effects completely? How can
> > the language be better by explicitly lacking some fundamental feature?

Mark replied:>


> I think that *one* advantage of a purely functional implementation
> comes in formal proofs of correctness.

Yes, that's just it. Pure functional languages are GOFER: GOod For Equational Reasoning. Equational reasoning is a very accesible concepts that goes very well with proof techniques like structural induction.

Cheers,

Stefan

Tomasz Zielonka

unread,
Nov 22, 2006, 3:33:10 AM11/22/06
to
Dan Bensen wrote:
> How is Haskell doing in the real world these days? Has it proven to be
> commercially useful yet, or is it still mostly academic?

From real life: Recently I was presenting a talk at Warsaw University
about a part of my job. Later I was discussing a possibility of
writing a thesis on a similar subject with a university assistant.
The funny thing is: I am leaning to move more of my code to Haskell,
and he was convincing me that C++ may be a better choice ;-) The
roles are changing?

And yes, Haskell is useful in my job.

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 22, 2006, 3:39:13 AM11/22/06
to
Dan Bensen schrieb:

> How is Haskell doing in the real world these days? Has it proven to be
> commercially useful yet, or is it still mostly academic?

From what I gather on haskell-cafe and haskell.org, it's accumulating
useful libraries right now. People already are putting together "How to
set up my first Haskell project" recipes and such right now, and this
kind of contribution is considered useful in the community.

In other words, Haskell is approaching critical mass but hasn't exploded
into the mainstream yet.
Unfortunately, my scrying ball is out for maintenance repairs, so I
can't predict when that explosion will happen, or whether it will happen
at all.

Regards,
Jo

Joachim Durchholz

unread,
Nov 22, 2006, 3:52:38 AM11/22/06
to
rincewind schrieb:

> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this.

Actually it's not "monads", it's "monadic I/O" (which, as far as I
understand it, *describes* I/O activities instead of executing them
directly).

> But why invent some complicated concept,

Monads aren't complicated. Saying "foo is a monad" is just a way of
saying "foo provides a way to associatively stick together functions".

(I don't know whether monads are complicated in category theory, but I
don't care too much anyway.)

> why not just bring in all the traditional stuff (assignments, etc)? How
> monads are better?

As others have said, assignments destroy equational reasoning.
Which means that you can't simply substitute a function's definition
wherever the function is called.
(Aliasing problems are one of the more general symptoms if this kind of
illness.)

This makes it far harder to reason about what some piece of code does.
Avoiding assignments and other kinds of state-dependent manipulation in
your internal, non-IO-related code means the code becomes easier to
debug and maintain.
Moving the remaining dangerous stuff into an I/O monad means that (a)
it's all in one place, and you don't have to worry about it elsewhere,
and (b) all the code that isn't directly related to I/O is separated
out, so you have a far smaller amount of dangerous code to tackle at a
time, and any bugs stand out far better.


There's yet another effect. If functions don't have side effects, you
can pass them around as parameters or return them as results, without
having to worry about when and in what contexts they will be called.
This means that writing higher-order functions (functions that call or
return functions) isn't rocket science anymore.
This is relevant because higher-order functions are an immensely
powerful abstraction tool. There have been reports that rewriting code
so that it makes liberal use of higher-order functions shrinks it by a
factor between 5 and 10.

Regards,
Jo

Torben Ægidius Mogensen

unread,
Nov 22, 2006, 3:55:08 AM11/22/06
to
rincewind <fa...@not.real> writes:

> What are advantages of purely functional languages?
>
> I mean, functions as values, tail recursion optimization and other
> attributes of functional languages are great, I can easily appreciate
> them. But why prohibit operations with side-effects completely? How
> can the language be better by explicitly lacking some fundamental
> feature?
>

Absense of side effects means that a piece of code has a very clear
interface that is fully described in its type. Furthermore, all
testing can be done by giving arguments and inspecting results, as you
don't have to worry about chhanges to hidden or unknown state between
calls. This makes it very easy to develop, test and maintain highly
modular code.

> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this. But why invent some complicated concept,
> why not just bring in all the traditional stuff (assignments, etc)?
> How monads are better?

But as you said, you sometimes need to interact with the outside
world, which is stateful. There are various ways functional languages
have allowed this:

- Unrestricted access to side-effects and state, that is limited by
programmer discipline only (SML, O'Caml, Scheme, ...).

- Linear types means that you don't have to worry about
repeatability, as there is no possibility of repeating (Clean).

- Monads encapsulate stateful programming in an explicitly linear
construction (Haskell).

- Types and effects make effects visible in the type and allows
encapsulation of effects to local regions, so purely local use of
effects can be made invisible (FX).

All of the above allows a programmer to write in C-like style if he so
wishes, but (apart from the first), it is visible to the outside world
when this is done, and you have various means of encapsulating local
effects when repeatability is preserved.

Torben

Message has been deleted

Joachim Durchholz

unread,
Nov 22, 2006, 8:30:12 AM11/22/06
to
Stefan Ram schrieb:

>>> But why invent some complicated concept,
>> Monads aren't complicated.
>
> I heard someone saying once something like: »What you have understood
> is easy, what you have not understood is complicated.«

Sure, but monads aren't more complicated than associativity.
Well, they *are* more complicated; they use more symbols and more laws.
But the difference isn't large.

Regards,
Jo

Neo

unread,
Nov 22, 2006, 10:34:52 AM11/22/06
to
> (0-) (define increment
> -> (set *counter* (+ 1 (value *counter*))))
> increment
>
> (1-) (set *counter* 0)
> 0
>
> (2-) (= (increment) (increment))
> false

I am new to FP so my questions are very basic. What language is the
above written in? What is the above doing? It looks like commands are
being submitted to a parser which returns a value. Is line 0 defining a
function called increment which adds 1 to a variable named counter? How
does the function know *counter* is an input to the function? OK, line
1 sets variable counter to 0? Why does counter have asterisks around
it. Is the last line comparing two functions or the output of two
functions? How do you specify a function itself versus the output of a
function? They look the same so why aren't they equal?

Mark Tarver

unread,
Nov 22, 2006, 11:43:12 AM11/22/06
to
Neo wrote:
> > (0-) (define increment
> > -> (set *counter* (+ 1 (value *counter*))))
> > increment
> >
> > (1-) (set *counter* 0)
> > 0
> >
> > (2-) (= (increment) (increment))
> > false
>
> I am new to FP so my questions are very basic. What language is the
> above written in?

The language is Qi - www.lambdassociates.org

> What is the above doing? It looks like commands are
> being submitted to a parser which returns a value. Is line 0 defining a
> function called increment which adds 1 to a variable named counter?

Yes.

> Why does counter have asterisks around it.

Purely a convention which I borrowed from Lisp. They asterix their
global variables so that they stand out.

> Is the last line comparing two functions or the output of two functions?

The output.

> How do you specify a function itself versus the output of a function?

In this example, increment is a zero-place function (takes no inputs)
and I invoke a function by placing it in brackets ( ). The output of
the function is what you get by evaluating the whole expression. You
specify the function itself simply by writing it.

> They look the same so why aren't they equal?

Well this is why purists do not like side-effects. In the expression
(= (increment) (increment)), the first (increment) is evaluated -
adding 1 to the *counter* and returning the value of the *counter*
which is 1. The second (increment) repeats the same operation adding 1
to the *counter* which now has the value 2 and returns the value of the
*counter* which is 2. Since 1 and 2 are not the same the equality test
returns false.

If you use side-effects then the Law of Self-Identity apparently breaks
down. Actually of course it doesn't. What actually breaks down is the
simple transcription of functional programs into sets of equations.
This means that the axiomatic semantics that you are using is not fit
for the job and you have to use something more complicated that appeals
to the concept of state. But formal methods people would prefer to use
the cleaner simpler method if possible and purely functional languages
allow them to do that.

This said, you can actually make Qi pure if you want. Its very bendy
in many ways. The function destroy will whack even system functions and
with half a dozen lines you can have a pure language if you desire - at
the expense of convenience of course.

(0-) (set y 8)
8

(1-) (destroy set)
set

(2-) (set name Mark)
APPLY: undefined function set

And after you've inflicted mayhem, in order to stop tiny hands doing
what you've done.

(3-) (destroy destroy)
destroy

(4-) (save)

Neo

unread,
Nov 22, 2006, 1:31:54 PM11/22/06
to
> The language is Qi - www.lambdassociates.org

Interesting.

> (2-) (= (increment) (increment))
> false

Based on your explanation, since function outputs (not functions
themselves) are being compared and the function increments a global
variable, the evaluation of false seems correct. How does this violate
the "Law of Self-Identity"? I no longer see where a thing is being
compared to itself.

Mark Tarver

unread,
Nov 22, 2006, 2:20:12 PM11/22/06
to

Well thats why I said "If you use side-effects then the Law of
Self-Identity *apparently* breaks down." It depends on how you frame
the Law of Self-Identity . If its framed *de re* it states everything
is identical to itself - which is always true. If its framed *de
dicto* it says that every substitution instance of 'x' in '(= x x)' is
true. Now thats true too, if you are working in a purely functional
language. But it fails with destructive operations like set.

Steve Schafer

unread,
Nov 22, 2006, 2:23:25 PM11/22/06
to
On 22 Nov 2006 10:31:54 -0800, "Neo" <neo5...@hotmail.com> wrote:

>Based on your explanation, since function outputs (not functions
>themselves) are being compared and the function increments a global
>variable, the evaluation of false seems correct. How does this violate
>the "Law of Self-Identity"? I no longer see where a thing is being
>compared to itself.

Rather than "law of self-identity," the underlying principle that is
being discussed here is referential transparency (look it up on
wikipedia). The idea is that when examining an expression, replacing a
function application (i.e., a function call) by the result of that
function application should not change the meaning of the expression.

There are two main ways in which referential transparency may be
violated by impure functions:

1) An impure function may have side effects. For example, if the
function looks like this (expressed here in C-like syntax for the
benefit of functional programming newbies):

int launch_and_return_value(int x)
{
launch-nuclear-missile;
return x;
}

then it's pretty obvious that replacing

launch_and_return_value(42)

with

42

results in a new expression that doesn't have the same meaning as the
original expression.

2) An impure function, applied multiple times to a given set of
arguments, may return different values each time. C's rand() function is
an obvious example of this; by design, it returns a different
pseudo-random value every time you call it.

And so it should also be obvious that you can't replace a call to rand()
with its return value, since it doesn't have a fixed return value.

(This is the type of impurity that is exhibited by the example in the
previous posts; every call to increment() returns a different value.)

Why is referential transparency a big deal? Because it vastly simplifies
the analysis required to answer the question, "What does this program
do?" Both types of "referential opacity" described above involve
statefulness (external state in the first case, and internal state in
the second case). And it is well known that introduction of statefulness
greatly complicates program analysis, because now temporal order becomes
important; not only do you have to know what computations are performed,
you also have to know in what order they're performed. And if you have
multiple disparate parts of the program that are manipulating the same
state, it's very easy to get into a situation where making a change to
the code has an unpredictable ripple effect, because it changes the
order of side effects in difficult-to-analyze ways.

If the behavior of a program is easier to analyze, then the probability
that it does what you intended it to do is increased. That's really what
sets functional programming apart. Things like higher-order and
first-class functions are extremely useful, but I don't consider them to
be as fundamental to the concept of functional programming as
referential transparency.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Benjamin Franksen

unread,
Nov 22, 2006, 2:32:40 PM11/22/06
to

I am astonished that no one has mentioned this yet:

One of the main reasons to make a language purely functional is so that it
can be lazy. Laziness means, basically, that when a function gets called,
the arguments are not necessarily evaluated before the call, but rather
only when and if their value is actually demanded /inside/ the call. For
instance in Haskell the function 'const' is defined by

const x y = x

When const is called, its second argument never gets evaluated because to
determine the result of 'const x y' it is not necessary to look y's value
(as the definition clearly shows).

Laziness and side-effects during evaluation don't go well together. The
order of evaluation in a lazy language is something that is far from
obvious by looking at the code. Thus it would be (almost) impossible to
determine in which order (or whether at all!) the side-effects would
happen.

It is probably a safe bet to say that, had Haskell not been lazy, then
side-effects /would/ have crept into the language, sooner or later.

Ben

Mark Tarver

unread,
Nov 22, 2006, 3:09:08 PM11/22/06
to

rincewind wrote:
> What are advantages of purely functional languages?

Well the list grows as this thread develops. Here is another one:
memoisation.

Memoisation is a technique used by functional programs to store values
in memory to avoid repeated computations (fibonacchi is a classic
example). An expression (f x1 ...xn) is evaluated and the result R
stored in memory as associated with this expression. The next time (f
x1 ...xn) is evaluated, the computer looks up the value that was
previously computed and returns it without repeating the computation.

Memoisation does have its challenges, but it becomes much more awkward
in an impure language where the second evaluation of (f x1 ...xn) may
not necessarily return the same result as did the first. In this case
memoisation will actually cause the program to return an incorrect
result.

Mark

Joachim Durchholz

unread,
Nov 22, 2006, 5:21:57 PM11/22/06
to
Benjamin Franksen schrieb:

> I am astonished that no one has mentioned this yet:
>
> One of the main reasons to make a language purely functional is so that it
> can be lazy.

That's just one flavor of functional programming languages.
Non-lazy FPLs do exist.

> Laziness and side-effects during evaluation don't go well together.

Things get worse if you add laziness into the mix, but side effects and
higher-order functions already don't go very well together, so you don't
need laziness to explain that side effects are a bad thing.

> It is probably a safe bet to say that, had Haskell not been lazy, then
> side-effects /would/ have crept into the language, sooner or later.

I think Simon Peyton-Jones would have prevented that. He's been one of
the most influential people in Haskell's design, which is generally
considered a Good Thing, and he's strongly opposed to side effects (and
rightly so IMVHO).

Regards,
Jo

Benjamin Franksen

unread,
Nov 22, 2006, 7:36:05 PM11/22/06
to
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:

>> One of the main reasons to make a language purely functional is so that
>> it can be lazy.
>
> That's just one flavor of functional programming languages.
> Non-lazy FPLs do exist.

How many of them are /purely/ functional?

>> Laziness and side-effects during evaluation don't go well together.
>
> Things get worse if you add laziness into the mix, but side effects and
> higher-order functions already don't go very well together, so you don't
> need laziness to explain that side effects are a bad thing.
>
>> It is probably a safe bet to say that, had Haskell not been lazy, then
>> side-effects /would/ have crept into the language, sooner or later.
>
> I think Simon Peyton-Jones would have prevented that. He's been one of
> the most influential people in Haskell's design, which is generally
> considered a Good Thing, and he's strongly opposed to side effects (and
> rightly so IMVHO).

Don't get me wrong: I completely agree that purity is a fantastically good
thing, especially since the invention of monads. I also agree that there
are many more technical reasons to prefer a pure language than laziness. It
is just that without the "hair shirt" laziness there's a good chance that
things like monadic I/O would never have been invented in the first place.
This is btw not my own idea but has been expressed by JPJ himself, see this
(very funny and also quite educational) presentation:
http://research.microsoft.com/~simonpj/papers/haskell-retrospective/index.htm

Quote: "So why wear the hair shirt of laziness? [...] The big one: Laziness
keeps you honest - every call-by-value language has given into the siren
call of side-effects. But in Haskell [...this...] just doesn't make
sense... So effects ... are not an option."

Ben

Benjamin Franksen

unread,
Nov 22, 2006, 7:48:59 PM11/22/06
to
Benjamin Franksen wrote:
> It is just that without the "hair shirt" laziness there's a good chance
> that things like monadic I/O would never have been invented in the first
> place. This is btw not my own idea but has been expressed by JPJ himself,

s/JPJ/SPJ/

Gotta get some sleep real soon...

Ben

Neo

unread,
Nov 22, 2006, 9:17:55 PM11/22/06
to
> > I no longer see where a thing is being compared to itself.
>
> Well thats why I said "If you use side-effects then the Law of
> Self-Identity *apparently* breaks down." It depends on how you frame
> the Law of Self-Identity . If its framed *de re* it states everything
> is identical to itself - which is always true. If its framed *de
> dicto* it says that every substitution instance of 'x' in '(= x x)' is
> true. Now thats true too, if you are working in a purely functional
> language. But it fails with destructive operations like set.

Ok, so
(= (incr) (incr)) is short for
(= (incr *cntr*) (incr *cntr*)) which is short for
(= (incr 0) (incr 0)) assuming cntr was initially 0, which becomes
(= 1 (incr 1)) when input1 is processed, which becomes
(= 1 2) when input2 is processed

Neo

unread,
Nov 22, 2006, 11:19:31 PM11/22/06
to
> referential transparency (look it up on wikipedia).

Thanks, though still a bit confusing for me.

Tomasz Zielonka

unread,
Nov 23, 2006, 2:45:13 AM11/23/06
to
Benjamin Franksen wrote:
> I am astonished that no one has mentioned this yet:

You did ;-)

At this point I have to recommend this classic paper:
http://www.math.chalmers.se/~rjmh/Papers/whyfp.html

Best regards
Tomasz

Tomasz Zielonka

unread,
Nov 23, 2006, 2:46:59 AM11/23/06
to
Benjamin Franksen wrote:
> It is probably a safe bet to say that, had Haskell not been lazy, then
> side-effects /would/ have crept into the language, sooner or later.

Yes, it happened and it's called OCaml, or SML.

Laziness is one of the biggest reasons for Haskell's existence, isn't
it?

Best regards
Tomasz

Jon Harrop

unread,
Nov 23, 2006, 3:25:22 AM11/23/06
to
Neo wrote:
>> (0-) (define increment
>> -> (set *counter* (+ 1 (value *counter*))))
>> increment
>>
>> (1-) (set *counter* 0)
>> 0
>>
>> (2-) (= (increment) (increment))
>> false
>
> What is the above doing?

let mutable n = 0

let incr() =
n <- n + 1;
n

incr() = incr()

> It looks like commands are being submitted to a parser which returns a
> value.

When you type it into the interactive mode of a language you get your
answers back immediately:

> let mutable n = 0;;
val n : int

> let incr() =
n <- n + 1;
n;;
val incr : unit -> int

> incr() = incr();;
val it : bool = false

You could write this in C:

int n=0;

int incr() { return ++n; }

> Is line 0 defining a function called increment which adds 1 to a variable
> named counter?

Yes.

> How does the function know *counter* is an input to the function?

Lisp is doing a symbol lookup at run-time, so you can define everything in a
random order to confuse people.

> Is the last line comparing two functions or the output of two
> functions?

In Lisp, "(f)" means "f()".

> How do you specify a function itself versus the output of a
> function?

Use "f" instead of "(f)".

> They look the same so why aren't they equal?

Every time "incr" is called it returns a different value. So you get:

incr() = incr()
1 = incr()
1 = 2
false

assuming it is evaluated left-to-right, which it often isn't.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists

Tomasz Zielonka

unread,
Nov 23, 2006, 4:10:28 AM11/23/06
to
Benjamin Franksen wrote:
> Don't get me wrong: I completely agree that purity is a fantastically good
> thing, especially since the invention of monads. I also agree that there
> are many more technical reasons to prefer a pure language than laziness.

I think a pure language without laziness would be too constrained. Maybe
you mean no laziness by default, but with the ability to write parts
using laziness?

> It is just that without the "hair shirt" laziness there's a good
> chance that things like monadic I/O would never have been invented in
> the first place.

Laziness is not a hair shirt, it a feature! :-)

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 23, 2006, 6:48:58 AM11/23/06
to
Tomasz Zielonka schrieb:

> Laziness is one of the biggest reasons for Haskell's existence, isn't
> it?

AFAIK Haskell is the common successor of a bunch of lazily-evaluating
languages.
It came into existence when a bunch of people researching on laziness
got together and integrated those features that had proven themselves.

What I find remarkable is that
* all the predecessor languages are essentially extinct
(not used, no more developed), and
* no alternate language has sprung up.
Seems like the Haskell team did *some* things right.
Or that the design space for lazy languages is rather constrained.
Or that designing a good lazy language is so difficult that nobody has
come up with a good alternative yet.

Regards,
Jo

Joachim Durchholz

unread,
Nov 23, 2006, 7:10:38 AM11/23/06
to
Benjamin Franksen schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> One of the main reasons to make a language purely functional is so that
>>> it can be lazy.
>> That's just one flavor of functional programming languages.
>> Non-lazy FPLs do exist.
>
> How many of them are /purely/ functional?

Purity wasn't part of the OP's question.

One could argue that purity is essential for achieving the full
potential of functional languages, but I doubt that. Effect systems and
mode systems seem to be quite good at isolating side effects.

I do agree that Erlang, the *ML and Lisp languages dropped the ball by
allowing functions to have side effects without even marking them as
"impure" or something.
This doesn't make these languages useless or nonfunctional; you still
get most of the benefits of functional programming.
In fact the most tangible comparisons between functional and
non-functional programming style have been provided by the Erlang
camp... and Erlang is firmly based on effects for
programming-in-the-large, it's the small scale at wich they do
higher-order functions and side-effect-free programming (and the latter
is just by conventions, Erlang does have variables at the process and
system level).

Regards,
Jo

ross...@ps.uni-sb.de

unread,
Nov 23, 2006, 7:46:10 AM11/23/06
to
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
> > Joachim Durchholz wrote:
> >> Benjamin Franksen schrieb:
> >>> One of the main reasons to make a language purely functional is so that
> >>> it can be lazy.
> >> That's just one flavor of functional programming languages.
> >> Non-lazy FPLs do exist.
> >
> > How many of them are /purely/ functional?
>
> Purity wasn't part of the OP's question.

Funny statement you make there, particularly considering the subject
line... ;-)

- Andreas

Jon Harrop

unread,
Nov 23, 2006, 8:12:11 AM11/23/06
to
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> How many of them are /purely/ functional?
>
> Purity wasn't part of the OP's question.

But it's in the subject. :-)

> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
> allowing functions to have side effects without even marking them as
> "impure" or something.

I'm not sure that could work. I often wrap impure internals with a pure API.
Would that be marked as impure?

Philippa Cowderoy

unread,
Nov 23, 2006, 9:18:04 AM11/23/06
to
On Thu, 23 Nov 2006, Jon Harrop wrote:

> Joachim Durchholz wrote:
> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
> > allowing functions to have side effects without even marking them as
> > "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?
>

That depends. It's easily avoidable in Haskell - the lazy version of the
ST monad is particularly neat in that regard.

--
fli...@flippac.org

Ivanova is always right.
I will listen to Ivanova.
I will not ignore Ivanova's recomendations.
Ivanova is God.
And, if this ever happens again, Ivanova will personally rip your lungs out!

Jon Harrop

unread,
Nov 23, 2006, 11:03:13 AM11/23/06
to
Philippa Cowderoy wrote:
> On Thu, 23 Nov 2006, Jon Harrop wrote:
>> Joachim Durchholz wrote:
>> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> > allowing functions to have side effects without even marking them as
>> > "impure" or something.
>>
>> I'm not sure that could work. I often wrap impure internals with a pure
>> API. Would that be marked as impure?
>
> That depends. It's easily avoidable in Haskell - the lazy version of the
> ST monad is particularly neat in that regard.

Can you elaborate? Are you saying that monadic programming can make pure and
impure programming indistinguishable?

Joachim Durchholz

unread,
Nov 23, 2006, 1:33:04 PM11/23/06
to
ross...@ps.uni-sb.de schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> Joachim Durchholz wrote:
>>>> Benjamin Franksen schrieb:
>>>>> One of the main reasons to make a language purely functional is so that
>>>>> it can be lazy.
>>>> That's just one flavor of functional programming languages.
>>>> Non-lazy FPLs do exist.
>>> How many of them are /purely/ functional?
>> Purity wasn't part of the OP's question.
>
> Funny statement you make there, particularly considering the subject
> line... ;-)

Ooops... you're right.

Joachim Durchholz

unread,
Nov 23, 2006, 1:42:39 PM11/23/06
to
Jon Harrop schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> How many of them are /purely/ functional?
>> Purity wasn't part of the OP's question.
>
> But it's in the subject. :-)
>
>> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> allowing functions to have side effects without even marking them as
>> "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?

Well, I'm not sure how concrete languages handle the situation.

With respect to external interfaces, any compiler would probably have to
depend on programmer-provided declarations. C interfaces don't allow the
compiler to infer purity, so there's room for blatant lies (or errors).
(I know this is an issue for Haskell.)

Within a language, it would be desirable to be able to write impure code
and prove to the compiler that the interface is still pure.
I'm not aware of any language that does this, though. I even don't have
the fuzziest idea how to prove that an interface is pure in that
situation...

Regards,
Jo

Benjamin Franksen

unread,
Nov 23, 2006, 2:01:01 PM11/23/06
to
Jon Harrop wrote:
> Philippa Cowderoy wrote:
>> On Thu, 23 Nov 2006, Jon Harrop wrote:
>>> Joachim Durchholz wrote:
>>> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>>> > allowing functions to have side effects without even marking them as
>>> > "impure" or something.
>>>
>>> I'm not sure that could work. I often wrap impure internals with a pure
>>> API. Would that be marked as impure?
>>
>> That depends. It's easily avoidable in Haskell - the lazy version of the
>> ST monad is particularly neat in that regard.
>
> Can you elaborate? Are you saying that monadic programming can make pure
> and impure programming indistinguishable?

Well, state /is/ purely functional, if you pass in the state and get a new
state out as (part of) the result. Monads can be used to hide the passing
in and out (commonly called "plumbing"), effectively enabling you to use
the state as if it were an imperative mutable variable, but this stuff
(state monad) still remains pure. The ST monad goes one step further: it
allows the compiler to optimise the computation so that it /really/
destructively updates memory internally. The type system makes sure that
the state can only be used in a strictly single threaded manner, i.e. no
separate copies of the state can evolve independently. Thus, if you runST
such a computation you again get a pure function.

Cheers
Ben

Benjamin Franksen

unread,
Nov 23, 2006, 2:17:51 PM11/23/06
to
Tomasz Zielonka wrote:
> Benjamin Franksen wrote:
>> Don't get me wrong: I completely agree that purity is a fantastically
>> good thing, especially since the invention of monads. I also agree that
>> there are many more technical reasons to prefer a pure language than
>> laziness.
>
> I think a pure language without laziness would be too constrained. Maybe
> you mean no laziness by default, but with the ability to write parts
> using laziness?

From what I have heard here or there, there are certain difficulties with
adding laziness to an otherwise call-by-value language, to the effect that
add-on laziness isn't quite as useful as the 'real deal'. I may be wrong
though; maybe some experts can comment on that?

>> It is just that without the "hair shirt" laziness there's a good
>> chance that things like monadic I/O would never have been invented in
>> the first place.
>
> Laziness is not a hair shirt, it a feature! :-)

Well, of course it is a feature, and one I dearly love. However, I can
vividly imagine how lazy programmers sometimes thought of it as a hair
shirt way back when they didn't have monads for stuff like IO. Ever tried
to write a web server using 'interact'? =:o

Cheers,
Ben

Neo

unread,
Nov 23, 2006, 3:37:44 PM11/23/06
to
> Dr Jon D Harrop, Flying Frog Consultancy

Thanks for the nice explanation, especially its equivalent in terms of
C (wouldn't have expected that from a consultant who calls himself a
Flying Frog!)

I wonder if "(= (incr) (incr))" is valid.
Might it really should be (= (incr (incr x))).
And allowing x to be global seem to cause problems. The output should
go straight into the next function, right? Seems that keeping track of
an intermediate output is undesirable in fp's scheme of things. Then I
wonder if = is even a valid function because it just has one input and
no second item to compare with in above simplification. I was reading
thru the rest of the post and didn't quite understand a monad. Is there
an equivalent thing in C?

Jon Harrop

unread,
Nov 23, 2006, 6:41:33 PM11/23/06
to
Neo wrote:
>> Dr Jon D Harrop, Flying Frog Consultancy
>
> Thanks for the nice explanation, especially its equivalent in terms of
> C (wouldn't have expected that from a consultant who calls himself a
> Flying Frog!)

We're full of surprises. :-)

> I wonder if "(= (incr) (incr))" is valid.

Consider "++i == ++i" in C.

> Might it really should be (= (incr (incr x))).
> And allowing x to be global seem to cause problems. The output should
> go straight into the next function, right? Seems that keeping track of
> an intermediate output is undesirable in fp's scheme of things.

It is always undesirable but in FP you have alternatives.

> Then I
> wonder if = is even a valid function because it just has one input and
> no second item to compare with in above simplification. I was reading
> thru the rest of the post and didn't quite understand a monad. Is there
> an equivalent thing in C?

There is not an equivalent thing in C, but you did just describe a monad (to
some extent) when you said that the incr function should return the next
incr function rather than doing things behind the scenes.

In an FP you can make incr return the next incr function along with its
result. For example, in OCaml:

$ ocaml -rectypes
Objective Caml version 3.09.2

# let rec incr n = n, fun () -> incr (n+1);;
val incr : int -> (int * (unit -> 'a) as 'a) = <fun>

Now you can create zero and a purely functional stream of the remaining
numbers (called "rest"):

# let zero, rest = incr 0;;
val zero : int = 0
val rest : unit -> int * 'a as 'a = <fun>

Now you can apply "rest" to get the next number and function:

# let one, rest = rest();;
val one : int = 1
val rest : unit -> int * 'a as 'a = <fun>

and so on.

--

Dr Jon D Harrop, Flying Frog Consultancy

Tomasz Zielonka

unread,
Nov 24, 2006, 12:47:16 AM11/24/06
to
Joachim Durchholz wrote:
> Within a language, it would be desirable to be able to write impure code
> and prove to the compiler that the interface is still pure.
> I'm not aware of any language that does this, though. I even don't have
> the fuzziest idea how to prove that an interface is pure in that
> situation...

Haskell's ST monad gives that for code using imperative arrays and
variables.

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 24, 2006, 3:16:49 AM11/24/06
to
Tomasz Zielonka schrieb:

Yes, but building the ST monad is something for people of Elated Guru state.

I think mere Library Designers should be able to write impure data types
with a pure interface (e.g. splay trees).

And Application Programmers should be able to do that if really needed,
too, though they should be able to get away with handwaving when it
comes to proving that the interfaces are pure (provided that the
compiler can clearly blame the developer when it turns out that the
interface wasn't so pure after all).

Of course, all of this is possible only if there's a well-known and
reasonably simple set of proof techniques for purity.
Anything reasonable in that direction?

Regards,
Jo

Torben Ægidius Mogensen

unread,
Nov 24, 2006, 4:10:46 AM11/24/06
to
Jon Harrop <j...@ffconsultancy.com> writes:

> Joachim Durchholz wrote:
>
>> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> allowing functions to have side effects without even marking them as
>> "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?

That depends. Type-and-effect systems can hide purely local side
effects. For example, if you have a function that takes an array and
returns a sorted version of that array (without overwriting the
original), the function might use imperative quicksort (on a copy) to
do the sorting, but the function behaves as if it was purely
functional (identical arguments give identical results), so the sort
function can be given a pure type.

On the other hand, if you have a function that retains state between
calls (such as a gensym function), it will be given an impure type.
Same with i/o functions.

If you combine with linearity, the sorting function need not even make
a copy of the argument if the type of the sort function indicates that
the argument must be unshared. This just means that you must make an
explicit copy when calling the sort function with a potentially shared
argument.

Linear types can give you pure types to functions that do retain state
between calls, but you need to return a token that has a linear type
and give it as argument to the next call. For example, when you open
a file, you return a handle that must be given as argument to a call
that reads from the file. The handle has a linear type, so it can be
passed to one call only, but that call can return the handle with its
result, so it can be passed to another call. Essentially, the linear
type forces the calls to be threaded sequentially. The type will even
enforce that you close the file, as a linearly typed value _must_ be
used, and the only function that takes a file handle and does not
return it is the close function.

Torben

Philippa Cowderoy

unread,
Nov 24, 2006, 5:03:02 AM11/24/06
to
On Fri, 24 Nov 2006, Joachim Durchholz wrote:

> Tomasz Zielonka schrieb:
> > Joachim Durchholz wrote:
> > > Within a language, it would be desirable to be able to write impure code
> > > and prove to the compiler that the interface is still pure.
> > > I'm not aware of any language that does this, though. I even don't have
> > > the fuzziest idea how to prove that an interface is pure in that
> > > situation...
> >
> > Haskell's ST monad gives that for code using imperative arrays and
> > variables.
>
> Yes, but building the ST monad is something for people of Elated Guru state.
>

Doesn't matter, it's been built already and ships with the compiler.
Unless you mean writing code in it, which is easy - it's pretty much like
working in a subset of IO that only does references.

--
fli...@flippac.org

'In Ankh-Morpork even the shit have a street to itself...
Truly this is a land of opportunity.' - Detritus, Men at Arms

Joachim Durchholz

unread,
Nov 24, 2006, 5:38:23 AM11/24/06
to
Philippa Cowderoy schrieb:

> On Fri, 24 Nov 2006, Joachim Durchholz wrote:
>> Yes, but building the ST monad is something for people of Elated Guru state.
>
> Doesn't matter, it's been built already and ships with the compiler.
> Unless you mean writing code in it, which is easy - it's pretty much like
> working in a subset of IO that only does references.

Sorry, no, I was unclear - I meant "writing *something like* the ST
monad is something for gurus".

Regards,
Jo

Simon Richard Clarkstone

unread,
Nov 24, 2006, 3:23:57 PM11/24/06
to
Jon Harrop wrote:

> Neo wrote:
>>I wonder if "(= (incr) (incr))" is valid.
>
> Consider "++i == ++i" in C.

Actually, that *isn't* valid in C AFAIK. It involves using a variable
in an expression when it is modified elsewhere in the expression, and
that invokes undefined behaviour. The compiler is allowed to do the
obvious thing, or the other obvious thing, or to emit nonsensical object
code, or just to crash horribly if it wants.

The comp.lang.c FAQ can tell you more, but that's not really on-topic in
this group.

--
Simon Richard Clarkstone: s.r.cl?rkst?n?@durham.ac.uk/s?m?n.cl?rkst?n?@
hotmail.com ### "I have a spelling chequer / it came with my PC /
it plainly marks for my revue / Mistake's I cannot sea" ...
by: John Brophy (at: http://www.cfwf.ca/farmj/fjjun96/)

Benjamin Franksen

unread,
Nov 24, 2006, 5:08:40 PM11/24/06
to

What sort of thing /like/ ST monad are you thinking of?

Ben

Jon Harrop

unread,
Nov 24, 2006, 5:24:01 PM11/24/06
to
Simon Richard Clarkstone wrote:
> Jon Harrop wrote:
>> Neo wrote:
>>>I wonder if "(= (incr) (incr))" is valid.
>>
>> Consider "++i == ++i" in C.
>
> Actually, that *isn't* valid in C AFAIK...

Exactly.

ross...@ps.uni-sb.de

unread,
Nov 25, 2006, 3:53:23 AM11/25/06
to
Simon Richard Clarkstone write:
> Jon Harrop wrote:
> > Neo wrote:
> >>I wonder if "(= (incr) (incr))" is valid.
> >
> > Consider "++i == ++i" in C.
>
> Actually, that *isn't* valid in C AFAIK.

It is valid C, in the sense that it is legal code. It just has
"undefined behaviour" - like so many things in C.

- Andreas

Joachim Durchholz

unread,
Nov 25, 2006, 7:04:56 AM11/25/06
to
Benjamin Franksen schrieb:

> What sort of thing /like/ ST monad are you thinking of?

None specifically. I just wanted to cover the whole spectrum, from stuff
that is (and should remain) for gurus, down to everyday stuff that the
average application programmer does.

I'm not 100% sure that anything beyond ST is actually needed.
OTOH while designing monads is strictly guru stuff, the number of people
exploring that design space will be limited and the probability that
something useful remains hidden in it is higher.

Regards,
Jo

Benjamin Franksen

unread,
Nov 25, 2006, 7:31:06 PM11/25/06
to
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> What sort of thing /like/ ST monad are you thinking of?
>
> None specifically. I just wanted to cover the whole spectrum, from stuff
> that is (and should remain) for gurus, down to everyday stuff that the
> average application programmer does.

Ok.

> I'm not 100% sure that anything beyond ST is actually needed.
>
> OTOH while designing monads is strictly guru stuff, the number of people
> exploring that design space will be limited and the probability that
> something useful remains hidden in it is higher.

I'm not so sure it must be (or remain) guru stuff. Take a look at this:

http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html

Quote: "All of these [tutorials] introduce monads as something esoteric in
need of explanation. But what I want to argue is that they aren't esoteric
at all. In fact, faced with various problems in functional programming you
would have been led, inexorably, to certain solutions, all of which are
examples of monads. In fact, I hope to get you to invent them now if you
haven't already. It's then a small step to notice that all of these
solutions are in fact the same solution in disguise."

Cheers,
Ben

Tomasz Zielonka

unread,
Nov 26, 2006, 1:06:02 AM11/26/06
to
Benjamin Franksen wrote:
> I'm not so sure it must be (or remain) guru stuff. Take a look at this:
>
> http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html
>
> Quote: "All of these [tutorials] introduce monads as something esoteric in
> need of explanation. But what I want to argue is that they aren't esoteric
> at all. In fact, faced with various problems in functional programming you
> would have been led, inexorably, to certain solutions, all of which are
> examples of monads. In fact, I hope to get you to invent them now if you
> haven't already. It's then a small step to notice that all of these
> solutions are in fact the same solution in disguise."

I have a good example for this: this wednesday I was at the talk given
by people from a big polish IT company. They described how functional
programming helps them to master the complexity of their business
intelligence code. Then they went to describe some problems with FP,
like interaction with the outside world or, in their case, influencing
the behavior of the whole application, and show their solution to this
problem. It was basically a monad with Writer and State capabilities,
but not presented as such.

Best regards
Tomasz

Daniel C. Wang

unread,
Nov 26, 2006, 3:09:09 AM11/26/06
to rincewind
I'm glad this is now freely available it is an excellent book.

http://dspace.library.cornell.edu/handle/1813/83

Direct link..
http://dspace.library.cornell.edu/retrieve/300/Art_of_Discovery_Oliver2.pdf

Page 39 jumps to mind when I think about your question. There really are
no demonstrated advantages to purely functional languages, other than
the ability to publish reams and reams of papers about them.

Of course all those papers are not useless, many interesting ideas came
out of them and by that measure Haskell is a success. If you look at C#
3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
use. Purity was not one of those ideas, because it really buys you nothing.


rincewind wrote:
> What are advantages of purely functional languages?
>
> I mean, functions as values, tail recursion optimization and other
> attributes of functional languages are great, I can easily appreciate
> them. But why prohibit operations with side-effects completely? How can
> the language be better by explicitly lacking some fundamental feature?
>
> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this. But why invent some complicated concept,
> why not just bring in all the traditional stuff (assignments, etc)? How
> monads are better?

Mark Tarver

unread,
Nov 26, 2006, 5:37:34 AM11/26/06
to

> Purity ...... really buys you nothing.

> > What are advantages of purely functional languages?

I think this is roughly right. The advantages cited in this thread -
memoisation and formal methods - don't really compensate very well for
what you lose. The fact is that very very few functional programs are
formally verified anyway. So essentially you're giving up a lot for
the right to exercise a facility which you're not likely to want to
use.

Its interesting that one of the most powerful and succcessful formal
verification systems is the Boyer-Moore and this was implemented in and
for Lisp (which is as impure as you like). The Boyer-Moore people just
simply chose to work in the pure functional subset of Lisp.

Mark

Joachim Durchholz

unread,
Nov 26, 2006, 7:10:11 AM11/26/06
to
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
>> OTOH while designing monads is strictly guru stuff, the number of people
>> exploring that design space will be limited and the probability that
>> something useful remains hidden in it is higher.
>
> I'm not so sure it must be (or remain) guru stuff. Take a look at this:
>
> http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html

Just the kind of introduction that I'be been trying to locate (or even
write myself).

> Quote: "All of these [tutorials] introduce monads as something esoteric in
> need of explanation. But what I want to argue is that they aren't esoteric
> at all. In fact, faced with various problems in functional programming you
> would have been led, inexorably, to certain solutions, all of which are
> examples of monads. In fact, I hope to get you to invent them now if you
> haven't already. It's then a small step to notice that all of these
> solutions are in fact the same solution in disguise."

Yes, that has been my thinking almost from the start that I encountered
them.

Let me qualify my statement a bit:
Designing something that happens to be a monad is a relatively normal
activity.
However, making sure it is a monad (i.e. verifying that the monad laws
hold, without exception, including in the presence of nonterminating
parameters) seems to be guru-ish stuff to me. (Unless and until proof
exploration/checking systems become commonplace tools for the
programmer, that is. And that's not very far away, actually: type
systems are an example of a proof checker.)

Regards,
Jo

Joachim Durchholz

unread,
Nov 26, 2006, 7:19:14 AM11/26/06
to
Mark Tarver schrieb:

> The fact is that very very few functional programs are
> formally verified anyway. So essentially you're giving up a lot for
> the right to exercise a facility which you're not likely to want to
> use.

Formal proofs are just a tiny part of all the things what purity helps
with. And the majority of these things is firmly in the realm of
day-to-day design&programming.

Regards,
Jo

Benjamin Franksen

unread,
Nov 26, 2006, 10:41:43 AM11/26/06
to
Joachim Durchholz wrote:
> Designing something that happens to be a monad is a relatively normal
> activity.
> However, making sure it is a monad (i.e. verifying that the monad laws
> hold, without exception, including in the presence of nonterminating
> parameters) seems to be guru-ish stuff to me.

Hm, yes, right. I haven't really been thinking about the proof obligation.
Maybe "guru" is still slightly exaggerated. I mean, writing such proofs is
surely not your joe average programmer's daily task. But if you forget (for
a moment) bottoms and such and just do the 'fast and loose' equational
reasoning, it is really not that hard. OTOH, I wonder how many widely used
monads wouldn't actually fail any 'hard test'. For instance, 'seq' seems to
break monad laws, see e.g. this discussion:

http://www.mail-archive.com/haskell%40haskell.org/msg10867.html

> (Unless and until proof
> exploration/checking systems become commonplace tools for the
> programmer, that is. And that's not very far away, actually: type
> systems are an example of a proof checker.)

Yup (looking forward to the first optimizing Epigram2 compiler... ;-)

Ben

Benjamin Franksen

unread,
Nov 26, 2006, 11:16:16 AM11/26/06
to
Daniel C. Wang wrote:
> Of course all those papers are not useless, many interesting ideas came
> out of them and by that measure Haskell is a success. If you look at C#
> 3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
> use. Purity was not one of those ideas, because it really buys you
> nothing.

I beg to differ. There are enormous practical advantages to purity. I
already mentioned laziness. Another area where it is helpful is
concurrency, especially when combined with stuff like software
transactional memory (STM).

More generally, pure languages like Haskell make it easier to reason about
complex programs, since a large and clearly identifiable part of the code
(that which is not in the IO monad) follows much a simpler semantics. This
point may be hard to demonstrate in an objective way. Still, my preference
for pure languages is quite strong and I don't think this is merely because
of their theoretical beauty, but at least to the same degree due to the
ease with which I can understand and write complex algorithms in them.

Ben

Daniel C. Wang

unread,
Nov 26, 2006, 12:45:14 PM11/26/06
to Benjamin Franksen
Benjamin Franksen wrote:
> Daniel C. Wang wrote:
>> Of course all those papers are not useless, many interesting ideas came
>> out of them and by that measure Haskell is a success. If you look at C#
>> 3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
>> use. Purity was not one of those ideas, because it really buys you
>> nothing.
>
> I beg to differ. There are enormous practical advantages to purity. I
> already mentioned laziness. Another area where it is helpful is
> concurrency, especially when combined with stuff like software
> transactional memory (STM).

C# 3.0. and LINQ is built on top of the idea of a lazy list. They just
call them Enumerators and don't bother with the purity.

STM is some pie in the sky research idea that's generated yet more
papers but really hasn't solved any real problems. They've just made it
much easier to write slow concurrent programs. Much like garbage
collectors don't really solve the resource management problem. They just
shuffle the problems to different part of the system where they can be
solved in more centralized ways.

> More generally, pure languages like Haskell make it easier to reason about
> complex programs, since a large and clearly identifiable part of the code
> (that which is not in the IO monad) follows much a simpler semantics. This
> point may be hard to demonstrate in an objective way.

If you can't demonstrate this in an objective way, you're not doing real
science. If I look at the evidence and the experience of the proof
carrying code community, I'd say this is statement is just false. I mean
really, when you can tractably prove things about raw machine code I
don't see how purity would have made things significantly simpler. Also,
any high-level proof is not what you really care about, the proofs that
matter are what happen at the machine level.


> Still, my preference
> for pure languages is quite strong and I don't think this is merely because
> of their theoretical beauty, but at least to the same degree due to the
> ease with which I can understand and write complex algorithms in them.

There are many advanced features in Haskell that make it a nice
language. Enforced purity would not be on my list. I mean nothing is
preventing you from writing your algorithms in a pure subset of ML,
doing the nice proofs in your head and moving on.

BTW It's my opinion that Haskell is not purely-functional because it
allows non-termination as an implicit effect.

The strongly-typed lambda calculus would be an example of a pure
functional language. Because everything is strongly normalizing you can
use it as a logic. Because Haskell admits non-termination in it's
"purely-functional" subset you can't use that subset as a logic.
non-termination should be wrapped up in a monad.

There are many good reasons to like purity, but the way it is
implemented and designed into Haskell has bought us really nothing. So I
should qualify my statement by saying "purely-functional" programing
languages like Haskell really provide us no advantages with their sort
of "purity".

Philippa Cowderoy

unread,
Nov 26, 2006, 1:39:56 PM11/26/06
to
On Sun, 26 Nov 2006, Daniel C. Wang wrote:

> There are many good reasons to like purity, but the way it is implemented and
> designed into Haskell has bought us really nothing.

It may have bought you nothing - many Haskell coders (myself included)
find value in it!

--
fli...@flippac.org

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.

Daniel C. Wang

unread,
Nov 26, 2006, 1:48:54 PM11/26/06
to Philippa Cowderoy
Philippa Cowderoy wrote:
> On Sun, 26 Nov 2006, Daniel C. Wang wrote:
>
>> There are many good reasons to like purity, but the way it is implemented and
>> designed into Haskell has bought us really nothing.
>
> It may have bought you nothing - many Haskell coders (myself included)
> find value in it!


You, guys probably could have gotten by in OCaml and not really noticed.
Well, maybe you would have noticed your programs ran faster..... and
could be debugged...

There are many things in Haskell, purity is just one, and its my
experience that I've never missed it in the other languages I've used.


If you can show me a concert example where purity was key... (not just
laziness..) that might be more illustrative.

Joachim Durchholz

unread,
Nov 26, 2006, 3:06:05 PM11/26/06
to
Daniel C. Wang schrieb:

> Much like garbage
> collectors don't really solve the resource management problem. They just
> shuffle the problems to different part of the system where they can be
> solved in more centralized ways.

Sorry, you're firmly in the realm of sweeping overgeneralizations here.

Please take the time to bolster your claims with arguments before taking
them as analogies for other dismissive statements.

>> More generally, pure languages like Haskell make it easier to
>> reason about complex programs, since a large and clearly
>> identifiable part of the code (that which is not in the IO monad)
>> follows much a simpler semantics. This point may be hard to
>> demonstrate in an objective way.
>
> If you can't demonstrate this in an objective way, you're not doing
> real science.

The original point is easy to demonstrate.

> If I look at the evidence and the experience of the proof
> carrying code community, I'd say this is statement is just false. I mean
> really, when you can tractably prove things about raw machine code I
> don't see how purity would have made things significantly simpler.

Purity is for reasoning about properties of source code, no more, no
less. Whether it's still relevant at machine level is an entirely
different issue, and has nothing to do with the question whether purity
helps programmers.
Besides, you're ignoring the advantages of purity when reasoning
informally. Purity eliminates a whole host of caveats like sequencing
and aliasing, each of which make reasoning more difficult, more
burdensome, and/or more error-prone. That's independent of whether that
reasoning is formal, semi-formal, or informal.

I think PCC points just the other direction: if having pure stuff were
so irrelevant, then why are the assertions that PCC proves all declarative?

> Also, any high-level proof is not what you really care about, the
> proofs that matter are what happen at the machine level.

At least for me, arguing about programs at the source level is what I do
about 90% of my programming time.
I do that when hunting for bugs, when changing the code, or about any
other activity that involves an editor and source code (and a lot of
whiteboard activity as well).
I do care about machine-level proofs, as a means to validate compilers,
or code that got loaded from an untrusted source. But that's specialist
concerns; in my day-to-day job, I don't write compilers or
dynamically-loading systems.

Regards,
Jo

Tomasz Zielonka

unread,
Nov 26, 2006, 3:38:23 PM11/26/06
to
Daniel C. Wang wrote:
> You, guys probably could have gotten by in OCaml and not really noticed.

I used OCaml before Haskell. I liked it at the beginning, but as time
went, I was liking it less and less. One of the biggest reasons was that
due to strictness, I was penalized when I used some obvious functional
code, like composing many list transformations, because every
intermediate list was fully constructed before passing it to the next
stage. I could use lazy lists in OCaml, but it was better to switch to
Haskell at this point.

> Well, maybe you would have noticed your programs ran faster.....

Efficiency is not everything. Ocaml's focus on efficiency was one of
things that drove me away from it. Some obvious functions, like string
implode / explode, were not added to the standard libraries, because
developers of Ocaml feared that someone could use them and get
inefficient programs. I don't know about you, but I don't like being
treated like a child.

> and could be debugged...

That's just silly. I am not sure I should even bother replying you.

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 26, 2006, 4:22:29 PM11/26/06
to
Benjamin Franksen schrieb:

> Joachim Durchholz wrote:
>> Designing something that happens to be a monad is a relatively normal
>> activity.
>> However, making sure it is a monad (i.e. verifying that the monad laws
>> hold, without exception, including in the presence of nonterminating
>> parameters) seems to be guru-ish stuff to me.
>
> Hm, yes, right. I haven't really been thinking about the proof obligation.
> Maybe "guru" is still slightly exaggerated. I mean, writing such proofs is
> surely not your joe average programmer's daily task. But if you forget (for
> a moment) bottoms and such and just do the 'fast and loose' equational
> reasoning, it is really not that hard.

Agreed.

> OTOH, I wonder how many widely used monads wouldn't actually fail any
> 'hard test'. For instance, 'seq' seems to break monad laws,

Yes, that's why I included "nonterminating" in the list of things to
prove :-)

Regards,
Jo

Benjamin Franksen

unread,
Nov 26, 2006, 4:59:26 PM11/26/06
to
Daniel C. Wang wrote:
> Benjamin Franksen wrote:
>> Daniel C. Wang wrote:
>>> Of course all those papers are not useless, many interesting ideas came
>>> out of them and by that measure Haskell is a success. If you look at C#
>>> 3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
>>> use. Purity was not one of those ideas, because it really buys you
>>> nothing.
>>
>> I beg to differ. There are enormous practical advantages to purity. I
>> already mentioned laziness. Another area where it is helpful is
>> concurrency, especially when combined with stuff like software
>> transactional memory (STM).
>
> C# 3.0. and LINQ is built on top of the idea of a lazy list. They just
> call them Enumerators and don't bother with the purity.

I have to admit that I know neither C# 3.0 nor LINQ, so I can't comment on
them and their enumerators directly. Anyway laziness is about more than
streams aka lazy lists. There are many libraries for which laziness is an
important ingredient. To mention just two random examples off the top of my
head: 1. self-optimizing lexer combinators
(citeseer.ist.psu.edu/chakravarty99lazy.html) 2. frisby, a library of
packrat parser combinators (http://repetae.net/computer/frisby).

> STM is some pie in the sky research idea that's generated yet more
> papers but really hasn't solved any real problems. They've just made it
> much easier to write slow concurrent programs.

It is available as a practically usable library in ghc since about version
6.4. And AFAIK it enables extremely fast concurrent programs. And it
is /not/ accidental that it was Haskell which has brought STM from "pie in
the sky research" to useful library status. You can't rollback an IO
action, can you? So you need a way to make sure that your STM transaction
can't do IO and that's quite easy to do in Haskell, due to its purity and
its advanced type system.

Oh, wait there is yet another very modern feat that really works (safely)
only in pure languages: asynchronous exceptions. Ever tried to use signals
and concurrent threads in the same C program? It's a nightmare. Once again,
Haskell was the first language I am aware of in which this combination
works in practice.

> Much like garbage
> collectors don't really solve the resource management problem. They just
> shuffle the problems to different part of the system where they can be
> solved in more centralized ways.

GCs were never designed to solve any general "resource management problem".
They were designed to solve the problem of managing /one/ resource, and
that is memory. This is in practice so far superior to manual management
that even the mainstream has picked up on the idea (witness java, C#). GC
is not perfect but the only stuff that aspires to be even better (region
inference) is /really/ research at the moment and I know of no
language/compiler that has solved all the practical problems. (I'd be
thankful for hints, though, if it turns out my knowledge isn't up to date.)

GC is surely not the golden key to solve the (general) resource management
problem. I also have not heard such a claim from anyone before.

>> More generally, pure languages like Haskell make it easier to reason
>> about complex programs, since a large and clearly identifiable part of
>> the code (that which is not in the IO monad) follows much a simpler
>> semantics. This point may be hard to demonstrate in an objective way.
>
> If you can't demonstrate this in an objective way, you're not doing real
> science.

I never claimed to be doing any science. I am /not/ a computer scientist,
but a practitioner. I argued from this position, not from a scientific one.
And I find that Haskell, laziness, purity and such are fine tools to write
good programs with fun and ease.

And, dear Great Scientist, may this humble programmer inquire what the
objective scientific measure is for programmer productivity and about what
hard numbers your science has come up with to clarify how purity of Haskell
apparently has no effect whatsoever on us poor programmer's productivity?

> If I look at the evidence and the experience of the proof
> carrying code community, I'd say this is statement is just false.

Objectively? C'mon, gimme you' numbers!

> I mean
> really, when you can tractably prove things about raw machine code I
> don't see how purity would have made things significantly simpler.

You must be joking, really. Have you ever tried to actually prove any
property of stateful, imperative code?

> Also,
> any high-level proof is not what you really care about, the proofs that
> matter are what happen at the machine level.

This is quite debatable, don't you think? Anyway it just side-steps what
I've been talking about, which is not about formal proofs. I've been
talking about reasoning which is a much wider notion. To give you a
picture: you programmed something; it doesn't do exactly what you thought
it would or should. How do you find out what's wrong? Exactly, you start
thinking about the code. The source code that you wrote, not anything on
the machine level. See, I am arguing practical advantages, not theory.

>> Still, my preference
>> for pure languages is quite strong and I don't think this is merely
>> because of their theoretical beauty, but at least to the same degree due
>> to the ease with which I can understand and write complex algorithms in
>> them.
>
> There are many advanced features in Haskell that make it a nice
> language. Enforced purity would not be on my list. I mean nothing is
> preventing you from writing your algorithms in a pure subset of ML,
> doing the nice proofs in your head and moving on.

Repeat, I am not talking about proofs and I like to write programs that do
something useful, so I don't want to program in some 'pure subset', whether
of C or ML. Does ML have an IO monad so i can remain pure throughout? I
suppose not. So there.

> BTW It's my opinion that Haskell is not purely-functional because it
> allows non-termination as an implicit effect.
>
> The strongly-typed lambda calculus would be an example of a pure
> functional language. Because everything is strongly normalizing you can
> use it as a logic. Because Haskell admits non-termination in it's
> "purely-functional" subset you can't use that subset as a logic.
> non-termination should be wrapped up in a monad.

Go ahead and propose something better than Haskell. But throwing out the
baby with the bathwater is going to do any good. Haskell may not be perfect
(what is?) but it clearly and safely separates computational effects from
the pure functional core via the type system. The functional core migth not
be pure from a radical point of view but I haven't seen anything yet that
does better. Have you?

Besides, you wouldn't even be /talking/ about monads (assuming you are not a
category theorist), let alone about "wrapping [some effect] up in a monad",
if not for Haskell and its purity because this is where and why monads (for
programming) have first been brought to practical use.

> There are many good reasons to like purity, but the way it is
> implemented and designed into Haskell has bought us really nothing. So I
> should qualify my statement by saying "purely-functional" programing
> languages like Haskell really provide us no advantages with their sort
> of "purity".

Cheers
Ben

Daniel C. Wang

unread,
Nov 26, 2006, 6:41:57 PM11/26/06
to Benjamin Franksen
Benjamin Franksen wrote:
> It is available as a practically usable library in ghc since about version
> 6.4. And AFAIK it enables extremely fast concurrent programs. And it
> is /not/ accidental that it was Haskell which has brought STM from "pie in
> the sky research" to useful library status. You can't rollback an IO
> action, can you? So you need a way to make sure that your STM transaction
> can't do IO and that's quite easy to do in Haskell, due to its purity and
> its advanced type system.

So what do you do when you want to do IO in an atomic block? Also, there
are tons of ways via type and effect systems to disallow IO in an
atomic block. I could do this in C or ML. The purity actually doesn't
help. You're confusing the ability to describe effects with types (which
is a good thing) from having no effects (except for non-termination) for
most of your code.

> Oh, wait there is yet another very modern feat that really works (safely)
> only in pure languages: asynchronous exceptions. Ever tried to use signals
> and concurrent threads in the same C program? It's a nightmare. Once again,
> Haskell was the first language I am aware of in which this combination
> works in practice.

You will have to explain why purity helps here. C is a nightmare for
several reasons. Would a pure C (i.e. something in SSA form) help?

>> Much like garbage
>> collectors don't really solve the resource management problem. They just
>> shuffle the problems to different part of the system where they can be
>> solved in more centralized ways.
>
> GCs were never designed to solve any general "resource management problem".
> They were designed to solve the problem of managing /one/ resource, and
> that is memory. This is in practice so far superior to manual management
> that even the mainstream has picked up on the idea (witness java, C#). GC
> is not perfect but the only stuff that aspires to be even better (region
> inference) is /really/ research at the moment and I know of no
> language/compiler that has solved all the practical problems. (I'd be
> thankful for hints, though, if it turns out my knowledge isn't up to date.)

There was roughly 30 years between the invention of LISP and Java. There
were many real practical reasons as to why it took that long. Thank good
for Moore's law.

In the mean time people managed memory the plain old fashioned way. I'd
say the new fashioned way is to use something like Cyclone.

> GC is surely not the golden key to solve the (general) resource management
> problem. I also have not heard such a claim from anyone before.

So what are weak pointers, finalizers,.. etc doing in the Java language
standard? Why did the DIS VM of Infero destroy windows when the
referenced count went to 0? There's quite a bit of literature attempting
to make GCs manage many resources. However, in the end people are just
"mostly" happy with it managing memory on their 1Ghz processors with 1GB
of memory.

Oh and then there's this...
http://forum.java.sun.com/thread.jspa?threadID=770152&messageID=4389104


>>> More generally, pure languages like Haskell make it easier to reason
>>> about complex programs, since a large and clearly identifiable part of
>>> the code (that which is not in the IO monad) follows much a simpler
>>> semantics. This point may be hard to demonstrate in an objective way.
>> If you can't demonstrate this in an objective way, you're not doing real
>> science.
>
> I never claimed to be doing any science. I am /not/ a computer scientist,
> but a practitioner. I argued from this position, not from a scientific one.
> And I find that Haskell, laziness, purity and such are fine tools to write
> good programs with fun and ease.

Laziness, buys you something. I need and example again where purity
helps in your day to day job, that you couldn't achieve in some language
like Ocaml or ML. Purity as a discipline or concept is useful. The way
Haskell enforces purity as a language feature is not useful.

> And, dear Great Scientist, may this humble programmer inquire what the
> objective scientific measure is for programmer productivity and about what
> hard numbers your science has come up with to clarify how purity of Haskell
> apparently has no effect whatsoever on us poor programmer's productivity?

Umm... the fact that the vast majority of programs are written in C/C++
or other less painfully disgusting languages. If there was a significant
advantage of using Haskell I'd expect to see more pure languages in the
style of Haskell.

>> If I look at the evidence and the experience of the proof
>> carrying code community, I'd say this is statement is just false.
>
> Objectively? C'mon, gimme you' numbers!

Evidence can be qualitative.

>> I mean
>> really, when you can tractably prove things about raw machine code I
>> don't see how purity would have made things significantly simpler.
>
> You must be joking, really. Have you ever tried to actually prove any
> property of stateful, imperative code?

Um yes, and lot of other people have done too. Before Haskell even
existed, Tony Hoare wasn't doing nothing in his spare time you know.

Formally, the only extra reasoning rule you get is a beta rule as
opposed to a beta-v rule. In the process you give up some eta rules. BTW
have you looked at that Haskell semantics for pattern matching and
compared them to the semantics of pattern matching for SML? Purity in
the Haskell sense really simplify anything.

>> Also,
>> any high-level proof is not what you really care about, the proofs that
>> matter are what happen at the machine level.
>
> This is quite debatable, don't you think? Anyway it just side-steps what
> I've been talking about, which is not about formal proofs. I've been
> talking about reasoning which is a much wider notion. To give you a
> picture: you programmed something; it doesn't do exactly what you thought
> it would or should. How do you find out what's wrong? Exactly, you start
> thinking about the code. The source code that you wrote, not anything on
> the machine level. See, I am arguing practical advantages, not theory.

I bring proofs up because advocates say that purity makes the proof
easier. I'd say that's not true. Hey, we don't have to argue about it if
you don't care about formal proofs.

>>> Still, my preference
>>> for pure languages is quite strong and I don't think this is merely
>>> because of their theoretical beauty, but at least to the same degree due
>>> to the ease with which I can understand and write complex algorithms in
>>> them.
>> There are many advanced features in Haskell that make it a nice
>> language. Enforced purity would not be on my list. I mean nothing is
>> preventing you from writing your algorithms in a pure subset of ML,
>> doing the nice proofs in your head and moving on.
>
> Repeat, I am not talking about proofs and I like to write programs that do
> something useful, so I don't want to program in some 'pure subset', whether
> of C or ML. Does ML have an IO monad so i can remain pure throughout? I
> suppose not. So there.

Why do you need language support?

>> BTW It's my opinion that Haskell is not purely-functional because it
>> allows non-termination as an implicit effect.
>>
>> The strongly-typed lambda calculus would be an example of a pure
>> functional language. Because everything is strongly normalizing you can
>> use it as a logic. Because Haskell admits non-termination in it's
>> "purely-functional" subset you can't use that subset as a logic.
>> non-termination should be wrapped up in a monad.
>
> Go ahead and propose something better than Haskell. But throwing out the
> baby with the bathwater is going to do any good. Haskell may not be perfect
> (what is?) but it clearly and safely separates computational effects from
> the pure functional core via the type system. The functional core migth not
> be pure from a radical point of view but I haven't seen anything yet that
> does better. Have you?

I'd argue a type-an-effects system is a better way to do it from a
practical point of view. There's also the Concurrent Clean approach
which solves some of the monad composition problems.

> Besides, you wouldn't even be /talking/ about monads (assuming you are not a
> category theorist), let alone about "wrapping [some effect] up in a monad",
> if not for Haskell and its purity because this is where and why monads (for
> programming) have first been brought to practical use.

I never said monads weren't useful. I also wouldn't say Haskell didn't
stimulate a lot of interesting research, but the Haskell style of purity
is still a kind of useless idea in my book.

Daniel C. Wang

unread,
Nov 26, 2006, 6:46:09 PM11/26/06
to tomasz....@gmail.com
Tomasz Zielonka wrote:
{stuff deleted}

>> and could be debugged...
>
> That's just silly. I am not sure I should even bother replying you.

Just to clarify, I was talking about how lazy evaluation make
understanding program execution more difficult.

I'd also like to point out that you don't need purity to have lazy
lists. The fact that Haskell has good support for lazy programing,
doesn't mean a lazy impure version of OCaml wouldn't be a good thing.

Daniel C. Wang

unread,
Nov 26, 2006, 6:58:12 PM11/26/06
to Joachim Durchholz
Joachim Durchholz wrote:
> Daniel C. Wang schrieb:
> > Much like garbage
>> collectors don't really solve the resource management problem. They
>> just shuffle the problems to different part of the system where they
>> can be solved in more centralized ways.
>
> Sorry, you're firmly in the realm of sweeping overgeneralizations here.

This is USENET right? ;)

> Please take the time to bolster your claims with arguments before taking
> them as analogies for other dismissive statements.
>

{stuff deleted}


>
> The original point is easy to demonstrate.

Do the following take the simply typed lambda calculus with a fixpoint
and *sums*. Write down the various program equivalences that exist with
CBV and CBN semantics, and tell me which equivalence exists in CBN
that you don't have in CBV. You'll be missing the beta rule in CBV but
you still have a beta-v rule in CBV. Ahh, but you'll also notice you
are missing a eta rule on sums in CBN. In particular

(case e of InjL x => InjL x
| InjR y => InjR y) == e
under cbv
but it doesn't hold under cbn.

So please, explain to me how CBN simplifies things.

{stuff deleted}


> At least for me, arguing about programs at the source level is what I do
> about 90% of my programming time.
> I do that when hunting for bugs, when changing the code, or about any
> other activity that involves an editor and source code (and a lot of
> whiteboard activity as well).
> I do care about machine-level proofs, as a means to validate compilers,
> or code that got loaded from an untrusted source. But that's specialist
> concerns; in my day-to-day job, I don't write compilers or
> dynamically-loading systems.

Hey, lot so people program they seem to turn out reasonably correct
programs without purity built into their systems. I wonder how they are
doing this.

Daniel C. Wang

unread,
Nov 26, 2006, 7:16:05 PM11/26/06
to
BTW just to avoid any confusions when I rail against purity. I'm railing
against the design of a language around purity in the way Haskell is
designed. Purity as a concept is useful, but designing a programming
language around it is a completely academic exercise.

I would also argue that designing a language around decidable
type-inference as SML is is also an academic exercise. There have been
many useful features that could have been added to SML, but rejected
because decidable type inference for those features was hard to guarantee.

That is not to say either of these things "decidable type inference" or
purity are useless. Drawing a line in the sand when designing your
language around these concepts is useful for generating lots of cool
research ideas. However, the resulting designs of such systems should
not be studied and copied blindly. Very often things are the way they
are "just because we wanted to try and see what happens". That's a
perfectly legitimate answer.

What bothers me personally, is when people spin stories around the
"real" reason and try to convince other people that it's clearly a good
thing to do such a thing in a certain way.

"Purity helps referential transparency which is a good thing" is one
such infuriatingly bogus story.

Anton van Straaten

unread,
Nov 26, 2006, 7:49:03 PM11/26/06
to
Daniel C. Wang wrote:
>> The original point is easy to demonstrate.
>
>
> Do the following take the simply typed lambda calculus with a fixpoint
> and *sums*. Write down the various program equivalences that exist with
> CBV and CBN semantics, and tell me which equivalence exists in CBN
> that you don't have in CBV. You'll be missing the beta rule in CBV but
> you still have a beta-v rule in CBV. Ahh, but you'll also notice you
> are missing a eta rule on sums in CBN. In particular
>
> (case e of InjL x => InjL x
> | InjR y => InjR y) == e
> under cbv
> but it doesn't hold under cbn.
>
> So please, explain to me how CBN simplifies things.

I like the way you're resorting to an argument in terms of lambda
calculus in one place, and in another objecting to a core aspect of the
design of Haskell (one which it shares with lambda calculus) as a
"completely academic exercise".

You sound conflicted to me.

Anton

Daniel C. Wang

unread,
Nov 26, 2006, 8:29:40 PM11/26/06
to Anton van Straaten

No, I'm simply demonstrating in a rigorous way the notion of referential
transparency is bogus. The whole concept of referential transparency is
an exercise in elevating the beta rule to godhood for no reason.

Do you agree or disagree with that last statement?

At one point in time many people would claim that CBN "made more
equations true" which is false. Yet it is often repeated. If you study
the theories of sufficiently rich system you discover the systems are
just different. One is not richer than the other.

If they are just different then why does such a difference matter? Why
is one easier to reason about then the other? I've conclude they just
aren't, and believe most reasonable people would agree if they really
locked at the facts and rules. People argue purity makes the "reasoning
easier" without being clear exactly how. If you actually compare the
systems you see that claim is silly at best.

No one sees to get past the dogma that "replacing equals with equals"
makes reasoning easier, by explaining why it should and demonstrating
how. What's so magic about the beta rule?

This whole CBN vs CBV is completely irrelevant in the actual lambda
calculus as envisioned by Church, because everything was strongly
normalizing. Haskell is a bastardize version of the real lambda calculus.

BTW no one would find it odd if a civil engineer did some complicated
bit of math to figure out how to make sure the bridge as designed
doesn't collapse or costs the same as some alternative design. I'm
confused as to why I should be conflicted when I do the same when
talking about language design.

Steve Harris

unread,
Nov 26, 2006, 11:04:38 PM11/26/06
to
> C# 3.0. and LINQ is built on top of the idea of a lazy list. They just
> call them Enumerators and don't bother with the purity.

I'm not familiar with LINQ : does it give you the ability to do the
sort of transactions that STM does? Knowing C# it seems that it
couldn't, not with the static gaurantees at least...


Another interesting application of Haskell's purity is Oleg's zipper
file system. He uses the strong guarantees that Haskell's type system
provides in a fundamental way - it guarantees the necessary isolation
so that more heavyweight forms of isolation are not necessary. I think
it's a really neat approach - probably because I've written
multithreaded servers that were hard to debug, so naturally I'm
jealous!

At least read the feature list at:
http://okmij.org/ftp/Computation/Continuations.html#zipper-fs
a really neat project...


Steve

Daniel C. Wang

unread,
Nov 26, 2006, 11:14:33 PM11/26/06
to Steve Harris
Steve Harris wrote:
>> C# 3.0. and LINQ is built on top of the idea of a lazy list. They just
>> call them Enumerators and don't bother with the purity.
>
> I'm not familiar with LINQ : does it give you the ability to do the
> sort of transactions that STM does? Knowing C# it seems that it
> couldn't, not with the static gaurantees at least...

I should look at the zipperFS some more. On a whole I'm not impressed
with STM. It just solves old problems by creating new ones. Of course
it's sold/marketed as the solution to all the "old problems". I think
there are many other solutions to the old problems that aren't as
radical as STM.

The one nice thing to say about STM is that the idea of an atomic region
is a good PL concept. I'm just quibbling with the way they guarantee
atomicity using STM and opportunistic concurrency.

Tomasz Zielonka

unread,
Nov 26, 2006, 11:44:48 PM11/26/06
to
Daniel C. Wang wrote:
> Just to clarify, I was talking about how lazy evaluation make
> understanding program execution more difficult.

Really? And side effect add nothing to the difficulty?

This is still somewhat too general statement for me. Sure, with
lazy evaluation you can have difficulty to get the right performance,
but I guess you don't mean only debugging efficiency problems.

Personally, I only care about how program executes when I have big
performance needs. There are so many places where I don't want to care
about performace. Strict languages force me to think about performance
all the time.

> I'd also like to point out that you don't need purity to have lazy
> lists. The fact that Haskell has good support for lazy programing,
> doesn't mean a lazy impure version of OCaml wouldn't be a good thing.

I already said "I could use lazy lists in OCaml". I should add that it
wouldn't be straightforward - I would have to rewrite all the needed
list functions (map, folds, etc.) again for the lazy version.

Also, I don't buy such "you don't need" arguments. There are so many
things I don't strictly need, but without them, I am a hopeless
programmer ;-)

Best regards
Tomasz

Daniel C. Wang

unread,
Nov 27, 2006, 12:59:32 AM11/27/06
to tomasz....@gmail.com
Tomasz Zielonka wrote:
> Daniel C. Wang wrote:
>> Just to clarify, I was talking about how lazy evaluation make
>> understanding program execution more difficult.
>
> Really? And side effect add nothing to the difficulty?
It doesn't confuse your average C programmer using GDB.

> This is still somewhat too general statement for me. Sure, with
> lazy evaluation you can have difficulty to get the right performance,
> but I guess you don't mean only debugging efficiency problems.

Debugging in general, and sprinkling in "printfs" to understand why your
program is bottom rather than producing the right result. Though, I'll
admit the kind of debugging you need to do in Haskell is of a different
flavor.

> Personally, I only care about how program executes when I have big
> performance needs. There are so many places where I don't want to care
> about performace. Strict languages force me to think about performance
> all the time.

It's ironic that those who talk about referential transparency say it to
help optimizations when as people who use these languages will admit you
don't get the nice lazy Haskell programs without paying some performance
cost.

>> I'd also like to point out that you don't need purity to have lazy
>> lists. The fact that Haskell has good support for lazy programing,
>> doesn't mean a lazy impure version of OCaml wouldn't be a good thing.
>
> I already said "I could use lazy lists in OCaml". I should add that it
> wouldn't be straightforward - I would have to rewrite all the needed
> list functions (map, folds, etc.) again for the lazy version.

yeah, that's just a library issue.

> Also, I don't buy such "you don't need" arguments. There are so many
> things I don't strictly need, but without them, I am a hopeless
> programmer ;-)

I'm just saying the costs associated with purity don't payoff for the
supposed benefits.

Tomasz Zielonka

unread,
Nov 27, 2006, 2:13:58 AM11/27/06
to
Daniel C. Wang wrote:
> Debugging in general, and sprinkling in "printfs" to understand why your
> program is bottom rather than producing the right result.

Doesn't happen too often for me.

> Though, I'll admit the kind of debugging you need to do in Haskell is
> of a different flavor.

Oh yes, but the difference is also in amount of debugging.

>> Personally, I only care about how program executes when I have big
>> performance needs. There are so many places where I don't want to care
>> about performace. Strict languages force me to think about performance
>> all the time.
>
> It's ironic that those who talk about referential transparency say it to
> help optimizations

Not only for that.

Maybe I should explain my views on efficiency in more detail - I do care
about efficiency - my job is almost entirely about efficiency. However,
I am not sure this discussion leads anywhere.

> when as people who use these languages will admit you don't get the
> nice lazy Haskell programs without paying some performance cost.

Of course there are performance costs. There are also performance gains.

>> I already said "I could use lazy lists in OCaml". I should add that it
>> wouldn't be straightforward - I would have to rewrite all the needed
>> list functions (map, folds, etc.) again for the lazy version.
>
> yeah, that's just a library issue.

Just? Having to rewrite all the functions is "just" a library
issue?

>> Also, I don't buy such "you don't need" arguments. There are so many
>> things I don't strictly need, but without them, I am a hopeless
>> programmer ;-)
>
> I'm just saying the costs associated with purity don't payoff for the
> supposed benefits.

OK, we just have different personal views on this matter. It pays off
for me, it doesn't for you.

I am a bit confused when someone tries to convince me that "I don't need
it", "I don't get any benefit from it", perhaps even "I don't like it".

Best regards
Tomasz

Tomasz Zielonka

unread,
Nov 27, 2006, 2:17:20 AM11/27/06
to
Daniel C. Wang wrote:
> Hey, lot so people program they seem to turn out reasonably correct
> programs without purity built into their systems. I wonder how they are
> doing this.

I write reasonably correct programs in C++. Hell, I am sometimes
delighted with the result! Still, I prefer to write in Haskell.

Best regards
Tomasz

Tomasz Zielonka

unread,
Nov 27, 2006, 2:23:57 AM11/27/06
to
Daniel C. Wang wrote:
> This whole CBN vs CBV is completely irrelevant in the actual lambda
> calculus as envisioned by Church, because everything was strongly
> normalizing. Haskell is a bastardize version of the real lambda calculus.
>
> BTW no one would find it odd if a civil engineer did some complicated
> bit of math to figure out how to make sure the bridge as designed
> doesn't collapse or costs the same as some alternative design. I'm
> confused as to why I should be conflicted when I do the same when
> talking about language design.

No, strong normalization is not everything. What benefit you have from
termination of your program, if it gives you wrong results? From my
personal experience, languages like Haskell at least help you to arrive
with a (partially) correct program. Haskell also encourages you to
program with maps, folds and unfolds, which really helps to get
terminating programs. Languages like OCaml discourage such programming
style, unfortunately.

If you insist of having a formal correctness proof for all you programs,
then we are in different leagues and this discussion is pointless in its
current form.

Best regards
Tomasz

M E Leypold

unread,
Nov 26, 2006, 5:17:16 PM11/26/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:

> Philippa Cowderoy wrote:
> > On Sun, 26 Nov 2006, Daniel C. Wang wrote:
> >
> >> There are many good reasons to like purity, but the way it is implemented and
> >> designed into Haskell has bought us really nothing.
> > It may have bought you nothing - many Haskell coders (myself
> > included) find value in it!
>
>
> You, guys probably could have gotten by in OCaml and not really

Well, I know a variant of THAT line from old Fortran programmers: "I
can do everything I want to do in FORTRAN" (old Fortran programmers
spell it in capitals :-), since they mean FORTRAN, not Fortran).

You have been berating other posters as "not being scientific", since
they couldn't conclusively prove what they stipulated as perhaps being
the win in purity.

I suggest You start proving some of you assertions ("purity buys you
nothing", etc).

> noticed. Well, maybe you would have noticed your programs ran
> faster..... and could be debugged...


> There are many things in Haskell, purity is just one, and its my
> experience that I've never missed it in the other languages I've used.

As with the old Fortran programmers you might have programming in
other languages in Haskell (if you can understand what I mean).

> If you can show me a concert example where purity was key... (not just
> laziness..) that might be more illustrative.

Just show me an example where side effects / impurity was the key and
could not have been done otherwise, that might be more illustrative
...

Purity is just one feature in a whole cocktail of features (the type
system, lazyness, purity, type classes) that make sense together and
often only together. Lazyness by foot (i.e. lazyness OCaml style) is --
well -- sometimes a pain, like travelling in a car with wooden
benches. It can be done, but when you start to become older (or say
more experienced), you prefer to concentrate on other things.

> STM is some pie in the sky research idea that's generated yet more

As "automatic formula translation" (read compiler construction) was
once: first comes the research and then a mature tool set.

> papers but really hasn't solved any real problems. They've just made
> it much easier to write slow concurrent programs. Much like garbage
> collectors don't really solve the resource management problem. They
> just shuffle the problems to different part of the system where they

> can be solved in more centralised ways.

Well, as I said: When you become older you prefer to solve the problem
you want to solve and are glad that those other problems have been
"centralised" and solved (mostly) by other people, here, the authors
of your run time system. I fail to see, why that should not be
valuable, quite the opposite: With your attitude everything apart from
assembler programming (or toggling front panel switches ...) becomes
somehow dishonest (at least that is what I'm sensing in your
statements).

Using GC is not cheating, but smart. And it solves _my_ resource
management problems, so I'd be intrested to hear some example of
resource management problem it doesn't solve for you.

Regards -- Markus


M E Leypold

unread,
Nov 26, 2006, 7:21:44 PM11/26/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:

> Tomasz Zielonka wrote:
> {stuff deleted}
> >> and could be debugged...
> >
> > That's just silly. I am not sure I should even bother replying you.
>
> Just to clarify, I was talking about how lazy evaluation make
> understanding program execution more difficult.
>
> I'd also like to point out that you don't need purity to have lazy
> lists. The fact that Haskell has good support for lazy programing,

I'd also not need any modern language or LISP to have lists at
all. Hell, real programmers can have lists in FORTRAN or even in a
Turing machine!

I'd say, your way to conduct an argument is quite useless. You'll
probably win this discussion against anyone else by your way of
introducing always new topics, spouting a plethora of half truths
garnished with important sounding buzzwords and generally not replying
exactly on topic.

Well, let's just go back 2 steps and do "real science" (of which you
seem to be rather fond ...):

What exactly is your hypothesis? If possible, please state it in a way
that allows to formulate a number of ways in which the hypothesis
could possibly be falsified, since according to Popper, a hypothesis
which by it's very structure excludes falsification is not scientific.

I'm looking forward to your answer.

Regards -- Markus


> doesn't mean a lazy impure version of OCaml wouldn't be a good thing.

I can already see the effects happen at random time ... What exactly
do mean by "lazy" "impure" and how could such a language be used in
a useful way?


M E Leypold

unread,
Nov 26, 2006, 7:24:54 PM11/26/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:
>
> So what are weak pointers, finalizers,.. etc doing in the Java
> language standard? Why did the DIS VM of Infero destroy windows when
> the referenced count went to 0? There's quite a bit of literature
> attempting to make GCs manage many resources. However, in the end
> people are just "mostly" happy with it managing memory on their 1Ghz
> processors with 1GB of memory.


Certainly not. As I've repeatedly asserted in the past, freeing memory
or deleteing an object is a global decision. In a truly modular
system, not part of the system can take that decision. "Manual" mamory
management hinders modularity. Truly modular systems NEED garbage
collection.

Regards -- Markus

M E Leypold

unread,
Nov 26, 2006, 7:37:35 PM11/26/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:

> Benjamin Franksen wrote:
> > It is available as a practically usable library in ghc since about version
> > 6.4. And AFAIK it enables extremely fast concurrent programs. And it
> > is /not/ accidental that it was Haskell which has brought STM from "pie in
> > the sky research" to useful library status. You can't rollback an IO
> > action, can you? So you need a way to make sure that your STM transaction
> > can't do IO and that's quite easy to do in Haskell, due to its purity and
> > its advanced type system.
>
> So what do you do when you want to do IO in an atomic block? Also,
> there are tons of ways via type and effect systems to disallow IO in
> an atomic block. I could do this in C or ML. The purity actually
> doesn't help. You're confusing the ability to describe effects with
> types (which is a good thing) from having no effects (except for
> non-termination) for most of your code.

No, he doesn't. Thanks to monads, effects can be controlled by purely
functional code. :-).

> Laziness, buys you something. I need and example again where purity
> helps in your day to day job, that you couldn't achieve in some
> language like Ocaml or ML. Purity as a discipline or concept is
> useful. The way Haskell enforces purity as a language feature is not
> useful.

A lot of people seem to disagree with you. I wonder why.

> > And, dear Great Scientist, may this humble programmer inquire what the
> > objective scientific measure is for programmer productivity and about what
> > hard numbers your science has come up with to clarify how purity of Haskell
> > apparently has no effect whatsoever on us poor programmer's productivity?
>
> Umm... the fact that the vast majority of programs are written in
> C/C++ or other less painfully disgusting languages. If there was a
> significant advantage of using Haskell I'd expect to see more pure
> languages in the style of Haskell.


I think you have still a lot to learn about the reasons for success of
systems or programming languages.


> >> If I look at the evidence and the experience of the proof carrying
> >> code community, I'd say this is statement is just false.
> > Objectively? C'mon, gimme you' numbers!
>
> Evidence can be qualitative.

So, give it, man. We are waiting for it on our toes.

> > something useful, so I don't want to program in some 'pure subset', whether
> > of C or ML. Does ML have an IO monad so i can remain pure throughout? I
> > suppose not. So there.
>
> Why do you need language support?

To allow th compiler to aggressively optimize in the pure language
domain where more optimizations are possible? Just a guess.


> I'd argue a type-an-effects system is a better way to do it from a
> practical point of view. There's also the Concurrent Clean approach
> which solves some of the monad composition problems.


Well, this is actually some pie in the sky you're writing about isn't
it. You could write academic papers about this, couldn't you? 'Cause I
don't see you selling this new development tools which favour
"type-an-effects" over monads ...


> I never said monads weren't useful. I also wouldn't say Haskell didn't
> stimulate a lot of interesting research, but the Haskell style of
> purity is still a kind of useless idea in my book.


Please define "useless"? How do I distinguish "useless" from "useful"
in discussions with your scientific self? can you give some criteria
which I might use to evaluate your assertions against?


Regards .-- Markus

M E Leypold

unread,
Nov 26, 2006, 5:51:29 PM11/26/06
to

Benjamin Franksen <benjamin...@bessy.de> writes:
>
> And, dear Great Scientist, may this humble programmer inquire what the
> objective scientific measure is for programmer productivity and about what
> hard numbers your science has come up with to clarify how purity of Haskell
> apparently has no effect whatsoever on us poor programmer's productivity?


:-)

And Benjamin -- thanks for some of the pointers in your answer. Even
if you can't convince Dan C Wang, you got me interested in STM and the
way Haskell handles async signals.

> You must be joking, really. Have you ever tried to actually prove any
> property of stateful, imperative code?
>
> > Also,
> > any high-level proof is not what you really care about, the proofs that
> > matter are what happen at the machine level.
>
> This is quite debatable, don't you think? Anyway it just side-steps what

It's also nonsense. Every sane mathematicion / computer scientist
partitions his proofs into parts. Here, one just proves that the
compiler works (hopefully other people have already done that once and
for all :-) and then prove the rest in the compilers source
language. Which together makes a proof about the generated machine
code.

The same principle applies to "informal reasoning".

Regards -- Markus


M E Leypold

unread,
Nov 26, 2006, 7:57:18 PM11/26/06
to

Uh, thanks. I'll have to remember this phrase :-). Sometimes I find it
difficult as a non native speaker to get to the point concisely in one
sentence.

Regards -- Markus

M E Leypold

unread,
Nov 26, 2006, 7:52:45 PM11/26/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:

> Joachim Durchholz wrote:
> > Daniel C. Wang schrieb:
> > > Much like garbage
> >> collectors don't really solve the resource management problem. They
> >> just shuffle the problems to different part of the system where
> >> they can be solved in more centralized ways.
> > Sorry, you're firmly in the realm of sweeping overgeneralizations
> > here.
>
> This is USENET right? ;)

Certainly. So I wouldn't be surprised if everyone around here would
suddenly drop silent, since it is difficult to argue against your
foggy kind of arguments and people probably have better things to
do. Just my 2 c.

>
> > Please take the time to bolster your claims with arguments before
> > taking them as analogies for other dismissive statements.
> >
> {stuff deleted}
> > The original point is easy to demonstrate.
>
> Do the following take the simply typed lambda calculus with a fixpoint
> and *sums*. Write down the various program equivalences that exist
> with CBV and CBN semantics, and tell me which equivalence exists in
> CBN that you don't have in CBV. You'll be missing the beta rule in
> CBV but you still have a beta-v rule in CBV. Ahh, but you'll also
> notice you are missing a eta rule on sums in CBN. In particular

But you see, that all this is not about equivalence in the sense
you're trying to assert, but _how easy_ things can be done. To give
you an example: Most languages are turing complete. So everything I
can compute in any of them I can compute on a tuting machine. So why
don't we just stick with turing machines? Well -- there are not usable
in a PRACTICAL way and that has nothing to do with formal equivalence
as far as computing power goes.

Your argument completely misses the point in similar fashion.

>
> (case e of InjL x => InjL x
> | InjR y => InjR y) == e
> under cbv
> but it doesn't hold under cbn.
>
> So please, explain to me how CBN simplifies things.

You're also arguing rather theoretically for a guy who berated
purity as "pie in sky" research without any practical
application. Where is the practical aspect in your evidence? All I see
is : Theory. (No, i'm not against theory, quite the opposite, but
since you made the rules first, please stick to them: if other people
give witness, that it is easier _in practice_ to prove things in a
pure language, your theoretically bolstered argument that everything
can be proved as well in an impure language in principle (that is: in
theory), is somewhat inadmissible).


> {stuff deleted}
> > At least for me, arguing about programs at the source level is what
> > I do about 90% of my programming time.
> > I do that when hunting for bugs, when changing the code, or about
> > any other activity that involves an editor and source code (and a
> > lot of whiteboard activity as well).
> > I do care about machine-level proofs, as a means to validate
> > compilers, or code that got loaded from an untrusted source. But
> > that's specialist concerns; in my day-to-day job, I don't write
> > compilers or dynamically-loading systems.
>
> Hey, lot so people program they seem to turn out reasonably correct
> programs without purity built into their systems. I wonder how they

Like C. well. We all know about the security C programs have on
average. If your argument would apply, C programs should be as secure
in the wild as Ada or Haskell programs. Even worse: Haskell programs
should dump core as often as C programs, if anything is equivalent ...

Well?

Regards -- Markus

M E Leypold

unread,
Nov 26, 2006, 7:58:29 PM11/26/06
to

M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:

<lots of typos>

Sorry for them. I've to get some sleep.

- M

Daniel C. Wang

unread,
Nov 27, 2006, 9:08:22 AM11/27/06
to M E Leypold
M E Leypold wrote:
{stuff deleted}

> It's also nonsense. Every sane mathematicion / computer scientist
> partitions his proofs into parts. Here, one just proves that the
> compiler works (hopefully other people have already done that once and
> for all :-) and then prove the rest in the compilers source
> language. Which together makes a proof about the generated machine
> code.

This is just non-sense. Read up on PCC before you bother. Xavier Leroy's
proof of compiler correctness is a very illustrative example of why
doing this thing is hard, and why the PCC route is a useful approach. He
admits to it in his conclusions in a backhanded way. Note he also proved
correctness for a subset of C. Proving correctness for a Haskell
compiler would arguably be harder.

Daniel C. Wang

unread,
Nov 27, 2006, 9:10:59 AM11/27/06
to M E Leypold

False. Modular systems can just refuse to allocate.. (not thinking like
a FORTRAN programmer are you..) also there is Cyclone.

M E Leypold

unread,
Nov 27, 2006, 9:30:12 AM11/27/06
to

What are you talking about? "Refuse to allocate?" Please explain
yourself. And Cyclone has manual memory management, AFAIK, so I
completely fail to see the connection -- how does the existence of
cyclone relate to my statement that manual memory management affects
modularity adversly?

Regards -- Markus

Daniel C. Wang

unread,
Nov 27, 2006, 9:36:53 AM11/27/06
to tomasz....@gmail.com
Tomasz Zielonka wrote:
> Daniel C. Wang wrote:
{stuff deleted}

> Haskell also encourages you to
> program with maps, folds and unfolds, which really helps to get
> terminating programs. Languages like OCaml discourage such programming
> style, unfortunately.

Is this because of the intermediate data structures that are created?
If you are programming with just maps, fold, and unfolds, you can
manually deforest your program yourself and avoid some intermediate
results. This I agree is not ideal. Just wondering what aspect of OCaml
you are talking about that discourages this style.. since I use it
pretty much all the time. In fact I wrote an entire compiler using this
style. It's quite interesting to see CPS conversion broken down into a fold.

> If you insist of having a formal correctness proof for all you programs,
> then we are in different leagues and this discussion is pointless in its
> current form.

The claim made by many people is that purity leads to referential
transparency and referential transparency is a "good thing" because it
makes it easier to reason about programs. Therefore purity is a good
thing and we must bend over backward to have it in our language.

I have yet to see anyone explain to me in a concrete way how referential
transparency (which is a fancy name for the beta equivalence..)
makes reasoning easier. People, show a bunch of examples and claim the
examples make it easier.. but there's no explanation of the "how".

Let me give you a more concrete example. One could say that separation
logic (google for details..) is just a fancy warmed over version of
Hoare logic. However, the separation logic has a "frame rule" that works
in the presence of certain restricted forms of aliasing. The "frame
rule" allows for local reasoning while Hoare logic does not.

I understand what the frame rule is. I understand that without it you
can't reason locally about your programs. I therefore understand that
separation logic is a non-trivial improvement over Hoare logic.

With respect to referential transparency. I understand what it is. I
don't understand how it significantly simplifies the reasoning of any
program. I therefore don't understand why languages that provide RT
non-trivial improvements over languages that don't.

In fact if you look at the claims about RT in detail, you see they are
often false in both theory and practice. That's at least how I've read
the literature.

Yet the vast majority of the world seems brainwashed by this concept,
because it shows up in an FAQ.


Daniel C. Wang

unread,
Nov 27, 2006, 9:40:49 AM11/27/06
to M E Leypold
M E Leypold wrote:

> What are you talking about? "Refuse to allocate?" Please explain
> yourself. And Cyclone has manual memory management, AFAIK, so I
> completely fail to see the connection -- how does the existence of
> cyclone relate to my statement that manual memory management affects
> modularity adversly?
>

I what way is Cyclone's manual memory management not modular? I don't
understand your definition of "modular".

Torben Ægidius Mogensen

unread,
Nov 27, 2006, 9:47:47 AM11/27/06
to
"Mark Tarver" <dr.mt...@ukonline.co.uk> writes:

>> Purity ...... really buys you nothing.
>> > What are advantages of purely functional languages?
>
> I think this is roughly right. The advantages cited in this thread -
> memoisation and formal methods - don't really compensate very well for
> what you lose.

These are, though oft sited, not (IMO) the biggest benefits. Bigger
"real world" benefits are:

- Since there is no hidden state, unit testing is just a matter of
applying functions to arguments and comparing results.

- Since the type describes all the possible interactions, it is
possible to generate test arguments from the type.

- Since the type describes all the possible interactions, refactoring
is much easier.

- Understanding code is much easier, as you don't have to think of
possible hidden updates between calls.

Torben

Daniel C. Wang

unread,
Nov 27, 2006, 9:51:26 AM11/27/06
to Daniel C. Wang, M E Leypold

Just to be a bit clearer. Cyclone lets you write programs/libraries that
are parameterized by the memory management discipline. I can pass an
object that is allocated in a region, garbage collected, or uniquely
owned to the same function that is oblivious to the memory management
discipline. I consider this to be a modular approach.

But to be fair, I would say GC is a mostly good thing for modularity if
you can afford to use it, but there are still many places where you
can't and shouldn't.

Torben Ægidius Mogensen

unread,
Nov 27, 2006, 9:59:04 AM11/27/06
to
"Daniel C. Wang" <danw...@gmail.com> writes:

> Do the following take the simply typed lambda calculus with a fixpoint
> and *sums*. Write down the various program equivalences that exist
> with CBV and CBN semantics, and tell me which equivalence exists in
> CBN that you don't have in CBV. You'll be missing the beta rule in
> CBV but you still have a beta-v rule in CBV. Ahh, but you'll also
> notice you are missing a eta rule on sums in CBN. In particular
>

> (case e of InjL x => InjL x
> | InjR y => InjR y) == e
> under cbv
> but it doesn't hold under cbn.
>
> So please, explain to me how CBN simplifies things.

In CBN, fst(e1,e2) = e1, which doesn't hold in CBV.

Generally, sums and products are dual, and for each property of one
that doesn't hold in CBV there is a property on the other that doesn't
hold in CBN and vice-versa.

Torben

M E Leypold

unread,
Nov 27, 2006, 10:21:52 AM11/27/06
to

"Daniel C. Wang" <danw...@gmail.com> writes:

> M E Leypold wrote:
> {stuff deleted}

You should also quote the context, namely that you asserted:

> > > Also,
> > > any high-level proof is not what you really care about, the proofs that
> > > matter are what happen at the machine level.

> > It's also nonsense. Every sane mathematician / computer scientist


> > partitions his proofs into parts. Here, one just proves that the
> > compiler works (hopefully other people have already done that once and
> > for all :-) and then prove the rest in the compilers source
> > language. Which together makes a proof about the generated machine
> > code.
>
> This is just non-sense. Read up on PCC before you bother. Xavier

The fact that formal proofs are hard and therefore in practice
"demonstrations of plausibility" are preferred does change the
principle of the thing: That it spares a lot of work, to do proofs and
demonstrations in 2 steps and avoid reasoning about machine code
directly (except you're a compiler writer and want to demonstrate
general properties of the output of your compiler -- not again, that
here you don't reason about a specific sequence of instructions, only
about general properties, i.e. that the semantics of your yource
language is correctly implemented on the target machine). The
programmer of any specific application then only has to reason about
his (high level language) source, which is much easier.

You on the other side only leave yourself the options to

(a) Program assembly and only reason about that

(b) Proof/demonstrate things about the output of your compiler

Much fun with both. You started out with asserting how purity is
useless _in_practice_ and only (I repeat myself) "pie in the sky". As
a practitioner of the craft I find purity (in the language, yes) a
very useful harness into which I can build my code. Your approach on
the other side (machine code over everything) seems to lack a certain
practicability. This is strange, don't you think so?


> Leroy's proof of compiler correctness is a very illustrative example
> of why doing this thing is hard, and why the PCC route is a useful
> approach. He admits to it in his conclusions in a backhanded way. Note

Let me quote yourself again

> > > Also, any high-level proof is not what you really care about,
> > > the proofs that matter are what happen at the machine level.

I wonder how that relates to PCC? And how does the usefulness of PCC
contradict the idea that proofs are made easier by the absence of
side effects (or presence of purity?).

Proof-Carrying Code (PCC) is a technique that can be used for safe
execution of untrusted code. In a typical instance of PCC, a code
receiver establishes a set of safety rules that guarantee safe
behavior of programs, and the code producer creates a formal
safety proof that proves, for the untrusted code, adherence to the
safety rules. Then, the receiver is able to use a simple and fast
proof validator to check, with certainty, that the proof is valid
and hence the untrusted code is safe to execute.

Where do you suppose the " formal safety proof" does come from? Does
it just pop up magically at "the machine level" or don't you think
it's the compiled down version of a proof in the original source
language (the "high level proof" you don't "care about") or a result
from some other implicit verification (i.e. type safety) which the
compiler performs while compiling high level source?

> he also proved correctness for a subset of C. Proving correctness for
> a Haskell compiler would arguably be harder.

To come back to your intro:

> This is just non-sense. Read up on PCC before you bother.

You, Honey, don't know, what and how much I've read. You, on the other
side, seem to be full of half baked over-generalizing half
truths.

You're trying to dismiss 20 years of programming language research as
useless academic exercise and act as if you're just on your way to
revolutionize the whole field ("purity is useless", "side effects are
harmless", "referential transparency is overrated" ...). Either you're a
genius or a complete crank. In any case I'm probably not able to
contribute anything useful to your topic (what is it by the way?).

If it is not below your enlightened status, and you're really
interested in discussion rather than a one-way monologue, you could
just try to summarize your arguments so far:

- What has been your hypothesis?

- What is been your demonstration of plausibility / proof / data
supporting your hypothesis?


A typical dialog with you so far went like this

You: "Hey, purity buys you nothing!"

Others: "But ..." (enumerate advantages of purity, apart from giving
testimony that purity works rather well in practice).

You: "But the eta-beta-mu-rule in KVB/KRA has only one additionaly
reductive vertex, so there!" (Note how you somehow fail to
conclusively connect that to your initial statement.

Others: "?" (or equivalent thereof)

You: "And anyway, you said it improves equational reasoning, but
proofs are HARD man, HARD" (Note, how "reasoning" becomes
"proof" for the purpose of refusing and also that "Y does not
improve X" is a non sequitur from "X is HARD in Y". It might be
even harde without ...)

You: "You said, You could do Z with Y, but I can also do it (somehow)
with Q" (Note how that fails to demonstrate that Q makes Z as
easy as Y).


Forgive me if I somehow caricatured your way to argue, but I think I
captured the essence. Please, please, please: If you really want to
cintinue this discussion, learn how to discuss (as opposed to applying
verbal Jiu Jitsu just to be right (or look as if).

Regards -- Markus


Anton van Straaten

unread,
Nov 27, 2006, 11:28:36 AM11/27/06
to
Daniel C. Wang wrote:
> No, I'm simply demonstrating in a rigorous way the notion of referential
> transparency is bogus. The whole concept of referential transparency is
> an exercise in elevating the beta rule to godhood for no reason.
>
> Do you agree or disagree with that last statement?

I disagree. But I'm not sure I understand exactly how you the
connections between purity, referential transparency, and CBV/CBN fir
into your argument. I'll address that in a separate response, later.

> BTW no one would find it odd if a civil engineer did some complicated
> bit of math to figure out how to make sure the bridge as designed
> doesn't collapse or costs the same as some alternative design. I'm
> confused as to why I should be conflicted when I do the same when
> talking about language design.

But it would be odd if the engineer was complaining that the math he was
using "buys you nothing". At the very least, you're demonstrating the
usefulness of metalanguages that resemble Haskell. Being a metalanguage
for PL semantics and related areas has been an area of some strength for
Haskell.

I suppose you might say that this has nothing to do with either CBN or
purity, and that ML has also been used successfully for this purpose.
But it's usually a pure subset of ML that's used, sometimes even with
monads, which seems more like a validation of the Haskell approach in
this context, than an argument against it.

Anton

Anton van Straaten

unread,
Nov 27, 2006, 12:16:20 PM11/27/06
to
Daniel C. Wang wrote:
> I have yet to see anyone explain to me in a concrete way how referential
> transparency (which is a fancy name for the beta equivalence..) makes
> reasoning easier. People, show a bunch of examples and claim the
> examples make it easier.. but there's no explanation of the "how".
...

> Yet the vast majority of the world seems brainwashed by this concept,
> because it shows up in an FAQ.

I suspect many people accept that referential transparency makes
reasoning easier because it matches their experience. That often
includes experience with programs that use side effects in ways that
interfere with reasoning quite badly, and which become much more
tractable when rewritten in a pure form.

I think it would be relatively easy to formulate an explanation of why
and how this is the case. Some suggestions have already been made in
this thread.

However, to avoid battles between straw men, we need to more clearly
define what (classes of) alternatives to referential transparency are
being discussed. For example, I think a solid case can be made for
referential transparency when compared to object-oriented approaches
that rely on mutable objects. That wouldn't be a general argument
against all non-RT systems, but it would undermine your claim, unless
you intend to exclude certain classes of system. Where you stand on
this isn't clear to me.

Note that I think that the case can also be made more generally than the
above example, but that still requires some knowledge of the space of
alternatives.

I'm also having difficulty figuring out how to integrate the various
points you've made, some examples of which are:

1. "Purity buys you nothing" - subsequently qualified, see #2.

2. "There are many good reasons to like purity, but the way it is

implemented and designed into Haskell has bought us really nothing."

3. "Purity as a concept is useful."

4. "I don't understand how [RT] significantly simplifies the reasoning
of any program."

Point 4 seems somewhat at odds with point 3 and possibly 2, and could
use some clarification. For example, if RT doesn't significantly
simplify the reasoning of programs, what do you see as the benefits of
purity as a concept, and what are the many good reasons to like it? I
suspect that the answers to those questions will at least overlap with
the evidence you're asking for about RT, i.e. you may already have the
answers to your questions.

My impression is still that you're either conflicted about this issue,
i.e. that you hold contradictory positions, or that you haven't
succeeded in clearly expressing whatever it is that you're really
getting at. Of course, there's a remote theoretical possibility that
I'm just being dense, but I leave any proof of that as an exercise for
the reader.

Anton

Joachim Durchholz

unread,
Nov 27, 2006, 1:06:10 PM11/27/06
to
Tomasz Zielonka schrieb:

> This is still somewhat too general statement for me. Sure, with
> lazy evaluation you can have difficulty to get the right performance,
> but I guess you don't mean only debugging efficiency problems.

One thing that I heard about debugging difficulties with a lazy language
is that inspecting a value will (often) give you just a thunk, which is
composed of lots of functions from all over the program, partially
evaluated so you don't even recognize the fragments anymore.

I'm not sure how often this becomes a problem though.
Or what the progress has been at that front - the Haskell community
seems to be working on the issue, though I don't know of any results.

Regards,
Jo

Joachim Durchholz

unread,
Nov 27, 2006, 1:19:39 PM11/27/06
to
Benjamin Franksen schrieb:

> Daniel C. Wang wrote:
>> Much like garbage
>> collectors don't really solve the resource management problem. They just
>> shuffle the problems to different part of the system where they can be
>> solved in more centralized ways.
>
> GC is surely not the golden key to solve the (general) resource management
> problem. I also have not heard such a claim from anyone before.

C++ gurus claimed that "memory deallocation is resource deallocation".

I'm not sure why Daniel assumes that Haskell's design followed that
principle, too. (Memory management and resource management have enough
parallels that pushing both into the same corners of the program makes
some parallels exploitable, hence easing the pain of having to do it all
manually. Nevermind the parallels don't exist in a garbage-collected
language because memory management doesn't need manual work anymore,
leaving other resource management free to satisfy other design
constraints than having to go in the same place as memory management.)

Regards,
Jo

It is loading more messages.
0 new messages