Why purely functional languages?


rincewind

Nov 21, 2006, 2:53:27 PM
What are the advantages of purely functional languages?

I mean, functions as values, tail recursion optimization and other
attributes of functional languages are great; I can easily appreciate
them. But why prohibit operations with side effects completely? How can
a language be better for explicitly lacking some fundamental feature?

I don't think any argument is needed to show that some kind of global
state-changing operation is sometimes needed. The existence of monads in
Haskell kind of proves this. But why invent some complicated concept,
why not just bring in all the traditional stuff (assignments, etc.)? How
are monads better?

Mark Tarver

Nov 21, 2006, 3:25:20 PM

I don't think you can scientifically prove that they are better
- not without committing the prescriptive fallacy (see
www.lambdassociates.org/fallacy.htm). It's certainly onerous sometimes
to have to work within the confines of a purely functional language.

I think that *one* advantage of a purely functional implementation
comes in formal proofs of correctness. It's fairly easy to reduce such
implementations to a series of assertions in a formal language like FOL
and then build a proof. However, in the presence of side effects, even
the Law of Self-Identity breaks down.

Mark

Mark Tarver

Nov 21, 2006, 3:31:50 PM

> However in the presence of side-effects even the Law of Self Identity breaks down.

As an example.

(0-) (define increment
-> (set *counter* (+ 1 (value *counter*))))
increment

(1-) (set *counter* 0)
0

(2-) (= (increment) (increment))
false

Mark

Taylor Venable

Nov 21, 2006, 3:48:44 PM
On Tue, 21 Nov 2006 22:53:27 +0300
rincewind <fa...@not.real> wrote:

> I don't think any argument is needed to state that some kind of
> global state-changing operations is needed, sometimes. Existence of
> monads in Haskell kind of proves this. But why invent some
> complicated concept, why not just bring in all the traditional stuff
> (assignments, etc)? How monads are better?

From what I understand about them, the point of using a monad is that
they not only allow side effects, but they also *contain* them. In
other words, they are used to prevent side effects from leaking into
other parts of the system. That allows you to verify formally that the
rest of your program is working correctly, without having to worry about
any side-effects produced by the monad that might otherwise affect it.
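In Haskell terms (a minimal sketch, not from the original post), this containment shows up directly in the types: anything effectful carries IO in its signature, and the type checker keeps it from leaking into pure code.

```haskell
-- A pure function: its type promises the absence of effects.
double :: Int -> Int
double x = 2 * x

-- An effectful action: the IO in the type both marks and contains it.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

main :: IO ()
main = do
  greet "world"       -- effects are allowed here, inside IO
  print (double 21)   -- pure code may be called from IO, never the reverse
```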

--
Taylor Venable
tay...@metasyntax.net
http://www.metasyntax.net/

Message has been deleted

Dan Bensen

Nov 22, 2006, 12:35:50 AM
Stefan Ram wrote:
> If we already would know now that this is "better" than other
> languages, other languages could be dropped immediatly and we
> all would be using only functional programming languages.
No, that takes time. Languages never get dropped right away once
they're entrenched in legacy code.

How is Haskell doing in the real world these days? Has it proven to be
commercially useful yet, or is it still mostly academic?

--
Dan
www.prairienet.org/~dsb

Stefan Holdermans

Nov 22, 2006, 1:59:54 AM

> rincewind wrote:
> > I mean, functions as values, tail recursion optimization and other
> > attributes of functional languages are great, I can easily appreciate
> > them. But why prohibit operations with side-effects completely? How can
> > the language be better by explicitly lacking some fundamental feature?

Mark replied:


> I think that *one* advantage of a purely functional implementation
> comes in formal proofs of correctness.

Yes, that's just it. Pure functional languages are GOFER: GOod For Equational Reasoning. Equational reasoning is a very accessible concept that goes very well with proof techniques like structural induction.
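As a standard textbook illustration of this (not part of the original post): because Haskell definitions are themselves equations, properties can be proved by plain substitution plus structural induction.

```haskell
-- The definitions of (++) and length are themselves equations:
--   []     ++ ys = ys                         -- (1)
--   (x:xs) ++ ys = x : (xs ++ ys)             -- (2)
--
-- Claim: length (xs ++ ys) == length xs + length ys.
-- Base case (xs = []):   length ([] ++ ys) = length ys        by (1).
-- Step (xs = x:xs'):     length ((x:xs') ++ ys)
--                      = length (x : (xs' ++ ys))             by (2)
--                      = 1 + length (xs' ++ ys)
--                      = 1 + length xs' + length ys           by hypothesis
--                      = length (x:xs') + length ys.

-- A runnable spot-check of the law:
prop :: [Int] -> [Int] -> Bool
prop xs ys = length (xs ++ ys) == length xs + length ys
```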

Cheers,

Stefan

Tomasz Zielonka

Nov 22, 2006, 3:33:10 AM
Dan Bensen wrote:
> How is Haskell doing in the real world these days? Has it proven to be
> commercially useful yet, or is it still mostly academic?

From real life: recently I gave a talk at Warsaw University
about a part of my job. Afterwards I discussed the possibility of
writing a thesis on a similar subject with a university assistant.
The funny thing is: I am leaning towards moving more of my code to
Haskell, and he was trying to convince me that C++ may be a better
choice ;-) Are the roles reversing?

And yes, Haskell is useful in my job.

Best regards
Tomasz

Joachim Durchholz

Nov 22, 2006, 3:39:13 AM
Dan Bensen schrieb:

> How is Haskell doing in the real world these days? Has it proven to be
> commercially useful yet, or is it still mostly academic?

From what I gather on haskell-cafe and haskell.org, it's accumulating
useful libraries right now. People are already putting together "How to
set up my first Haskell project" recipes and the like, and this kind of
contribution is considered useful in the community.

In other words, Haskell is approaching critical mass but hasn't exploded
into the mainstream yet.
Unfortunately, my scrying ball is out for maintenance, so I can't
predict when that explosion will happen, or whether it will happen
at all.

Regards,
Jo

Joachim Durchholz

Nov 22, 2006, 3:52:38 AM
rincewind schrieb:

> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this.

Actually it's not "monads", it's "monadic I/O" (which, as far as I
understand it, *describes* I/O activities instead of executing them
directly).

> But why invent some complicated concept,

Monads aren't complicated. Saying "foo is a monad" is just a way of
saying "foo provides a way to associatively stick together functions".

(I don't know whether monads are complicated in category theory, but I
don't care too much anyway.)

> why not just bring in all the traditional stuff (assignments, etc)? How
> monads are better?

As others have said, assignments destroy equational reasoning,
which means that you can't simply substitute a function's definition
wherever the function is called.
(Aliasing problems are one of the more general symptoms of this kind of
illness.)

This makes it far harder to reason about what some piece of code does.
Avoiding assignments and other kinds of state-dependent manipulation in
your internal, non-IO-related code means the code becomes easier to
debug and maintain.
Moving the remaining dangerous stuff into an I/O monad means that (a)
it's all in one place, and you don't have to worry about it elsewhere,
and (b) all the code that isn't directly related to I/O is separated
out, so you have a far smaller amount of dangerous code to tackle at a
time, and any bugs stand out far better.


There's yet another effect. If functions don't have side effects, you
can pass them around as parameters or return them as results, without
having to worry about when and in what contexts they will be called.
This means that writing higher-order functions (functions that call or
return functions) isn't rocket science anymore.
This is relevant because higher-order functions are an immensely
powerful abstraction tool. There have been reports that rewriting code
so that it makes liberal use of higher-order functions shrinks it by a
factor between 5 and 10.
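For instance (an illustrative sketch): with map, filter and function composition, loops that would otherwise each need a hand-written recursion collapse into one-liners.

```haskell
import Data.Char (toUpper)

-- Without higher-order functions, each of these would be its own
-- explicit recursive loop; with them, each is a pipeline of reusable parts.
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens = sum . map (^ 2) . filter even

capitalizeWords :: String -> String
capitalizeWords = unwords . map cap . words
  where cap (c:cs) = toUpper c : cs
        cap []     = []
```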

Regards,
Jo

Torben Ægidius Mogensen

Nov 22, 2006, 3:55:08 AM
rincewind <fa...@not.real> writes:

> What are advantages of purely functional languages?
>
> I mean, functions as values, tail recursion optimization and other
> attributes of functional languages are great, I can easily appreciate
> them. But why prohibit operations with side-effects completely? How
> can the language be better by explicitly lacking some fundamental
> feature?
>

Absence of side effects means that a piece of code has a very clear
interface that is fully described in its type. Furthermore, all
testing can be done by giving arguments and inspecting results, as you
don't have to worry about changes to hidden or unknown state between
calls. This makes it very easy to develop, test and maintain highly
modular code.

> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this. But why invent some complicated concept,
> why not just bring in all the traditional stuff (assignments, etc)?
> How monads are better?

But as you said, you sometimes need to interact with the outside
world, which is stateful. There are various ways functional languages
have allowed this:

- Unrestricted access to side effects and state, limited only by
  programmer discipline (SML, O'Caml, Scheme, ...).

- Linear types mean that you don't have to worry about
  repeatability, as there is no possibility of repeating (Clean).

- Monads encapsulate stateful programming in an explicitly linear
  construction (Haskell).

- Types and effects make effects visible in the type and allow
  encapsulation of effects in local regions, so purely local use of
  effects can be made invisible (FX).

All of the above allow a programmer to write in a C-like style if he so
wishes, but (apart from the first) it is visible to the outside world
when this is done, and you have various means of encapsulating local
effects when repeatability is preserved.
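A sketch of the third option, using the State monad (assuming the standard mtl/transformers library): the "mutable" counter is really just state threaded linearly through the computation, and running it yields a pure function of the initial state.

```haskell
import Control.Monad.State  -- from the mtl package (assumed available)

-- An imperative-looking counter that is really a pure state transformer.
increment :: State Int Int
increment = do
  modify (+ 1)
  get

-- Evaluating it is pure: same initial state, same answer, every time.
twoTicks :: (Int, Int)
twoTicks = evalState (do a <- increment
                         b <- increment
                         return (a, b))
                     0
```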

Torben

Message has been deleted

Joachim Durchholz

Nov 22, 2006, 8:30:12 AM
Stefan Ram schrieb:

>>> But why invent some complicated concept,
>> Monads aren't complicated.
>
> I heard someone saying once something like: »What you have understood
> is easy, what you have not understood is complicated.«

Sure, but monads aren't more complicated than associativity.
Well, they *are* more complicated; they use more symbols and more laws.
But the difference isn't large.

Regards,
Jo

Neo

Nov 22, 2006, 10:34:52 AM
> (0-) (define increment
> -> (set *counter* (+ 1 (value *counter*))))
> increment
>
> (1-) (set *counter* 0)
> 0
>
> (2-) (= (increment) (increment))
> false

I am new to FP so my questions are very basic. What language is the
above written in? What is the above doing? It looks like commands are
being submitted to a parser which returns a value. Is line 0 defining a
function called increment which adds 1 to a variable named counter? How
does the function know *counter* is an input to the function? OK, does
line 1 set the variable counter to 0? Why does counter have asterisks
around it? Is the last line comparing two functions or the output of two
functions? How do you specify a function itself versus the output of a
function? They look the same, so why aren't they equal?

Mark Tarver

Nov 22, 2006, 11:43:12 AM
Neo wrote:
> > (0-) (define increment
> > -> (set *counter* (+ 1 (value *counter*))))
> > increment
> >
> > (1-) (set *counter* 0)
> > 0
> >
> > (2-) (= (increment) (increment))
> > false
>
> I am new to FP so my questions are very basic. What language is the
> above written in?

The language is Qi - www.lambdassociates.org

> What is the above doing? It looks like commands are
> being submitted to a parser which returns a value. Is line 0 defining a
> function called increment which adds 1 to a variable named counter?

Yes.

> Why does counter have asterisks around it.

Purely a convention which I borrowed from Lisp. Lispers asterisk their
global variables so that they stand out.

> Is the last line comparing two functions or the output of two functions?

The output.

> How do you specify a function itself versus the output of a function?

In this example, increment is a zero-place function (it takes no inputs),
and I invoke a function by placing it in parentheses ( ). The output of
the function is what you get by evaluating the whole expression. You
specify the function itself simply by writing its name.

> They look the same so why aren't they equal?

Well, this is why purists do not like side effects. In the expression
(= (increment) (increment)), the first (increment) is evaluated,
adding 1 to *counter* and returning the value of *counter*,
which is 1. The second (increment) repeats the same operation, adding 1
to *counter*, which now has the value 2, and returns the value of
*counter*, which is 2. Since 1 and 2 are not the same, the equality test
returns false.

If you use side-effects then the Law of Self-Identity apparently breaks
down. Actually of course it doesn't. What actually breaks down is the
simple transcription of functional programs into sets of equations.
This means that the axiomatic semantics that you are using is not fit
for the job and you have to use something more complicated that appeals
to the concept of state. But formal methods people would prefer to use
the cleaner simpler method if possible and purely functional languages
allow them to do that.

This said, you can actually make Qi pure if you want. It's very bendy
in many ways. The function destroy will whack even system functions, and
with half a dozen lines you can have a pure language if you desire - at
the expense of convenience, of course.

(0-) (set y 8)
8

(1-) (destroy set)
set

(2-) (set name Mark)
APPLY: undefined function set

And after you've inflicted mayhem, in order to stop tiny hands from doing
what you've done:

(3-) (destroy destroy)
destroy

(4-) (save)

Neo

Nov 22, 2006, 1:31:54 PM
> The language is Qi - www.lambdassociates.org

Interesting.

> (2-) (= (increment) (increment))
> false

Based on your explanation, since function outputs (not functions
themselves) are being compared and the function increments a global
variable, the evaluation of false seems correct. How does this violate
the "Law of Self-Identity"? I no longer see where a thing is being
compared to itself.

Mark Tarver

Nov 22, 2006, 2:20:12 PM

Well, that's why I said "If you use side-effects then the Law of
Self-Identity *apparently* breaks down." It depends on how you frame
the Law of Self-Identity. If it's framed *de re*, it states that
everything is identical to itself, which is always true. If it's framed
*de dicto*, it says that every substitution instance of 'x' in '(= x x)'
is true. Now that's true too, if you are working in a purely functional
language. But it fails in the presence of destructive operations like set.

Steve Schafer

Nov 22, 2006, 2:23:25 PM
On 22 Nov 2006 10:31:54 -0800, "Neo" <neo5...@hotmail.com> wrote:

>Based on your explanation, since function outputs (not functions
>themselves) are being compared and the function increments a global
>variable, the evaluation of false seems correct. How does this violate
>the "Law of Self-Identity"? I no longer see where a thing is being
>compared to itself.

Rather than "law of self-identity," the underlying principle that is
being discussed here is referential transparency (look it up on
wikipedia). The idea is that when examining an expression, replacing a
function application (i.e., a function call) by the result of that
function application should not change the meaning of the expression.

There are two main ways in which referential transparency may be
violated by impure functions:

1) An impure function may have side effects. For example, if the
function looks like this (expressed here in C-like syntax for the
benefit of functional programming newbies):

int launch_and_return_value(int x)
{
    launch_nuclear_missile();  /* the side effect */
    return x;
}

then it's pretty obvious that replacing

launch_and_return_value(42)

with

42

results in a new expression that doesn't have the same meaning as the
original expression.

2) An impure function, applied multiple times to a given set of
arguments, may return different values each time. C's rand() function is
an obvious example of this; by design, it returns a different
pseudo-random value every time you call it.

And so it should also be obvious that you can't replace a call to rand()
with its return value, since it doesn't have a fixed return value.

(This is the type of impurity that is exhibited by the example in the
previous posts; every call to increment() returns a different value.)
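For contrast, a sketch of how Haskell keeps random numbers referentially transparent (using mkStdGen and randomR from the standard random package): the generator state is an explicit value rather than hidden mutable state, so the same seed always yields the same result.

```haskell
import System.Random (mkStdGen, randomR)

-- A die roll as a pure function of the seed: no hidden state,
-- so replacing a call by its result is always valid.
roll :: Int -> Int
roll seed = fst (randomR (1, 6) (mkStdGen seed))
```

To get a *different* roll you must pass along the new generator returned by randomR, which is exactly the state-threading that impure rand() hides.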

Why is referential transparency a big deal? Because it vastly simplifies
the analysis required to answer the question, "What does this program
do?" Both types of "referential opacity" described above involve
statefulness (external state in the first case, and internal state in
the second case). And it is well known that introduction of statefulness
greatly complicates program analysis, because now temporal order becomes
important; not only do you have to know what computations are performed,
you also have to know in what order they're performed. And if you have
multiple disparate parts of the program that are manipulating the same
state, it's very easy to get into a situation where making a change to
the code has an unpredictable ripple effect, because it changes the
order of side effects in difficult-to-analyze ways.

If the behavior of a program is easier to analyze, then the probability
that it does what you intended it to do is increased. That's really what
sets functional programming apart. Things like higher-order and
first-class functions are extremely useful, but I don't consider them to
be as fundamental to the concept of functional programming as
referential transparency.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Benjamin Franksen

Nov 22, 2006, 2:32:40 PM

I am astonished that no one has mentioned this yet:

One of the main reasons to make a language purely functional is so that it
can be lazy. Laziness means, basically, that when a function gets called,
the arguments are not necessarily evaluated before the call, but rather
only when and if their value is actually demanded /inside/ the call. For
instance in Haskell the function 'const' is defined by

const x y = x

When const is called, its second argument never gets evaluated, because to
determine the result of 'const x y' it is not necessary to look at y's
value (as the definition clearly shows).
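That behaviour can be observed directly (a small sketch, using a local copy of const to make the definition explicit): even an undefined second argument is harmless, because it is never demanded.

```haskell
-- Our own copy of const, defined exactly as in the post.
const' :: a -> b -> a
const' x _ = x

-- The second argument is never evaluated, so even bottom is harmless:
example :: Int
example = const' 42 undefined
```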

Laziness and side-effects during evaluation don't go well together. The
order of evaluation in a lazy language is something that is far from
obvious by looking at the code. Thus it would be (almost) impossible to
determine in which order (or whether at all!) the side-effects would
happen.

It is probably a safe bet to say that, had Haskell not been lazy, then
side-effects /would/ have crept into the language, sooner or later.

Ben

Mark Tarver

Nov 22, 2006, 3:09:08 PM

rincewind wrote:
> What are advantages of purely functional languages?

Well the list grows as this thread develops. Here is another one:
memoisation.

Memoisation is a technique used by functional programs to store values
in memory to avoid repeated computations (Fibonacci is a classic
example). An expression (f x1 ...xn) is evaluated and the result R
stored in memory, associated with this expression. The next time (f
x1 ...xn) is evaluated, the computer looks up the previously computed
value and returns it without repeating the computation.

Memoisation does have its challenges, but it becomes much more awkward
in an impure language, where the second evaluation of (f x1 ...xn) may
not necessarily return the same result as the first. In this case
memoisation will actually cause the program to return an incorrect
result.
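The classic lazy-list memoisation of Fibonacci in Haskell sketches this (safe precisely because the function is pure, so cached results can never go stale):

```haskell
-- Each list element is computed at most once and then shared (memoised),
-- turning the exponential naive recursion into a linear computation.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```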

Mark

Joachim Durchholz

Nov 22, 2006, 5:21:57 PM
Benjamin Franksen schrieb:

> I am astonished that no one has mentioned this yet:
>
> One of the main reasons to make a language purely functional is so that it
> can be lazy.

That's just one flavor of functional programming languages.
Non-lazy FPLs do exist.

> Laziness and side-effects during evaluation don't go well together.

Things get worse if you add laziness into the mix, but side effects and
higher-order functions already don't go very well together, so you don't
need laziness to explain that side effects are a bad thing.

> It is probably a safe bet to say that, had Haskell not been lazy, then
> side-effects /would/ have crept into the language, sooner or later.

I think Simon Peyton-Jones would have prevented that. He's been one of
the most influential people in Haskell's design, which is generally
considered a Good Thing, and he's strongly opposed to side effects (and
rightly so IMVHO).

Regards,
Jo

Benjamin Franksen

Nov 22, 2006, 7:36:05 PM
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:

>> One of the main reasons to make a language purely functional is so that
>> it can be lazy.
>
> That's just one flavor of functional programming languages.
> Non-lazy FPLs do exist.

How many of them are /purely/ functional?

>> Laziness and side-effects during evaluation don't go well together.
>
> Things get worse if you add laziness into the mix, but side effects and
> higher-order functions already don't go very well together, so you don't
> need laziness to explain that side effects are a bad thing.
>
>> It is probably a safe bet to say that, had Haskell not been lazy, then
>> side-effects /would/ have crept into the language, sooner or later.
>
> I think Simon Peyton-Jones would have prevented that. He's been one of
> the most influential people in Haskell's design, which is generally
> considered a Good Thing, and he's strongly opposed to side effects (and
> rightly so IMVHO).

Don't get me wrong: I completely agree that purity is a fantastically good
thing, especially since the invention of monads. I also agree that there
are many more technical reasons to prefer a pure language than laziness. It
is just that without the "hair shirt" laziness there's a good chance that
things like monadic I/O would never have been invented in the first place.
This is btw not my own idea but has been expressed by JPJ himself, see this
(very funny and also quite educational) presentation:
http://research.microsoft.com/~simonpj/papers/haskell-retrospective/index.htm

Quote: "So why wear the hair shirt of laziness? [...] The big one: Laziness
keeps you honest - every call-by-value language has given into the siren
call of side-effects. But in Haskell [...this...] just doesn't make
sense... So effects ... are not an option."

Ben

Benjamin Franksen

Nov 22, 2006, 7:48:59 PM
Benjamin Franksen wrote:
> It is just that without the "hair shirt" laziness there's a good chance
> that things like monadic I/O would never have been invented in the first
> place. This is btw not my own idea but has been expressed by JPJ himself,

s/JPJ/SPJ/

Gotta get some sleep real soon...

Ben

Neo

Nov 22, 2006, 9:17:55 PM
> > I no longer see where a thing is being compared to itself.
>
> Well thats why I said "If you use side-effects then the Law of
> Self-Identity *apparently* breaks down." It depends on how you frame
> the Law of Self-Identity . If its framed *de re* it states everything
> is identical to itself - which is always true. If its framed *de
> dicto* it says that every substitution instance of 'x' in '(= x x)' is
> true. Now thats true too, if you are working in a purely functional
> language. But it fails with destructive operations like set.

Ok, so
(= (incr) (incr)) is short for
(= (incr *cntr*) (incr *cntr*)) which is short for
(= (incr 0) (incr 0)) assuming cntr was initially 0, which becomes
(= 1 (incr 1)) when input1 is processed, which becomes
(= 1 2) when input2 is processed

Neo

Nov 22, 2006, 11:19:31 PM
> referential transparency (look it up on wikipedia).

Thanks, though still a bit confusing for me.

Tomasz Zielonka

Nov 23, 2006, 2:45:13 AM
Benjamin Franksen wrote:
> I am astonished that no one has mentioned this yet:

You did ;-)

At this point I have to recommend this classic paper:
http://www.math.chalmers.se/~rjmh/Papers/whyfp.html

Best regards
Tomasz

Tomasz Zielonka

Nov 23, 2006, 2:46:59 AM
Benjamin Franksen wrote:
> It is probably a safe bet to say that, had Haskell not been lazy, then
> side-effects /would/ have crept into the language, sooner or later.

Yes, it happened and it's called OCaml, or SML.

Laziness is one of the biggest reasons for Haskell's existence, isn't
it?

Best regards
Tomasz

Jon Harrop

Nov 23, 2006, 3:25:22 AM
Neo wrote:
>> (0-) (define increment
>> -> (set *counter* (+ 1 (value *counter*))))
>> increment
>>
>> (1-) (set *counter* 0)
>> 0
>>
>> (2-) (= (increment) (increment))
>> false
>
> What is the above doing?

let mutable n = 0

let incr() =
    n <- n + 1;
    n

incr() = incr()

> It looks like commands are being submitted to a parser which returns a
> value.

When you type it into the interactive mode of a language you get your
answers back immediately:

> let mutable n = 0;;
val n : int

> let incr() =
    n <- n + 1;
    n;;
val incr : unit -> int

> incr() = incr();;
val it : bool = false

You could write this in C:

int n=0;

int incr() { return ++n; }

> Is line 0 defining a function called increment which adds 1 to a variable
> named counter?

Yes.

> How does the function know *counter* is an input to the function?

Lisp is doing a symbol lookup at run-time, so you can define everything in a
random order to confuse people.

> Is the last line comparing two functions or the output of two
> functions?

In Lisp, "(f)" means "f()".

> How do you specify a function itself versus the output of a
> function?

Use "f" instead of "(f)".

> They look the same so why aren't they equal?

Every time "incr" is called it returns a different value. So you get:

incr() = incr()
1 = incr()
1 = 2
false

assuming it is evaluated left-to-right, which it often isn't.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists

Tomasz Zielonka

Nov 23, 2006, 4:10:28 AM
Benjamin Franksen wrote:
> Don't get me wrong: I completely agree that purity is a fantastically good
> thing, especially since the invention of monads. I also agree that there
> are many more technical reasons to prefer a pure language than laziness.

I think a pure language without laziness would be too constrained. Maybe
you mean no laziness by default, but with the ability to write parts
using laziness?

> It is just that without the "hair shirt" laziness there's a good
> chance that things like monadic I/O would never have been invented in
> the first place.

Laziness is not a hair shirt, it's a feature! :-)

Best regards
Tomasz

Joachim Durchholz

Nov 23, 2006, 6:48:58 AM
Tomasz Zielonka schrieb:

> Laziness is one of the biggest reasons for Haskell's existence, isn't
> it?

AFAIK Haskell is the common successor of a bunch of lazily-evaluated
languages.
It came into existence when a group of people researching laziness
got together and integrated those features that had proven themselves.

What I find remarkable is that
* all the predecessor languages are essentially extinct
  (not used, no longer developed), and
* no alternative language has sprung up.
Seems like the Haskell team did *some* things right.
Or the design space for lazy languages is rather constrained.
Or designing a good lazy language is so difficult that nobody has
come up with a good alternative yet.

Regards,
Jo

Joachim Durchholz

Nov 23, 2006, 7:10:38 AM
Benjamin Franksen schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> One of the main reasons to make a language purely functional is so that
>>> it can be lazy.
>> That's just one flavor of functional programming languages.
>> Non-lazy FPLs do exist.
>
> How many of them are /purely/ functional?

Purity wasn't part of the OP's question.

One could argue that purity is essential for achieving the full
potential of functional languages, but I doubt that. Effect systems and
mode systems seem to be quite good at isolating side effects.

I do agree that Erlang, the *ML and Lisp languages dropped the ball by
allowing functions to have side effects without even marking them as
"impure" or something.
This doesn't make these languages useless or non-functional; you still
get most of the benefits of functional programming.
In fact, the most tangible comparisons between functional and
non-functional programming style have been provided by the Erlang
camp... and Erlang is firmly based on effects for
programming-in-the-large; it's at the small scale that they do
higher-order functions and side-effect-free programming (and the latter
is only by convention; Erlang does have variables at the process and
system level).

Regards,
Jo

ross...@ps.uni-sb.de

Nov 23, 2006, 7:46:10 AM
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
> > Joachim Durchholz wrote:
> >> Benjamin Franksen schrieb:
> >>> One of the main reasons to make a language purely functional is so that
> >>> it can be lazy.
> >> That's just one flavor of functional programming languages.
> >> Non-lazy FPLs do exist.
> >
> > How many of them are /purely/ functional?
>
> Purity wasn't part of the OP's question.

Funny statement you make there, particularly considering the subject
line... ;-)

- Andreas

Jon Harrop

Nov 23, 2006, 8:12:11 AM
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> How many of them are /purely/ functional?
>
> Purity wasn't part of the OP's question.

But it's in the subject. :-)

> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
> allowing functions to have side effects without even marking them as
> "impure" or something.

I'm not sure that could work. I often wrap impure internals with a pure API.
Would that be marked as impure?

Philippa Cowderoy

Nov 23, 2006, 9:18:04 AM
On Thu, 23 Nov 2006, Jon Harrop wrote:

> Joachim Durchholz wrote:
> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
> > allowing functions to have side effects without even marking them as
> > "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?
>

That depends. It's easily avoidable in Haskell - the lazy version of the
ST monad is particularly neat in that regard.
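A sketch of what the ST monad buys you here (this uses the strict ST monad for simplicity; the lazy variant in Control.Monad.ST.Lazy works analogously): real mutable references inside, a pure API outside, with the type of runST guaranteeing the mutation cannot escape.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef, readSTRef)

-- Imperative in the middle, pure at the edges: callers cannot
-- observe the internal STRef, so sumST is a genuinely pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc
```

So the answer to Jon's question is no: such a wrapper need not be marked impure, because its impurity is statically confined.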

--
fli...@flippac.org

Ivanova is always right.
I will listen to Ivanova.
I will not ignore Ivanova's recomendations.
Ivanova is God.
And, if this ever happens again, Ivanova will personally rip your lungs out!

Jon Harrop

Nov 23, 2006, 11:03:13 AM
Philippa Cowderoy wrote:
> On Thu, 23 Nov 2006, Jon Harrop wrote:
>> Joachim Durchholz wrote:
>> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> > allowing functions to have side effects without even marking them as
>> > "impure" or something.
>>
>> I'm not sure that could work. I often wrap impure internals with a pure
>> API. Would that be marked as impure?
>
> That depends. It's easily avoidable in Haskell - the lazy version of the
> ST monad is particularly neat in that regard.

Can you elaborate? Are you saying that monadic programming can make pure and
impure programming indistinguishable?

Joachim Durchholz

Nov 23, 2006, 1:33:04 PM
ross...@ps.uni-sb.de schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> Joachim Durchholz wrote:
>>>> Benjamin Franksen schrieb:
>>>>> One of the main reasons to make a language purely functional is so that
>>>>> it can be lazy.
>>>> That's just one flavor of functional programming languages.
>>>> Non-lazy FPLs do exist.
>>> How many of them are /purely/ functional?
>> Purity wasn't part of the OP's question.
>
> Funny statement you make there, particularly considering the subject
> line... ;-)

Ooops... you're right.

Joachim Durchholz

Nov 23, 2006, 1:42:39 PM
Jon Harrop schrieb:

> Joachim Durchholz wrote:
>> Benjamin Franksen schrieb:
>>> How many of them are /purely/ functional?
>> Purity wasn't part of the OP's question.
>
> But it's in the subject. :-)
>
>> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> allowing functions to have side effects without even marking them as
>> "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?

Well, I'm not sure how concrete languages handle the situation.

With respect to external interfaces, any compiler would probably have to
depend on programmer-provided declarations. C interfaces don't allow the
compiler to infer purity, so there's room for blatant lies (or errors).
(I know this is an issue for Haskell.)
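
Haskell's FFI shows the problem concretely: the purity of a foreign function is simply asserted in the import declaration, and nothing checks that assertion against the C side. A minimal sketch with two ordinary libc functions:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- The Haskell type of a foreign import is a programmer-provided
-- declaration; the compiler takes it on faith.
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double   -- honest: sin really is pure

foreign import ccall unsafe "stdlib.h rand"
  c_rand :: Int               -- a blatant lie: rand keeps hidden state,
                              -- yet it has been declared pure

main :: IO ()
main = print (c_sin 0)
```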

Within a language, it would be desirable to be able to write impure code
and prove to the compiler that the interface is still pure.
I'm not aware of any language that does this, though. I don't even have
the foggiest idea how to prove that an interface is pure in that
situation...

Regards,
Jo

Benjamin Franksen

unread,
Nov 23, 2006, 2:01:01 PM11/23/06
to
Jon Harrop wrote:
> Philippa Cowderoy wrote:
>> On Thu, 23 Nov 2006, Jon Harrop wrote:
>>> Joachim Durchholz wrote:
>>> > I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>>> > allowing functions to have side effects without even marking them as
>>> > "impure" or something.
>>>
>>> I'm not sure that could work. I often wrap impure internals with a pure
>>> API. Would that be marked as impure?
>>
>> That depends. It's easily avoidable in Haskell - the lazy version of the
>> ST monad is particularly neat in that regard.
>
> Can you elaborate? Are you saying that monadic programming can make pure
> and impure programming indistinguishable?

Well, state /is/ purely functional, if you pass in the state and get a new
state out as (part of) the result. Monads can be used to hide the passing
in and out (commonly called "plumbing"), effectively enabling you to use
the state as if it were an imperative mutable variable, but this stuff
(state monad) still remains pure. The ST monad goes one step further: it
allows the compiler to optimise the computation so that it /really/
destructively updates memory internally. The type system makes sure that
the state can only be used in a strictly single threaded manner, i.e. no
separate copies of the state can evolve independently. Thus, if you runST
such a computation you again get a pure function.
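
A minimal sketch of that last step - internally destructive, externally an ordinary pure function:

```haskell
import Control.Monad.ST
import Data.STRef

-- The accumulator is destructively updated inside the ST computation;
-- runST guarantees none of that mutation can leak out, so sumST is pure.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])  -- 55
```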

Cheers
Ben

Benjamin Franksen

unread,
Nov 23, 2006, 2:17:51 PM11/23/06
to
Tomasz Zielonka wrote:
> Benjamin Franksen wrote:
>> Don't get me wrong: I completely agree that purity is a fantastically
>> good thing, especially since the invention of monads. I also agree that
>> there are many more technical reasons to prefer a pure language than
>> laziness.
>
> I think a pure language without laziness would be too constrained. Maybe
> you mean no laziness by default, but with the ability to write parts
> using laziness?

From what I have heard here or there, there are certain difficulties with
adding laziness to an otherwise call-by-value language, to the effect that
add-on laziness isn't quite as useful as the 'real deal'. I may be wrong
though; maybe some experts can comment on that?

>> It is just that without the "hair shirt" laziness there's a good
>> chance that things like monadic I/O would never have been invented in
>> the first place.
>
> Laziness is not a hair shirt, it's a feature! :-)

Well, of course it is a feature, and one I dearly love. However, I can
vividly imagine how lazy programmers sometimes thought of it as a hair
shirt way back when they didn't have monads for stuff like IO. Ever tried
to write a web server using 'interact'? =:o

Cheers,
Ben

Neo

unread,
Nov 23, 2006, 3:37:44 PM11/23/06
to
> Dr Jon D Harrop, Flying Frog Consultancy

Thanks for the nice explanation, especially its equivalent in terms of
C (wouldn't have expected that from a consultant who calls himself a
Flying Frog!)

I wonder if "(= (incr) (incr))" is valid.
Shouldn't it really be (= (incr (incr x)))?
And allowing x to be global seems to cause problems. The output should
go straight into the next function, right? Seems that keeping track of
an intermediate output is undesirable in fp's scheme of things. Then I
wonder if = is even a valid function because it just has one input and
no second item to compare with in above simplification. I was reading
thru the rest of the post and didn't quite understand a monad. Is there
an equivalent thing in C?

Jon Harrop

unread,
Nov 23, 2006, 6:41:33 PM11/23/06
to
Neo wrote:
>> Dr Jon D Harrop, Flying Frog Consultancy
>
> Thanks for the nice explanation, especially its equivalent in terms of
> C (wouldn't have expected that from a consultant who calls himself a
> Flying Frog!)

We're full of surprises. :-)

> I wonder if "(= (incr) (incr))" is valid.

Consider "++i == ++i" in C.

> Shouldn't it really be (= (incr (incr x)))?
> And allowing x to be global seems to cause problems. The output should
> go straight into the next function, right? Seems that keeping track of
> an intermediate output is undesirable in fp's scheme of things.

It is always undesirable but in FP you have alternatives.

> Then I
> wonder if = is even a valid function because it just has one input and
> no second item to compare with in above simplification. I was reading
> thru the rest of the post and didn't quite understand a monad. Is there
> an equivalent thing in C?

There is not an equivalent thing in C, but you did just describe a monad (to
some extent) when you said that the incr function should return the next
incr function rather than doing things behind the scenes.

In an FP you can make incr return the next incr function along with its
result. For example, in OCaml:

$ ocaml -rectypes
Objective Caml version 3.09.2

# let rec incr n = n, fun () -> incr (n+1);;
val incr : int -> (int * (unit -> 'a) as 'a) = <fun>

Now you can create zero and a purely functional stream of the remaining
numbers (called "rest"):

# let zero, rest = incr 0;;
val zero : int = 0
val rest : unit -> int * 'a as 'a = <fun>

Now you can apply "rest" to get the next number and function:

# let one, rest = rest();;
val one : int = 1
val rest : unit -> int * 'a as 'a = <fun>

and so on.

--

Dr Jon D Harrop, Flying Frog Consultancy

Tomasz Zielonka

unread,
Nov 24, 2006, 12:47:16 AM11/24/06
to
Joachim Durchholz wrote:
> Within a language, it would be desirable to be able to write impure code
> and prove to the compiler that the interface is still pure.
> I'm not aware of any language that does this, though. I even don't have
> the fuzziest idea how to prove that an interface is pure in that
> situation...

Haskell's ST monad gives that for code using imperative arrays and
variables.

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 24, 2006, 3:16:49 AM11/24/06
to
Tomasz Zielonka schrieb:

Yes, but building the ST monad is something for people of Elated Guru state.

I think mere Library Designers should be able to write impure data types
with a pure interface (e.g. splay trees).

And Application Programmers should be able to do that if really needed,
too, though they should be able to get away with handwaving when it
comes to proving that the interfaces are pure (provided that the
compiler can clearly blame the developer when it turns out that the
interface wasn't so pure after all).

Of course, all of this is possible only if there's a well-known and
reasonably simple set of proof techniques for purity.
Anything reasonable in that direction?

Regards,
Jo

Torben Ægidius Mogensen

unread,
Nov 24, 2006, 4:10:46 AM11/24/06
to
Jon Harrop <j...@ffconsultancy.com> writes:

> Joachim Durchholz wrote:
>
>> I do agree that Erlang, the *ML and Lisp languages dropped the ball by
>> allowing functions to have side effects without even marking them as
>> "impure" or something.
>
> I'm not sure that could work. I often wrap impure internals with a pure API.
> Would that be marked as impure?

That depends. Type-and-effect systems can hide purely local side
effects. For example, if you have a function that takes an array and
returns a sorted version of that array (without overwriting the
original), the function might use imperative quicksort (on a copy) to
do the sorting, but the function behaves as if it was purely
functional (identical arguments give identical results), so the sort
function can be given a pure type.
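
A minimal Haskell sketch of such an observationally pure function (using the ST monad rather than an effect system, since that's what's at hand there): all the writes go to a private array, so identical arguments give identical results.

```haskell
import Control.Monad.ST
import Data.Array.ST
import Data.Array (assocs)

-- Counting sort over a mutable unboxed array. The mutation is confined
-- to an array nobody else can see, so the function deserves a pure type.
-- (Assumes every element lies in [0, bound].)
countingSort :: Int -> [Int] -> [Int]
countingSort bound xs = concatMap (\(v, n) -> replicate n v) counts
  where
    counts = assocs $ runSTUArray $ do
      arr <- newArray (0, bound) 0 :: ST s (STUArray s Int Int)
      mapM_ (\x -> readArray arr x >>= writeArray arr x . (+ 1)) xs
      return arr

main :: IO ()
main = print (countingSort 9 [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]
```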

On the other hand, if you have a function that retains state between
calls (such as a gensym function), it will be given an impure type.
Same with i/o functions.

If you combine this with linearity, the sorting function need not even make
a copy of the argument if the type of the sort function indicates that
the argument must be unshared. This just means that you must make an
explicit copy when calling the sort function with a potentially shared
argument.

Linear types can give you pure types to functions that do retain state
between calls, but you need to return a token that has a linear type
and give it as argument to the next call. For example, when you open
a file, you return a handle that must be given as argument to a call
that reads from the file. The handle has a linear type, so it can be
passed to one call only, but that call can return the handle with its
result, so it can be passed to another call. Essentially, the linear
type forces the calls to be threaded sequentially. The type will even
enforce that you close the file, as a linearly typed value _must_ be
used, and the only function that takes a file handle and does not
return it is the close function.

Torben

Philippa Cowderoy

unread,
Nov 24, 2006, 5:03:02 AM11/24/06
to
On Fri, 24 Nov 2006, Joachim Durchholz wrote:

> Tomasz Zielonka schrieb:
> > Joachim Durchholz wrote:
> > > Within a language, it would be desirable to be able to write impure code
> > > and prove to the compiler that the interface is still pure.
> > > I'm not aware of any language that does this, though. I even don't have
> > > the fuzziest idea how to prove that an interface is pure in that
> > > situation...
> >
> > Haskell's ST monad gives that for code using imperative arrays and
> > variables.
>
> Yes, but building the ST monad is something for people of Elated Guru state.
>

Doesn't matter, it's been built already and ships with the compiler.
Unless you mean writing code in it, which is easy - it's pretty much like
working in a subset of IO that only does references.

--
fli...@flippac.org

'In Ankh-Morpork even the shit have a street to itself...
Truly this is a land of opportunity.' - Detritus, Men at Arms

Joachim Durchholz

unread,
Nov 24, 2006, 5:38:23 AM11/24/06
to
Philippa Cowderoy schrieb:

> On Fri, 24 Nov 2006, Joachim Durchholz wrote:
>> Yes, but building the ST monad is something for people of Elated Guru state.
>
> Doesn't matter, it's been built already and ships with the compiler.
> Unless you mean writing code in it, which is easy - it's pretty much like
> working in a subset of IO that only does references.

Sorry, no, I was unclear - I meant "writing *something like* the ST
monad is something for gurus".

Regards,
Jo

Simon Richard Clarkstone

unread,
Nov 24, 2006, 3:23:57 PM11/24/06
to
Jon Harrop wrote:

> Neo wrote:
>>I wonder if "(= (incr) (incr))" is valid.
>
> Consider "++i == ++i" in C.

Actually, that *isn't* valid in C AFAIK. It involves using a variable
in an expression when it is modified elsewhere in the expression, and
that invokes undefined behaviour. The compiler is allowed to do the
obvious thing, or the other obvious thing, or to emit nonsensical object
code, or just to crash horribly if it wants.

The comp.lang.c FAQ can tell you more, but that's not really on-topic in
this group.

--
Simon Richard Clarkstone: s.r.cl?rkst?n?@durham.ac.uk/s?m?n.cl?rkst?n?@
hotmail.com ### "I have a spelling chequer / it came with my PC /
it plainly marks for my revue / Mistake's I cannot sea" ...
by: John Brophy (at: http://www.cfwf.ca/farmj/fjjun96/)

Benjamin Franksen

unread,
Nov 24, 2006, 5:08:40 PM11/24/06
to

What sort of thing /like/ ST monad are you thinking of?

Ben

Jon Harrop

unread,
Nov 24, 2006, 5:24:01 PM11/24/06
to
Simon Richard Clarkstone wrote:
> Jon Harrop wrote:
>> Neo wrote:
>>>I wonder if "(= (incr) (incr))" is valid.
>>
>> Consider "++i == ++i" in C.
>
> Actually, that *isn't* valid in C AFAIK...

Exactly.

ross...@ps.uni-sb.de

unread,
Nov 25, 2006, 3:53:23 AM11/25/06
to
Simon Richard Clarkstone wrote:
> Jon Harrop wrote:
> > Neo wrote:
> >>I wonder if "(= (incr) (incr))" is valid.
> >
> > Consider "++i == ++i" in C.
>
> Actually, that *isn't* valid in C AFAIK.

It is valid C, in the sense that it is legal code. It just has
"undefined behaviour" - like so many things in C.

- Andreas

Joachim Durchholz

unread,
Nov 25, 2006, 7:04:56 AM11/25/06
to
Benjamin Franksen schrieb:

> What sort of thing /like/ ST monad are you thinking of?

None specifically. I just wanted to cover the whole spectrum, from stuff
that is (and should remain) for gurus, down to everyday stuff that the
average application programmer does.

I'm not 100% sure that anything beyond ST is actually needed.
OTOH, as long as designing monads is strictly guru stuff, the number of
people exploring that design space will be limited and the probability
that something useful remains hidden in it is higher.

Regards,
Jo

Benjamin Franksen

unread,
Nov 25, 2006, 7:31:06 PM11/25/06
to
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> What sort of thing /like/ ST monad are you thinking of?
>
> None specifically. I just wanted to cover the whole spectrum, from stuff
> that is (and should remain) for gurus, down to everyday stuff that the
> average application programmer does.

Ok.

> I'm not 100% sure that anything beyond ST is actually needed.
>
> OTOH while designing monads is strictly guru stuff, the number of people
> exploring that design space will be limited and the probability that
> something useful remains hidden in it is higher.

I'm not so sure it must be (or remain) guru stuff. Take a look at this:

http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html

Quote: "All of these [tutorials] introduce monads as something esoteric in
need of explanation. But what I want to argue is that they aren't esoteric
at all. In fact, faced with various problems in functional programming you
would have been led, inexorably, to certain solutions, all of which are
examples of monads. In fact, I hope to get you to invent them now if you
haven't already. It's then a small step to notice that all of these
solutions are in fact the same solution in disguise."

Cheers,
Ben

Tomasz Zielonka

unread,
Nov 26, 2006, 1:06:02 AM11/26/06
to
Benjamin Franksen wrote:
> I'm not so sure it must be (or remain) guru stuff. Take a look at this:
>
> http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html
>
> Quote: "All of these [tutorials] introduce monads as something esoteric in
> need of explanation. But what I want to argue is that they aren't esoteric
> at all. In fact, faced with various problems in functional programming you
> would have been led, inexorably, to certain solutions, all of which are
> examples of monads. In fact, I hope to get you to invent them now if you
> haven't already. It's then a small step to notice that all of these
> solutions are in fact the same solution in disguise."

I have a good example for this: this Wednesday I was at a talk given
by people from a big Polish IT company. They described how functional
programming helps them master the complexity of their business
intelligence code. Then they went on to describe some problems with FP,
like interaction with the outside world or, in their case, influencing
the behavior of the whole application, and showed their solution to this
problem. It was basically a monad with Writer and State capabilities,
but not presented as such.
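
For comparison, a minimal sketch of that design spelled out with mtl's standard pieces (the names here are made up):

```haskell
import Control.Monad.State
import Control.Monad.Writer

-- A computation that can both log (Writer) and influence application
-- state (State) - essentially what the talk reinvented ad hoc.
type App = WriterT [String] (State Int)

step :: Int -> App ()
step n = do
  tell ["adding " ++ show n]   -- Writer capability: append to the log
  modify (+ n)                 -- State capability: update the counter

main :: IO ()
main = print (runState (runWriterT (mapM_ step [1, 2, 3])) 0)
```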

Best regards
Tomasz

Daniel C. Wang

unread,
Nov 26, 2006, 3:09:09 AM11/26/06
to rincewind
I'm glad this is now freely available; it is an excellent book.

http://dspace.library.cornell.edu/handle/1813/83

Direct link..
http://dspace.library.cornell.edu/retrieve/300/Art_of_Discovery_Oliver2.pdf

Page 39 jumps to mind when I think about your question. There really are
no demonstrated advantages to purely functional languages, other than
the ability to publish reams and reams of papers about them.

Of course all those papers are not useless, many interesting ideas came
out of them and by that measure Haskell is a success. If you look at C#
3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
use. Purity was not one of those ideas, because it really buys you nothing.


rincewind wrote:
> What are advantages of purely functional languages?
>
> I mean, functions as values, tail recursion optimization and other
> attributes of functional languages are great, I can easily appreciate
> them. But why prohibit operations with side-effects completely? How can
> the language be better by explicitly lacking some fundamental feature?
>
> I don't think any argument is needed to state that some kind of global
> state-changing operations is needed, sometimes. Existence of monads in
> Haskell kind of proves this. But why invent some complicated concept,
> why not just bring in all the traditional stuff (assignments, etc)? How
> monads are better?

Mark Tarver

unread,
Nov 26, 2006, 5:37:34 AM11/26/06
to

> Purity ...... really buys you nothing.

> > What are advantages of purely functional languages?

I think this is roughly right. The advantages cited in this thread -
memoisation and formal methods - don't really compensate very well for
what you lose. The fact is that very very few functional programs are
formally verified anyway. So essentially you're giving up a lot for
the right to exercise a facility which you're not likely to want to
use.

It's interesting that one of the most powerful and successful formal
verification systems is the Boyer-Moore prover, and this was implemented
in and for Lisp (which is as impure as you like). The Boyer-Moore people
simply chose to work in the pure functional subset of Lisp.

Mark

Joachim Durchholz

unread,
Nov 26, 2006, 7:10:11 AM11/26/06
to
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
>> OTOH while designing monads is strictly guru stuff, the number of people
>> exploring that design space will be limited and the probability that
>> something useful remains hidden in it is higher.
>
> I'm not so sure it must be (or remain) guru stuff. Take a look at this:
>
> http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html

Just the kind of introduction that I've been trying to locate (or even
write myself).

> Quote: "All of these [tutorials] introduce monads as something esoteric in
> need of explanation. But what I want to argue is that they aren't esoteric
> at all. In fact, faced with various problems in functional programming you
> would have been led, inexorably, to certain solutions, all of which are
> examples of monads. In fact, I hope to get you to invent them now if you
> haven't already. It's then a small step to notice that all of these
> solutions are in fact the same solution in disguise."

Yes, that has been my thinking almost from the moment I first
encountered them.

Let me qualify my statement a bit:
Designing something that happens to be a monad is a relatively normal
activity.
However, making sure it is a monad (i.e. verifying that the monad laws
hold, without exception, including in the presence of nonterminating
parameters) seems to be guru-ish stuff to me. (Unless and until proof
exploration/checking systems become commonplace tools for the
programmer, that is. And that's not very far away, actually: type
systems are an example of a proof checker.)
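
For concreteness, the proof obligation is the three monad laws. A 'fast and loose' spot check for Maybe is easy - and, tellingly, it silently skips exactly the nonterminating cases:

```haskell
-- Spot checks of the three monad laws for Maybe at a few sample points.
-- A real proof must hold for all values, including bottoms.
f, g :: Int -> Maybe Int
f x = if x >= 0 then Just (x + 1) else Nothing
g x = Just (x * 2)

lawsHoldAt :: Maybe Int -> Bool
lawsHoldAt m =
  (return 3 >>= f) == f 3                             -- left identity
    && (m >>= return) == m                            -- right identity
    && ((m >>= f) >>= g) == (m >>= \x -> f x >>= g)   -- associativity

main :: IO ()
main = print (all lawsHoldAt [Nothing, Just (-1), Just 5])  -- True
```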

Regards,
Jo

Joachim Durchholz

unread,
Nov 26, 2006, 7:19:14 AM11/26/06
to
Mark Tarver schrieb:

> The fact is that very very few functional programs are
> formally verified anyway. So essentially you're giving up a lot for
> the right to exercise a facility which you're not likely to want to
> use.

Formal proofs are just a tiny part of all the things that purity helps
with. And the majority of these things are firmly in the realm of
day-to-day design & programming.

Regards,
Jo

Benjamin Franksen

unread,
Nov 26, 2006, 10:41:43 AM11/26/06
to
Joachim Durchholz wrote:
> Designing something that happens to be a monad is a relatively normal
> activity.
> However, making sure it is a monad (i.e. verifying that the monad laws
> hold, without exception, including in the presence of nonterminating
> parameters) seems to be guru-ish stuff to me.

Hm, yes, right. I haven't really been thinking about the proof obligation.
Maybe "guru" is still slightly exaggerated. I mean, writing such proofs is
surely not your joe average programmer's daily task. But if you forget (for
a moment) bottoms and such and just do the 'fast and loose' equational
reasoning, it is really not that hard. OTOH, I wonder how many widely used
monads would actually pass a 'hard test'. For instance, 'seq' seems to
break the monad laws; see e.g. this discussion:

http://www.mail-archive.com/haskell%40haskell.org/msg10867.html

> (Unless and until proof
> exploration/checking systems become commonplace tools for the
> programmer, that is. And that's not very far away, actually: type
> systems are an example of a proof checker.)

Yup (looking forward to the first optimizing Epigram2 compiler... ;-)

Ben

Benjamin Franksen

unread,
Nov 26, 2006, 11:16:16 AM11/26/06
to
Daniel C. Wang wrote:
> Of course all those papers are not useless, many interesting ideas came
> out of them and by that measure Haskell is a success. If you look at C#
> 3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
> use. Purity was not one of those ideas, because it really buys you
> nothing.

I beg to differ. There are enormous practical advantages to purity. I
already mentioned laziness. Another area where it is helpful is
concurrency, especially when combined with stuff like software
transactional memory (STM).

More generally, pure languages like Haskell make it easier to reason about
complex programs, since a large and clearly identifiable part of the code
(that which is not in the IO monad) follows a much simpler semantics. This
point may be hard to demonstrate in an objective way. Still, my preference
for pure languages is quite strong and I don't think this is merely because
of their theoretical beauty, but at least to the same degree due to the
ease with which I can understand and write complex algorithms in them.
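
A minimal STM sketch: because a transaction body can touch nothing but TVars, the runtime may roll it back and retry it at will, which is what makes atomic blocks compose:

```haskell
import Control.Concurrent.STM

-- An atomic transfer between two accounts. The STM type guarantees the
-- body has no effects beyond TVar reads and writes, so retrying it on
-- conflict is always safe.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```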

Ben

Daniel C. Wang

unread,
Nov 26, 2006, 12:45:14 PM11/26/06
to Benjamin Franksen
Benjamin Franksen wrote:
> Daniel C. Wang wrote:
>> Of course all those papers are not useless, many interesting ideas came
>> out of them and by that measure Haskell is a success. If you look at C#
>> 3.0 and LINQ you'll discover many ideas borrowed from Haskell and put to
>> use. Purity was not one of those ideas, because it really buys you
>> nothing.
>
> I beg to differ. There are enormous practical advantages to purity. I
> already mentioned laziness. Another area where it is helpful is
> concurrency, especially when combined with stuff like software
> transactional memory (STM).

C# 3.0 and LINQ are built on top of the idea of a lazy list. They just
call them Enumerators and don't bother with purity.

STM is some pie in the sky research idea that's generated yet more
papers but really hasn't solved any real problems. They've just made it
much easier to write slow concurrent programs. Much like garbage
collectors don't really solve the resource management problem. They just
shuffle the problems to different part of the system where they can be
solved in more centralized ways.

> More generally, pure languages like Haskell make it easier to reason about
> complex programs, since a large and clearly identifiable part of the code
> (that which is not in the IO monad) follows much a simpler semantics. This
> point may be hard to demonstrate in an objective way.

If you can't demonstrate this in an objective way, you're not doing real
science. If I look at the evidence and the experience of the
proof-carrying code community, I'd say this statement is just false. I
mean really, when you can tractably prove things about raw machine code,
I don't see how purity would have made things significantly simpler.
Also, any high-level proof is not what you really care about; the proofs
that matter are the ones at the machine level.


> Still, my preference
> for pure languages is quite strong and I don't think this is merely because
> of their theoretical beauty, but at least to the same degree due to the
> ease with which I can understand and write complex algorithms in them.

There are many advanced features in Haskell that make it a nice
language. Enforced purity would not be on my list. I mean nothing is
preventing you from writing your algorithms in a pure subset of ML,
doing the nice proofs in your head and moving on.

BTW It's my opinion that Haskell is not purely-functional because it
allows non-termination as an implicit effect.

The simply-typed lambda calculus would be an example of a pure
functional language: because everything is strongly normalizing, you can
use it as a logic. Because Haskell admits non-termination in its
"purely-functional" subset, you can't use that subset as a logic;
non-termination should be wrapped up in a monad.

There are many good reasons to like purity, but the way it is
implemented and designed into Haskell has bought us really nothing. So I
should qualify my statement by saying "purely-functional" programing
languages like Haskell really provide us no advantages with their sort
of "purity".

Philippa Cowderoy

unread,
Nov 26, 2006, 1:39:56 PM11/26/06
to
On Sun, 26 Nov 2006, Daniel C. Wang wrote:

> There are many good reasons to like purity, but the way it is implemented and
> designed into Haskell has bought us really nothing.

It may have bought you nothing - many Haskell coders (myself included)
find value in it!

--
fli...@flippac.org

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.

Daniel C. Wang

unread,
Nov 26, 2006, 1:48:54 PM11/26/06
to Philippa Cowderoy
Philippa Cowderoy wrote:
> On Sun, 26 Nov 2006, Daniel C. Wang wrote:
>
>> There are many good reasons to like purity, but the way it is implemented and
>> designed into Haskell has bought us really nothing.
>
> It may have bought you nothing - many Haskell coders (myself included)
> find value in it!


You guys probably could have gotten by in OCaml and not really noticed.
Well, maybe you would have noticed that your programs ran faster... and
could be debugged...

There are many things in Haskell; purity is just one, and it's my
experience that I've never missed it in the other languages I've used.


If you can show me a concrete example where purity was key (not just
laziness), that might be more illustrative.

Joachim Durchholz

unread,
Nov 26, 2006, 3:06:05 PM11/26/06
to
Daniel C. Wang schrieb:

> Much like garbage
> collectors don't really solve the resource management problem. They just
> shuffle the problems to different part of the system where they can be
> solved in more centralized ways.

Sorry, you're firmly in the realm of sweeping overgeneralizations here.

Please take the time to bolster your claims with arguments before taking
them as analogies for other dismissive statements.

>> More generally, pure languages like Haskell make it easier to
>> reason about complex programs, since a large and clearly
>> identifiable part of the code (that which is not in the IO monad)
>> follows much a simpler semantics. This point may be hard to
>> demonstrate in an objective way.
>
> If you can't demonstrate this in an objective way, you're not doing
> real science.

The original point is easy to demonstrate.

> If I look at the evidence and the experience of the proof
> carrying code community, I'd say this is statement is just false. I mean
> really, when you can tractably prove things about raw machine code I
> don't see how purity would have made things significantly simpler.

Purity is for reasoning about properties of source code, no more, no
less. Whether it's still relevant at machine level is an entirely
different issue, and has nothing to do with the question whether purity
helps programmers.
Besides, you're ignoring the advantages of purity when reasoning
informally. Purity eliminates a whole host of caveats like sequencing
and aliasing, each of which makes reasoning more difficult, more
burdensome, and/or more error-prone. That's independent of whether that
reasoning is formal, semi-formal, or informal.

I think PCC points in just the other direction: if having pure stuff were
so irrelevant, then why are the assertions that PCC proves all declarative?

> Also, any high-level proof is not what you really care about, the
> proofs that matter are what happen at the machine level.

At least for me, arguing about programs at the source level is what I do
about 90% of my programming time.
I do that when hunting for bugs, when changing the code, or about any
other activity that involves an editor and source code (and a lot of
whiteboard activity as well).
I do care about machine-level proofs, as a means to validate compilers,
or code that got loaded from an untrusted source. But that's specialist
concerns; in my day-to-day job, I don't write compilers or
dynamically-loading systems.

Regards,
Jo

Tomasz Zielonka

unread,
Nov 26, 2006, 3:38:23 PM11/26/06
to
Daniel C. Wang wrote:
> You, guys probably could have gotten by in OCaml and not really noticed.

I used OCaml before Haskell. I liked it at the beginning, but as time
went, I was liking it less and less. One of the biggest reasons was that
due to strictness, I was penalized when I used some obvious functional
code, like composing many list transformations, because every
intermediate list was fully constructed before being passed to the next
stage. I could use lazy lists in OCaml, but it was better to switch to
Haskell at this point.
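
A minimal sketch of the kind of pipeline I mean - in Haskell each element flows through all the stages on demand, and it even works on infinite input:

```haskell
-- None of the intermediate lists suggested by filter/map/take is ever
-- built in full; laziness interleaves all the stages.
pipeline :: [Int] -> Int
pipeline = sum . map (* 2) . filter even . take 10

main :: IO ()
main = print (pipeline [1 ..])  -- 60, despite the infinite input
```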

> Well, maybe you would have noticed your programs ran faster.....

Efficiency is not everything. OCaml's focus on efficiency was one of the
things that drove me away from it. Some obvious functions, like string
implode / explode, were not added to the standard libraries, because the
developers of OCaml feared that someone could use them and get
inefficient programs. I don't know about you, but I don't like being
treated like a child.

> and could be debugged...

That's just silly. I am not sure I should even bother replying to you.

Best regards
Tomasz

Joachim Durchholz

unread,
Nov 26, 2006, 4:22:29 PM11/26/06
to
Benjamin Franksen schrieb:

> Joachim Durchholz wrote:
>> Designing something that happens to be a monad is a relatively normal
>> activity.
>> However, making sure it is a monad (i.e. verifying that the monad laws
>> hold, without exception, including in the presence of nonterminating
>> parameters) seems to be guru-ish stuff to me.
>
> Hm, yes, right. I haven't really been thinking about the proof obligation.
> Maybe "guru" is still slightly exaggerated. I mean, writing such proofs is
> surely not your joe average programmer's daily task. But if you forget (for
> a moment) bottoms and such and just do the 'fast and loose' equational
> reasoning, it is really not that hard.

Agreed.

> OTOH, I wonder how many widely used monads wouldn't actually fail any
> 'hard test'. For instance, 'seq' seems to break monad laws,

Yes, that's why I included "nonterminating" in the list of things to
prove :-)

Regards,
Jo