how useful is lazy-by-default and why?


Ben Hutchison

Jul 13, 2009, 8:23:23 AM
to fpu...@googlegroups.com
I ask this question out of curiosity as a FP newcomer: how useful is
lazy-by-default and why?

I've recently encountered two superficially conflicting pieces of evidence:

- Paul Chiusano makes a decent argument that in Scala's
strict-default, lazy-optional design, use of laziness is fairly
limited and not very composable
[http://pchiusano.blogspot.com/2009/05/optional-laziness-doesnt-quite-cut-it.html].

The problem is that if any function declared to lazily evaluate its
arguments is then wrapped by some other client code, the
default strictness of the wrapping code will likely force the
evaluation of the arguments at call time.
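To make that concrete, here's a rough sketch of the problem in Python rather than Scala (all names are made up; zero-argument lambdas stand in for by-name parameters):

```python
# Model a "lazy" argument as a zero-argument lambda (a thunk).
evaluations = []

def expensive():
    evaluations.append("evaluated")
    return 42

# A function declared to take its arguments lazily: it forces only one.
def lazy_if(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

# Strict client code wrapping lazy_if, but taking plain values:
# the arguments are evaluated at *its* call site, before lazy_if
# ever gets the chance to decline.
def strict_wrapper(cond, then_value, else_value):
    return lazy_if(cond, lambda: then_value, lambda: else_value)

# Direct lazy call: only the taken branch is evaluated.
lazy_if(True, expensive, lambda: 0)
assert evaluations == ["evaluated"]

# Through the strict wrapper: both branches are forced up front.
evaluations.clear()
strict_wrapper(True, expensive(), expensive())
assert evaluations == ["evaluated", "evaluated"]
```

The laziness of `lazy_if` is lost as soon as one strict layer intervenes, which is (as I read it) exactly Chiusano's composability complaint.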

Presumably, there's some empirical evidence from e.g. Scheme or OCaml
showing, I'm guessing, that this isn't a big problem in practice? Does
that simply indicate laziness is rarely necessary, or does it subtly
limit these kinds of languages?


- Meanwhile, the DDC [http://www.haskell.org/haskellwiki/DDC] project
is abandoning lazy-by-default in Haskell. Is this because, while DDC
would have liked to be lazy, it wanted mutability more, and was
forced to choose one because they won't mix nicely? Or for another
reason?


Is there any consensus on the value, or lack-thereof, of laziness?

An impression I've got thus far in my travels goes something like "We
wanted to make Haskell a lazy language, so it had to be pure.
Somewhere along the way, we realised the purity was much more valuable
than the laziness". Accurate?

-Ben


Paul Bone

Jul 13, 2009, 7:21:55 PM
to fpu...@googlegroups.com
On Mon, Jul 13, 2009 at 10:23:23PM +1000, Ben Hutchison wrote:
>
> Is there any consensus on the value, or lack-thereof, of laziness?

My own point of view on this (as it can be subjective) is that laziness
is useful, but languages should be eager by default. Laziness has a
non-trivial runtime cost that I would rather not pay everywhere in my
program for a feature that I use sparingly, if at all.

I understand that the GHC developers have fought against the runtime cost
of lazy evaluation for a long time now and have done well to reduce this
cost. I believe it is still non-negligible, but I haven't
measured it. I also believe that GHC has an analysis that finds thunks
that it knows _will_ be 'entered' if 'created' (these are GHC's words),
so that an optimisation can evaluate them eagerly.


Ralph Becket

Jul 13, 2009, 7:46:40 PM
to fpu...@googlegroups.com
My experience of using and teaching Haskell is that default laziness gets in the way much more than it should.  There are four main problems:
(1) it is hard to work out where your time and memory resources are being consumed in a fully lazy program;
(2) it is hard to know where you have to force eager evaluation in order to make your program not suck;
(3) it makes debugging much more difficult;
(4) it's quite a hard concept for students to grok - it seems eagerness is more "natural".
In a nutshell, I think the lazy performance model is too complicated for a general purpose programming language.

-- Ralph

Bernie Pope

Jul 14, 2009, 1:40:14 AM
to fpu...@googlegroups.com

On 13/07/2009, at 10:23 PM, Ben Hutchison wrote:

> I ask this question out of curiosity as a FP newcomer: how useful is
> lazy-by-default and why?

This is a good question, which has no definitive answer.

First, it is important to get some of the concepts and terminology
sorted out.

Let _|_ (bottom) denote the (abstract) value of a divergent term
(something which has no normal form - reduction of it never terminates
- an infinite loop).

A function f is strict iff:

f _|_ = _|_

That is, given a divergent argument, the function yields a divergent
result. The same idea can be extended to functions of multiple
arguments, where we would say "f is strict in its nth argument ..."

For example (in Haskell notation):

foo x = x
bar x = True

foo is strict, bar is not strict (regardless of evaluation order).
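Under lazy evaluation this distinction is directly observable. A small illustrative sketch in Python, modelling _|_ as a thunk that raises when forced (this is only a model of divergence, not a claim about any real implementation):

```python
def bottom():
    # Stand-in for _|_ : forcing it "diverges" (modelled as an exception).
    raise RuntimeError("_|_")

# Arguments are passed as thunks, so evaluation is non-strict by default.
foo = lambda x: x()     # foo x = x    -- strict: the result forces the argument
bar = lambda x: True    # bar x = True -- not strict: the argument is never forced

assert bar(bottom) is True   # bar _|_ = True, i.e. not _|_

diverged = False
try:
    foo(bottom)
except RuntimeError:
    diverged = True
assert diverged              # foo _|_ = _|_
```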

A programming language has strict semantics if it employs an
evaluation order which "makes" every procedure/function strict in
every argument. Call-by-value evaluation (eager) is the most common
example of an evaluation order which gives rise to strict semantics.

The converse of strict is "non-strict". Non-strict languages permit
semantic equations like this (but strict languages do not):

g _|_ != _|_

(!= means not equals).

Call-by-name evaluation is the most common example of an evaluation
order which gives rise to non-strict semantics. Lazy evaluation is an
"improvement" of call-by-name which requires that the evaluation of
argument terms is shared (not repeated) in the reduction of a function
application. You do not have to be lazy to be non-strict.

Many people say "lazy" when they mean "non-strict". For example, in
eager languages you can obtain non-strict behaviour by wrapping terms
inside lambda abstractions. Most languages do not do evaluation under
lambda abstractions, so the evaluation of the original term is
suspended until the lambda is applied to a (dummy) argument. Note that
this does not normally result in the same amount of sharing as lazy
evaluation. Call-by-name can lead to exponential explosion in re-
evaluation of argument terms, so it is not always a sufficient
substitute for lazy evaluation (this is a point that many people
forget in the debate).
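The difference in sharing can be sketched in Python, with a hand-rolled memoising `delay` (hypothetical helper names):

```python
count = 0

def expensive():
    global count
    count += 1          # record how many times evaluation actually happens
    return 21

def delay(f):
    # Lazy evaluation: memoise the thunk so it is evaluated
    # at most once, and the result is shared thereafter.
    cell = []
    def force():
        if not cell:
            cell.append(f())
        return cell[0]
    return force

# A function that uses its (thunked) argument twice.
double = lambda thunk: thunk() + thunk()

# Call-by-name: the bare thunk is re-evaluated on every use.
count = 0
assert double(expensive) == 42
assert count == 2

# Lazy: evaluated once, the result shared.
count = 0
assert double(delay(expensive)) == 42
assert count == 1
```

Nest a few such `double`s and the call-by-name evaluation count doubles at each level, which is the exponential blow-up mentioned above.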

Haskell is a non-strict language. It does not require that an
implementation use lazy evaluation. In fact, it says nothing about
what evaluation order to use. You could say that Haskell specifies its
denotational semantics, but not its operational semantics. Most
Haskell implementations use lazy evaluation, but some don't: see
Optimistic Evaluation and Eager Haskell.

It was thought that non-strict semantics was theoretically cleaner
than strict semantics. People hoped that non-strict languages would be
easier to reason about formally, and this was one of the main driving
forces in the early days of Miranda and its forebears which led to
Haskell. So far it seems that non-strict semantics has not made formal
reasoning significantly easier. It seems that once you allow non-
termination, lots of nice properties are eroded.

Non-strict semantics does give more terminating programs. In practice
this means that programs written in non-strict languages are more
modular and declarative (see: Why functional programming matters, by
John Hughes). My experience is that this is true. It also tends to
make data structures more useful, since you can take advantage of the
duality between structure and control flow. For instance, list
processing in Haskell has a co-routining behaviour.
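Python generators give a rough feel for this co-routining (an analogy only, not Haskell's actual mechanism): the consumer drives how much of the "infinite" structure is ever built.

```python
from itertools import islice

def naturals():
    # An "infinite list": nothing is computed until a consumer demands it.
    n = 0
    while True:
        yield n
        n += 1

# The consumer drives the producer: only five elements are ever built.
first_five = list(islice(naturals(), 5))
assert first_five == [0, 1, 2, 3, 4]

# Pipelines stay lazy too: doubling is interleaved with consumption.
doubled = (n * 2 for n in naturals())
assert list(islice(doubled, 3)) == [0, 2, 4]
```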

The main arguments against non-strict languages are that they are
harder to implement efficiently on conventional hardware, and they are
harder to reason about (for humans) operationally than strict
languages. People say that you can always regain non-strictness when
you need it by "thunking" with lambdas (but remember the possibility
of exponential blow up), furthermore non-strictness is reputed to be
only essential in very limited circumstances (contentious issue which
is hard to measure).

> Ive recently encountered 2 superficially conflicting pieces of
> evidence:
>
> - Paul Chiusano makes a decent argument that in Scala's
> strict-default, lazy-optional design, use of laziness is fairly
> limited and not very composable
> [http://pchiusano.blogspot.com/2009/05/optional-laziness-doesnt-quite-cut-it.html
> ].

I think that Scala has optional "call-by-name" semantics, which is
done by automating the thunking under lambdas (but I haven't checked
closely). Call-by-name is less useful than lazy evaluation because of
the difference in sharing.

> Presumably, there's some empirical evidence from e.g. Scheme or OCaml
> showing, I'm guessing, that this isn't a big problem in practice? Does
> that simply indicate laziness is rarely necessary, or does it subtly
> limit these kinds of languages?

It is tough to answer this question because strict functional
languages tend to encourage a different style of programming than
non-strict languages (they also tend to include unrestricted
side effects, though Mercury is an exception), making comparisons difficult.

> - Meanwhile, the DDC [http://www.haskell.org/haskellwiki/DDC] project
> is abandoning lazy-by-default in Haskell. Is this because, while DDC
> would have liked to be lazy, it wanted to be mutable more, and was
> forced to choose one because they won't mix nicely? Or for another
> reason?

It is mostly that lazy evaluation and mutation effects are hard for
humans to comprehend, so they are curtailed in DDC.

> Is there any consensus on the value, or lack-thereof, of laziness?

No, I do not believe there is a consensus.

> An impression Ive got thus far in my travels goes something like "We
> wanted to make Haskell a lazy language, so it had to be pure.
> Somewhere along the way, we realised the purity was much more valuable
> than the laziness". Accurate?

I think Simon Peyton Jones has said something to that effect in his
retrospective talk on Haskell, but I do not think that this opinion
represents the overall Haskell community. But it is probably true that
Haskell's type system is more significant than its non-strict semantics.

My opinion is that any default evaluation order is a compromise. We
still have a long way to go before we can reconcile the needs of
procedural and declarative reasoning about programs. People who favour
strict languages tend to err on the side of procedural reasoning,
whereas people who favour non-strict languages tend to err on the side
of declarative reasoning. Neither camp has the silver bullet for
software construction, and each has its own share of problems.

Many people have wished for a strict version of Haskell, but I am not
aware of any detailed study which has investigated what effect this
would have on programming style, or on how much of the existing
software would have to be re-written. It would be an interesting
research project.

Cheers,
Bernie.

Les Kitchen

Jul 14, 2009, 2:00:02 AM
to fpu...@googlegroups.com

> > I ask this question out of curiosity as a FP newcomer: how useful is
> > lazy-by-default and why?
> This is a good question, which has no definitive answer.

Good answer, Bernie.

> Call-by-name evaluation is the most common example of an evaluation
> order which gives rise to non-strict semantics. Lazy evaluation is an
> "improvement" of call-by-name which requires that the evaluation of
> argument terms is shared (not repeated) in the reduction of a function
> application. You do not have to be lazy to be non-strict.

Somewhere in this space is Scheme's delay/force, which gives you
explicit laziness (with the sharing) in a default strict
(more or less call-by-value) language.

-- Smiles, Les.

Bernie Pope

Jul 14, 2009, 2:56:15 AM
to fpu...@googlegroups.com

On 14/07/2009, at 4:00 PM, Les Kitchen wrote:

> Somewhere in this space is Scheme's delay/force, which gives you
> explicit laziness (with the sharing) in a default strict
> (more or less call-by-value) language.

Yes, there's quite a lot of wiggle room in the strict/non-strict
spectrum, and most real programming languages have elements of both.
Short-circuit evaluation is a typical example of non-strictness in
otherwise strict languages.
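For example, in Python:

```python
def boom():
    raise RuntimeError("should never be evaluated")

# `and` and `or` are non-strict in their second operand:
# it is evaluated only if the first operand doesn't decide the result.
assert (False and boom()) is False   # boom() is never called
assert (True or boom()) is True      # boom() is never called

# The usual practical payoff: guards that prevent errors.
x = 0
assert (x != 0 and 10 // x > 1) is False   # no ZeroDivisionError
```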

Bernie.

Lee Naish

Jul 14, 2009, 3:54:48 AM
to fpu...@googlegroups.com

On 13/07/2009, at 10:23 PM, Ben Hutchison wrote:
> I ask this question out of curiosity as a FP newcomer: how useful is
> lazy-by-default and why?

A few random points:

This is a bit like asking how useful is non-deterministic-by-default in
logic programming languages.

For programming in the small, lazy evaluation, non-determinism and many
other fancy evaluation mechanisms (coroutining, intelligent backtracking,
tabling/memoisation, to name a few) are really neat. You can write small,
elegant programs which implement clever algorithms.

For programming in the large, the tradeoff is sometimes different.
Reducing complex interactions between program components is a big issue,
and the fancy evaluation mechanisms tend to make this worse. Space
leaks can be elevated from a niggling worry to a major headache.

One of the strong points of declarative languages is automatic memory
management. However, non-determinism in Prolog and, to a lesser extent,
lazy evaluation in functional languages can lead to subtle space leaks,
reducing this benefit.

lee

Les Kitchen

Jul 14, 2009, 7:01:58 PM
to fpu...@googlegroups.com

> Yes, there's quite a lot of wiggle room in the strict/non-strict
> spectrum, and most real programming languages have elements of both.
> Short-circuit evaluation is a typical example of non-strictness in
> otherwise strict languages.

Mmmmm... It's interesting (though probably not surprising) how
much of this shows up in mainstream languages when you look at
it the right way.

-- Les.

Kevin Glynn

Jul 14, 2009, 7:26:28 PM
to fpu...@googlegroups.com
On Tue, Jul 14, 2009 at 12:40 AM, Bernie Pope<bj...@csse.unimelb.edu.au> wrote:
> On 13/07/2009, at 10:23 PM, Ben Hutchison wrote:
>> An impression Ive got thus far in my travels goes something like "We
>> wanted to make Haskell a lazy language, so it had to be pure.
>> Somewhere along the way, we realised the purity was much more valuable
>> than the laziness". Accurate?
>
> I think Simon Peyton Jones has said something to that effect in his
> retrospective talk on Haskell, but I do not think that this opinion
> represents the overall Haskell community.
> ....

I think this means 'purity was much more valuable for generating
research than laziness' ;)

k
