Lazy languages considered questionable

Vance P. Morrison

Jul 15, 1992, 10:28:20 AM
Hello,

I have read many a paper on functional languages that begins with the
premise that lazy evaluation is good (that is, better than strict
evaluation). For some time I accepted this premise basically on
'faith', but more and more I am questioning this assumption. This
note is meant to start a technical dialog on this question, which
I hope will help resolve the issue in my mind. (But you can see
my present opinion in the title of this message.)

I think everyone agrees that lazy evaluation is harder to implement
efficiently on conventional computers, but that is not the kind of
problem I wish to discuss (one could imagine a good compiler removing
this difficulty). On the other hand, since it is more difficult
to implement lazy evaluation, we have to have good reasons for going
to this extra trouble. Thus my first question is

1) What are the advantages of lazy evaluation that make
it worth the extra trouble to implement?

To the best of my knowledge the only advantage of lazy evaluation
is the fact that it allows the manipulation of potentially infinite
data structures. Now this advantage is only worth it if it actually
gets used. I have only seen two basic uses of this feature.

1) Generators: With infinite lists you can generate
a sequence without having to specify bounds.
The classic example of this is a sieve function that
generates the infinite stream of primes (see the
sketch after this list).

2) I/O: This seems to be the use driving the need for
lazy evaluation.
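
For concreteness, the sieve in question is usually written along these
lines in a lazy language; a minimal Haskell sketch of the folklore
program:

    -- An unbounded list of primes; no bounds are ever specified.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    -- The consumer decides how much gets evaluated:
    firstTen :: [Integer]
    firstTen = take 10 primes   -- [2,3,5,7,11,13,17,19,23,29]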

I consider the first use insufficient to warrant the need for lazy
languages (since it seems that generators can be programmed in reasonably
elegant ways without lazy evaluation). My experience with lazy
lists for I/O has left me unimpressed. Most I/O involves a handshaking
protocol which depends upon causality relationships (that is, the request
causes the system to send a reply). It seems that this relationship
is at BEST implicit in lazy list I/O, and thus the code is very convoluted
and hard to read. In addition, the semantics of the lazy code no longer
has its simple functional reading but must be understood in the context
of the exact evaluation order. This is just the kind of detail functional
languages were designed to remove!!

The above two examples are the only ones I could come up with. If
anyone has others, or wishes to dispute my argument that generators
are not a sufficient reason for laziness or that lazy I/O creates
more problems than it solves, please jump right in.

---------------------------------------------------------------------
Thus at present I can find no compelling reason to go to the extra
effort to implement lazy languages. In addition I have a reason
for NOT using them. When I use a lazy language AND really use
the fact that it is lazy, I must constantly be worried about the
order of evaluation. On the other hand, for 99% of all programs
(that is, programs that have the same interpretation in a strict
and a lazy interpreter) I don't have to worry about the order of
evaluation. Thus it seems to me that since we can get what we
want done in a more abstract environment where the order of evaluation
basically doesn't matter, that is what we should do.

What's more, that is in fact what is done right now. Even lazy
language programmers mostly write code that would work under
strict evaluation. When a programmer does write something that requires
laziness, he has to check very carefully that all the functions
he calls with his infinite data structure will actually terminate
with that input. To do this he has to examine the actual code
(breaking modularity). The only reason why this is OK at present
is that laziness simply isn't used that often. If it ever
were, the number of 'termination bugs' would likely skyrocket.

The spirit of functional languages has been 'simplify, simplify'. Thus
imperative languages were considered too complex because the sharing
conditions caused by pointers were hard to describe and reason about.
Thus pointers were eliminated from the language (even though this caused
a NOTICEABLE loss of expressive power). I assert that lazy evaluation
causes a similar problem. Namely, it requires the programmer to worry
about evaluation details that MOST of the time don't matter. Thus it
seems that laziness should be abandoned unless it has clear advantages
(which, from what I have been able to surmise, it does not).

-------------------------------------------------------------------
As stated before, the above is intended to start a FRIENDLY technical
discussion about the merits of lazy evaluation. I am quite willing
to change my opinion if someone has additional information that I did
not consider or has better insight into the problem than I.

I look forward to the discussion.

Vance


Fergus James HENDERSON

Jul 15, 1992, 12:11:25 PM
vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

[...]


>Thus at present I can find no compelling reason to go to the extra
>effort to implement lazy languages.

[...]

As it happens, I gave a talk on lazy evaluation vs. eager evaluation
and strictness analysis just today (as part of my Honours degree).

It's late, so I'll just summarize:

Lazy evaluation:

- may avoid evaluating arguments (efficient)

- allows (conceptually) infinite data structures (very nice programming
  technique)

- non-strict

  - this allows functions such as
        if-then-else
        short-circuit boolean operators
        case statements
    which are inherently non-strict to fit in with the rest of the
    language

- hence allows certain program transformations which are invalidated by
  eager evaluation

BUT

- extra overhead for parameter passing (inefficient)

Generally lazy evaluation is a much more natural semantics for an
applicative programming language. (Someone else will be able to explain
this much better than I have).

So the ideal approach is for the language to have lazy evaluation semantics,
but for the compiler to perform strictness analysis and optimize to eager
evaluation where the two are equivalent, since call-by-value will in
general be more efficient. Most parts of most programs do not require
lazy evaluation, but when you do need it, it's much easier if it is
a natural part of the language.
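
A small Haskell sketch of the non-strictness point (an illustrative
example, assuming nothing beyond the Prelude): a user-defined
conditional works only because the unused branch is never evaluated.

    -- myIf behaves exactly like built-in if-then-else under lazy
    -- evaluation, since only the selected branch is ever demanded.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- Safe lazily: the division is never evaluated when d == 0.
    -- Under eager evaluation both branches would be evaluated before
    -- the call, and this would fail with a division by zero.
    safeDiv :: Integer -> Integer -> Integer
    safeDiv n d = myIf (d == 0) 0 (n `div` d)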

>In addition I have a reason
>for NOT using them. When I use a lazy language AND really use
>the fact that it is lazy, I must constantly be worried about the
>order of evaluation.

I don't have this problem at all. With a lazy language I know that a
function will only be evaluated if its result is needed, so why should
I worry about the order of evaluation?

Maybe you could explain this point further.

--
Fergus Henderson f...@munta.cs.mu.OZ.AU
This .signature VIRUS is a self-referential statement that is true - but
you will only be able to consistently believe it if you copy it to your own
.signature file!

Vance P. Morrison

Jul 15, 1992, 2:33:48 PM
f...@munta.cs.mu.OZ.AU (Fergus James HENDERSON) writes:

>>In addition I have a reason
>>for NOT using them. When I use a lazy language AND really use
>>the fact that it is lazy, I must constantly be worried about the
>>order of evaluation.

>I don't have this problem at all. With a lazy language I know that a
>function will only be evaluated if its result is needed, so why should
>I worry about the order of evaluation?

Think of it this way. You are a programmer and want to use a large
code library that was written by someone else. In it is a function
called FOO: List[Nat] -> List[Nat]. Now you extensively use infinite
data structures, and at least in theory FOO can be written so that it
can operate on infinite data structures. The problem is that you don't
know for certain that the writer of this library has programmed it this
way.

Now you say, wait a second, the 'users manual' that came with
the library will state that 'FOO' works on infinite-length
lists. Problem solved. But what about more complicated data structures
that can be infinite in many different ways? Describing in the
user's manual which infinite structures 'FOO' converges on is quite
a bit more difficult.

It boils down to this: in a strict language all datatypes are finite,
and thus only very weak assumptions are needed to ensure convergence
of a function ('if' and 'case' need to be lazy, and that's about it).
In lazy languages infinite structures are allowed. Unfortunately many
functions that work fine for any finite structure fail for infinite
structures. Thus the programmer needs to keep track of this whenever
he uses an infinite structure.
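
A minimal Haskell sketch of the point (an illustration, using the usual
Prelude functions): both functions below are fine on finite lists, but
only one of them converges on an infinite input.

    ones :: [Integer]
    ones = 1 : ones          -- an infinite list

    okay :: [Integer]
    okay = take 3 ones       -- [1,1,1]; take converges on infinite input

    diverges :: Int
    diverges = length ones   -- length never terminates on infinite input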

Now you say 'that's not a big deal'. I disagree. It is only a simple
matter now because frankly we don't use laziness much. Pointers are
simple when you only use them once or twice in a program. If laziness
is REALLY worth it, it will be used extensively, and thus the above
problem could get as bad as pointers are in imperative languages.

Vance

Hans Boehm

Jul 15, 1992, 8:09:20 PM
vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

>It boils down to this: in a strict language all datatypes are finite,
>and thus only very weak assumptions are needed to ensure convergence
>of a function ('if' and 'case' need to be lazy, and that's about it).
>In lazy languages infinite structures are allowed. Unfortunately many
>functions that work fine for any finite structure fail for infinite
>structures. Thus the programmer needs to keep track of this whenever
>he uses an infinite structure.

I second this. See our paper "Exact Real Arithmetic: A Case Study in Higher
Order Programming" (L&FP 86) for another example of this. If you represent real
numbers as infinite sequences, it's amazingly easy (once you get a correct
representation) to get a partially correct implementation of multiplication
and division. It's significantly more subtle to get one that converges
for all inputs. Getting an implementation that performs competitively
with other (e.g. variable precision interval based) representations may
be possible, but is currently an unsolved problem. And one that requires
very careful reasoning about how evaluation proceeds, something I find
very unintuitive. (Different natural algorithms may evaluate different
parts of the data structure. The natural algorithms all tend to evaluate
way too much, leading to anything from divergence to poor asymptotic
complexity. For some of the details, see the paper.)

Exact real arithmetic may not be the world's most important application.
But I would consider it the canonical mathematical example of higher-order
data. The fact that lazy evaluation doesn't help here seems disconcerting.

Hans-J. Boehm
(bo...@parc.xerox.com)

Dan Johnston (D.B.)

Jul 15, 1992, 8:43:19 PM
In <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

>To the best of my knowledge the only advantage of lazy evaluation
>is the fact that it allows the manipulation of potentially infinite
>data structures. Now this advantage is only worth it if it actually
>gets used. I have only seen two basic uses of this feature.

> 1) Generators: With infinite lists you can generate
> a sequence without having to specify bounds.
> The classic example of this is a sieve function that
> generates the infinite stream of primes.

> 2) I/O: This seems to be the use driving the need for
> lazy evaluation.

In my (somewhat limited) experience, the most important application of
laziness is in situations where a problem can be most simply described by
a number of independent processes communicating via lazy streams,
particularly where feedback is involved. The primes sieve is one example.
Another similar, but more convincing, example to my mind is the Hamming
numbers example (quoted from "A theory of nondeterminism, parallelism,
and concurrency" by M. Broy, Theoretical Computer Science vol 45, pp 1-61;
the example is on page 9).
A program is required which generates the infinite stream of all numbers
greater than 1 of the form 2^i*3^j*5^k in ascending order.

    funct streammult = lambda n,s: (n * first s) & streammult(n, rest(s)),
    funct merge = lambda s1,s2: if first s1 <= first s2
                                then first s1 & merge(rest s1, s2)
                                else first s2 & merge(s1, rest s2) fi,
    stream s1 = streammult( 5, 1 & s1 ),
    stream s2 = merge( streammult( 3, 1 & s2 ), s1 ),
    stream s3 = merge( streammult( 2, 1 & s3 ), s2 )
---
To me, this is nicer than other solutions (e.g. Dijkstra, "A Discipline of
Programming", p. 129) I have seen or could construct myself. That the stream
is infinite is not important - it would be an equally useful technique for a
finite stream. Broy's paper also has other examples of lazy streams which
convinced me of their usefulness. (It also uses some concurrency features
which are not available in most functional languages.)
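
For comparison, the same feedback construction as it might be written in
a lazy functional language; a Haskell sketch following the usual textbook
rendering (it includes 1 at the head, where Broy's version starts above 1):

    -- Hamming numbers 2^i * 3^j * 5^k in ascending order, via feedback.
    hamming :: [Integer]
    hamming = 1 : merge (map (2*) hamming)
                        (merge (map (3*) hamming) (map (5*) hamming))
      where
        merge xs@(x:xt) ys@(y:yt)
          | x < y     = x : merge xt ys
          | x > y     = y : merge xs yt
          | otherwise = x : merge xt yt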

>I consider the first use insufficient to warrant the need for lazy
>languages (since it seems that generators can be programmed in reasonably
>elegant ways without lazy evaluation). My experience with lazy
>lists for I/O has left me unimpressed. Most I/O involves a handshaking
>protocol which depends upon causality relationships (that is, the request
>causes the system to send a reply). It seems that this relationship
>is at BEST implicit in lazy list I/O, and thus the code is very convoluted
>and hard to read. In addition, the semantics of the lazy code no longer
>has its simple functional reading but must be understood in the context
>of the exact evaluation order. This is just the kind of detail functional
>languages were designed to remove!!

I'm not convinced of this. I think that we can cleanly separate components
which communicate by lazy streams, although it is necessary to ensure that
the system does not deadlock by requiring something that is not yet
available. In my experience, this reflects essential logical (eg causality)
conditions in the models.

I still have serious reservations about functional languages, my main
reservation being in the area of I/O (particularly interactive I/O), and
while streams are perhaps the best solution to this problem, it still
seems very weak and tedious compared with conventional languages.

>Vance
dan.

John Farrell

Jul 15, 1992, 8:42:11 PM
In <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:
> 1) What are the advantages of lazy evaluation that make
> it worth the extra trouble to implement?

In a friendly manner, I dispute the claim that lazy evaluation is harder to
implement. TIM [Fairbairn & Wray] and the Krivine machine (Curien) both
implement lazy languages (combinators and lambda calculus respectively) in an
almost trivial manner. Similarly, purely strict languages may be implemented
very easily. It's only when you combine them that you get problems. Since
functions like + which are fairly popular in the brainwashed Von Neumann /
Backus procedural programming oligarchy cabal are strict, we must implement
at least strictness. There is no overwhelming reason, other than efficiency
of implementation on available microprocessors, that these functions must be
strict, but that's another argument.

> 1) Generators: With infinite lists you can generate
> a sequence without having to specify bounds.
> The classic example of this is a sieve function that
> generates the infinite stream of primes.

This seems to be underestimated by people who haven't had a lot of
experience. It's much easier to write an infinite loop than one that
terminates after a specified time or when a specific condition is met. An
example in Miranda - the list of numbers from n to m:

from n m = [], n > m
         = n:(from (n+1) m), otherwise

Now the list of numbers from n:

from n = n:(from (n+1))

As you can see, the infinite list is easier to produce because you don't
have to worry about the code to make it finite. Now if we write all of our
functions with similar disregard for finiteness, we only write half the code
that we would have. This is a good thing, because we have only half the
opportunities to make errors.
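
The bounded variants then fall out by composition rather than by extra
clauses; a Haskell sketch of the same idea:

    -- The unbounded generator carries no termination code at all.
    from :: Integer -> [Integer]
    from n = n : from (n + 1)

    -- A finite range is recovered by composing with a consumer.
    fromTo :: Integer -> Integer -> [Integer]
    fromTo n m = takeWhile (<= m) (from n)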

>In addition the semantics of the lazy code no longer
>has its simple functional reading but must be understood in the context
>of the exact evaluation order.

I would agree that for I/O, lazy evaluation order is a problem, but then
again I haven't implemented enough interactive lazy functional programs to be
qualified to comment. I'm certain with experience I'd get used to it. However,
when I/O is not involved, evaluation order becomes of less concern than the
gravitational pull of the moon.

>When I use a lazy language AND really use
>the fact that it is lazy, I must constantly be worried about the
>order of evaluation. On the other hand, 99% of all programs
>(that is programs that have the same interpretation in a strict
>and lazy interpreter) I don't have to worry about the order of
>evaluation.

Of course you have to. As another poster pointed out, if-then-else and,
in C, the ?: operator are evaluation-order things you have to worry about.
Furthermore, you have trained yourself not to write procedures with
arguments whose evaluation order matters.

>What's more, that is in fact what is done right now. Even lazy
>language programmers mostly write code that would work under
>strict evaluation.

Well yes, of course. No-one goes around aiming to write code which doesn't
get evaluated. But the from example above is a counter-example.

>When he does write something that requires
>laziness, he has to check very carefully that all the functions
>he calls with his infinite data structure will actually terminate
>with that input.

To tell you the truth I never considered such a thing. There aren't so many
functions which use all of an infinite data structure. One is sum, another is
user output. Certainly, I know which things are finite and which are infinite,
but it is not the sort of thing which keeps me awake at night or even delays
me for a split second.


John

John Farrell

Jul 16, 1992, 2:46:48 AM
In <BrG0w...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:
>Now you say 'that's not a big deal'. I disagree. It is only a simple
>matter now because frankly we don't use laziness much. Pointers are
>simple when you only use them once or twice in a program. If laziness
>is REALLY worth it, it will be used extensively, and thus the above
>problem could get as bad as pointers are in imperative languages.

It occurs to me that Unix pipes behave similarly to lazy functions on
infinite lists. Certainly, if I say "yes fred | less", I will achieve the
expected result, i.e. as many pages of fred as I want. On the other hand,
if I say "yes fred | sort | less", I will get nothing, because the sort
will not terminate. Now no-one ever worries about Unix pipes being
difficult, though they use them all the time. Why should laziness be any
different?

John

s...@doc.ic.ac.uk

Jul 16, 1992, 6:09:59 AM
In article <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P.
Morrison) writes:
> ...

> 1) What are the advantages of lazy evaluation that make
> it worth the extra trouble to implement?
> ...

Fergus Henderson gave a good account of the usual reasons. The obvious one not
mentioned in Vance's posting is that lazy evaluation is more general because it
can give answers where eager evaluation fails to terminate.

I just want to add a very mundane example to illustrate how lazy evaluation can
allow you to write clearer programs, because you don't need explicit control
over expression evaluation (expressions are evaluated "as necessary").

This Miranda definition expresses a well-known algorithm for multiplying
without using multiplication (except multiplication and division by 2, which
are easy in binary).

mult :: num -> num -> num
||precondition:  n a natural number
||postcondition: mult x n = x*n

mult x n = 0,   if n = 0
         = y,   if n > 0 & n mod 2 = 0
         = y+x, otherwise
           where y = 2*(mult x (n div 2))

Obviously the "where" part is a convenient notation, saving some typing and
clarifying the structure (by making explicit the fact that the same value y is
calculated for both the 2nd and 3rd lines). But if y is evaluated eagerly, then
this function as written diverges. Therefore for an eager implementation extra
information is needed to say which cases y is calculated for.

(I'm teaching with the eager language Hope this autumn, so if anyone can prove
me wrong with an eager implementation of mult that really is just as simple as
my Miranda one I'd be grateful to have it!)

Don Sannella posted some examples in ML of how to implement infinite data
structures, using functions from the unit type. His point, made very clearly,
was that infinite data structures are implementable eagerly in a natural way.
However, I think you can also see my point in his examples, because there is
always just a little extra explicit information needed to control the
evaluation.

Vance Morrison also mentions problems with lazy list I/O, saying that
evaluation order seemed to be important. I don't know his examples, but I
would hazard the guess that the problems are with functional I/O (i.e.
turning the side effects of I/O into functional programming) rather than
with laziness as such.

Steve Vickers

Stefan Kahrs

Jul 16, 1992, 8:09:43 AM
In article <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

VM> I think everyone agrees that lazy evaluation is harder to implement
VM> efficiently on conventional computers but that is not the kind of
VM> problem I wish to discuss (one could imagine a good compiler removing
VM> this difficulty). On the other hand, since it is more difficult

I don't agree. I once implemented a strict language and later changed
the strategy to lazy. The change was not trivial, but it was not difficult
either. [Efficiency was reasonable in both cases.]

VM> to implement Lazy evaluation we have to have good reasons for going
VM> to this extra trouble. Thus my first question is
VM>
VM> 1) What are the advantages of lazy evaluation that make
VM> it worth the extra trouble to implement?

This is simply the wrong question, because you neither have to
implement the languages you want to use, nor do you have to use the
languages you implement.

VM> To the best of my knowledge the only advantage of lazy evaluation
VM> is the fact that it allows the manipulation of potentially infinite
VM> data structures. Now this advantage is only worth it if it actually

It is not the only advantage (not even the most important one),
it is only the most apparent difference.

Related to infinite data structures is the use of circular
programs: you can sometimes write programs in an attribute grammar
style. Richard Bird wrote a paper about this
(Acta Informatica 21, 239-250 (1984)).
Example: Suppose you have a list of natural numbers and you want to
subtract the least element of the list from all elements. In a lazy
language, you can do it all in one list traversal:

normalize ls = res
  where (m,res) = norm m ls

norm m []     = (0,[])
norm m (x:xs) = (min x m', x-m : res)
  where (m',res) = norm m xs

The first argument of "norm" is like an inherited attribute;
it has no value yet when "norm" is called from "normalize".
But when "norm" comes back, it returns (as synthesized attributes)
the changed list and the minimum of the list,
and the local declaration in "normalize" puts the two attributes together.
This only works because the subtractions (x-m) in the result of
"norm" are delayed; their result is not needed for "norm" to proceed.

VM> [...]
VM> Thus at present I can find no compelling reason to go to the extra
VM> effort to implement lazy languages. In addition I have a reason
VM> for NOT using them. When I use a lazy language AND really use
VM> the fact that it is lazy, I must constantly be worried about the
VM> order of evaluation. On the other hand, 99% of all programs

"Constantly" is an exaggeration.
I dare say you have to be worried more often in strict languages ---
you simply don't realize this when you have grown up with languages
like Pascal, C, LISP etc. which always first evaluate the arguments
of a function call. And with 'worried' I not only mean "Oops, this could
sometimes fail to terminate!" but also (and more often) "Oh no, this is
horribly inefficient!".

VM> (that is programs that have the same interpretation in a strict
VM> and lazy interpreter) I don't have to worry about the order of
VM> evaluation. Thus it seems to me that since we can get what we
VM> want done in a more abstract environment where the order of evaluation
VM> basically doesn't matter, that is what we should do.

In other words: You simply choose the language that is most suitable for your
application. There is nothing wrong with this principle. I don't
believe in the universally-best-for-all-problems language anyway.

VM> [...]
VM> The spirit of functional languages has been 'simplify, simplify'. Thus
VM> imperative languages were considered too complex because the sharing
VM> conditions caused by pointers were hard to describe and reason about.
VM> Thus pointers were eliminated from the language (even though this caused
VM> a NOTICEABLE loss of expressive power). I assert that lazy evaluation
VM> causes a similar problem. Namely, it requires the programmer to worry
VM> about evaluation details that MOST of the time don't matter. Thus it
VM> seems that laziness should be abandoned unless it has clear advantages
VM> (which, from what I have been able to surmise, it does not).

In lambda-calculus, lazy evaluation has an important theoretical
property: it is normalizing.
This means that if you prove by purely equational reasoning
that an expression is equal to a value, then this expression will
reduce to this value under lazy evaluation. For strict evaluation,
you additionally have to prove termination.
In this respect, lazy evaluation provides a simplification.

Unfortunately, this property is lost in most lazy languages,
because they are closer to orthogonal Term Rewriting Systems than to
lambda-calculus.
--
Stefan Kahrs JANET: s...@uk.ac.ed.dcs
LFCS, Dept. of Computer Science Internet: s...@dcs.ed.ac.uk
University of Edinburgh UUCP: ..!mcsun!uknet!dcs!smk
Edinburgh ARPA: smk%dcs.ed...@nsfnet-relay.ac.uk
EH9 3JZ Tel: (44)-31-650-5139
SCOTLAND Fax: (44)-31-667-7209

Martin Odersky

Jul 16, 1992, 9:47:49 AM

Vance Morrison poses a very interesting question:

1) What are the advantages of lazy evaluation that make
it worth the extra trouble to implement?

I agree that, if it were just for infinite data structures,
laziness would be justified only in very special circumstances.
But I think there is more to laziness than that.

I like lazy functional programming because it gives me a simple but powerful
way to reason about programs. Most of us routinely employ the
following three rules of reasoning:

1. An application (\x.e) e' is equivalent to e with e' substituted for x.

2. In an expression (let x = e' in e), substituting e' for some
occurrence of x in e does not change the meaning of the program.

3. An expression (let x = e' in e), where x does not occur in e, is
equivalent to just e.

As Karl-Filip Faxen noted, in lazy functional languages these rules are
always true, barring occasional complaints of a type-checker (if the
language is typed, that is). In strict functional languages, these rules are
true only if e' is a _value_. This is considerably less useful, because it
means one cannot reason unconditionally about program fragments. To find out
whether an expression has a value, I need to know global information.
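
Rule 3 makes the contrast concrete; a small Haskell sketch (an
illustrative example, not from the original posting):

    -- Under lazy evaluation the unused binding is simply discarded,
    -- so this whole expression is just 42.
    example :: Int
    example = let x = length [1 ..] in 42

    -- In a strict language the let would evaluate length [1 ..] first
    -- and diverge, so the rule "let x = e' in e  ==>  e" is valid
    -- there only when e' is known to have a value.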

The bottom line is that, while strict functional programs are often as easy
to write as lazy functional programs, lazy functional programs are
considerably simpler to manipulate. Depending on your way of programming,
this might be worth the added implementation overhead.

-- Martin Odersky




Vance P. Morrison

Jul 16, 1992, 9:50:55 AM
Hello again,

>> 1) Generators: With infinite lists you can generate
>> a sequence without having to specify bounds.
>> The classic example of this is a sieve function that
>> generates the infinite stream of primes.

In my original post I stated that I did not believe generators warranted
the effort of making languages lazy. I have gotten replies suggesting
that generators are in fact very useful, with examples given.
Let me clarify why I said what I did.

Actually, I would agree that generators are useful, and that placing
arbitrary bounds on the code just to ensure termination complicates things
unnecessarily. But generators CAN be created in a strict language by using
closures to defer the evaluation. The difference between this approach
and a lazy language is that the finite data structure and its infinite
counterpart ARE DIFFERENT DATATYPES. Thus the typing system ensures that
you only use infinite data structures where it is legal to do so. In
languages that support subtyping or auto-coercions, you can make the
infinite data structure a supertype of the finite one (and thus you
can use a finite structure with any function that wants an infinite one).

It seems to me that such a design can be very elegant and can be
implemented in a strict language. (Another possibility is to add a
function 'delay' as in Scheme, but then we lose the distinction between
the infinite and the finite.)
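
A sketch of the encoding being described, written in Haskell for
uniformity with the other examples (in a genuinely strict language such
as SML, the unit-closure is what defers the evaluation; the names here
are illustrative):

    -- A distinct type for possibly-infinite streams: the tail hides
    -- behind a closure, so even a strict evaluator never builds it
    -- eagerly, and the type differs from that of finite lists.
    data Stream a = Cons a (() -> Stream a)

    nats :: Integer -> Stream Integer
    nats n = Cons n (\() -> nats (n + 1))

    -- Consumers force only as much of the stream as they need.
    takeS :: Int -> Stream a -> [a]
    takeS k (Cons x rest)
      | k <= 0    = []
      | otherwise = x : takeS (k - 1) (rest ())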

I guess what is at the root of my suspicion of lazy languages is that
I believe there is a fundamental distinction between the finite and
the infinite (the infinite takes much more care), and that this distinction
should not be suppressed, but rather made as obvious as possible
(this is why I like type theory instead of set theory). Current lazy
evaluation languages blur this distinction, and I believe that is bad.

Vance


Vance P. Morrison

Jul 16, 1992, 10:06:46 AM
far...@coral.cs.jcu.edu.au (John Farrell) writes:

There are two points to make. First, most programs do NOT produce
infinite output on finite input. Thus for MOST programs you need not
worry about whether the execution of pipes is interleaved or not. In
fact on MS-DOS pipes are NOT interleaved, and most people never notice
the difference. Thus the simple semantics of pipes, namely

1) First program generates output
2) This output is sent to next program
....
n) This output is sent to next program
n+1) This output is sent to terminal

suffices. Notice that this explanation does not get into the exact
interleaving that UNIX actually uses, yet it is sufficient for 99% of all
pipe programming. (It abstracts away order of evaluation details).

If you asked non-gurus what "yes fred | sort | less" would produce, they
may not know, because it depends upon details that they never bothered
to learn or deal with. Thus most people would call the above pipe
'tricky' and hard to understand. So pipes ARE easy, but lazy pipes
are more difficult (which is my point).

The second point is that UNIX pipe 'programs' (almost never exceeding one
line) are at LEAST several orders of magnitude less complicated than
a typical program. Let's face it, most code that can fit on one page
can be made easy to understand; the real trick is whether it is still easy
to understand when it only fits on 10000 pages.

Vance

Scot Dyer

Jul 16, 1992, 11:22:09 AM

(V. Morrison)

>I have read many a paper on functional languages that begins with the
>premise that lazy evaluation is good (that is, better than strict
>evaluation). For some time I accepted this premise basically on
>'faith', but more and more I am questioning this assumption.

I find myself in the same boat. As someone with an interest in pure
functional programming acquired from some small amount of experience in LISP
and SCHEME, my first attraction to _pure_ functional languages was their
potential to be optimized, especially on massively-parallel machines. One of
these days we'll hit the 'von Neumann barrier' with respect to the speed of
compiled and optimized imperative programs, and the next step after that
seems to be to use parallel languages. Existing 'parallel' imperative
languages are _weak_. One has to specify what tasks to execute, which ones
to wait for, etc. You might as well be programming in assembly code at that
point -- but with a referentially transparent program, this parallelization
(optimization?) is almost trivial (once you get past the hardware
requirements).

(V. Morrison)
> [...] I have only seen two basic uses of this feature.


>
> 1) Generators: With infinite lists you can generate
> a sequence without having to specify bounds.

> [...]


>
> 2) I/O: This seems to be the use driving the need for
> lazy evaluation.

I think here we ought to add: '3) Parallel optimization'

A lazy list can be implemented as a stream very efficiently, allowing
the producer of the list to execute in parallel with the consumer. For very
long lists, this would not only save memory (=time w/gc), but improve
performance dramatically, if used sparingly.

IMHO, we need languages in which 'lazy' (aka 'normal order') evaluation
is primitive and in which 'eager' (aka 'applicative order') evaluation is
also primitive. These languages, if they gave the programmer the ability
to specify which is to be used in a situation, and if they optimized those
situations where the programmer has not supplied this information to
whichever evaluation order is most efficient (this can be done via
profiling, to avoid needing to know the number of processors/processes
available before coding), would be our best bet.
(Sorry about that convoluted sentence...)

In other words, why not recognize the value of normal order evaluation
in some contexts and the value of applicative order evaluation in others,
and allow them to co-exist? Using only normal order evaluation seems like
over-kill, and an un-necessary burden on the run-time system (which would
then, even under a parallel model, need to assign processors). On the
other hand, considerable ability to optimize code is lost by insisting
on allowing only applicative order evaluation.

Oh!! BTW, lazy lists also allow the programmer to code things that s/he
would need 'static' variables for in 'C' -- like a random number generator
-- since there's no place for the 'seed' in a purely applicative order
program. A lazy program could define randoms() as follows to supply an
infinite list of random numbers...

randoms ()  --> randoms (start_seed);       % start_seed is constant
randoms (n) --> lazy [ nh | randoms (nh) ]
                where nh == some_seed_hasher (n);

Does anyone know of another way to do this in a purely functional language?
If so, I'd _love_ to hear it... Or should we add: '4) this kind of thing' to
the list of things that 'lazy' evaluation buys for us?
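
That pseudocode translates directly into a lazy list; a Haskell sketch
(the hashing step here is an assumed linear-congruential update, purely
illustrative):

    -- An infinite list of pseudo-random numbers. The 'seed' lives in
    -- the structure of the list instead of in a static variable.
    randoms :: Integer -> [Integer]
    randoms seed = next : randoms next
      where
        next = (1103515245 * seed + 12345) `mod` 2147483648

A consumer just takes what it needs, e.g. take 5 (randoms 42).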

Just my $.02...
-- Scot

Frank Silbermann

Jul 16, 1992, 4:07:46 PM
In article <boehm.711245360@siria> bo...@parc.xerox.com (Hans Boehm) writes:
>
> Exact real arithmetic may not be the world's
> most important application. But I would consider it
> the canonical mathematical example of higher-order data.
> The fact that lazy evaluation doesn't help here seems disconcerting.

I cannot even imagine what kind of domain
could represent the set of real numbers.
What would be the basis and partial order?

I would expect the uncountability of real numbers
to be a real stumbling block. How can you imagine
coming up with reasonably efficient algorithms
for the basic functions when there don't even exist
inefficient generate-and-test algorithms?

------------------------------------------
Frank Silbermann f...@cs.tulane.edu
Tulane University New Orleans, Louisiana USA

Hans Boehm

Jul 16, 1992, 5:45:32 PM
f...@cs.tulane.edu (Frank Silbermann) writes:

>In article <boehm.711245360@siria> bo...@parc.xerox.com (Hans Boehm) writes:
>>
>> Exact real arithmetic may not be the world's
>> most important application. But I would consider it
>> the canonical mathematical example of higher-order data.
>> The fact that lazy evaluation doesn't help here seems disconcerting.

>I cannot even imagine what kind of domain
>could represent the set of real numbers.
>What would be the basis and partial order?

>I would expect the uncountability of real numbers
>to be a real stumbling block. How can you imagine
>coming up with reasonably efficient algorithms
>for the basic functions when there don't even exist
>inefficient generate-and-test algorithms?

I apologize for my original sloppiness. What the paper and I actually
refer to are what are variously called the recursive reals, constructive
reals, computable reals, or representable reals, i.e. the set of real
numbers x such that there is a computable function f from positive
integers to rationals, such that for all n, |x - f(n)| < 1/n.

There are countably many such numbers. Arithmetic operations, trig functions,
etc. are computable on them. Equality is not in general computable.
Thus for practical purposes, you can perform arithmetic without cumulative
rounding errors, but comparisons can only be performed to a specified
accuracy.
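
In code, the representation just described might be sketched as follows
(a Haskell sketch of the definition above, not the paper's actual
implementation):

    import Data.Ratio ((%))

    -- A constructive real x is represented by a function f from
    -- positive integers to rationals with |x - f n| < 1/n.
    type CReal = Integer -> Rational

    -- Requesting both arguments at doubled accuracy keeps the bound:
    -- |(x+y) - (f (2n) + g (2n))| < 1/(2n) + 1/(2n) = 1/n.
    addR :: CReal -> CReal -> CReal
    addR f g n = f (2*n) + g (2*n)

    -- Comparison only to a specified accuracy: True only when x < y
    -- is certain at tolerance 1/n.
    lessR :: CReal -> CReal -> Integer -> Bool
    lessR f g n = f (2*n) + 1 % (2*n) < g (2*n) - 1 % (2*n)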

There are no implementations that are competitive with floating point
arithmetic in performance, certainly. We have some implementations that
are appreciably better than what's needed for a desk calculator (when
run on a reasonably modern workstation). But it's very hard to do even
that well with a representation based on lazy sequences of digits or
lazily evaluated infinite continued fractions.

Hans-J. Boehm
(bo...@parc.xerox.com)

Patrick Sansom

Jul 16, 1992, 6:48:24 PM
vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

> 1) What are the advantages of lazy evaluation that make
> it worth the extra trouble to implement?

>To the best of my knowledge the only advantage of lazy evaluation
>is the fact that it allows the manipulation of potentially infinite
>data structures.

This discussion has concentrated on the issue of infinite data
structures; however, there is a second important characteristic of
lazy languages:

    The order of evaluation does not matter.

This allows us to suspend computations, to be evaluated later if
required. Having these suspended computations, we can now go off and
evaluate them in parallel. No fancy analysis was necessary to work out
that it was safe to do so. All we need to do is maintain the
implicit data dependencies.
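
As a sketch of what that buys (using the par/pseq combinators of the
much later Haskell 'parallel' library as a stand-in; nothing like it
was available to the posters):

    import Control.Parallel (par, pseq)

    -- x is sparked for parallel evaluation; pseq forces y in this
    -- thread, and the addition then waits on both. No safety analysis
    -- is needed: the data dependencies alone determine the result.
    parAdd :: Integer -> Integer -> Integer
    parAdd x y = x `par` (y `pseq` (x + y))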

I am not saying that this alone is a reason to use lazy evaluation, but
it is another factor which should be taken into account (even if
current technology does not provide us with commercial parallel
implementations).

Patrick.
--
Patrick Sansom | E-Mail: san...@dcs.glasgow.ac.uk
Computing Science Research Student | Real-Mail: Dept. of Computing Science,
University of Glasgow | The University,
| Glasgow G12 8QQ

s...@doc.ic.ac.uk

Jul 17, 1992, 6:19:02 AM
In article <SMK.92Ju...@scarp.dcs.ed.ac.uk> s...@dcs.ed.ac.uk (Stefan
Kahrs) writes:

>Example: Suppose you have a list of natural numbers and you want to
>subtract the least element of the list from all elements. In a lazy
>language, you can do it all in one list traversal:
>
>normalize ls = res
>  where (m,res) = norm m ls
>
>norm m []     = (0,[])
>norm m (x:xs) = (min x m', x-m : res)
>  where (m',res) = norm m xs
>

This is a nice example, but I think the statement contains an error. For
instance, according to the definition

normalize [x] = [x]

when the answer should be [0].

The unstated specification for norm is that

norm m ls = (min element of ls, ls with m subtracted off all elements)

norm m [] shouldn't be (0,[]), because the minimum element of [] "is" infinity.
(More precisely, the unit element for the min operation doesn't exist because
it would have to be greater than all natural numbers.)

Supplying an arbitrary value of 0 for this impossible minimum doesn't affect
the calculation of normalize [], but it does throw out the calculation of the
minima of non-empty lists. A correction would be -

norm m [x]      = (x, [x-m])
norm m (x:y:xs) = (min x m', x-m : res)
  where (m',res) = norm m (y:xs)

(You get exactly the same problem if you calculate the minimum value of an
array in an imperative language. Unlike the sum of an array, where you can
initialize an accumulator to 0 before adding in the elements, for the minimum
you initialize it to the first element before comparing the rest.)

Perhaps better would be not to define norm m [] but to say

normalize [] = [] || this case is actually ill-specified
normalize (x:ls) = ...

Steve Vickers.

Thorsten Altenkirch

Jul 17, 1992, 7:04:48 AM
In article <BrHIG...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

Actually, I would agree that generators are useful, and that placing
arbitrary bounds on the code just to ensure termination complicates things
unnecessarily. But generators CAN be created in a strict language by using
closures to defer the evaluation. The difference between this approach
and a lazy language is that the finite data structure and its infinite
counterpart ARE DIFFERENT DATATYPES. Thus the typing system ensures that
you only use infinite data structures where it is legal to do so. In
languages that support subtyping or auto-coercions, you can make the
infinite data structure a supertype of the finite one (and thus you
can use a finite structure with any function that wants an infinite one).

In this context it may be worth remarking that infinite data structures
can be considered as codatatypes, i.e. as the categorical dual to the
usual inductive datatypes. I believe this was first proposed by
Hagino.

It is possible to introduce codatatypes without introducing
non-termination. Therefore you may use codatatypes in a call-by-value
language without having to resort to call-by-name evaluation in
general. And it is also much neater than coding things up with closures.
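
The codata intuition, sketched in Haskell for uniformity (an
illustration; in a total or call-by-value setting the codatatype and
its unfold would be primitive, as in Hagino's proposal):

    -- A codatatype is characterized by its observations (head and
    -- tail), so streams are built by unfolding from a seed rather
    -- than by finitely many constructor applications.
    unfoldS :: (s -> (a, s)) -> s -> [a]
    unfoldS step s0 = let (a, s1) = step s0 in a : unfoldS step s1

    nats :: [Integer]
    nats = unfoldS (\n -> (n, n + 1)) 0   -- 0, 1, 2, 3, ...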

T.

[1]
@PhdThesis{haginoPhD,
  author = "Tatsuya Hagino",
  title  = "A Categorical Programming Language",
  school = "University of Edinburgh",
  year   = "1987",
  month  = "September",
}
--
+---------------------+-------------------------------------------------------+
| Thorsten Altenkirch | What is the difference between a raven and a writing |
| LFCS, Edinburgh | desk ? - There is a 'B' in both. |
+---------------------+-------------------------------------------------------+

Vance Morrison

Jul 17, 1992, 9:34:23 AM
In comp.lang.functional you write:

>This discussion has concentrated on the issue of infinite data
>structures, however there is a second important characteristic of
>lazy languages.

> The order of evaluation does not matter.

I disagree with this. If you ever use 'lazy' features (infinite
structures) then the order DEFINITELY matters (since if you evaluate too
eagerly you diverge). This is my basic problem with using these features:
it ties the hands of the compiler (since it must guarantee semantics that
mirror a particular order of evaluation), and it makes programs hard to
understand, since the programmer now has to think about that too.

>This allows us to suspend the computations to be evaluated later if
>required. Having these suspended computations we can now go off and
>evaluate them in parallel. No fancy analysis was necessary to work out
>that it was safe to do so. All we need to do is to maintain the
>implicit data dependencies.

It seems to me this property is possessed equally well by either strict
or lazy evaluation. Both can be parallelized with the same ease.

Vance

Dave Berry

Jul 17, 1992, 11:43:04 AM
s...@doc.ic.ac.uk writes:
>mult x n = 0,   if n = 0
>         = y,   if n > 0 & n mod 2 = 0
>         = y+x, otherwise
>           where y = 2*(mult x (n div 2))

fun mult x 0 = 0
  | mult x n =
      let val y = 2 * (mult x (n div 2))
      in if n > 0 andalso n mod 2 = 0
         then y
         else x + y
      end;

This seems to work. Have I missed something?

Dave.
--

These boundaries are false.

Mark C. Carroll

Jul 17, 1992, 5:07:29 PM
In article <39...@skye.dcs.ed.ac.uk> d...@dcs.ed.ac.uk (Dave Berry) writes:
>s...@doc.ic.ac.uk writes:
>>mult x n = 0,   if n = 0
>>         = y,   if n > 0 & n mod 2 = 0
>>         = y+x, otherwise
>>           where y = 2*(mult x (n div 2))
>
>fun mult x 0 = 0
>  | mult x n =
>      let val y = 2 * (mult x (n div 2))
>      in if n > 0 andalso n mod 2 = 0
>         then y
>         else x + y
>      end;
>
>This seems to work. Have I missed something?
>

Sort of... andalso is lazy, so that's not a strict program. It's
written in a mostly strict language, but the reason it works is
that it uses a lazy special form.

<MC>
--
|| Mark Craig Carroll: <MC> || "There is no such thing as a problem
|| Univ of Delaware, Dept of CIS|| without a gift for you in its hands. You
|| Grad Student/Labstaff Hacker || seek problems because you need their
|| car...@udel.edu || gifts" - _Illusions_, by Richard Bach

Karl-Filip Faxen

Jul 17, 1992, 9:13:08 AM
In article <BrHIG...@news.cso.uiuc.edu>, vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:

|> I guess what is at the root of my suspicion of lazy languages is that
|> I believe there is a fundamental distinction between the finite and
|> the infinite (the infinite takes much more care), and that this
|> distinction should not be suppressed, but rather made as obvious as
|> possible (this is why I like type theory instead of set theory).
|> Current lazy evaluation languages blur this distinction, and I believe
|> that is bad.
|>

I agree with you about that. That is, that you've found the root of
your suspicion. As for myself, I don't feel that infinite datastructures
are fundamentally different from finite ones. The difference may come
from where one's "mathematical roots" are: if you've started out from logic,
you probably associate computations over infinite datastructures with
infinite derivations in a logical calculus, something I understand to
be a problem. If, on the other hand, one comes from domain theory and
denotational semantics (no, I don't know much domain theory, just enough
to do some abstract interpretation when I need it), then you deal with
mathematical objects like functions and sets, and there's nothing strange
about these being infinite as long as they can be defined as limits of an
appropriate fixpoint operator. To us, the mathematical notion of limit
models demand-driven evaluation in a natural way: for any finite demand
you have, I promise to deliver in finite time.

Also, as has been argued by myself and others, it is only under lazy
evaluation that function application in the language can be modeled as
function application in the mathematical sense. Otherwise it's procedure
call, pure and simple. The practical implication is the ease with which
programs can be manipulated - what looks syntactically like a function
composition can be treated like mathematical function composition, by
people as well as programs. Many program transformations and optimizations
are based on what is called the fold/unfold methodology, where the basic
steps are unfolding (inlining), folding (the reverse of inlining) and
definition of new functions (which are brought into use by folding). In
a strict language you have to check separately, at every step, that the
termination properties remain unchanged, and this is very difficult to
do automatically. In a lazy language, you know that they won't change.
This is referential transparency at work.

So the performance problems of lazy languages might in the future be
offset by their greater amenability to optimization and perhaps greater
potential for parallel implementation, at which time they will have a
performance advantage. I think that will happen.

Cheers,

Karl-Filip
----------------------------------------------------------------------
Make the frequent case fast, and the fast case frequent!

Vance Morrison

Jul 18, 1992, 9:32:25 AM
car...@bugs.cis.udel.edu (Mark C. Carroll) writes:

>> in if n > 0 andalso n mod 2 = 0
>> then y
>> else x + y
>> end;
>>
>>This seems to work. Have I missed something?
>>

>Sort of... andalso is lazy, so that's not a strict program. It's
>written in a mostly strict language, but the reason that it works is
>because it uses a lazy special form.

This is of course true, but I think a bit unfair. NO programming
language is PURELY strict. At the very least there are some forms that
are lazy (usually if, case...). Now you can argue that this is an
asymmetry in the language that should be eliminated. You can also
argue that in the above program the programmer had to worry about
order of evaluation (since he had to remember to use 'andalso' instead
of 'and'). But the fact remains that the above code captured the
example with reasonable elegance in a strict language.

Vance

Andrew Koenig

Jul 18, 1992, 12:15:37 PM
In article <1992Jul17.2...@udel.edu> car...@bugs.cis.udel.edu (Mark C. Carroll) writes:

...

> > if n > 0 andalso n mod 2 = 0
> > then y
> > else x + y

...

> >This seems to work. Have I missed something?

> Sort of... andalso is lazy, so that's not a strict program. It's
> written in a mostly strict language, but the reason that it works is
> because it uses a lazy special form.

Eh? "a andalso b" is syntactic sugar for "if a then b else false", so

if n > 0 andalso n mod 2 = 0
then y
else x+y

is syntactic sugar for

if if n > 0 then n mod 2 = 0 else false
then y
else x+y

Are you also going to reject this because "if" is a lazy special form?
--
--Andrew Koenig
a...@europa.att.com

Richard O'Neill

Jul 18, 1992, 6:41:03 PM
Andrew Koenig <a...@europa.att.com> writes:

# Mark C. Carroll <car...@bugs.cis.udel.edu> writes:
>>> if n > 0 andalso n mod 2 = 0
>>> then y
>>> else x + y
>
>>> This seems to work. Have I missed something?
>
>> Sort of... andalso is lazy, so that's not a strict program. It's
>> written in a mostly strict language, but the reason that it works is
>> because it uses a lazy special form.
>
> Eh? "a andalso b" is syntactic sugar for "if a then b else false", so [it]

> is syntactic sugar for
>
> if if n > 0 then n mod 2 = 0 else false
> then y
> else x+y

Also remember that:
if <test> then <true-case> else <false-case>

is just syntactic sugar for:

case <test> of true => <true-case>
| false => <false-case>

which in turn can be viewed as syntactic sugar for (*)

let
fun doCase true = <true-case>
| doCase false = <false-case>
in
doCase <test>
end

Can you describe the above construct as 'lazy'? If so, doesn't that
make the term a little meaningless, since there is only one sensible
interpretation of this form?

Regards,
Richard.

(*) Of course, for SML it is really syntactic sugar for
(fn true => <true-case> | false => <false-case>) <test>
or at least that's what I remember...
--
Composing a suitably apt and witty | ric...@smaug.questor.wimsey.bc.ca
signature and disclaimer is left | one...@fornax.UUCP | NeXTmail OK, but
as an exercise for the reader. | one...@cs.sfu.ca | not >100K OK!

Graham Matthews

Jul 18, 1992, 11:34:56 PM
(Vance P. Morrison) writes:
>I guess what is at the root of my suspicion of lazy languages is that
>I believe there is a fundamental distinction between the finite and
>the infinite (the infinite takes much more care), and that this distinction
>should not be suppressed, but rather made as obvious as possible
>(this is why I like type theory instead of set theory). Current lazy
>evaluation languages blur this distinction, and I believe that is bad.

(Karl-Filip Faxen) writes:
>I agree with you about that. That is, that you've found the root of
>your suspicion. As for myself, I don't feel that infinite datastructures
>are fundamentally different from finite ones. The difference may come
>from where one's "mathematical roots" are: if you've started out from logic,
>you probably associate computations over infinite datastructures with
>infinite derivations in a logical calculus, something I understand to
>be a problem. If, on the other hand, one comes from domain theory and
>denotational semantics (no, I don't know much domain theory, just enough
>to do some abstract interpretation when I need it), then you deal with
>mathematical objects like functions and sets, and there's nothing strange
>about these being infinite as long as they can be defined as limits of an
>appropriate fixpoint operator. To us, the mathematical notion of limit
>models demand-driven evaluation in a natural way: for any finite demand
>you have, I promise to deliver in finite time.

I would have to agree with Vance. Infinite data structures are
fundamentally different from finite ones. For example, propositions over
infinite sets may be undecidable where their restrictions to finite sets
are decidable. Moreover, infinite sets may not be enumerable, whereas
finite sets (always?) are. To me these two facts suggest that infinite
data structures are very, very different from finite ones, and languages
should at the very least indicate this difference with different syntax.

graham
--
Graham Matthews And it's true we are immune
Pure Math, Uni.Sydney, Oz When fact is fiction and T.V. is reality
gra...@maths.su.oz.au

s...@doc.ic.ac.uk

Jul 20, 1992, 5:23:16 AM
In article <39...@skye.dcs.ed.ac.uk> d...@dcs.ed.ac.uk (Dave Berry) writes:
>s...@doc.ic.ac.uk writes:
>>mult x n = 0,   if n = 0
>>         = y,   if n > 0 & n mod 2 = 0
>>         = y+x, otherwise
>>           where y = 2*(mult x (n div 2))
>
>fun mult x 0 = 0
>  | mult x n =
>      let val y = 2 * (mult x (n div 2))
>      in if n > 0 andalso n mod 2 = 0
>         then y
>         else x + y
>      end;
>
>This seems to work. Have I missed something?

No problem. I'm sure it works.

My point was that this kind of solution involves a minute bit of extra
scheduling responsibility, to say that the where/let clause should only be
evaluated in the last two cases, whereas the lazy solution can leave that to
the compiler and run-time system.

The follow-up postings that criticized Dave's solution on the grounds of
hidden laziness have missed the point. It's quite plain that eager languages
do have lazy features, of which if...then...else is the most obvious example.
The sort of eager solution I'd really like is one that shows how these
features can be used to relieve the programmer of mult of his minute bit of
scheduling responsibility. (It really is minute in the case of mult, but I
should think the same responsibility could be more considerable in bigger
examples.)

lor...@cs.wisc.edu (I don't know his human name) has sent me a very economical
solution in SML, though it still has the explicit scheduling:

>>>>>>>>>>>>>>>>>>>>>>>>>
fun mult _ 0 = 0
| mult x n = 2*(mult x (n div 2))+(if (n mod 2) = 0 then 0 else x)
<<<<<<<<<<<<<<<<<<<<<<<<<

Steve Vickers.

Rob Turner

Jul 20, 1992, 3:01:37 PM
In article <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:
>Hello,
>
> When I use a lazy language AND really use
>the fact that it is lazy, I must constantly be worried about the
>order of evaluation. On the other hand, for 99% of all programs
>

I find this very interesting. One of the touted advantages of lazy
evaluation is that the programmer does not have to be concerned with
evaluation order. This is true for many programs, and must be
considered to be a great boon over the explicit sequencing which
goes on in imperative programs, but increasingly these days there
are a lot of people who are saying "I *want* to know the order in
which values are calculated, because it will affect the way I write
my program."

Being a software engineer I think it is very important that we move
in a direction where we *shouldn't* have to worry about evaluation
order. I'd like to think that if lazy evaluation is slow, then we'll
have to design faster machines. There must come a point where ease
of programming overrides speed/memory considerations...

Rob

--
Rob Turner, Dept. of Computer Science, University of Hull, UK.
Internet: r...@cs.hull.ac.uk Phone: (0482) 465212

R.S. Nikhil

Jul 20, 1992, 3:33:18 PM
Many people have been using these terms interchangeably, i.e.,

``non-strict'' = ``lazy'' (1)

I would like to point out that this is inaccurate; in fact, it
compares apples and oranges.

I think of ``non-strictness'' as a concept from denotational
semantics--- it asks if ``f BOTTOM = BOTTOM''.

I think of ``laziness'' as a concept from operational semantics--- it
is a strategy for choosing redexes such that no ``unnecessary''
redexes are ever reduced. In other words, it is a scheduling
strategy.

Now, lazy evaluation may be a way to IMPLEMENT non-strictness, but
equation (1) is still inaccurate because it is not the only way. Here
are two (of many) possible implementations of non-strictness that are
alternatives to lazy evaluation:

- The ``parallel outermost'' computation rule, which reduces all
redexes that are outermost in parallel.

- The ``lenient evaluation'' computation rule [Traub, FPCA 89], where
all redexes are evaluated in parallel except inside the arms of
conditionals and inside lambdas.

In both the above parallel computation rules, one can replace ``all
redexes'' by ``some redexes'' by adding a priority condition that
ensures progress on the normal-order spine.

Neither of the above computation rules is ``lazy'' (because they
reduce unnecessary redexes)--- in fact, they are both ``eager'' (since
they both reduce speculatively). However, they also implement
non-strictness.
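
To see the distinction concretely (a minimal sketch in Haskell):

f :: Integer -> Integer
f x = 3            -- f never uses x, so f is non-strict

loop :: Integer
loop = loop        -- a divergent (BOTTOM) expression

-- f loop yields 3 under lazy evaluation, which never reduces
-- the argument; a parallel-outermost rule can also yield 3,
-- reducing the argument speculatively and discarding the work.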

Now, one can argue about which computational strategy is ``better''
(time, space, parallelism, ...) but please let us not inaccurately
equate a concept from denotational semantics (non-strictness) with a
particular reduction strategy (lazy evaluation).

Regards,

Nikhil (nik...@crl.dec.com)

Vance Morrison

unread,
Jul 20, 1992, 5:39:29 PM7/20/92
to
rtu...@nyx.cs.du.edu (Rob Turner) writes:

>In article <BrFpJ...@news.cso.uiuc.edu> vpmg...@uxa.cso.uiuc.edu (Vance P. Morrison) writes:
>>Hello,
>>
>> When I use a lazy language AND really use
>>the fact that it is lazy, I must constantly be worried about the
>>order of evaluation. On the other hand, 99% of all programs


>I find this very interesting. One of the touted advantages of lazy
>evaluation is that the programmer does not have to be concerned with
>evaluation order.

>Being a software engineer, I think it is very important that we move
>in a direction where we *shouldn't* have to worry about evaluation
>order.

That is the very point I was trying to make. Namely that when you
use 'lazy lists' or other structures that work only in lazy languages
you ARE worrying about order of evaluation (since you have to make
sure that your program terminates). Now granted, you have to worry
about order of evaluation in a strict language too, but since lazy
data structures are not allowed, this reasoning can be highly localized
(that is, the termination properties of a piece of code are only
very loosely coupled to the inputs to that code). If infinite
structures are allowed, that reasoning is no longer localized
(since figuring out the termination properties now depends strongly
on the input given).
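
A small illustration of the point (a sketch in Haskell notation;
the name firstBelow is made up):

firstBelow :: Integer -> [Integer]
firstBelow n = take 3 (filter (< n) [1..])

-- firstBelow 10 yields [1,2,3] and terminates, but firstBelow 2
-- yields 1 and then searches the infinite list forever: whether
-- this code terminates depends strongly on the input it is given.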

The bottom line is that if you REALLY want to abstract order of
evaluation away, you have to use a STRONGLY NORMALIZING language
(they do exist, take type theory for example). Now order of evaluation
truly doesn't matter, since any evaluation order terminates.

In fact, the language I am working on is strongly normalizing.
It does this by requiring a proof of termination whenever a function
is defined. This proof essentially tells the compiler what constraints
there are on the order of evaluation. The beauty of this approach is
that function calls ALWAYS denote the value of their application (since
there is no nasty non-termination case), substitution is always valid,
and normal naive set theory can be used to reason about it (no domains
needed).
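
To give the flavour of such a discipline (a sketch only; plain
Haskell does not enforce this, but a termination checker could
accept the following definition because each recursive call peels
one constructor off its argument):

data Nat = Zero | Succ Nat

add :: Nat -> Nat -> Nat
add Zero     m = m
add (Succ n) m = Succ (add n m)   -- structurally smaller call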

Vance

Andrew Koenig

unread,
Jul 20, 1992, 10:37:59 PM7/20/92
to
People interested in this issue may wish to read the excellent discussion
in Chris Reade's book ``Elements of Functional Programming.''

Among other things, Reade points out that some techniques that work well
for non-lazy languages cause trouble for lazy languages. For example:

fun accumulate f a [] = a
| accumulate f a (b::x) = accumulate f (f a b) x

This function is fine in a non-lazy language, but for a lazy language
it has the serious flaw that it returns no result until it has seen its
entire argument. On the other hand, the similar function

fun reduce f a [] = a
| reduce f a (b::x) = f b (reduce f a x)

is capable of yielding a result without first seeing the entire argument.

That means that if the append operator @ is defined as

fun (x @ y) = reduce cons y x

then its behavior will be quite different in a lazy language from what
it would have been if it were defined this way:

fun (x @ y) = revonto y (rev x)
and rev x = revonto [] x
and revonto y x = accumulate consonto y x
and consonto y a = a :: y

In this latter case x @ y yields no output until all of x has been
traversed; in particular ((a :: omega) @ y) = omega (where omega is
the `undefined' or `nonterminating' value), whereas the reduce-based
version yields a as its head.
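
The difference is easy to observe directly (a minimal sketch in
Haskell; the names appendR and appendA are made up):

appendR, appendA :: [a] -> [a] -> [a]
appendR xs ys = foldr (:) ys xs           -- reduce-style
appendA xs ys = revOnto ys (reverse xs)   -- accumulate-style
  where revOnto acc []     = acc
        revOnto acc (a:as) = revOnto (a:acc) as

-- take 3 (appendR [1..] []) yields [1,2,3], but
-- take 3 (appendA [1..] []) yields nothing, ever:
-- reverse must traverse all of [1..] first.

A quote: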

It seems unfortunate that a common style of programming is not
possible for both lazy and non-lazy evaluation. The subtle
differences that appear really make the case for further
separation of specification and optimisation concerns in
programming. It is often the case that the most natural
specification of a function is closer to being an efficient
program for lazy evaluation than it is for non-lazy evaluation.
Knowledge of equivalences and performance of function
definitions becomes more important for gaining efficiency in
non-lazy implementations. Some commonality can be regained by
turning to a more transformational approach to program
construction.
--
--Andrew Koenig
a...@europa.att.com

s...@doc.ic.ac.uk

unread,
Jul 21, 1992, 5:33:54 AM7/21/92
to
In article <1992Jul20.2...@m.cs.uiuc.edu> morr...@dante.cs.uiuc.edu
(Vance Morrison) writes:
>...

>That is the very point I was trying to make. Namely that when you
>use 'lazy lists' or other structures that work only in lazy languages
>you ARE worrying about order of evaluation (since you have to make
>sure that your program terminates). Now granted, you have to worry
>about order of evaluation in a strict language too, but since lazy
>data structures are not allowed, this reasoning can be highly localized
>(that is, the termination properties of a piece of code are only
>very loosely coupled to the inputs to that code). If infinite
>structures are allowed, that reasoning is no longer localized
>(since figuring out the termination properties now depends strongly
>on the input given).
>...

Please can we have some examples to illustrate these points?


>The bottom line is that if you REALLY want to abstract order of
>evaluation away, you have to use a STRONGLY NORMALIZING language
>(they do exist, take type theory for example). Now order of evaluation
>truly doesn't matter, since any evaluation order terminates.

On the face of it, evaluation order is related more closely to the
Church-Rosser property than to strong normalization.

Steve Vickers.

Stefan Kahrs

unread,
Jul 21, 1992, 11:41:55 AM7/21/92
to
VM=Vance Morrison
SV=Steve Vickers

VM >The bottom line is that if you REALLY want to abstract order of
VM >evaluation away, you have to use a STRONGLY NORMALIZING language
VM >(they do exist, take type theory for example). Now order of evaluation
VM >truly doesn't matter, since any evaluation order terminates.

SV On the face of it, evaluation order is related more closely to the
SV Church-Rosser property than to strong normalization.

An addition to Steve Vickers' remark:
Take Standard ML without side-effects, but with exceptions:
if you evaluate with different strategies, you can get _different_
results, because an exception-raising expression can be dropped in a
non-strict evaluation.
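
The analogous observation in Haskell terms (a sketch):

example :: Integer
example = fst (1, error "boom")

-- Under lazy evaluation the error is dropped and example
-- yields 1; a strict strategy would evaluate both components
-- of the pair first and raise the error instead.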

---

But the Church-Rosser (CR) property does not give you the whole story.
The CR property is a property of the full (non-deterministic)
evaluation relation, but when we deal with strategies, we deal with
subrelations of that relation. Although in principle we are
_interested_ in the full evaluation relation, we actually _deal_
only with such a subrelation.

If our evaluation relation is CR, our strategy cannot go "wrong".
"But does it go right?" is a different question, and it can mean
several things, for example:

- Suppose t and u are equal wrt. forward/backward evaluation.
Does the strategy F find a common reduct, i.e. are there
numbers m and n such that F^m(t) and F^n(u) are identical?

- Suppose t has a normal form u.
Does the strategy F find it,
i.e. is there a number n such that F^n(t)=u?

Notice that the first property is very strong
and implies the second one (in Barendregt's Lambda-Calculus book
the first property is called CR [for strategies],
the second one "normalising").

Leftmost-outermost evaluation for lambda-calculus is normalising, but it
does not have the stronger CR-property.

As long as we are only interested in normal forms, the "normalising"
property is quite sufficient. But:
An unfortunate property of functional languages with pattern
matching is that their definitions are more like term rewriting rules.
For term rewriting systems, even of this particular form,
leftmost-outermost reduction is _not_ normalising.
Example:

myand False x = False
myand x False = False
myand True True = True

For the above definition (in Miranda),
the term (myand undef False) is obviously equal to False,
but it does not evaluate to it. Switching the first two
equations makes this term evaluate to False in Miranda.

The example shows that evaluation order does matter in lazy languages
(I would bet the same happens in Haskell): their evaluation strategy
is _not_ normalising.
That doesn't mean that there is no such strategy (indeed there is),
but it is much harder to implement efficiently.
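
Transcribing the example into Haskell, which also tries equations
top-to-bottom and matches patterns left-to-right, shows the same
behaviour (a sketch):

myand :: Bool -> Bool -> Bool
myand False x    = False
myand x    False = False
myand True True  = True

-- myand undefined False diverges: trying the first equation
-- forces the first argument.  With the first two equations
-- swapped, myand undefined False = False.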

Of course one can claim that definitions like the one above are
only syntactic sugar for certain nested case-expressions,
and that for those everything is fine, but this misses the point:
As a programmer, I think of the equations I write down as equations
and not as nested case-expressions, and I would like to be able to
manipulate them as I usually manipulate equations.
In other words: I would like the semantics to keep the promises
the syntax makes.

I don't have this in strict languages, but -- as I pointed out --
I don't have it in lazy languages either.
