
local variables don't cause side effects


Marshall Spight

Jul 9, 2003, 1:29:45 AM
Hi all,

So I'm reading all this stuff about functional programming lately,
and it seems like the literature is often conflating two ideas:
that functional languages don't have variables, and that a function's
return value(s) are determined solely by the argument values.

Well, they are not the same thing. If a function doesn't have
access to global state, but its implementation *can* make
use of local variables, then it's going to have the parameters-
uniquely-determine-results property. Right?

So, can a language still be considered functional if it allows
for local variables? Do local variables really make it any harder
to reason about a function's implementation?


Marshall

Feuer

Jul 9, 2003, 2:35:05 AM

Marshall Spight wrote:

> So, can a language still be considered functional if it allows
> for local variables? Do local variables really make it any harder
> to reason about a function's implementation?

Side effects within a function are just as hard to deal with as ones
at the top level. Learn a functional language first, then start asking
weird questions.

David

Joachim Durchholz

Jul 9, 2003, 4:31:36 AM
Marshall Spight wrote:
> Well, they are not the same thing. If a function doesn't have
> access to global state, but its implementation *can* make
> use of local variables, then it's going to have the parameters-
> uniquely-determine-results property. Right?

Right.
Uh, well, sort of. If you can construct references to local variables,
and allow assignments via references, a function can indeed modify
variables outside its scope, so you have to be rather careful about what
you allow programmers to do with locals. But let me assume that all
mechanisms that might modify a nonlocal variable were disabled, to keep
the answer interesting ;-)

But you don't need them. To transform an imperatively-written function
into a functional version:
1) Whenever the imperative code reassigns to a local variable, just
create another variable that will take up the new value.
2) Whenever there's a loop, move the loop body into a recursive helper
function. (Note that most loops you'll encounter can be transformed into
higher-order functions which will do the recursion for you. You don't
have to reinvent the wheel in every case.)
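
For instance, here is a minimal Haskell sketch of both steps (the
imperative starting point is made up purely for illustration): the loop
"total := 0; for each x in xs do total := total + x" might become

sumList :: [Int] -> Int
sumList xs = go 0 xs
  where
    -- every "reassignment" of total becomes a fresh argument binding
    go total []       = total                -- loop finished: return the accumulator
    go total (x:rest) = go (total + x) rest  -- total := total + x, then the next iteration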

> So, can a language still be considered functional if it allows
> for local variables?

This depends a lot on the definition of the day for "functional" ;-)

However, at the interface level, the function will indeed be functional
(the more generally accepted term for that property is "referentially
transparent", which means it's irrelevant whether you substitute the
function body directly at the call site or use the function name - i.e. a
reference to the actual function).

> Do local variables really make it any harder
> to reason about a function's implementation?

In automatic reasoning: yes.
(Note that "automatic reasoning" is important when it comes to things
like compiler optimization. The compiler must reason whether a given
code transformation preserves the semantics of the program, after all.)

For a human who's trying to understand a program piece, this depends: on
prior experience of that human (i.e. whether he has an imperative or a
functional background, and his general level of expertise); on the size
of the function; on how clever the original writer of the function tried
to be; etc.
In general, it's much more difficult to write obscure code if function
bodies are written functionally. For a team leader, this alone should be
enough incentive to select a language that uses referential transparency
inside of functions :-)


Hope this helps.
Jo

P.S.: Don't let yourself be deterred by responses like those of David
Feuer - I don't know why he reacted in such a harsh manner.
Let me assure you that I don't consider your question "weird". Actually it
made me rethink an aspect of my personal definition of "fundamental",
and I generally consider it good when I can sharpen my perception of
things by answering such questions :-)
JD

Neelakantan Krishnaswami

Jul 9, 2003, 8:07:51 AM
Marshall Spight <msp...@dnai.com> wrote:
> Hi all,
>
> So I'm reading all this stuff about functional programming lately,
> and it seems like the literature is often conflating two ideas:
> that functional languages don't have variables, and that a function's
> return value(s) are determined solely by the argument values.
>
> Well, they are not the same thing. If a function doesn't have
> access to global state, but its implementation *can* make
> use of local variables, then it's going to have the parameters-
> uniquely-determine-results property. Right?

Not necessarily -- here's an example in Scheme and Ocaml:

(define gensym
  (let ((state 0))
    (lambda ()
      (set! state (+ state 1))
      state)))

let gensym =
  let state = ref 0 in
  fun () ->
    (state := !state + 1;
     !state)

What this does, at the interpreter prompt:

# gensym();;
- : int = 1
# gensym();;
- : int = 2
# gensym();;
- : int = 3

We've defined a function gensym that has some local state (a counter)
that persists from call to call, but which doesn't access any global
state.

The general idea that you haven't quite wrapped your head around is
that functions are values too, and that you can define new functions
and return them as values. Since a closure can capture bindings, this
means that local state can "escape" the caller.

> So, can a language still be considered functional if it allows for
> local variables? Do local variables really make it any harder to
> reason about a function's implementation?

A language can be considered functional even if it has variables and
side-effects, as both Scheme and ML do. IMO the two things that permit
programming in a functional style are the ability to create function
closures as first-class values, and tail recursion.

--
Neel Krishnaswami
ne...@alum.mit.edu

Marshall Spight

Jul 9, 2003, 10:43:36 AM
"Feuer" <fe...@his.com> wrote in message news:3F0BB799...@his.com...

If the noobs didn't come around at regular intervals and ask weird
questions, the experts would get bored. How would they know
what they know after a while? Epistemology would suffer. :-)


Marshall

Marcin 'Qrczak' Kowalczyk

Jul 9, 2003, 10:58:33 AM
Neelakantan Krishnaswami wrote:

> A language can be considered functional even if it has variables and
> side-effects, as both Scheme and ML do. IMO the two things that permit
> programming in a functional style are the ability to create function
> closures as first-class values, and tail recursion.

IMHO it's essential that you can conveniently program without side effects.
Working with immutable values, the essence of functional programming, may be
hard or easy depending on the language. Most modern languages, even those
not considered functional, make it easy (e.g. Python, Ruby) but not all
(e.g. Eiffel and languages without GC).

This requires:

- Garbage collection, so you can freely share identical substructures which
occur often if an object is replaced with a similar one instead of
mutating it in place.

- Efficient passing and returning objects by reference, because you do this
a lot. A function returns a large structure instead of putting it in
another mutable data structure. In practice this coincides with garbage
collection.

- Data structures holding pure data - easily defined, constructible by
expressions, and supported by libraries.

Negative examples: In Eiffel an object's constructor is invoked by a
statement initializing a variable, not by an expression. In C++ there
are no tuples or array literals; fortunately, libraries can help a bit.
Smalltalk makes functional programming harder because the standard library
provides only imperative data structures - I think even Lisp-like lists
would have to be defined from scratch if one wanted to program in the
functional style.
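
A tiny Haskell illustration of the sharing point above (a toy example,
nothing library-specific): prepending to an immutable list allocates one
new cell and shares the rest, and GC is what makes that sharing safe and
cheap:

xs :: [Int]
xs = [2, 3, 4]

ys :: [Int]
ys = 1 : xs   -- ys is [1,2,3,4]; the cells of xs are shared, not copied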

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Robert Will

Jul 9, 2003, 12:40:20 PM
Marcin 'Qrczak' Kowalczyk wrote:
>
> IMHO it's essential that you can conveniently program without side effects.
> Working with immutable values, the essence of functional programming, may be
> hard or easy depending on the language. Most modern languages, even those
> not considered functional, make it easy (e.g. Python, Ruby) but not all
> (e.g. Eiffel and languages without GC).

IMHO Eiffel allows functional programming much better than Ruby does.
(If only because it allows programming much better.)

> Negative examples: In Eiffel an object's constructor is invoked by a
> statement initializing a variable, not by an expression.

Eiffel has included creation expressions for quite some time. It also
includes first-class functions, and the standard library provides some
higher-order standard functions.

But traditionally Eiffel programmers have used a very imperative style,
and, consequently, standard libraries have not been very functional
either. Probably this is because Eiffel's good structured programming
and abstraction mechanisms do take imperative programming very far
before it sucks. Other languages (like Ruby) go for more
features rather than an overall good design.


> I think even Lisp-like lists would have to be defined from scratch
> if one wanted to program in the functional style.

But Lisp-like lists are not what programming needs. In Eiffel I am
programming with sequences (an immutable data structure, whose mutable
sister I call list), which are implemented in a class, and I don't care
how. (Although I wrote the implementation.) Instead of working with CAR and
CDR (or head and tail), I'm using more powerful sequence combinators:
map, concat, filter. Of course, there is also head and tail (although I
followed the Scheme convention and called them first and but_first,
because there's also last and but_last and all of them are similarly
efficient: constant amortized or logarithmic depending on the
implementation variant).

Bob.

PS: I'll post a link to my immutable Eiffel data structures, when the
library is finished. They're really simpler and more flexible than
those in other languages.


Joachim Durchholz

Jul 9, 2003, 2:37:04 PM
Robert Will wrote:

> Marcin 'Qrczak' Kowalczyk wrote:
>> Negative examples: In Eiffel an object's constructor is invoked by a
>> statement initializing a variable, not by an expression.
>
> Eiffel includes creation expressions since quite a time.

Are they standardized yet?

> Also it
> includes first-class functions and the standard library provides some
> higher-order-standard-function.

Yes - but Eiffel doesn't even attempt to keep the consequences of having
higher-order subroutines and side effects under control.
It's powerful but too dangerous, IMHO.

> But traditionally Eiffel programmers have used a very imperative style,

Mostly because creation expressions are a rather late addition :-)

> and, consequently, standard libraries have not been very functional
> either. Probably this is because Eiffel's good structured programming
> and abstraction mechanisms do take imperative programming very far
> before it sucks.

I think it's more a question of mentality.
Eiffel was created in a strongly imperative mindset, attracted
imperative programmers, and caused the vast majority of library code to
be written imperatively.
Even the design philosophy of Eiffel is strongly imperative: there's a
sharp distinction between functions (returning values from an object)
and procedures (modifying the state of an object).

Another problem in Eiffel is its a-class-is-a-module-is-a-class
philosophy. A collection of routines that create and return functional
objects doesn't have an easy resting place. If you want to write (say) a
data structure library, you need two classes for every type: one to
define the actual type, one to define the constructor routines.
Not having creation expressions just added to the pain... (I know, I
tried exactly this two years ago, on a compiler that didn't support
creation expressions... this could be worked around, but being forced to
define a type when I just wanted a module drove me nuts as well).

> Other languages (like Ruby) rather go for more
> features than an overall good design.

I can't comment on Ruby, I don't know it.
I have read people singing its praise though.

> PS: I'll post a link to my immutable Eiffel data structures, when the
> library is finished. They're really simpler and more flexible than
> those in other languages.

Ah - send me a note when you're done. It will be interesting to compare
ideas and mechanisms.

Regards,
Jo

Neelakantan Krishnaswami

Jul 9, 2003, 2:58:56 PM
Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> wrote:
> Neelakantan Krishnaswami wrote:
>
> > A language can be considered functional even if it has variables and
> > side-effects, as both Scheme and ML do. IMO the two things that permit
> > programming in a functional style are the ability to create function
> > closures as first-class values, and tail recursion.
>
> IMHO it's essential that you can conveniently program without side effects.
> Working with immutable values, the essence of functional programming, may be
> hard or easy depending on the language. Most modern languages, even those
> not considered functional, make it easy (e.g. Python, Ruby) but not all
> (e.g. Eiffel and languages without GC).

Ruby is a weird little language. It definitely strongly encourages a
higher-order style of programming -- it has a feature called
"iterators" which is basically a way of passing a function to a
routine as an implicit argument. But it doesn't have tail calls.

I think that FP techniques are soaking into the open-source PL-hacker
community, and this is great news. Maybe the next big scripting
language will have support for tail call optimization.

> This requires:
>
> - Garbage collection, so you can freely share identical substructures
> which occur often if an object is replaced with a similar one instead
> of mutating it in place.

I had just kind of assumed this was necessary to implement closures.

> Negative examples: In Eiffel an object's constructor is invoked by a
> statement initializing a variable, not by an expression. In C++ there
> are no tuples nor array literals, fortunately libraries can help a bit.

I'm very surprised that creation expressions weren't in Eiffel from
the start -- the progress of programming language design can, in many
ways, be seen as making more and more things anonymous. Eg, in
assembly language, you have to explicitly name all temporaries, but in
Fortran, you can have arithmetic and boolean expressions. In Pascal
and C, you have to name all your functions, but in Scheme and ML, you
can have anonymous functions, etc etc.


--
Neel Krishnaswami
ne...@alum.mit.edu

Marcin 'Qrczak' Kowalczyk

Jul 9, 2003, 6:27:20 PM
Robert Will wrote:

> Imho Eiffel does allow functional programming much better than Ruby.
> (If only because it allows programming much better.)

What does Ruby lack wrt. functional programming? I know it doesn't provide
functional data structures in the library, e.g. it has no immutable list
type (with constant time cons & tail) or dictionary type (with logarithmic
time non-destructive insertion and deletion). But most "non-functional"
languages don't provide them either and Ruby has the needed language
features, and its syntax is sufficiently expression-oriented.
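
For a picture of what such a persistent dictionary looks like where it
does exist, here is a toy Haskell fragment (my illustration, using
Data.Map from the containers library):

import qualified Data.Map as M

m0, m1 :: M.Map String Int
m0 = M.fromList [("a", 1), ("b", 2)]
m1 = M.insert "c" 3 m0   -- non-destructive: m0 is untouched, and the
                         -- unchanged subtrees are shared between m0 and m1

main :: IO ()
main = print (M.toList m0, M.toList m1)
-- ([("a",1),("b",2)],[("a",1),("b",2),("c",3)])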

Eiffel is in the minority in returning values from functions by assigning them
to the special variable called Result. Of course it's just as expressive as the
more common styles (the function body being an expression, or having a return
statement), but it doesn't feel functional :-)

> Eiffel includes creation expressions since quite a time.

Good to know. It didn't when I read about it (about two years ago).

> Also it includes first-class functions and the standard library provides
> some higher-order-standard-function.

Can you wrap inline-written expressions and statements in functions, with
arbitrary free variables? Or is it still only the "bind some arguments
and/or the receiver of a method" feature called agents?

Do other implementations support that, besides the commercial implementation
associated with Bertrand Meyer (I think there is a "main" implementation
which drives the language development, and it's not open source)?

> PS: I'll post a link to my immutable Eiffel data structures, when the
> library is finished. They're really simpler and more flexible than
> those in other languages.

Aha, so you can program in the functional style in Eiffel provided that you
use a non-standard library which isn't finished yet? :-)

One side is language features: function closures, some equivalent of
algebraic types, ability to freely share immutable subcomponents. The other
side is the library - the need to reinvent functional data structures
and make them fit the rest of the library.

For example it would be a pity if the method returning sequence length used
in the standard library was compatible only with flat mutable sequences
(being declared in some superclass which provides an array-like interface),
or if the iteration protocol relied on random-access indexing (as Python used
to do).

How do you emulate algebraic types in Eiffel? E.g. imagine a tree representing
an intermediate form of the code inside a compiler. Do I have to define each
case in a separate file? Can I write an ML-style case expression (or at
least a statement)? Matching just the top-level class is enough, but I
want to access the "constructor arguments" of particular cases in their
respective branches.

For example, let's take an OCaml function I've actually used (Pair is one of
the constructors of a large algebraic type). What would it look like in Eiffel
in functional style?

let rec flattenExprs = function
| [] -> []
| Pair (_, x, y) :: rest -> flattenExprs (x :: y :: rest)
| x :: rest -> x :: flattenExprs rest

Joachim Durchholz

Jul 9, 2003, 6:34:33 PM
Marcin 'Qrczak' Kowalczyk wrote:
> [lots of relevant critique of Eiffel as a not really functional language snipped]

The point of the agent stuff in Eiffel is not making Eiffel a functional
language. It's making Eiffel more useful - not of particular interest
here in clf, but still a laudable intention.
An immutable data structure library would be somewhat interesting, just
to point out that functional data structures have begun making their way
into imperative languages. (Eiffel is still massively imperative. You
can't guarantee that an object is immutable, it might be of a subtype
with mutators added to the class. The main designer of the language will
accept ideas only on a thrice-per-decade basis, so it's unlikely that
he'll ever join the functional bandwagon - which is a Bad Thing if you
believe in functional programming, and a Good Thing if you believe in
multiple paradigms being explored, the failing paradigms still useful as
a contrast to the succeeding ones. Besides, making Eiffel a "clean
functional language" would require ripping out its type conformance
rules and rewriting all the base libraries and all the applications
written in it - nothing that's even remotely feasible.)

Regards,
Jo

David Basil Wildgoose

Jul 10, 2003, 4:09:01 AM
Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> wrote:

> Aha, so you can program in the functional style in Eiffel provided that you
> use a non-standard library which isn't finished yet? :-)

A little unfair, I think (and yes, I did notice the "smiley").

An equivalent point is that you can program in the imperative style in
Haskell provided that you use the Monad library which is still under
research and development.

And the difference is?

Andreas Rossberg

Jul 10, 2003, 6:06:47 AM
Neelakantan Krishnaswami wrote:

> Marshall Spight <msp...@dnai.com> wrote:
>>
>> If a function doesn't have
>> access to global state, but its implementation *can* make
>> use of local variables, then it's going to have the parameters-
>> uniquely-determine-results property. Right?
>
> Not necessarily -- here's an example in Scheme and Ocaml:
>
> (define gensym
> (let ((state 0))
> (lambda ()
> (set! state (+ state 1))
> state)))
>
> let gensym =
> let state = ref 0 in
> fun() ->
> (state := !state + 1;
> !state)

This is not a counterexample, IMHO, because the state reference is
global to the actual function expression (even though its scope is
restricted to it).

I think Marshall is right with his conjecture. E.g. consider Haskell's
ST monad, which you can run locally. But as others pointed out, that
does not necessarily imply anything.
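
A minimal sketch of that (a made-up example, not anything from the
thread): all the mutation happens on a local STRef inside runST, and the
caller only ever sees a plain Int, so the function stays referentially
transparent:

import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- sumTo n = 0 + 1 + ... + n, computed with a local mutable accumulator
sumTo :: Int -> Int
sumTo n = runST (do
  acc <- newSTRef 0
  forM_ [0 .. n] (\i -> modifySTRef' acc (+ i))
  readSTRef acc)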

| Andreas

--
Andreas Rossberg, ross...@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.

Joachim Durchholz

Jul 10, 2003, 8:55:30 AM
David Basil Wildgoose wrote:
> An equivalent point is that you can program in the imperative style in
> Haskell provided that you use the Monad library which is still under
> research and development.
>
> And the difference is?

That Haskell monads aren't under research anymore?

I'm sure that there's a lot of research underway concerning monads, but
I'd be rather surprised if there were no usable, useful, stable subset of
the monad library. Last time I looked, there wasn't even a de facto
language standard for agents (Eiffel's variant of closures).
Since I have left the Eiffel community (and lost most of my interest in
Eiffel as well), I don't know how things have been progressing in the
last, say, three or four months.

Regards,
Jo

William Lovas

Jul 10, 2003, 1:06:05 PM
In article <265d96ac.03071...@posting.google.com>, David Basil Wildgoose wrote:

One major difference, if I understand the debate correctly, is that the
Monad library is part of the Haskell 98 standard -- any *standard
conforming* implementation *must* include Monad and IO facilities.

William

Albert Lai

Jul 10, 2003, 1:45:53 PM
Joachim Durchholz <joachim....@web.de> writes:

> Right.
> Uh, well, sort of. If you can construct references to local variables,
> and allow assignments via references, a function can indeed modify
> variables outside its scope, so you have to be rather careful about
> what you allow programmers to do with locals. But let me assume that
> all mechanisms that might modify a nonlocal variable were disabled, to
> keep the answer interesting ;-)
>
> But you don't need them. To transform an imperatively-written function
> into a functional version:
> 1) Whenever the imperative code reassigns to a local variable, just
> create another variable that will take up the new value.
> 2) Whenever there's a loop, move the loop body into a recursive helper
> function. (Note most loops that you'll encounter can be transformed
> into higher-order functions which will do the recursion for you. You
> don't have to reinvent the wheel in every case.)

Example: A function that computes 0 + 1 + 2 + ... + n, given natural
number n.

Incarnation #0:

fun triangle n =
   var s := 0, i := 0
   while i <= n do
      s := s + i; i := i + 1
   end while
   s

Incarnation #1:

triangle n =
   let fun myloop (s,i) =
          if i<=n then
             myloop (s+i,i+1)
          else
             (s,i)
   in fst (myloop (0,0))

Remark: myloop could return just s and not both s and i - a human
certainly always performs this optimization, and so does an optimizing
compiler that knows live/dead variable analysis. But conceivably
a naive de-sugarer would simply translate the while-loop into a helper
function that inputs all local variables and outputs all updated local
variables, and leave all the obvious optimizations to another stage.

> > Do local variables really make it any harder
> > to reason about a function's implementation?
>
> In automatic reasoning: yes.
> (Note that "automatic reasoning" is important when it comes to things
> like compiler optimization. The compiler must reason whether a given
> code transformation preserves the semantics of the program, after all.)

The de-sugaring from incarnation #0 to #1 can be done automatically;
afterwards it is just pure functional reasoning. If it is deemed
"harder" simply because there is an extra step of de-sugaring, you are
of course technically right - electricity is not cheap during Summer
and in these days of privatization and de-regulation. (Apologies to
those of you in Winter.)

> For a human who's trying to understand a program piece, this depends:
> on prior experience of that human (i.e. whether he has an imperative
> or a functional background, and his general level of expertise); on
> the size of the function; on how clever the original writer of the
> function tried to be; etc.
> In general, it's much more difficult to write obscure code if function
> bodies are written functionally. For a team leader, this alone should
> be enough incentive to select a language that uses referential
> transparency inside of functions :-)

There are people who find incarnation #0 obscure - "variables! Now I
won't be able to follow who modifies what and when! This is all magic
to me."

There are people who find incarnation #1 obscure - "functions! Now I
won't be able to follow which value goes to which argument and how!
This is all Greek to me."

Aaron Denney

Jul 10, 2003, 3:43:32 PM
In article <4uznjm1...@vex.net>, Albert Lai wrote:
> Example: A function that computes 0 + 1 + 2 + ... + n, given natural
> number n.
>
> Incarnation #0:
>
> fun triangle n =
>    var s := 0, i := 0
>    while i <= n do
>       s := s + i; i := i + 1
>    end while
>    s
>
> Incarnation #1:
>
> triangle n =
>    let fun myloop (s,i) =
>           if i<=n then
>              myloop (s+i,i+1)
>           else
>              (s,i)
>    in fst (myloop (0,0))

Incarnation #2:
triangle n = n * (n + 1) / 2

Tim Sweeney

Jul 10, 2003, 9:33:50 PM
Marshall,

This is actually a great question. Pay no attention to the functional
programming curmudgeons (referential transparency isn't a religion,
it's a tool).

From the outside, a function that has no side-effects or dependence on
external state or I/O is referentially transparent and can be treated
uniformly regardless of whether its internal implementation is purely
functional or locally imperative.

From the inside -- when you get to evaluating the function's body --
whether the code is purely functional or (at least partly) imperative
has a huge impact on your evaluation strategy. For example, lazy
evaluation can be a reasonable strategy when evaluating functional
code, whereas evaluation order must be precise and explicit in
imperative code.

In a language that supports both paradigms, being able to distinguish
"functional functions" and "imperative functions" is very valuable.
For one example, in an implicitly parallelizing compiler, a call to a
purely functional function can be handed off to another thread; if
many such calls exist in a region of code with no intermingled
side-effects, then N calls can be handed off to N independent CPU's
without worrying about their global evaluation order -- even if those
functions happen to use locally imperative code internally (for
example, by manipulating a local heap that's discarded upon return).
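
A rough Haskell illustration of why purity makes such a hand-off safe
(this uses explicit par annotations from the parallel package rather
than an implicitly parallelizing compiler, but the principle is the same):

import Control.Parallel (par, pseq)

-- A pure function: no side effects, so its calls may be evaluated in any
-- order, or on another CPU, without changing the result.
fib :: Integer -> Integer
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

main :: IO ()
main =
  let a = fib 30
      b = fib 31
  in a `par` (b `pseq` print (a + b))   -- spark a while evaluating b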

I've been able to mix and match both paradigms in an experimental
compiler and am very happy with the results. My feeling is that for
real-world development projects carried out by mainstream programmers
(i.e. games, graphical user interfaces, large-scale server apps),
imperative features are essential to a language's success.

But allowing both functional and imperative approaches to be mixed, in
the local-vs-global approach you suggest, gives you the best of both
worlds.

Tim Sweeney

Jul 10, 2003, 9:49:17 PM
>> If you can construct references to local variables, and allow
assignments via references, a function can indeed modify variables
outside its scope, so you have to be rather careful about what you
allow programmers to do with locals. But let me assume that all
mechanisms that might modify a nonlocal variable were disabled, to
keep the answer interesting ;-) <<

For those interested, there is a great body of research on this topic,
the general idea being that multiple heaps can exist; pointers carry
around "which heap do I point to" information, and one is prevented
(at the typechecking level) from writing to someone else's heap.
Google for "Typed Regions".

>> But you don't need them. To transform an imperatively-written
function into a functional version:
1) Whenever the imperative code reassigns to a local variable, just
create another variable that will take up the new value.
2) Whenever there's a loop, move the loop body into a recursive helper
function. (Note most loops that you'll encounter can be transformed
into higher-order functions which will do the recursion for you. You
don't have to reinvent the wheel in every case.) <<

This works for the case of loops within a single function which
"update" loop variables during iteration. Such constructs can always
be translated into tail-recursive functions quite easily.

But that's not the case, in general, with imperative code. For
example, if one desires the ability to dynamically allocate local
objects within a function and manipulate those objects as well as
mutable references to them, in that function, and in other nested
functions, with the general ability to update such references through
closures, then the translation to purely functional code is far more
complex. Haskell has solved the problem quite elegantly with a purely
functional exposure of a behind-the-scenes native implementation (google
"Haskell IO Monad" and "RefMonad"). But the case of a purely
functional, typesafe, verifiably divergence-free mechanism is an open
research problem that probably requires a more advanced type system to
solve.
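
As a small, made-up Haskell illustration of that kind of facility -
mutable references allocated locally and updated through a closure,
exposed via the IO monad:

import Data.IORef (newIORef, readIORef, modifyIORef')

-- makeCounter allocates a fresh mutable reference and returns a closure
-- that bumps and reads it on every call.
makeCounter :: IO (IO Int)
makeCounter = do
  r <- newIORef 0
  return $ do
    modifyIORef' r (+ 1)
    readIORef r

main :: IO ()
main = do
  next <- makeCounter
  a <- next
  b <- next
  print (a, b)   -- prints (1,2)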

Overall, I think the notion of "translating arbitrary imperative
programs into purely functional ones" is a very valuable way of
conceptualizing, theorizing, and reasoning about things like mutable
references and exceptions. But it shouldn't be considered an
implementation technique, because it's vastly more complex and less
efficient than just exposing imperative functionality in the language
itself.

Joachim Durchholz

Jul 11, 2003, 5:32:06 AM
Aaron Denney wrote:
> Incarnation #2:
> triangle n = n * (n + 1) / 2

That's beside the point.
The discussion is about optimizing a given algorithm, not about
replacing an algorithm with an entirely different one.

Regards,
Jo

Frank Buss

Jul 11, 2003, 5:59:52 AM
Joachim Durchholz <joachim....@web.de> wrote:

> Aaron Denney wrote:
>> Incarnation #2:
>> triangle n = n * (n + 1) / 2
>
> The discussion is about optimizing a given algorithm, not about
> replacing an algorithm with an entirely different one.

If the compiler or interpreter knows that the arguments are integers, then
this is the best optimization, and it would be fine if the system could do
it automatically, perhaps with an integrated computer algebra system for
symbolic calculation.

--
Frank Buß, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Joachim Durchholz

Jul 11, 2003, 6:33:37 AM
Frank Buss wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>
>>Aaron Denney wrote:
>>
>>>Incarnation #2:
>>>triangle n = n * (n + 1) / 2
>>
>>The discussion is about optimizing a given algorithm, not about
>>replacing an algorithm with an entirely different one.
>
> if the compiler or interpreter knows that the arguments are integers, then
> this is the best optimization and it would be fine, if the system could do
> it automatically, perhaps with an integrated math CAS system for symbolic
> calculation.

It's not worth the effort.
It would take more effort to teach compilers to apply the right
transformations, than simply to write down the simpler algorithm.

Computers are no substitute for mental effort. I'm not sure how long it
will take until they are, but I fear this will happen within the next
few decades.
Not that mental laziness in itself is bad, but I shudder at the
consequences in the political arena. (Well, that's definitely going
off-topic now.)

Regards,
Jo

Marshall Spight

Jul 11, 2003, 11:20:36 AM
"Albert Lai" <tre...@vex.net> wrote in message news:4uznjm1...@vex.net...

>
> There are people who find incarnation #0 obscure - "variables! Now I
> won't be able to follow who modifies what and when! This is all magic
> to me."
>
> There are people who find incarnation #1 obscure - "functions! Now I
> won't be able to follow which value goes to which argument and how!
> This is all Greek to me."

It strikes me that the thing about #1 that will cause consternation in some
people isn't functions; it's recursion.

Probably it's been a long time since anyone here had a hard time thinking
about recursion, but it's a real sticking point in teaching programming.
I don't think loops with variables produce the same level of difficulty.


Marshall

Marshall Spight

Jul 11, 2003, 11:28:15 AM
"Tim Sweeney" <t...@epicgames.com> wrote in message news:9ef8dc7.03071...@posting.google.com...

>
> For one example, in an implicitly parallelizing compiler, a call to a
> purely functional function can be handed off to another thread; if
> many such calls exist in a region of code with no intermingled
> side-effects, then N calls can be handed off to N independent CPU's ...

I find this possibility fascinating. It seems clear that available architectures
influence the development of computer languages, and vice versa. We
are entering an era where clustering is available to the home user;
what impact this might have on languages is interesting to speculate on.
My suspicion is that computing models that are easier to parallelize
will become more popular.


> My feeling is that for
> real-world development projects carried out by mainstream programmers
> (i.e. games, graphical user interfaces, large-scale server apps),
> imperative features are essential to a language's success.
>
> But allowing both functional and imperative approaches to be mixed, in
> the local-vs-global approach you suggest, gives you the best of both
> worlds.

Right on!


Marshall

Joachim Durchholz

Jul 11, 2003, 12:55:33 PM
Marshall Spight wrote:
>
> Probably it's been a long time since anyone here had a hard time thinking
> about recursion, but it's a real sticking point in teaching programming.
> I don't think loops with variables produce the same level of difficulty.

Yes they do.
I found it quite unnerving that I had to write bogus things like
"I = I + 1" to increment the loop variable.

At a more fundamental level, assuring that a loop works is just as
easy (or hard) as assuring that a recursion Does the Right Thing. Loops
and recursion are equivalent in so many ways that I'm somewhat surprised
that recursion is still considered more difficult to teach (and maybe
it's really harder to understand - for some reasonable definition).

Regards,
Jo

Joachim Durchholz

Jul 11, 2003, 1:03:28 PM
Marshall Spight wrote:

> "Tim Sweeney" <t...@epicgames.com> wrote:
>
>>For one example, in an implicitly parallelizing compiler, a call to a
>>purely functional function can be handed off to another thread; if
>>many such calls exist in a region of code with no intermingled
>>side-effects, then N calls can be handed off to N independent CPU's ...
>
> I find this possibility fascinating.

Don't hold your breath: home users have little if any use for that.
Unless they are heavily into gaming, in which case the CPU load is most
likely split between mainboard and graphics card, which is a scenario
that's quite the opposite of parallelizing function calls.

That's just a side note. The *real* problem is that handing off
computations to a different processor often entails more work than just
stacking the call. Compilers may try to predict which calls should be
parallelized and which shouldn't, but mispredictions in that area will
quickly eat up the advantages gained here.

Besides, you usually don't have N processors, you have at most two.
Maybe four, in the next five years or so - but I wouldn't bet on that.
Besides, even with SMP, if two processors share the same memory bus,
memory latencies tend to eat up another gob of the performance advantage.

In principle, it's all a great idea. In practice, there are so many
little hurdles that take away a percent here, a percent there that it's
not so great anymore.

>>But allowing both functional and imperative approaches to be mixed, in
>>the local-vs-global approach you suggest, gives you the best of both
>>worlds.

The one thing that I've been missing for two years now is that nobody
has yet shown a convincing example of this:
How to write code that
a) is (possibly heavily) imperative
b) has a purely functional interface
c) allows the compiler to infer (b) even in the presence of (a).

For example, I'd like to use imperative data structures to build
functional containers. No way, even though I'm prepared to annotate all
the loops with loop invariants to help the compiler along...

Regards,
Jo

Benjamin Ylvisaker

Jul 11, 2003, 1:39:20 PM
On Fri, 11 Jul 2003 19:03:28 +0200
Joachim Durchholz <joachim....@web.de> wrote:

> Marshall Spight wrote:
> >>But allowing both functional and imperative approaches to be mixed,
> >>in the local-vs-global approach you suggest, gives you the best of
> >>both worlds.
>
> The one thing that I've been missing for two years now is that nobody
> has yet shown a convincing example of this:
> How to write code that
> a) is (possibly heavily) imperative
> b) has a purely functional interface
> c) allows the compiler to infer (b) even in the presence of (a).
>
> For example, I'd like to use imperative data structures to build
> functional containers. No way, even though I'm prepared to annotate
> all the loops with loop invariants to help the compiler along...

You may be interested in Aleksandar Nanevski's paper "A Modal Calculus
for Effect Handling" (http://www-2.cs.cmu.edu/~aleks/). It is quite
theoretical, but I believe it addresses the issue you raise. Perhaps in
a few years we can hope to get some languages that incorporate these new
ideas.

Benjamin

Joachim Durchholz

Jul 11, 2003, 1:55:01 PM
Benjamin Ylvisaker wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>>The one thing that I've been missing for two years now is that nobody
>>has yet shown a convincing example of this:
>>How to write code that
>>a) is (possibly heavily) imperative
>>b) has a purely functional interface
>>c) allows the compiler to infer (b) even in the presence of (a).
>>
>>For example, I'd like to use imperative data structures to build
>>functional containers. No way, even though I'm prepared to annotate
>>all the loops with loop invariants to help the compiler along...
>
>
> You may be interested in Aleksandar Nanevski's paper "A Modal Calculus
> for Effect Handling" (http://www-2.cs.cmu.edu/~aleks/).

Thanks for the pointer.
A bit too theoretical for my taste, but then this is obviously an open
research issue so I'll have to wait until this all solidifies into a
usable framework.

(I do have my own ideas in this area, but not being a researcher I don't
get paid to follow them up to anything useful. Pity, but...)

Regards,
Jo

Adrian Kubala

Jul 11, 2003, 6:13:27 PM
On Fri, 11 Jul 2003, Joachim Durchholz wrote:
> Frank Buss wrote:
> > if the compiler or interpreter knows that the arguments are integers, then
> > this is the best optimization and it would be fine, if the system could do
> > it automatically, perhaps with an integrated math CAS system for symbolic
> > calculation.
> It's not worth the effort. It would take more effort to teach compilers
> to apply the right transformations, than simply to write down the
> simpler algorithm.

It depends -- you only have to teach the compiler once, whereas the
algorithm will be human-optimized many, many times, by many different
people.

Adrian Kubala

Jul 11, 2003, 6:21:59 PM
On Fri, 11 Jul 2003, Joachim Durchholz wrote:
> At a more fundamental level, assuring that a loop works is just as
> simple or easy as assuring that a recursion Does the Right Thing. Loops
> and recursion are equivalent in so many ways that I'm somewhat surprised
> that recursion is still considered more difficult to teach (and maybe
> it's really harder to understand - for some reasonable definition).

I think it comes down to people simply being uncomfortable with abstract
thought. If you try and reason about recursion the same way you would a
loop, by mentally executing it, it can be quite confusing. It takes
practice to have enough faith in logic to let go and reason comfortably
about abstractions.

Joachim Durchholz

Jul 11, 2003, 7:44:33 PM

That's the usual argument, but it doesn't transfer to this specific
situation.

Let me elaborate.

First, the question is: What is it that we teach the compiler? How to
optimize this particular function, or how to optimize an entire class of
similar functions?

In the latter case, I simply contend that automatic proof generation,
identification of interesting function classes, judging the relative
merit of conflicting optimization goals and other related topics aren't
researched well enough to give conclusive answers here.

In the former case, we see that the algorithm must be known (and
considered important) by the compiler writers. Since these (usually)
work closely with the writers of the standard libraries, they can
just tell the library guys: "Hey, this function is important enough that
we're starting to think about optimizing it specially; please try to find a
better algorithm - that's probably more effective than letting the
compiler optimize it".
In other words, it's simpler to write down the better algorithm than to
teach the compiler how to transform the bad algorithm to the better one...

Regards,
Jo

Terence Hoosen

Jul 11, 2003, 10:26:38 PM
On Fri, 11 Jul 2003 15:20:36 +0000, Marshall Spight wrote:
>
> It strikes me that the thing about #1 that will cause consternation in some
> people isn't functions; it's recursion.
>
> Probably it's been a long time since anyone here had a hard time thinking
> about recursion, but it's a real sticking point in teaching programming.
> I don't think loops with variables produce the same level of difficulty.

Hmm. That's very interesting.

I remember a lecturer of mine who remarked that he believed that recursion
was a more natural, intuitive idea than iteration. But since most teaching,
textbooks and beginners' texts treat iteration as a lower-level concept, it
usually took a post-beginner programmer some effort to 'unlearn' a little
to feel comfortable with recursion.

I still don't know which of recursion or iteration I consider to be the more
intuitive concept.

-Tez

Marshall Spight

Jul 11, 2003, 10:39:44 PM
"Joachim Durchholz" <joachim....@web.de> wrote in message news:bemqon$6pdlt$1...@ID-9852.news.uni-berlin.de...

> Marshall Spight wrote:
> > "Tim Sweeney" <t...@epicgames.com> wrote:
> >
> >>For one example, in an implicitly parallelizing compiler, a call to a
> >>purely functional function can be handed off to another thread; if
> >>many such calls exist in a region of code with no intermingled
> >>side-effects, then N calls can be handed off to N independent CPU's ...
> >
> > I find this possibility fascinating.
>
> Don't hold your breath: home users have little if any use for that.

Everyone keeps telling me that, but I don't believe it. A simple example
(maybe metaphor would be a better word) of the benefits of parallelism in
home hardware: my living room ceiling has 6 lightfixtures in it. When a
bulb burns out, I don't care. When I'm down to three or so, I replace
a bunch of them. Substitute disk drives for bulbs in the above and
you've got an excellent reason for home users to want to use clustering.

You could argue that that's just a distributed file system, but I'd claim
a distributed file system is a clustering application. Clustering is about
more than just sharing computation; it's about sharing all machine
resources. And there's also the reliability that comes with massive
redundancy.

And I'm sure there are plenty of home applications we haven't thought
of yet that only become practical when the home has 2 or 3 orders of
magnitude more computing power than it does now. Or 5 or 6.


> That's just a side note. The *real* problem is that handing off
> computations to a different processor often entails more work than just
> stacking the call. Compilers may try to predict which calls should be
> parallelized and which shouldn't, but mispredictions in that area will
> quickly eat up the advantages gained here.

Yes.


> Besides, you usually don't have N processors, you have at most two.
> Maybe four, in the next five years or so - but I wouldn't bet on that.

I like to think in terms of 10^2 - 10^4 processors. But in the enterprise,
that number could be order 10^5 or even 10^6. I'm more talking about how
many CPUs you have in a building, not in a box. The right level to be
thinking at has always been at the box, but now it's time to think in
terms of the building, or maybe the campus. It's time to think outside
the box. (Sorry, couldn't resist.)


Marshall

Albert Lai

Jul 12, 2003, 12:33:45 AM
"Marshall Spight" <msp...@dnai.com> writes:

> "Albert Lai" <tre...@vex.net> wrote in message news:4uznjm1...@vex.net...

> > There are people who find incarnation #1 obscure - "functions! Now I
> > won't be able to follow which value goes to which argument and how!
> > This is all Greek to me."
>
> It strikes me that the thing about #1 that will cause consternation in some
> people isn't functions; it's recursion.

Yes, five minutes after I posted that, I realized it was not so much
functions as recursion that scared that category of programmers.

> Probably it's been a long time since anyone here had a hard time thinking
> about recursion, but it's a real sticking point in teaching programming.
> I don't think loops with variables produce the same level of difficulty.

There are several ways of teaching programming. One way teaches how
to execute a program. Another way teaches how to solve a problem.

Looping with variables is very easy to execute by hand. Recursion
with parameters is harder to execute by hand, especially when the
human is not told about tail-call shortcuts. Most programming courses
spend all their time on how to execute if-then-else, how to execute a
while loop, and how to execute recursive calls. Therefore looping must
look simpler than recursion.

Recursion is more handy in solving problems. To add up 10 numbers,
you just have to know how to add up 9 numbers and how to account for
the 10th. To find a number in a sorted array of size 16, you just
have to know how to find a number in a sorted array of size 8 and how
to decide which 8. Solving a problem by divide-and-conquer or
reduction is a natural instinct, and we do that all the time, inside
or outside programming. In contrast, to solve a problem by looping,
the approach is either trial-and-error (which ironically is itself a
loop:

0: clue := empty
1: P := "while ? do ?"
2: repeat
3: P := genetic mutation of P using clue
4: r := result of executing P 1 time, 2 times, "many" times
5: d := difference between r and expectation/specification
6: (* as a result there is always a bug when P is executed 0 times *)
7: clue := clue U some conjectures made from d
8: until d is empty
9: return P

) which is tedious, error-prone, and seldom convergent; or refinement
rules based on loop invariants, which are Greek to most people.
Writing a working loop is sink-or-swim in education and dark magic in
practice.

Unfortunately as an artifact of teaching programming by execution and
not problem solving, students do not think of divide-and-conquer when
writing a recursion, but rather waste time worrying about how the
program will be executed, cornering themselves into resorting to the
above trial-and-error method, of which step 4 becomes much harder
indeed. So writing a working recursion looks like even darker magic, due
to misguidance.

Albert Lai

Jul 12, 2003, 12:50:03 AM
Joachim Durchholz <joachim....@web.de> writes:

> The one thing that I've been missing for two years now is that nobody
> has yet shown a convincing example of this:
> How to write code that
> a) is (possibly heavily) imperative
> b) has a purely functional interface
> c) allows the compiler to infer (b) even in the presence of (a).

I implemented the shuffling algorithm discussed in Knuth's TAOCP v2
using the ST monad in Haskell. It goes like this:

Parameter: list (immutable)
Return value: shuffled list (immutable)

so the interface is purely functional.

Implementation:
Make an ST monad that does these:
Transfer the input list into a mutable array
Do all the random swapping as in Knuth's description
Return the array as a list
Run the monad

I think it fits all your requirements.
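
For concreteness, here is a minimal sketch of what such an implementation
can look like (an illustration, not the code described above; it assumes
the random package for StdGen):

{-# LANGUAGE ScopedTypeVariables #-}
import Control.Monad (foldM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray, readArray, writeArray, getElems)
import System.Random (StdGen, mkStdGen, randomR)

-- Purely functional interface: a list and a generator go in, a shuffled
-- list comes out; the mutable STArray never escapes runST.
shuffle :: forall a. StdGen -> [a] -> [a]
shuffle gen0 xs = runST go
  where
    n = length xs
    go :: forall s. ST s [a]
    go = do
      arr <- newListArray (0, n - 1) xs :: ST s (STArray s Int a)
      let step g i = do
            let (j, g') = randomR (0, i) g
            vi <- readArray arr i
            vj <- readArray arr j
            writeArray arr i vj
            writeArray arr j vi
            return g'
      foldM_ step gen0 [n - 1, n - 2 .. 1]   -- the random swaps
      getElems arr

main :: IO ()
main = print (shuffle (mkStdGen 2003) [1 .. 10 :: Int])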

The papers on the ST monad also contain similar examples. The
classical one is the depth-first search algorithm (input tree, output
list of nodes in depth-first order) using an internal mutable array to
remember visited nodes.

Frank Buss

Jul 12, 2003, 4:03:11 AM
Joachim Durchholz <joachim....@web.de> wrote:

> In other words, it's simpler to write down the better algorithm than
> to teach the compiler how to transform the bad algorithm to the better
> one...

Sounds like: It's simpler to write down the assembler code instead of
writing a compiler for a high level language :-)

Fergus Henderson

Jul 12, 2003, 1:13:47 PM
Joachim Durchholz <joachim....@web.de> writes:

>David Basil Wildgoose wrote:
>> An equivalent point is that you can program in the imperative style in
>> Haskell provided that you use the Monad library which is still under
>> research and development.
>>
>> And the difference is?
>
>That Haskell monads aren't under research anymore?
>
>I'm sure that there's a lot of research underway concerning monads, but
>I'd be rather surprised if there were no usable, useful stable subset of
>the monad library.

There is indeed usable, useful, stable support for Monads -- all that
you need for writing programs in the imperative style -- in Haskell 98.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Marshall Spight

Jul 12, 2003, 2:06:59 PM
"Terence Hoosen" <te...@nospam.yahoo.com> wrote in message news:pan.2003.07.12....@nospam.yahoo.com...

>
> I still don't know which of recursion or iteration I consider to be the more
> intuitive concept.

I'm not 100% decided either, but I'm leaning towards iteration as being the
simpler.

It strikes me that this question would be amenable to a modest amount of
cognitive experimentation on non-programmers.


Marshall

Peter G. Hancock

Jul 12, 2003, 3:26:32 PM

>>>>> Marshall Spight wrote (on Fri, 11 Jul 2003 at 16:20):

> Probably it's been a long time since anyone here had a hard
> time thinking about recursion, but it's a real sticking point in
> teaching programming. I don't think loops with variables
> produce the same level of difficulty.

Well, I've still a hard time thinking about recursion, after about
thirty years. What about well-founded recursion versus co-recursion
(catamorphisms versus apomorphisms)? What about guarded recursion
and infinite data structures? etc etc ....

At the level of teaching/learning, my impression (fwiw) is that many
people can find "recursion" extremely baffling, while others just
immediately get it, at least at a basic level. Except for extremely
stupid or extremely _bright_ students, scarcely anyone has problems
with iteration (with loop variables). So I agree with your last
sentence.

I speculate that it is the idea of "defining a thing in terms of
itself" which baffles people -- as perhaps it ought to.

Peter Hancock

Joachim Durchholz

Jul 12, 2003, 4:48:05 PM
Frank Buss wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>
>>In other words, it's simpler to write down the better algorithm than
>>to teach the compiler how to transform the bad algorithm to the better
>>one...
>
> Sounds like: It's simpler to write down the assembler code instead of
> writing a compiler for a high level language :-)

You snipped the most relevant part of my post:


>>That's the usual argument, but it doesn't transfer to this specific
>>situation.

I'd be interested to read why you disagree. This would also help me
reduce the amount of nonsense in my postings :-)

Regards,
Jo

Joachim Durchholz

Jul 12, 2003, 4:56:13 PM
Marshall Spight wrote:
> "Terence Hoosen" <te...@nospam.yahoo.com> wrote

>
>>I still don't know which I think of recursion or iteration to be the more
>>intuitive concept.
>
> I'm not 100% decided either, but I'm leaning towards iteration as being the
> simpler.
>
> It strikes me that this question would be amenable to a modest amount of
> congitive experimentation on non-programmers.

Which concept is easier to teach is an interesting question, but it
would be misleading unless we also got another question answered: which
concept takes longer to learn so that people can use it.
More concretely: Does anybody still remember how long it took him to be
on the lookout for fencepost errors in iterations? Are there similar
standard things to look for in recursion? (Off the top of my head I
remember accidentally writing nonterminating indirect recursion. Are
there other traps?)

Regards,
Jo

Joachim Durchholz

Jul 12, 2003, 5:05:05 PM
Albert Lai wrote:
> I implemented the shuffling algorithm discussed in Knuth's TAOCP v2
> using the ST monad in Haskell. It goes like this:
>
> Parameter: list (immutable)
> Return value: shuffled list (immutable)
>
> so the interface is purely functional.

OK.

> Implementation:
> Make an ST monad that does these:
> Transfer the input list into a mutable array
> Do all the random swapping as in Knuth's description
> Return the array as a list
> Run the monad
>
> I think it fits all your requirements.

Agreed.

I was probably misdirected by the consensus that there's no way to get
rid of the IO monad.
Which leads me to my next questions:
1. Why is it possible to run the ST monad and get a functional result,
but impossible to run the IO monad and get a functional result?
2. Is this difference a specialty of IO, or a specialty of ST?
3. Does running ST use workarounds like unsafePerformIO (which
essentially means that the compiler doesn't really infer that there's no
side effect, it just assumes that the ST implementation is correct)?

Regards,
Jo

Joachim Durchholz

Jul 12, 2003, 5:50:39 PM
Marshall Spight wrote:
> "Joachim Durchholz" <joachim....@web.de> wrote
>
>>Marshall Spight wrote:
>>
>>>"Tim Sweeney" <t...@epicgames.com> wrote:
>>>
>>>>For one example, in an implicitly parallelizing compiler, a call to a
>>>>purely functional function can be handed off to another thread; if
>>>>many such calls exist in a region of code with no intermingled
>>>>side-effects, then N calls can be handed off to N independent CPU's ...
>>>
>>>I find this possibility fascinating.
>>
>>Don't hold your breath: home users have little if any use for that.
>
> Everyone keeps telling me that, but I don't believe it. A simple example
> (maybe metaphor would be a better word) of the benefits of parallelism in
> home hardware: my living room ceiling has 6 lightfixtures in it. When a
> bulb burns out, I don't care. When I'm down to three or so, I replace
> a bunch of them. Substitute disk drives for bulbs in the above and
> you've got an excellent reason for home users to want to use clustering.

There are several relevant differences here:
Lightbulbs are cheap. Hard disks are, by comparison, very expensive
(several hundred dollars vs. a few dozen cents).
It's easy to arrange the cabling so that lightbulbs are indeed
redundant. It's difficult and expensive to have redundant hard disks (you
need a RAID controller, which is still above $100 if you want something
that works, and besides, there's still a single point of failure in the
controller itself - and getting redundant RAID controllers to work is
something that I wouldn't even try. I *could* imagine a redundant
cabling system for lightbulbs, without too much cost overhead.)

What's good is, to a large extent, a matter of cost/performance issues.

Let me give an example: Graphics coprocessors.
It all started with the special-purpose "blitter" chips in Amiga and
Atari ST computers.
Then CPUs got faster, got better bit manipulation instructions, and the
blitters died.
Then we had "graphics accelerator cards". Again, we had faster CPUs, and
we had more and larger fonts, so it wasn't *that* much of an advantage
to have the processor on the graphics card do all the font rendering on
its own - but they survived. Actually this is mostly due to the fact
that graphics needs a lot of memory bandwidth, and that this commodity
has lagged behind other commodities like CPU power. (CPU caches and L2
caches are there just because memory is so slow, not because processor
manufacturers like complicated and error-prone cache coherence protocols...)
Today, we have 3D cards. With hardwired on-board memory with (surprise!)
*huge* memory bandwidth.
As soon as main memory catches up on bandwidth, the 3D cards will disappear.

Moral: Abstract considerations like OS structure and the usefulness of
clustering are far more sensitive to concrete pricing than one would
expect at first sight.

In the case of clustering, the main problem would be the lack of cheap,
generally available short-range high-speed connections between computers
in a cluster.
Serial ATA is a step in that direction. I doubt that it's enough, and I
don't expect serial links to be up to the load of serious clustering for
the next few years. (Which is why I expect that clustering will take
at least five years. There was a tiny little bit of substance behind that
number *g*.)

> You could argue that that's just a distributed file system, but I'd claim
> a distributed file system is a clustering application.

I'd agree with that claim.
It's a somewhat simple clustering application, of course - but it
already shows all the symptoms of clustering. The worst problem with
distributed file systems is the unreliability of the interconnections,
immediately followed by the (comparatively) low bandwidth of the
interconnections.
There are interesting technologies around that solve at least the
bandwidth problem (serial storage bus or something), but these seem to
be too expensive to work in a home environment.

And if bandwidth is too low even for storage technology, I don't think
it will work for the higher demands of CPU-bound computations, which
further restricts the general usefulness of clustering...

> And I'm sure there are plenty of home applications we haven't thought
> of yet that only become practical when the home has 2 or 3 orders of
> magnitude more computing power than it does now. Or 5 or 6.

Even faster frame rates for first-person shooters? (Just kidding)

But I don't think that lack of CPU power is a real problem. The only
serious (i.e. non-gaming) application that I can think of would be
speech and image recognition. Clustering might help a bit with image
recognition, provided that the image can be split into pieces that can
be processed separately (I'm not very hopeful on that account). For
speech recognition, I see little or no gains in splitting up - there's
too much back-and-forth referencing going on in that.
All of this under the assumption that inter-box data transmission offers
at least an order of magnitude less bandwidth than intra-box data
transmission. SMP would be a different story.

>>Besides, you usually don't have N processors, you have at most two.
>>Maybe four, in the next five years or so - but I wouldn't bet on that.
>
> I like to think in terms of 10^2 - 10^4 processors. But in the enterprise,
> that number could be order 10^5 or even 10^6. I'm more talking about how
> many CPUs you have in a building, not in a box. The right level to be
> thinking at has always been at the box, but now it's time to think in
> terms of the building, or maybe the campus. It's time to think outside
> the box. (Sorry, couldn't resist.)

Well, right - but "outside the box" means "less bandwidth". In other
words, you can't parallelize applications that require lots of
intra-application bandwidth. At least not that way.

Regards,
Jo

Simon Helsen

unread,
Jul 12, 2003, 5:52:33 PM7/12/03
to
On Sat, 12 Jul 2003, Peter G. Hancock wrote:

>At the level of teaching/learning, my impression (fwiw) is that many
>people can find "recursion" extremely baffling, while others just
>immediately get it, at least at a basic level. Except for extremely
>stupid or extremely _bright_ students, scarcely anyone has problems
>with iteration (with loop variables). So I agree with your last
>sentence.

We're getting off-topic here, but anyways.

I have heard this point so often, from so many people, but I simply can't
believe it. If students feel more comfortable with iteration than
recursion, then that probably has to do with how they were pre-trained.
And I don't just mean those who already programmed in high school. What
kind of people go into CS, or into disciplines where they have to use
typical programming languages? As far as I can tell, these tend to be
people with some affinity for, or ability in, science and math. Mathematics,
at the high-school level(!), rarely deals with induction proofs or
induction principles. It does, however, deal with tons of iterative
concepts, the most common probably being sum notation. As a result, it seems
to me, students like iteration more because they have been using it over and
over again. Does this mean that iteration is 'simpler' than 'recursion'?
I think that this is at least a dangerous conclusion.
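
For what it's worth - my illustration, not an argument either way - the
sum notation they do know can be read both ways; in Haskell, say:

  -- the "iterative" reading of 0 + 1 + ... + n
  sumIter :: Int -> Int
  sumIter n = foldl (+) 0 [0 .. n]

  -- ... and the "recursive" reading of the very same notation
  sumRec :: Int -> Int
  sumRec 0 = 0
  sumRec n = n + sumRec (n - 1)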

Circumstantial evidence to the contrary comes from linguists who, in
Europe at least, often work with Prolog. They claim that Prolog is
particularly easy for their students to understand, and they testify that
those already familiar with common programming languages tend to have more
difficulties. Prolog, like FPs, is recursive in nature and does not use
imperative loops. You're not going to tell me that arts students are
intrinsically better at grasping recursion.

I conjecture that an early introduction of inductive reasoning in
high-school math would massively increase the tolerance for recursive
programming.

Regards,

Simon


Aaron Denney

unread,
Jul 12, 2003, 6:07:01 PM7/12/03
to

Right. I just wanted to point out that a better example might be
chosen of algorithms to optimize. Oh, and I enjoy being an occasional
smart-ass.

--
Aaron Denney
-><-

Peter "Firefly" Lund

unread,
Jul 12, 2003, 7:45:05 PM7/12/03
to
On Sat, 12 Jul 2003, Joachim Durchholz wrote:

> It all started with the special-purpose "blitter" chips in Amiga and
> Atari ST computers.

"On the Design of Display Processors", T.H.Myer & I.E.Sutherland
Communications of the ACM, vol. 11, no. 6, June 1968.

http://cva.stanford.edu/cs99s/papers/myer-sutherland-design-of-display-processors.pdf

-Peter

Joachim Durchholz

unread,
Jul 12, 2003, 9:46:46 PM7/12/03
to
Aaron Denney wrote:
> Joachim Durchholz wrote:
>
>>Aaron Denney wrote:
>>
>>>Incarnation #2:
>>>triangle n = n * (n + 1) / 2
>>
>>You're beside the point.
>>The discussion is about optimizing a given algorithm, not about
>>replacing an algorithm with an entirely different one.
>
> Right. I just wanted to point out that a better example might be
> chosen of algorithms to optimize.

Give one :-)

> Oh, and I enjoy being an occasional smart-ass.

:-)

Regards,
Jo

Joachim Durchholz

unread,
Jul 12, 2003, 9:54:41 PM7/12/03
to
Simon Helsen wrote:
> Mathematics,
> at the high-school level(!), rarely deals with induction proofs or
> induction principles. They do however deal with tons of iterative
> concepts, the most common probably the sum-notation. As a result, it seems
> to me, they like iteration more because they have been using it over and
> over again. Does this mean that iteration is 'simpler' than 'recursion'?
> I think that this is at least a dangerous conclusion.

Well, it may not be "simpler" - but easier to understand.
Which makes it a relevant topic regardless.

> I conjecture that an early introduction of inductive reasoning in
> high-school math would massively increase the tolerance for recursive
> programming.

Agreed - if only because then there would be a bias towards recursion.
I don't think that recommending a "more recursive mathematics at school"
is going to have much effect, though.
The advantages of recursion over iteration in daily use should be much
more convincing. A lot of math is unintuitive, yet it is taught because
it is useful.

Regards,
Jo

Anton van Straaten

unread,
Jul 13, 2003, 12:47:41 AM7/13/03
to
Joachim Durchholz wrote:
> Which concept is easier to teach is an interesting question, but it
> would be misleading unless we also got another question answered:
> which concept takes longer to learn so that people can use it.

What about which concept has the better payoff once you're familiar with it,
or whether either concept allows the other to subsequently be learned more
easily? It does seem that what is learned first has a huge impact on what
is easily learned subsequently. Which is exactly why the majority of those
who learn iteration first claim that it's the "natural" approach, and ditto
for recursion.

If the discussion is about teaching programming to people who plan never to
learn any advanced concepts, then it could make sense to argue about whether
iteration or recursion is the better choice - and it's still arguable. But
if iteration is taught at the expense of recursion, doesn't that perpetuate
what Dijkstra complained about, producing students "mentally mutilated
beyond hope of regeneration"? A slight exaggeration, perhaps, but the
underlying point stands.

Anton

Aaron Denney

unread,
Jul 13, 2003, 1:37:48 AM7/13/03
to
In article <beqdpn$859sf$1...@ID-9852.news.uni-berlin.de>, Joachim Durchholz wrote:
> Aaron Denney wrote:
>> Joachim Durchholz wrote:
>>
>>>Aaron Denney wrote:
>>>
>>>>Incarnation #2:
>>>>triangle n = n * (n + 1) / 2
>>>
>>>You're beside the point.
>>>The discussion is about optimizing a given algorithm, not about
>>>replacing an algorithm with an entirely different one.
>>
>> Right. I just wanted to point out that a better example might be
>> chosen of algorithms to optimize.
>
> Give one :-)

fact n

Marcin 'Qrczak' Kowalczyk

unread,
Jul 13, 2003, 2:29:47 AM7/13/03
to
Joachim Durchholz wrote:

> 1. Why is it possible to run the ST monad and get a functional result,
> but impossible to run the IO monad and get a functional result?

Because ST provides only mutable state, which can be "localized away" so
this fact is not visible from outside. The state can't persist between
computations nor be used outside the computation which created it (the type
system ensures that), so the result of runST is necessarily pure.

IO provides I/O in addition to non-standard mutable variables. Any IO
computation could perform I/O while having the same type, so there can be
no runIO which distinguishes between computations which are safe and unsafe
to wrap. Also computations can refer to IO-mutable variables which they
didn't create, so even this would be unsafe to wrap.

> 2. Is this difference a specialty of IO, or a specialty of ST?

Perhaps ST. Its design with those s type parameters is funny, and just
there to enable a safe runST.
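
A tiny illustration of what that buys you - my sketch, using GHC's
Control.Monad.ST and Data.STRef:

  import Control.Monad.ST (runST)
  import Data.STRef (newSTRef, readSTRef, modifySTRef)

  -- runST :: (forall s. ST s a) -> a
  -- The rank-2 type gives each runST call its own 's', so references
  -- created inside cannot leak out.

  sumTo :: Int -> Int            -- pure interface, mutable cell inside
  sumTo n = runST $ do
      acc <- newSTRef 0
      mapM_ (\i -> modifySTRef acc (+ i)) [0 .. n]
      readSTRef acc

  -- By contrast,  runST (newSTRef 0)  is rejected by the type checker,
  -- because the STRef's 's' would have to escape the forall.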

> 3. Does running ST use workarounds like unsafePerformIO (which
> essentially means that the compiler doesn't really infer that there's no
> side effect, it just assumes that the ST implementation is correct)?

The implementation of ST is necessarily compiler-specific, so the question
doesn't make sense - the compiler doesn't make assumptions about ST, it
provides it.

It probably could be implemented using IO and unsafePerformIO, but for
example in GHC it's more like the other way around.

Using ST is safe - you can't create impure functions that look pure,
unlike with unsafePerformIO.

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Frank Buss

unread,
Jul 13, 2003, 11:27:51 AM7/13/03
to
Joachim Durchholz <joachim....@web.de> wrote:

>> Sounds like: It's simpler to write down the assembler code instead of
>> writing a compiler for a high level language :-)
>

> I'd be interested to read why you disagree.

The main reason is that an optimization in the compiler has to be
implemented only once, in the compiler, and not in every program you write.

Another reason is that the program is easier to read, because if you use
a hand-optimized algorithm, it is not obvious what is going on: when you
write "n * (n + 1) / 2", it is not obvious to the non-mathematician that
you mean "foldl1 (+) [1..n]" (Haskell syntax; "Sum[i, {i, 1, n}]" in
Mathematica syntax).

It is possible (Mathematica shows me the right formula as a result for
the Sum expression), so why not implement it? Compilers already do
some simple optimizations, like calculating constants at compile time,
loop unrolling, function inlining, tail recursion elimination, etc. And
some languages look like they have implemented symbolic manipulation
already, like Mozart, where you can write that x<y and y<10, and the
system can detect an error if, say, x=11.

But I think it is not easy to implement. For example, the compile
time could be very long, or the compiler could hang if a mathematical
problem doesn't terminate - perhaps because you specified a recursive
function for calculating the n-th prime number and the system tries to
find an explicit formula :-)

Thomas Lindgren

unread,
Jul 13, 2003, 12:30:02 PM7/13/03
to

Frank Buss <f...@frank-buss.de> writes:

> It is possible (Mathematica shows me the right formula as a result for
> the Sum-expression), so why not implement it? Compilers are already doing
> some simple optimizations, like calculating constants at compile-time,
> loop-unrolling, function inlining, tail recursion elimination etc. And
> some languages looks like they have implemented symbolic manipulation
> already, like Mozart, where you can write, that x<y and y<10 and the
> system can detect an error, if say x=11.

The basic problem with high-level transformations is that they often
are brittle: equivalent formulations might generate dramatically
different code with integer factor performance differences. A
well-known example is Fortran vectorization.

When faced with this, programmers adjust by coding for the optimizer,
rather than expressing the problem. This can be very undesirable,
particularly when a new, improved optimizer comes along. A classic
example is trying to vectorize C code where array references have been
turned into pointer operations for the sake of performance. In some
cases, though, it might be desirable: in particular, when the
idioms used by programmers actually lead to clearer code. E.g., a
vectorizable Fortran program might be easier to understand, reuse,
maintain and extend than the performance-obfuscated equivalent.

A few compilers try to provide hints as to why optimization failed
(CMUCL and descendants are the ones I can think of at the moment),
which is probably necessary with an industrial strength complex
optimizer. As far as I know, CMUCL optimizes locally, with compiler
feedback mainly on what types or subtypes could be inferred, and
(importantly) what is needed to get a better result. A good high-level
optimizer would target a somewhat different issue, but probably need the
same pedagogical mindset.

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Graham Matthews

unread,
Jul 10, 2003, 5:23:12 PM7/10/03
to
In article <4uznjm1...@vex.net>, Albert Lai wrote:
> > Example: A function that computes 0 + 1 + 2 + ... + n, given natural
> > number n.
> >
> > Incarnation #0:
> >
> > fun triangle n =
> > var s := 0, i := 0
> > while i <= n do
> > s := s + i; i := i + 1
> > end while
> > s

Given single assignment semantics this is a functional program. That's
pretty much exactly how you would write it in Sisal or SAC.

graham

Frank Buss

unread,
Jul 13, 2003, 1:43:45 PM7/13/03
to
Thomas Lindgren <***********@*****.***> wrote:

> A few compilers try to provide hints as to why optimization failed
> (CMUCL and descendants are the ones I can think of at the moment),
> which is probably necessary with an industrial strength complex
> optimizer. As far as I know, CMUCL optimizes locally, with compiler
> feedback mainly on what types or subtypes could be inferred, and
> (importantly) what is needed to get a better result. A good high-level
> optimizer would target a somewhat different issue, but probably need
> the same pedagogical mindset.

A nice idea would be to enhance the editor, like in Smalltalk or in
Mathematica. I think it is an anachronism to use pure text files for
programming - a holdover from the time when computer access was limited.
Modern IDEs, for example for Java, can show you the available methods
while you type an identifier. Why not use this for optimization?

Like in Mathematica, you type in a formula or function and the IDE tries
to optimize it, if you want it to. You write the program in an interactive
session with the IDE. The original formula could perhaps have a different
color than the optimized version, and you could switch views, hiding all
optimized output when reading the code or showing it for hand-optimization.
I don't know the details, but I think you can see what my vision is.

Marcin 'Qrczak' Kowalczyk

unread,
Jul 13, 2003, 2:22:59 PM7/13/03
to
Frank Buss wrote:

> A nice idea would be to enhance the editor, like in Smalltalk or in
> Mathematica. I think it is a anachronism to use pure text files for
> programming, from the time when computer access was limited.

I wouldn't say anachronism. It's essential that the program *can* be
expressed as pure text files, so you can use a lot of language-independent
tools which have been developed over the years: for version control, diff & patch,
archiving and packaging for installation, searching, putting sources on the
Web or sending parts of the program by mail, making tools which extract
documentation, making temporary copies of parts of sources to experiment
with a new design without losing the old version, printing parts of source,
converting charsets, etc.

It's unreasonable to expect to have all this reimplemented by an IDE for
each new language with quality comparable to the mature universal tools we
have had for years. I hate it when a language tries to force me to forget
about the rest of the world and use only tools designed especially for this
language. This is my primary complaint about Smalltalk.

Smart IDE's are ok if somebody likes them, but they won't replace the
ability to represent the program as text and non-interactive compilers.

Thomas Lindgren

unread,
Jul 13, 2003, 2:23:20 PM7/13/03
to

Frank Buss <f...@frank-buss.de> writes:

> Like in Mathematica you type in a formula or function and the IDE tries
> to optimize it, if you want it. You write the program in an interactive
> session with the IDE. The original formula perhaps can have a different
> color than the optimized version and you can switch the views, filtering
> all optimized output for reading the code or enabling it for hand-
> optimize. I don't know the details, but I think you can see what my
> vision is.

Eclipse might be a good IDE starting point.

Anyway, I would add one more thing to that: some way of telling the
programmer "how close they are to the solution", not just that they
have found it. Another approach might be to use templates of some sort
(e.g., something like Squiggol?), or perhaps generate code from
diagrams, if we want to get UMLish.

(As an aside, I am coming to the belief that the next battle will be
full productivity environments, not compilers. My Java-programming
friends already seem to find a world without powerful IDEs, advanced
debuggers, graphical modelling, etc etc, to be a bit quaint.
Personally, I think emacs and a powerful language still comfortably
beats Java + flashy IDE, but who can say in five years?)

Frank Buss

unread,
Jul 13, 2003, 3:15:14 PM7/13/03
to
Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> wrote:

> I wouldn't say anachronism. It's essential that the program *can* be
> expressed as pure text files, so you can use a lot of
> language-independent tools which were developed for years

Good point. I'm using CVS, too (see for example my JHex game, checked in to
CVS at Sourceforge: http://jhex.sourceforge.net/ ), and external editors
(though the Eclipse IDE for Java), and it would be difficult to use a
Unix-style diff on binary files or UML diagrams and the like. Perhaps saving
them in XML is a solution, but you need new tools to show a useful diff and a
special editor to show the intended content instead of raw XML. Perhaps it
could be a universal editor or diff tool with the help of XSLT
transformations and DTDs.

Joachim Durchholz

unread,
Jul 13, 2003, 4:22:04 PM7/13/03
to
Frank Buss wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>
>>>Sounds like: It's simpler to write down the assembler code instead of
>>>writing a compiler for a high level language :-)
>>
>>I'd be interested to read why you disagree.
>
>
> The main reason is, that optimizing the compiler means you have to
> optimize only once the compiler and not every program you write.

Then you didn't really understand the point I made: that optimizing the
compiler to detect "1+2+3+...+n" and optimize it as "n*(n+1)/2" would
just solve that specific foldl1 expression and nothing else.
In which case it's better to optimize the library routine than to make
the compiler optimize it. Adding an optimization that works just for a
single case isn't really worth the effort :-)
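
To illustrate what "optimize the library routine" can mean in, say, GHC
terms - a sketch of mine, not an existing library: a RULES pragma lets
the library author state that one rewrite once, without teaching the
compiler anything about series in general:

  module Triangle (triangle) where

  -- reference definition, kept out of the inliner so the rule can fire
  triangle :: Int -> Int
  triangle n = foldl (+) 0 [0 .. n]
  {-# NOINLINE triangle #-}

  -- the library author's hand-proved rewrite; the compiler just applies it
  {-# RULES "triangle/closed-form"
        forall n. triangle n = n * (n + 1) `div` 2
    #-}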

Regards,
Jo

Joachim Durchholz

unread,
Jul 13, 2003, 4:24:29 PM7/13/03
to
Graham Matthews wrote:
> In article <4uznjm1...@vex.net>, Albert Lai wrote:
>
>>>Example: A function that computes 0 + 1 + 2 + ... + n, given natural
>>>number n.
>>>
>>>Incarnation #0:
>>>
>>> fun triangle n =
>>> var s := 0, i := 0
>>> while i <= n do
>>> s := s + i; i := i + 1
>>> end while
>>> s
>
>
> Given single assignment semantics this is a functional program.

Er... the assignments to s and i are incompatible with the assignments
to s and i in the loop body, so I don't understand what you mean here.

Regards,
Jo

Frank Buss

unread,
Jul 13, 2003, 5:57:42 PM7/13/03
to
Joachim Durchholz <joachim....@web.de> wrote:

> Then you didn't really understand the point I made: that optimizing the
> compiler to detect "1+2+3+...+n" and optimize it as "n*(n+1)/2" would
> just solve that specific foldl1 expression and nothing else.

Sorry, we were talking at cross-purposes. I agree that it is not very
useful to optimize every single special case in the compiler, but rather to
implement some general optimizations. For example, it would be nice if the
compiler could detect all arithmetic series (
http://mathworld.wolfram.com/ArithmeticSeries.html ), and I'm sure there are
more general mathematical algorithms for detecting other series without
special handling for every case.

Joachim Durchholz

unread,
Jul 13, 2003, 8:36:52 PM7/13/03
to
Frank Buss wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>>Then you didn't really understand the point I made: that optimizing the
>>compiler to detect "1+2+3+...+n" and optimize it as "n*(n+1)/2" would
>>just solve that specific foldl1 expression and nothing else.
>
> Sorry, we talked at cross-purposes. I agree, it is not very useful to
> optimize every single special case in the compiler, but to implement some
> general optimizations.

OK.

> For example it would be nice, if the compiler can
> detect all arithmetic series (
> http://mathworld.wolfram.com/ArithmeticSeries.html ) and I'm sure there are
> more general mathemtical algorithms for detecting more series without
> special handling for every case.

In 20 years of programming, I have seen (much less programmed) about a
dozen or so arithmetic series.
I suspect there are other general patterns that are more useful :-)

Regards,
Jo

gr...@cs.uwa.edu.au

unread,
Jul 14, 2003, 5:44:50 AM7/14/03
to
Albert Lai <tre...@vex.net> wrote:
: "Marshall Spight" <msp...@dnai.com> writes:
:> Probably it's been a long time since anyone here had a hard time thinking
:> about recursion, but it's a real sticking point in teaching programming.
:> I don't think loops with variables produce the same level of difficulty.

: There are several ways of teaching programming. One way teaches how
: to execute a program. Another way teaches how to solve a problem.

My experience is that recursion _as_an_abstract_concept_ comes more
easily to those learning a pure functional language (Haskell)
as-a-first-language than those learning an OO one (Java), and more
easily the more equationally/declaratively they are taught to think
about it - a.k.a. problem-solving.

Conversely, the imperative students usually find it easier to get
started with using it _as_a_tool_, but later get tripped up when
they realise that, even having used it in a few programs, they
don't really understand it.

But I've only a few years of experience teaching these languages;
others will probably have a broader view.

-Greg

Richard Bos

unread,
Jul 14, 2003, 8:48:03 AM7/14/03
to
In article <4116211.d...@qrnik.knm.org.pl>,

Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> wrote:

> Frank Buss wrote:
>
> > A nice idea would be to enhance the editor, like in Smalltalk or in
> > Mathematica. I think it is a anachronism to use pure text files for
> > programming, from the time when computer access was limited.
>
> I wouldn't say anachronism. It's essential that the program *can* be
> expressed as pure text files, so you can use a lot of language-independent
> tools which were developed for years: for version control, diff & patch,

Apart from that, text files are suitable, in a way that other formats
usually are not, for hauling through a pretty-printer and dumping to
hardcopy _without any loss of information_. Yes, I do desk-checks; I am
given to understand that I am old-fashioned, but I really don't
understand why.

Richard

Graham Matthews

unread,
Jul 14, 2003, 9:56:31 AM7/14/03
to

Incompatible? I don't know what you mean by that ...

In Sisal this code looks something like this (can't remember the exact
syntax),

initial
s = 0;
i = 0;
in while i <= n do
new s = s + i;
new i = i + 1;
yields s

Now the "new"s are actually redundant since the compiler can easily
verify that you are only doing single assignment in this loop.

Remove the "new"s and you pretty much have the original code.

graham

Richard Bos

unread,
Jul 14, 2003, 10:07:27 AM7/14/03
to
In article
<Pine.SOL.4.44.030712...@crete.uwaterloo.ca>,
Simon Helsen <she...@computer.org> wrote:

> We're getting off-topic here, but anyways.
>
> I have heard this point so often by so many people, but I simply can't
> beleive that. If students feel more comfortable with iteration than
> recursion, than that has probably to do with how they were pre-trained.
> And I don't just mean those who already programmed in high-school. What
> are the kind of people going into CS or disciplines where they have to use
> typical programming languages? As far as I can tell, these tend to be
> people with some affection or ability in science and math. Mathematics,
> at the high-school level(!), rarely deals with induction proofs or
> induction principles. They do however deal with tons of iterative
> concepts, the most common probably the sum-notation. As a result, it seems
> to me, they like iteration more because they have been using it over and
> over again. Does this mean that iteration is 'simpler' than 'recursion'?
> I think that this is at least a dangerous conclusion.

It is indeed dangerous to generalise. However, in this context it is
perhaps significant that I _have_ seen logical systems that did away
with recursion, but never mathematical systems that got rid of iteration.

Richard

Joachim Durchholz

unread,
Jul 14, 2003, 3:45:49 PM7/14/03
to
Graham Matthews wrote:
> Joachim Durchholz <joachim....@web.de> wrote:
>
>>Graham Matthews wrote:
>>
>>>In article <4uznjm1...@vex.net>, Albert Lai wrote:
>>>
>>>>>Incarnation #0:
>>>>>
>>>>> fun triangle n =
>>>>> var s := 0, i := 0
>>>>> while i <= n do
>>>>> s := s + i; i := i + 1
>>>>> end while
>>>>> s
>>>
>>>
>>>Given single assignment semantics this is a functional program.
>>
>>Er... the assignments to s and i are incompatible with the assignments
>>to s and i in the loop body, so I don't understand what you mean here.
>
> Incompatible? I don't know what you mean by that ...

Aww... I shouldn't be writing so late in the evening.
I meant to say that the assignments in the loop body are not compatible
with single-assignment semantics.

> In Sisal this code looks something like this (can't remember the exact
> syntax),
>
> initial
> s = 0;
> i = 0;
> in while i <= n do
> new s = s + i;
> new i = i + 1;
> yields s
>
> Now the "new"s are actually redundant since the compiler can easily
> verify that you are only doing single assignment in this loop.
>
> Remove the "new"s and you pretty much have the original code.

Hmm... if "new" does what I think, this is indeed single-assignment code.

Regards,
Jo

Tim Sweeney

unread,
Jul 15, 2003, 12:05:41 AM7/15/03
to
> >>For one example, in an implicitly parallelizing compiler, a call to a
> >>purely functional function can be handed off to another thread; if
> >>many such calls exist in a region of code with no intermingled
> >>side-effects, then N calls can be handed off to N independent CPU's ...
> >
> > I find this possibility fascinating.
>
> Don't hold your breath: home users have little if any use for that.
> Unless they are heavily into gaming, in which case the CPU load is most
> likely split between mainboard and graphics card, which is a scenario
> that's quite the opposite of parallelizing function calls.

Lots of bulk data processing operations besides low-level rendering
are highly parallelizable. Examples from game development: video
codecs, audio codecs (a significant CPU hit when you're playing 20
simultaneous MP3/OGG-format ambient sound and special effects
streams), physics solvers, collision detection algorithms, scene
traversal algorithms. Many of these algorithms are scalable to the
extent that if we had significantly more CPU power, we could make
games that are significantly more interesting and impressive.

> That's just a side note. The *real* problem is that handing off
> computations to a different processor often entails more work than just
> stacking the call. Compilers may try to predict which calls should be
> parallelized and which shouldn't, but mispredictions in that area will
> quickly eat up the advantages gained here.

Agreed. On current popular architectures, the overhead of thread
communication is such that you can only benefit from handing off
operations of 10,000+ CPU cycles to threads. In this case, some form of
explicit parallelism is certainly needed, because it's surprisingly
hard to write a code generator that can guess, from looking at a piece
of code, how much time it is likely to take, even to within several
orders of magnitude. A reasonably experienced programmer, on the other
hand, is far better at judging such things.

That practical parallelism has to be explicit for the time being
doesn't negate the merits of this approach, though. In C++, to split
up a complex algorithm into multiple threads, I not only have to
explicitly create those threads, I have to write and debug a complex
algorithm that is free of deadlocks and sharing violations, which is
terribly difficult and error-prone in the presence of the kind of
nondeterminism present in typical multithreaded C++ code.

On the other hand, consider a language where one can specify that a
particular function is globally side-effect free and have the
typechecker verify that it really is. Then, when you annotate the
program text with parallelism hints, the compiler can verify that those
hints can be safely applied. Thus, that a program is deadlock-free can
be mechanically verified in the same way that a typechecker verifies
that a program is type-mismatch-free.
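
To make that concrete in Haskell terms - a sketch of mine, using GHC's
`par` combinator from Control.Parallel: because the annotated
expressions are pure, the hint can only affect performance, never the
result.

  import Control.Parallel (par)

  -- spark the evaluation of (f x) on another capability while this
  -- thread evaluates (g y); correct whether or not the spark is taken
  parPair :: (a -> b) -> (c -> d) -> a -> c -> (b, d)
  parPair f g x y =
      let fx = f x
          gy = g y
      in fx `par` (gy `seq` (fx, gy))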

> Besides, you usually don't have N processors, you have at most two.
> Maybe four, in the next five years or so - but I wouldn't bet on that.

> Besides, even with SMP, if two processors share the same memory bus,
> memory latencies tend to eat up another gob of the performance advantage.
>
> In principle, it's all a great idea. In practice, there are so many
> little hurdles that take away a percent here, a percent there that it's
> not so great anymore.

In 1999, this was unquestionably true. In 2003, this is basically
true as far as practical consumer software deployment goes. By 2005,
it will be mostly false thanks to simultaneous multithreading, and by
2007 it will be rendered a complete lie :-) by multicore CPUs with
very large caches.

> >>But allowing both functional and imperative approaches to be mixed, in
> >>the local-vs-global approach you suggest, gives you the best of both
> >>worlds.
>
> The one thing that I've been missing for two years now is that nobody
> has yet shown a convincing example of this:
> How to write code that
> a) is (possibly heavily) imperative
> b) has a purely functional interface
> c) allows the compiler to infer (b) even in the presence of (a).
>
> For example, I'd like to use imperative data structures to build
> functional containers. No way, even though I'm prepared to annotate all
> the loops with loop invariants to help the compiler along...

I've solved the much more modest problem of checking side-effect-free
function declarations which do imperative things internally, by using
something along the lines of "region types" for local mutable objects,
and requiring some explicit source annotations of pointer regions and
scopes of effects. This approach works and is relatively easy because
whatever imperative stuff happens inside your function invocation is
guaranteed to have vanished when your function returns, i.e. by
disallowing closure-capture and so on.

Regarding your example here, that's a great example and a much harder
problem, which might serve as a torture test for a language claiming to
fully mix functional and imperative programming safely.

Perhaps if you could prove (in the Curry-Howard sense) that your
module is observationally equivalent (in the Leibniz equality sense)
to a purely functional module type satisfying the same identities,
then the compiler could accept it as being an instance of that purely
functional module. A bunch of issues immediately come to mind:

- From an operational point of view, this approach has got to be
regarded as a coercion that introduces things such as synchronization
primitives into the module's function. Just going through and proving
that each function in the module satisfies its invariants invokes a
bunch of hidden assumptions, such as that the code is executed in a
single-threaded manner.

- But proving that it's deadlock-free, deadlocks being another
manifestation of _|_, would seem to impose additional restrictions on
what is allowed in the module. If each invocation of a
functional-datatype-with-imperative-implementation requires
single-threaded invocation, then any sort of polymorphic behaviour in
such datatypes could lead to deadlocks that are impossible to detect
at compile time.

Overall, this sounds promising, but whether or not it becomes
practical will depend on the severity of the restrictions, because
functional-modules-implemented-imperatively wouldn't likely be
accepted if it introduced undetectable deadlocks in code.

Joachim Durchholz

unread,
Jul 15, 2003, 5:37:52 AM7/15/03
to
Tim Sweeney wrote:
>>>>For one example, in an implicitly parallelizing compiler, a call to a
>>>>purely functional function can be handed off to another thread; if
>>>>many such calls exist in a region of code with no intermingled
>>>>side-effects, then N calls can be handed off to N independent CPU's ...
>>>
>>>I find this possibility fascinating.
>>
>>Don't hold your breath: home users have little if any use for that.
>>Unless they are heavily into gaming, in which case the CPU load is most
>>likely split between mainboard and graphics card, which is a scenario
>>that's quite the opposite of parallelizing function calls.
>
> Lots of bulk data processing operations besides low-level rendering
> are highly parallelizable. Examples from game development: video
> codecs, audio codecs (a significant CPU hit when you're playing 20
> simultaneous MP3/OGG-format ambient sound and special effects
> streams), physics solvers, collision detection algorithms, scene
> traversal algorithms. Many of these algorithms are scalable to the
> extent that if we had significantly more CPU power, we could make
> games that are significantly more interesting and impressive.

Hmm... I'm not sure that there's so much difference here.
But, right, even when graphics load is shoved off to the graphics CPU on
the 3D board, current-day CPUs are still heavily loaded.
I'm not sure that these things can be parallelized implicitly.
Ultimately, all these computations end with side effects that must be
coordinated. In other words, splitting off the side-effect-free parts of
this stuff entails a lot of coordination overhead.
It's probably still worth the effort, given high enough performance
requirements :-)

> That pratical parallelism has to be explicit for the time being
> doesn't negate the merits of this approach though. In C++, to split
> up a complex algorithm into multiple threads, I not only have to
> explicitly create those threads, I have to write and debug a complex
> algorithm that is free of deadlocks and sharing violations, which is
> terribly difficult and error-prone in the presence of the kind of
> nondeterminism present in typical multithreaded C++ code.
>
> On the other hand, consider a language where one can specify that a
> particular function is globally side-effect free and have the
> typecheck verify that it really is. Then when you annotate the text
> program with parallelism hints, the compiler can verify that those
> hints can be safely applied. Thus that a program is deadlock-free can
> be mechanically verified in the same way that a typechecker verifies
> that a program is type-mismatch-free.

Fully agreed.

>>Besides, you usually don't have N processors, you have at most two.
>>Maybe four, in the next five years or so - but I wouldn't bet on that.
>>Besides, even with SMP, if two processors share the same memory bus,
>>memory latencies tend to eat up another gob of the performance advantage.
>>
>>In principle, it's all a great idea. In practice, there are so many
>>little hurdles that take away a percent here, a percent there that it's
>>not so great anymore.
>
> In 1999, this was unquestionably true. In 2003, this is basically
> true as far as practical consumer software deployment goes. By 2005,
> it will be mostly false thanks to symmetric multithreading, and by
> 2007 it will be rendered a complete lie :-) by multicore CPU's with
> very large caches.

I'm quite sceptical about this. Personally, I find the following
scenario more likely: Most software is imperative, which makes
parallelization Bl**dy Hard, so people will see that SMP just makes
their Windows boxes less reliable, and the interest in SMP drops back to
zero. (That's in the consumer area, but consumer hardware is what makes
the technology cheap enough for everyday use.)
Well, we'll see. Intel's "virtual multiprocessor" technology is a good
trial balloon. (Actually, I've already seen the first complaints that
neither Windows nor Linux work well with it... not in the OS area,
which is already thoroughly multi-threaded, but certain common
applications have started to fail. To my surprise, no complaints about
device drivers - FWIW, I just saw a synopsis of common problems a few
months ago and haven't read about the topic since.)

>>>>But allowing both functional and imperative approaches to be mixed, in
>>>>the local-vs-global approach you suggest, gives you the best of both
>>>>worlds.
>>
>>The one thing that I've been missing for two years now is that nobody
>>has yet shown a convincing example of this:
>>How to write code that
>>a) is (possibly heavily) imperative
>>b) has a purely functional interface
>>c) allows the compiler to infer (b) even in the presence of (a).
>>
>>For example, I'd like to use imperative data structures to build
>>functional containers. No way, even though I'm prepared to annotate all
>>the loops with loop invariants to help the compiler along...
>

> [...]


> Regarding example here, that's a great example and a much harder
> problem, which might serve as a torture test of a language claiming to
> fully mix functional and imperative programming safely.

:-)

> Perhaps if you could prove (in the Curry-Howard sense) that your
> module is observationally equivalant (in the Leibniz equality sense)
> to a purely functional module type satisfying the same identities,
> then the compiler could accept it as being an instance of that purely
> functional module.

Hmm... right. *That* would be an interesting approach, and might be
simpler than directly proving that (for example) an imperative module is
referentially transparent.

> A bunch of issues immediately come to mind:
>
> - From an operational point of view, this approach has got to be
> regarded as a coercion that introduces things such as synchronization
> primitives into the module's function. Just going through and proving
> that each function in the module satisfies its invariants invokes a
> bunch of hidden assumptions, such as that the code is executed in a
> single-threaded manner.

I *think* that locking can be added by the compiler, automatically
applying locks just as far as needed and unlocking stuff as early as
possible.
This requires that the imperative code has its semantics fully defined
using preconditions and postconditions (which also means that the
compiler should be able to infer the routine bodies automatically unless
the bodies use loops and/or recursion, but that's another aspect of how
to embed imperative code into a functional language).
Given full assertions for every routine, the compiler can automatically
infer which objects are needed unchanged. Actually, if there's a
sequence "a := foo (x); b ();" and b somehow mentions a in its
preconditions, the compiler can infer that whatever a is referring to
must be locked until the call to b is through - unless b mentions these
objects in its postconditions and some yet later call refers to these
objects again.
The details are a bit intricate, and I once started to draw up a full
ruleset but didn't quite finish it. And I didn't even start a proof that
the locks thus placed are enough to protect the semantics of the
routines, that's slightly beyond my current expertise :-(

> - But proving that it's deadlock-free, deadlocks being another
> manifestation of _|_, would seem to impose additional restrictions on
> what is allowed in the module.

Yes, that's a quite nasty can of worms.

> If each invocation of a
> functional-datatype-with-imperative-implementation requires
> single-threaded invocation, then any sort of polymorphic behaviour in
> such datatypes could lead to deadlocks that are impossible to detect
> at compile time.

I'm not sure how this could happen.
Could you give an example?

> Overall, this sounds promising, but whether or not it becomes
> practical will depend on the severity of the restrictions, because
> functional-modules-implemented-imperatively wouldn't likely be
> accepted if it introduced undetectable deadlocks in code.

Particularly if the locking is applied automatically...

Regards,
Jo

Graham Matthews

unread,
Jul 15, 2003, 10:04:06 AM7/15/03
to
Joachim Durchholz <joachim....@web.de> wrote:
> > In Sisal this code looks something like this (can't remember the exact
> > syntax),
> >
> > initial
> > s = 0;
> > i = 0;
> > in while i <= n do
> > new s = s + i;
> > new i = i + 1;
> > yields s
> >
> > Now the "new"s are actually redundant since the compiler can easily
> > verify that you are only doing single assignment in this loop.
> >
> > Remove the "new"s and you pretty much have the original code.
>
> Hmm... if "new" does what I think, this is indeed single-assignment code.

"new" distinguishes between the value of a variable V in this iteration,
and the value of V at the start of the next iteration.

graham

Tim Sweeney

unread,
Jul 15, 2003, 1:14:09 PM7/15/03
to
> > If each invocation of a
> > functional-datatype-with-imperative-implementation requires
> > single-threaded invocation, then any sort of polymorphic behaviour in
> > such datatypes could lead to deadlocks that are impossible to detect
> > at compile time.
>
> I'm not sure how this could happen.
> Could you give an example?

Say you've implemented functional-style arrays using imperative
techniques (heaps, mutability, sequential execution), and have somehow
proven to the compiler that your implementation is observationally
equivalent to a functional implementation. What if a given function
(for example, array concatenation) contains a loop that leaves some
linked lists temporarily inconsistent, even though it's guaranteed to
fix everything up by the time it exits? Wouldn't the code
generator have to then insert locks at entry/exit from the function to
assure atomicity? What if your imperative implementation had to call
some user-defined function passed into it as a parameter, and it
called that function at a point when it left a linked list in an
inconsistent state? What if that user-defined function happens to
access the same data structure that is currently in an inconsistent
state?

The real question is: what limitations would a type system have to
impose on programs in order for it to admit
purely-functional-constructs-with-opaque-imperative-implementations
without introducing potential deadlocks or nondeterminism? (An open
question, I think.)

Joachim Durchholz

unread,
Jul 15, 2003, 8:14:22 PM7/15/03
to
Tim Sweeney wrote:
>>>If each invocation of a
>>>functional-datatype-with-imperative-implementation requires
>>>single-threaded invocation, then any sort of polymorphic behaviour in
>>>such datatypes could lead to deadlocks that are impossible to detect
>>>at compile time.
>>
>>I'm not sure how this could happen.
>>Could you give an example?
>
> Say you've implemented functional-style arrays using imperative
> techniques (heaps, mutability, sequential execution), and have somehow
> proven to the compiler that your implementation is observationally
> equivalent to a functional implementation. What if a given function
> (for example, array concatenation) contains a loop that leaves some
> linked lists temporarily inconsistent, even though it's guaranteed to
> fix everything up by the time it exits? Wouldn't the code
> generator have to then insert locks at entry/exit from the function to
> assure atomicity?

Right.
Though it would be relatively simple to make these locks: just lock all
data that may be accessed by the function. This is decidable, and
amounts to computing a closure. (Warning: I'm deliberately skipping
efficiency issues here; I can see several ways to address them, but I'm
not sure which is the best one *g*.)
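
In Haskell terms, the crudest version of that discipline is just an
MVar guarding the whole structure - my sketch, not a worked-out design:

  import Control.Concurrent.MVar (MVar, newMVar, modifyMVar)

  -- The structure is only reachable through the MVar, so each operation
  -- runs atomically with respect to the others; temporarily inconsistent
  -- intermediate states are never observable from outside.
  newtype Locked s = Locked (MVar s)

  newLocked :: s -> IO (Locked s)
  newLocked s = fmap Locked (newMVar s)

  withLocked :: Locked s -> (s -> IO (s, a)) -> IO a
  withLocked (Locked m) op = modifyMVar m op

The interesting - and hard - part is of course doing this per accessed
object and automatically, rather than with one big lock.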

> What if your imperative implementation had to call
> some user-defined function passed into it as a parameter, and it
> called that function at a point when it left a linked list in an
> inconsistent state?

Higher-orderedness does indeed have implications.
There are two cases:
1) The passed-in function is functional.
2) The passed-in function is imperative.

In both cases, the data structure must be locked down before the
function is called, or else the function might see an inconsistent state.
However, since we're living in a potentially parallel environment, it's
probably a better idea to lock the entire data structure on routine
entry and to unlock it on routine exit. The consequence is that the
passed-in function can be applied only to those data structures that are
not locked. (It is possible to automatically determine where to stop
locking; the run-time system need not recursively lock all reachable
nodes: it's enough to reach those nodes that may be directly or
indirectly accessed by the main routine. Immutable objects need not be
locked at all, whether they are accessed or not - but note that an
object referencing a mutable object is itself mutable.)

> What if that user-defined function happens to
> access the same data structure that is currently in an inconsistent
> state?

It should fail. Locking is the most straightforward way to achieve this.

> The real question is: what limitations would a type system have to
> impose on programs in order for it to admit
> purely-functional-constructs-with-opaque-imperative-implementations
> without introducing potential deadlocks or nondeterminism? (An open
> question, I think.)

I can't think of a type system that does this, at least not in its entirety.
Actually, I doubt it's possible at all; some preliminary tinkering with
ideas quickly ran into problems that looked unsolvable. I'll have to
rethink these ideas, though - it's a line of thought that I never fully
explored.

Regards,
Jo

Donn Cave

unread,
Jul 16, 2003, 12:25:12 AM7/16/03
to
Quoth Joachim Durchholz <joachim....@web.de>:
| Tim Sweeney wrote:
...

|> The real question is: what limitations would a type system have to
|> impose on programs in order for it to admit
|> purely-functional-constructs-with-opaque-imperative-implementations
|> without introducing potential deadlocks or nondeterminism? (An open
|> question, I think.)
|
| I can't think of a type system that does this, at least not in its entirety.
| Actually I doubt it's at all possible, some preliminary tinkering with
| ideas quickly ran into problems that looked unsolvable. I'll have to
| rethink these ideas though, it's a line of thought that I never fully
| explored.

Isn't that just what Haskell is today - with the opaque imperative
implementations written in C?

Donn Cave, do...@drizzle.com

Joachim Durchholz

unread,
Jul 16, 2003, 4:22:00 AM7/16/03
to

The issue is that the imperative parts must be bug-free to ensure
referential transparency, and that we'd like to see some proof that the
Haskell run-time is indeed correct. Automation of that proof would be
highly desirable. And if a proof is not feasible, it would still be
helpful to have a "debugging mode" in the Haskell run-time that switches
on all sorts of assertion checking; this would not prove the run-time
system's correctness, but as long as no assertion fires, we know that no
bug has had any effect during a particular execution (which is useful
information in its own right).

(Substitute the purely functional subset of your favourite language for
"Haskell" in the above if you like *g*.)

Regards,
Jo

Donn Cave

unread,
Jul 17, 2003, 12:35:19 AM7/17/03
to
Quoth Joachim Durchholz <joachim....@web.de>:

I still don't get it - but I'm not very learned in these matters, so I'm
used to that. Anyway, my assumptions:

1. Haskell programs are provable today, to your satisfaction. I'm
confused about this, because I don't see anything but type safety
here, and that kind of structural correctness seems not really that
conclusive to me. But to insist on a standard that Haskell can't
meet anyway seems unreasonable, so it's about a type system at the
current Haskell level.

2. There is an imperative substrate that is not up to that level:
Haskell core elements and extensions written in C, external
libraries, system calls.

3. We commonly dismiss that as out of bounds of the problem.

4. A C implementation can be transformed into a semantically
equivalent but more provable Haskell one, even though
imperative.

My conclusion from this is that the Haskell implementation
that appears in (4) may be treated just the same as the
C implementation in (2) - i.e. as in (3) - with the sole
difference being a better justification for (3).

``The real question is: what limitations would a type system have to
impose on programs in order for it to admit
purely-functional-constructs-with-opaque-imperative-implementations
without introducing potential deadlocks or nondeterminism? (An open
question, I think.)''

- Haskell already has the opaque imperative implementation,
- promoting it from C to Haskell makes it no worse
- the type system can continue to ignore opaque parts

Donn Cave, do...@drizzle.com

Joachim Durchholz

unread,
Jul 17, 2003, 2:35:13 AM7/17/03
to
Donn Cave wrote:
> 2. There is an imperative substrate that is not up to that level:
> Haskell core elements and extensions written in C, external
> libraries, system calls.
>
> 3. We commonly dismiss that as out of bounds of the problem.

This is exactly the point: I don't see this as out-of-bounds.

I'd like to see an implementation of the Haskell run-time system,
written in Haskell, just as C run-time systems are written in C. With
the added bonus that the Haskell run-time is easier to verify :-)

I'd also like to have APIs that are functional, or where the side
effects are formally specified. I know that this would mean reworking
all libraries in the world, which isn't going to happen anytime soon... ;-)
Actually, it's impossible to even start writing such libraries because
there's no generally accepted formalism that captures precisely the
information that a caller must know about the side effects of a call,
and precisely leaves out what a caller should *not* know. (Actually I'm
not sure that such a formalism exists at all, generally accepted or not.)

> 4. A C implementation can be transformed into semantically
> equivalent but more provable Haskell one, even though
> imperative.

I don't think that this holds.

What I do think is that the problems that an imperative program poses to
a proof will be just transformed into its functional equivalent.

I also think that functional programs are easier to prove simply because
functional languages encourage many idioms that are easy to prove, and
discourage many idioms that are difficult to prove. (Real Programmers
can write Fortran programs in any language... they just write a Fortran
program and have a translator to that other language *g*)

Regards,
Jo

William Lovas

unread,
Jul 17, 2003, 1:17:05 PM7/17/03
to
In article <bf5g6v$b9ovv$1...@ID-9852.news.uni-berlin.de>, Joachim Durchholz wrote:
> Donn Cave wrote:
>> 2. There is an imperative substrate that is not up to that level
>> Haskell core elements and extensions written in C. External
>> libraries, system calls.
>>
>> 3. We commonly dismiss that as out of bounds of the problem.
>
> This is exactly the point: I don't see this as out-of-bounds.
>
> I'd like to see an implementation of the Haskell run-time system,
> written in Haskell, just as C run-time systems are written in C. With
> the added bonus that the Haskell run-time is easier to verify :-)

Isn't GHC completely implemented in Haskell, as of version 5-something?

More importantly, though, isn't there a chicken-and-egg problem here?
Haskell didn't exist forever (and neither did C, for that matter), so
at some point there was a bootstrapping step. Haskell-as-a-standard
may be "easy to verify", but Haskell-as-an-implementation, even one
written in standard Haskell, will always depend on the correctness of
the original non-Haskell implementation.

In fact, if you want to get really pathological, even the correctness
of that original implementation is dependent on far more than the
semantics of its implementation language (C, perhaps) -- it depends in
turn on *that* language's implementation's correctness, which depends
still on the system call implementations and thus the operating system
implementation (quite a leap of faith for *cough* some operating systems),
which probably invokes some kind of loop, because the operating system was
probably written in C - and, on top of all that, the whole thing depends on
the correctness of the *machine's* implementation, hardware-wise!

In *fact*, if you want to get *reeeeally* pathological, even the machine's
implementation depends on the correctness of our understanding of physical
principles and the way the world works, and ... etc. I'm reminded of Carl
Sagan's quote, "In order to make an apple pie from scratch, you must first
create the universe."

So i guess we have to draw the line somewhere, huh? :)

William

Joachim Durchholz

unread,
Jul 17, 2003, 1:50:47 PM7/17/03
to
William Lovas wrote:
> So i guess we have to draw the line somewhere, huh? :)

Well, there's some validity to that - at least in that correctness
ultimately depends on machine correctness (which is highly dubious,
given the lists of errata that Intel publishes...).

However, at least *one* of the holes can be plugged:

> Haskell didn't exist forever (and neither did C, for that matter), so
> at some point there was a bootstrapping step. Haskell-as-a-standard
> may be "easy to verify", but Haskell-as-an-implementation, even one
> written in standard Haskell, will always depend on the correctness of
> the original non-Haskell implementation.

This is not the case. If somebody writes a Haskell compiler that emits C
or machine code, you don't have to go back to the initial
implementation: it's enough to inspect the machine code generated by the
compiler when it compiles itself.
If you wish to reduce the task, you first build a compiler for a
sublanguage which is barely powerful enough to compile the full
compiler, have it compile itself to check the implementation, then run
the full compiler through it. (This can be done in more than one stage
if desired.)
A popular test for compiler correctness is to have the compiler compile
itself, then use the resulting binary to compile the compiler once more;
the two generated binaries must be bitwise identical. It's nothing more
than a first quick test, but given that the compiler for the sublanguage
is stripped down to exactly those features that are useful for writing
compilers, code coverage is usually quite good.

In other words, the correctness of a compiler is not that of the weakest
link in the chain (which is a Good Thing, else we'd be writing compilers
from scratch all the time).
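
(A minimal sketch of that fixed-point check, with made-up file names --
an illustration of the test described above, not a real build script:)

    import qualified Data.ByteString as BS
    import System.Exit (exitFailure)

    -- stage2: the compiler built by the previous compiler
    -- stage3: the compiler built by stage2 from the same source
    main :: IO ()
    main = do
      stage2 <- BS.readFile "stage2/hsc"
      stage3 <- BS.readFile "stage3/hsc"
      if stage2 == stage3
        then putStrLn "fixed point reached: binaries are bitwise identical"
        else putStrLn "bootstrap mismatch" >> exitFailure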

Regards,
Jo

Ivan Boldyrev

unread,
Jul 20, 2003, 11:17:43 AM7/20/03
to
On 8439 day of my life Graham Matthews wrote:
>
> In Sisal this code looks something like this (can't remember the exact
> syntax),
>
> initial
> s = 0;
> i = 0;
> in while i <= n do
> new s = s + i;
> new i = i + 1;
> yields s

AFAIR, you must use 'old', not 'new':

s = old s + old i;
i = old i + 1;


:)
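
(For comparison -- my sketch, not part of the original exchange -- the
same accumulating loop in Haskell; the helper's arguments play the role
of Sisal's 'old s' and 'old i':)

    -- sum of 0..n via an accumulating helper, mirroring the Sisal loop
    sumTo :: Int -> Int
    sumTo n = go 0 0
      where
        go s i
          | i <= n    = go (s + i) (i + 1)
          | otherwise = s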

--
Ivan Boldyrev

There are 256 symbols in English alphabet.

Marshall Spight

unread,
Jul 22, 2003, 1:02:24 AM7/22/03
to
"Tim Sweeney" <t...@epicgames.com> wrote in message news:9ef8dc7.03071...@posting.google.com...

>
> The real question is: what limitations would a type system have to
> impose on programs in order for it to admit
> purely-functional-constructs-with-opaque-imperative-implementations
> without introducing potential deadlocks or nondeterminism? (An open
> question, I think.)

Isn't "no object aliasing" enough? That is to say, there can only be one
client of any given memory address.

Lately that seems to me a very desirable property for a programming
language to have, although I don't know of any languages that do have
it.
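
(One existing, if partial, answer to the quoted question is Haskell's
ST monad: the mutable cell below has a single client, and the rank-2
type of runST guarantees it cannot escape, so the surrounding function
stays pure and deterministic. A sketch of mine, not something claimed
in the thread:)

    import Control.Monad.ST
    import Data.STRef

    -- A purely functional interface with an opaque imperative implementation.
    sumSquaresST :: Int -> Int
    sumSquaresST n = runST $ do
      acc <- newSTRef 0
      mapM_ (\i -> modifySTRef' acc (+ i * i)) [1 .. n]
      readSTRef acc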


Marshall

gr...@cs.uwa.edu.au

unread,
Jul 22, 2003, 1:18:43 AM7/22/03
to
Marshall Spight <msp...@dnai.com> wrote:
: "Tim Sweeney" <t...@epicgames.com> wrote in message news:9ef8dc7.03071...@posting.google.com...

:> The real question is: what limitations would a type system have to
:> impose on programs in order for it to admit
:> purely-functional-constructs-with-opaque-imperative-implementations
:> without introducing potential deadlocks or nondeterminism?

: Isn't "no object aliasing" enough? That is to say, there can only be one


: client of any given memory address. Lately that seems to me a very
: desirable property for a programming language to have, although I don't
: know of any languages that do have it.

Sounds a little like Clean's "uniqueness typing"? Does that fit your bill?

-Greg

Marshall Spight

unread,
Jul 24, 2003, 3:27:33 AM7/24/03
to
<gr...@cs.uwa.edu.au> wrote in message news:bfihfj$ac8$1...@enyo.uwa.edu.au...


I guess I'll have to check! :-)


Marshall

Ralph Becket

unread,
Jul 24, 2003, 9:57:34 PM7/24/03
to
"Marshall Spight" <msp...@dnai.com> wrote in message news:<Az3Ta.106951$sY2....@rwcrnsc51.ops.asp.att.net>...

>
> Isn't "no object aliasing" enough? That is to say, there can only be one
> client of any given memory address.
>
> Lately that seems to me a very desirable property for a programming
> language to have, although I don't know of any languages that do have
> it.

Mercury has a comprehensive uniqueness system which can be used to support
this sort of thing. This is how arrays and IO, for instance, are handled
in Mercury. Have a look at www.mercury.cs.mu.oz.au
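
(For the curious, a toy of my own showing the world-threading style that
Mercury's uniqueness modes make safe -- here the "world" is just a log of
output lines; plain Haskell can express the threading but, unlike Mercury,
cannot forbid reusing an old World value:)

    newtype World = World [String]

    -- Each operation consumes a World and yields a new one; uniqueness
    -- (or linearity) is what would guarantee the old value is never
    -- used again.
    putLine :: String -> World -> ((), World)
    putLine s (World out) = ((), World (out ++ [s]))

    runProgram :: World -> ((), World)
    runProgram w0 =
      let (_, w1) = putLine "hello" w0
          (_, w2) = putLine "world" w1
      in  ((), w2)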

-- Ralph
