
Me and FP (somewhat longish and boring...)


WildHeart'2k

Aug 21, 2000, 3:00:00 AM
I apologize in advance for my imprecise English...

Some of you may be interested, so I'll quickly tell you the story of my
relationship with FP and FPLs.
I have an Italian master's degree in Physics and have been playing with
computers since I was a child. Now 28, this makes me an "old-timer" :-) I've
been collecting programming languages like some people collect stamps, and I
could not resist the temptation, every time I came across a new language, to
spend some time trying to learn at least the philosophy behind it. I am now a
software developer, and I do most of my development in APL, but I don't mind
C, C++ or any other language when it comes to it, if I can get my job done
more easily.
About two years ago I came across a very interesting paper called "Why
Functional Programming Matters". I was intrigued, but I admit I somehow
dismissed FPLs as something academic: really cool, but not useful in the
real world of programming. The same thing everybody outside the APL
community says about APL, for that matter...
Last year I had an annoying accident and I spent the whole summer lying in
bed with my laptop and a slow internet connection. When the ICFP contest
showed how impressive the advantage of FPLs was, at least for a certain class
of problems, I retuned my brain: there had to be something more. So I gave
them another chance. I started collecting papers and implementations to try
to understand more. The period of free research ended, so most of that
remained in my (virtual) drawers until this summer. In the meantime I
decided that it was about time I tried to fill some of the holes left by my
previous studies, and I bought myself a few books on pure computer science,
mostly books about compiler design and language implementation. I can't say
working and studying (for its own sake: my work does not call for me to be a
compiler designer) is easy and, mostly, my studies are very inconsistent and
fragmented. But they progress.
Recently a friend of mine showed interest in the realization of a small
scripting language to control a computer game they are developing, and I
decided to put myself to the test. I said I would try to come up with
something, mostly for my own cultural growth: if it works in the end they
can pay me :-) I can only work on this thing during weekends...
The first experiments were obviously conducted in APL, since I know the
language quite well and am at ease with it. A while ago I wrote a horrible
but working Yacc-like tool in APL, so I could try building ASTs and the
like. In less than a day I had a working compiler for a simple, imperative,
untyped language.
The next step was to try to redo the work in a real language. I chose OCaml
because of its imperative features. I wasn't very comfortable with the
syntax, since I did not know the language very well, and the documentation
for ocamlyacc and ocamllex is somewhat terse. Nevertheless, in one evening
the parser for the expression subset of the language was working. The next
day I wrote the code generator (it targets a simple stack machine) and
added control structures. It was all going very well. At the end of the
second day I decided, for the sake of it, to introduce a simple
optimization, just to prove that I could do it, and in less than 10 minutes
constant folding on the AST was perfectly working.
On the third and fourth day I implemented the VM, also in OCaml, and
extended the language to introduce new primitive data types.
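[Editor's note: constant folding of the kind described is only a few lines in OCaml. A minimal sketch, assuming a hypothetical toy expression type — the names and the type are illustrative inventions, not the poster's actual code:]

```ocaml
(* A minimal sketch of bottom-up constant folding on an AST. The
   expression type and names are hypothetical illustrations, not the
   project's actual code. *)
type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

(* Fold the children first, then combine when both sides have been
   reduced to constants; otherwise rebuild the node unchanged. *)
let rec fold (e : expr) : expr =
  match e with
  | (Const _ | Var _) -> e
  | Add (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x + y)
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x * y)
       | a', b' -> Mul (a', b'))

let () =
  assert (fold (Add (Const 1, Mul (Const 2, Const 3))) = Const 7);
  assert (fold (Mul (Var "x", Const 2)) = Mul (Var "x", Const 2))
```

Note how the exhaustiveness check on `expr` is exactly what makes this kind of pass "work the first time": forget a constructor and the compiler complains.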
I truly am amazed. I am not claiming I am doing anything more complex than
the mid-course test of a single CS course. Because the VM runtime does all
the type conversion, I don't have to bother with tree annotations and type
checking. Since the VM is a stack machine, I don't have to worry about
register allocation, so, as you can see, it's all very trivial. The thing
which amazes me, though, is that once I get the OCaml compiler to accept my
source code (that is: once I cleared all the syntax errors and the type
mismatches), my little compiler - VM runtime, just works! The simple
constant folder worked the first time. Now, what I was trying to achieve is
not exactly rocket science, but even in APL (which I consider a highly
productive language) the first-attempt success rate is not this high. I
believe this is due to static type checking, which caught a few simple but
fundamental mistakes I was making.
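[Editor's note: for readers unfamiliar with the approach, a stack machine of the kind described can be sketched in a few lines of OCaml. The instruction set and names here are hypothetical, not the poster's actual VM:]

```ocaml
(* A sketch of a simple stack-machine VM. Instruction set and names
   are hypothetical illustrations, not the poster's actual VM. *)
type instr =
  | Push of int   (* push a constant onto the stack *)
  | Add           (* pop two values, push their sum *)
  | Mul           (* pop two values, push their product *)

let run (prog : instr list) : int =
  let step stack instr =
    match instr, stack with
    | Push n, _ -> n :: stack
    | Add, b :: a :: rest -> (a + b) :: rest
    | Mul, b :: a :: rest -> (a * b) :: rest
    | _ -> failwith "stack underflow"
  in
  match List.fold_left step [] prog with
  | [result] -> result
  | _ -> failwith "ill-formed program"

(* (1 + 2) * 3 compiles to postfix order: *)
let () = assert (run [Push 1; Push 2; Add; Push 3; Mul] = 9)
```

Because operands always come from the top of the stack, there is indeed no register allocation to get wrong; code generation is a straightforward postfix walk of the AST.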
I would like to add that despite OCaml's imperative features, I am trying to
stick to a pure functional paradigm when I write OCaml. The only thing not
purely functional I used was a little bit of statement sequencing (>> in
Haskell), at least until the day I implemented my language's vectors with
mutable arrays, and even that only for performance reasons, not because
lists weren't good enough from a theoretical point of view.

You can make of my story what you want. One thing is for sure: my entire
attitude towards FPLs has changed. If you want to quote me as a success
story, please do :-)

For the record: if my little language is ever accepted, I am going to have
to rewrite it all in C++ (argh!) for the usual reasons (and maybe also for
performance, but that remains to be seen).
--
WildHeart'2k

Fergus Henderson

Aug 21, 2000, 3:00:00 AM
"WildHeart'2k" <s...@apl.it> writes:

>I wasn't very comfortable with the
>syntax since I did not know the language very well, ...
>Nevertheless in one evening
>the parser for the expression subset of the language was working.
>... in less than 10 minutes constant folding on the AST was perfectly working.
>On the third and fourth day I implemented the VM, also in OCaml, and
>extended the language to introduce new primitive data types.
>I truly am amazed.
...
>... it's all very trivial. The thing
>which amazes me, though, is that once I get the OCaml compiler to accept my
>source code (that is: once I cleared all the syntax errors and the type
>mismatches), my little compiler - VM runtime, just works!

I think you're not alone there. This experience -- of writing
relatively complicated code and having it work first time, even when
not very familiar with the language -- is a very common one for users
of strongly typed functional (or similar) languages, if my own
experience and the large amount of anecdotal evidence that I've heard
is any judge. Whether it be ML, Haskell, or Mercury, the proportion
of errors that make it past the compiler seems to be quite small, so
once the compiler does accept a program, it will often "just work".

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.

graham

Aug 21, 2000, 3:00:00 AM
Fergus Henderson at f...@cs.mu.oz.au wrote on 8/21/00 1:57 PM:

> "WildHeart'2k" <s...@apl.it> writes:
>
>> The thing
>> which amazes me, though, is that once I get the OCaml compiler to accept my
>> source code (that is: once I cleared all the syntax errors and the type
>> mismatches), my little compiler - VM runtime, just works!
>
> I think you're not alone there. This experience -- of writing
> relatively complicated code and having it work first time, even when
> not very familiar with the language -- is a very common one for users
> of strongly typed functional (or similar) languages

Is writing a toy compiler really "relatively" complicated code? I personally
don't think so (no code generation issues in a toy compiler for example).
It's a serious question since people often cite writing toy compilers as
proof of the applicability of functional languages to complex problems.

graham


Albert Y. C. Lai

Aug 21, 2000, 3:00:00 AM
graham <grah...@telocity.com> writes:

> Is writing a toy compiler really "relatively" complicated code? I personally
> don't think so (no code generation issues in a toy compiler for example).

Depends on the programming language used. In ACM programming
contests, most contestants avoid any problem that smells like "write a
toy parser" because C-like languages with no tool support except for
the standard libraries are to be used.

> It's a serious question since people often cite writing toy compilers as
> proof of the applicability of functional languages to complex problems.

I agree with you because toy compilers are conceptually simple. We
cannot use this to deduce how well FPLs solve complex problems.
However, we can conclude that FPLs keep easy problems easy, whereas
C-like languages make easy problems complicated.

Simon Helsen

Aug 22, 2000, 3:00:00 AM
On 21 Aug 2000, Fergus Henderson wrote:

>I think you're not alone there. This experience -- of writing
>relatively complicated code and having it work first time, even when
>not very familiar with the language -- is a very common one for users
>of strongly typed functional (or similar) languages, if my own
>experience and the large amount of anecdotal evidence that I've heard
>is any judge. Whether it be ML, Haskell, or Mercury, the proportion
>of errors that make it past the compiler seems to be quite small, so
>once the compiler does accept a program, it will often "just work".

as some experimental support and refinement of this claim: this "just
work"-property only works out if you avoid imperative programming. Of
course no issue in Haskell (don't know about Mercury), but not so for ML
(my main development language). I'm currently developing a partial
evaluator (an automatic program transformation tool) which is far from a
"simple" application. Somehow, most (if not all) bugs and runtime problems
originate from imperative data structures/algorithms, which I use mostly
for efficiency and sometimes also for elegance.

I say so because people often claim that static typing is the reason behind
the "just work" property. Well, I conjecture it is the combination of static
typing and pure functional code.

regards,

Simon


George Russell

Aug 22, 2000, 3:00:00 AM
Simon Helsen wrote (snipped):

> > Whether it be ML, Haskell, or Mercury, the proportion
> > of errors that make it past the compiler seems to be quite small, so
> > once the compiler does accept a program, it will often "just work".
>
> as some experimental support and refinement of this claim: this "just
> work"-property only works out if you avoid imperative programming. Of
> course no issue in Haskell (don't know about Mercury), but not so for ML
> (my main development language).
I've written quite a lot of C, Haskell and ML. The first time I get a C program
past the compiler it will typically crash because of a pointer error. This at
least is unlikely to happen in ML. I'm not convinced that Haskell is much better
than ML for writing code that works "first time". This may be because a lot of my
Haskell programming has to be fairly imperative anyway (using monads, but still
imperative). I do as a matter of fact think that Haskell is a much better language
than ML, but for other reasons, many of which will seem to readers of this newsgroup
fairly trivial. I am not convinced that the difference between lazy and strict
is really so important for ease of programming; indeed for me it might easily
be outweighed by, for example, the fact that you can link in C easily to Glasgow
Haskell, or that Standard ML of New Jersey has an extensive interactive mode.

Brian Rogoff

Aug 22, 2000, 3:00:00 AM
On Tue, 22 Aug 2000, George Russell wrote:
> Simon Helsen wrote (snipped):
> > > Whether it be ML, Haskell, or Mercury, the proportion
> > > of errors that make it past the compiler seems to be quite small, so
> > > once the compiler does accept a program, it will often "just work".
> > as some experimental support and refinement of this claim: this "just
> > work"-property only works out if you avoid imperative programming. Of
> > course no issue in Haskell (don't know about Mercury), but not so for ML
> > (my main development language).
> I've written quite a lot of C, Haskell and ML. The first time I get a
> C program past the compiler it will typically crash because of a
> pointer error. This at least is unlikely to happen in ML.

It's also the case that large numbers of Ada programmers (I include
myself in that list) report a similar "works when it compiles" phenomenon.
And Ada is most certainly not a functional programming language, nor does
it have a reasonable functional subset. I've also had this experience with
OCaml, and with some very "imperative" OCaml code too. So I think the
claim is more about static typing than about functional/declarative vs
imperative, though I agree that side-effect-free code is easier to work
with.

> I do as a matter of fact think that Haskell is a much better language
> than ML, but for other reasons, many of which will seem to readers of
> this newsgroup fairly trivial.

Please do tell. Those trivial reasons may be important to others. Which
ML are you comparing to?

-- Brian

Mark Carroll

Aug 22, 2000, 3:00:00 AM
In article <39A2B1AE...@informatik.uni-bremen.de>,
George Russell <g...@informatik.uni-bremen.de> wrote:
(snip)

>I've written quite a lot of C, Haskell and ML. The first time I get a C program
>past the compiler it will typically crash because of a pointer error. This at
>least is unlikely to happen in ML. I'm not convinced that Haskell is much better
>than ML for writing code that works "first time". This may be because a lot of my
>Haskell programming has to be fairly imperative anyway (using monads, but still
(snip)

Then again, the language where I've most strongly noticed that once I get
my program past the compiler it'll probably work is Modula-3 - but then
again, it forces me to declare what everything is, it does very little
implicit casting, and a lot can be checked statically at compile time.

So I suspect it's more that languages that encourage you to declare lots
of stuff that can be checked statically allow the compiler to catch most
of your bugs, whether they be functional or imperative. In contrast,
quite a few of my Lisp mistakes, even though I program fairly
functionally, are caught instead through bizarre errors, because I don't
write enough declares and much of the checking is done at run time, as
the problem code is actually executed.

-- Mark

Russell Wallace

Aug 22, 2000, 9:58:30 PM
Albert Y. C. Lai wrote:
> Depends on the programming language used. In ACM programming
> contests, most contestants avoid any problem that smells like "write a
> toy parser" because C-like languages with no tool support except for
> the standard libraries are to be used.

They do?

*blink*

Me, I find writing simple parsers one of the cleanest and easiest things
to do in straight C. I'd jump on a "write a toy parser" problem :)

--
"To summarize the summary of the summary: people are a problem."
Russell Wallace
mailto:rwal...@esatclear.ie
http://www.esatclear.ie/~rwallace

Florian Hars

Aug 23, 2000, 2:21:11 AM
f...@cs.mu.oz.au (Fergus Henderson) writes:
> Whether it be ML, Haskell, or Mercury, the proportion
> of errors that make it past the compiler seems to be quite small

Unless pattern matching fails on

doInclude acc []     = acc
doinclude acc (x:xs) =
    -- do something to x
    doInclude (x:acc) xs

(note the lower-case 'i' in the second line: it defines a separate
function, leaving doInclude with no case for non-empty lists, so the
match fails at run time). Hugs' runtime errors are even more cryptic
than its compiler errors.

:-)

Yours, Florian

Albert Y. C. Lai

Aug 23, 2000, 1:48:47 AM
jam...@volition-inc.com (James Hague) writes:

> When using an FPL, I write small functions and test as I go.

[...]

> As such, I've found very little difference in practice between using
> dynamically and statically typed languages.

As far as writing is concerned, I agree with you.

However, I cannot say the same about reading, especially reading
someone else's programs. I have read some of other people's Perl code
and other people's [Emacs] Lisp code, and both cases have been rather
like watching suspense movies because the data structures used became
clear only after reading the whole damn thing twice. Actually, only
the Perl case was like that; in the Emacs Lisp case, I actively
avoided understanding the code because I just needed to copy a few
lines and I learned my lessons from the Perl case. If my life
depended on understanding other people's Emacs Lisp code, I would
rather die and go to heaven.

But of course, I write comments like people add ketchup to fries when
it is my turn to write Lisp and Perl code, so you will have no problem
reading *mine*. Ha.

But really, if programming languages evolve in a write-only direction,
something has gone really wrong. I am supposed to write comments no
matter what, but programming languages are also supposed to relieve,
not worsen, the software crisis. So yes, I am horrified --- horrified
that some or most programmers judge and choose programming languages
based solely on their experience of writing.

Albert Y. C. Lai

Aug 23, 2000, 1:12:14 AM
Russell Wallace <rwal...@esatclear.ie> writes:

> Me, I find writing simple parsers one of the cleanest and easiest things
> to do in straight C. I'd jump on a "write a toy parser" problem :)

Then you will probably love this:

http://acm.gui.uva.es/problemset/v6/606.html

(First appeared: ACM Eastern Regional, November 1998.)

Enjoy!

George Russell

Aug 23, 2000, 3:00:00 AM
Brian Rogoff wrote:
> Please do tell. Those trivial reasons may be important to others. Which
> ML are you comparing to?
Oh alright. But you'll all laugh at me. SML/NJ and GHC implementors, please don't
take it personally, I love you all really . . .

I'm comparing Standard ML/New Jersey (circa 110.7) and Glasgow Haskell (circa 4.08),
together with the documentation supplied. I'll list the advantages each has which the
other one doesn't, in decreasing order of importance.

Where SML/NJ wins
(1) the interactive mode. (Haskellers may want to remind me of Hugs but
for me Hugs is almost useless because (a) Hugs doesn't do everything GHC does
so won't compile my code; (b) you can't define new functions on the Hugs command
line.)
(2) the library documentation. If I want to find out all the functions operating on
say Lists, I can get a web page with for each function (1) the name; (2) the
type; (3) a brief description in English of what the function does. For Haskell
you need to look in several places and you usually are expected to decode the
standard-writer's Haskell to work out what the function does, and whether it might
be useful.
(3) Error checking in the compiler. I usually work by producing (say) 1000 lines of
code at a time. I then like to pass it through the compiler repeatedly, hopefully
eliminating 2/3 of the compiler warnings at each stage, until finally it compiles.
SML/NJ is far better for this because (a) it tries harder to recover from errors;
(b) all errors are located precisely (by column as well as line); (c) the error
messages are much shorter, but still tell me what I need to know.
(4) The CM.make system. To get where I am with UniForM, the GHC project I am
now working on, I more or less had to become a GNU make guru. Now I've done
this my Makefiles are about as concise as they were with CM.make. I suppose
that if I'd been starting from scratch it might have been easier to use a
system like hmake.
(5) Exceptions. The error-handling facilities in GHC are getting better, but still
somewhat clunky compared with those of standard ML.

Where GHC wins
(1) Having modules and classes rather than structures and functors. I know this is
controversial, but I simply find the ML structures/functors/signatures too complicated.
For example, with structures and signatures, you have to duplicate datatype definitions
which you expose between the structure and the signature. I prefer (heresy!) to keep the
specification and the implementation in the same file, though I think it's good practice
to put the specification near the top of the file. (I would like GHC's module export
lists to function as a sort of lightweight signature by allowing types to be put there;
at the moment many people put the types in anyway as comments, but these types are of
course unchecked and sometimes indeed false.)
(2) Linking to C. This is really important and something the GHC implementors have worked
hard on. In theory you can do it also with SML/NJ, but I looked at the documentation
for how to do it and thought "yuck!". In particular I think you have to recompile bits
of the compiler or something like that, as well as modifying various source files.
However I believe the SML/NJ implementors are planning to improve this . . .
(3) Haskell separates out pure from stateful computation, without making stateful computation
deliberately cumbersome. Well mostly - it's a pity that mutable arrays and references
aren't supported by the standard libraries, but GHC does provide them nevertheless.
(4) Haskell gets rid of those horrid equality types, replacing them by the much more powerful
type classes. They also replace functors (see (1)).
(5) Definition of the language. The Haskell standard actually
contains English explanations of all the clever mathematical formulae.
I confess to usually skipping the mathematical formulae . . .

I could go on and on but I think that'll do for now.

Peter Hancock

Aug 23, 2000, 3:00:00 AM
>>>>> "George" == George Russell <g...@informatik.uni-bremen.de> writes:
> Where SML/NJ wins
> (1) the interactive mode. (Haskellers may want to remind me of Hugs but
> for me Hugs is almost useless because (a) Hugs doesn't do everything GHC does
> so won't compile my code; (b) you can't define new functions on the Hugs command
> line.)

As regards the last point, I'm not sure what you mean. The following
Hugs command line defines two new functions. ??

Main> let twice f = f . f ; s x = x ++ "'" ; o = "0" in twice twice s o

Where one wants to add new functions (of a less one-off kind, or
bigger), why not put them in an editor buffer and load them? Maybe
I'm not seeing the problem since I usually run hugs from an editor.

Peter

George Russell

Aug 23, 2000, 3:00:00 AM
Peter Hancock wrote:
> Where one wants to add new functions (of a less one-off kind, or
> bigger), why not put them in an editor buffer and load them? Maybe
> I'm not seeing the problem since I usually run hugs from an editor.
Once I've loaded the editor buffer I then lose the previous module.
Unless I explicitly load it as well, or import it in the editor buffer.
OK, so if I jump through various hoops I suppose it's doable, but I'd
still much prefer just to type the definition and then later on use it.

Andreas Rossberg

Aug 23, 2000, 3:00:00 AM
Usually I stay away from flame wars about static typing, but this time I
can't resist...

James Hague wrote:
>
> As such, I've found very little difference in practice between using
> dynamically and statically typed languages. In one case, something
> dumb is caught by the compiler. In the other, it is caught
> immediately upon trivial testing (whoops, I'm appending a list to a
> float). Accidental infinite recursion is a problem in both cases.
> After using a language for a while, trivial type errors get
> progressively rarer.

I've read this argument so many times now, and it's still in serious
conflict with my experience. Here is some anecdotal evidence that your
personal experience may be one-sided: just two or three weeks ago a
colleague of mine was hunting a bug. The bug did not show up in simple
cases, only for complex input, as a result of the interplay of several
transformation functions. And the resulting effect was obscure. After
chasing that single bug full time for a whole week, he almost gave up.
In the end it turned out to be a completely trivial error that a type
checker would have caught immediately. BTW, the language is dynamically
typed (not untyped) and he is a brilliant programmer, very experienced
in the language - in fact, he implemented the compiler.

> The bottom line, FOR ME, is that type systems (either
> dynamic or static) tend be a minor issue.

IMHO one of the reasons for the discrepancy in judgements of static
typing is that people with a dislike for types tend not to use the type
system properly, as a helpful and reliable tool, but instead just fight
it. Of course it's less useful then.

> From many of the papers
> I've read recently, I get the feeling that FP is all about type
> systems for some people.

It's not all about types, but types are an important aspect of FP, like
they are an important aspect of programming in general. You always deal
with types, in every language - whether explicitly or not! One of the FP
philosophies of course is to make every important semantic concept
explicit, or at least understand it properly.

--
Andreas Rossberg, ross...@ps.uni-sb.de

:: be declarative. be functional. just be. ::

Simon Raahauge DeSantis

Aug 23, 2000, 3:00:00 AM
In article <39A3A8F6...@informatik.uni-bremen.de>, George Russell wrote:
>Where SML/NJ wins
>(1) the interactive mode. (Haskellers may want to remind me of Hugs but
> for me Hugs is almost useless because (a) Hugs doesn't do everything GHC does
> so won't compile my code; (b) you can't define new functions on the Hugs command
> line.)

NB there is also (last time I checked) HBI, HBC's interactive friend. I
believe HBI allows you to define functions on the command line (I haven't
used it much, so my recollection might be incorrect).

--
-Simon Raahauge DeSantis

Brian Rogoff

Aug 23, 2000, 11:41:44 PM
On Wed, 23 Aug 2000, George Russell wrote:
> Oh alright. But you'll all laugh at me. SML/NJ and GHC implementors, please don't
> take it personally, I love you all really . . .

Not really "trivial" differences here! I'm going to pick at the
language-specific rather than implementation-specific points, but
implementors should note that

(1) Interactive mode is very important for many programmers
(2) Documentation and decent error reporting can't be neglected

> Where GHC wins
> (1) Having modules and classes rather than structures and functors. I
> know this is controversial, but I simply find the ML
> structures/functors/signatures too complicated.

This is a fundamental point in language design. I'm in favor of even more
"complicated" module systems.

> For example, with structures and signatures, you have to duplicate
> datatype definitions which you expose between the structure and the
> signature.
> I prefer (heresy!) to keep the specification and the implementation in
> the same file, though I think it's good practice to put the
> specification near the top of the file.

I disagree. There have been some long threads on this topic between the
Eiffel and Ada newsgroups, since Eiffel puts all of the info in one unit
and relies on a tool to extract the specification or "short form" of a
class, also has a tool to get the spec of a class and its ancestors
(short flat form). Ada has separated interface and implementation.
I'm in the Ada/ML camp on this topic.

> (4) Haskell gets rid of those horrid equality types, replacing them by
> the much more powerful type classes.

I agree that equality types are a wart. OCaml has polymorphic comparisons,
which are also kind of ugly (but convenient) IMO. I wouldn't mind a
similar overloading capability in ML. It looks like the ML2000 folks are
going in the other direction w.r.t. overloading, but the OCaml folks will
hopefully go their own way.

> (3) Haskell separates out pure from stateful computation, without
> making stateful computation deliberately cumbersome. Well mostly -
> it's a pity that mutable arrays and references aren't supported by
> the standard libraries, but GHC does provide them nevertheless.

Yes, the monadic style is easier in Haskell, on account of its type
classes and do notation. In general I find the ease with which you can use
infixes a very nice (and laughably trivial :) feature of Haskell.

I haven't done enough real programming in Haskell to make a judgement as
to whether being so careful about state is a big win. Do you think it
is?

> (5) Exceptions. The error-handling facilities in GHC are getting better, but still
> somewhat clunky compared with those of standard ML.

> I could go on and on but I think that'll do for now.

Actually I find this kind of comparison interesting, even though I like
ML. There certainly is lots of room for different opinions on things like
type classes vs modules.

-- Brian


Lars Lundgren

Aug 24, 2000, 2:25:40 AM
On Wed, 23 Aug 2000, George Russell wrote:

> Peter Hancock wrote:
> > Where one wants to add new functions (of a less one-off kind, or
> > bigger), why not put them in an editor buffer and load them? Maybe
> > I'm not seeing the problem since I usually run hugs from an editor.
> Once I've loaded the editor buffer I then lose the previous module.
> Unless I explicitly load it as well, or import it in the editor buffer.

or load it with :a instead of :l.

/Lars L


Ketil Z Malde

Aug 24, 2000, 2:39:36 AM
jam...@volition-inc.com (James Hague) writes:

> You can convince yourself that crazy things, like video games
> consisting of 200,000 lines of assembly code, airport baggage
> handling systems written in Forth, and complex desktop applications
> written in C++, do not exist.

They do exist all right, but it's not very hard to convince myself
that they don't work correctly. Now, if they'd used static typing... :-)

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

George Russell

Aug 24, 2000, 3:00:00 AM
Brian Rogoff wrote:
> I haven't done enough real programming in Haskell to make a judgement as
> to whether being so careful about state is a big win. Do you think it
> is?
I see the IO monad (and its relatives, such as the ST monad) as a way of
annotating the type to indicate that something stateful is going on. I think
it is a big win that you can do this, especially when you have lots of both
stateful and pure computation in the program. For example, when I am looking
over old code (which is a big part of my job at the moment) I can be sure that
it's OK to rearrange or delete computations when they have a pure type.

Markus Mottl

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
James Hague <jam...@volition-inc.com> wrote:
>>IMHO one of the reasons for the discrepancy in judgement of static
>>typing is that people with dislike for types tend not to use it as a
>>helpful and reliable tool properly, but instead just fight it. Of course
>>it's less useful then.

[snip]
> But it is difficult to say what percentage of the benefit is coming from
> static typing in this case.
[snip]

The problem here is that it is fairly difficult to compare languages that
have very different semantics in general, besides having different type
systems. There are simply not many languages around that are very similar
in nature but differ only in this respect.

Maybe an example from the "logic" world would help: I have been using both
Prolog and LambdaProlog for various purposes. Surely, LambdaProlog has
many features that normal Prolog lacks, but for a fairly large number of
tasks, the two languages behave practically the same.

The other big difference is that LambdaProlog has a static type system. It
is surely not as advanced as Mercury's (no modes, determinism, etc.), just
simple verification of types.

I have definitely written more code in Prolog than in LambdaProlog so far,
but one thing has become clear to me: the static type system not only
improved the quality of my programs, it also helped me to get the job done
*faster*.

Just an example: I had to use some library with Prolog and got strange
behaviour. It turned out after some hours of debugging that I had
accidentally passed something like "[(foo, bar)]" instead of "[foo, bar]" to
the library. - It seemed to run well with this bug for a long time.
Kind of...

The subtle difference between one element that is a tuple and a list of
two elements was sufficient to ruin my nerves. It was probably sheer luck
that I found the problem after "a few" hours only. And this is just *one*
example where the lack of static typing has slowed me down (and degraded
the quality of my programs)!
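For comparison, this class of bug cannot get past a static type checker. A small hypothetical Haskell analogue ("register" and its signature are made up for illustration):

```haskell
-- A hypothetical library function expecting a flat list of names:
register :: [String] -> Int
register names = length names

ok :: Int
ok = register ["foo", "bar"]        -- accepted: a two-element list

-- The buggy call from the Prolog anecdote would be rejected at
-- compile time, since a one-element list of pairs has a different
-- type than a two-element list of strings:
-- bad = register [("foo", "bar")]   -- does not type-check
```

The debugging session described above would have been a one-line compiler error instead of several hours of hunting.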

Best regards,
Markus Mottl

--
Markus Mottl, mo...@miss.wu-wien.ac.at, http://miss.wu-wien.ac.at/~mottl

George Russell

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
"Albert Y. C. Lai" wrote:
>
This is indeed an interesting problem. However, the naive solution of simulating
lazy evaluation, as a functional language would do it, is probably not the best way.
As indicated by the example test case, you have to find elements S to E of a list,
where S and E can be very large - an example given is 43455436 to 43455438.
The naive solution would of course require at least O(E) space and time, which is
probably unacceptable. However there is a better way which gives you, with some
preprocessing, more like O(M(E-S)log(E)) where M depends (probably quadratically
or thereabouts - I can't be bothered to work it out) on the input declaration.

Here is the preprocessing you do. Consider the declarations in the input of the form
A = x B
where x is a string. If B is similarly declared
B = y C
we can replace the declaration of A by
A = xy C

We continue doing this for each name until one of two things happens:
(1) We hit a name which is defined by a zip. Then we can replace the declaration of A
by
A = x zip (B,C)
for a string x and two names B,C.
(2) We try to expand a name which we have already expanded. By looking back we discover
that we can replace the declaration for that name by
B = y B
for some string y. Thus B is just yyyyyyyy... and A is xyyyyyyy... which we write
as
A = xy*.
Once we've done this preprocessing for all the names, we can do the test cases. This
can now be done fairly easily using a recursive procedure. Suppose we want to print
A from s to e. If A = xy* then we can easily do this however big s is, if necessary
reducing s modulo (length x) to work out where in y to start and then taking (e-s)
symbols. On the other hand, if A = x zip (B,C) then we can reduce the problem to
at worst two test cases on B and C, in which (e-s), e and s are at most half as big as
they were.
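A minimal Haskell sketch of this scheme, assuming the preprocessing has already been done (the data type, names, and 0-based indexing are my own choices, not part of the problem statement):

```haskell
import Data.List (genericIndex, genericLength)

-- After preprocessing, every name is in one of two canonical forms:
data Def
  = Rep String String    -- A = x y*        (finite prefix x, then y forever)
  | Zp  String Char Char -- A = x zip(B,C)  (prefix x, then B, C interleaved)

type Env = [(Char, Def)]

-- Character at position i (0-based) of the stream named a.
-- Each Zp step roughly halves the index, giving O(log i) per character.
index :: Env -> Char -> Integer -> Char
index env a i = case lookup a env of
  Just (Rep x y)
    | i < genericLength x -> x `genericIndex` i
    | otherwise ->
        y `genericIndex` ((i - genericLength x) `mod` genericLength y)
  Just (Zp x b c)
    | i < genericLength x -> x `genericIndex` i
    | otherwise ->
        let j = i - genericLength x
        in if even j then index env b (j `div` 2)
                     else index env c (j `div` 2)
  Nothing -> error ("undefined name: " ++ [a])

-- Elements s..e inclusive, never materialising anything before s.
slice :: Env -> Char -> Integer -> Integer -> String
slice env a s e = [ index env a i | i <- [s .. e] ]
```

Even an index like 43455436 is handled in a few recursive steps, since the `mod` in the `Rep` case and the halving in the `Zp` case never walk the list.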

Wolfhard Buß

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
Andreas Rossberg <ross...@ps.uni-sb.de> writes:

> the end it turned out to be a completely trivial error that a type
> checker would have caught immediately. BTW, the language is dynamically
> typed (not untyped)

Are you talking about Oz?

-wb

Russell Wallace

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
George Russell wrote:
>
> "Albert Y. C. Lai" wrote:
> >
> This is indeed an interesting problem. However the naive solution of simulating
> lazy evaluation as a functional language would do it is probably not the best way.

Yep, good analysis. Looks like a fun problem! I might have a go at it
at some point when I've the energy to spare.

Andreas Rossberg

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to

Yes, although the bug was in no way specific to Oz; it could have
happened similarly in any dynamically typed language with pattern
matching.

Markus Mottl

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
James Hague <jam...@volition-inc.com> wrote:
> I must be somewhat backward in that I prefer thinking about types in a
> very loose way until I have slogged through the problem at hand. Then
> locking down those types makes sense. Sometimes it does, anyway. And
> a separate consistency checking program--using after-the-fact type
> annotations--would be just as good as having it integral to a
> language.

It is surely also a matter of practice: people who are used to
dynamically typed languages will find it annoying that the compiler moans
about each "small" type error. It is not difficult to configure your
editor in such a way that you jump through each error and either correct
it (if it is a bug) or comment out the offending part and replace it with a
call that raises an exception (using an editor macro) if you just want the
module to compile.

Btw., I think this suggestion was already raised once: why don't compiler
implementors provide a flag that automatically replaces every expression
(the smallest ill-typed subexpression) that leads to a type error with an
exception reporting the line number, demoting the error to a warning
during compilation?

People who prefer writing up a lot first and want to run some parts
(possibly triggering exceptions) before cleaning up their work will surely
appreciate this. And in the "cleaning stage" they could turn off this flag
and get accurate error messages from the compiler.

This feature does not seem to be very difficult to implement.
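A hand-written sketch of what such a flag might effectively produce (the module name and line number are hypothetical; this is illustration, not actual compiler output):

```haskell
-- Suppose the programmer wrote this ill-typed expression:
--   report = "total: " ++ 42
-- Under the proposed flag, the compiler would accept the module,
-- emit a warning, and substitute an exception for the smallest
-- ill-typed subexpression:
report :: String
report = "total: " ++ error "type error at Report.hs:12 (hypothetical)"

-- The rest of the module still compiles and runs normally:
greet :: String
greet = "hello"
```

Well-typed parts run untouched; the exception only fires if the offending expression is actually forced.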

> Neither system is actually *proving* anything, other than that the types
> are consistent, of course.

Well, type checking *is* indeed related to proofs... ;)

Btw., there are also advanced systems for program analysis that can
automatically verify fairly complex properties of programs. Though, one
shouldn't (yet) expect too much...

> I don't think there's any debate over the usefulness of types or
> type-checking. It's just that someone obsessed with the idea that his
> nose is too big is going to look in the mirror and see a BIG NOSE with
> a small face around it. Similarly, there's a tendency in FP research
> to view languages as executable type systems. That's one view, but
> not necessarily the only one, or one that determines the usefulness of
> a programming language.

It is interesting that there are often flame wars on this topic, but very
seldom suggestions (like the one above) that try to address shortcomings of
implementations in this respect.

One could work from both ends: try to get some features for static
verification into dynamic languages, and provide ones that make static
languages less "strict" for testing purposes.

Markus Mottl

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
James Hague <jam...@volition-inc.com> wrote:
> You're taking too pointed a look at the topic. We're not talking
> about newbies here, but people with lots of experience programming in
> a variety of language types. People in that position can say "I think
> static typing is a useful tool, but I don't agree that it is the
> cornerstone of FP that I keep being told it is."

I haven't said that there aren't people who know both sides and prefer one
of them: just that many people find it annoying to have a machine
interrupting their workflow. Some people never get used to a specific
style of programming, which does not mean that they don't know how to do
it.

>>Well, type checking *is* indeed related to proofs... ;)

> It provides a consistency check, it's a guarantee that certain
> things will not occur, but it is not a proof, or even the bulk of a
> proof.

I was rather referring to the relation of proofs and types (Curry-Howard
isomorphism).
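Two tiny Haskell terms illustrate the correspondence: read through Curry-Howard, each well-typed definition below is a proof of the proposition expressed by its type.

```haskell
-- Types are propositions, programs are proofs.  A term of the type
-- below proves the tautology A -> (A -> B) -> B (modus ponens):
mp :: a -> (a -> b) -> b
mp x f = f x

-- Conjunction corresponds to pairing; this term proves
-- A /\ B -> B /\ A (commutativity of conjunction):
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
```

The type checker accepting these definitions is exactly what checking the corresponding proofs amounts to.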

> At the moment, proofs of *any* code are rare to non-existent.

We had to do quite a lot of inductive proofs here (which is what most
correctness proofs boil down to). Especially the research group for
mathematical reasoning is very active in this field and has developed some
nice techniques that guide the proof process. So there is room for
hope... ;)

> It's pretty easy to slip into Dijkstrian mode and be shocked that no
> one derives programs mathematically, but it's hard to get anywhere as
> a programmer if you're that idealistic :)

Well, I'd say it is hard to get anywhere as a programmer if you're not a
Dijkstra ;)

The head of our division (Alan Bundy) once told us an anecdote about this:
he presented a proof (termination, soundness, etc.) of one of the
techniques mentioned above in front of an audience. He was terribly
frightened, because he knew that Dijkstra, who is known for posing very
nasty questions, was listening, too. After his talk, Bundy asked the
audience for questions and - of course - Dijkstra showed up. Bundy, in
great terror, thought "Now it comes...". And Dijkstra said: "Mr. Bundy,
you have omitted a parenthesis in this expression..."

Marcin 'Qrczak' Kowalczyk

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
21 Aug 2000 17:57:16 GMT, Fergus Henderson <f...@cs.mu.oz.au> writes:

> Whether it be ML, Haskell, or Mercury, the proportion of errors

> that make it past the compiler seems to be quite small, so once
> the compiler does accept a program, it will often "just work".

This is my impression too... except for a stupid case a few days ago, when
I discovered that my previous excitement about the compiler spotting
all errors was *not* justified.

| > > instance (ForeignArg fa ha, Call ff hf) => Call (fa -> ff) (ha -> hf) where
| > > callIO f = callIO . convertArg (\fa -> f >>= return . ($fa))
| >
| > BTW. I have not tested if it works nor thought how it would execute,
| > but I wrote down types of things to use and the resulting type to get,
| > and combined them in about the only type-correct way that does not
| > use bottoms.
| >
| > This is what I love in static typing: there are cases where type
| > correctness practically guarantees obtaining what was needed.
|
| ___ __
| /\ | ) / \ | | |
| /__\ |-< ( __ |---| |
| / \ | \ \__/ | | .
|
| How stupid! I finally did some tests. It does *not* do what was needed.
|
| It executes things in the wrong order. For example when a C function is
| applied to a string, it allocates memory, makes a C string, constructs
| an IO action that would call the C function, frees the string, and
| finally executes the action :-) :-(
|
| How could I present this as an example where "compiles" implies
| "it just has to work"?!
|
| [...]

--
__("< Marcin Kowalczyk * qrc...@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK

David A. Wagner

unread,
Aug 24, 2000, 3:00:00 AM8/24/00
to
George Russell <g...@informatik.uni-bremen.de> wrote:
> "Albert Y. C. Lai" wrote:
> > http://acm.gui.uva.es/problemset/v6/606.html

>
> This is indeed an interesting problem. However the naive solution
> of simulating lazy evaluation as a functional language would do it
> is probably not the best way. As indicated by the example test case

> you have to find elements S to E of a list, where S and E can be very
> large - an example given is 43455436 to 43455438. The naive solution
> would of course require at least O(E) space and time, which is probably
> unacceptable. However there is a better way which gives you, with some
> preprocessing, more like O(M(E-S)log(E)) where M depends (probably
> quadratically or thereabouts - I can't be bothered to work it out)
> on the input declaration.

Clever.

I think the running time is better than you claimed. The precomputation
should take only linear time, if done right---after all, it's just
a depth-first search---I think. (I believe it can actually be done
on-the-fly using memoization.)

Moreover, each subsequent test should take just O((E-S) log E) time,
if I am not mistaken. It is certainly true if you're looking for
elements S..E of A, where A = xy*. For the case A = x zip(B,C), clearly
finding elements S'..E' of B or C takes just t = O((E'-S') log E') time
(by induction hypothesis) so the total time for processing A is O(2t + (E-S)).
But S' = (S-|x|)/2 and E' = (E-|x|)/2, so t = O((E-S)/2 log (E/2)) and
2t + (E-S) = O((E-S) log E), as claimed.

Am I overlooking something?

By the way, are these tricks actually used in implementations of lazy
functional languages? (perhaps with memoization and invalidation to
do this stuff on the fly for read-mostly data, or somesuch)

Lars Lundgren

unread,
Aug 25, 2000, 3:00:00 AM8/25/00
to
On Thu, 24 Aug 2000, James Hague wrote:

> >Well, type checking *is* indeed related to proofs... ;)
>
> It provides a consistency check, it's a guarantee that certain
> things will not occur, but it is not a proof, or even the bulk of a
> proof.

But it *is* a proof. If the type checker finds your code type correct, then
it has *proved* that all properties captured by the type system indeed
hold.

(Of course, it doesn't claim anything about important properties that have
not been captured by the types.)

Given a type system expressive enough, you can express all sorts of
properties in the types.


(Just read your sentence above; "a guarantee that certain things will not
occur" - sounds like a sort of proof to me ;)

/Lars L

Marcin 'Qrczak' Kowalczyk

unread,
Aug 25, 2000, 3:00:00 AM8/25/00
to
Wed, 23 Aug 2000 12:35:34 +0200, George Russell <g...@informatik.uni-bremen.de> writes:

> Where SML/NJ wins
> (1) the interactive mode. (Haskellers may want to remind me of Hugs but
> for me Hugs is almost useless because (a) Hugs doesn't do everything GHC does
> so won't compile my code; (b) you can't define new functions on the Hugs command
> line.)

I agree. There are plans to build an interpreter as part of GHC,
so this should change in the future.

> (2) the library documentation. If I want to find out all the functions operating on
> say Lists, I can get a web page with for each function (1) the name; (2) the
> type; (3) a brief description in English of what the function does. For Haskell
> you need to look in several places and you usually are expected to decode the
> standard-writer's Haskell to work out what the function does, and whether it might
> be useful.

Instead of documentation I just look at the source; it works for sets
of small independent functions like those in module List.

George Russell

unread,
Aug 25, 2000, 3:00:00 AM8/25/00
to
Marcin 'Qrczak' Kowalczyk wrote:
> Instead of documentation I just look at the source; it works for sets
> of small independent functions like those in module List.
I often do this too. But for Glasgow Haskell I often have to do a grep
to find out where the source of a function actually is, because (for many
of the library modules) it turns out that the source is really tucked
away in one of the Prelude modules to avoid duplication. In any
case I find it easier to read documentation than source.
Compare the following:

GHC source file:
find :: (a -> Bool) -> [a] -> Maybe a
find p = listToMaybe . filter p
SML library documentation:
find f l
    applies f to each element x of the list l, from left to right, until
    f x evaluates to true; returns SOME x if such an x exists, otherwise
    NONE.

Personally I prefer the SML library documentation definition, perhaps because
I was brought up speaking English and not Haskell. Of course "find" is easy
to understand anyway; I chose a particularly simple function from a simple
module.

As a matter of fact, "find" is actually defined in English in the Haskell
Library Standard, but it is harder to find there than for SML. One possible
problem is that the Haskell standard library seems to contain rather too
many functions which I never use, such as nub, inits and all the zips and
zipWiths. (I appreciate that millions are now going to crawl out of the
woodwork to log into Usenet and tell comp.lang.functional that they
consider nub, inits and the zips absolutely indispensable and use them all
the time...)

jt

unread,
Aug 25, 2000, 3:00:00 AM8/25/00
to

Markus Mottl wrote:

>
> Btw., there are also advanced systems for program analysis that can
> automatically verify fairly complex properties of programs. Though, one
> shouldn't (yet) expect too much...
>

Could you give a few pointers?

appreciated

jt

Markus Mottl

unread,
Aug 25, 2000, 3:00:00 AM8/25/00
to
jt <gk...@dial.pipex.com> wrote:
>> Btw., there are also advanced systems for program analysis that can
>> automatically verify fairly complex properties of programs. Though, one
>> shouldn't (yet) expect too much...
>>

> Could you give a few pointers?

Of course. Here are a few links to local research projects + software:

http://dream.dai.ed.ac.uk/software/systems/index.html

The theorem provers are biased towards program verification, program
transformation and synthesis. Although the systems can automatically verify
pretty complex properties of programs, it is still not wise to use them
without human guidance... (automatic proof planning is a major topic
here).

Fergus Henderson

unread,
Aug 26, 2000, 1:07:29 AM8/26/00
to
qrc...@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

>21 Aug 2000 17:57:16 GMT, Fergus Henderson <f...@cs.mu.oz.au> writes:
>
>> Whether it be ML, Haskell, or Mercury, the proportion of errors
>> that make it past the compiler seems to be quite small, so once
>> the compiler does accept a program, it will often "just work".
>
>This is my impression too... Except a stupid case a few days ago where
>I discovered that my previous excitement about the compiler spotting
>all errors was *not* justified.

...


>| It executes things in the wrong order. For example when a C function is
>| applied to a string, it allocates memory, makes a C string, constructs
>| an IO action that would call the C function, frees the string, and
>| finally executes the action :-) :-(

Well, if you start doing manual memory management, as you were
doing in this case, that will certainly increase the number of
errors that slip past the compiler. But when you're programming in
languages like ML, Haskell or Mercury, it's very rare that you need
to resort to manual memory management. In this case, the need for
manual memory management arose only because you were interfacing with C.

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
WWW: <http://www.cs.mu.oz.au/~fjh> | of excellence is a lethal habit"
PGP: finger f...@128.250.37.3 | -- the last words of T. S. Garp.

Lars Lundgren

unread,
Aug 28, 2000, 3:00:00 AM8/28/00
to
On Fri, 25 Aug 2000, James Hague wrote:

> On Fri, 25 Aug 2000 09:12:33 +0200, Lars Lundgren
> <d95...@dtek.chalmers.se> wrote:
> >
> >(Just read your sentence above; "a guarantee that certain things will not
> >occur" - sounds like a sort of proof to me ;)
>

> Picky, picky :)
>
> When the term "proof" is used in reference to a program, it is almost
> always short for "proof of correctness."

No, I do not think so. The term "proof" does not have an implicit "of full
correctness" attached to it. The term "proof" is often used for proofs of
certain aspects of programs. The term "*full* correctness" is not always
formalised.

> That implies more than just types.
>

It is true that it is very seldom that all requirements of a program
can be expressed in the types. But it is not an all-or-nothing scenario.
It is valuable to be able to prove that the program will not
segfault, even if you cannot prove that it will terminate within a
certain amount of time on a certain machine. And it is correct to say that
you have a "proof" that the program is segfault-free even if other aspects
of the program have not been proved.

"just types" can encode just about any property of a program, given a type
system expressive enough.
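For instance, here is a small Haskell sketch (my own construction) that moves the property "this list is non-empty" into a type, so the corresponding runtime failure becomes impossible:

```haskell
-- A list that is non-empty by construction: the type guarantees
-- at least one element, so 'safeHead' is total and cannot crash.
data NonEmpty a = a :| [a]

safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- An ordinary head needs a runtime check (or may crash); here the
-- check has been discharged once and for all by the type checker.
example :: Int
example = safeHead (1 :| [2, 3])
```

Even this modest type system proves a genuine safety property: no caller of `safeHead` can trigger a head-of-empty-list error.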

/Lars L

Peter Hancock

unread,
Aug 28, 2000, 3:00:00 AM8/28/00
to
>>>>> "James" == James Hague <jam...@volition-inc.com> writes:

> Hee-hee. That's one of those infamous lines, like "given a
> sufficiently smart compiler... " :)

Did you know that we have had type systems capable of expressing
just about any mathematical proposition since about 1972? When
programming language type-systems were still in nappies?

This makes it rather credible that they can express "just about any
property of a program". What is it that makes you giggle?

Peter

Ketil Z Malde

unread,
Aug 29, 2000, 2:18:04 AM8/29/00
to
jam...@volition-inc.com (James Hague) writes:

> Instead of writing programs, feel free to go ahead and use a
> sufficiently expressive type system to describe and implement your
> solutions.

I seem unable to derive any sensible meaning from that statement.

Don't you consider "describing and implementing solutions" "writing
programs"? (Or is it just a matter of 'Real Programmers' masochism
<http://www.pbm.com/~lindahl/real.programmers.html>?)

Lars Lundgren

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
On Mon, 28 Aug 2000, James Hague wrote:

> On 28 Aug 2000 21:06:20 +0100, Peter Hancock <p...@dcs.ed.ac.uk> wrote:
>
> > > Hee-hee. That's one of those infamous lines, like "given a
> > > sufficiently smart compiler... " :)
> >
> >Did you know that we have had type systems capable of expressing
> >just about any mathematical proposition since about 1972? When
> >programming language type-systems were still in nappies?
> >
> >This makes it rather credible that they can express "just about any
> >property of a program". What is it that makes you giggle?
>

> Nothing. Instead of writing programs, feel free to go ahead and use a


> sufficiently expressive type system to describe and implement your

> solutions. That's fine by me. I wish you well.
>

What is it with you? Do you need to have the last word or what? I can not
make sense of your last comment.


On Thu, 24 Aug 2000, James Hague wrote:

> Neither system is actually *proving* anything, other than
> that the types are consistent, of course.
>

As if that would put a serious limit on the things you can prove....

> Similarly, there's a tendency in FP research
> to view languages as executable type systems.

I often see a program as an encoding of a computation. One can reason
about the program and conclude that it has certain properties. If you want
to do automatic reasoning about such properties, it is good to have a
tailor-made language for them - the type system.

You cannot "execute a type system", but you can execute certain proofs.
You can view a program as a proof that the properties encoded in its type
hold, and running the program then amounts to showing the world that the
property holds. (That is, if you do not trust the proof-reader, I mean the
type checker ;)

> That's one view,
> but not necessarily the only one, or one that determines the usefulness
> of a programming language.

Agreed!

/Lars L

Ulf Wiger

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
>>>>> "Lars" == Lars Lundgren <d95...@dtek.chalmers.se> writes:

Lars> On Mon, 28 Aug 2000, James Hague wrote:
>> On 28 Aug 2000 21:06:20 +0100, Peter Hancock <p...@dcs.ed.ac.uk>
>> wrote:
>>
>> > > Hee-hee. That's one of those infamous lines, like "given a
>> > > sufficiently smart compiler... " :)

>> Did you know that we
>> have had type systems capable of expressing just about any
>> mathematical proposition since about 1972? When programming
>> language type-systems were still in nappies?
>> This makes it rather credible that they can express
>> "just about any >property
>> of a program". What is it that makes you giggle?
>>
>> Nothing. Instead of writing programs, feel free to go ahead and
>> use a sufficiently expressive type system to describe and
>> implement your solutions. That's fine by me. I wish you well.
>>

Lars> What is it with you? Do you need to have the last word or
Lars> what? I can not make sense of your last comment.

If I may... I would think that James is taking a stance perhaps more
common in industry than in academia: It doesn't matter (to a
commercial product developer at least) that there exists an all-
powerful type system if it isn't supported by a good programming
environment. You can't use a theoretical model to build a product;
there must first exist a production environment based on that theory.

The parallel with the "sufficiently smart compiler" is something
I can relate to. I've often heard an inefficiently executed
construct defended with phrases like "an optimizing compiler will
remove the overhead, and this is more elegant". While there is
some merit to the claim, it is useless for people who make their
living on product development, and can't wait two years for that
optimizing compiler.

I've had similar, rather absurd, discussions about program
verification. It doesn't matter (to me) that it can be done
in some environment, for some problems, if it isn't supported
by environments that also give me the productivity and leverage
I need to actually build my product.

Note that I'm not (nor is, I would think, James) implying that
it's not of interest to pursue new avenues, but it's important
to make a distinction between "possible now" and "available now".

(My apologies, James, if I've misunderstood you.)

/Uffe

PS An anecdote perhaps?
The function lists:map/2 was implemented in Erlang thus:

map([H|T], F) ->
    [F(H)|map(T, F)];
map([], F) ->
    [].

This was modified by someone (no names!) to look like this:

map(L, F) ->
    [F(X) || X <- L].

Great, but this construct was more than twice as slow as the first
one, given the rather inefficient implementation of list
comprehensions in Erlang at the time. The rationale was that the
new construct was more elegant. We on the product development side
caused enough commotion to reverse the change. A small demonstration
of the difference in mindset between academics and product developers.

PPS The person responsible for the above is now a multi-millionaire,
at least in stock options, from doing Erlang programming.
Congratulations! (:

http://www.alteonwebsystems.com/press/releases/2000/082800.asp
--
Ulf Wiger tfn: +46 8 719 81 95
Network Architecture & Product Strategies mob: +46 70 519 81 95
Ericsson Telecom AB, Datacom Networks and IP Services
Varuvägen 9, Älvsjö, S-126 25 Stockholm, Sweden

Lars Lundgren

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
On 29 Aug 2000, Ulf Wiger wrote:

> >>>>> "Lars" == Lars Lundgren <d95...@dtek.chalmers.se> writes:
>
> Lars> On Mon, 28 Aug 2000, James Hague wrote:
> >> On 28 Aug 2000 21:06:20 +0100, Peter Hancock <p...@dcs.ed.ac.uk>
> >> wrote:
> >>
> >> > > Hee-hee. That's one of those infamous lines, like "given a
> >> > > sufficiently smart compiler... " :)
>

> >> Nothing. Instead of writing programs, feel free to go ahead and
> >> use a sufficiently expressive type system to describe and
> >> implement your solutions. That's fine by me. I wish you well.
> >>
>
> Lars> What is it with you? Do you need to have the last word or
> Lars> what? I can not make sense of your last comment.
>
> If I may... I would think that James is taking a stance perhaps more
> common in industry than in academia: It doesn't matter (to a
> commercial product developer at least) that there exists an all-
> powerful type system if it isn't supported by a good programming
> environment. You can't use a theoretical model to build a product;
> there must first exist a production environment based on that theory.
>

I agree with the above. I do not think it is very controversial. But it
is controversial if he says that a type checker cannot prove anything,
simply because that is plain wrong.

<snip, more stuff claiming that James means something else than what he
writes>

> (My apologies, James, if I've misunderstood you.)
>

I try not to make that mistake, by responding to what people actually
write.

/Lars L


Peter Hancock

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
Speaking for myself, I am motivated by two connected things:

* to build things; I have worked in industry for many years,
on products.

* to understand, or get to the bottom of things.

Sometimes I care about one more than the other. If someone makes a
successful product, good luck to them; we don't all find the same
things rewarding, all the time.

There are regular spats on this newsgroup which get characterised
as "academics" vs. "developers", or something like that. They're
never particularly edifying. Any important human activity has a
theoretical and a practical side, and is doomed without both.

Peter


Ulf Wiger

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
>>>>> "Peter" == Peter Hancock <p...@dcs.ed.ac.uk> writes:

Peter> There are regular spats on this newsgroup which get
Peter> characterised as "academics" vs. "developers", or something
Peter> like that. They're never particularly edifying. Any
Peter> important human activity has a theoretical and a practical
Peter> side, and is doomed without both.

Sure. I would say that the Erlang experience is a good example of
theory and practice in a fruitful marriage.

Still, amazingly, you're often faced with rather extreme statements
and forced to take the extreme opposite stance.

The argument "this will become moot with an optimizing compiler" is
often useful, but of little consolation for people who are fighting
close deadlines and ever steeper requirements. On the other hand, if
you're writing an academic paper, it's probably counter-productive
to require your theories to be proven in commercial-grade apps
before you can publish. Most people I come in contact with in the
programming world face these decisions almost every day.

It's perfectly possible, and to some extent desirable, to have
one foot in each camp - as long as you don't get confused about
your priorities at any given time.

/Uffe

Robert Virding

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
In article <xczem38...@etxb.ericsson.se>,

Ulf Wiger <Ulf.Wiger.c...@ericsson.com> writes:
>PS An anecdote perhaps?
>The function lists:map/2 was implemented in Erlang thus:
>
>map([H|T], F) ->
> [F(H)|map(T, F)];
>map([], F) ->
> [].
>
>This was modified by someone (no names!) to look like this:
>
>map(L, F) ->
> [F(X) || X <- L].
>
>Great, but this construct was more than twice as slow as the first
>one, given the rather inefficient implementation of list
>comprehensions in Erlang at the time. The rationale was that the
>new construct was more elegant. We on the product development side
>caused enough commotion to reverse the change. A small demonstration
>of the difference in mindset between academics and product developers.
>
>PPS The person responsible for the above is now a multi-millionaire,
>at least in stock options, from doing Erlang programming.
>Congratulations! (:

Sigh! Ulf is talking about me. I'm the guilty one! The main reason
I did it was as a joke, like implementing filter with:

filter(F, List) -> [ E || E <- List, F(E) ].

Unfortunately not many people understood it. :-) More seriously, the
implementation didn't worry me much because:

1. Although it was slower, you usually DO something in the map, and that
would drown out the time taken by the map itself.

2. I knew that a better implementation of funs was coming so the
difference would disappear.

Even more seriously, I am a firm believer in Richard O'Keefe's maxim
"elegance is not optional". I think you should try to be elegant in
your code; it might cost a little, but in the long run it will be worth
it. Too many programmers, even good programmers, write inelegant and
unclear code with confused flow of control which is hard to read and
understand. I KNOW many problems ARE hard, but that is no reason not
to try to be elegant.

Also, newbie programmers tend to use libraries as sources of
inspiration when they code (or at least they should), so libraries are a
good place to promote good programming style.

Robert

--
Robert Virding Tel: +46 (0)8 692 22 12
Bluetail AB Email: r...@bluetail.com
Hantverkargatan 78 WWW: http://www.bluetail.com/~rv
SE-112 38 Stockholm, SWEDEN
"Folk säger att jag inte bryr mig om någonting, men det skiter jag i".

Ulf Wiger

unread,
Aug 29, 2000, 3:00:00 AM8/29/00
to
>>>>> "Lars" == Lars Lundgren <d95...@dtek.chalmers.se> writes:

Lars> On 29 Aug 2000, Ulf Wiger wrote:

>> If I may... I would think that James is taking a stance perhaps
>> more common in industry than in academia: It doesn't matter (to a
>> commercial product developer at least) that there exists an all-
>> powerful type system if it isn't supported by a good programming
>> environment. You can't use a theoretical model to build a
>> product; there must first exist a production environment based on
>> that theory.
>>

Lars> I agree with the above. I do not think it is very
Lars> controversial. But it is controversial if he says that a
Lars> typechecker cannot prove anything simply because it is plain
Lars> wrong.

Well, I guess I really should let James answer this, but having
re-read the posts, *my* impression is that the issue was that a
type checker can prove that a program is type-correct, but it can't
prove much beyond that. It's perfectly possible to write a type-
correct program that doesn't work. This has been debated before in
this forum, and I'm sure you don't disagree with that.

The other issue, as I read it, was that you cannot implement an
application in just a type system. You can build a programming
environment that enforces a particular type system, and then build
your application in that.

Why is this important? If you're trying to prove a point from a
theoretical perspective, it's trivial and beside the point; if you're
building products to sell, it usually *is* the point (in almost
any argument: if it's not available and supported, it's not
interesting.)

<BTW>
So why do you think that type theory has existed for so long
without making it into commercial programming languages?
One current reason could of course be inertia, but I'd venture to
guess that the main reasons were available computational capacity
and possibly that computer science is still rather immature, and
only lately has begun to cross-breed efficiently with other sciences.
</BTW>

Lars> <snip, more stuff claiming that James means something else
Lars> than what he writes>

See how differently we read things. (:


>> (My apologies, James, if I've misunderstood you.)

Lars> I try not to make that mistake by responding to what people
Lars> actually write.

Then please read my post as describing *my* thoughts on the issue -
not James's. I may have made the mistake of thinking that he and
I were in agreement - thus the apology.

Peter Hancock, Aug 29, 2000:
>>>>> "Ulf" == Ulf Wiger <Ulf.Wiger.c...@ericsson.com> writes:

> Well, I guess I really should let James answer this, but having
> re-read the posts, *my* impression is that the issue was that a
> type checker can prove that a program is type-correct, but it can't
> prove much beyond that. It's perfectly possible to write a type-
> correct program that doesn't work. This has been debated before in
> this forum, and I'm sure you don't disagree with that.

Just for information, there "are", in the mathematician's sense of
"are" (not, perhaps the Network Architect & Product Strategist's
sense) type systems which are capable of expressing just about any
mathematical proposition. A program with that type is a constructive
proof of the proposition, that can in principle be run. To the extent
that a program's specification can be expressed as a mathematical
proposition, type checking a program in such a type system gives full
assurance of its correctness. This is not a matter for debate; it has
been known for about 3 decades. These type systems have actually been
implemented, and I have written programs using such systems. It is
essentially the same as writing machine checked proofs.
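A small propositions-as-types sketch (written here in modern Lean syntax, which postdates this thread; the definition name is invented): the type is a logical proposition, and a program of that type is a machine-checked proof of it.

```lean
-- The type is the proposition "A ∧ B → B ∧ A"; the function body is a
-- constructive proof of it, verified by the type checker.
def andComm (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```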

I hope I have written the above carefully enough so that you will not
leap to the conclusion that I am claiming that this is foreseeably a
reasonable system to adopt for writing commercial programs. There are
a great many extremely hard research issues, let alone implementation
and sociological problems to solve before one would dream of
suggesting such a thing. And of course, a mathematical specification
is not such an easy thing to write; and a mathematically correct
program is not necessarily something that one would want to run.

Of course the kind of type-systems that we have in Haskell, ML, etc
are incapable of expressing program specifications. It's perfectly
possible to write a type-correct program in such a system which
doesn't work.

I guess it comes down to what you mean by "type-system".

Peter

Lars Lundgren, Aug 30, 2000:
On 29 Aug 2000, Ulf Wiger wrote:

>
> Well, I guess I really should let James answer this, but having
> re-read the posts, *my* impression is that the issue was that a
> type checker can prove that a program is type-correct, but it can't
> prove much beyond that. It's perfectly possible to write a type-
> correct program that doesn't work. This has been debated before in
> this forum, and I'm sure you don't disagree with that.
>

> The other issue, as I read it, was that you cannot implement an
> application in just a type system. You can build a programming
> environment that enforces a particular type system, and then build
> your application in that.
>
> Why is this important? If you're trying to prove a point from a
> theoretical perspective, it's trivial and beside the point; if you're
> building products to sell, it usually *is* the point (in almost
> any argument: if it's not available and supported, it's not
> intetersting.)
>

I'm just trying to act as a counterbalance to James's statements
marginalizing the possible benefits of static typing. What you say above
is much better balanced.

> <BTW>
> So why do you think that type theory has existed for so long
> without making it into commercial programming languages?
> One current reason could of course be inertia, but I'd venture to
> guess that the main reasons were available computational capacity
> and possibly that computer science is still rather immature, and
> only lately has begun to cross-breed efficiently with other sciences.
> </BTW>
>

And of course, that there are very hard problems to deal with
before one can combine a commercial programming language with type
theory. The solutions also have to be "decidable in practice".

I do not think there are that many technical problems stopping us from
designing a commercial programming language with a very expressive,
strong static type system. I guess many will claim that such languages
already exist. I think Haskell is very promising, and that its main
drawbacks are:

- Small set of libraries
- Hard to reason about/predict/hand optimise performance.
- A high threshold into an otherwise steep learning curve (at least for
hard-core C hackers)

I think Haskell is excellent as a pure functional language, and quite good
for imperative programming as well.

> Then please read my post as describing *my* thoughts on the issue -
> not James's. I may have made the mistake of thinking that he and
> I were in agreement - thus the apology.
>

Ok, no hard feelings ;)

/Lars L


Markus Mottl, Aug 30, 2000:
James Hague <jam...@volition-inc.com> wrote:
> There's a frustrating tendency here to go too far, for lack of a better
> phrase. Static typing is a useful tool, but it is routinely being
> represented as much more than it is. I've written OCaml programs that
> worked perfectly on the first try--something fairly rare in C--so I
> understand the enthusiasm. Isn't it great to have a tool that works?
> But this often goes off in the wrong direction:

> A: "When a program is accepted by the type system, it is correct!"

I have never read this, neither in this thread nor in any other thread in
c.l.f. (as a joke at best). This wouldn't be a "hard core" opinion, but
just nonsense. Nobody ever seriously said this.

> B: "But you can easily get parameters reversed or make logic errors
> and so on."

Yes, this may be the case - or not, depending on your point of view. If
you make, for example, full use of the label calculus in OCaml, you could
indeed eliminate the first problem under one interpretation. This does not
mean that it is *convenient* to *always* write code like this. But it is
not difficult either. The other interpretation is that reversing
parameters actually is a logic error: even labels won't help you if you
say "Give me 10 apples and 5 oranges!" although you wanted to say "Give me
5 oranges and 10 apples!". Labelling the things you wanted (apples,
oranges) is not sufficient. I'd say, this feature of the type system
mostly helps eliminate psychological weaknesses: it's confusing for us to
know what is meant if somebody says "Give me 5 and 10!".

Logic errors often indicate that the programmer has not understood the
problem. No type system will ever prevent programmers from implementing
the right solution for the wrong problem. This is not an argument against
strong type systems.

> A: "Those errors are very rare compared to type errors."

Who knows? - I often see people take one or the other position without any
statistics. I have no idea whether this is true or false in general. But I
definitely know from my own work that most errors (quantitatively
speaking) are syntactic (typos) and type errors. I agree that the rest of
the bugs are often more difficult to find or correct. But I am very
grateful that I never have to worry about the more "trivial" sources of
errors.

> B: "Surely you aren't trying to equate consistency of types with a
> proof of correctness?"

B: "Surely you aren't trying to equate syntactic correctness with a proof
of correctness?" ;)

Would you want to get rid of the syntax checker? (Hm, I agree that this
would probably cause more problems for the compiler than for the human...)

> A: "Given an expressive enough type system, yes!"

I guess that most people here know that type systems as they are
implemented in "mainstream FPLs" do not have enough expressive power to
nail down every semantic property in the type system (though, one can get
surprisingly far). It is probably too tedious for humans to go to such an
extreme. If you feel super-human, you might try Cayenne, which (to my
knowledge) can do this.

> This has drifted from what is currently available--the type
> inferencing system used in languages such as ML--to something that
> doesn't exist.

Ahem, but such things *do* exist! But you wouldn't want to write a
web-server in them! That's why they are not so widespread... (and yes, I
also wouldn't use them for my kinds of problems).

> There are big wins to be had from functional programming, as anyone at
> Bluetail can attest, but there's too much shooting down of anything
> that doesn't go all the way.

Why do you think so? I really like Haskell, but for practical work I
prefer OCaml: I just find it easier to solve my problems in a strict and
impure language. This is probably true for many (most?) people at c.l.f.

> If I use Erlang to solve a problem, I get called another product of
> programmer machismo, because I'm using oh-so-scary and dangerous dynamic
> typing.

No, I have never read anybody say so. Erlang is an impressive
language/development environment - but tastes differ as do problems.

> Working on big projects like that is also a good way to realize how
> far away proper proofs of correctness are, or at least how unrealistic
> they are in practice.

I agree that they (at least currently) are. You should, however, not
forget that type safety is in almost all cases a premise of correctness -
not a sufficient requirement, but a necessary one.

> It is much better to have tools that encourage general safety (e.g. gc,
> bounds checking, etc.), quick testing, and a flexible style of
> programming.

Well possible that this is your experience so far. Does this say anything
*against* static typing? I don't see this. E.g. OCaml, which you seem
to have tried, also has GC, bounds checking, quick testing (interpreter,
debugger, very fast compilers) and (IMHO) allows a "flexible style" of
programming (whatever this may be). On the other hand, Erlang offers you
features that make certain difficult tasks very easy to solve (distributed
processing, etc.). This makes Erlang a good choice for some projects. I
would still prefer writing my machine learning systems in OCaml, one of
the reasons being its static type system.

> Static type inferencing systems that currently exist can fit perfectly
> into that model. But we don't know much beyond that. Will there be
> value in having to formally describe programs using more elaborate type
> systems?

A large effort has gone into developing type systems that have a very low
"work" impact on the programmer. If you take core-ML, there are never
cases where you have to use type annotations to "help" the compiler (not
so for Haskell). And type declarations usually just make up ~5% of the
code (I have measured this on my sources), and a responsible software
developer would also mention constructors within comments in dynamic
languages, anyway.
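For instance (a tiny illustration, not from the original post), core ML infers the most general type with no annotations anywhere:

```ocaml
(* No annotations: the compiler infers
   ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b for compose. *)
let compose f g x = f (g x)

let () =
  let add1 n = n + 1 in
  let double n = n * 2 in
  (* compose add1 double 5 = add1 (double 5) = 11 *)
  assert (compose add1 double 5 = 11)
```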

Nick Williams, Aug 30, 2000:
In article <8ojstj$3ca$1...@bird.wu-wien.ac.at>,
Markus Mottl <mo...@miss.wu-wien.ac.at> wrote:

>If you take core-ML, there are never cases where you have to
>use type annotations to "help" the compiler (not so for
>Haskell)

If you're referring to what I think you're referring to, that's
not exactly a fair comparison, as such types simply cannot be
declared in ML at all :-)

Cheers,

Nick.

--

[ Nick Williams ]
[ MSc Computation Mobile - 07957-138179 ]
[ New College, Oxford http://www.new.ox.ac.uk/~nick/ ]

Robert Virding, Aug 31, 2000:
In article <39ae354c....@nospam.usenet.volition.net>,

jam...@volition-inc.com (James Hague) writes:
>There's a frustrating tendency here to go too far, for lack of a better
>phrase. Static typing is a useful tool, but it is routinely being
>represented as much more than it is. I've written OCaml programs that
>worked perfectly on the first try--something fairly rare in C--so I
>understand the enthusiasm. Isn't it great to have a tool that works?
>But this often goes off in the wrong direction:
>
>A: "When a program is accepted by the type system, it is correct!"
>B: "But you can easily get parameters reversed or make logic errors
>and so on."
>A: "Those errors are very rare compared to type errors."
>B: "Surely you aren't trying to equate consistency of types with a
>proof of correctness?"
>A: "Given an expressive enough type system, yes!"

Unfortunately, contrary to another comment, I have seen all of A's
statements given more or less seriously. It is unfortunate because it is
really trivial to find cases which break these rules (especially the
first one), and I think it gives strong typing a bad name. I mean, if its
proponents have so little contact with reality, then how can you
believe what they are saying?

Even though I am strongly entrenched in the dynamic typing camp, and
always have been, I still see a (strong) type checker as a useful tool
which would catch some errors. Having a good test environment is
still much more important.

Personally I have found that in the type of applications I program
type errors are relatively rare and usually the first ones you catch
when checking.

>There are big wins to be had from functional programming, as anyone at
>Bluetail can attest, but there's too much shooting down of anything
>that doesn't go all the way. If I use Erlang to solve a problem, I
>get called another product of programmer machismo, because I'm using
>oh-so-scary and dangerous dynamic typing. But if you look at it the
>other way, I'm using something that's several times safer, more
>stable, and more productive than what's commonly used in 98% of all
>commercial programming jobs. Maybe it's because I spend my days
>buried in C++ and assembly language that makes me happy to
>occasionally use a tool that lets me focus on problem solving rather
>than my ulcer-to-be?

Actually using Erlang is definitely *NOT* programmer machismo! It is
so much easier that it is really the hard-core C++ programmers who I
admire, and sometimes wonder if they really know what they are
doing. :-)

We use Erlang because it is a nice, relatively small language with a
good set of libraries which is practical for the type of applications we
do. As we are not overly dogmatic, parts of our applications are written
in C, as that happens to be the most practical in those cases. It all
boils down to having and using the best set of tools available.

>Working on big projects like that is also a good way to realize how
>far away proper proofs of correctness are, or at least how unrealistic
>they are in practice. It is much better to have tools that encourage
>general safety (e.g. gc, bounds checking, etc.), quick testing, and a
>flexible style of programming. Static type inferencing systems that
>currently exist can fit perfectly into that model. But we don't know
>much beyond that. Will there be value in having to formally describe
>programs using more elaborate type systems? It's hard to say, because
>today there's no way to see how it works in a real project, because
>there aren't any such languages available. Dijkstra has refined his
>calculus of programs for over twenty-five years now, and it's
>interesting stuff; I have several books on the subject. But it is a
>complete non-issue if you want to write code (Dijkstra does not, so
>there's no problem there :).

Too true, this has been our point of view for years and nothing has
really happened to change this.

Markus Mottl, Aug 31, 2000, 8:45:12 AM:
Nick Williams <ni...@thelonious.new.ox.ac.uk> wrote:
>>If you take core-ML, there are never cases where you have to
>>use type annotations to "help" the compiler (not so for
>>Haskell)

> If you're referring to what I think you're referring to, that's
> not exactly a fair comparison, as such types simply cannot be
> declared in ML at all :-)

I am not sure whether you mean polymorphic recursion - if so, no, I was
rather referring to resolution of overloading. This is probably also not a
fair comparison, because overloading gives you expressive power that ML
does not have. Anyway, in such cases the type system interferes with your
code: you *must* use type annotations to resolve them, which may already
be unacceptable for hard-core dynamic typists... ;)

Markus Mottl, Aug 31, 2000, 9:23:24 AM:
James Hague <jam...@volition-inc.com> wrote:
> (To head off criticism here, I mean that there's much more to building
> production software than type systems. So much more, that type
> systems are a very minor issue in the overall scheme of things.)

Yes, I agree that there are other issues that are important for software
engineering, possibly even more than static typing for many "practical"
tasks. But all these issues are, IMHO, pretty orthogonal to static
typing. They range from library support to tool support, something which
you can improve in all languages. I agree, too, that the focus of academic
researchers is just different in this respect: hey, they are not supposed
to do the job of companies, they are not paid for this. They are expected
to develop new methodologies, not new tools! More cooperation between the
academic and the private sector would be really beneficial here...

>>Ahem, but such things *do* exist! But you wouldn't want to write a
>>web-server in them! That's why they are not so widespread...
>>(and yes, I also wouldn't use them for my kinds of problems).

> Then, in all seriousness, they don't exist. They are not reasonable
> choices for solving problems, though they may make for interesting
> research.

You are not making your problems "reasonable" and the problems of academic
people "unreasonable", are you? ;)

Or do you think that anybody would work on any theory that is completely
useless for solving "their" problems? Academics, too, hate wasting their
time...

> The entire point here is that you have to differentiate
> between what is theory and what can be put into practice. I'll admit
> we're kind of a mixed bunch in this group, you know?

Just recently there was the so-called "Alpbach Technology Forum", in which
researchers and politicians, mostly from the German speaking countries,
discussed the future developments of technology: one of them, Anton
Zeilinger, the first physicist to demonstrate "quantum teleportation"
experimentally, gave a good example for a case of "research", where the
technological revolution was not obvious: in the last century the
physicist Hertz applied for funding to prove the existence of "radio
waves". He indeed got it, but the government officials said "though it
will very surely not lead to any useful application" - well, the rest is
history...

The conclusion that Zeilinger draws: research (and its funding) should not
be guided by immediate applicability but solely by the quality of the
research.

In short: let the "practitioners" do their job and the "theorists" theirs.
Most criticism I read against more "theoretically founded" languages is
"You are not solving our problems." - so what? That's *your* job. I don't
expect any of the Haskell designers to sit down and write fully fledged
XML-libraries so that you can have them for free and make money with
them... ;)

Markus Mottl, Aug 31, 2000, 12:57:29 PM:
James Hague <jam...@volition-inc.com> wrote:

> Markus Mottl <mo...@miss.wu-wien.ac.at> wrote:
>>In short: let the "practitioners" do their job and the "theorists" theirs.
>>Most criticism I read against more "theoretically founded" languages is
>>"You are not solving our problems." - so what? That's *your* job.

> Exactly right. There just needs to be the realization that some
> theories aren't suitable for practice yet,

I can agree that some new theoretical developments need more work
before they can help you solve your practical problems, but...

> or at least you can't
> criticize someone for not making use of, say, languages with algebraic
> data types, when there aren't any reasonable language implementations
> available.

... saying that there is no "reasonable language implementation" that has
"algebraic data types" goes quite a bit too far for me: algebraic
datatypes are *easy* and very well understood (and have been for a very
long time). I am sure that everybody who manages high-school maths can
comprehend pairs and sum types and how to define them recursively within a
single lecture. What is so theoretic about this? And as for
implementations: I'd find it strange if somebody called implementations
like e.g. OCaml "not reasonable"...

> This all comes back to the "Why Aren't More People Using FP?"
> discussion. The bottom line, I think, is that most of the
> disappointment is from people who don't understand why experimental
> language X isn't being relied upon by multi-million dollar
> corporations.

You don't think NASA is using tools/languages for formal specification
of some software components when planning missions? I understand that the
typical web-design company (even if it has a million budget) does not have
such requirements and for them it may be "good enough" if they find people
who manage "not so theoretic" languages.

And what, for example, is "experimental" about e.g. SML? There is a
standard for this language (and there has always been one - of course, it
got revised, but not more often than industrial languages). Is e.g. Java
less "experimental" than SML? Or Perl? - Come on, all these languages and
their implementations mutate at much faster rates and not necessarily to
the well-being of their users!

One reason why FPLs have not caught on a few years ago was most probably
speed (not "experimentality"). They failed to enter some niches early
enough (e.g. scripting, web-applications - for obvious reasons: not an
academically interesting field), and now it is difficult to fight the
network effect which made other (less developed) languages so popular...

> The simple answer is because "You'd be crazy to stake a multi-million
> dollar corporation on X and its available implementations." That is
> hard to understand for many people.

The only reason why it might be "less crazy" to use Java instead of FPLs
is that the first is backed up by a big company. Or does Java have any
technical features (besides the amount of available libraries and
development tools) that make it superior to advanced FPL implementations?
- I can't think of any.

> But that doesn't make the research any less valuable (although,
> personally, I think there's too much of an obsession with type systems
> and language design in academia, and that there are more interesting
> problems out there).

"Out there" - right! That hits it! Show us the problems that academics
should tackle rather than companies on the market. What are those problems
you want to have solved? More libraries? IDEs? - Why don't companies work
together with research institutions and do this work? That's the job that
practitioners are likely to get right, not so much academics.

Markus Mottl, Aug 31, 2000, 5:56:00 PM:
James Hague <jam...@volition-inc.com> wrote:
> Bad terminology on my part. I was referring to type systems beyond
> that of Haskell, specifically what is used by Cayenne. I should
> have said "dependent types." I'm not insulting OCaml or ML, don't
> worry :)

Ok, you mentioned "algebraic data types", which would indeed not be a
justified example of "too academic" concepts...

> There *are* more interesting problems than language design and
> developing type systems! Those are fine problems, yes, but you can
> learn a tremendous amount by picking an application area and looking
> at elegant ways of approaching it. Prototyping is tremendously useful
> and educational.

You are asking for academics in the field of software engineering then: I
have to disappoint you, many of them have never heard of functional
languages. Yes, this seems to be the general case, not a special one: when
asked, most "software engineers" would classify "C" as a functional
language... :(

Unfortunately, many "practically" oriented academics rather consider what
is happening in "practice" (in industry) instead of trying to bring new
theoretic knowledge to maturity in practical fields. I consider this a
gross misorganisation in the academic sector. If we do not even manage to
enforce enough contact between the "engineers" and the "theorists", how
should we manage to convincingly introduce industry to new paradigms?

> I'm saying there should be more. It's a shame that so many of the great
> minds in functional programming are tied up in developing new and better
> functional programming languages. I know that may not fly well with
> some people, but I stand by it.

If people have talent for designing languages, then let them do it. If
they have talent in developing products for the market, let them do it.
What we are lacking is people who bridge this gap. They need not be
experts in both fields: they need to be experts in combining concepts. I
often think that human nature always goes to extremes: it seems to be
difficult to find people with a fair balance between "theory" and
"practice" (not necessarily in terms of talent, but pretty surely in terms
of interest).

Markus Mottl, Aug 31, 2000, 6:07:19 PM:
James Hague <jam...@volition-inc.com> wrote:
> Sure, but they also use C, C++, Fortran, Lisp, and Forth:

[snip]

> What does this prove? Nothing! :)

It would be overkill to apply formal techniques to every problem, even
the least important ones. Applying formal methods is expensive: there are
not many people who can do it, it costs quite a lot of time (= money) to
apply, and if your budget is constrained (quite realistic - even for NASA
;) you may have to take the cheapest tool for the job.

If you want to do heavy-weight number crunching: Fortran may be a viable
choice (tons of mature libraries). If you want to program a controller -
Forth, why not? If you want to keep your staff busy (most will probably
not know about FPLs), let them write C code (-> that's cheap because most
of their programmers can probably do it).

And then there are the cases where provable correctness is mandatory...

graham, Aug 31, 2000, 7:18:04 PM:
Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 8/31/00 5:56 PM:

> James Hague <jam...@volition-inc.com> wrote:
>> I'm saying there should be more. It's a shame that so many of the great
>> minds in functional programming are tied up in developing new and better
>> functional programming languages. I know that may not fly well with
>> some people, but I stand by it.
>
> If people have talent for designing languages, then let them do it. If
> they have talent in developing products for the market, let them do it.
> What we are lacking is people who bridge this gap.

To me, Markus, your reply shows how insular a community functional language
developers are. To me the whole notion of developing a computer
language without an application area is crazy. Only by testing your
language in an application area do you understand its true strengths
and weaknesses. There is nothing like the cold hard reality of an
application area to test your academic theory against. The classic
example of this is laziness. Yes, laziness is great in theory. It's
even great for developing certain kinds of language-related applications
(e.g. compilers and type checkers). But in most real applications laziness
sucks because it uses too much space and it's too slow. Of course you
won't know that until you try developing non-language-related applications
in a lazy language.
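The space claim above can be sketched even in a strict language by making the thunks explicit (OCaml here; the function names are invented for the illustration): deferring every addition builds a chain of closures proportional to the input size, the classic lazy-accumulator space leak, while the strict version runs in constant space.

```ocaml
(* Strict accumulation: the sum is computed as we go; constant space. *)
let strict_sum n =
  let rec go acc i = if i > n then acc else go (acc + i) (i + 1) in
  go 0 1

(* "Lazy" accumulation: each step only promises to add later, so
   unevaluated closures pile up until Lazy.force walks the whole chain. *)
let lazy_sum n =
  let rec go acc i =
    if i > n then acc
    else go (lazy (Lazy.force acc + i)) (i + 1)
  in
  Lazy.force (go (lazy 0) 1)

let () = assert (strict_sum 100 = 5050 && lazy_sum 100 = 5050)
```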

graham

graham, Aug 31, 2000, 7:20:21 PM:
graham at grah...@telocity.com wrote on 8/31/00 7:18 PM:

> To me, Markus, your reply shows how insular a community functional language
> developers are. To me the whole notion of developing a computer
> language without an application area is crazy. Only by testing your
> language in an application area do you understand its true strengths
> and weaknesses. There is nothing like the cold hard reality of an
> application area to test your academic theory against. The classic
> example of this is laziness. Yes, laziness is great in theory. It's
> even great for developing certain kinds of language-related applications
> (e.g. compilers and type checkers). But in most real applications laziness
> sucks because it uses too much space and it's too slow. Of course you
> won't know that until you try developing non-language-related applications
> in a lazy language.

I forgot one thing. Predicting space usage with lazy languages is also
too hard. In real applications that's a problem.

graham

Harry Chomsky, Aug 31, 2000, 8:16:51 PM:
"Markus Mottl" <mo...@miss.wu-wien.ac.at> wrote in message
news:8omk9g$qu4$1...@bird.wu-wien.ac.at...

> If people have talent for designing languages, then let them do it. If
> they have talent in developing products for the market, let them do it.
> What we are lacking is people who bridge this gap. They need not be
> experts in both fields: they need to be experts in combining concepts. I
> often think that human nature always goes to extremes: it seems to be
> difficult to find people with a fair balance between "theory" and
> "practice" (not necessarily in terms of talent, but pretty surely in terms
> of interest).

Actually, I find myself precisely in that category. Well, I can't say for
sure that I have the talent, but I definitely have the interest! I've been
working in industry for a decade, and I'm pretty unsatisfied with what most
teams seem to consider state-of-the-art development practices (C, Perl,
spaghetti code with very little planning behind it). On the other hand, I
have enough experience in academia to know that I wouldn't be totally happy
there either.

I see great ideas in languages like Smalltalk, Erlang, and OCaml; I can see
how useful they are for solving real-world problems; but I can't convince
employers or colleagues to consider using them. And even I must admit their
practical limitations in some areas. I really want to have a computer on my
desk that I can understand and control, like I could back in 1980. Personal
computers do a lot more now than they did back then, and they're much more
complicated, so we need more well-designed layers of abstraction to help us
understand them. Instead, we get a choice of writing intricate C code or
checking boxes in "wizards", with almost nothing in between. We need
programming languages and environments that can bridge this gap.

I really see this as the crisis of modern computing, and I would love to be
a part of the solution, to help take the best ideas from academic language
design and make them into the best options for real-world product
development, and then to get people to start actually using them. I'm just
not sure where to turn to do this effectively. I'd welcome any suggestions.


Markus Mottl

Aug 31, 2000, 9:22:53 PM
graham <grah...@telocity.com> wrote:
> To me Markus your reply shows how insular a community functional language
> developpers are. To me the whole notion of developping a computer
> language without an application area is crazy.

Are you saying that the languages are unusable? As for me, I use OCaml
"even" for shell scripting and find it much more powerful in this respect
than Perl (yes, it does not have tons of fancy libraries, but for 99% of
my tasks the Unix, DBM, Tk libraries, etc. are more than sufficient). Not
to mention my primary application area (Artificial Intelligence). And
others will probably point you to theorem proving, natural language
processing, fast prototyping of compilers, etc.

> Only by testing your language in an application area do you understand
> its true strengths and weaknesses.

I am very satisfied with the language: I couldn't use, for example,
Python, because its performance for AI applications (sorry for mentioning
this argument) is just not good enough for my purposes. OCaml code, since
it has an excellent native-code compiler, usually runs between one and two
orders of magnitude faster (and I find it more convenient to use
functional programming here). This is *very* significant. And I don't want
to go down to C and do lots of interfacing just to have functionality
available in another language.

> There is nothing like the cold hard reality of an application area to
> test your academic theory against. The classic example of this is
> laziness. Yes, laziness is great in theory. It's even great for
> developing certain kinds of language-related applications (e.g.
> compilers and type checkers). But in most real applications laziness
> sucks because it uses too much space and it's too slow. Of course you
> won't know that until you try developing non-language-related
> applications in a lazy language.

Don't build a straw man: everybody here knows that in many cases the
efficiency of lazy programs (without strictness annotations) is not good
enough. That didn't prevent Lennart, in last year's ICFP contest, from
pissing off all the kilo-LOC C entries with his 100-line Haskell program
in terms of efficiency. Clean, for example, can indeed beat low-level C
(same algorithm) in certain cases, as I have verified (calculating prime
numbers).

Markus Mottl

Aug 31, 2000, 9:23:52 PM
graham <grah...@telocity.com> wrote:
> I forgot one thing. Predicting space usage with lazy languages is also
> too hard. In real applications that's a problem.

Not all FPLs are lazy.
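A minimal sketch of what "not lazy" means in practice (an editor's example, not from the post): in OCaml, as in SML, evaluation is strict by default, and laziness is an explicit opt-in, so the space behaviour graham worries about only appears where you ask for it.

```ocaml
(* Editor's illustration (not from the original post): OCaml evaluates
   strictly by default; laziness is opt-in and explicit. *)

(* Evaluated immediately, at definition time. *)
let strict_cell = (print_endline "evaluated now"; 42)

(* A suspension: nothing happens until it is forced. *)
let lazy_cell = lazy (print_endline "evaluated on first force"; 42)

let () =
  print_endline "before force";
  Printf.printf "%d\n" (Lazy.force lazy_cell);
  (* Forced lazy values are memoized: no second side effect here. *)
  Printf.printf "%d\n" (Lazy.force lazy_cell)
```

Because every suspension is written out by hand, there is no hidden build-up of thunks, and the cost model of a program stays easy to reason about.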

Christopher Browne

Aug 31, 2000, 9:23:53 PM
Centuries ago, Nostradamus foresaw a time when Markus Mottl would say:

>Unfortunately, many "practically" oriented academics rather consider what
>is happening in "practice" (in industry) instead of trying to bring new
>theoretic knowledge to maturity in practical fields. I consider this a
>gross misorganisation in the academic sector. If we do not even manage to
>enforce enough contact between the "engineers" and the "theorists", how
>should we manage to convincingly introduce industry to new paradigms?

This is indeed an unfortunate thing; in effect, the choice most often
is between:

a) Academics that Would Interface with Practice basically toadying
to practice, trying to come up with excuses for why industry does
what they do, or

b) Academics that "create new theory" that are perceived as being
utterly irrelevant to any kind of practice.

>> I'm saying there should be more. It's a shame that so many of the great
>> minds in functional programming are tied up in developing new and better
>> functional programming languages. I know that may not fly well with
>> some people, but I stand by it.
>
>If people have talent for designing languages, then let them do it. If
>they have talent in developing products for the market, let them do it.
>What we are lacking is people who bridge this gap. They need not be
>experts in both fields: they need to be experts in combining concepts. I
>often think that human nature always goes to extremes: it seems to be
>difficult to find people with a fair balance between "theory" and
>"practice" (not necessarily in terms of talent, but pretty surely in terms
>of interest).

Another problem is that of the distinction between "technicians" and
"management."

There is a strong tendency for those that remain "technical" to be
relegated to lower levels, whilst "managers" get to move up the
hierarchy. This discourages there from _being_ high-level
"technical experts."

If you reach a certain level, and the _only_ route upwards involves
walking away from _both_ "theory" and "practice," that does not
bode well for such gaps getting bridged...
--
cbbr...@acm.org - <http://www.hex.net/~cbbrowne/lsf.html>
Rules of the Evil Overlord #206. "When my Legions of Terror park their
vehicle to do reconnaissance on foot, they will be instructed to
employ The Club." <http://www.eviloverlord.com/>

graham

Aug 31, 2000, 10:07:51 PM
Markus Mottl:

> graham <grah...@telocity.com> wrote:
>> To me, Markus, your reply shows how insular a community functional language
>> developers are. To me, the whole notion of developing a computer
>> language without an application area is crazy.
>
> Are you saying that the languages are unusable? As for me, I use OCaml
> "even" for shell scripting and find it much more powerful in this respect
> than Perl (yes, it does not have tons of fancy libraries, but for 99% of
> my tasks the Unix, DBM, Tk libraries, etc. are more than sufficient). Not
> to mention my primary application area (Artificial Intelligence). And
> others will probably point you to theorem proving, natural language
> processing, fast prototyping of compilers, etc.

I am not saying that all functional languages are unusable. I am saying
that most functional languages are unusable for commercial programming,
since they are too slow and/or use too much memory and/or have space
usage that is difficult to predict. OCaml and Erlang are exceptions to
this. Interestingly enough, Erlang was designed with a specific application
area in mind, and it shows in the language itself (pragmatic use of
functional features where appropriate). I would be interested in knowing
the genealogy of OCaml.

Clean is getting better with its space usage and performance. Haskell
is getting better too, but has further to go.

Markus:


> Don't build a straw man: everybody here knows that in many cases the
> efficiency of lazy programs (without strictness annotations) is not good
> enough. That didn't prevent Lennart, in last year's ICFP contest, from
> pissing off all the kilo-LOC C entries with his 100-line Haskell program
> in terms of efficiency. Clean, for example, can indeed beat low-level C
> (same algorithm) in certain cases, as I have verified (calculating prime
> numbers).

I doubt this claim of beating low level C code. Care to benchmark your
code against a system like a number theory package written in C by someone
who knows what they are doing?

Moreover the fact that the same algorithm runs faster in Clean than in
C doesn't say much. Maybe this algorithm isn't the best way of solving
this problem in C.

graham

Brian Rogoff

Aug 31, 2000, 11:45:19 PM
On Thu, 31 Aug 2000, graham wrote:
> Markus Mottl:
> > graham <grah...@telocity.com> wrote:
> >> To me, Markus, your reply shows how insular a community functional language
> >> developers are. To me, the whole notion of developing a computer
> >> language without an application area is crazy.
> >
> > Are you saying that the languages are unusable? As for me, I use OCaml
> > "even" for shell scripting and find it much more powerful in this respect
> > than Perl (yes, it does not have tons of fancy libraries, but for 99% of
> > my tasks the Unix, DBM, Tk libraries, etc. are more than sufficient). Not
> > to mention my primary application area (Artificial Intelligence). And
> > others will probably point you to theorem proving, natural language
> > processing, fast prototyping of compilers, etc.
>
> I am not saying that all functional languages are unusable. I am saying
> that most functional languages are unusable for commercial programming,
> since they are too slow and/or use too much memory and/or have space
> usage that is difficult to predict. OCaml and Erlang are exceptions to
> this. Interestingly enough, Erlang was designed with a specific application
> area in mind, and it shows in the language itself (pragmatic use of
> functional features where appropriate). I would be interested in knowing
> the genealogy of OCaml.

http://www.pps.jussieu.fr/~cousinea/Caml/caml_history.html

ML was originally the metalanguage for a theorem prover. I'd say it's
pretty general purpose now, though I think if I had to write very low
level code (device driver) or ultra high performance numerics code I would
choose something else. I look forward to the day that even that isn't true.
However, as a general purpose language for symbolic processing, OCaml is
superb.

-- Brian


Brian Rogoff

Sep 1, 2000, 12:21:07 AM
On 31 Aug 2000, Markus Mottl wrote:
> It would be overkill to apply formal techniques to even the least
> important problem. Applying formal methods is expensive: there are not
> many people who can do it, it costs quite a lot of time (= money) to
> apply, and if your budget is constrained (quite realistic - even for
> NASA ;) you may have to take the cheapest tool for the job.

See http://www.praxis-cs.co.uk for some examples of real projects
undertaken with formal methods. Since many of those projects were done
for government agencies, this won't convince the curmudgeonly naysayers
of c.l.f. about the utility of formal methods.

In the field of VLSI design, formal methods are already being used rather
frequently, and all of the EDA big boys have, or have bought, formal
verification know-how. A lot of the tools are simple now (equivalence
checkers) but symbolic model checking seems to be sneaking into practice
too. The cost of respinning an ASIC is pretty large, so you pretty much want
to nail it the first time. (Interestingly, one of the failed ML vendors,
Abstract Hardware Ltd., was in this area...)

> If you want to do heavy-weight number crunching: Fortran may be a viable
> choice (tons of mature libraries).

Markus, you traitor! Of course you meant to say, "If you want to do
heavy-weight number crunching: an OCaml program which generates Fortran
may be a viable choice", right?
:-)

-- Brian


Ketil Z Malde

Sep 1, 2000, 2:34:17 AM
graham <grah...@telocity.com> writes:

> To me, Markus, your reply shows how insular a community functional language
> developers are. To me, the whole notion of developing a computer
> language without an application area is crazy.

Yeah, and Hertz' work with radio waves was sure to be completely
useless. Come on, it's called research, and it is impossible to know
in advance what will work and what won't.

By the way, I'd be more than happy writing software in a functional
language, instead of working with the stuff that is used in industry.
As it is, I can (and do) use Haskell for my own stuff, like mining web
pages, parsing XML and toying with some other things, but when it
comes to real work, I must suffer C++, Visual Studio, and MS Word --
and consequently, loss of productivity and often temper as well. But
hey, that's industry for you.

No wonder people stick to academia. I want to get back in.

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

graham

Sep 1, 2000, 8:02:11 AM
Ketil Z Malde at ke...@ii.uib.no wrote on 9/1/00 2:34 AM:
> graham

>> To me, Markus, your reply shows how insular a community functional language
>> developers are. To me, the whole notion of developing a computer
>> language without an application area is crazy.
>
> Yeah, and Hertz' work with radio waves was sure to be completely
> useless. Come on, it's called research, and it is impossible to know
> in advance what will work and what won't.


But you are confusing two issues here. Pure research is fine -- that's
what Hertz did. It's necessary, and you have no idea what will turn out
to be useful and what won't. But once you claim utility for your
research, as functional languages do, then it had better indeed be useful.
This is where you need an application area to test in.

graham


graham

Sep 1, 2000, 8:07:25 AM
Brian Rogoff at b...@shell5.ba.best.com wrote on 8/31/00 11:45 PM:

>>> graham <grah...@telocity.com> wrote:
>> I would be interested in knowing the
>> genealogy of OCaml.
>
> http://www.pps.jussieu.fr/~cousinea/Caml/caml_history.html
>
> ML was originally the metalanguage for a theorem prover. I'd say it's
> pretty general purpose now, though I think if I had to write very low
> level code (device driver) or ultra high performance numerics code I would
> choose something else. I look forward to the day that even that isn't true.
> However, as a general purpose language for symbolic processing, OCaml is
> superb.

Yes, I know Caml was an offshoot of the ML metalanguage. But what I would
like to know is what they were aiming for when they decided to move away
from ML as a language for theorem proving and toward Caml as a language
for ...? Also, why did they decide to make their own ML rather than
develop the existing ML?

graham

Brian Rogoff

Sep 1, 2000, 11:05:09 AM
On Fri, 1 Sep 2000, graham wrote:
> Brian Rogoff at b...@shell5.ba.best.com wrote on 8/31/00 11:45 PM:
> >>> graham <grah...@telocity.com> wrote:
> >> I would be interested in knowing the
> >> genealogy of OCaml.
> >
> > http://www.pps.jussieu.fr/~cousinea/Caml/caml_history.html

... snip ...

> Yes, I know Caml was an offshoot of the ML metalanguage. But what I would
> like to know is what they were aiming for when they decided to move away
> from ML as a language for theorem proving and toward Caml as a language
> for ...?

Why do you think that "they decided to move away from ML as a language for
theorem proving and toward Caml as a language for ..."? That is an
unfounded assumption. Perhaps they kept developing it as a language for
theorem proving, but other capabilities were added which didn't lessen its
abilities in TP. Perhaps, since it was used to write its own compiler, it
became a better compiler-writing language.

> Also why did they decide to make their own ML rather than
> develop the existing ML?

Did you read the URL? Here is what it says:

"This urged me to develop a new implementation of ML based on the CAM.
However, the language we implemented then was not SML but ... Caml. Why?
Our main reason for developing Caml was to use it for sofware developments
in the Formel project and indeed, it was used for the development of the Coq
system which became, after Thierry Coquand's thesis in 1985, the main aim
of the project. We were reluctant to adopt a standard that could later
prevent us to adapt the language to our programming needs. In particular,
the Formel project developed syntax manipulation tools (Philippe LeChenadec,
Michel Mauny ) which appeared useful for our developments and that we
incorporated into Caml. Having to synchronize with the SML team before
adopting the language modifications that seemed useful for our developments
would have introduced too much delay in our work and was anyway in conflict
with the notion of a "standard" language which was not supposed to evolve
too much. However, we tried to incorporate in Caml most of the improvements
brought by SML over Edinburgh ML. "

-- Brian

Markus Mottl

Sep 1, 2000, 12:01:25 PM
graham <grah...@telocity.com> wrote:
> Interestingly enough, Erlang was designed with a specific application area
> in mind, and it shows in the language itself (pragmatic use of functional
> features where appropriate). I would be interested in knowing the genealogy
> of OCaml.

Being a member of the ML family, its initial goals surely lay in the
application of formal methods (theorem proving). That you can use it for
many other areas shows that the primary application does not "distract"
it from them (IMHO, that's where it gets its generality from). Yes,
indeed, "genealogy" is an important application of OCaml itself:

http://pauillac.inria.fr/~ddr/GeneWeb

Probably the best free genealogy software that you can get... ;)

> I doubt this claim of beating low level C code. Care to benchmark your
> code against a system like a number theory package written in C by
> someone who knows what they are doing?

Compare equals to equals: a "number theory package" written by "someone
who knows what they are doing" is likely to use more advanced algorithms
of which I have never heard (I am not a mathematician). If you test the
same algorithm against the same algorithm, writing very low-level C-code
is likely to give you a small edge - but even this is *not* guaranteed!
Not even to mention the time it would take you to get things right...

Take these recent comments from Xavier on the ICFP-contest:

The program spends more than 50% of its running time checking for
intersections with the bounding spheres, which is a fairly simple
hand-optimized geometric formula for which the OCaml compiler generates
optimal Pentium Pro code.

and:

> It would be nice to have the computation times for those test images,
> especially for those of us who won't be able to go to ICFP.

Good idea. I added some timings to the Web page above. Not being
familiar with the state of the art in ray tracing, we have no idea if
these are good or bad timings. But from examination of the code
produced by ocamlopt, I can say that those floating-point optimizations
finally paid off...

This seems to indicate that even manually fiddling with the assembler code
wouldn't help you squeeze any relevant performance out of the program in
this specific (but "real-world") case. And if you don't trust Xavier in
this respect, whom else? ;)

> Moreover the fact that the same algorithm runs faster in Clean than in
> C doesn't say much. Maybe this algorithm isn't the best way of solving
> this problem in C.

This algorithm heavily relies on arrays, something which is considered
"inefficient" in functional languages. Actually, both the OCaml and the
Clean code were nearly twice as fast as the initial version of the C code,
which used while-loops. I changed this to make use of gotos, which made
the C code run "only" about 10% slower than the others (I had used gcc and
fiddled around with the optimization flags; I even manually added register
keywords, which the compiler probably ignores anyway). The only platform
where the C version was slightly faster was Solaris + the vendor cc. But
OK, I guess that the OCaml backend for this target platform is not as
highly optimized as the one for Intel machines (and obviously Alphas).

I also know from experience that using the standard I/O-primitives in C++
(STL / stream operators, etc.) is not unlikely to yield programs that run
*several times* slower than comparable programs written in OCaml (with the
"Pervasives library" only, not the Unix-library, which may be even
faster). This is already very significant.

E.g. (g++ -O2 -fomit-frame-pointer):

---------------------------------------------------------------------------
#include <iostream>
#include <fstream>

using namespace std;  /* needed with the standard headers; g++ 2.95 let it slide */

int main () {
    char buf[600];
    ifstream foo("test.dat");
    /* test the stream state returned by getline: looping on eof() alone
       would process a spurious line once the last read fails */
    while (foo.getline(buf, 600))
        cout << buf << endl;
}
---------------------------------------------------------------------------

real 0m0.365s
user 0m0.260s
sys 0m0.110s

vs.

---------------------------------------------------------------------------
let file = open_in "test.dat" in
try
  while true do
    print_endline (input_line file)
  done
with End_of_file -> ()
---------------------------------------------------------------------------

real 0m0.352s
user 0m0.170s
sys 0m0.060s

You see, OCaml runs 50% faster - oops, sorry, these were the timings for
the OCaml-interpreter! ;)

Here are the timings for the OCaml native-code compiler (ocamlopt - I
haven't even turned off array and string bounds checking or assertions):

real 0m0.109s
user 0m0.080s
sys 0m0.020s

Oh, well, OCaml beats "standard C++" by a factor of more than three! -
Now, don't say "Yes, but I can use system calls, use trick XY, etc.!" ->
so can OCaml: and there you don't have all this hassle with buffer sizes,
etc.

Speed is not an argument... and if it is, then (IMHO) rather one in favour
of OCaml (or where do you want to goto today?)

Markus Mottl

Sep 1, 2000, 12:10:43 PM
graham <grah...@telocity.com> wrote:
> But you are confusing two issues here. Pure research is fine -- that's
> what Hertz did. It's necessary, and you have no idea what will turn out
> to be useful and what won't. But once you claim utility for your
> research, as functional languages do, then it had better indeed be useful.
> This is where you need an application area to test in.

http://caml.inria.fr/users_programs-eng.html

Numerical applications, scripting, networking, financial applications,
bioinformatics, distributed computation, raytracing (as the current ICFP
shows), (you name it)... - and of course its main fields: theorem proving,
program transformation, etc...

Many of these applications are arguably state-of-the-art. (E.g. FFTW,
Ensemble, Coq, ...)

But where is industry?

Markus Mottl

Sep 1, 2000, 12:24:17 PM
Brian Rogoff <b...@shell5.ba.best.com> wrote:
> A lot of the tools are simple now (equivalence checkers) but symbolic model
> checking seems to be sneaking into practice too. The cost of respinning an
> ASIC is pretty large, so you pretty much want to nail it the first time.
> (Interestingly, one of the failed ML vendors, Abstract Hardware Ltd., was in
> this area...)

Right, there is heavy academic research going on in model-checking right
now (there were two PhD-studentships offered in this special field alone
just recently at our institution, e.g. for verification of protocols). It
will probably take some time until industry catches up...

>> If you want to do heavy-weight number crunching: Fortran may be a viable
>> choice (tons of mature libraries).

> Markus, you traitor! Of course you meant to say, "If you want to do
> heavy-weight number crunching: an OCaml program which generates Fortran
> may be a viable choice", right?
> :-)

Oh, damn, I am sure there was a paragraph like

"... But FPLs are surely the better choice."

Must have accidentally entered "dap" in my VIM editor and zapped this
paragraph ;)

graham

Sep 1, 2000, 1:20:35 PM
Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/1/00 12:10 PM:

> graham <grah...@telocity.com> wrote:
>> But you are confusing two issues here. Pure research is fine -- that's
>> what Hertz did. It's necessary, and you have no idea what will turn out
>> to be useful and what won't. But once you claim utility for your
>> research, as functional languages do, then it had better indeed be useful.
>> This is where you need an application area to test in.
>
> http://caml.inria.fr/users_programs-eng.html
>
> Numerical applications, scripting, networking, financial applications,
> bioinformatics, distributed computation, raytracing (as the current ICFP
> shows), (you name it)... - and of course its main fields: theorem proving,
> program transformation, etc...

Markus, I already said that Caml and Erlang weren't part of my critique. I
agree that Caml is a fantastic language -- fast, space-efficient, etc. So
I am not sure what your point is.

> But where is industry?

A good question.

graham

graham

Sep 1, 2000, 1:24:51 PM
Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/1/00 12:01 PM:

> graham <grah...@telocity.com> wrote:
>> I doubt this claim of beating low level C code. Care to benchmark your
>> code against a system like a number theory package written in C by
>> someone who knows what they are doing?
>
> Compare equals to equals: a "number theory package" written by "someone
> who knows what they are doing" is likely to use more advanced algorithms
> of which I have never heard (I am not a mathematician). If you test the
> same algorithm against the same algorithm, writing very low-level C-code
> is likely to give you a small edge - but even this is *not* guaranteed!
> Not even to mention the time it would take you to get things right...

I am not suggesting we change algorithms. I am just saying that I doubt
the claim that C runs slower on the same algorithm. Give the same
algorithm to some people who know C, e.g. the people who write number
theory code in C, and I bet they could implement this algorithm faster
in C.

Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/1/00 12:01 PM:


> graham <grah...@telocity.com> wrote:
>> Moreover the fact that the same algorithm runs faster in Clean than in
>> C doesn't say much. Maybe this algorithm isn't the best way of solving
>> this problem in C.
>
> This algorithm heavily relies on arrays, something which is considered
> "inefficient" in functional languages. Actually, both the OCaml and the
> Clean code were nearly twice as fast as the initial version of the C-code,
> which used while-loops. I changed this to make use of gotos, which made
> the C-code run "only" about 10% slower than the others (I had used gcc and
> fiddled around with the optimization flags, I even manually added register
> keywords, which the compiler probably ignores anyway). The only platform
> where the C version was slightly faster was Solaris + the vendor cc. But
> OK, I guess that the OCaml backend for this target platform is not as
> highly optimized as the one for Intel machines (and obviously Alphas).

Can we see this code?

Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/1/00 12:01 PM:


> I also know from experience that using the standard I/O-primitives in C++
> (STL / stream operators, etc.) is not unlikely to yield programs that run
> *several times* slower than comparable programs written in OCaml

> [... a small example]

Microbenchmarks tell you nothing about speed in general.

graham

Markus Mottl

Sep 1, 2000, 2:49:18 PM
graham <grah...@telocity.com> wrote:
> I am not suggesting we change algorithms. I am just saying that I doubt
> the claim that C runs slower on the same algorithm. Give the same
> algorithm to some people who know C, e.g. the people who write number
> theory code in C, and I bet they could implement this algorithm faster
> in C.

Maybe - I don't know. Several people have already seen the code, but
(to my knowledge) nobody has found any way to make it faster (this may
depend on the architecture). Of course, it is a trivial algorithm -
nothing "real-world"-ish. But it shows that even low-level C is not
necessarily the final answer... (some assembler guru once took a look
at the compiler output and said that the OCaml code was optimal (to him),
whereas the C missed an optimization opportunity).

> Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/1/00 12:01 PM:
>> I also know from experience that using the standard I/O-primitives in C++
>> (STL / stream operators, etc.) is not unlikely to yield programs that run
>> *several times* slower than comparable programs written in OCaml
>> [... a small example]

> Microbenchmarks tell you nothing about speed in general.

This (I/O) is not an unusual operation: this was pretty standard code,
as you'd find it in tons of projects. Most people are not aware of the
inefficiency of I/O in the STL (at least in the one shipped with gcc).

You don't really expect that things turn in favour of C/C++ when the
projects get larger? (I'd rather expect that the difference becomes
smaller, or even turns generally in favour of OCaml.) This seems at least
to be the experience of the Ensemble team when they rewrote their project
in OCaml, which is *really* industrial-size.

> Can we see this code?

Sure... (see below)...

Best regards,
Markus Mottl

---------------------------------------------------------------------------

Platform: Linux 2.2.14-6.1.1 i686 unknown
OCaml: The Objective Caml native-code compiler, version 3.00
GCC: 2.95.1

Timings for C with "normal" loops:

5.200u 0.000s 0:05.20 100.0% 0+0k 0+0io 87pf+0w

Timings for C with gotos:

5.270u 0.000s 0:05.27 100.0% 0+0k 0+0io 87pf+0w

Interesting! Last time I tried this (quite some time ago, with an older
gcc and another Intel machine), the version with gotos was quite a bit
faster - I hadn't expected that it would become slower...

Timings for OCaml:

4.890u 0.000s 0:04.89 100.0% 0+0k 0+0io 139pf+0w

This gives OCaml an edge of about 6-7%.

OCaml also handles command line arguments, whereas the C-code has its
parameter hard-wired.

OCaml-code: (compiled with: ocamlopt -unsafe -noassert)
You have to pass the argument "-n 200000" to the executable!
---------------------------------------------------------------------------
(* Parse "-n <number>": which prime (1-based) to compute. *)
let count =
  let count = ref 100 in
  let n_opt = "-n", Arg.Int (fun n -> count := n), "nth prime (default 100)"
  and usage = "Usage: fsieve [-n number] calculates primes" in
  Arg.parse [n_opt] (fun _ -> raise (Arg.Bad "unknown argument!")) usage;
  !count

(* Table of primes found so far, seeded with 2, 3 and 5. *)
let primes =
  let primes = Array.create (max count 3) 0 in
  primes.(0) <- 2; primes.(1) <- 3; primes.(2) <- 5; primes

(* Trial division of pr by the stored primes up to the bound bd (= sqrt pr). *)
let rec is_prime i pr bd =
  if primes.(i) > bd then true
  else if pr mod primes.(i) = 0 then false else is_prime (succ i) pr bd

(* Fill the table; candidates 7, 11, 13, 17, ... skip multiples of 2 and 3
   by alternating increments of 4 and 2 (tog toggles between 4 and 2). *)
let rec prime_n psize nr tog =
  if psize < count then
    let psize' =
      if is_prime 2 nr (truncate (sqrt (float nr))) then begin
        primes.(psize) <- nr; succ psize end
      else psize in
    prime_n psize' (nr + tog) (6 - tog)

let _ = prime_n 3 7 4; Printf.printf "prime %d: %d\n" count primes.(pred count)
---------------------------------------------------------------------------

C-code (with gotos): (compiled with: gcc -O2 -fomit-frame-pointer)
---------------------------------------------------------------------------
#include <stdio.h>
#include <math.h>

#define how_many 200000

int main()
{
    unsigned int nr = 5, toggle = 2, max, primes_size = 3, i;
    unsigned int primes[how_many];

    primes[0] = 2; primes[1] = 3; primes[2] = 5;

loop:
    nr += toggle; toggle = 6 - toggle;
    max = sqrt(nr);

    for (i = 2; primes[i] <= max; ++i)
        if (!(nr % primes[i])) goto loop;

    primes[primes_size++] = nr;
    if (primes_size < how_many) goto loop;

    printf("%i\n", primes[how_many - 1]);
    return 0;
}
---------------------------------------------------------------------------

C-code (without gotos): (compiled with: gcc -O2 -fomit-frame-pointer)
---------------------------------------------------------------------------
#include <stdio.h>
#include <math.h>

#define how_many 200000

int main() {
    unsigned int nr = 5, toggle = 2, max, primes_size = 3, i, is_prime;
    unsigned int primes[how_many];

    primes[0] = 2;
    primes[1] = 3;
    primes[2] = 5;

    while (primes_size < how_many) {
        nr += toggle; toggle = 6 - toggle;
        max = sqrt(nr);
        is_prime = 1; i = 2;

        while (primes[i] <= max) {
            if (!(nr % primes[i])) {
                is_prime = 0; i = max + 1;
            }
            ++i;
        }

        if (is_prime) primes[primes_size++] = nr;
    }

    printf("%i\n", primes[how_many - 1]);
    return 0;
}
---------------------------------------------------------------------------

Markus Mottl

Sep 1, 2000, 2:52:45 PM
graham <grah...@telocity.com> wrote:
> Markus, I already said that Caml and Erlang weren't part of my critique. I
> agree that Caml is a fantastic language -- fast, space-efficient, etc. So
> I am not sure what your point is.

So it is not FPLs in general? - Only Haskell??

I don't know of any other language you might mean: most other
implementations probably fulfil the requirement of "fast and space
efficient" to a sufficiently high degree. Of course, we are not talking
about Cayenne or the like: such languages are (at least right now) purely
research oriented. Of course, scientists need something to play with...

Jeffrey Straszheim

unread,
Sep 1, 2000, 9:44:52 PM9/1/00
to
On Fri, 01 Sep 2000 00:16:51 GMT, Harry Chomsky <har...@chomsky.net>
wrote:

<snips galore>

> I see great ideas in languages like Smalltalk, Erlang, and OCaml; I
> can see how useful they are for solving real-world problems; but I
> can't convince employers or colleagues to consider using them. And
> even I must admit their practical limitations in some areas. I
> really want to have a computer on my desk that I can understand and
> control, like I could back in 1980. Personal computers do a lot
> more now than they did back then, and they're much more complicated,
> so we need more well-designed layers of abstraction to help us
> understand them. Instead, we get a choice of writing intricate C
> code or checking boxes in "wizards", with almost nothing in between.
> We need programming languages and environments that can bridge this
> gap.

> I really see this as the crisis of modern computing, and I would
> love to be a part of the solution, to help take the best ideas from
> academic language design and make them into the best options for
> real-world product development, and then to get people to start
> actually using them. I'm just not sure where to turn to do this
> effectively. I'd welcome any suggestions.

What you say borders on visionary -- heck, it is visionary, although
you're surely not the first to say it.

We can dream. Like what would have happened if the old Lisp machines
had been a smashing success. Or what if Microsoft, instead of another
layer of mediocrity (e.g., .NET), decided to throw their full
financial and marketing force into the TUNES project, or something
like that.

Of course, none of this will happen. What we will get are incremental
advances, and always far short of the quality that we dream of.

--
-- Jeffrey Straszheim | A sufficiently advanced
-- Systems Engineer, Programmer | regular expression is
-- http://www.shadow.net/~stimuli | indistinguishable from
-- stimuli AT shadow DOT net | magic

Riku Saikkonen

unread,
Sep 2, 2000, 7:17:46 AM9/2/00
to
Markus Mottl <mo...@miss.wu-wien.ac.at> writes:
>Timings for C with "normal" loops:
>
> 5.200u 0.000s 0:05.20 100.0% 0+0k 0+0io 87pf+0w
>
>Timings for C with gotos:
>
> 5.270u 0.000s 0:05.27 100.0% 0+0k 0+0io 87pf+0w
...
>Timings for OCaml:
>
> 4.890u 0.000s 0:04.89 100.0% 0+0k 0+0io 139pf+0w
>
>This gives OCaml an edge of about 6-7%.

I can't seem to reproduce your results. On my system (Debian GNU/Linux
2.2, AMD K6/450 MHz; p.c is your "without gotos" code and pr.ml your
Ocaml code):


rjs@anar:~/t$ gcc -O2 -fomit-frame-pointer -o p p.c
rjs@anar:~/t$ ocamlopt -unsafe -noassert -o pr pr.ml
rjs@anar:~/t$ time ./p
2750159

real 0m2.960s
user 0m2.950s
sys 0m0.010s
rjs@anar:~/t$ time ./p
2750159

real 0m2.956s
user 0m2.950s
sys 0m0.000s
rjs@anar:~/t$ time ./pr -n 200000
prime 200000: 2750159

real 0m4.574s
user 0m4.560s
sys 0m0.010s
rjs@anar:~/t$ time ./pr -n 200000
prime 200000: 2750159

real 0m4.580s
user 0m4.550s
sys 0m0.020s
rjs@anar:~/t$ gcc --version
2.95.2
rjs@anar:~/t$ ocamlopt -v


The Objective Caml native-code compiler, version 3.00

Standard library directory: /usr/lib/ocaml


So ocaml takes about 70% more time than gcc. Am I doing something
wrong?

As an aside, here's some Scheme code to do the same: (translated from
your ML code, so it isn't very idiomatic Scheme)
* * *
(define how-many 200000)

(define primes (make-vector how-many))
(vector-set! primes 0 2)
(vector-set! primes 1 3)
(vector-set! primes 2 5)

(define (prime? i pr bd)
  (if (> (vector-ref primes i) bd)
      #t
      (if (= (modulo pr (vector-ref primes i)) 0)
          #f
          (prime? (+ i 1) pr bd))))

(define (prime-n psize nr tog)
  (if (< psize how-many)
      (begin
        (if (prime? 2 nr (truncate (sqrt nr)))
            (begin
              (vector-set! primes psize nr)
              (set! psize (+ psize 1))))
        (prime-n psize (+ nr tog) (- 6 tog)))))

(prime-n 3 7 4)
(display (vector-ref primes (- how-many 1)))
(newline)
* * *

Compilation with Stalin 0.80 produces:

rjs@anar:~/t$ stalin -Ob -Om -On -Or -Ot -copt -O2 -copt -fomit-frame-pointer pri.sc
rjs@anar:~/t$ time ./pri
2750159

real 0m6.573s
user 0m6.520s
sys 0m0.020s
rjs@anar:~/t$ time ./pri
2750159

real 0m6.557s
user 0m6.540s
sys 0m0.000s

So it's not much slower than the OCaml version, but it still takes over
twice as much time as the C version.

Using a Scheme interpreter is quite hopeless (SCM 5c2):

rjs@anar:~/t$ time scm -bf pri.sc
2750159

real 2m38.377s
user 2m38.090s
sys 0m0.290s

This is probably understandable due to the use of bignum arithmetic
and dynamically typed vectors (I think a vector here is an array of
pointers to heap-allocated boxed values), but it's still quite a
bit slower than I would have thought.

--
-=- Rjs -=- r...@lloke.dna.fi

Markus Mottl

unread,
Sep 2, 2000, 1:50:58 PM9/2/00
to
Riku Saikkonen <r...@lloke.dna.fi> wrote:
> I can't seem to reproduce your results. On my system (Debian GNU/Linux
> 2.2, AMD K6/450 MHz; p.c is your "without gotos" code and pr.ml your
> Ocaml code):

Interesting! - I have already tried this test on several machines and it
has always worked (and other people have verified it, too). I have tried
it once more on mine, but the results are still the same.

There are only two reasons I can imagine: either there has been a
significant improvement between gcc-2.95.1 and gcc-2.95.2 (rather
unlikely, I think) or AMD chips behave differently in this respect...

The machine on which I ran this experiment is an Intel PII with 400 MHz.
This would explain the difference between the OCaml-versions but not
between the C-versions. I am not a hardware expert, but it seems quite
surprising that there is such a significant difference (is Intel really so
bad? ;)

> So ocaml takes about 70% more time than gcc. Am I doing something
> wrong?

It would be interesting to see how the version with gotos performs: is it
faster or slower than the one with loops?

Best regards,
Markus Mottl

David A. Wagner

unread,
Sep 2, 2000, 4:19:20 PM9/2/00
to
Brian Rogoff <b...@shell5.ba.best.com> wrote:
> In the field of VLSI design, formal methods are already being used rather
> frequently, and all of the EDA big boys have or have bought formal
> verification knowhow. A lot of the tools are simple now (equivalence
> checkers) but symbolic model checking seems to be sneaking into practice
> too.

The interesting thing about these success stories is that, if I understand
correctly, they seem to be lightweight formal methods, focusing more on
finding bugs than on full formal verification of functional correctness.

Markus Mottl

unread,
Sep 2, 2000, 5:04:15 PM9/2/00
to
> Riku Saikkonen <r...@lloke.dna.fi> wrote:
>> I can't seem to reproduce your results. On my system (Debian GNU/Linux
>> 2.2, AMD K6/450 MHz; p.c is your "without gotos" code and pr.ml your
>> Ocaml code):

I have just installed the newest gcc-package (grmbl, had to update glibc,
too). As I had supposed, it doesn't seem to be gcc which makes the
C-example run so fast on your machine. Here are the new timings:

C with loop:
5.330u 0.000s 0:05.33 100.0% 0+0k 0+0io 85pf+0w

C with gotos:
5.260u 0.010s 0:05.26 100.1% 0+0k 0+0io 84pf+0w

OCaml:
4.890u 0.010s 0:04.89 100.2% 0+0k 0+0io 165pf+0w

Surprisingly, the ranking of loops vs. gotos has reversed again! This time
it is the goto-version which is slightly faster. In any case the
gcc-versions do not seem to have become more efficient overall. So this
(admittedly "mini"-) benchmark is still in favour of OCaml, which runs
around 7.5 - 9% faster on my machine.

>> So ocaml takes about 70% more time than gcc. Am I doing something
>> wrong?

I have thought about possibilities - hm, maybe you had accidentally logged
in on a faster machine for the gcc-tests? ;) (only joking)

Now, I don't know what hardware differences can cause this, but it would
be interesting to know whether your AMD-chip has more cache: I couldn't
imagine that speed differs by such a margin otherwise. Maybe my
architecture suffers from cache misses? The arguably lower memory
requirements of C might allow it to fit into your cache but OCaml is still
too large? Has anybody else tried this test on AMD and Intel chips?

Nils Goesche

unread,
Sep 3, 2000, 2:13:07 AM9/3/00
to
Markus Mottl <mo...@miss.wu-wien.ac.at> writes:

> Now, I don't know what hardware differences can cause this, but
> it would be interesting to know whether your AMD-chip has more
> cache: I couldn't imagine that speed differs by such a margin
> otherwise. Maybe my architecture suffers from cache misses? The
> arguably lower memory requirements of C might allow it to fit
> into your cache but OCaml is still too large? Has anybody else
> tried this test on AMD and Intel chips?

On my system, Pentium III with 650 MHz running Linux, OCaml wins
again:

C-Version with goto:

real 0m3.297s
user 0m3.280s
sys 0m0.000s

C-Version without goto:

real 0m3.305s
user 0m3.290s
sys 0m0.000s

OCaml:

real 0m3.009s
user 0m3.000s
sys 0m0.010s

Note that compiling the C files with -march=pentiumpro didn't
help either.
--
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.

Daniel de Rauglaudre

unread,
Sep 3, 2000, 1:45:53 PM9/3/00
to
I am not sure that timings are relevant for programs that run for only
2 to 4 seconds. Am I wrong?

--
Daniel de RAUGLAUDRE
daniel.de_...@inria.fr
http://cristal.inria.fr/~ddr/

Riku Saikkonen

unread,
Sep 3, 2000, 1:29:35 PM9/3/00
to
Markus Mottl <mo...@miss.wu-wien.ac.at> writes:
> C with loop:
> 5.330u 0.000s 0:05.33 100.0% 0+0k 0+0io 85pf+0w
>
> C with gotos:
> 5.260u 0.010s 0:05.26 100.1% 0+0k 0+0io 84pf+0w
>
> OCaml:
> 4.890u 0.010s 0:04.89 100.2% 0+0k 0+0io 165pf+0w

For me, the C example with gotos (pg.c) is also slightly faster than
the one with the loop (p.c):

rjs@anar:~/t$ gcc -O2 -fomit-frame-pointer -o pg pg.c
rjs@anar:~/t$ time ./pg
2750159

real 0m2.942s
user 0m2.930s
sys 0m0.010s
rjs@anar:~/t$ time ./pg
2750159

real 0m2.943s
user 0m2.930s
sys 0m0.020s


rjs@anar:~/t$ gcc -O2 -fomit-frame-pointer -o p p.c

rjs@anar:~/t$ time ./p
2750159

real 0m2.957s
user 0m2.940s


sys 0m0.010s
rjs@anar:~/t$ time ./p
2750159

real 0m2.956s
user 0m2.950s
sys 0m0.010s
rjs@anar:~/t$

Compiling with just -O6, the gotos version is slightly slower (3.066 s
real time vs. 2.975 s for the loop version), so -fomit-frame-pointer
appears to help the version with gotos more than the loop version. I
suppose a look at the assembler produced by gcc would reveal the reason.

>Now, I don't know what hardware differences can cause this, but it would
>be interesting to know whether your AMD-chip has more cache: I couldn't

Some information from Linux's bootup texts:
#Detected 451030 kHz processor.
#CPU: L1 I Cache: 32K L1 D Cache: 32K
#CPU: AMD AMD-K6(tm) 3D processor stepping 09

>imagine that speed differs by such a margin otherwise. Maybe my
>architecture suffers from cache misses? The arguably lower memory
>requirements of C might allow it to fit into your cache but OCaml is still
>too large? Has anybody else tried this test on AMD and Intel chips?

Hmm. I tried it on another machine, which has an Intel Pentium II (400
MHz, 512 Kb cache), with the same compilers, and got results very
close to yours:

C with loop: 5.191 s real time
C with goto: 5.263 s real time
ocamlopt: 4.870 s real time
stalin: 4.733 s real time

Here the Scheme version appears to be even faster than the Ocaml one!


The cache sizes might explain the difference: the ML and Scheme
versions probably use larger amounts of memory, which doesn't fit in a
32 Kb data cache, but does fit in the Intel processor's larger cache.

For me, these tests are at least a confirmation of the things I've
read about functional languages being even faster than C in some
cases. Of course, they say very little about large programs (and gcc
isn't a very efficient C compiler, from what I've heard)...

Markus Mottl

unread,
Sep 3, 2000, 2:23:13 PM9/3/00
to
Daniel de Rauglaudre <daniel.de_...@inria.fr> wrote:
> I am not sure that timings are relevant for programs that run for only
> 2 to 4 seconds. Am I wrong?

That depends on your notion of "relevance". If you mean "accuracy", this
time seems to be sufficient: there is hardly any variance in it. Even
running it with higher parameters does not make any interesting difference
as can be easily verified. One shouldn't forget that not all machines only
require 2-4 seconds for this job...

If, however, you mean "sufficient to show anything in the general case",
then it is not only the running time that is irrelevant but the given
toy-benchmark itself.

The reason I posted it was to show that the claim "you can always
produce more efficient programs in C" is not necessarily true - the claim
does not make much sense anyway, because languages as such are not fast or
slow. For some languages it is just more difficult to generate efficient
machine code (though even FPLs can have advantages in this respect, due to
the guaranteed immutability of certain data - they may make other things
more difficult).

Since the given example demonstrates this, I think it is justified to use
it: one special case where some general claim does not hold is enough to
invalidate it.

Best regards,
Markus Mottl

Markus Mottl

unread,
Sep 3, 2000, 2:55:15 PM9/3/00
to
Riku Saikkonen <r...@lloke.dna.fi> wrote:
> Compiling with just -O6, the gotos version is slighty slower (3.066 s
> real time vs. 2.975 s for the loop version), so -fomit-frame-pointer
> appears to help the version with gotos more than the loop version. I
> suppose a look at the assembler produced by gcc would reveal the reason.

Somebody once told me the reason, but I forgot it, because I have never
programmed Intel chips. It is, to my knowledge, a rather trivial
inefficiency that a competent human can easily spot.

>>Now, I don't know what hardware differences can cause this, but it would
>>be interesting to know whether your AMD-chip has more cache: I couldn't

> Some information from Linux's bootup texts:
> #Detected 451030 kHz processor.
> #CPU: L1 I Cache: 32K L1 D Cache: 32K
> #CPU: AMD AMD-K6(tm) 3D processor stepping 09

Yes, this may be the reason: I have looked on the web and read that the
Pentium II has 32K of L1 cache, but this is split into 16K for data and 16K
for instructions. So if I interpret your output correctly, the K6 has
twice as much.

> Hmm. I tried it on another machine, which has an Intel Pentium II (400
> MHz, 512 Kb cache), with the same compilers, and got results very
> close to yours:

> C with loop: 5.191 s real time
> C with goto: 5.263 s real time
> ocamlopt: 4.870 s real time
> stalin: 4.733 s real time

> Here the Scheme version appears to be even faster than the Ocaml one!

Is this indeed real time or user time? The latter can vary quite a bit
depending on your system load. If it's real time, I wonder what kind of
code stalin has produced - as I was told there was no obvious improvement
possible in the assembler output of ocamlopt. Maybe it has spotted
something more sophisticated?

> The cache sizes might explain the difference: the ML and Scheme
> versions probably use larger amounts of memory, which doesn't fit in a
> 32 Kb data cache, but does fit in the Intel processor's larger cache.

I also think that the difference has to do with the larger cache of the
AMD chip.

> For me, these tests are at least a confirmation of the things I've
> read about functional languages being even faster than C in some
> cases. Of course, they say very little about large programs (and gcc
> isn't a very efficient C compiler, from what I've heard)...

Of course, this example was not given with the intention of saying anything
about the general case. - It's still good to have an example for
demonstration, because there are enough people around who think they
can always beat any other language implementation by writing low-level C.

thomas...@bluetail.com

unread,
Sep 5, 2000, 3:50:05 AM9/5/00
to

Florian Hars <flo...@hars.de> writes:

> thomas...@bluetail.com writes:
> > Few programmers will change from C (or Fortran, for the diehards)
> > into something faster. Witness SISAL.
>
/.../
> For all practical purposes, there is no language called "SISAL" that
> could fill any niche. Apart from that, you are probably right.

There used to be. At the time SISAL was being developed, it was faster
than Fortran; no users made the transition; funding ran out; SISAL
shut down. Some lamentation was heard in this newsgroup.

Thomas
--
Thomas Lindgren thomas...@bluetail.com
Bluetail AB http://www.bluetail.com

"The need of a constantly expanding market for its products chases the
bourgeoisie over the entire surface of the globe. It must nestle
everywhere, settle everywhere, establish connections everywhere."
-- Communist Manifesto

Jerzy Karczmarczuk

unread,
Sep 5, 2000, 7:37:51 AM9/5/00
to
Jeffrey Straszheim wrote:

(about the progress in programming paradigms and related issues)
...

> We can dream. Like what would have happened if the old Lisp machines
> had been a smashing success. Or what if Microsoft, instead of another

> layer of mediocrity [...]

> Of course, none of this will happen. What we will get are incremental
> advances, and always far short of the quality that we dream of.

> -- Jeffrey Straszheim | A sufficiently advanced

Maestro, this is what we call EVOLUTION.

Of course, there are also revolutions. I was a revolutionist myself
when I had all my hair and teeth.
But now I tend to think that real progress is evolutionary,
and takes place during these "incremental" periods. Revolutions
are good for smashing the establishment, for destroying rigid
structures which cannot evolve, but the directly constructive role of
revolutions is minimal. //But I will not defend this standpoint with
my blood...//

Jerzy Karczmarczuk
Caen, France

Ketil Z Malde

unread,
Sep 5, 2000, 6:58:58 AM9/5/00
to
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

> But now I have a tendency to think that the real progress is evolutive,
> and takes place during these "incremental" periods. The revolutions
> are good in order to smash the establishment, to destroy rigid
> structures which cannot evolve, but directly constructive role of
> revolutions is minimal. //But I will not defend this standpoint with
> my blood...//

You're not suggesting C++ will evolve into something clean and
elegant, are you? Personally I'd welcome a revolution. Of course,
evolutionary development of e.g. the Haskell libraries would be most
welcome also. :-)

graham

unread,
Sep 5, 2000, 7:04:53 AM9/5/00
to
Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/4/00 9:23 PM:
> graham <grah...@telocity.com> wrote:
>> But a library in an imperative program could mimic what a functional
>> language does to share data.
>
> At a high price: implementation of sharing of complicated structures is
> not easy.

Sure, but it only has to be done once.

Markus Mottl at mo...@miss.wu-wien.ac.at wrote on 9/4/00 9:23 PM:
> graham <grah...@telocity.com> wrote:
>> It also shows that perhaps functional languages, and in particular lazy
>> functional languages, are not ready for real application development. In real
>> applications space usage and predictability are important.
>
> If you say "perhaps functional languages", you hopefully do not mean all
> of them - or do you only mean purely functional ones only? In any case,
> I'd say that for many, many applications purely functional and lazy
> languages are more than sufficient. Just take a look at the
> ICFP-competitions: Haskell usually scores quite well there, also in terms
> of efficiency.

I don't mean all of them. I would use Caml for any project because its
space usage is ok. I wouldn't use SML -- for some reason it has terrible
space usage. I would have to think hard about using Clean for anything but
a small project, because it uses more space than I like and because its
space usage isn't predictable. I wouldn't use Haskell.

Sure, there are some applications that functional languages are good for,
but not as many as people might like to think. The ICFP programs are
competition entries, not real applications.

graham

Jerzy Karczmarczuk

unread,
Sep 5, 2000, 8:54:17 AM9/5/00
to
graham wrote:

...

> It also shows that perhaps functional languages, and in particular
> lazy functional languages, are not ready for real application
> development. In real applications space usage and predictability
> are important.
>

> graham


=====

Hereby, believing firmly that Graham is The Guru, I give up all my
theoretical calculations. I regret to discover that

* the paper space usage is completely unpredictable, and my waste
paper basket fills up completely randomly, without any decent
structuring, which proves that my work is not "real".

* I cannot predict even the consumption of ink, pencils, erasers,
omitting the scandalous indeterminacy of the consumption of
coffee, beer, etc.

* I will never touch any computer which works under an unpredictable,
unreal beast called an "operating system".

* I would suggest that all developers and users of Computer Algebra
systems should be immediately euthanized. Space usage here is so
rarely predictable that they have no reason (nor right) to exist.

Jerzy Karczmarczuk
Caen, France

William Lee Irwin III

unread,
Sep 5, 2000, 3:02:04 PM9/5/00
to
\begin{gratuitous-flame-and-me-too-message}

graham wrote:
>> It also shows that perhaps functional languages, and in particular
>> lazy functional languages, are not ready for real application
>> development. In real applications space usage and predictability
>> are important.

Of course not! Gee, what are we all waiting for? Let's all go back to
C, C++, and Java. No, too high-level. Assembly! Or portable assembly,
but with undefined behavior all over the place. No, wait, that's C...

Jerzy Karczmarczuk <kar...@info.unicaen.fr> wrote:
> Hereby, believing firmly that Graham is The Guru, I give up all my
> theoretical calculations. I regret to discover that

> * the paper space usage is completely unpredictable, and my waste
> paper basket fills up completely randomly, without any decent
> structuring, which proves that my work is not "real".

Waste paper basket? I should get one of those for my apartment. Ah,
but my apartment _is_ a wastepaper basket.

> * I cannot predict even the consumption of ink, pencils, eraser,
> omitting the scandalous indeterminacy of the consumption of
> coffee, beer, etc.

For me it tends to be the consumption of whiteboard markers,
cigarettes, Diet Coke, and Kahlua Mudslide... with the occasional
disastrous Southern Comfort leak in the conservative alcohol collector.

> * I will never touch any computer which works under an unpredictable,
> unreal beast called "operating system".

Ah, yes, it must be the fault of those unpredictable space using lazy
functional languages! Those operating systems and their unpredictable
space usage!

> * I would suggest that all developers and users of Computer Algebra
> systems should be immediately euthanized. Space usage here is so
> rarely predictable that they have no reason (nor right) to exist.

It's time to call Kevorkian. My space usage has become unpredictable.
I was going to the gym to try to counteract that, but the ravages of
a college-student diet kept up for over 10 years are too much.
\end{gratuitous-flame-and-me-too-message}

Cheers,
Bill
--
*pdu-* write irc in Erlang and deploy it on ATM switches
--

graham

unread,
Sep 5, 2000, 3:08:46 PM9/5/00
to
Jerzy Karczmarczuk at kar...@info.unicaen.fr wrote on 9/5/00 8:54 AM:

> graham wrote:
>> It also shows that perhaps functional languages, and in particular
>> lazy functional languages, are not ready for real application
>> development. In real applications space usage and predictability
>> are important.
>>
>> graham
>
>
> =====
>
> Hereby, believing firmly that Graham is The Guru, I give up all my
> theoretical calculations. I regret to discover that
>
> ....

So Jerzy doesn't want to debate the issue. Fine, but then why bother
saying anything at all?

graham

Ketil Z Malde

unread,
Sep 6, 2000, 2:24:14 AM9/6/00
to
graham <grah...@telocity.com> writes:

> Jerzy Karczmarczuk at kar...@info.unicaen.fr wrote on 9/5/00 8:54 AM:
>> graham wrote:

>>> It also shows that perhaps functional languages, and in particular
>>> lazy functional languages, are not ready for real application
>>> development.

> So Jerzy doesn't want to debate the issue. Fine, but then why bother
> saying anything at all?

Oh, come on. You post a troll like that, and get sore when somebody
teases you?

Granted, there are applications that are not well suited to functional
languages, and conversely, there are applications that are suited to
them. Even Haskell, with its unpredictable (or rather, hard to
predict) space requirements.

Space isn't everything - I've a big PC now, and I worry about
correctness, code clarity and ease of implementation. I may not be
writing "real applications", but at least I write useful software.

Ulf Wiger

unread,
Sep 6, 2000, 3:28:57 AM9/6/00
to
>>>>> "Ketil" == Ketil Z Malde <ke...@ii.uib.no> writes:

Ketil> Granted, there are applications that are not well suited to
Ketil> functional languages, and conversely, there are applications
Ketil> that are suited to them. Even Haskell, with its
Ketil> unpredictable (or rather, hard to predict) space
Ketil> requirements.

Forgive me if I'm nit-picking, but I can't resist.
I believe it should be the other way around: there are languages
which are well suited for some applications, but not for others.

You shouldn't invent a language and then start looking for an
application; you try to understand what you need to accomplish,
and how, and then invent a language for it (if there isn't already
a suitable existing one.)

If you're lucky, your language will fit applications outside its
original domain as well.

Case in point: Erlang came about exactly like this.

/Uffe
--
Ulf Wiger tfn: +46 8 719 81 95
Network Architecture & Product Strategies mob: +46 70 519 81 95
Ericsson Telecom AB, Datacom Networks and IP Services
Varuvägen 9, Älvsjö, S-126 25 Stockholm, Sweden

Ketil Z Malde

unread,
Sep 6, 2000, 4:11:54 AM9/6/00
to
Ulf Wiger <Ulf.Wiger.c...@ericsson.com> writes:

> >>>>> "Ketil" == Ketil Z Malde <ke...@ii.uib.no> writes:

> Ketil> Granted, there are applications that are not well suited to
> Ketil> functional languages,

Yes, you are nit-picking, but I shall forgive you, since you ask so
nicely :-)

> You shouldn't invent a language and then start looking for an
> application

I guess my wording was unfortunate, but I think it boils down to the
same thing. I'm not inventing languages, I'm just looking at the
current crop.

> If you're lucky, your language will fit applications outside its
> original domain as well.

Exactly. I don't know nor care what most languages were designed for,
my problem is fitting applications to them.

Lars Lundgren

unread,
Sep 6, 2000, 5:01:34 AM9/6/00
to
On 6 Sep 2000, Ulf Wiger wrote:

>
> You shouldn't invent a language and then start looking for an
> application;

Why not?

> you try to understand what you need to accomplish,
> and how, and then invent a language for it (if there isn't already
> a suitable existing one.)

I think it is very important to make sure that the language you are
inventing stands on solid formal ground of its own. If you focus *too*
much on a set of use cases from the intended problem domain, you may end up
with a 'mishmash of features' language. We already have too many of those.

/Lars L


Jerzy Karczmarczuk

unread,
Sep 6, 2000, 6:23:25 AM9/6/00
to
graham's -

(Perhaps one day we shall be eligible to know his full name) -

comments on my acrimonious remarks about the unsuitability of
FL for the "real world" because of the difficulty of assessing
space usage, the predictability of this and that, etc.


> So Jerzy doesn't want to debate the issue. Fine, but then why bother
> saying anything at all?
>
> graham

1. Because I am a free man, and I like it.

2. Because I have used and taught functional programming for some years,
and I firmly believe that its clean semantics and compactness are
Good Things, also for those funny fellows who usurp the right
to speak in the name of the Real Application World.

3. Because the whole area of programming - like any human activity -
is so full of unpredictable things, redundancies, crises, waste,
etc., that attacking the FP implementations on this point seems
to me at least preposterous. Personally I prefer to throw
tomatoes at my government.

4. Because in my domain I *really need and use* lazy algorithms, and
I am fed up with comments in the style of "weeeel, Haskell is not as
bad, if only you get rid of that stupid and inefficient non-strictness...".


OH YES/NO, I do not avoid the debate. I try to advocate/defend FP in
other communities than the CompScists' as well. But I see no reason not
to get a bit [for theatrical purposes ONLY] nervous when I see people who
have nothing better to do than criticize the fact that the world is not
ideal. It reminds me of an old anecdote:
A circus,
a loose rope hanging 25 m above the floor,
a dog dancing on the rope,
walking on his left front leg,
juggling with his right rear leg,
playing the violin with the remaining one.
A spectator says to his neighbour:
"What a lousy player! He has to learn a lot before he can
compete with Paganini!"

Jerzy Karczmarczuk
Caen, France

graham

unread,
Sep 6, 2000, 8:17:11 AM9/6/00
to
Ketil Z Malde at ke...@ii.uib.no wrote on 9/6/00 2:24 AM:

> graham <grah...@telocity.com> writes:
>
>> Jerzy Karczmarczuk at kar...@info.unicaen.fr wrote on 9/5/00 8:54 AM:
>>> graham wrote:
>
>>>> It also shows that perhaps functional languages, and in particular
>>>> lazy functional languages, are not ready for real application
>>>> development.
>
>> So Jerzy doesn't want to debate the issue. Fine, but then why bother
>> saying anything at all?
>
> Oh, come on. You post a troll like that, and get sore when somebody
> teases you?

My post wasn't a troll. So far no-one has disagreed with my claim that
lazy functional languages use too much space, and unpredictably so,
and that I think this makes them unsuitable for real application
development. Does that mean people agree? Disagree? Or what?

Ketil Z Malde at ke...@ii.uib.no wrote on 9/6/00 2:24 AM:


> Space isn't everything - I've a big PC now, and I worry about
> correctness, code clarity and ease of implementation. I may not be
> writing "real applications", but at least I write useful software.

I didn't say space was everything. Useful to whom?

graham

Ketil Z Malde

unread,
Sep 6, 2000, 8:57:56 AM9/6/00
to
graham <grah...@telocity.com> writes:

>> Oh, come on. You post a troll like that, and get sore when somebody
>> teases you?

> My post wasn't a troll. So far no-one has disagreed with my claim that
> lazy functional languges use too much space, and unpredictably so,

Your supposition was that "functional languages, and in particular
lazy functional languages" were not ready for "real application
development".

Sure, any Haskell program could probably, at considerable effort, be
coded more compactly in C. Sure, excessive memory use occasionally bites
me. I still consider it vastly better for the things I do than
more mainstream, imperative languages.

> and that I think this makes them unsuitable for real application
> development. Does that mean people agree? disagree? or what.

It means that "real application development" is too vague a term to
discuss meaningfully. You've already dismissed ICFP entrants --
i.e. raytracers -- as not being "real", so it's all too easy to arrive
at the conclusion that you consider the claim a tautology anyway.

>> correctness, code clarity and ease of implementation. I may not be
>> writing "real applications", but at least I write useful software.

> I didn't say space was everything. Useful to whom?

Useful to me, mostly. Clearly, space isn't everything. If you look
at commercial applications (and presumably "real" ones, too), their
space usage is often entirely horrible. I'm sure it's quite
predictable somewhere deep in their C++ internals, but it's also clear
that nobody bothered to actually predict it.
